Search Results

Search found 9751 results on 391 pages for 'merge module'.


  • Merging changes to a workspace with uncommitted changes

    - by Kim L
    We've just recently switched over from SVN to Mercurial, but now we are running into problems with our workflow.

    Example: I have my local clone of the repository, which I work on. I'm making some highly experimental changes to our code base, something I don't want to commit before I'm sure it works the way it is supposed to; I don't want to commit it even locally. Now, simultaneously, my co-worker has made some significant improvements/bug fixes which I need. He pushes his commits to our main repository. The question is: how can I merge his changes into my workspace without being required to commit all my own changes first, since I need his changes to test my own code?

    A more day-to-day problem we have with the exact same workflow is that we have a couple of configuration files in the repository. Each developer makes a couple of small environment-specific changes to the configuration files, but does not commit the changes. These couple of uncommitted files hinder us from making any merges into our workspace, just like in the example above. Ideally, the configuration files probably shouldn't be in the repository; unfortunately, that's just how it has to be, for reasons that will remain unnamed here.
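
    One way out (a sketch, assuming a Mercurial with shelve functionality available; it began life as a separate extension and was bundled into later releases) is to set the uncommitted work aside, sync, and re-apply it:

        hg shelve          # stash all uncommitted changes away
        hg pull -u         # fetch the co-worker's commits and update to them
        hg unshelve        # re-apply the stashed changes on top

    The same dance covers the environment-specific config files. With no extension at all, "hg diff > local.patch && hg revert --all" before the pull, followed by "hg import --no-commit local.patch" afterwards, achieves the same effect.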


  • Combining multiple content types into a single search result with Drupal 6 and Views 2

    - by Chaulky
    Hi all, I need to create a somewhat advanced search functionality for my Drupal 6 site. I have a one-to-many relationship between two content types and need to search them while respecting that relationship.

    To make things more clear: I have content types TypeX and TypeY. TypeY has a node-reference CCK field that relates it to a single node of TypeX, so many nodes of TypeY reference the same node of TypeX. I want to use Views 2 to create a search page for these nodes, where each search result is a node of TypeX along with all the nodes of TypeY that reference it.

    I know I could theme the individual results and use a view to attach the nodes of TypeY to the single node of TypeX, but that won't allow users to actually search TypeY; it would only search TypeX and merely display some nodes of TypeY along with it. Is there any way to get the search to account for content in nodes of both content types, but merge the TypeY results into the "parent" node of TypeX?

    In database terms, it seems like I need to do a join, then filter by the search terms. But I can't figure out how to do this in Views. Thanks for any help I can get!
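
    In raw SQL, the join-then-filter idea looks roughly like the sketch below; every table and column name here is a hypothetical stand-in, not Drupal 6's real node/CCK schema:

        SELECT x.nid, x.title
        FROM node x
        JOIN content_type_typey cck ON cck.field_typex_ref_nid = x.nid
        JOIN node y ON y.vid = cck.vid
        WHERE x.type = 'typex'
          AND (x.title LIKE '%term%' OR y.title LIKE '%term%')
        GROUP BY x.nid;

    Getting Views 2 to produce that shape generally means adding a relationship on the node-reference field and then grouping the results, which is exactly the part that is hard to click together in the Views UI.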


  • Merging Passed Parameters

    - by Josh Crowder
    I have two data structures sent in from a form: one called transload and the other called video, which is the actual form data for the model. I need to get [:video_encode][:url] and save it to [:video][:flash_url]. Below are the passed arguments for transload; yet when I try to access params[:transload][:results][:video_encode], I get nil.

        print params[:transload]
        {
          "assembly_id": "d59b4293b3d79d2ccd1948c02421c6a6",
          "status": "success",
          "uploads": {
            "video": {
              "name": "bbc_one.mp4", "mime": "video/mp4", "ext": "mp4", "size": 601104,
              "meta": { "width": 720, "height": 404, "video_fps": 25, "video_bitrate": null,
                        "video_format": "avc1", "video_codec": "ffh264", "audio_bitrate": "128k",
                        "audio_codec": "faad", "duration": 3.07, "device_vendor": null,
                        "device_name": null, "device_software": null, "latitude": null, "longitude": null },
              "url": "http://tmp.transloadit.com/"
            }
          },
          "results": {
            "video_encode": {
              "name": "bbc_one.flv", "mime": "video/x-flv", "steps": ["encode", "export"],
              "ext": "flv", "size": 388317,
              "meta": { "width": 480, "height": 320, "video_fps": 25, "video_bitrate": "512k",
                        "video_format": "FLV1", "video_codec": "ffflv", "audio_bitrate": "64k",
                        "audio_codec": "mp3", "duration": 3.11, "device_vendor": null,
                        "device_name": null, "device_software": null, "latitude": null, "longitude": null },
              "url": "http://s3.transloadit.com/b7deac9c96af6c745e914e25d0350baa/7a/2b09e822265ac2328789b40dcc02ae/bbc_one.flv"
            },
            "video_encode_iphone": {
              "name": "bbc_one.qt", "mime": "video/quicktime", "steps": ["encode_iphone", "export"],
              "ext": "qt", "size": 218236,
              "meta": { "width": 480, "height": 320, "video_fps": 25, "video_bitrate": null,
                        "video_format": "avc1", "video_codec": "ffh264", "audio_bitrate": "128k",
                        "audio_codec": "faad", "duration": 3.04, "device_vendor": null,
                        "device_name": null, "device_software": null, "latitude": null, "longitude": null },
              "url": "http://s3.transloadit.com/31/58bcc80d5345e52a42c9773125e8f0/bbc_one.qt"
            }
          }
        }

    Here is what I am trying to use:

        video_links = {
          :flash_url => params[:transload][:results][:video_encode][:url],
          :mp4_url   => params[:transload][:results][:video_encode_iphone][:url]
        }
        params[:video].merge(video_links)
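
    A guess at the nil (an assumption; it depends on how the Transloadit parameters reach the controller): the transload parameter may arrive as a single JSON string rather than a nested hash, in which case it has to be parsed before those keys exist. Note also that Hash#merge returns a new hash, so merge alone leaves params[:video] untouched. A sketch:

        require 'json'

        # Hypothetical fix: parse the JSON payload first, then merge in place.
        transload = JSON.parse(params[:transload], :symbolize_names => true)
        video_links = {
          :flash_url => transload[:results][:video_encode][:url],
          :mp4_url   => transload[:results][:video_encode_iphone][:url]
        }
        params[:video].merge!(video_links)   # merge! mutates; merge returns a copy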


  • Membership Provider users in different tables

    - by Mike
    I have an existing database with users and administrators in different tables. I am rewriting an existing website in ASP.NET and need to decide: should I merge the two tables into one users table and have just one provider, or leave the tables separate and have two different providers?

    Administrators need the ability to create, edit and delete users. I am thinking the membership/profile provider way of editing users, i.e.

        System.Web.Profile.ProfileBase pro = System.Web.Profile.ProfileBase.Create("User1");
        pro.Initialize("User1", true);
        txtEmail.Text = pro["SecondaryEmail"].ToString();

    is the best way to edit users, because the provider handles it. But you cannot use this if you have two separate providers (because they are both looking at different tables). Or should I write a whole lot of methods for the administrators to edit users with?

    UPDATE: Making a custom membership provider look at both tables is fine, but then what about the profile provider? The profile provider's GetPropertyValues and SetPropertyValues would be operating on the same set of properties for users and admins. Mike


  • Add multiple entities to Javascript namespace from different files

    - by Brian M. Hunt
    Given a namespace ns used in two different files:

    abc.js:

        ns = ns || (function () {
            foo = function () { ... };
            return { abc: foo };
        }());

    def.js:

        // is this correct?
        ns = ns || {};
        ns.def = ns.def || (function () {
            defoo = function () { ... };
            return { deFoo: defoo };
        }());

    Is this the proper way to add def to the namespace ns? In other words, how does one merge two contributions to a namespace in JavaScript? If abc.js comes before def.js, I'd expect this to work. If def.js comes before abc.js, I'd expect ns.abc not to exist, because ns is already defined by that time. It seems there ought to be a design pattern that eliminates any uncertainty in doing inclusions with the JavaScript namespace pattern. I'd appreciate thoughts and input on how best to go about this sort of 'inclusion'. Thanks for reading. Brian
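
    One common answer is the "loose augmentation" pattern (a sketch; the function bodies are placeholders): no file ever assigns the namespace wholesale; each one creates it only if missing, then attaches its own members, so load order stops mattering:

        // abc.js
        var ns = ns || {};
        (function (exports) {
            function foo() { /* ... */ }
            exports.abc = foo;
        }(ns));

        // def.js
        var ns = ns || {};
        (function (exports) {
            function deFoo() { /* ... */ }
            exports.deFoo = deFoo;
        }(ns));

    Since neither file reassigns ns once it exists, ns.abc and ns.deFoo both survive regardless of which script is included first.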


  • Solr associations

    - by Tom
    Hi all, for the last couple of days we have been thinking of using Solr as our search engine of choice. Most of the features we need are there out of the box or can easily be configured. There is, however, one feature we absolutely need that seems to be well hidden (or missing) in Solr. I'll try to explain with an example. We have lots of documents that are actually businesses:

        <document>
          <name>Apache</name>
          <cat>1</cat>
          ...
        </document>
        <document>
          <name>McDonalds</name>
          <cat>2</cat>
          ...
        </document>

    In addition, we have another XML file with all the categories and their synonyms:

        <cat id="1">
          <name>software</name>
          <synonym>IT</synonym>
        </cat>
        <cat id="2">
          <name>fast food</name>
          <synonym>restaurant</synonym>
        </cat>

    We want to associate businesses and categories so we can search using the name and/or synonyms of the category. But we do not want to merge these files at indexing time, because we want to be able to update the categories (adding/removing synonyms, etc.) without indexing all the businesses again. Is there anything in Solr that does this kind of association, or do we need to develop some specific pieces? All feedback and suggestions are welcome. Thanks in advance, Tom
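
    One avenue worth checking (a sketch; whether it fits depends on the exact matching requirements): index the category name as a text field on each business and expand synonyms at query time only. Query-time synonyms live in synonyms.txt, which can be edited and picked up with a core reload, with no reindex of the businesses:

        <!-- schema.xml fragment; the field type name is made up -->
        <fieldType name="text_category" class="solr.TextField">
          <analyzer type="index">
            <tokenizer class="solr.WhitespaceTokenizerFactory"/>
            <filter class="solr.LowerCaseFilterFactory"/>
          </analyzer>
          <analyzer type="query">
            <tokenizer class="solr.WhitespaceTokenizerFactory"/>
            <filter class="solr.LowerCaseFilterFactory"/>
            <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"
                    ignoreCase="true" expand="true"/>
          </analyzer>
        </fieldType>

    This covers the synonym half of the requirement; renaming a category or re-categorizing a business would still touch the business documents.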


  • _Painless_ integration of Eclipse with Vim?

    - by Adnan
    Hi, has anyone managed to get Vim integrated into Eclipse painlessly? I just want to use Vim for the editor while retaining the general Eclipse interface. I have tried the Eclim plugin, but the editor seemed to crash more often than work (the site said the editor-replacement functionality is still beta).

    On the flip side, is there any IDE which matches Eclipse's functionality (mainly the integration with SVN, Ant, etc.) and is also able to use Vim? I mostly use Eclipse for SAS SCL, Java and JavaScript programming and find the Eclipse editor too "mouse-y". In a perfect world, I'd also like to use vimdiff as a diff viewer for SVN (we use TortoiseSVN) when checking diffs or conflicts during merges.

    I admit I haven't spent a lot of time trying to get these things to work. I feel guilty about spending too much time on potential wild-goose chases while my other team members are working away at their code, perfectly content with all that Eclipse has to offer.

    Edit: Just found this while desperately browsing around: Vim plugin. Any experience using this? It's Saturday afternoon in Kiwi land, so I'll try it later and update if no one has an opinion. From the claims on the site, it sounds perfect.


  • Database sharing/versioning

    - by DarkJaff
    Hi everyone, I have a question, but I'm not sure of the word to use. My problem: I have an application that uses a database to store information. The database can be Access (local) or on a server (SQL Server or Oracle); we support these 3 kinds of database. We want to give the user the possibility to do what I think can be called versioning. Let me explain: we have database 1. This is the master. We want to be able to create a database 2 that is the same thing as database 1, but which we can give to someone else. They each work on their own side, adding, modifying and deleting records in this very complex database. After that, we want database 1 to include the changes from database 2, but with the possibility to dismiss some of the changes.

    For your information, our application is already multi-user, so why don't we just use that and forget about this versioning? Because sometimes we need to give a copy of the database to another company on another site, and they can't connect to our server. They work on their side and then we want to merge.

    Is there anyone here with experience with this type of requirement? We have a lot of ideas, but most of them require a LOT of work: massive modification to the database or to the existing queries. This is a two-million-line (and growing) C++ app, so rewriting it is not possible! Thanks for any ideas you may give us! J-F


  • jQuery Autocomplete with _renderItem and categories

    - by LillyPop
    As a newb to jQuery, I'm wondering whether it's possible to have both jQuery's _renderItem (for custom list-item HTML/CSS) AND categories working together in harmony. I've got my _renderItem working great, but I have no idea how to incorporate categories into the mix. My code so far:

        $(document).ready(function () {
            $(':input[data-autocomplete]').each(function () {
                $(':input[data-autocomplete]').autocomplete({
                    source: $(this).attr("data-autocomplete")
                }).data("autocomplete")._renderItem = function (ul, item) {
                    var MyHtml = "<a>" +
                        "<div class='ac' >" +
                        "<div class='ac_img_wrap' >" +
                        '<img src="../../uploads/' + item.imageUrl + '.jpg"' + 'width="40" height="40" />' +
                        "</div>" +
                        "<div class='ac_mid' >" +
                        "<div class='ac_name' >" + item.value + "</div>" +
                        "<div class='ac_info' >" + item.info + "</div>" +
                        "</div>" +
                        "</div>" +
                        "</a>";
                    return $("<li></li>").data("item.autocomplete", item).append(MyHtml).appendTo(ul);
                };
            });
        });

    The jQuery UI documentation for the autocomplete gives the following code example for categories:

        $.widget("custom.catcomplete", $.ui.autocomplete, {
            _renderMenu: function (ul, items) {
                var self = this, currentCategory = "";
                $.each(items, function (index, item) {
                    if (item.category != currentCategory) {
                        ul.append("<li class='ui-autocomplete-category'>" + item.category + "</li>");
                        currentCategory = item.category;
                    }
                    self._renderItem(ul, item);
                });
            }
        });

    I'd like to get my custom HTML (_renderItem) and categories working together. Can anyone help me merge these two, or point me in the right direction? Thanks
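
    One way to merge them (a sketch; it reuses the markup from the question and assumes the jQuery UI 1.8-era widget factory that the docs example targets) is to define both overrides inside the catcomplete widget, then instantiate catcomplete instead of autocomplete:

        $.widget("custom.catcomplete", $.ui.autocomplete, {
            // Categories: inject a heading row whenever the category changes.
            _renderMenu: function (ul, items) {
                var self = this, currentCategory = "";
                $.each(items, function (index, item) {
                    if (item.category !== currentCategory) {
                        ul.append("<li class='ui-autocomplete-category'>" + item.category + "</li>");
                        currentCategory = item.category;
                    }
                    self._renderItem(ul, item);
                });
            },
            // Custom row markup: same HTML as the original _renderItem override.
            _renderItem: function (ul, item) {
                var myHtml = "<a><div class='ac'>" +
                    "<div class='ac_img_wrap'><img src='../../uploads/" + item.imageUrl + ".jpg' width='40' height='40' /></div>" +
                    "<div class='ac_mid'>" +
                    "<div class='ac_name'>" + item.value + "</div>" +
                    "<div class='ac_info'>" + item.info + "</div>" +
                    "</div></div></a>";
                return $("<li></li>").data("item.autocomplete", item).append(myHtml).appendTo(ul);
            }
        });

        $(document).ready(function () {
            $(':input[data-autocomplete]').each(function () {
                // Note: bind to $(this), not the whole selector, inside each().
                $(this).catcomplete({ source: $(this).attr("data-autocomplete") });
            });
        });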


  • How do I load an XML document, add and remove nodes, then apply it to a ASP DataGrid control?

    - by JFOX
    I have a pretty simple operation but am struggling with how to implement it. I am loading XML from an external data source using DataSet.ReadXml(), then creating a new XmlDataDocument from that DataSet, then syncing the DataSet back to the XmlDataDocument, like so:

        doc = new XmlDataDocument(dsDataSet);
        dsDataSet.EnforceConstraints = false;
        dsDataSet = doc.DataSet;

    Once loaded, I do two things to the XmlDataDocument: I loop through and (1) check whether a purely meta node, count, exists right beneath the root node and, if so, remove it; and (2) check whether a thumb node exists in a second-level node list and, if not, create and append it. This all goes as expected, because the result of doc.Save() looks correct.

    Where I'm having an issue is updating the DataSet, which is the data source for an ASP DataGrid. Once all the above XML manipulation is done, I do this:

        dsDataSet.Merge(doc.DataSet);
        dsDataSet.AcceptChanges();

    I then apply the DataSet to the grid control:

        dgList.DataSource = dsDataSet;
        dgList.DataBind();

    But when I do this, I get this error on the site:

        System.Web.HttpException: DataBinding: 'System.Data.DataRowView' does not contain a property with the name 'thumb'.

    What did I miss?
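
    A guess at the cause (an assumption based on how XmlDataDocument synchronization works): ReadXml infers the DataSet schema from the incoming XML, and only elements with a matching DataColumn are visible through the DataSet. A thumb element appended later through the XmlDataDocument has no backing column, so the bound DataRowView has no 'thumb' property. A sketch of declaring the column before creating the document (the table and file names are placeholders):

        using System.Data;
        using System.Xml;

        DataSet dsDataSet = new DataSet();
        dsDataSet.ReadXml("feed.xml");                 // hypothetical source; schema is inferred here

        // Make sure the column exists so later <thumb> nodes map onto it.
        DataTable items = dsDataSet.Tables["item"];    // "item" is a stand-in table name
        if (!items.Columns.Contains("thumb"))
            items.Columns.Add("thumb", typeof(string));

        dsDataSet.EnforceConstraints = false;
        XmlDataDocument doc = new XmlDataDocument(dsDataSet);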


  • i18n redirection breaks my tests ....

    - by Mike
    I have a big application covered by more than a thousand tests via RSpec. We just made the choice to redirect pages like:

        /
        /foo
        /foo/4/bar/34

    to:

        /en
        /en/foo
        /fr/foo/4/bar/34

    So I made a before filter in application.rb, like so:

        if params[:locale].blank?
          headers["Status"] = "301 Moved Permanently"
          redirect_to request.env['REQUEST_URI'].sub!(%r(^(http.?://[^/]*)?(.*))) { "#{$1}/#{I18n.locale}#{$2}" }
        end

    It's working great, but it breaks a lot of my tests, e.g.:

        it "should return 404" do
          Video.should_receive(:failed_encodings).and_return([])
          get :last_failed_encoding
          response.status.should == "404 Not Found"
        end

    To fix this test, I would have to do:

        get :last_failed_encoding, :locale => "en"

    But seriously, I don't want to fix all my tests one by one. I tried to make the locale a default parameter like this:

        class ActionController::TestCase
          alias_method(:old_get, :get) unless method_defined?(:old_get)
          def get(path, parameters = {}, headers = nil)
            parameters.merge({:locale => "fr"}) if parameters[:locale].blank?
            old_get(path, parameters, headers)
          end
        end

    ... but couldn't make this work. Any idea?
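
    The override is close; the catch is that Hash#merge returns a new hash and leaves the receiver alone, so the :locale default is computed and immediately thrown away. Mutating the hash (or assigning the merge result back) is enough. A sketch:

        class ActionController::TestCase
          alias_method(:old_get, :get) unless method_defined?(:old_get)

          def get(path, parameters = {}, headers = nil)
            parameters ||= {}
            parameters[:locale] = "en" if parameters[:locale].blank?   # mutate; don't merge-and-drop
            old_get(path, parameters, headers)
          end
        end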


  • Looking for a good dev environment for OSGi bundles

    - by Riduidel
    Hi, I'm currently investigating dev environments for OSGi bundles. My goal is to find a way to develop, test and debug the bundles I'll be coding with ease. Besides, I have some "cultural" requirements:

    - I want to be able to use Java continuous-integration servers (typically, Hudson).
    - As a consequence of that first requirement, I want a repeatable, one-click build process. My typical tool for that is Maven.
    - And finally, being a long-term Eclipse user, with m2eclipse at hand to merge my Eclipse environment with my Maven one, I obviously want to be able to test and debug with that IDE.

    So far, here are the things I know I can use (and have already tested):

    - maven-bundle-plugin and maven-ipojo-plugin, which both offer clean packaging facilities.
    - Maven Pax (and Eclipse Pax), which I am not really satisfied with: Maven Pax generates a very heavy project, where adding dependencies is very error-prone (the maven pax:import-bundle command line, with all its arguments, is a hell per se).
    - Karaf, which seems to have some nice direct Maven provisioning, but I don't know how to integrate it with my Eclipse besides using the traditional JPDA bridge. It also seems to be more production-oriented than dev-oriented, and as such may require heavy configuration to fit my needs (although reading its user manual doesn't reveal that).

    Have you got any ideas? Some Maven/Eclipse plugins?


  • MVVM pattern: ViewModel updates after Model server roundtrip

    - by Pavel Savara
    I have stateless services and anemic domain objects on the server side. The model between server and client is POCO DTOs. The client should become MVVM. The model could be a graph of about 100 instances of 20 different classes. The client editor contains diverse tab pages, all of them live-connected to the model/view model.

    My problem is how to propagate changes back to the client nicely after a server round trip. It's quite easy to propagate changes from the ViewModel to the DTOs. For the way back, it would be possible to throw away the old DTOs and replace them wholesale with new ones, but that would cause a lot of redrawing for lists/DataTemplates.

    I could gather the server-side changes and transmit them to the client. But the names of the changed fields would be domain/DTO-specific, not ViewModel-specific, and the mapping seems nontrivial to me. If I did it imperatively after the round trip, it would break the separation of concerns/modularity of the view models.

    I'm thinking about some kind of mapping rule engine, something like AutoMapper or EmitMapper, but those solve only very plain use cases. I don't see how such a tool would map/propagate/convert adding items to a list, or removals. How would it identify instances in collections so it could merge values into existing instances? It should propagate validation/error info as well.

    Maybe I should implement INotifyPropertyChanged on the DTOs and try to replay server-side events on them, and then bind the ViewModel to that? Would binding solve the collection-merge problems nicely? Is the EventAggregator from Prism useful for that? Is there any event record/replay component? Is there a better client-side pattern for an architecture with server-side logic?


  • Git : Failed at pushing to remote server, ' REPOSITORY_PATH ' is not a git command

    - by Judarkness
    I'm using Git with TortoiseGit on Windows XP, and I have a remote bare repository on Windows Vista (64-bit version). When I try to push my local files to the remote bare repository, I get the following error message:

        git.exe push "origin" master:master
        git: 'C:/Git_Repository/.git' is not a git command. See 'git --help'.
        fatal: The remote end hung up unexpectedly

    The arbitrary URL is:

        username@serverip:C:/Git_Repository/.git

    The same arbitrary URL works just fine when doing clone/fetch/pull. Access from a local directory on the remote machine to this bare repository is no problem either, so I believe there is something wrong with my path. I can push/pull at GitHub correctly, but there I was using the URL provided by GitHub. Does anyone know what's wrong with my configuration?

    Here is my remote .git/config:

        [core]
            repositoryformatversion = 0
            filemode = false
            bare = true
            logallrefupdates = true
            ignorecase = true
            hideDotFiles = dotGitOnly

    Here is my local .git/config:

        [core]
            repositoryformatversion = 0
            filemode = false
            bare = false
            logallrefupdates = true
            symlinks = false
            ignorecase = true
            hideDotFiles = dotGitOnly
        [remote "origin"]
            fetch = +refs/heads
            url = username@serverip:C:/Git_Repository/.git
        [branch "master"]
            remote = origin
            merge = refs/heads/master

    Thanks for the help.
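
    One thing worth trying (a guess: scp-style URLs treat everything after the first colon as a path, and the Windows drive-letter colon can leave the remote command mangled) is the explicit ssh:// form of the URL, which avoids the ambiguity:

        git remote set-url origin ssh://username@serverip/C:/Git_Repository/.git
        git push origin master:master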


  • PHP Simple_html_dom issue

    - by stef
    The snippet below loops through some web pages, grabs the HTML, looks for table.results, and gets the plain text out of the cells contained in each row; $result comes out OK. Now I'm trying to also get the href value of an anchor tag found in the second cell of each row, and I'd like to include it in the $result array, but I'm not sure how. The third foreach statement gets the links, but then I need to merge $links with $result; ideally I'd collect the links in the second foreach statement. Does anyone know how?

        $i = 0;
        foreach ($urls as $u) {
            $html = file_get_html($u);
            foreach ($html->find('.results tbody tr') as $element) {
                $result[$i] = $this->extract($element->plaintext);
                $i++;
            }
            foreach ($html->find('.results tbody tr a') as $element) {
                $links[$i] = $element->href;
                $i++;
            }
        }
        print_r($result);
        print_r($links);
        die;
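
    A sketch of collecting both in a single pass (assuming, per the description, that the anchor sits in the second cell of each row; simple_html_dom's find() takes an index argument to return one element):

        $i = 0;
        foreach ($urls as $u) {
            $html = file_get_html($u);
            foreach ($html->find('.results tbody tr') as $row) {
                $cell   = $row->find('td', 1);                  // second <td>, 0-indexed
                $anchor = $cell ? $cell->find('a', 0) : null;
                $result[$i] = array(
                    'text' => $this->extract($row->plaintext),
                    'href' => $anchor ? $anchor->href : null,   // text and link stay paired
                );
                $i++;
            }
        }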


  • delivery mechanism, Rational ClearCase

    - by kadaba
    Hi all, we came up with a stream structure for the Rational ClearCase UCM model and recently migrated our code base into the new setup. We had three different physical code bases, and the migration was done this way: we moved the production code first and created a baseline, then the UAT code and created a baseline, and then the development code and created a baseline. As of now, the integration stream has the latest baseline, which is the development baseline. There are two other streams, for PRD and UAT, from which releases are made into the respective environments.

    I now have my dev stream. I create an activity and make some changes; now I need to promote these changes into the UAT environment. If I deliver the changes to the integration stream, the merge is done, but onto a development baseline. I do not want to rebase it to UAT, as many development changes would get rebased into UAT, which is not desired. How do I achieve promoting changes to the UAT environment (the UAT stream)? Kindly advise.


  • How to set permissions or alter a git commit process when using local repositories

    - by Tony
    I have a server that contains a central git repository and one of my co-worker's development environments. My co-worker's repository's origin is the central git repository, and he pushes there when he has some code to share. Likewise, I develop locally and push to the central git repository when I have some code to share, so my repository's origin is also the central git repository.

    The issue is that the central git repository lives under a "git" user's home directory, so when I push I am actually SSH'ing into the server as the "git" user. To be even more clear, my config has these lines:

        $ more .git/config
        [remote "origin"]
            fetch = +refs/heads/*:refs/remotes/origin/*
            url = [email protected]:fsg
        [branch "master"]
            remote = origin
            merge = refs/heads/master

    When I push, git handles this SSH-plus-push seamlessly, I'm guessing with some sort of git shell. The issue is that when my co-worker pushes, he is logged in as himself and gets a bunch of crazy permission errors.

    Is there a typical way to solve this problem without opening up git's directories to a group? I think that would be problematic when I push and therefore overwrite the repository and those permissions get reset. Thanks!
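
    For what it's worth, the usual pattern is the opposite of avoiding group access: make the repository group-owned and tell git to keep it that way, so pushes from either account can't break the permissions. A sketch (the "developers" group and repo path are hypothetical):

        git init --bare --shared=group          # for a new central repo, or:
        git config core.sharedRepository group  # retrofit an existing one

        chgrp -R developers /home/git/fsg.git   # group-own the whole repository
        chmod -R g+w /home/git/fsg.git
        find /home/git/fsg.git -type d -exec chmod g+s {} \;   # new dirs inherit the group

    With core.sharedRepository set to group, the objects git writes during later pushes stay group-writable, so the next push does not reset the permissions.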


  • SSIS - How do I use a resultset as input in a SQL task and get data types right?

    - by thursdaysgeek
    I am trying to merge records from an Oracle database table into my local SQL Server table. I have a package variable that is an Object, called OWell. A data flow task gets the Oracle data with a SQL statement (select well_id, well_name from OWell order by Well_ID), and a conversion task then converts well_id from a DT_STR of length 15 to a DT_WSTR, and well_name from a DT_STR of length 15 to a DT_WSTR of length 50. The result is stored in the recordset OWell. The reason for the conversions is that the table I want to add records to has an identity field, and SSIS shows well_id as a DT_WSTR of length 15 and well_name as a DT_WSTR of length 50.

    I then have a SQL task that connects to the local database and attempts to add the records that are not there yet. I've tried various things, such as using OWell as a result set and referring to it in my SQL statement. Currently, I have the ResultSet set to None and the following SQL statement:

        Insert into WELL (WELL_ID, WELL_NAME)
        Select OWELL_ID, OWELL_NAME from OWell
        where OWELL_ID not in (select WELL.WELL_ID from WELL)

    For Parameter Mapping, I have Parameter 0, called OWell_ID, from my variable User::OWell, and Parameter 1, called OWell_Name, from the same variable. Both are set to VARCHAR, although I've also tried NVARCHAR. I do not have a Result Set. I am getting the following error:

        Error: 0xC002F210 at Insert records to FLEDG, Execute SQL Task: Executing the query
        "Insert into WELL (WELL_ID, WELL_NAME) Select OWELL..." failed with the following error:
        "An error occurred while extracting the result into a variable of type (DBTYPE_STR)".
        Possible failure reasons: Problems with the query, "ResultSet" property not set correctly,
        parameters not set correctly, or connection not established correctly.

    I don't think it's a data-type issue, but rather that I am somehow not using the result set properly. How, exactly, am I supposed to refer to that recordset in my SQL task, so that I can use the two recordset fields and add the records that are missing?
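
    The part that can't work as written (as far as I can tell) is selecting from OWell in T-SQL: the Execute SQL Task sends the statement to SQL Server, which has no table called OWell; an Object variable holding a recordset is visible only inside the package. The usual pattern is a Foreach Loop with an ADO enumerator over User::OWell, mapping each row's two columns into package variables, with a parameterized statement inside the loop. A sketch (the variable names are made up):

        -- Inside the Foreach Loop; ? is the OLE DB parameter marker.
        -- Parameter mapping: 0 -> User::WellId, 1 -> User::WellName, 2 -> User::WellId
        INSERT INTO WELL (WELL_ID, WELL_NAME)
        SELECT ?, ?
        WHERE NOT EXISTS (SELECT 1 FROM WELL WHERE WELL_ID = ?);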


  • Updating join fields in an ORM command

    - by Jono
    I have a question about object-relational updates on join fields. I am working on a project using CodeIgniter with DataMapper DMZ, but I think my problem is one of general understanding of ORMs, so feel free to answer with any ORM you know.

    I have two tables, Goods and Tags. One good can have many tags. Everything is working, but I am looking for a way to merge tags: meaning I decide I want to remove tag A and instead have everything that is tagged by it now be tagged by tag B. I only have models for the goods and the tags; there is no separate model for the join relationship, as I believe these ORMs were designed to work that way.

    I know how to delete a tag, but I don't know how to reach into the join table to redirect the references, since there is no model for the join table. I would rather use the ORM than issue a raw SQL command.
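
    A sketch of how this might look while staying inside the ORM (assumptions: DataMapper DMZ models named Good and Tag with the many-to-many relationship configured, and DMZ's convention that save($related) adds a relationship while delete($related) removes just the relationship, not the row):

        $tag_a = new Tag();
        $tag_a->get_by_id($old_tag_id);    // tag to retire (hypothetical ids)
        $tag_b = new Tag();
        $tag_b->get_by_id($new_tag_id);    // tag that takes over

        $tag_a->good->get();               // load every good currently tagged with A
        foreach ($tag_a->good->all as $good) {
            $good->save($tag_b);           // relate the good to B
            $good->delete($tag_a);         // unrelate it from A
        }
        $tag_a->delete();                  // finally remove tag A itself

    The loop is one query pair per good, which is the price of avoiding raw SQL on the join table.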


  • Git force complete sync to master

    - by Jesse
    My workplace uses Subversion for source control, so I have been playing around with git-svn for the advantages of my own branches, committing as often as I want without touching the main repo, etc. Since my git-svn checkout is local, I have cloned it to a network share as well to act as a backup. My thinking is that if my desktop takes a dump, I will at least have the repo on the network share to recover changes that I have not had a chance to dcommit yet.

    My workflow is to work from the desktop, make changes, commit, etc. At the end of the day I want to update the repo on the network share with all of my current changes. I had set up the repo on the network share using git clone repo_on_my_desktop, and then updated it with git pull origin master.

    The problem I am running into is when I use git rebase to squash multiple commits before dcommitting to the main SVN repository. When I do this, I get merge conflicts in the repo on the network share when I try to back up at night. Is there a way to simply sync entirely with the repository on my desktop without doing a fresh git clone each night?
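
    A sketch of one way (the remote name and share path are made up): treat the share as a mirror and push to it from the desktop, rather than pulling from inside it. A mirror push forcibly updates every ref, so rebased history simply overwrites the old backup instead of conflicting:

        git clone --mirror /path/to/repo_on_my_desktop //server/share/backup.git   # one-time setup
        cd /path/to/repo_on_my_desktop
        git remote add backup //server/share/backup.git
        git push --mirror backup        # nightly: make the backup match exactly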


  • How to best configure a central repository/multiple central repositories for Mercurial?

    - by Mario
    I am new to Mercurial and trying to figure out whether it could replace SVN. Everyone I work with has used SVN, CVS and VSS (shiver), so this could be quite a large change. I have been very interested after reading about its merge and branch capability, but have a few reservations.

    We are currently on SVN and have one central repository. From my reading, it seems there is no ONE central repository for all projects when using Mercurial. NOTE: we consider each project a separate logical set of code, or a Visual Studio solution; it runs on its own. We have around 60 separate projects in our one central SVN repository. After reading about Mercurial, it seems I would have to create 60 separate central repositories on the server, one per project.

    QUESTION 1: Should I create a single repository for each project? If yes, then I am worried about configuring and hosting 60 separate central Mercurial servers. I started thinking I could configure one file, but it seems each repository must be individually configured using the "C:\...\MyRepository\.hg\hgrc" file (Windows install). It also seems I would have to run 60 servers (hg serve), presumably on different ports.

    QUESTION 2: If the answer to question 1 is yes, there should be a single central repository for each project, then how have people managed many multiple repositories?

    Finally, I haven't looked into moving all history and changes from one SVN repository to a bunch of separate Mercurial repositories, but I would appreciate any comments from someone who has done this (or knows whether it is even possible).
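
    On the hosting worry: one server process can publish many repositories. A sketch of the hgweb (hgwebdir) configuration that does this; the paths here are hypothetical:

        ; hgweb.config -- served by the bundled hgweb CGI/WSGI script,
        ; or via "hg serve --webdir-conf hgweb.config" in newer releases
        [collections]
        /srv/hg = /srv/hg        ; publish every repository found under /srv/hg
        [web]
        allow_push = *
        push_ssl = false         ; only sensible on a trusted internal network

    That serves all 60 repositories from one port; per-repository settings can still be layered on in each repo's .hg/hgrc where needed.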


  • Compile subversion on CentOs

    - by peter
    I have downloaded, compiled and installed so far: apr-1.3.9, apr-util-1.3.9, sqlite-3.6.23, zlib-1.2.4 and libtool-2.2.6b. Now, after downloading subversion-1.6.9, configure works fine, but compiling ends with the following error:

        cd subversion/svn && /bin/sh /root/subversion-1.6.9/libtool --tag=CC --silent --mode=link gcc -g -O2 -g -O2 -pthread -rpath /usr/local/lib -o svn add-cmd.o blame-cmd.o cat-cmd.o changelist-cmd.o checkout-cmd.o cleanup-cmd.o commit-cmd.o conflict-callbacks.o copy-cmd.o delete-cmd.o diff-cmd.o export-cmd.o help-cmd.o import-cmd.o info-cmd.o list-cmd.o lock-cmd.o log-cmd.o main.o merge-cmd.o mergeinfo-cmd.o mkdir-cmd.o move-cmd.o notify.o propdel-cmd.o propedit-cmd.o propget-cmd.o proplist-cmd.o props.o propset-cmd.o resolve-cmd.o resolved-cmd.o revert-cmd.o status-cmd.o status.o switch-cmd.o tree-conflicts.o unlock-cmd.o update-cmd.o util.o ../../subversion/libsvn_client/libsvn_client-1.la ../../subversion/libsvn_wc/libsvn_wc-1.la ../../subversion/libsvn_ra/libsvn_ra-1.la ../../subversion/libsvn_delta/libsvn_delta-1.la ../../subversion/libsvn_diff/libsvn_diff-1.la ../../subversion/libsvn_subr/libsvn_subr-1.la /usr/local/apr/lib/libaprutil-1.la -lexpat /usr/local/apr/lib/libapr-1.la -lrt -lcrypt -lpthread -ldl
        /usr/bin/ld: cannot find -lexpat
        collect2: ld returned 1 exit status
        make: *** [subversion/svn/svn] Error 1

    The file at /usr/local/apr/lib/libapr-1.la exists and seems to be OK (from a permissions perspective). What could be the problem here? Thanks, Peter
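
    The failing piece is the -lexpat on the link line: the linker cannot find the expat library, which apr-util expects but which isn't on the list of things built by hand. A sketch of the usual remedies (package name as on CentOS; check configure's help for the exact flag spelling):

        # Option 1: install the system development package, then re-run make
        yum install expat-devel

        # Option 2: build expat from source and point the build at it;
        # see ./configure --help for the exact --with-expat=... syntax
        ./configure --with-expat=/usr/local/expat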


  • SQLAlchemy: select over multiple tables

    - by ahojnnes
    Hi, I wanted to optimize my database query:

        link_list = select(
            columns=[link_table.c.rating, link_table.c.url, link_table.c.donations_in],
            whereclause=and_(
                not_(link_table.c.id.in_(
                    select(
                        columns=[request_table.c.recipient],
                        whereclause=request_table.c.donator == donator.id
                    ).as_scalar()
                )),
                link_table.c.id != donator.id,
            ),
            limit=20,
        ).execute().fetchall()

    and tried to merge those two selects into one query:

        link_list = select(
            columns=[link_table.c.rating, link_table.c.url, link_table.c.donations_in],
            whereclause=and_(
                link_table.c.active == True,
                link_table.c.id != donator.id,
                request_table.c.donator == donator.id,
                link_table.c.id != request_table.c.recipient,
            ),
            limit=20,
            order_by=[link_table.c.rating.desc()]
        ).execute().fetchall()

    The database schema looks like:

        link_table = Table('links', metadata,
            Column('id', Integer, primary_key=True, autoincrement=True),
            Column('url', Unicode(250), index=True, unique=True),
            Column('registration_date', DateTime),
            Column('donations_in', Integer),
            Column('active', Boolean),
        )

        request_table = Table('requests', metadata,
            Column('id', Integer, primary_key=True, autoincrement=True),
            Column('recipient', Integer, ForeignKey('links.id')),
            Column('donator', Integer, ForeignKey('links.id')),
            Column('date', DateTime),
        )

    There are several links (donators) in request_table pointing to one link in link_table. I want to get links from link_table which are not yet "requested". But this does not work. Is it actually possible, what I'm trying to do? If so, how would you do it? Thank you very much in advance!
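
    For what it's worth, the two versions aren't equivalent, which is probably why the merged one misbehaves: link_table.c.id != request_table.c.recipient filters individual link/request row pairs, whereas a link should be dropped only when no request row at all matches it. That exclusion is inherently a subquery (NOT IN or NOT EXISTS), so the merged form keeps the inner select. A sketch, reusing only constructs from the question:

        link_list = select(
            columns=[link_table.c.rating, link_table.c.url, link_table.c.donations_in],
            whereclause=and_(
                link_table.c.active == True,
                link_table.c.id != donator.id,
                not_(link_table.c.id.in_(
                    select(
                        columns=[request_table.c.recipient],
                        whereclause=request_table.c.donator == donator.id,
                    )
                )),
            ),
            limit=20,
            order_by=[link_table.c.rating.desc()],
        ).execute().fetchall()

    This still runs as a single statement on the database side; the inner select becomes a nested subquery rather than a second round trip.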


  • git-svn question about creating local branches

    - by leeed25d
    Is there a way to create a local branch, or modify an existing local branch, in such a way that it cannot be dcommitted to the SVN repo? Here's a description of the scenario:

        git checkout -b local.farBranch remotes/farBranch
        git checkout -b patched.local.farBranch
        git merge local.patches

        <work on patched branch && test>
        <do not commit onto patched.local.farBranch>

        git checkout local.farBranch
        git commit -am "some changes"
        git rebase local.farBranch patched.local.farBranch

        <another work/test cycle>

        git checkout local.farBranch
        git commit -am "last changes"
        git svn dcommit

    Now, I never want to dcommit patched.local.farBranch (which is tracking remotes/farBranch), because that would put my local patches into the SVN repository. This is not a fatal problem, but it is a pain in the keester, because the patch has to be removed when the SVN farBranch is eventually (SVN-)merged onto the trunk. So what I am looking for is a way to prevent this:

        git checkout patched.local.farBranch
        git svn dcommit    <<== ERROR

        git checkout local.farBranch
        git svn dcommit    <<== OK


  • How do I protect the trunk from hapless newbies?

    - by Michael Haren
    A coworker relayed the following problem; let's say it's fictional, to protect the guilty:

    A team of 5-10 works on a project which is issue-driven. That is, the typical flow goes like this:

    - A chunk of work (bug, enhancement, etc.) is created as an issue in the issue tracker.
    - The issue is assigned to a developer.
    - The developer resolves the issue and commits their code changes to the trunk.
    - At release time, the frozen and heavily tested trunk (or release branch, or whatever) is built in release mode and released.

    The problem he's having is that a couple of newbies made several bad commits that weren't caught, due to an unfortunate chain of events. This was followed by a bad release with a rollback or a flurry of hot fixes.

    One idea we're toying with: revoke commit access to the trunk for newbies and make them develop on per-developer branches (we're using SVN); an access-control sketch follows below.

    - Good: newbies are isolated and can't hurt others.
    - Good: committers merge newbie branches with the trunk frequently.
    - Good: this enforces rigid code reviews.
    - Bad: this is burdensome on the committers (but there's probably no way around it, since the code needs to be reviewed!).
    - Bad: it might make traceability of trunk changes a little tougher, since the reviewer would be doing the commit; I'm not too sure on this.

    UPDATE: Thank you, everyone, for your valuable input. I have concluded that this is far less a code/coder problem than I first presented. The root of the issue is that the release procedure failed to capture and test some poor-quality changes to the trunk. Plugging that hole is most important; relying on the false assumption that code in the trunk is "good" is not the solution. Once that hole (testing) is plugged, mistakes by everyone, newbie or senior, will be caught properly and dealt with accordingly. Next, a greater emphasis on code reviews and mentorship (probably driven by some systematic changes to encourage it) will go a long way toward improving code quality. With those two fixes in place, I don't think something as rigid or draconian as what I proposed above is necessary. Thanks!
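
    For reference, the access-control half of the idea is only a few lines of SVN path-based authorization; a sketch (the group members and paths are made up):

        # conf/authz of the repository
        [groups]
        committers = alice, bob
        newbies = carol, dave

        [/trunk]
        @committers = rw
        @newbies = r

        [/branches]
        @committers = rw
        @newbies = rw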

