Search Results

Search found 335 results on 14 pages for 'revisions'.

Page 9/14 | < Previous Page | 5 6 7 8 9 10 11 12 13 14  | Next Page >

  • Mercurial - revert back to old version and continue from there

    - by Paolo
    I'm using Mercurial locally for a project (it's the only repo; there's no pushing/pulling to/from anywhere else). To date it has a linear history. However, I've now realised that the thing I'm currently working on is a terrible approach, and I want to go back to the version before I started it and implement it a different way. I'm a bit confused by the branch / revert / update -C commands in Mercurial. Basically I want to revert to revision 38 (I'm currently on 45) and have my next commits use 38 as their parent and carry on from there. I don't care if revisions 39-45 are lost forever or end up in a dead-end branch of their own. Which command or set of commands do I need?
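
    One way to do this, sketched from memory rather than tested against this particular repo: update the working copy back to revision 38 and simply keep committing; the old work stays behind as an anonymous dead-end head.

        hg update -C 38                 # overwrite the working copy with revision 38
        # ...reimplement the feature...
        hg commit -m "new approach"     # creates a new head (Mercurial prints "created new head"); that's expected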

    Read the article

  • SVN Mac OS X issue - permissions?

    - by Steve Griff
    Hello there, /Volumes/sites is a connection to a Samba share that hosts some of our sites. We authenticate with a username and password that are the same ones used to log onto the Mac. When committing (or even doing a cleanup) from the Mac client side using the svn command line tool or SCPlugin, this error occurs:

        Commit succeeded, but other errors follow:
        Error bumping revisions post-commit (details follow):
        In directory '/Volumes/sites/foobar/public_html'
        Error processing command 'committed' in '/Volumes/sites/foobar/public_html'
        Error replacing text-base of 'index.php'
        Can't move '/Volumes/sites/foobar/public_html/.svn/tmp/text-base/index.php.svn-base' to '/Volumes/sites/foobar/public_html/.svn/text-base/index.php.svn-base': Operation not permitted

    Any ideas? I think it's to do with permissions on the Mac side not allowing files to be moved around on the Samba share. Apologies if my question is a bit vague; if there's any extra information I can give, please shout. Regards, Steve

    Read the article

  • How can I do a partial update (i.e., get isolated changesets) from subversion with subclipse?

    - by Ingvald
    If a file is committed several times with various changes, how can I fetch one change at a time, i.e., one changeset at a time? I use Eclipse, Subversion, and Subclipse, and I can't change the first two for the time being (or the MS platform). In my list/overview a file seems to be listed only in the latest relevant changeset, even if all changesets are listed. So an earlier changeset doesn't necessarily show the full set of files in the original commit, nor the original diff for a file in that commit. Update: I'm thinking about using changesets for simplified peer review, so I'd like the partial update represented for all the files committed in one changeset. It's easy to get diffs and specific revisions for specific files in Eclipse, but I'd like a practical way to step through all the changes in one specific commit/changeset.
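
    For what it's worth, the plain svn command line (outside Subclipse) can isolate a single changeset; a sketch, with 1234 standing in for the revision you want to review:

        svn log -v -r 1234     # list every file touched by that single commit
        svn diff -c 1234       # the diff of just that changeset
        svn update -r 1234     # move the working copy to exactly that revision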

    Read the article

  • What *collaborative* wireframing / UI mockup tools are out there?

    - by taco
    I'm looking for something that applies the collaboration focus (one location/URL, always up to date, multi-person online read/write access from anywhere) of Google Docs / Google Spreadsheets to wireframing. Bonus points if, like Google Docs, it needs only a browser yet also works offline. More bonus points if it supports automatic revisions. Even more bonus points if you can hand out login-less 'invitation' URLs like Flickr does, instead of forcing people to sign up for accounts or use their home accounts. To start off, there's one called iPlotz, but it didn't enchant me -- ironically, mostly because of its awkward UI, which can't hold a candle to OmniGraffle (don't let that prevent you from giving it a try, though). And no, paper prototyping, wonderful as it is, does not qualify: it isn't instantly globally shareable and editable. :-)

    Read the article

  • Mercurial: What is the benefit of fixing errors in earlier versions

    - by Ken Earley
    According to the guide, under the heading Fixing errors in earlier revisions, it states this: When you find a bug in some earlier revision you have two options: either you can fix it in the current code, or you can go back in history and fix the code exactly where you did it, which creates a cleaner history. How does going back in history make it cleaner? It still makes a new changeset at the tip. Does it have something to do with what is recorded as its parent? Is there a way to view the log so that the newly inserted changeset shows up in that order? This lesson is under the main heading of Lone developer with nonlinear history. Is this good practice when working on a team?
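
    A rough sketch of the workflow that section describes (revision numbers invented): the fix is committed with the old revision as its parent and then merged forward, so the history records where the bug was actually fixed rather than only where it was noticed.

        hg update 21                           # go back to the revision where the bug appeared
        # ...fix the bug...
        hg commit -m "fix bug at its origin"   # new changeset with 21 as its parent
        hg merge                               # merge the fix with your later work
        hg commit -m "merge bug fix"
        # with the graphlog extension enabled, 'hg glog' shows the fix attached to revision 21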

    Read the article

  • Ruby on Rails: attr_accessor for submodels

    - by williamjones
    I'm working with some models where a lot of a given model's key attributes are actually stored in a submodel. Example:

        class WikiArticle
          has_many :revisions
          has_one :current_revision, :class_name => "Revision", :order => "created_at DESC"
        end

        class Revision
          has_one :wiki_article
        end

    The Revision class has a ton of database fields, and the WikiArticle has very few. However, I often have to access a Revision's fields from the context of a WikiArticle. The most important case of this is probably on creating an article. I've been doing that with lots of methods that look like this, one for each field:

        def description
          if @description
            @description
          elsif current_revision
            current_revision.description
          else
            ""
          end
        end

        def description=(string)
          @description = string
        end

    And then on my save, I save @description into a new revision. This whole thing reminds me a lot of attr_accessor, only it doesn't seem like I can get attr_accessor to do what I need. How can I define an attr_submodel_accessor such that I could just give field names and have it automatically create all those methods the way attr_accessor does?
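
    A sketch of how such a macro could be written with define_method (attr_submodel_accessor is the hypothetical name from the question; the delegation rules mirror the description methods above):

        module SubmodelAccessor
          # attr_submodel_accessor :current_revision, :description
          # defines reader/writer pairs that fall back to the named submodel
          def attr_submodel_accessor(submodel, *fields)
            fields.each do |field|
              define_method(field) do
                override = instance_variable_get("@#{field}")
                return override if override
                sub = send(submodel)
                sub ? sub.send(field) : ""
              end
              define_method("#{field}=") do |value|
                instance_variable_set("@#{field}", value)
              end
            end
          end
        end

        class WikiArticle
          extend SubmodelAccessor
          has_many :revisions
          has_one :current_revision, :class_name => "Revision", :order => "created_at DESC"
          attr_submodel_accessor :current_revision, :description
        end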

    Read the article

  • Strange/simple batch question regarding Java/Ant

    - by Monster
    For my company, I'm making a batch script to go through and compile the latest revisions of code for our current project. I'm using Ant to build the class files, but I've encountered a strange error. One of the source files imports .* from a directory where there are no files (only folders), and in fact the folders needed are imported right after. It compiles perfectly fine in Eclipse, but I'm using an Ant script to automate the build outside of the IDE, and javac throws an error when it encounters this line. Is there any automated procedure I can use to ignore/suppress this error with javac in Ant? I'd even go so far as to create a dummy file in the imported directory, but all of that is contained in a Jar file I don't wish to have to decompress and then recompress with the dummy file. Thanks!

    Read the article

  • Subclipse CollabNet Mystery Icon

    - by Rares Saftoiu
    The scenario is that I'm merging a series of cherry-picked revisions from one SVN branch into trunk. I'm using the Subclipse CollabNet client to do the merge. Everything works great, except that in addition to the files I picked to merge, my working directory shows a series of files that svn thinks have changed but that I haven't chosen to merge. If I do a diff on the files in question, it tells me there are no differences. If I do a commit, I get the screenshot below, with a mystery icon I haven't been able to find documented anywhere. Here's a link to the screenshot: http://i.imgur.com/1a92j.png
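
    A guess worth checking from the plain svn command line: cherry-pick merges often leave svn:mergeinfo property-only modifications on paths, which diff as having no text changes. A quick way to confirm (paths are hypothetical):

        svn status                         # property-only changes show " M" in the second column
        svn propget svn:mergeinfo -R .     # see which paths picked up mergeinfo during the merge
        svn diff --depth empty some/dir    # property diff for a single path, without a text diff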

    Read the article

  • Tortoise SVN revision history

    - by rahul
    I want to know how long TortoiseSVN keeps the revision history. Say I have a file which I deleted from the repository through the repo browser a year ago; will I still be able to recover that file? If I am able to recover it, I also want to know how to permanently delete that earlier copy of the file and the related revision history, so that in future nobody is able to access that file. Is it possible? I have run into problems in my organisation because I did frequent updates and deletions assuming that files were being deleted permanently. The file system of the repository has bloated now. Please suggest how to fix it.
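
    For context (a sketch with hypothetical paths, requiring admin access to the machine hosting the repository): Subversion itself keeps every revision indefinitely, and the only way to truly remove a file's history is to dump the repository, filter the path out, and load the result into a fresh repository.

        svnadmin dump /srv/svn/repo > repo.dump
        svndumpfilter exclude trunk/path/to/secret-file.txt < repo.dump > filtered.dump
        svnadmin create /srv/svn/repo-clean
        svnadmin load /srv/svn/repo-clean < filtered.dump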

    Read the article

  • Synchronize an SVN repo (svnsync) with encoding errors

    - by Hamish
    Is it possible to fix/bypass non-UTF-8 encoded svn:log records when synchronizing repositories with svnsync? Background: I'm in the process of taking over the maintenance of an open source module that is stored within a large (well over 10,000 revisions) Subversion (1.5.5) repository. I do not have admin access to the remote repository to dump/filter/load the module. The old repository is being discontinued and I am trying to sync the original submodule to my local (1.6+) repository with svnsync. For example:

        svnsync file://home/svn/temp-repo/ http://path.to.repo/modulename/

    The problem is that the old repository didn't enforce UTF-8 encoding and I'm hitting errors like:

        svnsync: Cannot accept 'svn:log' property because it is not encoded in UTF-8

    I can't modify the log property in the source repository, so I need to somehow modify or ignore the property value when the encoding is unknown/invalid. Any ideas? For example, is it possible to write a pre-revprop-change script to modify the log property in transit?
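
    Two hedged notes rather than a definitive answer: a pre-revprop-change hook can only accept or reject a property change, not rewrite it, so it can't transcode the log in transit; however, later svnsync releases (1.7 and up) grew a --source-prop-encoding option that does exactly that, assuming you know (or can guess) the original encoding. The paths and the encoding below are assumptions:

        # the mirror still needs a permissive pre-revprop-change hook for svnsync to work at all
        printf '#!/bin/sh\nexit 0\n' > /home/svn/temp-repo/hooks/pre-revprop-change
        chmod +x /home/svn/temp-repo/hooks/pre-revprop-change

        # with a 1.7+ client, transcode non-UTF-8 log messages while syncing
        svnsync sync --source-prop-encoding ISO-8859-1 file:///home/svn/temp-repo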

    Read the article

  • How can I compare Core Data models?

    - by Don
    I noticed while doing system testing that a feature of our app had been removed. It looks like at some point an older version of a file was checked into SVN that was missing a property. This specific file was generated from the Core Data model, and sure enough, the latest version of the model in SVN is missing the same attribute. I need to find out if any other attributes are missing, or if anything else in the model has changed. However, the elements file in the .xcdatamodel folder appears to be binary and I can't compare the revisions. Is there a way to find the differences between two Core Data models in SVN? Barring that, what would be the best way to accomplish this task?

    Read the article

  • How to export all changed/added files from Git?

    - by dr Hannibal Lecter
    Hi all! I am very new to Git and I have a slight problem. In SVN (this feels like an Only Fools and Horses story by uncle Albert... "during the war..."), when I wanted to update a production site with my latest changes, I'd do a diff in TSVN and export all the changed/added files between two revisions. As you can imagine, it was easy to get those files onto the production site afterwards. However, I can't seem to find an "export changed files" option in Git. I can do a diff and see the changes, and I can get a list of files, but I can't actually export them. Is there a reasonable way to do this? Am I missing something simple? Just to clarify once again: I need to export all the changes between two specific commits. Thanks in advance!
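
    A sketch of one way to do it from the command line (rev1 and rev2 stand in for the two commits): list the files that were added or changed between the two revisions, then pack those paths as they exist at rev2 into an archive.

        git diff --name-only --diff-filter=ACMR rev1 rev2        # sanity-check the list first
        git archive -o export.zip rev2 $(git diff --name-only --diff-filter=ACMR rev1 rev2)
        # (paths containing spaces would need xargs or similar instead of $(...))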

    Read the article

  • tools for testing vim plugins

    - by intuited
    I'm looking for some tools for testing vim scripts: either vim scripts that do unit/functional testing, or classes for some other library (e.g. Python's unittest module) that make it convenient to run vim with parameters that cause it to do some tests on its environment, and determine from the output whether or not a given test passed. I'm aware of a couple of vim scripts that do unit testing, but they're sort of vaguely documented and may or may not actually be useful:

      - vim-unit: purports "To provide vim scripts with a simple unit testing framework and tools"; the first and only version (v0.1) was released in 2004; the documentation doesn't mention whether or not it works reliably, other than to state that it is "fare [sic] from finished".
      - unit-test.vim: this one also seems pretty experimental and may not be particularly reliable; it may have been abandoned or back-shelved: the last commit was in 2009-11 (6 months ago) and no tagged revisions have been created (i.e. no releases).

    So information from people who are using one of those two existing modules, and/or links to other, more clearly usable, options, are very welcome.

    Read the article

  • When not to use a Drupal node?

    - by stotastic
    I've recently created a very simple CRUD table where the user stores some data. For the data, I created a custom node. The functionality works great for creating, editing, and deleting data in the CRUD table using the basic node functionality (I'm actually amazed how fast and easy it was to program the basic functionality with proper access controls using only a tiny bit of code). Since the data isn't meant to be treated the same way as 'content' such as a blog post (no title, no body, no comments, no revisions, shouldn't show up on the ?q=node page, no previews, no teasers, etc.), I find that I'm spending most of my time 'turning off' and modifying the stuff that Drupal does automatically for nodes. I know it's a matter of taste, but where should one draw the line on what should be treated as a node and what shouldn't? In other words, would it be better to program this stuff from scratch without using nodes?

    Read the article

  • ant cpptask with ivy

    - by AC
    A company I am working for has some C binaries built with Ant using cpptask. They use Ivy to retrieve shared C libraries every time we start a build, which wastes a significant amount of time comparing revisions and downloading, when they only need to be downloaded if the header files have changed. I have added a target which sets a var, which causes the build to skip over the Ivy steps, but I'd like a better solution. I see that cpptask creates a file history.xml and only rebuilds the binary if any of the sources have changed. I'd like to know if there is a way to independently test whether the binary needs to be rebuilt, and if it does, I'd like it to fire off the Ivy targets. I'd also like a variable to be set if the binary was rebuilt, so that I can conditionally start an RPM generation task.
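
    A sketch of one way to wire that up (property, path, and target names are invented): use <uptodate> to set a property when the binary is newer than the sources, then hang both the Ivy and RPM targets off that same property.

        <target name="check-binary">
          <uptodate property="binary.uptodate" targetfile="${build.dir}/mybinary">
            <srcfiles dir="${src.dir}" includes="**/*.c,**/*.h"/>
          </uptodate>
        </target>

        <target name="resolve" depends="check-binary" unless="binary.uptodate"
                xmlns:ivy="antlib:org.apache.ivy.ant">
          <ivy:retrieve/>   <!-- only hit Ivy when a rebuild is actually needed -->
        </target>

        <target name="rpm" depends="build" unless="binary.uptodate">
          <!-- runs only when the binary was (re)built in this run -->
        </target>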

    Read the article

  • Should old/legacy/unused code be deleted from source control repository?

    - by Checkers
    I've encountered this in multiple projects. As the code base evolves, some libraries, applications, and components get abandoned and/or deprecated. Most people prefer to keep them in. The usual argument is that the code doesn't really take any space and can be left alone until it's needed again. So the repository slowly turns into a cesspool of legacy code where it's hard to find anything. Some people delete old code, since it creates clutter and raises more questions for new people, and you can restore any old snapshot of the code base anyway. However, you can't always find the old code if you don't know where to look, as none of the (common) VCS I know offer search over the entire repository including all historical revisions, and the only way to search the old files is to check out the revision where the deleted file exists. What would be a good approach to repository management?
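
    As an aside on the searchability point: at least git can grep across all of history from the command line, which softens that particular argument (the function name below is a placeholder):

        git grep 'someOldFunction' $(git rev-list --all)    # search every historical revision
        git log -S 'someOldFunction' --oneline              # commits that added or removed it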

    Read the article

  • Diff multiple files in perforce across a revision range

    - by Thanatos
    I'd like to diff a bunch of files across several revisions. For example, I'd like to see a.c, b.c, and c.c from changelist X to changelist Y. p4 diff2 a.c@X a.c@Y (where X and Y are changelist numbers) seems to work, but only sometimes. Specifically, if a.c is non-existent at X, I don't get a diff. I'd like to be able to get the diff anyway (even though it'll be the whole file with only adds). To get the bigger picture: I have several files across several commits, and I'd like to merge the diffs of these files in these commits, to basically say "this is a diff of what changed in this set of files during this set of changelists".
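
    A workaround sketch (the depot path is hypothetical): print each end of the range to a local file and diff them yourself, so a file that didn't exist yet at X simply diffs against an empty file and shows up as all adds.

        p4 print -q //depot/proj/a.c@X > a.c.old   # if a.c didn't exist at X, just use an empty a.c.old
        p4 print -q //depot/proj/a.c@Y > a.c.new
        diff -u a.c.old a.c.new >> combined.diff   # repeat per file and concatenate into one diff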

    Read the article

  • Is it really wrong to version documents using CouchDB's behaviour?

    - by Tomas Sedovic
    This is one of those "I know I shouldn't do this but it's oh so convenient" questions. Sorry about that. I plan to use CouchDB for storing a bunch of documents and keeping their entire revision history. CouchDB does the versioning automatically, but relying on it is strongly discouraged: "You cannot rely on document revisions for any other purpose than concurrency control." From what I've found on the CouchDB wiki, the versions can get deleted either during compaction or during replication. As far as I can tell, compaction must always be triggered manually, and replication occurs only when there's more than one database server. The question is: if I don't run compaction and use only a single database instance for my documents, can I just use CouchDB's document versioning and expect it to work? What other problems might I run into? E.g., does not running compaction hurt performance or consume significantly more disk space (compared to handling the versioning manually)?
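
    For comparison, explicit versioning can be fairly cheap: before each update, copy the current document to a frozen snapshot id with CouchDB's COPY method, so old versions survive compaction and replication (database and document names below are made up).

        # snapshot the current state of article-42 before updating it
        curl -X COPY http://localhost:5984/mydb/article-42 -H 'Destination: article-42-v7'
        # then PUT the new version of article-42 as usual; article-42-v7 stays as a permanent copy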

    Read the article

  • How do I add an SVN remote to a Git repository?

    - by Tom
    Hello! I recently used git-svn to clone an SVN repository, for the purposes of maintaining my own branch of an open-source project. I'm also working with others on this branch, so we use a shared Git repository to help with the collaboration. A colleague wishes to fetch new revisions from the original SVN repository. How might he accomplish this? I can simply run "git svn fetch" on my local machine, but seeing that my colleague has cloned from the shared Git repository, his local branch lacks the necessary SVN metadata for fetching. Thanks!
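
    A sketch of what the colleague could do in his clone (the URL is a placeholder; the svn-remote settings must mirror the ones in your own .git/config, and this relies on the commits carrying git-svn-id metadata, i.e. the original clone was not made with --no-metadata):

        git config svn-remote.svn.url http://svn.example.org/project/trunk
        git config svn-remote.svn.fetch :refs/remotes/git-svn
        git update-ref refs/remotes/git-svn origin/master   # point git-svn at whichever branch holds the imported SVN history (origin/master assumed)
        git svn fetch                                        # git-svn rebuilds its metadata from the git-svn-id lines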

    Read the article

  • foreach with an array of stdclass objects

    - by Jared Steffen
    So, what I want to do is quite simple in my mind. I have an array that consists solely of four objects. I want to create a loop that will echo an attribute of each object in the array. The only success I've had, however, is echoing every object and every property of the objects. I've never dealt with objects, so this is probably the TRUE root of the problem. There's been a few revisions, but the only thing I've really excelled at is creating error codes. Here is what I have:

        $categories = get_categories(array('child_of' => '8'));

        foreach ($categories as $cat) {
            echo $cat->name;
        };
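
    One hedged debugging step: dump the returned array first to confirm it really is an array of objects and to see which property holds the value you want to echo.

        $categories = get_categories(array('child_of' => 8));
        print_r($categories);               // inspect the objects and their property names

        foreach ($categories as $cat) {
            echo $cat->name . "\n";         // use -> for object properties; swap 'name' for whatever print_r shows
        }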

    Read the article

  • Find a specific couple of lines of code from large git repo

    - by mustISignUp
    So I remember that I once did something in another project (and later removed it) that could be useful now. Thanks to some other SO post I managed to search for a half-remembered string:

        git grep halfRemeberedNameOfFunction $(git log -g --pretty=format:%h)

    and Yay! got some results:

        2d0bcde:path/to/project/file.c: result = halfRemeberedNameOfFunction( data );
        65fc672:path/to/project/file.c: result = halfRemeberedNameOfFunction( data );
        24f2858:path/to/project/file.c: result = halfRemeberedNameOfFunction( data );
        252e3a5:path/to/project/file.c: result = halfRemeberedNameOfFunction( data, args );
        b58bc0b:path/to/project/file.c: result = _halfRemeberedNameOfFunction( data, options );
        dce8d9d:path/to/project/file.c: result = halfRemeberedNameOfFunction( data, moreData );

    But how do I get the file at one of those revisions? Many thanks
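
    A minimal sketch using the hashes and path from the output above: git show prints a file as it was at a given commit, and git checkout can drop that version into the working tree.

        git show 2d0bcde:path/to/project/file.c                       # print the old version to stdout
        git show 2d0bcde:path/to/project/file.c > file-at-2d0bcde.c   # or save it somewhere
        git checkout 2d0bcde -- path/to/project/file.c                # or restore it into the working tree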

    Read the article

  • GIT Clones on Multiple Machines

    - by Adam
    Here's my setup...

      - Laptop (Mac): git clone of SVN repository
      - Thumb drive: git clone of laptop git repository
      - Server (Win Server 08): git clone of thumb drive repository

    I'm having trouble keeping them in sync for some reason... If I make a change on the server, I'll do a "git pull " on the thumb drive to get the changes. Take the thumb drive to the laptop and do "git pull " on the laptop. From there, I can do "git svn dcommit" and everything goes up to the SVN repo with no problem. If I pull changes from SVN with "git svn rebase" and then do a pull onto the thumb drive and do a "git status", it says that I'm ## revisions ahead of the master/origin and I can't figure out why.

    Read the article

  • how can I "force" a branch upon the trunk, in the case I can't "reintegrate"?

    - by davka
    We created a branch from the trunk, on which a major refactoring was done. Meanwhile, the trunk advanced a few revisions with some fixes. We don't want these changes on the branch, so we don't want to do a "catch-up" merge of the trunk to the branch, because we don't want to mix the old and new code. But without this I can't reintegrate the branch back into the trunk. Is there a way to impose the branch on the trunk "as-is"? (An idea I considered is to undo ("reverse-merge") the trunk back to the revision where the branch started; then it is safe to merge the trunk to the branch - nothing should happen - and then I can reintegrate. What do you think?) Thanks!
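
    One way to get the "as-is" effect, sketched with made-up URLs and untested against this layout: a two-URL merge that applies the difference between current trunk and the branch head onto a trunk working copy, making trunk's content identical to the branch (the ^/ repository-root syntax needs a 1.6+ client).

        cd trunk-wc
        svn merge ^/trunk@HEAD ^/branches/refactor@HEAD .
        svn status && svn diff        # review: the working copy should now mirror the branch
        svn commit -m "Replace trunk contents with the refactoring branch"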

    Read the article

  • Selecting the first row out of many sql joins

    - by IcedDante
    Alright, so I'm putting together a path to select a revision of a particular novel:

        SELECT Catalog.WbsId, Catalog.Revision, NovelRevision.Revision
        FROM Catalog, BookInCatalog
          INNER JOIN NovelMaster
            INNER JOIN HasNovelRevision
              INNER JOIN NovelRevision
                ON HasNovelRevision.right = NovelRevision.obid
              ON HasNovelRevision.Left = NovelMaster.obid
            ON NovelMaster.obid = BookInCatalog.Right
        WHERE Catalog.obid = BookInCatalog.Left;

    This returns all revisions that are in the Novel Master for each Novel Master that is in the catalog. The problem is, I only want the FIRST revision of each novel master in the catalog. How do I go about doing that? Oh, and btw: my flavor of SQL is hobbled, as many others are, in that it does not support the LIMIT function.
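
    A hedged sketch of one LIMIT-free approach, assuming NovelRevision.Revision is what orders the revisions (lowest = first): keep only the row whose revision equals the minimum for that novel, via a correlated subquery.

        SELECT Catalog.WbsId, NovelMaster.obid, NovelRevision.Revision
        FROM Catalog
          JOIN BookInCatalog    ON BookInCatalog.Left = Catalog.obid
          JOIN NovelMaster      ON NovelMaster.obid = BookInCatalog.Right
          JOIN HasNovelRevision ON HasNovelRevision.Left = NovelMaster.obid
          JOIN NovelRevision    ON NovelRevision.obid = HasNovelRevision.Right
        WHERE NovelRevision.Revision = (
          SELECT MIN(nr2.Revision)
          FROM HasNovelRevision hr2
            JOIN NovelRevision nr2 ON nr2.obid = hr2.Right
          WHERE hr2.Left = NovelMaster.obid
        );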

    Read the article

  • (Windows Installer) What are some causes for different versions of a program showing 2 entries in Add/Remove Programs?

    - by Davy8
    Somehow something went wrong with one of our recently deployed upgrades (an internal deployment, only about a dozen machines or so), and there are now 2 entries for our program showing up in Windows Add/Remove Programs. I'm trying to figure out what could have caused this. In a nutshell, what does Windows use to determine whether a program is replacing a previous version or whether it's a new program? We are using WiX to create our installers, but nothing in the SVN revisions shows much out of the ordinary (it's been working fine for the past year with over 100 upgrades). Product version is * because we're forcing a major upgrade each time, but the upgrade code has never changed.
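
    For reference, a sketch of the WiX v3 pieces that make Windows treat a new ProductCode as an upgrade of the old one rather than a second program (the GUID and version are placeholders): the UpgradeVersion range has to actually cover the previously installed version, and RemoveExistingProducts has to be scheduled.

        <Upgrade Id="PUT-YOUR-UPGRADECODE-GUID-HERE">
          <UpgradeVersion Minimum="0.0.0" IncludeMinimum="yes"
                          Maximum="$(var.ProductVersion)" IncludeMaximum="no"
                          Property="PREVIOUSVERSIONSINSTALLED" />
        </Upgrade>
        <InstallExecuteSequence>
          <RemoveExistingProducts After="InstallInitialize" />
        </InstallExecuteSequence>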

    Read the article
