Search Results

Search found 335 results on 14 pages for 'revisions'.

Page 8/14

  • When to use each user research method

    - by user12277104
    There are a lot of user research methods out there, but sometimes we get stuck in a rut, conducting all formative usability testing before coding, or running surveys to gather satisfaction data. I'll be the first to admit that it happens to me, but to get out of a rut, it just takes a minute to look at where I am in the design & development cycle, what kind(s) of data I need, and what methods are available to me. We need reminders, or refreshers, every once in a while. One tool I've found useful is a graphic organizer that I created many years ago. It's been through several revisions, as I've adapted it to the product cycles of the places I've worked, changed my mind about how to categorize it, and added methods that I've used or created over time. I shared a version of this table at the 2012 International UPA conference, and I was contacted by someone yesterday who wanted to use it in a university course on user-centered design. I was flattered at the thought, but embarrassed, because I was sure it needed updating -- that was a year ago, after all. But I opened it today, and really, there's not much I'd change -- sure, I could add some nuance regarding the types of formative testing, such as modality (remote, unmoderated remote, or in-person) or flavor of testing (RITE, RITE-Krug, comparative, performance), but I think it's pretty much OK as is. Click on the image below to get the full-size PDF. And whether it's entirely "right" or "wrong" isn't the whole value of looking at these methods across the product lifecycle. The real value lies in the reminder that I have options. And what those options are changes as the field changes, so while I don't expect this graphic to have an eternal shelf life, it's still OK a year after I last updated it. That said, if you find something missing or out of place, let me know :)

    Read the article

  • Mercurial says "nothing changed", but it did. Sometimes my software is too clever.

    - by user12608033
    It seems I have found a "bug" in Mercurial: it takes a shortcut when checking for differences in tracked files. If a file's size and modification time are unchanged, it assumes its contents are unchanged:

        $ hg init .
        $ cp -p .sccs2hg/2005-06-05_00\:00\:00\,nicstat.c nicstat.c
        $ ls -ogE nicstat.c
        -rw-r--r-- 1 14722 2012-08-24 11:22:48.819451726 -0700 nicstat.c
        $ hg add nicstat.c
        $ hg commit -m "added nicstat.c"
        $ cp -p .sccs2hg/2005-07-02_00\:00\:00\,nicstat.c nicstat.c
        $ ls -ogE nicstat.c
        -rw-r--r-- 1 14722 2012-08-24 11:22:48.819451726 -0700 nicstat.c
        $ hg diff
        $ hg commit
        nothing changed
        $ touch nicstat.c
        $ hg diff
        diff -r b49cf59d431d nicstat.c
        --- a/nicstat.c Fri Aug 24 11:21:27 2012 -0700
        +++ b/nicstat.c Fri Aug 24 11:22:50 2012 -0700
        @@ -2,7 +2,7 @@
          * nicstat - print network traffic, Kb/s read and written. Solaris 8+.
          * "netstat -i" only gives a packet count, this program gives Kbytes.
          *
        - * 05-Jun-2005, ver 0.81 (check for new versions, http://www.brendangregg.com)
        + * 02-Jul-2005, ver 0.90 (check for new versions, http://www.brendangregg.com)
          * [...]

    Now, before you agree or disagree with me on whether this is a bug, I will also say that I believe it is a feature. Yes, I feel it is an acceptable shortcut, because in "real" situations an edit to a file will change the modification time by at least one second (the resolution that hg diff or hg commit is looking for). The benefit of the shortcut is greatly improved performance of operations like "hg diff" and "hg status", particularly where your repository contains a lot of files. Why did I have no change in modification time? Well, my source file was generated by a script that I have written to convert SCCS change history to Mercurial commits. If my script can generate two revisions of a file within a second, and the files are the same size, then I run afoul of this shortcut. Solution: I will just change my script to apply the modification time from the SCCS history to the file prior to commit. A "touch -t <timestamp>" will do that easily.
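
    For reference, a hedged sketch of that fix (the timestamp shown is illustrative; touch -t takes [[CC]YY]MMDDhhmm[.SS]):

        $ touch -t 200507020000 nicstat.c                   # stamp the file with the SCCS delta's time
        $ hg commit -m "02-Jul-2005 revision of nicstat.c"  # size or mtime now differs, so hg sees the change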

    Read the article

  • What will ECMAScript 6 bring to the table for us?

    - by user697296
    Our company ported moderate chunks of business logic to JavaScript. We compile the code with a minifier, which further improves performance. Since the language is dynamically typed, it lends itself well to obfuscation, which occurs as a byproduct of minification. We went to great lengths to ensure it positively screams, performance-wise. We can now do what we did before, faster, better, with less code, on more platforms. In summary, we are very satisfied with the current state of the language. I personally love the language, especially for its cross-platform nature. So naturally, I read up a lot about the state of JavaScript compilers, performance, and compatibility across as many browsers and platforms as I have time to research. The one theme which has been growing louder and louder these days is the news about ECMAScript 6. So far, what I have been able to gather is that ES6 promises a better development experience; firstly by enabling new ways to do things, secondly by reporting errors early. This sounds great for those who are still waiting for the language to meet their needs before jumping on board. But we have already jumped on board in a big way. Sure, I expect that we will have to do ongoing maintenance and feature revisions on our code through the years, and that we would obviously make use of best practices at the time. But I don't see us refactoring major portions of it to take advantage of language features that are mostly intended to boost developer productivity. I keep wondering: what impact will the language advances ultimately have on our existing, well-written, well-performing code base? Is there something I am missing? Is there something we ought to look out for? Does anyone have tips or guidance on how we should approach the ecmascript.next finalization? Should we care?

    Read the article

  • Partitioned Repository for WebCenter Content using Oracle Database 11g

    - by Adao Junior
    One of the biggest challenges for content management solutions is storage management, due to the high volume and relentless growth of information. Even if you have storage appliances and a lot of terabytes, things like backup, compression, deduplication, storage relocation, encryption, and availability can be a nightmare. One standard option you have with Oracle WebCenter Content is to store data in the database, and the Oracle Database lets you leverage features like compression, deduplication, encryption, and seamless backup. But with a huge volume, the challenge passes to the DBA: keeping the WebCenter Content database up and running. One solution is to use database partitions for your content repository, but what are the implications, and can it fit your business requirements? Well, yes, it can. How you manage it is up to you; you just need a good plan. During your "storage brainstorm", keep in mind what you need. Do you need to store petabytes of documents? Does everything need to be online? Is there a way to logically separate the "good content" from the "legacy content"? The first idea that comes to mind is to use the creation date of the document, but remember that a document can receive a lot of revisions, so you may want to consider the revision creation date instead. Your plan can also use more complex rules, such as per document type, per a custom metadata field like department, or a hybrid of date, document type, and a specific virtual folder. Taking this further, you can distribute your repository across different servers, different disks, and different disk types (such as SSD, SAS, SATA, tape, ...), separated according to your business requirements, keeping the "hot" content apart from the legacy content and easily matching your compliance requirements. If you partition by revision, the simple way is to use dID, the sequential unique id of every content item created in WebCenter Content, or dLastModified, the date field of the FileStorage table that records when the content was added to the table using SecureFiles. Using the scenario of a repository partitioned hierarchically by date, we will transform the FileStorage table into a partitioned table using "partition by range" on the dLastModified column (you can use dID, or a join with other tables for other metadata such as dDocType, security, etc.). The test scenario below covers:

    - existing data in the JDBC storage, to be migrated to the new partitioned JDBC storage
    - partitioning by date
    - automatic generation of new partitions based on a predefined interval (available only with Oracle Database 11g+)
    - deduplication and compression for legacy data
    - Oracle WebCenter Content 11g PS5 (which may include some customizations that do not affect the test scenario)

    For the test case you need some data stored using JDBC storage to serve as the "legacy" data. If you have not done this before, just create a storage rule pointing to the JDBC storage, enable the StorageRule metadata field in the UI, and upload some documents using this rule. You can run this test case as the schema owner or as a DBA user; we will use the schema owner TESTS_OCS. I should not forget to say that this is just a test, and you should take a proper backup of your environment first. When you use the schema owner, you need some privileges; using the DBA user, grant the privileges needed:

        REM Grant privileges required for online redefinition.
        GRANT EXECUTE ON DBMS_REDEFINITION TO TESTS_OCS;
        GRANT ALTER ANY TABLE TO TESTS_OCS;
        GRANT DROP ANY TABLE TO TESTS_OCS;
        GRANT LOCK ANY TABLE TO TESTS_OCS;
        GRANT CREATE ANY TABLE TO TESTS_OCS;
        GRANT SELECT ANY TABLE TO TESTS_OCS;
        REM Privileges required to perform cloning of dependent objects.
        GRANT CREATE ANY TRIGGER TO TESTS_OCS;
        GRANT CREATE ANY INDEX TO TESTS_OCS;

    In our test scenario we will separate the content as Legacy, Day1, Day2, Day3, and Future. The last one will be partitioned automatically, using three tablespaces in round-robin mode. In a real scenario the partition rule could be per month, per year, or any rule you choose. Tablespaces for the test scenario:

        CREATE TABLESPACE TESTS_OCS_PART_LEGACY DATAFILE 'tests_ocs_part_legacy.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
        CREATE TABLESPACE TESTS_OCS_PART_DAY1 DATAFILE 'tests_ocs_part_day1.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
        CREATE TABLESPACE TESTS_OCS_PART_DAY2 DATAFILE 'tests_ocs_part_day2.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
        CREATE TABLESPACE TESTS_OCS_PART_DAY3 DATAFILE 'tests_ocs_part_day3.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
        CREATE TABLESPACE TESTS_OCS_PART_ROUND_ROBIN_A DATAFILE 'tests_ocs_part_round_robin_a.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
        CREATE TABLESPACE TESTS_OCS_PART_ROUND_ROBIN_B DATAFILE 'tests_ocs_part_round_robin_b.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
        CREATE TABLESPACE TESTS_OCS_PART_ROUND_ROBIN_C DATAFILE 'tests_ocs_part_round_robin_c.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;

    Before starting, gather optimizer statistics on the actual FileStorage table:

        EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'FileStorage', cascade => TRUE);

    Now check whether the redefinition process can be executed:

        EXEC DBMS_REDEFINITION.CAN_REDEF_TABLE('TESTS_OCS', 'FileStorage', DBMS_REDEFINITION.CONS_USE_PK);

    If there are no error messages, you are good to go. Create a partitioned interim FileStorage table.
    You need to create a new table with the partition information to act as an interim table:

        CREATE TABLE FILESTORAGE_Part
        (
          DID NUMBER(*,0) NOT NULL ENABLE,
          DRENDITIONID VARCHAR2(30 CHAR) NOT NULL ENABLE,
          DLASTMODIFIED TIMESTAMP (6),
          DFILESIZE NUMBER(*,0),
          DISDELETED VARCHAR2(1 CHAR),
          BFILEDATA BLOB
        )
        LOB (BFILEDATA) STORE AS SECUREFILE
        (
          ENABLE STORAGE IN ROW
          NOCACHE LOGGING
          KEEP_DUPLICATES
          NOCOMPRESS
        )
        PARTITION BY RANGE (DLASTMODIFIED)
        INTERVAL (NUMTODSINTERVAL(1,'DAY'))
        STORE IN (TESTS_OCS_PART_ROUND_ROBIN_A, TESTS_OCS_PART_ROUND_ROBIN_B, TESTS_OCS_PART_ROUND_ROBIN_C)
        (
          PARTITION FILESTORAGE_PART_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM'))
            TABLESPACE TESTS_OCS_PART_LEGACY
            LOB (BFILEDATA) STORE AS SECUREFILE
            ( TABLESPACE TESTS_OCS_PART_LEGACY RETENTION NONE DEDUPLICATE COMPRESS HIGH ),
          PARTITION FILESTORAGE_PART_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
            TABLESPACE TESTS_OCS_PART_DAY1
            LOB (BFILEDATA) STORE AS SECUREFILE
            ( TABLESPACE TESTS_OCS_PART_DAY1 RETENTION AUTO KEEP_DUPLICATES COMPRESS ),
          PARTITION FILESTORAGE_PART_DAY2 VALUES LESS THAN (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
            TABLESPACE TESTS_OCS_PART_DAY2
            LOB (BFILEDATA) STORE AS SECUREFILE
            ( TABLESPACE TESTS_OCS_PART_DAY2 RETENTION AUTO KEEP_DUPLICATES NOCOMPRESS ),
          PARTITION FILESTORAGE_PART_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
            TABLESPACE TESTS_OCS_PART_DAY3
            LOB (BFILEDATA) STORE AS SECUREFILE
            ( TABLESPACE TESTS_OCS_PART_DAY3 RETENTION AUTO KEEP_DUPLICATES NOCOMPRESS )
        );

    After the creation you should see your partitions defined. Note that only the fixed range partitions have been created; none of the interval partitions exist yet. Start the redefinition process:

        BEGIN
          DBMS_REDEFINITION.START_REDEF_TABLE(
            uname        => 'TESTS_OCS'
           ,orig_table   => 'FileStorage'
           ,int_table    => 'FileStorage_PART'
           ,col_mapping  => NULL
           ,options_flag => DBMS_REDEFINITION.CONS_USE_PK
          );
        END;

    This operation can take some time to complete, depending on how much content you have and on the size of the table. Using the DBA user, you can check the progress with this command:

        SELECT * FROM v$sesstat WHERE sid = 1;

    Copy the dependent objects:

        DECLARE
          redefinition_errors PLS_INTEGER := 0;
        BEGIN
          DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS(
            uname            => 'TESTS_OCS'
           ,orig_table       => 'FileStorage'
           ,int_table        => 'FileStorage_PART'
           ,copy_indexes     => DBMS_REDEFINITION.CONS_ORIG_PARAMS
           ,copy_triggers    => TRUE
           ,copy_constraints => TRUE
           ,copy_privileges  => TRUE
           ,ignore_errors    => TRUE
           ,num_errors       => redefinition_errors
           ,copy_statistics  => FALSE
           ,copy_mvlog       => FALSE
          );
          IF (redefinition_errors > 0) THEN
            DBMS_OUTPUT.PUT_LINE('>>> FileStorage to FileStorage_PART temp copy Errors: ' || TO_CHAR(redefinition_errors));
          END IF;
        END;

    With the DBA user, verify that there are no errors:

        SELECT object_name, base_table_name, ddl_txt FROM DBA_REDEFINITION_ERRORS;

    *Note that this will show 2 lines related to the constraints; this is expected.
    Synchronize the interim table FileStorage_PART:

        BEGIN
          DBMS_REDEFINITION.SYNC_INTERIM_TABLE(
            uname      => 'TESTS_OCS',
            orig_table => 'FileStorage',
            int_table  => 'FileStorage_PART');
        END;

    Gather statistics on the new table:

        EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'FileStorage_PART', cascade => TRUE);

    Complete the redefinition:

        BEGIN
          DBMS_REDEFINITION.FINISH_REDEF_TABLE(
            uname      => 'TESTS_OCS',
            orig_table => 'FileStorage',
            int_table  => 'FileStorage_PART');
        END;

    During the execution the FileStorage table is locked in exclusive mode until the operation finishes. After the last command, the FileStorage table is partitioned. If you have content outside the fixed ranges, you will see the new partitions created automatically; no error is generated if you "forgot" to create all the future ranges. You can now drop the FileStorage_PART table:

        DROP TABLE FileStorage_PART PURGE;

    To check that the FileStorage table is valid and partitioned, use this command:

        SELECT num_rows, partitioned FROM user_tables WHERE table_name = 'FILESTORAGE';

    You can list the contents of the FileStorage table in a specific partition, for example:

        SELECT * FROM FileStorage PARTITION (FILESTORAGE_PART_LEGACY);

    Some useful commands that you can use to check the partitions (note that you need to run them as a DBA user):

        SELECT * FROM DBA_TAB_PARTITIONS WHERE table_name = 'FILESTORAGE';
        SELECT * FROM DBA_TABLESPACES WHERE tablespace_name LIKE 'TESTS_OCS%';

    After the redefinition process completes, you have a new FileStorage table storing all content whose storage rule points to the JDBC storage, partitioned using the rules set during the creation of the temporary interim FileStorage_PART table. At this point you can test WebCenter Content by downloading the documents (original and renditions). Note that the content could already be in the cache area: take a look in the weblayout directory to see if a file with the same id is there, then click on the web rendition of your test file and check that the file was created and that you can open it; this means everything is working. The redefinition process can be repeated many times, which lets you test for the better layout over and over again. Now, some interesting maintenance actions related to the partitions:

    1. Make a tablespace read-only. Viewing is unaffected; WebCenter Content does not alter the revisions. But when you try to delete content that lives in a read-only tablespace, an error occurs and the document is not deleted. The only way to prevent these errors today is a custom component that checks the partitions: if a document lives in a "read only" repository, delete the metadata and mark the document to be deleted at the next database maintenance, such as a new redefinition.

    2. Take a tablespace offline, for archiving purposes or any other reason.
    When you try to open a document in that tablespace, you will receive an error that the content could not be retrieved, but the other online tablespaces are not affected. The behavior is the same when deleting documents. Again, a custom component is the solution: if a document is "out of range", the component can show a message that the repository for that document is offline. This can be extended with an option for the user to request that it be brought online again.

    3. Move some legacy content to an offline repository (table), using the exchange option to move the content from one partition to an empty non-partitioned table such as FileStorage_LEGACY (see the sketch at the end of this post). Note that this option removes the rows from FileStorage, so the stored content can no longer be opened. You always need to keep the indexes and constraints in mind.

    4. A redefinition that separates the original content (vault) from the renditions and separates by date at the same time. This could be an option for DAM environments that want a special place for the renditions and put the original files in storage with less performance. The process is the same; you just need to change the script of the interim table to use composite partitioning. It would be something like:

        CREATE TABLE FILESTORAGE_RenditionPart
        (
          DID NUMBER(*,0) NOT NULL ENABLE,
          DRENDITIONID VARCHAR2(30 CHAR) NOT NULL ENABLE,
          DLASTMODIFIED TIMESTAMP (6),
          DFILESIZE NUMBER(*,0),
          DISDELETED VARCHAR2(1 CHAR),
          BFILEDATA BLOB
        )
        LOB (BFILEDATA) STORE AS SECUREFILE
        (
          ENABLE STORAGE IN ROW
          NOCACHE LOGGING
          KEEP_DUPLICATES
          NOCOMPRESS
        )
        PARTITION BY LIST (DRENDITIONID)
        SUBPARTITION BY RANGE (DLASTMODIFIED)
        (
          PARTITION Vault VALUES ('primaryFile')
          (
            SUBPARTITION FILESTORAGE_VAULT_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
            SUBPARTITION FILESTORAGE_VAULT_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
            SUBPARTITION FILESTORAGE_VAULT_DAY2 VALUES LESS THAN (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
            SUBPARTITION FILESTORAGE_VAULT_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
            SUBPARTITION FILESTORAGE_VAULT_FUTURE VALUES LESS THAN (MAXVALUE)
          ),
          PARTITION WebLayout VALUES ('webViewableFile')
          (
            SUBPARTITION FILESTORAGE_WEBLAYOUT_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
            SUBPARTITION FILESTORAGE_WEBLAYOUT_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
            SUBPARTITION FILESTORAGE_WEBLAYOUT_DAY2 VALUES LESS THAN (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
            SUBPARTITION FILESTORAGE_WEBLAYOUT_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
            SUBPARTITION FILESTORAGE_WEBLAYOUT_FUTURE VALUES LESS THAN (MAXVALUE)
          ),
          PARTITION Special VALUES ('Special')
          (
            SUBPARTITION FILESTORAGE_SPECIAL_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
            SUBPARTITION FILESTORAGE_SPECIAL_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
            SUBPARTITION FILESTORAGE_SPECIAL_DAY2 VALUES LESS THAN (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
            SUBPARTITION FILESTORAGE_SPECIAL_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
            SUBPARTITION FILESTORAGE_SPECIAL_FUTURE VALUES LESS THAN (MAXVALUE)
          )
        ) ENABLE ROW MOVEMENT;

    The next post related to partitioned repositories will come with a sample component to handle the possible exceptions when you need to take a tablespace/partition offline or move it to another place. We can also include some integration with Retention Management and Records Management. Another subject related to partitioning is the ability to create a FileStore provider pointed at a different database, raising the level of distributed storage vs. performance. Let us know if this is important to you, or if you have a use case not listed; leave a comment. Cross-posted on the blog.ContentrA.com
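
    As promised in item 3 above, a hedged sketch of the exchange option (the empty-table creation and the INCLUDING INDEXES clause are illustrative assumptions, not from the original scenario; LOB storage parameters must match for the exchange to succeed):

        -- create an empty non-partitioned table with the same shape as FileStorage (illustrative)
        CREATE TABLE FileStorage_LEGACY AS SELECT * FROM FileStorage WHERE 1 = 0;
        -- swap the legacy partition's segment with the empty table
        ALTER TABLE FileStorage EXCHANGE PARTITION FILESTORAGE_PART_LEGACY
          WITH TABLE FileStorage_LEGACY INCLUDING INDEXES;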

    Read the article

  • How can I mark a group of changes/changesets in SVN, Hg, or Git

    - by sylvanaar
    I would like to mark an arbitrary group of commits/changesets with a label:

        Commit 1  *Mark 1
        Commit 2  *Mark 2
        Commit 3
        Commit 4  *Mark 1
        Commit 5  *Mark 2

    The goal is to easily locate all the changes for a specific mark, and to have that grouping persisted in the VCS directly, as opposed to some outside system like a bug tracker. The location and ordering of the marks need to be arbitrary, and it should work with both committed/uncommitted and pushed/unpushed changes. In SVN, the best way I know is to edit the commit messages and add some sort of special text that you can search for, e.g. "**Mark 1", or to make a fake edit, commit it, and use its commit message to list all the included revisions. Is there a better solution for SVN? Are there equivalent or better solutions for Hg or Git?
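
    For Git, one hedged sketch uses git notes, which attach searchable annotations to existing commits and live in the repository itself (under refs/notes/commits, which must be pushed explicitly; the SHA1 is a placeholder):

        $ git notes add -m "Mark 1" <sha1>             # attach a mark to any commit, even one already pushed
        $ git log --show-notes | grep -B 6 "Mark 1"    # locate all commits carrying that mark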

    Read the article

  • Maintain set of local commits working with git-svn

    - by benizi
    I am using git to develop against a project hosted in Subversion, using git-svn:

        git svn clone svn://project/

    My general workflow has been to repeatedly edit-and-commit on the master branch, then commit to the svn repository via:

        git stash
        git svn dcommit
        git stash apply

    One of the local modifications that the 'stash' command is preserving, which I don't want to commit to the svn repository, is a changed database connection string. What's the most convenient way to keep this local change without the extra 'stash' steps? I suspect that something like 'stash' or 'quilt' is what I'm looking for, but I'm still new enough to git that I think I'm missing some terminology that would lead to the exact incantation.

    Update: The only solution I found that avoids the git stash + git-svn action + git stash apply series was to update the git-svn ref manually:

        (check in the local-only change to 'master', then...)
        $ cat .git/refs/heads/master > .git/refs/remotes/git-svn
        $ git svn fetch (with at least one new SVN revision)

    And that leaves the local-only commit as a weird (probably unsafe) commit between two svn revisions.
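
    One hedged alternative for a local-only tweak like a connection string (the file path is a placeholder; note this also hides the file from git status, which can surprise you later):

        $ git update-index --assume-unchanged config/database.conf     # stop noticing local edits to this file
        $ git update-index --no-assume-unchanged config/database.conf  # undo, when you do want to commit it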

    Read the article

  • Extending Perforce to use a custom content diff tool for certain file extensions

    - by Fraser Graham
    I have various custom binary files stored in Perforce, and for many of the file types I have built a custom diff tool to show the content creators a diff of the actual changes to the file. E.g., if the file holds simple key-value pairs as a compressed binary blob, the diff tool would load each version into an in-memory format and generate a list of additions, deletions, and edits to the file, presented in a nice clean report view. Much like the built-in image diff tool in P4V, I'd like to be able to use my own diff tool for certain file extensions within my depot, and allow the users to use the existing P4V interface to pick revisions to diff between and examine history. So, I am aware you can write add-ins to P4V, but I can't find any documentation on them; is this kind of extension functionality available in P4V, and how do I use it?

    Read the article

  • CVS: command-line diff on a remote CVS server between two HEAD revs

    - by Gugussee
    My CVS-fu is not very strong anymore (after years of SVN'ing and now Mercurial'ing). I'm trying to do a diff between two revisions of the HEAD branch (everything is in HEAD anyway). I received an IDE already set up to use a :pserver:myname@cvsserver:port/cvs/project CVS. I'm on Windows XP. I do not want to use the IDE (the goal here is to learn CVS a bit more). Apparently I cannot log in to the CVS server over SSH. How can I run a remote CVS diff between two HEAD revisions using the command line? P.S.: I am new here, mod me up so I can comment etc. :)
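
    A hedged sketch using cvs rdiff, which diffs directly against the repository without needing a working copy (the revision numbers and module/file path are placeholders):

        $ cvs -d :pserver:myname@cvsserver:port/cvs/project login
        $ cvs -d :pserver:myname@cvsserver:port/cvs/project rdiff -u -r 1.10 -r 1.14 module/file.c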

    Read the article

  • SVN problems after migration with CVS2SVN

    - by Bjorn C
    We've migrated from CVS on AIX to SVN on Linux via cvs2svn. The migration seems to have gone well, but when working in SVN we get a lot of tree conflicts that don't seem to be conflicts at all. Looking at the revision graphs, one can see that the graphs for e.g. trunk and a branch aren't the same, i.e. they contain different sets of revisions of the file. Any of the 3 ways to resolve this conflict when merging in TortoiseSVN leaves the revision graphs separate; they cannot be "melted" together. Could it be that cvs2svn didn't understand that a file in different branches is the same, even if the file system path is the same? Anyone who has experienced this? Thanks, Bjorn

    Read the article

  • Git - tidying up a repo

    - by Simon Woods
    Hi, I have got my repo into a bit of a state and want to work my way out of it. The repo looks a bit like this (A1, B1, C1 etc. are obviously commits):

        A1 ---- A2 ---- A3 ---- A4 ---- A5 ---- A6 ---- A7 ---- A8
                                                              /
        (from a remote repo) B1 ---- B2 ---------------------------------
                                      | \                               \
                                      |  C1 --------------------------- C2
                                       \                                /
                                        D1 --- D2 --- D3 --- D4 --- D5 --- D6

    Ideally I'd like to be able to remove all the revisions (with rebase?) on the B, C and D lines (I'm loath to say branches, simply because there are now no local branches on these lines except ref branches to the remote repo) and try to merge in the remote repo again, perhaps in a better way. I'd be grateful for any suggestions on how to get rid of all these commits. Could I ask that any answers use revision SHA1s rather than branch names? I thought that somehow I'd be able to revert the merge into A7 but can't quite work out how to do it. I hope that is sufficient information. Many thx, Simon
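
    A hedged sketch of one way out, using SHA1s as requested (the placeholders must be filled in from git log; the backup branch is only a safety net):

        $ git branch backup                                    # keep a ref to the current tip, just in case
        $ git reset --hard <sha1-of-last-A-commit-before-merge>  # move master back to the clean A line
        $ git merge <sha1-of-remote-tip>                       # redo the merge however you prefer
        $ git branch -D backup                                 # once satisfied; the unreferenced B/C/D commits are eventually pruned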

    Read the article

  • Should developers know how to use office suites?

    - by systempuntoout
    How deep is your knowledge of office suites? Personally I don't like them; I hate creating and managing Word documents, Excel spreadsheets, etc. I'm not talking about opening a Word document and writing some text, or calculating a sum and a division in Excel; I'm talking about advanced features like revisions, VBA macros and so on. I have a co-worker, actually a talented functional analyst, who doesn't know anything about programming but is kind of a monster guru of the Microsoft Office suite. When he sits at my desk and asks me to open and modify one of his horribly complicated Microsoft Excel multicolor multi-pivot recursive spreadsheets, ehm, I feel like a baby in front of a nuclear plant console. It's not a great feeling, if you know what I mean. As a programmer, do you feel guilty about not knowing office suites well enough?

    Read the article

  • SQL SELECT: "Give me all documents where all of the document's procedures are 'work in progress'"

    - by prestonmarshall
    This one really has me stumped. I have a documents table which holds info about the documents, and a procedures table, which is kind of like a revisions table for each document. What I need to do is write a select statement which gives me all of the documents where all of the procedures have the status 'wip' ("work in progress"). Here's an example procedures table:

        document_id | status
                  1 | 'wip'
                  1 | 'wip'
                  1 | 'wip'
                  1 | 'approved'
                  2 | 'wip'
                  2 | 'wip'
                  2 | 'wip'

    Here, I would want my query to return only document_id 2, because all of its statuses are 'wip'. I DO NOT want document_id 1, since one of its statuses is 'approved'. I believe relational division is what I want, but I'm not sure where to start. This is MySQL 5.0, FYI.
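
    A hedged sketch of the relational-division query (assuming the table and column names shown above; MySQL treats the boolean comparison as 0/1, so this runs on 5.0):

        SELECT document_id
        FROM procedures
        GROUP BY document_id
        HAVING SUM(status <> 'wip') = 0;   -- no row may have a status other than 'wip'

    An equivalent formulation joins documents to a NOT EXISTS subquery that looks for any procedure row whose status differs from 'wip'.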

    Read the article

  • How to rotate 4 images, fading between each one?

    - by Darryl Hein
    I have 4 images, which I want to fade between each other in a loop. I have something like the following:

        <img src="/images/image-1.jpg" id="featureImg1" />
        <img src="/images/image-2.jpg" id="featureImg2" style="display:none;" />
        <img src="/images/image-3.jpg" id="featureImg3" style="display:none;" />
        <img src="/images/image-4.jpg" id="featureImg4" style="display:none;" />

    I am up for revisions to the HTML, although I cannot use absolute positioning in this case. I am using jQuery elsewhere on the site, so it's available. I also need to deal with an image not being loaded right away, as the images are larger.
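
    A minimal jQuery sketch of the fade loop (the ids match the markup above; the timing values are arbitrary assumptions, and a production version would also wait for each image's load event before showing it):

        var current = 1;
        setInterval(function () {
            var next = current % 4 + 1;                       // 1 -> 2 -> 3 -> 4 -> 1
            $('#featureImg' + current).fadeOut(800, function () {
                $('#featureImg' + next).fadeIn(800);          // fade in only after the old image is gone,
                current = next;                               // so the two never overlap (no absolute positioning needed)
            });
        }, 5000);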

    Read the article

  • SVN: is it possible to delete a branch that was copied, removed, etc. for good?

    - by dimus
    I have to remove a branch from SVN history for good. Normally I would use:

        svnadmin dump /path/to/repo | svndumpfilter --drop-empty-revs --renumber-revs exclude /branches/bad_branch

    However, this branch was not just created; it was also moved and then removed, and the dump script fails to process the downstream information, with messages like:

        Invalid copy source path '/branches/bad_branch'

    So I imagine 2 ways to cope with the problem:

    1. keep only the last few revisions of the history and put the current repository as an archive on the web
    2. make a dump up to the revision where the bad_branch was created, and apply the rest of the changes as a patch, therefore losing the history of a few recent commits

    Is there a better, cleaner way to deal with this?
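
    A hedged sketch of option 2 (the revision numbers, paths, and URLs are placeholders; assume bad_branch first appeared in r500):

        $ svnadmin dump /path/to/repo -r 1:499 > clean.dump               # everything before the bad branch
        $ svnadmin create /path/to/newrepo
        $ svnadmin load /path/to/newrepo < clean.dump
        $ svn diff -r 499:HEAD file:///path/to/repo/trunk > recent.patch  # recent work, squashed into one patch
        $ svn checkout file:///path/to/newrepo/trunk wc && cd wc
        $ patch -p0 < ../recent.patch && svn commit -m "Reapply work since r499"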

    Read the article

  • In Subversion, I know when a file was added; what's the quickest way to find out when it was deleted?

    - by Eric Johnson
    OK, so suppose I know I added the file "foo.txt" to my Subversion repository at revision 500. So I can do svn log -v http://svnrepo/path/foo.txt@500, and that shows all the files added at the same time. What's the fastest way to find out when the file was deleted after it was added? I tried svn log -r500:HEAD -v http://svnrepo/path/foo.txt@500, but that gives me "path not found" - perhaps obviously, because the file "foo.txt" doesn't exist at HEAD. I could try a binary search forward through the revisions (and that would certainly be faster than typing this question), but is there a better way?
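
    One hedged approach (path and revision from the question): query the log of the parent directory, which still exists at HEAD, and look for the deletion among the changed paths:

        $ svn log -v -r 500:HEAD http://svnrepo/path | grep -B 8 "D /path/foo.txt"
        # each log entry's changed-path list marks deletions with "D";
        # the revision header a few lines above the match names the deleting revision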

    Read the article

  • Anyone using a CMS with a DAM back-end for Asset Management?

    - by Valien
    Does anyone here have experience using a CMS for content while populating the site with images/assets from a DAM system? We're working with a large number of assets (photos, logos, files, etc.) that will be stored in a DAM system for management, revisions, etc. I'd like to build a front-end system to help serve up the assets to users, as well as keep the general site updated with non-asset information (like what's new, FAQs, etc.). Any ideas/thoughts from anyone who has been down this path? Thanks! ~Allen

    Read the article

  • Subversion merging, tree merge

    - by krystan honour
    I need to merge changes from a branch back into trunk but want to continue working on the existing branch. I was going to use a reintegrate merge, but realised this is not suitable, as I would need to recreate my branch etc., which for a variety of reasons is not desirable. What I really want to do is merge the current revisions in the branch down to trunk and then keep people working on their current working copies. So my question is: can a tree merge be used to solve this, or do I have to reintegrate and recreate?
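
    A hedged sketch of the standard keep-the-branch-alive pattern from the SVN book (pre-1.8 servers; the URLs and the revision number are placeholders, and the ^/ syntax needs a 1.6+ client): reintegrate, then tell the branch to ignore the reintegration commit with a record-only merge:

        $ cd trunk-wc
        $ svn merge --reintegrate ^/branches/mybranch
        $ svn commit -m "Merge mybranch back into trunk"   # suppose this creates r1234
        $ cd ../branch-wc
        $ svn merge --record-only -c 1234 ^/trunk          # mark r1234 as merged without applying it
        $ svn commit -m "Block r1234 so future syncs skip it"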

    Read the article

  • Difference between Revert and Update in Mercurial

    - by Edan Maor
    I'm just getting started with Mercurial, and I've come across something which I don't understand. I made changes to several files, and now I want to undo all the changes I made to one of them (i.e. go back to my last commit for one specific file). As far as I can see, the command I want is revert. In the page I linked to, there is the following statement: "This operation however does not change the parent revision of the working directory (or revisions in case of an uncommitted merge). To undo an uncommitted merge, you can use 'hg update -C -r .' which will reset the parents to the first parent." I don't understand the difference between the two (hg revert vs. hg update -C -r). Can anyone enlighten me as to the difference? And in my case, do I really want revert or update to get rid of the changes I made to the file? Thanks
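
    A hedged sketch of the practical difference (the file name and revision number are placeholders):

        $ hg revert myfile.txt    # discard uncommitted edits to one file; the working
                                  # directory's parent revision stays where it was
        $ hg update -C -r 42      # move the entire working directory to revision 42,
                                  # discarding all uncommitted changes

    For the case in the question, hg revert on the single file is the one you want.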

    Read the article

  • How to identify the source of a text selection event coming from a CompareEditorInput in eclipse?

    - by tangens
    In my eclipse plugin I have the following code:

        public class MyHandler extends AbstractHandler {
            @Override
            public Object execute( ExecutionEvent event ) throws ExecutionException {
                ISelection sel = HandlerUtil
                    .getActiveWorkbenchWindowChecked( event )
                    .getSelectionService()
                    .getSelection();
                if( sel instanceof TextSelection ) {
                    IEditorPart activeEditor = PlatformUI
                        .getWorkbench()
                        .getActiveWorkbenchWindow()
                        .getActivePage()
                        .getActiveEditor();
                    IEditorInput editorInput = activeEditor.getEditorInput();
                    if( editorInput instanceof CompareEditorInput ) {
                        // here are two possible sources of the text selection, the
                        // left or the right side of the compare editor.
                        // How can I find out, which side it is from?
                    }
                }
                return null;
            }
        }

    Here I'm handling a text selection event coming from a CompareEditorInput, i.e. the result of comparing two remote revisions of a file with Subclipse. Now I want to handle the text selection properly. For that I have to know whether it selects text inside the left side or the right side of the compare editor. How can I find that out?

    Read the article

  • I want to version control my entire slice

    - by Tom
    I'm renting a slice (i.e., a VPS) from Slicehost. I've spent a day or two filling up /usr with my favorite packages, /etc with configs and init scripts, and so on. Now I want to:

    1. save this whole setup somewhere (e.g., to load onto another machine)
    2. see what changes I've made to which files
    3. revert changes, tag revisions, and all that other good version control stuff

    Saving a disk image gives me (1), but not (2) and (3). Using Subversion (svn import / svn://someotherhost) might give me all three, but I expect problems if I actually try to check a project out into / and maintain .svn directories in root-owned areas. And to load my setup onto a fresh slice, I'd need to install an svn client on it first. Is there a good way to do what I want to do?
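
    One hedged sketch (an illustration, not a recommendation): initialize a Git repository at / with a whitelist .gitignore, so only the areas you care about enter history and root-owned churn like /proc and /var never does:

        # as root
        cd /
        git init
        cat > .gitignore <<'EOF'
        /*
        !/.gitignore
        !/etc
        !/usr
        /usr/*
        !/usr/local
        EOF
        git add -A                                    # respects the whitelist above
        git commit -m "baseline of slice configuration"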

    Read the article

  • Revision histories and documenting changes

    - by jasonline
    I work on legacy systems, and I used to see the revision history of files or functions being modified every release in the source code, for example:

        //
        // Rev. No   Date        Author   Description
        // -------------------------------------------------------
        // 1.0       2009/12/01  johnc    <Some description>
        // 1.1       2009/12/24  daveb    <Some description>
        // -------------------------------------------------------
        void Logger::initialize()
        {
            // a = b;     // Old code, just commented out and not deleted
            a = b + c;    // New code
        }

    I'm just wondering if this way of documenting history is still practiced by many today. If yes, how do you apply modifications to the source code: do you comment the old code out or delete it completely? If not, what's the best way to document these revisions? If you use version control systems, does it follow that your source files contain pure source code, except for comments where necessary (no revision history for each function, etc.)?

    Read the article

  • How to properly update a feature branch from trunk?

    - by Pavel Radzivilovsky
    The SVN book says:

        "...Another way of thinking about this pattern is that your weekly sync of trunk to branch is analogous to running svn update in a working copy, while the final merge step is analogous to running svn commit from a working copy."

    I find this approach very impractical in large developments, for several reasons, mostly related to the reintegration step:

    - From SVN v1.5, merging is done rev-by-rev. Cherry-picking the areas to be merged would cause us to resolve the trunk-branch conflicts twice (once when merging trunk revisions to the feature branch, and once more when merging back).
    - Repository size: trunk changes might be significant for a large code base, and copying the changed files (unlike an SVN copy) from trunk elsewhere may be a significant overhead.

    Instead, we do what we call "re-branching": when a significant chunk of trunk changes is needed, a new feature branch is opened from the current trunk, and the merge is always downward (feature branches -> trunk -> stable branches). This does not follow the SVN book guidelines, and developers see it as extra pain. How do you handle this situation?
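
    For comparison, a hedged sketch of the sync-then-reintegrate workflow the book describes (the URLs are placeholders; SVN 1.5+ merge tracking and a 1.6+ client for the ^/ syntax assumed):

        $ cd branch-wc
        $ svn merge ^/trunk                            # weekly sync of trunk to branch
        $ svn commit -m "Sync branch with trunk"
        ...
        $ cd ../trunk-wc
        $ svn merge --reintegrate ^/branches/feature   # the final merge step
        $ svn commit -m "Reintegrate feature branch"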

    Read the article

  • Drupal setup for proofreaders - "Revision Moderation"

    - by Olav
    I would like to have external proofreaders work directly inside my Drupal site. Basically, they should be able to create new revisions, annotate, comment, etc. without affecting what users see, until I approve the changes. In particular, the node might already be public. The "Revision Moderation" module sounds a bit like what I want, but it doesn't seem to be widely used, and I keep running into other modules like "Workflow". What is important for me:

    1. the possibility to work on content that is already published
    2. ease of use for the proofreader, and an easy way for me to direct her to the right location
    3. other useful features such as comments (like those balloons in Word), diffs, etc.

    (I guess I could work around (1) by copying the content.)

    Read the article

  • How do I determine the revision number of an Android build?

    - by hermo
    I know how to get the API level, android.os.Build.VERSION.SDK_INT, but there are also several revisions of each level's release, e.g. for 2.1 there are rev 1 and rev 2. How do I determine the revision of a build? The reason I'd like to know is that I have a workaround for a bug in Android 2.1 (and 2.2), and this workaround will break the moment the corresponding bug is fixed. So right now I'm in the odd position of hoping that the bug won't be fixed (at least not until I can find an answer to the above question).
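
    A hedged sketch of the android.os.Build fields that come closest (none of them maps directly to the SDK release's revision number, which is a property of the SDK package rather than of the device build):

        import android.os.Build;
        import android.util.Log;

        public class BuildInfo {
            public static void logBuildDetails() {
                Log.i("BuildInfo", "API level:   " + Build.VERSION.SDK_INT);
                Log.i("BuildInfo", "Release:     " + Build.VERSION.RELEASE);     // e.g. "2.1-update1"
                Log.i("BuildInfo", "Incremental: " + Build.VERSION.INCREMENTAL); // internal build number
                Log.i("BuildInfo", "Fingerprint: " + Build.FINGERPRINT);         // uniquely identifies the build
            }
        }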

    Read the article

  • Mercurial - revert back to old version and continue from there

    - by Paolo
    I'm using Mercurial locally for a project (it's the only repo; there's no pushing/pulling to/from anywhere else). To date it has a linear history. However, I've now realised that the current thing I'm working on is a terrible approach, and I want to go back to the version before I started it and implement it a different way. I'm a bit confused by the branch / revert / update -C commands in Mercurial. Basically I want to revert to revision 38 (I'm currently on 45) and have my next commits have 38 as a parent and carry on from there. I don't care if revisions 39-45 are lost forever or end up in a dead-end branch of their own. Which command or set of commands do I need?
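
    A hedged sketch of the revert-and-commit approach (revision numbers from the question):

        $ hg revert --all --no-backup -r 38       # make every file match revision 38
        $ hg commit -m "Restart from revision 38" # new revision whose contents equal r38

    Alternatively, hg update -r 38 and committing new work from there creates a second head, leaving 39-45 on an anonymous dead-end branch; hg update -C is only needed if uncommitted changes must be discarded.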

    Read the article
