Search Results

Search found 2601 results on 105 pages for 'commit'.

Page 73/105 | < Previous Page | 69 70 71 72 73 74 75 76 77 78 79 80  | Next Page >

  • How to validate Windows VC++ DLL on Unix systems

    - by Guildencrantz
    I have a solution, mostly C#, but with a few VC++ projects, that is pushed through our standard release process (Perl and bash scripts on Unix boxes). Currently the initiative is to validate DLL and EXE versions as they pass through the process. All the versioning is set so that File Version is of the format $Id: $ (between the colon and the second dollar should be a git commit hash), and the Product Version is of the format $Hudson Build: $ (between the colon and the second dollar should be a string representing the Hudson build details). Currently this system works extremely well for the C# projects because this version information is stored as plain strings within the compiled code (you can literally use the Unix strings command and see the version information); the problem is that the VC++ projects do not expose this information as strings (I have used a Windows system to verify that the version information is correctly being set), so I'm not sure how to extract the version on a Unix system. Any suggestions for either A) getting a string representation of the version embedded in the compiled code, or B) a utility/script which can extract this information?
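
    One likely explanation, offered as an assumption rather than something verified against these DLLs: the Win32 VERSIONINFO resource stores its strings as UTF-16, so the default 8-bit scan of strings misses them. GNU strings can scan for 16-bit little-endian text, roughly:

        # a sketch, assuming GNU binutils strings; -e l scans for 16-bit little-endian (UTF-16LE) strings
        strings -e l MyProject.dll | grep -E '\$Id:|\$Hudson Build:'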

    Read the article

  • What are the best settings of the H2 database for high concurrency?

    - by dexter
    There are a lot of settings that can be used in the H2 database: AUTO_SERVER, MVCC, LOCK_MODE, FILE_LOCK and MULTI_THREADED. I wonder what combination works best for a high-concurrency setup, e.g. one connection doing INSERTs while another does UPDATEs and SELECTs? I tried MVCC=TRUE;LOCK_MODE=3;FILE_LOCK=NO, but whenever I do some UPDATEs in one connection, the other connection does not see them even though I commit. By the way, the connections come from different processes, i.e. separate programs.
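
    One commonly suggested combination, sketched here as an assumption drawn from the general H2 documentation rather than something tested against this setup: AUTO_SERVER=TRUE lets several processes share the same database file, and committed changes only become visible to another connection once that connection's own transaction ends.

        // a minimal sketch, assuming an H2 1.3.x-era URL (the MVCC setting was removed in later releases)
        import java.sql.Connection;
        import java.sql.DriverManager;

        public class H2ConcurrencyDemo {
            public static void main(String[] args) throws Exception {
                // AUTO_SERVER=TRUE starts an embedded server so other processes can attach to the same file
                String url = "jdbc:h2:~/appdb;AUTO_SERVER=TRUE;MVCC=TRUE;LOCK_TIMEOUT=10000";
                try (Connection conn = DriverManager.getConnection(url, "sa", "")) {
                    conn.setAutoCommit(false);
                    // ... INSERTs in this process; UPDATEs/SELECTs in another process via the same URL ...
                    conn.commit();   // other connections see the rows only after this commit
                }
            }
        }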

    Read the article

  • Rolling back or re-creating the master branch in git?

    - by Matthew Savage
    I have a git repo with a few branches: there's the master branch, which is our stable working version, and a development/staging branch in which we're doing new work. Unfortunately, without thinking, I was a bit overzealous with rebasing and have pulled all of the staging code into master over a period of time (about 80 commits... yes, I know: stupid, clumsy, poor craftsmanship, etc.). Because of this it is very hard for me to make minor fixes to the current version of our app (a Rails application) and push out the changes without also pushing out the 'staged' new features which we don't yet want to release. I am wondering if it is possible to do the following: 1) determine the last 'trunk' commit; 2) take all commits from that point onward and move them into a separate branch, more or less rolling back the changes; 3) start using the branches the way they were meant to be used. Unfortunately I'm still continually learning about git, so I'm a bit confused about what to really do here. Thanks!
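
    A sketch of the usual recovery, where <last-trunk-sha> is a placeholder for the last commit that truly belongs on master (found with git log) and the branch name "staging" is illustrative:

        git branch staging                    # keep the current master history under a new name
        git checkout master
        git reset --hard <last-trunk-sha>     # move master back to the last stable commit
        git push --force origin master        # only needed (and only safe) if the polluted master was already shared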

    Read the article

  • Hibernate, alter identifier/primary key

    - by Schildmeijer
    I receive the following exception when I'm trying to alter the @Id field of an @Entity: identifier of an instance of com.google.search.pagerank.ItemEntity was altered from 1 to 2. I know that I'm altering the primary key of the table; I'm using JPA annotations. I worked around this with a single HQL query:
        update Table set name=:newName where name=:oldName
    instead of the more object-oriented approach:
        beginTransaction();
        T e = session.load(...);
        e.setName(newName);
        session.saveOrUpdate(e);
        commit();
    Any idea what the difference is?
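
    A sketch of why the two behave differently, assuming name is the mapped identifier: the HQL bulk update runs straight against the database and bypasses the persistence context, while the object-oriented route mutates the identifier of a managed instance, which Hibernate forbids. The usual in-session alternative is delete-and-reinsert; the surrounding code below is illustrative only.

        // illustrative only: Hibernate never allows changing the id of a managed entity
        Transaction tx = session.beginTransaction();
        ItemEntity old = (ItemEntity) session.load(ItemEntity.class, oldName);
        session.delete(old);
        session.flush();                       // remove the old row before reusing its data
        ItemEntity renamed = new ItemEntity();
        renamed.setName(newName);              // copy any other fields from old as needed
        session.save(renamed);
        tx.commit();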

    Read the article

  • Can I automatically overwrite repository files using svn_load_dirs.pl or similar?

    - by Andy Strang
    I am working with a legacy VSS repository which was transferred over to a new SVN repository a few months ago. In the meantime, before we go live with the SVN repository, we need to bring over all the changes that have happened on the VSS one between then and now. I was looking at different ways to do this, which seem to be things such as: 1) svn_load_dirs.pl, then merge the files manually; 2) svn import straight into the trunk and merge the files manually; 3) check out a working copy of my SVN repository, copy in the changed files (which will overwrite some of the ones in my working copy), then commit the changes. My question is: can any of these options (or any other option) be used to automate things so that I don't have to merge the files, and can instead just overwrite them? I think only option 3 would do this, but any help is appreciated. A sketch of option 3 is given below.
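
    A sketch of option 3 as shell commands, with hypothetical paths and repository URL; the grep/xargs step only schedules brand-new files before committing:

        # overwrite the working copy with the latest VSS export, then commit
        svn checkout http://server/svn/repo/trunk wc
        rsync -av --exclude '.svn' /tmp/vss-export/ wc/
        cd wc
        svn status | grep '^?' | awk '{print $2}' | xargs -r svn add   # schedule brand-new files
        svn commit -m "Sync latest VSS changes over the SVN trunk"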

    Read the article

  • Can I use Revert after aborting a Merge?

    - by John
    Hello all, I just tried to upload a modified working copy to its branch in the following steps: 1. Update; 2. Commit. Then I attempted to merge the changes in the trunk into this branch. However, while editing the conflicts I realized there was so much conflicting code that I could not address it all today, so I gave up on the merge, and the working copy immediately got an exclamation mark. Through Check for Modifications I found that many, many files had been modified or had conflicts. It seems that the merge has somehow been carried out wrongly. My question: can I return to the state before the merge simply by using Revert? Thanks a lot in advance, John
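
    A sketch of what Revert does here, assuming nothing was committed after the merge was started: a merge only touches the working copy until it is committed, so a recursive revert from the working-copy root should restore the pre-merge state (in TortoiseSVN, Revert on the root folder; on the command line):

        svn revert --recursive .    # discards the half-merged changes and conflict markers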

    Read the article

  • Would this prevent the row from being read during the transaction?

    - by acidzombie24
    I remember an example where reading data in a transaction and then writing it back is not safe, because another transaction may read/write it in between. So I would like to check the date and prevent the row from being modified or read until my transaction is finished. Would this do the trick, and are there any SQL variants on which this will not work?
        update tbl set id=id where date>expire_date and id=@id
    Note: date > expire_date happens to be my condition; it could be anything. Would this prevent other transactions from reading the row until I commit or roll back?
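
    A sketch of the more explicit alternative; locking behaviour and syntax vary by engine and isolation level, so this is illustrative rather than a guarantee: ask for the row lock directly instead of relying on a dummy update.

        -- SQL Server flavour: take an update lock and hold it for the transaction
        BEGIN TRANSACTION;
        SELECT * FROM tbl WITH (UPDLOCK, HOLDLOCK) WHERE id = @id AND date > expire_date;
        -- MySQL/PostgreSQL/Oracle flavour:
        -- SELECT * FROM tbl WHERE id = :id AND date > expire_date FOR UPDATE;
        -- ... read and modify the row here ...
        COMMIT;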

    Read the article

  • MySQL Simple query gives "Query was empty". Transaction help needed I think.

    - by user129609
    Hi, I'm trying to do a simple transaction in MySQL:
        delimiter go
        start transaction;
        BEGIN
          DECLARE EXIT HANDLER FOR SQLEXCEPTION, SQLWARNING, NOT FOUND ROLLBACK;
          INSERT INTO jext_categories (Name) VALUES ('asdfas');
          INSERT INTO jext_categories (Name) VALUES ('asdfas2');
        END;
        commit;
        SELECT * FROM jext_categories;
        go
        delimiter ;
    but I keep getting an error saying query was empty. Could someone please tell me what I am doing wrong, and also, what is the proper format for doing a transaction in MySQL? Thanks!
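
    A likely fix, offered as an assumption: DECLARE ... HANDLER is only valid inside a stored program (procedure, function or trigger), so a plain script cannot use it; a minimal client-side version of the same transaction would be just:

        START TRANSACTION;
        INSERT INTO jext_categories (Name) VALUES ('asdfas');
        INSERT INTO jext_categories (Name) VALUES ('asdfas2');
        COMMIT;
        SELECT * FROM jext_categories;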

    Read the article

  • MySQL does not utilize my CPU and RAM enough?

    - by vick
    Hello everyone! I am importing a 2.5 GB CSV file into a MySQL table. My storage engine is InnoDB. Here is the script:
        use xxx;
        DROP TABLE IF EXISTS `xxx`.`xxx`;
        CREATE TABLE `xxx`.`xxx` (
          `xxx_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
          `name` varchar(128) NOT NULL,
          `yy` varchar(128) NOT NULL,
          `yyy` varchar(64) NOT NULL,
          `yyyy` varchar(2) NOT NULL,
          `yyyyy` varchar(10) NOT NULL,
          `url` varchar(64) NOT NULL,
          `p` varchar(10) NOT NULL,
          `pp` varchar(10) NOT NULL,
          `category` varchar(256) NOT NULL,
          `flag` varchar(4) NOT NULL,
          PRIMARY KEY (`xxx_id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

        set autocommit = 0;

        load data local infile '/home/xxx/raw.csv' into table company
          fields terminated by ',' optionally enclosed by '"'
          lines terminated by '\r\n'
          (name, yy, yyy, yyyy, yyyyy, url, p, pp, category, flag);

        commit;
    Why does my PC (Core i7 920 with 6 GB RAM) only consume 9% CPU power and 60% RAM when running these queries?
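
    A hedged observation plus a configuration sketch: a LOAD DATA import is usually bound by disk I/O and InnoDB log flushing rather than by CPU, which would explain the low utilisation. The option names below are standard MySQL settings, but the values are only illustrative for a 6 GB machine.

        [mysqld]
        innodb_buffer_pool_size        = 3G     # let InnoDB cache most of the working set
        innodb_log_file_size           = 512M   # larger redo logs mean fewer checkpoint stalls
        innodb_flush_log_at_trx_commit = 2      # relax per-commit flushing during the bulk load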

    Read the article

  • INSERT SELECT Statement and Rollback SQL

    - by Juan Perez
    I'm working on a query that uses the INSERT ... SELECT statement on MS SQL Server 2008:
        INSERT INTO TABLE1 (col1, col2)
        SELECT col1, col2 FROM TABLE2
    Right now the execution of this query is inside a transaction. Pseudocode:
        try { begin transaction; query; commit; } catch { rollback; }
    If TABLE2 has around 40 million rows and there is an error in the middle of the INSERT into TABLE1, will the INSERT ... SELECT statement roll back by itself, or do I need to use a transaction to preserve data integrity? Is it necessary to use a transaction, or does SQL Server itself use a transaction for this type of statement?
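
    A sketch for reference; this reflects general SQL Server behaviour rather than anything verified against this schema: a single statement is atomic on its own, so a failing INSERT ... SELECT leaves TABLE1 untouched, and an explicit transaction is only needed when several statements must succeed or fail together.

        BEGIN TRY
            BEGIN TRANSACTION;
            INSERT INTO TABLE1 (col1, col2)
            SELECT col1, col2 FROM TABLE2;
            COMMIT TRANSACTION;
        END TRY
        BEGIN CATCH
            IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
            -- re-raise: RAISERROR on SQL Server 2008, THROW on 2012+
            RAISERROR('INSERT failed', 16, 1);
        END CATCH;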

    Read the article

  • Using ASP.Net 4.0 for new Dev projects

    - by JBeckton
    I am currently in the early stages of developing a couple of web applications; I have not written any code yet as I am still gathering requirements and scoping things out. I want to target ASP.NET 4.0 Web Forms as the platform for these apps, but I want to make sure there are no glaring issues with this new version before I commit. I understand that if I were porting an existing app from 2.0 or 3.5 to 4.0 there might be issues, but I am starting from scratch on these projects and plan to write these apps to use the new features of 4.0. Should I wait for the first service pack to come out? It just seems like more work to start with 3.5 now only to go back through and tweak things for 4.0 in a few months, or even before I finish the app. Our servers are Windows Server 2003 with IIS 6 and MS SQL 2000. Should I expect any problems with VS 2010 and MS SQL 2000 in regards to LINQ to SQL and EF?

    Read the article

  • sqlite3.OperationalError: database is locked - non-threaded application

    - by James C
    Hi, I have a Python application which throws the standard sqlite3.OperationalError: database is locked error. I have looked around the internet and could not find any solution that worked (please note that there is no multiprocessing/threading going on, and as you can see I have tried raising the timeout parameter). The sqlite file is stored on the local hard drive. The following function is one of many which access the sqlite database; it runs fine the first time it is called, but throws the above error the second time it is called (it is called as part of a for loop in another function):
        def update_index(filepath):
            path = get_setting('Local', 'web')
            stat = os.stat(filepath)
            modified = stat.st_mtime
            index_file = get_setting('Local', 'index')
            connection = sqlite3.connect(index_file, 30)
            cursor = connection.cursor()
            head, tail = os.path.split(filepath)
            cursor.execute('UPDATE hwlive SET date=? WHERE path=? AND name=?;',
                           (modified, head, tail))
            connection.commit()
            connection.close()
    Many thanks.
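
    A sketch of a more defensive version, built on an assumption about the cause: another connection elsewhere in the loop, or one left open after an exception, is still holding the file lock. contextlib.closing guarantees the connection is closed even if execute() raises, and using the connection as a context manager commits or rolls back automatically; the function name and signature below are illustrative.

        import sqlite3
        from contextlib import closing

        def update_index_row(index_file, modified, head, tail):
            with closing(sqlite3.connect(index_file, timeout=30)) as connection:
                with connection:   # commit on success, rollback on exception
                    connection.execute(
                        'UPDATE hwlive SET date=? WHERE path=? AND name=?;',
                        (modified, head, tail))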

    Read the article

  • Flatten old history in Git

    - by schoetbi
    I have a git project that has run for a while, and now I want to throw away the old history, say everything from the start up to two years back from now. By "throw away" I mean replace the many commits in this period with one single commit that has the same effect. I checked "git rebase -i", but this does not remove the other (full) history containing all commits from git. Here is a graphical representation (d1, d2, d3 being the changesets):
        (base) -> d1 -> d2 -> d3 -> (HEAD)
    What I want is:
        (base,d1,d2) -> d3 -> (HEAD)
    How could this be done? Thanks.
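
    A sketch using an orphan branch; the branch name and the <sha-of-d2> placeholder are illustrative, and note that this rewrites history, so everyone else has to re-clone or reset:

        git checkout --orphan squashed-base <sha-of-d2>      # start a parentless history at d2's tree
        git commit -m "Squashed history up to d2"
        git rebase --onto squashed-base <sha-of-d2> master   # replay d3..HEAD on top of the new root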

    Read the article

  • Refactoring common method header and footer

    - by David Wong
    I have the following chunk of header and footer code appearing in a lot of methods. Is there a cleaner way of implementing this?
        Session sess = factory.openSession();
        Transaction tx;
        try {
            tx = sess.beginTransaction();
            // do some work
            ...
            tx.commit();
        } catch (Exception e) {
            if (tx != null) tx.rollback();
            throw e;
        } finally {
            sess.close();
        }
    The class in question is actually an EJB 2.0 SessionBean which looks like:
        public class PersonManagerBean implements SessionBean {
            public void addPerson(String name) {
                // boilerplate
                // do stuff
                // boilerplate
            }
            public void deletePerson(Long id) {
                // boilerplate
                // do stuff
                // boilerplate
            }
        }
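
    A sketch of the execute-around idiom that is usually used to factor this out; the UnitOfWork interface and inTransaction helper are illustrative names, not an existing API:

        // callback that carries the per-method body
        interface UnitOfWork {
            void run(Session sess) throws Exception;
        }

        // the boilerplate lives in exactly one place
        void inTransaction(UnitOfWork work) throws Exception {
            Session sess = factory.openSession();
            Transaction tx = null;
            try {
                tx = sess.beginTransaction();
                work.run(sess);
                tx.commit();
            } catch (Exception e) {
                if (tx != null) tx.rollback();
                throw e;
            } finally {
                sess.close();
            }
        }

        // usage from a bean method (anonymous class on an EJB 2.0-era JDK, lambda on Java 8+)
        public void addPerson(final String name) throws Exception {
            inTransaction(new UnitOfWork() {
                public void run(Session sess) {
                    sess.save(new Person(name));   // Person is illustrative
                }
            });
        }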

    Read the article

  • Branching and remote heads in Mercurial

    - by hekevintran
    I created a new branch using this command:
        hg branch new_branch
    After the first commit to the new branch, the default branch becomes inactive. If this is pushed the central repository will have only one head which belongs to the new branch. When my colleague pushes his commits on the default branch, he will get this error:
        pushing to ssh://...
        searching for changes
        abort: push creates new remote heads!
        (did you forget to merge? use push -f to force)
    Is there anything bad about forcing the push? Why are remote heads bad? How do you work remotely on separate branches and push to one repository?
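
    A sketch of the usual non-forcing workflow; availability of the last option depends on the Mercurial version (--new-branch appeared around 1.6):

        # colleague's side: pull the new head and merge (or rebase) instead of forcing
        hg pull
        hg merge                                  # join the two heads
        hg commit -m "Merge with new_branch"
        hg push                                   # no -f needed once the heads are joined
        # on Mercurial 1.6+ a brand-new named branch can also be pushed explicitly:
        hg push --new-branch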

    Read the article

  • How to enable i18n from within setup_app in websetup.py ? (formatted resend)

    - by daniel
    From within the setup_app function (websetup.py) of a Pylons i18n application which makes use of a DB, I was trying to insert multilingual content into the DB. To do so, the idea was something like:
        # necessary imports here
        def setup_app(command, conf, vars):
            ...
            for lang in langs:
                set_lang(lang)
                content = model.Content()
                content.content = _('content')
                Session.add(content)
                Session.commit()
    Unfortunately it seems that it doesn't work; the set_lang line raises an exception as follows:
        File ".. i18n/translation.py", line 179, in set_lang
            translator = _get_translator(lang, **kwargs)
        File ".. i18n/translation.py", line 160, in _get_translator
            localedir = os.path.join(rootdir, 'i18n')
        File ".. /posixpath.py", line 67, in join
            elif path == '' or path.endswith('/'):
        AttributeError: 'NoneType' object has no attribute 'endswith'
    Actually I'm not even sure it is possible to launch the i18n machinery from within this setup_app function without an active request object. Has anyone tried a trick on a similar story?

    Read the article

  • Jet Database (ms access) ExecuteNonQuery - Can I make it faster?

    - by bluebill
    Hi all, I have this generic routine that I wrote that takes a list of SQL strings and executes them against the database. Is there any way I can make this work faster? Typically it'll see maybe 200 inserts or deletes or updates at a time. Sometimes there is a mixture of updates, inserts and deletes. Would it be a good idea to separate the queries by type (i.e. group inserts together, then updates and then deletes)? I am running this against an MS Access database and using VB.NET 2005.
        Public Function ExecuteNonQuery(ByVal sql As List(Of String), ByVal dbConnection As String) As Integer
            If sql Is Nothing OrElse sql.Count = 0 Then Return 0
            Dim recordCount As Integer = 0
            Using connection As New OleDb.OleDbConnection(dbConnection)
                connection.Open()
                Dim transaction As OleDb.OleDbTransaction = connection.BeginTransaction()
                'Using cmd As New OleDb.OleDbCommand()
                Using cmd As OleDb.OleDbCommand = connection.CreateCommand
                    cmd.Connection = connection
                    cmd.Transaction = transaction
                    For Each s As String In sql
                        If Not String.IsNullOrEmpty(s) Then
                            cmd.CommandText = s
                            recordCount += cmd.ExecuteNonQuery()
                        End If
                    Next
                    transaction.Commit()
                End Using
            End Using
            Return recordCount
        End Function

    Read the article

  • Override Django inlineformset_factory has_changed() to always return True

    - by John
    Hi, I am using the django inlineformset_factory function.
        a = get_object_or_404(ModelA, pk=id)
        FormSet = inlineformset_factory(ModelA, ModelB)
        if request.method == 'POST':
            metaform = FormSet(instance=a, data=request.POST)
            if metaform.is_valid():
                f = metaform.save(commit=False)
                for instance in f:
                    instance.updated_by = request.user
                    instance.save()
        else:
            metaform = FormSet(instance=a)
        return render_to_response('nodes/form.html', {'form': metaform})
    What is happening is that if I change any of the data then everything works OK and all the data gets updated. However, if I don't change any of the data then the data is not updated, i.e. only entries which are changed go through the for loop to be saved. I guess this makes sense, as there is no point saving data if it has not changed. However, I need to go through and save every object in the form regardless of whether it has any changes or not. So my question is: how do I override this so that it goes through and saves every record whether it has any changes or not? Hope this makes sense. Thanks
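
    A sketch of one way to force every inline form through the save path, built on the assumption that overriding has_changed() on a custom form class makes each form count as changed, so is_valid()/save() processes all of them; AlwaysSaveForm is an illustrative name and ModelA/ModelB are the models from the question.

        from django.forms import ModelForm
        from django.forms.models import inlineformset_factory

        class AlwaysSaveForm(ModelForm):
            def has_changed(self):
                return True          # treat every form as modified

        FormSet = inlineformset_factory(ModelA, ModelB, form=AlwaysSaveForm)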

    Read the article

  • Convert unstructured SVN folders to trunk/branches style and retain history?

    - by joelpt
    I have a SVN repository which is currently structured like so: /versions /1.0.0 /1.0.1 /1.0.2 /1.1.0 /(etc) What happened here is that when it was time to start a new release, a team member would make a copy of the previous version's folder and rename that folder; then add/commit that new folder into SVN. As a consequence, all of the revision history for a given version-folder is limited just to changes made in that version-folder. SVN thinks that each file in each version-folder was created anew at the time of version-folder creation. So what I'd like to do is convert this series of folders into a traditional trunk/branches/tag SVN structure. Is it possible to somehow "reconcile" the revision histories of each of these versioned folders back into one common revision-history tree?
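
    A hedged note with a sketch: the per-file history that was flattened into fresh copies cannot be reconnected automatically, but server-side copies from here on are cheap and keep future history intact once a conventional layout exists (the URLs are placeholders):

        svn mkdir http://server/repo/branches http://server/repo/tags -m "Create standard layout"
        svn copy  http://server/repo/versions/1.1.0 http://server/repo/trunk -m "Promote 1.1.0 to trunk"
        svn copy  http://server/repo/trunk http://server/repo/tags/1.1.0 -m "Tag the promoted version"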

    Read the article

  • Does this raw SQL make only one trip to the database, or many?

    - by Álvaro García
    Suppose I have this SQL:
        string strTSQL = "BEGIN TRAN delete from MyTable where ID = 1";
        strTSQL += ";delete from MyTable where ID = 2";
        strTSQL += ";delete from MyTable where ID = 3 COMMIT";

        using (Entities dbContext = new Entities())
        {
            dbContext.MyTable.SQLQuery(strTSQL);
        }
    This uses a transaction in the database, so either all the commands are executed or none are. But when I execute it through EF, does it make only one trip to the database or many? Thanks.
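
    A sketch of the same batch sent through Entity Framework's raw-SQL API, assuming EF 4.1 or later where DbContext exposes Database.ExecuteSqlCommand: the concatenated string is a single command, so it goes to the server in one round trip, and the BEGIN TRAN/COMMIT pair keeps the three deletes atomic.

        using (var dbContext = new Entities())
        {
            // one command text, one round trip
            dbContext.Database.ExecuteSqlCommand(
                "BEGIN TRAN; " +
                "DELETE FROM MyTable WHERE ID = 1; " +
                "DELETE FROM MyTable WHERE ID = 2; " +
                "DELETE FROM MyTable WHERE ID = 3; " +
                "COMMIT;");
        }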

    Read the article

  • Different analyzers for each field

    - by user72185
    Hi, How can I enable different analyzers for each field in a document I'm indexing with Lucene? Example:
        RAMDirectory dir = new RAMDirectory();
        IndexWriter iw = new IndexWriter(dir,
            new StandardAnalyzer(Lucene.Net.Util.Version.LUCENE_CURRENT),
            true, IndexWriter.MaxFieldLength.UNLIMITED);

        Document doc = new Document();
        Field field1 = new Field("field1", someText1, Field.Store.YES,
            Field.Index.ANALYZED, Field.TermVector.WITH_POSITIONS_OFFSETS);
        Field field2 = new Field("field2", someText2, Field.Store.YES,
            Field.Index.ANALYZED, Field.TermVector.WITH_POSITIONS_OFFSETS);
        doc.Add(field1);
        doc.Add(field2);
        iw.AddDocument(doc);
        iw.Commit();
    The analyzer is an argument to the IndexWriter, but I want to use StandardAnalyzer for field1 and SimpleAnalyzer for field2, how can I do that? The same applies when searching, of course. The correct analyzer must be applied for each field.
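
    A sketch using PerFieldAnalyzerWrapper, the Lucene/Lucene.Net class intended for exactly this; the variable names reuse the ones from the question:

        // the default analyzer handles field1; field2 is overridden with SimpleAnalyzer
        var perField = new PerFieldAnalyzerWrapper(
            new StandardAnalyzer(Lucene.Net.Util.Version.LUCENE_CURRENT));
        perField.AddAnalyzer("field2", new SimpleAnalyzer());

        IndexWriter iw = new IndexWriter(dir, perField, true, IndexWriter.MaxFieldLength.UNLIMITED);
        // pass the same wrapper to the QueryParser at search time so each field is analyzed consistently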

    Read the article

  • How to implement partial refresh like Facebook likes/comments?

    - by shillong
    We have a Java web application. A summary page displays a list of rows. For each row, the user can vote and add comments. Voting or adding a comment commits immediately and refreshes the total vote number and comment count. We want to refresh only the current row instead of the whole table, just like Facebook does. If needed, we can show the list of data in a form layout (iterating over the list of data) instead of a table. How can we implement this feature based on JSF?
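
    A sketch with JSF 2's f:ajax, assuming the rows are rendered with ui:repeat and that the bean and property names below are placeholders; only the components listed in render are re-rendered, so a vote refreshes its own row rather than the whole table. Passing the row to the listener needs EL 2.2; otherwise bind it with f:setPropertyActionListener.

        <ui:repeat value="#{summaryBean.rows}" var="row">
            <h:panelGroup id="rowPanel" layout="block">
                <h:outputText value="#{row.voteCount} votes, #{row.commentCount} comments"/>
                <h:commandButton value="Vote">
                    <f:ajax listener="#{summaryBean.vote(row)}" execute="@this" render="rowPanel"/>
                </h:commandButton>
            </h:panelGroup>
        </ui:repeat>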

    Read the article

  • Branch for each developer in GIT repo

    - by Peter
    I'd like to move my project to GitHub from a local SVN repository. Multiple developers are currently working on this project. I was thinking that each developer should have their own branch, in which they would commit changes. When the manager reviews their work, he will merge it into the master branch. I don't want a separate repository for each developer, as GitHub has a limited number of private repositories. Is this a good idea? What are the alternatives?

    Read the article

  • The meaning of tracking in git

    - by user273158
    In an article that has been cited on Stack Overflow a few times (e.g. 1), the author discusses the asymmetry between git push and git pull, and mentions the following: "Update: Thanks to David Ongaro, who points out below that since git 1.7.4.2, the recommended value for the push.default option is upstream rather than tracking, although tracking can still be used as a deprecated synonym. The commit message that describes that change is nice, since it suggests that there is an effort underway to deprecate the term 'track' in the context of setting this association with the upstream branch in a remote repository. (The totally different meanings of 'track' in git branch --track and 'remote-tracking branches' has long irritated me when trying to introduce git to people.)" What exactly is the difference, referred to in that last sentence, between the notion of "tracking" in git branch --track and the notion of "tracking" in remote-tracking branches?
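
    A sketch illustrating the two senses (the branch names are placeholders): git branch --track sets an upstream association on a local branch, while a remote-tracking branch such as origin/feature is just a read-only pointer recording where the remote's branch was last seen.

        git branch --track feature origin/feature   # sense 1: local branch "feature" with origin/feature as its upstream
        git branch -r                               # sense 2: lists remote-tracking branches (origin/feature, origin/master, ...)
        git config branch.feature.merge             # shows the upstream association created by --track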

    Read the article

  • Caught AttributeError while rendering: 'str' object has no attribute '_meta'

    - by D_D
        def broadcast_display_and_form(request):
            if request.method == 'POST':
                form = PostForm(request.POST)
                if form.is_valid():
                    post = form.cleaned_data['post']
                    obj = form.save(commit=False)
                    obj.person = request.user
                    obj.post = post
                    obj.save()
                    readers = User.objects.all()
                    for x in readers:
                        read_obj = BroadcastReader(person=x)
                        read_obj.post = obj
                        read_obj.save()
                    return HttpResponseRedirect('/broadcast')
            else:
                form = PostForm()
            posts = BroadcastReader.objects.filter(person=request.user)
            return render_to_response('broadcast/index.html', {'form': form, 'posts': posts})
    My template:
        {% extends "base.html" %}
        {% load comments %}
        {% block content %}
        <form action='.' method='POST'>
            {{ form.as_p }}
            <p><input type="submit" value="send it" /></p>
        </form>
        {% get_comment_count for posts.post as comment_count %}
        {% render_comment_list for posts.post %}
        {% for x in posts %}
            <p>{{ x.post.person }} - {{ x.post.post }}</p>
        {% endfor %}
        {% endblock %}
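
    A sketch of the likely template fix, built on an assumption: posts is a queryset, so posts.post does not resolve to a model instance and the comment tags end up operating on a plain string (hence the missing _meta). Moving the tags inside the loop gives them a real object:

        {% for x in posts %}
            <p>{{ x.post.person }} - {{ x.post.post }}</p>
            {% get_comment_count for x.post as comment_count %}
            <p>{{ comment_count }} comment(s)</p>
            {% render_comment_list for x.post %}
        {% endfor %}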

    Read the article
