Search Results

Search found 955 results on 39 pages for 'inserts'.

Page 5/39

  • Faster Insertion of Records into a Table with SQLAlchemy

    - by Kyle Brandt
    I am parsing a log and inserting it into either MySQL or SQLite using SQLAlchemy and Python. Right now I open a connection to the DB, and as I loop over each line, I insert it after it is parsed (this is just one big table right now; I'm not very experienced with SQL). I then close the connection when the loop is done. The summarized code is:

        log_table = schema.Table('log_table', metadata,
            schema.Column('id', types.Integer, primary_key=True),
            schema.Column('time', types.DateTime),
            schema.Column('ip', types.String(length=15))
        ....
        engine = create_engine(...)
        metadata.bind = engine
        connection = engine.connect()
        ....
        for line in file_to_parse:
            m = line_regex.match(line)
            if m:
                fields = m.groupdict()
                pythonified = pythoninfy_log(fields)  # Turn them into ints, datetimes, etc.
                if use_sql:
                    ins = log_table.insert(values=pythonified)
                    connection.execute(ins)
                    parsed += 1

    My two questions are: Is there a way to speed up the inserts within this basic framework? Maybe a queue of inserts and some insertion threads, or some sort of bulk insert? When I used MySQL, the insert time for about 1.2 million records was 15 minutes; with SQLite, it was a little over an hour. Does that time difference between the db engines seem about right, or does it mean I am doing something very wrong?
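
    One common speed-up within this framework is to batch the parsed rows and hand a list of dictionaries to a single execute call, which lets SQLAlchemy use the DB-API executemany path instead of one round trip per row. A minimal sketch, assuming the log_table and connection defined above (the batch size of 1000 is an arbitrary starting point):

        ins = log_table.insert()
        batch = []

        for line in file_to_parse:
            m = line_regex.match(line)
            if m:
                batch.append(pythoninfy_log(m.groupdict()))
                if len(batch) >= 1000:              # flush every 1000 parsed rows
                    connection.execute(ins, batch)  # executemany under the hood
                    batch = []

        if batch:                                   # flush the remainder
            connection.execute(ins, batch)

    For SQLite specifically, wrapping each batch in an explicit transaction tends to matter even more, since every standalone INSERT otherwise pays for its own commit.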

    Read the article

  • How to strip everything between a key phrase and an ending tag?

    - by user3620142
    I am trying to strip everything between a key phrase and an ending tag, but for some reason it is not working; I always get blank data. I've tried many different ways but no luck. Basically I have a script that connects to IMAP and stores emails into MySQL as service tickets. It works great, but I am trying to strip off everything except the user's reply, because currently if a user replies to an email it re-inserts the entire email into MySQL. I added a key phrase at the top of all outgoing emails. The structure is as below:

        --Reply below this line to respond--
        ------------------------------------------------------------------------------------
        Email body message...

    When replying to the message, it becomes:

        New Message reply......
        --Reply below this line to respond--
        old message body.

    I would like to insert only the new reply message. This is what I've got so far:

        $message = strip_tags($message, "<br><div><p><u><hr></section>");
        $message = preg_replace("</p>", "br /", $message);
        $message = preg_replace('#--REPLY above this line to respond--(.*?)</section>)#s', ' ', $message);
        $message = clean("<br/><hr><u>Received On $rep_date / $from_email</u><br><br/>$message");

    It inserts the Received On date and From, but $message is blank. If I remove the preg_replace with the key phrase, it inserts the entire email. Any suggestion on what I am doing wrong?
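
    If the goal is just to keep the user's text above the separator, a plain string cut is less fragile than a regex. A minimal sketch, assuming the marker below exactly matches what the script puts in outgoing mail:

        // Keep only what the user typed above the separator line.
        $marker = '--Reply below this line to respond--';
        $pos = strpos($message, $marker);
        if ($pos !== false) {
            $message = trim(substr($message, 0, $pos));
        }

    Note the marker here uses the "Reply below" wording from the mail template, while the regex in the question says "REPLY above"; strpos and preg_replace are both case-sensitive by default, so the marker has to match the outgoing text exactly.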

    Read the article

  • Benchmarking MySQL on win7

    - by Patrick
    I've set up an nginx server running PHP 5.3.6 and MySQL 5.5.1.3. My computer is an AMD quad-core 9650, 4 GB RAM, 500 GB 7200 RPM HD. I ran the PHP MySQL Benchmark Tool v0.1 and got the following results:

        Testing a(n) MYISAM table using 100000 rows.
        Successfully created database speedtestdb
        Sucessfully created table speedtesttable
        Table Type Verified: MYISAM .. Done.
        100000 inserts in 19.73628 seconds or 5067 inserts per second. Done.
        100000 row reads in 0.2801 seconds or 357015 row reads per second. Done.
        100000 updates in 4.03876 seconds or 24760 updates per second.

    I'm wondering where this stands as far as performance goes, and what steps I can take, if any, to improve on it. I'm not trying to make anything fantastic, just getting a feel for how to best optimize a web server in this configuration.
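
    For a MyISAM-heavy workload like this benchmark, the first server variables usually examined are the key buffer (MyISAM's index cache) and the query cache. A hedged my.ini sketch for a 4 GB machine; the values are illustrative starting points to benchmark against, not tuned recommendations:

        [mysqld]
        key_buffer_size         = 512M   # MyISAM index cache
        query_cache_type        = 1
        query_cache_size        = 64M
        table_open_cache        = 512
        bulk_insert_buffer_size = 64M    # speeds up multi-row INSERTs into MyISAM

    Re-running the same benchmark after each change is the only reliable way to see which knob matters on this hardware.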

    Read the article

  • PostgresQL on Amazon EBS volume, realistic performance, or move to something more lightweight?

    - by Peck
    Hi, I'm working on a little research project, currently running as an instance on EC2, and I'm hoping to figure out whether I'm going down the right path. We, like a thousand other people, are making use of some of Twitter's streaming feeds to gather some data to have fun with, and my db seems to be having problems keeping up; queries take what seems to be a very long time. I'm not a DBA by trade, so I'll just dump some info here and add more if need be. System specs: EC2 XL, 15 GB of RAM. EBS: 4 x 100 GB drives, RAID 0. The stream we're getting is around 10k inserts per minute. 3 main tables, with the users we're tracking somewhere in the neighborhood of 26M rows currently. Is this amount of inserts on this hardware too much to ask out of EBS? Should I take a look at something with less overhead, like MongoDB?

    Read the article

  • Mysql Master-ColdMaster

    - by enedebe
    I'll explain my case: I'm on Amazon AWS and I want to be fault tolerant to an entire region failure. My basic problem is keeping the db in sync across 2 regions. My options: Master-Master (high lag); hand-made sync every 5 minutes; Master-ColdMaster?! (copy on the fly, but the Master won't wait for the other region's commit). In my system we could afford losing a piece of data (we're not a bank), namely the last inserts in the db, but we could not afford more than 10 minutes of downtime. The database is small and the level of inserts is low, and I wouldn't want to affect normal usage by waiting on the other region's commit. Is the third solution possible? And most important, once the primary fails, how can we detect it and swap the roles from master-coldmaster to coldmaster-master? Is there any clean way to recover after the failure? Thanks!

    Read the article

  • Emacs eshell over SSH not obeying key commands or elisp

    - by Brad Wright
    When SSHing to a remote server, Eshell doesn't behave very well. For example:

        M-x eshell
        ssh server
        <tab>        (inserts a literal tab instead of trying to complete)

    Hitting <tab>, for instance, inserts a literal tab. Is there no way to get tab completion, lisp interaction (like find-file blah), etc. over SSH? All the documentation I've read says Eshell is "TRAMP-aware", which I assumed meant it could deal with this. Am I just wrong in my assumption that it would work over SSH, or is something broken? This is on Emacs 24.0.94 pretest.
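
    One reading of "TRAMP-aware" is that Eshell stays local and reaches the remote host through TRAMP paths, rather than taking over an interactive ssh session. A hedged sketch, typed at the eshell prompt (the host name is from the question; the remote path is illustrative):

        cd /ssh:server:/home/you
        ls
        find-file notes.txt

    Because Emacs itself keeps running locally, tab completion and elisp interaction keep working; Eshell built-ins and find-file resolve against the remote default-directory, and external commands are dispatched to the remote host through TRAMP.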

    Read the article

  • MERGE gives better OUTPUT options

    - by Rob Farley
    MERGE is very cool. There are a ton of useful things about it – mostly around the fact that you can implement a ton of change against a table all at once. This is great for data warehousing, handling changes made to relational databases by applications, all kinds of things.

    One of the more subtle things about MERGE is the power of the OUTPUT clause. Useful for logging. If you're not familiar with the OUTPUT clause, you really should be – it basically makes your DML (INSERT/DELETE/UPDATE/MERGE) statement return data back to you. This is a great way of returning identity values from INSERT commands (so much better than SCOPE_IDENTITY() or the older (and worse) @@IDENTITY, because you can get lots of rows back). You can even use it to grab default values that are set using non-deterministic functions like NEWID() – things you couldn't normally get back without running another query (or with a trigger, I guess, but that's not pretty).

    That inserted table I referenced – that's part of the 'behind-the-scenes' work that goes on with all DML changes. When you insert data, this internal table called inserted gets populated with rows, and then used to inflict the appropriate inserts on the various structures that store data (HoBTs – the Heaps or B-Trees used to store data as tables and indexes). When deleting, the deleted table gets populated. Updates get a matching row in both tables (although this doesn't mean that an update is a delete followed by an insert, it's just the way it's handled with these tables). These tables can be referenced by the OUTPUT clause, which can show you the before and after for any DML statement. Useful stuff.

    MERGE is slightly different though. With MERGE, you get a mix of entries. Your MERGE statement might be doing some INSERTs, some UPDATEs and some DELETEs. One of the most common examples of MERGE is to perform an UPSERT command, where data is updated if it already exists, or inserted if it's new. And in a single operation too. Here, you can see the usefulness of the deleted and inserted tables, which clearly reflect the type of operation (but then again, MERGE lets you use an extra column called $action to show this). (Don't worry about the fact that I turned on IDENTITY_INSERT, that's just so that I could insert the values.)

    One of the things I love about MERGE is that it feels almost cursor-like – the UPDATE bit feels like "WHERE CURRENT OF …", and the INSERT bit feels like a single-row insert. And it is – but into the inserted and deleted tables. The operations to maintain the HoBTs are still done using the whole set of changes, which is very cool. And $action – very convenient.

    But as cool as $action is, that's not the point of my post. If it were, I hope you'd all be disappointed, as you can't really go near the MERGE statement without learning about it. The subtle thing that I love about MERGE with OUTPUT is that you can hook into more than just inserted and deleted. Did you notice in my earlier query that my source table had a 'src' field that wasn't used in the insert? Normally, this would be somewhat pointless to include in my source query. But with MERGE, I can put that in the OUTPUT clause. This is useful stuff, particularly when you're needing to audit the changes. Suppose your query involved consolidating data from a number of sources, but you didn't need to insert that into the actual table, just into a table for audit.

    This is now very doable, either using the INTO clause of OUTPUT, or surrounding the whole MERGE statement in brackets (parentheses if you're American) and using a regular INSERT statement. This is also doable if you're using MERGE to just do INSERTs. In case you hadn't realised, you can use MERGE in place of an INSERT statement. It's just like the UPSERT-style statement we've just seen, except that we want nothing to match. That's easy to do: we just use ON 1=2.

    This is obviously more convoluted than a straight INSERT. And it's slightly more effort for the database engine too. But if you want the extra audit capabilities, the ability to hook into the other source columns is definitely useful. Oh, and before people ask if you can also hook into the target table's columns... Yes, of course. That's what deleted and inserted give you.
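
    A hedged sketch of the kind of UPSERT-with-OUTPUT the post describes (table and column names here are illustrative, not the post's actual ones):

        -- UPSERT; OUTPUT INTO captures $action plus the source-only 'src' column
        MERGE dbo.Target AS t
        USING dbo.Source AS s
        ON t.Id = s.Id
        WHEN MATCHED THEN
            UPDATE SET t.Name = s.Name
        WHEN NOT MATCHED THEN
            INSERT (Id, Name) VALUES (s.Id, s.Name)
        OUTPUT $action, inserted.Id, s.src
            INTO dbo.AuditLog (Action, Id, Src);

    Wrapping the whole MERGE in parentheses as the source of an INSERT ... SELECT is the composable-DML variant the post mentions, and ON 1=2 turns the same shape into a pure INSERT that still has OUTPUT access to source columns.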

    Read the article

  • How to update model in the database, from asp.net MVC2, using Entity Framework?

    - by Eedoh
    Hello. I'm building an ASP.NET MVC2 application and using Entity Framework as the ORM. I am having trouble updating objects in the database. Every time I try entity.SaveChanges(), EF inserts a new line in the table, regardless of whether I want an update or an insert to be done. I tried attaching the object to the context (as in the next example), but then I got {"An object with a null EntityKey value cannot be attached to an object context."} Here's my simple function for inserts and updates (it's not really about vehicles, but it's simpler to explain like this, although I don't think that affects the answers at all):

        public static void InsertOrUpdateCar(this Vehicles entity, Cars car)
        {
            if (car.Id == 0 || car.Id == null)
            {
                entity.Cars.AddObject(car);
            }
            else
            {
                entity.Attach(car);
            }
            entity.SaveChanges();
        }

    I even tried using AttachTo("Cars", car), but I got the same exception. Anyone has experience with this?
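
    A hedged sketch of how this is often handled with the EF4 ObjectContext API (names taken from the question): attach through the entity set, then mark the entry as modified so SaveChanges issues an UPDATE rather than an INSERT.

        public static void InsertOrUpdateCar(this Vehicles entity, Cars car)
        {
            if (car.Id == 0)
            {
                entity.Cars.AddObject(car);
            }
            else
            {
                // Attaching via the entity set sidesteps the null-EntityKey error,
                // but attached objects start out Unchanged, so flag them Modified.
                entity.Cars.Attach(car);
                entity.ObjectStateManager.ChangeObjectState(car, System.Data.EntityState.Modified);
            }
            entity.SaveChanges();
        }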

    Read the article

  • Linq To Sql and identity_insert

    - by Ronnie Overby
    I am trying to do record inserts on a table where the primary key is an identity field. I have tried calling mycontext.ExecuteCommand("SET identity_insert myTable ON"), but this doesn't do any good; I get an error saying that identity_insert is off when I submit changes. How can I turn it ON from the C# code before I submit changes? EDIT: I have read that this is because ExecuteCommand's code gets executed in a different session. EDIT 2: Is there any way I can execute some DDL to remove the identity specification from my C# code, do the inserts, and then turn the identity specification back on?
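
    Since IDENTITY_INSERT is per-session, one hedged approach is to open the DataContext's connection explicitly and run the SET, the inserts, and the reset over that single connection (table and column names are placeholders):

        using (var ctx = new MyDataContext())
        {
            ctx.Connection.Open();
            ctx.Transaction = ctx.Connection.BeginTransaction();
            try
            {
                ctx.ExecuteCommand("SET IDENTITY_INSERT myTable ON");
                ctx.ExecuteCommand("INSERT INTO myTable (Id, Name) VALUES ({0}, {1})", 42, "example");
                ctx.ExecuteCommand("SET IDENTITY_INSERT myTable OFF");
                ctx.Transaction.Commit();
            }
            catch
            {
                ctx.Transaction.Rollback();
                throw;
            }
        }

    With the connection opened and the transaction assigned by hand, all three commands share one session, so the SET stays in effect. Note this bypasses SubmitChanges for those rows, since LINQ to SQL never emits values for columns marked IsDbGenerated.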

    Read the article

  • Slow Update/insert into SQL Server CE using LinqToDatasets

    - by Vaccano
    I have a mobile app that is using LinqToDatasets to update/insert into a SQL Server CE 3.5 file. My code looks like this:

        // All the MyClass updates
        MyTableAdapter myTableAdapter = new MyTableAdapter();
        foreach (MyClassToInsert myClass in updates.MyClassChanges)
        {
            // Update the row if it is already there
            int result = myTableAdapter.Update(myClass.FirstColumn,
                                               myClass.SecondColumn,
                                               myClass.FirstColumn);
            // If the row was not there then insert it.
            if (result == 0)
            {
                myTableAdapter.Insert(myClass.FirstColumn, myClass.SecondColumn);
            }
        }

    This code is used to keep the handheld database in sync with the server database. The problem is that when it is a full update (the first time, for example) there are a lot of updates (about 125), which makes this code, and more loops like it, take a very long time (I have three such loops that take over 30 seconds each). Is there a faster or better way to do updates/inserts like this? (I did see this Codeplex project, but I could not see how to make it work with both updates and inserts.)
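
    On SQL CE, a common way to cut per-row overhead is table-direct access through SqlCeResultSet, which bypasses query processing entirely. A hedged sketch (connection string, table, and column names are assumptions based on the question):

        using (var conn = new SqlCeConnection(connString))
        using (var cmd = conn.CreateCommand())
        {
            conn.Open();
            cmd.CommandType = CommandType.TableDirect;
            cmd.CommandText = "MyTable";   // a table name, not a query
            using (var rs = cmd.ExecuteResultSet(ResultSetOptions.Updatable))
            {
                var rec = rs.CreateRecord();
                rec.SetValue(rs.GetOrdinal("FirstColumn"), "a");
                rec.SetValue(rs.GetOrdinal("SecondColumn"), "b");
                rs.Insert(rec);
            }
        }

    For the update-or-insert half of the loop, setting cmd.IndexName and probing with rs.Seek before deciding to update or insert plays the same role as the TableAdapter's Update-then-Insert check, without a SQL round trip per row.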

    Read the article

  • Collection <NSCFSet: 0x1b0b30> was mutated while being enumerated. How to determine which set?

    - by jamone
    I'm doing a bunch of Core Data inserts, and after 20k or so inserts, with saves every 1-2k, I get this error:

        Terminating app due to uncaught exception 'NSGenericException', reason: '*** Collection <NSCFSet: 0x1b0b30> was mutated while being enumerated.'

    I'm trying to figure out which NSSet is causing this. I've done a search, and the only NSSets in my code are the autogenerated ones that handle the Core Data relationships. I'm using NSXMLParser, and for each element found I create a new entity (if a matching one doesn't already exist). So I will create a state entity, then populate all the city entities, and then do a save. This means that a state's NSSet *cities is getting added to, but I don't see why that shouldn't be allowed.

    Read the article

  • How to "interleave" two DataTables.

    - by Brent
    Take these two lists:

        List 1: Red, Green, Blue
        List 2: Brown, Red, Blue, Purple, Orange

    I'm looking for a way to combine these lists together to produce:

        List 3: Brown, Red, Green, Blue, Purple, Orange

    I think the basic rules are these: 1) insert on top of the list any row falling before the first common row (e.g., Brown comes before the first common row, Red); 2) insert items between rows if both lists have two items (e.g., List 1 inserts Green between Red and Blue); and 3) insert rows on the bottom if there's no "between-ness" found in 2 (e.g., List 2 inserts Orange at the bottom). The lists are stored in a DataTable. I'm guessing I'll have to switch between them while iterating, but I'm having a hard time figuring out a method of combining the rows. Thanks for any help. --Brent
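
    The rules amount to merging the two lists around their common rows as anchors. A hedged C# sketch over plain string lists (adapting it to DataTable rows would mean comparing key columns instead of strings):

        static List<string> Interleave(List<string> a, List<string> b)
        {
            var result = new List<string>();
            int i = 0, j = 0;
            while (i < a.Count && j < b.Count)
            {
                if (a[i] == b[j]) { result.Add(a[i]); i++; j++; }            // common anchor row
                else if (b.IndexOf(a[i], j) < 0) { result.Add(a[i]); i++; }  // only in list 1
                else { result.Add(b[j]); j++; }          // emit list 2 rows up to the next anchor
            }
            result.AddRange(a.GetRange(i, a.Count - i)); // leftovers go on the bottom
            result.AddRange(b.GetRange(j, b.Count - j));
            return result;
        }

    With the example lists this yields Brown, Red, Green, Blue, Purple, Orange.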

    Read the article

  • Bulk Insert takes 4x as long on first operation of the day

    - by patrick
    I do bulk inserts into a table with about 14 million rows at five-minute increments during a 7-hour period during the day. These inserts take somewhere between 9 and 14 seconds. However, the first insert always takes about 40 seconds. Anyone know what SQL Server 2005 would be doing differently on the first insert into a table for that day? From what I've read, I should probably use the SqlBulkCopy class instead of just using a bulk insert in a stored procedure. Is that the general consensus?
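
    For reference, a hedged SqlBulkCopy sketch (destination table and source are placeholders); whether it helps with the first-run penalty is a separate question, since a 40-second first insert smells more like a cold buffer cache or plan compilation than the insert API:

        using (var bulk = new SqlBulkCopy(connectionString))
        {
            bulk.DestinationTableName = "dbo.MyBigTable";
            bulk.BatchSize = 10000;              // rows per batch
            bulk.BulkCopyTimeout = 120;          // seconds
            bulk.WriteToServer(sourceDataTable); // DataTable or IDataReader source
        }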

    Read the article

  • BDE, Delphi, ODBC, SQL Native Client & Dead lock

    - by EspenS
    Hi. We have some Delphi code that uses the BDE to access SQL Server 2008 through the SQL Server Native Client ODBC driver (2005 version). Our issue is that we're experiencing deadlock issues in a loop doing inserts to multiple tables. The whole loop is done within a [TDatabase].StartTransaction. Looking at the SQL Server Profiler we clearly see that at one point during the loop the SPID (Session ID?) changes, and then we naturally end up with a deadlock. (Both SPIDs doing inserts to the same table.) It seems like the BDE at some point opens a second connection to the DB... (Although I would love to skip the BDE, that's currently not possible.) Anyone with experiences to share?

    Read the article

  • My FORM problem issues

    - by Azzyh
    So I have this in my form:

        <textarea onfocus="javascript:clearContents(this);" rows="5" cols="40" id="comment" name="comment">Skriv hvorfor du vælger at stemme ja/nej. Skal indeholde detaljer, kritik/råd. Klik for at skrive</textarea><br />
        Ja: <input type="checkbox" value="Y" id="SCvote" name="SCvote">
        eller Nej: <input type="checkbox" value="N" name="SCvote">

    Now onfocus (when you click on the box) it clears the text "Skriv hvorfor...", but if you don't write anything and still submit, my PHP code thinks the field contains something and inserts the text "Skriv hvorfor...". Another issue is the checkboxes: if I don't pick anything, it inserts "Y"; even if I pick the other one, "N", it inserts "Y"; and I don't want it to be possible to check both of them. What do I do?
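
    A hedged sketch of one way to handle both issues: radio inputs are mutually exclusive by design, and the untouched placeholder can be detected server-side. The placeholder string and field names come from the markup above; the PHP variable names are illustrative:

        Ja: <input type="radio" value="Y" id="SCvote" name="SCvote">
        eller Nej: <input type="radio" value="N" name="SCvote">

        <?php
        // Treat an untouched placeholder the same as an empty comment.
        $placeholder = 'Skriv hvorfor du vælger at stemme ja/nej.';
        $comment = isset($_POST['comment']) ? trim($_POST['comment']) : '';
        if ($comment === '' || strpos($comment, $placeholder) === 0) {
            $comment = '';
        }
        // Accept only an explicit Y or N; anything else means no vote was cast.
        $vote = isset($_POST['SCvote']) ? $_POST['SCvote'] : null;
        if ($vote !== 'Y' && $vote !== 'N') {
            $vote = null;
        }
        ?>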

    Read the article

  • Edit Distance in Python

    - by Alice
    I'm programming a spellcheck program in Python. I have a list of valid words (the dictionary), and I need to output a list of words from this dictionary that have an edit distance of 2 from a given invalid word. I know I need to start by generating a list with an edit distance of one from the invalid word (and then run that again on all the generated words). I have three methods, inserts(...), deletions(...) and changes(...), that should output a list of words with an edit distance of 1, where inserts outputs all valid words with one more letter than the given word, deletions outputs all valid words with one less letter, and changes outputs all valid words with one different letter. I've checked a bunch of places, but I can't seem to find an algorithm that describes this process. All the ideas I've come up with involve looping through the dictionary list multiple times, which would be extremely time-consuming. If anyone could offer some insight, I'd be extremely grateful. Thanks!
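
    The usual trick is to invert the loop: generate every string at edit distance 1 from the misspelled word (there are only on the order of len(word) * 26 of them) and test membership in a set of dictionary words, rather than scanning the dictionary. A hedged Python sketch in the style of Norvig's spell corrector:

        import string

        def edits1(word):
            splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
            deletes = [a + b[1:] for a, b in splits if b]
            changes = [a + c + b[1:] for a, b in splits if b for c in string.ascii_lowercase]
            inserts = [a + c + b for a, b in splits for c in string.ascii_lowercase]
            return set(deletes + changes + inserts)

        def within_distance2(word, dictionary):
            # dictionary is a set(), so each membership test is O(1)
            return {w2 for w1 in edits1(word) for w2 in edits1(w1) if w2 in dictionary}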

    Read the article

  • Oracle global lock across process

    - by Jimm
    I would like to synchronize access to a particular insert, so that if multiple applications execute this one insert, the inserts happen one at a time. The reason behind the synchronization is that there should only be ONE instance of this entity; if multiple applications try to insert the same entity, only one should succeed and the others should fail. One option considered was to create a composite unique key that would uniquely identify the entity and rely on the unique constraint. For some reason, the DBA department rejected this idea. The other option that came to my mind was to create a stored procedure for the insert; if the stored procedure can obtain a global lock, then multiple applications invoking it, though in their separate database sessions, would have their inserts serialized. My question is whether it is possible for a stored procedure in Oracle 10/11 to obtain such a lock, and any pointers to documentation would be helpful.
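
    Oracle's DBMS_LOCK package provides user-named global locks of exactly this kind. A hedged PL/SQL sketch (the lock name, table, and error handling are illustrative, and EXECUTE on DBMS_LOCK must be granted to the procedure's owner):

        CREATE OR REPLACE PROCEDURE insert_entity(p_name IN VARCHAR2) AS
          l_handle VARCHAR2(128);
          l_status INTEGER;
        BEGIN
          DBMS_LOCK.ALLOCATE_UNIQUE('ENTITY_INSERT_LOCK', l_handle);
          l_status := DBMS_LOCK.REQUEST(l_handle, DBMS_LOCK.X_MODE, timeout => 10);
          IF l_status <> 0 THEN
            RAISE_APPLICATION_ERROR(-20001, 'could not obtain insert lock: ' || l_status);
          END IF;
          -- serialized section: check for an existing instance, then insert
          INSERT INTO entities (name) VALUES (p_name);
          l_status := DBMS_LOCK.RELEASE(l_handle);
        END;
        /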

    Read the article

  • How often to run getWritableDatabase() and getReadableDatabase()?

    - by jawonlee
    I'm writing a Service, a Content Provider, and one or more apps. The Service writes new data to the Content Provider's SQLite database every 5 minutes or so, plus at user input, and is intended to run pretty much forever in the background. The app, when running, will display data pulled from the Content Provider and will be refreshed whenever the Service puts more data into the Content Provider's database. Given that the Service only inserts into the database once every five minutes, when is the right time to call SQLiteOpenHelper's getWritableDatabase() / getReadableDatabase()? Is it in the onCreate() of the Content Provider, or should I run it every time there is an insert() and close it at the end of insert()? The data being inserted every 5 minutes will contain multiple inserts.
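
    A common pattern, hedged here as a sketch: construct the helper once in the provider's onCreate() and call getWritableDatabase() per operation (after the first call it returns the cached open handle), wrapping each 5-minute batch in a transaction. The helper class and table name are assumptions:

        @Override
        public boolean onCreate() {
            mHelper = new MyOpenHelper(getContext());  // cheap; no db I/O happens yet
            return true;
        }

        public void insertBatch(ContentValues[] batch) {
            SQLiteDatabase db = mHelper.getWritableDatabase();  // cached after first call
            db.beginTransaction();
            try {
                for (ContentValues values : batch) {
                    db.insert("readings", null, values);
                }
                db.setTransactionSuccessful();
            } finally {
                db.endTransaction();
            }
        }

    There is usually no need to close the database between batches; it lives as long as the provider's process does.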

    Read the article

  • Replication - syncronizing most of the data some of the time

    - by uncle brad
    I have some data that isn't properly "partitioned" (for lack of a better word). All inserts, processing, and reporting happen on the same table. The bulk of the processing happens not long after the insert, and not long after that the data becomes immutable (we're talking days). I could do all inserts and processing on a new table that I replicate to the old table. When I detect that the data has become immutable, I would delete it from the new table, but I would edit the delete replication stored procedure so that the delete did not replicate. How bad an idea is this? It seems attractive at the moment (I haven't slept on it yet) because it might mitigate a performance problem with only very small changes to the application. It also seems like it might be a good way to shoot myself in the foot.

    Read the article

  • Insert multiple records from a XML string differing on one parameter in SQL SERVER 2008

    - by Rohit
    Below is a query which inserts records into the SimpleDictationProfileMapping table after reading them from an XML string. Currently it inserts a single record in which DictationCaptureProfileID is @dictationCaptureProfileId. Now I want to insert multiple rows in which @dictationCaptureProfileId is different and the other 2 values are the same. What I want to achieve by this is that, in case the parent changes, all child values should also change.

        INSERT INTO SimpleDictationProfileMapping
            (DictationCaptureProfileID,
             DictationProfileMappingAttributeID,
             DictationProfileMappingAttributeValue)
        SELECT @dictationCaptureProfileId,
               row.value('@attrId','varchar(max)'),
               row.value('@value', 'varchar(max)')
        FROM @simpleDictationCaptureProfileMappings.nodes('/simpleMappingAtribute/attribute') AS d ( row );

    What I want is for the first column to come from (SELECT DictationCaptureProfileID FROM DictationCaptureProfile WHERE SystemDictationCaptureProfileID = @systemDictationCaptureProfileID) instead of the single @dictationCaptureProfileId parameter, while keeping the other two columns from the XML. Please tell me how to achieve this.
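
    A hedged sketch: since the matching profile ids and the XML attributes are independent sets, a CROSS JOIN produces one mapping row per (profile, attribute) pair:

        INSERT INTO SimpleDictationProfileMapping
            (DictationCaptureProfileID,
             DictationProfileMappingAttributeID,
             DictationProfileMappingAttributeValue)
        SELECT dcp.DictationCaptureProfileID,
               row.value('@attrId', 'varchar(max)'),
               row.value('@value',  'varchar(max)')
        FROM @simpleDictationCaptureProfileMappings.nodes('/simpleMappingAtribute/attribute') AS d ( row )
        CROSS JOIN DictationCaptureProfile AS dcp
        WHERE dcp.SystemDictationCaptureProfileID = @systemDictationCaptureProfileID;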

    Read the article

  • MongoDB vs CouchDB (Speed optimization)

    - by Edward83
    Hi! I ran some speed tests to compare MongoDB and CouchDB. Only inserts were exercised while testing, and I got MongoDB 15x faster than CouchDB. I know that it is because of sockets vs HTTP, but it is very interesting to me: how can I optimize inserts in CouchDB? Test platform: Windows XP SP3, 32-bit. I used the latest versions of MongoDB, the MongoDB C# driver, and the latest installation package of CouchDB for Windows. Thanks!
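
    The usual first step for CouchDB insert throughput is the bulk documents API, which amortizes the HTTP overhead over many documents per request. A hedged sketch of the raw HTTP call (database name and documents are placeholders):

        POST /speedtestdb/_bulk_docs HTTP/1.1
        Content-Type: application/json

        {"docs": [{"name": "a"}, {"name": "b"}, {"name": "c"}]}

    Batching a few hundred to a few thousand docs per _bulk_docs call typically narrows the gap considerably, since the per-request socket and HTTP cost stops dominating.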

    Read the article

  • PHP SQL Form Insert

    - by Prateek Sachan
    I've developed a form that inserts many things into the database. But somehow, when the page is filled in, it inserts only the user_password, and even that belongs to the database admin. Here is the code; any help would be great. The form's validation messages are:

        Invalid Name: We want names with more than 3 letters.
        Invalid E-mail: Type a valid e-mail please.
        Passwords are invalid: Passwords doesnt match or are invalid!
        Please enter your contact number.
        Please enter your age.
        Congratulations! All fields are OK ;)

    Read the article

  • Get the identity value from an insert via a .net dataset update call

    - by DeveloperMCT
    Here is some sample code that inserts a record into a db table:

        Dim ds As DataSet = New DataSet()
        da.Fill(ds, "Shippers")
        Dim RowDatos As DataRow
        RowDatos = ds.Tables("Shippers").NewRow
        RowDatos.Item("CompanyName") = "Serpost Peru"
        RowDatos.Item("Phone") = "(511) 555-5555"
        ds.Tables("Shippers").Rows.Add(RowDatos)
        Dim custCB As SqlCommandBuilder = New SqlCommandBuilder(da)
        da.Update(ds, "Shippers")

    It inserts a row in the Shippers table; the ShipperID is an identity value. My question is how I can retrieve the identity value generated when the new row is inserted into the Shippers table. I have done several web searches, and the sources I've seen on the net either don't answer it specifically or go on to talk about stored procedures. Any help would be appreciated. Thanks!
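
    One hedged way to get the identity back through the same adapter: replace the generated InsertCommand with one that also selects SCOPE_IDENTITY(), and let the inserted row refresh itself during Update (the connection variable and parameter sizes are illustrative):

        Dim sql As String = _
            "INSERT INTO Shippers (CompanyName, Phone) VALUES (@CompanyName, @Phone); " & _
            "SELECT CAST(SCOPE_IDENTITY() AS int) AS ShipperID;"
        Dim cmd As New SqlCommand(sql, connection)
        cmd.Parameters.Add("@CompanyName", SqlDbType.NVarChar, 40, "CompanyName")
        cmd.Parameters.Add("@Phone", SqlDbType.NVarChar, 24, "Phone")
        cmd.UpdatedRowSource = UpdateRowSource.FirstReturnedRecord
        da.InsertCommand = cmd
        da.Update(ds, "Shippers")
        ' RowDatos("ShipperID") now holds the generated identity value.

    With UpdateRowSource.FirstReturnedRecord, the first row the command returns is mapped back onto the inserted DataRow, so the identity appears in the DataSet without a second query.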

    Read the article

  • Do MySQL Locked Tables affect related Views?

    - by CogitoErgoSum
    So after reading http://stackoverflow.com/questions/1415602/performance-in-pdo-php-mysql-transaction-versus-direct-execution in regards to performance issues I was thinking about, I did some research on locking tables in MySQL. From http://dev.mysql.com/doc/refman/5.0/en/table-locking.html:

        Table locking enables many sessions to read from a table at the same time, but if a session wants to write to a table, it must first get exclusive access. During the update, all other sessions that want to access this particular table must wait until the update is done.

    This part struck me particularly because most of our queries will be updates rather than inserts. I was wondering: if one created a table called foo, on which all updates/inserts were carried out, and then a view called foo_view (a copy of foo, or perhaps foo and a linkage of several other tables plus foo), on which all selects occurred, would this locking issue still occur? That is, would SELECT queries on foo_view still have to wait for an update to finish on foo?

    Read the article

  • Use a SELECT to Print a Bunch of INSERT INTOs

    - by Mikecancook
    I have a bunch of records I want to move to another database, and I just want to create a bunch of inserts that I can copy and paste. I've seen someone do this before, but I can't figure it out; I'm not getting the escapes right. It's something like this, where Code, Description and Absent are the columns I want from the table:

        SELECT 'INSERT INTO AttendanceCodes (Code, Description, Absent) VALUES (' + Code + ',' + Description + ',' + Absent')'
        FROM AttendanceCodes

    The end result should be a slew of INSERTs with the correct values, like this:

        INSERT INTO AttendanceCodes (Code, Description, Absent) VALUES ('A','Unverified Absence','UA')
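
    A hedged version with the quoting worked out: inside a T-SQL string literal, two single quotes produce one literal quote, so each value gets wrapped and the missing + before the closing parenthesis is restored:

        SELECT 'INSERT INTO AttendanceCodes (Code, Description, Absent) VALUES ('''
               + Code + ''',''' + Description + ''',''' + Absent + ''')'
        FROM AttendanceCodes

    One caveat: if any column is NULL, string concatenation makes the whole generated line NULL, so nullable columns may need ISNULL(col, '') or an explicit NULL literal in the generated text.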

    Read the article
