Search Results

Search found 955 results on 39 pages for 'inserts'.

  • How to validate selects / inserts are hitting the right server with MySQL Master/Slave

    - by bwizzy
    I've got a Rails app using the master_slave_adapter plugin (http://github.com/mauricio/master_slave_adapter/tree/master) to send all selects to a slave and all other statements to the master. Replication is set up using MySQL master/slave. I'm trying to validate that all the SQL statements are indeed going to the right place: selects to the slave (db2), inserts to the master (db1). But I'm not sure how to do it. I've tried using tcpdump on the web servers:

        sudo /usr/sbin/tcpdump -q -i eth0 dst port 3306

    This is the output for a page request with a ton of selects:

        10:32:36.570930 IP web2.mydomain.com.57524 > db1.mydomain.com.mysql: tcp 0
        10:32:36.576805 IP web2.mydomain.com.57524 > db1.mydomain.com.mysql: tcp 0
        10:32:36.577201 IP web2.mydomain.com.57524 > db1.mydomain.com.mysql: tcp 0
        10:32:36.577980 IP web2.mydomain.com.57524 > db1.mydomain.com.mysql: tcp 86
        10:32:36.578186 IP web2.mydomain.com.57524 > db1.mydomain.com.mysql: tcp 21
        10:32:36.578359 IP web2.mydomain.com.57524 > db1.mydomain.com.mysql: tcp 27
        10:32:36.578522 IP web2.mydomain.com.57524 > db1.mydomain.com.mysql: tcp 5
        10:32:36.578741 IP web2.mydomain.com.57524 > db1.mydomain.com.mysql: tcp 13
        10:32:36.579611 IP web2.mydomain.com.57524 > db1.mydomain.com.mysql: tcp 29
        10:32:36.588201 IP web2.mydomain.com.45978 > db2.mydomain.com.mysql: tcp 0
        10:32:36.588323 IP web2.mydomain.com.45978 > db2.mydomain.com.mysql: tcp 0
        10:32:36.588677 IP web2.mydomain.com.45978 > db2.mydomain.com.mysql: tcp 0
        10:32:36.588784 IP web2.mydomain.com.45978 > db2.mydomain.com.mysql: tcp 86

    It doesn't look like all the selects are going to the slave. Maybe this isn't the right way to test. Does anyone know a better way?
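
    One way to cross-check what tcpdump shows is MySQL's general query log, which records every statement a server actually receives, so enabling it briefly on both db1 and db2 shows exactly which host ran each SELECT and INSERT. A minimal sketch, assuming MySQL 5.1+ (where the log can be toggled at runtime; the file path is illustrative):

        -- Run on BOTH db1 and db2 (requires the SUPER privilege, MySQL 5.1+).
        SET GLOBAL general_log_file = '/tmp/mysql-general.log';  -- illustrative path
        SET GLOBAL general_log = 'ON';

        -- ...request the page from the app, then inspect the log on each host:
        -- SELECTs should appear only in db2's file, writes only in db1's...

        SET GLOBAL general_log = 'OFF';  -- it logs every statement, so switch it off afterwards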

  • rake db:migrate fails when trying to do inserts

    - by anthony
    I'm trying to get a database populated so I can begin working on a project. This project is already built, and I'm being brought in to help with front-end work. Problem is, I can't get rake db:migrate to do any inserts. Every time I run rake db:migrate I get this:

        ...
        == 20081220084043 CreateTimeDimension: migrating ==============================
        -- create_table(:time_dimension)
           - 0.0870s
        INSERT time_dimension(time_key, year, month, day, day_of_week, weekend, quarter) VALUES(20080101, 2008, 1, 1, 'Tuesday', false, 1)
        rake aborted!
        Could not load driver (uninitialized constant Mysql::Driver)
        ...

    I'm building on an MBP with Snow Leopard. I've installed Xcode from the disk that comes with the Mac. I've updated Ruby, installed Rails and all the needed gems. I have the 64-bit version of MySQL installed. I've tried the 32-bit version of MySQL, and I've even tried installing from MacPorts (via http://www.robbyonrails.com/articles/2010/02/08/installing-ruby-on-rails-passenger-postgresql-mysql-oh-my-zsh-on-snow-leopard-fourth-edition). The mysql gem is installed using:

        sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/path/to/mysql/bin/mysql_config

    The migration creates the tables just fine, but it dies every. single. time. it tries an insert. Any help would be great.

  • speed up sql INSERTs

    - by sean717
    I have the following method to insert millions of rows of data into a table (I use SQL Server 2008) and it seems slow. Is there any way to speed up INSERTs? Here is the code snippet (I use the MS Enterprise Library):

        public void InsertHistoricData(List<DataRow> dataRowList)
        {
            string sql = string.Format(@"INSERT INTO [MyTable]
                ([Date],[Open],[High],[Low],[Close],[Volumn])
                VALUES (@DateVal, @OpenVal, @High, @Low, @CloseVal, @Volumn)");

            DbCommand dbCommand = VictoriaDB.GetSqlStringCommand(sql);
            DB.AddInParameter(dbCommand, "DateVal", DbType.Date);
            DB.AddInParameter(dbCommand, "OpenVal", DbType.Currency);
            DB.AddInParameter(dbCommand, "High", DbType.Currency);
            DB.AddInParameter(dbCommand, "Low", DbType.Currency);
            DB.AddInParameter(dbCommand, "CloseVal", DbType.Currency);
            DB.AddInParameter(dbCommand, "Volumn", DbType.Int32);

            foreach (NasdaqHistoricDataRow dataRow in dataRowList)
            {
                DB.SetParameterValue(dbCommand, "DateVal", dataRow.Date);
                DB.SetParameterValue(dbCommand, "OpenVal", dataRow.Open);
                DB.SetParameterValue(dbCommand, "High", dataRow.High);
                DB.SetParameterValue(dbCommand, "Low", dataRow.Low);
                DB.SetParameterValue(dbCommand, "CloseVal", dataRow.Close);
                DB.SetParameterValue(dbCommand, "Volumn", dataRow.Volumn);
                DB.ExecuteNonQuery(dbCommand);
            }
        }
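
    For scale, the usual answer to "millions of rows, one INSERT each" on SQL Server is to bulk-load instead of paying a round-trip and a log write per row: BULK INSERT from a staged file, or SqlBulkCopy from .NET. A rough sketch of the T-SQL side, assuming the rows can first be written out to a delimited file (path and options are illustrative):

        -- Stage dataRowList to a flat file, then load it in one operation.
        -- TABLOCK allows a bulk-update lock and minimal logging.
        BULK INSERT [MyTable]
        FROM 'C:\staging\historic_data.csv'   -- illustrative path
        WITH (
            FIELDTERMINATOR = ',',
            ROWTERMINATOR   = '\n',
            TABLOCK,
            BATCHSIZE       = 50000           -- commit in 50k-row chunks
        );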

  • Batch Inserts And Prepared Query Error

    - by ircmaxell
    Ok, so I need to populate an MS Access database table with results from a MySQL query. That's not hard at all. I've got the program written to where it copies a template .mdb file to a temp name and opens it via ODBC. No problem so far. I've noticed that Access does not support batch inserting (VALUES (foo, bar), (second, query), (third, query)). So that means I need to execute one query per row (there are potentially hundreds of thousands of rows). Initial performance tests show a rate of around 900 inserts/sec into Access. With our largest data sets, that could mean execution times of minutes (which isn't the end of the world, but obviously the faster the better). So I tried testing a prepared statement, but I keep getting an error:

        Warning: odbc_execute() [function.odbc-execute]: SQL error: [Microsoft][ODBC Microsoft Access Driver]
        COUNT field incorrect, SQL state 07001 in SQLExecute in D:\....php on line 30

    Here's the code I'm using (line 30 is the odbc_execute call):

        $sql = 'INSERT INTO table ([field0], [field1], [field2], [field3], [field4], [field5])
                VALUES (?, ?, ?, ?, ?, ?)';
        $stmt = odbc_prepare($conn, $sql);
        for ($i = 200001; $i < 300001; $i++) {
            $a = array($i, "Field1 $", "Field2 $i", "Field3 $i", "Field4 $i", $i);
            odbc_execute($stmt, $a);
        }

    So my question is twofold. First, any idea why I'm getting that error (I've checked, and the number of elements in the array matches the field list, which matches the number of ? parameter markers)? And second, should I even bother with this or just use the straight INSERT statements? Like I said, time isn't critical, but if it's possible I'd like to get that time as low as possible (then again, I may be limited by disk throughput, since 900 operations/sec is high already)... Thanks

  • Mysql dropping inserts with triggers

    - by user2891127
    Using MySQL 5.5. I have two tables: one holds a whitelist of hashes. When I insert a new row into the other table, I want to first compare the hash in the insert statement to the whitelist, and if it's in the whitelist, skip the insert (less data to plow through later). The inserts are generated by another program and arrive as text files of SQL statements. I've been playing with triggers and almost have it working:

        BEGIN
            IF (SELECT COUNT(md5hash) FROM whitelist WHERE md5hash = new.md5hash) > 0 THEN
                SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'Already Whitelisted';
            END IF;
        END

    But there's a problem: the SIGNAL throwing the error stops the import. I want to skip that line, not stop the whole import, and some searching didn't find any way to silently skip the insert. My next idea was to create a duplicate table definition and redirect the insert to that duplicate table, but OLD and NEW don't seem to apply to table names. Other than adding an ignore column to my table and doing a mass delete based on that column after the import, is there any way to achieve my goal?
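
    Since the import is just a text file of SQL statements, one alternative sketch is to rewrite each statement as INSERT ... SELECT guarded by the whitelist check; MySQL then inserts zero rows for whitelisted hashes, silently, with no trigger or error involved (the target table and columns beyond md5hash are illustrative):

        INSERT INTO samples (md5hash, payload)   -- illustrative target table
        SELECT '9e107d9d372bb6826bd81d3542a419d6', 'some value'
        FROM DUAL
        WHERE NOT EXISTS (
            SELECT 1 FROM whitelist w
            WHERE w.md5hash = '9e107d9d372bb6826bd81d3542a419d6'
        );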

  • Cassandra inserts using Net::Cassandra::Easy in Perl

    - by knorv
    When using the Perl module Net::Cassandra::Easy to interface with Cassandra, I use the following code to read columns col1/col2/col3 from rows row1/row2/row3 in the column family Standard1:

        my $cassandra = Net::Cassandra::Easy->new(keyspace => 'Keyspace1', server => 'localhost');
        $cassandra->connect();

        my $result = $cassandra->get(['row1', 'row2', 'row3'],
                                     family => 'Standard1',
                                     byname => ['col1', 'col2', 'col3']);

    This works as expected. However, when trying to insert row row1 with...

        $result = $cassandra->mutate(['row1'],
                                     family     => 'Standard1',
                                     insertions => { "col1" => "Value to set." });

    ...I get the error message:

        Can't use string ("0") as a SCALAR ref while "strict refs" in use
        at .../Net/GenThrift/Thrift/BinaryProtocol.pm line 376.

    What am I doing wrong?

  • SubmitChanges doesn't save but removes inserts from change set, no errors

    - by winston schröder
    Hi everybody, I have a deeper question regarding the debug functionality of the LINQ to SQL SubmitChanges() function. I want to save a record in a table of a locally cached DB (LocalDBCache: server SQL Express 2008, client SQL CE). Before calling SubmitChanges I can find the new item via DataContext.GetChangeSet(). After calling SubmitChanges, the items to insert have been removed from the change set (that's what this function is supposed to do). There are no change conflicts and no errors in the DB's log output. No exception at all. The table's count stays at the same value.

        if ((e.Parameter == null) || (!e.Parameter.GetType().Equals(typeof(LibDB.Client.Vehicles))))
            return;
        LibDB.Client.Vehicles tmp = e.Parameter as LibDB.Client.Vehicles;
        try
        {
            ChangeSet cs = this._dc.GetChangeSet();
            if ((tmp == null) || (this._dc == null))
                return;
            if (this._dc.Vehicles.Where(veh => veh.Vin == tmp.Vin).Count() == 0)
                this._dc.Vehicles.InsertOnSubmit(tmp);
            else if (this._dc.Vehicles.Where(veh => veh.Vin == tmp.Vin).Count() == 1)
                this._dc.Vehicles.Attach(tmp, true);
            else
                return;

            using (TransactionScope ts = new TransactionScope())
            {
                try
                {
                    this._dc.SubmitChanges();
                    //this._dc.Refresh(RefreshMode.OverwriteCurrentValues, this._dc.Vehicles);
                }
                catch (Exception ex)
                {
                    Console.WriteLine(ex.Message);
                }
                // note: ts.Complete() is never called in this block, so the
                // TransactionScope rolls everything back when it is disposed
            }

            if (this._dc.Vehicles.Where(veh => veh.Vin == tmp.Vin).Count() == 1)
                MessageBox.Show("Vehicle not saved.");
            this.vehSelector.ResetLayout();
        }

    I would appreciate any help, since I'm losing hope of finding the error. Thanks in advance, Winston

  • Faster bulk inserts in sqlite3?

    - by scubabbl
    I have a file of about 30000 lines of data that I want to load into a sqlite3 database. Is there a faster way than generating insert statements for each line of data? The data is space-delimited and maps directly onto a sqlite3 table. Is there any sort of bulk insert method for adding volume data to a database? Has anyone devised some deviously wonderful way of doing this if it's not built in? I should preface this by asking: is there a C++ way to do it from the API? Thanks.
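
    For what it's worth, the biggest win with SQLite is usually not a special bulk API but simply wrapping all the inserts in one transaction; otherwise every INSERT commits (and syncs to disk) on its own. A sketch of the idea in plain SQL, where the same pattern applies through the C/C++ API with one prepared statement stepped per line (table name and the PRAGMA choice are illustrative):

        PRAGMA synchronous = OFF;   -- optional: trades crash-durability for load speed
        BEGIN TRANSACTION;
        INSERT INTO data_table VALUES ('a', 1, 2.5);
        INSERT INTO data_table VALUES ('b', 2, 3.5);
        -- ...one INSERT per input line, all inside the same transaction...
        COMMIT;

    The sqlite3 command-line shell can also load a delimited file directly with .separator and .import, which fits space-delimited data like this.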

  • Writing to a file in Python inserts null bytes

    - by Javier Badia
    I'm writing a todo list program. It keeps a file with one thing to do per line, and lets the user add or delete items. The problem is that for some reason I end up with a lot of zero bytes at the start of the file, even though the item is correctly deleted. I'll show you a couple of screenshots to make sure I'm making myself clear: the file in Notepad++ before running the program (screenshot omitted), and the file after deleting item 3, counting from 1 (screenshot omitted). This is the relevant code; the actual program is bigger, but running just this part triggers the error:

        import os

        TODO_FILE = r"E:\javi\code\Python\todo-list\src\todo.txt"

        def del_elems(f, delete):
            """Takes an open file and either a number or a list of numbers,
            and deletes the lines corresponding to those numbers (counting from 1)."""
            if isinstance(delete, int):
                delete = [delete]
            lines = f.readlines()
            f.truncate(0)
            counter = 1
            for line in lines:
                if counter not in delete:
                    f.write(line)
                counter += 1

        f = open(TODO_FILE, "r+")
        del_elems(f, 3)
        f.close()

    Could you please point out where the mistake is?

  • Bulk inserts into sqlite db on the iphone...

    - by akaii
    I'm inserting a batch of 100 records, each containing a dictionary with arbitrarily long HTML strings, and by god, it's slow. On the iPhone, the run loop is blocking for several seconds during this transaction. Is my only recourse to use another thread? I'm already using several for acquiring data from HTTP servers, and the SQLite documentation explicitly discourages threading with the database, even though it's supposed to be thread-safe... Is there something I'm doing extremely wrong that, if fixed, would drastically reduce the time it takes to complete the whole operation?

        NSString *statement;

        statement = @"BEGIN EXCLUSIVE TRANSACTION";
        sqlite3_stmt *beginStatement;
        if (sqlite3_prepare_v2(database, [statement UTF8String], -1, &beginStatement, NULL) != SQLITE_OK) {
            printf("db error: %s\n", sqlite3_errmsg(database));
            return;
        }
        if (sqlite3_step(beginStatement) != SQLITE_DONE) {
            sqlite3_finalize(beginStatement);
            printf("db error: %s\n", sqlite3_errmsg(database));
            return;
        }

        NSTimeInterval timestampB = [[NSDate date] timeIntervalSince1970];

        statement = @"INSERT OR REPLACE INTO item (hash, tag, owner, timestamp, dictionary) VALUES (?, ?, ?, ?, ?)";
        sqlite3_stmt *compiledStatement;
        if (sqlite3_prepare_v2(database, [statement UTF8String], -1, &compiledStatement, NULL) == SQLITE_OK) {
            for (int i = 0; i < [items count]; i++) {
                NSMutableDictionary *item = [items objectAtIndex:i];
                NSString *tag = [item objectForKey:@"id"];
                NSInteger hash = [[NSString stringWithFormat:@"%@%@", tag, ownerID] hash];
                NSInteger timestamp = [[item objectForKey:@"updated"] intValue];
                NSData *dictionary = [NSKeyedArchiver archivedDataWithRootObject:item];

                sqlite3_bind_int(compiledStatement, 1, hash);
                sqlite3_bind_text(compiledStatement, 2, [tag UTF8String], -1, SQLITE_TRANSIENT);
                sqlite3_bind_text(compiledStatement, 3, [ownerID UTF8String], -1, SQLITE_TRANSIENT);
                sqlite3_bind_int(compiledStatement, 4, timestamp);
                sqlite3_bind_blob(compiledStatement, 5, [dictionary bytes], [dictionary length], SQLITE_TRANSIENT);

                while (YES) {
                    NSInteger result = sqlite3_step(compiledStatement);
                    if (result == SQLITE_DONE) {
                        break;
                    } else if (result != SQLITE_BUSY) {
                        printf("db error: %s\n", sqlite3_errmsg(database));
                        break;
                    }
                }
                sqlite3_reset(compiledStatement);
            }
        }

        timestampB = [[NSDate date] timeIntervalSince1970] - timestampB;
        NSLog(@"Insert Time Taken: %f", timestampB);

        // COMMIT
        statement = @"COMMIT TRANSACTION";
        sqlite3_stmt *commitStatement;
        if (sqlite3_prepare_v2(database, [statement UTF8String], -1, &commitStatement, NULL) != SQLITE_OK) {
            printf("db error: %s\n", sqlite3_errmsg(database));
        }
        if (sqlite3_step(commitStatement) != SQLITE_DONE) {
            printf("db error: %s\n", sqlite3_errmsg(database));
        }

        sqlite3_finalize(beginStatement);
        sqlite3_finalize(compiledStatement);
        sqlite3_finalize(commitStatement);

  • complicated inserts

    - by liysd
    I have to do something like this:

        insert into object (name, value, first_node) values ('some_name', 123, 0)
        @id = mysql_last_insert_id()
        insert nodes (name, object_id) values ('node_name', @id)
        @id2 = mysql_last_insert_id()
        update object set first_node=@id2 where id=@id

    Is it possible to make it simpler? What if I want to insert more pairs (object, node) with reasonable efficiency?
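
    For reference, the pseudocode above maps almost one-to-one onto real MySQL using LAST_INSERT_ID() and session variables, so the ids never have to leave the server (table and column names as in the question):

        INSERT INTO object (name, value, first_node) VALUES ('some_name', 123, 0);
        SET @id = LAST_INSERT_ID();

        INSERT INTO nodes (name, object_id) VALUES ('node_name', @id);
        SET @id2 = LAST_INSERT_ID();

        UPDATE object SET first_node = @id2 WHERE id = @id;

    LAST_INSERT_ID() is per-connection, so this stays correct under concurrent inserts; for many (object, node) pairs, the same five statements can be wrapped in a stored procedure or repeated inside one transaction.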

  • C# Multi CheckboxList update inserts checked records but doesn't delete unchecked records

    - by DLL
    I have a multi-checkboxlist on a FormView. Both use queries in a table adapter, and I'm using VS 2012. When the user updates the form, I use the code below to update the checkbox data. If a user checks a new box, a new record is inserted correctly; however, if the user unchecks a box, the existing record is not deleted. The delete query works fine if I run it from the query builder in the table adapter, the expected line in the code is reached correctly, all values are correct, and I receive no errors. I use a similar query to delete records when the form-level data is deleted, and that works fine. The very last line is the one that doesn't work.

    Query:

        DELETE FROM [SLA_Categories]
        WHERE (([SLA_ID] = @SLA_ID) AND ([Choice_ID] = @Choice_ID))

    Code:

        protected void FormView1_ItemUpdating(object sender, FormViewUpdateEventArgs e)
        {
            if (FormView1.DataKey.Value != null)
            {
                Categs = (CheckBoxList)FormView1.FindControl("CheckBoxList1");
                CurrentSLA_ID = (int)FormView1.DataKey.Value;
            }
            if (CurrentSLA_ID > 0)
            {
                foreach (ListItem li in Categs.Items)
                {
                    // See if there's a record for the current sla in this category
                    int CurrentChoice_ID = Convert.ToInt32(li.Value);
                    SLADataSetTableAdapters.SLA_CategoriesTableAdapter myAdapter;
                    myAdapter = new SLADataSetTableAdapters.SLA_CategoriesTableAdapter();
                    int myCount = (int)myAdapter.FindCategoryBySLA_IDAndChoice_ID(CurrentSLA_ID, CurrentChoice_ID);

                    // If this category is checked and there is not an existing rec, insert one
                    if (li.Selected == true && myCount < 1)
                    {
                        // Insert a rec for this sla
                        myAdapter.InsertCategory(CurrentChoice_ID, CurrentSLA_ID);
                    }

                    // If this category is unchecked and there is an existing rec, delete it
                    if (li.Selected == false && myCount > 0)
                    {
                        // Delete this rec
                        myAdapter.DeleteCategoryBySLA_IDAndChoice_ID(CurrentChoice_ID, CurrentSLA_ID);
                    }
                }
            }
        }

  • Nhibernate criteria query inserts an extra order by expression when using JoinType.LeftOuterJoin and Projections

    - by Aaron Palmer
    Why would this NHibernate criteria query...

        return Session.CreateCriteria(typeof(FundingCategory), "fc")
            .CreateCriteria("FundingPrograms", "fp")
            .CreateCriteria("Projects", "p", JoinType.LeftOuterJoin)
            .Add(Restrictions.Disjunction()
                .Add(Restrictions.Eq("fp.Recipient.Id", recipientId))
                .Add(Restrictions.Eq("p.Recipient.Id", recipientId))
            )
            .SetProjection(Projections.ProjectionList()
                .Add(Projections.GroupProperty("fc.Name"), "fcn")
                .Add(Projections.Sum("fp.ObligatedAmount"), "fpo")
                .Add(Projections.Sum("p.ObligatedAmount"), "po")
            )
            .AddOrder(Order.Desc("fpo"))
            .AddOrder(Order.Desc("po"))
            .AddOrder(Order.Asc("fcn"))
            .List<object[]>();

    ...produce the SQL below?

        SELECT this_.Name as y0_,
               sum(fp1_.ObligatedAmount) as y1_,
               sum(p2_.ObligatedAmount) as y2_
        FROM fundingCategories this_
             inner join fundingPrograms fp1_ on this_.fundingCategoryId = fp1_.fundingCategoryId
             left outer join projects p2_ on fp1_.fundingProgramId = p2_.fundingProgramId
        WHERE (fp1_.recipientId = 6 /* @p0 */ or p2_.recipientId = 6 /* @p1 */)
        GROUP BY this_.Name
        ORDER BY p2_.name asc, y1_ desc, y2_ desc, y0_ asc

    It is incorrectly putting p2_.name asc into the ORDER BY clause, which causes the query to crash. This only happens when I use JoinType.LeftOuterJoin on my Projects criteria. Is this a known NHibernate bug? I'm using NHibernate 2.0.1.4000. Thanks for any insight.

  • visual assist inserts extra spaces?

    - by Kugel
    I'm using the Visual Assist X trial on VS2010 Pro. When I do Extract Method or Modify Method Signature refactorings, it gives me this:

        void Solver::Work( Stack &s, Board &b )

    However, I would really appreciate it if it gave me this:

        void Solver::Work(Stack &s, Board &b)

    No extra spaces. Is there a way to set this?

  • Identifying which value of a multi-row insert fails a foreign key constraint

    - by Jonathan
    I have a multi-row insert that looks something like:

        insert into table VALUES (1, 2, 3), (4, 5, 6), (7, 8, 9);

    Assume the first attribute (1, 4, 7) is a foreign key to another table, and assume that the referenced table does not have the value '4'. A MySqlException is thrown with error code 1452:

        EXCEPTION: Cannot add or update a child row: a foreign key constraint fails
        (dbName/tableName, CONSTRAINT id FOREIGN KEY (customer_id) REFERENCES referencedTable (customer_id))

    Is there a way to identify which value caused the error? I would love to give my user an error message that says something like: "Error: '4' does not exist in the referenced table." I am using the .NET MySQL connector to execute the insert. Thanks - Jonathan
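
    MySQL's error 1452 doesn't report the offending value, so one workaround (a sketch using the question's column names) is to ask the server which candidate keys are missing, either before the insert or after catching the exception:

        -- Returns exactly the keys that would violate the constraint, e.g. '4'.
        SELECT c.customer_id
        FROM (SELECT 1 AS customer_id
              UNION ALL SELECT 4
              UNION ALL SELECT 7) AS c
        LEFT JOIN referencedTable r ON r.customer_id = c.customer_id
        WHERE r.customer_id IS NULL;

    From the .NET connector, the same query can be built with the batch's key values and its result rows formatted into the user-facing message.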

  • Fastest method for SQL Server inserts, updates, selects

    - by Ian
    I use SPs, and this isn't an SP vs. code-behind "build your SQL command" question. I'm looking for a high-throughput method for a backend app that handles many small transactions. I use SqlDataReader for most of the returns, since forward-only works in most cases for me. I've seen it done many ways, and used most of them myself:

      - Methods that define and accept the stored procedure parameters as parameters themselves and build using cmd.Parameters.Add (with or without specifying the DB value type and/or length)
      - Assembling your SP params and their values into an array or hashtable, then passing them to a more abstract method that parses the collection and then runs cmd.Parameters.Add
      - Classes that represent tables, initializing the class upon need, setting the public properties that represent the table fields, and calling methods like Save, Load, etc.

    I'm sure there are others I've seen but can't think of at the moment. I'm open to all suggestions.

  • Fastest method for SQL Server inserts, updates, selects from C# ASP.Net 2.0+

    - by Ian
    Hi all, long-time listener, first-time caller. I use SPs, and this isn't an SP vs. code-behind "build your SQL command" question. I'm looking for a high-throughput method for a backend app that handles many small transactions. I use SqlDataReader for most of the returns, since forward-only works in most cases for me. I've seen it done many ways, and used most of them myself:

      - Methods that define and accept the stored procedure parameters as parameters themselves and build using cmd.Parameters.Add (with or without specifying the DB value type and/or length)
      - Assembling your SP params and their values into an array or hashtable, then passing them to a more abstract method that parses the collection and then runs cmd.Parameters.Add
      - Classes that represent tables, initializing the class upon need, setting the public properties that represent the table fields, and calling methods like Save, Load, etc.

    I'm sure there are others I've seen but can't think of at the moment. I'm open to all suggestions.

  • Perl - CodeGolf - Nested loops & SQL inserts

    - by CheeseConQueso
    I had to make a really small and simple script that would fill a table with string values according to these criteria:

      - 2 characters long
      - 1st character is always numeric (0-9)
      - 2nd character is (0-9) but also includes "X"
      - values need to be inserted into a table on a database

    The program would execute:

        insert into table (code) values ('01');
        insert into table (code) values ('02');
        insert into table (code) values ('03');
        insert into table (code) values ('04');
        insert into table (code) values ('05');
        insert into table (code) values ('06');
        insert into table (code) values ('07');
        insert into table (code) values ('08');
        insert into table (code) values ('09');
        insert into table (code) values ('0X');

    And so on, until the total of 110 values were inserted. My code (written just to get it done, not to be minimal or efficient) was:

        use strict;
        use DBI;

        my ($db1, $sql, $sth, %dbattr);
        %dbattr = (ChopBlanks => 1, RaiseError => 0);
        $db1 = DBI->connect('DBI:mysql:', '', '', \%dbattr);

        my @code;
        for (0..9) {
            $code[0] = $_;
            for (0..9) {
                $code[1] = $_;
                insert(@code);
            }
            insert($code[0], "X");
        }

        sub insert {
            my $skip = 0;
            foreach (@_) {
                if ($skip == 0) {
                    $sql = "insert into table (code) values ('" . $_[0] . $_[1] . "');";
                    $sth = $db1->prepare($sql);
                    $sth->execute();
                    $skip++;
                } else {
                    $skip--;
                }
            }
        }
        exit;

    I'm just interested to see a really succinct and precise version of this logic.
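
    For comparison, the generation can also be pushed entirely into one set-based SQL statement: a sketch that cross-joins two derived digit tables to produce all 110 codes, keeping the question's table and column names (the placeholder name needs backticks because TABLE is a reserved word in MySQL):

        INSERT INTO `table` (code)
        SELECT CONCAT(d1.ch, d2.ch)
        FROM (SELECT '0' AS ch UNION ALL SELECT '1' UNION ALL SELECT '2'
              UNION ALL SELECT '3' UNION ALL SELECT '4' UNION ALL SELECT '5'
              UNION ALL SELECT '6' UNION ALL SELECT '7' UNION ALL SELECT '8'
              UNION ALL SELECT '9') AS d1
        CROSS JOIN
             (SELECT '0' AS ch UNION ALL SELECT '1' UNION ALL SELECT '2'
              UNION ALL SELECT '3' UNION ALL SELECT '4' UNION ALL SELECT '5'
              UNION ALL SELECT '6' UNION ALL SELECT '7' UNION ALL SELECT '8'
              UNION ALL SELECT '9' UNION ALL SELECT 'X') AS d2;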

  • Why does MySQL autoincrement increase on failed inserts?

    - by Sorcy
    A co-worker just made me aware of a very strange MySQL behavior. Assume you have a table with an auto_increment field and another field that is set to unique (e.g. a username field). When trying to insert a row with a username that's already in the table, the insert fails, as expected. Yet the auto_increment value is increased, as can be seen when you insert a valid new entry after several failed attempts. For example, when our last entry looks like this...

        ID: 10   Username: myname

    ...and we try five new entries with the same username value, on our next valid insert we will have created a new row like so:

        ID: 16   Username: mynewname

    While this is not a big problem in itself, it seems like a very silly attack vector to kill a table by flooding it with failed insert requests, as the MySQL Reference Manual states: "The behavior of the auto-increment mechanism is not defined if [...] the value becomes bigger than the maximum integer that can be stored in the specified integer type." Is this expected behavior?
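
    A minimal reproduction of the gap, in case anyone wants to watch it happen (InnoDB; names are illustrative):

        CREATE TABLE users (
            id       INT AUTO_INCREMENT PRIMARY KEY,
            username VARCHAR(32) NOT NULL UNIQUE
        ) ENGINE=InnoDB;

        INSERT INTO users (username) VALUES ('myname');      -- succeeds: id = 1
        INSERT INTO users (username) VALUES ('myname');      -- fails: duplicate key...
        INSERT INTO users (username) VALUES ('myname');      -- ...but each failed attempt
        INSERT INTO users (username) VALUES ('myname');      -- still reserves an id
        INSERT INTO users (username) VALUES ('mynewname');   -- succeeds: id = 5, not 2

    InnoDB reserves the auto-increment value before the duplicate-key check runs and never hands reserved values back, which is why failed inserts leave gaps.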

  • Using entity framework to detect changes in related table and action appropriate inserts and deletes

    - by Kohan
    Let's say I have a Person table and a Role table, with a join table PersonRoles linking them as many-to-many. I create a new person and assign them to 2 roles (role 1, role 3). I then want to edit this person, so I retrieve their data and bind their roles to checkboxes. I change the values (deselect role 1 and select role 2 instead), then post this data back through a viewmodel. Can I then get Entity Framework to update these roles for me, i.e. delete the PersonRoles entry for role 1 and add a new entry for role 2? Or do I have to write the logic for this myself? Cheers, Kohan

  • A question about indexes regarding the gain of inserts & updates in a database

    - by Mestika
    Hi, I have a question about the fine line between the gain from an index on a table that grows steadily in size every month and the cost that index adds to inserts and updates. The situation is that I have two tables, Table1 and Table2. Each grows slowly but regularly each month (about 100 new rows for Table1 and a couple of rows for Table2). My concrete question is whether to keep an index or to drop it. I've measured that a covering index on Table2 improves some of my SELECT queries quite a lot, but I still have to weigh the pros and cons, and I'm having a really hard time deciding. For Table1 an index might not be necessary, because the SELECT queries there are not that common. I would appreciate any suggestions, tips or just good advice on what a good solution is. By the way, I'm using IBM DB2 version 9.7 as my database system. Sincerely, Mestika

  • Doctrine inserts when it should update

    - by Goran Juric
    I am trying to do the most simple update query, but Doctrine issues an INSERT statement instead of an UPDATE:

        $q = Doctrine_Query::create()
            ->from('Image i')
            ->where('id = ?');

        $image = $q->fetchOne($articleId, Doctrine_Core::HYDRATE_RECORD);
        $image->copyright = "some text";
        $image->save();

    I have also tried using the example from the manual, but still a new record gets inserted:

        $userTable = Doctrine_Core::getTable('User');
        $user = $userTable->find(2);
        if ($user !== false) {
            $user->username = 'Jack Daniels';
            $user->save();
        }

    Edit: This example from the manual works:

        $user = new User();
        $user->assignIdentifier(1);
        $user->username = 'jwage';
        $user->save();

    The funny thing is that I use this on another model and there it works fine. Maybe I have to fetch the whole array graph for this to work (I have another model in a one-to-many relationship)?

  • SQL Server 2008 trigger not working correctly with multiple inserts

    - by Rob
    I've got the following trigger:

        CREATE TRIGGER trFLightAndDestination ON checkin_flight
        AFTER INSERT, UPDATE
        AS
        BEGIN
            IF NOT EXISTS
            (
                SELECT 1
                FROM Flight v
                INNER JOIN Inserted AS i ON i.flightnumber = v.flightnumber
                INNER JOIN checkin_destination AS ib ON ib.airport = v.airport
                INNER JOIN checkin_company AS im ON im.company = v.company
                WHERE i.desk = ib.desk
                  AND i.desk = im.desk
            )
            BEGIN
                RAISERROR('This combination of flight and check-in desk is not possible', 16, 1)
                ROLLBACK TRAN
            END
        END

    What I want the trigger to do is to check the tables Flight, checkin_destination and checkin_company whenever a new record is added to checkin_flight. Every record of checkin_flight contains a flight number and the desk number where passengers need to check in for this destination. The tables checkin_destination and checkin_company contain information about which companies and destinations are restricted to which check-in desks. When adding a record to checkin_flight, I need information from the Flight table to get the destination and flight company for the inserted flight number, and that information has to be checked against the allowed combinations of flights, destinations and companies. I'm using the trigger as stated above, but when I try to insert a wrong combination the trigger allows it. What am I missing here?
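
    One thing worth sketching here: with a multi-row INSERT, the Inserted pseudo-table holds several rows, and a single NOT EXISTS over one big join raises the error only when not a single inserted row has a valid combination. Testing per row for the existence of any invalid combination is the usual rewrite (an untested sketch reusing the question's tables):

        -- Reject the statement if ANY inserted row lacks a valid desk assignment.
        IF EXISTS
        (
            SELECT 1
            FROM Inserted AS i
            INNER JOIN Flight AS v ON v.flightnumber = i.flightnumber
            WHERE NOT EXISTS (SELECT 1 FROM checkin_destination AS ib
                              WHERE ib.airport = v.airport AND ib.desk = i.desk)
               OR NOT EXISTS (SELECT 1 FROM checkin_company AS im
                              WHERE im.company = v.company AND im.desk = i.desk)
        )
        BEGIN
            RAISERROR('This combination of flight and check-in desk is not possible', 16, 1)
            ROLLBACK TRAN
        END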

  • Storing a jpa entity where only the timestamp changes results in updates rather than inserts (desire

    - by David Schlenk
    I have a JPA entity that stores a FK id, a boolean and a timestamp:

        @Entity
        public class ChannelInUse implements Serializable {

            @Id
            @GeneratedValue
            private Long id;

            @ManyToOne
            @JoinColumn(nullable = false)
            private Channel channel;

            private boolean inUse = false;

            @Temporal(TemporalType.TIMESTAMP)
            private Date inUseAt = new Date();

            ...
        }

    I want every new instance of this entity to result in a new row in the table. For whatever reason, no matter what I do, it always results in the existing row getting updated with a new timestamp value rather than a new row being created. I even tried to just use a native query to run an insert, but the channel's ID wasn't populated yet, so I gave up on that. I've tried using an embedded id class consisting of channel.getId and inUseAt. My equals and hashCode are:

        public boolean equals(Object obj) {
            if (this == obj) return true;
            if (!(obj instanceof ChannelInUse)) return false;
            ChannelInUse ciu = (ChannelInUse) obj;
            return ((this.inUseAt == null ? ciu.inUseAt == null : this.inUseAt.equals(ciu.inUseAt))
                    && (this.inUse == ciu.inUse)
                    && (this.channel == null ? ciu.channel == null : this.channel.equals(ciu.channel)));
        }

        /**
         * hashCode generated from the at, channel and inUse properties.
         */
        public int hashCode() {
            int hash = 1;
            hash = hash * 31 + (this.inUseAt == null ? 0 : this.inUseAt.hashCode());
            hash = hash * 31 + (this.channel == null ? 0 : this.channel.hashCode());
            if (inUse)
                hash = hash * 31 + 1;
            else
                hash = hash * 31 + 0;
            return hash;
        }

    I've tried using Hibernate's Entity annotation with mutable=false. I'm probably just not understanding what makes an entity unique or something. I've hit Google pretty hard but can't figure this one out.
