Search Results

Search found 36 results on 2 pages for 'upsert'.


  • Dynamic upsert in postgresql

    - by Daniel
    I have this upsert function that allows me to modify the fill_rate column of a row. CREATE FUNCTION upsert_fillrate_alarming(integer, boolean) RETURNS VOID AS ' DECLARE num ALIAS FOR $1; dat ALIAS FOR $2; BEGIN LOOP -- First try to update. UPDATE alarming SET fill_rate = dat WHERE equipid = num; IF FOUND THEN RETURN; END IF; -- Since it's not there, we try to insert the key. -- Notice that if we had a concurrent key insertion we would error. BEGIN INSERT INTO alarming (equipid, fill_rate) VALUES (num, dat); RETURN; EXCEPTION WHEN unique_violation THEN -- Loop and try the update again. END; END LOOP; END; ' LANGUAGE 'plpgsql'; Is it possible to modify this function to take a column argument as well? Extra bonus points if there is a way to modify the function to take a column and a table.
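    A possible generalization (not the poster's code; the function name and signature are illustrative) builds the statements dynamically with EXECUTE, assuming PostgreSQL 9.1 or later for format(). Note that FOUND is not set by EXECUTE, so GET DIAGNOSTICS is used instead:

    CREATE FUNCTION upsert_generic(tbl text, col text, num integer, dat boolean) RETURNS VOID AS $$
    DECLARE
        updated integer;
    BEGIN
        LOOP
            -- %I safely quotes identifiers; the values are passed via USING.
            EXECUTE format('UPDATE %I SET %I = $2 WHERE equipid = $1', tbl, col) USING num, dat;
            GET DIAGNOSTICS updated = ROW_COUNT;  -- FOUND is not modified by EXECUTE
            IF updated > 0 THEN
                RETURN;
            END IF;
            BEGIN
                EXECUTE format('INSERT INTO %I (equipid, %I) VALUES ($1, $2)', tbl, col) USING num, dat;
                RETURN;
            EXCEPTION WHEN unique_violation THEN
                NULL;  -- a concurrent insert won; loop and try the update again
            END;
        END LOOP;
    END;
    $$ LANGUAGE plpgsql;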

    Read the article

  • Atomic UPSERT in SQL Server 2005

    - by rabidpebble
    What is the correct pattern for doing an atomic "UPSERT" (UPDATE where exists, INSERT otherwise) in SQL Server 2005? I see a lot of code on SO (e.g. see http://stackoverflow.com/questions/639854/tsql-check-if-a-row-exists-otherwise-insert) with the following two-part pattern: UPDATE ... FROM ... WHERE <condition> -- race condition risk here IF @@ROWCOUNT = 0 INSERT ... or IF (SELECT COUNT(*) FROM ... WHERE <condition>) = 0 -- race condition risk here INSERT ... ELSE UPDATE ... where <condition> is an evaluation of natural keys. None of the above approaches seem to deal well with concurrency. If I cannot have two rows with the same natural key, it seems like all of the above risk inserting rows with the same natural keys in race condition scenarios. I have been using the following approach, but I'm surprised not to see it anywhere in people's responses, so I'm wondering what is wrong with it: INSERT INTO <table> SELECT <natural keys>, <other stuff...> FROM <table> WHERE NOT EXISTS -- race condition risk here? ( SELECT 1 FROM <table> WHERE <natural keys> ) UPDATE ... WHERE <natural keys> (Note: I'm assuming that rows will not be deleted from this table. Although it would be nice to discuss how to handle the case where they can be deleted -- are transactions the only option? Which level of isolation?) Is this atomic? I can't locate where this would be documented in the SQL Server documentation.
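    For reference, one widely cited pattern (not from the question) wraps the write in a transaction and takes key-range locks up front with hints, so a concurrent caller blocks until the first one commits. A sketch with placeholder table and column names:

    BEGIN TRAN;

    UPDATE dbo.MyTable WITH (UPDLOCK, HOLDLOCK)
    SET OtherStuff = @OtherStuff
    WHERE NaturalKey = @NaturalKey;

    IF @@ROWCOUNT = 0
        INSERT INTO dbo.MyTable (NaturalKey, OtherStuff)
        VALUES (@NaturalKey, @OtherStuff);

    COMMIT TRAN;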

    Read the article

  • SQLite UPSERT - ON DUPLICATE KEY UPDATE

    - by Alix Axel
    MySQL has something like this: INSERT INTO visits (ip, hits) VALUES ('127.0.0.1', 1) ON DUPLICATE KEY UPDATE hits = hits + 1; As far as I know, this feature doesn't exist in SQLite. What I want to know is if there is any way to achieve the same effect without having to execute two queries. Also, if this is not possible, what do you prefer: SELECT + (INSERT or UPDATE), or UPDATE (+ INSERT if the UPDATE fails)?
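    For what it's worth, a common workaround (not part of the question) is an INSERT OR IGNORE followed by an UPDATE, and SQLite 3.24+ later added a native upsert. Both sketches assume a unique index on ip:

    -- Portable to older SQLite (two statements):
    INSERT OR IGNORE INTO visits (ip, hits) VALUES ('127.0.0.1', 0);
    UPDATE visits SET hits = hits + 1 WHERE ip = '127.0.0.1';

    -- SQLite 3.24 or later (single statement):
    INSERT INTO visits (ip, hits) VALUES ('127.0.0.1', 1)
    ON CONFLICT(ip) DO UPDATE SET hits = hits + 1;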

    Read the article

  • syntax for single row MERGE / upsert in SQL Server

    - by Jacob
    I'm trying to do a single row insert/update on a table but all the examples out there are for sets. Can anyone fix my syntax please: MERGE member_topic ON mt_member = 0 AND mt_topic = 110 WHEN MATCHED THEN UPDATE SET mt_notes = 'test' WHEN NOT MATCHED THEN INSERT (mt_member, mt_topic, mt_notes) VALUES (0, 110, 'test') Resolution per marc_s is to convert the single row to a subquery - which makes me think the MERGE command is not really intended for single row upserts. MERGE member_topic USING (SELECT 0 mt_member, 110 mt_topic) as source ON member_topic.mt_member = source.mt_member AND member_topic.mt_topic = source.mt_topic WHEN MATCHED THEN UPDATE SET mt_notes = 'test' WHEN NOT MATCHED THEN INSERT (mt_member, mt_topic, mt_notes) VALUES (0, 110, 'test');

    Read the article

  • Oracle - UPSERT with update not executed for unmodified values

    - by Buthrakaur
    I'm using the following update-or-insert Oracle statement at the moment: BEGIN UPDATE DSMS SET SURNAME = :SURNAME, FIRSTNAME = :FIRSTNAME, VALID = :VALID WHERE DSM = :DSM; IF (SQL%ROWCOUNT = 0) THEN INSERT INTO DSMS (DSM, SURNAME, FIRSTNAME, VALID) VALUES (:DSM, :SURNAME, :FIRSTNAME, :VALID); END IF; END; This runs fine, except that the update statement performs a dummy update if the data is the same as the parameter values provided. I would not mind the dummy update in a normal situation, but there is a replication/synchronization system built over this table, using triggers on the tables to capture updated records, and executing this statement frequently for many records would simply cause huge traffic in the triggers and the sync system. Is there any simple way to reformulate this code so that the update statement does not touch the record when no change is necessary, without using the following IF-EXISTS check, which I find neither sleek nor necessarily the most efficient for this task? DECLARE CNT NUMBER; BEGIN SELECT COUNT(1) INTO CNT FROM DSMS WHERE DSM = :DSM; IF SQL%FOUND THEN UPDATE DSMS SET SURNAME = :SURNAME, FIRSTNAME = :FIRSTNAME, VALID = :VALID WHERE DSM = :DSM AND (SURNAME != :SURNAME OR FIRSTNAME != :FIRSTNAME OR VALID != :VALID); ELSE INSERT INTO DSMS (DSM, SURNAME, FIRSTNAME, VALID) VALUES (:DSM, :SURNAME, :FIRSTNAME, :VALID); END IF; END;
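    One possible reformulation (not from the question) is a MERGE whose WHEN MATCHED branch carries the same "only if something changed" predicate, so unchanged rows are never written and the triggers never fire. A sketch against the same table, assuming Oracle 10g or later for the WHERE clause on the MERGE update and NOT NULL compared columns (nullable columns would need NVL handling):

    MERGE INTO DSMS d
    USING (SELECT :DSM AS DSM, :SURNAME AS SURNAME, :FIRSTNAME AS FIRSTNAME, :VALID AS VALID FROM dual) s
    ON (d.DSM = s.DSM)
    WHEN MATCHED THEN UPDATE
        SET d.SURNAME = s.SURNAME, d.FIRSTNAME = s.FIRSTNAME, d.VALID = s.VALID
        WHERE d.SURNAME != s.SURNAME OR d.FIRSTNAME != s.FIRSTNAME OR d.VALID != s.VALID
    WHEN NOT MATCHED THEN INSERT (DSM, SURNAME, FIRSTNAME, VALID)
        VALUES (s.DSM, s.SURNAME, s.FIRSTNAME, s.VALID);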

    Read the article

  • Oracle: how to UPSERT (update or insert into a table?)

    - by Mark Harrison
    The UPSERT operation either updates or inserts a row in a table, depending on whether the table already has a row that matches the data: if table t has a row with key X: update t set mystuff... where mykey=X else insert into t mystuff... Since Oracle doesn't have a specific UPSERT statement, what's the best way to do this?
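    A common answer is a MERGE against DUAL; a minimal sketch using the pseudocode's names (mykey/mystuff), with bind variables standing in for the new values:

    MERGE INTO t
    USING (SELECT :x AS mykey, :y AS mystuff FROM dual) src
    ON (t.mykey = src.mykey)
    WHEN MATCHED THEN UPDATE SET t.mystuff = src.mystuff
    WHEN NOT MATCHED THEN INSERT (mykey, mystuff) VALUES (src.mykey, src.mystuff);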

    Read the article

  • Insert Or update (aka Replace or Upsert)

    - by Davide Mauri
    The topic is really not new, but since it’s the second time in a few days that I’ve had to explain it to different customers, I think it’s worth making a post out of it. Many times developers would like to insert a new row into a table or, if the row already exists, update it with new data. MySQL has a specific statement for this action, called REPLACE: http://dev.mysql.com/doc/refman/5.0/en/replace.html or the INSERT ... ON DUPLICATE KEY UPDATE option: http://dev.mysql.com/doc/refman/5.0/en/insert-on-duplicate.html With SQL Server you can do the very same thing in a more standard way, using the MERGE statement with the support of row constructors. Let’s say that you have this table: CREATE TABLE dbo.MyTargetTable ( id INT NOT NULL PRIMARY KEY IDENTITY, alternate_key VARCHAR(50) UNIQUE, col_1 INT, col_2 INT, col_3 INT, col_4 INT, col_5 INT ) GO INSERT [dbo].[MyTargetTable] VALUES ('GUQNH', 10, 100, 1000, 10000, 100000), ('UJAHL', 20, 200, 2000, 20000, 200000), ('YKXVW', 30, 300, 3000, 30000, 300000), ('SXMOJ', 40, 400, 4000, 40000, 400000), ('JTPGM', 50, 500, 5000, 50000, 500000), ('ZITKS', 60, 600, 6000, 60000, 600000), ('GGEYD', 70, 700, 7000, 70000, 700000), ('UFXMS', 80, 800, 8000, 80000, 800000), ('BNGGP', 90, 900, 9000, 90000, 900000), ('AMUKO', 100, 1000, 10000, 100000, 1000000) GO If you want to insert or update a row, you can just do this: MERGE INTO dbo.MyTargetTable T USING (SELECT * FROM (VALUES ('ZITKS', 61, 601, 6001, 60001, 600001)) Dummy(alternate_key, col_1, col_2, col_3, col_4, col_5)) S ON T.alternate_key = S.alternate_key WHEN NOT MATCHED THEN INSERT VALUES (alternate_key, col_1, col_2, col_3, col_4, col_5) WHEN MATCHED AND T.col_1 != S.col_1 THEN UPDATE SET T.col_1 = S.col_1, T.col_2 = S.col_2, T.col_3 = S.col_3, T.col_4 = S.col_4, T.col_5 = S.col_5 ; If you want to insert/update more than one row at once, you can supercharge the idea using table-valued parameters, which you can send straight from your .NET application. Easy, powerful and effective.

    Read the article

  • Salesforce - Update/Upsert custom object entry

    - by Phill Pafford
    UPDATE: It's working as expected; I just needed to pass the correct Id, DUH! I have a custom object in Salesforce, kind of like the comments section on a case, for example. When you add a new comment it has a date/time stamp for that entry; I wanted to update the previous case comment's date/time stamp when a new case comment is created. I wanted to do an UPDATE like this: $updateFields = array( 'Id'=>$comment_id, // This is the Id for each comment 'End_Date__c'=>$record_last_modified_date ); function sfUpdateLastCommentDate($sfConnection, $updateFields) { try { $sObjectCustom = new SObject(); $sObjectCustom->type = 'Case_Custom__c'; $sObjectCustom->fields = $updateFields; $createResponse = $sfConnection->update(array($sObjectCustom)); } catch(Exception $e) { $error_msg = SALESFORCE_ERROR." \n"; $error_msg .= $e->faultstring; $error_msg .= $sfConnection->getLastRequest(); $error_msg .= SALESFORCE_MESSAGE_BUFFER_NEWLINE; // Send error message mail(ERROR_TO_EMAIL, ERROR_EMAIL_SUBJECT, $error_msg, ERROR_EMAIL_HEADER_WITH_CC); exit; } } I've also tried UPSERT, but I get the error: Missing argument 2 for SforcePartnerClient::upsert() Any help would be great.

    Read the article

  • Deprecate UPDATE FROM? Not if I can help it!

    - by AaronBertrand
    Fellow MVP Hugo Kornelis ( blog ) has suggested that the proprietary UPDATE FROM and DELETE FROM syntax, which has worked for several SQL Server versions, should be deprecated in favor of MERGE. Here is the Connect item he raised: #332437 : Deprecate UPDATE FROM and DELETE FROM As you can see, the response is quite divided (more so than any other item that I can recall) - at the time of writing, it was 11 up-votes and 12 down-votes. I have no shame in admitting that I am one of the people who down-voted...(read more)

    Read the article

  • MERGE -v- UPSERT

    - by Kevin Ross
    Hi, I have an application I'm writing in Access with a SQL Server backend. One of the most heavily used parts is where the user selects an answer to a question; a stored procedure is then fired which checks whether an answer has already been given: if it has, an UPDATE is executed, if not, an INSERT is executed. This works just fine, but now that we have upgraded to SQL Server 2008 Express I was wondering if it would be better/quicker/more efficient to rewrite this SP to use the new MERGE command. Does anyone have any idea if this is faster than doing a SELECT followed by either an INSERT or an UPDATE?
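    A sketch of what the MERGE version of such a procedure might look like (the table and parameter names here are hypothetical); whether it actually outperforms the existing SELECT-then-UPDATE/INSERT is best settled by timing both against the real table:

    CREATE PROCEDURE dbo.SaveAnswer
        @QuestionId INT,
        @UserId     INT,
        @Answer     NVARCHAR(255)
    AS
    BEGIN
        SET NOCOUNT ON;
        MERGE dbo.Answers AS target
        USING (SELECT @QuestionId AS QuestionId, @UserId AS UserId) AS source
            ON target.QuestionId = source.QuestionId
           AND target.UserId = source.UserId
        WHEN MATCHED THEN
            UPDATE SET Answer = @Answer
        WHEN NOT MATCHED THEN
            INSERT (QuestionId, UserId, Answer)
            VALUES (@QuestionId, @UserId, @Answer);
    END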

    Read the article

  • upsert with addition

    - by cf_PhillipSenn
    How would you write the following in Microsoft SQL Server 2008? IF EXISTS(SELECT * FROM Table WHERE Something=1000) UPDATE Table SET Qty = Qty + 1 WHERE Something=1000 ELSE INSERT INTO Table(Something,Qty) VALUES(1000,1)
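    On SQL Server 2008 the same logic can also be written as a single MERGE; a sketch with the column names and literal values taken from the question (the IF EXISTS version above also works, ideally inside a transaction with locking hints if concurrent callers are expected):

    MERGE [Table] AS t
    USING (SELECT 1000 AS Something) AS s
        ON t.Something = s.Something
    WHEN MATCHED THEN
        UPDATE SET Qty = t.Qty + 1
    WHEN NOT MATCHED THEN
        INSERT (Something, Qty) VALUES (1000, 1);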

    Read the article

  • Upsert - Efficient Update or Insert in VB.Net, SQL Server

    - by HK1
    I'm trying to understand how to streamline the process of inserting a record if none exists or updating a record if it already exists. I'm not using stored procedures, although maybe that would be the most efficient way of doing this. The actual scenario in which this is necessary is saving a user preference/setting to my SettingsUser table. In MS Access I would typically pull a DAO recordset looking for the specified setting. If the recordset comes back empty then I know I need to add a new record which I can do with the same recordset object. On the other hand, if it isn't empty, I can just update the setting's value right away. In theory, this is only two database operations. What is the recommended way of doing this in .NET?
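    Whatever the data-access layer, the round trips can be kept to one by sending a single parameterized batch; a sketch of the T-SQL against the SettingsUser table, with assumed column names:

    UPDATE SettingsUser
    SET SettingValue = @SettingValue
    WHERE UserId = @UserId AND SettingName = @SettingName;

    IF @@ROWCOUNT = 0
        INSERT INTO SettingsUser (UserId, SettingName, SettingValue)
        VALUES (@UserId, @SettingName, @SettingValue);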

    Read the article

  • Doing a large number of upserts as fast as possible

    - by Jason Swett
    My app (which uses MySQL) is doing a large number of subsequent upserts. Right now my SQL looks like this: INSERT IGNORE INTO customer (name,customer_number,social_security_number,phone) VALUES ('VICTOR H KINDELL','123','123','123') INSERT IGNORE INTO customer (name,customer_number,social_security_number,phone) VALUES ('VICTOR H KINDELL','123','123','123') INSERT IGNORE INTO customer (name,customer_number,social_security_number,phone) VALUES ('VICTOR H KINDELL OR','123','123','123') INSERT IGNORE INTO customer (name,customer_number,social_security_number,phone) VALUES ('TRACY L WALTER PERSONAL REP FOR','123','123','123') INSERT IGNORE INTO customer (name,customer_number,social_security_number,phone) VALUES ('TRACY L WALTER PERSONAL REP FOR','123','123','123') So far I've found INSERT IGNORE to be the fastest way to achieve upserts. Selecting a record to see if it exists and then either updating it or inserting a new one is too slow. Even this is not as fast as I'd like because I need to do a separate statement for each record. Sometimes I'll have around 50,000 of these statements in a row. Is there a way to take care of all of these in just one statement, without deleting any existing records?
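    One option (not shown in the question) is to batch many rows into a single multi-row INSERT IGNORE; the same shape works with ON DUPLICATE KEY UPDATE if existing rows should be refreshed instead of skipped:

    INSERT IGNORE INTO customer (name, customer_number, social_security_number, phone)
    VALUES
        ('VICTOR H KINDELL', '123', '123', '123'),
        ('VICTOR H KINDELL OR', '123', '123', '123'),
        ('TRACY L WALTER PERSONAL REP FOR', '123', '123', '123');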

    Read the article

  • Does DB2 have an "insert or update" statement?

    - by Mikael Eriksson
    From my code (Java) I want to ensure that a row exists in the database (DB2) after my code is executed. My code now does a select and, if no result is returned, it does an insert. I really don't like this code since it exposes me to concurrency issues when running in a multi-threaded environment. What I would like to do is put this logic in DB2 instead of in my Java code. Does DB2 have an "insert-or-update" statement? Or anything like it that I can use? For example: insertupdate into mytable values ('myid') Another way of doing it would probably be to always do the insert and catch "SQL code -803, primary key already exists", but I would like to avoid that if possible.
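    Recent DB2 versions (LUW 9.x and later, and z/OS 9 and later) do support the standard MERGE statement, which covers this case; a sketch against the example table, assuming id is the primary key:

    MERGE INTO mytable AS t
    USING (VALUES ('myid')) AS s (id)
        ON t.id = s.id
    WHEN NOT MATCHED THEN
        INSERT (id) VALUES (s.id);
    -- a WHEN MATCHED THEN UPDATE branch can be added when there are non-key columns to refresh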

    Read the article

  • Replace into equivalent for postgresql and then autoincrementing an int

    - by Mohamed Ikal Al-Jabir
    Okay, no seriously: if a PostgreSQL guru can help out, that would be great; I'm just getting started. Basically what I want is a simple table like this: CREATE TABLE schema.searches ( search_id serial NOT NULL, search_query character varying(255), search_count integer DEFAULT 1, CONSTRAINT pkey_search_id PRIMARY KEY (search_id) ) WITH ( OIDS=FALSE ); I need something like MySQL's REPLACE INTO. I don't know if I have to write my own procedure or something? Basically: check if the query already exists; if so, just add 1 to the count; if not, add it to the db. I can do this in my PHP code, but I'd rather all of that be done in the Postgres engine itself. Thanks for helping.
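    Assuming a unique constraint on search_query (the table above doesn't declare one), the check-and-increment can live entirely in the database. On PostgreSQL 9.5 or later this is a single statement; on older versions the classic retry loop (as in the "Dynamic upsert" result above) is needed. A sketch:

    -- prerequisite (illustrative): a unique constraint on the query text
    -- ALTER TABLE schema.searches ADD CONSTRAINT uq_search_query UNIQUE (search_query);

    -- PostgreSQL 9.5 and later: insert a new row (search_count defaults to 1)
    -- or bump the counter if the query is already there.
    INSERT INTO schema.searches (search_query)
    VALUES ('some query')
    ON CONFLICT (search_query)
    DO UPDATE SET search_count = searches.search_count + 1;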

    Read the article

  • SQLite - ON DUPLICATE KEY UPDATE

    - by Alix Axel
    MySQL has something like this: INSERT INTO visits (ip, hits) VALUES ('127.0.0.1', 1) ON DUPLICATE KEY UPDATE hits = hits + 1; As far as I know, this feature doesn't exist in SQLite. What I want to know is if there is any way to achieve the same effect without having to execute two queries. Also, if this is not possible, what do you prefer: SELECT + (INSERT or UPDATE), or UPDATE (+ INSERT if the UPDATE fails)?

    Read the article

  • Insert or Update using Oracle and PL/SQL

    - by Shane
    I have a PL/SQL function that performs an update/insert on an Oracle database that maintains a target total and returns the difference between the existing value and the new value. Here is the code I have so far: FUNCTION calcTargetTotal(accountId varchar2, newTotal numeric ) RETURN number is oldTotal numeric(20,6); difference numeric(20,6); begin difference := 0; begin select value into oldTotal from target_total WHERE account_id = accountId for update of value; if (oldTotal != newTotal) then update target_total set value = newTotal WHERE account_id = accountId; difference := newTotal - oldTotal; end if; exception when NO_DATA_FOUND then begin difference := newTotal; insert into target_total ( account_id, value ) values ( accountId, newTotal ); -- sometimes a race condition occurs and this stmt fails -- in those cases try to update again exception when DUP_VAL_ON_INDEX then begin difference := 0; select value into oldTotal from target_total WHERE account_id = accountId for update of value; if (oldTotal != newTotal) then update target_total set value = newTotal WHERE account_id = accountId; difference := newTotal - oldTotal; end if; end; end; end; return difference; end calcTargetTotal; This works as expected in unit tests with multiple threads, never failing. However, when loaded on a live system, we have seen it fail with a stack trace looking like this: ORA-01403: no data found ORA-00001: unique constraint () violated ORA-01403: no data found The line numbers (which I have removed since they are meaningless out of context) verify that the first update fails due to no data, the insert fails due to uniqueness, and the second update is failing with no data, which should be impossible. From what I have read in other threads, a MERGE statement is also not atomic and could suffer similar problems. Does anyone have any ideas on how to prevent this from occurring?

    Read the article

  • Question about inserting/updating rows with SQL Server (ASP.NET MVC)

    - by Alex
    I have a very big table with a lot of rows; every row has stats for every user for certain days, and obviously I don't have any stats for the future. So to update the stats I use UPDATE Stats SET Visits=@val WHERE ... a lot of conditions ... AND Date=@Today But what if the row doesn't exist? I'd have to use INSERT INTO Stats (...) VALUES (Visits=@val, ..., Date=@Today) How can I check whether the row exists or not? Is there any way other than doing a COUNT(*)? If I fill the table with empty cells, it'd take hundreds of thousands of rows, consuming megabytes while storing no data.
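    A common way to avoid a separate existence check (not from the question) is to try the UPDATE first and INSERT only when it touched nothing; the comment placeholders below stand in for the question's elided conditions and columns:

    UPDATE Stats
    SET Visits = @val
    WHERE /* ... a lot of conditions ... */ [Date] = @Today;

    IF @@ROWCOUNT = 0
        INSERT INTO Stats (Visits, [Date] /* , ... */)
        VALUES (@val, @Today /* , ... */);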

    Read the article

  • Question about inserting/updating rows with MS SQL (ASP.NET MVC)

    - by Alex
    I have a very big table with a lot of rows; every row has stats for every user for certain days, and obviously I don't have any stats for the future. So to update the stats I use UPDATE Stats SET Visits=@val WHERE ... a lot of conditions ... AND Date=@Today But what if the row doesn't exist? I'd have to use INSERT INTO Stats (...) VALUES (Visits=@val, ..., Date=@Today) How can I check whether the row exists or not? Is there any way other than doing a COUNT(*)? If I fill the table with empty cells, it'd take hundreds of thousands of rows, consuming megabytes while storing no data.

    Read the article

  • Random MongoDb Syntax: Updates

    - by Liam McLennan
    I have a MongoDb collection called tweets. Each document has a property system_classification. If the value of system_classification is '+' I want to change it to 'positive'. For a regular relational database the query would be: update tweets set system_classification = 'positive' where system_classification = '+' The MongoDb equivalent is: db.tweets.update({system_classification: '+'}, {$set: {system_classification:'positive'}}, false, true) The parameters are:
    - { system_classification: '+' }: the first parameter identifies the documents to select
    - { $set: { system_classification: 'positive' } }: the second parameter is an operation ($set) and the argument to that operation ({system_classification: 'positive'})
    - false: the third parameter indicates whether this is a regular update or an upsert (true for upsert)
    - true: the final parameter indicates whether the operation should be applied to all selected documents (or just the first)

    Read the article

  • SQL Server 2012 memory usage steadily growing

    - by pgmo
    I am very worried about the SQL Server 2012 Express instance on which my database is running: the SQL Server process memory usage is growing steadily (1.5GB after only 2 days of operation). The database is made of seven tables, each having a bigint primary key (identity) and at least one non-unique index with some included columns to serve the majority of incoming queries. An external application calls some stored procedures via Microsoft OLE DB; each of them does some calculations using intermediate temporary tables and/or table variables and finally does an upsert (UPDATE....IF @@ROWCOUNT=0 INSERT.....). I never DROP those temporary tables explicitly. The frequency of those calls is about 100 calls every 5 seconds (I saw that the DLL used by the external application opens a connection to SQL Server, does the call and then closes the connection for each and every call). The database files are organized in only one filegroup, and the recovery model is set to simple. Some questions to diagnose the problem:
    - Is that steadily growing memory usage normal?
    - Did I make any mistake in the database design which probably led to this behaviour (no explicit temp-table drop, filegroup organization, etc.)?
    - Can SQL Server manage such a stored procedure call rate (100 calls every 5 seconds, i.e. 100 upserts every 5 seconds, beyond the intermediate calculations)?
    - Does the continuous "open connection / call SP / close connection" pattern disturb SQL Server?
    - Is it possible to diagnose what is causing such memory usage? Perhaps queues of waiting requests? (I ran sp_who2, but I didn't see a large number of orphaned connections from the external application.)
    - If I restrict the amount of memory SQL Server is allowed to use, might I sooner or later get into trouble?

    Read the article

  • MERGE gives better OUTPUT options

    - by Rob Farley
MERGE is very cool. There are a ton of useful things about it – mostly around the fact that you can implement a ton of changes against a table all at once. This is great for data warehousing, handling changes made to relational databases by applications, all kinds of things. One of the more subtle things about MERGE is the power of the OUTPUT clause. Useful for logging. If you’re not familiar with the OUTPUT clause, you really should be – it basically makes your DML (INSERT/DELETE/UPDATE/MERGE) statement return data back to you. This is a great way of returning identity values from INSERT commands (so much better than SCOPE_IDENTITY() or the older (and worse) @@IDENTITY, because you can get lots of rows back). You can even use it to grab default values that are set using non-deterministic functions like NEWID() – things you couldn’t normally get back without running another query (or with a trigger, I guess, but that’s not pretty). That inserted table I referenced – that’s part of the ‘behind-the-scenes’ work that goes on with all DML changes. When you insert data, this internal table called inserted gets populated with rows, and then used to inflict the appropriate inserts on the various structures that store data (HoBTs – the Heaps or B-Trees used to store data as tables and indexes). When deleting, the deleted table gets populated. Updates get a matching row in both tables (although this doesn’t mean that an update is a delete followed by an insert, it’s just the way it’s handled with these tables). These tables can be referenced by the OUTPUT clause, which can show you the before and after for any DML statement. Useful stuff. MERGE is slightly different though. With MERGE, you get a mix of entries. Your MERGE statement might be doing some INSERTs, some UPDATEs and some DELETEs. One of the most common examples of MERGE is to perform an UPSERT command, where data is updated if it already exists, or inserted if it’s new. And in a single operation too. Here, you can see the usefulness of the deleted and inserted tables, which clearly reflect the type of operation (but then again, MERGE lets you use an extra column called $action to show this). (Don’t worry about the fact that I turned on IDENTITY_INSERT, that’s just so that I could insert the values) One of the things I love about MERGE is that it feels almost cursor-like – the UPDATE bit feels like “WHERE CURRENT OF …”, and the INSERT bit feels like a single-row insert. And it is – but into the inserted and deleted tables. The operations to maintain the HoBTs are still done using the whole set of changes, which is very cool. And $action – very convenient. But as cool as $action is, that’s not the point of my post. If it were, I hope you’d all be disappointed, as you can’t really go near the MERGE statement without learning about it. The subtle thing that I love about MERGE with OUTPUT is that you can hook into more than just inserted and deleted. Did you notice in my earlier query that my source table had a ‘src’ field that wasn’t used in the insert? Normally, this would be somewhat pointless to include in my source query. But with MERGE, I can put that in the OUTPUT clause. This is useful stuff, particularly when you need to audit the changes. Suppose your query involved consolidating data from a number of sources, but you didn’t need to insert that into the actual table, just into a table for audit.
This is now very doable, either using the INTO clause of OUTPUT, or surrounding the whole MERGE statement in brackets (parentheses if you’re American) and using a regular INSERT statement. This is also doable if you’re using MERGE to just do INSERTs. In case you hadn’t realised, you can use MERGE in place of an INSERT statement. It’s just like the UPSERT-style statement we’ve just seen, except that we want nothing to match. That’s easy to do, we just use ON 1=2. This is obviously more convoluted than a straight INSERT. And it’s slightly more effort for the database engine too. But, if you want the extra audit capabilities, the ability to hook into the other source columns is definitely useful. Oh, and before people ask if you can also hook into the target table’s columns... Yes, of course. That’s what deleted and inserted give you.
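    A sketch of the pattern described above, with illustrative table and column names rather than the post's original demo: an UPSERT-style MERGE whose OUTPUT clause logs $action, the before/after values, and a column (src) that exists only in the source:

    MERGE dbo.Customers AS t
    USING (SELECT 1 AS CustomerId, 'Rob' AS Name, 'FileImport' AS src) AS s
        ON t.CustomerId = s.CustomerId
    WHEN MATCHED THEN
        UPDATE SET Name = s.Name
    WHEN NOT MATCHED THEN
        INSERT (CustomerId, Name) VALUES (s.CustomerId, s.Name)
    OUTPUT $action, deleted.Name AS OldName, inserted.Name AS NewName, s.src
        INTO dbo.CustomerAudit (MergeAction, OldName, NewName, Source);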

    Read the article

  • "Siebel2FusionCRM Integration" solution by ec4u (D)

    - by Richard Lefebvre
    ec4u, a CRM system integration leader based in Germany and Switzerland and a long-standing Oracle/Siebel partner, offers a complete "Siebel2FusionCRM Integration" solution based on tools, methodology and services. The solution's main objectives are:
    - Integration between Siebel (on-premise) and Fusion CRM / Marketing (“in the cloud”)
    - Accounts, Contacts and Addresses are maintained by Sales in Siebel CRM and synchronized in real time into the Fusion CRM / Marketing CDM
    - Processing ensures clean data for marketing campaigns (validation and deduplication)
    - E-mail marketing campaigns and newsletters are created in Fusion
    The solution features:
    - Upsert processes figure out what information needs to be updated, inserted or terminated (deleted); however, as Siebel is the data master, it is still a one-way synchronization
    - Deleted or nullified information is handled by terminating it in Fusion CRM (start and end dates define the validity period)
    - Initial load and real-time synchronization use the same processes
    - Invocations/operations can be repeated, since there is no transactional support from the Fusion web services
    - Tagging of sub-entries in case of 1-to-N mappings (example: a telephone number is one simple field in Siebel, but in Fusion you can have multiple telephone numbers in a sub-table)
    - E-mail notification in case of any error (containing the error message, instance number and detailed payload)
    - Schematron validation
    Interested? Looking for more details or a partnership with ec4u for a "Siebel2FusionCRM Integration" project? Contact: Gregor Bublitz, Director Expert Services ([email protected])

    Read the article
