Search Results

Search found 29702 results on 1189 pages for 'select insert'.

  • Smart Device CAB project with Visual Studio: inserting a registry value

    - by nuttynibbles
    Hi, I have created a CAB file using the Visual Studio Smart Device CAB project, and managed to install it onto a mobile phone successfully. During installation it should also insert a registry value, but that part is not working. The registry value is supposed to look like this, including the double quotes:

      "\Program Files\app\file.exe" "%1"

    I read that to include '%' I have to add an extra %, which gives:

      "\Program Files\app\file.exe" "%%1"

    but this still does not get inserted into the registry. However, if I remove the double quotes, like this, it works:

      \Program Files\app\file.exe %%1

    Thanks.

  • SQL Server multithreaded locking with TABLOCKX

    - by WilfriedVS
    I have a table "tbluser" with 2 fields:

      userid = integer (autoincrement)
      user = nvarchar(100)

    I have a multithreaded/multi-server application that uses this table. I want to accomplish the following: guarantee that the field user is unique in my table, and guarantee that the combination userid/user is unique in each server's memory. I have the following stored procedure:

      CREATE PROCEDURE uniqueuser @user nvarchar(100) AS
      BEGIN
          BEGIN TRAN
          DECLARE @userID int
          SET NOCOUNT ON
          SET @userID = (SELECT userID FROM tbluser WITH (TABLOCKX) WHERE [user] = @user)
          IF @userID IS NOT NULL
          BEGIN
              SELECT userID = @userID
          END
          ELSE
          BEGIN
              INSERT INTO tbluser([user]) VALUES (@user)
              SELECT userID = SCOPE_IDENTITY()
          END
          COMMIT TRAN
      END

    Basically the application calls the stored procedure and provides a username as a parameter. The stored procedure either gets the userid or inserts the user if it is a new user. Am I correct to assume that the table is locked (only one server can insert/query)?
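
    A minimal sketch of a lock-free alternative, assuming a UNIQUE constraint on [user] is acceptable: the constraint, rather than TABLOCKX, guarantees uniqueness, and concurrent callers race harmlessly. THROW requires SQL Server 2012+ (older versions would use RAISERROR); the constraint and procedure names are illustrative.

      ALTER TABLE tbluser ADD CONSTRAINT uq_tbluser_user UNIQUE ([user]);
      GO
      CREATE PROCEDURE uniqueuser_nolock @user nvarchar(100) AS
      BEGIN
          SET NOCOUNT ON;
          BEGIN TRY
              -- Optimistic insert; the UNIQUE constraint rejects duplicates.
              INSERT INTO tbluser ([user]) VALUES (@user);
              SELECT userID = SCOPE_IDENTITY();
          END TRY
          BEGIN CATCH
              -- 2627/2601 = unique violation: another caller inserted first.
              IF ERROR_NUMBER() IN (2627, 2601)
                  SELECT userID FROM tbluser WHERE [user] = @user;
              ELSE
                  THROW;
          END CATCH
      END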

  • Does std::multiset guarantee insertion order?

    - by Naveen
    I have a std::multiset which stores elements of class A, and I have provided my own implementation of operator< for this class. My question is: if I insert two equivalent objects into this multiset, is their order guaranteed? For example, first I insert an object a1 into the set and then I insert an equivalent object a2. Can I expect a1 to come before a2 when I iterate through the set? If not, is there any way to achieve this using multiset?

  • How can I generate SQL inserts from pipe-delimited data?

    - by user568866
    Given a set of delimited data in the following format:

      1|Star Wars: Episode IV - A New Hope|1977|Action,Sci-Fi|George Lucas
      2|Titanic|1997|Drama,History,Romance|James Cameron

    how can I generate SQL insert statements in this format?

      insert into table values(1,"Star Wars: Episode IV - A New Hope",1977,"Action,Sci-Fi","George Lucas",0);
      insert into table values(2,"Titanic",1997,"Drama,History,Romance","James Cameron",0);

    To simplify the problem, let's allow for a parameter that tells which columns are text or numeric (e.g. 0,1,0,1,1).
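
    If the end goal is getting the rows loaded rather than producing INSERT text, a hedged alternative (assuming MySQL and a hypothetical movies table matching the five fields) is to let the server parse the pipe-delimited file directly:

      -- Sketch: server-side import of the delimited file; the column
      -- types, not a 0/1 parameter, decide text vs. numeric handling.
      LOAD DATA LOCAL INFILE 'movies.txt'
      INTO TABLE movies
      FIELDS TERMINATED BY '|'
      LINES TERMINATED BY '\n'
      (id, title, year, genres, director);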

  • TopLink prefixes table with TL_ during an UPDATE operation

    - by Dewfy
    I have a very simple named query in JPA (TopLink):

      UPDATE Server s SET s.isECM = 0

    I don't care about the cache or the validity of already-loaded entities. But the database connection is made from a restricted account (only INSERT/UPDATE/DELETE). It appears that for this query TopLink executes (and fails, since TL_Server does not exist) a very strange SQL statement:

      INSERT INTO TL_Server (elementId, IsECM)
          SELECT t0.ElementId, ? FROM Element t0, Server t1
          WHERE ((t1.elementId = t0.ElementId) AND (t0.elementType = ?))
          bind => [0, Server]

    What is this? How does a simple UPDATE turn into an INSERT? Why does TopLink query TL_Server?

  • Recreation of DB using "mysql mydb < mydb.sql" is really slow when the table has tens of millions of records

    - by Jian Lin
    It seems that a MySQL database that has a table with tens of millions of records will get a big INSERT INTO statement when the following

      mysqldump some_db > some_db.sql

    is done to back up the database. (Is it one insert statement that handles all the records?) So when reconstructing the DB using

      mysql some_db < some_db.sql

    the CPU is hardly busy (about 1.8% usage by the mysql process... I don't see a mysqld either?) and the hard disk doesn't seem to be too busy either. Last time, the whole restore process took 5 hours. Is there a way to make it faster? For example, when doing mysqldump, can it break the INSERT statement into shorter ones, so that mysql doesn't need to parse the line so hard when restoring the DB?
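
    For the record, mysqldump already batches rows into multi-row INSERTs by default (--extended-insert); the opposite flag, --skip-extended-insert, writes one INSERT per row and typically makes restores slower, not faster. What more often helps, sketched below on the assumption the tables are InnoDB, is relaxing checks for the duration of the import:

      -- Run inside the mysql client; SOURCE is a client command.
      SET autocommit = 0;
      SET unique_checks = 0;
      SET foreign_key_checks = 0;
      SOURCE some_db.sql;
      COMMIT;
      SET unique_checks = 1;
      SET foreign_key_checks = 1;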

  • PostgreSQL pgdb driver raises "can't rollback" exception

    - by David Parunakian
    Hello, for some reason I'm experiencing an OperationalError with a "can't rollback" message when I attempt to roll back my transaction in the following context:

      try:
          cursors[instance].execute("lock revision, app, timeout IN SHARE MODE")
          cursors[instance].execute("insert into app (type, active, active_revision, contents, z) values ('session', true, %s, %s, 0) returning id", (cRevision, sessionId))
          sAppId = cursors[instance].fetchone()[0]
          cursors[instance].execute("insert into revision (app_id, type) values (%s, 'active')", (sAppId,))
          cursors[instance].execute("insert into timeout (app_id, last_seen) values (%s, now())", (sAppId,))
          connections[instance].commit()
      except pgdb.DatabaseError, e:
          connections[instance].rollback()
          return "{status: 'error', errno:4, errmsg: \"%s\"}" % (str(e).replace('\"', '\\"').replace('\n', '\\n').replace('\r', '\\r'))

    The driver in use is pgdb. What is fundamentally wrong here?

  • T-SQL MERGE - finding out which action it took

    - by IanC
    I need to know if a MERGE statement performed an INSERT. In my scenario, the insert is either 0 or 1 rows. Test code:

      DECLARE @t table (C1 int, C2 int)
      DECLARE @C1 INT, @C2 INT
      SET @C1 = 1
      SET @C2 = 1
      MERGE @t AS tgt
      USING (SELECT @C1, @C2) AS src (C1, C2)
      ON (tgt.C1 = src.C1)
      WHEN MATCHED AND tgt.C2 != src.C2 THEN
          UPDATE SET tgt.C2 = src.C2
      WHEN NOT MATCHED BY TARGET THEN
          INSERT VALUES (src.C1, src.C2)
      OUTPUT deleted.*, $action, inserted.*;
      SELECT inserted.*

    The last line doesn't compile (there is no scope, unlike in a trigger). I can't get access to $action or the output. Actually, I don't want any output metadata. How can I do this?
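
    A sketch of one way out, using the same test objects: direct the OUTPUT clause into a table variable, then check the recorded action. This keeps the metadata out of the result sets the caller sees.

      DECLARE @changes table ([action] nvarchar(10));

      MERGE @t AS tgt
      USING (SELECT @C1, @C2) AS src (C1, C2)
      ON (tgt.C1 = src.C1)
      WHEN MATCHED AND tgt.C2 != src.C2 THEN
          UPDATE SET tgt.C2 = src.C2
      WHEN NOT MATCHED BY TARGET THEN
          INSERT VALUES (src.C1, src.C2)
      OUTPUT $action INTO @changes ([action]);

      IF EXISTS (SELECT 1 FROM @changes WHERE [action] = 'INSERT')
          PRINT 'insert performed';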

  • How can I pass a parameter to my Twig sub?

    - by aliocee
    Hi, help: I can't pass my hash key to the Twig subroutine. Here:

      foreach my $word (sort { $keywords{$a} <=> $keywords{$b} } keys (%keywords)) {
          my $t = XML::Twig->new(
              twig_roots => { 'Id' => \&insert($keywords{$word}) }
          );
          $t->parse($docsums);

          sub insert {
              my ($t, $id, $k) = @_;
              my $p = $id->text;
              my $query = "insert into pres (id, wid, p) values(DEFAULT, '$k', '$p')";
              my $sql = $connect->prepare($query);
              $sql->execute();
          }
      }

    Thanks.

  • PHP/ODBC encoding problem

    - by JohnM2
    I use ODBC to connect to SQL Server from PHP. In PHP I read some string (nvarchar column) data from SQL Server and then want to insert it into a MySQL database. When I try to insert such a value into a MySQL database table, I get this MySQL error:

      Incorrect string value: '\xB3\xB9ow...' for column 'name' at row 1

    For strings with all-ASCII characters everything is fine; the problem occurs when non-ASCII characters (from some European languages) exist. So, in more general terms: there is a Unicode string in the MS SQL Server database, which is retrieved by PHP through ODBC. Then it is put in a SQL insert query (as a value for a utf-8 varchar column) which is executed against the MySQL database. Can someone explain to me what is happening in this situation in terms of encoding? At which step could what character encoding conversions take place? I use: PHP 5.2.5, MySQL 5.0.45-community-nt, MS SQL Server 2005.
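
    A hedged first check: bytes like \xB3\xB9 look like a single-byte Central European code page rather than UTF-8, so it is worth verifying that the MySQL connection character set matches the bytes PHP actually sends. For example:

      -- What does the connection currently assume?
      SHOW VARIABLES LIKE 'character_set%';

      -- Declare the connection encoding explicitly (assuming the data
      -- really is UTF-8 by the time PHP sends it).
      SET NAMES 'utf8';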

  • n-grams from text in PostgreSQL

    - by harshsinghal
    I am looking to create n-grams from a text column in PostgreSQL. I currently split the data (sentences) in a text column into an array, on whitespace:

      SELECT regexp_split_to_array(sentenceData, E'\\s+') FROM tableName;

    Once I have this array, how do I go about creating a loop to find n-grams and writing each to a row in another table? Using unnest I can obtain all the elements of all the arrays on separate rows, and maybe I can then think of a way to get n-grams from a single column, but I'd lose the sentence boundaries, which I wish to preserve. Sample SQL code for PostgreSQL to emulate the above scenario:

      create table tableName(sentenceData text);
      INSERT INTO tableName(sentenceData) VALUES('This is a long sentence');
      INSERT INTO tableName(sentenceData) VALUES('I am currently doing grammar, hitting this monster book btw!');
      INSERT INTO tableName(sentenceData) VALUES('Just tonnes of grammar, problem is I bought it in TAIWAN, and so there aint any englihs, just chinese and japanese');
      select regexp_split_to_array(sentenceData, E'\\s+') from tableName;
      select unnest(regexp_split_to_array(sentenceData, E'\\s+')) from tableName;
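
    A sketch of the bigram case (n = 2), assuming PostgreSQL 9.3+ for LATERAL: indexing into each sentence's own word array means pairs never cross sentence boundaries.

      SELECT t.words[i] || ' ' || t.words[i + 1] AS bigram
      FROM (
          SELECT regexp_split_to_array(sentenceData, E'\\s+') AS words
          FROM tableName
      ) AS t
      CROSS JOIN LATERAL generate_subscripts(t.words, 1) AS i
      WHERE i < array_length(t.words, 1);

    Wrapping this in INSERT INTO ... SELECT would write the bigrams to another table; larger n generalizes by joining further offsets.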

  • Is it OK to re-create many SQL connections (SQL 2008)

    - by Mr. Flibble
    When performing many inserts into a database I would usually have code like this:

      using (var connection = new SqlConnection(connStr))
      {
          connection.Open();
          foreach (var item in items)
          {
              var cmd = new SqlCommand("INSERT ...");
              cmd.ExecuteNonQuery();
          }
      }

    I now want to shard the database and therefore need to choose the connection string based on the item being inserted. This would make my code run more like this:

      foreach (var item in items)
      {
          connStr = GetConnectionString(item);
          using (var connection = new SqlConnection(connStr))
          {
              connection.Open();
              var cmd = new SqlCommand("INSERT ...");
              cmd.ExecuteNonQuery();
          }
      }

    which basically means it's creating a new connection to the database for each item. Will this work, or will recreating connections for each insert cause terrible overhead?

  • How to use DML on an Oracle temporary table without generating much undo log

    - by Sambath
    Hi, using an Oracle temporary table does not generate as much redo log as a normal table; however, undo is still generated. How can I write insert, update, or delete statements on a temporary table such that Oracle will not generate undo, or will generate as little as possible? Moreover, will using /*+ APPEND */ in the insert statement generate little undo? Am I correct? If not, could anyone explain to me the use of the /*+ APPEND */ hint?

      INSERT /*+ APPEND */ INTO table1(...) VALUES(...);

    Thank you.
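
    A hedged note on the hint: APPEND triggers a direct-path insert for INSERT ... SELECT, writing above the high-water mark with minimal undo for the table data; for single-row VALUES inserts it is ignored before Oracle 11gR2, which added APPEND_VALUES. An illustrative sketch (staging_table is hypothetical):

      -- Direct-path insert: honored for the INSERT ... SELECT form.
      INSERT /*+ APPEND */ INTO table1 (col1, col2)
      SELECT col1, col2 FROM staging_table;
      -- Required before the same session can read the table again.
      COMMIT;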

  • Administrator account: Where, when and how?

    - by Pickels
    Where, when and how should one insert/create the administrator account for a website? Here are a few ways I have encountered in other websites/web applications:

      Installation wizard: You see this a lot in blog software or forums. When you install the application, it asks you to create an administrator user. Private web applications will most likely not have this.
      Installation file: A file you run to install your application. This file creates the administrator account for you.
      Configuration files: A configuration file holds the credentials for the administrator account.
      Manual insert: Manually insert the administrator info into the database.

  • Possible to get Primary Key IDs back after a SqlBulkCopy?

    - by chobo2
    Hi, I am using C# with SqlBulkCopy, and I have a problem. I need to do a mass insert into one table, then another mass insert into another table. The two tables have a PK/FK relationship.

      Table A: Field1 - PK, auto-incrementing (easy to do with SqlBulkCopy, as it is straightforward)
      Table B: Field1 - PK/FK - This field makes the relationship and is also the PK of this table. It is not auto-incrementing and needs to have the same id as the row in Table A.

    So these tables have a one-to-one relationship, but I am unsure how to get back all the PK ids that the mass insert created, since I need them for Table B.
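
    One common workaround, sketched for SQL Server 2008+ with hypothetical Staging and column names: bulk copy into a staging table, then move the rows with a set-based statement whose OUTPUT clause captures the generated identities. The never-true ON (1 = 0) predicate is what lets OUTPUT reference source columns, which a plain INSERT ... SELECT cannot.

      DECLARE @mapping table (NewId int, StagingId int);

      MERGE INTO TableA AS tgt
      USING Staging AS src
          ON (1 = 0)                 -- never matches, so every row inserts
      WHEN NOT MATCHED THEN
          INSERT (SomeColumn) VALUES (src.SomeColumn)
      OUTPUT inserted.Field1, src.StagingId INTO @mapping (NewId, StagingId);

      -- @mapping now pairs each staging row with its new identity,
      -- ready to drive the insert into Table B.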

  • SQL Server 2008: multiple inserts over 2 tables

    - by Rob
    I have the following trigger on my SQL Server 2008 database:

      CREATE TRIGGER tr_check_stoelen
      ON Passenger
      AFTER INSERT, UPDATE
      AS
      BEGIN
          IF EXISTS(
              SELECT 1
              FROM Passenger p
              INNER JOIN Inserted i ON i.flight = p.flight
              WHERE p.flight = i.flight AND p.seat = i.seat
          )
          BEGIN
              RAISERROR('Seat taken!', 16, 1)
              ROLLBACK TRAN
          END
      END

    The trigger throws errors when I try to run the query below. This query is supposed to insert two different passengers into the database on two different flights. I'm sure both seats aren't taken, but I can't figure out why the trigger is giving me the error. Does it have something to do with correlation?

      INSERT INTO passagier
      VALUES (13392, 5315, 3, 'Janssen Z', '2A', 'October 30, 2006 10:43', 'M'),
             (13333, 5316, 2, 'Janssen Q', '2A', 'October 30, 2006 11:51', 'V')
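
    A hedged observation with a sketch: an AFTER INSERT trigger already sees the new rows in the table, so the join matches every inserted row against itself and the check always fires. Counting rows per seat tolerates the self-match:

      IF EXISTS (
          SELECT 1
          FROM Passenger p
          INNER JOIN inserted i
              ON i.flight = p.flight AND i.seat = p.seat
          GROUP BY p.flight, p.seat
          HAVING COUNT(*) > 1        -- more than the row just inserted
      )
      BEGIN
          RAISERROR('Seat taken!', 16, 1)
          ROLLBACK TRAN
      END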

  • JavaScript Prototype best practice: event handlers

    - by nahum
    Hi, this question is more a consultation on best practice. Sometimes when I'm building a complete Ajax application I add elements dynamically. For example, when adding a list of items, I do something like:

      var template = new Template("<li id='list#{id}'>#{value}</li>");
      var arrayTemplate = [];
      arrayOfItem.each(function(item, index) {
          arrayTemplate.push(template.evaluate({ id: index, value: item }));
      });

    and after this, add the list via "update" or "insert":

      $("elementToUpdate").update("<ul>" + arrayTemplate.join("") + "</ul>");

    The question is how I can add the event handlers without reading the array again. If you try to add an event before the update or insert, you get an error because the element isn't yet in the DOM. So what I'm doing now, after the insert or update, is:

      arrayOfItem.each(function(item, index) {
          $("list" + index).observe("click", function() {
              alert("I see the world");
          });
      });

    So the question is: does a better way of doing this exist?

  • Overriding unique indexed values

    - by Yeti
    This is what I'm doing right now (name is UNIQUE):

      SELECT * FROM fruits WHERE name='apple';

    Then I check if the query returned any result. If yes, I don't do anything. If no, the new value has to be inserted:

      INSERT INTO fruits (name) VALUES ('apple');

    Instead of the above, is it OK to insert the value into the table without checking if it already exists? If the name already exists in the table, an error will be thrown, and if it doesn't, a new record will be inserted. Right now I am having to insert 500 records in a for loop, which results in 1000 queries. Will it be OK to skip the "already-exists" check?
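
    Assuming MySQL (which the syntax suggests), a sketch of two forms that turn a duplicate into a no-op instead of an error, removing the need for the prior SELECT:

      -- Skips rows that violate the unique index (note: IGNORE also
      -- downgrades some other errors to warnings).
      INSERT IGNORE INTO fruits (name) VALUES ('apple');

      -- Or: a deliberate no-op update on duplicates.
      INSERT INTO fruits (name) VALUES ('apple')
      ON DUPLICATE KEY UPDATE name = name;

    Either form also accepts a multi-row VALUES list, so the 500 records could go in far fewer than 500 statements.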

  • How can I add encoding to a Python-generated CSV file?

    - by user1958218
    I am following this post http://stackoverflow.com/a/9016545 and I want to know how I can do that in Python. I don't know how to insert the BOM data there. This is my current code:

      response = HttpResponse(content_type='text/csv')
      response['Content-Type'] = 'application/octet-stream'
      response['Content-Disposition'] = 'attachment; filename="results.csv"'
      writer = UnicodeWriter(response, quoting=csv.QUOTE_ALL, encoding="utf-8")

    I want to convert to UTF-16. The BOM data is shown below, but I don't know how to insert it. From here http://stackoverflow.com/a/4440143:

      echo "\xEF\xBB\xBF"; // UTF-8 BOM

    But I want it for Python and UTF-16. I tried opening that CSV in Notepad and inserting \xEF\xBB\xBF at the beginning, and Excel displayed it correctly. But it is also visible before the first column. How can I hide that? The user won't like it.

  • SQLite column can't be updated to any value but zero (REAL type)

    - by user1065320
    Hi, I'm trying to create a three-field table (date, text, real). I can insert and update the first two fields but can't set or update the last one.

      createSQL = @"CREATE TABLE IF NOT EXISTS ACTIVITIES (MYDATE DATETIME PRIMARY KEY, ACTIVITY TEXT, LENGTH REAL);";

    Trying to insert:

      char *update = "INSERT OR REPLACE INTO ACTIVITIES (MYDATE, ACTIVITY, LENGTH) VALUES (?, ?, ?);";
      sqlite3_stmt *stmt;
      if (sqlite3_prepare_v2(database, update, -1, &stmt, nil) == SQLITE_OK) {
          sqlite3_bind_double(stmt, 1, valueToWrite);
          sqlite3_bind_text(stmt, 2, [task UTF8String], -1, NULL);
          sqlite3_bind_double(stmt, 3, 1.5);
      }
      if (sqlite3_step(stmt) != SQLITE_DONE) {
          NSAssert1(0, @"Error updating table: %s", errorMsg);
      }
      sqlite3_finalize(stmt);
      sqlite3_close(database);

    The first two values are updated, but the last one remains 0 even if I write a value like 1.5 or pass a double variable with a different value. Thanks.

  • How to skip invalid rows while inserting data into the database

    - by Dinesh
    We have a statement that inserts some rows into a temporary table (say, 10 rows). While inserting the 5th row, one of the columns has a format issue, which raises an error, and the statement then stops inserting rows. What I want is for it to skip the error rows and insert the valid rows; for those error rows, it can skip the error column and insert a null value with a different status.

      create table #tb_pagecontent_value (pageid int, formid uniqueidentifier, id_field xml, fieldvalue xml, label_final xml)
      …
      …
      insert into #tb_pagecontent_xml
      select A.pageid, B.formid, A.PageData.query('/CPageDataXML/control')
      from Pagedata A
      inner join page B on A.PageId = B.PageId
      inner join FormAssociation C on B.FormId = C.FormId
      where B.pageid in (select pageId from jobs where jobtype='zba' and StatusFlag!=1)

    I want to apply that logic to the example above. Any help is appreciated.
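
    A sketch of one approach, assuming SQL Server 2012+ for TRY_CONVERT and assuming the problem column arrives as character data; the status column here is hypothetical. Converting per row means a bad value becomes NULL plus an error status instead of aborting the whole insert:

      insert into #tb_pagecontent_xml (pageid, formid, fieldvalue, status)
      select A.pageid,
             B.formid,
             TRY_CONVERT(xml, A.PageData),   -- NULL instead of an error
             case when TRY_CONVERT(xml, A.PageData) is null
                  then 'conversion_error' else 'ok' end
      from Pagedata A
      inner join page B on A.PageId = B.PageId;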

  • Error: Too Many Arguments Specified when Inserting Values from ASP.NET to SQL Server

    - by SidC
    Good afternoon, all. I have a wizard control that contains 20 textboxes for part numbers and another 20 for quantities. I want the part numbers and quantities loaded into the following table:

      USE [Diel_inventory]
      GO
      /****** Object: Table [dbo].[QUOTEDETAILPARTS] Script Date: 05/09/2010 16:26:54 ******/
      SET ANSI_NULLS ON
      GO
      SET QUOTED_IDENTIFIER ON
      GO
      CREATE TABLE [dbo].[QUOTEDETAILPARTS](
          [QuoteDetailPartID] [int] IDENTITY(1,1) NOT NULL,
          [QuoteDetailID] [int] NOT NULL,
          [PartNumber] [float] NULL,
          [Quantity] [int] NULL,
          CONSTRAINT [pkQuoteDetailPartID] PRIMARY KEY CLUSTERED
          (
              [QuoteDetailPartID] ASC
          ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
      ) ON [PRIMARY]
      GO
      ALTER TABLE [dbo].[QUOTEDETAILPARTS] WITH CHECK ADD CONSTRAINT [fkQuoteDetailID] FOREIGN KEY([QuoteDetailID])
      REFERENCES [dbo].[QUOTEDETAIL] ([ID])
      ON UPDATE CASCADE
      ON DELETE CASCADE
      GO

    Here's the snippet from my sproc for this insert:

      set @ID = scope_identity()
      insert into dbo.QuoteDetailParts (QuoteDetailPartID, QuoteDetailID, PartNumber, Quantity)
      values (@ID, @QuoteDetailPartID, @PartNumber, @Quantity)

    When I run the ASPX page, I receive an error that there are too many arguments specified for my stored procedure. I understand why I'm getting the error, given the above table layout. However, I need help in structuring my insert syntax to look for values in all 20 PartNumber and Quantity field pairs. Thanks, Sid
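
    One way to accept all 20 pairs in a single call, sketched under the assumption of SQL Server 2008+ table-valued parameters; the type, procedure, and parameter names are illustrative:

      CREATE TYPE dbo.PartQuantityList AS TABLE (PartNumber float, Quantity int);
      GO
      CREATE PROCEDURE dbo.InsertQuoteDetailParts
          @QuoteDetailID int,
          @Parts dbo.PartQuantityList READONLY
      AS
      BEGIN
          -- QuoteDetailPartID is IDENTITY, so it stays out of the column list.
          INSERT INTO dbo.QUOTEDETAILPARTS (QuoteDetailID, PartNumber, Quantity)
          SELECT @QuoteDetailID, PartNumber, Quantity
          FROM @Parts;
      END

    On the ASP.NET side, the 20 textbox pairs would be collected into a DataTable and passed as one structured parameter.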

  • Multi-value MySQL inserts using HibernateTemplate

    - by Langali
    I am using Spring's HibernateTemplate and need to insert hundreds of records into a MySQL database every second. I'm not sure what the most performant way of doing it is, but I am trying to see how multi-value MySQL inserts do with Hibernate.

      String query = "insert into user(age, name, birth_date) values(24, 'Joe', '2010-05-19 14:33:14'), (25, 'Joe1', '2010-05-19 14:33:14')";
      getHibernateTemplate().execute(new HibernateCallback() {
          public Object doInHibernate(Session session) throws HibernateException, SQLException {
              return session.createSQLQuery(query).executeUpdate();
          }
      });

    But I get this error: "could not execute native bulk manipulation query. Please check your query..." Any idea how I can use a multi-value MySQL insert with Hibernate? Or is my query incorrect? Are there any other ways I can improve the performance? I did try the saveOrUpdateAll() method, and that wasn't good enough!

  • MySQL ON DUPLICATE KEY UPDATE + subquery

    - by jwzk
    Using the answer from this question: http://stackoverflow.com/questions/662877/need-mysql-insert-select-query-for-tables-with-millions-of-records

      new_table
        * date
        * record_id (pk)
        * data_field

      INSERT INTO new_table (date, record_id, data_field)
      SELECT date, record_id, data_field FROM old_table
      ON DUPLICATE KEY UPDATE date=old_table.date, data_field=old_table.data_field;

    I need this to work with a GROUP BY and JOIN, so to edit:

      INSERT INTO new_table (date, record_id, data_field, value)
      SELECT date, record_id, data_field, SUM(other_table.value) AS value
      FROM old_table
      JOIN other_table USING (record_id)
      ON DUPLICATE KEY UPDATE date=old_table.date, data_field=old_table.data_field, value = value;

    I can't seem to get the value updated. If I specify old_table.value I get a "not defined in field list" error.
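
    A sketch of what usually resolves this in MySQL: inside ON DUPLICATE KEY UPDATE, VALUES(col) refers to the value the SELECT would have inserted, which covers computed columns like the SUM; note the aggregate also needs a GROUP BY:

      INSERT INTO new_table (date, record_id, data_field, value)
      SELECT o.date, o.record_id, o.data_field, SUM(t.value) AS value
      FROM old_table o
      JOIN other_table t USING (record_id)
      GROUP BY o.record_id
      ON DUPLICATE KEY UPDATE
          date = VALUES(date),
          data_field = VALUES(data_field),
          value = VALUES(value);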

  • How to implement Ajax in Ruby on Rails via jQuery

    - by swaroopsm
    How do I pass a few of my form field values to a controller using Ajax/jQuery? For example, in PHP/jQuery I do something like this:

      $("#test-btn").click(function() {
          var name = $("#name").val();
          var age = $("#age").val();
          $.post("insert.php", { name: name, age: age }, function(data) {
              $("#respone").html(data).hide().fadeIn(500);
          });
      });

      // insert.php
      <?php
          // insert values into the database!
      ?>

    How do I achieve similar functionality in Rails using Ajax/jQuery?
