Search Results

Search found 26993 results on 1080 pages for 'multiple insert'.

Page 413 of 1080

  • TSQL ID generation

    - by Markus
    Hi. I have a question regarding locking in TSQL. Suppose I have the following table: A(int id, varchar name), where id is the primary key but is NOT an identity column. I want to use the following pseudocode to insert a value into this table: lock (A); uniqueID = GenerateUniqueID(); insert into A values (uniqueID, somename); unlock(A). How can this be accomplished in TSQL? The computation of the next id should be done with table A locked, to prevent other sessions from running the same operation at the same time and getting the same id.
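
    One common pattern, sketched below, serializes writers by taking an exclusive table lock inside a transaction; MAX(id) + 1 stands in for GenerateUniqueID() here and is an assumption, not the asker's generator.

        BEGIN TRANSACTION;
        -- TABLOCKX + HOLDLOCK holds an exclusive table lock until COMMIT,
        -- so no other session can read a stale MAX(id) in between.
        DECLARE @id int;
        SELECT @id = ISNULL(MAX(id), 0) + 1
        FROM A WITH (TABLOCKX, HOLDLOCK);
        INSERT INTO A (id, name) VALUES (@id, 'somename');
        COMMIT TRANSACTION;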

    Read the article

  • MySQL Locking Up

    - by Ian
    I've got an InnoDB table that gets a lot of reads and almost no writes (roughly 1 write for every 400,000 reads). I'm running into a pretty big problem, though, when I INSERT into the table: MySQL completely locks up. It uses 100% CPU, and every other table (even in other databases) has its status set to "Locked" until the INSERT is done. This is a big problem because MySQL stays locked up for up to 4 minutes. I'm using version 5.1.47 (RPM from mysql.com). Any ideas?
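
    Capturing the server's state during the stall is usually the fastest way to narrow this down; a minimal diagnostic sketch, run from a second connection while the INSERT hangs:

        -- What is every connection doing, and what is it blocked on?
        SHOW FULL PROCESSLIST;
        -- InnoDB's own view: lock waits, pending I/O, semaphore contention
        SHOW ENGINE INNODB STATUS\G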

    Read the article

  • php push 2d array into mysql

    - by john
    Hi all. I can't seem to get my head around this despite the number of examples I've read. Basically I have a 2D array and want to insert it into MySQL. The array contains a few strings. I can't get the following to work: $value = addslashes(serialize($temp3)); // temp3 is my 2D array; do I need to use keys? (I am not at the moment) $query = "INSERT INTO table sip (id,keyword,data,flags) VALUES(\"$value\")"; mysql_query($query) or die("Failed Query"); Thanks, guys.
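
    Two things stand out in that statement: "INSERT INTO table sip" names the table twice (the keyword is just INSERT INTO sip), and four columns are listed against a single value. A hedged corrected sketch, assuming id auto-increments and data is the column meant to hold the serialized array (both assumptions):

        // $temp3 is the 2D array from the question; serialize it, then escape it
        $value = mysql_real_escape_string(serialize($temp3));
        // One value per listed column; id is omitted on the assumption that it
        // auto-increments, and 'mykeyword' / 0 are placeholder values.
        $query = "INSERT INTO sip (keyword, data, flags)
                  VALUES ('mykeyword', '$value', 0)";
        mysql_query($query) or die(mysql_error());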

    Read the article

  • linq to sql string property from non-null column with default

    - by Barry Fandango
    I have a LINQ to SQL class "VoucherRecord" based on a simple table. One property "Note" is a string that represents an nvarchar(255) column, which is non-nullable and has a default value of empty string (''). If I instantiate a VoucherRecord the initial value of the Note property is null. If I add it using a DataContext's InsertOnSubmit method, I get a SQL error message: Cannot insert the value NULL into column 'Note', table 'foo.bar.tblVoucher'; column does not allow nulls. INSERT fails. Why isn't the database default kicking in? What sort of query could bypass the default anyway? How do I view the generated sql for this action? Thanks for your help!
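
    One way out, sketched below for a standard LINQ to SQL DataContext (the db variable and VoucherRecords table property name are assumptions): initialize the property yourself, because InsertOnSubmit always sends every mapped column, and the database default only fires for columns an INSERT omits entirely. DataContext.Log answers the "view the generated SQL" question.

        var voucher = new VoucherRecord();
        // Initialize Note so the generated INSERT sends '' instead of NULL;
        // alternatively, mark the column Auto Generated in the designer.
        voucher.Note = string.Empty;
        db.VoucherRecords.InsertOnSubmit(voucher);
        db.Log = Console.Out;   // prints the generated SQL for this action
        db.SubmitChanges();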

    Read the article

  • Bitwise Shifting in C

    - by user313943
    I've recently decided to undertake an SMS project for sending and receiving SMS through a mobile. The data is sent in PDU format - I am required to change ASCII characters to 7-bit GSM alphabet characters. To do this I've come across several examples, such as http://www.dreamfabric.com/sms/hello.html This example shows the rightmost bits of the second septet being inserted into the first septet, to create an octet. Bitwise shifts alone do not cause this to happen, as >> inserts zeros on the left and << inserts them on the right. As I understand it, I need some kind of bitwise rotate to achieve this - can anyone tell me how to move bits from the right-hand side and insert them on the left? Thanks,
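
    No rotate is actually needed: each output octet combines the current septet shifted right with the low bits of the next septet shifted left into the top. A minimal C sketch of that packing, checked against the dreamfabric "hello" example (68 65 6C 6C 6F packs to E8 32 9B FD 06):

        #include <stdint.h>
        #include <stddef.h>

        /* Pack 7-bit septets into octets, GSM 03.38 style: each output byte
         * is the current septet shifted right, with the rightmost bits of
         * the next septet shifted in from the left. */
        size_t pack7(const uint8_t *sept, size_t n, uint8_t *out)
        {
            size_t o = 0;
            int shift = 0;
            for (size_t i = 0; i < n; i++) {
                if (shift == 7) {      /* this septet was fully absorbed */
                    shift = 0;         /* by the previous output byte    */
                    continue;
                }
                uint8_t next = (i + 1 < n) ? sept[i + 1] : 0;
                out[o++] = (uint8_t)((sept[i] >> shift) | (next << (7 - shift)));
                shift++;
            }
            return o;   /* number of octets written */
        }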

    Read the article

  • SQL inline query with parameters - parameter is not read when the query is executed

    - by fzshah76
    Hi all: I am having a problem with my SQL query in C#. Basically it's an inline query with parameters, but when I run it, it tells me that a parameter is missing. Here is my query, declared at the top of the page as public:

        public const string InsertStmtUsersTable =
            "insert into Users (username, password, email, userTypeID, memberID, CM7Register) " +
            "Values(@username, @password, @email, @userTypeID, @memberID, @CM7Register); select @@identity";

    This is my code for assigning the parameters (I know I'm having a problem, so I am assigning the params twice):

        Username = (cmd.Parameters["@username"].Value = row["username"].ToString()) as string;
        cmd.Parameters["@username"].Value = row["username"].ToString();

    One method calls this query and tries to insert into the table; here is the code:

        Result = Convert.ToInt32(SqlHelper.ExecuteScalar(con, CommandType.Text, InsertStmtUsersTable));

    The exact error message is: Must declare the variable '@username'.
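
    That error usually means the statement text was executed without the parameter collection attached - the cmd object the parameters were set on is never passed to ExecuteScalar. Assuming the Data Access Application Block's SqlHelper (whose ExecuteScalar has an overload taking a SqlParameter array), a sketch:

        SqlParameter[] ps =
        {
            new SqlParameter("@username",    row["username"].ToString()),
            new SqlParameter("@password",    row["password"].ToString()),
            new SqlParameter("@email",       row["email"].ToString()),
            new SqlParameter("@userTypeID",  row["userTypeID"]),
            new SqlParameter("@memberID",    row["memberID"]),
            new SqlParameter("@CM7Register", row["CM7Register"])
        };
        // Pass the parameters along with the statement; executing the text
        // alone leaves @username undeclared on the server.
        Result = Convert.ToInt32(
            SqlHelper.ExecuteScalar(con, CommandType.Text, InsertStmtUsersTable, ps));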

    Read the article

  • SQLite Delphi throwing an incorrect exception

    - by NeoNMD
    "Project CompetitionServer.exe raised exception class ESQLiteException with message 'Error executing SQL. Error [1]: SQL error or missing database. "INSERT INTO MatchesTable(MatchesID,RoundID,AkaFirstName,AoFirstName)VALUES(1,2,p,o)": no such column: p'. Process stopped. Use Step or Run to continue." This is clear to everone here that the notice is correct, p is NOT a column, it is some data I am trying to insert. What is the problem here?

    Read the article

  • Inserting into a bitstream

    - by evilertoaster
    I'm looking for a way to efficiently insert bits into a bitstream and have it 'overflow', padding with 0's. So for example, if you had a byte array with 2 bytes, 231 and 109 (11100111 01101101), and did BitInsert(byteArray,4,00), it would insert two bits at bit offset 4, making 11100001 11011011 01000000 (225, 219, 64). It would be fine even if the method only allowed 1-bit insertions, e.g. BitInsert(byteArray,4,true) or BitInsert(byteArray,4,false). I have one method of doing it, but it has to walk the stream with a bitmask bit by bit, so I'm wondering if there's a simpler approach... Answers in assembly or a C derivative would be appreciated.
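
    A sketch of the single-bit form in C: only the byte containing the offset needs bit surgery; everything after it shifts one position right a whole byte at a time, carrying each byte's low bit into the next. Calling it twice with bit 0 at offset 4 reproduces the example's (225, 219, 64). The name and signature are illustrative, not an established API.

        #include <stdint.h>
        #include <stddef.h>

        /* Insert one bit at bitOffset (bits numbered MSB-first, as in the
         * example); everything after it shifts right, overflowing into the
         * stream's zero padding. buf must have room for the extra bit. */
        void bit_insert(uint8_t *buf, size_t nbytes, size_t bitOffset, int bit)
        {
            size_t byte = bitOffset / 8;
            int    off  = bitOffset % 8;
            uint8_t carry = buf[byte] & 1;                       /* bit pushed out */
            uint8_t high  = buf[byte] & (uint8_t)(0xFF << (8 - off));
            uint8_t low   = (uint8_t)((buf[byte] & (0xFF >> off)) >> 1);
            buf[byte] = high | (uint8_t)(bit ? (0x80 >> off) : 0) | low;
            for (size_t i = byte + 1; i < nbytes; i++) {         /* ripple the carry */
                uint8_t next = buf[i] & 1;
                buf[i] = (uint8_t)((buf[i] >> 1) | (carry << 7));
                carry = next;
            }
        }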

    Read the article

  • How do I make OneNote 2007 images searchable when inserted via code?

    - by Scott Bruns
    When I insert an image into OneNote 2007 using C#, my images have 'Make Text in Image Searchable' set to Disabled. How do I insert an image with 'Make Text in Image Searchable' enabled, or how do I enable this property after the image is imported? I have already imported a lot of images - how do I make those existing images searchable? I already know how to do this manually by right-clicking the image and setting the language. The OCR works fine; I just need to do it automatically.
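
    Since the manual fix is setting the image's language, the programmatic equivalent is presumably setting the OCR language on the one:Image element in the page XML (retrieved via GetPageContent) and writing it back with UpdatePageContent. A heavily hedged sketch - the lang attribute is an assumption about the OneNote 2007 schema, so verify it against the XML of a page where the option was enabled by hand:

        <!-- Hypothetical: adjust the OCR language on each image element in the
             XML returned by GetPageContent, then call UpdatePageContent -->
        <one:Image format="png" lang="en-US">
          <one:Data><!-- base64 image data --></one:Data>
        </one:Image>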

    Read the article

  • Copying data between tables without an identity column

    - by user668479
    I have two tables and I need to copy the data across from SRCServiceUsers to Clients. Every time I run it I get the following: Violation of PRIMARY KEY constraint 'PK_Clients'. Cannot insert duplicate key in object 'dbo.Clients'. The statement has been terminated. The primary key ClientID field is not an identity column and therefore requires filling. To date I have the following:

        insert into Clients (ClientID, Title, Forenames, FamilyName, [Address],
                             Town, County, PostCode, PhoneNumber, StartDate)
        SELECT (Select Max(Clients.ClientID) + 1),
               SRCServiceUsers.Title,
               SRCServiceUsers.[First Names],
               SRCServiceUsers.Surname,
               -- BUILD UP MULTIPLE COLUMNS
               SRCServiceUsers.[Property Name] + ', ' + SRCServiceUsers.Street
                   + ', ' + SRCServiceUsers.Suburb as [Address],
               SRCServiceUsers.Town,
               SRCServiceUsers.County,
               SRCServiceUsers.Postcode,
               SRCServiceUsers.Telephone,
               SRCServiceUsers.[Start Date]
        From SRCServiceUsers

    How can I auto-increment the PK field ClientID when inserting the data? Many thanks, Andrew
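
    Since ClientID isn't an identity, each inserted row needs its own value; MAX(ClientID) + 1 computed once gives every row the same key, hence the duplicate-key error. A sketch using ROW_NUMBER() over the source rows to offset from the current maximum (ordering by Surname is an arbitrary choice):

        INSERT INTO Clients (ClientID, Title, Forenames, FamilyName, [Address],
                             Town, County, PostCode, PhoneNumber, StartDate)
        SELECT (SELECT ISNULL(MAX(ClientID), 0) FROM Clients)
                 + ROW_NUMBER() OVER (ORDER BY su.Surname),
               su.Title, su.[First Names], su.Surname,
               su.[Property Name] + ', ' + su.Street + ', ' + su.Suburb,
               su.Town, su.County, su.Postcode, su.Telephone, su.[Start Date]
        FROM SRCServiceUsers AS su;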

    Read the article

  • Java Hibernate id auto increment

    - by vinise
    Hi. I have a little problem with Hibernate on NetBeans. I have a table with an auto-increment id:

        CREATE TABLE "DVD" (
            "DVD_ID" INT not null primary key
                GENERATED ALWAYS AS IDENTITY (START WITH 1, INCREMENT BY 1),
            "TITLE" VARCHAR(150),
            "COM" LONG VARCHAR,
            "COVER" VARCHAR(150)
        );

    But this auto-increment is not properly detected by reverse engineering. I get a map file with this:

        <id name="dvdId" type="int">
            <column name="DVD_ID" />
            <generator class="assigned" />
        </id>

    I've looked on Google and on this site... found some stuff, but I'm still stuck. I've tried adding insert="false" update="false" in the map file, but I get back:

        Caused by: org.xml.sax.SAXParseException:
            Attribute "insert" must be declared for element type "id".

    Any help would be appreciated. Vincent
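
    insert and update are attributes of <property>, not <id>, which is why the parser rejects them. For a database-generated identity column the usual fix is the generator class itself; a sketch of the corrected mapping:

        <id name="dvdId" type="int">
            <column name="DVD_ID" />
            <!-- "identity" (or "native") tells Hibernate the database
                 assigns the value, so no id needs to be set before save -->
            <generator class="identity" />
        </id>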

    Read the article

  • Iterating Over <select> Using jQuery + Multi Select

    - by Kezzer
    This isn't quite as straightforward as one may think. I'm using a plugin called jQuery MultiSelect, and multiple <select> elements are generated using XSLT as follows:

        <xsl:for-each select="RootField">
          <select id="{RootField}" multiple="multiple" size="3">
            <option value=""></option>
            <xsl:for-each select="ChildField">
              <option value="{ChildField}"><xsl:value-of select="ChildField"/></option>
            </xsl:for-each>
          </select>
        </xsl:for-each>

    The accompanying JavaScript is as follows:

        var selects = document.getElementsByTagName("select");
        $.each(selects, function() {
            $(this).multiSelect();
        });

    This allows me to apply the multiSelect() function to every single <select> on the page. The behaviour is quite strange: every other <select> is being changed into the dropdown list (all the even ones, anyway). I can't see anything wrong in my JavaScript to cause this issue, as it would iterate over every single one. To make it more clear, the only lists that have that JavaScript applied are the ones in positions 2, 4, 6 and 8 (out of the 9 on the page). Any ideas?
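
    The every-other pattern is the classic live-collection bug: getElementsByTagName returns a live NodeList, so if multiSelect() removes or replaces each <select> as it runs, the list reindexes under the iteration and alternate elements get skipped. A sketch of the usual fix - iterate over a static snapshot instead:

        // jQuery's $('select') is a static snapshot, so transforming one
        // element no longer shifts the rest out from under the loop.
        $('select').each(function () {
            $(this).multiSelect();
        });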

    Read the article

  • Optimizing python code performance when importing zipped csv to a mongo collection

    - by mark
    I need to import a zipped csv into a mongo collection, but there is a catch - every record contains a timestamp in Pacific Time, which must be converted to the local time corresponding to the (longitude, latitude) pair found in the same record. The code looks like so:

        def read_csv_zip(path, timezones):
            with ZipFile(path) as z, z.open(z.namelist()[0]) as input:
                csv_rows = csv.reader(input)
                header = csv_rows.next()
                check, converters = get_aux_stuff(header)
                for csv_row in csv_rows:
                    if check(csv_row):
                        row = {
                            converter[0]: converter[1](value)
                            for converter, value in zip(converters, csv_row)
                            if allow_field(converter)
                        }
                        ts = row['ts']
                        lng, lat = row['loc']
                        found_tz_entry = timezones.find_one(SON({'loc': {'$within': {'$box': [
                            [lng - tz_lookup_radius, lat - tz_lookup_radius],
                            [lng + tz_lookup_radius, lat + tz_lookup_radius]]}}}))
                        if found_tz_entry:
                            tz_name = found_tz_entry['tz']
                            local_ts = ts.astimezone(timezone(tz_name)).replace(tzinfo=None)
                            row['tz'] = tz_name
                        else:
                            local_ts = (ts.astimezone(utc) + timedelta(hours=int(lng / 15))).replace(tzinfo=None)
                        row['local_ts'] = local_ts
                        yield row

        def insert_documents(collection, source, batch_size):
            while True:
                items = list(itertools.islice(source, batch_size))
                if len(items) == 0:
                    break
                try:
                    collection.insert(items)
                except:
                    for item in items:
                        try:
                            collection.insert(item)
                        except Exception as exc:
                            print("Failed to insert record {0} - {1}".format(item['_id'], exc))

        def main(zip_path):
            with Connection() as connection:
                data = connection.mydb.data
                timezones = connection.timezones.data
                insert_documents(data, read_csv_zip(zip_path, timezones), 1000)

    The code proceeds as follows:

    1. Every record read from the csv is checked and converted to a dictionary, where some fields may be skipped, some titles renamed (from those appearing in the csv header), and some values converted (to datetime, to integers, to floats, etc.).
    2. For each record read from the csv, a lookup is made into the timezones collection to map the record location to the respective time zone.
    3. If the mapping is successful, that time zone is used to convert the record timestamp (Pacific Time) to the respective local timestamp.
    4. If no mapping is found, a rough approximation is calculated.

    The timezones collection is appropriately indexed, of course - calling explain() confirms it. The process is slow. Naturally, having to query the timezones collection for every record kills the performance. I am looking for advice on how to improve it. Thanks.

    EDIT: The timezones collection contains 8176040 records, each containing four values:

        > db.data.findOne()
        { "_id" : 3038814, "loc" : [ 1.48333, 42.5 ], "tz" : "Europe/Andorra" }

    EDIT 2: OK, I have compiled a release build of http://toblerity.github.com/rtree/ and configured the rtree package. Then I created an rtree dat/idx pair of files corresponding to my timezones collection, so instead of calling collection.find_one I call index.intersection. Surprisingly, not only is there no improvement, it works even more slowly now! Maybe rtree could be fine-tuned to load the entire dat/idx pair into RAM (704M), but I do not know how to do it. Until then, it is not an alternative. In general, I think the solution should involve parallelization of the task.

    EDIT 3: Profile output when using collection.find_one:

        >>> p.sort_stats('cumulative').print_stats(10)
        Tue Apr 10 14:28:39 2012    ImportDataIntoMongo.profile

        64549590 function calls (64549180 primitive calls) in 1231.257 seconds

        Ordered by: cumulative time
        List reduced from 730 to 10 due to restriction <10>

        ncalls  tottime  percall  cumtime  percall filename:lineno(function)
             1    0.012    0.012 1231.257 1231.257 ImportDataIntoMongo.py:1(<module>)
             1    0.001    0.001 1230.959 1230.959 ImportDataIntoMongo.py:187(main)
             1  853.558  853.558  853.558  853.558 {raw_input}
             1    0.598    0.598  370.510  370.510 ImportDataIntoMongo.py:165(insert_documents)
        343407    9.965    0.000  359.034    0.001 ImportDataIntoMongo.py:137(read_csv_zip)
        343408    2.927    0.000  287.035    0.001 c:\python27\lib\site-packages\pymongo\collection.py:489(find_one)
        343408    1.842    0.000  274.803    0.001 c:\python27\lib\site-packages\pymongo\cursor.py:699(next)
        343408    2.542    0.000  271.212    0.001 c:\python27\lib\site-packages\pymongo\cursor.py:644(_refresh)
        343408    4.512    0.000  253.673    0.001 c:\python27\lib\site-packages\pymongo\cursor.py:605(__send_message)
        343408    0.971    0.000  242.078    0.001 c:\python27\lib\site-packages\pymongo\connection.py:871(_send_message_with_response)

    Profile output when using index.intersection:

        >>> p.sort_stats('cumulative').print_stats(10)
        Wed Apr 11 16:21:31 2012    ImportDataIntoMongo.profile

        41542960 function calls (41542536 primitive calls) in 2889.164 seconds

        Ordered by: cumulative time
        List reduced from 778 to 10 due to restriction <10>

        ncalls  tottime  percall  cumtime  percall filename:lineno(function)
             1    0.028    0.028 2889.164 2889.164 ImportDataIntoMongo.py:1(<module>)
             1    0.017    0.017 2888.679 2888.679 ImportDataIntoMongo.py:202(main)
             1 2365.526 2365.526 2365.526 2365.526 {raw_input}
             1    0.766    0.766  502.817  502.817 ImportDataIntoMongo.py:180(insert_documents)
        343407    9.147    0.000  491.433    0.001 ImportDataIntoMongo.py:152(read_csv_zip)
        343406    0.571    0.000  391.394    0.001 c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:384(intersection)
        343406  379.957    0.001  390.824    0.001 c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:435(_intersection_obj)
        686513   22.616    0.000   38.705    0.000 c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:451(_get_objects)
        343406    6.134    0.000   33.326    0.000 ImportDataIntoMongo.py:162(<dictcomp>)
           346    0.396    0.001   30.665    0.089 c:\python27\lib\site-packages\pymongo\collection.py:240(insert)

    EDIT 4: I have parallelized the code, but the results are still not very encouraging. I am convinced it could be done better. See my own answer to this question for details.
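
    One further idea worth trying, sketched under the assumption that nearby records share a time zone: memoize the lookup on a coarse coordinate grid, so the Mongo round-trip happens once per grid cell rather than once per record. The 0.5-degree cell size is an arbitrary starting point to tune against tz_lookup_radius.

        tz_cache = {}

        def lookup_tz(timezones, lng, lat):
            # Round to a 0.5-degree cell; all records in a cell share one lookup
            key = (int(lng * 2), int(lat * 2))
            if key not in tz_cache:
                tz_cache[key] = timezones.find_one(SON({'loc': {'$within': {'$box': [
                    [lng - tz_lookup_radius, lat - tz_lookup_radius],
                    [lng + tz_lookup_radius, lat + tz_lookup_radius]]}}}))
            return tz_cache[key]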

    Read the article

  • When I click on a checkbox, the image should be hidden, though I can't make it happen somehow

    - by user309381
    function Psend() {
        new Ajax.Request('Handler.ashx', {
            method: 'get',
            onSuccess: function(transport) {
                var response = transport.responseText || "no response text";
                //alert("Success! \n\n" + response);
                var obj = response.evalJSON(true);
                for (i = 0; i < 4; i++) {
                    DeCheBX = $('MyDiv').insert(new Element('input', {
                        'type': 'checkbox',
                        'id': "Img" + obj[i].Nam,
                        'value': obj[i].IM,
                        'onClick': 'SayHi(this,i)'
                    }));
                    document.body.appendChild(DeCheBX);
                    DeImg = $('MyDiv').insert(new Element('img', {
                        'id': "img" + obj[i].Nam,
                        'src': obj[i].IM
                    }));
                    document.body.appendChild(DeImg);
                    SayHi = function(x, i) {
                        try {
                            if ($(x).checked == true) {
                                img = "img" + obj[i].Nam;
                                alert(img);
                                $('img').hide();
                            }
                        } catch (e) {
                            alert("error");
                        }
                    };
                }
            },
            onFailure: function() { alert('Something went wrong...') }
        });
    }
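
    Two things stand out (this is Prototype's $, matching the question): the inline 'SayHi(this,i)' string is evaluated at click time, when the global i has already run to 4, and $('img') looks up an element whose id is literally "img" rather than the one just created. A hedged sketch of the fix - bind the index per iteration and hide by the generated id:

        for (var i = 0; i < 4; i++) {
            (function(idx) {
                var box = new Element('input', {
                    type: 'checkbox', id: 'Img' + obj[idx].Nam, value: obj[idx].IM
                });
                box.observe('click', function() {
                    if (box.checked) {
                        $('img' + obj[idx].Nam).hide();   // the <img> created below
                    }
                });
                $('MyDiv').insert(box);
                $('MyDiv').insert(new Element('img', {
                    id: 'img' + obj[idx].Nam, src: obj[idx].IM
                }));
            })(i);   // idx captures this iteration's value of i
        }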

    Read the article

  • Problem with SQL query - variable value not being inserted

    - by Kaustubh
    The old query works, the new one doesn't. The Android logcat gives me the error: Failure 1: no such column abcname. abcname is the value of an EditText that I'm trying to get from a popup in Android.

    Old query:

        MapsActivity.myDB.execSQL("INSERT INTO " + MapsActivity.TableName
            + " (name, user_comment, latitude, longitude) "
            + " VALUES ('tagName','tagComment','tagLatitude','tagLongitude');");

    New query:

        MapsActivity.myDB.execSQL("INSERT INTO " + MapsActivity.TableName
            + " (name, user_comment, latitude, longitude) "
            + " VALUES (" + tagName + ",'tagComment','tagLatitude','tagLongitude');");

    What is the problem?
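
    The concatenated value lands in the SQL unquoted, so SQLite parses abcname as a column reference. A sketch of the usual fix, binding the values instead (which also handles quotes inside the text):

        // execSQL(String, Object[]) binds each ? as a literal, so the
        // EditText value can never be read as a column name
        MapsActivity.myDB.execSQL(
            "INSERT INTO " + MapsActivity.TableName +
            " (name, user_comment, latitude, longitude) VALUES (?, ?, ?, ?);",
            new Object[] { tagName, tagComment, tagLatitude, tagLongitude });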

    Read the article

  • Named previously unnamed branch

    - by Jab
    It seems naming a previously unnamed branch doesn't really work out. It creates a nasty multiple-heads problem that I can't find a solution for. Here is the workflow... UserA starts working on a feature that they expect to be small, so they just start working (off the default branch). The change turns out to be a large project and will need multiple contributors. So UserA issues hg branch "Feature1" and continues working, committing locally as needed. UserA then pulls down the changes from the central repo so he can push. At this point, why does hg heads return 3 heads? It shows 2 for default and 1 for Feature1. The first head for default is the latest change by another user on the branch (irrelevant). The second default head is the commit prior to the hg branch "Feature1" commit. The central repository has rules enforced so that only 1 head per branch is allowed, so forcing a push isn't an option. The repo doesn't want multiple heads on the default branch. UserA should be able to push these changes so that other users can see the Feature1 branch and help out. I can't seem to find a way to "correct" this. I don't think I can rewrite the branch of the initial commits for the feature, before it was a named branch. I know the initial changes before the named branch are technically on the default branch, but does that mean they will be heads until that Feature1 branch is merged?
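
    A hedged sketch of the usual way out: the pre-branch commits really are on default, so the extra default head is expected; merging default's two heads clears it, and the named branch then needs to be published explicitly (the --new-branch flag assumes a reasonably recent Mercurial):

        hg pull
        hg heads                    # 2 heads on default + 1 on Feature1
        hg update default           # move to one default head
        hg merge                    # merge in the other default head
        hg commit -m "merge heads on default"
        hg push --new-branch        # required once to publish Feature1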

    Read the article

  • PHP error - can't figure it out, something to do with SQL stuff I think

    - by MrEnder
    OK, the error is showing up somewhere in this code:

        if ($error == false) {
            $query = pg_query("INSERT INTO chatterlogins
                (firstName, lastName, gender, password, ageMonth, ageDay, ageYear, email, createDate)
                VALUES ('$firstNameSignup', '$lastNameSignup', '$genderSignup', md5('$passwordSignup'),
                        $monthSignup, $daySignup, $yearSignup, '$emailSignup', now());");
            $query = pg_query("INSERT INTO chatterprofileinfo (email, lastLogin)
                VALUES ('$email', now())";);
            $_SESSION['$userNameSet'] = $email;
            header('Location: signup_step2.php'.$rdruri);
        }

    Can anyone see what I did wrong? Sorry for being so unspecific, but I've been staring at it for 10 minutes and I can't figure it out.
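
    The second pg_query call closes its string one character early: in now())";); the closing double quote lands before the call's closing parenthesis, which is a PHP parse error. A corrected sketch of that line (whether $email or $emailSignup is the intended variable is a separate thing worth checking, since only $emailSignup appears in the first query):

        $query = pg_query("INSERT INTO chatterprofileinfo (email, lastLogin)
                           VALUES ('$emailSignup', now())");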

    Read the article

  • g_tree_insert overwrites all data

    - by pechenie
    I wonder how I should use the GTree (from GLib) to store data. Every new value I insert into the GTree with the g_tree_insert routine overwrites the previous one!

        GTree *tree;
        //init
        tree = g_tree_new( g_str_equal ); //"g_str_equal" is a GLib default compare func
        //...
        for( i = 0; i < 100; ++i )
            g_tree_insert( tree, random_key(), random_value() ); //insert some random vals
        //
        printf( "%d", g_tree_nnodes( tree ) ); //should be 100? NO! Prints "1"!!!

    What am I doing wrong? Thank you.
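
    g_str_equal is a GEqualFunc meant for hash tables - it returns TRUE/FALSE, while g_tree_new wants a GCompareFunc with strcmp-style ordering (negative/zero/positive). Fed a boolean, the tree sees every unequal pair of keys as "equal" and overwrites the single node, which is exactly the nnodes == 1 symptom. A sketch of the fix:

        #include <glib.h>
        #include <string.h>

        /* strcmp-style ordering, as GCompareFunc requires */
        static gint compare_str(gconstpointer a, gconstpointer b)
        {
            return strcmp((const char *) a, (const char *) b);
        }

        /* at init time: */
        GTree *tree = g_tree_new(compare_str);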

    Read the article

  • Thread Safety of C# List<T> for readers

    - by ILIA BROUDNO
    I am planning to create the list once in a static constructor and then have multiple instances of that class read it (and enumerate through it) concurrently, without doing any locking. In this article http://msdn.microsoft.com/en-us/library/6sh2ey19.aspx MS describes the issue of thread safety as follows: "Public static (Shared in Visual Basic) members of this type are thread safe. Any instance members are not guaranteed to be thread safe. A List<T> can support multiple readers concurrently, as long as the collection is not modified. Enumerating through a collection is intrinsically not a thread-safe procedure. In the rare case where an enumeration contends with one or more write accesses, the only way to ensure thread safety is to lock the collection during the entire enumeration. To allow the collection to be accessed by multiple threads for reading and writing, you must implement your own synchronization." The "Enumerating through a collection is intrinsically not a thread-safe procedure" statement is what worries me. Does this mean that it is thread safe for a readers-only scenario, but only as long as you do not use enumeration? Or is it safe for my scenario?
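
    A sketch of the pattern as described, with the freeze made explicit: build the list fully inside the static constructor and never mutate it afterwards. Concurrent enumeration is only hazardous when it races a writer, so a never-modified list can be read and enumerated from any number of threads; wrapping it read-only makes the intent enforceable.

        using System.Collections.Generic;
        using System.Collections.ObjectModel;

        public static class Lookup
        {
            // Built once in the static constructor; never modified afterwards
            private static readonly ReadOnlyCollection<string> items;

            static Lookup()
            {
                items = new List<string> { "alpha", "beta", "gamma" }.AsReadOnly();
            }

            // Safe for any number of concurrent readers and enumerations,
            // because no writer can ever contend with them
            public static ReadOnlyCollection<string> Items
            {
                get { return items; }
            }
        }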

    Read the article

  • Tracing Erlang Functions - Short forms

    - by Roberto Aloi
    As you might know, it's now possible to trace Erlang functions by using the short form: dbg:tpl(Module, Function, x). instead of the usual: dbg:tpl(Module, Function, dbg:fun2ms(fun(_) -> exception_trace() end)). I'm actually wondering if a similar short form is available for return_trace(). Something like: dbg:tpl(Module, Function, r). instead of: dbg:tpl(Module, Function, dbg:fun2ms(fun(_) -> return_trace() end)). The source code in the dbg module seems to suggest not:

        new_pattern_table() ->
            PT = ets:new(dbg_tab, [ordered_set, public]),
            ets:insert(PT, {x, term_to_binary([{'_',[],[{exception_trace}]}])}),
            ets:insert(PT, {exception_trace, term_to_binary(x)}),
            PT.

    But I might be wrong. Do you know of any?
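
    Worth noting: x is the only alias seeded into that table, so no r-style short form appears to exist in this version. Also, exception_trace() behaves like return_trace() but additionally reports exceptions, so the x form may already cover the need. Failing that, a hedged workaround sketch - build the match spec once and reuse it:

        %% In the shell; inside a module, fun2ms needs
        %% -include_lib("stdlib/include/ms_transform.hrl").
        Ms = dbg:fun2ms(fun(_) -> return_trace() end),
        dbg:tpl(Module, Function, Ms).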

    Read the article

  • Drag and drop an image from desktop to a web text editor (implementation in javascript)

    - by fatmatto
    I tried to write a reasonably short title, but I failed, I guess. Hi everybody, here's what I'm trying to do: I want to implement a web text editor able to recognize when the user drags an image file over its editing surface, and have it automa(gically) start the upload and insert the image near the cursor position. In other words, I don't want the user to go through the usual "insert image - browse - OK" flow. At the moment I am not very good at JavaScript... I know jQuery, but I have no clear idea how to implement this. I don't know if there's an event handler able to help me in this situation... if not, then there should be, I think, or web apps would be missing some kind of interactivity. I've heard miracles about HTML5 - could it help me? I've seen such things in Google Wave, but that surface doesn't seem to be a form field... Google Labs' black magic, I guess. Thank you in advance.
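
    HTML5 is indeed the route: the drop event plus the File API. A minimal sketch (the editor element id and the upload step are placeholders):

        var editor = document.getElementById('editor');  // hypothetical surface
        editor.addEventListener('dragover', function (e) {
            e.preventDefault();                          // allow dropping here
        });
        editor.addEventListener('drop', function (e) {
            e.preventDefault();
            var file = e.dataTransfer.files[0];          // the dragged image
            var reader = new FileReader();
            reader.onload = function (ev) {
                var img = document.createElement('img');
                img.src = ev.target.result;    // data: URL for instant preview
                editor.appendChild(img);       // cursor-position insert omitted
            };
            reader.readAsDataURL(file);
            // the actual upload would POST the file separately (e.g. via
            // XMLHttpRequest), then swap the preview src for the server URL
        });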

    Read the article

  • Why does php show error for my SQL query

    - by ZincX
    UPDATE: My mistake - I made a typo. Never mind this question. I'm using PHP to update a MySQL database. The resulting query, when I print it out on my web page before executing, is as follows:

        INSERT INTO perch2_content_items
            (itemOrder, regionID, pageID, itemRev, itemID, itemJSON, itemSearch)
        SELECT MAX(itemOrder)+1, 105, 81, 11, 118, 'json', 'search'
        FROM perch2_content_items
        WHERE regionID=105

    When I copy and paste this query directly into the phpMyAdmin SQL interface, it works fine; the table gets updated. However, when I try to execute it using my PHP code as follows, it throws an error:

        $insertToPerch = "INSERT INTO perch2_content_items
            (itemOrder, regionID, pageID, itemRev, itemID, itemJSON, itemSearch)
            SELECT MAX(itemOrder)+1, $regionID, $pageID, $regionRev, $newItemID, 'json', 'search'
            FROM perch2_content_items
            WHERE regionID=$regionID";
        mysql_query(insertToPerch) or die(mysql_error());

    The error I'm getting is: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'insertToPerch' at line 1. Can anybody help me figure out why it is failing?
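
    For the record, the typo in question: the call passes the bare word insertToPerch without the $, so PHP hands MySQL the literal string "insertToPerch" as the query - exactly the fragment the error message quotes. The corrected line:

        mysql_query($insertToPerch) or die(mysql_error());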

    Read the article

  • Problem calling stored procedure with a fixed length binary parameter using Entity Framework

    - by Dave
    I have a problem calling stored procedures with a fixed length binary parameter using Entity Framework. The stored procedure ends up being called with 8000 bytes of data no matter what size byte array I use to call the function import. To give some example, this is the code I am using. byte[] cookie = new byte[32]; byte[] data = new byte[2]; entities.Insert("param1", "param2", cookie, data); The parameters are nvarchar(50), nvarchar(50), binary(32), varbinary(2000) When I run the code through SQL profiler, I get this result. exec [dbo].[Insert] @param1=N'param1',@param2=N'param2',@cookie=0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000 [SNIP because of 16000 zeros] ,@data=0x0000 All parameters went through ok other than the binary(32) cookie. The varbinary(2000) seemed to work fine and the correct length was maintained. Is there a way to prevent the extra data being sent to SQL server? This seems like a big waste of network resource.
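
    A hedged workaround sketch, sidestepping the function import and sizing the parameters explicitly through ADO.NET so the provider cannot pad binary(32) out to its 8000-byte default (sqlConnection stands in for an open connection):

        using (var cmd = new SqlCommand("[dbo].[Insert]", sqlConnection))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.Add("@param1", SqlDbType.NVarChar, 50).Value = "param1";
            cmd.Parameters.Add("@param2", SqlDbType.NVarChar, 50).Value = "param2";
            // Explicit sizes keep the wire format at 32 and 2000 bytes
            cmd.Parameters.Add("@cookie", SqlDbType.Binary, 32).Value = cookie;
            cmd.Parameters.Add("@data", SqlDbType.VarBinary, 2000).Value = data;
            cmd.ExecuteNonQuery();   // or ExecuteScalar, depending on the proc
        }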

    Read the article
