Search Results

Search found 10691 results on 428 pages for 'batch insert'.

Page 327/428

  • Problem storing string containing quotes

    - by Jack
    I have the following table - $sql = "CREATE TABLE received_queries ( sender_screen_name varchar(50), text varchar(150) )"; I use the following SQL statement to store values in the table: $sql = "INSERT INTO received_queries VALUES ('$sender_screen_name', '$text')"; Now I am trying to store the following string as 'text': One more #haiku: Cotton wool in mind; feeling like a sleep won't cure; I need some coffee. and I get the following error message: Error: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 't cure; I need some coffee.')' at line 1 I think this must be a pretty common problem. How do I solve it?
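
    The apostrophe in "won't" closes the string literal early, which is exactly where the error message points. A minimal sketch of the corrected statement, doubling the quote inside the literal (the screen name 'jack' is just a placeholder); in practice a parameterized query via mysqli or PDO prepared statements is the safer fix, since it also protects against SQL injection:

        INSERT INTO received_queries (sender_screen_name, text)
        VALUES ('jack', 'One more #haiku: Cotton wool in mind; feeling like a sleep won''t cure; I need some coffee.');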

    Read the article

  • How can I move a table to another filegroup?

    - by denisioru
    Hello, I have MSSQL 2008 Enterprise and an OLTP database with two big tables. How can I move these tables to another filegroup without interrupting the service? Right now, about 100-130 records are inserted and 30-50 records are updated each second in these tables. Each table has about 100M records and six fields (including one geography field). I have been looking for a solution via Google, but all the solutions amount to "create a second table, insert the rows from the first table, drop the first table, and so on". Can I use partitioning functions to solve this problem? Thank you.
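
    One approach that avoids the copy-and-drop dance, assuming each table has a clustered index (the index, table, and filegroup names below are hypothetical): rebuilding the clustered index on the target filegroup moves the table's data with it. ONLINE = ON is an Enterprise feature, and on 2008 it may be refused if the geography column counts as a LOB, in which case the rebuild has to run offline or go through partitioning; nonclustered indexes stay on their current filegroup unless rebuilt the same way.

        CREATE UNIQUE CLUSTERED INDEX PK_BigTable
            ON dbo.BigTable (Id)
            WITH (DROP_EXISTING = ON, ONLINE = ON)
            ON [SECONDARY];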

    Read the article

  • Array of pointers in Objective-C using NSArray

    - by Amir
    Hello, I am writing a program for my iPhone and have a question. Let's say I have a class named my_obj: class my_obj { NSString *name; NSInteger *id; NSInteger *foo; NSString *boo; } Now I allocate 100 objects of type my_obj and insert them into an NSArray. Then I want to sort the array in two different ways: one by name and the second by id. I want to allocate another two arrays, NSArray *arraySortByName and *arraySortById. What do I need to do if I just want the sorted arrays to reference the objects in the original array, so that I get two sorted arrays that point into the original array (which didn't change)? In other words, I don't want to allocate another 100 objects for each sorted array.

    Read the article

  • Any good open-source SharePoint components that can abstract you from the inner SharePoint plumbing?

    - by JL
    I am looking for a good reusable set of components that can be used to communicate with SharePoint via web services, preferably open source. I want some abstraction from CAML, WebDAV, and the SharePoint web services that could help me speed up my development time. Ideally I want to select, insert, update, and delete from lists, manage attachments in list items, download items from SharePoint, and retrieve user metadata from owner info. This sort of thing. Does any such abstraction exist for SharePoint that uses SharePoint's web service model? Obviously the use of the MOSS component API is out of the question because it will only run on the hosted MOSS server, and I am writing an SOA app. Thank you.

    Read the article

  • Sorting 1000-2000 elements with many cache misses

    - by Soylent Graham
    I have an array of 1000-2000 elements which are pointers to objects. I want to keep my array sorted, and obviously I want to do this as quickly as possible. They are sorted by a member and not allocated contiguously, so assume a cache miss whenever I access the sort-by member. Currently I'm sorting on demand rather than on add, but because of the cache misses and [presumably] non-inlining of the member access, the inner loop of my quicksort is slow. I'm doing tests and trying things now (to see what the actual bottleneck is), but can anyone recommend a good alternative to speed this up? Should I do an insertion sort instead of quicksorting on demand, or should I try to change my model to make the elements contiguous and reduce cache misses? Or is there a sort algorithm I've not come across which is good for data that is going to cache miss?

    Read the article

  • What to do with a Twitter OAuth token once retrieved?

    - by mcintyre321
    I'm writing a web app that will use Twitter as its primary log-on method. I've written code which gets the OAuth token back from Twitter. My plan is now to:
    1. Find the entry in my Users table for the Twitter username retrieved using the token, or create the entry if necessary.
    2. Update the Users.TwitterOAuthToken column with the new OAuth token.
    3. Create a permanent cookie with a random GUID on the site and insert a record into my UserCookies table matching the cookie to the user.
    4. When a request comes in, look for the browser cookie ID in the UserCookies table, then use that to figure out the user and make Twitter requests on their behalf.
    5. Write the OAuth token into some pages as a JS variable so that JavaScript can make requests on behalf of the user.
    6. If the user clears his/her cookies, the user will have to log in to Twitter again.
    Is this the correct process? Have I created any massive security holes? Thanks!
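
    A T-SQL-flavoured sketch of the schema the plan above describes; all names are hypothetical, and the extra column reflects the fact that an OAuth 1.0a access token comes paired with a token secret that also has to be stored server-side:

        CREATE TABLE Users (
            UserId                  INT IDENTITY(1,1) PRIMARY KEY,
            TwitterScreenName       VARCHAR(50) NOT NULL UNIQUE,
            TwitterOAuthToken       VARCHAR(255) NULL,
            TwitterOAuthTokenSecret VARCHAR(255) NULL    -- keep server-side; avoid exposing it to JS
        );

        CREATE TABLE UserCookies (
            CookieGuid UNIQUEIDENTIFIER PRIMARY KEY,     -- the random GUID stored in the permanent cookie
            UserId     INT NOT NULL REFERENCES Users (UserId),
            CreatedAt  DATETIME NOT NULL DEFAULT GETDATE()
        );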

    Read the article

  • Processing an XML file with huge data

    - by Manish Dhanotiya
    Hi, I am working on an application which has the requirements below: 1. Download a ZIP file from a server. 2. Uncompress the ZIP file and get the content (which is in XML format) from this file into a String. 3. Pass this content into another method for parsing and further processing. Now, my concern here is that the XML file may be of huge size, say 100MB, while my JVM has only 512MB of memory, so how can I get this content in chunks, pass it for parsing, and then insert the data into PL/SQL tables? Since there can be multiple requests running at the same time, and considering the 512MB of memory, what will be the best possible way to process this? How can I get the data in chunks and pass it as a stream for XML parsing? I googled this, but didn't find any implementation. Thanks,

    Read the article

  • Duplicate / Copy records in the same MySQL table

    - by Digits
    Hello, I have been looking for a while now but I cannot find an easy solution for my problem. I would like to duplicate a record in a table, but of course, the unique primary key needs to be updated. I have this query: INSERT INTO invoices SELECT * FROM invoices AS iv WHERE iv.ID=XXXXX ON DUPLICATE KEY UPDATE ID = (SELECT MAX(ID)+1 FROM invoices) The problem is that this just changes the ID of the row instead of copying the row. Does anybody know how to fix this? Thank you very much, Digits //edit: I would like to do this without typing all the field names, because the field names can change over time.
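
    A sketch of one way to copy the row without listing any column names, assuming ID is the table's first column and is AUTO_INCREMENT (if the ID is not auto-generated, the NULL below would have to be replaced with an explicitly computed value):

        CREATE TEMPORARY TABLE tmp_invoice AS
            SELECT * FROM invoices WHERE ID = XXXXX;

        ALTER TABLE tmp_invoice DROP COLUMN ID;

        INSERT INTO invoices
            SELECT NULL, tmp_invoice.*       -- NULL lets AUTO_INCREMENT assign a fresh ID
            FROM tmp_invoice;

        DROP TEMPORARY TABLE tmp_invoice;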

    Read the article

  • How to convert a C++ std::list element to a multimap iterator

    - by user63898
    Hello all, I have a std::list<multimap<std::string,std::string>::iterator>. Now I have a new element: multimap<std::string,std::string>::value_type aNewMmapValue("foo1","test") I want to avoid the need to set up a temporary multimap and insert the new element into it just to get its iterator back so I can push it onto the std::list<multimap<std::string,std::string>::iterator>. Can I somehow avoid this creation of the temporary multimap? Thanks

    Read the article

  • jQuery post to PHP

    - by RussP
    Why is it that I can never get jQuery serialize to work properly? I guess I must be missing something. I can serialize the form data and it shows in an alert: var forminfo = $j('#frmuserinfo').serialize(); alert(forminfo); I then post to my PHP page thus: $j.ajax({ type: "POST", url: "cv-user-process.php", data: "forminfo="+forminfo, cache: false, complete: function(data) { } }); But whenever (and not just the first time) I try to insert/update the data in the DB, I only ever get one variable passed. Here is my PHP script: $testit = mysql_query("UPDATE cv_usersmeta SET inputtest='".$_POST['forminfo']."' WHERE user='X'"); The data passed only ever contains the first variable. Why? I think it is more the way I deal with the PHP, but it drives me nuts and always takes me far too long to find where I am going wrong.

    Read the article

  • Spreadsheet::WriteExcel create chart

    - by yaohung
    Hi, I used csv2xls.pl to convert a text log into an .xls file, and then applied the create chart function as follows: my $chart3 = $workbook->add_chart( type => 'line', embedded => 1 ); # Configure the series. $chart3->add_series( categories => '=Sheet1!$B$2:$B$64', values => '=Sheet1!$C$2:$C$64', name => 'Test data series 1', ); # Add some labels. $chart3->set_title( name => 'Bridge Rate Analysis' ); $chart3->set_x_axis( name => 'Packet Size' ); $chart3->set_y_axis( name => 'BVI Rate' ); # Insert the chart into the main worksheet. $worksheet->insert_chart( 'G2', $chart3 ); I can see the chart in the .xls file; however, all the data is in text format, not numbers, so the chart looks wrong. Can you tell me how to convert the text into numbers before applying this create chart function? One other thing: any idea how to apply sorting on the .xls file before creating the chart? Thanks. Yaohung

    Read the article

  • Random select is not always returning a single row.

    - by Lieven
    The intention of the following (simplified) code fragment is to return one random row. Unfortunately, when we run this fragment in the query analyzer, it returns between zero and three results. As our input table consists of exactly 5 rows with unique IDs, and as we perform a select on this table where ID equals a random number, we are stumped that there would ever be more than one row returned. Note: among other things, we already tried casting the checksum result to an integer, to no avail. DECLARE @Table TABLE ( ID INTEGER IDENTITY (1, 1) , FK1 INTEGER ) INSERT INTO @Table SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4 UNION ALL SELECT 5 SELECT * FROM @Table WHERE ID = ABS(CHECKSUM(NEWID())) % 5 + 1
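
    The WHERE clause re-evaluates ABS(CHECKSUM(NEWID())) % 5 + 1 for every row it examines, so each of the five rows independently has a one-in-five chance of matching, which is why zero to three rows come back. Two sketches that pin the randomness down to a single evaluation:

        -- evaluate the random pick once, then filter on it
        DECLARE @Pick INT;
        SET @Pick = ABS(CHECKSUM(NEWID())) % 5 + 1;
        SELECT * FROM @Table WHERE ID = @Pick;

        -- or order by a per-row random value and keep exactly one row
        SELECT TOP (1) * FROM @Table ORDER BY NEWID();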

    Read the article

  • Index for wildcard match of end of string

    - by Anders Abel
    I have a table of phone numbers, storing the phone number as varchar(20). I have a requirement to implement searching of both entire numbers and only the last part of the number, so a typical query will be: SELECT * FROM PhoneNumbers WHERE Number LIKE '%1234' How can I put an index on the Number column to make those searches efficient? Is there a way to create an index that sorts the records on the reversed string? Another option might be to reverse the numbers before storing them, which will give queries like: SELECT * FROM PhoneNumbers WHERE ReverseNumber LIKE '4321%' However, that will require all users of the database to always reverse the string. It might be solved by storing both the normal and reversed number and having the reversed number updated by a trigger on insert/update, but that kind of solution is not very elegant. Any other suggestions?
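
    If this is SQL Server, the reversed value can be kept as an indexed computed column, so callers never have to maintain it themselves; the trailing-digits search then becomes a leading-prefix LIKE that can use the index. A minimal sketch (the index name is arbitrary):

        ALTER TABLE PhoneNumbers
            ADD ReverseNumber AS REVERSE(Number) PERSISTED;

        CREATE INDEX IX_PhoneNumbers_ReverseNumber
            ON PhoneNumbers (ReverseNumber);

        SELECT *
        FROM PhoneNumbers
        WHERE ReverseNumber LIKE REVERSE('1234') + '%';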

    Read the article

  • How to change the size of an STL container in C++

    - by Jaime Pardos
    I have a piece of performance-critical code written with pointers and dynamic memory. I would like to rewrite it with STL containers, but I'm a bit concerned about performance. Is there a way to increase the size of a container without initializing the data? For example, instead of doing ptr = new BYTE[x]; I want to do something like vec.insert(vec.begin(), x, 0); However, this initializes every byte to 0. Isn't there a way to just make the vector grow? I know about reserve(), but it just allocates memory; it doesn't change the size of the vector and doesn't allow me to access it until I have inserted valid data. Thank you everyone.

    Read the article

  • Vector.erase(Iterator) causes bad memory access

    - by xon1c
    Hi, I am trying to do a Z-index reordering of videoObjects stored in a vector. The plan is to identify the videoObject which is going to be put in the first position of the vector, erase it, and then insert it at the first position. Unfortunately, the erase() function always causes a bad memory access. Here is my code: testApp.h: vector<videoObject> videoObjects; vector<videoObject>::iterator itVid; testApp.cpp: // Get the videoObject which relates to the user event for(itVid = videoObjects.begin(); itVid != videoObjects.end(); ++itVid){ if(videoObjects.at(itVid - videoObjects.begin()).isInside(ofPoint(tcur.getX(), tcur.getY()))){ videoObjects.erase(itVid); } } This should be so simple, but I just don't see where I'm taking the wrong turn. Thx, xonic

    Read the article

  • LINQ to SQL entity updated from trigger

    - by James Helms
    I have a table called Address, with an insert trigger on that table that does some spatial calculations on the address to determine which neighborhood boundaries it is in. address = new Address { Street = this.Street, City = this.City, State = this.State, ZipCode = this.ZipCode, latitude = this.Latitude, longitude = this.Longitude, YearBuilt = this.YearBuilt, LotSize = this.LotSize, FinishedSize = this.FinishedSize, Bedrooms = this.Bedrooms, Bathrooms = this.Bathrooms, UseCode = this.UseCode, HOA = this.HOA, UpdateDate = DateTime.Now }; db.AddToAddresses(address); db.SaveChanges(); In the database I can clearly see that the trigger ran and updated the neighborhoodID in the Address table for the row. I tried to just reload that record to get the assigned ID like this: address = (from a in db.Addresses where a.AddressID == address.AddressID select a).First(); In the debugger I can clearly see that address.AddressID is correct, but the entity doesn't update in memory. Is there any workaround for this?

    Read the article

  • SQL Server stored procedure that returns the number of processed records

    - by Ras
    I have a WinForms application that fires a stored procedure which processes several records (around 500k). In order to inform the user about how many records have been processed, I would need an SP which returns a value every n records, for example every 1000 rows processed (most are INSERTs). Otherwise I would only be able to report when ALL records have been processed. Any hints on how to solve this? I thought it could be useful to use a trigger or some scheduled task, but I cannot figure out how to implement it.
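
    One way to do this without triggers or scheduled tasks, assuming the work can be broken into batches (all table and column names below are hypothetical): have the procedure emit an informational message after each batch with RAISERROR ... WITH NOWAIT at severity 0, which the WinForms side can pick up through the SqlConnection.InfoMessage event while the procedure is still running.

        DECLARE @processed INT, @batch INT;
        SET @processed = 0;

        WHILE 1 = 1
        BEGIN
            INSERT INTO dbo.TargetTable (Id, Payload)
            SELECT TOP (1000) s.Id, s.Payload
            FROM dbo.SourceTable AS s
            WHERE NOT EXISTS (SELECT 1 FROM dbo.TargetTable AS t WHERE t.Id = s.Id);

            SET @batch = @@ROWCOUNT;
            IF @batch = 0 BREAK;

            SET @processed = @processed + @batch;

            DECLARE @msg NVARCHAR(100);
            SET @msg = CONVERT(NVARCHAR(20), @processed) + ' rows processed';
            RAISERROR(@msg, 0, 1) WITH NOWAIT;   -- severity 0: informational, flushed to the client immediately
        END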

    Read the article

  • Does anyone use AODL in a real application?

    - by HyperQuantum
    We are currently using the Excel interop API in .NET to generate simple spreadsheet documents from a template. So we load the template first, insert some rows, fill in some data (dates, text, and numbers), and make Excel visible so that the user can print or save the document we just generated. But I'd like to get rid of the Excel dependency, and switch to the ODF format as well. Googling suggests AODL (C# libs for generating ODF docs) as the most obvious solution. But their last release is 1.3.0.0 BETA, and seems to be 3 years old. So I'm not sure if it's a good idea to depend on a potentially dead project... In that case, I'd need to find another solution. Any ideas? Or maybe someone could assure me that AODL is still alive?

    Read the article

  • Server.Transfer - What could be the issue here?

    - by Younes
    We have implemented a website with the ability for the user to post his action code. This will then be checked by the code, and when the user has won a prize the website will Server.Transfer him to another page. The point here is that the user information is submitted to the database and the action code can't be used again. Here we go... I have one user in this database that was inserted twice, very quickly after the first time he was added. These are the timestamps: 2010-04-23 07:54:41.133 2010-04-23 07:54:41.417 The insert statement is only called once from the code, and the user gets Server.Transferred to the Price.aspx page where he sees what prize he won. How can it be that this happened? I'm guessing the user hit F5, but then he had to be very, very fast... Thx!
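
    Two inserts 284 ms apart look like a double submit (a double click or a quick repost) racing the first request. A common belt-and-braces fix, with hypothetical table and column names, is to let the database reject the second redemption outright, and to follow the insert with a redirect (Post/Redirect/Get) so a refresh cannot repost the form:

        ALTER TABLE dbo.ActionCodeRedemptions
            ADD CONSTRAINT UQ_ActionCodeRedemptions_ActionCode UNIQUE (ActionCode);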

    Read the article

  • How do I detect that a transaction has already been started?

    - by xelurg
    I am using Zend_Db to insert some data inside a transaction. My function starts a transaction and then calls another method that also attempts to start a transaction and, of course, fails (I am using MySQL 5). So, the question is: how do I detect that a transaction has already been started? Here is a sample bit of code: try { Zend_Registry::get('database')->beginTransaction(); $totals = self::calculateTotals($Cart); $PaymentInstrument = new PaymentInstrument; $PaymentInstrument->create(); $PaymentInstrument->validate(); $PaymentInstrument->save(); Zend_Registry::get('database')->commit(); return true; } catch(Zend_Exception $e) { Bootstrap::$Log->err($e->getMessage()); Zend_Registry::get('database')->rollBack(); return false; } Inside PaymentInstrument::create there is another beginTransaction statement that produces the exception saying that a transaction has already been started.

    Read the article

  • Problem with NHibernate and saving - NHibernate doesn't detect changes and uses old values.

    - by Vilx-
    When I do this: Cat x = Session.Load<Cat>(123); x.Name = "fritz"; Session.Flush(); NHibernate detects the change and UPDATEs the DB. But, when I do this: Cat x = new Cat(); Session.Save(x); x.Name = "fritz"; Session.Flush(); I get NULL for name, because that's what was there when I called Session.Save(). Why doesn't NHibernate detect the changes - or better yet, take the values for the INSERT statement at the time of Flush()?

    Read the article

  • How to add data to SQL for one ID and more than one value?

    - by Phsika
    I have one ID and several file paths. For example: StudyInstanceUid: 123456 FilePath: C:/a.jpg C:/b.jpg C:/c.jpg C:/d.jpg C:/e.jpg Resulting table: 123456|C:/a.jpg 123456|C:/b.jpg 123456|C:/c.jpg 123456|C:/d.jpg 123456|C:/e.jpg How can I add more than one path for one ID? public bool AddDCMPath2(string StudyInstanceUid, string[] FilePath) { SqlConnection con = new SqlConnection("Data Source=(localhost);Initial Catalog=ImageServer; User ID=sa; Password=GENOTIP;"); SqlCommand cmd = new SqlCommand("INSERT INTO StudyDCM (StudyInstanceUid,FilePath) VALUES (@StudyInstanceUid,@FilePath)", con); try { con.Open(); cmd.CommandType = CommandType.Text; foreach (string filepath in FilePath) { cmd.Parameters.AddWithValue("@StudyInstanceUid", StudyInstanceUid); cmd.Parameters.AddWithValue("@FilePath", filepath); cmd.ExecuteNonQuery(); } } finally { if((con!=null)) con.Dispose(); if((cmd!=null)) cmd.Dispose(); } return true; }
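
    Note that the per-row loop above re-adds the same parameter names on the second pass, so cmd.Parameters.Clear() would be needed at the top of each iteration if it is kept. Alternatively, if the server is SQL Server 2008 or later, a single statement can insert all of the paths for one StudyInstanceUid:

        INSERT INTO StudyDCM (StudyInstanceUid, FilePath)
        VALUES ('123456', 'C:/a.jpg'),
               ('123456', 'C:/b.jpg'),
               ('123456', 'C:/c.jpg'),
               ('123456', 'C:/d.jpg'),
               ('123456', 'C:/e.jpg');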

    Read the article

  • Cleanest way to store lists of filter coefficients in a C header

    - by Nick T
    I have many (~100 or so) filter coefficients calculated with the aid of some Matlab and Excel that I want to dump into a C header file for general use, but I'm not sure what the best way to do this would be. I was starting out like so: #define BUTTER 1 #define BESSEL 2 #define CHEBY 3 #if FILT_TYPE == BUTTER #if FILT_ROLLOFF == 0.010 #define B0 256 #define B1 512 #define B2 256 #define A1 467 #define A2 -214 #elif FILT_ROLLOFF == 0.015 #define B0 256 #define B1 512 // and so on... However, if I do that and shove them all into a header, I need to set the conditionals (FILT_TYPE, FILT_ROLLOFF) in my source before including it, which seems kinda nasty. What's more, if I have 2+ different filters that want different roll-offs/filter types, it won't work. I could #undef my 5 coefficients (A1-2, B0-2) in that coefficients file, but it still seems wrong to have to insert an #include buried in code.

    Read the article

  • How to read changed values with native query during one transaction? (Spring and JPA)

    - by knarf1983
    We have container-managed transactions with Spring and JPA (Hibernate). I need to update a table to "flag" some rows via native statements. Then we insert some rows into this table via the EntityManager from JPATemplate. After that, we need to calculate the changes in the table via a native statement (with Oracle's UNION and MINUS, complex GROUP BYs...). I see that the changes from steps 1 and 2 are not committed, and that's why the statement from step 3 fails. I already tried transaction propagation REQUIRES_NEW and EntityManager.flush()... it didn't work. 1) update SOMETABLE acolumn = somevalue (native) 2) persist some values into SOMETABLE (via the entity manager) 3) select values from SOMETABLE Is there a possibility to read the changes from steps 1 and 2 in step 3?

    Read the article

  • Table Variables in SSIS

    - by aceinthehole
    In one SQL Task, can I create a table variable: DECLARE @TableVar TABLE (...) and then, in another SQL Task or DataSource destination, select from or insert into the table variable? The other option I have considered is using a temp table: CREATE TABLE #TempTable (...) I would prefer to use a table variable so that it remains in memory, but I can use a temp table if it is not possible to use a table variable. Also, I cannot use the recordset destination, as I need to perform straight SQL tasks on it later on.
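
    A note on scope, plus a sketch: a table variable only lives for the batch that declares it, so a second Execute SQL Task never sees it, and it is not guaranteed to stay in memory anyway (table variables are backed by tempdb just like temp tables). A local temp table does survive across tasks as long as they share the same connection, which in SSIS means setting RetainSameConnection = True on the connection manager (table and column names below are hypothetical):

        -- Task 1: create the temp table (connection manager has RetainSameConnection = True)
        IF OBJECT_ID('tempdb..#StagingRows') IS NOT NULL
            DROP TABLE #StagingRows;
        CREATE TABLE #StagingRows (Id INT, Amount DECIMAL(18, 2));

        -- Task 2: a later task on the same connection still sees the table
        INSERT INTO #StagingRows (Id, Amount) VALUES (1, 10.00);
        SELECT Id, Amount FROM #StagingRows;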

    Read the article
