Search Results

Search found 14354 results on 575 pages for 'existing records'.


  • Ruby: would using Fibers increase my DB insert throughput?

    - by Zombies
    Currently I am using Ruby 1.9.1 and the 'ruby-mysql' gem, which unlike the 'mysql' gem is written in Ruby only. This is actually pretty slow, as it seems to insert at a rate of almost one record per second (SLOOOOOWWWWWW), and I have a lot of inserts to make; that is pretty much all this script ultimately does. I am using just one connection, since I am using just one thread. I am hoping to speed things up by creating a fiber that will: create a new DB connection, insert 1-3 records, then close the DB connection. I would imagine launching 20-50 of these would greatly increase DB throughput (a sketch follows below). Am I correct to go along this route? It feels like the best option, as opposed to refactoring all of my DB code :(
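    A minimal sketch of that plan, assuming the ruby-mysql gem's Mysql.connect API and hypothetical connection details and data:

        require 'fiber'
        require 'mysql'  # the pure-Ruby 'ruby-mysql' gem

        # One fiber per small batch: open a connection, insert a few rows, close.
        fibers = rows.each_slice(3).map do |batch|
          Fiber.new do
            db = Mysql.connect('localhost', 'user', 'password', 'mydb')
            batch.each do |payload|
              db.query("INSERT INTO events (payload) VALUES ('#{db.escape_string(payload)}')")
            end
            db.close
          end
        end
        fibers.each(&:resume)

    One caveat on the design: fibers are cooperatively scheduled, so with a blocking driver each resume runs its inserts to completion before the next fiber starts; any concurrency win depends on a non-blocking driver or threads.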

    Read the article

  • Filtering results and pagination

    - by alj
    I have a template that shows a filter form and, below it, a list of the result records. I bind the form to the request so that when the results are returned, the filter form sets itself to the options the user submitted. I also use pagination. Using the code in the pagination documentation means that when the user clicks through to the next page, the form data is lost. What is the best way of dealing with pagination and filtering together? The options I can see (the first is sketched below):
    1. Pass the querystring to the pagination links.
    2. Change the pagination links to form buttons, so clicking them submits the filter form at the same time; but this assumes the user hasn't messed about with the filter options.
    3. As above, but with the original data as hidden fields.
    ALJ
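    A minimal sketch of option 1, assuming Django (which the binding-the-form-to-the-request wording suggests): rebuild the querystring without the page parameter and hand it to the template.

        # in the view
        params = request.GET.copy()   # mutable QueryDict of the submitted filter options
        params.pop('page', None)      # drop any existing page number
        querystring = params.urlencode()

    The template can then emit pagination links such as ?page=2&<querystring>, so the filter options survive the page change.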

    Read the article

  • Need help on nested loop of queries in php and mysql?

    - by mysqllearner
    Hi, I am trying to do this:

        <?php
        $good_customer = 0;
        $q = mysql_query("SELECT user FROM users WHERE activated = '1'"); // this gives me about 40k users
        while ($r = mysql_fetch_assoc($q)) {
            $money_spent = 0;
            $user = $r['user'];
            // Do queries on another 20 tables
            for ($i = 1; $i <= 20; $i++) {
                $tbl_name = 'data' . $i;
                $q2 = mysql_query("SELECT money_spent FROM $tbl_name WHERE user = '{$user}'");
                while ($r2 = mysql_fetch_assoc($q2)) {
                    $money_spent += $r2['money_spent'];
                }
                if ($money_spent > 1000000) {
                    $good_customer += 1;
                }
            }
        }

    This is just an example. I am testing on localhost; for a single user it returns very fast, but when I try 1000 users it takes forever, never mind 40k. Is there any way to optimise/improve this code? EDIT: By the way, each of the other 20 tables has ~20-40k records.
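    One common way to avoid the 40k x 20 query loop is to let MySQL aggregate everything in a single statement; a sketch, assuming the 20 data tables share the same layout:

        SELECT u.user, SUM(d.money_spent) AS total_spent
        FROM users u
        JOIN (
            SELECT user, money_spent FROM data1
            UNION ALL
            SELECT user, money_spent FROM data2
            -- ... repeat through ...
            UNION ALL
            SELECT user, money_spent FROM data20
        ) d ON d.user = u.user
        WHERE u.activated = '1'
        GROUP BY u.user
        HAVING total_spent > 1000000;

    The row count of that result set is the good-customer count, so PHP runs one query instead of roughly 800,000.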

    Read the article

  • SQL Server Delete - Foreign Key

    - by Ahmet Altun
    I have got three tables in SQL Server 2005: USER table: information about the user and so on. COUNTRY table: holds the list of all the countries in the world. USER_COUNTRY table: matches which user has visited which country; it holds UserID and CountryID. For example, the USER_COUNTRY table looks like this:

        ID -- UserID -- CountryID
        1  -- 1      -- 34
        2  -- 1      -- 5
        3  -- 2      -- 17
        4  -- 2      -- 12
        5  -- 2      -- 21
        6  -- 3      -- 19

    My question is: when a user is deleted in the USER table, how can I make the associated records in the USER_COUNTRY table be deleted automatically? Maybe by using a foreign key constraint?
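    Yes, a cascading foreign key does exactly this. A sketch, assuming the column names above and a primary key ID on the USER table:

        ALTER TABLE USER_COUNTRY
        ADD CONSTRAINT FK_UserCountry_User
            FOREIGN KEY (UserID) REFERENCES [USER](ID)
            ON DELETE CASCADE;

    With ON DELETE CASCADE in place, DELETE FROM [USER] WHERE ID = 2 also removes rows 3, 4 and 5 from USER_COUNTRY.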

    Read the article

  • How to block the possibility to add the same record to a SPList?

    - by truthseeker
    Hi, is there a way to block adding duplicate data to an SPList? I know that two records are always different with regard to the ID field; I would like to validate the other custom fields I added previously, and disallow adding an item with the same field values. Can anybody tell me how to implement this? I can guess that event receivers could be the answer, but I couldn't find how to add a receiver to an SPList. Can anybody tell me if I'm right, and what the step-by-step procedure for adding such an event receiver is? I would like to know how to build it and install it using a Feature file. Best Regards T.S.
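    A minimal sketch of such a receiver, assuming a single text field (here called 'Title') must stay unique; the class would be bound to the list type via a Receivers element in the Feature's element manifest:

        using Microsoft.SharePoint;

        public class DuplicateCheckReceiver : SPItemEventReceiver
        {
            public override void ItemAdding(SPItemEventProperties properties)
            {
                string newValue = properties.AfterProperties["Title"] as string;
                SPList list = properties.OpenWeb().Lists[properties.ListId];

                // Look for an existing item with the same field value.
                SPQuery query = new SPQuery();
                query.Query = string.Format(
                    "<Where><Eq><FieldRef Name='Title'/>" +
                    "<Value Type='Text'>{0}</Value></Eq></Where>", newValue);

                if (list.GetItems(query).Count > 0)
                {
                    // Cancel the add and surface an error to the user.
                    properties.Cancel = true;
                    properties.ErrorMessage = "An item with this value already exists.";
                }
            }
        }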

    Read the article

  • Huge page buffer vs. multiple simultaneous processes

    - by Andrei K.
    One of our customers has a 35 GB database with an average active connection count of about 70-80. Some tables in the database have more than 10M records each. Now they have bought a new server: 4 x 6 cores = 24 CPU cores, 48 GB RAM, 2 RAID controllers with 256 MB cache and 8 SAS 15K HDDs on each, 64-bit OS. I'm wondering which would be the faster configuration: 1) FB 2.5 SuperServer with a huge page buffer (8192-byte pages x 3,500,000 pages = 29 GB), or 2) FB 2.5 Classic with a small buffer of 1000 pages. Maybe someone has tested such a case before and will save me days of work :) Thanks in advance.
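    For reference, a sketch of how option 1 would be configured; the buffer can come from DefaultDbCachePages in firebird.conf or be pinned per database with gfix (page count as in the question, file path and password hypothetical):

        # firebird.conf (SuperServer)
        DefaultDbCachePages = 3500000    # 3.5M pages x 8 KB page size = ~29 GB

        # or stored per database in the header page:
        gfix -buffers 3500000 /data/customer.fdb -user SYSDBA -password masterkey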

    Read the article

  • How to use a Zend_Cache identifier?

    - by ArneRie
    Hi folks, I think I'm going crazy: I'm trying to implement Zend_Cache to cache my database queries. I know how it works and how to configure it, but I can't find a good way to set the identifier for the cache entries. I have a method which searches for records in my database (based on an array of search values):

        /**
         * Find record(s).
         * Returns one record, or an array of objects.
         *
         * @param array   $search Search columns => value
         * @param integer $limit  Limit results
         * @return array One record, or an array of objects
         */
        public function find(array $search, $limit = null)
        {
            $identifier = 'NoIdea';
            if (!($data = $this->_cache->load($identifier))) {
                // fetch
                // save to cache with $identifier...
            }
        }

    But what kind of identifier can I use in this situation?
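    A minimal sketch of one common approach: derive a deterministic ID from the inputs themselves. Zend_Cache IDs may only contain the characters [a-zA-Z0-9_], which an md5 hex digest satisfies:

        $identifier = 'find_' . md5(serialize($search) . '|' . (string) $limit);

    The same $search/$limit pair then always maps to the same cache entry, and any change in the criteria produces a new one.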

    Read the article

  • Update through Linq-to-SQL DataContext not working.

    - by Puneet Dudeja
    I have created a small test for updating a table using a LINQ-to-SQL DataContext, as follows:

        using (pessimistic_exampleDataContext db = new pessimistic_exampleDataContext())
        {
            var query = db.test1s.First(t => t.a == 3);
            if (query.b == "a")
                query.b = "b";
            else
                query.b = "a";
            db.SubmitChanges();
        }

    But after executing this code in a console application's Main method, when I select records from the table, the record is not updated. I have debugged through the code, and it is not even throwing any exception. What can the problem be?

    Read the article

  • SSRS Column Grouping with specific order

    - by AmiT
    Hi experts, is it possible to change the order of records/groups in a result set from a query using GROUP BY? I have a query:

        SELECT Category, Subcategory, ProductName, CreatedDate, Sales
        FROM TableCategory tc
        INNER JOIN TableSubCategory ts ON tc.col1 = ts.col2
        INNER JOIN TableProductName tp ON ts.col2 = tp.col3
        GROUP BY Category, SubCategory, ProductName, CreatedDate, Sales

    Now I am creating an SSRS report where Category is the primary row group and SubCategory is its child row group; ProductName is the primary column group. It works perfectly, but it shows the ProductNames in alphabetic order. I want it to show the ProductNames in a custom order (defined by me): like ProductNo5 in the 3rd column, ProductNo8 in the 4th column, ProductNo1 in the 5th column... and so on!
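    One common way to do this inside SSRS is to give the ProductName column group a Sort expression that maps each name to an explicit rank; a sketch, with the product names taken from the question:

        =Switch(Fields!ProductName.Value = "ProductNo5", 1,
                Fields!ProductName.Value = "ProductNo8", 2,
                Fields!ProductName.Value = "ProductNo1", 3,
                True, 99)

    Sorting the column group by this expression (ascending) puts the named products in the stated order and everything else after them.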

    Read the article

  • How to optimize an Oracle query that has TO_CHAR in the WHERE clause for a date

    - by panorama12
    I have a table that contains about 49,403,459 records. I want to query the table on a date range, say 04/10/2010 to 04/10/2010. However, the dates are stored in the table in the format 10-APR-10 10.15.06.000000 AM (a timestamp). As a result, when I do:

        SELECT bunch, of, stuff, create_date
        FROM myTable
        WHERE TO_CHAR(create_date, 'MM/DD/YYYY') >= '04/10/2010'
          AND TO_CHAR(create_date, 'MM/DD/YYYY') <= '04/10/2010'

    I get 529 rows, but in 255.59 seconds! Which is, I guess, because I am doing TO_CHAR on EACH record. However, when I do:

        SELECT bunch, of, stuff, create_date
        FROM myTable
        WHERE create_date >= TO_DATE('04/10/2010', 'MM/DD/YYYY')
          AND create_date <= TO_DATE('04/10/2010', 'MM/DD/YYYY')

    then I get 0 results in 0.14 seconds. How can I make this query fast and still get the valid (529) results? At this point I cannot change indexes. Right now I think the index is created on the create_date column.
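    The second query returns nothing because TO_DATE('04/10/2010','MM/DD/YYYY') is midnight on April 10, so the range covers only that single instant. A sketch of the usual fix: a half-open range spanning the whole day, which leaves create_date bare so the index on it stays usable:

        SELECT bunch, of, stuff, create_date
        FROM myTable
        WHERE create_date >= TO_DATE('04/10/2010', 'MM/DD/YYYY')
          AND create_date <  TO_DATE('04/10/2010', 'MM/DD/YYYY') + 1;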

    Read the article

  • Best way to implement a List(Of) with a maximum number of items

    - by Ben
    I'm trying to figure out a good way of implementing a List(Of) that holds a maximum number of records. E.g. I have a List(Of Int32) which is being populated with a new Int32 item every 2 seconds, and I want to store only the most recent 2000 items. How can I make the list hold a maximum of 2000 items, so that when the 2001st item is about to be added, the list first drops an item (leaving 1999) and then adds the new one? The important thing is that I need to make sure I'm dropping only the oldest item before adding the new item to the List. Ben
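    A minimal sketch, swapping the List(Of Int32) for a Queue(Of Integer), which makes drop-oldest-then-add natural:

        ' Keeps at most MaxItems values; the oldest value leaves first.
        Private Const MaxItems As Integer = 2000
        Private ReadOnly _samples As New Queue(Of Integer)()

        Public Sub AddSample(ByVal value As Integer)
            If _samples.Count >= MaxItems Then
                _samples.Dequeue()   ' drop the oldest item
            End If
            _samples.Enqueue(value)  ' add the newest item
        End Sub

    If it really must stay a List(Of Int32), the equivalent is list.RemoveAt(0) before list.Add(value) once the cap is reached, at the cost of shifting the remaining elements on every add.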

    Read the article

  • LINQ-to-SQL: Searching against a CSV

    - by Peter Bridger
    I'm using LINQ-to-SQL and I want to return a list of matching records for a CSV containing a list of IDs to match. The following code is my starting point, having turned a CSV string into a string array and then into a generic list (which I thought LINQ would like), but it doesn't work.

    Error:

        Error 22 Operator '==' cannot be applied to operands of type 'int' and
        'System.Collections.Generic.List<int>'
        C:\Documents and Settings\....\Search.cs 41 42

    Code:

        DataContext db = new DataContext();

        List<int> geographyList = new List<int>( Convert.ToInt32(geography.Split(',')) );

        var geographyMatches = from cg in db.ContactGeographies
                               where cg.GeographyId == geographyList
                               select new { cg.ContactId };

    Where do I go from here?
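    A sketch of the usual fix: convert the split values element by element, then use List.Contains in the where clause, which LINQ-to-SQL translates into a SQL IN (...) clause:

        List<int> geographyList = geography.Split(',')
                                           .Select(s => Convert.ToInt32(s))
                                           .ToList();

        var geographyMatches = from cg in db.ContactGeographies
                               where geographyList.Contains(cg.GeographyId)
                               select new { cg.ContactId };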

    Read the article

  • What's the proper way to use SQLite in Xcode?

    - by Elliot Chen
    Hi, experts: can you please give some suggestions on using SQLite in Xcode? Within my application I use a SQLite DB to store all local data. Two methods can be used to retrieve that data at run time:
    1. Load all the data into memory at the initialization stage (more memory used, fewer DB open/close operations needed).
    2. Read the corresponding records only when necessary, and free the memory after use (easier on memory, but many DB open/close operations).
    I prefer method 2, but I'm not sure whether too many DB open/close operations could hurt the app's efficiency. Or do you think I can 'upgrade' method 2 by opening the DB when the app launches and closing it when the app quits? Thanks for your suggestions very much!
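    A minimal sketch of that 'open once' variant using the C API in the app delegate; database is assumed to be a sqlite3 * instance variable and dbPath the file's location:

        #import <sqlite3.h>

        - (void)applicationDidFinishLaunching:(UIApplication *)application {
            // Open one handle for the whole app lifetime.
            if (sqlite3_open([dbPath UTF8String], &database) != SQLITE_OK) {
                NSLog(@"Failed to open database at %@", dbPath);
            }
        }

        - (void)applicationWillTerminate:(UIApplication *)application {
            sqlite3_close(database);   // release the handle on exit
        }

    Individual reads then just prepare and finalize statements on the shared handle, so the per-query cost of opening and closing the file disappears while memory use stays close to method 2.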

    Read the article

  • Core Data fetch request with array

    - by JK
    I am trying to set up a fetch request with a predicate to obtain the records in the store whose identifier attribute matches an array of identifiers specified in the predicate, e.g.:

        NSString *predicateString = [NSString stringWithFormat:@"identifier IN %@", employeeIDsArray];

    The employeeIDsArray contains a number of NSNumber objects that match IDs in the store. However, I get the error "Unable to parse the format string". This type of predicate works if it is used for filtering an array but, as mentioned, fails for a Core Data fetch. How should I set the predicate, please?
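    A sketch of the usual fix: let NSPredicate itself substitute the collection, rather than splicing the array's description into a plain string with stringWithFormat::

        NSPredicate *predicate =
            [NSPredicate predicateWithFormat:@"identifier IN %@", employeeIDsArray];
        [fetchRequest setPredicate:predicate];

    predicateWithFormat: treats %@ as a typed argument, so the NSNumber array arrives as a real collection that the Core Data store can translate into SQL.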

    Read the article

  • JsonStore.insert() causes exception in extjs

    - by kalan
    I have an EditorGridPanel with a toolbar button to add new records. Everything works fine except in one scenario: when I try to insert a record which already exists in the database, the server sends back:

        {"success":false,"message":"already exists","data":{}}

    but the grid creates a new row marked with a red triangle. If after that I try to insert a new record (even one that doesn't exist in the database), everything works fine on the server side, but I get an 'uncaught exception' in Firebug. It says:

        uncaught exception: Ext.data.DataReader: #realize was called with invalid
        remote-data. Please see the docs for DataReader#realize and review your
        DataReader configuration.

    Why is that?

    Read the article

  • Importing a CSV file without headers into SQL 2008

    - by Luiggi
    I want to import a CSV with 4.8M records into a SQL Server 2008 table. I'm trying to do it with the Management Studio wizard, but it keeps trying to recognize a header row, which the CSV doesn't have. I can't find any option to skip this, and although I specify the columns myself, the wizard still tries to find a header row and doesn't import anything without one. The structure of the CSV is:

        "818180","25529","Dario","Pereyra","Rosario","SF","2010-09-02"

    I've also tried alternatives like BULK INSERT, but then I found out that BULK INSERT can't import files with a text qualifier.
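    One common workaround, sketched under the assumption of a staging table with matching columns (table and column names here are hypothetical): use "," as the BULK INSERT field terminator, then strip the stray quotes left on the outermost columns.

        BULK INSERT dbo.PeopleStaging
        FROM 'C:\data\import.csv'
        WITH (FIELDTERMINATOR = '","', ROWTERMINATOR = '\n');

        -- Only the outermost quotes survive the terminator trick; remove them.
        UPDATE dbo.PeopleStaging
        SET Id         = REPLACE(Id, '"', ''),
            SignupDate = REPLACE(SignupDate, '"', '');

    A bcp format file is the cleaner (if fiddlier) alternative for quoted fields.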

    Read the article

  • How to print lines from a file that have repeated more than six times

    - by Mike
    I have a file containing the data shown below. The first comma-delimited field may be repeated any number of times, and I want to print only the lines after the sixth repetition of any value of this field. For example, there are eight records with 1111111 as the first field, and I want to print only the seventh and eighth of them. Input file:

        1111111,aaaaaaaa,14
        1111111,bbbbbbbb,14
        1111111,cccccccc,14
        1111111,dddddddd,14
        1111111,eeeeeeee,14
        1111111,ffffffff,14
        1111111,gggggggg,14
        1111111,hhhhhhhh,14
        2222222,aaaaaaaa,14
        2222222,bbbbbbbb,14
        2222222,cccccccc,14
        2222222,dddddddd,14
        2222222,eeeeeeee,14
        2222222,ffffffff,14
        2222222,gggggggg,14
        3333333,aaaaaaaa,14
        3333333,bbbbbbbb,14
        3333333,cccccccc,14
        3333333,dddddddd,14
        3333333,eeeeeeee,14
        3333333,ffffffff,14
        3333333,gggggggg,14
        3333333,hhhhhhhh,14

    Output:

        1111111,gggggggg,14
        1111111,hhhhhhhh,14
        2222222,gggggggg,14
        3333333,gggggggg,14
        3333333,hhhhhhhh,14

    What I have tried is to transpose the 2nd and 3rd fields with respect to the 1st, so that I can then use nawk on field $7 or $8:

        #!/usr/bin/ksh
        awk -F"," '{
            a[$1];
            b[$1] = b[$1] "," $2;
            c[$1] = c[$1] "," $3
        }
        END {
            for (i in a) { print i "," b[i] "," c[i] }
        }' file > output.txt
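    The transposition isn't needed: a per-key counter can decide, line by line, whether a record is past the sixth occurrence. A minimal sketch:

        awk -F, '{ if (++count[$1] > 6) print }' file > output.txt

    This prints each line whose first field has already appeared six times, which yields exactly the output above while preserving the input order.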

    Read the article

  • Alternative to NOT EXISTS

    - by Dave Colwell
    Hi all, I have two tables linked by an ID column; let's call them Table A and Table B. My goal is to find all the records in Table A that have no record in Table B. For instance:

        Table A:
        ID----Value
        1-----value1
        2-----value2
        3-----value3
        4-----value4

        Table B:
        ID----Value
        1-----x
        2-----y
        4-----z
        4-----l

    As you can see, the record with ID = 3 does not exist in Table B, so I want a query that will give me record 3 from Table A. The way I am currently doing this is by saying:

        AND NOT EXISTS (SELECT ID FROM TableB)

    but since the tables are huge, the performance of this is terrible. Also, when I tried using a LEFT JOIN where TableB.ID is null, it didn't work. Can anyone suggest an alternative?
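    For reference, a sketch of both standard anti-join forms; note that NOT EXISTS only does the right thing when it is correlated to the outer row:

        -- correlated NOT EXISTS
        SELECT a.ID, a.Value
        FROM TableA a
        WHERE NOT EXISTS (SELECT 1 FROM TableB b WHERE b.ID = a.ID);

        -- LEFT JOIN anti-join: the IS NULL test goes on the joined key
        SELECT a.ID, a.Value
        FROM TableA a
        LEFT JOIN TableB b ON b.ID = a.ID
        WHERE b.ID IS NULL;

    With an index on TableB.ID, either form lets the optimizer probe the index per outer row instead of rescanning TableB.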

    Read the article

  • Removing values from a returned linq query

    - by Diver D
    Hi there, I am hoping for some help with a query I have:

        var group = from r in CustomerItem
                    group r by r.StoreItemID into g
                    select new
                    {
                        StoreItemID = g.Key,
                        ItemCount = g.Count(),
                        ItemAmount = Customer.Sum(cr => cr.ItemAmount),
                        RedeemedAmount = Customer.Sum(x => x.RedeemedAmount)
                    };

    I am returning my results to a list so I can bind it to a listbox. I have a property called EntryType, which is an int with two possible values, 1 or 2. Let's say I had 3 items that my query is working with: two of them had EntryType = 1 and the third had EntryType = 2. The first two records had an ItemAmount of 55.00 each, and the third had an ItemAmount of 50.00. How can I group using something similar to the above, but subtract the 50.00 ItemAmount from the grouped amount to return 60.00? Any help would be great!!
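    A sketch of one way, assuming EntryType 1 amounts add and EntryType 2 amounts subtract: fold the sign into a single conditional Sum over the group:

        var groups = from r in CustomerItem
                     group r by r.StoreItemID into g
                     select new
                     {
                         StoreItemID = g.Key,
                         ItemCount = g.Count(),
                         // 55 + 55 - 50 = 60 for the example above
                         ItemAmount = g.Sum(r => r.EntryType == 1
                                                 ? r.ItemAmount
                                                 : -r.ItemAmount),
                         RedeemedAmount = g.Sum(r => r.RedeemedAmount)
                     };

    Note the sums run over g (the group) rather than the whole Customer collection, which the original query appears to have been mixing up.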

    Read the article

  • Exploring search options for PHP

    - by Joshua
    I have an InnoDB table using numerous foreign keys, but we just want to look up some basic info out of it. I've done some research but am still lost.
    1) How can I tell if my host has Sphinx installed already? I don't see it as an option for the table storage method (i.e. InnoDB, MyISAM). (A quick check is sketched below.)
    2) Is Zend_Search_Lucene responsive enough for AJAX functionality over millions of records?
    3) Mirror my InnoDB table with a MyISAM one? Make every InnoDB transaction end with a write to the MyISAM copy, then use it for 1:1 lookups? How would I do this automagically? This should make the MyISAM copy effectively ACID-compliant and free(er) from corruption, no?
    4) PostgreSQL fulltext queries don't even look like SQL to me, wtf; I don't have time to learn a new SQL syntax, I need noob options.
    5) ????????????????????
    This is a high-volume site on a decently-equipped VPS. Thanks very much for any ideas.
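    On point 1: Sphinx is a standalone daemon rather than a MySQL storage engine, so it won't appear in the engine list (unless the separate SphinxSE plugin was compiled in). A quick shell check for its binaries:

        # look for the Sphinx search daemon and indexer on the host
        which searchd indexer
        searchd --help | head -n 1   # prints the version banner if installed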

    Read the article

  • RewriteRule and php download counter

    - by rcourtna
    (1) I have a site that serves up MP3 files: http://domain/files/1234567890.mp3
    (2) I have a PHP script that tracks file download counts: http://domain/modules/download_counter.php?file=/files/1234567890.mp3 - after download_counter.php records the download, it redirects to the original file: Header("Location: $FQDN_url");
    (3) I'd like all my public links to be presented as the direct file URLs from (1), so I'm trying to use Apache to redirect those requests to download_counter.php: RewriteRule ^files/(.+\.mp3)$ /modules/download_counter.php?file=/files/$1 [L]
    I'm currently stuck on (3), as it results in a redirect loop, since download_counter.php simply redirects the request back to the original file (rather than streaming the file contents). I'm also motivated to use download_counter.php as-is (without modifying its redirect behaviour), because the script is part of a larger CMS module and I'd like to avoid complicating my upgrade path. Perhaps there is no solution to my problem (other than modifying the download_counter script). WDYT?
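    One assumption-laden sketch of a way out without touching the script: expose a second path to the same directory that the rewrite rule does not match, and hand that path to the counter, so its redirect escapes the loop:

        # httpd.conf / vhost: second route to the same files (filesystem path is hypothetical)
        Alias /files-raw /var/www/site/files

        # Send pretty URLs through the counter, but point it at the aliased path
        RewriteRule ^files/(.+\.mp3)$ /modules/download_counter.php?file=/files-raw/$1 [L]

    The counter then issues Location: .../files-raw/1234567890.mp3, which Apache serves directly. This assumes the script builds its redirect URL from the file parameter; whether that holds for this particular CMS module needs checking.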

    Read the article

  • Python implementation of avro slow?

    - by lazy1
    I'm reading some data from an Avro file using the avro library. It takes about a minute to load 33K objects from the file. This seems very slow to me, especially with the Java version reading the same file in about 1 second. Here is the code; am I doing something wrong?

        import avro.datafile
        import avro.io
        from time import time

        def load(filename):
            fo = open(filename, "rb")
            reader = avro.datafile.DataFileReader(fo, avro.io.DatumReader())
            for i, record in enumerate(reader):
                pass
            return i + 1

        def main(argv=None):
            import sys
            from argparse import ArgumentParser

            argv = argv or sys.argv
            parser = ArgumentParser(description="Read avro file")

            start = time()
            num_records = load("events.avro")
            end = time()
            print("{0} records in {1} seconds".format(num_records, end - start))

        if __name__ == "__main__":
            main()
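    The reference avro package decodes each datum in pure Python, which is the usual bottleneck here. A sketch of the same count using the third-party fastavro package, whose reader is compiled:

        import fastavro

        def load(filename):
            # Iterate the datafile and count the records it yields.
            with open(filename, "rb") as fo:
                return sum(1 for _ in fastavro.reader(fo))

    The loop itself is unchanged; only the decoding layer is swapped out.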

    Read the article

  • How to 'insert if not exists' in MySQL?

    - by warren
    I started by googling and found this article, which talks about mutex tables. I have a table with ~14 million records. If I want to add more data in the same format, is there a way to ensure the record I want to insert does not already exist, without using a pair of queries (i.e., one query to check and one to insert if the result set is empty)? Does a unique constraint on a field guarantee the insert will fail if the value is already there? It seems that with merely a constraint, when I issue the insert via PHP, the script croaks.
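    For reference, a sketch of the two single-statement forms MySQL offers once a UNIQUE key covers the relevant column(s); table and column names here are hypothetical:

        -- silently skip rows that would violate the unique key
        INSERT IGNORE INTO records (keycol, payload) VALUES ('k1', 'v1');

        -- or update the existing row instead of failing
        INSERT INTO records (keycol, payload) VALUES ('k1', 'v1')
        ON DUPLICATE KEY UPDATE payload = VALUES(payload);

    With the plain constraint alone, the insert raises a duplicate-key error, which is what makes the PHP script croak unless the error is caught.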

    Read the article

  • LINQ in SQLite for Windows store app does not have 'ThenBy' to order by multiple columns

    - by user1131657
    I have a Windows 8 store application and I'm using the latest version of SQLite for my database. I want to return some records from the database, ordered by more than one column. However, SQLite's LINQ provider doesn't seem to have the ThenBy statement. My LINQ statement is below:

        var results = (from i in connection.Table<MyTable>()
                       where i.Type == type
                       orderby i.Usage_Counter // ThenBy i.ID
                       select i);

    So how do I sort by multiple columns in SQLite without doing another LINQ statement?
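    A sketch of the common workaround: let SQLite do the filtering, then complete the multi-key ordering in memory, where the full LINQ to Objects operators (including ThenBy) are available:

        var rows = connection.Table<MyTable>()
                             .Where(i => i.Type == type)
                             .ToList()                      // materialize: leaves sqlite-net's provider
                             .OrderBy(i => i.Usage_Counter)
                             .ThenBy(i => i.ID);

    For large tables, a raw query keeps the sort in the database instead: connection.Query<MyTable>("select * from MyTable where Type = ? order by Usage_Counter, ID", type).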

    Read the article

  • Multivalue Mysql Inserts using HibernateTemplate

    - by Langali
    I am using Spring's HibernateTemplate and need to insert hundreds of records into a MySQL database every second. I'm not sure what the most performant way of doing it is, but I am trying to see how multi-value MySQL inserts do under Hibernate:

        String query = "insert into user(age, name, birth_date)" +
                       " values(24, 'Joe', '2010-05-19 14:33:14')," +
                       " (25, 'Joe1', '2010-05-19 14:33:14')";

        getHibernateTemplate().execute(new HibernateCallback() {
            public Object doInHibernate(Session session)
                    throws HibernateException, SQLException {
                return session.createSQLQuery(query).executeUpdate();
            }
        });

    But I get this error: 'could not execute native bulk manipulation query. Please check your query...' Any idea how I can use a multi-value MySQL insert with Hibernate? Or is my query incorrect? Any other ways I can improve the performance? I did try the saveOrUpdateAll() method, and that wasn't good enough!
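    One route that sidesteps the native-query restriction is JDBC batching through Hibernate's Work interface (Hibernate 3.2+); a sketch, where the users list and its accessors are hypothetical:

        // needs: org.hibernate.jdbc.Work, java.sql.Connection, java.sql.PreparedStatement
        getHibernateTemplate().execute(new HibernateCallback() {
            public Object doInHibernate(Session session)
                    throws HibernateException, SQLException {
                session.doWork(new Work() {
                    public void execute(Connection connection) throws SQLException {
                        PreparedStatement ps = connection.prepareStatement(
                            "insert into user(age, name, birth_date) values (?, ?, ?)");
                        for (User u : users) {
                            ps.setInt(1, u.getAge());
                            ps.setString(2, u.getName());
                            ps.setTimestamp(3, u.getBirthDate());
                            ps.addBatch();          // queue the row
                        }
                        ps.executeBatch();          // one round trip for the whole batch
                        ps.close();
                    }
                });
                return null;
            }
        });

    Combined with rewriteBatchedStatements=true in the MySQL JDBC URL, the driver itself rewrites the batch into a multi-value insert.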

    Read the article
