Search Results

Search found 23955 results on 959 pages for 'insert query'.

Page 412 of 959

  • Android database closed exception

    - by Bombastic
    I'm working on a project where I'm downloading data from the web and saving it to an SQLite database. A few minutes ago I received a strange exception on our server from a user, saying that the SQLite database is already closed, and I just checked the whole file where the exception happened and I'm not calling dbHelper.close();. Here is the function where the app crashes, followed by the LogCat message: public void insertCollectionCountries(JSONObject obj, Context context) { //Insert in collection_countries if(RPCCommunicator.isServiceRunning){ Log.w("","JsonCollection - insertCollectionCountries"); ContentValues valuesCountries = new ContentValues(); try { collectionId = Integer.parseInt(obj.getString("collection_id")); dbHelper.deleteSQL("collection_countries", "collection_id=?", new String[] {Integer.toString(collectionId)}); JSONArray arrayCountries = obj.getJSONArray("country_availability"); for (int i=0; i<arrayCountries.length(); i++) { valuesCountries.put("collection_id", collectionId); String countryCode = arrayCountries.getString(i); valuesCountries.put("country_code", countryCode); dbHelper.executeQuery("collection_countries", valuesCountries); } } catch (JSONException e){ e.printStackTrace(); } } } The error is on this line: dbHelper.executeQuery("collection_countries", valuesCountries); Here is the LogCat message: java.lang.IllegalStateException: database /data/data/com.stampii.stampii/databases/stampii_sys_tpl.sqlite (conn# 0) already closed at android.database.sqlite.SQLiteDatabase.verifyDbIsOpen(SQLiteDatabase.java:2123) at android.database.sqlite.SQLiteDatabase.setTransactionSuccessful(SQLiteDatabase.java:734) at com.stampii.stampii.comm.rpc.SystemDatabaseHelper.execQuery(SystemDatabaseHelper.java:298) at com.stampii.stampii.comm.rpc.SystemDatabaseHelper.executeQuery(SystemDatabaseHelper.java:291) at com.stampii.stampii.jsonAPI.JsonCollection.insertCollectionCounries(JsonCollection.java:548) at com.stampii.stampii.jsonAPI.JsonCollection.executeInsert(JsonCollection.java:181) at com.stampii.stampii.collections.MyService.downloadCollections(MyService.java:122) at com.stampii.stampii.collections.MyService$2.run(MyService.java:74) at java.lang.Thread.run(Thread.java:1020) And this is the function in my dbHelper class which I'm using to insert data: public boolean executeQuery(String tableName,ContentValues values){ return execQuery(tableName,values); } private boolean execQuery(String tableName,ContentValues values){ sqliteDb = instance.getWritableDatabase(); sqliteDb.beginTransaction(); sqliteDb.insert(tableName, null, values); sqliteDb.setTransactionSuccessful(); sqliteDb.endTransaction(); return true; } Any ideas what could close my SQLite database or cause that exception? I've tested this code on a few emulators with different Android versions and on different devices (HTC EVO 3D, Samsung Galaxy Nexus, HTC Desire, LG OPTIMUS PAD, Samsung Galaxy S2, Samsung Galaxy Note) and it works fine. Thanks in advance!
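
    For what it's worth, the two usual culprits for this symptom are another thread calling close() on the same helper while the service is still inserting, and a transaction left open after an exception (setTransactionSuccessful()/endTransaction() are skipped if insert() throws). A minimal defensive sketch of execQuery, assuming `instance` is the shared SQLiteOpenHelper from the snippet above:

      // Hedged sketch, not the original helper: endTransaction() in finally keeps an
      // exception from leaving the connection mid-transaction, and nothing here ever
      // calls close() while a worker thread is still inserting.
      private boolean execQuery(String tableName, ContentValues values) {
          SQLiteDatabase db = instance.getWritableDatabase(); // cached handle, cheap to call
          db.beginTransaction();
          try {
              db.insert(tableName, null, values);
              db.setTransactionSuccessful();
              return true;
          } finally {
              db.endTransaction(); // runs even if insert() throws
          }
      }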

  • MySQL Locking Up

    - by Ian
    I've got an InnoDB table that gets a lot of reads and almost no writes (roughly 1 write for every 400,000 reads). I'm running into a pretty big problem, though, when I do an INSERT into the table. MySQL completely locks up. It uses 100% CPU, and every other table (even in other databases) has its status set to "Locked" until the INSERT is done. This is a big problem because MySQL stays locked up for up to 4 minutes. I'm using version 5.1.47 (rpm from mysql.com). Any ideas?
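
    A diagnostic sketch for narrowing this down (the table name is a placeholder): confirm the table really is InnoDB and capture what the server is doing while the stall happens; innodb_buffer_pool_size and the query cache are the usual settings to look at afterwards.

      -- run these in a second session while the INSERT is stalling
      SHOW CREATE TABLE mytable;       -- check it really says ENGINE=InnoDB
      SHOW FULL PROCESSLIST;           -- which threads show state "Locked"?
      SHOW ENGINE INNODB STATUS\G      -- lock waits, pending flushes, semaphores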

  • Gotchas INSERTing into SQLite on Android?

    - by paul.meier
    Hi friends, I'm trying to set up a simple SQLite database in Android, handling the schema via a subclass of SQLiteOpenHelper. However, when I query my tables, the rows I think I've inserted are never present. Namely, in SQLiteOpenHelper's onCreate(SQLiteDatabase db) method, I use db.execSQL() to run CREATE TABLE commands, then have tried both db.execSQL() and db.insert() to run INSERT commands on the tables I've just created. This appears to run fine, but when I try to query them I always get 0 rows returned (for debugging, the queries I'm running are a simple SELECT * FROM table and checking the Cursor's getCount()). Has anybody run into anything like this before? These commands seem to run fine on command-line sqlite3. Are there gotchas that I'm missing (e.g. INSERTs must/must not be semicolon terminated, or some issue involving multiple tables)? I've attached some of the code below. Thanks for your time, and let me know if I can clarify further. @Override public void onCreate(SQLiteDatabase db) { db.execSQL("CREATE TABLE "+ LEVEL_TABLE +" (" + " "+ _ID +" INTEGER PRIMARY KEY AUTOINCREMENT," + " level TEXT NOT NULL,"+ " rows INTEGER NOT NULL,"+ " cols INTEGER NOT NULL);"); db.execSQL("CREATE TABLE "+ DYNAMICS_TABLE +" (" + " level_id INTEGER NOT NULL," + " row INTEGER NOT NULL,"+ " col INTEGER NOT NULL,"+ " type INTEGER NOT NULL);"); db.execSQL("CREATE TABLE "+ SCORE_TABLE +" (" + " level_id INTEGER NOT NULL," + " score INTEGER NOT NULL,"+ " date_achieved DATE NOT NULL,"+ " name TEXT NOT NULL);"); this.enterFirstLevel(db); } And a sample of the insert code I'm currently using, which gets called in enterFirstLevel() (some values hard-coded just to get it running...): private void insertDynamic(SQLiteDatabase db, int row, int col, int type) { ContentValues values = new ContentValues(); values.put("level_id", "1"); values.put("row", Integer.toString(row)); values.put("col", Integer.toString(col)); values.put("type", Integer.toString(type)); db.insertOrThrow(DYNAMICS_TABLE, "col", values); } Finally, the query code looks like this: private Cursor fetchLevelDynamics(int id) { SQLiteDatabase db = this.leveldata.getReadableDatabase(); try { String fetchQuery = "SELECT * FROM " + DYNAMICS_TABLE; String[] queryArgs = new String[0]; Cursor cursor = db.rawQuery(fetchQuery, queryArgs); Activity activity = (Activity) this.context; activity.startManagingCursor(cursor); return cursor; } finally { db.close(); } }
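
    One plausible cause (a guess, not from the original thread): fetchLevelDynamics() closes the database in its finally block before the caller reads the cursor, and an Android SQLiteCursor does not actually execute the query until the first getCount()/moveToFirst() call, so it comes back empty. A sketch that keeps the connection open while the cursor is alive, plus the other classic gotcha about onCreate():

      // Hedged sketch: let the Activity lifecycle (or the caller) close the db,
      // not the method that hands out the cursor.
      private Cursor fetchLevelDynamics() {
          SQLiteDatabase db = this.leveldata.getReadableDatabase();
          Cursor cursor = db.rawQuery("SELECT * FROM " + DYNAMICS_TABLE, null);
          Activity activity = (Activity) this.context;
          activity.startManagingCursor(cursor);   // ties cursor cleanup to the Activity
          return cursor;                          // db stays open while the cursor is used
      }
      // Also note onCreate() only runs when the database file does not exist yet;
      // after changing the schema or the seed data, bump the version passed to the
      // SQLiteOpenHelper constructor (or clear the app's data) so it fires again.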

  • Can I clone an IQueryable in LINQ, for UNION purposes?

    - by user169867
    I have a table of WorkOrders. The table has a PrimaryWorker & PrimaryPay field. It also has a SecondaryWorker & SecondaryPay field (which can be null). I wish to run 2 very similar queries & union them so that the result returns a Worker field & Pay field. So if a single WorkOrder record had both the PrimaryWorker and SecondaryWorker fields populated, I would get 2 records back. The "where clause" part of these 2 queries is very similar and long to construct. Here's a dummy example: var q = ctx.WorkOrder.Where(w => w.WorkDate >= StartDt && w.WorkDate <= EndDt); if(showApprovedOnly) { q = q.Where(w => w.IsApproved); } //...more filters applied Now I also have a search flag called "hideZeroPay". If that's true I don't want to include the record if the worker was paid $0. But obviously for 1 query I need to compare the PrimaryPay field and in the other I need to compare the SecondaryPay field. So I'm wondering how to do this. Can I clone my base query "q" and make a primary & secondary worker query out of it and then union those 2 queries together? I'd greatly appreciate an example of how to correctly handle this. Thanks very much for any help.
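
    A hedged sketch of one way to do this; the field names (PrimaryWorker, PrimaryPay, ...) are assumed from the description. Because each Where/Select returns a new IQueryable, the shared base query q is not consumed by building the first branch, so no explicit cloning is needed:

      // project both branches into the same shape, then combine them
      var primary = q.Select(w => new { Worker = w.PrimaryWorker, Pay = w.PrimaryPay });

      var secondary = q.Where(w => w.SecondaryWorker != null)
                       .Select(w => new { Worker = w.SecondaryWorker, Pay = w.SecondaryPay });
      // if SecondaryPay is nullable, coalesce it (Pay = w.SecondaryPay ?? 0) so both
      // anonymous types match

      if (hideZeroPay)
      {
          primary   = primary.Where(x => x.Pay > 0);
          secondary = secondary.Where(x => x.Pay > 0);
      }

      var workers = primary.Union(secondary);   // or Concat() to keep duplicates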

  • I don't understand this error when connecting PHP and MySQL (access denied for user). Please help me solve it.

    - by user309381
    class MySQLDatabase { public $connection; function _construct() { $this->open_connection();} public function open_connection() {$this->connection = mysql_connect(DB_SERVER,DB_USER,DB_PASS); if(!$this->connection){die("Database Connection Failed" . mysql_error());} else{$db_select = mysql_select_db(DB_NAME,$this->connection); if(!$db_select){die("Database Selection Failed" . mysql_error()); } }} public function close_connection({ if(isset($this->connection)){ mysql_close($this->connection); unset($this->connection);}} public function query(/*$sql*/){ $sql = "SELECT*FROM users where id = 1"; $result = mysql_query($sql); $this->confirm_query($result); //return $result;while( $found_user = mysql_fetch_assoc($result)) { echo $found_user ['username']; } } private function confirm_query($result) { if(!$result) { die("The Query has problem" . mysql_error()); } } } $database = new MySQLDatabase(); $database->open_connection(); $database->query(); $database->close_connection(); I am getting an error like "Access denied for user 'SYSTEM'@'localhost' (using password: NO)". I also have another database and it runs fine. I have also set the password after encountering the error. What else can I do to solve this? Please help.
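
    Two hedged guesses, since the constant definitions are not shown: the constructor is spelled _construct (one underscore), so it never runs as __construct; and "Access denied for user 'SYSTEM'@'localhost' (using password: NO)" is what mysql_connect() reports when it falls back to defaults because DB_SERVER/DB_USER/DB_PASS were not defined (or not included) before the class is used. A minimal sketch with placeholder credentials:

      <?php
      // placeholder values -- use the real MySQL account that owns the database
      define('DB_SERVER', 'localhost');
      define('DB_USER',   'myuser');
      define('DB_PASS',   'mypassword');
      define('DB_NAME',   'mydatabase');

      require_once 'database.php';      // the file containing MySQLDatabase

      // note: the constructor must be named __construct (two underscores)
      $database = new MySQLDatabase();
      $database->open_connection();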

  • Subquery in join with Doctrine DQL

    - by Martijn de Munnik
    I want to use DQL to create a query which looks like this in SQL: select e.* from e inner join ( select uuid, max(locale) as locale from e where locale = 'nl_NL' or locale = 'nl' group by uuid ) as e_ on e.uuid = e_.uuid and e.locale = e_.locale I tried to use QueryBuilder to generate the query and subquery. I think they do the right thing by themselves, but I can't combine them in the join statement. Does anybody know if this is possible with DQL? I can't use native SQL because I want to return real objects and I don't know for which object this query is run (I only know the base class, which has the uuid and locale properties). $subQueryBuilder = $this->_em->createQueryBuilder(); $subQueryBuilder ->addSelect('e.uuid, max(e.locale) as locale') ->from($this->_entityName, 'e') ->where($subQueryBuilder->expr()->in('e.locale', $localeCriteria)) ->groupBy('e.uuid'); $queryBuilder = $this->_em->createQueryBuilder(); $queryBuilder ->addSelect('e') ->from($this->_entityName, 'e') ->join('('.$subQueryBuilder.') as', 'e_')->join ->where('e.uuid = e_.uuid') ->andWhere('e.locale = e_.locale');
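
    A hedged sketch of a workaround: DQL can only join along mapped associations, so a derived-table join is not expressible in DQL itself, but a native query with a ResultSetMappingBuilder still hydrates real entity objects, which was the reason for avoiding raw SQL. The table name below is a placeholder for whatever $this->_entityName maps to.

      use Doctrine\ORM\Query\ResultSetMappingBuilder;

      $rsm = new ResultSetMappingBuilder($this->_em);
      $rsm->addRootEntityFromClassMetadata($this->_entityName, 'e');

      $sql = 'SELECT e.* FROM entity e
              INNER JOIN (
                  SELECT uuid, MAX(locale) AS locale
                  FROM entity
                  WHERE locale = ? OR locale = ?
                  GROUP BY uuid
              ) e_ ON e.uuid = e_.uuid AND e.locale = e_.locale';

      $query = $this->_em->createNativeQuery($sql, $rsm);
      $query->setParameter(1, 'nl_NL');
      $query->setParameter(2, 'nl');
      $entities = $query->getResult();   // real objects of the mapped class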

  • Bitwise Shifting in C

    - by user313943
    I've recently decided to undertake an SMS project for sending and receiving SMS through a mobile. The data is sent in PDU format, so I am required to convert ASCII characters to 7-bit GSM alphabet characters. To do this I've come across several examples, such as http://www.dreamfabric.com/sms/hello.html This example shows the rightmost bits of the second septet being inserted into the first septet to create an octet. Plain bitwise shifts do not achieve this on their own, since >> fills in zeros from the left and << from the right. As I understand it, I need some kind of bitwise rotate to create this. Can anyone tell me how to move bits from the right-hand side and insert them on the left? Thanks,
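
    A sketch of the usual packing loop (not taken from any particular library): no rotate instruction is needed, just combine the current septet shifted down with the low bits of the next septet shifted up into the vacated high positions; every eighth septet is absorbed entirely by the previous output byte.

      #include <stdio.h>

      /* Pack n 7-bit GSM septets into octets; returns the number of octets written.
         out must have room for (n * 7 + 7) / 8 bytes. */
      static int pack_septets(const unsigned char *sept, int n, unsigned char *out)
      {
          int len = 0, shift = 0;
          for (int i = 0; i < n; i++) {
              unsigned char cur  = sept[i] & 0x7F;
              unsigned char next = (i + 1 < n) ? (sept[i + 1] & 0x7F) : 0;
              if (shift == 7) { shift = 0; continue; }   /* septet fully carried over */
              out[len++] = (unsigned char)((cur >> shift) | (next << (7 - shift)));
              shift++;
          }
          return len;
      }

      int main(void)
      {
          unsigned char msg[] = "hellohello";            /* plain ASCII maps 1:1 here */
          unsigned char packed[16];
          int len = pack_septets(msg, 10, packed);
          for (int i = 0; i < len; i++)
              printf("%02X ", packed[i]);                /* E8 32 9B FD 46 97 D9 EC 37 */
          printf("\n");
          return 0;
      }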

  • Relational MySQL - fetched properties?

    - by Kelso.b
    I'm currently using the following PHP code: // Get all subordinates $subords = array(); $supervisorID = $this->session->userdata('supervisor_id'); $result = $this->db->query(sprintf("SELECT * FROM users WHERE supervisor_id=%d AND id!=%d",$supervisorID, $supervisorID)); $user_list_query = 'user_id='.$supervisorID; foreach($result->result() as $user){ $user_list_query .= ' OR user_id='.$user->id; $subords[$user->id] = $user; } // Get Submissions $submissionsResult = $this->db->query(sprintf("SELECT * FROM submissions WHERE %s", $user_list_query)); $submissions = array(); foreach($submissionsResult->result() as $submission){ $entriesResult = $this->db->query(sprintf("SELECT * FROM submittedentries WHERE timestamp=%d", $submission->timestamp)); $entries = array(); foreach($entriesResult->result() as $entries) $entries[] = $entry; $submissions[] = array( 'user' => $subords[$submission->user_id], 'entries' => $entries ); $entriesResult->free_result(); } Basically I'm getting a list of users that are subordinates of a given supervisor_id (every user entry has a supervisor_id field), then grabbing entries belonging to any of those users. I can't help but think there is a more elegant way of doing this, like SELECT FROM tablename where user->supervisor_id=2222 Is there something like this with PHP/MySQL? Should probably learn relational databases properly sometime. :(
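
    A hedged sketch of collapsing the two round trips (and the growing OR list) into joins; table and column names are taken from the snippet above and may need adjusting for the actual schema:

      SELECT s.*
      FROM submissions AS s
      JOIN users AS u ON u.id = s.user_id
      WHERE u.supervisor_id = 2222
         OR s.user_id = 2222;          -- include the supervisor's own submissions

      -- the per-submission entries can be joined the same way instead of being
      -- fetched in a loop:
      SELECT s.user_id, e.*
      FROM submissions AS s
      JOIN submittedentries AS e ON e.timestamp = s.timestamp
      JOIN users AS u ON u.id = s.user_id
      WHERE u.supervisor_id = 2222 OR s.user_id = 2222;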

  • Getting a weird issue with the TO_NUMBER function in Oracle

    - by Fazal
    I have been getting an intermittent issue when executing the TO_NUMBER function in the WHERE clause on a varchar2 column, if the number of records exceeds a certain number n. I used n because there is no exact number of records at which it happens: on one DB it happened after n was 1 million, on another when it was 0.1 million. E.g. I have a table with 10 million records, say table Country, which has field1 varchar2 containing numeric data, and an Id. If I do a query such as select * from country where to_number(field1) = 23 and id > 1 and id < 100000 this works. But if I do the query select * from country where to_number(field1) = 23 and id > 1 and id < 100001 it fails saying invalid number. Next I try the query select * from country where to_number(field1) = 23 and id > 2 and id < 100001 and it works again. As I only got "invalid number" it was confusing, but in the log file it said Memory Notification: Library Cache Object loaded into SGA Heap size 3823K exceeds notification threshold (2048K) KGL object name :with sqlplan as ( select c006 object_owner, c007 object_type,c008 object_name from htmldb_collections where COLLECTION_NAME='HTMLDB_QUERY_PLAN' and c007 in ('TABLE','INDEX','MATERIALIZED VIEW','INDEX (UNIQUE)')), ws_schemas as( select schema from wwv_flow_company_schemas where security_group_id = :flow_security_group_id), t as( select s.object_owner table_owner,s.object_name table_name, d.OBJECT_ID from sqlplan s,sys.dba_objects d It seems to be related to SGA size, but Google did not give me much help on this. Does anyone have any idea about this issue with TO_NUMBER or Oracle functions on large data?
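
    A hedged explanation and workaround: Oracle does not promise any particular order of predicate evaluation, so once the optimizer's row estimates change (which can happen at a fairly arbitrary row count), TO_NUMBER may be applied to rows that the id filter would otherwise have excluded, and one of those rows evidently holds a non-numeric field1. Guarding the conversion makes the query independent of evaluation order; REGEXP_LIKE is available from 10g on, and the pattern below is only a simple integer/decimal check:

      SELECT *
      FROM   country
      WHERE  id > 1
      AND    id < 100001
      AND    CASE WHEN REGEXP_LIKE(TRIM(field1), '^-?[0-9]+(\.[0-9]+)?$')
                  THEN TO_NUMBER(field1)
             END = 23;   -- CASE guarantees the WHEN test runs before TO_NUMBER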

  • SOLR - wildcard search with capital letter

    - by Yurish
    I have a problem with Solr searching. When I search for the query dog* everything is OK, but when the query is Dog* (with a capital first letter), I get no results. Any advice? My config: <fieldType name="text" class="solr.TextField" positionIncrementGap="100"> <analyzer type="index"> <tokenizer class="solr.WhitespaceTokenizerFactory"/> <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt"/> <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="1" catenateNumbers="1" catenateAll="0" splitOnCaseChange="0"/> <filter class="solr.LowerCaseFilterFactory"/> <filter class="solr.RemoveDuplicatesTokenFilterFactory"/> </analyzer> <analyzer type="query"> <tokenizer class="solr.WhitespaceTokenizerFactory"/> <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/> <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt"/> <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="0" catenateNumbers="0" catenateAll="0" splitOnCaseChange="0"/> <filter class="solr.LowerCaseFilterFactory"/> <filter class="solr.RemoveDuplicatesTokenFilterFactory"/> </analyzer> </fieldType>
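
    A likely explanation (hedged, since it depends on the query parser in use): wildcard and prefix terms are not run through the query-time analyzer, so the LowerCaseFilterFactory above never sees Dog*, while the index only contains lowercased terms. Lowercasing the term on the client before appending the * is the usual fix; a tiny sketch:

      import java.util.Locale;

      public class PrefixQueryExample {
          public static void main(String[] args) {
              String userInput = "Dog";
              // wildcard terms bypass query-time analysis, so lowercase them here
              String prefix = userInput.toLowerCase(Locale.ROOT) + "*";   // -> "dog*"
              System.out.println("q=text:" + prefix);   // "text" is the field type shown above
          }
      }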

  • SQLite Delphi throwing an incorrect exception

    - by NeoNMD
    "Project CompetitionServer.exe raised exception class ESQLiteException with message 'Error executing SQL. Error [1]: SQL error or missing database. "INSERT INTO MatchesTable(MatchesID,RoundID,AkaFirstName,AoFirstName)VALUES(1,2,p,o)": no such column: p'. Process stopped. Use Step or Run to continue." This is clear to everone here that the notice is correct, p is NOT a column, it is some data I am trying to insert. What is the problem here?

  • Grails g:paginate tag and custom URL

    - by aboxy
    Hello, I am trying to use g:paginate in a shared template where, depending on the controller, the URL changes, e.g. for my homepage the URL should be: mydomain[DOT]com/news/recent/(1..n) and for the search page: www[DOT]mydomain[DOT]com/search/query/"ipad apps"/filter/thismonth and my g:paginate looks like this: g:paginate controller=${customController} action=${customAction} total=${total} For the first case, I was able to provide the controller as 'news' and the action as 'recent' and mapped the URL /news/recent/$offset to my controller. But for the search page, I am not able to achieve what I want. I have a URL mapping defined as /search/$filter**(controller:"search",action:"fetch") where $filter can be /query/"ipad apps"/filter/thismonth/filter/something/filter/somethingelse. I want to be able to show the URL as above rather than ?query="ipad apps"&filter=thismonth&filter=something&filter=somethingelse. I believe I can pass all the parameters in the params attribute of g:paginate, but that will not give me a pretty URL. What would be the best way to achieve this? Please feel free to ask questions if I missed anything. Thanks in advance.

  • LINQ to SQL string property from non-null column with default

    - by Barry Fandango
    I have a LINQ to SQL class "VoucherRecord" based on a simple table. One property "Note" is a string that represents an nvarchar(255) column, which is non-nullable and has a default value of empty string (''). If I instantiate a VoucherRecord the initial value of the Note property is null. If I add it using a DataContext's InsertOnSubmit method, I get a SQL error message: Cannot insert the value NULL into column 'Note', table 'foo.bar.tblVoucher'; column does not allow nulls. INSERT fails. Why isn't the database default kicking in? What sort of query could bypass the default anyway? How do I view the generated sql for this action? Thanks for your help!
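
    A hedged explanation: LINQ to SQL lists every mapped column in the INSERT it generates, so the column-level default never gets a chance to apply, and sending NULL for a NOT NULL column then fails. One fix is to give the entity an in-memory default via the generated partial class; setting DataContext.Log, by the way, shows the generated SQL.

      // lives in your own file, alongside the designer-generated VoucherRecord
      public partial class VoucherRecord
      {
          // called at the end of the generated constructor
          partial void OnCreated()
          {
              this.Note = string.Empty;   // mirrors the database default of ''
          }
      }

      // to see the SQL that LINQ to SQL actually emits:
      // dataContext.Log = Console.Out;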

  • How do I make OneNote 2007 images searchable when inserted via code?

    - by Scott Bruns
    When I insert an image into OneNote 2007 using C#, my images have 'Make Text in Image Searchable' set to Disabled. How do I insert an image with 'Make Text in Image Searchable' enabled, or how do I enable this property after the image is imported? I have already imported a lot of images; how do I make the existing imported images searchable? I already know how to do this manually by right-clicking the image and setting the language. The OCR works fine, I just need to do it automatically.

  • Inserting into a bitstream

    - by evilertoaster
    I'm looking for a way to efficiently insert bits into a bitstream and have it 'overflow', padding with 0's. So, for example, if you had a byte array with 2 bytes, 231 and 109 (11100111 01101101), and did BitInsert(byteArray,4,00), it would insert two bits at bit offset 4, making 11100001 11011011 01000000 (225, 219, 64). It would be OK even if the method only allowed 1-bit insertions, e.g. BitInsert(byteArray,4,true) or BitInsert(byteArray,4,false). I have one method of doing it, but it has to walk the stream bit by bit with a bitmask, so I'm wondering if there's a simpler approach... Answers in assembly or a C derivative would be appreciated.
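
    A sketch of a byte-wise version (no per-bit walk): split the target byte around the offset, drop the new bit in, and ripple the displaced low bit through the remaining bytes. It reproduces the example above, assuming the caller tracks the stream length in bits and leaves room for one extra byte.

      #include <stdio.h>
      #include <stddef.h>

      /* Insert one bit at bitOffset (0 = MSB of byte 0); everything after it moves
         one position toward the end, padded with 0. buf needs room for
         (bitLen + 1 + 7) / 8 bytes. Returns the new length in bits. */
      static size_t bit_insert(unsigned char *buf, size_t bitLen, size_t bitOffset, int bit)
      {
          size_t byteIdx  = bitOffset / 8;
          unsigned bitIdx = bitOffset % 8;
          size_t lastByte = (bitLen + 1 + 7) / 8 - 1;
          unsigned carry  = bit ? 1u : 0u;

          if (bitLen % 8 == 0)                        /* a brand-new trailing byte */
              buf[lastByte] = 0;

          unsigned char keep = (unsigned char)(0xFF << (8 - bitIdx)); /* bits left in place */
          unsigned char high = buf[byteIdx] & keep;
          unsigned char low  = buf[byteIdx] & (unsigned char)~keep;

          unsigned nextCarry = buf[byteIdx] & 1u;
          buf[byteIdx] = high | (unsigned char)(carry << (7 - bitIdx)) | (low >> 1);
          carry = nextCarry;

          for (size_t i = byteIdx + 1; i <= lastByte; i++) {   /* ripple the shift */
              nextCarry = buf[i] & 1u;
              buf[i] = (unsigned char)((carry << 7) | (buf[i] >> 1));
              carry = nextCarry;
          }
          return bitLen + 1;
      }

      int main(void)
      {
          unsigned char buf[3] = { 231, 109 };        /* 11100111 01101101 */
          size_t bits = 16;
          bits = bit_insert(buf, bits, 4, 0);         /* BitInsert(byteArray, 4, 00) ... */
          bits = bit_insert(buf, bits, 5, 0);         /* ... one bit at a time */
          printf("%d %d %d\n", buf[0], buf[1], buf[2]);   /* 225 219 64 */
          return 0;
      }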

  • How to write this function as a PL/pgSQL function?

    - by morpheous
    I am trying to implement some business logic in a PL/pgSQL function. I have hacked together some pseudo code that explains the type of business logic I want to include in the function. Note: This function returns a table, so I can use it in a query like: SELECT A.col1, B.col1 FROM (SELECT * from some_table_returning_func(1, 1, 2, 3)) as A, tbl2 as B; The pseudocode of the pl/PgSQL function is below: CREATE FUNCTION some_table_returning_func(uid int, type_id int, filter_type_id int, filter_id int) RETURNS TABLE AS $$ DECLARE where_clause text := 'tbl1.id = ' + uid; ret TABLE; BEGIN switch (filter_type_id) { case 1: switch (filter_id) { case 1: where_clause += ' AND tbl1.item_id = tbl2.id AND tbl2.type_id = filter_id'; break; //other cases follow ... } break; //other cases follow ... } // where clause has been built, now run query based on the type ret = SELECT [COL1, ... COLN] WHERE where_clause; IF (type_id <> 1) THEN return ret; ELSE return select * from another_table_returning_func(ret,123); ENDIF; END; $$ LANGUAGE plpgsql; I have the following questions: How can I write the function correctly to (i.e. EXECUTE the query with the generated WHERE clause, and to return a table How can I write a PL/pgSQL function that accepts a table and an integer and returns a table (another_table_returning_func) ?
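
    A hedged sketch of the skeleton; the column names in RETURNS TABLE are placeholders, and one caveat: PL/pgSQL cannot pass a result set as a value to another function, so the ELSE branch below calls the other function directly (a temp table or refcursor would be the alternatives). Dynamic SQL is run with EXECUTE and handed back with RETURN QUERY; for anything that is not an integer, build the clause with format()/quote_literal() to avoid SQL injection.

      CREATE OR REPLACE FUNCTION some_table_returning_func(uid int, type_id int,
                                                           filter_type_id int, filter_id int)
      RETURNS TABLE (col1 int, col2 text, col3 numeric) AS $$
      DECLARE
          where_clause text := 'tbl1.id = ' || uid;
      BEGIN
          IF filter_type_id = 1 AND filter_id = 1 THEN
              where_clause := where_clause
                  || ' AND tbl1.item_id = tbl2.id AND tbl2.type_id = ' || filter_id;
          -- ... other branches ...
          END IF;

          IF type_id <> 1 THEN
              RETURN QUERY EXECUTE
                  'SELECT tbl1.col1, tbl1.col2, tbl1.col3 FROM tbl1, tbl2 WHERE '
                  || where_clause;
          ELSE
              RETURN QUERY SELECT * FROM another_table_returning_func(uid, 123);
          END IF;
      END;
      $$ LANGUAGE plpgsql;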

  • BasicDBObject or QueryBuilder, and some newbie questions about Java and Mongo

    - by Kevin Xu
    Hi, I'm a complete newbie to MongoDB. Q1: using query=new BasicDBObject(); query.put("i", new BasicDBObject("$gt",13)); versus query=new QueryBuilder().put("i").Greaterthan(13).get(), is there any difference inside the system? Q2: I've created a class class findkv extends BasicDBObject{ //is gt gte lt lte public findkv(String fieldname,String op,Object tvalue) { if (op=="") this.put(fieldname,tvalue); else this.put(fieldname, new BasicDBObject(op,tvalue)); } } Should I use it, or should I just use the original functions? Q3: I've used the mongo shell for a few weeks and have become used to it, and I find writing in the mongo shell faster and shorter. Which side has more advantages, writing in mongo or in Java? I need to dump the data from mongo to MySQL. Q4: I have an if (statement==true) return else dowhat; that apparently can't be compiled. I know I can write if (statement!=true) dowhat else return, but can I still write it in the first style? Q5: My Eclipse is Eclipse Java EE IDE for Web Developers, Version: Juno Release, Build id: 20120614-1722. I'd like to install Perl support, which I haven't learned yet. I chose Install Update http://e-p-i-c.sf.net/updates/testing but it doesn't work; is there any way to install Perl into Eclipse manually?
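
    On Q1 and Q4, a couple of hedged notes: QueryBuilder just assembles the same DBObject tree internally (the method is spelled greaterThan in the Java driver), so the two forms produce the identical query document, and Java's if/else needs a terminated statement or a block in each branch. A tiny sketch:

      import com.mongodb.BasicDBObject;
      import com.mongodb.DBObject;
      import com.mongodb.QueryBuilder;

      public class QueryForms {
          static DBObject viaBasic() {
              return new BasicDBObject("i", new BasicDBObject("$gt", 13));
          }

          static DBObject viaBuilder() {
              // same document: { "i" : { "$gt" : 13 } }
              return QueryBuilder.start("i").greaterThan(13).get();
          }

          static void q4(boolean statement) {
              // each branch needs a statement ending in ';' (or a block)
              if (statement) {
                  return;
              } else {
                  doWhat();
              }
          }

          static void doWhat() { /* placeholder */ }
      }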

  • Optimizing Python code performance when importing a zipped CSV into a Mongo collection

    - by mark
    I need to import a zipped csv into a mongo collection, but there is a catch - every record contains a timestamp in Pacific Time, which must be converted to the local time corresponding to the (longitude,latitude) pair found in the same record. The code looks like so: def read_csv_zip(path, timezones): with ZipFile(path) as z, z.open(z.namelist()[0]) as input: csv_rows = csv.reader(input) header = csv_rows.next() check,converters = get_aux_stuff(header) for csv_row in csv_rows: if check(csv_row): row = { converter[0]:converter[1](value) for converter, value in zip(converters, csv_row) if allow_field(converter) } ts = row['ts'] lng, lat = row['loc'] found_tz_entry = timezones.find_one(SON({'loc': {'$within': {'$box': [[lng-tz_lookup_radius, lat-tz_lookup_radius],[lng+tz_lookup_radius, lat+tz_lookup_radius]]}}})) if found_tz_entry: tz_name = found_tz_entry['tz'] local_ts = ts.astimezone(timezone(tz_name)).replace(tzinfo=None) row['tz'] = tz_name else: local_ts = (ts.astimezone(utc) + timedelta(hours = int(lng/15))).replace(tzinfo = None) row['local_ts'] = local_ts yield row def insert_documents(collection, source, batch_size): while True: items = list(itertools.islice(source, batch_size)) if len(items) == 0: break; try: collection.insert(items) except: for item in items: try: collection.insert(item) except Exception as exc: print("Failed to insert record {0} - {1}".format(item['_id'], exc)) def main(zip_path): with Connection() as connection: data = connection.mydb.data timezones = connection.timezones.data insert_documents(data, read_csv_zip(zip_path, timezones), 1000) The code proceeds as follows: Every record read from the csv is checked and converted to a dictionary, where some fields may be skipped, some titles be renamed (from those appearing in the csv header), some values may be converted (to datetime, to integers, to floats. etc ...) For each record read from the csv, a lookup is made into the timezones collection to map the record location to the respective time zone. If the mapping is successful - that timezone is used to convert the record timestamp (pacific time) to the respective local timestamp. If no mapping is found - a rough approximation is calculated. The timezones collection is appropriately indexed, of course - calling explain() confirms it. The process is slow. Naturally, having to query the timezones collection for every record kills the performance. I am looking for advises on how to improve it. Thanks. EDIT The timezones collection contains 8176040 records, each containing four values: > db.data.findOne() { "_id" : 3038814, "loc" : [ 1.48333, 42.5 ], "tz" : "Europe/Andorra" } EDIT2 OK, I have compiled a release build of http://toblerity.github.com/rtree/ and configured the rtree package. Then I have created an rtree dat/idx pair of files corresponding to my timezones collection. So, instead of calling collection.find_one I call index.intersection. Surprisingly, not only there is no improvement, but it works even more slowly now! May be rtree could be fine tuned to load the entire dat/idx pair into RAM (704M), but I do not know how to do it. Until then, it is not an alternative. In general, I think the solution should involve parallelization of the task. 
EDIT3 Profile output when using collection.find_one: >>> p.sort_stats('cumulative').print_stats(10) Tue Apr 10 14:28:39 2012 ImportDataIntoMongo.profile 64549590 function calls (64549180 primitive calls) in 1231.257 seconds Ordered by: cumulative time List reduced from 730 to 10 due to restriction <10> ncalls tottime percall cumtime percall filename:lineno(function) 1 0.012 0.012 1231.257 1231.257 ImportDataIntoMongo.py:1(<module>) 1 0.001 0.001 1230.959 1230.959 ImportDataIntoMongo.py:187(main) 1 853.558 853.558 853.558 853.558 {raw_input} 1 0.598 0.598 370.510 370.510 ImportDataIntoMongo.py:165(insert_documents) 343407 9.965 0.000 359.034 0.001 ImportDataIntoMongo.py:137(read_csv_zip) 343408 2.927 0.000 287.035 0.001 c:\python27\lib\site-packages\pymongo\collection.py:489(find_one) 343408 1.842 0.000 274.803 0.001 c:\python27\lib\site-packages\pymongo\cursor.py:699(next) 343408 2.542 0.000 271.212 0.001 c:\python27\lib\site-packages\pymongo\cursor.py:644(_refresh) 343408 4.512 0.000 253.673 0.001 c:\python27\lib\site-packages\pymongo\cursor.py:605(__send_message) 343408 0.971 0.000 242.078 0.001 c:\python27\lib\site-packages\pymongo\connection.py:871(_send_message_with_response) Profile output when using index.intersection: >>> p.sort_stats('cumulative').print_stats(10) Wed Apr 11 16:21:31 2012 ImportDataIntoMongo.profile 41542960 function calls (41542536 primitive calls) in 2889.164 seconds Ordered by: cumulative time List reduced from 778 to 10 due to restriction <10> ncalls tottime percall cumtime percall filename:lineno(function) 1 0.028 0.028 2889.164 2889.164 ImportDataIntoMongo.py:1(<module>) 1 0.017 0.017 2888.679 2888.679 ImportDataIntoMongo.py:202(main) 1 2365.526 2365.526 2365.526 2365.526 {raw_input} 1 0.766 0.766 502.817 502.817 ImportDataIntoMongo.py:180(insert_documents) 343407 9.147 0.000 491.433 0.001 ImportDataIntoMongo.py:152(read_csv_zip) 343406 0.571 0.000 391.394 0.001 c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:384(intersection) 343406 379.957 0.001 390.824 0.001 c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:435(_intersection_obj) 686513 22.616 0.000 38.705 0.000 c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:451(_get_objects) 343406 6.134 0.000 33.326 0.000 ImportDataIntoMongo.py:162(<dictcomp>) 346 0.396 0.001 30.665 0.089 c:\python27\lib\site-packages\pymongo\collection.py:240(insert) EDIT4 I have parallelized the code, but the results are still not very encouraging. I am convinced it could be done better. See my own answer to this question for details.
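
    One hedged idea that attacks the dominant cost in the first profile (the per-row find_one): nearby points resolve to the same timezone, so memoizing the lookup on a rounded (lng, lat) key removes most of the round trips. The 1-decimal-degree cell size below is an assumption to tune against tz_lookup_radius.

      # sketch of a cached lookup; tz_lookup_radius and the timezones collection come
      # from the surrounding code, the cell size is a tunable assumption
      from bson.son import SON

      _tz_cache = {}

      def lookup_tz_entry(timezones, lng, lat):
          key = (round(lng, 1), round(lat, 1))
          if key not in _tz_cache:
              _tz_cache[key] = timezones.find_one(SON({'loc': {'$within': {'$box': [
                  [lng - tz_lookup_radius, lat - tz_lookup_radius],
                  [lng + tz_lookup_radius, lat + tz_lookup_radius]]}}}))
          return _tz_cache[key]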

  • Java Hibernate id auto increment

    - by vinise
    Hi, I have a little problem with Hibernate in NetBeans. I have a table with an auto-increment id: CREATE TABLE "DVD" ( "DVD_ID" INT not null primary key GENERATED ALWAYS AS IDENTITY (START WITH 1, INCREMENT BY 1), "TITLE" VARCHAR(150), "COM" LONG VARCHAR, "COVER" VARCHAR(150) ); But this auto increment is not properly detected by reverse engineering. I get a map file with this: <id name="dvdId" type="int"> <column name="DVD_ID" /> <generator class="assigned" /> </id> I've looked on Google and on this site and found some stuff, but I'm still stuck. I've tried to add insert="false" update="false" in the map file but I get back: Caused by: org.xml.sax.SAXParseException: Attribute "insert" must be declared for element type "id". Any help will be appreciated. Vincent
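
    A hedged fix: for a Derby-style GENERATED ALWAYS AS IDENTITY column the mapping should use the identity (or native) generator instead of assigned, so Hibernate lets the database assign DVD_ID rather than expecting the application to set it. The reverse-engineering wizard sometimes has to be told this via hibernate.reveng.xml, but editing the mapping by hand looks like:

      <id name="dvdId" type="int">
          <column name="DVD_ID" />
          <generator class="identity" />
      </id>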

  • SQL n:m Inheritance join

    - by Nightmares
    I want to join a table which contains n:m relationship between groups. (Groups are defined in a separate table). This table only has entries listing a member_group_id and a parent_group_id. Given this structure: id(int) | member_group_id(int) | parent_group_id(int) The "base" query looks like this: select p1.group_id, p2.group_id, p1.member_group_id, p2.member_group_id from group_member_group as p1 join group_member_group as p2 on p2.member_group_id = p1.member_group_id The "base" query correctly shows all relationships (I checked by doing it manually.) The problem is when I try to apply a where clause to this query to filter for a specific group as "point of origin" (the first group for which I want all parent groups) it returns only the closest parents. For example like this: select p1.group_id, p2.group_id, p1.member_group_id, p2.member_group_id from group_member_group as p1 join group_member_group as p2 on p2.member_group_id = p1.member_group_id where p1.group_id = 1 Can anyone give a clue how I can fix this? Or a different approach to realize this. (I suppose I could always do this in my C++ source code on the server side but I would have to transfer a entire table which has a high growth potential to the application server.) UPDATE: select p1.group_id, p2.group_id, p1.member_group_id, p2.member_group_id from group_member_group as p1 join group_member_group as p2 on p2.group_id = p1.member_group_id Typing mistake confirmed. Now I don't get past first level of inheritance period. Thanks at denied for pointing that out.
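
    If the goal is the whole ancestor chain (not just the direct parents), a fixed self-join can only ever reach one extra level per join. A hedged sketch with a recursive CTE (PostgreSQL and MySQL 8+ syntax; column names follow the schema described above):

      WITH RECURSIVE ancestors AS (
          SELECT member_group_id, parent_group_id
          FROM   group_member_group
          WHERE  member_group_id = 1            -- the "point of origin" group
          UNION ALL
          SELECT g.member_group_id, g.parent_group_id
          FROM   group_member_group AS g
          JOIN   ancestors AS a ON g.member_group_id = a.parent_group_id
      )
      SELECT DISTINCT parent_group_id FROM ancestors;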

  • Algorithm for finding similar users through a join table

    - by Gdeglin
    I have an application where users can select a variety of interests from around 300 possible interests. Each selected interest is stored in a join table containing the columns user_id and interest_id. Typical users select around 50 interests out of the 300. I would like to build a system where users can find the top 20 users that have the most interests in common with them. Right now I am able to accomplish this using the following query: SELECT i2.user_id, count(i2.interest_id) AS count FROM interests_users as i1, interests_users as i2 WHERE i1.interest_id = i2.interest_id AND i1.user_id = 35 GROUP BY i2.user_id ORDER BY count DESC LIMIT 20; However, this query takes approximately 500 milliseconds to execute with 10,000 users and 500,000 rows in the join table. All indexes and database configuration settings have been tuned to the best of my ability. I have also tried avoiding the use of joins altogether using the following query: select user_id,count(interest_id) count from interests_users where interest_id in (13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,508) group by user_id order by count desc limit 20; But this one is even slower (~800 milliseconds). How could I best lower the time that I can gather this kind of data to below 100 milliseconds? I have considered putting this data into a graph database like Neo4j, but I am not sure if that is the easiest solution or if it would even be faster than what I am currently doing.
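
    Two hedged things to try before reaching for a graph database: make the join table's indexes cover the whole aggregation (so the query never touches base rows), and exclude the user's own row inside the join instead of filtering afterwards. A sketch:

      -- covering composite indexes: the lookup by user, the self-join on interest_id,
      -- and the GROUP BY on the second user_id can all be served from the indexes
      ALTER TABLE interests_users ADD INDEX idx_user_interest (user_id, interest_id);
      ALTER TABLE interests_users ADD INDEX idx_interest_user (interest_id, user_id);

      SELECT iu2.user_id, COUNT(*) AS shared
      FROM   interests_users AS iu1
      JOIN   interests_users AS iu2
             ON iu2.interest_id = iu1.interest_id
            AND iu2.user_id <> iu1.user_id
      WHERE  iu1.user_id = 35
      GROUP  BY iu2.user_id
      ORDER  BY shared DESC
      LIMIT  20;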

  • PHP error I can't figure out, something to do with SQL I think

    - by MrEnder
    OK, the error is showing up somewhere in this code: if($error==false) { $query = pg_query("INSERT INTO chatterlogins(firstName, lastName, gender, password, ageMonth, ageDay, ageYear, email, createDate) VALUES('$firstNameSignup', '$lastNameSignup', '$genderSignup', md5('$passwordSignup'), $monthSignup, $daySignup, $yearSignup, '$emailSignup', now());"); $query = pg_query("INSERT INTO chatterprofileinfo(email, lastLogin) VALUES('$email', now())";); $_SESSION['$userNameSet'] = $email; header('Location: signup_step2.php'.$rdruri); } Does anyone see what I did wrong? Sorry for being so unspecific, but I've been staring at it for 10 minutes and I can't figure it out.

  • SQL Server 2005 stored procedure error

    - by user1670625
    I have created a stored procedure for inserting employee details in SQL Server 2005, in which one of the parameters is an image; for that column I have used varbinary as the datatype in the table. But when I add that parameter to the stored procedure I get the following error: Implicit conversion from data type varchar to varbinary is not allowed. Use the CONVERT function to run this query. Stored procedure: ( @Employee_ID nvarchar(10)='', @Password nvarchar(10)='', @Security_Question nvarchar(50)='', @Answer nvarchar(50)='', @First_Name nvarchar(20)='', @Middle_Name nvarchar(20)='', @Last_Name nvarchar(20)='', @Employee_Type nvarchar(15)='', @Department nvarchar(15)='', @Photo varbinary(50)='' ) insert into Registration ( Employee_ID, Password, Security_Question, Answer, First_Name, Middle_Name, Last_Name, Employee_Type, Department, Photo ) values ( @Employee_ID, @Password, @Security_Question, @Answer, @First_Name, @Middle_Name, @Last_Name, @Employee_Type, @Department, @Photo ) Table structure: Column Name Data Type Allow Nulls Employee_ID nvarchar(10) Unchecked Password nvarchar(10) Checked Security_Question nvarchar(50) Checked Answer nvarchar(50) Checked First_Name nvarchar(20) Checked Middle_Name nvarchar(20) Checked Last_Name nvarchar(20) Checked Employee_Type nvarchar(15) Checked Department nvarchar(15) Checked Photo varbinary(50) Checked I am not sure what to do; can anyone give me a suggestion or solution? Thanks in advance.
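
    A hedged reading of the error: '' is a varchar literal, so it cannot serve as the default for a varbinary parameter; the rest of the procedure is fine. An empty binary literal (0x) or NULL works as the default. A compact sketch (procedure name is a placeholder, and the other nvarchar parameters are unchanged):

      CREATE PROCEDURE InsertRegistration
      (
          @Employee_ID nvarchar(10) = '',
          -- ... the remaining nvarchar parameters exactly as before ...
          @Photo varbinary(50) = 0x          -- 0x is an empty binary literal; NULL also works
      )
      AS
      INSERT INTO Registration (Employee_ID, Photo)
      VALUES (@Employee_ID, @Photo);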
