Search Results

Search found 3618 results on 145 pages for 'huge'.


  • Python halts while iteratively processing my 1GB csv file

    - by Dan
    I have two files:

    metadata.csv: contains an ID, followed by vendor name, a filename, etc.
    hashes.csv: contains an ID, followed by a hash

    The ID is essentially a foreign key of sorts, relating file metadata to its hash. I wrote this script to quickly extract all hashes associated with a particular vendor. It craps out before it finishes processing hashes.csv:

        stored_ids = []

        # this file is about 1 MB
        entries = csv.reader(open(options.entries, "rb"))
        for row in entries:
            # row[2] is the vendor
            if row[2] == options.vendor:
                # row[0] is the ID
                stored_ids.append(row[0])

        # this file is 1 GB
        hashes = open(options.hashes, "rb")

        # I iteratively read the file here,
        # just in case the csv module doesn't do this.
        for line in hashes:
            # not sure if stored_ids contains strings or ints here...
            # this probably isn't the problem though
            if line.split(",")[0] in stored_ids:
                # if it's one of the IDs we're looking for, print the file and hash to STDOUT
                print "%s,%s" % (line.split(",")[2], line.split(",")[4])

        hashes.close()

    This script gets about 2000 entries through hashes.csv before it halts. What am I doing wrong? I thought I was processing it line by line.

    P.S. The csv files are the popular HashKeeper format and the files I am parsing are the NSRL hash sets. http://www.nsrl.nist.gov/Downloads.htm#converter

    UPDATE: working solution below. Thanks to everyone who commented!

        entries = csv.reader(open(options.entries, "rb"))
        stored_ids = dict((row[0], 1) for row in entries if row[2] == options.vendor)

        hashes = csv.reader(open(options.hashes, "rb"))
        matches = dict((row[2], row[4]) for row in hashes if row[0] in stored_ids)

        for k, v in matches.iteritems():
            print "%s,%s" % (k, v)
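
    The accepted fix works because a dict (or set) gives constant-time membership tests, whereas the original list forces a linear scan of stored_ids for every line of the 1 GB file. A minimal sketch of the same idea using a set, keeping the question's Python 2 style; the file names and vendor value are placeholders, and the column positions follow the question:

        import csv

        with open("metadata.csv", "rb") as f:
            # keep only the IDs for the vendor of interest; a set gives O(1) lookups
            vendor_ids = set(row[0] for row in csv.reader(f) if row[2] == "SomeVendor")

        with open("hashes.csv", "rb") as f:
            for row in csv.reader(f):
                if row[0] in vendor_ids:
                    print "%s,%s" % (row[2], row[4])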


  • Dealing with huge SQL resultset

    - by Dave McClelland
    I am working with a rather large MySQL database (several million rows) with a column storing blob images. The application attempts to grab a subset of the images and runs some processing algorithms on them. The problem I'm running into is that, due to the rather large dataset I have, the resultset my query returns is too large to store in memory. For the time being, I have changed the query not to return the images. While iterating over the resultset, I run another select which grabs the individual image that relates to the current record. This works, but the tens of thousands of extra queries have resulted in an unacceptable performance decrease. My next idea is to limit the original query to 10,000 results or so, and then keep querying over spans of 10,000 rows. This seems like a middle-of-the-road compromise between the two approaches. I feel there is probably a better solution that I am not aware of. Is there another way to keep only portions of a gigantic resultset in memory at a time? Cheers, Dave McClelland
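
    One common way to keep only a slice of such a resultset in memory is keyset (seek) pagination: page on the primary key instead of issuing a per-row select or large OFFSETs. A rough sketch in Python; the driver, connection details, table and column names, and the process() step are all assumptions, not from the post:

        import MySQLdb  # assumed DB-API driver; any connector works the same way

        def process(blob):
            pass  # placeholder for the image-processing algorithms

        conn = MySQLdb.connect(host="localhost", user="app", passwd="secret", db="imagedb")
        last_id, batch = 0, 10000
        while True:
            cur = conn.cursor()
            cur.execute("SELECT id, image FROM images WHERE id > %s ORDER BY id LIMIT %s",
                        (last_id, batch))
            rows = cur.fetchall()
            if not rows:
                break
            for row_id, blob in rows:
                process(blob)
            last_id = rows[-1][0]  # seek past the last row; no OFFSET scan, no per-row query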


  • SQL MIN in Sub query causes huge delay

    - by Spencer
    I have a SQL query that I'm trying to debug. It works fine for small sets of data, but on large sets of data this particular part causes it to take 45-50 seconds instead of being sub-second in speed. This subquery is one of the select items in a larger query. I'm basically trying to figure out the earliest work date that fits in the same category as the current row we are looking at (from table dr):

        ISNULL(CONVERT(varchar(25),
            (SELECT MIN(drsd.DateWorked)
             FROM [TableName] drsd
             WHERE drsd.UserID = dr.UserID
               AND drsd.Val1 = dr.Val1
                OR (((drsd.Val2 = dr.Val2 AND LEN(dr.Val2) > 0)
                     AND (drsd.Val3 = dr.Val3 AND LEN(dr.Val3) > 0)
                     AND (drsd.Val4 = dr.Val4 AND LEN(dr.Val4) > 0))
                    OR (drsd.Val5 = dr.Val5 AND LEN(dr.Val5) > 0)
                    OR ((drsd.Val6 = dr.Val6 AND LEN(dr.Val6) > 0)
                        AND (drsd.Val7 = dr.Val7 AND LEN(dr.Val2) > 0))))), '') AS WorkStartDate,

    This winds up executing a key lookup some 18 million times on a table that has 346,000 records. I've tried creating an index on it, but haven't had any success. Also, selecting a max value in this same query is sub-second, as it doesn't have to execute very many times at all. Any suggestions for a different approach to try? Thanks!


  • PHP create huge errors file on my server feof() fread()

    - by Nik
    I have a script that allows users to 'save as' a pdf. This is the script:

        <?php
        header("Content-Type: application/octet-stream");
        $file = $_GET["file"] . ".pdf";
        header("Content-Disposition: attachment; filename=" . urlencode($file));
        header("Content-Type: application/force-download");
        header("Content-Type: application/octet-stream");
        header("Content-Type: application/download");
        header("Content-Description: File Transfer");
        header("Content-Length: " . filesize($file));
        flush(); // this doesn't really matter.
        $fp = fopen($file, "r");
        while (!feof($fp)) {
            echo fread($fp, 65536);
            flush(); // this is essential for large downloads
        }
        fclose($fp);
        ?>

    An error log is being created and grows to more than a gig in a few days. The errors I receive are:

        [10-May-2010 12:38:50] PHP Warning: filesize() [function.filesize]: stat failed for BYJ-Timetable.pdf in /home/byj/public_html/pdf_server.php on line 10
        [10-May-2010 12:38:50] PHP Warning: Cannot modify header information - headers already sent by (output started at /home/byj/public_html/pdf_server.php:10) in /home/byj/public_html/pdf_server.php on line 10
        [10-May-2010 12:38:50] PHP Warning: fopen(BYJ-Timetable.pdf) [function.fopen]: failed to open stream: No such file or directory in /home/byj/public_html/pdf_server.php on line 12
        [10-May-2010 12:38:50] PHP Warning: feof(): supplied argument is not a valid stream resource in /home/byj/public_html/pdf_server.php on line 13
        [10-May-2010 12:38:50] PHP Warning: fread(): supplied argument is not a valid stream resource in /home/byj/public_html/pdf_server.php on line 15
        [10-May-2010 12:38:50] PHP Warning: feof(): supplied argument is not a valid stream resource in /home/byj/public_html/pdf_server.php on line 13
        [10-May-2010 12:38:50] PHP Warning: fread(): supplied argument is not a valid stream resource in /home/byj/public_html/pdf_server.php on line 15

    The warnings for lines 13 and 15 just continue on and on... I'm a bit of a newbie with php, so any help is great. Thanks guys. Nik


  • MySQL Query performance - huge difference in time

    - by Damo
    I have a query that is returning in vastly different amounts of time between 2 datasets. For one set (database A) it returns in a few seconds; for the other (database B)... well, I haven't waited long enough yet, but over 10 minutes. I have dumped both of these databases to my local machine, where I can reproduce the issue running MySQL 5.1.37. Curiously, database B is smaller than database A. A stripped down version of the query that reproduces the problem is:

        SELECT *
        FROM po_shipment ps
        JOIN po_shipment_item psi USING (ship_id)
        JOIN po_alloc pa ON ps.ship_id = pa.ship_id AND pa.UID_items = psi.UID_items
        JOIN po_header ph ON pa.hdr_id = ph.hdr_id
        LEFT JOIN EVENT_TABLE ev0 ON ev0.TABLE_ID1 = ps.ship_id AND ev0.EVENT_TYPE = 'MAS0'
        LEFT JOIN EVENT_TABLE ev1 ON ev1.TABLE_ID1 = ps.ship_id AND ev1.EVENT_TYPE = 'MAS1'
        LEFT JOIN EVENT_TABLE ev2 ON ev2.TABLE_ID1 = ps.ship_id AND ev2.EVENT_TYPE = 'MAS2'
        LEFT JOIN EVENT_TABLE ev3 ON ev3.TABLE_ID1 = ps.ship_id AND ev3.EVENT_TYPE = 'MAS3'
        LEFT JOIN EVENT_TABLE ev4 ON ev4.TABLE_ID1 = ps.ship_id AND ev4.EVENT_TYPE = 'MAS4'
        LEFT JOIN EVENT_TABLE ev5 ON ev5.TABLE_ID1 = ps.ship_id AND ev5.EVENT_TYPE = 'MAS5'
        WHERE ps.eta >= '2010-03-22'
        GROUP BY ps.ship_id
        LIMIT 100;

    The EXPLAIN query plan for the first database (A), which returns in ~2 seconds, is:

        | id | select_type | table | type   | possible_keys | key | key_len | ref | rows | Extra |
        | 1  | SIMPLE      | ps    | range  | PRIMARY,IX_ETA_DATE | IX_ETA_DATE | 4 | NULL | 174 | Using where; Using temporary; Using filesort |
        | 1  | SIMPLE      | ev0   | ref    | IX_EVENT_ID_EVENT_TYPE | IX_EVENT_ID_EVENT_TYPE | 36 | UNIVIS_PROD.ps.ship_id,const | 1 | |
        | 1  | SIMPLE      | ev1   | ref    | IX_EVENT_ID_EVENT_TYPE | IX_EVENT_ID_EVENT_TYPE | 36 | UNIVIS_PROD.ps.ship_id,const | 1 | |
        | 1  | SIMPLE      | ev2   | ref    | IX_EVENT_ID_EVENT_TYPE | IX_EVENT_ID_EVENT_TYPE | 36 | UNIVIS_PROD.ps.ship_id,const | 1 | |
        | 1  | SIMPLE      | ev3   | ref    | IX_EVENT_ID_EVENT_TYPE | IX_EVENT_ID_EVENT_TYPE | 36 | UNIVIS_PROD.ps.ship_id,const | 1 | |
        | 1  | SIMPLE      | ev4   | ref    | IX_EVENT_ID_EVENT_TYPE | IX_EVENT_ID_EVENT_TYPE | 36 | UNIVIS_PROD.ps.ship_id,const | 1 | |
        | 1  | SIMPLE      | ev5   | ref    | IX_EVENT_ID_EVENT_TYPE | IX_EVENT_ID_EVENT_TYPE | 36 | UNIVIS_PROD.ps.ship_id,const | 1 | |
        | 1  | SIMPLE      | psi   | ref    | PRIMARY,IX_po_shipment_item_po_shipment1,FK_po_shipment_item_po_shipment1 | IX_po_shipment_item_po_shipment1 | 4 | UNIVIS_PROD.ps.ship_id | 1 | |
        | 1  | SIMPLE      | pa    | ref    | IX_po_alloc_po_shipment_item2,IX_po_alloc_po_details_old,FK_po_alloc_po_shipment1,FK_po_alloc_po_shipment_item1,FK_po_alloc_po_header1 | FK_po_alloc_po_shipment1 | 4 | UNIVIS_PROD.psi.ship_id | 5 | Using where |
        | 1  | SIMPLE      | ph    | eq_ref | PRIMARY,IX_HDR_ID | PRIMARY | 4 | UNIVIS_PROD.pa.hdr_id | 1 | |

    The EXPLAIN query plan for the second database (B), which returns in 600 seconds, is:

        | id | select_type | table | type   | possible_keys | key | key_len | ref | rows | Extra |
        | 1  | SIMPLE      | ps    | range  | PRIMARY,IX_ETA_DATE | IX_ETA_DATE | 4 | NULL | 38 | Using where; Using temporary; Using filesort |
        | 1  | SIMPLE      | psi   | ref    | PRIMARY,IX_po_shipment_item_po_shipment1,FK_po_shipment_item_po_shipment1 | IX_po_shipment_item_po_shipment1 | 4 | UNIVIS_DEV01.ps.ship_id | 1 | |
        | 1  | SIMPLE      | ev0   | ref    | IX_EVENT_ID_EVENT_TYPE | IX_EVENT_ID_EVENT_TYPE | 36 | UNIVIS_DEV01.psi.ship_id,const | 1 | |
        | 1  | SIMPLE      | ev1   | ref    | IX_EVENT_ID_EVENT_TYPE | IX_EVENT_ID_EVENT_TYPE | 36 | UNIVIS_DEV01.psi.ship_id,const | 1 | |
        | 1  | SIMPLE      | ev2   | ref    | IX_EVENT_ID_EVENT_TYPE | IX_EVENT_ID_EVENT_TYPE | 36 | UNIVIS_DEV01.ps.ship_id,const | 1 | |
        | 1  | SIMPLE      | ev3   | ref    | IX_EVENT_ID_EVENT_TYPE | IX_EVENT_ID_EVENT_TYPE | 36 | UNIVIS_DEV01.psi.ship_id,const | 1 | |
        | 1  | SIMPLE      | ev4   | ref    | IX_EVENT_ID_EVENT_TYPE | IX_EVENT_ID_EVENT_TYPE | 36 | UNIVIS_DEV01.psi.ship_id,const | 1 | |
        | 1  | SIMPLE      | ev5   | ref    | IX_EVENT_ID_EVENT_TYPE | IX_EVENT_ID_EVENT_TYPE | 36 | UNIVIS_DEV01.ps.ship_id,const | 1 | |
        | 1  | SIMPLE      | pa    | ref    | IX_po_alloc_po_shipment_item2,IX_po_alloc_po_details_old,FK_po_alloc_po_shipment1,FK_po_alloc_po_shipment_item1,FK_po_alloc_po_header1 | IX_po_alloc_po_shipment_item2 | 4 | UNIVIS_DEV01.ps.ship_id | 4 | Using where |
        | 1  | SIMPLE      | ph    | eq_ref | PRIMARY,IX_HDR_ID | PRIMARY | 4 | UNIVIS_DEV01.pa.hdr_id | 1 | |

    When database B is running, I can look at MySQL Administrator and the state remains at "Copying to tmp table" indefinitely. Database A also shows this state, but only for a second or so. There are no differences in the table structure, indexes, keys etc. between these databases (I have done SHOW CREATE TABLE on each and diff'd them). The sizes of the tables are:

    Database A: po_shipment 1776, po_shipment_item 1945, po_alloc 36298, po_header 71642, EVENT_TABLE 1608
    Database B: po_shipment 463, po_shipment_item 470, po_alloc 3291, po_header 56149, EVENT_TABLE 1089

    Some points to note: removing the WHERE clause makes the query return in < 1 sec; removing the GROUP BY makes the query return in < 1 sec; removing ev5, ev4, ev3 etc. makes the query faster for each one removed. Can anyone suggest how to resolve this issue? What have I missed? Many thanks.


  • SQLite INTERSECT gives a huge performance decrease

    - by Derk
    I have a query that runs in less than 1 ms:

        SELECT product_to_value.category AS category,
               features.name AS featurename,
               featurevalues.name AS valuename
        FROM product_to_value, features, featurevalues
        WHERE product_to_value.category IN(:int, :bla, :bla1)
          AND product_to_value.feature = features.int
          AND product_to_value.value = featurevalues.int
        LIMIT 10

    However, when I combine it with another query using INTERSECT, the query now takes more than 250ms:

        SELECT product_to_value.category AS category,
               features.name AS featurename,
               featurevalues.name AS valuename
        FROM product_to_value, features, featurevalues
        WHERE product_to_value.category IN(:int, :bla, :bla1)
          AND product_to_value.feature = features.int
          AND product_to_value.value = featurevalues.int
        INTERSECT
        SELECT product_to_value.category AS category,
               features.name AS featurename,
               featurevalues.name AS valuename
        FROM product_to_value, features, featurevalues
        WHERE product_to_value.category IN(:int, :bla, :bla1)
          AND product_to_value.feature = features.int
          AND product_to_value.value = featurevalues.int
        LIMIT 10

    This can't be right. I've tried several index combinations, for example an index on all columns I use in my query, but to no avail. I've tried compound indexes as well, but they only slow things down even more. I have read a few things about SQLite and how it treats indexes. I know SQLite is capable of delivering sick performance, and surely I must be overlooking something.


  • jquery ui datepicker is huge

    - by iggnition
    Hi, I'm trying to add a datepicker to my Symfony application and I got it working, but the datepicker is rendered about three times bigger than normal (compared to the demo page). I have not edited any CSS; I just used the default UI Lightness theme with no modifications. Does anybody have any idea why the size is blown up so big? CSS: http://paste2.org/p/835414 (though I doubt that will be very useful).


  • Huge mysql table with Zend Framework

    - by Uffo
    I have a MySQL table with over 4 million rows of data. The problem is that some queries work and some don't, depending on the search term: if the search term matches a large volume of data in the table, I get the following error:

        Fatal error: Allowed memory size of 1048576000 bytes exhausted (tried to allocate 75 bytes) in /home/****/public_html/Zend/Db/Statement/Pdo.php on line 290

    I currently have the Zend Framework metadata cache enabled, and I have an index on all the fields of that table. The site is running on a dedicated server with 2 GB of RAM. I've also set the memory limit with ini_set("memory_limit", "1000M"). Any other things that I can optimize? These are the types of queries that I'm currently using:

        $do = $this->select()
                   ->where('branche LIKE ?', '%'.mysql_escape_string($branche).'%')
                   ->order('premium DESC');
        }

        // For name
        if (empty($branche) && empty($plz)) {
            $do = $this->select("MATCH(`name`) AGAINST ('{$theString}') AS score")
                       ->where('MATCH(`name`) AGAINST( ? IN BOOLEAN MODE)', $theString)
                       ->order('premium DESC, score');
        }

    And a few others, but they are pretty much the same. Best Regards


  • Retrieving many huge sized EPS files and converting them to JPEG in ASP.NET application

    - by Ashish Gupta
    I have many (600) EPS files (300 KB - 1 MB) in the database. In my ASP.NET application (using ASP.NET 4.0) I need to retrieve them one by one and call a web service which converts the content to a JPEG file and updates the database (the JPEGContent column with the JPEG content). However, retrieving the content for 600 of them takes too long even from SQL Management Studio itself (5 minutes for 10 EPS contents). So I have two issues:

    1) How to get the EPS content (unfortunately, selecting only a certain number of contents at a time is not an option :-( ):

        foreach (var DataRow in DataTable.Rows)
        {
            // get the Id and byte[] of EPS
            // Call the web method to convert EPS content to JPEG, which would also update the database.
        }

    or

        foreach (var DataRow in DataTable.Rows)
        {
            // get only the Id of EPS
            // Hit the database to get the content of EPS
            // Call the web method to convert EPS content to JPEG, which would also update the database.
        }

    or any other approach?

    2) Converting EPS to JPEG using a web method for 600 contents. Of course, each call would be a long-running operation. Would the Task Parallel Library (TPL) be a better way to achieve this? Also, is doing the entire thing in a SQL CLR function a good idea?
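
    For the second question, a common pattern is to stream one row at a time and fan the slow conversion calls out to a bounded worker pool, similar to what the TPL provides in .NET. A language-agnostic sketch in Python; the 600 IDs, the fetch step and the web-service call are all stand-ins, not the application's real code:

        from multiprocessing.pool import ThreadPool

        def fetch_eps(eps_id):
            # placeholder for a single-row SELECT of one EPS blob
            return b"fake eps bytes"

        def convert_via_web_service(eps_bytes):
            # placeholder for the long-running EPS-to-JPEG web method
            return b"fake jpeg bytes"

        def convert_one(eps_id):
            return eps_id, convert_via_web_service(fetch_eps(eps_id))

        pool = ThreadPool(8)    # bounded fan-out: 8 conversions in flight, not 600
        for eps_id, jpeg in pool.imap_unordered(convert_one, range(1, 601)):
            pass                # here the real code would UPDATE the JPEGContent column
        pool.close()
        pool.join()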


  • Unable to load huge XML document (incorrectly suppose it's due to the XSLT processing)

    - by krisvandenbergh
    I'm trying to match certain elements using XSLT. My input document is very large, and the source XML fails to load after processing the following code (consider especially the first line):

        <xsl:template match="XMI/XMI.content/Model_Management.Model/Foundation.Core.Namespace.ownedElement/Model_Management.Package/Foundation.Core.Namespace.ownedElement">
          <rdf:RDF>
            <rdf:Description rdf:about="">
              <xsl:for-each select="Foundation.Core.Class">
                <xsl:for-each select="Foundation.Core.ModelElement.name">
                  <owl:Class rdf:ID="@Foundation.Core.ModelElement.name" />
                </xsl:for-each>
              </xsl:for-each>
            </rdf:Description>
          </rdf:RDF>
        </xsl:template>

    Apparently the XSLT fails to load after "Model_Management.Model". The PHP code is as follows:

        if ($xml->loadXML($source_xml) == false) {
            die('Failed to load source XML: ' . $http_file);
        }

    It then fails to perform loadXML and immediately dies. I think there are two options now. 1) I should set a maximum execution time. Frankly, I don't know how to do this for the built-in PHP 5 XSLT processor. 2) Think about another way to match. What would be the best way to deal with this? The input document can be found at http://krisvandenbergh.be/uml_pricing.xml Any help would be appreciated! Thanks.


  • Huge dataset point in polygon in .net (collision detection)

    - by Rickard Liljeberg
    I have a pretty big mesh with polygons, usually triangles but sometimes rectangles. Each point in my mesh has a value (the value has nothing to do with its coordinates). Now I am creating a second mesh in the same coordinate space as the old mesh, and I want to interpolate values for all points (vertices) in the new mesh using the values from the old mesh. I could loop over each polygon in the new mesh and detect which old vertices lie inside it with a 2D collision test (although even that I can't get to work properly, so if anyone has simple and fast code for 2D collision detection, a triangle is enough, I would gladly see it). However, to my main point again: looping over every old vertex for every new polygon seems less than efficient. Is there a better way?
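
    For the point-in-triangle test asked for in passing, the usual trick is to check that the point lies on the same side of all three edges (sign of the 2D cross product). A small self-contained sketch, not taken from the post:

        def point_in_triangle(p, a, b, c):
            # True if p lies inside or on the edge of triangle abc (all 2D (x, y) tuples).
            def cross(o, u, v):
                return (u[0] - o[0]) * (v[1] - o[1]) - (u[1] - o[1]) * (v[0] - o[0])
            d1, d2, d3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
            has_neg = d1 < 0 or d2 < 0 or d3 < 0
            has_pos = d1 > 0 or d2 > 0 or d3 > 0
            return not (has_neg and has_pos)   # all same sign (or zero) means inside

        print(point_in_triangle((0.25, 0.25), (0.0, 0.0), (1.0, 0.0), (0.0, 1.0)))  # True
        print(point_in_triangle((1.0, 1.0), (0.0, 0.0), (1.0, 0.0), (0.0, 1.0)))    # False

    For the broader mesh-to-mesh interpolation, this per-polygon test is usually paired with a spatial index (a uniform grid or quadtree over the old vertices) so that each new polygon only tests nearby points instead of the whole mesh.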


  • next line character a huge influence on xmlparser?

    - by jovany
    I have a question about a basic XML file I'm parsing after just putting in simple newlines (Enters). I'll try to explain my problem with the following example. I'm (still) building an XML tree, and all it has to do (this is a test tree) is put the summary in an item list. I then export it to a plist so I can see whether everything is done correctly. The method that does this is in the parser and looks like this:

        if ([elementName isEqualToString:@"Book"]) {
            [appDelegate.books addObject:aBook];
            [aBook release];
            aBook = nil;
        } else {
            [aBook setValue:currentElementValue forKey:elementName];
            NSString *directions = [NSString stringWithFormat:currentElementValue];
            [directionTree setObject:directions forKey:@"directions"];
        }
        [currentElementValue release];
        currentElementValue = nil;
        }

    The export of the plist file happens at the end tag of Books. Below is the first XML file:

        <?xml version="1.0" encoding="UTF-8"?>
        <Books><Book id="1"><summary>Ero adn the ancient quest to measure the globe.</summary></Book><Book id="2"><summary>how the scientific revolution began.</summary></Book></Books>

    This is my output: http://img139.imageshack.us/img139/9175/picture6rtn.png

    If I make some adjustments like here:

        <?xml version="1.0" encoding="UTF-8"?>
        <Books><Book id="1">
        <summary>Ero adn the ancient quest to measure the globe.</summary>
        </Book>
        <Book id="2">
        <summary>how the scientific revolution began.</summary>
        </Book>
        </Books>

    my directions key with type string remains empty... http://img248.imageshack.us/img248/5838/picture7y.png I never knew that just putting in an Enter would have such an influence. Does anyone know a solution to this, since my real XML file looks like the second one? P.S. The funny thing is I can actually see (when debugging) my directions string (NSString directions) fill up with the currentElementValue in both cases.
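
    The symptom is consistent with the whitespace between tags being reported to the parser's characters callback (foundCharacters), so currentElementValue picks up newline/indent runs once the file is pretty-printed. This is not verified against the original code, but the mechanism is easy to see with any event-based parser, for example Python's built-in expat:

        import xml.parsers.expat

        pretty = """<?xml version="1.0"?>
        <Books><Book id="1">
          <summary>how the scientific revolution began.</summary>
        </Book></Books>"""

        def char_data(data):
            # whitespace-only text nodes (the newlines between tags) arrive here too
            print(repr(data))

        p = xml.parsers.expat.ParserCreate()
        p.CharacterDataHandler = char_data
        p.Parse(pretty, True)

    Running this prints the summary text and also several whitespace-only chunks, which is why a handler that blindly appends or stores every callback ends up with garbage (or empty strings) once newlines are added.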


  • Best way to handle huge strings in c#

    - by srk
    I have to write the data below to a textfile after replacing two values with ##IP##, ##PORT##. what is the best way ? should i hold all in a string and use Replace and write to textfile ? Data : [APP] iVersion= 101 pcVersion=1.01a pcBuildDate=Mar 27 2009 [MAIN] iFirstSetup= 0 rcMain.rcLeft= 676 rcMain.rcTop= 378 rcMain.rcRight= 1004 rcMain.rcBottom= 672 iShowLog= 0 iMode= 1 [GENERAL] iTips= 1 iTrayAnimation= 1 iCheckColor= 1 iPriority= 1 iSsememcpy= 1 iAutoOpenRecv= 1 pcRecvPath=C:\Documents and Settings\karthikeyan\My Documents\Downloads\fremote101a\FantasyRemote101a\recv pcFileName=FantasyRemote iLanguage= 1 [SERVER] iAcceptVideo= 1 iAcceptAudio= 1 iAcceptInput= 1 iAutoAccept= 1 iAutoTray= 0 iConnectSound= 1 iEnablePassword= 0 pcPassword= pcPort=7902 [CLIENT] iAutoConnect= 0 pcPassword= pcDefaultPort=7902 [NETWORK] pcConnectAddr=##IP## pcPort=##Port## [VIDEO] iEnable= 1 pcFcc=AMV3 pcFccServer= pcDiscription= pcDiscriptionServer= iFps= 30 iMouse= 2 iHalfsize= 0 iCapturblt= 0 iShared= 0 iSharedTime= 5 iVsync= 1 iCodecSendState= 1 iCompress= 2 pcPlugin= iPluginScan= 0 iPluginAspectW= 16 iPluginAspectH= 9 iPluginMouse= 1 iActiveClient= 0 iDesktop1= 1 iDesktop2= 2 iDesktop3= 0 iDesktop4= 3 iScan= 1 iFixW= 16 iFixH= 8 [AUDIO] iEnable= 1 iFps= 30 iVolume= 6 iRecDevice= 0 iPlayDevice= 0 pcSamplesPerSec=44100Hz pcChannels=2ch:Stereo pcBitsPerSample=16bit iRecBuffNum= 150 iPlayBuffNum= 4 [INPUT] iEnable= 1 iFps= 30 iMoe= 0 iAtlTab= 1 [MENU] iAlwaysOnTop= 0 iWindowMode= 0 iFrameSize= 4 iSnap= 1 [HOTKEY] iEnable= 1 key_IDM_HELP=0x00000070 mod_IDM_HELP=0x00000000 key_IDM_ALWAYSONTOP=0x00000071 mod_IDM_ALWAYSONTOP=0x00000000 key_IDM_CONNECT=0x00000072 mod_IDM_CONNECT=0x00000000 key_IDM_DISCONNECT=0x00000073 mod_IDM_DISCONNECT=0x00000000 key_IDM_CONFIG=0x00000000 mod_IDM_CONFIG=0x00000000 key_IDM_CODEC_SELECT=0x00000000 mod_IDM_CODEC_SELECT=0x00000000 key_IDM_CODEC_CONFIG=0x00000000 mod_IDM_CODEC_CONFIG=0x00000000 key_IDM_SIZE_50=0x00000074 mod_IDM_SIZE_50=0x00000000 key_IDM_SIZE_100=0x00000075 mod_IDM_SIZE_100=0x00000000 key_IDM_SIZE_200=0x00000076 mod_IDM_SIZE_200=0x00000000 key_IDM_SIZE_300=0x00000000 mod_IDM_SIZE_300=0x00000000 key_IDM_SIZE_400=0x00000000 mod_IDM_SIZE_400=0x00000000 key_IDM_CAPTUREWINDOW=0x00000077 mod_IDM_CAPTUREWINDOW=0x00000004 key_IDM_REGION=0x00000077 mod_IDM_REGION=0x00000000 key_IDM_DESKTOP1=0x00000078 mod_IDM_DESKTOP1=0x00000000 key_IDM_ACTIVE_MENU=0x00000079 mod_IDM_ACTIVE_MENU=0x00000000 key_IDM_PLUGIN=0x0000007A mod_IDM_PLUGIN=0x00000000 key_IDM_PLUGIN_SCAN=0x00000000 mod_IDM_PLUGIN_SCAN=0x00000000 key_IDM_DESKTOP2=0x00000078 mod_IDM_DESKTOP2=0x00000004 key_IDM_DESKTOP3=0x00000079 mod_IDM_DESKTOP3=0x00000004 key_IDM_DESKTOP4=0x0000007A mod_IDM_DESKTOP4=0x00000004 key_IDM_WINDOW_NORMAL=0x0000000D mod_IDM_WINDOW_NORMAL=0x00000004 key_IDM_WINDOW_NOFRAME=0x0000000D mod_IDM_WINDOW_NOFRAME=0x00000002 key_IDM_WINDOW_FULLSCREEN=0x0000000D mod_IDM_WINDOW_FULLSCREEN=0x00000001 key_IDM_MINIMIZE=0x00000000 mod_IDM_MINIMIZE=0x00000000 key_IDM_MAXIMIZE=0x00000000 mod_IDM_MAXIMIZE=0x00000000 key_IDM_REC_START=0x00000000 mod_IDM_REC_START=0x00000000 key_IDM_REC_STOP=0x00000000 mod_IDM_REC_STOP=0x00000000 key_IDM_SCREENSHOT=0x0000002C mod_IDM_SCREENSHOT=0x00000002 key_IDM_AUDIO_MUTE=0x00000073 mod_IDM_AUDIO_MUTE=0x00000004 key_IDM_AUDIO_VOLUME_DOWN=0x00000074 mod_IDM_AUDIO_VOLUME_DOWN=0x00000004 key_IDM_AUDIO_VOLUME_UP=0x00000075 mod_IDM_AUDIO_VOLUME_UP=0x00000004 key_IDM_CTRLALTDEL=0x00000023 mod_IDM_CTRLALTDEL=0x00000003 key_IDM_QUIT=0x00000000 mod_IDM_QUIT=0x00000000 
key_IDM_MENU=0x0000007B mod_IDM_MENU=0x00000000 [OVERLAY] iIndicator= 1 iAlphaBlt= 1 iEnterHide= 0 pcFont=MS UI Gothic [AVI] iSound= 1 iFileSizeLimit= 100000 iPool= 4 iBuffSize= 32 iStartDiskSpaceCheck= 1 iStartDiskSpace= 1000 iRecDiskSpaceCheck= 1 iRecDiskSpace= 100 iCache= 0 iAutoOpen= 1 pcPath=C:\Documents and Settings\karthikeyan\My Documents\Downloads\fremote101a\FantasyRemote101a\avi [SCREENSHOT] iSound= 1 iAutoOpen= 1 pcPath=C:\Documents and Settings\karthikeyan\My Documents\Downloads\fremote101a\FantasyRemote101a\ss pcPlugin=BMP [CDLG_SERVER] mrcWnd.rcLeft= 667 mrcWnd.rcTop= 415 mrcWnd.rcRight= 1013 mrcWnd.rcBottom= 634 [CWND_CLIENT] miShowLog= 0 m_iOverlayLock= 0 [CDLG_CONFIG] mrcWnd.rcLeft= 467 mrcWnd.rcTop= 247 mrcWnd.rcRight= 1213 mrcWnd.rcBottom= 802 miTabConfigSel= 2


  • How do you keep yourself and your co-workers from creating huge classes

    - by PieterG
    Stack Overflow users, how do you keep yourself from creating large classes with large-bodied methods? When deadlines are tight, you end up trying to hack things together, and it ends up being a mess that needs to be refactored. For me, one way was to start with test-driven development, which lends itself to good class design as well as the SRP (Single Responsibility Principle). I also see developers just double-clicking on controls and typing out line after line in the event method that gets fired. What are your suggestions?


  • .net real time stream processing - needed huge and fast RAM buffer

    - by mack369
    The application I'm developing communicates with a digital audio device which is capable of sending 24 different voice streams at the same time. The device is connected via USB, using an FTDI device (serial port emulator) and the D2XX drivers (the basic COM driver is too slow to handle a 4.5 Mbit transfer). Basically the application consists of 3 threads: the main thread (GUI, control, etc.); the bus reader, in which data is continuously read from the device and saved to a file buffer (there is no logic in this thread); and the data interpreter, which reads the data from the file buffer, converts it to samples, does simple sample processing and saves the samples to separate wav files. The reason I used a file buffer is that I wanted to be sure I wouldn't lose any samples. The application doesn't record all the time, so I chose this solution because it was safe. The application works fine, except that the buffered wave file generation is pretty slow: for 24 parallel records of 1 minute, it takes about 4 minutes to complete the recording. I'm pretty sure that eliminating the hard drive from this process would speed it up considerably. The second problem is that the file buffer is really heavy for long records, and I can't clean it up until the end of data processing (that would slow down the process even more). For a RAM buffer I need at least 1 GB to make it work properly. What is the best way to allocate such a big amount of memory in .NET? I'm going to use this memory from 2 threads, so a fast synchronization mechanism is needed. I'm thinking about a cycle buffer: one big array where the bus reader saves the data and the data interpreter reads it. What do you think about it? [edit] For buffering I'm currently using the BinaryReader and BinaryWriter classes over a file.
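
    The cycle (ring) buffer idea is a good fit: one preallocated block, a write index for the bus reader and a read index for the interpreter, with blocking when the buffer is full or empty. A minimal single-producer/single-consumer sketch in Python to show the bookkeeping; the real implementation would be a C# class over a byte[], and none of this is from the post:

        import threading

        class RingBuffer(object):
            # one preallocated buffer shared by a single producer and a single consumer
            def __init__(self, size):
                self.buf = bytearray(size)
                self.size = size
                self.read_pos = self.write_pos = self.count = 0
                self.cond = threading.Condition()

            def write(self, data):
                with self.cond:
                    while self.size - self.count < len(data):
                        self.cond.wait()                 # block until the consumer frees space
                    for b in bytearray(data):
                        self.buf[self.write_pos] = b
                        self.write_pos = (self.write_pos + 1) % self.size
                    self.count += len(data)
                    self.cond.notify_all()

            def read(self, n):
                with self.cond:
                    while self.count == 0:
                        self.cond.wait()                 # block until the producer supplies data
                    n = min(n, self.count)
                    out = bytearray(n)
                    for i in range(n):
                        out[i] = self.buf[self.read_pos]
                        self.read_pos = (self.read_pos + 1) % self.size
                    self.count -= n
                    self.cond.notify_all()
                    return bytes(out)

        rb = RingBuffer(16)
        rb.write(b"samples")
        print(rb.read(7))    # prints the 7 bytes written above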


  • Huge amount of time sending data with suds and proxy

    - by Roman
    Hi everyone, I have the following code to send data through a proxy using suds:

        import urllib2
        import suds

        t = suds.transport.http.HttpTransport()
        proxy = urllib2.ProxyHandler({'http': 'http://192.168.3.217:3128'})
        opener = urllib2.build_opener(proxy)
        t.urlopener = opener
        ws = suds.client.Client('http://xxxxxxx/web.asmx?WSDL', transport=t)
        req = ws.factory.create('ActionRequest.request')
        req.SerialNumber = 'asdf'
        req.HostName = 'hola'
        res = ws.service.ActionRequest(req)

    I don't know why, but it can spend 2 or 3 minutes (or even more) sending data, and it sometimes raises a "Gateway timeout" exception. If I don't use the proxy, it takes about 2 seconds or less. Here is the SOAP reply:

        (ActionResponse){
           Id = None
           Action = "Action.None"
           Objects = ""
         }

    The proxy itself works fine with other requests through urllib2, or with normal web browsers like Firefox. Does anyone have any idea what's happening here with suds? Thanks a lot in advance!


  • Avoid having a huge collection of ids by calling a DAO.getAll()

    - by Michael Bavin
    Instead of returning a List<Long> of ids when calling PersonDao.getAll(), we wanted to avoid holding an entire collection of ids in memory. It seems that returning an org.springframework.jdbc.support.rowset.SqlRowSet and iterating over this rowset would not hold every object in memory. The only problem here is that I cannot cast each row to my entity. Is there a better way to do this?
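
    The underlying idea (iterate over rows lazily instead of materialising the whole id list) looks like this in generator form; this is only an illustration in Python, not the Spring/SqlRowSet API from the question, and the table name is invented:

        import sqlite3

        def iter_person_ids(conn, batch=1000):
            # yield ids one at a time instead of returning the full collection
            cur = conn.cursor()
            cur.execute("SELECT id FROM person ORDER BY id")
            while True:
                rows = cur.fetchmany(batch)
                if not rows:
                    break
                for (person_id,) in rows:
                    yield person_id

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE person (id INTEGER PRIMARY KEY)")
        conn.executemany("INSERT INTO person (id) VALUES (?)", [(i,) for i in range(1, 6)])
        for pid in iter_person_ids(conn):
            print(pid)   # the caller consumes one id at a time, never the full list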


  • Huge AS 3.0 Export Frame

    - by goorj
    I am creating a very simple Flash animation with no code or complex effects, just text and simple tweens (Flash CS5), but I have problems reducing the size of my SWF. From the generated size report, it looks like it has to do with fonts and/or exported ActionScript classes. The frame with AS 3.0 classes is over 100 KB, and even though I am only using one font, the same characters are embedded/exported multiple times. My questions are: Does embedding mixed TLF/Classic text (or mixing other text properties such as spacing/kerning) require the same characters to be embedded twice? Do text transformations on TLF text (rotation and transformations not available in Classic text) require embedding of ("internal") AS3 classes that increase the size of the SWF, even though none of these classes are explicitly used by me (there are no scripts in the FLA project)? I have tried removing the text instances one by one, and at one point the SWF is reduced to only 5-6 KB, but I am not able to pinpoint exactly what causes the ballooning of the SWF.


  • Most efficient way for a lookup/search in a huge list (python)

    - by user229269
    Hey guys, I just parsed a big file and created a list containing 42,000 strings/words. I want to query against this list to check whether a given word/string belongs to it. So my question is: what is the most efficient way to do such a lookup? A first approach is to sort the list (list.sort()) and then just use "if word in list: print 'word'", which is really trivial, and I am sure there is a better way to do it. My goal is a fast lookup that finds whether a given string is in this list or not. If you have ideas for another data structure, they are welcome; still, for now I want to avoid more sophisticated data structures like tries. I am interested in hearing ideas (or tricks) about fast lookups, or any other Python library methods that might do the search faster than a simple 'in'. Thanks in advance!
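
    For this kind of membership test the usual answer is a set (hashed, average O(1) per lookup); if the sorted-list route is kept, bisect gives O(log n) instead of the linear scan that 'in' performs on a list. A quick sketch with stand-in words:

        import bisect

        words = ["apple", "banana", "cherry", "durian"] * 10500   # stand-in for the 42,000 words

        word_set = set(words)                    # hash-based membership: O(1) on average
        print("banana" in word_set)              # True

        sorted_words = sorted(word_set)          # binary-search route over the sorted list
        def in_sorted(word):
            i = bisect.bisect_left(sorted_words, word)
            return i < len(sorted_words) and sorted_words[i] == word
        print(in_sorted("cherry"), in_sorted("elderberry"))   # True and False respectively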


  • Unit testing huge applications - Proven methodologies?

    - by NLV
    Hello members, I've been working on Windows Forms applications and ASP.NET applications for the past 10 months. I've always wondered how to perform proper unit testing on a complete application in a robust manner, covering all the scenarios. I have the following questions: What are the standard mechanisms for performing unit testing and writing test cases? Do the methodologies change based on the nature of the application, such as Windows Forms vs. web applications? What is the best approach to make sure we cover all the scenarios? Any popular books on this? Popular tools for performing unit testing?
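
    As a concrete picture of the standard mechanism (one small test method per scenario of one isolated unit), here is a minimal example using Python's unittest; NUnit or MSTest play the same role for the Windows Forms/ASP.NET stack mentioned, and the discount function is invented purely for the example:

        import unittest

        def discount(price, percent):
            return price * (1 - percent / 100.0)

        class DiscountTests(unittest.TestCase):
            # each test method checks one scenario of one unit in isolation
            def test_ten_percent_off(self):
                self.assertAlmostEqual(discount(200.0, 10), 180.0)

            def test_zero_percent_is_identity(self):
                self.assertAlmostEqual(discount(99.0, 0), 99.0)

        if __name__ == "__main__":
            unittest.main()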


  • JBoss envers and huge audit tables

    - by LeChe
    All, I am auditing my JEE application with JBoss Envers, and the nature of my application causes the audit table to grow very fast. The historic data is queried infrequently and access time is not really an issue, apart from the data from the last week; that data IS queried frequently and access needs to be fast. Ideally, I would split the data and distribute it over two tables, with the older data in compressed format. Unfortunately, Envers does not allow spreading data over multiple tables as far as I can tell from the docs. Does somebody have any idea what would be the best way to achieve this (if possible while still using Envers)? Cheers, LeChe


  • Creating huge images

    - by David Rutten
    My program has a feature to export a hi-res image of the working canvas to disk. Users will frequently try to export images of about 20,000 x 10,000 pixels @ 32 bpp, which equals about 800 MB. Add that to the serious memory consumption already going on in your average 3D CAD program and you'll pretty much guarantee an out-of-memory crash on 32-bit platforms. So now I'm exporting tiles of 1000 x 1000 pixels, which the user has to stitch together afterwards in a pixel editor. Is there a way I can solve this problem without the user doing any work? I figured I could probably write a small exe that gets launched from the main process via the command line and performs the stitching automatically. It would be a separate process and would thus have 2 GB of RAM all to itself. Or is there a better way still? I'd like to support jpg, png and bmp, so writing the image as a byte stream to disk is not really possible.
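
    The "small stitching exe" idea is straightforward to prototype. A sketch using Pillow in Python, where the tile grid and file naming are invented; as in the question's plan, the full stitched image lives in the separate process's own memory:

        from PIL import Image

        TILE = 1000
        cols, rows = 20, 10                       # assumed grid for a 20,000 x 10,000 px export

        canvas = Image.new("RGB", (cols * TILE, rows * TILE))
        for row in range(rows):
            for col in range(cols):
                tile = Image.open("tile_%02d_%02d.png" % (row, col))   # hypothetical tile names
                canvas.paste(tile, (col * TILE, row * TILE))
        canvas.save("stitched.png")

    Launched as its own process after the tiled export finishes, this matches the "stitch automatically afterwards" approach described above.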

