Search Results

Search found 4580 results on 184 pages for 'faster'.


  • How fast are App Engine db.get(keys) and A.all(keys_only=True).filter('b =', b).fetch(1000)?

    - by Liron Shapira
    A db.get() of 50 keys seems to take me 5-6 seconds. Is that normal? What is that time a function of? I also did an A.all(keys_only=True).filter('b =', b).fetch(1000), where A.b is a ReferenceProperty. I did 50 such round trips to the datastore, with different values of b, and the total time was only 3-4 seconds. How is this possible? db.get() is performed in parallel, with only one round trip to the datastore, and I would have thought that looking up an entity by key is a faster operation than a query fetch.

    Read the article

  • WebSocket send extra information on connection

    - by MattDiPasquale
    Is there a way for a WebSocket client to send additional information to the WebSocket server on the initial connection? I'm asking because I want the client to send the user ID (a string) to the server immediately; I don't want the client to send the user ID in the onopen callback. Why? Because it's faster and simpler. If the WebSocket API won't let you do this, why not? And if there's no good reason, how could I suggest they add this simple feature?
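
    One common workaround is to put the user ID in the connection URL's query string, which the server sees during the opening handshake. A minimal sketch (the endpoint and parameter name are illustrative, not from the question):

        // The query string travels with the handshake request, so the
        // server can read userId before any message is exchanged.
        var userId = 'abc123';
        var ws = new WebSocket('wss://example.com/socket?userId=' +
                               encodeURIComponent(userId));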

    Read the article

  • Rewriting .each() loop as for loop to optimize, how to replicate $(this).attr()

    - by John B
    I'm running into some performance issues with a jQuery script I wrote when it runs in IE, so I'm going through it trying to optimize any way possible. Apparently using for loops is much faster than using the jQuery .each() method. This has led me to a question about the equivalent of $(this) inside a for loop. I've simplified what I'm doing in my loop down to a single attr() call, since that gets across my main underlying question. I'm doing this with .each() (simplified):

        var existing = $('#existing');
        existing.each(function() {
            console.log($(this).attr('id'));
        });

    And I've tried rewriting it as a for loop as such:

        var existing = $('#existing');
        for (var i = 0; i < existing.length; i++) {
            console.log(existing[i].attr('id'));
        }

    It's throwing an error saying: Uncaught TypeError: Object #<HTMLDivElement> has no method 'attr'. Thanks.
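
    The error happens because indexing a jQuery object with [i] returns the raw DOM element, which has no jQuery methods. A minimal sketch of the two usual fixes (not from the question itself):

        var existing = $('#existing');
        for (var i = 0; i < existing.length; i++) {
            // existing[i] is a plain DOM element: use its native property...
            console.log(existing[i].id);
            // ...or re-wrap it when a jQuery method is genuinely needed:
            console.log($(existing[i]).attr('id'));
        }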

    Read the article

  • Migrating Data to MSSQL 2008

    - by Fred Clown
    I am trying to migrate data from an Informix database to MSSQL 2008, and I've got quite a lot of data to move. I've been trying multiple methods to get the data over, and so far SqlBulkCopy in multiple chunks seems to be the fastest I can find. Does anyone know of a faster means of getting the data over? I'm trying to cut down the transfer time so that on my cut-over date I don't run out of time to do the full cut-over. Thanks.
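
    Two settings that are often suggested for squeezing more speed out of SqlBulkCopy are a destination table lock and an explicit batch size. A minimal sketch, assuming an IDataReader streaming rows from the Informix side (the connection string and table name are illustrative):

        using System.Data;
        using System.Data.SqlClient;

        class Migrator
        {
            // reader: an IDataReader over the Informix source (assumed).
            static void BulkLoad(IDataReader reader)
            {
                using (var bulk = new SqlBulkCopy(
                    "Server=target;Database=db;Integrated Security=true",  // illustrative
                    SqlBulkCopyOptions.TableLock))   // bulk-load under a table lock
                {
                    bulk.DestinationTableName = "dbo.TargetTable";
                    bulk.BatchSize = 10000;          // commit every 10k rows
                    bulk.BulkCopyTimeout = 0;        // disable the timeout for long runs
                    bulk.WriteToServer(reader);
                }
            }
        }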

    Read the article

  • .NET Compiler Optimizations

    - by Dested
    I am writing an application that needs to run incredibly fast. The application creates and destroys memory in creative ways throughout its run, and it works just fine. I am wondering what compiler and runtime optimizations exist so I can try to build toward them. One trick off hand is that the CLR handles arrays much faster than lists, so if you need to process a ton of elements in a List, you may be better off calling ToArray() and working on the array rather than calling ElementAt() again and again. I am wondering if there is any sort of comprehensive list for this kind of thing, or maybe the SO community can create one :-)
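
    A minimal sketch of the array-versus-ElementAt() pattern mentioned above (all names are illustrative): on a plain IEnumerable<T>, each ElementAt(i) call may re-enumerate the sequence from the start, so materializing once and indexing is far cheaper.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Demo
        {
            static void Main()
            {
                // A lazy sequence: ElementAt(i) would re-walk it from the start
                // on every call, costing O(n) per element.
                IEnumerable<int> seq = Enumerable.Range(0, 1000000).Where(x => x % 2 == 0);

                // Faster pattern: materialize once, then use O(1) indexing.
                int[] arr = seq.ToArray();
                long total = 0;
                for (int i = 0; i < arr.Length; i++)
                    total += arr[i];
                Console.WriteLine(total);
            }
        }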

    Read the article

  • Fastest way to convert a file from latin1 to utf-8 in Python

    - by xsaero00
    I need the fastest way to convert files from latin1 to utf-8 in Python. The files are large, ~2 GB (I am moving DB data). So far I have:

        import codecs

        infile = codecs.open(tmpfile, 'r', encoding='latin1')
        outfile = codecs.open(tmpfile1, 'w', encoding='utf-8')
        for line in infile:
            outfile.write(line)
        infile.close()
        outfile.close()

    but it is still slow. The conversion takes one fourth of the whole migration time. I could also use a Linux command-line utility if it is faster than native Python code.
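
    One commonly suggested speedup is to convert in large binary chunks instead of line by line; since latin1 is a single-byte encoding, a block boundary can never split a character, so chunked decoding is safe. A minimal sketch (the chunk size and file names are illustrative):

        CHUNK = 16 * 1024 * 1024  # 16 MB per read

        with open('tmpfile', 'rb') as infile, open('tmpfile1', 'wb') as outfile:
            while True:
                block = infile.read(CHUNK)
                if not block:
                    break
                outfile.write(block.decode('latin1').encode('utf-8'))

    On the command-line side, iconv -f latin1 -t utf-8 tmpfile > tmpfile1 performs the same conversion and is worth benchmarking against the Python version.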

    Read the article

  • Fastest Way to Format Plain Text Using JavaScript

    - by Nathan Campos
    I have a huge plain text document, about 700 KB, which is very big for plain text, and I need to convert it to HTML on the fly so it can be displayed by the browser. The only things I need to replace are bold and italic. In the plain text, bold looks like this:

        Not on bold... **bold text here** not bold here

    And italic like this:

        Not italic... *italic text* no italic

    Just like Stack Overflow does for its formatting. The problem is that I need to make it a lot faster, since the text is so big. One of my ideas was to add paging, so the script would only need to format part of the text rather than all of it, and after the user changes the page the script would be called again. But the problem is how to write the code for all of this.
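
    A minimal regex-based sketch of the bold/italic replacement (the patterns are illustrative and assume the markers never nest):

        // Replace **bold** first so the single-* rule cannot consume its markers.
        function formatText(text) {
            return text
                .replace(/\*\*([^*]+)\*\*/g, '<b>$1</b>')  // **bold**
                .replace(/\*([^*]+)\*/g, '<i>$1</i>');     // *italic*
        }

        // formatText('Not on bold... **bold text here** and *italic text*')
        // -> 'Not on bold... <b>bold text here</b> and <i>italic text</i>'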

    Read the article

  • SharePoint 2010 Development on Virtual Machine - Windows 7 or Server 2008?

    - by webworm
    I recently switched to a MacBook Pro as my development machine (for many reasons). I want to set up a virtual machine for ASP.NET, IIS, and Visual Studio 2010 development. I also need to do some development work with SharePoint 2010. What I am wondering is whether I should use Windows 7 (64-bit) or Windows Server 2008 (64-bit) as the OS for my development virtual machine. I don't really need most of the services running in Server 2008, so I felt that Windows 7 would probably run faster in the VM environment; however, I am fairly new to SharePoint 2010, so I am not sure whether Windows 7 (64-bit) can be used as a development environment for it. Thanks for any input.

    Read the article

  • What's quicker and better to determine if an array key exists in PHP?

    - by alex
    Consider these two examples:

        $key = 'jim';

        // example 1
        if (isset($array[$key])) {
            doWhatIWant();
        }

        // example 2
        if (array_key_exists($key, $array)) {
            doWhatIWant();
        }

    I'm interested in knowing whether either of these is better. I've always used the first, but have seen a lot of people use the second example on this site. So, which is better? Faster? Clearer intent? Update: Thanks for the quality answers. I now understand the difference between the two. A benchmark states that isset() alone is quicker than array_key_exists(). However, if you want isset() to behave like array_key_exists(), it is slower.
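
    The behavioral difference behind that update: isset() returns false for a key that exists but holds null, while array_key_exists() still reports the key. A minimal sketch:

        $array = array('jim' => null);

        var_dump(isset($array['jim']));            // bool(false): null fails isset()
        var_dump(array_key_exists('jim', $array)); // bool(true):  the key does exist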

    Read the article

  • Are ASCII diagrams worth my time?

    - by Jesse Stimpson
    Are ASCII diagrams within source code worth the time they take to create? I could create a bitmap diagram much faster, but images are much more difficult to inline in a source file (until VS2010). For the record, I'm not talking about decorative ASCII art. Here's an example of a diagram I recently created for my code that I probably could have constructed in half the time in MS Paint:

        Scenario A:
                  v
        (U)_________________(N)_______<--(P)
                                 Legend:
          '          /       |   J = ...
           '        /        |   P = ...
            '      /d        |   U = ...
             '    /          |   v = ...
              '  /           |   d = ...
               '/            |   N = ...
              (J)            |
               |             |
               |_____________|

    Read the article

  • SQL Server procedure optimization

    - by stackoverflow
    SQL Server 2005. Option 1:

        CREATE TABLE #test (customerid INT, orderdate DATETIME, field1 INT, field2 INT, field3 INT)

        CREATE UNIQUE CLUSTERED INDEX Idx1 ON #test(customerid)
        CREATE INDEX Idx2 ON #test(field1 DESC)
        CREATE INDEX Idx3 ON #test(field2 DESC)
        CREATE INDEX Idx4 ON #test(field3 DESC)

        INSERT INTO #test (customerid, orderdate, field1, field2, field3)
        SELECT customerid, orderdate, field1, field2, field3
        FROM ATABLERETURNING4000000ROWS

    compared to Option 2:

        CREATE TABLE #test (customerid INT, orderdate DATETIME, field1 INT, field2 INT, field3 INT)

        INSERT INTO #test (customerid, orderdate, field1, field2, field3)
        SELECT customerid, orderdate, field1, field2, field3
        FROM ATABLERETURNING4000000ROWS

        CREATE UNIQUE CLUSTERED INDEX Idx1 ON #test(customerid)
        CREATE INDEX Idx2 ON #test(field1 DESC)
        CREATE INDEX Idx3 ON #test(field2 DESC)
        CREATE INDEX Idx4 ON #test(field3 DESC)

    When we use the second option, it runs close to 50% faster. Why is this?

    Read the article

  • Fastest way to do a weighted tag search in SQL Server

    - by Hasan Khan
    My table is as follows:

        ObjectID  bigint
        Tag       nvarchar(50)
        Weight    float
        Type      tinyint

    I want to search for all objects that have the tag 'big' or 'large', with the objectids ordered by the sum of weights (so objects having both tags will be on top):

        select objectid, row_number() over (order by sum(weight) desc) as rowid
        from tags
        where tag in ('big', 'large') and type = 0
        group by objectid

    The reason for row_number() is that I want paging over the results. The query in its current form is very slow: it takes a minute to execute over 16 million tags. What should I do to make it faster? I have a non-clustered index on (objectid, tag, type). Any suggestions?
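
    One commonly suggested change is an index the WHERE clause can actually seek on, covering the columns the query reads, since the existing index leads with objectid and cannot satisfy a filter on tag. A minimal sketch (the index name is illustrative):

        -- Seek on tag/type instead of scanning, and INCLUDE objectid/weight
        -- so the query never has to touch the base table.
        CREATE NONCLUSTERED INDEX IX_tags_tag_type
            ON tags (tag, type)
            INCLUDE (objectid, weight);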

    Read the article

  • All things equal, what is the fastest way to output data to disk in C++?

    - by user260197
    I am running simulation code that is largely bound by CPU speed. I am not interested in pushing data out to a user interface, simply saving it to disk as it is computed. What would be the fastest solution that reduces overhead: iostreams? printf? I have previously read that printf is faster. Will this depend on my code, and is it impossible to get an answer without profiling? Edit: the output data needs to be in text format, whether tab- or comma-separated, so it will require formatting, precision, etc. Running on Windows.
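
    Whichever API wins, much of the cost usually comes from flushing small writes. A minimal sketch of one commonly suggested setup, giving the stream a large explicit buffer and avoiding per-line flushes (the buffer size and file name are illustrative):

        #include <fstream>
        #include <vector>

        int main() {
            // A big explicit buffer makes writes hit the disk in large blocks.
            std::vector<char> buf(1 << 20);  // 1 MB
            std::ofstream out;
            // pubsetbuf() must be called before open() to take effect.
            out.rdbuf()->pubsetbuf(buf.data(), static_cast<std::streamsize>(buf.size()));
            out.open("results.tsv");

            for (int i = 0; i < 1000000; ++i)
                out << i << '\t' << i * 0.5 << '\n';  // '\n', not std::endl: no flush per line
            return 0;
        }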

    Read the article

  • Array insert in DB

    - by gloris
    Hi, what is the best way to insert an array (100 or more elements) into the database (MySQL)? I do not want multiple round trips to the database because it is already heavily loaded. So my solution is as follows:

        string insert = "INSERT INTO programs (name, id) VALUES ";
        for (int i = 0; i < name.Length; i++)
        {
            if (i != 0)
                insert = insert + ",(";
            else
                insert = insert + "(";

            insert = insert + "'" + name[i] + "','" + id[i] + "'";
            insert = insert + ")";
        }
        // INSERT INTO programs (name, id) VALUES ('Peter','32'),('Rikko','343') ...

    But maybe there is a faster version? Thanks
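
    A minimal sketch of the same multi-row INSERT built with a StringBuilder and parameters, which avoids both the repeated string allocations and the SQL injection risk of concatenating raw values (assumes the MySQL Connector/NET classes; names are illustrative):

        using System.Text;
        using MySql.Data.MySqlClient;  // Connector/NET, assumed available

        class Db
        {
            static MySqlCommand BuildInsert(string[] name, string[] id)
            {
                var sql = new StringBuilder("INSERT INTO programs (name, id) VALUES ");
                var cmd = new MySqlCommand();
                for (int i = 0; i < name.Length; i++)
                {
                    if (i != 0) sql.Append(',');
                    sql.Append("(@name").Append(i).Append(",@id").Append(i).Append(')');
                    cmd.Parameters.AddWithValue("@name" + i, name[i]);
                    cmd.Parameters.AddWithValue("@id" + i, id[i]);
                }
                cmd.CommandText = sql.ToString();
                return cmd;  // caller assigns cmd.Connection and calls ExecuteNonQuery()
            }
        }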

    Read the article

  • Efficiently trimming PostgreSQL tables

    - by agilefall
    I have about 10 tables with over 2 million records and one with 30 million. I would like to efficiently remove older data from each of these tables. My general algorithm is:

        1. create a temp table for each large table and populate it with newer data
        2. truncate the original tables
        3. copy the tmp data back to the original tables using:
           insert into originaltable (select * from tmp_table)

    However, the last step of copying the data back is taking longer than I'd like. I thought about deleting the original tables and making the temp tables "permanent", but I lose constraint/foreign key info. If I delete from the tables directly, it takes much longer. Given that I need to preserve all foreign keys and constraints, are there any faster ways of removing the older data? Thanks.
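
    One commonly suggested tweak to the slow step is to drop the original table's secondary indexes before the copy-back and rebuild them once afterward, so the bulk insert doesn't pay per-row index maintenance. A minimal sketch (the index and column names are illustrative):

        DROP INDEX IF EXISTS idx_originaltable_created_at;
        INSERT INTO originaltable SELECT * FROM tmp_table;
        CREATE INDEX idx_originaltable_created_at
            ON originaltable (created_at);  -- one bulk rebuild instead of N per-row updates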

    Read the article

  • SQL Server 2008 vs 2005 UDF XML performance problem

    - by user344495
    OK, we have a simple UDF that takes an XML list of integers and returns a table:

        CREATE FUNCTION [dbo].[udfParseXmlListOfInt]
        (
            @ItemListXml XML (dbo.xsdListOfInteger)
        )
        RETURNS TABLE
        AS
        RETURN
        (
            --- parses the XML and returns it as an int table ---
            SELECT ListItems.ID.value('.', 'INT') AS KeyValue
            FROM @ItemListXml.nodes('//list/item') AS ListItems(ID)
        )

    In a stored procedure we populate a temp table using this UDF:

        INSERT INTO @JobTable (JobNumber, JobSchedID, JobBatID, StoreID, CustID,
                               CustDivID, BatchStartDate, BatchEndDate, UnavailableFrom)
        SELECT JOB.JobNumber, JOB.JobSchedID, ISNULL(JOB.JobBatID, 0), STO.StoreID,
               STO.CustID, ISNULL(STO.CustDivID, 0), AVL.StartDate, AVL.EndDate,
               ISNULL(AVL.StartDate, DATEADD(day, -8, GETDATE()))
        FROM dbo.udfParseXmlListOfInt(@JobNumberList) TMP
        INNER JOIN dbo.JobSchedule JOB ON (JOB.JobNumber = TMP.KeyValue)
        INNER JOIN dbo.Store STO ON (STO.StoreID = JOB.StoreID)
        INNER JOIN dbo.JobSchedEvent EVT ON (EVT.JobSchedID = JOB.JobSchedID AND EVT.IsPrimary = 1)
        LEFT OUTER JOIN dbo.Availability AVL ON (AVL.AvailTypID = 5 AND AVL.RowID = JOB.JobBatID)
        ORDER BY JOB.JobSchedID;

    For a simple list of 10 JobNumbers, SQL 2005 returns in less than 1 second; SQL 2008, run against the exact same data, takes 7 minutes. This is on a much faster machine with more memory. Any ideas?
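
    One commonly suggested rewrite, independent of the version difference, is to avoid the descendant axis // and read the text node directly, both of which tend to give the XML reader a cheaper plan. A minimal sketch of the changed SELECT inside the UDF:

        SELECT ListItems.ID.value('(./text())[1]', 'INT') AS KeyValue
        FROM @ItemListXml.nodes('/list/item') AS ListItems(ID)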

    Read the article

  • Should I Split Tables Relevant to X Module Into a Different DB? (MySQL)

    - by Michael Robinson
    I've inherited a rather large and somewhat messy codebase, and have been tasked with making it faster, less noodly, and generally better. Currently we use one big database to hold all data for all aspects of the site. As we need to plan for significant growth, I'm considering splitting tables relevant to specific sections of the site into different databases, so that if/when one gets too large for one server, I can more easily migrate some user data to different MySQL servers while retaining overall integrity. I would still need to use joins on some tables across the new databases. Is this a normal thing to do? Would I incur a performance hit because of it?

    Read the article

  • Color generation based on random number

    - by Mikulas Dite
    I would like to create a color generator based on random numbers, which might differ only slightly, but I need the colors to be easily distinguishable from each other. I was thinking about generating them in RGB format, which would probably be easiest. I'm afraid simply multiplying the given arguments wouldn't do very well. What algorithm do you suggest? Also, a generated color should never be the same as the previous one, but I don't want to store past colors; nor would multiplying by (micro)time work well, since parts of the script usually run faster than the clock ticks.
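
    One commonly suggested approach is to step the hue by the golden-ratio conjugate in HSV space: successive hues land far apart on the color wheel, and only the current hue needs to be kept between calls. A minimal sketch (the saturation/value constants are illustrative):

        // Advance the hue by the golden-ratio conjugate (mod 1).
        function nextColor(&$hue) {
            $hue = fmod($hue + 0.6180339887, 1.0);
            return hsvToRgb($hue, 0.65, 0.95);  // fixed S/V keep colors vivid
        }

        function hsvToRgb($h, $s, $v) {
            $i = (int)floor($h * 6);
            $f = $h * 6 - $i;
            $p = $v * (1 - $s);
            $q = $v * (1 - $f * $s);
            $t = $v * (1 - (1 - $f) * $s);
            $rgb = array(
                array($v, $t, $p), array($q, $v, $p), array($p, $v, $t),
                array($p, $q, $v), array($t, $p, $v), array($v, $p, $q)
            );
            list($r, $g, $b) = $rgb[$i % 6];
            return sprintf('#%02x%02x%02x',
                           (int)($r * 255), (int)($g * 255), (int)($b * 255));
        }

        // $hue = mt_rand() / mt_getrandmax();  // random starting hue
        // echo nextColor($hue);                // e.g. "#40f22d"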

    Read the article

  • In SQL, why is "Distinct" not used in a subquery when looking for some items "not showing up" in the other table?

    - by Jian Lin
    Usually when looking for items not showing up in the other table, we can use:

        select * from gifts where giftID not in (select giftID from sentgifts);

    or

        select * from gifts where giftID not in (select distinct giftID from sentgifts);

    The second line adds "distinct", so that the resulting set is smaller, which would presumably make the "not in" lookup faster too. So wouldn't using "distinct" be desirable? More often than not, I don't see it used in subqueries like this. Is there an advantage or disadvantage to using it? Thanks.

    Read the article

  • How to access jQuery's internal event data?

    - by Unknown
    As you may or may not be aware, as of jQuery 1.7 the whole event system was rewritten from the ground up. The codebase is much faster, and with the new .on() method there is a lot of uniformity in wiring up event handlers. One used to be able to access the internal events data and investigate what events are registered on any given element, but recently this internal information has been hidden based on the following reasoning: "It seems that the 'private' data is ALWAYS stored on the .data(jQuery.expando). For 'objects' where the deletion of the object should also delete its caches this makes some sense. In the realm of nodes, however, I think we should store these 'private' members in a separate (private) cache so that they don't pollute the object returned by $.fn.data()." Although I agree with the change to hide the internal data, I have found that access to this information can be helpful for debugging and unit testing. What is the new way of getting at the internal jQuery event object in jQuery 1.7?
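
    In jQuery 1.7 the event records are still reachable through jQuery._data, the undocumented internal helper the library itself uses (it takes a raw DOM element, not a jQuery object, and may change between releases). A minimal sketch:

        // For debugging/unit tests only: inspect handlers bound to an element.
        var events = jQuery._data(document.getElementById('myButton'), 'events');
        if (events && events.click) {
            console.log('click handlers bound:', events.click.length);
        }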

    Read the article

  • Perl: Fastest way to get directory (and subdirs) size on Unix - using stat() at the moment

    - by ivicas
    I am using the Perl stat() function to get the size of a directory and its subdirectories. I have a list of about 20 parent directories which have a few thousand recursive subdirs, and every subdir has a few hundred records. The main computing part of the script looks like this:

        sub getDirSize
        {
            my ($dir) = @_;          # directory to total up
            my $dirSize = 0;

            foreach my $entry (glob("$dir/*")) {
                if (-f $entry) {
                    $dirSize += (stat($entry))[7];   # element [7] is the size in bytes
                }
                elsif (-d $entry) {
                    $dirSize += getDirSize($entry);  # recurse into subdirectory
                }
            }
            return $dirSize;
        }

    The script executes for more than one hour and I want to make it faster. I tried the shell du command, but its output (converted to bytes) is not accurate, and it is also quite time-consuming. I am working on HP-UX 11i v1.
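
    One commonly suggested alternative is File::Find from the core distribution, which walks the whole tree in a single pass. A minimal sketch (the path is illustrative):

        use strict;
        use warnings;
        use File::Find;

        # Sum the sizes of all plain files under $top in one traversal.
        sub get_dir_size {
            my ($top) = @_;
            my $total = 0;
            find(sub { $total += -s if -f }, $top);
            return $total;
        }

        print get_dir_size('/var/tmp'), "\n";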

    Read the article

  • Icons in Silverlight: Images vs. Vectors

    - by Shnitzel
    I like using the vector drawing feature of Expression Blend to create icons. That way I can change colors easily on my icons without having to resort to an image editor. But my question is: say I have a treeview control that has an icon next to each tree element, and say I have hundreds of elements. Do you think using images is faster, performance-wise, than using vector icons? Because I'd rather use vectors, but I'm wondering about the performance implications.

    Read the article

  • Trouble creating a Makefile

    - by catia
    Hi everyone! I'm having some trouble making a Makefile. Right now I just compile everything every time. Although the professor is OK with that, I do want to make it faster and avoid unnecessary compiling. Here's what I have:

        FILES  = p6.cpp SetIter.cpp Node.cpp Set.cpp
        CFLAGS = -ansi -pedantic -Wall -Wextra
        CC     = g++

        MakePg6: p6.cpp SetIter.cpp Node.cpp Set.cpp
        	$(CC) $(CFLAGS) $(FILES) -o pg6

    The files are:

        Node.cpp    - node class
        Set.cpp     - uses nodes; friend of Node
        SetIter.cpp - gets a set and uses a pointer to iterate through it

    I'm confused by some of the dependencies arising from the friend relationship, and by the point of lib.o being included in the Makefile, as some sites show. Any help is greatly appreciated. Thanks in advance.
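
    A minimal sketch of the same build with separate compilation, so only changed translation units are recompiled. The header names are assumptions based on the source file names; friendship only matters here through which headers each .cpp includes, not at link time:

        CXX      = g++
        CXXFLAGS = -ansi -pedantic -Wall -Wextra
        OBJS     = p6.o SetIter.o Node.o Set.o

        pg6: $(OBJS)
        	$(CXX) $(CXXFLAGS) $(OBJS) -o pg6

        # Per-file header dependencies (assumed): a change to Node.h
        # rebuilds everything that includes it, and nothing else.
        p6.o:      p6.cpp Set.h SetIter.h
        Set.o:     Set.cpp Set.h Node.h
        SetIter.o: SetIter.cpp SetIter.h Set.h
        Node.o:    Node.cpp Node.h

        %.o: %.cpp
        	$(CXX) $(CXXFLAGS) -c $< -o $@

        clean:
        	rm -f $(OBJS) pg6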

    Read the article

  • C: Recursive function for inverting an int

    - by Jorge
    I had this problem on an exam yesterday. I couldn't solve it, so you can imagine the result... Make a recursive function int invertint(int num) that receives an integer and returns it inverted; for example, 321 would return 123. I wrote this:

        int invertint( int num )
        {
            int rest = num % 10;
            int div  = num / 10;

            if( div == 0 ) {
                return rest;
            }
            return rest * 10 + invertint( div );
        }

    It worked for 2-digit numbers but not for 3 digits or more, since 321 returns 1 * 10 + 23 in the last stage. Thanks a lot! PS: Is there a way to understand these kinds of recursion problems faster, or is it up to one's imagination?
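
    One standard fix is to carry an accumulator, so the already-reversed prefix is shifted left as each remaining digit is consumed; a minimal sketch:

        #include <stdio.h>

        /* Accumulator version: for 321, acc grows 1 -> 12 -> 123. */
        static int invert_helper(int num, int acc)
        {
            if (num == 0)
                return acc;
            return invert_helper(num / 10, acc * 10 + num % 10);
        }

        int invertint(int num)
        {
            return invert_helper(num, 0);
        }

        int main(void)
        {
            printf("%d\n", invertint(321));  /* prints 123 */
            return 0;
        }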

    Read the article

  • From Xcode, not able to execute DISTINCT keyword with SQLite

    - by mac
    Here is the method:

        -(void) readProductsFromDatabase {
            // Set up the database object
            sqlite3 *database;

            // Init the products array
            products = [[NSMutableArray alloc] init];

            // Open the database from the user's filesystem
            if (sqlite3_open([databasePath UTF8String], &database) == SQLITE_OK) {
                NSLog(@"db opened");
                // Set up the SQL statement and compile it for faster access
                const char *sqlStatement = "SELECT DISTINCT productname FROM iphone";
                sqlite3_stmt *compiledStatement;
                if (sqlite3_prepare_v2(database, sqlStatement, -1, &compiledStatement, NULL) == SQLITE_OK) {
                    // Loop through the results and add them to the feeds array
                    while (sqlite3_step(compiledStatement) == SQLITE_ROW) {
                        NSLog(@"inside sqlite3 prepare");
                        // Read the data from the result row
                        NSString *aName = [NSString stringWithUTF8String:
                            (char *)sqlite3_column_text(compiledStatement, 2)];
                    }
                }
                // Release the compiled statement from memory
                sqlite3_finalize(compiledStatement);
            }
            sqlite3_close(database);
        }

    My problem is the line:

        const char *sqlStatement = "SELECT DISTINCT productname FROM iphone";

    This line is not executing. I am using SQLite 3. Thanks in advance.
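
    One likely culprit, an observation from the code above rather than from the original post: the SELECT returns a single column, so the result must be read from column index 0. Index 2 is out of range for this query, so the pointer returned by sqlite3_column_text() is unreliable (typically NULL), and stringWithUTF8String: crashes on NULL. A minimal sketch of the corrected read:

        // The query selects one column, so read index 0 (not 2),
        // and guard against NULL before building an NSString.
        const unsigned char *text = sqlite3_column_text(compiledStatement, 0);
        if (text != NULL) {
            NSString *aName = [NSString stringWithUTF8String:(const char *)text];
            [products addObject:aName];  // 'products' is the array from the method above
        }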

    Read the article
