Search Results

Search found 12988 results on 520 pages for 'performance'.

Page 167/520

  • JavaScript: Is there a better way to retain your array but efficiently concat or replace items?

    - by Michael Mikowski
    I am looking for the best way to replace or add to the elements of an array without losing the original reference. Here is the setup:

        var a = [], b = [], c, i, obj = {};
        for ( i = 0; i < 100000; i++ ) { a[ i ] = i; b[ i ] = 10000 - i; }
        obj.data_list = a;

    Now we want to concatenate b INTO a without changing the reference to a, since it is used in obj.data_list. Here is one method:

        for ( i = 0; i < b.length; i++ ) { a.push( b[ i ] ); }

    This seems to be a somewhat terser and, on V8, roughly 8x faster method:

        a.splice.apply( a, [ a.length, 0 ].concat( b ) );

    I have found this useful when iterating over an array "in place" and I don't want to touch the elements as I go (a good practice). I start a new array (let's call it keep_list) with the initial splice arguments, add the elements I wish to retain, and finally use the same apply trick to replace the truncated array's contents:

        var keep_list = [ 0, 0 ];
        for ( i = 0; i < a.length; i++ ) {
          if ( some_condition ) { keep_list.push( a[ i ] ); }
        }
        // truncate array
        a.length = 0;
        // and replace contents
        a.splice.apply( a, keep_list );

    There are a few problems with this solution: apply runs into a maximum argument/call-stack limit of around 50k on V8 (I have not tested other JS engines yet), and the code is a bit cryptic. Has anyone found a better way?

  • Intersection() and Except() is too slow with large collections of custom objects

    - by Theo
    I am importing data from another database. My process imports data from a remote DB into a List<DataModel> named remoteData, and data from the local DB into a List<DataModel> named localData. I then use LINQ to build a list of records that differ, so that I can update the local DB to match the data pulled from the remote DB:

        var outdatedData = this.localData.Intersect(this.remoteData, new OutdatedDataComparer()).ToList();

    I then use LINQ to build a list of records that no longer exist in remoteData but do exist in localData, so that I can delete them from the local database:

        var oldData = this.localData.Except(this.remoteData, new MatchingDataComparer()).ToList();

    Finally, I use LINQ to do the opposite of the above to add the new data to the local database:

        var newData = this.remoteData.Except(this.localData, new MatchingDataComparer()).ToList();

    Each collection holds about 70k records, and each of the three LINQ operations takes between 5 and 10 minutes to complete. How can I make this faster? Here is the object the collections are using:

        internal class DataModel
        {
            public string Key1 { get; set; }
            public string Key2 { get; set; }
            public string Value1 { get; set; }
            public string Value2 { get; set; }
            public byte? Value3 { get; set; }
        }

    The comparer used to check for outdated records:

        class OutdatedDataComparer : IEqualityComparer<DataModel>
        {
            public bool Equals(DataModel x, DataModel y)
            {
                var e = string.Equals(x.Key1, y.Key1) && string.Equals(x.Key2, y.Key2) &&
                    ( !string.Equals(x.Value1, y.Value1) || !string.Equals(x.Value2, y.Value2) || x.Value3 != y.Value3 );
                return e;
            }
            public int GetHashCode(DataModel obj) { return 0; }
        }

    The comparer used to find old and new records:

        internal class MatchingDataComparer : IEqualityComparer<DataModel>
        {
            public bool Equals(DataModel x, DataModel y)
            {
                return string.Equals(x.Key1, y.Key1) && string.Equals(x.Key2, y.Key2);
            }
            public int GetHashCode(DataModel obj) { return 0; }
        }
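
    A note on why this pattern is slow in general: Intersect and Except build a hash set from one side and probe it with the other, so a GetHashCode that returns a constant puts every element in the same bucket and the set operations degrade to pairwise Equals scans (roughly 70k * 70k comparisons). Hashing the same key fields that Equals compares restores near-linear behaviour. As a rough, hypothetical sketch of that idea, written here in Java with a record standing in for DataModel and HashSet standing in for the LINQ operators:

        import java.util.*;

        public class DiffSketch {
            // Hypothetical analogue of DataModel, compared and hashed on its two keys only.
            record Row(String key1, String key2, String value1) {
                @Override public boolean equals(Object o) {
                    return o instanceof Row r
                        && Objects.equals(key1, r.key1)
                        && Objects.equals(key2, r.key2);
                }
                @Override public int hashCode() {
                    // Hash the same fields that equals() compares, instead of returning a constant.
                    return Objects.hash(key1, key2);
                }
            }

            // Rows present locally but missing remotely (the "oldData" case).
            static List<Row> except(List<Row> local, List<Row> remote) {
                Set<Row> remoteSet = new HashSet<>(remote);   // O(n) to build
                List<Row> result = new ArrayList<>();
                for (Row r : local) {
                    if (!remoteSet.contains(r)) {             // O(1) expected lookup
                        result.add(r);
                    }
                }
                return result;
            }
        }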

  • effective counter for unique number of visits in PHP & MySQL

    - by Adnan
    Hello, I am creating a counter for the number of unique visits on a post, so what I have so far is a table storing data like this: cvp_post_id | cvp_ip | cvp_user_id. When a registered user visits a post for the first time, a record is inserted with cvp_post_id and cvp_user_id, so on his next visit I query the table and, if the record is already there, I do not count him as a new visitor. For an anonymous user the same happens, but the cvp_ip and cvp_post_id are used instead. My concern is that I run a query every time anyone visits a post just to check whether there has already been a visit. What would be a more efficient way of doing this?

  • Java JRE vs GCJ

    - by Martijn Courteaux
    Hi, I have these results from a speed test I wrote in Java:

        Java    real 0m20.626s   user 0m20.257s   sys 0m0.244s
        GCJ     real 3m10.567s   user 3m5.168s    sys 0m0.676s

    So what is the point of GCJ, then? With these results I am certainly not going to compile with it! I tested this on Linux; are the results on Windows perhaps better? This was the code of the application:

        public static void main(String[] args) {
            String str = "";
            System.out.println("Start!!!");
            for (long i = 0; i < 5000000L; i++) {
                Math.sqrt((double) i);
                Math.pow((double) i, 2.56);
                long j = i * 745L;
                String string = new String(String.valueOf(i));
                string = string.concat(" kaka pipi"); // "Kaka pipi" is a childish taunt in Dutch.
                string = new String(string.toUpperCase());
                if (i % 300 == 0) {
                    str = "";
                } else {
                    str += Long.toHexString(i);
                }
            }
            System.out.println("Stop!!!");
        }
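
    One caveat worth adding here (it is not raised in the post): timing the whole process with `time` also counts VM start-up and, on HotSpot, JIT warm-up, and this particular loop is dominated by string concatenation rather than the math. If the goal is to compare steady-state code speed, a fairer harness times the loop in-process after a warm-up pass. A hedged sketch of such a harness, using a simplified version of the loop body:

        public class TimingSketch {
            // The work under test, kept in one method so the JIT can compile it as a unit.
            static long work(long rounds) {
                String str = "";
                long sink = 0;
                for (long i = 0; i < rounds; i++) {
                    sink += (long) Math.sqrt((double) i);
                    String s = String.valueOf(i).concat(" kaka pipi").toUpperCase();
                    if (i % 300 == 0) str = ""; else str += Long.toHexString(i);
                    sink += s.length() + str.length();
                }
                return sink;   // returned so the JIT cannot discard the loop as dead code
            }

            public static void main(String[] args) {
                work(500_000);                        // warm-up pass: trigger JIT compilation
                long t0 = System.nanoTime();
                long sink = work(5_000_000L);
                System.out.printf("steady state: %d ms (sink=%d)%n",
                        (System.nanoTime() - t0) / 1_000_000, sink);
            }
        }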

  • Fastest way for inserting very large number of records into a Table in SQL

    - by Irchi
    The problem is that we have a huge number of records (more than a million) to be inserted into a single table from a Java application. The records are created by the Java code; it's not a move from another table, so INSERT ... SELECT won't help. Currently, my bottleneck is the INSERT statements. I'm using PreparedStatement to speed up the process, but I can't get more than 50 records per second on a normal server. The table is not complicated at all, and there are no indexes defined on it. The process takes too long, and the time it takes will cause problems. What can I do to get the maximum speed (INSERTs per second) possible? Database: MS SQL 2008. Application: Java-based, using the Microsoft JDBC driver.
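
    Not part of the question, but the usual first step in this situation is to batch the inserts and commit in chunks, so the driver makes one round trip per batch instead of one per row. A hedged sketch (the table and column names are made up for illustration; bulk-copy tooling such as bcp can go further still):

        import java.sql.*;
        import java.util.List;

        public class BatchInsertSketch {
            // Inserts rows in batches of 1000 with one execute and one commit per batch.
            static void insertAll(Connection con, List<String[]> rows) throws SQLException {
                con.setAutoCommit(false);                       // avoid a commit per row
                String sql = "INSERT INTO my_table (col1, col2) VALUES (?, ?)"; // hypothetical table
                try (PreparedStatement ps = con.prepareStatement(sql)) {
                    int inBatch = 0;
                    for (String[] row : rows) {
                        ps.setString(1, row[0]);
                        ps.setString(2, row[1]);
                        ps.addBatch();                          // queue instead of executing now
                        if (++inBatch % 1000 == 0) {
                            ps.executeBatch();                  // one round trip for 1000 rows
                            con.commit();
                        }
                    }
                    ps.executeBatch();                          // flush the remainder
                    con.commit();
                }
            }
        }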

  • Are conditional subqueries optimized out, if the condition is false?

    - by Tobias Schulte
    I have a table foo and a table bar, where each foo might have a bar (and a bar might belong to multiple foos). Now I need to select all foos with a bar. My SQL looks like this:

        SELECT * FROM foo f
        WHERE [...]
        AND ($param IS NULL OR (SELECT ((COUNT(*))>0) FROM bar b WHERE f.bar = b.id))

    with $param being replaced at runtime. The question is: will the subquery be executed even if $param is NULL, or will the DBMS optimize the subquery away? We are using MySQL, MSSQL and Oracle. Is there a difference between these regarding the above?
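
    One way to sidestep the question entirely (this is not from the post) is to decide in application code whether the check is needed and only append the subquery when it is, so the optimizer never sees the dead branch at all. A hypothetical sketch of that approach:

        public class QueryBuilderSketch {
            // Builds the statement conditionally instead of relying on the optimizer
            // to short-circuit "$param IS NULL OR (subquery)".
            static String buildFooQuery(Object param) {
                StringBuilder sql = new StringBuilder("SELECT * FROM foo f WHERE 1 = 1");
                if (param != null) {
                    sql.append(" AND EXISTS (SELECT 1 FROM bar b WHERE b.id = f.bar)");
                }
                return sql.toString();
            }
        }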

  • Converting keys of an array/object-tree to lowercase

    - by tstenner
    I'm currently optimizing a PHP application and found one function that is called around 10-20k times, so I thought I'd start there.

        function keysToLower($obj) {
            if (!is_object($obj) && !is_array($obj)) return $obj;
            foreach ($obj as $key => $element) {
                $element = keysToLower($element);
                if (is_object($obj)) {
                    $obj->{strtolower($key)} = $element;
                    if (!ctype_lower($key)) unset($obj->{$key});
                } else if (is_array($obj) && ctype_upper($key)) {
                    $obj[strtolower($key)] = $element;
                    unset($obj[$key]);
                }
            }
            return $obj;
        }

    Most of the time is spent in the recursive calls (which are quite slow in PHP), but I don't see any way to convert it to a loop. What would you do?
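
    For what it's worth, the recursion itself can be replaced by an explicit work stack, which is the usual way to turn a tree walk into a loop. A hedged sketch of that transformation, written in Java here with nested Maps standing in for PHP's arrays/objects, so the names and types are illustrative only:

        import java.util.*;

        public class LowercaseKeysSketch {
            // Iterative equivalent of the recursive walk: a work stack replaces the call stack.
            @SuppressWarnings("unchecked")
            static void keysToLower(Map<String, Object> root) {
                Deque<Map<String, Object>> stack = new ArrayDeque<>();
                stack.push(root);
                while (!stack.isEmpty()) {
                    Map<String, Object> node = stack.pop();
                    for (String key : new ArrayList<>(node.keySet())) {
                        Object value = node.remove(key);
                        node.put(key.toLowerCase(), value);      // re-insert under the lowercased key
                        if (value instanceof Map) {
                            stack.push((Map<String, Object>) value);   // visit children later
                        }
                    }
                }
            }
        }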

  • Read large amount of data from file in Java

    - by Crozin
    Hello, I've got a text file that contains 1,000,002 numbers in the following format:

        123 456
        1 2 3 4 5 6 .... 999999 100000

    Now I need to read that data and assign the very first two numbers to int variables, and all the rest (1,000,000 numbers) to an int[] array. It's not a hard task, but it's horribly slow. My first attempt was java.util.Scanner:

        Scanner stdin = new Scanner(new File("./path"));
        int n = stdin.nextInt();
        int t = stdin.nextInt();
        int array[] = new int[n];
        for (int i = 0; i < n; i++) {
            array[i] = stdin.nextInt();
        }

    It works as expected, but it takes about 7500 ms to execute, and I need to fetch that data in at most several hundred milliseconds. Then I tried java.io.BufferedReader: using BufferedReader.readLine() and String.split() I got the same result in about 1700 ms, but that is still too slow. How can I read this amount of data in less than 1 second? The final result should be equal to:

        int n = 123;
        int t = 456;
        int array[] = { 1, 2, 3, 4, ..., 999999, 100000 };
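
    For what it's worth (this is not from the post), a common way to get well under a second for input like this is to skip Scanner's regex-based tokenizing and parse numbers straight off a buffered stream, for example with java.io.StreamTokenizer. A rough sketch, assuming the file layout described above:

        import java.io.*;

        public class FastReadSketch {
            public static void main(String[] args) throws IOException {
                // StreamTokenizer parses numbers directly from the buffered stream.
                StreamTokenizer in = new StreamTokenizer(
                        new BufferedReader(new FileReader("./path")));
                in.nextToken();
                int n = (int) in.nval;          // first header number
                in.nextToken();
                int t = (int) in.nval;          // second header number
                int[] array = new int[n];
                for (int i = 0; i < n; i++) {
                    in.nextToken();
                    array[i] = (int) in.nval;
                }
                System.out.println(n + " " + t + " " + array[n - 1]);
            }
        }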

  • Cache bandwidth per tick for modern CPUs

    - by osgx
    Hello. What is the cache access speed of modern CPUs? How many bytes can be read from or written to memory on every processor clock tick by an Intel P4, Core2, Core i7, or AMD? Please answer with both theoretical numbers (the width of the load/store unit and its throughput in uops/tick) and practical ones (even memcpy speed tests or the STREAM benchmark), if any. PS: this question is about the maximal rate of load/store instructions in assembler. There is a theoretical rate of loading (every instruction issued per tick is the widest possible load), but the processor can usually sustain only part of that, which is the practical limit of loading.
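
    Not from the post, but if you want a practical number for your own machine rather than a datasheet figure, a crude memcpy-style test is easy to run. A hedged Java sketch that times large array copies and reports GB/s; note it measures sustained copy bandwidth out of main memory, not per-tick L1 load/store width, and JIT and OS noise make it approximate at best:

        public class BandwidthSketch {
            public static void main(String[] args) {
                int n = 16 * 1024 * 1024;                 // 16M longs = 128 MB per array, far larger than any cache
                long[] src = new long[n], dst = new long[n];
                for (int i = 0; i < n; i++) src[i] = i;   // touch the pages before timing
                long best = Long.MAX_VALUE;
                for (int run = 0; run < 5; run++) {       // keep the best of a few runs
                    long t0 = System.nanoTime();
                    System.arraycopy(src, 0, dst, 0, n);
                    best = Math.min(best, System.nanoTime() - t0);
                }
                double gbPerSec = (2.0 * n * 8) / best;   // bytes read + written per nanosecond = GB/s
                System.out.printf("~%.1f GB/s copy bandwidth%n", gbPerSec);
            }
        }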

  • Visual Studio 2010 - Is it slow for anyone else?

    - by AngryHacker
    I've read a lot of claims that VS2010 is much more performant than VS2008. Now that I've finally installed it, I find that it is, in fact, much slower (save for the Add References dialog). For instance, Silverlight projects take twice as long to load, the startup of the IDE itself is much slower, etc. Am I missing something here, or is it like this for everyone?

  • Running Boggle Solver takes over an hour to run. What is wrong with my code?

    - by user1872912
    So I am running a Boggle solver in Java in the NetBeans IDE. When I run it, I have to quit after 10 minutes or so because it would end up taking about 2 hours to run completely. Is there something wrong with my code, or a way to make it substantially faster?

        public void findWords(String word, int iLoc, int jLoc, ArrayList<JLabel> labelsUsed) {
            if (iLoc < 0 || iLoc >= 4 || jLoc < 0 || jLoc >= 4) {
                return;
            }
            if (labelsUsed.contains(jLabels[iLoc][jLoc])) {
                return;
            }
            word += jLabels[iLoc][jLoc].getText();
            labelsUsed.add(jLabels[iLoc][jLoc]);
            if (word.length() >= 3 && wordsPossible.contains(word)) {
                wordsMade.add(word);
            }
            findWords(word, iLoc - 1, jLoc, labelsUsed);
            findWords(word, iLoc + 1, jLoc, labelsUsed);
            findWords(word, iLoc, jLoc - 1, labelsUsed);
            findWords(word, iLoc, jLoc + 1, labelsUsed);
            findWords(word, iLoc - 1, jLoc + 1, labelsUsed);
            findWords(word, iLoc - 1, jLoc - 1, labelsUsed);
            findWords(word, iLoc + 1, jLoc - 1, labelsUsed);
            findWords(word, iLoc + 1, jLoc + 1, labelsUsed);
            labelsUsed.remove(jLabels[iLoc][jLoc]);
        }

    Here is where I call this method from:

        public void findWords() {
            ArrayList<JLabel> labelsUsed = new ArrayList<JLabel>();
            for (int i = 0; i < jLabels.length; i++) {
                for (int j = 0; j < jLabels[i].length; j++) {
                    findWords(jLabels[i][j].getText(), i, j, labelsUsed);
                    //System.out.println("Done");
                }
            }
        }

    Edit: BTW, I am using a GUI and the letters on the board are displayed using JLabels.
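
    Not an answer from the page, but the usual fixes for a solver shaped like this are (a) keeping the dictionary in a HashSet so wordsPossible.contains(word) is a constant-time lookup rather than a list scan, and (b) pruning the recursion as soon as the current string is not a prefix of any dictionary word, so dead branches stop immediately. A hedged sketch of both ideas; the board and dictionary names below are placeholders, not the original fields:

        import java.util.*;

        public class BoggleSketch {
            private final Set<String> words = new HashSet<>();     // full dictionary words
            private final Set<String> prefixes = new HashSet<>();  // every prefix of every word
            private final char[][] board;                          // 4x4 letters (placeholder)
            private final Set<String> found = new HashSet<>();

            BoggleSketch(char[][] board, Collection<String> dictionary) {
                this.board = board;
                for (String w : dictionary) {
                    words.add(w);
                    for (int i = 1; i <= w.length(); i++) {
                        prefixes.add(w.substring(0, i));
                    }
                }
            }

            void search(String word, int i, int j, boolean[][] used) {
                if (i < 0 || i >= 4 || j < 0 || j >= 4 || used[i][j]) return;
                word += board[i][j];
                if (!prefixes.contains(word)) return;   // prune: no dictionary word starts this way
                used[i][j] = true;
                if (word.length() >= 3 && words.contains(word)) found.add(word);
                for (int di = -1; di <= 1; di++) {
                    for (int dj = -1; dj <= 1; dj++) {
                        if (di != 0 || dj != 0) search(word, i + di, j + dj, used);
                    }
                }
                used[i][j] = false;
            }
        }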

  • Ant Junit tests are running much slower via ant than via IDE - what to look at?

    - by Alex B
    I am running my JUnit tests via Ant and they are running substantially slower than via the IDE. My Ant call is:

        <junit fork="yes" forkmode="once" printsummary="off">
          <classpath refid="test.classpath"/>
          <formatter type="brief" usefile="false"/>
          <batchtest todir="${test.results.dir}/xml">
            <formatter type="xml"/>
            <fileset dir="src" includes="**/*Test.java" />
          </batchtest>
        </junit>

    The same test that runs near-instantaneously in my IDE (0.067s) takes 4.632s when run through Ant. In the past I've been able to speed up test problems like this by using the junit fork parameter, but that doesn't seem to be helping in this case. What properties or parameters can I look at to speed up these tests? More info: I am comparing the time reported by the IDE with the time the junit task outputs, not the sum total time reported at the end of the Ant run. So, bizarrely, this problem has since resolved itself. What could have caused it? The system runs on a local disk, so that is not the problem.

  • Why is Dictionary.First() so slow?

    - by Rotsor
    Not a real question, because I already found out the answer, but still an interesting thing. I always thought that a hash table is the fastest associative container if you hash properly. However, the following code is terribly slow. It executes only about 1 million iterations and takes more than 2 minutes on a Core 2 CPU. The code does the following: it maintains a collection todo of items it needs to process. At each iteration it takes an item from this collection (it doesn't matter which item), deletes it, processes it if it wasn't processed yet (possibly adding more items to process), and repeats this until there are no items left. The culprit seems to be the Dictionary.Keys.First() operation. The question is: why is it slow?

        Stopwatch watch = new Stopwatch();
        watch.Start();
        HashSet<int> processed = new HashSet<int>();
        Dictionary<int, int> todo = new Dictionary<int, int>();
        todo.Add(1, 1);
        int iterations = 0;
        int limit = 500000;
        while (todo.Count > 0)
        {
            iterations++;
            var key = todo.Keys.First();
            var value = todo[key];
            todo.Remove(key);
            if (!processed.Contains(key))
            {
                processed.Add(key);
                // process item here
                if (key < limit)
                {
                    todo[key + 13] = value + 1;
                    todo[key + 7] = value + 1;
                } // doesn't matter much how
            }
        }
        Console.WriteLine("Iterations: {0}; Time: {1}.", iterations, watch.Elapsed);

    This results in:

        Iterations: 923007; Time: 00:02:09.8414388.

    Simply changing Dictionary to SortedDictionary yields:

        Iterations: 499976; Time: 00:00:00.4451514.

    That is 300 times faster while doing only half as many iterations. The same happens in Java, using HashMap instead of Dictionary and keySet().iterator().next() instead of Keys.First().
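
    The usual explanation (not spelled out in the post) is that Keys.First() on a hash-based dictionary has to walk the internal entry array from the start until it finds an occupied slot; after many removals the early slots are mostly empty, so every call re-scans a growing stretch of dead entries. Keeping the pending items in a structure with O(1) removal of "some element" avoids that entirely. A hedged Java sketch of the same loop using an ArrayDeque as the to-do collection; the semantics differ slightly (duplicates can sit on the deque instead of being overwritten in a map), but it shows the shape of the fix:

        import java.util.*;

        public class WorklistSketch {
            public static void main(String[] args) {
                long start = System.nanoTime();
                Set<Integer> processed = new HashSet<>();
                Deque<int[]> todo = new ArrayDeque<>();   // each entry is {key, value}
                todo.push(new int[] { 1, 1 });
                int iterations = 0;
                int limit = 500000;
                while (!todo.isEmpty()) {
                    iterations++;
                    int[] item = todo.pop();              // O(1), no slot scanning
                    int key = item[0], value = item[1];
                    if (processed.add(key)) {             // add() returns false if already seen
                        if (key < limit) {
                            todo.push(new int[] { key + 13, value + 1 });
                            todo.push(new int[] { key + 7, value + 1 });
                        }
                    }
                }
                System.out.printf("Iterations: %d; Time: %d ms%n",
                        iterations, (System.nanoTime() - start) / 1_000_000);
            }
        }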

  • 2k rows update is very slow in MySQL

    - by sergeik
    Hi all, I have 2 tables: news (450k rows) and news_tags (3m rows). There are some triggers on news-table updates which update listings. This SQL takes too long to execute:

        UPDATE news
        SET news_category = some_number
        WHERE news_id IN (SELECT news_id FROM news_tags WHERE tag_id = some_number); # about 3k rows

    How can I make it faster? Thanks in advance, S.

  • Optimizing Code

    - by Claudiu
    You are given a heap of code in your favorite language which combines to form a rather complicated application. It runs rather slowly, and your boss has asked you to optimize it. What are the steps you follow to most efficiently optimize the code? What strategies have you found to be unsuccessful when optimizing code? Re-writes: At what point do you decide to stop optimizing and say "This is as fast as it'll get without a complete re-write." In what cases would you advocate a simple complete re-write anyway? How would you go about designing it?

  • How to explain to a developer that adding extra if - else if conditions is not a good way to "improv

    - by Lilit
    Recently I've bumped into the following C++ code: if (a) { f(); } else if (b) { f(); } else if (c) { f(); } where a, b and c are all different conditions, and they are not very short. I tried to change the code to: if (a || b || c) { f(); } But the author opposed the change, saying it would decrease the readability of the code. I had two arguments: 1) You should not increase readability by replacing one branching statement with three (though I really doubt that it's possible to make code more readable by using else if instead of ||). 2) It's not the fastest code, and no compiler will optimize this. But my arguments did not convince him. What would you tell a programmer writing such code? Do you think a complex condition is an excuse for using else if instead of OR?

  • Is putting the javascript before the closing body tag okay on an asp.net website?

    - by Jason Weber
    I pretty much stated what I have to ask in the title. But is taking all of your external .js files and putting them before the closing body tag on your master pages okay on an ASP.NET website? I'm just going off what YSlow and Google Page Speed have been showing. I can't combine these scripts, so I'm trying to load them "after page load", but doing so makes them useless; some of my jQuery features don't work. I moved my .js files above the opening body tag and they work. What am I doing wrong? And what could I do to load my .js files after page load? Thanks for any advice anybody can offer!

  • Strange profiler behavior: same functions, different performances

    - by arthurprs
    I was learning to use gprof and I got weird results for this code:

        int one(int a, int b) { return a / (b + 1); }

        int two(int a, int b) { return a / (b + 1); }

        int main() {
            for (int i = 1; i < 30000000; i++) {
                two(i, i * 2);
                one(i, i * 2);
            }
            return 0;
        }

    and this is the profiler output:

          %   cumulative   self              self     total
        time    seconds   seconds    calls  ns/call  ns/call  name
        48.39      0.90      0.90  29999999    30.00    30.00  one(int, int)
        40.86      1.66      0.76  29999999    25.33    25.33  two(int, int)
        10.75      1.86      0.20                              main

    If I call one first and then two, the result is the inverse: two takes more time than one. The two functions are identical, yet the one called first always takes less time than the second. Why is that? Note: the assembly code is exactly the same, and the code is compiled with no optimizations.

  • How to optimize this simple function which translates input bits into words?

    - by psihodelia
    I have written a function which reads an input buffer of bytes and produces an output buffer of words, where every word is 0x0081 for each ON bit of the input buffer or 0x007F for each OFF bit. The length of the input buffer is given, and both arrays have enough space. I also have about 2 KB of free RAM which I can use for lookup tables or the like. I have found that this function is the bottleneck in a real-time application, and it will be called very frequently. Can you please suggest a way to optimize this function? One possibility I see could be to use only one buffer and do in-place substitution.

        void inline BitsToWords(int8 *pc_BufIn, int16 *pw_BufOut, int32 BufInLen)
        {
            int32 i, j, z = 0;
            for (i = 0; i < BufInLen; i++)
            {
                for (j = 0; j < 8; j++, z++)
                {
                    pw_BufOut[z] = ( ((pc_BufIn[i] >> (7 - j)) & 0x01) == 1 ? 0x0081 : 0x007f );
                }
            }
        }

    Please do not offer any compiler-specific or CPU/hardware-specific optimizations, because it is a multi-platform project.
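
    One direction that avoids the per-bit branch (not from the post) is a lookup table built once at start-up: each possible input value maps to its pre-expanded output words, so the inner loop becomes a table copy. A full byte table would need 256 * 8 * 2 = 4 KB, which is over the stated ~2 KB budget, so a 16-entry nibble table (16 * 4 words = 128 bytes, two lookups per byte) is the usual compromise. A hedged sketch of the nibble-table idea, written in Java here purely to illustrate the table layout:

        public class BitsToWordsSketch {
            private static final short ON = 0x0081, OFF = 0x007f;
            // 16-entry nibble table: each 4-bit value maps to its 4 pre-expanded words.
            private static final short[][] NIBBLE = new short[16][4];
            static {
                for (int n = 0; n < 16; n++) {
                    for (int bit = 0; bit < 4; bit++) {
                        NIBBLE[n][bit] = ((n >> (3 - bit)) & 1) == 1 ? ON : OFF;
                    }
                }
            }

            // Two table copies per input byte instead of eight shift-and-branch steps.
            static void bitsToWords(byte[] in, short[] out) {
                int z = 0;
                for (byte b : in) {
                    System.arraycopy(NIBBLE[(b >> 4) & 0x0f], 0, out, z, 4);
                    System.arraycopy(NIBBLE[b & 0x0f], 0, out, z + 4, 4);
                    z += 8;
                }
            }
        }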

  • sql query is too slow, how to improve speed

    - by user1289282
    I have run into a bottleneck when trying to update one of my tables. The player table has, among other things, id, skill, school, weight. What I am trying to do is:

        SELECT id, skill FROM player
        WHERE player.school = (current school of 4500) AND player.weight = (current weight of 14)

    to find the highest skill of all players returned from the query, then

        UPDATE player SET starter = 'TRUE' WHERE id = (highest skill)

    then move to the next weight and repeat; when all weights have been completed, move to the next school and start over; when all schools are completed, we are done. I have this implemented and it works, but with approximately 4500 schools totaling 172,000 players, the way I have it now would probably take half an hour or more to complete (I did not wait it out), which is way too slow. How can I speed this up? Short of reducing the scale of the system, I am willing to do anything that gets the intended result. Thanks! (The weights are the standard folkstyle wrestling weights, i.e. 103, 113, 120, 126, 132, 138, 145, 152, 160, 170, 182, 195, 220, 285 pounds.)
