Search Results

Search found 14282 results on 572 pages for 'performance counter'.

Page 174/572 | < Previous Page | 170 171 172 173 174 175 176 177 178 179 180 181  | Next Page >

  • Slow loading of UITableView. How to find out why?

    - by mamcx
    I have a UITableView that shows a long list of data. It uses sections and follows the suggestions of http://stackoverflow.com/questions/695814/how-solve-slow-scrolling-in-uitableview . The flow is: load a main UITableView and push a second one when a row is selected there. However, with 3000 items it takes 11 seconds to show. I first suspected the loading of the records from SQLite (I preload the first 200), so I cut it down to only 50. However, no matter whether I preload only 1 or 500, the time is the same. The view is made in Interface Builder and everything is opaque. I have run out of ideas on how to track down the problem. I ran the Instruments tool but don't know what to look for. Also, when the user selects a cell in the previous UITableView, no visual feedback is shown for a while (i.e. the cell does not appear selected), so they think they didn't select it and tap it several times; this is related to the same problem. What should I do? NOTE:
    - The problem only occurs on the actual device, an iPod Touch 2nd generation.
    - Using fmdb as the SQLite API.
    - Doing the caching in viewDidLoad, in an NSDictionary, inside an NSAutoreleasePool for the caching part.
    - Only caching the row ID and the max 4 fields necessary to show the cell data.
    - UIView made with Interface Builder, SDK 2.2.1.
    - Instruments says I use 2.5 MB on the device.

    Read the article

  • Speed up :visible:input selector avoiding filter

    - by macca1
    I have a jQuery selector that is running way too slow on my unfortunately large page: $("#section").find(":visible:input").filter(":first").focus(); Is there a quicker way to select the first visible input without having to find ALL the visible inputs and then filtering THAT selection for the first? I want something like :visible:input:first but that doesn't seem to work.

    Read the article

  • finding long repeated substrings in a massive string

    - by Will
    I naively imagined that I could build a suffix trie in which I keep a visit count for each node, so that the deepest nodes with counts greater than one would be the result set I'm looking for. But I have a really, really long string (hundreds of megabytes) and about 1 GB of RAM, which is why building a suffix trie with counting data is too space-inefficient to work for me. To quote Wikipedia's Suffix tree article: storing a string's suffix tree typically requires significantly more space than storing the string itself. The large amount of information in each edge and node makes the suffix tree very expensive, consuming about ten to twenty times the memory size of the source text in good implementations. The suffix array reduces this requirement to a factor of four, and researchers have continued to find smaller indexing structures. And that is Wikipedia's comment on the tree, not the trie. How can I find long repeated sequences in such a large amount of data, and in a reasonable amount of time (e.g. less than an hour on a modern desktop machine)? (Some Wikipedia links to avoid people posting them as the 'answer': Algorithms on strings and especially Longest repeated substring problem ;-) )
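    A suffix array is the direction the quoted passage points at. Below is a rough sketch of the idea in Java; the naive construction is only practical for inputs that fit comfortably in memory, and a multi-hundred-megabyte string would need a linear-time suffix array and LCP construction instead.
        import java.util.*;

        public class LongestRepeat {
            // Returns the longest substring that occurs at least twice in s.
            static String longestRepeated(String s) {
                int n = s.length();
                Integer[] sa = new Integer[n];
                for (int i = 0; i < n; i++) sa[i] = i;
                // Naive suffix sort: fine as a sketch, far too slow for huge inputs.
                Arrays.sort(sa, (a, b) -> s.substring(a).compareTo(s.substring(b)));
                int bestLen = 0, bestPos = 0;
                // A maximal repeated substring is the common prefix of two adjacent suffixes.
                for (int i = 1; i < n; i++) {
                    int len = commonPrefixLength(s, sa[i - 1], sa[i]);
                    if (len > bestLen) { bestLen = len; bestPos = sa[i]; }
                }
                return s.substring(bestPos, bestPos + bestLen);
            }

            static int commonPrefixLength(String s, int i, int j) {
                int k = 0;
                while (i + k < s.length() && j + k < s.length()
                        && s.charAt(i + k) == s.charAt(j + k)) k++;
                return k;
            }

            public static void main(String[] args) {
                System.out.println(longestRepeated("banana"));  // prints "ana"
            }
        }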

    Read the article

  • Why does PHP (script) serve more requests than CGI (compiled)?

    - by Lucas Batistussi
    I developed the following CGI program and ran it on Apache 2 (http://localhost/test.chtml). I wrote the same script in PHP (http://localhost/verifica.php). Later I benchmarked both using the Apache Benchmark tool. The results are shown in the images.
        #include <stdio.h>
        #include <stdlib.h>

        int main(void)
        {
            printf("%s%c%c\n", "Content-Type:text/html;charset=iso-8859-1", 13, 10);
            printf("<TITLE>Multiplication results</TITLE>\n");
            printf("<H3>Multiplication results</H3>\n");
            return 0;
        }
    Can someone explain to me why PHP serves more requests than the CGI program?

    Read the article

  • What influences the running time of reading a bunch of images?

    - by remi
    I have a program where I read a large number of tiny images (50000 images of size 32x32). I read them using the OpenCV imread function, in a program like this:
        std::vector<std::string> imageList; // initialized with the full paths of the 50K images
        for (const std::string& s : imageList) {
            cv::Mat m = cv::imread(s);
        }
    Sometimes it will read the images in a few seconds; sometimes it takes a few minutes to do so.
    - I run this program in GDB, with a breakpoint well past the image-reading loop, so it's not because I'm stuck at a breakpoint.
    - The same "erratic" behaviour happens when I run the program outside of GDB.
    - The same "erratic" behaviour happens whether the program is compiled with or without optimisation.
    - The same "erratic" behaviour happens whether or not I have other programs running in the background.
    - The images are always in the same place on my machine's hard drive.
    I run the program on a Linux SUSE distribution, compiled with gcc. So I am wondering: what could affect the time taken to read the images that much?

    Read the article

  • Cache bandwidth per tick for modern CPUs

    - by osgx
    Hello. What is the speed of cache access for modern CPUs? How many bytes can be read or written from memory per processor clock tick by an Intel P4, Core 2, Core i7, or AMD CPU? Please answer with both theoretical figures (the width of the load/store unit and its throughput in uops/tick) and practical numbers (even memcpy speed tests or STREAM benchmark results), if any. PS: this question is about the maximum rate of load/store instructions in assembly. There is a theoretical rate of loading (all instructions per tick being the widest loads), but the processor may deliver only part of that: the practical limit on loading.

    Read the article

  • C: can using a lot of structs make a program slow?

    - by nunos
    I am coding a Breakout clone. I had one version in which the structures were only one level deep; that version runs at 70 fps. For more clarity in the code I decided it should have more abstractions, so I created more structs, and most of the time the structures are now two to three levels deep. This version runs at 30 fps. Since there are some other differences besides the structures, I ask you: can using a lot of structs in C slow down the code significantly? Thanks.

    Read the article

  • Fastest way to insert a very large number of records into a table in SQL

    - by Irchi
    The problem is that we have a huge number of records (more than a million) to be inserted into a single table from a Java application. The records are created by the Java code; it's not a move from another table, so INSERT/SELECT won't help. Currently, my bottleneck is the INSERT statements. I'm using PreparedStatement to speed up the process, but I can't get more than 50 records per second on a normal server. The table is not complicated at all, and there are no indexes defined on it. The process takes too long, and the time it takes will cause problems. What can I do to get the maximum speed (INSERTs per second) possible? Database: MS SQL 2008. Application: Java-based, using the Microsoft JDBC driver.
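    As a sketch of the usual JDBC-side fix (the table and column names below are made up, not from the question): send the rows in batches with auto-commit off, so you pay for one round trip and one commit per batch instead of per row.
        import java.sql.*;
        import java.util.List;

        public class BulkInsert {
            static void insertAll(Connection con, List<String> names) throws SQLException {
                con.setAutoCommit(false);                       // commit per batch, not per row
                String sql = "INSERT INTO items (name) VALUES (?)";
                try (PreparedStatement ps = con.prepareStatement(sql)) {
                    int count = 0;
                    for (String name : names) {
                        ps.setString(1, name);
                        ps.addBatch();
                        if (++count % 1000 == 0) {              // flush every 1000 rows
                            ps.executeBatch();
                            con.commit();
                        }
                    }
                    ps.executeBatch();                          // flush the remainder
                    con.commit();
                }
            }
        }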

    Read the article

  • Visual Studio 2010 - Is it slow for anyone else?

    - by AngryHacker
    I've read a lot about VS2010 being much more performant than VS2008. When I finally installed it, I found that it is, in fact, much slower (save for the Add References dialog). For instance, Silverlight projects take twice as long to load, the startup of the IDE itself is much slower, etc. Am I missing something here, or is it like this for everyone?

    Read the article

  • Ant JUnit tests are running much slower via Ant than via the IDE - what should I look at?

    - by Alex B
    I am running my JUnit tests via Ant and they are running substantially slower than via the IDE. My Ant call is:
        <junit fork="yes" forkmode="once" printsummary="off">
            <classpath refid="test.classpath"/>
            <formatter type="brief" usefile="false"/>
            <batchtest todir="${test.results.dir}/xml">
                <formatter type="xml"/>
                <fileset dir="src" includes="**/*Test.java" />
            </batchtest>
        </junit>
    The same test that runs nearly instantaneously in my IDE (0.067s) takes 4.632s when run through Ant. In the past, I've been able to speed up test problems like this by using the junit fork parameter, but that doesn't seem to be helping in this case. What properties or parameters can I look at to speed up these tests? More info: I am comparing the time reported by the IDE with the time that the junit task outputs, not the total time reported at the end of the Ant run. So, bizarrely, this problem has resolved itself. What could have caused it? The system runs on a local disk, so that is not the problem.

    Read the article

  • Project Euler #119: making it faster

    - by gangqinlaohu
    Trying to solve Project Euler problem 119: The number 512 is interesting because it is equal to the sum of its digits raised to some power: 5 + 1 + 2 = 8, and 8^3 = 512. Another example of a number with this property is 614656 = 28^4. We shall define a_n to be the nth term of this sequence and insist that a number must contain at least two digits to have a sum. You are given that a_2 = 512 and a_10 = 614656. Find a_30. Question: Is there a more efficient way to find the answer than just checking every number until a_30 is found? My code:
        int currentNum = 0;
        long value = 0;
        for (long a = 11; currentNum != 30; a++) { // maybe a++ is inefficient
            int test = Util.sumDigits(a);
            if (isPower(a, test)) {
                currentNum++;
                value = a;
                System.out.println(value + ":" + currentNum);
            }
        }
        System.out.println(value);
    isPower checks whether a is a power of test. Util.sumDigits:
        public static int sumDigits(long n) {
            int sum = 0;
            String s = "" + n;
            while (!s.equals("")) {
                sum += Integer.parseInt("" + s.charAt(0));
                s = s.substring(1);
            }
            return sum;
        }
    The program has been running for about 30 minutes (the long might be overflowing). Output (so far): 81:1 512:2 2401:3 4913:4 5832:5 17576:6 19683:7 234256:8 390625:9 614656:10 1679616:11 17210368:12 34012224:13 52521875:14 60466176:15 205962976:16 612220032:17
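    One common approach is sketched below (the search bounds of base <= 300 and exponent <= 64 are an assumption, chosen generously rather than derived): instead of testing every integer, generate candidates as base^exponent with BigInteger, keep those whose digit sum equals the base, and sort the results.
        import java.math.BigInteger;
        import java.util.*;

        public class Euler119 {
            public static void main(String[] args) {
                List<BigInteger> terms = new ArrayList<>();
                for (int base = 2; base <= 300; base++) {
                    BigInteger b = BigInteger.valueOf(base);
                    BigInteger power = b.multiply(b);            // base^2
                    for (int exp = 2; exp <= 64; exp++) {
                        // The >= 10 check enforces the "at least two digits" rule.
                        if (power.compareTo(BigInteger.TEN) >= 0 && digitSum(power) == base) {
                            terms.add(power);
                        }
                        power = power.multiply(b);               // next exponent
                    }
                }
                Collections.sort(terms);
                System.out.println(terms.get(29));               // a_30 (terms are 1-indexed)
            }

            static int digitSum(BigInteger n) {
                int sum = 0;
                for (char c : n.toString().toCharArray()) sum += c - '0';
                return sum;
            }
        }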

    Read the article

  • Java JRE vs GCJ

    - by Martijn Courteaux
    Hi, I have these results from a speed test I wrote in Java:
        Java
        real    0m20.626s
        user    0m20.257s
        sys     0m0.244s

        GCJ
        real    3m10.567s
        user    3m5.168s
        sys     0m0.676s
    So, what is the point of GCJ then? With these results I'm sure I'm not going to compile it with GCJ! I tested this on Linux; are the results on Windows perhaps better than that? This was the code of the application:
        public static void main(String[] args) {
            String str = "";
            System.out.println("Start!!!");
            for (long i = 0; i < 5000000L; i++) {
                Math.sqrt((double) i);
                Math.pow((double) i, 2.56);
                long j = i * 745L;
                String string = new String(String.valueOf(i));
                string = string.concat(" kaka pipi"); // "Kaka pipi" is a kind of childish phrase in Dutch.
                string = new String(string.toUpperCase());
                if (i % 300 == 0) {
                    str = "";
                } else {
                    str += Long.toHexString(i);
                }
            }
            System.out.println("Stop!!!");
        }

    Read the article

  • Why is Dictionary.First() so slow?

    - by Rotsor
    Not a real question, because I already found out the answer, but still an interesting thing. I always thought that a hash table is the fastest associative container if you hash properly. However, the following code is terribly slow. It executes only about 1 million iterations and takes more than 2 minutes on a Core 2 CPU. The code does the following: it maintains the collection todo of items it needs to process. At each iteration it takes an item from this collection (it doesn't matter which item), deletes it, processes it if it wasn't processed already (possibly adding more items to process), and repeats until there are no items to process. The culprit seems to be the Dictionary.Keys.First() operation. The question is: why is it slow?
        Stopwatch watch = new Stopwatch();
        watch.Start();
        HashSet<int> processed = new HashSet<int>();
        Dictionary<int, int> todo = new Dictionary<int, int>();
        todo.Add(1, 1);
        int iterations = 0;
        int limit = 500000;
        while (todo.Count > 0)
        {
            iterations++;
            var key = todo.Keys.First();
            var value = todo[key];
            todo.Remove(key);
            if (!processed.Contains(key))
            {
                processed.Add(key);
                // process item here
                if (key < limit)
                {
                    todo[key + 13] = value + 1;
                    todo[key + 7] = value + 1;
                }
                // doesn't matter much how
            }
        }
        Console.WriteLine("Iterations: {0}; Time: {1}.", iterations, watch.Elapsed);
    This results in: Iterations: 923007; Time: 00:02:09.8414388. Simply changing Dictionary to SortedDictionary yields: Iterations: 499976; Time: 00:00:00.4451514. 300 times faster while having only 2 times fewer iterations. The same happens in Java, using HashMap instead of Dictionary and keySet().iterator().next() instead of Keys.First().
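    Since the algorithm does not care which pending item it takes, one fix (a Java sketch, mirroring the HashMap variant mentioned above) is to keep the pending items in a queue, where taking an element is O(1), instead of asking a hash table for its "first" key, which typically has to scan the table's internal slots.
        import java.util.*;

        public class Worklist {
            public static void main(String[] args) {
                Set<Integer> processed = new HashSet<>();
                Deque<int[]> todo = new ArrayDeque<>();        // each entry is {key, value}
                todo.add(new int[] {1, 1});
                int limit = 500000;
                long iterations = 0;
                long start = System.nanoTime();
                while (!todo.isEmpty()) {
                    iterations++;
                    int[] item = todo.poll();                  // "any item" in O(1), no scanning
                    int key = item[0], value = item[1];
                    if (processed.add(key) && key < limit) {   // duplicate keys in the queue are skipped
                        todo.add(new int[] {key + 13, value + 1});
                        todo.add(new int[] {key + 7, value + 1});
                    }
                }
                System.out.printf("Iterations: %d; Time: %d ms%n",
                        iterations, (System.nanoTime() - start) / 1_000_000);
            }
        }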

    Read the article

  • Updating 2k rows is very slow in MySQL

    - by sergeik
    Hi all, I have 2 tables:
    1. news (450k rows)
    2. news_tags (3m rows)
    There are some triggers on updates to the news table which update listings. This SQL takes too long to execute:
        UPDATE news
        SET news_category = some_number
        WHERE news_id IN (SELECT news_id FROM news_tags WHERE tag_id = some_number); # about 3k rows
    How can I make it faster? Thanks in advance, S.
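    One thing worth trying, sketched here through JDBC so the two some_number values become parameters (and assuming news_tags has an index on tag_id): older MySQL versions often execute an IN (SELECT ...) subquery per candidate row, so a join-based multi-table UPDATE can be much cheaper. Note that the triggers on news will still fire for every updated row, which may turn out to be the real cost.
        import java.sql.*;

        public class RetagNews {
            // Issues the join-based rewrite of the UPDATE; the two parameters stand in
            // for the "some_number" placeholders in the question.
            static int retag(Connection con, int categoryId, int tagId) throws SQLException {
                String sql = "UPDATE news n JOIN news_tags t ON t.news_id = n.news_id "
                           + "SET n.news_category = ? "
                           + "WHERE t.tag_id = ?";
                try (PreparedStatement ps = con.prepareStatement(sql)) {
                    ps.setInt(1, categoryId);
                    ps.setInt(2, tagId);
                    return ps.executeUpdate();                 // number of rows updated
                }
            }
        }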

    Read the article

  • Database/NoSQL - Lowest latency way to retrieve the following data...

    - by Nickb
    I have a real estate application, and a "house" contains the following information:
        house:
        - house_id
        - address
        - city
        - state
        - zip
        - price
        - sqft
        - bedrooms
        - bathrooms
        - geo_latitude
        - geo_longitude
    I need to perform an EXTREMELY fast (low-latency) retrieval of all homes within a geo-coordinate box. Something like the SQL below (if I were to use a database):
        SELECT * FROM houses
        WHERE latitude BETWEEN xxx AND yyy
          AND longitude BETWEEN www AND zzz
    Question: What would be the quickest way for me to store this information so that I can perform the fastest retrieval of data based on latitude & longitude (e.g. database, NoSQL, memcache, etc.)?
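    If the data set fits in memory, one low-latency option is a coarse in-memory grid index. Here is a sketch (the class, field names, and cell size are assumptions to tune against the real data, not part of the question):
        import java.util.*;

        class GridIndex {
            static final double CELL = 0.01;                       // cell size in degrees; tune to data density
            final Map<Long, List<House>> cells = new HashMap<>();

            static class House {
                long houseId; double lat, lng;                     // other fields (price, sqft, ...) omitted
                House(long id, double la, double ln) { houseId = id; lat = la; lng = ln; }
            }

            static long cellKey(double lat, double lng) {
                long x = (long) Math.floor(lat / CELL);
                long y = (long) Math.floor(lng / CELL);
                return (x << 32) ^ (y & 0xffffffffL);              // pack both cell coordinates into one key
            }

            void add(House h) {
                cells.computeIfAbsent(cellKey(h.lat, h.lng), k -> new ArrayList<>()).add(h);
            }

            // Returns all houses inside the bounding box; only the overlapping cells are scanned.
            List<House> query(double minLat, double maxLat, double minLng, double maxLng) {
                List<House> out = new ArrayList<>();
                for (long x = (long) Math.floor(minLat / CELL); x <= (long) Math.floor(maxLat / CELL); x++) {
                    for (long y = (long) Math.floor(minLng / CELL); y <= (long) Math.floor(maxLng / CELL); y++) {
                        for (House h : cells.getOrDefault((x << 32) ^ (y & 0xffffffffL),
                                Collections.<House>emptyList())) {
                            if (h.lat >= minLat && h.lat <= maxLat && h.lng >= minLng && h.lng <= maxLng) {
                                out.add(h);
                            }
                        }
                    }
                }
                return out;
            }
        }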

    Read the article

  • Converting keys of an array/object-tree to lowercase

    - by tstenner
    I'm currently optimizing a PHP application and found one function being called around 10-20k times, so I thought I'd start optimizing there.
        function keysToLower($obj)
        {
            if (!is_object($obj) && !is_array($obj)) return $obj;
            foreach ($obj as $key => $element) {
                $element = keysToLower($element);
                if (is_object($obj)) {
                    $obj->{strtolower($key)} = $element;
                    if (!ctype_lower($key)) unset($obj->{$key});
                } else if (is_array($obj) && ctype_upper($key)) {
                    $obj[strtolower($key)] = $element;
                    unset($obj[$key]);
                }
            }
            return $obj;
        }
    Most of the time is spent in recursive calls (which are quite slow in PHP), but I don't see any way to convert it to a loop. What would you do?
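    The recursion can be replaced by an explicit work stack. A rough Java analogue of that loop structure is sketched below (the question is PHP, so treat this purely as an illustration of how the traversal becomes iterative, not as a drop-in replacement):
        import java.util.*;

        public class KeysToLower {
            @SuppressWarnings("unchecked")
            static void keysToLower(Map<String, Object> root) {
                Deque<Map<String, Object>> work = new ArrayDeque<>();
                work.push(root);
                while (!work.isEmpty()) {
                    Map<String, Object> current = work.pop();
                    // Rebuild the key set with lowercase keys, iterating over a snapshot.
                    for (String key : new ArrayList<>(current.keySet())) {
                        Object value = current.remove(key);
                        current.put(key.toLowerCase(), value);
                        if (value instanceof Map) {
                            work.push((Map<String, Object>) value);   // descend later, without recursion
                        }
                    }
                }
            }
        }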

    Read the article

  • How to optimize this simple function which translates input bits into words?

    - by psihodelia
    I have written a function which reads an input buffer of bytes and produces an output buffer of words, where every word is either 0x0081 for an ON bit of the input buffer or 0x007F for an OFF bit. The length of the input buffer is given. Both arrays have enough physical space. I also have about 2 KB of free RAM which I can use for lookup tables or similar. Now, I found that this function is my bottleneck in a real-time application; it will be called very frequently. Can you please suggest a way to optimize this function? One possibility I see is to use only one buffer and do in-place substitution.
        void inline BitsToWords(int8 *pc_BufIn, int16 *pw_BufOut, int32 BufInLen)
        {
            int32 i, j, z = 0;
            for (i = 0; i < BufInLen; i++)
            {
                for (j = 0; j < 8; j++, z++)
                {
                    pw_BufOut[z] = (((pc_BufIn[i] >> (7 - j)) & 0x01) == 1 ? 0x0081 : 0x007f);
                }
            }
        }
    Please do not offer any compiler-specific or CPU/hardware-specific optimizations, because this is a multi-platform project.
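    Assuming a lookup table is acceptable, the usual trick is to expand a whole nibble (or byte, if the RAM budget allows) to its precomputed output words in one step, which removes the inner bit loop. Transliterated to Java purely to illustrate the table layout (the project itself is C); a 16 x 4 table of words is only 128 bytes, well within the 2 KB budget.
        public class BitsToWords {
            private static final short[][] LUT = new short[16][4];
            static {
                // Precompute the 4 output words for every possible nibble, MSB first.
                for (int nibble = 0; nibble < 16; nibble++) {
                    for (int bit = 0; bit < 4; bit++) {
                        LUT[nibble][bit] =
                            ((nibble >> (3 - bit)) & 1) == 1 ? (short) 0x0081 : (short) 0x007F;
                    }
                }
            }

            // out must have at least 8 * in.length elements.
            static void bitsToWords(byte[] in, short[] out) {
                int z = 0;
                for (byte b : in) {
                    System.arraycopy(LUT[(b >> 4) & 0x0F], 0, out, z, 4);  // high nibble
                    z += 4;
                    System.arraycopy(LUT[b & 0x0F], 0, out, z, 4);         // low nibble
                    z += 4;
                }
            }
        }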

    Read the article

  • require_once at the beginning or when really needed?

    - by takeshin
    Where should I put require_once statements, and why?
    1. Always at the beginning of a file, before the class.
    2. In the actual method where the file is really needed.
    3. It depends?
    Most frameworks put includes at the beginning and do not care whether the file is really needed. Using an autoloader is a separate case here.

    Read the article

  • Java bytecode optimization

    - by Idob
    This is a basic question. I have code which shouldn't run on metadata beans. All metadata beans are located under the metadata package. Now, I use the reflection API to find out whether a class is located in the metadata package:
        if (newEntity.getClass().getPackage().getName().contains("metadata"))
    I use this if in several places within the code. The question is: should I do this once, with
        boolean isMetadata = false;
        if (newEntity.getClass().getPackage().getName().contains("metadata")) {
            isMetadata = true;
        }
    C++ makes optimizations and knows when this code has already been called, so it won't call it again. Does Java make this kind of optimization? I know the reflection API is a bit heavy, and I prefer not to waste expensive runtime.
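    Java will not memoize the result of that reflective expression for you across calls, but since the answer is a pure function of the class, you can cache it yourself. A small sketch (the MetadataCheck class name is made up):
        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;

        public final class MetadataCheck {
            private static final Map<Class<?>, Boolean> CACHE = new ConcurrentHashMap<>();

            // The package of a Class never changes, so the reflective lookup can be done
            // once per class and remembered.
            static boolean isMetadata(Object entity) {
                return CACHE.computeIfAbsent(entity.getClass(),
                        c -> c.getPackage() != null && c.getPackage().getName().contains("metadata"));
            }
        }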

    Read the article
