Search Results

Search found 20275 results on 811 pages for 'general performance'.

  • Oracle index for string column - does the format of data affect the quality of the index?

    - by Jayan
    We have the following type of "Unique ID" column on many tables in the database (Oracle). It is a string with the format <randomnumber>-<ascendingnumber>-<machinename>, so we have something like this:

        U1234-12345-NBBJD
        U1234-12346-NBBJD
        U1234-12347-NBBJD
        U1234-12348-NBBJD
        U1234-12349-NBBJD

    The UID value is unique and we have a unique index on it. Is the following format more efficient than the one above for index scans?

        NBBJD-U1234-12345
        NBBJD-U1234-12346
        NBBJD-U1234-12347
        NBBJD-U1234-12348
        NBBJD-U1234-12349

  • Fastest way to insert a very large number of records into a table in SQL

    - by Irchi
    The problem is, we have a huge number of records (more than a million) to be inserted into a single table from a Java application. The records are created by the Java code; it's not a move from another table, so INSERT/SELECT won't help.

    Currently, my bottleneck is the INSERT statements. I'm using PreparedStatement to speed up the process, but I can't get more than 50 records per second on a normal server. The table is not complicated at all, and there are no indexes defined on it. The process takes too long, and the time it takes will cause problems.

    What can I do to get the maximum speed (INSERTs per second) possible? Database: MS SQL 2008. Application: Java-based, using the Microsoft JDBC driver.
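    A common answer to this class of problem is JDBC batching: add rows with addBatch() and flush them in chunks with auto-commit off, so the driver sends thousands of rows per round trip instead of one. A minimal sketch follows; the connection string, table, and column names are hypothetical, and for MS SQL specifically a server-side BULK INSERT can be faster still.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;

        public class BulkInsert {
            public static void main(String[] args) throws Exception {
                // Hypothetical connection details; adjust host, database, and credentials.
                String url = "jdbc:sqlserver://localhost;databaseName=test;user=sa;password=secret";
                try (Connection con = DriverManager.getConnection(url)) {
                    con.setAutoCommit(false);                  // commit per batch, not per row
                    String sql = "INSERT INTO records (id, payload) VALUES (?, ?)";
                    try (PreparedStatement ps = con.prepareStatement(sql)) {
                        for (int i = 1; i <= 1_000_000; i++) {
                            ps.setInt(1, i);
                            ps.setString(2, "row " + i);
                            ps.addBatch();
                            if (i % 10_000 == 0) {             // flush a chunk per round trip
                                ps.executeBatch();
                                con.commit();
                            }
                        }
                        ps.executeBatch();                     // flush the remainder
                        con.commit();
                    }
                }
            }
        }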

  • finding long repeated substrings in a massive string

    - by Will
    I naively imagined that I could build a suffix trie where I keep a visit count for each node, and then the deepest nodes with counts greater than one would be the result set I'm looking for.

    I have a really, really long string (hundreds of megabytes) and about 1 GB of RAM, which is why building a suffix trie with counting data is too space-inefficient to work for me. To quote Wikipedia's entry on suffix trees:

        storing a string's suffix tree typically requires significantly more space than storing the string itself. The large amount of information in each edge and node makes the suffix tree very expensive, consuming about ten to twenty times the memory size of the source text in good implementations. The suffix array reduces this requirement to a factor of four, and researchers have continued to find smaller indexing structures.

    And that was Wikipedia's comment on the tree, not the trie. How can I find long repeated sequences in such a large amount of data, in a reasonable amount of time (e.g. less than an hour on a modern desktop machine)?

    (Some Wikipedia links to avoid people posting them as the 'answer': Algorithms on strings and especially the Longest repeated substring problem.)
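    For reference, the adjacent-suffix idea the Wikipedia quote points at: in a sorted suffix array, the longest repeated substring is the longest common prefix of two neighbouring suffixes. A toy sketch of that idea in Java — the naive construction below copies substrings and is nowhere near frugal enough for a hundreds-of-megabytes input (that calls for SA-IS or a similar linear-time construction), but it shows the scan a real implementation would run over the suffix array:

        import java.util.Arrays;
        import java.util.Comparator;

        public class LongestRepeat {
            public static String longestRepeated(String s) {
                int n = s.length();
                Integer[] sa = new Integer[n];
                for (int i = 0; i < n; i++) sa[i] = i;
                // Naive O(n^2 log n) suffix sort: demo only, not for huge inputs.
                Arrays.sort(sa, Comparator.comparing(i -> s.substring(i)));
                int best = 0, bestAt = 0;
                // Adjacent suffixes in sorted order share the longest prefixes.
                for (int k = 1; k < n; k++) {
                    int lcp = commonPrefix(s, sa[k - 1], sa[k]);
                    if (lcp > best) { best = lcp; bestAt = sa[k]; }
                }
                return s.substring(bestAt, bestAt + best);
            }

            private static int commonPrefix(String s, int i, int j) {
                int k = 0;
                while (i + k < s.length() && j + k < s.length()
                        && s.charAt(i + k) == s.charAt(j + k)) k++;
                return k;
            }

            public static void main(String[] args) {
                System.out.println(longestRepeated("banana")); // prints "ana"
            }
        }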

  • What is the most efficient method to find x contiguous values of y in an array?

    - by Alec
    Running my app through callgrind revealed that this line dwarfed everything else by a factor of about 10,000. I'm probably going to redesign around it, but it got me wondering: is there a better way to do it? Here's what I'm doing at the moment:

        int i = 1;
        while ( ( (*(buffer++) == 0xffffffff && ++i) || (i = 1) )
                && i < desiredLength + 1
                && buffer < bufferEnd );

    It's looking for the offset of the first chunk of desiredLength 0xffffffff values in a 32-bit unsigned int array. It's significantly faster than any implementation I could come up with involving an inner loop, but it's still too damn slow.
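    One way to beat a straight linear scan is to probe only the last slot of each candidate window: if that slot isn't the target value, no run of the desired length can cover it, and the scan can jump a whole window ahead. A sketch of that idea, written in Java for consistency with the other examples added to this listing (the skip logic carries over to C unchanged):

        public class RunFinder {
            // Returns the start index of a window of `len` consecutive `target`
            // values inside the first qualifying stretch, or -1 if none exists.
            static int findRun(int[] a, int target, int len) {
                int i = len - 1;                    // last slot of the current window
                while (i < a.length) {
                    if (a[i] != target) {
                        i += len;                   // window can't match: skip it entirely
                        continue;
                    }
                    int lo = i - len;               // window is a[lo+1 .. i]
                    int j = i - 1;
                    while (j > lo && a[j] == target) j--;
                    if (j == lo) return lo + 1;     // every slot in the window matched
                    i = j + len;                    // mismatch at j: next window ends at j+len
                }
                return -1;
            }

            public static void main(String[] args) {
                int[] buf = {0, -1, -1, 0, -1, -1, -1, -1, 0};   // -1 == 0xffffffff
                System.out.println(findRun(buf, 0xffffffff, 3)); // prints 4
            }
        }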

  • Cache bandwidth per tick for modern CPUs

    - by osgx
    Hello. What is the speed of cache access for modern CPUs? How many bytes can be read or written from memory every processor clock tick by an Intel P4, Core2, Core i7, or AMD?

    Please answer with both theoretical numbers (width of the ld/st unit and its throughput in uOPs/tick) and practical numbers (even memcpy speed tests, or the STREAM benchmark), if any.

    PS: this question relates to the maximal rate of load/store instructions in assembler. There may be a theoretical rate of loading (all instructions per tick being the widest loads), but the processor can deliver only part of that; I want the practical limit on loading.
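    For the "even memcpy speed tests" part, a rough effective-bandwidth number is easy to measure: copy a buffer much larger than the last-level cache and divide bytes moved by elapsed time. A minimal sketch; the numbers it prints reflect memory bandwidth, and shrinking the arrays to fit in L1/L2 probes the cache levels instead:

        public class Bandwidth {
            public static void main(String[] args) {
                int n = 1 << 23;                       // 8M longs = 64 MiB per array
                long[] src = new long[n], dst = new long[n];
                for (int i = 0; i < n; i++) src[i] = i;
                for (int r = 0; r < 3; r++)            // warm-up so the JIT kicks in
                    System.arraycopy(src, 0, dst, 0, n);
                int reps = 10;
                long t0 = System.nanoTime();
                for (int r = 0; r < reps; r++)
                    System.arraycopy(src, 0, dst, 0, n);
                double secs = (System.nanoTime() - t0) / 1e9;
                double gib = (double) reps * n * 8 * 2 / (1 << 30); // read + write traffic
                System.out.printf("~%.1f GiB/s effective copy bandwidth%n", gib / secs);
            }
        }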

  • efficient counter for unique visits in PHP & MySQL

    - by Adnan
    Hello, I am creating a counter for the number of unique visits on a post, so what I have so far is a table for storing data like this:

        cvp_post_id | cvp_ip | cvp_user_id

    When a registered user visits a post for the first time, a record is inserted with cvp_post_id and cvp_user_id, so on his next visit I query the table and, if the record is there, I do not count him as a new visitor. For an anonymous user the same happens, but with cvp_ip and cvp_post_id instead.

    My concern is that I run a query on every single post visit just to check whether there has already been a visit. What would be a more efficient way of doing this?
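    One common way to drop the check-then-insert round trip is to let the database enforce uniqueness: put a composite unique key on the visit columns and issue a single INSERT IGNORE per visit, which silently does nothing when the row already exists. A sketch of the idea, shown in Java/JDBC for consistency with the other examples added to this listing — the SQL is the point; the table layout is the question's own, and the unique key is an assumption:

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;

        public class VisitCounter {
            // Assumes: ALTER TABLE visits ADD UNIQUE KEY uq (cvp_post_id, cvp_user_id, cvp_ip);
            // Anonymous visitors should get user id 0 rather than NULL, because NULLs
            // never collide in a MySQL unique index and would defeat the deduplication.
            static void recordVisit(Connection con, int postId, int userId, String ip)
                    throws SQLException {
                String sql = "INSERT IGNORE INTO visits (cvp_post_id, cvp_user_id, cvp_ip)"
                           + " VALUES (?, ?, ?)";
                try (PreparedStatement ps = con.prepareStatement(sql)) {
                    ps.setInt(1, postId);
                    ps.setInt(2, userId);
                    ps.setString(3, ip);
                    ps.executeUpdate();   // silently ignored on a repeat visit
                }
            }
        }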

  • Best memory settings for Eclipse 4.2 (STS 3.1) on Windows 7 64-bit?

    - by jorrebor
    I apologize in advance if this question is indeed too subjective, as SO warns me. My workstation has 8 GB of RAM and runs Windows 7 64-bit. I use the Spring Tool Suite (3.1), but as soon as I start to open and modify the Spring config (.xml) files, STS becomes incredibly slow. I already tried switching off "build automatically" and increasing the memory settings, but no luck. How should I change my .ini? This is what I have set now:

        -vm
        C:/Program Files/Java/jdk1.7.0_07/bin/javaw.exe
        -startup
        plugins/org.eclipse.equinox.launcher_1.3.0.v20120522-1813.jar
        --launcher.library
        plugins/org.eclipse.equinox.launcher.win32.win32.x86_64_1.1.200.v20120522-1813
        -product
        org.springsource.sts.ide
        --launcher.defaultAction
        openFile
        --launcher.XXMaxPermSize
        4096M
        -vmargs
        -Dosgi.requiredJavaVersion=1.5
        -Xms512m
        -Xmx2048m
        -XX:MaxPermSize=512m

    My colleague, running the same project in IntelliJ, has no problems. Thank you!

  • Speed up :visible:input selector avoiding filter

    - by macca1
    I have a jQuery selector that is running way too slow on my unfortunately large page:

        $("#section").find(":visible:input").filter(":first").focus();

    Is there a quicker way to select the first visible input without having to find ALL the visible inputs and then filtering THAT selection for the first? I want something like :visible:input:first, but that doesn't seem to work.

  • Long primitive or AtomicLong for a counter?

    - by Rich
    Hi, I need a counter of type long with the following requirements/facts:

        - Incrementing the counter should take as little time as possible.
        - The counter will only be written to by one thread.
        - Reading from the counter will be done in another thread.
        - The counter will be incremented regularly (as much as a few thousand times per second), but will only be read once every five seconds.
        - Precise accuracy isn't essential; only a rough idea of the size of the counter is good enough.
        - The counter is never cleared or decremented.

    Based upon these requirements, how would you choose to implement your counter: as a simple long, as a volatile long, or using an AtomicLong? Why? At the moment I have a volatile long, but was wondering whether another approach would be better. I am also incrementing my long by doing ++counter as opposed to counter++. Is this really any more efficient (as I have been led to believe elsewhere) because there is no assignment being done? Thanks in advance, Rich
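    Given the single-writer, single-reader, approximate-read requirements, a volatile long is sufficient: volatility guarantees the reader sees a reasonably fresh value, and with only one writer there is no lost-update race, so the CAS machinery of AtomicLong buys nothing here. A minimal sketch:

        public class HitCounter {
            private volatile long count = 0;

            // Called from the single writer thread only.
            // As a standalone statement, ++count and count++ compile to the same
            // bytecode; the "no assignment" advantage is a myth in this position.
            void increment() { count++; }

            // Called from the reader thread every few seconds.
            long read() { return count; }
        }

    If a second writer thread ever appears, this stops being safe (count++ on a volatile is a non-atomic read-modify-write) and AtomicLong, or LongAdder on Java 8+, becomes the right tool.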

  • Why does PHP (script) serve more requests than CGI (compiled)?

    - by Lucas Batistussi
    I developed the following CGI script and ran it on Apache 2 (http://localhost/test.chtml). I wrote the same script in PHP (http://localhost/verifica.php), then benchmarked both using the Apache Benchmark tool. The results are shown in the images.

        #include <stdio.h>
        #include <stdlib.h>

        int main(void) {
            printf("%s%c%c\n", "Content-Type:text/html;charset=iso-8859-1", 13, 10);
            printf("<TITLE>Multiplication results</TITLE>\n");
            printf("<H3>Multiplication results</H3>\n");
            return 0;
        }

    Can someone explain to me why PHP serves more requests than the CGI script?

  • C: can using a lot of structs make a program slow?

    - by nunos
    I am coding a breakout clone. I had one version in which I had only one level of structures; that version runs at 70 fps. For more clarity in the code I decided there should be more abstraction, and created more structs. Most of the time I now have structures nested two to three levels deep; this version runs at 30 fps. Since there are some other differences besides the structures, I ask you: can using a lot of structs in C slow down the code significantly? Thanks.

  • What influences the running time of reading a bunch of images?

    - by remi
    I have a program in which I read a handful of tiny images (50,000 images of size 32x32). I read them using the OpenCV imread function, in a program like this:

        std::vector<std::string> imageList; // initialized with the full paths of the 50K images
        for (string s : imageList) {
            cv::Mat m = cv::imread(s);
        }

    Sometimes it will read the images in a few seconds; sometimes it takes a few minutes to do so. Some observations:

        - I run this program in GDB, with a breakpoint well past the image-reading loop, so it's not because I'm stuck at a breakpoint.
        - The same "erratic" behaviour happens when I run the program outside GDB.
        - The same "erratic" behaviour happens whether the program is compiled with or without optimisation.
        - The same "erratic" behaviour happens whether or not other programs are running in the background.
        - The images are always at the same place on the hard drive of my machine.

    I run the program on a Linux SUSE distribution, compiled with gcc. So what could affect the time of reading the images that much?

  • Slow loading of UITableView. How to find out why?

    - by mamcx
    I have a UITableView that shows a long list of data. It uses sections and follows the suggestions in http://stackoverflow.com/questions/695814/how-solve-slow-scrolling-in-uitableview. The flow is: load a main UITableView and push a second one when a row is selected there.

    However, with 3000 items it takes 11 seconds to show. I first suspected the loading of the records from SQLite (I preload the first 200), so I cut it to only 50. However, no matter if I preload only 1 or 500, the time is the same. The view is made with IB and everything is opaque. I have run out of ideas on how to locate the problem. I ran the Instruments tool but don't know what to look for.

    Also, when the user selects a cell in the previous UITable, no visual feedback is shown for a while (i.e. the cell does not turn selected), so he thinks he hasn't selected it and tries several times. This is related to the same problem. What to do?

    NOTE:

        - The problem only occurs on the actual device: an iPod Touch, 2nd generation
        - Using fmdb as the SQLite API
        - Doing the caching in viewDidLoad
        - Using NSDictionary for the caching
        - Using an NSAutoreleasePool for the caching part
        - Only caching the row ID & max 4 fields necessary to show the cell data
        - UIView made with Interface Builder, SDK 2.2.1
        - Instruments says I use 2.5 MB on the device

  • where should this check logic go?

    - by Benny
    I have a function that draws a small image on a graphics object:

        private void DrawSmallImage(Graphics g)
        {
            if (this.SmallImage == null) return;
            var smallPicHeight = this.Height / 5;
            var x = this.ClientSize.Width - smallPicHeight;
            var y = this.ClientSize.Height - smallPicHeight;
            g.DrawImage(this.SmallImage, x, y, smallPicHeight, smallPicHeight);
        }

    Should the check if (this.SmallImage == null) return; be in the function DrawSmallImage, or should it be in the caller? Which is better?

  • How to measure how long a function runs?

    - by rhose87
    I want to see how long a function runs, so I added a timer object to my form and came up with this code:

        private int counter = 0;

        // inside the button click handler:
        timer = new Timer();
        timer.Tick += new EventHandler(timer_Tick);
        timer.Start();
        Result result = new Result();
        result = new GeneticAlgorithms().TabuSearch(parametersTabu, functia);
        timer.Stop();

    and:

        private void timer_Tick(object sender, EventArgs e)
        {
            counter++;
            btnTabuSearch.Text = counter.ToString();
        }

    But this is not counting anything. Any ideas?
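    The underlying problem is that a WinForms Timer ticks on the UI thread, which is busy running TabuSearch, so no ticks are delivered until the call returns; in C# the usual fix is to wrap the call in System.Diagnostics.Stopwatch instead of counting ticks. The same elapsed-time technique, sketched in Java for consistency with the other examples added to this listing (doWork is a hypothetical stand-in for the long-running call):

        public class Timing {
            public static void main(String[] args) {
                long start = System.nanoTime();   // monotonic, unaffected by clock changes
                double result = doWork();         // the call being measured
                long elapsedMs = (System.nanoTime() - start) / 1_000_000;
                System.out.println("Took " + elapsedMs + " ms (result " + result + ")");
            }

            private static double doWork() {      // hypothetical long-running computation
                double x = 0;
                for (int i = 0; i < 50_000_000; i++) x += Math.sqrt(i);
                return x;
            }
        }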

  • Visual Studio 2010 - Is it slow for anyone else?

    - by AngryHacker
    I've read a lot about VS2010 being much more performant than VS2008. When I finally installed it, I found that it is, in fact, much slower (save for the Add References dialog). For instance, Silverlight projects take twice as long to load, the startup of the IDE itself is much slower, etc. Am I missing something here, or is it like this for everyone?

  • List circular group membership from Active Directory

    - by KAPes
    We have 40K+ groups in our Active Directory, and we are increasingly facing problems with circularly nested groups, which create problems for some applications. Does anyone know how to list the full route through which a circular group membership exists, e.g. G1 --> G2 --> G3 --> G4 --> G1? How do I list it?
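    Once the membership edges are exported from AD (for example, each group's memberOf attribute pulled via an LDAP query), printing every circular route is a standard depth-first search: a cycle exists exactly where an edge leads back to a group already on the current path. A minimal sketch in Java, assuming the group graph has already been loaded into a map (the export step itself is omitted):

        import java.util.*;

        public class GroupCycles {
            // edges: group -> groups it is a member of
            static void findCycles(Map<String, List<String>> edges) {
                Set<String> done = new HashSet<>();
                for (String g : edges.keySet())
                    dfs(g, edges, new LinkedHashSet<>(), done);
            }

            static void dfs(String g, Map<String, List<String>> edges,
                            LinkedHashSet<String> path, Set<String> done) {
                if (path.contains(g)) {               // back-edge: print the loop
                    List<String> route = new ArrayList<>(path);
                    route = route.subList(route.indexOf(g), route.size());
                    System.out.println(String.join(" --> ", route) + " --> " + g);
                    return;
                }
                if (!done.add(g)) return;             // already fully explored
                path.add(g);
                for (String next : edges.getOrDefault(g, List.of()))
                    dfs(next, edges, path, done);
                path.remove(g);
            }

            public static void main(String[] args) {
                Map<String, List<String>> m = new HashMap<>();
                m.put("G1", List.of("G2"));
                m.put("G2", List.of("G3"));
                m.put("G3", List.of("G4"));
                m.put("G4", List.of("G1"));
                findCycles(m);   // prints the G1 --> G2 --> G3 --> G4 --> G1 loop
            }
        }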

  • Project Euler #119 Make Faster

    - by gangqinlaohu
    Trying to solve Project Euler problem 119:

        The number 512 is interesting because it is equal to the sum of its digits raised to some power: 5 + 1 + 2 = 8, and 8^3 = 512. Another example of a number with this property is 614656 = 28^4. We shall define a_n to be the nth term of this sequence and insist that a number must contain at least two digits to have a sum. You are given that a_2 = 512 and a_10 = 614656. Find a_30.

    Question: is there a more efficient way to find the answer than just checking every number until a_30 is found? My code:

        int currentNum = 0;
        long value = 0;
        for (long a = 11; currentNum != 30; a++) { // maybe a++ is inefficient
            int test = Util.sumDigits(a);
            if (isPower(a, test)) {
                currentNum++;
                value = a;
                System.out.println(value + ":" + currentNum);
            }
        }
        System.out.println(value);

    isPower checks if a is a power of test. Util.sumDigits:

        public static int sumDigits(long n) {
            int sum = 0;
            String s = "" + n;
            while (!s.equals("")) {
                sum += Integer.parseInt("" + s.charAt(0));
                s = s.substring(1);
            }
            return sum;
        }

    The program has been running for about 30 minutes (there might be overflow on the long). Output (so far):

        81:1
        512:2
        2401:3
        4913:4
        5832:5
        17576:6
        19683:7
        234256:8
        390625:9
        614656:10
        1679616:11
        17210368:12
        34012224:13
        52521875:14
        60466176:15
        205962976:16
        612220032:17
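    There is a much faster approach than scanning every integer: instead of asking "is this number a power of its digit sum?", enumerate the candidates directly as base^exponent, keep the ones whose digit sum equals the base, sort, and take the 30th. The digit sums involved are small, so a couple of hundred bases and a few dozen exponents comfortably cover the answer. A sketch under those assumed bounds:

        import java.math.BigInteger;
        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.List;

        public class Euler119 {
            static int digitSum(BigInteger n) {
                int s = 0;
                for (char c : n.toString().toCharArray()) s += c - '0';
                return s;
            }

            public static void main(String[] args) {
                List<BigInteger> hits = new ArrayList<>();
                // Enumerate base^exp instead of testing every integer; BigInteger
                // sidesteps the long overflow suspected in the question.
                for (int base = 2; base <= 200; base++) {
                    BigInteger b = BigInteger.valueOf(base);
                    BigInteger p = b.multiply(b);              // base^2
                    for (int exp = 2; exp <= 50; exp++) {
                        // "at least two digits" rules out single-digit candidates
                        if (p.compareTo(BigInteger.TEN) >= 0 && digitSum(p) == base)
                            hits.add(p);
                        p = p.multiply(b);                     // next power of base
                    }
                }
                Collections.sort(hits);
                System.out.println(hits.get(29));              // a_30
            }
        }

    This runs in well under a second, versus half an hour and counting for the brute-force scan.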

  • Why is Dictionary.First() so slow?

    - by Rotsor
    Not a real question, because I already found out the answer, but still an interesting thing. I always thought that a hash table is the fastest associative container if you hash properly. However, the following code is terribly slow. It executes only about 1 million iterations and takes more than 2 minutes on a Core 2 CPU.

    The code does the following: it maintains a collection todo of items it needs to process. At each iteration it takes an item from this collection (doesn't matter which item), deletes it, processes it if it wasn't processed already (possibly adding more items to process), and repeats until there are no items to process. The culprit seems to be the Dictionary.Keys.First() operation. The question is: why is it slow?

        Stopwatch watch = new Stopwatch();
        watch.Start();
        HashSet<int> processed = new HashSet<int>();
        Dictionary<int, int> todo = new Dictionary<int, int>();
        todo.Add(1, 1);
        int iterations = 0;
        int limit = 500000;
        while (todo.Count > 0)
        {
            iterations++;
            var key = todo.Keys.First();
            var value = todo[key];
            todo.Remove(key);
            if (!processed.Contains(key))
            {
                processed.Add(key);
                // process item here
                if (key < limit)
                {
                    todo[key + 13] = value + 1;
                    todo[key + 7] = value + 1;
                } // doesn't matter much how
            }
        }
        Console.WriteLine("Iterations: {0}; Time: {1}.", iterations, watch.Elapsed);

    This results in:

        Iterations: 923007; Time: 00:02:09.8414388.

    Simply changing Dictionary to SortedDictionary yields:

        Iterations: 499976; Time: 00:00:00.4451514.

    300 times faster, with only 2 times fewer iterations. The same happens in Java with HashMap instead of Dictionary and keySet().iterator().next() instead of Keys.First().
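    The usual fix is to match the data structure to the access pattern: "give me any item" is a queue operation, not a dictionary lookup. The commonly cited reason First() is slow here is that the hash table's enumerator scans its entries array from the start, past the ever-growing prefix of slots freed by earlier removals, making each call O(n). The same worklist rewritten around a queue, sketched in Java (mirroring the HashMap variant mentioned at the end; duplicate keys may be enqueued, which the processed set tolerates):

        import java.util.ArrayDeque;
        import java.util.HashSet;
        import java.util.Set;

        public class Worklist {
            public static void main(String[] args) {
                long start = System.nanoTime();
                Set<Integer> processed = new HashSet<>();
                ArrayDeque<int[]> todo = new ArrayDeque<>();   // {key, value} pairs
                todo.add(new int[]{1, 1});
                int iterations = 0, limit = 500000;
                while (!todo.isEmpty()) {
                    iterations++;
                    int[] item = todo.poll();   // O(1): no scan over emptied slots
                    int key = item[0], value = item[1];
                    if (processed.add(key) && key < limit) {
                        todo.add(new int[]{key + 13, value + 1});
                        todo.add(new int[]{key + 7, value + 1});
                    }
                }
                System.out.printf("Iterations: %d; Time: %d ms%n",
                        iterations, (System.nanoTime() - start) / 1_000_000);
            }
        }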

  • Java JRE vs GCJ

    - by Martijn Courteaux
    Hi, I have these results from a speed test I wrote in Java:

        Java
        real 0m20.626s
        user 0m20.257s
        sys  0m0.244s

        GCJ
        real 3m10.567s
        user 3m5.168s
        sys  0m0.676s

    So, what is the point of GCJ then? With these results I'm sure I'm not going to compile with GCJ! I tested this on Linux; are the results on Windows maybe better than that? This was the code of the application:

        public static void main(String[] args) {
            String str = "";
            System.out.println("Start!!!");
            for (long i = 0; i < 5000000L; i++) {
                Math.sqrt((double) i);
                Math.pow((double) i, 2.56);
                long j = i * 745L;
                String string = new String(String.valueOf(i));
                string = string.concat(" kaka pipi"); // "Kaka pipi" is a kind of childish phrase in Dutch.
                string = new String(string.toUpperCase());
                if (i % 300 == 0) {
                    str = "";
                } else {
                    str += Long.toHexString(i);
                }
            }
            System.out.println("Stop!!!");
        }

  • Python: How do you find the CPU consumption for a piece of code?

    - by Yugal Jindle
    Background: I have a Django application. It works and responds pretty well under low load, but under high load, like 100 users/sec, it consumes 100% CPU and then slows down due to the lack of CPU.

    Problem: Profiling the application gives me the time taken by functions, and this time increases under high load. The time consumed may be due to complex calculations or to waiting for the CPU. So: how do I find the CPU cycles consumed by a piece of code? Reducing the CPU consumption will improve the response time. I might have written extremely efficient code and need to add more CPU power, OR I might have some stupid code taking the CPU and causing the slowdown. Any help is appreciated!

    Update: I am using JMeter to profile my webapp; it gives me a throughput of 2 requests/sec with 100 users. I get an average time of 36 seconds over 100 requests vs 1.25 sec for 1 request.

    More info:

        - Configuration: Nginx + uWSGI with 4 workers
        - No database used; responses come from a REST API
        - On the 1st hit the response of the REST API gets cached, so it doesn't make a difference
        - Using ujson for JSON parsing

    Curious to know: Python/Django is used by so many organizations for so many big sites, so there must be some high-end debug / memory-CPU analysis tools. All those I found were casual snippets of code that perform profiling.
