Search Results

Search found 60072 results on 2403 pages for 'application performance'.


  • Problem measuring the execution time of a code block N times

    - by Nazgulled
    EDIT: I just found my problem after writing this long post explaining every little detail... If someone can give me a good answer on what I'm doing wrong and how I can get the execution time in seconds (as a float with 5 decimal places or so), I'll mark that as accepted. Hint: the problem was in how I interpreted the clock_gettime() man page.

    Let's say I have a function named myOperation whose execution time I need to measure. To measure it, I'm using clock_gettime(), as recommended here in one of the comments. My teacher recommends that we measure it N times so we can report an average, standard deviation and median. He also recommends that we execute myOperation M times instead of just once. If myOperation is a very fast operation, measuring it M times lets us get a sense of the "real time" it takes, because the clock being used might not have the precision required to measure a single run. So whether to execute myOperation only once or M times really depends on whether the operation itself takes long enough for the precision of the clock we are using.

    I'm having trouble with that M-times execution. Increasing M decreases (a lot) the final average value, which doesn't make sense to me. Think of it like this: on average you take 3 to 5 seconds to travel from point A to point B. Then you go from A to B and back to A 5 times (which makes 10 trips, since A to B is the same as B to A) and you measure that. When you divide by 10, the average should be the same as a single trip from A to B, which is 3 to 5 seconds. This is what I want my code to do, but it's not working. If I keep increasing the number of round trips, the average gets lower and lower each time; it makes no sense to me.

    Enough theory, here's my code:

        #include <stdio.h>
        #include <time.h>

        #define MEASUREMENTS 1
        #define OPERATIONS   1

        typedef struct timespec TimeClock;

        TimeClock diffTimeClock(TimeClock start, TimeClock end) {
            TimeClock aux;

            if((end.tv_nsec - start.tv_nsec) < 0) {
                aux.tv_sec = end.tv_sec - start.tv_sec - 1;
                aux.tv_nsec = 1E9 + end.tv_nsec - start.tv_nsec;
            } else {
                aux.tv_sec = end.tv_sec - start.tv_sec;
                aux.tv_nsec = end.tv_nsec - start.tv_nsec;
            }

            return aux;
        }

        int main(void) {
            TimeClock sTime, eTime, dTime;
            int i, j;

            for(i = 0; i < MEASUREMENTS; i++) {
                printf(" » MEASURE %02d\n", i+1);

                clock_gettime(CLOCK_REALTIME, &sTime);
                for(j = 0; j < OPERATIONS; j++) {
                    myOperation();
                }
                clock_gettime(CLOCK_REALTIME, &eTime);

                dTime = diffTimeClock(sTime, eTime);

                printf(" - NSEC (TOTAL): %ld\n", dTime.tv_nsec);
                printf(" - NSEC (OP): %ld\n\n", dTime.tv_nsec / OPERATIONS);
            }

            return 0;
        }

    Notes: the diffTimeClock function above is from this blog post. I replaced my real operation with myOperation() because it doesn't make sense to post my real functions; I would have to post long blocks of code. You can easily fill in myOperation() with whatever you like if you wish to compile the code.

    As you can see, with OPERATIONS = 1 the results are:

        » MEASURE 01
        - NSEC (TOTAL): 27456580
        - NSEC (OP): 27456580

    For OPERATIONS = 100 the results are:

        » MEASURE 01
        - NSEC (TOTAL): 218929736
        - NSEC (OP): 2189297

    For OPERATIONS = 1000 the results are:

        » MEASURE 01
        - NSEC (TOTAL): 862834890
        - NSEC (OP): 862834

    For OPERATIONS = 10000 the results are:

        » MEASURE 01
        - NSEC (TOTAL): 574133641
        - NSEC (OP): 57413

    Now, I'm not a math wiz, far from it actually, but this doesn't make any sense to me whatsoever.
    I've already talked about this with a friend who's on this project with me and he can't understand the differences either. I don't understand why the value gets lower and lower when I increase OPERATIONS. The operation itself should take the same time (on average, of course, not the exact same time) no matter how many times I execute it. You could tell me that it actually depends on the operation itself and the data being read, and that some data could already be in the cache and so on, but I don't think that's the problem. In my case, myOperation reads 5000 lines of text from a CSV file, separates the values by ';' and inserts those values into a data structure. For each iteration, I destroy the data structure and initialize it again. Now that I think of it, I also suspect there's a problem measuring time with clock_gettime(); maybe I'm not using it right. I mean, look at the last example, where OPERATIONS = 10000. The total time it reported was 574133641 ns, which would be roughly 0.5 s; that's impossible, as it took a couple of minutes: I couldn't stand looking at the screen waiting and went to eat something.
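    A minimal sketch of the same N-measurements-of-M-operations scheme, written in Java for comparison: System.nanoTime() returns the whole elapsed interval as a single nanosecond count, so there is no seconds/nanoseconds split to combine (the class and the myOperation body below are stand-ins, not the poster's code):

        // Sketch of the N x M timing scheme. System.nanoTime() yields one
        // monotonic nanosecond count, so the total elapsed time (whole
        // seconds included) lives in a single long.
        public class TimingSketch {
            static final int MEASUREMENTS = 10;  // N
            static final int OPERATIONS = 100;   // M

            static void myOperation() {
                // stand-in for the operation under test
                Math.sqrt(12345.6789);
            }

            public static void main(String[] args) {
                for (int i = 0; i < MEASUREMENTS; i++) {
                    long start = System.nanoTime();
                    for (int j = 0; j < OPERATIONS; j++) {
                        myOperation();
                    }
                    // total elapsed time, not just a sub-second remainder
                    long elapsedNs = System.nanoTime() - start;
                    double perOpSeconds = (elapsedNs / 1e9) / OPERATIONS;
                    System.out.printf("measure %02d: %.5f s/op%n", i + 1, perOpSeconds);
                }
            }
        }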


  • Very slow Eclipse 4.2, how to make it more responsive?

    - by Laurent
    I'm using Eclipse PDT on a rather large PHP project and the IDE is almost unusable. It takes nearly 30 seconds to open a file, and other actions, like selecting a folder in the file explorer, editing some text, etc. are equally slow. I followed various instructions to speed it up but nothing seems to work. This is my current eclipse.ini file. Any idea how I can improve it?

        -startup
        plugins/org.eclipse.equinox.launcher_1.3.0.v20120522-1813.jar
        --launcher.library
        plugins/org.eclipse.equinox.launcher.win32.win32.x86_1.1.200.v20120522-1813
        -showsplash
        org.eclipse.platform
        --launcher.XXMaxPermSize
        256m
        --launcher.defaultAction
        openFile
        -vmargs
        -server
        -Dosgi.requiredJavaVersion=1.7
        -Xmn128m
        -Xms1024m
        -Xmx1024m
        -Xss2m
        -XX:PermSize=128m
        -XX:MaxPermSize=128m
        -XX:+UseParallelGC

    System: Eclipse 4.2.0, Windows 7, 4 GB RAM


  • Same query, different execution plans

    - by A..
    Hi, I am trying to find a solution for a problem that is driving me mad... I have a query that runs very fast on a QA server but is very slow in production. I realised that they have different execution plans... so I have tried recompiling, cleaning the execution plan cache, updating statistics, checking the collation types... but I still can't find what's going on... The databases the query runs against are exactly the same, and the SQL Servers also have the same configuration. Any new ideas would be much appreciated. Thanks, A.


  • How To Make a Moving News Bar in a Windows Forms Application without a Timer

    - by Ehab Sutan
    I'm making a desktop application in C# which contains a moving news bar of labels. I'm using a timer to move these labels, but the problem is that when I make the timer's interval low (1-10, for example) the application takes a very high percentage of CPU, and when I make it higher (200-500) the movement of the labels becomes intermittent and jerky, to the point that the user may not be able to read the news comfortably. ((More information)) It is a Windows Forms application. The way I move the labels is as follows: the news items from RSS feeds are represented as a group of LinkLabels, all of which are added to a flow-layout container; the timer moves the whole container. As far as I know, this is the best way to build the news bar. If you have a better idea or solution, please help.


  • Adding another view to a view-based application

    - by Joy
    Hi, I'm developing an iPhone application that stores some data in a database. That is working fine. But now I have a problem when I want to show the data, because to show the data I need to design another view, and I'm running into trouble when I try to add one. Is it possible to have multiple views in a view-based application, since my application is view-based? If yes, how do I do that? Please help me. Thanks in advance, Joy


  • Good .NET library for fast streaming / batching trigonometry (Atan)?

    - by Sean
    I need to call Atan on millions of values per second. Is there a good library to perform this operation in batch very fast? For example, a library that implements the low-level logic using something like SSE? I know there is support for this in OpenCL, but I would prefer to do this operation on the CPU; the target machine might not support OpenCL. I also looked into using OpenCV, but its accuracy for Atan angles is only ~0.3 degrees. I need accurate results.


  • How many layers are between my program and the hardware?

    - by sub
    I somehow have the feeling that modern systems, including runtime libraries, this exception handler and that built-in debugger, build up more and more layers between my (C++) programs and the CPU/rest of the hardware. I'm thinking of something like this:

        1 + 2
        OS top layer
        Runtime library/helper/error handler
        A hell of a lot of DLL modules
        OS kernel layer
        "Do you really want to run 1 + 2?" Windows popup (don't take this seriously)
        OS kernel layer
        Hardware abstraction
        Hardware
        Go through at least 100 miles of circuits
        Eventually arrive at the CPU
        ADD 1, 2
        Go all the way back to my program

    Nearly all the technical things here are simply wrong and in some random order, but you get my point, right? How much longer/shorter is this chain when I run a C++ program that calculates 1 + 2 at runtime on Windows? How about when I do this in an interpreter? (Python|Ruby|PHP) Is this chain really as dramatic in reality? Does Windows really try "not to stand in the way"? E.g.: a direct connection, my binary <-> hardware?


  • JavaScript clears a variable after there is no further reference to it

    - by Praveen Prasad
    It is said that JavaScript clears a variable from memory after it is last referenced. Just for the sake of this question, I created a JS file with only one variable:

        // file start
        // variable defined
        var a = ["Hello"];
        // reference to that variable
        alert(a[0]);
        // file end

    There is no further reference to that variable, so I would expect JavaScript to clear variable 'a'. Now I just loaded this page, then opened Firebug and ran this code:

        alert(a[0]);

    It alerts the value of the variable. If the statement "JavaScript clears a variable after there is no further reference to it" is true, how come alert() shows its value? Is it because all variables defined in the global context become properties of the window object, and since the window object still exists even after the file has executed, so do its properties?


  • devArt's dotConnect for Oracle vs. ODP.net/OCI performance

    - by Sieg
    Does anybody have any experience moving from ODP.net to devArt's dotConnect for Oracle? Some initial testing is showing Direct Connect in 64-bit dotConnect running 30% slower at times than our original 32-bit ODP.net/OCI solution. I'm trying to determine whether that's normal or whether something may be wrong in my testing approach. Thanks!


  • PL/SQL profiler missing data

    - by user289429
    We are using the PL/SQL profiler to collect metrics. We noticed that in one environment the plsql_profiler_runs table is populated with the total execution time, but the finer details that get collected in the plsql_profiler_data table are missing. Any idea why this would be happening? We do call dbms_profiler.flush_data() before stopping the profiler, and we have seen this work fine in another environment.


  • What's holding up my PHP script?

    - by gAMBOOKa
    We've got a PHP crawler running on our web server. When the crawler is running, there are no CPU, memory or network bandwidth spikes; everything is normal. But our website (also PHP), hosted on the same server, stops responding. Basically the crawler blocks any other PHP script from running. What could be the problem? EDIT: fsockopen is being used to download files in the crawler!


  • Grails application hogging too much memory

    - by RN
    Tomcat 5.5.x and 6.0.x; Grails 1.6.x; Java 1.6.x; OS CentOS 5.x (64-bit); VPS server with 384M of memory.

        export JAVA_OPTS='-Xms128M -Xmx512M -XX:MaxPermSize=1024m'

    I created a blank Grails application, i.e. simply by giving the command grails create-app, and then WARed it. I am running Tomcat on a VPS server. When I simply start the Tomcat server, with no apps deployed, free memory is about 236M and used memory is about 156M. When I deploy my "blank" application, memory consumption spikes to 360M, and finally the Tomcat instance is killed as soon as it uses up all free memory. As you have seen, my app is as light as it can be, so I am not sure why the memory consumption is as high as it is. I am actually troubleshooting a real application, but have narrowed it down to this scenario, which is easier to share and explain.


  • How to handle large dataset with JPA (or at least with Hibernate)?

    - by Roman
    I need to make my web app work with really huge datasets. At the moment I get either an OutOfMemoryException or output that takes 1-2 minutes to generate. Let's put it simply and suppose we have 2 tables in the DB: Worker and WorkLog, with about 1,000 rows in the first one and 10,000,000 rows in the second one. The latter table has several fields, including 'workerId' and 'hoursWorked' among others. What we need is: count the total hours worked by each user; and the list of work periods for each user. The most straightforward approach (IMO) for each task in plain SQL is:

        1) select Worker.name, sum(hoursWorked) from Worker, WorkLog
           where Worker.id = WorkLog.workerId
           group by Worker.name;
           // results of this query should be transformed to Multimap<Worker, Long>

        2) select Worker.name, WorkLog.start, WorkLog.hoursWorked from Worker, WorkLog
           where Worker.id = WorkLog.workerId;
           // results of this query should be transformed to Multimap<Worker, Period>
           // if it were JDBC, it would be vital to call
           // resultSet.setFetchSize(someSmallNumber), ~100

    So, I have two questions: how do I implement each of my approaches with JPA (or at least with Hibernate); and how would you handle this problem (with JPA or Hibernate, of course)?
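    A minimal sketch of how the two queries might look in JPA, assuming a mapped Worker.workLogs association (the association, class and field names here are assumptions drawn from the question, not a definitive implementation): push the aggregation into JPQL so only ~1,000 rows come back, and page the detail query instead of materializing all 10,000,000 rows at once.

        import javax.persistence.EntityManager;
        import javax.persistence.TypedQuery;
        import java.util.List;

        public class WorkLogQueries {
            // 1) Aggregate in the database: returns ~1,000 (name, totalHours)
            //    pairs without ever materializing the WorkLog rows as entities.
            public List<Object[]> totalHoursPerWorker(EntityManager em) {
                return em.createQuery(
                        "select w.name, sum(l.hoursWorked) " +
                        "from Worker w join w.workLogs l group by w.name",
                        Object[].class).getResultList();
            }

            // 2) Page through the detail rows instead of loading them all.
            public List<Object[]> workPeriodsPage(EntityManager em, int page, int pageSize) {
                TypedQuery<Object[]> q = em.createQuery(
                        "select w.name, l.start, l.hoursWorked " +
                        "from Worker w join w.workLogs l order by w.name, l.start",
                        Object[].class);
                q.setFirstResult(page * pageSize);  // offset
                q.setMaxResults(pageSize);          // limit
                return q.getResultList();
            }
        }

    With plain Hibernate, a forward-only scroll (session.createQuery(...).scroll(ScrollMode.FORWARD_ONLY)) combined with periodic session.clear() gives the same streaming effect as the JDBC fetch-size note above.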


  • How to get a count of ManagementObjects (WMI results) without enumerating through the collection in .NET

    - by Mark
    When querying for a large amount of data through WMI (say the Windows event log, Win32_NTLogEvent), it is very useful to know what kind of numbers you are getting yourself into before downloading all the content. Is there a way to do this? From what I know, there is no "SELECT COUNT(*) FROM Win32_NTLogEvent" in WQL. From what I know, the Count property of ManagementObjectCollection actually enumerates through all the results, whether you have the Rewindable property set to true or false. If it cannot be done in .NET, can it be done by directly using the underlying IWbem objects? Thanks


  • Anyone using IronPython in a production application?

    - by Scott P
    I've been toying with the idea of adding IronPython for extending a scientific application I support. Is this a good or a horrible idea? Are there any good examples of IronPython being used in a production application? I've seen Resolver, which is kind of cute. Are there any other apps out there? What I don't get is this: is it any easier to use IronPython than to just use something like CodeDOM to create script-like extensibility in your application? Anyone have horror stories, or tales of glorious success with IronPython/IronRuby?


  • Where should the partitioning column go in the primary key on SQL Server?

    - by Bialecki
    Using SQL Server 2005 and 2008. I've got a potentially very large table (potentially hundreds of millions of rows) consisting of the following columns:

        CREATE TABLE (
            date SMALLDATETIME,
            id BIGINT,
            value FLOAT
        )

    which is being partitioned on the date column in daily partitions. The question then is: should the primary key be on (date, id) or (id, date)? I can imagine that SQL Server is smart enough to know that it's already partitioning on date and therefore, if I'm always querying for whole chunks of days, I can have it second in the primary key. Or I can imagine that SQL Server will need that column to be first in the primary key to get the benefit of partitioning. Can anyone lend some insight into which way the table should be keyed?


  • Detecting Connection Speed / Bandwidth in .net/WCF

    - by Mystagogue
    I'm writing both client and server code using WCF, where I need to know the "perceived" bandwidth of traffic between the client and server. I could use ping statistics to gather this information separately, but I wonder if there is a way to configure the channel stack in WCF so that the same statistics can be gathered simultaneously while performing my web service invocations. This would be particularly useful in cases where ICMP is disabled (e.g. ping won't work). In short, while making my regular business-related web service calls (REST calls to be precise), is there a way to collect connection speed data implicitly? Certainly I could time the web service round trip, compared to the size of data used in the round-trip, to give me an idea of throughput - but I won't know how much of that perceived bandwidth was network related, or simply due to server-processing latency. I could perhaps solve that by having the server send back a time delta, representing server latency, so that the client can compute the actual network traffic time. If a more sophisticated approach is not available, that might be my answer...


  • Getting "on the wire" Size of Messages in WCF

    - by Mystagogue
    While I'm making SOAP or REST invocations to WCF, I'd like to have the channel stack on either end (client and server) record the on-the-wire size of the data received. So I'm guessing I need to add a custom behavior to the channel stack on either side. That is, on the server side I'd record the IP-header advertised size that was received. On the client side I'd record the IP-header advertised size that was returned from the server. But this presupposes that this information is visible to a custom WCF behavior at the channel stack level. Perhaps it is only visible at the level of ASP.NET (at a layer beneath WCF)? In short, does anyone have any further insight on if and how this information is accessible? I must qualify that this "size" data will be collected in a production environment, as part of regular business logic calls. This question is related to my earlier bandwidth question.


  • Efficient data structure design

    - by Sway
    Hi there, I need to match a series of user-entered words against a large dictionary of words (to ensure the entered value exists). So if the user entered "orange", it should match the entry "orange" in the dictionary. Now the catch is that the user can also enter a wildcard or a series of wildcard characters, say "or__ge", which would also match "orange". The key requirements are:

        * this should be as fast as possible
        * use the smallest amount of memory to achieve it

    If the word list were small, I could use a string containing all the words and use regular expressions. However, given that the word list could contain potentially hundreds of thousands of entries, I'm assuming this wouldn't work. So is some sort of 'tree' the way to go for this...? Any thoughts or suggestions on this would be totally appreciated! Thanks in advance, Matt
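    A tree (specifically a trie) is indeed the usual fit here. Below is a minimal sketch in Java, assuming '_' is the wildcard character as in the example: exact characters follow a single branch, and a wildcard fans out across every child at that depth.

        import java.util.HashMap;
        import java.util.Map;

        // Minimal trie sketch: exact lookups walk one path; a wildcard
        // ('_' here, by assumption) branches into every child at that depth.
        class Trie {
            private static class Node {
                Map<Character, Node> children = new HashMap<>();
                boolean isWord;
            }

            private final Node root = new Node();

            void insert(String word) {
                Node n = root;
                for (char c : word.toCharArray()) {
                    n = n.children.computeIfAbsent(c, k -> new Node());
                }
                n.isWord = true;
            }

            boolean matches(String pattern) {
                return matches(root, pattern, 0);
            }

            private boolean matches(Node n, String pattern, int i) {
                if (i == pattern.length()) return n.isWord;
                char c = pattern.charAt(i);
                if (c == '_') {                      // wildcard: try every branch
                    for (Node child : n.children.values()) {
                        if (matches(child, pattern, i + 1)) return true;
                    }
                    return false;
                }
                Node child = n.children.get(c);      // literal: follow one branch
                return child != null && matches(child, pattern, i + 1);
            }
        }

    Lookups cost O(pattern length) until a wildcard is hit and only fan out at wildcard positions; memory can be tightened further by replacing the HashMap with a small array indexed by letter.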


  • Increasing the speed of a web service - how to

    - by Koran
    Hi, our client-server product uses XML over HTTP as the protocol between client and server: the client sends a GET/POST query to the web server and the server responds with XML. The server is written using Django. The server has to be on the web because there are many clients across the world using it. The server code uses extensive memoization and there are very few DB queries: most requests involve no DB queries at all, and some involve at most one. The biggest problem is speed. Every query takes close to 5 seconds to answer, and the data returned is small, in the range of 4-6 KB. What are the mechanisms for improving the speed of the web service? Is this the usual way of writing a client-server system? Are there other technologies we are missing out on? Thank you, K


  • Why is insertion into my tree faster on sorted input than random input?

    - by Juliet
    Now I've always heard binary search trees are faster to build from randomly selected data than ordered data, simply because ordered data requires explicit rebalancing to keep the tree height at a minimum. Recently I implemented an immutable treap, a special kind of binary search tree which uses randomization to keep itself relatively balanced. In contrast to what I expected, I found I can consistently build a treap about 2x faster and generally better balanced from ordered data than unordered data -- and I have no idea why. Here's my treap implementation: http://pastebin.com/VAfSJRwZ

    And here's a test program:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Diagnostics;

        namespace ConsoleApplication1
        {
            class Program
            {
                static Random rnd = new Random();
                const int ITERATION_COUNT = 20;

                static void Main(string[] args)
                {
                    List<double> rndTimes = new List<double>();
                    List<double> orderedTimes = new List<double>();

                    rndTimes.Add(TimeIt(50, RandomInsert));
                    rndTimes.Add(TimeIt(100, RandomInsert));
                    rndTimes.Add(TimeIt(200, RandomInsert));
                    rndTimes.Add(TimeIt(400, RandomInsert));
                    rndTimes.Add(TimeIt(800, RandomInsert));
                    rndTimes.Add(TimeIt(1000, RandomInsert));
                    rndTimes.Add(TimeIt(2000, RandomInsert));
                    rndTimes.Add(TimeIt(4000, RandomInsert));
                    rndTimes.Add(TimeIt(8000, RandomInsert));
                    rndTimes.Add(TimeIt(16000, RandomInsert));
                    rndTimes.Add(TimeIt(32000, RandomInsert));
                    rndTimes.Add(TimeIt(64000, RandomInsert));
                    rndTimes.Add(TimeIt(128000, RandomInsert));
                    string rndTimesAsString = string.Join("\n", rndTimes.Select(x => x.ToString()).ToArray());

                    orderedTimes.Add(TimeIt(50, OrderedInsert));
                    orderedTimes.Add(TimeIt(100, OrderedInsert));
                    orderedTimes.Add(TimeIt(200, OrderedInsert));
                    orderedTimes.Add(TimeIt(400, OrderedInsert));
                    orderedTimes.Add(TimeIt(800, OrderedInsert));
                    orderedTimes.Add(TimeIt(1000, OrderedInsert));
                    orderedTimes.Add(TimeIt(2000, OrderedInsert));
                    orderedTimes.Add(TimeIt(4000, OrderedInsert));
                    orderedTimes.Add(TimeIt(8000, OrderedInsert));
                    orderedTimes.Add(TimeIt(16000, OrderedInsert));
                    orderedTimes.Add(TimeIt(32000, OrderedInsert));
                    orderedTimes.Add(TimeIt(64000, OrderedInsert));
                    orderedTimes.Add(TimeIt(128000, OrderedInsert));
                    string orderedTimesAsString = string.Join("\n", orderedTimes.Select(x => x.ToString()).ToArray());

                    Console.WriteLine("Done");
                }

                static double TimeIt(int insertCount, Action<int> f)
                {
                    Console.WriteLine("TimeIt({0}, {1})", insertCount, f.Method.Name);
                    List<double> times = new List<double>();
                    for (int i = 0; i < ITERATION_COUNT; i++)
                    {
                        Stopwatch sw = Stopwatch.StartNew();
                        f(insertCount);
                        sw.Stop();
                        times.Add(sw.Elapsed.TotalMilliseconds);
                    }
                    return times.Average();
                }

                static void RandomInsert(int insertCount)
                {
                    Treap<double> tree = new Treap<double>((x, y) => x.CompareTo(y));
                    for (int i = 0; i < insertCount; i++)
                    {
                        tree = tree.Insert(rnd.NextDouble());
                    }
                }

                static void OrderedInsert(int insertCount)
                {
                    Treap<double> tree = new Treap<double>((x, y) => x.CompareTo(y));
                    for (int i = 0; i < insertCount; i++)
                    {
                        tree = tree.Insert(i + rnd.NextDouble());
                    }
                }
            }
        }

    And here's a chart comparing random and ordered insertion times in milliseconds:

        Insertions   Random        Ordered       RandomTime / OrderedTime
        50           1.031665      0.261585      3.94
        100          0.544345      1.377155      0.4
        200          1.268320      0.734570      1.73
        400          2.765555      1.639150      1.69
        800          6.089700      3.558350      1.71
        1000         7.855150      4.704190      1.67
        2000         17.852000     12.554065     1.42
        4000         40.157340     22.474445     1.79
        8000         88.375430     48.364265     1.83
        16000        197.524000    109.082200    1.81
        32000        459.277050    238.154405    1.93
        64000        1055.508875   512.020310    2.06
        128000       2481.694230   1107.980425   2.24

    I don't see anything in the code which makes ordered input asymptotically faster than unordered input, so I'm at a loss to explain the difference. Why is it so much faster to build a treap from ordered input than random input?
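    For readers unfamiliar with the structure, here is a minimal sketch in Java of the invariant the question relies on (a mutable version for brevity, unlike the question's immutable implementation): keys obey BST order, randomly drawn priorities obey max-heap order, and a rotation restores the heap property after each insert.

        import java.util.Random;

        // Treap sketch: BST on keys, max-heap on random priorities.
        class TreapSketch {
            static final Random RND = new Random();

            static class Node {
                final double key;
                final int priority;
                Node left, right;
                Node(double key) { this.key = key; this.priority = RND.nextInt(); }
            }

            static Node rotateRight(Node n) {
                Node l = n.left; n.left = l.right; l.right = n; return l;
            }

            static Node rotateLeft(Node n) {
                Node r = n.right; n.right = r.left; r.left = n; return r;
            }

            static Node insert(Node n, double key) {
                if (n == null) return new Node(key);
                if (key < n.key) {
                    n.left = insert(n.left, key);
                    // child's priority beats ours: rotate to restore heap order
                    if (n.left.priority > n.priority) n = rotateRight(n);
                } else {
                    n.right = insert(n.right, key);
                    if (n.right.priority > n.priority) n = rotateLeft(n);
                }
                return n;
            }
        }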


  • Java iteration reading & parsing

    - by Patrick Lorio
    I have a log file that I am reading into a string:

        public static String Read(String path) throws IOException {
            StringBuilder sb = new StringBuilder();
            InputStream in = new BufferedInputStream(new FileInputStream(path));
            int r;
            while ((r = in.read()) != -1) {
                sb.append((char) r); // cast, or the int is appended as digits
            }
            return sb.toString();
        }

    Then I have a parser that iterates over the entire string once:

        void Parse() throws IOException {
            String con = Read("log.txt");
            for (int i = 0; i < con.length(); i++) {
                /* parsing action */
            }
        }

    This is a huge waste of CPU cycles. I loop over all the content in Read, then I loop over all the content again in Parse. I could just place the /* parsing action */ under the while loop in the Read method, which would be fine, but I don't want to copy the same code all over the place. How can I parse the file in one iteration over the contents and still have separate methods for parsing and reading? In C# I understand there is some sort of yield return thing, but I'm stuck with Java. What are my options in Java?
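    One idiomatic single-pass option is to pass the parsing action into the read method as a callback, so reading and parsing stay in separate methods but the file is traversed once. A minimal sketch, with the interface and names being illustrative:

        import java.io.BufferedInputStream;
        import java.io.FileInputStream;
        import java.io.IOException;
        import java.io.InputStream;

        public class StreamingParse {
            // Java has no 'yield return', but a callback gives the same shape:
            // reading stays in one method, parsing in another, one pass overall.
            interface CharHandler {
                void handle(char c);
            }

            static void read(String path, CharHandler handler) throws IOException {
                try (InputStream in = new BufferedInputStream(new FileInputStream(path))) {
                    int r;
                    while ((r = in.read()) != -1) {
                        handler.handle((char) r);  // parse as we read; no full buffer
                    }
                }
            }

            public static void main(String[] args) throws IOException {
                read("log.txt", c -> {
                    /* parsing action, one character at a time */
                });
            }
        }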


  • NHibernate slow mapping

    - by Rob A
    My question is: what can I do to determine the cause of the slowness, or what can I do to speed it up without knowing the exact cause? I am running a simple query and it appears that the mapping back to entities is taking forever. The result set is 350 rows, which is not much data in my opinion.

        IRepository repo = ObjectFactory.GetInstance<IRepository>();
        var q = repo.Query<Order>(item => item.Ordereddate > DateTime.Now.AddDays(-40));
        foreach (var order in q)
        {
            Console.WriteLine(order.TransactionNumber);
        }

    The profiler is telling me the query is executing in 7 ms / 35257 ms; I am assuming the former is the actual response from the DB and the latter is the time it takes NHibernate to do its magic. 35 seconds is too long. This is a simple mapping: one table, nested components, using the fluent interface for the mappings. I just start up a simple console app and run the one query; the slowness is measured after the SessionFactory is initialized, there should only be one session, and I am not using a transaction. Thanks

