Search Results

Search found 13206 results on 529 pages for 'performance measurement'.


  • General question: Filesystem or database?

    - by poeschlorn
    Hey guys, I want to create a small document management system. There are several users who store their files; each uploaded file carries the information of which user uploaded it, plus the document content itself. A view displays all files of ONE specific user, ordered by date. What would be better: 1) giving the documents a name or XML metadata that contains the date and user (and iterating through the files to read the metadata), or 2) giving the files a random/unique name and storing the metadata in a DB, something like: date | user | filename? What would you say, and why? The programming language is Java and the DB is MySQL.
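
    As a hedged illustration of option 2 (sketched in Python with SQLite standing in for Java/MySQL; the table, directory, and function names are invented), files get opaque names on disk while everything queryable lives in one table:

        import datetime
        import pathlib
        import shutil
        import sqlite3
        import uuid

        STORE = pathlib.Path("docstore")   # hypothetical storage directory
        STORE.mkdir(exist_ok=True)

        con = sqlite3.connect("docs.db")
        con.execute("""CREATE TABLE IF NOT EXISTS documents (
                           id INTEGER PRIMARY KEY,
                           user TEXT NOT NULL,
                           uploaded_at TEXT NOT NULL,
                           filename TEXT NOT NULL)""")

        def store_file(user, src_path):
            # Opaque, collision-free name on disk; all queryable data goes in the DB.
            name = uuid.uuid4().hex
            shutil.copyfile(src_path, STORE / name)
            con.execute("INSERT INTO documents (user, uploaded_at, filename) VALUES (?, ?, ?)",
                        (user, datetime.datetime.now().isoformat(), name))
            con.commit()

        def files_for(user):
            # The per-user, date-ordered view becomes a single query.
            return con.execute("SELECT filename, uploaded_at FROM documents "
                               "WHERE user = ? ORDER BY uploaded_at DESC",
                               (user,)).fetchall()

    The per-user, date-ordered view then runs against an index on (user, uploaded_at), with no need to open or parse any files.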

    Read the article

  • PHP – Slow String Manipulation

    - by Simon Roberts
    I have some very large data files, and for business reasons I have to do extensive string manipulation (replacing characters and strings). This is unavoidable. The number of replacements runs into hundreds of thousands. It's taking longer than I would like. PHP is generally very quick, but I'm doing so many of these string manipulations that it's slowing down, and script execution is running into minutes. This is a pain because the script is run frequently. I've done some testing and found that str_replace is fastest, followed by strstr, followed by preg_replace. I've also tried individual str_replace statements as well as constructing arrays of patterns and replacements. I'm toying with the idea of isolating the string manipulation operations and writing them in a different language, but I don't want to invest time in that option only to find that the improvements are negligible. Plus, I only know Perl, PHP and COBOL, so for any other language I would have to learn it first. I'm wondering how other people have approached similar problems? I have searched, and I don't believe this duplicates any existing questions.
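
    A hedged aside (Python rather than PHP, and the input and replacement set below are synthetic): with very many single-character replacements, one combined translation pass usually beats one full pass over the data per pattern; this is the same reason PHP's strtr, given an array of pairs, is often suggested over repeated str_replace calls. A sketch of how to compare the two shapes:

        import random
        import string
        import time

        # Synthetic input and replacement set; sizes are invented for the benchmark.
        data = "".join(random.choice(string.ascii_letters) for _ in range(2_000_000))
        pairs = {c: c.upper() for c in string.ascii_lowercase}

        t0 = time.perf_counter()
        out = data
        for old, new in pairs.items():     # one full pass per pattern, like repeated str_replace
            out = out.replace(old, new)
        t1 = time.perf_counter()

        table = str.maketrans(pairs)       # one combined pass, analogous to PHP's strtr
        out2 = data.translate(table)
        t2 = time.perf_counter()

        assert out == out2
        print(f"per-pattern: {t1 - t0:.3f}s  single-pass: {t2 - t1:.3f}s")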

    Read the article

  • javascript and css loadings

    - by Mike
    I was wondering: if I have, let's say, 6 JavaScript includes on a page and 4-5 CSS includes as well, does it actually make the page load faster if I concatenate them into one file (or perhaps two) instead of having a bunch of separate includes?

    Read the article

  • Which method of adding items to the ASP.NET Dictionary class is more efficient?

    - by ahmd0
    I'm converting a comma-separated list of strings into a dictionary using C# in ASP.NET (omitting any duplicates):

        // Just a random string for the sake of this example
        string str = "1,2, 4, 2, 4, item 3,item2, item 3";

    and I was wondering which method is more efficient?

    1 - Using a try/catch block:

        Dictionary<string, string> dic = new Dictionary<string, string>();
        string[] strs = str.Split(',');
        foreach (string s in strs)
        {
            if (!string.IsNullOrWhiteSpace(s))
            {
                try
                {
                    string s2 = s.Trim();
                    dic.Add(s2, s2);   // throws on a duplicate key, swallowed below
                }
                catch { }
            }
        }

    2 - Or using the ContainsKey() method:

        Dictionary<string, string> dic = new Dictionary<string, string>();   // declaration added for completeness
        string[] strs = str.Split(',');
        foreach (string s in strs)
        {
            if (!string.IsNullOrWhiteSpace(s))
            {
                string s2 = s.Trim();
                if (!dic.ContainsKey(s2))
                    dic.Add(s2, s2);
            }
        }
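
    The same trade-off can be sketched outside C# (Python here, purely as an illustration; the input is synthetic): raising on duplicates is cheap when duplicates are rare and expensive when they are common, while a containment check costs one extra lookup on every add:

        import time

        items = [str(i % 1000) for i in range(1_000_000)]   # synthetic: duplicates are common

        def add_with_exceptions(items):
            d = {}
            for s in items:
                try:
                    if s in d:                # plain dicts overwrite silently, so raise
                        raise KeyError(s)     # to mimic Dictionary.Add on a duplicate key
                    d[s] = s
                except KeyError:
                    pass
            return d

        def add_with_check(items):
            d = {}
            for s in items:
                if s not in d:                # the ContainsKey-before-Add shape
                    d[s] = s
            return d

        for fn in (add_with_exceptions, add_with_check):
            t0 = time.perf_counter()
            fn(items)
            print(fn.__name__, f"{time.perf_counter() - t0:.3f}s")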

    Read the article

  • Most efficient way of checking if Date object and Calendar object are in the same month

    - by Indigenuity
    I am working on a project that will run many thousands of comparisons between dates to see if they are in the same month, and I am wondering what the most efficient way of doing it would be. This isn't exactly what my code looks like, but here's the gist:

        List<Date> dates = getABunchOfDates();
        Calendar month = Calendar.getInstance();
        for (int i = 0; i < numMonths; i++) {
            for (Date date : dates) {
                if (sameMonth(month, date))
                    ; // .. doSomething
            }
            month.add(Calendar.MONTH, -1);
        }

    Creating a new Calendar object for every date seems like pretty hefty overhead when this comparison will happen thousands of times, so I kind of want to cheat a bit and use the deprecated methods Date.getMonth() and Date.getYear():

        public static boolean sameMonth(Calendar month, Date date) {
            // Note: Date.getYear() is offset by 1900, so adjust before comparing;
            // Calendar.MONTH and Date.getMonth() are both zero-based.
            return month.get(Calendar.YEAR) - 1900 == date.getYear()
                && month.get(Calendar.MONTH) == date.getMonth();
        }

    I'm pretty close to just using this method, since it seems to be the fastest, but is there a faster way? And is this a foolish way, since the Date methods are deprecated? Note: this project will always run on Java 7.
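
    A language-neutral illustration of the usual dodge (sketched in Python; the data is synthetic): reduce every date to a cheap (year, month) key once, then group by key, so no per-comparison calendar objects are created at all:

        import collections
        import datetime

        dates = [datetime.date(2014, (i % 12) + 1, 1 + i % 28) for i in range(100_000)]  # synthetic

        # Reduce each date to a cheap (year, month) key once...
        by_month = collections.defaultdict(list)
        for d in dates:
            by_month[(d.year, d.month)].append(d)

        # ...so each month's pass is one dict lookup instead of a scan that
        # builds fresh calendar objects per comparison.
        print(len(by_month[(2014, 3)]), "dates fall in 2014-03")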

    Read the article

  • Objective-C, fastest way to show a sequence of images in UIImageView

    - by Almas Adilbek
    I have hundreds of images, which are the frames of one animation (24 images per second). Each image's size is 1024x690. My problem is that I need smooth animation, iterating through each frame in a UIImageView. I know I can use animationImages of UIImageView, but it crashes because of memory problems. Also, I can use imageView.image = [UIImage imageNamed:@""], which would cache each image so that the next repeat of the animation is smooth; but caching that many images crashes the app. Now I use imageView.image = [UIImage imageWithContentsOfFile:@""], which does not crash the app but doesn't make the animation smooth. Maybe there is a better way to animate frame images? Maybe I need to do some preparation in order to somehow achieve a better result. I need your advice. Thank you!

    Read the article

  • Fastest way to generate 1,000,000+ random numbers in Python

    - by Sandro
    I am currently writing an app in Python that needs to generate a large amount of random numbers, FAST. Currently I have a scheme going that uses numpy to generate all of the numbers in a giant batch (about ~500,000 at a time). While this seems to be faster than Python's built-in implementation, I still need it to go faster. Any ideas? I'm open to writing it in C and embedding it in the program, or doing whatever it takes. Constraints on the random numbers: a set of 7 numbers that can all have different bounds, e.g. [0-X1, 0-X2, 0-X3, 0-X4, 0-X5, 0-X6, 0-X7] (currently I am generating a list of 7 numbers with random values from [0-1), then multiplying by [X1..X7]); and a set of 13 numbers that all add up to 1 (currently just generating 13 numbers, then dividing by their sum). Any ideas? Would pre-calculating these numbers and storing them in a file make this faster? Thanks!
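
    A minimal numpy sketch of both constraints, vectorized across a whole batch at once (the batch size and bounds below are invented, and this uses numpy's newer Generator API rather than the legacy np.random functions):

        import numpy as np

        rng = np.random.default_rng()
        batch = 500_000

        # A set of 7 numbers with different upper bounds: scale uniform [0, 1) columns.
        bounds = np.array([10.0, 20.0, 5.0, 100.0, 3.0, 7.0, 50.0])   # stand-ins for X1..X7
        seven = rng.random((batch, 7)) * bounds                       # shape (batch, 7)

        # A set of 13 numbers that add up to 1: normalize each row by its sum.
        raw = rng.random((batch, 13))
        thirteen = raw / raw.sum(axis=1, keepdims=True)

        print(seven[0])
        print(thirteen[0].sum())   # 1.0, up to floating-point rounding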

    Read the article

  • Can I optimize this at all?

    - by Moshe
    I'm working on an iOS app, and I'm using the following code in one of my tables to return the number of rows in a particular section: return [[kSettings arrayForKey:@"views"] count]; Is there any other way to write that line of code so that it is more memory-efficient? EDIT: kSettings is [NSUserDefaults standardUserDefaults]. Is there any way to rewrite my line of code so that whatever memory it occupies is released sooner than it is released now?

    Read the article

  • Do table columns increase SELECT statement execution time?

    - by paokg4
    I have 2 tables with the same structure, the same rows, and the same data, but the first has more columns (fields). For example, I select the same 3 fields from both of them (SELECT a,b,c FROM mytable1 and then SELECT a,b,c FROM mytable2). I've tried running those queries on 100,000 records (for each table), but in the end I got the same execution time (0.0006 sec). Do you know if the number of columns (and, in the end, the fact that one table is bigger than the other) has anything to do with query execution time?

    Read the article

  • What to have in mind when building an AJAX-based webapp

    - by Industrial
    Hi everyone, we're in the first steps of what will be an AJAX-based webapp, where information and generated HTML will be sent back and forth with the help of JSON/POST techniques. We're able to get the data out quickly without putting too much load on the database, with the help of a cache layer that features memcached as well as a disc-based cache. Besides that, what's essential to have in mind when designing AJAX-heavy webapps? Thanks a lot,

    Read the article

  • SQL Database Schema Design for a Large 3-Billion-Relationship Database

    - by K-Bell
    Get your geek on. Can you solve this? I am designing a products database for SQL Server 2008 R2 Ed. (not Enterprise Ed.) that will be used to store custom product configurations for over 30,000 distinct products. The database will have up to 500 users at a time. Here is the design problem: each Product has a collection of Parts (up to 50 parts per product). So if I have 30,000 Products and each of them can have up to 50 Parts, that's 1.5 million distinct Product-to-Part relationships, or as an equation: 30,000 (Products) x 50 (Parts) = 1.5 million Product-to-Part records. And if each Part can have up to 2,000 finish options (a finish is a paint color; only one finish will be selected by a user at run-time), the 2,000 finish options I need to store are the allowed options for a specific part on a specific product. So if I have 1.5 million distinct product-to-part relationships/records, and each of those parts can have up to 2,000 finishes, that is 3 billion allowable product-to-part-to-finish relationships/records, or as an equation: 1.5 million (Parts) x 2,000 (Finishes) = 3 billion Product-to-Part-to-Finish records. How can I design this database so that I can execute fast and efficient queries for a specific product and return its list of Parts and all the allowable Finishes for each part, without 3 billion Product-to-Part-to-Finish records? Read time is more important than write time. Please post your thoughts/suggestions if you have experience with large databases. Thanks!

    Read the article

  • Treeview Slow in IE?!?!

    - by Mike
    I have a treeview with around 200 records that needs to be fully expanded at all times (so no loading on demand). It is inside an update panel with UpdateMode set to Conditional. There are other update panels on the page as well that are set to Conditional. Depending on user actions, the tree may need to be rebuilt by calling DataBind and updating the UpdatePanel. Everything works fine in Firefox; the longest postback is about 2 seconds. With IE I sometimes have to wait up to 30 seconds, and the action may have nothing to do with the tree; just changing a dropdown in its own UpdatePanel takes forever. I have considered that the size of the ViewState and the raw HTML generated may be causing the delay, but wouldn't that affect both browsers? Anyone have any ideas what is making it so slow in IE??? Thanks!

    Read the article

  • Profilers for ASP.Net Web Applications?

    - by Earlz
    I recently wanted to do some profiling on an ASP.NET project and was surprised to see that Visual Studio (at least) seems to lack a profiler. So my question is: what profiler do you use for ASP.NET? Are there any decent ones out there that are free? I've seen a few general .NET profilers, but have yet to see one that can be used with ASP.NET.

    Read the article

  • Measuring the CPU frequency scaling effect

    - by Bryan Fok
    Recently I have been trying to measure the effect of CPU frequency scaling. Is it accurate if I use this clock to measure it?

        template<std::intmax_t clock_freq>
        struct rdtsc_clock {
            typedef unsigned long long rep;
            typedef std::ratio<1, clock_freq> period;
            typedef std::chrono::duration<rep, period> duration;
            typedef std::chrono::time_point<rdtsc_clock> time_point;
            static const bool is_steady = true;

            static time_point now() noexcept {
                unsigned lo, hi;
                asm volatile("rdtsc" : "=a" (lo), "=d" (hi));
                return time_point(duration(static_cast<rep>(hi) << 32 | lo));
            }
        };

    Update: according to a comment on another post of mine, I believe rdtsc cannot be used to measure the effect of CPU frequency scaling, because the counter read by rdtsc is not affected by the CPU frequency. Am I right?

    Read the article

  • Time complexity of a powerset generating function

    - by Lirik
    I'm trying to figure out the time complexity of a function that I wrote (it generates a power set for a given string):

        public static HashSet<string> GeneratePowerSet(string input)
        {
            HashSet<string> powerSet = new HashSet<string>();

            if (string.IsNullOrEmpty(input))
                return powerSet;

            int powSetSize = (int)Math.Pow(2.0, (double)input.Length);

            // Start at 1 to skip the empty string case
            for (int i = 1; i < powSetSize; i++)
            {
                string str = Convert.ToString(i, 2);
                string pset = str;
                for (int k = str.Length; k < input.Length; k++)
                {
                    pset = "0" + pset;
                }

                string set = string.Empty;
                for (int j = 0; j < pset.Length; j++)
                {
                    if (pset[j] == '1')
                    {
                        set = string.Concat(set, input[j].ToString());
                    }
                }
                powerSet.Add(set);
            }
            return powerSet;
        }

    So my attempt is this: let the size of the input string be n. The outer for loop must iterate 2^n times (because the power set has 2^n elements), and the two inner loops together iterate up to 2*n times per element in the worst case.

    1. So Big-O would be O((2^n)*n) (since we drop the constant 2)... is that correct? And n*(2^n) is worse than n^2:

        if n = 4  then 4*(2^4)   = 64     while 4^2  = 16
        if n = 10 then 10*(2^10) = 10240  while 10^2 = 100

    2. Is there a faster way to generate a power set, or is this about optimal?
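
    For reference, a sketch of the same bit-mask idea (Python, not the poster's C#) that skips the padded binary-string construction; since the output alone is on the order of n*(2^n) characters in the worst case, O(n*(2^n)) is essentially optimal for materializing the full power set:

        def power_set(s):
            n = len(s)
            out = set()
            for mask in range(1, 1 << n):    # start at 1 to skip the empty string, as above
                # Take character j whenever bit j of the mask is set; no padded
                # binary string or base-2 conversion needed.
                out.add("".join(s[j] for j in range(n) if mask >> j & 1))
            return out

        print(sorted(power_set("abc")))      # ['a', 'ab', 'abc', 'ac', 'b', 'bc', 'c']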

    Read the article

  • SQL Profiler and Tuning Advisor for Reporting Services - what events should be selected?

    - by chris
    I've used the SQL Profiler to generate a trace file, and the Tuning Advisor to take that trace file and provide some recommendations on DB updates. However, the SQL Profiler doesn't seem to capture any of the queries when running against a Reporting Services server. I'm logging the defaults (SQL:BatchCompleted and SQL:BatchStarting, RPC:Completed, and Sessions - Existing Connections). What events should I be capturing in SQL Profiler in order to run the Tuning Advisor?

    Read the article

  • apache alias and .htaccess: trying to understand the configuration

    - by sushil bharwani
    On our local dev environment we had just one server, and to add far-future Expires and Cache-Control headers to static images we kept a .htaccess file in the root of the application; things worked fine. But in production we have multiple Apache servers with aliases to a code base on a different server. Here I am not sure where to keep the .htaccess file: should I keep it in the code base or on the individual Apache servers? And how can I write the same stuff that I have in the .htaccess file into httpd.conf?

    Read the article

  • Very simple Python functions spend a long time in the function itself and not in subfunctions

    - by John Salvatier
    I have spent many hours trying to figure out what is going on here. The function grad_logp in the code below is called many times in my program, and cProfile and RunSnakeRun (to visualize the results) reveal that grad_logp spends about .00004s 'locally' every call, not in any functions it calls, and the function n spends about .00006s locally every call. Together these two times make up about 30% of the program time that I care about. It doesn't seem like this is function call overhead, as other Python functions spend far less time 'locally', and merging grad_logp and n does not make my program faster; yet the operations these two functions do seem rather trivial. Does anyone have any suggestions on what might be happening? Have I done something obviously inefficient? Am I misunderstanding how cProfile works?

        def grad_logp(self, variable, calculation_set):
            p = params(self.p, self.parents)
            return self.n(variable, self.p)

        def n(self, variable, p):
            gradient = self.gg(variable, p)
            return np.reshape(gradient, np.shape(variable.value))

        def gg(self, variable, p):
            if variable is self:
                gradient = self._grad_logps['x'](x=self.value, **p)
            else:
                gradient = __builtin__.sum([self._pgradient(variable, parameter, value, p)
                                            for parameter, value in self.parents.iteritems()])
            return gradient
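
    For anyone reproducing this kind of measurement, a minimal cProfile recipe (the profiled function below is a stand-in, not the poster's code):

        import cProfile
        import io
        import pstats

        def work():
            return sum(i * i for i in range(200_000))   # stand-in for the real call graph

        pr = cProfile.Profile()
        pr.enable()
        for _ in range(50):
            work()
        pr.disable()

        s = io.StringIO()
        # 'tottime' is time spent inside a function excluding subcalls (the 'local'
        # time the question reasons about); 'cumtime' includes subcalls.
        pstats.Stats(pr, stream=s).sort_stats("tottime").print_stats(5)
        print(s.getvalue())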

    Read the article

  • Session Timeout and page response time

    - by Johnny5
    Hi, I'm load testing an ASP.NET app. The load test simulates 500 users doing searches on the site and browsing the results. I'm observing that the more I reduce the session timeout limit (in web.config), the better the page response time. For example, with the timeout at 10 minutes I get an average response time of 8.35 seconds; with the timeout at 3 minutes, the average response time for the same page is 3.98 seconds. Session state is stored 'InProc'. I suppose the memory used by the 'no longer used but still active' sessions may be the cause. But even if more memory is used when the timeout is at 10 minutes, there is still plenty of memory available (about 2.7 GB). Any ideas?

    Read the article

  • C++ Program performs better when piped

    - by ET1 Nerd
    I haven't done any programming in a decade. I wanted to get back into it, so I made this little pointless program as practice. The easiest way to describe what it does is with the output of my --help code block:

        ./prng_bench --help
        ./prng_bench: usage: ./prng_bench $N $B [$T]
        This program will generate an N digit base(B) random number until all N digits are the same.
        Once a repeating N digit base(B) number is found, the following statistics are displayed:
          -Decimal value of all N digits.
          -Time & number of tries taken to randomly find.
        Optionally, this process is repeated T times. When running multiple repititions, averages
        for all N digit base(B) numbers are displayed at the end, as well as total time and total tries.

    My "problem" is that when the problem is "easy", say a 3-digit base-10 number, and I have it do a large number of passes, the "total time" is less when piped to grep, i.e. command ; command | grep took:

        ./prng_bench 3 10 999999 ; ./prng_bench 3 10 999999 | grep took
        ....
        Pass# 999999: All 3 base(10) digits = 3 base(10). Time: 0.00005 secs. Tries: 23
        It took 191.86701 secs & 99947208 tries to find 999999 repeating 3 digit base(10) numbers.
        An average of 0.00019 secs & 99 tries was needed to find each one.
        It took 159.32355 secs & 99947208 tries to find 999999 repeating 3 digit base(10) numbers.

    If I run the same command many times without grep, the time is always VERY close. I'm using srand(1234) for now, to test. The code between my calls to clock_gettime() for start and stop does not involve any stream manipulation, which would obviously affect the time. I realize this is an exercise in futility, but I'd like to know why it behaves this way. Below is the heart of the program; here's a link to the full source on Dropbox if anybody wants to compile and test it (clock_gettime() requires -lrt): https://www.dropbox.com/s/6olqnnjf3unkm2m/prng_bench.cpp

        for (int pass_num = 1; pass_num <= passes; pass_num++) {   // executes $passes times
            clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &temp_time);
            start_time = timetodouble(temp_time);      // convert to double, store as start_time

            // Loop until the comparison loop runs to completion; count reps as 'tries'.
            for (i = 1, tries = 0; i != 0; tries++) {
                for (i = 0; i < Ndigits; i++)          // move forward through the array,
                    results[i] = (rand() % base);      // assigning each digit a random value in 'base'

                /* Debug lines: print every draw (a LOT of output); uncomment to enable.
                for (i = 0; i < Ndigits; i++)
                    std::cout << " " << results[i];
                std::cout << "\n"; */

                // Walk backward through the array; a non-matching element breaks out with
                // i != 0, so new digits are drawn. If all digits are equal, i reaches 0 and
                // the outer loop's condition is satisfied.
                for (i = Ndigits - 1; i > 0 && results[i] == results[0]; i--)
                    ;
            }

            clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &temp_time);
            draw_time = (timetodouble(temp_time) - start_time);   // elapsed time for this pass
            total_time += draw_time;                              // add this pass's time to the total
            total_tries += tries;                                 // add this pass's tries to the total

            // Formatted output for each pass:
            // Pass# ---: All -- base(--) digits = -- base(10). Time: ----.---- secs. Tries: -----
            std::cout << "Pass# " << std::setw(width_pass) << pass_num
                      << ": All " << Ndigits << " base(" << base << ") digits = "
                      << std::setw(width_base) << results[0] << " base(10). Time: "
                      << std::setw(width_time) << draw_time
                      << " secs. Tries: " << tries << "\n";
        }

        if (passes == 1)
            return 0;   // no need for totals and averages of 1 pass

        // It took ----.---- secs & ------ tries to find --- repeating -- digit base(--) numbers.
        // An average of ---.---- secs & ---- tries was needed to find each one.
        std::cout << "It took " << total_time << " secs & " << total_tries
                  << " tries to find " << passes << " repeating " << Ndigits
                  << " digit base(" << base << ") numbers.\n"
                  << "An average of " << total_time / passes << " secs & "
                  << total_tries / passes
                  << " tries was needed to find each one. \n\n";
        return 0;
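
    One hedged guess worth testing (sketched in Python rather than C++, purely to show the shape of the experiment): stdout is typically line-buffered when attached to a terminal and block-buffered when piped, so a program printing one line per pass does measurably more I/O work per line in the unpiped case. The script below reports, on stderr, how stdout is connected and how much CPU time 200k prints cost; running it bare and then through grep compares the two cases:

        import sys
        import time

        # Report on stderr so a pipe on stdout doesn't swallow the measurement.
        print("stdout is a terminal:", sys.stdout.isatty(), file=sys.stderr)

        t0 = time.process_time()
        for i in range(200_000):
            print(f"Pass# {i}: filler line to exercise stdout")
        cpu = time.process_time() - t0
        print(f"CPU time spent writing 200k lines: {cpu:.3f}s", file=sys.stderr)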

    Read the article

  • Is it better to use GL_FIXED or GL_FLOAT on Android?

    - by Timmmm
    I would have assumed that GL_FIXED was faster, but the iPhone docs actually say to use GL_FLOAT, because GL_FIXED has to be converted to GL_FLOAT. Is it the same on Android? I suppose it varies by phone, but what about recent popular ones (Nexus One, Droid/Milestone, etc.)? Bonus points: this appears to be completely undocumented (e.g. search Google for GL_FIXED!), but where is the 'point' in GL_FIXED? I.e. how much is (GL_FIXED)1 worth?
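
    For what it's worth (this comes from the OpenGL ES specification, not from the thread itself): GLfixed is a signed 16.16 fixed-point type, so the raw integer 1 is worth 1/65536, and 1.0 is encoded as 65536. A quick sketch of the conversion:

        # GLfixed is signed 16.16 fixed point: 16 integer bits, 16 fractional bits.
        def float_to_fixed(x):
            return int(round(x * 65536))

        def fixed_to_float(x):
            return x / 65536.0

        print(float_to_fixed(1.0))   # 65536
        print(fixed_to_float(1))     # 1.52587890625e-05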

    Read the article

  • What is the cost of creating an object?

    - by Tony
    Hi, if I have to choose between a static method and creating an instance and using an instance method, I will always choose static methods. But what is the detailed overhead of creating an instance? For example, I saw a DAL which could have been done with static classes, but they chose to make it instance-based, and now in the BLL at every single call they call something like: new Customer().GetData(); How bad can this be? Thanks
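
    A rough way to put a number on the allocate-per-call pattern (Python is used here just to show the shape of the measurement; .NET allocation costs differ, and short-lived object allocation is generally cheap on both runtimes):

        import timeit

        class Customer:
            def get_data(self):
                return 42

        def get_data():
            return 42

        # Construct-then-call on every use, like new Customer().GetData() ...
        alloc_call = timeit.timeit(lambda: Customer().get_data(), number=1_000_000)
        # ...versus a call with no allocation at all.
        plain_call = timeit.timeit(get_data, number=1_000_000)

        print(f"allocate+call: {alloc_call:.3f}s  plain call: {plain_call:.3f}s")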

    Read the article

  • Will an algorithm written in OCaml and compiled to C be faster than the same algorithm written in pure C?

    - by Ole Jak
    So I have a cool image processing algorithm. I have written it in OCaml, and it performs well. I know I can compile it to C code with a command like: ocamlc -output-obj -o foo.c foo.ml (I am in a situation where I am not allowed to use the OCaml compiler to build my program for my architecture; I can only use a specially modified gcc. So I will compile that program with something like gcc -L/usr/lib/ocaml foo.c -lcamlrun -lm -lncurses and it'll run on my architecture.) I want to know: in the general case, will my OCaml code compiled into C run faster than the algorithm implemented in pure C?

    Read the article

  • Suggestion on Database structure for relational data

    - by miccet
    Hi there. I've been wrestling with this problem for quite a while now, and the automatic mails with 'Slow Query' warnings are still popping in. Basically, I have Blogs (with a corresponding table) as well as a table that keeps track of how many times each Blog has been viewed. This last table has a huge number of records, since the page is relatively high-traffic and every hit is logged as an individual row. I have tried indexes on the fields included in the WHERE clause, but it doesn't seem to help. I have also tried to clean the table each week by removing old (> 1 week) records. So I'm asking you guys: how would you solve this? The query that I know is causing the slowness is generated by Rails and looks like this:

        SELECT count(*) AS count_all
        FROM blog_views
        WHERE (created_at >= '2010-01-01 00:00:01' AND blog_id = 1);

    The tables have the following structures:

        CREATE TABLE IF NOT EXISTS `blogs` (
          `id` int(11) NOT NULL auto_increment,
          `name` varchar(255) default NULL,
          `perma_name` varchar(255) default NULL,
          `author_id` int(11) default NULL,
          `created_at` datetime default NULL,
          `updated_at` datetime default NULL,
          `blog_picture_id` int(11) default NULL,
          `blog_picture2_id` int(11) default NULL,
          `page_id` int(11) default NULL,
          `blog_picture3_id` int(11) default NULL,
          `active` tinyint(1) default '1',
          PRIMARY KEY (`id`),
          KEY `index_blogs_on_author_id` (`author_id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1;

    and

        CREATE TABLE IF NOT EXISTS `blog_views` (
          `id` int(11) NOT NULL auto_increment,
          `blog_id` int(11) default NULL,
          `ip` varchar(255) default NULL,
          `created_at` datetime default NULL,
          `updated_at` datetime default NULL,
          PRIMARY KEY (`id`),
          KEY `index_blog_views_on_blog_id` (`blog_id`),
          KEY `created_at` (`created_at`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1;

    Read the article
