Search Results

Search found 13376 results on 536 pages for 'performance engineers'.


  • finding out memory allocation hotspots in java

    - by Zamir
    Our GC is working hard and we have some pauses that we want to decrease. We have some memory allocation issues that we want to solve before, or while, we are tweaking the actual JVM GC arguments. I would like to know which objects are making the GC sweat: Is there a way to know which objects are evacuated every time the GC runs? Is there a way to know which objects are moved between areas every time the GC runs? Is there a way to know which objects are in the Eden area? I work extensively with JProfiler and Memory Analyzer, and I would like to get this information on a running application in my staging environment.
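
    Object-level detail like this typically needs a profiler, but as a minimal sketch of what the standard JVM management API already exposes on a running application (per-collector counts, accumulated collection time, and the current fill level of the young-generation pools such as Eden), something like the following could be a starting point; the class name and output format here are illustrative, not from the original post:

        import java.lang.management.GarbageCollectorMXBean;
        import java.lang.management.ManagementFactory;
        import java.lang.management.MemoryPoolMXBean;

        public class GcSnapshot {
            public static void main(String[] args) {
                // Per-collector totals: how often the GC ran and how much time it has spent.
                for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                    System.out.printf("%s: %d collections, %d ms total%n",
                            gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
                }
                // Current usage of each memory pool (Eden Space, Survivor Space, Old Gen, ...).
                for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                    System.out.printf("%s: %d bytes used of %d%n",
                            pool.getName(), pool.getUsage().getUsed(), pool.getUsage().getMax());
                }
            }
        }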

    Read the article

  • apache alias and .htaccess - trying to understand configuration?

    - by sushil bharwani
    On our local dev environment we had just one server, and to add far-future Expires and Cache-Control headers to static images we kept a .htaccess file in the root of the application; things worked fine. But in prod we have multiple Apache servers with aliases pointing to a code base on a different server. In this case I am not sure where to keep the .htaccess file: should I keep it in the code base, or on the individual Apache servers? And how can I write the same rules that I have in the .htaccess file in the httpd.conf file?

    Read the article

  • Will an algorithm written in OCaml and compiled to C be faster than the algorithm written in pure C code?

    - by Ole Jak
    So I have some cool image processing algorithm. I have written it in OCaml and it performs well. I know I can compile it to C code with a command such as ocamlc -output-obj -o foo.c foo.ml. (I am in a situation where I am not allowed to use the OCaml compiler to build my program for my architecture; I can only use a specially modified gcc, so I will compile that program with something like gcc -L/usr/lib/ocaml foo.c -lcamlrun -lm -lncurses and it'll run on my architecture.) I want to know: in the general case, will my OCaml code compiled into C run faster than the algorithm implemented in pure C?

    Read the article

  • Profilers for ASP.Net Web Applications?

    - by Earlz
    I recently wanted to do some profiling on an ASP.NET project and was surprised to see that Visual Studio (at least, it seems) lacks a profiler. So my question is: what profiler do you use for ASP.NET? Are there any decent ones out there that are free? I've seen a few general .NET profilers, but have yet to see one that can be used with ASP.NET.

    Read the article

  • Time complexity of a powerset generating function

    - by Lirik
    I'm trying to figure out the time complexity of a function that I wrote (it generates a power set for a given string):

        public static HashSet<string> GeneratePowerSet(string input)
        {
            HashSet<string> powerSet = new HashSet<string>();
            if (string.IsNullOrEmpty(input))
                return powerSet;
            int powSetSize = (int)Math.Pow(2.0, (double)input.Length);
            // Start at 1 to skip the empty string case
            for (int i = 1; i < powSetSize; i++)
            {
                string str = Convert.ToString(i, 2);
                string pset = str;
                for (int k = str.Length; k < input.Length; k++)
                {
                    pset = "0" + pset;
                }
                string set = string.Empty;
                for (int j = 0; j < pset.Length; j++)
                {
                    if (pset[j] == '1')
                    {
                        set = string.Concat(set, input[j].ToString());
                    }
                }
                powerSet.Add(set);
            }
            return powerSet;
        }

    So my attempt is this: let the size of the input string be n. The outer for loop must iterate 2^n times (because the set size is 2^n). The inner for loops iterate at most 2*n times between them. 1. So Big-O would be O((2^n)*n) (since we drop the constant 2)... is that correct? And n*(2^n) is worse than n^2: if n = 4 then 4*(2^4) = 64 while 4^2 = 16; if n = 10 then 10*(2^10) = 10240 while 10^2 = 100. 2. Is there a faster way to generate a power set, or is this about optimal?
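
    As a hedged aside on question 2: materialising all 2^n subsets cannot beat O(n * 2^n), since the output itself has that size, but the repeated string concatenation and the padded binary-string step can be dropped. A minimal Java sketch of the same bitmask idea (method and class names here are illustrative):

        import java.util.HashSet;
        import java.util.Set;

        public class PowerSet {
            // Uses the bits of i as a membership mask over the input characters.
            public static Set<String> generatePowerSet(String input) {
                Set<String> powerSet = new HashSet<>();
                if (input == null || input.isEmpty()) {
                    return powerSet;
                }
                int size = 1 << input.length();       // 2^n subsets (assumes input.length() < 31)
                for (int i = 1; i < size; i++) {      // start at 1 to skip the empty set, as in the original
                    StringBuilder subset = new StringBuilder();
                    for (int j = 0; j < input.length(); j++) {
                        if ((i & (1 << j)) != 0) {    // bit j set -> include character j
                            subset.append(input.charAt(j));
                        }
                    }
                    powerSet.add(subset.toString());
                }
                return powerSet;
            }
        }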

    Read the article

  • Can I optimize this at all?

    - by Moshe
    I'm working on an iOS app and I'm using the following code for one of my tables to return the number of rows in a particular section: return [[kSettings arrayForKey:@"views"] count]; Is there any other way to write that line of code so that it is more memory efficient? EDIT: kSettings = [NSUserDefaults standardUserDefaults]. Is there any way to rewrite my line of code so that whatever memory it occupies is released sooner than it is released now?

    Read the article

  • Suggestion on Database structure for relational data

    - by miccet
    Hi there. I've been wrestling with this problem for quite a while now, and the automatic mails with 'Slow Query' warnings are still popping in. Basically, I have Blogs with a corresponding table, as well as a table that keeps track of how many times each Blog has been viewed. This last table has a huge number of records, since the page is relatively high traffic and it logs every hit as an individual row. I have tried indexes on the fields that are included in the WHERE clause, but it doesn't seem to help. I have also tried to clean the table each week by removing old (older than 1 week) records. So, I'm asking you guys, how would you solve this? The query that I know is causing the slowness is generated by Rails and looks like this:

        SELECT count(*) AS count_all FROM blog_views WHERE (created_at >= '2010-01-01 00:00:01' AND blog_id = 1);

    The tables have the following structures:

        CREATE TABLE IF NOT EXISTS `blogs` (
          `id` int(11) NOT NULL auto_increment,
          `name` varchar(255) default NULL,
          `perma_name` varchar(255) default NULL,
          `author_id` int(11) default NULL,
          `created_at` datetime default NULL,
          `updated_at` datetime default NULL,
          `blog_picture_id` int(11) default NULL,
          `blog_picture2_id` int(11) default NULL,
          `page_id` int(11) default NULL,
          `blog_picture3_id` int(11) default NULL,
          `active` tinyint(1) default '1',
          PRIMARY KEY (`id`),
          KEY `index_blogs_on_author_id` (`author_id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1;

    And:

        CREATE TABLE IF NOT EXISTS `blog_views` (
          `id` int(11) NOT NULL auto_increment,
          `blog_id` int(11) default NULL,
          `ip` varchar(255) default NULL,
          `created_at` datetime default NULL,
          `updated_at` datetime default NULL,
          PRIMARY KEY (`id`),
          KEY `index_blog_views_on_blog_id` (`blog_id`),
          KEY `created_at` (`created_at`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1;

    Read the article

  • Is there a better way to count the messages in a Message Queue (MSMQ)?

    - by Damovisa
    I'm currently doing it like this:

        MessageQueue queue = new MessageQueue(@".\Private$\myqueue");
        MessageEnumerator messageEnumerator = queue.GetMessageEnumerator2();
        int i = 0;
        while (messageEnumerator.MoveNext())
        {
            i++;
        }
        return i;

    But for obvious reasons, it just feels wrong - I shouldn't have to iterate through every message just to get a count, should I? Is there a better way?

    Read the article

  • What is the cost of object creation?

    - by Tony
    Hi. If I have to choose between a static method and creating an instance to call an instance method, I will always choose the static method. But what is the detailed overhead of creating an instance? For example, I saw a DAL which could have been written with static classes, but they chose to make it instance-based, and now in the BLL, at every single call, they call something like new Customer().GetData(); How bad can this be? Thanks
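
    One way to turn "how bad can this be?" into a number is to benchmark the two call styles. A minimal JMH-style sketch, assuming a hypothetical Customer class (on a modern JVM a short-lived allocation like this is usually cheap, but measuring beats guessing):

        import org.openjdk.jmh.annotations.Benchmark;
        import org.openjdk.jmh.annotations.Scope;
        import org.openjdk.jmh.annotations.State;

        // Hypothetical DAL class, used only for illustration.
        class Customer {
            int getData() { return 42; }
            static int getDataStatic() { return 42; }
        }

        @State(Scope.Thread)
        public class InstanceVsStaticBenchmark {
            @Benchmark
            public int createInstanceEachCall() {
                return new Customer().getData();   // object allocation + instance call
            }

            @Benchmark
            public int staticCall() {
                return Customer.getDataStatic();   // static call only
            }
        }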

    Read the article

  • Most efficient way of checking if Date object and Calendar object are in the same month

    - by Indigenuity
    I am working on a project that will run many thousands of comparisons between dates to see if they are in the same month, and I am wondering what the most efficient way of doing it would be. This isn't exactly what my code looks like, but here's the gist:

        List<Date> dates = getABunchOfDates();
        Calendar month = Calendar.getInstance();
        for (int i = 0; i < numMonths; i++) {
            for (Date date : dates) {
                if (sameMonth(month, date))
                    ... doSomething
            }
            month.add(Calendar.MONTH, -1);
        }

    Creating a new Calendar object for every date seems like a pretty hefty overhead when this comparison will happen thousands of times, so I kind of want to cheat a bit and use the deprecated methods Date.getMonth() and Date.getYear():

        public static boolean sameMonth(Calendar month, Date date) {
            return month.get(Calendar.YEAR) == date.getYear() && month.get(Calendar.MONTH) == date.getMonth();
        }

    I'm pretty close to just using this method, since it seems to be the fastest, but is there a faster way? And is this a foolish way to do it, since the Date methods are deprecated? Note: this project will always run with Java 7.
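
    A minimal sketch of an alternative that stays off the deprecated API: reuse one scratch Calendar for every comparison instead of allocating a new one per date. (If the deprecated route were taken instead, note that Date.getYear() returns the year minus 1900, so it would need an offset before comparing against Calendar.YEAR.)

        import java.util.Calendar;
        import java.util.Date;

        public final class MonthComparer {
            // One reusable scratch Calendar; fine as long as comparisons happen on a single thread.
            private final Calendar scratch = Calendar.getInstance();

            public boolean sameMonth(Calendar month, Date date) {
                scratch.setTime(date);   // recomputes the calendar fields, no new allocation
                return month.get(Calendar.YEAR) == scratch.get(Calendar.YEAR)
                        && month.get(Calendar.MONTH) == scratch.get(Calendar.MONTH);
            }
        }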

    Read the article

  • Measuring the CPU frequency scaling effect

    - by Bryan Fok
    Recently I have been trying to measure the effect of CPU frequency scaling. Is it accurate if I use this clock to measure it?

        template<std::intmax_t clock_freq>
        struct rdtsc_clock {
            typedef unsigned long long rep;
            typedef std::ratio<1, clock_freq> period;
            typedef std::chrono::duration<rep, period> duration;
            typedef std::chrono::time_point<rdtsc_clock> time_point;
            static const bool is_steady = true;

            static time_point now() noexcept
            {
                unsigned lo, hi;
                asm volatile("rdtsc" : "=a" (lo), "=d" (hi));
                return time_point(duration(static_cast<rep>(hi) << 32 | lo));
            }
        };

    Update: According to the comment on my other post, I believe rdtsc cannot be used to measure the effect of CPU frequency scaling, because the counter read by rdtsc is not affected by the CPU frequency. Am I right?

    Read the article

  • Improving WPF memory usage

    - by Krishna
    Hello developers. Is there any way to store the UI state to disk when a WPF form has been minimised? I have a complex GUI with a few tab controls, and it consumes quite a bit of memory which stays allocated while the application is not active. I was hoping one of you may have got something working along these lines (or similar):

      1. Application active: the user does work and plays with the UI - enters some information in the text boxes and moves around the tabs.
      2. The user minimises the form to work with other applications.
      3. On minimise: save the current state to disk and dispose the root tab control.
      4. On activate: rebuild the root tab control from disk and add it to the controls collection.

    Before I dive in to do this, I thought it would help to ask this question here. Please let me know your thoughts.

    Read the article

  • Fast way to pass a simple java object from one thread to another

    - by Adal
    I have a callback which receives an object. I make a copy of this object, and I must pass it on to another thread for further processing. It's very important for the callback to return as fast as possible. Ideally, the callback will write the copy to some sort of lock-free container. I only have the callback called from a single thread and one processing thread. I only need to pass a bunch of doubles to the other thread, and I know the maximum number of doubles (around 40). Any ideas? I'm not very familiar with Java, so I don't know the usual ways to pass stuff between threads.
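
    A minimal sketch of one common hand-off pattern for this single-producer / single-consumer case: copy the doubles in the callback and push them through a bounded ArrayBlockingQueue, so the callback only pays for an array copy and a non-blocking offer(). (Class and method names here are illustrative.)

        import java.util.concurrent.ArrayBlockingQueue;
        import java.util.concurrent.BlockingQueue;

        public class DoubleHandoff {
            // Bounded queue: offer() never blocks the callback thread; the capacity is a tuning choice.
            private final BlockingQueue<double[]> queue = new ArrayBlockingQueue<>(1024);

            // Called on the callback thread; must return as fast as possible.
            public void onCallback(double[] values) {
                double[] copy = values.clone();      // defensive copy, at most ~40 doubles
                if (!queue.offer(copy)) {
                    // Queue full: drop or count the overflow, per application policy.
                }
            }

            // Runs on the processing thread.
            public void processLoop() throws InterruptedException {
                while (!Thread.currentThread().isInterrupted()) {
                    double[] values = queue.take();  // blocks until the next batch arrives
                    // ... further processing of the doubles ...
                }
            }
        }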

    Read the article

  • Does having a longer string in a SQL LIKE expression hinder or help query execution speed?

    - by Allain Lalonde
    I have a db query that'll cause a full table scan using a LIKE clause, and came upon a question I was curious about... Which of the following should run faster in MySQL, or would they both run at the same speed? Benchmarking might answer it in my case, but I'd like to know the why of the answer. The column being filtered contains a couple thousand characters, if that's important.

        SELECT * FROM users WHERE data LIKE '%=12345%'
    or
        SELECT * FROM users WHERE data LIKE '%proileId=12345%'

    I can come up with reasons why each of these might outperform the other, but I'm curious to know the logic.

    Read the article

  • PHP – Slow String Manipulation

    - by Simon Roberts
    I have some very large data files, and for business reasons I have to do extensive string manipulation (replacing characters and strings). This is unavoidable. The number of replacements runs into hundreds of thousands. It's taking longer than I would like. PHP is generally very quick, but I'm doing so many of these string manipulations that it's slowing down and script execution is running into minutes. This is a pain because the script is run frequently. I've done some testing and found that str_replace is fastest, followed by strstr, followed by preg_replace. I've also tried individual str_replace statements as well as constructing arrays of patterns and replacements. I'm toying with the idea of isolating the string manipulation operations and writing them in a different language, but I don't want to invest time in that option only to find that the improvements are negligible. Plus, I only know Perl, PHP and COBOL, so for any other language I would have to learn it first. I'm wondering how other people have approached similar problems? I have searched, and I don't believe this duplicates any existing questions.

    Read the article

  • Session Timeout and page response time

    - by Johnny5
    Hi, I'm load testing an ASP.NET app. The load test simulates 500 users doing searches on the site and browsing the results. I'm observing that the more I reduce the session timeout limit (in web.config), the better the page response time. For example, with a timeout of 10 minutes, I get an average response time of 8.35 seconds. With a timeout of 3 minutes, the average response time for the same page is 3.98 seconds. The session is stored "InProc". I suspect the memory used by the "no longer used but still active" sessions may be the cause. But even though more memory is used when the timeout is at 10, there is still plenty of memory available (about 2.7 GB). Any ideas?

    Read the article

  • Optimizing NSNumber numberWithInt:

    - by Riviera
    I am profiling an iPhone app and I noticed a strange pattern. In a certain block of code that's called quite frequently...

        [item setQuadrant:[NSNumber numberWithInt:a]];
        [item setIndex:[NSNumber numberWithInt:b]];
        [item setTimestamp:[NSNumber numberWithInt:c]];
        [item setState:[NSNumber numberWithInt:d]];
        [item setCompletionPercentage:[NSNumber numberWithInt:e]];
        [item setId_:[NSNumber numberWithInt:f]];

    ...the first call to [NSNumber numberWithInt:] takes an inordinate amount of time, on the order of 10-15x that of the remaining calls. I've verified that the results are consistent if I shuffle the lines (the first line is always the slow one, by the same ratio). Is there something going on that I'm not aware of? Perhaps this happens because this block is inside a try/catch?

    Read the article

  • Do table columns increase SELECT statement execution time?

    - by paokg4
    I have 2 tables with the same rows and the same data, but the first has more columns (fields). For example, I select the same 3 fields from both of them (SELECT a,b,c FROM mytable1 and then SELECT a,b,c FROM mytable2). I've tried to run those queries on 100,000 records (for each table), but in the end I got the same execution time (0.0006 sec). Do you know if the number of columns (and hence the fact that one table is bigger than the other) has anything to do with the query execution time?

    Read the article

  • SQL Database Schema Design for a Large 3-Billion-Relationship Database

    - by K-Bell
    Get your geek on. Can you solve this? I am designing a products database for SQL Server 2008 R2 Ed. (not Enterprise Ed.) that will be used to store custom product configurations for over 30,000 distinct products. The database will have up to 500 users at a time. Here is the design problem: each Product has a collection of Parts (up to 50 parts per product). So if I have 30,000 Products and each of them can have up to 50 Parts, that's 1.5 million distinct Product-to-Part relationships, or as an equation: 30,000 (Products) x 50 (Parts) = 1.5 million Product-to-Part records. And each Part can have up to 2,000 finish options (a finish is a paint color). NOTE: only one finish will be selected by a user at run-time. The 2,000 finish options I need to store are the allowed options for a specific part on a specific product. So if I have 1.5 million distinct Product-to-Part relationships/records and each of those parts can have up to 2,000 finishes, that is 3 billion allowable Product-to-Part-to-Finish relationships/records, or as an equation: 1.5 million (Product-to-Part records) x 2,000 (Finishes) = 3 billion Product-to-Part-to-Finish records. How can I design this database so that I can execute fast and efficient queries for a specific product and return its list of Parts and all the allowable Finishes for each part, without 3 billion Product-to-Part-to-Finish records? Read time is more important than write time. Please post your thoughts/suggestions if you have experience with large databases. Thanks!

    Read the article

  • fastest way to check to see if a certain index in a linq statement is null

    - by tehdoommarine
    Basic details: I have a LINQ statement that grabs some records from a database and puts them in a System.Linq.Enumerable:

        var someRecords = someRepoAttachedToDatabase.Where(p => true);

    Suppose this grabs tons (25k+) of records and I need to perform operations on all of them. To speed things up, I have decided to use paging and perform the operations needed in blocks of 100 instead of on all of the records at the same time. The question: the line in question is where I count the number of records in the subset to see if we are on the last page; if the number of records in the subset is less than the page size, then there are no more records left. What I would like to know is: what is the fastest way to do this? Code in question:

        int pageSize = 100;
        bool moreData = true;
        int currentPage = 1;
        while (moreData)
        {
            var subsetOfRecords = someRecords.Skip((currentPage - 1) * pageSize).Take(pageSize); // this is also a System.Linq.Enumerable
            if (subsetOfRecords.Count() < pageSize) { moreData = false; } // line in question
            // do stuff to records in subset
            currentPage++;
        }

    Things I have considered: subsetOfRecords.Count() < pageSize; subsetOfRecords.ElementAt(pageSize - 1) == null (causes an out-of-bounds exception - I can catch the exception and set moreData to false there); converting subsetOfRecords to an array (converting someRecords to an array will not work due to the way subsetOfRecords is declared - but I am open to changing it). I'm sure there are plenty of other ideas that I have missed.

    Read the article

  • What would be a better way of doing the following

    - by thecoshman
        if (get_magic_quotes_gpc()) {
            $location_name = trim(mysql_real_escape_string(trim(stripslashes($_GET['location_name']))));
        } else {
            $location_name = trim(mysql_real_escape_string(trim($_GET['location_name'])));
        }

    That's the code I have so far. It seems to me this code is fundamentally... OK. Do you think I can safely remove the inner trim()? Please try not to spam me with endless versions of this; I want to try to learn how to do this better.

    Read the article

  • Any reason not to use USE_ETAGS with CommonMiddleware in Django?

    - by allyourcode
    The only reason I can think of is that calculating ETags might be expensive. If pages change very quickly, the browser's cache is likely to be invalidated by the ETag anyway; in that case, calculating the ETag would be a waste of time. On the other hand, giving a 304 response when possible minimizes the amount of time spent in transmission. What are some good guidelines for when ETags are likely to be a net win when implemented with Django's CommonMiddleware?

    Read the article

  • SQL Profiler and Tuning Advisor for Reporting Services - what events should be selected?

    - by chris
    I've used SQL Profiler to generate a trace file, and the tuning advisor to take that trace file and provide some recommendations on db updates. However, when running against a Reporting Services server, the profiler doesn't seem to be capturing any of the queries. I'm logging the defaults (SQL:BatchCompleted and Starting, RPC:Completed, and Sessions - Existing Connections). What events should I be capturing in SQL Profiler in order to run the tuning advisor?

    Read the article
