Search Results

Search found 12988 results on 520 pages for 'performance'.

Page 161 of 520

  • Are there any tools to optimize the number of consumer and producer threads on a JMS queue?

    - by lindelof
    I'm working on an application that is distributed over two JBoss instances and that produces/consumes JMS messages on several JMS queues. When we configured the application we had to determine which threading model we would use, in particular the number of producing and consuming threads per queue. We did this in a rather ad-hoc fashion, but after reading the most recent columns by Herb Sutter in Dr. Dobb's (in particular this one) I would like to size our thread pools in a more rigorous manner. Are there any methods/tools to measure the throughput of JMS queues (in particular JBoss Messaging queues) as a function of the number of producing/consuming threads?
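    So far I have not found a ready-made tool, so as a rough starting point I have sketched the hand-rolled harness below: N consumer threads drain the queue for a fixed window while a shared counter tracks throughput, repeated for increasing N. The JNDI names are placeholders for whatever our JBoss configuration actually uses, and the sketch assumes producers keep the queue non-empty during the window. Is there anything more rigorous than this?

        import java.util.concurrent.CountDownLatch;
        import java.util.concurrent.atomic.AtomicLong;
        import javax.jms.Connection;
        import javax.jms.ConnectionFactory;
        import javax.jms.MessageConsumer;
        import javax.jms.Queue;
        import javax.jms.Session;
        import javax.naming.InitialContext;

        public class ConsumerThroughput {

            static final long MEASURE_MILLIS = 30000;   // length of one measurement window

            public static void main(String[] args) throws Exception {
                InitialContext jndi = new InitialContext();
                // JNDI names below are placeholders for the ones in the JBoss configuration.
                ConnectionFactory factory = (ConnectionFactory) jndi.lookup("ConnectionFactory");
                Queue queue = (Queue) jndi.lookup("queue/MyQueue");

                for (int threads = 1; threads <= 16; threads *= 2) {
                    System.out.println(threads + " consumer(s): "
                            + measure(factory, queue, threads) + " msg/s");
                }
            }

            static long measure(ConnectionFactory factory, Queue queue, int threads) throws Exception {
                Connection connection = factory.createConnection();
                connection.start();
                final AtomicLong received = new AtomicLong();
                final CountDownLatch done = new CountDownLatch(threads);
                for (int i = 0; i < threads; i++) {
                    final Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                    final MessageConsumer consumer = session.createConsumer(queue);
                    new Thread() {
                        public void run() {
                            long end = System.currentTimeMillis() + MEASURE_MILLIS;
                            try {
                                while (System.currentTimeMillis() < end) {
                                    // 1 s receive timeout so the loop can end even on an empty queue
                                    if (consumer.receive(1000) != null) {
                                        received.incrementAndGet();
                                    }
                                }
                            } catch (Exception e) {
                                e.printStackTrace();
                            } finally {
                                done.countDown();
                            }
                        }
                    }.start();
                }
                done.await();
                connection.close();
                return received.get() * 1000 / MEASURE_MILLIS;
            }
        }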

    Read the article

  • List of divisors of an integer n (Haskell)

    - by Code-Guru
    I currently have the following function to get the divisors of an integer:

        -- All divisors of a number
        divisors :: Integer -> [Integer]
        divisors 1 = [1]
        divisors n = firstHalf ++ secondHalf
          where
            firstHalf    = filter (divides n) (candidates n)
            secondHalf   = filter (\d -> n `div` d /= d) (map (n `div`) (reverse firstHalf))
            candidates n = takeWhile (\d -> d * d <= n) [1..n]

    I ended up adding the filter to secondHalf because a divisor was repeated when n is the square of a prime number. This seems like a very inefficient way to solve the problem, so I have two questions: how do I measure whether this really is a bottleneck in my algorithm, and if it is, how do I go about finding a better way to avoid repetitions when n is the square of a prime?

    Read the article

  • XSLT 1.0: restrict entries in a nodeset

    - by Mike
    Hi, I'm relatively new to XSLT, so I have what I hope is a simple question. I have some flat XML files, which can be pretty big (e.g. 7 MB), that I need to make 'more hierarchical'. For example, the flat XML might look like this: <D0011> .... .... and it should end up looking like this: <D0011> .... .... I have a working XSLT for this. It essentially gets a nodeset of all the b elements and then uses the following-sibling axis to get a nodeset of the nodes following the current b node (i.e. following-sibling::*[position() = $nodePos]). Recursion is then used to add the siblings into the result tree until another b element is found (I have parameterised it, of course, to make it more generic). I also have a solution that just sends the position in the XML of the next b node and selects the intervening nodes one after the other (again using recursion) via a *[position() = $nodePos] selection. The problem is that the time to execute the transformation increases unacceptably with the size of the XML file. Looking into it with XML Spy, it seems that it is the following-sibling axis and the position() test that take the time in the two respective methods. What I really need is a way of restricting the number of nodes in the above selections so that fewer comparisons are performed: every time the position is tested, every node in the nodeset is checked to see whether its position is the right one. Is there a way to do that? Any other suggestions? Thanks, Mike

    Read the article

  • How to increase query speed without using full-text search?

    - by andre matos
    This is my simple query; by searching for selectnothing I'm sure I'll have no hits.

        SELECT nome_t FROM myTable WHERE nome_t ILIKE '%selectnothing%';

    This is the EXPLAIN ANALYZE VERBOSE:

        Seq Scan on myTable  (cost=0.00..15259.04 rows=37 width=29) (actual time=2153.061..2153.061 rows=0 loops=1)
          Output: nome_t
          Filter: (nome_t ~~* '%selectnothing%'::text)
        Total runtime: 2153.116 ms

    myTable has around 350k rows and the table definition is something like:

        CREATE TABLE myTable (
            nome_t text NOT NULL
        );

    I have an index on nome_t as stated below:

        CREATE INDEX idx_m_nome_t ON myTable USING btree (nome_t);

    Although this is clearly a good candidate for full-text search, I would like to rule that option out for now. This query is meant to be run from a web application and currently it's taking around 2 seconds, which is obviously too much; is there anything I can do, like using other index methods, to improve the speed of this query?

    Read the article

  • Scalable way of doing self join with many to many table

    - by johnathan
    I have a table structure like the following:

        user
            id
            name
        profile_stat
            id
            name
        profile_stat_value
            id
            name
        user_profile
            user_id
            profile_stat_id
            profile_stat_value_id

    My question is: how do I evaluate a query where I want to find all users with profile_stat_id and profile_stat_value_id for many stats? I've tried doing an inner self join, but that quickly gets crazy when searching for many stats. I've also tried doing a count on the actual user_profile table, and that's much better, but still slow. Is there some magic I'm missing? I have about 10 million rows in the user_profile table and want the query to take no longer than a few seconds. Is that possible?

    Read the article

  • Querying a huge database table takes too much time in MySQL

    - by Vijay
    Hi all, I am running SQL queries on a MySQL table that holds 110 million+ unique records for a whole day. Problem: whenever I run any query with a WHERE clause it takes at least 30-40 minutes. Since I want to generate most of the data on the next day, I need access to the whole table. Could you please guide me on how to optimize or restructure the deployment model?

    Site description:

        mysql Ver 14.12 Distrib 5.0.24, for pc-linux-gnu (i686) using readline 5.0
        4 GB RAM, dual-CPU dual-core, 3 GHz
        RHEL 3

    my.cnf contents (/etc/my.cnf):

        [mysqld]
        datadir=/data/mysql/data/
        socket=/tmp/mysql.sock
        sort_buffer_size = 2000000
        table_cache = 1024
        key_buffer = 128M
        myisam_sort_buffer_size = 64M
        # Default to using old password format for compatibility with mysql 3.x
        # clients (those using the mysqlclient10 compatibility package).
        old_passwords=1

        [mysql.server]
        user=mysql
        basedir=/data/mysql/data/

        [mysqld_safe]
        err-log=/data/mysql/data/mysqld.log
        pid-file=/data/mysql/data/mysqld.pid

    DB table details:

        CREATE TABLE `RAW_LOG_20100504` (
          `DT` date default NULL,
          `GATEWAY` varchar(15) default NULL,
          `USER` bigint(12) default NULL,
          `CACHE` varchar(12) default NULL,
          `TIMESTAMP` varchar(30) default NULL,
          `URL` varchar(60) default NULL,
          `VERSION` varchar(6) default NULL,
          `PROTOCOL` varchar(6) default NULL,
          `WEB_STATUS` int(5) default NULL,
          `BYTES_RETURNED` int(10) default NULL,
          `RTT` int(5) default NULL,
          `UA` varchar(100) default NULL,
          `REQ_SIZE` int(6) default NULL,
          `CONTENT_TYPE` varchar(50) default NULL,
          `CUST_TYPE` int(1) default NULL,
          `DEL_STATUS_DEVICE` int(1) default NULL,
          `IP` varchar(16) default NULL,
          `CP_FLAG` int(1) default NULL,
          `USER_LOCATE` bigint(15) default NULL
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 MAX_ROWS=200000000;

    Thanks in advance!

    Read the article

  • Spring web application: internal visual monitoring of heap space and permgen

    - by Veniamin
    I have created a web application using the Spring framework (Maven-based). Can I include some module/dependency/plugin in my app to monitor heap space, permgen, etc.? It would be great to get some chart output like that of VisualVM, for example at http://localhost:8080/monitoring. I have found the following dependency, without a link to a repository:

        <dependency>
            <groupId>com.sun.tools.visualvm</groupId>
            <artifactId>core</artifactId>
            <version>1.3.3</version>
        </dependency>

    How do I use it?
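    In the meantime I have been reading the figures myself from the JMX memory beans, roughly as in the sketch below, with the idea of returning the text from a controller mapped to /monitoring (that mapping, the class name and the formatting are just my own placeholders). A ready-made module with proper charts would still be preferable, if one exists.

        import java.lang.management.ManagementFactory;
        import java.lang.management.MemoryMXBean;
        import java.lang.management.MemoryPoolMXBean;
        import java.lang.management.MemoryUsage;

        // Rough sketch: builds a plain-text report of heap and non-heap usage.
        // On HotSpot, PermGen shows up as one of the non-heap memory pools.
        public class MemoryReport {

            public static String build() {
                StringBuilder sb = new StringBuilder();
                MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
                sb.append("Heap:     ").append(format(memory.getHeapMemoryUsage())).append('\n');
                sb.append("Non-heap: ").append(format(memory.getNonHeapMemoryUsage())).append('\n');
                for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                    sb.append(pool.getName()).append(": ").append(format(pool.getUsage())).append('\n');
                }
                return sb.toString();
            }

            private static String format(MemoryUsage usage) {
                // getMax() can be -1 when no limit is defined for a pool.
                return (usage.getUsed() / (1024 * 1024)) + " MB used, max "
                        + (usage.getMax() / (1024 * 1024)) + " MB";
            }

            public static void main(String[] args) {
                // Would be returned from the /monitoring controller instead of printed.
                System.out.println(build());
            }
        }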

    Read the article

  • NHibernate + Fluent long startup time

    - by PaRa
    Hi all, I am new to NHibernate. The test below took 11.2 seconds to run (debug mode), and I am seeing this large startup time in all my tests; basically, creating the first session takes a ton of time.

    Setup: Windows 2003 SP2 / Oracle 10gR2 (latest CPU) / ODP.NET 2.111.7.20 / FNH 1.0.0.636 / NHibernate 2.1.2.4000 / NUnit 2.5.2.9222 / VS2008 SP1

        using System;
        using System.Collections;
        using System.Collections.Generic;
        using System.Configuration;
        using System.Data;
        using System.Data.Common;
        using System.Globalization;
        using System.IO;
        using System.Text;
        using NUnit.Framework;
        using NHibernate;
        using log4net.Config;
        using FluentNHibernate;

        [Test()]
        public void GetEmailById()
        {
            Email result;
            using (EmailRepository repository = new EmailRepository())
            {
                result = repository.GetById(1111);
            }
            Assert.IsTrue(result != null);
        }

        public class EmailRepository : RepositoryBase<Email>
        {
            public EmailRepository() : base() { }
        }

    In my RepositoryBase:

        public T GetById(object id)
        {
            using (var session = sessionFactory.OpenSession())
            using (var transaction = session.BeginTransaction())
            {
                try
                {
                    T returnVal = session.Get<T>(id);
                    transaction.Commit();
                    return returnVal;
                }
                catch (HibernateException ex)
                {
                    // Logging here
                    transaction.Rollback();
                    return null;
                }
            }
        }

    The query time is very small and the resulting entity is really small; subsequent queries are fine. It seems to be getting the first session started that takes the time. Has anyone else seen something similar?

    Read the article

  • How can I most accurately calculate the execution time of an ASP.NET page while also displaying it on the page?

    - by henningst
    I want to calculate the execution time of my ASP.NET pages and display it on the page. Currently I'm calculating the execution time using a System.Diagnostics.Stopwatch and then storing the value in a log database. The stopwatch is started in OnInit and stopped in OnPreRenderComplete. This seems to be working fine, and it gives an execution time similar to the one shown in the page trace. The problem now is that I'm not able to display the execution time on the page, because the stopwatch is stopped too late in the page life cycle. What is the best way to do this?

    Read the article

  • Time complexity O() of isPalindrome()

    - by Aran
    I have this method, isPalindrome(), and I am trying to find its time complexity, and also to rewrite the code more efficiently:

        boolean isPalindrome(String s) {
            boolean bP = true;
            for (int i = 0; i < s.length(); i++) {
                if (s.charAt(i) != s.charAt(s.length() - i - 1)) {
                    bP = false;
                }
            }
            return bP;
        }

    Now I know this code checks each character of the string against the corresponding character from the end, and sets bP to false if they differ. I think the operations are s.length(), s.charAt(i) and s.charAt(s.length() - i - 1), making the time complexity O(N + 3), I think? Is this correct? If not, what is it, and how is that figured out? Also, to make this more efficient, would it be good to store the characters in temporary strings?
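    One rewrite I have been considering (just a sketch, and I am not sure it is the right direction) compares characters from both ends and stops at the first mismatch, so at most half the characters are compared:

        boolean isPalindrome(String s) {
            int i = 0;
            int j = s.length() - 1;
            while (i < j) {
                if (s.charAt(i) != s.charAt(j)) {
                    return false;   // first mismatch: no need to keep scanning
                }
                i++;
                j--;
            }
            return true;
        }

    Would that change the O() classification, or only the constant factor?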

    Read the article

  • Cache layer for MVC - Model or controller?

    - by Industrial
    Hi everyone, I am having some second thoughts about where to implement the caching part. Where is the most appropriate place to implement it, do you think: inside every model, or in the controller?

    Approach 1 (pseudo-code):

        // mycontroller.php
        MyController extends Controller_class {
            function index() {
                $data = $this->model->getData();
                echo $data;
            }
        }

        // myModel.php
        MyModel extends Model_Class {
            function getData() {
                $data = memcached->get('data');
                if (!$data) {
                    $query->SQL_QUERY("Do query!");
                }
                return $data;
            }
        }

    Approach 2:

        // mycontroller.php
        MyController extends Controller_class {
            function index() {
                $dataArray = $this->memcached->getMulti('data', 'data2');
                foreach ($dataArray as $key) {
                    if (!$key) {
                        $data = $this->model->getData();
                        $this->memcached->set($key, $data);
                    }
                }
                echo $data;
            }
        }

        // myModel.php
        MyModel extends Model_Class {
            function getData() {
                $query->SQL_QUERY("Do query!");
                return $data;
            }
        }

    Thoughts:

    Approach 1: no multi-get/multi-set, so if a high number of keys were returned there would be overhead; but it is easier to maintain, since all database/cache handling is in each model.

    Approach 2: better performance-wise, since multi-set/multi-get is used; but more code is required and it is harder to maintain.

    Tell me what you think!

    Read the article

  • Definition of Connect, Processing, Waiting in apache bench.

    - by rpatel
    When I run apache bench I get results like:

        Command: abs.exe -v 3 -n 10 -c 1 https://mysite

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:      203  213    8.1     219   219
        Processing:    78  177   88.1     172   359
        Waiting:       78  169   84.6     156   344
        Total:        281  389   86.7     391   563

    I can't seem to find the definitions of Connect, Processing and Waiting. What do those numbers mean?

    Read the article

  • Why does Go compile quickly?

    - by Evan Kroske
    I've Googled and poked around the Go website, but I can't seem to find an explanation for Go's extraordinary build times. Are they products of the language features (or lack thereof), a highly optimized compiler, or something else? I'm not trying to promote Go; I'm just curious.

    Read the article

  • Percentage of memory used by a process

    - by benjamin button
    Normally prstat -J will give the memory of the process image and RSS (resident set size), etc. How do I get a list of processes with the percentage of memory used by each process? I am working on Solaris Unix. Additionally, what are the regular commands you use for monitoring processes and process performance? They might be very useful to all!

    Read the article

  • NetNamedPipe: varying response time when communication is idling

    - by Sven Künzler
    I have two WCF apps communicating one-way over named pipes. All is nice, except for one thing: normally, the request/response cycle takes marginal (near-zero) time. However, if there has been a span of, say, half a minute without any communication, the request/response time increases to roughly 300-500 ms. I looked around the net and got the idea of using a heartbeat/ping mechanism to keep the communication channel busy. Using trial and error I found that when I make a request every 10 seconds, the response times stay low; starting at around 15-second intervals, the "hiccup" response times begin to appear. Now I'm wondering where this phenomenon originates. I tried setting all conceivable timeouts on both sides to 1 minute, but that did not help. Can anybody explain what's going on there?

    Read the article

  • Is Java serialization a tool to shrink the memory footprint?

    - by Pentius
    Hey folks, does serialization in Java always shrink the memory that is used to hold an object structure, or is it possible that serialization will have a higher cost? In other words: is serialization a tool for shrinking the memory footprint of object structures in Java? Edit: I'm totally aware of what serialization was intended for, but thanks anyway :-) You know, tools can be misused. My question is whether it is a good tool for decreasing memory usage. So what reasons can you imagine why memory usage would increase or decrease? What will happen in most cases?
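    To make the question concrete, the kind of experiment I have in mind is the sketch below: serialize a structure into an in-memory byte[] and look at the byte count (the ArrayList here is just an arbitrary placeholder structure). What I cannot judge is how that count compares to the live footprint of the original objects in typical cases.

        import java.io.ByteArrayInputStream;
        import java.io.ByteArrayOutputStream;
        import java.io.IOException;
        import java.io.ObjectInputStream;
        import java.io.ObjectOutputStream;
        import java.io.Serializable;
        import java.util.ArrayList;

        public class SerializedSize {

            // Serialize any Serializable graph into a byte[] held in memory.
            static byte[] toBytes(Serializable obj) throws IOException {
                ByteArrayOutputStream bos = new ByteArrayOutputStream();
                ObjectOutputStream oos = new ObjectOutputStream(bos);
                oos.writeObject(obj);
                oos.close();
                return bos.toByteArray();
            }

            // Rebuild the graph when it is needed again (the cost this trades off).
            static Object fromBytes(byte[] bytes) throws IOException, ClassNotFoundException {
                ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes));
                try {
                    return ois.readObject();
                } finally {
                    ois.close();
                }
            }

            public static void main(String[] args) throws Exception {
                ArrayList<Integer> sample = new ArrayList<Integer>();   // placeholder structure
                for (int i = 0; i < 100000; i++) {
                    sample.add(i);
                }
                byte[] frozen = toBytes(sample);
                System.out.println("Serialized size: " + frozen.length + " bytes");
                // Comparing this against the live footprint would need a profiler or
                // java.lang.instrument; the byte[] only replaces the object graph if
                // the original reference is dropped afterwards.
            }
        }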

    Read the article

  • How to profile object creation in Java?

    - by gooli
    The system I work with creates a whole lot of objects and garbage collects them all the time, which results in a very steeply jagged graph of heap consumption. I would like to know which objects are being generated so I can tune the code, but I can't figure out a way to dump the heap at the moment garbage collection starts. When I tried to initiate dumpHeap via JConsole manually at random times, I always got results after GC had finished its run and didn't get any useful data. Any notes on how to track down excessive temporary object creation are welcome.
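    The closest I have come so far is triggering a dump programmatically through the HotSpot diagnostic MXBean with live set to false, so that already-unreachable objects are still written out; a sketch is below (Sun/HotSpot specific, and the file name is arbitrary). What I am still missing is a way to tie this to the moment a collection starts.

        import java.lang.management.ManagementFactory;
        import javax.management.MBeanServer;
        import com.sun.management.HotSpotDiagnosticMXBean;

        public class HeapDumper {

            public static void dump(String path) throws Exception {
                MBeanServer server = ManagementFactory.getPlatformMBeanServer();
                HotSpotDiagnosticMXBean hotspot = ManagementFactory.newPlatformMXBeanProxy(
                        server, "com.sun.management:type=HotSpotDiagnostic", HotSpotDiagnosticMXBean.class);
                // live = false keeps unreachable objects in the dump, which is exactly
                // the temporary garbage I want to inspect before the collector removes it.
                hotspot.dumpHeap(path, false);
            }

            public static void main(String[] args) throws Exception {
                dump("temp-objects.hprof");   // fails if the file already exists
            }
        }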

    Read the article

  • How to set up a load/stress test for a web site?

    - by Ryan
    I've been tasked with stress/load testing our company web site out of the blue and know nothing about doing so. Every search I make on google for "how to load test a web site" just comes back with various companies and software to physically do the load testing. For now I'm more interested in how to actually go about setting up a load test like what I should take into account prior to load testing, what pages within my site I should be testing load against and what things I'm going to want to monitor when doing the test. Our web site is on a multi-tier system complete with a separate database server (IIS 7 Web Server, SQL Server 2000 db). I imagine I'd want to monitor both the web server and the database server for testing load however when setting up scenarios to load test the web server I'd have to use pages that query the database to see any load on the database server at the same time. Are web servers and database servers generally tested simultaneously or are they done as separate tests? As you can see I'm pretty clueless as to the whole operation so any incite as to how to go about this would be very helpful. FYI I have been tinkering with Pylot and was able to create and run a scenario against our site but I'm not sure what I should be looking for in the results or if the scenario I created is even a scenario worth measuring for our site. Thanks in advance.

    Read the article

  • MySQL & relational databases: How to handle sharding/splitting at the application level?

    - by Industrial
    Hi everybody, I have thought a bit about sharding tables, since partitioning cannot be done with foreign keys in a MySQL table. Maybe there's an option to switch to a different relational database that features both, but I don't see that as an option right now. So, the sharding idea seems like a pretty decent thing. But what's a good approach to doing this at the application level? I am guessing that a starting point would be to suffix each table name with the maximum value of the primary key it holds, something like products_4000000, products_8000000 and products_12000000. The application would then have to check, with a simple if-statement, whether the id (PK) being requested is smaller than four, eight or twelve million before doing any actual database calls. So, is this a step in the right direction or are we doing something really stupid?
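    To illustrate what I mean, the routing I have in mind is roughly the sketch below (Java-flavoured; the table name and the shard size of four million are just the example values from above):

        public class ShardRouter {

            private static final long SHARD_SIZE = 4000000L;   // rows per shard, example value from above

            // Maps a primary key to the table that holds it, e.g. id 5200000 -> products_8000000.
            public static String tableFor(long id) {
                long upperBound = ((id - 1) / SHARD_SIZE + 1) * SHARD_SIZE;
                return "products_" + upperBound;
            }

            public static void main(String[] args) {
                System.out.println(tableFor(123));       // products_4000000
                System.out.println(tableFor(5200000));   // products_8000000
            }
        }

    Every query would then be built against the table name returned by tableFor() before being sent to MySQL.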

    Read the article

  • Python: speeding up retrieving data from an extremely large string

    - by Burninghelix123
    I have a list I converted to a very, very long string, as I am trying to edit it; as you can gather, it's called tempString. It works, it just takes way too long to operate, probably because it is several different regex subs. They are as follows:

        tempString = ','.join(str(n) for n in coords)
        tempString = re.sub(',{2,6}', '_', tempString)
        tempString = re.sub("[^0-9\-\.\_]", ",", tempString)
        tempString = re.sub(',+', ',', tempString)
        clean1 = re.findall(('[-+]?[0-9]*\.?[0-9]+,[-+]?[0-9]*\.?[0-9]+,'
                             '[-+]?[0-9]*\.?[0-9]+'), tempString)
        tempString = '_'.join(str(n) for n in clean1)
        tempString = re.sub(',', ' ', tempString)

    Basically it's a long string containing commas and about 1-5 million sets of 4 floats/ints (a mixture of both is possible):

        -5.65500020981,6.88999986649,-0.454999923706,1,,,-5.65500020981,6.95499992371,-0.454999923706,1,,,

    The 4th number in each set I don't need/want; I'm essentially just trying to split the string into a list with 3 floats in each element, separated by spaces. The above code works flawlessly, but as you can imagine it is quite time-consuming on large strings. I have done a lot of research on here for a solution, but they all seem geared towards words, i.e. swapping out one word for another.

    EDIT: This is the solution I'm currently using:

        def getValues(s):
            output = []
            while s:
                # get the three values you want, discard the 3 commas, and keep the
                # remainder of the string
                v1, v2, v3, _, _, _, s = s.split(',', 6)
                output.append("%s %s %s" % (v1.strip(), v2.strip(), v3.strip()))
            return output

        coords = getValues(tempString)

    Does anyone have any advice to speed this up even further? After running some tests it still takes much longer than I'm hoping for. I've been glancing at NumPy, but I honestly have absolutely no idea how to do the above with it. I understand that after the above has been done and the values are cleaned up I could use them more efficiently with NumPy, but I'm not sure how NumPy could apply to the above. Cleaning 50k sets with the above takes around 20 minutes; I can't imagine how long it would be on my full string of 1 million sets. It's just surprising that the program that originally exported the data took only around 30 seconds for the 1 million sets.

    Read the article

  • MySQL Single Query Benchmarking Strategies

    - by Pepper
    Hello, I have a slow MySQL query in my application that I need to rewrite. The problem is, it's only slow on my production server and only when it's not cached. The first time I run it, it takes 12 seconds; any time after that it takes about 500 milliseconds. Is there an easy way to test this query without it hitting the query cache, so I can see the results of my refactoring? Thanks!
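    For what it's worth, the crude timing loop I have in mind is roughly the sketch below (written in Java/JDBC just for illustration, with placeholder connection details and query). I came across the SQL_NO_CACHE hint, which is supposed to keep the result out of the query cache, but I am not sure that alone gives me repeatable "cold" numbers on the production box:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class QueryTimer {

            public static void main(String[] args) throws Exception {
                // Placeholder connection details and query: substitute the real ones.
                Connection conn = DriverManager.getConnection(
                        "jdbc:mysql://localhost/mydb", "user", "password");
                String sql = "SELECT SQL_NO_CACHE * FROM my_table WHERE some_column = 42";

                Statement stmt = conn.createStatement();
                for (int run = 1; run <= 5; run++) {
                    long start = System.nanoTime();
                    ResultSet rs = stmt.executeQuery(sql);
                    int rows = 0;
                    while (rs.next()) {
                        rows++;   // drain the result set so the full cost is measured
                    }
                    rs.close();
                    long elapsedMs = (System.nanoTime() - start) / 1000000L;
                    System.out.println("run " + run + ": " + rows + " rows in " + elapsedMs + " ms");
                }
                stmt.close();
                conn.close();
            }
        }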

    Read the article

  • Time to start a counter on client-side.

    - by Felipe
    Hi everybody, I'm developing a web application using ASP.NET MVC, and I need a stopwatch (chronometer) on the client side, preprogrammed with 30 seconds and started at a certain moment, using the server's time, because the client's clock cannot be assumed to match the server's. So I'm using jQuery to call the server via JSON and get the time, but it's very stressful because I call the server every second to get the time, something like this:

        $(function() {
            GetTimeByServer();
        });

        function GetTimeByServer() {
            $.getJSON('/Home/Time', null, function(result) {
                if (result.SecondsPending < 30) {
                    // call another function to start a chronometer
                } else {
                    window.setTimeout(GetTimeByServer, 1000); // call again every second
                }
            });
        }

    It works fine, but when I have more than 3 or 4 calls like this, the browser slows down, although it still works. I'd like to know how to get more performance on the client side, or whether there is a better way to do this. Is there any way for the client to listen to the server like a "socket", so it knows when the chronometer should start? PS: Sorry for my English! Thanks, cheers.

    Read the article
