Search Results

Search found 22065 results on 883 pages for 'performance testing'.


  • Are there any tools to optimize the number of consumer and producer threads on a JMS queue?

    - by lindelof
    I'm working on an application that is distributed over two JBoss instances and that produces/consumes JMS messages on several JMS queues. When we configured the application we had to determine which threading model we would use, in particular the number of producing and consuming threads per queue. We did this in a rather ad hoc fashion, but after reading the recent columns by Herb Sutter in Dr. Dobb's (in particular this one) I would like to size our thread pools in a more rigorous manner. Are there any methods/tools to measure the throughput of JMS queues (in particular JBoss Messaging queues) as a function of the number of producing/consuming threads?
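
    Lacking a dedicated tool, one approach is a small harness that drives the queue with a fixed consumer count and reports messages per second, re-run once per candidate pool size. A minimal sketch, assuming a JNDI-visible connection factory and queue (the lookup names are placeholders for your JBoss configuration):

        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.atomic.AtomicLong;
        import javax.jms.Connection;
        import javax.jms.ConnectionFactory;
        import javax.jms.JMSException;
        import javax.jms.MessageConsumer;
        import javax.jms.Queue;
        import javax.jms.Session;
        import javax.naming.InitialContext;

        public class ConsumerThroughputProbe {
            public static void main(String[] args) throws Exception {
                int threads = Integer.parseInt(args[0]);
                InitialContext ctx = new InitialContext();
                ConnectionFactory cf = (ConnectionFactory) ctx.lookup("ConnectionFactory"); // placeholder JNDI name
                Queue queue = (Queue) ctx.lookup("queue/testQueue");                        // placeholder JNDI name
                Connection conn = cf.createConnection();
                conn.start();
                final AtomicLong received = new AtomicLong();
                ExecutorService pool = Executors.newFixedThreadPool(threads);
                for (int i = 0; i < threads; i++) {
                    // JMS sessions are single-threaded, so each worker gets its own.
                    final Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
                    final MessageConsumer consumer = session.createConsumer(queue);
                    pool.submit(new Runnable() {
                        public void run() {
                            try {
                                while (consumer.receive(1000) != null) {
                                    received.incrementAndGet();
                                }
                            } catch (JMSException e) {
                                // worker stops on failure; acceptable for a probe
                            }
                        }
                    });
                }
                long start = System.nanoTime();
                Thread.sleep(60 * 1000); // sample for one minute
                pool.shutdownNow();
                double seconds = (System.nanoTime() - start) / 1e9;
                System.out.printf("%d threads: %.0f msg/s%n", threads, received.get() / seconds);
                conn.close();
            }
        }

    Plot messages per second against thread count and look for the knee; past it, extra threads only add contention.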

    Read the article

  • How to optimize an Oracle query that has TO_CHAR in the WHERE clause for a date

    - by panorama12
    I have a table that contains about 49403459 records. I want to query the table over a date range, say 04/10/2010 to 04/10/2010. However, the dates are stored in the table as timestamps of the form 10-APR-10 10.15.06.000000 AM. As a result, when I do:

        SELECT bunch, of, stuff, create_date
        FROM myTable
        WHERE TO_CHAR(create_date, 'MM/DD/YYYY') >= '04/10/2010'
          AND TO_CHAR(create_date, 'MM/DD/YYYY') <= '04/10/2010'

    I get 529 rows, but in 255.59 seconds! I guess this is because TO_CHAR is applied to EACH record. However, when I do:

        SELECT bunch, of, stuff, create_date
        FROM myTable
        WHERE create_date >= TO_DATE('04/10/2010', 'MM/DD/YYYY')
          AND create_date <= TO_DATE('04/10/2010', 'MM/DD/YYYY')

    I get 0 results in 0.14 seconds. How can I make this query fast and still get valid (529) results? I cannot change the indexes at this point; I believe the index is on the create_date column.
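
    For what it's worth, a common fix (a sketch, not tested against this exact schema) is a half-open range on the raw column, which stays index-friendly and still covers the whole day:

        SELECT bunch, of, stuff, create_date
        FROM myTable
        WHERE create_date >= TO_DATE('04/10/2010', 'MM/DD/YYYY')
          AND create_date <  TO_DATE('04/10/2010', 'MM/DD/YYYY') + 1;

    The second query in the question returns 0 rows because both bounds collapse to midnight of April 10; the half-open upper bound (anything strictly before April 11) picks up the rows whose timestamps fall later in the day, while still allowing an index range scan.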

    Read the article

  • SQL Server uncorrelated subquery very slow

    - by brianberns
    I have a simple, uncorrelated subquery that performs very poorly on SQL Server. I'm not very experienced at reading execution plans, but it looks like the inner query is being executed once for every row in the outer query, even though the results are the same each time. What can I do to tell SQL Server to execute the inner query only once? The query looks like this:

        SELECT *
        FROM Record record0_
        WHERE record0_.RecordTypeFK = 'c2a0ffa5-d23b-11db-9ea3-000e7f30d6a2'
          AND record0_.EntityFK IN (
                SELECT record1_.EntityFK
                FROM Record record1_
                JOIN RecordTextValue textvalues2_
                  ON record1_.PK = textvalues2_.RecordFK
                 AND textvalues2_.FieldFK = '0d323c22-0ec2-11e0-a148-0018f3dde540'
                 AND textvalues2_.Value LIKE 'O%' ESCAPE '~'
          )
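
    One hedged experiment that guarantees the inner query runs exactly once is to materialize its result into a temp table and join against it; a sketch using the same tables:

        SELECT DISTINCT record1_.EntityFK
        INTO #matched
        FROM Record record1_
        JOIN RecordTextValue textvalues2_
          ON record1_.PK = textvalues2_.RecordFK
        WHERE textvalues2_.FieldFK = '0d323c22-0ec2-11e0-a148-0018f3dde540'
          AND textvalues2_.Value LIKE 'O%' ESCAPE '~';

        SELECT record0_.*
        FROM Record record0_
        JOIN #matched m ON m.EntityFK = record0_.EntityFK
        WHERE record0_.RecordTypeFK = 'c2a0ffa5-d23b-11db-9ea3-000e7f30d6a2';

    If this version is fast, the original plan really was re-evaluating the subquery (or choosing a poor semi-join), and better statistics or an index on RecordTextValue(FieldFK, RecordFK) may fix the single-statement form too.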

    Read the article

  • How do I set up a test duplicate of a Django and Postgresql based web application?

    - by cojadate
    Not sure if this is an excessively broad and newbie-ish question for Stack Overflow but here goes: I paid someone else to build a web application for me and now I want to tweak certain aspects of it myself. I learn best by trial and error – changing stuff and seeing what happens. Obviously that's not a great way to treat a live site, so I need to duplicate the site on some kind of test server which I can play with without fear of the consequences. Unfortunately the closest I've come to programming has been creating ActionScript-based websites. I've never touched a database. So I really don't know where to start with setting up a test server. I would really appreciate any advice about where to start. I am completely ignorant and lost here. The web application is built in python/django using a Postgresql database. I use Mac OS X 10.6 if that makes any difference.
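
    In broad strokes, cloning such a site onto a test machine looks like the sketch below (user names, database names, and paths are placeholders for whatever the developer set up):

        # 1. Copy the project code to the test machine (rsync, git, or a plain tarball).
        # 2. Snapshot the production database and load it into a scratch database:
        pg_dump -U produser -Fc proddb > proddb.dump
        createdb -U testuser testdb
        pg_restore -U testuser -d testdb proddb.dump
        # 3. Point the project's settings.py at testdb instead of proddb.
        # 4. Run Django's built-in development server, isolated from the live site:
        python manage.py runserver

    The development server and the scratch database can then be broken freely without touching production.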

    Read the article

  • Should I aim for fewer HTTP requests or more cacheable CSS files?

    - by Jonathan Hanson
    We're being told that fewer HTTP requests per page load is a Good Thing. The extreme form of that for CSS would be a single, unique CSS file per page, with any shared site-wide styles duplicated in each file. But there's a trade-off: if you keep shared global CSS in separate files, they can be cached once when the front page is loaded and then re-used across pages, reducing the size of the page-specific CSS files. So which is better in real-world practice: smaller page-specific files plus cacheable shared files, or fewer HTTP requests via fewer-but-larger files?

    Read the article

  • Querying a huge database table takes too much time in MySQL

    - by Vijay
    Hi all, I am running SQL queries on a MySQL table that has 110M+ unique records per day. Problem: whenever I run any query with a WHERE clause it takes at least 30-40 minutes. Since I want to generate most of the data on the next day, I need access to the whole table. Could you please guide me to optimize / restructure the deployment model?

    Site description:

        mysql Ver 14.12 Distrib 5.0.24, for pc-linux-gnu (i686) using readline 5.0
        4 GB RAM, dual-CPU dual-core 3 GHz, RHEL 3

    my.cnf contents:

        [mysqld]
        datadir=/data/mysql/data/
        socket=/tmp/mysql.sock
        sort_buffer_size = 2000000
        table_cache = 1024
        key_buffer = 128M
        myisam_sort_buffer_size = 64M
        # Default to using old password format for compatibility with mysql 3.x
        # clients (those using the mysqlclient10 compatibility package).
        old_passwords=1

        [mysql.server]
        user=mysql
        basedir=/data/mysql/data/

        [mysqld_safe]
        err-log=/data/mysql/data/mysqld.log
        pid-file=/data/mysql/data/mysqld.pid

    DB table details:

        CREATE TABLE `RAW_LOG_20100504` (
          `DT` date default NULL,
          `GATEWAY` varchar(15) default NULL,
          `USER` bigint(12) default NULL,
          `CACHE` varchar(12) default NULL,
          `TIMESTAMP` varchar(30) default NULL,
          `URL` varchar(60) default NULL,
          `VERSION` varchar(6) default NULL,
          `PROTOCOL` varchar(6) default NULL,
          `WEB_STATUS` int(5) default NULL,
          `BYTES_RETURNED` int(10) default NULL,
          `RTT` int(5) default NULL,
          `UA` varchar(100) default NULL,
          `REQ_SIZE` int(6) default NULL,
          `CONTENT_TYPE` varchar(50) default NULL,
          `CUST_TYPE` int(1) default NULL,
          `DEL_STATUS_DEVICE` int(1) default NULL,
          `IP` varchar(16) default NULL,
          `CP_FLAG` int(1) default NULL,
          `USER_LOCATE` bigint(15) default NULL
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 MAX_ROWS=200000000;

    Thanks in advance!
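
    As one hedged starting point: the table has no indexes at all, so every WHERE clause is a full scan of 110M rows. An index matching whatever column the reports actually filter on turns that into a range read (USER below is just an example column), and a bigger key buffer helps MyISAM keep the index in memory:

        -- Build the index after the day's bulk load, not before, so inserts stay fast.
        ALTER TABLE RAW_LOG_20100504 ADD INDEX idx_user (`USER`);

        -- 128M of key_buffer is small next to an index over 110M rows;
        -- e.g. give it 1GB of the 4GB RAM (tune to fit your workload).
        SET GLOBAL key_buffer_size = 1073741824;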

    Read the article

  • How to find the worst performing queries in MS SQL Server 2008?

    - by Thomas Bratt
    How to find the worst performing queries in MS SQL Server 2008? I found the following example but it does not seem to work:

        SELECT TOP 5 obj.name, max_logical_reads, max_elapsed_time
        FROM sys.dm_exec_query_stats a
        CROSS APPLY sys.dm_exec_sql_text(sql_handle) hnd
        INNER JOIN sys.sysobjects obj ON hnd.objectid = obj.id
        ORDER BY max_logical_reads DESC

    Taken from: http://www.sqlservercurry.com/2010/03/top-5-costly-stored-procedures-in-sql.html
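
    For comparison, a variant worth trying that skips the sysobjects join (that join only matches statements belonging to stored procedures; for ad hoc batches objectid is NULL, so the INNER JOIN silently drops every row) and pulls the statement text directly; a sketch, column choices to taste:

        SELECT TOP 5
               qs.total_elapsed_time,
               qs.execution_count,
               qs.max_logical_reads,
               st.text
        FROM sys.dm_exec_query_stats qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
        ORDER BY qs.total_elapsed_time DESC;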

    Read the article

  • Using tarantula to test a Rails app

    - by Benjamin Oakes
    I'm using Tarantula to test a Rails app I'm developing. It works pretty well, but I'm getting some strange 404s. After looking into it, I found that Tarantula follows DELETE requests (destroy actions on controllers) throughout my app as it tests. Since Tarantula fetches the index action first (and seems to keep a list of unvisited URLs), it eventually tries to follow a link to a resource that it had deleted itself... and gets a 404. Tarantula is right that the URL no longer exists (because it deleted the resource), but flagging that as an error is hardly the behavior I would expect. I'm basically just using the Rails scaffolding and this problem is happening. How do I prevent Tarantula from doing this? (Or is there a better way of specifying the links?) Update: still searching, but I found a relevant thread here: http://github.com/relevance/tarantula/issues#issue/3 The root cause seems to be Rails' heavy reliance on JavaScript for delete links (see also http://thelucid.com/2010/03/15/rails-can-we-please-have-a-delete-action-by-default/)
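
    If memory serves, Tarantula can be told that 404s are expected for certain URL patterns, which covers resources the crawler deleted itself. A sketch (the allow_404_for call is from Tarantula's documented usage, but the exact API may differ by version, and the widgets pattern is a placeholder):

        def test_tarantula_crawls_site
          t = tarantula_crawler(self)
          # Resources Tarantula destroys during the crawl will 404 afterwards;
          # whitelist their URLs instead of failing the run.
          t.allow_404_for %r{/widgets/\d+}
          t.crawl "/"
        end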

    Read the article

  • Decent profiler for Windows?

    - by olliej
    Does Windows have any decent sampling (i.e. non-instrumenting) profilers available? Preferably something akin to Shark on Mac OS, although I am willing to accept that I am going to have to pay for such a profiler on Windows. I've tried the profiler in VS Team Suite and was not overly impressed, and was wondering if there were any other good ones. [Edit: Erk, I forgot to say this is for C/C++ rather than .NET; sorry for any confusion]

    Read the article

  • How to unit-test a Wicket component with AbstractAjaxTimerBehavior?

    - by Juha Syrjälä
    I have a Wicket panel with an AbstractAjaxTimerBehavior that I'd like to unit test. How can I trigger an Ajax event during the unit test that ends up calling AbstractAjaxTimerBehavior's onTimer(AjaxRequestTarget target) method?

        behavior = new AbstractAjaxTimerBehavior(Duration.seconds(pollingPeriodInSeconds)) {
            protected void onTimer(AjaxRequestTarget target) {
                // how to unit test this?
            }
        };
        add(behavior);
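
    One approach worth sketching against Wicket 1.4's test support (method availability may vary by version): fetch the behavior from the component under test and fire it through WicketTester, which simulates the Ajax callback that ends up in onTimer(). The host page, panel class, component id, and getTimerBehavior() accessor below are all hypothetical stand-ins:

        WicketTester tester = new WicketTester();
        tester.startPage(PageHostingMyTimerPanel.class);             // hypothetical page
        MyTimerPanel panel = (MyTimerPanel)
                tester.getComponentFromLastRenderedPage("panel");    // hypothetical id
        AbstractAjaxTimerBehavior timer = panel.getTimerBehavior();  // hypothetical getter
        tester.executeBehavior(timer);  // simulates the timer callback; onTimer() runs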

    Read the article

  • How to unit test private methods in BDD / TDD?

    - by robert_d
    I am trying to program according to Behavior Driven Development, which states that no line of code should be written without first writing a failing unit test. My question is: how do I use BDD with private methods? How can I unit test private methods? Is there a better solution than:

    - making private methods public first, then making them private again once I have written a public method that uses them; or
    - in C#, making all private methods internal and using the InternalsVisibleTo attribute?

    Robert

    Read the article

  • JUnit terminates child threads

    - by Marco
    Hi to all, when I test the execution of a method that creates a child thread, the JUnit test ends before the child thread does and kills it. How do I force JUnit to wait for the child thread to complete its execution? Thanks
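
    JUnit doesn't wait for stray threads, so the test has to. The usual fix is to keep a handle to the thread and join it, or have the worker signal completion through a latch. A minimal sketch of the latter (the body of run() stands in for the code under test):

        import java.util.concurrent.CountDownLatch;
        import java.util.concurrent.TimeUnit;
        import static org.junit.Assert.assertTrue;
        import org.junit.Test;

        public class ChildThreadTest {
            @Test
            public void childThreadFinishesBeforeAssertions() throws Exception {
                final CountDownLatch done = new CountDownLatch(1);
                Thread child = new Thread(new Runnable() {
                    public void run() {
                        try {
                            // ... exercise the code under test here ...
                        } finally {
                            done.countDown(); // signal completion even on failure
                        }
                    }
                });
                child.start();
                // Block the JUnit thread until the child finishes (with a safety timeout).
                assertTrue("child thread timed out", done.await(10, TimeUnit.SECONDS));
            }
        }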

    Read the article

  • Creating a C++ client app for an abstract Windows server - how to manage TCP connection speed to the server?

    - by Kabumbus
    So we have a server with some address, port and IP; we are developing that server, so we can implement whatever we need on it to help. What are standard/best practices for managing data transfer speed between a C++ Windows client app and the (C++) server? My main point is how to find out how much data can be uploaded/downloaded from/to the client over his slow network to my relatively super-fast server (I need it to set up the bit rate of his live audio/video stream). My try at explaining point 3: we do not care how fast our server is; it is always faster than needed. We care about the client trying to stream his media out to our server. He streams live video data, encoded via ffmpeg, to our server. But he has, say, ADSL with 500 kb/s of outgoing traffic. He also runs ICQ or whatever, so he has less than 500 kb/s actually available. And he wants to stream live video! So we need to set up our ffmpeg to encode video with respect to the bit rate the user can really provide. We develop both the server side and the client side. We need a way of finding out how much the user can upload per second at any given moment (the value can change dynamically over time).
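
    On the client side, the crude-but-workable approach is to time a fixed-size probe upload right before streaming starts and hand the measured rate to the encoder. A sketch (Winsock, error handling trimmed; sock is assumed to be already connected to a measurement endpoint you control):

        #include <winsock2.h>
        #include <cstddef>
        #include <vector>

        // Returns estimated uplink in kilobits per second, or 0.0 on failure.
        double ProbeUploadKbps(SOCKET sock, std::size_t probeBytes = 256 * 1024) {
            std::vector<char> chunk(8192, 0);
            std::size_t sent = 0;
            DWORD start = GetTickCount();
            while (sent < probeBytes) {
                int n = send(sock, &chunk[0], static_cast<int>(chunk.size()), 0);
                if (n <= 0) return 0.0;          // connection dropped mid-probe
                sent += static_cast<std::size_t>(n);
            }
            DWORD elapsedMs = GetTickCount() - start;
            if (elapsedMs == 0) elapsedMs = 1;   // avoid divide-by-zero on tiny probes
            return (sent * 8.0 / 1000.0) / (elapsedMs / 1000.0);
        }

    Re-run the probe periodically (or watch how quickly the send buffer drains during streaming) and feed the result into ffmpeg's target bit rate, leaving headroom for the user's other traffic.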

    Read the article

  • XSLT 1.0: restrict entries in a nodeset

    - by Mike
    Hi, being relatively new to XSLT, I have what I hope is a simple question. I have some flat XML files, which can be pretty big (e.g. 7 MB), that I need to make 'more hierarchical'. For example, the flat XML might look like this:

        <D0011>
        ....
        ....

    and it should end up looking like this:

        <D0011>
        ....
        ....

    I have a working XSLT for this; it essentially gets a node-set of all the b elements and then uses the following-sibling axis to get a node-set of the nodes following the current b node (i.e. following-sibling::*[position() = $nodePos]). Recursion is then used to add the siblings into the result tree until another b element is found (I have parameterised it, of course, to make it more generic). I also have a solution that just takes the position of the next b node and selects the nodes after it one by one (using recursion) via a *[position() = $nodePos] selection. The problem is that the time to execute the transformation increases unacceptably with the size of the XML file. Looking into it with XML Spy, it seems that it is the following-sibling axis and the position() tests that take the time in the two respective methods. What I really need is a way of restricting the number of nodes in the above selections so fewer comparisons are performed: every time the position is tested, every node in the node-set is checked to see whether its position is the right one. Is there a way to do that? Any other suggestions? Thanks, Mike
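
    The usual XSLT 1.0 cure for this is a key: index every non-b element by the generated id of its nearest preceding b, so each group is retrieved by a single lookup instead of position arithmetic. A sketch, assuming the document is a flat sequence of b elements each followed by the rows that belong to it (element names inferred from the description):

        <xsl:key name="after-b" match="*[not(self::b)]"
                 use="generate-id(preceding-sibling::b[1])"/>

        <!-- Each b wraps everything between itself and the next b. -->
        <xsl:template match="b">
          <b>
            <xsl:copy-of select="@* | node()"/>
            <xsl:copy-of select="key('after-b', generate-id())"/>
          </b>
        </xsl:template>

        <!-- Suppress the grouped elements in the default pass; they are emitted above. -->
        <xsl:template match="*[not(self::b)][preceding-sibling::b]"/>

    This replaces the O(n^2) sibling walk with one pass to build the key table and one lookup per group, which usually makes the runtime roughly linear in document size.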

    Read the article

  • Emulating Test::More::done_testing - what is the most idiomatic way?

    - by DVK
    I have to build unit tests in an environment with a very old version of Test::More (perl 5.8 with $Test::More::VERSION being '0.80'), which predates the addition of done_testing(). Upgrading to a newer Test::More is out of the question for practical reasons. And I am trying to avoid using no_plan - it's generally a bad idea not to catch when your unit test dies prematurely. What is the most idiomatic way of running a configurable number of tests, assuming no no_plan or done_testing() is used?

    Details: my unit tests usually take the form of:

        use Test::More;

        my @test_set = (
             [ "Test #1", $param1, $param2, ... ]
            ,[ "Test #2", $param1, $param2, ... ]
            # ,...
        );

        foreach my $test (@test_set) {
            run_test($test);
        }

        sub run_test {
            # $expected_tests += count_tests($test);
            ok(test1($test)) || diag("Test1 failed");
            # ...
        }

    The standard approach of use Test::More tests => 23; or BEGIN { plan tests => 23 } does not work, since both are executed before @test_set is known. My current approach involves making @test_set global and defining it in the BEGIN {} block, as follows:

        use Test::More;

        BEGIN {
            our @test_set = ();    # same set of tests as above
            my $expected_tests = 0;
            foreach my $test (@test_set) {
                $expected_tests += count_tests($test);
            }
            plan tests => $expected_tests;
        }

        our @test_set;    # Must do!!! Since the first "our" was in BEGIN's scope :(

        foreach my $test (@test_set) {    # same as before
            run_test($test);
        }

        sub run_test { }    # same as before

    I feel this can be done more idiomatically but am not certain how to improve it. Chief among the smells is the duplicate our @test_set declaration - in BEGIN {} and after it.
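
    For the record, Test::More (including 0.80) also accepts a plan issued at run time, as long as it happens before the first assertion; that removes the need for the BEGIN block and the duplicated declaration entirely. A minimal sketch (count_tests here is a stand-in for whatever per-case counting logic applies):

        use strict;
        use warnings;
        use Test::More;    # no plan yet

        my @test_set = (
            [ "Test #1", "some param" ],
            [ "Test #2", "another param" ],
        );

        sub count_tests { return 1 }    # placeholder

        my $expected = 0;
        $expected += count_tests($_) for @test_set;
        plan tests => $expected;    # legal at run time, before any ok()

        for my $test (@test_set) {
            my ($name, $param) = @$test;
            ok(defined $param, $name);
        }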

    Read the article

  • How to clear APC cache entries?

    - by lo_fye
    I need to clear all APC cache entries when I deploy a new version of the site. The apc.php script has a button for clearing the opcode cache, but I don't see buttons for clearing all User Entries, all System Entries, or all Per-Directory Entries. Is it possible to clear all cache entries via the command line, or some other way?
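
    A sketch of the usual deploy hook: apc_clear_cache() takes an optional cache type, and 'user' covers entries made with apc_store(). One caveat: a CLI invocation clears the CLI process's own cache, not the web server's, so the script is normally dropped behind the web root and hit over HTTP during deploys (lock it down before using it):

        <?php
        // clear_apc.php - hit via HTTP at deploy time; restrict access!
        apc_clear_cache();        // opcode/system cache
        apc_clear_cache('user');  // user entries (apc_store / apc_fetch)
        echo "APC caches cleared\n";

    A deploy script can then simply run: curl http://localhost/clear_apc.php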

    Read the article

  • How do I test how customers use my Cocoa application?

    - by John Gallagher
    I'm interested in finding out how customers use features in my Cocoa application. I want to build up statistics on which features people use and how they use them, so that I can measure the value of the features I'm implementing. This feedback will of course be off by default and anonymous. Does anyone know of any existing frameworks that achieve this, without me having to write it all from scratch?

    Read the article

  • How can I include utility functions from another file into a Sproutcore unit test file?

    - by Lauri
    Let's say I have a few utility functions in the file tests/utils/functions.js. I would like to use these functions from several unit test files. However, I'm not able to: the Sproutcore build system does not include any external files into the HTML page used to run the unit tests; only application code and the code from the unit tests being run are included. So is it possible to somehow include JavaScript files for use in unit test files in Sproutcore? I could move functions.js into some directory inside my application to be able to use the functions, but this is not what I want to do, as the utility functions are useless in the final production build and would only make my application larger.

    Read the article

  • How to test a site rigorously?

    - by Sarfraz
    Hello, I recently created a big portal site. It's time to put it to the test. How do you test a site rigorously? What are the ways and tools for that? Can we mimic hundreds of virtual users visiting the site to see how it handles load? The test should cover both security and speed. Thanks in advance.
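
    For the load-handling half, one long-standing tool is ApacheBench, which ships with Apache httpd; e.g. simulating 100 concurrent virtual users over 1000 total requests (the URL is a placeholder):

        ab -n 1000 -c 100 http://www.example.com/

    JMeter covers richer multi-page scenarios; security testing is a separate exercise and usually starts with a scanner plus a manual review of input handling.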

    Read the article

  • Quickest way to compare a bunch of arrays or lists of values

    - by zapping
    Can you please let me know the quickest and most efficient way to compare a large set of values? There is a list of parent codes (strings), and each code has a series of child values (strings). The child lists have to be compared with each other to find duplicates and count how many times they repeat:

        code1(code1_value1, code1_value2, code3_value3, ..., code1_valueN);
        code2(code2_value1, code1_value2, code2_value3, ..., code2_valueN);
        code3(code2_value1, code3_value2, code3_value3, ..., code3_valueN);
        ...
        codeN(codeN_value1, codeN_value2, codeN_value3, ..., codeN_valueN);

    The lists are huge: say 100 parent codes, each with about 250 values. There will not be duplicates within a single code list. I'm doing it in Java, and the solution I could figure out is: store the values of the first code list as codeMap.put(codeValue, duplicateCount), with the count initialized to 0, then compare the rest of the values against this; if a value is in the map, increment the count, otherwise add it to the map. The downside of this is getting the duplicates out afterwards: another iteration over a very large map is needed. An alternative is to maintain a second hashmap just for the duplicates, duplicateCodeMap.put(codeValue, duplicateCount), and change the initial hashmap to codeMap.put(codeValue, codeValue). Speed is the requirement. Hope one of you can help me with it.
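
    A sketch of the map-based idea: one HashMap over all lists, counting occurrences, then a filter for values seen more than once. With ~100 lists x 250 strings (25,000 entries) both passes are cheap, and the second pass runs over the map, not the lists again:

        import java.util.Arrays;
        import java.util.HashMap;
        import java.util.LinkedHashMap;
        import java.util.List;
        import java.util.Map;

        public class DuplicateCounter {
            // For each value, how many lists it appears in (no duplicates within a list).
            public static Map<String, Integer> duplicates(List<List<String>> codeLists) {
                Map<String, Integer> counts = new HashMap<String, Integer>();
                for (List<String> list : codeLists) {
                    for (String value : list) {
                        Integer c = counts.get(value);
                        counts.put(value, c == null ? 1 : c + 1);
                    }
                }
                Map<String, Integer> dups = new LinkedHashMap<String, Integer>();
                for (Map.Entry<String, Integer> e : counts.entrySet()) {
                    if (e.getValue() > 1) {
                        dups.put(e.getKey(), e.getValue());  // value -> number of lists
                    }
                }
                return dups;
            }

            public static void main(String[] args) {
                List<List<String>> lists = Arrays.asList(
                        Arrays.asList("a", "b", "c"),
                        Arrays.asList("b", "c", "d"));
                System.out.println(duplicates(lists));  // prints {b=2, c=2}
            }
        }

    This is O(total values) for counting plus O(distinct values) for extraction, which is about as good as it gets without sorting.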

    Read the article

  • Rewarding iOS app beta testers with in-app purchase?

    - by Partridge
    My iOS app is going to be free, but with additional functionality enabled via in-app purchase. Currently beta testers are doing a great job finding bugs, and I want to reward them for their hard work. The least I can do is give them the full version of the app so that they don't have to buy the functionality themselves. However, I'm not sure of the best way to do this. There do not appear to be promo codes for in-app purchases, so I can't just email codes out. I have all the tester device UDIDs, so when the app launches I could grab the device UDID and compare it to an internal list of 'approved' UDIDs. Is this what other developers do? My concerns:

    - The in-app purchase content would not be tied to their iTunes account, so if a beta tester moves to a new device they would not be able to enable the content unless I released a new build in the App Store with their new UDID. So they may have to buy it eventually anyway.
    - Having an internal list leaves a hole for hackers to modify the list and add themselves to it.

    What would you do?

    Read the article

  • Slow query with unexpected scan

    - by zerkms
    Hello, I have this query:

        SELECT *
        FROM SAMPLE
        INNER JOIN TEST   ON SAMPLE.SAMPLE_NUMBER = TEST.SAMPLE_NUMBER
        INNER JOIN RESULT ON TEST.TEST_NUMBER = RESULT.TEST_NUMBER
        WHERE SAMPLED_DATE BETWEEN '2010-03-17 09:00' AND '2010-03-17 12:00'

    The biggest table here is RESULT, which contains 11.1M records; the other two tables are about 1M each. The query runs slowly (more than 10 minutes) and returns about 800 records. The execution plan shows a clustered index scan over all 11M records. RESULT.TEST_NUMBER is a clustered primary key. If I change 2010-03-17 09:00 to 2010-03-17 10:00, I get about 40 records; it executes in 300 ms, and the plan shows a clustered index seek. If I replace * in the SELECT clause with RESULT.TEST_NUMBER (covered by the index), then everything becomes fast in the first case too. This points to disk I/O issues, but it doesn't explain why the plan changes. So, any ideas?
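
    One hedged experiment: the flip from seek to scan usually means the optimizer estimated that the wider range returns too many rows for row-by-row lookups to pay off, so it switched join strategies. Forcing the nested-loops plan back (and trimming the select list; RESULT_VALUE below is a hypothetical column name) confirms or refutes that:

        SELECT SAMPLE.*, TEST.*, RESULT.TEST_NUMBER, RESULT.RESULT_VALUE  -- only what's needed
        FROM SAMPLE
        INNER JOIN TEST   ON SAMPLE.SAMPLE_NUMBER = TEST.TEST_NUMBER
        INNER JOIN RESULT ON TEST.TEST_NUMBER = RESULT.TEST_NUMBER
        WHERE SAMPLED_DATE BETWEEN '2010-03-17 09:00' AND '2010-03-17 12:00'
        OPTION (LOOP JOIN);  -- nudge the optimizer back to the seek-per-row plan

    If the hinted version matches the fast case, fresher statistics on SAMPLED_DATE (or a narrower select list) is the cleaner long-term fix than leaving the hint in.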

    Read the article

  • List of divisors of an integer n (Haskell)

    - by Code-Guru
    I currently have the following function to get the divisors of an integer:

        -- All divisors of a number
        divisors :: Integer -> [Integer]
        divisors 1 = [1]
        divisors n = firstHalf ++ secondHalf
          where
            firstHalf    = filter (divides n) (candidates n)
            secondHalf   = filter (\d -> n `div` d /= d)
                                  (map (n `div`) (reverse firstHalf))
            candidates n = takeWhile (\d -> d * d <= n) [1..n]

    I ended up adding the filter to secondHalf because a divisor was repeated when n is the square of a prime number. This seems like a very inefficient way to solve the problem, so I have two questions: how do I measure whether this really is a bottleneck in my algorithm? And if it is, how do I go about finding a better way to avoid repetitions when n is a square of a prime?
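
    On the measurement question, criterion is the usual tool; a sketch (assumes the criterion package is installed and the divisors definition above is in scope):

        import Criterion.Main

        main :: IO ()
        main = defaultMain
          [ bench "divisors (square of a prime)" (nf divisors (1000003 * 1000003))
          , bench "divisors (highly composite)"  (nf divisors 735134400)
          ]

    One note on the duplicate itself: it appears for any perfect square, not just squares of primes, since the square root lands in both halves; the filter in secondHalf is the standard guard for exactly that case.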

    Read the article
