Search Results

Search found 22065 results on 883 pages for 'performance testing'.

  • SQL Server 2000 Stored Procedure Prevents Parallelism or Something?

    - by user187305
    I have a huge, disgusting stored procedure that wasn't slow a couple of months ago, but now is. I barely know what this thing does and I am in no way interested in rewriting it. I do know that if I take the body of the stored procedure, declare/set the values of the parameters, and run it in Query Analyzer, it runs more than 20x faster. From the internet, I've read that this is probably due to a bad cached query plan. So I've tried running the sp with WITH RECOMPILE after the EXEC, and I've also tried putting WITH RECOMPILE inside the sp, but neither of those helped even a little bit. When I look at the execution plan of the sp vs. the query, the biggest difference is that the sp has Parallelism operations all over the place and the query doesn't have any. Could this be the cause of the difference in speed? Thank you, any ideas would be great... I'm stuck.
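
    Two things may be worth trying here, sketched below: masking the parameters in local variables (a common workaround for parameter sniffing on SQL Server 2000, which has no statement-level recompile hint) and forcing a serial plan with a MAXDOP hint to test whether the parallel plan really is the problem. The procedure, parameter, and table names are placeholders, not from the question.

      -- Hypothetical names throughout; only the two workarounds matter.
      CREATE PROCEDURE dbo.BigUglyProc
          @CustomerId INT
      AS
      BEGIN
          -- Workaround 1: copy the parameter into a local variable so the
          -- optimizer cannot sniff the compile-time value and cache a plan
          -- that only suits that value.
          DECLARE @LocalCustomerId INT
          SET @LocalCustomerId = @CustomerId

          SELECT COUNT(*)
          FROM dbo.SomeBigTable
          WHERE CustomerId = @LocalCustomerId
          -- Workaround 2: force a serial plan; if the procedure speeds up,
          -- the Parallelism operators were the culprit.
          OPTION (MAXDOP 1)
      END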

  • Maven + Tomcat acceleration

    - by Bar
    I am writing a web application with Maven in the Eclipse IDE, and use the Tomcat servlet container. So, I run Maven like this: mvn clean compile. It is reasonable that after this operation I must re-run Tomcat so it can reinitialize the context (Sysdeo Tomcat Launcher helps a lot). The problem is that the Maven execution and the subsequent Tomcat re-run take a noticeable amount of time (like 10+ seconds for Maven and 20+ sec. for Tomcat, because of logging, Hibernate mappings, etc.) every time I do it. Is there any automated and faster solution for these two operations? As I see it, a much better solution would be moving only the re-compiled classes to the target dir.
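
    One setup that avoids both steps for plain class changes (a sketch; it assumes a reloadable Tomcat context, and the paths are illustrative): let Eclipse's incremental builder compile straight into the exploded webapp, and let Tomcat reload just the context on change instead of doing a full restart.

      <!-- conf/Catalina/localhost/myapp.xml; docBase is a placeholder path.
           reloadable="true" makes Tomcat watch WEB-INF/classes and reload
           only this context, which is much cheaper than a restart. -->
      <Context docBase="/path/to/project/src/main/webapp" reloadable="true" />

    In Eclipse, point the project's Java build output folder at src/main/webapp/WEB-INF/classes so that, with Build Automatically enabled, compiled classes land where Tomcat watches. A full mvn clean compile is then only needed when the POM or resources change.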

  • In synchronous query calls, one query causes the other query to run slower. Why?

    - by Irchi
    Sorry for the long question, but I think this is an interesting situation and I couldn't find any explanation for it: I was involved in optimizing an application that performed a large number of sequential SELECT and INSERT statements against a single dedicated SQL Server database. The process needs to INSERT a large number of records into a table, but for each of them there are some value mappings, performed using SELECT statements on another table in the same database. For a specific execution, it took 90 minutes to run. I used a profiler (JProfiler - the application is Java-based) to determine how much time each part of the application takes. It showed that 60% of the time was spent on INSERT method calls, and almost 20% on SELECT calls (the rest distributed over other parts). After some trials, I came to this situation: I commented out the INSERT query that took 60% of the time. I was expecting the total run time to be around 35 minutes, as I had removed 60% of the 90 minutes. But the whole process took the same 90 minutes (doing only SELECTs and nothing else) - each SELECT just took longer this time! Everything was running synchronously; there were no async calls, and there was only one single thread of execution. The SELECT and INSERT queries are very simple and don't have anything special, and they are on different tables, but in the same DB. I tested with both the DB on the application machine and on a remote network machine. I can't think of any explanation for this, as the profiler (an application profiler, not SQL Profiler) reported the changes in the method call times, and by removing the INSERT statements the SELECT statements took longer to run. Can anyone give me some kind of explanation of what could have happened? (There can't be cache / query optimization effects, because the queries were run synchronously, in a single thread, and it was far from affecting the cache this much.) I should note that the bottleneck was in SQL Server, which was using most of the CPU time.
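
    Independent of the mystery, the workload shape described above (one SELECT lookup per inserted row, all sequential) is usually dominated by per-statement round trips. Below is a sketch of the standard restructuring, with made-up table and column names: load the mapping table into memory once, then batch the INSERTs. It doesn't explain why the SELECTs slowed down, but it typically removes most of such a 90-minute run either way.

      import java.sql.Connection;
      import java.sql.DriverManager;
      import java.sql.PreparedStatement;
      import java.sql.ResultSet;
      import java.sql.SQLException;
      import java.sql.Statement;
      import java.util.HashMap;
      import java.util.Map;

      public class BatchLoader {
          public static void main(String[] args) throws SQLException {
              // Connection details are placeholders.
              try (Connection conn = DriverManager.getConnection(
                      "jdbc:sqlserver://localhost;databaseName=demo", "user", "pass")) {
                  conn.setAutoCommit(false);

                  // One SELECT to cache the whole mapping table,
                  // instead of one SELECT per inserted record.
                  Map<String, Integer> mapping = new HashMap<>();
                  try (Statement st = conn.createStatement();
                       ResultSet rs = st.executeQuery("SELECT code, id FROM mapping_table")) {
                      while (rs.next()) {
                          mapping.put(rs.getString(1), rs.getInt(2));
                      }
                  }

                  // Batched INSERTs: one round trip per batch, not per row.
                  try (PreparedStatement ps = conn.prepareStatement(
                          "INSERT INTO target_table (mapped_id, payload) VALUES (?, ?)")) {
                      String[] input = {"A", "B", "C"};  // stands in for the real source
                      for (String code : input) {
                          ps.setInt(1, mapping.getOrDefault(code, -1));
                          ps.setString(2, "row for " + code);
                          ps.addBatch();
                      }
                      ps.executeBatch();
                  }
                  conn.commit();
              }
          }
      }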

  • WPF chart optimization: more than a million points

    - by Evgeny
    I have a custom control - a chart sized, for example, 300x300 pixels - with more than one million points (maybe fewer) in it. Unsurprisingly, it currently works very slowly. I am searching for an algorithm that will show only a few of the points with minimal visual difference. I have a link to a component that has exactly the functionality I need (2 million points demo): http://www.mindscape.co.nz/demo/SilverlightElements/demopage.html#/ChartOverviewPage I would be grateful for any materials, links, or thoughts on how to implement such functionality.
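
    One standard algorithm for this (a sketch, independent of any chart library): min/max decimation. A 300-pixel-wide chart can only show 300 columns, so keep just the minimum and maximum of the points that fall into each pixel column; the drawn envelope looks the same as plotting every point, but at most 2 x width points survive.

      using System;
      using System.Collections.Generic;

      static class ChartDecimator
      {
          // Reduce a series to at most two points per pixel column: the min
          // and max of each bucket, which preserves the visual envelope.
          public static List<(double X, double Y)> MinMaxDecimate(
              IReadOnlyList<(double X, double Y)> points, int pixelWidth)
          {
              var result = new List<(double X, double Y)>(pixelWidth * 2);
              int bucketSize = Math.Max(1, points.Count / pixelWidth);
              for (int start = 0; start < points.Count; start += bucketSize)
              {
                  int end = Math.Min(start + bucketSize, points.Count);
                  var min = points[start];
                  var max = points[start];
                  for (int i = start + 1; i < end; i++)
                  {
                      if (points[i].Y < min.Y) min = points[i];
                      if (points[i].Y > max.Y) max = points[i];
                  }
                  // Emit in x order so the resulting polyline stays ordered.
                  if (min.X <= max.X) { result.Add(min); result.Add(max); }
                  else { result.Add(max); result.Add(min); }
              }
              return result;
          }
      }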

  • Why does false invalidate validates_presence_of?

    - by DJTripleThreat
    Ok, steps to reproduce this:

      prompt> rails test_app
      prompt> cd test_app
      prompt> script/generate model event_service published:boolean

    Then go into the migration, add NOT NULL, and default published to false:

      class CreateEventServices < ActiveRecord::Migration
        def self.up
          create_table :event_services do |t|
            t.boolean :published, :null => false, :default => false
            t.timestamps
          end
        end

        def self.down
          drop_table :event_services
        end
      end

    Now migrate your changes and run your tests:

      prompt> rake db:migrate
      prompt> rake

    You should get no errors at this time. Now edit the model so that you validate the presence of published:

      class EventService < ActiveRecord::Base
        validates_presence_of :published
      end

    Now edit the unit test event_service_test.rb:

      require 'test_helper'

      class EventServiceTest < ActiveSupport::TestCase
        test "the truth" do
          e = EventService.new
          e.published = false
          assert e.valid?
        end
      end

    and run rake:

      prompt> rake

    The test will fail. Now set e.published to true and rerun the test. IT WORKS! I think this probably has something to do with the field being boolean, but I can't figure it out. Is this a bug in Rails, or am I doing something wrong?
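
    For context, this is documented Rails behavior rather than a bug: validates_presence_of checks blank?, and false.blank? is true, so a false boolean always fails a presence validation. The usual fix for boolean flags is an inclusion validation, sketched here against the model above:

      class EventService < ActiveRecord::Base
        # false.blank? == true, so presence validation rejects false.
        # Validate that the flag is one of the two booleans instead.
        validates_inclusion_of :published, :in => [true, false]
      end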

  • JUnit Theories: Why can't I use Lists (instead of arrays) as DataPoints?

    - by MatrixFrog
    I've started using the new(ish) JUnit Theories feature for parameterizing tests. If your theory is set up to take, for example, an Integer argument, the Theories test runner picks up any Integers marked with @DataPoint:

      @DataPoint public static Integer number = 0;

    as well as any Integers in arrays:

      @DataPoints public static Integer[] numbers = {1, 2, 3};

    or even methods that return arrays, like:

      @DataPoints public static Integer[] moreNumbers() { return new Integer[] {4, 5, 6}; }

    but not in Lists. The following does not work:

      @DataPoints public static List<Integer> numberList = Arrays.asList(7, 8, 9);

    Am I doing something wrong, or do Lists really not work? Was it a conscious design choice not to allow the use of Lists as data points, or is that just a feature that hasn't been implemented yet? Are there plans to implement it in a future version of JUnit?
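
    A workaround that stays within what the runner is known to accept (a sketch; whether newer JUnit versions take collections directly may vary): keep the List as the source of truth and expose it through a @DataPoints method that returns an array, which the question above confirms is supported.

      import java.util.Arrays;
      import java.util.List;
      import org.junit.experimental.theories.DataPoints;

      public class NumberTheories {
          private static final List<Integer> NUMBER_LIST = Arrays.asList(7, 8, 9);

          // The Theories runner understands arrays, so adapt the List here.
          @DataPoints
          public static Integer[] numberList() {
              return NUMBER_LIST.toArray(new Integer[0]);
          }
      }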

  • How to check restricted access pages for broken links?

    - by kumarsfriends
    Hi all, I was googling for tools that check for broken links in a remote web page. The W3C validator seemed a good one. But I am still unsure how to check pages which are restricted, i.e. pages which I can only access by logging in to the site. Can we do that using the W3C validator? If not, is there any other tool for this?
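
    One approach that doesn't depend on the W3C service (a sketch; the login URL and form field names are site-specific assumptions): log in once with wget, save the session cookie, and crawl with it in spider mode, which checks links without downloading page bodies.

      # Log in and store the session cookie (form field names are hypothetical).
      wget --save-cookies cookies.txt --keep-session-cookies \
           --post-data 'username=me&password=secret' \
           https://example.com/login -O /dev/null

      # Crawl with the saved cookie; --spider follows links without saving
      # pages, and broken links show up as errors in linkcheck.log.
      wget --load-cookies cookies.txt --spider --recursive --level=5 \
           --output-file=linkcheck.log https://example.com/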

  • Django: IE doesn't load localhost or loads very SLOWLY

    - by reedvoid
    I'm just starting to learn Django, building a project on my computer running Windows 7 64-bit, Python 2.7, Django 1.3. Basically, whatever I write loads in Chrome and Firefox instantly. But IE (version 9) just stalls there and does nothing. I can load "http://127.0.0.1:8000" in IE and leave the computer on for hours and it doesn't load. Sometimes, when I refresh a couple of times or restart IE, it'll work. If I change something in the code, again, Chrome and Firefox reflect the change instantly, whereas IE doesn't - if it loads the page at all. What is going on? I'm losing my mind here....

  • Nested loop traversing arrays

    - by alecco
    There are 2 very big series of elements, the second 100 times bigger than the first. For each element of the first series, there are 0 or more elements in the second series. This can be traversed and processed with 2 nested loops. But the unpredictability of the number of matching elements for each member of the first array makes things very, very slow. The actual processing of the 2nd series of elements involves logical AND (&) and a population count. I couldn't find good optimizations using C, but I am considering doing inline asm, doing rep* mov* or similar for each element of the first series and then doing the batch processing of the matching bytes of the second series, perhaps in buffers of 1MB or something. But the code would get quite messy. Does anybody know of a better way? C preferred, but x86 ASM is OK too. Many thanks! Sample/demo code with a simplified problem below; the first series are "people" and the second series are "events", for clarity's sake. (The original problem is actually 100m and 10,000m entries!)

      #include <stdio.h>
      #include <stdint.h>

      #define PEOPLE 1000000   // 1m
      struct Person {
          uint8_t age;  // Filtering condition
          uint8_t cnt;  // Number of events for this person in E
      } P[PEOPLE];      // Each has 0 or more bytes with bit flags

      #define EVENTS 100000000 // 100m
      uint8_t P1[EVENTS];      // Property 1 flags
      uint8_t P2[EVENTS];      // Property 2 flags

      void init_arrays() {
          for (int i = 0; i < PEOPLE; i++) {  // just some stuff
              P[i].age = i & 0x07;
              P[i].cnt = i % 220;             // assert( sum < EVENTS );
          }
          for (int i = 0; i < EVENTS; i++) {
              P1[i] = i % 7;                  // just some stuff
              P2[i] = i % 9;                  // just some other stuff
          }
      }

      int main(int argc, char *argv[]) {
          uint64_t sum = 0, fcur = 0;
          int age_filter = 7; // just some
          init_arrays();      // Init P, P1, P2
          for (int64_t p = 0; p < PEOPLE; p++) {
              if (P[p].age < age_filter) {
                  for (int64_t e = 0; e < P[p].cnt; e++, fcur++)
                      sum += __builtin_popcount( P1[fcur] & P2[fcur] );
              } else {
                  fcur += P[p].cnt; // skip this person's events
              }
          }
          printf("(dummy %ld %ld)\n", sum, fcur);
          return 0;
      }

    compiled with:

      gcc -O5 -march=native -std=c99 test.c -o test
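
    One cheap direction to try before dropping to asm (a sketch against the same demo arrays; the actual speedup is an assumption to be measured): widen the inner work from one byte to eight by combining the flag bytes as 64-bit words, so each popcount covers 8 events instead of 1, with byte-at-a-time handling for the unaligned head and tail of each person's range.

      #include <stdint.h>
      #include <string.h>

      // Popcount of P1[start..start+len) & P2[start..start+len),
      // processing 8 bytes per iteration where possible.
      uint64_t range_popcount(const uint8_t *p1, const uint8_t *p2,
                              uint64_t start, uint64_t len) {
          uint64_t sum = 0, i = start, end = start + len;
          // Head: byte-at-a-time until the index is 8-aligned.
          for (; i < end && (i & 7); i++)
              sum += __builtin_popcount(p1[i] & p2[i]);
          // Body: 8 bytes per popcount.
          for (; i + 8 <= end; i += 8) {
              uint64_t a, b;
              memcpy(&a, p1 + i, 8);  // memcpy avoids unaligned-access UB
              memcpy(&b, p2 + i, 8);
              sum += __builtin_popcountll(a & b);
          }
          // Tail.
          for (; i < end; i++)
              sum += __builtin_popcount(p1[i] & p2[i]);
          return sum;
      }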

  • Slowing process creation under Java?

    - by oconnor0
    I have a single, large-heap (up to 240GB, though in the 20-40GB range for most of this phase of execution) JVM [1] running under Linux [2] on a server with 24 cores. We have tens of thousands of objects that have to be processed by an external executable, and the data created by those executables then has to be loaded back into the JVM. Each executable produces about half a megabyte of data (on disk), which is of course larger once read back in after the process finishes. Our first implementation was to have each executable handle only a single object. This involved spawning twice as many executables as we had objects (since we called a shell script that called the executable). Our CPU utilization would start off high, but not necessarily at 100%, and slowly worsen. As we began measuring to see what was happening, we noticed that the process creation time [3] continually slows. While starting at sub-second times, it would eventually grow to take a minute or more. The actual processing done by the executable usually takes less than 10 seconds. Next we changed the executable to take a list of objects to process, in an attempt to reduce the number of processes created. With batch sizes of a few hundred (~1% of our current sample size), the process creation times start out around 2 seconds and grow to around 5-6 seconds. Basically, why is it taking so long to create these processes as execution continues?

    [1] Oracle JDK 1.6.0_22
    [2] Red Hat Enterprise Linux Advanced Platform 5.3, Linux kernel 2.6.18-194.26.1.el5 #1 SMP
    [3] Creation of the ProcessBuilder object, redirecting the error stream, and starting it.
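
    One plausible contributor (a guess consistent with the symptoms, not a diagnosis): on this kernel/JDK combination, starting a child forks the whole JVM, and forking a 20-40GB address space means copying ever-larger page tables, so spawn cost grows with the heap. A pattern that sidesteps this, sketched below with a placeholder command: start one small, long-lived helper process early, while the heap is still small, and feed it command lines over stdin, so the expensive fork happens once instead of tens of thousands of times.

      import java.io.BufferedWriter;
      import java.io.IOException;
      import java.io.OutputStreamWriter;

      public class SpawnBroker {
          private final Process shell;
          private final BufferedWriter in;

          // Start the helper before the heap grows; every later "spawn"
          // is just a line of text, not a fork of the big JVM.
          public SpawnBroker() throws IOException {
              shell = new ProcessBuilder("/bin/sh").redirectErrorStream(true).start();
              in = new BufferedWriter(new OutputStreamWriter(shell.getOutputStream()));
          }

          public void run(String commandLine) throws IOException {
              in.write(commandLine);
              in.newLine();
              in.flush();  // the shell executes each line as it arrives
          }

          public static void main(String[] args) throws IOException {
              SpawnBroker broker = new SpawnBroker();
              broker.run("echo hello from the broker"); // placeholder command
          }
      }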

  • Can a program written in C be faster than one written in OCaml and translated to C?

    - by Ole Jak
    So I have a cool image processing algorithm. I have written it in OCaml, and it performs well. I know I can compile it as C code with a command like: ocamlc -output-obj -o foo.c foo.ml (I am in a situation where I am not allowed to use the OCaml compiler to build my program for my architecture; I can only use a specially modified gcc. So I will compile that program with something like gcc -L/usr/lib/ocaml foo.c -lcamlrun -lm -lncurses and it'll run on my architecture.) I want to know: in the general case, can a program written in C be faster than one written in OCaml and translated to C?

  • NUnit [Test] is not a valid attribute

    - by tyndall
    I've included the necessary assemblies into a Windows class library project in VS2008. When I start to try to write a test, I get a red squiggly line and the message "[Test] is not a valid attribute". I've used NUnit before... maybe an earlier version. What am I doing wrong? I'm on version 2.5.2.

      using System;
      using System.Collections.Generic;
      using System.Linq;
      using System.Text;
      using NUnit;
      using NUnit.Core;
      using NUnit.Framework;

      namespace AccessPoint.Web.Test
      {
          public class LoginTests
          {
              [Test]
              public void CanLogin()
              {
              }
          }
      }
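
    For comparison, a minimal sketch of what is normally sufficient (an assumption about the setup, not a diagnosis of this project): [Test] lives in NUnit.Framework, so only a reference to nunit.framework.dll is needed; the NUnit and NUnit.Core namespaces belong to the runner and can be dropped. If the squiggle persists with just this, the project reference itself (wrong assembly or version mismatch) is worth checking.

      // Requires a project reference to nunit.framework.dll only.
      using NUnit.Framework;

      namespace AccessPoint.Web.Test
      {
          [TestFixture]
          public class LoginTests
          {
              [Test]
              public void CanLogin()
              {
                  Assert.IsTrue(true); // placeholder assertion
              }
          }
      }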

  • Test a database connection without CodeIgniter throwing a fit, can it be done?

    - by RobertWHurst
    I'm just about finished my first release of automailer, a program I've been working on for a while now. I've just got to finish writing the installer. Its job is to rewrite the CodeIgniter configs from templates. I've got the read/write stuff working, but I'd like to be able to test the server credentials given by the user without CodeIgniter throwing a system error if they're wrong. Is there a function other than mysql_connect that I can use to test a connection that will return TRUE or FALSE and won't make CodeIgniter have a fit? This is what I have:

      function _test_connection()
      {
          if (mysql_connect($_POST['host'], $_POST['username'], $_POST['password'], TRUE))
              return TRUE;
          else
              return FALSE;
      }

    CodeIgniter doesn't like this and throws a system error:

      A PHP Error was encountered
      Severity: Warning
      Message: mysql_connect() [function.mysql-connect]: Unknown MySQL server host 'x' (1)
      Filename: controllers/install.php
      Line Number: 57

    I'd rather not turn off error reporting.
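
    A small tweak that usually does it (a sketch; it relies on PHP's @ operator suppressing the warning before CodeIgniter's error handler turns it into a page-level error):

      function _test_connection()
      {
          // The @ suppresses the PHP warning, so CodeIgniter's error
          // handler never sees it; the return value still tells us
          // whether the credentials work.
          $link = @mysql_connect($_POST['host'], $_POST['username'],
                                 $_POST['password'], TRUE);
          if ($link)
          {
              mysql_close($link);  // don't leak the test connection
              return TRUE;
          }
          return FALSE;
      }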

  • F#: why is my recursion faster than Seq.exists?

    - by user38397
    I am pretty new to F#. I'm trying to understand how I can write fast code in F#. For this, I tried to write two methods (IsPrime1 and IsPrime2) for benchmarking. My code is:

      // Learn more about F# at http://fsharp.net
      open System
      open System.Diagnostics

      #light

      let isDivisible n d = n % d = 0

      let IsPrime1 n =
          Array.init (n-2) ((+) 2)
          |> Array.exists (isDivisible n)
          |> not

      let rec hasDivisor n d =
          match d with
          | x when x < n -> (n % x = 0) || (hasDivisor n (d+1))
          | _ -> false

      let IsPrime2 n = hasDivisor n 2 |> not

      let SumOfPrimes max =
          [|2..max|]
          |> Array.filter IsPrime1
          |> Array.sum

      let maxVal = 20000

      let s = new Stopwatch()
      s.Start()
      let valOfSum = SumOfPrimes maxVal
      s.Stop()
      Console.WriteLine valOfSum
      Console.WriteLine("IsPrime1: {0}", s.ElapsedMilliseconds)

      //////////////////////////////////

      s.Reset()
      s.Start()
      let SumOfPrimes2 max =
          [|2..max|]
          |> Array.filter IsPrime2
          |> Array.sum
      let valOfSum2 = SumOfPrimes2 maxVal
      s.Stop()
      Console.WriteLine valOfSum2
      Console.WriteLine("IsPrime2: {0}", s.ElapsedMilliseconds)

      Console.ReadKey()

    IsPrime1 takes 760 ms while IsPrime2 takes 260 ms for the same result. What's going on here, and how can I make my code even faster?
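
    A likely explanation (an educated guess from the code, not a profiled result): IsPrime1 first materializes an (n-2)-element array with Array.init and only then scans it, paying an allocation per candidate, while the recursive hasDivisor allocates nothing and stops at the first divisor. A lazy pipeline gets the same short-circuiting without building the array, sketched here:

      // A lazy variant for comparison: the seq never materializes an
      // array, and Seq.exists stops at the first divisor it finds.
      let isPrimeSeq n =
          seq { 2 .. n - 1 }
          |> Seq.exists (fun d -> n % d = 0)
          |> not

    Checking divisors only up to sqrt n would cut the work much further for every version.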

  • Update table with index is too slow

    - by pauloya
    Hi, I was watching the Profiler on a live system of our application and I saw an update instruction that we run periodically (every second) that was quite slow. It took around 400ms every time. The query includes this update (which is the slow part):

      UPDATE BufferTable
      SET LrbCount = LrbCount + 1, LrbUpdated = getdate()
      WHERE LrbId = @LrbId

    This is the table:

      CREATE TABLE BufferTable(
          LrbId [bigint] IDENTITY(1,1) NOT NULL,
          ...
          LrbInserted [datetime] NOT NULL,
          LrbProcessed [bit] NOT NULL,
          LrbUpdated [datetime] NOT NULL,
          LrbCount [tinyint] NOT NULL,
      )

    The table has 2 indexes (non-unique and non-clustered) with the fields in this order:

      * Index1 - (LrbProcessed, LrbCount)
      * Index2 - (LrbInserted, LrbCount, LrbProcessed)

    When I looked at this, I thought the problem would come from Index1, since LrbCount changes a lot and that changes the order of the data in the index. But after deactivating Index1 I saw the query was taking the same time as initially. Then I rebuilt Index1 and deactivated Index2; this time the query was very fast. It seems to me that Index2 should be faster to update: the order of the data shouldn't change, since the LrbInserted time is not changed. Can someone explain why Index2 is much heavier to update than Index1? Thank you!
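
    One way to see which index is actually taking the writes, rather than inferring it (a sketch; it assumes SQL Server 2005 or later, since it uses a DMV):

      -- Per-index write activity for BufferTable; a high leaf_update_count
      -- or many page-latch waits point at the index hurting the UPDATE.
      SELECT i.name,
             s.leaf_update_count,
             s.leaf_insert_count,
             s.page_latch_wait_count
      FROM sys.dm_db_index_operational_stats(DB_ID(), OBJECT_ID('BufferTable'), NULL, NULL) AS s
      JOIN sys.indexes AS i
        ON i.object_id = s.object_id AND i.index_id = s.index_id;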

  • How to test Language DLLs?

    - by EKI
    Our application offers the user the option to display different languages if they have the appropriate language DLL (say German.DLL, French.DLL, even Chinese.DLL). We have functional tests to verify that those DLLs enable the right options in a combobox and that choosing them actually translates the strings in the UI. I would like to know options for testing these translation DLLs more in depth: maybe ensuring that all the characters in the selected language (and in the file) can be displayed correctly, that the internal structure of the DLL is consistent, that there are no strings exceeding the limits expected of them, etc... Any suggestions on what to test and how to test it? Does anyone know of specific problems that may arise that we should check for? Thanks in advance.
