Search Results

Search found 80052 results on 3203 pages for 'data load performance'.

Page 446 of 3203

  • Sql Server 2000 Stored Procedure Prevents Parallelism or something?

    - by user187305
    I have a huge, disgusting stored procedure that wasn't slow a couple of months ago, but now is. I barely know what this thing does and I am in no way interested in rewriting it. I do know that if I take the body of the stored procedure, declare/set the values of the parameters, and run it in Query Analyzer, it runs more than 20x faster. From the internet, I've read that this is probably due to a bad cached query plan. So I've tried running the sp WITH RECOMPILE after the EXEC, and I've also tried putting WITH RECOMPILE inside the sp, but neither of those helped even a little bit. When I look at the execution plan of the sp vs. the query, the biggest difference is that the sp has Parallelism operations all over the place and the query doesn't have any. Can this be the cause of the difference in speeds? Thank you, any ideas would be great... I'm stuck.

    Read the article

  • Best way to store this data?

    - by Malfist
    I have just been assigned to renovate an old website, and I get to move it from some archaic legacy system to Drupal. The only problem is that it's a real-estate system and a lot of data is stored. Currently all the information is stored in a single table: an id represents the house, and everything else is key/value pairs. There are up to 243 possible keys per estate and 23,840 estates in the system. As you can imagine, the system is slow and difficult to query. I don't think a table with 243 columns would be a very good idea either, and it would probably be worse than the current situation. I've done some investigating and here's what I've found out:

    - Missing data does not indicate a 0 value; data is merged from two unique sources/formats, so some guessing is involved. I have no control over the source of the data.
    - There are 4 keys that are common to all estates; all of their values look like things that are commonly searched for and could be indexed.
    - There are 10 keys in the [90-100)% range. 8 of these hold information like who's selling the estate and its address; the other two seem to belong with the range below.
    - There are 80 keys in the [80-90)% range. These mostly list room types and how many of each the house has (e.g. bedrooms_possible, bathrooms, family_room_3rd, etc.), plus some minor information like school districts and one or two more pieces of address data.
    - The 179 keys in the [0-80)% range include all sorts of miscellaneous information about the estate.

    My best idea was a hybrid approach: create a table that stores the important, common information and keep a smaller key/value table for the rest. How would you store this information?

    Read the article

  • In sync query calls, one query causing other query to run slower. Why?

    - by Irchi
    Sorry for the long question, but I think this is an interesting situation and I couldn't find any explanation for it:

    I was involved in optimizing an application that performed a large number of sequential SELECT and INSERT statements against a single dedicated SQL Server database. The process needs to INSERT a large number of records into a table, but for each of them some values have to be mapped first, which is done with SELECT statements on another table in the same database. For one specific execution, the whole thing took 90 minutes to run.

    I used a profiler (JProfiler - the application is Java-based) to determine how much time each part of the application takes. It showed that 60% of the time was spent on INSERT calls and almost 20% on SELECT calls (the rest was distributed across other parts).

    After some trials, I came to this situation: I commented out the INSERT query that took 60% of the time. I was expecting the total run time to drop to around 35 minutes, since I had removed 60% of the 90 minutes. But the whole process took the same 90 minutes (doing only SELECTs and nothing else) - each SELECT just took longer this time!

    Everything ran synchronously; there were no async calls and only a single thread of execution. The SELECT and INSERT queries are very simple, have nothing special about them, and run against different tables in the same DB. I tested both with the DB on the application machine and on a remote network machine.

    I can't think of any explanation for this, since the profiler (an application profiler, not SQL Profiler) reported the changes in method call times, and after removing the INSERT statements the SELECT statements took longer to run. Can anyone explain what could have happened? (It can't be a caching / query-optimization effect, because the queries ran synchronously in a single thread, and that is far from affecting the cache this much.) I should note that the bottleneck was in SQL Server, which was using most of the CPU time.

    Read the article

  • move data in bulk from oracle to SQL database

    - by Soja
    Hi All... I'd like to know the best way to move data in bulk from an Oracle database to a SQL Server database programmatically, in a VB.NET application. The application is supposed to run continuously and move data from Oracle to SQL Server whenever data arrives. I have found OPENDATASOURCE, but I don't know the exact syntax. Can someone help me out? Thanks in advance.

    Read the article

  • F# why my recursion is faster than Seq.exists?

    - by user38397
    I am pretty new to F#. I'm trying to understand how I can get fast code in F#. For this, I tried to write two methods (IsPrime1 and IsPrime2) for benchmarking. My code is:

        // Learn more about F# at http://fsharp.net
        open System
        open System.Diagnostics

        #light

        let isDivisible n d = n % d = 0

        let IsPrime1 n =
            Array.init (n-2) ((+) 2)
            |> Array.exists (isDivisible n)
            |> not

        let rec hasDivisor n d =
            match d with
            | x when x < n -> (n % x = 0) || (hasDivisor n (d+1))
            | _ -> false

        let IsPrime2 n = hasDivisor n 2 |> not

        let SumOfPrimes max =
            [|2..max|] |> Array.filter IsPrime1 |> Array.sum

        let maxVal = 20000

        let s = new Stopwatch()
        s.Start()
        let valOfSum = SumOfPrimes maxVal
        s.Stop()
        Console.WriteLine valOfSum
        Console.WriteLine("IsPrime1: {0}", s.ElapsedMilliseconds)

        //////////////////////////////////
        s.Reset()
        s.Start()
        let SumOfPrimes2 max =
            [|2..max|] |> Array.filter IsPrime2 |> Array.sum
        let valOfSum2 = SumOfPrimes2 maxVal
        s.Stop()
        Console.WriteLine valOfSum2
        Console.WriteLine("IsPrime2: {0}", s.ElapsedMilliseconds)
        Console.ReadKey()

    IsPrime1 takes 760 ms while IsPrime2 takes 260 ms for the same result. What's going on here, and how can I make my code even faster?

    Read the article

  • loading remote page into DOM with javascript

    - by scoobydoo
    I am trying to write a web widget that will let users display customized information (from my website) in their own web pages. The mechanism I want to use (for creating the web widget) is JavaScript. So basically, this is what the end user copies into their HTML page to get my widget displayed:

        <script type="text/javascript">
        /* javascript here to fetch page from remote url and insert into DOM */
        </script>

    I have two questions:

    - How do I write the JavaScript code to fetch the page from the remote URL? Ideally this will be PLAIN JavaScript (i.e. not using jQuery etc., since I don't want to force the user to pull in third-party scripts like jQuery, which may conflict with other scripts on their page).
    - The page I am fetching contains inline JavaScript, which gets executed in a body.onLoad event, as well as other functions which are used in response to user actions. My questions are: i) will the body.onLoad event be triggered for the retrieved document? ii) If the retrieved page is dumped directly into the DOM, the document will contain two <body> sections, which is no longer valid (X)HTML - however, I need the body.onLoad event to be triggered for the page to be set up correctly, and I also need the other functions in the retrieved page so that it can respond to user interaction.

    Any suggestions/tips on how I can solve these problems?

    Read the article

  • Maven + Tomcat acceleration

    - by Bar
    I am writing a web application with Maven in the Eclipse IDE and use the Tomcat servlet container. So I run Maven like this: mvn clean compile. It is reasonable that after this operation I must restart Tomcat so it can reinitialize the context (Sysdeo Tomcat Launcher helps a lot). The problem is that the Maven execution and the subsequent Tomcat restart take a noticeable amount of time (like 10+ seconds for Maven and 20+ sec. for Tomcat, because of logging, Hibernate mappings, etc.) every time I do it. Is there any automated and faster solution for these two operations? As I see it, a much better approach would be to move only the recompiled classes to the target dir.

    Read the article

  • Using FILE_FLAG_NO_BUFFERING will return noticeable speed gain?

    - by 9dan
    Recently I noticed the detailed description of the FILE_FLAG_NO_BUFFERING flag in MSDN and read several Google search results about unbuffered I/O in Windows:

    http://msdn.microsoft.com/en-us/library/aa363858(v=vs.85).aspx

    I'm wondering now: is it really important to consider the unbuffered option in file I/O programming? Since many programs use plain old C stream I/O or C++ iostreams, I never paid any attention to the FILE_FLAG_NO_BUFFERING flag before. Let's say we are developing a photo explorer program like Picasa. If we implemented unbuffered I/O, would the thumbnail display speed show a noticeable difference to ordinary users?
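
    For context, a minimal sketch of what an unbuffered read involves in plain C with the Win32 API (the file name and buffer size are made-up examples). FILE_FLAG_NO_BUFFERING requires reads in multiples of the volume sector size into sector-aligned buffers, which is where most of the extra programming effort goes:

        #include <windows.h>
        #include <stdio.h>

        int main(void)
        {
            /* Open with OS caching disabled; "photo.jpg" is a hypothetical file */
            HANDLE h = CreateFileA("photo.jpg", GENERIC_READ, FILE_SHARE_READ,
                                   NULL, OPEN_EXISTING, FILE_FLAG_NO_BUFFERING, NULL);
            if (h == INVALID_HANDLE_VALUE) return 1;

            /* Unbuffered reads need sector-aligned buffers and sizes:
               VirtualAlloc returns page-aligned memory, and 4096 is a
               multiple of common sector sizes */
            DWORD size = 64 * 4096, got = 0;
            void *buf = VirtualAlloc(NULL, size, MEM_COMMIT, PAGE_READWRITE);
            if (buf == NULL) { CloseHandle(h); return 1; }

            if (ReadFile(h, buf, size, &got, NULL))
                printf("read %lu bytes, bypassing the system cache\n", (unsigned long)got);

            VirtualFree(buf, 0, MEM_RELEASE);
            CloseHandle(h);
            return 0;
        }

    The alignment bookkeeping alone hints at why most programs stay with buffered stdio/iostream unless they stream large files sequentially.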

    Read the article

  • Nested loop traversing arrays

    - by alecco
    There are 2 very big series of elements, the second 100 times bigger than the first. For each element of the first series, there are 0 or more elements in the second series. This can be traversed and processed with 2 nested loops, but the unpredictability of the number of matching elements for each member of the first array makes things very, very slow. The actual processing of the 2nd series of elements involves logical AND (&) and a population count. I couldn't find good optimizations using C, but I am considering doing inline asm, doing rep* mov* or similar for each element of the first series and then doing the batch processing of the matching bytes of the second series, perhaps in buffers of 1MB or something. But the code would get quite messy. Does anybody know of a better way? C preferred, but x86 ASM is OK too. Many thanks! Sample/demo code with a simplified problem: the first series are "people" and the second series are "events", for clarity's sake. (The original problem is actually 100m and 10,000m entries!)

        #include <stdio.h>
        #include <stdint.h>

        #define PEOPLE 1000000    // 1m
        struct Person {
            uint8_t age;          // Filtering condition
            uint8_t cnt;          // Number of events for this person in E
        } P[PEOPLE];              // Each has 0 or more bytes with bit flags

        #define EVENTS 100000000  // 100m
        uint8_t P1[EVENTS];       // Property 1 flags
        uint8_t P2[EVENTS];       // Property 2 flags

        void init_arrays() {
            for (int i = 0; i < PEOPLE; i++) {  // just some stuff
                P[i].age = i & 0x07;
                P[i].cnt = i % 220;             // assert( sum < EVENTS );
            }
            for (int i = 0; i < EVENTS; i++) {
                P1[i] = i % 7;                  // just some stuff
                P2[i] = i % 9;                  // just some other stuff
            }
        }

        int main(int argc, char *argv[]) {
            uint64_t sum = 0, fcur = 0;
            int age_filter = 7; // just some
            init_arrays();      // Init P, P1, P2
            for (int64_t p = 0; p < PEOPLE; p++)
                if (P[p].age < age_filter)
                    for (int64_t e = 0; e < P[p].cnt; e++, fcur++)
                        sum += __builtin_popcount( P1[fcur] & P2[fcur] );
                else
                    fcur += P[p].cnt; // skip this person's events
            printf("(dummy %ld %ld)\n", sum, fcur);
            return 0;
        }

    Compiled with:

        gcc -O5 -march=native -std=c99 test.c -o test
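
    For illustration only, here is a sketch of the batching idea hinted at above: AND + popcount over one person's contiguous run of event bytes, eight bytes per step instead of one. run_popcount is a made-up helper; the unaligned 64-bit loads go through memcpy, which gcc lowers to plain moves:

        #include <stdint.h>
        #include <string.h>

        /* Hypothetical helper: combined AND + popcount over one person's
           run of `cnt` event bytes, processed in 64-bit words. */
        static uint64_t run_popcount(const uint8_t *a, const uint8_t *b, unsigned cnt)
        {
            uint64_t sum = 0;
            unsigned i = 0;
            for (; i + 8 <= cnt; i += 8) {
                uint64_t wa, wb;
                memcpy(&wa, a + i, 8);  /* unaligned 64-bit load */
                memcpy(&wb, b + i, 8);
                sum += __builtin_popcountll(wa & wb);
            }
            for (; i < cnt; i++)        /* leftover tail bytes */
                sum += __builtin_popcount(a[i] & b[i]);
            return sum;
        }

    The inner loop of main would then collapse to sum += run_popcount(P1 + fcur, P2 + fcur, P[p].cnt); fcur += P[p].cnt;.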

    Read the article

  • How to return data structure from stored procedure

    - by rodnower
    Hello, I have a C# application that retrieves data from AQ through an Oracle stored procedure that lives in a package. The scheme is: C# code - stored procedure in package - AQ. Inside this stored procedure I use DBMS_AQ to dequeue the data into an object of some type. Now I have this object; my question is how to return it. Previously I:

    1. created some virtual table,
    2. called EXTEND() on the table,
    3. inserted the data from the object into the table,
    4. performed a SELECT on the table,
    5. and returned a sys_refcursor.

    On the C# side I filled a DataSet with the help of OracleDataAdapter.Fill(). After that I changed it to return the data fields through OUT parameters. But now I have many fields, and I can't create that many OUT parameters... What is the best way to do this? Thanks in advance.

    Read the article

  • Javascript jQuery ajax submit default data

    - by onlineracoon
    How do I submit default data with every request when using $.ajax? I have tried:

        $.ajaxPrefilter(function(options, originalOptions, jqXHR) {
            // can't access anything here... (tried jqXHR.data but undefined)
        });

        $.ajaxSetup({
            beforeSend: function(jqXHR, settings) {
                // can't access anything here... (tried jqXHR.data but undefined)
            }
        });

    I need this so I can send my CSRF token each time, and, when logged in, submit the user ID each time. I prefer not to submit the token in headers. Thanks

    Read the article

  • How to interpret binary data as an integer?

    - by StackedCrooked
    The codebase at work contains some code that looks roughly like this:

        #define DATA_LENGTH 64

        u_int32 SmartKey::SerialNumber()
        {
            unsigned char data[DATA_LENGTH];
            // ... initialized data buffer
            return *(u_int32*)data;
        }

    This code works correctly, but GCC gives the following warning:

        warning: dereferencing pointer 'serialNumber' does break strict-aliasing rules

    Can someone explain this warning? Is this code potentially dangerous? How can it be improved?
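
    For reference, the usual aliasing-safe alternative is to copy the bytes into the integer instead of type-punning through a pointer cast - a minimal sketch in plain C, with uint32_t standing in for the codebase's u_int32:

        #include <stdint.h>
        #include <string.h>

        /* Read the first four bytes of `data` as a 32-bit integer without
           breaking strict aliasing; compilers optimize this memcpy down to
           a single load. */
        uint32_t read_u32(const unsigned char *data)
        {
            uint32_t value;
            memcpy(&value, data, sizeof value);
            return value;
        }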

    Read the article

  • Can't send data via xmlhttp

    - by darkandcold
    Hello, I was able to use xmlhttp and AspTear (or other components) to post data to external sites on Windows 2003 (IIS 6). Now I am using Windows Server 2008 (IIS 7), and I can't get those components to send data. Could you please guide me? (PS: the components do fetch the pages, but they don't send the POST data; it seems they make the request as GET instead.)

    Read the article

  • What is the time complexity of LinkedList.getLast() in Java?

    - by i.
    I have a private LinkedList in a Java class and will frequently need to retrieve the last element of the list. The lists need to scale, so I'm trying to decide whether I need to keep my own reference to the last element when I make changes (to achieve O(1)) or whether the LinkedList class already does that for getLast(). What is the big-O cost of LinkedList.getLast(), and is it documented? (i.e. can I rely on the answer, or should I make no assumptions and cache it even if it's O(1)?)
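
    For illustration, the caching idea the asker describes, sketched in C with a made-up list type (not Java's): keeping a tail reference is what makes a last-element lookup O(1) in a linked list, and java.util.LinkedList is documented as a doubly-linked list, which embodies the same idea.

        #include <stddef.h>

        /* Sketch: a list that maintains a tail pointer so the last element
           is reachable in constant time. */
        struct node { int value; struct node *next; };
        struct list { struct node *head; struct node *tail; };

        /* O(1): follow the cached tail pointer, no traversal; assumes a
           non-empty list */
        int get_last(const struct list *l)
        {
            return l->tail->value;
        }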

    Read the article

  • can a program written in C be faster than one written in OCaml and translated to C?

    - by Ole Jak
    So I have some cool image processing algorithm. I have written it in OCaml. It performs well. I know I can compile it to C code with a command such as:

        ocamlc -output-obj -o foo.c foo.ml

    (I am in a situation where I am not allowed to use the OCaml compiler to build my program for my architecture; I can only use a specially modified gcc. So I will compile that program with something like:

        gcc -L/usr/lib/ocaml foo.c -lcamlrun -lm -lncurses

    and it'll run on my architecture.) I want to know: in the general case, can a program written in C be faster than one written in OCaml and translated to C?

    Read the article

  • What is microbenchmarking?

    - by polygenelubricants
    I've heard this term used, but I'm not entirely sure what it means, so:

    - What DOES it mean, and what DOESN'T it mean?
    - What are some examples of what IS and ISN'T microbenchmarking?
    - What are the dangers of microbenchmarking, and how do you avoid it? (Or is it a good thing?)
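
    To make one classic danger concrete, here is a small made-up C example (a sketch, not from the original question): an optimizing compiler can remove or precompute the very work a microbenchmark tries to time, so the timer measures nothing meaningful:

        /* A naive microbenchmark that an optimizer can defeat. */
        #include <stdio.h>
        #include <time.h>

        int main(void)
        {
            clock_t start = clock();

            long sum = 0;
            for (long i = 0; i < 100000000L; i++)
                sum += i;   /* at -O2 a compiler may reduce this whole loop
                               to a closed-form constant */

            clock_t end = clock();

            /* printing `sum` keeps the loop from being deleted as dead code,
               but does not prevent the constant-folding above */
            printf("%.3f s (sum=%ld)\n",
                   (double)(end - start) / CLOCKS_PER_SEC, sum);
            return 0;
        }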

    Read the article

  • Data base design with Blob

    - by mmuthu
    Hi, I have a situation where I need to store binary data in the database as blob columns. There are three different tables in my database, and I need to store blob data for records in each of them. Not every record will have blob data all the time; it is time- and user-based.

    - Table one will store *.doc files for almost every record.
    - Table two will store *.xml files, optionally.
    - Table three will store images (not sure of the frequency, etc.).

    Now my question is whether it is a good idea to maintain a separate table for the blob data, pointing back to the respective tables' PKs (yes, there will be no FKs; the assumption is that the program will maintain the links). It would be something like this:

        BLOB | PK_ID | TABLE_NAME

    Alternatively, is it a good idea to keep a blob column in each respective table?

    As far as my application's runtime behavior is concerned: table 2 will be read very frequently, though its blob column will not be required, and its records get deleted frequently. The blob data in the other tables will not be accessed frequently; all of the blob content is read on demand. I'm thinking the first approach will work better for me. What do you guys think? Btw, I'm using Oracle.

    Read the article

  • PHP : flatten array - fastest way?

    - by Industrial
    Is there any fast way to flatten an array and select subkeys ('key' & 'value' in this case) without running a foreach loop, or is the foreach always the fastest way?

        Array
        (
            [0] => Array
                (
                    [key] => string
                    [value] => a simple string
                    [cas] => 0
                )
            [1] => Array
                (
                    [key] => int
                    [value] => 99
                    [cas] => 0
                )
            [2] => Array
                (
                    [key] => array
                    [value] => Array
                        (
                            [0] => 11
                            [1] => 12
                        )
                    [cas] => 0
                )
        )

    To:

        Array
        (
            [int] => 99
            [string] => a simple string
            [array] => Array
                (
                    [0] => 11
                    [1] => 12
                )
        )

    Read the article

  • Why is i-- faster than i++ in loops? [closed]

    - by Afshin Mehrabani
    Possible Duplicate: JavaScript - Are loops really faster in reverse…?

    I don't know if this question is valid in other languages or not, but I'm asking it specifically for JavaScript. I see in some articles and questions that the fastest loop in JavaScript is something like:

        for (var i = array.length; i--; )

    Also, in Sublime Text 2, when you try to write a loop, it suggests:

        for (var i = Things.length - 1; i >= 0; i--) {
            Things[i]
        };

    I want to know: why is i-- faster than i++ in loops?

    Read the article

  • VS2008 is very slow on a specific large C++ solution

    - by VioletRose
    I have a solution with 21 C++ projects and 1 VB.NET project. The IDE responds very slowly when I simply move the caret in a file or try to open a menu; the process seems to take 50% of the CPU for each movement. It only happens with this solution and only on my machine. The solution has a total of 2380 source and header files, of which 1280 are header files. I tried removing all connections to source control (Perforce), but it didn't help. I also have Visual Assist installed, but even after removing it (uninstalling), the same behavior continued. Any ideas?

    Read the article
