Search Results

Search found 31421 results on 1257 pages for 'software performance'.

Page 309 of 1257

  • Why does dojo parsing time depend on css and image availability?

    - by Kniganapolke
    I have been profiling the JavaScript on a page that uses dojo widgets. I don't use explicit parsing - the parser runs on page load. What I noticed is that if I clear the browser cache before refreshing the page, dojo parsing takes much more time than if all the files are already cached. Note that we build all the required dojo modules into a layer (a single file), so we don't lazy-load any js files. I wonder whether the dojo parsing process depends on image and css resources; as far as I know, it only instantiates widgets and injects DOM nodes. Do you have any ideas why the dojo parser runs longer (2-3 times longer in my case) when the cache is cleared?

    Read the article

  • What's the fastest way to check if a word from one string is in another string?

    - by Mike Trpcic
    I have a string of words; let's call them bad:

        bad = "foo bar baz"

    I can keep this as a whitespace-separated string, or as a list:

        bad = bad.split(" ")

    If I have another string, like so:

        str = "This is my first foo string"

    what's the fastest way to check whether any word from the bad string is in my comparison string, and what's the fastest way to remove that word if it's found? This is what I have so far:

        # Find if a word is there
        bad.split(" ").each do |word|
          found = str.include?(word)
        end

        # Remove the word
        bad.split(" ").each do |word|
          str.gsub!(/#{word}/, "")
        end

    Read the article

  • Speeding up PostgreSQL query where data is between two dates

    - by Roger
    I have a large table (50m rows) which holds data with an ID and a timestamp. I need to query the table to select all rows with a certain ID where the timestamp is between two dates, but this currently takes over 2 minutes on a high-end machine. I'd really like to speed it up. I have found this tip which recommends using a spatial index, but the example it gives is for IP addresses. However, the speed increase (436 s to 3 s) is impressive. How can I use this with timestamps?

    Read the article

  • Hanging Forms When I Drag And Move My Forms

    - by hosseinsinohe
    I am writing a Windows application in C#. I use a picture as the background of my form (MainForm), pictures on many buttons on this form, and also some panels and labels with a transparent background color. My forms, panels and buttons flickered; I solved that problem with a method from another thread. But when other forms are opened over this form, my forms still hang when I drag and move them over it. How can I solve this problem, so that my forms drag and move easily and quickly? Edit: My forms load data from an Access 2007 database file. I use DataSets, DataGridViews and other components to load and show the data in my forms.

    Read the article

  • Python: faster way to read fixed-length fields from a file into a dictionary

    - by Martlark
    I have a file of names and addresses, as follows (example line):

        OSCAR ,CANNONS ,8 ,STIEGLITZ CIRCUIT

    and I want to read it into a dictionary of names and values. Here self.field_list is a list of the name, length and start point of the fixed fields in the file. What ways are there to speed up this method? (Python 2.6)

        def line_to_dictionary(self, file_line, rec_num):
            file_line = file_line.lower()  # Make it all lowercase

            return_rec = {}  # Return record as a dictionary

            for (field_start, field_length, field_name) in self.field_list:
                field_data = file_line[field_start:field_start+field_length]

                if self.strip_fields:  # Strip off white spaces first
                    field_data = field_data.strip()

                if field_data != '':  # Only add non-empty fields to dictionary
                    return_rec[field_name] = field_data

            # Set hidden fields
            # return_rec['_rec_num_'] = rec_num
            return_rec['_dataset_name_'] = self.name
            return return_rec

    Read the article

  • Fast read of certain bytes of multiple files in C/C++

    - by Alejandro Cámara
    I've been searching the web about this question, and although there are many similar questions about read/write in C/C++, I haven't found one about this specific task. I want to read, from multiple files (256x256 files), only sizeof(double) bytes located at a certain position in each file. Right now my solution is, for each file:

    Open the file (read, binary mode):

        fstream fTest("current_file", ios_base::out | ios_base::binary);

    Seek to the position I want to read:

        fTest.seekg(position*sizeof(test_value), ios_base::beg);

    Read the bytes:

        fTest.read((char *) &(output[i][j]), sizeof(test_value));

    And close the file:

        fTest.close();

    This takes about 350 ms to run inside a for { for { } } structure with 256x256 iterations (one for each file). Q: Do you think there is a better way to implement this operation? How would you do it?
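
    One direction often suggested for this kind of workload is to cut per-file overhead by using C stdio instead of fstream objects. A minimal sketch of that idea follows; the function name and the way the path and offset are produced are assumptions for illustration, not part of the question:

        #include <cstdio>

        // Hedged sketch: read one double at a fixed offset in a file using
        // C stdio. The caller would invoke this once per file in the 256x256
        // loop; "path" and "position" stand in for however the real program
        // names its files and computes the target offset.
        double read_double_at(const char* path, long position)
        {
            double value = 0.0;
            std::FILE* f = std::fopen(path, "rb");  // binary read mode
            if (f != NULL)
            {
                std::fseek(f, position * (long)sizeof(double), SEEK_SET);
                std::fread(&value, sizeof(double), 1, f);
                std::fclose(f);
            }
            return value;
        }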

    Read the article

  • SQL Server query problem

    - by user335160
    I want to achieve the results shown in the attached image. The table structure and data are the following:

    Table relationships:

        Overall IB Limit -> one to many -> Facility Limit
        Facility Limit   -> one to many -> Facility Sub Limit

    Overall IB Limit:

        Id  SCAF Reference  Approval Date
        1   NEW-001         January 1, 2011
        2   NEW-002         January 2, 2011
        3   NEW-003         January 3, 2011

    Facility Limit:

        Id  OverallIBLimitId  Product Type
        1   1                 RPA
        2   1                 CG
        3   2                 RPA
        4   3                 CG

    Facility Sub Limit:

        Id  FacilityLimitId  Sub-Limit Type     Amount         Tenor     Status    Status Date
        1   1                RPA at max         2,000,0000.00  2 months  Approved  January 5, 2011
        2   1                Oil                3,000,0000.00  3 yrs     Approved  January 5, 2011
        3   2                CG at minor        4,000,0000.00  1 yr      Approved  January 5, 2011
        4   2                CG at max          5,000,0000.00  6 months  Approved  January 5, 2011
        5   2                Flood Component 1  5,000,0000.00  6 months  Approved  January 5, 2011
        6   2                Flood Component 2  6,000,0000.00  3 yrs     Approved  January 5, 2011
        7   3                RPA at minor       1,000,0000.00  6 months  Approved  January 5, 2011
        8   4                One-Off            1,000,0000.00  6 months  Approved  January 5, 2011

    Read the article

  • STL find performs better than hand-crafted loop

    - by dusha
    Hello all, I have a question. Given the following C++ code fragment:

        #include <boost/progress.hpp>
        #include <vector>
        #include <algorithm>
        #include <numeric>
        #include <iostream>

        struct incrementor
        {
            incrementor() : curr_() {}
            unsigned int operator()() { return curr_++; }
        private:
            unsigned int curr_;
        };

        template<class Vec>
        char const* value_found(Vec const& v, typename Vec::const_iterator i)
        {
            return i==v.end() ? "no" : "yes";
        }

        template<class Vec>
        typename Vec::const_iterator find1(Vec const& v, typename Vec::value_type val)
        {
            return find(v.begin(), v.end(), val);
        }

        template<class Vec>
        typename Vec::const_iterator find2(Vec const& v, typename Vec::value_type val)
        {
            for(typename Vec::const_iterator i=v.begin(), end=v.end(); i<end; ++i)
                if(*i==val) return i;
            return v.end();
        }

        int main()
        {
            using namespace std;
            typedef vector<unsigned int>::const_iterator iter;
            vector<unsigned int> vec;
            vec.reserve(10000000);

            boost::progress_timer pt;
            generate_n(back_inserter(vec), vec.capacity(), incrementor());
            // added this line, to avoid any doubts that the compiler is able
            // to guess the data is sorted
            random_shuffle(vec.begin(), vec.end());
            cout << "value generation required: " << pt.elapsed() << endl;

            double d;
            pt.restart();
            iter found=find1(vec, vec.capacity());
            d=pt.elapsed();
            cout << "first search required: " << d << endl;
            cout << "first search found value: " << value_found(vec, found) << endl;

            pt.restart();
            found=find2(vec, vec.capacity());
            d=pt.elapsed();
            cout << "second search required: " << d << endl;
            cout << "second search found value: " << value_found(vec, found) << endl;

            return 0;
        }

    On my machine (Intel i7, Windows Vista) STL find (called via find1) runs about 10 times faster than the hand-crafted loop (called via find2). I first thought that Visual C++ performs some kind of vectorization (maybe I am mistaken here), but as far as I can see the assembly does not look like it uses vectorization. Why is the STL loop faster? The hand-crafted loop is identical to the loop from the body of STL find. I was asked to post the program's output.

    Without shuffle:

        value generation required: 0.078
        first search required: 0.008
        first search found value: no
        second search required: 0.098
        second search found value: no

    With shuffle (caching effects):

        value generation required: 1.454
        first search required: 0.009
        first search found value: no
        second search required: 0.044
        second search found value: no

    Many thanks, dusha. P.S. I return the iterator and write out the result (found or not) because I want to prevent the compiler from optimizing the loop away entirely. The searched value is obviously not in the vector.
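
    The answer usually given for this symptom is that some standard library implementations manually unroll the loop inside find for random-access iterators, so far fewer loop-condition branches execute per element. A sketch of that technique is below; it illustrates the idea only and is not the actual VC++ library source:

        // Hedged sketch of the loop-unrolling technique some std::find
        // implementations apply for random-access iterators: test four
        // elements per loop iteration, then mop up the remainder.
        template <class It, class T>
        It find_unrolled(It first, It last, const T& val)
        {
            for (; (last - first) >= 4; first += 4)
            {
                if (*first == val)       return first;
                if (*(first + 1) == val) return first + 1;
                if (*(first + 2) == val) return first + 2;
                if (*(first + 3) == val) return first + 3;
            }
            for (; first != last; ++first)   // remaining 0-3 elements
                if (*first == val) return first;
            return last;
        }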

    Read the article

  • Node & Redis: Crucial Design Issues in Production Mode

    - by Ali
    This question is a hybrid one, both technical and system-design related. I'm developing the backend of an application that will handle approx. 4K requests per second. We are using Node.js for its speed, and in terms of our database structure we are using MongoDB, with Redis as a layer between Node and MongoDB handling volatile operations. I'm quite stressed because we are expecting concurrent requests that we need to handle carefully, and we are quite close to launch. However, I do not believe I've applied the correct approach on the Redis side. I have a class Student, and students constantly change stages (such as 'active', 'doing homework', 'in lesson', etc.), so I created a Redis DB for each state (1 for 'active', 2 for 'doing homework'). Below is the structure of the 'active' students table:

        xa5p - JSON stringified object #1
        pQrW - JSON stringified object #2
        active_student_table - {{studentId:'xa5p'}, {studentId:'pQrW'}}

    Since there is no 'select all keys' method in Redis, I've been advised to use a set, so that when I run the 'smembers' command I receive the keys and can later do a 'get' for each id in order to find a specific user (say, older than 15). I've also been told that I should in fact never use KEYS in production. My question is: no matter how 'conceptual' it is, what specific things should I avoid doing with Node & Redis in production? Are there any issues with my design? Students must be objects, and I could certainly keep them in a list, but I haven't done that yet. Is that crucial in production?

    Read the article

  • Copyrighting software, templates, etc. under real name or screen name?

    - by Abluescarab
    My question is hopefully simple--should I copyright my work (art, software, web design, etc.) under my real name or my screen name? My real name and screen name are also easily connected with a bit of searching, so does it really matter in the end? I'm not a professional (at this point). I read this article: Is it a bad idea to sell Android apps in the Android Market under your real name? and they recommended releasing on the app market under a company name. I also read this article: On what name should I claim copyright in open source software?, but that didn't answer my question. I know it probably matters for big projects, but for little projects, does it matter? Thanks ahead of time!

    Read the article

  • PHP fastest method of reading server response

    - by Peter John
    Hi there, I'm having some real problems with the lag produced by using fgets to grab the server's response to some batch database calls I'm making. I'm sending through a batch of, say, 10,000 calls, and I've tracked the lag down to fgets causing the hold-up in the speed of my application, as the response for each call needs to be grabbed. I have found this thread http://bugs.php.net/bug.php?id=32806 which explains the problem quite well, but he's reading a file, not a server response, so fread could be a bit tricky, as I could get part of the next line and extra stuff which I don't want. Any help much appreciated!

    Read the article

  • .NET Data Provider for SqlServer

    - by DMcKenna
    Has anybody managed to get the ".NET Data Provider for SqlServer" performance counters to actually work within perfmon.exe? I have a .NET app that uses NHibernate to interact with a SQL Server 2005 DB. All I want to do is view the NumberOfActiveConnectionPools, NumberOfActiveConnections and NumberOfFreeConnections counters within perfmon.exe. Can somebody explain to me how exactly I get this to work? Regards, David

    Read the article

  • Unicorn: Which number of worker processes to use?

    - by blackbird07
    I am running a Ruby on Rails app on a virtual Linux server that is capped at 1GB RAM. Currently, I am constantly hitting the limit and would like to optimize memory utilization. One option I am looking at is reducing the number of unicorn workers. So what is the best way to determine the number of unicorn workers to use? The current setting is 10 workers, but the maximum number of requests per second I have seen on Google Analytics Real-Time is 3 (and it only hit that once, at a peak time; 99% of the time it does not go above 1 request per second). So is it a safe assumption that I can - for now - go with 4 workers, leaving room for unexpected bursts of requests? What are the metrics I should look at to determine the number of workers, and what tools can I use for that on my Ubuntu machine?

    Read the article

  • Should the code being tested compile to a DLL or an executable file?

    - by uriDium
    I have a solution with two projects: one for the production code and another for the unit tests. I did this as per the suggestions I got here on SO. I noticed that the Debug folder includes the production code in executable form. I used NUnit to run the tests after removing the executable, and they all fail trying to find it, so the test project definitely depends on it. I then did a quick read to find out which is better, a DLL or an executable. It seems that a DLL is much faster, since a DLL is loaded into the caller's memory space, whereas communication between separate executables is slower. Unfortunately our production code needs to be an executable, so the unit tests will be slightly slower. I am not too worried about that. But the project also relies on code written in another library which is currently in executable format as well. Should the projects that expose some sort of SDK rather be compiled to DLLs, and only the projects that use the SDK be compiled to executables?

    Read the article

  • Fastest way to recursively crawl NTFS directories in C++

    - by Peter Parker
    I have written a small crawler to scan and resort directory structures. It is based on dirent (which is a small wrapper around FindNextFileA). In my first benchmarks it is surprisingly slow: around 123473 ms for 4500 files (ThinkPad T60p, local Samsung 320 GB 2.5" HD):

        121481 files found in 123473 milliseconds

    Is this speed normal? This is my code:

        #include <dirent.h>   // Windows dirent port wrapping FindFirstFileA/FindNextFileA
        #include <windows.h>
        #include <cstdio>
        #include <cstring>
        #include <string>

        int testPrintDir(std::string strDir, std::string strPattern = "*", bool recurse = true)
        {
            struct dirent *ent;
            DIR *dir = opendir(strDir.c_str());
            int retVal = 0;
            if (dir != NULL)
            {
                while ((ent = readdir(dir)) != NULL)
                {
                    if (strcmp(ent->d_name, ".") != 0 && strcmp(ent->d_name, "..") != 0)
                    {
                        std::string strFullName = strDir + "\\" + std::string(ent->d_name);
                        std::string strType = "N/A";
                        bool isDir = (ent->data.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY) != 0;
                        strType = isDir ? "DIR" : "FILE";
                        if (!isDir)
                        {
                            // printf("%s <%s>\n", strFullName.c_str(), strType.c_str());
                            retVal++;
                        }
                        if (isDir && recurse)
                        {
                            retVal += testPrintDir(strFullName, strPattern, recurse);
                        }
                    }
                }
                closedir(dir);
                return retVal;
            }
            else
            {
                /* could not open directory */
                perror("DIR NOT FOUND!");
                return -1;
            }
        }
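
    One thing commonly suggested here is to skip the dirent wrapper and call the Win32 find API directly, since each entry's attributes already arrive in the WIN32_FIND_DATAA record and no extra per-file calls are needed. A minimal sketch under that assumption (Windows, ANSI paths, error handling kept short):

        #include <windows.h>
        #include <string>

        // Hedged sketch: count files recursively with the raw Win32 find API.
        // Attributes come back from FindNextFileA itself, so there is no
        // per-file stat-like call inside the loop.
        int countFiles(const std::string& dir)
        {
            WIN32_FIND_DATAA fd;
            HANDLE h = FindFirstFileA((dir + "\\*").c_str(), &fd);
            if (h == INVALID_HANDLE_VALUE)
                return 0;

            int count = 0;
            do
            {
                const std::string name = fd.cFileName;
                if (name == "." || name == "..")
                    continue;
                if (fd.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY)
                    count += countFiles(dir + "\\" + name);  // recurse into subdirectory
                else
                    ++count;
            } while (FindNextFileA(h, &fd));

            FindClose(h);
            return count;
        }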

    Read the article

  • Code runs 6 times slower with 2 threads than with 1

    - by Edward Bird
    So I have written some code to experiment with threads and do some testing. The code should create some numbers and then find the mean of those numbers. I think it is just easier to show you what I have so far. I was expecting that with two threads the code would run about 2 times as fast. Measuring it with a stopwatch, I think it runs about 6 times slower!

        #include <cstdint>
        #include <cstdlib>
        #include <iostream>
        #include <thread>
        #include <vector>

        void findmean(std::vector<double>*, std::size_t, std::size_t, double*);

        int main(int argn, char** argv)
        {
            // Program entry point
            std::cout << "Generating data..." << std::endl;

            // Create a vector containing many variables
            std::vector<double> data;
            for(uint32_t i = 1; i <= 1024 * 1024 * 128; i ++)
                data.push_back(i);

            // Calculate mean using 1 core
            double mean = 0;
            std::cout << "Calculating mean, 1 Thread..." << std::endl;
            findmean(&data, 0, data.size(), &mean);
            mean /= (double)data.size();

            // Print result
            std::cout << " Mean=" << mean << std::endl;

            // Repeat, using two threads
            std::vector<std::thread> thread;
            std::vector<double> result;
            result.push_back(0.0);
            result.push_back(0.0);
            std::cout << "Calculating mean, 2 Threads..." << std::endl;

            // Run threads
            uint32_t halfsize = data.size() / 2;
            uint32_t A = 0;
            uint32_t B, C, D;

            // Split the data into two blocks
            if(data.size() % 2 == 0)
            {
                B = C = D = halfsize;
            }
            else if(data.size() % 2 == 1)
            {
                B = C = halfsize;
                D = halfsize + 1;
            }

            // Run with two threads
            thread.push_back(std::thread(findmean, &data, A, B, &(result[0])));
            thread.push_back(std::thread(findmean, &data, C, D, &(result[1])));

            // Join threads
            thread[0].join();
            thread[1].join();

            // Calculate result
            mean = result[0] + result[1];
            mean /= (double)data.size();

            // Print result
            std::cout << " Mean=" << mean << std::endl;

            // Return
            return EXIT_SUCCESS;
        }

        void findmean(std::vector<double>* datavec, std::size_t start, std::size_t length, double* result)
        {
            for(uint32_t i = 0; i < length; i ++)
            {
                *result += (*datavec).at(start + i);
            }
        }

    I don't think this code is exactly wonderful; if you could suggest ways of improving it then I would be grateful for that also.
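
    The usual diagnosis for this exact symptom is false sharing: result[0] and result[1] sit on the same cache line, so every += from one thread invalidates that line in the other core's cache. A hedged sketch of the standard fix, accumulating into a thread-local variable and writing the output once:

        #include <cstddef>
        #include <vector>

        // Hedged sketch: same interface as the question's findmean, but the
        // hot loop accumulates into a stack-local sum, so the two threads no
        // longer ping-pong the cache line holding result[0] and result[1].
        void findmean(std::vector<double>* datavec, std::size_t start,
                      std::size_t length, double* result)
        {
            double local = 0.0;                  // private to this thread
            for (std::size_t i = 0; i < length; ++i)
                local += (*datavec)[start + i];
            *result = local;                     // single write at the end
        }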

    Read the article

  • How to shift the pixels of a pixmap efficiently in Qt4

    - by stanleyxu2005
    Hello, I have implemented a marquee text widget using Qt4. I paint the text content onto a pixmap first, and then paint a portion of this pixmap onto a paint device by calling:

        painter.drawTiledPixmap(offsetX, offsetY, myPixmap);

    My expectation is that Qt will fill the whole marquee text rectangle with the content from myPixmap. Is there an even faster way: shift all the existing content to the left by 1 px, and then fill only the newly exposed 1-px-wide, N-px-high area with the content from myPixmap?
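
    If the widget keeps its own backing pixmap, Qt 4.6 and later offer QPixmap::scroll, which moves the existing pixels in place so that only the newly exposed strip has to be repainted. A minimal sketch under that assumption; "buffer" and "drawNewColumn" are hypothetical stand-ins for the widget's own members:

        #include <QPainter>
        #include <QPixmap>

        // Hedged sketch: shift a backing pixmap one pixel to the left and
        // repaint only the exposed 1-px column on the right edge.
        void scrollMarquee(QPixmap& buffer)
        {
            // Move existing pixels left by 1 px (Qt 4.6+); no painter may be
            // active on the pixmap while scroll() runs.
            buffer.scroll(-1, 0, buffer.rect());

            // Repaint just the newly exposed column.
            QPainter p(&buffer);
            const int x = buffer.width() - 1;
            p.fillRect(x, 0, 1, buffer.height(), Qt::white);  // placeholder background
            // drawNewColumn(p, x);  // hypothetical: render the next text column
        }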

    Read the article

  • Why is JavaScript not used for classical application development (compiled software)?

    - by Jose Faeti
    During my years of web development with JavaScript, I have come to the conclusion that it's an incredibly powerful language, and you can do amazing things with it. It offers a rich set of features, like:

    - Dynamic typing
    - First-class functions
    - Nested functions
    - Closures
    - Functions as methods
    - Functions as object constructors
    - Prototype-based
    - Object-based (almost everything is an object)
    - Regex
    - Array and object literals

    It seems to me that almost everything can be achieved with this kind of language; you can also emulate OO programming, since it provides great freedom and many different coding styles. With more software-oriented custom functionality (I/O, file system, input devices, etc.) I think it would be great for developing applications. Yet, as far as I know, it's only used in web development, or in existing software as a scripting language. Only recently, maybe thanks to the V8 engine, has it been used more for other kinds of tasks (see node.js for example). Why has it until now been relegated to web development only? What is keeping it away from software development?

    Read the article

  • MySQL subquery strangely slow

    - by aviv
    I have a query that selects from a sub-query. While the two queries look almost the same, the second query (in this sample) runs much slower:

        SELECT user.id, user.first_name
        -- user.*
        FROM user
        WHERE user.id IN (SELECT ref_id
                          FROM education
                          WHERE ref_type='user'
                            AND education.institute_id='58'
                            AND education.institute_type='1');

    This query takes 1.2 s. EXPLAIN on this query gives:

        id  select_type         table      type            possible_keys                                            key         key_len  ref   rows    Extra
        1   PRIMARY             user       index                                                                    first_name  152            141192  Using where; Using index
        2   DEPENDENT SUBQUERY  education  index_subquery  ref_type,ref_id,institute_id,institute_type,ref_type_2   ref_id      4        func  1       Using where

    The second query:

        SELECT
        -- user.id, user.first_name
        user.*
        FROM user
        WHERE user.id IN (SELECT ref_id
                          FROM education
                          WHERE ref_type='user'
                            AND education.institute_id='58'
                            AND education.institute_type='1');

    takes 45 s to run, with EXPLAIN:

        id  select_type         table      type            possible_keys                                            key     key_len  ref   rows    Extra
        1   PRIMARY             user       ALL                                                                                            141192  Using where
        2   DEPENDENT SUBQUERY  education  index_subquery  ref_type,ref_id,institute_id,institute_type,ref_type_2   ref_id  4        func  1       Using where

    Why is the second query slower, when I filter only by indexed fields? Why do both queries scan the full length of the user table? Any ideas how to improve this? Thanks.

    Read the article

  • How to decide what hardware to deploy a web application on

    - by Yuval A
    Suppose you have a web application, no specific stack (Java/.NET/LAMP/Django/Rails, all good). How would you decide which hardware to deploy it on? What rules of thumb exist for determining how many machines you need? How would you turn parameters such as concurrent users, simultaneous connections and DB read/write ratio into a decision on how much, and which, hardware you need? Any resources on this issue would be very helpful...

    Read the article

  • WCF high instance count: does anyone know of negative side effects?

    - by Alex
    Hi there! Has anyone experienced, or does anyone know of, negative side effects from having a high service instance count, like 60k? Aside from the memory consumption, of course. I am planning to increase the threshold for the maximum allowed instance count in our production environments. I am basically sick of severe production incidents just because "something" forgot to close a proxy properly. I plan to go to something like 60k instances, which will allow the service to survive, with default session timeouts, at an average call rate for our clients. Thanks, Alex

    Read the article
