Search Results

Search found 14861 results on 595 pages for 'high speed computing'.


  • SQL Query - 20mil records - Best practice to return information

    - by eqiz
    I have a SQL database with the following table:

        Table: PhoneRecords
        ID (identity seed)
        FirstName
        LastName
        PhoneNumber
        ZipCode

    A very simple, straightforward table, with over 20 million records. I am looking for the best way to run queries that pull records out of the table by area code. For instance, here is an example query I have run:

        SELECT PhoneNumber, FirstName
        FROM [PhoneRecords]
        WHERE (PhoneNumber LIKE '2012042%')
           OR (PhoneNumber LIKE '2012046%')
           OR (PhoneNumber LIKE '2012047%')
           OR (PhoneNumber LIKE '2012083%')
           OR (PhoneNumber LIKE '2012088%')
           OR (PhoneNumber LIKE '2012841%')

    As you can see this is an ugly query, but it gets the job done (when I am not running into timeout issues). Can anyone tell me the best way, for speed/optimization, to run the query above and display the results? Currently it takes around 2 hours to complete on a machine with 9 GB of 1600 MHz RAM and an i7 930 quad-core overclocked to 4.01 GHz. I clearly have the computing power for such a query, but it still takes too long.
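
    One common rewrite for this kind of query turns each prefix into a half-open range, which an index on PhoneNumber can seek on (a constant-prefix LIKE can often be seeked too, so whether the column is indexed at all is usually the decisive factor). A minimal Python sketch of the prefix-to-range rewrite, assuming a pyodbc-style driver with ? placeholders; the helper functions are hypothetical, only the table and column names come from the question:

        def prefix_to_range(prefix):
            # '2012042' matches exactly the half-open range ['2012042', '2012043')
            # for digit strings, so LIKE '2012042%' becomes a seekable comparison
            upper = prefix[:-1] + chr(ord(prefix[-1]) + 1)
            return prefix, upper

        def build_query(prefixes):
            clause = ' OR '.join(
                '(PhoneNumber >= ? AND PhoneNumber < ?)' for _ in prefixes)
            params = [bound for p in prefixes for bound in prefix_to_range(p)]
            sql = 'SELECT PhoneNumber, FirstName FROM PhoneRecords WHERE ' + clause
            return sql, params

        sql, params = build_query(
            ['2012042', '2012046', '2012047', '2012083', '2012088', '2012841'])
        # cursor.execute(sql, params)  # each OR arm can seek the PhoneNumber index

    Without an index on PhoneNumber, every form of this query scans all 20 million rows, which would explain the 2-hour runtime far better than any lack of CPU power.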

    Read the article

  • Optimizing Dijkstra for dense graph?

    - by Jason
    Is there another way to calculate the shortest path in a near-complete graph, other than Dijkstra? I have about 8,000 nodes and about 18 million edges. I've gone through the thread "a to b on map" and decided to use Dijkstra. I wrote my script in Perl using the Boost::Graph library, but the result isn't what I expected: it took 10+ minutes to calculate one shortest path using the call

        $graph->dijkstra_shortest_path($start_node, $end_node);

    I understand there are a lot of edges, and that may be the reason behind the slow running time. Am I dead in the water? Is there any other way to speed this up?
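
    Worth noting: 8,000 nodes admit roughly 32 million undirected edges, so at 18 million edges this graph is more than half-complete, and the priority queue in textbook Dijkstra stops paying for itself. The classic alternative for dense graphs is the O(V^2) array-scan variant. A minimal Python sketch of that variant, assuming an adjacency-matrix representation (the asker's setup is Perl with Boost::Graph; this only illustrates the algorithmic idea):

        import math

        def dijkstra_dense(adj, start):
            # adj: n x n matrix; adj[u][v] is the edge weight, or math.inf if absent
            n = len(adj)
            dist = [math.inf] * n
            dist[start] = 0.0
            done = [False] * n
            for _ in range(n):
                # an O(V) scan for the closest unfinished node replaces the heap;
                # cheaper overall when E is close to V^2
                u, best = -1, math.inf
                for v in range(n):
                    if not done[v] and dist[v] < best:
                        u, best = v, dist[v]
                if u == -1:  # remaining nodes are unreachable
                    break
                done[u] = True
                row = adj[u]
                for v in range(n):
                    alt = best + row[v]
                    if alt < dist[v]:
                        dist[v] = alt
            return dist

    Even so, a pure scripting-language inner loop over 8,000 x 8,000 entries is tens of millions of iterations per query, so a C-backed implementation of this same idea is probably still needed to get from minutes to seconds.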

    Read the article

  • Build Pipelining and Continuous Integration with Maven and Hudson

    - by Brandon
    Currently my team is considering splitting our single CI build process into a more streamlined multi-stage process, to speed up basic build feedback and isolate different CI concerns. The idea we had was to have each stage exist in Hudson as a separate build with the correct Maven goal or Maven plugin execution, then chain them together using Hudson's post-build hooks. To my knowledge, however, Maven as a build tool mandates that any lifecycle phase which is invoked automatically runs every preceding lifecycle phase. This presents a number of problems, the most significant of which is that Maven recreates the build resources on each distinct call instead of reusing those of the previous stage. This not only breaks the consistency of the build lifecycle but adds a lot of unnecessary processing overhead. Is there a way to accomplish pipelining with CI using Maven? Assuming there is, is there a way to let Hudson know to use the resources built in the previous stage for the next one?

    Read the article

  • Magento cache wrong read permissions?

    - by Lucasmus
    There seems to be a problem with Magento's reading of the var/cache directory. I've disabled full-page caching for testing. When I execute the bash command chmod -R 777 var/cache/ before loading the page, it loads ~3 seconds quicker (the time it takes before 'mage::dispatch::routers_match' is reached in the Profiler drops from ~4 seconds to ~1 second). This speed-up lasts a while, but then is lost until the chmod is run again. I'm guessing this has to do with write permissions somehow? The odd thing is, the cache contents are, as far as I know, owned by the process that executes Magento (the web user). Does anyone have any clue what the problem could be, or what could be changed to prevent this?

    Read the article

  • Indexing datetime in MySQL

    - by User1
    What is the best way to index a datetime in MySQL? Which method is faster:

    1. Store the datetime as a double (via Unix timestamp)
    2. Store the datetime as a DATETIME

    The application generating the timestamp data can output either format. Unfortunately, the datetime will be a key for this particular data structure, so speed will matter. Also, is it possible to make an index on an expression? For example, an index on UNIX_TIMESTAMP(mydate), where mydate is a field in a table and UNIX_TIMESTAMP is a MySQL function. I know that Postgres can do it; I'm thinking there must be a way in MySQL as well.

    Read the article

  • Is it possible to give a Python dict an initial capacity (and is it useful)?

    - by Peter Smit
    I am filling a Python dict with around 10,000,000 items. My understanding of dicts (or hash tables) is that when too many elements are added, they need to resize, an operation that costs quite some time. Is there a way to tell a Python dict that you will be storing at least n items in it, so that it can allocate memory from the start? Or will this optimization do no good for my running speed? (And no, I have not checked whether the slowness of my small script is because of this; I actually wouldn't know how to check. It is, however, something I would do in Java: set the initial capacity of the HashSet right.)
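
    For context: CPython's dict exposes no capacity argument; the table grows automatically and the resizes amortize to O(1) per insert. A cheap way to check whether they matter for this workload is to time a plain fill against one where the table is pre-built with dict.fromkeys (a minimal sketch; fromkeys is a workaround rather than a real capacity hint, and the numbers will vary by interpreter version):

        import timeit

        N = 10 ** 6  # scaled down from the asker's 10M so each run stays quick

        def fill_fresh():
            d = {}
            for i in range(N):
                d[i] = i

        def fill_preseeded():
            # dict.fromkeys builds the full hash table in one pass up front,
            # so the assignments below trigger no further resizes
            d = dict.fromkeys(range(N), 0)
            for i in range(N):
                d[i] = i

        print('fresh    :', timeit.timeit(fill_fresh, number=5))
        print('preseeded:', timeit.timeit(fill_preseeded, number=5))

    If the two times come out close, resizing is already amortized away and the slowness lies elsewhere.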

    Read the article

  • Import Data from Text File into SQL DB using CSLA

    - by New Developer
    I am trying to import data from a '~'-delimited text file into SQL Server using CSLA. My text file has 92,000 records in it. Here are the issues I am having with the import:

    1. When I create a BusinessListBase via .New and add all my records to it, it gives me an "Out of memory" exception. To fix this, I create a new BusinessBase object per record and save it; this works fine and is much faster too, taking 15 minutes.
    2. I then have to run my program again to check for any changes and apply the updates, and this is where it takes too much time.

    Is there an alternative way to speed up my import?

    Read the article

  • Which is quicker? Memcache or file query? (using maxmind geoip.dat file)

    - by tomcritchlow
    Hi, I'm using Python on App Engine and am looking up the geolocation of an IP address like this:

        import pygeoip
        gi = pygeoip.GeoIP('GeoIP.dat')
        Location = gi.country_code_by_addr(self.request.remote_addr)

    (pygeoip can be found here: http://code.google.com/p/pygeoip/)

    I want to geolocate the user on each page view of my app, so currently I look up the IP address once and then store the result in memcache. My question: which is quicker, looking up the IP address each time from the .dat file, or fetching it from memcache? Are there any other pros/cons I need to be aware of? For general questions like this, is there a good guide that teaches how to optimize my code and run speed tests myself? I'm new to Python and coding in general, so apologies if this is a basic concept. Thanks! Tom
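
    This is hard to call without measuring: on App Engine a memcache get is itself a network round trip, while the .dat lookup is in-process work. One option worth timing is pygeoip's in-memory mode, which removes the per-request file seeks entirely. A minimal sketch, assuming the installed pygeoip exposes the MEMORY_CACHE flag (the cache-key naming is made up for the illustration):

        import pygeoip
        from google.appengine.api import memcache

        # MEMORY_CACHE loads GeoIP.dat fully into RAM at module import,
        # so each lookup is pure in-process computation with no file seeks
        gi = pygeoip.GeoIP('GeoIP.dat', pygeoip.MEMORY_CACHE)

        def country_for(ip):
            # try memcache first, fall back to the local database;
            # logging time.time() deltas around each branch under real
            # traffic answers the "which is quicker" question directly
            code = memcache.get('geo:' + ip)
            if code is None:
                code = gi.country_code_by_addr(ip)
                memcache.set('geo:' + ip, code, time=3600)
            return code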

    Read the article

  • migrating C++ code from structures to classes

    - by eSKay
    I am migrating some C++ code from structures to classes. I was using structures mainly for bit-field optimizations, which I no longer need (I am more worried about speed than about saving space now). What are the general guidelines for such a migration? I am still in the planning stage, as this is a very big move affecting a major part of the code, and I want to plan everything before doing it. What are the essential things I should keep in mind?

    Read the article

  • Resources for learning how to better read code

    - by rsteckly
    Hi, I recently inherited a large codebase and am having to read it. The thing is, I've usually been the dev starting a project, so I don't have much experience reading code. My reaction to having to read a lot of code is, well, umm, to rewrite it. But I need to bring myself up to speed quickly and build on top of an existing system. Have other people developed techniques for absorbing a codebase? At this point, I'm just reading through the code. I've tried generating UML diagrams using UModel; they're so big they won't print cleanly, and when I zoom in I lose the perspective of seeing all the relationships. How have other people dealt with this problem?

    Read the article

  • Apache server-side files caching via .htaccess?

    - by purpler
    Hi, I'm starting a new website that is going to include several JS libs, and I would like to know what a .htaccess file template should look like with caching of media and JS files turned on. Which is better for compression, gzip or deflate? Is it a better/faster solution to serve those JS libs off the Google CDN, or locally? I'm asking the CDN question because some of the scripts served off the Google CDN may eventually update and break the website layout, so I thought it would be better to host them locally and cache them via the web server, if that works at the same or near-same speed.

    Read the article

  • running Hadoop software on office computers (when they are idle)

    - by Shahbaz
    Is there a project that helps set up a Hadoop cluster on office desktops while they are idle? I'd like to experiment with Hadoop/MR/HBase but don't have access to 5-10 computers. The computers at work are idle after hours and are connected to each other through a very high-speed connection. What's more, data on these computers stays within our network, so there is no privacy issue. For this to work I need a fairly lightweight monitor running on each machine: when a computer has been idle for X hours, it joins the cluster; if the user logs on, it drops out of the cluster and returns all CPU/memory. Does something like this exist?

    Read the article

  • Scalable (half-million files) version control system

    - by hashable
    We use SVN for our source-code revision control and are experimenting with using it for non-source-code files. We are working with a large set (300-500k) of short (1-4 kB) text files that will be updated on a regular basis and need version control. We tried using SVN in flat-file mode, and it struggled to handle the first commit (500k files checked in), taking about 36 hours. On a daily basis, we need the system to handle 10k modified files per commit transaction in a short time (<5 min). My questions:

    1. Is SVN the right solution for my purpose? The initial speed seems too slow for practical use.
    2. If yes, is there a particular SVN server implementation that is fast? (We are currently using the GNU/Linux default SVN server and command-line client.)
    3. If no, what are the best F/OSS or commercial alternatives?

    Thanks

    Read the article

  • Is it possible to generate plain-old XML using Haml?

    - by lsdr
    I've been working on a piece of software where I need to generate a custom XML file to send back to a client application. The current solutions in the Ruby/Rails world for generating XML files are slow, at best. Builder and even Nokogiri, while they have a nice syntax and are maintainable solutions, consume too much time and processing. I could definitely go with ERB, which provides good speed at the expense of building the whole XML by hand. Haml is a great tool with a nice, straightforward syntax, and it is fairly fast, but I'm struggling to build pure XML files with it. Which makes me wonder: is it possible at all? Does anyone have pointers to code or docs showing how to build a full, valid XML document from Haml?

    Read the article

  • str_replace() with two-dimensional array

    - by Qiao
    You can use arrays with str_replace():

        $array_from = array('from1', 'from2');
        $array_to   = array('to1', 'to2');
        $text = str_replace($array_from, $array_to, $text);

    But what if you have a two-dimensional (from => to) array?

        $array_from_to = array(
            'from1' => 'to1',
            'from2' => 'to2',
        );

    How can you use it with str_replace()? Speed matters, as the array is big.

    Read the article

  • What is an efficient way to erase substrings?

    - by Legend
    I have a long string and a list of <end-index, word> pairs like the following:

        long_sentence = "This is a long long long long sentence"
        indices = [[6, "is"], [8, "is a"], [18, "long"], [23, "long"]]

    An element [6, "is"] indicates that 6 is the end index of the word "is" in the string. I want to end up with the following string:

        >>> print long_sentence
        This .... long ......... long sentence

    I tried an approach like this:

        temp = long_sentence
        for i in indices:
            temp = temp[:int(i[0]) - len(i[1])] + '.' * (len(i[1]) + 1) + temp[i[0] + 1:]

    While this seems to be working, it takes exceptionally long (more than 6 hours for the 5,000 strings inside a 300 MB file). Is there a way to speed this up?
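
    The slicing in that loop copies the entire string on every replacement, so each sentence costs time quadratic in its length. Converting to a mutable character list once and joining at the end makes the whole pass linear. A minimal sketch that reproduces the asker's dot placement (assuming, as their slicing implies, that the end indices are inclusive):

        def blank_words(sentence, indices):
            # one mutable copy instead of a new string per replacement
            chars = list(sentence)
            for end, word in indices:
                start = end - len(word)
                # overwrite the word plus the trailing position with dots,
                # matching the len(word) + 1 dots in the original loop
                chars[start:end + 1] = '.' * (end + 1 - start)
            return ''.join(chars)

        long_sentence = "This is a long long long long sentence"
        indices = [[6, "is"], [8, "is a"], [18, "long"], [23, "long"]]
        print(blank_words(long_sentence, indices))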

    Read the article

  • Extract anything that looks like links from large amount of data in python

    - by Riz
    Hi, I have around 5 GB of HTML data which I want to process to find links to a set of websites, then perform some additional filtering. Right now I use a simple regexp for each site and iterate over them, searching for matches. In my case links can be outside of "a" tags and be malformed in many ways (like a "\n" in the middle of a link), so I try to grab as many "links" as I can and check them later in other scripts (so no BeautifulSoup/lxml/etc.). The problem is that my script is pretty slow, so I am thinking about ways to speed it up. I am writing a set of tests to check different approaches, but I hope to get some advice :) Right now I am thinking about getting all links without filtering first (maybe using a C module or a standalone app that doesn't use regexps, just a simple search for the start and end of every link) and then using regexps to match the ones I need.
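
    One thing often worth trying before reaching for C: compile the per-site patterns into a single alternation and stream the file in chunks, so the 5 GB is scanned once rather than once per site. A minimal sketch (the patterns are placeholders, not the asker's real ones):

        import re

        # hypothetical per-site patterns; the real ones will differ
        site_patterns = [
            r'https?://(?:www\.)?example\.com/[^\s"<>]+',
            r'https?://(?:www\.)?example\.org/[^\s"<>]+',
        ]

        # one compiled alternation means a single pass over the data
        # instead of one full pass per site pattern
        combined = re.compile('|'.join('(?:%s)' % p for p in site_patterns))

        def iter_links(path, chunk_size=1 << 20):
            # stream in 1 MB chunks so 5 GB never sits in memory at once;
            # links straddling a chunk boundary are not handled here -- a
            # real version would keep a small overlap between chunks
            with open(path, encoding='utf-8', errors='ignore') as f:
                while True:
                    chunk = f.read(chunk_size)
                    if not chunk:
                        break
                    for match in combined.finditer(chunk):
                        yield match.group(0)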

    Read the article

  • fast scrolling background

    - by Andre
    I want a game that scrolls the background in a similar way to a UITableView. I solved it with a timer that moves the background up and brings another copy of the same picture up:

        if (bg1.center.y <= -self.view.bounds.size.height / 2) {
            bg1.center = CGPointMake(bg1.center.x, 690);
        }
        if (bg2.center.y <= -self.view.bounds.size.height / 2) {
            bg2.center = CGPointMake(bg2.center.x, 690);
        }
        bg1.center = CGPointMake(bg1.center.x, bg1.center.y - movement);
        bg2.center = CGPointMake(bg2.center.x, bg2.center.y - movement);

    But the faster I move the pictures, the more problems occur: gaps appear between the backgrounds, and they get bigger the faster I move them. (movement is defined by the speed of swiping over the screen.) Any idea how to solve this?

    Read the article

  • In game programming are global variables bad?

    - by Joe.F
    I know my gut reaction to global variables is "bad!", but in the two game development courses I've taken at my college, globals were used extensively, and now in the DirectX 9 game programming tutorial I am using (www.directxtutorial.com) I'm being told globals are okay in game programming...? The site also recommends using only structs, if you can, when doing game programming, to help keep things simple. I'm really confused by this, and all the research I've been doing is just as confusing. I realize there are issues with global variables (threading issues, code that is harder to maintain, state that is hard to track, etc.), but there is also a cost to not using globals: I'd have to pass a lot of information around very often, which can be confusing and, I imagine, costly in time, although I guess pointers would speed that up (this is my first time writing a game in C++). Anyway, I realize there is probably no "right" or "wrong" answer here since both ways work, but I want my code to be as proper as possible, so any input would be appreciated. Thank you very much!

    Read the article

  • How to save optimized png images with java's ImageIO?

    - by Christoph
    I am generating lots of images in Java and saving them through the ImageIO.write method like this:

        final BufferedImage img = createSomeImage();
        ImageIO.write(img, "png", new File("/some/file.png"));

    I was happy with the results until Google's Firefox addon Page Speed told me that I could save up to 60% of the size if I optimized the images. The images are QR codes; their size is around 900 B each, and the versions optimized by the Firefox plugin are around 300 B. I'd like to save such optimized ~300 B images directly from Java. So here is my question again: how do I save optimized PNG images with Java's ImageIO?

    Read the article

  • Products combining framework and visual IDE for web development?

    - by Tom Hubbard
    We are looking for tools to help improve our web development speed. The two main areas we have pinpointed as parts of the problem are "Framework/Flow Management" and "Visual/Layout Development". Ideally we would find a tool that handles both rather well; however, few tools seem to handle that middle ground. Usually it is just a framework, or an IDE, not both. The best thing we have found so far is Agile Platform. Are we missing any obvious products? The platform is not a huge concern at this point; we can migrate to the best tool.

    Read the article

  • Slow Client connection blocks Mongrel

    - by Sanjay
    I have an Apache + HAProxy + Mongrel setup for my Rails application. When I hit a particular server page, Mongrel takes around 100 ms to process the request, yet I get the page in around 5 seconds, due to the data transmission time on my slow home connection. I see that during these 5 seconds of data transmission, Mongrel does not serve any other request. I am surprised, as that means Mongrel is serving the response HTML to the client and is blocked until the client receives it. Shouldn't serving the response be Apache's job? This puts a serious bottleneck on the number of requests Mongrel can serve, since that now depends on the speed of each client's connection. Is there any way to have the HTML generated by Mongrel served by Apache/HAProxy or another web server like nginx? I wonder how other high-traffic sites manage this.

    Read the article

  • How to get Apache mod_cache to work with mod_wsgi (Django)?

    - by harmv
    I thought I'd speed up my Django projects by letting Apache do some caching for me. Unfortunately, I see that Apache never caches my dynamic pages. Does mod_cache have problems with code served via mod_wsgi? My Apache config:

        <VirtualHost *:80>
            ServerName myserver.com

            CacheEnable mem /
            # for testing only:
            CacheIgnoreQueryString On
            CacheIgnoreCacheControl On

            WSGIDaemonProcess aname processes=1 threads=25
            WSGIProcessGroup aname

            Alias /media/ /home/harm/projects/test/media/
            WSGIScriptAlias / /home/harm/projects/test/wsgi.py
        </VirtualHost>

    The response does have the correct caching headers:

        Content-Length: 2647
        Content-Encoding: gzip
        Vary: Accept-Encoding
        Cache-Control: public, max-age=3600
        Keep-Alive: timeout=15, max=100
        Connection: Keep-Alive
        Content-Type: application/x-javascript

    Am I missing something?

    Read the article

  • How do I connect two apps

    - by sevaxx
    I am considering building an app in C++ that will parse text from the web and produce some statistical results. These results need to be fed into an external app in real time. The external app (whose code I have no access to, but whose vendor I can ask for a paid, custom-made addition) will then need some code to read and use these results. I am wondering what would be the best way to interconnect the two apps, in terms of speed and ease of implementation. I am considering:

    - disk I/O (slow)
    - a Windows service
    - a DLL
    - a web service
    - a web page

    Perhaps I am missing a better solution? Thank you.

    Read the article

  • get all elements under a mouse drag selection

    - by Jayapal Chandran
    Hi, you have probably seen image cropping tools that offer a selection option (a marquee tool) created in JavaScript: http://marqueetool.net/examples/ Like this, I want to get all elements under a selection. For example, Windows has mouse group selection of files and folders: by dragging the mouse you select multiple files. Similarly, I need to get all the elements under a selection in JavaScript. I can try other gimmicks, but if there is a simple tool for this, it would speed up my work.

    Read the article
