Search Results

Search found 8262 results on 331 pages for 'optimization algorithm'.

Page 152/331

  • Creating thousands of records in Rails

    - by willCosgrove
    Let me set the stage: my application deals with gift cards. When we create cards they have to have a unique string that the user can use to redeem them with. So when someone such as a retailer orders our gift cards, we need to create a lot of new Card objects and store them in the DB. With that in mind, I'm trying to see how quickly I can have my application generate 100,000 Cards. Database expert I am not, so I need someone to explain this little phenomenon: when I create 1,000 cards, it takes 5 seconds, so creating 100,000 cards should take 500 seconds, right? Now I know what you're wanting to see, the card creation method I'm using, because the first assumption would be that it's getting slower because it's checking the uniqueness of more and more cards as it goes along. But I can show you my rake task:

        desc "Creates cards for a retailer"
        task :order_cards, [:number_of_cards, :value, :retailer_name] => :environment do |t, args|
          t = Time.now
          puts "Searching for retailer"
          @retailer = Retailer.find_by_name(args[:retailer_name])
          puts "Retailer found"
          puts "Generating codes"
          value = args[:value].to_i
          number_of_cards = args[:number_of_cards].to_i
          codes = []
          top_off_codes(codes, number_of_cards)
          while codes != codes.uniq
            codes.uniq!
            top_off_codes(codes, number_of_cards)
          end
          stored_codes = Card.all.collect do |c|
            c.code
          end
          while codes != (codes - stored_codes)
            codes -= stored_codes
            top_off_codes(codes, number_of_cards)
          end
          puts "Codes are unique and generated"
          puts "Creating bundle"
          @bundle = @retailer.bundles.create!(:value => value)
          puts "Bundle created"
          puts "Creating cards"
          @bundle.transaction do
            codes.each do |code|
              @bundle.cards.create!(:code => code)
            end
          end
          puts "Cards generated in #{Time.now - t}s"
        end

        def top_off_codes(codes, intended_number)
          (intended_number - codes.size).times do
            codes << ReadableRandom.get(CODE_LENGTH)
          end
        end

    I'm using a gem called readable_random for the unique code. If you read through all of that code, you'll see that it does all of its uniqueness testing before it ever starts creating cards. It also writes status updates to the screen while it's running, and it always sits for a while at "Creating cards", while it flies through the uniqueness tests. So my question to the Stack Overflow community is: why is my database slowing down as I add more cards? Why is this not a linear function with regard to time per card? I'm sure the answer is simple and I'm just a moron who knows nothing about data storage. And if anyone has any suggestions, how would you optimize this method, and how fast do you think you could get it to create 100,000 cards? (When I plotted my times on a graph and did a quick curve fit to get my line formula, I calculated how long it would take to create 100,000 cards with my current code and it came out at about 5.5 hours. That may be completely wrong, I'm not sure, but if it stays on the curve I fitted, it would be right around there.)
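
    A minimal sketch of why row-at-a-time inserts dominate the run time, written with Python's standard sqlite3 module rather than ActiveRecord (the table and column names here are made up for illustration). The principle carries over: sending the rows through one prepared, repeated INSERT removes most of the per-statement overhead, which is where bulk-insert approaches get their speedup.

        import sqlite3, time, uuid

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE cards (code TEXT)")
        codes = [(uuid.uuid4().hex[:10],) for _ in range(100000)]

        # One INSERT statement per card, similar to calling create! in a loop.
        start = time.time()
        for row in codes:
            conn.execute("INSERT INTO cards (code) VALUES (?)", row)
        print("single-row inserts:", time.time() - start)

        conn.execute("DELETE FROM cards")

        # The same rows sent through one prepared statement, executed repeatedly.
        start = time.time()
        conn.executemany("INSERT INTO cards (code) VALUES (?)", codes)
        print("executemany:       ", time.time() - start)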

    Read the article

  • Does visibility affect DOM manipulation performance?

    - by Chetan Sastry
    IE7/Windows XP. I have a third-party component in my page that does a lot of DOM manipulation to adjust itself each time the browser window is resized. Unfortunately I have little control over what it does internally, and I have optimized everything else (such as callbacks and event handlers) as much as I can. I can't take the component out of the flow by setting display:none, because it fails to measure itself if I do so. In general, does setting the container's visibility to hidden during the resize improve DOM rendering performance?

    Read the article

  • Cost of exception handlers in Python

    - by Thilo
    In another question, the accepted answer suggested replacing a (very cheap) if statement in Python code with a try/except block to improve performance. Coding style issues aside, and assuming that the exception is never triggered, how much difference does it make (performance-wise) to have an exception handler, versus not having one, versus having a compare-to-zero if-statement?
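
    A quick way to put numbers on this is timeit; the sketch below compares a try/except that never raises, an explicit test, and no guard at all (the guarded statements here are placeholders, substitute the real ones from the code in question). On CPython, entering a try block that does not raise is cheap, but the only trustworthy answer comes from timing the actual statements involved.

        import timeit

        setup = "d = {'a': 1}"
        n = 1000000

        t_try = timeit.timeit(
            "try:\n v = d['a']\nexcept KeyError:\n v = 0", setup, number=n)
        t_if = timeit.timeit(
            "v = d['a'] if 'a' in d else 0", setup, number=n)
        t_bare = timeit.timeit("v = d['a']", setup, number=n)

        print("try/except (no raise):", t_try)
        print("explicit test:        ", t_if)
        print("no guard:             ", t_bare)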

    Read the article

  • TSQL -- Make it better

    - by user319353
    Hi:

        -- Very narrow (all IDs are passed in)
        IF (@EmpID IS NOT NULL AND @DeptID IS NOT NULL AND @CityID IS NOT NULL)
        BEGIN
            SELECT e.EmpName, d.DeptName, c.CityName
            FROM Employee e WITH (NOLOCK)
            JOIN Department d WITH (NOLOCK) ON e.deptid = d.deptid
            JOIN City c WITH (NOLOCK) ON e.CityID = c.CityID
            WHERE e.EmpID = @EmpID
        END
        -- Just 2 IDs passed in
        ELSE IF (@DeptID IS NOT NULL AND @CityID IS NOT NULL)
        BEGIN
            SELECT e.EmpName, d.DeptName, NULL AS [CityName]
            FROM Employee e WITH (NOLOCK)
            JOIN Department d WITH (NOLOCK) ON e.deptid = d.deptid
            JOIN City c WITH (NOLOCK) ON e.CityID = c.CityID
            WHERE d.deptID = @DeptID
        END
        -- Very broad (just 1 ID passed in)
        ELSE IF (@CityID IS NOT NULL)
        BEGIN
            SELECT e.EmpName, NULL AS [DeptName], NULL AS [CityName]
            FROM Employee e WITH (NOLOCK)
            JOIN Department d WITH (NOLOCK) ON e.deptid = d.deptid
            JOIN City c WITH (NOLOCK) ON e.CityID = c.CityID
            WHERE c.CityID = @CityID
        END
        -- None (nothing passed in)
        ELSE
        BEGIN
            SELECT NULL AS [EmpName], NULL AS [DeptName], NULL AS [CityName]
        END

    Question: is there any better way to write this? Specifically, can I do anything to avoid the IF...ELSE conditions?

    Read the article

  • What are good memory management techniques in Flash/as3

    - by Parris
    Hello! So I am pretty familiar with memory management in Java, C and C++; however, what constructs are there for memory management in Flash? I assume Flash has a virtual machine of sorts, like Java, and I have been assuming that things get garbage collected when they are set to null. I am not sure if this is actually the case, though. Also, is there a way to force garbage collection in Flash? Any other tips? Thanks

    Read the article

  • very quickly getting total size of folder

    - by freakazo
    I want to quickly find the total size of any folder using Python.

        def GetFolderSize(path):
            TotalSize = 0
            for item in os.walk(path):
                for file in item[2]:
                    try:
                        TotalSize = TotalSize + getsize(join(item[0], file))
                    except:
                        print("error with file: " + join(item[0], file))
            return TotalSize

    That's the simple script I wrote to get the total size of the folder; it took around 60 seconds (+-5 seconds). By using multiprocessing I got it down to 23 seconds on a quad-core machine. Using the Windows file explorer it takes only ~3 seconds (right-click → Properties to see for yourself). So is there a faster way of finding the total size of a folder, close to the speed that Windows can do it? Windows 7, Python 2.6. (I did searches, but most of the time people used a method very similar to my own.) Thanks in advance.
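
    For what it's worth, on newer Python (3.6+, or the scandir backport package on 2.x) a recursive walk built on os.scandir usually gets much closer to Explorer's speed, because on Windows each directory entry already carries its size, so no separate stat call is needed per file. This is an alternative approach rather than a fix to the code above; a minimal sketch, assuming only regular files are wanted and symlinks should be skipped:

        import os

        def folder_size(path):
            # Sum file sizes using the stat info cached on each directory entry.
            total = 0
            try:
                with os.scandir(path) as entries:
                    for entry in entries:
                        if entry.is_file(follow_symlinks=False):
                            total += entry.stat(follow_symlinks=False).st_size
                        elif entry.is_dir(follow_symlinks=False):
                            total += folder_size(entry.path)
            except OSError as err:
                print("error with folder: " + path + " (" + str(err) + ")")
            return total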

    Read the article

  • Why is shrink_to_fit non-binding?

    - by Roger Pate
    The C++0x FCD states in 23.3.6.2 vector capacity: void shrink_to_fit(); Remarks: shrink_to_fit is a non-binding request to reduce capacity() to size(). [Note: The request is non-binding to allow latitude for implementation-specific optimizations. —end note] Why is it non-binding, and what optimizations are intended to be allowed?

    Read the article

  • How can I encode four unsigned bytes (0-255) to a float and back again using HLSL?

    - by Statement
    Hello! I am facing a task where one of my hlsl shaders require multiple texture lookups per pixel. My 2d textures are fixed to 256*256, so two bytes should be sufficient to address any given texel given this constraint. My idea is then to put two xy-coordinates in each float, giving me eight xy-coordinates in pixel space when packed in a Vector4 format image. These eight coordinates are then used to sample another texture(s). The reason for doing this is to save graphics memory and an attempt to optimize processing time, since then I don't require multiple texture lookups. By the way: Does anyone know if encoding/decoding 16 bytes from/to 4 floats using 1 sampling is slower than 4 samplings with unencoded data?
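
    As a side note on the packing arithmetic: two 8-bit coordinates occupy 16 bits, which fits exactly in the 24-bit mantissa of a 32-bit float, so a simple value = x * 256 + y round-trips without precision loss. The sketch below shows the encode/decode in plain Python purely to illustrate the arithmetic (the pack/unpack names are made up, and in HLSL the decode would use floor and a multiply-subtract or fmod); it is not shader code.

        import math

        def pack(x, y):
            # Two coordinates in [0, 255] -> one value in [0, 65535].
            return x * 256 + y

        def unpack(v):
            x = math.floor(v / 256.0)     # HLSL: floor(v / 256.0)
            y = v - x * 256               # HLSL: v - x * 256.0 (or fmod)
            return int(x), int(y)

        assert unpack(pack(200, 37)) == (200, 37)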

    Read the article

  • How to open DataSet in Visual Studio 2008 faster?

    - by Ekkapop
    When I open a DataSet in Visual Studio 2008 to design or modify it, it always takes a very long time (more than five minutes) before I can continue with my work. While I'm waiting I can't do anything in Visual Studio; moreover, CPU and memory usage grow dramatically. I want to know: is there any way to reduce this waiting time?

    Hardware - Desktop
    CPU: Intel Q6600
    Memory: 4 GB
    HDD: 320 GB 7200 rpm
    OS: Windows XP 32-bit with Service Pack 3

    Read the article

  • performance issue in a select query from a single table

    - by daedlus
    Hi, I have a table as below:

        dbo.UserLogs
        -------------------------------------
        Id | UserId | Date | Name | P1 | Dirty
        -------------------------------------

    There can be several records per UserId (even millions). I have a clustered index on the Date column and query this table very frequently over time ranges. The column 'Dirty' is non-nullable and can take only 0 or 1, so I have no indexes on 'Dirty'. I have several million records in this table, and in one particular case in my application I need to query this table to get all UserIds that have at least one record marked dirty. I tried this query:

        select distinct(UserId) from UserLogs where Dirty = 1

    I have 10 million records in total and this takes around 10 minutes to run; I want it to run much faster than that. (I am able to query this table on the Date column in less than a minute.) Any comments/suggestions are welcome. My environment: 64-bit, Sybase 15.0.3, Linux.

    Read the article

  • How do you optimize stunicholls' "Professional dropdown #2" with jquery?

    - by geff_chang
    Link to menu: Professional dropdown #2 I was wondering if these posts Suckerfish meets jQuery or Son of Suckerfish dropdowns in jQuery could optimize the menu above. I need the menu to be optimized for IE6, because when I use the menu as it is, the menu hangs after I click on a menu item that loads a page with heavy processing. It takes too long for the menu to be enabled again. Any ideas?

    Read the article

  • Automatically find compiler options for fastest exe on given machine?

    - by dehmann
    Is there a method to automatically find the best compiler options (on a given machine), which result in the fastest possible executable? Naturally, I use g++ -O3, but there are additional flags that may make the code run faster, e.g. -ffast-math and others, some of which are hardware-dependent. Does anyone know some code I can put in my configure.ac file (GNU autotools), so that the flags will be added to the Makefile automatically by the ./configure command? In addition to automatically determining the best flags, I would be interested in some useful compiler flags that are good to use as a default for most optimized executables.

    Read the article

  • OpenMP timer doesn't work on inline assembly code?

    - by Brett
    I'm trying to compare some code samples for speed, and I decided to use the OpenMP timer since I'll eventually be multithreading the code. The timer works great on two of my four code snippets, but not on the other two:

        start = omp_get_wtime();
        /* code here */
        finish = omp_get_wtime() - start_time;

    The four code sections are serial code, xmmintrin.h code, and two inline assembly versions. The serial and xmmintrin.h code can be timed, but the inline assembly versions return -1.#IND00 for the time. I can't seem to figure out why this is. Thanks for any help or suggestions!

    Read the article

  • JavaScript replace with callback - performance question

    - by Tomalak
    In JavaScript, you can define a callback handler in regex string replace operations:

        str.replace(/str[123]|etc/, replaceCallback);

    Imagine you have a lookup object of strings and replacements:

        var lookup = {"str1": "repl1", "str2": "repl2", "str3": "repl3", "etc": "etc"};

    and this callback function:

        var replaceCallback = function(match) {
          if (lookup[match])
            return lookup[match];
          else
            return match;
        }

    How would you assess the performance of the above callback? Are there solid ways to improve it? Would

        if (match in lookup) // ...

    or even

        return lookup[match] | match;

    lead to opportunities for the JS compiler to optimize, or is it all the same thing?

    Read the article

  • Speed of CSS

    - by Ólafur Waage
    This is just a question to help me understand CSS rendering better. Let's say we have a million lines of this:

        <div class="first">
          <div class="second">
            <span class="third">Hello World</span>
          </div>
        </div>

    Which would be the fastest way to change the color of Hello World to red?

        .third { color: red; }
        div.third { color: red; }
        div.second div.third { color: red; }
        div.first div.second div.third { color: red; }

    Also, what if there was a tag in the middle that had a unique id of "foo"? Which one of the CSS methods above would be the fastest? I know why these methods are used etc.; I'm just trying to get a better grasp of how browsers render this, and I have no idea how to write a test that times it.

    UPDATE: Nice answer, Gumbo. From the looks of it, it would be quicker on a regular site to give the full definition of a tag, since the browser finds the parents and narrows the search for every parent found. That could be bad in the sense that you'd end up with a pretty large CSS file, though.

    Read the article

  • Why is Magento so slow?

    - by mr-euro
    Is Magento usually so terrible slow? This is my first experience with it and the admin panel simply takes ages to load and save changes. It is a default installation with the test data. The server it is hosted on serves other non-Magento sites super fast. What is it about the PHP code that Magento uses that makes it so slow, and what can be done to fix it?

    Read the article

  • Is using iframes to improve page performance an acceptable approach?

    - by Denis Hoctor
    Hi all, I have a complex page that has several user controls like galleries, maps, ads, etc. I've tried optimising them by ensuring full separation of HTML/CSS/JS, placing the JS at the bottom of the page, and trying to ensure I have well-written code in all three, but alas I still have a slow page. It's not really noticeable in a modern browser, but I can see it in the stats and in IE6/7. So I'm now looking to do what we've done previously for Adtech Flash crap - an iframe. Apart from the SEO impact, which I'm not worried about in the case of these controls, what do people think of this as an approach? Pros and cons, please. Thanks, Denis

    Read the article

  • How do we greatly optimize our MySQL database (or replace it) when using joins?

    - by jkaz
    Hi there, this is the first time I'm approaching an extremely high-volume situation. This is an ad server based on MySQL. However, the query that is used incorporates a lot of JOINs and is generally just slow. (This is Rails ActiveRecord, btw.)

        sel = Ads.find(:all, :select => '*',
          :joins => "JOIN campaigns ON ads.campaign_id = campaigns.id
                     JOIN users ON campaigns.user_id = users.id
                     LEFT JOIN countries ON countries.campaign_id = campaigns.id
                     LEFT JOIN keywords ON keywords.campaign_id = campaigns.id",
          :conditions => [flashstr + "keywords.word = ? AND ads.format = ? AND campaigns.cenabled = 1 AND (countries.country IS NULL OR countries.country = ?) AND ads.enabled = 1 AND campaigns.dailyenabled = 1 AND users.uenabled = 1", kw, format, viewer['country'][0]],
          :order => order, :limit => limit)

    My questions: Is there an alternative database to MySQL that has JOIN support but is much faster? (I know there's Postgres; I'm still evaluating it.) Otherwise, would firing up a MySQL instance, loading a local database into memory and re-loading that every 5 minutes help? Otherwise, is there any way I could switch this entire operation to Redis or Cassandra, and somehow change the JOIN behavior to match the (non-JOIN-able) nature of NoSQL? Thank you!

    Read the article

  • Converting python collaborative filtering code to use Map Reduce

    - by Neil Kodner
    Using Python, I'm computing cosine similarity across items. Given event data that represents a purchase (user, item), I have a list of all items 'bought' by my users. Given this input data (user, item):

        X,1
        X,2
        Y,1
        Y,2
        Z,2
        Z,3

    I build a Python dictionary:

        {1: ['X','Y'], 2: ['X','Y','Z'], 3: ['Z']}

    From that dictionary, I generate a bought/not-bought matrix, also another dictionary (bnb):

        {1: [1,1,0], 2: [1,1,1], 3: [0,0,1]}

    From there, I compute the similarity between (1, 2) by calculating the cosine between (1,1,0) and (1,1,1), yielding 0.816496. I'm doing this by:

        items = [1, 2, 3]
        for item in items:
            for sub in items:
                if sub >= item:  # so as not to calculate similarity on the inverse
                    sim = coSim(bnb[item], bnb[sub])

    I think the brute-force approach is killing me, and it only runs slower as the data gets larger. Using my trusty laptop, this calculation runs for hours when dealing with 8500 users and 3500 items. I'm trying to compute similarity for all items in my dict and it's taking longer than I'd like. I think this is a good candidate for MapReduce, but I'm having trouble 'thinking' in terms of key/value pairs. Alternatively, is the issue with my approach, and not necessarily a candidate for MapReduce?
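
    Before reaching for MapReduce, it may be worth noting that all pairwise cosines can be computed in one shot with a matrix product if the bnb vectors are stacked into a NumPy array; at 3500 items x 8500 users that is roughly 30 million floats, which fits in memory on a laptop. A minimal sketch, assuming NumPy is available (the toy matrix mirrors the example above; this is an alternative to the double loop, not the poster's code):

        import numpy as np

        # Rows are items, columns are users -- the bnb vectors from the question.
        bnb = np.array([[1, 1, 0],
                        [1, 1, 1],
                        [0, 0, 1]], dtype=float)

        norms = np.linalg.norm(bnb, axis=1)
        # Cosine similarity of every item pair in a single matrix product.
        sims = bnb.dot(bnb.T) / np.outer(norms, norms)

        print(sims[0, 1])   # ~0.816497, the cosine of (1,1,0) and (1,1,1)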

    Read the article

  • Optimizing PHP require_once's for low disk i/o?

    - by buggedcom
    Q1) I'm designing a CMS (who isn't!) but priority is being given to caching. Literally everything is cached: DB rows, DB id queries, configuration data, processed data, compiled templates. Currently it has two layers of caching. The first is an opcode cache or memory cache such as APC, eAccelerator, XCache or memcached. If an entry is not found in there, it is then searched for in the secondary slow cache, i.e. PHP includes. Are the opcode caches actually faster than doing a require_once of a PHP file with a var_export'd array of data in it? My tests are inconclusive, as my development box (PHP 5.3 on XAMPP) keeps throwing errors when I install any of the aforementioned programs.

    Q2) The CMS has numerous helper classes that are autoloaded on demand instead of loading all files. Mostly each has a require before it so no autoloading needs to take place; however, this is not the question. Because a page script can have up to 50/60 helper files included, I have a feeling that if the site were under pressure it would buckle because of all the I/O this incurs. Ignore for the moment that there is an output cache in place that would remove the need for what I am about to suggest, and also that opcode caches would render this moot. What I have tried to do is join all the helper files required for a script's execution into one single file. This is achievable and works well; however, it has the side effect of greatly increasing memory usage, even though technically the same code is being used. What are your thoughts and opinions on this?

    Read the article

  • io operations in compilers

    - by Aastha
    How are constructs for I/O operations handled by a compiler? Like the RTL mapping for memory-related operations which is done in a compiler at the time of target code generation, where and how exactly is the same done for I/O operations? How do the approaches differ for processors supporting memory-mapped I/O versus port-mapped I/O? Are there any optimizations done for I/O operations in compilers?

    Read the article

  • Mysql query help - Alter this mysql query to get these results?

    - by sandeepan-nath
    Please execute the following queries first to set things up, so that you can help me:

        CREATE TABLE IF NOT EXISTS `Tutor_Details` (
          `id_tutor` int(10) NOT NULL auto_increment,
          `firstname` varchar(100) NOT NULL default '',
          `surname` varchar(155) NOT NULL default '',
          PRIMARY KEY (`id_tutor`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=41;

        INSERT INTO `Tutor_Details` (`id_tutor`, `firstname`, `surname`) VALUES (1, 'Sandeepan', 'Nath'), (2, 'Bob', 'Cratchit');

        CREATE TABLE IF NOT EXISTS `Classes` (
          `id_class` int(10) unsigned NOT NULL auto_increment,
          `id_tutor` int(10) unsigned NOT NULL default '0',
          `class_name` varchar(255) default NULL,
          PRIMARY KEY (`id_class`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=229;

        INSERT INTO `Classes` (`id_class`, `class_name`, `id_tutor`) VALUES (1, 'My Class', 1), (2, 'Sandeepan Class', 2);

        CREATE TABLE IF NOT EXISTS `Tags` (
          `id_tag` int(10) unsigned NOT NULL auto_increment,
          `tag` varchar(255) default NULL,
          PRIMARY KEY (`id_tag`),
          UNIQUE KEY `tag` (`tag`),
          KEY `id_tag` (`id_tag`),
          KEY `tag_2` (`tag`),
          KEY `tag_3` (`tag`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=18;

        INSERT INTO `Tags` (`id_tag`, `tag`) VALUES (1, 'Bob'), (6, 'Class'), (2, 'Cratchit'), (4, 'Nath'), (3, 'Sandeepan'), (5, 'My');

        CREATE TABLE IF NOT EXISTS `Tutors_Tag_Relations` (
          `id_tag` int(10) unsigned NOT NULL default '0',
          `id_tutor` int(10) default NULL,
          KEY `Tutors_Tag_Relations` (`id_tag`),
          KEY `id_tutor` (`id_tutor`),
          KEY `id_tag` (`id_tag`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

        INSERT INTO `Tutors_Tag_Relations` (`id_tag`, `id_tutor`) VALUES (3, 1), (4, 1), (1, 2), (2, 2);

        CREATE TABLE IF NOT EXISTS `Class_Tag_Relations` (
          `id_tag` int(10) unsigned NOT NULL default '0',
          `id_class` int(10) default NULL,
          `id_tutor` int(10) NOT NULL,
          KEY `Class_Tag_Relations` (`id_tag`),
          KEY `id_class` (`id_class`),
          KEY `id_tag` (`id_tag`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

        INSERT INTO `Class_Tag_Relations` (`id_tag`, `id_class`, `id_tutor`) VALUES (5, 1, 1), (6, 1, 1), (3, 2, 2), (6, 2, 2);

    In the present system data which I have given, the tutor named "Sandeepan Nath" has created the class named "My Class", and the tutor named "Bob Cratchit" has created the class named "Sandeepan Class".

    Requirement - To execute a single query, with a limit on the results, that returns search results using AND logic across the search keywords, like this: if "Sandeepan Class" is searched, tutor Sandeepan Nath's record from the Tutor_Details table is returned (because "Sandeepan" is the firstname of Sandeepan Nath and "Class" is present in the class name of Sandeepan's class). If "Class" is searched, both tutors from the Tutor_Details table are fetched, because "Class" is present in the name of the class created by both tutors.
    Following is what I have achieved so far (PHP/MySQL):

        <?php
        $searchTerm1 = "Sandeepan";
        $searchTerm2 = "Class";
        mysql_select_db("test");
        $sql = "SELECT td.*
                FROM Tutor_Details AS td
                LEFT JOIN Tutors_Tag_Relations AS ttagrels ON td.id_tutor = ttagrels.id_tutor
                LEFT JOIN Classes AS wc ON td.id_tutor = wc.id_tutor
                LEFT JOIN Class_Tag_Relations AS wtagrels ON td.id_tutor = wtagrels.id_tutor
                LEFT JOIN Tags as t1 on ((t1.id_tag = ttagrels.id_tag) OR (t1.id_tag = wtagrels.id_tag))
                LEFT JOIN Tags as t2 on ((t2.id_tag = ttagrels.id_tag) OR (t2.id_tag = wtagrels.id_tag))
                where t1.tag LIKE '%".$searchTerm1."%'
                AND t2.tag LIKE '%".$searchTerm2."%'
                GROUP BY td.id_tutor
                LIMIT 10";
        $result = mysql_query($sql);
        echo $sql;
        if($result) {
            while($rec = mysql_fetch_object($result))
                $recs[] = $rec;
            //$rec = mysql_fetch_object($result);
            echo "<br><br>";
            if(is_array($recs)) {
                foreach($recs as $each) {
                    print_r($each);
                    echo "<br>";
                }
            }
        }
        ?>

    But the results are: if "Sandeepan Nath" is searched, it does not return any tutor (instead of only Sandeepan's row). If "Sandeepan Class" is searched, it returns Sandeepan's row (instead of both tutors). If "Bob Class" is searched, it correctly returns Bob's row. If "Bob Cratchit" is searched, it does not return any tutor (instead of only

    Read the article

  • Most optimized way to calculate modulus in C

    - by hasanatkazmi
    I have to minimize the cost of calculating a modulus in C. Say I have a number x, and n is the number which will divide x. When n == 65536 (which happens to be 2^16):

        mod = x % n       /* 11 assembly instructions as produced by GCC */

    or

        mod = x & 0xffff  /* which is equal to mod = x & 65535 -- 4 assembly instructions */

    So GCC doesn't optimize it to this extent. In my case n is not 2^(int) but is the largest prime less than 2^16, which is 65521. As I showed for n == 2^16, bit-wise operations can optimize the computation. What bit-wise operations can I perform when n == 65521 to calculate the modulus?
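
    One property worth noting (this is a sketch of the idea, not drop-in C): since 2^16 mod 65521 = 15, a value can be split into 16-bit halves and folded, x mod 65521 = (15*(x >> 16) + (x & 0xffff)) mod 65521. Two folds bring any 32-bit input below 65761, after which a single conditional subtract finishes the job - the same reduction trick Adler-32 implementations use for this exact modulus. The arithmetic is written in Python below for clarity and translates line for line to C with unsigned types; whether it actually beats the compiler's own multiply-based division-by-constant code is something you would have to measure.

        def mod_65521(x):
            # Valid for 0 <= x < 2**32: fold the high 16 bits down twice,
            # using 2**16 == 15 (mod 65521), then fix up with one subtract.
            x = (x >> 16) * 15 + (x & 0xFFFF)   # now x < 2**20
            x = (x >> 16) * 15 + (x & 0xFFFF)   # now x < 65761
            if x >= 65521:
                x -= 65521
            return x

        assert all(mod_65521(v) == v % 65521
                   for v in (0, 1, 65520, 65521, 65535, 100000, 2**32 - 1))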

    Read the article

  • GCC (ld) option to strip unreferenced data/functions

    - by legends2k
    I've written a program which uses a library that has numerous functions, but I only use a limited set of them. GCC is the compiler I use. Once I've created a binary and used nm to see the symbols in it, it shows all the unwanted (unreferenced) functions which are never used. How do I remove those unreferenced functions and data from the executable? Is the -s option right? I'm told that it strips all symbol table and relocation data from the binary, but does this remove the functions and data too? I'm not sure how to verify this either, since after using -s, nm no longer works - the symbol table data has been stripped as well.

    Read the article
