Search Results

Search found 15401 results on 617 pages for 'memory optimization'.

Page 179/617 | < Previous Page | 175 176 177 178 179 180 181 182 183 184 185 186  | Next Page >

  • Optimize date query for large child tables: GiST or GIN?

    - by Dave Jarvis
    Problem

    72 child tables, each having a year index and a station index, are defined as follows:

        CREATE TABLE climate.measurement_12_013
        (
          -- Inherited from table climate.measurement_12_013: id bigint NOT NULL DEFAULT nextval('climate.measurement_id_seq'::regclass),
          -- Inherited from table climate.measurement_12_013: station_id integer NOT NULL,
          -- Inherited from table climate.measurement_12_013: taken date NOT NULL,
          -- Inherited from table climate.measurement_12_013: amount numeric(8,2) NOT NULL,
          -- Inherited from table climate.measurement_12_013: category_id smallint NOT NULL,
          -- Inherited from table climate.measurement_12_013: flag character varying(1) NOT NULL DEFAULT ' '::character varying,
          CONSTRAINT measurement_12_013_category_id_check CHECK (category_id = 7),
          CONSTRAINT measurement_12_013_taken_check CHECK (date_part('month'::text, taken)::integer = 12)
        )
        INHERITS (climate.measurement)

        CREATE INDEX measurement_12_013_s_idx
          ON climate.measurement_12_013
          USING btree (station_id);
        CREATE INDEX measurement_12_013_y_idx
          ON climate.measurement_12_013
          USING btree (date_part('year'::text, taken));

    (Foreign key constraints to be added later.) The following query runs abysmally slow due to a full table scan:

        SELECT
          count(1) AS measurements,
          avg(m.amount) AS amount
        FROM
          climate.measurement m
        WHERE
          m.station_id IN (
            SELECT
              s.id
            FROM
              climate.station s,
              climate.city c
            WHERE
              -- For one city ...
              --
              c.id = 5182 AND
              -- Where stations are within an elevation range ...
              --
              s.elevation BETWEEN 0 AND 3000 AND
              6371.009 * SQRT(
                POW(RADIANS(c.latitude_decimal - s.latitude_decimal), 2) +
                (COS(RADIANS(c.latitude_decimal + s.latitude_decimal) / 2) *
                 POW(RADIANS(c.longitude_decimal - s.longitude_decimal), 2))
              ) <= 50
          ) AND
          --
          -- Begin extracting the data from the database.
          --
          -- The data before 1900 is shaky; insufficient after 2009.
          --
          extract( YEAR FROM m.taken ) BETWEEN 1900 AND 2009 AND
          -- Whittled down by category ...
          --
          m.category_id = 1 AND
          m.taken BETWEEN
            -- Start date.
            (extract( YEAR FROM m.taken )||'-01-01')::date AND
            -- End date. Calculated by checking to see if the end date wraps
            -- into the next year. If it does, then add 1 to the current year.
            --
            (cast(extract( YEAR FROM m.taken ) + greatest( -1 *
              sign(
                (extract( YEAR FROM m.taken )||'-12-31')::date -
                (extract( YEAR FROM m.taken )||'-01-01')::date ), 0
            ) AS text)||'-12-31')::date
        GROUP BY
          extract( YEAR FROM m.taken )

    The sluggishness comes from this part of the query:

        m.taken BETWEEN
          /* Start date. */
          (extract( YEAR FROM m.taken )||'-01-01')::date AND
          /* End date. Calculated by checking to see if the end date wraps
             into the next year. If it does, then add 1 to the current year. */
          (cast(extract( YEAR FROM m.taken ) + greatest( -1 *
            sign(
              (extract( YEAR FROM m.taken )||'-12-31')::date -
              (extract( YEAR FROM m.taken )||'-01-01')::date ), 0
          ) AS text)||'-12-31')::date

    The HashAggregate from the plan shows a cost of 10006220141.11, which is, I suspect, on the astronomically huge side. There is a full table scan on the measurement table (itself having neither data nor indexes) being performed. The table aggregates 237 million rows from its child tables.

    Question

    What is the proper way to index the dates to avoid full table scans?

    Options I have considered:

      • GIN
      • GiST
      • Rewrite the WHERE clause
      • Separate year_taken, month_taken, and day_taken columns to the tables

    What are your thoughts? Thank you!
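
    One common way to sidestep the full table scan, sketched below with the table and column names from the question, is to index the raw taken column (optionally together with station_id) and compare it against constant date bounds, so the filter can use an index scan instead of evaluating extract(YEAR FROM taken) for every row. The index name and the literal date bounds are placeholders for illustration, not part of the original schema.

        -- Sketch only: a plain b-tree on (station_id, taken) for each child table.
        CREATE INDEX measurement_12_013_st_idx
          ON climate.measurement_12_013
          USING btree (station_id, taken);

        -- Filter against literal date bounds instead of expressions derived from taken:
        SELECT count(*)      AS measurements,
               avg(m.amount) AS amount
          FROM climate.measurement m
         WHERE m.category_id = 1
           AND m.taken BETWEEN DATE '1900-01-01' AND DATE '2009-12-31'
         GROUP BY extract(YEAR FROM m.taken);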

    Read the article

  • Why does a better isolation level mean better performance in SQL Server?

    - by Oleg Zhylin
    When measuring performance of my query I came up with a dependency between isolation level and elapsed time that was surprising to me:

        READUNCOMMITTED - 409024
        READCOMMITTED   - 368021
        REPEATABLEREAD  - 358019
        SERIALIZABLE    - 348019

    The left column is the table hint, and the right column is the elapsed time in microseconds (sys.dm_exec_query_stats.total_elapsed_time). Why does a stricter isolation level give better performance? This is a development machine and no concurrency whatsoever happens. I would expect READUNCOMMITTED to be the fastest due to less locking overhead.

    Update: I did measure this with DBCC DROPCLEANBUFFERS and DBCC FREEPROCCACHE issued, and Profiler confirms there are no cache hits happening.

    Update 2: The query in question is an OLAP one and we need to run it as fast as possible. Closing the production server off from the outside world to get the computation done is not out of the question if this gives performance benefits.
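
    For reference only, a hypothetical measurement skeleton of the kind described above might look like the following; the table and the query are placeholders I have invented, not taken from the post, and the table hint is swapped for each run.

        DBCC DROPCLEANBUFFERS;
        DBCC FREEPROCCACHE;

        SELECT SUM(t.amount)                                  -- placeholder query
          FROM dbo.fact_table AS t WITH (READUNCOMMITTED);    -- then READCOMMITTED,
                                                              -- REPEATABLEREAD, SERIALIZABLE

        SELECT TOP (1) total_elapsed_time
          FROM sys.dm_exec_query_stats
         ORDER BY last_execution_time DESC;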

    Read the article

  • Releasing NSData causes exception...

    - by badmanj
    Hi, Can someone please explain why the following code causes my app to bomb?

        NSData *myImage = UIImagePNGRepresentation(imageView.image);
        :
        [myImage release];

    If I comment out the 'release' line, the app runs... but after a few calls to the function containing this code I get a crash - I guess caused by a memory leak. Even if I comment EVERYTHING else in the function out and just leave those two lines, the app crashes when the release executes. I'm sure this must be a newbie "you don't know how to clean up your mess properly" kind of thing ;-) Cheers, Jamie.

    Read the article

  • Complicated idea - how to create car racing for my RPG game's players

    - by Donator
    So, I want to create car racing for my RPG game's players. A player can create a race and choose how many participants can take part in it. After the race is created, other people can join it. When the maximum number of participants has been reached, the race begins. My idea: when the last participant joins, instantly choose the winner (whoever has the best car wins). But how can I do it? If I pick the winner right after the last participant joins, then I have to put many queries in one page (select data from the table, then delete the race, then select the players' cars' statistics and pick the winner, and then, again using MySQL, send a message to everyone). This is really not optimal and it will lag cruelly for that last person. Maybe you have ideas on how I can avoid the lag and make it more optimal. Thank you very much.
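
    A rough sketch of how the winner could be picked in a single MySQL statement when the last participant joins; the table and column names here are invented for illustration and are not from the question.

        -- Hypothetical schema: race_participants(race_id, player_id) and cars(player_id, speed).
        SELECT p.player_id
          FROM race_participants AS p
          JOIN cars              AS c ON c.player_id = p.player_id
         WHERE p.race_id = 42          -- the race that has just filled up
         ORDER BY c.speed DESC
         LIMIT 1;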

    Read the article

  • A very basic auto-expanding list/array

    - by MainMa
    Hi, I have a method which returns an array of fixed-type objects (let's say MyObject). The method creates a new empty Stack<MyObject>. Then it does some work and pushes some number of MyObjects onto the end of the Stack. Finally, it returns Stack.ToArray(). It does not change already-added items or their properties, nor remove them. The number of elements to add will cost performance. There is no need to sort/order the elements. Is Stack the best thing to use? Or must I switch to Collection or List to ensure better performance and/or lower memory cost?

    Read the article

  • Is it possible to do A/B testing by page rather than by individual?

    - by mojones
    Let's say I have a simple ecommerce site that sells 100 different t-shirt designs. I want to do some A/B testing to optimise my sales. Let's say I want to test two different "buy" buttons. Normally, I would use A/B testing to randomly assign each visitor to see button A or button B (and try to ensure that the user experience is consistent by storing that assignment in the session, cookies, etc.). Would it be possible to take a different approach and instead randomly assign each of my 100 designs to use button A or B, and measure the conversion rate as (number of sales of design n) / (pageviews of design n)? This approach would seem to have some advantages: I would not have to worry about keeping the user experience consistent - a given page (e.g. www.example.com/viewdesign?id=6) would always return the same HTML. If I were to test different prices, it would be far less distressing to the user to see different prices for different designs than different prices for the same design on different computers. I also wonder whether it might be better for SEO - my suspicion is that Google would "prefer" to always see the same HTML when crawling a page. Obviously this approach would only be suitable for a limited number of sites; I was just wondering if anyone has tried it?

    Read the article

  • Grand Central Strategy for Opening Multiple Files

    - by user276632
    I have a working implementation using Grand Central Dispatch queues that (1) opens a file and computes an OpenSSL DSA hash on "queue1", and (2) writes out the hash to a new "side car" file for later verification on "queue2". I would like to open multiple files at the same time, but based on some logic that doesn't "choke" the OS by having hundreds of files open and exceeding the hard drive's sustainable output. Photo browsing applications such as iPhoto or Aperture seem to open multiple files and display them, so I'm assuming this can be done. I'm assuming the biggest limitation will be disk I/O, as the application can (in theory) read and write multiple files simultaneously. Any suggestions? TIA

    Read the article

  • JRuby RSpec to be run in parallel

    - by Priyank
    Hi. Is there something like Spork for JRuby too? We want to parallelize our specs so they run faster, and pre-load the classes while running the rake task; however, we have not been able to do so. Since our project is considerable in size, the specs take about 15 minutes to complete, and this poses a serious challenge to quick turnaround. Any ideas are more than welcome. Cheers

    Read the article

  • What is the best algorithm for this array-comparison problem?

    - by mark
    What is the most efficient for speed algorithm to solve the following problem? Given 6 arrays, D1,D2,D3,D4,D5 and D6 each containing 6 numbers like: D1[0] = number D2[0] = number ...... D6[0] = number D1[1] = another number D2[1] = another number .... ..... .... ...... .... D1[5] = yet another number .... ...... .... Given a second array ST1, containing 1 number: ST1[0] = 6 Given a third array ans, containing 6 numbers: ans[0] = 3, ans[1] = 4, ans[2] = 5, ......ans[5] = 8 Using as index for the arrays D1,D2,D3,D4,D5 and D6, the number that goes from 0, to the number stored in ST1[0] minus one, in this example 6, so from 0 to 6-1, compare each res array against each D array My algorithm so far is: I tried to keep everything unlooped as much as possible. EML := ST1[0] //number contained in ST1[0] EML1 := 0 //start index for the arrays D While EML1 < EML if D1[ELM1] = ans[0] goto two if D2[ELM1] = ans[0] goto two if D3[ELM1] = ans[0] goto two if D4[ELM1] = ans[0] goto two if D5[ELM1] = ans[0] goto two if D6[ELM1] = ans[0] goto two ELM1 = ELM1 + 1 return 0 //If the ans[0] number is not found in either D1[0-6], D2[0-6].... D6[0-6] return 0 which will then exclude ans[0-6] numbers two: EML1 := 0 start index for arrays Ds While EML1 < EML if D1[ELM1] = ans[1] goto three if D2[ELM1] = ans[1] goto three if D3[ELM1] = ans[1] goto three if D4[ELM1] = ans[1] goto three if D5[ELM1] = ans[1] goto three if D6[ELM1] = ans[1] goto three ELM1 = ELM1 + 1 return 0 //If the ans[1] number is not found in either D1[0-6], D2[0-6].... D6[0-6] return 0 which will then exclude ans[0-6] numbers three: EML1 := 0 start index for arrays Ds While EML1 < EML if D1[ELM1] = ans[2] goto four if D2[ELM1] = ans[2] goto four if D3[ELM1] = ans[2] goto four if D4[ELM1] = ans[2] goto four if D5[ELM1] = ans[2] goto four if D6[ELM1] = ans[2] goto four ELM1 = ELM1 + 1 return 0 //If the ans[2] number is not found in either D1[0-6], D2[0-6].... D6[0-6] return 0 which will then exclude ans[0-6] numbers four: EML1 := 0 start index for arrays Ds While EML1 < EML if D1[ELM1] = ans[3] goto five if D2[ELM1] = ans[3] goto five if D3[ELM1] = ans[3] goto five if D4[ELM1] = ans[3] goto five if D5[ELM1] = ans[3] goto five if D6[ELM1] = ans[3] goto five ELM1 = ELM1 + 1 return 0 //If the ans[3] number is not found in either D1[0-6], D2[0-6].... D6[0-6] return 0 which will then exclude ans[0-6] numbers five: EML1 := 0 start index for arrays Ds While EML1 < EML if D1[ELM1] = ans[4] goto six if D2[ELM1] = ans[4] goto six if D3[ELM1] = ans[4] goto six if D4[ELM1] = ans[4] goto six if D5[ELM1] = ans[4] goto six if D6[ELM1] = ans[4] goto six ELM1 = ELM1 + 1 return 0 //If the ans[4] number is not found in either D1[0-6], D2[0-6].... D6[0-6] return 0 which will then exclude ans[0-6] numbers six: EML1 := 0 start index for arrays Ds While EML1 < EML if D1[ELM1] = ans[5] return 1 ////If the ans[1] number is not found in either D1[0-6]..... if D2[ELM1] = ans[5] return 1 which will then include ans[0-6] numbers return 1 if D3[ELM1] = ans[5] return 1 if D4[ELM1] = ans[5] return 1 if D5[ELM1] = ans[5] return 1 if D6[ELM1] = ans[5] return 1 ELM1 = ELM1 + 1 return 0 As language of choice, it would be pure c

    Read the article

  • How does loop address alignment affect the speed on Intel x86_64?

    - by Alexander Gololobov
    I'm seeing 15% performance degradation of the same C++ code compiled to exactly same machine instructions but located on differently aligned addresses. When my tiny main loop starts at 0x415220 it's faster then when it is at 0x415250. I'm running this on Intel Core2 Duo. I use gcc 4.4.5 on x86_64 Ubuntu. Can anybody explain the cause of slowdown and how I can force gcc to optimally align the loop? Here is the disassembly for both cases with profiler annotation: 415220 576 12.56% |XXXXXXXXXXXXXX 48 c1 eb 08 shr $0x8,%rbx 415224 110 2.40% |XX 0f b6 c3 movzbl %bl,%eax 415227 0.00% | 41 0f b6 04 00 movzbl (%r8,%rax,1),%eax 41522c 40 0.87% | 48 8b 04 c1 mov (%rcx,%rax,8),%rax 415230 806 17.58% |XXXXXXXXXXXXXXXXXXX 4c 63 f8 movslq %eax,%r15 415233 186 4.06% |XXXX 48 c1 e8 20 shr $0x20,%rax 415237 102 2.22% |XX 4c 01 f9 add %r15,%rcx 41523a 414 9.03% |XXXXXXXXXX a8 0f test $0xf,%al 41523c 680 14.83% |XXXXXXXXXXXXXXXX 74 45 je 415283 ::Run(char const*, char const*)+0x4b3 41523e 0.00% | 41 89 c7 mov %eax,%r15d 415241 0.00% | 41 83 e7 01 and $0x1,%r15d 415245 0.00% | 41 83 ff 01 cmp $0x1,%r15d 415249 0.00% | 41 89 c7 mov %eax,%r15d 415250 679 13.05% |XXXXXXXXXXXXXXXX 48 c1 eb 08 shr $0x8,%rbx 415254 124 2.38% |XX 0f b6 c3 movzbl %bl,%eax 415257 0.00% | 41 0f b6 04 00 movzbl (%r8,%rax,1),%eax 41525c 43 0.83% |X 48 8b 04 c1 mov (%rcx,%rax,8),%rax 415260 828 15.91% |XXXXXXXXXXXXXXXXXXX 4c 63 f8 movslq %eax,%r15 415263 388 7.46% |XXXXXXXXX 48 c1 e8 20 shr $0x20,%rax 415267 141 2.71% |XXX 4c 01 f9 add %r15,%rcx 41526a 634 12.18% |XXXXXXXXXXXXXXX a8 0f test $0xf,%al 41526c 749 14.39% |XXXXXXXXXXXXXXXXXX 74 45 je 4152b3 ::Run(char const*, char const*)+0x4c3 41526e 0.00% | 41 89 c7 mov %eax,%r15d 415271 0.00% | 41 83 e7 01 and $0x1,%r15d 415275 0.00% | 41 83 ff 01 cmp $0x1,%r15d 415279 0.00% | 41 89 c7 mov %eax,%r15d

    Read the article

  • PostgreSQL - fetch the row which has the Max value for a column

    - by Joshua Berry
    I'm dealing with a Postgres table (called "lives") that contains records with columns for time_stamp, usr_id, transaction_id, and lives_remaining. I need a query that will give me the most recent lives_remaining total for each usr_id.

      • There are multiple users (distinct usr_id's)
      • time_stamp is not a unique identifier: sometimes user events (one by row in the table) will occur with the same time_stamp.
      • trans_id is unique only for very small time ranges: over time it repeats
      • remaining_lives (for a given user) can both increase and decrease over time

    Example:

        time_stamp|lives_remaining|usr_id|trans_id
        -----------------------------------------
          07:00   |       1       |   1  |    1
          09:00   |       4       |   2  |    2
          10:00   |       2       |   3  |    3
          10:00   |       1       |   2  |    4
          11:00   |       4       |   1  |    5
          11:00   |       3       |   1  |    6
          13:00   |       3       |   3  |    1

    As I will need to access other columns of the row with the latest data for each given usr_id, I need a query that gives a result like this:

        time_stamp|lives_remaining|usr_id|trans_id
        -----------------------------------------
          11:00   |       3       |   1  |    6
          10:00   |       1       |   2  |    4
          13:00   |       3       |   3  |    1

    As mentioned, each usr_id can gain or lose lives, and sometimes these timestamped events occur so close together that they have the same timestamp! Therefore this query won't work:

        SELECT b.time_stamp, b.lives_remaining, b.usr_id, b.trans_id
        FROM (SELECT usr_id, max(time_stamp) AS max_timestamp
              FROM lives GROUP BY usr_id ORDER BY usr_id) a
        JOIN lives b ON a.max_timestamp = b.time_stamp

    Instead, I need to use both time_stamp (first) and trans_id (second) to identify the correct row. I also then need to pass that information from the subquery to the main query that will provide the data for the other columns of the appropriate rows. This is the hacked-up query that I've gotten to work:

        SELECT b.time_stamp, b.lives_remaining, b.usr_id, b.trans_id
        FROM (SELECT usr_id, max(time_stamp || '*' || trans_id) AS max_timestamp_transid
              FROM lives GROUP BY usr_id ORDER BY usr_id) a
        JOIN lives b ON a.max_timestamp_transid = b.time_stamp || '*' || b.trans_id
        ORDER BY b.usr_id

    Okay, so this works, but I don't like it. It requires a query within a query, a self join, and it seems to me that it could be much simpler by grabbing the row that MAX found to have the largest timestamp and trans_id. The table "lives" has tens of millions of rows to parse, so I'd like this query to be as fast and efficient as possible. I'm new to RDBMs and Postgres in particular, so I know that I need to make effective use of the proper indexes. I'm a bit lost on how to optimize. I found a similar discussion here. Can I perform some type of Postgres equivalent to an Oracle analytic function? Any advice on accessing related column information used by an aggregate function (like MAX), creating indexes, and creating better queries would be much appreciated!

    P.S. You can use the following to create my example case:

        create TABLE lives (time_stamp timestamp, lives_remaining integer, usr_id integer, trans_id integer);
        insert into lives values ('2000-01-01 07:00', 1, 1, 1);
        insert into lives values ('2000-01-01 09:00', 4, 2, 2);
        insert into lives values ('2000-01-01 10:00', 2, 3, 3);
        insert into lives values ('2000-01-01 10:00', 1, 2, 4);
        insert into lives values ('2000-01-01 11:00', 4, 1, 5);
        insert into lives values ('2000-01-01 11:00', 3, 1, 6);
        insert into lives values ('2000-01-01 13:00', 3, 3, 1);
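
    For what it's worth, one common Postgres idiom for this kind of "latest row per group" question is DISTINCT ON; the sketch below runs against the lives table defined above and keeps the first row per usr_id according to the ORDER BY, so sorting by time_stamp and trans_id descending yields the latest event per user.

        SELECT DISTINCT ON (usr_id)
               time_stamp, lives_remaining, usr_id, trans_id
          FROM lives
         ORDER BY usr_id, time_stamp DESC, trans_id DESC;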

    Read the article

  • Does Delphi really handle dynamic classes better than static?

    - by John
    Hello, I was told more than once that Delphi handles dynamic classes better than static, thereby using the following:

        type
          Tsomeclass=class(TObject)
          private
            procedure proc1;
          public
            someint:integer;
            procedure proc2;
          end;

        var
          someclass:TSomeclass;

        implementation
        ...
        initialization
          someclass:=TSomeclass.Create;
        finalization
          someclass.Free;

    rather than

        type
          Tsomeclass=class
          private
            class procedure proc1;
          public
            var someint:integer;
            class procedure proc2;
          end;

    90% of the classes in the project I'm working on have and need only one instance. Do I really have to use the first way for those classes? Is it better optimized or better handled by Delphi? Sorry, I have no arguments to back up this hypothesis, but I want an expert's opinion. Thanks in advance!

    Read the article

  • SQL Table Setup Advice

    - by Ozzy
    Hi all. Basically I have an XML feed from an offsite server. The XML feed has one parameter, ?value=n, and n can only be between 1 and 30. Whatever value I pick, there will always be 4000 rows returned from the XML file. My script will call this XML file 30 times, once for each value, each day. So that's 120,000 rows. I will be doing quite complicated queries on these rows, but the main thing is I will always filter by value first, so SELECT * WHERE value = 'N' etc. That will ALWAYS be used. Now, is it better to have one table where all 120k rows are stored, or 30 tables where 4k rows are stored each? EDIT: the SQL database in question will be MySQL.
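
    As a sketch of the single-table option (the table name and all columns other than value are assumptions, not from the question), one table with an index on the filter column keeps every WHERE value = N query an index range scan over roughly 4000 rows.

        CREATE TABLE feed_rows (
          id    INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
          value TINYINT      NOT NULL,   -- the feed parameter, 1..30
          -- ... remaining columns from the XML feed ...
          KEY idx_value (value)
        ) ENGINE=InnoDB;

        SELECT * FROM feed_rows WHERE value = 7;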

    Read the article

  • Are there any modern platforms with non-IEEE C/C++ float formats?

    - by Patrick Niedzielski
    Hi all, I am writing a video game, Humm and Strumm, which requires a network component in its game engine. I can deal with differences in endianness easily, but I have hit a wall in attempting to deal with possible float memory formats. I know that modern computers all have a standard integer format, but I have heard that they may not all use the IEEE standard for floating-point numbers. Is this true? While certainly I could just output each value as a character string into each packet, I would still have to convert to a "well-known format" on each client, regardless of the platform. The standard printf() and atod() would be inadequate. Please note, because this game is a Free/Open Source Software program that will run on GNU/Linux, *BSD, and Microsoft Windows, I cannot use any proprietary solutions, nor any single-platform solutions. Cheers, Patrick

    Read the article

  • Does the .NET CLR Really Optimize for the Current Processor

    - by dewald
    When I read about the performance of JITted languages like C# or Java, authors usually say that they should/could theoretically outperform many native-compiled applications. The theory being that native applications are usually just compiled for a processor family (like x86), so the compiler cannot make certain optimizations as they may not truly be optimizations on all processors. On the other hand, the CLR can make processor-specific optimizations during the JIT process. Does anyone know if Microsoft's (or Mono's) CLR actually performs processor-specific optimizations during the JIT process? If so, what kind of optimizations?

    Read the article

  • iPhone: custom UITableViewCell with Interface Builder -> how to release cell objects?

    - by Stefan Klumpp
    The official documentation tells me I have to do these three things in order to manage my memory for "nib objects" correctly:

        @property (nonatomic, retain) IBOutlet UIUserInterfaceElementClass *anOutlet;

    "You should then either synthesize the corresponding accessor methods, or implement them according to the declaration, and (in iPhone OS) release the corresponding variable in dealloc."

        - (void)viewDidUnload {
            self.anOutlet = nil;
            [super viewDidUnload];
        }

    That makes sense for a normal view. However, how am I going to do that for a UITableView with custom UITableViewCells loaded through a .nib file? There the IBOutlets are in MyCustomCell.h (inherited from UITableViewCell), but that is not the place where I load the nib and apply it to the cell instances, because that happens in MyTableView.m. So do I still release the IBOutlets in the dealloc of MyCustomCell.m, or do I have to do something in MyTableView.m? Also, MyCustomCell.m doesn't have a - (void)viewDidUnload {} where I can set my IBOutlets to nil, while my MyTableView.m does.

    Read the article

  • Can anyone recommend a decent tool for optimizing images other than Photoshop

    - by toomanyairmiles
    Can anyone recommend a decent tool for optimising images other than Adobe Photoshop, the GIMP, etc.? I'm looking to optimise images for the web, preferably online and free. Basically I have a client who can't install additional software on their work PC but needs to optimise photographs and other images for their website, and is presently uploading 1 or 2 MB files. On a personal level I'm interested to see what other people are using...

    Read the article

  • Any difference between lazy loading Javascript files vs. placing just before </body>

    - by mhr
    Looked around, couldn't find this specific question discussed. Pretty sure the difference is negligible, just curious as to your thoughts. Scenario: All Javascript that doesn't need to be loaded before page render has been placed just before the closing </body> tag. Are there any benefits or detriments to lazy loading these instead through some Javascript code in the head that executes when the DOM load/ready event is fired? Let's say that this only concerns downloading one entire .js file full of functions and not lazy loading several individual files as needed upon usage. Hope that's clear, thanks.

    Read the article

  • Help optimize SQL query

    - by msony
    I have a tracking table tbl_track with id, session_id, and created_date fields. I need to count unique session_ids for one day. Here is what I've got:

        select count(0) from (
          select distinct session_id
          from tbl_track
          where created_date between getdate()-1 and getdate()
          group by session_id
        ) tbl

    I'm feeling there could be a better solution for it.
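
    As a point of comparison, the inner DISTINCT/GROUP BY can usually be collapsed into a single aggregate; this sketch uses the same table and date filter as the original query.

        select count(distinct session_id) as unique_sessions
        from tbl_track
        where created_date between getdate()-1 and getdate()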

    Read the article

  • Efficient data structure design

    - by Sway
    Hi there, I need to match a series of user-inputted words against a large dictionary of words (to ensure the entered value exists). So if the user entered "orange" it should match the entry "orange" in the dictionary. Now the catch is that the user can also enter a wildcard or series of wildcard characters, like say "or__ge", which would also match "orange". The key requirements are:

      * this should be as fast as possible.
      * use the smallest amount of memory to achieve it.

    If the size of the word list was small I could use a string containing all the words and use regular expressions. However, given that the word list could potentially contain hundreds of thousands of entries, I'm assuming this wouldn't work. So is some sort of 'tree' the way to go for this...? Any thoughts or suggestions on this would be totally appreciated! Thanks in advance, Matt

    Read the article

  • F# - Facebook Hacker Cup - Double Squares

    - by Jacob
    I'm working on strengthening my F#-fu and decided to tackle the Facebook Hacker Cup Double Squares problem. I'm having some problems with the run-time and was wondering if anyone could help me figure out why it is so much slower than my C# equivalent. There's a good description from another post; Source: Facebook Hacker Cup Qualification Round 2011 A double-square number is an integer X which can be expressed as the sum of two perfect squares. For example, 10 is a double-square because 10 = 3^2 + 1^2. Given X, how can we determine the number of ways in which it can be written as the sum of two squares? For example, 10 can only be written as 3^2 + 1^2 (we don't count 1^2 + 3^2 as being different). On the other hand, 25 can be written as 5^2 + 0^2 or as 4^2 + 3^2. You need to solve this problem for 0 = X = 2,147,483,647. Examples: 10 = 1 25 = 2 3 = 0 0 = 1 1 = 1 My basic strategy (which I'm open to critique on) is to; Create a dictionary (for memoize) of the input numbers initialzed to 0 Get the largest number (LN) and pass it to count/memo function Get the LN square root as int Calculate squares for all numbers 0 to LN and store in dict Sum squares for non repeat combinations of numbers from 0 to LN If sum is in memo dict, add 1 to memo Finally, output the counts of the original numbers. Here is the F# code (See code changes at bottom) I've written that I believe corresponds to this strategy (Runtime: ~8:10); open System open System.Collections.Generic open System.IO /// Get a sequence of values let rec range min max = seq { for num in [min .. max] do yield num } /// Get a sequence starting from 0 and going to max let rec zeroRange max = range 0 max /// Find the maximum number in a list with a starting accumulator (acc) let rec maxNum acc = function | [] -> acc | p::tail when p > acc -> maxNum p tail | p::tail -> maxNum acc tail /// A helper for finding max that sets the accumulator to 0 let rec findMax nums = maxNum 0 nums /// Build a collection of combinations; ie [1,2,3] = (1,1), (1,2), (1,3), (2,2), (2,3), (3,3) let rec combos range = seq { let count = ref 0 for inner in range do for outer in Seq.skip !count range do yield (inner, outer) count := !count + 1 } let rec squares nums = let dict = new Dictionary<int, int>() for s in nums do dict.[s] <- (s * s) dict /// Counts the number of possible double squares for a given number and keeps track of other counts that are provided in the memo dict. let rec countDoubleSquares (num: int) (memo: Dictionary<int, int>) = // The highest relevent square is the square root because it squared plus 0 squared is the top most possibility let maxSquare = System.Math.Sqrt((float)num) // Our relevant squares are 0 to the highest possible square; note the cast to int which shouldn't hurt. let relSquares = range 0 ((int)maxSquare) // calculate the squares up front; let calcSquares = squares relSquares // Build up our square combinations; ie [1,2,3] = (1,1), (1,2), (1,3), (2,2), (2,3), (3,3) for (sq1, sq2) in combos relSquares do let v = calcSquares.[sq1] + calcSquares.[sq2] // Memoize our relevant results if memo.ContainsKey(v) then memo.[v] <- memo.[v] + 1 // return our count for the num passed in memo.[num] // Read our numbers from file. 
//let lines = File.ReadAllLines("test2.txt") //let nums = [ for line in Seq.skip 1 lines -> Int32.Parse(line) ] // Optionally, read them from straight array let nums = [1740798996; 1257431873; 2147483643; 602519112; 858320077; 1048039120; 415485223; 874566596; 1022907856; 65; 421330820; 1041493518; 5; 1328649093; 1941554117; 4225; 2082925; 0; 1; 3] // Initialize our memoize dictionary let memo = new Dictionary<int, int>() for num in nums do memo.[num] <- 0 // Get the largest number in our set, all other numbers will be memoized along the way let maxN = findMax nums // Do the memoize let maxCount = countDoubleSquares maxN memo // Output our results. for num in nums do printfn "%i" memo.[num] // Have a little pause for when we debug let line = Console.Read() And here is my version in C# (Runtime: ~1:40: using System; using System.Collections.Generic; using System.Diagnostics; using System.IO; using System.Linq; using System.Text; namespace FBHack_DoubleSquares { public class TestInput { public int NumCases { get; set; } public List<int> Nums { get; set; } public TestInput() { Nums = new List<int>(); } public int MaxNum() { return Nums.Max(); } } class Program { static void Main(string[] args) { // Read input from file. //TestInput input = ReadTestInput("live.txt"); // As example, load straight. TestInput input = new TestInput { NumCases = 20, Nums = new List<int> { 1740798996, 1257431873, 2147483643, 602519112, 858320077, 1048039120, 415485223, 874566596, 1022907856, 65, 421330820, 1041493518, 5, 1328649093, 1941554117, 4225, 2082925, 0, 1, 3, } }; var maxNum = input.MaxNum(); Dictionary<int, int> memo = new Dictionary<int, int>(); foreach (var num in input.Nums) { if (!memo.ContainsKey(num)) memo.Add(num, 0); } DoMemoize(maxNum, memo); StringBuilder sb = new StringBuilder(); foreach (var num in input.Nums) { //Console.WriteLine(memo[num]); sb.AppendLine(memo[num].ToString()); } Console.Write(sb.ToString()); var blah = Console.Read(); //File.WriteAllText("out.txt", sb.ToString()); } private static int DoMemoize(int num, Dictionary<int, int> memo) { var highSquare = (int)Math.Floor(Math.Sqrt(num)); var squares = CreateSquareLookup(highSquare); var relSquares = squares.Keys.ToList(); Debug.WriteLine("Starting - " + num.ToString()); Debug.WriteLine("RelSquares.Count = {0}", relSquares.Count); int sum = 0; var index = 0; foreach (var square in relSquares) { foreach (var inner in relSquares.Skip(index)) { sum = squares[square] + squares[inner]; if (memo.ContainsKey(sum)) memo[sum]++; } index++; } if (memo.ContainsKey(num)) return memo[num]; return 0; } private static TestInput ReadTestInput(string fileName) { var lines = File.ReadAllLines(fileName); var input = new TestInput(); input.NumCases = int.Parse(lines[0]); foreach (var lin in lines.Skip(1)) { input.Nums.Add(int.Parse(lin)); } return input; } public static Dictionary<int, int> CreateSquareLookup(int maxNum) { var dict = new Dictionary<int, int>(); int square; foreach (var num in Enumerable.Range(0, maxNum)) { square = num * num; dict[num] = square; } return dict; } } } Thanks for taking a look. UPDATE Changing the combos function slightly will result in a pretty big performance boost (from 8 min to 3:45): /// Old and Busted... let rec combosOld range = seq { let rangeCache = Seq.cache range let count = ref 0 for inner in rangeCache do for outer in Seq.skip !count rangeCache do yield (inner, outer) count := !count + 1 } /// The New Hotness... let rec combos maxNum = seq { for i in 0..maxNum do for j in i..maxNum do yield i,j }

    Read the article

  • Speed up WAMP server + Drupal on Windows Vista

    - by Andrew Welch
    Hi, my localhost performance with Drupal 6 is pretty slow. I found a solution that adds a # before the ::1 localhost line of the system32/etc/hosts file, but this was something I had already done and it didn't help much. Does anyone know of any other optimisations that might work? Thanks, Andy

    Read the article

  • Efficacy of register allocation algorithms

    - by aksci
    I'm trying to do a research/project on register allocation using graph coloring, where I am to test the efficiency of different optimizing register allocation algorithms in different scenarios. How do I start? What are the prerequisites and the grounds on which I can test them? Which algorithms can I use? Thank you!

    Read the article

  • Can my loop be optimized any more? (C++)

    - by Sagekilla
    Below is one of my inner loops that's run several thousand times, with input sizes of 20 - 1000 or more. Is there anything I can do to help squeeze any more performance out of this? I'm not looking to move this code to something like using tree codes (Barnes-Hut), but towards optimizing the actual calculations happening inside, since the same calculations occur in the Barnes-Hut algorithm. Any help is appreciated! typedef double real; struct Particle { Vector pos, vel, acc, jerk; Vector oldPos, oldVel, oldAcc, oldJerk; real mass; }; class Vector { private: real vec[3]; public: // Operators defined here }; real Gravity::interact(Particle *p, size_t numParticles) { PROFILE_FUNC(); real tau_q = 1e300; for (size_t i = 0; i < numParticles; i++) { p[i].jerk = 0; p[i].acc = 0; } for (size_t i = 0; i < numParticles; i++) { for (size_t j = i+1; j < numParticles; j++) { Vector r = p[j].pos - p[i].pos; Vector v = p[j].vel - p[i].vel; real r2 = lengthsq(r); real v2 = lengthsq(v); // Calculate inverse of |r|^3 real r3i = Constants::G * pow(r2, -1.5); // da = r / |r|^3 // dj = (v / |r|^3 - 3 * (r . v) * r / |r|^5 Vector da = r * r3i; Vector dj = (v - r * (3 * dot(r, v) / r2)) * r3i; // Calculate new acceleration and jerk p[i].acc += da * p[j].mass; p[i].jerk += dj * p[j].mass; p[j].acc -= da * p[i].mass; p[j].jerk -= dj * p[i].mass; // Collision estimation // Metric 1) tau = |r|^2 / |a(j) - a(i)| // Metric 2) tau = |r|^4 / |v|^4 real mij = p[i].mass + p[j].mass; real tau_est_q1 = r2 / (lengthsq(da) * mij * mij); real tau_est_q2 = (r2*r2) / (v2*v2); if (tau_est_q1 < tau_q) tau_q = tau_est_q1; if (tau_est_q2 < tau_q) tau_q = tau_est_q2; } } return sqrt(sqrt(tau_q)); }

    Read the article
