Search Results

Search found 7551 results on 303 pages for 'pre optimization'.

Page 73/303 | < Previous Page | 69 70 71 72 73 74 75 76 77 78 79 80  | Next Page >

  • Is this implementation truly tail-recursive?

    - by CFP
    Hello everyone! I've come up with the following code to compute, in a tail-recursive way, the result of an expression such as 3 4 * 1 + cos 8 * (aka 8*cos(1+(3*4))). The code is in OCaml. I'm using a list ref to emulate a stack.

        type token = Num of float | Fun of (float -> float) | Op of (float -> float -> float);;

        let pop l = let top = List.hd !l in l := List.tl !l; top;;
        let push x l = l := (x :: !l);;
        let empty l = (!l = []);;
        let pile = ref [];;

        let eval data =
          let stack = ref data in
          let rec _eval cont =
            match pop stack with
            | Num(n) -> cont n
            | Fun(f) -> _eval (fun x -> cont (f x))
            | Op(op) -> _eval (fun x -> cont (op x (_eval (fun y -> y))))
          in
          _eval (fun x -> x)
        ;;

        eval [Fun(fun x -> x ** 2.); Op(fun x y -> x +. y); Num(1.); Num(3.)];;

    I've used continuations to ensure tail recursion, but since my stack implements some sort of a tree, and therefore provides quite a bad interface to what should be handled as a disjoint union type, the call to my function to evaluate the left branch with an identity continuation somehow irks me a little. It works perfectly, but I have the feeling that in the _eval (fun y -> y) call something wrong must be happening, since it doesn't seem that this call can replace the previous one in the stack structure... Am I misunderstanding something here? I mean, I understand that with only the first call to _eval there wouldn't be any problem optimizing the calls, but here it seems to me that evaluating _eval (fun y -> y) will have to be stacked up, and therefore will fill the stack, possibly leading to an overflow... Thanks!
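
    For reference, a minimal Python sketch of the same evaluation done with an explicit value stack instead of continuations, so the host call stack never grows at all (an illustrative analogue, not the OCaml above; the token encoding is made up):

        def eval_prefix(tokens):
            # Walk the prefix token list right-to-left, keeping intermediate
            # results in an explicit list instead of recursing.
            values = []
            for kind, payload in reversed(tokens):
                if kind == "num":
                    values.append(payload)
                elif kind == "fun":                  # unary function
                    values.append(payload(values.pop()))
                else:                                # "op": binary operator
                    a, b = values.pop(), values.pop()
                    values.append(payload(a, b))
            return values.pop()

        # Mirrors the OCaml example: (1 + 3) ** 2 == 16.0
        print(eval_prefix([("fun", lambda x: x ** 2),
                           ("op",  lambda x, y: x + y),
                           ("num", 1.0), ("num", 3.0)]))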

    Read the article

  • MySQL won't use index for query?

    - by Jack Sleight
    I have this table:

        CREATE TABLE `point` (
          `id` INT(11) NOT NULL AUTO_INCREMENT,
          `siteid` INT(11) NOT NULL,
          `lft` INT(11) DEFAULT NULL,
          `rgt` INT(11) DEFAULT NULL,
          `level` SMALLINT(6) DEFAULT NULL,
          PRIMARY KEY (`id`),
          KEY `point_siteid_site_id` (`siteid`),
          CONSTRAINT `point_siteid_site_id` FOREIGN KEY (`siteid`) REFERENCES `site` (`id`) ON DELETE CASCADE
        ) ENGINE=INNODB AUTO_INCREMENT=35 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci

    And this query:

        SELECT * FROM `point` WHERE siteid = 1;

    Which results in this EXPLAIN information:

        +----+-------------+-------+------+----------------------+------+---------+------+------+-------------+
        | id | select_type | table | type | possible_keys        | key  | key_len | ref  | rows | Extra       |
        +----+-------------+-------+------+----------------------+------+---------+------+------+-------------+
        |  1 | SIMPLE      | point | ALL  | point_siteid_site_id | NULL | NULL    | NULL |    6 | Using where |
        +----+-------------+-------+------+----------------------+------+---------+------+------+-------------+

    The question is: why isn't the query using the point_siteid_site_id index?

    Read the article

  • Optimizing an iPhone app for 3G in landscape with OpenGL, camera, Quartz

    - by Joey
    I have an iPhone app that basically uses the camera, an OpenGL layer, and UIViews (some drawing with Quartz). It runs OK on the 3GS, but on the 3G it is unusable. In particular, when I press a UIButton, it sometimes takes literally 10 seconds to register the press. Shark doesn't do me much good because it crashes when I try to profile even a tiny portion, and I've tried turning off some of the layers to see if they might be obvious contributors to the lag. I've noticed that turning off the camera really helps. I'm wondering if anyone has any familiarity with this and might suggest some likely causes. I had issues with extreme slowdown from running my app in landscape mode and using transforms, so that might be a cause, but I'm wondering if hoping for a 3G to run something with all of the above elements is just not really possible, considering the camera seems to cost a lot. The fact that the buttons are horribly delayed in their response makes me think there is something fundamental that I might be missing.

    Read the article

  • Why is JavaScript's Math.floor the slowest way to calculate floor in JavaScript?

    - by z5h
    I'm generally not a fan of microbenchmarks, but this one has a very interesting result. http://ernestdelgado.com/archive/benchmark-on-the-floor/ It suggests that Math.floor is the SLOWEST way to calculate floor in JavaScript, with ~~n, n|n, and n&n all being faster. This seems pretty shocking, as I would expect that the people implementing JavaScript in today's modern browsers are pretty smart. Does floor do something important that the other methods fail to do? Is there any reason to use it?
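
    One substantive difference worth noting: the bit tricks coerce their operand to a 32-bit integer and truncate toward zero, so they only agree with Math.floor for non-negative values that fit in 32 bits. A quick Python analogue of the microbenchmark makes the semantic gap visible (illustrative only; timings vary by machine and runtime):

        import math, timeit

        n = -3.7
        print(math.floor(n), int(n), n // 1)         # -4 -3 -4.0: truncation != floor
        print(timeit.timeit(lambda: math.floor(n)))  # true floor
        print(timeit.timeit(lambda: int(n)))         # truncates, like ~~n in JS
        print(timeit.timeit(lambda: n // 1))         # floor division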

    Read the article

  • Splitting tables by field to optimize MySQL?

    - by AK
    Does splitting fields into multiple tables ever yield faster queries? Consider the following two scenarios:

        Table1
        -----------
        int    PersonID
        text   Value1
        float  Value2

    or

        Table1
        -----------
        int    PersonID
        text   Value1

        Table2
        -----------
        int    PersonID
        float  Value2

    If Value1 and Value2 are always displayed together, I imagine Table1 is always faster, because the second schema would require two SELECT statements. But are there any situations where you would choose the second? What if the number of records were expected to be really large?

    Read the article

  • Drawbacks of dynamic queries in SQL Server 2005?

    - by KuldipMCA
    I am using many dynamic queries in my database procedures. Because my filter is not fixed, I take @Filter as a parameter and pass it into the procedure:

        Declare @query as varchar(8000)
        Declare @Filter as varchar(1000)

        set @query = 'Select * from Person.Address where 1=1 and ' + @Filter
        exec(@query)

    The filter can contain any field from the table for comparison. Will this affect my performance or not? Is there an alternative way to achieve this kind of thing?
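
    A common alternative is to keep the SQL text fixed and bind values as parameters, whitelisting which columns may be filtered (in T-SQL itself, sp_executesql with parameters serves the same purpose). A minimal sketch in Python; pyodbc is assumed as the driver and the column whitelist is hypothetical:

        import pyodbc  # assumed DB-API driver; others work the same way

        def find_addresses(conn, filters):
            """filters: dict of column -> value. Column names are checked
            against a whitelist; values are bound as parameters."""
            allowed = {"City", "PostalCode", "StateProvinceID"}  # hypothetical
            clauses, params = [], []
            for col, val in filters.items():
                if col not in allowed:
                    raise ValueError(f"unexpected filter column: {col}")
                clauses.append(f"{col} = ?")   # placeholder, not concatenation
                params.append(val)
            sql = "SELECT * FROM Person.Address"
            if clauses:
                sql += " WHERE " + " AND ".join(clauses)
            return conn.cursor().execute(sql, params).fetchall()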

    Read the article

  • How to make Visual C++ 9 not emit code that is actually never called?

    - by sharptooth
    My native C++ COM component uses ATL. In DllRegisterServer() I call CComModule::RegisterServer():

        STDAPI DllRegisterServer()
        {
            return _Module.RegisterServer(FALSE); // <<< notice FALSE here
        }

    FALSE is passed to indicate that the type library should not be registered. ATL is available as sources, so I in fact compile the implementation of CComModule::RegisterServer(). Somewhere down the call stack there's an if statement:

        if( doRegisterTypeLibrary ) { //<< FALSE goes here
            // do some stuff, then call RegisterTypeLib()
        }

    The compiler sees all of the above code, so it can see that the if condition is always false, yet when I inspect the linker progress messages I see that the reference to RegisterTypeLib() is still there, so the if statement is not eliminated. Can I make Visual C++ 9 perform better static analysis, actually see that some code is never called, and not emit that code?

    Read the article

  • Avoiding the use of for loops in R

    - by albergali
    Hi, I'm working with R and I have code like this:

        i <- 1
        j <- 1
        for (i in 1:10)
          for (j in 1:100)
            if (data[i] == paths[j,1]) cluster[i,4] <- paths[j,2]

    where:

    - data is a vector with 100 rows and 1 column
    - paths is a matrix with 100 rows and 5 columns
    - cluster is a matrix with 100 rows and 5 columns

    My question is: how could I avoid using for loops to iterate through the matrix? I don't know whether the apply functions (lapply, tapply...) are useful in this case. This becomes a problem when j = 10000, for example, because execution time gets very long. Thank you
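
    In R itself this is usually written without loops via match(), e.g. idx <- match(data, paths[,1]); cluster[,4] <- paths[idx,2]. For illustration, here is a rough NumPy analogue of that vectorized lookup (the sample data is made up; unmatched entries are left untouched):

        import numpy as np

        data = np.array([3, 1, 4, 1, 5])          # stand-in for the question's data
        paths = np.array([[1, 10], [3, 30], [4, 40], [5, 50]])
        cluster = np.zeros((5, 4))

        # Vectorized equivalent of the double loop: for each data[i], find
        # the first row of paths whose first column matches, copy column 2.
        match = data[:, None] == paths[None, :, 0]    # boolean match matrix
        has_match = match.any(axis=1)
        idx = match.argmax(axis=1)                    # first matching row
        cluster[has_match, 3] = paths[idx[has_match], 1]
        print(cluster)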

    Read the article

  • Does the order of columns in a query matter?

    - by James Simpson
    When selecting columns from a MySQL table, is performance affected by the order in which you select the columns, as compared to their order in the table (not considering indexes that may cover the columns)? For example, you have a table with the columns uid, name, bday, and you have the following query:

        SELECT uid, name, bday FROM table

    Does MySQL see the following query any differently, and does it thus cause any sort of performance hit?

        SELECT uid, bday, name FROM table

    Read the article

  • Should Python import statements always be at the top of a module?

    - by Adam J. Forster
    PEP 8 states: "Imports are always put at the top of the file, just after any module comments and docstrings, and before module globals and constants." However, if the class/method/function that I am importing is only used in rare cases, surely it is more efficient to do the import when it is needed? Isn't this:

        class SomeClass(object):

            def not_often_called(self):
                from datetime import datetime
                self.datetime = datetime.now()

    more efficient than this?

        from datetime import datetime

        class SomeClass(object):

            def not_often_called(self):
                self.datetime = datetime.now()
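
    Worth measuring before deciding: after the first import, a module is cached in sys.modules, so a function-level import pays only for a lookup and a local bind on each call. A quick sketch (numbers vary by machine):

        import sys, timeit

        def with_local_import():
            from datetime import datetime   # cheap after the first time
            return datetime.now()

        def with_global_import():
            return datetime.now()

        from datetime import datetime

        print("datetime" in sys.modules)    # True: module object is cached
        print(timeit.timeit(with_local_import,  number=100_000))
        print(timeit.timeit(with_global_import, number=100_000))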

    Read the article

  • Delphi fast large bitmap creation (without clearing)

    - by Ritsaert Hornstra
    When using the TBitmap wrapper for a GDI bitmap from the Graphics unit, I noticed it will always clear out the bitmap (using a PatBlt call) when setting up a bitmap with SetSize(w, h). When I copy in the bits later on (see the routine below), ScanLine seems to be the fastest possibility, not SetDIBits.

        function ToBitmap: TBitmap;
        var
          i, N, x: Integer;
          S, D: PAnsiChar;
        begin
          Result := TBitmap.Create();
          Result.PixelFormat := pf32bit;
          Result.SetSize( width, height );
          S := Src;
          D := Result.ScanLine[ 0 ];
          x := Integer( Result.ScanLine[ 1 ] ) - Integer( D );
          N := width * sizeof( longword );
          for i := 0 to height - 1 do
          begin
            Move( S^, D^, N );
            Inc( S, N );
            Inc( D, x );
          end;
        end;

    The bitmaps I need to work with are quite large (150MB of RGB memory). With these images it takes 150ms simply to create an empty bitmap and a further 140ms to overwrite its contents. Is there a way of initializing a TBitmap with the correct size WITHOUT initializing the pixels themselves, leaving the pixel memory uninitialized (e.g. dirty)? Or is there another way to do such a thing? I know we could work on the pixels in place, but this still leaves the 150ms of unnecessary initialization of the pixels.

    Read the article

  • How to insert zeros between bits in a bitmap?

    - by anatolyg
    I have some performance-heavy code that performs bit manipulations. It can be reduced to the following well-defined problem: given a 13-bit bitmap, construct a 26-bit bitmap that contains the original bits spaced at even positions. To illustrate:

        0000000000000000000abcdefghijklm (input, 32 bits)
        0000000a0b0c0d0e0f0g0h0i0j0k0l0m (output, 32 bits)

    I currently have it implemented in the following way in C:

        if (input & (1 << 12)) output |= 1 << 24;
        if (input & (1 << 11)) output |= 1 << 22;
        if (input & (1 << 10)) output |= 1 << 20;
        ...

    My compiler (MS Visual Studio) turned this into the following:

        test eax,1000h
        jne  0064F5EC
        or   edx,1000000h

    (repeated 13 times with minor differences in the constants). I wonder whether I can make it any faster. I would like to keep the code in C, but switching to assembly language is possible. Can I use some MMX/SSE instructions to process all bits at once? Maybe I can use multiplication (multiply by 0x11111111 or some other magical constant)? Would it be better to use a condition-set instruction (SETcc) instead of a conditional-jump instruction? If yes, how can I make the compiler produce such code for me? Any other ideas on how to make it faster? Any idea how to do the inverse bitmap transformation (I have to implement it too, but it's less critical)?
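
    One branch-free approach is the shift-and-mask ladder from the "interleave bits by binary magic numbers" family. A Python sketch of both directions (each line translates directly to one C statement over unsigned 32-bit integers):

        def spread13(x):
            """Move bit i of a 13-bit input to bit 2*i of the output."""
            x &= 0x1FFF
            x = (x | (x << 8)) & 0x00FF00FF
            x = (x | (x << 4)) & 0x0F0F0F0F
            x = (x | (x << 2)) & 0x33333333
            x = (x | (x << 1)) & 0x55555555
            return x

        def unspread13(x):
            """Inverse transform: gather the even-position bits back together."""
            x &= 0x55555555
            x = (x | (x >> 1)) & 0x33333333
            x = (x | (x >> 2)) & 0x0F0F0F0F
            x = (x | (x >> 4)) & 0x00FF00FF
            x = (x | (x >> 8)) & 0x0000FFFF
            return x

        assert spread13(0b1111111111111) == 0b1010101010101010101010101
        assert all(unspread13(spread13(v)) == v for v in range(1 << 13))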

    Read the article

  • In PHP, is it faster to get a value from an if statement or from an array?

    - by Vittorio Vittori
    Maybe this is a stupid question, but which is faster?

        <?php
        function getCss1 ($id = 0) {
            if ($id == 1) {
                return 'red';
            } else if ($id == 2) {
                return 'yellow';
            } else if ($id == 3) {
                return 'green';
            } else if ($id == 4) {
                return 'blue';
            } else if ($id == 5) {
                return 'orange';
            } else {
                return 'grey';
            }
        }

        function getCss2 ($id = 0) {
            $css[] = 'grey';
            $css[] = 'red';
            $css[] = 'yellow';
            $css[] = 'green';
            $css[] = 'blue';
            $css[] = 'orange';
            return $css[$id];
        }

        echo getCss1(3);
        echo getCss2(3);
        ?>

    I suspect the if statement is faster, but I prefer to ask!
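
    For comparison, the same shape of question in Python (an analogue, not PHP): an if/elif chain performs up to N comparisons, while an array/dictionary lookup is a single probe regardless of the number of keys, so the gap grows with the key that is matched last.

        import timeit

        COLORS = {1: 'red', 2: 'yellow', 3: 'green', 4: 'blue', 5: 'orange'}

        def get_css_if(id=0):
            if id == 1: return 'red'
            elif id == 2: return 'yellow'
            elif id == 3: return 'green'
            elif id == 4: return 'blue'
            elif id == 5: return 'orange'
            return 'grey'

        def get_css_dict(id=0):
            return COLORS.get(id, 'grey')   # one lookup, any number of keys

        print(timeit.timeit(lambda: get_css_if(5)))    # worst case of the chain
        print(timeit.timeit(lambda: get_css_dict(5)))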

    Read the article

  • Unicorn: Which number of worker processes to use?

    - by blackbird07
    I am running a Ruby on Rails app on a virtual Linux server that is capped at 1GB RAM. Currently, I am constantly hitting the limit and would like to optimize memory utilization. One option I am looking at is reducing the number of Unicorn workers. So what is the best way to determine the number of Unicorn workers to use? The current setting is 10 workers, but the maximum number of requests per second I have seen on Google Analytics Real-Time is 3 (reached only once at a peak time; 99% of the time it doesn't go above 1 request per second). So is it a safe assumption that I can, for now, go with 4 workers, leaving room for unexpected bursts of requests? What metrics should I look at to determine the number of workers, and what tools can I use for that on my Ubuntu machine?
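
    One common sizing approach is to let memory, not traffic, set the ceiling: measure one worker's resident size (ps or top on Ubuntu will do), reserve headroom for the OS and other services, and divide. A back-of-the-envelope sketch (all figures are made up; substitute your own measurements):

        total_ram_mb  = 1024   # the VPS cap
        reserved_mb   = 300    # OS, database, Unicorn master, headroom
        per_worker_mb = 120    # one worker's RSS, as measured with ps

        max_workers = (total_ram_mb - reserved_mb) // per_worker_mb
        print(max_workers)     # -> 6; then sanity-check against peak req/s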

    Read the article

  • What algorithm can I use to determine points within a semi-circle?

    - by khayman218
    I have a list of two-dimensional points and I want to determine which of them fall within a semi-circle. Originally, the target shape was a rectangle aligned with the x and y axes, so the current algorithm sorts the pairs by their x coordinate and binary searches to the first one that could fall within the rectangle. It then iterates over each point sequentially, and stops when it hits one that is beyond both the x and y upper bounds of the target rectangle. This does not work for a semi-circle, as you cannot determine effective upper/lower x and y bounds for it; the semi-circle can have any orientation. Worst case, I will find the least value of one dimension (say x) in the semi-circle, binary search to the first point beyond it, and then sequentially test the points until I get beyond the upper bound of that dimension: basically testing an entire band's worth of points on the grid. The problem is that this ends up checking many points that are not within the bounds.
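
    For the membership test itself, no bounding band is required: a point lies in a semi-circle exactly when it is within the radius of the center and on the correct side of the diameter line. A sketch (the center c, radius r, and unit vector d pointing from the center into the curved half are assumed to be known for your semi-circle):

        def in_semicircle(p, c, r, d):
            """p, c, d are (x, y) tuples; d points into the semi-circle's half."""
            vx, vy = p[0] - c[0], p[1] - c[1]
            if vx * vx + vy * vy > r * r:        # outside the full circle (no sqrt)
                return False
            return vx * d[0] + vy * d[1] >= 0    # on the semi-circle's side?

        points = [(0.5, 0.2), (-0.5, 0.2), (0.3, -0.9), (1.2, 0.0)]
        inside = [p for p in points if in_semicircle(p, (0.0, 0.0), 1.0, (1.0, 0.0))]
        print(inside)   # -> [(0.5, 0.2), (0.3, -0.9)]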

    Read the article

  • WPF: Improving Performance for Running on Older PCs

    - by Phil Sandler
    So, I'm building a WPF app and did a test deployment today, and found that it performed pretty poorly. I was surprised, as we are really not doing much in the way of visual effects or animations. I deployed on two machines: the fastest and the slowest that will need to run the application (the slowest PC has an Intel Celeron 1.80GHz with 2GB RAM). The application ran pretty well on the faster machine, but was choppy on the slower machine. And when I say "choppy", I mean the cursor jumped even just passing it over any open window of the app that had focus.

    I opened the Task Manager Performance window, and could see that the CPU usage jumped whenever the app had focus and the cursor was moving over it. If I gave focus to another application (e.g. Excel), the CPU usage went back down after a second. This happened on both machines, but the choppiness was only noticeable on the slower machine. I had very limited time to tinker on the deployment machines, so didn't do a lot of detailed testing. The app runs fine on my development machine, but I also see the CPU spiking up to 10% there, just running the cursor over the window.

    I downloaded the WPF performance tool from MS and have been tinkering with it (on my dev machine). The docs say this about the "Frame Rate" metric in the Perforator tool: "For applications without animation, this value should be near 0." The app is not doing any heavy animation, but the frame rate stays near 50 when the cursor is over any window. The screens I tested have column headers in a grid that highlight, and buttons that change color and appearance, when scrolled over. Even moving the mouse over blank areas of the windows causes the same frame rate and CPU usage, so it doesn't seem to be related to these minor animations. (Also, I am unable to figure out how to get anything but the two default tools, Perforator and Visual Profiler, installed into the WPF performance tool. That is probably a separate question.) I also have Redgate's profiling tool, but I'm not sure if that can shed any light on rendering performance.

    So, I realize this is not an easy thing to troubleshoot without specifics or sample code (which I can't post). My questions are:

    - What are some general things to look for (or avoid) in the code to improve performance?
    - What steps can I take using the WPF performance tool to narrow down the problem?
    - Is the PC spec listed above (Intel Celeron 1.80GHz with 2GB RAM) too slow to be running even vanilla WPF applications?

    Read the article

  • SEO Google - Navigation Title vs. Page Heading

    - by louism
    Hi, I was wondering if anyone knows whether there's a connection between what a navigation item is named and the heading of the page it links to, and whether this has an impact on SEO. So, for example, if I had a navigation menu item called About Us, but clicking it leads to a page with the heading Learn Who We Are (i.e. wrapped in <h1> heading tags), is the lack of an exact one-to-one match a bad thing in terms of SEO? Thanks

    Read the article

  • NHibernate, each property is filled with a different select statement

    - by Eitan
    I'm retrieving a list of NHibernate entities which have relationships to other tables/entities. I've noticed that instead of performing JOINs and populating the properties, NHibernate retrieves the entity and then issues a separate SELECT for each property. For example, if a user can have many roles and I retrieve a user from the DB, NHibernate retrieves the user and then populates the roles with another SELECT statement. The problem is that I want to retrieve, let's say, a list of products which have various many-to-many relationships, and relationships to items which have their own relationships. In the end I'm left with over a thousand DB calls to retrieve a list of 30 products. I've also set default lazy loading to false, because whenever I save the list of entities to a session, I get an error when trying to retrieve it on another page:

        LazyInitializationException: could not initialize proxy

    If anybody could shed any light I would truly appreciate it. Thanks. Eitan

    Read the article

  • Node sets of the same type with if-tests: making the code more compact

    - by Kalinin
    How can I make the code more beautiful (compact)?

        <xsl:template match="part">
          <table class="part">
            <xsl:if test="name != ''"><tr><td>????????</td><td><xsl:value-of select="name"/></td></tr></xsl:if>
            <xsl:if test="model != ''"><tr><td>??????</td><td><xsl:value-of select="model"/></td></tr></xsl:if>
            <xsl:if test="year != ''"><tr><td>???</td><td><xsl:value-of select="year"/></td></tr></xsl:if>
            <xsl:if test="glass_type != ''"><tr><td>???</td><td><xsl:value-of select="glass_type"/></td></tr></xsl:if>
            <xsl:if test="scancode != ''"><tr><td>???????</td><td><xsl:value-of select="scancode"/></td></tr></xsl:if>
            <xsl:if test="eurocode != ''"><tr><td>???????</td><td><xsl:value-of select="eurocode"/></td></tr></xsl:if>
            <xsl:if test="coment != ''"><tr><td>???????????</td><td><xsl:value-of select="coment"/></td></tr></xsl:if>
            <xsl:if test="glass_size != ''"><tr><td>??????</td><td><xsl:value-of select="glass_size"/></td></tr></xsl:if>
            <xsl:if test="vendor != ''"><tr><td>?????????????</td><td><xsl:value-of select="vendor"/></td></tr></xsl:if>
            <xsl:if test="trademark != ''"><tr><td>???????? ?????</td><td><xsl:value-of select="trademark"/></td></tr></xsl:if>
            <xsl:if test="fprice != ''"><tr><td>????</td><td><xsl:value-of select="fprice"/></td></tr></xsl:if>
          </table>
        </xsl:template>

    Update: I wrote:

        <my:translations xmlns:my="my:my">
          <w e="name" r="????????"/>
          <w e="model" r="??????"/>
          <w e="year" r="???"/>
          <w e="glass_type" r="???"/>
          <w e="scancode" r="???????"/>
          <w e="eurocode" r="???????"/>
          <w e="comment" r="???????????"/>
          <w e="glass_size" r="??????"/>
          <w e="vendor" r="?????????????"/>
          <w e="trademark" r="???????? ?????"/>
          <w e="fprice" r="????"/>
        </my:translations>

        <xsl:value-of select="//w/@r"/>

    and I get no result from this code. Is that normal? And how can I get each w element?

    Read the article

  • Script Speed vs Memory Usage

    - by Doug Neiner
    I am working on an image generation script in PHP and have gotten it working two ways. One way is slow but uses a limited amount of memory; the second is much faster, but uses 6x the memory. There is no leakage in either script (as far as I can tell). In a limited benchmark, here is how they performed:

        --------------------------------------------
        METHOD | TOTAL TIME | PEAK MEMORY | IMAGES
        --------------------------------------------
        One    |     65.626 |     540,036 |    200
        Two    |     20.207 |   3,269,600 |    200
        --------------------------------------------

    And here is the average of the previous numbers (if you don't want to do your own math):

        --------------------------------------------
        METHOD | TOTAL TIME | PEAK MEMORY | IMAGES
        --------------------------------------------
        One    |      0.328 |     540,036 |      1
        Two    |      0.101 |   3,269,600 |      1
        --------------------------------------------

    Which method should I use, and why? I anticipate this being used by a high volume of users, with each user making 10-20 requests to this script during a normal visit. I am leaning toward the faster method because, though it uses more memory, it runs in a third of the time and would reduce the number of concurrent requests.

    Read the article

  • Which field type is best for storing PRICE values?

    - by BerggreenDK
    Hi there. I am wondering what's the best "price field" in MSSQL for a shop-like structure? Looking at this overview: http://www.teratrax.com/sql_guide/data_types/sql_server_data_types.html We have datatypes called money and smallmoney, then we have decimal/numeric, and lastly float and real. Name, memory/disk usage and value ranges:

        money:      8 bytes (values: -922,337,203,685,477.5808 to +922,337,203,685,477.5807)
        smallmoney: 4 bytes (values: -214,748.3648 to +214,748.3647)
        decimal:    9 [default, min. 5] bytes (values: -10^38 +1 to 10^38 -1)
        float:      8 bytes (values: -1.79E+308 to 1.79E+308)
        real:       4 bytes (values: -3.40E+38 to 3.40E+38)

    My question is: is it really wise to store price values in those types? What about, e.g., INT?

        int: 4 bytes (values: -2,147,483,648 to 2,147,483,647)

    Let's say a shop uses dollars, and they have cents, but I don't see prices like $49.2142342, so using a lot of decimal places to show cents seems a waste of SQL bandwidth. Secondly, most shops wouldn't show any prices near 200,000,000 (not in normal web shops at least, unless someone is trying to sell me a famous tower in Paris). So why not go for an int? An int is fast, it's only 4 bytes, and you can easily make decimals by saving values in cents instead of dollars and then dividing when you present the values. The other approach would be to use smallmoney, which is 4 bytes too, but this will require the math part of the CPU to do the calculation, whereas int stays in integer arithmetic; on the downside, you will need to divide every single outcome. Are there any currency-related problems with regional settings when using smallmoney/money fields? What will these translate to in C#/.NET? Any pros/cons? Go for integer prices, smallmoney, or something else? What does your experience tell you?
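
    A minimal sketch of the integer-cents idea described above (Python for illustration; in C#/.NET the money types surface as System.Decimal): keep storage and arithmetic in whole cents and convert only at the presentation edge.

        from decimal import Decimal

        def to_cents(price_str):
            """Parse a display price into integer cents, exactly."""
            return int(Decimal(price_str) * 100)

        def to_display(cents):
            """Format integer cents back into dollars for presentation."""
            return f"${Decimal(cents) / 100:.2f}"

        cart = [to_cents("49.99"), to_cents("0.05"), to_cents("12.50")]
        total = sum(cart)                  # plain integer arithmetic: exact, fast
        print(total, to_display(total))    # -> 6254 $62.54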

    Read the article

  • MySQL partitioning performance

    - by Imran Pathan
    I measured performance on key-partitioned tables and on normal tables separately, but couldn't find any performance improvement with partitioning. Queries are pruned. Using MySQL 5.1.47 on RHEL 4. Table details:

    - UserUsage: has an entry per user mobile number and data usage for each date. Mobile number and date form the primary key.
    - UserProfile: queries the previous table and stores a summary for each mobile number. Mobile number is the primary key.

        CREATE TABLE `UserUsage` (
          `Msisdn` decimal(20,0) NOT NULL,
          `Date` date NOT NULL,
          ...
          PRIMARY KEY USING BTREE (`Msisdn`,`Date`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1
        PARTITION BY KEY(Msisdn) PARTITIONS 50;

        CREATE TABLE `UserProfile` (
          `Msisdn` decimal(20,0) NOT NULL,
          ...
          PRIMARY KEY (`Msisdn`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1
        PARTITION BY KEY(Msisdn) PARTITIONS 50;

    The second table is updated by a Perl program that selects from the first table ordered by date:

        select * from UserUsage where Msisdn=number order by Date desc limit 7
        [process data in Perl]
        update UserProfile values(....) where Msisdn=number

    EXPLAIN PARTITIONS for the select shows rows being scanned in one particular partition only. Is something wrong with the partition design or the queries, given that partitioning takes almost the same or more time compared to normal tables?

    Read the article

  • Vector [] vs copying

    - by sak
    What is faster and/or generally better?

        vector<myType> myVec;
        int i;
        myType current;

        for( i = 0; i < 1000000; i++ ) {
            current = myVec[ i ];
            doSomethingWith( current );
            doAlotMoreWith( current );
            messAroundWith( current );
            checkSomeValuesOf( current );
        }

    or

        vector<myType> myVec;
        int i;

        for( i = 0; i < 1000000; i++ ) {
            doSomethingWith( myVec[ i ] );
            doAlotMoreWith( myVec[ i ] );
            messAroundWith( myVec[ i ] );
            checkSomeValuesOf( myVec[ i ] );
        }

    I'm currently using the first solution. There are really millions of calls per second, and every single bit comparison/move is performance-problematic.

    Read the article

  • Changing the serialization procedure for a graph of objects (.NET Framework)

    - by pierusch
    Hello. I'm developing a scientific application using the .NET Framework. The application depends heavily upon a large data structure (a tree-like structure) that has been serialized using a standard BinaryFormatter object. The graph structure looks like this:

        <Serializable()> Public Class BigObjet
            Inherits List(Of smallObject)
        End Class

        <Serializable()> Public Class smallObject
            Inherits List(Of otherSmallerObjects)
        End Class
        ...

    The BinaryFormatter object does a nice job, but it's not optimized at all, and the entire data structure reaches around 100MB on my filesystem. Deserialization works too, but it's pretty slow (around 30 seconds on my quad core). I've found a nice .dll on CodeProject (see "Optimizing serialization..."), so I wrote a modified version of the classes above overriding the default serialization/deserialization procedure, with very good results. The problem is this: I can't lose the data previously serialized with the old version, and I'd like to be able to use the new serialization/deserialization method. I have some ideas, but I'm pretty sure someone will be able to give me proper and better advice!

    - Use a "helper" graph of objects which takes care of the entire serialization/deserialization procedure, reading data in the old format and converting it into the classes I need. This could work, but the BinaryFormatter "needs" to know the types being serialized... :(
    - Modify the "old" graph to include a modified version of the serialization procedure, so I'll be able to deserialize old files and save them in the new format... this doesn't sound too good, IMHO.

    Well, any help will be highly appreciated :)

    Read the article
