Search Results

Search found 15401 results on 617 pages for 'memory optimization'.

Page 185/617 | < Previous Page | 181 182 183 184 185 186 187 188 189 190 191 192  | Next Page >

  • Why is Javascript's Math.floor the slowest way to calculate floor in Javascript?

    - by z5h
    I'm generally not a fan of microbenchmarks, but this one has a very interesting result: http://ernestdelgado.com/archive/benchmark-on-the-floor/ It suggests that Math.floor is the SLOWEST way to calculate floor in JavaScript, with ~~n, n|n and n&n all being faster. This seems pretty shocking, as I would expect the people implementing JavaScript in today's modern browsers to be pretty smart. Does Math.floor do something important that the other methods fail to do? Is there any reason to use it?

    Read the article

  • Splitting tables by field to optimize MySQL?

    - by AK
    Does splitting fields into multiple tables ever yield faster queries? Consider the following two scenarios:

        Table1
        -----------
        int PersonID
        text Value1
        float Value2

    or

        Table1
        -----------
        int PersonID
        text Value1

        Table2
        -----------
        int PersonID
        float Value2

    If Value1 and Value2 are always being displayed together, I imagine Table1 is always faster because the second schema would require two SELECT statements. But are there any situations where you would choose the second? If the number of records were expected to be really large?

    Read the article

  • PostGres - run a query in batches?

    - by CaffeineIV
    Is it possible to loop through a query so that if (for example) 500,000 rows are found, it'll return results for the first 10,000 and then rerun the query again? So, what I want to do is run a query and build an array, like this:

        $result = pg_query("SELECT * FROM myTable");
        $i = 0;
        while ($row = pg_fetch_array($result)) {
            $myArray[$i]['id'] = $row['id'];
            $myArray[$i]['name'] = $row['name'];
            $i++;
        }

    But I know that there will be several hundred thousand rows, so I wanted to do it in batches of 10,000 or so: 1 - 9,999, then 10,000 - 19,999, etc. The reason is that I keep getting this error:

        Fatal error: Allowed memory size of 536870912 bytes exhausted (tried to allocate 3 bytes)

    Incidentally, I don't understand how an allocation of 3 bytes could exhaust 512M... So if that's something I can just change, that'd be great, although it still might be better to do this in batches?
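
    A minimal sketch of one way to batch this, using Python and psycopg2 with a server-side (named) cursor so rows arrive in chunks instead of as one 500,000-row array; the table and column names come from the question, while the connection details and batch handling are assumptions:

        import psycopg2

        # Connection parameters are placeholders; adjust for your environment.
        conn = psycopg2.connect("dbname=mydb user=me")

        # A named cursor is a server-side cursor: PostgreSQL keeps the result set
        # and hands rows over in chunks instead of sending everything at once.
        cur = conn.cursor(name="batch_cursor")
        cur.execute("SELECT id, name FROM myTable")

        while True:
            batch = cur.fetchmany(10000)   # at most 10,000 rows per batch
            if not batch:
                break
            for row_id, name in batch:
                pass  # process each row here instead of appending it to one huge array

        cur.close()
        conn.close()

    The same effect can be had from PHP with LIMIT/OFFSET queries, but a server-side cursor avoids re-running the query for every batch.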

    Read the article

  • Measuring debug vs release of ASP.NET applications

    - by Alex Angas
    A question at work came up about building ASP.NET applications in release vs debug mode. When researching further (particularly on SO), general advice is that setting <compilation debug="true"> in web.config has a much bigger impact. Has anyone done any testing to get some actual numbers about this? Here's the sort of information I'm looking for (which may give away my experience with testing such things):

        Execution time     | Debug build   | Release build
        -------------------+---------------+---------------
        Debug web.config   | average 1     | average 2
        Retail web.config  | average 3     | average 4

        Max memory usage   | Debug build   | Release build
        -------------------+---------------+---------------
        Debug web.config   | average 1     | average 2
        Retail web.config  | average 3     | average 4

        Output file size   | Debug build   | Release build
        -------------------+---------------+---------------
                           | size 1        | size 2

    Read the article

  • Avoid the use of loops (for) with R

    - by albergali
    Hi, I'm working with R and I have code like this:

        i <- 1
        j <- 1
        for (i in 1:10)
          for (j in 1:100)
            if (data[i] == paths[j,1]) cluster[i,4] <- paths[j,2]

    where:

        data is a vector with 100 rows and 1 column
        paths is a matrix with 100 rows and 5 columns
        cluster is a matrix with 100 rows and 5 columns

    My question is: how could I avoid the use of "for" loops to iterate through the matrix? I don't know whether the apply functions (lapply, tapply...) are useful in this case. This becomes a problem when j goes up to 10000, for example, because execution time is very long. Thank you
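
    For illustration only, here is the same idea sketched in Python/NumPy rather than R: the nested loops are really a keyed lookup, so building the lookup once and applying it in a single pass removes the inner loop entirely; the arrays below are made up to mirror the shapes in the question. In R itself, match() or merge() plays the same role.

        import numpy as np

        # Hypothetical data mirroring the question's shapes.
        data = np.random.randint(0, 50, size=100)         # vector of 100 values
        paths = np.random.randint(0, 50, size=(100, 5))   # 100 x 5 matrix
        cluster = np.zeros((100, 5))

        # Map paths[, 1] -> paths[, 2] once, then apply it to every element of data.
        lookup = dict(zip(paths[:, 0].tolist(), paths[:, 1].tolist()))
        cluster[:, 3] = [lookup.get(d, cluster[i, 3])     # R's column 4 is index 3 here
                         for i, d in enumerate(data.tolist())]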

    Read the article

  • Resize an image and maintain quality?

    - by JasonS
    Hi, I have a problem with resizing images. What happens is that if you upload a file larger than the stated parameters, the image is cropped and then saved at 100% quality. So if I upload a large JPEG of 272Kb and the image is cropped by 100-odd pixels, the file size then goes up to 1.2Mb. We are saving images at 100% quality, and I assume that this is what is causing the problem. The image is exported from Photoshop at 30% quality, which reduces the file size; resaving the image at 100% quality creates the same image, but I assume with a lot of redundant file data. Has anyone encountered this before? Does anyone have a solution? This is what we are using:

        $source_im = imagecreatefromjpeg($file);
        $dest_im = imagecreatetruecolor($newsize_x, $newsize_y);

        imagecopyresampled($dest_im, $source_im,
                           0, 0, $offset_x, $offset_y,
                           $newsize_x, $newsize_y,
                           $sourceWidth, $sourceHeight);

        imagedestroy($source_im);

        if ($greyscale) {
            $dest_im = $this->imageconvertgreyscale($dest_im);
        }

        imagejpeg($dest_im, $save_to_file, $quality);
        break;
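
    For comparison, a minimal Python/Pillow sketch of the same operation; the key point is that JPEG quality is chosen at save time, so re-saving at 100% after resizing a 30%-quality original only inflates the file. The file names, target size and quality value are assumptions:

        from PIL import Image

        img = Image.open("upload.jpg")                 # hypothetical input file
        img = img.resize((800, 600), Image.LANCZOS)    # hypothetical target size
        # quality controls how the *new* JPEG is compressed; a value around 75
        # usually looks the same as the source while keeping the file small.
        img.save("resized.jpg", "JPEG", quality=75)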

    Read the article

  • OpenGL Performance Questions

    - by Daniel
    This subject, as with any optimisation problem, gets hit on a lot, but I just couldn't find what I (think I) want. A lot of tutorials, and even SO questions, have similar tips, generally covering:

      - Use GL face culling (the OpenGL function, not the scene logic)
      - Only send 1 matrix to the GPU (the projectionModelView combination), thereby decreasing the MVP calculations from per vertex to once per model (as it should be)
      - Use interleaved vertices
      - Minimise the number of GL calls, batching where appropriate

    and possibly a few/many others. I am (for curiosity reasons) rendering 28 million triangles in my application using several vertex buffers. I have tried all the above techniques (to the best of my knowledge) and received almost no performance change. While I am getting around 40 FPS in my implementation, which is by no means problematic, I am still curious as to where these optimisation 'tips' actually come into use. My CPU is idling at around 20-50% during rendering, therefore I assume I am GPU bound. Note: I am looking into gDEBugger at the moment. Cross-posted at Game Development.

    Read the article

  • Optimize slow ranking query

    - by Juan Pablo Califano
    I need to optimize a query for a ranking that is taking forever (the query itself works, but I know it's awful, and when I tried it with a good number of records it gave a timeout). I'll briefly explain the model. I have 3 tables: player, team and player_team. I have players that can belong to a team. Obvious as it sounds, players are stored in the player table and teams in team. In my app, each player can switch teams at any time, and a log has to be maintained. However, a player is considered to belong to only one team at a given time; the current team of a player is the last one he's joined. The structure of player and team is not relevant, I think; I have an id column PK in each. In player_team I have:

        id        (PK)
        player_id (FK -> player.id)
        team_id   (FK -> team.id)

    Now, each team is assigned a point for each player that has joined, so I want to get a ranking of the first N teams with the biggest number of players. My first idea was to get the current players from player_team first (that is, one record per player: the player's current team). I failed to find a simple way to do it (I tried GROUP BY player_team.player_id HAVING player_team.id = MAX(player_team.id), but that didn't cut it). I tried a number of queries that didn't work, but managed to get this working:

        SELECT COUNT(*) AS total, pt.team_id, p.facebook_uid AS owner_uid, t.color
        FROM player_team pt
        JOIN player p ON (p.id = pt.player_id)
        JOIN team t ON (t.id = pt.team_id)
        WHERE pt.id IN (
            SELECT MAX(J.id)
            FROM player_team J
            GROUP BY J.player_id
        )
        GROUP BY pt.team_id
        ORDER BY total DESC
        LIMIT 50

    As I said, it works but looks very bad and performs worse, so I'm sure there must be a better way to go. Does anyone have any ideas for optimizing this? I'm using MySQL, by the way. Thanks in advance. Adding the EXPLAIN (sorry, not sure how to format it properly):

        id  select_type         table  type    possible_keys                                 key               key_len  ref           rows    Extra
        1   PRIMARY             t      ALL     PRIMARY                                       NULL              NULL     NULL          5000    Using temporary; Using filesort
        1   PRIMARY             pt     ref     FKplayer_pt77082,FKplayer_pt265938,new_index  FKplayer_pt77082  4        t.id          30      Using where
        1   PRIMARY             p      eq_ref  PRIMARY                                       PRIMARY           4        pt.player_id  1
        2   DEPENDENT SUBQUERY  J      index   NULL                                          new_index         8        NULL          150000  Using index

    Read the article

  • Python Profiling in Windows: How do you ignore Built-in Functions?

    - by Tim McJilton
    I haven't been able to find this anywhere online. I'm trying to use a profiler to better optimize my code, but when sorting by which functions use up the most time cumulatively, things like str(), print, and other widely used built-in functions eat up much of the profile. What is the best way to profile a Python program so that only the user-defined functions show up, to see which areas of the code can be optimized? I hope that makes sense; any light you can shed on this subject would be much appreciated.
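
    One way to do this (a sketch, not necessarily the best one): profile to a file with cProfile, then use pstats to sort by cumulative time and restrict the printed rows to your own source files, which drops the built-ins from the report. The workload function and the "myscript" filter below are placeholders:

        import cProfile
        import pstats

        def my_workload():
            # stand-in for the code you actually want to profile
            return sum(len(str(i)) for i in range(200000))

        if __name__ == "__main__":
            cProfile.run("my_workload()", "profile.out")

            stats = pstats.Stats("profile.out")
            stats.sort_stats("cumulative")
            # print_stats() accepts regex filters applied to "file:line(function)",
            # so filtering on your own file/module name hides built-ins and library code.
            stats.print_stats("myscript")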

    Read the article

  • Drawbacks of Dynamic Queries in SQL Server 2005?

    - by KuldipMCA
    I am using many dynamic queries in my database procedures, because my filter is not fixed, so I take @Filter as a parameter and build the query inside the procedure:

        Declare @query as varchar(8000)
        Declare @Filter as varchar(1000)

        set @query = 'Select * from Person.Address where 1=1 and ' + @Filter
        exec(@query)

    The filter can contain any field from the table for comparison. Will this affect my performance or not? Is there any alternate way to achieve this type of thing?

    Read the article

  • Alternate User select interface in django admin to reduce page size on large site?

    - by David Eyk
    I have a Django-based site with roughly 300,000 User objects. Admin pages for objects with a ForeignKey field to User take a very long time to load as the resulting form is about 6MB in size. Of course, the resulting dropdown isn't particularly useful, either. Are there any off-the-shelf replacements for handling this case? I've been googling for a snippet or a blog entry, but haven't found anything yet. I'd like to have a smaller download size and a more usable interface.
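
    One common off-the-shelf fix is the admin's raw_id_fields option, which replaces the 300,000-entry dropdown with a small ID input plus a search popup. A minimal sketch, where the Order model and myapp package are assumptions standing in for whatever model holds the ForeignKey to User:

        from django.contrib import admin
        from myapp.models import Order   # hypothetical model with a ForeignKey to User

        class OrderAdmin(admin.ModelAdmin):
            # Render the user field as a raw ID input instead of a <select>
            # containing every User row.
            raw_id_fields = ("user",)

        admin.site.register(Order, OrderAdmin)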

    Read the article

  • Building a static (but complicated) lookup table using templates

    - by MarkD
    I am currently in the process of optimizing a numerical analysis code. Within the code, there is a 200x150-element lookup table (currently a static std::vector<std::vector<double> >) that is constructed at the beginning of every run. The construction of the lookup table is actually quite complex: the values in the table are computed using an iterative secant method on a complicated set of equations. Currently, for a simulation, the construction of the lookup table is 20% of the run time (run times are on the order of 25 seconds; lookup table construction takes 5 seconds). While 5 seconds might not seem like a lot, when running our MC simulations, where we run 50k+ simulations, it suddenly becomes a big chunk of time. Along with some other ideas, one thing that has been floated is: can we construct this lookup table using templates at compile time? The table itself never changes. Hard-coding a large array isn't a maintainable solution (the equations that go into generating the table are constantly being tweaked), but it seems that if the table can be generated at compile time, it would give us the best of both worlds (easily maintainable, no overhead during runtime). So, I propose the following (much simplified) scenario. Let's say you wanted to generate a static array (use whatever container suits you best: 2D C array, vector of vectors, etc.) at compile time. You have a function defined:

        double f(int row, int col);

    where the return value is the entry in the table, row is the lookup table row, and col is the lookup table column. Is it possible to generate this static array at compile time using templates, and how?

    Read the article

  • How to make Visual C++ 9 not emit code that is actually never called?

    - by sharptooth
    My native C++ COM component uses ATL. In DllRegisterServer() I call CComModule::RegisterServer():

        STDAPI DllRegisterServer()
        {
            return _Module.RegisterServer(FALSE); // <<< notice FALSE here
        }

    FALSE is passed to indicate that the type library should not be registered. ATL is available as sources, so I in fact compile the implementation of CComModule::RegisterServer(). Somewhere down the call stack there's an if statement:

        if (doRegisterTypeLibrary) { // << FALSE goes here
            // do some stuff, then call RegisterTypeLib()
        }

    The compiler sees all of the above code, so it can see that the if condition is in fact always false, yet when I inspect the linker progress messages I see that the reference to RegisterTypeLib() is still there, so the if statement is not eliminated. Can I make Visual C++ 9 perform better static analysis, actually see that some code is never called, and not emit that code?

    Read the article

  • How to increase query speed without using full-text search?

    - by andre matos
    This is my simple query; by searching for selectnothing I'm sure I'll have no hits.

        SELECT nome_t FROM myTable WHERE nome_t ILIKE '%selectnothing%';

    This is the EXPLAIN ANALYZE VERBOSE output:

        Seq Scan on myTable (cost=0.00..15259.04 rows=37 width=29) (actual time=2153.061..2153.061 rows=0 loops=1)
          Output: nome_t
          Filter: (nome_t ~~* '%selectnothing%'::text)
        Total runtime: 2153.116 ms

    myTable has around 350k rows and the table definition is something like:

        CREATE TABLE myTable (
            nome_t text NOT NULL
        );

    I have an index on nome_t as stated below:

        CREATE INDEX idx_m_nome_t ON myTable USING btree (nome_t);

    Although this is clearly a good candidate for full-text search, I would like to rule that option out for now. This query is meant to be run from a web application and currently it's taking around 2 seconds, which is obviously too much. Is there anything I can do, like using other index methods, to improve the speed of this query?

    Read the article

  • SEO Google - Navigation Title vs. Page Heading

    - by louism
    Hi, I was wondering if anyone knows whether there's a connection between what a navigation item is named and the heading of the page it goes to - does this have an impact on SEO? So, for example, if I had an item called About Us in my navigation menu, but when you click it you come to a page with the heading Learn Who We Are (i.e. wrapped in <h1> heading tags), is that a bad thing in terms of SEO because there isn't an exact one-to-one match? Thanks

    Read the article

  • Why does a freed struct in C still have data?

    - by kliketa
    When I run this code:

        #include <stdio.h>
        #include <stdlib.h>

        typedef struct _Food
        {
            char name[128];
        } Food;

        int main (int argc, char **argv)
        {
            Food *food;

            food = (Food*) malloc (sizeof (Food));
            snprintf (food->name, 128, "%s", "Corn");

            free (food);

            printf ("%d\n", sizeof *food);
            printf ("%s\n", food->name);
        }

    I still get

        128
        Corn

    although I have freed food. Why is this? Is the memory really freed?

    Read the article

  • Feedback on Optimizing a C# .NET Code Block

    - by Brett Powell
    I just spent quite a few hours reading up on TCP servers and the protocol I was trying to implement, and finally got everything working great. I noticed the code looks like absolute bollocks (is that the correct usage? I'm not a Brit) and would like some feedback on optimizing it, mostly for reuse and readability. The packet format is always int, int, int, string, string.

        try
        {
            BinaryReader reader = new BinaryReader(clientStream);
            int packetsize = reader.ReadInt32();
            int requestid = reader.ReadInt32();
            int serverdata = reader.ReadInt32();
            Console.WriteLine("Packet Size: {0} RequestID: {1} ServerData: {2}", packetsize, requestid, serverdata);

            List<byte> str = new List<byte>();
            byte nextByte = reader.ReadByte();
            while (nextByte != 0)
            {
                str.Add(nextByte);
                nextByte = reader.ReadByte();
            }

            // Password sent to be authenticated
            string string1 = Encoding.UTF8.GetString(str.ToArray());

            str.Clear();
            nextByte = reader.ReadByte();
            while (nextByte != 0)
            {
                str.Add(nextByte);
                nextByte = reader.ReadByte();
            }

            // NULL string
            string string2 = Encoding.UTF8.GetString(str.ToArray());
            Console.WriteLine("String1: {0} String2: {1}", string1, string2);

            // Reply to authentication request
            MemoryStream stream = new MemoryStream();
            BinaryWriter writer = new BinaryWriter(stream);
            writer.Write((int)(1));          // Packet Size
            writer.Write((int)(requestid));  // Mirror RequestID if authenticated, -1 if failed

            byte[] buffer = stream.ToArray();
            clientStream.Write(buffer, 0, buffer.Length);
            clientStream.Flush();
        }

    I am going to be dealing with other packet types as well that are formatted the same way (int/int/int/str/str), but with different values. I could probably create a packet class, but this is a bit outside my scope of knowledge for how to apply it to this scenario. If it makes any difference, this is the protocol I am implementing: http://developer.valvesoftware.com/wiki/Source_RCON_Protocol
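
    Not C#, but a small Python sketch of the idea behind a reusable packet helper: isolate the int/int/int/str/str layout in one parsing function so every packet type shares it. The buffer-based approach and the helper names are assumptions; the question's stream-based reads work just as well:

        import struct

        def read_cstring(buf, offset):
            """Return (text, next_offset) for a null-terminated UTF-8 string in buf."""
            end = buf.index(b"\x00", offset)
            return buf[offset:end].decode("utf-8"), end + 1

        def parse_packet(buf):
            """Parse one little-endian int/int/int/str/str packet from a bytes buffer."""
            packetsize, requestid, serverdata = struct.unpack_from("<iii", buf, 0)
            string1, pos = read_cstring(buf, 12)   # 12 = three 4-byte ints
            string2, _ = read_cstring(buf, pos)
            return packetsize, requestid, serverdata, string1, string2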

    Read the article

  • Is code clearness killing application performance?

    - by Jorge Córdoba
    As today's code gets more complex by the minute, it needs to be designed to be maintainable - meaning easy to read and easy to understand. That being said, I can't help but remember the programs that ran a couple of years ago, such as Winamp or some games, where you needed a high-performance program because your 486 at 100 MHz wouldn't play MP3s with that beautiful MP3 player which consumed all of your CPU cycles. Now I run Media Player (or whatever), start playing an MP3, and it eats up 25-30% of one of my four cores. Come on!! If a 486 could do it, how can the playback take up so much processor to do the same thing? I'm a developer myself, and I always used to advise: keep your code simple, don't prematurely optimize for performance. It seems we've gone from "trying to use the least amount of CPU possible" to "if it doesn't take too much CPU, it's all right". So, do you think we are killing performance by ignoring optimizations?

    Read the article

  • NHibernate, each property is filled with a different select statement

    - by Eitan
    I'm retrieving a list of NHibernate entities which have relationships to other tables/entities. I've noticed that instead of NHibernate performing JOINs and populating the properties, it retrieves the entity and then issues a select for each related property. For example, if a user can have many roles and I retrieve a user from the DB, NHibernate retrieves the user and then populates the roles with another select statement. The problem is that I want to retrieve, let's say, a list of products which have various many-to-many relationships and relationships to items which have their own relationships. In the end I'm left with over a thousand DB calls to retrieve a list of 30 products. I've also set default lazy loading to false, because whenever I save the list of entities to a session I get an error when trying to retrieve it on another page:

        LazyInitializationException: could not initialize proxy

    If anybody could shed any light I would truly appreciate it. Thanks. Eitan

    Read the article

  • changing the serialization procedure for a graph of objects (.net framework)

    - by pierusch
    Hello, I'm developing a scientific application using the .NET Framework. The application depends heavily upon a large data structure (a tree-like structure) that has been serialized using a standard BinaryFormatter object. The graph structure looks like this:

        <Serializable()> Public Class BigObjet
            Inherits List(Of smallObject)
        End Class

        <Serializable()> Public Class smallObject
            Inherits List(Of otherSmallerObjects)
        End Class
        ...

    The BinaryFormatter object does a nice job, but it's not optimized at all, and the entire data structure reaches around 100Mb on my filesystem. Deserialization works too, but it's pretty slow (around 30 seconds on my quad core). I've found a nice .dll on CodeProject (see "optimizing serialization..."), so I wrote a modified version of the classes above overriding the default serialization/deserialization procedure, with very good results. The problem is this: I can't lose the data previously serialized with the old version, and I'd like to be able to use the new serialization/deserialization method. I have some ideas, but I'm pretty sure someone will be able to give me proper and better advice!

      - Use a "helper" graph of objects which takes care of the entire serialization/deserialization procedure, reading data in the old format and converting it into the classes I need. This could work, but the BinaryFormatter "needs" to know the types being serialized, so... :(
      - Modify the "old" graph to include a modified version of the serialization procedure, so I'll be able to deserialize old files and save them in the new format... this doesn't sound too good IMHO.

    Well, any help will be highly appreciated :)

    Read the article

  • How to return a std::string from C's "getcwd" function

    - by rubenvb
    Sorry to keep hammering on this, but I'm trying to learn :). Is this any good? And yes, I care about memory leaks. I can't find a decent way of preallocating the char*, because there simply seems to be no cross-platform way.

        const string getcwd()
        {
            char* a_cwd = getcwd(NULL, 0);
            string s_cwd(a_cwd);
            free(a_cwd);
            return s_cwd;
        }

    UPDATE2: without Boost or Qt, the most common stuff can get long-winded (see accepted answer)

    Read the article

  • Vector [] vs copying

    - by sak
    What is faster and/or generally better?

        vector<myType> myVec;
        int i;
        myType current;

        for (i = 0; i < 1000000; i++)
        {
            current = myVec[i];
            doSomethingWith(current);
            doAlotMoreWith(current);
            messAroundWith(current);
            checkSomeValuesOf(current);
        }

    or

        vector<myType> myVec;
        int i;

        for (i = 0; i < 1000000; i++)
        {
            doSomethingWith(myVec[i]);
            doAlotMoreWith(myVec[i]);
            messAroundWith(myVec[i]);
            checkSomeValuesOf(myVec[i]);
        }

    I'm currently using the first solution. There are really millions of calls per second, and every single bit of comparison/copying matters for performance.

    Read the article

  • Need help with a nested loop of queries in PHP and MySQL?

    - by mysqllearner
    Hi, I am trying to do this:

        <?php
        $good_customer = 0;

        $q = mysql_query("SELECT user FROM users WHERE activated = '1'"); // this gives me about 40k users

        while ($r = mysql_fetch_assoc($q)) {
            $money_spent = 0;
            $user = $r['user'];

            // Do queries on another 20 tables
            for ($i = 1; $i <= 20; $i++) {
                $tbl_name = 'data' . $i;

                $q2 = mysql_query("SELECT money_spent FROM $tbl_name WHERE user = '{$user}'");

                while ($r2 = mysql_fetch_assoc($q2)) {
                    $money_spent += $r2['money_spent'];
                }

                if ($money_spent > 1000000) {
                    $good_customer += 1;
                }
            }
        }

    This is just an example. I am testing on localhost; for a single user it returns very fast, but when I try 1000 it takes forever, not to mention 40k users. Is there any way to optimise/improve this code? EDIT: By the way, each of the other 20 tables has ~20-40k records.

    Read the article
