Search Results

Search found 97855 results on 3915 pages for 'code performance'.

  • iPhone Image Resources, ICO vs PNG, app bundle filesize

    - by Jasarien
    My application has a collection of around 1940 icons that are used throughout. They're currently in ICO format, and new images provided to me come in ICO too. I have noticed that they contain a 16x16 and a 32x32 representation of each icon in one file. Each file is roughly 4KB in size (as reported by Finder, though ls reports that they vary from ~1000 to 5000 bytes). A very small number of these icons contain only the 32x32 representation, and as a result are only around 700 bytes each.

    Currently I am bundling these icons with my application, and they are inflating the size of the app more than I would like. Altogether, the images total just about 25.5MB. Xcode must apply some kind of compression, because the resulting app bundle is about 12.4MB. Compressing this further into a ZIP (as it would be when submitted to the App Store) results in a final file of 5.8MB. I'm aware that the maximum limit for over-the-air App Store downloads has been raised to 20MB since the introduction of the iPad (I'm not sure whether that extends to iPhone apps as well as iPad apps; if not, the limit would be 10MB). My worry is that new icons are going to be added (sometimes up to 10 icons per week) and will continue to inflate the app bundle over time. What is the best way to distribute these icons with my app?

    Things I've tried and not had much success with:
    - Converting the icons from ICO to PNG. I tried this in the hope that the pngcrush utility would help with the file size, but it appears that it doesn't make much of a difference between a normal PNG and a crushed PNG (I believe it optimises the image for display on the iPhone's GPU rather than compressing its size). Also, going from ICO to PNG actually increased the size of the icon files.
    - Zipping the images and then uncompressing them on first run. While this did reduce the overall image size, I found that the effort needed to unzip them, copy them to the Documents folder, and ensure that duplication doesn't happen on upgrades was too much hassle to be worth the benefit. Also, on original and 3G iPhones, unzipping and copying around 25MB of images takes too long and creates a bad experience.

    Things I've considered but not yet tried:
    - Instead of distributing the icons within the app bundle, hosting them online and downloading each icon on demand (which icons are actually displayed, and when, depends on the user's data). The issue is that bandwidth costs money, and image downloads will be bandwidth intensive. However, my app currently has a small userbase of around 5,500 users (of which I estimate around 1,500 to be active, based on Flurry stats), and I have a huge unused bandwidth allowance with my current hosting package.

    So I'm open to thoughts on how to solve this tricky issue.

  • ASP.NET page runs slow in production

    - by Brandi
    I have created an ASP.NET page that works flawlessly and quickly from Visual Studio. It does a very large database read, from a database on our network, to load a GridView inside an UpdatePanel, and it displays progress in an Ajax ModalPopupExtender. Of course I don't expect it to be instant, what with the large DB reads, but it takes on the order of seconds, not minutes.

    This all works great until I put it up on the server: it is very, VERY slow when I access it via the internet, taking several minutes to load the database information into the GridView. I'm baffled why it would not perform exactly the same as it does from Visual Studio (it is in release mode and I have taken off the debug flag). I have since tried things like eliminating unneeded UpdatePanels and throwing out the Ajax toolkit; nothing has made it any faster in production. It is not the database as far as I know, since it has been consistently fast from my computer (from Visual Studio) and consistently slow from the server.

    I am wondering: where do I look next? Has anyone else had this problem before? Could this be caused by UpdatePanels or Ajax ModalPopupExtenders in different parts of the application? Why would the live behaviour differ so much from the localhost behaviour? Both the server with the ASP.NET page and the server with the database are servers on our network. I'm using Visual Studio 2008. Thank you in advance for any insight or advice.

  • A GUID as the MySQL table's Primary Key or as a separate column

    - by Ben
    I have a multi-process program that performs, in a 2-hour period, 5-10 million inserts into a 34GB table within a single master/slave MySQL setup (plus an equal number of reads in that period). The table in question has only 5 fields and 3 (single-field) indexes. The primary key is auto-incrementing. I am far from a DBA, but the database appears to be crippled during this two-hour period. So, I have a couple of general questions.

    1) How much bang will I get out of batching these writes into units of 10? Currently, I am writing each insert serially because, after writing, I immediately need to know, in my program, the resulting primary key of each insert. The PK is the only unique field presently, and approximating the order of insertion with something like a datetime field or a multi-column value is not acceptable. If I perform a bulk insert, I won't know these IDs, which is a problem.

    2) So, I've been thinking about turning the auto-increment primary key into a GUID and enforcing uniqueness. I've also been kicking around the idea of creating a new column just for the purposes of the GUID, though I don't really see what that achieves that the PK approach doesn't already offer. As far as I can tell, the big downside to making the PK a randomly generated number is that the index would take a long time to update on each insert (since insertion order would not be sequential). Is that an acceptable approach for a table that is taking this number of writes?

    Thanks, Ben
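
    For illustration, a minimal sketch of the client-side-GUID idea in Python. The driver (mysql-connector-python), table, and column names are assumptions, not part of the question; the point is that each ID is generated before the INSERT, so rows can be batched without needing an auto-increment value back per row:

        import uuid
        import mysql.connector  # assumed driver; any DB-API module works the same way

        conn = mysql.connector.connect(user="app", database="mydb")
        cur = conn.cursor()

        rows = []
        for payload in ("a", "b", "c"):
            pk = str(uuid.uuid4())       # the PK is known *before* the insert
            rows.append((pk, payload))

        # one statement for the whole batch instead of one round-trip per row
        cur.executemany("INSERT INTO mytable (id, data) VALUES (%s, %s)", rows)
        conn.commit()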

  • Foolproof way to check for a nonzero (error) return code in a Windows batch file

    - by Pat
    Intro: There's a lot of advice out there for dealing with return codes in batch files (using the ERRORLEVEL mechanism), e.g.:
    - Get error code from within a batch file
    - ERRORLEVEL inside IF
    Some of the advice is to do "if errorlevel 1 goto somethingbad", while other advice recommends using the %ERRORLEVEL% variable with ==, EQU, LSS, etc. There seem to be issues within IF statements and such, so delayed expansion is encouraged, but it seems to come with quirks of its own.

    Question: What is a foolproof (i.e. robust, so it will work on nearly any system with nearly any return code) way to know whether a bad (nonzero) code has been returned?

    My attempt: For basic usage, the following seems to work OK to catch any nonzero return code:

        if not errorlevel 0 (
            echo error level was nonzero
        )
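
    For reference, a hedged sketch of the two patterns most commonly recommended (somecommand.exe is a placeholder). The classic "if errorlevel 1" form fires for any code of 1 or greater, so it misses negative exit codes; comparing the %ERRORLEVEL% variable directly catches those as well:

        somecommand.exe
        rem classic syntax: true when the exit code is 1 or greater
        if errorlevel 1 echo failed with code %ERRORLEVEL%
        rem variable comparison: also catches negative exit codes
        if %ERRORLEVEL% NEQ 0 echo failed with code %ERRORLEVEL%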

  • advantages of Zend_Db_Table vs raw (My)SQL?

    - by sunwukung
    I'm currently working on a new Zend application and developing the Model. Having worked with Zend_Db_Table before, I opted to replace the Model's references to the Table API with custom SQL to take care of data access duties. Now I'm looking at developing a new application/domain model, and I wanted to get some feedback from people on their experiences with the Zend_Db API vs raw SQL, and on use cases where it would be preferable to use the API. From a project perspective the DB platform is unlikely to change from MySQL, so it doesn't need to be particularly abstract, and I assume a custom SQL layer will be more performant than the assorted classes the Zend_Db API requires.

  • Does malloc() allocate a contiguous block of memory?

    - by user66854
    I have a piece of code written by a very old-school programmer :-) It goes something like this:

        typedef struct ts_request
        {
            ts_request_buffer_header_def header;
            char package[1];
        } ts_request_def;

        ts_request_buffer_def *request_buffer =
            malloc(sizeof(ts_request_def) + (2 * 1024 * 1024));

    The programmer is basically working on a buffer-overflow concept. I know the code looks dodgy. So my questions are:
    - Does malloc always allocate a contiguous block of memory? Because in this code, if the blocks are not contiguous, the code will fail big time.
    - Doing free(request_buffer), will it free all the bytes allocated by malloc, i.e. sizeof(ts_request_def) + (2 * 1024 * 1024), or only the bytes of the size of the structure, sizeof(ts_request_def)?
    - Do you see any evident problems with this approach? I need to discuss this with my boss and would like to point out any loopholes.
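
    For illustration, a minimal compilable sketch of the same "struct hack" (the header typedef here is a stand-in, since the real one isn't shown in the question). It demonstrates the two properties being asked about: a single malloc call yields one contiguous block, and one free call releases all of it:

        #include <stdlib.h>
        #include <string.h>

        /* stand-in for the real header type, which the question doesn't show */
        typedef struct { int id; } ts_request_buffer_header_def;

        typedef struct ts_request {
            ts_request_buffer_header_def header;
            char package[1];
        } ts_request_def;

        int main(void) {
            size_t payload = 2 * 1024 * 1024;
            /* one allocation: header and payload live in one contiguous block */
            ts_request_def *req = malloc(sizeof(ts_request_def) + payload);
            if (req == NULL)
                return 1;
            memset(req->package, 0, payload); /* valid only because the block is contiguous */
            free(req); /* releases the entire allocation, header plus the extra 2MB */
            return 0;
        }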

  • MySQL Locking Up

    - by Ian
    I've got an InnoDB table that gets a lot of reads and almost no writes (roughly 1 write for every 400,000 reads). I'm running into a pretty big problem, though, when I do an INSERT into the table: MySQL completely locks up. It uses 100% CPU, and every single other table (even in other databases) has its status set to "Locked" until the INSERT is done. This is a big problem because MySQL stays locked up for up to 4 minutes. I'm using version 5.1.47 (RPM from mysql.com). Any ideas?

  • Which memory related Tomcat JVM startup parameters are worth tuning?

    - by knorv
    I'm trying to understand the fine art of tuning Tomcat memory settings. In this quest I have the following three questions:
    1. Which memory-related JVM startup parameters are worth setting when running Tomcat, and why?
    2. What are useful rules of thumb when fine-tuning the memory settings for a Tomcat installation?
    3. How do you monitor the memory consumption of your live Tomcat installation?
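
    For orientation, a sketch of the flags that usually come up first in this kind of tuning. The values below are placeholders, not recommendations; they would typically be set via CATALINA_OPTS, e.g. in bin/setenv.sh:

        # -Xms/-Xmx: initial and maximum heap; MaxPermSize: PermGen cap (pre-Java-8 JVMs)
        # illustrative values only; the right numbers depend on the app and the host
        CATALINA_OPTS="-Xms512m -Xmx1024m -XX:MaxPermSize=256m -XX:+HeapDumpOnOutOfMemoryError"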

  • How do polymorphic inline caches work with mutable types?

    - by kingkilr
    A polymorphic inline cache works by caching the actual method by the type of the object, in order to avoid expensive lookup procedures (usually a hashtable lookup). How does one handle the type comparison if the type objects are mutable (i.e. a method might be monkey-patched into something different at run time)? The one idea I've come up with is a "class counter" that gets incremented each time a method is adjusted; however, this seems like it would be exceptionally expensive in a heavily monkey-patched environment, since it would kill all the PICs for that class even if their methods weren't altered. I'm sure there must be a good solution to this, as the issue applies directly to JavaScript, and AFAIK all 3 of the big JS VMs have PICs (wow, acronym ahoy).
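
    A toy sketch of the version-counter idea in Python (illustrative only; real VMs implement this with hidden classes and machine-level guards). Each class carries a version that is bumped on monkey-patching, and a call site's cached method stays valid only while the recorded version matches:

        class InlineCache:
            """One call site's cache: remembers (class, version) -> method."""
            def __init__(self):
                self.klass = self.version = self.method = None

            def lookup(self, obj, name):
                cls = type(obj)
                if cls is self.klass and cls._version == self.version:
                    return self.method            # fast path: the guard passed
                method = getattr(cls, name)       # slow path: full lookup
                self.klass, self.version, self.method = cls, cls._version, method
                return method

        class Shape:
            _version = 0                          # bumped on every patch

            def area(self):
                return 42

        def monkey_patch(cls, name, fn):
            setattr(cls, name, fn)
            cls._version += 1                     # invalidates every cache built on the old version

        cache = InlineCache()
        s = Shape()
        print(cache.lookup(s, "area")(s))         # slow path, then cached: 42
        monkey_patch(Shape, "area", lambda self: 7)
        print(cache.lookup(s, "area")(s))         # guard fails, re-looked-up: 7

    Keeping a counter per method name instead of per class would avoid invalidating unrelated call sites, at the cost of more bookkeeping, which is exactly the trade-off the question is circling.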

  • preg_replace on xss code

    - by proyb2
    Can this code help sanitize malicious code in a user-submitted form?

        function rex($string) {
            $patterns = array();
            $patterns[0] = '/=/i';
            $patterns[1] = '/javascript:/i';
            $replacements = array();
            $replacements[0] = '';
            $replacements[1] = '';
            return preg_replace($patterns, $replacements, $string);
        }

    I have included htmlentities() to prevent XSS on the client side. Is all the code shown safe enough to prevent an attack?

  • how to design this relation in a DB schema

    - by raticulin
    I have a table Car in my DB; one of the columns is purchaseDate. I want to be able to tag every car with a number of Policies (limited to 10 policies). Each policy has a time to live (TTL, a duration like '5 years', '10 months', etc.), that is, for how long since the car's purchaseDate the policy can be applied. I need to perform the following actions:
    - when inserting a Car, it will be set with a number of Policies (at least one is set)
    - sometimes a Car will be updated to add/remove a Policy
    - searches must be done taking dates/policies into account, for example: 'select all cars that are not covered by any policy as of today'

    My current design is (pol0..pol9 are the policies):

        CREATE TABLE Car (
            id int NOT NULL IDENTITY(1,1),
            purchaseDate datetime NOT NULL,
            -- more stuff...
            pol0 smallint DEFAULT NULL,
            pol1 smallint DEFAULT NULL,
            pol2 smallint DEFAULT NULL,
            pol3 smallint DEFAULT NULL,
            pol4 smallint DEFAULT NULL,
            pol5 smallint DEFAULT NULL,
            pol6 smallint DEFAULT NULL,
            pol7 smallint DEFAULT NULL,
            pol8 smallint DEFAULT NULL,
            pol9 smallint DEFAULT NULL,
            PRIMARY KEY (id)
        )

        CREATE TABLE Policy (
            id smallint NOT NULL,
            name varchar(50) COLLATE Latin1_General_BIN NOT NULL,
            ttl varchar(100) COLLATE Latin1_General_BIN NOT NULL,
            PRIMARY KEY (id)
        )

    The problem I am facing is that the SQL to perform the query above is a nightmare to write: as I don't know which column each policy may be in, I have to check all columns for every policy, etc. So I am wondering whether it is worth changing this. My questions are:
    - The smallint as Policy id was chosen instead of an 'int IDENTITY' in order to save some space, as there are going to be millions of Car records. It just adds complexity when creating a Policy, as we must handle the id ourselves. Was it worth doing this?
    - I am thinking that maybe there is a much better design? Obviously we could move the policy/car relation to its own table, CarPolicy (see the sketch below). The benefits would be: no limit of 10 policies per car; adding/removing policies is much easier; and when only the default policy applies (when no others are applied, one called "Default" is applied), we could signal that by having no entry in CarPolicy at all (right now this is done by inserting the Default policy id into one of the columns). The cons are that we would need to change the DB, the ORM classes, etc.

    What would you recommend? Maybe there is another smart way to implement this, one we are not aware of, that doesn't need the CarPolicy table?
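
    For reference, a minimal sketch of the CarPolicy variant mentioned above. Note one assumption: the TTL is stored as an integer month count (ttlMonths) instead of free text like '5 years', purely so the date arithmetic is expressible in SQL:

        CREATE TABLE CarPolicy (
            carId int NOT NULL REFERENCES Car(id),
            policyId smallint NOT NULL REFERENCES Policy(id),
            PRIMARY KEY (carId, policyId)
        )

        -- 'all cars that are not covered by any policy as of today'
        SELECT c.id
        FROM Car c
        WHERE NOT EXISTS (
            SELECT 1
            FROM CarPolicy cp
            JOIN Policy p ON p.id = cp.policyId
            WHERE cp.carId = c.id
              AND DATEADD(month, p.ttlMonths, c.purchaseDate) >= GETDATE()
        )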

  • Using VirtualMode on a DataGridView when the number of rows/columns isn't known

    - by Nathan Baulch
    I need to display an unknown-length sequence of dictionaries, with unknown keys, efficiently in a data grid. This sequence is the result of a potentially slow LINQ query that could contain any number of results. At first I thought that VirtualMode on DataGridView was what I was looking for, but it appears that the number of rows and columns must be known upfront. I tried adding a single row and column and then adding more as needed from the CellValueNeeded event, but this doesn't work. Is this even possible with VirtualMode? Or do I need to estimate how many rows are visible on the screen and manually build up the rows/columns? And if so, how do I ensure that a vertical scrollbar is present, and react appropriately when a user uses it?

  • Cocoa - does CGDataProviderCopyData() actually copy the bytes? Or just the pointer?

    - by jtrim
    I'm running that method in quick succession as fast as I can, and the faster the better, so obviously if CGDataProviderCopyData() is actually copying the data byte-for-byte, then I think there must be a faster way to directly access that data...it's just bytes in memory. Anyone know for sure if CGDataProviderCopyData() actually copies the data? Or does it just create a new pointer to the existing data?

  • tool to auto-format R code

    - by Keith
    Is there any tool (editor, script, whatever...) available that can automatically reformat R code? It does not need to be customizable but it must be able to recognize statements separated by either semicolons or newlines since this code has both. If it can put all statements on a separate line, consistently indent code blocks and consistently place braces I will be very happy.

  • MSSQL Server high CPU and I/O activity database tuning

    - by zapping
    Our application has been running very slowly recently. Debugging and tracing show that the process has high CPU usage and SQL Server shows high I/O activity. Can you please give some guidance on how this can be optimised? The application is now about a year old and the database file sizes are not very big or anything. The database is set to auto-shrink. It's running on Windows 2003 and SQL Server 2005, and the application is a web application coded in C# (i.e. VS2005).

  • Hashtable is that fast

    - by Costa
    Hi, s[0]*31^(n-1) + s[1]*31^(n-2) + ... + s[n-1] is the hash function of a Java string, and I assume the rest of the languages have implementations similar or close to this. Suppose we have a hashtable and a list of 50 elements, where each element is a 7-character string: ABCDEF1, ABCDEF2, ABCDEF3, ..., ABCDEFn.

    Say each bucket of the hashtable contains 5 strings (I think this function would actually give one string per bucket, but let us assume it is 5). If we call col.Contains("ABCDEFn"), each string comparison will do 6 character comparisons and discover the difference on the 7th. The hashtable will take around 70 operations (multiplications and additions) to compute the hash code and compare with the 5 strings in the bucket, and BANG, it's found. For the list it will take around 300 comparisons to find it.

    For the case where there are only 10 elements, the list will take around 70 operations but the hashtable will take around 50 operations, and note that hashtable operations are more time-consuming (they are multiplications). I conclude that HybridDictionary in .NET is probably the best choice for most cases that require a hashtable of unknown size, because it will let me use a list until the list grows beyond 10 elements. I still need something like a HashSet rather than a Dictionary of keys and values, though, and I wonder why there is no HybridSet!

    So what do you think? Thanks
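
    The trade-off is easy to measure directly; as a rough illustration (in Python, so the constants differ from Java/.NET, but the shape of the comparison is the same), membership in a 50-element list is scanned element by element while a set does one hash plus a bucket probe:

        import timeit

        keys = [f"ABCDEF{i}" for i in range(50)]
        as_list, as_set = list(keys), set(keys)
        probe = "ABCDEF49"   # worst case for the list: found last

        # list: walks and string-compares element by element
        print(timeit.timeit(lambda: probe in as_list, number=100_000))
        # set: one hash computation, then a bucket comparison
        print(timeit.timeit(lambda: probe in as_set, number=100_000))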

  • Python: how to run several scripts (or functions) at the same time under Windows 7 64-bit on a multicore processor

    - by Gianni
    Sorry for this question, because there are several examples on Stack Overflow; I'm writing to clarify some of my doubts because I am quite new to the Python language. I wrote a function:

        def clipmyfile(inFile, poly, outFile):
            ...  # doing something with inFile and poly, and writing outFile

    Normally I do this:

        clipmyfile(inFile="File1.txt", poly="poly1.shp", outFile="res1.txt")
        clipmyfile(inFile="File2.txt", poly="poly2.shp", outFile="res2.txt")
        clipmyfile(inFile="File3.txt", poly="poly3.shp", outFile="res3.txt")
        ......
        clipmyfile(inFile="File21.txt", poly="poly21.shp", outFile="res21.txt")

    I read in the example "Run several python programs at the same time" that I can use (but probably I am wrong):

        from multiprocessing import Pool
        p = Pool(21)  # like in that example, running 21 separate processes

    to run the function several times at once and speed up my analysis. I am really honest to say that I didn't understand the next step. Thanks in advance for help and suggestions. Gianni
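
    The missing step, sketched under the question's own file naming (untested, modern Python 3; on Windows the Pool must be created under the __main__ guard, since worker processes re-import the module):

        from multiprocessing import Pool

        def clipmyfile(inFile, poly, outFile):
            ...  # placeholder for the real clipping work

        if __name__ == "__main__":
            # build the 21 argument triples: (File1.txt, poly1.shp, res1.txt), ...
            jobs = [(f"File{i}.txt", f"poly{i}.shp", f"res{i}.txt") for i in range(1, 22)]
            with Pool() as p:        # defaults to one worker per CPU core
                p.starmap(clipmyfile, jobs)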

  • Why does SQLite take such a long time to fetch the data?

    - by Derk
    I have two possible queries, both giving the result set I want. Query one takes about 30ms, but 150ms to fetch the data from the database:

        SELECT id
        FROM featurevalues AS featval3
        WHERE featval3.feature IN (?,?,?,?)
        AND EXISTS (
            SELECT 1
            FROM product_to_value, product_to_value AS prod2, features, featurevalues
            WHERE product_to_value.feature = features.int
            AND product_to_value.value = featurevalues.id
            AND features.id = ?
            AND featurevalues.id IN (?,?)
            AND product_to_value.product = prod2.product
            AND prod2.value = featval3.id
        )

    Query two takes about 3ms (this is the one I therefore prefer), but also takes 170ms to fetch the data:

        SELECT (
            SELECT prod2.value
            FROM product_to_value, product_to_value AS prod2, features, featurevalues
            WHERE product_to_value.feature = features.int
            AND product_to_value.value = featurevalues.id
            AND features.id = ?
            AND featurevalues.id IN (?,?)
            AND product_to_value.product = prod2.product
            AND prod2.value = featval3.id
        ) AS id
        FROM featurevalues AS featval3
        WHERE featval3.feature IN (?,?,?,?)

    The 170ms seems to be related to the number of rows from table featval3: after an index is used on featval3.feature IN (?,?,?,?), 151 items "remain" in featval3. Is there something obvious I am missing regarding the slow fetching? As far as I know everything is properly indexed. I am confused because the second query only takes a blazing 3ms to run.
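
    As a hedged aside, SQLite can report the access plan for either statement; prefixing a query with EXPLAIN QUERY PLAN prints one row per table access and shows whether an index or a full scan is used, which is a quick way to verify the "properly indexed" assumption:

        -- simplified to the outer query for illustration
        EXPLAIN QUERY PLAN
        SELECT id FROM featurevalues AS featval3
        WHERE featval3.feature IN (?,?,?,?);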

  • Is there a lightweight datagrid alternative in Flex?

    - by Wayne
    What is the most performant way of displaying a table of data in Flex? Are there alternatives to the native Flex DataGrid component: alternatives noted for their rendering speed? Are there other ways to display a table? I have a datagrid with roughly 70 rows and 7 columns of simple text data, currently created and loaded in memory. It is being refreshed rapidly (about every 800 msec), and there is a slight lag in other animations when it is rendering the table, so I am trying to cut down this render time.

  • C++ & proper TDD

    - by Kotti
    Hi! I recently tried developing a small-sized project in C#, and during the whole project our team used the Test-Driven Development (TDD) technique (xUnit, Moq). I really think this was awesome, because (when paired with C#) this approach let us relax when coding, relax when designing, and relax when refactoring. I suspect that all this TDD stuff actually simplifies the coding process and, well, it allowed me (eventually) to get the same result with fewer brain cells working.

    Right after that I tried using TDD paired with C++ (I used the Google Test and Google Mock libraries), and, I don't know why, but I actually think that TDD there was a step back in terms of rapid application development. I had moments when I had to spend huge amounts of time thinking about my tests, building proper mocks, rebuilding them, and swearing at my monitor. And, well, I obviously can't ask something like "what did I do wrong?" or "what was wrong in my approach?", because I don't know what to describe.

    But if there are any people who are used to TDD in C++ (and probably C# too), could you please advise me how to do this properly? Framework recommendations, architecture approaches, plain coding advice: if you are experienced in TDD & C++, please respond.

  • What types of websites does memcached speed up?

    - by Saif Bechan
    I have read this article about a 400% speed boost for your website, achieved with a combination of nginx and memcached. The how-to part of that page is quite good, but it misses the part that says what types of websites this applies to. I know nginx is an HTTP engine; I need no explanation for that. I thought memcached had something to do with caching database results. However, I don't understand what this has to do with the HTTP request: can someone please explain that to me?

    Another question I have is what types of websites this is used for. I have a website where the important part consists of data that changes often, often meaning within minutes. Will this method still apply to me, or should I just stick with the basic, boring setup of Apache and nothing else?
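
    To make the HTTP connection concrete, here is a toy, application-level illustration (assuming the pymemcache client library; render_page is a hypothetical stand-in for templates plus database queries). The point is that the finished response body, not the raw database rows, is what gets cached:

        from pymemcache.client.base import Client  # assumed client library

        cache = Client(("localhost", 11211))

        def handle_request(path):
            page = cache.get(path)                # key the cache on the request path
            if page is None:                      # miss: do the expensive work once
                page = render_page(path)          # hypothetical: templates + DB queries
                cache.set(path, page, expire=60)  # a short TTL suits fast-changing data
            return page

    In the kind of setup the article describes, nginx itself plays the handle_request role, answering straight from memcached without touching the application at all.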
