Search Results

Search found 31410 results on 1257 pages for 'disk based'.


  • iPhone App takes up too much memory

    - by Stephen Furlani
    OK, so here's my problem. My iPhone app is 1.2 MB on disk; granted, I have a bunch of images for the GUI buttons, backgrounds, etc. In memory, though, the app takes up a whopping 15 MB! That means that if I then take a picture with the camera (8 MB by default), I get several memory warnings even before the picker calls its delegate. How can I tell what is grabbing so much memory, and how can I remove it? I've stripped my debugging symbols and compiled with -Os, but the app still takes up a huge amount of memory. Also, how (if at all) can I change the default resolution of the camera?
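
    On the camera question, a minimal sketch of one common workaround, assuming the standard UIImagePickerController delegate (as far as I know there is no public setting for the capture resolution on that API, but you can downscale the image the moment you receive it, so the full-resolution copy is released immediately):

        // Hypothetical sketch: shrink the picked image right away so the
        // 8 MB full-resolution UIImage can be released.
        - (void)imagePickerController:(UIImagePickerController *)picker
                didFinishPickingMediaWithInfo:(NSDictionary *)info {
            UIImage *full = [info objectForKey:UIImagePickerControllerOriginalImage];
            CGSize target = CGSizeMake(640, 480);   // assumed thumbnail size
            UIGraphicsBeginImageContext(target);
            [full drawInRect:CGRectMake(0, 0, target.width, target.height)];
            UIImage *small = UIGraphicsGetImageFromCurrentImageContext();
            UIGraphicsEndImageContext();
            [picker dismissModalViewControllerAnimated:YES];
            // keep `small`; let `full` go out of scope
        }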

    Read the article

  • How to handle images folder with many images

    - by Billy
    I'm developing a new ASP.NET website with 200k images in an /Images/ folder. Many operations in Visual Studio are slow because it scans that folder; adding a web service takes 10 minutes. The images are not checked into source control (SVN). How should I structure the code tree to improve performance in VS? It would also be neat if developers didn't all need to copy 200k images to their local disks to be able to develop on the site. Images as DB blobs are not an option.
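
    One low-tech sketch of a workaround, assuming Windows Vista or later and a hypothetical shared location for the images: exclude the Images folder from the VS project and point a symbolic link at a shared copy, so nothing needs to be copied to each developer's disk:

        rem Hypothetical sketch (run as administrator): the site sees an
        rem Images folder, but the files live once, on a shared server.
        mklink /D C:\projects\mysite\Images \\fileserver\mysite-images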

    Read the article

  • A 110 KB .NET 4.0 app needs 10 seconds for a cold start; that's not acceptable!

    - by msfanboy
    Hello, I am using the .NET 4.0 Client Profile for my app, and I run a dual core with 4 GB RAM and a fast hard disk. Nothing big is done at startup, just showing a generic List in a WPF ListView. How can I make my assembly's cold start faster? I have just done another cold start, running the WindowsApplication.exe in my \obj\x86\Debug folder: my hard disk ran like hell and it took 10.5 seconds. What is wrong? The warm start after the cold one took 1 second. Java 6 apps do not have that problem, not at all, just to compare...
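
    One thing worth trying, as a sketch rather than a guaranteed fix: pre-compile the assembly with NGen so the cold start skips JIT compilation (this won't remove the disk I/O of loading the framework assemblies themselves, which usually dominates a cold start):

        rem Hypothetical sketch: pre-JIT the app's native image; run from an
        rem elevated Visual Studio command prompt in the exe's folder.
        ngen install WindowsApplication.exe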

    Read the article

  • SharpZipLib - can you add a file without it copying the entire zip first?

    - by schmoopy
    I'm trying to add an existing file to a .zip file using SharpZipLib. The problem is that the zip file is 1 GB in size: when I try to add one small file (400 KB), SharpZipLib creates a copy/temp of the original zip before adding the new file. This poses a problem when the amount of free disk space is less than 2x the size of the zip file you are trying to update. For example:

        1GB zip: myfile.zip
        1GB temp: myfile.zip.tmp.293

        ZipFile zf = new ZipFile(path);
        zf.BeginUpdate();
        zf.Add(file); // adding a 400 KB file here creates a 1 GB temp file
        zf.CommitUpdate();
        zf.Close();

    Is there a more efficient way to do this? Thanks :-)
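
    A minimal sketch of one possible fix, assuming your SharpZipLib version exposes the IArchiveStorage overload of BeginUpdate: ask for a direct in-place update instead of the default safe (temp-copy) update. Direct mode trades safety for space, since a crash mid-update can corrupt the archive:

        // Hypothetical sketch: update the 1 GB archive in place, so no
        // temporary copy of the whole file is written to disk.
        using ICSharpCode.SharpZipLib.Zip;

        ZipFile zf = new ZipFile(path);
        zf.BeginUpdate(new DiskArchiveStorage(zf, FileUpdateMode.Direct));
        zf.Add(file);
        zf.CommitUpdate();
        zf.Close();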

    Read the article

  • Oracle Triggers Query

    - by AGeek
    Let's consider a table STUD with a row-level trigger implemented on INSERT. My scenario goes like this: whenever a row is inserted, the trigger fires, and it should run some script file that is stored on the hard disk and ultimately print the result. So, is this possible? And if yes, it should behave dynamically, i.e. if we change the content of the script file, Oracle should reflect those changes as well. I have tried doing this with Java external procedures, but I wasn't satisfied with the result. Kindly give your point of view on this kind of scenario and the ways it can be implemented.
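
    For discussion's sake, a minimal sketch of the usual shape of this, assuming run_script is a hypothetical PL/SQL wrapper around a Java stored procedure that executes the file (Oracle provides no such wrapper out of the box). Because the script would be re-read from disk on every call, edits to it would take effect immediately:

        -- Hypothetical sketch: the row-level trigger delegates the work to
        -- an assumed wrapper around a Java stored procedure.
        CREATE OR REPLACE TRIGGER stud_after_insert
        AFTER INSERT ON stud
        FOR EACH ROW
        BEGIN
          run_script('/u01/scripts/on_insert.sh');  -- assumed path and wrapper
        END;
        /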

    Read the article

  • Django: How do I go about changing my simple app to use Ajax?

    - by swisstony
    I currently have a web page where the user enters some data and then clicks a submit button. I process the data in views.py and then use the same Django template to return and display the original data and the results. What I would like to do is give it a bit more of a modern look and feel; you know the sort of thing, where the page doesn't refresh but displays a spinner until the results appear. I assume this means using Ajax? How difficult is it to modify a simple app like this to use Ajax? What is involved, and what are the best tools to use? jQuery?
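
    A minimal sketch of the client side, assuming jQuery plus hypothetical element IDs (#myform, #spinner, #results) and a URLconf entry at /process/ that returns the rendered results fragment; the Django view itself barely changes:

        // Hypothetical sketch: intercept the submit, POST via Ajax, and show
        // a spinner until the server returns the rendered HTML fragment.
        $('#myform').submit(function () {
            $('#spinner').show();
            $.post('/process/', $(this).serialize(), function (html) {
                $('#spinner').hide();
                $('#results').html(html);
            });
            return false;   // suppress the normal full-page submit
        });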

    Read the article

  • What exactly is the nature of .NET Framework 3.5 Service Pack 1?

    - by Richard77
    Hello, I ran some recovery-disk operations on my laptop because it had become unstable. Now I'm about to re-install SQL Server 2008 Professional, but the installer keeps telling me that I need to install .NET Framework 3.5 with Service Pack 1. What's strange is that I was asked to do the same when I installed Visual Studio 2008 Professional earlier. I'd like to know: what is .NET Framework 3.5 Service Pack 1? Is it one piece of software sitting on top of Windows, or several pieces of software with the same name, so that Visual Studio has its own, SQL Server has its own, and so on? And why the name "Service Pack 1" after ".NET Framework"? I'm really lost. Thanks for helping.

    Read the article

  • Large Video Uploads via a website

    - by Andrew
    Some of the problems that can happen are timeouts, disconnections, and not being able to resume a file, so the upload has to start over from the beginning. Assuming these files are up to around 5 GB in size, what is the best solution for dealing with this problem? I'm using a Drupal 6 install for the website. My constraints, due to the server setup I have to deal with:

    - Shared hosting with a max of 200 connections at a time (unlimited disk space)
    - Unable to create users through an API (so I can't automatically generate FTP accounts)
    - I do have the ability to run cron-type scripts via a Drupal module

    My initial thought was to create FTP users based on Drupal accounts and require uploaders to download an FTP client for their OS of choice, but the lack of an API to auto-create FTP accounts, and the inability to do it from the command line, kind of hinder that solution. If there's a workaround someone can think of, let me know! Thanks.

    Read the article

  • Return an object after parsing XML with SAX

    - by sentimental_turtle
    I have some large XML files to parse and have created an object class to contain my relevant data. Unfortunately, I am unsure how to return the object for later processing. Right now I pickle my data and, moments later, unpickle the object to access it. This seems wasteful, and there surely must be a way of grabbing my data without hitting the disk.

        def endElement(self, name):
            if name == "info":
                # done collecting this iteration
                self.data.setX(self.x)
                self.data.setY(self.y)
            elif name == "lastTagOfInterest":
                # done with file -- want to return my object from here
                filehandler = open(self.outputname + ".pi", "w")
                pickle.dump(self.data, filehandler)
                filehandler.close()

    I have tried putting a return statement in my endElement tag, but that does not seem to get passed up the chain to where I call the SAX parser. Thanks for any tips.
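
    For reference, a minimal sketch of the usual pattern, assuming the handler class above is called MyHandler: SAX callbacks cannot return values to the caller, but the handler object survives the parse, so the caller can simply read the finished object off it afterwards:

        # Hypothetical sketch: keep the parsed object on the handler and read
        # it back after xml.sax.parse() returns -- no pickling, no disk.
        import xml.sax

        handler = MyHandler()              # the ContentHandler subclass above
        xml.sax.parse("large_file.xml", handler)
        result = handler.data              # the object built during parsing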

    Read the article

  • Is it really wrong to version documents using CouchDB's behaviour?

    - by Tomas Sedovic
    This is one of those "I know I shouldn't do this, but it's oh so convenient" questions. Sorry about that. I plan to use CouchDB for storing a bunch of documents and keeping their entire revision history. CouchDB does the versioning automatically, but relying on it is strongly discouraged: "You cannot rely on document revisions for any other purpose than concurrency control." From what I've found on the CouchDB wiki, old versions can get deleted either during compaction or during replication. As far as I can tell, compaction must always be triggered manually, and replication occurs only when there's more than one database server. The question is: if I never run compaction and use only a single database instance for my documents, can I just use CouchDB's document versioning and expect it to work? What other problems might I run into? E.g., does not running compaction hurt performance or consume significantly more disk space than if I handled the versioning manually?

    Read the article

  • MEMORY (HEAP) vs. InnoDB in a Read and Write Environment

    - by Johannes
    I want to program a real-time application using MySQL. It needs a small table (fewer than 10,000 rows) that will be under heavy read (scan) and write (update and some insert/delete) load. I am really talking about 10,000 updates or selects per second, executed over only a few (fewer than 10) open MySQL connections. The table is small and does not contain any data that needs to be stored on disk. So I ask: which is faster, InnoDB or MEMORY (HEAP)? My thoughts are: both engines will probably serve SELECTs directly from memory, as even InnoDB will cache the whole table. What about the UPDATEs (innodb_flush_log_at_trx_commit?)? My main concern is the locking behavior: InnoDB row locks vs. MEMORY table locks. Will that be the bottleneck in the MEMORY implementation? Thanks for your thoughts!
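
    A minimal benchmarking sketch, with assumed table and column names, since the honest answer is usually "measure it under your own workload": create the same table under both engines and replay the identical update/select mix against each:

        -- Hypothetical sketch: identical tables, one per engine, for timing
        -- the same 10k-ops/sec workload against each.
        CREATE TABLE hot_innodb (id INT PRIMARY KEY, val INT) ENGINE=InnoDB;
        CREATE TABLE hot_memory (id INT PRIMARY KEY, val INT) ENGINE=MEMORY;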

    Read the article

  • How can I persist a large Perl object for re-use between runs?

    - by Alnitak
    I've got a large XML file which takes over 40 seconds to parse with XML::Simple. I'd like to be able to cache the resulting parsed object so that on the next run I can just retrieve the parsed object and not reparse the whole file. I've looked at using Data::Dumper, but the documentation is a bit lacking on how to store and retrieve its output from disk files. Other classes I've looked at (e.g. Cache::Cache) appear designed for storage of many small objects, not a single large one. Can anyone recommend a module designed for this? EDIT: The XML file is ftp://ftp.rfc-editor.org/in-notes/rfc-index.xml. On my Mac Pro, benchmark figures for reading the entire file with XML::Simple (test1) vs. Storable (test2) are:

                s/iter  test1   test2
        test1     47.8     --   -100%
        test2    0.148 32185%      --
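
    Since the benchmark above already uses Storable, here is a minimal sketch of the caching pattern, with assumed file names: store the parsed tree once, then retrieve it on later runs unless the XML has changed:

        # Hypothetical sketch: reparse only when the XML is newer than the
        # cache; -M gives file age in days, so smaller means more recent.
        use strict;
        use warnings;
        use Storable qw(store retrieve);
        use XML::Simple qw(XMLin);

        my $xml   = 'rfc-index.xml';
        my $cache = 'rfc-index.stor';

        my $data;
        if (-e $cache && -M $cache < -M $xml) {
            $data = retrieve($cache);       # fast path: read the cached tree
        } else {
            $data = XMLin($xml);            # slow path: the 40-second parse
            store($data, $cache);
        }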

    Read the article

  • Is shortening property names worth it?

    - by raam86
    In the How To Node article "Blog rolling with node.js and mongoDB", the author mentions that it can be a good idea to shorten property names: "...an oft-reported issue with mongoDB is the size of the data on the disk... each and every record stores all the field names... This means that it can often be more space-efficient to have properties such as 't' or 'b' rather than 'title' or 'body', however for fear of confusion I would avoid this unless truly required!" I am aware of solutions for how to do it; I am more interested in when it is truly required.

    Read the article

  • Streaming data to the browser as a file of unknown size

    - by Sir Psycho
    I have some data which is queried from the database, and I'd like to send it to the client as a CSV file. The file size varies each time because the DB data returned can be of any size. Instead of saving this file to the hard disk, I'd like to send it to the browser at the same time it's being processed into CSV by my algorithm. Response.Write seems useless: for some reason, the file-download dialog is only displayed once my processing is finished. This seems odd, as I'm writing all my output to the Response.Output stream. I have downloaded files on the web before where the file size is not known and the browser just keeps on downloading. Is there any way to achieve this? The following Stack Overflow thread did not offer any good advice: http://stackoverflow.com/questions/873995/asp-net-downloading-large-files-of-unknown-size Thanks
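
    A minimal sketch of the usual fix, assuming Web Forms, a `table` already filled from the DB query, and a hypothetical ToCsvLine helper: turn response buffering off and flush as you go, so the browser sees the download begin while rows are still being generated:

        // Hypothetical sketch: stream the CSV with buffering disabled; with
        // no Content-Length the browser just keeps downloading.
        Response.BufferOutput = false;
        Response.ContentType = "text/csv";
        Response.AddHeader("Content-Disposition", "attachment; filename=export.csv");
        foreach (DataRow row in table.Rows)
        {
            Response.Output.WriteLine(ToCsvLine(row));  // hypothetical formatter
            Response.Flush();                           // push this chunk now
        }
        Response.End();

    Flushing every row is wasteful in practice; flushing every few hundred rows gives the same effect with far fewer round trips.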

    Read the article

  • Organizing PHP includes in your development environment

    - by Andrew Heath
    I'm auditing my site design based on the excellent Essential PHP Security by Chris Shiflett. One of the recommendations I'd like to adopt is moving all possible files out of the webroot, and that includes the includes. Doing so on my shared host is simple enough, but I'm wondering how people handle this on their development testbeds. Currently I've got a XAMPP installation configured so that localhost/mysite/ matches up with D:\mysite\, in which includes are stored at D:\mysite\includes\. To keep include paths accurate, I guess I need to replicate the server's layout on my local disk, something like D:\mysite\public_html\. Is there a better way?
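
    One common alternative, sketched with assumed paths: anchor every include on a single constant resolved relative to the executing file, so the webroot and the includes directory can sit anywhere on either machine without touching the include statements:

        <?php
        // Hypothetical sketch: public_html/index.php finds the includes
        // directory one level above itself, on both dev and production.
        define('APP_INCLUDES', dirname(dirname(__FILE__)) . '/includes');
        require APP_INCLUDES . '/header.php';   // assumed include file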

    Read the article

  • Color Themes for Eclipse?

    - by John Stauffer
    I am a recovering Emacs user who is trying to ease into Eclipse usage. (Since I'm encouraging the rest of the team to use it, I guess I should at least try to get along.) My current excuse is that it hurts my eyes: I'm using the excellent Zenburn theme in Emacs and would love to find it for Eclipse. However, I find that changing my color theme every few months makes for a great way to procrastinate, so ideally I'd like to find a repository of Eclipse color themes. There don't appear to be any Eclipse themes indexed by Google, so all the great themes must be sitting on your hard disks somewhere. Please share them. Thanks.

    Read the article

  • Generate a PDF thumbnail (open source/free)

    - by AndrewB
    Looking at other posts on this, I could not find an adequate solution for my needs. I'm trying to just get the first page of a PDF document as a thumbnail. This will run as a server application, so I don't want to write a PDF document out to a file and then call a third application that reads the PDF to generate the image on disk. In pseudocode, what I want is:

        doc = new PDFdocument("some.pdf");
        page = doc.page(1);
        Image image = page.image;

    Thanks.
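
    A minimal sketch of one open-source option, assuming Apache PDFBox 2.x (the rendering API differs between PDFBox versions), which renders the page to an in-memory image with no temp files or external process:

        // Hypothetical sketch: page index 0 is the first page; 36 DPI yields
        // a thumbnail at half the nominal 72-DPI page size.
        import java.awt.image.BufferedImage;
        import java.io.File;
        import org.apache.pdfbox.pdmodel.PDDocument;
        import org.apache.pdfbox.rendering.PDFRenderer;

        public class PdfThumb {
            public static BufferedImage firstPage(File pdf) throws Exception {
                PDDocument doc = PDDocument.load(pdf);
                try {
                    return new PDFRenderer(doc).renderImageWithDPI(0, 36f);
                } finally {
                    doc.close();
                }
            }
        }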

    Read the article

  • Messages stuck permanently in session

    - by Tim Whitlock
    I am getting Drupal messages stuck permanently in the session, so that after being displayed they are not cleared. The unsetting code in drupal_get_messages() in bootstrap.inc is firing; it's as if the session is sleeping (i.e. serializing to disk) before the messages array is cleared. Have you witnessed such a thing? UPDATE: The call that commits the session starts from drupal_page_footer() at the bottom of index.php, and for some reason it is executing twice per request: once with the emptied messages and then again with the messages back in the array.

    Read the article

  • Guide on crawling the entire web?

    - by bohohasdhfasdf
    I just had this thought and was wondering if it's possible to crawl the entire web (just like the big boys!) on a single dedicated server (say a Core2Duo, 8 GB RAM, 750 GB disk, 100 Mbps). I've come across a paper where this was done, but I cannot recall its title; it was about crawling the entire web on a single dedicated server using some statistical model. Anyway, imagine starting with just around 10,000 seed URLs and doing an exhaustive crawl. Is it possible? I need to crawl the web but am limited to a dedicated server. How can I do this? Is there an open-source solution out there already? For example, see this real-time search engine: http://crawlrapidshare.com. The results are extremely good and freshly updated. How are they doing this?

    Read the article

  • Using nsIZipWriter or other to compress a string as a string?

    - by Daniel
    I need to be able to take a JavaScript string, compress it using any fast and available means, and get back a binary string/blob. Background: the extension I'm developing needs to send various large pieces of content to my server. It does this conveniently by dynamically creating a form, adding fields to it, and posting it. Some of these fields are just too big, bandwidth-wise, for repeated use. I'd like to compress them before adding them, and then maybe base64 them if the raw bytes cause a problem in the message. Any ideas? I could use nsIZipWriter with temporary files on disk, but that is quite ugly and probably sluggish.

    Read the article

  • How can I measure file access performance (and volume) of a (Java) application

    - by stmoebius
    Given an application, how can I measure the amount of data read and written by that application, and the time it spends reading from and writing to disk? The specific application is Java-based (JBoss) and multi-threaded, and runs as a service on Windows 7/2008 x64. My overall goal is to determine whether, and why, file access is a bottleneck in my application, so running the application in a defined and repeatable scenario is a given. File access may be local as well as on network shares. Windows Performance Monitor appears to be too hard to use (unless someone can point me to a helpful explanation). Any ideas?

    Read the article

  • Visual C++ 9 Linker file size limitation.

    - by Raindog
    It appears that the Visual C++ 9 linker has a file-allocation algorithm that doubles the size of the file on every allocation, so you get 512 MB, 1024 MB, 2048 MB, 4096 MB. The problem is that it uses a library that cannot handle files larger than 2048 MB, and as such it crashes with an error like "cannot read file ...: is the disk full or write protected". Is there a way to bypass this limitation, or otherwise replace the linker with something else that works? A bit of background: I have a code generator that produces a large number of files, ~15k .cpp files. I've managed to reduce that to about 6k files, which at least completes the linking process, but I would like to be able to include all 15k without having to create multiple libs.

    Read the article

  • Solr adding document cycle & wait on response issue

    - by user1585896
    I am trying to send HTTP POST requests to Solr to add 50,000 documents (individual requests, one after another, in a while loop). I am using DefaultHttpClient in Java to connect to Solr, and when I call the execute method on my HttpPost, Solr takes 3 to 4 ms to respond. I have commit=false, autoCommit=false, and autoSoftCommit=false. My question is: why does it take that much time to respond, and what cycle does Solr follow to add a new document? Basically I want to send add requests without committing, to see how many requests Solr can handle without doing any kind of commit (i.e. without any disk access). My guess was that with the above parameters turned off I should be hitting Solr about 10,000 times every second, but my result is 300 times a second. I am generating random data to add in my code.
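
    A minimal sketch of a batched alternative, assuming SolrJ is an option (the client class name varies by SolrJ version) and a hypothetical value_s field: with one-document POSTs, the per-request HTTP round trip dominates, so batching is usually what closes the gap between 300/s and 10,000/s:

        // Hypothetical sketch: one POST per 1,000 documents, no commits.
        import java.util.ArrayList;
        import java.util.List;
        import org.apache.solr.client.solrj.impl.HttpSolrServer;
        import org.apache.solr.common.SolrInputDocument;

        public class BulkAdd {
            public static void main(String[] args) throws Exception {
                HttpSolrServer server =
                    new HttpSolrServer("http://localhost:8983/solr");
                List<SolrInputDocument> batch = new ArrayList<SolrInputDocument>();
                for (int i = 0; i < 50000; i++) {
                    SolrInputDocument doc = new SolrInputDocument();
                    doc.addField("id", Integer.toString(i));
                    doc.addField("value_s", "random-" + Math.random());
                    batch.add(doc);
                    if (batch.size() == 1000) {
                        server.add(batch);
                        batch.clear();
                    }
                }
                if (!batch.isEmpty()) server.add(batch);
            }
        }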

    Read the article

  • Cache images provided through script

    - by Wim Haanstra
    I have a script which, using several querystring variables, serves an image. I am also using URL rewriting within IIS 7.5, so images have URLs like these:

        http://mydomain/pictures/ajfhajkfhal/44/thumb.jpg
        http://mydomain/pictures/ajfhajkfhal/44.jpg

    These are rewritten to:

        http://mydomain/Picture.aspx?group=ajfhajkfhal&id=44&thumb=thumb.jpg
        http://mydomain/Picture.aspx?group=ajfhajkfhal&id=44

    I added caching rules to IIS to cache JPG images when they are requested. This works for my images that are real files on disk; when images are served through the script, however, they are always re-requested through the script without being cached. The images do not change that often, so keeping the cache for at least 30 minutes (or until the file changes) would be best. I am using .NET/C# 4.0 for my website. I tried setting several cache options in C#, but I can't seem to find how to cache these images (client-side) while my static images are cached properly.
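
    A minimal sketch of the response side, assuming Picture.aspx writes the JPEG bytes itself and imagePath is a hypothetical variable holding the source file:

        // Hypothetical sketch: emit client-cache headers so browsers and
        // proxies keep the generated JPEG for 30 minutes.
        Response.ContentType = "image/jpeg";
        Response.Cache.SetCacheability(HttpCacheability.Public);
        Response.Cache.SetExpires(DateTime.UtcNow.AddMinutes(30));
        Response.Cache.SetMaxAge(TimeSpan.FromMinutes(30));
        Response.Cache.SetLastModified(File.GetLastWriteTimeUtc(imagePath));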

    Read the article

  • Need concatenating VBA code to work around a memory issue

    - by doharr
    My setup:

    - 50,000 rows of data (the row count will increase in the future, so assume a full worksheet of 64k+ rows)
    - All data is text; no formulas, etc.
    - Column A is open
    - Columns B through AC contain the data that needs to be concatenated
    - Once concatenated into column A, each row will contain 60,000 digits, or 6 KB in file size; after additional manipulation, each cell will become a file

    I have tried concatenating in Excel and I run into memory issues when I select the concatenating function and fill it down the worksheet: it crashes at around row 8,200. My system is 2 GB of RAM, Windows XP Professional, and Excel 2003, with 4 GB of disk space. I'm hoping to find VBA code that conserves memory and does not crash the way Excel does. Thank you.
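
    A minimal sketch of one approach, with an assumed output path: since each concatenated row is destined to become a file anyway, build each string in VBA and stream it straight to disk. This sidesteps fill-down formulas, and also Excel 2003's ~32k character-per-cell limit, which a 60,000-character result in column A would otherwise hit:

        ' Hypothetical sketch: concatenate columns B..AC per row and write
        ' each result to its own text file instead of back into column A.
        Sub RowsToFiles()
            Dim r As Long, c As Long, f As Integer
            Dim s As String, lastRow As Long
            lastRow = Cells(Rows.Count, "B").End(xlUp).Row
            For r = 1 To lastRow
                s = ""
                For c = 2 To 29                    ' columns B through AC
                    s = s & CStr(Cells(r, c).Value)
                Next c
                f = FreeFile
                Open "C:\out\row" & r & ".txt" For Output As #f  ' assumed path
                Print #f, s
                Close #f
            Next r
        End Sub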

    Read the article
