Search Results

Search found 18811 results on 753 pages for 'dynamic memory allocation'.


  • What is faster: multiple `send`s or using buffering?

    - by dauerbaustelle
    I'm playing around with sockets in C/Python and I wonder what the most efficient way is to send headers from a Python dictionary to the client socket. My ideas: (1) use a send call for every header; pros: no memory allocation needed; cons: many send calls, probably error-prone, and error management becomes rather complicated. (2) Use a buffer; pros: one send call, and error checking is a lot easier; cons: you need a buffer :-) malloc/realloc can be rather slow, and using a (too) big buffer to avoid realloc calls wastes memory. Any tips for me? Thanks :-)
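
    The trade-off is language-neutral, so here is a minimal sketch in Java (the header map and charset are illustrative, not from the question): accumulate the whole header block in one growable buffer and issue a single write. The buffer manages its own growth, so there is no manual realloc bookkeeping, and only one call site can fail.

        import java.io.OutputStream;
        import java.net.Socket;
        import java.nio.charset.StandardCharsets;
        import java.util.Map;

        public class HeaderSender {
            // One growable buffer, one write: errors surface in a single place.
            static void sendHeaders(Socket sock, Map<String, String> headers) throws Exception {
                StringBuilder sb = new StringBuilder(256); // grows itself as needed
                for (Map.Entry<String, String> h : headers.entrySet()) {
                    sb.append(h.getKey()).append(": ").append(h.getValue()).append("\r\n");
                }
                sb.append("\r\n"); // blank line terminates the header block
                OutputStream out = sock.getOutputStream();
                out.write(sb.toString().getBytes(StandardCharsets.ISO_8859_1));
                out.flush();
            }
        }

    A middle ground on POSIX systems is scatter/gather I/O (writev), which hands several separate buffers to the kernel in one syscall without copying them together first.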

    Read the article

  • Creating a custom module for Orchard

    - by Moran Monovich
    I created a custom module using this guide from the Orchard documentation, but for some reason I can't see the fields in the content type when I want to create a new one. This is my model:

        public class CustomerPartRecord : ContentPartRecord {
            public virtual string FirstName { get; set; }
            public virtual string LastName { get; set; }
            public virtual int PhoneNumber { get; set; }
            public virtual string Address { get; set; }
            public virtual string Profession { get; set; }
            public virtual string ProDescription { get; set; }
            public virtual int Hours { get; set; }
        }

        public class CustomerPart : ContentPart<CustomerPartRecord> {
            [Required(ErrorMessage = "you must enter your first name")]
            [StringLength(200)]
            public string FirstName { get { return Record.FirstName; } set { Record.FirstName = value; } }

            [Required(ErrorMessage = "you must enter your last name")]
            [StringLength(200)]
            public string LastName { get { return Record.LastName; } set { Record.LastName = value; } }

            [Required(ErrorMessage = "you must enter your phone number")]
            [DataType(DataType.PhoneNumber)]
            public int PhoneNumber { get { return Record.PhoneNumber; } set { Record.PhoneNumber = value; } }

            [StringLength(200)]
            public string Address { get { return Record.Address; } set { Record.Address = value; } }

            [Required(ErrorMessage = "you must enter your profession")]
            [StringLength(200)]
            public string Profession { get { return Record.Profession; } set { Record.Profession = value; } }

            [StringLength(500)]
            public string ProDescription { get { return Record.ProDescription; } set { Record.ProDescription = value; } }

            [Required(ErrorMessage = "you must enter your hours")]
            public int Hours { get { return Record.Hours; } set { Record.Hours = value; } }
        }

    This is the handler:

        class CustomerHandler : ContentHandler {
            public CustomerHandler(IRepository<CustomerPartRecord> repository) {
                Filters.Add(StorageFilter.For(repository));
            }
        }

    The driver:

        class CustomerDriver : ContentPartDriver<CustomerPart> {
            protected override DriverResult Display(CustomerPart part, string displayType, dynamic shapeHelper) {
                return ContentShape("Parts_Customer",
                    () => shapeHelper.Parts_BankCustomer(
                        FirstName: part.FirstName,
                        LastName: part.LastName,
                        PhoneNumber: part.PhoneNumber,
                        Address: part.Address,
                        Profession: part.Profession,
                        ProDescription: part.ProDescription,
                        Hours: part.Hours));
            }

            //GET
            protected override DriverResult Editor(CustomerPart part, dynamic shapeHelper) {
                return ContentShape("Parts_Customer",
                    () => shapeHelper.EditorTemplate(
                        TemplateName: "Parts/Customer",
                        Model: part,
                        Prefix: Prefix));
            }

            //POST
            protected override DriverResult Editor(CustomerPart part, IUpdateModel updater, dynamic shapeHelper) {
                updater.TryUpdateModel(part, Prefix, null, null);
                return Editor(part, shapeHelper);
            }
        }

    The migration:

        public class Migrations : DataMigrationImpl {
            public int Create() {
                // Creating table CustomerPartRecord
                SchemaBuilder.CreateTable("CustomerPartRecord", table => table
                    .ContentPartRecord()
                    .Column("FirstName", DbType.String)
                    .Column("LastName", DbType.String)
                    .Column("PhoneNumber", DbType.Int32)
                    .Column("Address", DbType.String)
                    .Column("Profession", DbType.String)
                    .Column("ProDescription", DbType.String)
                    .Column("Hours", DbType.Int32)
                );
                return 1;
            }

            public int UpdateFrom1() {
                ContentDefinitionManager.AlterPartDefinition("CustomerPart", builder => builder.Attachable());
                return 2;
            }

            public int UpdateFrom2() {
                ContentDefinitionManager.AlterTypeDefinition("Customer", cfg => cfg
                    .WithPart("CommonPart")
                    .WithPart("RoutePart")
                    .WithPart("BodyPart")
                    .WithPart("CustomerPart")
                    .WithPart("CommentsPart")
                    .WithPart("TagsPart")
                    .WithPart("LocalizationPart")
                    .Creatable()
                    .Indexed());
                return 3;
            }
        }

    Can someone please tell me if I am missing something?

    Read the article

  • App like the default photo browser on the iPhone?

    - by goeff27
    Hi all, I am developing an app which contains a feature like the default photo browser on the iPhone. I have done something similar to that, but after loading some (around 10-15) images from a remote server, I receive a memory warning. My requirement is loading images one by one. For this, I put the images on a scroll view and increase the contentSize of the scroll view. It works fine, but the app quits due to the memory warning. Does anyone have an idea of an approach for this feature that works like the Photos app without this problem? Thanks in advance.

    Read the article

  • XML Processing on iPhone: What is the best option?

    - by gonso
    Hello, I'm building a new version of an iPhone application and I'm wondering if I should review how my app communicates with the server. My iPhone client sends and receives XML over HTTP requests. To send the information I use the ASIHTTPRequest framework; I "manually" build the XML request by appending strings. To parse the response I'm using an NSXMLParser. My question is whether I have better options to: A) create an XML string from a memory object; B) create a memory object from the XML string. Is there anything like JAXB to marshal XML into objects? Thanks, Gonso

    Read the article

  • What would happen to GC if I run process with priority = RealTime?

    - by Bobb
    I have a C# app which runs with RealTime priority. It was all fine until I made a few hectic changes in the past 2 days. Now it runs out of memory within a few hours. I am trying to find out whether this is a memory leak I created, or whether I now allocate a lot more objects than before and the GC simply can't collect them because it runs at the same priority. My question is: what could happen to the GC when it tries to collect objects in an application with RealTime priority (where there is also at least one thread running with Highest thread priority)? (P.S. By realtime priority I mean Process.GetCurrentProcess().PriorityClass = ProcessPriorityClass.RealTime.) Sorry, I forgot to mention: the GC is in Server mode.

    Read the article

  • Does JavaScript have an equivalent to Perl's DESTROY method?

    - by Eric Strom
    Is there any method that is called or event that is dispatched right before an Element is cleaned up by the JavaScript garbage collector? In Perl I would write:

        package MyObj;
        sub new     { bless {} }
        sub DESTROY { print "cleaning up @_\n" }

    and then later:

        {
            my $obj = MyObj->new;
            # do something with obj
        }
        # scope ends, and assuming there are no external references to $obj,
        # the DESTROY method will be called before the object's memory is freed

    My target platform is Firefox (and I have no need to support other browsers), so if there is only a Firefox-specific way of doing this, that is fine. A little background: I am writing the Perl module XUL::Gui, which serves as a bridge between Perl and Firefox, and I am currently working on plugging a few memory leaks related to DOM Elements sticking around forever, even after they are gone and no more references remain on the Perl side. So I am looking for ways to either figure out when JavaScript Elements will be destroyed, or a way to force JavaScript to clean up an object.

    Read the article

  • How can I detect endianness on a system where all primitive integer sizes are the same?

    - by Joe Wreschnig
    (This question came out of explaining the details of CHAR_BIT, sizeof, and endianness to someone yesterday. It's entirely hypothetical.) Let's say I'm on a platform where CHAR_BIT is 32, so sizeof(char) == sizeof(short) == sizeof(int) == sizeof(long). I believe this is still a standards-conformant environment. The usual way to detect endianness at runtime (because there is no reliable way to do it at compile time) is to make a

        union { int i; char c[sizeof(int)]; } x;
        x.i = 1;

    and see whether x.c[0] or x.c[sizeof(int)-1] got set. But that doesn't work on this platform, as I end up with a char[1]. Is there a way to detect whether such a platform is big-endian or little-endian, at runtime? Obviously it doesn't matter inside this hypothetical system, but one can imagine it is writing to a file, or some kind of memory-mapped area, which another machine with a saner memory model reads and reconstructs.

    Read the article

  • C++ std::ostringstream vs std::string::append

    - by NickSoft
    In all examples that use some kind of buffering, I see they use a stream instead of a string. How is std::ostringstream with the << operator different from using std::string::append? Which one is faster and which one uses fewer resources (memory)? One difference I know of is that you can output different types into an output stream (like an integer), rather than the limited types that string::append accepts. Here is an example:

        std::ostringstream os;
        os << "Content-Type: " << contentType << ";charset=" << charset << "\r\n";
        std::string header = os.str();

    vs

        std::string header("Content-Type: ");
        header.append(contentType);
        header.append(";charset=");
        header.append(charset);
        header.append("\r\n");

    Obviously using the stream is shorter, but append returns a reference to the string, so it can also be written like this:

        std::string header("Content-Type: ");
        header.append(contentType)
              .append(";charset=")
              .append(charset)
              .append("\r\n");

    And with an output stream you can do:

        std::string content;
        ...
        os << "Content-Length: " << content.length() << "\r\n";

    But what about memory usage and speed, especially when used in a big loop? Update: To be clearer, the question is: which one should I use, and why? Are there situations where one or the other is preferred? As for performance and memory... well, I think a benchmark is the only way to tell, since every implementation could be different. Update 2: Well, I don't get a clear idea from the answers of what I should use, which means any of them will do the job, plus vector. Cubbi did a nice benchmark, with the addition from Dietmar Kühl that the biggest difference is the construction of those objects. If you are looking for an answer you should check that too. I'll wait a bit longer for other answers (see the previous update), and if I don't get one I think I'll accept Tolga's answer, because his suggestion to use a vector was already benchmarked, which means a vector should be less resource-hungry.

    Read the article

  • What is the effect on record size of reordering columns in PostgreSQL?

    - by Summer
    Since Postgres can only add columns at the end of tables, I end up re-ordering by adding new columns at the end of the table, setting them equal to existing columns, and then dropping the original columns. So, what does PostgreSQL do with the memory that's freed by dropped columns? Does it automatically re-use the memory, so a single record consumes the same amount of space as it did before? But that would require a re-write of the whole table, so to avoid that, does it just keep a bunch of blank space around in each record? Thanks! ~S

    Read the article

  • Multimap Space Issue: Guava

    - by Arpssss
    In my Java code, I am using Guava's Multimap (com.google.common.collect.Multimap), created like this:

        Multimap<Integer, Integer> index = HashMultimap.create();

    Here, the Multimap key is some portion of a URL and the value is another portion of the URL (converted into an integer). I give my JVM 2560 MB (2.5 GB) of heap space (using Xmx and Xms). However, it can only store about 9 to 10 million such (key, value) pairs of integers. Theoretically (going by the memory occupied by an int), it should store more. Can anybody help me: 1) Why is this happening (i.e., why does the Multimap take so much space)? I checked my code without inserting pairs into the Multimap, and it takes only 1/2 MB of space. 2) Is there any other way or home-baked solution to solve this space issue? More clearly, how do I solve this memory issue? Thanks in advance; any idea is perfectly OK for me.
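
    For scale: on a typical 64-bit JVM each boxed Integer costs around 16 bytes, and every pair in a HashMultimap also drags in a HashMap entry plus a HashSet node, so tens of bytes of overhead per 8 bytes of payload is plausible. A minimal sketch of a home-baked, memory-lean alternative, assuming the index can be built once and then only queried: pack each (key, value) pair into a single long in a sorted array and binary-search it.

        import java.util.Arrays;
        import java.util.function.IntConsumer;

        // Packs each (key, value) int pair into one long: key in the high 32 bits,
        // value in the low 32 bits. Sorting makes all pairs with the same key
        // contiguous, so a lookup is a binary search plus a linear walk.
        public class PackedPairs {
            private final long[] pairs;

            public PackedPairs(long[] packed) {
                this.pairs = packed.clone();
                Arrays.sort(this.pairs);
            }

            public static long pack(int key, int value) {
                return ((long) key << 32) | (value & 0xFFFFFFFFL);
            }

            // Visit every value stored under `key`.
            public void forEachValue(int key, IntConsumer action) {
                int i = lowerBound((long) key << 32);
                while (i < pairs.length && (int) (pairs[i] >>> 32) == key) {
                    action.accept((int) pairs[i]);
                    i++;
                }
            }

            // Index of the first element >= target.
            private int lowerBound(long target) {
                int lo = 0, hi = pairs.length;
                while (lo < hi) {
                    int mid = (lo + hi) >>> 1;
                    if (pairs[mid] < target) lo = mid + 1; else hi = mid;
                }
                return lo;
            }
        }

    Ten million pairs stored this way take about 80 MB (one long each), versus the roughly 2.5 GB observed with the boxed structure.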

    Read the article

  • Using CreateSourceQuery in CTP4 Code First

    - by Adam Rackis
    I'm guessing this is impossible, but I'll throw it out there anyway. Is it possible to use CreateSourceQuery when programming with the EF4 Code First API, in CTP4? I'd like to eagerly load properties attached to a collection of properties, like this:

        var sourceQuery = this.CurrentInvoice.PropertyInvoices.CreateSourceQuery();
        sourceQuery.Include("Property").ToList();

    But of course CreateSourceQuery is defined on EntityCollection<T>, whereas Code First uses plain old ICollection (obviously). Is there some way to convert? I've gotten the below to work, but it's not quite what I'm looking for. Anyone know how to go from what's below to what's above (the code below is from a class that inherits DbContext)?

        ObjectSet<Person> OSPeople = base.ObjectContext.CreateObjectSet<Person>();
        OSPeople.Include(Pinner => Pinner.Books).ToList();

    Thanks! EDIT: here's my version of the solution posted by zeeshanhirani, whose book, by the way, is amazing!

        dynamic result;
        if (invoice.PropertyInvoices is EntityCollection<PropertyInvoice>)
            result = (invoices.PropertyInvoices as EntityCollection<PropertyInvoice>).CreateSourceQuery().Yadda.Yadda.Yadda;
        else // must be a unit test!
            result = invoices.PropertyInvoices;
        return result.ToList();

    EDIT 2: OK, I just realized that you can't dispatch extension methods while using dynamic. So I guess we're not quite as dynamic as Ruby, but the example above is easily modifiable to comport with this restriction. EDIT 3: As mentioned in zeeshanhirani's blog post, this only works if (and only if) you have change-tracking proxies, which will get created if all of your properties are declared virtual. Here's another version of what the method might look like to use CreateSourceQuery with POCOs:

        public class Person {
            public virtual int ID { get; set; }
            public virtual string FName { get; set; }
            public virtual string LName { get; set; }
            public virtual double Weight { get; set; }
            public virtual ICollection<Book> Books { get; set; }
        }

        public class Book {
            public virtual int ID { get; set; }
            public virtual string Title { get; set; }
            public virtual int Pages { get; set; }
            public virtual int OwnerID { get; set; }
            public virtual ICollection<Genre> Genres { get; set; }
            public virtual Person Owner { get; set; }
        }

        public class Genre {
            public virtual int ID { get; set; }
            public virtual string Name { get; set; }
            public virtual Genre ParentGenre { get; set; }
            public virtual ICollection<Book> Books { get; set; }
        }

        public class BookContext : DbContext {
            public void PrimeBooksCollectionToIncludeGenres(Person P) {
                if (P.Books is EntityCollection<Book>)
                    (P.Books as EntityCollection<Book>).CreateSourceQuery().Include(b => b.Genres).ToList();
            }
        }

    Read the article

  • Creating an Application to Save Arbitrary Application State

    - by ashes999
    See this SuperUser question. To summarize, VM software lets you save the state of arbitrary applications (by saving the whole VM image). Would it be possible to write software for Windows that allows you to save and reload arbitrary application state? If so (and presumably so), what would it entail? I would be looking to implement this, if possible, in a high-level language like C#. I presume that, one way or another, I would need to dump the registers and memory (or maybe the entire application memory block) to a file somewhere and load them back to restore the state. So how do I build this thing?

    Read the article

  • Processing an XML file with huge data

    - by Manish Dhanotiya
    Hi, I am working on an application which has the requirements below: 1. Download a ZIP file from a server. 2. Uncompress the ZIP file and get the content (which is in XML format) from this file into a String. 3. Pass this content into another method for parsing and further processing. Now, my concern is that the XML file may be of huge size, say 100 MB, while my JVM has only 512 MB of memory. So how can I get this content in chunks, pass it for parsing, and then insert the data into PL/SQL tables? Since there can be multiple requests running at the same time, and considering the 512 MB of memory, what is the best possible way to process this? How can I get the data in chunks and pass it as a stream for XML parsing? I googled this but didn't find any implementation. :( Thanks,
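
    One way to keep memory flat, sketched below with the JDK's built-in StAX pull parser (the file name and the "record" element name are placeholders, not from the question): never hold the whole XML in a String at all. Instead, parse straight out of the ZipInputStream and hand off one record at a time, so only the current record is ever in memory regardless of file size.

        import java.io.FileInputStream;
        import java.util.zip.ZipEntry;
        import java.util.zip.ZipInputStream;
        import javax.xml.stream.XMLInputFactory;
        import javax.xml.stream.XMLStreamConstants;
        import javax.xml.stream.XMLStreamReader;

        public class StreamingZipXml {
            public static void main(String[] args) throws Exception {
                // Stream the compressed XML; nothing larger than one record is buffered.
                try (ZipInputStream zis = new ZipInputStream(new FileInputStream("download.zip"))) {
                    ZipEntry entry;
                    while ((entry = zis.getNextEntry()) != null) {
                        if (!entry.getName().endsWith(".xml")) continue;
                        XMLStreamReader r = XMLInputFactory.newInstance().createXMLStreamReader(zis);
                        while (r.hasNext()) {
                            if (r.next() == XMLStreamConstants.START_ELEMENT
                                    && "record".equals(r.getLocalName())) {
                                // Collect this record's fields here and add them to a
                                // JDBC batch insert; execute the batch every few
                                // thousand rows to keep transactions small.
                            }
                        }
                    }
                }
            }
        }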

    Read the article

  • Using the Windows API and C++, how could I load an exe from the hard drive and run it in its own thread?

    - by returneax
    For the sake of learning, I'm trying to do what the OS does when launching a program, i.e., parsing a PE file and giving it a thread of execution. If I have two exes, one called foo.exe and the other bar.exe, how could I have foo.exe load the contents of bar.exe into memory and then have it execute from there in its own thread? I know how to get it into memory using MapViewOfFile or by simply loading the contents from the hard drive into a buffer. I'm assuming that simply copying the on-disk contents of bar.exe into memory, giving it its own suspended thread, and running it wouldn't work. I am semi-familiar with PE file internals. All help is very much appreciated, of course :)

    Read the article

  • I can connect to Samba server but cannot access shares.

    - by jlego
    I'm having trouble getting Samba sharing working so that I can access shares. I have set up a stand-alone box running Fedora 16 to use as a file-sharing and web development server. It needs to be able to share files with a Windows 7 PC and a Mac running OS X Snow Leopard. I've set up Samba using the Samba configuration GUI tool on Fedora, added users to Fedora, and connected them as Samba users (with the same usernames and passwords as on the Windows and Mac machines). The workgroup name is the same as the Windows workgroup. Authentication is set to User. I've allowed Samba and the Samba client through the firewall and set the ethernet to a trusted port in the firewall. Both the Windows and Mac machines can connect to the server and view the shares; however, when trying to access the shares, Windows throws error 0x80070035: "Windows cannot access \\SERVERNAME\ShareName." The Windows user is not prompted for a username or password when accessing the server (found under "Network Places"). This also happens when connecting with the IP rather than the server name. The Mac can also connect to the server and see the shares, but when choosing a share it gives the error: "The original item for ShareName cannot be found." When connecting via IP, the Mac user is prompted for a username and password, which when authenticated gives a list of shares; however, when choosing a share to connect to, the error is displayed and the user cannot access the share. Since both machines are acting similarly when trying to access the shares, I assume it is an issue with how Samba is configured. smb.conf:

        [global]
            workgroup = workgroup
            server string = Server
            log file = /var/log/samba/log.%m
            max log size = 50
            security = user
            load printers = yes
            cups options = raw
            printcap name = lpstat
            printing = cups

        [homes]
            comment = Home Directories
            browseable = no
            writable = yes

        [printers]
            comment = All Printers
            path = /var/spool/samba
            browseable = yes
            printable = yes

        [FileServ]
            comment = FileShare
            path = /media/FileServ
            read only = no
            browseable = yes
            valid users = user1, user2

        [webdev]
            comment = Web development
            path = /var/www/html/webdev
            read only = no
            browseable = yes
            valid users = user1

    How do I get Samba sharing working? UPDATE: I figured it out; it was because I was sharing a second hard drive. See the checked answer below. Speculation 1: Before this box I had another box with the same version of Fedora installed (16) and Samba working for these same computers. I started up the old machine and copied the smb.conf file from the old machine to the new one (editing the share definitions for the new shares, of course), and I still get the same errors on both client machines. The only differences in environment are the hardware and the router: on the old machine the router received a dynamic public IP and assigned dynamic private IPs to each device on the network, while the new machine is connected to a router that has a static public IP (still dynamic internal IPs, though). Could either one of these be affecting Samba? Speculation 2: As the directory I am trying to share is actually an entire internal disk, I have tried these things: 1) changing the owner of the mounted disk from root to my user (which is the same username as on the Windows machine); 2) making a share that only included one of the folders on the disk instead of the entire disk, again with my user as the owner. Both tests failed, giving me the same errors regarding the network address. Speculation 3: Whenever I try to connect to the share on the Windows 7 client I am prompted for my username and password. When I enter the correct credentials I get an access denied message. However, I did notice that under the login box "domain: WINDOWS-PC-NAME" is listed. I believe this could very well be the problem. Speculation 4: So I've completely reinstalled Fedora and Samba now. I've created a share on the first hard drive (the one Fedora is installed on) and I can access that fine from Windows. However, when I try to share any data on the second disk, I receive the same error. This, I believe, is the problem; I think I need to change some things in fstab or fdisk or something. Speculation 5: So in fstab I mapped the drive to automount in a folder, which works correctly. I also added the samba_share_t SELinux label to the mountpoint directory, which now allows me to access the shares on the Windows machine; however, I cannot see any of the files in the directory on the Windows machine. (They are there; I can see them in the Fedora file browser locally.)

    Read the article

  • Integer vs. int

    - by Siddhartha
    I read this on the Oracle docs java.lang page: "Frequently it is necessary to represent a value of primitive type as if it were an object. The wrapper classes Boolean, Character, Integer, Long, Float, and Double serve this purpose." I'm not sure I understand why these are needed. It says they have useful functions such as equals(). But if I can do (a == b), why would I ever want to declare them as Integer, use more memory, and use equals()? How does the memory usage differ between the two?
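
    To make the difference concrete, here is a small, self-contained example (plain JDK behavior): == on primitives compares values, but on Integer objects it compares references, which is exactly why equals() exists. Memory-wise, an int is a bare 4-byte value, while an Integer is a full heap object (roughly 16 bytes on a typical 64-bit JVM). The wrappers earn their keep mainly because generics and collections can only hold objects.

        import java.util.ArrayList;
        import java.util.List;

        public class IntVsInteger {
            public static void main(String[] args) {
                int a = 1000, b = 1000;
                System.out.println(a == b);      // true: primitives compare by value

                Integer x = 1000, y = 1000;
                System.out.println(x == y);      // false (usually): two distinct
                                                 // objects, == compares references
                System.out.println(x.equals(y)); // true: compares the wrapped values

                Integer s = 100, t = 100;
                System.out.println(s == t);      // true: values in [-128, 127] come
                                                 // from a shared cache, so both
                                                 // variables refer to one object

                // Generics can't hold primitives; autoboxing wraps each int
                // in an Integer behind the scenes.
                List<Integer> list = new ArrayList<>();
                list.add(42);
            }
        }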

    Read the article

  • SELECT only a certain set of rows at a time

    - by prmatta
    I need to select data from one table and insert it into another table. Currently the SQL looks something like this:

        INSERT INTO A (x, y, z)
        SELECT x, y, z
        FROM B b
        WHERE ...

    However, the SELECT is huge, resulting in over 2 million rows, and we think it is taking up too much memory. Informix, the DB in this case, runs out of virtual memory when the query is run. How would I go about selecting and inserting a set of rows at a time (say 2000)? Given that I don't think there are any row ids, etc.
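
    One common workaround, sketched here in Java/JDBC on the assumption that routing the copy through the application is acceptable (table and column names taken from the question): stream the SELECT through a cursor and batch the INSERTs, committing every few thousand rows so no single statement has to materialize all 2 million rows at once.

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.sql.Statement;

        public class ChunkedCopy {
            // Copy B -> A in batches of 2000 rows instead of one giant statement.
            // Note: some drivers need HOLD_CURSORS_OVER_COMMIT for the open
            // ResultSet to survive the intermediate commits.
            public static void copy(Connection conn) throws SQLException {
                conn.setAutoCommit(false);
                try (Statement sel = conn.createStatement();
                     ResultSet rs = sel.executeQuery("SELECT x, y, z FROM B");
                     PreparedStatement ins = conn.prepareStatement(
                             "INSERT INTO A (x, y, z) VALUES (?, ?, ?)")) {
                    int n = 0;
                    while (rs.next()) {
                        ins.setObject(1, rs.getObject(1));
                        ins.setObject(2, rs.getObject(2));
                        ins.setObject(3, rs.getObject(3));
                        ins.addBatch();
                        if (++n % 2000 == 0) {
                            ins.executeBatch();
                            conn.commit();
                        }
                    }
                    ins.executeBatch(); // flush the final partial batch
                    conn.commit();
                }
            }
        }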

    Read the article

  • Search in big text log files

    - by 0xFF
    Hi, let's say you have a game server which creates text log files of gamers' actions, and from time to time you need to look something up in those log files (like investigating a scam or a lost item). Just for example, you have 100 files and each file has a size between 20 MB and 50 MB: how would you search them quickly? What I have already tried is to create several threads, where each individual thread maps its own file into memory (let's say memory should not be a problem if it does not exceed 500 MB of RAM) and performs the search there. The result was something around 1 second per file:

        File: a26.log - read in: 0.891, lines: 625282, matches: 78848

    Is there a better way to do that? Because it seems rather slow to me. Thanks. (Java was used for this case.)
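
    One thing worth checking before reaching for fancier algorithms: if each thread decodes the mapped bytes into String lines before matching, most of that second is likely spent on character decoding rather than searching. A rough sketch of scanning the mapped bytes directly for an ASCII needle (naive matching; Boyer-Moore or an index built ahead of time would be the next step):

        import java.io.RandomAccessFile;
        import java.nio.MappedByteBuffer;
        import java.nio.channels.FileChannel;

        public class LogGrep {
            // Count occurrences of an ASCII needle in one file via a memory map,
            // without ever decoding bytes to chars. Works for files up to 2 GB;
            // these logs are at most 50 MB.
            static int countMatches(String path, byte[] needle) throws Exception {
                try (RandomAccessFile raf = new RandomAccessFile(path, "r");
                     FileChannel ch = raf.getChannel()) {
                    MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
                    int hits = 0;
                    for (int i = 0, last = buf.limit() - needle.length; i <= last; i++) {
                        int j = 0;
                        while (j < needle.length && buf.get(i + j) == needle[j]) j++;
                        if (j == needle.length) hits++;
                    }
                    return hits;
                }
            }
        }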

    Read the article

  • [MySQL, InnoDB] Rating place

    - by Pavel
    I'm trying to generate a rating-place table using the following recipe: http://stackoverflow.com/questions/1776821/assign-places-in-the-rating-mysql-php, but my database is heavily loaded. I tried not to create a regular table, but to use a MEMORY table and update it using the following SQL query:

        insert into tops (uid) select uid from users order by exp desc;

    but I got the following MySQL error:

        Deadlock found when trying to get lock; try restarting transaction

    because there are too many queries running while the SELECT is being executed. How do I solve this problem? P.S. CREATE TABLE tops AS SELECT ... works almost fine, except for the high server load (up to load average: 50 if tops is a non-MEMORY table). My users table has nearly 4.5 million rows. Thanks for any advice.

    Read the article

  • How to select table column names in a view and pass them to the controller in Rails?

    - by zachd1_618
    So I am new to Rails, and OO programming in general. I have some grasp of the MVC architecture. My goal is to make a (nearly) completely dynamic plug-and-play plotting web server. I am fairly confused by params, forms, and select helpers. What I want to do is use Rails drop-downs to basically pass parameters as strings to my controller, which will use the params to select certain column data from my database and plot it dynamically. I have the latter part of the task working, but I can't seem to pass values from my view to the controller. For simplicity's sake, say my database schema looks like this:

        Plot
        x | y1 | y2
        --+----+----
        1 |  1 |  1
        2 |  2 |  4
        3 |  3 |  9
        4 |  4 | 16
        5 |  5 | 25

    and in my model, I have dynamic selector scopes that let me select just certain columns of data, in Plot.rb:

        class Plot < ActiveRecord::Base
          scope :select_var, lambda { |varname| select(varname) }
          scope :between_x, lambda { |x1, x2| where("x BETWEEN ? and ?", "#{x1}", "#{x2}") }
        end

    So this way, I can call:

        irb>> @p1 = Plot.select_var(['x','y1']).between_x(1,3)

    and get in return a class where @p1.x and @p1.y1 are my only attributes, only for values from x=1 to x=3, which I plot dynamically. I want to start off in a view (plot/index) where I can dynamically select which variable names (table column names) and which rows to fetch from the database and plot. The problem is, most select helpers don't seem to work with columns in the database, only rows. So to select columns, I first get an array of column names that exist in my database with a function I wrote, in the Plots controller:

        def index
          d = Plot.first
          @tags = d.list_vars
        end

    So @tags = ['x','y1','y2']. Then in my plot/index.html.erb I try to use a drop-down to select which variables I send back to the controller:

        <%= select_tag( :variable, options_for_select(@plots.first.list_vars, :name, :multiple => :true) ) %>
        <%= button_to 'Plot now!', :controller => "plots/plot_vars", :variable => params[:variable] %>

    Finally, in the controller again:

        def plot_vars
          @plot_data = Plot.select_vars([params[:variable]])
        end

    The problem is that every time I try this (or one of a hundred variations thereof), params[:variable] is nil. How can I use a drop-down to pass a parameter with string variable names to the controller? Sorry it's so long; I have been struggling with this for about a month now. :-( I think my biggest problem is that this setup doesn't really match the Rails architecture. I don't have "users" and "articles" as individual entities; I really have a data structure, not a data object, and trying to work with the structure in data-object terms is not necessarily the easiest thing to do. For background: my actual database has about 250 columns and a couple million rows, and they get changed and modified from time to time. I know I can make the database smarter, but it's not worth it on my end. I work at a scientific institute where there are a ton of projects with databases just like this, and each one has a web developer who spends months setting up a web interface and their own janky plotting setup. I want to make this completely dynamic, as a plug-and-play solution, so all you have to do is specify your database connection and this Rails setup will automatically show and plot the data you want. I am more of a sequential programmer and number cruncher, as are many people here. I think this project could be very helpful in the end, but it's difficult for me to figure out right now.

    Read the article

  • Why, in Objective-C, do we use self = [super init] instead of just [super init]?

    - by ????
    In a book, I saw that if a subclass is overriding a superclass's method, we may have:

        self = [super init];

    First, is this supposed to be done in the subclass's init method? Second, I wonder why the call is not just:

        [super init];

    I mean, at the time init is called, the memory has already been allocated by alloc (I think by [Foobar alloc], where Foobar is the subclass's name). So can't we just call [super init] to initialize the member variables? Why do we have to take the return value of init and assign it to self? I mean, before calling [super init], self should already be pointing to a valid chunk of allocated memory... so why assign something to self again? (And if we do assign, won't [super init] just return self's existing value anyway?)

    Read the article

  • Data format for a content-heavy iPhone app - Plist or XML?

    - by Toby
    Hello, I'm building an iPhone app that is essentially a book; it will be bundled with a lot of text-heavy content. I considered bundling the data as XML and loading it when the application starts, but the XML would contain a lot of nested structures and be a bit of a pain to parse. Would it be better to use a plist? I'm concerned about memory usage, and plists are loaded entirely into memory: can they be parsed in chunks? Is there a maximum size to a plist, and how efficient are they? I'm not sure how big the bundled content is going to be yet, but I imagine it could be anywhere from 500 KB to 4 MB. Thanks in advance.

    Read the article
