Search Results

Search found 12588 results on 504 pages for 'memory allocation'.


  • I need a 3ds loader for opengl

    - by Shaza
    Hey, I have an OpenGL project in C++, and I need to load about 10 .3ds objects into my scene with their textures on. Unfortunately, the loader I'm using now leaks memory; I noticed this because my scene freezes after the project has been running for about a minute. Can you suggest a 3ds loader that is effective at loading a large number of .3ds objects?

    Read the article

  • Do I have to call release on an objective-c retain class variable when setting it to a new object?

    - by Andrew Arrow
    Say I have @property (nonatomic, retain) NSString *foo; in some class, and I call: myclass.foo = [NSString stringWithString:@"string1"]; myclass.foo = [NSString stringWithString:@"string2"]; Should I have called [myclass.foo release] before setting it to "string2" to avoid a memory leak, or is the fact that nothing is pointing to the first "string1" object anymore good enough? In the dealloc method, [foo release] will be called.
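    For reference, here is a sketch of roughly what the compiler-synthesized setter does for a (nonatomic, retain) property under manual reference counting; this is an approximation for illustration, not the exact generated code:

    ```objc
    // Rough equivalent of the synthesized setter for
    // @property (nonatomic, retain) NSString *foo;
    - (void)setFoo:(NSString *)newFoo {
        if (newFoo != foo) {
            [foo release];          // the previous value is released here
            foo = [newFoo retain];  // the new value is retained
        }
    }
    ```

    So assigning through the property (myclass.foo = ...) releases the old object for you, and the strings returned by stringWithString: are autoreleased, so no extra release call is needed between the two assignments.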

    Read the article

  • How to create a newline in a Rebol block?

    - by Rebol Tutorial
    Let's say I have a config.txt which contains: "param11" "param12" "param21" "param22" I load it into memory with config: load %config.txt and I can save it back with save %config.txt config So far so good. The problem occurs when I want to add "param31" "param32" on a new line. I have tried append config reduce [newline "param31" "param32"] save %config.txt config but that doesn't give the expected result "param11" "param12" "param21" "param22" "param31" "param32" and instead gives "param11" "param12" "param21" "param22" #"^/" "param31" "param32" So how do I do it?

    Read the article

  • Understanding REST through an example

    - by grifaton
    My only real exposure to the ideas of REST has been through Ruby on Rails' RESTful routing. This has suited me well for the kind of CRUD-based applications I have built with Rails, but consequently my understanding of RESTfulness is somewhat limited.

    Let's say we have a finite collection of Items, each of which has a unique ID and a number of properties, such as colour, shape, and size (which might be undefined for some Items). Items can be used by a client for a period of time, but each Item can only be used by one client at once. Access to Items is regulated by a server. Clients can request the temporary use of certain Items from the server. Usually, clients will only be interested in getting access to a number of Items with particular properties, rather than getting access to specific Items. When a client requests use of a number of Items, the server responds with a list of IDs corresponding to the request, or with a response that says that the requested Items are not currently available or do not exist.

    A client can make the following kinds of request:

    - Tell me how many green triangle Items there are (in total/available).
    - Give me use of 200 large red Items.
    - I have finished with Items 21, 23, 23.
    - Add 100 new red square Items.
    - Delete 50 small green Items.
    - Modify all big yellow pentagon Items to be blue.

    The toy example above is like a resource allocation problem I have had to deal with recently. How should I go about thinking about it RESTfully?
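    One way to approach this RESTfully is to treat the temporary use of Items as a resource in its own right (an "allocation" or "lease"), so acquiring and returning Items becomes creating and deleting those resources. The following is only a rough illustration of how the requests above might map onto HTTP; every path and field name here is invented for the example:

    ```
    GET    /items?colour=green&shape=triangle&state=available   (count or list matching Items)
    POST   /allocations        {"size": "large", "colour": "red", "quantity": 200}
                               (the response carries the IDs of the allocated Items)
    DELETE /allocations/57     (I have finished with the Items in this allocation)
    POST   /items              {"colour": "red", "shape": "square", "quantity": 100}
    DELETE /items?size=small&colour=green&limit=50
    PATCH  /items?size=big&colour=yellow&shape=pentagon          {"colour": "blue"}
    ```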

    Read the article

  • Jaxb to generate the XML directly to the OutputStream

    - by sonu
    Hi, I have a 500 MB CSV file that I need to convert into an XML file. I am using JAXB to create the XML. It works fine for small amounts of data, but for larger amounts, around 300 MB, it throws an out-of-memory exception. Can anyone tell me how I can create each element and write it to the file without building the whole tree in JAXB? Thanks, Sonu
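    One way to avoid building the whole tree is to marshal one row at a time as an XML fragment onto a shared XMLStreamWriter, so only the current row is ever in memory. A minimal sketch, assuming the javax.xml.bind JAXB API; the Row class, the file names, and the element names are illustrative, not taken from the question:

    ```java
    import javax.xml.bind.JAXBContext;
    import javax.xml.bind.JAXBElement;
    import javax.xml.bind.Marshaller;
    import javax.xml.namespace.QName;
    import javax.xml.stream.XMLOutputFactory;
    import javax.xml.stream.XMLStreamWriter;
    import java.io.BufferedReader;
    import java.io.FileOutputStream;
    import java.io.FileReader;

    public class StreamingExport {

        // Hypothetical row type; in practice it would mirror the CSV columns.
        public static class Row {
            public String value;
        }

        public static void main(String[] args) throws Exception {
            JAXBContext ctx = JAXBContext.newInstance(Row.class);
            Marshaller m = ctx.createMarshaller();
            // Marshal fragments only, so no XML declaration is emitted per element.
            m.setProperty(Marshaller.JAXB_FRAGMENT, Boolean.TRUE);

            try (FileOutputStream out = new FileOutputStream("records.xml");
                 BufferedReader csv = new BufferedReader(new FileReader("input.csv"))) {

                XMLStreamWriter xsw =
                        XMLOutputFactory.newInstance().createXMLStreamWriter(out, "UTF-8");
                xsw.writeStartDocument("UTF-8", "1.0");
                xsw.writeStartElement("records");         // hand-written root element

                String line;
                while ((line = csv.readLine()) != null) { // one row at a time, no tree
                    Row row = new Row();
                    row.value = line;                     // real code would split the columns
                    m.marshal(new JAXBElement<>(new QName("record"), Row.class, row), xsw);
                }

                xsw.writeEndElement();
                xsw.writeEndDocument();
                xsw.close();
            }
        }
    }
    ```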

    Read the article

  • SQL Server 2008 - Query takes forever to finish even though work is actually done

    - by Brian
    Running the following simple query in SSMS: UPDATE tblEntityAddress SET strPostCode = REPLACE(strPostCode, ' ', '') The update to the data (at least in memory) completes in under a minute; I verified this by running another query under the READ UNCOMMITTED transaction isolation level. The UPDATE itself, however, continues to run for another 30 minutes. What is the issue here? Is it caused by a delay writing to disk? TIA
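    A common way to keep a large single-statement UPDATE from running (and logging) for a long time is to break it into batches so each transaction stays small. A hedged T-SQL sketch; the 10,000-row batch size is an arbitrary illustration:

    ```sql
    -- Update in batches so each transaction commits a manageable chunk.
    WHILE 1 = 1
    BEGIN
        UPDATE TOP (10000) tblEntityAddress
        SET strPostCode = REPLACE(strPostCode, ' ', '')
        WHERE strPostCode LIKE '% %';   -- only rows that still contain a space

        IF @@ROWCOUNT = 0 BREAK;        -- nothing left to change
    END
    ```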

    Read the article

  • fetchBatchSize to be same as fetchLimit

    - by user1730622
    What does it mean to have fetchBatchSize be the same as fetchLimit, say with both set to 5? My understanding is that with fetchLimit, only 5 records will be in the fetch result set, and that additionally, with fetchBatchSize, only the IDs/identities of the records are read into memory; the full records are not retrieved until they are accessed. Is that a correct understanding?

    Read the article

  • Can the .NET MethodInfo cache be cleared or disabled?

    - by Anton
    Per MSDN, calling Type.GetMethods() stores reflected method information in a MemberInfo cache so the expensive operation doesn't have to be performed again. I have an application that scans assemblies/types, looking for methods that match a given specification. The problem is that memory consumption increases significantly (especially with large numbers of referenced assemblies) since .NET hangs onto the method metadata. Is there any way to clear or disable this MemberInfo cache?

    Read the article

  • How to use a length indicator in a C++ program

    - by cj
    I want to write a program in C++ that reads a file where each field is preceded by a number indicating how long it is. The problem is that I read every record into an object of a class; how do I make the attributes of the class dynamically sized? For example, if the field is "john", it should be read into a 4-character array. I don't want to allocate an array of 1000 elements, because minimal memory usage is very important.
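    One way to avoid fixed-size arrays is to let std::string own exactly as many characters as the length indicator announces. A minimal sketch, assuming a text file where each field is stored as a length, a single separator character, then the characters; that on-disk layout is an assumption, not something stated in the question:

    ```cpp
    #include <fstream>
    #include <iostream>
    #include <string>
    #include <vector>

    // Reads one length-prefixed field: "<length> <characters>".
    // Returns false at end of file or if the field is shorter than announced.
    bool read_field(std::istream& in, std::string& out) {
        std::size_t len = 0;
        if (!(in >> len)) return false;   // read the length indicator
        in.get();                         // skip the single separator character
        out.resize(len);                  // the string grows to exactly the announced size
        in.read(&out[0], static_cast<std::streamsize>(len));
        return static_cast<std::size_t>(in.gcount()) == len;
    }

    int main() {
        std::ifstream file("records.txt"); // hypothetical input file
        std::vector<std::string> fields;
        std::string field;
        while (read_field(file, field)) {
            fields.push_back(field);       // each attribute holds only what it needs
        }
        for (const std::string& f : fields) std::cout << f << '\n';
    }
    ```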

    Read the article

  • C99 variable length automatic array performance

    - by aaa
    Is there significant CPU/memory overhead associated with using automatic (C99-style variable-length) arrays with the g++/Intel compilers on a 64-bit x86 Linux platform? int function(int N) { double array[N]; ... } Specifically: the overhead compared to allocating the array beforehand (assuming the function is called multiple times), compared to using new, and compared to using malloc. The array size may range from roughly 1 KB to 16 KB, and stack overrun is not a problem.
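    For comparison, the four variants look like the sketch below. This is only an illustration: the VLA form is a GCC/ICC extension in C++, and fill_buffer is just a stand-in workload, not anything from the question:

    ```cpp
    #include <cstdlib>
    #include <vector>

    void fill_buffer(double* data, int n) {
        for (int i = 0; i < n; ++i) data[i] = i * 0.5;  // stand-in for real work
    }

    void with_vla(int n) {
        double array[n];                // C99-style VLA: allocation is a stack-pointer bump
        fill_buffer(array, n);
    }

    void with_new(int n) {
        double* array = new double[n];  // heap allocation on every call
        fill_buffer(array, n);
        delete[] array;
    }

    void with_malloc(int n) {
        double* array = static_cast<double*>(std::malloc(n * sizeof(double)));
        fill_buffer(array, n);
        std::free(array);
    }

    void with_reused_buffer(int n, std::vector<double>& scratch) {
        scratch.resize(n);              // allocated once, reused across calls
        fill_buffer(scratch.data(), n);
    }

    int main() {
        std::vector<double> scratch;
        for (int i = 0; i < 1000; ++i) {
            with_vla(2048);
            with_new(2048);
            with_malloc(2048);
            with_reused_buffer(2048, scratch);
        }
    }
    ```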

    Read the article

  • Visual Studio Solution: static or shared projects?

    - by goodrone
    When a whole project (solution) consists of multiple subprojects (.vcproj), what is the preferable way to tie them together: as static libraries or as shared libraries? Assuming that those subprojects are not used elsewhere, the shared-library approach shouldn't decrease memory usage or load time.

    Read the article

  • Is there a way to efficiently yield every file in a directory containing millions of files?

    - by Josh Smeaton
    I'm aware of os.listdir, but as far as I can gather, that gets all the filenames in a directory into memory and then returns the list. What I want is a way to yield a filename, work on it, and then yield the next one, without reading them all into memory. Is there any way to do this? I worry about the case where filenames change, new files are added, and files are deleted while using such a method. Some iterators prevent you from modifying the collection during iteration, essentially by taking a snapshot of the state of the collection at the beginning and comparing that state on each move operation. If there is an iterator capable of yielding filenames from a path, does it raise an error if there are filesystem changes (adding, removing, or renaming files within the iterated directory) that modify the collection?

    There could potentially be a few cases that cause the iterator to fail, and it all depends on how the iterator maintains state. Using S.Lott's example: filea.txt fileb.txt filec.txt. The iterator yields filea.txt. During processing, filea.txt is renamed to filey.txt and fileb.txt is renamed to filez.txt. When the iterator attempts to get the next file, if it were to use the filename filea.txt to find its current position in order to find the next file, and filea.txt is not there, what would happen? It may not be able to recover its position in the collection. Similarly, if the iterator were to fetch fileb.txt while yielding filea.txt, it could look up the position of fileb.txt, fail, and produce an error. If the iterator instead were able to somehow maintain an index, dir.get_file(0), then maintaining positional state would not be affected, but some files could be missed, as their indexes could be moved to an index 'behind' the iterator.

    This is all theoretical, of course, since there appears to be no built-in (Python) way of iterating over the files in a directory. There are some great answers below, however, that solve the problem by using queues and notifications.

    Edit: The OS of concern is Red Hat. My use case is this: Process A is continuously writing files to a storage location. Process B (the one I'm writing) will be iterating over these files, doing some processing based on the filename, and moving the files to another location.

    Edit: Definition of valid: Adjective 1. Well grounded or justifiable, pertinent. (Sorry S.Lott, I couldn't resist.) I've edited the paragraph in question above.
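    Since this question was asked, os.scandir (Python 3.5+, usable as a context manager from 3.6) has become the usual way to get a lazy iterator over directory entries; whether entries added or removed after the call are included is unspecified, which matches the concern above. A minimal sketch of the process-and-move loop described in the edit, with invented paths and a placeholder processing step:

    ```python
    import os
    import shutil

    def process(filename):
        print("processing", filename)      # placeholder for the real per-file work

    def process_incoming(src_dir, dst_dir):
        """Lazily iterate over src_dir, process each file, then move it to dst_dir."""
        with os.scandir(src_dir) as entries:   # yields entries without building a full list
            for entry in entries:
                if not entry.is_file():
                    continue
                process(entry.name)
                shutil.move(entry.path, os.path.join(dst_dir, entry.name))

    if __name__ == "__main__":
        process_incoming("/data/incoming", "/data/processed")  # illustrative paths
    ```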

    Read the article

  • If you stick to standard coding in .NET, is there reason to manually invoke the GC or run finalizers

    - by Matt
    If you stick to managed code and standard coding (nothing that does unconventional things with the CLR) in .NET, is there any reason to manually invoke the GC or request that finalizers be run on unreferenced objects? The reason I ask is that I have an app whose working set grows huge. I'm wondering if calling System.GC.Collect(); and System.GC.RunFinalizers(); would help, and whether it would force anything that wouldn't be done by the CLR normally anyway.
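    For reference, GC.RunFinalizers is not part of the public GC API; GC.WaitForPendingFinalizers is presumably what was meant. The usual "force everything now" sequence looks like the sketch below; whether it actually shrinks the working set is a separate question, since the CLR would eventually collect on its own:

    ```csharp
    using System;

    class ForceCollection
    {
        static void Main()
        {
            // Collect, let the finalizer thread drain its queue, then collect
            // again so objects kept alive only for finalization are reclaimed.
            GC.Collect();
            GC.WaitForPendingFinalizers();
            GC.Collect();

            Console.WriteLine("Managed heap after forced collection: {0} bytes",
                              GC.GetTotalMemory(false));
        }
    }
    ```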

    Read the article

  • How should a string-accepting interface look?

    - by ybungalobill
    Hello, this is a follow-up to this question. Suppose I write a C++ interface that accepts or returns a const string. I can use a const char* zero-terminated string: void f(const char* str); // (1) The other way would be to use an std::string: void f(const string& str); // (2) It's also possible to write an overload and accept both: void f(const char* str); // (3) void f(const string& str); Or even a template in conjunction with Boost string algorithms: template<class Range> void f(const Range& str); // (4)

    My thoughts are: (1) is not C++ish and may be less efficient when subsequent operations need to know the string length. (2) is bad because now f("long very long C string"); invokes construction of a std::string, which involves a heap allocation. If f uses that string just to pass it to some low-level interface that expects a C string (like fopen), then it is just a waste of resources. (3) causes code duplication, although one f can call the other depending on which is the more efficient implementation; however, we can't overload based on return type, as in the case of std::exception::what(), which returns a const char*. (4) doesn't work with separate compilation and may cause even larger code bloat. Choosing between (1) and (2) based on what's needed by the implementation is, well, leaking an implementation detail into the interface.

    The question is: what is the preferred way? Is there any single guideline I can follow? What's your experience?
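    Since this question was written, C++17 added std::string_view, which accepts string literals, std::string, and substrings through a single parameter without copying or heap allocation. A minimal sketch; count_spaces is just an illustrative function, not anything from the question:

    ```cpp
    #include <iostream>
    #include <string>
    #include <string_view>

    // One overload serves const char*, std::string, and substrings alike:
    // the view only points at existing characters, so nothing is allocated.
    std::size_t count_spaces(std::string_view str) {
        std::size_t n = 0;
        for (char c : str) {
            if (c == ' ') ++n;
        }
        return n;
    }

    int main() {
        std::string owned = "long very long C++ string";
        std::cout << count_spaces("long very long C string") << '\n'; // no std::string built
        std::cout << count_spaces(owned) << '\n';                     // no copy of the buffer
        // Caveat: a string_view is not guaranteed to be zero-terminated, so code
        // that forwards to a C API such as fopen still needs to make a copy.
    }
    ```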

    Read the article

  • Does anyone know of a good guide to configure GC in Java?

    - by evilpenguin
    I'm having trouble with a JVM running an app whose heap usage looks like a comb: it constantly jumps between 1.5 GB and 3 GB and slowly drifts toward higher values. I'm using the G1 GC algorithm but have no idea how to configure it. I do not have access to the code of the app I'm running and, needless to say, it's a rather large app. So, bottom line, does anyone know of a good guide to configuring GC in Java?
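    For what it's worth, here is a hedged starting point rather than a tuning recipe: pin the heap size so resizing is out of the picture, give G1 a pause-time goal, and turn on GC logging so the sawtooth pattern can actually be diagnosed. The logging flags shown are the Java 7/8-era ones, and the application jar is a placeholder:

    ```
    java -Xms3g -Xmx3g \
         -XX:+UseG1GC \
         -XX:MaxGCPauseMillis=200 \
         -Xloggc:gc.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps \
         -jar the-app.jar
    ```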

    Read the article
