Search Results

Search found 12588 results on 504 pages for 'memory allocation'.

  • Memory leak - debugger and memory analyzer disagreeing

    - by Joe
    There is a memory leak in my Android game. I've managed to narrow it down to a certain object, which has a list of objects to render on a texture. This object clears the list every time it draws, so I can't work out how it has managed to accumulate thousands of elements. When I check in the debugger it doesn't have all these thousands of elements - usually about 2-20, which is what I'd expect. The game definitely slows down progressively, but only if render-to-texture is enabled. A Memory Analyzer screenshot shows 6,111 items in the list, while the debugger shows 2. Can anyone help me find out what's wrong?

    Read the article

  • Manual memory allocation and purity

    - by Eonil
    Languages like Haskell have the concept of purity: in a pure function I can't mutate any global state. Haskell fully abstracts memory management, so memory allocation is not an issue there. But in languages that handle memory directly, like C++, it is very ambiguous to me: in those languages, memory allocation is a visible mutation. Yet if I treat creating a new object as an impure action, then almost nothing can be pure, and the concept of purity becomes almost useless. How should I handle purity in languages where memory is a visible, global object?
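
    To make the tension concrete, a small C sketch (the make_node helper is hypothetical) of why raw allocation is observable: calling the same function twice with the same argument yields two distinct pointers, so pointer identity alone reveals that some global state - the allocator's - has changed.

        #include <stdio.h>
        #include <stdlib.h>

        typedef struct Node { int value; } Node;

        /* Pure in intent: the result depends only on the argument... */
        Node *make_node(int value) {
            Node *n = malloc(sizeof *n);    /* ...but this consumes and mutates allocator state */
            if (!n) abort();
            n->value = value;
            return n;
        }

        int main(void) {
            Node *a = make_node(42);
            Node *b = make_node(42);
            /* Same input, observably different outputs: the addresses differ. */
            printf("a=%p b=%p same=%d\n", (void *)a, (void *)b, a == b);
            free(a);
            free(b);
            return 0;
        }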

    Read the article

  • Memory dump much smaller than available memory

    - by Daniel
    I have a Tomcat application server that is configured to create a memory dump on OOM, and it is started with -Xmx1024M, so a gigabyte should be available to it. Now I have found one such dump and it contains only 260MB of unretained memory. How is it possible that the dump is so much smaller than the amount of memory the JVM had available?

    Read the article

  • what kind of memory can be categorized as Modified Memory in Resource Monitor

    - by Kavin
    In Windows 7 and Windows 2008 R2 there is a new Resource Monitor that is very useful and powerful for monitoring the system. In the Memory section I see a category called Modified (orange). The official description is: "Memory whose contents must be written to disk before it can be used for another purpose." But I am still confused. What kinds of memory count as Modified? In which cases can we say that a piece of memory is Modified? Can anyone give me a specific example? Is the following guess correct: when a program wants to write something to disk, it actually writes the content to an I/O buffer, which is in memory; after the OS flushes this area of memory to disk, is that memory then Modified or Standby?

    Read the article

  • very large string in memory

    - by bushman
    Hi, I am writing a program that formats hundreds of MB of string data (nearing a gig) into XML, and I am required to return it as the response to an HTTP GET request. I am using a StringWriter/XmlWriter to build the XML from the records in a loop and returning stringWriter.ToString(). During testing I saw a few out-of-memory exceptions and I'm quite clueless about how to find a solution. Do you have any suggestions for a memory-optimized delivery of the response? Is there a memory-efficient way of encoding the data, or maybe of chunking it? I just cannot think of how to return it without building the whole thing into one huge string object. Thanks

    Read the article

  • Objects leaking immediately from allocation using either new or [[Object alloc] init];

    - by Sam
    While running Instruments to find leaks in my code, leaks pop up after I've loaded a file and populated an NSMutableArray with new objects, even though I am releasing the objects correctly. Sample code below:

        // NSMutableArray declared as a retained property in the parent class
        if (!mutableArray)
            mutableArray = [[NSMutableArray alloc] initWithCapacity:objectCount];
        else
            [mutableArray removeAllObjects];

        // Iterate through the read-in data and populate the NSMutableArray
        for (int i = 0; i < objectCount; i++) {
            // Initialize a new object with data
            MyObject *object = [MyObject new];
            // Add the object to the mutableArray
            [mutableArray addObject:object];
            // Release the object
            [object release];
        }

    I get a number of leaks from Instruments terminating at the addition of 'object' into 'mutableArray', but also including the allocation of 'object' and of 'mutableArray' itself. I don't get it. Not to mention, this happens on the first call of the enclosing method, so the NSMutableArray allocation is hit in the first branch, not the removeAllObjects selector. Lastly, does Core Foundation have a major bug that randomly creates CFStrings and mismanages their memory? My code does not even use those, and the leaks reported there have nothing to do with my code. Almost all of my applications so far deal with OpenGL (in case anyone knows of a threading issue that arises from trying to sync the back end of the program with the front end displaying the contents of an NSOpenGLView class, or whatever it is).

    Read the article

  • Simple dynamic memory allocation bug.

    - by M4design
    I'm sure you (pros) can identify the bug(s) in my code, and I would also appreciate any other comments on it. BTW, the code crashes when I run it.

        #include <stdlib.h>
        #include <stdio.h>
        #include <stdbool.h>

        typedef struct {
            int x;
            int y;
        } Location;

        typedef struct {
            bool walkable;
            unsigned char walked;   // number of times walked upon
        } Cell;

        typedef struct {
            char name[40];          // Name of maze
            Cell **grid;            // 2D array of cells
            int rows;               // Number of rows
            int cols;               // Number of columns
            Location entrance;
        } Maze;

        Maze *maz_new() {
            int i = 0;
            Maze *mazPtr = (Maze *)malloc(sizeof (Maze));
            if (!mazPtr) {
                puts("The memory couldn't be initialised, press ENTER to exit");
                getchar();
                exit(-1);
            } else {
                // allocating memory for the grid
                mazPtr->grid = (Cell **) malloc((sizeof (Cell)) * (mazPtr->rows));
                for (i = 0; i < mazPtr->rows; i++)
                    mazPtr->grid[i] = (Cell *) malloc((sizeof (Cell)) * (mazPtr->cols));
            }
            return mazPtr;
        }

        void maz_delete(Maze *maz) {
            int i = 0;
            if (maz != NULL) {
                for (i = 0; i < maz->rows; i++)
                    free(maz->grid[i]);
                free(maz->grid);
            }
        }

        int main() {
            Maze *ptr = maz_new();
            maz_delete(ptr);
            getchar();
            return 0;
        }

    Thanks in advance.
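
    For reference, a minimal sketch of one way the allocation could be fixed, assuming the caller is meant to supply the dimensions (per-row allocation checks omitted for brevity). The original maz_new reads mazPtr->rows and mazPtr->cols before they have ever been assigned, sizes the row-pointer array with sizeof(Cell) instead of sizeof(Cell *), and maz_delete never frees the Maze struct itself:

        Maze *maz_new(int rows, int cols) {
            Maze *mazPtr = malloc(sizeof *mazPtr);
            if (!mazPtr) {
                puts("Out of memory, press ENTER to exit");
                getchar();
                exit(EXIT_FAILURE);
            }
            mazPtr->rows = rows;                                    // set the dimensions before using them
            mazPtr->cols = cols;
            mazPtr->grid = malloc(sizeof *mazPtr->grid * rows);     // rows pointers to Cell
            for (int i = 0; i < rows; i++)
                mazPtr->grid[i] = malloc(sizeof *mazPtr->grid[i] * cols);   // cols Cells per row
            return mazPtr;
        }

        void maz_delete(Maze *maz) {
            if (maz != NULL) {
                for (int i = 0; i < maz->rows; i++)
                    free(maz->grid[i]);
                free(maz->grid);
                free(maz);                                          // don't forget the Maze itself
            }
        }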

    Read the article

  • Decreasing cached memory and increasing Free memory in RAM

    - by Greenhorn
    Hi, I'm using Windows 2007 Server, a 64-bit OS. I've taken a snapshot of my Task Manager with a minimum of processes running. It shows: Total memory 8190 MB, Cached memory 4315 MB, Free 3402 MB. So effectively I get only 3402 MB of the total RAM. My question: more than half is used as cached memory - is there any way I can decrease this cached memory and in turn increase my free memory? I need to do this because my application requires at least 5 GB of RAM, and it crashed when run on this system. Please give me a solution for this. Thanks in advance

    Read the article

  • NSCFString Memory Leak

    - by Lakshmie
    Hello, I have been fixing a lot of memory leaks but have been unsuccessful in solving this one. There are tons of NSCFString memory leaks reported, coming from [NSCFString substringWithRange:]. I have been checking all the string allocations and have released all of them in the appropriate places. The responsible library is Foundation. Has anyone encountered this problem before? Can anyone suggest how I should tackle this issue? Thanks, Lakshmie

    Read the article

  • SQL SERVER – Introduction to SQL Server 2014 In-Memory OLTP

    - by Pinal Dave
    In SQL Server 2014 Microsoft has introduced a new database engine component called In-Memory OLTP, aka project "Hekaton", which is fully integrated into the SQL Server Database Engine. It is optimized for OLTP workloads accessing memory-resident data. In-Memory OLTP lets us create memory-optimized tables, which in turn offer a significant performance improvement for typical OLTP workloads. The main objective of a memory-optimized table is to ensure that highly transactional tables can live in memory and remain in memory without losing a single record. The most significant part is that it still supports the majority of Transact-SQL statements, and Transact-SQL stored procedures can be compiled to machine code for further performance improvements on memory-optimized tables. The engine is designed for higher concurrency and minimal blocking: In-Memory OLTP alleviates locking by using a new type of multi-version optimistic concurrency control, and it substantially reduces waiting for log writes by generating far less log data and needing fewer log writes.

    Points to remember

    Memory-optimized tables refer to tables using the new data structures and keywords added as part of In-Memory OLTP.
    Disk-based tables refer to the normal tables we have always created in SQL Server; they use fixed-size 8 KB pages that are read from and written to disk as a unit.
    Natively compiled stored procedures refer to a new object type, supported by the In-Memory OLTP engine, that is converted to machine code and can further improve data access performance for memory-optimized tables. Natively compiled stored procedures can only reference memory-optimized tables; they cannot reference any disk-based table.
    Interpreted Transact-SQL stored procedures are what SQL Server has always used.
    Cross-container transactions refer to transactions that reference both memory-optimized tables and disk-based tables.
    Interop refers to interpreted Transact-SQL that references memory-optimized tables.

    Using In-Memory OLTP

    The In-Memory OLTP engine has been available as part of SQL Server 2014 since the June 2013 CTPs. Installation of In-Memory OLTP is part of the SQL Server setup application. The In-Memory OLTP components can only be installed with a 64-bit edition of SQL Server 2014; they are not available with 32-bit editions.

    Creating Databases

    Any database that will store memory-optimized tables must have a MEMORY_OPTIMIZED_DATA filegroup. This filegroup is specifically designed to store the checkpoint files needed by SQL Server to recover the memory-optimized tables, and although the syntax for creating it is almost the same as for a regular filestream filegroup, it must also specify the option CONTAINS MEMORY_OPTIMIZED_DATA. Here is an example of a CREATE DATABASE statement for a database that can support memory-optimized tables:

        CREATE DATABASE InMemoryDB
        ON PRIMARY (NAME = [InMemoryDB_data],
                    FILENAME = 'D:\data\InMemoryDB_data.mdf', SIZE = 500MB),
        FILEGROUP [SampleDB_mod_fg] CONTAINS MEMORY_OPTIMIZED_DATA
            (NAME = [InMemoryDB_mod_dir], FILENAME = 'S:\data\InMemoryDB_mod_dir'),
            (NAME = [InMemoryDB_mod_dir], FILENAME = 'R:\data\InMemoryDB_mod_dir')
        LOG ON (NAME = [SampleDB_log],
                FILENAME = 'L:\log\InMemoryDB_log.ldf', SIZE = 500MB)
        COLLATE Latin1_General_100_BIN2;

    The example above creates files on three different drives (D:, S: and R:) for the data files and the in-memory storage, so if you would like to run this code, change the drive and folder locations to suit your machine. Also notice that a binary Windows (non-SQL) collation was specified: a BIN2 collation is the only collation supported at this point for indexes on memory-optimized tables. It is also possible to add a MEMORY_OPTIMIZED_DATA filegroup to an existing database; use the commands below to achieve the same:

        ALTER DATABASE AdventureWorks2012
        ADD FILEGROUP hekaton_mod CONTAINS MEMORY_OPTIMIZED_DATA;
        GO
        ALTER DATABASE AdventureWorks2012
        ADD FILE (NAME = 'hekaton_mod', FILENAME = 'S:\data\hekaton_mod')
        TO FILEGROUP hekaton_mod;
        GO

    Creating Tables

    There is no major syntactical difference between creating a disk-based table and a memory-optimized table, but there are a few restrictions and a few new essential extensions. Essentially, any memory-optimized table must use the MEMORY_OPTIMIZED = ON clause, as shown in the CREATE TABLE example below.

    DURABILITY clause (SCHEMA_AND_DATA or SCHEMA_ONLY)

    A memory-optimized table should always be defined with a DURABILITY value, which can be either SCHEMA_AND_DATA or SCHEMA_ONLY, the former being the default. A memory-optimized table defined with DURABILITY = SCHEMA_ONLY will not persist its data to disk, which means data durability is compromised, whereas DURABILITY = SCHEMA_AND_DATA ensures that the data is persisted along with the schema.

    Indexing Memory Optimized Tables

    A memory-optimized table must always have an index if it is created with DURABILITY = SCHEMA_AND_DATA, and this can be achieved by declaring a PRIMARY KEY constraint at the time of creating the table. The following example shows a PRIMARY KEY index created as a HASH index, for which a bucket count must also be specified:

        CREATE TABLE Mem_Table
        (
            [Name]           VARCHAR(32) NOT NULL
                PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 100000),
            [City]           VARCHAR(32) NULL,
            [State_Province] VARCHAR(32) NULL,
            [LastModified]   DATETIME NOT NULL
        )
        WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

    As you can see, the query above uses the MEMORY_OPTIMIZED = ON clause to make sure the table is treated as a memory-optimized table and not just a normal table, and the DURABILITY = SCHEMA_AND_DATA clause so that it persists data along with the metadata. You can also notice that the table declares a PRIMARY KEY up front, which is mandatory for memory-optimized tables. We will talk more about HASH indexes and BUCKET_COUNT in later articles on this topic, which will focus on row and index storage in memory-optimized tables, so stay tuned for that as well.

    Now that we have covered the basics of memory-optimized tables and the key things to remember while using them, let's explore the performance gains using an example. I will be using the database created earlier in this article, InMemoryDB, in the demo exercise below.

        USE InMemoryDB
        GO
        -- Creating a disk based table
        CREATE TABLE dbo.Disktable
        (
            Id INT IDENTITY,
            Name CHAR(40)
        )
        GO
        CREATE NONCLUSTERED INDEX IX_ID ON dbo.Disktable (Id)
        GO
        -- Creating a memory optimized table with similar structure and DURABILITY = SCHEMA_AND_DATA
        CREATE TABLE dbo.Memorytable_durable
        (
            Id INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
            Name CHAR(40)
        )
        WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
        GO
        -- Creating another memory optimized table with similar structure but DURABILITY = SCHEMA_ONLY
        CREATE TABLE dbo.Memorytable_nondurable
        (
            Id INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
            Name CHAR(40)
        )
        WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY)
        GO
        -- Now insert 100000 records in dbo.Disktable and observe the time taken
        DECLARE @i_t BIGINT
        SET @i_t = 1
        WHILE @i_t <= 100000
        BEGIN
            INSERT INTO dbo.Disktable(Name) VALUES ('sachin' + CONVERT(VARCHAR, @i_t))
            SET @i_t += 1
        END
        GO
        -- Do the same inserts for memory table dbo.Memorytable_durable and observe the time taken
        DECLARE @i_t BIGINT
        SET @i_t = 1
        WHILE @i_t <= 100000
        BEGIN
            INSERT INTO dbo.Memorytable_durable VALUES (@i_t, 'sachin' + CONVERT(VARCHAR, @i_t))
            SET @i_t += 1
        END
        GO
        -- Now finally do the same inserts for memory table dbo.Memorytable_nondurable and observe the time taken
        DECLARE @i_t BIGINT
        SET @i_t = 1
        WHILE @i_t <= 100000
        BEGIN
            INSERT INTO dbo.Memorytable_nondurable VALUES (@i_t, 'sachin' + CONVERT(VARCHAR, @i_t))
            SET @i_t += 1
        END
        GO

    The three inserts above took 1.20 minutes, 54 seconds, and 2 seconds respectively to insert 100000 records on my machine with 8 GB of RAM. This proves the point that memory-optimized tables can definitely help businesses achieve better performance for their highly transactional tables, and a memory-optimized table with DURABILITY = SCHEMA_ONLY is even faster, as it does not persist its data to disk at all. Koenig Solutions is one of the few organizations which offer IT training on SQL Server 2014 and all its updates. Now I leave the decision on using memory-optimized tables to you. I hope you like this article and that it helped you understand the fundamentals of In-Memory OLTP.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • sizes of RAM, of virtual memory and of swap for 32-bit OS

    - by Tim
    If I understand correctly, a 32-bit OS (Ubuntu) can only address 4 GiB of memory, so if the RAM is larger than 4 GiB, only 4 GiB of it will be used and the rest is wasted. I am now confused about how this applies to virtual memory and to swap.

    Since virtual memory is swap + RAM, if the size of the virtual memory exceeds 4 GiB, will the excess be wasted on a 32-bit OS?

    If I now have to choose the size of my swap partition, is the fact that a 32-bit OS can only address 4 GiB of memory a factor to consider? Does the size of the swap have to be chosen with respect to that 4 GiB addressing limitation? Will any swap beyond 4 GiB always be wasted?

    Is virtual memory equal to RAM plus swap, or can virtual memory also use space on the hard drive outside the swap partition?

    Thanks and regards!

    Read the article

  • How do I mock memory allocation failures?

    - by Andrei Ciobanu
    I want to extensively test some pieces of C code for memory leaks. On my machine I have 4 GB of RAM, so it's very unlikely for a dynamic memory allocation to fail. Still, I want to see how the code behaves if memory allocation fails, and whether the recovery mechanism is strong enough. What do you suggest? How do I emulate an environment with lower memory specs? How do I mock my tests? EDIT: I want my tests to be code independent. I only have "access" to the return values of the different functions in the library I am testing. I am not supposed to write "test logic" inside the code I am testing.
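
    One common way to exercise these paths without touching the code under test is to interpose malloc via LD_PRELOAD and make it start failing after a threshold - a minimal sketch, assuming Linux/glibc (the file name and threshold are arbitrary, and a fuller wrapper would also cover calloc and realloc):

        /* failmalloc.c
         * build: gcc -shared -fPIC -o failmalloc.so failmalloc.c -ldl
         * run:   LD_PRELOAD=./failmalloc.so ./program_under_test
         */
        #define _GNU_SOURCE
        #include <dlfcn.h>
        #include <stddef.h>

        #define FAIL_AFTER 1000UL          /* let the first 1000 allocations succeed */

        static unsigned long calls = 0;

        void *malloc(size_t size)
        {
            static void *(*real_malloc)(size_t) = NULL;
            if (!real_malloc)
                real_malloc = (void *(*)(size_t))dlsym(RTLD_NEXT, "malloc");

            if (++calls > FAIL_AFTER)
                return NULL;               /* simulate an out-of-memory condition */

            return real_malloc(size);
        }

    Varying FAIL_AFTER (or randomizing it) across test runs walks the failure through different allocation sites. A complementary way to emulate a lower-memory machine is to cap the address space of the test process, e.g. with ulimit -v in the launching shell (setrlimit(RLIMIT_AS) under the hood), so that real allocations start failing on their own.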

    Read the article

  • Best memory allocation strategy for iOS ?

    - by Mr.Gando
    Hey guys, I'm debating with myself about memory allocation on iOS. I write most of my code in C++ and I really like using object pools, free lists, etc. in order to pre-allocate a lot of the stuff I would otherwise be constantly allocating and deallocating during the course of my game (particles, game entities, and so on). Still, iOS is not a console like the PSP, where I know for a fact that I'll get a fixed amount of memory; iOS issues "memory warnings" when the system needs memory. Does anyone have suggestions about this? Is it less of a concern now that the new iPod touch/iPhone 4 carry more RAM, or is it still a big deal? Thanks!
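
    For illustration, a minimal fixed-capacity free-list sketch in plain C (the Particle type and pool size are made up for the example); the same intrusive free-list pattern carries over directly to C++ object pools:

        #include <stddef.h>

        typedef struct Particle {
            float x, y, vx, vy;
            struct Particle *next;          /* intrusive free-list link */
        } Particle;

        #define POOL_SIZE 1024
        static Particle pool[POOL_SIZE];    /* one up-front allocation, no churn at runtime */
        static Particle *free_list = NULL;

        void pool_init(void) {
            for (size_t i = 0; i < POOL_SIZE; i++) {
                pool[i].next = free_list;
                free_list = &pool[i];
            }
        }

        Particle *particle_acquire(void) {
            if (!free_list)
                return NULL;                /* pool exhausted - caller decides what to do */
            Particle *p = free_list;
            free_list = p->next;
            return p;
        }

        void particle_release(Particle *p) {
            p->next = free_list;
            free_list = p;
        }

    A fixed budget like this also makes memory warnings easier to reason about: the pool's footprint is known up front, so a warning can be answered by releasing whole optional pools (if they are heap-allocated) rather than hunting down individual allocations.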

    Read the article

  • Does scheduling recycling app pool in IIS7 help the server conserve memory better?

    - by user29266
    Hello, I have a VPS (IIS7 on Windows Server 2008) with 40 websites and a SQL Server 2008 instance powering them, and only 2 GB of RAM. None of the sites are mission critical; they are all just demos. I often run into RAM issues on the server because each site does caching and generally uses a lot of memory. Would it make sense to set the application pools to recycle every 3 hours? I'm sure this would free up any memory leaks or processes left "hanging". Are there any other tips on this? Thank you very much!, Aron

    Read the article

  • Loads of memory in "standby" on Windows Server 2008 R2

    - by Jaap
    In our SharePoint farm, our web front end servers all have loads of memory in "standby" mode, meaning very little is available to our IIS worker process. We have 32 GB of RAM in each of the boxes, and standby memory will creep up to about 28 GB, whereas the IIS worker process only seems to be using about 2 GB. We've also seen the machine use the swap file extensively while this memory was in standby, so I am starting to think that the memory sitting in standby is keeping IIS from using it and forcing it to swap to disk, causing further performance problems. I used Sysinternals RAMMap to identify what is being kept in memory, and it told me that almost everything in standby memory is of type "Mapped File". When I sort the files listed under the File Summary tab in RAMMap by size, the largest files (around a few hundred MB each) are IIS log files and SharePoint log files. I would like to understand which process is loading these files into standby memory and why they are not being released. Doing an iisreset does not release the memory. Any ideas? Thanks!

    Read the article

  • Memory Usage of SQL Server

    - by Ashish
    The SQL Server instance on my server is using almost all the memory available on the physical server: if I have 8 GB of RAM, SQL Server is using 7.8 GB of it. I have read articles, and many similar questions on this forum, and I understand that this memory is reserved and that SQL Server is simply using it. But I have 2 identical servers with 2 SQL Servers - why is this happening on one SQL instance and not on the other? Also, when I run DBCC MEMORYSTATUS it shows: VM Reserved 8282008, VM Committed 537936. So from this we know that SQL Server reserved the whole 8 GB of memory, but why does VM Committed keep increasing? What I understand is that VM Committed is: "This value shows the overall amount of VAS that SQL Server has committed. VAS that is committed has been associated with physical memory." So this is the memory SQL Server has committed (from which I understand it is the physical memory the SQL Server instance is actually using). So I would like to know the reason behind this ever-increasing VM Committed memory on my server and not on the other. Thanks in advance.

    Read the article

  • Memory allocation in case of static variables

    - by eSKay
    I am always confused about static variables and the way memory allocation happens for them. For example:

        int a = 1;
        const int b = 2;
        static const int c = 3;

        int foo(int &arg) {
            arg++;
            return arg;
        }

    How is the memory allocated for a, b and c? What is the difference (in terms of memory) if I call foo(a), foo(b) and foo(c)?
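
    As a rough way to poke at this, a small C sketch (plain C, so no reference parameters) that prints the addresses of objects with static versus automatic storage duration; which sections they land in (.data, .rodata, the stack) depends on the compiler and platform:

        #include <stdio.h>

        int a = 1;                  /* static storage duration, typically writable data (.data)    */
        const int b = 2;            /* static storage duration, typically read-only data (.rodata) */
        static const int c = 3;     /* as b, but with internal linkage                              */

        int main(void)
        {
            int local = 4;          /* automatic storage duration (stack) */
            printf("a:     %p\n", (void *)&a);
            printf("b:     %p\n", (void *)&b);
            printf("c:     %p\n", (void *)&c);
            printf("local: %p\n", (void *)&local);
            return 0;
        }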

    Read the article

  • Yet another Memory Leak Issue (memory is still gone when program terminates) - C program on SLES

    - by user1426181
    I run my C program on SUSE Linux Enterprise; it compresses several thousand large files (between 10MB and 100MB in size), and it gets slower and slower as it runs (it runs multi-threaded with 32 threads on an Intel Sandy Bridge board). When the program completes and is run again, it is still very slow. While the program runs I can see the memory being depleted, which you would think is just a classic memory leak problem. But with a normal malloc()/free() mismatch I would expect all the memory to come back when the program terminates; instead, most of the memory is not reclaimed when the program completes. The free or top command shows "Mem: 63996M total, 63724M used, 272M free" when the program has slowed to a halt, but after termination the free memory only grows back to about 3660M. When the program is rerun, the free memory is quickly used up again. top shows that the program, while running, uses at most about 4% of the memory. I thought it might be a memory fragmentation problem, so I built a small test program that simulates all the memory allocation activity of the real program (with many randomized aspects built in - size/quantity), and it always returns all its memory upon completion. So I don't think that's it. Questions:

    1) Can a malloc()/free() mismatch lose memory permanently, i.e. even after the process completes?
    2) What other things in a C program (not C++) can cause permanent memory loss, i.e. memory that only comes back after a reboot, even after the program completes and the terminal window closes? I've read other posts about files not being closed causing problems, but I don't think I have that problem.
    3) Is it valid to be looking at top and free for the memory statistics, i.e. do they accurately describe the memory situation? They do seem to correspond to the slowness of the program.
    4) If the program only shows about 4% memory usage, will something like valgrind find this problem?
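
    One effect worth ruling out, given that the workload is heavy file I/O: data written to disk lingers in the kernel page cache, which free and top count under "used" even after the writing process has exited (it is reclaimed on demand, not at process exit). A small C sketch to observe this - the path and size are arbitrary, and the file should live on a disk-backed filesystem rather than tmpfs; compare the output of free -m before and after running it:

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        int main(void)
        {
            const size_t chunk = 1 << 20;              /* 1 MiB buffer  */
            const size_t total_mib = 2048;             /* write ~2 GiB  */
            char *buf = malloc(chunk);
            if (!buf) return 1;
            memset(buf, 'x', chunk);

            FILE *f = fopen("/data/pagecache_demo.bin", "w");
            if (!f) { free(buf); return 1; }
            for (size_t i = 0; i < total_mib; i++)
                fwrite(buf, 1, chunk, f);
            fclose(f);
            free(buf);

            /* After this process exits, `free` typically still shows the written
               data counted under cached/used until the kernel reclaims it. */
            return 0;
        }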

    Read the article

  • Virtual memory on Linux doesn't add up?

    - by Brendan Long
    I was looking at System Monitor on Linux and noticed that Firefox is using 441 MB of memory, and several other applications are using 274, 257, 232, etc. (adding up to over 3 GB of virtual memory). So I switch over to the Resources tab, and it says I'm using 462 MB of memory and not touching swap. I'm confused: what does the virtual memory amount mean if the programs aren't actually using it? I thought it might be memory they've requested but aren't using, but how would the OS know that? I can't think of any "I might need this much memory in the future" call.
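
    That guess is closer to the truth than it may seem: on Linux, malloc can hand out address space that only gets backed by physical pages once it is actually touched (demand paging plus overcommit). A minimal C sketch to watch the gap between VIRT and RES in top - the 1 GiB size and the sleep lengths are arbitrary, and it assumes a 64-bit build:

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <unistd.h>

        int main(void)
        {
            size_t one_gib = (size_t)1 << 30;
            char *block = malloc(one_gib);      /* VIRT grows by ~1 GiB immediately */
            if (!block) return 1;

            puts("Allocated 1 GiB, not touched yet - compare VIRT vs RES in top");
            sleep(30);

            memset(block, 0, one_gib);          /* pages are now faulted in */
            puts("Touched every page - RES has caught up");
            sleep(30);

            free(block);
            return 0;
        }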

    Read the article

  • Free / Cached / Available memory on Linux

    - by pkoraca
    I have read that Linux uses free memory for caching, to make the system faster. However, both Nagios and the Paessler PRTG monitoring system show me that my memory usage is critical. I could change the Nagios mem_usage script to sum free and cached memory, but would that be correct? I doubt that they have misunderstood Linux memory usage. Let's say I have 8 GB RAM: 5 GB is used, 2 GB is cached, and I have 1 GB of free memory. Should the really available memory be free + cached (3 GB)? If some new application needed an additional 3 GB of RAM, could it take everything from cache and free without using swap, or is there a minimum that has to stay in cache? A real example:

        $ cat /proc/meminfo
        MemTotal:        5984256 kB
        MemFree:          137052 kB
        Buffers:          140484 kB
        Cached:          3439616 kB
        SwapCached:          244 kB
        Active:          3148824 kB
        Inactive:        2341768 kB
        ...

    My monitoring tools show that I have 137 MB of free RAM, yet I have ~3.5 GB in cache. Thanks!
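
    As a rough cross-check, a small C sketch that sums MemFree + Buffers + Cached straight from /proc/meminfo (note that newer kernels also expose a MemAvailable line, which is the better number to monitor where it exists, since not all of the cache is freely reclaimable):

        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
            FILE *f = fopen("/proc/meminfo", "r");
            if (!f) { perror("fopen"); return 1; }

            char line[256];
            long value_kb, sum_kb = 0;
            while (fgets(line, sizeof line, f)) {
                char key[64];
                if (sscanf(line, "%63[^:]: %ld", key, &value_kb) == 2 &&
                    (!strcmp(key, "MemFree") || !strcmp(key, "Buffers") ||
                     !strcmp(key, "Cached")))
                    sum_kb += value_kb;
            }
            fclose(f);

            printf("MemFree + Buffers + Cached = %ld MB\n", sum_kb / 1024);
            return 0;
        }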

    Read the article

  • How UIWindow#addSubview can make memory leak?

    - by Jakub
    Hello, I have started to learn to use Instruments, but I cannot figure this out. After I start my application the UI shows up, I do nothing, and after a few seconds a memory leak is detected. When I look at the stack of the second leak and double-click on the cell related to my code, it points to the following line:

        [window addSubview:newPostUIViewController.view];

    from the method:

        - (void)applicationDidFinishLaunching:(UIApplication *)application {
            // creating view controller
            newPostUIViewController = [[NewPostUIViewController alloc] initWithNibName:@"NewPostView" bundle:nil];
            newPostUIViewController.title = @"Post it!";
            [window addSubview:newPostUIViewController.view];
            // Override point for customization after application launch
            [window makeKeyAndVisible];
        }

    I wonder how this can be the source of a leak. I release newPostUIViewController in the dealloc method of the PostItAppDelegate class. Any ideas how this could be explained?

    Read the article

  • Ubuntu Linux: Process swap memory and memory usage

    - by David Halter
    My Ubuntu eats more memory than the task manager is showing:

        sudo ps -e --format rss | awk 'BEGIN{c=0} {c+=$1} END{print c/1024}'
        1043.84

        free -m
                     total       used       free     shared    buffers     cached
        Mem:          3860       1878       1982          0         20        679
        -/+ buffers/cache:       1178       2681
        Swap:         2729       1035       1693

    That's strange. Can someone explain the difference? More importantly, I'd like to know how much memory a process is really using: not the virtual memory size, but the resident memory plus the swap used by the process. I have also tried the 'sz' format parameter of ps, but the sum of that is too high (5450 MB), and the 'size' parameter gives 8323.45 MB. Are there any other options? I really want to use this to determine which programs/processes are eating too much memory (and swap) and kill them, because hibernate might not work if the swap partition is too small.

    Read the article

  • Problem with memory leaks

    - by user191723
    Sorry, I'm having difficulty formatting the code to appear correctly here. I am trying to understand the readings I get from running Instruments on my app, which are telling me I am leaking memory. Quite a few leaks get reported from inside Foundation, AVFoundation, CoreGraphics etc. that I assume I have no control over and so should ignore, such as:

        Malloc 32 bytes: 96 bytes, AVFoundation, prepareToRecordQueue
        Malloc 128 bytes: 128 bytes, CoreGraphics, open_handle_to_dylib_path

    Am I correct in assuming these are something the system will resolve? But then there are leaks reported that I believe I am responsible for. This call is reported as leaking 2.31 KB:

        [self createAVAudioRecorder:frameAudioFile];

    Immediately followed by this:

        -(NSError*) createAVAudioRecorder: (NSString *)fileName {
            // flush recorder to start afresh
            [audioRecorder release];
            audioRecorder = nil;

            // delete existing file to ensure we have a clean start
            [self deleteFile: fileName];

            VariableStore *singleton = [VariableStore sharedInstance];

            // get full path to target file to create
            NSString *destinationString = [singleton.docsPath stringByAppendingPathComponent: fileName];
            NSURL *destinationURL = [NSURL fileURLWithPath: destinationString];

            // configure the recording settings
            NSMutableDictionary *recordSettings = [[NSMutableDictionary alloc] initWithCapacity:6];  // ****** LEAKING 384 BYTES
            [recordSettings setObject:[NSNumber numberWithInt:kAudioFormatLinearPCM] forKey: AVFormatIDKey];  // ***** LEAKING 32 BYTES
            float sampleRate = 44100.0;
            [recordSettings setObject:[NSNumber numberWithFloat: sampleRate] forKey: AVSampleRateKey];  // ***** LEAKING 48 BYTES
            [recordSettings setObject:[NSNumber numberWithInt:2] forKey:AVNumberOfChannelsKey];
            int bitDepth = 16;
            [recordSettings setObject: [NSNumber numberWithInt:bitDepth] forKey:AVLinearPCMBitDepthKey];  // ***** LEAKING 48 BYTES
            [recordSettings setObject:[NSNumber numberWithBool:YES] forKey:AVLinearPCMIsBigEndianKey];
            [recordSettings setObject:[NSNumber numberWithBool: NO] forKey:AVLinearPCMIsFloatKey];

            NSError *recorderSetupError = nil;

            // create the new recorder with target file
            audioRecorder = [[AVAudioRecorder alloc] initWithURL: destinationURL
                                                        settings: recordSettings
                                                           error: &recorderSetupError];  // ***** LEAKING 1.31KB

            [recordSettings release];
            recordSettings = nil;

            // check for errors
            if (recorderSetupError) {
                UIAlertView *alert = [[UIAlertView alloc] initWithTitle: @"Can't record"
                                                                message: [recorderSetupError localizedDescription]
                                                               delegate: nil
                                                      cancelButtonTitle: @"OK"
                                                      otherButtonTitles: nil];
                [alert show];
                [alert release];
                alert = nil;
                return recorderSetupError;
            }

            [audioRecorder prepareToRecord];  // ***** LEAKING 512 BYTES
            audioRecorder.delegate = self;
            return recorderSetupError;
        }

    I do not understand why there is a leak, as I release audioRecorder at the start and set it to nil, and I release recordSettings and set it to nil. Can anyone enlighten me please? Thanks

    Read the article
