Search Results

Search found 15939 results on 638 pages for 'low memory'.


  • Emgu CV - memory-leaks (memory consumption)

    - by martin pilch
    I am using EmguCV, the OpenCV wrapper for .NET. I am disposing of all created objects, but my app still uses more and more memory (in the release configuration too). I have profiled my app using .NET Memory Profiler and got this result: http://img532.imageshack.us/img532/2503/screenqv.png. Every object's instance count oscillates, but the GCHandle instance count keeps increasing until my machine is unusable. The garbage collector does not seem to release the memory. I am using VS 2008 Professional and Win7 Professional 32-bit, both up to date, and the latest stable version of EmguCV. I can post some app code if it will help. Thanks, and sorry for my English. Martin
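
    A pattern worth checking, offered as a hedged illustration rather than a diagnosis: interop wrappers like EmguCV often pin managed buffers with GCHandle, and a handle that is allocated but never freed leaks even when every wrapper object is correctly disposed. A minimal sketch of that failure mode:

        using System;
        using System.Runtime.InteropServices;

        static class HandleLeakSketch
        {
            // Pins a buffer for native code but never frees the handle:
            // the GCHandle count (and the pinned buffer) grows on every call,
            // which is exactly the curve a memory profiler shows.
            public static IntPtr PinBuffer(byte[] buffer)
            {
                GCHandle handle = GCHandle.Alloc(buffer, GCHandleType.Pinned);
                return handle.AddrOfPinnedObject();
                // Missing: handle.Free() -- without it, neither the handle
                // nor the buffer can ever be collected.
            }
        }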

    Read the article

  • As a programmer, should I know low and high-level programming languages?

    - by job
    I've been contacted to do some work remote-controlling LED displays over TCP/IP, but my experience and preparation are mostly in high-level programming languages. I said that to the person who contacted me about the work, and he told me: "if you call yourself a programmer you should know all these things". Should a programmer really know the details of low-level programming? Or can I treat it as a black-box concept, as theoretical knowledge, without necessarily doing it or implementing low-level solutions, given that low-level programming is not my expertise?

    Read the article

  • Why does my Delphi program's memory continue to grow?

    - by lkessler
    I am using Delphi 2009, which has the FastMM4 memory manager built in. My program reads in and processes a large dataset. All memory is freed correctly whenever I clear the dataset or exit the program; it has no memory leaks at all. Using the CurrentMemoryUsage routine given in spenwarr's answer to http://stackoverflow.com/questions/437683/how-to-get-the-memory-used-by-a-delphi-program, I have displayed the memory used by FastMM4 during processing. What seems to be happening is that memory use grows after every load/release cycle, e.g.:

        1,456 KB used after starting my program with no dataset.
        218,455 KB used after loading a large dataset.
        71,994 KB after clearing the dataset completely. (If I exit at this
        point, or at any point in this example, no memory leaks are reported.)
        271,905 KB used after loading the same dataset again.
        125,443 KB after clearing the dataset completely.
        325,519 KB used after loading the same dataset again.
        179,059 KB after clearing the dataset completely.
        378,752 KB used after loading the same dataset again.

    It seems that my program's memory use grows by about 53,400 KB with each load/clear cycle, and Task Manager confirms that this is actually happening. I have heard that FastMM4 does not always release all of the program's memory back to the operating system when objects are freed, so that it can keep some around for when it needs more. But this continual growth bothers me, and since no memory leaks are reported, I can't identify a problem. Does anyone know why this is happening, whether it is bad, and whether there is anything I can or should do about it?
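
    One way to see whether the growth is live data or just FastMM4 holding on to address space is to compare the CurrentMemoryUsage figure with what the RTL's GetMemoryManagerState reports as actually allocated. A rough sketch follows; the record field names are from memory and vary slightly between RTL versions, so check your System unit:

        uses SysUtils;

        procedure ReportAllocatedByProgram;
        var
          State: TMemoryManagerState;
          i: Integer;
          Total: Int64;
        begin
          GetMemoryManagerState(State);
          // Medium and large blocks are tracked directly...
          Total := State.TotalAllocatedMediumBlockSize +
                   State.TotalAllocatedLargeBlockSize;
          // ...small blocks are summed per block-size class.
          for i := Low(State.SmallBlockTypeStates) to High(State.SmallBlockTypeStates) do
            with State.SmallBlockTypeStates[i] do
              Inc(Total, Int64(UseableBlockSize) * AllocatedBlockCount);
          Writeln(Format('Actually allocated: %d KB', [Total div 1024]));
        end;

    If this number drops back to the baseline after clearing the dataset while CurrentMemoryUsage stays high, the difference is address space the memory manager is retaining rather than leaked data.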

    Read the article

  • Windows Physical Direct Memory Mapping

    - by chrisjleaf
    I'm a bit disappointed that there is almost no discussion of this anywhere I look, so I guess I'll have to ask. I'm writing a cross-platform memory benchmarking application which requires direct physical address mapping rather than virtual addressing.

    EDIT: The solution would look something like the Linux/Unix system calls:

        int fd = open("/dev/mem", O_RDONLY);
        mmap(NULL, len, PROT_READ, MAP_SHARED, fd, PHYSICAL_ADDRESS_OFFSET);

    which require the kernel either to give you a virtual page mapped to the desired physical address or to report that it failed. This does require supervisor privileges, but that is OK.

    I have seen a lot of information about shared memory and memory-mapped files, but all of these reside on disk and are thus not really useful when I'm trying to make a system-dependent read. It is very similar to writing an I/O driver, although I do not need write permission to the physical address. This site gives an example of how to do it at the driver level using the Windows Driver Kit: NT Insider: Sharing Memory between drivers and applications. That solution would probably require Visual Studio, which I currently do not have access to (I have downloaded the WDK API, but it complained about my use of GCC for Windows). I'm traditionally a Linux programmer, so I'm hoping there might be something really simple I'm missing. Thanks in advance if you know something I don't!
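
    For reference, a self-contained version of the Linux baseline sketched above; the physical offset is illustrative, and the program must run as root:

        #include <fcntl.h>
        #include <stdio.h>
        #include <sys/mman.h>
        #include <unistd.h>

        int main(void)
        {
            off_t phys_addr = 0x100000;               /* illustrative offset */
            size_t len = (size_t)sysconf(_SC_PAGESIZE);

            int fd = open("/dev/mem", O_RDONLY);
            if (fd < 0) { perror("open /dev/mem"); return 1; }

            void *p = mmap(NULL, len, PROT_READ, MAP_SHARED, fd, phys_addr);
            if (p == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

            /* ... timed reads against p would go here for the benchmark ... */

            munmap(p, len);
            close(fd);
            return 0;
        }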

    Read the article

  • G++ Multi-platform memory leak detection tool

    - by indyK1ng
    Does anyone know where I can find a memory leak detection tool for C++ which can be run either from the command line or as an Eclipse plug-in, on both Windows and Linux? I would like it to be easy to use, preferably one that doesn't override new(), delete(), malloc() or free(). Something like GDB if it's going to be command-line, though I don't remember GDB being used for detecting memory leaks. If there is a unit-testing framework which does this automatically, that would be great. This question is similar to others (such as http://stackoverflow.com/questions/283726/memory-leak-detection-under-windows-for-gnu-c-c), but I feel it is different because those ask for Windows-specific solutions or have solutions which I would rather avoid; I am looking for something a bit more specific here. Suggestions don't have to fulfil all the requirements, but as many as possible would be nice. Thanks.

    EDIT: Since this has come up: by "override" I mean anything which requires me to #include a library or which otherwise changes how C++ compiles my code. If it does its work at run time, so that running the code in a different environment won't affect anything, that would be great. Also, unfortunately, I don't have a Mac, so any suggestions for that are unhelpful, but thank you for trying. My desktop runs Windows (I have Linux installed, but my dual monitors don't work with it) and I'd rather not run Linux in a VM, although that is certainly an option. My laptop runs Linux, so I can use a Linux tool there, although I would definitely prefer sticking to my desktop, as the screen space is excellent for keeping all of the design documentation and requirements in view without having to move too much around.

    NOTE: While I may try answers, I won't mark one as accepted until I have tried the suggestion and found it satisfactory.

    EDIT2: I'm not worried about the cross-platform compatibility of my code; it's a command-line application using just the C++ standard libraries.
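
    One command-line tool that meets the Linux half of these constraints is Valgrind's Memcheck: it instruments the unmodified binary at run time, so nothing about how the code is compiled changes. A typical invocation (program name illustrative):

        valgrind --tool=memcheck --leak-check=full ./program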

    Read the article

  • How is an array stored in memory?

    - by George
    Out of an interest in delving deeper into how memory is allocated and stored, I have written an application that can scan memory address space, find a value, and write out a new value. I developed a sample application with the end goal of being able to programmatically locate my array and overwrite it with a new sequence of numbers. In this situation, I created a single-dimensional array with 5 elements, e.g.:

        int[] array = new int[] { 8, 7, 6, 5, 4 };

    I ran my application and searched for a sequence of the five numbers above. I was looking for any value that fell between 4 and 8, for a total of 5 numbers in a row. Unfortunately, the sequential numbers in my array matched hundreds of results, as the numbers 4 through 8, in no particular sequence, happened to be next to each other in memory in many places. Is there any way to distinguish that a set of numbers within memory represents an array, and not simply integers that happen to be adjacent? Is there any way of knowing that, if I find a certain value, the matching values following it belong to an array? I would assume that when I declare int[] array, it points at the first address of my array, which would provide some kind of metadata about what exists in the array, e.g.:

        0x123456789         metadata: 5 32-bit integers
        0x123456789 + 32    "8"
        0x123456789 + 64    "7"
        0x123456789 + 96    "6"
        0x123456789 + 128   "5"
        0x123456789 + 160   "4"

    Am I way off base?
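
    The hunch about metadata is broadly right for the CLR: a managed array is an object whose elements are preceded by a header (a sync block, a method-table pointer, and a length field), though the exact layout is an undocumented implementation detail that can change between runtimes. A hedged sketch that peeks at the length field on a 32-bit CLR; compile with /unsafe, and note the offset is an assumption, not a contract:

        using System;

        class ArrayHeaderProbe
        {
            static unsafe void Main()
            {
                int[] array = { 8, 7, 6, 5, 4 };
                fixed (int* first = array)
                {
                    // On a 32-bit CLR the array's Length field typically sits
                    // immediately before the first element.
                    Console.WriteLine(*(first - 1)); // usually prints 5
                }
            }
        }

    A scanner could use such a header (method-table pointer for int[] followed by a plausible length) as a signature to tell a real array from five coincidentally adjacent integers.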

    Read the article

  • Firefox consumes too much memory

    - by Vivek
    I have Firefox 11.0 and am running Ubuntu 11.10. Firefox takes up to 850 MB of RAM with only six or seven tabs open, all loaded with lightweight websites. I wonder why a browser would consume so much memory, and it keeps increasing its memory consumption over time. I have 3 GB of RAM, and most of the time Firefox consumes up to 30% of it. How do I fix this?

    EDIT: The output of the command sudo iotop -oPa, as asked by @Jippie:

    Read the article

  • EBS: OPP Out of memory issue...

    - by ashish.shrivastava
    The FO Processor is a little hungrier for memory than other Java processes. If the XSLT scalable option is not set and, at the same time, your RTF template is not well optimized, you are definitely going to hit an Out of Memory exception while working with large volumes of data. If the memory requirement is not too severe, you can set the OPP heap size using the following SQL statements. Check the current OPP JVM heap size:

        SQL> select DEVELOPER_PARAMETERS from FND_CP_SERVICES
             where SERVICE_ID = (select MANAGER_TYPE from FND_CONCURRENT_QUEUES
                                 where CONCURRENT_QUEUE_NAME = 'FNDCPOPP');

        DEVELOPER_PARAMETERS
        -----------------------------------------------------
        J:oracle.apps.fnd.cp.gsf.GSMServiceController:-mx512m

    Set the JVM heap size:

        SQL> update FND_CP_SERVICES
             set DEVELOPER_PARAMETERS = 'J:oracle.apps.fnd.cp.gsf.GSMServiceController:-mx2048m'
             where SERVICE_ID = (select MANAGER_TYPE from FND_CONCURRENT_QUEUES
                                 where CONCURRENT_QUEUE_NAME = 'FNDCPOPP');

        SQL> commit;

    You need to restart the Concurrent Manager to make the change effective. If this does not resolve the issue, you need to optimize the RTF template and set the XSLT scalable option to true.

    Read the article

  • Getting started with Oracle Database In-Memory Part III - Querying The IM Column Store

    - by Maria Colgan
    In my previous blog posts, I described how to install, enable, and populate the In-Memory column store (IM column store). This week's post focuses on how data is accessed within the IM column store. Let's take a simple query: "What is the most expensive air-mail order we have received to date?"

        SELECT Max(lo_ordtotalprice) most_expensive_order
        FROM   lineorder
        WHERE  lo_shipmode = 5;

    The LINEORDER table has been populated into the IM column store, and since we have no alternative access paths (indexes or views), the execution plan for this query is a full table scan of the LINEORDER table. You will notice that the execution plan has a new set of keywords, "IN MEMORY", in the access-method description in the Operation column. These keywords indicate that the LINEORDER table has been marked INMEMORY and we may use the IM column store in this query. What do I mean by "may use"? There are a small number of cases where we won't use the IM column store even though the object has been marked INMEMORY. This is similar to how the keyword STORAGE is used in Exadata environments. You can confirm that the IM column store was actually used by examining the session-level statistics, but more on that later. For now let's focus on how the data is accessed in the IM column store and why, for analytical queries, it's faster to access the data in the new column format than in the buffer cache. There are four main reasons why accessing the data in the IM column store is more efficient.

    1. Access only the column data needed. The IM column store only has to scan two columns, lo_shipmode and lo_ordtotalprice, to execute this query, while the traditional row store or buffer cache has to scan all of the columns in each row of the LINEORDER table until it reaches both the lo_shipmode and the lo_ordtotalprice columns.

    2. Scan and filter data in its compressed format. When data is populated into the IM column store it is automatically compressed using a new set of compression algorithms that allow WHERE clause predicates to be applied against the compressed formats. This means the volume of data scanned in the IM column store for our query will be far less than for the same query in the buffer cache, where the data is scanned in its uncompressed form, which could be 20X larger.

    3. Prune out any unnecessary data within each column. The fastest read you can execute is the read you don't do. In the IM column store a further reduction in the amount of data accessed is possible thanks to the In-Memory storage indexes (IM storage indexes) that are automatically created and maintained on each of the columns in the IM column store. IM storage indexes allow data pruning to occur based on the filter predicates supplied in a SQL statement. An IM storage index keeps track of the minimum and maximum values for each column in each In-Memory Compression Unit (IMCU). In our query the WHERE clause predicate is on the lo_shipmode column. The IM storage index on the lo_shipmode column is examined to determine whether our specified value, 5, could exist in each IMCU by comparing it to the minimum and maximum values maintained in the storage index. If the value 5 is outside the minimum and maximum range for an IMCU, the scan of that IMCU is avoided. For the IMCUs where the value 5 does fall within the min/max range, an additional level of data pruning is possible via the metadata dictionary created when dictionary-based compression is used on an IMCU. The dictionary contains a list of the unique column values within the IMCU. Since we have an equality predicate, we can easily determine whether 5 is one of the distinct column values or not. The combination of the IM storage index and dictionary-based pruning enables us to scan only the necessary IMCUs.

    4. Use SIMD to apply filter predicates. For the IMCUs that do need to be scanned, Oracle takes advantage of SIMD vector processing (Single Instruction, Multiple Data). Instead of evaluating each entry in the column one at a time, SIMD vector processing allows a set of column values to be evaluated together in a single CPU instruction. The column format used in the IM column store has been specifically designed to maximize the number of column entries that can be loaded into the vector registers on the CPU and evaluated in a single CPU instruction. SIMD vector processing enables Oracle Database In-Memory to scan billions of rows per second per core, versus the millions of rows per second per core that can be achieved in the buffer cache.

    I mentioned earlier in this post that in order to confirm the IM column store was used, we need to examine the session-level statistics. You can monitor these by querying the performance views v$mystat and v$statname. All of the statistics related to the In-Memory column store begin with 'IM'. You can see the full list of these statistics by typing:

        column display_name format a30

        SELECT display_name
        FROM   v$statname
        WHERE  display_name LIKE 'IM%';

    If we check the session statistics after executing our query:

        SELECT Max(lo_ordtotalprice) most_expensive_order
        FROM   lineorder
        WHERE  lo_shipmode = 5;

        SELECT display_name
        FROM   v$statname
        WHERE  display_name IN ('IM scan CUs columns accessed',
                                'IM scan segments minmax eligible',
                                'IM scan CUs pruned');

    As you can see, only 2 IMCUs were accessed during the scan, as the majority of the IMCUs (44) in the LINEORDER table were pruned out thanks to the storage index on the lo_shipmode column. In next week's post I will describe how you can control which queries use the IM column store and which don't. +Maria Colgan

    Read the article

  • Design review for application facing memory issues

    - by Mr Moose
    I apologise in advance for the length of this post, but I want to paint an accurate picture of the problems my app is facing and then pose some questions below. I am trying to address some self-inflicted design pain that is now leading to my application crashing due to out-of-memory errors.

    An abridged description of the problem domain is as follows:

    - The application takes in a "dataset" that consists of numerous text files containing related data.
    - An individual text file within the dataset usually contains approx 20 "headers" that hold metadata about the data it contains. It also contains a large tab-delimited section containing data that is related to data in one of the other text files in the dataset. The number of columns per file is very variable, from 2 to 256+ columns.

    The original application was written to allow users to load a dataset and map certain columns of each of the files, basically identifying the key information that shows how the files are related, as well as identifying a few expected column names. Once this is done, a validation process takes place to enforce various rules and ensure that all the relationships between the files are valid. Once that is done, the data is imported into a SQL Server database. The database design is an EAV (Entity-Attribute-Value) model, used to cater for the variable columns per file. I know EAV has its detractors, but in this case I feel it was a reasonable choice given the disparate data and the variable number of columns submitted in each dataset.

    The memory problem

    Given that the combined size of all the text files was at most about 5 megs, and in an effort to reduce the database transaction time, it was decided to read ALL the data from the files into memory and then:

    - perform all the validation whilst the data was in memory;
    - relate it using an object model;
    - start a DB transaction and write the key columns row by row, noting the Id of each written row (all tables in the database use identity columns), then apply the Id of the newly written row to all related data;
    - once all related data had been updated with the key information to which it relates, write those records using SqlBulkCopy. Due to our EAV model, we essentially have x columns by y rows to write, where x can be 256+ and rows often run into the tens of thousands;
    - once all the data is written without error (which can take several minutes for large datasets), commit the transaction.

    The problem now comes from the fact that we are receiving individual files containing over 30 megs of data. In a dataset, we can receive any number of files. We've started seeing datasets of around 100 megs coming in, and I expect it is only going to get bigger from here on in. With files of this size, data can't even be read into memory without the app falling over, let alone be validated and imported. I anticipate having to modify large chunks of the code to allow validation to occur by parsing files line by line, and am not exactly decided on how to handle the import and transactions.

    Potential improvements

    - I've wondered about using GUIDs to relate the data rather than relying on identity fields. This would allow data to be related prior to writing to the database, though it would certainly increase the storage required, especially in an EAV design. Would you think this is a reasonable thing to try, or should I simply persist with identity fields (natural keys can't be trusted to be unique across all submitters)?
    - Use of staging tables to get data into the database, performing the transaction only to copy data from the staging area to the actual destination tables (see the sketch after this list).

    Questions

    - For systems like this that import large quantities of data, how do you go about keeping transactions small? I've kept them as small as possible in the current design, but they are still active for several minutes and write hundreds of thousands of records in one transaction. Is there a better solution?
    - The tab-delimited data section is read into a DataTable to be viewed in a grid. I don't need the full functionality of a DataTable, so I suspect it is overkill. Is there any way to turn off various features of DataTables to make them more lightweight?
    - Are there any other obvious things you would do in this situation to minimise the memory footprint of the application described above?

    Thanks for your kind attention.
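
    On the staging-table idea, a minimal sketch of the shape this usually takes; the table names and connection string are hypothetical. The point is that the slow bulk load happens outside any transaction, and only the final set-based move is transactional:

        using System.Data;
        using System.Data.SqlClient;

        static class StagingLoader
        {
            public static void LoadViaStaging(IDataReader rows, string connectionString)
            {
                // 1. Stream rows into a staging table, no enclosing transaction.
                using (var bulk = new SqlBulkCopy(connectionString))
                {
                    bulk.DestinationTableName = "dbo.StagingAttributeValues"; // hypothetical
                    bulk.BatchSize = 5000;          // commit in small batches
                    bulk.WriteToServer(rows);       // stream, don't buffer a DataTable
                }

                // 2. One short transaction to move staged rows into the real table.
                using (var conn = new SqlConnection(connectionString))
                {
                    conn.Open();
                    using (var tx = conn.BeginTransaction())
                    using (var cmd = conn.CreateCommand())
                    {
                        cmd.Transaction = tx;
                        cmd.CommandText =
                            "INSERT INTO dbo.AttributeValues (EntityId, AttributeId, Value) " +
                            "SELECT EntityId, AttributeId, Value FROM dbo.StagingAttributeValues";
                        cmd.ExecuteNonQuery();
                        tx.Commit();
                    }
                }
            }
        }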

    Read the article

  • Applications affected by memory performance

    - by robotron
    I'm writing a paper on the topic of applications affected more by memory performance than by processor performance. I've got a lot written regarding the gap between the two; however, I can't seem to find anything about the applications themselves. I suppose these are applications that make a large number of memory references, but I have no idea what kinds of applications would make enough references for it to stand out. Perhaps databases? Can you please give me any pointers on how to proceed, or some links to papers? I'm really stuck.
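
    A tiny illustration of what "a large number of memory references" means in practice (illustrative code, not taken from any paper): the two loops below do the same arithmetic, but the second defeats the cache and the prefetcher, so its speed is set almost entirely by memory latency rather than by the processor.

        #include <stddef.h>

        /* Streams through memory: the prefetcher hides latency, CPU-bound. */
        long sum_sequential(const int *a, size_t n)
        {
            long s = 0;
            for (size_t i = 0; i < n; i++)
                s += a[i];
            return s;
        }

        /* Scattered accesses: roughly one cache miss per element once the
           working set exceeds the caches, so this loop is memory-bound. */
        long sum_scattered(const int *a, const size_t *idx, size_t n)
        {
            long s = 0;
            for (size_t i = 0; i < n; i++)
                s += a[idx[i]];
            return s;
        }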

    Read the article

  • Where's my memory going?

    - by Stu2000
    My machine keeps 'freezing' before eventually logging out, with all the programs exiting. This is rather annoying, and I think it's because I keep running out of memory. I am not running any custom software, just NetBeans, Chrome etc. (stuff I usually run on other Ubuntu computers without issue). For some reason my memory usage is through the roof, as seen here, but I can't quite figure out why. Here is a screenshot which may be useful, with htop and gnome-system-monitor open, as user and as root. I notice that my console-kit-daemon is taking up about a gig of 'virtual memory'. Is that normal? Any tips/advice will be helpful. In the meantime I have ordered 2 x 4 gig RAM sticks to try and just throw hardware at the issue.

    Read the article

  • Loading saved byte array to memory stream causes out of memory exception

    - by user2320861
    At some point in my program the user selects a bitmap to use as the background image of a Panel object. When the user does this, the program immediately draws the panel with the background image and everything works fine. When the user clicks "Save", the following code saves the bitmap to a DataTable object:

        MyDataSet.MyDataTableRow myDataRow = MyDataSet.MyDataTableRow.NewMyDataTableRow();
        // has a byte[] column named BackgroundImageByteArray
        using (MemoryStream stream = new MemoryStream())
        {
            this.Panel.BackgroundImage.Save(stream, ImageFormat.Bmp);
            myDataRow.BackgroundImageByteArray = stream.ToArray();
        }

    Everything works fine; there is no out-of-memory exception with this stream, even though it contains all the image bytes. However, when the application launches and loads the saved data, the following code throws an Out of Memory exception:

        using (MemoryStream stream = new MemoryStream(myDataRow.BackGroundImageByteArray))
        {
            this.Panel.BackgroundImage = Image.FromStream(stream);
        }

    The streams are the same length. I don't understand how one throws an out-of-memory exception and the other doesn't. How can I load this bitmap?

    P.S. I've also tried:

        using (MemoryStream stream = new MemoryStream(myDataRow.BackgroundImageByteArray.Length))
        {
            stream.Write(myDataRow.BackgroundImageByteArray, 0, myDataRow.BackgroundImageByteArray.Length);
            // throws OoM exception here
        }
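
    One GDI+ detail that may be relevant here, offered as a hedged guess rather than a confirmed diagnosis: Image.FromStream requires the stream to remain open for the lifetime of the Image, because GDI+ reads from it lazily, and GDI+ reports many of its failures (including decode problems) as OutOfMemoryException. A sketch that keeps the stream alive instead of disposing it in a using block:

        // Do NOT wrap this stream in a using block: GDI+ keeps reading
        // from it for as long as the Image lives.
        var stream = new MemoryStream(myDataRow.BackGroundImageByteArray);
        this.Panel.BackgroundImage = Image.FromStream(stream);
        // Dispose the stream only after the image itself is disposed.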

    Read the article

  • Memory allocation in case of static variables

    - by eSKay
    I am always confused about static variables and the way memory allocation happens for them. For example:

        int a = 1;
        const int b = 2;
        static const int c = 3;

        int foo(int &arg) {
            arg++;
            return arg;
        }

    How is the memory allocated for a, b and c? What is the difference (in terms of memory) if I call foo(a), foo(b) and foo(c)?
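
    For what it's worth, a hedged sketch of where a typical ELF toolchain places such objects; the C++ standard doesn't mandate any of this, and a compiler may fold never-addressed constants into immediates instead of storing them at all:

        int a = 1;              // .data: initialized, writable, lives for the whole program
        const int b = 2;        // usually .rodata (read-only image data)
        static const int c = 3; // .rodata as well, with internal linkage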

    Read the article

  • Is there an NSCFTimer memory leak?

    - by mystify
    I tracked down a memory leak with Instruments. I always end up with the information that the responsible library is Foundation. When I track it down in my code, I end up here, but there's nothing wrong with my memory management:

        - (void)setupTimer {
            // stop the timer if it is still there
            [self stopAnimationTimer];

            NSTimer *timer = [NSTimer scheduledTimerWithTimeInterval:0.2
                                                              target:self
                                                            selector:@selector(step:)
                                                            userInfo:nil
                                                             repeats:YES];
            self.animationTimer = timer; // retain property, -release in -dealloc
        }

    The animationTimer property retains the timer, and in -dealloc I release it. So does that look like a framework bug? I checked with iPhone OS 3.0 and 3.1; both have this problem every time I use NSTimer like this. Any idea what else could be the problem? (My leak scan interval was 0.1 seconds, but it's the same with 5 seconds.)
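
    Not an answer to the leak report itself, but a classic trap with this exact pattern (general Cocoa behavior, hedged): a scheduled repeating NSTimer retains its target, so the run loop keeps self alive until the timer is invalidated. If -stopAnimationTimer is only ever reached from -dealloc, dealloc can never run. A sketch of the usual shape of the fix:

        - (void)stopAnimationTimer
        {
            [self.animationTimer invalidate]; // releases the run loop's retain on self
            self.animationTimer = nil;
        }

        // Call this from a point that is guaranteed to run, e.g.
        // -viewWillDisappear:, rather than only from -dealloc.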

    Read the article

  • UIWebView memory management

    - by wolfrevo
    Hello, I have a problem with memory management. I am developing an application that makes heavy use of UIWebView. The app dynamically generates lots of UIWebViews while loading content from my server. Some of these UIWebViews are quite large and have a lot of pictures. If I use Instruments to detect leaks, I do not detect any. However, lots of objects are allocated, and I suspect that has to do with the UIWebViews. When the web views are released because they are no longer needed, it appears that not all memory is released: after a request to my server the app creates a UITableView and many web views (Instruments says about 8 MB); when the user taps back, all of them are released, but memory usage only decreases by about 2-3 MB, and after 5-10 minutes of using the app it crashes. Am I missing something? Does anyone know what could be happening? Thank you!

    Read the article

  • Does it take time to deallocate memory?

    - by jm1234567890
    I have a C++ program which, during execution, allocates about 3-8 GB of memory to store a hash table (I use tr1::unordered_map) and various other data structures. However, at the end of execution, there is a long pause before returning to the shell. For example, at the very end of my main function I have:

        std::cout << "End of execution" << endl;

    But the execution of my program goes something like:

        $ ./program
        do stuff...
        End of execution
        [long pause of maybe 2 min]
        $                          <-- returns to shell

    Is this expected behavior, or am I doing something wrong? I'm guessing that the program is deallocating the memory at the end. But commercial applications which use large amounts of memory (such as Photoshop) do not exhibit this pause when you close them. Please advise :)
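
    A quick way to confirm where the pause goes is to time the container's teardown explicitly. A sketch; the element count is illustrative and needs a machine with a few GB of free RAM:

        #include <cstdio>
        #include <ctime>
        #include <tr1/unordered_map>

        int main()
        {
            std::tr1::unordered_map<long, long> *table =
                new std::tr1::unordered_map<long, long>();
            for (long i = 0; i < 20000000; ++i)
                (*table)[i] = i;

            std::clock_t t0 = std::clock();
            delete table; // tens of millions of node deallocations happen here
            std::printf("teardown took %.1f s\n",
                        double(std::clock() - t0) / CLOCKS_PER_SEC);
            return 0;
        }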

    Read the article

  • How does VirtualBox's memory usage work?

    - by DrFredEdison
    I've been running several VMs with VirtualBox, and memory usage is reported differently from various perspectives, so I'm having trouble figuring out how much memory my VMs actually use. Here is an example: I have a VM running Windows 7 (as the guest OS) on my Windows XP host machine.

    - The host machine has 3 GB of RAM.
    - The guest VM is set up to have a base memory of 1 GB.
    - If I run Task Manager on the guest OS, I see memory usage of 430 MB.
    - If I run Task Manager on the host OS, I see 3 processes that seem to belong to VirtualBox:
      VirtualBox.exe (1), using 60 MB of memory (this one seems to have the most CPU usage);
      VirtualBox.exe (2), using 20 MB of memory;
      VBoxSvc.exe, using 11.5 MB of memory.
    - While running the VM, the host OS's memory usage is about 2 GB.
    - When I shut down the VM, the host OS's memory usage goes back down to about 900 MB.

    So clearly there are some huge differences here. I really don't understand how the guest OS can use 400+ MB while the host OS only shows about 75 MB allocated to the VM. Are there other processes used by VirtualBox that aren't as obviously named? Also, I'd like to know: if I run a machine with 1 GB, is that going to take 1 GB away from my host OS, or only the amount of memory the guest machine is currently using?

    UPDATE: Someone expressed distrust over my memory usage numbers, and I'm not sure if that distrust was directed at me or at my host OS's Task Manager's reporting (which is perhaps the culprit), but for any skeptics, here is a screenshot of those processes on the host machine:

    Read the article

  • Bios Memory settings and Virtualization + Ubuntu (Unofficial Answers Welcome) [closed]

    - by TardisGuy
    I am attempting to optimize my (main, windowless) Ubuntu system for my uses. I will detail my questions below; I understand this might be the wrong place to ask them. If so, my apologies, and I thank you so much for your patience. Thanks to all the volunteers that have helped me learn Ubuntu over the years (since 5.10). This is a "short" list of questions I have been trying to figure out for some time. If you feel you can answer one but not another, that's already more than I could ask for. I have written this up in a format for easy navigation to the important points. If you would be so kind, please format answers as follows: "question 1: _ _ _ _ _" or "question 1-a: _ _ _ _ _". If you want to simply link me to relevant information rather than type up something really detailed, that would be more than awesome!

    Memory-specific questions. Goal: maximizing memory bandwidth to better perform in virtualization and large-file compression (possible conflict?).

    1. Ganged vs. unganged. "Which is better?" is relative, I know. But what about ganged vs. unganged, with or without bank/channel interleaving?
       a. Speculation: if I understand correctly, channel interleaving has something to do with using both channels to read or write in a kind of "striping" pattern, as opposed to a standard half-duplex operation (probably wrong). But wouldn't ganged channels make this irrelevant?
    2. Memory (bank) interleaving. Does it have a downside? Does it require a ratio of clocks (if I run 4x4 GB DDR3)?
       a. If I'm reading correctly (trying to learn), this is designed to spread operations between latency cycles to work around the higher latency of "normal" operation.
       b. However, it seems to me that it has to be divisible by fractions of a master clock? So if I run memory at 1333 MHz, the mean between 2 (physical) banks would operate every (roughly) 600 MHz? Warning, possibly utter nonsense: (1333/2, interleaving to act like 1 memory module per 2 sticks of a total of 4 sticks, meaning 2 channels @ 4).
       c. Which makes me wonder if there would be leftover clock cycles the system would have to "truncate/balance" or something? But I'm certain there's a feature somewhere I don't understand.

    Virtualization questions.

    3. AMD-V: the option of IOMMU. I turned it on; why do I have an extra option of "64MB"? If IOMMU is on but "64MB" is "disabled", is it on? (I have scoured Google and I still don't know.)
       a. I think I understand that it's supposed to (kind of) set aside a part of RAM to act as a faster interactive zone for "stuff" (USB, graphics, and... what?).
       b. I am using Nvidia graphics on AMD (I used the kernel options "iommu=pt iommu=1"; pt for "passthrough"? No idea what they do; found them on Google to solve a boot-up issue).
       c. Will this option help me use low-latency sound hardware, like my MIDI keyboard?
    4. Can you recommend any additional tweaks? (See the sketch after this list.)
       a. sysctl settings?
       b. swap settings?

    Congrats, you've reached the end. Thanks for reading.
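
    On question 4, the usual starting points are a couple of sysctl knobs; the values below are illustrative settings people commonly experiment with, not recommendations for this particular machine:

        # /etc/sysctl.conf -- illustrative values only
        vm.swappiness = 10          # prefer dropping page cache over swapping
        vm.vfs_cache_pressure = 50  # hold on to dentry/inode caches longer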

    Read the article

  • XNA WP7 Texture memory and ContentManager

    - by jlongstreet
    I'm trying to get my WP7 XNA game's memory under control and under the 90 MB limit for submission. One target I identified was UI textures, especially fullscreen ones, that would never get unloaded, so I moved UI texture loads to their own ContentManager so I can unload them. However, this doesn't seem to affect the value of Microsoft.Phone.Info.DeviceExtendedProperties.GetValue("ApplicationCurrentMemoryUsage"), so it doesn't look like the memory is actually being released. Example: splash screens. In Game.LoadContent():

        Application.Instance.SetContentManager("UI"); // set the current content manager

        // load the splash textures
        for (int i = 0; i < mSplashTextures.Length; ++i)
        {
            mSplashTextures[i] = Application.Instance.Content.Load<Texture2D>(msSplashTextureNames[i]);
        }

        // set the content manager back to the global one
        Application.Instance.SetContentManager("Global");

    When the initial load is done and the title screen starts, I do:

        Application.Instance.GetContentManager("UI").Unload();

    The three textures take about 6.5 MB of memory. Before unloading the UI ContentManager, I see that ApplicationCurrentMemoryUsage is at 34.29 MB. After unloading the ContentManager (and doing a full GC.Collect()), it's still at 34.29 MB. But after that, I load another fullscreen texture (for the title screen background) and memory usage still doesn't change. Could it be keeping the memory for these textures allocated and reusing it?

    EDIT: a very simple test:

        protected override void LoadContent()
        {
            // Create a new SpriteBatch, which can be used to draw textures.
            spriteBatch = new SpriteBatch(GraphicsDevice);

            PrintMemUsage("Before texture load: ");
            // TODO: use this.Content to load your game content here
            red = this.Content.Load<Texture2D>("Untitled");
            PrintMemUsage("After texture load: ");
        }

        private void PrintMemUsage(string tag)
        {
            Debug.WriteLine(tag + Microsoft.Phone.Info.DeviceExtendedProperties.GetValue("ApplicationCurrentMemoryUsage"));
        }

        protected override void Update(GameTime gameTime)
        {
            // Allows the game to exit
            if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed)
                this.Exit();

            // TODO: Add your update logic here
            if (count++ == 100)
            {
                GC.Collect();
                GC.WaitForPendingFinalizers();
                PrintMemUsage("Before CM unload: ");
                this.Content.Dispose();
                GC.Collect();
                GC.WaitForPendingFinalizers();
                PrintMemUsage("After CM unload: ");
                red = null;
                spriteBatch.Dispose();
                GC.Collect();
                GC.WaitForPendingFinalizers();
                PrintMemUsage("After SpriteBatch Dispose(): ");
            }

            base.Update(gameTime);
        }

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.CornflowerBlue);

            // TODO: Add your drawing code here
            if (red != null)
            {
                spriteBatch.Begin();
                spriteBatch.Draw(red, new Vector2(0, 0), Color.White);
                spriteBatch.End();
            }

            base.Draw(gameTime);
        }

    This prints something like (it changes every time):

        Before texture load: 7532544
        After texture load: 10727424
        Before CM unload: 9875456
        After CM unload: 9953280
        After SpriteBatch Dispose(): 9953280

    Read the article

  • What is the maximum memory a process (MySQL) can consume on a 32-bit OS?

    - by mmattax
    I have MySQL running on a 32-bit RHEL box. The server itself has 4 GB of total memory, with 2 GB allocated to MySQL. I would like to know the maximum amount of memory I can put in the box and how much of that I can allocate to MySQL. I have heard both 2 GB and 4 GB quoted as the per-process limit on a 32-bit OS. Ultimately, I'd like to know whether I can increase the memory available to MySQL without upgrading to a 64-bit OS.
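
    For context on why both figures circulate (general 32-bit arithmetic rather than RHEL-specific advice): a 32-bit process has 2^32 bytes = 4 GB of virtual address space in total, and the kernel reserves a slice of every process's address space. With the stock 3 GB / 1 GB user/kernel split on 32-bit Linux, that works out to:

        4 GB total virtual address space (2^32)
      - 1 GB reserved for the kernel mapping
      = roughly 3 GB of user space per process, minus code, stacks and fragmentation

    PAE lets the machine itself address more than 4 GB of physical RAM, but it does not raise the per-process virtual address space limit.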

    Read the article

  • High Load - Low IO - Low CPU usage

    - by devup
    I have a system whose load is rather high. As you can see from the top output below, CPU usage and I/O are negligible:

        top - 17:31:59 up 4 days, 2:34, 2 users, load average: 1.00, 0.99, 1.00
        Tasks: 71 total, 1 running, 70 sleeping, 0 stopped, 0 zombie
        Cpu(s): 2.0%us, 2.0%sy, 0.0%ni, 95.9%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 960720k total, 707288k used, 253432k free, 67328k buffers
        Swap: 2811896k total, 2644k used, 2809252k free, 528928k cached

          PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM   TIME+  COMMAND
        15310 root      20   0  2512 1128  888 R  2.1  0.1  0:00.05 top

    I would appreciate any assistance with isolating the cause(s) of high load when I/O and CPU are not factors.
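
    A general Linux note that may help in interpreting these numbers (background knowledge, not a diagnosis of this particular box): the load average counts not only runnable tasks but also tasks in uninterruptible sleep (state D), which consume neither CPU nor visible I/O bandwidth while they wait. One quick way to look for them:

        # list tasks currently in uninterruptible sleep (state D)
        ps -eo state,pid,cmd | awk '$1 == "D"'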

    Read the article

  • The Hot-Add Memory Hogs

    - by Andrew Clarke
    One of the more difficult tasks when virtualizing a server is to determine the amount of memory the hypervisor should assign to the virtual machine. This requires accurate monitoring and, because of the consequences of setting the value too low, there is a great temptation to err on the side of over-provisioning. This results in fewer guest VMs; in fact, with more accurate memory provisioning, many virtual environments could support 30% more VMs. In order to achieve a better consolidation (aka VM density) ratio, Windows Server 2008 R2 SP1 has introduced what Microsoft calls 'Dynamic Memory'. This means that, beyond its start-up allocation, the RAM assigned to a guest virtual machine can be allowed to vary according to demand, changing dynamically while the VM is running based on the workload of the applications running inside. If demand outstrips supply, memory is rationed according to the 'memory weight' assigned to each guest VM. By this mechanism, memory becomes a shared resource that can be reallocated automatically as demand patterns vary. Unlike VMware's Memory Overcommit technology, the sum of all the memory allocations to each virtual machine will not exceed the total memory of the host computer. This is fine for applications that are self-regulating in their demands for memory, releasing it back into the 'pool' when not under peak load. Other applications, however, such as SQL Server Standard and Enterprise, are by nature memory hogs under high workload: they will grab hot-add memory while running under load and then never release it. This requires more careful setting-up, and the SQLOS team have provided guidelines for configuring SQL Server in virtual environments. Whereas VMware's Memory Overcommit is well proven in a number of different configurations, Hyper-V's 'Dynamic Memory' is new. So far, the indications are that it will improve the business case for virtualizing, and it is probably a far more intuitive technology for the average IT professional to grasp. It is certainly worth testing to see whether it works for you.

    Read the article

  • Python memory leaks

    - by Fragsworth
    I have a long-running script which, if left to run long enough, will consume all the memory on my system. Without going into details about the script, I have two questions:

    1. Are there any "best practices" to follow which will help prevent leaks from occurring?
    2. What techniques are there to debug memory leaks in Python? (See the sketch below.)
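
    For the second question, a minimal sketch using only the standard library's gc module; the growth count is a rough signal for narrowing down leaky code paths, not a precise leak report:

        import gc

        gc.set_debug(gc.DEBUG_LEAK)  # make the collector report uncollectable objects

        def leak_check(work):
            """Run `work` and report how many tracked objects survived it."""
            gc.collect()
            before = len(gc.get_objects())
            work()
            gc.collect()
            after = len(gc.get_objects())
            print("live objects grew by %d" % (after - before))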

    Read the article
