Search Results

Search found 12357 results on 495 pages for 'memory lane'.

Page 8/495 | < Previous Page | 4 5 6 7 8 9 10 11 12 13 14 15  | Next Page >

  • C++ Memory Leak, Can't find where

    - by Nicholas
    I'm using Visual Studio 2008 to develop an OpenGL window. I've created several classes for building a skeleton: one for joints, one for skin, one for a Body (which holds several joints and skins), and one for reading a skel/skin file. Within each of my classes I use pointers for most of my data, most of which are allocated with = new int[XX], and each class has a destructor that releases them with delete[]. In my GLUT display function I declare a Body, open the files and draw them, then delete the Body at the end of the display. But there's still a memory leak somewhere in the program: as time goes on, its memory usage keeps increasing at a consistent rate, which I'm interpreting as something not getting deleted. I'm not sure if it's something in the GLUT display function that isn't deleting the Body class, or something else. I've followed the steps for memory leak detection in Visual Studio 2008 and it doesn't report any leak, but I'm not 100% sure it's working right for me. I'm not fluent in C++, so there may be something I'm overlooking. Can anyone see it?
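    A minimal C++ sketch of the pattern described above, with hypothetical stand-ins for the poster's classes rather than the actual code: holding per-joint data in std::vector instead of raw new int[] arrays means every Body created inside the display callback cleans itself up, and accidental copies of the classes no longer leak or double-delete (raw owning pointers without a copy constructor, the "Rule of Three", are a very common cause of exactly this kind of steady growth).

        #include <cstddef>
        #include <vector>

        // Hypothetical stand-ins for the joint/skin/Body classes in the question.
        struct Joint {
            std::vector<int> indices;                  // replaces: int* indices = new int[XX];
            explicit Joint(std::size_t n) : indices(n) {}
        };                                             // no destructor needed: the vector frees itself

        struct Body {
            std::vector<Joint> joints;                 // owns its joints; copying is safe
        };

        void display() {                               // GLUT display callback (sketch)
            Body body;                                 // automatic object, destroyed on every return
            body.joints.push_back(Joint(64));
            // ... read the skel/skin files, draw ...
        }                                              // nothing to delete by hand here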

    Read the article

  • Can this code cause a memory leak (Arduino)

    - by tbraun89
    I have an Arduino project and I created this struct:

        struct Project {
            boolean status;
            String name;
            struct Project* nextProject;
        };

    In my application I parse some data and create Project objects. To keep them in a list, each Project object except the last holds a pointer to the nextProject. This is the code where I add new projects:

        void RssParser::addProject(boolean tempProjectStatus, String tempData) {
            if (!startProject) {
                startProject = true;
                firstProject.status = tempProjectStatus;
                firstProject.name = tempData;
                firstProject.nextProject = NULL;
                ptrToLastProject = &firstProject;
            } else {
                ptrToLastProject->nextProject = new Project();
                ptrToLastProject->nextProject->status = tempProjectStatus;
                ptrToLastProject->nextProject->name = tempData;
                ptrToLastProject->nextProject->nextProject = NULL;
                ptrToLastProject = ptrToLastProject->nextProject;
            }
        }

    firstProject is a private instance variable, defined in the header file like this: Project firstProject; So if no project has been added yet I use firstProject to add a new one; if firstProject is already set I use the nextProject pointer. I also have a reset() method that deletes the pointers to the projects:

        void RssParser::reset() {
            delete ptrToLastProject;
            delete firstProject.nextProject;
            startProject = false;
        }

    After each parsing run I call reset(), but the problem is that the memory used is not released. If I comment out the addProject method there are no issues with my memory. Can someone tell me what could be causing the memory leak?
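    As written, reset() only releases the last node and the first heap node: with a single project it calls delete on &firstProject (which was never allocated with new), with exactly two it deletes the same node twice, and with longer lists the nodes in between are never freed. A minimal sketch of a leak-free reset(), assuming the same RssParser members as above, walks the list and deletes every node that came from new:

        void RssParser::reset() {
            Project* node = firstProject.nextProject;   // first heap-allocated node, if any
            while (node != NULL) {
                Project* next = node->nextProject;      // remember the rest before freeing this node
                delete node;                            // also releases the node's String buffer
                node = next;
            }
            firstProject.nextProject = NULL;
            ptrToLastProject = NULL;
            startProject = false;
        }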

    Read the article

  • Android - Memory leak when dynamically building UI with image resource backgrounds

    - by Rich
    I have an Activity that I swear is leaking memory. The app I'm working on does a lot with images, so I've had to be pretty stingy with memory when working directly with Bitmaps. I added an Activity, and now if you use this new Activity it basically puts me over the edge with mem usage and I end up throwing the "Bitmap exceeds VM budget" exception. If you never launch this Activity, everything is smooth as it was previously. I started reading about memory leaks, and I think that I have a similar situation to what is described in the article in the Android docs. I'm dynamically creating a bunch of image views and adding a BackgroundDrawable from the resources and adding an OnClickListener as well. I imagine I have to do some cleanup when the Activity hits onPause in its life cycle, but I'd like to know specifically what is the correct way. Here is the code that should demonstrate the objects I'm working with...

        LinearLayout templateContainer;
        . . .
        ImageView imgTemplatePreview = (ImageView) item.findViewById(R.id.imgTemplatePreview);
        . . .
        imgTemplatePreview.setBackgroundDrawable(getResources().getDrawable(previewId));
        imgTemplatePreview.setOnClickListener(imgClick);
        templateContainer.addView(item);

    Read the article

  • SQL SERVER – Plan Cache and Data Cache in Memory

    - by pinaldave
    I get the following question almost every time I go for consultations or training, and I often end up providing the scripts to my clients and attendees. Instead of writing a new blog post, in this single post I am going to cover both scripts and link to the original blog posts where I have explained them in detail.

    Plan Cache in Memory

        USE AdventureWorks
        GO
        SELECT [text], cp.size_in_bytes, plan_handle
        FROM sys.dm_exec_cached_plans AS cp
        CROSS APPLY sys.dm_exec_sql_text(plan_handle)
        WHERE cp.cacheobjtype = N'Compiled Plan'
        ORDER BY cp.size_in_bytes DESC
        GO

    Further explanation of this script is over here: SQL SERVER – Plan Cache – Retrieve and Remove – A Simple Script

    Data Cache in Memory

        USE AdventureWorks
        GO
        SELECT COUNT(*) AS cached_pages_count,
               name AS BaseTableName, IndexName, IndexTypeDesc
        FROM sys.dm_os_buffer_descriptors AS bd
        INNER JOIN (
            SELECT s_obj.name, s_obj.index_id, s_obj.allocation_unit_id, s_obj.OBJECT_ID,
                   i.name IndexName, i.type_desc IndexTypeDesc
            FROM (
                SELECT OBJECT_NAME(OBJECT_ID) AS name, index_id, allocation_unit_id, OBJECT_ID
                FROM sys.allocation_units AS au
                INNER JOIN sys.partitions AS p
                    ON au.container_id = p.hobt_id AND (au.TYPE = 1 OR au.TYPE = 3)
                UNION ALL
                SELECT OBJECT_NAME(OBJECT_ID) AS name, index_id, allocation_unit_id, OBJECT_ID
                FROM sys.allocation_units AS au
                INNER JOIN sys.partitions AS p
                    ON au.container_id = p.partition_id AND au.TYPE = 2
            ) AS s_obj
            LEFT JOIN sys.indexes i
                ON i.index_id = s_obj.index_id AND i.OBJECT_ID = s_obj.OBJECT_ID
        ) AS obj
            ON bd.allocation_unit_id = obj.allocation_unit_id
        WHERE database_id = DB_ID()
        GROUP BY name, index_id, IndexName, IndexTypeDesc
        ORDER BY cached_pages_count DESC;
        GO

    Further explanation of this script is over here: SQL SERVER – Get Query Plan Along with Query Text and Execution Count

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Pinal Dave, SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL Tagged: SQL Memory

    Read the article

  • Why isn't DSM for unstructured memory done today?

    - by sinned
    Ages ago, Dijkstra invented IPC through mutexes, which then somehow led to shared memory (SHM) in Multics (which afaik had the necessary mmap first). Then computer networks came up and DSM (distributed SHM) was invented for IPC between computers. So DSM is basically an unstructured memory region (like a SHM) that magically gets synchronized between computers without the application programmer taking action. Implementations include Treadmarks (unofficially dead now) and CRL. But then someone thought this is not the right way to do it and invented Linda & tuplespaces. Current implementations include JavaSpaces and GigaSpaces. Here, you have to structure your data into tuples. Other ways to achieve similar effects may be the use of a relational database or a key-value store like RIAK. Although someone might argue otherwise, I don't consider these DSM, since there is no coherent memory region where you can put data structures as you like; instead you have to structure your data, which can be hard if it is contiguous, and administration like locking cannot be done for hard-coded parts (= tuples, ...). Why is there no DSM implementation today, or am I just unable to find one?

    Read the article

  • How much slow and fast flash memory does a "flash memory" device have?

    - by gsc-frank
    Trying to figure out which of my flash drives is best to use with ReadyBoost, I realized that I don't know how much fast flash memory each of my flash drives has. One can read: "In some situations, you might not be able to use all of the memory on your device to speed up your computer. For example, some flash memory devices contain both slow and fast flash memory, but ReadyBoost can only use fast flash memory to speed up your computer." From http://windows.microsoft.com/is-IS/windows7/Using-memory-in-your-storage-device-to-speed-up-your-computer

    Read the article

  • Windows 7 memory usage

    - by lydonchandra
    Physical memory (MB) for Windows 7:

        Total       4021
        Cached      1113
        Available    768
        Free         174
        Memory used 3.25 GB

    At this point, Windows 7 asks me to close some programs because "system memory is low". From my understanding of the articles I've read, I still have 768 MB of free memory, so why does Windows 7 complain? Also, what does Cached memory refer to? Is this part of memory that Windows 7 has reserved for itself, meaning it's free for Windows 7 to use (and meaning I have about 768 + 1113 MB of free memory)?

    Read the article

  • sysbench memory test on ec2 small instance

    - by caribio
    I'm seeing a problem with sysbench memory test (the default version that's compiled in). This is on Ubuntu Maverick, sysbench installed via apt-get install sysbench. Running the same thing on Ubuntu @ Rackspace worked just as expected. While the CPU and I/O tests worked fine on EC2 servers, the memory test just runs without doing anything (notice the 0M in the test results). The instance used was the publicly available 'stock' Ubuntu image with no changes to it:

        ./ec2-run-instances ami-ccf405a5 --instance-type m1.small --region us-east-1 --key mykey

    Supplying more arguments (such as: --memory-block-size=1K --memory-total-size=102400M) didn't help. What am I doing wrong? Thanks.

        sysbench --num-threads=4 --test=memory run
        sysbench 0.4.12: multi-threaded system evaluation benchmark

        Running the test with following options:
        Number of threads: 4

        Doing memory operations speed test
        Memory block size: 1K

        Memory transfer size: 0M

        Memory operations type: write
        Memory scope type: global
        Threads started!
        Done.

        Operations performed: 0 ( 0.00 ops/sec)
        0.00 MB transferred (0.00 MB/sec)

        Test execution summary:
            total time:                          0.0003s
            total number of events:              0
            total time taken by event execution: 0.0000
            per-request statistics:
                min: 18446744073709.55ms
                avg: 0.00ms
                max: 0.00ms

        Threads fairness:
            events (avg/stddev):         0.0000/0.00
            execution time (avg/stddev): 0.0000/0.00

    Read the article

  • Linux released memory

    - by user59088
    If my process allocates a big chunk of memory and then deallocates it, would top or gnome-system-monitor show that the memory usage of that process decreased, or will the kernel still reserve that memory for the process? What I see is that I am deallocating memory, but gnome-system-monitor still displays growing memory for my program, and I can't find a memory leak on my end. I want to know whether it is simply not displaying the released memory, or whether there really is a memory leak on my end.
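    A minimal C++ sketch of the effect described above, assuming Linux with glibc: after free() the allocator often keeps the pages in its own free lists for reuse, so the resident size shown by top or gnome-system-monitor may not drop even though nothing is leaked; glibc's malloc_trim() asks it to hand free heap pages back to the kernel.

        #include <cstdlib>
        #include <cstring>
        #include <malloc.h>    // malloc_trim (glibc-specific)
        #include <unistd.h>    // sleep
        #include <vector>

        int main() {
            // Many small blocks come from the main heap, which glibc usually keeps
            // around for reuse after free() instead of returning it to the kernel.
            std::vector<char*> blocks;
            for (int i = 0; i < 200000; ++i) {
                char* p = static_cast<char*>(std::malloc(1024));
                std::memset(p, 1, 1024);               // touch the memory so it becomes resident
                blocks.push_back(p);
            }
            char* pin = static_cast<char*>(std::malloc(1024));   // keeps the top of the heap in use
            sleep(10);                                 // watch the process in top: roughly 200 MB resident

            for (char* p : blocks) std::free(p);       // nothing leaked, yet RSS typically stays high here
            sleep(10);

            malloc_trim(0);                            // glibc: release free heap pages back to the kernel
            sleep(10);                                 // RSS usually drops now
            std::free(pin);
            return 0;
        }

    Single allocations above the mmap threshold (around 128 KB by default) are returned to the kernel immediately on free(), which is why one huge malloc/free pair may not reproduce this.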

    Read the article

  • What are the advantages of registered memory?

    - by odd parity
    I'm browsing for a few low-end servers for a startup and I'm a bit confused about the different memory types. The advantage of ECC is clear - single-bit error correction. When it comes to registered memory it seems more vague, especially in systems that support both registered and unbuffered memory. A Google search mostly finds copies of the Wikipedia article, which states that registered memory chips "...place less electrical load on the memory controller and allow single systems to remain stable with more memory modules than they would have otherwise". However I can't find any quantification of this. What I'm wondering about is: Is registered memory an improvement over unbuffered when it comes to soft error rate, or is it purely about the maximum number of modules supported? If yes, at what point (amount of modules or GB of memory) do these improvements start to become noticeable? For a specific example, the HP ProLiant DL 120 G6 server manual states that maximum supported memory configuration is 16 GB unbuffered (4x4GB) or 12 GB registered (6x2GB). In this case I'd rather have the extra 4GB of memory if the reliability difference is negligible.

    Read the article

  • Memory Usage in LINUX

    - by Incredible
    I have a Debian system with 8 GB of memory. When I run top it shows 7.9 GB of memory used and the rest free. If I add up the memory usage of all the programs listed in top, they hardly sum up to around 50 MB. So where is the rest of the memory being used? Can I get more detailed information about memory usage? What is a better way to check it?
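    The usual explanation is that the kernel uses otherwise idle RAM for the page cache, which top counts as "used" even though it is reclaimable the moment applications need it. A minimal C++ sketch, assuming a Linux /proc/meminfo with the usual MemTotal/MemFree/Buffers/Cached fields (values in kB), that separates the two:

        #include <fstream>
        #include <iostream>
        #include <map>
        #include <sstream>
        #include <string>

        int main() {
            std::ifstream meminfo("/proc/meminfo");
            std::map<std::string, long> kb;                // field name -> value in kB
            std::string line;
            while (std::getline(meminfo, line)) {
                std::istringstream in(line);
                std::string key;
                long value;
                if (in >> key >> value) {
                    key.erase(key.size() - 1);             // drop the trailing ':'
                    kb[key] = value;
                }
            }
            long total = kb["MemTotal"], memFree = kb["MemFree"];
            long cache = kb["Buffers"] + kb["Cached"];
            std::cout << "used by programs (approx.): " << (total - memFree - cache) / 1024 << " MB\n"
                      << "page cache + buffers:       " << cache / 1024 << " MB\n"
                      << "completely free:            " << memFree / 1024 << " MB\n";
        }

    Running free -m shows roughly the same breakdown (the "-/+ buffers/cache" line) without writing any code.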

    Read the article

  • Calculating memory footprints using /proc/sysvipc/shm

    - by MarkTeehan
    This is for a SLES 10 database server. One of my servers runs three databases and three app servers; I am analyzing how their shared memory segments grow and shrink to avoid intermittent out-of-memory scenarios. "Top" is not helpful for this since its calculations for RES and VIRT are inconsistent. Instead I am matching up the contents of /proc/sysvipc/shm with the memory usage reported by the database admin console: I save the contents of /proc/sysvipc/shm and then total up "bytes" for all of the segments belonging to the offending userid. This is a large server with hundreds of segments and tens (or hundreds) of GB of allocated memory per userid. However it doesn't match up - the database management software claims to be using around 25% more memory than the total I calculate. Negligible swap space is in use, so I am ignoring that. I am running it as root so I am sure I see all shared memory segments. My question is: is all (significant) allocated memory recorded in /proc/sysvipc/shm, or is this only shared memory (and not "un-shared" memory)? If this is incorrect, what is the correct way to calculate the total allocated memory for each userid? Also: I believe doing a 'cat' on this file locks server IPC. I check it every 5 seconds - is it likely that this frequency could be problematic? Thanks! Mark Teehan, Singapore
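    A hedged C++ sketch of the per-userid total described above, assuming the usual /proc/sysvipc/shm column layout (size in the fourth column, uid in the eighth; the header line in the file shows the actual order on your kernel):

        #include <fstream>
        #include <iostream>
        #include <map>
        #include <sstream>
        #include <string>
        #include <vector>

        int main() {
            std::ifstream shm("/proc/sysvipc/shm");
            std::string line;
            std::getline(shm, line);                           // skip the header line
            std::map<long, unsigned long long> bytesPerUid;    // uid -> total segment bytes
            while (std::getline(shm, line)) {
                std::istringstream in(line);
                std::vector<std::string> col;
                std::string field;
                while (in >> field) col.push_back(field);
                if (col.size() < 8) continue;
                bytesPerUid[std::stol(col[7])] += std::stoull(col[3]);   // uid, size
            }
            for (const auto& e : bytesPerUid)
                std::cout << "uid " << e.first << ": "
                          << e.second / (1024 * 1024) << " MB of SysV shared memory\n";
        }

    Note that /proc/sysvipc/shm only covers System V shared memory segments; process heaps, stacks, private mappings and POSIX shared memory (/dev/shm) never appear there, which could account for part of the roughly 25% gap.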

    Read the article

  • Appserver runs out of memory

    - by sarego
    We have been facing Out of Memory errors in our app server for some time. We see the used heap size increasing gradually until it finally reaches the available heap size. This happens every 3 weeks, after which a server restart is needed to fix it. Upon analysis of the heap dumps we find the problem to be objects used in JSPs. Can JSP objects be the real cause of app server memory issues? How do we free up JSP objects (objects which are being instantiated using useBean or other tags)? We have a clustered WebSphere app server with 2 nodes and an IHS.

    Read the article

  • How to free up memory?

    - by sarego
    We have been facing Out of Memory errors in our app server for some time. We see the used heap size increasing gradually until it finally reaches the available heap size. This happens every 3 weeks, after which a server restart is needed to fix it. Upon analysis of the heap dumps we find the problem to be objects used in JSPs. Can JSP objects be the real cause of app server memory issues? How do we free up JSP objects (objects which are being instantiated using useBean or other tags)? We have a clustered WebSphere app server with 2 nodes and an IHS.

    Read the article

  • Memory allocation included in API

    - by gurugio
    If there is a 'struct foo' and an API which handles foo, which style is the more flexible and convenient API?

    1) The API only initializes foo; the user declares foo or allocates memory for it. This style is like pthread_mutex_init/pthread_mutex_destroy.

        /* example 1 */
        struct foo a;
        init_foo(&a);

        /* example 2 */
        struct foo *a;
        a = malloc(sizeof(struct foo));
        init_foo(a);

    2) The API allocates the memory and the user just gets the pointer. This is like getaddrinfo/freeaddrinfo.

        /* example */
        struct foo *a;
        get_foo(&a);
        put_foo(a);
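    A hedged side-by-side sketch of the two styles (hypothetical helper names): the caller-allocates style pairs init with a destroy that only releases what init acquired, while the API-allocates style pairs a create with a destroy that also frees the object itself.

        #include <stdlib.h>

        struct foo { int value; };

        /* Style 1: caller owns the storage; the API only sets it up / tears it down. */
        void init_foo(struct foo *f) { f->value = 0; }
        void destroy_foo(struct foo *f) { (void)f; /* release members only, never f itself */ }

        /* Style 2: the API owns allocation and deallocation. */
        int get_foo(struct foo **out) {
            *out = (struct foo *)malloc(sizeof(struct foo));
            if (*out == NULL) return -1;               /* caller can detect allocation failure */
            init_foo(*out);
            return 0;
        }
        void put_foo(struct foo *f) { destroy_foo(f); free(f); }

        int main(void) {
            struct foo a;                              /* style 1: stack storage, no heap traffic */
            init_foo(&a);
            destroy_foo(&a);

            struct foo *b;                             /* style 2: heap storage hidden behind the API */
            if (get_foo(&b) == 0) put_foo(b);
            return 0;
        }

    Style 1 lets callers choose stack, static or embedded storage and avoids a heap allocation per object; style 2 can keep struct foo opaque so its layout may change without recompiling callers. Many libraries end up offering both, much like the pthread_mutex and getaddrinfo examples above.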

    Read the article

  • How to make a memory dump in .net?

    - by SDReyes
    How do you obtain a memory dump from a given memory address in the format:

        Address     | Hexadecimal representation                      | ASCII representation
        --------------------------------------------------------------------------------------
        0x637132687 | 00 00 00 00 00 00 00 00 45 21 65 78 32 F5 12 6C | ....... ahsnfdas
        0x637132703 | 00 00 00 00 00 00 00 00 45 21 65 78 32 F5 12 6C | ....... ahsnfdas
        0x637132719 | 00 00 00 00 00 00 00 00 45 21 65 78 32 F5 12 6C | ....... ahsnfdas
        0x637132735 | 00 00 00 00 00 00 00 00 45 21 65 78 32 F5 12 6C | ....... ahsnfdas

    Do you know any API/framework/tool for the work?
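    The question is about .NET, but purely to illustrate the requested output format, here is a hedged C++ sketch that dumps an arbitrary buffer as address, hex bytes and ASCII, 16 bytes per row; once the bytes are in a byte[] the same formatting logic translates directly to C#.

        #include <cctype>
        #include <cstddef>
        #include <cstdio>

        // Print 'len' bytes starting at 'data' as: address | hex bytes | ASCII
        void hex_dump(const unsigned char* data, std::size_t len) {
            for (std::size_t row = 0; row < len; row += 16) {
                std::printf("%p | ", static_cast<const void*>(data + row));
                for (std::size_t i = 0; i < 16; ++i) {                  // hex column
                    if (row + i < len) std::printf("%02X ", data[row + i]);
                    else               std::printf("   ");
                }
                std::printf("| ");
                for (std::size_t i = 0; i < 16 && row + i < len; ++i) { // ASCII column
                    unsigned char c = data[row + i];
                    std::printf("%c", std::isprint(c) ? c : '.');
                }
                std::printf("\n");
            }
        }

        int main() {
            const char sample[] = "An example buffer long enough to span a couple of rows.";
            hex_dump(reinterpret_cast<const unsigned char*>(sample), sizeof(sample));
        }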

    Read the article

  • JQuery or javascript to find memory usage of page

    - by Tauren
    Is there a way to find out how much memory is being used by a web page, or by my jQuery application? Here's my situation: I'm building a data-heavy web app using a jQuery frontend and a RESTful backend that serves data in JSON. The page is loaded once, and then everything happens via Ajax. The UI provides users with a way to create multiple tabs within the UI, and each tab can contain lots and lots of data. I'm considering limiting the number of tabs they can create, but was thinking it would be nice to only limit them once memory usage has gone above a certain threshold.

    Read the article

  • Received memory warning and app crashes - iphone

    - by Anand Gautam
    I am creating an app using ARC, but it is crashing with "Received memory warning". The app works fine in the simulator, but on an iPhone device, if I run the app for a few minutes and then do anything, it crashes straight away. I have checked my app with Xcode Instruments: my app folder size is 6 MB, but total memory allocation shows 63 MB in Instruments. Because of this, presentViewController:animated:completion: is getting slow during navigation. Does anyone have any idea why this is happening?

    Read the article

  • Memory Allocation - Arduino

    - by Joey Arnold Andres
    I'm new to this low-level stuff and currently learning Arduino. I'm using an Arduino Mega 2560, and in our course we are practicing memory management. I'm a pro at memory management on a PC, but somehow I'm having crazy problems here on the Arduino. For instance: the Arduino has 8192 B, and I'm trying to overflow it with uint16_t, so I made an array of 8192/16, which is 512. So I did uint16_t A[512+1]; and expected that to cause an overflow. What is wrong with my concept?
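    A hedged sketch of the arithmetic, assuming the ATmega2560's 8192 bytes of SRAM: uint16_t is 16 bits, i.e. 2 bytes, so the array size should be divided by sizeof(uint16_t) rather than by 16, and the stack plus every other global shares the same SRAM.

        #include <stdint.h>

        uint16_t A[512 + 1];                          // (512 + 1) * 2 = 1026 bytes -- fits with room to spare
        // uint16_t B[8192 / sizeof(uint16_t) + 1];   // 4097 * 2 = 8194 bytes -- this is what would not fit
                                                      // (the build or runtime fails rather than giving a
                                                      //  tidy "overflow")

        void setup() {
            Serial.begin(9600);
            Serial.print(F("bytes used by A: "));
            Serial.println(sizeof(A));                // prints 1026
        }

        void loop() {}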

    Read the article

  • PHP Array Efficiency and Memory Clarification

    - by CogitoErgoSum
    When declaring an array in PHP, the indexes may be created out of order, i.e.:

        $array[1] = 1;
        $array[19] = 2;
        $array[4] = 3;

    My question: in creating an array like this, is the length 19 with nulls in between? If I attempted to get $array[3], would it come back as undefined or throw an error? Also, how does this affect memory: would the memory of 3 indexes be taken up, or 19? Also, currently a developer wrote a script with 3 arrays, FailedUpdates[], FailedDeletes[] and FailedInserts[]. Is it more efficient to do it this way, or to use one associative array controlling several sub-arrays:

        "Failures" array() {
            ["Updates"] => array() {
                [0] => 12
                [1] => 41
            }
            ["Deletes"] => array() {
                [0] => 122
                [1] => 414
                [1] => 43
            }
            ["Inserts"] => array() {
                [0] => 12
            }
        }

    Read the article

  • Another memory leak question

    - by James
    Hello, my application keeps running for 4 to 6 hours. During this time there is no continuous increase in memory or anything similar. Then, after 4 to 6 hours, I start getting EOutOfMemory exceptions. Even then, only 900 MB out of 3 GB of RAM is used according to Task Manager, and the application itself is not using more than 200 MB. So why am I getting the EOutOfMemory error? Does it mean that a memory leak is not necessarily visible in Task Manager? Regards

    Read the article

  • When do I need to deallocate memory? C++

    - by extintor
    I am using this code inside a class to make a WebBrowser control visit a website:

        void myClass::visitWeb(const char *url) {
            WCHAR buffer[MAX_LEN];
            ZeroMemory(buffer, sizeof(buffer));
            MultiByteToWideChar(CP_ACP, MB_ERR_INVALID_CHARS, url, strlen(url),
                                buffer, sizeof(buffer)-1);
            VARIANT vURL;
            vURL.vt = VT_BSTR;
            vURL.bstrVal = SysAllocString(buffer);
            // webbrowser navigate code...
            VariantClear(&vURL);
        }

    Do I need to do some memory deallocation here? I see vURL is being deallocated by VariantClear, but should I deallocate memory for buffer? I've been told that in another function (one returning bool) in the same app I shouldn't deallocate anything because everything is cleared out when the function returns true/false, but what happens in this void function?
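    A hedged sketch of the ownership rules in play above: buffer is a stack array, so it is released automatically when visitWeb returns, and because the BSTR was handed to the VARIANT (vt = VT_BSTR), VariantClear releases it via SysFreeString; only a BSTR that never goes into a VARIANT needs an explicit SysFreeString.

        #include <windows.h>
        #include <oleauto.h>

        void bstr_ownership_sketch(const WCHAR* text) {
            WCHAR local[64] = L"stack buffer";   // automatic storage: freed on return, nothing to delete
            (void)local;

            // Case 1: the BSTR is owned by a VARIANT -- VariantClear releases it.
            VARIANT v;
            VariantInit(&v);
            v.vt = VT_BSTR;
            v.bstrVal = SysAllocString(text);
            // ... use v ...
            VariantClear(&v);                    // calls SysFreeString on v.bstrVal

            // Case 2: a bare BSTR -- release it yourself.
            BSTR b = SysAllocString(text);
            // ... use b ...
            SysFreeString(b);
        }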

    Read the article

  • Windows Server task manager displays much higher memory use than sum of all processes' working set s

    - by Sleepless
    I have a 16 GB Windows Server 2008 x64 machine mostly running SQL Server 2008. The free memory as seen in Task Manager is very low (128 MB at the moment), i.e. about 15.7 GB are used. So far, so good. Now when I try to narrow down the process(es) using the most memory I get confused: none of the processes has more than a 200 MB Working Set Size as displayed in the 'Processes' tab of Task Manager. Well, maybe the Working Set Size isn't the relevant counter? To figure that out I used a PowerShell command [1] to sum up each individual property of the process objects in a sort of brute-force approach - surely one of them must add up to the 15.7 GB, right? Turns out none of them does, with the closest being VirtualMemorySize (around 12.7 GB) and PeakVirtualMemorySize (around 14.7 GB). WTF? To put it another way: which of the numerous memory-related process counters is the "correct" one, i.e. counts towards the server's physical memory as displayed in the Task Manager's 'Performance' tab? Thank you all!

    [1]

        $erroractionpreference = "silentlycontinue"
        get-process | gm | where-object { $_.membertype -eq "Property" } |
            foreach-object { $_.name; (get-process | measure-object -sum $_.name).sum / 1MB }

    Read the article

  • Oracle Announces Oracle Exadata X3 Database In-Memory Machine

    - by jgelhaus
    Fourth Generation Exadata X3 Systems are Ideal for High-End OLTP, Large Data Warehouses, and Database Clouds; Eighth-Rack Configuration Offers New Low-Cost Entry Point

    ORACLE OPENWORLD, SAN FRANCISCO – October 1, 2012

    News Facts

    During his opening keynote address at Oracle OpenWorld, Oracle CEO Larry Ellison announced the Oracle Exadata X3 Database In-Memory Machine - the latest generation of its Oracle Exadata Database Machines. The Oracle Exadata X3 Database In-Memory Machine is a key component of the Oracle Cloud. Oracle Exadata X3-2 Database In-Memory Machine and Oracle Exadata X3-8 Database In-Memory Machine can store up to hundreds of Terabytes of compressed user data in Flash and RAM memory, virtually eliminating the performance overhead of reads and writes to slow disk drives, making Exadata X3 systems the ideal database platforms for the varied and unpredictable workloads of cloud computing. In order to realize the highest performance at the lowest cost, the Oracle Exadata X3 Database In-Memory Machine implements a mass memory hierarchy that automatically moves all active data into Flash and RAM memory, while keeping less active data on low-cost disks. With a new Eighth-Rack configuration, the Oracle Exadata X3-2 Database In-Memory Machine delivers a cost-effective entry point for smaller workloads, testing, development and disaster recovery systems, and is a fully redundant system that can be used with mission critical applications.

    Next-Generation Technologies Deliver Dramatic Performance Improvements

    Oracle Exadata X3 Database In-Memory Machines use a combination of scale-out servers and storage, InfiniBand networking, smart storage, PCI Flash, smart memory caching, and Hybrid Columnar Compression to deliver extreme performance and availability for all Oracle Database Workloads. Oracle Exadata X3 Database In-Memory Machine systems leverage next-generation technologies to deliver significant performance enhancements, including:

    - Four times the Flash memory capacity of the previous generation, with up to 40 percent faster response times and 100 GB/second data scan rates. Combined with Exadata’s unique Hybrid Columnar Compression capabilities, hundreds of Terabytes of user data can now be managed entirely within Flash;
    - 20 times more capacity for database writes through updated Exadata Smart Flash Cache software. The new Exadata Smart Flash Cache software also runs on previous generation Exadata systems, increasing their capacity for writes tenfold;
    - 33 percent more database CPU cores in the Oracle Exadata X3-2 Database In-Memory Machine, using the latest 8-core Intel® Xeon E5-2600 series of processors;
    - Expanded 10Gb Ethernet connectivity to the data center in the Oracle Exadata X3-2 provides 40 10Gb network ports per rack for connecting users and moving data;
    - Up to 30 percent reduction in power and cooling.

    Configured for Your Business, Available Today

    Oracle Exadata X3-2 Database In-Memory Machine systems are available in a Full-Rack, Half-Rack, Quarter-Rack, and the new low-cost Eighth-Rack configuration to satisfy the widest range of applications. Oracle Exadata X3-8 Database In-Memory Machine systems are available in a Full-Rack configuration, and both X3 systems enable multi-rack configurations for virtually unlimited scalability. Oracle Exadata X3-2 and X3-8 Database In-Memory Machines are fully compatible with prior Exadata generations and existing systems can also be upgraded with Oracle Exadata X3-2 servers. Oracle Exadata X3 Database In-Memory Machine systems can be used immediately with any application certified with Oracle Database 11g R2 and Oracle Real Application Clusters, including SAP, Oracle Fusion Applications, Oracle’s PeopleSoft, Oracle’s Siebel CRM, the Oracle E-Business Suite, and thousands of other applications.

    Supporting Quotes

    "Forward-looking enterprises are moving towards Cloud Computing architectures," said Andrew Mendelsohn, senior vice president, Oracle Database Server Technologies. "Oracle Exadata’s unique ability to run any database application on a fully scale-out architecture using a combination of massive memory for extreme performance and low-cost disk for high capacity delivers the ideal solution for Cloud-based database deployments today."

    Supporting Resources

    - Oracle Press Release
    - Oracle Exadata Database Machine
    - Oracle Exadata X3-2 Database In-Memory Machine
    - Oracle Exadata X3-8 Database In-Memory Machine
    - Oracle Database 11g
    - Follow Oracle Database via Blog, Facebook and Twitter
    - Oracle OpenWorld 2012
    - Oracle OpenWorld 2012 Keynotes
    - Like Oracle OpenWorld on Facebook
    - Follow Oracle OpenWorld on Twitter
    - Oracle OpenWorld Blog
    - Oracle OpenWorld on LinkedIn
    - Mark Hurd's keynote with Andy Mendelsohn and Juan Loaiza - watch for the replay to be available soon at http://www.youtube.com/user/Oracle or http://www.oracle.com/openworld/live/on-demand/index.html

    Read the article

< Previous Page | 4 5 6 7 8 9 10 11 12 13 14 15  | Next Page >