Search Results

Search found 38535 results on 1542 pages for 'sql memory'.


  • memory problem with metapost

    - by yCalleecharan
    Hi, I'm using gnuplot to plot a graph in the mp format and then converting it to EPS via the command: mpost --sprologues=3 -soutputtemplate=\"%j-%c.eps\" myfigu.mp. But I don't get the EPS output; instead I get this message: This is MetaPost, version 1.208 (kpathsea version 3.5.7dev) (mem=mpost 2009.12.12) 6 MAY 2010 23:16 **myfigu.mp (./myfigu.mp ! MetaPost capacity exceeded, sorry [main memory size=3000000]. _wc-withpen .currentpen.withcolor.currentcolor gpdraw-...ptpath[(EXPR0)]_sms((EXPR1),(EXPR2))_wc .else:_ac.contour.ptpath[(... l.48052 gpdraw(0,517.1a,166.4b) ; If you really absolutely need more capacity, you can ask a wizard to enlarge me. How do I tweak this to get more memory? The file I'm plotting from has two columns of 189,200 values each. These values are of type long double (output from a C program). The text file containing these two columns is about 6 MB. Thanks a lot...
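
    A minimal sketch of how to raise the limit, assuming a TeX Live-style installation where MetaPost reads its memory settings from texmf.cnf (paths and the exact rebuild command vary by distribution, so treat this as a starting point, not a recipe):

        # Locate the active configuration file:
        kpsewhich texmf.cnf

        # In that file, raise MetaPost's main memory (the value is in words):
        #   main_memory.mpost = 8000000

        # Rebuild the mpost .mem file so the new limit takes effect:
        fmtutil-sys --byfmt mpost

    If rebuilding the mem file isn't an option, thinning the 189,200-point dataset before plotting sidesteps the limit entirely.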

    Read the article

  • glDrawElements allocating memory and not releasing it

    - by Joshua Weinberg
    Using OpenGL ES 1.1 on the iPhone 3G (device, not simulator), I do normal drawing fun. But at points during the run of the application I get giant memory spikes; after a lot of digging with Instruments I have found that it is glDrawElements that is grabbing the memory. The buffer being allocated is 4 MB, which to me means it's loading a texture into RAM, which I guess could be valid, but it never releases this buffer, and it allocates multiple of them. How do I make sure that these buffers that GL is creating get destroyed, instead of just hanging around?
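
    If the allocations come from GL snapshotting client-side vertex/index arrays on each draw, one common cure on ES 1.1 is to move the data into buffer objects so the driver stops taking its own copy per call; glDeleteBuffers is also the explicit way to destroy GL-side buffers. A sketch under that assumption (function names and data layout are made up, not from the question):

        #include <OpenGLES/ES1/gl.h>

        static GLuint vbo, ibo;

        /* One-time setup: upload vertex and index data into GL-owned buffers. */
        void setupBuffers(const GLfloat *verts, GLsizeiptr vertBytes,
                          const GLushort *indices, GLsizeiptr idxBytes)
        {
            glGenBuffers(1, &vbo);
            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            glBufferData(GL_ARRAY_BUFFER, vertBytes, verts, GL_STATIC_DRAW);

            glGenBuffers(1, &ibo);
            glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
            glBufferData(GL_ELEMENT_ARRAY_BUFFER, idxBytes, indices, GL_STATIC_DRAW);
        }

        /* Per-frame: with buffers bound, the pointer args become byte offsets. */
        void draw(GLsizei indexCount)
        {
            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            glEnableClientState(GL_VERTEX_ARRAY);
            glVertexPointer(3, GL_FLOAT, 0, (const GLvoid *)0);

            glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
            glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, (const GLvoid *)0);
        }

        /* Teardown: releases the GL-side memory explicitly. */
        void destroyBuffers(void)
        {
            glDeleteBuffers(1, &vbo);
            glDeleteBuffers(1, &ibo);
        }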

    Read the article

  • Colors in Instruments when hunting down memory leaks

    - by Structurer
    Hi, I'm currently hunting down a memory leak in my iPhone app. I'm using Instruments to track down the code that is causing the leak (becoming more and more a friend of Instruments!). Now Instruments shows two lines: one in dark blue (row 146) and one in a lighter blue (150). From some trial and error I gather that they are connected somehow, but I'm not good enough at Objective-C and memory management yet to really understand how. Does anyone know why different colors are used and what could be my problem? I have tried to release numberForArray, but then the app crashes when showing the last line in a picker view. All ideas appreciated! (Posting this I also realize that line 139 is redundant! See there, already an improvement ;-)

    Read the article

  • How does C#'s DateTime.Now affect query plan caching in SQL Server?

    - by Bill Paetzke
    Given: Let's say we have a stored procedure. It reports data back to a user on a webpage. The user can set a date range. If the user sets today's date as the "end date," which includes today's data, the web app passes DateTime.Now to the SQL proc. Let's say that one user runs a report--5/1/2010 to now--over and over several times. On the webpage, the user sees "5/1/2010" to "5/4/2010." But the web app passes DateTime.Now to the SQL proc as the end date. So the end date in the proc will always be different, although the user is querying a similar date range. Assume the number of records in the table and the number of users are large, so any performance gains matter; hence the importance of the question. Question: Does passing DateTime.Now as a parameter to a proc prevent SQL Server from caching the query plan? If so, is the web app missing out on huge performance gains? Possible Solution: I thought DateTime.Today.AddDays(1) would be a possible solution. It would allow the user to get the latest data and always pass the same end date to the SQL proc--"5/5/2010" in this case. Please speak to this as well. Sample proc and execution (if that helps to understand):

        CREATE PROCEDURE GetFooData
            @StartDate datetime,
            @EndDate datetime
        AS
        SELECT *
        FROM Foo
        WHERE LogDate >= @StartDate
          AND LogDate < @EndDate

    Here's a sample execution using DateTime.Now:

        EXEC GetFooData '2010-05-01', '2010-05-04 15:41:27' -- passed in DateTime.Now

    Here's a sample execution using DateTime.Today.AddDays(1):

        EXEC GetFooData '2010-05-01', '2010-05-05' -- passed in DateTime.Today.AddDays(1)

    The same data is returned in both cases, since the current time is 2010-05-04 15:41:27.
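
    As context, here is how the proposed fix looks from the web app side; a sketch with hypothetical names, not the poster's code. (Stored-procedure plans are cached per procedure, not per parameter value, which is the crux the question is probing; the AddDays(1) form mainly buys a stable, whole-day upper bound.)

        using System;
        using System.Data;
        using System.Data.SqlClient;

        class Report
        {
            static void Run(string connectionString, DateTime startDate)
            {
                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand("GetFooData", conn))
                {
                    cmd.CommandType = CommandType.StoredProcedure;
                    cmd.Parameters.Add("@StartDate", SqlDbType.DateTime).Value = startDate;
                    // Exclusive upper bound: midnight tomorrow, so today is fully included.
                    cmd.Parameters.Add("@EndDate", SqlDbType.DateTime).Value = DateTime.Today.AddDays(1);
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read()) { /* render row */ }
                    }
                }
            }
        }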

    Read the article

  • Set Java application's virtual machine max memory without access to VM parameters because of a custom launcher

    - by Tom
    I'm using a Java application which allows you to import custom files. On import, these files are loaded into memory. The problem is that the files I want to import are very big; this causes an OutOfMemory exception. The crash log also informs me that the VM was started with the Java parameter "-Xmx512m". I want to change this to "-Xmx1024m" so that I get double the memory available. The problem is that this application uses its own JRE folder, and there's a launcher written in C which calls the jvm.dll file directly. java.exe and javaw.exe are never called, and thus I cannot set these parameters myself (if I delete these executables the application still runs -- this is not the case with the dll). So, my question is, can I set this VM parameter some other way? I'm even willing to alter the JRE files if there is no other way.
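
    A hedged sketch of one escape hatch: Sun/Oracle JVMs also read options from environment variables at VM creation, which applies even when a native launcher starts the VM through jvm.dll. Whether the bundled JRE honors them is something to verify on your setup:

        rem Windows; picked up by HotSpot at JVM creation (it prints a notice when used):
        set _JAVA_OPTIONS=-Xmx1024m

        rem The standardized (JVMTI) equivalent, honored by most JVMs from Java 5 on:
        set JAVA_TOOL_OPTIONS=-Xmx1024m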

    Read the article

  • C++ Allocate Memory Without Activating Constructors

    - by schnozzinkobenstein
    I'm reading in values from a file which I will store in memory as I read them in. I've read on here that the correct way to handle memory allocation in C++ is to always use new/delete, but if I do:

        DataType* foo = new DataType[sizeof(DataType) * numDataTypes];

    then that's going to call the default constructor for each instance created, and I don't want that. I was going to do this:

        DataType* foo;
        char* tempBuffer = new char[sizeof(DataType) * numDataTypes];
        foo = (DataType*) tempBuffer;

    but I figured that would be something poo-poo'd for some kind of type-unsafeness. So what should I do? And in researching this question I've now seen that some people say arrays are bad and vectors are good. I was trying to use arrays more because I thought I was being a bad boy by filling my programs with (what I thought were) slower vectors. What should I be using?
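
    Two standard idioms cover this, sketched below with the question's own DataType/numDataTypes standing in (readRecord() is a hypothetical source of values). Note in passing that new DataType[sizeof(DataType) * numDataTypes] already over-allocates: new[] takes an element count, not a byte count.

        #include <cstddef>
        #include <new>       // ::operator new, placement new
        #include <vector>

        void loadAll(std::size_t numDataTypes)
        {
            // Idiom 1: raw bytes with no constructors run, then construct in place.
            void* raw = ::operator new(sizeof(DataType) * numDataTypes);
            DataType* foo = static_cast<DataType*>(raw);
            for (std::size_t i = 0; i < numDataTypes; ++i)
                new (&foo[i]) DataType(readRecord());   // placement new

            // With this idiom, destruction is manual too:
            for (std::size_t i = 0; i < numDataTypes; ++i)
                foo[i].~DataType();
            ::operator delete(raw);

            // Idiom 2 (usually the better answer): reserve() allocates capacity
            // without constructing anything; push_back constructs only real elements.
            std::vector<DataType> v;
            v.reserve(numDataTypes);
            for (std::size_t i = 0; i < numDataTypes; ++i)
                v.push_back(readRecord());
        }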

    Read the article

  • When do I need to deallocate memory?

    - by extintor
    I am using this code inside a class to make a WebBrowser control visit a website:

        void myClass::visitWeb(const char *url)
        {
            WCHAR buffer[MAX_LEN];
            ZeroMemory(buffer, sizeof(buffer));
            MultiByteToWideChar(CP_ACP, MB_ERR_INVALID_CHARS, url, strlen(url),
                                buffer, sizeof(buffer)-1);
            VARIANT vURL;
            vURL.vt = VT_BSTR;
            vURL.bstrVal = SysAllocString(buffer);
            // webbrowser navigate code...
            VariantClear(&vURL);
        }

    I call visitWeb from another void function that gets called in the app's handlemessage(). Do I need to do some memory deallocation here? I see vURL is being deallocated by VariantClear, but should I deallocate memory for buffer? I've been told that in another bool function I have in the same app I shouldn't deallocate anything because everything is cleared out when the bool returns true/false, but what happens in this void function?
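
    The ownership story in that snippet, spelled out as comments (the COM rules here are standard; one real catch worth noting is that MultiByteToWideChar's last argument counts WCHARs, not bytes):

        WCHAR buffer[MAX_LEN];          // automatic (stack) storage: freed on return,
                                        // nothing to deallocate by hand

        vURL.bstrVal = SysAllocString(buffer);  // the one heap allocation here; the
                                                // VARIANT owns it from this point

        VariantClear(&vURL);            // calls SysFreeString on bstrVal -- this IS
                                        // the required cleanup, nothing else to free

        // Caveat: the 6th MultiByteToWideChar argument is a count of WCHARs, not
        // bytes; sizeof(buffer)-1 passes 2*MAX_LEN-1 and overstates the capacity.
        // sizeof(buffer)/sizeof(buffer[0]) - 1 is the intended value.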

    Read the article

  • Hidden features of PL/SQL

    - by Adam Paynter
    In light of the "Hidden features of..." series of questions, what little-known features of PL/SQL have become useful to you? Edit: Features specific to PL/SQL are preferred over features of Oracle's SQL syntax. However, because PL/SQL can use most of Oracle's SQL constructs, they may be included if they make programming in PL/SQL easier.

    Read the article

  • SQL Server virtual memory usage and performance

    - by user365035
    Hello, I have a very large DB used mostly for analytics. The overall performance is very sluggish. I just noticed that when running the query below, the amount of virtual memory used greatly exceeds the amount of physical memory available. Currently, physical memory is 10 GB (10,238 MB as reported), whereas the virtual memory reported is significantly more: 8,388,607 MB. That seems really wrong, but I'm at a bit of a loss on how to proceed.

        USE [master];
        GO
        select cpu_count
             , hyperthread_ratio
             , physical_memory_in_bytes / 1048576 as 'mem_MB'
             , virtual_memory_in_bytes / 1048576 as 'virtual_mem_MB'
             , max_workers_count
             , os_error_mode
             , os_priority_class
        from sys.dm_os_sys_info

    Read the article

  • Objective-C++ Memory Problem

    - by Stephen Furlani
    Hello, I'm having memory woes. I've got a C++ library (Equalizer from Eyescale), which uses the traversal/visitor pattern to let you add new functionality to its classes. I've finally figured out how it works, and I've got a visitor that just returns the properties from one of the objects (since I don't know how they're allocated). So, my little code does this:

        VisitorResult AGLContextVisitor::visit( Channel* channel )
        {
            // Search through Nodes, Pipes until we get to the right window.
            // Add some code to make sure we find the right one?
            // Not executing the following code as C++ in gdb?
            eq::Window* w = channel->getWindow();
            OSWindow* osw = w->getOSWindow();
            AGLWindow* aw = (AGLWindow *)osw;
            AGLContext agl_ctx = aw->getAGLContext();
            this->setContext(agl_ctx);
            return TRAVERSE_PRUNE;
        }

    So here's the problem:

        eq::Window* w = channel->getWindow();
        (gdb) print w
        0x0

    BUT if I do this:

        (gdb) set objc-non-blocking-mode off
        (gdb) print w=channel->getWindow()
        0x300effb9 // an honest memory location

    it sets w, as verified in the Debugger window of Xcode. It does the same thing for osw. I don't get it. Why would something work in gdb but not in the code? The file is completely a .cpp file, but it seems to be running as Objective-C++, since I need to turn blocking off. Help!? I feel like I'm missing some memory-management basic here, either with C++ or Obj-C. [edit] channel->getWindow() is supposed to do this:

        /** @return the parent window. @version 1.0 */
        Window* getWindow() { return _window; }

    The code also executes fine if I run it from a C++-only application. [edit] No... I tried creating a simple stand-alone program since I was tired of running it as a plugin (messy to debug), and no, it doesn't run in the C++ program either. So I'm really at a loss as to what I'm doing wrong. Thanks, -- Stephen Furlani

    Read the article

  • Advice needed: cold backup for SQL Server 2008 Express?

    - by Mikey Cee
    What are my options for achieving a cold backup server for a SQL Server Express instance running a single database? I have a SQL Server 2008 Express instance in production that currently represents a single point of failure for my application. I have a second physical box sitting at the installation that is currently doing nothing. I want to somehow replicate my database in near real time (a little bit of data loss is acceptable) to the second box. The database is very small and resources are utilized very lightly. If the production server dies, I would manually reconfigure my application to point to the backup server instead. Although Express doesn't support log shipping, I am thinking that I could manually script a poor man's version of it, where I use batch files to take the logs, copy them across the network, and apply them to the second server at 5-minute intervals. Does anyone have any advice on whether this is technically achievable, or if there is a better way to do what I am trying to do? Note that I want to avoid having to pay for the full version of SQL Server and configure mirroring, as I think it is overkill for this application. I understand that other DB platforms may present suitable options (e.g. a MySQL cluster), but for the purposes of this discussion, let's assume we have to stick to SQL Server.
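
    A minimal sketch of that poor-man's log shipping (every server name and path here is made up; it assumes the database is in FULL recovery and the standby was seeded from a full backup restored WITH NORECOVERY):

        rem ship_log.cmd -- scheduled every 5 minutes with Task Scheduler on the primary
        sqlcmd -S PRIMARY\SQLEXPRESS -E -Q "BACKUP LOG MyAppDb TO DISK = 'C:\LogShip\MyAppDb.trn' WITH INIT"
        robocopy C:\LogShip \\STANDBY\LogShip MyAppDb.trn

        rem On the standby, after each copy arrives, keep the database restorable:
        rem   sqlcmd -S STANDBY\SQLEXPRESS -E -Q "RESTORE LOG MyAppDb FROM DISK = 'C:\LogShip\MyAppDb.trn' WITH NORECOVERY"
        rem At failover time only:
        rem   sqlcmd -S STANDBY\SQLEXPRESS -E -Q "RESTORE DATABASE MyAppDb WITH RECOVERY"

    A real script would timestamp the .trn files instead of using WITH INIT, so that no log backup in the chain can be overwritten before it has been applied.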

    Read the article

  • How to detect Out Of Memory condition?

    - by Jaromir Hamala
    I have an application running on WebSphere Application Server 6.0, and it crashes nearly every day because of an Out-Of-Memory error. From the verbose GC output it is certain there are memory leaks (many of them). Unfortunately the application is provided by an external vendor, and getting things fixed is a slow and painful process. As part of the process I need to gather the logs and heap dumps each time the OOM occurs. Now I'm looking for some way to automate this. The fundamental problem is how to detect the OOM condition. One way would be to create a shell script which periodically searches for new heap dumps. This approach seems kind of dirty to me. Another approach might be to leverage JMX somehow, but I have little or no experience in this area and don't have much idea how to do it. Or does WAS have some kind of trigger/hook for this? Thank you very much for any advice!
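
    For the detection step: on HotSpot-derived JVMs the VM itself can do it with the flags below (availability depends on JVM version, so verify first). Note that WAS 6.0 commonly runs an IBM JVM, which instead writes heapdump/javacore files on OOM automatically and is tuned through its own dump settings, so check which VM you actually have. A HotSpot-flavoured sketch:

        # Generic JVM arguments (HotSpot lineage; script path is hypothetical):
        -XX:+HeapDumpOnOutOfMemoryError
        -XX:HeapDumpPath=/opt/dumps
        -XX:OnOutOfMemoryError="/opt/scripts/collect_oom_artifacts.sh %p"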

    Read the article

  • Adobe Reader 9.0 memory leak while loading and unloading PDF files every second, indefinitely

    - by Total Starnger
    I have a C++ MFC-based application that includes a PDF viewer as part of its implementation. The whole thing works just fine with Adobe Reader 8.0. Once I switched to Adobe Reader 9.0 as the default PDF reader, I keep experiencing a small memory leak that forces my application to crash within half an hour of continuously loading and unloading different PDF files. Any ideas what might cause this memory leak, and is there any cure besides replacing Adobe Reader 9.0 with something else? (It works fine with the Foxit PDF reader as well, by the way.)

    Read the article

  • Autorelease with elements in a UITableViewCell - memory leak

    - by Shaun Budhram
    In my tableView:cellForRowAtIndexPath: method (UITableView data source), I'm allocating a cell if it doesn't exist, and in this cell I'm creating a new activity spinner like so:

        UIActivityIndicatorView *actView = [[[UIActivityIndicatorView alloc]
            initWithActivityIndicatorStyle:UIActivityIndicatorViewStyleGray] autorelease];

    I'm using Leaks to detect memory leaks in my program, and for some reason this is coming up as a leak, even though it's autoreleased. The cell itself is also autoreleased. Has anyone had experience with autoreleased variables coming up as leaks in the Leaks instrument, and how to tackle these problems? Also, if it helps, this is the history Leaks is displaying for this memory location. It looks like it at some point gets an additional retain message? This is not being done in my code.
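
    One pattern worth double-checking here (a sketch, not the poster's code): if the spinner is created every time the method runs rather than only when a cell is first built, each pass adds another subview that the cell's contentView retains, which Leaks will report even though the alloc line itself is autoreleased:

        - (UITableViewCell *)tableView:(UITableView *)tableView
                 cellForRowAtIndexPath:(NSIndexPath *)indexPath
        {
            static NSString *cellId = @"Cell";
            UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:cellId];
            if (cell == nil) {
                cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault
                                               reuseIdentifier:cellId] autorelease];

                // Create the spinner ONCE per cell; contentView retains it.
                UIActivityIndicatorView *actView = [[[UIActivityIndicatorView alloc]
                    initWithActivityIndicatorStyle:UIActivityIndicatorViewStyleGray] autorelease];
                actView.tag = 1;
                [cell.contentView addSubview:actView];
            }
            // Configure the reused spinner per row instead of allocating a new one.
            UIActivityIndicatorView *actView =
                (UIActivityIndicatorView *)[cell.contentView viewWithTag:1];
            [actView startAnimating];
            return cell;
        }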

    Read the article

  • Excel Add-in memory explosion

    - by tsinik
    I wrote a small .NET add-in for Excel 2007 that reads data from an external C++ API and displays it inside Excel. The Task Manager shows that I'm having a memory leak (the memory usage grows linearly up to 250 MB, after which Excel throws an "Excel cannot complete this task with available resources" error), but the problem disappears as soon as I minimize the Excel window. The API uses delegates to return data, which I store in a dictionary; another thread updates Excel from the dictionary every second. It is unlikely that the unmanaged code is responsible for the leak. Does anybody have an idea what can cause this? Thanks!

    Read the article

  • How is timezone handled in the lifecycle of an ADO.NET + SQL Server DateTime column?

    - by stimpy77
    Using SQL Server 2008. This is a really junior question and I could really use some elaborate information, but the information on Google seems to dance around the topic quite a bit, and it would be nice if there were some detailed elaboration on how this works... Let's say I have a datetime column, and in ADO.NET I set it to DateTime.UtcNow.

    1) Does SQL Server store DateTime.UtcNow as-is, or does it offset it again based on the timezone of the server it is installed on, and then return it offset-reversed when queried? I think I know that the answer is "of course it stores it without offsetting it again," but I want to be certain.

    So then I query for it and cast it from, say, an IDataReader column to a DateTime. As far as I know, System.DateTime has metadata that internally tracks whether it is a UTC DateTime or an offset DateTime, which may or may not cause .ToLocalTime() and .ToUniversalTime() to behave differently depending on this state. So,

    2) Does this casted System.DateTime object already know that it is a UTC DateTime instance, or does it assume that it has been offset?

    Now let's say I use DateTime.Now rather than UtcNow when performing an ADO.NET INSERT or UPDATE.

    3) Does ADO.NET pass the offset to SQL Server, and does SQL Server store DateTime.Now with the offset metadata?

    So then I query for it and cast it from, say, an IDataReader column to a DateTime.

    4) Does this casted System.DateTime object already know that it is an offset time, or does it assume that it is UTC?
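
    The short version, which is easy to verify (a sketch; the table and column names are made up): the SQL Server datetime type stores no timezone information at all, ADO.NET sends whatever clock value the DateTime holds without converting it, and a value read back comes out with DateTimeKind.Unspecified, so ToLocalTime()/ToUniversalTime() on it have to guess:

        using System;
        using System.Data.SqlClient;

        class DateTimeRoundTrip
        {
            static void Main()
            {
                // Hypothetical connection string and table.
                using (var conn = new SqlConnection("Server=.;Database=Demo;Integrated Security=true"))
                {
                    conn.Open();
                    using (var cmd = new SqlCommand("SELECT CreatedUtc FROM Foo WHERE Id = 1", conn))
                    {
                        var dt = (DateTime)cmd.ExecuteScalar();
                        Console.WriteLine(dt.Kind);  // Unspecified: the column carried no Kind

                        // If you stored UTC, reattach that fact explicitly before converting:
                        var utc = DateTime.SpecifyKind(dt, DateTimeKind.Utc);
                        Console.WriteLine(utc.ToLocalTime());
                    }
                }
            }
        }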

    Read the article

  • Does variable = null set it for garbage collection

    - by manyxcxi
    Help me settle a dispute with a coworker: does setting a variable or collection to null in Java aid garbage collection and reduce memory usage? If I have a long-running program and each function may be called iteratively (potentially thousands of times): does setting all the variables in it to null before returning a value to the parent function help reduce heap size/memory usage?
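
    A sketch of the distinction that usually settles this argument: for locals the JIT already knows when a reference is dead, so nulling them before returning is noise, while nulling a long-lived reference (a field, array slot, or collection entry) genuinely makes the object collectable sooner:

        class NullingExample {
            private byte[] cache;              // long-lived reference (field)

            void process() {
                byte[] local = new byte[1000000];
                use(local);
                local = null;  // pointless: the JIT treats 'local' as dead after its last use

                cache = new byte[1000000];
                use(cache);
                cache = null;  // useful: otherwise the object lives as long as 'this' does
            }

            private void use(byte[] b) { /* ... */ }
        }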

    Read the article

  • Memory allocation for collections in .NET

    - by Yogendra
    This might be a dupe; I did not find enough information on this. I was discussing memory allocation for collections in .NET. Where is the memory for the elements of a collection allocated?

        List<int> myList = new List<int>();

    The variable myList is allocated on the stack, and it references the List object created on the heap. The question is: when int elements are added to myList, where are they created? Can anyone point me in the right direction?
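
    A sketch of what sits where: List<T> wraps a single T[] on the heap, so for a value type like int the elements live inline inside that array (no per-element allocation), while for a reference type the array holds references and each element object is a separate heap allocation:

        using System.Collections.Generic;

        class WhereElementsLive
        {
            static void Demo()
            {
                var myList = new List<int>(); // the reference is a local (stack/register);
                                              // the List and its internal int[] are heap objects
                myList.Add(42);               // 42 is copied into a slot of the int[]:
                                              // no per-element allocation for value types
                                              // (growing past Capacity allocates a bigger
                                              // int[] and copies the elements across)

                var names = new List<string>();
                names.Add("hello");           // the array slot holds a reference; the string
                                              // itself is a separate object on the heap
            }
        }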

    Read the article

  • Objective-C memory management issue

    - by Toby Wilson
    I've created a graphing application that calls a web service. The user can zoom and move around the graph, and the program occasionally decides to call the web service for more data accordingly. This is achieved by the following process: the graph has a render loop which constantly renders the graph, and some decision logic which adds web service call information to a stack. A separate thread takes the most recent web service call information from the stack and uses it to make the web service call. The other objects on the stack get binned. The idea of this is to reduce the number of web service calls to only those appropriate, and only one at a time. Right, with the long story out of the way (for which I apologise), here is my memory management problem: the graph has persistent (and suitably locked) NSDate* objects for the currently displayed start and end times of the graph. These are passed into the initialisers for my web service request objects. The web service call objects then retain the dates. After the web service calls have been made (or binned if they were out of date), they release the NSDate*. The graph itself releases and reallocates new NSDates* on the 'touches ended' event. If there is only one web service call object on the stack when removeAllObjects is called, EXC_BAD_ACCESS occurs in the web service call object's deallocation method when it attempts to release the date objects (even though they appear to exist and are in scope in the debugger). If, however, I comment out the release messages from the destructor, no memory leak occurs with one object on the stack being released, but memory leaks occur if there is more than one object on the stack. I have absolutely no idea what is going wrong. It doesn't make a difference what storage semantics I use for the web service call objects' dates, as they are assigned in the initialiser and then only read (so for correctness' sake are set to readonly). It also doesn't seem to make a difference whether I retain or copy the dates in the initialiser (though anything else obviously falls out of scope or is unwantedly released elsewhere and causes a crash). I'm sorry this explanation is long-winded; I hope it's sufficiently clear, but I'm not gambling on that either, I'm afraid. Major big thanks to anyone that can help, even suggest anything I may have missed?

    Read the article

  • Memory management in iOS

    - by angrest
    Looks like I did not understand memory management in Objective-C... sigh. I have the following code (note that in my case, placemark.thoroughfare and placemark.subThoroughfare are both filled with valid data, so both if-conditions will be TRUE):

        if (placemark.thoroughfare) {
            [item.place release];
            item.place = [NSString stringWithFormat:@"%@ ", placemark.thoroughfare];
        } else {
            [item.place release];
            item.place = @"Unknown Place";
        }
        if (placemark.thoroughfare && placemark.subThoroughfare) {
            // *** problem is here ***
            [item.place release];
            item.place = [NSString stringWithFormat:@"%@ %@",
                          placemark.thoroughfare, placemark.subThoroughfare];
        }

    If I do not release item.place at the marked location in the code, Instruments finds a memory leak there. If I do, the program crashes as soon as I try to access item.place outside the offending method. Any ideas?
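
    A sketch of the conventional fix, assuming item.place is (or can be made) a declared property, which the question doesn't show: let a copy/retain setter do the releasing, instead of manually releasing a value you are about to replace. The synthesized setter releases the old value and copies the new one in the right order, so nothing leaks and nothing is over-released:

        // In the item's class (assumption -- not shown in the question):
        @property (nonatomic, copy) NSString *place;

        // In the callback, never release the old value by hand:
        if (placemark.thoroughfare && placemark.subThoroughfare) {
            item.place = [NSString stringWithFormat:@"%@ %@",
                          placemark.thoroughfare, placemark.subThoroughfare];
        } else if (placemark.thoroughfare) {
            item.place = [NSString stringWithFormat:@"%@ ", placemark.thoroughfare];
        } else {
            item.place = @"Unknown Place";
        }
        // item.place = ... goes through the synthesized setter, which performs
        // the [oldValue release] / [newValue copy] pair for you.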

    Read the article

  • finding out memory allocation hotspots in java

    - by Zamir
    Our GC is working hard and we have some pauses that we want to decrease. We have some memory allocation issues that we want to solve before, or while, tweaking the actual JVM GC arguments. I would like to know which objects are making the GC sweat: is there a way to know which objects are evacuated every time the GC runs? Is there a way to know which objects are moved between areas every time the GC runs? Is there a way to know which objects are in the Eden area? I am working extensively with JProfiler and Memory Analyzer. I would like to get this information from a running application in my staging environment.
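
    The JVM doesn't expose per-object evacuation events, but two standard, low-overhead starting points on HotSpot come close (these are stock JDK tools and flags; adjust for your vendor's VM):

        # Class histogram of live objects on a running JVM (:live forces a full GC first):
        jmap -histo:live <pid> | head -30

        # Startup flags: per-collection sizes plus object ages in the survivor spaces,
        # which shows how much is being copied out of Eden and how long it survives:
        -verbose:gc -XX:+PrintGCDetails -XX:+PrintTenuringDistribution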

    Read the article

  • How is inheritance implemented at the memory level?

    - by cambr
    Suppose I have:

        class A { public: void print() { cout << "A"; } };
        class B : public A { public: void print() { cout << "B"; } };
        class C : public A { };

    How is inheritance implemented at the memory level? Does C copy print()'s code to itself, or does it have a pointer to it that points somewhere in A's part of the code? How does the same thing happen when we override the previous definition, for example in B (at the memory level)?
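
    A sketch of the usual implementation model (the C++ standard doesn't mandate it, but mainstream compilers all work this way): member function code exists exactly once per class and is never copied into objects or derived classes. A non-virtual call like print() above is resolved at compile time from the static type, so B's print simply hides A's. Only when the function is virtual does each object carry a hidden vtable pointer through which the override is found at run time:

        #include <iostream>

        struct A {
            virtual void print() { std::cout << "A"; }  // virtual: dispatched via vtable
            void hello()         { std::cout << "hi"; } // non-virtual: resolved statically
        };
        struct B : A {
            void print() override { std::cout << "B"; } // B's vtable slot points here
        };
        struct C : B { };  // no new override: C's vtable reuses B's entry for print

        int main() {
            C c;
            A* p = &c;
            p->print();  // prints "B": looked up through c's vtable pointer at run time
            p->hello();  // prints "hi": the compiler calls A::hello directly, no lookup
            // sizeof(A) includes one hidden vtable pointer; the function code itself
            // lives once in the program image, never inside any object.
        }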

    Read the article

  • Controlling CRT memory initialization

    - by Ofek Shilon
    Occasionally you meet bugs that are reproducible only in release builds and/or only on some machines. A common (but by no means only) reason is uninitialized variables, which are subject to random behaviour. E.g., an uninitialized BOOL can be TRUE most of the time, on most machines, but randomly be initialized as FALSE. What I wish I had is a systematic way of flushing out such bugs by modifying the behaviour of the CRT memory initialization. I'm well aware of the MS debug CRT magic numbers -- at the very least I'd like to have a trigger to turn 0xCDCDCDCD (the pattern that initializes freshly allocated memory) into zeros. I suspect one would be able to easily smoke out nasty initialization pests this way, even in debug builds. Am I missing an available CRT hook (API, registry key, whatever) that enables this? Does anyone have other ideas on how to get there?
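
    I'm not aware of a supported CRT switch that turns the 0xCD fill into zeros, but one sketch that gets the same smoke-out effect without touching the CRT is to interpose the global allocation operators in debug builds (ZERO_FILL_NEW is a hypothetical macro of your own; the array forms would want the same treatment):

        #include <cstdlib>
        #include <cstring>
        #include <new>

        // Debug-only: hand out zeroed memory so "works because 0xCDCDCDCD happens
        // to be truthy" bugs change behavior and surface. Toggle ZERO_FILL_NEW
        // between runs to compare.
        #if defined(_DEBUG) && defined(ZERO_FILL_NEW)
        void* operator new(std::size_t size)
        {
            void* p = std::malloc(size ? size : 1);
            if (!p) throw std::bad_alloc();
            std::memset(p, 0, size);   // overwrite the CRT's 0xCD debug fill
            return p;
        }

        void operator delete(void* p) noexcept
        {
            std::free(p);
        }
        #endif

    This only covers heap allocations; uninitialized stack variables are a separate mechanism (the 0xCC fill that /RTC1 applies in debug builds).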

    Read the article
