Search Results

Search found 16794 results on 672 pages for 'memory usage'.

Page 275/672 | < Previous Page | 271 272 273 274 275 276 277 278 279 280 281 282  | Next Page >

  • How to debug macruby?

    - by Dan
    Hi, I've encountered an inconsistent bug with MacRuby and have no idea how to go about debugging it. If anyone could help, that would be great. I don't know whether this is due to my own code or a bug in the MacRuby framework. I have a feeling it's my own code, something about over-retaining a piece of memory so that garbage collection fails. This is the error from Xcode. Thanks.

        CSV Wizard(30245,0x7fff704f7ca0) malloc: resurrection error for object 0x20199da20 while assigning {conservative-block}[196608](0x302360060)[117616] = Array[64](0x20199da20)
        garbage pointer stored into reachable memory, break on auto_zone_resurrection_error to debug
        CSV Wizard(30245,0x103781000) malloc: garbage block 0x20199da20(Array[64]) was over-retained during finalization, refcount = 1
        This could be an unbalanced CFRetain(), or CFRetain() balanced with -release. Break on auto_zone_resurrection_error() to debug.
        CSV Wizard(30245,0x103781000) malloc: fatal resurrection error for garbage block 0x20199da20(Array[64]): over-retained during finalization, refcount = 1

    Read the article

  • Bad_alloc exception when using new for a struct c++

    - by bsg
    Hi, I am writing a query processor which allocates large amounts of memory and tries to find matching documents. Whenever I find a match, I create a structure to hold two variables describing the document and add it to a priority queue. Since there is no way of knowing how many times I will do this, I tried creating my structs dynamically using new. When I pop a struct off the priority queue, the queue (the STL priority queue implementation) is supposed to call the object's destructor. My struct code has no destructor, so I assume a default destructor is called in that case. However, the very first time that I try to create a DOC struct, I get the following error:

        Unhandled exception at 0x7c812afb in QueryProcessor.exe: Microsoft C++ exception: std::bad_alloc at memory location 0x0012f5dc..

    I don't understand what's happening - have I used up so much memory that the heap is full? It doesn't seem likely. And it's not as if I've even used that pointer before. So: first of all, what am I doing that's causing the error, and secondly, will the following code work more than once? Do I need to have a separate pointer for each struct created, or can I re-use the same temporary pointer and assume that the queue will keep a pointer to each struct? Here is my code:

        struct DOC{
            int docid;
            double rank;
        public:
            DOC()
            {
                docid = 0;
                rank = 0.0;
            }

            DOC(int num, double ranking)
            {
                docid = num;
                rank = ranking;
            }

            bool operator>( const DOC & d ) const { return rank > d.rank; }
            bool operator<( const DOC & d ) const { return rank < d.rank; }
        };

        //a lot of processing goes on here; when a matching document is found, I do this:
        rank = calculateRanking(table, num);

        //if the heap is not full, create a DOC struct with the docid and rank and add it to the heap
        if(q.size() < 20)
        {
            doc = new DOC(num, rank);
            q.push(*doc);
            doc = NULL;
        }
        //if the heap is full, but the new rank is greater than the
        //smallest element in the min heap, remove the current smallest element
        //and add the new one to the heap
        else if(rank > q.top().rank)
        {
            q.pop();
            cout << "pushing doc on to queue" << endl;
            doc = new DOC(num, rank);
            q.push(*doc);
        }

    Thank you very much, bsg.
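    A minimal sketch of the alternative hinted at in the question, assuming the queue is something like a std::priority_queue min-heap of DOC values (which the size-20 / pop-the-smallest logic suggests): push the structs by value, so there is no new, no temporary pointer, and nothing to leak.

        #include <queue>
        #include <vector>
        #include <functional>

        // min-heap of DOC by rank: top() is the smallest-ranked document
        std::priority_queue<DOC, std::vector<DOC>, std::greater<DOC> > q;

        void addCandidate(int num, double rank)   // illustrative helper
        {
            if (q.size() < 20)
            {
                q.push(DOC(num, rank));           // copied into the queue's own storage
            }
            else if (rank > q.top().rank)
            {
                q.pop();                          // drop the current smallest
                q.push(DOC(num, rank));
            }
        }

    The copy the queue keeps is independent of the object that was pushed, so reusing one temporary (or a stack object, as above) is fine; the bad_alloc itself is more likely the "large amounts of memory" allocated earlier leaving the heap exhausted or fragmented than anything about this particular new.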

    Read the article

  • Inserting rows while fetching(from another table) in SQLite

    - by Samuel
    I'm getting this error no matter what with python and sqlite.

        File "addbooks.py", line 77, in saveBook
          conn.commit()
        sqlite3.OperationalError: cannot commit transaction - SQL statements in progress

    The code looks like this:

        conn = sqlite3.connect(fname)
        cread = conn.cursor()
        cread.execute('''select book_text from table''')

        while True:
            row = cread.fetchone()
            if row is None:
                break
            ....
            for entry in getEntries(doc):
                saveBook(entry, conn)

    I can't do a fetchall() because the table and column size are big and memory is scarce. What can be done without resorting to dirty tricks (such as getting the rowids into memory, which would probably fit, and then selecting the rows one by one)?
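    One way around the error, sketched below under the assumption (as in the snippet) that the inserts go to a different table than the one being read and that a reasonably recent pysqlite is in use: keep inserting on the same connection but defer the commit() until the read cursor is exhausted. The pending inserts live in SQLite's page cache and journal, not in Python memory, so memory stays flat without a fetchall(). Table and helper names below are placeholders.

        import sqlite3

        conn = sqlite3.connect(fname)
        cread = conn.cursor()
        cread.execute('select book_text from book_table')

        for (book_text,) in cread:              # lazy iteration, no fetchall()
            for entry in getEntries(book_text):
                saveBook(entry, conn)            # saveBook does its INSERTs but no commit()

        conn.commit()                            # commit once, after the SELECT has finished

    If the writes really must become durable as you go, a second sqlite3.connect() used only for writing is the other common approach, at the cost of dealing with SQLite's locking between the two connections.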

    Read the article

  • Sqlite. How to create an index in attached DB?

    - by kappa
    I have a problem with adding an index to a memory database attached to the main database.

    1) I open the database (F) from file
    2) Attach the :memory: (M) database
    3) Create tables in database M
    4) Copy data from F to M

    I would also like to create an index in database M, but don't know how to do that. This code creates the index, but in the F database:

        sQuery = "CREATE INDEX IF NOT EXISTS [INDID] ON [PANEL]([ID] ASC);";

    I tried to add the name qualifier before the table name, like this:

        sQuery = "CREATE INDEX IF NOT EXISTS [INDID] ON [M.PANEL]([ID] ASC);";

    but SQLite returns a message saying that column main.M.PANEL does not exist. What can I do?
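    If I remember the SQLite grammar correctly, the schema qualifier belongs on the index name rather than on the table name (an index is always created in the same database as the table it refers to), so the statement for the attached M database would look like this sketch:

        -- the qualifier goes on the index name; PANEL is then resolved inside M
        CREATE INDEX IF NOT EXISTS [M].[INDID] ON [PANEL]([ID] ASC);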

    Read the article

  • [Python] How do I read binary pickle data first, then unpickle it?

    - by conradlee
    I'm unpickling a NetworkX object that's about 1GB in size on disk. Although I saved it in the binary format (using protocol 2), it is taking a very long time to unpickle this file - at least half an hour. The system I'm running on has plenty of system memory (128 GB), so that's not the bottleneck. I've read here that pickling can be sped up by first reading the entire file into memory, and then unpickling it (that particular thread refers to python 3.0, which I'm not using, but the point should still be true in python 2.6). How do I first read the binary file, and then unpickle it? I have tried:

        import cPickle as pickle
        f = open("big_networkx_graph.pickle","rb")
        bin_data = f.read()
        graph_data = pickle.load(bin_data)

    But this returns:

        TypeError: argument must have 'read' and 'readline' attributes

    Any ideas?
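    pickle.load() expects a file-like object with read() and readline(), which is what the TypeError is complaining about; the byte-string variant is pickle.loads(). A small sketch of the read-everything-first approach (Python 2.6, matching the code above):

        import cPickle as pickle
        from cStringIO import StringIO

        with open("big_networkx_graph.pickle", "rb") as f:
            bin_data = f.read()                 # pull the whole file into memory first

        # loads() works directly on the byte string...
        graph_data = pickle.loads(bin_data)

        # ...or keep using load() by wrapping the bytes in an in-memory file object
        graph_data = pickle.load(StringIO(bin_data))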

    Read the article

  • Strange error with CreateCompatibleDC

    - by sevaxx
    Maybe this is a foolish question, but I can't see why I cannot get a DC created in the following code:

        HBITMAP COcrDlg::LoadClippedBitmap(LPCTSTR pathName, UINT maxWidth, UINT maxHeight)
        {
            HBITMAP hBmp = (HBITMAP)::LoadImage(NULL, pathName, IMAGE_BITMAP,
                                                0, 0, LR_LOADFROMFILE | LR_CREATEDIBSECTION);
            if (!hBmp)
                return NULL;

            HDC hdc = (HDC)GetDC();
            HDC hdcMem = CreateCompatibleDC(hdc);

            if (!hdcMem)
            {
                DWORD err = GetLastError();
            }
            ...
            ...
            ...

    The bitmap hBmp is loaded fine and hdc has a valid value. But the call to CreateCompatibleDC() returns a NULL pointer, and then GetLastError() returns 0! Can anybody guess what's going on here, please? PS: There are no memory allocations or GDI routines called before this one... so I think memory leaks should be ruled out.
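    One guess, assuming COcrDlg is an MFC dialog (CWnd-derived): CWnd::GetDC() returns a CDC*, so (HDC)GetDC() casts an object pointer to a handle type instead of producing the real device-context handle, and CreateCompatibleDC() then fails without setting a last-error code. A sketch of two ways to obtain a genuine HDC:

        // via the MFC wrapper object
        CDC *pDC = GetDC();
        HDC hdc  = pDC->GetSafeHdc();

        // or bypass MFC entirely
        HDC hdc2 = ::GetDC(m_hWnd);

        HDC hdcMem = CreateCompatibleDC(hdc);
        // ... use hdcMem ...
        DeleteDC(hdcMem);

        ReleaseDC(pDC);                 // CWnd::ReleaseDC takes the CDC*
        ::ReleaseDC(m_hWnd, hdc2);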

    Read the article

  • Leak caused by fread

    - by Jack
    I'm profiling code of a game I wrote and I'm wondering how it is possible that the following snippet causes a heap increase of 4 KB (I'm profiling with the Heapshot Analysis of Xcode) every time it is executed:

        u8 WorldManager::versionOfMap(FILE *file)
        {
            char magic[4];
            u8 version;

            fread(magic, 4, 1, file);    // <-- this is the line
            fread(&version, 1, 1, file);
            fseek(file, 0, SEEK_SET);

            return version;
        }

    According to the profiler the highlighted line allocates 4.00 KB of memory with a malloc every time the function is called, memory which is never released. This seems to happen with other calls to fread around the code, but this was the most striking one. Is there anything trivial I'm missing? Is it something internal I shouldn't care about? Just as a note: I'm profiling it on an iPhone and it's compiled as release (-O2).
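    This looks like stdio's internal buffer rather than a leak: the first read on a FILE makes the C library allocate the stream's buffer (4 KB here), and that block stays attached to the stream until fclose(), so Heapshot sees a persistent malloc attributed to the first fread. A sketch of two ways to confirm it, assuming you control where the file is opened (use one option or the other, before any I/O on the stream):

        #include <stdio.h>

        FILE *file = fopen(path, "rb");   /* 'path' is a placeholder */

        /* Option 1: make the stream unbuffered, so no block is allocated at all. */
        setvbuf(file, NULL, _IONBF, 0);

        /* Option 2: hand stdio a buffer you own instead of letting it malloc one. */
        static char iobuf[4096];
        setvbuf(file, iobuf, _IOFBF, sizeof iobuf);

    If the 4 KB blocks keep accumulating instead of staying constant per open file, check that every fopen() has a matching fclose(), since the buffer is only returned then.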

    Read the article

  • Monitor and Terminate Python script based on system resource use

    - by Vincent
    What is the "right" or "best" way to monitor the system resources a python script is using and terminate it if the resource use exceeds some predetermined values. In my case memory usage is of concern. I am not asking how to measure the system resource use although I am open to suggestions. As a simple example, let's assume I have a function that finds prime numbers less than some large number and adds them to a list based on some condition. I don't know ahead of time how many prime numbers will satisfy the condition so I what to be sure to terminate the function if I use up to much system memory (8gb lets say). I know that there are ways to monitor the size of python objects. What I don't know is the proper way to monitor the size of the list and exit is to just include a size test in the prime function loop and exit if it exceeds 8gb or if there is an "external" way to monitor and exit.

    Read the article

  • How does Flask load blueprints internally?

    - by Ignas B.
    I'm just interested in how Flask's blueprints get imported. At the end of all the work Flask does, it still imports the Python module, and as I understand it Python does two things when importing: it registers the module name in the namespace and then initializes the module if needed. So if a Flask blueprint is initialized when it gets registered, then the whole module sits in memory, and if there are lots of blueprints to register, memory is simply wasted, because in one request you basically use one blueprint. Not a big loss, but still... But if it is only registered in the namespace and initialized only when needed (when a real request reaches it), then it makes sense to register them all at once (which, as I understand, is the recommended way). This is, I guess, the case here :) But I just wanted to ask and understand a bit deeper.

    Read the article

  • Python fCGI + sqlAlchemy = malformed header from script. Bad header=FROM tags : index.py

    - by crgwbr
    I'm writing a FastCGI application that uses SQLAlchemy and MySQL for persistent data storage. I have no problem connecting to the DB and setting up the ORM (so that tables get mapped to classes); I can even add data to tables (in memory). But as soon as I query the DB (and push any changes from memory to storage), I get a 500 Internal Server Error and my error.log records malformed header from script. Bad header=FROM tags : index.py, where tags is the table name. Any idea what could be causing this? Also, I don't think it matters, but it's a Linux development server talking to an off-site (across the country) MySQL server.
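    The Bad header=FROM tags looks like the generated SQL being printed to stdout before the HTTP headers go out, which corrupts the FastCGI response; this happens, for example, when the engine is created with echo=True, since SQLAlchemy's echo handler writes to stdout. A sketch of the usual fixes (the connection URL is a placeholder):

        import sys
        import logging
        from sqlalchemy import create_engine

        # 1. make sure SQL echoing is off
        engine = create_engine("mysql://user:password@dbhost/dbname", echo=False)

        # 2. if you still want the SQL logged, send it to stderr or a file,
        #    never to stdout, which belongs to the (Fast)CGI protocol
        logging.basicConfig(stream=sys.stderr)
        logging.getLogger("sqlalchemy.engine").setLevel(logging.INFO)

    The same symptom appears whenever any stray print statement fires before the headers, so it is worth grepping for print as well.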

    Read the article

  • Stack recommendations for small/medium-sized web application in Python

    - by reto
    I'm looking for some recommendations for a Python web application. We have some memory restrictions and we try to keep it small and lean. We thought about using WSGI (and a Python web server) and building the rest ourselves. We already have a template engine we'd like to use, but we are open to suggestions for the whole request handling (the controller). The application has to run in a single process and the requests have to be processed with multiple threads. We've looked at Django, but we are not sure if it fits into our memory budget. Your feedback is very welcome! Cheers, Reto
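    For scale, the standard library alone can satisfy the single-process / multi-threaded requirement; a minimal sketch (Python 2; wsgiref's bundled server is single-threaded by default, so it is mixed with ThreadingMixIn here), on top of which a small URL-dispatching controller and your existing template engine can sit:

        from wsgiref.simple_server import make_server, WSGIServer
        from SocketServer import ThreadingMixIn      # 'socketserver' on Python 3

        class ThreadingWSGIServer(ThreadingMixIn, WSGIServer):
            daemon_threads = True                    # one process, many worker threads

        def application(environ, start_response):
            # dispatch on environ['PATH_INFO'] to your controllers here
            start_response('200 OK', [('Content-Type', 'text/plain')])
            return ['hello\n']

        if __name__ == '__main__':
            httpd = make_server('', 8000, application, server_class=ThreadingWSGIServer)
            httpd.serve_forever()

    Lighter-than-Django options in the same spirit include CherryPy's WSGI server, Werkzeug for request/response objects and routing, and web.py; all of them run comfortably in one threaded process.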

    Read the article

  • A PHP object server for business logic

    - by Matthieu
    I wonder if such a thing is possible or even exists: in the MVC pattern, I would like the Model part to be persistent in memory, instead of reloading my instances on each execution, so that only the Controller and View parts are executed. Is there any kind of server that would serve PHP objects (just like a MySQL server serves data records) and keep these objects in memory? A related problem: how would you build a query to fetch objects? Maybe PHP LINQ?

    Read the article

  • Lifetime of JavaScript variables

    - by The Machine
    What is the lifetime of a variable in JavaScript declared with "var"? I am sure it is definitely not what one would expect.

        <script>
        (function(){
            var a;
            var fun = function(){
                // a is accessed and modified
            };
        })();
        </script>

    Here, how and when does JavaScript garbage collect the variable a? Since a is part of the closure of the inner function, it ideally should never get garbage collected, because the inner function fun may be passed as a reference to an external context, and fun should then still be able to access a from that context. If my understanding is correct, how does garbage collection happen, and how does the engine ensure there is enough memory, since keeping all variables in memory until the end of the program might not be tenable?
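    A small sketch of why the answer depends on whether fun escapes: as long as something outside can still reach fun, the engine has to keep a alive; once nothing references the closure any more, a becomes unreachable and is eligible for collection.

        var keep;

        (function () {
            var a = 0;                   // local to this invocation
            var fun = function () {      // closes over a
                a += 1;
                return a;
            };
            keep = fun;                  // fun (and therefore a) escapes
        })();

        keep();        // 1 -- a is still alive because keep references the closure
        keep();        // 2

        keep = null;   // no references left; the closure and a can now be collected

    So the engine does not keep "all variables until the end of the program"; it keeps exactly the environments that are still reachable from live references, which is what keeps the memory use tenable.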

    Read the article

  • Best practices for displaying large number of images as thumbnails in c#

    - by andySF
    I've got to a point where it's very difficult to get answers by debugging and tracing objects, so I need some help.

    What I'm trying to do: a history form for my screen-capture pet project. The history must list all images as thumbnails (e.g. like Picasa).

    What I've done: I created a HistoryItem : UserControl. This history item has a few buttons, a check box, a label and a picture box. The buttons are for delete/edit/copy image, the check box is used for selecting one or more images, and the label is for some info text. The picture box gets the image from a public property that holds a path, and a method creates a proportional thumbnail to display when the control has been loaded. This user control has two public events: one for deleting the image and one for bubbling the mouse-enter and mouse-leave events through all the controls. For this I use EventBroadcastProvider. The bubbling is useful because wherever I move the mouse over the control, the buttons appear. The Dispose method has been extended and I manually remove the events. All images are loaded by looping over an XML file that contains the paths of all images. For each image in this XML I create a new HistoryItem, which is added (after a little code to sort and limit the number of images loaded) to a flow layout panel.

    The problem: when I launch the history form and the flow layout panel is populated with my HistoryItem custom controls, memory usage increases drastically - from 14 MB to around 100 MB with 100 images loaded. After closing the history form, disposing whatever I could dispose and even calling GC.Collect(), the memory increase remains. I searched for any object that might not be disposed properly, like an image or an event, but wherever I use them they are disposed. The problem seems to come from multiple sources. One is that the events used for bubbling are not being disposed properly, and the other is the picture box itself. I could see this by commenting the code down to a limited version where only the custom control, without any image processing or even events, is loaded. Without the events the memory consumption is reduced by approximately 20%.

    So my real question is whether this design - flow layout panels and custom controls with picture boxes - is the best solution for displaying large numbers of images as thumbnails. Thank you!
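    On the picture-box side, one thing that usually dominates the numbers is keeping the full-size Image alive just to show a thumbnail. A sketch (System.Drawing; the method and names are illustrative, not your code) that builds the small bitmap up front and disposes the full-size image immediately, so only the thumbnail stays in memory per HistoryItem:

        private Image LoadThumbnail(string path, int maxWidth, int maxHeight)
        {
            using (Image full = Image.FromFile(path))
            {
                double scale = Math.Min((double)maxWidth / full.Width,
                                        (double)maxHeight / full.Height);
                int w = (int)(full.Width * scale);
                int h = (int)(full.Height * scale);

                Bitmap thumb = new Bitmap(w, h);
                using (Graphics g = Graphics.FromImage(thumb))
                {
                    g.DrawImage(full, 0, 0, w, h);
                }
                return thumb;        // assign this to PictureBox.Image
            }
        }

    When a HistoryItem is removed, dispose its PictureBox.Image explicitly and unhook the bubbled event handlers; a control that is still referenced by an event subscription is never collected, which matches the symptom of memory staying high after the form closes.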

    Read the article

  • Why doesn't this work?

    - by user146780
    I've tried to solve a memory leak in the GLU callback by creating a global variable, but now it does not draw anything:

        GLdouble *gluptr = NULL;

        void CALLBACK combineCallback(GLdouble coords[3], GLdouble *vertex_data[4],
                                      GLfloat weight[4], GLdouble **dataOut)
        {
            GLdouble *vertex;

            if(gluptr == NULL)
            {
                gluptr = (GLdouble *) malloc(6 * sizeof(GLdouble));
            }
            vertex = (GLdouble*)gluptr;

            vertex[0] = coords[0];
            vertex[1] = coords[1];
            vertex[2] = coords[2];
            for (int i = 3; i < 6; i++)
            {
                vertex[i] = weight[0] * vertex_data[0][i] +
                            weight[1] * vertex_data[0][i] +
                            weight[2] * vertex_data[0][i] +
                            weight[3] * vertex_data[0][i];
            }

            *dataOut = vertex;
        }

    Basically, instead of doing a malloc each time in the callback (and thus leaking memory), I'm using a global pointer, but this doesn't work (nothing is drawn to the screen). Why would malloc'ing into a pointer created in the function behave any differently than using a global variable? Thanks
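    The likely reason nothing is drawn: GLU keeps every pointer handed back through dataOut and dereferences them later, so with one shared buffer every combined vertex ends up pointing at the same six doubles, overwritten on each call. A leak-free sketch is to allocate per combined vertex, remember the blocks, and free them after the polygon has been tessellated (names are illustrative; this version also blends all four vertex_data entries, which the snippet above indexes as [0] each time - some tessellators pass NULL entries for zero weights, so guard for that if needed):

        #include <vector>
        #include <cstdlib>

        static std::vector<GLdouble*> g_combined;   // blocks handed to GLU

        void CALLBACK combineCallback(GLdouble coords[3], GLdouble *vertex_data[4],
                                      GLfloat weight[4], GLdouble **dataOut)
        {
            GLdouble *vertex = (GLdouble*)std::malloc(6 * sizeof(GLdouble));
            g_combined.push_back(vertex);           // remember it so it can be freed later

            vertex[0] = coords[0];
            vertex[1] = coords[1];
            vertex[2] = coords[2];
            for (int i = 3; i < 6; i++)
                vertex[i] = weight[0] * vertex_data[0][i] + weight[1] * vertex_data[1][i]
                          + weight[2] * vertex_data[2][i] + weight[3] * vertex_data[3][i];

            *dataOut = vertex;
        }

        void freeCombinedVertices()                 // call after gluTessEndPolygon()
        {
            for (size_t i = 0; i < g_combined.size(); ++i)
                std::free(g_combined[i]);
            g_combined.clear();
        }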

    Read the article

  • Windows Service suddenly doing nothing

    - by TB
    Hi, my Windows service uses a thread (not a timer) which loops forever and sleeps for one second every iteration using:

        evet.WaitOne(interval);

    When I start the service it works fine, and I can see in Task Manager that it is running, consuming and releasing memory, using the processor... all of that is normal. But after a while (a random amount of time) the service simply stops doing anything! It is still there in Task Manager, but it no longer consumes any processor time and its memory consumption stops changing. It is simply dead, yet still sitting in Task Manager like a zombie. I know that many exceptions might be thrown while the service runs (it really does many things), but all those exceptions are handled in try/catch blocks, so why does my "always looping" thread stop? The thread also logs every time it loops, and when it freezes like this it is (of course) not logging anything.
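    A sketch of one diagnostic step (names are illustrative): log every way the loop can end, with an inner catch for per-iteration failures and an outer one for anything thrown outside it, so a silent death at least leaves a trace in the log.

        private void WorkerLoop()
        {
            try
            {
                while (!stopEvent.WaitOne(1000))        // wakes up once per second
                {
                    try
                    {
                        DoWork();                       // the service's real work
                    }
                    catch (Exception ex)
                    {
                        Log("iteration failed: " + ex);
                    }
                }
                Log("worker loop exiting: stop event was signalled");
            }
            catch (Exception ex)
            {
                Log("worker loop died: " + ex);
                throw;
            }
            finally
            {
                Log("worker thread terminating");
            }
        }

    If neither exit message ever shows up, the thread has probably not stopped at all but is blocked inside the work itself (a hung network, database or file call that never returns), which attaching a debugger and inspecting the thread's call stack would confirm.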

    Read the article

  • GlassFish JDO and global object

    - by bach
    Hi, I'm thinking about the GlassFish platform for my new app. My app environment doesn't have a big volume of data to handle, but it has a lot of users writing/reading the same data, and a very volatile portion of the data is updated every 200 ms by different users. Therefore I'd like that kind of data to live in memory only and be accessible to the whole app. My questions:

    1. How do I use a global in-memory object with GlassFish?
       a. Use a static variable/object - for that I guess I need to make sure GlassFish is running on only one JVM; how do I configure GlassFish to run on one JVM?
       b. Use the HttpContext - same issue as a.
    2. How do I persist to the DB? Can I use the JDO interface?
    3. How do I schedule tasks to be performed in the future (something like the task queue in GAE)?

    Thanks, J.S. Bach
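    A sketch for questions 1 and 3, assuming GlassFish v3 / Java EE 6 (names are illustrative): a @Singleton session bean gives one shared in-memory instance per deployed application, which covers the "global object" as long as you run a single, non-clustered instance (the default domain setup), and @Schedule covers simple future tasks; persistence from there can go through JPA or JDO inside the flush method.

        import javax.ejb.Singleton;
        import javax.ejb.Lock;
        import javax.ejb.LockType;
        import javax.ejb.Schedule;
        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;

        @Singleton
        public class LiveDataCache {

            private final Map<String, Object> data = new ConcurrentHashMap<String, Object>();

            @Lock(LockType.READ)                 // map is thread-safe, so calls may overlap
            public Object get(String key) { return data.get(key); }

            @Lock(LockType.READ)
            public void put(String key, Object value) { data.put(key, value); }

            @Schedule(hour = "*", minute = "*", second = "*/30")
            public void flushToDatabase() {
                // persist a snapshot of the volatile data here (JPA/JDO)
            }
        }

    A static field or a ServletContext attribute works too, but it carries the same single-JVM caveat without giving you container-managed locking or the scheduler.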

    Read the article

  • Can you create a HIPAA compliant Amazon S3 Web Application?

    - by xkingpin
    I am facing some questions while trying to design an S3 application using ASP.NET MVC and trying to stay HIPAA compliant. My initial plan was to require an SSL connection to my web server, encrypt the images on my server, then send them to S3 using my private keys. Here are my obvious concerns: you cannot let unencrypted images land in any temporary file cache when the client views images in the browser. Even if I set up an .ashx handler to serve the image generically from memory, couldn't it still get stored in the browser cache? Saying the images will be protected because the client connects to my server via HTTPS still does not guarantee that all browsers will refrain from caching the data. It's not even possible to consider the "query string with expiration" option, since the data will be encrypted before being stored on disk at S3 and decrypted again in memory on my server. I think my only options would be to write/purchase some sort of ActiveX component that does not expose the image as a simple HTML image source, or to write my app as a client-side WinForms application.
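    For the browser-cache part specifically, the standard mitigation (not an absolute guarantee, which is exactly the concern raised above) is to serve the decrypted bytes with no-store headers from the handler. A sketch of an .ashx-style handler (the S3/decryption helper is hypothetical):

        using System.Web;

        public class ImageHandler : IHttpHandler
        {
            public void ProcessRequest(HttpContext context)
            {
                // hypothetical helper that fetches from S3 and decrypts in memory
                byte[] imageBytes = DecryptImageFromS3(context.Request["id"]);

                context.Response.ContentType = "image/png";
                context.Response.Cache.SetCacheability(HttpCacheability.NoCache);
                context.Response.Cache.SetNoStore();            // Cache-Control: no-store
                context.Response.AddHeader("Pragma", "no-cache");
                context.Response.BinaryWrite(imageBytes);
            }

            public bool IsReusable { get { return false; } }
        }

    Over HTTPS with these headers, browsers are not supposed to write the response to the disk cache, though a locked-down client (or the ActiveX/WinForms route) remains the only way to control the endpoint completely.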

    Read the article

  • Tuning JVM (GC) for high responsive server application

    - by elgcom
    I am running an application server on 64-bit Linux with 8 CPU cores and 6 GB of memory. The server must be highly responsive. After some inspection I found that the application running on the server creates a rather huge amount of short-lived objects and has only about 200-400 MB of long-lived objects (as long as there is no memory leak). After reading http://java.sun.com/javase/technologies/hotspot/gc/gc_tuning_6.html I use these JVM options:

        -Xms2g -Xmx2g -XX:MaxPermSize=256m -XX:NewRatio=1 -XX:+UseConcMarkSweepGC

    Result: the minor GC takes 0.01-0.02 sec, the major GC takes 1-3 sec, and minor GCs happen constantly. How can I further improve or tune the JVM?

    - A larger heap size? But will GC then take more time?
    - A larger NewSize and MaxNewSize (for the young generation)?
    - A different collector? Parallel GC?
    - Is it a good idea to let major GCs take place more often, and how?
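    Before changing the collector, it helps to measure where the major-GC seconds go; a sketch of a flag set that keeps the 2 GB heap but adds GC logging plus the usual CMS companions (treat the numbers as starting points to be measured, not as answers, and YourMainClass as a placeholder):

        java -Xms2g -Xmx2g -XX:MaxPermSize=256m \
             -XX:NewRatio=1 -XX:SurvivorRatio=8 \
             -XX:+UseConcMarkSweepGC -XX:+UseParNewGC \
             -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly \
             -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:gc.log \
             YourMainClass

    With only ~200-400 MB of long-lived data, a bigger heap mostly buys a larger young generation (fewer minor GCs rather than longer ones, since a copying minor GC's cost tracks the number of surviving objects), while CMS already does most of the old-generation work concurrently; the GC log will show whether the 1-3 s pauses are stop-the-world full GCs (for example after promotion failures) rather than normal concurrent CMS cycles.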

    Read the article

  • Performance of Managed C++ Vs UnManaged/native C++

    - by bsobaid
    I am writing a very high performance application that handles and processes hundreds of events every millisecond. Is unmanaged C++ faster than managed C++, and why? Managed C++ deals with the CLR instead of the OS, and the CLR takes care of memory management, which simplifies the code and is probably also more efficient than code written by "a programmer" in unmanaged C++ - or is there some other reason? When using managed code, how can one avoid dynamic memory allocation, which causes a performance hit, if it is all transparent to the programmer and handled by the CLR? So, coming back to my question: is managed C++ more efficient in terms of speed than unmanaged C++, and why?

    Read the article

  • What wording in the C++ standard allows static_cast<non-void-type*>(malloc(N)); to work?

    - by ben
    As far as I understand the wording in 5.2.9 Static cast, the only time the result of a void*-to-object-pointer conversion is allowed is when the void* was a result of the inverse conversion in the first place. Throughout the standard there is a bunch of references to the representation of a pointer, and the representation of a void pointer being the same as that of a char pointer, and so on, but it never seems to explicitly say that casting an arbitrary void pointer yields a pointer to the same location in memory, with a different type, much like type-punning is undefined where not punning back to an object's actual type. So while malloc clearly returns the address of suitable memory and so on, there does not seem to be any way to actually make use of it, portably, as far as I have seen.

    Read the article

  • Capturing stdout within the same process in Python

    - by danben
    I've got a python script that calls a bunch of functions, each of which writes output to stdout. Sometimes when I run it, I'd like to send the output in an e-mail (along with a generated file). I'd like to know how I can capture the output in memory so I can use the email module to build the e-mail. My ideas so far were:

    - use a memory-mapped file (but it seems like I have to reserve space on disk for this, and I don't know how long the output will be)
    - bypass all this and pipe the output to sendmail (but this may be difficult if I also want to attach the file)
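    A minimal sketch of the in-memory option: temporarily swap sys.stdout for a StringIO buffer while the functions run, then restore it and hand the captured text to the email module (the helper and the main() callable are illustrative):

        import sys
        from cStringIO import StringIO      # Python 2; use io.StringIO on Python 3

        def run_and_capture(func, *args, **kwargs):
            """Run func while collecting everything it prints to stdout."""
            buf = StringIO()
            old_stdout = sys.stdout
            sys.stdout = buf
            try:
                result = func(*args, **kwargs)
            finally:
                sys.stdout = old_stdout      # always restore, even on error
            return result, buf.getvalue()

        # usage: the captured text can then become the body of a MIMEText part
        result, output = run_and_capture(main)

    This only catches output that goes through sys.stdout (print and friends), not output written by C extensions or child processes directly to file descriptor 1; for those, the pipe-based approaches are the fallback.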

    Read the article

  • C++: Best text accumulator

    - by MInner
    Text gets accumulated piecemeal before being sent to the client. Right now we use our own class that allocates memory for each piece as a char array (in effect it works like char[][] + std::list<char*>). Then we build the whole string, convert it into a std::string, and then create a boost::asio::streambuf from it. That's slow enough, I assume - correct me if I'm wrong. I know that in many cases the plain FILE type from stdio.h is used. How does it work - does it allocate memory on every write into it? So, is it faster, and is there any way to read from a FILE into a boost::asio::streambuf?
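    A sketch of skipping the intermediate copies entirely: boost::asio::streambuf is a std::streambuf, so a std::ostream can write each piece straight into it as the piece is produced, and the buffer can then be handed to the socket write without ever building one big std::string:

        #include <boost/asio.hpp>
        #include <ostream>
        #include <string>

        boost::asio::streambuf buf;
        std::ostream out(&buf);              // writes grow buf as needed

        void appendPiece(const std::string& piece)
        {
            out << piece;                    // no per-piece allocation on your side
        }

        // later, e.g.: boost::asio::write(socket, buf);   // consumes what was written

    As for FILE: stdio keeps one fixed-size buffer per stream (allocated lazily, typically a few KB) and flushes it to the descriptor when full, so it does not allocate on every write; but reading a FILE back into a streambuf still means an extra copy through that buffer, so writing into the streambuf directly, as above, avoids the hop altogether.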

    Read the article

  • thread management in nbody code of cuda-sdk

    - by xnov
    When I read the n-body code in the CUDA SDK, I went through some lines of the code and found that it is a little bit different from the paper in GPU Gems 3, "Fast N-Body Simulation with CUDA". My questions are:

    First, why is blockIdx.x still involved in loading memory from global to shared memory, as written in the following code?

        for (int tile = blockIdx.y; tile < numTiles + blockIdx.y; tile++)
        {
            sharedPos[threadIdx.x+blockDim.x*threadIdx.y] =
                multithreadBodies ?
                positions[WRAP(blockIdx.x + q * tile + threadIdx.y, gridDim.x) * p + threadIdx.x] : //this line
                positions[WRAP(blockIdx.x + tile, gridDim.x) * p + threadIdx.x];                    //this line

            __syncthreads();

            // This is the "tile_calculation" function from the GPUG3 article.
            acc = gravitation(bodyPos, acc);

            __syncthreads();
        }

    Isn't it supposed to be like this, according to the paper? I wonder why.

        sharedPos[threadIdx.x+blockDim.x*threadIdx.y] =
            multithreadBodies ?
            positions[WRAP(q * tile + threadIdx.y, gridDim.x) * p + threadIdx.x] :
            positions[WRAP(tile, gridDim.x) * p + threadIdx.x];

    Second, in the multiple-threads-per-body case, why is threadIdx.x still involved? Isn't it supposed to be a fixed value, or not involved at all, since the sum only runs over threadIdx.y?

        if (multithreadBodies)
        {
            SX_SUM(threadIdx.x, threadIdx.y).x = acc.x; //this line
            SX_SUM(threadIdx.x, threadIdx.y).y = acc.y; //this line
            SX_SUM(threadIdx.x, threadIdx.y).z = acc.z; //this line

            __syncthreads();

            // Save the result in global memory for the integration step
            if (threadIdx.y == 0)
            {
                for (int i = 1; i < blockDim.y; i++)
                {
                    acc.x += SX_SUM(threadIdx.x,i).x; //this line
                    acc.y += SX_SUM(threadIdx.x,i).y; //this line
                    acc.z += SX_SUM(threadIdx.x,i).z; //this line
                }
            }
        }

    Can anyone explain this to me? Is it some kind of optimization for faster code?

    Read the article
