Search Results

Search found 12872 results on 515 pages for 'memory alignment'.

Page 230 of 515

  • A PHP object server for business logic

    - by Matthieu
    I wonder if such a thing is possible or even exists: in the MVC pattern, I would like the Model part to be persistent in memory, instead of reloading my instances on each execution, so that only the Controller and View parts are executed. Is there any kind of server that would serve PHP objects (just like a MySQL server serves data records) and keep those objects in memory? A related problem: how would one build structured queries to fetch those objects? Maybe PHPLinq?

    Read the article

  • C# Class Instantiation Overflow

    - by Goober
    Scenario: I have a C# WinForms app where the main form contains a loop that creates 3000 instances of another form (Form B). Form B contains a large number of properties and fields, plus a bunch of methods that do a fair amount of processing. Question: will creating 3000 of these forms give me problems? I'm thinking along the lines of memory exceptions; I've had one or two already, and I've also had an exception that said something like "something went bang, and this is usually a sign of corrupt memory somewhere else". Setup: I'm using a DevExpress Ribbon form, and I haven't implemented Dispose on anything... Do I need to? Help greatly appreciated.

    Read the article

  • [Python] How do I read binary pickle data first, then unpickle it?

    - by conradlee
    I'm unpickling a NetworkX object that's about 1 GB on disk. Although I saved it in the binary format (using protocol 2), it is taking a very long time to unpickle this file: at least half an hour. The system I'm running on has plenty of memory (128 GB), so that's not the bottleneck. I've read here that pickling can be sped up by first reading the entire file into memory and then unpickling it (that particular thread refers to Python 3.0, which I'm not using, but the point should still hold for Python 2.6). How do I first read the binary file and then unpickle it? I have tried:

        import cPickle as pickle
        f = open("big_networkx_graph.pickle","rb")
        bin_data = f.read()
        graph_data = pickle.load(bin_data)

    But this returns:

        TypeError: argument must have 'read' and 'readline' attributes

    Any ideas?
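
    The error hints at the fix: pickle.load() wants a file-like object, while pickle.loads() accepts a raw byte string. A minimal sketch of the read-then-unpickle approach, assuming the whole file fits in memory:

        import cPickle as pickle

        # pull the whole file into memory first
        f = open("big_networkx_graph.pickle", "rb")
        bin_data = f.read()
        f.close()

        # loads(), unlike load(), takes a byte string rather than a file object
        graph_data = pickle.loads(bin_data)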

    Read the article

  • Monitor and Terminate Python script based on system resource use

    - by Vincent
    What is the "right" or "best" way to monitor the system resources a Python script is using, and to terminate it if resource use exceeds some predetermined values? In my case memory usage is the concern. I am not asking how to measure resource use, although I am open to suggestions. As a simple example, let's assume I have a function that finds prime numbers less than some large number and adds them to a list based on some condition. I don't know ahead of time how many primes will satisfy the condition, so I want to be sure to terminate the function if it uses up too much system memory (let's say 8 GB). I know that there are ways to monitor the size of Python objects. What I don't know is whether the proper way is to include a size test in the prime-function loop and exit if memory use exceeds 8 GB, or whether there is an "external" way to monitor and terminate.
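
    A sketch of the in-loop variant, under the assumption of a Linux host (where ru_maxrss is reported in kilobytes; on OS X it is bytes). The candidates iterable and condition callable are placeholders for the prime search:

        import resource

        MAX_BYTES = 8 * 1024 ** 3  # assumed 8 GB ceiling

        def accumulate(candidates, condition):
            results = []
            for n in candidates:
                if condition(n):
                    results.append(n)
                # peak resident set size of this process, in bytes
                rss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss * 1024
                if rss > MAX_BYTES:
                    raise MemoryError("memory cap reached, stopping early")
            return results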

    Read the article

  • Python FastCGI + SQLAlchemy = malformed header from script. Bad header=FROM tags : index.py

    - by crgwbr
    I'm writing a FastCGI application that uses SQLAlchemy and MySQL for persistent data storage. I have no problem connecting to the DB and setting up the ORM (so that tables get mapped to classes); I can even add data to tables (in memory). But as soon as I query the DB (and push any changes from memory to storage), I get a 500 Internal Server Error, and my error.log records malformed header from script. Bad header=FROM tags : index.py, where tags is the table name. Any idea what could be causing this? Also, I don't think it matters, but it's a Linux development server talking to an off-site (across the country) MySQL server.
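
    A hedged guess at the cause: under (F)CGI, anything printed to stdout before the headers is parsed as a header, and the bad header "FROM tags" looks like SQL that SQLAlchemy echoed to stdout. If the engine was created with echo=True, a sketch of the fix (the connection URL is a placeholder):

        import logging
        import sys
        from sqlalchemy import create_engine

        # keep generated SQL off stdout: under FastCGI, stdout is the HTTP response
        engine = create_engine("mysql://user:passwd@dbhost/dbname", echo=False)

        # if SQL logging is still wanted, route it to stderr instead
        handler = logging.StreamHandler(sys.stderr)
        logging.getLogger("sqlalchemy.engine").addHandler(handler)
        logging.getLogger("sqlalchemy.engine").setLevel(logging.INFO)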

    Read the article

  • Keep an ASP.NET IIS website responsive when time between visits is long

    - by Abel
    After years of ASP.NET development I'm actually quite surprised that I can't seem to find a satisfying solution for this. Why does an IIS ASP.NET site always seem to fall asleep (for 2-6 seconds) after a certain period of inactivity (several hours), during which no HTTP response is sent from server to client? This happens on any type of site, one page or many, db or not, regardless of the settings. How can I fix this? During the wait time the server is not busy, and there are no high peaks or (.NET) memory shortages. My guess is that it has to do with Windows moving the IIS process to the background and paging its memory out to the page file, but I'm not sure. Anybody have any ideas? EDIT: one solution is to send some HTTP request once an hour or so, but I'm hoping for something more constructive. EDIT: what I meant is: after hours of inactivity, it pauses for several seconds on any new HTTP request.

    Read the article

  • How to debug MacRuby?

    - by Dan
    Hi, I've encountered an inconsistent bug with MacRuby and have no idea how to go about debugging it. If anyone could help, that would be great. I don't know if this is due to my own code or a bug in the MacRuby framework. I have a feeling it's my own code: something about over-retaining a piece of memory, so that garbage collection fails. This is the error from Xcode. Thanks.

        CSV Wizard(30245,0x7fff704f7ca0) malloc: resurrection error for object 0x20199da20 while assigning {conservative-block}[196608](0x302360060)[117616] = Array[64](0x20199da20)
        garbage pointer stored into reachable memory, break on auto_zone_resurrection_error to debug
        CSV Wizard(30245,0x103781000) malloc: garbage block 0x20199da20(Array[64]) was over-retained during finalization, refcount = 1
        This could be an unbalanced CFRetain(), or CFRetain() balanced with -release. Break on auto_zone_resurrection_error() to debug.
        CSV Wizard(30245,0x103781000) malloc: fatal resurrection error for garbage block 0x20199da20(Array[64]): over-retained during finalization, refcount = 1

    Read the article

  • How can I create the XML::Simple data structure using a Perl XML SAX parser?

    - by DVK
    Summary: I am looking for a fast XML parser (most likely a wrapper around some standard SAX parser) which will produce per-record data structures 100% identical to those produced by XML::Simple.

    Details: We have a large code infrastructure which depends on processing records one by one and expects each record to be a data structure in the format produced by XML::Simple, since it has always used XML::Simple since the early Jurassic era. An example simple XML is:

        <root>
          <rec><f1>v1</f1><f2>v2</f2></rec>
          <rec><f1>v1b</f1><f2>v2b</f2></rec>
          <rec><f1>v1c</f1><f2>v2c</f2></rec>
        </root>

    And example rough code is:

        sub process_record {
            my ($obj, $record_hash) = @_;
            # do_stuff
        }

        my $records = XML::Simple->XMLin(@args)->{root};
        foreach my $record (@$records) { $obj->process_record($record) };

    As everyone knows, XML::Simple is, well, simple. More importantly, it is very slow and a memory hog, since as a DOM parser it needs to build and store 100% of the data in memory. So it's not the best tool for parsing an XML file consisting of a large number of small records, record by record. However, rewriting the entire code base (which consists of a large number of "process_record"-like methods) to work with a standard SAX parser seems like too big a task to be worth the resources, even at the cost of living with XML::Simple. I'm looking for an existing module, probably based on a SAX parser (or anything fast with a small memory footprint), which can produce $record hashrefs one by one, based on the XML pictured above, that can be passed to $obj->process_record($record) and be 100% identical to what XML::Simple's hashrefs would have been. I don't care much what the interface of the new module is; e.g., whether I need to call next_record() or give it a callback coderef that accepts a record.

    Read the article

  • Segmented reduction with scattered segments

    - by Christian Rau
    I have to solve a pretty standard problem on the GPU, but I'm quite new to practical GPGPU, so I'm looking for ideas on how to approach it. I have many points in 3-space, each assigned to one of a very small number of groups (each point belongs to exactly one group), specifically 15 in this case (and that never changes). Now I want to compute the mean and covariance matrix of every group. On the CPU it's roughly:

        for each point p
        {
            mean[p.group] += p.pos;
            covariance[p.group] += p.pos * p.pos;
            ++count[p.group];
        }
        for each group g
        {
            mean[g] /= count[g];
            covariance[g] = covariance[g] / count[g] - mean[g] * mean[g];
        }

    Since the number of groups is extremely small, the last step can be done on the CPU (I need those values on the CPU anyway). The first step is actually just a segmented reduction, but with the segments scattered around. So the first idea I came up with was to sort the points by group. I thought about a simple bucket sort using atomic_inc to compute bucket sizes and per-point relocation indices (got a better idea for sorting? atomics may not be the best option). After that the points are sorted by group, and I could possibly come up with an adaptation of the segmented scan algorithms presented here. But in this special case I have a rather large amount of data per point (9-10 floats, maybe even doubles if the need arises), so the standard algorithms using one shared-memory element per thread and one thread per point might cause problems with per-multiprocessor resources such as shared memory and registers (OK, much more so on compute capability 1.x than 2.x, but still). Due to the very small and constant number of groups, I thought there might be better approaches. Maybe there are existing ideas suited to these specific properties of such a standard problem. Or maybe my general approach isn't that bad and you have ideas for improving the individual steps, like a good sorting algorithm suited to a very small number of keys, or a segmented reduction algorithm that minimizes shared memory/register usage. I'm looking for general approaches and don't want to use external libraries. FWIW, I'm using OpenCL, but it shouldn't really matter, since the general concepts of GPU computing don't differ much across the major frameworks.
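
    For reference, a CPU-side sketch in Python/NumPy of what the kernels must produce (array names and shapes are assumptions); it also spells out the covariance formula from the pseudocode above:

        import numpy as np

        def group_stats(points, groups, num_groups=15):
            # points: (N, 3) float positions; groups: (N,) int labels in [0, num_groups)
            count = np.bincount(groups, minlength=num_groups).astype(float)
            mean = np.zeros((num_groups, 3))
            cov = np.zeros((num_groups, 3, 3))
            for g in range(num_groups):  # only 15 groups, so a plain loop is fine
                p = points[groups == g]
                mean[g] = p.sum(axis=0) / count[g]
                # E[x x^T] - E[x] E[x]^T
                cov[g] = np.einsum('ni,nj->ij', p, p) / count[g] \
                         - np.outer(mean[g], mean[g])
            return mean, cov, count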

    Read the article

  • Inserting rows while fetching (from another table) in SQLite

    - by Samuel
    I'm getting this error no matter what, with Python and SQLite:

        File "addbooks.py", line 77, in saveBook
            conn.commit()
        sqlite3.OperationalError: cannot commit transaction - SQL statements in progress

    The code looks like this:

        conn = sqlite3.connect(fname)
        cread = conn.cursor()
        cread.execute('''select book_text from table''')
        while True:
            row = cread.fetchone()
            if row is None:
                break
            ...
            for entry in getEntries(doc):
                saveBook(entry, conn)

    I can't do a fetchall(), because the table and column sizes are big and memory is scarce. What can be done without resorting to dirty tricks (such as fetching the rowids into memory, which would probably fit, and then selecting the rows one by one)?
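
    One commonly suggested workaround, sketched as an assumption rather than a verified fix: open the connection in autocommit mode (isolation_level=None), so no explicit commit() ever has to run while the SELECT cursor is still open:

        import sqlite3

        # isolation_level=None switches the connection to autocommit mode: every
        # INSERT issued inside saveBook() is committed immediately, so there is
        # no pending transaction to commit while the SELECT is in progress.
        conn = sqlite3.connect(fname, isolation_level=None)
        cread = conn.cursor()
        cread.execute('''select book_text from book''')  # placeholder table name
        for row in cread:  # iterate instead of fetchall(), keeping memory flat
            for entry in getEntries(row[0]):
                saveBook(entry, conn)  # saveBook no longer needs conn.commit()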

    Read the article

  • Visual Studio 2010 "Not enough storage is available to process this command"

    - by Daniel Perez
    I'm fighting with VS 2010 and this error, which seems to have been very common in previous versions, though it looks like not everyone is having it in the latest one. I've got VS 2010 SP1 and I get this error quite often. The problem is that it's not even enough to restart VS to make it go away; I usually have to restart my PC, and I'm losing a lot of time doing this (it's quite frequent). I've got Windows 7 32-bit (I can't upgrade to 64-bit; the company doesn't allow it), and I can't do things like creating another solution (please don't suggest that :) ). I've used the command to make devenv.exe LARGEADDRESSAWARE, but the error keeps happening. My virtual memory size is set to automatic, and the weird thing is that VS doesn't even use 2 GB of RAM, so I don't know if the error is really due to a lack of memory or some bug in the program. Any ideas, things to try, anything?

    Read the article

  • Strange error with CreateCompatibleDC

    - by sevaxx
    Maybe this is a foolish question, but I can't see why I cannot get a DC created in the following code:

        HBITMAP COcrDlg::LoadClippedBitmap(LPCTSTR pathName, UINT maxWidth, UINT maxHeight)
        {
            HBITMAP hBmp = (HBITMAP)::LoadImage(NULL, pathName, IMAGE_BITMAP,
                                                0, 0, LR_LOADFROMFILE | LR_CREATEDIBSECTION);
            if (!hBmp)
                return NULL;

            HDC hdc = (HDC)GetDC();
            HDC hdcMem = CreateCompatibleDC(hdc);
            if (!hdcMem)
            {
                DWORD err = GetLastError();
            }
            ...

    The bitmap hBmp loads fine and hdc has a valid value, but the call to CreateCompatibleDC() returns a NULL pointer, and GetLastError() then returns 0! Can anybody guess what's going on here, please? PS: there are no memory allocations or GDI routines called before this one, so I think memory leaks can be ruled out.

    Read the article

  • What wording in the C++ standard allows static_cast<non-void-type*>(malloc(N)); to work?

    - by ben
    As far as I understand the wording in 5.2.9 Static cast, the only time the result of a void*-to-object-pointer conversion is allowed is when the void* was the result of the inverse conversion in the first place. Throughout the standard there are a bunch of references to the representation of a pointer, and to the representation of a void pointer being the same as that of a char pointer, and so on, but it never seems to say explicitly that casting an arbitrary void pointer yields a pointer to the same location in memory with a different type; much as type punning is undefined where one does not pun back to an object's actual type. So while malloc clearly returns the address of suitable memory and so on, there does not seem to be any way to actually make use of it, portably, as far as I have seen.

    Read the article

  • Stack recommendations for small/medium-sized web application in Python

    - by reto
    I'm looking for some recommendations for a Python web application. We have some memory restrictions, so we're trying to keep it small and lean. We thought about using WSGI (and a Python web server) and building the rest ourselves. We already have a template engine we'd like to use, but we are open to suggestions regarding the whole request handling (the controller). The application has to run in a single process, and requests have to be processed by multiple threads. We've looked at Django, but we're not sure it fits into our memory budget. Your feedback is very welcome! Cheers, Reto
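
    A minimal sketch of the single-process, multi-threaded WSGI setup described above, using only the standard library (the app body is a placeholder for the controller and template engine):

        from wsgiref.simple_server import make_server, WSGIServer
        from SocketServer import ThreadingMixIn  # 'socketserver' on Python 3

        class ThreadedWSGIServer(ThreadingMixIn, WSGIServer):
            """One process, one thread per request."""
            daemon_threads = True

        def app(environ, start_response):
            # hypothetical controller: dispatch on environ['PATH_INFO'] and
            # render with the existing template engine here
            start_response('200 OK', [('Content-Type', 'text/plain')])
            return ['Hello from a lean WSGI stack\n']

        if __name__ == '__main__':
            server = make_server('', 8000, app, server_class=ThreadedWSGIServer)
            server.serve_forever()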

    Read the article

  • Reading from a file not line-by-line

    - by MadH
    Assigning a QTextStream to a QFile and reading it line by line is easy and works fine, but I wonder if performance can be increased by first storing the file in memory and then processing it line by line. Using FileMon from Sysinternals, I noticed that the file is read in chunks of 16 KB, and since the files I have to process are not that big (~2 MB, but many of them!), loading them into memory would be a nice thing to try. Any ideas how I can do so? QFile inherits from QIODevice, which lets me readAll() it into a QByteArray, but how do I proceed then and divide it into lines?

    Read the article

  • Internet Explorer, Google Chrome injection

    - by Volim Te
    I wrote code that injects a function into Internet Explorer/Chrome, but it doesn't work with those processes. Basically, it fills one big structure with all the APIs my function needs, strings, and other data; then it opens the process to get a handle, uses VirtualAllocEx to allocate enough memory to store the function and the structure, and writes both into the allocated memory. It then calls CreateRemoteThread with the function as the starting address and the structure as the parameter. It all works great with calc/notepad/winamp processes, but I have problems with browser injection, and I'm wondering what the cause could be. I'm using these APIs:

        x.xCreateFile
        x.xWriteFile
        x.xCloseHandle
        x.xSleep
        x.xVirtualAlloc
        x.xVirtualFree
        x.xMessageBox
        x.xLoadLibrary
        x.xShellExecute

    Is it because browsers are now protected and run with the lowest privileges?

    Read the article

  • Algorithm for non-contiguous netmask match

    - by Gianluca
    Hi, I have to write a really, really fast algorithm to match an IP address against a list of groups, where each group is defined using a notation like 192.168.0.0/252.255.0.255. As you can see, the bitmask can contain zeros even in the middle, so the traditional "longest prefix match" algorithms won't work. If an IP matches two groups, it will be assigned to the group whose netmask contains the most 1-bits. I'm not working with many entries (let's say < 1000), and I don't want a data structure with a large memory footprint (let's say 1-2 MB), but it really has to be fast (of course, I can't afford a linear search). Do you have any suggestions? Thanks, guys. UPDATE: I found something quite interesting at http://www.cse.usf.edu/~ligatti/papers/grouper-conf.pdf, but it's still too memory-hungry for my utopian use case.
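
    One scheme worth sketching (in Python, purely as an illustration; the class and rule layout are assumptions): bucket the rules by distinct netmask, so a lookup costs one AND plus one hash probe per distinct mask rather than per rule, and probe masks with more 1-bits first to honor the tie-breaking rule:

        from collections import defaultdict

        class GroupMatcher(object):
            def __init__(self, rules):
                # rules: iterable of (base_ip, mask, group), all 32-bit ints
                self.by_mask = defaultdict(dict)
                for base, mask, group in rules:
                    self.by_mask[mask][base & mask] = group
                # masks containing more 1-bits take precedence
                self.masks = sorted(self.by_mask,
                                    key=lambda m: bin(m).count('1'),
                                    reverse=True)

            def lookup(self, ip):
                for mask in self.masks:  # one probe per *distinct* mask
                    group = self.by_mask[mask].get(ip & mask)
                    if group is not None:
                        return group
                return None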

    Read the article

  • Why doesn't this work?

    - by user146780
    I've tried to fix a memory leak in the GLU tessellation callback by using a global variable, but now it doesn't draw anything:

        GLdouble *gluptr = NULL;

        void CALLBACK combineCallback(GLdouble coords[3], GLdouble *vertex_data[4],
                                      GLfloat weight[4], GLdouble **dataOut)
        {
            GLdouble *vertex;
            if (gluptr == NULL)
            {
                gluptr = (GLdouble *)malloc(6 * sizeof(GLdouble));
            }
            vertex = (GLdouble *)gluptr;
            vertex[0] = coords[0];
            vertex[1] = coords[1];
            vertex[2] = coords[2];
            for (int i = 3; i < 6; i++)
            {
                vertex[i] = weight[0] * vertex_data[0][i] +
                            weight[1] * vertex_data[0][i] +
                            weight[2] * vertex_data[0][i] +
                            weight[3] * vertex_data[0][i];
            }
            *dataOut = vertex;
        }

    Basically, instead of calling malloc on each invocation (hence the memory leak), I'm reusing a global pointer, but this doesn't work (nothing is drawn to the screen). Why would malloc'ing into a pointer created in the function behave any differently from a global variable? Thanks

    Read the article

  • Windows Service suddenly doing nothing

    - by TB
    Hi, my Windows service uses a thread (not a timer) which loops forever and sleeps for one second on every iteration using:

        event.WaitOne(interval);

    When I start the service it works fine; I can see in Task Manager that it is running, consuming and releasing memory, consuming processor time, etc. All of that is normal. But after a while (a random amount of time) the service simply stops: it is still there in Task Manager, but it no longer consumes any processor time, and its memory consumption stops changing. It simply dies, while remaining in Task Manager like a zombie. I know that many exceptions might occur while the service runs (it really does a lot of things), but all of them are handled in try/catch blocks, so why does my "always looping" thread stop? The thread also logs on every iteration, and when it freezes like this it stops logging (of course).

    Read the article

  • Tuning JVM (GC) for high responsive server application

    - by elgcom
    I am running an application server on 64-bit Linux with 8 CPU cores and 6 GB of memory. The server must be highly responsive. After some inspection I found that the application creates a huge number of short-lived objects and holds only about 200-400 MB of long-lived objects (as long as there is no memory leak). After reading http://java.sun.com/javase/technologies/hotspot/gc/gc_tuning_6.html, I use these JVM options:

        -Xms2g -Xmx2g -XX:MaxPermSize=256m -XX:NewRatio=1 -XX:+UseConcMarkSweepGC

    Result: minor GCs take 0.01-0.02 s and happen constantly; major GCs take 1-3 s. How can I further improve or tune the JVM? A larger heap size? But will GC then take more time? Larger NewSize and MaxNewSize (for the young generation)? A different collector, say the parallel GC? Is it a good idea to make major GCs happen more often, and if so, how?

    Read the article

  • C++: Best text accumulator

    - by MInner
    Text gets accumulated piecemeal before being sent to the client. Right now we use our own class that allocates memory for each piece as a char array (in effect, it works like char[][] plus std::list<char*>). Then we build the whole string, convert it into a std::string, and create a boost::asio::streambuf from it. That's slow enough, I assume; correct me if I'm wrong. I know that in many cases the plain FILE type from stdio.h is used. How does it work? Does it allocate memory on every write into it? So, is it faster, and is there any way to read from a FILE into a boost::asio::streambuf?

    Read the article

  • Leak caused by fread

    - by Jack
    I'm profiling code for a game I wrote, and I'm wondering how the following snippet can cause a heap increase of 4 KB (according to the Heapshot Analysis in Xcode) every time it is executed:

        u8 WorldManager::versionOfMap(FILE *file)
        {
            char magic[4];
            u8 version;

            fread(magic, 4, 1, file);    // <-- this is the line
            fread(&version, 1, 1, file);
            fseek(file, 0, SEEK_SET);

            return version;
        }

    According to the profiler, the highlighted line allocates 4.00 KB of memory with a malloc every time the function is called, and that memory is never released. The same thing seems to happen with other calls to fread around the code, but this was the most blatant one. Is there anything trivial I'm missing? Is it something internal I shouldn't care about? Just as a note: I'm profiling on an iPhone, and the code is compiled as release (-O2).

    Read the article

  • Performance of managed C++ vs. unmanaged/native C++

    - by bsobaid
    I am writing a very high performance application that handles and processes hundreds of events every millisecond. Is unmanaged C++ faster than managed C++? And why? Managed C++ targets the CLR instead of the OS, and the CLR takes care of memory management, which simplifies the code and is probably also more efficient than code written by "a programmer" in unmanaged C++. Or is there some other reason? When using managed code, how can one avoid the dynamic memory allocation that causes a performance hit, if it is all transparent to the programmer and handled by the CLR? So, coming back to my question: is managed C++ more efficient in terms of speed than unmanaged C++, and why?

    Read the article

  • Java servlet: generate zip file from BLOBs

    - by Zack
    I'm trying to zip a large number of PDF files (stored as BLOBs in the DB) and then return the zip as an attachment to the user. What's the best way to do this without running into memory issues? Another note: I actually need to merge some PDFs before adding them to the ZipOutputStream, so a couple of PDFs will need to be held in memory at a time. I assume it would be best to store them as temporary files on the server before zipping them all?
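
    The streaming idea, sketched in Python for brevity (names are placeholders; the same shape carries over to ZipOutputStream in a servlet): write one entry at a time straight into the archive and park each merged intermediate on disk, so only a single document is held in memory at once:

        import tempfile
        import zipfile

        def build_zip(pdf_blobs, out_stream):
            # pdf_blobs: iterable of (filename, pdf_bytes), fetched one BLOB
            # at a time; out_stream: a seekable file-like object
            archive = zipfile.ZipFile(out_stream, 'w', zipfile.ZIP_DEFLATED)
            for name, blob in pdf_blobs:
                tmp = tempfile.NamedTemporaryFile()  # merged PDF parked on disk
                tmp.write(blob)
                tmp.flush()
                archive.write(tmp.name, arcname=name)  # streamed into the zip
                tmp.close()  # the temp file is deleted here
            archive.close()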

    Read the article
