Search Results

Search found 15543 results on 622 pages for 'memory editing'.


  • Java object caching: which is faster, reading from a file or from a remote machine?

    - by Kumar225
    I am at the point where I need to decide what to do when object caching reaches the configured threshold. Should I store the objects in an indexed file (like the one provided by JCS) and read them from the file (file I/O) when required, or have the objects stored in a distributed cache (network, serialization, deserialization)? We are using Solaris as the OS.

    Update, adding some more information: I am asking this to determine whether I can switch to distributed caching. The remote server that would hold the cache has more memory and a better disk, and it would be used only for caching. One reason we cannot increase the number of locally cached objects is that they are stored in the JVM heap, which has limited memory (we are using a 32-bit JVM).

    Update: thanks, we finally ended up choosing Coherence as our cache product. It provides many cache configuration topologies: in-process vs. remote vs. disk, etc.

    Read the article

  • How to free members (i.e. BSTR, SAFEARRAY, VARIANT) of an IDL user-defined structure which is encapsulated in a VARIANT?

    - by Picaro De Vosio
    Hi, I have a structure defined in IDL. This structure has the following members:

        typedef struct
        {
            BSTR m_sFirst;
            BSTR m_sSecond;
            VARIANT m_vChildStruct;           // this member encapsulates a sub-structure
            SAFEARRAY __RPC_FAR * m_saArray;
        } CustomINFO;

    I am allocating the memory for the structs using CoTaskMemAlloc and encapsulating it in a VARIANT as follows:

        vV->vt = VT_RECORD;
        vV->pvRecord = pStruct;   // pointer to the struct
        vV->pRecInfo = pRI;       // IRecordInfo interface

    Is it enough to call VariantClear to deallocate the memory of the struct and its members? Will it also release the IRecordInfo interface? Or do I have to manually get the encapsulated struct, deallocate each member myself, and then use CoTaskMemFree to deallocate the struct?
    Thanks, Picaro De Vosio
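
    For reference, a minimal sketch (an assumption, not the poster's code) of wrapping a CoTaskMemAlloc'd record in a VARIANT so that VariantClear can take over; pRI is assumed to be a valid IRecordInfo for CustomINFO, e.g. obtained from GetRecordInfoFromGuids:

        #include <windows.h>
        #include <oleauto.h>

        void WrapRecord(IRecordInfo *pRI)     // hypothetical helper
        {
            ULONG cb = 0;
            pRI->GetSize(&cb);                // size of CustomINFO

            void *pStruct = CoTaskMemAlloc(cb);
            pRI->RecordInit(pStruct);         // zero-initializes the BSTR/VARIANT/SAFEARRAY members

            VARIANT vV;
            VariantInit(&vV);
            vV.vt = VT_RECORD;
            vV.pvRecord = pStruct;
            vV.pRecInfo = pRI;
            pRI->AddRef();                    // the VARIANT now holds its own reference

            // VariantClear uses the attached IRecordInfo to clear the record's
            // members and releases pRecInfo; whether the record block itself
            // must still be freed with CoTaskMemFree is exactly the question above.
            VariantClear(&vV);
        }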

    Read the article

  • Inserting CLOBs into Oracle with ODBC

    - by Carra
    I'm trying to insert a CLOB into Oracle. If I try this with an OdbcConnection, the data is not inserted into the database: it returns 1 row affected and no errors, but nothing shows up in the database. It does work with an OracleConnection. However, using the Microsoft OracleClient often makes our web services crash with an AccessViolationException ("Attempted to read or write protected memory. This is often an indication that other memory is corrupt."). This happens a lot when we use OracleDataAdapter.Fill(dataset), so using it doesn't seem like an option. Is there any way to insert/update CLOBs with more than 4,000 characters from .NET with an OdbcConnection?

    Read the article

  • Qt vs. .NET - REAL comparisons for R.A.D. projects

    - by Pirate for Profit
    Man, in all these Qt vs. .NET discussions, 90% of these people argue about the dumbest crap. I'm trying to get a real comparison chart here, because I know a little about both frameworks but I don't know everything. I believe Qt and .NET both have strengths and weaknesses. This is meant to be a comparison that highlights them, so people can make more informed decisions before embarking on a project, in the spirit of R.A.D.

    Event handling: in Qt the event handling system is very simple. You just emit signals when something cool happens and then catch them in slots, e.g. you run some calculations and then emit valueChanged(30, false, 20.2); catching it is easy, since any object can declare a slot to receive that message: void MyObj::valueChanged(int percent, bool ok, float timeRemaining). It's easy to "block" an event or "disconnect" when needed, and it works seamlessly across threads. Once you get the hang of it, it just seems a lot more natural and intuitive than the way .NET event handling is set up (you know, void valueChanged(object sender, CustomEventArgs e)). And I'm not just talking about syntax, because in the end .NET anonymous delegates are the bomb, and yes, .NET obviously has much stronger reflection capabilities. I'm talking about the way the system feels to a human being. Qt wins hands down for the simplest yet still flexible event handling system ever, IMO. (A minimal sketch of the Qt pattern follows below.)

    Plugins and such: I do love some of the ease of C# compared to C++, as well as .NET's assembly architecture, even though it leads to a bunch of DLLs (there are ways to combine everything into a single exe, though). That is a big bonus for modular projects, which are a PITA to import stuff into in C++ as far as RAD is concerned.

    Database ease of use: also, what about datasets and database manipulation? I think .NET wins here, but I'm not sure.

    Threading/concurrency: what do you think of the threading? In .NET, all I've ever done is make a list of master worker threads with locks. I like the QtConcurrent framework: you don't worry about locks or anything, and with the ease of the signal/slot system across threads it's nice to get notified about the progress of things. QtConcurrent is the simplest threading mechanism I've ever played with.

    Memory usage: also, what do you think of the overall memory usage comparison? Is the .NET garbage collector on the ball and quick compared to the instantaneous nature of native memory management, or does it just let programs leak up a storm and lag the computer, then clean up when it's about to really lag? Doesn't the just-in-time compiler produce native code that is pretty good, and doesn't that only happen the first time the program is run?

    However, I am a n00b who doesn't know what I'm talking about; please school me on the subject.
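
    As promised above, a minimal sketch of the Qt signal/slot pattern described in the post (class names follow the post's example; this is illustrative, not a benchmark):

        #include <QObject>

        class Worker : public QObject {
            Q_OBJECT
        signals:
            void valueChanged(int percent, bool ok, float timeRemaining);
        public:
            void calculate() {
                // run some calculations, then:
                emit valueChanged(30, false, 20.2f);
            }
        };

        class MyObj : public QObject {
            Q_OBJECT
        public slots:
            void valueChanged(int percent, bool ok, float timeRemaining) {
                // receives the message; with a queued connection this
                // runs safely in the receiver's thread
            }
        };

        // connect(&worker, SIGNAL(valueChanged(int,bool,float)),
        //         &myObj,  SLOT(valueChanged(int,bool,float)));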

    Read the article

  • How (if at all) is data shared between FastCGI processes?

    - by Josh the Goods
    I've written a simple Perl script that I'm running via FastCGI on Apache. The application loads a set of XML data files which are used to look up values based upon the query parameters of an incoming request. As I understand it, if I want to increase the number of concurrent requests my application can handle, I need to allow FastCGI to spawn multiple processes. Will each of these processes have to hold a duplicate copy of the XML data in memory? Is there a way to set things up so that I can have one copy of the XML data loaded in memory while increasing the capacity to handle concurrent requests?
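
    One relevant mechanism, sketched here in C++ rather than the poster's Perl: data loaded before fork() is shared copy-on-write between worker processes, so a read-only lookup table costs physical memory only once. (This assumes the process manager forks after the data is loaded.)

        #include <sys/wait.h>
        #include <unistd.h>
        #include <map>
        #include <string>

        int main() {
            std::map<std::string, std::string> lookup;   // stand-in for the parsed XML
            lookup["key"] = "value";                     // ...load all data here...

            for (int i = 0; i < 4; ++i) {                // spawn workers
                if (fork() == 0) {
                    // child: reads lookup freely; the pages stay shared
                    // with the parent until something writes to them
                    _exit(0);
                }
            }
            while (wait(nullptr) > 0) {}                 // reap workers
            return 0;
        }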

    Read the article

  • Is it possible to implement a GC.GetAliveInstancesOf<T>() (for use in debugging)?

    - by Omer Raviv
    Hi, I know this was answered before, but I'd like to pose a somewhat different question. Is there any conceivable way to implement GC.GetAliveInstancesOf<T>() that can be evaluated in the Visual Studio debug Watch window? Sasha Goldstein shows one solution in this article, but it requires that every class you want to query inherit from a specific base class. I'll emphasize that I only want to use this method during debugging, so I don't care that the GC may change an object's address in memory at runtime. One idea might be to somehow harness the !dumpheap -type command of SOS and do some magic trick to create a temporary variable and have it point to the memory address printed by SOS. Does anyone have a solution that works?

    Read the article

  • Are there any free .NET OCR libraries that will perform OCR on an application window directly?

    - by Kelsey
    I am looking for a free .NET OCR library that can do OCR on a given application window, or even on an image in memory (I can take a snapshot of the application window myself). I have looked at tessnet2 and MODI, but both require an image located on disk. I need to use OCR because the application I am trying to write a script for does some wacky stuff that cannot be read using the Windows API, and I need to scrape data from the screen. I have tested both tessnet2 and MODI and they can mostly read the text, but because this has to run in an environment that cannot write to disk, I need the library to read directly from the application window or from some type of memory stream. I am thinking OCR is my only solution, but there could be other methods that I am not thinking of. Suggestions?

    Read the article

  • Server configuration problem: a PHP script just dies with no error log and no reason

    - by Roberto
    Hi. (First of all, thanks for your attention, and sorry for my bad English. Also, this is not a programming error, or that's what I think; I believe it is an error in some configuration of the server or something else, but I don't know what.)

    I have a PHP script (it runs as a Linux process, not in the web browser) that sends SMS via SMPP on port 2055 (using sockets in PHP) and then inserts around 10,000 rows into a MySQL database; the script gets the data from an XML file. At first it ran on a shared server (HostGator is our hosting provider) and in the beginning it worked fine, with no trouble, but 5 months later an error appeared: the process just dies for no reason. The script had only sent and inserted 700 rows into the database table, and the process didn't show any warning or error; nothing appears in the error logs, and I didn't make any change to the script.

    HostGator never helped us, haha, so we decided to move the script from the shared server to a dedicated server. I thought it was a memory problem or something like that, but when we moved the script to the dedicated server the problem just got worse: the script dies when it has sent and inserted only 40 to 50 rows into the database.

    Some information about this error: the shared server runs Red Hat 4.1.2-46 and the dedicated server runs CentOS 5.4. I have commented out the line that sends the SMS, and the problem remains. On the shared server, the script used to die after inserting approximately 700 rows; now it dies after about 2,500 rows, which is better, but we didn't change anything. On the dedicated server, the script dies when it has inserted only about 40 rows. Before it dies, the script turns into a zombie process, and we don't know why. Its memory usage appears to be 0.3%, and its CPU usage 0.7% to 1%. I have raised the PHP memory limit to 128 MB, and even set it to -1 (so PHP has no limit), but the problem remains. We have a limit of 50 simultaneous MySQL connections, so I don't think that is the problem. I'm using mysqli to connect from PHP to MySQL. HostGator reports that they haven't made any change or update on the servers.

    What could be the problem? What should I do? What should I search for? Is there something in the logic I'm missing? What steps should I follow when managing and debugging processes on Linux? Thank you very much; I think this is not a programming problem, but you have more experience than me, so you can tell me. Thanks!!! Bye!!! :)

    Read the article

  • FreeBSD console broken?

    - by Kuroki Kaze
    It seems my FreeBSD console is misconfigured (I guess). I cannot use the Home key on the command line, and in vi the left arrow switches me from edit mode to command mode, which makes editing a little difficult. How can I find out what's wrong and fix it? I'm not root, by the way; I just hope it's something in my profile configuration.

    Read the article

  • free OCR with UI for correcting errors (for Windows)

    - by Hugh Allen
    I've used SimpleOCR, which has a nice GUI for correcting errors. Unfortunately it makes a lot of errors (and suffers from other bugs and limitations)! Is there a free OCR program for Windows which has such a GUI and a low error rate? I want it to show the original (bitmap) word while I'm editing the OCRed word, similar to what SimpleOCR does. Free is good; open source is better.

    Read the article

  • CUDA threads output different values

    - by kar
    Hi, I wrote a CUDA program; I have given the kernel function below. The device memory is allocated through cudaMalloc(), and the initial value of *md is 10.

        __global__ void add(int *md)
        {
            int x, oper = 2;
            x = threadIdx.x;
            *md = *md * oper;
            if (x == 1) { *md = *md * 0; }
            if (x == 2) { *md = *md * 10; }
            if (x == 3) { *md = *md + 1; }
            if (x == 4) { *md = *md - 1; }
        }

    I executed the above code as add<<<1,5>>>(md) and add<<<1,4>>>(md). For <<<1,5>>> the output is 19; for <<<1,4>>> the output is 21.

    1) I have a doubt: will cudaMalloc() allocate in device main memory?
    2) Why is it always the last thread alone whose effect shows up in the above program?

    Thank you
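
    For context, a minimal host-side sketch of how such a kernel is typically driven (it assumes the kernel above is in scope): cudaMalloc() does allocate in device global memory, and the value must be copied back with cudaMemcpy() to be read on the host.

        #include <cstdio>
        #include <cuda_runtime.h>

        int main() {
            int h = 10, *md = nullptr;
            cudaMalloc(&md, sizeof(int));                         // device global memory
            cudaMemcpy(md, &h, sizeof(int), cudaMemcpyHostToDevice);

            add<<<1, 5>>>(md);                                    // five threads race on *md
            cudaDeviceSynchronize();

            cudaMemcpy(&h, md, sizeof(int), cudaMemcpyDeviceToHost);
            std::printf("result: %d\n", h);                       // order-dependent result
            cudaFree(md);
            return 0;
        }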

    Read the article

  • binary search and trie

    - by user121196
    Given a large list of alphabetically sorted words in a file, I need to write a program that, given a word x, determines whether x is in the list. Priorities: 1. speed, 2. memory.

    I already know I can use (n is the number of words, m is the average length of the words):

    1. a trie: time is O(log(n)), space (best case) is O(log(n*m)), space (worst case) is O(n*m);
    2. loading the complete list into memory, then binary search: time is O(log(n)), space is O(n*m).

    I'm not sure about the complexity of the trie; please correct me if it is wrong. Also, are there other good approaches?
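
    A minimal sketch of option 2 above (load the sorted list into memory, then binary-search it); the file name words.txt is a placeholder:

        #include <algorithm>
        #include <fstream>
        #include <iostream>
        #include <string>
        #include <vector>

        int main() {
            std::vector<std::string> words;
            std::ifstream in("words.txt");           // one word per line, already sorted
            std::string w;
            while (std::getline(in, w)) words.push_back(w);

            std::string x = "example";
            // O(log n) comparisons, each comparison O(m) in the word length
            bool found = std::binary_search(words.begin(), words.end(), x);
            std::cout << (found ? "in list" : "not in list") << '\n';
            return 0;
        }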

    Read the article

  • Programmatic resource monitoring per process in Linux

    - by tuxx
    Hi, I want to know if there is an efficient solution for monitoring a process's resource consumption (CPU, memory, network bandwidth) in Linux. I want to write a daemon in C++ that does this monitoring for some given PIDs. From what I know, the classic solution is to periodically read the information from /proc, but this doesn't seem the most efficient way (it involves many system calls). For example, to monitor the memory usage every second for 50 processes, I have to open, read, and close 50 files (that means 150 system calls) every second from /proc, not to mention the parsing involved when reading these files. Another problem is network bandwidth consumption: it cannot be easily computed for each process I want to monitor. The solution adopted by NetHogs involves a pretty high overhead in my opinion: it captures and analyzes every packet using libpcap, then for each packet the local port is determined and looked up in /proc to find the corresponding process. Do you know of more efficient alternatives to these methods, or of any libraries that deal with these problems?
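
    For reference, a minimal sketch of the classic /proc approach the post describes (one open/read/close per sample): reading the resident-set size of a PID from /proc/<pid>/statm, whose first two fields are total program size and RSS, measured in pages (see proc(5)).

        #include <fstream>
        #include <iostream>
        #include <string>
        #include <unistd.h>

        long residentPages(pid_t pid) {
            std::ifstream f("/proc/" + std::to_string(pid) + "/statm");
            long size = 0, resident = 0;
            f >> size >> resident;       // fields: total pages, resident pages
            return f ? resident : -1;
        }

        int main() {
            long rss = residentPages(getpid());
            std::cout << "RSS: "
                      << rss * (sysconf(_SC_PAGESIZE) / 1024) << " KiB\n";
            return 0;
        }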

    Read the article

  • iPhone app similar to the Talking Larry application

    - by Zeeshan
    I am developing an iPhone application similar to the Talking Larry application. I am facing a low-memory issue when I pre-load the animations into an NSMutableArray. If I do not pre-load the animations and instead load them into the animation images when the user touches the button, it takes time and plays the animation back very slowly, and while recording video it does not play the complete animations. How can I resolve this issue? I want to play back the animations the way the Talking Larry app does, without running into the low-memory issue.

    Read the article

  • gcc/g++: error when compiling large file

    - by Alexander
    Hi, I have an auto-generated C++ source file, around 40 MB in size. It largely consists of push_back commands for some vectors, and of the string constants that are to be pushed. When I try to compile this file, g++ exits and says that it couldn't reserve enough virtual memory (around 3 GB). Googling this problem, I found that using the command-line switches

        --param ggc-min-expand=0 --param ggc-min-heapsize=4096

    may solve the problem. They, however, only seem to work when optimization is turned on.

    1) Is this really the solution that I am looking for?
    2) Or is there a faster, better way to do this (compiling takes ages with these options activated)?

    Best wishes, Alexander

    Update: thanks for all the good ideas. I tried most of them. Using an array instead of several push_back() operations reduced memory usage, but as the file that I was trying to compile was so big, it still crashed, only later. In a way, this behaviour is really interesting, as there is not much to optimize in such a setting; what does GCC do behind the scenes that costs so much memory? (I compiled with all optimizations deactivated as well and got the same results.)

    The solution that I have switched to now is reading in the original data from a binary object file that I created from the original file using objcopy. This is what I originally did not want to do, because creating the data structures in a higher-level language (in this case Perl) was more convenient than having to do it in C++. However, getting this running under Win32 was more complicated than expected. objcopy seems to generate files in the ELF format, and it seems that some of the problems I had disappeared when I manually set the output format to pe-i386. The symbols in the object file are by default named after the file name, e.g. converting the file inbuilt_training_data.bin would result in these two symbols: binary_inbuilt_training_data_bin_start and binary_inbuilt_training_data_bin_end. I found some tutorials on the web which claim that these symbols should be declared as extern char _binary_inbuilt_training_data_bin_start;, but this does not seem to be right; only extern char binary_inbuilt_training_data_bin_start; worked for me.
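
    For reference, a minimal sketch of consuming the objcopy-embedded data described in the update; the symbol names follow objcopy's file-name-derived scheme for inbuilt_training_data.bin, without the leading underscore, as the poster found necessary on pe-i386:

        #include <cstddef>

        extern char binary_inbuilt_training_data_bin_start;
        extern char binary_inbuilt_training_data_bin_end;

        int main() {
            const char *data = &binary_inbuilt_training_data_bin_start;
            std::size_t size = &binary_inbuilt_training_data_bin_end
                             - &binary_inbuilt_training_data_bin_start;
            // parse the raw bytes in [data, data + size) at runtime
            (void)data; (void)size;
            return 0;
        }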

    Read the article

  • cuda/thrust: Trying to sort_by_key 2.8GB of data in 6GB of gpu RAM throws bad_alloc

    - by Sven K
    I have just started using Thrust, and one of the biggest issues I have so far is that there seems to be no documentation as to how much memory operations require. So I am not sure why the code below throws bad_alloc when trying to sort (before the sorting I still have 50% of GPU memory available, and I have 70 GB of RAM available on the CPU); can anyone shed some light on this?

        #include <thrust/device_vector.h>
        #include <thrust/sort.h>
        #include <thrust/random.h>

        void initialize_data(thrust::device_vector<uint64_t>& data) {
            thrust::fill(data.begin(), data.end(), 10);
        }

        #define BUFFERS 3

        int main(void) {
            size_t N = 120 * 1024 * 1024;
            char line[256];
            try {
                std::cout << "device_vector" << std::endl;
                typedef thrust::device_vector<uint64_t> vec64_t;
                // Each buffer is 900MB
                vec64_t c[3] = { vec64_t(N), vec64_t(N), vec64_t(N) };
                initialize_data(c[0]);
                initialize_data(c[1]);
                initialize_data(c[2]);
                std::cout << "initialize_data finished... Press enter";
                std::cin.getline(line, 0);
                // nvidia-smi reports 48% memory usage at this point
                // (2959MB of 6143MB)
                std::cout << "sort_by_key col 0" << std::endl;
                // throws bad_alloc
                thrust::sort_by_key(c[0].begin(), c[0].end(),
                    thrust::make_zip_iterator(
                        thrust::make_tuple(c[1].begin(), c[2].begin())));
                std::cout << "sort_by_key col 1" << std::endl;
                thrust::sort_by_key(c[1].begin(), c[1].end(),
                    thrust::make_zip_iterator(
                        thrust::make_tuple(c[0].begin(), c[2].begin())));
            } catch (thrust::system_error &e) {
                std::cerr << "Error: " << e.what() << std::endl;
                exit(-1);
            }
            return 0;
        }

    Read the article

  • Software for managing a gamenet

    - by Isaac
    I need software for managing a gamenet. (A gamenet is like a cybercafe, except people play games in a gamenet instead of surfing the web!) The software should have these basic features: accounting (defining users, assigning usage time to them, etc.); denying access to regular Windows features (Windows Explorer, creating/editing/deleting files); showing users a list of available games to run; and creating login reports. I've tested a program named GamePort, but it has some bugs and shortcomings.

    Read the article

  • resolv.conf not working properly with Ethernet in Ubuntu

    - by Mark Z.
    I have a Lenovo X200 laptop on which I am running Ubuntu 9.10. Recently (I assume after some update, but I really don't know), my Ethernet port stopped working under Linux. A more tech/Linux-savvy friend of mine was able to fix the problem temporarily by manually editing the resolv.conf file with the DNS servers he found through his connection. However, after rebooting, the problem came back, and now I am looking for a more permanent solution.

    Read the article

  • Where to start when programming process synchronization algorithms like clone/fork, semaphores

    - by David
    I am writing a program that simulates process synchronization. I am trying to implement the fork and semaphore techniques in C++, but I am having trouble starting off. Do I just create a process and have it fork from the very beginning? Is the program just going to be one infinite loop that goes back and forth between the parent and child processes? And how do you create the idea of "shared memory" in C++: an explicit memory address, or just some global variable? I just need to get the overall structure/idea of the flow of the program. Any references would be appreciated.
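
    As a starting point, a minimal sketch of the distinction the question raises: an ordinary global variable is not shared across fork(), but an anonymous MAP_SHARED mapping is.

        #include <sys/mman.h>
        #include <sys/wait.h>
        #include <unistd.h>
        #include <iostream>

        int main() {
            // a shared anonymous mapping remains shared after fork()
            int *counter = static_cast<int *>(
                mmap(nullptr, sizeof(int), PROT_READ | PROT_WRITE,
                     MAP_SHARED | MAP_ANONYMOUS, -1, 0));
            *counter = 0;

            pid_t pid = fork();
            if (pid == 0) {            // child
                *counter = 42;         // visible to the parent
                _exit(0);
            }
            waitpid(pid, nullptr, 0);
            std::cout << "parent sees " << *counter << '\n';   // prints 42
            munmap(counter, sizeof(int));
            return 0;
        }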

    Read the article

  • Problem with the return-to-libc method

    - by jth
    Hi, I'm trying to understand the return-to-libc method. I'm using Ubuntu Linux 9.10, 32-bit, with ASLR disabled. In theory it sounds quite easy: overwrite the saved EIP with the address of system() (or whatever function you want), then put the address to which system() should return, and after that the parameter for system(), the "/bin/bash" string. But what happens is that my exploit keeps segfaulting the vulnerable program. I assume something with the system() address went wrong. This is what I did so far.

    Determined the address of system():

        (gdb) print system
        $1 = {<text variable, no debug info>} 0x167020 <system>
        (gdb) x/x system
        0x167020 <system>: 0x890cec83

    I used the subsequent x/x system because those 3 bytes returned by print system look like an index into some sort of jump table (PLT?), so I assume 0x890cec83 is the right address to use to overwrite the saved EIP.

    After that I determined the address of the /bin/bash string in memory, using a small C program which basically consists of this line:

        printf("Address of string /bin/bash: %p\n", getenv("SHELL"));

    Then I looked around a little bit in the memory and found /bin/bash:

        (gdb) x/s 0xbffff6ca
        0xbffff6ca: "/bin/bash"

    After I gathered this information, I filled the buffer:

        (gdb) b 9
        Breakpoint 1 at 0x8048407: file victim.c, line 9.
        (gdb) r `perl -e 'print "A"x9 . "\x83\xec\x0c\x89FAKE\xca\xf6\xff\xbf";'`
        Breakpoint 1, main (argc=1111638594, argv=0xc360cca) at victim.c:10
        10 return 0;
        (gdb) x/s 0xbffff6ca
        0xbffff6ca: "/bin/bash"

    The stack frame looks like this:

        (gdb) i f
        Stack level 0, frame at 0xbffff440:
        eip = 0x8048407 in main (victim.c:10); saved eip 0x890cec83
        source language c.
        Arglist at 0xbffff438, args: argc=1111638594, argv=0xc360cca
        Locals at 0xbffff438, Previous frame's sp is 0xbffff440
        Saved registers:
        ebp at 0xbffff438, eip at 0xbffff43c

    This all seems right to me: the saved EIP was overwritten with the (hopefully) correct system() address, the return address for system() was set to "FAKE" (shouldn't matter), and the address of /bin/bash also seems to be correct. When I continue execution, victim segfaults at some strange address, and certainly not at 0x890cec83:

        (gdb) cont
        Continuing.
        Program received signal SIGSEGV, Segmentation fault.
        0x0804840d in main (argc=Cannot access memory at address 0x41414149) at victim.c:11
        11 }

    Does anyone have an explanation or a hint as to what happens here and why execution isn't redirected to 0x890cec83? Thanks in advance; any hint, even a vague one, would be appreciated. I have no idea why this doesn't work.
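
    For orientation, a minimal sketch of the kind of overflow target the transcript implies (an assumption; the poster's actual victim.c is not shown): argv[1] is copied unchecked into a small stack buffer, so the filler bytes plus a crafted address can overwrite the saved EIP.

        #include <cstring>

        int main(int argc, char *argv[]) {
            char buf[5];
            if (argc > 1)
                std::strcpy(buf, argv[1]);   // no bounds check: the overflow
            return 0;
        }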

    Read the article

  • Penalty of using QGraphicsObject vs QGraphicsItem?

    - by Dutch
    I currently have a hierarchy of items based on QGraphicsItem. I want to move to QGraphicsObject instead so that I can put properties on my items. I will not be making use of signals/slots or any other features of QObject. I'm told that you shouldn't derive from QObject because it's "heavy" and "slow". To test the impact, I derived from QGraphicsObject, added a couple of properties to my items, and looked at the memory usage of the running app. I created 1000 items using both flavors, and I don't notice anything more than about 10 KB more memory usage. Since all I am adding to my items are properties, is it safe to say that QObject only adds weight if you are using signals/slots?
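
    For concreteness, a minimal sketch of the move the post describes: deriving from QGraphicsObject purely to expose a property, with no signals or slots of its own (the class and member names are illustrative):

        #include <QGraphicsObject>
        #include <QPainter>

        class Item : public QGraphicsObject {
            Q_OBJECT
            Q_PROPERTY(int weight READ weight WRITE setWeight)
        public:
            Item() : m_weight(0) {}

            int weight() const { return m_weight; }
            void setWeight(int w) { m_weight = w; }

            QRectF boundingRect() const { return QRectF(0, 0, 10, 10); }
            void paint(QPainter *p, const QStyleOptionGraphicsItem *, QWidget *) {
                p->drawRect(boundingRect());
            }
        private:
            int m_weight;
        };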

    Read the article

  • what's faster: merging lists or dicts in python?

    - by tipu
    I'm working with an app that is CPU-bound more than memory-bound, and I'm trying to merge two things, whether they be sets or dicts. Now, the thing is I can choose either one, but I'm wondering if merging dicts would be faster since it's all in memory? Or is it always going to be O(n), n being the size of the smaller set? The reason I asked about dicts rather than sets is that I can't convert a set to JSON, because that results in {key1, key2, key3} and JSON needs key/value pairs, so I am using a dict so that json.dumps returns {"key1": 1, "key2": 1, "key3": 1}. Yes, this is wasteful, but if it proves to be faster then I'm okay with it.

    Read the article
