Search Results

Search found 12281 results on 492 pages for 'memory fences'.


  • Easiest way to combine a dataset and other data in a single file?

    - by Tim Gradwell
    I have a dataset in C# which I can serialise using dataset.WriteXml(filename), but I want the file to contain other data as well (essentially some metadata about the dataset). I could add another table to the dataset containing the metadata, but I'd prefer to keep it separate if at all possible. Essentially I think I want to create a 'combination of files' file that looks something like this:

        size_of_file1
        file1
        size_of_file2
        file2
        ...

    Then I'd like to load the file into memory and split it into separate streams, so that I can feed the dataset into dataset.ReadXml(stream) and the metadata into something else. Does that sound possible? Can anyone tell me how I can do it? Thanks Tim
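
    As a sketch of the length-prefixed layout described above (in Python for illustration; in C# a BinaryWriter/BinaryReader over a FileStream can play the same role), each block is written after a fixed-size header carrying its size, so the reader knows where one embedded stream ends and the next begins. The file name and payloads below are hypothetical:

        import struct

        def write_blocks(path, blocks):
            # Prefix each block with its length as a 4-byte big-endian integer.
            with open(path, "wb") as f:
                for block in blocks:
                    f.write(struct.pack(">I", len(block)))
                    f.write(block)

        def read_blocks(path):
            # Read each header back and slice the file into its original blocks.
            blocks = []
            with open(path, "rb") as f:
                while header := f.read(4):
                    (size,) = struct.unpack(">I", header)
                    blocks.append(f.read(size))
            return blocks

        # Hypothetical usage: the first block is the dataset XML, the second the metadata.
        write_blocks("combined.bin", [b"<NewDataSet/>", b"<metadata/>"])
        dataset_xml, metadata = read_blocks("combined.bin")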

    Read the article

  • Which external DSLs do you like to use?

    - by Max Toro
    The reason I'm asking is that right now there seems to be a tendency to make DSLs internal. One example is LINQ in C# and VB: you can use it against in-memory objects, or as a replacement for SQL or another external DSL. Another example is HTML5 vs XHTML2. XHTML2 supported decentralized extensibility through namespaces; in other words, you embed external DSL code (XForms, SVG, MathML, etc.) in your XHTML code. Sadly, HTML5 doesn't seem to have such a mechanism; instead, new features are internal (e.g. <canvas> instead of SVG). I'd like to know what other developers think about this. Do you like using external DSLs? Which ones? If not, why?

    Read the article

  • Does a static object within a function introduce a potential race condition?

    - by Jeremy Friesner
    I'm curious about the following code:

        class MyClass
        {
        public:
            MyClass() : _myArray(new int[1024]) {}
            ~MyClass() { delete [] _myArray; }
        private:
            int * _myArray;
        };

        // This function may be called by different threads in an unsynchronized manner
        void MyFunction()
        {
            static const MyClass _myClassObject;
            [...]
        }

    Is there a possible race condition in the above code? Specifically, is the compiler likely to generate code equivalent to the following, "behind the scenes"?

        void MyFunction()
        {
            static bool _myClassObjectInitialized = false;
            if (_myClassObjectInitialized == false)
            {
                _myClassObjectInitialized = true;
                _myClassObject.MyClass();  // call constructor to set up the object
            }
            [...]
        }

    ... in which case, if two threads were to call MyFunction() nearly simultaneously, _myArray might get allocated twice, causing a memory leak? Or is this handled correctly somehow?

    Read the article

  • Memcachedb versus MongoDB versus CouchDB as a file-based caching solution?

    - by Scott Faisal
    We need a caching solution that essentially caches data (text files) for anywhere from 3 days up to a week, based on user preferences and criteria. In this case memory-based caching does not make sense to us. We were referred to MemcacheDB, but I also thought of some NoSQL solutions. Our current application uses an RDBMS (MySQL), and I guess it makes sense to use MemcacheDB; however, NoSQL does appeal as it is something more on the horizon. But we have not deployed a production-level application under NoSQL, and the beta stuff does not settle well with management/investors. Anyhow, what are your thoughts and how would you address it? Thank you

    Read the article

  • What practical proofs are there about the Turing completeness of neural nets? What NNs can execute code?

    - by Albert
    I'm interested in the computational power of neural nets. It is generally accepted that recurrent neural nets are Turing complete. Now I was searching for some papers which prove this. What I found so far:

      - "Turing computability with neural nets", Hava T. Siegelmann and Eduardo D. Sontag, 1991. I think this is only interesting from a theoretical point of view, because it needs the neuron activity to have infinite precision (to encode the state somehow as a rational number).
      - S. Franklin and M. Garzon, "Neural computability". This needs an unbounded number of neurons and also doesn't really seem to be that practical.

    (Note that another question of mine tries to point out this kind of problem between such theoretical results and the practice.) I'm searching mostly for some neural net which really can execute some code, which I can also simulate and test in practice. Of course, in practice it would have some kind of limited memory. Does anyone know of something like this?

    Read the article

  • C++ program runs slowly in VS2008

    - by Nima
    I have a program written in C++ that opens a binary file (test.bin), reads it object by object, and puts each object into a new file (it opens the new file, writes into it (append), and closes it). I use fopen/fclose, fread, and fwrite. test.bin contains 20,000 objects. This program runs in 1 second under Linux with g++, but takes 1 minute in VS2008 in both debug and release mode! There are reasons why I don't do them in batches, keep them in memory, or do any other kind of optimization. I just wonder why it is so much slower under Windows. Thanks,

    Read the article

  • RSA encrypted data block size

    - by calccrypto
    How do you store an RSA-encrypted data block? The output might be significantly larger than the original input block, and I don't think people waste memory by padding bucketloads of 0s in front of each data block. Besides, how would they be removed? Or is each block stored on a new line within the file? If that is the case, how would you tell the difference between a legitimate newline and a '\n' character written into the file? What am I missing? I'm writing the "write to file" part in Python, so maybe it's one of the differences between:

        open(file, 'w')
        open(file, 'w+b')
        open(file, 'wb')

    that I don't know. Or is it something else?
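
    A note on the premise: with the standard padding schemes, an RSA ciphertext block is always exactly the size of the modulus (for example, 256 bytes for a 2048-bit key), so fixed-size blocks can simply be concatenated, and opening the file in binary mode ('wb' rather than 'w') sidesteps the newline question entirely. A minimal sketch under that assumption:

        BLOCK_SIZE = 256  # ciphertext size in bytes, assuming a 2048-bit modulus

        def write_ciphertext(path, blocks):
            # 'wb' writes raw bytes; no newline translation can corrupt the data.
            with open(path, "wb") as f:
                for block in blocks:
                    assert len(block) == BLOCK_SIZE
                    f.write(block)

        def read_ciphertext(path):
            # Fixed-size blocks: slice the file every BLOCK_SIZE bytes.
            with open(path, "rb") as f:
                data = f.read()
            return [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]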

    Read the article

  • How to generate a large image in the Compact Framework

    - by Buthrakaur
    I need to generate large images (an A4 image at 200 DPI; PNG format would be fine) in my Compact Framework application. This is impossible to do in the standard way due to memory limitations (such a big image will throw an OutOfMemoryException). Is there any library which offers file-backed stream image generation? Alternatively, I could generate many smaller stripes of the image (each stripe representing a row of the large image) using the standard Bitmap approach, but I need to merge them together afterwards. Is there any method to merge many smaller images into one large image without having to instantiate a large Bitmap instance (which would again cause OOM)?

    Read the article

  • Is it possible to give a Python dict an initial capacity (and is it useful)?

    - by Peter Smit
    I am filling a Python dict with around 10,000,000 items. My understanding of dicts (or hashtables) is that when too many elements get into them, they need to resize, an operation that costs quite some time. Is there a way to tell a Python dict that you will be storing at least n items in it, so that it can allocate memory from the start? Or will this optimization do no good for my running speed? (And no, I have not checked whether the slowness of my small script is because of this; I actually wouldn't know how to do that. This is, however, something I would do in Java: set the initial capacity of the HashSet right.)
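
    For reference, CPython exposes no public way to pre-size a dict; the table grows as items are inserted, but each resize is amortized over many inserts. Since the asker mentions not knowing how to measure, a minimal timing sketch:

        import time

        def fill(n):
            # Time n insertions, resizes included; the cost per insert is amortized O(1).
            d = {}
            start = time.perf_counter()
            for i in range(n):
                d[i] = i
            return time.perf_counter() - start

        # If resizing were a real bottleneck, doubling n would much more than double the time.
        for n in (1_000_000, 2_000_000, 4_000_000):
            print(n, round(fill(n), 3))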

    Read the article

  • Predicting performance for an iPhone/iPod Touch App

    - by Avizz
    I don't have an iPhone Developer Program account yet and will be getting one in the next couple of days. Can Instruments be used with the simulator to give a rough estimate of how well my app may perform? Using Instruments I checked and fixed all the leaks it was detecting, and it appears that my memory usage maxes out at about 5.77 MB. Are there any other tests I could perform with Instruments to judge how well my app would perform? I realize there is no way other than the actual device to get a definite answer, but it would be nice to get an estimate.

    Read the article

  • [Ruby] How can I randomly iterate through a large Range?

    - by void
    I would like to randomly iterate through a range. Each value will be visited only once and all values will eventually be visited. For example:

        (0..9).sort_by{rand}.map{|x| f(x)}

    where f(x) is some function that operates on each value. A Fisher-Yates shuffle could be used to increase efficiency, but this code is sufficient for many purposes. My problem is that sort_by will transform the range into an array, which is not cool because I am working with astronomically large numbers. Ruby will quickly consume a large amount of RAM trying to create a monstrous array. This is also why the following code will not work:

        tried = {} # store previous attempts
        bigint = 99**99
        bigint.times {
          x = rand(bigint)
          redo if tried[x]
          tried[x] = true
          f(x) # some function
        }

    This code is very naive and quickly runs out of memory as tried obtains more entries. What sort of algorithm can accomplish what I am trying to do?
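
    One constant-memory approach, sketched here in Python since the technique is language-neutral: a linear congruential generator whose parameters satisfy the Hull-Dobell conditions has full period, so it visits every residue modulo m exactly once before repeating. Running it over the next power of two above the range size and discarding out-of-range values enumerates the whole range in scrambled order without storing anything:

        import random

        def shuffled_range(n):
            # Full-period LCG over a power of two >= n (Hull-Dobell for m = 2**k:
            # c odd, a % 4 == 1), skipping values >= n ("cycle walking").
            m = max(4, 1 << (n - 1).bit_length())
            a = 4 * random.randrange(m // 4) + 1
            c = 2 * random.randrange(m // 2) + 1
            x = random.randrange(m)
            for _ in range(m):
                x = (a * x + c) % m
                if x < n:
                    yield x

        for value in shuffled_range(10):   # each of 0..9 exactly once
            print(value)

    The ordering is far from a uniform shuffle, but every value appears exactly once and memory use stays constant no matter how astronomically large the range is.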

    Read the article

  • Double indirection and structures passed into a function

    - by ZPS
    I am curious why this code works:

        typedef struct test_struct {
            int id;
        } test_struct;

        void test_func(test_struct ** my_struct)
        {
            test_struct my_test_struct;
            my_test_struct.id = 267;
            *my_struct = &my_test_struct;
        }

        int main()
        {
            test_struct * main_struct;
            test_func(&main_struct);
            printf("%d\n", main_struct->id);
        }

    This works, but pointing to the memory address of a function's local variable is a big no-no, right? But if I used a structure pointer and malloc, that would be the correct way, right?

        void test_func(test_struct ** my_struct)
        {
            test_struct *my_test_struct;
            my_test_struct = malloc(sizeof(test_struct));
            my_test_struct->id = 267;
            *my_struct = my_test_struct;
        }

        int main()
        {
            test_struct * main_struct;
            test_func(&main_struct);
            printf("%d\n", main_struct->id);
        }

    Read the article

  • Implications of Fulltext Search over many columns

    - by Alex
    Hello, I have a really wide table which includes separate columns for billing address, shipping address, primary address, names, aliases, etc. (I can't normalize this table further, and that's not the question here anyway.) I'm implementing SQL Server fulltext search, and I'm wondering whether I should limit the search ability to just the primary fields (primary address and names, for example), or if I can extend the search across all columns without incurring too much of a performance or memory penalty. I've done some basic testing with 10,000 sample rows and it's quite fast, but I don't have much experience with fulltext indexing, especially its dictionary internals, so I don't know if the index is going to grow over time, or if there is anything else to consider. Thoughts?

    Read the article

  • How to read HttpWebRequest's request stream in C#? I got the error "the stream is not readable"

    - by sam
    Hi, I want to read the request stream from a custom HttpWebRequest class that inherits from HttpWebRequest. I have tried to read the request stream at different stages, but I am still not sure how to achieve that in the class; thanks very much for any help. This custom HttpWebRequest is used to serialize a SOAP message, and I want to know, in string format, what request has been sent. I also implemented a custom HttpRequestCreator and HttpWebResponse, but still can't find a place/stage where I can read the request stream. If I output everything into a memory stream and then copy the content to the request stream, does anyone know at which stage I can do it: in the constructor, BeginGetRequestStream, EndGetRequestStream, or GetRequestStream?

    Read the article

  • How to check if an object is connected to another in Hibernate

    - by codevourer
    Imagine two domain object classes, A and B. A has a bidirectional one-to-many relationship to B, and A is related to thousands of Bs. The relations must be unique; it's not possible to have a duplicate. To check whether an instance of B is already connected to a given instance of A, we could perform an easy INNER JOIN, but this only covers the already-persisted relations. What about the currently transient relations?

        class A {
            @OneToMany
            private List<B> listOfB;
        }

    If we access listOfB and perform a contains() check, this will lazily fetch all the connected instances of B from the datasource. I only want to validate them by their primary key. Is there an easy solution where I can ask things like "Is this instance of A connected with this instance of B?" without loading all this data into memory and performing a check based on collections?

    Read the article

  • Detect movie being played (Windows)

    - by modosansreves
    Watching a movie is quite a different user activity: the user touches neither mouse nor keyboard, yet 'actively' uses the computer. Thus, the screensaver shouldn't run, indexing should be performed with care, etc. On the other side, playing video requires either direct writes to video memory, or DirectShow, or some other API. This may be the key to the answer. What is the dead-simple way to determine that a video is being played?

    Read the article

  • Is it possible to find out what FlashBuilder is doing during compilation?

    - by justkevin
    I've found that Flash Builder 4 (formerly Flex Builder) has trouble working with large projects. After a certain point, builds seem to take longer and longer. I've tried many different ways of improving build time, including:

      - Moving embedded resources into externally linked projects.
      - Using -incremental.
      - Tweaking the .ini JVM settings, including memory and -server.
      - Turning off automatic build (I'd prefer not to have to do this, because one of the main reasons for using an IDE is to be told about errors as you make them).
      - Deleting the project and re-checking out from the repository.

    While some of these may help a bit, the performance is still annoyingly slow. I feel that if I knew what was taking so long, I could refactor my projects to build faster. Is there some setting that tells FlashBuilder to let me see which parts of the build process take so much time?

    Read the article

  • C Programming: calling free() on error?

    - by kouei
    Hi all, this is a follow-up on my previous question (link here). My question is: let's say I have the following code...

        char* buf = (char*) malloc(1024);
        ...
        for (; i < 20; i++) {
            // read from a file and store in the buffer
            if (read(fd, buf, 1024) == -1) {
                perror("read failed");
                return 1;
            }
            ...
        }
        free(buf);

    What I'm trying to get at is: what if an error occurs at read()? Does that mean my allocated memory never gets freed? If that's the case, how do I handle this? Should I be calling free() as part of error handling? Once again, I apologize for the bad English. ^^; Many thanks, K.

    Read the article

  • How do I write Push and Pop in Scheme?

    - by kunjaan
    Right now I have:

        (define (push x a-list)
          (set! a-list (cons a-list x)))

        (define (pop a-list)
          (let ((result (first a-list)))
            (set! a-list (rest a-list))
            result))

    But I get this result:

        Welcome to DrScheme, version 4.2 [3m].
        Language: Module; memory limit: 256 megabytes.
        > (define my-list (list 1 2 3))
        > (push 4 my-list)
        > my-list
        (1 2 3)
        > (pop my-list)
        1
        > my-list
        (1 2 3)

    What am I doing wrong? Is there a better way to write push so that the element is added at the end, and pop so that the element gets deleted from the front?

    Read the article

  • C: pass awk syntax as argument to execl

    - by Skuja
    I want to run the following command in C to read the system's CPU and memory usage:

        ps aux|awk 'NR > 0 { cpu +=$3; ram+=$4 }; END {print cpu,ram}'

    I am trying to pass it to execl and then read its output:

        execl("/bin/ps", "/bin/ps", "aux|awk",
              "'NR > 0 { cpu +=$3; ram+=$4 }; END {print cpu,ram}'", (char *) 0);

    but in the terminal I get the following error:

        ERROR: Unsupported option (BSD syntax)

    How do I properly pass the awk part as an argument to execl?
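
    For what it's worth, the pipe is shell syntax, so execl cannot hand it to ps as an argument; the pipeline has to go through a shell (in C, something along the lines of execl("/bin/sh", "sh", "-c", "ps aux | awk ...", (char *) 0)) or be assembled by hand with pipe() and two child processes. The same point sketched in Python, where shell=True passes the string to /bin/sh -c:

        import subprocess

        # shell=True runs the command through /bin/sh -c, which understands '|'.
        out = subprocess.check_output(
            "ps aux | awk 'NR > 0 { cpu += $3; ram += $4 }; END { print cpu, ram }'",
            shell=True, text=True)
        print(out.strip())  # two numbers: summed %CPU and %MEM across processes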

    Read the article

  • How are session identifiers generated?

    - by Asaf R
    Most web applications depend on some kind of session with the user (for instance, to retain login status). The session id is kept as a cookie in the user's browser and sent with every request. To make it hard to guess the next user's session, these session ids need to be sparse and somewhat random. They also have to be unique. The question is: how do you efficiently generate session ids that are sparse and unique? This question has a good answer for unique random numbers, but it seems not scalable to a large range of numbers, simply because the array will end up taking a lot of memory.
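
    In practice the storage problem is usually avoided rather than solved: draw each id from a space so large (128 bits or more) that, by the birthday bound, collisions are negligible without tracking what has been issued. A sketch using Python's secrets module:

        import secrets

        def new_session_id():
            # 16 random bytes = 128 bits of entropy: sparse in a space of 2**128,
            # so uniqueness needs no bookkeeping and guessing is impractical.
            return secrets.token_urlsafe(16)

        print(new_session_id())  # a 22-character URL-safe string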

    Read the article

  • Is there any significant benefit to reading a string directly from a control instead of moving it into a string variable?

    - by Kevin
        sqlInsertFrame.Parameters.AddWithValue("@UserName", txtUserName.Text);

    Given the code above: if I don't have any need to move the textbox data into a string variable, is it best to read the data directly from the control? In terms of performance, it would seem smartest not to create any unnecessary variables which use up memory if they're not needed. Or is this a situation where that's technically true but doesn't yield any real-world difference due to the size of the data in question? Forgive me, I know this is a very basic question.

    Read the article

  • How to create a new IDA project based on an existing one with different offsets?

    - by tbergelt
    I have an existing IDA Pro project for a C166 processor embedded application. This project already has many functions, variables, etc. defined. There are different versions of the embedded application I am looking at. The different versions are 99% the same, but with slight variations in code and data that cause functions and variables to sit at different memory offsets. I want to create a new IDA project for a different version of the application, and I would like to somehow import all of my function and variable definitions from my existing IDA project. I would like IDA to recognize the signatures of the existing function definitions and define them at their new locations in the new project. How can I do this? Are there certain plugins for IDA I can chain together?

    Read the article

  • Garbage Collection Java

    - by simion
    The slides I am revising from say the following:

        Live objects can be identified either by maintaining a count of the number of references to each object, or by tracing chains of references from the roots. Reference counting is expensive – it needs action every time a reference changes and it doesn't spot cyclical structures, but it can reclaim space incrementally. Tracing involves identifying live objects only when you need to reclaim space – moving the cost from general access to the time at which the GC runs, typically only when you are out of memory.

    I understand why reference counting is expensive, but I do not understand what "doesn't spot cyclical structures, but it can reclaim space incrementally" means. Could anyone help me out a little bit, please? Thanks
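
    On the cycle point: two objects that reference each other keep each other's counts above zero even when nothing else can reach them, so pure reference counting never reclaims them. CPython, which pairs reference counting with a tracing cycle collector, makes this easy to demonstrate:

        import gc

        class Node:
            def __init__(self):
                self.other = None

        a, b = Node(), Node()
        a.other, b.other = b, a   # a cycle: each keeps the other's refcount at 1
        del a, b                  # no names remain, yet the counts never reach zero

        # Only the tracing collector finds the unreachable cycle and reclaims it.
        print(gc.collect())       # reports how many unreachable objects it found

    The "reclaim space incrementally" half is the flip side: with counting, each object is freed the instant its count hits zero, spreading the cost across normal execution instead of concentrating it in a collection pause.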

    Read the article

  • In C, since free() knows an array's size, why isn't there a function that gets the array size? [closed]

    - by user354959
    Possible duplicate: "If free() knows the length of my array, why can't I ask for it in my own code?" Searching around (including here at Stack Overflow), I gathered that malloc() allocates an array and also creates a header holding the array's bookkeeping info, including its size, and that free() uses this information to know how to deallocate the array. So, if the array size info is "there" (somewhere in memory), why isn't there a function that returns an array's size by looking at the array header? Or am I missing something?

    Read the article
