Search Results

Search found 6492 results on 260 pages for 'trigger io'.

  • How to get the size of a file in Visual C++?

    - by karikari
    Below is my code. My problem is that my destination file always ends up with far more strings than the originating file. Inside the for loop, instead of using i < sizeof more, I realized that I should bound the loop by the size of file2. Now my problem is: how do I get the size of file2?

        int i = 0;
        FILE *file2 = fopen(LOG_FILE_NAME, "r");
        wfstream file3(myfile, ios_base::out);
        // char more[1024];
        char more[SIZE-OF-file2];
        for (i = 0; i < SIZE-OF-file2; i++)
        {
            fgets(more, SIZE-OF-file2, file2);
            file3 << more;
        }
        fclose(file2);
        file3.close();
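    One standard C way to answer the size question is to seek to the end of the stream and read back the offset; a minimal sketch (it assumes the file is opened in binary mode, since on a Windows text stream ftell's value is not guaranteed to be a byte count):

        #include <cstdio>

        // Returns the file's size in bytes, or -1 on error,
        // restoring the original read position afterwards.
        long file_size(FILE *f)
        {
            long pos = ftell(f);                      // remember where we are
            if (pos < 0 || fseek(f, 0, SEEK_END) != 0)
                return -1;
            long size = ftell(f);                     // offset at EOF == size
            fseek(f, pos, SEEK_SET);                  // go back
            return size;
        }

    That said, the loop would still be off even with the size known: fgets reads a line per call, not a byte, so looping until fgets returns NULL is the simpler fix for the copy itself.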

  • File mode for creating+reading+appending+binary

    - by MihaiD
    I need to open a file for reading and writing. If the file is not found, it should be created. It should also be treated as binary on Windows. Can you tell me the file-mode string I need to use for this? I tried 'r+ab', but that doesn't create the file if it is not found. Thanks
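    The C stdio mode that matches all four requirements is "a+b": it opens for reading and appending, creates the file if it is missing, and the b suppresses newline translation on Windows. The caveat is that every write goes to the end of the file regardless of the current position. A minimal sketch:

        #include <cstdio>

        int main(void)
        {
            // "a+b": readable anywhere, created if missing, binary;
            // all writes are appended to the end of the file.
            FILE *f = fopen("data.bin", "a+b");
            if (!f)
                return 1;
            fseek(f, 0, SEEK_SET);   // reads honor the position
            /* ... */
            fclose(f);
            return 0;
        }

    If you also need to overwrite in the middle, open with "r+b" and fall back to "w+b" when fopen fails; the fallback creates the missing file.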

  • C: Reading file with a starting point

    - by Shinka
    A simple question, but I can't find the answer in my book. I want to read a binary file to seed a random number generator, but I don't want to seed my generator with the same seed each time I call the function, so I will need to keep a variable for my position in the file (not a problem), and I would need to know how to read a file starting at a specific point in the file (no idea how). The code:

        void rng_init(RNG* rng)
        {
            // ...
            FILE *input = fopen("random.bin", "rb");
            unsigned int seed[32];
            fread(seed, sizeof(unsigned int), 32, input);
            // seed 'rng'...
            fclose(input);
        }
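    fseek with SEEK_SET positions the next read at an absolute byte offset, and ftell reports where a read left off. A sketch of the function with a caller-kept offset (the offset handling is illustrative):

        #include <cstdio>

        struct RNG;   // the question's generator type, defined elsewhere

        // Reads 32 unsigned ints starting at *offset, then advances
        // *offset so the next call consumes fresh bytes.
        void rng_init_at(RNG *rng, long *offset)
        {
            FILE *input = fopen("random.bin", "rb");
            if (!input)
                return;
            fseek(input, *offset, SEEK_SET);   // jump to the saved point
            unsigned int seed[32];
            size_t got = fread(seed, sizeof(unsigned int), 32, input);
            *offset = ftell(input);            // remember where we stopped
            // seed 'rng' from seed[0..got-1]...
            fclose(input);
        }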

  • Reading and writing in parallel

    - by Malfist
    I want to be able to read and write a large file in parallel, or if not in parallel, at least in blocks so that I don't use up so much memory. This is my current code:

        // Define memory stream which will be used to hold encrypted data.
        MemoryStream memoryStream = new MemoryStream();

        // Define cryptographic stream (always use Write mode for encryption).
        CryptoStream cryptoStream = new CryptoStream(memoryStream, encryptor, CryptoStreamMode.Write);

        // Start encrypting.
        using (BinaryReader reader = new BinaryReader(File.Open(fileIn, FileMode.Open)))
        {
            byte[] buffer = new byte[1024 * 1024];
            int read = 0;
            do
            {
                read = reader.Read(buffer, 0, buffer.Length);
                cryptoStream.Write(buffer, 0, read);
            } while (read == buffer.Length);
        }

        // Finish encrypting.
        cryptoStream.FlushFinalBlock();

        // Convert our encrypted data from a memory stream into a byte array.
        //byte[] cipherTextBytes = memoryStream.ToArray();

        // Write our memory stream to a file.
        memoryStream.Position = 0;
        using (BinaryWriter writer = new BinaryWriter(File.Open(fileOut, FileMode.Create)))
        {
            byte[] buffer = new byte[1024 * 1024];
            int read = 0;
            do
            {
                read = memoryStream.Read(buffer, 0, buffer.Length);
                writer.Write(buffer, 0, read);
            } while (read == buffer.Length);
        }

        // Close both streams.
        memoryStream.Close();
        cryptoStream.Close();

    As you can see, it reads the entire file into memory, encrypts it, then writes it out. If I happen to be encrypting files that are very large (2GB+) it tends not to work, or at the very least consumes ~97% of my memory. How could I do it in a more effective manner?
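    The usual fix is to take the MemoryStream out of the pipeline: construct the CryptoStream directly over the output FileStream, so each chunk is encrypted and written as it is read, and peak memory stays at one buffer. That is the shape of any chunked pipeline; sketched here in C-style I/O for clarity, with transform standing in for the encryptor:

        #include <cstdio>

        // Chunked pipeline: read a block, transform it, write it out.
        // Peak memory is one buffer no matter how big the file is.
        void stream_copy(FILE *in, FILE *out,
                         void (*transform)(unsigned char *, size_t))
        {
            static unsigned char buf[1024 * 1024];
            size_t n;
            while ((n = fread(buf, 1, sizeof buf, in)) > 0) {
                transform(buf, n);            // e.g. encrypt in place
                fwrite(buf, 1, n, out);
            }
        }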

  • Why do some programs write their output to stderr instead of stdout?

    - by Zagorax
    I recently added an ssh-add command to my .bashrc file. I found that ssh-add $HOME/.ssh/id_rsa_github > /dev/null still printed an "identity added ..." message every time I opened a shell, while ssh-add $HOME/.ssh/id_rsa_github > /dev/null 2>&1 did the trick, and my shell is now 'clean'. Reading on the internet, I found that other commands do the same (for example, time). Could you please explain why it's done?
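    By convention, stdout carries a program's actual output (the part you might pipe into another command or redirect into a file), while stderr carries status messages and diagnostics, so they still reach the terminal when stdout is redirected. ssh-add's banner and time's timing report are diagnostics, not output. A two-line illustration:

        #include <cstdio>

        int main(void)
        {
            printf("42\n");                          // result: follows pipes and redirects
            fprintf(stderr, "processed 1 record\n"); // status: still reaches the terminal
            return 0;
        }

    Run it as ./prog > out.txt: the 42 lands in out.txt while the status line still appears on screen, which is exactly the behavior you silenced with 2>&1.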

  • How does including a .csv work in an enum?

    - by Tommy
        enum ID    // IDs
        {
            ID_HEADER = 0,    // ID 0 = headers
            #include "DATA.CSV"
            ID_LIMIT
        };

    I inherited some code here. Looking at "DATA.CSV", I see all the IDs used to populate the enum in column B, along with other data. My question: how does the enum know that it is using "column B" to retrieve its members? There must be some other logic in the application, yet I don't see it. What else should I look for? Thanks.
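    The preprocessor pastes DATA.CSV in verbatim, so the enum never "selects" a column; the file has to be written so that each row is valid C at that point. Look for either a generator that emits the file with the column-B names already formatted as enumerators, or a macro defined around the include that picks out one field. A hypothetical sketch of the macro variant (ROW and the row contents are illustrative, not from the question's code):

        /* DATA.CSV, generated so that every row is a macro call:
           ROW("header text", ID_ALPHA, 42)
           ROW("header text", ID_BETA,  43)            */

        #define ROW(colA, colB, colC) colB,   /* keep column B only */
        enum ID
        {
            ID_HEADER = 0,
            #include "DATA.CSV"
            ID_LIMIT
        };
        #undef ROW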

  • Exciting new Evented I/O technologies

    - by Saif Bechan
    Lately I have had my eye on evented I/O to tackle some of my web application problems. I have been looking at things such as Python's Twisted, Ruby's EventMachine, and Node.js. Are there any other alternatives to these three, maybe in other languages such as PHP?

  • Seeking to a line in a file in g++

    - by Phenom
    Is there a way that I can seek to a certain line in a file to read or write data? Let's say I want to write some data starting on the 10th line in a text file. There might be some data already in the first few lines, or the file could even be empty. Is there a way I can seek directly to the line I want without having to worry about what's already in the file?
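    There isn't a direct way: a text file is a flat byte stream with no line index, so reaching line 10 means scanning for nine newlines, and writing "at" a line in the middle overwrites bytes rather than inserting (inserting means rewriting everything after that point). A sketch of the positioning part:

        #include <fstream>
        #include <istream>
        #include <string>

        // Position a stream at the start of 1-based line `target` by
        // counting newlines. Returns the byte offset, or -1 if the
        // file has fewer lines than that.
        std::streamoff seek_line(std::istream &in, int target)
        {
            in.seekg(0);
            std::string line;
            for (int i = 1; i < target; ++i)
                if (!std::getline(in, line))
                    return -1;
            return in.tellg();
        }

    If the file may hold fewer than nine lines (or be empty), you would first append newlines until it is long enough, then seek and write.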

  • C read part of file into cache

    - by Pete Jodo
    I have to write a program (for Linux) where there's an extremely large index file, and I have to search and interpret the data from the file. Now the catch is, I'm only allowed to have x bytes of the file cached at any time (determined by an argument), so I have to remove certain data from the cache if it's not what I'm looking for. If my understanding is correct, fopen (r) doesn't put anything in the cache; only when I call getc or fread (specifying a size) does it get cached. So my question is: let's say I use fread and read 100 bytes, but after checking them, only 20 of the 100 bytes contain the data I need. How would I remove the useless 80 bytes from the cache (or overwrite them) in order to read more from the file?
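    With stdio, the cache is simply the stream's buffer, and bytes that fread has already returned need no explicit eviction; the next read overwrites them. What you can control is the buffer's size, using setvbuf before the first read. A sketch, assuming the limit arrives as the program's first argument:

        #include <cstdio>
        #include <cstdlib>

        int main(int argc, char **argv)
        {
            size_t limit = argc > 1 ? strtoul(argv[1], NULL, 10) : 4096;

            FILE *f = fopen("index.dat", "rb");   // filename is illustrative
            if (!f)
                return 1;

            // Cap stdio's cache at `limit` bytes; this must happen
            // before the first read from the stream.
            char *cache = (char *)malloc(limit);
            setvbuf(f, cache, _IOFBF, limit);

            char chunk[100];
            size_t n = fread(chunk, 1, sizeof chunk, f);
            // Inspect chunk[0..n-1]; unwanted bytes need no eviction,
            // the next fread simply refills the same cache.
            fclose(f);
            free(cache);
            return 0;
        }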

  • [c++] How to create a std::ofstream to a temp file?

    - by dehmann
    Okay, mkstemp is the preferred way to create a temp file in POSIX. But it opens the file and returns an int, which is a file descriptor. From that I can only create a FILE*, but not a std::ofstream, which I would prefer in C++. (Apparently, on AIX and some other systems you can create a std::ofstream from a file descriptor, but my compiler complains when I try that.) I know I could get a temp file name with tmpnam and then open my own ofstream with it, but that's apparently unsafe due to race conditions, and it results in a compiler warning (g++ v3.4 on Linux):

        warning: the use of `tmpnam' is dangerous, better use `mkstemp'

    So, is there any portable way to create a std::ofstream to a temp file?
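    Standard C++ has no ofstream-from-descriptor constructor (GNU libstdc++ offers the non-standard __gnu_cxx::stdio_filebuf for exactly this). A portable compromise is to let mkstemp create the file safely and then open an ofstream on the resulting name: because the file already exists with mode 0600, the tmpnam-style race on creation is gone. A sketch:

        #include <cstdlib>
        #include <fstream>
        #include <string>
        #include <unistd.h>

        // Creates a unique temp file via mkstemp, then reopens it by
        // name as an ofstream. Returns false on failure.
        bool open_temp(std::ofstream &out, std::string &name)
        {
            char tmpl[] = "/tmp/myapp-XXXXXX";   // path is illustrative
            int fd = mkstemp(tmpl);              // creates and opens the file
            if (fd == -1)
                return false;
            close(fd);                           // keep the file, drop the fd
            name = tmpl;
            out.open(name.c_str());
            return out.is_open();
        }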

  • How do I read and write to a file using threads in Java?

    - by WarmWaffles
    I'm writing an application where I need to read blocks from a single file, each block roughly 512 bytes, while also writing blocks simultaneously. One of the ideas I had was BlockReader implements Runnable, BlockWriter implements Runnable, and a BlockManager that manages both the reader and the writer. The problem I see with most examples I have found is locking problems and potential deadlock situations. Any ideas how to implement this?
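    One shape that sidesteps most of the locking is to avoid a shared file position altogether: positioned reads and writes carry their own offset, so reader and writer threads never contend over seek state. In Java that is FileChannel.read(ByteBuffer, long) and FileChannel.write(ByteBuffer, long); the equivalent POSIX calls, sketched in C for reference:

        #include <sys/types.h>
        #include <unistd.h>

        enum { BLOCK = 512 };

        // Positioned I/O: each call names its own offset, so threads
        // sharing the descriptor never race over a seek pointer.
        ssize_t read_block(int fd, long n, unsigned char *buf)
        {
            return pread(fd, buf, BLOCK, (off_t)n * BLOCK);
        }

        ssize_t write_block(int fd, long n, const unsigned char *buf)
        {
            return pwrite(fd, buf, BLOCK, (off_t)n * BLOCK);
        }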

  • Preventing threads from writing to the same file

    - by EpsilonVector
    I'm implementing an FTP-like protocol in Linux kernel 2.4 (homework), and I was under the impression that if a file is open for writing, any subsequent attempt to open it by another thread should fail. Then I actually tried it and discovered that the open goes through. How do I prevent this from happening? PS: I'm using open() to open the file.
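    Unix open() was never exclusive: any number of processes may hold the same file open for writing, and O_EXCL only affects creation. Writers that want exclusion have to take a lock themselves; flock() provides whole-file advisory locking. A sketch:

        #include <fcntl.h>
        #include <sys/file.h>
        #include <unistd.h>

        // Open for writing and take an exclusive advisory lock.
        // Returns -1 if another cooperating process holds the lock.
        int open_locked(const char *path)
        {
            int fd = open(path, O_WRONLY | O_CREAT, 0644);
            if (fd == -1)
                return -1;
            if (flock(fd, LOCK_EX | LOCK_NB) == -1) {   // already locked
                close(fd);
                return -1;
            }
            return fd;   // the lock is released when fd is closed
        }

    Advisory means exactly that: a process that never calls flock is not stopped, so every writer in the protocol must follow the same discipline.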

  • Reading a large text file to memory in C++

    - by NoneType
    Is there a way to read a large text file (~60MB) into memory at once (like a compiler flag to increase the program's memory limit)? Currently, ifstream's open function segfaults while trying to read this file:

        ifstream fis;
        fis.open("my_large_file.txt"); // Segfaults here

    The file just consists of rows of the form

        number_1<tabspace>number_2

    i.e., two numbers separated by a tab.
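    No flag should be needed: 60 MB is far below a normal process limit, and open() only associates the stream with the file without reading it, so a segfault there points at something else (a corrupted stream object, memory damage elsewhere, and so on). For the read itself, pulling the whole file into one string is a one-liner over rdbuf; a sketch:

        #include <fstream>
        #include <sstream>
        #include <string>

        // Read an entire file into memory in one shot.
        std::string slurp(const char *path)
        {
            std::ifstream in(path, std::ios::in | std::ios::binary);
            std::ostringstream ss;
            ss << in.rdbuf();   // drain the whole file buffer
            return ss.str();
        }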

  • VB6: Slow Binary Write?

    - by Tom the Junglist
    Wondering why a particular binary write operation in VB is so slow. The function reads a Byte array from memory and dumps it into a file like this:

        Open Destination For Binary Access Write As #1
        Dim startP, endP As Long
        startP = BinaryStart
        endP = UBound(ReadBuf) - 1
        Dim i as Integer
        For i = startP To endP
            DoEvents
            Put #1, (i - BinaryStart) + 1, ReadBuf(i)
        Next
        Close #1

    For two megabytes on a slower system, this can take up to a minute. Can anyone tell me why this is so slow?
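    The loop pays for a DoEvents message-pump pass and a positioned Put on every single byte, so two megabytes means two million round trips (note also that Dim i as Integer cannot count that high in VB6, and Dim startP, endP As Long leaves startP a Variant). VB6's Put can write an entire array in one call, e.g. Put #1, 1, ReadBuf. For comparison, the same bulk-write shape in C stdio:

        #include <cstdio>

        // One buffered bulk write instead of one call per byte.
        size_t dump(const unsigned char *buf, size_t len, const char *path)
        {
            FILE *f = fopen(path, "wb");
            if (!f)
                return 0;
            size_t written = fwrite(buf, 1, len, f);   // single call
            fclose(f);
            return written;
        }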

  • Which one consumes fewer resources: opening a text file or making an SQL query, a thousand times each?

    - by imin
    Hi. I have a PHP website which displays recipes (www.trymasak.my, to be exact). The recipes displayed on the index page are updated about once a day. To get the latest recipes, I just use a MySQL query, something like "select recipe_name, page_views, image from table order by last_updated". So if I get 10,000 visitors a day, the query is obviously made 10,000 times a day. A friend told me a better way (in terms of reducing server load): when I update the recipes, I put the latest recipe details (names, images, etc.) into a text file, and my page reads from that file instead of running the same query 10,000 times. Is his suggestion really better? If yes, which PHP commands are best for opening, reading, and closing the text file? Thanks.

  • USB device Set Attribute in C#

    - by p19lord
    I have this bit of code:

        DriveInfo[] myDrives = DriveInfo.GetDrives();
        foreach (DriveInfo myDrive in myDrives)
        {
            if (myDrive.DriveType == DriveType.Removable)
            {
                string path = Convert.ToString(myDrive.RootDirectory);
                DirectoryInfo mydir = new DirectoryInfo(path);
                String[] dirs = new string[] { Convert.ToString(mydir.GetDirectories()) };
                String[] files = new string[] { Convert.ToString(mydir.GetFiles()) };
                foreach (var file in files)
                {
                    File.SetAttributes(file, ~FileAttributes.Hidden);
                    File.SetAttributes(file, ~FileAttributes.ReadOnly);
                }
                foreach (var dir in dirs)
                {
                    File.SetAttributes(dir, ~FileAttributes.Hidden);
                    File.SetAttributes(dir, ~FileAttributes.ReadOnly);
                }
            }
        }

    I have a problem: it tries the code on the floppy disk drive first, and because there is no floppy disk in it, it throws the error "The device is not ready". How can I prevent that?

  • Python to extract data from a file

    - by user297003
    Hi, I am new to Python. I am trying to extract the blocks of text that contain a specific string ('extractme') from a file like this:

        ----
        data1
        data1
        data1
        extractme
        ----
        data2
        data2
        data2
        ----
        data3
        data3
        extractme
        ----

    and then dump them to a text file, so that the output is:

        ----
        data1
        data1
        data1
        extractme
        ----
        data3
        data3
        extractme
        ----

    Thanks for the help.
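    The shape of the solution is: split the input on the "----" separators, keep the blocks that contain the marker, and join them back together. Sketched here in C++ (the filenames are illustrative; the same split-filter-join reads naturally in Python):

        #include <fstream>
        #include <string>

        int main()
        {
            std::ifstream in("input.txt");
            std::ofstream out("output.txt");
            std::string line, block;
            bool keep = false;
            while (std::getline(in, line)) {
                if (line == "----") {               // end of a block
                    if (keep)
                        out << "----\n" << block;   // emit kept block
                    block.clear();
                    keep = false;
                } else {
                    block += line + "\n";
                    if (line == "extractme")        // the marker
                        keep = true;
                }
            }
            if (keep)                               // unterminated last block
                out << "----\n" << block;
            out << "----\n";                        // closing separator
            return 0;
        }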

  • Random Loss of precision in Python ReadLine()

    - by jackyouldon
    Hi all. We have a process which takes a very large CSV (1.6GB) and breaks it down into pieces (in this case 3). This runs nightly and normally doesn't give us any problems. When it ran last night, however, the first of the output files had lost precision on the numeric fields in the data. The active ingredient in the script is these lines:

        while lineCounter <= chunk:
            oOutFile.write(oInFile.readline())
            lineCounter = lineCounter + 1

    and the normal output might be something like

        StringField1; StringField2; StringField3; StringField4; 1000000; StringField5; 0.000054454

    etc. On this one occasion, and in this one output file, the numeric fields were all output with six zeros at the end, i.e.

        StringField1; StringField2; StringField3; StringField4; 1000000.000000; StringField5; 0.000000

    We are using Python v2.6 (and don't want to upgrade unless we really have to), but we can't afford to lose this data. Does anyone have any idea why this might have happened? If readline is doing some kind of implicit conversion, is there a way to do a binary read, because we really just want this data to pass through untouched? It is very weird to us that this only affected one of the output files generated by the same script, and when it was rerun the output was as expected. Thanks, Jack

  • How to pass a file (read from Java) most effectively to a native method?

    - by soc
    Hi. I have approx. 30000 files (1MB each) which I want to pass into a native method, which requires just a byte array and its size as arguments. I looked through some examples and benchmarks (like http://nadeausoftware.com/articles/2008/02/java_tip_how_read_files_quickly), but all of them do some other fancy things. Basically, I don't care about the contents of the file; I don't want to access anything in the file or the byte array, or do anything else with it. I just want to get a file into a native method which accepts a byte array, as fast as possible. At the moment I'm using RandomAccessFile, but that's horribly slow (10MB/s). Is there anything like

        byte[] readTheWholeFile(File file) { ... }

    which I could put into

        native void fancyCMethod(readTheWholeFile(myFile), myFile.length())

    What would you suggest?
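    RandomAccessFile itself is not the bottleneck; reading in small pieces is. The usual fast path is one allocation plus one bulk read (in Java, for instance, DataInputStream.readFully into a byte[] sized from File.length()). The same pattern, sketched in C++:

        #include <cstdio>
        #include <vector>

        // One allocation, one bulk read: avoids the per-call
        // overhead that makes many tiny reads slow.
        std::vector<unsigned char> read_whole_file(const char *path)
        {
            std::vector<unsigned char> buf;
            FILE *f = fopen(path, "rb");
            if (!f)
                return buf;
            fseek(f, 0, SEEK_END);
            long len = ftell(f);
            fseek(f, 0, SEEK_SET);
            if (len > 0) {
                buf.resize((size_t)len);
                fread(&buf[0], 1, buf.size(), f);
            }
            fclose(f);
            return buf;
        }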

  • How do I force a file to be deleted? Windows Server 2008

    - by acidzombie24
    On my site a user may upload a file (pic, zip, audio, video, whatever). He may then decide to replace it with a newer revision: upload a file, make a post, then put up a new revision replacing the old one (let's say it's a large zip or tar.gz file). There's a good chance people are downloading it if he sent out an email, or even an IM to a home user. The problem: I need to replace the file while people may be downloading it, and it may be some minutes before the old copy is no longer in use. I don't want my code to stall until it can delete, or to poll every second to see if the file is unused (especially bad if another user can start a download and keep the cycle going). How do I delete the file while users are downloading it? I don't care if their downloads stop; I just care that the file can be replaced and that new downloads get the new revision.
