Search Results

Search found 4688 results on 188 pages for 'io redirection'.

Page 38/188 | < Previous Page | 34 35 36 37 38 39 40 41 42 43 44 45  | Next Page >

  • PHP: What is an efficient way to parse a text file containing very long lines?

    - by Shaun
    I'm working on a parser in php which is designed to extract MySQL records out of a text file. A particular line might begin with a string corresponding to which table the records (rows) need to be inserted into, followed by the records themselves. The records are delimited by a backslash and the fields (columns) are separated by commas. For the sake of simplicity, let's assume that we have a table representing people in our database, with fields being First Name, Last Name, and Occupation. Thus, one line of the file might be as follows [People] = "\Han,Solo,Smuggler\Luke,Skywalker,Jedi..." Where the ellipses (...) could be additional people. One straightforward approach might be to use fgets() to extract a line from the file, and use preg_match() to extract the table name, records, and fields from that line. However, let's suppose that we have an awful lot of Star Wars characters to track. So many, in fact, that this line ends up being 200,000+ characters/bytes long. In such a case, taking the above approach to extract the database information seems a bit inefficient. You have to first read hundreds of thousands of characters into memory, then read back over those same characters to find regex matches. Is there a way, similar to the Java String next(String pattern) method of the Scanner class constructed using a file, that allows you to match patterns in-line while scanning through the file? The idea is that you don't have to scan through the same text twice (to read it from the file into a string, and then to match patterns) or store the text redundantly in memory (in both the file line string and the matched patterns). Would this even yield a significant increase in performance? It's hard to tell exactly what PHP or Java are doing behind the scenes.
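
    A minimal sketch of the streaming idea, assuming the records really are backslash-delimited as shown (the file name is hypothetical): stream_get_line() reads up to a delimiter, so each record can be pulled from the file one at a time, without regex and without holding the whole 200,000-character line in memory.

        <?php
        $fp = fopen('records.txt', 'r');               // hypothetical input file
        while (!feof($fp)) {
            // Read up to the next backslash: one record like "Han,Solo,Smuggler"
            $record = stream_get_line($fp, 8192, '\\');
            if ($record === false || $record === '') {
                continue;                              // skip empty fragments
            }
            $fields = explode(',', $record);           // split columns on commas
            // ... strip the leading [People] = " prefix from the first fragment,
            // then insert $fields into the corresponding table
        }
        fclose($fp);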

    Read the article

  • Reading and writing C++ vector to a file

    - by JB
    For some graphics work I need to read in a large amount of data as quickly as possible and would ideally like to directly read and write the data structures to disk. Basically I have a load of 3D models in various file formats which take too long to load, so I want to write them out in their "prepared" format as a cache that will load much faster on subsequent runs of the program. Is it safe to do it like this? My worries are around directly reading into the data of the vector. I've removed error checking, hard-coded 4 as the size of the int and so on so that I can give a short working example. I know it's bad code; my question really is whether it is safe in C++ to read a whole array of structures directly into a vector like this. I believe it to be so, but C++ has so many traps and so much undefined behaviour when you start going low-level and dealing directly with raw memory like this. I realise that number formats and sizes may change across platforms and compilers, but this will only ever be read and written by the same compiled program to cache data that may be needed on a later run of the same program.

        #include <fstream>
        #include <vector>
        using namespace std;

        struct Vertex { float x, y, z; };
        typedef vector<Vertex> VertexList;

        int main() {
            // Create a list for testing
            VertexList list;
            Vertex v1 = {1.0f, 2.0f, 3.0f};   list.push_back(v1);
            Vertex v2 = {2.0f, 100.0f, 3.0f}; list.push_back(v2);
            Vertex v3 = {3.0f, 200.0f, 3.0f}; list.push_back(v3);
            Vertex v4 = {4.0f, 300.0f, 3.0f}; list.push_back(v4);

            // Write out a list to a disk file
            ofstream os("data.dat", ios::binary);
            int size1 = list.size();
            os.write((const char*)&size1, 4);
            os.write((const char*)&list[0], size1 * sizeof(Vertex));
            os.close();

            // Read it back in
            VertexList list2;
            ifstream is("data.dat", ios::binary);
            int size2;
            is.read((char*)&size2, 4);
            list2.resize(size2);
            // Is it safe to read a whole array of structures directly into the vector?
            is.read((char*)&list2[0], size2 * sizeof(Vertex));
        }
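
    For what it's worth, a slightly tightened sketch of the same write, using sizeof instead of the literal 4; the contiguity of vector storage that the read relies on is guaranteed by the standard (C++03 and later) for any vector other than vector<bool>, so the pattern is sound for POD types like Vertex.

        int size1 = (int)list.size();
        os.write((const char*)&size1, sizeof size1);            // no magic 4
        os.write((const char*)&list[0], size1 * sizeof(Vertex)); // contiguous storage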

    Read the article

  • Explained shell statement

    - by Mats Stijlaart
    The following statement will remove line numbers in a txt file:

        cat withLineNumbers.txt | sed 's/^.......//' >> withoutLineNumbers.txt

    The input file is created with the following statement (this one I understand):

        nl -ba input.txt >> withLineNumbers.txt

    I know the functionality of cat, and I know the output is written to the 'withoutLineNumbers.txt' file. But the | sed 's/^.......//' part is not really clear to me. Thanks for your time.
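
    A sketch of what that pipeline does: nl -ba, with its defaults, pads each line number to six columns and appends a tab, so every numbered line starts with exactly seven extra characters; the sed expression s/^.......// deletes the first seven characters of each line, because each dot matches one character and ^ anchors the match to the start of the line. Two clearer equivalents:

        sed 's/^.\{7\}//' withLineNumbers.txt >> withoutLineNumbers.txt   # same regex, counted
        cut -c8- withLineNumbers.txt >> withoutLineNumbers.txt            # keep from column 8 on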

    Read the article

  • Need a way to determine if a file is done being written to.

    - by Khorkrak
    The situation I'm in is this: there's a process that's writing to a file, sometimes a rather large file, say 400-500 MB, and I need to know when it's done writing. How can I determine this? If I look in the directory I'll see the file there, but it might not be done being written. Plus this needs to be done remotely: on the same internal LAN but not on the same computer, and typically the process that wants to know when the file writing is done is running on a Linux box, while the process that's writing the file, and the file itself, are on a Windows box. No, Samba isn't an option. XML-RPC communication to a service on that Windows box is an option, as is using SNMP if that's viable. Ideally the solution works on either Linux or Windows, meaning it is OS independent, and works for any type of file. Good enough: works just on Windows, but can be done through some library or whatever that can be accessed with Python, and works only for PDF files. The current best idea is to periodically open the file in question from some process on the Windows box and look at the last bytes, checking for the PDF end tag and accounting for the EOL differences, because the file may have been created on Linux or Windows.
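
    A sketch of that polling idea in Python (names and timings are placeholders): treat the file as done when its size has stopped changing and, for PDFs, the %%EOF trailer appears near the end.

        import os
        import time

        def looks_finished(path, settle_seconds=5):    # interval is arbitrary
            size = os.path.getsize(path)
            time.sleep(settle_seconds)
            if os.path.getsize(path) != size:
                return False                           # still growing
            with open(path, 'rb') as f:
                f.seek(0, os.SEEK_END)
                f.seek(max(0, f.tell() - 1024))        # read the last 1 KB
                tail = f.read()
            return b'%%EOF' in tail                    # PDF end tag, any EOL style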

    Read the article

  • Quickest way to write to a file in Java

    - by user1097772
    I'm writing an application which compares directory structures. First I wrote an application which gets info about files: one line about each file or directory. My solution is calling the method toFile:

        static PrintWriter pw = new PrintWriter(new BufferedWriter(
                new FileWriter("DirStructure.dlis")), true);
        String line; // info about a file or directory

        public void toFile(String line) {
            pw.println(line);
        }

    and of course pw.close() at the end. My question is, can I do it quicker? What is the quickest way? Edit: quickest way = quickest writing to the file.
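
    A sketch of the usual speed-up, assuming the writer itself is the bottleneck: the true passed to the PrintWriter constructor turns on autoflush, which flushes the buffer on every println(); dropping it and enlarging the buffer lets the small writes coalesce.

        import java.io.*;

        public class FastWrite {
            public static void main(String[] args) throws IOException {
                try (PrintWriter pw = new PrintWriter(new BufferedWriter(
                        new FileWriter("DirStructure.dlis"), 1 << 16))) { // 64 KB buffer, no autoflush
                    for (int i = 0; i < 1000000; i++) {
                        pw.println("line " + i);   // buffered until the buffer fills
                    }
                }   // try-with-resources closes the writer, which flushes it
            }
        }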

    Read the article

  • Read a specific range of lines from a file using C

    - by James Joy
    I have the following content in a file:

        hhasfghgsafjgfhgfhjf
        gashghfdgdfhgfhjasgfgfhsgfjdg
        jfshafghgfgfhfsghfgffsjgfj
        . . . . .
        startread
        hajshjsfhajfhjkashfjf
        hasjgfhgHGASFHGSHF
        hsafghfsaghgf
        . . . . .
        stopread
        . . . . .
        jsfjhfhjgfsjhfgjhsajhdsa
        jhasjhsdabjhsagshaasgjasdhjk
        jkdsdsahghjdashjsfahjfsd

    I need to read the lines from the next line after startread till the line before stopread using C code and store them in a string variable (of course with a \n for every line break). How can I achieve this? I have used fgets(line, sizeof(line), file); but it starts reading from the beginning. I don't have the exact line numbers to start and stop reading, since the file is written by another C program, but there are these identifiers startread and stopread to identify where to start and stop reading. The operating platform is Linux. Thanks in advance.
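
    A sketch in C under the stated assumptions (the markers sit alone on their own lines; buffer sizes are arbitrary): keep scanning with fgets(), and only append to the result while between the two markers.

        #include <stdio.h>
        #include <string.h>

        int main(void) {
            FILE *file = fopen("input.txt", "r");   /* hypothetical file name */
            char line[1024];
            static char result[65536] = "";         /* holds the captured block */
            int inside = 0;

            if (!file)
                return 1;
            while (fgets(line, sizeof line, file)) {
                if (strncmp(line, "stopread", 8) == 0)
                    break;                          /* stop before this line */
                if (inside)
                    strncat(result, line, sizeof result - strlen(result) - 1);
                if (strncmp(line, "startread", 9) == 0)
                    inside = 1;                     /* start after this line */
            }
            fclose(file);
            printf("%s", result);                   /* lines keep their own \n */
            return 0;
        }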

    Read the article

  • File Output using Gforth

    - by sheepez
    As a first project I have been writing a short program to render the Mandelbrot fractal. I have got to the point of trying to output my results to a file (e.g. .bmp or .ppm) and got stuck. I have not really found any examples of exactly what I am trying to do, but I have found two examples of code that copy from one file to another. The examples in the Gforth documentation (Section 3.27) did not work for me (WinXP); in fact they seemed to open and create files but not write to them properly. This is the Gforth documentation example that copies the contents of one file to another:

        0 Value fd-in
        0 Value fd-out
        : open-input  ( addr u -- )  r/o open-file throw to fd-in ;
        : open-output ( addr u -- )  w/o create-file throw to fd-out ;
        s" foo.in" open-input
        s" foo.out" open-output
        : copy-file ( -- )
          begin
            line-buffer max-line fd-in read-line throw
          while
            line-buffer swap fd-out write-line throw
          repeat ;

    I found this example (http://rosettacode.org/wiki/File_IO#Forth) which does work. The main problem is that I can't isolate the part that writes to a file and have it still work. The main confusion is that >r doesn't seem to consume TOS as I might expect.

        : copy-file2 ( a1 n1 a2 n2 -- )
          r/o open-file throw >r
          w/o create-file throw r>
          begin
            pad maxstring 2 pick read-file throw ?dup
          while
            pad swap 3 pick write-file throw
          repeat
          close-file throw
          close-file throw ;
        \ Invoke it like this:
        s" output.txt" s" input.txt" copy-file2

    I would be very grateful if someone could explain exactly how the open-file, create-file, read-file and write-file words actually work, as my investigation keeps resulting in somewhat bizarre stacks. Any clues as to why the Gforth examples do not work might help too. In summary, I want to output from Gforth to a file and so far have been thwarted. Can anyone offer any help? Thank you Vijay, I think that I understand the example that you gave. However, when I try to use something like this (which I think is similar):

        0 value test-file
        : write-test
          s" testfile.out" w/o create-file throw to test-file
          s" test text" test-file write-line ;

    I get ok but nothing is put into the file. Have I made a mistake? It seems that the problem was due to not flushing the relevant buffers or explicitly closing the file. Adding something like test-file flush-file throw or test-file close-file throw between write-line and ; makes it work. Thanks again Vijay for helping.
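
    A consolidated sketch of the pattern that ended up working, with the write's ior thrown and the file explicitly closed (the file name is hypothetical):

        0 value out-file
        : open-out   ( -- )        s" out.txt" w/o create-file throw to out-file ;  \ hypothetical name
        : put-line   ( addr u -- ) out-file write-line throw ;
        : close-out  ( -- )        out-file close-file throw ;   \ closing also flushes

        open-out  s" hello world" put-line  close-out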

    Read the article

  • What is the fastest way for reading huge files in Delphi?

    - by dummzeuch
    My program needs to read chunks from a huge binary file with random access. I have a list of offsets and lengths which may have several thousand entries. The user selects an entry, and the program seeks to the offset and reads length bytes. The program internally uses a TMemoryStream to store and process the chunks read from the file. Reading the data is done via a TFileStream like this:

        FileStream.Position := Offset;
        MemoryStream.CopyFrom(FileStream, Size);

    This works fine, but unfortunately it becomes increasingly slow as the files get larger. The file size starts at a few megabytes but frequently reaches several tens of gigabytes. The chunks read are around 100 KB in size. The file's content is only read by my program; it is the only program accessing the file at the time. Also, the files are stored locally, so this is not a network issue. I am using Delphi 2007 on a Windows XP box. What can I do to speed up this file access?
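
    One thing worth trying, as a sketch, assuming the slowdown comes from the cache manager's sequential read-ahead on a randomly accessed file: open the handle yourself with the FILE_FLAG_RANDOM_ACCESS hint and wrap it in a THandleStream (FileName, Offset and Size are the ones from the question).

        var
          Handle: THandle;
          FileStream: THandleStream;
        begin
          // Hint to Windows that access will be random, not sequential
          Handle := CreateFile(PChar(FileName), GENERIC_READ, FILE_SHARE_READ,
            nil, OPEN_EXISTING, FILE_FLAG_RANDOM_ACCESS, 0);
          FileStream := THandleStream.Create(Handle);
          try
            FileStream.Position := Offset;
            MemoryStream.CopyFrom(FileStream, Size);
          finally
            FileStream.Free;
            CloseHandle(Handle);
          end;
        end;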

    Read the article

  • Is there a Base64Stream for .NET? where?

    - by Cheeso
    If I want to produce a Base64-encoded output, how would I do that in .NET? I know that since .NET 2.0, there is the ICryptoTransform interface, and the ToBase64Transform() and FromBase64Transform() implementations of that interface. But those classes are embedded into the System.Security namespace, and require the use of a TransformBlock, TransformFinalBlock, and so on. Is there an easier way to base64 encode a stream of data in .NET?
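
    A sketch of the simpler route, staying within .NET 2.0 (file names are placeholders): CryptoStream accepts any ICryptoTransform, so wrapping the output stream with a ToBase64Transform hides the TransformBlock/TransformFinalBlock plumbing entirely.

        using System.IO;
        using System.Security.Cryptography;

        class Base64Copy
        {
            static void Main()
            {
                using (FileStream output = File.Create("data.b64"))
                using (CryptoStream b64 = new CryptoStream(
                        output, new ToBase64Transform(), CryptoStreamMode.Write))
                using (FileStream input = File.OpenRead("data.bin"))
                {
                    byte[] buffer = new byte[4096];
                    int n;
                    while ((n = input.Read(buffer, 0, buffer.Length)) > 0)
                        b64.Write(buffer, 0, n);   // bytes come out Base64-encoded
                }
            }
        }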

    Read the article

  • Embarrassingly parallel workflow creates too many output files

    - by Hooked
    On a Linux cluster I run many (N > 10^6) independent computations. Each computation takes only a few minutes and the output is a handful of lines. When N was small I was able to store each result in a separate file to be parsed later. With large N however, I find that I am wasting storage space (for the file creation) and simple commands like ls require extra care due to internal limits of bash: -bash: /bin/ls: Argument list too long. Each computation is required to run through a qsub scheduling algorithm, so I am unable to create a master program which simply aggregates the output data to a single file. The simple solution of appending to a single file fails when two programs finish at the same time and interleave their output. I have no admin access to the cluster, so installing a system-wide database is not an option. How can I collate the output data from this embarrassingly parallel computation before it gets unmanageable?
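
    A sketch of one way to make the single-file append safe, assuming the nodes share a POSIX filesystem and flock(1) is available (paths are placeholders): take an exclusive lock around the append so two jobs finishing together cannot interleave their lines.

        # at the end of each job's script
        (
          flock -x 200                                 # wait for the exclusive lock
          cat "$MY_RESULT_FILE" >> /shared/results.txt # append atomically w.r.t. other jobs
        ) 200>/shared/results.lock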

    Read the article

  • Error copying file from app bundle

    - by Michael Chen
    I used the Firefox add-on SQLite Manager and created a database, which saved to my desktop as "DB.sqlite". I copied the file into my supporting files for the project. But when I run the app, I immediately get the error "Assertion failure in -[AppDelegate copyDatabaseIfNeeded], /Users/Mac/Desktop/Note/Note/AppDelegate.m:32 2014-08-19 23:38:02.830 Note[28309:60b] Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'Failed to create writable database file with message 'The operation couldn’t be completed. (Cocoa error 4.)'.' First throw call stack: "... Here is the app delegate code where the error takes place:

        - (void)copyDatabaseIfNeeded {
            NSFileManager *fileManager = [NSFileManager defaultManager];
            NSError *error;
            NSString *dbPath = [self getDBPath];
            BOOL success = [fileManager fileExistsAtPath:dbPath];
            if (!success) {
                NSString *defaultDBPath = [[[NSBundle mainBundle] resourcePath]
                    stringByAppendingPathComponent:@"DB.sqlite"];
                success = [fileManager copyItemAtPath:defaultDBPath toPath:dbPath error:&error];
                if (!success)
                    NSAssert1(0, @"Failed to create writable database file with message '%@'.",
                              [error localizedDescription]);
            }
        }

    I am very new to SQLite, so maybe I didn't create the database correctly in the Firefox SQLite Manager, or maybe I didn't properly copy the .sqlite file in? (I did check the target membership for the sqlite file and it correctly has my project selected. Also, the .sqlite file names all match up perfectly.)
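
    A sketch of the first thing to check, given that Cocoa error 4 is NSFileNoSuchFileError: confirm the database really made it into the built bundle, and that the destination directory for dbPath exists, before attempting the copy.

        NSString *defaultDBPath = [[NSBundle mainBundle] pathForResource:@"DB"
                                                                  ofType:@"sqlite"];
        if (defaultDBPath == nil) {
            // DB.sqlite is not in the bundle: check that it is listed under
            // Copy Bundle Resources in the target's Build Phases
            NSLog(@"DB.sqlite missing from the app bundle");
        }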

    Read the article

  • Execute a Application On The Server Using VBScript

    - by Nathan Campos
    I have an application on my server called leaf.exe, which takes two arguments to run: inputfile and outputfile. For example:

        leaf.exe input.jpg output.leaf

    The executable and the input file are in the same directory as my home page file. I need a VBScript that can run the application like that; I want to know how I could do this.
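
    A sketch in VBScript, assuming the script runs from the same directory as the executable: WScript.Shell's Run method takes the command line, a window style, and whether to wait for the program to finish.

        Dim shell
        Set shell = CreateObject("WScript.Shell")
        shell.Run "leaf.exe input.jpg output.leaf", 0, True   ' hidden window, wait until done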

    Read the article

  • Opening a file in C++ using a string array of addresses

    - by muhammad-aslam
    Hello guys, please help me find a way to open files using the addresses provided in an array of strings. One way to open a file is as given below:

        ifstream infile;
        infile.open("d:\aslam.txt");

    But how can I open a file by providing an array of strings as the address of the file, like this?

        infile.open(arr[i]);   // but it's not working

    Please help me.
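
    A sketch of both fixes, assuming an older (pre-C++11) compiler: backslashes in string literals must be escaped, and ifstream::open() wants a const char*, which a std::string provides via c_str().

        #include <fstream>
        #include <string>

        int main() {
            std::string arr[] = { "d:\\aslam.txt", "d:\\other.txt" };  // note the escaped backslashes
            for (int i = 0; i < 2; ++i) {
                std::ifstream infile(arr[i].c_str());  // c_str() needed before C++11
                if (!infile)
                    continue;                          // this file could not be opened
                // ... read from infile here
            }
            return 0;
        }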

    Read the article

  • Foreach File in a Folder in Flash?

    - by msandbot
    Hey, I have an image slideshow program working right now, and it takes in a folder with a hard-coded number of images. I would like to change this so that it can take in a folder and display all of the images, no matter how many there are. Is there a way to do this in Flash? I'm thinking of something like the foreach loop in Perl or another scripting language. It is possible to store the number of images in a text file, but I also don't know how to read that in Flash either. I'm working in ActionScript 3. Any help would be greatly appreciated. Thanks -Mike
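
    A sketch in ActionScript 3, with the caveat that listing a folder only works in an AIR application; the browser Flash Player cannot enumerate directories, so there the usual workaround is loading a manifest file (text or XML) with URLLoader instead. The folder name here is an assumption.

        import flash.filesystem.File;   // AIR only

        var dir:File = File.applicationDirectory.resolvePath("images");
        for each (var entry:File in dir.getDirectoryListing()) {
            if (!entry.isDirectory && /\.(jpe?g|png|gif)$/i.test(entry.name)) {
                trace(entry.url);       // queue each image for the slideshow here
            }
        }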

    Read the article

  • removing a line from a text file?

    - by Blackbinary
    Hi all. I am working with a text file which contains a list of processes under my program's control, along with relevant data. At some point, one of the processes will finish, and thus will need to be removed from the file (as it's no longer under control). Here is a sample of the file contents (which has entries added "randomly"):

        PID=25729 IDLE=0.200000 BUSY=0.300000 USER=-10.000000
        PID=26416 IDLE=0.100000 BUSY=0.800000 USER=-20.000000
        PID=26522 IDLE=0.400000 BUSY=0.700000 USER=-30.000000

    So for example, if I wanted to remove the line that says PID=26416..., how could I do that without writing the file over again? I can use external unix commands; however, I am not very familiar with them, so if that is your suggestion, please give an example. Thanks!
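
    Since external unix commands are allowed, a sketch with examples (procs.txt is a placeholder name): both write a fresh copy of the file, because there is no general way to splice a line out of a text file in place without rewriting what follows it.

        grep -v '^PID=26416 ' procs.txt > procs.tmp && mv procs.tmp procs.txt
        # or, letting sed manage the temporary file (GNU sed):
        sed -i '/^PID=26416 /d' procs.txt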

    Read the article

  • Create a Stream without having a physical file to create from.

    - by jhorton
    I need to create a zip file containing documents that exist on the server. I am using the .NET Package class to do so, and to create a new Package (which is the zip file) I have to have either a path to a physical file or a stream. I am trying not to create an actual file that would be the zip file; instead I want to just create a stream that would exist in memory or something. My question is: how do you instantiate a new Stream (i.e. FileStream, MemoryStream, etc.) without having a physical file to instantiate from?
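
    A sketch, assuming the System.IO.Packaging route: a MemoryStream needs no backing file at all, and Package.Open() has an overload that takes any Stream, so it can stand in for the zip file on disk.

        using System.IO;
        using System.IO.Packaging;   // reference WindowsBase.dll

        using (MemoryStream ms = new MemoryStream())
        {
            using (Package zip = Package.Open(ms, FileMode.Create, FileAccess.ReadWrite))
            {
                // ... create PackageParts and copy the server documents into them
            }
            byte[] zipBytes = ms.ToArray();   // e.g. write these to the HTTP response
        }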

    Read the article

  • Why does DataInputStream not support integers?

    - by Jason
    I need to read in a list of numbers from a file, none of which is larger than 32767. Originally I was going to use the Scanner class to pull in the data; then I read about DataInputStream. This would work well for me, except that according to the API, it supports all primitive variables EXCEPT ints! Listed are longs, shorts, bytes, chars, booleans, etc., but no ints. I have no need for double precision from the incoming data. Is this a deliberate or an unintentional oversight?
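
    For what it's worth, the DataInput interface (and therefore DataInputStream) does declare readInt() alongside readShort() and the rest; a sketch, assuming a binary file written with the matching DataOutputStream methods (the file name is hypothetical):

        import java.io.*;

        public class ReadNumbers {
            public static void main(String[] args) throws IOException {
                try (DataInputStream in = new DataInputStream(new BufferedInputStream(
                        new FileInputStream("numbers.dat")))) {
                    short s = in.readShort();   // 2 bytes, enough for values <= 32767
                    int i = in.readInt();       // 4 bytes, also supported
                    System.out.println(s + " " + i);
                }
            }
        }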

    Read the article

  • Batch backup a harddrive without modifying access times C#

    - by johnathan-doena
    I'm trying to write a simple program that will back up my flash drive. I want it to work automatically and silently in the background, and I also want it to be as quick as possible. The thing is, resetting all the access times is useless to me and something I want to avoid. I know I can read the access times and set them back, but I bet that will fail one day in the future. It would be much simpler to read the files without ever changing the times. Also, what is the fastest way to do this? What differences would there be between, say, a flash drive and an external hard drive? I am writing this in C#, as it is the simplest way to do it and it will probably last more generations of Windows.
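
    A sketch of the documented SetFileTime trick, with the P/Invoke declaration written from the SetFileTime docs (treat the marshaling details and the path as assumptions): passing a last-access FILETIME of all 0xFF bytes right after opening the handle tells the system not to update the access time for operations on that handle.

        using System;
        using System.IO;
        using System.Runtime.InteropServices;

        class NoAtimeRead
        {
            [DllImport("kernel32.dll", SetLastError = true)]
            static extern bool SetFileTime(Microsoft.Win32.SafeHandles.SafeFileHandle hFile,
                IntPtr creationTime, ref long lastAccessTime, IntPtr lastWriteTime);

            static void Main()
            {
                using (FileStream fs = new FileStream(@"E:\somefile.bin",   // placeholder path
                        FileMode.Open, FileAccess.Read, FileShare.Read))
                {
                    long never = -1;   // 0xFFFFFFFFFFFFFFFF = "leave last-access alone"
                    SetFileTime(fs.SafeFileHandle, IntPtr.Zero, ref never, IntPtr.Zero);
                    // ... read fs and copy its contents to the backup here
                }
            }
        }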

    Read the article

  • Why isn't this file reading/writing program working?

    - by user320950
    This program is supposed to read files and write them. I took the file-open checks out because they kept causing errors. The problem is that the files open like they are supposed to and the names are correct, but nothing is in any of the text files. Do you know what is wrong?

        #include<iostream>
        #include<fstream>
        #include<cstdlib>
        #include<iomanip>
        using namespace std;

        int main()
        {
            ifstream in_stream;    // reads itemlist.txt
            ofstream out_stream1;  // writes in items.txt
            ifstream in_stream2;   // reads pricelist.txt
            ofstream out_stream3;  // writes in plist.txt
            ifstream in_stream4;   // reads recipt.txt
            ofstream out_stream5;  // writes display.txt

            float price = ' ', curr_total = 0.0;
            int wrong = 0;
            int itemnum = ' ';
            char next;

            in_stream.open("ITEMLIST.txt", ios::in);        // list of available items
            out_stream1.open("listWititems.txt", ios::out); // list of available items
            in_stream2.open("PRICELIST.txt", ios::in);
            out_stream3.open("listWitdollars.txt", ios::out);
            in_stream4.open("display.txt", ios::in);
            out_stream5.open("showitems.txt", ios::out);

            in_stream.close();  // closing files.
            out_stream1.close();
            in_stream2.close();
            out_stream3.close();
            in_stream4.close();
            out_stream5.close();

            system("pause");

            in_stream.setf(ios::fixed);
            while (in_stream.eof()) {
                in_stream >> itemnum;
                cin.clear();
                cin >> next;
            }

            out_stream1.setf(ios::fixed);
            while (out_stream1.eof()) {
                out_stream1 << itemnum;
                cin.clear();
                cin >> next;
            }

            in_stream2.setf(ios::fixed);
            in_stream2.setf(ios::showpoint);
            in_stream2.precision(2);
            while ((price == (price * 1.00)) && (itemnum == (itemnum * 1))) {
                while (in_stream2 >> itemnum >> price) {   // gets itemnum and price
                    while (in_stream2.eof()) {             // reads file to end of file
                        in_stream2 >> itemnum;
                        in_stream2 >> price;
                        price++;
                        curr_total = price++;
                        in_stream2 >> curr_total;
                        cin.clear();                       // allows more reading
                        cin >> next;
                    }
                }
            }

            out_stream3.setf(ios::fixed);
            out_stream3.setf(ios::showpoint);
            out_stream3.precision(2);
            while ((price == (price * 1.00)) && (itemnum == (itemnum * 1))) {
                while (out_stream3 << itemnum << price) {
                    while (out_stream3.eof()) {            // reads file to end of file
                        out_stream3 << itemnum;
                        out_stream3 << price;
                        price++;
                        curr_total = price++;
                        out_stream3 << curr_total;
                        cin.clear();                       // allows more reading
                        cin >> next;
                    }
                    return itemnum, price;
                }
            }

            in_stream4.setf(ios::fixed);
            in_stream4.setf(ios::showpoint);
            in_stream4.precision(2);
            while (in_stream4.eof()) {
                in_stream4 >> itemnum >> price >> curr_total;
                cin.clear();
                cin >> next;
            }

            out_stream5.setf(ios::fixed);
            out_stream5.setf(ios::showpoint);
            out_stream5.precision(2);
            // sends items and prices to receipt.txt
            out_stream5 << setw(5) << " itemnum " << setw(5) << " price " << setw(5) << " curr_total " << endl;
            out_stream5 << setw(5) << itemnum << setw(5) << price << setw(5) << curr_total;
            out_stream5 << " You have a total of " << wrong++ << " errors " << endl;
        }
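
    A sketch of the core repairs rather than the whole program: the code closes every stream immediately after opening it and then loops on eof(), which is false at the start, so the loop bodies never run; it also reads from and calls eof() on output streams. Keeping a stream open while it is used and looping on successful extraction fixes the "empty files" symptom.

        #include <fstream>
        #include <iostream>
        using namespace std;

        int main() {
            ifstream in_stream("ITEMLIST.txt");
            ofstream out_stream1("listWititems.txt");
            if (!in_stream || !out_stream1) {
                cerr << "could not open files\n";
                return 1;
            }
            int itemnum;
            while (in_stream >> itemnum) {        // fails cleanly at end of file
                out_stream1 << itemnum << '\n';   // copy each item number across
            }
            return 0;                             // streams close themselves here
        }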

    Read the article

  • create a sparse BufferedImage in java

    - by elgcom
    I have to create an image with a very large resolution, but the image is relatively "sparse": only some areas in the image need to be drawn. For example, with the following code:

        /* this takes 5 GB of memory */
        final BufferedImage img = new BufferedImage(
                36000, 36000, BufferedImage.TYPE_INT_ARGB);

        /* draw something */
        Graphics g = img.getGraphics();
        g.drawImage(....);

        /* output as PNG */
        final File out = new File("out.png");
        ImageIO.write(img, "png", out);

    the PNG image I create at the end is only about 200-300 MB. The question is how I can avoid creating a 5 GB BufferedImage at the beginning. I do need an image with large dimensions, but with very sparse colour information. Is there any Stream for BufferedImage so that it will not take so much memory?
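
    A sketch of one workaround, assuming the drawing needs only a few colours: back the image with an indexed colour model, which shrinks the raster from 4 bytes per pixel to as little as 1 bit per pixel (36000 x 36000 at 1 bit is roughly 162 MB instead of 5 GB). The two-colour palette here is an arbitrary example.

        import java.awt.Graphics2D;
        import java.awt.image.BufferedImage;
        import java.awt.image.IndexColorModel;
        import java.io.File;
        import javax.imageio.ImageIO;

        public class SparseImage {
            public static void main(String[] args) throws Exception {
                byte[] r = {0, (byte) 255}, g = {0, 0}, b = {0, 0};  // black + red palette
                IndexColorModel palette = new IndexColorModel(1, 2, r, g, b);
                BufferedImage img = new BufferedImage(36000, 36000,
                        BufferedImage.TYPE_BYTE_BINARY, palette);    // ~162 MB raster
                Graphics2D gfx = img.createGraphics();
                // ... draw into gfx here
                gfx.dispose();
                ImageIO.write(img, "png", new File("out.png"));
            }
        }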

    Read the article

  • Java- FileWriter/BufferedWriter - appending to end of a text file?

    - by KP65
    I've done this before once; I'm trying to replicate what I did, and so far this is what I've got:

        try {
            BufferedWriter writer = new BufferedWriter(new FileWriter("file.P", true));
            System.out.println("entered");
            if (!(newUserName.isEmpty()) || (newUserPass.isEmpty())) {
                writer.newLine();
                writer.write("hellotest123");
                writer.close();
            }

    It seems to find file.P, which is just a txt file, but it doesn't seem to append anything onto it. It enters the code and passes the if statement fine, but nothing is appended to the text file. I'm slightly stuck!
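
    A sketch of the two likely culprits, reworking the same fragment (newUserName and newUserPass as in the question): operator precedence makes the condition read as (!a) || b, i.e. true whenever newUserPass is empty, and when the condition is false the writer is never closed, so buffered output can be lost. Closing on every path, as try-with-resources does, guarantees the flush.

        if (!newUserName.isEmpty() && !newUserPass.isEmpty()) {
            try (BufferedWriter writer = new BufferedWriter(
                    new FileWriter("file.P", true))) {   // true = append mode
                writer.newLine();
                writer.write("hellotest123");
            }                                            // close() flushes the buffer
        }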

    Read the article

  • Fastest way to read data from a lot of ASCII files

    - by Alsenes
    Hi guys, for a college exercise that I've already submitted, I needed to read a .txt file which contained a lot of names of images (one on each line). Then I needed to open each image as an ASCII file, read its data (the images were in PPM format), and do a series of things with them. The thing is, I noticed my program was spending 70% of its time in the part that reads the data from the file, instead of in the other calculations I was doing (finding the number of repetitions of each pixel with a hash table, finding different pixels between two images, etc.), which I found quite odd to say the least. This is what the PPM format looks like:

        P3          //This value can be ignored when reading the file, because all images will be correctly formatted
        4 4
        255         //This value can also be ignored, will always be 255.
        0 0 0    0 0 0    0 0 0    15 0 15
        0 0 0    0 15 7   0 0 0    0 0 0
        0 0 0    0 0 0    0 15 7   0 0 0
        15 0 15  0 0 0    0 0 0    0 0 0

    This is how I was reading the data from the files:

        ifstream fdatos;
        fdatos.open(argv[1]);          // open the file with the names of all the images
        const int size = 128;
        char file[size];               // where I'll get the image name
        Image *img;
        while (fdatos >> file) {       // while there are image names left, continue
            ifstream fimagen;
            fimagen.open(file);        // open image file
            img = new Image(fimagen);  // create new image object with its data file
            ………                        // rest of the calculations with that image
            delete img;                // delete image object when done
            fimagen.close();           // close image file when done
        }
        fdatos.close();

    And inside the image object I read the data like this:

        const int tallafirma = 100;
        char firma[tallafirma];
        fich_in >> std::setw(100) >> firma;      // read the P3 part, can be ignored
        int maxvalue, numpixels;
        fich_in >> height >> width >> maxvalue;  // read the next three values
        numpixels = height * width;
        datos = new Pixel[numpixels];
        int r, g, b;  // don't need to be ints; max value is 255, so an unsigned char would be ok
        for (int i = 0; i < numpixels; i++) {
            fich_in >> r >> g >> b;
            datos[i] = Pixel(r, g, b);
        }
        // This last part is the slow one. I think I should be able to read all this
        // data in one single read to a buffer or something, which would be stored in
        // an array of unsigned chars, and then I'd only need to do:
        // buffer[0] -> Pixel 1 - Red data
        // buffer[1] -> Pixel 1 - Green data
        // buffer[2] -> Pixel 1 - Blue data

    So, any ideas? I think I can improve it quite a bit by reading all of it into an array in one single call; I just don't know how that is done. Also, is it possible to know how many images will be in the "index file"? Is it possible to know the number of lines a file has (since there's one file name per line)? Thanks!!
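
    A sketch of the single-read idea, assuming each file comfortably fits in memory (the file name is hypothetical): slurp the whole PPM into one buffer with a single read() call, then walk the buffer with strtol(), which typically beats thousands of operator>> calls by a wide margin.

        #include <cctype>
        #include <cstdlib>
        #include <fstream>
        #include <vector>

        static long next_int(const char *&p) {
            while (*p && !isdigit((unsigned char)*p))
                ++p;                                   // skips "P3", spaces, newlines
            return strtol(p, const_cast<char **>(&p), 10);
        }

        int main() {
            std::ifstream in("image.ppm", std::ios::binary);
            in.seekg(0, std::ios::end);
            std::vector<char> buf((size_t)in.tellg() + 1, '\0');  // keep a trailing '\0'
            in.seekg(0);
            in.read(&buf[0], buf.size() - 1);          // the one big read

            const char *p = &buf[0];
            long width  = next_int(p);
            long height = next_int(p);
            long maxval = next_int(p);                 // always 255 per the question
            (void)maxval;
            for (long i = 0; i < width * height; ++i) {
                long r = next_int(p), g = next_int(p), b = next_int(p);
                // ... store Pixel(r, g, b) here
                (void)r; (void)g; (void)b;
            }
            return 0;
        }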

    Read the article

  • Determine if the current thread has low I/O priority

    - by Magnus Hoff
    I have a background thread that does some I/O-intensive background-type work. To please the other threads and processes running, I set the thread priority to "background mode" using SetThreadPriority, like this:

        SetThreadPriority(GetCurrentThread(), THREAD_MODE_BACKGROUND_BEGIN);

    However, THREAD_MODE_BACKGROUND_BEGIN is only available in Windows Server 2008 or newer, as well as Windows Vista and newer, but the program needs to work well on Windows Server 2003 and XP as well. So the real code is more like this:

        if (!SetThreadPriority(GetCurrentThread(), THREAD_MODE_BACKGROUND_BEGIN)) {
            SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_LOWEST);
        }

    The problem with this is that on Windows XP it will totally disrupt the system by using too much I/O. I have a plan for an ugly and shameful way of mitigating this problem, but that depends on me being able to determine whether the current thread has low I/O priority. Now, I know I can store which thread priority I ended up setting, but the control flow in the program is not really well suited for this. I would rather be able to test later whether or not the current thread has low I/O priority, i.e. whether it is in "background mode". GetThreadPriority does not seem to give me this information. Is there any way to determine if the current thread has low I/O priority?
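
    A sketch of the bookkeeping fallback, since GetThreadPriority() reports the CPU priority and says nothing about background mode (assuming an SDK that defines THREAD_MODE_BACKGROUND_BEGIN): record what was actually set in thread-local storage, which keeps the flag per-thread without threading it through the control flow.

        #include <windows.h>

        static const DWORD g_lowIoTls = TlsAlloc();   // one slot for the whole process

        bool EnterLowIoPriority() {
            BOOL background = SetThreadPriority(GetCurrentThread(),
                                                THREAD_MODE_BACKGROUND_BEGIN);
            if (!background)   // pre-Vista fallback, CPU priority only
                SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_LOWEST);
            TlsSetValue(g_lowIoTls, background ? (LPVOID)1 : (LPVOID)0);
            return background != FALSE;
        }

        bool CurrentThreadHasLowIoPriority() {
            return TlsGetValue(g_lowIoTls) != NULL;   // what this thread recorded
        }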

    Read the article
