Search Results

Search found 5919 results on 237 pages for 'io priority'.


  • How to read comma-separated values from a text file in Java?

    - by user1425223
    I have a text file with latitude and longitude values of different points on a map. I want to store these coordinates in a MySQL database using Hibernate. How can I split each line into its latitude and longitude values? And what is the general way to do this with other delimiters, such as spaces or tabs? File:

        28.515046280572285,77.38258838653564
        28.51430151808072,77.38336086273193
        28.513566177802456,77.38413333892822
        28.512830832397192,77.38490581512451
        28.51208605426073,77.3856782913208
        28.511341270865113,77.38645076751709
        28.510530488025346,77.38720178604126
        28.509615992924807,77.38790988922119
        28.50875805732363,77.38862872123718
        28.507994394490268,77.38943338394165
        28.50728729434496,77.39038825035095
        28.506674470385246,77.39145040512085
        28.506174780521828,77.39260911941528
        28.505665660113582,77.39376783370972
        28.505156537248446,77.39492654800415
        28.50466626846366,77.39608526229858
        28.504175997400655,77.39724397659302
        28.503685724059455,77.39840269088745
        28.503195448440064,77.39956140518188
        28.50276174118543,77.4007523059845
        28.502309175192945,77.40194320678711
        28.50185660725938,77.40313410758972
        28.50140403738471,77.40432500839233
        28.500951465568985,77.40551590919495
        28.500498891812207,77.40670680999756
        28.5000463161144,77.40789771080017
        28.49959373847559,77.40908861160278

    Code I am using to read from the file:

        try {
            BufferedReader in = new BufferedReader(new FileReader("G:\\RoutePPAdvant2.txt"));
            String str;
            str = in.readLine(); // note: this discards the first line of the file
            while ((str = in.readLine()) != null) {
                System.out.println(str);
            }
            in.close();
        } catch (IOException e) {
            System.out.println("File Read Error");
        }
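    A common way to split each line (a minimal sketch, not from the original post; the file name is carried over as-is) is String.split, which takes any delimiter, so commas, tabs, or runs of whitespace are all handled the same way:

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.io.IOException;
        import java.util.ArrayList;
        import java.util.List;

        public class CoordinateReader {
            public static void main(String[] args) throws IOException {
                List<double[]> points = new ArrayList<double[]>();
                try (BufferedReader in = new BufferedReader(new FileReader("G:\\RoutePPAdvant2.txt"))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        String[] parts = line.split(",");   // "\t" for tabs, "\\s+" for spaces
                        double lat = Double.parseDouble(parts[0].trim());
                        double lon = Double.parseDouble(parts[1].trim());
                        points.add(new double[] { lat, lon });
                    }
                }
                System.out.println("Read " + points.size() + " points");
            }
        }

    Each latitude/longitude pair can then be mapped onto a Hibernate entity and persisted as usual.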

    Read the article

  • Why does C's "fopen" take a "const char *" as its second argument?

    - by Chris Cooper
    It has always struck me as strange that the C function "fopen" takes a "const char *" as its second argument. I would think it would be easier both to read your code and to implement the library's code if there were bit masks defined in stdio.h, like "IO_READ" and such, so you could do things like: FILE* myFile = fopen("file.txt", IO_READ | IO_WRITE); Is there a programmatic reason for the way it actually is, or is it just historic? (i.e. "That's just the way it is.")

    Read the article

  • Spring Integration 1.0 RC2: Streaming file content?

    - by gdm
    I've been trying to find information on this, but due to the immaturity of the Spring Integration framework I haven't had much luck. Here is my desired work flow:

    1. New files are placed in an 'Incoming' directory.
    2. Files are picked up using a file:inbound-channel-adapter.
    3. The file content is streamed, N lines at a time, to a 'Stage 1' channel, which parses each line into an intermediary (shared) representation.
    4. Each parsed line is routed to multiple 'Stage 2' channels.
    5. Each 'Stage 2' channel does its own processing on the N available lines to convert them to a final representation. This channel must have a queue which ensures no Stage 2 channel is overwhelmed in the event that one channel processes significantly slower than the others.
    6. The final representation of the N lines is written to a file. There will be as many output files as there were routing destinations in step 4.

    *'N' above stands for any reasonable number of lines to read at a time, from [1, whatever I can fit into memory reasonably], but is guaranteed to always be less than the number of lines in the full file.

    How can I accomplish streaming (steps 3, 4, and 5) in Spring Integration? It's fairly easy to do without streaming the files, but my files are large enough that I cannot read the entire file into memory. As a side note, I have a working implementation of this work flow without Spring Integration, but since we're using Spring Integration in other places in our project, I'd like to try it here to see how it performs and how the resulting code compares for length and clarity.
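    Spring Integration 1.0 does not ship a line-chunking splitter, so the streaming piece would likely need custom code. A minimal sketch of just the chunk-reading part in plain Java (class and method names are invented for illustration; wiring it into a custom MessageSource or splitter endpoint is left out):

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.io.IOException;
        import java.util.ArrayList;
        import java.util.List;

        public class LineChunker {
            private final BufferedReader reader;
            private final int chunkSize;

            public LineChunker(String path, int chunkSize) throws IOException {
                this.reader = new BufferedReader(new FileReader(path));
                this.chunkSize = chunkSize;
            }

            // Returns the next N lines (fewer at end of file), or null when exhausted.
            public List<String> nextChunk() throws IOException {
                List<String> chunk = new ArrayList<String>(chunkSize);
                String line;
                while (chunk.size() < chunkSize && (line = reader.readLine()) != null) {
                    chunk.add(line);
                }
                return chunk.isEmpty() ? null : chunk;
            }

            public void close() throws IOException {
                reader.close();
            }
        }

    Each non-null chunk would become one message payload, so queue-backed 'Stage 2' channels can throttle naturally without the whole file ever being in memory.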

    Read the article

  • Extract some data from a lot of XML files

    - by LifeH2O
    I have cricket player profiles saved as .xml files in a folder. Each file has these tags in it:

        <playerid>547</playerid>
        <majorteam>England</majorteam>
        <playername>Don</playername>

    The playerid is the same as in the .xml file (each file is a different size, 1 KB to 5 KB). There are about 500 files. What I need is to extract the playername, majorteam, and playerid from all these files into a list. I will convert that list to XML later. If you know how I can do it directly to XML, I will be very thankful.
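    The question doesn't name a language; if Java is an option, the built-in DOM parser handles files this small comfortably. A sketch (the folder name "profiles" is a placeholder):

        import java.io.File;
        import java.util.ArrayList;
        import java.util.List;
        import javax.xml.parsers.DocumentBuilder;
        import javax.xml.parsers.DocumentBuilderFactory;
        import org.w3c.dom.Document;

        public class PlayerExtractor {
            public static void main(String[] args) throws Exception {
                DocumentBuilder db = DocumentBuilderFactory.newInstance().newDocumentBuilder();
                List<String[]> players = new ArrayList<String[]>();
                File[] files = new File("profiles").listFiles();  // placeholder folder
                if (files == null) return;
                for (File f : files) {
                    if (!f.getName().endsWith(".xml")) continue;
                    Document doc = db.parse(f);
                    String id   = doc.getElementsByTagName("playerid").item(0).getTextContent();
                    String team = doc.getElementsByTagName("majorteam").item(0).getTextContent();
                    String name = doc.getElementsByTagName("playername").item(0).getTextContent();
                    players.add(new String[] { id, name, team });
                }
                System.out.println(players.size() + " players extracted");
            }
        }

    From there the list can be written back out as a single XML document with the same DOM API.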

    Read the article

  • Efficient file buffering & scanning methods for large files in Python

    - by eblume
    The description of the problem I am having is a bit complicated, and I will err on the side of providing more complete information. For the impatient, here is the briefest way I can summarize it: what is the fastest (least execution time) way to split a text file into ALL (overlapping) substrings of size N (bound N, e.g. 36) while throwing out newline characters?

    I am writing a module which parses files in the FASTA ascii-based genome format. These files comprise what is known as the 'hg18' human reference genome, which you can download from the UCSC genome browser (go slugs!) if you like. As you will notice, the genome files are composed of chr[1..22].fa and chr[XY].fa, as well as a set of other small files which are not used in this module. Several modules already exist for parsing FASTA files, such as BioPython's SeqIO. (Sorry, I'd post a link, but I don't have the points to do so yet.) Unfortunately, every module I've been able to find doesn't do the specific operation I am trying to do. My module needs to split the genome data ('CAGTACGTCAGACTATACGGAGCTA' could be a line, for instance) into every single overlapping N-length substring. Let me give an example using a very small file (the actual chromosome files are between 355 and 20 million characters long) and N=8:

        import cStringIO
        example_file = cStringIO.StringIO("""\
        header
        CAGTcag
        TFgcACF
        """)
        for read in parse(example_file):
            print read

        CAGTCAGTF
        AGTCAGTFG
        GTCAGTFGC
        TCAGTFGCA
        CAGTFGCAC
        AGTFGCACF

    The function that I found had the absolute best performance from the methods I could think of is this:

        def parse(file):
            size = 8  # of course in my code this is a function argument
            file.readline()  # skip past the header
            buffer = ''
            for line in file:
                buffer += line.rstrip().upper()
                while len(buffer) >= size:
                    yield buffer[:size]
                    buffer = buffer[1:]

    This works, but unfortunately it still takes about 1.5 hours (see note below) to parse the human genome this way. Perhaps this is the very best I am going to see with this method (a complete code refactor might be in order, but I'd like to avoid it as this approach has some very specific advantages in other areas of the code), but I thought I would turn this over to the community. Thanks!

    Note: this time includes a lot of extra calculation, such as computing the opposing-strand read and doing hashtable lookups on a hash of approximately 5G in size.

    Post-answer conclusion: It turns out that using fileobj.read() and then manipulating the resulting string (string.replace(), etc.) took relatively little time and memory compared to the remainder of the program, and so I used that approach. Thanks everyone!
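    For comparison only (the question itself is Python-specific): the approach described in the post-answer conclusion, read everything, strip newlines, then slide a window, looks like this sketched in Java. The file name is a placeholder:

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Paths;

        public class WindowReads {
            public static void main(String[] args) throws IOException {
                int n = 8;  // window size (N)
                String raw = new String(Files.readAllBytes(Paths.get("chr21.fa")));
                // Drop the one-line FASTA header, then strip newlines and uppercase,
                // mirroring the fileobj.read()/string.replace() approach above.
                String seq = raw.substring(raw.indexOf('\n') + 1)
                                .replace("\n", "").replace("\r", "")
                                .toUpperCase();
                for (int i = 0; i + n <= seq.length(); i++) {
                    String read = seq.substring(i, i + n);  // one overlapping window
                    // ... downstream processing of 'read' goes here ...
                }
            }
        }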

    Read the article

  • Do we need separate file paths for Windows and Linux in Java?

    - by Kishor Sharma
    I have a file on a Linux Ubuntu server, hosted at the path /home/kishor/project/detail/. When I made a web app on Windows to upload and download files from a specified location, I used the path "c:\kishor\projects\detail\" for saving on Windows. To my surprise, when I used the Windows file path on my server, I was still able to get and upload files, i.e., "c:\kishor\projects\detail\". Can anyone explain why it works (since Windows and Linux use different file path patterns)?
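    For what it's worth, the portable way to build paths in Java is to let the runtime supply the separator instead of hard-coding one. A small sketch (assuming Java 7+ for java.nio.file; the directory names are the poster's):

        import java.io.File;
        import java.nio.file.Path;
        import java.nio.file.Paths;

        public class PortablePaths {
            public static void main(String[] args) {
                // Paths.get joins the segments with the platform's separator at runtime.
                Path detail = Paths.get("home", "kishor", "project", "detail");
                System.out.println(detail);          // home/kishor/project/detail on Linux
                System.out.println(File.separator);  // "\" on Windows, "/" on Linux
            }
        }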

    Read the article

  • How to delete duplicate/aggregate rows faster in a file using Java (no DB)

    - by S. Singh
    I have a 2 GB text file with 5 columns delimited by tabs. A row is called a duplicate only if 4 out of its 5 columns match another row. Right now, I am de-duping by first loading each column into a separate List, then iterating through the lists, deleting duplicate rows as they are encountered, and aggregating. The problem: it is taking more than 20 hours to process one file, and I have 25 such files to process. Can anyone please share their experience of how they would go about such de-duping? This de-duping will be throw-away code, so I was looking for a quick/dirty solution to get the job done as soon as possible. Here is my pseudo code (roughly):

        Iterate over the rows
            i = current_row_no
            Iterate over row no. i+1 to last_row
                if (col1 matches      // find duplicate
                    && col2 matches
                    && col3 matches
                    && col4 matches) {
                    col5List.set(i, get col5);   // aggregate
                }

    Duplicate example: A and B are duplicates, given A=(1,1,1,1,1), B=(1,1,1,1,2), C=(2,1,1,1,1); the output would be A=(1,1,1,1,1+2), C=(2,1,1,1,1) [notice that B has been kicked out].
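    A common single-pass alternative to the nested loops (a sketch of the idea, not the poster's code; it assumes tab-separated lines with an integer fifth column) keys a hash map on the first four columns, so each row is examined exactly once:

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.io.IOException;
        import java.util.LinkedHashMap;
        import java.util.Map;

        public class Dedup {
            public static void main(String[] args) throws IOException {
                // Key = first four columns joined; value = aggregated fifth column.
                Map<String, Long> rows = new LinkedHashMap<String, Long>();
                try (BufferedReader in = new BufferedReader(new FileReader("input.tsv"))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        int cut = line.lastIndexOf('\t');
                        String key = line.substring(0, cut);                 // col1..col4
                        long col5 = Long.parseLong(line.substring(cut + 1).trim());
                        Long seen = rows.get(key);
                        rows.put(key, seen == null ? col5 : seen + col5);    // aggregate duplicates
                    }
                }
                // rows now holds one entry per unique (col1..col4) with col5 summed.
                System.out.println(rows.size() + " unique rows");
            }
        }

    This is O(n) instead of O(n^2), which is usually the difference between hours and minutes at this file size, provided the unique keys fit in memory.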

    Read the article

  • Add HTML IDs to tags in .aspx files

    - by slandau
    So I'm writing an app that lets the user select a folder; it gets all the .aspx files in that folder and lets the user check off which ones they want to add HTML IDs to. Then they click start, and this runs:

        private void btnStart_Click(object sender, EventArgs e)
        {
            for (int i = 0; i < listFiles.CheckedItems.Count; i++)
            {
            }
        }

    It loops through all the selected file names. How do I open each of these .aspx files in the background, go through them, and add the id="thisItemId" attribute to each relevant tag?

    Read the article

  • Modifying File while in use using Java

    - by Marquinio
    Hi all, I have this recurring Java JAR task that tries to modify a file every 60 seconds. The problem is that if a user is viewing the file, the Java program will not be able to modify it; I get the typical IOException. Does anyone know whether there is a way in Java to modify a file currently in use? Or what would be the best way to solve this problem? I was thinking of using the File canRead() and canWrite() methods to check whether the file is in use. If the file is in use, I'm thinking of making a backup copy of the data that could not be written. Then, after 60 seconds, add some logic to check whether the backup file is empty or not. If the backup file is not empty, add its contents to the main file; if it is empty, just add the new data to the main file. Of course, the first thing I will always do is check whether the file is in use. Thanks for all your ideas.
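    One Java-level probe (a sketch, assuming Java 7+; whether a viewing application actually holds a lock is platform- and application-dependent, so this is a heuristic rather than a guarantee) is FileChannel.tryLock, which fails fast instead of throwing mid-write:

        import java.io.IOException;
        import java.nio.ByteBuffer;
        import java.nio.channels.FileChannel;
        import java.nio.channels.FileLock;
        import java.nio.file.Path;
        import java.nio.file.Paths;
        import java.nio.file.StandardOpenOption;

        public class SafeModify {
            // Returns true if the data was appended, false if the file looked busy.
            static boolean tryAppend(Path file, String data) {
                try (FileChannel ch = FileChannel.open(file,
                        StandardOpenOption.WRITE, StandardOpenOption.APPEND)) {
                    FileLock lock = ch.tryLock();  // null if another process holds a lock
                    if (lock == null) {
                        return false;              // file busy: buffer the data for the next run
                    }
                    try {
                        ch.write(ByteBuffer.wrap(data.getBytes()));
                    } finally {
                        lock.release();
                    }
                    return true;
                } catch (IOException e) {
                    return false;  // treat I/O failure like "in use" for this sketch
                }
            }

            public static void main(String[] args) {
                boolean ok = tryAppend(Paths.get("data.txt"), "new entry\n");
                System.out.println(ok ? "written" : "file in use; will retry in 60s");
            }
        }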

    Read the article

  • What is the easiest way to loop through a folder of files in C#?

    - by badpanda
    I am new to C# and am trying to write a program that navigates the local file system using a config file containing the relevant file paths. My question is this: what are the best practices for performing file I/O (this will be from the desktop app to a server and back) and file-system navigation in C#? I know how to google, and I have found several solutions, but I would like to know which of the various functions is the most robust and flexible. Also, any tips regarding exception handling for C# file I/O would be very helpful. Thanks!!! badPanda

    Read the article

  • How to store a scalable, extensible event log?

    - by firoso
    Hello everyone! I've been contemplating writing a simple event log that takes a parameter list and stores event messages in a log file. The trouble is, I foresee this file growing rather large (assume 1M entries or more). The question is, how can I implement this system without pulling teeth? I know that SQL would be a possible way to go. XML would be ideal, but not really practical for scalability if I'm not going nuts. Example log entry:

        ---- Time/Date ----    -------- Sender --------------    ------- Tags -------        -- Message --
        12/24/2008 24:00:00    $DOMAIN\SYSTEM\Application$       :Trivial: :Notification:    It's Christmas in 1s
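    For the flat-file route, a minimal sketch in Java (the file names, delimiter, and size cap are all invented for illustration): append one delimited line per event and roll the file once it passes a size threshold, so no single file grows unbounded and each file stays trivial to parse:

        import java.io.File;
        import java.io.FileWriter;
        import java.io.IOException;
        import java.io.PrintWriter;
        import java.text.SimpleDateFormat;
        import java.util.Date;

        public class EventLog {
            private static final long MAX_BYTES = 50L * 1024 * 1024;  // roll at ~50 MB (arbitrary)
            private final String basePath;

            public EventLog(String basePath) {
                this.basePath = basePath;
            }

            public synchronized void log(String sender, String tags, String message) throws IOException {
                File file = new File(basePath + ".log");
                if (file.length() > MAX_BYTES) {
                    // Roll: move the full file aside; a fresh one is created below.
                    file.renameTo(new File(basePath + "." + System.currentTimeMillis() + ".log"));
                    file = new File(basePath + ".log");
                }
                String stamp = new SimpleDateFormat("MM/dd/yyyy HH:mm:ss").format(new Date());
                PrintWriter out = new PrintWriter(new FileWriter(file, true));  // append mode
                try {
                    out.printf("%s\t%s\t%s\t%s%n", stamp, sender, tags, message);
                } finally {
                    out.close();
                }
            }
        }

    Tab-delimited lines keep the format extensible (new columns can be appended) while staying far cheaper than XML at a million-plus entries.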

    Read the article

  • Efficient way to delete a line from a text file (C#)

    - by Valentin Vasilyev
    Hello. I need to delete a certain line from a text file. What is the most efficient way of doing this? The file can potentially be large (over a million records). Thank you. UPDATE: below is the code I'm currently using, but I'm not sure if it is good.

        internal void DeleteMarkedEntries()
        {
            string tempPath = Path.GetTempFileName();
            using (var reader = new StreamReader(logPath))
            {
                using (var writer = new StreamWriter(File.OpenWrite(tempPath)))
                {
                    int counter = 0;
                    while (!reader.EndOfStream)
                    {
                        // Read unconditionally so the reader advances even on skipped lines.
                        string line = reader.ReadLine();
                        if (!_deletedLines.Contains(counter))
                        {
                            writer.WriteLine(line);
                        }
                        ++counter;
                    }
                }
            }
            if (File.Exists(tempPath))
            {
                File.Delete(logPath);
                File.Move(tempPath, logPath);
            }
        }

    Read the article

  • MFC: Reading entire file to buffer...

    - by deostroll
    I've meddled with some code, but I am unable to read the entire file properly: a lot of junk gets appended to the output. How do I fix this?

        // wmfParser.cpp : Defines the entry point for the console application.
        //
        #include "stdafx.h"
        #include "wmfParser.h"
        #include <cstring>

        #ifdef _DEBUG
        #define new DEBUG_NEW
        #endif

        // The one and only application object
        CWinApp theApp;

        using namespace std;

        int _tmain(int argc, TCHAR* argv[], TCHAR* envp[])
        {
            int nRetCode = 0;

            // initialize MFC and print an error on failure
            if (!AfxWinInit(::GetModuleHandle(NULL), NULL, ::GetCommandLine(), 0))
            {
                // TODO: change error code to suit your needs
                _tprintf(_T("Fatal Error: MFC initialization failed\n"));
                nRetCode = 1;
            }
            else
            {
                // TODO: code your application's behavior here.
                CFile file;
                CFileException exp;
                if (!file.Open(_T("c:\\sample.txt"), CFile::modeRead, &exp))
                {
                    exp.ReportError();
                    cout << '\n';
                    cout << "Aborting...";
                    system("pause");
                    return 0;
                }

                ULONGLONG dwLength = file.GetLength();
                cout << "Length of file to read = " << dwLength << '\n';

                /*
                BYTE* buffer;
                buffer = (BYTE*)calloc(dwLength, sizeof(BYTE));
                file.Read(buffer, 25);
                char* str = (char*)buffer;
                cout << "length of string : " << strlen(str) << '\n';
                cout << "string from file: " << str << '\n';
                */

                char str[100];
                file.Read(str, sizeof(str));
                cout << "Data : " << str << '\n';

                file.Close();
                cout << "File was closed\n";
                //AfxMessageBox(_T("This is a test message box"));
                system("pause");
            }

            return nRetCode;
        }

    Read the article

  • Write problem - losing the original data

    - by John
    Every time I write to the text file I lose the original data. How can I read the file and enter the data on the empty line, or the next line which is empty?

        public void writeToFile() {
            try {
                output = new Formatter(myFile);
            } catch (SecurityException securityException) {
                System.err.println("Error creating file");
                System.exit(1);
            } catch (FileNotFoundException fileNotFoundException) {
                System.err.println("Error creating file");
                System.exit(1);
            }

            Scanner scanner = new Scanner(System.in);
            String number = "";
            String name = "";
            System.out.println("Please enter number:");
            number = scanner.next();
            System.out.println("Please enter name:");
            name = scanner.next();
            output.format("%s,%s \r\n", number, name);
            output.close();
        }
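    The data is lost because new Formatter(myFile) truncates the file when it opens it. A minimal sketch of the append-mode alternative (the file name here is a stand-in for myFile):

        import java.io.FileNotFoundException;
        import java.io.FileOutputStream;
        import java.util.Formatter;

        public class AppendDemo {
            public static void main(String[] args) throws FileNotFoundException {
                // FileOutputStream's second argument opens the file in append mode,
                // so the existing lines are kept and new records go after them.
                Formatter output = new Formatter(new FileOutputStream("records.txt", true));
                output.format("%s,%s%n", "101", "John");
                output.close();
            }
        }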

    Read the article

  • Add up values from a text file

    - by Stanley
    Hi guys, I have a text file that contains an amount at substring (34, 47) of each line. I need to sum up all the values to the end of the file. I have this code that I had started to build, but I do not know how to proceed from here:

        public class Addup {
            /**
             * @param args the command line arguments
             */
            public static void main(String[] args) throws FileNotFoundException, IOException {
                // TODO code application logic here
                FileInputStream fs = new FileInputStream("C:/Analysis/RL004.TXT");
                BufferedReader br = new BufferedReader(new InputStreamReader(fs));
                String line;
                while ((line = br.readLine()) != null) {
                    String num = line.substring(34, 47);
                    double i = Double.parseDouble(num);
                    System.out.println(i);
                }
            }
        }

    The output is like this:

        1.44576457E4
        2.33434354E6
        4.56875685E3

    The amounts have two decimal places, and I need the result in two decimal places as well. What is the best way to achieve this?
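    Since the amounts are fixed two-decimal values, one way (a sketch along the lines of the code above, not from the original post) is to accumulate with BigDecimal, which keeps the exact decimal digits and never prints in scientific notation:

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.io.IOException;
        import java.math.BigDecimal;
        import java.math.RoundingMode;

        public class AddupDecimal {
            public static void main(String[] args) throws IOException {
                BigDecimal total = BigDecimal.ZERO;
                BufferedReader br = new BufferedReader(new FileReader("C:/Analysis/RL004.TXT"));
                String line;
                while ((line = br.readLine()) != null) {
                    // BigDecimal(String) preserves the decimal digits exactly, unlike double.
                    total = total.add(new BigDecimal(line.substring(34, 47).trim()));
                }
                br.close();
                System.out.println(total.setScale(2, RoundingMode.HALF_UP));  // two decimal places
            }
        }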

    Read the article

  • How to read from database and write into text file with C#?

    - by user147685
    How do I read from a database and write into a text file? I want to write/copy (not sure what to call it) the records inside my database into a text file. One row in the database equals one line in the text file. I'm having no problems with the database side. For creating the text file, what I've read mentions FileStream and StreamWriter. Which one should I use?

    Read the article

  • VB.NET 2008, Windows 7 and saving files

    - by James Brauman
    Hello, we have to learn VB.NET for the semester; my experience lies mainly with C#, not that this should make a difference to this particular problem. I've used just about the most simple way to save a file using the .NET framework, but Windows 7 won't let me save the file anywhere (or anywhere that I have found yet). Here is the code I am using to save a text file:

        Dim dialog As FolderBrowserDialog = New FolderBrowserDialog()
        Dim saveLocation As String = dialog.SelectedPath

        ' ... Build up output string ...

        Try
            ' Try to write the file.
            My.Computer.FileSystem.WriteAllText(saveLocation, output, False)
        Catch PermissionEx As UnauthorizedAccessException
            ' We do not have permissions to save in this folder.
            MessageBox.Show("Do not have permissions to save file to the folder specified. Please try saving somewhere different.", "Error", MessageBoxButtons.OK, MessageBoxIcon.Error)
        Catch Ex As Exception
            ' Catch any exceptions that occurred when trying to write the file.
            MessageBox.Show("Writing the file was not successful.", "Error", MessageBoxButtons.OK, MessageBoxIcon.Error)
        End Try

    The problem is that this code throws an UnauthorizedAccessException no matter where I try to save the file. I've tried running the .exe file as administrator, and the IDE as administrator. Is this just Windows 7 being overprotective? And if so, what can I do to solve this problem? The requirements state that I must be able to save a file! Thanks.

    Read the article

  • Make Directory.GetFiles() ignore protected folders

    - by Kryptic
    Hello everyone, I'm using the Directory.GetFiles() method to get a list of files to operate on. This method throws an UnauthorizedAccessException when, for example, trying to access a protected folder. I would like it to simply skip over such folders and continue. How can I accomplish this with either Directory.GetFiles (preferably) or another method? Update: here is the code that throws the exception. I am asking the user to select a directory and then retrieving the list of files. I commented out the code that iterates through the files (so this is now the whole method) and the problem still occurs. The exception is thrown on the Directory.GetFiles() line.

        FolderBrowserDialog fbd = new FolderBrowserDialog();
        DialogResult dr = fbd.ShowDialog();
        if (dr == System.Windows.Forms.DialogResult.Cancel)
            return;
        string directory = fbd.SelectedPath;
        string[] files = Directory.GetFiles(directory, "*.html", SearchOption.AllDirectories);

    Read the article

  • Store data in the file system rather than a SQL or Oracle database

    - by nunu
    Hi all, as I am working on an Employee Management system, I have two tables (for example) in the database, as given below:

        EmployeeMaster (DB table structure)
        EmployeeID (PK) | EmployeeName | City

        MonthMaster (DB table structure)
        Month | Year | EmployeeID (FK) | PresentDays | BasicSalary

    Now my question is: I want to store the data in the file system rather than in SQL Server or Oracle, with insert, edit, and delete operations that keep the relations between objects. I am a C# developer. Does anybody have thoughts or ideas on how to store data in the file system while keeping the relations between records? Thanks in advance.

    Read the article

  • Improving File Read Performance (single file, C++, Windows)

    - by david
    I have large (hundreds of MB or more) files that I need to read blocks from using C++ on Windows. Currently the relevant functions are:

        errorType LargeFile::read(void* data_out, __int64 start_position, __int64 size_bytes) const
        {
            if (!m_open)
            {
                // return error
            }
            else
            {
                seekPosition(start_position);
                DWORD bytes_read;
                BOOL result = ReadFile(m_file, data_out, DWORD(size_bytes), &bytes_read, NULL);
                if (size_bytes != bytes_read || result != TRUE)
                {
                    // return error
                }
            }
            // return no error
        }

        void LargeFile::seekPosition(__int64 position) const
        {
            LARGE_INTEGER target;
            target.QuadPart = LONGLONG(position);
            SetFilePointerEx(m_file, target, NULL, FILE_BEGIN);
        }

    The performance of the above does not seem to be very good. Reads are of 4K blocks of the file. Some reads are coherent; most are not. A couple of questions: is there a good way to profile the reads? What things might improve the performance? For example, would sector-aligning the data be useful? I'm relatively new to file I/O optimization, so suggestions or pointers to articles/tutorials would be helpful.

    Read the article
