Search Results

Search found 71513 results on 2861 pages for 'file extension'.


  • Android - Loop Through strings.xml file

    - by Alexis Cartier
    I was wondering if there is any way to loop through the strings.xml file. Let's say that I have the following format:

        <!-- FIRST SECTION -->
        <string name="change_password">Change Password</string>
        <string name="change_server">Change URL</string>
        <string name="default_password">password</string>
        <string name="default_server">http://xxx:8080</string>
        <string name="default_username">testPhoneAccount</string>
        <!-- SECOND SECTION -->
        <string name="debug_settings_category">Debug Settings</string>
        <string name="reload_data_every_startup_pref">reload_data_every_startup</string>
        <string name="reload_data_on_first_startup_pref">reload_data_on_first_startup</string>

    Now let's say I have this:

        private HashMap<String, Integer> hashmapStringValues = new HashMap<String, Integer>();

    Is there a way to iterate over only the second section of my XML file? Maybe wrap the section with a tag like <section2> and then iterate through it?

        public void initHashMap() {
            for (int i = 0; i < ???? ; i++) {   // Here I need to loop only over the second section of my XML file
                String nameOfTag = ?;           // Here I get the name of the tag
                int value = R.string.nameOfTag; // Here I get the associated value of the tag
                this.hashmapStringValues.put(nameOfTag, value);
            }
        }

    Read the article

  • Consulting a Prolog Source Code from within a VS2008 Solution File

    - by Joshua Green
    I have a Prolog file (Hanoi.pl) containing the code for solving the Hanoi Towers puzzle:

        hanoi( N ):- move( N, left, middle, right ).

        move( 0, _, _, _ ):- !.
        move( N, A, B, C ):-
            M is N-1,
            move( M, A, C, B ),
            inform( A, B ),
            move( M, C, B, A ).

        inform( X, Y ):-
            write( 'move a disk from ' ), write( X ),
            write( ' to ' ), writeln( Y ).

    I also have a C++ file written in the VS2008 IDE:

        #include <iostream>
        #include <string>
        #include <stdio.h>
        #include <stdlib.h>
        using namespace std;

        #include "SWI-cpp.h"
        #include "SWI-Prolog.h"

        predicate_t phanoi;
        term_t t0;

        int main(int argc, char** argv)
        {
            long n = 5;
            int rval;

            if ( !PL_initialise(1, argv) )
                PL_halt(1);

            PL_put_integer( t0, n );
            phanoi = PL_predicate( "hanoi", 1, NULL );
            rval = PL_call_predicate( NULL, PL_Q_NORMAL, phanoi, t0 );

            system( "PAUSE" );
        }

    How can I consult my Prolog source code (Hanoi.pl) from within my C++ code? Not from the Command Prompt - from the code, something like include or consult or compile? It is located in the same folder as my cpp file. Thanks,

    Read the article

  • Need to add a secure FTP file upload area to a client's website

    - by user346602
    This is a variation on a previous question, as I am having a lot of trouble finding answers in my online searches. I am designing a website for an architecture firm. They want their clients to be able to upload files to them, through a link on their site, via FTP. They also want a sign-in for their clients, and to ensure the uploads are secure. I can figure out how to make a form that has a file upload area, but I just don't understand the FTP part and the secure part. I understand HTML, CSS and a bit of jQuery; the rest is still very challenging to me. I have found something called net2ftp that claims to do what I'm looking for, but even the installation instructions (for administrators, here: http://www.net2ftp.com/help.html) confuse me. Do I need a MySQL database? Where do I put the admin password they refer to? It goes on... Is there anything "easier" out there that anyone knows of? I have read that I should be Googling "file managers", but I don't know if these can be embedded in a client's website. I also need to understand what happens to said file, and where it ends up, when the client clicks the upload link. Oh - I am so in over my head on this one.

    Read the article

  • Replacing a word in a text file with a value using python

    - by Jamde Jam
    I have been trying to replace a word in a text file with a value (say 1), but my output file is blank. I am new to Python (it has only been a month since I started learning it). My file is relatively large, but I just want to replace a word with the value 1 for now. Here is a segment of what the file looks like:

        NAME SECOND_1
        ATOM 1 6 0 0 0 # ORB 1
        ATOM 2 2 0 12/24 0 # ORB 2
        ATOM 3 2 12/24 0 0 # ORB 2
        ATOM 4 2 0 0 4/24 # ORB 3
        ATOM 5 2 0 0 20/24 # ORB 3
        ATOM 6 2 0 0 8/24 # ORB 3
        ATOM 7 2 0 0 16/24 # ORB 3
        ATOM 8 6 0 0 12/24 # ORB 1
        ATOM 9 2 12/24 0 12/24 # ORB 2
        ATOM 10 2 0 12/24 12/24 # ORB 2
        #1 #2 #3

    I want to first replace the word ATOM with the value 1. Next I want to replace # ORB with a space. Here is what I am trying thus far:

        input = open('SECOND_orbitsJ22.txt','r')
        output = open('SECOND_orbitsJ22_out.txt','w')
        for line in input:
            word = line.split(',')
            if(word[0] == 'ATOM'):
                word[0] = '1'
                output.write(','.join(word))

    Can anyone offer any suggestions or help? Thanks so much.
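    A likely reason the output file stays empty is that the columns in the sample are separated by whitespace rather than commas, so line.split(',') never yields 'ATOM' as its first element and the write call is never reached. Below is a minimal sketch of one possible fix, assuming whitespace-separated columns and the file names used above; treat it as an illustration rather than a drop-in answer.

        # sketch: split on whitespace instead of commas
        input_file = open('SECOND_orbitsJ22.txt', 'r')
        output_file = open('SECOND_orbitsJ22_out.txt', 'w')
        for line in input_file:
            words = line.split()                   # split on any whitespace
            if words and words[0] == 'ATOM':
                words[0] = '1'                     # replace the word ATOM with 1
                line = ' '.join(words) + '\n'
            output_file.write(line.replace('# ORB', ' '))  # turn the "# ORB" marker into a space
        input_file.close()
        output_file.close()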

    Read the article

  • How can I reorder an mbox file chronologically?

    - by Joshxtothe4
    Hello, I have a single spool mbox file that was created with Evolution, containing a selection of emails that I wish to print. My problem is that the emails are not placed into the mbox file chronologically. I would like to know the best way to order the messages from first to last using bash, Perl or Python. I would like to order by date received for messages addressed to me, and by date sent for messages sent by me. Would it perhaps be easier to use maildir files or such? The emails currently exist in this format:

        From [email protected] Fri Aug 12 09:34:09 2005
        Message-ID: <[email protected]>
        Date: Fri, 12 Aug 2005 09:34:09 +0900
        From: me <[email protected]>
        User-Agent: Mozilla Thunderbird 1.0.6 (Windows/20050716)
        X-Accept-Language: en-us, en
        MIME-Version: 1.0
        To: someone <[email protected]>
        Subject: Re: (no subject)
        References: <[email protected]>
        In-Reply-To: <[email protected]>
        Content-Type: text/plain; charset=ISO-8859-1; format=flowed
        Content-Transfer-Encoding: 8bit
        Status: RO
        X-Status:
        X-Keywords:
        X-UID: 371
        X-Evolution-Source: imap://[email protected]/
        X-Evolution: 00000002-0010

        Hey the actual content of the email

        someone wrote:
        > lines of quoted text

    I am wondering if there is a way to use this information to easily reorganize the file, perhaps with Perl or such.
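    One possible approach is Python's standard mailbox module, which reads the mbox directly. The sketch below sorts by the Date header only; it does not distinguish received versus sent messages (that would need an extra check on the To/From addresses), and the file names are hypothetical, so take it as a starting point rather than a complete answer.

        import mailbox
        import time
        from email.utils import parsedate

        def message_date(msg):
            """Turn the Date header into a sortable timestamp (0 if missing or unparsable)."""
            parsed = parsedate(msg.get('Date', ''))
            return time.mktime(parsed) if parsed else 0

        inbox = mailbox.mbox('evolution-spool.mbox')      # hypothetical input file name
        ordered = sorted(inbox, key=message_date)

        outbox = mailbox.mbox('evolution-sorted.mbox')    # new mbox, oldest message first
        for msg in ordered:
            outbox.add(msg)
        outbox.flush()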

    Read the article

  • Still don't understand file upload-folder permissions

    - by Camran
    I have checked out articles and tutorials, but I still don't know what to do about the security of my picture upload folder. It holds pictures for classifieds, which are uploaded into the folder. This is what I want: anybody may upload images to the folder; the images will be moved to another folder by another PHP script later on (automatically); only I may manually remove them, as well as another PHP file on the server which automatically empties the folder after x days. What should I do here? The images are uploaded via a PHP upload script. This script checks whether the extension of the file is actually a valid image file. When I try chmod 755 images, the images won't be uploaded. But with chmod 777 images it works. But 777 is a security risk, right? Please give me detailed information... The question is what to do to solve this problem, not info about what permissions exist etc. Thanks. If you need more info, let me know...

    Read the article

  • Saving NSString to file

    - by Michael Amici
    I am having trouble saving simple website data to a file. I believe the syntax is correct; could someone help me? When I run it, it shows that it gets the data, but when I quit and open up the file that it is supposed to save to, nothing is in it.

        - (BOOL)textFieldShouldReturn:(UITextField *)nextField {
            [timer invalidate];
            startButton.hidden = NO;
            startButton.enabled = YES;
            stopButton.enabled = NO;
            stopButton.hidden = YES;
            stopLabel.hidden = YES;
            label.hidden = NO;
            label.text = @"Press to Activate";
            [nextField resignFirstResponder];

            NSString *urlString = textField.text;
            NSData *dataNew = [NSData dataWithContentsOfURL:[NSURL URLWithString:urlString]];
            NSUInteger len = [dataNew length];
            NSString *stringCompare = [NSString stringWithFormat:@"%i", len];
            NSLog(@"%@", stringCompare);

            NSString *filePath = [[NSBundle mainBundle] pathForResource:@"websiteone" ofType:@"txt"];
            if (filePath) {
                [stringCompare writeToFile:filePath atomically:YES encoding:NSUTF8StringEncoding error:NULL];
                NSString *myText = [NSString stringWithContentsOfFile:filePath encoding:NSUTF8StringEncoding error:NULL];
                NSLog(@"Saving... %@", myText);
            } else {
                NSLog(@"cant find the file");
            }
            return YES;
        }

    Read the article

  • Creating a tar file with checksums included

    - by wazoox
    Here's my problem: I need to archive a lot of big files (up to 60 TB in total, usually 30 to 40 GB each) into tar files. I would like to make checksums (md5, sha1, whatever) of these files before archiving; however, not reading every file twice (once for checksumming, once for tar'ing) is more or less a necessity to achieve a very high archiving performance (LTO-4 wants 120 MB/s sustained, and the backup window is limited). So I'd need some way to read a file, feeding a checksumming tool on one side, and building a tar to tape on the other side, something along the lines of:

        tar cf - files | tee tarfile.tar | md5sum -

    Except that I don't want the checksum of the whole archive (this sample shell code does just that) but a checksum for each individual file in the archive. I've studied the GNU tar, Pax and Star options. I've looked at the source of Archive::Tar. I see no obvious way to achieve this. It looks like I'll have to hand-build something in C or similar to achieve what I need. Perl/Python/etc. simply won't cut it performance-wise, and the various tar programs miss the necessary "plugin architecture". Does anyone know of any existing solution to this before I start code-churning?
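    For reference, here is a minimal sketch of the single-pass idea in Python: a small wrapper updates a digest while tarfile pulls the bytes through it, so each file is read only once. The asker doubts a scripting language can sustain LTO-4 throughput, so this illustrates the approach rather than claiming to meet the performance target; the output file names are hypothetical.

        import hashlib
        import tarfile

        class HashingReader(object):
            """File-like wrapper that updates a checksum while the data is being read."""
            def __init__(self, fileobj, digest):
                self.fileobj = fileobj
                self.digest = digest
            def read(self, size=-1):
                data = self.fileobj.read(size)
                self.digest.update(data)
                return data

        def archive_with_checksums(paths, archive_name='backup.tar', sums_name='backup.md5'):
            tar = tarfile.open(archive_name, 'w')
            sums = open(sums_name, 'w')
            for path in paths:
                digest = hashlib.md5()
                info = tar.gettarinfo(path)
                with open(path, 'rb') as f:
                    # tar reads the file through the wrapper, which hashes the bytes on the fly
                    tar.addfile(info, HashingReader(f, digest))
                sums.write('%s  %s\n' % (digest.hexdigest(), path))
            tar.close()
            sums.close()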

    Read the article

  • Where do I put the .js file when I create a JS interface with Graphene 2?

    - by Thang Pham
    I follow this tutorial: https://docs.jboss.org/author/display/ARQGRA2/JavaScript+Interface. Where do I put my helloworld.js file? I put it under webapp/resources/js/helloworld.js and I do:

        import org.jboss.arquillian.graphene.javascript.Dependency;
        import org.jboss.arquillian.graphene.javascript.JavaScript;

        @JavaScript("helloworld")
        @Dependency(sources = "js/helloworld.js")
        public interface HelloWorld {
            String hello();
        }

    and I get an NPE when I inject:

        @JavaScript
        private HelloWorld helloWorld;

    Please help. Here is my POM; I use GlassFish 3.1:

        <properties>
          <endorsed.dir>${project.build.directory}/endorsed</endorsed.dir>
          <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
          <version.org.jboss.arquillian>1.0.4.Final</version.org.jboss.arquillian>
          <version.org.jboss.arquillian.drone>1.2.0.Alpha2</version.org.jboss.arquillian.drone>
          <version.org.jboss.arquillian.graphene>1.0.0.Final</version.org.jboss.arquillian.graphene>
          <version.org.jboss.arquillian.graphene2>2.0.0.Alpha4</version.org.jboss.arquillian.graphene2>
        </properties>

        <dependencyManagement>
          <dependencies>
            <!-- Arquillian Drone dependencies and Selenium dependencies -->
            <dependency>
              <groupId>org.jboss.arquillian.extension</groupId>
              <artifactId>arquillian-drone-bom</artifactId>
              <version>${version.org.jboss.arquillian.drone}</version>
              <type>pom</type>
              <scope>import</scope>
            </dependency>
            <!-- Arquillian Core dependencies -->
            <dependency>
              <groupId>org.jboss.arquillian</groupId>
              <artifactId>arquillian-bom</artifactId>
              <version>${version.org.jboss.arquillian}</version>
              <type>pom</type>
              <scope>import</scope>
            </dependency>
          </dependencies>
        </dependencyManagement>

        <dependencies>
          <dependency>
            <groupId>org.jboss.spec</groupId>
            <artifactId>jboss-javaee-6.0</artifactId>
            <version>1.0.0.Final</version>
            <type>pom</type>
            <scope>provided</scope>
          </dependency>
          <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.8.1</version>
            <scope>test</scope>
          </dependency>
          <dependency>
            <groupId>org.jboss.arquillian.junit</groupId>
            <artifactId>arquillian-junit-container</artifactId>
            <scope>test</scope>
          </dependency>
          <dependency>
            <groupId>org.jboss.arquillian.extension</groupId>
            <artifactId>arquillian-drone-webdriver-depchain</artifactId>
            <type>pom</type>
            <scope>test</scope>
          </dependency>
          <dependency>
            <groupId>org.jboss.arquillian.graphene</groupId>
            <artifactId>graphene-webdriver</artifactId>
            <version>${version.org.jboss.arquillian.graphene2}</version>
            <type>pom</type>
            <scope>test</scope>
          </dependency>
          <dependency>
            <groupId>org.jboss.arquillian.graphene</groupId>
            <artifactId>graphene-webdriver-impl</artifactId>
            <version>${version.org.jboss.arquillian.graphene2}</version>
            <type>jar</type>
          </dependency>
          <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-simple</artifactId>
            <version>1.6.4</version>
            <scope>test</scope>
          </dependency>
          <dependency>
            <groupId>org.jboss.arquillian.container</groupId>
            <artifactId>arquillian-glassfish-remote-3.1</artifactId>
            <version>1.0.0.CR4</version>
            <scope>test</scope>
          </dependency>
        </dependencies>

    Read the article

  • Using a function found in a different file in a loop

    - by Anders
    This question is related to BuddyPress and is a follow-up to this question. I have a .csv file with 790 rows and 3 columns, where the first column is the group name, the second is the group description and the last (third) is the slug. As far as I've been told, I can use this code:

        <?php
        $groups = array();
        if (($handle = fopen("groupData.csv", "r")) !== FALSE) {
            while (($data = fgetcsv($handle, 1000, ",")) !== FALSE) {
                $group = array(
                    'group_id'     => 'SOME ID',
                    'name'         => $data[0],
                    'description'  => $data[1],
                    'slug'         => groups_check_slug(sanitize_title(esc_attr($data[2]))),
                    'date_created' => gmdate( "Y-m-d H:i:s" ),
                    'status'       => 'public'
                );
                $groups[] = $group;
            }
            fclose($handle);
        }
        foreach ($groups as $group) {
            groups_create_group($group);
        }

    together with http://www.nomorepasting.com/getpaste.php?pasteid=35217, which is called bp-groups.php. The thing is that I can't make it work. I've created a new file with the code written above called groupgenerator.php, uploaded the .csv file to the same folder and opened groupgenerator.php in my browser. But I get this error: Fatal error: Call to undefined function groups_check_slug() in What am I doing wrong?

    Read the article

  • Looping problem while appending data to an existing text file

    - by Manu
        try {
            stmt = conn.createStatement();
            stmt1 = conn.createStatement();
            stmt2 = conn.createStatement();
            rs = stmt.executeQuery("select cust from trip1");
            rs1 = stmt1.executeQuery("select cust from trip2");
            rs2 = stmt2.executeQuery("select cust from trip3");
            File f = new File(strFileGenLoc);
            OutputStream os = (OutputStream)new FileOutputStream(f,true);
            String encoding = "UTF8";
            OutputStreamWriter osw = new OutputStreamWriter(os, encoding);
            BufferedWriter bw = new BufferedWriter(osw);
        }
        while ( rs.next() ) {
            while (rs1.next()) {
                while (rs2.next()) {
                    bw.write(rs.getString(1)==null? "":rs.getString(1));
                    bw.write("\t");
                    bw.write(rs1.getString(1)==null? "":rs1.getString(1));
                    bw.write("\t");
                    bw.write(rs2.getString(1)==null? "":rs2.getString(1));
                    bw.write("\t");
                    bw.newLine();
                }
            }
        }

    The above code is working fine. My problem is:
    1. the "rs" result set contains one record in the table
    2. the "rs1" result set contains 5 records in the table
    3. the "rs2" result set contains 5 records in the table
    The "rs" data is getting repeated. While writing to the same text file, the output I am getting looks like:

        1 2 3
        1 12 21
        1 23 25
        1 10 5
        1 8 54

    but I need output like below:

        1 2 3
        12 21
        23 25
        10 5
        8 54

    What do I need to change in my code? Please advise.

    Read the article

  • Parse a CSV file using python (to make a decision tree later)

    - by Margaret
    First off, full disclosure: This is going towards a uni assignment, so I don't want to receive code. :) I'm more looking for approaches; I'm very new to Python, having read a book but not yet written any code. The entire task is to import the contents of a CSV file, create a decision tree from the contents of the CSV file (using the ID3 algorithm), and then parse a second CSV file to run against the tree. There's a big (understandable) preference to have it capable of dealing with different CSV files (I asked if we were allowed to hard-code the column names, mostly to eliminate it as a possibility, and the answer was no). The CSV files are in a fairly standard format; the header row is marked with a #, then the column names are displayed, and every row after that is a simple series of values. Example:

        # Column1, Column2, Column3, Column4
        Value01, Value02, Value03, Value04
        Value11, Value12, Value13, Value14

    At the moment, I'm trying to work out the first part: parsing the CSV. To make the decisions for the decision tree, a dictionary structure seems like it's going to be the most logical; so I was thinking of doing something along these lines:

        Read in each line, character by character
            If the character is not a comma or a space
                Append character to temporary string
            If the character is a comma
                Append the temporary string to a list
                Empty string
        Once a line has been read
            Create a dictionary using the header row as the key (somehow!)
            Append that dictionary to a list

    However, if I do things that way, I'm not sure how to make a mapping between the keys and the values. I'm also wondering whether there is some way to perform an action on every dictionary in a list, since I'll need to be doing things to the effect of "Everyone return their values for columns Column1 and Column4, so I can count up who has what!" - I assume that there is some mechanism, but I don't think I know how to do it. Is a dictionary the best way to do it? Would I be better off doing things using some other data structure? If so, what?
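    Since the question asks for approaches rather than finished code, it may be enough to point at Python's standard csv module, which already does the character-by-character tokenizing described above. Purely as an illustration of the "list of dictionaries" idea, and assuming the header format shown in the example, a sketch might look like this:

        import csv

        def load_rows(path):
            """Read a CSV file whose header row starts with '#', returning a list of dicts."""
            with open(path) as f:
                header = f.readline().lstrip('#').strip()
                fieldnames = [name.strip() for name in header.split(',')]
                reader = csv.reader(f, skipinitialspace=True)
                return [dict(zip(fieldnames, row)) for row in reader if row]

        def column_counts(rows, column):
            """Count how often each value appears in one column (the kind of tally ID3 needs)."""
            counts = {}
            for row in rows:
                counts[row[column]] = counts.get(row[column], 0) + 1
            return counts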

    Read the article

  • Mercurial merge strategy per file type

    - by dls
    All: I want to use kdiff3 to merge all files with a certain suffix (say *.c, *.h), and I want to do two things (turn off premerge and use internal:other) for all files with another suffix (say *.mdl). The purpose of this is to allow me to employ a type of 'clobber merge' for a specific file type (i.e. un-mergable files like configurations, auto-generated C, models, etc.). In my .hgrc I've tried:

        [merge-tools]
        kdiff3 =
        clobbermerge = internal:other
        clobbermerge.premerge = False

        [merge-patterns]
        **.c = kdiff3
        **.h = kdiff3
        **.mdl = clobbermerge

    but it still triggers kdiff3 for all files. Thoughts? An extension of this would be to perform a 'clobber merge' on a directory - but once the syntax is clear for a file suffix, the dir should be easy.

    Read the article

  • JavaFX: File upload to REST service / servlet fails because of missing boundary

    - by spa
    I'm trying to upload a file from JavaFX using HttpRequest. For this purpose I have written the following function:

        function uploadFile(inputFile : File) : Void {
            // check file
            if (inputFile == null or not(inputFile.exists()) or inputFile.isDirectory()) {
                return;
            }
            def httpRequest : HttpRequest = HttpRequest {
                location: urlConverter.encodeURL("{serverUrl}");
                source: new FileInputStream(inputFile)
                method: HttpRequest.POST
                headers: [
                    HttpHeader {
                        name: HttpHeader.CONTENT_TYPE
                        value: "multipart/form-data"
                    }
                ]
            }
            httpRequest.start();
        }

    On the server side, I am trying to handle the incoming data with the Apache Commons FileUpload API in a Jersey REST service. The code is a simple copy of the FileUpload tutorial on the Apache homepage:

        @Path("Upload")
        public class UploadService {

            public static final String RC_OK = "OK";
            public static final String RC_ERROR = "ERROR";

            @POST
            @Produces("text/plain")
            public String handleFileUpload(@Context HttpServletRequest request) {
                if (!ServletFileUpload.isMultipartContent(request)) {
                    return RC_ERROR;
                }
                FileItemFactory factory = new DiskFileItemFactory();
                ServletFileUpload upload = new ServletFileUpload(factory);
                List<FileItem> items = null;
                try {
                    items = upload.parseRequest(request);
                } catch (FileUploadException e) {
                    e.printStackTrace();
                    return RC_ERROR;
                }
                ...
            }
        }

    However, I get an exception at items = upload.parseRequest(request);: org.apache.commons.fileupload.FileUploadException: the request was rejected because no multipart boundary was found. I guess I have to add manual boundary info to the InputStream. Is there any easy solution to do this? Or are there even other solutions?

    Read the article

  • Reading strings and integers from .txt file and printing output as strings only

    - by screename71
    Hello, I'm new to C++, and I'm trying to write a short C++ program that reads lines of text from a file, with each line containing one integer key and one alphanumeric string value (no embedded whitespace). The number of lines is not known in advance (i.e. keep reading lines until the end of file is reached). The program needs to use the std::map data structure to store the integers and strings read from input (and to associate the integers with the strings). The program then needs to output the string values (but not the integer values) to standard output, one per line, sorted by integer key value (smallest to largest). So, for example, suppose I have a text file called "data.txt" which contains the following lines:

        10 dog
        -50 horse
        0 cat
        -12 zebra
        14 walrus

    The output should then be:

        horse
        zebra
        cat
        dog
        walrus

    I've pasted below the progress I've made so far on my C++ program:

        #include <fstream>
        #include <iostream>
        #include <map>
        using namespace std;
        using std::map;

        int main () {
            string name;
            signed int value;
            ifstream myfile ("data.txt");
            while (! myfile.eof() ) {
                getline(myfile,name,'\n');
                myfile >> value >> name;
                cout << name << endl;
            }
            return 0;
            myfile.close();
        }

    Unfortunately, this produces the following incorrect output:

        horse
        cat
        zebra
        walrus

    If anyone has any tips, hints, suggestions, etc. on changes and revisions I need to make to the program to get it to work as needed, can you please let me know? Thanks!

    Read the article

  • Why does this LINQ extension method hit the database twice?

    - by Pure.Krome
    Hi folks, I have an extension method called ToListIfNotNullOrEmpty(), which is hitting the DB twice instead of once. The first time it returns one result; the second time it returns all the correct results. I'm pretty sure the first time it hits the database is when the .Any() method is called. Here's the code:

        public static IList<T> ToListIfNotNullOrEmpty<T>(this IEnumerable<T> value)
        {
            if (value.IsNullOrEmpty())
            {
                return null;
            }
            if (value is IList<T>)
            {
                return (value as IList<T>);
            }
            return new List<T>(value);
        }

        public static bool IsNullOrEmpty<T>(this IEnumerable<T> value)
        {
            if (value != null)
            {
                return !value.Any();
            }
            return true;
        }

    I'm hoping to refactor it so that, before the .Any() method is called, it actually enumerates the entire list. If I do the following, only one DB call is made, because the list is already enumerated:

        var pewPew = (from x in whatever select x)
                     .ToList()                     // This enumerates.
                     .ToListIfNotNullOrEmpty();    // This checks the enumerated result.

    I sort of don't want to call ToList() and then my extension method. Any ideas, folks?

    Read the article

  • Storing header and data sections in a CSV file

    - by morpheous
    This should be relatively easy to do, but after several hours of straight programming my mind seems a bit frazzled and could do with some help. I have a C++ class which I currently use to read/write data to a file. I was initially using binary data, but have decided to store the data as CSV in order to let programs written in other languages load the data. The C++ class looks a bit like this:

        class BinaryData {
        public:
            BinaryData();

            void serialize(std::ostream& output) const;
            void deserialize(std::istream& input);

        private:
            Header m_hdr;
            std::vector<Row> m_rows;
        };

    I am simply rewriting the serialize/deserialize methods to write to a CSV file. I am not sure of the "best" way to store a header section and a "data" section in a "flat" CSV file, though - any suggestions on the most sensible way to do this?

    Read the article

  • What can I use to journal writes to a file system?

    - by Dmitry
    Hello all, I need to track all writes to files in order to keep a synchronized version of the files in a different place (a server, or just another directory - it doesn't matter). Assume that:
    - all files are located in the same directory;
    - it is fine to create some extra system files (e.g. SomeFileName.Ext~temp-data);
    - no one has concurrent access to the synced directory; nobody spoils our meta-files or changes the real files before we apply the postponed writes (like commits);
    - we do not care about recovering "local" changes after a crash; the system can simply be rolled back to the state of the "server" by copying from it;
    - it is important that it is transparent to use (so a programmer can just call the ordinary fopen(), read(), write()).
    It must be guaranteed that the copy of the files which the "server" has is consistent - that is, the whole set of files as it existed at some moment in time. The copy may be significantly outdated, but it must be a fair snapshot of all files at some point. As I understand it, I should overload the writing logic to collect data so that changes can be sent to the "server", for example by writing to a temporary File~tmp. And I also have to overload reads so that the program can read the actual data of the file. It would be great if you could suggest an existing library (Java or C++, it is unimportant) or solution (VCS customization?), or give hints on how I should write it myself. Edit: After some reading I have more precise requirements: I need a COW (copy-on-write) wrapper for fopen(), fwrite(), ..., or an interceptor (hook) for WriteFile() and other FS API system calls. A log-structured file system in userspace would be an alternative too.

    Read the article

  • Checking the contents of an uploaded file

    - by kapil
    Hi all, I am using ASP.NET MVC. I want to upload .zip files, for which I am using the HTML file upload control on my view. I want only .zip files to be uploaded. I want to check that my .zip contains only two files, both having the extension .txt and one of them having the name "start". Can anyone please suggest how to check this? How can we make sure that the uploaded .zip is really a zipped folder and not just any other file with a .zip extension? Can we use HttpPostedFileBase.ContentType? Thanks in advance, kaps

    Read the article

  • Java: Embedding Soundbank file in JAR

    - by Pyroclastic
    If I have a soundbank stored in a JAR, how would I load that soundbank into my application using resource loading...? I'm trying to consolidate as much of a MIDI program into the JAR file as I can, and the last thing I have to add is the soundbank file I'm using, as users won't have the soundbanks installed. I'm trying to put it into my JAR file, and then load it with getResource() in the Class class, but I'm getting an InvalidMidiDataException on a soundbank that I know is valid. Here's the code; it's in the constructor for my synthesizer object:

        try {
            synth = MidiSystem.getSynthesizer();
            channels = synth.getChannels();
            instrument = MidiSystem.getSoundbank(this.getClass().getResource("img/soundbank-mid.gm")).getInstruments();
            currentInstrument = instrument[0];
            synth.loadInstrument(currentInstrument);
            synth.open();
        } catch (InvalidMidiDataException ex) {
            System.out.println("FAIL");
            instrument = synth.getAvailableInstruments();
            currentInstrument = instrument[0];
            synth.loadInstrument(currentInstrument);
            try {
                synth.open();
            } catch (MidiUnavailableException ex1) {
                Logger.getLogger(MIDISynth.class.getName()).log(Level.SEVERE, null, ex1);
            }
        } catch (IOException ex) {
            Logger.getLogger(MIDISynth.class.getName()).log(Level.SEVERE, null, ex);
        } catch (MidiUnavailableException ex) {
            Logger.getLogger(MIDISynth.class.getName()).log(Level.SEVERE, null, ex);
        }

    Read the article

  • Process xml-like log file queue

    - by Zsolt Botykai
    Hi all, first of all: I'm not a programmer, never was, although I have learned a lot during my professional career as a support consultant. Now my task is to process - and create some statistics about - a constantly written and rapidly growing XML-like log file. It's not valid XML, because it does not have a proper <root> element; e.g. the log looks like this:

        <log itemdate="somedate">
            <field id="0" />
            ...
        </log>
        <log itemdate="somedate+1">
            <field id="0" />
            ...
        </log>
        <log itemdate="somedate+n">
            <field id="0" />
            ...
        </log>

    For example, I have to count all the items with field id=0. But most of the solutions I found (e.g. using XPath) report an error about the garbage after the first closing </log>. Most probably I can use Python (2.6, although I can compile 3.x as well), some really old Perl version (5.6.x), or a recently compiled xmlstarlet, which looks really promising - I was able to create the statistics for a certain period after copying the file and prepending/appending the opening and closing root elements. But this is a huge file and copying takes time as well. Isn't there a better solution? Thanks in advance!
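    One possible Python approach, sketched below, avoids copying the file by parsing one <log>...</log> entry at a time as the file is read. It assumes that each entry ends with a </log> closing tag on its own line, as in the sample, and that the content inside each entry is well-formed; the file name is hypothetical.

        import xml.etree.ElementTree as ET

        def count_field_id(path, wanted_id='0'):
            """Count <field id=...> elements, parsing one <log> entry at a time."""
            count = 0
            entry_lines = []
            with open(path) as logfile:
                for line in logfile:
                    entry_lines.append(line)
                    if '</log>' in line:                      # one complete entry collected
                        entry = ET.fromstring(''.join(entry_lines))
                        count += sum(1 for field in entry.findall('field')
                                     if field.get('id') == wanted_id)
                        entry_lines = []
            return count

        print(count_field_id('application.log'))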

    Read the article

  • How to remove subsets from a given text file

    - by user324887
    I have a problem like this. My input file contains these transactions:

        10 20 30 40 70
        20 30 70
        30 40
        10 20
        29 70 80 90
        20 30 40
        40 45 65
        10 20 80
        45 65 20

    I want to remove all subset transactions from this file. The output file should look like this:

        10 20 30 40 70
        29 70 80 90
        20 30 40
        40 45 65
        10 20 80

    where records like

        20 30 70
        30 40
        10 20
        45 65 20

    are removed because they are subsets of other records. I am using a set for this, but I am not able to create one set per line. Can anybody tell me how to do this? Please help me. Here I am sending you my code:

        #include <cstdio>
        #include <iostream>
        #include <sstream>
        #include <set>
        #include <string>
        using namespace std;

        set<string> s1;

        int main()
        {
            FILE *fp = fopen ( "abc.txt", "r" );
            if ( fp != NULL )
            {
                char line [ 128 ]; /* or other suitable maximum line size */
                while ( fgets ( line, sizeof line, fp ) != NULL ) /* read a line */
                {
                    istringstream iss(line);
                    do
                    {
                        string sub;
                        iss >> sub;
                        s1.insert(sub);
                    } while (iss);

                    for (set<string>::const_iterator p = s1.begin( ); p != s1.end( ); ++p)
                        cout << *p << endl;
                }
            }
        }
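    The question's code is C++, but the underlying idea is language-independent: read each line into a set of tokens and keep only the lines whose token set is not a proper subset of any other line's set (in C++ that comparison could use std::includes on sorted containers). Purely as an illustration of that idea, here is a minimal sketch in Python, with hypothetical file names:

        def remove_subset_transactions(in_path='abc.txt', out_path='output.txt'):
            """Drop every transaction that is a proper subset of another transaction."""
            with open(in_path) as f:
                lines = [line.strip() for line in f if line.strip()]
            item_sets = [frozenset(line.split()) for line in lines]

            with open(out_path, 'w') as out:
                for i, items in enumerate(item_sets):
                    is_proper_subset = any(i != j and items < other
                                           for j, other in enumerate(item_sets))
                    if not is_proper_subset:
                        out.write(lines[i] + '\n')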

    Read the article

  • Writing to a new log file each day with TraceSource

    - by Cipher
    I am using a logger in my application to write to files. The source, switch and listeners have been defined in the app.config file as follows:

        <system.diagnostics>
          <sources>
            <source name="LoggerApp" switchName="sourceSwitch" switchType="System.Diagnostics.SourceSwitch">
              <listeners>
                <add name="myListener" type="System.Diagnostics.TextWriterTraceListener" initializeData="myListener.log" />
              </listeners>
            </source>
          </sources>
          <switches>
            <add name="sourceSwitch" value="Information" />
          </switches>
        </system.diagnostics>

    Inside my .cs code, I use the logger as follows:

        private static TraceSource logger = new TraceSource("LoggerApp");

        logger.TraceEvent(TraceEventType.Information, 1, "{0} : Started the application", DateTime.Now);

    What would I have to do to create a new log file each day instead of writing to the same log file every time?

    Read the article

  • Django forms: I cannot save the picture file

    - by dana
    I have the model:

        class OpenCv(models.Model):
            created_by = models.ForeignKey(User, blank=True)
            first_name = models.CharField(('first name'), max_length=30, blank=True)
            last_name = models.CharField(('last name'), max_length=30, blank=True)
            url = models.URLField(verify_exists=True)
            picture = models.ImageField(help_text=('Upload an image (max %s kilobytes)' % settings.MAX_PHOTO_UPLOAD_SIZE), upload_to='jakido/avatar', blank=True, null=True)
            bio = models.CharField(('bio'), max_length=180, blank=True)
            date_birth = models.DateField(blank=True, null=True)
            domain = models.CharField(('domain'), max_length=30, blank=True, choices=domain_choices)
            specialisation = models.CharField(('specialization'), max_length=30, blank=True)
            degree = models.CharField(('degree'), max_length=30, choices=degree_choices)
            year_last_degree = models.CharField(('year last degree'), max_length=30, blank=True, choices=year_last_degree_choices)
            lyceum = models.CharField(('lyceum'), max_length=30, blank=True)
            faculty = models.ForeignKey(Faculty, blank=True, null=True)
            references = models.CharField(('references'), max_length=30, blank=True)
            workplace = models.ForeignKey(Workplace, blank=True, null=True)

    the form:

        class OpencvForm(ModelForm):
            class Meta:
                model = OpenCv
                fields = ['first_name', 'last_name', 'url', 'picture', 'bio', 'domain',
                          'specialisation', 'degree', 'year_last_degree', 'lyceum', 'references']

    and the view:

        def save_opencv(request):
            if request.method == 'POST':
                form = OpencvForm(request.POST, request.FILES)
                # if 'picture' in request.FILES:
                file = request.FILES['picture']
                filename = file['filename']
                fd = open('%s/%s' % (MEDIA_ROOT, filename), 'wb')
                fd.write(file['content'])
                fd.close()
                if form.is_valid():
                    new_obj = form.save(commit=False)
                    new_obj.picture = form.cleaned_data['picture']
                    new_obj.created_by = request.user
                    new_obj.save()
                    return HttpResponseRedirect('.')
            else:
                form = OpencvForm()
            return render_to_response('opencv/opencv_form.html', {
                'form': form,
            }, context_instance=RequestContext(request))

    but I don't seem to save the picture in my database... something is wrong, and I can't figure out what :(
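    For what it's worth, ImageField normally writes the uploaded file itself when the model instance is saved, so the manual open/write block is usually unnecessary; also, in Django 1.0+ the objects in request.FILES expose .name and .chunks() rather than dictionary keys like file['filename']. A minimal sketch of the view along those lines, assuming MEDIA_ROOT is configured and the template's <form> tag declares enctype="multipart/form-data":

        def save_opencv(request):
            if request.method == 'POST':
                form = OpencvForm(request.POST, request.FILES)
                if form.is_valid():
                    new_obj = form.save(commit=False)   # the ImageField handles the file itself
                    new_obj.created_by = request.user
                    new_obj.save()                      # file lands under MEDIA_ROOT/jakido/avatar/
                    return HttpResponseRedirect('.')
            else:
                form = OpencvForm()
            return render_to_response('opencv/opencv_form.html',
                                      {'form': form},
                                      context_instance=RequestContext(request))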

    Read the article

  • PHP Include and sort by variable within file

    - by Jason Hoax
    I have written this PHP include script, but now I'm trying to sort the included files by variables WITHIN the included PHP files. In other words, in each included PHP file there is a rating; I want the ratings to be read so that when the files are included they are sorted from highest to lowest (scores range from about 6.0 to 9.0). Kind regards!

        $location = 'experiments/visualizations';
        foreach (glob("$location/*.php") as $filename) {
            include $filename;
        }

    The included files are named randomly, like:

    File 1:

        $filename = "AAAA";
        $projecttitle = "Project Name";
        $description = "This totally explains the product";
        $score = "7.6";

    File 2:

        $filename = "BBBB";
        $projecttitle = "Project Name2";
        $description = "This totally explains the product";
        $score = "9.6";

    As you can see, 9.6 is higher than 7.6, but PHP sorts the includes by file name instead of by the variables within the files. I tried sorting, but I can't get it fixed. Help!

    Read the article
