Search Results

Search found 70351 results on 2815 pages for 'file parsing'.


  • Using fgets to read strings from file in C

    - by Ivan
    I am trying to read strings from a file that has each string on a new line, but I think it reads a newline character once instead of a string, and I don't know why. If I'm going about reading strings the wrong way, please correct me.

        i = 0;
        F1 = fopen("alg.txt", "r");
        F2 = fopen("tul.txt", "w");
        if (!feof(F1)) {
            do { // start scanning file
                fgets(inimene[i].Enimi, 20, F1);
                fgets(inimene[i].Pnimi, 20, F1);
                fgets(inimene[i].Kood, 12, F1);
                printf("i=%d\nEnimi=%s\nPnimi=%s\nKaad=%s", i, inimene[i].Enimi, inimene[i].Pnimi, inimene[i].Kood);
                i++;
            } while (!feof(F1));
        } /* finish getting structs */

    The printf is there to let me see what was read into what, and here is the result:

        i=0
        Enimi=peter
        Pnimi=pupkin
        Kood=223456iatb
        i=1
        Enimi=
        Pnimi=masha
        Kaad=gubkina
        i=2
        Enimi=234567iasb
        Pnimi=sasha
        Kood=dudkina

    As you can see, after the first struct is read there is a blank (a newline?) once, and then everything is shifted. I suppose I could read a dummy string to absorb that extra blank so that nothing would be shifted, but that doesn't help me understand the problem and avoid it in the future.

    Read the article

  • Optimising RSS parsing on App Engine to avoid high CPU warnings

    - by Danny Tuppeny
    I'm pulling some RSS feeds into a datastore in App Engine to serve up to an iPhone app. I use cron to schedule updating the RSS every x minutes. Each task only parses one RSS feed (which has 15-20 items). I frequently get warnings about high CPU usage in the App Engine dashboard, so I'm looking for ways to optimise my code.

    Currently, I use minidom (since it's already there on App Engine), but I suspect it's not very efficient! Here's the code:

        dom = minidom.parseString(urlfetch.fetch(url).content)
        if dom:
            items = []
            for node in dom.getElementsByTagName('item'):
                item = RssItem(
                    key_name = self.getText(node.getElementsByTagName('guid')[0].childNodes),
                    title = self.getText(node.getElementsByTagName('title')[0].childNodes),
                    description = self.getText(node.getElementsByTagName('description')[0].childNodes),
                    modified = datetime.now(),
                    link = self.getText(node.getElementsByTagName('link')[0].childNodes),
                    categories = [self.getText(category.childNodes) for category in node.getElementsByTagName('category')]
                )
                items.append(item)
            db.put(items)

        def getText(self, nodelist):
            rc = ''
            for node in nodelist:
                if node.nodeType == node.TEXT_NODE:
                    rc = rc + node.data
            return rc

    There isn't much going on, but the scripts often take 2-6 seconds of CPU time, which seems a bit excessive for looping through 20-ish items and reading a few attributes.

    What can I do to make this faster? Is there anything particularly bad in the above code, or should I change to another way of parsing? Are there any libraries (that work on App Engine) that would be better, or would I be better off parsing the RSS myself?

    Read the article

  • How to load an HTML file into an included PHP file from another PHP file?

    - by Peter NGM
    I have two PHP files and one HTML file. I want to include file2.php in file1.php. file1.php is:

        <html>
        <head>...</head>
        <body>
        ...
        <?php include("file2.php"); ?>
        ...
        </body>
        </html>

    I want to load an HTML file in file2.php. file2.php is:

        <?php
        $doc = new DOMDocument();
        $doc->loadHTMLFile("sample.html");
        echo $doc->saveHTML();
        ?>

    and sample.html is:

        ...
        <b>Hello World!</b>
        ...

    My problem is this error:

        Warning: DOMDocument::loadHTMLFile() [domdocument.loadhtmlfile]: I/O warning : failed to load external entity "sample.html" in C:\xampp\htdocs\project\file2.php on line 3

    Please help me solve this problem.

    Read the article

  • Trouble parsing quotes with SAX parser (javax.xml.parsers.SAXParser)

    - by johnrock
    When using a SAX parser, parsing fails when there is a " in the node content. How can I resolve this? Do I need to convert all " characters? In other words, any time I have a quote in a node:

        <node>characters in node containing "quotes"</node>

    that node gets butchered into multiple character arrays when the Handler is parsing it. Is this normal behaviour? Why should quotes cause such a problem? Here is the code I am using:

        import javax.xml.parsers.SAXParser;
        import javax.xml.parsers.SAXParserFactory;
        import org.apache.http.HttpEntity;
        import org.apache.http.HttpResponse;
        import org.apache.http.client.methods.HttpGet;
        import org.xml.sax.InputSource;
        import org.xml.sax.XMLReader;
        ...
        HttpGet httpget = new HttpGet(GATEWAY_URL + "/" + question.getId());
        httpget.setHeader("User-Agent", PayloadService.userAgent);
        httpget.setHeader("Content-Type", "application/xml");
        HttpResponse response = PayloadService.getHttpclient().execute(httpget);
        HttpEntity entity = response.getEntity();
        if (entity != null) {
            SAXParserFactory spf = SAXParserFactory.newInstance();
            SAXParser sp = spf.newSAXParser();
            XMLReader xr = sp.getXMLReader();
            ConvoHandler convoHandler = new ConvoHandler();
            xr.setContentHandler(convoHandler);
            xr.parse(new InputSource(entity.getContent()));
            entity.consumeContent();
            messageList = convoHandler.getMessageList();
        }
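
    For what it's worth, SAX is allowed to split the text of a single element across several characters() callbacks, and entity references such as &quot; are a common place for that split to happen, so this is expected behaviour rather than a failure. A minimal sketch of a handler that buffers the fragments (class and field names here are illustrative, not taken from the question):

        import org.xml.sax.Attributes;
        import org.xml.sax.helpers.DefaultHandler;

        public class BufferingHandler extends DefaultHandler {
            private final StringBuilder text = new StringBuilder();

            @Override
            public void startElement(String uri, String localName, String qName, Attributes attrs) {
                text.setLength(0);                // reset the buffer for the new element
            }

            @Override
            public void characters(char[] ch, int start, int length) {
                text.append(ch, start, length);   // may be called several times per element
            }

            @Override
            public void endElement(String uri, String localName, String qName) {
                String content = text.toString(); // the complete, reassembled element text
                // hand 'content' to whatever object is being built for this element
            }
        }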

    Read the article

  • Abort SAX parsing mid-document?

    - by CSharperWithJava
    I'm parsing a very simple XML schema with a SAX parser in Android. An example file would be:

        <Lists>
          <List name="foo">
            <Note title="note 1" .../>
            <Note title="note 2" .../>
          </List>
          <List name="bar">
            <Note title="note 3" .../>
          </List>
        </Lists>

    The ... represents more note data as attributes that aren't important to the question. I use a SAX parser to parse the document and only implement the startElement and endElement methods of the HandlerBase to handle Note and List nodes. However, in some cases the files can be very large and take some time to process. I'd like to be able to abort the parsing process at any time (i.e. the user presses a cancel button).

    The best way I've come up with is to throw an exception from my startElement method when certain conditions are met (i.e. a boolean stopParsing is true). Is there a better way to do this? I've always used DOM-style parsers, so I don't fully understand the SAX parser.

    One final note: I'm running this on Android, so I will have the parser running on a worker thread to keep the UI responsive. If you know how I can kill the thread safely while the parser is running, that would answer my question as well.
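
    Throwing from the handler is in fact the usual way to stop a SAX parse early; the refinement is to throw a dedicated SAXException subclass so the caller can tell "cancelled" apart from a real parse error. A hedged sketch (class names are illustrative):

        import org.xml.sax.Attributes;
        import org.xml.sax.SAXException;
        import org.xml.sax.helpers.DefaultHandler;

        class ParsingCancelledException extends SAXException {
            ParsingCancelledException() { super("parsing cancelled"); }
        }

        class CancellableHandler extends DefaultHandler {
            private volatile boolean cancelled = false;   // flipped from the UI thread

            public void cancel() { cancelled = true; }

            @Override
            public void startElement(String uri, String localName, String qName, Attributes attrs)
                    throws SAXException {
                if (cancelled) {
                    throw new ParsingCancelledException();  // unwinds out of parse()
                }
                // normal Note/List handling goes here
            }
        }

    The worker thread then wraps the parse() call in a try/catch, treats ParsingCancelledException as a clean cancel, and simply returns, which lets the thread finish on its own instead of being killed.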

    Read the article

  • Append data to file in iPhone app

    - by zp26
    I have a problem. I want to create an XML-like file. I have written code for this, but the file does not get the information appended; it is rewritten, and only the last NSData is saved. Can you help me? This is my code:

        -(void)salvataggioInXML:(NSString*)name:(float)x:(float)y:(float)z{
            NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
            NSString *documentsDirectoryPath = [paths objectAtIndex:0];
            NSString *filePath = [documentsDirectoryPath stringByAppendingPathComponent:@"filePosizioni.xml"];
            NSFileHandle *myHandle = [NSFileHandle fileHandleForUpdatingAtPath:filePath];
            [myHandle seekToEndOfFile];

            NSString *tagName = [NSString stringWithFormat:@"<name>%@<name>", name];
            NSString *tagX = [NSString stringWithFormat:@"<x>%f<x>", name];
            NSString *tagY = [NSString stringWithFormat:@"<y>%f<y>", name];
            NSString *tagZ = [NSString stringWithFormat:@"<z>%f<z>", name];

            NSData* dataName = [tagName dataUsingEncoding: NSASCIIStringEncoding];
            NSData* dataX = [tagX dataUsingEncoding: NSASCIIStringEncoding];
            NSData* dataY = [tagY dataUsingEncoding: NSASCIIStringEncoding];
            NSData* dataZ = [tagZ dataUsingEncoding: NSASCIIStringEncoding];

            if ([dataName writeToFile:filePath atomically:YES])
                NSLog(@"writeok");
            [myHandle seekToEndOfFile];
            if ([dataX writeToFile:filePath atomically:YES])
                NSLog(@"writeok");
            [myHandle seekToEndOfFile];
            if ([dataY writeToFile:filePath atomically:YES])
                NSLog(@"writeok");
            [myHandle seekToEndOfFile];
            if ([dataZ writeToFile:filePath atomically:YES])
                NSLog(@"writeok");
            [myHandle seekToEndOfFile];

            NSLog(@"zp26 %@",filePath);
        }

    Read the article

  • max file upload size change in web.config

    - by Christopher Johnson
    Using .NET MVC 3 and trying to increase the allowable file upload size. This is what I've added to web.config:

        <system.webServer>
          <validation validateIntegratedModeConfiguration="false"/>
          <modules runAllManagedModulesForAllRequests="true">
            <add name="ErrorLog" type="Elmah.ErrorLogModule, Elmah" preCondition="managedHandler" />
            <add name="ErrorMail" type="Elmah.ErrorMailModule, Elmah" preCondition="managedHandler" />
            <add name="ErrorFilter" type="Elmah.ErrorFilterModule, Elmah" preCondition="managedHandler" />
          </modules>
          <handlers>
            <add name="Elmah" path="elmah.axd" verb="POST,GET,HEAD" type="Elmah.ErrorLogPageFactory, Elmah" preCondition="integratedMode" />
          </handlers>
          <security>
            <requestFiltering>
              <requestLimits maxAllowedContentLength="104857600"/>
            </requestFiltering>
          </security>

    (Ignore the Elmah stuff.) It's still not allowing file sizes larger than 50 MB, and this should allow up to 100 MB, no? Any ideas?
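
    One thing worth checking (an assumption, since the rest of the web.config isn't shown): maxAllowedContentLength only raises the IIS request-filtering limit. ASP.NET enforces a separate limit, httpRuntime maxRequestLength, which is specified in kilobytes and defaults to roughly 4 MB, so uploads can still be rejected until it is raised as well. A sketch of the companion setting:

        <system.web>
          <!-- value is in KB: 102400 KB = 100 MB; maxAllowedContentLength above is in bytes -->
          <httpRuntime maxRequestLength="102400" />
        </system.web>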

    Read the article

  • find and replace values in a flat-file using PHP

    - by peirix
    I'd think there was a question on this already, but I can't find one. Maybe the solution is too easy...

    Anyway, I have a flat-file and want to let the user change the values based on a name. I've already sorted out creating new name+value pairs using the fopen('a') mode, using jQuery to send the AJAX call with newValue and newName. But say the content looks like this:

        host|http:www.stackoverflow.com
        folder|/questions/
        folder2|/users/

    And now I want to change the folder value. So I'll send in folder as oldName and /tags/ as newValue. What's the best way to overwrite the value? The order in the list doesn't matter, and the name will always be on the left, followed by a | (pipe), the value, and then a newline.

    My first thought was to read the list, store it in an array, search all the [0]'s for oldName, then change the [1] that belongs to it, and then write it back to the file. But I feel there is a better way around this. Any ideas? Maybe regex?

    Read the article

  • Atomic int writes on file

    - by Waneck
    Hello! I'm writing an application that will have to handle many concurrent accesses, both by threads and by processes, so no mutexes or locks should be applied to it if I can help it. To keep the use of locks to a minimum, I'm designing the file to be "append-only": all data is first appended to disk, and then the address pointing to the info it updates is changed to refer to the new data. So I will need to implement a small lock system only to change this one int so it refers to the new address.

    What is the best way to do it? I was thinking about maybe putting a flag before the address which, when set, makes readers use a spin lock until it's released. But I'm afraid that isn't at all atomic, is it? For example: a reader reads the flag and sees it unset; at the same time a writer sets the flag and changes the value of the int; the reader may then read an inconsistent value!

    I'm looking for locking techniques, but all I find is either thread locking techniques or ways to lock an entire file, not individual fields. Is it not possible to do this? How do append-only databases handle this?

    Thanks! Cauê
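
    As one possible direction (a hedged sketch, not a full answer): some platforms let you lock a byte range rather than the whole file, so only the few header bytes holding the "current address" pointer are ever serialized, while appends elsewhere stay lock-free. In Java, for instance, FileChannel.lock(position, size, shared) takes an advisory lock on just that region and also works across processes; the file name and offset below are illustrative:

        import java.io.RandomAccessFile;
        import java.nio.channels.FileChannel;
        import java.nio.channels.FileLock;

        public class HeaderPointer {
            private static final long HEADER_OFFSET = 0L;   // where the pointer lives in the file

            public static void updateHeader(String path, int newAddress) throws Exception {
                try (RandomAccessFile raf = new RandomAccessFile(path, "rw");
                     FileChannel ch = raf.getChannel();
                     FileLock lock = ch.lock(HEADER_OFFSET, 4, false)) {  // exclusive lock on 4 bytes
                    raf.seek(HEADER_OFFSET);
                    raf.writeInt(newAddress);   // readers take a shared lock on the same range
                }
            }
        }

    Within a single process the threads would still coordinate through an in-memory lock, since this kind of region lock is advisory and held per process.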

    Read the article

  • OO Objective-C design with XML parsing

    - by brainfsck
    Hi, I need to parse an XML record that represents a QuizQuestion. The "type" attribute tells the type of question. I then need to create an appropriate subclass of QuizQuestion based on the question type. The following code works ([auto]release statements omitted for clarity):

        QuizQuestion *question = [[QuizQuestion alloc] initWithXMLString:xml];
        if( [ [question type] isEqualToString:@"multipleChoiceQuestion"] ) {
            [myQuestions addObject:[[MultipleChoiceQuizQuestion alloc] initWithXMLString:xml];
        }

        //QuizQuestion.m
        -(id)initWithXMLString:(NSString*)xml {
            self.type = ... // parse "type" attribute from xml
            // parse the rest of the xml
        }

        //MultipleChoiceQuizQuestion.m
        -(id)initWithXMLString:(NSString*)xml {
            if( self = [super initWithXMLString:xml] ) {
                // multiple-choice stuff
            }
        }

    Of course, this means that the XML is parsed twice: once to find out the type of QuizQuestion, and once when the appropriate QuizQuestion is initialized. To prevent parsing the XML twice, I tried the following approach:

        // MultipleChoiceQuizQuestion.m
        -(id)initWithQuizRecord:(QuizQuestion*)record {
            self = record; // record has already parsed the "type" and other parameters
            // multiple-choice stuff
        }

    However, this fails due to the "self=record" assignment; whenever the MultipleChoiceQuizQuestion tries to call an instance method, it tries to call the method on the QuizQuestion class instead. Can someone tell me the correct approach for parsing XML into the appropriate subclass when the parent class needs to be initialized to know which subclass is appropriate?

    Read the article

  • Looking for a very simple file-based CMS

    - by nfm
    I'm building a site for a friend for free, and am trying to work out a good way for her to be able to easily make updates. I haven't used any CMSs before. I was browsing the web today looking at some, and they all seem way too complicated for what I'm after.

    Basically, all I want is a really simple CMS that pulls together HTML snippets in particular subdirectories, wraps them in header/footer HTML, and inserts them into a template page in the appropriate section. I'm imagining a site layout something like this:

        /
        /index.php
        /blog_template.php
        /news_template.php
        /blog/
        /blog/header.php
        /blog/footer.php
        /blog/my-first-blog.html
        /blog/blogs-rule.html
        /blog/...

    Say index.php contains div#blog. PHP would wrap each /blog/*.html file in /blog/header.php and /blog/footer.php, and insert them into div#blog as div#blog([0-9]*).

    I haven't been able to find anything this basic, and am one step away from throwing something together myself, but I'm a bit short on time at the moment and figured I'd post here first. Has anyone come across something like this? I don't want any DB, extensions, user accounts, installation, config, updates... just a simple file-based solution. Thanks :)

    Forgot to mention: it needs to be FOSS and run on Linux!

    Read the article

  • Fastest Java way to remove the first/top line of a file (like a stack)

    - by christangrant
    I am trying to improve an external sort implementation in Java. I have a bunch of BufferedReader objects open for temporary files. I repeatedly remove the top line from each of these files. This pushes the limits of Java's heap. I would like a more scalable method of doing this without losing speed because of a bunch of constructor calls.

    One solution is to only open files when they are needed, then read the first line and then delete it. But I am afraid that this will be significantly slower. So, using Java libraries, what is the most efficient method of doing this?

    --Edit--

    For external sort, the usual method is to break a large file up into several chunk files. Sort each of the chunks. Then treat the sorted files like buffers: pop the top item from each file, and the smallest of all those is the global minimum. Continue until all items are processed. http://en.wikipedia.org/wiki/External_sorting

    My temporary files (buffers) are basically BufferedReader objects. The operations performed on these files are the same as stack/queue operations (peek and pop, no push needed). I am trying to make these peek and pop operations more efficient, because using many BufferedReader objects takes up too much space.
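
    One common shape for this (a hedged sketch; class names are illustrative and the per-reader buffer size is an assumption to tune): keep a PriorityQueue ordered by each chunk's current head line, so "peek" is just the queue head and "pop" reads one more line from that chunk's BufferedReader and re-offers it. Memory per chunk is then one line plus one small read buffer, rather than whole files.

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.io.IOException;
        import java.io.Writer;
        import java.util.List;
        import java.util.PriorityQueue;

        class ChunkCursor {
            final BufferedReader reader;
            String head;                                   // only the current top line is kept

            ChunkCursor(String path) throws IOException {
                reader = new BufferedReader(new FileReader(path), 8 * 1024);  // small buffer per chunk
                head = reader.readLine();
            }

            boolean advance() throws IOException {         // "pop": load the next line, if any
                head = reader.readLine();
                return head != null;
            }
        }

        public class KWayMerge {
            public static void merge(List<String> chunkPaths, Writer out) throws IOException {
                PriorityQueue<ChunkCursor> heap =
                        new PriorityQueue<>((a, b) -> a.head.compareTo(b.head));
                for (String path : chunkPaths) {
                    ChunkCursor c = new ChunkCursor(path);
                    if (c.head != null) heap.offer(c);     // skip empty chunks
                }
                while (!heap.isEmpty()) {
                    ChunkCursor smallest = heap.poll();    // global minimum across all chunks
                    out.write(smallest.head);
                    out.write('\n');
                    if (smallest.advance()) heap.offer(smallest);  // re-insert with its new head
                    else smallest.reader.close();
                }
            }
        }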

    Read the article

  • Online file storage similar to Amazon S3

    - by Joel G
    I am looking to code a file storage application in Perl similar to Amazon S3. I already have an Amazon S3 clone that I found online called parkplace, but it's in Ruby, is old, and isn't built for high loads. I am not really sure what modules and programs I should use, so I'd like some help picking them out. My requirements are listed below (yes, I know there are lots, but I could start simple and then add more once I get it going):

        - Easy API implementation for client-side apps (maybe RESTful, but with extras like mkdir and cp?)
        - Centralized database server for the user DB (maybe PostgreSQL?)
        - Logging of all connections, bandwidth used, well, pretty much everything, to a centralized server (maybe PostgreSQL again?)
        - Easy server-side configuration (config file(s) stored on the servers)
        - Web-based control panel for admin(s) and user(s) to show logs (could work just by running queries against the databases)
        - Fast
        - High uptime
        - Low memory usage
        - Some sort of load distribution/load balancer (maybe DNS-based, or Pound, or Perlbal, or something else?)
        - Maybe a cache of some sort (memcached or Perlbal or something else?)

    Thanks in advance

    Read the article

  • Parsing custom time format with SimpleDateFormat

    - by ggrigery
    I'm having trouble parsing a date format that I'm getting back from an API and that I have never seen before (I believe it is a custom format). An example of a date:

        /Date(1353447000000+0000)/

    When I first encountered this format it didn't take me long to see that it was the time in milliseconds with a time zone offset. I'm having trouble extracting this date using SimpleDateFormat, though. Here was my first attempt:

        String weirdDate = "/Date(1353447000000+0000)/";
        SimpleDateFormat sdf = new SimpleDateFormat("'/Date('SSSSSSSSSSSSSZ')/'");
        Date d1 = sdf.parse(weirdDate);
        System.out.println(d1.toString());
        System.out.println(d1.getTime());
        System.out.println();

        Date d2 = new Date(Long.parseLong("1353447000000"));
        System.out.println(d2.toString());
        System.out.println(d2.getTime());

    And the output:

        Tue Jan 06 22:51:41 EST 1970
        532301760

        Tue Nov 20 16:30:00 EST 2012
        1353447000000

    The date (and the number of milliseconds parsed) is not even close, and I haven't been able to figure out why. After some troubleshooting, I discovered that the way I'm trying to use SimpleDateFormat is clearly flawed. Example:

        String weirdDate = "1353447000000";
        SimpleDateFormat sdf = new SimpleDateFormat("S");
        Date d1 = sdf.parse(weirdDate);
        System.out.println(d1.toString());
        System.out.println(d1.getTime());

    And the output:

        Wed Jan 07 03:51:41 EST 1970
        550301760

    I can't say I've ever tried to use SimpleDateFormat this way to parse a time in milliseconds, because I would normally use Long.parseLong() and pass it straight into new Date(long) (and in fact the solution I have in place right now is just a regular expression and parsing a long). I'm looking for a cleaner way to extract this time in milliseconds with the timezone and quickly parse it into a date without the messy manual handling. Does anyone have any ideas, or can anyone spot the errors in my logic above? Help is much appreciated.
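
    One note that may explain the numbers: SimpleDateFormat's S pattern letter means "milliseconds within the second", not "milliseconds since the epoch", so it was never going to interpret 1353447000000 as an instant. A hedged sketch of the regex-plus-Long approach, packaged so the call site stays clean (class and method names are illustrative):

        import java.util.Date;
        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class MsJsonDates {
            // captures the epoch millis and the optional +0000 style offset
            private static final Pattern WCF_DATE =
                    Pattern.compile("/Date\\((-?\\d+)([+-]\\d{4})?\\)/");

            public static Date parse(String raw) {
                Matcher m = WCF_DATE.matcher(raw);
                if (!m.matches()) {
                    throw new IllegalArgumentException("Not a /Date(...)/ value: " + raw);
                }
                long epochMillis = Long.parseLong(m.group(1));
                // group(2), the offset, is not needed to build the Date: epoch millis
                // already identify an absolute instant; keep it only for display logic.
                return new Date(epochMillis);
            }

            public static void main(String[] args) {
                System.out.println(parse("/Date(1353447000000+0000)/"));
            }
        }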

    Read the article

  • Read/Write/Find/Replace huge csv file

    - by notapipe
    I have a huge (4.5 GB) CSV file. I need to perform basic cut-and-paste and replace operations on some columns. The data is pretty well organized; the only problem is that I cannot work with it in Excel because of the size (2000 rows, 550000 columns). Here is some part of the data:

        ID,Affection,Sex,DRB1_1,DRB1_2,SENum,SEStatus,AntiCCP,RFUW,rs3094315,rs12562034,rs3934834,rs9442372,rs3737728
        D0024949,0,F,0101,0401,SS,yes,?,?,A_A,A_A,G_G,G_G
        D0024302,0,F,0101,7,SN,yes,?,?,A_A,G_G,A_G,?_?
        D0023151,0,F,0101,11,SN,yes,?,?,A_A,G_G,G_G,G_G

    I need to:

        1. remove the 4th, 5th, 6th, 7th, 8th and 9th columns;
        2. find every _ character from column 10 onwards and replace it with a space ( ) character;
        3. replace every ? with zero (0);
        4. replace every comma with a tab;
        5. remove the first row (the one with the column names);
        6. replace every 0 with 1, every 1 with 2 and every ? with 0 in the 2nd column;
        7. replace F with 2, M with 1 and ? with 0 in the 3rd column;

    so that in the resulting file the output reads:

        D0024949 1 2 A A A A G G G G
        D0024302 1 2 A A G G A G 0 0
        D0023151 1 2 A A G G G G G G

    (both input and output should read one line per row, with no extra blank rows). Is there a memory-efficient way of doing that with Java (and I need code to do that), or a usable tool for working with this large data so that I can easily apply Excel-like functionality?
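
    Since the file only has 2000 rows, a line-at-a-time pass keeps memory flat regardless of file size. A hedged, minimal sketch of that approach (file names are placeholders, and the recodes follow the list above; adjust if the real rules differ):

        import java.io.BufferedReader;
        import java.io.BufferedWriter;
        import java.io.FileReader;
        import java.io.FileWriter;
        import java.io.IOException;

        public class RecodeCsv {
            public static void main(String[] args) throws IOException {
                try (BufferedReader in = new BufferedReader(new FileReader("input.csv"));
                     BufferedWriter out = new BufferedWriter(new FileWriter("output.txt"))) {
                    String line = in.readLine();                    // skip the header row
                    while ((line = in.readLine()) != null) {
                        String[] cols = line.split(",", -1);        // -1 keeps empty trailing fields
                        StringBuilder sb = new StringBuilder(line.length());
                        sb.append(cols[0]);                                          // ID
                        sb.append('\t').append(recode(cols[1], "0", "1", "1", "2")); // Affection
                        sb.append('\t').append(recode(cols[2], "F", "2", "M", "1")); // Sex
                        for (int i = 9; i < cols.length; i++) {                      // columns 10 onwards
                            sb.append('\t').append(cols[i].replace('_', ' ').replace('?', '0'));
                        }
                        out.write(sb.toString());
                        out.newLine();
                    }
                }
            }

            // map value a to aTo, b to bTo, "?" to "0", anything else unchanged
            private static String recode(String v, String a, String aTo, String b, String bTo) {
                if (v.equals(a)) return aTo;
                if (v.equals(b)) return bTo;
                if (v.equals("?")) return "0";
                return v;
            }
        }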

    Read the article

  • How to delete duplicate/aggregate rows faster in a file using Java (no DB)

    - by S. Singh
    I have a 2 GB text file with 5 columns delimited by tabs. A row is called a duplicate only if 4 out of its 5 columns match. Right now, I am de-duping by first loading each column into a separate List, then iterating through the lists, deleting duplicate rows as they are encountered, and aggregating.

    The problem: it is taking more than 20 hours to process one file, and I have 25 such files to process. Can anyone please share their experience of how they would go about doing such de-duping? This de-duping code will be thrown away afterwards, so I am looking for a quick/dirty solution to get the job done as soon as possible. Here is my pseudo-code (roughly):

        Iterate over the rows
            i = current_row_no
            Iterate over row no. i+1 to last_row
                if (col1 matches        // find duplicate
                    && col2 matches
                    && col3 matches
                    && col4 matches) {
                    col5List.set(i, get col5);   // aggregate
                }

    Duplicate example: A and B will be duplicates.

        A=(1,1,1,1,1), B=(1,1,1,1,2), C=(2,1,1,1,1)

    and the output would be:

        A=(1,1,1,1,1+2)
        C=(2,1,1,1,1)

    [notice that B has been kicked out]
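
    The pairwise inner loop is what makes this quadratic; keying a HashMap on the first four columns turns it into a single pass. A hedged sketch under the assumptions visible in the example (tab delimiter, column 5 aggregated with a "+", first-seen row kept):

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.io.IOException;
        import java.util.LinkedHashMap;
        import java.util.Map;

        public class DedupRows {
            public static void main(String[] args) throws IOException {
                Map<String, String[]> byKey = new LinkedHashMap<>();   // preserves first-seen order
                try (BufferedReader in = new BufferedReader(new FileReader("rows.tsv"))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        String[] cols = line.split("\t", -1);
                        String key = cols[0] + "\u0001" + cols[1] + "\u0001" + cols[2] + "\u0001" + cols[3];
                        String[] existing = byKey.get(key);
                        if (existing == null) {
                            byKey.put(key, cols);                      // first occurrence wins
                        } else {
                            existing[4] = existing[4] + "+" + cols[4]; // aggregate column 5
                        }
                    }
                }
                for (String[] cols : byKey.values()) {
                    System.out.println(String.join("\t", cols));
                }
            }
        }

    If the set of unique keys itself does not fit in the heap, the fallback is to sort the file on the first four columns first (any external sort, or the Unix sort command) and then aggregate adjacent rows in one streaming pass.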

    Read the article

  • Looking for the log file on iPhone

    - by zp26
    Hi, I have a problem. I have the code below to save data to a file. I build my app on the device and run it. The result variable is TRUE, but I can't find the file on my iPhone device. Can you help me? Thanks, and sorry for my English XP

        -(void)saveXML:(NSString*)name:(float)x:(float)y:(float)z{
            NSMutableData *data = [NSMutableData data];
            NSKeyedArchiver *archiver = [[NSKeyedArchiver alloc] initForWritingWithMutableData:data];
            [archiver setOutputFormat:NSPropertyListXMLFormat_v1_0];
            [archiver encodeFloat:x forKey:@"x"];
            [archiver encodeFloat:y forKey:@"y"];
            [archiver encodeFloat:z forKey:@"z"];
            [archiver encodeObject:name forKey:@"name"];
            [archiver finishEncoding];

            NSString* filePath = [[NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0] stringByAppendingPathComponent:@"XML Position"];
            BOOL result = [data writeToFile:filePath atomically:YES];
            if(result)
                [self updateTextView:@"success"];
            [archiver release];
        }

    Read the article

  • Trouble parsing self closing XML tags using SAX parser

    - by sandesh
    Hi, I am having trouble parsing self-closing XML tags using SAX. I am trying to extract the link tag from the Google Base API. I am having reasonable success in parsing regular tags. Here is a snippet of the XML:

        <entry>
        <id>http://www.google.com/base/feeds/snippets/15802191394735287303</id>
        <published>2010-04-05T11:00:00.000Z</published>
        <updated>2010-04-24T19:00:07.000Z</updated>
        <category scheme='http://base.google.com/categories/itemtypes' term='Products'/>
        <title type='text'>En-el1 Li-ion Battery+charger For Nikon Digital Camera</title>
        <link rel='alternate' type='text/html' href='http://rover.ebay.com/rover/1/711-67261-24966-0/2?ipn=psmain&amp;icep_vectorid=263602&amp;kwid=1&amp;mtid=691&amp;crlp=1_263602&amp;icep_item_id=170468125748&amp;itemid=170468125748'/>
        .
        .
        and so on

    I can parse the published and updated tags, but not the link and category tags. Here are my startElement and endElement overrides:

        public void startElement(String uri, String localName, String qName, Attributes attributes) throws SAXException {
            if (qName.equals("title") && xmlTags.peek().equals("entry")) {
                insideEntryTitle = true;
            }
            xmlTags.push(qName);
        }

        public void endElement(String uri, String localName, String qName) throws SAXException {
            // If a "title" element is closed, we start a new line, to prepare
            // printing the new title.
            xmlTags.pop();
            if (insideEntryTitle) {
                insideEntryTitle = false;
                System.out.println();
            }
        }

    And the declaration for xmlTags:

        private Stack<String> xmlTags = new Stack<String>();

    Any help, guys? This is my first post here; I hope I have followed the posting rules! Thanks a ton.
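
    For context, a self-closing element such as <link .../> has no character content at all; everything it carries is in its attributes, so it has to be read from the Attributes argument of startElement rather than from characters(). A hedged sketch of a handler doing that (field names are illustrative):

        import java.util.Stack;
        import org.xml.sax.Attributes;
        import org.xml.sax.helpers.DefaultHandler;

        public class EntryHandler extends DefaultHandler {
            private final Stack<String> xmlTags = new Stack<String>();
            private String link;       // illustrative fields for the extracted values
            private String category;

            @Override
            public void startElement(String uri, String localName, String qName, Attributes attributes) {
                if (qName.equals("link") && "alternate".equals(attributes.getValue("rel"))) {
                    link = attributes.getValue("href");      // self-closing: data lives in attributes
                } else if (qName.equals("category")) {
                    category = attributes.getValue("term");
                }
                xmlTags.push(qName);                         // keep the existing stack bookkeeping
            }

            @Override
            public void endElement(String uri, String localName, String qName) {
                xmlTags.pop();
            }
        }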

    Read the article

  • how to write binary copy of structure array to file

    - by cerr
    I would like to write a binary image of a structure array to a binary file. I have tried this so far:

        #include <stdio.h>
        #include <string.h>

        #define NUM 256

        const char *fname = "binary.bin";

        typedef struct foo_s {
            int intA;
            int intB;
            char string[20];
        } foo_t;

        void main (void)
        {
            foo_t bar[NUM];
            bar[0].intA = 10;
            bar[0].intB = 999;
            strcpy(bar[0].string, "Hello World!");
            Save(bar);
            printf("%s written succesfully!\n", fname);
        }

        int Save(foo_t* pData)
        {
            FILE *pFile;
            int ptr = 0;
            int itr = 0;

            pFile = fopen(fname, "w");
            if (pFile == NULL) {
                printf("couldn't open %s\n", fname);
                return;
            }
            for (itr = 0; itr < NUM; itr++) {
                for (ptr = 0; ptr < sizeof(foo_t); ptr++) {
                    fputc((unsigned char)*((&pData[itr])+ptr), pFile);
                }
                fclose(pFile);
            }
        }

    But the compiler is saying:

        aggregate value used where an integer was expected
            fputc((unsigned char)*((&pData[itr])+ptr), pFile);

    and I don't quite understand why. What am I doing wrong? Thanks!

    Read the article

  • stdio data from write not making it into a file

    - by user1551209
    I'm having a problem using stdio commands to manipulate data in a file. In short, when I write data into the file, write returns an int indicating that it was successful, but when I read it back out I only get the old data. Here's a stripped-down version of the code:

        fd = open(filename, O_RDWR|O_APPEND);

        struct dE *cDE = malloc(sizeof(struct dE));

        // Read present data
        printf("\nreading values at %d\n", off);
        printf("SeekStatus <%d>\n", lseek(fd, off, SEEK_SET));
        printf("ReadStatus <%d>\n", read(fd, cDE, deSize));
        printf("current Key/Data <%d/%s>\n", cDE->key, cDE->data);

        printf("\nwriting new values\n");

        // Change the values locally
        cDE->key = // something new
        cDE->data = // something new

        // Write them back
        printf("SeekStatus <%d>\n", lseek(fd, off, SEEK_SET));
        printf("WriteStatus <%d>\n", write(fd, cDE, deSize));

        // Re-read to make sure that it got written back
        printf("\nre-reading values at %d\n", off);
        printf("SeekStatus <%d>\n", lseek(fd, off, SEEK_SET));
        printf("ReadStatus <%d>\n", read(fd, cDE, deSize));
        printf("current Key/Data <%d/%s>\n", cDE->key, cDE->data);

    Furthermore, here's the dE struct, in case you're wondering:

        struct dE {
            int key;
            char data[DataSize];
        };

    This prints:

        reading values at 1072
        SeekStatus <1072>
        ReadStatus <32>
        current Key/Data <27/old>

        writing new values
        SeekStatus <1072>
        WriteStatus <32>

        re-reading values at 1072
        SeekStatus <1072>
        ReadStatus <32>
        current Key/Data <27/old>

    Read the article

  • Where will the image be saved? [closed]

    - by Dummy Derp
        import java.awt.Dimension;
        import java.awt.Rectangle;
        import java.awt.Robot;
        import java.awt.Toolkit;
        import java.awt.image.BufferedImage;
        import javax.imageio.ImageIO;
        import java.io.File;

        ...
        public void captureScreen(String fileName) throws Exception {
            Dimension screenSize = Toolkit.getDefaultToolkit().getScreenSize();
            Rectangle screenRectangle = new Rectangle(screenSize);
            Robot robot = new Robot();
            BufferedImage image = robot.createScreenCapture(screenRectangle);
            ImageIO.write(image, "png", new File(fileName));
        }
        ...

    I was going over this code, which I found on the internet, and I understood everything except the part where the file is created. What format should the file name be in? Should it be "C:/myFolder/myImage.png" or just "myImage.png", and where will it be saved? Here is what the docs say:

        File

        public File(String pathname)

        Creates a new File instance by converting the given pathname string into an abstract pathname. If the given string is the empty string, then the result is the empty abstract pathname.

        Parameters:
            pathname - A pathname string
        Throws:
            NullPointerException - If the pathname argument is null
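
    For what it's worth, a relative name like "myImage.png" is resolved against the JVM's working directory (the "user.dir" system property), while a full path such as "C:/myFolder/myImage.png" is used as given. A small illustrative check, not part of the original code, that prints where each variant would land:

        import java.io.File;

        public class WhereDoesItGo {
            public static void main(String[] args) {
                File relative = new File("myImage.png");
                File absolute = new File("C:/myFolder/myImage.png");

                System.out.println("working dir: " + System.getProperty("user.dir"));
                System.out.println("relative  -> " + relative.getAbsolutePath());
                System.out.println("absolute  -> " + absolute.getAbsolutePath());
            }
        }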

    Read the article

  • Programmatically convert *.odt file to MS Word *.doc file using an OpenOffice.org basic macro

    - by Chen Levy
    I am trying to build a reStructuredText to MS Word document tool-chain, so I will be able to keep only the rst sources in version control. So far I have rst2odt.py to convert reStructuredText to OpenOffice.org Writer format. Next I want to use the most recent OpenOffice.org (currently 3.1), which does a pretty decent job of generating a Word 97/2000/XP document, so I wrote the macro:

        sub ConvertToWord(file as string)
            rem ----------------------------------------------------------------------
            rem define variables
            dim document   as object
            dim dispatcher as object
            rem ----------------------------------------------------------------------
            rem get access to the document
            document   = ThisComponent.CurrentController.Frame
            dispatcher = createUnoService("com.sun.star.frame.DispatchHelper")

            rem ----------------------------------------------------------------------
            dim odf(1) as new com.sun.star.beans.PropertyValue
            odf(0).Name = "URL"
            odf(0).Value = "file://" + file + ".odt"
            odf(1).Name = "FilterName"
            odf(1).Value = "MS Word 97"
            dispatcher.executeDispatch(document, ".uno:Open", "", 0, odf())

            rem ----------------------------------------------------------------------
            dim doc(1) as new com.sun.star.beans.PropertyValue
            doc(0).Name = "URL"
            doc(0).Value = "file://" + file + ".doc"
            doc(1).Name = "FilterName"
            doc(1).Value = "MS Word 97"
            dispatcher.executeDispatch(document, ".uno:SaveAs", "", 0, doc())
        end sub

    But when I execute it:

        soffice "macro:///Standard.Module1.ConvertToWord(/path/to/odt_file_wo_ext)"

    I get a "BASIC runtime error. Property or method not found." message on the line:

        document = ThisComponent.CurrentController.Frame

    And when I comment that line out, the above invocation completes without error but does nothing. I guess I need to somehow set the value of document to a newly created instance, but I don't know how to do it. Or am I going about it in a completely backward way?

    P.S. I will only consider JODConverter as a fallback, because I am trying to minimize my dependencies.

    Read the article

  • allow file download for all types of files

    - by Avinash
    Hi, I have allowed my users to upload any type of file. My problem is: how can I force the user to download any type of file, rather than view it? PDF, JPG and text files are directly viewable in the browser, but I want every type of file to be downloaded in order to be viewed. Running on PHP.

    Thanks, Avinash

    Read the article
