Search Results

Search found 72722 results on 2909 pages for 'file processing'.


  • How Would you Mimic an Arbitrary Directory Structure with Rails Routes?

    - by viatropos
    I want to map Google Docs' folder system to URLs in my application, and I'm wondering how I can tell the route "match an arbitrary set of nodes, where the last one is the file (or possibly a directory; I can check in the controller)". That way I could do things like www.mysite.com/documents/folder1/childfolderA/document or www.mysite.com/documents/root-level-doc. Can the routes.rb file do something like this?
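
    A minimal sketch of one way to do this (route and controller names here are illustrative, not from the post): Rails supports glob segments in routes.rb, so a single wildcard can swallow an arbitrary number of path nodes and hand them to the controller in params[:path].

        # config/routes.rb (Rails 3+ style)
        get 'documents/*path' => 'documents#show'

        # app/controllers/documents_controller.rb
        class DocumentsController < ApplicationController
          def show
            nodes = params[:path].split('/')  # e.g. ["folder1", "childfolderA", "document"]
            @name = nodes.pop                 # last node: the file, or possibly a folder
            @folders = nodes                  # everything above it
          end
        end

    With that route, /documents/folder1/childfolderA/document and /documents/root-level-doc both hit the same action, and the controller decides whether the final node is a file or a directory.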

    Read the article

  • vim - open file with complete path

    - by David Oneill
    If I have a file that contains the complete path of another file, is there a way to highlight that filename (using visual mode) and open it, preferably in a split? As I think about it, here is what I would like: if the file name contains a / character, assume it is a full path (i.e. the 'current directory' is root); otherwise, use the current folder (i.e. the default behavior). Is this possible?
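
    A minimal sketch using Vim's built-in "goto file" commands (see :help gf and :help CTRL-W_f); the single set line is optional and only illustrative.

        " gf      edit the file whose name is under the cursor
        " <C-w>f  the same, but opened in a horizontal split
        " In visual mode, gf uses the highlighted text as the file name, so an
        " absolute path (one starting with /) is opened as-is, while a bare name
        " is resolved against the current file's directory and the 'path' option.
        set path+=**    " optionally let bare names also be found in subdirectories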

    Read the article

  • How do I send this email in Python, opening files and stuff?

    - by alex
    This is how I send an email in Django:

        msg = EmailMessage(subject, body, from_email, [to_email])
        msg.content_subtype = "html"
        msg.send()

    But what if I want to open a text file and take into account all of its line breaks and tabs? I want to take the contents of the text file (with its \n line breaks) and email them as the text of the "body".
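
    A hedged sketch of one way to do it (the file name body.txt and the addresses are placeholders): reading the file with open().read() keeps every \n and \t intact, and leaving content_subtype at its default of "plain" means the breaks show up literally in the received mail (an HTML body would collapse whitespace unless wrapped in a pre tag).

        from django.core.mail import EmailMessage

        with open("body.txt", encoding="utf-8") as f:
            body = f.read()              # preserves all line breaks and tabs

        subject = "Report"
        from_email = "sender@example.com"
        to_email = "recipient@example.com"

        msg = EmailMessage(subject, body, from_email, [to_email])
        msg.send()                       # sent as text/plain, so \n stays \n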

    Read the article

  • How do I add a .pdb file associated with a .lib file?

    - by Anon911
    How do I add the .pdb file associated with a .lib file? I am importing a .lib file into a C++ project. When I build it, I get an error saying that vc90.pdb does not contain the debug information. I have the .pdb file associated with the library; how can I add it to my current project? Thanks

    Read the article

  • File filtering_Python 3.2 [closed]

    - by user71261
    I'm trying to write a short file-filtering script in Python that will find my desired string. I've got it worked out logically, but my Command Feed is sending me an error message for the print statement. This is how it looks as of now:

        filename = input('give file name: ')
        n = input('give desired string: ')
        f = open
        line = f.readline()
        while line:
            if n in line:
                print line
            line = f.readline()

    Error statement:

        Traceback (most recent call last):
          File "<string>", line 7, in <fragment>
        Syntax Error: print line: <string>, line 718

    I know this is a simple problem, but the answer is not obvious to me. Please help.
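
    A minimal Python 3 sketch of the same loop, assuming the goal is simply to echo matching lines: in Python 3, print is a function (print line is the Python 2 form, which is what the syntax error is complaining about), and open has to actually be called with the file name.

        filename = input('give file name: ')
        n = input('give desired string: ')

        with open(filename) as f:          # the original assigned `open` without calling it
            for line in f:
                if n in line:
                    print(line, end='')    # print() is a function in Python 3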

    Read the article

  • How can I modify a remote machine's input.properties file from a web page in C#.NET?

    - by picnic4u
    I have a build script on a remote machine, but I want to start the build from my local machine. For this I need to update the input.properties file on the remote machine and then run the batch file to start the build process. I have created a web page for this, so how can I modify the remote input.properties file and run the batch file in C#? Please give me some suggestions. Thanks in advance...
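
    A rough sketch of one approach, not a definitive answer: it assumes the remote build folder is exposed as a UNC share (\\buildserver\build is a made-up name) that the web application's identity can write to, and it uses PsExec, one common option, to launch the batch file remotely.

        using System.Diagnostics;
        using System.IO;

        // Rewrite the properties file with the values posted from the web page.
        const string propsPath = @"\\buildserver\build\input.properties";
        File.WriteAllText(propsPath, "version=1.2.3\ntarget=release\n");

        // Kick off the batch file on the remote machine via PsExec (must be installed
        // and allowed by policy); WMI or a small remote service are alternatives.
        var psi = new ProcessStartInfo("psexec.exe",
            @"\\buildserver cmd /c C:\build\run.bat") { UseShellExecute = false };
        Process.Start(psi);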

    Read the article

  • [php] How to read only the last 5 lines of a txt file

    - by safaali
    Hello, I have a file named "file.txt" that gets updated by appending lines to it. I am reading it with this code:

        $fp = fopen("file.txt", "r");
        $data = "";
        while (!feof($fp)) {
            $data .= fgets($fp, 4096);
        }
        echo $data;

    and a huge number of lines appears. I just want to echo the last 5 lines of the file. How can I do that? Thanks in advance.
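
    A minimal sketch of the simplest approach, assuming the file is small enough to load whole: file() reads it into an array of lines and array_slice() with a negative offset keeps only the last five. A very large, constantly growing log would be better read backwards in chunks instead.

        $lines = file("file.txt", FILE_IGNORE_NEW_LINES);
        $last5 = array_slice($lines, -5);
        echo implode("\n", $last5), "\n";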

    Read the article

  • Dreamweaver deleted my file contents

    - by user2073081
    I was developing a website using Dreamweaver and was editing one of my PHP files when suddenly the electricity went out and my computer turned off. When I turned my PC on again and opened the file I had been editing, all the contents were gone! When I look at the PHP file's size it is 10 KB, so it is not empty, but when I open it in Notepad++ it shows me a long string of nulls. Is there a way to get my file contents back? Please help, because I spent almost a week coding it :(

    Read the article

  • Apache FileUtils listFiles

    - by Marquinio
    Hey everyone, I'm trying to get a list of directories. I'm using FileUtils.listFiles(). I want to do something like this: listFiles(File, IOFileFilter, false). My real question is how to implement accept() from the IOFileFilter so I can check whether the current File is a directory. Thank you in advance.
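
    A hedged sketch using Apache Commons IO (assumed to be on the classpath): accept() only has to return true for the entries you want, so File.isDirectory() is the whole test, and DirectoryFileFilter.DIRECTORY is the ready-made equivalent. Note that to get directories back in the result you need listFilesAndDirs() (Commons IO 2.2+), since listFiles() only ever returns files.

        import java.io.File;
        import java.util.Collection;
        import org.apache.commons.io.FileUtils;
        import org.apache.commons.io.filefilter.IOFileFilter;
        import org.apache.commons.io.filefilter.TrueFileFilter;

        public class ListDirs {
            public static void main(String[] args) {
                IOFileFilter dirsOnly = new IOFileFilter() {
                    @Override public boolean accept(File file) {
                        return file.isDirectory();
                    }
                    @Override public boolean accept(File dir, String name) {
                        return new File(dir, name).isDirectory();
                    }
                };
                // The file filter rejects plain files (they are never directories),
                // and TrueFileFilter.INSTANCE lets the walk descend into every folder,
                // so the collection ends up holding just the directories.
                Collection<File> dirs = FileUtils.listFilesAndDirs(
                        new File("."), dirsOnly, TrueFileFilter.INSTANCE);
                dirs.forEach(System.out::println);
            }
        }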

    Read the article

  • PHP file write permission

    - by user125862
    I have two files, one PHP file and one log file. The permissions have been changed to the following:

        -rwxr-x--- 1 www-data www-data 294 2012-06-25 10:17 function.php
        -rwxr-x--- 1 www-data www-data   0 2012-06-25 09:53 log.txt

    The permission is set to 750. When I call function.php, I receive the following error message: fopen(log.txt): failed to open stream: Permission denied. This is the line: $fp = fopen("log.txt","a"); I am very confused. I have PHP and the file to be written both under www-data now, so why would there be any permission problem? Please help.
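
    A small diagnostic sketch, not a definitive answer: two common culprits are that a relative "log.txt" is resolved against the web server's current working directory (often not the script's folder), and that PHP may not actually run as www-data, in which case mode 750 leaves no write permission for "other". Using an absolute path and dumping what PHP sees usually narrows it down.

        $path = __DIR__ . '/log.txt';
        var_dump(getcwd(), is_writable($path), is_writable(__DIR__));

        $fp = fopen($path, 'a');        // absolute path removes the cwd guesswork
        if ($fp === false) {
            die('Still cannot append to ' . $path);
        }
        fwrite($fp, "test entry\n");
        fclose($fp);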

    Read the article

  • Bigcommerce Templates file missing

    - by ime
    I am developing a template for a Bigcommerce site, where I want to show the category list and some hard-coded links in place of the page menu. What I do now is just place %%Panel.SideCategoryList%% in the upper navigation area, which shows the category list correctly. The problem is that I can't find the %%SNIPPET_SideCategoryList%% file. (A file with this name exists in the Snippets folder, but it doesn't seem to be the one being used; even if I remove all of its contents, nothing changes.)

    Read the article

  • toURI method of File transforms the space character into %20

    - by piero
    The toURI method of File transforms the space character into %20. On Windows XP with Java 6:

        public static void main(String[] args) {
            File f = new File("C:\\My dir\\test.txt");
            URI uri = f.toURI();
            System.out.println(f.getAbsolutePath());
            System.out.println(uri);
        }

    Output:

        C:\My dir\test.txt
        file:/C:/My%20dir/test.txt
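
    A short sketch, in case the goal is to get the original path back: %20 is just URI percent-encoding for a space, so the round trip through new File(URI) or URI.getPath() restores the unencoded form.

        import java.io.File;
        import java.net.URI;

        public class UriRoundTrip {
            public static void main(String[] args) {
                File f = new File("C:\\My dir\\test.txt");
                URI uri = f.toURI();                 // file:/C:/My%20dir/test.txt
                System.out.println(uri.getPath());   // /C:/My dir/test.txt  (decoded)
                System.out.println(new File(uri));   // C:\My dir\test.txt   (on Windows)
            }
        }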

    Read the article

  • Parallelism in .NET – Part 3, Imperative Data Parallelism: Early Termination

    - by Reed
    Although simple data parallelism allows us to easily parallelize many of our iteration statements, there are cases that it does not handle well. In my previous discussion, I focused on data parallelism with no shared state, and where every element is being processed exactly the same. Unfortunately, there are many common cases where this does not happen. If we are dealing with a loop that requires early termination, extra care is required when parallelizing.

    Often, while processing in a loop, once a certain condition is met, it is no longer necessary to continue processing. This may be a matter of finding a specific element within the collection, or reaching some error case. The important distinction here is that it is often impossible to know, until runtime, what set of elements needs to be processed.

    In my initial discussion of data parallelism, I mentioned that this technique is a candidate when you can decompose the problem based on the data involved, and you wish to apply a single operation concurrently on all of the elements of a collection. This covers many of the potential cases, but sometimes, after processing some of the elements, we need to stop processing.

    As an example, let's go back to our previous Parallel.ForEach example with contacting a customer. However, this time, we'll change the requirements slightly. In this case, we'll add an extra condition – if the store is unable to email the customer, we will exit gracefully. The thinking here, of course, is that if the store is currently unable to email, the next time this operation runs, it will handle the same situation, so we can just skip our processing entirely. The original, serial case, with this extra condition, might look something like the following:

        foreach (var customer in customers)
        {
            // Run some process that takes some time...
            DateTime lastContact = theStore.GetLastContact(customer);
            TimeSpan timeSinceContact = DateTime.Now - lastContact;

            // If it's been more than two weeks, send an email, and update...
            if (timeSinceContact.Days > 14)
            {
                // Exit gracefully if we fail to email, since this
                // entire process can be repeated later without issue.
                if (theStore.EmailCustomer(customer) == false)
                    break;
                customer.LastEmailContact = DateTime.Now;
            }
        }

    Here, we're processing our loop, but at any point, if we fail to send our email successfully, we just abandon this process, and assume that it will get handled correctly the next time our routine is run. If we try to parallelize this using Parallel.ForEach, as we did previously, we'll run into an error almost immediately: the break statement we're using is only valid when enclosed within an iteration statement, such as foreach. When we switch to Parallel.ForEach, we're no longer within an iteration statement – we're a delegate running in a method. This needs to be handled slightly differently when parallelized.
    Instead of using the break statement, we need to utilize a new class in the Task Parallel Library: ParallelLoopState. The ParallelLoopState class is intended to allow concurrently running loop bodies a way to interact with each other, and provides us with a way to break out of a loop. In order to use this, we will use a different overload of Parallel.ForEach which takes an IEnumerable<T> and an Action<T, ParallelLoopState> instead of an Action<T>. Using this, we can parallelize the above operation by doing:

        Parallel.ForEach(customers, (customer, parallelLoopState) =>
        {
            // Run some process that takes some time...
            DateTime lastContact = theStore.GetLastContact(customer);
            TimeSpan timeSinceContact = DateTime.Now - lastContact;

            // If it's been more than two weeks, send an email, and update...
            if (timeSinceContact.Days > 14)
            {
                // Exit gracefully if we fail to email, since this
                // entire process can be repeated later without issue.
                if (theStore.EmailCustomer(customer) == false)
                    parallelLoopState.Break();
                else
                    customer.LastEmailContact = DateTime.Now;
            }
        });

    There are a couple of important points here. First, we didn't actually instantiate the ParallelLoopState instance. It was provided directly to us via the Parallel class. All we needed to do was change our lambda expression to reflect that we want to use the loop state, and the Parallel class creates an instance for our use. We also needed to change our logic slightly when we call Break(). Since Break() doesn't stop the program flow within our block, we needed to add an else case to only set the property in customer when we succeeded. This same technique can be used to break out of a Parallel.For loop.

    That being said, there is a huge difference between using ParallelLoopState to cause early termination and using break in a standard iteration statement. When dealing with a loop serially, break will immediately terminate the processing within the closest enclosing loop statement. Calling ParallelLoopState.Break(), however, has a very different behavior.

    The issue is that, now, we're no longer processing one element at a time. If we break in one of our threads, there are other threads that will likely still be executing. This leads to an important observation about termination of parallel code:

    Early termination in parallel routines is not immediate. Code will continue to run after you request a termination.

    This may seem problematic at first, but it is something you just need to keep in mind while designing your routine. ParallelLoopState.Break() should be thought of as a request. We are telling the runtime that no elements that were in the collection past the element we're currently processing need to be processed, and leaving it up to the runtime to decide how to handle this as gracefully as possible. Although this may seem problematic at first, it is a good thing. If the runtime tried to immediately stop processing, many of our elements would be partially processed. It would be like putting a return statement in a random location throughout our loop body – which could have horrific consequences to our code's maintainability.

    In order to understand and effectively write parallel routines, we, as developers, need a subtle, but profound shift in our thinking. We can no longer think in terms of sequential processes, but rather need to think in terms of requests to the system that may be handled differently than we'd first expect.
    This is more natural to developers who have dealt with asynchronous models previously, but it is an important distinction when moving to concurrent programming models. As an example, I'll discuss the Break() method. ParallelLoopState.Break() functions in a way that may be unexpected at first. When you call Break() from a loop body, the runtime will continue to process all elements of the collection that were found prior to the element that was being processed when the Break() method was called. This is done to keep the behavior of the Break() method as close to the behavior of the break statement as possible. We can see the behavior in this simple code:

        var collection = Enumerable.Range(0, 20);
        var pResult = Parallel.ForEach(collection, (element, state) =>
        {
            if (element > 10)
            {
                Console.WriteLine("Breaking on {0}", element);
                state.Break();
            }
            Console.WriteLine(element);
        });

    If we run this, we get a result that may seem unexpected at first:

        0
        2
        1
        5
        6
        3
        4
        10
        Breaking on 11
        11
        Breaking on 12
        12
        9
        Breaking on 13
        13
        7
        8
        Breaking on 15
        15

    What is occurring here is that we loop until we find the first element where the element is greater than 10. In this case, this was found, the first time, when one of our threads reached element 11. It requested that the loop stop by calling Break() at this point. However, the loop continued processing until all of the elements less than 11 were completed, then terminated. This means that it will guarantee that elements 9, 7, and 8 are completed before it stops processing. You can see our other threads that were running each tried to break as well, but since Break() was called on the element with a value of 11, it decides which elements (0-10) must be processed.

    If this behavior is not desirable, there is another option. Instead of calling ParallelLoopState.Break(), you can call ParallelLoopState.Stop(). The Stop() method requests that the runtime terminate as soon as possible, without guaranteeing that any other elements are processed. Stop() will not stop the processing within an element, so elements already being processed will continue to be processed. It will prevent new elements, even ones found earlier in the collection, from being processed. Also, when Stop() is called, the ParallelLoopState's IsStopped property will return true. This lets longer running processes poll for this value, and return after performing any necessary cleanup.

    The basic rule of thumb for choosing between Break() and Stop() is the following:

    Use ParallelLoopState.Stop() when possible, since it terminates more quickly. This is particularly useful in situations where you are searching for an element or a condition in the collection. Once you've found it, you do not need to do any other processing, so Stop() is more appropriate.

    Use ParallelLoopState.Break() if you need to more closely match the behavior of the C# break statement.

    Both methods behave differently than our C# break statement. Unfortunately, when parallelizing a routine, more thought and care needs to be put into every aspect of your routine than you may otherwise expect. This is due to my second observation:

    Parallelizing a routine will almost always change its behavior.

    This sounds crazy at first, but it's a concept that's so simple it's easy to forget. We're purposely telling the system to process more than one thing at the same time, which means that the sequence in which things get processed is no longer deterministic.
    It is easy to change the behavior of your routine in very subtle ways by introducing parallelism. Often, the changes are not avoidable, even if they don't have any adverse side effects. This leads to my final observation for this post:

    Parallelization is something that should be handled with care and forethought, added by design, and not just introduced casually.

    Read the article

  • Arduino IDE not connecting to microcontroller

    - by JDD
    I get this error when trying to connect to an Arduino through a USB serial connection. I'm using the Arduino IDE 1.0.1 and the 64bit version of Ubuntu 12.04. This has been a reoccurring problem since 10.04 and happens to a few other programs that use a serial connection too. I have no problem getting serial data from the Arduino using Python or Screen. The Arduino IDE seems to work just fine otherwise. processing.app.SerialException: Error opening serial port '/dev/ttyACM0'. at processing.app.Serial.<init>(Serial.java:178) at processing.app.Serial.<init>(Serial.java:92) at processing.app.SerialMonitor.openSerialPort(SerialMonitor.java:207) at processing.app.Editor.handleSerial(Editor.java:2447) at processing.app.EditorToolbar.mousePressed(EditorToolbar.java:353) at java.awt.Component.processMouseEvent(Component.java:6386) at javax.swing.JComponent.processMouseEvent(JComponent.java:3268) at java.awt.Component.processEvent(Component.java:6154) at java.awt.Container.processEvent(Container.java:2045) at java.awt.Component.dispatchEventImpl(Component.java:4750) at java.awt.Container.dispatchEventImpl(Container.java:2103) at java.awt.Component.dispatchEvent(Component.java:4576) at java.awt.LightweightDispatcher.retargetMouseEvent(Container.java:4633) at java.awt.LightweightDispatcher.processMouseEvent(Container.java:4294) at java.awt.LightweightDispatcher.dispatchEvent(Container.java:4227) at java.awt.Container.dispatchEventImpl(Container.java:2089) at java.awt.Window.dispatchEventImpl(Window.java:2518) at java.awt.Component.dispatchEvent(Component.java:4576) at java.awt.EventQueue.dispatchEventImpl(EventQueue.java:672) at java.awt.EventQueue.access$400(EventQueue.java:96) at java.awt.EventQueue$2.run(EventQueue.java:631) at java.awt.EventQueue$2.run(EventQueue.java:629) at java.security.AccessController.doPrivileged(Native Method) at java.security.AccessControlContext$1.doIntersectionPrivilege(AccessControlContext.java:105) at java.security.AccessControlContext$1.doIntersectionPrivilege(AccessControlContext.java:116) at java.awt.EventQueue$3.run(EventQueue.java:645) at java.awt.EventQueue$3.run(EventQueue.java:643) at java.security.AccessController.doPrivileged(Native Method) at java.security.AccessControlContext$1.doIntersectionPrivilege(AccessControlContext.java:105) at java.awt.EventQueue.dispatchEvent(EventQueue.java:642) at java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:275) at java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:200) at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:190) at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:185) at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:177) at java.awt.EventDispatchThread.run(EventDispatchThread.java:138) Caused by: gnu.io.UnsupportedCommOperationException: Invalid Parameter at gnu.io.RXTXPort.setSerialPortParams(RXTXPort.java:171) at processing.app.Serial.<init>(Serial.java:163) ... 35 more

    Read the article

  • Cannot run update due to a dpkg error with burg-theme-minimal-sir

    - by boywithaxe
    I cannot run an update or indeed run $: apt-get remove due to a dpkg error with a package that's a part of super-boot-manager. Running an update returns: dpkg: error processing burg-theme-minimal-sir (--configure): subprocess installed post-installation script returned error exit status 1 I tried removing this package alone, with the same error, also trying to remove super-boot-manager returns: (Reading database ... 225474 files and directories currently installed.) Removing burg-theme-minimal-sir ... Generating burg.cfg ... /usr/sbin/burg-probe: error: cannot stat `/boot/burg/locale'. No path or device is specified. Try `/usr/sbin/burg-probe --help' for more information. dpkg: error processing burg-theme-minimal-sir (--remove): subprocess installed post-removal script returned error exit status 1 No apport report written because MaxReports is reached already Removing super-boot-manager ... Processing triggers for bamfdaemon ... Rebuilding /usr/share/applications/bamf.index... Processing triggers for desktop-file-utils ... Processing triggers for gnome-menus ... Processing triggers for hicolor-icon-theme ... Errors were encountered while processing: burg-theme-minimal-sir E: Sub-process /usr/bin/dpkg returned an error code (1) I'm sort of stuck now and Google has failed me. Has anyone encountered this problem before? Or does anyone know a way for fixing this?
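
    A hedged workaround sketch (not from the original thread, so treat it as an assumption): the post-removal script dies because /boot/burg/locale is missing, so recreating the path it expects, reconfiguring, and then purging the packages is one plausible way to let dpkg finish.

        sudo mkdir -p /boot/burg/locale
        sudo dpkg --configure -a
        sudo apt-get remove --purge burg-theme-minimal-sir super-boot-manager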

    Read the article

  • polipo E: Sub-process /usr/bin/dpkg returned an error code (1)

    - by ICXC
    @me:/home$ sudo apt-get install polipo Reading package lists... Done Building dependency tree Reading state information... Done The following NEW packages will be installed: polipo 0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded. Need to get 198 kB of archives. After this operation, 799 kB of additional disk space will be used. Get:1 http://sy.archive.ubuntu.com/ubuntu/ precise/universe polipo amd64 1.0.4.1-1.1 [198 kB] Fetched 198 kB in 2s (97.5 kB/s) Selecting previously unselected package polipo. (Reading database ... 169595 files and directories currently installed.) Unpacking polipo (from .../polipo_1.0.4.1-1.1_amd64.deb) ... Processing triggers for doc-base ... Processing 1 added doc-base file... Processing triggers for man-db ... Processing triggers for install-info ... Processing triggers for ureadahead ... Setting up polipo (1.0.4.1-1.1) ... Starting polipo: Couldn't open config file /etc/polipo/config: 2. invoke-rc.d: initscript polipo, action "start" failed. ****dpkg: error processing polipo (--configure): subprocess installed post-installation script returned error exit status 1 Errors were encountered while processing: polipo E: Sub-process /usr/bin/dpkg returned an error code (1)****
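
    One commonly suggested workaround, offered here as a hedged sketch rather than a confirmed fix: the failure is the init script not finding /etc/polipo/config (error 2 is "No such file or directory"), so giving it an empty config file and re-running the configuration step usually lets the package finish installing.

        sudo touch /etc/polipo/config
        sudo dpkg --configure -a
        sudo service polipo restart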

    Read the article

  • How should I compress a file with multiple bytes that are the same with Huffman coding?

    - by Omega
    On my great quest for compressing/decompressing files with a Java implementation of Huffman coding (http://en.wikipedia.org/wiki/Huffman_coding) for a school assignment, I am now at the point of building a list of prefix codes. Such codes are used when decompressing a file. Basically, the code is made of zeroes and ones, that are used to follow a path in a Huffman tree (left or right) for, ultimately, finding a byte. In this Wikipedia image, to reach the character m the prefix code would be 0111 The idea is that when you compress the file, you will basically convert all the bytes of the file into prefix codes instead (they tend to be smaller than 8 bits, so there's some gain). So every time the character m appears in a file (which in binary is actually 1101101), it will be replaced by 0111 (if we used the tree above). Therefore, 1101101110110111011011101101 becomes 0111011101110111 in the compressed file. I'm okay with that. But what if the following happens: In the file to be compressed there exists only one unique byte, say 1101101. There are 1000 of such byte. Technically, the prefix code of such byte would be... none, because there is no path to follow, right? I mean, there is only one unique byte anyway, so the tree has just one node. Therefore, if the prefix code is none, I would not be able to write the prefix code in the compressed file, because, well, there is nothing to write. Which brings this problem: how would I compress/decompress such file if it is impossible to write a prefix code when compressing? (using Huffman coding, due to the school assignment's rules) This tutorial seems to explain a bit better about prefix codes: http://www.cprogramming.com/tutorial/computersciencetheory/huffman.html but doesn't seem to address this issue either.
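
    A hedged sketch of one common workaround (the Node/buildCodes names are illustrative, not from the assignment): when the frequency table contains a single distinct byte, the tree is a lone leaf and the normal traversal produces an empty code, so that one symbol can simply be forced to the one-bit code "0". The decoder then only needs the symbol and the count from the header to reproduce the file.

        import java.util.HashMap;
        import java.util.Map;

        class Node {
            final Byte value;           // null for internal nodes
            final Node left, right;
            Node(Byte value, Node left, Node right) { this.value = value; this.left = left; this.right = right; }
            boolean isLeaf() { return left == null && right == null; }
        }

        public class SingleSymbolHuffman {
            static void buildCodes(Node n, String prefix, Map<Byte, String> codes) {
                if (n.isLeaf()) {
                    // A degenerate (single-leaf) tree would otherwise get "", so pad it to "0".
                    codes.put(n.value, prefix.isEmpty() ? "0" : prefix);
                    return;
                }
                buildCodes(n.left, prefix + "0", codes);
                buildCodes(n.right, prefix + "1", codes);
            }

            public static void main(String[] args) {
                Node onlyLeaf = new Node((byte) 0b1101101, null, null);   // the file's only byte, 'm'
                Map<Byte, String> codes = new HashMap<>();
                buildCodes(onlyLeaf, "", codes);
                System.out.println(codes);   // {109=0}
            }
        }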

    Read the article

  • How can I malloc an array of struct to do the following using file operation?

    - by RHN
    How can I malloc an array of structs to do the following using file operations? The file is a .txt; the input in the file is like: 10 22 3.3 33 4.4. I want to read the first line from the file, and then I want to malloc an array of input structures equal to the number of lines to be read in from the file. Then I want to read the data from the file into the malloc'd array of structures. Later on I want to store the size of the array in the variable size and return the array. After this I want to create another function that prints out the data in the input variable in the same form as the input file, and a function called clean_data that will free the malloc'd memory at the end. I have tried something like:

        struct input {
            int a;
            float b, c;
        }

        struct input* readData(char *filename, int *size);

        int main()
        {
            return 0;
        }

        struct input* readData(char *filename, int *size)
        {
            char filename[] = "input.txt";
            FILE *fp = fopen(filename, "r");
            int num;
            while (!feof(fp)) {
                fscanf(fp, "%f", &num);
                struct input *arr = (struct input*)malloc(sizeof(struct input));
            }
        }
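
    A hedged sketch of the intended flow (the row layout is an assumption, since the sample in the post is ambiguous: the first number is taken as the row count and each row as one int and two floats, matching struct input). It reads the count, mallocs that many structs, fills them, reports the size through the out-parameter, and frees the block at the end.

        #include <stdio.h>
        #include <stdlib.h>

        struct input {
            int a;
            float b, c;
        };

        struct input *readData(const char *filename, int *size)
        {
            FILE *fp = fopen(filename, "r");
            if (fp == NULL)
                return NULL;

            int count = 0;
            if (fscanf(fp, "%d", &count) != 1 || count <= 0) {   /* first line: number of rows */
                fclose(fp);
                return NULL;
            }

            struct input *arr = malloc(count * sizeof *arr);     /* one malloc for the whole array */
            for (int i = 0; i < count && arr != NULL; i++)
                fscanf(fp, "%d %f %f", &arr[i].a, &arr[i].b, &arr[i].c);

            fclose(fp);
            *size = count;
            return arr;
        }

        void clean_data(struct input *arr)
        {
            free(arr);
        }

        int main(void)
        {
            int size = 0;
            struct input *arr = readData("input.txt", &size);
            for (int i = 0; arr != NULL && i < size; i++)
                printf("%d %.1f %.1f\n", arr[i].a, arr[i].b, arr[i].c);
            clean_data(arr);
            return 0;
        }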

    Read the article

  • I get this error after upgrade. Please help

    - by user203404
    dpkg: dependency problems prevent configuration of initramfs-tools: initramfs-tools depends on initramfs-tools-bin (<< 0.99ubuntu13.2.1~); however: Version of initramfs-tools-bin on system is 0.103ubuntu0.2. klibc-utils (2.0.1-1ubuntu2) breaks initramfs-tools (<< 0.103) and is installed. Version of initramfs-tools to be configured is 0.99ubuntu13.2. dpkg: error processing initramfs-tools (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of plymouth: plymouth depends on initramfs-tools; however: Package initramfs-tools is not configured yet. dpkg: error processing plymouth (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of mountall: mountall depends on plymouth; however: Package plymouth is not configured yet. dpkg: error processing mountall (--configure): dependency problems - leaving unconfigured No apport report written because MaxReports is reached already No apport report written because MaxReports is reached already No apport report written because MaxReports is reached already dpkg: dependency problems prevent configuration of initscripts: initscripts depends on mountall (>= 2.28); however: Package mountall is not configured yet. dpkg: error processing initscripts (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of upstart: upstart depends on initscripts; however: Package initscripts is not configured yet. upstart depends on mountall; however: Package mountall is not configured yet. dpkg: error processing upstart (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of passwd: passwd depends on upstart-job; however: Package upstart-job is not installed. Package upstart which provides upstart-job is not configured yet. dpkg: error processing passwd (--configure): dependency problems - leaving unconfigured No apport report written because MaxReports is reached already No apport report written because MaxReports is reached already No apport report written because MaxReports is reached already Errors were encountered while processing: initramfs-tools plymouth mountall initscripts upstart passwd E: Sub-process /usr/bin/dpkg returned an error code (1)

    Read the article

  • Windows Backup fails with 0x80070002: "The system cannot find the file specified"

    - by James Johnston
    Windows 7 Backup is failing. When backing up even a single insignificant directory (e.g. I chose only the empty "Contacts" directory, leaving all other directories unchecked), I get this error within a few seconds and the backup fails. If I uncheck all files/directories, and just do the system image - then the system image is backed up OK without issue. Backup destination is an external USB hard drive. Steps to reproduce and subsequent failure: Set up backup to go to external hard drive. Don't back up system image. Back up "Contacts" directory only for my profile. Start backup. Immediately view the status of the backup, it stays on "Creating a shadow copy..." for a few seconds, and then the backup fails. Click Options button, and it says "Check your backup / The system cannot find the file specified." - with options to "Try to run backup again" or "Change backup settings". If I click "Show Details", then it says: Backup time: 4/12/2012 04:38 Backup location: My Book (D:) Error code: 0x80070002 An examination of the Event Log shows nothing useful beyond the following: Log Name: Application Source: Windows Backup Date: 4/12/2012 04:38:44 Event ID: 4104 Task Category: None Level: Error Keywords: Classic User: N/A Computer: JTJLaptop Description: The backup was not successful. The error is: The system cannot find the file specified. (0x80070002). Event Xml: <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event"> <System> <Provider Name="Windows Backup" /> <EventID Qualifiers="0">4104</EventID> <Level>2</Level> <Task>0</Task> <Keywords>0x80000000000000</Keywords> <TimeCreated SystemTime="2012-04-12T04:38:44.000000000Z" /> <EventRecordID>23979</EventRecordID> <Channel>Application</Channel> <Computer>JTJLaptop</Computer> <Security /> </System> <EventData> <Data>The system cannot find the file specified. (0x80070002)</Data> <Binary>02000780E30500003F0900005B090000420ED1665C2BEE174B64529CB14610EA71000000</Binary> </EventData> </Event> What I have tried: ChkDsk on both C: (main drive) and D: (backup drive) doesn't find any errors. Running SFC /SCANNOW to run system file checker Checked the list of profiles at HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList and ensured that each profile directory exists. I'm stumped; WHAT file can't be found and why is my backup failing? This is on a Lenovo T420 laptop.

    Read the article

  • Update MySQL table using data from a text file through Java

    - by Karthi Karthi
    I have a text file with four lines, each line contains comma separated values like below file My file is: Raj,[email protected],123455 kumar,[email protected],23453 shilpa,[email protected],765468 suraj,[email protected],876567 and I have a MySQL table which contains four fields firstname lastname email phno ---------- ---------- --------- -------- Raj babu [email protected] 2343245 kumar selva [email protected] 23453 shilpa murali [email protected] 765468 suraj abd [email protected] 876567 Now I want to update my table using the data in the above text file through Java. I have tried using bufferedReader to read from the file and used split method using comma as delimiter and stored it in array. But it is not working. Any help appreciated. This is what I have tried so far void readingFile() { try { File f1 = new File("TestFile.txt"); FileReader fr = new FileReader(f1); BufferedReader br = new BufferedReader(fr); String strln = null; strln = br.readLine(); while((strln=br.readLine())!=null) { // System.out.println(strln); arr = strln.split(","); strfirstname = arr[0]; strlastname = arr[1]; stremail = arr[2]; strphno = arr[3]; System.out.println(strfirstname + " " + strlastname + " " + stremail +" "+ strphno); } // for(String i : arr) // { // } br.close(); fr.close(); } catch(IOException e) { System.out.println("Cannot read from File." + e); } try { st = conn.createStatement(); String query = "update sampledb set email = stremail,phno =strphno where firstname = strfirstname "; st.executeUpdate(query); st.close(); System.out.println("sampledb Table successfully updated."); } catch(Exception e3) { System.out.println("Unable to Update sampledb table. " + e3); } } and the output i got is: Ganesh Pandiyan [email protected] 9591982389 Dass Jeyan [email protected] 9689523645 Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 1 Gowtham Selvan [email protected] 9894189423 at TemporaryPackages.FileReadAndUpdateTable.readingFile(FileReadAndUpdateTable.java:35) at TemporaryPackages.FileReadAndUpdateTable.main(FileReadAndUpdateTable.java:72) Java Result: 1 @varadaraj: This is the code of yours.... String stremail,strphno,strfirstname,strlastname; // String[] arr; Connection conn; Statement st; void readingFile() { try { BufferedReader bReader= new BufferedReader(new FileReader("TestFile.txt")); String fileValues; while ((fileValues = bReader.readLine()) != null) { String[] values=fileValues .split(","); strfirstname = values[0]; // strlastname = values[1]; stremail = values[1]; strphno = values[2]; System.out.println(strfirstname + " " + strlastname + " " + stremail +" "+ strphno); } bReader.close(); } catch (IOException e) { System.out.println("File Read Error"); } // for(String i : arr) // { // } try { st = conn.createStatement(); String query = "update sampledb set email = stremail,phno =strphno where firstname = strfirstname "; st.executeUpdate(query); st.close(); System.out.println("sampledb Table successfully updated."); } catch(Exception e3) { System.out.println("Unable to Update sampledb table. " + e3); } }
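
    A hedged sketch of the update step (table and column names are from the question; the JDBC URL and credentials are placeholders): each comma-separated line is split and its values are bound with a PreparedStatement. Building the SQL as a plain string with the Java variable names inside it, as in the post, makes MySQL look for columns called stremail and strfirstname, and the ArrayIndexOutOfBoundsException in the output is most likely a blank or short line, which the length check below skips.

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;

        public class UpdateFromFile {
            public static void main(String[] args) throws Exception {
                try (Connection conn = DriverManager.getConnection(
                         "jdbc:mysql://localhost/testdb", "user", "password");
                     PreparedStatement ps = conn.prepareStatement(
                         "UPDATE sampledb SET email = ?, phno = ? WHERE firstname = ?");
                     BufferedReader br = new BufferedReader(new FileReader("TestFile.txt"))) {

                    String line;
                    while ((line = br.readLine()) != null) {
                        String[] parts = line.split(",");
                        if (parts.length < 3) continue;      // skip blank or short lines
                        ps.setString(1, parts[1].trim());    // email
                        ps.setString(2, parts[2].trim());    // phno
                        ps.setString(3, parts[0].trim());    // firstname
                        ps.executeUpdate();
                    }
                }
            }
        }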

    Read the article

  • Need suggestions on how to extract data from .docx/.doc file then into mssql

    - by DarkPP
    I'm suppose to develop an automated application for my project, it will load past-year examination/exercises paper (word file), detect the sections accordingly, extract the questions and images in that section, and then store the questions and images into the database. (Preview of the question paper is at the bottom of this post) So I need some suggestions on how to extract data from a word file, then inserting them into a database. Currently I have a few methods to do so, however I have no idea how I could implement them when the file contains textboxes with background image. The question has to link with the image. Method One (Make use of ms office interop) Load the word file - Extract image, save into a folder - Extract text, save as .txt - Extract text from .txt then store in db Question: How i detect the section and question. How I link the image to the question. Extract text from word file (Working): private object missing = Type.Missing; private object sFilename = @"C:\temp\questionpaper.docx"; private object sFilename2 = @"C:\temp\temp.txt"; private object readOnly = true; object fileFormat = Word.WdSaveFormat.wdFormatText; private void button1_Click(object sender, EventArgs e) { Word.Application wWordApp = new Word.Application(); wWordApp.DisplayAlerts = Word.WdAlertLevel.wdAlertsNone; Word.Document dFile = wWordApp.Documents.Open(ref sFilename, ref missing, ref readOnly, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing); dFile.SaveAs(ref sFilename2, ref fileFormat, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing,ref missing, ref missing,ref missing,ref missing,ref missing,ref missing, ref missing,ref missing); dFile.Close(ref missing, ref missing, ref missing); } Extract image from word file (Doesn't work on image inside textbox): private Word.Application wWordApp; private int m_i; private object missing = Type.Missing; private object filename = @"C:\temp\questionpaper.docx"; private object readOnly = true; private void CopyFromClipbordInlineShape(String imageIndex) { Word.InlineShape inlineShape = wWordApp.ActiveDocument.InlineShapes[m_i]; inlineShape.Select(); wWordApp.Selection.Copy(); Computer computer = new Computer(); if (computer.Clipboard.GetDataObject() != null) { System.Windows.Forms.IDataObject data = computer.Clipboard.GetDataObject(); if (data.GetDataPresent(System.Windows.Forms.DataFormats.Bitmap)) { Image image = (Image)data.GetData(System.Windows.Forms.DataFormats.Bitmap, true); image.Save("C:\\temp\\DoCremoveImage" + imageIndex + ".png", System.Drawing.Imaging.ImageFormat.Png); } } } private void button1_Click(object sender, EventArgs e) { wWordApp = new Word.Application(); wWordApp.Documents.Open(ref filename, ref missing, ref readOnly, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing); try { for (int i = 1; i <= wWordApp.ActiveDocument.InlineShapes.Count; i++) { m_i = i; CopyFromClipbordInlineShape(Convert.ToString(i)); } } finally { object save = false; wWordApp.Quit(ref save, ref missing, ref missing); wWordApp = null; } } Method Two Unzip the word file (.docx) - Copy the media(image) folder, store somewhere - Parse the XML file - Store the text in db Any suggestion/help would be greatly appreciated :D Preview of the word file: (backup link: http://i.stack.imgur.com/YF1Ap.png)
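
    A hedged sketch of "Method Two" without Office interop (the paths are placeholders): a .docx is just a zip package, so ZipArchive from System.IO.Compression (.NET 4.5+) can copy out word/media/* for the images and word/document.xml for the text, which can then be parsed for sections and questions; images embedded in text boxes typically live in that same media folder, unlike with the InlineShapes-based extraction.

        using System.IO;
        using System.IO.Compression;

        Directory.CreateDirectory(@"C:\temp\media");
        using (var zip = ZipFile.OpenRead(@"C:\temp\questionpaper.docx"))
        {
            foreach (var entry in zip.Entries)
            {
                if (entry.FullName == "word/document.xml")
                    entry.ExtractToFile(@"C:\temp\document.xml", overwrite: true);
                else if (entry.FullName.StartsWith("word/media/"))
                    entry.ExtractToFile(Path.Combine(@"C:\temp\media", entry.Name), overwrite: true);
            }
        }
        // word/document.xml can then be loaded with XDocument to walk sections and
        // questions, and its relationship ids tie each question to a file in media/.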

    Read the article

  • Generating cache file for Twitter rss feed

    - by Kerri
    I'm working on a site with a simple php-generated twitter box with user timeline tweets pulled from the user_timeline rss feed, and cached to a local file to cut down on loads, and as backup for when twitter goes down. I based the caching on this: http://snipplr.com/view/8156/twitter-cache/. It all seemed to be working well yesterday, but today I discovered the cache file was blank. Deleting it then loading again generated a fresh file. The code I'm using is below. I've edited it to try to get it to work with what I was already using to display the feed and probably messed something crucial up. The changes I made are the following (and I strongly believe that any of these could be the cause): - Revised the time difference code (the linked example seemed to use a custom function that wasn't included in the code) Removed the "serialize" function from the "fwrites". This is purely because I couldn't figure out how to unserialize once I loaded it in the display code. I truthfully don't understand the role that serialize plays or how it works, so I'm sure I should have kept it in. If that's the case I just need to understand where/how to deserialize in the second part of the code so that it can be parsed. Removed the $rss variable in favor of just loading up the cache file in my original tweet display code. So, here are the relevant parts of the code I used: <?php $feedURL = "http://twitter.com/statuses/user_timeline/#######.rss"; // START CACHING $cache_file = dirname(__FILE__).'/cache/twitter_cache.rss'; // Start with the cache if(file_exists($cache_file)){ $mtime = (strtotime("now") - filemtime($cache_file)); if($mtime > 600) { $cache_rss = file_get_contents('http://twitter.com/statuses/user_timeline/75168146.rss'); $cache_static = fopen($cache_file, 'wb'); fwrite($cache_static, $cache_rss); fclose($cache_static); } echo "<!-- twitter cache generated ".date('Y-m-d h:i:s', filemtime($cache_file))." -->"; } else { $cache_rss = file_get_contents('http://twitter.com/statuses/user_timeline/#######.rss'); $cache_static = fopen($cache_file, 'wb'); fwrite($cache_static, $cache_rss); fclose($cache_static); } //END CACHING //START DISPLAY $doc = new DOMDocument(); $doc->load($cache_file); $arrFeeds = array(); foreach ($doc->getElementsByTagName('item') as $node) { $itemRSS = array ( 'title' => $node->getElementsByTagName('title')->item(0)->nodeValue, 'date' => $node->getElementsByTagName('pubDate')->item(0)->nodeValue ); array_push($arrFeeds, $itemRSS); } // the rest of the formatting and display code.... } ?> ETA 6/17 Nobody can help…? I'm thinking it has something to do with writing a blank cache file over a good one when twitter is down, because otherwise I imagine that this should be happening every ten minutes when the cache file is overwritten again, but it doesn't happen that frequently. I made the following change to the part where it checks how old the file is to overwrite it: $cache_rss = file_get_contents('http://twitter.com/statuses/user_timeline/75168146.rss'); if($mtime > 600 && $cache_rss != ''){ $cache_static = fopen($cache_file, 'wb'); fwrite($cache_static, $cache_rss); fclose($cache_static); } …so now, it will only write the file if it's over ten minutes old and there's actual content retrieved from the rss page. Do you think this will work?
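
    Regarding the final question, a small hedged sketch of the idea (not a guarantee it fixes the blanking): only rewrite the cache when the fetch actually returned something, and write to a temporary file first so a failed or partial write can never truncate the last good copy.

        $feedURL = "http://twitter.com/statuses/user_timeline/#######.rss";
        $cache_file = dirname(__FILE__) . '/cache/twitter_cache.rss';

        $fresh = @file_get_contents($feedURL);
        if ($fresh !== false && trim($fresh) !== '') {
            $tmp = $cache_file . '.tmp';
            file_put_contents($tmp, $fresh);
            rename($tmp, $cache_file);   // atomic on the same filesystem
        }
        // The display code keeps loading $cache_file, which now always holds the last good feed.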

    Read the article
