Search Results

Search found 70140 results on 2806 pages for 'file io'.


  • When creating a new text file, should I add a .txt extension to its name?

    - by Agmenor
    When I create a new document that will contain only plain text, Ubuntu does not oblige me to add a .txt extension to its name. It works very well: gedit opens it without a problem, understanding that it is only text. The only two arguments in favour of adding an extension that I have found so far are (1) interoperability with Windows systems and (2) avoiding confusion with folders having the same name. Nevertheless, those two arguments do not convince me at all. So, should I keep the habit of adding an extension to files or not?

    Read the article

  • How do I know if my disks are being hit with too many IO reads or writes, or both?

    - by Mark F
    Hi all, I know a bit about disk I/O and the bottlenecks related to it, especially where databases are concerned. But how do I really know what the maximum IO numbers will be for my disks? What metric might be available to me for working out, roughly (but it needs to be a good approximation), how much I/O capacity I have left available? I've seen it before where things are bubbling along nicely and then all of a sudden everything grinds to a halt, and it ends up being an IO-bound problem. Is there a better way to predict when IO is reaching its limits? This article was interesting but did not give the answer I am after: "http://serverfault.com/questions/61510/linux-how-can-i-see-whats-waiting-for-disk-io". So is my best bet just looking at 'CPU IO WAIT'? There must be a more proactive method for this? Best, M

    Read the article

  • java.io.StreamCorruptedException: invalid stream header: 7371007E

    - by Alex
    Hello, this is probably a simple question. I have a client/server application that communicates using objects. When I send only one object from the client to the server, everything works well. When I attempt to send several objects one after another on the same stream, I get a StreamCorruptedException. Can someone point me to the cause of this error? Thanks.

    Client write method:

        private SecMessage[] send(SecMessage[] msgs) {
            SecMessage result[] = new SecMessage[msgs.length];
            Socket s = null;
            ObjectOutputStream objOut = null;
            ObjectInputStream objIn = null;
            try {
                s = new Socket("localhost", 12345);
                objOut = new ObjectOutputStream(s.getOutputStream());
                for (SecMessage msg : msgs) {
                    objOut.writeObject(msg);
                }
                objOut.flush();
                objIn = new ObjectInputStream(s.getInputStream());
                for (int i = 0; i < result.length; i++)
                    result[i] = (SecMessage) objIn.readObject();
            } catch (java.io.IOException e) {
                alert(IO_ERROR_MSG + "\n" + e.getMessage());
            } catch (ClassNotFoundException e) {
                alert(INTERNAL_ERROR + "\n" + e.getMessage());
            } finally {
                try { objIn.close(); } catch (IOException e) {}
                try { objOut.close(); } catch (IOException e) {}
            }
            return result;
        }

    Server read method:

        // in is an InputStream defined in the server
        SecMessage rcvdMsgObj;
        rcvdMsgObj = (SecMessage) new ObjectInputStream(in).readObject();
        return rcvdMsgObj;

    And the SecMessage class is:

        public class SecMessage implements java.io.Serializable {
            private static final long serialVersionUID = 3940341617988134707L;
            private String cmd;
            // ... nothing interesting here, just a bunch of fields, getters and setters
        }
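
    A note on the pattern being discussed, as a hedged sketch rather than the poster's fix: each new ObjectInputStream expects to read a fresh serialization stream header, so wrapping the same socket stream in a new ObjectInputStream for every message makes the second wrap see object data (0x73 0x71 ...) where a header should be. A minimal server-side loop that keeps a single stream for the life of the connection looks roughly like this (the class name, port, and fixed message count are illustrative assumptions, not taken from the question):

        import java.io.ObjectInputStream;
        import java.io.ObjectOutputStream;
        import java.net.ServerSocket;
        import java.net.Socket;

        // Sketch only: one ObjectInputStream/ObjectOutputStream pair per connection.
        public class EchoObjectServer {
            public static void main(String[] args) throws Exception {
                try (ServerSocket server = new ServerSocket(12345);
                     Socket client = server.accept();
                     // Constructed once: this consumes the stream header written by the
                     // client's ObjectOutputStream constructor.
                     ObjectInputStream in = new ObjectInputStream(client.getInputStream());
                     ObjectOutputStream out = new ObjectOutputStream(client.getOutputStream())) {
                    int expectedMessages = 3; // assumed; match however many the client writes
                    for (int i = 0; i < expectedMessages; i++) {
                        Object msg = in.readObject(); // no new ObjectInputStream per message
                        out.writeObject(msg);         // reply so the client's readObject loop completes
                        out.flush();
                    }
                }
            }
        }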

    Read the article

  • How to remove a file association in Windows 8?

    - by Chesnokov Yuriy
    I have Chrome associated with the .xlsx extension on a Windows 8.1 machine. In Control Panel\Programs\Default Programs\Set Associations it is only possible to change the association to another program, not to remove it. In Control Panel\Programs\Default Programs\Set Default Programs\Set Program Associations, .xlsx is not listed under Chrome. I removed all keys from HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\FileExts\.xlsx, yet Chrome remains associated with that extension in Control Panel\Programs\Default Programs\Set Associations, and Windows Explorer still shows the Chrome icon for the .xlsx file.

    Read the article

  • Problem with File IO and splitting strings with Environment.NewLine in VB.Net

    - by Senthil
    Hi, I was experimenting with basic VB.Net file read/write and encountered this problem. I don't know whether it has something to do with the file IO or the string splitting. I am writing text to a file like so:

        Dim sWriter As New StreamWriter("Data.txt")
        sWriter.WriteLine("FirstItem")
        sWriter.WriteLine("SecondItem")
        sWriter.WriteLine("ThirdItem")
        sWriter.Close()

    Then, I am reading the text from the file:

        Dim sReader As New StreamReader("Data.txt")
        Dim fileContents As String = sReader.ReadToEnd()
        sReader.Close()

    Now, I am splitting the fileContents variable using Environment.NewLine and saving the returned String array:

        Dim tempStr() As String = fileContents.Split(Environment.NewLine)

    When I print the array, I get some weird results:

        For Each str As String In tempStr
            Console.WriteLine("*" + str + "*")
        Next

    I added the *'s to the beginning and end to find out what is going on. Since NewLine is used as the delimiter, I expect the strings in the array to NOT have any NewLine's. But the output was this:

        *FirstItem*
        * SecondItem*
        * ThirdItem*
        * *

    Shouldn't it be this?

        *FirstItem*
        *SecondItem*
        *ThirdItem*

    Since I am using WriteLine, my guess is a new line is added after the last string and hence the last empty item in the array after splitting. But why is there a new line at the beginning of the second and third strings?

    Read the article

  • Best practices question: How to save a collection of images and a Java object in a single file?

    - by Richard
    Hi all, I am making a Java program that has a collection of flash-card-like objects. I store the objects in a JTree composed of DefaultMutableTreeNodes. Each node has a user object attached to it with a few string/native data type parameters. However, I also want each of these objects to have an image (typical formats: jpg, png, etc.). I would like to be able to store all of this information, including the images and the tree data, to disk in a single file, so the file can be transferred between users and the entire tree, including the images and parameters for each object, can be reconstructed. I had not approached a problem like this before, so I was not sure what the best practices were. I found XMLEncoder (http://java.sun.com/j2se/1.4.2/docs/api/java/beans/XMLEncoder.html) to be a very effective way of storing my tree and the native data type information. However, I couldn't figure out how to save the image data itself inside the XML file, and I'm not sure it is possible since the data is binary (so restricted characters would be invalid). My next thought was to associate a hash string instead of an image within each user object, and then gzip together all of the images, with the hash strings as the names, and the XML-encoded tree in the same compressed file. That seemed really contrived though. Does anyone know a good approach for this type of issue? Thanks!
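
    For what it's worth, the archive-plus-manifest idea described above maps naturally onto a zip file. Here is a minimal sketch of that approach, assuming the tree has already been turned into bytes with XMLEncoder and each image is addressed by the id stored in its node (the class, file, and entry names are illustrative, not taken from the question):

        import java.io.File;
        import java.io.FileOutputStream;
        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.util.Map;
        import java.util.zip.ZipEntry;
        import java.util.zip.ZipOutputStream;

        // Sketch: bundle an XMLEncoder-serialized tree and its images into one archive.
        public class DeckWriter {
            public static void save(File out, byte[] treeXml, Map<String, Path> imagesById)
                    throws IOException {
                try (ZipOutputStream zip = new ZipOutputStream(new FileOutputStream(out))) {
                    zip.putNextEntry(new ZipEntry("tree.xml")); // the XMLEncoder output
                    zip.write(treeXml);
                    zip.closeEntry();
                    for (Map.Entry<String, Path> e : imagesById.entrySet()) {
                        // one entry per image, named after the id referenced by the tree node
                        zip.putNextEntry(new ZipEntry("images/" + e.getKey()));
                        Files.copy(e.getValue(), zip);
                        zip.closeEntry();
                    }
                }
            }
        }

    Reading the file back is the mirror image: open it with java.util.zip.ZipFile, feed the tree.xml entry to XMLDecoder, and load each images/ entry with ImageIO.read.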

    Read the article

  • Best Practice: Apache File Upload

    - by matnagel
    I am looking for a solution for trusted users to upload PDF files via HTML forms (with maybe PHP involved). This is a fairly standard Ubuntu Linux server with Apache 2.x and PHP 5. I am wondering what the benefits of the Apache file upload module are. There have been no updates for some time; is it actively maintained? What are the advantages over a traditional PHP upload with Apache 2 without this module? http://commons.apache.org/fileupload I remember traditional PHP file upload is difficult, with some pitfalls; will the Apache file upload module improve the situation? The solution I am looking for will be part of an existing website and be integrated into the admin web frontend. Things I am not considering are WebDAV, SSH, FTP, FTPS, FTP over SSH. It should work with a browser and without installing special client software, so I am asking about a browser-based upload without special client-side requirements. I can require a modern browser such as Firefox >= 3.5 or a modern WebKit browser like Chrome or Safari from the users.
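
    As an aside on the link above: Commons FileUpload is a Java servlet-side library rather than an Apache httpd module, so it only applies if the upload handler runs in a Java web container. Purely for illustration, a hedged sketch of what handling a multipart POST with it typically looks like (the servlet, size limit, and target directory are made-up assumptions):

        import java.io.File;
        import java.util.List;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;
        import org.apache.commons.fileupload.FileItem;
        import org.apache.commons.fileupload.disk.DiskFileItemFactory;
        import org.apache.commons.fileupload.servlet.ServletFileUpload;

        // Sketch of a servlet that stores uploaded PDFs; error handling trimmed.
        public class PdfUploadServlet extends HttpServlet {
            @Override
            protected void doPost(HttpServletRequest req, HttpServletResponse resp) {
                try {
                    ServletFileUpload upload = new ServletFileUpload(new DiskFileItemFactory());
                    upload.setSizeMax(10 * 1024 * 1024); // cap uploads at 10 MB (arbitrary)
                    List<FileItem> items = upload.parseRequest(req);
                    for (FileItem item : items) {
                        if (!item.isFormField() && item.getName().toLowerCase().endsWith(".pdf")) {
                            // strip any client-side path before saving
                            item.write(new File("/var/uploads", new File(item.getName()).getName()));
                        }
                    }
                    resp.setStatus(HttpServletResponse.SC_OK);
                } catch (Exception e) {
                    resp.setStatus(HttpServletResponse.SC_INTERNAL_SERVER_ERROR);
                }
            }
        }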

    Read the article

  • Iterating through folders and files in batch file?

    - by Will Marcouiller
    Here's my situation. A project has as its objective to migrate some attachments to another system. These attachments will be located under a parent folder, let's say "Folder 0" (see this question's diagram for a better understanding), and they will be zipped/compressed. I want my batch script to be called like so:

        BatchScript.bat "c:\temp\usd\Folder 0"

    I'm using 7za.exe as the command-line extraction tool. What I want my batch script to do is to iterate through "Folder 0"'s subfolders and extract all of the contained ZIP files into their respective folders. It is obligatory that the extracted files end up in the same folder as their respective ZIP files. So, files contained in "File 1.zip" must end up in "Folder 1" and so forth. I have read about the FOR...DO command on Windows XP Professional Product Documentation - Using Batch Files. Here's my script:

        @ECHO OFF
        FOR /D %folder IN (%%rootFolderCmdLnParam) DO
            FOR %zippedFile IN (*.zip) DO
                7za.exe e %zippedFile

    I guess that I would also need to change the current directory before calling 7za.exe e %zippedFile for the file extraction, but I can't figure out how in this batch file (though I know how on the command line, and even though I know it is the same instruction, "cd"). Anyone's help is gratefully appreciated.

    Read the article

  • Windows file association for README, INSTALL, LICENSE and the like [closed]

    - by Lumi
    Possible Duplicate: How to set the default program for opening files without an extension in Windows?

    Many files originating in the UNIX world come without a file extension. Popular examples include README, INSTALL, LICENSE. We know for a fact that these are text files. It is therefore a bit disappointing not to be able to just double-click them open in Explorer and see them in Notepad (actually, Notepad2, because of the UNIX line endings, which silly Microsoft Notepad doesn't render correctly). Does anyone know of a way to create a file association for, say, README files without an extension? This could then be replicated to cover the most frequently occurring file types, and then double-clicking them open would work.

    Update (Sort of in response to all your comments.) Thanks, folks, your comments and answers have helped me. @Indrek, yes, I was under the assumption that you could somehow create an association for just README or Makefile, and couldn't do so for files without an extension. Turns out the contrary is true, and yes, that is a workaround that neatly solves the issue. Ultimately, I just want to be able to double-click to open a README or Makefile, that's all. @Sampo, the SendMe trick is also useful, although usability is not as great as a straight double-click. (I'm really lazy sometimes.) Turns out the following trick using assoc and ftype from an Administrator prompt does the double-click enabling job:

        assoc .=no_ext
        ftype no_ext=%SystemRoot%\system32\NOTEPAD.EXE %1
        :: You can see it created some entries in the registry:
        reg query hkcr\no_ext /s
        reg query hkcr\. /s

    Read the article

  • Moving hidden files/folders with the command-line or batch-file

    - by Synetech
    Question

    Does anyone know of a way to move files and folders that have the hidden, system, or read-only attribute set from the command-line or a batch file? (No, stripping the attributes first is not an option since there is no practical way to know which attributes were set in order to re-set them after the move.)

    (Failed) Attempts

    Using the basic move command does not work with items with the hidden or system attribute set and for some reason, it does not have switches to specify attributes like the dir and del commands do. I tried using a utility I wrote that uses the shell’s file operation function, but that requires using start /w to prevent the batch file from running on ahead, and it complains about long-filename support for some reason. I tried using robocopy, but it first copies the files and then deletes the originals instead of simply moving the source (which results in a frustrating delay, even with the excessive output redirected to nul). (Surprisingly it seems that few people have ever needed to move hidden files from the command-line. All I could find was this one person who abandoned the attempt.)

    Read the article

  • Windows network: deny file access to another user if file is currently open

    - by Steve
    Some users on my network are having difficulty saving files, because the file is open elsewhere. Let's say Lemuel wants to edit a file, but Bernice in the next office over is working on it. Lemuel opens, edits, and tries to save, but then gets a "no write access" error. Bernice chortles with glee (since earlier that week Lemuel stole her sandwich). Unfortunately, various programs will not warn the user that they have opened a read-only file. Is there a way (in Windows) to limit file access to ONE user only, i.e. 777 for the first user to open the file, and 000 for all users after that? (Sorry for the Linux terminology but it gets my point across.)

    Read the article

  • Flash IO error while uploading photos on a low-speed internet upload connection

    - by Beck
    Actionscript:

        System.security.allowDomain("http://" + _root.tdomain + "/");
        import flash.net.FileReferenceList;
        import flash.net.FileReference;
        import flash.external.ExternalInterface;
        import flash.external.*;

        /* Main variables */
        var session_photos = _root.ph;
        var how_much_you_can_upload = 0;
        var selected_photos;        // container for selected photos
        var inside_photo_num = 0;   // for photo in_array selection
        var created_elements = _root.ph;
        var for_js_num = _root.ph;

        /* Functions & settings for javascript<->flash conversation */
        var methodName:String = "addtoflash";
        var instance:Object = null;
        var method:Function = addnewphotonumber;
        var wasSuccessful:Boolean = ExternalInterface.addCallback(methodName, instance, method);
        function addnewphotonumber() {
            session_photos--;
            created_elements--;
            for_js_num--;
        }

        /* Javascript hide and show flash button functions */
        function block()   { getURL("Javascript: blocking();"); }
        function unblock() { getURL("Javascript:unblocking();"); }

        /* Creating HTML platform function */
        var result = false;

        /* Uploading */
        function uploadthis(photos:Array) {
            if (!photos[inside_photo_num].upload("http://" + _root.tdomain + "/upload.php?PHPSESSID=" + _root.phpsessionid)) {
                getURL("Javascript:error_uploading();");
            }
        }

        /* Flash button(applet) options and bindings */
        var fileTypes:Array = new Array();
        var imageTypes:Object = new Object();
        imageTypes.description = "Images (*.jpg)";
        imageTypes.extension = "*.jpg;";
        fileTypes.push(imageTypes);
        var fileListener:Object = new Object();
        var btnListener:Object = new Object();
        btnListener.click = function(eventObj:Object) {
            var fileRef:FileReferenceList = new FileReferenceList();
            fileRef.addListener(fileListener);
            fileRef.browse(fileTypes);
        }
        uploadButton.addEventListener("click", btnListener);

        /* Listeners */
        fileListener.onSelect = function(fileRefList:FileReferenceList):Void {
            // resetting values
            inside_photo_num = 0;
            var list:Array = fileRefList.fileList;
            var item:FileReference;
            // PHP photo counter
            how_much_you_can_upload = 3 - session_photos;
            if (list.length > how_much_you_can_upload) {
                getURL("Javascript:howmuch=" + how_much_you_can_upload + ";list_length=" + list.length + ";limit_reached();");
                return;
            }
            // if session variable isn't yet refreshed, we check inner counter
            if (created_elements >= 3) {
                getURL("Javascript:limit_reached();");
                return;
            }
            selected_photos = list;
            for (var i:Number = 0; i < list.length; i++) {
                how_much_you_can_upload--;
                item = list[i];
                trace("name: " + item.name);
                trace(item.addListener(this));
                if ((item.size / 1024) > 5000) { getURL("Javascript:size_limit_reached();"); return; }
            }
            result = false;
            setTimeout(block, 500);
            /* Increment number for new HTML container and pass it to javascript,
               after javascript returns true and we start uploading */
            for_js_num++;
            if (ExternalInterface.call("create_platform", for_js_num)) {
                uploadthis(selected_photos);
            }
        }

        fileListener.onProgress = function(file:FileReference, bytesLoaded:Number, bytesTotal:Number):Void {
            getURL("Javascript:files_process(" + bytesLoaded + "," + bytesTotal + "," + for_js_num + ");");
        }

        fileListener.onComplete = function(file:FileReference, bytesLoaded:Number, bytesTotal:Number):Void {
            inside_photo_num++;
            var sendvar_lv:LoadVars = new LoadVars();
            var loadvar_lv:LoadVars = new LoadVars();
            loadvar_lv.onLoad = function(success:Boolean) {
                if (loadvar_lv.failed == 1) {
                    getURL("Javascript:type_failed();");
                    return;
                }
                getURL("Javascript:filelinks='" + loadvar_lv.json + "';fullname='" + loadvar_lv.fullname + "';completed(" + for_js_num + ");");
                created_elements++;
                if ((inside_photo_num + 1) > selected_photos.length) { setTimeout(unblock, 1000); return; } // don't create empty containers anymore
                if (created_elements >= 3) { return; }
                result = false;
                /* Increment number for new HTML container and pass it to javascript,
                   after javascript returns true and we start uploading */
                for_js_num++;
                if (ExternalInterface.call("create_platform", for_js_num)) {
                    uploadthis(selected_photos);
                }
            }
            sendvar_lv.getnum = true;
            sendvar_lv.PHPSESSID = _root.phpsessionid;
            sendvar_lv.sendAndLoad("http://" + _root.tdomain + "/upload.php", loadvar_lv, "POST");
        }

        fileListener.onCancel = function(file:FileReference):Void {
        }

        fileListener.onOpen = function(file:FileReference):Void {
        }

        fileListener.onHTTPError = function(file:FileReference, httpError:Number):Void {
            getURL("Javascript:http_error(" + httpError + ");");
        }

        fileListener.onSecurityError = function(file:FileReference, errorString:String):Void {
            getURL("Javascript:security_error(" + errorString + ");");
        }

        fileListener.onIOError = function(file:FileReference):Void {
            getURL("Javascript:io_error();");
            selected_photos[inside_photo_num].cancel();
            uploadthis(selected_photos);
        }

    The HTML object/embed parameters:

        <PARAM name="allowScriptAccess" value="always">
        <PARAM name="swliveconnect" value="true">
        <PARAM name="movie" value="http://www.localh.com/fileref.swf?ph=0&phpsessionid=8mirsjsd75v6vk583vkus50qbb2djsp6&tdomain=www.localh.com">
        <PARAM name="wmode" value="opaque">
        <PARAM name="quality" value="high">
        <PARAM name="bgcolor" value="#ffffff">
        <EMBED swliveconnect="true" wmode="opaque" src="http://www.localh.com/fileref.swf?ph=0&phpsessionid=8mirsjsd75v6vk583vkus50qbb2djsp6&tdomain=www.localh.com" quality="high" bgcolor="#ffffff" width="100" height="22" name="fileref" align="middle" allowScriptAccess="always" type="application/x-shockwave-flash" pluginspage="http://www.macromedia.com/go/getflashplayer"></EMBED>

    My upload speed is 40kb/sec. I get the Flash IO error while uploading photos bigger than 500kb, and no error while uploading photos of roughly 100-500kb or less. My friend has an 8mbit upload speed and gets no errors even while uploading 3.2mb photos and more. How can I fix this problem? I have tried to re-upload on the IO error trigger, but it stops at the same place. Any solution regarding this error? By the way, I was watching the process via a debugging proxy and figured out that the response headers don't come back at all on this IO error, and sometimes it shows a socket error. If needed, I will post the server-side PHP script as well. But it stops at if(isset($_FILES['Filedata'])) { so it won't help :) as all the processing comes after this check.

    Read the article

  • Help me choose between Go and Io

    - by Robert Smith
    During the following months I'll have some spare time, so I thought of picking up a new programming language. I've been reading some articles about Go and Io, and both of them look interesting and very promising, so I'm stuck making a decision about which one to pick up next. I'm mainly interested in distributed systems and concurrency. Any help is greatly appreciated. Thanks.

    Read the article

  • MySQL high IO usage queries

    - by jack
    MySQL has a built-in slow query logger. Are there any options or third-party tools that are able to detect the queries causing high IO usage, in the same way the slow query logger does?

    Read the article

  • java.io.FileNotFoundException (The system cannot find the path specified)

    - by xenom
    I get this exception when I want to open a keystore:

        java.io.FileNotFoundException: \resources\keystore (The system cannot find the path specified)

    Basically my application layout is:

        src/
            client.java
            server.java
        resources/
            keystore
            truststore

    And the faulty code:

        System.setProperty("javax.net.ssl.keyStore", "/resources/keystore");
        System.setProperty("javax.net.ssl.keyStorePassword", "ebanking");

    I also tried ./resources/keystore, resources/keystore, \\resources\\keystore, etc. My application is supposed to work in an executable jar, so no absolute path technique please.
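
    One note on the scenario above, with a hedged sketch rather than a definitive fix: javax.net.ssl.keyStore expects a path on the filesystem, so a keystore packaged inside a jar has to be copied out (for example to a temporary file) before the property can point at it. The resource name below assumes the resources folder ends up on the classpath; the class name is made up for illustration:

        import java.io.InputStream;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.StandardCopyOption;

        // Sketch: extract a keystore bundled in the jar to a temp file so the
        // javax.net.ssl.keyStore system property can reference a real path.
        public class KeyStoreLocator {
            public static void configureSsl() throws Exception {
                try (InputStream in = KeyStoreLocator.class.getResourceAsStream("/resources/keystore")) {
                    if (in == null) {
                        throw new IllegalStateException("keystore not found on the classpath");
                    }
                    Path tmp = Files.createTempFile("keystore", null);
                    tmp.toFile().deleteOnExit();
                    Files.copy(in, tmp, StandardCopyOption.REPLACE_EXISTING);
                    System.setProperty("javax.net.ssl.keyStore", tmp.toString());
                    System.setProperty("javax.net.ssl.keyStorePassword", "ebanking");
                }
            }
        }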

    Read the article

  • C# Multithreading File IO (Reading)

    - by washtik
    We have a situation where our application needs to process a series of files, and rather than performing this work synchronously, we would like to employ multithreading to split the workload among different threads. Each item of work is: (1) open a file for read only, (2) process the data in the file, (3) write the processed data to a Dictionary. We would like to perform each file's work on a new thread. Is this possible, and would we be better off using the ThreadPool or spawning new threads, keeping in mind that each item of "work" only takes 30ms, but it is possible that hundreds of files will need to be processed? Any ideas to make this more efficient are appreciated.

    Read the article

  • File size and mime-type before upload

    - by Marcos Placona
    Hi, I've scavenged every single topic on this forum to try and find an answer before I posted this. Most people say you should simply use SWFUpload, some others mention ActiveX, and it keeps going. I know this is doable, as Google does it with Gmail when you try to upload a file that's bigger than 25MB, or an executable. My question is: how can I determine the file size and mime-type before the file actually hits my server? I initially thought it was an impossible task, but Google proves me wrong. Could anyone give me a definitive answer on how to accomplish such a task? Thanks

    Read the article

  • C# "Could not find a part of the path" - Creating Local File

    - by Pyronaut
    I am trying to write to a folder that is located on my C:\ drive. I keep getting the error: "Could not find a part of the path ...". My file path looks basically like this:

        C:\WebRoot\ManagedFiles\folder\thumbs\5c27a312-343e-4bdf-b294-0d599330c42d\Image\lighthouse.jpg

    And I am writing to it like so:

        using (MemoryStream memoryStream = new MemoryStream())
        {
            thumbImage.Save(memoryStream, ImageFormat.Jpeg);
            using (FileStream diskCacheStream = new FileStream(cachePath, FileMode.CreateNew))
            {
                memoryStream.WriteTo(diskCacheStream);
            }
            memoryStream.WriteTo(context.Response.OutputStream);
        }

    Don't worry too much about the memory stream; it is just outputting the image (after I save it). Since I am creating a file, I am a bit perplexed as to why it cannot find the file (shouldn't it just write to where I tell it to, regardless?). The strange thing is, it has no issue when I test the path above using File.Exists. Obviously that returns false, but it means that at least my file path is semi-legit. Any help is much appreciated.

    Read the article

  • Mac OS X file recovery

    - by Daniel
    I thought that all operating systems would merge folder contents when folders are moved to the same location. Imagine my surprise when that didn't happen, and I now have hundreds, if not thousands, of files that have gone missing and are nowhere to be found. Because they were not "deleted", they are not in the trash bin. I've tried to do some recovery using a program called Stellar Phoenix, but after about a 24-hour scan it didn't recognize any of the raw files (.dng, .arw) as image files, so I couldn't see if they could be recovered. It also didn't show the directory structure, which would be handy. I tried a quick scan, but all it showed was files that were still on the HD; not sure what the point of that is. I've used Recover 2000 on Windows and it does a good job. Does anyone know of anything that works quickly and reliably for this kind of file recovery? (I don't think I should have to do a sector-by-sector scan for this kind of file loss.)

    Read the article
