Search Results

Search found 72103 results on 2885 pages for 'file storage'.


  • Reading lists from a file in Ruby

    - by Gjorgji
    Hi, I have a txt file which contains data in the following format: X1 Y1 X2 Y2 etc. I want to read the data from this file and create two lists in Ruby (X containing X1, X2 and Y containing Y1, Y2). How can I do this in Ruby? Thanks.
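
    A minimal sketch of one way to do it, assuming the file is named data.txt and the values are whitespace-separated, alternating X Y:

        # read alternating X Y values into two arrays
        xs = []
        ys = []
        File.read("data.txt").split.each_slice(2) do |x, y|
          xs << x
          ys << y
        end
        # xs => ["X1", "X2", ...], ys => ["Y1", "Y2", ...]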

    Read the article

  • HTML File upload field style

    - by Steven1350
    I am trying to create a file upload field that has a little bit of style to it, but I seem to be having problems finding examples of this. I know part of the reason is that the field itself varies from browser to browser. Any ideas how to do this? Or is there a way to do it without using a form's file element, with something that can be styled?
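
    One common approach, sketched below with hypothetical class names: hide the real file input and style the label that wraps it (older browsers vary in how they handle this, so treat it as a starting point rather than a finished cross-browser solution).

        <!-- clicking the styled label opens the file dialog of the hidden input -->
        <label class="upload-button">
          Choose a file
          <input type="file" name="upload" style="display: none;">
        </label>

        <style>
          .upload-button { padding: 6px 12px; background: #336699; color: #fff; cursor: pointer; }
        </style>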

    Read the article

  • Creating and writing a file from a FileOutputStream in Java

    - by Althane
    Okay, so I'm working on a project where I use a Java program to initiate a socket connection between two classes (a FileSender and a FileReceiver). My basic idea was that the FileSender would look like this:

        try {
            writer = new DataOutputStream(connect.getOutputStream());
        } catch (IOException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
        // While we have bytes to send
        while (filein.available() > 0) {
            // We write them out to our buffer
            writer.write(filein.read(outBuffer));
            writer.flush();
        }
        // Then close our filein
        filein.close();
        // And then our socket
        connect.close();
        } catch (IOException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();

    The constructor contains code that checks to see if the file exists, that the socket is connected, and so on. Inside my FileReceiver is this:

        input = recvSocket.accept();
        BufferedReader br = new BufferedReader(new InputStreamReader(input.getInputStream()));
        FileOutputStream fOut = new FileOutputStream(filename);
        String line = br.readLine();
        while (line != null) {
            fOut.write(line.getBytes());
            fOut.flush();
            line = br.readLine();
        }
        System.out.println("Before RECV close statements");
        fOut.close();
        input.close();
        recvSocket.close();
        System.out.println("After RECV close statements");

    All inside a try-catch block. So, what I'm trying to do is have the FileSender read in the file, convert it to bytes, and send and flush it out. The FileReceiver then reads in the bytes, writes them to the file output, flushes, and continues waiting for more. I make sure to close everything that I open, so... here comes the weird part. When I try to open the created text file in Eclipse, it tells me "An SWT error has occurred ... recommended to exit the workbench... see .log for more details." Another window pops up saying "Unhandled event loop exception (no more handles)". However, if I try to open the sent text file in notepad2, I get ThisIsASentTextfile, which is good (well, minus the fact that there should be line breaks, but I'm working on that...). Does anyone know why this is happening? And while we're checking, how do I add the line breaks? (And is this a particularly bad way to transfer files in Java without pulling in other libraries?)
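
    A minimal sketch of the raw-byte alternative (a hypothetical helper, not the poster's code): copying bytes instead of lines avoids re-interpreting the data as text, so line breaks survive and binary files work too. The sender would call copy(new FileInputStream(path), socket.getOutputStream()) and the receiver copy(socket.getInputStream(), new FileOutputStream(path)).

        import java.io.IOException;
        import java.io.InputStream;
        import java.io.OutputStream;

        public class StreamCopy {
            // Copies raw bytes from in to out until end of stream.
            public static void copy(InputStream in, OutputStream out) throws IOException {
                byte[] buffer = new byte[4096];
                int read;
                while ((read = in.read(buffer)) != -1) {
                    out.write(buffer, 0, read);  // write only the bytes actually read
                }
                out.flush();
            }
        }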

    Read the article

  • How to check if ping responded or not in a batch file

    - by Ismail
    I want to continuously ping a server that is currently down and see a message box as soon as it responds. I want to do it through a batch file. I can show a message box as described here: http://stackoverflow.com/questions/774175/how-can-i-open-a-message-box-in-a-windows-batch-file/774253#774253 and I can ping continuously with ping <servername> -t. But how do I check whether it responded or not?
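
    A minimal sketch, assuming the server is called myserver and relying on the fact that successful replies contain "TTL=" (ping's own exit code can be 0 even for "Destination host unreachable"):

        @echo off
        :loop
        ping -n 1 myserver | find "TTL=" >nul
        if not errorlevel 1 (
            echo myserver is responding - show the message box here
            goto :eof
        )
        :: crude 5-second delay that also works where timeout.exe is missing
        ping -n 6 127.0.0.1 >nul
        goto loop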

    Read the article

  • Empty file fields

    - by user319319
    I need to check all :file fields; none of them may be empty. I'm using this code:

        function CheckFiles() {
            var t = $('.uploadElement:empty').size();
            alert(t);
        }

    but t returns the count of all uploadElement elements. How do I get only the empty :file fields? Sorry for my English.
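
    A sketch of one way to count only the empty file inputs, assuming jQuery (the :empty selector matches elements with no child nodes, not inputs with no value):

        // keep only the file inputs whose value is still empty
        function countEmptyFileFields() {
            return $('input:file').filter(function () {
                return $(this).val() === '';
            }).length;
        }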

    Read the article

  • Large File Download - Connection With Server Reset

    - by daveywc
    I have an ASP.NET website that allows the user to download largish files (30 MB to about 60 MB). Sometimes the download works fine, but often it fails at some varying point before the download finishes, with a message saying that the connection with the server was reset. Originally I was simply using Server.TransmitFile, but after reading up a bit I am now using the code posted below. I am also setting the Server.ScriptTimeout value to 3600 in the Page_Init event.

        private void DownloadFile(string fname, bool forceDownload)
        {
            string path = MapPath(fname);
            string name = Path.GetFileName(path);
            string ext = Path.GetExtension(path);
            string type = "";
            // set known types based on file extension
            if (ext != null)
            {
                switch (ext.ToLower())
                {
                    case ".mp3":
                        type = "audio/mpeg";
                        break;
                    case ".htm":
                    case ".html":
                        type = "text/HTML";
                        break;
                    case ".txt":
                        type = "text/plain";
                        break;
                    case ".doc":
                    case ".rtf":
                        type = "Application/msword";
                        break;
                }
            }
            if (forceDownload)
            {
                Response.AppendHeader("content-disposition", "attachment; filename=" + name.Replace(" ", "_"));
            }
            if (type != "")
            {
                Response.ContentType = type;
            }
            else
            {
                Response.ContentType = "application/x-msdownload";
            }
            System.IO.Stream iStream = null;
            // Buffer to read 10K bytes in chunk:
            byte[] buffer = new Byte[10000];
            // Length of the file:
            int length;
            // Total bytes to read:
            long dataToRead;
            try
            {
                // Open the file.
                iStream = new System.IO.FileStream(path, System.IO.FileMode.Open, System.IO.FileAccess.Read, System.IO.FileShare.Read);
                // Total bytes to read:
                dataToRead = iStream.Length;
                //Response.ContentType = "application/octet-stream";
                //Response.AddHeader("Content-Disposition", "attachment; filename=" + filename);
                // Read the bytes.
                while (dataToRead > 0)
                {
                    // Verify that the client is connected.
                    if (Response.IsClientConnected)
                    {
                        // Read the data in buffer.
                        length = iStream.Read(buffer, 0, 10000);
                        // Write the data to the current output stream.
                        Response.OutputStream.Write(buffer, 0, length);
                        // Flush the data to the HTML output.
                        Response.Flush();
                        buffer = new Byte[10000];
                        dataToRead = dataToRead - length;
                    }
                    else
                    {
                        // prevent infinite loop if user disconnects
                        dataToRead = -1;
                    }
                }
            }
            catch (Exception ex)
            {
                // Trap the error, if any.
                Response.Write("Error : " + ex.Message);
            }
            finally
            {
                if (iStream != null)
                {
                    // Close the file.
                    iStream.Close();
                }
                Response.Close();
            }
        }

    Read the article

  • File Server - Storage configuration: RAID vs LVM vs ZFS something else... ?

    - by privatehuff
    We are a small company that does video editing, among other things, and we need a place to keep backup copies of large media files and make it easy to share them. I've got a box set up with Ubuntu Server and 4 x 500 GB drives. They're currently set up with Samba as four shared folders that Mac/Windows workstations can see fine, but I want a better solution. There are two major reasons for this: 500 GB is not really big enough (some projects are larger), and it is cumbersome to manage the current setup, because individual hard drives have different amounts of free space and duplicated data (for backup). It is confusing now, and that will only get worse once there are multiple servers ("the project is on server2 in share4", etc.).

    So, I need a way to combine hard drives in such a way as to avoid complete data loss with the failure of a single drive, and so that users see only a single share on each server. I've done Linux software RAID5 and had a bad experience with it, but would try it again. LVM looks OK but it seems like no one uses it. ZFS seems interesting but it is relatively "new". What is the most efficient and least risky way to combine the drives that is convenient for my users?

    Edit: The goal here is basically to create servers that contain an arbitrary number of hard drives but limit complexity from an end-user perspective (i.e. they see one "folder" per server). Backing up data is not an issue here, but how each solution responds to hardware failure is a serious concern. That is why I lump RAID, LVM, ZFS, and who-knows-what together. My prior experience with RAID5 was also on an Ubuntu Server box, and there was a tricky and unlikely set of circumstances that led to complete data loss. I could avoid that again but was left with a feeling that I was adding an unnecessary additional point of failure to the system. I haven't used RAID10, but we are on commodity hardware and the most data drives per box is pretty much fixed at 6. We've got a lot of 500 GB drives, and 1.5 TB is pretty small. (Still an option for at least one server, however.) I have no experience with LVM and have read conflicting reports on how it handles drive failure. If a (non-striped) LVM setup could handle a single drive failing and only lose whichever files had a portion stored on that drive (and it stored most files on a single drive only), we could even live with that. But as long as I have to learn something totally new, I may as well go all the way to ZFS. Unlike LVM, though, I would also have to change my operating system(?), so that increases the distance between where I am and where I want to be. I used a version of Solaris at uni and wouldn't mind it terribly, though. On the other end of the IT spectrum, I think I may also explore FreeNAS and/or Openfiler, but that doesn't really solve the how-to-combine-drives issue.
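
    For illustration only, a sketch of what a ZFS layout might look like for six drives (device names are hypothetical, and at the time this would mean running Solaris/OpenSolaris rather than stock Ubuntu):

        # one pool that survives a single failed drive and appears as one file system
        zpool create tank raidz c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0
        zfs create tank/projects
        zfs set sharesmb=on tank/projects   # share to the Mac/Windows workstations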

    Read the article

  • What is the simplest way to download a file in PHP

    - by silent
    Hi all, I need to download an image from some URL to my server. However, my server's config doesn't allow me to do it this way: getimagesize( $file ); because it generates this error: Warning: getimagesize() [function.getimagesize]: URL file-access is disabled in the server configuration in somefile.php on line 10. So, is there another way I can do this that doesn't require an external library?
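
    A minimal sketch using the cURL extension (no external library, but it does assume cURL is enabled; the URL and destination path are hypothetical):

        <?php
        $url  = 'http://example.com/image.jpg';
        $dest = '/var/www/uploads/image.jpg';

        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        $data = curl_exec($ch);
        curl_close($ch);

        if ($data !== false) {
            file_put_contents($dest, $data);
            $size = getimagesize($dest);  // works now: it's a local file
        }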

    Read the article

  • GWT file permission

    - by Hoax
    I have a little GWT/AppEngine project which uses RPC. Basically I need to get some data from an XML file that resides on the server. But when I use the RPC to read the file in my server package, I get an AccessControlException (access denied). Any ideas what the problem is? Cheers, Hoax

    Read the article

  • How to pass user input to a batch file without typing

    - by Blood hound
    I am trying to run a batch file which requires user input ("y/n") before it takes further action. I want to call this batch file from an automation run, so the yes or no answer needs to be passed without user intervention. Any idea how to achieve this? cmd /c setup.bat Now, when setup.bat runs, "yes" or "no" needs to be selected to get the desired result. Since setup.bat is called during automation, is there any way to pass "yes" as an input to setup.bat?
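
    One common trick, sketched below; it only works if setup.bat reads its answer from standard input (a set /p prompt, for example) rather than from a direct key press:

        rem pipe the answer into the script's standard input
        echo y | cmd /c setup.bat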

    Read the article

  • How can I view locking on a server file?

    - by JamesP
    We have a database file (FoxPro) on a Windows Server 2003 share. We're having some problems where the program that writes to this file has to retry because the file is locked. This all happens very quickly, and within a few seconds the file is available again, but the problem is it shouldn't be locked at all. Does anyone know how we can view what's locking it? Are any tools available?
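
    One thing worth trying, sketched here with a hypothetical server name: Windows Server 2003 ships openfiles.exe, which lists files opened over the network along with the user holding them (Computer Management > Shared Folders > Open Files shows the same information).

        :: run with administrative rights
        openfiles /query /s FILESERVER01 /fo table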

    Read the article

  • How is thread local storage (__thread) implemented on Linux?

    - by anon
    __thread Foo foo; How is "foo" actually resolved? Does the compiler silently replace every instance of "foo" with a function call? Or is "foo" stored somewhere relative to the bottom of the stack, with the compiler treating it as "for each thread, reserve this space near the bottom of the stack, and foo lives at offset x from the bottom"? Insights please.
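
    A small sketch of what typically happens with gcc on x86-64 Linux (the assembly is illustrative, for the local-exec TLS model):

        /* each thread gets its own copy of counter */
        __thread int counter = 0;

        int bump(void) {
            /* compiles to roughly: addl $1, %fs:counter@tpoff
               i.e. an offset from the thread pointer held in %fs --
               no function call, and not an offset from the stack */
            return ++counter;
        }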

    Read the article

  • git status shows a file that I have listed explicitly in my .gitignore file

    - by metaperl
    I have the following line in my .gitignore file: var/www/docs/.backroom/billing_info/inv.pl but when I type 'git status' I am told the following: # modified: var/www/docs/.backroom/billing_info/inv.pl I don't understand how a file which is explicitly listed as an ignore pattern can be listed as modified when I want git to ignore it. There are no lines starting with a ! in my .gitignore file. Here is my entire .gitignore file for reference: http://pastebin.com/Jw445Qd7
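
    If the file is already tracked, .gitignore has no effect on it; a sketch of the usual fix, assuming the file really should stop being tracked (it stays on disk):

        git rm --cached var/www/docs/.backroom/billing_info/inv.pl
        git commit -m "Stop tracking inv.pl"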

    Read the article

  • Dealing with a shutdown during a file write?

    - by Ken
    All, I'm working on a real-time system, VxWorks I think, and I'm saving application settings to a file. What's the best way to preserve the settings if the system shuts down or loses power in the middle of a file write? All I can think of is shuffling a few files around or reducing the frequency at which I save variables in order to reduce incidents.
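
    A minimal sketch of the usual write-then-rename pattern, assuming a file system where rename() atomically replaces the target (true for POSIX; worth verifying for the file system used on VxWorks):

        #include <stdio.h>

        /* write the new settings to a temp file, then swap it into place,
           so a power loss leaves either the old file or the new one intact */
        int save_settings(const char *path, const char *data) {
            char tmp[256];
            snprintf(tmp, sizeof tmp, "%s.tmp", path);
            FILE *f = fopen(tmp, "w");
            if (!f) return -1;
            fputs(data, f);
            fflush(f);
            fclose(f);
            return rename(tmp, path);
        }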

    Read the article

  • stderr to file, but without buffering

    - by l.thee.a
    I am trying to isolate a nasty bug which brings down my Linux kernel. I am printing messages to stderr, and stderr is redirected to a log file. Is there a way to disable buffering on the file access? When the kernel hangs, I lose the messages still sitting in the buffer.
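
    stderr itself is normally unbuffered, but the written data can still sit in the kernel's page cache when the machine locks up. A minimal sketch that pushes each message all the way to disk (assuming the messages come from a user-space C program):

        #include <stdio.h>
        #include <unistd.h>

        void log_line(FILE *log, const char *msg) {
            fprintf(log, "%s\n", msg);
            fflush(log);            /* flush stdio's buffer to the kernel */
            fsync(fileno(log));     /* flush the kernel's page cache to disk */
        }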

    Read the article

  • jQuery / JavaScript for uploading a file from the browser to the server

    - by Lalit
    Hi, I am developing an application in ASP.NET MVC with C#. I want a div to pop up so the user can upload an image file from their browser to the server, into the application domain's file system, as usual. This question may be a repeat, but I'd like something more: how do I build this scenario, what security issues may come up, and what care do I have to take while coding from a security perspective?
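
    A sketch of the server side in ASP.NET MVC (controller, action, and paths are hypothetical); the popup div would contain a form with enctype="multipart/form-data" and an <input type="file"> posting to this action:

        using System.Web;
        using System.Web.Mvc;

        public class UploadController : Controller
        {
            [HttpPost]
            public ActionResult Upload(HttpPostedFileBase file)
            {
                if (file != null && file.ContentLength > 0)
                {
                    // never trust the client-supplied name: strip any path component
                    // and, in real code, whitelist allowed extensions and limit the size
                    var name = System.IO.Path.GetFileName(file.FileName);
                    var path = Server.MapPath("~/App_Data/uploads/" + name);
                    file.SaveAs(path);
                }
                return RedirectToAction("Index");
            }
        }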

    Read the article

  • Removing a file from TortoiseHG data source

    - by Hossein Margani
    Hi! I am using TortoiseHg for source code control on Windows. I forgot to edit the ".hgignore" file, and now I have a huge ".hg" folder, which I know is because of DLL, EXE and PDB files that I do not need. Changing the ignore file now does not remove those files. What should I do to delete these files completely from my TortoiseHg repository? Thank you.
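
    A sketch of how to stop the files growing the repository further (patterns are illustrative); note that hg forget only affects future commits, so shrinking the existing .hg history is a separate, repository-rewriting step (for example hg convert with a filemap):

        hg forget "glob:**.dll" "glob:**.exe" "glob:**.pdb"
        hg commit -m "Stop tracking build outputs"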

    Read the article

  • Prepending to a multi-gigabyte file.

    - by dafmetal
    What would be the most performant way to prepend a single character to a multi-gigabyte file (in my practical case, a 40 GB file)? There is no limitation on how this is implemented: it can be a tool, a shell script, a program in any programming language, ...
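
    A minimal shell sketch of the usual approach (file names hypothetical): common file systems have no way to prepend in place, so the file gets rewritten once, which needs enough free space for a second copy.

        { printf 'X'; cat huge.bin; } > huge.new   # new prefix, then the old contents
        mv huge.new huge.bin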

    Read the article

  • Industrial strength cloud file storage

    - by ArthurG
    I'm looking for an industrial-strength cloud file storage system. It will be used by multiple people in a startup. Our requirements:
    - Transparent file system access: files and folders in the file system must be able to transparently access (read and write) files in the cloud; files must be synchronized whenever network access is available and buffered otherwise. The system must be usable by non-technical people.
    - Access control: we need to control who can access which files, at least on a very coarse basis. E.g., the developers will be able to access the system design documents, only the corporate folks can access recruiting documents, and only management can access certain corporate documents. Dropbox provides this via shared folders, but that's not adequate, if I understand it correctly, because there's no authentication of the sharing user. So the cloud service should have a notion of an account (our startup) with multiple users, with distinct credentials and rights for each user.
    - Clients: it must be accessible from Macs and PCs; I would hope that it supports Linux (e.g., Ubuntu) too.
    - Security: it must provide robust security.
    - Backup: the cloud service must reliably back up the files.
    - Versioning: change version history is a big plus, but not required.
    - Not free: we're willing to pay for the service.
    So far, we've reviewed the following, albeit not completely thoroughly:
    - Dropbox: has all except 1) access control, which is provided via shared folders, but that's not adequate, if I understand it correctly, because there's no authentication of the sharing user, and 2) security, as discussed here http://www.economist.com/blogs/babbage/2011/05/internet_security and here http://blog.dropbox.com/?p=821.
    - Windows Live Mesh: has all except 1) clients, only supporting Windows 7 and OS X.
    - SpiderOak: has all except 1) transparent file system access, which is only available for one user.
    - Amazon Cloud: doesn't offer 1) transparent file system access.
    - Rackspace Cloud Drive: has all except 1) access control and 2) versioning.
    I'll gladly include any clarifications or additional systems the community provides. Arthur

    Read the article
