Search Results

Search found 71242 results on 2850 pages for 'temp file guy'.

Page 108/2850

  • [NSData dataWithContentsOfFile:path] doesn't work

    - by Felics
    Hello, I have the following code to read a binary file:

        NSString* file = [NSString stringWithUTF8String:fileName];
        NSString* filePath = resource
            ? [[NSBundle mainBundle] pathForResource:file ofType:nil]
            : [[NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0] stringByAppendingPathComponent:file];
        NSData* fileData = [NSData dataWithContentsOfFile:filePath];

    Here "fileName" and "resource" are parameters of the load function; "resource" indicates whether the file is located in the application bundle or in Documents. Sometimes this code works and sometimes it doesn't, and the failures appear to be random: I can run the code 10 times in a row and it works fine, and then it returns nil data without any modification on my part. Does anybody know what the problem could be? Could it be related to the file extension or file name? Thank you. PS: I use this code on the iPhone Simulator and the file exists in the application bundle.

    Read the article

  • Anyone know any good online file manager backend with user logins?

    - by skyhigh
    Hi, I'm looking for a backend system where your clients can log in and upload files to your server, download files from the server, and you can delete users, create users, etc. I don't know the proper name for this kind of software; maybe it's called an online file manager? Any recommendations? My server supports PHP, Apache and MySQL. Thanks

    Read the article

  • Time on files differs by 1 sec, so Robocopy sync fails

    - by csmba
    I am trying to use Robocopy to sync (/IMG) a folder on my PC and a shared network drive. The problem is that the file timestamps differ by 1 sec between the two locations (creation, modified and access), so every time I run Robocopy it syncs the file again... BTW, the problem is the same if I delete the target file and robocopy it fresh: the new file still has timestamps that differ by 1 sec. Environment details: the source is Windows 7 64-bit; the target is a WD My Book World Edition NAS 1 TB, which takes its time from the online NTP pool pool.ntp.org (I don't know whether its file system is FAT or not).

    Read the article

  • PHP: Fastest way possible to read contents of a file.

    - by SoLoGHoST
    OK, I'm looking for the fastest possible way to read all of the contents of a file via PHP, given a filepath on the server; these files can be huge, so it's very important that it does a read-only pass as fast as possible. Is reading it line by line faster than reading the entire contents at once? I remember reading somewhere that reading the entire contents can produce errors for huge files. Is this true?

    Read the article

  • xcopy not accepting a relative path as source parameter on certain computers

    - by slicedtoad
        xcopy /e /q ".\dlls\*.*" "%programfiles(x86)%\foo" >> TEMP
        xcopy /e /q dlls "%programfiles(x86)%\foo" >> TEMP
        xcopy /e /q ".\dlls" "%programfiles(x86)%\foo" >> TEMP

    All of the above work on two of my machines (Windows 7 64-bit). But on two peers' laptops (Windows 7 64-bit and Windows 8 64-bit) they return "file dlls not found" or, in the case of the first one, "file *.* not found". Can someone shed some light here? The only difference I can see between the machines is possibly permissions, but I don't see how that would affect xcopy's ability to recognize a local path.

    Read the article

  • Quick-sort doesn't work with middle pivot element

    - by Bobby
    I am trying to sort an array of elements using the quick-sort algorithm, but I am not sure where I am going wrong. I choose the middle element as the pivot every time and then check the conditions. Here is my code:

        void quicksort(int *temp, int p, int r)
        {
            if (r > p + 1)
            {
                int mid = (p + r) / 2;
                int piv = temp[mid];
                int left = p + 1;
                int right = r;
                while (left < right)
                {
                    if (temp[left] <= piv)
                        left++;
                    else
                        swap(&temp[left], &temp[--right]);
                }
                swap(&temp[--left], &temp[p]);
                quicksort(temp, p, left);
                quicksort(temp, right, r);
            }
        }
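
    For comparison, here is a minimal sketch of a working middle-pivot partition (the classic Hoare-style scheme), written in Python rather than C; the swap direction and the recursion bounds are the parts that most often go wrong. The in-place interface is just an illustrative choice:

        def quicksort(a, lo=0, hi=None):
            # In-place quicksort using the middle element as the pivot.
            if hi is None:
                hi = len(a) - 1
            if lo >= hi:
                return
            pivot = a[(lo + hi) // 2]
            i, j = lo, hi
            while i <= j:
                while a[i] < pivot:    # skip elements already on the correct side
                    i += 1
                while a[j] > pivot:
                    j -= 1
                if i <= j:
                    a[i], a[j] = a[j], a[i]
                    i += 1
                    j -= 1
            quicksort(a, lo, j)        # recurse on the two partitions
            quicksort(a, i, hi)

        data = [9, 3, 7, 1, 8, 2]
        quicksort(data)
        print(data)                    # [1, 2, 3, 7, 8, 9]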

    Read the article

  • Is there a way to efficiently yield every file in a directory containing millions of files?

    - by Josh Smeaton
    I'm aware of os.listdir, but as far as I can gather, that gets all the filenames in a directory into memory and then returns the list. What I want is a way to yield a filename, work on it, and then yield the next one, without reading them all into memory. Is there any way to do this? I worry about the case where filenames change, new files are added, and files are deleted while using such a method. Some iterators prevent you from modifying the collection during iteration, essentially by taking a snapshot of the state of the collection at the beginning and comparing that state on each move operation. If there is an iterator capable of yielding filenames from a path, does it raise an error if there are filesystem changes (files added, removed, or renamed within the iterated directory) that modify the collection?

    There could potentially be a few cases that could cause the iterator to fail, and it all depends on how the iterator maintains state. Using S.Lott's example: filea.txt, fileb.txt, filec.txt. The iterator yields filea.txt. During processing, filea.txt is renamed to filey.txt and fileb.txt is renamed to filez.txt. When the iterator attempts to get the next file, if it were to use the filename filea.txt to find its current position in order to find the next file, and filea.txt is not there, what would happen? It may not be able to recover its position in the collection. Similarly, if the iterator were to fetch fileb.txt while yielding filea.txt, it could look up the position of fileb.txt, fail, and produce an error. If the iterator instead were able to somehow maintain an index, dir.get_file(0), then maintaining positional state would not be affected, but some files could be missed, as their indexes could be moved to an index 'behind' the iterator. This is all theoretical, of course, since there appears to be no built-in (Python) way of iterating over the files in a directory. There are some great answers below, however, that solve the problem by using queues and notifications.

    Edit: The OS of concern is Red Hat. My use case is this: process A is continuously writing files to a storage location. Process B (the one I'm writing) will be iterating over these files, doing some processing based on the filename, and moving the files to another location.

    Edit: Definition of valid: adjective. 1. Well grounded or justifiable, pertinent. (Sorry S.Lott, I couldn't resist.) I've edited the paragraph in question above.
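
    For what it's worth, later Python versions added os.scandir (Python 3.5+), which returns a lazy iterator over directory entries rather than a fully built list. A minimal sketch of the yield-one-filename-at-a-time pattern is below; the path and the per-file work are just placeholders. It makes no consistency guarantees either: entries added, removed or renamed during iteration may or may not be seen, which matches the snapshot/positional-state concern above.

        import os

        def iter_filenames(path):
            # os.scandir yields DirEntry objects lazily instead of
            # materialising the whole listing the way os.listdir does.
            for entry in os.scandir(path):
                if entry.is_file():
                    yield entry.name

        # Hypothetical usage for "process B": work on one filename at a time.
        for name in iter_filenames("/some/storage/location"):
            print("processing", name)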

    Read the article

  • Need a JProgressBar to measure progress when copying directories and files

    - by user1815823
    I have the code below to copy directories and files, but I am not sure where to measure the progress. Can someone help with where I can measure how much has been copied, so I can show it in a JProgressBar?

        public static void copy(File src, File dest) throws IOException {
            if (src.isDirectory()) {
                if (!dest.exists()) { // check whether the destination directory exists
                    dest.mkdir();
                    System.out.println("Directory copied from " + src + " to " + dest);
                }
                String files[] = src.list();
                for (String file : files) {
                    File srcFile = new File(src, file);
                    File destFile = new File(dest, file);
                    copy(srcFile, destFile); // recurse into subdirectories
                }
            } else {
                InputStream in = new FileInputStream(src);
                OutputStream out = new FileOutputStream(dest);
                byte[] buffer = new byte[1024];
                int length;
                while ((length = in.read(buffer)) > 0) {
                    out.write(buffer, 0, length);
                }
                in.close();
                out.close();
                System.out.println("File copied from " + src + " to " + dest);
            }
        }
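
    Independently of the Java specifics, the usual technique is to total up the bytes to copy first and then update the progress indicator after every buffer that is written, i.e. inside the copy loop. Below is a rough sketch of that idea in Python; the paths and the on_progress callback (standing in for JProgressBar.setValue) are hypothetical:

        import os

        def total_bytes(src):
            # Sum the sizes of every file under src so progress can be a percentage.
            return sum(os.path.getsize(os.path.join(root, name))
                       for root, _, names in os.walk(src) for name in names)

        def copy_with_progress(src, dest, on_progress, chunk=64 * 1024):
            total = total_bytes(src) or 1
            copied = 0
            for root, _, names in os.walk(src):
                target_dir = os.path.join(dest, os.path.relpath(root, src))
                os.makedirs(target_dir, exist_ok=True)
                for name in names:
                    with open(os.path.join(root, name), "rb") as fin, \
                         open(os.path.join(target_dir, name), "wb") as fout:
                        while True:
                            buf = fin.read(chunk)
                            if not buf:
                                break
                            fout.write(buf)
                            copied += len(buf)
                            on_progress(100 * copied // total)  # where a progress bar would be updated

        copy_with_progress("/tmp/src", "/tmp/dest", lambda pct: print(pct, "%"))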

    Read the article

  • Referencing Java resource files from ColdFusion

    - by Chimeara
    I am using a .jar file containing a .properties file from my CF code; however, it seems unable to find the .properties file when run from CF. My Java code is:

        String key = "";
        String value = "";
        try {
            File file = new File("src/test.properties");
            FileInputStream fileInput = new FileInputStream(file);
            Properties properties = new Properties();
            properties.load(fileInput);
            fileInput.close();
            Enumeration enuKeys = properties.keys();
            while (enuKeys.hasMoreElements()) {
                key = (String) enuKeys.nextElement();
                value = properties.getProperty(key);
                // System.out.println(key + ": " + value);
            }
        } catch (FileNotFoundException e) {
            e.printStackTrace();
            key = "error";
        } catch (IOException e) {
            e.printStackTrace();
            key = "error";
        }
        return (key + ": " + value);

    I have my test.properties file in the project src folder and make sure it is selected when compiling. When run from Eclipse it gives the expected key and value; however, when run from CF I get the caught errors. My CF code is simply:

        propTest = CreateObject("java", "package.class");
        testResults = propTest.main2();

    Is there a special way to reference the .properties file so CF can access it, or do I need to include the file outside the .jar somewhere?

    Read the article

  • Can I use PHP's fwrite with 644 file permissions?

    - by filip
    I am trying to set up automated .htaccess updating. This clearly needs to be as secure as possible; however, right now the best I can do permission-wise is 666. What can I do to set up either my server or my PHP code so that my script's fwrite() command will work with 644 or better? For instance, is there a way to make my script(s) run as the owner?

    Read the article

  • Beginner servlet question: accessing files in a .war, which path?

    - by Navigateur
    When a third-party library I'm using tries to access a file, I'm getting "Error opening ... file ... (No such file or directory)" even though I KNOW the file is in the WAR. I've tried both packaged (.war) and "exploded" (directory) deployment, and the file is definitely there. I've tried setting full permissions on it too. It's on Unix (Ubuntu). The file is war/dict/index.sense and the error is "dict/index.sense (No such file or directory)". It works fine on my Windows computer when running in hosted mode as a GWT app from Eclipse, just not when I transfer it to the Unix machine for deployment. My question is: has anybody experienced this before, and/or are there differences in relative paths that I should consider, i.e., what is the root path for relative file access in a WAR?

    Read the article

  • [Python] How to create a named temporary file in memory?

    - by conradlee
    I would like to use Python's tempfile module to create a temporary file that I will use for communication between processes (use of pipes is awkward). The documentation I've linked to above shows two functions that almost do what I want:

        tempfile.NamedTemporaryFile    # For creating named tempfiles
        tempfile.SpooledTemporaryFile  # For creating tempfiles in memory

    but actually I want a tempfile that is both named AND in memory. Any ideas?
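
    One common workaround, sketched below under the assumption of a Linux host where /dev/shm is a RAM-backed tmpfs mount: a NamedTemporaryFile created there has a real path that other processes can open, while its contents stay in memory (unless the system swaps).

        import tempfile

        # Assumes /dev/shm exists and is tmpfs (typical on Linux); not portable.
        with tempfile.NamedTemporaryFile(dir="/dev/shm", suffix=".tmp") as tmp:
            tmp.write(b"data for the other process\n")
            tmp.flush()
            print("readable by other processes at:", tmp.name)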

    Read the article

  • Empty R environment becomes large file when saved

    - by user1052019
    I'm getting behaviour I don't understand when saving environments. The code below demonstrates the problem. I would have expected the two files (far-too-big.RData and right-size.RData) to be the same size, and also very small, because the environments they contain are empty. In fact, far-too-big.RData ends up the same size as bigfile.RData. I get the same results using 2.14.1 and 2.15.2, both on WinXP 5.1 SP3. Can anyone explain why this is happening? Thanks.

        a <- matrix(runif(1000000, 0, 1), ncol=1000)
        save(a, file="c:/temp/bigfile.RData")

        test <- function() {
            load("c:/temp/bigfile.RData")
            test <- new.env()
            save(test, file="c:/temp/far-too-big.RData")
            test1 <- new.env(parent=globalenv())
            save(test1, file="c:/temp/right-size.RData")
        }
        test()

    Read the article

  • Why can't I reserve 1,000,000,000 elements in my vector?

    - by vipersnake005
    When I run the following code, I get the output 1073741823.

        #include <iostream>
        #include <vector>
        using namespace std;

        int main()
        {
            vector<int> v;
            cout << v.max_size();
            return 0;
        }

    However, when I try to resize the vector to 1,000,000,000 with v.resize(1000000000); the program stops executing. How can I enable the program to allocate the required memory, when it seems that it should be able to? I am using MinGW on Windows 7 and I have 2 GB RAM. Should it not be possible? In case it is not possible, can't I declare it as an array of integers and get away with it? But even that doesn't work.

    Another thing: suppose I were to use a file instead (which can easily handle so much data). How can I read and write it at the same time? Using

        fstream file("file.txt", ios::out | ios::in);

    doesn't create a file in the first place. But supposing the file exists, I am unable to do reading and writing simultaneously. What I mean is this: let the contents of the file be 111111. Then if I run:

        #include <fstream>
        #include <iostream>
        using namespace std;

        int main()
        {
            fstream file("file.txt", ios::in | ios::out);
            char x;
            while (file >> x)
            {
                file << '0';
            }
            return 0;
        }

    shouldn't the file's contents now be 101010? Read one character and then overwrite the next one with 0? Or, in case the entire contents were read at once into some buffer, should there not be at least one 0 in the file, i.e. 1111110? But the contents remain unaltered. Please explain. Thank you.
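
    Setting the C++ stream-state details aside, the second half of the question comes down to a general rule: when one handle alternates between reading and writing, the position has to be re-established explicitly when switching direction. Here is a rough sketch of the intended 111111 -> 101010 transformation, written in Python against a binary-mode file; the filename is just the one from the question:

        with open("file.txt", "rb+") as f:
            while f.read(1):          # consume one byte; stop at end of file
                pos = f.tell()
                if not f.read(1):     # peek at the byte we intend to overwrite
                    break
                f.seek(pos)           # reposition before switching to writing
                f.write(b"0")         # overwrite that byte with '0'
        # A file containing 111111 ends up containing 101010.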

    Read the article

  • Error reading file with accented vowels

    - by Daniel Dcs
    The following code, which fills a list from a file:

        action = []
        with open(os.getcwd() + "/files/" + "actions.txt") as temp:
            action = list(temp)

    gives me the following error:

        (result, consumed) = self._buffer_decode(data, self.errors, end)
        UnicodeDecodeError: 'utf-8' codec can't decode byte 0xf1 in position 67: invalid continuation byte

    If I add errors='ignore':

        action = []
        with open(os.getcwd() + "/files/" + "actions.txt", errors='ignore') as temp:
            action = list(temp)

    the file is read, but without the ñ and the accented vowels á-é-í-ó-ú, even though Python 3, as I understand it, defaults to 'utf-8'. I've been looking for a solution for two or more days and I'm only getting more confused. Thank you very much in advance for any suggestions.
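
    Byte 0xf1 is 'ñ' in Latin-1 and Windows-1252, so the file was most likely saved in one of those encodings rather than UTF-8. A minimal sketch, assuming that is the case, is to tell open() the encoding explicitly instead of discarding the offending bytes:

        import os

        path = os.path.join(os.getcwd(), "files", "actions.txt")
        # Assumes the file is Latin-1 / Windows-1252 encoded; use "cp1252" if it
        # came from a Windows editor, "latin-1" otherwise.
        with open(path, encoding="latin-1") as temp:
            action = list(temp)

        print(action[:3])   # lines now keep ñ and the accented vowels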

    Read the article
