Search Results

Search found 40479 results on 1620 pages for 'binary files'.

  • Loading all files in a directory in a Java applet

    - by WarrenB
    How would one go about programmatically loading all the resource files in a given directory of a JAR file for an applet? The resources will probably change several times over the lifetime of the program, so I don't want to hardcode the names. Normally I would just traverse the directory structure using File.list(), but I get permission issues when trying to do that within an applet. I also looked at using an enumeration with something like ClassLoader.getResources(), but it only finds files of the same name within the JAR file. Essentially what I want to do is (something like) this:

        ClassLoader imagesURL = this.getClass().getClassLoader();
        MediaTracker tracker = new MediaTracker(this);
        Enumeration<URL> images = imagesURL.getResources("resources/images/image*.gif");
        int i = 0;
        while (images.hasMoreElements()) {
            tracker.add(getImage(images.nextElement()), i);
            i++;
        }

    I know I'm probably missing some obvious function, but I've spent hours searching through tutorials and documentation for a simple way to do this within an unsigned applet.
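
    One approach, sketched below: derive a jar: URL from a class known to live in the applet's JAR, then list the JAR's entries through JarURLConnection and filter by name. This assumes the images ship in the same JAR as the applet's classes, and reading the applet's own JAR should not require signing; treat it as a sketch rather than a drop-in.

        import java.io.IOException;
        import java.net.JarURLConnection;
        import java.net.URL;
        import java.util.Enumeration;
        import java.util.jar.JarEntry;
        import java.util.jar.JarFile;

        // Inside your Applet subclass:
        private void loadImages(MediaTracker tracker) throws IOException {
            // Resolves to a jar: URL when this class was loaded from a JAR.
            URL classUrl = getClass().getResource(getClass().getSimpleName() + ".class");
            JarURLConnection conn = (JarURLConnection) classUrl.openConnection();
            JarFile jar = conn.getJarFile(); // the applet's own JAR
            int i = 0;
            for (Enumeration<JarEntry> entries = jar.entries(); entries.hasMoreElements(); ) {
                String name = entries.nextElement().getName();
                if (name.startsWith("resources/images/image") && name.endsWith(".gif")) {
                    tracker.add(getImage(getClass().getResource("/" + name)), i++);
                }
            }
        }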

  • Problem with referencing CSS and Javascript files relatively

    - by Markus
    I have an IIS web site that contains other web sites, so the structure is like this:

        \
        MainWebSite\
        App1\
        App2\

    All sites are ASP.NET MVC web applications. In the master page of App1, I reference the script files like this:

        <script type="text/javascript" src="../../Scripts/jquery-ui-1.8.custom.min.js"></script>

    The problem is that it now tries to find the file at http://server/MainWebSite/Scripts/... How can I work around that? Should I put all my scripts and CSS files into the root directory? Is that a preferred solution?
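
    There should be no need to move the files. One hedged fix, assuming the ASPX view engine that MVC 2 ships with: build the URL from the application root with Url.Content, which resolves the ~ app-relative prefix correctly however deeply the application is nested:

        <script type="text/javascript"
                src="<%= Url.Content("~/Scripts/jquery-ui-1.8.custom.min.js") %>"></script>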

  • Is there a limit for the number of files in a directory on an SD card?

    - by jamesh
    I have a project written for Android devices. It generates a large number of files each day, all text files and images, and the app uses a database to reference these files. The app is supposed to clear up these files after a little use (perhaps after a few days), but that process may or may not be working; it is not the subject of this question. Due to a historic accident, the organization of the files is somewhat naive: everything is in the same directory, a .hidden directory containing a zero-byte .nomedia file to prevent the MediaScanner indexing it. Today I am seeing this error reported:

        java.io.IOException: Cannot create: /sdcard/.hidden/file-4200.html
            at java.io.File.createNewFile(File.java:1263)

    Regarding the SD card, I see it has plenty of storage left, but counting the files:

        $ cd /Volumes/NO_NAME/.hidden
        $ ls | wc -w
        9058

    Deleting a number of files seems to have allowed the file creation for today to proceed. Regrettably, I did not try touching a new file to try to reproduce the error on the command line; I also deleted several hundred files rather than a handful. However, my questions are: are there hard limits on file size or on the number of files in a directory, and am I even on the right track here? Nota bene: the SD card is as-is, i.e. I haven't formatted it, so I would guess it would be a FAT-* format. FAT32 has a hard file-size limit of 2 GB (well above the file sizes I am dealing with) and a limit on the number of files in the root directory; I am definitely not writing files in the root directory.
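
    For what it's worth, FAT32 caps any directory at on the order of 65,000 directory entries, and every long filename consumes several of those entries, so the practical per-directory file count can be dramatically lower than the nominal figure; that is consistent with creation failing while free space remains. A hedged sketch of the usual mitigation (the two-hex-digit shard scheme and names here are mine, not from the question):

        import java.io.File;

        // In your file-manager class: spread files over 256 subdirectories
        // keyed on a hash of the name, so no single FAT directory
        // accumulates tens of thousands of entries.
        static File shardedFile(File root, String name) {
            String shard = String.format("%02x", name.hashCode() & 0xff);
            File dir = new File(root, shard);
            dir.mkdirs(); // no-op when the shard directory already exists
            return new File(dir, name);
        }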

  • Editing Multiple files in vi with Wildcards

    - by Alan Storm
    When using the programmer's text editor vi, I'll often use a wildcard to be lazy about the file I want to edit:

        vi ThisIsAReallLongFi*.txt

    When this matches a single file it works great. However, if it matches multiple files, vi does something weird. First, it opens the first file for editing. Second, when I :wq out of that file, I get a message at the bottom of the terminal that looks like this:

        E173: 4 more files to edit
        Hit ENTER or type command to continue

    When I hit enter, it returns me to edit mode in the file I was just in. The behavior I'd expect here would be for vi to move on to the next file to edit. So: what's the logic behind vi's behavior here, and is there a way to move on and edit the next matched file? And yes, I know about tab completion; this question is based on curiosity and wanting to understand the shell better.
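
    For what it's worth: the shell expands the wildcard into an argument list before vi ever starts, and E173 is vi guarding against quitting while arguments remain unvisited (repeating :wq immediately does quit). These are the standard ex commands for walking the argument list:

        :args   list the matched files, with the current one bracketed
        :n      (:next) edit the next file in the argument list
        :wn     (:wnext) write the current file, then edit the next one
        :N      (:previous) step back to the previous file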

  • Why is lua crashing after extracting zip files?

    - by Brian T Hannan
    I have the following code. It successfully extracts all the files and puts them in the right location, but it crashes every time it reaches the end of the function:

        require "zip"

        function ExtractZipAndCopyFiles(zipPath, zipFilename, destinationPath)
            local zfile, err = zip.open(zipPath .. zipFilename)
            -- iterate through each file inside the zip file
            for file in zfile:files() do
                local currFile, err = zfile:open(file.filename)
                local currFileContents = currFile:read("*a") -- read entire contents of current file
                -- write the current file inside the zip to a file outside the zip
                local hBinaryOutput = io.open(destinationPath .. file.filename, "wb")
                if hBinaryOutput then
                    hBinaryOutput:write(currFileContents)
                    hBinaryOutput:close()
                end
            end
            zfile:close()
        end

        -- call the function
        ExtractZipAndCopyFiles("C:\\Users\\bhannan\\Desktop\\LUA\\",
                               "example.zip",
                               "C:\\Users\\bhannan\\Desktop\\ZipExtractionOutput\\")

    Why does it crash every time it reaches the end?
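
    One hedged guess, not a confirmed luazip diagnosis: the inner handles returned by zfile:open() are never closed, so they are still open when zfile:close() runs, and directory entries inside the archive (names ending in "/") can also make the io.open call fail. A sketch of the loop with both addressed:

        for file in zfile:files() do
            if not file.filename:match("/$") then -- skip directory entries
                local currFile = assert(zfile:open(file.filename))
                local contents = currFile:read("*a")
                currFile:close() -- close the inner handle before moving on
                local out = io.open(destinationPath .. file.filename, "wb")
                if out then
                    out:write(contents)
                    out:close()
                end
            end
        end
        zfile:close()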

  • How to consolidate multiple LOG files into one .LDF file in SQL2000

    - by John Galt
    Here is what sp_helpfile says about my current database (recovery model is Simple) in SQL 2000:

        name                     fileid  filename                                 size        maxsize    growth      usage
        MasterScratchPad_Data    1       C:\SQLDATA\MasterScratchPad_Data.MDF     6041600 KB  Unlimited  5120000 KB  data only
        MasterScratchPad_Log     2       C:\SQLDATA\MasterScratchPad_Log.LDF      2111304 KB  Unlimited  10%         log only
        MasterScratchPad_X1_Log  3       E:\SQLDATA\MasterScratchPad_X1_Log.LDF   191944 KB   Unlimited  10%         log only

    I'm trying to prepare this for a detach and then an attach to a SQL 2008 instance, but I don't want the second .LDF file (I'd like to have just one file for the log). I have backed up the database. I have issued BACKUP LOG MasterScratchPad WITH TRUNCATE_ONLY. I have run multiple DBCC SHRINKFILE commands on both of the log files. How can I accomplish this goal of having just one .LDF? I cannot find anything on how to delete the file with fileid 3 and/or how to consolidate multiple files into one log file.
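
    A hedged sketch of the usual sequence for dropping the extra log file; the REMOVE FILE step succeeds only once fileid 3 holds no active log records, so it may take another log backup (or a pause in write activity) and a retry:

        USE MasterScratchPad;
        GO
        -- Truncate again immediately before the attempt, so the active tail
        -- of the log has a chance to move off the second file.
        BACKUP LOG MasterScratchPad WITH TRUNCATE_ONLY;
        GO
        DBCC SHRINKFILE (MasterScratchPad_X1_Log);
        GO
        -- Complains that the file is not empty while active log remains in
        -- it; back up the log again and retry until it succeeds.
        ALTER DATABASE MasterScratchPad REMOVE FILE MasterScratchPad_X1_Log;
        GO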

  • Flash uploader that can handle >2GB files?

    - by Alvin SMith
    Is there an open-source Flash uploader that can handle files larger than 2 GB? ASP.NET implementations like SlickUpload are not an option, and SWFUpload (and the others I've seen) do not handle files larger than 2 GB; requiring the user to have Java installed to run an applet is not an option either. This would be for both IE and Firefox. I've seen a couple of "large file transfer" sites that have a Flash uploader and claim to go past the 2 GB limit (which is the limit for HTTP uploads in most browsers), so I know it is technically possible.

  • Parsing log files in a folder in ColdFusion

    - by Simon Guo
    The problem: there is a folder, ./log/, containing files like jan2010.xml, feb2010.xml, mar2010.xml, jan2009.xml, feb2009.xml, mar2009.xml, and so on. Each XML file looks like:

        <root><record name="bob" spend="20"></record>...(more records)</root>

    I want to write a piece of ColdFusion code (log.cfm) that simply parses those XML files. For the front end I would let the user choose a year and then click a submit button. All the content for that year would then show up in a separate table for each month. Each table shows the total money spent by each person, like:

        person  cost
        bob     200
        mike    300
        Total   500

    Thanks.
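
    A sketch of the parsing side, assuming the file-naming convention above and a submitted form field named "year"; cfdirectory, cffile and xmlParse are standard CFML, everything else is illustrative:

        <cfdirectory action="list" directory="#expandPath('./log/')#"
                     filter="*#form.year#.xml" name="qFiles">
        <cfloop query="qFiles">
            <cffile action="read" file="#qFiles.directory#/#qFiles.name#" variable="rawXml">
            <cfset doc = xmlParse(rawXml)>
            <cfset totals = structNew()>
            <cfloop array="#doc.root.XmlChildren#" index="rec">
                <cfset person = rec.XmlAttributes.name>
                <cfif not structKeyExists(totals, person)>
                    <cfset totals[person] = 0>
                </cfif>
                <cfset totals[person] = totals[person] + rec.XmlAttributes.spend>
            </cfloop>
            <!--- totals now maps person to cost for this month's file; render
                  it as one table here, summing the values for the Total row --->
        </cfloop>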

  • DailyRollingFileHandler: files should be rotated on a daily basis

    - by nag
    We have a requirement for a handler, extended from java.util.logging, that rotates its files on a daily basis. Currently java.util.logging only supports rotation based on file size, via FileHandler; it does not support daily rotation (http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6350749). The handler at http://www.x4juli.org/api/org/x4juli/handlers/RollingFileHandler.html does not seem promising either. So what we are looking for is a handler that allows daily rotation. We would like to write such a handler: which is the appropriate class to extend, StreamHandler or FileHandler? A second question: is there a way to configure two different files for a single handler, say a FileHandler? For example, we would like some kinds of messages captured in one file and other messages in the other. Would appreciate any comments.
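
    On the first question, StreamHandler is the usual base class, since FileHandler's size-based rotation logic is private and not designed for overriding. On the second, a java.util.logging handler writes to a single target, so two files generally means attaching two handlers, each with a Filter selecting its messages. A minimal daily-rolling sketch (the class name and the "%d" pattern token are mine, not a standard API):

        import java.io.FileOutputStream;
        import java.io.IOException;
        import java.text.SimpleDateFormat;
        import java.util.Date;
        import java.util.logging.ErrorManager;
        import java.util.logging.LogRecord;
        import java.util.logging.StreamHandler;

        public class DailyRollingFileHandler extends StreamHandler {
            private final SimpleDateFormat dayFormat = new SimpleDateFormat("yyyy-MM-dd");
            private final String pattern; // e.g. "app-%d.log"
            private String currentDay;

            public DailyRollingFileHandler(String pattern) throws IOException {
                this.pattern = pattern;
                roll(dayFormat.format(new Date()));
            }

            // setOutputStream() flushes and closes the previous stream, which
            // is what makes StreamHandler a convenient base for rolling.
            private void roll(String day) throws IOException {
                currentDay = day;
                setOutputStream(new FileOutputStream(pattern.replace("%d", day), true));
            }

            @Override
            public synchronized void publish(LogRecord record) {
                String today = dayFormat.format(new Date());
                if (!today.equals(currentDay)) {
                    try {
                        roll(today);
                    } catch (IOException e) {
                        reportError("failed to roll log file", e, ErrorManager.OPEN_FAILURE);
                    }
                }
                super.publish(record);
                flush();
            }
        }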

  • How can you configure or extend BITS (Background Intelligent Transfer Service) to read files from a database?

    - by Mark
    I have a load-balanced ASP.NET application (web service and website) that runs on SQL Server. I need to be able to provide large files for download. However, because of the load balancing, the files are stored in the SQL database rather than on the file system. BITS seems to be the best approach, and I have full control of the client. However, I don't know how to configure BITS to read the file from the database. I know how to write the C# code for that, but I don't know how to get BITS to hook into it rather than reading the file from the file system. Any ideas?
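
    BITS itself doesn't know about file systems or databases; it issues plain HTTP requests and needs the server to honor Range requests (plus HEAD, whose empty body ASP.NET produces for you). So the usual approach is to point the BITS job at a URL served by your own handler, which streams the bytes out of SQL Server. A sketch, not a hardened handler, with GetBlobBytes standing in for your own data access:

        using System;
        using System.Web;

        public class BitsDownloadHandler : IHttpHandler
        {
            public bool IsReusable { get { return true; } }

            public void ProcessRequest(HttpContext ctx)
            {
                byte[] blob = GetBlobBytes(ctx.Request.QueryString["id"]);
                int start = 0, end = blob.Length - 1;

                string range = ctx.Request.Headers["Range"]; // e.g. "bytes=0-4095"
                if (!string.IsNullOrEmpty(range) && range.StartsWith("bytes="))
                {
                    string[] parts = range.Substring(6).Split('-');
                    if (parts[0].Length > 0) start = int.Parse(parts[0]);
                    if (parts[1].Length > 0) end = int.Parse(parts[1]);
                    ctx.Response.StatusCode = 206; // Partial Content
                    ctx.Response.AppendHeader("Content-Range",
                        string.Format("bytes {0}-{1}/{2}", start, end, blob.Length));
                }
                ctx.Response.AppendHeader("Accept-Ranges", "bytes");
                ctx.Response.ContentType = "application/octet-stream";
                ctx.Response.OutputStream.Write(blob, start, end - start + 1);
            }

            private byte[] GetBlobBytes(string id)
            {
                // SELECT the varbinary column for this id here.
                throw new NotImplementedException();
            }
        }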

  • How to merge elements from two XML files?

    - by Googler
    Hi all, I need C# code to merge two XML files into one, keeping the specified content.

    XML file 1:

        <exchange-documents>
          <documentlegal>
            <bibliographic-data>
              <applicants>
                <applicant-name>
                  <name>CENTURY PRODUCTS CO [US]</name>
                </applicant-name>
              </applicants>
            </bibliographic-data>
          </documentlegal>
        </exchange-documents>

    XML file 2:

        <exchange-documents>
          <documentpatent>
            <bibliographic-data>
              <applicants>
                <applicant-name>
                  <name>CENTURY PRODUCTS CO [US]</name>
                </applicant-name>
              </applicants>
            </bibliographic-data>
          </documentpatent>
        </exchange-documents>

    I need to read the two XML files above and write selected elements into a new XML file. Desired output:

        <documentlegal>
          <bibliographic-data>
            <applicants>
              <applicant-name>
                <name>CENTURY PRODUCTS CO [US]</name>
              </applicant-name>
            </applicants>
          </bibliographic-data>
        </documentlegal>
        <documentpatent>
          <bibliographic-data>
            <applicants>
              <applicant-name>
                <name>CENTURY PRODUCTS CO [US]</name>
              </applicant-name>
            </applicants>
          </bibliographic-data>
        </documentpatent>

    I don't need the exchange-documents element. Can anyone provide me C# code to achieve the above scenario?
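
    A sketch with LINQ to XML (.NET 3.5+). Note the desired output above has two top-level elements, which is not well-formed XML on its own, so this version wraps them in a root element; the root's name is my choice, not from the question:

        using System.Linq;
        using System.Xml.Linq;

        class MergeXml
        {
            static void Main()
            {
                XDocument doc1 = XDocument.Load("file1.xml");
                XDocument doc2 = XDocument.Load("file2.xml");
                XDocument merged = new XDocument(
                    new XElement("documents", // drops <exchange-documents>
                        doc1.Root.Elements().Concat(doc2.Root.Elements())));
                merged.Save("output.xml");
            }
        }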

  • Document Management System - Where to Store Files?

    - by Diego AC
    Hey, Stack! I'm in charge of building an ASP.NET MVC document management system. It has to be able to do basic document management tasks like adding, editing and searching entries, and also perform versioning. Anyway, I'm targeting PDF, Office and many image formats as the file attached to each document entry in the database. My question is: what design guidelines do the pros follow when building the storage mechanism? Do they store the document files in the file system or in the database? How is file uploading handled? I used to upload the files to a temporary location while the user was editing the data, and move them to permanent storage when the user confirmed the entry creation. Is this good? Any suggestions for improvement?
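
    The temp-then-promote flow described above is a common pattern. A hedged sketch of it (the paths, names and per-version folder layout are mine; whether contents belong on disk or in the database is a separate trade-off around backup, transactions and serving):

        using System;
        using System.IO;

        public static class DocumentStore
        {
            static readonly string TempRoot = @"D:\DocStore\temp";
            static readonly string PermRoot = @"D:\DocStore\files";

            // Stage an upload while the user is still editing the entry.
            public static string StageUpload(Stream upload, string fileName)
            {
                string token = Guid.NewGuid().ToString("N");
                string path = Path.Combine(TempRoot, token + "_" + fileName);
                using (FileStream fs = File.Create(path))
                {
                    byte[] buffer = new byte[81920];
                    int read;
                    while ((read = upload.Read(buffer, 0, buffer.Length)) > 0)
                        fs.Write(buffer, 0, read);
                }
                return token; // keep the token with the draft entry
            }

            // Promote the staged file once the user confirms the entry;
            // per-version folders keep old revisions addressable.
            public static string Promote(string token, string fileName, int version)
            {
                string dir = Path.Combine(PermRoot, token, "v" + version);
                Directory.CreateDirectory(dir);
                string dest = Path.Combine(dir, fileName);
                File.Move(Path.Combine(TempRoot, token + "_" + fileName), dest);
                return dest; // store this path (or just the token) on the entry row
            }
        }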

  • System.Web.HttpException in asp.net mvc 2 on images and javascript files

    - by Rippo
    Hi, I am getting the following error, reported by ELMAH, on my ASP.NET MVC 2 site for JavaScript files, images, etc.:

        System.Web.HttpException: The remote host closed the connection

    I have done some research and it appears that the user/bot is clicking a link on the site before the page has fully loaded. This error never occurs on a controller action, but always on a file that is on disk, e.g.:

        /Content/CmsImages/logo.png
        /Content/CmsImages/MemberImages/Photo-001605.jpg
        /Content/jquery.tickertype.js

    So this means that all static files are being routed through the MVC pipeline. What options do I have?
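
    For what it's worth, ELMAH hooks application-level errors through an HttpModule, so these entries don't by themselves prove the static files run through the MVC routing pipeline; "the remote host closed the connection" just means the client went away mid-response. If the goal is to stop logging that noise, a hedged sketch for Global.asax (assuming Elmah.ErrorFilterModule is registered in web.config; 0x80072746 is the HResult commonly reported for this condition, but verify it against your own logged errors):

        void ErrorLog_Filtering(object sender, Elmah.ExceptionFilterEventArgs e)
        {
            var ex = e.Exception.GetBaseException() as System.Web.HttpException;
            if (ex != null && ex.ErrorCode == unchecked((int)0x80072746))
                e.Dismiss(); // drop client-disconnect noise from the log
        }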

  • golang dynamically parsing files

    - by Brian Voelker
    For parsing files I have set up a variable with template.ParseFiles, and I currently have to add each file manually. Two things: how would I be able to walk through a main folder and a multitude of subfolders and automatically add them to ParseFiles, so I don't have to add each file individually? And how would I be able to call a file in a subfolder that has the same name as another file? Currently I get an error at runtime if I add a same-named file to ParseFiles.

        var templates = template.Must(template.ParseFiles(
            "index.html",           // main file
            "subfolder/index.html", // same base name: errors at runtime
            "includes/header.html",
            "includes/footer.html",
        ))

        func main() {
            // Walk and ParseFiles
            filepath.Walk("files", func(path string, info os.FileInfo, err error) error {
                if !info.IsDir() {
                    // Add path to ParseFiles
                }
                return nil
            })
            http.HandleFunc("/", home)
            http.ListenAndServe(":8080", nil)
        }

        func home(w http.ResponseWriter, r *http.Request) {
            render(w, "index.html")
        }

        func render(w http.ResponseWriter, tmpl string) {
            err := templates.ExecuteTemplate(w, tmpl, nil)
            if err != nil {
                http.Error(w, err.Error(), http.StatusInternalServerError)
            }
        }
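
    template.ParseFiles keys every template by its base file name, which is why the two index.html files collide at runtime. A sketch of one workaround: walk the tree yourself and register each file under its path relative to the template root, so every name stays unique:

        package main

        import (
            "html/template"
            "io/ioutil"
            "os"
            "path/filepath"
        )

        // loadTemplates walks root and parses each file into one template set,
        // named by its slash-separated path relative to root.
        func loadTemplates(root string) (*template.Template, error) {
            t := template.New("root")
            err := filepath.Walk(root, func(path string, info os.FileInfo, err error) error {
                if err != nil || info.IsDir() {
                    return err
                }
                b, err := ioutil.ReadFile(path)
                if err != nil {
                    return err
                }
                rel, err := filepath.Rel(root, path)
                if err != nil {
                    return err
                }
                _, err = t.New(filepath.ToSlash(rel)).Parse(string(b))
                return err
            })
            return t, err
        }

        func main() {
            templates := template.Must(loadTemplates("files"))
            // Now both names are distinct:
            //   templates.ExecuteTemplate(w, "index.html", nil)
            //   templates.ExecuteTemplate(w, "subfolder/index.html", nil)
            _ = templates
        }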

  • How to know which files or folders have changed before committing

    - by Pedro
    My problem is how to know which files or folders have changed before doing a commit. I can add all the new files in my working copy before committing, and the repository changes, but if, for example, I delete one file from the working copy, I don't know how to register this change before committing. When you use TortoiseSVN, for example, before you commit the program shows all the changes in the working copy, and you can choose which changes to commit and which not to. Is there some way to do this using SharpSvn? Thanks for your answers!
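
    SharpSvn exposes the same information TortoiseSVN's commit dialog shows through SvnClient.Status, which reports every working-copy item whose state differs from BASE: modified, added, deleted, missing and unversioned paths. A sketch (the working-copy path is a placeholder):

        using System;
        using System.Collections.Generic;
        using SharpSvn;

        class WorkingCopyChanges
        {
            static void Main()
            {
                List<string> missing = new List<string>();
                using (SvnClient client = new SvnClient())
                {
                    client.Status(@"C:\path\to\workingcopy", (sender, e) =>
                    {
                        // Modified, Added, Deleted, Missing, NotVersioned, ...
                        Console.WriteLine("{0}: {1}", e.LocalContentStatus, e.FullPath);
                        if (e.LocalContentStatus == SvnStatus.Missing)
                            missing.Add(e.FullPath); // deleted on disk, not yet scheduled
                    });

                    // Schedule the on-disk deletions so the commit records them.
                    foreach (string path in missing)
                        client.Delete(path, new SvnDeleteArgs { Force = true });
                }
            }
        }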

  • Configuring Hadoop logging to avoid too many log files

    - by Eric Wendelin
    I'm having a problem with Hadoop producing too many log files in $HADOOP_LOG_DIR/userlogs (the ext3 filesystem allows only 32000 subdirectories), which looks like the same problem as in this question: http://stackoverflow.com/questions/2091287/error-in-hadoop-mapreduce My question is: does anyone know how to configure Hadoop to roll the log directory, or otherwise prevent this? I'm trying to avoid just setting the "mapred.userlog.retain.hours" and/or "mapred.userlog.limit.kb" properties, because I want to actually keep the log files. I was also hoping to configure this in log4j.properties, but looking at the Hadoop 0.20.2 source, it writes directly to log files instead of actually using log4j. Perhaps I don't fully understand how it uses log4j. Any suggestions or clarifications would be greatly appreciated.

  • How to move files on the C: drive using the MoveFileEx API

    - by rajivpradeep
    Hi, when I use MoveFileEx to move files on the C: drive, I get an ACCESS_DENIED error. Any solutions?

        int i;
        DWORD dw;
        String^ Source = "C:\\Folder\\Program\\test.exe";
        String^ Destination = "C:\\test.exe"; // move to the Program Files folder
        pin_ptr<const wchar_t> WSource = PtrToStringChars(Source);
        pin_ptr<const wchar_t> WDestination = PtrToStringChars(Destination);
        i = MoveFileEx(WSource, WDestination,
                       MOVEFILE_REPLACE_EXISTING | MOVEFILE_COPY_ALLOWED);
        dw = GetLastError();
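
    Two notes, sketched below: GetLastError is only meaningful when the call actually fails, and ERROR_ACCESS_DENIED (5) for a destination in the root of C:\ usually means the process isn't elevated, since on Windows Vista and later standard users cannot create files there. This is a guess from the symptom, not a confirmed diagnosis:

        if (!MoveFileEx(WSource, WDestination,
                        MOVEFILE_REPLACE_EXISTING | MOVEFILE_COPY_ALLOWED))
        {
            DWORD dw = GetLastError(); // 5 == ERROR_ACCESS_DENIED
            // Either run the process elevated (an application manifest with
            // requireAdministrator), or move the file to a per-user location
            // such as %LOCALAPPDATA% instead of the drive root.
        }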
