Search Results

Search found 40479 results on 1620 pages for 'binary files'.


  • eclipse CDT - Cannot open .gcda files

    - by Taani
    I am developing a coverage-data tool in Eclipse CDT. I use gcov, and I build and execute my C program to generate the .gcda and .gcno files. When I double-click a .gcda file to view the coverage data, the error message below is displayed:

        An error has occurred. See error log for more details.
        org.eclipse.linuxtools.binutils.utils.STSymbolManager.demangle(Lorg/eclipse/cdt/core/IBinaryParser$IBinaryObject;Ljava/lang/String;Lorg/eclipse/core/resources/IProject;)Ljava/lang/String;

    But I have already downloaded org.eclipse.linuxtools.binutils_4.0.0.201209191645.jar and saved it into the plugins directory. Where am I going wrong?

    Read the article

  • XCode project complains about missing files if a linked framework contains private headers

    - by darklight
    My problem is this: my framework contains public and private headers, and the public headers import private headers within the framework. My app, which links against this framework, imports the public headers. When I compile it, Xcode complains about missing files (the private headers that are indirectly imported via the framework's public headers). I read somewhere on Stack Overflow that I should do this: "In the public header file use @class to include other interfaces and use #import in the implementation file (.m)." I find this solution pretty unsatisfying - you have to use it for circular dependencies, too. Is there a better way to keep my headers private?

    Read the article

  • ZIPLIB problem on opening zip files

    - by Ahmet vardar
    I am using this class to create zip files:

        <?php
        // vim: expandtab sw=4 ts=4 sts=4:
        class zipfile
        {
            var $datasec = array();
            var $ctrl_dir = array();
            var $eof_ctrl_dir = "\x50\x4b\x05\x06\x00\x00\x00\x00";
            var $old_offset = 0;

            function unix2DosTime($unixtime = 0)
            {
                $timearray = ($unixtime == 0) ? getdate() : getdate($unixtime);
                if ($timearray['year'] < 1980) {
                    $timearray['year'] = 1980;
                    $timearray['mon'] = 1;
                    $timearray['mday'] = 1;
                    $timearray['hours'] = 0;
                    $timearray['minutes'] = 0;
                    $timearray['seconds'] = 0;
                } // end if

                return (($timearray['year'] - 1980) << 25) | ($timearray['mon'] << 21) | ($timearray['mday'] << 16) |
                       ($timearray['hours'] << 11) | ($timearray['minutes'] << 5) | ($timearray['seconds'] >> 1);
            } // end of the 'unix2DosTime()' method

            function addFile($data, $name, $time = 0)
            {
                $name = str_replace('\\', '/', $name);
                $dtime = dechex($this->unix2DosTime($time));
                $hexdtime = '\x' . $dtime[6] . $dtime[7] . '\x' . $dtime[4] . $dtime[5] . '\x' . $dtime[2] . $dtime[3] . '\x' . $dtime[0] . $dtime[1];
                eval('$hexdtime = "' . $hexdtime . '";');

                $fr = "\x50\x4b\x03\x04";
                $fr .= "\x14\x00";          // ver needed to extract
                $fr .= "\x00\x00";          // gen purpose bit flag
                $fr .= "\x08\x00";          // compression method
                $fr .= $hexdtime;           // last mod time and date

                // "local file header" segment
                $unc_len = strlen($data);
                $crc = crc32($data);
                $zdata = gzcompress($data);
                $zdata = substr(substr($zdata, 0, strlen($zdata) - 4), 2); // fix crc bug
                $c_len = strlen($zdata);
                $fr .= pack('V', $crc);             // crc32
                $fr .= pack('V', $c_len);           // compressed filesize
                $fr .= pack('V', $unc_len);         // uncompressed filesize
                $fr .= pack('v', strlen($name));    // length of filename
                $fr .= pack('v', 0);                // extra field length
                $fr .= $name;

                // "file data" segment
                $fr .= $zdata;

                // "data descriptor" segment (optional but necessary if archive is not
                // served as file)
                $fr .= pack('V', $crc);             // crc32
                $fr .= pack('V', $c_len);           // compressed filesize
                $fr .= pack('V', $unc_len);         // uncompressed filesize

                // add this entry to array
                $this->datasec[] = $fr;

                // now add to central directory record
                $cdrec = "\x50\x4b\x01\x02";
                $cdrec .= "\x00\x00";               // version made by
                $cdrec .= "\x14\x00";               // version needed to extract
                $cdrec .= "\x00\x00";               // gen purpose bit flag
                $cdrec .= "\x08\x00";               // compression method
                $cdrec .= $hexdtime;                // last mod time & date
                $cdrec .= pack('V', $crc);          // crc32
                $cdrec .= pack('V', $c_len);        // compressed filesize
                $cdrec .= pack('V', $unc_len);      // uncompressed filesize
                $cdrec .= pack('v', strlen($name)); // length of filename
                $cdrec .= pack('v', 0);             // extra field length
                $cdrec .= pack('v', 0);             // file comment length
                $cdrec .= pack('v', 0);             // disk number start
                $cdrec .= pack('v', 0);             // internal file attributes
                $cdrec .= pack('V', 32);            // external file attributes - 'archive' bit set
                $cdrec .= pack('V', $this->old_offset); // relative offset of local header
                $this->old_offset += strlen($fr);
                $cdrec .= $name;

                // optional extra field, file comment goes here

                // save to central directory
                $this->ctrl_dir[] = $cdrec;
            } // end of the 'addFile()' method

            function file()
            {
                $data = implode('', $this->datasec);
                $ctrldir = implode('', $this->ctrl_dir);

                return $data .
                       $ctrldir .
                       $this->eof_ctrl_dir .
                       pack('v', sizeof($this->ctrl_dir)) . // total # of entries "on this disk"
                       pack('v', sizeof($this->ctrl_dir)) . // total # of entries overall
                       pack('V', strlen($ctrldir)) .        // size of central dir
                       pack('V', strlen($data)) .           // offset to start of central dir
                       "\x00\x00";                          // .zip file comment length
            } // end of the 'file()' method

            function addFiles($files)
            {
                foreach ($files as $file) {
                    if (is_file($file)) // directory check
                    {
                        $data = implode("", file($file));
                        $this->addFile($data, $file);
                    }
                }
            }

            function output($file)
            {
                $fp = fopen($file, "w");
                fwrite($fp, $this->file());
                fclose($fp);
            }
        } // end of the 'zipfile' class
        ?>

    It creates the zip file, but when I try to open it on Mac OS X Snow Leopard or Windows 7, it doesn't open. On the Mac I get this error: "Error 1: operation not permitted". Any ideas? Thanks.

    Read the article

  • Writing out BMP files with DataBuffer.TYPE_FLOAT or DataBuffer.TYPE_DOUBLE in java

    - by Basil Dsouza
    Hi guys, I have a problem with the image classes in Java. I am creating a BufferedImage with DataBuffer.TYPE_DOUBLE. This all works fine in memory (I think), but the problem starts when I try to write it using ImageIO.write. Initially I was getting no exception at all and only an empty output file for my troubles. After a bit of poking around in the code, I found out that the BMP writer doesn't support writing TYPE_DOUBLE images. From BMPImageWriterSpi.canEncodeImage:

        if (dataType < DataBuffer.TYPE_BYTE || dataType > DataBuffer.TYPE_INT)
            return false;

    So my question is: does anyone have a way of writing out that kind of image to disk? Any documentation, tutorial, or link would be helpful. Thanks, Basil Dsouza

    Read the article

  • Objective C -- property lists or text files?

    - by William Jockusch
    I need to import a list of about 40,000 words into my iPhone app. The list will be the same every time the app starts. It seems that property lists and text files are both reasonable options. Is there any reason to prefer one over the other? For reasons I don't understand, Finder says the property list on my Mac is 1 MB, while the text file is only 328 KB. The property list is an NSMutableArray of NSMutableArrays of NSStrings; the text file is a plain .txt file. The amount of time the app takes to start up is also important: if I read in a text file, my app would have to do some simple processing on it each time it starts. Thanks.

    Read the article

  • Excluding files from web logs

    - by Ray
    Looking through my web logs, I see a lot of entries that don't interest me. Some of them are commonly used images, CSS files, and scripts, which I can easily exclude by un-checking the 'log visits' check box in the folder properties in IIS. I would also like to exclude log entries for certain common requests which are not in their own folders - mostly 'favicon.ico', 'scriptresource.axd', and 'webresource.axd'. These (especially scriptresource.axd) make up almost a third of a typical log file on my site. So the question is: how do I tell IIS not to log these requests? And is there any reason that this is a bad idea?

    Read the article

  • How to generate pdf files _with_ utf-8 multibyte characters using Zend Framework

    - by Sejanus
    Hello, I've got a "little" problem with the Zend Framework Zend_Pdf class. Multibyte characters are stripped from the generated PDF files; e.g. when I write aabccdee it becomes abcd, with the Lithuanian letters stripped. I'm not sure if it's a Zend_Pdf problem specifically or PHP in general. The source text is encoded in UTF-8, as is the PHP source file that does the job. Thank you in advance for your help ;) P.S. I run Zend Framework v1.6 and I use the FONT_TIMES_BOLD font. FONT_TIMES_ROMAN does work.

    Read the article

  • Xml files stop being served by IIS6 after allowing .net to process the .xml extension

    - by Brian Surowiec
    I added a route to my site to allow for a sitemap, and everything worked fine in IIS7, but once I deployed, the route stopped working. Since the live server is running IIS6, I needed to add a new mapping for .xml to be processed by .NET, and then it started to work. My issue now, though, is with every other XML file on the site: I keep getting a 404 error when trying to view XML files, but the Sitemap.xml route works. Is this a routing issue or an IIS setup issue? Here are my routes, if it helps:

        routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

        routes.MapRoute(
            "Gallery-Group-View",
            "Projects/{groupId}",
            new { controller = "Gallery", action = "GalleryList", groupId = "" });

        routes.MapRoute(
            "Gallery-List-View",
            "Projects/{groupId}/{galleryId}",
            new { controller = "Gallery", action = "GalleryView", groupId = "", galleryId = "" });

        routes.MapRoute(
            "Sitemap",
            "Sitemap.xml",
            new { controller = "XML", action = "Sitemap" });

        routes.MapRoute(
            "Default",
            "{controller}/{action}/{id}",
            new { controller = "Home", action = "Index", id = "" });

    Read the article

  • iPhone: fast hash function for storing web images (url) as files (hashed filenames)

    - by Stefan Klumpp
    What is a fast hash function available on the iPhone for hashing web URLs (of images)? I'd like to store each cached web image as a file with a hash as the filename, because I suppose the raw web URL could contain strange characters that would cause problems on the file system. The hash function doesn't need to be cryptographic, but it definitely needs to be fast. Example:

        Input:  http://www.calumetphoto.com/files/iccprofiles/icc-test-image.jpg
        Output: 3573ed9c4d3a5b093355b2d8a1468509

    This was done using MD5(), but since I don't know much about that topic, I don't know whether it is overkill (i.e. slow).

    Read the article

  • Newbie Question: Read and Process a List of Text Files

    - by johnv
    I'm completely new to .NET and, as a first step, am trying to write a text-processing program. The task is simple: I have 10,000 text files stored in one folder, and I want to read each one, store it in a string variable, run it through a series of functions, and then save the final output to another folder. So far I can only manage to input the file path manually, like this (in VB.NET):

        Dim tRead As System.IO.StreamReader

        Public Function ReadFile() As String
            Dim EntireFile As String
            tRead = File.OpenText("c:\textexample\00001.txt")
            EntireFile = tRead.ReadToEnd
            Return EntireFile
        End Function

        Public Function Step1()
            .....
        End Function

        Public Function Step2()
            .....
        End Function

        ..............

    I'm wondering, therefore, if there's a way to automate this process - perhaps, for example, by storing all the input file paths in a text file, reading one entry at a time, and then saving the final output to an output path, again listed in a text file. Any help is greatly appreciated.

    Read the article

  • Default input and output buffering for fopen'd files?

    - by Evan Teran
    So a FILE stream can have both input and output buffers. You can adjust the output buffering using setvbuf (I am unaware of any method to play with the input buffer's size and behavior). Also, by default the buffer size is BUFSIZ (I'm not sure if this is a POSIX or a C thing). It is very clear what this means for stdin/stdout/stderr, but what are the defaults for newly opened files? Are they buffered for both input and output, or perhaps just one? If output is buffered, does it default to block or line mode?

    Read the article

  • How to add resource files during build on J2ee (eclipse + jboss)

    - by legendlink
    Hi, I'm trying to run a web servlet project in Eclipse 3.4 using JBoss 4.2.2 as my web server. I'm using the WTP plugin and everything looks good (I can run and debug), but some of the files/resources are not included in the WAR file. In my "WebContent/WEB-INF" folder I have "properties", "config", and "lib" folders, but it seems that when I build and publish the project, only the "config" and "lib" folders are included. How can I include the "properties" folder during the build?

    Read the article

  • Cmake suddenly can't find my source files anymore...

    - by aheld
    To make a long story short: to add insult to injury, CMake actually ran fine several times. I was wrestling with a compiler error when CMake suddenly didn't feel like working anymore. For reference, here's the whole CMakeLists.txt file:

        set(CMAKE_INCLUDE_CURRENT_DIR ON)

        Find_Package ( SDL REQUIRED )
        Find_Package ( SDL_image REQUIRED )
        Find_Package ( SDL_mixer REQUIRED )

        if ( NOT SDL_FOUND )
            message ( FATAL_ERROR "Make sure that SDL is installed" )
        endif ( NOT SDL_FOUND )

        link_libraries (
            ${SDL_LIBRARY}
            ${SDLIMAGE_LIBRARY}
            ${SDLMIXER_LIBRARY}
            SDLmain
        )

        set(wiggle_SOURCES
            level.cpp
            levelgenerator.cpp
            main.cpp
            player.cpp
            scoreboard.cpp
            snake.cpp
            soundplayer.cpp
            titlescreen.cpp
        )

        add_executable(Wiggle ../${wiggle_SOURCES})

    The error occurred for the first time when, instead of simply typing "make", I typed "make -lSDL -lSDL_image -lSDL_mixer" - make refused to find the header files SDL.h and SDL_image.h after I detached the project from Code::Blocks.

    Read the article

  • Cannot loop through Excel 2003 files in SSIS 2008

    - by Techspirit
    Hi, I am trying to execute an SSIS 2008 package on a 64-bit OS and import Excel 2003 files into SQL Server 2008. I have created an OLEDB connection to the Excel file, with a connection string that retrieves the Excel file from a variable, inside the ForEach Loop container. Run64BitRuntime is set to false. I am not able to edit the SQL command on the OLEDB source in the Data Flow task; it returns this error:

        Error 2 Validation error. Load List Staged Table: Load List Staged Table: SSIS Error Code
        DTS_E_CANNOTACQUIRECONNECTIONFROMCONNECTIONMANAGER. The AcquireConnection method call to the
        connection manager "List OLEDB to Excel" failed with error code 0xC0202009. There may be error
        messages posted before this with more information on why the AcquireConnection method call failed.

    Appreciate any help.

    Read the article

  • downloading archives response corrupts files

    - by panchicore
        wrapper = FileWrapper(file("C:/pics.zip"))
        content_type = mimetypes.guess_type(result.files)[0]
        response = HttpResponse(wrapper, content_type=content_type)
        response['Content-Length'] = os.path.getsize("C:/pics.zip")
        response['Content-Disposition'] = "attachment; filename=pics.zip"
        return response

    pics.zip is a valid file with 3 pictures inside. The server responds with the download, but when I go to open the zip, WinRAR says "This archive is either in unknown format or damaged!" If I change the file path and file name to a valid image, C:/pic.jpg is downloaded damaged too. What am I missing in this download view?
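
    One possibility worth checking (an assumption, since the question doesn't say so explicitly): on Windows, file() without a 'b' flag opens the file in text mode, and the newline translation corrupts binary data such as zip and JPEG files. Below is a minimal sketch of the same kind of view with an explicit binary-mode open; the view name and path are placeholders, and the stdlib FileWrapper is used for self-containment:

        import mimetypes
        import os
        from wsgiref.util import FileWrapper  # stdlib wrapper around a file-like object
        from django.http import HttpResponse

        def download_pics(request):
            # Hypothetical view for illustration only; path and filename are placeholders.
            path = "C:/pics.zip"
            wrapper = FileWrapper(open(path, "rb"))  # "rb": no newline translation on Windows
            content_type = mimetypes.guess_type(path)[0] or "application/octet-stream"
            response = HttpResponse(wrapper, content_type=content_type)
            response['Content-Length'] = os.path.getsize(path)
            response['Content-Disposition'] = "attachment; filename=pics.zip"
            return response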

    Read the article

  • How and where to store user-uploaded files in a high traffic web farm scenario?

    - by Inam Jameel
    I am working on a website which is deployed on a web farm to serve high traffic. Where should I store user-uploaded files? Is it wise to store uploaded files in the file system of the website itself and synchronize them across all web servers in the farm? Or should I use another server to store all uploaded files in a central location? If a separate file server is the better choice, how can I pass files from the web servers to that file server efficiently? Or should I upload files directly to that file server?

    Read the article

  • Logical python question - handling directories and files in them

    - by Konstantin
    Hello! I'm using this function to extract files from a .zip archive and store them on the server:

        def unzip_file_into_dir(file, dir):
            import sys, zipfile, os, os.path
            os.makedirs(dir, 0777)
            zfobj = zipfile.ZipFile(file)
            for name in zfobj.namelist():
                if name.endswith('/'):
                    os.mkdir(os.path.join(dir, name))
                else:
                    outfile = open(os.path.join(dir, name), 'wb')
                    outfile.write(zfobj.read(name))
                    outfile.close()

    And the usage:

        unzip_file_into_dir('/var/zips/somearchive.zip', '/var/www/extracted_zip')

    somearchive.zip has this structure:

        somearchive.zip
            1.jpeg
            2.jpeg
            another.jpeg

    or, sometimes, this one:

        somearchive.zip
            somedir/
                1.jpeg
                2.jpeg
                another.jpeg

    The question is: how do I modify my function so that my extracted_zip directory always contains just the images, not images inside another subdirectory, even when the images are stored in somedir inside the archive?
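
    A minimal sketch of one way to do that, flattening every entry to its base name (the function name is made up for illustration, and this assumes duplicate base names across subdirectories are not a concern):

        import os
        import zipfile

        def unzip_file_flat(zip_path, dest_dir):
            # Extract every file entry directly into dest_dir, dropping any
            # directory prefix (e.g. 'somedir/1.jpeg' is written as '1.jpeg').
            os.makedirs(dest_dir)
            zfobj = zipfile.ZipFile(zip_path)
            for name in zfobj.namelist():
                if name.endswith('/'):
                    continue  # skip directory entries; no subdirectories are created
                flat_name = os.path.basename(name)
                outfile = open(os.path.join(dest_dir, flat_name), 'wb')
                outfile.write(zfobj.read(name))
                outfile.close()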

    Read the article

  • Must .aspx files have a page directive?

    - by Keith Bloom
    Around 90% of the pages on our websites have no .NET code embedded in them, yet they are published as .aspx files. I want these to render as fast as possible, so I'm removing as much as I can. Does the .NET page directive have an impact on performance? I am thinking about two factors: the page speed for each GET, and what happens when the file changes. The CMS re-creates each page daily, and I'm wondering if this triggers the ASP.NET compilation process.

    Read the article

  • Reading files using Windows API

    - by Eli Polonsky
    Hi, I'm trying to write a console program that reads characters from a file. I want it to be able to read from a Unicode file as well as an ANSI one. How should I address this issue? Do I need to programmatically distinguish the type of file and read accordingly, or can I somehow use the Windows API data types like TCHAR and the like? Is the only difference between reading the two kinds of file that in Unicode I have to read 2 bytes per character and in ANSI 1 byte? I'm a little lost with the Windows API and would appreciate any help. Thanks.

    Read the article

  • Un-readable files uploaded via PHP FTP functions

    - by Mike
    I just set up a LAMP development server and am still troubleshooting some things. The server is installed on one computer, and I use a Windows laptop to write my code and test the site in the web browser. My file-upload script works, in that JPEG image files are successfully uploaded to the server, but when I try to view the images in the web browser, permission is denied. I checked the permissions on a file on the server and they are 600. I can fix the issue with chmod 777 theimage.jpg, but that doesn't seem like a good solution at all. Does the solution have something to do with the Apache configuration, or is there something else I should be doing? Thank you, Mike

    Read the article

  • Cannot find maven dependency, mysterious jar files

    - by natasha
    Hi, I am trying to build a simple WAR file which has a few JSPs. However, I am running into an odd issue: for some reason, during packaging Maven is pulling 4 JAR files into WEB-INF/lib. I have trimmed all the fat from the POM file and have grepped for any references to these JARs without success; I cannot figure out where Maven is pulling them from. I tried 'mvn dependency:build-classpath' and the classpath is empty. Please help - these JARs are corrupt and I cannot deploy the WAR file because of them. Thanks, natasha

    Read the article

  • Lighttpd rewriting files and directories

    - by Ronald
    I'm trying to do URL rewriting with Lighttpd and have what I need partially working. Right now I have this:

        http://domain.com/name/a/123

    which rewrites to

        http://domain.com/name/a.php?pid=123

    I do this with this rewrite-once rule:

        "^/name/a/([^/]+)" => "/name/a.php?pid=$1"

    That PHP page has external resources, such as JavaScript and CSS files, that are not getting rewritten. Is there a way I can also have the rewrite do the following?

        http://domain.com/name/a/js/file.js  ->  http://domain.com/name/js/file.js

    Read the article

  • How can my CGI program access non-browseable files?

    - by Zerobu
    I was wondering if it is possible to read a text file located in a directory called "/home/user/files". I want to read it from my cgi-bin, which is located in /home/user/cgi-bin/. Below is my code:

        #!/usr/bin/perl
        use strict;
        use CGI;

        #Virtual Directory
        #Steffan Harris

        eval {
            use constant PASSWORD   => 'perl';
            use constant UPLOAD_DIR => '/home/sharris2/files';

            sub mapToFile {
                print chdir UPLOAD_DIR;
            }

            #This function will list all files in a directory.
            sub listDirectoryFiles {
                chdir UPLOAD_DIR;
                my @files = <*>;
                mapToFile;
                print<<LIST;
        <h2>Current Files</h2>
        <ul>
        LIST
                if(!$files[0]) {
                    print" </ul>\n<em>No files in directory</em>";
                }
                foreach(@files) {
                    print" <li>$_</li>";
                }
                print " </ul>\n";
            }

            #This function generates a 404 Not Found error
            sub generate404 {
                print<<RESPONSE;
        Status: 404 Not Found
        Content-Type: text/html

        <html>
        <head><title>404 Not Found</title></head>
        <body>
        <p>
        <h1>404 - Not Found</h1>
        </p>
        The requested URL <b>$ENV{"HTTP_HOST"}$ENV{"REQUEST_URI"}</b> was not found on the server.
        </body>
        </html>
        RESPONSE
                exit;
            }

            #This function checks the path info to see if it matches a file in the UPLOAD_DIR directory. If it does not, then it returns a 404 error.
            sub checkExsistence {
                if($ENV{"PATH_INFO"}) {
                    chdir UPLOAD_DIR;
                    my @files = <*>;
                    if(!$files[0] and $ENV{"PATH_INFO"} eq "/") {
                        return;
                    }
                    foreach(@files) {
                        if($ENV{"PATH_INFO"} eq "/".$_ || $ENV{"PATH_INFO"} eq "/") {
                            print "yes";
                            return;
                        }
                    }
                    generate404;
                }
            }

            sub checkPassword {
                my ($password, $cgi);
                $cgi = new CGI;
                $password = $cgi->param('passwd');
                unless($password eq PASSWORD) {
                    print<<RESPONSE;
        Status: 200 OK
        Content-Type: text/html

        <html>
        <head>
        <title>Incorrect Password</title>
        </head>
        <body>
        <h1>Invalid password entered.</h1>
        <h3><a href="/~sharris2/cgi-bin/files/">Go Back</a></h3>
        </body>
        RESPONSE
                    exit;
                }
            }

            sub upLoadFile {
                checkPassword;
                my ($uploadfile, $cgi);
                $cgi = new CGI;
                $uploadfile = $cgi->upload('uploadfile');
                chdir UPLOAD_DIR;
                $uploadfile or die "Did not receive a file to upload";
                open my $FILE, '>', UPLOAD_DIR."/$uploadfile" or die "$!";
                while(<$uploadfile>) {
                    print $FILE $_;
                }
            }

            #Start of main part of program
            my $cgi = new CGI;
            if(!$ENV{"PATH_INFO"}) {
                print $cgi->redirect('/~sharris2/cgi-bin/files/');
            }
            checkExsistence;
            if($ENV{"REQUEST_METHOD"} eq "POST") {
                upLoadFile;
            }

            print <<"HEADERS";
        Status: 200 OK
        Content-Type: text/html

        HEADERS

            print <<"HTML";
        <html>
        <head>
        <title>Virtual Directory</title>
        </head>
        <body>
        HTML

            listDirectoryFiles;

            print<<HTML;
        <h2>Upload a new file</h2>
        <form method = "POST" enctype = "multipart/form-data" action = "/~sharris2/cgi-bin/files/" />
        File:<input type = "file" name="uploadfile"/>
        <p>Password: <input type = "password" name ="passwd"/></p>
        <p><input type = "submit" value= "Submit File" /></p>
        </form>
        </body>
        </html>
        HTML
        };

    Read the article

  • Delete files from blobstore using file serving URL

    - by Arturo
    In my app (GWT on GAE) we store in our database the serving URL of each file stored in the blobstore. When the user selects one of these files and clicks "delete", we need to delete the file from the blobstore. This is our code, but it is not deleting the file at all:

        public void remove(String fileURL) {
            BlobstoreService blobstoreService = BlobstoreServiceFactory.getBlobstoreService();
            String key = getBlobKeyFromURL(box.getImageURL());
            BlobKey blobKey = new BlobKey(key);
            blobstoreService.delete(blobKey);
        }

    fileURL looks like this:

        http://lh6.ggpht.com/d5VC0ywISACeJRiC3zkzaZug-tPsaI_LGt93-e_ATGTCwnGLao4yTWjLVppQ

    and getBlobKeyFromURL() returns whatever comes after the last "/", in this example:

        d5VC0ywISACeJRiC3zkzaZug-tPsaI_LGt93-e_ATGTCwnGLao4yTWjLVppQ

    Could you please advise? Thanks

    Read the article

  • How do I protect static files with ASP.NET form authentication on IIS 7.5?

    - by Egil Hansen
    Hi all, I have a website running on an IIS 7.5 server with ASP.NET 4.0 on a shared host, but in full trust. The site is a basic "file browser" that allows visitors to log in, see a list of the files available to them, and, obviously, download those files. The static files (mostly PDF files) are located in a subfolder of the site called data, e.g. http://example.com/data/... The site uses ASP.NET form authentication. My question is: how do I get the ASP.NET engine to handle requests for the static files in the data folder, so that requests for files are authenticated by ASP.NET and users are not able to deep-link to a file and grab files they are not allowed to have? Best regards, Egil.

    Read the article
