Search Results

Search found 23404 results on 937 pages for 'script compression'.


  • YUI Compressor + gzip causes Illegal Character error in jQuery

    - by lo_fye
    When I minify jQuery using YUI Compressor, it works fine. When I then add gzip compression (and serve that version via mod_rewrite), the gzipped version throws this error: "illegal character in jquery.min.js on line 1". Line 1 is:

        ???????M???????????s?8?0???!sz?dKr?=?

    This results in a "jquery is not defined" error. I am using the following rewrite rules to serve up the gzipped versions:

        # Check to see if the browser can accept gzip files.
        RewriteCond %{HTTP:accept-encoding} (gzip.*)
        # Make sure there's no trailing .gz on the URL.
        RewriteCond %{REQUEST_FILENAME} !^.+\.gz$
        # Check to see if a .gz version of the file exists.
        RewriteCond %{REQUEST_FILENAME}.gz -f
        # All conditions met, so add .gz to the URL filename (invisibly).
        RewriteRule ^(.+) $1.gz [L]

    I can't find any references to this happening to anyone else. Thoughts?
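    One likely culprit: the rewrite rule serves the raw .gz bytes without a Content-Encoding: gzip response header, so the browser tries to parse compressed bytes as JavaScript, which produces exactly this kind of line-1 "illegal character" garbage. A minimal sketch of the missing piece (assuming Apache with mod_headers enabled):

        <FilesMatch "\.js\.gz$">
            ForceType application/javascript
            Header set Content-Encoding gzip
            Header append Vary Accept-Encoding
        </FilesMatch>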

    Read the article

  • git: Is it possible to save the packed objects of a dry run and push them later?

    - by shovavnik
    I'm trying to push a bunch of commits that contain a lot of code, plus a few thousand MP3 and PDF files (ranging from 5-40 MB each). Git successfully packs the objects:

        C:\MyProject> git push
        Counting objects: 7582, done.
        Delta compression using up to 2 threads.
        Compressing objects: 100% (7510/7510), done.

    But it fails to send the push for some as yet unknown reason. The problem is that it takes a very long time to repack the files (I'm on a battery-powered laptop and it took about 20 minutes to pack). So I guess my question can be phrased thus: Is it possible to save the packed objects created in a dry run? Once saved, is it possible to push those packed objects and avoid repacking? I looked it up in the git manual and elsewhere and couldn't find anything conclusive. Any help or pointers are appreciated.

    Read the article

  • Is there any simple way to test two PNGs for equality?

    - by Mason Wheeler
    I've got a bunch of PNG images, and I'm looking for a way to identify duplicates. By duplicates I mean, specifically, two PNG files whose uncompressed image data are identical, not necessarily files that are byte-for-byte identical. This means I can't do something simple like comparing CRC hash values. I figure this can actually be done reliably, since PNG uses lossless compression, but I'm worried about speed. I know I can winnow things down a little by testing for equal dimensions first, but when it comes time to actually compare the images against each other, is there any way to do it reasonably efficiently? (i.e., faster than the brute-force method of a double for-loop checking pixel values against each other?)
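    One way to avoid pairwise pixel loops is to hash each file's decoded pixel data once and then bucket files by hash, so comparisons become plain string compares. A minimal sketch, assuming the Imagick extension is available (getImageSignature digests the pixel stream, not the compressed file):

        $buckets = array();
        foreach (glob('/path/to/pngs/*.png') as $path) {   // placeholder path
            $img = new Imagick($path);
            $buckets[$img->getImageSignature()][] = $path;
            $img->clear();
        }
        // any bucket with more than one entry holds pixel-identical PNGs
        $duplicates = array_filter($buckets, function ($group) { return count($group) > 1; });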

    Read the article

  • Image coding library

    - by Dmitry
    Is there any good library for lossless image encoding/decoding that has a compression rate more or less similar to PNG, but decodes to raw RGB bitmap data much faster than PNG? Alpha transparency is also wanted, but not essential, because the alpha channel could be taken from a separate image. The original problem lies in the slowness of reading and decoding PNG files on the iPhone using the standard libraries. The obvious and simplest solution would have been storing raw RGB bitmap data, but then the size of the unpacked .ipa is too large: four times larger than the PNG files. So I am trying to find some compromise solution.

    Read the article

  • New to AVL tree implementation.

    - by nn
    I am writing a sliding-window compression algorithm (LZ77) that searches for phrases in a "moving" dictionary. So far I have written a BST where each node is stored in an array, and its index in the array is also the value of the starting position in the window itself. I am now looking at transforming the BST into an AVL tree. I am a little confused by the sample implementations I have seen: some only appear to store the balance factors, whereas others store the height of each subtree. Are there any performance advantages/disadvantages to storing the height and/or balance factor for each node? Apologies if this is a very simple question, but I'm still not visualizing how I want to restructure my BST to implement height balancing. Thanks.

    Read the article

  • ZIPLIB problem on opening zip files

    - by Ahmet vardar
    I am using this class to create a zip:

        <?php
        // vim: expandtab sw=4 ts=4 sts=4:
        class zipfile
        {
            var $datasec      = array();
            var $ctrl_dir     = array();
            var $eof_ctrl_dir = "\x50\x4b\x05\x06\x00\x00\x00\x00";
            var $old_offset   = 0;

            function unix2DosTime($unixtime = 0)
            {
                $timearray = ($unixtime == 0) ? getdate() : getdate($unixtime);

                if ($timearray['year'] < 1980) {
                    $timearray['year']    = 1980;
                    $timearray['mon']     = 1;
                    $timearray['mday']    = 1;
                    $timearray['hours']   = 0;
                    $timearray['minutes'] = 0;
                    $timearray['seconds'] = 0;
                } // end if

                return (($timearray['year'] - 1980) << 25)
                     | ($timearray['mon'] << 21)
                     | ($timearray['mday'] << 16)
                     | ($timearray['hours'] << 11)
                     | ($timearray['minutes'] << 5)
                     | ($timearray['seconds'] >> 1);
            } // end of the 'unix2DosTime()' method

            function addFile($data, $name, $time = 0)
            {
                $name     = str_replace('\\', '/', $name);
                $dtime    = dechex($this->unix2DosTime($time));
                $hexdtime = '\x' . $dtime[6] . $dtime[7]
                          . '\x' . $dtime[4] . $dtime[5]
                          . '\x' . $dtime[2] . $dtime[3]
                          . '\x' . $dtime[0] . $dtime[1];
                eval('$hexdtime = "' . $hexdtime . '";');

                // "local file header" segment
                $fr  = "\x50\x4b\x03\x04";
                $fr .= "\x14\x00";              // ver needed to extract
                $fr .= "\x00\x00";              // gen purpose bit flag
                $fr .= "\x08\x00";              // compression method
                $fr .= $hexdtime;               // last mod time and date

                $unc_len = strlen($data);
                $crc     = crc32($data);
                $zdata   = gzcompress($data);
                $zdata   = substr(substr($zdata, 0, strlen($zdata) - 4), 2); // fix crc bug
                $c_len   = strlen($zdata);
                $fr .= pack('V', $crc);             // crc32
                $fr .= pack('V', $c_len);           // compressed filesize
                $fr .= pack('V', $unc_len);         // uncompressed filesize
                $fr .= pack('v', strlen($name));    // length of filename
                $fr .= pack('v', 0);                // extra field length
                $fr .= $name;

                // "file data" segment
                $fr .= $zdata;

                // "data descriptor" segment (optional but necessary if archive is not
                // served as file)
                $fr .= pack('V', $crc);             // crc32
                $fr .= pack('V', $c_len);           // compressed filesize
                $fr .= pack('V', $unc_len);         // uncompressed filesize

                // add this entry to array
                $this->datasec[] = $fr;

                // now add to central directory record
                $cdrec  = "\x50\x4b\x01\x02";
                $cdrec .= "\x00\x00";               // version made by
                $cdrec .= "\x14\x00";               // version needed to extract
                $cdrec .= "\x00\x00";               // gen purpose bit flag
                $cdrec .= "\x08\x00";               // compression method
                $cdrec .= $hexdtime;                // last mod time & date
                $cdrec .= pack('V', $crc);          // crc32
                $cdrec .= pack('V', $c_len);        // compressed filesize
                $cdrec .= pack('V', $unc_len);      // uncompressed filesize
                $cdrec .= pack('v', strlen($name)); // length of filename
                $cdrec .= pack('v', 0);             // extra field length
                $cdrec .= pack('v', 0);             // file comment length
                $cdrec .= pack('v', 0);             // disk number start
                $cdrec .= pack('v', 0);             // internal file attributes
                $cdrec .= pack('V', 32);            // external file attributes - 'archive' bit set
                $cdrec .= pack('V', $this->old_offset); // relative offset of local header
                $this->old_offset += strlen($fr);
                $cdrec .= $name;
                // optional extra field, file comment goes here

                // save to central directory
                $this->ctrl_dir[] = $cdrec;
            } // end of the 'addFile()' method

            function file()
            {
                $data    = implode('', $this->datasec);
                $ctrldir = implode('', $this->ctrl_dir);

                return $data . $ctrldir . $this->eof_ctrl_dir .
                    pack('v', sizeof($this->ctrl_dir)) . // total # of entries "on this disk"
                    pack('v', sizeof($this->ctrl_dir)) . // total # of entries overall
                    pack('V', strlen($ctrldir)) .        // size of central dir
                    pack('V', strlen($data)) .           // offset to start of central dir
                    "\x00\x00";                          // .zip file comment length
            } // end of the 'file()' method

            function addFiles($files)
            {
                foreach ($files as $file) {
                    if (is_file($file)) { // directory check
                        $data = implode("", file($file));
                        $this->addFile($data, $file);
                    }
                }
            }

            function output($file)
            {
                $fp = fopen($file, "w");
                fwrite($fp, $this->file());
                fclose($fp);
            }
        } // end of the 'zipfile' class
        ?>

    It creates the zip file, but when I try to open it on Mac OS X Snow Leopard and Windows 7, it doesn't open. On the Mac I get this error: "Error 1: operation not permitted". Any idea? Thanks
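    For comparison, PHP's bundled ZipArchive extension (where it is enabled) writes the local and central directory records itself, so its archives are far less likely to trip up Finder or Explorer; a minimal sketch of the same add-files-and-save behaviour:

        $zip = new ZipArchive();
        if ($zip->open('archive.zip', ZipArchive::CREATE | ZipArchive::OVERWRITE) === true) {
            foreach ($files as $file) {           // $files as in addFiles() above
                if (is_file($file)) {
                    $zip->addFile($file, basename($file));
                }
            }
            $zip->close();
        }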

    Read the article

  • Fixing warning from git

    - by japancheese
    My workflow has been to create a git repository on a remote central server, clone that repo on my local dev machine, do some work, and then push the changes back to the same repo on the remote server. However (and I believe this started after a recent git update), after pushing up a change I'm getting the following warning:

        Counting objects: 2724, done.
        Delta compression using up to 2 threads.
        Compressing objects: 100% (2666/2666), done.
        Writing objects: 100% (2723/2723), 5.90 MiB | 313 KiB/s, done.
        Total 2723 (delta 219), reused 0 (delta 0)
        warning: updating the currently checked out branch; this may cause confusion,
        as the index and working tree do not reflect changes that are now in HEAD.

    Can someone explain to me exactly what this warning means, and what I'm doing wrong in my workflow, so that I stop receiving it?
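    The warning means the push target has that branch checked out in a working tree, and that working tree will no longer match what HEAD now points to. The usual workflow fix is to make the central repository a bare one, so there is no working tree to fall out of sync; a minimal sketch (paths and names are placeholders):

        # on the remote server
        git init --bare /srv/git/project.git

        # on the dev machine
        git remote set-url origin user@server:/srv/git/project.git
        git push origin master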

    Read the article

  • Own data format for the iPhone

    - by Stefan
    Hi, I would like to create my own data format for an iPhone app. The files should be structured similarly to, e.g., Apple's iWork files (.pages). That means I have a folder with some files in it. The file 'Juicy.fruit' contains:

        Fruits
        ---> Apple.xml
        ---> Banana.xml
        ---> Pear.xml
        ---> PreviewPicture.png

    This folder "Fruits" should be packed into a handy file 'Juicy.fruit'. Compression isn't necessary. How could I achieve this? I've discovered some open-source ZIP libraries; however, I would like to build my own data format with the iPhone's built-in libraries (if possible). Best regards, Stefan

    Read the article

  • Extremely slow insert from Delphi to Remote MySQL Database

    - by MarkRobinson
    Having a major hair-pulling issue with extremely slow inserts from Delphi 2010 to a remote MySQL 5.09 server. So far, I have tried:

        - ADO using the MySQL ODBC driver
        - ZeosLib v7 Alpha

    I have used batching and direct insert with ADO (using table access), and with Zeos I have used SQL insertion with a Query, then Table direct mode, and also cached updates Table mode using ApplyUpdates and Commit. I have tried both technologies with compression on and off. So far I have seen pretty much the same result across the board: 7.5 records per second! Now, from this point I would assume that the remote server is just slow, but MySQL Workbench is amazingly fast, and the Migration Toolkit managed the initial migration very quickly (to be honest, I don't recall how quickly, which kind of means that it was quick). I'm just about to try the MyDAC components, as we already use SDAC (wish there was a multi-buy discount, or that we'd chosen UniDAC instead now!). Any ideas?

    Read the article

  • IE 8 dialog windows not decompressing files

    - by Mike
    Hi, I've got a website where we have pre-compressed all of our HTML files. In general this works fine, but since IE 8 came out some people are finding that they cannot use some parts of the website. We've used the showModalDialog command to open a dialog window pointing to one of our pre-compressed files, but it just shows up as strange characters (i.e. not decompressed). It only happens in the dialog. I'm pretty sure our compression is all fine, because the page they are viewing to open the dialog is also compressed. Has anyone else come across this, or got any suggestions? Because I'm stumped. Thanks, Mike

    Read the article

  • Untar, ungz, gz, tar - how do you remember all the useful options?

    - by deadprogrammer
    I am pretty sure I am not the only one with the following problem: every time I need to uncompress a file in *nix, I can't remember all the switches and end up googling it, which is surprising considering how often I need to do this. Do you have a good compression cheat sheet? Or how about a mnemonic for all those nasty switches in tar? I am making this article a wiki so that we can create a nice cheat sheet here. Oh, and about man pages: if there's one thing they are not helpful for, it's figuring out how to uncompress a file.
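    For what it's worth, a minimal cheat sheet covering the cases that seem to come up most (GNU tar; the "eXtract Zee Files" mnemonic covers the first line):

        tar -xzf archive.tar.gz       # extract a gzip-compressed tarball
        tar -czf archive.tar.gz dir/  # create one from a directory
        tar -xjf archive.tar.bz2      # extract a bzip2-compressed tarball
        tar -tzf archive.tar.gz       # list contents without extracting
        gunzip file.gz                # a bare .gz with no tar inside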

    Read the article

  • sending binary data via POST on android

    - by wo_shi_ni_ba_ba
    Android supports a limited version of Apache's HTTP client (v4). Typically, if I want to send binary data with content type application/octet-stream via POST, I do the following:

        HttpClient client = getHttpClient();
        HttpPost method = new HttpPost("http://192.168.0.1:8080/xxx");
        System.err.println("send to server " + s);
        if (compression) {
            byte[] compressed = compress(s);
            RequestEntity entity = new ByteArrayRequestEntity(compressed);
            method.setEntity(entity);
        }
        HttpResponse resp = client.execute(method);

    However, ByteArrayRequestEntity is not supported on Android. What can I do?
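    One option, hedged on the API Android actually bundles: ByteArrayRequestEntity is the HttpClient 3 class, and its HttpClient 4 counterpart, ByteArrayEntity, is available on Android and drops into the same spot:

        import org.apache.http.HttpResponse;
        import org.apache.http.entity.ByteArrayEntity;

        // ... inside the existing method, replacing the HttpClient 3 entity:
        byte[] compressed = compress(s);
        ByteArrayEntity entity = new ByteArrayEntity(compressed);
        entity.setContentType("application/octet-stream");
        method.setEntity(entity);
        HttpResponse resp = client.execute(method);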

    Read the article

  • Encode/compress sequence of repeating integers

    - by Alex
    Hey there! I have very long integer sequences that look like this (arbitrary length!):

        0000000001110002220033333

    Now I need some algorithm to convert this string into something compressed like:

        a9b3a3c3a2d5

    which means "a 9 times, then b 3 times, then a 3 times", and so on, where "a" stands for 0, "b" for 1, "c" for 2 and "d" for 3. How would you do that? So far nothing suitable has come to mind, and I had no luck with Google because I didn't really know what to search for. What is this kind of encoding / compression called? PS: I am going to do the encoding with PHP, and the decoding in JavaScript.
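    This is usually called run-length encoding (RLE). A minimal PHP sketch of the encoder (the function name and letter map are just for illustration; the JavaScript side would reverse the mapping):

        function rle_encode($sequence) {
            $letters = array('0' => 'a', '1' => 'b', '2' => 'c', '3' => 'd');
            preg_match_all('/(.)\1*/', $sequence, $matches);  // split into runs of identical digits
            $encoded = '';
            foreach ($matches[0] as $run) {
                $encoded .= $letters[$run[0]] . strlen($run);
            }
            return $encoded;
        }

        echo rle_encode('0000000001110002220033333');  // a9b3a3c3a2d5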

    Read the article

  • Criteria for selecting software for embedded device

    - by Suresh Kumar
    We are currently evaluating web servers for an embedded device. We have laid down the evaluation criteria for things like HTTP version, security, compression, etc. On the embeddable side, we have identified the following criteria:

        - Memory footprint
        - Memory management (support for plugging in a custom memory manager)
        - CPU usage
        - Thread usage (support for a thread pool)
        - Portability

    What I want input on is: Are there any other criteria that embeddable software should meet? What exactly does it mean when someone says that a piece of software is designed for embeddable use? We have currently zeroed in on two web servers: AppWeb and Lighttpd (lighty). Feature-wise, the two seem to be on par. However, it is claimed that AppWeb is designed for embedded use while Lighttpd is not. To choose between the two, what criteria should I be looking at?

    Read the article

  • Are these jobs for the developer, the designers, or the client himself, on a website project? [closed]

    - by jitendra
    Are these jobs for the developer, for the designers, or for the client himself, on a website project? The client is asking the XHTML/CSS/PHP coder to do all of these things:

        - Spell checking
        - Grammar checking
        - Descriptive alt text for big charts, graphs, and technical images
        - Writing table summaries and captions
        - Descriptive link text
        - Color contrast checking
        - Deciding in the content what should be H2, H3, H4... and what should be <strong> or <span class="boldtext">
        - Meta description and keywords for each page
        - Image compression
        - Deciding filenames for images, PDFs, etc.
        - Deciding the page <title> for each page

    Read the article

  • What's the best way to convert a .eps (CMYK) to a .jpg (RGB) with ImageMagick

    - by Slinky
    Hi All, I have a bunch of .eps files (CMYK) that I need to convert to .jpg (RGB) files. The following command sometimes gives me under- or over-saturated .jpg images when compared to the source EPS file:

        $cmd = "convert -density 300 -quality 100% -colorspace RGB ".$epsURL." -flatten -strip ".$convertedURL;

    Is there a smarter way to do this, such that the converted image will have the same qualities as the source EPS file? Here is an example of the source file info:

        Image: rejm.eps
        Format: PS (PostScript)
        Class: DirectClass
        Geometry: 537x471
        Base geometry: 1074x941
        Type: ColorSeparation
        Endianess: Undefined
        Colorspace: CMYK
        Channel depth:
          Cyan: 8-bit
          Magenta: 8-bit
          Yellow: 8-bit
          Black: 8-bit
        Channel statistics:
          Cyan:    Min: 0 (0)  Max: 255 (1)         Mean: 161.913 (0.634955)  Standard deviation: 72.8257 (0.285591)
          Magenta: Min: 0 (0)  Max: 255 (1)         Mean: 184.261 (0.722591)  Standard deviation: 75.7933 (0.297229)
          Yellow:  Min: 0 (0)  Max: 255 (1)         Mean: 70.6607 (0.277101)  Standard deviation: 39.8677 (0.156344)
          Black:   Min: 0 (0)  Max: 195 (0.764706)  Mean: 34.4382 (0.135052)  Standard deviation: 38.1863 (0.14975)
        Total ink density: 292%
        Colors: 210489
        Rendering intent: Undefined
        Resolution: 28.35x28.35
        Units: PixelsPerCentimeter
        Filesize: 997.727kb
        Interlace: None
        Background color: white
        Border color: #DFDFDFDFDFDF
        Matte color: grey74
        Page geometry: 537x471+0+0
        Dispose: Undefined
        Iterations: 0
        Compression: Undefined
        Orientation: Undefined
        Signature: 8ea00688cb5ae496812125e8a5aea40b0f0e69c9b49b2dc4eb028b22f76f2964
        Profile-iptc: 19738 bytes

    Thanks
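    A common cause of the saturation shift is that -colorspace RGB performs a purely numeric CMYK-to-RGB conversion; routing the conversion through ICC profiles usually tracks the source far more closely. A minimal sketch (the .icc filenames are placeholders for whichever CMYK and sRGB profiles are installed locally):

        convert -density 300 rejm.eps -profile USWebCoatedSWOP.icc -profile sRGB.icc -flatten -strip -quality 95 rejm.jpg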

    Read the article

  • Python Daemon Subprocess not working at boot

    - by Adam Richardson
    I am attempting to write a Python daemon that will launch at boot. The goal of the script is to receive a job from our Gearman load-balancing server and complete it. I am using the python-daemon module from PyPI (http://pypi.python.org/pypi/python-daemon/). The job it completes is converting images from ORF (Olympus raw image format) to JPEG; an outside program, ufraw in this case, is used for the conversion. The problem comes when I start the daemon at boot: if I launch it from the shell it runs perfectly and completes the work, but when it starts at boot it is unable to launch the subprocess command.

        commandString = '/usr/bin/ufraw-batch --interpolation=four-color --wb=camera --compression=100 --output="' + outfile + '" --out-type=jpg --overwrite "' + infile + '"'
        args = shlex.split(commandString)
        process = subprocess.Popen(args).wait()

    I am not sure what I am doing wrong. Thanks for any help.
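    A frequent difference between a shell launch and a boot launch is the environment: init starts the daemon with almost no PATH, no HOME, and a different working directory, and python-daemon also detaches stdout/stderr, so any failure from ufraw-batch goes unseen. A minimal, more defensive sketch (log path and environment values are placeholders):

        import os
        import shlex
        import subprocess

        env = dict(os.environ, PATH='/usr/bin:/bin', HOME='/tmp')
        log = open('/var/log/orf-convert.log', 'a')

        args = shlex.split(commandString)   # commandString built exactly as above
        process = subprocess.Popen(args, env=env, cwd='/tmp',
                                   stdout=log, stderr=subprocess.STDOUT)
        returncode = process.wait()         # a non-zero exit now leaves an explanation in the log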

    Read the article

  • Looking for a good Dynamic Imaging Solution

    - by user151289
    I work for a small e-commerce shop and we are looking for a process that will handle resizing our product images dynamically. Currently our designers take high-resolution photos, either provided by the manufacturers or created in house, and alter them to fit various pages on our site. The designers are constantly resizing, cropping, and altering the compression levels of each product photo to fit the needs of the business. Given that our product line is updated frequently, this becomes a monotonous task. Adobe Scene7 does exactly what we are looking to do, and the images are served up from a CDN; unfortunately we found it to be too expensive. I'm curious to learn how others handle this process at their organizations. Does anyone know of any good third-party tools or other SaaS providers that can handle performing some basic image manipulation and serving the results on the fly?

    Read the article

  • Starting a new process in an ASP.NET web service

    - by Deumber
    I have the following code:

        public void BeginConvert(object data)
        {
            ConverterData cObject = (ConverterData)data;
            string argument = string.Format("-i \"{0}\" -b {1} \"{2}\"", cObject.Source, compression, cObject.Destiny);
            Process converterProcess = new Process();
            converterProcess.StartInfo.FileName = ffPath;
            converterProcess.StartInfo.Arguments = argument;
            converterProcess.StartInfo.WindowStyle = ProcessWindowStyle.Hidden;
            converterProcess.Start();
            converterProcess.WaitForExit();
        }

    I use it in a web service and start it in a new thread, and it returns exit code 1 (error; I'm trying to do a video conversion with the ffmpeg library). I impersonate ASP.NET to use a local account with permissions to read and write files. When I run it on my machine, running or debugging, it works, but now that the web service is running in IIS it doesn't. Could someone help me?

    Read the article

  • How do I prevent an ASP.NET MVC deployment on IIS 6.0, using wildcard mapping, from attempting to handle requests for hidden shares?

    - by Rob
    As noted by the title, what is the best way to configure an IIS 6.0 deployment of an ASP.NET MVC application such that connections to hidden shares are ignored? The application in question uses wildcard mapping to allow for clean URLs, since we are planning to upgrade to IIS 7.0 in the near future, and we are also handling the caching and compression issues with a custom library, so we would like to avoid turning wildcard mapping off unless absolutely necessary. Below is one of the errors from the application, to give you an example of what we are seeing:

        System.Web.HttpException
        Time Stamp  - 03 Mar 2010, 08:11:44
        Path        - N/A, Internal Server Operation
        Message     - The controller for path '/C$' could not be found or it does not implement IController.
        Target Site - System.Web.Mvc.IController GetControllerInstance(System.Type)
        Stack Trace -
            at System.Web.Mvc.DefaultControllerFactory.GetControllerInstance(Type controllerType)
            at System.Web.Mvc.DefaultControllerFactory.CreateController(RequestContext requestContext, String controllerName)
            at System.Web.Mvc.MvcHandler.ProcessRequest(HttpContextBase httpContext)
            at System.Web.Mvc.MvcHandler.ProcessRequest(HttpContext httpContext)
            at System.Web.Mvc.MvcHandler.System.Web.IHttpHandler.ProcessRequest(HttpContext httpContext)
            at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
            at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)

    Read the article

  • php gzip xml file (53 MB) causes Out of memory error

    - by ntan
    Hi, I have a 53 MB XML file that I want to gzip. The code below gzips it:

        $gzFile = "my.gz";
        $data = implode("", file($filename));
        $gzdata = gzencode($data, 9);
        // open gz -- 'w9' is highest compression
        $fp = gzopen($gzFile, 'w9');
        // write the encoded data into the compressed file
        gzwrite($fp, $gzdata);
        // close the file
        gzclose($fp);

    This causes: PHP Fatal error: Out of memory (allocated 70516736) (tried to allocate 24 bytes). Does anyone have any suggestions? I have already increased the memory limit in php.ini.
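    Two guesses from the snippet alone: file() plus implode() holds the entire 53 MB in memory at once, and writing gzencode() output through a gzopen('w9') handle compresses the data a second time. Streaming the file through gzwrite() in chunks keeps memory use flat; a minimal sketch:

        $gzFile = "my.gz";
        $in  = fopen($filename, 'rb');
        $out = gzopen($gzFile, 'w9');              // 'w9' = highest compression
        while (!feof($in)) {
            gzwrite($out, fread($in, 512 * 1024)); // 512 KB at a time
        }
        gzclose($out);
        fclose($in);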

    Read the article

  • Serving GZipped files from s3 using the Asset Pipeline

    - by kmurph79
    I have a Rails 3.2.3 app on Heroku and I'm using the asset_sync gem to serve my assets from S3, via these instructions. It works great, except S3 is not serving up the gzipped CSS/JS files (just the uncompressed versions). I've enabled gzip compression, to no avail:

        config.gzip_compression = true

    According to "Using GZIP with html pages served from Amazon S3", I need to add metadata to the S3 object when uploading. How would I do this in concert with the asset pipeline? Thank you for any help.

    Read the article

  • How do I get Cabal to bypass my Windows proxy settings?

    - by Brent.Longborough
    When retrieving packages with Cabal, I frequently get errors with this message:

        user error (Codec.Compression.Zlib: premature end of compressed stream)

    It looks like Cabal is using my Windows networking proxy settings (for Privoxy). From digging around Google, Cabal or its libraries appear to have (or have had) a problem in this area. The possible solutions I can see are:

        1. Turn off proxying while using Cabal (not very keen on this one); or
        2. Get a patch and start hacking. I'm hesitant to go down this path, as I'm a complete Haskell noob and I'm not yet comfortable with Darcs; or
        3. Give it the magic "can I haz no proxy" parameter. Hence the question.

    Read the article

  • Compressing a hex string in Ruby/Rails

    - by PreciousBodilyFluids
    I'm using MongoDB as a backend for a Rails app I'm building. Mongo, by default, generates 24-character hexadecimal ids for its records to make sharding easier, so my URLs wind up looking like:

        example.com/companies/4b3fc1400de0690bf2000001/employees/4b3ea6e30de0691552000001

    which is not very pretty. I'd like to stick to the Rails URL conventions, but also leave these ids as they are in the database. I think a happy compromise would be to compress these hex ids into shorter strings that use more characters, so they'd look something like:

        example.com/companies/3ewqkvr5nj/employees/9srbsjlb2r

    Then in my controller I'd reverse the compression, get the original hex id, and use that to look up the record. My question is, what's the best way to convert these ids back and forth? I'd of course want them to be as short as possible, but also URL-safe and simple to convert. Thanks!
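    One low-tech option: treat the hex id as a number and re-express it in base 36, which Ruby's Integer#to_s and String#to_i support natively (URL-safe 0-9a-z; anything shorter needs a custom alphabet). A minimal sketch, with the caveat that ids with leading zeros must be padded back on the way in:

        def shorten(hex_id)
          hex_id.to_i(16).to_s(36)    # 24 hex chars -> roughly 19 base-36 chars
        end

        def expand(short_id)
          short_id.to_i(36).to_s(16).rjust(24, '0')
        end

        expand(shorten('4b3fc1400de0690bf2000001')) == '4b3fc1400de0690bf2000001'  # => true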

    Read the article

  • getAssetFileDescriptor from ZipResourceFile merges all mp3 in mediaplayer SOLVED

    - by Jordi
    I have a program with an expansion file that stores 4 MP3s in an .obb file (a zip without compression). I can retrieve the data, but instead of getting the audio file I asked for, it merged ALL the audio files into the same AssetFileDescriptor. ---SOLVED--- with the fixes below.

    Support class:

        public AssetFileDescriptor getaudio() {
            ZipResourceFile expansionFile = APKExpansionSupport.getAPKExpansionZipFile(c, 21, 21);
            AssetFileDescriptor afd = null;
            if (take == 1) {
                afd = expansionFile.getAssetFileDescriptor("file01.mp3");
            } else if (take == 2) {
                afd = expansionFile.getAssetFileDescriptor("file02.mp3");
            }
            // more else if ............
            return afd;
        }

    In the MediaPlayer class:

        AssetFileDescriptor fd = Llistat.getInstance().getAudio();
        mPlayer.setDataSource(fd.getFileDescriptor(), fd.getStartOffset(), fd.getLength());
        mPlayer.prepare();
        fd.close();

    My problem was that I was directly returning and using a FileDescriptor, while I needed the AssetFileDescriptor so I could take its start offset and length.

    Read the article
