Search Results

Search found 8185 results on 328 pages for 'transfer encoding'.


  • Apache gzip with chunked encoding

    - by hoodoos
    I'm experiencing a problem with one of my data source services. According to the HTTP response headers, it runs on Apache-Coyote/1.1. The server sends responses with Transfer-Encoding: chunked; here is a sample response:

        HTTP/1.1 200 OK
        Server: Apache-Coyote/1.1
        Content-Type: text/xml;charset=utf-8
        Transfer-Encoding: chunked
        Date: Tue, 30 Mar 2010 06:13:52 GMT

    The problem is that when I ask the server for a gzipped response, it often does not send the full body. I receive the response and see that the last chunk arrived, but after ungzipping I find that the content is only partial. So my question is: is this a known Apache issue, perhaps with mod_deflate or a similar module? Ask questions if you need more info. Thanks.
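
    A hedged way to confirm the truncation from the client side is to fetch the same resource with and without gzip and compare the decoded bodies. A minimal sketch in Python using the requests library (the URL is a placeholder, not from the question):

        import requests

        url = "http://example.com/service"  # placeholder for the actual data source

        # 'identity' asks the server not to compress; requests decompresses gzip automatically
        plain = requests.get(url, headers={"Accept-Encoding": "identity"})
        gzipped = requests.get(url, headers={"Accept-Encoding": "gzip"})

        # if the gzipped body comes out shorter after decoding, it was truncated in transit
        print(len(plain.content), len(gzipped.content))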

  • Corrupted .WAR file after transfer between 32-bit and 64-bit Windows machines

    - by Albert Widjaja
    Hi all, has anyone run into .WAR files becoming corrupted after being copied over a network share? A .WAR (Web Archive) file is the J2EE application archive, and it is compressed with the same algorithm as ZIP, I think.

        Scenario 1: Windows Server 2008 x64 to Windows XP using the RDP client (Local Devices and Resources)
        Scenario 2: Windows XP 32-bit to Windows Server 2003 x64 using a shared network drive (port 445, SMB?)

    In both scenarios the transfer always fails and the file arrives corrupted (when opened in the Eclipse/Java IDE, the source code appears duplicated at the ends of lines). But when, in either scenario, I first compress the file into a .ZIP, everything is OK. Can anyone explain why this happens? Thanks, Albert

  • Wifi connected but no data transfer

    - by Anuj
    I have a desktop that runs Windows XP and a laptop that runs Ubuntu. Recently I set up a wireless router so I could access the internet on my laptop over wifi. The laptop connects to the wifi with ease but is unable to transfer any data. Only when I first switch on the laptop can it transfer some data, and only for around 2 minutes; after that, pinging the router shows "Destination Host Unreachable" and everything stops working, yet the wifi still shows as connected. Please help!

  • Seagate GoFlex NAS + Horrible Speed = Bad Experience

    - by Jon H
    I am having issues with transfer speeds from my desktop PC to my NAS. The NAS is hooked up to a gigabit gateway, as is my desktop, with Cat 5e. I see transfer rates of at most 4.0 MB/s; the norm is about 2.5 MB/s. There are three shares on my NAS: Public, Private, and Backup. When I transfer from Private to Public I see the speeds above; within the same share it is almost instant. I am wondering whether the speeds I am seeing are due to my computer or the NAS. I have been looking into building my own media server because of these horrible speeds. Is there anything I can do in the meantime to speed this up?

        Motherboard: M3970AM-HP (Angelica)
        Processor: AMD FX 6100
        RAM: 10 GB PC3-10600
        Hard drive 1: 1.5 TB SATA 3.0 Gb/s, 5400 RPM
        Hard drive 2: 120 GB SATA SSD
        NAS: Seagate 3 TB GoFlex Home
        Connection 1: 1000BASE-T
        Connection 2: Wireless N

  • proxy software that supports parallel transfer

    - by est
    Hi guys, I need to set up a really fast proxy on a remote server. Here's the scenario:

        1. The server prefetches 3 KB of data, mostly HTTP resources.
        2. Instead of acting as a traditional HTTP or SOCKS proxy, the server opens a multithreaded transfer over 3 connections, sending 1 KB of data per thread on each connection.
        3. The client receives the three 1 KB pieces, reassembles them into the original 3 KB, and serves the result through a local HTTP proxy.
        4. The browser displays the original data via that local HTTP proxy.

    Latency is not important as long as the transfer rate is good. Does any software like this exist? Preferably open source or free.

  • AFP/SMB transfers cap at 2 megabytes/sec, wireless N

    - by RD.
    I want to transfer files between two Mac computers. The network is wireless N and both computers have wireless N modules in them. The problem is that when I transfer files between them via file sharing (AFP), the speed caps at 2 megabytes/sec. Just downloading files from the internet I get faster speeds, so this isn't a constriction of my wifi bandwidth; it appears to be a constriction of the protocol being used. My wifi is set to 130 Mbit/s, so I should see real-world transfer speeds of around 12-16 megabytes/sec. I ran this command on both computers:

        sudo sysctl -w net.inet.tcp.delayed_ack=0

    which is supposed to lower TCP overhead, but it had no effect. How can I get the speed I am expecting?

  • MySQL transfer / update (a bit specific)

    - by Jeff
    Before posting I dug through the whole site but didn't find help for my problem, so I hope someone can help... The facts:

        - 30 GB MySQL database on a remote server (about 20,000,000 rows)
        - the data is updated once weekly on the local network (MySQL)
        - I need to transfer/replace the remote database with the locally updated one
        - the connection is about 2 MB (real megabytes, not megabits) up/down

    The point is that I can't have any downtime on the remote MySQL server. Until now I have tried:

        - Navicat data sync: OK, but takes about 3 days to finish
        - dbForge: OK, but needs 5 days to finish
        - mysqldump transferred to the remote server and executed there: about a day, but a lot of downtime
        - rsync of the database folder /mysql/lib/MY_DATABASE: 4 hours, but after that I always have to run a repair on the remote server, which takes about 2 hours, and there is a lot of downtime
        - mysqldump piped from the command line directly to the remote server: still not satisfied, many problems
        - MySQL replication: slow

    I could list more things that I have tried... Anyway, what is the best way to refresh the remote MySQL weekly and at the same time have zero downtime and no huge server load? If you have any ideas, please share.
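
    One common pattern, sketched here under stated assumptions (shell access on both ends; the hostname and schema names are hypothetical), is to stream a compressed dump into a staging schema and then swap it in atomically, so the live schema is never down. A minimal Python sketch:

        import subprocess

        # dump with a consistent snapshot; --quick streams rows instead of buffering them
        dump = subprocess.Popen(
            ["mysqldump", "--single-transaction", "--quick", "mydb"],
            stdout=subprocess.PIPE,
        )
        # ssh -C compresses the stream in transit; load into a staging schema
        subprocess.run(["ssh", "-C", "remote-host", "mysql mydb_staging"], stdin=dump.stdout)
        dump.stdout.close()

        # afterwards, swap each table in with RENAME TABLE (atomic in MySQL), e.g.
        #   RENAME TABLE mydb.orders TO mydb_old.orders, mydb_staging.orders TO mydb.orders;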

  • Virtual SD card won't transfer [migrated]

    - by Hyztname
    I have a Verizon Galaxy Nexus 32 GB. I've installed AOKP on it and it has been running pretty well, with no bugs etc. But after I connected it to my Nintendo Wii, I think it mounted the SD card as FAT32 by itself. I get the following error when trying to download something with the 4Shared sync app (actually, nothing can download or transfer anything to the virtual SD card):

        XXXXX.apk: open failed: EACCES (Permission denied)

    I already restored my old backup, wiped all data, etc.; nothing seems to work. Note that I can still access my SD card; I just can't transfer files, take pics... basically write data. PS: I showed the 4Shared problem because it was the only one that reported anything specific.

  • How to resolve a NULL cString crash

    - by hanumanDev
    I'm getting a crash with the following encoding fix I'm trying to implement:

        // encoding fix
        NSString *correctStringTitle = [NSString stringWithCString:[[item objectForKey:@"main_tag"] cStringUsingEncoding:NSISOLatin1StringEncoding]
                                                          encoding:NSUTF8StringEncoding];
        cell.titleLabel.text = [correctStringTitle capitalizedString];

    My crash log output states:

        *** Terminating app due to uncaught exception 'NSInvalidArgumentException',
        reason: '*** +[NSString stringWithCString:encoding:]: NULL cString'

    Thanks for any help.

  • Convert .NET String object into a base64-encoded string

    - by chester89
    I have a question: which Unicode encoding should I use when encoding a .NET string into base64? I know strings are UTF-16 encoded on Windows, so is my way of encoding the right one?

        public static String ToBase64String(this String source)
        {
            return Convert.ToBase64String(Encoding.Unicode.GetBytes(source));
        }

  • How to detect UTF-8-encoded strings [closed]

    - by Diego Sendra
    A customer asked us to build a VB6 scraper with multi-language support, for which we needed to detect UTF-8-encoded strings so we could decode them later for proper display in the application UI. This need arises from VB6's limitations in natively supporting UTF-8 in its controls, contrary to .NET, where you can tell a control to expect UTF-8 encoding. VB6 natively supports only the ISO 8859-1 and/or Windows-1252 encodings, so textboxes, dropdowns, listview controls, and others can't be set to natively expect UTF-8 the way they can in .NET; instead we would see weird symbols such as é or è, among others, making a whole mess of the display.

    So, the next function contains the UTF-8-encoded punctuation marks and symbols of languages like Spanish, Italian, German, Portuguese, French, and others, based on an excellent UTF-8 table we got from this link: http://home.telfort.nl/~t876506/utf8tbl.html

    Basically, the function checks whether each of the listed UTF-8-encoded sequences, separated by | (pipe), appears in the passed string, doing a substring search first. If that finds nothing, it makes an alternative search based on ASCII values. Say, a string like "Societé" ("society" in English) returns FALSE from isUTF8("Societé"), while isUTF8("SocietÈ") returns TRUE, since È is the UTF-8-encoded representation of é. Once you have TRUE or FALSE, you can decode the string with the DecodeUTF8() function for proper display, a function we found somewhere else a while ago and have also included in this post.

        Function isUTF8(ByVal ptstr As String)
            Dim tUTFencoded As String
            Dim tUTFencodedaux
            Dim tUTFencodedASCII As String
            Dim ptstrASCII As String
            Dim iaux, iaux2 As Integer
            Dim ffound As Boolean

            ffound = False

            ptstrASCII = ""
            For iaux = 1 To Len(ptstr)
                ptstrASCII = ptstrASCII & Asc(Mid(ptstr, iaux, 1)) & "|"
            Next

            tUTFencoded = "Ä|Ã…|Ç|É|Ñ|Ö|ÃŒ|á|Ã|â|ä|ã|Ã¥|ç|é|è|ê|ë|í|ì|î|ï|ñ|ó|ò|ô|ö|õ|ú|ù|û|ü|â€|°|¢|£|§|•|¶|ß|®|©|â„¢|´|¨|â‰|Æ|Ø|∞|±|≤|≥|Â¥|µ|∂|∑|âˆ|Ï€|∫|ª|º|Ω|æ|ø|¿|¡|¬|√|Æ’|≈|∆|«|»|…|Â|À|Ã|Õ|Å’|Å“|–|—|“|â€|‘|’|÷|â—Š|ÿ|Ÿ|â„|€|‹|›|ï¬|fl|‡|·|‚|„|‰|Â|Ú|Ã|Ë|È|Ã|ÃŽ|Ã|ÃŒ|Ó|Ô||Ã’|Ú|Û|Ù|ı|ˆ|Ëœ|¯|˘|Ë™|Ëš|¸|Ë|Ë›|ˇ" & _
                          "Å|Å¡|¦|²|³|¹|¼|½|¾|Ã|×|Ã|Þ|ð|ý|þ" & _
                          "â‰|∞|≤|≥|∂|∑|âˆ|Ï€|∫|Ω|√|≈|∆|â—Š|â„|ï¬|fl||ı|˘|Ë™|Ëš|Ë|Ë›|ˇ"

            tUTFencodedaux = Split(tUTFencoded, "|")
            If UBound(tUTFencodedaux) > 0 Then
                iaux = 0
                Do While Not ffound And Not iaux > UBound(tUTFencodedaux)
                    If InStr(1, ptstr, tUTFencodedaux(iaux), vbTextCompare) > 0 Then
                        ffound = True
                    End If
                    If Not ffound Then
                        'ASCII numeric search
                        tUTFencodedASCII = ""
                        For iaux2 = 1 To Len(tUTFencodedaux(iaux))
                            'gets ASCII numeric sequence
                            tUTFencodedASCII = tUTFencodedASCII & Asc(Mid(tUTFencodedaux(iaux), iaux2, 1)) & "|"
                        Next
                        'tUTFencodedASCII = Left(tUTFencodedASCII, Len(tUTFencodedASCII) - 1)
                        'compares numeric sequences
                        If InStr(1, ptstrASCII, tUTFencodedASCII) > 0 Then
                            ffound = True
                        End If
                    End If
                    iaux = iaux + 1
                Loop
            End If
            isUTF8 = ffound
        End Function

        Function DecodeUTF8(s)
            Dim i
            Dim c
            Dim n

            s = s & " "
            i = 1
            Do While i <= Len(s)
                c = Asc(Mid(s, i, 1))
                If c And &H80 Then
                    n = 1
                    Do While i + n < Len(s)
                        If (Asc(Mid(s, i + n, 1)) And &HC0) <> &H80 Then
                            Exit Do
                        End If
                        n = n + 1
                    Loop
                    If n = 2 And ((c And &HE0) = &HC0) Then
                        c = Asc(Mid(s, i + 1, 1)) + &H40 * (c And &H1)
                    Else
                        c = 191
                    End If
                    s = Left(s, i - 1) + Chr(c) + Mid(s, i + n)
                End If
                i = i + 1
            Loop
            DecodeUTF8 = s
        End Function

  • UITableView's NSString memory leak on iPhone when encoding with NSUTF8StringEncoding

    - by vince
    My UITableView has a serious memory leak, but only when the NSString is NOT created with NSASCIIStringEncoding.

        - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
            static NSString *CellIdentifier = @"cell";
            UILabel *textLabel1;
            UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier];
            if (cell == nil) {
                cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier] autorelease];
                textLabel1 = [[UILabel alloc] initWithFrame:CGRectMake(105, 6, 192, 22)];
                textLabel1.tag = 1;
                textLabel1.textColor = [UIColor whiteColor];
                textLabel1.backgroundColor = [UIColor blackColor];
                textLabel1.numberOfLines = 1;
                textLabel1.adjustsFontSizeToFitWidth = NO;
                [textLabel1 setFont:[UIFont boldSystemFontOfSize:19]];
                [cell.contentView addSubview:textLabel1];
                [textLabel1 release];
            } else {
                textLabel1 = (UILabel *)[cell.contentView viewWithTag:1];
            }
            NSDictionary *tmpDict = [listOfInfo objectForKey:[NSString stringWithFormat:@"%@", indexPath.row]];
            textLabel1.text = [tmpDict objectForKey:@"name"];
            return cell;
        }

        - (void)readDatabase {
            NSArray *documentPaths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
            NSString *documentsDir = [documentPaths objectAtIndex:0];
            databasePath = [documentsDir stringByAppendingPathComponent:[NSString stringWithFormat:@"%@", myDB]];
            sqlite3 *database;
            if (sqlite3_open([databasePath UTF8String], &database) == SQLITE_OK) {
                const char *sqlStatement = [[NSString stringWithFormat:@"select id,name from %@ order by orderid", myTable] UTF8String];
                sqlite3_stmt *compiledStatement;
                if (sqlite3_prepare_v2(database, sqlStatement, -1, &compiledStatement, NULL) == SQLITE_OK) {
                    while (sqlite3_step(compiledStatement) == SQLITE_ROW) {
                        NSString *tmpid = [NSString stringWithUTF8String:(char *)sqlite3_column_text(compiledStatement, 0)];
                        NSString *tmpname = [NSString stringWithCString:(const char *)sqlite3_column_text(compiledStatement, 1) encoding:NSUTF8StringEncoding];
                        [listOfInfo setObject:[[NSMutableDictionary alloc] init] forKey:tmpid];
                        [[listOfInfo objectForKey:tmpid] setObject:[NSString stringWithFormat:@"%@", tmpname] forKey:@"name"];
                    }
                }
                sqlite3_finalize(compiledStatement);
                debugNSLog(@"sqlite closing");
            }
            sqlite3_close(database);
        }

    When I change the line

        NSString *tmpname = [NSString stringWithCString:(const char *)sqlite3_column_text(compiledStatement, 1) encoding:NSUTF8StringEncoding];

    to

        NSString *tmpname = [NSString stringWithCString:(const char *)sqlite3_column_text(compiledStatement, 1) encoding:NSASCIIStringEncoding];

    the memory leak is gone. I tried stringWithUTF8String: and it still leaks. I've also tried:

        NSData *dtmpname = [NSData dataWithBytes:sqlite3_column_blob(compiledStatement, 1) length:sqlite3_column_bytes(compiledStatement, 1)];
        NSString *tmpname = [[[NSString alloc] initWithData:dtmpname encoding:NSUTF8StringEncoding] autorelease];

    and the problem remains; the leak occurs when you start scrolling the table view. I've actually tried other encodings, and it seems that only NSASCIIStringEncoding works (no memory leak). Any idea how to get rid of this problem?

  • Code for decoding/encoding a modified base64 URL

    - by Kirk Liemohn
    I want to base64-encode data to put it in a URL and then decode it within my HttpHandler. I have found that base64 encoding allows the '/' character, which messes up my UriTemplate matching. Then I found that there is a concept of a "modified Base64 for URL" on Wikipedia:

        A modified Base64 for URL variant exists, where no padding '=' will be used, and the '+' and '/' characters of standard Base64 are respectively replaced by '-' and '_', so that using URL encoders/decoders is no longer necessary and has no impact on the length of the encoded value, leaving the same encoded form intact for use in relational databases, web forms, and object identifiers in general.

    Using .NET, I want to change my current code from basic base64 encoding and decoding to the "modified Base64 for URL" method. Has anyone done this? To decode, I know it starts out with something like:

        string base64EncodedText = base64UrlEncodedText.Replace('-', '+').Replace('_', '/');
        // Append '=' char(s) if necessary - how best to do this?
        // My normal base64 decoding now uses encodedText

    But I potentially need to add one or two '=' chars to the end, which looks a little more complex. My encoding logic should be a little simpler:

        // Perform normal base64 encoding
        byte[] encodedBytes = Encoding.UTF8.GetBytes(unencodedText);
        string base64EncodedText = Convert.ToBase64String(encodedBytes);
        // Apply URL variant
        string base64UrlEncodedText = base64EncodedText.Replace("=", String.Empty).Replace('+', '-').Replace('/', '_');

    I have seen the Guid to Base64 for URL Stack Overflow entry, but that has a known length, so they can hardcode the number of equal signs needed at the end.
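
    The padding arithmetic is simpler than it looks: a valid base64 string's length is always a multiple of 4, so you append '=' until it is again, i.e. (4 - length % 4) % 4 characters. A minimal sketch of the round trip in Python (the function names are ours, not from the question):

        import base64

        def b64url_encode(text: str) -> str:
            # urlsafe alphabet uses '-' and '_'; strip the '=' padding for URLs
            return base64.urlsafe_b64encode(text.encode("utf-8")).rstrip(b"=").decode("ascii")

        def b64url_decode(token: str) -> str:
            # restore '=' until the length is a multiple of 4, then decode
            token += "=" * ((4 - len(token) % 4) % 4)
            return base64.urlsafe_b64decode(token).decode("utf-8")

        assert b64url_decode(b64url_encode("any/text+here")) == "any/text+here"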

  • Fast or asynchronous AS3 JPEG encoding

    - by Bart van Heukelom
    I'm currently using the JPGEncoder from the AS3 corelib to encode a bitmap to JPEG:

        var enc:JPGEncoder = new JPGEncoder(90);
        var jpg:ByteArray = enc.encode(bitmap);

    Because the bitmap is rather large (3000 x 2000), the encoding takes a long while (about 20 seconds), causing the application to seemingly freeze while encoding. To solve this, I need either:

        - an asynchronous encoder, so I can keep updating the screen (with a progress bar or something) while encoding, or
        - an alternative encoder which is simply faster.

    Is either possible?

  • Perl's use encoding pragma breaking UTF strings

    - by Karel Bílek
    I have a problem with Perl and the encoding pragma. (I use UTF-8 everywhere: in input, output, and the Perl scripts themselves. I don't want to use any other encoding, ever.) However, when I write

        binmode(STDOUT, ':utf8');
        use utf8;
        $r = "\x{ed}";
        print $r;

    I see the string "í" (which is what I want, and which is the Unicode character U+00ED). But when I add the encoding pragma like this

        binmode(STDOUT, ':utf8');
        use utf8;
        use encoding 'utf8';
        $r = "\x{ed}";
        print $r;

    all I see is a box character. Why? Moreover, when I add Data::Dumper and let it print the new string like this

        binmode(STDOUT, ':utf8');
        use utf8;
        use encoding 'utf8';
        $r = "\x{ed}";
        use Data::Dumper;
        print Dumper($r);

    I see that Perl changed the string to "\x{fffd}". Why?

  • Python: UTF-16LE file to UTF-8 encoding

    - by Qiao
    I have a big file with UTF-16LE (BOM) encoding. Is it possible to convert it to plain UTF-8 with Python? Something like:

        file_old = open('old.txt', mode='r', encoding='utf_16_le')
        file_new = open('new.txt', mode='w', encoding='utf-8')
        text = file_old.read()
        file_new.write(text.encode('utf-8'))

    (See http://docs.python.org/release/2.3/lib/node126.html for the utf_16_le codec.) This is not working, and I can't understand the "TypeError: must be str, not bytes" error. Python 3.
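
    A minimal sketch of what seems intended, assuming Python 3: a file opened in text mode with an encoding= argument already converts on write, so write() must receive str; the extra .encode('utf-8') call produces bytes and raises the TypeError:

        # open() handles both decoding and encoding; pass str to write(), not bytes
        with open('old.txt', 'r', encoding='utf-16') as src, \
             open('new.txt', 'w', encoding='utf-8') as dst:
            dst.write(src.read())

    (encoding='utf-16' also consumes the BOM; utf_16_le would leave it in the text as U+FEFF.)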

  • Ruby string encoding problem

    - by John Prideaux
    I've looked at the other Ruby/encoding-related posts but haven't been able to figure out why the following is not working. Likely just because I'm dense, but here's the situation. I'm using Ruby 1.9 on Windows. I have a set of CSV files that need some data appended to the end of each line. Whenever I run my script, the appended characters are gibberish. The input text appears to be IBM437-encoded, whereas the string I'm appending starts as US-ASCII. Nothing I've tried with respect to forcing the encoding on the input strings or the appended string seems to change the resulting output. I'm stumped. The current encoding version is simply the last one I tried.

        def append_salesperson(txt, salesperson)
          if txt.length > 2
            return txt.chomp.force_encoding('US-ASCII') + %(, "", "", "#{salesperson}")
          end
        end

        salespeople = Hash["fname", "Record Manager"]
        outfile = File.open("ActData.csv", "w:US-ASCII")
        salespeople.each do |filename, recordManager|
          infile = File.open("#{filename}.txt")
          infile.each do |line|
            outfile.puts append_salesperson(line, recordManager)
          end
          infile.close
        end
        outfile.close

  • URL parameters aren't encoded correctly!

    - by Ivan90
    I'm using ASP.NET MVC version 1.0 and I have a problem with a parameter in a URL! My URL looks like this (http://localhost:2282/Tags/PostList/c#):

        routes.MapRoute(
            "TagsRoute",
            "Tags/PostList/{tag}",
            new { controller = "Tags", Action = "PostList", tag = "" }
        );

    The problem is that the tag parameter isn't encoded, so the # symbol is ignored! I am using an ActionLink, but maybe version 1.0 doesn't encode the parameter directly!

        <%=Html.ActionLink(itemtags.Tags.TagName, "PostList", "Tags",
            new { tag = itemtags.Tags.TagName },
            new { style = "color:red;" })%>

    With this ActionLink only whitespace is encoded correctly; in fact "asp.net mvc" becomes "asp.net%20mvc" and works fine! But "c#" isn't encoded :( So I tried Server.UrlEncode, and in effect something happens: "c#" becomes "c%2523", but that isn't correct either, because the hexadecimal for # is %23! Do you have any solutions? Route constraints? Thanks

  • Problem using AudioRecord with 8-bit encoding in Android

    - by maxsap
    Hello, I have made an application that records from the phone's microphone using AudioRecord and 16-bit encoding, and I am able to play back the recording. For compatibility reasons I need to use 8-bit encoding, but when I try to run the same program with that encoding I keep getting an "Invalid Audio Format" error. My code is:

        int bufferSize = AudioRecord.getMinBufferSize(11025,
            AudioFormat.CHANNEL_CONFIGURATION_MONO,
            AudioFormat.ENCODING_PCM_8BIT);
        AudioRecord recordInstance = new AudioRecord(
            MediaRecorder.AudioSource.MIC, 11025,
            AudioFormat.CHANNEL_CONFIGURATION_MONO,
            AudioFormat.ENCODING_PCM_8BIT, bufferSize);

    Does anyone know what the problem is? According to the documentation, AudioRecord is capable of 8-bit encoding. Thanks in advance, maxsap.

  • Running SQL*Plus with bash causes wrong encoding

    - by Petr Mensik
    I have a problem running SQL*Plus from bash. Here is my code:

        #!/bin/bash
        #curl http://192.168.168.165:8080/api_test/xsql/f_exp_order_1016.xsql > script.sql
        wget -O script.sql 192.168.168.165:8080/api_test/xsql/f_exp_order_1016.xsql
        set NLS_LANG=_.UTF8
        sqlplus /nolog << ENDL
        connect login/password
        set sqlblanklines on
        start script.sql
        exit
        <<endl

    I download the insert statements from our intranet, put them into an SQL file, and run it through SQL*Plus. This works fine. My problem is that when I save the script.sql file, the encoding goes wrong. All special characters (like íášc) are broken, which causes the wrong characters to be inserted into my DB. The file's encoding is UTF-8, and UTF-8 is also set on the XSQL page on our intranet. So I really don't know where the problem could be. Any advice regarding my script is also welcome; I am a total newbie at Linux scripting :-)

  • Firefox or Chrome - how to force a specific encoding for a page

    - by Mike
    Hi, I am accessing an intranet site built by amateurs, constructed to be "best viewed by IE" (arghhh!). The site is in Portuguese. All accented letters are garbled and do not appear as they should. As I build sites myself, I know that the best way to build a site in Portuguese or another Latin language is to declare "charset=iso-8859-1" in the page's HTML encoding. This ensures cross-browser and cross-platform compatibility. But I have no way to change this, because I am just a visitor on this site, and I don't know what encoding they are using. What I ask is: is there a way to force my browser (Chrome or Firefox) to re-decode the page using the correct charset? I need this to work on Ubuntu.

  • Normalize Accept-Encoding via HAProxy for optimized Squid hit rate

    - by Matt Beckman
    Our website infrastructure uses HAProxy for load balancing and a Squid cluster for caching; application data lives on an IIS cluster. We load-balance in HAProxy by URI to optimize the Squid hit rate, but we know that Squid holds a different copy of each page for each Accept-Encoding header passed to it by the browser, so IE (gzip, deflate) will have a different cached copy of a page than Firefox (gzip,deflate) or Chrome (gzip,deflate,sdch). We want to normalize the Accept-Encoding headers, and I think the best place to do so would be HAProxy. I'd appreciate it if someone could offer ideas on how to accomplish this without breaking support for clients that lack gzip or deflate support.
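
    A hedged sketch of one way to do this in haproxy.cfg, assuming a HAProxy version that supports http-request rules (the directives are standard; collapsing every gzip-capable client to a bare "gzip" value is our assumed policy, not from the question):

        # in the frontend: give Squid at most two variants per URL
        # clients that accept gzip all present the same canonical header
        http-request set-header Accept-Encoding gzip if { req.hdr(Accept-Encoding) -m sub gzip }
        # clients that don't accept gzip present no header at all
        http-request del-header Accept-Encoding unless { req.hdr(Accept-Encoding) -m sub gzip }

    This drops deflate and sdch for clients that also speak gzip, which is usually an acceptable trade for the improved hit rate.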
