Search Results

Search found 1714 results on 69 pages for 'utf8 decode'.

Page 6/69 | < Previous Page | 2 3 4 5 6 7 8 9 10 11 12 13  | Next Page >

  • Encode and Decode a string in c#

    - by Jibu P C
    Hi, I have a requirement to encode a given string into an unreadable format and then decode it back after a certain action is performed. I have tried Base64 encoding, but that is not a secure way. I need some other solution. Any help regarding this would be appreciated.
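
    A hedged sketch of one step up from Base64: symmetric encryption with AES. This is only an illustration; the key and IV handling below is deliberately simplistic (the key must be 16, 24 or 32 bytes and the IV 16 bytes), and real code should manage keys properly.

        using System;
        using System.Security.Cryptography;
        using System.Text;

        static string Encrypt(string plain, byte[] key, byte[] iv)
        {
            using (Aes aes = Aes.Create())
            using (ICryptoTransform enc = aes.CreateEncryptor(key, iv))
            {
                byte[] data = Encoding.UTF8.GetBytes(plain);
                // Base64 here is only for transporting the ciphertext, not for secrecy
                return Convert.ToBase64String(enc.TransformFinalBlock(data, 0, data.Length));
            }
        }

        static string Decrypt(string cipherText, byte[] key, byte[] iv)
        {
            using (Aes aes = Aes.Create())
            using (ICryptoTransform dec = aes.CreateDecryptor(key, iv))
            {
                byte[] data = Convert.FromBase64String(cipherText);
                return Encoding.UTF8.GetString(dec.TransformFinalBlock(data, 0, data.Length));
            }
        }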

    Read the article

  • how to decode quoted-printable using java

    - by shikha
    Hi, can anyone please tell me how to decode quoted-printable using Java? I am reading mail from the server and fetching some data from it using regex. My mail content type is text/html, so I am getting HTML tags along with the data, which makes the pattern matching very difficult. It is also showing characters like =20 or =cF. How can I resolve this problem? Thanks and regards, Shikha Virmani
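
    One hedged option is to let a MIME-aware library do the decoding instead of regex. With Apache Commons Codec, for example (the sample string below is made up):

        import org.apache.commons.codec.DecoderException;
        import org.apache.commons.codec.net.QuotedPrintableCodec;

        public class QpDemo {
            public static void main(String[] args) throws DecoderException {
                // "=20" is an encoded space, "=C3=A9" an encoded é, and so on
                QuotedPrintableCodec codec = new QuotedPrintableCodec("UTF-8");
                String decoded = codec.decode("Caf=C3=A9 au=20lait");
                System.out.println(decoded);   // Café au lait
            }
        }

    JavaMail's MimeUtility.decode() is another option if the mail is already being read through that API.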

    Read the article

  • In Python, how do I decode GZIP encoding?

    - by alex
    I downloaded a webpage in my Python script. In most cases, this works fine. However, this one had a response header declaring gzip encoding, and when I tried to print the source code of the page it was all garbled symbols in my PuTTY terminal. How do I decode this to regular text?
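
    A short Python sketch of unpacking such a response by hand (this assumes the body really is gzip; a higher-level library such as requests handles this automatically):

        import gzip
        import urllib.request

        req = urllib.request.Request("http://example.com/",
                                     headers={"Accept-Encoding": "gzip"})
        with urllib.request.urlopen(req) as resp:
            body = resp.read()
            if resp.headers.get("Content-Encoding") == "gzip":
                body = gzip.decompress(body)      # back to the plain page bytes
        print(body.decode("utf-8", errors="replace"))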

    Read the article

  • MP3 Decoding on Android

    - by Rob Szumlakowski
    Hi. We're implementing a program for Android phones that plays audio streamed from the internet. Here's approximately what we do: download a custom encrypted format, decrypt it to get chunks of regular MP3 data, decode the MP3 data to raw PCM data in a memory buffer, and pipe the raw PCM data to an AudioTrack. Our target devices so far are the Droid and the Nexus One. Everything works great on the Nexus One, but the MP3 decode is too slow on the Droid. The audio playback starts to skip if we put the Droid under load. We are not permitted to decode the MP3 data to SD card, but I know that's not our problem anyway. We didn't write our own MP3 decoder, but used MPADEC (http://sourceforge.net/projects/mpadec/). It's free and was easy to integrate with our program. We compile it with the NDK. After exhaustive analysis with various profiling tools, we're convinced that it's this decoder that is falling behind. Here are the options we're thinking about: Find another MP3 decoder that we can compile with the Android NDK. This MP3 decoder would have to be optimized to run on mobile ARM devices, or maybe use integer-only math or some other optimizations to increase performance. Since the built-in Android MediaPlayer service will take URLs, we might be able to implement a tiny HTTP server in our program and serve the MediaPlayer with the decrypted MP3s. That way we can take advantage of the built-in MP3 decoder. Get access to the built-in MP3 decoder through the NDK. I don't know if this is possible. Does anyone have any suggestions on what we can do to speed up our MP3 decoding? -- Rob Sz
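
    If the second option pans out, the playback half is the easy part. A hedged sketch (the loopback port and path are placeholders for whatever the embedded server would expose):

        import java.io.IOException;
        import android.media.MediaPlayer;

        // inside whatever Activity or Service owns playback
        static MediaPlayer playFromLocalServer() throws IOException {
            MediaPlayer player = new MediaPlayer();
            // hand the built-in decoder a URL served by the in-process HTTP server
            player.setDataSource("http://127.0.0.1:8080/current-track.mp3");
            player.setOnPreparedListener(new MediaPlayer.OnPreparedListener() {
                public void onPrepared(MediaPlayer mp) { mp.start(); }
            });
            player.prepareAsync();
            return player;
        }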

    Read the article

  • C#, UTF-8 and encoding characters

    - by AspNyc
    This is a shot-in-the-dark, and I apologize in advance if this question sounds like the ramblings of a madman. As part of an integration with a third party, I need to UTF8-encode some string info using C# so I can send it to the target server via multipart form. The problem is that they are rejecting some of my submissions, probably because I'm not encoding their contents correctly. Right now, I'm trying to figure out how a dash or hyphen -- I can't tell which it is just by looking at it -- is received or interpreted by the target server as ?~@~S (yes, that's a 5-character string and is not your browser glitching out). And unfortunately I don't have a thorough enough understanding of Encoding.UTF8.GetBytes() to know how to use the byte array to begin identifying where the problem might lie. If anybody can provide any tips or advice, I would greatly appreciate it. So far my only friend has been MSDN, and not much of one at that.
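
    A hedged guess at the specific symptom: an en dash (U+2013) is three bytes in UTF-8, and "?~@~S" looks like those three bytes printed by a terminal that cannot display them. A quick way to see exactly what GetBytes produces:

        using System;
        using System.Text;

        string s = "\u2013";                               // en dash, not the ASCII hyphen "-"
        byte[] bytes = Encoding.UTF8.GetBytes(s);
        Console.WriteLine(BitConverter.ToString(bytes));   // prints E2-80-93

    If the receiving server then treats those bytes as a single-byte encoding, the one dash turns into a short run of junk characters rather than a single character.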

    Read the article

  • comparing strings in PostgreSQL

    - by binaryLV
    Hello! Is there any way in PostgreSQL to convert UTF-8 characters to "similar" ASCII characters? The string glažškunu rukiši would have to be converted to glazskunu rukisi. The UTF-8 text is not in one specific language; it might be in Latvian, Russian, English, Italian or any other language. This is needed for use in a where clause, so it might be more a matter of "comparing strings" than "converting strings". I tried using convert, but it does not give the desired results (e.g., select convert('Ā', 'utf8', 'sql_ascii') gives \304\200, not A). The database is created with: ENCODING = 'UTF8' LC_COLLATE = 'Latvian_Latvia.1257' LC_CTYPE = 'Latvian_Latvia.1257' These params may be changed, if necessary.
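
    A hedged sketch of one approach: the contrib unaccent module strips diacritics, so both sides of the comparison can be reduced to plain ASCII-like text (the table and column names below are made up; on newer PostgreSQL it is installed with CREATE EXTENSION, on older releases via the contrib SQL script):

        CREATE EXTENSION unaccent;                 -- one-time setup

        SELECT unaccent('glažškunu rukiši');       -- glazskunu rukisi

        SELECT *
          FROM names
         WHERE unaccent(name) = unaccent('glažškunu rukiši');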

    Read the article

  • Fix stubborn 'Setting locale failed.'

    - by plua
    I have a very stubborn, well-known locale error on Ubuntu 9.10: perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LC_TIME = "custom.UTF-8", LANG = "en_US.UTF-8" I have tried the following: Added LANG=en_US.UTF-8 and LC_ALL=en_US.UTF-8 to /etc/environment. Ran apt-get install --reinstall locales (error: perl: warning: Falling back to the standard locale ("C"). /usr/bin/mandb: can't set the locale; make sure $LC_* and $LANG are correct). Ran sudo dpkg-reconfigure locales; result: Cannot set LC_ALL to default locale: No such file or directory, and it then updates all locales including en_US.UTF-8. sudo locale-gen updates all locales successfully, including en_US.UTF-8. sudo locale-gen un_US en_US.UTF-8 gives no error nor other output. In /etc/default/locale it says LANG="en_US.UTF-8". echo $LANG gives en_US.UTF-8. /var/lib/locales/supported.d/local says en_US.UTF-8 UTF-8. locale -a gives me: C en_AG en_AU.utf8 en_BW.utf8 en_CA.utf8 en_DK.utf8 en_GB.utf8 en_HK.utf8 en_IE.utf8 en_IN en_NG en_NZ.utf8 en_PH.utf8 en_SG.utf8 en_US.utf8 en_ZA.utf8 en_ZW.utf8 POSIX So I am pretty much out of options I can think of. Does anybody have any idea? Thanks!
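
    One hedged thing to try, since the odd value in the warning is LC_TIME = "custom.UTF-8" (a locale that does not appear anywhere in the list above): override that one variable explicitly and regenerate, e.g.

        # point LC_TIME at a locale that actually exists and persist it
        sudo update-locale LC_TIME=en_US.UTF-8
        sudo locale-gen en_US.UTF-8
        # then log out and back in (or re-source /etc/default/locale) and re-check with `locale`

    If LC_TIME=custom.UTF-8 is being exported from a shell startup file (~/.bashrc, ~/.profile) or a forwarded SSH environment, it has to be removed there as well.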

    Read the article

  • ghc6 install trouble: hGetContents: invalid argument (invalid UTF-8 byte sequence)

    - by olimay
    Having trouble installing ghc6 on Ubuntu Maverick via apt. Here's what seems to be the relevant error that comes up when I try to (apt-get|aptitude) install ghc6: A package failed to install. Trying to recover: Setting up ghc6 (6.12.1-13ubuntu1) ... ghc-pkg: /home/opm/.ghc/i386-linux-6.12.1/package.conf.d/unix-compat-0.2-edefa7bced91ebe610d455bab466e200.conf: hGetContents: invalid argument (invalid UTF-8 byte sequence) (Here's the full output, if you're interested: http://paste.ubuntu.com/566475/ ) This still happens after apt-get clean and apt-get update. My searching around has not really helped me understand what's going on, except that it might be caused by a mismatch in locale. So, here's the output of locale too: LANG=en_US.utf8 LANGUAGE=en_US:en LC_CTYPE="en_US.utf8" LC_NUMERIC="en_US.utf8" LC_TIME="en_US.utf8" LC_COLLATE="en_US.utf8" LC_MONETARY="en_US.utf8" LC_MESSAGES="en_US.utf8" LC_PAPER="en_US.utf8" LC_NAME="en_US.utf8" LC_ADDRESS="en_US.utf8" LC_TELEPHONE="en_US.utf8" LC_MEASUREMENT="en_US.utf8" LC_IDENTIFICATION="en_US.utf8" LC_ALL= Any ideas? Additional background: this all seems very strange to me, because I used to have ghc6 installed correctly--I use XMonad as my main window manager most of the time. I tried to install haskell-platform (through apt), which failed and told me that there was something wrong with ghc6, and so I reinstalled ghc6 and began to get the above error message.
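
    One hedged workaround, on the assumption that the invalid bytes live in that stale per-user package registration file rather than in the system package database: move the offending file out of the way and re-run the install.

        # back up the package.conf.d entry ghc-pkg is choking on, then retry
        mv ~/.ghc/i386-linux-6.12.1/package.conf.d/unix-compat-0.2-edefa7bced91ebe610d455bab466e200.conf ~/unix-compat.conf.bak
        sudo apt-get install ghc6

    Anything registered only under ~/.ghc (e.g. via cabal install --user) may need to be reinstalled afterwards.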

    Read the article

  • mime decode pdf quoted-printable

    - by TonyVipros
    Hi, I've been building a simple ticket system and it's all done and working, except for when it receives PDF files via email that have been sent using quoted-printable encoding. I've tried using quoted_printable_decode() and the quoted-printable.decode stream filter; the latter just created an empty file. I've also tried using $input = preg_replace('/=([a-f0-9]{2})/ie', "chr(hexdec('\\1'))", $input). However, the PDF file is always unreadable. I've compared the original with the rebuilt version and there are a lot of 00 bytes missing and some other characters replaced. original file rebuilt file
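
    For reference, a minimal sketch of the plain library route, assuming $rawPart already holds the exact quoted-printable body of the PDF part (headers stripped, CRLF line endings untouched):

        <?php
        // quoted_printable_decode() is binary-safe; if the output is still corrupt,
        // the input was probably altered before this point (line endings rewritten,
        // the part re-encoded, or the attachment actually being base64)
        $pdf = quoted_printable_decode($rawPart);
        file_put_contents('attachment.pdf', $pdf);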

    Read the article

  • Decode received multipart/form-data request in Cocoa

    - by Snej
    Hi: I wonder if there is any possibility to explicitly decode an incoming multipart/form-data POST request. Is there any lib to handle this safely? Several files are embedded in this request and I want to save these files individually.

        NSData *data = [(id)CFHTTPMessageCopyBody(request) autorelease];

    Content-Type: multipart/form-data; boundary=0xKhTmLbOuNdArY

    The data content is:

        --0xKhTmLbOuNdArY
        Content-Disposition: form-data; name="file1"; filename="fileName1.extension"
        Content-Type: application/octet-stream; charset=utf-8
        .........
        --0xKhTmLbOuNdArY
        Content-Disposition: form-data; name="file2"; filename="fileName2.extension"
        Content-Type: application/octet-stream; charset=utf-8
        .........
        --0xKhTmLbOuNdArY--
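
    There is no obvious Foundation API that parses this directly, so here is a rough hand-rolled sketch (hedged; error handling is omitted and the boundary string is assumed to come from the Content-Type header as above). Each extracted chunk still contains its own part headers, a blank CRLF CRLF line, and then the file bytes:

        NSData *boundary = [@"--0xKhTmLbOuNdArY" dataUsingEncoding:NSUTF8StringEncoding];
        NSMutableArray *parts = [NSMutableArray array];
        NSUInteger start = 0;
        while (start < [data length]) {
            NSRange r = [data rangeOfData:boundary options:0
                                    range:NSMakeRange(start, [data length] - start)];
            if (r.location == NSNotFound) break;
            if (r.location > start) {
                // everything between two boundary markers is one part
                [parts addObject:[data subdataWithRange:NSMakeRange(start, r.location - start)]];
            }
            start = NSMaxRange(r);
        }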

    Read the article

  • Skia Decoder fails to decode remote Stream

    - by Samuh
    I am trying to open a remote stream of a JPEG image and convert it into a Bitmap object: BitmapFactory.decodeStream( new URL("http://some.url.to/source/image.jpg") .openStream()); The decoder returns null and in the logs I get the following message: DEBUG/skia(xxxx): --- decoder->decode returned false Note: 1. The content length is non-zero and the content type is image/jpeg. 2. When I open the URL in a browser I can see the image. What is it that I am missing here? Please help. Thanks.
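
    A hedged workaround that often comes up for this symptom: the decoder needs to skip around in the stream, and a bare network stream does not support that reliably, so buffer it (or download the bytes fully) before decoding.

        import java.io.BufferedInputStream;
        import java.io.InputStream;
        import java.net.URL;
        import android.graphics.Bitmap;
        import android.graphics.BitmapFactory;

        // inside whatever class does the image loading
        static Bitmap fetchBitmap(String url) throws Exception {
            InputStream in = new URL(url).openStream();
            try {
                // the buffered wrapper gives decodeStream a stream it can mark/reset on
                return BitmapFactory.decodeStream(new BufferedInputStream(in, 8192));
            } finally {
                in.close();
            }
        }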

    Read the article

  • exceptions with python unicode encode/decode functions (why doesn't errors=ignore actually ignore th

    - by gatoatigrado
    Does anyone know why the string conversion functions throw exceptions when errors="ignore" is passed? How can I convert from regular Python string objects to unicode without errors being thrown? Thanks very much!

        python -c "import codecs; codecs.open('tmp', 'wb', encoding='utf8', errors='ignore').write('?????')"

    returns

        Traceback (most recent call last):
          File "", line 1, in
          File "/usr/lib/python2.6/codecs.py", line 686, in write
            return self.writer.write(data)
          File "/usr/lib/python2.6/codecs.py", line 351, in write
            data, consumed = self.encode(object, self.errors)
        UnicodeDecodeError: 'ascii' codec can't decode byte 0xd0 in position 0: ordinal not in range(128)
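
    As far as I can tell (hedged), the exception happens because write() is handed a byte string: the codecs writer first decodes it with the default ASCII codec before encoding to UTF-8, and errors='ignore' only applies to the encode step, not to that implicit decode. Passing real Unicode text avoids the problem. A Python 3 style sketch of the same idea (the sample text is a stand-in for the original string):

        import codecs

        raw = "пример".encode("utf-8")               # raw bytes, like the original argument
        text = raw.decode("utf-8", errors="ignore")  # decode explicitly, ignoring bad bytes

        with codecs.open("tmp", "w", encoding="utf8", errors="ignore") as f:
            f.write(text)                            # write text, not bytes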

    Read the article

  • decode mysql query before returning it to view

    - by stormdrain
    I'm running a query to MySQL that returns encrypted data. I'd like, if possible, to decode the results before sending them to the view. It seems like better form to handle the decoding in the controller (or even the model) rather than inside the view. I can't seem to wrap my head around how to do it, though. I was thinking I could iterate through the object, decode it, and push it to another array that would be sent to the view. The problem with this is that I won't know (and need to keep) the indexes of the query. So the query might return something like:

        [id] => 742
        [client_id] => 000105
        [last] => dNXcw6mQPaGQ4rXfgIGJMq1pZ1dYAim0
        [first] => dDF7VoO37qdtYoYKfp1ena5mjBXXU0K3dDlcq1ssSvCgpOx75y0A==
        [middle] => iXy6OWa48kCamViDZFv++K6okIkalC0am3OMPcBwK8sA==
        [phone] => eRY3zBhAw2H8tKE

    Any ideas?
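
    A hedged controller-side sketch that keeps the field names as keys (decrypt_value() is a made-up stand-in for whatever decryption routine is already in use, and result_array() assumes a CodeIgniter-style query object):

        <?php
        $decoded = array();
        foreach ($query->result_array() as $row) {
            $decoded_row = array();
            foreach ($row as $field => $value) {
                // leave plain columns alone, decrypt the rest; keys are preserved
                if ($field === 'id' || $field === 'client_id') {
                    $decoded_row[$field] = $value;
                } else {
                    $decoded_row[$field] = decrypt_value($value);
                }
            }
            $decoded[] = $decoded_row;
        }
        // hand $decoded to the view in place of the raw query result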

    Read the article

  • How to decode HTML encoded text in MS Access

    - by Dejan
    Hi all, I have a table field in MS Access 2003 which contains HTML encoded strings like this: &#913;&#957;&#964;&#945;&#947;&#969;&#957;&#953;&#963;&#956;&#972;&#962; &#960;&#945;&#947;&#954;&#959;&#963;&#956;&#943;&#959;&#965; &#949;&#960;&#953;&#960;&#941;&#948;&#959;&#965; &#963;&#964;&#951;&#957; &#954;&#945;&#964;&#940;&#961;&#964;&#953;&#963 How can I decode this into "normal string", using MS Access? Thanks in advance.
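
    A small VBA sketch of one way to do it (hedged: it only handles numeric &#nnn; references, which is all the sample contains, and assumes the references are well formed):

        Public Function DecodeHtmlEntities(ByVal s As String) As String
            Dim i As Long, j As Long, code As Long
            i = InStr(s, "&#")
            Do While i > 0
                j = InStr(i, s, ";")
                If j = 0 Then Exit Do
                code = CLng(Mid(s, i + 2, j - i - 2))
                ' swap the numeric reference for the actual Unicode character
                s = Left(s, i - 1) & ChrW(code) & Mid(s, j + 1)
                i = InStr(i + 1, s, "&#")
            Loop
            DecodeHtmlEntities = s
        End Function

    It can then be called from a query (e.g. UPDATE some_table SET some_field = DecodeHtmlEntities(some_field)) or from VBA code.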

    Read the article

  • question about php decode JSON

    - by cj333
    Hi, I still have a question about decoding JSON in PHP. The JSON is returned like this: all({"Total":30,"Debug":null,"Documents":[ { "DocTitle":"Image: A municipal police officer takes positio", "Docmultimedia":[ { "DocExpire":"2/7/2011 1:39:02 PM" } ] } ...] }); This is my PHP code: foreach ($data->Documents as $result) { echo htmlspecialchars($result->DocTitle).'<br />'; if(!empty($result->Docmultimedia)){ echo htmlspecialchars($result->Docmultimedia->DocExpire).'<br />'; } } It returns Warning: Invalid argument supplied for foreach(). Also, is echo htmlspecialchars($result->Docmultimedia->DocExpire) written correctly? Thanks all.
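
    A hedged sketch of the two things that look wrong: the response is JSONP (wrapped in all( ... );), so the padding has to be stripped before json_decode(), and Docmultimedia is a JSON array, so its first element needs an index ($raw below stands for the raw response string):

        <?php
        $json = preg_replace('/^all\((.*)\);?\s*$/s', '$1', $raw);   // strip the all( ... ); wrapper
        $data = json_decode($json);

        foreach ($data->Documents as $result) {
            echo htmlspecialchars($result->DocTitle) . '<br />';
            if (!empty($result->Docmultimedia)) {
                echo htmlspecialchars($result->Docmultimedia[0]->DocExpire) . '<br />';
            }
        }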

    Read the article

  • How to encode and decode chinese characters?

    - by melaos
    I've tried googling around but wasn't able to find what charset the text below belongs to: 具有éœé›»ç”¢ç”Ÿè£ç½®ä¹‹å½±åƒè¼¸å…¥è£ç½® But by putting <meta http-equiv="Content-Type" Content="text/html; charset=utf-8"> and keeping that string in an HTML file, I was able to view the Chinese characters properly, which is: ??????????????? So my question is: what tools can I use to detect the character set of this text? And how do I convert/encode/decode it properly in C#? Updates: Added some test code [TestMethod] public void TestMethod1() { string encodedText = "具有éœé›»ç”¢ç”Ÿè£ç½®ä¹‹å½±åƒè¼¸å…¥è£ç½®"; Encoding encoder = new UTF8Encoding(); byte[] postBytes = encoder.GetBytes(encodedText); postBytes = UTF8Encoding.Convert(Encoding.UTF8, Encoding.Unicode, postBytes); string decodedText = Encoding.Unicode.GetString(postBytes); Assert.AreNotEqual(encodedText, decodedText); } thanks
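
    A hedged guess at what happened: that string looks like UTF-8 bytes that were decoded as Windows-1252 somewhere along the way (classic mojibake). If so, the repair in C# is to reverse the misread, roughly like this (it may fail if the text was mangled more than once or if bytes were dropped):

        using System.Text;

        string mojibake = "具有éœé›»ç”¢ç”Ÿè£ç½®ä¹‹å½±åƒè¼¸å…¥è£ç½®";
        // re-encode with the code page it was wrongly decoded with...
        byte[] utf8Bytes = Encoding.GetEncoding(1252).GetBytes(mojibake);
        // ...then read those bytes as the UTF-8 they really are
        string chinese = Encoding.UTF8.GetString(utf8Bytes);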

    Read the article

  • Decode S-JIS string to UTF-8

    - by user566613
    Hi, I am working on a Japanese file and I have no knowledge of the language. The file is encoded in S-JIS. Now, I am supposed to convert the contents into UTF-8 so that the content looks like Japanese. And here I am completely blank. I tried the following code that I found somewhere on the Internet, but no luck: byte[] arrByte = Encoding.UTF8.GetBytes(arrActualData[x]); string str = ASCIIEncoding.ASCII.GetString(arrByte); Can anyone help me with this? Thanks in advance, Kunal
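
    A minimal sketch, assuming the file really is Shift-JIS and the goal is a UTF-8 copy of it (the file names are made up). The key point is to read the bytes with the Shift-JIS encoding rather than ASCII or UTF-8:

        using System.IO;
        using System.Text;

        Encoding sjis = Encoding.GetEncoding("shift_jis");        // code page 932
        string text = File.ReadAllText("input_sjis.txt", sjis);   // bytes -> correct characters
        File.WriteAllText("output_utf8.txt", text, Encoding.UTF8);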

    Read the article

  • jQuery urlencode/decode patch help

    - by jeerose
    Hi Gang, I'm using this jQuery urlencode and urldecode plugin - very simple and easy to use but it doesn't, in its original form, remove + from the string. The one comment on the home page suggests a patch but I don't know how to implement it. Can anyone help me out? The Page: http://www.digitalbart.com/jquery-and-urlencode/ //URL Encode/Decode $.extend({URLEncode:function(c){var o='';var x=0;c=c.toString(); var r=/(^[a-zA-Z0-9_.]*)/; while(x<c.length){var m=r.exec(c.substr(x)); if(m!=null && m.length>1 && m[1]!=''){o+=m[1];x+=m[1].length; }else{if(c[x]==' ')o+='+';else{var d=c.charCodeAt(x);var h=d.toString(16); o+='%'+(h.length<2?'0':'')+h.toUpperCase();}x++;}}return o;}, URLDecode:function(s){var o=s;var binVal,t;var r=/(%[^%]{2})/; while((m=r.exec(o))!=null && m.length>1 && m[1]!=''){ b=parseInt(m[1].substr(1),16); t=String.fromCharCode(b);o=o.replace(m[1],t);}return o;} }); The proposed Patch: function dummy_url_decode(url) { // fixed -- + char decodes to space char var o = url; var binVal, t, b; var r = /(%[^%]{2}|\+)/; while ((m = r.exec(o)) != null && m.length > 1 && m[1] != '') { if (m[1] == '+') { t = ' '; } else { b = parseInt(m[1].substr(1), 16); t = String.fromCharCode(b); } o = o.replace(m[1], t); } return o; } Thanks!
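
    If it helps, applying the patch just means folding that '+' case into the plugin's URLDecode. A hedged sketch, registered after the plugin loads so it overrides the original definition:

        $.extend({
            URLDecode: function (s) {
                var o = s, m, b, t;
                var r = /(%[^%]{2}|\+)/;
                while ((m = r.exec(o)) != null && m.length > 1 && m[1] != '') {
                    if (m[1] == '+') {
                        t = ' ';                      // '+' decodes to a space
                    } else {
                        b = parseInt(m[1].substr(1), 16);
                        t = String.fromCharCode(b);
                    }
                    o = o.replace(m[1], t);
                }
                return o;
            }
        });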

    Read the article

  • Writing UTF8 text to file

    - by sonofdelphi
    I am using the following function to save text to a file (on IE-8 w/ActiveX). function saveFile(strFullPath, strContent) { var fso = new ActiveXObject( "Scripting.FileSystemObject" ); var flOutput = fso.CreateTextFile( strFullPath, true ); //true for overwrite flOutput.Write( strContent ); flOutput.Close(); } The code works fine if the text is fully Latin-9 but when the text contains even a single UTF-8 encoded character, the write fails. The ActiveX FileSystemObject does not support UTF-8, it seems. I tried UTF-16 encoding the text first but the result was garbled. What is a workaround?
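
    One workaround, hedged (it assumes ADODB is scriptable in the same environment as the FileSystemObject, which is normally the case under IE/ActiveX), is to write through ADODB.Stream, which does accept a character set:

        function saveFileUtf8(strFullPath, strContent) {
            var stream = new ActiveXObject("ADODB.Stream");
            stream.Type = 2;                        // adTypeText
            stream.Charset = "utf-8";
            stream.Open();
            stream.WriteText(strContent);
            stream.SaveToFile(strFullPath, 2);      // adSaveCreateOverWrite
            stream.Close();
        }

    Note that ADODB.Stream writes a UTF-8 byte order mark at the start of the file.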

    Read the article

  • IDN aware tools to encode/decode human readable IRI to/from valid URI

    - by Denis Otkidach
    Let's assume a user enters the address of some resource and we need to translate it to: <a href="valid URI here">human readable form</a> The HTML4 specification refers to RFC 3986, which allows only ASCII alphanumeric characters and the dash in the host part; all non-ASCII characters in other parts should be percent-encoded. That's what I want to put in the href attribute to make the link work properly in all browsers. The IDN should be encoded with Punycode. The HTML5 draft refers to RFC 3987, which also allows percent-encoded Unicode characters in the host part and a large subset of Unicode in both the host and other parts without encoding them. The user may enter the address in any of these forms. To provide a human readable form of it I need to decode all printable characters. Note that some parts of the address might not correspond to valid UTF-8 sequences, usually when the target site uses some other character encoding. An example of what I'd like to get: <a href="http://xn--80aswg.xn--p1ai/%D0%BF%D1%83%D1%82%D1%8C?%D0%B7%D0%B0%D0%BF%D1%80%D0%BE%D1%81"> http://сайт.рф/путь?запрос</a> Are there any tools to solve these tasks? I'm especially interested in libraries for Python and JavaScript.
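
    A minimal Python 3 sketch of the decode direction (hedged: it ignores ports and userinfo, and assumes percent-escapes are UTF-8):

        from urllib.parse import urlsplit, unquote

        def to_readable(uri):
            parts = urlsplit(uri)
            host = parts.hostname.encode("ascii").decode("idna")   # Punycode -> Unicode labels
            path = unquote(parts.path)                             # %XX -> Unicode, assuming UTF-8
            query = unquote(parts.query)
            return parts.scheme + "://" + host + path + ("?" + query if query else "")

        print(to_readable("http://xn--80aswg.xn--p1ai/%D0%BF%D1%83%D1%82%D1%8C?%D0%B7%D0%B0%D0%BF%D1%80%D0%BE%D1%81"))

    For the opposite direction, the 'idna' codec encodes the host and urllib.parse.quote percent-encodes the rest.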

    Read the article

  • Mysql latin1 turkish data and delphi 2010 utf8

    - by sabri.arslan
    Hello, I have tables with latin1_general_ci collation that hold Turkish character values, and I can use this data with Delphi 7 + Zeos with no problem. I want to upgrade to Delphi 2010, but Zeos appears too slow, so I want to use an ODBC+ADO or dbExpress solution. The dbExpress solution works fine: it displays my data as entered and writes it back as entered, without any change to the column charset. But dbExpress has problems too; for example, select * from a table with varchar, decimal, int, tinyint and text columns gives AV errors on XP systems (Vista and 7 give no error and seem to work, though not fully tested). The ADO solution (dbGo) works fine, but it does not show my data as entered: it wants everything to be UTF, and I don't want to convert my data to UTF before testing everything. How can I see my data as entered, write UTF-8 on the client side, and keep storing latin1 (as Zeos and dbExpress do)? I have tried many other options, e.g. MySQL-side collation and charset parameters. Sorry for my bad English; I hope someone understands me. Thanks.
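
    For what it's worth, the MySQL-side knob that governs this conversion is the connection character set. The sketch below is only something to experiment with, not a guaranteed fix for the driver behaviour described above:

        -- issued right after connecting from the client
        SET NAMES utf8;     -- columns stay latin1; the server converts results to UTF-8 and converts UTF-8 writes back
        -- SET NAMES latin1 instead makes the server pass the stored bytes through unchanged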

    Read the article

  • Match beginning of words in Mysql for UTF8 strings

    - by ankimal
    Hi, I'm trying to match the beginning of words in a MySQL column that stores strings as varchar. Unfortunately, REGEXP does not seem to work for UTF-8 strings, as mentioned here. So, select * from names where name REGEXP '[[:<:]]Aandre'; does not work if I have a name like Foobar Aándreas. However, select * from names where name like '%andre%' matches the row I need but does not guarantee beginning-of-word matches. Is it better to do the LIKE and filter it out on the application side? Any other solutions?
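
    One SQL-only sketch (hedged: it treats a space as the only word separator, and relies on the column's accent-insensitive collation to match Aandre against Aándreas, just as the LIKE above already does):

        SELECT *
          FROM names
         WHERE name LIKE 'Aandre%'        -- word at the very start of the string
            OR name LIKE '% Aandre%';     -- word starting right after a space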

    Read the article
