Search Results

Search found 4604 results on 185 pages for 'utf'.


  • Force .js files saved in ANSI encoding to show in UTF-8 on IIS 7.5

    - by Xcarpa
    I'm migrating a web system that currently runs on Windows Server 2003 with IIS 6 to IIS 7.5 on Windows Server 2008. The system generates JavaScript files with accented characters in ANSI (Portuguese - Brazil); these scripts show, for example, alert messages. On IIS 6 I have no problem with that, but on IIS 7.5, if those files are not in UTF-8, the accented characters do not appear correctly. Is there any way to force these files, even in ANSI, to be processed by IIS 7.5 as UTF-8? Thank you! Cheers, Xcarpa


  • UTF-8 support in Java application

    - by jacekn
    I'm having trouble with UTF-8. In common.jsp I have <%@ page language="java" contentType="text/html; charset=UTF-8" pageEncoding="UTF-8"%>, and a typical.jsp includes it with <%@ include file="common.jsp" %>. The page head contains <meta http-equiv="Content-Type" content="text/html; charset=UTF-8"> and the form is declared as <form id="screenObject" accept-charset="UTF-8" action="/SiteAdmin/articleHeaderEdit?articleId=15" method="post">. I enter non-Latin-1 characters into a text field and click Save. The validator complains about another field and stops the submission, so the data never reaches the database; the database's ability to handle UTF-8 is not part of this picture. The page redisplays with the appropriate error, but the text that had been entered is all messed up: every non-Latin-1 character is converted to gibberish. I'm using Spring 3 MVC, in case that matters. Attempts: adding <property name="contentType" value="text/html;charset=UTF-8" /> to my view resolver didn't help.
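
    A symptom like this usually means the POSTed bytes are decoded with the container's default charset before the framework ever sees them, so fixing the pages alone is not enough. Below is a minimal, illustrative servlet filter that forces UTF-8 on the request; the class name and mapping are made up, and Spring 3 also ships an equivalent org.springframework.web.filter.CharacterEncodingFilter that can be declared in web.xml instead.

      import java.io.IOException;
      import javax.servlet.*;

      // Sketch only: force UTF-8 decoding of request parameters before the
      // framework reads them. Map this filter ahead of all others in web.xml.
      public class ForceUtf8Filter implements Filter {
          @Override
          public void init(FilterConfig config) { }

          @Override
          public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                  throws IOException, ServletException {
              if (req.getCharacterEncoding() == null) {
                  req.setCharacterEncoding("UTF-8"); // decode POST parameters as UTF-8
              }
              res.setCharacterEncoding("UTF-8");     // and answer in UTF-8 as well
              chain.doFilter(req, res);
          }

          @Override
          public void destroy() { }
      }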


  • jQuery AJAX is not sending UTF-8 to my server, only in IE.

    - by alex
    I am sending UTF-8 Japanese text to my server. It works in Firefox; my access.log and headers are: /ajax/?q=%E6%BC%A2%E5%AD%97 Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Content-Type: application/x-www-form-urlencoded; charset=UTF-8 However, in IE8 my access.log says: /ajax/?q=?? For some reason IE8 is turning my AJAX call into question marks. Why!? I added scriptCharset and contentType according to some tutorials, but still no luck. This is my code: $.ajax({ method: "get", url: "/ajax/", scriptCharset: "utf-8", contentType: "application/x-www-form-urlencoded; charset=UTF-8", data: "q=" + query ... ...


  • xml with special character, encoding utf-8

    - by Sergio Morieca
    I have a few simple questions, because I got confused reading all the different responses. 1) If I have an XML file whose prolog declares UTF-8 encoding and I'm going to unmarshal it with Java (for example with JAXB), I suppose that I can't put the CROSS OF LORRAINE character (http://www.fileformat.info/info/unicode/char/2628/index.htm) inside directly, but I can put "\u2628", correct? 2) I've also heard that UTF-8 doesn't contain it, but anything in Unicode can be saved with the UTF-8 (or UTF-16) encoding, and here is an example from that page: UTF-8 (hex) 0xE2 0x98 0xA8 (e298a8). Is my reasoning correct? Can I use this form and put it in the XML with UTF-8 encoding?
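
    For reference (not part of the original question): U+2628 is an ordinary BMP character, so a Java String literal "\u2628" holds it directly and encodes to exactly the three UTF-8 bytes quoted above; inside the XML it can also be written as a numeric character reference regardless of the file's encoding. A small sketch:

      import java.nio.charset.StandardCharsets;

      public class CrossOfLorraineDemo {
          public static void main(String[] args) {
              String cross = "\u2628";                      // CROSS OF LORRAINE, U+2628
              for (byte b : cross.getBytes(StandardCharsets.UTF_8)) {
                  System.out.printf("%02x ", b & 0xff);     // prints: e2 98 a8
              }
              System.out.println();
              // Encoding-independent alternative inside the XML document itself:
              System.out.println("&#x2628;");
          }
      }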


  • UTF-8 to ISO-8859-1 mapping / lossless conversion libraries in Java

    - by Pawel Krupinski
    I need to perform a conversion of characters from UTF-8 to ISO-8859-1 in Java without losing, for example, all of the punctuation that only exists in Unicode. Ideally I would like those characters to be converted to their ISO equivalents (e.g. there are several different single-quote characters in Unicode and I would like them all converted to the ISO single-quote character). String.getBytes("ISO-8859-1") just won't do the trick in this case, as it will lose the characters that ISO-8859-1 cannot represent. Do you know of any ready-made mappings or libraries in Java that would map such characters to ISO-8859-1?
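
    One workable pattern, sketched below rather than a specific library recommendation: first normalize the handful of typographic characters you care about with an explicit map, then let a CharsetEncoder replace anything that still has no ISO-8859-1 equivalent instead of silently mangling it. The replacement table here is deliberately tiny and would need extending.

      import java.nio.ByteBuffer;
      import java.nio.CharBuffer;
      import java.nio.charset.*;

      public class ToLatin1 {
          public static byte[] toLatin1(String s) throws CharacterCodingException {
              // Hand-made map for a few common typographic characters; extend as needed.
              String cleaned = s
                      .replace('\u2018', '\'')   // left single quotation mark
                      .replace('\u2019', '\'')   // right single quotation mark
                      .replace('\u201C', '"')    // left double quotation mark
                      .replace('\u201D', '"')    // right double quotation mark
                      .replace('\u2013', '-')    // en dash
                      .replace('\u2014', '-');   // em dash

              // Anything still unmappable becomes '?' instead of being dropped.
              CharsetEncoder enc = StandardCharsets.ISO_8859_1.newEncoder()
                      .onUnmappableCharacter(CodingErrorAction.REPLACE)
                      .onMalformedInput(CodingErrorAction.REPLACE)
                      .replaceWith(new byte[] { (byte) '?' });

              ByteBuffer out = enc.encode(CharBuffer.wrap(cleaned));
              byte[] bytes = new byte[out.remaining()];
              out.get(bytes);
              return bytes;
          }
      }

    java.text.Normalizer (NFKD) can additionally decompose accented letters if diacritics need stripping, but for punctuation an explicit table like the one above is usually clearer.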


  • Saving XML in UTF-8 with MSXML

    - by stung
    I'm trying to load a simple XML file (encoded in UTF-8): <?xml version="1.0" encoding="UTF-8"?> <Test/> And save it with MSXML in VBScript: Set xmlDoc = CreateObject("MSXML2.DOMDocument.6.0") xmlDoc.Load("C:\test.xml") xmlDoc.Save "C:\test.xml" The problem is, MSXML saves the file in ANSI instead of UTF-8 (despite the original file being encoded in UTF-8). The MSDN docs for MSXML say that save() will write the file in whatever encoding the XML is declared in, but this is clearly not working, at least on my machine. How can I make MSXML save in UTF-8?


  • json_encode with mysql content and umlauts in utf-8

    - by i3rutus
    Hey, I feel my beard growing while trying to find out the problem here. The basic problem is that umlauts and special characters (äöß ...) don't work. I guess everyone is sick and tired of these questions, but none of the solutions found online seem to work. I have UTF-8 content in a UTF-8 MySQL database. I feel the problem is somewhere in the database connection, but I just can't figure it out: character_set_client utf8 character_set_connection utf8 character_set_database utf8 character_set_filesystem binary character_set_results utf8 character_set_server latin1 character_set_system utf8 I'm not sure if the problem is the latin1 for character_set_server, because I'm not really into that MySQL stuff. I also don't know how to change it, because I can't access the MySQL server's config files. What confuses me is that if I get my results from the database and echo them, print_r gives the right result. ini_set('default_charset','utf-8'); header('Content-Type: text/plain; charset=utf-8'); Firefox says the character encoding is UTF-8, but when I output: print_r($listnew); echo json_encode($listnew[5]); print_r shows everything right, but json_encode does not. print_r: [5] => Array ( [id] => 5 [data] => U-Bahnhof Theresienstraße [size] => 17 ) json_encode: {"id":5,"data":"U-Bahnhof Theresienstra\u00dfe","size":17} I know json_encode needs a UTF-8 string to work properly, and I feel I have an encoding problem here, but I just can't figure out where it is. Any help would be appreciated, thanks in advance. i3


  • Should UTF-16 be considered harmful?

    - by Artyom
    I'm going to ask what is probably quite a controversial question: "Should one of the most popular encodings, UTF-16, be considered harmful?" Why do I ask this question? How many programmers are aware of the fact that UTF-16 is actually a variable-length encoding? By this I mean that there are code points that, represented as surrogate pairs, take more than one element. I know: lots of applications, frameworks and APIs use UTF-16, such as Java's String, C#'s String, the Win32 APIs, the Qt GUI libraries, the ICU Unicode library, etc. However, with all of that, there are lots of basic bugs in the processing of characters outside the BMP (characters that should be encoded using two UTF-16 elements). For example, try to edit one of these characters: 𝄞 𝕥 𝟶 𠂊 You may miss some, depending on what fonts you have installed. These characters are all outside of the BMP (Basic Multilingual Plane). If you cannot see these characters, you can also try looking at them in the Unicode character reference. For example, try to create file names in Windows that include these characters; try to delete these characters with a backspace to see how they behave in different applications that use UTF-16. I did some tests and the results are quite bad: Opera has problems editing them; Notepad can't deal with them correctly (delete them, for example); file-name editing in Windows dialogs is broken; Qt3 applications can't deal with them at all; Stack Overflow seems to remove these characters if they are edited directly as Unicode characters, and only seems to allow them as HTML Unicode escapes. So... this was a very simple test. Do you think that UTF-16 should be considered harmful?
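
    To make the variable-length point concrete, here is a small Java illustration (mine, not from the original post): the musical G clef, U+1D11E, lies outside the BMP, so a UTF-16 String stores it as a surrogate pair of two char elements, and naive per-char code sees only half a character.

      public class SurrogateDemo {
          public static void main(String[] args) {
              String clef = new String(Character.toChars(0x1D11E));       // MUSICAL SYMBOL G CLEF
              System.out.println(clef.length());                          // 2 -> two UTF-16 code units
              System.out.println(clef.codePointCount(0, clef.length()));  // 1 -> one code point
              System.out.println(Character.isHighSurrogate(clef.charAt(0))); // true
          }
      }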


  • C++ UTF-8 output with ICU

    - by Isaac
    I'm struggling to get started with the C++ ICU library. I have tried to get the simplest example to work, but even that has failed. I would just like to output a UTF-8 string and then go from there. Here is what I have: #include <unicode/unistr.h> #include <unicode/ustream.h> #include <iostream> int main() { UnicodeString s = UNICODE_STRING_SIMPLE("привет"); std::cout << s << std::endl; return 0; } Here is the output: $ g++ -I/sw/include -licucore -Wall -Werror -o icu_test main.cpp $ ./icu_test пÑÐ¸Ð²ÐµÑ My terminal and font support UTF-8 and I regularly use the terminal with UTF-8. My source code is in UTF-8. I think that perhaps I somehow need to set the output stream to UTF-8 because ICU stores strings as UTF-16, but I'm really not sure and I would have thought that the operators provided by ustream.h would do that anyway. Any help would be appreciated, thank you.


  • Character Encoding

    - by anteater7171
    My text editor allows me to save code in several different character formats: ANSI, UTF-8, UTF-8 (no BOM), UTF-16LE, and UTF-16BE. What is the difference between them? What is commonly regarded as the best format (I'm using Python, if that makes a difference)?
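
    A quick way to see the difference (shown here as a Java sketch purely for illustration, not the asker's Python) is to dump the bytes that the same character produces under each encoding. "ANSI" is not Unicode at all but whatever the local Windows code page happens to be, and "UTF-8 (No BOM)" differs from "UTF-8" only in whether the editor writes the three marker bytes EF BB BF at the start of the file.

      import java.nio.charset.Charset;
      import java.nio.charset.StandardCharsets;

      public class EncodingBytes {
          static void dump(String label, byte[] bytes) {
              StringBuilder sb = new StringBuilder(label).append(": ");
              for (byte b : bytes) sb.append(String.format("%02x ", b & 0xff));
              System.out.println(sb);
          }

          public static void main(String[] args) {
              String text = "é";                                           // U+00E9
              dump("UTF-8   ", text.getBytes(StandardCharsets.UTF_8));     // c3 a9
              dump("UTF-16LE", text.getBytes(StandardCharsets.UTF_16LE));  // e9 00
              dump("UTF-16BE", text.getBytes(StandardCharsets.UTF_16BE));  // 00 e9
              dump("UTF-16  ", text.getBytes(StandardCharsets.UTF_16));    // fe ff 00 e9 (BOM first)
              dump("cp1252  ", text.getBytes(Charset.forName("windows-1252"))); // e9 ("ANSI" on Western Windows)
          }
      }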


  • Is it possible to reliably auto-decode user files to Unicode? [C#]

    - by NVRAM
    I have a web application that allows users to upload their content for processing. The processing engine expects UTF8 (and I'm composing XML from multiple users' files), so I need to ensure that I can properly decode the uploaded files. Since I'd be surprised if any of my users knew their files even were encoded, I have very little hope they'd be able to correctly specify the encoding (decoder) to use. And so, my application is left with the task of detecting before decoding. This seems like such a universal problem, I'm surprised not to find either a framework capability or a general recipe for the solution. Can it be I'm not searching with meaningful search terms? I've implemented BOM-aware detection (http://en.wikipedia.org/wiki/Byte_order_mark) but I'm not sure how often files will be uploaded without a BOM to indicate encoding, and this isn't useful for most non-UTF files. My questions boil down to: Is BOM-aware detection sufficient for the vast majority of files? In the case where BOM detection fails, is it possible to try different decoders and determine if they are "valid"? (My attempts indicate the answer is "no.") Under what circumstances will a "valid" file fail with the C# encoder/decoder framework? Is there a repository anywhere that has a multitude of files with various encodings to use for testing? While I'm specifically asking about C#/.NET, I'd like to know the answer for Java, Python and other languages for the next time I have to do this. So far I've found: A "valid" UTF-16 file with Ctrl-S characters has caused encoding to UTF-8 to throw an exception (Illegal character?) (That was an XML encoding exception.) Decoding a valid UTF-16 file with UTF-8 succeeds but gives text with null characters. Huh? Currently, I only expect UTF-8, UTF-16 and probably ISO-8859-1 files, but I want the solution to be extensible if possible. My existing set of input files isn't nearly broad enough to uncover all the problems that will occur with live files. Although the files I'm trying to decode are "text", I think they are often created with methods that leave garbage characters in the files. Hence "valid" files may not be "pure". Oh joy. Thanks.
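
    Since the asker also wants the Java angle: BOM sniffing itself is mechanical, and a minimal sketch is below (the names are mine). It only answers the question when a BOM is actually present; without one, the remaining options are metadata, heuristics, or a strict decoder that rejects malformed input.

      import java.nio.charset.Charset;
      import java.nio.charset.StandardCharsets;

      public final class BomSniffer {
          /** Returns the charset implied by a leading BOM, or the fallback if none is found. */
          public static Charset detect(byte[] head, Charset fallback) {
              if (head.length >= 3 && (head[0] & 0xFF) == 0xEF
                      && (head[1] & 0xFF) == 0xBB && (head[2] & 0xFF) == 0xBF) {
                  return StandardCharsets.UTF_8;
              }
              if (head.length >= 4 && head[0] == 0 && head[1] == 0
                      && (head[2] & 0xFF) == 0xFE && (head[3] & 0xFF) == 0xFF) {
                  return Charset.forName("UTF-32BE");
              }
              if (head.length >= 4 && (head[0] & 0xFF) == 0xFF && (head[1] & 0xFF) == 0xFE
                      && head[2] == 0 && head[3] == 0) {
                  return Charset.forName("UTF-32LE");   // must be checked before UTF-16LE
              }
              if (head.length >= 2 && (head[0] & 0xFF) == 0xFE && (head[1] & 0xFF) == 0xFF) {
                  return StandardCharsets.UTF_16BE;
              }
              if (head.length >= 2 && (head[0] & 0xFF) == 0xFF && (head[1] & 0xFF) == 0xFE) {
                  return StandardCharsets.UTF_16LE;
              }
              return fallback; // no BOM: metadata or heuristics have to decide
          }
      }

    For the "try a decoder and see whether it is valid" idea, java.nio's CharsetDecoder configured with CodingErrorAction.REPORT will at least reject malformed byte sequences instead of silently substituting replacement characters, which makes trial decoding somewhat more trustworthy.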


  • cURL gets response with utf-8 BOM

    - by Reshat Belyalov
    In my script I send data with cURL, with CURLOPT_RETURNTRANSFER enabled. The response is JSON-encoded data. When I try to json_decode it, it returns null. Then I found that the response contains a UTF-8 BOM at the beginning of the string. Here are some experiments: $data = curl_exec($ch); echo $data; the result is {"field_1":"text_1","field_2":"text_2","field_3":"text_3"} $data = curl_exec($ch); echo mb_detect_encoding($data); result - UTF-8 $data = curl_exec($ch); echo mb_convert_encoding($data, 'UTF-8', mb_detect_encoding($data)); // identical to echo mb_convert_encoding($data, 'UTF-8', 'UTF-8'); result - {"field_1":"text_1","field_2":"text_2","field_3":"text_3"} The one thing that helps is removing the first 3 bytes: if (substr($data, 0, 3) == pack('CCC', 239, 187, 191)) { $data = substr($data, 3); } But what if a different BOM arrives next time? So the question is: how do I detect the right encoding of the cURL response? Or how do I detect which BOM has arrived? Or maybe how do I convert a response that has a BOM? Thanks.


  • example of a utf-8 format octet string

    - by erik
    I'm working with a function that expects a string formatted as a UTF-8 encoded octet string. Can someone give me an example of what a UTF-8 encoded octet string would look like? Put another way, if I convert 'foo' to bytes, I get 102, 111, 111. What would these char codes look like as a UTF-8 encoded octet string? Would it be "0x66 0x6f 0x6f"? Thanks
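
    A short illustration (in Java here, though any language gives the same bytes): for pure ASCII text such as 'foo' the UTF-8 octets are simply the ASCII codes, while anything outside ASCII expands to two or more octets per character.

      import java.nio.charset.StandardCharsets;

      public class OctetDemo {
          public static void main(String[] args) {
              for (byte b : "foo".getBytes(StandardCharsets.UTF_8)) {
                  System.out.printf("0x%02x ", b & 0xff);   // 0x66 0x6f 0x6f
              }
              System.out.println();
              for (byte b : "é".getBytes(StandardCharsets.UTF_8)) {
                  System.out.printf("0x%02x ", b & 0xff);   // 0xc3 0xa9 (one character, two octets)
              }
              System.out.println();
          }
      }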


  • UTF-16BE to ISO-8859-1 in PHP

    - by mck89
    Hi, I need to convert UTF-16BE text to ISO-8859-1 in PHP (I'm not an expert in encodings, so I don't know whether UTF-16 and UTF-16BE are the same thing). I've read somewhere to use the mb_convert_encoding function, but I don't have that function because the multibyte extension isn't installed. Do you know an alternative method to do this?


  • UTF-8 XML file shows gibberish

    - by Adam
    I have a UTF-8 encoded XML file, which was exported from a WordPress MySQL database. While the file is saved as UTF-8 and the declared encoding is UTF-8, I get gibberish instead of the Hebrew text that is supposed to be in there, which looks like this: ™×•×˜×•×ª How can I find the original encoding or charset and convert the text into proper Hebrew? PHP's mb_detect_encoding($str); returns UTF-8. I have tried all sorts of PHP encoding functions, with different settings and input/output charsets, but they all just print different-looking blocks of gibberish, like: ÃâÃËÃâ¢Ãâ¢ÃËà and ?? ××©×ž× ... Any ideas how to go about this?
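
    Gibberish of this shape is typical of double encoding: UTF-8 bytes that were at some point re-read as a single-byte charset and then saved as UTF-8 again. If, and only if, that is what happened here, reversing that single step can recover the text. A hedged Java sketch of the idea (not the asker's PHP, and it cannot help if characters were already replaced with '?'):

      import java.nio.charset.StandardCharsets;

      public class MojibakeRepair {
          // Illustrative only: if UTF-8 bytes were once mis-read as a Latin-1-style
          // charset and then re-saved as UTF-8, reversing that single step may
          // recover the original text. It will NOT help if data was lost along the way.
          public static String undoOneRound(String garbled) {
              byte[] original = garbled.getBytes(StandardCharsets.ISO_8859_1); // back to the raw bytes
              return new String(original, StandardCharsets.UTF_8);             // decode them properly
          }
      }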


  • Working with utf-8 files in Eclipse.

    - by Pablo Cabrera
    Quite a straightforward question: is there a way to configure Eclipse to work with text files encoded as UTF-8, both with and without a BOM? So far I've used Eclipse with UTF-8 encoding and it works, but when I try to edit a file generated by another editor that includes the BOM, Eclipse doesn't handle it properly: it shows an 'invisible character' at the beginning of the file (the BOM). Is there a way to make Eclipse understand UTF-8 encoded files with a BOM?


  • Corrupt UTF-8 Characters with PHP 5.2.10 and MySQL 5.0.81

    - by jkndrkn
    We have an application hosted on both a local development server and a live site. We are experiencing UTF-8 corruption issues and are trying to figure out how to resolve them. The system runs on symfony 1.0 with Propel. On our development server we are running PHP 5.2.0 and MySQL 5.0.32, and we do not experience corrupted UTF-8 characters there. On the live site, PHP 5.2.10 and MySQL 5.0.81 are running. On that server, certain characters such as ô´ and S are corrupted once they are stored in the database. The corrupted characters show up as either question marks or approximations of the original character with adjacent question marks. Examples of corruption: Uncorrupted: ô´ Corrupted: ô? Uncorrupted: S Corrupted: ? We are currently using the following techniques on both the development and live servers: executing the queries SET NAMES 'utf8' COLLATE 'utf8_unicode_ci' and SET CHARSET 'utf8' prior to any other queries; setting the <meta> Content-Type value to <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />; adding AddDefaultCharset utf-8 to our .htaccess file; using mb_* (multibyte) PHP functions where necessary; and making sure database columns use the utf8_unicode_ci collation. These techniques are sufficient for our development site, but do not work on the live site. On the live site I've also tried adding mysql_set_encoding('ut8', $mysql_connection), but this does not help either. I have found some evidence that newer versions of PHP and MySQL mishandle UTF-8 character encodings.


  • [AS3] Calling PHP Script with UTF-8 POST variables

    - by kornelijepetak
    The AS3 documentation says that Strings in AS3 are in UTF-16 format. There is a textbox on a Flash clip where the user can type some data. When a button is clicked, I want this data to be sent to a PHP script. I have everything set up, but it seems that the PHP script gets the data in UTF-16 format: the data in the database (which is UTF-8) shows some unrecognizable characters where special characters were used, meaning that the data has not been sent in the correct encoding. var variables:URLVariables=new URLVariables; var varSend:URLRequest=new URLRequest("http://website.com/systematic/accept.php"); varSend.method=URLRequestMethod.POST; varSend.data=variables; var varLoader:URLLoader=new URLLoader; varLoader.dataFormat=URLLoaderDataFormat.VARIABLES; varLoader.addEventListener(Event.COMPLETE, completeHandler); When the submit button is clicked, the following handler is executed: function sendData(event:MouseEvent) : void { // I guess here is the problem (tbName.text is UTF-16) variables.name = tbName.text; varLoader.load(varSend); } Is there any way to send the data so that the PHP script gets it in UTF-8 format? (The PHP script retrieves the value using $_POST['name'].)


  • utf-8 conversion doesn't always work

    - by Marco Piccinni
    I searched other questions before typing here and I didn't find anything similar. I have to scrape different UTF-8 web pages which contain text like "Oggi è una bellissima giornata"; the problem is with the character "è". I extract this text with JTidy and an XPath query expression and convert it with byte[] content = filteredEncodedString.getBytes("utf-8"); String result = new String(content,"utf-8"); where filteredEncodedString contains the text "Oggi è una bellissima giornata". This procedure works on most of the web pages analyzed so far, but in some cases it doesn't extract a UTF-8 string. The page encoding is always the same and the text is similar. Any ideas about the problem? Thanks, Marco
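
    Worth noting: getBytes("utf-8") followed by new String(content, "utf-8") is a round trip that returns the same string, so it cannot repair text that was already decoded with the wrong charset; the charset has to be chosen when the raw response bytes first become characters. A rough Java sketch of that idea (the helper name is made up; how the decoded text is then fed to JTidy and XPath is unchanged):

      import java.io.BufferedReader;
      import java.io.InputStream;
      import java.io.InputStreamReader;
      import java.net.URL;
      import java.nio.charset.StandardCharsets;

      public class FetchUtf8Page {
          // Hypothetical helper: the important part is choosing the charset when
          // turning the response bytes into characters, not afterwards.
          public static String fetch(String address) throws Exception {
              URL url = new URL(address);
              try (InputStream in = url.openStream();
                   BufferedReader reader = new BufferedReader(
                           new InputStreamReader(in, StandardCharsets.UTF_8))) {
                  StringBuilder sb = new StringBuilder();
                  String line;
                  while ((line = reader.readLine()) != null) {
                      sb.append(line).append('\n');
                  }
                  return sb.toString();
              }
          }
      }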


  • Bad utf8 display with tmux

    - by Nison Maël
    When I press the "é" key multiple times on my keyboard, here is what tmux prints (notice the spaces): arcanis@~ > é é é é é é é é é é é é é It also breaks emacs when the file contains UTF-8 characters. My locale is: arcanis@~ > locale LANG=en_US.UTF-8 LC_CTYPE="en_US.UTF-8" LC_NUMERIC="en_US.UTF-8" LC_TIME="en_US.UTF-8" LC_COLLATE="en_US.UTF-8" LC_MONETARY="en_US.UTF-8" LC_MESSAGES="en_US.UTF-8" LC_PAPER="en_US.UTF-8" LC_NAME="en_US.UTF-8" LC_ADDRESS="en_US.UTF-8" LC_TELEPHONE="en_US.UTF-8" LC_MEASUREMENT="en_US.UTF-8" LC_IDENTIFICATION="en_US.UTF-8" LC_ALL= How can I fix this?


  • <?xml version="1.0" encoding="UTF-8"?> not <?xml version='1.0' encoding='UTF-8'?>

    - by user2446702
    I am using lxml with tree.write(xmlFileOut, pretty_print=True, xml_declaration=True, encoding='UTF-8') to write out my opened and edited XML file, but I absolutely need to have the XML declaration as <?xml version="1.0" encoding="UTF-8"?> and NOT <?xml version='1.0' encoding='UTF-8'?>. Now I know they are exactly the same as far as XML is concerned, but I am dealing with a very tricky customer who absolutely has to have " in the declaration and not '. I have searched everywhere but can't find the answer. Could I create the declaration and add it to the head of the XML myself somehow? Could I tell lxml that this is what I need as an XML declaration?


  • Perl LWP::UserAgent mishandling UTF-8 response

    - by RedGrittyBrick
    When I use LWP::UserAgent to retrieve content encoded in UTF-8 it seems LWP::UserAgent doesn't handle the encoding correctly. Here's the output after setting the Command Prompt window to Unicode with the command chcp 65001. Note that this initially gives the appearance that all is well, but I think it's just the shell reassembling bytes and decoding UTF-8; from the other output you can see that Perl itself is not handling wide characters correctly. C:\> perl getutf8.pl ====================================================================== HTTP/1.1 200 OK Connection: close Date: Fri, 31 Dec 2010 19:24:04 GMT Accept-Ranges: bytes Server: Apache/2.2.8 (Win32) PHP/5.2.6 Content-Length: 75 Content-Type: application/xml; charset=utf-8 Last-Modified: Fri, 31 Dec 2010 19:20:18 GMT Client-Date: Fri, 31 Dec 2010 19:24:04 GMT Client-Peer: 127.0.0.1:80 Client-Response-Num: 1 <?xml version="1.0" encoding="UTF-8"?> <name>Budějovický Budvar</name> ====================================================================== response content length is 33 ....v....1....v....2....v....3....v....4 <name>Budějovický Budvar</name> . . . . v . . . . 1 . . . . v . . . . 2 . . . . v . . . . 3 . . . . 3c6e616d653e427564c49b6a6f7669636bc3bd204275647661723c2f6e616d653e < n a m e > B u d ? ? j o v i c k ? ? B u d v a r < / n a m e > Above you can see the payload length is 31 characters but Perl thinks it is 33. For confirmation, in the hex, we can see that the UTF-8 sequences c49b and c3bd are being interpreted as four separate characters and not as two Unicode characters. Here's the code: #!perl use strict; use warnings; use LWP::UserAgent; my $ua = LWP::UserAgent->new(); my $response = $ua->get('http://localhost/Bud.xml'); if (! $response->is_success) { die $response->status_line; } print '='x70,"\n",$response->as_string(), '='x70,"\n"; my $r = $response->decoded_content((charset => 'UTF-8')); $/ = "\x0d\x0a"; # seems to be \x0a otherwise! chomp($r); # Remove any xml prologue $r =~ s/^<\?.*\?>\x0d\x0a//; print "Response content length is ", length($r), "\n\n"; print "....v....1....v....2....v....3....v....4\n"; print $r,"\n"; print ". . . . v . . . . 1 . . . . v . . . . 2 . . . . v . . . . 3 . . . . \n"; print unpack("H*", $r), "\n"; print join(" ", split("", $r)), "\n"; Note that Bud.xml is UTF-8 encoded without a BOM. How can I persuade LWP::UserAgent to do the right thing? P.S. Ultimately I want to translate the Unicode data into an ASCII encoding, even if it means replacing each non-ASCII character with one question mark or other marker. I have accepted Ysth's "upgrade" answer because I know it is the right thing to do when possible. However I am going to use a workaround (which may depress Tom further): $r = encode("cp437", decode("utf8", $r));


  • How to convert non-Latin-based encoded text into UTF-8, or make them coexist on the same page?

    - by Yallaa
    Good day. I have a script that scrapes the title/description of remote pages and prints those values into a corresponding charset=UTF-8 encoded page. Here is the problem: whenever a remote page is encoded with a non-Latin character encoding (Arabic, Russian, Chinese, Japanese, etc.), the imported values print as garbled text. I've tried passing those values through either the iconv or mb_convert_encoding converters, but without much success. Then I tried detecting the remote encoding first and changing my presentation page's encoding to the remote one instead of the current UTF-8, which works okay for the imported values, but then the page's other existing UTF-8 content in that language gets garbled instead. Example: I try to import those values from a Russian windows-1251 page into my UTF-8 encoded page, which has mixed English/Russian content. I change the imported non-UTF-8 string into UTF-8 using either iconv or mb_convert_encoding. I tried: $RemoteValue = iconv($RemoteEncoding, 'UTF-8', $RemoteValue); or $RemoteValue = mb_convert_encoding($RemoteValue, "UTF-8", $RemoteEncoding); or $RemoteValue = mb_convert_encoding($RemoteValue, "UTF-8", "auto"); without success. If I detect that the remote page is windows-1251 encoded and change my presentation page (which already has UTF-8 encoded mixed-language content) to match the remote page, then the Japanese UTF-8 content on the existing page gets garbled. Can two differently encoded strings coexist on the same page (e.g. UTF-8 and windows-1251)? Am I using the converters correctly? Any hints as to why they don't work? Is there any better way to do this? Thank you for your help.

