Search Results

Search found 4604 results on 185 pages for 'utf'.

Page 10 of 185

  • Problem with inserts of multibyte (converted to UTF-8) strings into MySQL tables with utf8_unicode_ci collation

    - by user381595
    http://domainsoutlook.com/sandbox/keyword/?s=http://bhaskar.com is a raw example of my keyword density analyser. Every keyword shows up properly, with no problems in the Unicode conversion. But when I add these words to a column of a database table, they end up garbled. For example, on the front-end page http://domainsoutlook.com/b/site/bhaskar.com.html there is a keyword that renders as a blank yet still occurs on the website 8 times (it isn't empty in the database, though). I have checked, and there is no problem with mysql_real_escape_string: the output is identical before and after the word passes through it.

    A second problem: I want to fix my URLs for the Arabic language. They should show up as /word-{1st letter of the word}/{whole word}.html, but they show up as /word-{whole word}/{1st letter of the word}.html.

    I really need answers to these two questions.
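
    A common cause of the first symptom is a connection character set that does not match the table's utf8 encoding, so the incoming UTF-8 bytes are reinterpreted on the way in. That is an assumption about the cause, not a confirmed diagnosis; a minimal sketch of the check in Python with mysql-connector-python, with hypothetical credentials, table, and column names:

        import mysql.connector

        # Declare the connection charset explicitly so the server does not
        # reinterpret incoming UTF-8 bytes under a latin1 connection.
        conn = mysql.connector.connect(
            host="localhost", user="user", password="secret",
            database="keywords_db", charset="utf8mb4",
        )
        cur = conn.cursor()
        # Parameterized insert; the connector encodes the value as UTF-8.
        cur.execute("INSERT INTO keywords (word) VALUES (%s)", ("\u0643\u0644\u0645\u0629",))
        conn.commit()
        # HEX() shows the bytes actually stored: the quickest way to tell
        # whether the data itself or only its display is wrong.
        cur.execute("SELECT HEX(word) FROM keywords")
        print(cur.fetchall())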


  • Change Emacs Default Coding System

    - by Saterus
    My problem stems from Emacs inserting a coding-system header into source files that contain non-ASCII characters:

        # -*- coding: utf-8 -*-

    My coworkers do not like these headers being checked into our repositories, and I don't want them inserted into my files: Emacs detects that a file is UTF-8 automatically anyway, so the header doesn't seem to benefit anyone. I would like to simply set Emacs to use UTF-8 for all files, yet it seems to disagree with this idea. In an effort to fix this, I've added the following to my .emacs:

        (prefer-coding-system 'utf-8)
        (setq coding-system-for-read 'utf-8)
        (setq coding-system-for-write 'utf-8)

    This does not solve the problem; Emacs still inserts the coding-system header into my files. Anyone have any ideas?

    EDIT: I think this problem is specific to ruby-mode. I still can't turn it off, though.


  • Zend Framework and UTF-8 characters (æøå)

    - by Randy Mayer
    Hi, I use Zend Framework and I have a problem with JSON and UTF-8. The output is:

        \u00c3\u00ad\u00c4\u008d Ã­Ä

    What I use:

    JavaScript (jQuery):

        contentType : "application/json; charset=utf-8",
        dataType : "json"

    Zend Framework:

        $view->setEncoding('UTF-8');
        $view->headMeta()->appendHttpEquiv('Content-Type', 'text/html;charset=utf-8');
        header('Content-Type: application/json; charset=utf-8');
        utf8_encode();
        Zend_Json::encode

    Database:

        resources.db.params.charset = "utf8"
        resources.db.params.driver_options.1002 = "SET NAMES utf8"
        resources.db.isDefaultTableAdapter = true

    Collation: utf8_unicode_ci. Type: MyISAM. Server: PHP version 5.2.6.

    What did I do wrong? Thank you for your reply!
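
    The \u00c3\u00ad escape for what should be í is the signature of double encoding: text that is already UTF-8 being encoded to UTF-8 a second time (for example by an extra utf8_encode() call). That reading is inferred from the symptom, not stated in the post; a short Python illustration of the mechanism:

        # 'í' encoded as UTF-8 is the two bytes C3 AD.
        once = "í".encode("utf-8")              # b'\xc3\xad'
        # Reading those bytes back as latin-1 yields the garbled 'Ã­';
        # re-encoding that as UTF-8 is what a redundant utf8_encode() does.
        garbled = once.decode("latin-1")
        print(garbled)                           # Ã­
        print(garbled.encode("utf-8"))           # b'\xc3\x83\xc2\xad'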


  • charsets in MySQL replication

    - by niklassaers
    Hi guys, what can I do to ensure that replication will use latin1 instead of utf-8? I'm migrating between a MySQL 5.1.22 server (master) on a Linux system and a MySQL 5.1.42 server (slave) on a FreeBSD system. My replication works well, but when non-ascii characters are in my varchars, they turn "weird".

    The Linux/MySQL-5.1.22 master shows the following character set variables:

        character_set_client=latin1
        character_set_connection=latin1
        character_set_database=latin1
        character_set_filesystem=binary
        character_set_results=latin1
        character_set_server=latin1
        character_set_system=utf8
        character_sets_dir=/usr/share/mysql/charsets/
        collation_connection=latin1_swedish_ci
        collation_database=latin1_swedish_ci
        collation_server=latin1_swedish_ci

    while the FreeBSD slave shows:

        character_set_client=utf8
        character_set_connection=utf8
        character_set_database=utf8
        character_set_filesystem=binary
        character_set_results=utf8
        character_set_server=utf8
        character_set_system=utf8
        character_sets_dir=/usr/local/share/mysql/charsets/
        collation_connection=utf8_general_ci
        collation_database=utf8_general_ci
        collation_server=utf8_general_ci

    Setting any of these variables from the MySQL CLI has no effect, and setting them in my.cnf or at the command line makes the server not start. Of course, both servers have the tables in question created the same way, in this case with DEFAULT CHARSET=latin1. An example:

        CREATE TABLE `test` (
          `test` varchar(5) DEFAULT NULL
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1

    When, on the master in a latin1 terminal, I do INSERT INTO test VALUES ('æøå'), this becomes the following on the slave when I select it from a latin1-based terminal:

        +--------+
        | test   |
        +--------+
        | æøå |
        +--------+

    On a UTF-8 based terminal on the replication slave, test contains:

        +--------+
        | test   |
        +--------+
        | æøå    |
        +--------+

    So my conclusion is that the value is converted to utf8, even though the table definition is latin1. Is this a correct conclusion? Of course, on the master, in a latin1 terminal, it still says:

        +------+
        | test |
        +------+
        | æøå  |
        +------+

    Since both system character sets are utf-8, if I set both terminals to utf-8 and do INSERT INTO test VALUES ('æøå') again on the master with a utf-8 terminal, on the slave with utf-8 I get:

        +------------+
        | test       |
        +------------+
        | æøà       |
        +------------+

    If my conclusion is correct, all my replicated data is converted to utf8 (if it is utf8, it is treated as latin1 and converted to utf8), while all the old data in the table is, as the CREATE TABLE suggests, latin1. I'd love to convert it all to utf-8 if it weren't for the fact that legacy applications rely on it being latin1, so I need to keep it in latin1 while they still exist. What can I do to ensure that replication reads latin1, treats it as latin1, and writes it on the slave as latin1?

    Cheers, Nik
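
    One way to pin down where the conversion happens is to compare the bytes actually stored on each side, since HEX() is immune to terminal and connection charset effects: latin1 'æøå' is stored as E6 F8 E5, while its UTF-8 re-encoding is C3A6 C3B8 C3A5. A sketch in Python; the hostnames and credentials are hypothetical:

        import mysql.connector

        def stored_hex(host):
            # Read the raw stored bytes; HEX() bypasses any charset
            # conversion on output.
            conn = mysql.connector.connect(host=host, user="repl_check",
                                           password="secret", database="mydb")
            cur = conn.cursor()
            cur.execute("SELECT HEX(test) FROM test")
            rows = cur.fetchall()
            conn.close()
            return rows

        # E6F8E5 on both sides means only the display differs;
        # C3A6C3B8C3A5 on the slave means the data really was re-encoded.
        print("master:", stored_hex("master.example.com"))
        print("slave: ", stored_hex("slave.example.com"))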


  • How to read.table with "Hebrew" column names (in R)?

    - by Tal Galili
    Hi all, I am trying to read a .txt file with Hebrew column names, but without success. I uploaded an example file to http://www.talgalili.com/files/aa.txt and am trying the command:

        read.table("http://www.talgalili.com/files/aa.txt", header = T, sep = "\t")

    This returns:

          X.....ª X...ª...... X...œ....
        1      12          97         6
        2     123         354        44
        3       6           1         3

    instead of:

        ??? ????? ????
         12    97    6
        123   354   44
          6     1    3

    My output for l10n_info() is:

        $MBCS
        [1] FALSE
        $`UTF-8`
        [1] FALSE
        $`Latin-1`
        [1] TRUE
        $codepage
        [1] 1252

    and for Sys.getlocale():

        [1] "LC_COLLATE=English_United States.1252;LC_CTYPE=English_United States.1252;LC_MONETARY=English_United States.1252;LC_NUMERIC=C;LC_TIME=English_United States.1252"

    Can you suggest what to try and change to allow me to load the file correctly?

    Update: trying

        read.table("http://www.talgalili.com/files/aa.txt", fileEncoding = "iso8859-8")

    has resulted in:

          V1
        1  ?
        Warning messages:
        1: In read.table("http://www.talgalili.com/files/aa.txt", fileEncoding = "iso8859-8") :
          invalid input found on input connection 'http://www.talgalili.com/files/aa.txt'
        2: In read.table("http://www.talgalili.com/files/aa.txt", fileEncoding = "iso8859-8") :
          incomplete final line found by readTableHeader on 'http://www.talgalili.com/files/aa.txt'

    While also trying this:

        Sys.setlocale("LC_ALL", "en_US.UTF-8")

    or this:

        Sys.setlocale("LC_ALL", "en_US.UTF-8/en_US.UTF-8/C/C/en_US.UTF-8/en_US.UTF-8")

    gets me this:

        [1] ""
        Warning message:
        In Sys.setlocale("LC_ALL", "en_US.UTF-8") :
          OS reports request to set locale to "en_US.UTF-8" cannot be honored

    Any suggestion or clarification will be appreciated. Best, Tal
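
    The principle at work, independent of R: declare the file's actual encoding when reading rather than relying on the platform locale. In R that would be the fileEncoding argument with the file's true encoding, which for this file may be UTF-8 rather than iso8859-8; that is a guess, not something stated in the question. For comparison, the same idea in Python:

        import csv
        import urllib.request

        url = "http://www.talgalili.com/files/aa.txt"
        raw = urllib.request.urlopen(url).read()

        # Decode with an explicit encoding rather than the locale default;
        # if this raises UnicodeDecodeError, the guess was wrong.
        text = raw.decode("utf-8")
        rows = list(csv.reader(text.splitlines(), delimiter="\t"))
        print(rows[0])   # the Hebrew column names, intact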


  • How to enable utf-8 in xpdf outline pane and search

    - by Thanos D. Papaïoannou
    Xpdf version 3.02, downloaded from the Ubuntu repositories and run on Ubuntu 8.04.3, replaces Greek UTF-8 characters with blank characters

    1. in the outline pane, i.e. the bookmark pane, and
    2. in the search window; in particular, it is impossible to search for Greek words in documents.

    Is there a way to enable UTF-8 support in xpdf so that 1. and 2. above work properly? Thanks!


  • Plans for our next milestone

    - by The Official Microsoft IIS Site
    We have seen some increase in activity, with more people downloading our driver and either reporting their successes or reporting any issues they run into – from the native driver (sqlsrv_xxxx API) to the PDO driver (PDO API). We’d like to thank you all for your effort, and we hope that our responses were quick enough as well as accurate. To keep things simple, let us call the former the SQLSRV-PHP extension (php_sqlsrv.dll), whereas the latter will be the SQLSRV-PDO extension (php_pdo_sqlsrv...(read more)


  • SQL Server is now supported by phpBB!

    - by The Official Microsoft IIS Site
    Our team is really excited to announce the new release of phpBB 3.0.7-PL1 by the phpBB community that supports SQL Server, and one can download it from the Web Application Gallery for a very easy install!! But let’s step back for a moment and provide some background. Microsoft’s Interoperability team has been working with a few PHP projects to support SQL Server using our driver, phpBB was one of them. Although phpBB already had some support for SQL Server / Access, our 1.1 release driver offered...(read more)


  • How to reset the language of the package descriptions

    - by xubuntix
    I have had German as my main language about a year ago. Later I changed it to English. Most parts of the system accepted the change; the notable exceptions are the package descriptions, which remain in German for some packages. You can see in the image (apt-cache and software-center) that while some descriptions are in English, some have remained in German. So the question is: how do I reset this? I guess that there is somewhere a description cache that needs to be told that it should update all descriptions?

    EDIT: As asked, the output of some language-related commands:

        $ cat /etc/default/locale
        LANG="en_US.UTF-8"

        $ apt-config dump | grep Lang
        Acquire::Languages "";
        Acquire::Languages:: "de_DE";
        Acquire::Languages:: "de";
        Acquire::Languages:: "en";
        Acquire::Languages:: "none";

        $ locale
        LANG=de_DE.UTF-8
        LANGUAGE=en
        LC_CTYPE="de_DE.UTF-8"
        LC_NUMERIC="de_DE.UTF-8"
        LC_TIME="de_DE.UTF-8"
        LC_COLLATE="de_DE.UTF-8"
        LC_MONETARY="de_DE.UTF-8"
        LC_MESSAGES="de_DE.UTF-8"
        LC_PAPER="de_DE.UTF-8"
        LC_NAME="de_DE.UTF-8"
        LC_ADDRESS="de_DE.UTF-8"
        LC_TELEPHONE="de_DE.UTF-8"
        LC_MEASUREMENT="de_DE.UTF-8"
        LC_IDENTIFICATION="de_DE.UTF-8"
        LC_ALL=

    As a note: I'm not sure what each entry means, but some of the de_DE.UTF-8 values are probably fine, since I do want paper sizes, monetary formats, time, etc. in standard German formats.


  • Form Encoding Problems on GRAILS 2.0

    - by ArmlessJohn
    I have a Grails application that is configured everywhere to function as UTF-8. While running a debug version, the headers say Content-Type: text/html;charset=utf-8, and the meta tags agree. The browser identifies the page as UTF-8 and shows characters correctly. When posting a form, the browser correctly sends it encoded as UTF-8. When reading the data via params.paramname, however, the data looks garbled: maçã becomes maçã. Upon further inspection, it seems the form is sending UTF-8 data, but Grails tries to read it as if it were ISO-8859-1. Setting accept-charset="ISO-8859-1" on the form confirms this, as it fixes the problem. I also have this in applicationContext.xml:

        <bean id="characterEncodingFilter" class="org.springframework.web.filter.CharacterEncodingFilter">
            <property name="encoding">
                <value>utf-8</value>
            </property>
            <property name="forceEncoding">
                <value>true</value>
            </property>
        </bean>

    Is there a solution for this besides adding accept-charset="ISO-8859-1" to all forms in the application? Thanks.


  • Python minidom and UTF-8 encoded XML with hash references

    - by Jakob Simon-Gaarde
    Hi, I am experiencing some difficulty in my home project, where I need to parse a SOAP request. The SOAP is generated with gSOAP and involves string parameters with special characters like the Danish letters "æøå". gSOAP builds SOAP requests with UTF-8 encoding by default, but instead of sending the special characters in raw form (i.e. the bytes C3 A6 for the special character "æ"), it sends what I think are called character hash references (i.e. &#195;&#166;). I don't completely understand why gSOAP does it this way, as I can see that it has marked the incoming payload as UTF-8 encoded anyway (Content-Type: text/xml; charset=utf-8), but this is beside the question (I think). I guess gSOAP is probably obeying transport rules, or what?

    When I parse the request from gSOAP in Python with xml.dom.minidom.parseString(), I get element values as unicode objects, which is fine, but the character hash references are not decoded as UTF-8 character codes: minidom unescapes the references but does not decode the string afterwards. In the end I have a unicode string object holding UTF-8 bytes. So if the string "æble" is contained in the XML, it arrives in the request as:

        &#195;&#166;ble

    After parsing the XML, the unicode string in the DOM text node's data member looks like this:

        u'\xc3\xa6ble'

    I would expect it to look like this:

        u'\xe6ble'

    What am I doing wrong? Should I unescape the SOAP XML before parsing it, or is it somewhere else I should be looking for the solution, maybe gSOAP? Thanks in advance. Best regards, Jakob Simon-Gaarde
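
    minidom itself is behaving to spec here: &#195; denotes the character U+00C3 by definition, so the parsed string faithfully reflects a sender that spelled out each UTF-8 byte as its own character reference. If fixing the sender is not an option, one workaround after parsing is the latin-1 round trip; a sketch, with no claim that this is the right layer to fix it at:

        # u'\xc3\xa6ble' holds UTF-8 bytes disguised as the code points
        # U+00C3 and U+00A6.
        garbled = u"\xc3\xa6ble"

        # latin-1 maps code points 0-255 to the identical byte values,
        # after which a normal UTF-8 decode recovers the intended text.
        fixed = garbled.encode("latin-1").decode("utf-8")
        print(fixed)   # æble, i.e. u'\xe6ble'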


  • MySQL, Altering Table from Latin-1 to UTF-8

    - by brant
    I would like to rid new entries into my database of Latin-1 characters and just allow UTF-8. I plan to alter the table and make the following changes:

        Charset:   latin1            -> utf8
        Collation: latin1_swedish_ci -> utf8_general_ci

    The table in question has 1 million rows. Is this a good idea? What are the risks of doing this? What happens to data that I try to input that is not UTF-8? What happens to previously entered data that is not UTF-8?
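
    For reference, MySQL distinguishes changing only the declared default (ALTER TABLE ... DEFAULT CHARACTER SET, which leaves existing column data alone) from converting the stored data itself (ALTER TABLE ... CONVERT TO CHARACTER SET). A sketch of the latter driven from Python; the table name is hypothetical, and trying it first on a copy of a million-row table would be prudent:

        import mysql.connector

        conn = mysql.connector.connect(host="localhost", user="admin",
                                       password="secret", database="mydb")
        cur = conn.cursor()
        # CONVERT TO re-encodes the existing latin1 values as utf8;
        # DEFAULT CHARACTER SET alone would only affect new columns.
        cur.execute(
            "ALTER TABLE articles "
            "CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci"
        )
        conn.commit()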


  • Problem with diacritics on psql 9.0 (PostgreSQL)

    - by Gaks
    I have two instances of PostgreSQL installed on my server: 8.3 and 9.0. There seems to be a problem with Polish diacritic characters (like ó, ł, ś, ć, ń) in the PostgreSQL 9.0 client, psql. When I connect to a DB (either 8.3 or 9.0) with psql 8.3, I can type all diacritics in the terminal without any problems:

        www:/tmp# sudo -u postgres /usr/lib/postgresql/8.3/bin/psql -q
        postgres=# ółśćń

    However, when I connect to the same DBs with the psql 9.0 client, I can't type diacritics in the terminal anymore:

        www:/tmp# sudo -u postgres /usr/lib/postgresql/9.0/bin/psql -q

    Here are some encoding settings:

        www:/tmp# sudo -u postgres /usr/lib/postgresql/9.0/bin/psql -q -c "show client_encoding"
         client_encoding
        -----------------
         UTF8
        (1 row)

        www:/tmp# sudo -u postgres /usr/lib/postgresql/8.3/bin/psql -q -c "show client_encoding"
         client_encoding
        -----------------
         UTF8
        (1 row)

        www:/tmp# sudo -u postgres /usr/lib/postgresql/9.0/bin/psql -q -l
                                      List of databases
             Name     |  Owner   | Encoding |  Collation  |    Ctype    | Access privileges
        --------------+----------+----------+-------------+-------------+-------------------
         postgres     | postgres | UTF8     | pl_PL.UTF-8 | pl_PL.UTF-8 |

        www:/tmp# echo $LANG
        pl_PL.UTF-8

    It looks like the DB/cluster configuration doesn't matter: psql 8.x in the same terminal works fine while psql 9.x does not. Any idea how to fix that?


  • SQL Error (1064) when importing data from SQL file

    - by mejpark
    I have a MySQL database which was originally set up with the default latin1 character set and latin1_swedish_ci collation. I was using the database like this for some time, until I noticed strange characters on my production web site, which is powered by a database exported from my development machine. At this point, I changed the default character set of the database and tables to utf8 and the collation to utf8_unicode_ci, converted the latin1 data inside each table to utf8 (using the 'convert data' option), and exported the database as a single SQL file using HeidiSQL.

    When the resulting SQL file is opened in Notepad++, several characters are rendered incorrectly. For example, en dashes (–) are displayed as â€“ and e with accent (é) is displayed as é. I changed the encoding of the file from ANSI to UTF-8 (using the encoding menu option in Notepad++) and the offending characters are rendered correctly. I saved the new UTF-8 encoded SQL file and attempted to import the contents into the MySQL database on my production server. The import process fails with the following error:

        /* SQL Error (1064): You have an error in your SQL syntax; check the manual that
           corresponds to your MySQL server version for the right syntax to use near
           '?# --------------------------------------------------------
           # Host: ' at line 1 */
        /* Error with snippets directory: The specified path was not found */

    The head of the SQL file:

        # --------------------------------------------------------
        # Host:                 127.0.0.1
        # Server version:       5.1.33-community
        # Server OS:            Win32
        # HeidiSQL version:     6.0.0.3773
        # Date/time:            2011-04-20 09:48:36
        # --------------------------------------------------------

    It chokes on the first line of the file, which is commented out. Why is this happening? I didn't have a problem loading data from SQL files until I changed the character set and collation of the database. I came up with an ugly workaround to this problem by performing the following steps:

        1. Export the database as a single SQL file using HeidiSQL
        2. Open the resulting file in Notepad++ and convert it from ANSI to UTF-8 encoding
        3. Create a new empty file in Notepad++, paste in the UTF-8 text and save the file normally

    What am I missing here?
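
    The '?' shown before the '#' in the 1064 message points at the usual culprit: Notepad++'s conversion to UTF-8 writes a byte-order mark (the bytes EF BB BF) at the start of the file, which the server reads as a stray token before the first comment. That is an inference from the error text, not confirmed by the post; a short Python sketch for checking and stripping the marker, with a hypothetical filename:

        import codecs

        path = "dump.sql"
        with open(path, "rb") as f:
            data = f.read()

        # codecs.BOM_UTF8 is the three bytes EF BB BF.
        if data.startswith(codecs.BOM_UTF8):
            with open(path, "wb") as f:
                f.write(data[len(codecs.BOM_UTF8):])
            print("BOM removed")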


  • Apache returns 304, I want it to ignore anything from client and send the page

    - by Ayman
    I am using Apache HTTPD 2.2 on Windows. mod_expires is commented out; most other settings are left at the defaults; gzip is on. I made some changes to my .js files. My client gets one 304 response for one of the .js files and never fetches the rest. How can I force Apache to, in effect, flush everything and send all the new files to the client? The main HTML file includes these scripts in the head section of the main page:

        <script src="js/jquery-1.7.1.min.js" type="text/javascript"></script>
        <script src="js/jquery-ui-1.8.17.custom.min.js" type="text/javascript"></script>
        <script src="js/trex.utils.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.core.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.codes.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.emv.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.b24xtokens.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.iso.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.span2.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.amex.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.abi.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.barclays.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.bnet.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.visa.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.atm.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.apacs.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.pstm.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.stm.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.thales.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.fps-saf.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.fps-iso.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.app.js" type="text/javascript" charset="utf-8"></script>

    The Apache access log has the following:

        [07/Jul/2013:16:50:40 +0300] "GET /trex/index.html HTTP/1.1" 200 2033 "-"
        [07/Jul/2013:16:50:40 +0300] "GET /trex/js/trex.fps-iso.js HTTP/1.1" 304
        [08/Jul/2013:07:54:35 +0300] "GET /trex/index.html HTTP/1.1" 304 - "-"
        [08/Jul/2013:07:54:35 +0300] "GET /trex/js/trex.iso.js HTTP/1.1" 200 12417
        [08/Jul/2013:07:54:35 +0300] "GET /trex/js/trex.amex.js HTTP/1.1" 200 6683
        [08/Jul/2013:07:54:35 +0300] "GET /trex/js/trex.fps-saf.js HTTP/1.1" 200 2925
        [08/Jul/2013:07:54:35 +0300] "GET /trex/js/trex.fps-iso.js HTTP/1.1" 304

    Chrome reports the requests as below. This file is OK, the latest:

        Request URL:    http://localhost/trex/js/trex.iso.js
        Request Method: GET
        Status Code:    200 OK (from cache)

    This file is OK, the latest:

        Request URL:    http://localhost/trex/js/trex.amex.js
        Request Method: GET
        Status Code:    200 OK (from cache)

    This one is also OK:

        Request URL:    http://localhost/trex/js/trex.fps-iso.js
        Request Method: GET
        Status Code:    200 OK (from cache)

    The rest of the scripts all show 200 OK (from cache).
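
    For context on the mechanism (an explanation, not a diagnosis of this particular setup): a 304 is Apache's reply to a conditional request, in which the browser re-sends the validators it cached (If-None-Match, If-Modified-Since) and the server confirms they still match. The exchange can be reproduced with Python's requests library; the URL is taken from the question and assumed reachable:

        import requests

        url = "http://localhost/trex/js/trex.fps-iso.js"

        # Unconditional request: a full 200 with the body.
        r1 = requests.get(url)
        print(r1.status_code, len(r1.content))

        # Replay the validators the way a browser cache would.
        headers = {}
        if "ETag" in r1.headers:
            headers["If-None-Match"] = r1.headers["ETag"]
        if "Last-Modified" in r1.headers:
            headers["If-Modified-Since"] = r1.headers["Last-Modified"]

        r2 = requests.get(url, headers=headers)
        print(r2.status_code)   # 304 while the file is unchanged on disk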


  • How to keep character encoding with database queries.

    - by JasonS
    Hi, I am doing the following:

        1. I export a database and save it to a file called dump.sql.
        2. The file is transferred to a different server via PHP FTP.
        3. When the file has been successfully transferred, the administrator has the option to run a 'dbtransfer' script on the new host.
        4. This script splits the file up and runs the queries line by line.

    This works great, however there is a problem with foreign-language encoding. We are using UTF-8.

        Step 1: works fine, the file is in UTF-8 format.
        Step 3: when I test the contents of the dump.sql file using mb_check_encoding(), the string comes back as UTF-8.
        Step 4: this creates tables with utf8_general_ci encoding and the information is dumped in.

    When I check a table after the transfer, I get records like this: 'ç,Ç,ö,Ö,ü,Ü,ı,İ,ş,Ş,ğ,Ğ'. I don't understand how a UTF-8 string can lose its encoding when it goes into the database. Am I missing a step? Do I need to run some sort of function to ensure the string is parsed as UTF-8? Once the system is installed I can save foreign-language queries; it is just the transfer that is messing up. Any ideas?


  • Want to show <embed> and <object> tags from the YUI editor as text rather than a video

    - by user208678
    I am using the YUI rich text editor on my website (PHP/MySQL) so that users can enter textual matter and articles through it. But if a user copies and pastes some embed code into the textarea from a video site like YouTube, it should be saved as a block of text, not as a playing video, when the content is shown in the browser.

    YUI automatically converts characters into HTML entities where needed. Please note that if I put a new line in the YUI editor (by pressing the Enter key), it is converted into a <br> tag in the background, and this is not entity-encoded when the value is passed to my backend PHP script. But if I copy and paste any embed tag, or for that matter any valid HTML tag, into the textarea, it is entity-encoded by YUI.

    To support UTF-8 characters, I am using a function (DBVarConv) in my PHP script before saving into my database:

        function DBVarConv($var, $isEncoded = false) {
            if ($isEncoded)
                return addslashes(htmlentities($var, ENT_QUOTES, 'UTF-8', false));
            else
                return htmlentities($var, ENT_QUOTES, 'UTF-8', false);
        }

        $myeditorData = DBVarConv($myeditorData, true);
        // Save $myeditorData in database.

    While showing the data in the browser, I use another function called smart_html_entity_decode:

        function smart_html_entity_decode($text, $isAddslashesUsed = false) {
            if ($isAddslashesUsed)
                $tmp = stripslashes(html_entity_decode($text, ENT_QUOTES, 'UTF-8'));
            else
                $tmp = html_entity_decode($text, ENT_QUOTES, 'UTF-8');
            if ($tmp == $text)
                return $tmp;
            return smart_html_entity_decode($tmp, $isAddslashesUsed);
        }

        // Get $myData from database
        $myData = smart_html_entity_decode($myData, true);
        echo $myData;

    The problem is that in doing so, it also decodes the embed and object tags from their entity-encoded form, and as a result my object tags are shown as a video and not as simple text. Try the text editor at tumblr.com: if you paste embed code into the editor, it is shown as a block of text, not as a video. I am trying to build the same functionality on my website with UTF-8 support. Any help will be highly appreciated.
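
    The general principle for showing markup as text is to decode stored entities at most once and then escape on output, rather than decoding repeatedly until nothing changes (which is what re-creates the live tags). A sketch of that idea in Python, illustrating the principle rather than the PHP API; the pasted markup is hypothetical:

        import html

        pasted = '<embed src="http://example.com/video.swf"></embed>'

        # Stored entity-encoded, the way the editor submits it.
        stored = html.escape(pasted, quote=True)

        # Render: decode once to recover the original text ...
        text = html.unescape(stored)
        # ... then escape exactly once for display, so the browser shows
        # the tags as characters instead of interpreting them.
        print(html.escape(text, quote=True))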


  • PHP - DOM class - numbered entities and encodings problem

    - by user343607
    Hi guys, I'm having some difficulty with the PHP DOM classes. I am making a sitemap script, and I need the output of $doc->saveXML() to be like:

        <?xml version="1.0" encoding="UTF-8"?>
        <root>
          <url>
            <loc>http://www.somesite.com/servi&#xE7;os/redesign</loc>
          </url>
        </root>

    or:

        <?xml version="1.0" encoding="UTF-8"?>
        <root>
          <url>
            <loc>http://www.somesite.com/servi&#231;os/redesign</loc>
          </url>
        </root>

    but I am getting:

        <?xml version="1.0" encoding="UTF-8"?>
        <root>
          <url>
            <loc>http://www.somesite.com/servi&amp;#xE7;os/redesign</loc>
          </url>
        </root>

    This is the closest I could get, using a function that replaces named entities with numbered entities. I was also able to reproduce:

        <?xml version="1.0" ?>
        <root>
          <url>
            <loc>http://www.somesite.com/servi&amp;#xE7;os/redesign</loc>
          </url>
        </root>

    but without the encoding specified. The best solution (the way I think the code should be written) would be:

        <?php
        $myArray = array();
        // do some stuff to populate the array with URL strings

        $doc = new DOMDocument('1.0', 'UTF-8');
        // here we modify some property. Maybe this is the answer I am looking for...

        $urlset = $doc->createElement("urlset");
        $urlset = $doc->appendChild($urlset);

        foreach ($myArray as $address) {
            $url = $doc->createElement("url");
            $url = $urlset->appendChild($url);
            $loc = $doc->createElement("loc");
            $loc = $url->appendChild($loc);
            $valueContent = $doc->createTextNode($address);
            $valueContent = $loc->appendChild($valueContent);
        }
        echo $doc->saveXML();
        ?>

    Notes:

        - The server response header declares the charset as UTF-8.
        - The PHP script is saved in UTF-8.
        - The URLs read are UTF-8 strings.
        - The script above specifies the encoding in the DOMDocument constructor and does not use any conversion functions, like htmlentities, urlencode, utf8_encode...

    I've tried changing the values of the DOMDocument properties DOMDocument::$resolveExternals and DOMDocument::$substituteEntities; no combination worked. And yes, I know I can do the whole thing without specifying the character set in the DOMDocument constructor, dump the string content into a variable and make a very simple substitution with string-replace functions. That works. But I would like to know where I am slipping, and how this can be done using the native APIs and settings, or even whether it is possible. Thanks in advance.
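
    The & in &#xE7; gets escaped because a DOM text node stores characters, not markup: whatever is put into it is re-escaped on serialization. The usual approach is therefore to put the raw character into the text node and let the serializer handle escaping and encoding. An illustration with Python's xml.dom.minidom, which follows the same DOM semantics:

        from xml.dom.minidom import Document

        doc = Document()
        urlset = doc.appendChild(doc.createElement("urlset"))
        url = urlset.appendChild(doc.createElement("url"))
        loc = url.appendChild(doc.createElement("loc"))

        # Store the raw character; the serializer escapes what XML
        # requires and encodes the rest per the declared encoding.
        loc.appendChild(doc.createTextNode("http://www.somesite.com/servi\u00e7os/redesign"))

        print(doc.toxml(encoding="utf-8"))
        # b'<?xml version="1.0" encoding="utf-8"?>...servi\xc3\xa7os/redesign...'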


  • Choosing a W3C valid DOCTYPE and charset combination?

    - by George Carter
    I have a homepage with the following:

        <DOCTYPE html>
        <meta http-equiv="Content-Type" content="text/html; charset=utf-8">

    My choice of the DOCTYPE "html" is based on a recommendation for HTML pages using jQuery. My choice of charset=utf-8 is based on a recommendation to make my pages readable in most browsers. But these choices may be wrong. When I run this page through the W3C HTML validator, I get the messages you see below. Is there any way I can eliminate the two errors?

        ! Using experimental feature: HTML5 Conformance Checker.
        The validator checked your document with an experimental feature: HTML5 Conformance
        Checker. This feature has been made available for your convenience, but be aware
        that it may be unreliable, or not perfectly up to date with the latest development
        of some cutting-edge technologies. If you find any issue with this feature, please
        report them. Thank you.

        Validation Output: 2 Errors

        1. Error Line 18, Column 70: Changing character encoding utf-8 and reparsing.
           …ntent-Type" content="text/html; charset=utf-8">

        2. Error Line 18, Column 70: Changing encoding at this point would need
           non-streamable behavior.
           …ntent-Type" content="text/html; charset=utf-8">


  • Ajax / GroovyGrails Post data coming over with unexpected leading character. Who is encoding/decod

    - by ?????
    I'm having an encoding issue, and I'm not sure where to look for the problem. I have this Ajax.Request call (Prototype library) sending data to a Groovy/Grails controller:

        var myAjax = new Ajax.Request(url, {
            method: 'post',
            encoding: 'UTF-8',
            contentType: 'application/x-www-form-urlencoded',
            parameters: {'content': new_content},
            onSuccess: success,
            onFailure: failure
        });

    The data is coming in with an unexpected %A0 at the beginning. I have this simple controller that just echoes the content back:

        def titlechange = {
            def content = URLDecoder.decode(params['content'])
            printf("Content: %s; DecodedContent = %s\n", params['content'], content)
            response.characterEncoding = 'UTF-8'
            render content
        }

    The debug print statement shows:

        Content: %A0Hello%2C%20world%21; DecodedContent = †Hello, world!

    Where is that %A0 coming from? My Grails configuration has this:

        // The default codec used to encode data with ${}
        grails.views.default.codec = "none" // none, html, base64
        grails.views.gsp.encoding = "UTF-8"
        grails.converters.encoding = "UTF-8"

    Is the issue on the Grails side or on the JavaScript side?
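
    For what it's worth, %A0 is the Latin-1 byte for a non-breaking space, which suggests the posted string starts with U+00A0 (for example an &nbsp; picked up from the page) and was percent-encoded as Latin-1 rather than UTF-8; that is an inference from the log line, not from the post. A quick way to see both encodings side by side, sketched in Python:

        from urllib.parse import quote

        s = "\u00a0Hello, world!"   # leading non-breaking space

        # Encoded as Latin-1: the %A0 seen in the question.
        print(quote(s, encoding="latin-1"))   # %A0Hello%2C%20world%21
        # Encoded as UTF-8: what a UTF-8 post would look like.
        print(quote(s, encoding="utf-8"))     # %C2%A0Hello%2C%20world%21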


  • How can I add encoding to the Python-generated CSV file

    - by user1958218
    I am following this post, http://stackoverflow.com/a/9016545, and I want to know how to do the same thing in Python: I don't know how to insert the BOM data there. This is my current code:

        response = HttpResponse(content_type='text/csv')
        response['Content-Type'] = 'application/octet-stream'
        response['Content-Disposition'] = 'attachment; filename="results.csv"'
        writer = UnicodeWriter(response, quoting=csv.QUOTE_ALL, encoding="utf-8")

    I want to convert to UTF-16. The BOM is shown here, http://stackoverflow.com/a/4440143, but I don't know how to insert it:

        echo "\xEF\xBB\xBF"; // UTF-8 BOM

    I want the same for Python and UTF-16. I tried opening the CSV in Notepad, inserting \xef\xbb\xbf at the beginning, and Excel displayed it correctly, but the marker is also visible before the first column. How can I hide it? Users won't like that.
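
    The codecs module exposes the BOM byte sequences as constants, so one approach is to write the marker into the response before any CSV rows; written as raw bytes at the very start of the stream, Excel consumes it as an encoding signature instead of displaying it. A hedged sketch against Django's HttpResponse matching the code above; UnicodeWriter is assumed to be the recipe from the csv docs, and genuine UTF-16 output would also require encoding every row as UTF-16, not just the marker:

        import codecs
        import csv
        from django.http import HttpResponse

        response = HttpResponse(content_type='text/csv')
        response['Content-Disposition'] = 'attachment; filename="results.csv"'

        # The UTF-8 signature EF BB BF; codecs.BOM_UTF16_LE is the
        # little-endian UTF-16 equivalent.
        response.write(codecs.BOM_UTF8)

        writer = csv.writer(response, quoting=csv.QUOTE_ALL)
        writer.writerow(["résumé", "naïve"])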


  • Set CSV import default to UTF-8 in Calc

    - by picca
    Every time I open a CSV (comma separated values) document in OpenOffice.org Calc I get a dialog with CSV preferences. The current default character set is "Eastern Europe (ISO-8859-2)". I would like "UTF-8" to be selected by default instead.

