Search Results

Search found 25009 results on 1001 pages for 'content encoding'.

  • [Ruby] Why do I have to URI.encode even safe characters for Net::HTTP requests?

    - by Matthias
    I was trying to send a GET request to Twitter (user ID replaced for privacy reasons) using Net::HTTP:

        url = URI.parse("http://api.twitter.com/1/friends/ids.json?user_id=12345")
        resp = Net::HTTP.get_response(url)

    This throws an exception in Net::HTTP:

        NoMethodError: undefined method `empty?' for #<URI::HTTP:0x59f5c04>
            from /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/net/http.rb:1470:in `initialize'

    Just by coincidence, I stumbled upon a similar code snippet which used URI.encode prior to URI.parse, so I copied that and tried again:

        url = URI.parse(URI.encode("http://api.twitter.com/1/friends/ids.json?user_id=12345"))
        resp = Net::HTTP.get_response(url)

    Now it works fine, but why? There are no reserved characters that need escaping in the URL I mentioned, so why do I have to call URI.encode for get_response to succeed?

  • Java UTF-8 to ASCII conversion with supplements

    - by bozo
    Hi, we are accepting all sorts of national characters in a UTF-8 string on the input, and we need to convert them to an ASCII string on the output for some legacy use. (We don't accept Chinese and Japanese characters, only European languages.) We have a small utility to get rid of all the diacritics:

        public static final String toBaseCharacters(final String sText) {
            if (sText == null || sText.length() == 0)
                return sText;
            final char[] chars = sText.toCharArray();
            final int iSize = chars.length;
            final StringBuilder sb = new StringBuilder(iSize);
            for (int i = 0; i < iSize; i++) {
                String sLetter = new String(new char[] { chars[i] });
                sLetter = Normalizer.normalize(sLetter, Normalizer.Form.NFC);
                try {
                    byte[] bLetter = sLetter.getBytes("UTF-8");
                    sb.append((char) bLetter[0]);
                } catch (UnsupportedEncodingException e) {
                }
            }
            return sb.toString();
        }

    The question is how to replace the German sharp s (ß, Ð, d) and other characters that get through the above normalization method with their supplements (in the case of ß the supplement would probably be "ss", and in the case of Ð the supplement would be either "D" or "Dj"). Is there some simple way to do it, without a million .replaceAll() calls? So for example: Ðonardan = Djonardan, Blaß = Blass and so on. We can replace all "problematic" chars with empty space, but we would like to avoid this to make the output as similar to the input as possible. Thank you for your answers, Bozo

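    The recipe here is language-agnostic: decompose the text (NFD/NFKD) so accents become combining marks, strip the marks, then hand-map the few characters such as ß and Ð that have no decomposition. A minimal sketch of that idea in Ruby 2.2+ (the question's code is Java; this is only an illustration of the approach, not a drop-in fix):

        # Characters with no Unicode decomposition get explicit supplements.
        SUPPLEMENTS = { 'ß' => 'ss', 'Ð' => 'Dj' }  # extend as needed

        def to_base_characters(text)
          text = text.unicode_normalize(:nfkd)   # é -> e + combining accent
          text = text.gsub(/\p{Mn}/, '')         # drop the combining marks
          text.gsub(/[ßÐ]/) { |ch| SUPPLEMENTS[ch] }
        end

        puts to_base_characters('Ðonardan Blaß')  # => "Djonardan Blass"

    One table plus one pass replaces the "million .replaceAll() calls"; the same shape (normalize, strip the marks, map the stragglers) ports directly to java.text.Normalizer.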

  • Ruby custom class to and from YAML

    - by Sanarothe
    Hi. I'm having trouble deserializing a Ruby class that I wrote to YAML.

    Where I want to be: I want to be able to pass one object around as a full 'question', which includes the question text, some possible answers (for multiple choice) and the correct answer. One module (the encoder) takes input, builds a 'question' object out of it and appends it to the question pool. Another module reads a question pool and builds an array of 'question' objects.

    Where I am currently. Sample question pool:

        --- |
          --- !ruby/object:MultiQ
          a: "no"
          answer: "no"
          b: "no"
          c: "no"
          d: "no"
          text: "yes?"

    Encoder dump to YAML file (object is a MultiQ filled up with input; see below):

        def dump(file, object)
          File.open(file, 'a') do |out|
            YAML.dump(object.to_yaml, out)
          end
          object = nil
        end

    MultiQ class definition:

        class MultiQ
          attr_accessor :text, :answer, :a, :b, :c, :d

          def initialize(text, answer, a, b, c, d)
            @text = text
            @answer = answer
            @a = a
            @b = b
            @c = c
            @d = d
          end
        end

    The decoder (I've been trying different things, so what's here wasn't my first or best guess, but I'm at a loss and the documentation doesn't really explain things thoroughly enough):

        File.open("test_set.yaml") do |yf|
          YAML.load_documents(yf) { |item|
            new = YAML.object_maker(MultiQ, item)
            puts new
          }
        end

    Questions you can answer: How do I achieve my goal? What methods should I use, between parsing, loading files or documents, to successfully deserialize a Ruby class? I've already looked over the YAML RDoc and I didn't absorb very much, so please don't just link me to it. What other methods would you suggest using? Is there a better way to store questions like this? Should I be using a document DB, relational DB, XML, or some other format?

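    One observation, offered as a guess: in the sample pool the object dump is nested inside a literal block scalar ("--- |"), which is what you get when a YAML string is dumped a second time. YAML.dump(object.to_yaml, out) serializes the YAML text returned by to_yaml instead of the object itself, so the documents come back as strings rather than MultiQ instances. A minimal sketch of the likely fix for 1.8-era Ruby:

        require 'yaml'

        # Dump the object itself -- not object.to_yaml, which double-encodes.
        def dump(file, object)
          File.open(file, 'a') { |out| YAML.dump(object, out) }
        end

        # Each loaded document is then already a MultiQ; no object_maker needed.
        File.open('test_set.yaml') do |yf|
          YAML.load_documents(yf) { |question| puts question.text }
        end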

  • Qt Jambi: Accessing the content of QNetworkReply

    - by Richard
    Hi all, I'm having trouble accessing the content of QNetworkReply objects. The content appears to be empty or zero. According to the docs (translating from C++ to Java) I think I've got this set up correctly, but to no avail. Additionally, an "Unknown error" is being reported. Any ideas much appreciated. Code:

        public class Test extends QObject {
            private QWebPage page;

            public Test() {
                page = new QWebPage();
                QNetworkAccessManager nac = new QNetworkAccessManager();
                nac.finished.connect(this, "requestFinished(QNetworkReply)");
                page.setNetworkAccessManager(nac);
                page.loadProgress.connect(this, "loadProgress(int)");
                page.loadFinished.connect(this, "loadFinished()");
            }

            public void requestFinished(QNetworkReply reply) {
                reply.reset();
                reply.open(OpenModeFlag.ReadOnly);
                reply.readyRead.connect(this, "ready()");               // never gets called
                System.out.println("bytes: " + reply.url().toString()); // writes out asset uri no problem
                System.out.println("bytes: " + reply.bytesToWrite());   // 0
                System.out.println("At end: " + reply.atEnd());         // true
                System.out.println("Error: " + reply.errorString());    // "Unknown error"
            }

            public void loadProgress(int progress) {
                System.out.println("Loaded " + progress + "%");
            }

            public void loadFinished() {
                System.out.println("Done");
            }

            public void ready() {
                System.out.println("Ready");
            }

            public void open(String url) {
                page.mainFrame().load(new QUrl(url));
            }

            public static void main(String[] args) {
                QApplication.initialize(new String[] { });
                Test t = new Test();
                t.open("http://news.bbc.co.uk");
                QApplication.exec();
            }
        }

  • c# Remove special chars from a File

    - by jmpena
    Hello, I have a problem. I'm trying to open a text file and remove all the special chars (ñ Ñ á í etc.). The file is a layout that the clients send to me, and I parse it to send the file to an AS400 server, but I have to remove all special chars first. The problem is: in some files, when I open them in C#, a special char is read as two different chars, which moves the entire line one space to the right, so the information that has to be at a given position is no longer where it should be. If I take the same file and open it in Notepad, the file looks OK, but when I open it in WordPad the special char likewise shows up as two chars. Example: in the file I have "0001 0003JUAN PEÑA33441JPENATEST", but in C# it shows "0001 0003JUAN PEï¦A33441JPENATEST". I'm using encoding 1251. Any help?

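    One thing worth checking, since the file's true encoding isn't known from the question: code page 1251 is the Cyrillic page, while Spanish text such as PEÑA is usually Windows-1252 (or one of the old OEM pages, 850/437), and reading single-byte data under a wrong or multi-byte assumption is exactly what turns one character into two and shifts fixed-width fields. For illustration, the equivalent check in Ruby (Windows-1252 is an assumption here):

        # Read with the code page the client actually wrote, transcoding to
        # UTF-8; each accented character then stays one character wide.
        text = File.read('layout.txt', encoding: 'Windows-1252:UTF-8')

        # Strip the accents for the AS400 side: PEÑA -> PENA.
        ascii = text.unicode_normalize(:nfkd).gsub(/\p{Mn}/, '')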

  • Changing label content programmatically from within a DataTemplate used in a DataGrid column header

    - by iimpact
    I'm dynamically creating DataGrid columns (based on an event from my ViewModel) and programmatically adding them to an existing DataGrid. Each column uses a generic HeaderTemplate by setting it to a DataTemplate that has been identified in the XAML. The DataTemplate contains two labels whose content needs to be changed upon creation of the DataGrid column. How would this be done? I understand that the DataTemplate uses the ContentPresenter, but I am having trouble accessing it within a dynamically created DataGrid column. Code is as follows.

    XAML (template used to format the DataGrid column header):

        <DataTemplate x:Key="columnTemplate">
            <StackPanel>
                <Label Padding="0" Name="labelA"/>
                <Separator HorizontalAlignment="Stretch"/>
                <Label Padding="0" Name="labelB"/>
            </StackPanel>
        </DataTemplate>

    C# (used to dynamically create a DataGrid column and add it to an existing DataGrid):

        var dataTemplate = FindResource("columnTemplate") as DataTemplate;
        var column = new DataGridTextColumn();
        column.HeaderTemplate = dataTemplate;
        DataGrid1.Columns.Add(column);

    I would like to then access both labelA and labelB and change their content.

  • base 64 URL decode with Ruby/Rails?

    - by seth.vargo
    I am working with the Facebook API and Ruby on Rails, and I'm trying to parse the JSON that comes back. The problem I'm running into is that Facebook base64url-encodes their data, and there is no built-in base64url decode for Ruby. For the difference between base64 and base64url encoding, see Wikipedia. How do I decode this using Ruby/Rails? Edit: because some people have difficulty reading - base64url is DIFFERENT from base64.

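    For context: the URL-safe alphabet only swaps '+' and '/' for '-' and '_' and usually drops the '=' padding, so a standard decoder works once those are reversed. A minimal sketch (newer Rubies also ship Base64.urlsafe_decode64):

        require 'base64'

        def base64url_decode(str)
          str = str.tr('-_', '+/')                 # restore the standard alphabet
          str += '=' * ((4 - str.length % 4) % 4)  # restore the stripped padding
          Base64.decode64(str)
        end

        base64url_decode('aGVsbG8')  # => "hello" (padding added back internally)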

  • How to get UTF-8 working in java webapps?

    - by kosoant
    I need to get UTF-8 working in my Java webapp (servlets + JSP, no framework used) to support äöå etc. for regular Finnish text, and Cyrillic alphabets for special cases. My setup is the following: development environment: Windows XP; production environment: Debian; database used: MySQL 5.x. Users mainly use Firefox 2, but Opera 9.x, FF3, IE7 and Google Chrome are also used to access the site. How can I achieve this?

  • Change Emacs Default Coding System

    - by Saterus
    My problem stems from Emacs inserting the coding system headers into source files containing non-ASCII characters:

        # -*- coding: utf-8 -*-

    My coworkers do not like these headers being checked into our repositories. I don't want them inserted into my files because Emacs automatically detects that the file should be UTF-8 regardless, so there doesn't seem to be any benefit to anyone. I would like to simply set Emacs to use UTF-8 automatically for all files, yet it seems to disagree with this idea. In an effort to fix this, I've added the following to my .emacs:

        (prefer-coding-system 'utf-8)
        (setq coding-system-for-read 'utf-8)
        (setq coding-system-for-write 'utf-8)

    This does not seem to solve my problem. Emacs still inserts the coding-system headers into my files. Anyone have any ideas? EDIT: I think this problem is specifically related to ruby-mode. I still can't turn it off, though.

  • display of umlauts in firefox

    - by Mike D
    I was doing some web searching and found some strange things involving umlauts. For example, if you do a Google or Yahoo search for the word "nther" you are likely to find things like G&#xfc;nther, which I take to be Gunther with an umlaut over the u. Now my question is: what, if anything, can I do to cause these characters to be properly displayed by Firefox under Windows XP? An amazing thing is that I had to introduce spaces into the "G & #" string above, otherwise it was properly displayed here as a u with an umlaut!

  • How do you get the glyph for a character encoded as '&#333;' from a utf-8 encoded database field usi

    - by AE
    I have a MySQL database table with a collation of 'utf8_general_ci' and the value in the field is: x & #299; bán yá wén (without the spaces). When this is converted (for example by StackOverflow's editor) it looks like this: xī bán yá wén, where the second character looks like a lowercase i with a bar over the top. In PHP, what function converts the & #299 ; entity into the ī character? I've tried using html_entity_decode($str, ENT_COMPAT, 'UTF-8'), however I get characters like the following: yÄ«n wén or zhÅ•ng wén. I'm pretty sure there's something I don't understand about the decoding, which is why I'm using the wrong function. Can anyone shed some light on how to get the single character glyph that's represented by the entity & #299 ; and similar high-number characters above 255? Many thanks, AE

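    A reading of the symptom, offered as a guess: "Ä«" is exactly what the UTF-8 bytes of ī (0xC4 0xAB) look like when rendered as Latin-1, which suggests html_entity_decode is doing its job and the mojibake creeps in further along the output path (page charset header, database connection charset, etc.). The entity itself is just a Unicode code point; for illustration, the same decode in Ruby:

        require 'cgi'

        # A decimal reference is a Unicode code point: 299 = U+012B = ī.
        299.chr(Encoding::UTF_8)              # => "ī"
        CGI.unescapeHTML('x&#299; b&#225;n')  # => "xī bán"

        # The reported symptom: valid UTF-8 for ī re-read as Latin-1.
        'ī'.bytes.map { |b| '%02X' % b }      # => ["C4", "AB"]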

  • Windows code pages, what are they?

    - by Mike D
    I'm trying to gain a basic understanding of what is meant by a Windows code page. I kind of get the feeling it's a translation between a given 8-bit value and some 'abstraction' for a given character graphic. I made the following experiment: I created a character literal with two versions of the letter u with an umlaut, one entered using ALT 129 (which uses code page 437) and one using ALT 0252 (which uses code page 1252). When I examined the literal, both characters had the value 252. Is 252 the universal 8-bit abstraction for u with an umlaut? Is it the Unicode value? Aside from keyboard input, are there any library routines or system calls that use code pages? For example, is there a function to translate a string using a given code table (as above for the ALT 129 value)?

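    For what it's worth: 252 is both the Windows-1252 byte for ü and its Unicode code point (U+00FC), because the first 256 Unicode code points were deliberately aligned with Latin-1, which 1252 extends; CP437 is the odd one out, storing ü at 129. A small Ruby illustration (assuming your Ruby build ships the IBM437 converter):

        'ü'.ord                           # => 252, the Unicode code point
        'ü'.encode('Windows-1252').bytes  # => [252], same value as the byte
        'ü'.encode('IBM437').bytes        # => [129], hence ALT 129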

  • convert special characters but not tags

    - by Tom
    I've got some text which needs converting to use HTML entities, but it also contains tags. Here's a sample:

        <p>Ofcom issued the warning to Global-owned GWR in Bristol – which is required to operate as a "contemporary and chart music and information station" – for operating outside the music </p>

    The quotes and dashes need to be converted, but the paragraph tags must remain HTML. Using something like htmlentities converts everything; how can I convert everything but the tags?

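    One language-agnostic approach: entity-encode only the characters outside ASCII, which catches the dashes and any typographic quotes while the tags (all ASCII) pass through untouched. A sketch of the idea in Ruby (the question is PHP-side, so this only shows the shape):

        html = '<p>GWR in Bristol – a "contemporary" station</p>'

        # Replace each non-ASCII character with its numeric entity.
        encoded = html.gsub(/[^[:ascii:]]/) { |ch| "&##{ch.ord};" }
        # => '<p>GWR in Bristol &#8211; a "contemporary" station</p>'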

  • Serializing chinese characters with Xerces 2.6

    - by Gianluca
    I have a Xerces (2.6) DOMNode object encoded in UTF-8. I use to read its TEXT element like this:

        CBuffer DomNodeExtended::getText( const DOMNode* node ) const
        {
            char* p = XMLString::transcode( node->getNodeValue( ) );
            CBuffer xNodeText( p );
            delete p;
            return xNodeText;
        }

    Where CBuffer is, well, just a buffer object which is later persisted as-is in a DB. This works as long as there are just common ASCII characters in the TEXT. If we have, for example, Chinese ones, they get lost in the transcode operation. I've googled a lot seeking a solution. It looks like with Xerces 3 the DOMWriter class should solve the problem. With Xerces 2.6 I'm trying the XMLTranscoder, but no success yet. Could anybody help?

  • how to properly display utf encoded characters on my utf-8 encoded page?

    - by Ali
    Hi guys, I'm retrieving emails, and some of my emails have UTF-encoded text. However, even though my page is encoded as UTF-8, in some places when I try to output UTF text I get funny characters like:

        =?utf-8?B?Rlc6INqp24zYpyDYotm+INin2LMg2YXYs9qp2LHYp9uB2bkg2qnbjCDZhtmC?=
        =?utf-8?B?2YQg2qnYsdiz2qnYqtuSINuB24zaug==?=

    Whereas in other areas of the same page it displays fine. What's going on?

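    Those "=?utf-8?B?...?=" runs are RFC 2047 encoded-words, the format mail headers use for non-ASCII text; they are unrelated to the page's own encoding, which is why the rest of the page is fine. The B form is plain base64. A rough Ruby sketch of decoding one (a real mail library handles the corner cases, e.g. the Q form and adjacent words):

        require 'base64'

        word = '=?utf-8?B?Rlc6INqp24zYpyDYotm+INin2LMg2YXYs9qp2LHYp9uB2bkg2qnbjCDZhtmC?='
        charset, _form, payload = word.match(/=\?(.+?)\?(.)\?(.*)\?=/).captures
        puts Base64.decode64(payload).force_encoding(charset)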

  • Oracle Unicode problem when NLS_CHARACTERSET is WE8ISO8859P1 and NLS_NCHAR_CHARACTERSET is AL16UTF16, using ColdFusion as the programming language

    - by tsurahman
    I have two Oracle 10g databases, XE and Enterprise. (The original post showed the test table's column datatypes for both databases, and screenshots of the XE and Enterprise results; the images are not reproduced here.) I tried to insert some Unicode chars from http://www.sustainablegis.com/unicode/ using ColdFusion 9 developer edition:

        <cfprocessingDirective pageencoding="utf-8">
        <cfset setEncoding("form","utf-8")>
        <form action="" method="post">
            Unicode : <br>
            <textarea name="txaUnicode" id="txaUnicode" cols="50" rows="10"></textarea>
            <br><br>
            Language : <br>
            <input type="Text" name="txtLanguage" id="txtLanguage">
            <br><br>
            <input type="Submit">
        </form>

        <cfset dsn = "theDSN">
        <cfif StructKeyExists(FORM, "FIELDNAMES")>
            <cfquery name="qryInsert" datasource="#dsn#">
                INSERT INTO UNICODE
                (
                    C_VARCHAR2,
                    C_CHAR,
                    C_CLOB,
                    C_NVARCHAR2,
                    LANGUAGE
                )
                VALUES
                (
                    <cfqueryparam cfsqltype="CF_SQL_VARCHAR" value="#FORM.TXAUNICODE#">,
                    <cfqueryparam cfsqltype="CF_SQL_CHAR" value="#FORM.TXAUNICODE#">,
                    <cfqueryparam cfsqltype="CF_SQL_LONGVARCHAR" value="#FORM.TXAUNICODE#">,
                    <cfqueryparam cfsqltype="CF_SQL_VARCHAR" value="#FORM.TXAUNICODE#">,
                    <cfqueryparam cfsqltype="CF_SQL_VARCHAR" value="#FORM.TXTLANGUAGE#">
                )
            </cfquery>
        </cfif>

        <cfquery name="qryUnicode" datasource="#dsn#">
            SELECT *
            FROM UNICODE
            ORDER BY LANGUAGE
        </cfquery>

        <table border="1">
            <thead>
                <tr>
                    <th>LANGUAGE</th>
                    <th>C_VARCHAR2</th>
                    <th>C_CHAR</th>
                    <th>C_CLOB</th>
                    <th>C_NVARCHAR2</th>
                </tr>
            </thead>
            <tbody>
                <cfoutput query="qryUnicode">
                    <tr>
                        <td>#qryUnicode.LANGUAGE#</td>
                        <td>#qryUnicode.C_VARCHAR2#</td>
                        <td>#qryUnicode.C_CHAR#</td>
                        <td>#qryUnicode.C_CLOB#</td>
                        <td>#qryUnicode.C_NVARCHAR2#</td>
                    </tr>
                </cfoutput>
            </tbody>
        </table>

    From this guide http://www.stanford.edu/dept/itss/docs/oracle/10g/server.101/b10749/ch6unicode.htm#i1007297 I think my Enterprise database should produce the same thing as XE (at least for the NVARCHAR2 column), since the typical solution from that guide says: use NCHAR and NVARCHAR2 datatypes to store Unicode characters; keep WE8ISO8859P1 as the database character set; use AL16UTF16 as the national character set. So, how can I make it work in my Enterprise database too? Thank you :)

  • Parsing a UTF-16 encoded xml file in ruby

    - by Matthew Toohey
    Hello, I've been trying to parse a UTF-16 encoded XML file in Ruby (1.8.7), and I can't seem to find how to do it by searching (Google and Stack Overflow). Here's the XML file URL: http://www.abc.net.au/triplej/feeds/playout/triplejsydneyplayout.xml?_5366 Getting the XML string from Net::HTTP and passing it to REXML, then calling logger.info xmlDoc.inspect, produces:

        <UNDEFINED> ... </>

    Any ideas? Cheers

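    A sketch of one common workaround on 1.8-era Ruby, assuming the feed starts with a UTF-16 byte-order mark (raw_body stands for the string fetched with Net::HTTP): transcode the bytes to UTF-8 with Iconv, patch the prolog's encoding declaration to match, and only then hand the document to REXML:

        require 'iconv'
        require 'rexml/document'

        # Iconv's "UTF-16" converter uses the BOM to pick the byte order;
        # use UTF-16LE / UTF-16BE explicitly if the BOM is absent.
        utf8 = Iconv.conv('UTF-8', 'UTF-16', raw_body)

        # REXML trusts the declared encoding, so make it match the bytes.
        utf8 = utf8.sub(/encoding=(['"])utf-16\1/i, 'encoding="UTF-8"')

        doc = REXML::Document.new(utf8)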

  • Exporting SQL Server table to CSV issue commas, tabs and quotes

    - by cyberpine
    After we export to a flat-file CSV, columns with commas, quotes and tabs cause problems in Excel. The vendor needs to read the file in Excel to make manual changes, and then needs it back in flat-file CSV format to load into an Oracle table using PL/SQL. I can remove those characters from the table in SQL Server, but is there a smarter way? Does it make sense to save to CSV when done in Excel, and will that cause problems when attempting to load the file into Oracle anyway? Also, we need the first row to have column names. Is there any SQL way to generate all the files in one swoop (with the titles in the first row) rather than using export to flat file?

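    The usual alternative to stripping characters is proper quoting: RFC 4180 CSV wraps any field containing a delimiter, quote or line break in double quotes and doubles embedded quotes, Excel round-trips that correctly, and Oracle's loaders can be told about it (OPTIONALLY ENCLOSED BY '"'). A Ruby sketch of a writer that applies those rules, header row included:

        require 'csv'

        # Fields with commas, quotes or tabs survive the round trip once
        # they are quoted per RFC 4180 -- no need to strip them.
        CSV.open('export.csv', 'w') do |csv|
          csv << ['ID', 'NAME', 'NOTES']                    # column names first
          csv << [1, 'Acme, Inc.', %Q{He said "ok"\there}]  # quoted as needed
        end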

  • Why is my GetNextChar() in my DecoderFallbackBuffer Specialization Repeatedly Getting Called?

    - by Canoehead
    I need to produce my own DecoderFallback and DecoderFallbackBuffer classes to implement some custom stream decoding. I have found that the stream reader making use of them calls GetNextChar() repeatedly, even when my specialized DecoderFallbackBuffer.Remaining property returns 0 to indicate that there are no more characters to return. The end result is that the stream reader gets into an infinite loop. Why is this happening?

  • Oracle Database character set issue with the audit tables on Debian

    - by Leonid Shirmanov
    I've got Oracle XE installed on Debian Linux, and the character set is configured as AL32UTF8. Several client applications connect to the database from Windows with different locales - French etc., not English. That's OK for all the client data these applications put into the database: nothing gets converted, and text data in French is represented correctly. But text in the audit tables looks like '??????' if it contains any non-English character. I suppose this is because audit records go to the database in a different locale, independent of the client's globalization/locale settings. How can this globalization issue be fixed? Thanks!

  • Outlook is unable to accept french-accented characters in my mailto string?

    - by 4501
    Outlook is causing some problems when being passed a mailto string with accented characters in it. Changing the codepage for my entire webpage that has this string on it solves the problem, but that causes other problems in the system, so I would rather not do that. A string like this returns a lot of garbage characters:

        "mailto:[email protected]?subject=Mon bâtiment / Départementé / Bureau n'est pas répertorié"

    Meanwhile, this one gets cut off after the "D":

        "mailto:[email protected]?subject=Mon bâtiment / D&eacute;partement&#233; / Bureau n'est pas r&#233;pertori&#233;"

    What gives? Is there no way to make this work? I am in Canada, so some regional issues might be taking effect here.

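    A sketch of the usual fix, offered with the caveat that Outlook's handling varies by version: a mailto link is a URL, so per RFC 6068 the subject should be percent-encoded as UTF-8 rather than passed raw or as HTML entities. In Ruby, for illustration, with a placeholder address:

        require 'erb'

        subject = "Mon bâtiment / Départementé / Bureau n'est pas répertorié"
        mailto  = "mailto:someone@example.com?subject=" +
                  ERB::Util.url_encode(subject)   # é -> %C3%A9, space -> %20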