Search Results

Search found 1003 results on 41 pages for 'utf8'.

Page 26/41 | < Previous Page | 22 23 24 25 26 27 28 29 30 31 32 33  | Next Page >

  • How do I output Unicode characters as a pair of ASCII characters?

    - by ChrisF
    How do I convert (as an example) Señor Coconut Y Su Conjunto - Introducciõn into the form where each non-ASCII character is written out as the pair of single-byte characters of its UTF-8 encoding, i.e. SeÃ±or Coconut Y Su Conjunto - IntroducciÃµn? I've got an app that creates m3u playlists, but when the track filename, artist or title contains non-ASCII characters it doesn't get read properly by the music player, so the track doesn't get played. I've discovered that if I write the track out as: #EXTINFUTF8:76,Señor Coconut Y Su Conjunto - Introducciõn #EXTINF:76,Señor Coconut Y Su Conjunto - Introducciõn #UTF8:01-Introducciõn.mp3 01-Introducciõn.mp3 then the music player will read it correctly and play the track. My problem is that I can't find the information I need to be able to do the conversion properly.
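
    A minimal sketch of the conversion, assuming Python is an acceptable way to illustrate it: encoding the string to UTF-8 and then reinterpreting those bytes as Latin-1 yields exactly the two-character form for each accented letter.

      def to_byte_pairs(text):
          # Each non-ASCII character becomes the pair of single-byte characters
          # that make up its UTF-8 encoding (useful when the consumer reads the
          # playlist as Latin-1).
          return text.encode("utf-8").decode("latin-1")

      print(to_byte_pairs("Señor Coconut Y Su Conjunto - Introducciõn"))
      # -> SeÃ±or Coconut Y Su Conjunto - IntroducciÃµn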

    Read the article

  • Rails 3 routes and using GET to create clean URLs?

    - by Hard-Boiled Wonderland
    I am a little confused by the routes in Rails 3 as I am just starting to learn the framework. I have a form generated here: <%= form_tag towns_path, :method => "get" do %> <%= label_tag :name, "Search for:" %> <%= text_field_tag :name, params[:name] %> <%= submit_tag "Search" %> <% end %> Then in my routes: get "towns/autocomplete_town_name" get "home/autocomplete_town_name" match 'towns' => 'towns#index' match 'towns/:name' => 'towns#index' resources :towns, :module => "town" resources :businesses, :module => "business" root :to => "home#index" So why, when submitting the form, do I get the URL /towns?utf8=✓&name=townname&commit=Search? The question is: how do I turn that into a clean URL like /towns/townname? Thanks, Andrew

    Read the article

  • No download dialog with FileResult

    - by majkinetor
    I am returning a File result from an action triggered by a form post event, but I can't get a download dialog. If I use: return File(Encoding.UTF8.GetBytes(reportPath), "text/plain", "Report.csv"); I get the path to the file in the target div once the ajax call completes. When I use return File(reportPath, "text/plain", "Report.csv"); I get the content of the file in the target div. Any thoughts? The action is declared as [HttpPost] public virtual ActionResult ExportFilter(Model model) { string outputFile = CreateReport(model); return File(....) }

    Read the article

  • oracle sql developer is truncating my results

    - by nont
    I'm calling a stored function like this: select XML_INVOICE.GENERATE_XML_DOC('84200006823') from dual; The query results then show up in a table underneath, which I can right-click and select "Export Data" - XML: <?xml version='1.0' encoding='UTF8' ?> <RESULTS> <ROW> <COLUMN NAME="XML_INVOICE.GENERATE_XML_DOC('84200006823')"><![CDATA[<xml>yada yada</xml><morexml>...]]></COLUMN> </ROW> </RESULTS> The problem is the "..." - SQL Developer (2.1.0.63 on Linux) is not showing all the data; it's truncating the result and appending the ellipsis. This is of no use to me. How do I get it to export ALL of my data?

    Read the article

  • How do I convert a NSString into TIS-620 encoded string

    - by MacPC
    In the Apple documentation I can see that there's a way to convert a UTF-8 string to an ASCII string like this: NSData *asciiData = [theString dataUsingEncoding:NSASCIIStringEncoding allowLossyConversion:YES]; NSString *asciiString = [[NSString alloc] initWithData:asciiData encoding:NSASCIIStringEncoding]; But my app requires a TIS-620 string to post to a site, so I tried to do the same thing: NSData *asciiData = [newPost.header dataUsingEncoding:kCFStringEncodingMacThai allowLossyConversion:YES]; NSString *asciiString = [[NSString alloc] initWithData:asciiData encoding:kCFStringEncodingMacThai]; NSLog(@"%@", asciiString); The output I got is like this: ???????????. Does anyone know how to convert the NSString to TIS-620 properly? Thanks so much.
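
    Part of the problem may be that kCFStringEncodingMacThai is a CFStringEncoding constant rather than an NSStringEncoding, and MacThai is not the same table as TIS-620. Purely to show what a correct TIS-620 conversion should look like (one byte per Thai character), here is a sketch in Python, which ships a tis-620 codec; the sample string is an assumption, not data from the question.

      # Illustration of TIS-620 (not the Cocoa API): every Thai character maps
      # to a single byte, so the encoded data is as long as the text.
      text = "สวัสดี"                          # hypothetical sample string
      data = text.encode("tis-620")
      print(data)                              # e.g. b'\xca\xc7\xd1\xca\xb4\xd5'
      print(data.decode("tis-620") == text)    # True - the conversion round-trips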

    Read the article

  • How do I get readable output from os.urandom(64)?

    - by zjm1126
    My code is: print os.urandom(64) which outputs: > "D:\Python25\pythonw.exe" "D:\zjm_code\a.py" \xd0\xc8=<\xdbD' \xdf\xf0\xb3>\xfc\xf2\x99\x93 =S\xb2\xcd'\xdbD\x8d\xd0\\xbc{&YkD[\xdd\x8b\xbd\x82\x9e\xad\xd5\x90\x90\xdcD9\xbf9.\xeb\x9b>\xef#n\x84 which isn't readable, so I tried this: print os.urandom(64).decode("utf-8") but then I get: > "D:\Python25\pythonw.exe" "D:\zjm_code\a.py" Traceback (most recent call last): File "D:\zjm_code\a.py", line 17, in <module> print os.urandom(64).decode("utf-8") File "D:\Python25\lib\encodings\utf_8.py", line 16, in decode return codecs.utf_8_decode(input, errors, True) UnicodeDecodeError: 'utf8' codec can't decode bytes in position 0-3: invalid data What should I do to get human-readable output?
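
    os.urandom returns raw random bytes, which are not text in any encoding, so decoding them as UTF-8 will almost always fail. If the goal is a printable token, one option is to render the bytes as hex or base64 instead; a sketch in Python 3 syntax (the traceback above shows Python 2.5, where the same functions exist but print is a statement):

      import base64
      import binascii
      import os

      raw = os.urandom(64)                                  # 64 random bytes, not valid UTF-8
      print(binascii.hexlify(raw).decode("ascii"))          # 128 hex characters, always printable
      print(base64.urlsafe_b64encode(raw).decode("ascii"))  # shorter printable form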

    Read the article

  • IE 8 Chinese encoding characters

    - by digitalbart
    Hello, I am unable to render Chinese characters in IE 8. I have researched this and I am aware of the meta tag that forces compatibility mode. I am also aware of the language pack you can install. Finally, I have seen that Microsoft actually forces IE7 compatibility mode on their Chinese website: http://www.microsoft.com/zh/cn/default.aspx I am wondering if anyone has any alternative solutions to this problem; none of the above seems that appealing to me. I am using utf8 as my encoding and this problem only occurs in IE8. Thanks

    Read the article

  • How to handle a large table in MySQL?

    - by Frantz Miccoli
    I have a database used to store items and properties of these items. The number of properties is extensible, so there is a join table that stores each property value associated with an item: CREATE TABLE `item_property` ( `property_id` int(11) NOT NULL, `item_id` int(11) NOT NULL, `value` double NOT NULL, PRIMARY KEY (`property_id`,`item_id`), KEY `item_id` (`item_id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci; This database has two goals: storing (which has first priority and has to be very quick; I would like to perform many inserts, hundreds, in a few seconds), and retrieving data (selects using item_id and property_id), which is a second priority; it can be slower, but not by too much, because that would ruin my use of the DB. Currently this table holds 1.6 billion entries and a simple count can take up to 2 minutes... Inserting isn't fast enough to be usable. I'm using Zend_Db to access my data and would really be happy if you don't suggest I develop any PHP-side part. Thanks for your advice!
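
    Not an answer to the schema side, but for the insert speed the usual lever is batching many rows into a single INSERT rather than issuing hundreds of single-row statements. A rough sketch of the idea in Python with PyMySQL, purely for illustration (the connection details are made up, and the asker's application actually talks to MySQL through Zend_Db on the PHP side):

      import pymysql  # illustration only; any client that supports multi-row INSERT works

      conn = pymysql.connect(host="localhost", user="app", password="secret",
                             database="items")              # hypothetical credentials
      rows = [(1, 101, 0.5), (2, 101, 1.25), (1, 102, 3.0)]  # (property_id, item_id, value)
      with conn.cursor() as cur:
          # One statement per batch keeps per-row overhead (parsing, round trips,
          # index maintenance) much lower than row-by-row inserts.
          cur.executemany(
              "INSERT INTO item_property (property_id, item_id, value) VALUES (%s, %s, %s)",
              rows,
          )
      conn.commit()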

    Read the article

  • Zend Framework multiple databases

    - by Uffo
    I'm currently using only one database with Zend Framework, but now I have to add ONE MORE. I'm using this code right now: public static function setupDatabase() { $config = self::$registry->configuration; $db = Zend_Db::factory($config->db->adapter, $config->db->toArray()); $db->query("SET NAMES 'utf8'"); self::$registry->database = $db; Zend_Db_Table::setDefaultAdapter($db); } What code do I need to write in order to use one more database, and how do I reference it when I need to run some queries? Best Regards!

    Read the article

  • SQL Syntax Error 1064

    - by 01010011
    Hi, I keep getting the following error message: ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ''isbn10','isbn13','title','edition','author_f_name','author_m_name','author_l_na' at line 1 when trying to populate my MySQL database from the command line with the following command: source C:\myFilePath\myFileName.sql Here is an excerpt from my mysqldump (showing the table structure for book). Where did I go wrong? Any assistance will be appreciated: -- -- Table structure for table book DROP TABLE IF EXISTS book; /*!40101 SET @saved_cs_client = @@character_set_client */; /*!40101 SET character_set_client = utf8 */; CREATE TABLE book ( book_id int(11) NOT NULL AUTO_INCREMENT, isbn10 char(20) DEFAULT NULL, isbn13 char(20) DEFAULT NULL, title char(20) DEFAULT NULL, edition char(20) DEFAULT NULL, author_f_name char(20) DEFAULT NULL, author_m_name char(20) DEFAULT NULL, author_l_name char(20) DEFAULT NULL, cond enum('as new','very good','good','fair','poor') DEFAULT NULL, price decimal(8,2) DEFAULT NULL, genre char(20) DEFAULT NULL, PRIMARY KEY (book_id) ) ENGINE=MyISAM DEFAULT CHARSET=latin1; /*!40101 SET character_set_client = @saved_cs_client */; -- -- Dumping data for table book

    Read the article

  • UploadFileAsync not asynchronous?

    - by a2h
    Alright, I did a bit of Googling and searching here; the only related question I found was this one, although its only answer wasn't marked as accepted, is old and is confusing. My problem is basically what I've said in the title: the GUI freezes while the upload is in progress. My code: // stuff above snipped public partial class Form1 : Form { WebClient wcUploader = new WebClient(); public Form1() { InitializeComponent(); wcUploader.UploadFileCompleted += new UploadFileCompletedEventHandler(UploadFileCompletedCallback); wcUploader.UploadProgressChanged += new UploadProgressChangedEventHandler(UploadProgressCallback); } private void button1_Click(object sender, EventArgs e) { if (openFileDialog1.ShowDialog() == DialogResult.OK) { string toUpload = openFileDialog1.FileName; wcUploader.UploadFileAsync(new Uri("http://anyhub.net/api/upload"), "POST", toUpload); } } void UploadFileCompletedCallback(object sender, UploadFileCompletedEventArgs e) { textBox1.Text = System.Text.Encoding.UTF8.GetString(e.Result); } void UploadProgressCallback(object sender, UploadProgressChangedEventArgs e) { textBox1.Text = (string)e.UserState + "\n\n" + "Uploaded " + e.BytesSent + "/" + e.TotalBytesToSend + "b (" + e.ProgressPercentage + "%)"; } }

    Read the article

  • mysql ON DUPLICATE KEY UPDATE

    - by julio
    Hi, I'm stuck on a MySQL query using ON DUPLICATE KEY UPDATE. I'm getting the error: mySQL Error: 1062 - Duplicate entry 'hr2461809-3' for key 'fname' The table looks like this: id int(10) NOT NULL default '0', picid int(10) unsigned NOT NULL default '0', fname varchar(255) NOT NULL default '', type varchar(5) NOT NULL default '.jpg', path varchar(255) NOT NULL default '', PRIMARY KEY (id), UNIQUE KEY fname (fname), KEY picid (propid) ) ENGINE=MyISAM DEFAULT CHARSET=utf8; And the query that's breaking is this: INSERT INTO images SET picid=732, fname='hr2461809-3', path='pictures/' ON DUPLICATE KEY UPDATE picid=732, fname='hr2461809-3', path='pictures/' I'm using a very similar query elsewhere in the app with no issues, and I'm not sure why this one breaks. I expected that when the UNIQUE KEY on fname is violated, it would simply update the row where the violation occurred? Thanks for any help

    Read the article

  • Flask Admin didn't show all fields

    - by twoface88
    I have a model like this: class User(db.Model): __tablename__ = 'users' __table_args__ = {'mysql_engine' : 'InnoDB', 'mysql_charset' : 'utf8'} id = db.Column(db.Integer, primary_key=True) username = db.Column(db.String(80), unique=True) email = db.Column(db.String(120), unique=True) _password = db.Column('password', db.String(80)) def __init__(self, username = None, email = None, password = None): self.username = username self.email = email self._set_password(password) def _set_password(self, password): self._password = generate_password_hash(password) def _get_password(self): return self._password def check_password(self, password): return check_password_hash(self._password, password) password = db.synonym("_password", descriptor=property(_get_password, _set_password)) def __repr__(self): return '<User %r>' % self.username I have a ModelView: class UserAdmin(sqlamodel.ModelView): searchable_columns = ('username', 'email') excluded_list_columns = ['password'] list_columns = ('username', 'email') form_columns = ('username', 'email', 'password') But no matter what I do, Flask-Admin doesn't show the password field when I'm editing user info. Is there any way to get it, even just to edit the hash? UPDATE: https://github.com/mrjoes/flask-admin/issues/78
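
    The field is probably skipped because password is exposed as a synonym/property rather than a plain Column, so the form scaffolder ignores it. One possible workaround (hook names vary between flask-admin versions, so treat this as a sketch that reuses the classes and imports from the question) is to override scaffold_form and attach an explicit WTForms field:

      from wtforms import PasswordField

      class UserAdmin(sqlamodel.ModelView):   # sqlamodel imported as in the question
          searchable_columns = ('username', 'email')
          excluded_list_columns = ['password']
          list_columns = ('username', 'email')

          def scaffold_form(self):
              # Build the auto-generated form, then add a field for the synonym
              # column that the scaffolder skipped; assigning to the form class
              # makes it appear on create/edit views.
              form_class = super(UserAdmin, self).scaffold_form()
              form_class.password = PasswordField('Password')
              return form_class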

    Read the article

  • python input UnicodeDecodeError:

    - by The man on the Clapham omnibus
    python 3.x >>> a = input() hope >>> a 'hope' >>> b = input() håpe >>> b 'håpe' >>> c = input() start typing hå... delete using backspace... and change to hope Traceback (most recent call last): File "<stdin>", line 1, in <module> UnicodeDecodeError: 'utf8' codec can't decode byte 0xc3 in position 1: invalid continuation byte >>> The situation is not terrible, and I am working around it, but I find it strange that the bytes get messed up when deleting. Has anyone else experienced this? The terminal history shows that I apparently entered h?ope. Any ideas? In the script that uses this, I do import readline to get command-line history.
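
    The error can be reproduced without the terminal: 0xc3 is a UTF-8 lead byte that must be followed by a continuation byte, and the backspace handling apparently left it dangling in the input buffer. A small sketch (the byte string is a guess at what actually reached Python):

      broken = b"h\xc3ope"     # a lone 0xc3 lead byte followed by an ASCII 'o'
      try:
          broken.decode("utf-8")
      except UnicodeDecodeError as exc:
          print(exc)           # ... can't decode byte 0xc3 in position 1: invalid continuation byte

      print(broken.decode("utf-8", errors="replace"))   # 'h\ufffdope' - lossy but never raises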

    Read the article

  • Storing apostrophes, exclamation marks, etc. in mysql database

    - by rein
    I changed from latin1 to utf8. Although all sorts of text were displaying fine, I noticed non-English characters were stored in the database as weird symbols. I spent a day trying to fix that, and now non-English characters display as non-English characters in the database and display the same in the browser. However, I noticed that apostrophes are stored as &#39; and exclamation marks as &#33;. Is this normal, or should they appear as ' and ! in the database instead? If so, what would I need to do to fix that?
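
    &#39; and &#33; are HTML character references, which suggests the text is being HTML-escaped (for example by an htmlentities-style filter) before it is stored; the usual convention is to store the raw characters and escape only when rendering output. Existing rows could be decoded back, sketched here in Python on the assumption that a Python 3 script is an acceptable one-off tool:

      import html

      stored = "it&#39;s here&#33;"      # hypothetical value as it currently sits in the table
      print(html.unescape(stored))       # it's here!  - the raw characters to store instead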

    Read the article

  • Unique constraint with nullable column

    - by Álvaro G. Vicario
    I have a table that holds nested categories. I want to avoid duplicate names for same-level items (i.e., categories with the same parent). I've come up with this: CREATE TABLE `category` ( `category_id` int(10) unsigned NOT NULL AUTO_INCREMENT, `category_name` varchar(100) NOT NULL, `parent_id` int(10) unsigned DEFAULT NULL, PRIMARY KEY (`category_id`), UNIQUE KEY `category_name_UNIQUE` (`category_name`,`parent_id`), KEY `fk_category_category1` (`parent_id`,`category_id`), CONSTRAINT `fk_category_category1` FOREIGN KEY (`parent_id`) REFERENCES `category` (`category_id`) ON DELETE SET NULL ON UPDATE CASCADE ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_spanish_ci Unfortunately, category_name_UNIQUE does not enforce my rule for root-level categories (those where parent_id is NULL). Is there a reasonable workaround?

    Read the article

  • Glib::ustring and Japanese characters

    - by user294787
    Glib::ustring is supposed to work well with UTF-8, but I have a problem when working with Japanese strings. If you compare the two strings "???" and "???" using the == operator or the compare method, it answers that the two strings are equal. I don't understand why. How does Glib::ustring work? The only way I found to get false from the comparison is to compare strings of different sizes, for example "?????" and "????". Very strange...

    Read the article

  • Socket receive buffer size

    - by Kanishka
    Is there a way to determine the receive buffer size of a TCP/IP socket in C#? I am sending a message to a server and expecting a response whose size I am not sure of. IPEndPoint ipep = new IPEndPoint(IPAddress.Parse("192.125.125.226"),20060); Socket server = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp); server.Connect(ipep); String OutStr= "49|50|48|48|224|48|129|1|0|0|128|0|0|0|0|0|4|0|0|32|49|50"; byte[] temp = OutStr.Split('|').Select(s => byte.Parse(s)).ToArray(); int byteCount = server.Send(temp); byte[] bytes = new byte[255]; int res=0; res = server.Receive(bytes); return Encoding.UTF8.GetString(bytes);
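
    Apart from the buffer-size question, TCP itself gives no message boundaries, so a single fixed 255-byte read may cut a reply short. The usual pattern (sketched here in Python rather than C#, purely to show the idea) is to read in a loop until the peer closes the connection or a protocol-defined length/terminator is reached:

      import socket

      def read_all(sock, chunk_size=4096):
          # Keep reading until the server closes its side of the connection.
          parts = []
          while True:
              chunk = sock.recv(chunk_size)
              if not chunk:            # empty result means the peer closed the socket
                  break
              parts.append(chunk)
          return b"".join(parts)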

    Read the article

  • Turkish characters are not displayed correctly

    - by tfeseas
    The MySQL database uses utf-8 encoding and the data is stored correctly. I use a SET NAMES utf8 query to make sure the data comes back UTF-8 encoded. All variables from the database work fine as long as the header charset is utf-8, but the static HTML characters do not display properly. When I set the header charset to ISO-8859-9, the variables are displayed differently while the HTML characters work OK. Can anyone help me? <?php header('Content-Type: text/html; charset=ISO-8859-9'); ?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head><title>noname</title> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />

    Read the article

  • How to convert to string and read data from a TCP packet

    - by salime
    I used SharpPcap to capture TCP packets. Now I want to reconstruct the HTTP message from the TCP packets, but I don't know how. I read somewhere that I can find the start of the HTTP message in the TCP data, so I tried to convert the byte[] TCP data to a string using this code: string s = System.Text.Encoding.UTF8.GetString(tcp_pack.Data); but the string isn't readable - like a binary file opened with Notepad. Is it because the data is encrypted, or is the code incorrect? How can I reconstruct an HTTP message from the TCP packets?
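
    If the payload looks like binary garbage it may simply be a compressed (gzip) or non-text body rather than encryption. Reassembly is the key step: the TCP payloads of one connection have to be concatenated in sequence-number order, and only then does the byte stream contain an HTTP message (headers up to the first blank line, body after it). A rough sketch of that last step in Python, with an invented sample message:

      def split_http(message):
          # 'message' is the reassembled byte stream of one direction of a connection.
          head, _, body = message.partition(b"\r\n\r\n")
          headers = head.decode("iso-8859-1").split("\r\n")   # header section is ASCII-oriented
          return headers, body

      headers, body = split_http(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nhi")
      print(headers)    # ['HTTP/1.1 200 OK', 'Content-Length: 2']
      print(body)       # b'hi'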

    Read the article

  • Writing Lucene StandardAnalyzer results to text file with OutputStreamWriter

    - by user3693192
    I'm getting ONLY the last result written to "outputStreamFile.txt" and can't figure out how to revise the code so that ALL results are written to the text file. Sample input text: "1st line of text\n" "2nd line of text \n" results in only the 2nd line being written (and not the 1st line), as: "2nd line text\n" private static void analyze(String text) throws IOException { analyzer = new StandardAnalyzer(Version.LUCENE_30); Reader r = new StringReader(text); TokenStream ts = (TokenStream) analyzer.tokenStream("", r); TermAttribute term = ts.addAttribute(TermAttribute.class); File outfile = new File("C:\\Users\\Desktop\\outputStreamFile.txt"); FileOutputStream fileOutputStream = new FileOutputStream(outfile); OutputStreamWriter outputStreamWriter = new OutputStreamWriter(fileOutputStream, "UTF8"); while(ts.incrementToken()) { //System.out.print(term.term() + " "); outputStreamWriter.write(term.term().toString() + "\r\n"); } outputStreamWriter.close(); }

    Read the article

  • Table character encoding - exception in application

    - by zgnilec
    I have this code: CREATE TABLE IF NOT EXISTS Person ( name varchar(24) ... ) CHARACTER SET utf8 COLLATE utf8_polish_ci; This works OK in my application, but I read that if someone puts into the name field a string containing a character whose code is greater than 127, the database will use 2 bytes (or more) to store that character. So I thought I would change the character set to utf16: CHARACTER SET utf16 COLLATE utf16_polish_ci; But now when I run my application, an exception appears: KeyNotFoundException. It appears exactly at these instructions: MySqlCommand komenda = baza.Polaczenie.CreateCommand (); komenda.CommandText = zapytanie; MySqlDataReader dr = komenda.ExecuteReader (); // HERE, at the ExecuteReader method if (dr.Read ()) ... 1) Has anyone had a similar problem? 2) Any idea how to always use 2 bytes/char in a database field?

    Read the article

  • What does addListener do in node.js?

    - by Jeffrey
    I am trying to understand the purpose of addListener in node.js. Can someone explain please? Thanks! A simple example would be: var tcp = require('tcp'); var server = tcp.createServer(function (socket) { socket.setEncoding("utf8"); socket.addListener("connect", function () { socket.write("hello\r\n"); }); socket.addListener("data", function (data) { socket.write(data); }); socket.addListener("end", function () { socket.write("goodbye\r\n"); socket.end(); }); }); server.listen(7000, "localhost");

    Read the article

  • Replacing whitespace with sed in a CSV (to use w/ postgres copy command)

    - by Wells
    I iterate through a collection of CSV files in bash, running: iconv --from-code=ISO-8859-1 --to-code=UTF-8 ${FILE} | \ sed -e 's/\"//g' | \ sed -e 's/, /,/g' \ > ${FILE}.utf8 iconv converts the files to UTF-8, the first sed call removes the double-quote characters, and the final sed call is supposed to remove leading and trailing whitespace around the commas. HOWEVER, I still have a line like this in the saved file: FALSE,,,, 2.40,, The COPY command in Postgres is kind of dumb, so it thinks " 2.40" is not valid syntax for a numeric value. Where am I going wrong with my processing of the CSV file?
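
    The stray blank probably survives because there is more than one space after the comma and s/, /,/g only removes one of them; something like s/, */,/g would be the sed-level fix. An alternative sketch that strips every field with Python's csv module instead of chained sed calls (file names here are made up):

      import csv

      with open("input.csv", newline="", encoding="iso-8859-1") as src, \
           open("output.utf8.csv", "w", newline="", encoding="utf-8") as dst:
          writer = csv.writer(dst)
          for row in csv.reader(src):
              # Strip leading/trailing whitespace from every field before COPY sees it.
              writer.writerow(field.strip() for field in row)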

    Read the article
