Search Results

Search found 1003 results on 41 pages for 'utf8'.

Page 26/41

  • C++ char* returned via SWIG breaks in Python 3.0

    - by gpliu3
    Our C++ lib works fine with Python 2.4 using SWIG, returning a C++ char* back to a Python str. But this solution hits a problem in Python 3.0; the error is: Exception=(, UnicodeDecodeError('utf8', b"\xb6\x9d\xa.....", 0, 1, 'unexpected code byte') Our definition is like this (working fine in Python 2.4): void cGetPubModulus( void* pSslRsa, char* cMod, int* nLen ); %include "cstring.i" %cstring_output_withsize( char* cMod, int* nLen ); I suspect SWIG is doing a bytes-to-str conversion automatically. In Python 2.4 it can be implicit, but in Python 3.0 it's no longer allowed. Anyone got a good idea? Thanks

    Read the article

  • A reasonable way to add attributes to an XML root element in C#

    - by DrLazer
    The function "WriteStartElement" does not return anything. I find this a little bizzare. So up until now I have been doing it like this. XmlDocument xmlDoc = new XmlDocument(); XmlTextWriter xmlWriter = new XmlTextWriter(m_targetFilePath, System.Text.Encoding.UTF8); xmlWriter.Formatting = Formatting.Indented; xmlWriter.WriteProcessingInstruction("xml", "version='1.0' encoding='UTF-8'"); xmlWriter.WriteStartElement("client"); xmlWriter.Close(); xmlDoc.Load(m_targetFilePath); XmlElement root = xmlDoc.DocumentElement; Saving the doc, then reloading it to get hold of the start element so i can write attributes to it. Does anybody know the correct way of doing this because I'm pretty sure what I'm doing isn't right. I tried to use xmlWriter.AppendChild() but it doesnt seem to write out anything. :(

    Read the article

  • Table character encoding - exception in application

    - by zgnilec
    I have this code: CREATE TABLE IF NOT EXISTS Person ( name varchar(24) ... ) CHARACTER SET utf8 COLLATE utf8_polish_ci; This works OK in my application, but I read that if someone puts a string containing a character whose code is greater than 127 into the name field, the database will use 2 bytes (or more) to store that character. So I thought I would change the character set to utf16: CHARACTER SET utf16 COLLATE utf16_polish_ci; But now when I run my application an exception appears: KeyNotFoundException. It appears exactly at these instructions: MySqlCommand komenda = baza.Polaczenie.CreateCommand (); komenda.CommandText = zapytanie; MySqlDataReader dr = komenda.ExecuteReader (); // HERE, at the ExecuteReader method if (dr.Read ()) ... 1) Has anyone had a similar problem? 2) Any idea how to always use 2 bytes/char in a database field?
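
    A guess, not verified against the question's Connector/NET version: older MySQL Connector/NET builds look up column character sets in an internal table and can throw KeyNotFoundException for sets they don't recognise, such as utf16. A sketch of the usual workaround is to keep the table in utf8 (which already stores each character in as many bytes as it needs, so nothing is gained by forcing 2 bytes/char) and declare the connection charset explicitly; the server, database and credentials below are placeholders:

      using MySql.Data.MySqlClient;

      class ConnectDemo
      {
          static void Main()
          {
              var builder = new MySqlConnectionStringBuilder
              {
                  Server = "localhost",      // placeholder connection details
                  Database = "mydb",
                  UserID = "user",
                  Password = "secret",
                  CharacterSet = "utf8"      // have the driver talk utf8 to the server
              };
              using (var connection = new MySqlConnection(builder.ConnectionString))
              {
                  connection.Open();
                  // ... CreateCommand()/ExecuteReader() as in the question
              }
          }
      }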

    Read the article

  • Python input UnicodeDecodeError

    - by The man on the Clapham omnibus
    Python 3.x: >>> a = input() hope >>> a 'hope' >>> b = input() håpe >>> b 'håpe' >>> c = input() start typing hå... delete using backspace... and change to hope Traceback (most recent call last): File "<stdin>", line 1, in <module> UnicodeDecodeError: 'utf8' codec can't decode byte 0xc3 in position 1: invalid continuation byte >>> The situation is not terrible, and I am working around it, but I find it strange that the bytes get messed up when deleting. Has anyone else experienced this? The terminal history shows that what I entered was h?ope. Any ideas? In the script that uses this, I do import readline to give command-line history.

    Read the article

  • Environment variable (NLS_LANG) value altered in Java process?

    - by Ralkie
    This was noticed in a legacy Java application (JRE 1.4 on HP-UX). A parent process (shell script S1) starts the Java process, which in turn starts a child process (shell script S2). Schematically: S1 -> Java -> S2. NB! The Java application connects to an Oracle DB using the OCI driver. What is strange here is that the process running S1 has the environment variable NLS_LANG set to american_america.BLT8MSWIN1257, Java spawns S2 using: Runtime.getRuntime().exec(cmd); and S2 shows that NLS_LANG is set to american_america.UTF8 (!) This happens on a limited-access environment (production); I was not able to reproduce the problem on Linux with JRE 1.5. AFAIK, a Java process should inherit the environment from its parent (S1) and should pass all environment variables to its child S2 (since the single-argument exec call was used). However, that does not seem to be the case. Any ideas why NLS_LANG appears to be altered?

    Read the article

  • SQL select all items of an owner from an item-to-owner table

    - by kdobrev
    I have a table bike_to_owner and I would like to select the items currently owned by a specific user. The table structure is: CREATE TABLE IF NOT EXISTS `bike_to_owner` ( `bike_id` int(10) unsigned NOT NULL, `user_id` int(10) unsigned NOT NULL, `last_change_date` date NOT NULL, PRIMARY KEY (`bike_id`,`user_id`,`last_change_date`) ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci; In the user's profile page I would like to display all of his/her current possessions. I wrote this statement: SELECT `bike_id`,`user_id`,max(last_change_date) FROM `bike_to_owner` WHERE `user_id` = 3 GROUP BY `last_change_date` but I'm not quite sure it works correctly in all cases. Can you please verify this is correct and, if not, suggest something better? Using PHP/MySQL. Thanks in advance!

    Read the article

  • Diamonds with question marks

    - by hokkaido
    Hi, I'm getting these little diamonds with question marks in them in my HTML attributes when I present data from my database. I'm using EPiServer and a few custom properties. This is the information I've gathered: I save my data as an XML document, since I use custom EPiServer properties which need more than one defined value. This is saved as UTF-8. Only attributes in element tags have this problem; for example, align=left becomes align=?left?. There is no " character there, but I get the diamonds anyway. If I use " outside an element, it works and shows correctly. Any clues?

    Read the article

  • How to handle a large table in MySQL?

    - by Frantz Miccoli
    I have a database used to store items and properties about these items. The number of properties is extensible, so there is a join table to store each property value associated with an item: CREATE TABLE `item_property` ( `property_id` int(11) NOT NULL, `item_id` int(11) NOT NULL, `value` double NOT NULL, PRIMARY KEY (`property_id`,`item_id`), KEY `item_id` (`item_id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci; This database has two goals: storing (first priority; it has to be very quick, as I would like to perform many inserts, hundreds, in a few seconds) and retrieving data (selects using item_id and property_id), which is a second priority; it can be slower, but not too slow, because that would ruin my use of the DB. Currently this table holds 1.6 billion entries and a simple count can take up to 2 minutes... Inserting isn't fast enough to be usable. I'm using Zend_Db to access my data and would really be happy if you don't suggest that I develop any PHP-side part. Thanks for your advice!

    Read the article

  • Java Micro Edition (J2ME) - Update Record using recordstore enumeration

    - by Garbit
    Hi there, I have a record store of items which have (name, quantity, owner, status). Now when the user triggers an event I want to set the status of all items in my record store to "Purchased": re = shoppingListStore.enumerateRecords(null, null, false); while (re.hasNextElement()) { // read current values of item byte [] itemRecord = re.nextRecord(); // deserialise byte array newItemObject.fromByteArray(itemRecord); // set item status to purchased newItemObject.setItemStatus("Purchased"); // create a new byte array by calling the newItemObject.toByteArray() method, which returns a byte array of the object (using UTF8-encoded strings) byte[] itemData = newItemObject.toByteArray(); // add new byte array to shopping list store shoppingListStore.setRecord(re.nextRecordId(), itemData, 0, itemData.length); } However I am overwriting the next record (by using nextRecordId); I've tried using nextRecordId - 1, but obviously this is out of bounds on the first one. Hope you can help. Many thanks, andy

    Read the article

  • Zend Framework multiple databases

    - by Uffo
    I'm currently using only one database with Zend Framework, but now I have to add ONE MORE. I'm using this code right now: public static function setupDatabase() { $config = self::$registry->configuration; $db = Zend_Db::factory($config->db->adapter, $config->db->toArray()); $db->query("SET NAMES 'utf8'"); self::$registry->database = $db; Zend_Db_Table::setDefaultAdapter($db); } What code do I need to write in order to use ONE MORE database, and how will I reference it when I need to make some queries? Best Regards!

    Read the article

  • UploadFileAsync not asynchronous?

    - by a2h
    Aight, did a bit of Googling and searching here; the only related question I found was this one, although its only answer wasn't marked as accepted, is old, and is confusing. My problem is basically what I've said in the title: the GUI freezes while the upload is in progress. My code: // stuff above snipped public partial class Form1 : Form { WebClient wcUploader = new WebClient(); public Form1() { InitializeComponent(); wcUploader.UploadFileCompleted += new UploadFileCompletedEventHandler(UploadFileCompletedCallback); wcUploader.UploadProgressChanged += new UploadProgressChangedEventHandler(UploadProgressCallback); } private void button1_Click(object sender, EventArgs e) { if (openFileDialog1.ShowDialog() == DialogResult.OK) { string toUpload = openFileDialog1.FileName; wcUploader.UploadFileAsync(new Uri("http://anyhub.net/api/upload"), "POST", toUpload); } } void UploadFileCompletedCallback(object sender, UploadFileCompletedEventArgs e) { textBox1.Text = System.Text.Encoding.UTF8.GetString(e.Result); } void UploadProgressCallback(object sender, UploadProgressChangedEventArgs e) { textBox1.Text = (string)e.UserState + "\n\n" + "Uploaded " + e.BytesSent + "/" + e.TotalBytesToSend + "b (" + e.ProgressPercentage + "%)"; } }
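
    For what it's worth, a sketch of an alternative that keeps the upload work off the UI thread by using HttpClient with async/await. It assumes the same form controls (openFileDialog1, textBox1) and the same endpoint as the question; the multipart field name "file" is a guess:

      using System;
      using System.IO;
      using System.Net.Http;
      using System.Windows.Forms;

      public partial class Form1 : Form
      {
          static readonly HttpClient client = new HttpClient();

          // async void is acceptable here because this is a UI event handler.
          private async void button1_Click(object sender, EventArgs e)
          {
              if (openFileDialog1.ShowDialog() != DialogResult.OK) return;

              using (var content = new MultipartFormDataContent())
              using (var fileStream = File.OpenRead(openFileDialog1.FileName))
              {
                  content.Add(new StreamContent(fileStream), "file",
                              Path.GetFileName(openFileDialog1.FileName));
                  HttpResponseMessage response =
                      await client.PostAsync("http://anyhub.net/api/upload", content);
                  textBox1.Text = await response.Content.ReadAsStringAsync();
              }
          }
      }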

    Read the article

  • Turkish characters are not displayed correctly

    - by tfeseas
    The MySQL database uses utf-8 encoding and the data are stored correctly. I use a SET NAMES utf8 query to make sure the data returned are utf-8 encoded. All variables from the database work fine as long as the header charset is utf-8, but the static HTML characters do not work properly. When I set the header charset to ISO-8859-9, the variables are displayed differently while the HTML characters work OK. Can anyone help me? <?php header('Content-Type: text/html; charset=ISO-8859-9'); ?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head><title>noname</title> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />

    Read the article

  • SQL Syntax Error 1064

    - by 01010011
    Hi, I keep getting the following error message: ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ''isbn10','isbn13','title','edition','author_f_name','author_m_name','author_l_na' at line 1 when trying to populate my MySQL database from the command line with the following command: source C:\myFilePath\myFileName.sql Here is an excerpt from my mysqldump (showing the table structure for book). Where did I go wrong? Any assistance will be appreciated: -- -- Table structure for table book DROP TABLE IF EXISTS book; /*!40101 SET @saved_cs_client = @@character_set_client */; /*!40101 SET character_set_client = utf8 */; CREATE TABLE book ( book_id int(11) NOT NULL AUTO_INCREMENT, isbn10 char(20) DEFAULT NULL, isbn13 char(20) DEFAULT NULL, title char(20) DEFAULT NULL, edition char(20) DEFAULT NULL, author_f_name char(20) DEFAULT NULL, author_m_name char(20) DEFAULT NULL, author_l_name char(20) DEFAULT NULL, cond enum('as new','very good','good','fair','poor') DEFAULT NULL, price decimal(8,2) DEFAULT NULL, genre char(20) DEFAULT NULL, PRIMARY KEY (book_id) ) ENGINE=MyISAM DEFAULT CHARSET=latin1; /*!40101 SET character_set_client = @saved_cs_client */; -- -- Dumping data for table book

    Read the article

  • Writing Lucene StandardAnalyzer results to text file with OutputStreamWriter

    - by user3693192
    I'm getting ONLY the last result written to "outputStreamFile.txt" and can't figure out how to revise the code so that ALL results are written to the text file. Sample input text: "1st line of text\n" "2nd line of text\n" results in only the 2nd line being written (and not the 1st line), as: "2nd line text\n" private static void analyze(String text) throws IOException { analyzer = new StandardAnalyzer(Version.LUCENE_30); Reader r = new StringReader(text); TokenStream ts = (TokenStream) analyzer.tokenStream("", r); TermAttribute term = ts.addAttribute(TermAttribute.class); File outfile = new File("C:\\Users\\Desktop\\outputStreamFile.txt"); FileOutputStream fileOutputStream = new FileOutputStream(outfile); OutputStreamWriter outputStreamWriter = new OutputStreamWriter(fileOutputStream, "UTF8"); while(ts.incrementToken()) { //System.out.print(term.term() + " "); outputStreamWriter.write(term.term().toString() + "\r\n"); } outputStreamWriter.close(); }

    Read the article

  • Oracle SQL Developer is truncating my results

    - by nont
    I'm calling a stored function like this: select XML_INVOICE.GENERATE_XML_DOC('84200006823') from dual; The query results then show up in a table underneath, which I can right-click and select "Export Data" - XML: <?xml version='1.0' encoding='UTF8' ?> <RESULTS> <ROW> <COLUMN NAME="XML_INVOICE.GENERATE_XML_DOC('84200006823')"> <![CDATA[<xml>yada yada</xml><morexml>...]]></COLUMN> </ROW> </RESULTS> The problem is the "..." - SQL Developer (2.1.0.63 on Linux) is not showing all the data; it truncates the result and appends the ellipsis. This is of no use to me. How do I get it to export ALL of my data?

    Read the article

  • No download dialog with FileResult

    - by majkinetor
    I am returning a File result from an action triggered by a form post event, but I can't get a download dialog. Instead, if I use: return File(Encoding.UTF8.GetBytes(reportPath), "text/plain", "Report.csv"); I get the path to the file in the target div after the AJAX call completes. When I use return File(reportPath, "text/plain", "Report.csv"); I get the content of the file in the target div. Any thoughts? The action is declared as [HttpPost] public virtual ActionResult ExportFilter(Model model) { string outputFile = CreateReport(model); return File(....) }
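
    Reading the excerpt, two things seem to be going on: File(byte[], ...) treats the byte array as the file's content (so Encoding.UTF8.GetBytes(reportPath) sends the path text, not the file), and a response to an AJAX post lands in the page rather than triggering the browser's save dialog; the download dialog generally needs a normal navigation or plain form post to the action. A sketch under those assumptions (Model and CreateReport stand in for the question's own members):

      using System.Web.Mvc;

      public class Model { }   // placeholder for the question's view model

      public class ReportController : Controller
      {
          [HttpPost]
          public virtual ActionResult ExportFilter(Model model)
          {
              string outputFile = CreateReport(model);
              // Return the file's bytes (or use the path overload of File),
              // not the bytes of the path string itself.
              byte[] contents = System.IO.File.ReadAllBytes(outputFile);
              return File(contents, "text/csv", "Report.csv");
          }

          private string CreateReport(Model model)
          {
              // placeholder: the question's own report-generation code goes here
              return @"C:\temp\report.csv";
          }
      }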

    Read the article

  • MySQL ON DUPLICATE KEY UPDATE

    - by julio
    Hi -- I'm stuck on a MySQL query using ON DUPLICATE KEY UPDATE. I'm getting the error: MySQL Error: 1062 - Duplicate entry 'hr2461809-3' for key 'fname' The table looks like this: id int(10) NOT NULL default '0', picid int(10) unsigned NOT NULL default '0', fname varchar(255) NOT NULL default '', type varchar(5) NOT NULL default '.jpg', path varchar(255) NOT NULL default '', PRIMARY KEY (id), UNIQUE KEY fname (fname), KEY picid (propid) ) ENGINE=MyISAM DEFAULT CHARSET=utf8; And the query that's breaking is this: INSERT INTO images SET picid=732, fname='hr2461809-3', path='pictures/' ON DUPLICATE KEY UPDATE picid=732, fname='hr2461809-3', path='pictures/' I'm using a very similar query elsewhere in the app with no issues and I'm not sure why this one breaks. I expected that when the UNIQUE KEY on fname gets violated, it would simply update the row where the violation occurred. Thanks for any help

    Read the article

  • How to encode and decode Chinese characters?

    - by melaos
    I've tried googling around but wasn't able to find what charset the text below belongs to: 具有éœé›»ç”¢ç”Ÿè£ç½®ä¹‹å½±åƒè¼¸å…¥è£ç½® But by putting <meta http-equiv="Content-Type" Content="text/html; charset=utf-8"> in an HTML file containing that string, I was able to view the Chinese characters properly, which is: ??????????????? So my questions are: what tools can I use to detect the character set of this text, and how do I convert/encode/decode it properly in C#? Update: added some test code [TestMethod] public void TestMethod1() { string encodedText = "具有éœé›»ç”¢ç”Ÿè£ç½®ä¹‹å½±åƒè¼¸å…¥è£ç½®"; Encoding encoder = new UTF8Encoding(); byte[] postBytes = encoder.GetBytes(encodedText); postBytes = UTF8Encoding.Convert(Encoding.UTF8, Encoding.Unicode, postBytes); string decodedText = Encoding.Unicode.GetString(postBytes); Assert.AreNotEqual(encodedText, decodedText); } thanks
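
    That garbled text looks like UTF-8 bytes that were decoded with a single-byte codepage such as Windows-1252. A sketch of the usual reversal: re-encode the visible characters with Windows-1252 to recover the raw bytes, then decode those bytes as UTF-8. Note that this is only a full recovery if no bytes were dropped along the way (0x81, 0x8D, 0x8F, 0x90 and 0x9D have no Windows-1252 character, so original bytes with those values are already lost); the fragment below is one that happens to survive intact.

      using System;
      using System.Text;

      class MojibakeDemo
      {
          static void Main()
          {
              // A mojibake fragment from the question's text (truncated to a part that
              // round-trips cleanly).
              string garbled = "é›»ç”¢ç”Ÿ";

              // On .NET Core/.NET 5+, first call:
              // Encoding.RegisterProvider(CodePagesEncodingProvider.Instance);
              byte[] rawBytes = Encoding.GetEncoding("windows-1252").GetBytes(garbled);
              string recovered = Encoding.UTF8.GetString(rawBytes);

              Console.WriteLine(recovered); // prints 電產生 for this fragment
          }
      }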

    Read the article

  • Replacing whitespace with sed in a CSV (to use w/ postgres copy command)

    - by Wells
    I iterate through a collection of CSV files in bash, running: iconv --from-code=ISO-8859-1 --to-code=UTF-8 ${FILE} | \ sed -e 's/\"//g' | \ sed -e 's/, /,/g' \ > ${FILE}.utf8 iconv fixes the UTF-8 characters, the first sed call removes the double-quote characters, and the final sed call is supposed to remove the leading and trailing whitespace around the commas. HOWEVER, I still have a line like this in the saved file: FALSE,,,, 2.40,, The COPY command in postgres is kind of dumb, so it thinks " 2.40" is not valid syntax for a numeric value. Where am I going wrong with my processing of the CSV file? Thanks!

    Read the article

  • Socket receive buffer size

    - by Kanishka
    Is there a way to determine the receive buffer size of a TCP/IP socket in C#? I am sending a message to a server and expecting a response, but I am not sure of the receive buffer size. IPEndPoint ipep = new IPEndPoint(IPAddress.Parse("192.125.125.226"),20060); Socket server = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp); server.Connect(ipep); String OutStr= "49|50|48|48|224|48|129|1|0|0|128|0|0|0|0|0|4|0|0|32|49|50"; byte[] temp = OutStr.Split('|').Select(s => byte.Parse(s)).ToArray(); int byteCount = server.Send(temp); byte[] bytes = new byte[255]; int res=0; res = server.Receive(bytes); return Encoding.UTF8.GetString(bytes);
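
    In case it helps: Socket.ReceiveBufferSize reports the size of the OS receive buffer, which says nothing about how many bytes the server will actually send. A sketch of the usual pattern instead is to read in a loop until the peer closes the connection; protocols that keep the connection open need a length prefix or delimiter to know when a message ends.

      using System.IO;
      using System.Net.Sockets;
      using System.Text;

      static class SocketHelper
      {
          // Accumulates everything the peer sends until it shuts down its side
          // (Receive returns 0 on a graceful close).
          public static string ReceiveAll(Socket server)
          {
              var buffer = new byte[4096];
              using (var ms = new MemoryStream())
              {
                  int read;
                  while ((read = server.Receive(buffer)) > 0)
                  {
                      ms.Write(buffer, 0, read);
                  }
                  return Encoding.UTF8.GetString(ms.ToArray());
              }
          }
      }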

    Read the article

  • Storing apostrophes, exclamation marks, etc. in a MySQL database

    - by rein
    I changed from latin1 to utf8. Although all sorts of text were displaying fine, I noticed non-English characters were stored in the database as weird symbols. I spent a day trying to fix that, and now non-English characters display as non-English characters in the database and display the same in the browser. However, I noticed that apostrophes are stored as &#39; and exclamation marks as &#33;. Is this normal, or should they appear as ' and ! in the database instead? If so, what would I need to do to fix that?

    Read the article

  • How to convert to string and read data from a TCP packet

    - by salime
    I used SharpPcap to capture TCP packets. Now I want to reconstruct the HTTP packet from the TCP packets, but I don't know how. I read somewhere that I can find the start of the HTTP packet in the TCP data... I tried to convert the byte[] TCP data to a string using this code: string s = System.Text.Encoding.UTF8.GetString(tcp_pack.Data); but the string isn't readable - it's like a binary file opened with Notepad. Is that because the data is encrypted, or is the code incorrect? How can I reconstruct the HTTP packet from the TCP packets?
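
    Not a complete answer, but a sketch of the reassembly idea, assuming the per-segment sequence numbers and payloads have already been pulled out of the capture (the tuple input here is hypothetical, not a SharpPcap API): take the segments of one direction of the connection, order them by sequence number, drop retransmitted duplicates and empty segments, and concatenate the payloads. The result is readable HTTP only for plain port-80 traffic; HTTPS payloads are TLS-encrypted and will never decode to readable text, whatever encoding is used.

      using System.Collections.Generic;
      using System.Linq;

      static class TcpReassembly
      {
          // Naive reassembly: ignores wrapped sequence numbers and overlapping segments.
          public static byte[] Reassemble(IEnumerable<(uint Seq, byte[] Payload)> segments)
          {
              var seen = new HashSet<uint>();
              var ordered = segments
                  .Where(s => s.Payload.Length > 0 && seen.Add(s.Seq)) // skip empty/duplicate segments
                  .OrderBy(s => s.Seq)
                  .ToList();

              var bytes = new List<byte>();
              foreach (var segment in ordered)
                  bytes.AddRange(segment.Payload);
              return bytes.ToArray();
          }
      }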

    Read the article

  • Unique constraint with nullable column

    - by Álvaro G. Vicario
    I have a table that holds nested categories. I want to avoid duplicate names among same-level items (i.e., categories with the same parent). I've come up with this: CREATE TABLE `category` ( `category_id` int(10) unsigned NOT NULL AUTO_INCREMENT, `category_name` varchar(100) NOT NULL, `parent_id` int(10) unsigned DEFAULT NULL, PRIMARY KEY (`category_id`), UNIQUE KEY `category_name_UNIQUE` (`category_name`,`parent_id`), KEY `fk_category_category1` (`parent_id`,`category_id`), CONSTRAINT `fk_category_category1` FOREIGN KEY (`parent_id`) REFERENCES `category` (`category_id`) ON DELETE SET NULL ON UPDATE CASCADE ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_spanish_ci Unluckily, category_name_UNIQUE does not enforce my rule for root-level categories (those where parent_id is NULL). Is there a reasonable workaround?

    Read the article

  • Flask-Admin doesn't show all fields

    - by twoface88
    I have a model like this: class User(db.Model): __tablename__ = 'users' __table_args__ = {'mysql_engine' : 'InnoDB', 'mysql_charset' : 'utf8'} id = db.Column(db.Integer, primary_key=True) username = db.Column(db.String(80), unique=True) email = db.Column(db.String(120), unique=True) _password = db.Column('password', db.String(80)) def __init__(self, username = None, email = None, password = None): self.username = username self.email = email self._set_password(password) def _set_password(self, password): self._password = generate_password_hash(password) def _get_password(self): return self._password def check_password(self, password): return check_password_hash(self._password, password) password = db.synonym("_password", descriptor=property(_get_password, _set_password)) def __repr__(self): return '<User %r>' % self.username And I have a ModelView: class UserAdmin(sqlamodel.ModelView): searchable_columns = ('username', 'email') excluded_list_columns = ['password'] list_columns = ('username', 'email') form_columns = ('username', 'email', 'password') But no matter what I do, Flask-Admin doesn't show the password field when I'm editing user info. Is there any way, even just to edit the hash? UPDATE: https://github.com/mrjoes/flask-admin/issues/78

    Read the article

  • What does addListener do in node.js?

    - by Jeffrey
    I am trying to understand the purpose of addListener in node.js. Can someone explain please? Thanks! A simple example would be: var tcp = require('tcp'); var server = tcp.createServer(function (socket) { socket.setEncoding("utf8"); socket.addListener("connect", function () { socket.write("hello\r\n"); }); socket.addListener("data", function (data) { socket.write(data); }); socket.addListener("end", function () { socket.write("goodbye\r\n"); socket.end(); }); }); server.listen(7000, "localhost");

    Read the article
