Search Results

Search found 17841 results on 714 pages for 'non ascii characters'.

Page 403 of 714

  • String encoding for Twitter

    - by Miguel Ribeiro
    I'm developing a program that sends tweets. I have this piece of code: StringBuilder sb = new StringBuilder("Recomendo "); sb.append(lblName.getText()); sb.append(" no canal "+lblCanal.getText()); sb.append(" no dia "+date[2]+"/"+date[1]+"/"+date[0]); sb.append(" às "+time[0]+"h"+time[1]); byte[] defaultStrBytes = sb.toString().getBytes("ISO-8859-1"); String encodedString = new String(defaultStrBytes, "UTF-8"); But when I send it as a tweet I get the "?" symbol or other strange characters in place of accented letters like "à". I've also tried with only String encodedString = new String(sb.toString().getBytes(), "UTF-8"); // also tried with ISO-8859-1, but the problem remains...
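
    The snippet builds the bytes as ISO-8859-1 and then decodes them as UTF-8, which is typically where accented characters turn into "?". A minimal Java sketch of the mismatch, assuming the goal is simply to keep the tweet text intact (the call that actually posts the tweet is omitted):

```java
import java.io.UnsupportedEncodingException;

public class TweetEncoding {
    public static void main(String[] args) throws UnsupportedEncodingException {
        String tweet = "Recomendo X no canal Y às 21h00";

        // Broken: bytes produced as ISO-8859-1 but decoded as if they were UTF-8.
        byte[] latin1 = tweet.getBytes("ISO-8859-1");
        String garbled = new String(latin1, "UTF-8");
        System.out.println(garbled);     // "às" comes out mangled

        // Consistent: a Java String is already Unicode; convert to bytes only
        // at the boundary, and use the same charset on both sides.
        byte[] utf8 = tweet.getBytes("UTF-8");
        String roundTrip = new String(utf8, "UTF-8");
        System.out.println(roundTrip);   // accents preserved
    }
}
```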

    Read the article

  • IE7 not digesting JSON: "parse error"

    - by Kenny Leu
    While trying to GET a JSON, my callback function is NOT firing. $.ajax({ type:"GET", dataType:'json', url: myLocalURL, data: myData, success: function(returned_data){alert('success');} }); The strangest part of this is that my JSONs validate on JSONlint, yet this ONLY fails on IE7... it works in Safari, Chrome, and all versions of Firefox. If I use 'error', then it reports "parseError"... even though it validates! Is there anything that I'm missing? Does IE7 not process certain characters or data structures (my data doesn't have anything non-alphanumeric, but it DOES have nested JSONs)? I have tons of other AJAX calls that all work (even in IE7), with the exception of THIS call. An example data return here is: {"question":{ "question_id":"19", "question_text":"testing", "other_crap":"none" }, "timestamp":{ "response":"answer", "response_text":"the text here" } } I am completely at a loss. Hopefully someone has some insight into what's going on... thank you!
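
    One hedged way to see IE7's actual complaint (myLocalURL and myData come from the question; this is a debugging aid, not a known fix, and $.parseJSON needs jQuery 1.4.1+) is to fetch the body as plain text and parse it manually, so the raw response can be inspected for a BOM, an HTML error page, or stray characters:

```javascript
$.ajax({
    type: "GET",
    dataType: "text",              // fetch the raw body instead of letting jQuery parse it
    url: myLocalURL,
    data: myData,
    success: function (raw) {
        try {
            // IE7 has no native JSON object, so go through jQuery's parser.
            var data = $.parseJSON(raw);
            alert("success");
        } catch (e) {
            // Inspect the first characters for a BOM, an HTML error page,
            // or anything else IE7 might be choking on.
            alert("parse failed: " + e.message + "\nstarts with: " + raw.substring(0, 40));
        }
    },
    error: function (xhr, status) {
        alert("request failed: " + status);
    }
});
```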

    Read the article

  • Longer Form Fields in Drupal

    - by Slinger Jansen
    I have a really silly problem that has cost me a load of time already. I have created a content template with a URL in there. When I look at the HTML code for it, I see a big fat "maxlength=256" in the form tag. I'd like to expand the length of this field, because my customer wishes to enter really long links (over 500 characters). Any idea how I can change it? When I do a generic search through the code I see so many occurrences of 256, but the length might just as well be in the database somewhere. I have of course made the database field a longer varchar (1024 sounded poetic to me), so that's something I don't have to worry about. I think it's silly, but the customer's always right, as we know. I am using Drupal 6.14.
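
    If the 256 limit comes from the form element rather than the column, a hook_form_alter() in a small custom module can raise it. This is only a hedged Drupal 6 sketch: the module, form ID and field key are placeholders, and the exact nesting under the field depends on the widget the content template uses:

```php
<?php
// Hedged Drupal 6 sketch: raise the #maxlength the node form imposes on a
// text field. 'mymodule', 'mytype_node_form' and 'field_long_url' are
// placeholders; adjust the nesting to match the actual widget structure.
function mymodule_form_alter(&$form, $form_state, $form_id) {
  if ($form_id == 'mytype_node_form' && isset($form['field_long_url'])) {
    $form['field_long_url'][0]['value']['#maxlength'] = 1024;
  }
}
```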

    Read the article

  • Latin-based language segmentation grammatical rules

    - by pravin
    Hi folks, I am working on a feature that applies (grammatical) language segmentation rules for Latin-based languages (English currently). Right now I am at the stage of breaking the user's input into sentences, e.g.: "I am working in language translation." "I have used Google MT API for this." In the example above I break the sentences at the full stop (.). That is the normal case, where I split on the dot, but there are a number of other characters that can end a sentence (. ! ? etc.). I have SRX rules for segmentation. My questions are: 1) Is there any reference I can use for working out my language segmentation rules? 2) Is there a forum on language segmentation where I can discuss this efficiently? Please let me know if anybody knows about this. Thanks a lot.
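
    As a baseline while the SRX rules are being worked out, a hedged Java sketch: the JDK's java.text.BreakIterator already does locale-aware sentence segmentation and copes with '.', '!' and '?':

```java
import java.text.BreakIterator;
import java.util.Locale;

public class SentenceSplit {
    public static void main(String[] args) {
        String text = "I am working in language translation. I have used Google MT API for this! Does it work?";
        BreakIterator it = BreakIterator.getSentenceInstance(Locale.ENGLISH);
        it.setText(text);
        int start = it.first();
        // Walk the boundaries and print each sentence on its own line.
        for (int end = it.next(); end != BreakIterator.DONE; start = end, end = it.next()) {
            System.out.println(text.substring(start, end).trim());
        }
    }
}
```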

    Read the article

  • Should I convert overlong UTF-8 strings to their shortest normal form?

    - by Grant McLean
    I've just been reworking my Encoding::FixLatin Perl module to handle overlong UTF-8 byte sequences and convert them to the shortest normal form. My question is quite simply "is this a bad idea"? A number of sources (including this RFC) suggest that any over-long UTF-8 should be treated as an error and rejected. They caution against "naive implementations" and leave me with the impression that these things are inherently unsafe. Since the whole purpose of my module is to clean up messy data files with mixed encodings and convert them to nice clean UTF-8, this seems like just one more thing I can clean up so the application layer doesn't have to deal with it. My code does not concern itself with any semantic meaning the resulting characters might have; it simply converts them into a normalised form. Am I missing something? Is there a hidden danger I haven't considered?
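
    For context, a small Java illustration (Java only because it makes decoder behaviour easy to show) of why the RFC wants overlong forms rejected: the classic example is 0xC0 0xAF, an overlong encoding of '/' that has historically been used to smuggle path separators past validation. A normaliser that collapses such sequences to the shortest form takes on responsibility for that class of input:

```java
import java.nio.ByteBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.StandardCharsets;

public class OverlongDemo {
    public static void main(String[] args) {
        // 0xC0 0xAF is an overlong two-byte encoding of '/', whose shortest
        // form is the single byte 0x2F; a strict decoder refuses it.
        byte[] overlong = {(byte) 0xC0, (byte) 0xAF};
        try {
            StandardCharsets.UTF_8.newDecoder().decode(ByteBuffer.wrap(overlong));
            System.out.println("decoded");
        } catch (CharacterCodingException e) {
            System.out.println("rejected as malformed: " + e);
        }
    }
}
```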

    Read the article

  • Track unicode words from Twitter using Ruby and the Tweetstream API

    - by Régis B.
    I am trying to track a set of keywords from Twitter by using the Streaming API (can't post the link here because of spam limitations: google twitter streaming API). I am doing this inside Ruby, using the TweetStream gem: http://bit.ly/cODAWI The problem I have is that I want to track keywords that contain some unicode/UTF-8 characters. For instance: require 'rubygems' require 'tweetstream' TweetStream::Client.new("my_user_name", "my_password").track("é") do |s| puts s.text end (you can try it out, provided you installed the tweetstream and json gems) This piece of code does not print anything, while replacing "é" with "e" outputs a bunch of tweets continuously. I did not find any reliable documentation about Unicode in Ruby, so I have no idea where the problem comes from. Thanks for your help!
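
    A hedged sketch of the usual things to check, depending on the Ruby version (the CGI.escape line just shows what properly encoded UTF-8 looks like on the wire; it is not necessarily what the gem sends):

```ruby
# encoding: utf-8
# Make sure the keyword really reaches the library as UTF-8: under Ruby 1.8,
# $KCODE governs how strings are treated; under 1.9+, check String#encoding.
require 'rubygems'
require 'cgi'

keyword = "é"
if keyword.respond_to?(:encoding)       # Ruby 1.9+
  keyword = keyword.encode("UTF-8")
  puts keyword.encoding                  # => UTF-8
else
  $KCODE = "u"                           # Ruby 1.8: treat strings as UTF-8
end

# The streaming API ultimately wants the tracked term as URL-encoded UTF-8:
puts CGI.escape(keyword)                 # => "%C3%A9"
```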

    Read the article

  • CodeIgniter: A nice straightforward tutorial on how to build a reset/forgotten password feature?

    - by Psychonetics
    I've built a full sign-up system with user account activation, login, validation, captcha, etc. To complete this I now need to implement a forgot-password/reset-password feature. I have created one function that generates a random 8-character password, another method that takes that random password and applies SHA-1 hashing, and one that takes the hashed password and stores it in a table in the database. I will keep these methods to one side as they might come in handy later on, but for now I would like to know if anyone can point me to a nice tutorial for creating a password reset feature for my website. Thanks in advance.
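
    Not a tutorial, but a hedged CodeIgniter-style sketch of the usual flow (controller, table, column and route names are assumptions, and the email library and url helper are presumed configured): store a one-time token, email a link containing it, and let that link's controller collect and hash the new password:

```php
<?php
// Hedged sketch of a "forgot password" flow; names are placeholders.
class Forgot extends CI_Controller {

    public function index() {
        $email = $this->input->post('email');
        $token = sha1(uniqid(mt_rand(), true));   // one-time reset token

        // Remember the token against the account (assumed "users" table).
        $this->db->where('email', $email);
        $this->db->update('users', array('reset_token' => $token));

        // Mail a link containing the token.
        $this->load->library('email');
        $this->email->from('noreply@example.com');
        $this->email->to($email);
        $this->email->subject('Password reset');
        $this->email->message(site_url('forgot/reset/' . $token));
        $this->email->send();
    }

    public function reset($token) {
        // Look the token up; if it matches, show a "choose new password"
        // form, store the new salted+hashed password and clear the token.
    }
}
```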

    Read the article

  • Does C# have an equivalent to JavaScript's encodeURIComponent()?

    - by travis
    In JavaScript: encodeURIComponent("©√") == "%C2%A9%E2%88%9A" Is there an equivalent for C# applications? For escaping HTML characters I used: txtOut.Text = Regex.Replace(txtIn.Text, @"[\u0080-\uFFFF]", m => @"&#" + ((int)m.Value[0]).ToString() + ";"); But I'm not sure how to convert the match to the correct hexadecimal format that JS uses. For example this code: txtOut.Text = Regex.Replace(txtIn.Text, @"[\u0080-\uFFFF]", m => @"%" + String.Format("{0:x}", ((int)m.Value[0]))); returns "%a9%221a" for "©√" instead of "%C2%A9%E2%88%9A". It looks like I need to split the string up into bytes or something. Edit: This is for a Windows app; the only items available in System.Web are: AspNetHostingPermission, AspNetHostingPermissionAttribute, and AspNetHostingPermissionLevel.
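
    Uri.EscapeDataString, which lives in System and needs no System.Web reference, percent-encodes the UTF-8 bytes of each character, matching encodeURIComponent for this input; a small sketch:

```csharp
using System;

class Program
{
    static void Main()
    {
        string input = "\u00A9\u221A";                  // "©√"
        Console.WriteLine(Uri.EscapeDataString(input)); // %C2%A9%E2%88%9A
    }
}
```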

    Read the article

  • Overloading operator>> for case insensitive string

    - by TheSOFan
    Given the definition of ci_string from cppreference.com, how would we go about implementing operator>>? My attempts at it involved istream::read, but it doesn't seem to work (that is, gcount() properly counts the number of characters entered, but there is no output): #include <iostream> #include <cctype> #include <string> // ci_string definition goes here std::istream& operator>>(std::istream& in, ci_string& str) { return in.read(&*str.begin(), 4); } int main() { ci_string test_str; std::cin >> test_str; std::cout << test_str; return 0; }
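
    One hedged approach is not to write into the ci_string's buffer at all (reading into an empty string's begin() is undefined behaviour): read a whitespace-delimited token into a plain std::string and copy the characters across. The ci_char_traits below is abbreviated from the cppreference definition:

```cpp
#include <cctype>
#include <iostream>
#include <string>

// Abbreviated case-insensitive traits (the full version also defines find()).
struct ci_char_traits : public std::char_traits<char> {
    static bool eq(char a, char b) { return std::toupper(a) == std::toupper(b); }
    static bool lt(char a, char b) { return std::toupper(a) <  std::toupper(b); }
    static int compare(const char* a, const char* b, std::size_t n) {
        for (std::size_t i = 0; i < n; ++i) {
            if (lt(a[i], b[i])) return -1;
            if (lt(b[i], a[i])) return  1;
        }
        return 0;
    }
};
typedef std::basic_string<char, ci_char_traits> ci_string;

// Read a token into a std::string, then copy the characters; the two string
// types share char but not traits, so assign() with iterators does the copy.
std::istream& operator>>(std::istream& in, ci_string& str) {
    std::string tmp;
    if (in >> tmp) str.assign(tmp.begin(), tmp.end());
    return in;
}

int main() {
    ci_string test_str;
    std::cin >> test_str;
    // operator<< is only defined for matching traits, so go through c_str().
    std::cout << test_str.c_str() << '\n';
    return 0;
}
```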

    Read the article

  • Matching First Alphanumeric Character skipping (The |An? )

    - by TheLizardKing
    I have a list of artists, albums and tracks that I want to sort using the first letter of their respective names. The issue arises when I want to ignore "The ", "A ", "An " and various other non-alphanumeric characters (talking to you, "Weird Al" Yankovic and [dialog]). Django has a nice start '^(An?|The) +' but I want to ignore those and a few others of my choice. I am doing this in Django, using a MySQL db with utf8_bin collation. EDIT: Well, my fault for not mentioning this, but the database I am accessing is pretty much read-only. It's created and maintained by Amarok and I can't alter it without a whole mess of issues. That being said, the artist table has The Chemical Brothers listed as The Chemical Brothers, so I think I am stuck here. It will probably be slow, but that's not so much of a concern for me as it's a personal project.
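
    Since the Amarok tables are effectively read-only, one hedged option is to sort in Python with a key that strips a leading article or run of non-alphanumeric characters (the regex, and the Artist/name placeholders in the last comment, are assumptions):

```python
import re

# Strip a leading "The "/"A "/"An ", or a leading run of non-alphanumerics.
ARTICLE_RE = re.compile(r'^(?:the|an?)\s+|^[^0-9a-z]+', re.IGNORECASE)

def sort_key(name):
    return ARTICLE_RE.sub('', name.strip()).lower()

names = ['The Chemical Brothers', '"Weird Al" Yankovic', '[dialog]', 'Aphex Twin']
print(sorted(names, key=sort_key))
# ['Aphex Twin', 'The Chemical Brothers', '[dialog]', '"Weird Al" Yankovic']

# In a view, the same key can be applied to the read-only rows, e.g.
#   sorted(Artist.objects.all(), key=lambda a: sort_key(a.name))
# where Artist/name stand in for the actual model and column.
```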

    Read the article

  • How can I replace a line which contains only -------- with |||

    - by mimou
    I have something like: ------------------------------------------------------------------------ r2 | username | 2011-01-16 16:52:23 +0100 (Sun, 16 Jan 2011) | 1 line Changed paths: D /foo Removed foo ------------------------------------------------------------------------ r1 | username | 2011-01-16 16:51:03 +0100 (Sun, 16 Jan 2011) | 1 line Changed paths: A /foo created foo ------------------------------------------------------------------------ My target is to identify the file added by "username" on a specific date. Thus, I need the combination (username, 16 Jan 2011, A) to ensure that it is the right file, and then print foo. My idea is to: delete the whitespace, change the newlines into |, and get rid of the --------------- lines, replacing them with newlines; but the problem is that I couldn't replace the ------- since they end up mixed in with other characters. ------------------------------------------------------------------------ |r2|username|2011-01-1616:52:23+0100(Sun,16Jan2011)|1line|Changedpaths:|D/foo|Removedfoo| ------------------------------------------------------------------------ |r1|username|2011-01-1616:51:03+0100(Sun,16Jan2011)|1line|Changedpaths:|A/foo|createdfoo| ------------------------------------------------------------------------ So I thought it would be a good idea to start by replacing the --------------- with a special character like ||| and then change that character into a newline using awk FS=||| OFS=\n. Can anyone help me? Thanks.
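
    A couple of hedged one-liners (svn.log is a placeholder file name): anchoring the pattern to the whole line means only the separator lines are touched, and with GNU awk the dashed lines can serve directly as record separators, skipping the ||| step entirely:

```sh
# Replace lines made up entirely of dashes; dashes inside other text are untouched.
sed 's/^--*$/|||/' svn.log

# Or treat each dashed separator as a record separator (regex RS is a GNU awk
# feature) and print the "A " paths of entries matching the user and date.
awk 'BEGIN { RS = "-{10,}\n"; FS = "\n" }
     /username/ && /16 Jan 2011/ { for (i = 1; i <= NF; i++) if ($i ~ /^ *A \//) print $i }' svn.log
```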

    Read the article

  • Checking for uppercase/lowercase/numbers with jQuery

    - by user1725794
    Either I'm being really dense here or it's just the lack of sleep, but why doesn't this work? If I use the "or" operator it works for each separate test, but as soon as I change it to the "and" operator it stops working. I'm trying to test the password input of a form to see if it contains lowercase, uppercase and at least one number or symbol. I'm having a lot of trouble with this, so help would be lovely; here is the code I have. var upperCase= new RegExp('[^A-Z]'); var lowerCase= new RegExp('[^a-z]'); var numbers = new RegExp('[^0-9]'); if(!$(this).val().match(upperCase) && !$(this).val().match(lowerCase) && !$(this).val().match(numbers)) { $("#passwordErrorMsg").html("Your password must be between 6 and 20 characters. It must contain a mixture of upper and lower case letters, and at least one number or symbol."); } else { $("#passwordErrorMsg").html("OK") }
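
    A hedged corrected sketch: the patterns above are negated ([^A-Z] matches any character that is NOT uppercase), so combining them with && asserts the opposite of what's wanted. Testing for the presence of each class, plus the length, matches the error message (the symbol set below is an assumption; this slots into the same event handler as the original code):

```javascript
// Test for the presence of each class rather than the absence of others.
var hasUpper  = /[A-Z]/;
var hasLower  = /[a-z]/;
var hasNumSym = /[0-9!@#$%^&*()_+\-=\[\]{};':"\\|,.<>\/?]/;  // number or symbol

var value = $(this).val();
if (value.length < 6 || value.length > 20 ||
    !hasUpper.test(value) || !hasLower.test(value) || !hasNumSym.test(value)) {
    $("#passwordErrorMsg").html("Your password must be between 6 and 20 characters. It must contain a mixture of upper and lower case letters, and at least one number or symbol.");
} else {
    $("#passwordErrorMsg").html("OK");
}
```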

    Read the article

  • XML Parsing in Groovy strips attribute new lines

    - by Bill James
    I'm writing code where I retrieve XML from a web API, then parse that XML using Groovy. Unfortunately, it seems that both XmlParser and XmlSlurper for Groovy strip newline characters from the attributes of nodes when .text() is called. How can I get at the text of the attribute, including the newlines? Sample code: def xmltest = ''' <snippet> <preSnippet att1="testatt1" code="This is line 1 This is line 2 This is line 3" > <lines count="10" /> </preSnippet> </snippet>''' def parsed = new XmlParser().parseText( xmltest ) println "Parsed" parsed.preSnippet.each { pre -> println pre.attribute('code'); } def slurped = new XmlSlurper().parseText( xmltest ) println "Slurped" slurped.children().each { preSnip -> println preSnip.@code.text() } the output of which is: Parsed This is line 1 This is line 2 This is line 3 Slurped This is line 1 This is line 2 This is line 3
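
    This looks less like a Groovy bug than XML attribute-value normalization: per the XML 1.0 spec a conforming parser replaces literal newlines in attribute values with spaces before the application ever sees them. A hedged Groovy sketch showing the newlines surviving when the producer writes them as &#10; character references:

```groovy
// Newlines in attributes survive only as &#10; character references;
// literal newlines are normalised to spaces by the parser itself.
def xmltest = '''
<snippet>
  <preSnippet att1="testatt1" code="This is line 1&#10;This is line 2&#10;This is line 3">
    <lines count="10" />
  </preSnippet>
</snippet>'''

def parsed = new XmlParser().parseText(xmltest)
parsed.preSnippet.each { pre ->
    println pre.attribute('code')   // prints three lines
}
```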

    Read the article

  • Label in my table cell

    - by ven in Iphone world
    Hi, this is lak. Thank you for your fast feedback, but that did not work for me. I am using labels in a table: UILabel *label1 = (UILabel *) [cell viewWithTag:1]; label1.backgroundColor = [UIColor clearColor]; label1.text=aStation.station_name; label1.textColor = [UIColor colorWithRed:0.76 green:0.21 blue:0.07 alpha:1.0]; [label1 setFont:[UIFont fontWithName:@"Trebuchet MS" size:15]]; For this type of label I want to limit the number of characters. Hope I will get an answer.
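
    A hedged sketch of one way to cap the text before it reaches the label (label1 and aStation come from the code above; the 15-character limit and the ellipsis are assumptions):

```objc
// Truncate the string itself; lineBreakMode only controls how UILabel clips.
NSString *name = aStation.station_name;
NSUInteger maxLength = 15;
if ([name length] > maxLength) {
    name = [[name substringToIndex:maxLength] stringByAppendingString:@"…"];
}
label1.text = name;
label1.lineBreakMode = UILineBreakModeTailTruncation; // pre-iOS 6 constant
```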

    Read the article

  • Is it okay to truncate a SHA256 hash to 128 bits?

    - by Sunny Hirai
    MD5 and SHA-1 hashes have weaknesses against collision attacks. SHA-256 does not, but it outputs 256 bits. Can I safely take the first or last 128 bits and use that as the hash? I know it will be weaker (because it has fewer bits), but otherwise will it work? Basically I want to use this to uniquely identify files in a file system that might one day contain a trillion files. I'm aware of the birthday problem, and a 128-bit hash should yield about a 1 in a trillion chance, on a trillion files, that there would be two different files with the same hash. I can live with those odds. What I can't live with is if somebody could easily and deliberately insert a new file with the same hash and the same beginning characters of the file. I believe that in MD5 and SHA-1 this is possible.
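
    Mechanically the truncation itself is trivial; a hedged Java sketch, with the caveat that the birthday-bound reasoning above, not this code, is what carries the security argument:

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Arrays;

public class TruncatedHash {
    // First 128 bits (16 bytes) of a SHA-256 digest as a file identifier.
    static byte[] id128(byte[] fileContents) throws NoSuchAlgorithmException {
        byte[] full = MessageDigest.getInstance("SHA-256").digest(fileContents);
        return Arrays.copyOf(full, 16);
    }
}
```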

    Read the article

  • Getting a java.lang.OutOfMemoryError exception while running a MIDlet (using NetBeans)

    - by Jeeka
    I am writing a MIDlet (using NetBeans) which reads a file containing exactly 2400 lines (each line being 32 characters long), extracts a part of each line and puts the results in an array. I am doing the same for 11 such files (all files have exactly 2400 lines). The MIDlet runs fine while reading 6 files and putting them in 6 arrays. However, it stops while doing the same for the 7th file, throwing the following exception: TRACE: , startApp threw an Exception java.lang.OutOfMemoryError (stack trace incomplete) java.lang.OutOfMemoryError (stack trace incomplete) I have tried modifying the netbeans.conf file to increase the heap memory (as suggested by many forums and blogs), but nothing works for me. Here are the parameters that I modified in netbeans.conf: -J-Xss2m -J-Xms1024m -J-Xmx1024m -J-XX:PermSize=1024m -J-XX:MaxPermSize=1536m -J-XX:+UseConcMarkSweepGC -J-XX:+CMSClassUnloadingEnabled -J-XX:+CMSPermGenSweepingEnabled Can anyone please help me get out of this? I badly need this sorted out ASAP! Thanks in advance!
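
    netbeans.conf only sizes the IDE's own JVM; a MIDlet's heap is fixed by the device or emulator configuration. So the usual cure is to hold less in memory at once. A hedged CLDC-style sketch that keeps only the extracted slice of each line and reuses one buffer (the 8-character slice and the encoding are assumptions):

```java
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.Reader;
import java.util.Vector;

public class LineLoader {
    // Read one line at a time (CLDC has no BufferedReader) and keep only the
    // substring that is actually needed, not the whole 32-character line.
    static Vector loadFields(InputStream in) throws Exception {
        Vector fields = new Vector(2400);
        Reader r = new InputStreamReader(in, "ISO-8859-1");
        StringBuffer line = new StringBuffer(32);
        int c;
        while ((c = r.read()) != -1) {
            if (c == '\n') {
                if (line.length() >= 8) {
                    fields.addElement(line.toString().substring(0, 8));
                }
                line.setLength(0);          // reuse the buffer for the next line
            } else if (c != '\r') {
                line.append((char) c);
            }
        }
        r.close();
        return fields;
    }
}
```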

    Read the article

  • Compressing a hex string in Ruby/Rails

    - by PreciousBodilyFluids
    I'm using MongoDB as a backend for a Rails app I'm building. Mongo, by default, generates 24-character hexadecimal ids for its records to make sharding easier, so my URLs wind up looking like: example.com/companies/4b3fc1400de0690bf2000001/employees/4b3ea6e30de0691552000001 Which is not very pretty. I'd like to stick to the Rails URL conventions, but also leave these ids as they are in the database. I think a happy compromise would be to compress these hex ids into shorter strings over a larger character set, so they'd look something like: example.com/companies/3ewqkvr5nj/employees/9srbsjlb2r Then in my controller I'd reverse the compression, get the original hex id and use that to look up the record. My question is, what's the best way to convert these ids back and forth? I'd of course want them to be as short as possible, but also URL-safe and simple to convert. Thanks!
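
    A hedged Ruby sketch of the round trip: a 24-character hex id is just a large integer, so it can be rendered in a wider URL-safe alphabet (base 36 here) and restored, as long as leading zeros are padded back:

```ruby
# Convert a 24-character hex ObjectId to base 36 and back, losslessly.
def shorten(hex_id)
  hex_id.to_i(16).to_s(36)
end

def expand(short_id)
  short_id.to_i(36).to_s(16).rjust(24, "0")   # restore leading zeros
end

short = shorten("4b3fc1400de0690bf2000001")
puts short
puts expand(short)                            # => "4b3fc1400de0690bf2000001"
```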

    Read the article

  • MySQL import in phpmyadmin (CSV) chokes on quotes

    - by Andrew Swift
    I am trying to import a .csv file into a MySQL table via phpMyAdmin. The .csv file is separated by pipes, formatted like this: data|d'ata|d'a"ta|dat"a| data|"da"ta|data|da't'a| dat'a|data|da"ta"|da'ta| The data contains quotes. I have no control over the format in which I receive the data -- it is generated by a third party. The problem comes when there is a | followed by a double quote. I always get an "invalid field count in CSV input on line N" error. I am uploading the file from the import page, using Latin1, CSV, fields terminated by | and enclosed by ". I would like to just change the "enclosed by" character, but I keep getting "Invalid parameter for CSV import: Fields enclosed by". I have tried various characters with no success. How can I tell MySQL to accept this format in phpMyAdmin? Setting up these tables is the first step in writing a program that will use uploaded gzipped .csv files to maintain the catalog of an e-commerce site.
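
    If phpMyAdmin's CSV importer can't be told to use no enclosing character at all, a hedged workaround is to load the file with a LOAD DATA statement that simply omits the ENCLOSED BY clause, so embedded quotes stay literal (table, column and file names below are placeholders):

```sql
-- Pipes delimit fields; with no ENCLOSED BY clause, quotes are ordinary data.
LOAD DATA LOCAL INFILE 'catalog.csv'
INTO TABLE catalog_items
CHARACTER SET latin1
FIELDS TERMINATED BY '|'
LINES TERMINATED BY '\n'
(col1, col2, col3, col4);
```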

    Read the article

  • Multiline Find & Replace in Visual Studio

    - by hawbsl
    Can it be done? We're using either VS2005 or VS2008. I don't mean regular expressions - which have their place - but plain old text find & replace. I know we can do it (at a pinch) with regular expressions using the \n tag, but I'd prefer not to get tangled up in regex escape characters; plus there's a readability issue. If it can't be done, what plain and simple (free) alternative are people using that doesn't involve knocking up our own macro?

    Read the article

  • Which collation should I use to store these country names in MySQL?

    - by morpheous
    I am trying to store a list of countries in a MySQL database. I am having problems storing (non-English) names like these: São Tomé and Príncipe República de El Salvador They are stored with strange characters in the db (and therefore output strangely in my HTML pages). I have tried using different combinations of collations for the database and the MySQL connection collation. The "obvious" setting was to use utf8_unicode_ci for both the database and the connection information. To my utter surprise, that did not solve the problem. Does anyone know how to resolve this issue?
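
    Collation only governs comparison and sorting; garbled output usually means the column charset, the connection charset and the page charset disagree. A hedged checklist-style sketch (table name is a placeholder, and the HTML page must also be served as UTF-8):

```sql
-- Column and table stored as UTF-8.
CREATE TABLE countries (
  id   INT AUTO_INCREMENT PRIMARY KEY,
  name VARCHAR(100) CHARACTER SET utf8 COLLATE utf8_unicode_ci
) DEFAULT CHARSET=utf8;

-- Run on every connection before reading or writing, so the client and
-- server agree on the byte stream's encoding.
SET NAMES 'utf8';

INSERT INTO countries (name) VALUES ('São Tomé and Príncipe');
SELECT name FROM countries;
```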

    Read the article

  • Typing Text effect not working

    - by Anthony Garand
    I'm writing a script to take an array of strings, split each one into characters, and print them out to the screen. This is what I have, and for some reason it is not doing anything. Any ideas? function autowrite() { var write_text=["Your Memories","Your Thoughts","Your Photos"]; var split_text = Array(); var i; var c; for(i=0; i < write_text.length; i++) { split_text[i] = write_text[i].split(""); for(c=0; i < split_text.length[i]; i++) { alert(split_text[i][c]); } } }
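
    A hedged corrected sketch: the inner loop increments and tests i instead of c, and indexes split_text.length[i] instead of split_text[i].length (console.log is used here instead of alert so it doesn't pop up one dialog per character):

```javascript
function autowrite() {
    var write_text = ["Your Memories", "Your Thoughts", "Your Photos"];
    for (var i = 0; i < write_text.length; i++) {
        var split_text = write_text[i].split("");
        // Loop over the characters of THIS string, driven by c, not i.
        for (var c = 0; c < split_text.length; c++) {
            console.log(split_text[c]);
        }
    }
}
autowrite();
```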

    Read the article

  • How many bytes does Oracle use when storing a single character?

    - by Mr-sk
    I tried to look here: http://download.oracle.com/docs/cd/B19306_01/server.102/b14220/datatype.htm#i3253 I understand that I have to provide a string length for the column; I'm just not able to find out how many bytes Oracle uses when storing a character. My limit is 500 characters, so if it's 1 byte per character I can create the column with 500; if it's 2 bytes per character, then 1000, etc. Anyone have a link to the documentation, or know for certain? In case it matters, the SQL is being called from PHP, so these are PHP strings I'm inserting into the database. Thanks.
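
    The answer depends on the database character set (1 byte per character for single-byte sets such as WE8ISO8859P1, up to 4 bytes for AL32UTF8), so a hedged way to sidestep the arithmetic is to declare the column with character length semantics and check what the instance actually uses:

```sql
-- 500 characters regardless of how many bytes each one takes.
CREATE TABLE notes (
  body VARCHAR2(500 CHAR)
);

-- What character set is this database using, and how do byte and character
-- lengths differ for an accented string?
SELECT value FROM nls_database_parameters WHERE parameter = 'NLS_CHARACTERSET';
SELECT LENGTHB('São') AS bytes_used, LENGTH('São') AS chars_used FROM dual;
```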

    Read the article

  • Referencing file on disk from NSManagedObject

    - by Kamchatka
    What would be the best way to name a file associated with an NSManagedObject? The NSManagedObject will hold the URL to this file, but I need to create a unique filename for the file. Is there some kind of autoincrement id that I could use? Should I use mktemp (but it's not a temporary file), or try to convert the NSManagedObjectID to a filename? I fear there would be special characters which might cause problems. What would you suggest?
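
    A hedged pre-ARC sketch of the UUID approach (managedObject and the fileName attribute are placeholders): a freshly generated UUID is unique and contains only hex digits and dashes, so it is safe in a filename:

```objc
// Generate a filesystem-safe unique name and store it on the managed object.
CFUUIDRef uuid = CFUUIDCreate(NULL);
NSString *uuidString = (NSString *)CFUUIDCreateString(NULL, uuid);
CFRelease(uuid);

NSString *fileName = [uuidString stringByAppendingPathExtension:@"dat"];
NSString *docs = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,
                                                      NSUserDomainMask, YES) objectAtIndex:0];
NSString *path = [docs stringByAppendingPathComponent:fileName];

[managedObject setValue:fileName forKey:@"fileName"]; // attribute name is an assumption
[uuidString release];   // pre-ARC; under ARC use __bridge_transfer instead
```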

    Read the article

  • Valid signed Hex to long function

    - by Ben
    I am trying to convert a 24-bit hexadecimal string (6 characters), signed in two's complement, to a long int in C. This is the function I have come up with: long int hex2li (char string[]) { char *pEnd; long int result = strtol (string, &pEnd, 16); if (strcmp (pEnd, "") == 0) { if (toupper (string[0]) == 'F') { return result - 16777216; } else { return result; } } return LONG_MIN; } Is it valid? Is there a better way of doing this?
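
    The function mostly works, but keying off a leading 'F' misses negative values such as 0x800000; testing the 24-bit sign bit covers every case. A hedged sketch:

```c
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>

long hex24_to_long(const char *s)
{
    char *end;
    long v = strtol(s, &end, 16);
    if (*end != '\0' || v < 0 || v > 0xFFFFFF)
        return LONG_MIN;                 /* not a valid unsigned 24-bit hex value */
    if (v & 0x800000)                    /* sign bit of a 24-bit two's complement */
        v -= 0x1000000;                  /* same adjustment as the original -16777216 */
    return v;
}

int main(void)
{
    printf("%ld\n", hex24_to_long("FFFFFF"));  /* -1 */
    printf("%ld\n", hex24_to_long("800000"));  /* -8388608 */
    printf("%ld\n", hex24_to_long("7FFFFF"));  /* 8388607 */
    return 0;
}
```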

    Read the article

  • Replace whitespaces using PHP preg_replace function ignoring quoted strings

    - by Saiful
    Look at the following string: SELECT column1 , column2, column3 FROM table1 WHERE column1 = 'text, "FROM" \'from\\\' x' AND column2 = "sample text 'where' \"where\\\" " AND ( column3 = 5 ) I need to strip unnecessary whitespace characters from the string, i.e. remove whitespace before and after characters like , ( ) etc., and remove newlines (\r\n) and tabs (\t). One thing, though: the removal must not touch whitespace inside quoted strings such as 'text, "FROM" \'from\\' x' and "sample text 'where' \"where\\" ". I need to use the PHP function preg_replace($pattern, $replacement, $string); so what should the values of $pattern and $replacement be, where the value of $string is the given SQL?
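
    A hedged sketch of one $pattern/$replacement pair: quoted strings are matched first and thrown away with PCRE's (*SKIP)(*FAIL) verbs (PCRE 7.3+), so the \s+ branch only ever touches whitespace outside quotes; the escape handling is a simplification of the rules above:

```php
<?php
// Readable form of the pattern:
//   (?: '(?:\\.|[^'\\])*' | "(?:\\.|[^"\\])*" )(*SKIP)(*FAIL) | \s+
// The first alternative consumes a complete quoted string (allowing backslash
// escapes) and then discards the match, so only outside whitespace is replaced.
$pattern     = '/(?:\'(?:\\\\.|[^\'\\\\])*\'|"(?:\\\\.|[^"\\\\])*")(*SKIP)(*FAIL)|\s+/';
$replacement = ' ';

$clean = trim(preg_replace($pattern, $replacement, $string));
echo $clean;   // quoted content untouched, other whitespace collapsed to one space
```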

    Read the article
