Search Results

Search found 5303 results on 213 pages for 'encoding'.

Page 4 of 213

  • Efficient way to calculate byte length of a character, depending on the encoding

    - by BalusC
    What's the most efficient way to calculate the byte length of a character, taking the character encoding into account? In UTF-8, for example, characters have a variable byte length, so each character needs to be determined individually. So far I've come up with this: char c = getItSomehow(); String encoding = "UTF-8"; int length = new String(new char[] { c }).getBytes(encoding).length; But this is clumsy and inefficient in a loop, since a new String needs to be created every time. I can't find any more efficient way in the Java API. I imagine this could be done with bitwise operations like bit shifting, but that's my weak point and I'm unsure how to take the encoding into account here :) If you question the need for this, check this topic.
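
    A minimal sketch (not from the original question) of one way to avoid allocating a new String per character: reuse a CharsetEncoder and a pair of buffers. Class and method names are illustrative, and lone surrogates are ignored for brevity.

      import java.nio.ByteBuffer;
      import java.nio.CharBuffer;
      import java.nio.charset.Charset;
      import java.nio.charset.CharsetEncoder;

      public class CharByteLength {
          // Reused across calls, so no new String or byte[] is created per character.
          private final CharsetEncoder encoder = Charset.forName("UTF-8").newEncoder();
          private final CharBuffer in = CharBuffer.allocate(1);
          private final ByteBuffer out = ByteBuffer.allocate(8);

          public int byteLength(char c) {
              in.clear();
              out.clear();
              in.put(c).flip();
              encoder.reset();
              encoder.encode(in, out, true);
              encoder.flush(out);
              return out.position();   // bytes this char needs in the chosen encoding
          }

          public static void main(String[] args) {
              CharByteLength len = new CharByteLength();
              System.out.println(len.byteLength('a'));       // 1 in UTF-8
              System.out.println(len.byteLength('\u00E9'));  // 'é' is 2 bytes in UTF-8
          }
      }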

    Read the article

  • phpMyAdmin import/export - strange character encoding issues.

    - by John Hunt
    Hello, I'm migrating a site to a new host, and there are a couple of databases on there. There's no SSH access, so I'm stuck with phpMyAdmin. The issue is that certain characters (namely just whitespace) seem to be getting corrupted on the new site (same HTML, and Apache doesn't seem to be messing with any encodings - you can see the strange characters have changed when I use less on my Linux machine after downloading a table dump from both servers). The issue isn't as bad if I import into the new database as UTF-8 - whitespace characters only get one funny A-type symbol instead of two. I've been trying various combinations of character encodings, etc., to no avail. Exporting from: phpMyAdmin 2.6.2, MySQL 4.1.20, MySQL connection collation: utf8_general_ci, MySQL charset: UTF-8 Unicode (utf8); collation on tables and their fields is latin1_swedish_ci. Importing to: phpMyAdmin 2.11.9.2, MySQL client version: 5.0.45, MySQL charset: UTF-8 Unicode (utf8), MySQL connection collation: utf8_general_ci. The import SQL has this kind of thing in it: ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=192 ; I get the impression this is actually a bug or something with mysqldump, as nothing seems to work. Does anyone have any insight into this? Cheers, John.
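
    For reference, a minimal Java sketch (not from the question) of the usual mechanism behind extra "A"-looking symbols: UTF-8 bytes for a non-breaking space being reinterpreted as latin1 somewhere in the dump/import round trip. Whether this is exactly what happened with mysqldump here is an assumption.

      import java.nio.charset.StandardCharsets;

      public class MojibakeDemo {
          public static void main(String[] args) {
              // A non-breaking space (U+00A0) is two bytes in UTF-8: 0xC2 0xA0.
              String original = "foo\u00A0bar";
              byte[] utf8Bytes = original.getBytes(StandardCharsets.UTF_8);

              // Reading those bytes back as latin1 turns the one character into two,
              // which is where an extra "A"-looking symbol before whitespace comes from.
              String misread = new String(utf8Bytes, StandardCharsets.ISO_8859_1);
              System.out.println(misread);          // fooÂ bar
              System.out.println(misread.length()); // 8, one char longer than the original
          }
      }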

    Read the article

  • Setting Quercus db connection encoding to UTF-8 (urgent problem and need your great help)

    - by sokcmss
    We are now going to use a Java class in my website, which is developed with PHP + MySQL. I came across Quercus and it worked well, but the only problem is encoding. Quercus provides ISO-8859 encoding by default, and all database content in UTF-8 is displayed improperly, like ???. If anybody knows a way to set the Quercus DB connection encoding to UTF-8, please help me. I look forward to hearing good news urgently.
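
    If Quercus's database access ultimately goes through JDBC (as it does under Resin), the usual JDBC-level fix is to force the connection character set on the URL; where exactly that URL is configured in a given Quercus setup is an assumption here. A hypothetical sketch using the standard MySQL Connector/J parameters:

      import java.sql.Connection;
      import java.sql.DriverManager;

      public class Utf8Connection {
          public static void main(String[] args) throws Exception {
              // useUnicode/characterEncoding are standard Connector/J options;
              // the host, database and credentials below are placeholders.
              String url = "jdbc:mysql://localhost:3306/mydb"
                         + "?useUnicode=true&characterEncoding=UTF-8";
              try (Connection conn = DriverManager.getConnection(url, "user", "password")) {
                  System.out.println("Connected with a UTF-8 connection character set");
              }
          }
      }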

    Read the article

  • iPhone's NSXMLParser parsing RSS causes encoding problems

    - by Tankista
    Hi, I'm working on a simple RSS reader. The reader loads data from the internet via this code: NSXMLParser *rss = [[NSXMLParser alloc] initWithURL:[NSURL URLWithString:@"http://twitter.com/statuses/user_timeline/50405236.rss"]]; My problem is with encoding. The RSS 2.0 file is supposed to be UTF-8 encoded according to the encoding attribute in the XML file: <?xml version="1.0" encoding="utf-8"?> So when I download the URL's content, I get text truncated after the first occurrence of a character with diacritics, for example: ľ š č ť ž ý á í é, etc. I tried to solve the problem by downloading the URL as a UTF-8 string, using this code: NSString *rssXmlString = [NSString stringWithContentsOfURL: [NSURL URLWithString: @"http://www.macblog.sk/rss.xml"] encoding:NSUTF8StringEncoding error: nil]; NSData *rssXmlData = [rssXmlString dataUsingEncoding: NSUTF8StringEncoding]; That did not help. Thanks for your responses.

    Read the article

  • Windows cmd encoding change causes Python crash.

    - by Alex
    First I change the Windows CMD encoding to UTF-8 and run the Python interpreter: chcp 65001 python Then I try to print a unicode string inside it, and when I do this Python crashes in a peculiar way (I just get a cmd prompt in the same window). >>> import sys >>> print u'ëèæîð'.encode(sys.stdin.encoding) Any ideas why it happens and how to make it work? UPD: sys.stdin.encoding returns 'cp65001'. UPD2: It just came to me that the issue might be connected with the fact that UTF-8 uses a multi-byte character set (kcwu made a good point on that). I tried running the whole example with 'windows-1250' and got 'ëeaî?'. Windows-1250 uses a single-byte character set, so it worked for those characters it understands. However, I still have no idea how to make 'utf-8' work here. UPD3: Oh, I found out it is a known Python bug. I guess what happens is that Python copies the cmd encoding 'cp65001' to sys.stdin.encoding and tries to apply it to all the input. Since it fails to understand 'cp65001', it crashes on any input that contains non-ASCII characters.

    Read the article

  • ASP.NET renders strings with the wrong encoding, but PHP doesn't (MySQL)

    - by citronas
    I took over an old PHP application with MySQL as the database. Inside the database there are tables containing localized strings (and therefore special characters). Currently there is a PHP application accessing that database; my job is to create an ASP.NET (C# code-behind) application that accesses those strings as well. That works, except as far as encoding goes: if I access these strings, I get a kind of encoding problem with words like 'Ändern' and 'Prüfzeichen', but only in the ASP.NET application. The PHP app sets UTF-8 as the charset and the strings are rendered perfectly; in the ASP.NET application it's gibberish, regardless of the page encoding. In the MySQL database, the charset for the table in question, 'translations', is set to 'latin1 -- cp1252 West European' and the collation to 'latin1_swedish_ci'. I can't seem to figure out what PHP apparently does that ASP.NET does not. I traced the PHP code and could not find any sign of special encoding while getting a string from the database. The question is: how can I ensure correct encoding inside the ASP.NET application without modifying the database, given that big changes to the PHP code are not possible? Does anybody have a clue?

    Read the article

  • Shell wrong encoding

    - by csch
    Somehow I managed to screw up my shell-encoding. An example: root§server:ç£ cat --help Usage: cat ¡OPTION¿... ¡FILE¿... Concatenate FILE(s), or standard input, to standard output. -A, --show-all equivalent to -vET -b, --number-nonblank number nonempty output lines -e equivalent to -vE -E, --show-ends display $ at end of each line -n, --number number all output lines -s, --squeeze-blank suppress repeated empty output lines -t equivalent to -vT -T, --show-tabs display TAB characters as ^I -u (ignored) -v, --show-nonprinting use ^ and M- notation, except for LFD and TAB --help display this help and exit --version output version information and exit With no FILE, or when FILE is -, read standard input. Examples: cat f - g Output f's contents, then standard input, then g's contents. cat Copy standard input to standard output. Report cat bugs to bug-coreutils§gnu.org GNU coreutils home page: <http://www.gnu.org/software/coreutils/> General help using GNU software: <http://www.gnu.org/gethelp/> For complete documentation, run: info coreutils 'cat invocation' root§server:ç£ It should look like: root@server:~# cat --help Usage: cat [OPTION]... [FILE]... Concatenate FILE(s), or standard input, to standard output. -A, --show-all equivalent to -vET -b, --number-nonblank number nonempty output lines -e equivalent to -vE -E, --show-ends display $ at end of each line -n, --number number all output lines -s, --squeeze-blank suppress repeated empty output lines -t equivalent to -vT -T, --show-tabs display TAB characters as ^I -u (ignored) -v, --show-nonprinting use ^ and M- notation, except for LFD and TAB --help display this help and exit --version output version information and exit With no FILE, or when FILE is -, read standard input. Examples: cat f - g Output f's contents, then standard input, then g's contents. cat Copy standard input to standard output. Report cat bugs to bug-coreutils@gnu.org GNU coreutils home page: <http://www.gnu.org/software/coreutils/> General help using GNU software: <http://www.gnu.org/gethelp/> For complete documentation, run: info coreutils 'cat invocation' root@server:~# I have no clue what went wrong, do you have any ideas?

    Read the article

  • Servlet response wrapper has encoding problem

    - by John O
    A servlet response wrapper is being used in a Servlet Filter. The idea is that the response is manipulated, with a 'nonce' value being injected into forms, as part of defence against CSRF attacks. The web app is using UTF-8 everywhere. When the Servlet Filter is absent, no problems. When the filter is added, encoding issues occur. (It seems as if the response is reverting to 8859-1.) The guts of the code : final class CsrfResponseWrapper extends AbstractResponseWrapper { ... byte[] modifyResponse(byte[] aInputResponse){ ... String originalInput = new String(aInputResponse, encoding); String modifiedResult = addHiddenParamToPostedForms(originalInput); result = modifiedResult.getBytes(encoding); ... } ... } As I understand it, the transition between byte-land and String-land should specify an encoding. That is done here, as you can see, in two places. The value of the 'encoding' variable is 'UTF-8'; the alteration of the String itself is standard string manipulation (with a regex), and never specifies an encoding (addHiddenParamToPostedForms). Where am I in error about the encoding? EDIT: Here is the base class (sorry it's rather long): package hirondelle.web4j.security; import javax.servlet.ServletOutputStream; import javax.servlet.ServletResponse; import javax.servlet.http.HttpServletResponse; import javax.servlet.http.HttpServletResponseWrapper; import java.io.ByteArrayOutputStream; import java.io.IOException; import java.io.PrintWriter; /** Abstract Base Class for altering response content. (May be useful in future contexts as well. For now, keep package-private.) */ abstract class AbstractResponseWrapper extends HttpServletResponseWrapper { AbstractResponseWrapper(ServletResponse aServletResponse) throws IOException { super((HttpServletResponse)aServletResponse); fOutputStream = new ModifiedOutputStream(aServletResponse.getOutputStream()); fWriter = new PrintWriter(fOutputStream); } /** Return the modified response. */ abstract byte[] modifyResponse(byte[] aInputResponse); /** Standard servlet method. */ public final ServletOutputStream getOutputStream() { //fLogger.fine("Modified Response : Getting output stream."); if ( fWriterReturned ) { throw new IllegalStateException(); } fOutputStreamReturned = true; return fOutputStream; } /** Standard servlet method. */ public final PrintWriter getWriter() { //fLogger.fine("Modified Response : Getting writer."); if ( fOutputStreamReturned ) { throw new IllegalStateException(); } fWriterReturned = true; return fWriter; } // PRIVATE /* Well-behaved servlets return either an OutputStream or a PrintWriter, but not both. */ private PrintWriter fWriter; private ModifiedOutputStream fOutputStream; /* These items are used to implement conformance to the javadoc for ServletResponse, regarding exceptions being thrown. */ private boolean fWriterReturned; private boolean fOutputStreamReturned; /** Modified low level output stream. */ private class ModifiedOutputStream extends ServletOutputStream { public ModifiedOutputStream(ServletOutputStream aOutputStream) { fServletOutputStream = aOutputStream; fBuffer = new ByteArrayOutputStream(); } /** Must be implemented to make this class concrete. 
*/ public void write(int aByte) { fBuffer.write(aByte); } public void close() throws IOException { if ( !fIsClosed ){ processStream(); fServletOutputStream.close(); fIsClosed = true; } } public void flush() throws IOException { if ( fBuffer.size() != 0 ){ if ( !fIsClosed ) { processStream(); fBuffer = new ByteArrayOutputStream(); } } } /** Perform the core processing, by calling the abstract method. */ public void processStream() throws IOException { fServletOutputStream.write(modifyResponse(fBuffer.toByteArray())); fServletOutputStream.flush(); } // PRIVATE // private ServletOutputStream fServletOutputStream; private ByteArrayOutputStream fBuffer; /** Tracks if this stream has been closed. */ private boolean fIsClosed = false; } }
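
    One plausible culprit (an assumption, not a confirmed diagnosis): new PrintWriter(outputStream) in getWriter() uses the platform default charset, which can silently fall back to ISO-8859-1. A minimal sketch of an encoding-aware alternative; the class and method names are illustrative:

      import java.io.OutputStreamWriter;
      import java.io.PrintWriter;
      import java.io.UnsupportedEncodingException;
      import javax.servlet.ServletOutputStream;

      class EncodingAwareWriters {
          // Build the wrapper's PrintWriter around an OutputStreamWriter with an
          // explicit charset (e.g. the response's character encoding) instead of
          // relying on the platform default.
          static PrintWriter writerFor(ServletOutputStream out, String encoding)
                  throws UnsupportedEncodingException {
              return new PrintWriter(new OutputStreamWriter(out, encoding), true);
          }
      }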

    Read the article

  • VB.NET encoding one character wrong

    - by Nick Spiers
    I have a byte array that I'm encoding to a string: Private Function GetKey() As String Dim ba() As Byte = {&H47, &H43, &H44, &H53, &H79, &H73, &H74, &H65, &H6D, &H73, &H89, &HA, &H1, &H32, &H31, &H36} Dim strReturn As String = Encoding.ASCII.GetString(ba) Return strReturn End Function Then I write that to a file via IO.File.AppendAllText. If I open that file in 010 Editor (to view the binary data) it displays as this: 47 43 44 53 79 73 74 65 6D 73 3F 0A 01 32 31 36 The original byte array contained 89 at position 11, and the encoded string contains 3F. If I change my encoding to Encoding.Default.GetString, it gives me: 47 43 44 53 79 73 74 65 6D 73 E2 80 B0 0A 01 32 31 36 Any help would be much appreciated!
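
    A sketch of what is probably happening, written in Java rather than VB.NET since the charset mechanics are the same (this is an inference from standard charset behavior, not from the original thread): 0x89 is outside US-ASCII, so the ASCII decoder loses it, while windows-1252 maps it to '‰' (U+2030), which becomes E2 80 B0 once the file is written as UTF-8.

      import java.nio.charset.Charset;
      import java.nio.charset.StandardCharsets;
      import java.util.Arrays;

      public class LossyDecodeDemo {
          public static void main(String[] args) {
              byte[] ba = {(byte) 0x89};

              // Decoding 0x89 as ASCII is lossy; re-encoding the result as ASCII
              // yields 0x3F ('?'), matching the 3F seen in the hex editor.
              String asAscii = new String(ba, StandardCharsets.US_ASCII);
              System.out.println(Arrays.toString(asAscii.getBytes(StandardCharsets.US_ASCII)));  // [63]

              // windows-1252 maps 0x89 to the per-mille sign; written back out as UTF-8
              // that is the three bytes 0xE2 0x80 0xB0 (printed as signed Java bytes).
              String asCp1252 = new String(ba, Charset.forName("windows-1252"));
              System.out.println(Arrays.toString(asCp1252.getBytes(StandardCharsets.UTF_8)));    // [-30, -128, -80]
          }
      }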

    Read the article

  • Spring MVC + Hibernate encoding problem

    - by Bar
    I work on a Spring MVC + Hibernate application and use MySQL (ver. 5.0.51a) with the InnoDB engine. The problem appears when I submit a form with Cyrillic characters: as a result, the database contains senseless characters in an unknown encoding. All the JSP pages and the database (plus tables and fields) were created using UTF-8, and the Hibernate config also contains a property which sets the encoding to UTF-8. I solved this by creating a filter which sets the request character encoding to UTF-8. Example code: … encoding = "UTF-8"; request.setCharacterEncoding(encoding); chain.doFilter(request, response); … But it visibly slows down the app. The interesting thing is that executing the insert query directly from the app (i.e. running from Eclipse as a Java Application) works perfectly. Any suggestions are welcome. TIA, Michael.
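
    For reference, a minimal sketch of the kind of filter described above (names are illustrative). Spring also ships org.springframework.web.filter.CharacterEncodingFilter, which does the same job; setCharacterEncoding itself is cheap per request, so the slowdown probably comes from somewhere else (that last point is an assumption).

      import java.io.IOException;
      import javax.servlet.Filter;
      import javax.servlet.FilterChain;
      import javax.servlet.FilterConfig;
      import javax.servlet.ServletException;
      import javax.servlet.ServletRequest;
      import javax.servlet.ServletResponse;

      public class Utf8EncodingFilter implements Filter {
          public void init(FilterConfig config) { }

          public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
                  throws IOException, ServletException {
              request.setCharacterEncoding("UTF-8");   // decode incoming form parameters as UTF-8
              response.setCharacterEncoding("UTF-8");  // and answer in UTF-8 as well
              chain.doFilter(request, response);
          }

          public void destroy() { }
      }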

    Read the article

  • Set filename character encoding in PuTTY's PSFTP

    - by lacton
    I am using PuTTY's command-line utility psftp.exe to transfer files between a UTF-8-configured Linux server and an MS Windows PC. File names containing non-ASCII characters (e.g., Japanese kana) are corrupted when using the 'ls' or 'get' commands of the psftp utility. I tried creating a saved session from putty.exe with the translation set to UTF-8 and using that saved session from psftp.exe (i.e., open saved_session_with_UTF8_translation), but the filename characters were still corrupted. How can I configure psftp.exe so that it uses the right charset for the file names?

    Read the article

  • Multi-core DVD ripping/encoding on the Mac

    - by Paul D. Waite
    A friend of mine likes ripping DVDs to his Mac. He’s currently on an ancient machine, and is about to upgrade to either a MacBook Pro or an iMac. Just wondering if any of the Mac DVD ripping software will rip faster on the iMac (thanks to its four cores), as opposed to the MacBook Pro (a measly two cores)? Or is DVD ripping not that sort of task?

    Read the article

  • What affects video encoding speeds?

    - by Pig Head
    Fraps doesn't compress its videos when you record, so the files are enormous; in a long recording you can get up to a few hundred gigabytes. Obviously, you usually need to convert/compress them. What affects the speed of this? I don't think the RAM does: when I converted 600 GB, my RAM usage only went to 6 GB, but the processor was at 100%, which is surprising as I have a 6-core processor at 3.46 GHz. Would clock speed or more cores help the most?

    Read the article

  • Video encoding for archival

    - by Jim
    I would like to archive some home videos (DV). I don't need to save them losslessly, but I would like to encode them in something high-quality. What format is both pretty indistinguishable from the original and will likely be readable 15 years from now? WMA makes me nervous, because it's only one company that makes it, and they're constantly coming out with newer formats. (VLC couldn't open my WMAs that Windows Movie Maker made.) Other things I've considered are h.264, Ogg Theora, DivX, and Xvid. I don't mind paying for something, but usually that means the format is owned by only one vendor.

    Read the article

  • Encoding movies for the web

    - by ELS
    Hi all, I have a friend who hosts his website on IIS and Windows Server 2003 R2 32-bit. He has .WMV files, .MPG files and others, and some of these are 30 MB in size! He wonders why users complain the site is slow! So my question is: how can we reduce the size of these movies? What software? What settings for bit rate, etc.? Is there free software? I can use either a Mac or a PC. Thoughts are appreciated.

    Read the article

  • How can I find out a file's path in the text encoding used by PosteRazor?

    - by ændrük
    PosteRazor uses an apparently outdated GUI that is incapable of properly displaying my filenames. For the sake of convenience, I want to be able to open any file in PosteRazor by copying and pasting its path from Nautilus. This works in other applications, but sadly, PosteRazor is unable to understand the path. How can I convert the path that Nautilus generates into a text encoding that is compatible with PosteRazor?

    Read the article

  • Character Encoding, UTF or ANSI?

    - by Paulocoghi
    I'm using Eclipse on Ubuntu to edit PHP files. Unfortunately, some of these PHP files were created in Notepad++ on Windows XP with ANSI encoding set, and these files generate HTML code with charset=ISO-8859-1. When I configured Eclipse for ISO-8859-1, many special characters were lost and changed to '???', and when I try to save a file with ISO encoding, Eclipse displays an error saying it was not possible to save the file because some characters aren't compatible with the charset. How can I save these files without changing the encoding, or how can I change the encoding without losing characters?
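
    If converting the files once is acceptable, here is a sketch (in Java, purely for illustration; it assumes the "ANSI" files are really windows-1252, and the file name is a placeholder) of a lossless conversion: decode with the original charset, re-encode as UTF-8.

      import java.nio.charset.Charset;
      import java.nio.charset.StandardCharsets;
      import java.nio.file.Files;
      import java.nio.file.Path;
      import java.nio.file.Paths;

      public class AnsiToUtf8 {
          public static void main(String[] args) throws Exception {
              Path file = Paths.get("page.php");  // placeholder path
              // Read the bytes as windows-1252, then write the same text back as UTF-8.
              String content = new String(Files.readAllBytes(file), Charset.forName("windows-1252"));
              Files.write(file, content.getBytes(StandardCharsets.UTF_8));
          }
      }

    The charset declared in the generated HTML would then need to change to UTF-8 as well.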

    Read the article

  • Technique for ensuring HTML- and URL-encoding

    - by JW
    Has anyone implemented a good template system for ensuring that output is properly HTML-encoded where it makes sense? Maybe even something that recognizes when output should be URL-encoded or JSON-encoded instead? The lazy approach — just encoding all inputs — causes problems when you want to send those inputs to a database, or to a block of JavaScript code. So something a little smarter is needed. The tedious approach — putting the proper encoding function around each piece of data on the template — works, but it's easy for developers to forget to do it. Is there a good approach that makes it easy for developers, and ensures that the right encoding is done? I was listening to one of the SO podcasts, and Joel tossed out an idea about using typed data to enforce a difference between HTML-encoded strings and non-encoded strings. Maybe that could be a starting point. I'm looking more for a strategy than for an implementation in a particular language (although I'd be happy to hear about implementations that already exist and work).
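
    A minimal sketch of the "typed data" idea mentioned above: give raw text and HTML-safe markup different types, so the compiler (or the template layer) catches a missing encoding instead of relying on the developer's memory. Class and method names are illustrative.

      public final class HtmlSafe {
          private final String markup;

          private HtmlSafe(String markup) {
              this.markup = markup;
          }

          // Escape untrusted text; the only routine way to obtain an HtmlSafe value.
          public static HtmlSafe escape(String raw) {
              return new HtmlSafe(raw.replace("&", "&amp;")
                                     .replace("<", "&lt;")
                                     .replace(">", "&gt;")
                                     .replace("\"", "&quot;"));
          }

          // Markup the developer asserts is already safe; kept explicit and greppable.
          public static HtmlSafe trusted(String markup) {
              return new HtmlSafe(markup);
          }

          @Override
          public String toString() {
              return markup;
          }

          public static void main(String[] args) {
              HtmlSafe title = HtmlSafe.escape("Fish & Chips <script>");
              System.out.println("<h1>" + title + "</h1>");  // & and <script> come out escaped
          }
      }

    The same pattern extends to UrlSafe or JsonSafe wrapper types, with the template layer accepting only the wrappers rather than plain strings.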

    Read the article

  • Variable-byte encoding clarification

    - by Myx
    Hello: I am very new to the world of byte encoding, so please excuse me (and by all means, correct me) if I am using or expressing simple concepts in the wrong way. I am trying to understand variable-byte encoding. I have read the Wikipedia article (http://en.wikipedia.org/wiki/Variable-width_encoding) as well as a chapter from an Information Retrieval textbook. I think I understand how to encode a decimal integer; for example, if I wanted to variable-byte encode the integer 60, I would get the following result: 1 0 1 1 1 1 0 0 (please let me know if the above is incorrect). Even if I understand the scheme correctly, I'm not completely sure how the information is compressed. Is it because we would usually use 32 bits to represent an integer, so that representing 60 would result in 1 1 1 1 0 0 preceded by 26 zeros, thus wasting space compared to representing it with just 8 bits instead? Thank you in advance for the clarifications.
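
    A small sketch of an encoder for the scheme the question's example appears to follow: seven payload bits per byte, with the high bit set only on the last byte of each number.

      import java.io.ByteArrayOutputStream;

      public class VByte {
          static byte[] encode(int n) {
              ByteArrayOutputStream out = new ByteArrayOutputStream();
              // Count how many 7-bit groups the value needs.
              int numBytes = 1;
              for (int v = n >>> 7; v != 0; v >>>= 7) numBytes++;
              // Emit the groups most-significant first; mark the final byte with the high bit.
              for (int i = numBytes - 1; i >= 0; i--) {
                  int group = (n >>> (7 * i)) & 0x7F;
                  if (i == 0) group |= 0x80;
                  out.write(group);
              }
              return out.toByteArray();
          }

          public static void main(String[] args) {
              for (byte b : encode(60)) {
                  System.out.println(String.format("%8s", Integer.toBinaryString(b & 0xFF)).replace(' ', '0'));
              }
              // prints 10111100, matching the example in the question
          }
      }

    For 60 this produces the single byte 10111100. The compression comes from the fact that small, frequent numbers (such as gaps between document IDs in a postings list) take one byte instead of a fixed four, at the cost of larger numbers needing an extra bit per byte.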

    Read the article

  • How to configure encoding in maven

    - by Ethan Leroy
    When I run mvn install on my multi-module Maven project, I always get the following output: [WARNING] File encoding has not been set, using platform encoding UTF-8, i.e. build is platform dependent! So I googled around a bit, but all I can find is that I have to add <properties> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> </properties> to my pom.xml. But it's already there (in the parent pom.xml). Configuring <encoding> for the maven-resources-plugin or the maven-compiler-plugin doesn't fix it either. So what's the problem?

    Read the article

  • Hyphen encoding (minus) in Google Base RSS feed

    - by pmells
    I am trying to create an automatic feed generation for data to be sent to Google Base using utf-8 encoding. However I am getting errors whenever hyphens are found telling me that there is an encoding error in the relevant attribute (title, description, product_type). I am currently using: &amp;minus; but I have also tried: &amp;#8722; neither of which have worked. I am using the following declaration at the top of the document: <?xml version="1.0" encoding="utf-8"?> Any help appreciated and let me know if I need to give more information!

    Read the article
