Search Results

Search found 5325 results on 213 pages for 'huffman encoding'.

  • Is encoding needed in this decryption?

    - by Lijo
    I have an encryption/decryption scenario as shown below.

        // [Clear text ID string as input] -- [(ASCII GetByte) + Encoding] -- [Encryption as byte array]
        //   -- [Database column is VarBinary] -- [Pass byte[] as VarBinary parameter to SP for comparison]

        // [ID stored as VarBinary in database] -- [Read as byte array]
        //   -- [(Decrypt as byte array) + Encoding + (ASCII GetString)] -- Show as string in the UI

    My question is about the decryption scenario. After decryption I get a byte array, and I then do an encoding conversion (IBM037). Is that correct? Is there something wrong in the flow shown above?

        private static byte[] GetEncryptedID(string id)
        {
            Interface_Request input = new Interface_Request();
            input.RequestText = EncodeTo64(id);
            input.RequestType = Encryption;
            ProgramInterface inputRequest = new ProgramInterface();
            inputRequest.Test_Trial_Request = input;
            using (KTestService operation = new KTestService())
            {
                return operation.KTrialOperation(inputRequest).Test_Trial_Response.ResponseText;
            }
        }

        private static string GetDecryptedID(byte[] id)
        {
            Interface_Request input = new Interface_Request();
            input.RequestText = id;
            input.RequestType = Decryption;
            ProgramInterface request = new ProgramInterface();
            request.Test_Trial_Request = input;
            using (KTestService operationD = new KTestService())
            {
                ProgramInterface1 response = operationD.KI014Operation(request);
                byte[] decryptedValue = response.ICSF_AES_Response.ResponseText;
                Encoding sourceByteFormat = Encoding.GetEncoding("IBM037");
                Encoding destinationByteFormat = Encoding.ASCII;
                // Convert from one byte format to the other (IBM037/EBCDIC to ASCII)
                byte[] asciiEncodedBytes = Encoding.Convert(sourceByteFormat, destinationByteFormat, decryptedValue);
                return Encoding.ASCII.GetString(asciiEncodedBytes);
            }
        }

        private static byte[] EncodeTo64(string toEncode)
        {
            byte[] dataInBytes = Encoding.ASCII.GetBytes(toEncode);
            Encoding destinationByteFormat = Encoding.GetEncoding("IBM037");
            Encoding sourceByteFormat = Encoding.ASCII;
            // Convert from one byte format to the other (ASCII to IBM037/EBCDIC)
            byte[] ebcdicBytes = Encoding.Convert(sourceByteFormat, destinationByteFormat, dataInBytes);
            return ebcdicBytes;
        }
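
    For reference, a minimal round-trip sketch of just the conversion step in isolation (illustrative only, assuming the decrypted bytes are IBM037/EBCDIC text; the class and variable names here are ours):

        using System;
        using System.Text;

        class Ibm037RoundTrip
        {
            static void Main()
            {
                // On .NET Core/5+, EBCDIC code pages need a provider registered first;
                // they are built in on .NET Framework.
                // Encoding.RegisterProvider(CodePagesEncodingProvider.Instance);

                string id = "ABC123";
                Encoding ebcdic = Encoding.GetEncoding("IBM037");
                Encoding ascii = Encoding.ASCII;

                // What EncodeTo64 does: ASCII bytes -> EBCDIC bytes before encryption.
                byte[] ebcdicBytes = Encoding.Convert(ascii, ebcdic, ascii.GetBytes(id));

                // What GetDecryptedID does after decryption: EBCDIC bytes -> ASCII bytes.
                byte[] asciiBytes = Encoding.Convert(ebcdic, ascii, ebcdicBytes);

                Console.WriteLine(ascii.GetString(asciiBytes)); // ABC123 if the flow is sound
            }
        }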

  • C# Check if character exists in encoding

    - by Alvin Wong
    I am writing a program, part of which renders a bitmap font in CP437. In the function that renders the text, I want to be able to check whether a char is available in CP437 before the encoding conversion, like:

        public static void DrawCharacter(this Graphics g, char c)
        {
            if (char_exist_in_encoding(Encoding.GetEncoding(437), c))
            {
                byte[] src = Encoding.Unicode.GetBytes(c.ToString());
                byte[] dest = Encoding.Convert(Encoding.Unicode, Encoding.GetEncoding(437), src);
                DrawCharacter(g, dest[0]); // Call the void(this Graphics, byte) overload
            }
        }

    Without the check, any character outside CP437 will result in a '?' (63, 0x3F). I want to hide any invalid characters completely. Is there an implementation of char_exist_in_encoding other than the following stupid approach?

        public static bool char_exist_in_encoding(Encoding e, char c)
        {
            if (c == '?')
                return true;
            byte[] src = Encoding.Unicode.GetBytes(c.ToString());
            byte[] dest = Encoding.Convert(Encoding.Unicode, Encoding.GetEncoding(437), src);
            return dest[0] != 0x3F;
        }

    Perhaps not very relevant, but the bitmap is created like this:

        Bitmap b = new Bitmap(256 * 8, 16);
        Graphics g = Graphics.FromImage(b);
        g.TextRenderingHint = System.Drawing.Text.TextRenderingHint.SingleBitPerPixelGridFit;
        Font f = new Font("Whatever 8x16 bitmap font", 16, GraphicsUnit.Pixel);
        for (byte i = 0; i < 255; i++)
        {
            byte[] arr = Encoding.Convert(Encoding.GetEncoding(437), Encoding.Unicode, new byte[] { i });
            char c = Encoding.Unicode.GetChars(arr)[0];
            g.DrawString(c.ToString(), f, Brushes.Black, i * 8 - 3, 0); // Don't know why it needs a 3px offset
        }
        b.Save(@"D:\chars.png");
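
    One cleaner check is possible as a sketch (not from the original question; the method name is ours): clone CP437 with an EncoderExceptionFallback, so that unmappable characters throw instead of silently becoming '?':

        using System;
        using System.Text;

        static class Cp437Probe
        {
            // CP437 cloned with an exception fallback: GetBytes throws for
            // characters the code page cannot represent, instead of writing '?'.
            static readonly Encoding StrictCp437 = Encoding.GetEncoding(
                437, new EncoderExceptionFallback(), new DecoderExceptionFallback());

            public static bool CharExistsInCp437(char c)
            {
                try
                {
                    StrictCp437.GetBytes(new[] { c });
                    return true;
                }
                catch (EncoderFallbackException)
                {
                    return false;
                }
            }
        }

    With this, CharExistsInCp437('é') should be true (0x82 in CP437) while CharExistsInCp437('€') should be false, and the special-casing of '?' disappears.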

  • MODx character encoding

    - by Piet
    Ahhh character encodings. Don't you just love them? Having character issues in MODx? Then probably the MODx manager character encoding, the character encoding of the site itself, your database's character encoding, or the encoding MODx/PHP uses to talk to MySQL isn't correct.

    The Website Encoding. Your MODx site's character encoding can be configured in the manager under Tools/Configuration/Site/Character encoding. This is the encoding your website's visitors will get.

    The Manager's Encoding. The manager's encoding can be changed by setting $modx_manager_charset in manager/includes/lang/<language>.inc.php like this (for example):

        $modx_manager_charset = 'iso-8859-1';

    To find out what language you're using (and thus which file you need to change), check Tools/Configuration/Site/Language (one line above the character encoding setting). This needs to be the same encoding as your site: you can't have your manager in UTF-8 and your site in ISO-8859-1.

    Your Database's Encoding. The charset MODx/PHP uses to talk to your database can be set by changing $database_connection_charset in manager/includes/config.inc.php. This needs to be the same as your database's charset. Make sure you use the correct corresponding charset: for ISO-8859-1 you need to use 'latin1'; UTF-8 is just 'utf8'. Example:

        $database_connection_charset = 'latin1';

    Now, if you check Reports/System info, the 'Database Charset' might say something else. This is because the MySQL variable 'character_set_database' is displayed here, which contains the character set used by the default database, not the one for the current database/connection. However, if you changed this to display 'character_set_connection', it could still say something else, because the 'set character set' statement used by MODx doesn't change this value either. The 'set names' statement does, but since it turns out my MODx install now works as expected, I'll just leave it at this before I get a headache. If I saved you a potential headache, or you think I'm totally wrong or overlooked something, let me know in the comments. btw: I want to be able to use a real editor with MODx. Somehow.

  • What encoding is used by javax.xml.transform.Transformer?

    - by Simon Reeves
    Please can you answer a couple of questions based on the code below (try/catch blocks excluded), which transforms input XML and XSL files into an output XSL-FO file:

        File xslFile = new File("inXslFile.xsl");
        File xmlFile = new File("sourceXmlFile.xml");
        TransformerFactory tFactory = TransformerFactory.newInstance();
        Transformer transformer = tFactory.newTransformer(new StreamSource(xslFile));
        FileOutputStream fos = new FileOutputStream(new File("outFoFile.fo"));
        transformer.transform(new StreamSource(xmlFile), new StreamResult(fos));

    inXslFile is encoded using UTF-8; however, there are no tags in the file which state this. sourceXmlFile is UTF-8 encoded, and there may be a meta tag at the start of the file indicating this. I am currently using Java 6 with the intention of upgrading in the future.

    1. What encoding is used when reading the xslFile?
    2. What encoding is used when reading the xmlFile?
    3. What encoding will be applied to the FO output file?
    4. How can I obtain the info (properties) for 1-3? Is there a method call?
    5. How can the properties in 4 be altered, via configuration and dynamically?
    6. If known: where is there info (a web site) on this that I can read? I have looked without much success.

  • What is really happening when we change encoding in a string?

    - by Jim Thio
    http://php.net/manual/en/function.mb-convert-encoding.php Say I do:

        $encoded = mb_convert_encoding($original, 'UTF-8');

    That looks simple enough. What I am imagining is the following: $original carries a pointer to the way the string is actually encoded, something like a char * kind of thing. And then there is the matter of which characters are actually encoded; it's probably something along the lines of a "UTF-64", where each glyph is indeed one character. Now when we do

        $encoded = mb_convert_encoding($original, 'UTF-8');

    several things can happen:

    1. The original internal representation doesn't change; however, it is REINTERPRETED, so that the characters that show up differ.
    2. The original string that it represents doesn't change; however, the ENCODING changes.

    Which one is right?
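
    The distinction can be made concrete with a small sketch (illustrative only, not from the original question; C# used for the demonstration): converting between encodings changes the bytes but preserves the text, while reinterpreting the same bytes under the wrong encoding preserves the bytes and changes the text.

        using System;
        using System.Text;

        class EncodingVsReinterpretation
        {
            static void Main()
            {
                string text = "é"; // one abstract character, U+00E9

                byte[] latin1 = Encoding.GetEncoding("ISO-8859-1").GetBytes(text); // 0xE9
                byte[] utf8   = Encoding.UTF8.GetBytes(text);                      // 0xC3 0xA9

                // Conversion (option 2): different bytes, same text when decoded correctly.
                Console.WriteLine(Encoding.UTF8.GetString(utf8));                  // é

                // Reinterpretation (option 1): same bytes, wrong decoder, different text.
                Console.WriteLine(Encoding.GetEncoding("ISO-8859-1").GetString(utf8)); // Ã©
            }
        }

    For what it's worth, PHP strings are plain byte sequences with no attached encoding, so mb_convert_encoding returns a new byte sequence: the bytes change, which corresponds to option 2 above.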

  • Confused about Huffman Trees

    - by ShrimpCrackers
    A quick tutorial on generating a Huffman tree (linked above). Near the end of that link, it shows the tree with 2 elements left, and then the completed tree. I'm confused about the way that it is branched. Is there a specific way a Huffman tree needs to be branched? For example, 57:* with its right child 35:* is branched off to the right. Could it have been 35 branched to the left with 22 branched to the right? Also, why wasn't 22:* paired up with 15:4? It just paired with 20:5 to create a new tree. From initial observations it seems the tree does not need to be balanced or have any specific order, other than that the frequencies of the leaves must add up to the value of the parent node. Could two people creating a Huffman tree with the same data end up with different encoding values?
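
    As to the last question: yes. Ties in the merge step may be broken either way, so two people can produce different trees, and different codes, of equal total encoded length. A small illustrative sketch (hypothetical frequencies, not the ones from the tutorial):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Node
        {
            public int Weight;
            public char? Symbol; // null for internal nodes
            public Node Left, Right;
        }

        class HuffmanTies
        {
            static Node Build(Dictionary<char, int> freq)
            {
                // Sorted list acting as a min-priority queue.
                var nodes = freq.Select(kv => new Node { Symbol = kv.Key, Weight = kv.Value }).ToList();
                while (nodes.Count > 1)
                {
                    nodes.Sort((a, b) => a.Weight - b.Weight);
                    Node first = nodes[0], second = nodes[1];
                    nodes.RemoveRange(0, 2);
                    // Which of the two lightest nodes goes left and which goes right
                    // is arbitrary: either choice yields a valid, optimal Huffman tree.
                    nodes.Add(new Node { Weight = first.Weight + second.Weight, Left = first, Right = second });
                }
                return nodes[0];
            }

            static void PrintCodes(Node n, string prefix)
            {
                if (n.Symbol != null) { Console.WriteLine($"{n.Symbol}: {prefix}"); return; }
                PrintCodes(n.Left, prefix + "0");
                PrintCodes(n.Right, prefix + "1");
            }

            static void Main()
            {
                // 'a' and 'b' tie at weight 1, so they may be merged in either order;
                // both orders give each symbol the same code length (a, b: 2 bits; c: 1 bit).
                var freq = new Dictionary<char, int> { ['a'] = 1, ['b'] = 1, ['c'] = 2 };
                PrintCodes(Build(freq), "");
            }
        }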

  • Switch encoding of terminal with a command

    - by Tomas Lycken
    One of the servers I quite often ssh to uses Western encoding instead of UTF-8 (and there's no way I can change that). I've started writing a bash script to connect to this server, so I won't have to type out the entire address every time, but I would like to improve this script so it also changes the encoding of the terminal window correctly. The change I need can be performed with the mouse by navigating to "Terminal" - "Set Character Encoding..." - "Western (ISO-8859-1)". Is there a terminal command that does the same thing, for the current terminal window/screen? To clarify: I'm not interested in ways of switching the locale of the system on the remote side; that system is administered by someone else, and I have no idea what stuff might depend on the Latin-1 encoding there. What I want is to let the terminal window on my side switch character encoding to the one mentioned above, in the same way I can with the mouse and the menus.

  • How to detect the character encoding of a text file?

    - by Cédric Boivin
    I am trying to detect which character encoding is used in my file. I tried this code to get the standard encoding:

        public static Encoding GetFileEncoding(string srcFile)
        {
            // *** Detect the byte order mark, if any; otherwise assume Encoding.Default (ANSI code page)
            byte[] buffer = new byte[5];
            using (FileStream file = new FileStream(srcFile, FileMode.Open))
            {
                file.Read(buffer, 0, 5);
            }

            if (buffer[0] == 0xEF && buffer[1] == 0xBB && buffer[2] == 0xBF)
                return Encoding.UTF8;                 // EF BB BF: UTF-8
            if (buffer[0] == 0xFF && buffer[1] == 0xFE && buffer[2] == 0 && buffer[3] == 0)
                return Encoding.UTF32;                // FF FE 00 00: UTF-32 LE (test before UTF-16 LE)
            if (buffer[0] == 0xFF && buffer[1] == 0xFE)
                return Encoding.Unicode;              // FF FE: 1200, UTF-16 LE
            if (buffer[0] == 0xFE && buffer[1] == 0xFF)
                return Encoding.BigEndianUnicode;     // FE FF: 1201, UTF-16 BE
            if (buffer[0] == 0 && buffer[1] == 0 && buffer[2] == 0xFE && buffer[3] == 0xFF)
                return new UTF32Encoding(true, true); // 00 00 FE FF: UTF-32 BE
            if (buffer[0] == 0x2B && buffer[1] == 0x2F && buffer[2] == 0x76)
                return Encoding.UTF7;                 // 2B 2F 76: UTF-7
            return Encoding.Default;
        }

    My first five bytes are 60, 118, 56, 46 and 49. Is there a chart that shows which encoding matches those first five bytes?
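
    A sketch of an alternative (not from the question; it assumes .NET's built-in BOM sniffing is acceptable): StreamReader can detect the mark itself and fall back to whatever default you hand it. Incidentally, 60, 118, 56, 46, 49 read as decimal ASCII is "<v8.1", which is no known BOM at all, so any BOM-based detector can only return its fallback for this file.

        using System.IO;
        using System.Text;

        static class BomSniffer
        {
            // Returns the encoding detected from the BOM, or the fallback if none is found.
            public static Encoding Detect(string path, Encoding fallback)
            {
                using (var reader = new StreamReader(path, fallback,
                                                     detectEncodingFromByteOrderMarks: true))
                {
                    reader.Peek(); // forces the reader to inspect the BOM
                    return reader.CurrentEncoding;
                }
            }
        }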

  • How relevant is UTF-7 when it comes to parsing emails?

    - by J. Pablo Fernández
    I recently implemented incoming emails for an application and boy, did I open the gates of hell? Since then, every other day an email arrives that makes the app fail in a different way. One of those things is emails encoded as UTF-7. Most emails come as ASCII, some in one of the Latin encodings, or, thankfully, UTF-8. Hotmail error messages (like "email address doesn't exist" or "quota exceeded") seem to come as UTF-7. Unfortunately, UTF-7 is not an encoding Ruby understands:

        > "hello world".encode("utf-8", "utf-7")
        Encoding::ConverterNotFoundError: code converter not found (UTF-7 to UTF-8)
        > Encoding::UTF_7
        => #<Encoding:UTF-7 (dummy)>

    My application doesn't crash; it actually handles the email quite well, but it does send me a notification about the potential error. I spent some time googling, and I can't find anyone who has implemented the conversion, at least not as a Ruby 1.9.3 Encoding::Converter. So, my question is: since I never got an email with actual content, from an actual person, in UTF-7, how relevant is that encoding? Can I safely ignore it?

  • PHP: Detect encoding and make everything UTF-8

    - by marco92w
    Hello! I'm reading out lots of texts from various RSS feeds and inserting them into my database. Of course, there are several different character encodings used in the feeds, e.g. UTF-8 and ISO-8859-1. Unfortunately, there are sometimes problems with the encodings of the texts. Example:

    1. The "ß" in "Fußball" should look like this in my database: "Ÿ". If it is a "Ÿ", it is displayed correctly.
    2. Sometimes, the "ß" in "Fußball" looks like this in my database: "ß". Then it is displayed wrongly, of course.
    3. In other cases, the "ß" is saved as a "ß", so without any change. Then it is also displayed wrongly.

    What can I do to avoid cases 2 and 3? How can I make everything the same encoding, preferably UTF-8? When must I use utf8_encode(), when must I use utf8_decode() (it's clear what the effect is, but when must I use the functions?), and when must I do nothing with the input? Can you help me and tell me how to make everything the same encoding? Perhaps with the function mb_detect_encoding()? Can I write a function for this? So my problems are:

    1. How to find out what encoding the text uses.
    2. How to convert it to UTF-8, whatever the old encoding is.

    Thanks in advance! EDIT: Would a function like this work?

        function correct_encoding($text) {
            $current_encoding = mb_detect_encoding($text, 'auto');
            $text = iconv($current_encoding, 'UTF-8', $text);
            return $text;
        }

    I've tested it, but it doesn't work. What's wrong with it?

  • unknown data encoding

    - by Keyur Shah
    Hi, while I was working with an old application with an existing database in MS Access, I came across some strangely encoded data, such as 48001700030E0F465075465A56525E1100121D04121B565A58 as an email address. What kind of data encoding is this? I tried Base64, but it doesn't seem to be that. Can anybody with previous experience with MS Access tell me what encoding this could be?

  • Setting encoding in Grails controller's render method

    - by Philippe
    Hello, I'm trying to build an RSS feed using Grails and Rome. In my controller's rss action, my last command is:

        render(text: getFeed("rss_2.0"), contentType: "application/rss+xml", encoding: "ISO-8859-1")

    However, when I navigate to my feed's URL, the header is:

        <?xml version="1.0" encoding="UTF-8"?>
        <rss xmlns:dc="http://purl.org/dc/elements/1.1/" version="2.0">
        ...

    Does anyone have a clue about WHY the encoding is UTF-8 when I set it to ISO-8859-1 in the render method? Thanks for your help!

  • Perl - Encoding String for XML

    - by Sho Minamimoto
    I'm not too fluent with the Perl XML libraries (actually, I really suck at understanding encoding in general). All I'm doing is taking a string that possibly has characters such as "à" in it and putting it into an XML file, but when I open the file, I get an encoding error at the line containing such a character. So I just need a lightweight way to take a string and encode it for XML.

  • Creating files with French characters and encoding

    - by Kevin
    Hi, I am creating a file like so:

        FileStream temp = File.Create(this.FileName);

    Then putting data in the file like so:

        this.Writer = new StreamWriter(this.Stream);
        this.Writer.WriteLine(strMessage);

    That code is encapsulated in a class hierarchy, but that is the meat and potatoes of it. My problem is this: MSDN says that the default encoding for creating a file this way is UTF-8. And when I write a French character such as é, TextPad interprets the file as a UTF-8 file, but Notepad++ says it's "ANSI as UTF8" (or maybe it's an ANSI file being read as UTF-8). When I create a file the same way without the French character, both TextPad and Notepad++ read the file as an ANSI file, even though according to MSDN it should still be a UTF-8 file. Which program should be trusted, Notepad++ or TextPad? Notepad++ seems to be more consistent, but is still the opposite of what MSDN says it should be. My problem is that we create files that get sent off to another company, and depending on whether there are French characters, the encoding seems to keep changing. Or is there a better way to determine the encoding of a file? I've read about byte order marks and preambles, but as far as I understand, neither is guaranteed to be there. We initially thought that all the files we were building were ANSI. Also please note that both ANSI and UTF-8 should handle the French characters appropriately, as the characters are part of both character sets.
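
    One way to make the intent explicit (a sketch, assuming the goal is for editors and the receiving company to detect UTF-8 reliably) is to hand StreamWriter a UTF8Encoding that emits the byte order mark:

        using System.IO;
        using System.Text;

        class Utf8BomWriter
        {
            static void Main()
            {
                // UTF8Encoding(true) writes the EF BB BF preamble at the start of the
                // file, which editors like TextPad and Notepad++ treat as a definite
                // UTF-8 marker regardless of the file's contents.
                var utf8WithBom = new UTF8Encoding(encoderShouldEmitUTF8Identifier: true);
                using (var writer = new StreamWriter("out.txt", false, utf8WithBom))
                {
                    writer.WriteLine("Fußball, résumé, é");
                }
            }
        }

    Without a BOM, a file containing only ASCII bytes is byte-for-byte identical in ANSI and UTF-8, which would explain why both editors call the é-free files ANSI.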

  • Encoding problem with preg_replace() and scandir()

    - by itarato
    Hi, on OS X (PHP 5.2.11) I have a file siësta.doc (and a thousand others with Unicode filenames), and I want to convert the file names to a web-consumable format (a-zA-Z0-9.). If I hardcode the file name above, I get the right conversion:

        <?php
        $file = 'siësta.doc';
        echo preg_replace("/[^a-zA-Z0-9.]/u", '_', $file);
        // Output: si_sta.doc
        ?>

    But if I read the file names with scandir, I get strange conversions:

        <?php
        $files = scandir(DIRNAME);
        foreach ($files as $file) {
            echo preg_replace("/[^a-zA-Z0-9.]/u", '_', $file);
            // Output for the file above: sie_sta.doc
        }
        ?>

    I tried to detect the encoding, set the encoding, and convert it with the iconv functions. I tried the mb_ functions also, but it was just worse. What did I do wrong? Thanks in advance.

  • C#, UTF-8 and encoding characters

    - by AspNyc
    This is a shot in the dark, and I apologize in advance if this question sounds like the ramblings of a madman. As part of an integration with a third party, I need to UTF-8-encode some string info using C# so I can send it to the target server via a multipart form. The problem is that they are rejecting some of my submissions, probably because I'm not encoding their contents correctly. Right now, I'm trying to figure out how a dash or hyphen (I can't tell which it is just by looking at it) is received or interpreted by the target server as ?~@~S (yes, that's a 5-character string and is not your browser glitching out). And unfortunately I don't have a thorough enough understanding of Encoding.UTF8.GetBytes() to know how to use the byte array to begin identifying where the problem might lie. If anybody can provide any tips or advice, I would greatly appreciate it. So far my only friend has been MSDN, and not much of one at that.
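
    One plausible reading (an assumption, not a confirmed diagnosis): ?~@~S is consistent with the three UTF-8 bytes of U+2013 EN DASH, 0xE2 0x80 0x93, shown one per character by a viewer that is not UTF-8 aware, with the two C1 bytes rendered as ~@ and ~S. The en dash is what Word typically substitutes for a typed hyphen, which fits the cut-and-paste scenario. A sketch for inspecting the bytes:

        using System;
        using System.Text;

        class DashBytes
        {
            static void Main()
            {
                // U+2013 EN DASH, the character Word usually swaps in for a hyphen.
                string dash = "\u2013";
                foreach (byte b in Encoding.UTF8.GetBytes(dash))
                    Console.Write($"0x{b:X2} "); // 0xE2 0x80 0x93
                Console.WriteLine();
            }
        }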

  • Is there any GUI application for Flash Media Live Encoding for Ubuntu or Linux?

    - by Dumindu Mahawela
    I need to broadcast a TV channel to a website, and I need a GUI application for Flash Media Live Encoding. The well-known Adobe FME does not have a Linux version. I did try to install Open Broadcast Encoder on Ubuntu 13.04 (amd64) but wasn't successful. So the things that I need to know are: Is there any GUI application for Flash Media Live Encoding for Ubuntu or Linux? Is it possible to successfully install Open Broadcast Encoder on Ubuntu?

  • Change language encoding for file uploading

    - by jc.yin
    Previously we were running a WordPress site on a Mac OS Server machine. We had several hundred images with Chinese characters in the image names. Now we're trying to migrate to an Ubuntu system, and everything is fine except the images. Every time I try to upload an image with a Chinese name via FTP, I get the following message: "MyImage contains illegal characters. Please choose an appropriate text encoding." I have no idea how to solve this issue. Do I need to somehow change the system language encoding in Ubuntu to allow for image uploading? Thanks

  • Global UTF-encoding, the right way

    - by mowgli
    I'm curious as to what the right way is to have UTF-8 encoding on all web files. All my files (incl. CSS and JS) are made and saved in UTF-8 encoding. In PHP, I set the charset at the top of the main page (this page includes all others) with:

        header('Content-type: text/html; charset=utf-8');

    In the same page I have this HTML meta tag:

        <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />

    Then I stumbled upon an external CSS file that has this on its first line:

        @charset "UTF-8";

    And now I wonder: should I set the charset INSIDE all my CSS/JS files too, like that? And/or should I serve each file with charset=utf-8 in the meta tag?

  • Huffman coding two characters as one

    - by Adomas
    Hi, I need Huffman code (best in Python or Java) which could encode text not by one character (a = 10, b = 11) but by two (ab = 11, ag = 10). Is this possible, and if yes, where could I find it? Maybe it's somewhere on the internet and I just can't find it.
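
    Pair-based Huffman is certainly possible: treat each two-character chunk as a single symbol, count frequencies, and feed those symbols to any ordinary Huffman tree builder (such as the tree-building sketch shown earlier on this page). A minimal sketch of the symbolization step (illustrative, in C#; the padding policy for odd-length input is one of several possible choices):

        using System;
        using System.Collections.Generic;

        class DigramCounter
        {
            // Splits text into 2-char symbols and counts them; the resulting counts
            // are exactly what a standard Huffman tree builder takes as input.
            static Dictionary<string, int> CountDigrams(string text)
            {
                var freq = new Dictionary<string, int>();
                for (int i = 0; i < text.Length; i += 2)
                {
                    // Odd-length input: pad the final chunk with a sentinel character.
                    string sym = i + 1 < text.Length ? text.Substring(i, 2) : text[i] + "_";
                    freq[sym] = freq.TryGetValue(sym, out int n) ? n + 1 : 1;
                }
                return freq;
            }

            static void Main()
            {
                foreach (var kv in CountDigrams("abagabag"))
                    Console.WriteLine($"{kv.Key}: {kv.Value}"); // ab: 2, ag: 2
            }
        }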

  • Character Encoding: â??

    - by akaphenom
    I am trying to piece together the mysterious string of characters â?? I am seeing quite a bit of in our database. I am fairly sure this is a result of conversion between character encodings, but I am not completely positive. The users are able to enter text (or cut and paste) into an Ext-JS rich text editor. The data is posted to a servlet which persists it to the database, and when I view it in the database I see those strange characters. Is there any way to decode these back to their original meaning, if I was able to discover the correct encoding? Or is there a loss of bits or bytes that has occurred through the conversion process? Users are cutting and pasting from multiple versions of MS Word and PDF. Does the encoding follow where the user copied from? Thank you. The website is UTF-8. We are using MS SQL Server 2005:

        SELECT serverproperty('Collation')             -- Server default collation: Latin1_General_CI_AS
        SELECT databasepropertyex('xxxx', 'Collation') -- Database default: SQL_Latin1_General_CP1_CI_AS

    and the column:

        Column_name  Type     Computed  Length  Prec  Scale  Nullable  TrimTrailingBlanks  FixedLenNullInSource  Collation
        text         varchar  no        -1                   yes       no                  yes                   SQL_Latin1_General_CP1_CI_AS

    The non-Unicode equivalents of the nchar, nvarchar, and ntext data types in SQL Server 2000 are listed below. When Unicode data is inserted into one of these non-Unicode data type columns through a command string (otherwise known as a "language event"), SQL Server converts the data to the data type using the code page associated with the collation of the column. When a character cannot be represented on a code page, it is replaced by a question mark (?), indicating the data has been lost. The appearance of unexpected characters or question marks in your data indicates that your data has been converted from Unicode to non-Unicode at some layer, and this conversion resulted in lost characters. So this may be the root cause of the problem, and not an easy one to solve on our end.
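
    A sketch of one plausible reproduction (an assumption about this specific case, but a common pattern): a Word en dash, U+2013, encoded as UTF-8 (0xE2 0x80 0x93), misread as Latin-1 somewhere in the pipeline, then stored in the non-Unicode varchar column:

        using System;
        using System.Text;

        class MojibakeRepro
        {
            static void Main()
            {
                string original = "\u2013"; // en dash, as pasted from Word

                // UTF-8 bytes of the dash misread as ISO-8859-1 somewhere in the pipeline:
                byte[] utf8 = Encoding.UTF8.GetBytes(original); // 0xE2 0x80 0x93
                string misread = Encoding.GetEncoding("ISO-8859-1").GetString(utf8);

                foreach (char c in misread)
                    Console.Write($"U+{(int)c:X4} "); // U+00E2 U+0080 U+0093
                Console.WriteLine();
                // U+00E2 is 'â'; U+0080 and U+0093 are C1 control characters with no
                // printable mapping, which the quoted SQL Server behavior would replace
                // with '?' when storing them in the varchar column: hence "â??".
            }
        }

    If that is what happened, the original character cannot be recovered from "â??" alone: the two question marks each stand in for a lost byte, and many different characters collapse to the same three-character residue.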

  • [ASP.NET MVC] Encoding a character

    - by Trimack
    Hi, I am experiencing some weird encoding behaviour in my ASP.NET MVC project. In my Site.Master there is

        <div class="logo">
            <a href="<%=Url.Action("Index", "Win7")%>"><%= Html.Encode("Windows 7 Tutoriál") %></a></div>

    which translates to the resulting page as

        <div class="logo">
            <a href="/">Windows 7 TutoriA?l</a></div>

    However, in the Index.aspx there is

        <h1>Windows 7 Tutoriál</h1>

    which translates correctly on the same resulting page. I do have

        <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />

    as the first line in <head>. Locally, both files are saved in UTF-8 encoding. Any ideas why this is happening and how to fix it? Thanks in advance.
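
    One knob worth checking (an assumption about the cause, not a confirmed diagnosis): ASP.NET reads .aspx and .master files using the fileEncoding setting of web.config's globalization element, and a master page read with the wrong file encoding would mangle a literal exactly like this while leaving correctly-read views alone. A sketch of the relevant element:

        <!-- In web.config, under <system.web> -->
        <globalization fileEncoding="utf-8"
                       requestEncoding="utf-8"
                       responseEncoding="utf-8" />

    If the master page was saved as UTF-8 without a BOM, it may be read under the default (ANSI) file encoding unless fileEncoding says otherwise.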

  • Decoding not reversing unicode encoding in Django/Python

    - by PhilGo20
    Ok, I have a hardcoded string I declare like this:

        name = u"Par Catégorie"

    I have a # -*- coding: utf-8 -*- magic header, so I am guessing it's converted to UTF-8. Down the road it's output to XML through:

        xml_output.toprettyxml(indent='....', encoding='utf-8')

    And I get:

        UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 3: ordinal not in range(128)

    Most of my data is in French and is output correctly in CDATA nodes, but that one hardcoded string keeps ... I don't see why an ASCII codec is called. What's wrong?

  • C#/Oracle: Specify Encoding/Character Set of Query?

    - by Reini
    I'm trying to fetch some data from an Oracle 10 database. Some cells contain German umlauts (äöü). In my administration tool (TOAD) I can see them very well: "Mantel für Damen" (jacket for women). This is my C# code (simplified):

        var oracleCommand = new OracleCommand(sqlGetArticles, databaseConnection);
        var articleResult = oracleCommand.ExecuteReader();
        articleResult.Read();
        string temp = articleResult["SomeField"].ToString();
        Console.WriteLine(temp);

    The output is: "Mantel f?r Damen". I tried it while debugging (moving the mouse over the variable), in the debug window, in the console window, and in a file. I think I have to specify the encoding/character set somewhere. But where?
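
    A diagnostic sketch (generic, not specific to this driver): print the code points to see whether the '?' is already in the string data, which would point at the client-side character set conversion (NLS_LANG territory for the Oracle client), or whether it only appears when the console renders the text:

        using System;
        using System.Text;

        class UmlautDiagnostics
        {
            static void Inspect(string temp)
            {
                // U+00FC means the ü survived and only the rendering is wrong;
                // U+003F means the driver already substituted '?' during the
                // client character set conversion, before C# ever saw the string.
                foreach (char c in temp)
                    Console.Write($"U+{(int)c:X4} ");
                Console.WriteLine();
            }

            static void Main()
            {
                Console.OutputEncoding = Encoding.UTF8; // lets the console display ü at all
                Inspect("Mantel für Damen");
            }
        }

    Since the question reports '?' even in the debugger tooltip, the first case (loss at the driver) looks more likely than a console rendering issue.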
