Search Results

Search found 5303 results on 213 pages for 'encoding'.

  • Does FFmpeg support GPU acceleration of media encoding/decoding?

    - by Jason123
    I was wondering whether FFmpeg supports GPU acceleration. Reading the FFmpeg site I came across what looks like contradictory information. From http://www.ffmpeg.org/general.html#Video-Codecs: "H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10 (VDPAU acceleration)". From http://ffmpeg.org/trac/ffmpeg/wiki/x264EncodingGuide: "Will a graphics card make x264 encode faster? No. libx264 doesn't use them (at least not yet). There are some proprietary encoders that utilize the GPU, but that does not mean they are well optimized, though encoding time may be faster; they might be worse than x264 anyway, and possibly slower. Regardless, FFmpeg today doesn't support any means of GPU encoding, outside of libx264." So which is it - and if FFmpeg doesn't support it, is there any way to add GPU acceleration to H.264 encoding/decoding?

    Read the article

  • sql and web encoding problem

    - by Marki
    Guys, I believe I have an encoding problem. I upgraded from phpBB2 to phpBB3. The old databases were in latin1; the new ones use utf8. Already during the upgrade, some rows of the DB were only partially imported into the new version because, as it turned out, they contained strange characters. When I use PHP's mb_convert_encoding() function to convert those strings to UTF-8, they end up as e.g. 0x0093, i.e. they must have been some kind of double quotes. Even after this conversion they still show up as 0x0093 in the browser (the squares with 0093 in them that the browser shows when it doesn't know what to display). Can someone explain the problem here? I'm a little confused, and I'm afraid I don't see all the dependencies that have to line up for the encodings, and the display of them, to be correct...
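
    A likely explanation: the old latin1 columns probably held Windows-1252 bytes. Byte 0x93 is a left curly quote in Windows-1252 but maps to the unprintable control character U+0093 in ISO-8859-1, which is exactly the box the browser draws. A minimal Java sketch of the difference, assuming a single raw byte taken from such a column:

        import java.nio.charset.Charset;

        public class SmartQuoteDemo {
            public static void main(String[] args) {
                byte[] raw = { (byte) 0x93 }; // byte as stored in the old latin1 column

                // ISO-8859-1 maps 0x93 to U+0093, an unprintable C1 control character
                String asLatin1 = new String(raw, Charset.forName("ISO-8859-1"));
                // Windows-1252 maps the same byte to a left curly quote, U+201C
                String asCp1252 = new String(raw, Charset.forName("windows-1252"));

                System.out.printf("latin1 -> U+%04X%n", (int) asLatin1.charAt(0)); // U+0093
                System.out.printf("cp1252 -> U+%04X%n", (int) asCp1252.charAt(0)); // U+201C
            }
        }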

    Read the article

  • Video encoding is very slow on Amazon EC2 instance

    - by Timka
    We are using an Amazon EC2 m1.xlarge instance for video re-encoding, and the actual encoding process takes a very long time: an average 250 MB video file takes about an hour to encode.
    Instance: m1.xlarge (Xeon E5645, 15 GB RAM)
    OS: Windows Server 2008 R2 64-bit
    AviSynth 2.5 (32-bit) + ffms2 plugin (FFmpegSource 1.21)
    FFmpeg SVN-r13712 (libavutil 3213056, libavcodec 3356930, libavformat 3411456, libavdevice 3407872)
    Number of parallel jobs: 3
    Average CPU utilization: ~96%
    Update #1: The source video is mp4/H.264. FFmpeg build flags: --enable-memalign-hack --enable-avisynth --enable-libxvid --enable-libx264 --enable-libgsm --enable-libfaac --enable-libfaad --enable-liba52 --enable-libmp3lame --enable-libvorbis --enable-libtheora --enable-pthreads --enable-swscale --enable-gpl. Video files are encoded to mp4/H.264 with the following extra command-line options: -threads 0 -coder 0 -bf 0 -refs 1 -level 30 -maxrate 10000000 -bufsize 10000000

    Read the article

  • c# HTTPListener encoding issue

    - by Rob Griffin
    I have a Java application sending HTTP requests to a C# application. The C# app uses HttpListener to listen for requests and respond. On the Java side I'm encoding the URL using UTF-8. When I send a \ character it gets encoded as %5C as expected, but on the C# side it becomes a / character. The encoding of the request object is Windows-1252, which I think may be causing the problem. How do I set the default encoding to UTF-8? Currently I'm doing this to convert the encoding:

        foreach (string key in request.QueryString.Keys)
        {
            if (key != null)
            {
                byte[] sourceBytes = request.ContentEncoding.GetBytes(request.QueryString[key]);
                string value = Encoding.UTF8.GetString(sourceBytes);
            }
        }

    This handles the non-ASCII characters I'm also sending but doesn't fix the slash problem. Examining request.QueryString[key] in the debugger shows that the / is already there.

    Read the article

  • International JRE6 or JDK6 or reading a file in "cp037" encoding scheme

    - by Reddy
    I have been trying to read a file in the "Cp037" encoding scheme using Java. I am able to read files in basic encodings like UTF-8, UTF-16 etc. After a bit of research on the internet I learned that charsets.jar, or the international version of the JRE, has to be installed to support the extended encoding schemes. Can anyone send me a link to the international version of JRE 6 or JDK 6? Or is there a better way to read a file in the Cp037 encoding scheme? P.S.: Cp037 is a character encoding scheme used on IBM mainframes. All I need is to display, on Windows, a file generated on an IBM mainframe machine, using a Java program. Thanks in advance for your help... :-)
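
    For what it's worth, a JRE that ships the extended charsets already exposes Cp037 through the standard Charset API. A minimal sketch, assuming the charset is available at runtime and using a made-up file name:

        import java.io.BufferedReader;
        import java.io.FileInputStream;
        import java.io.IOException;
        import java.io.InputStreamReader;
        import java.nio.charset.Charset;

        public class Cp037Reader {
            public static void main(String[] args) throws IOException {
                // "Cp037" (alias "IBM037") is an EBCDIC charset; it is present when the
                // JRE's extended charsets (lib/charsets.jar in the Sun/Oracle JRE) are installed.
                Charset ebcdic = Charset.forName("Cp037");
                BufferedReader reader = new BufferedReader(
                        new InputStreamReader(new FileInputStream("mainframe.dat"), ebcdic));
                try {
                    String line;
                    while ((line = reader.readLine()) != null) {
                        System.out.println(line); // plain Unicode text from here on
                    }
                } finally {
                    reader.close();
                }
            }
        }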

    Read the article

  • What encoding to use for exporting to CSV?

    - by Michael Borgwardt
    I'm developing a Java app that exports data to CSV files intended to be opened in Excel by end users. We just noticed that the export function uses Java's platform default encoding. This causes umlaut characters to be lost and unit tests to fail on the build server (which is configured with US-ASCII as its platform default encoding exactly to catch such problems). The question is: which encoding would be best to use? How does Excel determine what encoding to use? Does it use something platform-specific that presumably matches Java's platform default? I'm currently leaning towards hardcoding Cp1252 - that should cover the target machines (the deployment environment is actually specified) and would fix the test problem. From googling around, Excel does not seem to handle UTF-8 well, so that's out, and sticking to the platform default encoding would require some sort of workaround hack for the tests.
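
    For reference, a minimal sketch of what hardcoding the export charset looks like in Java; the file name and sample rows are made up, and "windows-1252" is the canonical name Java uses for Cp1252:

        import java.io.FileOutputStream;
        import java.io.IOException;
        import java.io.OutputStreamWriter;
        import java.io.Writer;
        import java.nio.charset.Charset;

        public class CsvExport {
            public static void main(String[] args) throws IOException {
                // Hardcode the charset instead of relying on the platform default,
                // so umlauts survive regardless of the build server's locale.
                Charset cp1252 = Charset.forName("windows-1252");
                Writer out = new OutputStreamWriter(new FileOutputStream("export.csv"), cp1252);
                try {
                    out.write("Name;Ort\n");
                    out.write("Müller;Köln\n");
                } finally {
                    out.close();
                }
            }
        }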

    Read the article

  • ADSL throughput loss from Reed-Solomon encoding

    - by javano
    I'm reading about ADSL, starting here, and I am confused by how the Reed-Solomon encoding used for ECC limits the available transfer rate as much as it does (nearly half). This pdf on the same subject contains the following: "A maximum of 255 sub-carriers can be used to modulate data in the downstream direction. Sub-carrier 256, the downstream Nyquist frequency, and sub-carrier 64, the downstream pilot frequency, are not available for user data, thus limiting the total number of available downstream sub-carriers to 254. Each of these 254 sub-carriers can support the modulation of 0 to 15 bits. Since the ADSL DMT data frame rate is 4000 frames per second, the maximum theoretical downstream data rate of an ADSL system is 15.24Mbps. Due to limitations in system architecture, specifically the maximum allowable Reed-Solomon codeword size (255 bytes), the maximum achievable downstream data rate is 8.16Mbps." How is this nearly halving the throughput? Is all that extra bandwidth overhead of the RS encoding? 15240000 bps (15.24 Mbps) - 8160000 bps (8.16 Mbps) = 7080000 bps (7.08 Mbps). Where has that 7 Mbps of throughput gone? EDIT: I tried to read the wiki page on Reed-Solomon, but it's all maths and algebra I don't understand. I can understand that data is split into 255-byte codewords, because that may be the maximum codeword size that still maintains accuracy during transmission; but I don't understand why that means less data is sent.
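
    One way to read the quoted figures (a hedged interpretation, not taken from the cited pdf): the 15.24 Mbps ceiling comes from the sub-carriers alone, while the 8.16 Mbps ceiling is what you get if at most one 255-byte Reed-Solomon codeword can be carried per DMT frame:

        254 sub-carriers x 15 bits x 4000 frames/s = 15,240,000 bps ~ 15.24 Mbps
        255 bytes x 8 bits x 4000 frames/s         =  8,160,000 bps ~  8.16 Mbps

    On that reading, the missing 7 Mbps is not Reed-Solomon parity overhead at all; it is modulation capacity the framing simply cannot fill once the payload per frame is capped at one codeword.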

    Read the article

  • Email encoding on IIS7

    - by Ivanhoe123
    All emails sent from the server display Cyrillic letters as weird characters, for example: Можно. Regular Latin letters are rendered properly. I have searched all across the web but was not able to find any solution. Some information about the system: a dedicated server with Windows 2008 and IIS7; the applications are in PHP (run as FastCGI); and, if it is of any importance, SmarterMail is installed on the server. The emails are sent using PHP's mail() function through a Drupal website. The encoding on that site is set up properly and there are no display issues on the front end. Where is the problem? How can I get Cyrillic letters to come through properly encoded? Any help is greatly appreciated. Thanks! UPDATE: Here are the email headers:

        Received: from SERVERNAME (mail.domain.com [12.123.123.123]) by mail.domain.com with SMTP; Fri, 16 Nov 2012 00:00:00 +0100
        From: [email protected]
        To: [email protected]
        Subject: Email subject
        Date: Fri, 16 Nov 2012 00:00:00 +0100
        MIME-Version: 1.0
        Content-Type: text/plain; charset="utf-8"
        Content-Transfer-Encoding: quoted-printable
        X-Mailer: Drupal
        Sender: [email protected]
        Return-Path: [email protected]
        Message-ID: f98b801988c642ef911ef46f7cace92b@com
        X-SmarterMail-Spam: SPF_None, ISpamAssassin 8 [raw: 5], DK_None, DKIM_None, Custom Rules []
        X-SmarterMail-TotalSpamWeight: 8

    Read the article

  • Force encoding with IIS 7

    - by Cédric Boivin
    I am trying to force the encoding with IIS 7. When I add a key Content-Type with the value charset=utf-8 to the HTTP response headers, I get this header: content-type: text/html,content-type=utf-8. Is there a way to remove the comma? Thanks Justin for your answer, but it seems it doesn't work. Here is my config; I need this for classic ASP.

        <?xml version="1.0" encoding="UTF-8"?>
        <configuration>
          <system.webServer>
            <staticContent>
              <remove fileExtension=".html" />
              <remove fileExtension=".hxt" />
              <remove fileExtension=".htm" />
              <remove fileExtension=".asp" />
              <mimeMap fileExtension=".htm" mimeType="text/html" />
              <mimeMap fileExtension=".hxt" mimeType="text/html" />
              <mimeMap fileExtension=".html" mimeType="text/html" />
              <mimeMap fileExtension=".asp" mimeType="text/html; charset=UTF-8" />
            </staticContent>
          </system.webServer>
        </configuration>

    Read the article

  • Changing character encoding in MySQL, PHP scripts, HTML

    - by Sandman
    So, I have built on this system for quite some time, and it currently outputs Latin1 (ISO-8859-1) to the web browser. These are the components:
    MySQL - all data is stored with the latin1 character set
    PHP - all PHP source files are stored on disk with Latin1 encoding
    HTML - the output carries the http-equiv="content-type" content="text/html; charset=iso-8859-1" meta tag
    So, I'm trying to understand how the encodings of the different parts interact in my workflow. If I open a PHP script, change its encoding in the text editor to UTF-8, save it back to disk and reload the page, the text is all messed up - unless the text comes from the DB. If I change the encoding of the DB to UTF-8 and keep the PHP files in latin1, I have to use utf8_decode() for the data to display correctly. And if I change the HTML meta tag, the browser reads the page incorrectly. So yeah, I realise that if I want to "upgrade" to UTF-8 I have to update all three parts of this setup for it to work correctly, but since it's a huge system with some 180k lines of PHP code and millions of posts in a lot of databases/tables, I don't want to start something like this without understanding everything correctly. What haven't I thought about? What could mess this up beyond fixing? What are the procedures for changing the encoding of an entire MySQL installation, and what's the easiest way to change the encoding of hundreds or thousands of PHP files on disk? The META tag is luckily added dynamically, so I'll change that in one place only :) Let me hear about your experiences with this.
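
    On the MySQL side, the usual per-table route is MySQL's own conversion statement. A sketch only - the database and table names are placeholders, and it assumes the bytes stored in the latin1 columns really are latin1:

        ALTER DATABASE forum CHARACTER SET utf8 COLLATE utf8_general_ci;
        ALTER TABLE posts CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;
        -- repeat for each table; dumping with mysqldump and reloading into a
        -- utf8 database is the common alternative for a large installation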

    Read the article

  • Theory: "Lexical Encoding"

    - by _ande_turner_
    I am using the term "Lexical Encoding" for lack of a better one. A Word, rather than a Letter, is arguably the fundamental unit of communication. Unicode tries to assign a numeric value to each Letter of all known Alphabets; what is a Letter to one language is a Glyph to another, and Unicode 5.1 currently assigns more than 100,000 values to these Glyphs. Of the approximately 180,000 Words used in Modern English, it is said that a vocabulary of about 2,000 Words lets you converse in general terms. A "Lexical Encoding" would encode each Word, not each Letter, and encapsulate them within a Sentence.

        // A simplified example of a "Lexical Encoding"
        String sentence = "How are you today?";
        int[] encoded = { 93, 22, 14, 330, QUERY };

    In this example each Token in the String is encoded as an Integer. The Encoding Scheme here simply assigned an int value based on a generalised statistical ranking of word usage, and assigned a constant to the question mark. Ultimately, though, a Word has both a Spelling and a Meaning. Any "Lexical Encoding" would preserve the meaning and intent of the Sentence as a whole and not be language-specific: an English sentence would be encoded into "...language-neutral atomic elements of meaning..." which could then be reconstituted into any language with a structured Syntactic Form and Grammatical Structure. What are other examples of "Lexical Encoding" techniques? If you are interested in where the word-usage statistics come from: http://www.wordcount.org
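
    As a toy illustration of the word-as-unit idea, a dictionary that hands out integer codes on first sight gives the flavour of such an encoding, though it only captures spelling, not meaning. A sketch in Java; the class and its behaviour are made up for illustration:

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        public class LexicalEncoder {
            private final Map<String, Integer> codes = new HashMap<String, Integer>();
            private final List<String> words = new ArrayList<String>();

            // Assign each previously unseen word the next free integer code.
            public int[] encode(String sentence) {
                String[] tokens = sentence.toLowerCase().replaceAll("[^a-z ]", "").split("\\s+");
                int[] out = new int[tokens.length];
                for (int i = 0; i < tokens.length; i++) {
                    Integer code = codes.get(tokens[i]);
                    if (code == null) {
                        code = Integer.valueOf(words.size());
                        codes.put(tokens[i], code);
                        words.add(tokens[i]);
                    }
                    out[i] = code.intValue();
                }
                return out;
            }

            // Reconstitute the (lower-cased, punctuation-stripped) sentence.
            public String decode(int[] encoded) {
                StringBuilder sb = new StringBuilder();
                for (int i = 0; i < encoded.length; i++) {
                    if (i > 0) sb.append(' ');
                    sb.append(words.get(encoded[i]));
                }
                return sb.toString();
            }
        }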

    Read the article

  • How to retain similar character encoding

    - by Mystere Man
    I have a log file that contains the half character ½. I need to process this log file and rewrite certain lines, which contain that character, to a new file. However, when I write out the file the characters appear incorrectly in Notepad. I know this is some kind of encoding issue, and I'm not sure whether it's just that the files I'm writing don't contain the correct BOM, or something else. I've tried reading and writing the file with every available option in the Encoding enumeration. I'm using this code:

        string line;
        // Note: I've tried every member of the Encoding enumeration here
        using (StreamReader sr = new StreamReader(file, Encoding.Unicode))
        using (StreamWriter sw = new StreamWriter(newfile, false, Encoding.Unicode))
        {
            while ((line = sr.ReadLine()) != null)
            {
                // processing code; I do not alter the lines, they are copied verbatim,
                // but I do not write every line that I read
                sw.WriteLine(line);
            }
        }

    When I view the original log in Notepad, the half character displays correctly. When I view the new file, it does not. Can anyone help me solve this?

    Read the article

  • 2 pass encoding or not?

    - by marco.ragogna
    I would like to back up some movies from DVD with File Factory. In the output settings, the 2-pass encoding option is disabled by default. Do I need to enable it for better quality, and is it worth it?

    Read the article

  • How to correct character encoding in IE8 native json ?

    - by mike_t2e
    I am using JSON with Unicode text and having a problem with IE8's native JSON implementation.

        <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
        <script>
            var stringified = JSON.stringify("สวัสดี olé");
            alert(stringified);
        </script>

    With json2.js or Firefox's native JSON, the alerted string is the same as the original one. IE8, on the other hand, returns Unicode escapes rather than the original text: \u0e2a\u0e27\u0e31\u0e2a\u0e14\u0e35 ol\u00e9. Is there an easy way to make IE behave like the others, or to convert this string back to how it should be? And would you regard this as a bug in IE? I thought native JSON implementations were supposed to be drop-in identical replacements for json2.js.

    Read the article

  • Encoding movie files into h264

    - by Shiki
    I found some topics about archiving into H.264, but those were about the generic questions (is it worth it, which codec to use). I want to use H.264, with CUDA if possible. So far the only usable x264 encoder I have found is Avidemux, but it produces an unwatchable, really blurry video file even with the best profile and all settings maxed out. Please describe in detail what to use, where to get it (whether it is free doesn't matter), what settings to apply, etc. Thanks in advance. (OS: Windows 7 Ultimate x64; the video card is a CUDA-capable, VP2-capable XFX GTX 260.) Of course, if there is an up-to-date duplicate, just comment with the link and I'll remove the question ASAP.

    Read the article

  • Fonts and encoding in windows live mail.

    - by Looser
    Sometimes I receive non-English emails (Arabic, for example). When I open them in Windows Live Mail they are not shown correctly; for example, something like this: &1588;&1603;&1585;&1575; &1610;&1575;&1605;&1575;&1606; (I changed it a bit so it displays here the way I want). I had a look at the options; the encoding was set to Arabic (Windows), but I didn't find anything else. When I open the same mail on Yahoo, however, there is no problem. What can I do?

    Read the article

  • VirtualBox Shared Folder encoding issue

    - by Somebody
    I'm running Ubuntu in VirtualBox and have a shared folder mounted into the guest, which I access from Ubuntu. The problem is that when I edit and save files from the shared folder in Windows, some strange symbols appear at the end of the edited file. There must be some encoding issue. Doesn't VirtualBox automatically convert files to Unix conventions? To fix it, I have to re-mount the shared folder inside Ubuntu every time I edit a file. Is there any way to avoid re-mounting after each edit? I'm mounting like this:

        mount -t vboxsf SVN /opt/htdocs/

    Thanks.

    Read the article

  • <?xml version="1.0" encoding="UTF-8"?> not <?xml version='1.0' encoding='UTF-8'?>

    - by user2446702
    I am using lxml with tree.write(xmlFileOut, pretty_print=True, xml_declaration=True, encoding='UTF-8') to write out my opened and edited XML file, but I absolutely need the XML declaration to be <?xml version="1.0" encoding="UTF-8"?> and NOT <?xml version='1.0' encoding='UTF-8'?>. Now, I know the two are exactly the same as far as XML is concerned, but I am dealing with a very tricky customer who absolutely has to have " in the declaration and not '. I have searched everywhere but can't find the answer. Could I create the declaration myself and add it to the head of the XML somehow? Could I tell lxml that this is the declaration I need?

    Read the article

  • "Fix" String encoding in Java

    - by Nico
    I have a String that was created from a byte[] array using the UTF-8 encoding. However, it should have been created using another encoding (windows-1252). Is there a way to convert this string back to the right encoding? I know it would be easy if I had access to the original byte array, but in my case it's too late, because the string is handed to me by a closed-source library. Thanks, Nico
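
    A minimal sketch of the usual round-trip attempt, under the stated assumption that the bytes were decoded as UTF-8 but were really windows-1252: re-encode the string with the charset that was wrongly used for decoding, then decode the resulting bytes with the intended one. This only recovers the text if the UTF-8 decoding step was lossless; byte sequences that were not valid UTF-8 have already been replaced and cannot be brought back.

        import java.io.UnsupportedEncodingException;

        public class ReDecode {
            public static void main(String[] args) throws UnsupportedEncodingException {
                String wronglyDecoded = "..."; // placeholder for the string handed over by the library

                // Undo the wrong decoding step, then redo it with the intended charset.
                byte[] originalBytes = wronglyDecoded.getBytes("UTF-8");
                String fixed = new String(originalBytes, "windows-1252");

                System.out.println(fixed);
            }
        }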

    Read the article

  • iconv supports too few encodings

    - by schemacs
    iconv -l lists too few encodings on CentOS 6.5:

        $ iconv -l
        10646-1:1993, 10646-1:1993/UCS4, ANSI_X3.4-1968, ANSI_X3.4-1986, ANSI_X3.4, ASCII,
        CP367, CSASCII, CSUCS4, IBM367, ISO-10646, ISO-10646/UCS2, ISO-10646/UCS4,
        ISO-10646/UTF-8, ISO-10646/UTF8, ISO-IR-6, ISO-IR-193, ISO646-US, ISO_646.IRV:1991,
        OSF00010020, OSF00010100, OSF00010101, OSF00010102, OSF00010104, OSF00010105,
        OSF00010106, OSF05010001, UCS-2, UCS-2BE, UCS-2LE, UCS-4, UCS-4BE, UCS-4LE,
        UCS2, UCS4, UNICODEBIG, UNICODELITTLE, US-ASCII, US, UTF-8, UTF8, WCHAR_T

    On my Ubuntu box the list is much longer, and the behaviour differs too. CentOS 6.5:

        $ php -a
        php > echo iconv('utf8', 'gbk', 'abc');
        PHP Notice: iconv(): Wrong charset, conversion from `utf8' to `gbk' is not allowed in php shell code on line 1
        php > quit
        $ php -i|grep iconv
        iconv
        iconv support => enabled
        iconv implementation => glibc
        iconv library version => 2.12
        iconv.input_encoding => ISO-8859-1 => ISO-8859-1
        iconv.internal_encoding => ISO-8859-1 => ISO-8859-1
        iconv.output_encoding => ISO-8859-1 => ISO-8859-1

    Ubuntu 14.04:

        $ php -a
        Interactive mode enabled
        php > echo iconv('utf8', 'gbk', "abc\n");
        abc
        php > quit
        $ php -i|grep iconv
        iconv
        iconv support => enabled
        iconv implementation => glibc
        iconv library version => 2.19
        iconv.input_encoding => ISO-8859-1 => ISO-8859-1
        iconv.internal_encoding => ISO-8859-1 => ISO-8859-1
        iconv.output_encoding => ISO-8859-1 => ISO-8859-1

    I don't want to recompile glibc (that would be a huge amount of work). Any idea how to add support for new encodings?

    Read the article

  • Is gstreamer the best encoder for vorbis or is there a better encoding engine I should use?

    - by sayth
    I have Sound Juicer installed and I want to rip to Vorbis (.ogg). Is GStreamer the best encoder for Vorbis, or is there a better encoding engine I should use? The default GStreamer profile is:

        audio/x-raw-float,rate=44100,channels=2 ! vorbisenc name=enc quality=0.5 ! oggmux

    I am going to raise the quality to 0.7, but that achieves nothing if GStreamer isn't the best encoder. Any suggestions for high-quality ripping? Edit: a good answer to this will also be the top search result in Google for "best vorbis encoding engine". Double edit: It appears oggenc itself is the best encoder, which rules out using Sound Juicer to rip CDs, as it uses GStreamer. I have installed oggenc and am testing the command-line ripper abcde. Found a good configuration for it here: oggenc config for abcde

    Read the article

  • MySQL encoding problem after site move

    - by Quan Zhou
    Guys, I need your help. After my friend lost his database on Dreamhost last month, he decided to move his WordPress-based blog (written in Chinese) to my server. He uses a WordPress plugin called wp-db-backup to perform regular DB backups. The two servers:

        Dreamhost: Linux 2.6.31.5-modsign-aufs2-grsec-2-opt, mysql Ver 14.12 Distrib 5.0.16, for pc-linux-gnu (i386) using readline 5.0, apache2 (unknown version)
        My server: Linux li159-46 2.6.32.12-x86_64-linode12, mysql Ver 14.14 Distrib 5.1.45, for debian-linux-gnu (x86_64) using readline 6.1, nginx 0.8.36

    His site's encoding was UTF-8 in both wp-config and the DB. I imported his DB backup file as UTF-8 (the default), synced the files from Dreamhost with rsync, and then changed nothing but the DB address. But the first time I looked at the "new" site it was full of unreadable characters. I have met this problem before; I tried changing the charset in the browser, but none of the settings makes it display properly. Then I converted his DB to GB18030, and that works, but only if the browser charset is set to GB18030 or GBK, whereas by default the pages are treated as UTF-8. I tried editing the headers but it doesn't work. What can I do now? Thx~~

    Read the article
