Search Results

Search found 22988 results on 920 pages for 'url encoding'.

  • How to download file into string with progress callback?

    - by Kaminari
    I would like to use WebClient (or is there a better option?), but there is a problem. I understand that opening the stream takes some time and that can't be avoided. However, reading it this way takes strangely longer than downloading the whole thing at once. Is there a best way to do this? I mean two ways: to a string and to a file. Progress is my own delegate, and it's working fine.

    FIFTH UPDATE: Finally, I managed to do it. In the meantime I checked out some solutions, which made me realize that the problem lay elsewhere. I tested custom WebResponse and WebRequest objects, the libCURL.NET library, and even sockets. The difference in time was gzip compression: the compressed stream length was simply half the normal stream length, which is why the browser's download took less than 3 seconds. Here is the code, in case someone wants to know how I solved it (some headers are not needed):

        public static string DownloadString(string URL)
        {
            WebClient client = new WebClient();
            client.Headers["User-Agent"] = "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/532.5 (KHTML, like Gecko) Chrome/4.1.249.1045 Safari/532.5";
            client.Headers["Accept"] = "application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5";
            client.Headers["Accept-Encoding"] = "gzip,deflate,sdch";
            client.Headers["Accept-Charset"] = "ISO-8859-2,utf-8;q=0.7,*;q=0.3";

            Stream inputStream = client.OpenRead(new Uri(URL));
            MemoryStream memoryStream = new MemoryStream();
            const int size = 32 * 4096;
            byte[] buffer = new byte[size];

            if (client.ResponseHeaders["Content-Encoding"] == "gzip")
            {
                inputStream = new GZipStream(inputStream, CompressionMode.Decompress);
            }

            int count = 0;
            do
            {
                count = inputStream.Read(buffer, 0, size);
                if (count > 0)
                {
                    memoryStream.Write(buffer, 0, count);
                }
            } while (count > 0);

            string result = Encoding.Default.GetString(memoryStream.ToArray());
            memoryStream.Close();
            inputStream.Close();
            return result;
        }

    I think the async functions will be almost the same. I will simply use another thread to fire this function; I don't need precise progress indication.
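    A shorter route to the same transparent gzip handling (a sketch, not from the original post; the GzipWebClient name is made up) is to subclass WebClient and let HttpWebRequest decompress via AutomaticDecompression:

        using System;
        using System.Net;

        // Hypothetical helper: a WebClient that requests and decompresses gzip/deflate.
        public class GzipWebClient : WebClient
        {
            protected override WebRequest GetWebRequest(Uri address)
            {
                var request = (HttpWebRequest)base.GetWebRequest(address);
                // The framework adds the Accept-Encoding header and inflates the response.
                request.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate;
                return request;
            }
        }

        // Usage: string page = new GzipWebClient().DownloadString("http://example.com/");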

  • Dreamweaver UTF-8 encoded PHP page displays wrong Chinese character in IE and Chrome, correct in FF

    - by user1334485
    I have an issue with character encoding. I have this page: http://www.studiomille.jp/class/ (it's in Japanese, but the character in question is from Chinese, I think). FF shows it correctly; IE (all versions) and Chrome don't — compare the FF and IE screenshots (there are other characters that differ throughout the site; this is just one example). Everything is set to UTF-8:

    * PHP sends header: Content-Type:text/html; charset=UTF-8
    * PHP starts with: mb_language('uni'); mb_internal_encoding('UTF-8');
    * meta tag: <meta http-equiv="Content-Type" content="text/html; charset=utf-8"/>
    * all files are saved with UTF-8 encoding with DreamWeaver CS3
    * the same font is used in all the browsers.

    On that page nothing comes from the db; everything is hard-coded. The site behaves the same on my localhost too. So why does only FF get it right, and how can I make it work in IE as well?

  • A reasonable way to add attributes to an XML root element in C#

    - by DrLazer
    The function "WriteStartElement" does not return anything. I find this a little bizzare. So up until now I have been doing it like this. XmlDocument xmlDoc = new XmlDocument(); XmlTextWriter xmlWriter = new XmlTextWriter(m_targetFilePath, System.Text.Encoding.UTF8); xmlWriter.Formatting = Formatting.Indented; xmlWriter.WriteProcessingInstruction("xml", "version='1.0' encoding='UTF-8'"); xmlWriter.WriteStartElement("client"); xmlWriter.Close(); xmlDoc.Load(m_targetFilePath); XmlElement root = xmlDoc.DocumentElement; Saving the doc, then reloading it to get hold of the start element so i can write attributes to it. Does anybody know the correct way of doing this because I'm pretty sure what I'm doing isn't right. I tried to use xmlWriter.AppendChild() but it doesnt seem to write out anything. :(

  • What is the most efficient way to encode an arbitrary GUID into readable ASCII (33-127)?

    - by mark
    Dear ladies and sirs. The standard string representation of a GUID takes 36 characters. Which is very nice, but also really wasteful. I am wondering how to encode it in the shortest possible way using all the ASCII characters in the range 33-127. The naive implementation produces 22 characters, simply because 128 bits / 6 bits per character yields 22. Huffman coding is my second-best idea; the only question is how to choose the codes... Any more ideas? Thanks. P.S. The encoding must be lossless, of course.
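    One concrete scheme (a sketch, not a reviewed implementation): treat the GUID as a 128-bit unsigned integer and write it in base 85. Since 85^20 > 2^128, 20 characters drawn from codes 33-117 always suffice, beating the naive 6-bits-per-character count of 22:

        using System;
        using System.Numerics;
        using System.Text;

        static string EncodeGuid(Guid guid)
        {
            // An extra zero byte forces BigInteger to read the 16 bytes as unsigned.
            byte[] bytes = new byte[17];
            guid.ToByteArray().CopyTo(bytes, 0);
            BigInteger value = new BigInteger(bytes);

            // 85^20 > 2^128, so 20 base-85 digits always suffice.
            var sb = new StringBuilder(20);
            for (int i = 0; i < 20; i++)
            {
                sb.Append((char)('!' + (int)(value % 85)));  // '!' is ASCII 33
                value /= 85;
            }
            return sb.ToString();
        }

    Decoding reverses the loop: scan the string from the end, multiplying an accumulator by 85 and adding back each digit.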

  • How do I convert an NSString into a TIS-620 encoded string

    - by MacPC
    In the Apple documentation, I can see that there's a way to convert a UTF-8 string to an ASCII string like this:

        NSData *asciiData = [theString dataUsingEncoding:NSASCIIStringEncoding allowLossyConversion:YES];
        NSString *asciiString = [[NSString alloc] initWithData:asciiData encoding:NSASCIIStringEncoding];

    But my app requires a TIS-620 string to post to a site, so I tried to do the same thing:

        NSData *asciiData = [newPost.header dataUsingEncoding:kCFStringEncodingMacThai allowLossyConversion:YES];
        NSString *asciiString = [[NSString alloc] initWithData:asciiData encoding:kCFStringEncodingMacThai];
        NSLog(@"%@", asciiString);

    The output I got looks like this: ???????????. Does anyone know how to convert the NSString to TIS-620 properly? Thanks so much.

  • In the JSON spec, what does "Since the first two characters of a JSON text will always be ASCII characters" mean?

    - by dan gibson
    The spec is http://www.ietf.org/rfc/rfc4627.txt?number=4627. It contains this:

        Encoding

        JSON text SHALL be encoded in Unicode. The default encoding is UTF-8.

        Since the first two characters of a JSON text will always be ASCII characters [RFC0020], it is possible to determine whether an octet stream is UTF-8, UTF-16 (BE or LE), or UTF-32 (BE or LE) by looking at the pattern of nulls in the first four octets.

    What does "Since the first two characters of a JSON text will always be ASCII characters [RFC0020]" mean? I've looked at RFC0020 but couldn't find anything about it. JSON could begin with {" or { " (i.e. whitespace before the quote).
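    The point of the sentence is that a detector never has to understand the content: whatever the first two characters are, they are ASCII, so the pattern of zero bytes among the first four octets identifies the encoding. A sketch of the RFC's section-3 table (my own rendering, not code from the RFC):

        using System.Text;

        static Encoding DetectJsonEncoding(byte[] b)
        {
            // Pattern of nulls in the first four octets (RFC 4627, section 3).
            if (b[0] == 0 && b[1] == 0) return new UTF32Encoding(bigEndian: true, byteOrderMark: false);  // 00 00 00 xx -> UTF-32BE
            if (b[0] == 0)              return Encoding.BigEndianUnicode;                                 // 00 xx 00 xx -> UTF-16BE
            if (b[1] == 0 && b[2] == 0) return new UTF32Encoding(bigEndian: false, byteOrderMark: false); // xx 00 00 00 -> UTF-32LE
            if (b[1] == 0)              return Encoding.Unicode;                                          // xx 00 xx 00 -> UTF-16LE
            return Encoding.UTF8;                                                                         // xx xx xx xx -> UTF-8
        }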

  • How to get Apache mod_cache to work with mod_wsgi (Django)?

    - by harmv
    I thought I'd speed up my Django projects by letting Apache do some caching for me. Unfortunately, I see that Apache never caches my dynamic pages. Does mod_cache have problems with code served through mod_wsgi? My Apache config:

        <VirtualHost *:80>
            ServerName myserver.com

            CacheEnable mem /
            # for testing only
            CacheIgnoreQueryString On
            CacheIgnoreCacheControl On

            WSGIDaemonProcess aname processes=1 threads=25
            WSGIProcessGroup aname

            Alias /media/ /home/harm/projects/test/media/
            WSGIScriptAlias / /home/harm/projects/test/wsgi.py

    The response does have the correct caching headers:

        Content-Length    2647
        Content-Encoding  gzip
        Vary              Accept-Encoding
        Cache-Control     public, max-age=3600
        Keep-Alive        timeout=15, max=100
        Connection        Keep-Alive
        Content-Type      application/x-javascript

    Am I missing something?

  • Decoding utf16 in Perl?

    - by Geo
    If I open a file (and specify an encoding directly):

        open(my $file,"<:encoding(UTF-16)","some.file") || die "error $!\n";
        while(<$file>) {
            print "$_\n";
        }
        close($file);

    I can read the file contents nicely. However, if I do:

        use Encode;
        open(my $file,"some.file") || die "error $!\n";
        while(<$file>) {
            print decode("UTF-16",$_);
        }
        close($file);

    I get the following error:

        UTF-16:Unrecognised BOM d at F:/Perl/lib/Encode.pm line 174

    How can I make it work with decode?

  • Fastest way to convert a file from latin1 to utf-8 in Python

    - by xsaero00
    I need the fastest way to convert files from latin1 to utf-8 in Python. The files are large, ~2 GB (I am moving DB data). So far I have:

        import codecs

        infile = codecs.open(tmpfile, 'r', encoding='latin1')
        outfile = codecs.open(tmpfile1, 'w', encoding='utf-8')
        for line in infile:
            outfile.write(line)
        infile.close()
        outfile.close()

    but it is still slow. The conversion takes a quarter of the whole migration time. I could also use a Linux command-line utility if it is faster than native Python code.

  • How to save state of the app when app terminates?

    - by user164589
    Hi guys. I am trying to save the app state by encoding it when the app terminates. I've found a solution related to this issue, but I don't know how to use it. I am trying to implement encoding and decoding like this: http://cocoaheads.byu.edu/wiki/nscoding

    In CustomObject.h:

        @interface CustomObject : NSObject <NSCoding> {
            NSArray *someArray;
        }

    In CustomObject.m:

        @implementation CustomObject

        // Other method implementations here

        - (void) encodeWithCoder:(NSCoder*)encoder {
            [encoder encodeObject:someArray forKey:@"someArray"];
        }

        - (id) initWithCoder:(NSCoder*)decoder {
            if (self = [super init]) {
                someArray = [[decoder decodeObjectForKey:@"someArray"] retain];
            }
            return self;
        }

        @end

    The object I want to save is another NSArray, not "someArray" in CustomObject. Call it "MySaveObject". I want to pass "MySaveObject" to "someArray" in CustomObject. Actually, I don't know how to encode "MySaveObject" and pass it to "someArray" in CustomObject. Thanks in advance.

  • converting from int to hex

    - by Catherine
    I want to convert some ints to hex, but I'm getting something like "?|???plL4?h??N{" from 12345. Why?

        int t = 12345;
        System.Security.Cryptography.MD5CryptoServiceProvider ano = new System.Security.Cryptography.MD5CryptoServiceProvider();
        byte[] d_ano = System.Text.Encoding.ASCII.GetBytes(t.ToString());
        byte[] d_d_ano = ano.ComputeHash(d_ano);
        string st_data1 = System.Text.Encoding.ASCII.GetString(d_d_ano);
        string st_data = st_data1.ToString();

    I'm using it in a Windows Forms app, not in a console.
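    The string looks like garbage because the raw MD5 hash bytes are being decoded as ASCII text rather than formatted as hex. A sketch of both likely intents (variable names follow the question):

        // Hex of the MD5 hash bytes, e.g. "827CCB0EEA8A706C4C34A16891F84E7B" for "12345".
        string hashHex = BitConverter.ToString(d_d_ano).Replace("-", "");

        // If the goal is just the int itself in hex, no hashing is needed: 12345 -> "3039".
        string intHex = t.ToString("X");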

  • Local Live Quicktime Video Broadcast, latency?

    - by Snowwire
    I'm looking into the feasibility of using a local server to distribute live video of a conference to delegates in the same room. They would still hear the live audio coming from the speaker, so only the video would be streamed. I was considering Darwin Streaming Server (a lot of iPhone users to support) and encoding with H.264. My main concern is latency across the network. Even with everything running locally, would there be lip-sync issues between the live audio and the 'live' video stream? It feels like there will be problems given the encoding, broadcasting, and decoding to be completed, but I've never done anything like this before, so I thought I would check. Thanks

  • How to pass parameter value to XSL?

    - by Manish
    Suppose I have an XSL file as follows:

        <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
            <xsl:output method="html" encoding="utf-8" omit-xml-declaration="yes" indent="yes"/>
            <xsl:param name="sortKey" select="'firstname'"/>
        </xsl:stylesheet>

    Then an XML file as follows:

        <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
        <?xml-stylesheet type="text/xsl" href="XYZ.xsl"?>
        <ABC>
        </ABC>

    I want to pass a value to the XSL parameter sortKey from the XML. Can I do that? If yes, how? I'm not sure if I'm correct and whether this can be done.
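    Stylesheet parameters are normally supplied by whatever runs the transform, not by the input XML document itself. A sketch using .NET's XslCompiledTransform (the file names here are made up):

        using System.Xml;
        using System.Xml.Xsl;

        var xslt = new XslCompiledTransform();
        xslt.Load("XYZ.xsl");

        // Supplies a value for <xsl:param name="sortKey"/>, overriding the 'firstname' default.
        var args = new XsltArgumentList();
        args.AddParam("sortKey", "", "lastname");

        using (XmlWriter writer = XmlWriter.Create("out.html"))
        {
            xslt.Transform("input.xml", args, writer);
        }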

  • Efficient way to organise data file in columns with Python

    - by user1700959
    I'm getting an output data file from a program that looks like this, with more than one line for each time step:

        0.0000E+00 0.0000E+00 0.0000E+00 0.0000E+00 0.0000E+00 0.0000E+00 0.0000E+00 0.0000E+00
        0.0000E+00 0.0000E+00
        7.9819E-06 1.7724E-02 2.3383E-02 3.0048E-02 3.8603E-02 4.9581E-02 5.6635E-02 4.9991E-02
        3.9052E-02 3.0399E-02
        ....

    I want to arrange it in ten columns. I have made a Python script using regular expressions to delete \n on the proper lines, but I think there should be a simpler, more elegant way to do it. Here is my script:

        import re

        with open('inputfile', encoding='utf-8') as file1:
            datai = file1.read()

        dataf = re.sub(r'(?P<nomb>( \d\.\d\d\d\dE.\d\d){8})\n', '\g<nomb>', datai)

        with open('result.txt', mode='w', encoding='utf-8') as resultfile:
            resultfile.write(dataf)

    Thanks in advance

  • Website Images not indexed by Google, Yahoo and Bing

    - by Nabil Kadimi
    Hi, my classifieds website has been online since 2006. The HTML pages are indexed and rank as expected, whereas a search on Google Images for site:example.com returns nothing, and Yahoo or Bing return only a few image results, 8 to 10. Here is an example of the response HTTP headers as reported by Firebug:

        Date              Sat, 15 Jan 2011 20:38:21 GMT
        Server            Apache
        Cache-Control     max-age=34560000
        Expires           Sun, 19 Feb 2012 20:38:21 GMT
        Accept-Ranges     bytes
        Last-Modified     Fri, 14 Jan 2011 21:59:16 GMT
        Vary              Accept-Encoding
        Content-Encoding  gzip
        Content-Length    21675
        Connection        close
        Content-Type      image/jpeg

    What should I do to tell search engines to index my website images? Thanks in advance.

  • Google CDN not gzipping jquery

    - by thermal7
    If I navigate here: http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js I download 70k using Firefox 3.6.3, and I can confirm it is sending Accept-Encoding: gzip. If I use the Microsoft one: http://ajax.microsoft.com/ajax/jquery/jquery-1.4.2.min.js I download 30k (and it comes through as Content-Encoding: gzip). I am also experiencing this when using jQuery 1.4.2 on regular sites, e.g. jquery.com. Funnily enough, Stack Overflow, which references jQuery 1.3.2 on the Google CDN, is coming through gzipped. Why is this happening? Is it some kind of issue with Google, or am I missing something? I live in Melbourne, Australia.

  • Apache/2.2.9, mod_perl/2.0.4: status_line doesn't seem to work

    - by Eugene
    The response is prepared this way:

        my $r = Apache2::RequestUtil->request;
        $r->status_line('500 Internal Server Error');
        $r->send_cgi_header("Content-Type: text/html; charset=UTF-8\n\n");
        print 'Custom error message';

    Request:

        GET /test_page HTTP/1.1
        Host: www.xxx.xxx

    Response:

        HTTP/1.1 200 OK
        Date: XXXXXXXXXX
        Server: Apache/xxxxxxxx
        Vary: Accept-Encoding
        Transfer-Encoding: chunked
        Content-Type: text/html; charset=UTF-8

        44
        Custom error message
        0

    Why is the response status 200 and not 500?

  • Inconsistent Behavior From Declared DLL Function

    - by Steven
    Why might my GetRawData declared function return a correct value when called from my VB.NET application, but return zero when called from my ASP.NET page? The code is exactly the same except for the class type (Form / Page) and the calling event handler (Form1_Load, Page_Load).

    Note: In the actual code, #DLL# and #RAWDATAFILE# are both absolute filenames to my DLL and raw data file respectively.

    Note: The DLL file was not created by Visual Studio.

    Form1.vb:

        Public Class Form1

            Declare Auto Function GetRawData Lib "#DLL#" (ByVal filename() As Byte, _
                                                          ByVal byteArray() As Byte, _
                                                          ByVal length As Int32) As Int32

            Private Sub Form1_Load(ByVal sender As System.Object, _
                                   ByVal e As System.EventArgs) Handles MyBase.Load
                Dim buffer(10485760) As Byte
                Dim msg As String, length As Integer = 10485760
                Dim filename As String = "#RAWDATAFILE#"
                length = GetRawData(Encoding.Default.GetBytes(filename), buffer, length)

    Default.aspx.vb:

        Partial Public Class _Default
            Inherits System.Web.UI.Page

            Declare Auto Function GetRawData Lib "#DLL#" (ByVal filename() As Byte, _
                                                          ByVal byteArray() As Byte, _
                                                          ByVal length As Int32) As Int32

            Protected Sub Page_Load(ByVal sender As Object, _
                                    ByVal e As System.EventArgs) Handles Me.Load
                Dim buffer(10485760) As Byte
                Dim msg As String, length As Integer = 10485760
                Dim filename As String = "#RAWDATAFILE#"
                length = GetRawData(Encoding.Default.GetBytes(filename), buffer, length)

  • Multiple databases support in Symfony

    - by Ngu Soon Hui
    I am using Propel as my DAL for my Symfony project. I can't seem to get my application to work across two or more databases. Here's my schema.yml:

        db1:
          lkp_User:
            pk_User:  { type: integer, required: true, primaryKey: true, autoIncrement: true }
            UserName: { type: varchar(45), required: true }
            Password: longvarchar
            _uniques:
              Unique: [ UserName ]

        db2:
          tesco:
            Id:          { type: integer, required: true, primaryKey: true, autoIncrement: true }
            Name:        { type: varchar(45), required: true }
            Description: longvarchar

    And here's the databases.yml:

        dev:
          db1:
            param:
              classname: DebugPDO
        test:
          db1:
            param:
              classname: DebugPDO
        all:
          db1:
            class: sfPropelDatabase
            param:
              classname:  PropelPDO
              dsn:        'mysql:dbname=bpodb;host=localhost' # where the db is located
              username:   root
              password:   # pass
              encoding:   utf8
              persistent: true
              pooling:    true
          db2:
            class: sfPropelDatabase
            param:
              classname:  PropelPDO
              dsn:        'mysql:dbname=mystore2;host=localhost' # where the db is located
              username:   root
              password:   # pass
              encoding:   utf8
              persistent: true
              pooling:    true

    When I call php symfony propel-build-model, only db1 is generated; db2 is not. Any idea how to fix this problem?

  • sql server bulk copy out/postgres copy from infile

    - by Chris Curvey
    I'm starting a conversion of a system from MS SQL Server to Postgres. I have the table structures converted, and I use "bcp" to get the data out of SQL Server. On import I get:

        ERROR: invalid byte sequence for encoding "UTF8": 0x80
        HINT: This error can also happen if the byte sequence does not match the encoding expected by the server, which is controlled by "client_encoding".
        CONTEXT: COPY cm_outgoing, line 200: "200 c:\temp\200.xml 2009-10-10 01:50:44.000 1900-01-01 00:00:00.000"

    I've already used "sed" to get rid of the NUL (0x00) entries in the file, and I can't find any instances of 0x80 in the file that I'm trying to import. Any thoughts? Is there an easier way?

  • How to replicate Google "Hangouts On Air" stream combining functionality?

    - by Rob Olmos
    I've been researching this one for quite a while but haven't found any solid leads. I have a Wowza/Flash app with video-chatroom functionality and would like to combine the streams server-side into one video/audio stream, to be sent to a live YouTube channel. I've found a couple of projects such as jMixer and some helpful keywords such as "vision mixer" to help with my search, but I'm looking for any previous experience or new ideas. The other option is building something like it myself with a commercial video decoding/encoding library: decode to raw frames, stitch the frames together, then encode. I was originally going down this route but put the project on hold. What are some ideas, keywords, or existing software (open source preferred) to take these live streams and combine them into one in real time? Or is coding it myself the required route? Thanks!

  • rake db:create not working for legacy rails app (2.3.5) using MySQL (5.5.28)

    - by ridicter
    I'm a new Rails developer, and I'm working on a legacy Rails app. Whenever I run the rake db:create command, I get an error that the database couldn't be created. I have found many Stack Overflow questions related to this, but in troubleshooting nearly all permutations of solutions, I couldn't resolve the issue. I created the three DBs (dev, prod, test), created the user with all access privileges to these DBs, and ran rake db:create. I'm running Mac OS X Lion, MySQL 5.5.28, Rails 2.3.5, Ruby 1.8.7. Here are my settings:

        development:
          adapter: mysql
          encoding: utf8
          database: adva_development
          username: adva
          password: ****
          host: localhost
          socket: /tmp/mysql.sock

    Here's the error:

        Couldn't create database for {"adapter"=>"mysql", "username"=>"adva", "host"=>"localhost", "encoding"=>"utf8", "database"=>"adva_development", "socket"=>"/tmp/mysql.sock", "password"=>"****"}, charset: utf8, collation: utf8_unicode_ci (if you set the charset manually, make sure you have a matching collation)

    I have done the following troubleshooting:

    1. Verified the user and password are correct, and that the user has access to the DB. (Double-checked user access with SELECT * FROM mysql.db WHERE Db = 'adva_development' \G; the user has all privileges.)
    2. Verified the socket is correct. I don't really understand sockets, but I can plainly see it at /tmp/mysql.sock.
    3. Checked collation and character set. I found out I had created the DBs in a latin charset and collation, so I recreated them. I ran show variables like "collation_database"; and show variables like "character_set_database"; and got back utf8 and utf8_unicode_ci respectively.
    4. Followed the instructions in this question. After uninstalling the mysql gem, I ran the following, but came up with the same error:

        gem install --no-rdoc --no-ri mysql -- --with-mysql-dir=/usr/local/mysql-5.5.28-osx10.6-x86_64/bin --with-mysql-config=/usr/local/mysql-5.5.28-osx10.6-x86_64/bin/mysql_config

    Following Matt's suggestion, here's what rake --trace db:create reveals:

        ** Invoke db:create (first_time)
        ** Invoke db:load_config (first_time)
        ** Invoke rails_env (first_time)
        ** Execute rails_env
        ** Execute db:load_config
        ** Execute db:create
        Couldn't create database for {"database"=>"adva_development", "adapter"=>"mysql", "host"=>"127.0.0.1", "password"=>"woof2adva", "username"=>"adva", "encoding"=>"utf8"}, charset: utf8, collation: utf8_unicode_ci (if you set the charset manually, make sure you have a matching collation)

    After 3 days and six or seven hours, I have pretty much run out of options. I tried various random things, like replacing localhost with 127.0.0.1, to no avail. Could there be something wrong related to my specific environment? Mac OS X Lion + MySQL 5.5.28? I plan to try setting everything up in a Linux environment. Thanks!

  • Is the XML processing instruction node mandatory?

    - by ereOn
    I had a discussion with a colleague of mine about the XML processing instruction node (I'm talking about this: <?xml version="1.0" encoding="UTF-8"?>). I believe that for something to be called "valid XML", it requires a processing instruction node. My colleague states that the processing instruction node is optional, since the default encoding is UTF-8 and the version is always 1.0. This makes sense, but what does the standard say? In short, given the following file:

        <books>
            <book id="1"><title>Title</title></book>
        </books>

    Can we say that:

    1. It is valid XML?
    2. It is a valid XML node?
    3. It is a valid XML document?

    Thank you very much.

  • Turkish character problems while parsing (Android)

    - by alper35.5
    I am parsing HTML content and displaying the output on my screen. The website has Turkish characters such as çÇşŞöÖğĞıİüÜ, but I am not able to show them as proper characters; they are still printed out as question marks. My current setting is:

        Eclipse - Project - Properties - Resource - Text File Encoding = Inherited from container (Cp1254)

    I searched the web and found this solution:

        Eclipse - Project - Properties - Resource - Text File Encoding = Other: UTF-8

    However, it's not working; it only changes my files' current characters. (I have titles containing such characters in my activities.) Any help? Thanks in advance...
