Search Results

Search found 3420 results on 137 pages for 'precompiled headers'.


  • Javascript + PHP $_POST array empty

    - by Peterim
    While trying to send a POST request via xmlhttp.open("POST", "url", true) (JavaScript) to the server, I get an empty $_POST array. Firebug shows that the data is being sent. Here is the data string from Firebug: a=1&q=151a45a150.... But $_POST['q'] returns nothing. The interesting thing is that file_get_contents('php://input') does have my data (the string above), but PHP somehow doesn't recognize it. I tried both $_POST and $_REQUEST; nothing works. Headers being sent:

    POST /test.php HTTP/1.1
    Host: website.com
    User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3
    Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
    Accept-Language: en-us;q=0.7,en;q=0.3
    Accept-Encoding: gzip,deflate
    Accept-Charset: utf-8;q=0.7,*;q=0.7
    Keep-Alive: 115
    Connection: keep-alive
    Referer: http://website.com/
    Content-Length: 156
    Content-Type: text/plain; charset=UTF-8
    Pragma: no-cache
    Cache-Control: no-cache

    Thank you for any suggestions.
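    One detail worth noting in the captured headers is Content-Type: text/plain; PHP only fills $_POST when the body arrives as application/x-www-form-urlencoded (or multipart/form-data). As a hedged, language-agnostic illustration (Python here, not the poster's PHP or JavaScript), the raw body is an ordinary query string that a form-urlencoded parser splits without trouble:

        from urllib.parse import parse_qs

        # The body string captured by Firebug (truncated in the question).
        raw_body = "a=1&q=151a45a150"

        # This is roughly what a server-side form parser does once the
        # Content-Type tells it the body is application/x-www-form-urlencoded.
        print(parse_qs(raw_body))   # {'a': ['1'], 'q': ['151a45a150']}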

    Read the article

  • segmentation fault in file operations in c

    - by mekasperasky
    #include <stdio.h>

    /* This is a lexer which recognizes constants, variables, symbols, identifiers,
       functions, comments and also header files. It stores the lexemes in 3 different
       files. One file contains all the headers and the comments. Another file will
       contain all the variables, another will contain all the symbols. */

    int main()
    {
        int i;
        char a, b[20], c;
        FILE *fp1;
        fp1 = fopen("source.txt", "r"); // the source file is opened in read-only mode and passed through the lexer
        // now let's remove all the white spaces and store the rest of the words in a file
        if (fp1 == NULL)
        {
            perror("failed to open source.txt");
            //return EXIT_FAILURE;
        }
        i = 0;
        while (1)
        {
            a = fgetc(fp1);
            if (a != "")
            {
                b[i] = a;
            }
            else
            {
                fprintf(fp1, "%.20s\n", b);
                i = 0;
                continue;
            }
            i = i + 1;
            /*Switch(a)
            {
                case EOF: return eof;
                case '+': sym=sym+1;
                case '-': sym=sym+1;
                case '*': sym=sym+1;
                case '/': sym=sym+1;
                case '%': sym=sym+1;
                case ' */
        }
        return 0;
    }

    How does this code end up in a segmentation fault?

    Read the article

  • PHP Cookie Warning...

    - by Nano HE
    Hi, I am new to PHP. I practised PHP setcookie() just now and failed.

    http://localhost/test/index.php:

    <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
    <html>
    <head>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
    <title></title>
    </head>
    <body>
    <?php
    $value = 'something from somewhere';
    setcookie("TestCookie", $value);
    ?>
    </body>
    </html>

    http://localhost/test/view.php:

    <?php
    // I plan to view the cookie value via view.php
    echo $_COOKIE["TestCookie"];
    ?>

    But I failed to run index.php; IE warns like this:

    Warning: Cannot modify header information - headers already sent by (output started at C:\xampp\htdocs\test\index.php:9) in C:\xampp\htdocs\test\index.php on line 12

    I enabled cookies in my IE 6, no doubt about that. Is there anything wrong in my procedure above? Thank you. WinXP OS and XAMPP 1.7.3 used.
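    For context on the warning itself: a cookie travels as a Set-Cookie response header, and in HTTP every header has to be written before the first byte of the body, which is why output emitted ahead of setcookie() triggers "headers already sent". A minimal, language-agnostic sketch (Python, not the poster's PHP) of what the wire format looks like:

        # Hypothetical raw response: all headers, then a blank line, then the body.
        headers = [
            "HTTP/1.1 200 OK",
            "Content-Type: text/html; charset=UTF-8",
            "Set-Cookie: TestCookie=something+from+somewhere; path=/",
        ]
        body = "<!DOCTYPE HTML ...><html><body>...</body></html>"
        print("\r\n".join(headers) + "\r\n\r\n" + body)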

    Read the article

  • Java/Groovy File IO Replacing an Image File with its own Contents - Why Does This Work?

    - by jboyd
    I have some JPG files that need to be replaced at runtime with a JFIF-standardized version of themselves (we are using a vendor that gives us JPGs that do not have proper headers, so they don't work in certain applications). I am able to create a new file from the existing image, then get a buffered image from that file and write the contents right back into the file without having to delete it, and it works:

    imageSrcFolder.eachFileMatch( ~/.*\.jpg/, {
        BufferedImage bi = ImageIO.read( it )
        ImageIO.write( bi, "jpg", it )
    });

    The question I have is why? Why doesn't the file end up doubled in size? Why don't I have to delete it first? Why am I able to take a File object to an existing file and then treat it as if it were a brand new one? It seems that what I consider to be a "file" is not what the File object in Java actually is, or else this wouldn't work at all. My code does exactly what I want it to do, but I'm not convinced it always will... it just seems way too easy.
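    The part that makes this safe to attempt is that ImageIO.read decodes the entire image into memory before any writing starts, so the write never reads from the file it is overwriting. A hedged sketch of that read-fully-then-rewrite pattern with plain bytes (Python for illustration, not the original Groovy; the file name is made up, and in this sketch the "wb" reopen explicitly truncates, which is why nothing is doubled):

        path = "photo.jpg"              # hypothetical file

        with open(path, "rb") as f:
            data = f.read()             # the whole content now lives in memory

        with open(path, "wb") as f:     # reopening for writing truncates the file first
            f.write(data)               # ...and it is rewritten from the in-memory copy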

    Read the article

  • Can't install MySQL gem on Fedora 14, even after installing mysql-devel, ruby-devel, and rubygems

    - by jonderry
    I'm trying to install the mysql gem via

    sudo gem install mysql --version 2.7

    However, I get the following error:

    Building native extensions.  This could take a while...
    ...........
    ERROR:  Error installing mysql:
            ERROR: Failed to build gem native extension.

    /usr/bin/ruby extconf.rb
    checking for mysql_query() in -lmysqlclient... no
    checking for main() in -lm... yes
    checking for mysql_query() in -lmysqlclient... no
    checking for main() in -lz... yes
    checking for mysql_query() in -lmysqlclient... no
    checking for main() in -lsocket... no
    checking for mysql_query() in -lmysqlclient... no
    checking for main() in -lnsl... yes
    checking for mysql_query() in -lmysqlclient... no
    *** extconf.rb failed ***
    Could not create Makefile due to some reason, probably lack of necessary libraries and/or headers.
    Check the mkmf.log file for more details.  You may need configuration options.

    Any ideas?

    Read the article

  • protocol parsing in c

    - by nomad.alien
    I have been playing around with implementing some protocol decoders, but each time I run into a "simple" problem and I feel the way I am solving it is not optimal and there must be a better way to do things. I'm using C. Currently I'm working with some canned data and reading it in as a file, but later on it would come in via TCP or UDP.

    Here's the problem. I'm currently playing with a binary protocol at work. All fields are 8 bits long. The first field (8 bits) is the packet type. So I read in the first 8 bits and, using a switch/case, I call a function to read in the rest of the packet, since I then know its size and structure. BUT... some of these packets have nested packets inside them, so when I encounter such a packet I then have to read another 8-16 bytes, run another switch/case to see what the next packet type is, and so on. (Luckily the packets are only nested 2 or 3 deep.) Only once I have the whole packet decoded can I hand it over to my state machine for processing.

    I guess this can be a more general question as well: how much data do you have to read at a time from the socket? As much as possible? As much as what is "similar" in the protocol headers?

    So even though this protocol is fairly basic, my code is a whole bunch of switch/case statements and I do a lot of reading from the file/socket, which I feel is not optimal. My main aim is to make this decoder as fast as possible. To the more experienced people out there: is this the way to go, or is there a better way which I just haven't figured out yet? Any elegant solution to this problem?
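    As a rough illustration of the read-the-type-byte-then-dispatch structure described above (written in Python rather than the poster's C, with entirely made-up packet types and field counts), the usual building block is a helper that keeps reading until it has exactly the bytes a field needs, so partial socket reads never leak into the decoding logic:

        def read_exact(sock, n):
            """Return exactly n bytes, looping because recv() may return fewer."""
            buf = b""
            while len(buf) < n:
                chunk = sock.recv(n - len(buf))
                if not chunk:
                    raise EOFError("peer closed the connection mid-packet")
                buf += chunk
            return buf

        def read_packet(sock):
            ptype = read_exact(sock, 1)[0]          # first 8-bit field: packet type
            if ptype == 0x01:                       # hypothetical fixed-size packet
                return ("simple", read_exact(sock, 4))
            if ptype == 0x02:                       # hypothetical wrapper packet
                header = read_exact(sock, 2)
                return ("wrapped", header, read_packet(sock))   # nested packet: recurse
            raise ValueError("unknown packet type 0x%02x" % ptype)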

    Read the article

  • Server.TransferRequest returns blank page on specific server

    - by jishi
    I'm facing an issue that seems to be related to configuration. I have a web application based on MonoRail, where we utilize MonoRail's routing feature. On the first request after the application has started, the routing isn't initialized. To circumvent this, I have the following code in Application_OnError():

    public virtual void Application_OnError()
    {
        if ( // identified as routing error )
            Server.TransferRequest( Context.Request.RawUrl, false );
        return;
    }

    The problem is that our development server (which runs Server 2008 R2, with IIS 7.5 and .NET 3.5) returns a blank page without headers, but on my workstation (which runs Win7, IIS 7.5 and .NET 3.5) it works fine.

    What could be the cause of this? If the code in Application_OnError() throws an exception, what would be the expected output?

    I have verified the following:

    - The access log shows one entry; I'm not sure if a TransferRequest would show up as a second entry when invoked successfully.
    - The application actually does some work according to my internal logs, and it doesn't die, since subsequent requests work flawlessly (because routing will be active).

    Any hints on what to look for would be greatly appreciated!

    Read the article

  • setting cookies

    - by aharon
    Okay, so I'm trying to set cookies using Ruby. I'm in a Rack environment. response[name]=value will add an HTTP header into the HTTP headers hash Rack has; I know that it works. The following method doesn't work:

    def set_cookie(opts={})
      args = {
        :name => nil,
        :value => nil,
        :expires => Time.now+314,
        :path => '/',
        :domain => Cambium.uri  # contains the IP address of the dev server this is running on
      }.merge(opts)
      raise ArgumentError, ":name and :value are mandatory" if args[:name].nil? or args[:value].nil?
      response['Set-Cookie']="#{args[:name]}=#{args[:value]}; expires=#{args[:expires].clone.gmtime.strftime("%a, %d-%b-%Y %H:%M:%S GMT")}; path=#{args[:path]}; domain=#{args[:domain]}"
    end

    Why not? And how can I solve it? Thanks.
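    For comparison with the string that method builds, here is what a standard cookie library emits for the same attributes. This is just a hedged side-by-side illustration in Python (not the poster's Rack code), with a made-up name, value and domain:

        from http import cookies
        from datetime import datetime, timedelta, timezone

        c = cookies.SimpleCookie()
        c["session"] = "abc123"                      # hypothetical name/value
        c["session"]["path"] = "/"
        c["session"]["domain"] = "192.168.0.10"      # hypothetical dev-server address
        expires = datetime.now(timezone.utc) + timedelta(seconds=314)
        c["session"]["expires"] = expires.strftime("%a, %d-%b-%Y %H:%M:%S GMT")

        print(c["session"].OutputString())
        # -> something like: session=abc123; Domain=192.168.0.10; expires=...; Path=/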

    Read the article

  • Tablesorter Pager not working in Safari or Chrome

    - by Zendog74
    Hi all. I am building an app using the tablesorter plug-in and its pager plug-in. Things work perfectly fine in Firefox and IE, but in Safari (4.0.4 on a PC) and Chrome I get errors when it hits the following code that binds the tablesorter pager. I took the pager binding out and it worked, so something is going wrong somewhere in those three lines of code.

    var tableSel = calendarportlet.ut.createIdSelector(calendarportlet.addNamespace("eventListTable"));
    var pagerSel = calendarportlet.ut.createIdSelector(calendarportlet.addNamespace("pager"));
    jQuery(tableSel).tablesorter({
        widthFixed: true,
        headers: { 0: {sorter: false} },
        sortList: [[2,1],[1,0]],
        widgets: ['zebra']
    }).tablesorterPager({            // <-- error happens in here
        container: jQuery(pagerSel),
        positionFixed: false
    });

    Also, the errors only happen in Safari and Chrome when prototype.js is loaded AFTER jQuery. If they are loaded before jQuery, it works fine. However, this is a portlet and it has to play nice with other portlets, so we don't want to modify the header and the loading order of the JS libs. Anyone have any ideas on how to fix this?

    Read the article

  • Cache AJAX requests

    - by Willem
    I am sending AJAX GET requests to a PHP application and would like to cache the responses for later use. Since I am using GET this should be possible, because different requests hit different URLs (e.g. getHTML.php?page=2 and getHTML.php?page=5).

    What headers do I need to send from the PHP application to make the client's browser cache the content of a request URL properly? Do I need to declare anything in the JavaScript which handles the AJAX request (I am using jQuery's $.ajax function, which has a cache parameter)?

    How would I handle edits which change the content of e.g. getHTML.php?page=2 so that the client doesn't fall back to the cached version? Adding another parameter to the GET request, e.g. getHTML.php?page=2&version=2, is not possible because the link to the requested URL is created automatically without any checking (which is preferably the way I want it to be).

    How will the browser react when I try to AJAX-request a cached request URL? Will the AJAX request return success immediately?

    Thanks, Willem
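    As a hedged illustration of the kind of response headers involved (shown as a framework-neutral Python sketch, not the poster's PHP): Cache-Control sets how long the copy may be reused without asking again, and a content-derived ETag lets the browser revalidate cheaply after that, so an edited page gets a new tag without needing an extra URL parameter:

        import hashlib

        def caching_headers(content, max_age=3600):
            # ETag derived from the content: it changes whenever the content does.
            etag = '"%s"' % hashlib.md5(content.encode("utf-8")).hexdigest()
            return {
                "Cache-Control": "private, max-age=%d" % max_age,
                "ETag": etag,
            }

        print(caching_headers("<p>page 2</p>"))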

    Read the article

  • getting jpeg error

    - by bhaskaragr29
    <?php
    function LoadPNG()
    {
        /* Attempt to open */
        //require_once 'resizex.php';
        $imgname = "/home2/puneetbh/public_html/prideofhome/wp-content/uploads/268995481image_11.png";
        //$im = @imagecreatefrompng($imgname);
        $img = imagecreatefromstring(file_get_contents($imgname));
        //$im = imagecreatefrompng('images/frame.png');
        $im = imagecreatefromjpeg('images/frame.jpeg');
        //imagealphablending($img, false);
        //imagesavealpha($img, true);
        //$img = resizex("$url", 60, 65, 1);
        imagecopymerge($im, $img, 105, 93, 0, 0, 275, 258, 100);
        /* See if it failed */
        if (!$im) {
            /* Create a blank image */
            $im = imagecreatetruecolor(150, 30);
            $bgc = imagecolorallocate($im, 255, 255, 255);
            $tc = imagecolorallocate($im, 0, 0, 0);
            imagefilledrectangle($im, 0, 0, 150, 30, $bgc);
            /* Output an error message */
            imagestring($im, 1, 5, 5, 'Error loading ' . $imgname, $tc);
        }
        return $im;
    }

    $img = LoadPNG();
    header('Content-type: image/jpeg');
    imagejpeg($im);
    imagedestroy($im);
    imagedestroy($img);
    ?>

    I am getting these errors:

    Warning: imagecreatefromjpeg() [function.imagecreatefromjpeg]: gd-jpeg: JPEG library reports unrecoverable error: in /home2/puneetbh/public_html/prideapp/frame.php on line 11
    Warning: imagecreatefromjpeg() [function.imagecreatefromjpeg]: 'images/frame.jpeg' is not a valid JPEG file in /home2/puneetbh/public_html/prideapp/frame.php on line 11
    Warning: imagecopymerge(): supplied argument is not a valid Image resource in /home2/puneetbh/public_html/prideapp/frame.php on line 16
    Warning: Cannot modify header information - headers already sent by (output started at /home2/puneetbh/public_html/prideapp/frame.php:11) in /home2/puneetbh/public_html/prideapp/frame.php on line 34
    Warning: imagejpeg(): supplied argument is not a valid Image resource in /home2/puneetbh/public_html/prideapp/frame.php on line 35
    Warning: imagedestroy(): supplied argument is not a valid Image resource in /home2/puneetbh/public_html/prideapp/frame.php on line 36

    Read the article

  • How do I integrate a Ruby on Rails app with Postful using POST and XML?

    - by Angela
    Hi, this is a pretty basic question, but I'm not entirely clear how to do this. I am trying to use a third-party RESTful service called Postful, but I'm not clear what exactly to do.

    http://www.postful.com/service/mail is one of the services, but to upload an image I have to post the following (and I'm not sure how I actually do this). Thanks!

    http://www.postful.com/service/upload

    Be sure to include the Content-Type and Content-Length headers and the image itself as the body of the request.

    POST /upload HTTP/1.0
    Content-Type: application/octet-stream
    Content-Length: 301456

    ... file content here ...

    If the upload is successful, you will receive a response like the following:

    <?xml version="1.0" encoding="UTF-8"?>
    <upload>
      <id>290797321.waltershandy.2</id>
    </upload>
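    A hedged sketch of how such a raw octet-stream POST can be made (Python's standard http.client here, purely for illustration; the original poster would be doing this from Ruby/Rails, the file name is made up, and the real service presumably also expects authentication that the quoted docs don't show):

        import http.client

        with open("photo.jpg", "rb") as f:      # hypothetical local image
            body = f.read()

        conn = http.client.HTTPConnection("www.postful.com")
        conn.request(
            "POST",
            "/service/upload",                  # path as given in the service URL above
            body=body,
            headers={
                "Content-Type": "application/octet-stream",
                "Content-Length": str(len(body)),
            },
        )
        response = conn.getresponse()
        print(response.status, response.read())  # expect the <upload><id>...</id></upload> XML on success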

    Read the article

  • Boost's "cstdint" Usage

    - by patt0h
    Boost's C99 stdint implementation is awfully handy. One thing bugs me, though: they dump all of their typedefs into the boost namespace. This leaves me with three choices when using this facility:

    1. Use "using namespace boost"
    2. Use "using boost::[u]<type><width>_t"
    3. Explicitly refer to the target type with the boost:: prefix; e.g., boost::uint32_t foo = 0;

    Option 1 kind of defeats the point of namespaces. Even if used within local scope (e.g., within a function), things like function arguments still have to be prefixed as in option 3. Option 2 is better, but there are a bunch of these types, so it can get noisy. Option 3 adds an extreme level of noise; the boost:: prefix is often as long as the type in question.

    My question is: what would be the most elegant way to bring all of these types into the global namespace? Should I just write a wrapper around boost/cstdint.hpp that utilizes option 2 and be done with it?

    Also, wrapping the header like so didn't work on VC++ 10 (problems with standard library headers):

    namespace Foo {
        #include <boost/cstdint.hpp>
        using namespace boost;
    }
    using namespace Foo;

    Even if it did work, I guess it would cause ambiguity problems with the ::boost namespace.

    Read the article

  • Program to format any media in C++

    - by AtoMerZ
    Problem: I'm trying to write a program that formats any type of media. So far I've managed to format hard disk partitions, flash memories, SDRAM and RDX, but there's one last type of media (DVD-RAM) that I need to format, and my program fails on it. I'm using the FormatEx function in fmifs.dll. I had absolutely no idea how to use this function except for its name and that it resides in fmifs.dll. With the help of this I managed to find a simple program that uses the library, yet it still doesn't give complete information about how to use it.

    What I've tried: I'm looking for full documentation of FormatEx, its parameters, and exactly what values each parameter can take. I tried searching on Google and MSDN. This is what I found. First of all, it isn't the function I'm working with, and even putting that aside, there's not enough information on how to use the function (like which headers/libraries to use).

    EDIT: I don't have to use FormatEx; if there's an alternative, please tell me.

    Read the article

  • Silverlight ClientAccessPolicy issue...I think

    - by Terrence
    First of all, I have my ClientAccessPolicy.xml file in the root of my website.

    If I access my website using the public domain name, like h t t p://www.mydomain.com, and then go to the page where my SL control is, I get the spinning % numbers up until about 98%, then it quits and my SL control does not appear on the page.

    If I access my website using the machine name (the website is at a datacenter; we have a VPN set up), like h t t p://machinename, and then go to the page where my SL control is, everything works fine.

    This must be a ClientAccessPolicy issue, don't you think? Or what do you think the issue is? Thanks in advance. Here are the contents of my ClientAccessPolicy.xml file:

    <?xml version="1.0" encoding="utf-8" ?>
    <access-policy>
      <cross-domain-access>
        <policy>
          <allow-from http-request-headers="*">
            <domain uri="*" />
          </allow-from>
          <grant-to>
            <resource path="/" include-subpaths="true" />
          </grant-to>
        </policy>
      </cross-domain-access>
    </access-policy>

    Read the article

  • Who's setting TCP window size down to 0, Indy or Windows?

    - by François
    We have an application server which has been observed sending headers with a TCP window size of 0 at times when the network had congestion (at a client's site). We would like to know whether it is Indy or the underlying Windows layer that is responsible for adjusting the TCP window size down from the nominal 64K in adaptation to the available throughput, and we would like to be able to act upon it becoming 0 (nothing gets sent, users wait = no good). So any info, link, or pointer to Indy code is welcome...

    Disclaimer: I'm not a network specialist. Please keep the answer understandable for the average me ;-)

    Note: it's Indy9/D2007 on Windows Server 2003 SP2.

    More gory details: the TCP zero-window cases happen on the middle tier talking to the DB server. They happen at the same moments when end users complain of slowdowns in the client application (that's what triggered the network investigation). Two major network issues causing bottlenecks have been identified. The TCP zero window happened when there was network congestion, but may or may not be caused by it. We want to know when that happens and have a way to do something (logging at least) in our code. So where do we hook (in Indy?) to know when that condition occurs?

    Read the article

  • Find and Replace with Notepad++

    - by Levi
    I have a document that was converted from PDF to HTML for use on a company website, to be referenced and indexed for search. I'm attempting to format the converted document to meet my needs, and in doing so I am trying to clean up some of the junk that was pulled over from the PDF, such as page numbers, headers, and footers. Luckily, all of the lines that need to be removed come in blocks of 4 lines; unfortunately they are not exactly the same, so they cannot be removed with a simple literal replace. The lines contain numbers which are incremental, as they correlate with the pages. How can I remove the following example from my HTML file?

    Title<br>
    10<br>
    <hr>
    <A name=11></a>Footer<br>

    I've tried many different regular expression attempts, but as my skill in that area is limited I can't find the proper syntax. I'm sure I'm missing something fairly easy, as it would seem all I need is a wildcard replace for the two numbers in the code and the rest is literal. Any help is appreciated.
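    As a hedged sketch of the "literal text plus wildcards for the two numbers" idea (expressed in Python's re module rather than Notepad++'s own Find/Replace dialog, and assuming "Title" and "Footer" stand in for the real header and footer text), the varying page numbers can be matched with \d+ while everything else stays literal:

        import re

        html = open("converted.html", encoding="utf-8").read()   # hypothetical file name

        # \d+ stands in for the incrementing page numbers; \r?\n tolerates either line ending.
        block = r"Title<br>\r?\n\d+<br>\r?\n<hr>\r?\n<A name=\d+></a>Footer<br>\r?\n"
        cleaned = re.sub(block, "", html)

        open("converted.html", "w", encoding="utf-8").write(cleaned)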

    Read the article

  • HTTP Compression problems on IIS7

    - by Jonathan Wood
    I've spent quite a bit of time on this but seem to be going nowhere. I have a large page that I really want to speed up. The obvious place to start seems to be HTTP compression, but I just can't get it to work for me. After considerable searching, I've tried several variations of the code below. It kind of works, but after refreshing the browser the results seem to fall apart: they were turning to garbage when the page used caching. If I turn off caching, then the page seems right, but I lose my CSS formatting (stored in a separate file) and get an error that an included JS file contains invalid characters.

    Most of the resources I've found on the Web were either very old or focused on accessing IIS directly. My page is running on a shared hosting account and I do not have direct access to IIS 7, which it's running on.

    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        // Implement HTTP compression
        if (Request["HTTP_X_MICROSOFTAJAX"] == null) // Avoid compressing AJAX calls
        {
            // Retrieve accepted encodings
            string encodings = Request.Headers.Get("Accept-Encoding");
            if (encodings != null)
            {
                // Verify support for gzip or deflate
                encodings = encodings.ToLower();
                if (encodings.Contains("gzip") || encodings == "*")
                {
                    Response.Filter = new GZipStream(Response.Filter, CompressionMode.Compress);
                    Response.AppendHeader("Content-Encoding", "gzip");
                    Response.Cache.VaryByHeaders["Accept-encoding"] = true;
                }
                else if (encodings.Contains("deflate"))
                {
                    Response.Filter = new DeflateStream(Response.Filter, CompressionMode.Compress);
                    Response.AppendHeader("Content-Encoding", "deflate");
                    Response.Cache.VaryByHeaders["Accept-encoding"] = true;
                }
            }
        }
    }

    Is anyone having better success with this?

    Read the article

  • How do I compile and build the taf2-curb Ruby gem on Windows XP with MinGW?

    - by Laran Evans
    How do I compile and build the taf2-curb Ruby gem on Windows XP with MinGW? I tried the following, but I'm kinda fishing, unsuccessfully:

    C:\Documents and Settings\Me>gem install taf2-curb -- --with-curl-include=C:/curl-7.19.5-devel-mingw32/include --with-curl-dir=C:/curl-7.19.5 --with-curl-lib=C:/curl-7.19.5-devel-mingw32/lib --prefix=C:/MinGW --with-curllib
    Bulk updating Gem source index for: http://gems.rubyforge.org
    Updating metadata for 73 gems from http://gems.rubyonrails.org
    ......................................................................... complete
    Bulk updating Gem source index for: http://gems.github.com
    Building native extensions.  This could take a while...
    ERROR:  Error installing taf2-curb:
            ERROR: Failed to build gem native extension.

    C:/Ruby/bin/ruby.exe extconf.rb install taf2-curb -- --with-curl-include=C:/curl-7.19.5-devel-mingw32/include --with-curl-dir=C:/curl-7.19.5 --with-curl-lib=C:/curl-7.19.5-devel-mingw32/lib --prefix=C:/MinGW --with-curllib
    checking for curl-config... no
    checking for main() in true.lib... no
    *** extconf.rb failed ***
    Could not create Makefile due to some reason, probably lack of necessary libraries and/or headers.
    Check the mkmf.log file for more details.  You may need configuration options.

    Provided configuration options:
      --with-opt-dir
      --without-opt-dir
      --with-opt-include
      --without-opt-include=${opt-dir}/include
      --with-opt-lib
      --without-opt-lib=${opt-dir}/lib
      --with-make-prog
      --srcdir=.
      --curdir
      --ruby=C:/Ruby/bin/ruby
      --with-curl-dir
      --with-curl-include=${curl-dir}/include
      --with-curl-lib=${curl-dir}/lib
      --with-curllib
    extconf.rb:9: Can't find libcurl or curl/curl.h (RuntimeError)
    Try passing --with-curl-dir or --with-curl-lib and --with-curl-include options to extconf.

    Gem files will remain installed in C:/Ruby/lib/ruby/gems/1.8/gems/taf2-curb-0.4.8.0 for inspection.
    Results logged to C:/Ruby/lib/ruby/gems/1.8/gems/taf2-curb-0.4.8.0/ext/gem_make.out

    I've installed curl-7.19.5 and curl-7.19.5-devel-mingw from this URL: http://curl.haxx.se/download.html

    Help! And thanks!

    Read the article

  • Form Encoding Problems on GRAILS 2.0

    - by ArmlessJohn
    I have a Grails application that is configured everywhere to function as UTF-8. While running a debug version, the headers say Content-Type: text/html;charset=utf-8, and the meta tags agree. The browser identifies the page as UTF-8 and shows characters correctly. When posting a form, the browser correctly sends it encoded as UTF-8. When reading the data via params.paramname, however, the data looks garbled: maçã becomes maÃ§Ã£.

    Upon further inspection, it seems the form is sending UTF-8 data, but Grails seems to try to read it as if it were ISO-8859-1. Setting accept-charset="ISO-8859-1" on the form confirms this, as it fixes the problem. I also have this in applicationContext.xml:

    <bean id="characterEncodingFilter" class="org.springframework.web.filter.CharacterEncodingFilter">
      <property name="encoding">
        <value>utf-8</value>
      </property>
      <property name="forceEncoding">
        <value>true</value>
      </property>
    </bean>

    Is there a solution for this besides adding accept-charset="ISO-8859-1" to all forms in the application? Thanks.
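    The specific garbling pattern described is the classic symptom of UTF-8 bytes being decoded as ISO-8859-1. A small sanity check (Python here, just to reproduce the symptom outside Grails) shows the same transformation:

        s = "maçã"
        # Encode as UTF-8 (what the browser sends), then decode as Latin-1
        # (what the server appears to be assuming).
        print(s.encode("utf-8").decode("iso-8859-1"))   # -> maÃ§Ã£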

    Read the article

  • Approaches for cross server content sharing?

    - by Anonymity
    I've currently been tasked with finding the best solution for serving up content on our new site from another one of our sites. Several approaches suggested to me, which I've looked into, include:

    - Using SharePoint's Lists web service to grab the list through JavaScript, which results in XSS issues and is not an option.
    - Building a server-side custom web service and using SharePoint request forms to get the information; this is something I've only very briefly looked at.
    - Permitting the requesting site in the HTTP headers of the serving site, since I have access to both. This ultimately resulted in a semi-working solution that had major security holes (I had to include a username/password in the request to appease AD authentication). This was done by allowing Access-Control-Allow-Origin: *.
    - The most direct approach I could think of: simply build the web part in our new environment and have the authors manually update this content, the same as they would on the other site.

    Is any one of these suggestions more valid than another? Which would be the best approach? Are there other suggestions I may be overlooking? I'm also not sure if web crawling or content scraping really holds water here...
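    On the Access-Control-Allow-Origin point above: a common way to avoid both the wildcard and the embedded credentials is to allow only the one trusted origin and let the browser send the user's existing credentials with the request (a wildcard origin cannot be combined with credentialed requests). A hedged sketch of that header logic, framework-neutral Python with a made-up origin, not anything SharePoint-specific:

        ALLOWED_ORIGIN = "https://newsite.example.com"   # hypothetical requesting site

        def add_cors_headers(response_headers, request_origin):
            # Echo back only the single trusted origin instead of "*", and allow
            # credentialed requests so the user's own authentication can be reused.
            if request_origin == ALLOWED_ORIGIN:
                response_headers["Access-Control-Allow-Origin"] = request_origin
                response_headers["Access-Control-Allow-Credentials"] = "true"
            return response_headers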

    Read the article

  • Request size limitation when using MultipartHttpServletRequest of Spring 3.0

    - by Spiderman
    I'd like to know what the size limitations are if I upload a list of files in one client form submission using the HTTP multipart content type. On the server side I am using Spring's MultipartHttpServletRequest to handle the request. My questions:

    1. Should there be a separate per-file size limitation and a total request size limitation, or is file size the only limit, with the request capable of uploading hundreds of files as long as they are not too large?
    2. Does the Spring request wrapper read the complete request and store it in the Java heap memory, or does it store temporary files of it to be able to handle a bigger quota?
    3. Would reading the HttpServletRequest as a stream change the size limitation compared to having the application server read the complete request at once?
    4. What is the bottleneck of this process: the Java heap size, the quota of the filesystem on which my web server runs, the maximum allowed BLOB size of the database in which I am going to save the file, or Spring-internal limitations?

    Related threads that still don't have an exact answer to this:

    - does-spring-framework-support-streaming-mode-in-mutlipart-requests
    - is-there-a-way-to-get-raw-http-request-stream-from-java-servlet-handler
    - how-to-drop-body-of-a-request-after-checking-headers-in-servlet
    - apache-commons-fileupload-throws-malformedstreamexception

    Read the article

  • Ajax request (from .NET) gives me unexpected results

    - by ironnailpiercethesky
    I wanted to learn a little more about the web and .NET, so I decided to write a program that downloads binaries and web pages and does a little AJAX. The first download site I tried was too easy, so I went against uploading.com. It seemed good: no captcha, some AJAX use required, you can't guess all the data; it was perfect. After 2 hours of messing around I got frustrated. For the life of me I could not get the AJAX response to give me what I was expecting.

    I decided to use Tamper Data, clear my cookies, and click on the entry for the AJAX request. I copied the Referer, cookies and POST data and came up with this code. It does not work! I am shocked that copying those fields did not work. I even tried setting the user agent (not in the code linked) and it still did not work.

    Why isn't this POST for the AJAX request giving me the correct file? I notice the JsHttpRequest=12719132559480-xml changes based on time, so if you want to run the code you should grab a new snapshot with Tamper Data (or at least update that URL). I don't see any extra headers in Tamper Data, so does anyone know why I might not be getting the expected response? I know disabling cookies or JavaScript will make the link 'expired' automatically. I don't think JavaScript will matter because I got the data from Tamper Data at the moment of the request, so what gives? What am I forgetting, or what data am I not submitting properly?

    Read the article

  • YouTube - Encrypted cookie string

    - by Robertof
    Hello! I'm new to Stack Overflow. I'm building a YouTube downloader in PHP, but YouTube has some IP checks. Because the PHP file is on a remote server, the IP of the server != the IP of the user, and the video download fails. So, maybe I've found a solution: YouTube sends a cookie with an encrypted string, which is the user IP. I need to know the encrypted-string algorithm and how to encrypt a string with it.

    Here is the string: nQ0CrJmASJk. It could be Base64, but when I try to decode it with base64_decode it gives me strange characters. You can check the cookie by requesting the main page of YouTube and checking the "Set-Cookie" headers. You will find a cookie with the name "VISITOR_INFO1_LIVE"; its value is the encrypted string. Does anyone know what the algorithm is? Thanks.

    PS: sorry for my bad English. Cheers, Roberto.

    Read the article

  • How does SQLite on Android handle long strings?

    - by Levara
    I'm wondering how Android's implementation of SQLite handles long strings. The online SQLite documentation says that strings in SQLite are limited to 1 million characters; my strings are definitely smaller. I'm creating a simple RSS application, and after parsing an HTML document and extracting text, I'm having problems saving it to a database.

    I have 2 tables in the database, feeds and articles. RSS feeds are correctly saved to and retrieved from the feeds table, but when saving to the articles table, LogCat says that it cannot save the extracted text to its column. I don't know if other columns are causing problems too; there is no mention of them in LogCat. I'm wondering, since the text comes from an article on the web, are characters like (", ', ;) creating problems? Does Android automatically escape them, or do I have to do that? I'm using a technique for inserting similar to the one in the Notepad tutorial:

    public long insertArticle(long feedid, String title, String link, String description, String h1, String h2, String h3, String p, String image, long date) {
        ContentValues initialValues = new ContentValues();
        initialValues.put(KEY_FEEDID, feedid);
        initialValues.put(KEY_TITLE, title);
        initialValues.put(KEY_LINK, link);
        initialValues.put(KEY_DESCRIPTION, description);
        initialValues.put(KEY_H1, h1);
        initialValues.put(KEY_H2, h2);
        initialValues.put(KEY_H3, h3);
        initialValues.put(KEY_P, p);
        initialValues.put(KEY_IMAGE, image);
        initialValues.put(KEY_DATE, date);
        return mDb.insert(DATABASE_TABLE_ARTICLES, null, initialValues);
    }

    The column p is for the extracted text; h1, h2 and h3 are for headers from a page. LogCat reports only column p to be the problem. The table is created with the following statement:

    private static final String DATABASE_CREATE_ARTICLES =
        "create table articles( _id integer primary key autoincrement, feedid integer, title text, link text not null, description text," +
        "h1 text, h2 text, h3 text, p text, image text, date integer);";
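    On the escaping question: values passed through ContentValues are bound as SQL parameters rather than spliced into the SQL text, so quotes and semicolons in the article body don't need manual escaping. A small hedged demonstration of the same binding idea (Python's sqlite3 module here, not the Android API) with a long, quote-heavy string:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE articles (p TEXT)")

        # Bound as a parameter, so no escaping of quotes or semicolons is needed.
        text = 'He said "it\'s fine"; really. ' * 1000
        conn.execute("INSERT INTO articles (p) VALUES (?)", (text,))

        print(conn.execute("SELECT length(p) FROM articles").fetchone())  # (29000,)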

    Read the article
