Search Results

Search found 53597 results on 2144 pages for 'http requests'.

Page 17/2144 | < Previous Page | 13 14 15 16 17 18 19 20 21 22 23 24  | Next Page >

  • Rails' page caching vs. HTTP reverse proxy caches

    - by John Topley
    I've been catching up with the Scaling Rails screencasts. In episode 11, which covers advanced HTTP caching (using reverse proxy caches such as Varnish and Squid), they recommend only considering a reverse proxy cache once you've already exhausted the possibilities of page, action and fragment caching within your Rails application (as well as memcached etc., but that's not relevant to this question). What I can't quite understand is how an HTTP reverse proxy cache can provide a performance boost for an application that already uses page caching. To simplify matters, let's assume I'm talking about a single host. This is my understanding of how both techniques work (maybe I'm wrong):

    With page caching, the Rails process is hit initially and then generates a static HTML file that is served directly by the web server for subsequent requests, for as long as the cache for that request is valid. If the cache has expired, Rails is hit again and the static file is regenerated with the updated content, ready for the next request.

    With an HTTP reverse proxy cache, the Rails process is hit when the proxy needs to determine whether the content is stale or not. This is done using various HTTP headers such as ETag and Last-Modified. If the content is fresh, Rails responds to the proxy with an HTTP 304 Not Modified and the proxy serves its cached content to the browser, or even better, responds with its own HTTP 304. If the content is stale, Rails serves the updated content to the proxy, which caches it and then serves it to the browser.

    If my understanding is correct, doesn't page caching result in fewer hits to the Rails process? There isn't all that back and forth to determine whether the content is stale, which means better performance than reverse proxy caching. Why might you use both techniques in conjunction?
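
    Not part of the question, but as an illustration of the revalidation round trip described above, here is a minimal Python sketch of a conditional GET (standard library only; the URL is a placeholder, not taken from the question). A cache or browser replays the validator it saw earlier, and a still-fresh resource comes back as a 304 with no body:

      import urllib.request
      from urllib.error import HTTPError

      URL = "http://example.com/articles/1"   # hypothetical resource

      # First request: the origin (Rails, in the question) returns the full body
      # plus validators such as ETag / Last-Modified.
      first = urllib.request.urlopen(URL)
      etag = first.headers.get("ETag")

      # Revalidation: replay the validator, exactly as a proxy cache would.
      req = urllib.request.Request(URL, headers={"If-None-Match": etag} if etag else {})
      try:
          urllib.request.urlopen(req)
          print("200 - content changed, the cache must store the new copy")
      except HTTPError as err:
          if err.code == 304:
              print("304 Not Modified - the cached copy can be served as-is")
          else:
              raise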

    Read the article

  • How can I prevent HTTPS on another domain from wrongly showing on my HTTP-only domain?

    - by Earlz
    So, I have a blog at domain.com. This blog is HTTP-only because I would gain almost nothing from adding SSL support. I now have a web service that I want to enable SSL support on, which runs on the same server and IP address as my blog. I got it all working pretty easily, but now if I go to https://domain.com I will see a huge warning about an SSL certificate error, and if I click "OK" through the warning, I'll see the web service with SSL support, not my blog. My biggest fear with this scheme is Google indexing an HTTPS version of it and penalizing my blog because the content between the two doesn't match. How can I force my blog's domain to either not serve anything on HTTPS, to redirect back to my HTTP blog, or to serve my blog but with an invalid SSL certificate? What can I do, preferably without buying another dedicated IP for my website?

    Read the article

  • POST and PUT requests – is it just the convention?

    - by bckpwrld
    I've read quite a few articles on the difference between POST and PUT and on when the two should be used. But there are still a few things confusing me (hopefully the questions will make some sense):

    1) We should use PUT to create resources when we want clients to specify the URI of the newly created resources, and we should use POST to create resources when we let the service generate the URI of the newly created resources.

    a) Is it just by convention that a POST create request doesn't contain a URI for the newly created resource, or can a POST create request actually not contain the URI of the newly created resource?

    b) PUT has idempotent semantics and thus can safely be used for absolute updates (i.e. we send the entire state of the resource to the server), but not for relative updates (i.e. we send just the changes to the resource state), since that would violate its semantics. But I assume it's still possible for PUT to send relative updates to the server; it's just that in that case the PUT update won't be idempotent?

    2) I've read somewhere that we should "use POST to append a resource to a collection identified by a service-generated URI".

    a) What exactly does that mean? That if URIs for the resources were generated by a server (thus the resources were created via POST), then ALL subsequent resources should also be created via POST? Thus, in such a situation no resource should be created via PUT?

    b) If my assumption under a) is correct, could you elaborate on why we shouldn't create some resources via POST and some via PUT (assuming the server already contains a collection of resources created via POST)?

    REPLY:

    1) Please correct me if I'm wrong, but from your post and from the link you've posted, it seems:

    a) The Request-URI in POST is interpreted by the server as the URI of the service. Thus, it could just as easily be interpreted as the URI of a newly created resource, if the server code were written to recognize the Request-URI as such.

    b) Similarly, PUT is able to send relative updates; it's just that service code is usually written such that it will complain if PUT updates are relative.

    2) Usually, create has fallen into the POST camp, because of the idea of "appending to a collection." It's become the way to append a resource to a list of resources. I don't quite understand the reasoning behind the idea of "appending to a collection" and why this idea prefers POST for create. Namely, if we create 10 resources via PUT, then the server will contain a collection of 10 resources, and if we then create another resource, the server will append this resource to that collection (which will then contain 11 resources)?! Uh, this is kind of confusing. Thank you
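
    As an illustration of the convention discussed above (not an answer from the thread), here is a minimal Python sketch against a hypothetical REST API: PUT sends the full representation to a URI chosen by the client and can be repeated safely, while POST targets the collection URI and the server picks the new resource's location:

      import json
      import urllib.request

      BASE = "http://api.example.com"   # hypothetical service

      def send(method, path, payload):
          req = urllib.request.Request(
              BASE + path,
              data=json.dumps(payload).encode("utf-8"),
              headers={"Content-Type": "application/json"},
              method=method,
          )
          return urllib.request.urlopen(req)

      # PUT: the client chooses the URI and sends the complete resource state.
      # Repeating this request leaves the server in the same state (idempotent).
      send("PUT", "/users/alice", {"name": "Alice", "email": "alice@example.com"})

      # POST: the client targets the collection; the server generates the URI of
      # the new resource and typically reports it back in the Location header.
      resp = send("POST", "/users", {"name": "Bob", "email": "bob@example.com"})
      print(resp.headers.get("Location"))   # e.g. a server-chosen /users/42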

    Read the article

  • How to disable proxy requests once a server has been added to spammers "open proxy" list?

    - by Matt
    Hello all, I've just started at a new company and have been going over the setup of their Apache web server conf files... only to find that their Apache servers have been set up as open proxies, available to all the world, for the last two months. I've already set ProxyRequests Off in the httpd.conf file and restarted the web server, but the access log file is still growing at a horrendous rate (about a gig a day). I noticed that another question was posted on here about this (http://serverfault.com/questions/63715/apache-hit-with-proxy-request), but their access log was supposedly returning 404 errors, while mine appears to be returning 403 and 404 codes... Is this correct? Here are a few lines out of my access log:

      87.118.118.124 - - [16/Mar/2010:10:56:36 -0400] "GET http://www.c5interlude.ru/torrent/viewtopic.php?p=2501 HTTP/1.0" 404 219 "http://www.c5interlude.ru/torrent/viewtopic.php?p=2501" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; .NET CLR 1.1.4322)"
      117.41.184.27 - - [16/Mar/2010:10:56:36 -0400] "GET http://ad.xtendmedia.com/st?ad_type=iframe&ad_size=300x250&section=790074 HTTP/1.0" 404 200 "http://www.newbiegamer.com" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0; Alexa Toolbar)"
      122.224.55.222 - - [16/Mar/2010:10:56:36 -0400] "GET http://www.188woool.net/\xb4\xf3\xd4\xcb\xb4\xab\xca\xc0.rar HTTP/1.1" 403 214 "http://www.188woool.net/\xb4\xf3\xd4\xcb\xb4\xab\xca\xc0.rar" "Mozilla/4.0"
      58.55.21.40 - - [16/Mar/2010:10:56:36 -0400] "GET http://www.cpx24.com/ad1.js HTTP/1.0" 404 204 "http://thebighits.com/?id=aibux" "Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0)"
      122.226.223.188 - - [16/Mar/2010:10:56:36 -0400] "GET http://ad.reduxmedia.com/st?ad_type=iframe&ad_size=160x600&section=798636 HTTP/1.0" 404 200 "http://www.gvvu.com" "Mozilla/4.0 (compatible; MSIE 5.5; AOL 6.0; Windows 98; Win 9x 4.90)"
      84.51.109.31 - - [16/Mar/2010:10:56:36 -0400] "GET http://www.kslp.ru/forum/index.php HTTP/1.0" 404 213 "http://www.kslp.ru/forum/index.php" "Mozilla/4.0 (compatible; MSIE 7.0b; Windows NT 6.0 ; .NET CLR 2.0.50215; SL Commerce Client v1.0; Tablet PC 2.0"
      122.224.48.49 - - [16/Mar/2010:10:56:36 -0400] "GET http://www1.vip218.com/\xb2\xca\xba\xe7\xb4\xab\xca\xc0.exe HTTP/1.1" 403 214 "http://www1.vip218.com/\xb2\xca\xba\xe7\xb4\xab\xca\xc0.exe" "Mozilla/4.0"
      117.41.184.27 - - [16/Mar/2010:10:56:36 -0400] "GET http://ad.xtendmedia.com/st?ad_type=iframe&ad_size=728x90&section=657624 HTTP/1.0" 404 200 "http://www.raiseanimals.com" "Mozilla/4.0 (compatible; MSIE 6.0; Windows 98; Alexa Toolbar)"

    And my corresponding error log entries:

      [Tue Mar 16 10:56:36 2010] [error] [client 87.118.118.124] File does not exist: C:/public_html/torrent, referer: http://www.c5interlude.ru/torrent/viewtopic.php?p=2501
      [Tue Mar 16 10:56:36 2010] [error] [client 117.41.184.27] File does not exist: C:/public_html/st, referer: http://www.newbiegamer.com
      [Tue Mar 16 10:56:36 2010] [error] [client 122.224.55.222] (22)Invalid argument: Cannot map GET http://www.188woool.net/\xb4\xf3\xd4\xcb\xb4\xab\xca\xc0.rar HTTP/1.1 to file, referer: http://www.188woool.net/\xb4\xf3\xd4\xcb\xb4\xab\xca\xc0.rar
      [Tue Mar 16 10:56:36 2010] [error] [client 58.55.21.40] File does not exist: C:/public_html/ad1.js, referer: http://thebighits.com/?id=aibux
      [Tue Mar 16 10:56:36 2010] [error] [client 122.226.223.188] File does not exist: C:/public_html/st, referer: http://www.gvvu.com
      [Tue Mar 16 10:56:36 2010] [error] [client 84.51.109.31] File does not exist: C:/public_html/forum, referer: http://www.kslp.ru/forum/index.php
      [Tue Mar 16 10:56:36 2010] [error] [client 122.224.48.49] (22)Invalid argument: Cannot map GET http://www1.vip218.com/\xb2\xca\xba\xe7\xb4\xab\xca\xc0.exe HTTP/1.1 to file, referer: http://www1.vip218.com/\xb2\xca\xba\xe7\xb4\xab\xca\xc0.exe
      [Tue Mar 16 10:56:36 2010] [error] [client 117.41.184.27] File does not exist: C:/public_html/st, referer: http://www.raiseanimals.com

    Does this in fact look like the server is blocking them correctly, and is there anything else I could do to cut down on my access log size? (Perhaps block these requests from the server completely?) Thanks! Matt
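
    Not from the thread, but as a rough illustration of one way to act on a log like the one above: a minimal Python sketch (log path and threshold are hypothetical) that picks out requests whose request line targets an absolute http:// URL, the signature of proxy abuse, and tallies the client IPs so they can be fed to a firewall or deny list:

      import re
      from collections import Counter

      LOG = "access.log"    # hypothetical path to the Apache access log
      THRESHOLD = 50        # flag clients with more than this many proxy-style hits

      # Common Log Format: client IP first, request line in the first quoted field.
      line_re = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "([A-Z]+) (\S+) [^"]*"')

      hits = Counter()
      with open(LOG, encoding="utf-8", errors="replace") as fh:
          for line in fh:
              m = line_re.match(line)
              if not m:
                  continue
              client, method, target = m.groups()
              # A well-behaved request targets a path ("/foo"); proxy abuse targets
              # a full URL ("http://other-site/..."), as in the log excerpt above.
              if target.startswith("http://") or target.startswith("https://"):
                  hits[client] += 1

      for ip, count in hits.most_common():
          if count > THRESHOLD:
              print(ip, count)    # candidates for a firewall or Apache deny rule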

    Read the article

  • Google.com and clients1.google.com/generate_204

    - by David Murdoch
    I was looking into google.com's Net activity in Firebug just because I was curious, and noticed a request was returning "204 No Content." It turns out that a 204 No Content "is primarily intended to allow input for actions to take place without causing a change to the user agent's active document view, although any new or updated metainformation SHOULD be applied to the document currently in the user agent's active view." Whatever. I've looked into the JS source code and saw that "generate_204" is requested like this:

      (new Image).src = "http://clients1.google.com/generate_204"

    No variable declaration/assignment at all. My first idea was that it was being used to track whether JavaScript is enabled. But the "(new Image).src='...'" call is made from a dynamically loaded external JS file anyway, so that would be pointless. Anyone have any ideas as to what the point could be?

    UPDATE: "/generate_204" appears to be available on many Google services/servers (e.g., maps.google.com/generate_204, maps.gstatic.com/generate_204, etc.). You can take advantage of this by pre-fetching the generate_204 pages for each Google-owned service your web app may use, like this:

      window.onload = function () {
          var two_o_fours = [
              // google maps domain ...
              "http://maps.google.com/generate_204",
              // google maps images domains ...
              "http://mt0.google.com/generate_204",
              "http://mt1.google.com/generate_204",
              "http://mt2.google.com/generate_204",
              "http://mt3.google.com/generate_204",
              // you can add your own 204 page for your subdomains too!
              "http://sub.domain.com/generate_204"
          ];
          for (var i = 0, l = two_o_fours.length; i < l; ++i) {
              (new Image).src = two_o_fours[i];
          }
      };

    Read the article

  • Unknown http requests of type http://<domain>/cache/<32-digit-alphanumeric-key>

    - by Siva Bathula
    I am getting a lot of incoming requests with this structure: http://domain_name/cache/22092e9b25c40809dfb94b6179166b26. I am running a .NET 4.0 website served from IIS 7.5. A lot of these URLs have no referrer and come in randomly, each with a different 32-digit alphanumeric key. I do not have any resource like '.../cache/...' on my website. I just want to eliminate such requests and understand where they are coming from at all. Any help would be appreciated.

    Read the article

  • Generic HTTP 500 Error Message On Hosted Sites (like GoDaddy)

    - by Jimbo
    I decided to post this because I battled to find out how to do it and couldn't see anything on Stack Overflow about it. Often when you host with a provider like GoDaddy, they have "Custom Error Messages" set to ON. What I didn't realise was that the web.config settings don't just apply to ASP.NET; they apply to all applications on YOUR IIS site and hence will sort this problem out for Classic ASP as well (very few GoDaddy support people even know this). All you need to do is add the following to your web.config OR, for those using Classic ASP, just create a web.config file in your ROOT with this code in it:

      <configuration>
        <system.webServer>
          <asp scriptErrorSentToBrowser="true"/>
          <httpErrors errorMode="Detailed"/>
        </system.webServer>
      </configuration>

    Read the article

  • Multithread http downloader with webui [closed]

    - by kiler129
    I'm looking for software similar to JDownloader or PyLoad. JD is pretty good, but it uses heavyweight Java and for now has a very weak web interface. PyLoad is awesome and includes a simple but powerful web UI, but downloading 10 files (10 threads each, so around 100 connections in total, running at about 8 MB/s altogether) consumes a lot of CPU - a whole core for me. Do you know any lightweight alternatives? Aria2c is good for the console, but I have failed to find any good web UI for it; the official one is decent, but after adding more files it almost crashes Chrome :)

    Read the article

  • HTTP caching headers: how should must-revalidate work?

    - by Bobby Jack
    Using Trac, I'm getting a response with the following header:

      Cache-Control: must-revalidate

    Moreover, no Expires header is being sent. Our local proxy, however, is caching these responses, so when an edit is made, pages need to be 'hard refreshed' to update. Is the proxy misbehaving? Other headers that might be relevant:

      Connection: Keep-Alive
      Proxy-Connection: Keep-Alive
      Keep-Alive: timeout=15, max=100
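
    Not part of the question, but a quick way to see exactly which caching headers the origin is sending (and therefore what the proxy has to work with) is to fetch a page and dump the relevant response headers. A minimal Python sketch, with the Trac URL as a placeholder:

      import urllib.request

      URL = "http://trac.example.com/wiki/SomePage"   # placeholder for a Trac page

      resp = urllib.request.urlopen(URL)
      # must-revalidate only forbids serving the entry *after* it has gone stale
      # without checking back with the origin; with no Expires or max-age, a
      # cache may be assigning its own heuristic freshness lifetime.
      for name in ("Cache-Control", "Expires", "ETag", "Last-Modified", "Age", "Via"):
          print(name, ":", resp.headers.get(name))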

    Read the article

  • Windows HTTP tunnel through 2 Linux hosts?

    - by Darkmage
    The localhost only has a connection to Host1. Host1 has connections to Host2 and localhost. How can I set this up to use Host2 as a proxy for web traffic from localhost? I have seen similar topics but can't get it to work. How do I set it up on the Windows XP client?

    Read the article

  • Handling UTF-8 with BOM in HTTP

    - by Alois Mahdal
    Say I have a script which at some point serves a plain text file as content (right after "\n\n"). These files are provided by users, but I can expect they will be UTF-8. So I hard-wire Content-Type: text/plain; charset=UTF-8. But while I can teach users to save everything in UTF-8, I can't be very sure that the files will be without a BOM ("\xEF\xBB\xBF"), as at least on Windows this is not very clearly distinguished in common plain text editors, and not every one of them uses the same default. So what about these files created on Windows, where they may or may not start with a BOM? Should/will the server or UA get rid of this debris for me? Or is it my task to prepare clean UTF-8, i.e. open each file and check whether the BOM needs to be removed?
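
    A minimal sketch of the 'prepare clean UTF-8 yourself' option (not from the question; the file name is a placeholder): check the first three bytes for the UTF-8 BOM, 0xEF 0xBB 0xBF, and drop them before serving the body:

      BOM = b"\xef\xbb\xbf"

      def read_without_bom(path):
          # Return the file's bytes with a leading UTF-8 BOM removed, if present.
          with open(path, "rb") as fh:
              data = fh.read()
          return data[len(BOM):] if data.startswith(BOM) else data

      body = read_without_bom("user_upload.txt")   # placeholder file name
      # body can now be sent after the headers and "\n\n" with
      # Content-Type: text/plain; charset=UTF-8

    When decoding to text instead of passing bytes through, Python's "utf-8-sig" codec performs the same check and strips the BOM automatically.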

    Read the article

  • Why do users get an HTTP 404 error when attempting to clone a Mercurial repository over HTTP?

    - by Geoffrey van Wyk
    The repository is hosted on my PC. I use Apache with WAMP and TortoiseHg. I have set up users and passwords, and they are able to browse the repository in their browsers after entering their usernames and passwords. The problem is that, when they try to clone the repository, they get an HTTP 404 file not found error. However, I can clone the repository on my own PC using their credentials. The problem must lie somewhere with the Mercurial setup.

    Read the article

  • Dynamically blocking excessive HTTP bandwidth use?

    - by Jeff Atwood
    We were a little surprised to see this on our Cacti graphs for June 4 web traffic: we ran Log Parser on our IIS logs, and it turns out this was a perfect storm of Yahoo and Google bots indexing us. In that 3-hour period, we saw 287k hits from 3 different Google IPs, plus 104k from Yahoo. Ouch? While we don't want to block Google or Yahoo, this has come up before. We have access to a Cisco PIX 515E, and we're thinking about putting that in front so we can dynamically deal with bandwidth offenders without touching our web servers directly. But is that the best solution? I'm wondering if there is any software or hardware that can help us identify and block excessive bandwidth use, ideally in real time? Perhaps some bit of hardware or open-source software we can put in front of our web servers? We are mostly a Windows shop but we have some Linux skills as well; we're also open to buying hardware if the PIX 515E isn't sufficient. What would you recommend?
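
    Not something from the thread, but as an illustration of the 'identify offenders from the logs' step that Log Parser was doing here: a short Python sketch that reads an IIS W3C extended log, uses the #Fields directive to locate the client IP and response size columns, and totals bytes per client (the file name is hypothetical, and the sketch assumes c-ip and sc-bytes are being logged):

      from collections import defaultdict

      LOG = "u_ex100604.log"   # hypothetical IIS log file for June 4

      fields = []
      total_bytes = defaultdict(int)
      requests = defaultdict(int)

      with open(LOG, encoding="utf-8", errors="replace") as fh:
          for line in fh:
              if line.startswith("#Fields:"):
                  fields = line.split()[1:]   # column names for the data lines
                  continue
              if line.startswith("#") or not line.strip():
                  continue
              row = dict(zip(fields, line.split()))
              ip = row.get("c-ip", "?")
              size = row.get("sc-bytes", "0")
              if size.isdigit():
                  total_bytes[ip] += int(size)
              requests[ip] += 1

      # Top talkers by bytes served; these are the candidates to rate-limit or block.
      for ip in sorted(total_bytes, key=total_bytes.get, reverse=True)[:20]:
          print(ip, requests[ip], "requests,", total_bytes[ip], "bytes")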

    Read the article

  • cannot connect via http but can via ssh in Windows 7

    - by Tim
    Hi, I have a strange problem on my Windows 7 machine. Sometimes web browsers (i.e. Firefox and Chrome) work, sometimes they don't. But SSH always works. What could be the reason and how can I fix it? My router is a Linksys WRT54GL. Web browsing via Firefox in my Ubuntu is okay. Thanks and regards!

    Read the article

  • Tunneling over HTTP

    - by Morgan
    Hello, I have a network at work that is locked behind a firewall, and an Internet connection is available only through a proxy server. At work, I can connect to databases that are distributed across the network. However, at home, I cannot connect to the proxy server or the databases. How can this be done? I can access my workstation via LogMeIn, so I can install anything on it. I thought of installing some kind of tunneling mechanism on my workstation. Then, at home, I could connect to this mechanism, which would in turn make the required connections. So essentially, what I'd like to do can be represented by the following diagram: Home = Workstation = Database. For example, whenever I connect to, say, 10.140.0.1:1234 at home, this would be redirected to 10.140.0.1:1234 via my workstation, because 10.140.0.1:1234 is only available through the corporate network. NOTE: I'm using Windows XP.
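
    Not an answer from the thread, but the 'tunneling mechanism' described above is essentially a TCP relay: a process on the workstation listens on a port and copies bytes between the home client and the database host that is only reachable from the office LAN. A minimal Python sketch of the idea (the addresses are the illustrative ones from the question; a real setup would add authentication and encryption, for example by using an SSH tunnel instead):

      import socket
      import threading

      LISTEN = ("0.0.0.0", 1234)      # port the workstation exposes to home
      TARGET = ("10.140.0.1", 1234)   # database host reachable only from the office LAN

      def pump(src, dst):
          # Copy bytes one way until either side closes the connection.
          try:
              while True:
                  data = src.recv(4096)
                  if not data:
                      break
                  dst.sendall(data)
          except OSError:
              pass
          finally:
              for s in (src, dst):
                  try:
                      s.close()
                  except OSError:
                      pass

      def handle(client):
          upstream = socket.create_connection(TARGET)
          threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
          threading.Thread(target=pump, args=(upstream, client), daemon=True).start()

      server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
      server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
      server.bind(LISTEN)
      server.listen(5)
      while True:
          conn, _ = server.accept()
          handle(conn)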

    Read the article

  • How to implement a secure authentication over HTTP?

    - by Zagorax
    I know that we have HTTPS, but I would like to know if there's an algorithm/approach/strategy that grants a reasonable security level without using SSL. I have read many solutions on the internet. Most of them are based on adding some time metadata to the hashes, but that requires both server and client to have their clocks in sync. Moreover, it seems to me that none of these solutions could prevent a man-in-the-middle attack.
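
    As an illustration of the timestamp-plus-hash scheme the question describes (not an endorsement: it authenticates requests, but it provides no confidentiality and the server's responses are not protected at all), here is a minimal Python sketch using an HMAC over the method, path, timestamp and body with a shared secret; the names and endpoint are hypothetical:

      import hashlib
      import hmac
      import time

      SECRET = b"shared-secret-between-client-and-server"   # placeholder

      def sign(method, path, body, timestamp):
          msg = f"{method}\n{path}\n{timestamp}\n".encode() + body
          return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

      # Client side: attach the timestamp and signature as headers.
      ts = str(int(time.time()))
      body = b'{"action": "transfer", "amount": 10}'
      signature = sign("POST", "/api/transfer", body, ts)
      headers = {"X-Timestamp": ts, "X-Signature": signature}

      # Server side: recompute and compare in constant time, and reject stale
      # timestamps to limit replay (this is why the clocks must roughly agree).
      def verify(method, path, body, ts, signature, max_skew=300):
          if abs(time.time() - int(ts)) > max_skew:
              return False
          expected = sign(method, path, body, ts)
          return hmac.compare_digest(expected, signature)

      print(verify("POST", "/api/transfer", body, ts, signature))   # True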

    Read the article

  • Remote HTTP to FTP

    - by jamd12
    I am on a very slow Internet connection, and unfortunately I only have access to expensive and slow wireless or satellite. I have set up an FTP server with a local computer supplier who has a nice 2 Mbps connection, and I am trying to set up a way of adding download links remotely (Hotfile, Rapidshare, Fileserve, etc.) so that they can be downloaded onto the FTP server and then transferred a few times a week manually onto a portable HDD. On my home PC I use Internet Download Manager for all my downloads. Is there a simple way I can add links remotely to Internet Download Manager on the FTP server, or perhaps another solution? The OS on the FTP server is Linux; I use Windows XP SP3 and Windows 7. I have not used FTP very much before, so any suggestions on how best to do this would be much appreciated.

    Read the article

  • Redirecting HTTP traffic from a local server on the web

    - by MrJackV
    Here is the situation: I have a webserver (let's call it C1) that is running an Apache/PHP server, and it is port forwarded so that I can access it anywhere. However, there is another computer within the webserver's LAN that has an Apache server too (let's call it C2). I cannot change the port forwarding, nor can I change the Apache server (a.k.a. install custom modules). My question is: is there a way to access C2 within a directory of C1? (e.g. going to www.website.org/random_dir would allow me to browse the root of C2's Apache server.) I am trying to change as little as possible of the config and other settings (e.g. activating modules etc.). Is there a possible solution? Thanks in advance.

    Read the article

  • Browser http port-forwarding

    - by Kakao
    When using a browser like Firefox, I need any URL in the domain example.com to have the port :8008 appended. Not only when I type it in the address bar, but anywhere it is referenced within the served HTML page. All the other domains should be left as is. I know I can set up a proxy like Squid or use a PAC file on a web site, but I want it simpler if possible.

    Read the article

  • Logging Into a site that uses Live.com authentication

    - by Josh
    I've been trying to automate a log in to a website I frequent, www.bungie.net. The site is associated with Microsoft and Xbox Live, and as such makes use of the Windows Live ID API when people log in to their site. I am relatively new to creating web spiders/robots, and I worry that I'm misunderstanding some of the most basic concepts. I've simulated logins to other sites such as Facebook and Gmail, but live.com has given me nothing but trouble. Anyways, I've been using Wireshark and the Firefox addon Tamper Data to try and figure out what I need to post and what cookies I need to include with my requests. As far as I know, these are the steps one must follow to log in to this site:

    1. Visit https://login.live.com/login.srf?wa=wsignin1.0&rpsnv=11&ct=1268167141&rver=5.5.4177.0&wp=LBI&wreply=http:%2F%2Fwww.bungie.net%2FDefault.aspx&id=42917
    2. Receive the cookies MSPRequ and MSPOK.
    3. Post the values from the form ID "PPSX", the values from the form ID "PPFT", your username and your password to a changing URL similar to: https://login.live.com/ppsecure/post.srf?wa=wsignin1.0&rpsnv=11&ct= (there are a few numbers that change at the end of that URL).
    4. Live.com returns the user a page with more hidden forms to post. The client then posts the values from the form "ANON", the value from the form "ANONExp" and the values from the form "t" to the URL: http://www.bungie.net/Default.aspx?wa=wsignin1.0
    5. After posting that data, the user is returned a variety of cookies, the most important of which is "BNGAuth", which is the login cookie for the site.

    Where I am having trouble is the fifth step, but that doesn't necessarily mean I've done all the other steps correctly. I post the data from "ANON", "ANONExp" and "t", but instead of being returned a BNGAuth cookie, I'm returned a cookie named "RSPMaybe" and redirected to the home page. When I reviewed the Wireshark log, I noticed something that instantly stood out to me as different between the log when I logged in with Firefox and when my program ran. It could be nothing, but I'll include the picture here for you to review. I'm being returned an HTTP packet from the site before I post the data in the fourth step. I'm not sure how this is happening, but it must be a side effect of something I'm doing wrong in the HTTPS steps.
      using System;
      using System.Collections.Generic;
      using System.Collections.Specialized;
      using System.Text;
      using System.Net;
      using System.IO;
      using System.IO.Compression;
      using System.Security.Cryptography;
      using System.Security.Cryptography.X509Certificates;
      using System.Web;

      namespace SpiderFromScratch
      {
          class Program
          {
              static void Main(string[] args)
              {
                  CookieContainer cookies = new CookieContainer();
                  Uri url = new Uri("https://login.live.com/login.srf?wa=wsignin1.0&rpsnv=11&ct=1268167141&rver=5.5.4177.0&wp=LBI&wreply=http:%2F%2Fwww.bungie.net%2FDefault.aspx&id=42917");
                  HttpWebRequest http = (HttpWebRequest)HttpWebRequest.Create(url);
                  http.Timeout = 30000;
                  http.UserAgent = "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1.8) Gecko/20100202 Firefox/3.5.8 (.NET CLR 3.5.30729)";
                  http.Accept = "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8";
                  http.Headers.Add("Accept-Language", "en-us,en;q=0.5");
                  http.Headers.Add("Accept-Charset", "ISO-8859-1,utf-8;q=0.7,*;q=0.7");
                  http.Headers.Add("Keep-Alive", "300");
                  http.Referer = "http://www.bungie.net/";
                  http.ContentType = "application/x-www-form-urlencoded";
                  http.CookieContainer = new CookieContainer();
                  http.Method = WebRequestMethods.Http.Get;
                  HttpWebResponse response = (HttpWebResponse)http.GetResponse();
                  StreamReader readStream = new StreamReader(response.GetResponseStream());
                  string HTML = readStream.ReadToEnd();
                  readStream.Close();

                  // gets the cookies (they are set in the eighth header)
                  string[] strCookies = response.Headers.GetValues(8);
                  response.Close();

                  string name, value;
                  Cookie manualCookie;
                  for (int i = 0; i < strCookies.Length; i++)
                  {
                      name = strCookies[i].Substring(0, strCookies[i].IndexOf("="));
                      value = strCookies[i].Substring(strCookies[i].IndexOf("=") + 1, strCookies[i].IndexOf(";") - strCookies[i].IndexOf("=") - 1);
                      manualCookie = new Cookie(name, "\"" + value + "\"");
                      Uri manualURL = new Uri("http://login.live.com");
                      http.CookieContainer.Add(manualURL, manualCookie);
                  }

                  // stores the cookies to be used later
                  cookies = http.CookieContainer;

                  // Get the PPSX value
                  string PPSX = HTML.Remove(0, HTML.IndexOf("PPSX"));
                  PPSX = PPSX.Remove(0, PPSX.IndexOf("value") + 7);
                  PPSX = PPSX.Substring(0, PPSX.IndexOf("\""));

                  // Get this random PPFT value
                  string PPFT = HTML.Remove(0, HTML.IndexOf("PPFT"));
                  PPFT = PPFT.Remove(0, PPFT.IndexOf("value") + 7);
                  PPFT = PPFT.Substring(0, PPFT.IndexOf("\""));

                  // Get the random URL you POST to
                  string POSTURL = HTML.Remove(0, HTML.IndexOf("https://login.live.com/ppsecure/post.srf?wa=wsignin1.0&rpsnv=11&ct="));
                  POSTURL = POSTURL.Substring(0, POSTURL.IndexOf("\""));

                  // POST with cookies
                  http = (HttpWebRequest)HttpWebRequest.Create(POSTURL);
                  http.UserAgent = "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1.8) Gecko/20100202 Firefox/3.5.8 (.NET CLR 3.5.30729)";
                  http.Accept = "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8";
                  http.Headers.Add("Accept-Language", "en-us,en;q=0.5");
                  http.Headers.Add("Accept-Charset", "ISO-8859-1,utf-8;q=0.7,*;q=0.7");
                  http.Headers.Add("Keep-Alive", "300");
                  http.CookieContainer = cookies;
                  http.Referer = "https://login.live.com/login.srf?wa=wsignin1.0&rpsnv=11&ct=1268158321&rver=5.5.4177.0&wp=LBI&wreply=http:%2F%2Fwww.bungie.net%2FDefault.aspx&id=42917";
                  http.ContentType = "application/x-www-form-urlencoded";
                  http.Method = WebRequestMethods.Http.Post;
                  Stream ostream = http.GetRequestStream();

                  // used to convert strings into bytes
                  System.Text.ASCIIEncoding encoding = new System.Text.ASCIIEncoding();

                  // Post information
                  byte[] buffer = encoding.GetBytes("PPSX=" + PPSX + "&PwdPad=IfYouAreReadingThisYouHaveTooMuc&login=YOUREMAILGOESHERE&passwd=YOURWORDGOESHERE" + "&LoginOptions=2&PPFT=" + PPFT);
                  ostream.Write(buffer, 0, buffer.Length);
                  ostream.Close();
                  HttpWebResponse response2 = (HttpWebResponse)http.GetResponse();
                  readStream = new StreamReader(response2.GetResponseStream());
                  HTML = readStream.ReadToEnd();
                  response2.Close();
                  ostream.Dispose();

                  foreach (Cookie cookie in response2.Cookies)
                  {
                      Console.WriteLine(cookie.Name + ": ");
                      Console.WriteLine(cookie.Value);
                      Console.WriteLine(cookie.Expires);
                      Console.WriteLine();
                  }

                  // SET POSTURL value
                  string POSTANON = "http://www.bungie.net/Default.aspx?wa=wsignin1.0";

                  // Get the ANON value
                  string ANON = HTML.Remove(0, HTML.IndexOf("ANON"));
                  ANON = ANON.Remove(0, ANON.IndexOf("value") + 7);
                  ANON = ANON.Substring(0, ANON.IndexOf("\""));
                  ANON = HttpUtility.UrlEncode(ANON);

                  // Get the ANONExp value
                  string ANONExp = HTML.Remove(0, HTML.IndexOf("ANONExp"));
                  ANONExp = ANONExp.Remove(0, ANONExp.IndexOf("value") + 7);
                  ANONExp = ANONExp.Substring(0, ANONExp.IndexOf("\""));
                  ANONExp = HttpUtility.UrlEncode(ANONExp);

                  // Get the t value
                  string t = HTML.Remove(0, HTML.IndexOf("id=\"t\""));
                  t = t.Remove(0, t.IndexOf("value") + 7);
                  t = t.Substring(0, t.IndexOf("\""));
                  t = HttpUtility.UrlEncode(t);

                  // POST the Info and Accept the Bungie Cookies
                  http = (HttpWebRequest)HttpWebRequest.Create(POSTANON);
                  http.UserAgent = "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1.8) Gecko/20100202 Firefox/3.5.8 (.NET CLR 3.5.30729)";
                  http.Accept = "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8";
                  http.Headers.Add("Accept-Language", "en-us,en;q=0.5");
                  http.Headers.Add("Accept-Encoding", "gzip,deflate");
                  http.Headers.Add("Accept-Charset", "ISO-8859-1,utf-8;q=0.7,*;q=0.7");
                  http.Headers.Add("Keep-Alive", "115");
                  http.CookieContainer = new CookieContainer();
                  http.ContentType = "application/x-www-form-urlencoded";
                  http.Method = WebRequestMethods.Http.Post;
                  http.Expect = null;
                  ostream = http.GetRequestStream();
                  int test = ANON.Length;
                  int test1 = ANONExp.Length;
                  int test2 = t.Length;
                  buffer = encoding.GetBytes("ANON=" + ANON + "&ANONExp=" + ANONExp + "&t=" + t);
                  ostream.Write(buffer, 0, buffer.Length);
                  ostream.Close();

                  // Here lies the problem, I am not returned the correct cookies.
                  HttpWebResponse response3 = (HttpWebResponse)http.GetResponse();
                  GZipStream gzip = new GZipStream(response3.GetResponseStream(), CompressionMode.Decompress);
                  readStream = new StreamReader(gzip);
                  HTML = readStream.ReadToEnd();

                  // gets both cookies
                  string[] strCookies2 = response3.Headers.GetValues(11);
                  response3.Close();
              }
          }
      }

    This has given me problems and I've put many hours into learning about HTTP protocols so any help would be appreciated. If there is an article detailing a similar log in to live.com feel free to point the way. I've been looking far and wide for any articles with working solutions. If I could be clearer, feel free to ask as this is my first time using Stack Overflow.

    Read the article

  • Default /etc/apt/sources.list?

    - by piemesons
    I need the default sources.list for Ubuntu 10.04. Can anybody help me? Here is mine:

      Ubuntu supported packages
      deb http://archive.ubuntu.com/ubuntu/ lucid main restricted multiverse universe
      deb http://archive.ubuntu.com/ubuntu/ lucid-backports main restricted universe multiverse
      deb http://archive.ubuntu.com/ubuntu/ lucid-updates main restricted multiverse universe
      deb http://security.ubuntu.com/ubuntu lucid-security main restricted universe multiverse
      deb http://security.ubuntu.com/ubuntu lucid-proposed main restricted universe multiverse
      deb-src http://archive.ubuntu.com/ubuntu/ lucid main restricted multiverse universe
      deb-src http://archive.ubuntu.com/ubuntu/ lucid-backports main restricted universe multiverse
      deb-src http://archive.ubuntu.com/ubuntu/ lucid-updates main restricted multiverse universe
      deb-src http://security.ubuntu.com/ubuntu lucid-security main restricted universe multiverse
      deb-src http://security.ubuntu.com/ubuntu lucid-proposed main restricted universe multiverse

      Canonical Commercial Repository
      deb http://archive.canonical.com/ubuntu lucid partner
      deb http://archive.canonical.com/ubuntu lucid-backports partner
      deb http://archive.canonical.com/ubuntu lucid-updates partner
      deb http://archive.canonical.com/ubuntu lucid-security partner
      deb http://archive.canonical.com/ubuntu lucid-proposed partner
      deb-src http://archive.canonical.com/ubuntu lucid partner
      deb-src http://archive.canonical.com/ubuntu lucid-backports partner
      deb-src http://archive.canonical.com/ubuntu lucid-updates partner
      deb-src http://archive.canonical.com/ubuntu lucid-security partner
      deb-src http://archive.canonical.com/ubuntu lucid-proposed partner

      medibuntu
      deb http://packages.medibuntu.org/ lucid free non-free
      deb-src http://packages.medibuntu.org/ lucid free non-free

      PlayOnLinux
      deb http://deb.playonlinux.com/ lucid main

      opera
      deb http://deb.opera.com/opera/ lenny non-free

      google
      deb http://dl.google.com/linux/deb/ stable non-free main

      Dropbox Official Source
      deb http://linux.dropbox.com/ubuntu karmic main

      Skype
      deb http://download.skype.com/linux/repos/debian/ stable non-free

    This is the error I am getting (sudo apt-get update):

      Get:9 http://dl.google.com stable/main Packages [1,076B]
      Err http://ppa.launchpad.net lucid/main Packages 404 Not Found
      Get:10 http://dl.google.com stable/main Packages [735B]

    and finally:

      Fetched 9,724B in 3s (2,645B/s)
      W: Failed to fetch http://ppa.launchpad.net/bisig/ppa/ubuntu/dists/lucid/main/binary-i386/Packages.gz 404 Not Found
      E: Some index files failed to download, they have been ignored, or old ones used instead.

    Read the article

  • sudo apt-get update does not work for 12.10

    - by Brian Hawi
    Hey, I recently installed Ubuntu 12.10 but the Software Center does not work. I tried sudo apt-get update because that worked when I was using Ubuntu 11.04... These are the errors:

      hawi@hawi-HP-G62-Notebook-PC:~$ sudo apt-get update
      [sudo] password for hawi:
      Err http:ke.archive.ubuntu.com quantal InRelease
      Err http:ke.archive.ubuntu.com quantal-updates InRelease
      Err http:ke.archive.ubuntu.com quantal-backports InRelease
      Err http:ke.archive.ubuntu.com quantal Release.gpg
        Unable to connect to ke.archive.ubuntu.com:http:
      Err http:ke.archive.ubuntu.com quantal-updates Release.gpg
        Unable to connect to ke.archive.ubuntu.com:http:
      Err http:ke.archive.ubuntu.com quantal-backports Release.gpg
        Unable to connect to ke.archive.ubuntu.com:http:
      Err http:security.ubuntu.com quantal-security InRelease
      Err http:security.ubuntu.com quantal-security Release.gpg
        Unable to connect to security.ubuntu.com:http: [IP: 91.189.92.190 80]
      Err http:extras.ubuntu.com quantal InRelease
      Err http:extras.ubuntu.com quantal Release.gpg
        Unable to connect to extras.ubuntu.com:http:
      Reading package lists... Done
      W: Failed to fetch http:ke.archive.ubuntu.com/ubuntu/dists/quantal/InRelease
      W: Failed to fetch http:ke.archive.ubuntu.com/ubuntu/dists/quantal-updates/InRelease
      W: Failed to fetch http:ke.archive.ubuntu.com/ubuntu/dists/quantal-backports/InRelease
      W: Failed to fetch http:security.ubuntu.com/ubuntu/dists/quantal-security/InRelease
      W: Failed to fetch http:extras.ubuntu.com/ubuntu/dists/quantal/InRelease
      W: Failed to fetch http:ke.archive.ubuntu.com/ubuntu/dists/quantal/Release.gpg Unable to connect to ke.archive.ubuntu.com:http:
      W: Failed to fetch http:ke.archive.ubuntu.com/ubuntu/dists/quantal-updates/Release.gpg Unable to connect to ke.archive.ubuntu.com:http:
      W: Failed to fetch http:ke.archive.ubuntu.com/ubuntu/dists/quantal-backports/Release.gpg Unable to connect to ke.archive.ubuntu.com:http:
      W: Failed to fetch http:security.ubuntu.com/ubuntu/dists/quantal-security/Release.gpg Unable to connect to security.ubuntu.com:http: [IP: 91.189.92.190 80]
      W: Failed to fetch http:extras.ubuntu.com/ubuntu/dists/quantal/Release.gpg Unable to connect to extras.ubuntu.com:http:
      W: Some index files failed to download. They have been ignored, or old ones used instead.

    (Note: I have removed the // after http because the site does not allow me to post more than two links.) What could be the issue?
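
    Not from the question, but since every failure above is "Unable to connect", a quick way to tell a local network or proxy problem apart from a repository problem is to test plain TCP reachability of the mirrors on port 80. A small Python sketch, with the hostnames taken from the error output:

      import socket

      HOSTS = ["ke.archive.ubuntu.com", "security.ubuntu.com", "extras.ubuntu.com"]

      for host in HOSTS:
          try:
              # Port 80 is what apt uses for these http:// sources.
              with socket.create_connection((host, 80), timeout=5):
                  print(f"{host}: reachable")
          except OSError as err:
              # A failure here points at DNS, the local network, or a required
              # proxy that apt is not configured to use, rather than at the mirror.
              print(f"{host}: NOT reachable ({err})")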

    Read the article

  • Monitoring JSON requests sent/received from the browser?

    - by Uwe Keim
    Having a website that generates and receives JSON requests via AJAX, I have failed to find a tool that shows me the communication live, including the content of the JSON calls. I thought that the Google Chrome developer tools or the IE 9 developer tools had such a feature, but again, I failed. Searching Google, I failed too. So my question is: is there a client-side tool to monitor the content of the JSON requests that a website sends to the server?

    Read the article
