Search Results

Search found 59254 results on 2371 pages for 'http host'.


  • Need to check uptime on a large file being hosted

    - by trustfundbaby
    I have a dynamically generated RSS feed that is about 150M in size (don't ask). The problem is that it keeps crapping out sporadically, and there is no way to monitor it without downloading the entire feed to get a 200 status. Pingdom times out on it and returns a 'down' error. So my question is, how do I check that this thing is up and running?
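
    One way to probe availability without pulling down all 150M (a sketch, not from the original question; the URL is hypothetical) is to request only the first byte with an HTTP Range header and inspect the status code:

      <?php
      // a minimal sketch: fetch one byte and read the status code
      $ch = curl_init('http://example.com/feed.rss'); // hypothetical feed URL
      curl_setopt($ch, CURLOPT_RANGE, '0-0');         // ask for the first byte only
      curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
      curl_setopt($ch, CURLOPT_TIMEOUT, 30);
      curl_exec($ch);
      $status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
      curl_close($ch);
      // 200 or 206 (Partial Content) means the feed is being served
      echo ($status == 200 || $status == 206) ? 'up' : "down ($status)";

    One caveat: a dynamically generated feed may ignore the Range header (and may not answer HEAD requests at all), in which case the server still builds the full response, so the timeout above acts as the safety net.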

    Read the article

  • Serializing an object into the body of a WCF request using webHttpBinding

    - by Bert
    I have a WCF service exposed with a webHttpBinding endpoint:

      [OperationContract(IsOneWay = true)]
      [WebInvoke(Method = "POST", RequestFormat = WebMessageFormat.Json,
          BodyStyle = WebMessageBodyStyle.Bare,
          UriTemplate = "/?action=DoSomething&v1={value1}&v2={value2}")]
      void DoSomething(string value1, string value2, MySimpleObject value3);

    In theory, if I call this, the first two parameters (value1 & value2) are taken from the URI and the final one (value3) should be deserialized from the body of the request. Assuming I am using Json as the RequestFormat, what is the best way of serialising an instance of MySimpleObject into the body of the request before I send it? This, for instance, does not seem to work:

      HttpWebRequest sendRequest = (HttpWebRequest)WebRequest.Create(url);
      sendRequest.ContentType = "application/json";
      sendRequest.Method = "POST";
      using (var sendRequestStream = sendRequest.GetRequestStream())
      {
          DataContractJsonSerializer jsonSerializer =
              new DataContractJsonSerializer(typeof(MySimpleObject));
          jsonSerializer.WriteObject(sendRequestStream, obj);
          sendRequestStream.Close();
      }
      sendRequest.GetResponse().Close();

    Read the article

  • Removing X-Powered-By

    - by Castor
    How can I remove the X-Powered-By header in PHP? I am on an Apache server and I use PHP 5.2.1, so I can't use the header_remove() function in PHP, as it isn't supported before 5.3. I used Header unset X-Powered-By in the Apache config; it worked on my local machine, but not on my production server. If PHP doesn't support header_remove() for versions < 5.3, is there an alternative?
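
    Two pre-5.3 approaches worth knowing (a sketch, not from the original question). The header comes from the expose_php directive, which can only be disabled in php.ini (expose_php = Off), not at runtime; and the Apache directive Header unset X-Powered-By requires mod_headers, so that module being absent on the production server would explain why it works locally but not there (an assumption). When neither server config can be touched, the value can at least be overwritten from the script:

      <?php
      // a minimal sketch for PHP < 5.3 (header_remove() arrived in 5.3.0):
      // header() replaces an earlier header of the same name by default,
      // which on typical mod_php setups overrides PHP's own value
      header('X-Powered-By: hidden');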

    Read the article

  • Location detecting techniques for IP addresses

    - by ilhan
    What are the location-detecting techniques for IP addresses? I know to look at $_SERVER['HTTP_ACCEPT_LANGUAGE'] (not accurate, but mostly useful for detecting location: for example, if an IP range's users set French in their browsers, then that range belongs to France), and at gethostbyaddr($_SERVER['REMOTE_ADDR']) (to look at the country-code top-level domain), then maybe a whois on gethostbyaddr($_SERVER['REMOTE_ADDR']), and sometimes $HTTP_USER_AGENT (Firefox's user agent string has a language code; not accurate, but it can mostly be used to detect the location). But what about cities?
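
    City-level detection generally needs a GeoIP database rather than header heuristics. A sketch, assuming the PECL geoip extension and a city-level database (e.g. MaxMind GeoLite City) are installed on the server:

      <?php
      // look up city-level data for the visitor's address; returns FALSE
      // when the address is not in the database (e.g. private ranges)
      $record = geoip_record_by_name($_SERVER['REMOTE_ADDR']);
      if ($record !== false) {
          echo $record['city'] . ', ' . $record['country_name'];
      }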

    Read the article

  • Best format to submit an autocomplete?

    - by Keyo
    I'm seeking advice on best practice when submitting a varying number of POST variables. Do I use JSON or some character-separated list to merge all the values into one field, or use a sequence of fields like 'autocomplete1', 'autocomplete2' and so on? Thanks in advance, Ben
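
    A third option, if the backend is PHP (a sketch with made-up field names): repeat one field name with [] and the server receives a single array, with no merging or counter suffixes needed:

      <!-- repeated inputs sharing one name; the [] suffix is significant -->
      <input type="text" name="autocomplete[]" />
      <input type="text" name="autocomplete[]" />

      <?php
      // PHP collects same-named [] fields into one array automatically
      $values = isset($_POST['autocomplete']) ? $_POST['autocomplete'] : array();
      foreach ($values as $value) {
          // handle each submitted autocomplete entry
      }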

    Read the article

  • Sending form in libcurl

    - by sfactor
    I need to send a file to a webserver using libcurl. I saw one of the examples on the curl website and am trying to implement it: it is the postit2.c example. Can someone tell me how I might extend this to be able to send a username and password as well?
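
    In postit2.c each part of the multipart form is built with curl_formadd(), so username and password fields would be two more curl_formadd() calls with CURLFORM_COPYNAME and CURLFORM_COPYCONTENTS before curl_easy_perform(). For consistency with the other sketches on this page, here is the same idea in PHP's curl binding rather than C (an illustration only; it assumes PHP >= 5.5 for CURLFile, and the URL and field names are made up):

      <?php
      // a sketch: plain name/value fields ride along in the same
      // multipart POST as the file part
      $ch = curl_init('http://example.com/upload'); // hypothetical URL
      curl_setopt($ch, CURLOPT_POST, true);
      curl_setopt($ch, CURLOPT_POSTFIELDS, array(
          'sendfile' => new CURLFile('data.bin'), // the file part
          'username' => 'alice',                  // extra form fields are
          'password' => 'secret',                 // just more array entries
      ));
      curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
      $response = curl_exec($ch);
      curl_close($ch);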

    Read the article

  • 410 Responses when your CMS host doesn't support them?

    - by leeand00
    Sending a 410 response for a page that no longer exists should make Google stop crawling that page. The site I am working on has been recently migrated, and very little of the content was migrated. I've already turned the existing content into 301 redirects (the content that is on both the old and the new site), but now I would like to flush the old content from Google's memory by placing 410 responses in its path when it returns to crawl for it and finds a 404 response. However, I asked our CMS host about it, and they said that our CMS does not support 410 responses. Is there some other way to serve a 410 response, like making a dead link 301 redirect to a page that returns a 410, or something in the form of a meta tag?
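
    One note and one workaround (neither from the original question). A meta tag cannot set the HTTP status code, since the status line is sent before any HTML; but if the host runs PHP alongside the CMS, dead URLs could be pointed at a tiny standalone script that emits the 410 itself. A sketch, with a made-up filename:

      <?php
      // gone.php: hypothetical target for dead links; the status must be
      // sent before any output (http_response_code(410) needs PHP >= 5.4)
      header('HTTP/1.1 410 Gone');
      ?>
      <html><body><h1>410 Gone</h1><p>This content has been removed.</p></body></html>

    Whether Google treats a 301-then-410 chain exactly like a direct 410 on the original URL is not guaranteed, so pointing the old URLs at such a script directly (where the host allows it) would be the safer shape.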

    Read the article

  • How do I use NTLM authentication with Active Directory

    - by Jon Works
    I am trying to implement NTLM authentication on one of our internal sites, and everything is working. The one piece of the puzzle I do not have is how to take the information from NTLM and authenticate with Active Directory. There is a good description of NTLM and the encryption used for the passwords, which I used to implement this, but I am not sure how to verify whether the user's password is valid. I am using ColdFusion, but a solution to this problem can be in any language (Java, Python, PHP, etc). Edit: I am using ColdFusion on Red Hat Enterprise Linux. Unfortunately we cannot use IIS to manage this and instead have to write or use a third-party tool.

    Read the article

  • Why will IIS 6 not serve my custom 404 page when I set the URL in 'Custom Errors'?

    - by Glenn Slaven
    I've got an ASP.NET MVC site, and I've got an Errors controller with a NotFound action, which works great for 404 errors that pass through .NET. But for stuff that doesn't (like static files), I've set the Custom Errors value for 404 to URL, with a value of /Errors/NotFound. When I do this and hit a non-existent page, the site just gives me this: The system cannot find the path specified. Is this because it's a dynamic URL? Can IIS not redirect 404 requests to dynamic URLs, or have I screwed up the config somewhere?

    Read the article

  • Opening two connections to my Apache server?

    - by Ron Meretns
    Hi guys. I have an Apache server running PHP. Everything works fine, but if I have a script that runs for a long time and I try to open another page on the same server, the second page waits till the first page finishes. This behavior happens to me on both Linux and PC. I'm running Apache 2.2.9 and PHP 5.2.6. I'm not sure why this happens... is this normal behavior? Ron
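
    A common cause of exactly this symptom (an assumption; the original post doesn't say whether sessions are in play) is PHP's session file locking: session_start() holds an exclusive lock on the session file until the script ends, so a second request carrying the same session cookie blocks, while a different browser gets through immediately. Releasing the lock early avoids the serialization:

      <?php
      // a sketch for the long-running script, assuming it uses sessions
      session_start();
      // ... read or write whatever session data is needed up front ...
      session_write_close(); // release the session lock so other requests
                             // from the same browser can proceed
      // ... long-running work continues without blocking anyone ...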

    Read the article

  • How to decompress/inflate an XML response from ASP

    - by krisg
    Can anyone provide some insight into how I'd go about decompressing an XML response in classic ASP? We've been handed some code and asked to get it working:

      Set oXMLHttp = Server.CreateObject("MSXML2.ServerXMLHTTP")
      URL = HttpServer + re_domain + ".do;jsessionid=" + ue_session + "?" + data
      oXMLHttp.setTimeouts 5000, 60000, 1200000, 1200000
      oXMLHttp.open "GET", URL, false
      oXMLHttp.setRequestHeader "Accept-Encoding", "gzip"
      oXMLHttp.send()
      if oXMLHttp.status = 200 Then
          if oXMLHttp.responseText = "" then
              htmlrequest_get = "Empty Response from Server"
          else
              htmlrequest_get = oXMLHttp.responseText
          end if
      else
      ...

    Apparently, now that the response is compressed using gzip, we have to un-compress the XML response before we can start to work with the data. How should I go about this?

    Read the article

  • Handling filename* parameters with spaces via RFC 5987 results in '+' in filenames

    - by Peter Friend
    I have some legacy code I am dealing with (so no, I can't just use a URL with an encoded filename component) that allows a user to download a file from our website. Since our filenames are often in many different languages, they are all stored as UTF-8. I wrote some code to handle the RFC 5987 conversion to a proper filename* parameter. This works great until I have a filename with non-ASCII characters and spaces. Per the RFC, the space character is not part of attr_char, so it gets encoded as %20. I have new versions of Chrome as well as Firefox, and they are all converting the %20 to + on download. I have tried not encoding the space and putting the encoded filename in quotes, and I get the same result. I have sniffed the response coming from the server to verify that the servlet container wasn't mucking with my headers, and they look correct to me. The RFC even has examples that contain %20. Am I missing something, or do all of these browsers have a bug related to this? Many thanks in advance. The code I use to encode the filename is below. Peter

      public static boolean bcsrch(final char[] chars, final char c) {
          final int len = chars.length;
          int base = 0;
          int last = len - 1; /* Last element in table */
          int p;

          while (last >= base) {
              p = base + ((last - base) >> 1);
              if (c == chars[p])
                  return true;           /* Key found */
              else if (c < chars[p])
                  last = p - 1;
              else
                  base = p + 1;
          }
          return false;                  /* Key not found */
      }

      public static String rfc5987_encode(final String s) {
          final int len = s.length();
          final StringBuilder sb = new StringBuilder(len << 1);
          final char[] digits = {'0','1','2','3','4','5','6','7','8','9',
                                 'A','B','C','D','E','F'};
          final char[] attr_char = {'!','#','$','&','\'','+','-','.',
              '0','1','2','3','4','5','6','7','8','9',
              'A','B','C','D','E','F','G','H','I','J','K','L','M',
              'N','O','P','Q','R','S','T','U','V','W','X','Y','Z',
              '^','_',
              'a','b','c','d','e','f','g','h','i','j','k','l','m',
              'n','o','p','q','r','s','t','u','v','w','x','y','z',
              '|','~'};
          for (int i = 0; i < len; ++i) {
              final char c = s.charAt(i);
              if (bcsrch(attr_char, c))
                  sb.append(c);
              else {
                  final char[] encoded = {'%', 0, 0};
                  encoded[1] = digits[0x0f & (c >>> 4)];
                  encoded[2] = digits[c & 0x0f];
                  sb.append(encoded);
              }
          }
          return sb.toString();
      }

    Update: Here is a screen shot of the download dialog I get for a file with Chinese characters with spaces, as mentioned in my comment.

    Read the article

  • Is it possible to output cache by host name? i.e. VaryByHost or VaryByHostHeader?

    - by Pure.Krome
    Hi folks, I've got a website that has a number of host headers. Depending on the host header, the results are different, both visually (themed) and in data. So let's imagine I have a website called 'Foo' that returns search results (original, eh?). Now, the same code runs both sites. It is physically the same server/website (using host headers): www.foo.com and www.foo.com.au. Now, if I go to the .com, the site is themed in blue; if I go to the .com.au site, it's themed in red. And the data is different for the same search, based on the host name (i.e. US results for .com, AU results for .com.au). SO... if I wish to use output caching, can this be handled / varied by the host name? I don't want the first person to go to the .com site, grab the results... and then a second person to go to my .com.au, run the same search... and get the theme and results for the .com site. Possible?

    Read the article

  • Setting Curl's Timeout in PHP

    - by Moki
    I'm running a curl request on an eXist database through PHP. The dataset is very large, and as a result, the database consistently takes a long time to return an XML response. To fix that, we set up a curl request with what is supposed to be a long timeout:

      $ch = curl_init();
      // note: CURLOPT_HTTPHEADER expects "Name: value" strings, so the
      // associative form in the original snippet would not send these
      $headers = array(
          "Content-Length: " . strlen($postString),
          "User-Agent: Curl/1.0",
      );
      curl_setopt($ch, CURLOPT_URL, $requestUrl);
      curl_setopt($ch, CURLOPT_HEADER, false);
      curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);
      curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
      curl_setopt($ch, CURLOPT_USERPWD, 'admin:');
      curl_setopt($ch, CURLOPT_TIMEOUT, 1000);
      $response = curl_exec($ch);
      curl_close($ch);

    However, the curl request consistently ends before the request is completed (in under 1000 seconds, when requested via a browser). Does anyone know if this is the proper way to set timeouts in curl?
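
    CURLOPT_TIMEOUT is indeed the right knob and takes seconds (CURLOPT_TIMEOUT_MS exists for milliseconds), but it is not the only limit in play: PHP's own max_execution_time can abort the script first, and curl_error() reports why a transfer actually stopped. A sketch of both checks (not from the original question), reusing $ch from the snippet above:

      <?php
      set_time_limit(0); // lift PHP's max_execution_time for this script
      curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 60); // cap connection setup separately
      curl_setopt($ch, CURLOPT_TIMEOUT, 1000);      // whole-transfer limit, in seconds
      $response = curl_exec($ch);
      if ($response === false) {
          // distinguishes a curl timeout from, say, a connection reset
          error_log('curl failed: ' . curl_error($ch));
      }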

    Read the article

  • Fetching custom Authorization header from incoming PHP request

    - by jpatokal
    So I'm trying to parse an incoming request in PHP which has the following header set: Authorization: Custom Username. Simple question: how on earth do I get my hands on it? If it were Authorization: Basic, I could get the username from $_SERVER["PHP_AUTH_USER"]. If it were X-Custom-Authorization: Username, I could get the username from $_SERVER["HTTP_X_CUSTOM_AUTHORIZATION"]. But neither of these is set by a custom Authorization header, var_dump($_SERVER) reveals no mention of the header (in particular, AUTH_TYPE is missing), and PHP 5 functions like get_headers() only work on responses to outgoing requests. I'm running PHP 5 on Apache with an out-of-the-box Ubuntu install.
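
    Two avenues worth trying (a sketch, not from the original question). Under mod_php, apache_request_headers() exposes the raw request headers, including Authorization schemes PHP doesn't recognise; and if the header still doesn't appear, the usual workaround is a mod_rewrite rule that copies it into an environment variable PHP can see:

      <?php
      // a minimal sketch assuming mod_php; apache_request_headers() is
      // not available under CGI-style setups
      $headers = apache_request_headers();
      if (isset($headers['Authorization'])) {
          // "Custom Username" -> scheme "Custom", credential "Username"
          list($scheme, $credential) = explode(' ', $headers['Authorization'], 2);
      }

    The .htaccess side of the fallback, which surfaces the value as $_SERVER['HTTP_AUTHORIZATION'] (or REDIRECT_HTTP_AUTHORIZATION, depending on the setup):

      RewriteEngine On
      RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}]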

    Read the article

  • Requires a valid Date or x-amz-date header?

    - by Jordan Messina
    I'm getting the following error when attempting to upload a file to S3:

      S3StorageError: <?xml version="1.0" encoding="UTF-8"?>
      <Error><Code>AccessDenied</Code><Message>AWS authentication requires
      a valid Date or x-amz-date header</Message>
      <RequestId>7910FF83F3FE17E2</RequestId>
      <HostId>EjycXTgSwUkx19YNkpAoY2UDDur/0d5SMvGJUicpN6qCZFa2OuqcpibIR3NJ2WKB</HostId></Error>

    I'm using Django with Django-Storages and Imagekit. My S3 settings in my settings.py look as follows:

      locale.setlocale(locale.LC_TIME, 'en_US')
      DEFAULT_FILE_STORAGE = 'backends.s3.S3Storage'
      AWS_ACCESS_KEY_ID = '************************'
      AWS_SECRET_ACCESS_KEY = '*****************************'
      AWS_STORAGE_BUCKET_NAME = 'static.blabla.com'
      AWS_HEADERS = {
          'x-amz-date': datetime.datetime.utcnow().strftime('%a, %d %b %Y %H:%M:%S GMT'),
          'Expires': 'Thu, 15 Apr 2200 20:00:00 GMT',
      }

      from S3 import CallingFormat
      AWS_CALLING_FORMAT = CallingFormat.SUBDOMAIN

    Thanks for any help you can give!

    Read the article

  • Download a file from one ASP.NET web application to other (given the credentials)

    - by Tom S.
    Hi everybody! I'm working on an ASP.NET 3.5 web application (C#), where I have a file with some information that is updated frequently, and only a few accounts can access it (the application is using the ASP.NET authentication system, stored in a SQL database). My task is to parse that file, so I made a small parser (another web app) to show the information in a more friendly way. However, every time I want to parse it, I need to log in to the application with one of those accounts, download the file, and put it in the parser's folder. Is there any way, given the username and password, to download the file directly from the parser application and use that one? Thanks in advance

    Read the article

  • httpUnit class not found

    - by josh
    I am trying HttpUnit for the first time and am just trying to get a response back from google.com. However, I keep getting the following error: com.meterware.httpunit.dom.HTMLDocumentImpl not found. I have placed httpUnit.jar in the libraries folder of my NetBeans project and can actually see that the class file is there. Any experiences with this?

    Read the article

  • .NET: Preserving some, but not all query params during redirect

    - by kasper pedersen
    Hi all, could someone tell me if the code below would achieve what I want, which is: check if the query parameters 'return_path' and/or 'user_state' are present in the query string, and if so, append them to the query string of the redirect URI. As I'm not a .NET dev and don't have a server to test this on, I was hoping someone could give me some feedback.

      // note: a few fixes over the original snippet: 'params' is a reserved
      // word in C#, Contains/Join/UrlEncode need their .NET casing, and the
      // query string has to be appended to the URL before redirecting
      ArrayList vars = new ArrayList();
      vars.Add("return_path");
      vars.Add("user_state");

      string newUrl = "/new/request/uri";
      ArrayList kept = new ArrayList();
      foreach (string key in Request.QueryString)
      {
          if (vars.Contains(key))
          {
              kept.Add(key + "=" + HttpUtility.UrlEncode(Request.QueryString[key]));
          }
      }
      string[] paramArr = (string[])kept.ToArray(typeof(string));
      string queryString = String.Join("&", paramArr);
      if (queryString.Length > 0)
      {
          newUrl += "?" + queryString;
      }
      Response.Redirect(newUrl);

    Thank you :)

    Read the article

  • The remote server returned an error: (404) Not Found.

    - by John
    I am running this piece of code to get the source of my web page. The problem is: why does this function return a 404 error?

      Private Function getPageSource(ByVal URL As String) As String
          Dim webClient As New System.Net.WebClient()
          Dim strSource As String = webClient.DownloadString(URL)
          webClient.Dispose()
          Return strSource
      End Function

    Read the article

  • Why is curl in Ruby slower than command-line curl?

    - by Stiivi
    I am trying to download more than 1M pages (URLs ending in a sequence ID). I have implemented a kind of multi-purpose download manager with a configurable number of download threads and one processing thread. The downloader downloads files in batches:

      curl = Curl::Easy.new
      batch_urls.each { |url_info|
        curl.url = url_info[:url]
        curl.perform
        file = File.new(url_info[:file], "wb")
        file << curl.body_str
        file.close
        # ... some other stuff
      }

    I have tried to download an 8000-page sample. When using the code above, I get 1000 pages in 2 minutes. When I write all the URLs into a file and do, in a shell:

      cat list | xargs curl

    I get all 8000 pages in two minutes. Thing is, I need to have it in Ruby code, because there is other monitoring and processing code. I have tried: Curl::Multi (it is somehow faster, but misses 50-90% of files: it does not download them and gives no reason/code), and multiple threads with Curl::Easy (around the same speed as single-threaded). Why is a reused Curl::Easy slower than subsequent command-line curl calls, and how can I make it faster? Or what am I doing wrong? I would prefer to fix my download manager code than to do the downloading for this case in a different way. Before this, I was calling command-line wget, which I provided with a file with a list of URLs. However, not all errors were handled, and it was also not possible to specify an output file for each URL separately when using a URL list. Now it seems to me that the best way would be to use multiple threads with a system call to the 'curl' command. But why, when I can use Curl directly in Ruby? Code for the download manager is here, if it might help: Download Manager (I have played with timeouts, from not setting them to various values; it did not seem to help). Any hints appreciated.

    Read the article
