Search Results

Search found 92198 results on 3688 pages for 'http error'.

  • silverlight 3: long running wcf call triggers 401.1 (access denied)

    - by sympatric greg
    I have a WCF service consumed by a Silverlight 3 control. The Silverlight client uses a basicHttpBinding that is constructed at runtime from the control's initialization parameters like this: public static T GetServiceClient<T>(string serviceURL) { BasicHttpBinding binding = new BasicHttpBinding(Application.Current.Host.Source.Scheme.Equals("https", StringComparison.InvariantCultureIgnoreCase) ? BasicHttpSecurityMode.Transport : BasicHttpSecurityMode.None); binding.MaxReceivedMessageSize = int.MaxValue; binding.MaxBufferSize = int.MaxValue; binding.Security.Mode = BasicHttpSecurityMode.TransportCredentialOnly; return (T)Activator.CreateInstance(typeof(T), new object[] { binding, new EndpointAddress(serviceURL)}); } The service implements Windows security. Calls were returning as expected until the result set increased to several thousand rows, at which time HTTP 401.1 errors were received. The service's HttpBinding defines closeTimeout, openTimeout, receiveTimeout and sendTimeout values of 10 minutes. If I limit the size of the result set, the call succeeds. Additional observations from Fiddler: When Method2 is modified to return a smaller result set (and avoid the problem), control initialization consists of 4 calls: Service1/Method1 -- result:401 Service1/Method1 -- result:401 (this time the header includes the element "Authorization: Negotiate TlRMTV...") Service1/Method1 -- result:200 Service1/Method2 -- result:200 (1.25 seconds) When Method2 is configured to return the larger result set we get: Service1/Method1 -- result:401 Service1/Method1 -- result:401 (this time the header includes the element "Authorization: Negotiate TlRMTV...") Service1/Method1 -- result:200 Service1/Method2 -- result:401.1 (7.5 seconds) Service1/Method2 -- result:401.1 (15ms) Service1/Method2 -- result:401.1 (7.5 seconds)

    Read the article

  • Android: Unable to make httprequest behind firewall

    - by Yang
    The standard getUrlContent works well when there is no firewall. But I get exceptions when I try to do it behind a firewall. I've tried to set "http proxy server" in the AVD manager, but it didn't work. Any idea how to correctly set it up? protected static synchronized String getUrlContent(String url) throws ApiException { if(url.equals("try")){ return "thanks"; } if (sUserAgent == null) { throw new ApiException("User-Agent string must be prepared"); } // Create client and set our specific user-agent string HttpClient client = new DefaultHttpClient(); HttpGet request = new HttpGet(url); request.setHeader("User-Agent", sUserAgent); try { HttpResponse response = client.execute(request); // Check if server response is valid StatusLine status = response.getStatusLine(); if (status.getStatusCode() != HTTP_STATUS_OK) { throw new ApiException("Invalid response from server: " + status.toString()); } // Pull content stream from response HttpEntity entity = response.getEntity(); InputStream inputStream = entity.getContent(); ByteArrayOutputStream content = new ByteArrayOutputStream(); // Read response into a buffered stream int readBytes = 0; while ((readBytes = inputStream.read(sBuffer)) != -1) { content.write(sBuffer, 0, readBytes); } // Return result from buffered stream return new String(content.toByteArray()); } catch (IOException e) { throw new ApiException("Problem communicating with API", e); } }
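
    A hedged sketch of one way around this: instead of relying on the emulator's proxy settings, Apache HttpClient (which the DefaultHttpClient above comes from) can be pointed at the proxy directly via ConnRoutePNames.DEFAULT_PROXY. The proxy host, port, and whether the proxy requires credentials are assumptions here, not values taken from the question:

        import org.apache.http.HttpHost;
        import org.apache.http.client.HttpClient;
        import org.apache.http.conn.params.ConnRoutePNames;
        import org.apache.http.impl.client.DefaultHttpClient;

        public class ProxiedClientFactory {
            // Builds a client that routes every request through an explicit HTTP proxy.
            // Replace host/port with the real firewall proxy; add credentials if required.
            public static HttpClient newProxiedClient(String proxyHost, int proxyPort) {
                DefaultHttpClient client = new DefaultHttpClient();
                HttpHost proxy = new HttpHost(proxyHost, proxyPort);
                client.getParams().setParameter(ConnRoutePNames.DEFAULT_PROXY, proxy);
                return client;
            }
        }

    The client returned here could then replace the plain new DefaultHttpClient() call inside getUrlContent.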

    Read the article

  • How do I force or add the content length for ajax type POST requests in Firefox?

    - by Jayson
    I'm trying to POST an HTTP request using Ajax, but I'm getting a response from the Apache server, logged by mod_security in modsec_audit, that: "POST request must have a Content-Length header." I do not want to disable this in modsec_audit. This occurs only in Firefox, and not in IE. Further, I switched to using a POST rather than a GET to keep IE from caching my results. This is a simplified version of the code I'm using for the request; I'm not using any JavaScript framework. function getMyStuff(){ var SearchString = ''; /* build search string */ ... /* now do request */ var xhr = createXMLHttpRequest(); var RequestString = 'someserverscript.cfm' + SearchString; xhr.open("POST", RequestString, true); xhr.onreadystatechange = function(){ processResponse(xhr); } xhr.send(null); } function processResponse(xhr){ var serverResponse = xhr.responseText; var container = document.getElementById('myResultsContainer'); if (xhr.readyState == 4){ container.innerHTML = serverResponse; } } function createXMLHttpRequest(){ try { return new ActiveXObject("Msxml2.XMLHTTP"); } catch (e) {} try { return new ActiveXObject("Microsoft.XMLHTTP"); } catch (e) {} try { return new XMLHttpRequest(); } catch(e) {} return null; } How do I force or add the content length for ajax type POST requests in Firefox?

    Read the article

  • Rails 3 get raw post data and write it to tmp file

    - by Andrew
    I'm working on implementing Ajax-Upload for uploading photos in my Rails 3 app. The documentation says: For IE6-8, Opera, older versions of other browsers you get the file as you normally do with regular form-base uploads. For browsers which upload file with progress bar, you will need to get the raw post data and write it to the file. So, how can I receive the raw post data in my controller and write it to a tmp file so my controller can then process it? (In my case the controller is doing some image manipulation and saving to S3.) Some additional info: As I'm configured right now the post is passing these parameters: Parameters: {"authenticity_token"=>"...", "qqfile"=>"IMG_0064.jpg"} ... and the CREATE action looks like this: def create @attachment = Attachment.new @attachment.user = current_user @attachment.file = params[:qqfile] if @attachment.save! respond_to do |format| format.js { render :text => '{"success":true}' } end end end ... but I get this error: ActiveRecord::RecordInvalid (Validation failed: File file name must be set.): app/controllers/attachments_controller.rb:7:in `create'

    Read the article

  • Intrinsics program (SSE) - g++ - help needed

    - by Sriram
    Hi all, This is the first time I am posting a question on stackoverflow, so please try and overlook any errors I may have made in formatting my question/code. But please do point the same out to me so I may be more careful. I was trying to write some simple intrinsics routines for the addition of two 128-bit (containing 4 float variables) numbers. I found some code on the net and was trying to get it to run on my system. The code is as follows: //this is a sample Intrinsics program to add two vectors. #include <iostream> #include <iomanip> #include <xmmintrin.h> #include <stdio.h> using namespace std; struct vector4 { float x, y, z, w; }; //functions to operate on them. vector4 set_vector(float x, float y, float z, float w = 0) { vector4 temp; temp.x = x; temp.y = y; temp.z = z; temp.w = w; return temp; } void print_vector(const vector4& v) { cout << " This is the contents of vector: " << endl; cout << " > vector.x = " << v.x << endl; cout << " vector.y = " << v.y << endl; cout << " vector.z = " << v.z << endl; cout << " vector.w = " << v.w << endl; } vector4 sse_vector4_add(const vector4&a, const vector4& b) { vector4 result; asm volatile ( "movl $a, %eax" //move operands into registers. "\n\tmovl $b, %ebx" "\n\tmovups (%eax), xmm0" //move register contents into SSE registers. "\n\tmovups (%ebx), xmm1" "\n\taddps xmm0, xmm1" //add the elements. addps operates on single-precision vectors. "\n\t movups xmm0, result" //move result into vector4 type data. ); return result; } int main() { vector4 a, b, result; a = set_vector(1.1, 2.1, 3.2, 4.5); b = set_vector(2.2, 4.2, 5.6); result = sse_vector4_add(a, b); print_vector(a); print_vector(b); print_vector(result); return 0; } The g++ parameters I use are: g++ -Wall -pedantic -g -march=i386 -msse intrinsics_SSE_example.C -o h The errors I get are as follows: intrinsics_SSE_example.C: Assembler messages: intrinsics_SSE_example.C:45: Error: too many memory references for movups intrinsics_SSE_example.C:46: Error: too many memory references for movups intrinsics_SSE_example.C:47: Error: too many memory references for addps intrinsics_SSE_example.C:48: Error: too many memory references for movups I have spent a lot of time on trying to debug these errors, googled them and so on. I am a complete noob to Intrinsics and so may have overlooked some important things. Any help is appreciated, Thanks, Sriram.

    Read the article

  • Problem when getting pageContent of an unavailable URL in Java

    - by tiendv
    I have code for getting the page content from a URL: import java.io.BufferedReader; import java.io.IOException; import java.io.InputStreamReader; import java.net.URL; import java.net.URLConnection; public class GetPageFromURLAction extends Thread { public String stringPageContent; public String targerURL; public String getPageContent(String targetURL) throws IOException { String returnString=""; URL urlString = new URL(targetURL); URLConnection openConnection = urlString.openConnection(); String temp; BufferedReader in = new BufferedReader( new InputStreamReader(openConnection.getInputStream())); while ((temp = in.readLine()) != null) { returnString += temp + "\n"; } in.close(); // String nohtml = sb.toString().replaceAll("\\<.*?>",""); return returnString; } public String getStringPageContent() { return stringPageContent; } public void setStringPageContent(String stringPageContent) { this.stringPageContent = stringPageContent; } public String getTargerURL() { return targerURL; } public void setTargerURL(String targerURL) { this.targerURL = targerURL; } @Override public void run() { try { this.stringPageContent=this.getPageContent(targerURL); } catch (IOException e) { e.printStackTrace(); } } } Sometimes I receive an HTTP error of 405 or 403 and the result string is null. I have tried checking permission to connect to the URL with: URLConnection openConnection = urlString.openConnection(); openConnection.getPermission() but it usually returns null. Does that mean I don't have permission to access the link? I have tried stripping off the query portion of the URL with: String nohtml = sb.toString().replaceAll("\\<.*?>",""); where sb is a StringBuilder, but it doesn't seem to strip off the whole query substring. In an unrelated question, I'd like to use threads here because I must retrieve many URLs; how can I create a multi-thread client to improve the speed?
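
    Two hedged sketches that may help here, assuming plain java.net and java.util.concurrent are acceptable: the first surfaces the HTTP status code explicitly (403/405 responses often come from servers that dislike the default Java User-Agent or the request method, so at least the status is visible instead of a null result), and the second fans many URLs out over a fixed thread pool rather than one Thread subclass per URL. The User-Agent string and the pool size are illustrative assumptions:

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStreamReader;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.Callable;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.Future;

        public class PageFetcher {

            // Fetch one page, surfacing the HTTP status instead of returning null.
            public static String fetch(String targetURL) throws IOException {
                HttpURLConnection conn = (HttpURLConnection) new URL(targetURL).openConnection();
                conn.setRequestProperty("User-Agent", "Mozilla/5.0 (page fetcher sketch)");
                int status = conn.getResponseCode();   // a 403 or 405 shows up here
                if (status != HttpURLConnection.HTTP_OK) {
                    throw new IOException("Server answered HTTP " + status + " for " + targetURL);
                }
                StringBuilder page = new StringBuilder();
                BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
                String line;
                while ((line = in.readLine()) != null) {
                    page.append(line).append('\n');
                }
                in.close();
                return page.toString();
            }

            // Fetch many URLs concurrently with a bounded pool instead of one Thread per URL.
            public static List<Future<String>> fetchAll(List<String> urls) throws InterruptedException {
                ExecutorService pool = Executors.newFixedThreadPool(8);
                List<Callable<String>> tasks = new ArrayList<Callable<String>>();
                for (final String url : urls) {
                    tasks.add(new Callable<String>() {
                        public String call() throws IOException {
                            return fetch(url);
                        }
                    });
                }
                try {
                    return pool.invokeAll(tasks);
                } finally {
                    pool.shutdown();
                }
            }
        }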

    Read the article

  • log in using fitnesse

    - by user1513027
    This is a basic HTTP encoding and formatting question. I need to log in from a RESTFixture to a PHP web service. I need to pass in the account, username, and password, as POST variables just as a login page does. Wireshark shows that my login page formats it as accountNumber=accounttest&username=usertest&password=passtest When I do that in the test, I get a POST array of $_POST: Array ( [accountNumber] => accounttest [amp;username] => usertest [amp;password] => passtest ) That would work but the "amp;" obviously makes it so that PHP doesn't find the username. Content type is the same for both live and the test. [CONTENT_TYPE] = application/x-www-form-urlencoded Here are a few other formats I've tried with results. In all three cases, it fails to parse so all fields end up in one array entry. input: accountNumber=accounttest%amp;username=usertest%amp;password=passtest Result: $_POST: Array ( [accountNumber] = accounttest%amp;username=usertest%amp;password=passtest ) input: accountNumber=accounttest;username=usertest;password=passtest Result: $_POST: Array ( [accountNumber] => accounttest;username=usertest;password=passtest ) input: accountNumber=accounttest%26username=usertest%26password=passtest Result: $_POST: Array ( [accountNumber] = accounttest&username=usertest&password=passtest ) So the last one correctly converts the %26 to &, but doesn't break the items apart into array elements.
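
    For reference, the body PHP expects is a plain application/x-www-form-urlencoded string: each value percent-encoded on its own and the pairs joined by literal & characters. Sending &amp; produces the amp;-prefixed keys above, and sending %26 just hides the separator inside the first value. A hedged Java sketch of that wire format (the URL is a placeholder and this bypasses the fixture entirely, so it is only meant as a reference point):

        import java.io.OutputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.net.URLEncoder;

        public class FormPostSketch {
            public static void main(String[] args) throws Exception {
                // Encode each value on its own, then join the pairs with a literal '&'.
                String body = "accountNumber=" + URLEncoder.encode("accounttest", "UTF-8")
                        + "&username=" + URLEncoder.encode("usertest", "UTF-8")
                        + "&password=" + URLEncoder.encode("passtest", "UTF-8");

                HttpURLConnection conn = (HttpURLConnection)
                        new URL("http://example.com/login.php").openConnection();
                conn.setRequestMethod("POST");
                conn.setDoOutput(true);
                conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");

                OutputStream out = conn.getOutputStream();
                out.write(body.getBytes("UTF-8"));
                out.close();
                System.out.println("HTTP " + conn.getResponseCode());
            }
        }

    If PHP splits the three fields correctly from a body like this, the remaining problem is purely how the wiki page escapes the & (for instance whether it has to be written as &amp; in the page source while still going out as a plain & on the wire).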

    Read the article

  • how to save sitecore webpage in html file on local disk or server

    - by Sam
    WebRequest mywebReq ; WebResponse mywebResp ; StreamReader sr ; string strHTML ; StreamWriter sw; mywebReq = WebRequest.Create("http://domain/sitecore/content/test/page10.aspx"); mywebResp = mywebReq.GetResponse(); sr = new StreamReader(mywebResp.GetResponseStream()); strHTML = sr.ReadToEnd(); sw = File.CreateText(Server.MapPath("hello.html")); sw.WriteLine(strHTML); sw.Close(); Hi, I want to save a Sitecore .aspx page as an HTML file on the local disk, but I am getting an exception. If I use any other webpage, for example google.com, it works fine. The exception: System.Net.WebException was caught HResult=-2146233079 Message=The remote server returned an error: (404) Not Found. Source=System StackTrace: at System.Net.WebClient.DownloadFile(Uri address, String fileName) at System.Net.WebClient.DownloadFile(String address, String fileName) at BlueDiamond.addmodule.btnSubmit(Object sender, EventArgs e) in c:\inetpub\wwwroot\STGLocal\Website\addmodule.aspx.cs:line 97 InnerException: Any help is appreciated. Thanks in advance.

    Read the article

  • Analyzing Web Application Speed

    - by Amy
    I'm a bit confused because the logical/programmer brain in me says that if all things are constant, the speed of a function must be constant. I am working on a PHP web application with jqGrid as a front end for showing the data. I am testing on my personal computer, so network traffic does not apply. I make an HTTP request to a PHP function, it returns the data, and then jqGrid renders it. What has me befuddled is that Firebug sometimes reports that this takes 300-600 milliseconds, and sometimes that it takes 3.68 seconds. I can run the request over and over again, with radically different response times. The query is the same. The number of users on the system is the same. No network latency. Same code. I'm not running other applications on the computer while testing. I could understand query caching improving performance on subsequent requests, but the speed is just fluctuating wildly with no rhyme or reason. So, my question is, what else can cause such variability in the response time? How can I determine what's doing it? More importantly, is there any way to get things more consistent?

    Read the article

  • Python 3.3 Webserver restarting problems

    - by IPDGino
    I have made a simple webserver in python, and had some problems with it before as described here: Python (3.3) Webserver script with an interesting error In that question, the answer was to use a While True: loop so that any crashes or errors would be resolved instantly, because it would just start itself again. I've used this for a while, and still want to make the server restart itself every few minutes, but on Linux for some reason it won't work for me. On windows the code below works fine, but on linux it keeps saying Handler class up here ... ... class Server: def __init__(self): self.server_class = HTTPServer self.server_adress = ('MY IP GOES HERE, or localhost', 8080) global httpd httpd = self.server_class(self.server_adress, Handler) self.main() def main(self): if count > 1: global SERVER_UP_SINCE HOUR_CHECK = int(((count - 1) * RESTART_INTERVAL) / 60) SERVER_UPTIME = str(HOUR_CHECK) + " MINUTES" if HOUR_CHECK > 60: minutes = int(HOUR_CHECK % 60) hours = int(HOUR_CHECK // 60) SERVER_UPTIME = ("%s HOURS, %s MINUTES" % (str(hours), str(minutes))) SERVING_ON_ADDR = self.server_adress SERVER_UP_SINCE = str(SERVER_UP_SINCE) SERVER_RESTART_NUMBER = count - 1 print(""" SERVER INFO ------------------------------------- SERVER_UPTIME: %s SERVER_UP_SINCE: %s TOTAL_FILES_SERVED: %d SERVING_ON_ADDR: %s SERVER_RESTART_NUMBER: %s \n\nSERVER HAS RESTARTED """ % (SERVER_UPTIME, SERVER_UP_SINCE, TOTAL_FILES, SERVING_ON_ADDR, SERVER_RESTART_NUMBER)) else: print("SERVER_BOOT=1\nSERVER_ONLINE=TRUE\nRESTART_LOOP=TRUE\nSERVING_ON_ADDR:%s" % str(self.server_adress)) while True: try: httpd.serve_forever() except KeyboardInterrupt: print("Shutting down...") break httpd.shutdown() httpd.socket.close() raise(SystemExit) return def server_restart(): """If you want the restart timer to be longer, replace the number after the RESTART_INTERVAL variable""" global RESTART_INTERVAL RESTART_INTERVAL = 10 threading.Timer(RESTART_INTERVAL, server_restart).start() global count count = count + 1 instance = Server() if __name__ == "__main__": global SERVER_UP_SINCE SERVER_UP_SINCE = strftime("%d-%m-%Y %H:%M:%S", gmtime()) server_restart() Basically, I make a thread to restart it every 10 seconds (For testing purposes) and start the server. After ten seconds it will say File "/home/username/Desktop/Webserver/server.py", line 199, in __init__ httpd = self.server_class(self.server_adress, Handler) File "/usr/lib/python3.3/socketserver.py", line 430, in __init__ self.server_bind() File "/usr/lib/python3.3/http/server.py", line 135, in server_bind socketserver.TCPServer.server_bind(self) File "/usr/lib/python3.3/socketserver.py", line 441, in server_bind self.socket.bind(self.server_address) OSError: [Errno 98] Address already in use As you can see in the except KeyboardInterruption line, I tried everything to make the server stop, and the program stop, but it will NOT stop. But the thing I really want to know is how to make this server able to restart, without giving some wonky errors.

    Read the article

  • GZipStream in an WebHttpResponse producing no data

    - by Pierre 303
    I want to compress my HTTP Responses for client that supports it. Here is the code used to send a standard response: IHttpClientContext context = (IHttpClientContext)sender; IHttpRequest request = e.Request; string responseBody = "This is some random text"; IHttpResponse response = request.CreateResponse(context); using (StreamWriter writer = new StreamWriter(response.Body)) { writer.WriteLine(responseBody); writer.Flush(); response.Send(); } The code above works fine. Now I added gzip support below. When I test it with a browser that supports gzip or a custom method, it returns an empty string. I'm sure I'm missing something simple, but I just can't find it... IHttpClientContext context = (IHttpClientContext)sender; IHttpRequest request = e.Request; string acceptEncoding = request.Headers["Accept-Encoding"]; string responseBody = "This is some random text"; IHttpResponse response = request.CreateResponse(context); if (acceptEncoding != null && acceptEncoding.Contains("gzip")) { byte[] bytes = ASCIIEncoding.ASCII.GetBytes(responseBody); response.AddHeader("Content-Encoding", "gzip"); using (GZipStream writer = new GZipStream(response.Body, CompressionMode.Compress)) { writer.Write(bytes, 0, bytes.Length); writer.Flush(); response.Send(); } } else { using (StreamWriter writer = new StreamWriter(response.Body)) { writer.WriteLine(responseBody); writer.Flush(); response.Send(); } }

    Read the article

  • Very strange client/browser issue

    - by Jeriko
    One of our clients has logged a very strange issue with us- We launched a preview for their website, but when it's viewed on their main PC, peculiar things start to happen... At first, the stylesheet wasn't being found, and so accessing any page resulted in one void of all styles. We sent them a direct link to the stylesheet, which was viewable from all our computers in the office - but gave a "File Not Found" error on their side. I then deleted the file, and replaced it with a new blank file, which he could then access. Copy-pasted screen.css contents into this file, and he could then view it fine, and stylesheets magically worked on the site again. Now, he can view styles, but not the referenced header images. The strange thing is that this problem doesn't exist on any other PC we've tested, or on any other site on the problem computer, but obviously we'd like our client's site to work for them. The strange thing is, they can view other sites of ours, hosted on the same server, built on top of the same CMS (and so most of the files are the same) without problem - but are getting 404s for files that most definitely do exist. Stylesheets are not turned off, nor is anything specifically deactivated on their browser (as other sites are fine) Reloading with CTRL+F5 doesn't help The client is using the latest version of firefox Any ideas here on what to try / how to narrow the problem down?

    Read the article

  • Displaying NON-ASCII Characters using HttpClient

    - by Abdullah Gheith
    So, I am using this code to get the whole HTML of a website, but I don't seem to get the non-ASCII characters. All I get is diamonds with a question mark; characters like this: å, appear like this: ? I doubt it's because of the charset; what could it be then? Log.e("HTML", "henter htmlen.."); String url = "http://beep.tv2.dk"; HttpClient client = new DefaultHttpClient(); client.getParams().setParameter(CoreProtocolPNames.PROTOCOL_VERSION, HttpVersion.HTTP_1_1); client.getParams().setParameter(CoreProtocolPNames.HTTP_ELEMENT_CHARSET, "UTF-8"); HttpGet request = new HttpGet(url); HttpResponse response = client.execute(request); Header h = HeaderValueFormatter response.addHeader(header) String html = ""; InputStream in = response.getEntity().getContent(); BufferedReader reader = new BufferedReader(new InputStreamReader(in)); StringBuilder str = new StringBuilder(); String line = null; while((line = reader.readLine()) != null) { str.append(line); } in.close(); //b = false; html = str.toString();
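
    One hedged thing to check: new InputStreamReader(in) with no charset argument decodes the stream with the platform default, which is not necessarily the encoding the page was served in, and a mismatch is exactly what produces the replacement characters. A minimal sketch that decodes the entity with an explicit charset instead (whether the page really is UTF-8 or ISO-8859-1 is an assumption; the response's Content-Type header is the authoritative source):

        import java.io.IOException;
        import org.apache.http.HttpEntity;
        import org.apache.http.HttpResponse;
        import org.apache.http.client.HttpClient;
        import org.apache.http.client.methods.HttpGet;
        import org.apache.http.impl.client.DefaultHttpClient;
        import org.apache.http.util.EntityUtils;

        public class CharsetAwareFetch {
            // Decodes the body with the charset declared in the response's Content-Type,
            // falling back to UTF-8 only when the server declares none.
            public static String fetch(String url) throws IOException {
                HttpClient client = new DefaultHttpClient();
                HttpResponse response = client.execute(new HttpGet(url));
                HttpEntity entity = response.getEntity();
                return EntityUtils.toString(entity, "UTF-8");
            }
        }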

    Read the article

  • Receive JSON payload with ZEND framework and / or PHP

    - by kent3800
    I'm receiving a JSON payload from a webservice at my site's internal webpage at /asset/setjob. The following is the JSON payload being posted to /asset/setjob: [{"job": {"source_filename": "beer-drinking-pig.mpg", "current_step": "waiting_for_file", "encoding_profile_id": "nil", "resolution": "nil", "status_url": "http://example.com/api/v1/jobs/1.json", "id": 1, "bitrate": "nil", "current_status": "waiting for file", "current_progress": "nil", "remote_id": "my-own-remote-id"}}] This payload posts one time to this page. The page is not meant for viewing but for parsing the JSON object for the id and current_status so that I can insert it into a database. I'm using the Zend framework. HOW DO I receive this payload in Zend? Do I $_GET['json']? $_POST['job']? None of these seem to work. I essentially need to assign this payload to a PHP variable so that I can then manipulate it. I've tried: $jsonStrGet = var_dump($_GET); $jsonStrPost = var_dump($_POST); And I've tried: $response = $this->getResponse(); $body = $response->getBody(); Any help would be much appreciated! Thanks.

    Read the article

  • Problem uploading app to google app engine

    - by Oberon
    I'm having problems uploading an app to the google-app-engine from my workplace. I believe the problem is related to the proxy, because I do not see the same problem when following the same procedure from home. (I do not specify HTTP_PROXY from home). These are the commands I run: HTTP_PROXY=http://proxy.<thehostname>.com:8080 HTTP_PROXY=https://proxy.<thehostname>.com:8080 appcfg.py --insecure update myappfolder When running the commands I get prompted for email and password, as expected, but after that it immediately exits with this error message: Error 302: --- begin server output --- <HTML> <HEAD> <TITLE>Moved Temporarily</TITLE> </HEAD> <BODY BGCOLOR="#FFFFFF" TEXT="#000000"> <H1>Moved Temporarily</H1> The document has moved <A HREF="https://www.google.com/accounts/ClientLogin">here</A>. </BODY> </HTML> --- end server output --- Note: I added the --insecure option because otherwise it gave a warning about a missing ssl module. Any idea how to solve or work around this problem?

    Read the article

  • Cannot connect to Github?

    - by user2973438
    so I tried to push some updates onto my repo on github via terminal on Mac OSX 10.8.4 and it doesn't work. I've been getting the same error many times: Lillys-MacBook-Air:Yuewei Lilly$ git push origin master error: Failed connect to github.com:443; Operation timed out while accessing https://github.com/lillybeans/Yuewei.git/info/refs?service=git-receive-pack fatal: HTTP request failed Some background: I've pushed many projects onto github before using terminal (when I was in Canada). I am currently in Shanghai, China, could it be the GFW? But when I was in Beijing, I was able to push projects onto github still. when I do ping github.com: Lillys-MacBook-Air:Yuewei Lilly$ ping github.com PING github.com (192.30.252.131): 56 data bytes Request timeout for icmp_seq 0 Request timeout for icmp_seq 1 ping: sendto: No route to host Request timeout for icmp_seq 2 ping: sendto: Host is down Request timeout for icmp_seq 3 ping: sendto: Host is down Request timeout for icmp_seq 4 ping: sendto: Host is down Request timeout for icmp_seq 5 ping: sendto: Host is down Request timeout for icmp_seq 6 ping: sendto: Host is down Request timeout for icmp_seq 7 ^C --- github.com ping statistics --- 9 packets transmitted, 0 packets received, 100.0% packet loss Lillys-MacBook-Air:Yuewei Lilly$ I have ShadowSocks (proxy) turned on. Without it I can't access github.com via browser, with it, I can. also when I do "git remote -v" I see both my pull and push remote repos correctly listed. Thank you in advance!

    Read the article

  • Sencha : how to pass parameter to php using Ext.data.HttpProxy?

    - by Lauraire Jérémy
    I have successfully completed this great tutorial : http://www.sencha.com/learn/ext-js-grids-with-php-and-sql/ I just can't use the baseParams field specified with the proxy... Here is my code that follows tutorial description : __ My Store : Communes.js ____ Ext.define('app.store.Communes', { extend: 'Ext.data.Store', id: 'communesstore', requires: ['app.model.Commune'], config: { model: 'app.model.Commune', departement:'var', // the proxy with POST method proxy: new Ext.data.HttpProxy({ url: 'app/php/communes.php', // File to connect to method: 'POST' }), // the parameter passed to the proxy baseParams:{ departement: "VAR" }, // the JSON parser reader: new Ext.data.JsonReader({ // we tell the datastore where to get his data from rootProperty: 'results' }, [ { name: 'IdCommune', type: 'integer' }, { name: 'NomCommune', type: 'string' } ]), autoLoad: true, sortInfo:{ field: 'IdCommune', direction: "ASC" } } }); _____ The php file : communes.php _____ <?php /** * CREATE THE CONNECTION */ mysql_connect("localhost", "root", "pwd") or die("Could not connect: " . mysql_error()); mysql_select_db("databasename"); /** * INITIATE THE POST */ $departement = 'null'; if ( isset($_POST['departement'])){ $departement = $_POST['departement']; // Get this from Ext } getListCommunes($departement); /** * */ function getListCommunes($departement) { [CODE HERE WORK FINE : just a connection and query but $departement is NULL] } ?> There is no parameter passed as POST method... Any idea?

    Read the article

  • Java - How to find the redirected url of a url?

    - by Yatendra Goel
    I am accessing web pages through Java as follows: URLConnection con = url.openConnection(); But in some cases, a URL redirects to another URL. So I want to know the URL to which the previous URL redirected. Below are the header fields that I got as a response: null-->[HTTP/1.1 200 OK] Cache-control-->[public,max-age=3600] last-modified-->[Sat, 17 Apr 2010 13:45:35 GMT] Transfer-Encoding-->[chunked] Date-->[Sat, 17 Apr 2010 13:45:35 GMT] Vary-->[Accept-Encoding] Expires-->[Sat, 17 Apr 2010 14:45:35 GMT] Set-Cookie-->[cl_def_hp=copenhagen; domain=.craigslist.org; path=/; expires=Sun, 17 Apr 2011 13:45:35 GMT, cl_def_lang=en; domain=.craigslist.org; path=/; expires=Sun, 17 Apr 2011 13:45:35 GMT] Connection-->[close] Content-Type-->[text/html; charset=iso-8859-1;] Server-->[Apache] So at present, I am constructing the redirected URL from the value of the Set-Cookie header field. In the above case, the redirected URL is copenhagen.craigslist.org. Is there any standard way to determine the URL to which a particular URL is going to redirect? I know that when a URL redirects to another URL, the server sends an intermediate response containing a header field with the URL it is redirecting to, but I am not receiving that intermediate response through the url.openConnection(); method.
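
    For the common case of a real HTTP redirect (a 3xx status), a hedged sketch of the standard approach is below: turn off automatic redirect following on the HttpURLConnection and read the Location response header yourself. Note that in the craigslist response quoted above the server answered 200 and steered the browser via a cookie plus page content rather than a 3xx, so in that particular case there is no Location header to read; the sketch only covers true redirects:

        import java.io.IOException;
        import java.net.HttpURLConnection;
        import java.net.URL;

        public class RedirectProbe {
            // Returns the Location header of a 3xx response, or null when the
            // server did not issue an HTTP redirect at all.
            public static String findRedirectTarget(String url) throws IOException {
                HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
                conn.setInstanceFollowRedirects(false);   // keep the 3xx visible
                int status = conn.getResponseCode();
                if (status >= 300 && status < 400) {
                    return conn.getHeaderField("Location");
                }
                return null;
            }
        }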

    Read the article

  • IIS doesn't send two responses to the same client at the same time (only for ASP)

    - by dr. evil
    I've got 2 ASP pages. I make a request to the first page from Firefox (which takes 30 seconds to process on the server side), and during those 30 seconds I make another request from Firefox to the second page (which takes less than 1 second on the server side), but its response only comes after 31 seconds, because it waits for the first request to finish. When I request the first page from Firefox and then request the second page from IE, it's instant. So basically ASP on IIS 6 is somehow limiting every client to one (long-running) request at a time. I need to get around this problem in my .NET client application. This is tested on 3 different systems. If you want to test, you can try the ASP scripts at the end. This behaviour is the same for a long SQL execution or just a time-consuming ASP operation. Note: It's not about HTTP Keep-Alive. It's not about the persistent connection limit (we tried to increase this in Firefox and in .NET with Net.ServicePointManager.DefaultConnectionLimit). It's not about the User-Agent. This doesn't happen in ASP.NET, so I assume it's something to do with asp.dll. I'm trying to solve this on the client, not the server. I don't have direct control over the server; it's a 3rd-party solution. Is there any way to get around this? Sample ASP Code: First ASP: <% Set cnn = Server.CreateObject("Adodb.Connection") cnn.Open "Provider=sqloledb;Data Source=.;Initial Catalog=master;User Id=sa;Password=;" cnn.Execute("WAITFOR DELAY '0:0:30'") cnn.Close %> Second ASP: <% Response.Write "bla bla" %>

    Read the article

  • Django throws 404 at generic views

    - by x0rg
    I'm trying to get the generic views for a date-based archive working in Django. I defined the URLs as described in a tutorial, but Django returns a 404 error whenever I try to access a URL with a variable (such as month or year) in it. It doesn't even produce a TemplateDoesNotExist exception. Normal URLs without variables work fine. Here's my urlconf: from django.conf.urls.defaults import * from zurichlive.zhl.models import Event info_dict = { 'queryset': Event.objects.all(), 'date_field': 'date', 'allow_future': 'True', } urlpatterns += patterns('django.views.generic.date_based', (r'events/(?P<year>d{4})/(?P<month>[a-z]{3})/(?P<day>w{1,2})/(?P<slug>[-w]+)/$', 'object_detail', dict(info_dict, slug_field='slug',template_name='archive/detail.html')), (r'^events/(?P<year>d{4})/(?P<month>[a-z]{3})/(?P<day>w{1,2})/(?P<slug>[-w]+)/$', 'object_detail', dict(info_dict, template_name='archive/list.html')), (r'^events/(?P<year>d{4})/(?P<month>[a-z]{3})/(?P<day>w{1,2})/$','archive_day',dict(info_dict,template_name='archive/list.html')), (r'^events/(?P<year>d{4})/(?P<month>[a-z]{3})/$','archive_month', dict(info_dict, template_name='archive/list.html')), (r'^events/(?P<year>)/$','archive_year', dict(info_dict, template_name='archive/list.html')), (r'^events/$','archive_index', dict(info_dict, template_name='archive/list.html')), ) When I access /events/2010/may/12/this-is-a-slug I should get to the detail.html template, but instead I get a 404. What am I doing wrong?

    Read the article

  • (Weak) ETags and Last-Modified

    - by Kai Moritz
    As far as I understand the specs, the ETag, which was introduced in RFC 2616 (HTTP/1.1), is a successor to the Last-Modified header, intended to give the software architect more control over the cache-revalidation process. If both cache-validation headers (If-None-Match and If-Modified-Since) are present, according to RFC 2616, the client (i.e. the browser) should use the ETag when checking whether a resource has changed. According to section 14.26 of RFC 2616, the server MUST NOT respond with a 304 Not Modified if the ETag presented in an If-None-Match header has changed, and the server has to ignore an additional If-Modified-Since header, if present. If the presented ETag matches, it MUST NOT perform the request, unless the date in the Last-Modified header says so. (If the presented ETag matches, the server should respond with a 304 Not Modified in the case of a GET or HEAD request...) This section leaves room for some speculation: a strong ETag is supposed to change every time the resource changes. So, having to respond with something other than 304 Not Modified to a request with an unchanged ETag and an If-Modified-Since header that does not match is a bit of a contradiction, because the strong ETag says that the resource was not modified. (Though this is not that fatal, because the server can send the same unchanged resource again.) OK. While I was writing this, the question boiled down to this answer: the (small) contradiction stated above exists because of weak ETags. A resource marked with a weak ETag may have changed, although the ETag has not. So, in the case of a weak ETag it would be wrong to answer with 304 Not Modified when the ETag has not changed but the date presented in If-Modified-Since does not match, right?
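
    A hedged client-side sketch of the revalidation dance described above, in case it helps make the header interplay concrete: the client sends the stored ETag back in If-None-Match (and the stored Last-Modified date in If-Modified-Since), and a 304 means the cached copy is still good. The URL and the idea of an in-memory cache are placeholders:

        import java.io.IOException;
        import java.net.HttpURLConnection;
        import java.net.URL;

        public class ConditionalGet {
            // Revalidates a cached resource; true means "304, the cached copy is still valid".
            public static boolean stillFresh(String url, String cachedETag,
                                             long cachedLastModified) throws IOException {
                HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
                if (cachedETag != null) {
                    conn.setRequestProperty("If-None-Match", cachedETag);
                }
                if (cachedLastModified > 0) {
                    conn.setIfModifiedSince(cachedLastModified);
                }
                return conn.getResponseCode() == HttpURLConnection.HTTP_NOT_MODIFIED;
            }
        }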

    Read the article

  • Java Applet 411 Content Length

    - by user1903006
    I am new to Java. I wrote an applet with a gui that sends results (int w and int p) to a server, and I get the "411 Length Required" error. What am I doing wrong? How do you set a Content-Length? This is the method that communicates with the server: public void sendPoints1(int w, int p){ try { String url = "http://somename.com:309/api/Results"; String charset = "UTF-8"; String query = String.format("?key=%s&value=%s", URLEncoder.encode(String.valueOf(w), charset), URLEncoder.encode(String.valueOf(p), charset)); String length = String.valueOf((url + query).getBytes("UTF-8").length); HttpURLConnection connection = (HttpURLConnection) new URL(url + query).openConnection(); connection.setRequestMethod("POST"); connection.setRequestProperty("Content-Length", length); connection.connect(); System.out.println("Responce Code: " + connection.getResponseCode()); System.out.println("Responce Message: " + connection.getResponseMessage()); } catch (Exception e) { System.err.println(e.getMessage()); } }
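
    A hedged reading of the error: the POST above is sent with no body at all (the key/value pairs travel in the query string), HttpURLConnection does not send a Content-Length for a body it never gets, and setting that header by hand is typically ignored because Content-Length is one of the restricted headers the connection manages itself, so the server answers 411. A minimal sketch that puts the parameters in the request body instead, which lets the connection compute and send Content-Length on its own (the URL and parameter names are kept from the question):

        import java.io.OutputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.net.URLEncoder;

        public class ResultSender {
            public static void sendPoints(int w, int p) {
                try {
                    String charset = "UTF-8";
                    String body = "key=" + URLEncoder.encode(String.valueOf(w), charset)
                                + "&value=" + URLEncoder.encode(String.valueOf(p), charset);
                    byte[] bytes = body.getBytes(charset);

                    HttpURLConnection connection = (HttpURLConnection)
                            new URL("http://somename.com:309/api/Results").openConnection();
                    connection.setRequestMethod("POST");
                    connection.setDoOutput(true);              // there will be a request body
                    connection.setRequestProperty("Content-Type",
                            "application/x-www-form-urlencoded;charset=" + charset);
                    connection.setFixedLengthStreamingMode(bytes.length); // explicit Content-Length

                    OutputStream out = connection.getOutputStream();
                    out.write(bytes);                          // the body actually carries the data
                    out.close();

                    System.out.println("Response Code: " + connection.getResponseCode());
                    System.out.println("Response Message: " + connection.getResponseMessage());
                } catch (Exception e) {
                    System.err.println(e.getMessage());
                }
            }
        }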

    Read the article

  • Firefox not honoring must-revalidate cache headers returned by jQuery.ajax() request

    - by Oliver Weichhold
    UPDATE 1: Judging by this thread I am not the only one having this problem in FF 12 and only in 12. UPDATE 2: The problem does not seem to be limited to Ajax requests. From the looks of it everything that makes it into Firefox 12's cache will be fetched from there. No matter what. The server can specify cache control headers all day long. Bummer! What I'm trying to achieve is the following behavior: Browser may cache the response without revalidating for up to 5 minutes I don't care if the browser revalidates on every request (Both Chrome and IE9 do for example) When the expiration is up the browser MUST revalidate (which in my case will result in fresh data) Chrome and IE9 exhibit the desired behavior when issuing a jquery.ajax() request with ifModified: true and cache: true while Firefox 12 never revalidates, which poses a serious problem. These are the actual response headers: HTTP/1.1 200 OK Server: nginx Date: Sun, 03 Jun 2012 07:13:43 GMT Content-Type: text/javascript; charset=UTF-8 Transfer-Encoding: chunked Connection: keep-alive Vary: Accept-Encoding Cache-Control: private, must-revalidate, max-age=300 Last-Modified: Sun, 03 Jun 2012 07:07:13 GMT Content-Encoding: gzip Any suggestions?
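
    For reference, the contract those response headers are asking for can be illustrated with a server-side handler that emits them and honours the conditional request headers on the way back in. The stack behind the nginx server above is not Java, so this is only a hedged illustration of the intended behaviour, not the actual application code:

        import java.io.IOException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        public class RevalidatingServlet extends HttpServlet {
            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
                long lastModified = currentDataVersionTimestamp();   // app-specific lookup
                String etag = "\"" + lastModified + "\"";

                // Cache for up to 5 minutes, then the client MUST revalidate.
                resp.setHeader("Cache-Control", "private, must-revalidate, max-age=300");
                resp.setHeader("ETag", etag);
                resp.setDateHeader("Last-Modified", lastModified);

                String ifNoneMatch = req.getHeader("If-None-Match");
                long ifModifiedSince = req.getDateHeader("If-Modified-Since");
                boolean unchanged = etag.equals(ifNoneMatch)
                        || (ifNoneMatch == null && ifModifiedSince >= lastModified);
                if (unchanged) {
                    resp.setStatus(HttpServletResponse.SC_NOT_MODIFIED);   // 304, no body
                    return;
                }
                resp.setContentType("text/javascript; charset=UTF-8");
                resp.getWriter().write("{\"data\": \"fresh payload goes here\"}");
            }

            private long currentDataVersionTimestamp() {
                // Placeholder: in a real application this would come from the data store.
                return System.currentTimeMillis() / 1000 * 1000;   // whole seconds, like HTTP dates
            }
        }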

    Read the article

  • Why "Content-Length: 0" in POST requests?

    - by stesch
    A customer sometimes sends POST requests with Content-Length: 0 when submitting a form (10 to over 40 fields). We tested it with different browsers and from different locations but couldn't reproduce the error. The customer is using Internet Explorer 7 and a proxy. We asked them to let their system administrator see into the problem from their side. Running some tests without the proxy, etc.. In the meantime (half a year later and still no answer) I'm curious if somebody else knows of similar problems with a Content-Length: 0 request. Maybe from inside some Windows network with a special proxy for big companies. Is there a known problem with Internet Explorer 7? With a proxy system? The Windows network itself? Google only showed something in the context of NTLM (and such) authentication, but we aren't using this in the web application. Maybe it's in the way the proxy operates in the customer's network with Windows logins? (I'm no Windows expert. Just guessing.) I have no further information about the infrastructure.

    Read the article

  • Problems with this code?

    - by J4C3N-14
    I'm trying to use this code which is an example taken from here https://gist.github.com/2383248 , but it is coming up with a error on the public void onClick which is Multiple markers at this line - implements android.view.View.OnClickListener.onClick - Syntax error, insert "}" to complete MethodBody, but when I add the brace it just throws another error after many tries and fails of different suggestions and ideas. It may be a syntax error and bad coding from me (just started learning to program) but does anyone have any ideas how to resolve this or point me in the right direction I would be very grateful. public class ICSCalendarActivity extends Activity implements View.OnClickListener{ Button button1; int year1; int month1; int day1; int ShiftPattern; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); button1 = (Button)findViewById(R.id.openButton); button1.setText("open"); button1.setOnClickListener(this); Bundle extras = getIntent().getExtras(); year1 = extras.getInt("year1"); day1 = extras.getInt("day1"); month1 = extras.getInt("month1"); ShiftPattern = extras.getInt("ShiftPattern"); } public void onClick(View v){ private static void addToCalendar(Context ICSCalendarActivity, final String title, final long dtstart, final long dtend) { final ContentResolver cr = ICSCalendarActivity.getContentResolver(); Cursor cursor ; if (Integer.parseInt(Build.VERSION.SDK) >= 8 ) cursor = cr.query(Uri.parse("content://com.android.calendar/calendars"), new String[]{ "_id", "displayname" }, null, null, null); else cursor = cr.query(Uri.parse("content://calendar/calendars"), new String[]{ "_id", "displayname" }, null, null, null); if ( cursor.moveToFirst() ) { final String[] calNames = new String[cursor.getCount()]; final int[] calIds = new int[cursor.getCount()]; for (int i = 0; i < calNames.length; i++) { calIds[i] = cursor.getInt(0); calNames[i] = cursor.getString(1); cursor.moveToNext(); } AlertDialog.Builder builder = new AlertDialog.Builder(ICSCalendarActivity); builder.setSingleChoiceItems(calNames, -1, new DialogInterface.OnClickListener() { @Override public void onClick(DialogInterface dialog, int which) { ContentValues cv = new ContentValues(); cv.put("calendar_id", calIds[which]); cv.put("title", title); cv.put("dtstart", dtstart ); cv.put("hasAlarm", 1); cv.put("dtend", dtend); Uri newEvent ; if (Integer.parseInt(Build.VERSION.SDK) >= 8 ) newEvent = cr.insert(Uri.parse("content://com.android.calendar/events"), cv); else newEvent = cr.insert(Uri.parse("content://calendar/events"), cv); if (newEvent != null) { long id = Long.parseLong( newEvent.getLastPathSegment() ); ContentValues values = new ContentValues(); values.put( "event_id", id ); values.put( "method", 1 ); values.put( "minutes", 15 ); // 15 minutes if (Integer.parseInt(Build.VERSION.SDK) >= 8 ) cr.insert( Uri.parse( "content://com.android.calendar/reminders" ), values ); else cr.insert( Uri.parse( "content://calendar/reminders" ), values ); } dialog.cancel(); } }); builder.create().show(); } cursor.close(); } } Thank you.
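
    A hedged sketch of the structure the compiler is complaining about: Java does not allow one method to be declared inside another, so addToCalendar has to live at class level and merely be called from onClick. Only the skeleton is shown; the ContentResolver/calendar-chooser body from the question would move into the helper unchanged, and the title and times passed here are placeholders:

        import android.app.Activity;
        import android.content.Context;
        import android.view.View;

        public class ICSCalendarActivity extends Activity implements View.OnClickListener {

            @Override
            public void onClick(View v) {
                // Just call the helper here; its declaration lives outside onClick.
                long start = System.currentTimeMillis();
                addToCalendar(this, "Shift", start, start + 60 * 60 * 1000);
            }

            // Class-level helper (no longer static, since it is used from an instance anyway).
            private void addToCalendar(Context context, String title, long dtstart, long dtend) {
                // ... the ContentResolver / calendar chooser code from the question goes here ...
            }
        }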

    Read the article
