Search Results

Search found 5793 results on 232 pages for 'requests'.


  • Maximizing the number of true concurrent / parallel http requests in Silverlight

    - by Clems
    Hi all. I'm using SL 4 beta and my app needs to make a lot of small HTTP requests to the server. I believe that when the number of allowed concurrent requests is exceeded, subsequent requests are put in a queue. I am also aware that SL 4 has both an HTTP browser stack and an HTTP client stack, each with a different limit on the number of concurrent requests. Let's call those limits MAX_BROWSER and MAX_CLIENT. I also think I read somewhere that the number of concurrent requests is limited per domain, not overall, though I'm not sure whether that applies to the HTTP client stack too. It would mean that you CAN have MAX_BROWSER requests to domain1.com AND MAX_BROWSER requests to domain2.com at the same time. And I even believe that subdomains are considered different, so you could also have MAX_BROWSER requests to domain1.com AND MAX_BROWSER requests to sub.domain1.com at the same time. I have ownership of the services and domain names, so I could easily set up subdomains for my services. Given those considerations, I'm trying to maximize the number of concurrent HTTP requests to my server. A few questions: Is it possible to use both stacks at the same time? Is the subdomain/domain story true for both stacks? For neither? If so, that would mean I could potentially have a number of concurrent requests equal to (MAX_BROWSER + MAX_CLIENT) * NUMBER_OF_DOMAINS, which would be fairly good. Is this correct? I'm kind of sharing my morning thoughts here, hoping somebody has experimented with these things. Thank you.

    Read the article

  • How to process requests twice in Apache

    - by Pieter
    In order to perform realistic tests of a new backend server, I'd like to process all Apache requests twice: handle all the live requests with the old server, as is done right now, but also 'duplicate' each request to a different virtual host, where the new backend is deployed, which will process the request and log the response. What's the simplest way to achieve this in Apache? (The backend is a FastCGI process.)
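
    A hedged aside: the Apache-native answer will look different, but the teeing idea itself is small enough to sketch as WSGI middleware in Python with the requests library; STAGING_URL, the port, and the middleware name are all invented for illustration:

        import threading
        from io import BytesIO

        import requests

        STAGING_URL = "http://localhost:8081"  # hypothetical host of the new backend

        class MirrorMiddleware(object):
            """Serve every request live, and replay a copy against staging."""

            def __init__(self, app):
                self.app = app

            def __call__(self, environ, start_response):
                body = environ["wsgi.input"].read()
                # Fire-and-forget replay so the live response is never delayed.
                threading.Thread(
                    target=self._mirror,
                    args=(environ["REQUEST_METHOD"], environ.get("PATH_INFO", "/"), body),
                ).start()
                environ["wsgi.input"] = BytesIO(body)  # restore the body for the real app
                return self.app(environ, start_response)

            def _mirror(self, method, path, body):
                try:
                    requests.request(method, STAGING_URL + path, data=body, timeout=5)
                except requests.RequestException:
                    pass  # staging failures must never affect live traffic

    Inside Apache itself this would be done with a proxying or mirroring module; the sketch only shows the shape of the idea.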

    Read the article

  • How to make a POST request with the Requests package for Python?

    - by jorrebor
    I am trying to use the Toggl API. I use Requests instead of urllib2 for my GETs and POSTs, but I am stuck.

        payload = {
            "project": {
                "name": "Another Project",
                "billable": False,
                "workspace": {
                    "Name": "jorrebor's workspace",
                    "id": 213272
                },
                "automatically_calculate_estimated_workhours": False
            }
        }
        url = "https://www.toggl.com/api/v6/projects.json"
        r = requests.post(url, data=json.dumps(payload),
                          auth=HTTPBasicAuth('[email protected]', 'mypassword'))

    Authentication seems to be fine, but the payload format probably isn't. A curl command with the same parameters:

        curl -v -u heremytoken:api_token -H "Content-type: application/json" \
             -d "{\"project\":{\"billable\":true,\"workspace\":{\"id\":213272},\"name\":\"Another project\",\"automatically_calculate_estimated_workhours\":false}}" \
             -X POST https://www.toggl.com/api/v6/projects.json

    does work fine. What's wrong with my payload? The response I get is:

        ["Name can't be blank","Workspace can't be blank"]

    which leads me to conclude that the authentication works but Toggl cannot read my JSON object.
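
    A hedged observation: the working curl call sends a Content-type: application/json header, while the Python snippet does not, so the server may be parsing the body as form data. A minimal sketch of the same POST with the header set, assuming that header is the missing piece (credentials are placeholders):

        import json
        import requests
        from requests.auth import HTTPBasicAuth

        # Mirror the payload the curl call sends.
        payload = {
            "project": {
                "name": "Another Project",
                "billable": False,
                "workspace": {"id": 213272},
                "automatically_calculate_estimated_workhours": False,
            }
        }

        r = requests.post(
            "https://www.toggl.com/api/v6/projects.json",
            data=json.dumps(payload),
            headers={"Content-type": "application/json"},  # curl sends this; requests does not add it by itself
            auth=HTTPBasicAuth("[email protected]", "mypassword"),
        )
        print(r.status_code, r.text)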

    Read the article

  • Source of Unexplained Requests in Server Logs

    - by Synetech inc.
    Hi, I am baffled by some entries in my server logs, specifically the web-server logs. Other than normal, expected traffic, I have noticed three types of request errors (e.g. 404, etc.):

    1. Broken links, i.e. links from old, external pages that point to pages that are no longer here
    2. Sequences of probes, i.e. some jerk trying to hack in by scanning my server for a series of exploitable admin-type pages and such
    3. What appear to be completely random requests for things that have never existed on the server or even have anything to do with the server, and that appear by themselves (i.e. not a series of requests like the probes)

    Could it somehow be a mistyped URL or IP? That’s about the only thing I can think of, but still, how could I get a request on, say, foobar.dyndns.org (12.34.56.78) for something like www.wantsfly.com/prx2.php or /MNG/LIVE or http://ant.dsabuse.com/abc.php?auth=45V456b09m&strPassword=X%5BMTR__CBZ%40VA&nLoginId=43? (Those are a few actual requests from my logs.) Can someone please explain scenario three to me? Thanks.

    Read the article

  • Python-Requests close http connection

    - by James Eggers
    I was wondering, how do you close a connection with Requests (python-requests.org)? With httplib it's HTTPConnection.close(), but how do I do the same with Requests? Code is below:

        r = requests.post("https://stream.twitter.com/1/statuses/filter.json",
                          data={'track': toTrack}, auth=('username', 'passwd'))
        for line in r.iter_lines():
            if line:
                self.mongo['db'].tweets.insert(json.loads(line))

    Thanks in advance.
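
    For reference, a sketch of one way to do it with a newer version of Requests: pass stream=True so the body is not consumed up front, then release the connection with Response.close() (names from the question, such as toTrack and self.mongo, are kept as-is):

        import json
        import requests

        r = requests.post(
            "https://stream.twitter.com/1/statuses/filter.json",
            data={"track": toTrack},
            auth=("username", "passwd"),
            stream=True,                  # keep the socket open for iter_lines()
        )
        try:
            for line in r.iter_lines():
                if line:
                    self.mongo["db"].tweets.insert(json.loads(line))
        finally:
            r.close()                     # explicitly release the connection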

    Read the article

  • amazon ec2-medium apache requests per second terrible

    - by TheDayIsDone
    EDITED -- test running from localhost now to rule out network... I have a c1.medium using EBS. When I do an Apache benchmark, just printing a "hello" for the test from localhost (no database hits), it's very slow. I can repeat this test many times with the same results. Any thoughts? Thanks in advance.

        ab -n 1000 -c 100 http://localhost/home/test/

        Benchmarking localhost (be patient)
        Completed 100 requests
        Completed 200 requests
        Completed 300 requests
        Completed 400 requests
        Completed 500 requests
        Completed 600 requests
        Completed 700 requests
        Completed 800 requests
        Completed 900 requests
        Completed 1000 requests
        Finished 1000 requests

        Server Software:        Apache/2.2.23
        Server Hostname:        localhost
        Server Port:            80

        Document Path:          /home/test/
        Document Length:        5 bytes

        Concurrency Level:      100
        Time taken for tests:   25.300 seconds
        Complete requests:      1000
        Failed requests:        0
        Write errors:           0
        Total transferred:      816000 bytes
        HTML transferred:       5000 bytes
        Requests per second:    39.53 [#/sec] (mean)
        Time per request:       2530.037 [ms] (mean)
        Time per request:       25.300 [ms] (mean, across all concurrent requests)
        Transfer rate:          31.50 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:        0    7   21.0      0      73
        Processing:    81 2489  665.7   2500    4057
        Waiting:       80 2443  654.0   2445    4057
        Total:         85 2496  653.5   2500    4057

        Percentage of the requests served within a certain time (ms)
          50%   2500
          66%   2651
          75%   2842
          80%   2932
          90%   3301
          95%   3506
          98%   3762
          99%   3838
         100%   4057 (longest request)

    Read the article

  • bad request error 400 while using python requests.post function

    - by Toussah
    I'm trying to make a simple POST request via the Requests library for Python, and I get a bad request error (400), while my URL is supposedly correct since I can use it to perform a GET. I'm very new to REST requests; I read many tutorials and documentation, but I guess there are still things I don't get, so my error could be basic. Maybe a lack of understanding of the kind of URL I'm supposed to send via POST. Here is my code:

        import requests

        v_username = "username"
        v_password = "password"
        v_headers = {'content-type': 'application/rdf+xml'}
        url = 'https://my.url'
        params = {'param': 'val_param'}
        payload = {'data': 'my_data'}

        r = requests.post(url, params=params, auth=(v_username, v_password),
                          data=payload, headers=v_headers, verify=False)
        print r

    I used the example from the Requests documentation.
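
    One hedged guess at the 400: the request declares content-type application/rdf+xml but sends a form-encoded body (data=payload), and many servers reject that mismatch. A sketch of the two self-consistent variants, keeping the question's placeholder URL and credentials:

        import requests

        url = 'https://my.url'
        auth = ('username', 'password')
        params = {'param': 'val_param'}

        # Variant 1: the endpoint expects form data, so let requests set
        # the content type itself and drop the RDF/XML header.
        r = requests.post(url, params=params, auth=auth,
                          data={'data': 'my_data'}, verify=False)

        # Variant 2: the endpoint really expects RDF/XML, so the body must
        # be RDF/XML too (this document is only a stand-in).
        rdf_body = "<rdf:RDF xmlns:rdf='http://www.w3.org/1999/02/22-rdf-syntax-ns#'/>"
        r = requests.post(url, auth=auth, data=rdf_body, verify=False,
                          headers={'content-type': 'application/rdf+xml'})
        print(r.status_code, r.text)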

    Read the article

  • How do you manage feature requests and software changes?

    - by 0A0D
    I am a Software Engineer, and over the past few years I have become the de facto software project manager simply because there isn't one. So, to keep our sanity in the R&D/Engineering department, customers have become accustomed to coming to me with their requests. I have no experience in this realm, so it is my first time acting as a project manager for software projects. I have managed other things, but not software. So, how do you manage software projects and set priorities? Requests come in at irregular intervals, so we could well be working on something for one customer when another person comes in with a "rush" job that needs working on. Is it easier to just say first come, first served, or does the person with the most money win?

    Read the article

  • Setting up proxy to handle subdomain requests

    - by PeeHaa
    I have set up a proxy for a site, which works with the following nginx config:

        server {
            listen 80;
            server_name proxy.example.com;
            access_log /dev/null;
            error_log /dev/null;
            location / {
                proxy_pass http://thepiratebay.se;
                proxy_set_header X-Real-IP $remote_addr;
            }
        }

    However, the pages also load styles from a subdomain (static.thepiratebay.se) which aren't going through my proxy, because the pages link to the original domain. Is there a way to also let those requests go through my proxy? Do I have to rewrite the contents of the pages when serving them? If so: how? :) Or is there another (perhaps better) way?

    Read the article

  • Zend Framework - routes - all requests to one controller except requests for existing controllers

    - by se_pavel
    How do I create a route that accepts all requests for non-existing controllers but leaves requests for existing ones alone? This code catches all routes:

        $route = new Zend_Controller_Router_Route_Regex('(\w+)',
            array('controller' => 'index', 'action' => 'index'));
        $router->addRoute('index', $route);

    How should I specify routes so that requests like /admin/* or /feedback/* still go to the existing adminController or feedbackController?

    Read the article

  • How to set up a different DocumentRoot for IP-based requests than for domain-based requests

    - by Carlos
    My problem is simply that I have a domain, let's say example.com, and my server's IP address is e.g. 192.168.0.1. I want to set up two different virtual hosts, so that when a user enters the IP address (192.168.0.1) in their browser, they will see content from /var/www/staging, but if the user types example.com, they will see content from /var/www. I think it's possible, but I was playing around with it and couldn't make it work. Also, I don't want a simple redirection. I know I can do that, but I need both of my apps (live & staging) working in the root of the same server. I can't buy a second domain, and I can't associate a new IP address.

    Read the article

  • In IIS, why do HTTP requests use the host header but FTP requests do not

    - by Keeno
    So... in IIS, if you use the built-in FTP, you need to combine the FTP host header with the FTP username, e.g. www.hello.com|domain/username. So the FTP program gets its "hook" from the username. However, you can connect to the FTP site using www.hello.com:21 over the FTP port. Why then doesn't the FTP service work the same way as the HTTP service? IIS knows which site to serve back based on the host header, after all... Thanks!

    Read the article

  • problem with multiple ajax HTTP get requests with different input variables using jQuery

    - by Thanasis
    I want to make asynchronous GET requests and get different results based on the input that I provide to each one. Here is my code:

        param = 1;
        $.get('http://localhost/my_page_1.php', param, function(data) {
            alert("id = " + param);
            $('.result').html(data);
        });

        param = 2;
        $.get('http://localhost/my_page_2.php', param, function(data) {
            alert("id = " + param);
            $('.result').html(data);
        });

    The result for both requests is "id = 2". I want the results to be "id = 1" for the first request and "id = 2" for the second one. I want to do this for many requests in one HTML file and integrate the results into the HTML as soon as they are ready. Can anyone help me solve this problem? Thank you in advance, Thanasis

    Read the article

  • Python requests - saving cookie for later url usage

    - by PythonRocks
    I've been trying to get a cookie and post it to a URL for later use in the program, but I can't seem to get the cookie parameters to work. Right now I have:

        response = requests.get("url")

    But how exactly do I retrieve cookies from this URL and post them to a new URL (the same cookies)? The tutorial in Requests is somewhat vague on the topic and gives examples I cannot test. Hope someone can help with further examples. This is Python 2.7, btw.
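
    A short sketch of the two usual approaches with a newer Requests API (URLs are placeholders): a Session stores the cookies from each response and sends them automatically on later requests, or you can carry them along by hand:

        import requests

        # Option 1: let a Session do the bookkeeping.
        s = requests.Session()
        s.get("http://example.com/sets_cookie")        # response cookies land in s.cookies
        r = s.post("http://example.com/needs_cookie")  # and are sent automatically here

        # Option 2: pass the cookies along explicitly.
        first = requests.get("http://example.com/sets_cookie")
        second = requests.post("http://example.com/needs_cookie",
                               cookies=first.cookies)
        print(second.status_code)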

    Read the article

  • multiple ajax requests with jquery

    - by Emil
    I've got problems with the async nature of JavaScript/jQuery. Take the following (no latency is accounted for, to keep it simple): I have three buttons (A, B, C) on a page, and each button adds an item to a shopping cart with one ajax request. If I put an intentional delay of 5 seconds in the server-side script (PHP) and push the buttons 1 second apart, I want the result to be the following:

        Request A: 5 seconds
        Request B: 6 seconds
        Request C: 7 seconds

    However, the result is:

        Request A: 5 seconds
        Request B: 10 seconds
        Request C: 15 seconds

    This has to mean that the requests are queued and not run simultaneously, right? Isn't this the opposite of what async means? I also tried adding a random GET parameter to the URL to force some uniqueness onto each request; no luck though. I did read a little about this: if you avoid using the same request object (?) this problem won't occur. Is it possible to force this behaviour in jQuery? This is the code that I am using:

        $.ajax({
            url: strAjaxUrl + '?random=' + Math.floor(Math.random() * 9999999999),
            data: 'ajax=add-to-cart&product=' + product,
            type: 'GET',
            success: function(responseData) {
                // update ui
            },
            error: function(responseData) {
                // show error
            }
        });

    I also tried both GET and POST; no difference. I want each request to be sent right when its button is clicked, not when the previous request is finished. I want the requests to run simultaneously, not in a queue.

    Read the article

  • Rails, REST Architecture and HTML 5: Cross domain requests with pre-flight requests

    - by Orion
    While working on a project to make our site HTML5-friendly, we were eager to embrace the new method for cross-domain requests (no more posting through hidden iframes!). Using the Access Control specification, we began setting up some tests to verify the behaviour of various browsers. The current Rails RESTful architecture relies on the four HTTP verbs: GET, POST, PUT, DELETE. However, the Access Control spec dictates that non-simple methods (PUT, DELETE) require a pre-flight request using the HTTP verb OPTIONS. In addition, during testing we discovered that Firefox 3.5.8 pre-flights POST requests as well. My question is this: is anyone aware of any project for the Rails framework working to address the issue? If not, any opinions about the best strategy for supporting the OPTIONS method, since it has to support the routes for all the POST, PUT, DELETE methods?
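
    For illustration only, the shape of the pre-flight exchange is the same outside Rails; a minimal WSGI sketch, with all header values invented as examples (echoing Origin back unvalidated is for demonstration, not production):

        def app(environ, start_response):
            origin = environ.get("HTTP_ORIGIN", "")
            if environ["REQUEST_METHOD"] == "OPTIONS":
                # Answer the pre-flight: tell the browser which origins,
                # methods and headers the real request may use.
                start_response("200 OK", [
                    ("Access-Control-Allow-Origin", origin),
                    ("Access-Control-Allow-Methods", "GET, POST, PUT, DELETE, OPTIONS"),
                    ("Access-Control-Allow-Headers", "Content-Type, X-Requested-With"),
                    ("Access-Control-Max-Age", "1728000"),  # let the browser cache it
                    ("Content-Length", "0"),
                ])
                return [b""]
            # The actual PUT/DELETE/POST still needs the Allow-Origin header.
            start_response("200 OK", [
                ("Content-Type", "text/plain"),
                ("Access-Control-Allow-Origin", origin),
            ])
            return [b"hello"]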

    Read the article

  • Windows 8 / IIS 8 Concurrent Requests Limit

    - by OWScott
    IIS 8 on Windows Server 2012 doesn’t have any fixed concurrent request limit, apart from whatever limit would be reached when resources are maxed. However, the client version of IIS 8, which is on Windows 8, does have a concurrent connection request limitation to limit high-traffic production uses on a client edition of Windows. Starting with IIS 7 (Windows Vista), the behavior changed from previous versions. In previous client versions of IIS, excess requests would throw a 403.9 error message (Access Forbidden: Too many users are connected.). Instead, Windows Vista, 7 and 8 queue excessive requests so that they will be handled gracefully, although there is a maximum number of requests that will be processed simultaneously. Thomas Deml provided a concurrent request chart for Windows Vista many years ago, but I have been unable to find an equivalent chart for Windows 8, so I asked Wade Hilmo from the IIS team what the limits are. Since this is controlled not by the IIS team itself but rather by the Windows licensing team, he asked around and found the authoritative answer, which I'll provide below.

        Windows 8 – IIS 8                                Concurrent Requests Limit
          Windows 8                                      3
          Windows 8 Professional                         10
          Windows RT                                     N/A (IIS does not run on Windows RT)

        Windows 7 – IIS 7.5                              Concurrent Requests Limit
          Windows 7 Home Starter                         1
          Windows 7 Basic                                1
          Windows 7 Premium                              3
          Windows 7 Ultimate, Professional, Enterprise   10

        Windows Vista – IIS 7                            Concurrent Requests Limit
          Windows Vista Home Basic (IIS process
          activation and HTTP processing only)           3
          Windows Vista Home Premium                     3
          Windows Vista Ultimate, Professional           10

    Windows Server 2003, Windows Server 2008, Windows Server 2008 R2 and Windows Server 2012 allow an unlimited number of simultaneous requests.

    Read the article

  • python Requests login to website returns 403

    - by Jeff
    I'm trying to use Requests to log in to a website, but as you can guess I'm having a problem. Here's the code that I'm using:

        import requests

        EMAIL = '***'
        PASSWORD = '***'
        URL = 'https://portal.bitcasa.com/login'

        client = requests.session(config={'verbose': sys.stderr})
        login_data = {'username': EMAIL, 'password': PASSWORD}
        r = client.post(URL, data=login_data, headers={"Referer": "foo"})
        print r

    and if I print out r.text I get:

        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
        <html lang="en">
        <head>
        <script type="text/javascript">var NREUMQ=NREUMQ||[];NREUMQ.push(["mark","firstbyte",new Date().getTime()])</script>
        <meta http-equiv="content-type" content="text/html; charset=utf-8">
        <meta name="robots" content="NONE,NOARCHIVE">
        <title>403 Forbidden</title>
        <style type="text/css">
          html * { padding:0; margin:0; }
          body * { padding:10px 20px; }
          body * * { padding:0; }
          body { font:small sans-serif; background:#eee; }
          body>div { border-bottom:1px solid #ddd; }
          h1 { font-weight:normal; margin-bottom:.4em; }
          h1 span { font-size:60%; color:#666; font-weight:normal; }
          #info { background:#f6f6f6; }
          #info ul { margin: 0.5em 4em; }
          #info p, #summary p { padding-top:10px; }
          #summary { background: #ffc; }
          #explanation { background:#eee; border-bottom: 0px none; }
        </style>
        </head>
        <body>
        <div id="summary">
          <h1>Forbidden <span>(403)</span></h1>
          <p>CSRF verification failed. Request aborted.</p>
        </div>
        <div id="explanation">
          <p><small>More information is available with DEBUG=True.</small></p>
        </div>
        <script type="text/javascript">if(!NREUMQ.f){NREUMQ.f=function(){NREUMQ.push(["load",new Date().getTime()]);var e=document.createElement("script");e.type="text/javascript";e.src=(("http:"===document.location.protocol)?"http:":"https:")+"//"+"d1ros97qkrwjf5.cloudfront.net/42/eum/rum.js";document.body.appendChild(e);if(NREUMQ.a)NREUMQ.a();};NREUMQ.a=window.onload;window.onload=NREUMQ.f;};NREUMQ.push(["nrfj","beacon-1.newrelic.com","0e859e0620",778660,"ZAZRbUcHWBAHURFYX11MdUxbBUIKCVxKVVpSDVRWGwtfBwJeAEZRQQYdWkYUUFklQRdXZloGRHRcAlIPA0UEQ1UdE0FWVgNFEDlEDFRH",0,7,new Date().getTime(),"","","","",""])</script>
        </body>
        </html>

    They're using a combination of Django and Pyramid. I've been playing around with this for about two days now but, obviously, have gotten nowhere. Thanks for your help.
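
    The "CSRF verification failed. Request aborted." line points at Django's CSRF middleware rather than at the credentials. A sketch of the usual workaround, assuming a current Requests API; csrftoken and csrfmiddlewaretoken are Django's default cookie and form-field names:

        import requests

        EMAIL = '***'
        PASSWORD = '***'
        URL = 'https://portal.bitcasa.com/login'

        client = requests.Session()
        client.get(URL)                            # the first GET sets the csrftoken cookie
        token = client.cookies.get('csrftoken')

        login_data = {
            'username': EMAIL,
            'password': PASSWORD,
            'csrfmiddlewaretoken': token,          # echo the token back in the form
        }
        r = client.post(URL, data=login_data, headers={'Referer': URL})
        print(r.status_code)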

    Read the article

  • PHP running as a FastCGI application (php-cgi) - how to issue concurrent requests?

    - by Sbm007
    Some background information: I'm writing my own webserver in Java, and a couple of days ago I asked on SO how exactly Apache interfaces with PHP so I could implement PHP support. I learnt that FastCGI is the best approach (since mod_php is not an option). So I have looked at the FastCGI protocol specification and have managed to write a working FastCGI wrapper for my server. I have tested phpinfo() and it works; in fact, all PHP functions seem to work just fine (posting data, sessions, date/time, etc.).

    My webserver is able to serve requests concurrently (i.e. user1 can retrieve file1.html at the same time as user2 requests some_large_binary_file.zip). It does this by spawning a new Java thread for each user request (terminating when completed or when the connection with the client is cancelled). However, it cannot deal with 2 (or more) FastCGI requests at the same time. What it does is queue them up, so when request 1 is completed it immediately starts processing request 2. I tested this with 2 PHP pages, one containing sleep(10) and the other phpinfo(). How would I go about dealing with multiple requests, as I know it can be done (PHP under IIS runs as FastCGI and it can deal with multiple requests just fine)?

    Some more info: I am coding under Windows, and my batch file used to execute php-cgi.exe contains:

        set PHP_FCGI_CHILDREN=8
        set PHP_FCGI_MAX_REQUESTS=500
        php-cgi.exe -b 9000

    But it does not spawn 8 children; the service simply terminates after 500 requests. I have done research, and from Wikipedia:

        Processing of multiple requests simultaneously is achieved either by using a
        single connection with internal multiplexing (i.e. multiple requests over a
        single connection) and/or by using multiple connections.

    Now clearly the multiple-connections approach isn't working for me: every time a client requests something that involves FastCGI, my server creates a new socket to the FastCGI application, but the requests are not handled concurrently (they are queued up instead). I know that internal multiplexing of FastCGI requests over the same connection is accomplished by issuing each unique FastCGI request with a different request ID (also see the last 3 paragraphs under 'The Communication Protocol' heading in this article). I have not tested this, but how would I go about implementing it? I take it I need some kind of FastCGI Java thread which contains a Map of some sort, and a static function which I can use to add requests to it. Then in the thread's run() function it would have a while loop, and on every cycle it would check whether the Map contains new requests; if so, it would assign them a request ID and write them to the FastCGI stream, then wait for input, etc. As you can see, this becomes complicated. Does anyone know the correct way of doing this? Or any thoughts at all? Thanks very much. Note: if required, I can supply the code for my FastCGI wrapper.

    Read the article

  • .htaccess or PHP protection code against multiple speedy requests

    - by Phil Jackson
    Hi, I am looking for ideas on how I can stop external scripts from hammering my site. I'm looking for the same kind of idea Google uses: if a certain number of requests are made within a certain amount of time, then block the IP address or something similar. I thought there may be a .htaccess solution; if not, I will write a PHP one. Any ideas or links to existing methods or scripts are much appreciated. Regards, Phil
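
    The question asks for .htaccess or PHP, but the windowed-count idea behind such throttling is small enough to sketch; here it is in Python, with invented names and thresholds, purely to show the logic to port (a real deployment would keep the counters in shared storage and run the check before serving each request):

        import time

        WINDOW = 60      # seconds per counting window (invented threshold)
        LIMIT = 100      # max requests per window per IP (invented threshold)
        hits = {}        # ip -> (window_start, count)

        def allow(ip):
            """Fixed-window counter: True if this request is within the limit."""
            now = time.time()
            start, count = hits.get(ip, (now, 0))
            if now - start > WINDOW:
                start, count = now, 0    # stale window: start counting afresh
            count += 1
            hits[ip] = (start, count)
            return count <= LIMIT

        # In a request handler one would then do something like:
        # if not allow(remote_addr): respond with 403/429, or firewall the IP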

    Read the article

  • IIS/ASP.NET performance incident - Perfmon Current Anonymous Users going through roof but Requests/sec low

    - by Laurence
    Setup: ASP.NET 4.0 website on IIS 6.0 on Windows Server 2003 64-bit, 8 CPUs, 16GB memory, separate SQL Server 2005 DB server.

    We had a serious slowdown today with an otherwise fairly well-performing ASP.NET site. For a period of a couple of hours, all page requests were taking a very long time to be served, e.g. 30-60s compared to the usual 2s. The w3wp.exe's CPU and memory usage on the webserver was not much higher than normal. The application pool was not in the middle of recycling (and it hadn't recycled for several hours). Bottlenecks in the database were ruled out: no blocks were occurring and query results were being returned quickly. I couldn't make any sense of it and set up the following Perfmon counters:

        Current Anonymous Users (for the site in question)
        Get Requests/sec (ditto)
        Requests/sec for the ASP.NET application running the site

    Get Requests/sec was averaging 100-150. Requests/sec for ASP.NET was averaging 5-10. However, Current Anonymous Users was around 200. And then, as I was watching, Current Anonymous Users began to climb steeply, going up to about 500 within a few minutes. All this time, Get Requests/sec and Requests/sec for ASP.NET were, if anything, going down. I did a whole load of things (in a panic!) to try to get the site working, like shutting it down, recycling the app pool, and adding another worker process to the pool. I also extended the expiration time for content (in IIS under HTTP Headers) in an attempt to lower the number of requests for static files (there are a lot of images on the site). The site is now back to normal, and the counters are fairly steady, reading (with a Current Connections counter added):

        Current Anonymous Users:     average 30
        Get Requests/sec:            average 100
        Requests/sec for ASP.NET:    5
        Current Connections:         average 300

    I have also observed an inverse relationship between Get Requests/sec and Current Anonymous Users. Usually both are fairly steady, but there are short periods when Get Requests/sec goes down dramatically and Current Anonymous Users goes up in a perfect mirror image; then they flip back to their usual levels. So, my questions are:

    1. Thinking of the original performance issue: if w3wp.exe CPU and memory usage were normal and there was no DB bottleneck, what could explain page requests taking 20 times longer than usual to be served?
    2. What other counters should I be looking at if this happens again?
    3. What explains the inverse relationship between Get Requests/sec and Current Anonymous Users?
    4. What could explain Current Anonymous Users going from 200 to 500 within a few minutes?

    Many thanks for any insight into this.

    Read the article

  • Trouble making OAuth signed requests

    - by behrk2
    Hello, I am able to successfully make non-authenticated and protected calls to the Netflix API. I am having a little trouble making signed requests to the catalog, however. Using the OAuth test page, it is clear to me that my base string is correct. My request URL is also correct, except for the oauth_signature; the oauth_signature is the only thing that differs. If I understand correctly, the only difference between a protected call and a signed call is that there are no tokens involved, and that I am appending call parameters (such as term). So I am using the exact same code for my signed calls that I use for my protected calls, except that my signature key ONLY contains my shared secret (with an ampersand on the end of it). It does not use the access token. Am I missing something here? Where else could I be going wrong? Thanks!
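
    For comparison, a stdlib-only sketch of signing a token-less ("signed") OAuth 1.0 request with HMAC-SHA1, where the signing key is just the percent-encoded consumer secret followed by '&'; parameter handling is simplified and all names are illustrative:

        import base64
        import hashlib
        import hmac
        import time
        import uuid
        from urllib.parse import quote, urlencode

        def signed_url(method, url, params, consumer_key, consumer_secret):
            params = dict(params,
                          oauth_consumer_key=consumer_key,
                          oauth_nonce=uuid.uuid4().hex,
                          oauth_signature_method="HMAC-SHA1",
                          oauth_timestamp=str(int(time.time())),
                          oauth_version="1.0")
            # Base string: METHOD & encoded URL & encoded, sorted parameters.
            norm = "&".join("%s=%s" % (quote(k, safe=""), quote(v, safe=""))
                            for k, v in sorted(params.items()))
            base = "&".join([method.upper(), quote(url, safe=""), quote(norm, safe="")])
            # No access token, so the token-secret half of the key is empty.
            key = quote(consumer_secret, safe="") + "&"
            digest = hmac.new(key.encode(), base.encode(), hashlib.sha1).digest()
            params["oauth_signature"] = base64.b64encode(digest).decode()
            return url + "?" + urlencode(params)

        # e.g. signed_url("GET", "http://api.netflix.com/catalog/titles",
        #                 {"term": "the office"}, "my_key", "my_shared_secret")

    If the base string produced this way matches the one from the OAuth test page but the signature still differs, the key (secret plus trailing ampersand) is the first thing to compare.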

    Read the article

  • How to handle concurrent web requests

    - by Rajan
    I have a simple webservice running on my local system, hosted in IIS 6.0. This webservice is accessible from any other system (within the LAN and over the Internet), and everything works fine as long as the number of systems accessing my webservice is very limited. However, if a larger number of users (say 500) try to access my webservice concurrently, an exception is thrown saying 'Server is busy'. When I browsed the net, I found that IIS can handle only 100-150 web requests concurrently. What should I do to make my webservice accessible to more than 500 users concurrently? Thanks in advance. Rajan.

    Read the article

  • Limiting the maximum number of concurrent requests django/apache

    - by Johan
    Hi, I have a Django site that demonstrates the usage of a tool. One of my views takes a file as input, runs some fairly heavy computation through an external Python script, and returns some output to the user. The tool runs fast enough to return the output in the same request, though. I would, however, want to limit how many concurrent requests to this URL/view are allowed, to keep the server from getting congested. Any tips on how I would go about doing this? The page itself is very simple and the usage will be low.
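
    A minimal sketch of one approach, assuming the site runs in a single process (with several Django worker processes you would need a cross-process lock or a web-server-level limit instead); MAX_CONCURRENT and run_heavy_tool are invented names:

        import threading

        from django.http import HttpResponse

        MAX_CONCURRENT = 4                       # invented threshold
        slots = threading.BoundedSemaphore(MAX_CONCURRENT)

        def limited_view(request):
            # Grab a slot without blocking; turn excess traffic away with a
            # 503 instead of letting it pile up behind the heavy computation.
            if not slots.acquire(blocking=False):
                return HttpResponse("Busy, please retry shortly.", status=503)
            try:
                output = run_heavy_tool(request.FILES["input"])  # hypothetical helper
                return HttpResponse(output)
            finally:
                slots.release()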

    Read the article
