Search Results

Search found 5793 results on 232 pages for 'requests'.

Page 7/232 | < Previous Page | 3 4 5 6 7 8 9 10 11 12 13 14  | Next Page >

  • How to persist objects between requests in PHP

    - by SztupY
    I've used Rails, Merb, Django and ASP.NET MVC in the past. What they have in common (and what is relevant to the question) is that they have code that sets up the framework. This usually means creating objects and state that persist until the web server is recycled (like setting up routing, or checking which controllers are available, etc.). As far as I know, PHP is more like a CGI script that gets compiled to some bytecode each time it's run, and after the request it's discarded. Of course you can have sessions to persist data between requests from the same user, and as I see there are extensions like APC with which you can persist objects between requests at the server level. My question is: how can one create a PHP application that works like Rails and the others? I mean an application that on the first request sets up the framework, and then on the second and later requests uses the objects that are already set up. Is there some built-in caching facility in mod_php (for example, one that stores the compiled bytecode of the executed PHP applications)? Or is using APC or some similar extension the only way to solve this problem? How would you do it? Thanks.
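
    For reference, APC covers both halves of the question: its primary job is opcode caching (it keeps the compiled bytecode of scripts between requests automatically), and it also exposes a user cache for objects. A minimal sketch of the latter, assuming the APC extension is installed; the cache key and build_framework() are illustrative names, not a real API:

      <?php
      // fetch the framework objects built by an earlier request, if any
      $framework = apc_fetch('framework_setup', $hit);
      if (!$hit) {
          // first request on this server: do the expensive setup once
          $framework = build_framework(); // illustrative: routing tables, controller map, ...
          apc_store('framework_setup', $framework);
      }

    One caveat worth knowing: stored objects are serialized into shared memory, so things like open database connections cannot be persisted this way.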

    Read the article

  • Monitoring slow nginx/unicorn requests

    - by injekt
    I'm currently using Nginx to proxy requests to a Unicorn server running a Sinatra application. The application only has a couple of routes defined, which make fairly simple (non-costly) queries to a PostgreSQL database and return the data in JSON format; these services are monitored by God. I'm currently experiencing extremely slow response times from this application server. I have another two Unicorn servers being proxied via Nginx, and these are responding perfectly fine, so I think I can rule out any wrongdoing on Nginx's part. Here is my God configuration:

      # God configuration
      APP_ROOT = File.expand_path '../', File.dirname(__FILE__)

      God.watch do |w|
        w.name = "app_name"
        w.interval = 30.seconds # default
        w.start = "cd #{APP_ROOT} && unicorn -c #{APP_ROOT}/config/unicorn.rb -D"
        # -QUIT = graceful shutdown, waits for workers to finish their current request before finishing
        w.stop = "kill -QUIT `cat #{APP_ROOT}/tmp/unicorn.pid`"
        w.restart = "kill -USR2 `cat #{APP_ROOT}/tmp/unicorn.pid`"
        w.start_grace = 10.seconds
        w.restart_grace = 10.seconds
        w.pid_file = "#{APP_ROOT}/tmp/unicorn.pid"

        # User under which to run the process
        w.uid = 'web'
        w.gid = 'web'

        # Cleanup the pid file (this is needed for processes running as a daemon)
        w.behavior(:clean_pid_file)

        # Conditions under which to start the process
        w.start_if do |start|
          start.condition(:process_running) do |c|
            c.interval = 5.seconds
            c.running = false
          end
        end

        # Conditions under which to restart the process
        w.restart_if do |restart|
          restart.condition(:memory_usage) do |c|
            c.above = 150.megabytes
            c.times = [3, 5] # 3 out of 5 intervals
          end

          restart.condition(:cpu_usage) do |c|
            c.above = 50.percent
            c.times = 5
          end
        end

        w.lifecycle do |on|
          on.condition(:flapping) do |c|
            c.to_state = [:start, :restart]
            c.times = 5
            c.within = 5.minute
            c.transition = :unmonitored
            c.retry_in = 10.minutes
            c.retry_times = 5
            c.retry_within = 2.hours
          end
        end
      end

    Here is my Unicorn configuration:

      # Unicorn configuration file
      APP_ROOT = File.expand_path '../', File.dirname(__FILE__)

      worker_processes 8
      preload_app true
      pid "#{APP_ROOT}/tmp/unicorn.pid"
      listen 8001

      stderr_path "#{APP_ROOT}/log/unicorn.stderr.log"
      stdout_path "#{APP_ROOT}/log/unicorn.stdout.log"

      before_fork do |server, worker|
        old_pid = "#{APP_ROOT}/tmp/unicorn.pid.oldbin"
        if File.exists?(old_pid) && server.pid != old_pid
          begin
            Process.kill("QUIT", File.read(old_pid).to_i)
          rescue Errno::ENOENT, Errno::ESRCH
            # someone else did our job for us
          end
        end
      end

    I have checked the God status logs, but it appears CPU and memory usage are never out of bounds. I also have something to kill high-memory workers, which can be found on the GitHub blog page here. When running tail -f on the Unicorn logs I do see some requests, but they're few and far between, where I was seeing around 60-100 a second before this trouble arrived. The log also shows workers being reaped and started as expected. So my question is, how would I go about debugging this? What are the next steps I should take? I'm extremely baffled that the server will sometimes respond quickly, but at other times is very slow for long periods (which may or may not be peak traffic times). Any advice is much appreciated.
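
    One low-cost first step (a suggestion, not from the original thread) is to make Nginx record per-request timings, so it's visible whether the time is spent inside Unicorn or in front of it. $request_time and $upstream_response_time are standard Nginx variables; the format name and log path below are placeholders:

      # placeholder format name and path; add inside the http/server context
      log_format timed '$remote_addr [$time_local] "$request" $status '
                       '$request_time $upstream_response_time';

      server {
        access_log /var/log/nginx/app.timed.log timed;
        # ... existing proxy configuration ...
      }

    If $upstream_response_time is high and $request_time tracks it closely, the delay is inside the app (busy workers, a slow PostgreSQL query); if the two diverge, look at slow clients or proxy buffering. A queue-inspection tool such as raindrops (a companion gem from the Unicorn project) can then confirm whether all 8 workers are saturated.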

    Read the article

  • How to redirect http requests to https (nginx)

    - by spuder
    There appear to be many questions and guides out there that instruct how to set up nginx to redirect http requests to https. Many are outdated, or just flat out wrong.

      server {
        listen *:80;
        server_name <%= @fqdn %>;
        #root /nowhere;
        #rewrite ^ https://$server_name$request_uri? permanent;
        #rewrite ^ https://$server_name$request_uri permanent;
        #return 301 https://$server_name$request_uri;
        #return 301 http://$server_name$request_uri;
        #return 301 http://192.168.33.10$request_uri;
        return 301 http://$host$request_uri;
      }

      server {
        listen *:443 ssl default_server;
        server_name <%= @fqdn %>;
        server_tokens off;
        root <%= @git_home %>/gitlab/public;

        ssl on;
        ssl_certificate <%= @gitlab_ssl_cert %>;
        ssl_certificate_key <%= @gitlab_ssl_key %>;
        ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers AES:HIGH:!ADH:!MDF;
        ssl_prefer_server_ciphers on;

        location / {
          # serve static files from defined root folder
          # @gitlab is a named location for the upstream fallback, see below
          try_files $uri $uri/index.html $uri.html @gitlab;
        }

        # if a file which is not found in the root folder is requested,
        # then the proxy passes the request to the upstream (gitlab puma)
        location @gitlab {
          proxy_read_timeout 300;    # https://github.com/gitlabhq/gitlabhq/issues/694
          proxy_connect_timeout 300; # https://github.com/gitlabhq/gitlabhq/issues/694
          proxy_redirect off;
          etc...

    I've restarted after every configuration change, and yet I still only get the 'Welcome to nginx' page when visiting http://192.168.33.10, whereas https://192.168.33.10 works perfectly. Why will nginx still not redirect http requests to https?

      tailf /var/log/nginx/access.log
      192.168.33.1 - - [22/Oct/2013:03:41:39 +0000] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:24.0) Gecko/20100101 Firefox/24.0"
      192.168.33.1 - - [22/Oct/2013:03:44:43 +0000] "GET / HTTP/1.1" 200 133 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:24.0) Gecko/20100101 Firefox/24.0"

      tailf /var/log/nginx/gitlab_error.log
      2013/10/22 02:29:14 [crit] 27226#0: *1 connect() to unix:/home/git/gitlab/tmp/sockets/gitlab.socket failed (2: No such file or directory) while connecting to upstream, client: 192.168.33.1, server: gitlab.localdomain, request: "GET / HTTP/1.1", upstream: "http://unix:/home/git/gitlab/tmp/sockets/gitlab.socket:/", host: "192.168.33.10"

    Resources:
    http://wiki.nginx.org/Pitfalls
    How to make nginx redirect
    How to force or redirect to SSL in nginx?
    nginx ssl redirect
    Nginx & Https Redirection
    https://www.tinywp.in/301-redirect-wordpress/
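
    Two things stand out in the configuration as quoted (a reading of the snippet above, not a confirmed fix). First, the one uncommented rule returns 301 http://$host$request_uri, redirecting to the same scheme rather than to https. Second, the 'Welcome to nginx' page suggests port-80 requests never match this server block at all, so the templated server_name (<%= @fqdn %>) probably doesn't match 192.168.33.10 and the stock default server answers instead. A port-80 block that sidesteps both pitfalls could look like this (server_name taken from the error log above; adjust to the real fqdn):

      server {
        listen 80 default_server;   # answer port-80 traffic even when the Host header doesn't match
        server_name gitlab.localdomain;
        return 301 https://$host$request_uri;
      }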

    Read the article

  • Log requests in apache upon arrival

    - by Patrick Cornelissen
    Is it possible to log requests in Apache when they arrive, rather than when they have been processed? We have a pretty big web-based application, and the way it's currently logged is sometimes confusing to read, because each request with its subrequests is logged "upside down": Apache logs a request after it has been processed, so the initial request is the last entry for this "query". We're wondering if there is a way to log on arrival?!
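
    One option worth checking (a suggestion, with the module path and log path as placeholders) is Apache's mod_log_forensic, which writes one entry when a request is first received and a second when it completes, sharing a forensic ID so the two can be correlated:

      LoadModule log_forensic_module modules/mod_log_forensic.so
      ForensicLog /var/log/apache2/forensic.log

    Because the first entry is written before processing starts, the initial request appears above its subrequests, which should give the top-down ordering you're after.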

    Read the article

  • Pass HAProxy healthcheck requests as User-agent "LB-Check" to the backend webservers(apache)

    - by Joseph
    I have HAProxy set up in front of webservers (Apache) for load balancing. Health checks for these webservers are also configured in HAProxy:

      option httpchk HEAD /healthcheck.txt HTTP/1.0

    Is it possible to send these healthcheck requests to the backend webservers with an "LB-Check" User-Agent, or some other marker, so that I can distinguish them from other log entries? I don't want to go for the "dontlog" option, however, as I don't want to miss these entries.
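
    HAProxy's httpchk line accepts extra headers appended after the HTTP version, using \r\n escapes with spaces in the value escaped by a backslash, so a sketch along these lines should tag the checks (worth verifying against the docs for your HAProxy version):

      option httpchk HEAD /healthcheck.txt HTTP/1.0\r\nUser-Agent:\ LB-Check

    The Apache access logs on the backends can then be filtered, or routed to a separate log file, by matching that User-Agent.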

    Read the article

  • Using iptables to selectively route outgoing requests?

    - by Olivier
    Hello, I'd like to set up my DD-WRT firewall so that it uses the VPN service from a VPN provider to access a specific set of destinations, and the normal route for all other requests. In detail: giganews.com would be accessed through the VyprVPN tunnel; normal web sites such as amazon, ebay, et al go through the transparent firewall. I've read SOOOO many tutorials, but I still can't understand what the different entities are. Any help? Thanks
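
    On Linux-based firmware like DD-WRT this is usually done with policy routing: mark packets for the chosen destinations, then send marked traffic through a routing table whose default route is the VPN tunnel. A rough sketch; the interface names, table number, mark value and address range are all assumptions to adapt:

      # mark LAN traffic destined for the selected provider (br0 = DD-WRT LAN bridge)
      iptables -t mangle -A PREROUTING -i br0 -d <giganews-address-range> -j MARK --set-mark 1
      # send marked packets through table 100, whose default route is the VPN tunnel
      ip rule add fwmark 1 table 100
      ip route add default dev tun1 table 100   # tun1 assumed to be the VyprVPN interface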

    Read the article

  • serving mp3s to mobile devices is flooding nginx with partial requests

    - by drumfire
    I am serving mp3s with a minimalistic nginx server. What I see in my log files is that there are a lot of requests, in particular from AppleCoreMedia and sometimes Android user agents, that flood the server with short requests. Sometimes they keep requesting the same partial content for a very long time; sometimes more than an hour. For example:

      "GET /somefile.mp3 HTTP/1.1" 206 33041 "AppleCoreMedia/1.0.0.9B206 (iPhone; U; CPU OS 5_1_1 like Mac OS X; en_us)"
      "GET /somefile.mp3 HTTP/1.1" 206 33041 "AppleCoreMedia/1.0.0.9B206 (iPhone; U; CPU OS 5_1_1 like Mac OS X; en_us)"
      "GET /somefile.mp3 HTTP/1.1" 206 33041 "AppleCoreMedia/1.0.0.9B206 (iPhone; U; CPU OS 5_1_1 like Mac OS X; en_us)"
      [...]

    I also get a lot, though not as many, of these:

      "-" 400 0 "-"
      "-" 400 0 "-"

    The IP addresses are always from clients that start downloading shortly after that request, and usually they have roughly the same user agent as in the first example. I have enabled server throttling and connection limits in nginx to at least somewhat limit the huge amount of log entries from equivalent IPs. There was a performance issue when I saw the same behaviour on the previous server, which used Apache: when Apache could no longer handle the connections from the increasing number of clients, that server was effectively DDoSed. I installed nginx on a better server and then moved the site. There was no bandwidth issue with already connected clients, and I don't know whether the already connected clients were using more than one connection at a time. Please tell me: are clients that appear to get stuck on a download a Bad Thing™? I've heard people say their mobile bandwidth use was much higher than they could account for; I'm thinking this type of client behaviour could explain that, and it costs us more bandwidth too. Also, which up-to-date alternatives exist out there that can handle serving this type of data better than plain HTTP? Useful general insights for someone who just came into this field straight out of the late '90s are welcome too. :-)
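
    For context, the 206 responses themselves are normal: AppleCoreMedia streams audio as many small HTTP range requests rather than one long download. What can be capped is how many parallel connections and how much bandwidth a single client gets. A sketch using standard nginx directives; the zone name, limits and location pattern are placeholders to tune:

      http {
        limit_conn_zone $binary_remote_addr zone=peraddr:10m;

        server {
          location ~ \.mp3$ {
            limit_conn peraddr 2;   # at most 2 simultaneous connections per client IP
            limit_rate 192k;        # per-connection cap, comfortably above typical mp3 bitrates
          }
        }
      }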

    Read the article

  • using tcpdump to display XML API requests without headers or ack packets

    - by Carmageddon
    I need assistance. I am trying to use tcpdump to capture API requests and responses between two servers. So far I have the following command:

      tcpdump -iany -tpnAXs0 host xxx.xxx.xxx.xxx and port 6666

    My problem is that the output is still hard to read, because it includes the headers and the ACK packets. I would like to remove those and only see the XML bodies. I tried to use grep -v, but apparently this is all one request, so it filters the entire thing... Thanks!
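
    The bare ACKs can be dropped in the capture filter itself with the classic "TCP packets that carry data" expression (IP total length minus IP header length minus TCP header length is non-zero). A sketch, keeping the placeholder host and dropping -X so only ASCII is printed:

      tcpdump -iany -tpnAs0 'host xxx.xxx.xxx.xxx and port 6666 and (((ip[2:2] - ((ip[0]&0xf)<<2)) - ((tcp[12]&0xf0)>>2)) != 0)'

    The HTTP headers, though, are part of the same TCP payload as the XML, so tcpdump cannot strip them. For readable request/response bodies a stream-reassembly tool is a better fit, e.g. tcpflow -c -i any port 6666, or Wireshark/tshark's "Follow TCP Stream".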

    Read the article

  • Strange GET requests in logs

    - by alfred
    I'm getting the following GET requests in my apache logs:

      109.230.251.14 - - [29/Mar/2011:16:28:18 +0100] "GET http://209.191.92.114/config/pwtoken_get?login=jackmcphee232&src=ygodgw&passwd=e59e2240415e6f1aba3da72b8f189f4e&challenge=9TbU_9yfZhKmzlHtK9X4OkQlesTH&md5=1 HTTP/1.0" 404 1226 "-" "-"

    Any idea what it could be, and how is that possibly a GET request? That IP address seems to point to Yahoo! I'm very confused.

    Read the article

  • Loading wordpress admin page prevents and blocks further requests

    - by Auxiliary
    I've installed a WordPress site on a host (actually I moved it from localhost). The site loads and works perfectly; however, if I log in as Admin, the admin page doesn't completely load, and all further requests to the server are blocked for about 10 or 15 minutes. I can't even ping the site. Where could this problem come from? Is it firewall related, or...? Any help or idea is greatly appreciated.

    Read the article

  • Apache + Passenger, requests not forwarded to Rails

    - by olalonde
    I'm trying to set up Apache + Passenger (mod_rails) and everything works fine, except Apache won't forward requests to Rails. From my Apache log:

      File does not exist: /var/www/rails_apps/domain.com/public/controllerName

    My Rails app works fine with Mongrel, and I also have some PHP/MySQL sites running fine. I have no .htaccess in my Rails app directory. What is going wrong? How can I troubleshoot this problem?
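
    That "File does not exist" line means Apache is treating the URL as a static file instead of handing the request to Passenger, which usually happens when the Passenger module isn't loaded or enabled for that vhost. A sketch of the standard Passenger vhost shape; every path here is a placeholder for the actual install:

      LoadModule passenger_module /path/to/passenger/ext/apache2/mod_passenger.so
      PassengerRoot /path/to/passenger
      PassengerRuby /usr/bin/ruby

      <VirtualHost *:80>
        ServerName domain.com
        DocumentRoot /var/www/rails_apps/domain.com/public
        <Directory /var/www/rails_apps/domain.com/public>
          AllowOverride all
          Options -MultiViews
        </Directory>
      </VirtualHost>

    Checking that DocumentRoot points at the app's public directory and that apachectl -t -D DUMP_MODULES lists passenger_module would be sensible first steps.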

    Read the article

  • Ignoring old multiple asynchronous ajax requests

    - by Travis
    I've got a custom javascript autocomplete script that hits the server with multiple asynchronous ajax requests (every time a key gets pressed). I've noticed that sometimes an earlier ajax request will be returned after a later one, which messes things up. The way I handle this now is with a counter that increments for each ajax request; responses that come back with a lower count get ignored. I'm wondering: is this proper? Or is there a better way of dealing with this issue? Thanks in advance, Travis
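
    The counter approach is a standard fix for out-of-order responses. For concreteness, a minimal sketch of it with jQuery; the endpoint and render() are hypothetical:

      var latestRequest = 0;

      function autocomplete(term) {
        var requestId = ++latestRequest;        // stamp this request
        $.ajax({
          url: '/autocomplete',                 // hypothetical endpoint
          data: { q: term },
          success: function (results) {
            if (requestId !== latestRequest) return; // stale: a newer request was sent since
            render(results);                    // hypothetical render function
          }
        });
      }

    An alternative with the same effect is to keep the previous jqXHR (or raw XMLHttpRequest) and call abort() on it before firing the next request, which also saves the server some work.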

    Read the article

  • Rx: Piecing together multiple IObservable web requests

    - by McLovin
    Hello, I'm creating multiple asynchronous web requests using IObservables and the Reactive Extensions. This creates an observable for a "GET" web request:

      var tweetObservable = from request in WebRequestExtensions.CreateWebRequest(outUrl + querystring, method)
                            from response in request.GetResponseAsync()
                            let responseStream = response.GetResponseStream()
                            let reader = new StreamReader(responseStream)
                            select reader.ReadToEnd();

    And I can do:

      tweetObservable.Subscribe(response => dosomethingwithresponse(response));

    What is the correct way of executing multiple asynchronous web requests with IObservables and LINQ when some have to wait until other requests have finished? For example, first I would like to verify user info, so I create userInfoObservable; then, if the user info is correct, I want to update the status, so I get updateStatusObservable; then, if the status is updated, I would like to create friendshipObservable; and so on. Also, a bonus question: there is a case where I would like to execute web calls simultaneously, and when all are finished, execute another observable that waits until the other calls are done. Thank you.

    Read the article

  • Intercept web requests from a WebView Flash plugin

    - by starkos
    I've got a desktop browser app which uses a WebView to host a Flash plugin. The Flash plugin makes regular requests to an external website for new data, which it then draws as fancy graphics. I'd like to intercept these web requests and get at the data (so I can display it via Growl, instead of keeping a desktop window around). But best I can tell, requests made by Flash don't get picked up by the normal WebView delegates. Is there another place I can set a hook? I tried installing a custom NSURLCache via [NSURLCache setSharedURLCache] but that never got called. I also tried method swizzling a few of the other classes (like NSCachedURLResponse) but couldn't find a way in. Any ideas? Many thanks!

    Read the article

  • Ruby Equivalent of Python Requests Library (HTTP Client)

    - by Hartator
    There is a library in Python that I love, called requests. requests is an HTTP client built on urllib3, top-notch :) (http://docs.python-requests.org/en/latest/) I am looking for something similar in Ruby. Basically, what I need is:

    - Upload files support (multipart/form-data)
    - Easy get/post
    - Cookies that can be passed from a response object to a request object (to build a login script manually)
    - Stable and flexible
    - Sessions support (so we don't have to handle cookies manually if we don't want to)

    I've looked at Typhoeus, but the code example on the home page doesn't work (they have moved code along and the get method is no longer directly accessible like that), so it's not starting well! :) Curb seems nice and I like curl; there is also RestClient, which seems popular, and em-http seems pretty fast according to benchmarks. There are also Patron and CurlFu, which I haven't had the time to try, and of course Net::HTTP. But there doesn't seem to be one mainstream solution that everyone points to. I think a lot of people have been in my situation, and I wonder what they have chosen, and why?
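
    For a concrete taste, a short sketch of those requirements with RestClient, one of the gems named above; the URLs and field names are placeholders, and the API has shifted between versions, so treat it as indicative:

      require 'rest_client' # gem install rest-client

      # easy GET
      page = RestClient.get('http://example.com/resource')

      # multipart file upload: passing a File makes the request multipart/form-data
      RestClient.post('http://example.com/upload', :file => File.new('/tmp/report.pdf', 'rb'))

      # manual login: carry cookies from the response over to the next request
      login = RestClient.post('http://example.com/login', :user => 'me', :password => 'secret')
      RestClient.get('http://example.com/account', :cookies => login.cookies)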

    Read the article

  • IIS7 integrated mode closing token between requests

    - by user607287
    We are migrating to IIS7 integrated mode and have come across an issue. We authenticate using Windows authentication, but then store a reference to the WindowsPrincipal so that on future requests we can authorize as needed against AD. In IIS7 integrated mode, the token is being closed between requests, so that when we try to run IsInRole it throws a disposed exception. Is there a way to cache this token, or to change our use of WindowsPrincipal, so that we don't need to make successive AD requests to get it for each authorization request? Here is the exception thrown from WindowsPrincipal.IsInRole(""):

      System.ObjectDisposedException: {"Safe handle has been closed"}

    Thanks.

    Read the article

  • Multiple Post Requests Occurring in Quick Succession

    - by Samuel
    This is a bit of an open-ended question, but we have a problem with a web application where, on the final step of completing an order, multiple POST requests are made to the page, sometimes up to 10, all within a couple of seconds. There's nothing unusual about the page: the user fills out a form which is then validated using the jQuery form validation plugin. We've seen this behaviour exhibited on a couple of different browser types, notably IE6 but also IE8. We've also managed to trigger the bug ourselves, but nothing out of the ordinary seems to occur on the browser's end; everything progresses as normal. Apache logs show that multiple POST requests were made at the same time, and the Rails logs show that multiple POST requests were also received by the application, leading me to think it's a problem with the browser. I've exhausted all avenues that I can think of for debugging, so I'm throwing this out there to see if anyone has ideas of what we could try or look for next.
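
    Whatever the root cause turns out to be, a double-submit guard on the client is a cheap mitigation while you investigate. A sketch with the jQuery that's already on the page; the form selector is an assumption:

      // swallow repeated submits of the order form
      $('#order-form').submit(function () {
        var form = $(this);
        if (form.data('submitted')) return false;    // a submit is already in flight
        form.data('submitted', true);
        form.find(':submit').attr('disabled', 'disabled');
        return true;
      });

    Making the order-completion action idempotent on the server (for example, a one-time token in the form that is consumed on first use) protects against any duplicates that still get through.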

    Read the article

  • Best wrapper for simultaneous API requests?

    - by bluebit
    I am looking for the easiest, simplest way to access web APIs that return either JSON or XML, with concurrent requests. For example, I would like to call the Twitter search API and return 5 pages of results at the same time (5 requests). The results should ideally be integrated and returned in one array of hashes. I have about 15 APIs that I will be using, and I already have code to access them individually (using a simple Net::HTTP request) and parse them, but I need to make these requests concurrent in the easiest way possible. Additionally, any error handling for JSON/XML parsing is a bonus.
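
    The lightest-weight version needs nothing beyond the Ruby standard library: one thread per page around the Net::HTTP code that already exists. A sketch, with the URL as a placeholder; Thread#value re-raises any exception from inside the thread, which covers error handling crudely:

      require 'net/http'
      require 'uri'
      require 'json'

      # fetch 5 pages concurrently, one thread per request
      threads = (1..5).map do |page|
        Thread.new do
          body = Net::HTTP.get(URI.parse("http://search.example.com/results.json?page=#{page}"))
          JSON.parse(body) # assumed to parse to an array of hashes
        end
      end

      # join the threads and merge the pages into one array of hashes
      results = threads.map { |t| t.value }.flatten

    Gems like typhoeus (parallel libcurl) or em-http-request offer real multiplexing if thread-per-request becomes unwieldy across 15 APIs.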

    Read the article

  • How to handle server-client requests

    - by Layne
    Currently I'm working on a server-client system which will be the backbone of my application. I have to find the best way to send requests and handle them on the server side. The server side should be able to handle requests like this one:

      getPortfolio -i 2 -d all

    In an old project I decided to send such a request as a string, and the server application had to look up the first part of the string ("getPortfolio"). Afterwards the server application had to find the correct method in a map which linked methods to the first part of the string. The second part ("-i 2 -d all") got passed as a parameter, and the method itself had to handle this string/parameter. I doubt that this is the best solution for handling many different requests. Rgds Layne
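
    The map-of-handlers idea itself is sound (it's essentially the command pattern); what usually hurts is making every handler re-parse the raw tail of the string. Parsing the flags once, centrally, keeps handlers clean. A small Ruby sketch under that assumption; the handler names and behaviour are illustrative:

      # dispatch table: command name => handler taking a hash of parsed flags
      HANDLERS = {
        'getPortfolio' => lambda { |args| "portfolio id=#{args['i']} detail=#{args['d']}" }
      }

      def handle(request)
        command, *rest = request.split(' ')
        handler = HANDLERS[command] or raise "unknown command: #{command}"
        # turn "-i 2 -d all" into { 'i' => '2', 'd' => 'all' } once, for every handler
        args = Hash[rest.each_slice(2).map { |flag, value| [flag.sub(/\A-/, ''), value] }]
        handler.call(args)
      end

      puts handle('getPortfolio -i 2 -d all') # => portfolio id=2 detail=all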

    Read the article

  • IE sending through multiple unrequested requests for parent URL

    - by andyjeffries
    We have a rather large/complex web site (so it's not easy to extract portions of it). The problem we're having is that IE6/7/8 (but 6 is the worst) send multiple requests for the parent of a URL (which then fail, but that's a separate issue). As an example, if you visited (dummy domain name, obviously):

      http://www.example.com/view/event/123/456/78

    we'd see 8-20 requests for

      http://www.example.com/view/event/123/456/

    appear in the Apache log. We've tried disabling all plugins (including toolbars, Flash, etc.) and disabling scripting (e.g. Javascript), and it still sends through at least one erroneous request. We've viewed the requests through ieHttpHeaders, so the browser is aware of them (i.e. it's not prefetching by a network proxy). Any ideas? What else could cause this?

    Read the article

< Previous Page | 3 4 5 6 7 8 9 10 11 12 13 14  | Next Page >