Search Results

Search found 5793 results on 232 pages for 'requests'.


  • Apache Serving 403 Forbidden after OS X Snow Leopard Upgrade to Version 10.6.6

    - by Ian Oxley
    I've just upgraded my MacBook Pro to OS X Snow Leopard version 10.6.6 and now Apache is misbehaving: requests to http://localhost/ generate a 403 Forbidden response (now fixed, see the update below), and requests to any of my virtual hosts seem to generate a 200 OK response but contain zero bytes. Some further info that might be useful: I'm using the Apache that comes bundled with OS X, and PHP from http://www.entropy.ch/software/macosx/php/ (which is in /usr/local/bin). I've had a look at the Apache error log and the only error seems to be the following: [notice] child pid 744 exit signal Segmentation fault (11). I'm completely stumped by this. Any help would be much appreciated. UPDATE: I've managed to resolve the 403 Forbidden error thanks to http://techtrouts.com/mac-os-x-105-web-sharing-forbidden-403-on-httplocalhostusername/ but I'm still having the second problem: the empty response now happens for any request, e.g. when I request http://localhost
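    For anyone hitting the same 403 after the upgrade: it is commonly traced to a missing per-user Apache config file, which the upgrade can remove. A hedged sketch of what that file usually looks like (the username in the path and Directory block is a placeholder, and this is only the 403 part, not the zero-byte/segfault issue):

        # /etc/apache2/users/username.conf
        <Directory "/Users/username/Sites/">
            Options Indexes MultiViews
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>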

    Read the article

  • Redmine VirtualHost config not working with Document Root

    - by David Kaczynski
    I am trying to have requests for https://redmine.example.com access my Redmine instance, but I am just getting an "Index of /" page with the contents of /var/www/redmine (which is a symbolic link to /usr/share/redmine/public). My VirtualHost config:

        <VirtualHost *:443>
            ServerName redmine.example.com
            DocumentRoot /var/www/redmine
            SSLEngine on
            SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem
            SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
            BrowserMatch "MSIE [2-6]" \
                nokeepalive ssl-unclean-shutdown \
                downgrade-1.0 force-response-1.0
            BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown
        </VirtualHost>

    My /etc/apache2/sites-enabled/redmine:

        RailsBaseURI /redmine

    How do I get requests for https://redmine.example.com to correctly launch my Redmine instance?
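    For reference, a hedged sketch of the shape this vhost usually takes when Redmine is served by Phusion Passenger at the root of its own host (assuming Passenger is the app server here; Passenger picks the app up automatically when DocumentRoot is the app's public/ directory, and the RailsBaseURI /redmine line only applies to sub-URI deployments):

        <VirtualHost *:443>
            ServerName redmine.example.com
            # point straight at the app's public directory rather than a symlink
            DocumentRoot /usr/share/redmine/public
            <Directory /usr/share/redmine/public>
                AllowOverride None
                Order allow,deny
                Allow from all
            </Directory>
            SSLEngine on
            SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem
            SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
        </VirtualHost>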

    Read the article

  • How to invalidate nginx reverse proxy cache in front of other nginx servers?

    - by Olivier Lance
    I'm running a Proxmox server on a single IP address that dispatches HTTP requests to containers depending on the requested host. I am using nginx on the Proxmox side to listen to HTTP requests, and I am using the proxy_pass directive in my different server blocks to dispatch requests according to the server_name. My containers run on Ubuntu and are also running an nginx instance. I'm having trouble with caching on a particular website that is fully static: nginx keeps serving me stale content after file updates, until I either clear /var/cache/nginx/ and restart nginx, or set proxy_cache off for this server and reload the config. Here's the detail of my configuration. On the server (Proxmox), /etc/nginx/nginx.conf:

        user www-data;
        worker_processes 8;
        pid /var/run/nginx.pid;

        events {
            worker_connections 768;
            # multi_accept on;
            use epoll;
        }

        http {
            ##
            # Basic Settings
            ##
            sendfile on;
            #tcp_nopush on;
            tcp_nodelay on;
            #keepalive_timeout 65;
            types_hash_max_size 2048;
            server_tokens off;
            # server_names_hash_bucket_size 64;
            # server_name_in_redirect off;
            include /etc/nginx/mime.types;
            default_type application/octet-stream;
            client_body_buffer_size 1k;
            client_max_body_size 8m;
            large_client_header_buffers 1 1K;
            ignore_invalid_headers on;
            client_body_timeout 5;
            client_header_timeout 5;
            keepalive_timeout 5 5;
            send_timeout 5;
            server_name_in_redirect off;

            ##
            # Logging Settings
            ##
            access_log /var/log/nginx/access.log;
            error_log /var/log/nginx/error.log;

            ##
            # Gzip Settings
            ##
            gzip on;
            gzip_disable "MSIE [1-6]\.(?!.*SV1)";
            gzip_vary on;
            gzip_proxied any;
            gzip_comp_level 6;
            # gzip_buffers 16 8k;
            gzip_http_version 1.1;
            gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

            limit_conn_zone $binary_remote_addr zone=gulag:1m;
            limit_conn gulag 50;

            ##
            # Virtual Host Configs
            ##
            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }

    /etc/nginx/conf.d/proxy.conf:

        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_hide_header X-Powered-By;
        proxy_intercept_errors on;
        proxy_buffering on;
        proxy_cache_key "$scheme://$host$request_uri";
        proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=cache:10m inactive=7d max_size=700m;

    /etc/nginx/sites-available/my-domain.conf:

        server {
            listen 80;
            server_name .my-domain.com;
            access_log off;

            location / {
                proxy_pass http://my-domain.local:80/;
                proxy_cache cache;
                proxy_cache_valid 12h;
                expires 30d;
                proxy_cache_use_stale error timeout invalid_header updating;
            }
        }

    On the container (my-domain.local), nginx.conf (everything is inside the main config file -- it's been done quickly...):

        user www-data;
        worker_processes 1;
        error_log logs/error.log;

        events {
            worker_connections 1024;
        }

        http {
            include mime.types;
            default_type application/octet-stream;
            sendfile on;
            #tcp_nopush on;
            keepalive_timeout 65;
            gzip off;

            server {
                listen 80;
                server_name .my-domain.com;
                root /var/www;
                access_log logs/host.access.log;
            }
        }

    I've read many blog posts and answers before resolving to post my own question... most answers I can see suggest setting sendfile off; but that didn't work for me. I have tried many other things, double-checked my settings, and all seems fine. So I'm wondering whether I'm expecting nginx's cache to do something it's not meant to do...?
    Basically, I thought that if one of the static files in my container was updated, the cache in my reverse proxy would be invalidated and my browser would get the new version of the file the next time it requested it... but I now have the feeling I misunderstood many things. In particular, I now wonder how the nginx instance on the server could even know that a file in the container has changed. I have seen a directive called proxy_header_pass (or something like it); should I use that to let the container's nginx somehow inform the one on Proxmox about updated files? Is this expectation just a dream, or can I do it with nginx on my current architecture?
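    A hedged sketch of one way to get fresher content without turning the cache off entirely: nginx's proxy cache never asks the upstream about changes before an entry expires, so a shorter validity plus an explicit bypass hook is a common workaround (the X-Refresh header name below is arbitrary/hypothetical):

        location / {
            proxy_pass http://my-domain.local:80/;
            proxy_cache cache;
            # shorter TTL so updated files show up within minutes rather than 12 hours
            proxy_cache_valid 200 301 302 5m;
            # any request carrying "X-Refresh: 1" (e.g. sent with curl after a deploy)
            # skips the cached copy and re-fetches from the container
            proxy_cache_bypass $http_x_refresh;
            proxy_cache_use_stale error timeout invalid_header updating;
        }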

    Read the article

  • Unextending Sharepoint 2007 Web Application from a zone

    - by dunxd
    When our Sharepoint was migrated from Sharepoint 2003 to Sharepoint 2007 (both fully paid versions), the consultants who carried it out extended each web app into two IIS sites/zones (e.g. for the original web app http://intranet, both http://newintranet and http://intranet would be created for Sharepoint 2007, each with its own IIS site). The idea was that during the migration period we would set up DNS to point the old URL to the SP2003 servers and the new one to SP2007, then once the migration was complete, do a DNS change so SP2007 would receive the requests to the http://intranet type URLs. Unfortunately the contractors did not tidy up the application extensions and IIS sites after the migration, and for some time both URLs were in use, resulting in many document links pointing to the http://newintranet type URLs. This means I need to maintain these URLs. Due to a rejig of organisation structure we now need to relocate some Sharepoint sites, and I'd like to use the RDA Collaboration Sharepoint URL Redirector feature. However, a limitation of this is that it doesn't work for web applications which have been extended into multiple zones, so I have a need to tidy up the situation that our consultants left behind. I think the right thing to do is use the "Remove Sharepoint from IIS Web Site" page in Central Admin to remove the zone for the newintranet type sites, and select the option to also delete the IIS site. That should result in having no IIS sites listening for http://newintranet type URLs. Is this the right procedure? Once I have done that I need to set up Sharepoint to receive requests sent to the http://newintranet type URLs so they will continue to work. I am not sure which of these I should do: use Alternate Access Mappings; or add a host header to the IIS site; or create a non-Sharepoint IIS site for each http://newintranet type URL and use IIS redirection to forward the requests to the new URL, using variables to pass the path to the Sharepoint site. Does anyone have any thoughts on these options, or any other way of achieving this? Sharepoint 2007 is running on Windows 2003 with IIS6. We don't currently have plans/budget to upgrade to Sharepoint 2010.

    Read the article

  • Header unset Server not working for static files

    - by Sam Lee
    I'm trying to unset the "Server" field in the response headers. I do this using Header unset Server, and that works fine for requests handled by mod_perl. However, for requests to /static I use Apache to serve static files, and for some reason, when these files are loaded directly in the browser, the Server field is not removed. How can I go about fixing this? Relevant parts of my httpd.conf:

        LoadModule headers_module modules/mod_headers.so
        Header unset Server

        <VirtualHost *:80>
            <Location />
                SetHandler modperl
                PerlResponseHandler MyHandler
            </Location>

            Alias /static/ /home/site/static/
            <Location /static>
                SetHandler None
            </Location>
        </VirtualHost>
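    Not a direct fix, but a hedged sketch of a commonly suggested alternative: rather than trying to strip the header per-request, reduce what Apache advertises globally (mod_security's SecServerSignature can go further and replace the value outright, if that module happens to be available):

        # trims the Server header down to just "Apache" for every response,
        # including statically served files
        ServerTokens Prod
        # removes the server version footer from Apache-generated pages
        ServerSignature Off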

    Read the article

  • Active DFS node did not restore after failure

    - by Mark Henderson
    On Tuesday we had a Server 2008 R2 DFS-R node go offline unexpectedly. DFS did the right thing and started routing requests to a different node, which was in a remote site. This is by design, because even though it's slow, at least it's still working. We had the local DFS-R node back online within an hour, and it had synced all its changes 10 minutes after that. 3 of the 5 terminal servers reset themselves to the local DFS node, but the other two stayed pointing at the remote DFS node for three days, until someone finally piped up about how slow requests were. What reasons could there be why some, but not all, of the servers reverted? Is the currently active DFS node for a namespace exposed anywhere in the OS (WMI, or even scripts) so that we can monitor the active nodes?
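    A hedged sketch of where that information is usually visible on the client side: each terminal server keeps a local DFS referral cache, which dfsutil (from the DFS management tools) can display and flush. Exact sub-commands vary between dfsutil versions, so treat these as illustrative:

        rem shows the cached referrals, including which target is currently active
        dfsutil cache referral

        rem clears the cache so the client re-selects a target (preferring in-site nodes)
        dfsutil cache referral flush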

    Read the article

  • Remote location loses connectivity

    - by user24559
    I have a remote location that has constant internet access using DSL. The remote location has only three Linksys cameras that use the internet access through a router; there is no computer on the network. The router is configured to use the DYN.org service to keep the IP address updated if the internet provider assigns a new IP address. The problem I am having is that after a period of time the router stops responding to incoming ping requests and the cameras show as offline. However, if I keep one of the cameras constantly streaming data, then the connection to all the cameras works just fine all the time. How can I keep the router from appearing to go to sleep and ignoring incoming requests? The router is a Linksys WRT300N.

    Read the article

  • Order of mod_rewrite rules in .htaccess not being followed

    - by user39461
    We're trying to enforce HTTPS on certain URLs and HTTP on others. We are also rewriting URLs so all requests go through our index.php. Here is our .htaccess file:

        # enable mod_rewrite
        RewriteEngine on

        # define the base url for accessing this folder
        RewriteBase /

        # Enforce http and https for certain pages
        RewriteCond %{HTTPS} on
        RewriteCond %{REQUEST_URI} !^/(en|fr)/(customer|checkout)(.*)$ [NC]
        RewriteRule ^(.*)$ http://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

        RewriteCond %{HTTPS} off
        RewriteCond %{REQUEST_URI} ^/(en|fr)/(customer|checkout)(.*)$ [NC]
        RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

        # rewrite all requests for files and folders that do not exist
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ index.php?query=$1 [L,QSA]

    If we don't include the last rule (RewriteRule ^(.*)$ index.php?query=$1 [L,QSA]), the HTTP and HTTPS rules work perfectly. However, when we add the last three lines our other rules stop working properly. For example, if we try to go to https:// www.domain.com/en/customer/login, it redirects to http:// www.domain.com/index.php?query=en/customer/login. It's as if the last rule is being applied before the redirection is done, despite the [L] flag indicating that the redirection should be the last rule to apply.
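    For what it's worth, a hedged sketch of one common workaround: in per-directory (.htaccess) context the rule set is re-run after the internal rewrite to index.php, and on that second pass REQUEST_URI is /index.php, which no longer matches the customer/checkout exception, so the HTTP rule fires. Testing the original request line (%{THE_REQUEST}) instead sidesteps that:

        # %{THE_REQUEST} still holds the URL the browser asked for, even after internal rewrites
        RewriteCond %{HTTPS} on
        RewriteCond %{THE_REQUEST} !\s/(en|fr)/(customer|checkout) [NC]
        RewriteRule ^ http://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

        RewriteCond %{HTTPS} off
        RewriteCond %{THE_REQUEST} \s/(en|fr)/(customer|checkout) [NC]
        RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]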

    Read the article

  • apache returning "The connection was reset"

    - by usjes
    One of my dedicated servers had a network issue today and the data center had to replace a router. Since then, the sites on that server return a "The connection was reset" error most of the time. I tried installing nginx and pages open better, but the error still shows up sometimes. Everything in the config seems normal; what could be causing this error? UPDATE: I just noticed that in the WHM Apache status there is always only 1 request currently being processed, with 8 idle workers. I know for sure the server receives thousands of requests per minute. What could be limiting this to such a low number?

    Read the article

  • forwardfor information is missing

    - by FAFA
    I use the following configuration to load balance HTTPS connections, using haproxy 1.4.8. SSL offloading is done by Apache.

        listen ssl_to_waf 192.168.101.54:443
            mode tcp
            balance roundrobin
            option ssl-hello-chk
            server wafA 192.168.101.61:444 check

        listen ssl_from_waf 192.168.101.61:445
            balance roundrobin
            option forwardfor
            server webA 192.168.101.46:80 check

    For HTTP requests this works great: requests are distributed to my Apache servers just fine. But for HTTPS requests, I lose the "forwardfor" information. I need to preserve the client IP address. How can I use HAProxy to load balance across a number of SSL servers while allowing those servers to know the client's IP address?

    Read the article

  • Load Balancing a UDP server

    - by Hellfrost
    Hello StackOverflow, I have a UDP server; it is a central part of my business process. In order to handle the loads I'm expecting in the production environment I'll probably need 2 or 3 instances of the server. The server is almost entirely stateless: it mostly collects data, and the layer above it knows how to handle the minimal amount of stale data that can arise from the multiple server instances. My question is, how can I implement load balancing between the servers? I would prefer to distribute the requests as evenly as possible between the servers. I would also like to have some affinity (stickiness) -- I mean, if client X was routed to server Y, then I want all of X's subsequent requests to go to server Y, as long as that is sensible and does not overload Y. By the way, it is a .NET system... what would you recommend?

    Read the article

  • Why are there unknown URLs in router log?

    - by Martin
    I recently looked at my router log. Why are a lot of requests that I never sent originating from a computer in my home network? They do not look like third-party advertisements or images embedded in a page. The requests have patterns, such as:

        top-visitor.com/look.php
        www.dottip.com/search/result.php?aff=8755&req=nickelodeon+games
        www.placeca.com/search/result.php?aff=3778&req=wireless+cell+phone
        www.bb5a.com/search.php?username=3348&keywords=flights
        www.blazerbox.com/search.php?username=2341&keywords=colorado+springs+real+estate
        www.freeautosource.com/search.php?username=sun100&keywords=vehicle
        www.1sp2.com/search.php?username=20190&keywords=las+the+hotel+vegas
        www.loadgeo.com/search/result.php?aff=10357&req=winamp
        www.exalt123.com/portal.php?ref=seo2007
        www.7catalogs.com/search.php?username=la24&keywords=shutter
        www.theloaninstitute.com/search.php?username=kevin&keywords=webcam
        www.grammt.com/search.php?username=2530&keywords=bob

    And there are hundreds of these requests sent within a second. So what's happening?

    Read the article

  • Proxy Error 502 "Reason: Error reading from remote server" with Apache 2.2.3 (Debian) mod_proxy and Jetty 6.1.18

    - by Martin
    Apache is receiving requests at port :80 and proxying them to Jetty at port :8080. The error page says: "The proxy server received an invalid response from an upstream server. The proxy server could not handle the request GET /." My dilemma: everything works fine normally (fast requests, and requests lasting a few seconds or a few tens of seconds are processed OK). Problems occur when request processing takes long (a few minutes?). If I issue the request directly to Jetty at port :8080, the request is processed OK, so the problem is likely to sit somewhere between Apache and Jetty, where I am using mod_proxy. How do I solve this? I have already tried some "tricks" related to KeepAlive settings, without luck. Here is my current configuration, any suggestions?

        #keepalive Off                      ## I have tried this, does not help
        #SetEnv force-proxy-request-1.0 1   ## I have tried this, does not help
        #SetEnv proxy-nokeepalive 1         ## I have tried this, does not help
        #SetEnv proxy-initial-not-pooled 1  ## I have tried this, does not help
        KeepAlive 20                        ## I have tried this, does not help
        KeepAliveTimeout 600                ## I have tried this, does not help
        ProxyTimeout 600                    ## I have tried this, does not help

        NameVirtualHost *:80
        <VirtualHost _default_:80>
            ServerAdmin [email protected]
            ServerName www.mydomain.fi
            ServerAlias mydomain.fi mydomain.com mydomain www.mydomain.com

            ProxyRequests On
            ProxyVia On

            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>

            ProxyRequests Off
            ProxyPass / http://www.mydomain.fi:8080/ retry=1 acquire=3000 timeout=600
            ProxyPassReverse / http://www.mydomain.fi:8080/

            RewriteEngine On
            RewriteCond %{SERVER_NAME} !^www\.mydomain\.fi
            RewriteRule /(.*) http://www.mydomain.fi/$1 [redirect=301L]

            ErrorLog /var/log/apache2/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog /var/log/apache2/access.log combined
            ServerSignature On
        </VirtualHost>

    Here is also the debug log from a failing request:

        74.125.43.99 - - [29/Sep/2010:20:15:40 +0300] "GET /?wicket:bookmarkablePage=newWindow:com.mydomain.view.application.reports.SaveReportPage HTTP/1.1" 502 355 "https://www.mydomain.fi/?wicket:interface=:0:2:::" "Mozilla/5.0 (Windows; U; Windows NT 6.1; fi; rv:1.9.2.10) Gecko/20100914 Firefox/3.6.10"
        [Wed Sep 29 20:20:40 2010] [error] [client 74.125.43.99] proxy: error reading status line from remote server www.mydomain.fi, referer: https://www.mydomain.fi/?wicket:interface=:0:2:::
        [Wed Sep 29 20:20:40 2010] [error] [client 74.125.43.99] proxy: Error reading from remote server returned by /, referer: https://www.mydomain.fi/?wicket:interface=:0:2:::

    Read the article

  • How to rewrite to a virtual directory with a different application

    - by user42181
    Hi, I have a CMS application that manages multiple websites. Today, whenever I change the code-behind of one of these websites, I have to rebuild the DLL for all websites and deploy it -- this disconnects all current sessions and is really bad. IIS is configured to listen to all domain requests; if the request is to one of the websites' domains, the application rewrites it. For example, if someone requests www.example.com, and example.com is configured in the application to be website 12, the request is rewritten to www.example.com/websites/12/default.aspx. This is done for all websites. We want to separate the websites' DLLs from each other and from the main CMS. We have a virtual directory for each website, but when trying to rewrite to it, we discovered that IIS does not support this (we get a "Could not load type '_12._Default'" error). How can we perform this rewrite so it does rewrite to virtual directories -- or, if anyone has any other solution for the initial DLL separation problem? Thanks in advance

    Read the article

  • Intercept Apache communication

    - by Nathan Adams
    I am looking to develop a solution that eliminates potential spammers. The way this system will work is that it will watch connections and requests. Going into the specifics is more a question for Stack Overflow, but what I am interested in is whether it is possible to tell Apache to pass the request over to my application first and give it the ability to accept or deny the request. Sure, it will make requests slower, but I think that is a trade-off I am willing to make. I still want, however, Apache to run the request through any interpreters (such as PHP). The idea is that one wouldn't have to implement anti-spam measures on a per-app basis but have an "umbrella" of spam protection.
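    A hedged sketch of one mechanism that fits this description: mod_rewrite's external-program maps let Apache consult a long-running helper process for every request before the usual handlers (PHP included) run. RewriteMap has to live in the server or vhost config (not .htaccess), and the script path and the allow/deny protocol below are hypothetical:

        # the helper reads one key per line on stdin and must write one value per line on stdout
        RewriteEngine On
        RewriteMap spamcheck prg:/usr/local/bin/spamcheck
        # deny the request outright (403) whenever the helper answers "deny" for this client IP
        RewriteCond ${spamcheck:%{REMOTE_ADDR}|allow} =deny
        RewriteRule ^ - [F]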

    Read the article

  • Amazon Product API: "Your request is missing a required parameter combination" on Blended ItemSearch

    - by Daniel Schaffer
    I'm having some problems trying to do an ItemSearch on the Blended index using the Amazon Product API. According to the documentation, Blended requests cannot specify the MerchantId parameter -- and indeed, if I try to include it I get an error telling me so. However, when I don't include it, I get an error telling me that my request is missing a required parameter combination and that a valid combination includes MerchantId... what the hell? Here's the XML response:

        <Items xmlns="http://webservices.amazon.com/AWSECommerceService/2005-10-05">
          <Request>
            <IsValid>False</IsValid>
            <ItemSearchRequest>
              <Availability>Available</Availability>
              <Condition>All</Condition>
              <Keywords> home theater pc and other geekery</Keywords>
              <ResponseGroup>Similarities</ResponseGroup>
              <ResponseGroup>SalesRank</ResponseGroup>
              <ResponseGroup>OfferSummary</ResponseGroup>
              <ResponseGroup>Small</ResponseGroup>
              <ResponseGroup>Images</ResponseGroup>
              <SearchIndex>Blended</SearchIndex>
            </ItemSearchRequest>
            <Errors>
              <Error>
                <Code>AWS.MissingParameterCombination</Code>
                <Message>Your request is missing a required parameter combination. Required parameter combinations include MerchantId, Availability.</Message>
              </Error>
            </Errors>
          </Request>
        </Items>

    The failing requests are being sent as part of batches with other requests that are succeeding. I'm using REST to send my requests, so here's an example of a request:

        http://ecs.amazonaws.com/onca/xml?AWSAccessKeyId=-------------&
        ItemSearch.1.Keywords=Mates%20of%20State&
        ItemSearch.1.MerchantId=Amazon&
        ItemSearch.1.SearchIndex=DVD&
        ItemSearch.2.Keywords=teaching%20Lily%20various%20computer%20related%20skills&
        ItemSearch.2.SearchIndex=Blended&
        ItemSearch.Shared.Availability=Available&
        ItemSearch.Shared.Condition=All&
        ItemSearch.Shared.ResponseGroup=Small%2CSalesRank%2CImages%2COfferSummary%2CSimilarities&
        Operation=ItemSearch%2CSimilarityLookup&
        Service=AWSECommerceService&
        SimilarityLookup.1.ItemId=B000FNNHZ2&
        SimilarityLookup.2.ItemId=B000EQ5UPU&
        SimilarityLookup.Shared.Availability=Available&
        SimilarityLookup.Shared.Condition=All&
        SimilarityLookup.Shared.MerchantId=Amazon&
        SimilarityLookup.Shared.ResponseGroup=Small%2CSalesRank%2CImages%2COfferSummary&
        Timestamp=2010-04-02T17%3A18%3A05Z&
        Signature=----------------

    Any ideas as to what I'm doing wrong?

    Read the article

  • How to efficiently dump a huge MySQL innodb database?

    - by Jagbir
    I've got an Ubuntu 10.04 production MySQL database server where the total size of the database is 260 GB, while the root partition where the DB is stored is itself 300 GB; essentially this means around 96% of / is full and there's no space left for storing a dump/backup, etc. No other disk is attached to the server as of now. My task is to migrate this database to another server sitting in a different datacenter. The question is how to do that efficiently, with minimum downtime. I'm thinking along these lines: Request to attach an extra drive to the server and take a dump on that drive. Transfer the dump to the new server, restore it, and make the new server a slave of the existing one to keep the data in sync. When the migration is needed, break replication, update the slave config to accept read/write requests, make the old server read-only so it won't entertain any write requests, and tell the app developers to update their config with the new IP address for the DB. What are your suggestions to improve this, or any alternative, better approach for this task?
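    A hedged sketch of a variant of the first step that avoids needing extra local disk at all: stream the dump straight to the new server over SSH. Hostnames are placeholders and credentials are omitted; --single-transaction gives a consistent InnoDB snapshot without locking the tables, and --master-data=2 records the binlog coordinates needed to start replication afterwards:

        mysqldump --single-transaction --master-data=2 --all-databases \
          | gzip -c \
          | ssh user@new-db-server "gunzip -c | mysql"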

    Read the article

  • Scaling a video processing application on EC2?

    - by Stpn
    I am approaching the need to scale a video-processing application that runs on EC2. So far the setup is one machine: Backbone.js frontend, Rails 3.2, PostgreSQL, Resque + S3 for storage. The flow of the app is as follows: 1) Request from frontend: upload a video. 2) Store the video. 3) Query external APIs. 4) Process / encode the videos. 5) Post back to the frontend. I can separate the backend and frontend without any problems, but when it comes to distributing the backend between several servers I am a bit puzzled. I can probably come up with a temporary solution (like just duplicating apps, making several instances), but since I don't really have expertise in backend system administration, there could be some fundamental mistakes. Also, I would rather have something that is scalable. I wonder if anyone can give some feedback on the following plan: A) Frontend machine: just the frontend, talks to the backend via a REST API of sorts. B) Backend server (BS), main database: gets requests from 1), posts to 2), saves uploads to 3). C) S3 storage. D) Server for querying APIs: basically just Resque workers that post info back to 2). E) Server for video encoding: processes videos uploaded in 3) and uploads them back.

    So I will have:

        A) frontend
             \
              B) MAIN_APP/DB ----- C) S3 Storage (Files)
                 /          \
                /            \
        D) ExternalAPI_queries   E) Video_Processing
           (redundant DB)           (redundant DB)

    All this will supposedly talk to each other via HTTP requests. My reason for this is that the video-processing part is really the most resource-intensive, and I would just run a barebones application that accepts requests and starts processing them. Questions: 1) In this setup I will have the main database at B), and all other servers will communicate with it via HTTP requests (and also store duplicates of the database, I guess, for safety reasons). Is this the right approach, or should I have one database that everyone connects to (and if so, how)? 2) Is it a good idea to separate the API queries from the video-processing part? Logically they are very close (processing is determined by the result of the API queries), but resource-wise video processing is way more intensive. 3) What should I use to distribute calls between the backend apps based on load?

    Read the article

  • How do I remove a URL from Google without having to have a Google E-mail Account

    - by PP
    Really simple question: I do not want a Google account. I just want Google to stop making requests every 2 minutes for a URL it should never have known about (apparently Google harvests URLs from search requests as well as private e-mails, not just from actual web pages). But when I search Google's help for removing URLs, it appears I have to use their "webmaster tools", which require logging into a GMail account! How do I tell Google not to index my URL without becoming a customer? Note: I already return 404 for the URLs in question using a rewrite rule. This appears to make zero difference to the crawler, which continually attempts to fetch the page every 2 minutes.
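    A hedged sketch of the account-free route: a robots.txt at the site root is honoured by Googlebot and needs no Webmaster Tools login (the path below is a placeholder for the URL in question). Returning 410 Gone instead of 404 is also commonly reported to get a URL dropped faster, though neither approach is instant.

        User-agent: *
        Disallow: /the-unwanted-path/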

    Read the article

  • nginx + Jetty - thousands of connections stuck in LAST_ACK

    - by virulence
    I have a FreeBSD machine with jails -- two in particular: one that runs nginx and another that runs a Java program that accepts requests via Jetty (embedded mode). Jetty receives upwards of 500 requests/sec constantly, and there has been an issue lately where I constantly have over 60,000 connections in the LAST_ACK state between nginx and Jetty.

    Distribution of all connections (includes some other services, particularly php-fpm):

        root@host:/root # netstat -an > conns.txt
        root@host:/root # cat conns.txt | awk '{print $6}' | sort | uniq -c | sort -n
              18 LISTEN
             112 CLOSING
             485 ESTABLISHED
             650 FIN_WAIT_2
            1425 FIN_WAIT_1
            3301 TIME_WAIT
           64215 LAST_ACK

    Distribution of nginx - jetty connections:

        root@host:/root # cat conns.txt | grep '10.10.1.57' | awk '{print $6}' | sort | uniq -c | sort -n
               1
               3 CLOSE_WAIT
               3 LISTEN
              18 FIN_WAIT_2
             125 ESTABLISHED
           64193 LAST_ACK

    I'd prefer every request to fully close the connection. Client requests are about 10 minutes apart from each other, so connections must be closed. Some of the connections:

        tcp4       0      0 10.10.1.50.46809      10.10.1.57.9050       LAST_ACK
        tcp4       0      0 10.10.1.50.46805      10.10.1.57.9050       LAST_ACK
        tcp4       0      0 10.10.1.50.46797      10.10.1.57.9050       LAST_ACK
        tcp4       0      0 10.10.1.50.46794      10.10.1.57.9050       LAST_ACK
        tcp4       0      0 10.10.1.50.46790      10.10.1.57.9050       LAST_ACK
        tcp4       0      0 10.10.1.50.46789      10.10.1.57.9050       LAST_ACK
        tcp4       0      0 10.10.1.50.46771      10.10.1.57.9050       LAST_ACK
        etc..

    On Jetty's end I've set maxIdleTime to 2000 -- before this all connections were in ESTABLISHED but they are now LAST_ACK. On Jetty's end I've also set Connection: close (i.e. response.setHeader(HttpHeaders.CONNECTION, HttpHeaderValues.CLOSE);). Jetty never reports a lot of open connections -- always very few. PF/IPFW is not currently being used. On nginx, reset_timedout_connection is on. I cannot figure out how to get nginx or Jetty to forcibly close the connection; is this simply something that needs to be fixed in Jetty so that it fully closes the socket after the request finishes? Thanks a lot in advance.

    EDIT: forgot my nginx config for the proxy setup:

        proxy_pass http://10.10.1.57:9050;
        proxy_set_header HTTP_X_GEOIP $http_x_geoip;
        proxy_set_header GEOIP_COUNTRY_CODE $geoip_country_code;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header Connection "";
        proxy_http_version 1.1;

    EDIT2: Forcing Jetty to close the connection via request.getConnection().getEndPoint().close() does nothing -- it's obvious the connection IS being closed (as it's in LAST_ACK), but why isn't it getting past this state? Is nginx keeping the connection open to the backend for some reason?

    Read the article

  • Blocking IP address with port forwarding

    - by Jeff Storey
    I have a website set up behind a router, so the router has the external-facing address and forwards requests to the web server inside the network. If there are X invalid login attempts, that IP address is blocked from logging in. The problem is that, because the site is accessed through port forwarding, all requests show up as though they are coming from the router's address, and thus the router's address becomes the blocked IP. I'm not sure if this is a limitation of the router (Linksys WRT160N) or if this is a more general issue. Is there a way to handle this?

    Read the article

  • Setting Up nginx Site Down That Responds Differently to Ajax?

    - by dave mankoff
    I am trying to set up an automatic site-down page for nginx. So far I have this:

        location / {
            try_files /sitedown.html @myapp;
        }

        location @myapp {
            ...
        }

    That works well enough: if sitedown.html is present, it serves that; otherwise it serves the app. What I'd like to do, however, is respond differently to Ajax requests so that they don't error out the JavaScript. I believe, using the rewrite module, that I can do something like if ($http_x_requested_with = XMLHttpRequest) { ... }, but it's unclear to me how to use this to do what I want. I'd like requests that come with that header to return a simple JSON response like "sitedown" with the appropriate JSON content-type header. Barring that, it would be nice to return a 503 response code that the JavaScript could react to.
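    A hedged sketch of one way this is often wired up, keeping the same file and named-location conventions as above (it assumes root is set so $document_root resolves; Ajax callers get a 503 with a tiny JSON body, everyone else gets the HTML page):

        location / {
            error_page 503 = @sitedown;
            # when the maintenance file exists, everything is handed to @sitedown
            if (-f $document_root/sitedown.html) {
                return 503;
            }
            try_files $uri @myapp;
        }

        location @sitedown {
            default_type application/json;
            # Ajax callers can test for the 503 status and/or the JSON payload
            if ($http_x_requested_with = XMLHttpRequest) {
                return 503 '"sitedown"';
            }
            # browsers get the static maintenance page
            rewrite ^ /sitedown.html break;
        }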

    Read the article

  • Web Development - How to access custom host, defined in my hosts file, from another device in the same network

    - by Neara
    OK, I hope I'll be able to explain the issue I'm experiencing. I'm working on a project that has two parts: one takes all requests from the usual localhost, the other handles requests for myhost.local. When accessing both addresses from my computer, it works fine. But now I need to test myhost.local on mobile devices connected to the same network. Usually I would just run the server on my computer's IP on the network: python manage.py runserver 10.0.0.8:8000. Then, from any device, going to 10.0.0.8:8000 would show the project I'm working on. However, now accessing that IP address routes me straight to the localhost part. So, my question is: how do I access myhost.local from another device on the same network? I don't want to change router settings if that can be avoided, because sometimes I work from places where I can't access the router admin. Is there any network setting on my computer that I can change to fix the routing to myhost.local without losing access to localhost as well?
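    A hedged sketch of one way to do this without touching the router: if the mobile device's Wi-Fi settings allow its DNS server to be pointed at the development machine, a local dnsmasq with a single address rule makes myhost.local resolve to that machine (10.0.0.8 matches the address used above), and the project then dispatches on the Host header as it already does:

        # /etc/dnsmasq.conf on the development machine
        address=/myhost.local/10.0.0.8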

    Read the article

  • Worker processes not starting in IIS 7.5. What should I check?

    - by locster
    I have a Windows 7 machine (Windows version 6.1.7601 SP1 Build 7601) with IIS installed. At some point the installation appears to have become 'corrupted' in some way, as any request is now met with the message: "Service Unavailable -- HTTP Error 503. The service is unavailable." In IIS Manager, IIS is started and the app pool I am using reports itself as 'Started', yet there is no w3wp.exe process listed in the process list in Task Manager (I am a local admin and have clicked the 'Show processes from all users' button). I have enabled logging for the web site (at the default location of %SystemDrive%\inetpub\logs\LogFiles), but this folder is empty. I am assuming that this log output is written by w3wp.exe as it handles requests (no w3wp.exe, no log file?). Presumably there is another layer of request handling that is responsible for starting the worker processes; does this layer have log files I can check, and/or can I uninstall/re-install that layer? Thanks.
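    A hedged sketch of where that layer usually lives: worker processes are spawned by the Windows Process Activation Service (WAS) on behalf of W3SVC, and its failures land in the System event log rather than the per-site IIS logs. A quick check from an elevated prompt (the commands are standard, but treat the exact troubleshooting path as an assumption):

        rem are the activation service and the web service actually running?
        sc query WAS
        sc query W3SVC

        rem lists any worker processes IIS believes are running
        %windir%\system32\inetsrv\appcmd list wp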

    Read the article

  • Nginx proxy to Apache - resolve HTTP ORIGIN

    - by Fratyr
    I have a server set up with nginx serving static content and proxying all PHP/dynamic requests to Apache on 127.0.0.1. I'm building an API for my databases, and I need to allow clients by their origin (domain name) rather than just IP, based on CORS rules. So when I send an HTTP header header("Access-Control-Allow-Origin: www.client-requesting.myapi.com"); from my API server, I have to tell it which origin I allow, otherwise client-side requests to my API won't work, due to the same-origin policy. The question is: how can I know which domain name (if any) called my API? What should the nginx and Apache configuration be to pass the origin parameter? I tried to google, and all I found is a possible solution with mod_rpaf, but I wanted to be sure. Thanks!
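    For what it's worth, a hedged sketch: browsers send the caller's origin in the Origin request header on cross-site requests, nginx forwards request headers it doesn't explicitly override, and PHP then sees the value as $_SERVER['HTTP_ORIGIN'], so mod_rpaf isn't needed for this part. Making the pass-through explicit in the proxy block (the backend address and location match are illustrative):

        location ~ \.php$ {
            proxy_pass http://127.0.0.1:8080;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            # forward the browser's Origin header unchanged so the API can check it
            # against a whitelist and echo it back in Access-Control-Allow-Origin
            proxy_set_header Origin $http_origin;
        }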

    Read the article
