Search Results

Search found 1902 results on 77 pages for 'nginx'.


  • Nginx static files exclude one or some file extensions

    - by Evgeniy
    I'm serving up a static site via nginx:

        location ~* \.(avi|bin|bmp|dmg|doc|docx|dpkg|exe|flv|gif|htm|html|ico|ics|img|jpeg|jpg|m2a|m2v|mov|mp3|mp4|mpeg|mpg|msi|pdf|pkg|png|ppt|pptx|ps|rar|rss|rtf|swf|tif|tiff|txt|wmv|xhtml|xls|xml|zip)$ {
            root /var/www/html1;
            access_log off;
            expires 1d;
        }

    My goal is to exclude requests like http://connect1.webinar.ru/converter/task/. The full form is http://mydomain.tld/converter/task/setComplete/fid/34330/fn/7c2cfed32ec2eef6788e728fa46f7a80.ppt.swf. Although these URLs end in a static-looking extension, they are not static files but script requests, so this location block breaks them. What is the best way to handle this? Can I add an exclusion for this URL path, or can I exclude the specific compound extensions (.ppt.swf, .pptx.swf) from this nginx location? Thanks.
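
    One possible approach (a sketch, untested against this setup; the upstream name is hypothetical): nginx evaluates regex locations in the order they appear and uses the first match, so a more specific block placed before the static one can intercept the fake requests and hand them to the application:

        # catch the fake .ppt.swf / .pptx.swf requests before the generic
        # static-files regex gets a chance to match
        location ~* \.(ppt|pptx)\.swf$ {
            proxy_pass http://backend;   # hypothetical app upstream
        }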

  • Configuring multiple domain in nginx in one file

    - by user22695
    I am still a newbie at configuring nginx. Is it possible to configure multiple domains in one file so that they share most of the same config? For example, I want to configure two domains backed by the same app, where one domain needs basic auth and the other doesn't. I would like to do something like this, but I'm not sure it works:

    sites-enabled/mysite:

        server {
            listen 127.0.0.1:80 default_server;
            server_name www.mysite.com;
            include sharedconf.conf;
        }

        server {
            listen 127.0.0.1:80;
            server_name www.mysite.co.jp;
            auth_basic "restricted";
            auth_basic_user_file /etc/nginx.htpasswd;
            include sharedconf.conf;
        }

    sharedconf.conf:

        location / {
            proxy_pass_header Server;
            # ... bunch of config lines ...
        }
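
    For what it's worth, this include-based layout is standard nginx behaviour rather than a trick: include is expanded at the point where it appears, so each server block gets its own copy of the shared location, and only the second adds basic auth. A sketch of what the second block is effectively equivalent to after expansion:

        server {
            listen 127.0.0.1:80;
            server_name www.mysite.co.jp;
            auth_basic "restricted";
            auth_basic_user_file /etc/nginx.htpasswd;
            location / {
                proxy_pass_header Server;
                # ... shared config lines ...
            }
        }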

  • rewrite all .html extension and remove index in Nginx

    - by Pardoner
    How would I remove all .html extensions, as well as any occurrence of index.html, from a URL string in nginx?

        http://www.mysite/index.html           to  http://www.mysite
        http://www.mysite/articles/index.html  to  http://www.mysite/articles
        http://www.mysite/contact.html         to  http://www.mysite/contact
        http://www.mysite/foo/bar/index.html   to  http://www.mysite/foo/bar

    EDIT: Here is my conf file:

        server {
            listen 80;
            server_name staging.mysite.com;
            root /var/www/staging.mysite.com;
            index index.html index.htm;
            access_log /var/log/nginx/staging.mysite.com.log spiegle;
            #error_page 404 /404.html;
            #error_page 500 503 /500.html;
            rewrite ^(.*/)index\.html$ $1;
            rewrite ^(/.+)\.html$ $1;
            rewrite ^(.*/)index\.html$ $scheme://$host$1 permanent;
            rewrite ^(/.+)\.html$ $scheme://$host$1 permanent;
            location / {
                rewrite ^/about-us /about permanent;
                rewrite ^/contact-us /contact permanent;
                try_files $uri.html $uri/ /index.html;
            }
        }
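
    A minimal sketch of one way to do this (untested; it assumes the extensionless pages exist as .html files on disk). The internal rewrites in the conf above duplicate the permanent ones and can fight each other, so this keeps only the external redirects plus try_files:

        server {
            listen 80;
            server_name staging.mysite.com;
            root /var/www/staging.mysite.com;

            # externally redirect /foo/index.html -> /foo/ and /bar.html -> /bar
            rewrite ^(.*/)index\.html$ $scheme://$host$1 permanent;
            rewrite ^(/.+)\.html$ $scheme://$host$1 permanent;

            location / {
                # serve extensionless URIs from the matching .html file
                try_files $uri.html $uri/ /index.html;
            }
        }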

  • nginx dynamic servername with regular expression doesn't work for co.uk

    - by redn0x
    I'm trying to set up an nginx server which dynamically loads content from a per-domain folder. To do this I'm using regular expressions in the server name, like so:

        server_name ((?<subdomain>.+)\.)?(?<domain>.+)\.(?<tld>.*);

    This creates three variables for nginx to use later on. For example, the URL test.foo.example.com evaluates to:

        $subdomain = test.foo
        $domain    = example
        $tld       = com

    The problem arises when the co.uk top-level domain is used. In that case the URL test.foo.example.co.uk evaluates to:

        $subdomain = test.foo.example
        $domain    = co
        $tld       = uk

    How can I edit the regular expression so that it will also work for co.uk?
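
    One hedged fix: treat the registrable domain as a single dot-free label and enumerate the two-part public suffixes actually served (a complete solution needs the public-suffix list, which a single regex can't express). Note also that a regex server_name in nginx must start with ~:

        server_name ~^((?<subdomain>.+)\.)?(?<domain>[^.]+)\.(?<tld>co\.uk|com|net|org)$;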

  • nginx connection time issue on some IPs

    - by sheldon
    I have recently moved my server from Apache to nginx and php-fpm, which has improved my website's speed. Everything seemed to work fine until I came across this issue: nginx keeps throwing connection-timeout errors, but only for certain IPs. One of those IPs is my office IP; we have a backend that is accessed from our office throughout the day. I use supervisord to launch 3 php-fpm processes with workers. This is my typical php-fpm config:

        pm.max_children = 50
        pm.start_servers = 20
        pm.min_spare_servers = 5
        pm.max_spare_servers = 35
        pm.max_requests = 300

    Since the server has 4 cores and 2 GB of RAM, this is my nginx setup:

        worker_processes 4;
        worker_rlimit_nofile 8192;

        events {
            worker_connections 1024;
            use epoll;
            multi_accept off;
        }

        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        keepalive_timeout 55;
        recursive_error_pages on;
        server_name_in_redirect off;
        server_tokens off;
        client_header_timeout 3m;
        client_body_timeout 3m;
        send_timeout 3m;
        connection_pool_size 256;
        client_header_buffer_size 8k;
        large_client_header_buffers 4 32k;
        request_pool_size 4k;
        output_buffers 4 32k;
        postpone_output 1460;
        proxy_buffer_size 32k;
        proxy_buffers 4 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
        fastcgi_connect_timeout 120;
        fastcgi_send_timeout 120;
        fastcgi_read_timeout 180;
        fastcgi_buffer_size 128k;
        fastcgi_buffers 4 256k;
        fastcgi_busy_buffers_size 256k;
        fastcgi_temp_file_write_size 256k;
        fastcgi_intercept_errors on;
        fastcgi_ignore_client_abort off;

    Where am I going wrong with this config? I have tried various settings, but the issue persists. These are the errors I keep getting:

        2011/11/13 18:20:33 [error] 21583#0: *311683 upstream timed out (110: Connection timed out) while reading response header from upstream, client: IP, server: tastykhana.in, request: "GET url HTTP/1.1", upstream: "fastcgi://unix:/var/run/php-fpm.socket:", host: "tastykhana.in", referrer: "url"
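
    A hedged guess rather than a diagnosis: "upstream timed out ... while reading response header" points at php-fpm, not nginx, and on a unix socket under load the listen backlog is a common culprit (connections queue until they time out). Something like this in the php-fpm pool config may be worth trying:

        listen = /var/run/php-fpm.socket
        listen.backlog = 1024    ; the default on older builds can be as low as 128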

  • Passenger 2.2.4, nginx 0.7.61 and SSL

    - by boompa
    Has anyone had any luck configuring Passenger and nginx with SSL? I've spent hours trying to get this configuration working as I'd like, using what few resources there are out there on the net, and I can't get any of the supposedly forwarded headers to show up in the Rails controller. For example, with a conf file of the following (and multiple variations thereof):

        server {
            listen 3000;
            server_name .example.com;
            root /Users/website/public;
            passenger_enabled on;
            rails_env development;
        }

        server {
            listen 3443;
            root /Users/website/public;
            rails_env development;
            passenger_enabled on;
            ssl on;
            #ssl_verify_client on;
            ssl_certificate /Users/website/ssl/server.crt;
            ssl_certificate_key /Users/website/ssl/server.key;
            #ssl_client_certificate /Users/website/ssl/CA.crt;
            ssl_session_timeout 5m;
            ssl_protocols SSLv3 TLSv1;
            ssl_ciphers ALL:!ADH:RC4+RSA:+HIGH:+MEDIUM:-LOW:-SSLv2:-EXP;
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X_FORWARDED_PROTO https;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            #proxy_set_header X-SSL-Subject $ssl_client_s_dn;
            #proxy_set_header X-SSL-Issuer $ssl_client_i_dn;
            proxy_redirect off;
            proxy_max_temp_file_size 0;
        }

    and Rails code in the controller like this:

        request.headers.each { |k, v| RAILS_DEFAULT_LOGGER.error "Header #{k} Val #{v}" }

    other headers appear, but not those set in nginx, e.g.:

        Header rack.multithread Val false
        Header REQUEST_URI Val /login/new
        Header REMOTE_PORT Val 64021
        Header rack.multiprocess Val true
        Header PASSENGER_USE_GLOBAL_QUEUE Val false
        Header PASSENGER_APP_TYPE Val rails
        Header SCGI Val 1
        Header SERVER_PORT Val 3443
        Header HTTP_ACCEPT_CHARSET Val ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Header rack.request.query_hash Val
        Header DOCUMENT_ROOT Val /Users/website/public

    I've even gone so far as to modify Passenger's abstract_request_handler's main_loop method, i.e.:

        headers, input = parse_request(client)
        if headers
          if headers[REQUEST_METHOD] == PING
            process_ping(headers, input, client)
          else
            headers.each { |h,v| log.unknown "abstract_request_handler: #{h} = #{v}" }
            process_request(headers, input, client)
          end
        end

    only to find that the supposedly added headers do not exist there either:

        abstract_request_handler: HTTP_KEEP_ALIVE = 300
        abstract_request_handler: HTTP_USER_AGENT = Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.1) Gecko/20090624 Firefox/3.5
        abstract_request_handler: PASSENGER_SPAWN_METHOD = smart-lv2
        abstract_request_handler: CONTENT_LENGTH = 0
        abstract_request_handler: HTTP_IF_NONE_MATCH = "b6e8b9afbc1110ee3bf0c87e119252ad"
        abstract_request_handler: HTTP_ACCEPT_LANGUAGE = en-us,en;q=0.5
        abstract_request_handler: SERVER_PROTOCOL = HTTP/1.1
        abstract_request_handler: HTTPS = on
        abstract_request_handler: REMOTE_ADDR = 127.0.0.1
        abstract_request_handler: SERVER_SOFTWARE = nginx/0.7.61
        abstract_request_handler: SERVER_ADDR = 127.0.0.1
        abstract_request_handler: SCRIPT_NAME =
        abstract_request_handler: PASSENGER_ENVIRONMENT = development
        abstract_request_handler: REMOTE_PORT = 64021
        abstract_request_handler: REQUEST_URI = /login/new
        abstract_request_handler: HTTP_ACCEPT_CHARSET = ISO-8859-1,utf-8;q=0.7,*;q=0.7
        abstract_request_handler: SERVER_PORT = 3443
        abstract_request_handler: SCGI = 1
        abstract_request_handler: PASSENGER_APP_TYPE = rails
        abstract_request_handler: PASSENGER_USE_GLOBAL_QUEUE = false

    I'm tired of banging my head against the wall, so I'd truly appreciate any help I can get!
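
    A hedged suggestion (not tested against this exact stack): proxy_set_header only affects requests forwarded with proxy_pass; Passenger-served requests never pass through the proxy module, which would explain why the headers never arrive. Passenger's nginx module of this era has its own directive for injecting values into the CGI/Rack environment:

        server {
            listen 3443;
            root /Users/website/public;
            passenger_enabled on;
            ssl on;
            # inject headers into the environment Passenger hands to Rails
            passenger_set_cgi_param HTTP_X_FORWARDED_PROTO https;
            passenger_set_cgi_param HTTP_X_REAL_IP $remote_addr;
        }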

  • chunked response in nginx not working

    - by Dean Hiller
    I ran into this page, which describes my problem EXACTLY: http://nginx.org/en/docs/faq/chunked_encoding_from_backend.html. But browsers use HTTP/1.1 these days, so I really don't understand it. Our backend is the Play framework, and I don't mind fixing it, but I don't really understand what is not working, ESPECIALLY since Firefox, Safari, and Chrome ALL download the response just fine. ONLY when we stick nginx in the middle do things break, and we end up with extra data in our JSON responses. Any idea how to fix this? The doc above just seems wrong now that we are on later versions of HTTP, and the browsers seem to work just fine. Thanks, Dean
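
    A sketch of the usual explanation and fix (the upstream name is hypothetical): the browsers speak HTTP/1.1 to nginx, but nginx speaks HTTP/1.0 to its upstreams by default, and HTTP/1.0 has no chunked encoding, so the chunk framing from the backend leaks into the body as "extra data". Assuming nginx 1.1.4 or newer:

        location / {
            proxy_http_version 1.1;          # speak HTTP/1.1 to the backend
            proxy_set_header Connection "";  # allow keepalive to the upstream
            proxy_pass http://play_backend;  # hypothetical upstream name
        }

    On older nginx the alternative is to stop the backend from sending Transfer-Encoding: chunked to an HTTP/1.0 client.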

  • Serving static files fails - nginx

    - by Sergei
    Hi, I've been looking and trying around all night, but without success. I configured nginx to serve my static files and proxy all the other traffic:

        server {
            listen 80;
            server_name mydomain.com;

            access_log /home/boudewijn/www/bbt/brouwers/logs/access.log;
            error_log /home/boudewijn/www/bbt/brouwers/logs/error.log;

            location / {
                proxy_pass http://127.0.0.1:8080;
                include /etc/nginx/proxy.conf;
            }

            location /media/ {
                root /home/boudewijn/www/bbt/brouwers/;
            }
        }

    The proxy passing is no problem, but when I go to mydomain.com/media/ or try to access any test file there, I have no success. I paid attention to the difference between root and alias, my media folder exists, and I paid attention to the trailing slashes, but I still get a 404 when trying to access my static media files. Any help?
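
    A sketch of what to verify (the file name is illustrative, not a definitive fix). With root, nginx appends the full request URI to the path, so both the mapping and the filesystem permissions have to line up, and the error_log above will state the exact path nginx tried:

        # /media/logo.png must resolve to:
        #   /home/boudewijn/www/bbt/brouwers/media/logo.png
        location /media/ {
            root /home/boudewijn/www/bbt/brouwers;  # trailing slash optional here
        }
        # the nginx worker user also needs read access to the file and
        # execute (search) permission on every directory along that path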

  • Nginx request forking

    - by Adam
    Hi, I'm wondering if nginx can "fork" a request. Imagine this config:

        upstream backend {
            server localhost:8080;
            # ... more servers here
        }

        server {
            location /myloc {
                FORK-REQUEST http://my-other-url:3135/something;
                proxy_pass http://backend;
            }
        }

    I would like nginx to send a copy of the request to the URL specified by the imaginary FORK-REQUEST directive, and after that to load-balance the request across the backend servers and return that response to the client. Since I don't need the response from FORK-REQUEST, it would be best if that request were asynchronous, so normal processing doesn't have to wait. Is a scenario like this possible?
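
    One hedged possibility: the post_action directive (undocumented and best-effort, so treat this as a sketch) fires a subrequest after the main response has been sent, which approximates an asynchronous copy:

        server {
            location /myloc {
                proxy_pass http://backend;
                post_action @fork;           # runs after the response completes
            }
            location @fork {
                proxy_pass http://my-other-url:3135/something;
            }
        }

    Much later nginx versions (1.13.4+) added ngx_http_mirror_module for exactly this use case.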

  • Trouble using gitweb with nginx

    - by Rayne
    I have a git repository in a directory inside /home/raynes/pubgit/, and I'm trying to use gitweb to provide a web interface to it. I use nginx as my web server for everything else, so I don't really want to run another server just for this. I'm mostly following this guide, which is the only one I can find via Google and is fairly recent: http://michalbugno.pl/en/blog/gitweb-nginx. fcgiwrap apparently isn't in Lucid Lynx's repositories, so I installed it manually. I spawn instances via spawn-fcgi:

        spawn-fcgi -f /usr/local/sbin/fcgiwrap -a 127.0.0.1 -p 9001

    That's all good. My /etc/gitweb.conf is as follows:

        # path to git projects (<project>.git)
        #$projectroot = "/home/raynes/pubgit";
        $my_uri = "http://mc.raynes.me";
        $home_link = "http://mc.raynes.me/";
        # directory to use for temp files
        $git_temp = "/tmp";
        # target of the home link on top of all pages
        #$home_link = $my_uri || "/";
        # html text to include at home page
        $home_text = "indextext.html";
        # file with project list; by default, simply scan the projectroot dir.
        $projects_list = $projectroot;
        # stylesheet to use
        $stylesheet = "/gitweb/gitweb.css";
        # logo to use
        $logo = "/gitweb/git-logo.png";
        # the 'favicon'
        $favicon = "/gitweb/git-favicon.png";

    And my nginx server configuration is this:

        server {
            listen 80;
            server_name mc.raynes.me;

            location / {
                root /usr/share/gitweb;
                if (!-f $request_filename) {
                    fastcgi_pass 127.0.0.1:9001;
                }
                fastcgi_index index.cgi;
                fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
                include fastcgi_params;
            }
        }

    The only difference from the guide is that I've set fastcgi_pass to 127.0.0.1:9001. When I go to http://mc.raynes.me I'm greeted with a page that simply says "403" and nothing else. I haven't the slightest clue what I did wrong. Any ideas?
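
    A hedged guess at the 403: SCRIPT_FILENAME is built from /scripts, a placeholder path that doesn't exist on this machine, so fcgiwrap has nothing executable to run. Pointing it at the real gitweb CGI under the document root (layout assumed from the Debian/Ubuntu gitweb package) may be all that's needed:

        location / {
            root /usr/share/gitweb;
            fastcgi_index index.cgi;
            # resolve the script relative to the actual docroot
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_param GITWEB_CONFIG /etc/gitweb.conf;
            include fastcgi_params;
            if (!-f $request_filename) {
                fastcgi_pass 127.0.0.1:9001;
            }
        }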

  • nginx redirect TLD to TLD with virtual folder (example.com => example.com/test)

    - by Amund
    I'm running nginx, and in the config file I need the domain example.com to always redirect to example.com/test. I tried various methods of achieving this, but I always got a redirect error. What is the correct way to do this? nginx.conf snippet:

        server {
            server_name example.com www.example.com;
            location / {
                rewrite ^.+ /test permanent;
            }
        }

        server {
            listen 80;
            server_name www.example.com example.com;
            location / {
                root /var/www/apps/example/current/public;
                passenger_enabled on;
                rails_env production;
            }
        }

    Thanks!
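
    A minimal sketch of one fix: the rewrite above matches every URI, including /test itself, so the redirect chases its own tail, and the duplicate server block for the same names only adds confusion. Redirecting only the bare root from a single server block avoids the loop:

        server {
            listen 80;
            server_name example.com www.example.com;
            root /var/www/apps/example/current/public;
            passenger_enabled on;
            rails_env production;

            # redirect only "/" so /test itself is served normally
            location = / {
                rewrite ^ /test permanent;
            }
        }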

  • HAProxy NGInx SSL setup

    - by Niclas
    I've been looking around at different setups for a server cluster supporting SSL, and I would like to check my idea with you.

    Requirements:

        - All servers in the cluster should be under the same full domain name (http and https).
        - Routing to subsystems is done on URI matching in HAProxy.
        - All URIs support SSL.

    Wish: centralized routing rules.

            ---<----http-----<--
            |                  |
        Inet -->HA--+---https--->NGInx_SSL_1..N
                    |
                    +---http---> Apache_1..M
                    |
                    +---http---> NodeJS

    Idea: configure HAProxy to route all SSL traffic (mode=tcp, algorithm=Source) to an NGInx cluster that terminates the https traffic into http, then re-pass that http traffic from NGInx back to HAProxy, which performs normal load balancing based on its config. My question is simply: is this the best way to configure things given the requirements above?
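
    For reference, a sketch of the NGInx SSL-termination leg under this design (the certificate paths and the loopback port HAProxy listens on are assumptions):

        server {
            listen 443 ssl;
            ssl_certificate     /etc/nginx/ssl/cluster.pem;   # assumed path
            ssl_certificate_key /etc/nginx/ssl/cluster.key;   # assumed path

            location / {
                # tell the backends the original scheme was https
                proxy_set_header X-Forwarded-Proto https;
                proxy_set_header Host $host;
                # hand the decrypted traffic back to HAProxy for URI routing
                proxy_pass http://127.0.0.1:8080;             # assumed HA bind
            }
        }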

  • nginx configuration for URL URI paths

    - by hachiari
    I want to switch my web server from Apache to nginx, but I'm having difficulty converting my current htaccess rules to nginx configuration. The conditions I need (see the sketch after this list):

        1. Everything should work as it does under Apache: static files such as js, css, jpg, png, etc. are served directly.
        2. I'm using the CodeIgniter PHP framework, which routes requests through its URI system.

    My htaccess configuration for the CodeIgniter URIs is:

        RewriteEngine On
        RewriteBase /

        RewriteCond %{REQUEST_URI} ^system.*
        RewriteRule ^(.*)$ /index.php/$1 [L]

        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ index.php/$1 [L]

        RewriteCond %{HTTP_HOST} ^www.domain.tld [NC]
        RewriteRule ^(.*)$ http://domain.tld/$1 [L,R=301]

    I am also using Minify to compress my css and js files, so I call them like this:

        http://domain.tld/?=css
        http://domain.tld/?=js

    I tried some configurations from the net, but I could only solve problem no. 2. Thank you.
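
    A rough nginx translation, as a sketch under assumptions (php-fpm on 127.0.0.1:9000 and a docroot of /var/www/domain.tld are both illustrative). The Minify URLs are plain query strings on /, so they need no dedicated rule; they ride through index.php like everything else:

        server {
            listen 80;
            server_name www.domain.tld;
            rewrite ^ http://domain.tld$request_uri? permanent;
        }

        server {
            listen 80;
            server_name domain.tld;
            root /var/www/domain.tld;          # assumed docroot
            index index.php;

            location / {
                # route anything that isn't a real file or directory through CodeIgniter
                if (!-e $request_filename) {
                    rewrite ^(.*)$ /index.php/$1 last;
                }
            }

            location ~ ^/index\.php(/|$) {
                fastcgi_pass 127.0.0.1:9000;   # assumed php-fpm address
                fastcgi_split_path_info ^(/index\.php)(.*)$;
                fastcgi_param PATH_INFO $fastcgi_path_info;
                fastcgi_param SCRIPT_FILENAME $document_root/index.php;
                include fastcgi_params;
            }
        }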

  • Nginx Static Content Server Maxing Out?

    - by Harry
    I use nginx to serve the static content for a decently busy website of mine. I have logging disabled and 4 worker processes enabled with 5,000 connections per worker (which should yield a maximum of 20,000 connections). The server is only operating at about 10% CPU usage and 50% RAM, but it's very laggy, and sometimes nginx is so slow to respond that requests time out. For a small number of connections it's fine, but once any load starts occurring (~2,500 connections), it backs up and bogs down. Are there any other bottlenecks or limits that I might be hitting? This is a FreeBSD server, and all the static files are located locally (not on NFS). The NIC is an unmetered gigabit, and it's only using around 75 megabit. Any insight would be appreciated. Thanks.
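
    A few FreeBSD-side limits worth ruling out (hedged guesses, not a diagnosis): the kernel listen queue and per-process descriptor limits often saturate long before CPU does, and the symptoms look exactly like this. The values below are illustrative starting points:

        events {
            use kqueue;               # FreeBSD's native event method
            worker_connections 5000;
        }
        # in the server block, raise the accept backlog and pair it with
        # the matching sysctl: kern.ipc.somaxconn=4096
        # listen 80 backlog=4096;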

  • Nginx Rewrite to Previous Directory

    - by ThinkBohemian
    I am trying to move my blog from blog.example.com to example.com/blog. To do this I would rather not move anything on disk, so instead I changed my nginx configuration file to the following:

        location /blog {
            if (!-e $request_filename) {
                rewrite ^.*$ /index.php last;
            }
            root /home/demo/public_html/blog.example.com/current/public/;
            index index.php index.html index.html;
            passenger_enabled off;
            index index.html index.htm index.php;
            try_files $uri $uri/ @blog;
        }

    This works, except that when I visit example.com/blog, nginx looks for

        /home/demo/public_html/blog.example.com/current/public/blog/index.php

    instead of

        /home/demo/public_html/blog.example.com/current/public/index.php

    Is there a rewrite rule that makes the server drop the /blog/ directory from the path automatically? Something like:

        location /blog {
            rewrite \\blog\D \;
        }
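
    A sketch of the usual fix: alias substitutes the matched location prefix instead of appending it the way root does, which is exactly the "drop /blog from the path" behaviour being asked for:

        location /blog/ {
            alias /home/demo/public_html/blog.example.com/current/public/;
            index index.php index.html index.htm;
            # note: try_files combined with alias has historically been
            # quirky; test this pairing on the nginx version in use
            try_files $uri $uri/ /blog/index.php;
        }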

  • Nginx rewrite with Simple Machines Forum

    - by Kevin Worthington
    I am running nginx 1.5.6, and I use the Simple Machines Forum software. Most rewrite rules seem to work properly, with the exception of the RSS feeds. In my nginx configuration, I have the following line, which is supposed to handle URLs that contain ".xml":

        rewrite ^/forum/(\.xml|xmlhttp)/?$ "/forum/index.php?pretty;action=$1" last;

    The above rule produces the following URL for the main forum, which returns a 403 error:

        http://www.mydomain.com/forum/.xml/?type=rss

    I would like the rewrite rule to produce this type of URL, which returns code 200 (a real page):

        http://www.mydomain.com/forum/?type=rss;action=.xml

    Here is the entire block pertaining to the forum rewrites: http://pastebin.com/raw.php?i=tZkAibW3. I would really appreciate some help creating a rewrite rule to do that. Thanks.
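
    A hedged guess, untested: a hidden-file guard such as location ~ /\. (a very common nginx pattern) returns 403 for any URI containing "/.xml" before the feed is ever served. Matching the feed URL explicitly at server level and carrying the original query string across might look like this:

        # $args still holds "type=rss"; the trailing "?" stops nginx from
        # appending the original args a second time
        rewrite "^/forum/\.xml/?$" "/forum/index.php?$args;action=.xml?" last;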

  • Nginx and Tomcat 6 proxy pass

    - by Patrick Schneider
    I've got problems configuring nginx as a reverse proxy for a Tomcat application. I want requests for www.example.com/blog to be passed to the Tomcat application. nginx site config:

        server {
            listen 80;
            server_name example.com;

            location /blog {
                proxy_pass http://localhost:8080/blog;
                proxy_redirect off;
            }
        }

    Now when I open http://example.com/blog in my browser, it redirects to localhost/blog, which does not work. Running

        curl http://localhost:8080/blog -H "host: example.com/blog" -v

    shows a 302 to localhost/blog. Any ideas?
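
    A sketch of two common fixes (either may be enough on its own): pass the real Host header so Tomcat builds its redirects against example.com, or let nginx rewrite the Location header on the way back rather than disabling proxy_redirect:

        location /blog {
            proxy_pass http://localhost:8080/blog;
            proxy_set_header Host $host;
            # map Tomcat's self-referencing redirects back to the public name
            proxy_redirect http://localhost:8080/ http://$host/;
        }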

  • tracd multiple projects+nginx reverse proxy

    - by Xeross
    I am trying to set up nginx as a reverse proxy to tracd, but I only want to run one tracd. First, here's my config for this domain:

        server {
            listen 80;
            server_name bugs.XXXXXXXX.com;

            access_log /var/log/nginx/XXXXXXXX-bugtracker.access.log proxy;

            location / {
                rewrite ^/bugtracker/(.*)$ /$1;
                rewrite ^/bugtracker$ /;
                proxy_pass http://127.0.0.1:81/bugtracker/;
                proxy_redirect default;
                proxy_set_header Host $host;
            }

            location ~ /\.ht {
                deny all;
            }
        }

    As you can see, there are rewrite rules, because for some reason all the URLs that tracd emits look like /bugtracker/something. That is just tracd sending URLs the way it normally does; however, Trac is served at bugs.XXXXXXXX.com/ and not at bugs.XXXXXXXX.com/bugtracker. How can I make tracd/Trac output the (in this case) correct URLs?
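
    A hedged alternative: rather than stripping the prefix in nginx, tell Trac its public base URL so it generates correct links itself. In trac.ini (option names taken from standard Trac configuration; verify against the Trac version in use):

        [trac]
        base_url = http://bugs.XXXXXXXX.com/
        use_base_url_for_redirect = true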

  • Configuring vsftpd with nginx on ubuntu

    - by arby
    I have vsftpd installed on Ubuntu 12.04 LTS along with nginx, PHP, and SQL on an Amazon EC2 instance. The web server is good to go, but I'm having trouble connecting to the FTP server. I'm not quite sure how to set the privileges or what configuration options I might be missing. By default, the web root is at /usr/share/nginx/www and is owned by root:root; the web server runs as user www-data in the group www-data. I've opened port 21 and set the passive ports in the EC2 backend and the ufw firewall. In vsftpd.conf I have:

        ...
        anonymous_enable=NO
        local_enable=YES
        local_umask=0027
        chroot_local_user=YES
        pasv_enable=YES
        pasv_max_port=12100
        pasv_min_port=12000
        port_enable=YES
        ...

    Now, I'm unsure how to create an FTP user that, when I log in, shows my web directory with write access. I've tried a few different approaches, but I keep running into errors (either no connection, no write access, or very slow timeouts).
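
    A hedged sketch of the pieces that are commonly missing in setups like this: write access needs write_enable, which is absent from the config shown, and with chroot_local_user enabled, newer vsftpd builds refuse a writable chroot root unless told otherwise:

        # vsftpd.conf additions (verify against the vsftpd version in use)
        write_enable=YES
        # only on vsftpd builds that support it; older 2.x builds lack this option
        allow_writeable_chroot=YES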

  • Nginx Cache-Control

    - by optixx
    I am serving my static content with nginx:

        location /static {
            alias /opt/static/blog/;
            access_log off;
            etags on;
            etag_hash on;
            etag_hash_method md5;
            expires 1d;
            add_header Pragma "public";
            add_header Cache-Control "public, must-revalidate, proxy-revalidate";
        }

    The resulting headers look like this:

        Cache-Control: public, must-revalidate, proxy-revalidate
        Cache-Control: max-age=86400
        Connection: close
        Content-Encoding: gzip
        Content-Type: application/x-javascript; charset=utf-8
        Date: Tue, 11 Sep 2012 08:39:05 GMT
        Etag: e2266fb151337fc1996218fafcf3bcee
        Expires: Wed, 12 Sep 2012 08:39:05 GMT
        Last-Modified: Tue, 11 Sep 2012 06:22:41 GMT
        Pragma: public
        Server: nginx/1.2.2
        Transfer-Encoding: chunked
        Vary: Accept-Encoding

    Why is nginx sending two Cache-Control entries, and could this be a problem for clients?
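
    A sketch of the likely cause and one fix: expires 1d emits its own Cache-Control: max-age=86400 header, and add_header appends a second, independent Cache-Control line, hence the duplicate. Duplicate Cache-Control headers are legal (clients should combine them), but folding everything into one header sidesteps any client that doesn't:

        location /static {
            alias /opt/static/blog/;
            access_log off;
            # replaces both "expires 1d" and the separate Cache-Control add_header
            add_header Cache-Control "public, max-age=86400, must-revalidate, proxy-revalidate";
        }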

  • Adding CNAME entry to nginx for cdn rewrite

    - by Ayaz Malik
    I am using Apache + nginx (for serving static content), and I just bought a CDN. I have added a CNAME record for cdn.example.com pointing to the CDN's original URL, xxx.netdna-cdn.com/. But, probably because of my nginx vhost file, when I visit cdn.example.com it serves the site from the first server entry in my vhost file. I have multiple sites on this server. I have added the CNAME from the cPanel DNS editor as well. No luck, so I think I need to add something to vhost.conf.
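
    A hedged explanation: when a pull CDN (or a visitor) requests cdn.example.com, the request arrives with a Host header nginx has no server block for, so it falls through to the first (default) vhost. A dedicated server block for the CDN hostname should catch it (the docroot below is an assumption):

        server {
            listen 80;
            server_name cdn.example.com;          # or the pull-zone origin host
            root /var/www/example.com/static;     # assumed static docroot
        }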

  • Request Coalescing in Nginx

    - by Marcel Jackwerth
    I have an image-resize server sitting behind an nginx server. On a cold cache, two clients requesting the same file can trigger two resize jobs:

        client-01.net  GET /resize.do/avatar-1234567890/300x200.png
        client-02.net  GET /resize.do/avatar-1234567890/300x200.png

    It would be great if only one of the requests could go through to the backend in this situation, while the other client is put on hold. Varnish seems to have such a feature, called request coalescing, but that appears to be a Varnish-specific term. Is there something similar for nginx?
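
    A sketch, assuming nginx 1.1.12 or newer: proxy_cache_lock allows only one request per cache element through to the upstream; concurrent requests for the same element wait for the cached response (up to the lock timeout). The cache path and upstream name here are illustrative:

        proxy_cache_path /var/cache/nginx/resize keys_zone=resize:10m;

        server {
            location /resize.do/ {
                proxy_cache resize;
                proxy_cache_lock on;             # coalesce concurrent misses
                proxy_cache_lock_timeout 10s;
                proxy_cache_valid 200 10m;
                proxy_pass http://resize_backend;  # hypothetical upstream
            }
        }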

  • Nginx + Passenger running a RoR app is returning 401 when 302 is expected

    - by DBruns
    I've got a RoR app running on Passenger on top of nginx. I'm using Devise as my authentication method and have a link that gets sent in an email to users that requires authentication to view. If a user clicks the link from Outlook, and IE is the default browser, IE makes an HTTP request using the following headers:

        GET http://www.company.com/custom_layouts/108 HTTP/1.1
        Accept: */*
        Accept-Language: en-us
        User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; InfoPath.2; .NET4.0C; .NET4.0E)
        Accept-Encoding: gzip, deflate
        Connection: Keep-Alive
        Host: www.company.com

    returning:

        HTTP/1.1 401 Unauthorized
        Content-Type: /; charset=utf-8
        Transfer-Encoding: chunked
        Connection: keep-alive
        Status: 401
        X-Powered-By: Phusion Passenger (mod_rails/mod_rack) 2.2.15
        WWW-Authenticate: Basic realm="Application"
        Cache-Control: no-cache
        X-UA-Compatible: IE=Edge,chrome=1
        Set-Cookie: _vxwer_session=[sessionstr]; path=/; HttpOnly
        X-Runtime: 0.011918
        Server: nginx/0.7.67 + Phusion Passenger 2.2.15 (mod_rails/mod_rack)

        31
        You need to sign in or sign up before continuing.
        0

    When the exact same URL is typed into the address bar, it does this:

        GET http://www.company.com/custom_layouts/108 HTTP/1.1
        Accept: image/jpeg, application/x-ms-application, image/gif, application/xaml+xml, image/pjpeg, application/x-ms-xbap, application/vnd.ms-excel, application/vnd.ms-powerpoint, application/msword, */*
        Accept-Language: en-US
        User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; InfoPath.2; .NET4.0C; .NET4.0E)
        Accept-Encoding: gzip, deflate
        Connection: Keep-Alive
        Host: www.company.com

    returning:

        HTTP/1.1 302 Found
        Content-Type: text/html; charset=utf-8
        Transfer-Encoding: chunked
        Connection: keep-alive
        Status: 302
        X-Powered-By: Phusion Passenger (mod_rails/mod_rack) 2.2.15
        Location: http://www.company.com/users/sign_in
        Cache-Control: no-cache
        X-UA-Compatible: IE=Edge,chrome=1
        Set-Cookie: _xswer_session=[session_info_here]; path=/; HttpOnly
        X-Runtime: 0.010798
        Server: nginx/0.7.67 + Phusion Passenger 2.2.15 (mod_rails/mod_rack)

        6f
        <html><body>You are being <a href="http://www.company.com/users/sign_in">redirected</a>.</body></html>
        0

    I expect them to return the same thing regardless.
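
    A hedged pointer, from the Devise side rather than nginx: with a bare Accept: */*, Rails does not treat the request as a navigational HTML request, so Devise's failure app answers 401 with HTTP Basic instead of redirecting to the sign-in page. Registering */* as a navigational format is the commonly cited workaround for this era of Devise (verify against the Devise version in use):

        # config/initializers/devise.rb -- a sketch, not a confirmed fix
        Devise.setup do |config|
          config.navigational_formats = [:html, :"*/*", "*/*"]
        end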

  • SSL client auth in nginx with multiple server section

    - by Bastien974
    I want to implement ssl_verify_client in nginx. This works perfectly when I have only one server section listening on 443. In my case I have multiple, all listening on 443 but with different server_name values. For one particular server (proxy.mydomain.com) I'm adding the SSL client verification, but when I test the connectivity with

        openssl s_client -connect proxy.mydomain.com:443 -cert xxx.crt -key xxx.key

    and then do

        GET / HTTP/1.1
        host: proxy.mydomain.com

    it's not working:

        400 No required SSL certificate was sent

    I think nginx is not receiving the proper server name and is directing the request to the first server listening on 443. When I tried listening on another port instead, it worked right away. What's the issue, and how can I fix it?
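
    A hedged explanation: the client-certificate requirement is negotiated during the TLS handshake, before any HTTP Host header is seen, so nginx can only pick the right 443 vhost via SNI. openssl s_client does not send SNI unless asked to:

        # -servername adds the SNI extension so nginx selects the
        # proxy.mydomain.com server block during the handshake
        openssl s_client -connect proxy.mydomain.com:443 \
            -servername proxy.mydomain.com -cert xxx.crt -key xxx.key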
