Search Results

Search found 10640 results on 426 pages for 'apache2 module'.

  • Setting up httpd-vhosts.conf for multiple virtual hosts

    - by Chris Sobolewski
    I have a simple test setup using XAMPP at home, and I am getting really weird behavior when I attempt to set up multiple virtual hosts on this box. Here is my vhosts file:

        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName foo
            DocumentRoot "D:\wamp\xampp\htdocs\foo"
            ErrorLog logs/foo-error_log
            CustomLog logs/foo-access_log common
            <Directory "D:\wamp\xampp\htdocs\foo">
                Options Indexes FollowSymLinks Includes ExecCGI
                AllowOverride All
                Order Allow,Deny
                Allow From All
            </Directory>
        </VirtualHost>

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName bar
            DocumentRoot "D:\wamp\xampp\htdocs\bar"
            ErrorLog logs/bar-error_log
            CustomLog logs/bar-access_log common
            <Directory "D:\wamp\xampp\htdocs\bar">
                Options Indexes FollowSymLinks Includes ExecCGI
                AllowOverride All
                Order Allow,Deny
                Allow From All
            </Directory>
        </VirtualHost>

    When I visit the first site, it works as expected. When I visit the second site, I get a weird hybrid mishmash of both sites. It's the weirdest thing.
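    A hedged first check, assuming XAMPP on Windows: if a request's Host header matches neither ServerName, Apache serves it from the first vhost listed, and a browser that has cached assets from the other site can then render exactly this kind of hybrid. Make sure both names resolve locally, then hard-refresh:

        # C:\Windows\System32\drivers\etc\hosts (hypothetical entries)
        127.0.0.1   foo
        127.0.0.1   bar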

  • 500 Internal Server Error when setting up Apache on localhost

    - by Martin Hoe
    I downloaded and installed XAMPP, and to keep my projects nicely separated I want to create a VirtualHost for each one based on its future domain name. For example, for my first project (we'll say it's project.com) I've put this in my Apache configuration:

        NameVirtualHost 127.0.0.1

        <VirtualHost 127.0.0.1:80>
            DocumentRoot C:/xampp/htdocs/
            ServerName localhost
            ServerAdmin admin@localhost
        </VirtualHost>

        <VirtualHost 127.0.0.1:80>
            DocumentRoot C:/xampp/htdocs/sub/
            ServerName sub.project.com
            ServerAdmin [email protected]
        </VirtualHost>

        <VirtualHost 127.0.0.1:80>
            DocumentRoot C:/xampp/htdocs/project/
            ServerName project.com
            ServerAdmin [email protected]
        </VirtualHost>

    And this in my hosts file:

        # development
        127.0.0.1 localhost
        127.0.0.1 project.com
        127.0.0.1 sub.project.com

    When I go to project.com in my browser, the project loads up successfully. Same if I go to sub.project.com. But if I navigate to http://project.com/register (one of my site pages) I get this error:

        Internal Server Error
        The server encountered an internal error or misconfiguration and
        was unable to complete your request.

    The error log shows this:

        [Sun May 20 02:05:54 2012] [error] [client 127.0.0.1] Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace., referer: http://project.com/
        [Sun May 20 02:05:54 2012] [error] [client 127.0.0.1] Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace., referer: http://project.com/

    Any idea what config items I got wrong, or how to get this working? It happens on any page that's not in the root directory of project.com. Thanks.
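    The "limit of 10 internal redirects" error almost always means a rewrite rule keeps matching its own output. A hedged sketch, since the project's .htaccess isn't shown: the usual fix for a front controller is to stop rewriting once the request already points at a real file or directory:

        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^ index.php [L]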

  • IPv6 working fine, IPv4 throws OpenSSL error

    - by jippie
    I am building a webserver ( http://blog.linformatronics.nl/ ), which functions just fine over both IPv4 and IPv6 when using a non-SSL connection. However, when I connect to it through https, IPv6 works as expected but an IPv4 connection throws a client side error. Server side logs are empty for the IPv4/https connection. Summarized in a table:

             | http  | https
        -----+-------+-------------------------------------------------------
        IPv4 | works | OpenSSL error, failed. No server side logging.
        -----+-------+-------------------------------------------------------
        IPv6 | works | self signed certificate warning, but works as expected

    Apparently the SSL tunnel isn't even set up, which accounts for the Apache logs being empty. But why does it work fine for IPv6 and fail for IPv4? My question is: why is this OpenSSL error being thrown, and how can I solve it? Below is some extra information about the setup.

    IPv6 https. Command used to reproduce the IPv6/https behaviour:

        $ wget --no-check-certificate -O /dev/null -6 https://blog.linformatronics.nl
        --2012-11-03 15:46:48--  https://blog.linformatronics.nl/
        Resolving blog.linformatronics.nl (blog.linformatronics.nl)... 2001:980:1b7f:1:a00:27ff:fea6:a2e7
        Connecting to blog.linformatronics.nl (blog.linformatronics.nl)|2001:980:1b7f:1:a00:27ff:fea6:a2e7|:443... connected.
        WARNING: cannot verify blog.linformatronics.nl's certificate, issued by `/CN=localhost':
          Self-signed certificate encountered.
        WARNING: certificate common name `localhost' doesn't match requested host name `blog.linformatronics.nl'.
        HTTP request sent, awaiting response... 200 OK
        Length: 4556 (4.4K) [text/html]
        Saving to: `/dev/null'
        100%[=======================================================================>] 4,556 --.-K/s in 0s
        2012-11-03 15:46:49 (62.5 MB/s) - `/dev/null' saved [4556/4556]

    IPv4 https. Command used to reproduce the IPv4/https behaviour:

        $ wget --no-check-certificate -O /dev/null -4 https://blog.linformatronics.nl
        --2012-11-03 15:47:28--  https://blog.linformatronics.nl/
        Resolving blog.linformatronics.nl (blog.linformatronics.nl)... 82.95.251.247
        Connecting to blog.linformatronics.nl (blog.linformatronics.nl)|82.95.251.247|:443... connected.
        OpenSSL: error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol
        Unable to establish SSL connection.

    Notes: I am on Ubuntu Server 12.04.1 LTS.
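    A hedged reading: "SSL23_GET_SERVER_HELLO:unknown protocol" means the client received a plain-text (non-TLS) response on port 443, which fits the empty server logs. Since IPv6 reaches the host directly while IPv4 presumably passes through NAT, a router rule forwarding external port 443 to the wrong internal port (e.g. 80) would explain the split. Inspecting what actually answers on 443 over IPv4 should confirm it:

        $ openssl s_client -connect 82.95.251.247:443
        # a certificate chain here means TLS is fine; an HTML error page or
        # raw HTTP headers means 443 is being forwarded to a non-SSL port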

  • My website directories download instead of opening in the browser

    - by numerical25
    I added some screencasts to show what I am having issues with: http://screencast.com/t/212t3ANINqk http://screencast.com/t/bR44U1wkvNZl http://screencast.com/t/iDS7APYYsa The browser downloads my subdirectories instead of opening them up and displaying the index file of that page. Here is the situation: I am trying to get my web service up using MacPorts and I am just trying to configure all the files. I am using PHP, Apache, etc. The localhost root works, but the server cannot find anything beyond that.

    Edit: I've tried adding the following to httpd.conf within the <IfModule mime_module> block, but no luck:

        AddType application/x-httpd-php .php
        AddType application/x-httpd-php .phtml
        AddType application/x-httpd-php .php3
        AddType application/x-httpd-php .php4
        AddType application/x-httpd-php .html
        AddType application/x-httpd-php-source .phps
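    A hedged explanation that fits the symptom: AddType labels those files application/x-httpd-php, but if the PHP module itself isn't loaded, that label goes to the browser as the literal Content-Type, and browsers download anything with an unknown type. Loading the module makes the label mean "run PHP" instead (module path and index list are assumptions for a MacPorts layout):

        # /opt/local/apache2/conf/httpd.conf (paths assumed)
        LoadModule php5_module modules/mod_php5.so
        DirectoryIndex index.php index.html

    Restarting Apache after the change is required for either directive to take effect.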

  • Installing the http_ssl_module on a running NGINX server

    - by Rob
    Hi, I'm new to NGINX. We inherited a project that runs Django/FCGI/NGINX on a hosted RHEL box. A requirement has come in that the site now needs to have SSL enabled. The client was pretty sure the person who built the site had made it so they could use SSL. I backed up the conf file, added the server block for the SSL instance and tried to reload. The reload failed because it didn't recognize the ssl in this line:

        ssl on;

    I'm not an NGINX expert, but the David Caruso in me tells me that the server (sunglasses on) is not secure. I know that you need to configure NGINX at install time with this module. If this didn't happen, how hard/risky is it to reconfigure a running NGINX box with this module, given that we didn't configure it in the first place?
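    A quick way to settle whether the binary was built with SSL, plus a sketch of the rebuild if it wasn't (the other configure flags should be whatever the original build used, which nginx -V also prints):

        $ nginx -V 2>&1 | grep -o with-http_ssl_module
        # no output = no SSL support; rebuild from source with:
        $ ./configure --with-http_ssl_module [...original flags from nginx -V...]
        $ make && sudo make install    # swap binaries during a quiet window

    The rebuild itself is low-risk if done beside the running instance: the old binary keeps serving until the new one is swapped in and nginx is restarted.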

  • Can't install Apache server: configure refuses to find pcre-config

    - by Greg Dougherty
    I'm attempting to install Apache httpd-2.4.3. Running

        $ ./configure --with-included-apr --with-pcre=/home/gregd/httpd-2.4.3/pcre --with-mod_alias --with-mod_include --with-mod_mime --with-mod_rewrite --with-mod_speling --with-mod_ss

    I get the following error:

        configure: error: Did not find pcre-config script at /home/gregd/httpd-2.4.3/pcre

    But the file pcre-config most certainly DOES exist in /home/gregd/httpd-2.4.3/pcre. So does configure. I'm installing on Ubuntu. Any help would be greatly appreciated.
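    A hedged guess: httpd's --with-pcre expects an installed PCRE prefix (or the full path to a built, executable pcre-config script), not an unpacked source tree where pcre-config hasn't been generated yet. Building and installing PCRE first, then pointing configure at it, is the usual route (install prefix is an assumption):

        $ cd /home/gregd/httpd-2.4.3/pcre
        $ ./configure --prefix=/usr/local/pcre && make && sudo make install
        $ cd /home/gregd/httpd-2.4.3
        $ ./configure --with-included-apr --with-pcre=/usr/local/pcre/bin/pcre-config

    On Ubuntu, installing the libpcre3-dev package and using plain --with-pcre is an even shorter path.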

  • xampp admin page access forbidden

    - by Vihaan Verma
    I'm new to the Apache world! I read some docs online to set up a virtual host, which works fine. Here are the contents of the httpd-vhosts.conf file:

        <Directory C:/vhosts>
            Order Deny,Allow
            Allow from all
        </Directory>

        NameVirtualHost *:80

        <VirtualHost *:80>
            DocumentRoot "C:/htdocs"
            ServerName localhost
        </VirtualHost>

        <VirtualHost *:80>
            DocumentRoot "C:/vhosts/phpdw"
            ServerName phpdw
        </VirtualHost>

    But now when I access the XAMPP control panel and try opening the Apache admin page, I get an access denied error (403). My guess is that this file needs some more configuration to allow access to localhost. I could not find anything relevant. Thanks.
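    One hedged observation: XAMPP's own admin pages live under its install tree (typically C:/xampp/htdocs), but the localhost vhost above points at C:/htdocs. Once NameVirtualHost is active, that vhost catches the admin request and serves it from a directory Apache hasn't been told to allow, hence the 403. Pointing the catch-all vhost back at XAMPP's htdocs is worth a try:

        <VirtualHost *:80>
            DocumentRoot "C:/xampp/htdocs"
            ServerName localhost
        </VirtualHost>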

  • ScriptAlias makes requests match too many Location blocks. What is going on?

    - by brain99
    We wish to restrict access on our development server to those users who have a valid SSL client certificate. We are running Apache 2.2.16 on Debian 6. However, for some sections (mainly git-http, set up with gitolite on https://my.server/git/) we need an exception, since many git clients don't support SSL client certificates. I have succeeded in requiring client cert authentication for the server, and in adding exceptions for some locations. However, it seems this does not work for git. The current setup is as follows:

        SSLCACertificateFile ssl-certs/client-ca-certs.crt

        <Location />
            SSLVerifyClient require
            SSLVerifyDepth 2
        </Location>

        # this works
        <Location /foo>
            SSLVerifyClient none
        </Location>

        # this does not
        <Location /git>
            SSLVerifyClient none
        </Location>

    I have also tried an alternative solution, with the same results:

        # require authentication everywhere except /git and /foo
        <LocationMatch "^/(?!git|foo)">
            SSLVerifyClient require
            SSLVerifyDepth 2
        </LocationMatch>

    In both these cases, a user without a client certificate can perfectly access my.server/foo/, but not my.server/git/ (access is refused because no valid client certificate is given). If I disable SSL client certificate authentication completely, my.server/git/ works fine.

    The ScriptAlias problem. Gitolite is set up using the ScriptAlias directive. I have found that the problem occurs with any similar ScriptAlias:

        # Gitolite
        ScriptAlias /git/ /path/to/gitolite-shell/
        ScriptAlias /gitmob/ /path/to/gitolite-shell/

        # My test
        ScriptAlias /test/ /path/to/test/script/

    Note that /path/to/test/script is a file, not a directory; the same goes for /path/to/gitolite-shell/. My test script simply prints out the environment, super simple:

        #!/usr/bin/perl
        print "Content-type:text/plain\n\n";
        print "TEST\n";
        @keys = sort(keys %ENV);
        foreach (@keys) {
            print "$_ => $ENV{$_}\n";
        }

    It seems that if I go to https://my.server/test/someLocation, any SSLVerifyClient directives are applied which sit in Location blocks matching /test/someLocation or just /someLocation. If I have the following config:

        <LocationMatch "^/f">
            SSLVerifyClient require
            SSLVerifyDepth 2
        </LocationMatch>

    then the following URL requires a client certificate: https://my.server/test/foo. However, the following URL does not: https://my.server/test/somethingElse/foo. Note that this only seems to apply to SSL configuration. The following has no effect whatsoever on https://my.server/test/foo:

        <LocationMatch "^/f">
            Order allow,deny
            Deny from all
        </LocationMatch>

    However, it does block access to https://my.server/foo. This presents a major problem for cases where I have some project running at https://my.server/project (which has to require SSL client certificate authorization), and there is a git repository for that project at https://my.server/git/project which cannot require an SSL client certificate. Since the /git/project URL also gets matched against /project Location blocks, such a configuration seems impossible given my current findings.

    Question: why is this happening, and how do I solve my problem? In the end, I want to require SSL client certificate authorization for the whole server except for /git and /someLocation, with as minimal configuration as possible (so I don't have to modify the configuration each time something new is deployed or a new git repository is added).

    Note: I rewrote my question (instead of just adding more updates at the bottom) to take into account my new findings and hopefully make this clearer.
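    An untested workaround sketch, sidestepping the Location matching described above: leave the certificate optional at the handshake and enforce it per-URI with SSLRequire, which evaluates an expression per request rather than relying on merged Location blocks. Whether REQUEST_URI here sees the original URI or the ScriptAlias-translated one is exactly the open question, so treat this as an experiment, not an answer:

        <Location />
            SSLVerifyClient optional
            SSLVerifyDepth 2
            SSLRequire %{SSL_CLIENT_VERIFY} eq "SUCCESS" or %{REQUEST_URI} =~ m/^\/(git|foo)/
        </Location>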

  • Can't access folder on server - Permission denied

    - by Michal Korzeniowski
    I am running a VPS with Ubuntu 11.04. After a clean MODX install I tried to access http://www.encepence.pl/manager and got a permission denied error from my server. The thing is, I can easily access any other folder under that domain, and I can modify the manager folder's contents via FTP. I tried modifying the virtual host with this:

        <Directory /var/www/blackflow/data/www/encepence.pl/manager/>
            Options Indexes FollowSymLinks ExecCGI
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>

    But it didn't work. The rest of the configuration:

        <Directory /var/www/blackflow/data/www/encepence.pl>
            Options -ExecCGI -Includes
            php_admin_value open_basedir "/var/www/blackflow/data:."
            php_admin_flag engine on
        </Directory>

        <VirtualHost 192.166.219.34:80>
            ServerName encepence.pl
            CustomLog /var/www/httpd-logs/encepence.pl.access.log combined
            DocumentRoot /var/www/blackflow/data/www/encepence.pl
            ErrorLog /var/www/httpd-logs/encepence.pl.error.log
            ServerAdmin [email protected]
            ServerAlias www.encepence.pl
            SuexecUserGroup blackflow blackflow
            AddType application/x-httpd-php .php .php3 .php4 .php5 .phtml
            AddType application/x-httpd-php-source .phps
            php_admin_value open_basedir "/var/www/blackflow/data:."
            php_admin_value sendmail_path "/usr/sbin/sendmail -t -i -f [email protected]"
            php_admin_value upload_tmp_dir "/var/www/blackflow/data/mod-tmp"
            php_admin_value session.save_path "/var/www/blackflow/data/mod-tmp"
            VirtualDocumentRoot /var/www/blackflow/data/www/%0
        </VirtualHost>

    Any ideas on what might have gone wrong?
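    A hedged place to look first, since FTP works but Apache refuses: filesystem permissions on the manager directory itself, and any .htaccess MODX dropped in there. The error log named in the vhost will usually say exactly which file Apache couldn't open:

        $ tail -n 20 /var/www/httpd-logs/encepence.pl.error.log
        $ ls -ld /var/www/blackflow/data/www/encepence.pl/manager
        $ cat /var/www/blackflow/data/www/encepence.pl/manager/.htaccess
        # a directory mode the Apache user can't traverse, or a Deny rule in
        # that .htaccess, are the usual culprits for a single-folder 403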

  • Apache HTTPD as a proxy

    - by markovuksanovic
    I need to proxy all requests from localhost:8080/app1/ to localhost/app1. What is the best way to do it? The only requirement is that the user must never be aware that he is accessing the application at port 80. I guess I need to set up Apache HTTPD proxying; I'm just not sure of the best way to do it. Thanks in advance.
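    A minimal mod_proxy sketch for that direction (port 8080 in front, port 80 behind), assuming mod_proxy and mod_proxy_http are loaded:

        Listen 8080
        <VirtualHost *:8080>
            ProxyPass        /app1/ http://localhost/app1/
            ProxyPassReverse /app1/ http://localhost/app1/
        </VirtualHost>

    ProxyPassReverse rewrites Location headers on redirects, which is what keeps the back-end port from ever leaking into the browser's address bar.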

  • Load Balancing Rails on Apache 2.x

    - by revgum
    My situation is that I need to proxy traffic at the root of my web server to port 81 for IIS, and any traffic to a sub-directory needs to be directed to the Rails app:

        my-server.com/      - needs to proxy to port 81
        my-server.com/myapp - needs to point to the Rails app

    This seems to be working all right for the Rails application, but the images, javascripts, and stylesheets are not actually working (proxied). I've tried to fiddle with the ProxyPass lines but it still doesn't work for me. Can anyone help? Here's my complete VirtualHost portion of the config:

        LoadModule proxy_module modules/mod_proxy.so
        LoadModule proxy_http_module modules/mod_proxy_http.so

        ProxyRequests off

        <Proxy balancer://myapp_cluster>
            BalancerMember http://127.0.0.1:3001
            BalancerMember http://127.0.0.1:3002
        </Proxy>

        <VirtualHost *:80>
            DocumentRoot "c:\ruby\apps\myapp\public"
            <Directory /myapp>
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            ProxyPass /myapp/images !
            ProxyPass /myapp/stylesheets !
            ProxyPass /myapp/javascripts !
            ProxyPass /myapp/ balancer://myapp_cluster/
            ProxyPassReverse /myapp/ balancer://myapp_cluster/
            ProxyPreserveHost on
            ProxyPass / http://localhost:81/
            ErrorLog "c:\ruby\apps\myapp\log\error.log"
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog "c:\ruby\apps\myapp\log\access.log" combined
        </VirtualHost>
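    A hedged diagnosis from the paths above: with DocumentRoot set to the app's public directory, an excluded request for /myapp/images/foo.png maps to public/myapp/images/foo.png on disk, which doesn't exist; the assets actually live at public/images/foo.png. An Alias that lines the /myapp prefix up with public/ is the usual fix:

        Alias /myapp "c:/ruby/apps/myapp/public"
        <Directory "c:/ruby/apps/myapp/public">
            Options FollowSymLinks
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>

    The ProxyPass exclusions then serve the static files straight from disk, while everything else under /myapp/ still reaches the balancer.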

  • OPTIONS request vs GET in Ajax

    - by user41172
    I have a PHP/JavaScript app that queries and returns info using an Ajax request. On every server I've used so far this works as expected, passing an Ajax GET request to the server and returning JSON data. On a new install, the query fails and returns nothing. I inspected the request, and it turns out that rather than passing the query as a GET, the browser is sending it as an OPTIONS request. Is there any reason for this? I have no idea why this might happen. Thanks!
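    A hedged explanation: an OPTIONS request before a GET is a CORS preflight, which browsers send when the Ajax call is cross-origin (the new install serving the page and the API from different hosts, ports, or schemes would trigger it, as would custom request headers). If the server never answers the preflight, the real GET is never sent. One fix is answering it; the origin below is an assumption to be replaced with the real one:

        # Apache sketch, assuming mod_headers is enabled
        Header set Access-Control-Allow-Origin "http://app.example.com"
        Header set Access-Control-Allow-Methods "GET, OPTIONS"

    The cleaner fix, where possible, is serving the page and its API from the same origin so no preflight happens at all.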

  • Apache mod_proxy or mod_rewrite to hide the root of a webserver behind a path

    - by Giovanni Nervi
    I have two Apache 2.2.21 servers, one external and one internal, and I need to map the internal Apache behind a path on the external Apache, but I have some problems with absolute URLs. I tried these configurations:

        RewriteEngine on
        RewriteRule ^/externalpath/(.*)$ http://internal-apache.test.com/$1 [L,P,QSA]
        ProxyPassReverse /externalpath/ http://internal-apache.test.com/

    or:

        <Location /externalpath/>
            ProxyPass http://internal-apache.test.com/
            ProxyPassReverse http://internal-apache.test.com/
        </Location>

    My internal Apache uses absolute paths to reference resources such as images, CSS and HTML, and I can't change that now. Any suggestions? Thank you.
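    The catch here: ProxyPassReverse only rewrites HTTP headers (Location and friends), never the absolute paths inside the HTML itself, so links like /style.css escape the /externalpath/ prefix. One hedged option for Apache 2.2 is mod_proxy_html, a third-party module that rewrites links in proxied pages:

        # sketch, assuming mod_proxy_html is installed and loaded
        <Location /externalpath/>
            ProxyPass http://internal-apache.test.com/
            ProxyPassReverse http://internal-apache.test.com/
            ProxyHTMLEnable On
            ProxyHTMLURLMap / /externalpath/
        </Location>

    It only touches HTML/XHTML, so absolute URLs buried in CSS or JavaScript may still need handling on the internal server.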

  • WebM HTML5 videos lose connection with Apache server

    - by Jizbo Jonez
    WebM HTML5 videos played through a domain on my server sometimes lose their connection. A video that is playing will start to buffer and then stop partway through, with the message "Video playback aborted due to a network error." displayed on the HTML5 video player. I am delivering the WebM videos via a PHP script on a LAMP server. There don't seem to be any errors in the server logs. Are there any php.ini or httpd.conf settings I need to change? I recently set KeepAlive to On in httpd.conf; could that be causing this?
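    A hedged suspect when PHP streams large files: the script's execution time limit killing it mid-transfer on slow connections, which would surface client-side as an aborted download while the server logs stay clean. Cheap things to try at the top of the (hypothetical) delivery script:

        <?php
        // let long transfers finish, and stream rather than buffer in memory
        set_time_limit(0);
        while (ob_get_level()) { ob_end_flush(); }

    Confirming the same video plays reliably when served as a plain static file would also cleanly separate a PHP problem from an Apache/KeepAlive one.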

  • Allow from referer for HTTP-basic protected SSL apache site

    - by user64204
    I have an Apache site protected by HTTP basic authentication. The authentication is working fine. Now I would like to bypass authentication for users coming from a particular website, by relying on the HTTP Referer header. Here is the configuration:

        SetEnvIf Referer "^http://.*.example\.org" coming_from_example_org

        <Directory /var/www/>
            Options Indexes FollowSymLinks MultiViews
            AllowOverride None
            Deny from all
            Allow from env=coming_from_example_org
            AuthName "login required"
            AuthUserFile /opt/http_basic_usernames_and_passwords
            AuthType Basic
            Require valid-user
            Satisfy Any
        </Directory>

    This is working fine for HTTP, but failing for HTTPS. My understanding is that in order to inspect the HTTP headers the SSL handshake must be completed, but Apache wants to inspect the <Directory> directives before doing the SSL handshake, even if I place them at the bottom of the configuration file.

    Q: How could I work around this issue?

    PS: I'm not obsessed with the HTTP Referer header; I could use other options that would allow users from a known website to bypass authentication.
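    Two hedged checks before blaming handshake ordering: first, these directives must actually be in effect inside the SSL virtual host; if SetEnvIf and the <Directory> block only live in the port-80 config, HTTPS requests never see them. Second, the SetEnvIf pattern only matches http:// referers, so a link from an https:// page on example.org would fail the test. A sketch covering both (certificate paths and the Include file are assumptions):

        <VirtualHost *:443>
            SSLEngine on
            SSLCertificateFile    /etc/ssl/certs/site.crt
            SSLCertificateKeyFile /etc/ssl/private/site.key
            SetEnvIf Referer "^https?://.*\.example\.org" coming_from_example_org
            # pull in the same <Directory /var/www/> block shown above
            Include /etc/apache2/shared-auth-directory.conf
        </VirtualHost>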

  • How to handle sh: fetch: command not found

    - by Tyler Johnson
    Okay, I'm a newbie. I know how to build and compose a website, but I have no idea what I'm doing when it comes to servers, server commands, etc. I've recently had a problem where all of the sites on our server go down at once, and then I have to go in and reboot the server for them to come up again. At first this was annoying, but now it is becoming agonizing, as it now takes 3-4 reboots for the websites to come back up. I contacted support for my hosting, but they are not being very helpful. They just keep telling me what the issue might be and basically telling me that I'm going to have to look into it and figure it out, which really isn't possible since I know nothing. Anyway, here are the things they said were possible reasons:

    - I have "strange logs" in my Apache webserver log: error: sh: fetch: command not found
    - My php.ini memory limit is 256M, which is very high. It should be 32M or 64M.
    - The server is reaching MaxClients, meaning we have more than 150 visitors at a time. (They supposedly "fixed" this, but the sites/server are still going down.)
    - I have some WordPress sites with plugins getting errors like:

        PHP Warning: pack(): Type H: illegal hex digit G in...
        PHP Fatal error: Cannot use object of type stdClass as array in...
        PHP Fatal error: Maximum execution time of 30 seconds exceeded in...
        PHP Fatal error: Call to undefined function file_exists() in...
        PHP Parse error: syntax error, unexpected '<'

    I know that's a lot, but I really am at wits' end and have no idea what to do now. If anyone could give me some advice or point me in the right direction I would greatly appreciate it! Thanks! Oh, and here are the specs for my server:

        RAM: 2048MB
        CPU Shares: 40
        Primary Disk: 50GB
        Data Transfer: 75GB
        Port Speed: 5Mbps
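    For what it's worth, a hedged reading of the strangest clue: fetch is FreeBSD's command-line download tool, so something on the server (often a cron job, or code injected into a compromised WordPress plugin) is shelling out to download a file with a tool the OS doesn't have. Hunting for the caller is a reasonable first step:

        $ grep -R "fetch " /etc/cron* /var/spool/cron 2>/dev/null
        $ grep -R -l "shell_exec\|system(" /path/to/docroot --include="*.php" | head

    The "unexpected '<'" parse errors are another classic sign of tampered or half-uploaded PHP files, so a malware scan of the WordPress installs would be worth the hosting company's time.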

  • How to install Predis

    - by user782860
    I am trying to install Predis, but keep getting a 500 server error. Here is what I have done:

    1. Installed Apache and PHP on Ubuntu Natty.
    2. Used the instructions at http://redis.io/download to download Redis.
    3. Ran the following example to confirm that Redis is working:

        $ src/redis-cli
        redis> set foo bar
        OK
        redis> get foo
        "bar"

    4. Have a local website at /home/user/Dropbox/documents/www/mywebsite.com/index.php and have confirmed that PHP is working.
    5. Downloaded the .zip version of Predis ( https://github.com/nrk/predis version v0.6.6-PHP5.2 ) and unzipped the contents to /home/user/Dropbox/documents/www/mywebsite.com/, so now Predis is here: /home/user/Dropbox/documents/www/mywebsite.com/nrk-predis-3bf1230/
    6. Opened the /home/user/Dropbox/documents/www/mywebsite.com/index.php page. Here are its contents:

        <?
        define("PREDIS_BASE_PATH", "nrk-predis-3bf1230/lib/");
        spl_autoload_register(function($class) {
            $file = PREDIS_BASE_PATH.strtr($class, '\\', '/').'.php';
            if (file_exists($file)) {
                require $file;
                return true;
            }
        });
        $redis = new Predis_Client();
        $redis->set('foo', 'bar');
        $value = $redis->get('foo');
        ?>

    I have tried changing $redis = new Predis_Client(); to $redis = new Predis\Client(); and have tried changing PREDIS_BASE_PATH to:

        /nrk-predis-3bf1230/lib
        /home/user/Dropbox/documents/www/mywebsite.com/nrk-predis-3bf1230/lib/
        /home/user/Dropbox/documents/www/mywebsite.com/nrk-predis-3bf1230/lib

    I have done a chmod +x on both:

        /home/user/Dropbox/documents/www/mywebsite.com/nrk-predis-3bf1230/
        /home/user/Dropbox/documents/www/mywebsite.com

    And doing all of the above always results in a 500 server error. What am I doing wrong?
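    A hedged diagnosis: the anonymous function passed to spl_autoload_register needs PHP 5.3, yet the download is the PHP 5.2 branch of Predis, presumably because the server runs 5.2; on 5.2 that closure is a parse error, and a parse error surfaces as a bare 500 when display_errors is off. The real message will be in Apache's error log (tail /var/log/apache2/error.log). A 5.2-safe bootstrap would skip the autoloader entirely (the exact filename inside lib/ is an assumption; check what the zip actually contains):

        <?php
        // PHP 5.2 branch of Predis: one include, underscored class names
        require dirname(__FILE__).'/nrk-predis-3bf1230/lib/Predis.php';

        $redis = new Predis_Client();
        $redis->set('foo', 'bar');
        $value = $redis->get('foo');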

  • Set up a proxy with Apache 2.4 on Mac 10.8

    - by Aptos
    I have one application (Java) running on my local machine (localhost:9000). I want to set up Apache as a front-end proxy, so I put the following configuration in httpd.conf:

        <Directory />
            #Options FollowSymLinks
            Options Indexes FollowSymLinks Includes ExecCGI
            AllowOverride All
            Order deny,allow
            Allow from all
        </Directory>

        Listen 57173

        LoadModule proxy_module modules/mod_proxy.so

        <VirtualHost *:9999>
            ProxyPreserveHost On
            ServerName project.play
            ProxyPass / http://127.0.0.1:9000/Login
            ProxyPassReverse / http://127.0.0.1:9000/Login
            LogLevel debug
        </VirtualHost>

        ServerName localhost:57173

    I changed my /private/etc/hosts to:

        ##
        # Host Database
        #
        # localhost is used to configure the loopback interface
        # when the system is booting. Do not change this entry.
        ##
        127.0.0.1       localhost
        255.255.255.255 broadcasthost
        ::1             localhost
        fe80::1%lo0     localhost
        127.0.0.1:9999  project.play

    and ran dscacheutil -flushcache. The problem is that I can only access localhost:57173; when I try http://project.play:9999, Chrome returns "Oops! Google Chrome could not find project.play:9999". Can somebody show me where I went wrong? Thank you very much.

    P.S. Accessing localhost:9999 returns "The server made a boo boo."
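    Two hedged pointers from the config above. First, /etc/hosts maps names to addresses only; an entry carrying a port (127.0.0.1:9999) is invalid and silently ignored, which is exactly why the name doesn't resolve. Second, Apache only has Listen 57173, so nothing accepts connections on 9999 even once the name resolves. A corrected sketch:

        # /private/etc/hosts - no port here; the browser supplies it in the URL
        127.0.0.1   project.play

        # httpd.conf - actually listen on the port the vhost is bound to
        Listen 9999

    After flushing the cache again, http://project.play:9999 should reach the vhost. mod_proxy_http also needs loading alongside mod_proxy for the http:// ProxyPass targets to work.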

  • How to go about scaling a web application?

    - by phoenix24
    This is from someone who has been primarily a web-application developer and doesn't know much about scaling/scalability techniques. I'll start by stating that my application is written in Python, using Django; a fairly standard setup. I currently use Apache 2.2 for my webserver and MySQL for my database server, both running on the same VPS. Up until now it was basically a prototype with merely 15-30 concurrent users at any given time, so I had no issues, but now that we'll be adding more users we'll have severe performance issues. So my question is: how do I go about scaling my web application? My plan is as follows:

    1. Right now I have just one VPS running Apache + MySQL.
    2. Next, I plan to add another VPS to run only MySQL, so I'll have one web server and one DB server.
    3. Next, I'll add Memcached to the web server for caching data, taking some load off MySQL.
    4. Next, another web server for serving all the static content.
    5. Finally, a VPS for load balancing (nginx/varnish), behind which would be my two web servers and then the DB server.

    Does that sound like a workable strategy? Please guide me here.
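    The plan reads as the conventional path. For reference, the final load-balancing step is only a few lines in nginx; a sketch with assumed private addresses for the two web servers:

        upstream django_web {
            server 10.0.0.2:80;
            server 10.0.0.3:80;
        }

        server {
            listen 80;
            location / {
                proxy_pass http://django_web;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
            }
        }

    One sequencing note: moving MySQL off-box and adding Memcached (steps 2-3) usually buy the most headroom per unit of effort, so measuring after each step beats building the whole stack up front.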

  • "Directory index forbidden by Options directive" when deleting or renaming folders through webdav

    - by sandwiches
    I am trying to delete folders through WebDAV, but all I get is a 403 on the client and "Directory index forbidden by Options directive" in the Apache error log. I enabled "Options Indexes" for the folder and stopped getting the errors in either the client or the log, but I still can't rename or delete folders through WebDAV. Any ideas why I'm unable to edit folders through WebDAV?

    I am running WAMP, default installation with Apache 2.2.17. I can connect, create files, delete files, rename them, etc. I can create folders, but not delete or rename them once they're created. In the access log, whenever I try to delete, I get this:

        "DELETE /uploads/shahs HTTP/1.1" 301 243

    In the error log, I get:

        Directory index forbidden by Options directive

    The WebDAV client gives a 403 when trying to delete or rename folders. Once I added "Options Indexes", I stopped getting the error message in the Apache error log and the 403 on the WebDAV client, but now deleting or renaming does nothing. No error messages, but nothing happens at all.
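    A hedged reading of that 301: mod_dir redirects the slash-less collection URL (/uploads/shahs) to /uploads/shahs/, and many WebDAV clients don't re-issue a DELETE after a redirect, so the operation silently dies. Disabling the redirect inside the DAV area is a common workaround (location path assumed from the log):

        <Location /uploads>
            DAV On
            DirectorySlash Off
        </Location>

    Failing that, a client that appends the trailing slash itself would confirm the theory.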

  • Nginx & Apache: cannot get try_files to work with permalinks

    - by tcherokee
    I have been working on this for the past two weeks now and for some reason I cannot seem to get nginx's try_files to work with my WordPress permalinks. I am hoping someone will be able to tell me where I am going wrong, and also hopefully tell me if I made any major errors with my configuration as well (I am an nginx newbie... but learning :) ). Here are my configuration files.

    nginx.conf:

        user www-data;
        worker_processes 4;
        pid /var/run/nginx.pid;

        events {
            worker_connections 768;
            # multi_accept on;
        }

        http {
            ##
            # Basic Settings
            ##
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 65;
            types_hash_max_size 2048;
            # server_tokens off;
            # server_names_hash_bucket_size 64;
            # server_name_in_redirect off;
            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            ##
            # Logging Settings
            ##
            # Defines the cache log format, cache log location
            # and the main access log location.
            log_format cache '***$time_local '
                             '$upstream_cache_status '
                             'Cache-Control: $upstream_http_cache_control '
                             'Expires: $upstream_http_expires '
                             '$host '
                             '"$request" ($status) '
                             '"$http_user_agent" ';
            access_log /var/log/nginx/access.log;
            error_log /var/log/nginx/error.log;

            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }

    mydomain.com.conf:

        server {
            listen 123.456.78.901:80; # IP goes here.
            server_name www.mydomain.com mydomain.com;
            #root /var/www/mydomain.com/prod;
            index index.php;

            ## mydomain.com -> www.mydomain.com (301 - Permanent)
            if ($host !~* ^(www|dev)) {
                rewrite ^/(.*)$ $scheme://www.$host/$1 permanent;
            }

            # Add trailing slash to */wp-admin requests.
            rewrite /wp-admin$ $scheme://$host$uri/ permanent;

            # All media (including uploaded) is under wp-content/ so
            # instead of caching the response from apache, we're just
            # going to use nginx to serve directly from there.
            location ~* ^/(wp-content|wp-includes)/(.*)\.(jpg|png|gif|jpeg|css|js|m$
                root /var/www/mydomain.com/prod;
            }

            # Don't cache these pages.
            location ~* ^/(wp-admin|wp-login.php) {
                proxy_pass http://backend;
            }

            location / {
                if ($http_cookie ~* "wordpress_logged_in_[^=]*=([^%]+)%7C") {
                    set $do_not_cache 1;
                }
                proxy_cache_key "$scheme://$host$request_uri $do_not_cache";
                proxy_cache main;
                proxy_pass http://backend;
                proxy_cache_valid 30m; # 200, 301 and 302 will be cached.
                # Fallback to stale cache on certain errors.
                # 503 is deliberately missing, if we're down for maintenance
                # we want the page to display.
                #try_files $uri $uri/ /index.php?q=$uri$args;
                #try_files $uri =404;
                proxy_cache_use_stale error timeout invalid_header http_500 http_502 http_504 http_404;
            }

            # Cache purge URL - works in tandem with WP plugin.
            # location ~ /purge(/.*) {
            #     proxy_cache_purge main "$scheme://$host$1";
            # }

            # No access to .htaccess files.
            location ~ /\.ht {
                deny all;
            }
        } # End server

    gzip.conf:

        # Gzip Configuration.
        gzip on;
        gzip_disable msie6;
        gzip_static on;
        gzip_comp_level 4;
        gzip_proxied any;
        gzip_types text/plain text/css application/x-javascript text/xml
                   application/xml application/xml+rss text/javascript;

    proxy.conf:

        # Set proxy headers for the passthrough
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_max_temp_file_size 0;
        client_max_body_size 10m;
        client_body_buffer_size 128k;
        proxy_connect_timeout 90;
        proxy_send_timeout 90;
        proxy_read_timeout 90;
        proxy_buffer_size 4k;
        proxy_buffers 4 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
        add_header X-Cache-Status $upstream_cache_status;

    backend.conf:

        upstream backend {
            # Defines backends.
            # Extracting here makes it easier to load balance
            # in the future. Needs to be specific IP as Plesk
            # doesn't have Apache listening on localhost.
            ip_hash;
            server 127.0.0.1:8001; # IP goes here.
        }

    cache.conf:

        # Proxy cache and temp configuration.
        proxy_cache_path /var/www/nginx_cache levels=1:2 keys_zone=main:10m
                         max_size=1g inactive=30m;
        proxy_temp_path /var/www/nginx_temp;
        proxy_cache_key "$scheme://$host$request_uri";
        proxy_redirect off;

        # Cache different return codes for different lengths of time
        # We cached normal pages for 10 minutes
        proxy_cache_valid 200 302 10m;
        proxy_cache_valid 404 1m;

    The two commented-out try_files lines in location / of the mydomain config are the ones I tried. The error I found in the error log is:

        ...rewrite or internal redirection cycle while internally redirecting to "/index.php"

    Thanks in advance.
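    A hedged sketch of the usual nginx-in-front-of-Apache pattern: try_files needs a root so $uri can be tested against real files, and its final fallback can be a named location that does the proxying, leaving the permalink rewriting to Apache/WordPress on the backend:

        root /var/www/mydomain.com/prod;

        location / {
            try_files $uri $uri/ @backend;
        }

        location @backend {
            # the existing proxy_cache directives would move in here too
            proxy_pass http://backend;
        }

    With root commented out, as in the config above, every try_files test fails and the /index.php fallback loops back into the same location, which matches the "internal redirection cycle" error exactly.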

  • Prevent SSL certificate being returned for a specific domain

    - by jezmck
    Apologies for a long question: we've taken on a new client whose web hosting was previously on their in-house server, which still has their Exchange/Outlook email. We now host their domain (and many others) on our server. They're complaining that they're getting errors in Outlook. I don't understand the AutoDiscover stuff at the root of the problem, but believe I just need to stop the SSL certificate on our server being returned when requested at a particular domain. From their IT people:

        Yes it is, the issue lies with "{newclient}.com" being pointed to your
        server IP and that server has port 443 open with an SSL certificate
        associated to it. So when Outlook/ActiveSync use autodiscover to find
        the mailbox settings it finds your SSL (because 443 is open) and flags
        it as an error. The solution is to close 443 so it's not discovered;
        Autodiscover will then proceed to mail.{newclient}.com via the
        MX/service records and discover the correct SSL.

    I'm new here and there was no hand-over, so I don't know whether other currently hosted sites need to accept SSL connections, though I suspect some will, or may in future. This is a live server, so I can't risk trying loads of options in case I take the server offline! I feel like I should be adding something like the following to vhosts.conf:

        <VirtualHost *:443>
            ServerName {newclient}.com
            ServerAlias www.{newclient}.com
            SSLEngine Off
            SSLCertificateFile {NONE}
            SSLCertificateKeyFile {NONE}
        </VirtualHost>

    Apologies for the fact that I don't know enough about this subject to be able to ask the question more clearly!
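    A hedged alternative to the sketch above (which won't fly as written: a 443 vhost can't be served without some certificate, and SSLEngine Off just breaks the handshake for everyone sharing that IP): keep SSL up but make the autodiscover probes fail cleanly, so Outlook falls through to the DNS-based lookup. mod_alias can return the 404 without touching other sites:

        # inside whichever *:443 vhost currently answers for this domain
        Redirect 404 /autodiscover
        Redirect 404 /Autodiscover

    Since all the client needs is for https://{newclient}.com/autodiscover/autodiscover.xml to stop looking like a mail server, a 404 there is usually enough; on a shared IP without SNI, closing 443 for one name alone isn't possible anyway.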

  • Rails with Phusion Passenger and WordPress

    - by Venu
    We had a site developed using Ruby on Rails. It has a website, web services for a mobile app, and an admin panel to manage data. We started using WordPress to manage the site content. We have finished development and now have to move to production. This is the current virtual host code that makes WordPress work under the /wordpress URI:

        <Location /wordpress>
            PassengerEnabled off
            <IfModule mod_rewrite.c>
                RewriteEngine On
                RewriteBase /wordpress/
                RewriteCond %{REQUEST_FILENAME} !-f
                RewriteCond %{REQUEST_FILENAME} !-d
                RewriteRule . /wordpress/index.php [L]
            </IfModule>
        </Location>

    I want to make Phusion Passenger work for the /admin and /api URIs, and / to go to WordPress. Can we change the document root based on the URI, or is there a better solution?
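    One hedged pattern Passenger supports for this kind of split: make the WordPress tree the DocumentRoot, symlink the Rails app's public/ directory into it under each sub-URI, and declare those sub-URIs (RailsBaseURI is the classic directive; newer Passenger versions call it PassengerBaseURI). Whether a single app can sensibly be mounted at two base URIs depends on its routes, so treat this as a starting point; paths below are assumptions:

        # ln -s /var/www/railsapp/public /var/www/wordpress/admin
        # ln -s /var/www/railsapp/public /var/www/wordpress/api

        <VirtualHost *:80>
            DocumentRoot /var/www/wordpress
            RailsBaseURI /admin
            RailsBaseURI /api
        </VirtualHost>

    The existing /wordpress Location block would then no longer be needed, since WordPress sits at / directly.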

  • Requesting better explanation for etag/expiration of favicon.ico

    - by syn4k
    Following this article: Configuring favicon with expires header in htaccess. Using YSlow, I keep getting:

        (no expires) http://devwww.someplace.com/favicon.ico

    YSlow also gives a grade C on "Configure entity tags (ETags)" for the same file. My relevant config (.htaccess):

        # Configure ETags
        FileETag MTime Size

        <IfModule mod_expires.c>
            # Enable Expires headers for this directory and sub-directories that don't override it
            ExpiresActive on

            # Set default expiration for all files
            ExpiresDefault "access plus 24 hours"

            # Add proper MIME type for favicon
            AddType image/x-icon .ico

            # Set specific expiration by file type
            ExpiresByType image/x-icon "access plus 1 month"
            ExpiresByType image/ico "access plus 1 month"
            ExpiresByType image/icon "access plus 1 month"
        </IfModule>

    As you can see, I am setting both ETags and expiration; however, both seem to be ignored. Yes, mod_expires is being loaded by my Apache configuration.
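    Two hedged checks: .htaccess rules only govern files served from that directory tree, so if devwww.someplace.com's favicon.ico lives under a different document root (or behind a proxy), these directives never touch it. Inspecting the live headers shows what's actually sent:

        $ curl -sI http://devwww.someplace.com/favicon.ico | egrep -i 'etag|expires|cache-control'

    Also, YSlow's ETag grade typically only clears when ETags are removed outright, since "MTime Size" still varies across servers. If removal is acceptable, it looks like this (Header requires mod_headers):

        FileETag None
        Header unset ETag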
