Search Results

Search found 10640 results on 426 pages for 'apache2 module'.


  • Securing php on a shared apache

    - by Jack
    I'm going to install Apache and PHP on a server where two users, A and B, will deploy their websites. I'm trying to isolate each user's space for security reasons: no script from site A should be able to read files in site B. To achieve this I installed suPHP. Website files of user A are owned by A:A with perm=700, and those of user B are owned by B:B with perm=700. suPHP works great, but Apache complains about permissions when reading .htaccess. How can I let Apache read the .htaccess in every directory of A and B while keeping site A and site B isolated from each other? I played with ownership (group = www-data) and permissions (750), but I found no way to preserve the isolation. Any idea? Maybe by running Apache as root, but in that case, what are the drawbacks?
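
    A minimal sketch of the group-based variant, assuming suPHP executes each site's scripts as the owning user (so the www-data group bit is only ever exercised by Apache itself, never by the other user's scripts); paths and names mirror the setup described above:

        # site A: owner keeps full control, Apache gets read access via the group
        chown -R A:www-data /home/A/site
        find /home/A/site -type d -exec chmod 750 {} \;   # Apache can traverse directories
        find /home/A/site -type f -exec chmod 640 {} \;   # Apache can read .htaccess (and static files)
        # repeat for user B with B:www-data

    Under suPHP, a script in site B runs as user B, not www-data, so it still cannot read A's files through the group bit.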

  • Nginx reverse proxy IP issue

    - by Tiffany Walker
    For some reason Apache is still seeing my server's IP instead of the visitor's. Is this an nginx problem?

    /etc/nginx.conf:

        user nobody;
        # no need for more workers in the proxy mode
        worker_processes 4;
        error_log /var/log/nginx/error.log info;
        worker_rlimit_nofile 20480;
        events {
            worker_connections 5120; # increase for busier servers
            use epoll; # you should use epoll here for Linux kernels 2.6.x
        }
        http {
            server_name_in_redirect off;
            server_names_hash_max_size 10240;
            server_names_hash_bucket_size 1024;
            include mime.types;
            default_type application/octet-stream;
            server_tokens off;
            disable_symlinks if_not_owner;
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 5;
            gzip on;
            gzip_vary on;
            gzip_disable "MSIE [1-6]\.";
            gzip_proxied any;
            gzip_http_version 1.1;
            gzip_min_length 1000;
            gzip_comp_level 6;
            gzip_buffers 16 8k;
            # You can remove image/png image/x-icon image/gif image/jpeg if you have a slow CPU
            gzip_types text/plain text/xml text/css application/x-javascript application/xml image/png image/x-icon image/gif image/jpeg application/xml+rss text/javascript application/atom+xml;
            ignore_invalid_headers on;
            client_header_timeout 3m;
            client_body_timeout 3m;
            send_timeout 3m;
            reset_timedout_connection on;
            connection_pool_size 256;
            client_header_buffer_size 256k;
            large_client_header_buffers 4 256k;
            client_max_body_size 200M;
            client_body_buffer_size 128k;
            request_pool_size 32k;
            output_buffers 4 32k;
            postpone_output 1460;
            proxy_temp_path /tmp/nginx_proxy/;
            client_body_in_file_only on;
            log_format bytes_log "$msec $bytes_sent .";
            include "/etc/nginx/vhosts/*";
        }

    proxy.inc:

        proxy_connect_timeout 59s;
        proxy_send_timeout 600;
        proxy_read_timeout 600;
        proxy_buffer_size 64k;
        proxy_buffers 16 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
        proxy_pass_header Set-Cookie;
        proxy_redirect off;
        proxy_hide_header Vary;
        proxy_set_header Accept-Encoding '';
        proxy_ignore_headers Cache-Control Expires;
        proxy_set_header Referer $http_referer;
        proxy_set_header Host $host;
        proxy_set_header Cookie $http_cookie;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    vhost file:

        server {
            error_log /var/log/nginx/vhost-error_log warn;
            listen 63.6.1.12:80;
            server_name photo-rolldomain.com www.domain.com;
            access_log /usr/local/apache/domlogs/domain.com-bytes_log bytes_log;
            access_log /usr/local/apache/domlogs/domain.com combined;
            root /home/mtech/public_html;
            location / {
                location ~ .*\.(3gp|gif|jpg|jpeg|png|ico|wmv|avi|asf|asx|mpg|mpeg|mp4|pls|mp3|mid|wav|swf|flv|html|htm|txt|js|css|exe|zip|tar|rar|gz|tgz|bz2|uha|7z|doc|docx|xls|xlsx|pdf|iso)$ {
                    expires 7d;
                    try_files $uri @backend;
                }
                error_page 405 = @backend;
                add_header X-Cache "HIT from Backend";
                proxy_pass http://63.6.1.12:8081;
                include proxy.inc;
            }
            location @backend {
                internal;
                proxy_pass http://63.6.1.12:8081;
                include proxy.inc;
            }
            location ~ .*\.(php|jsp|cgi|pl|py)?$ {
                proxy_pass http://63.6.1.12:8081;
                include proxy.inc;
            }
            location ~ /\.ht {
                deny all;
            }
        }
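
    Since proxy.inc above already sends X-Real-IP, the usual missing half is on the Apache side: Apache has to be told to trust that header from the proxy. A hedged sketch for Apache 2.4's mod_remoteip (Apache 2.2 setups typically used the third-party mod_rpaf instead); the address is the nginx listener from the vhost above:

        # httpd.conf -- assumes mod_remoteip is loaded
        RemoteIPHeader X-Real-IP          # the header nginx sets in proxy.inc
        RemoteIPInternalProxy 63.6.1.12   # only trust it when the request comes from nginx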

  • Is it Secure to Grant Apache User Ownership of Directories & Files for WordPress

    - by Oudin
    I'm currently setting up WordPress on an Ubuntu 12 server. Everything runs fine, but there is an issue when it comes to automatic updates and uploading media via WP, as the Apache user "www-data" does not have permission to write to the directories; "user1" has full permissions. All my directories have permissions of 0755 and files 644. My directory setup is as follows, with all WP files and directories in "public_html":

        /home/user1/public_html

    To work around the auto-update and media-upload problem, I've granted the Apache user ownership of the following directories:

        sudo chown www-data:www-data wp-content -R
        sudo chown www-data:www-data wp-includes -R
        sudo chown www-data:www-data wp-admin -R

    I would like to know how secure this is, and if it is not secure, what would be the best solution that lets me keep all files and directories owned by user1 while still allowing WP to update automatically and upload media?
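
    One alternative that keeps every file owned by user1, sketched here with WordPress's standard wp-config.php constants (the values are placeholders): have WP perform updates and uploads over FTP as user1, so Apache never needs write access at all.

        // wp-config.php -- WordPress writes via FTP as user1 instead of as www-data
        define('FS_METHOD', 'ftpext');
        define('FTP_HOST', 'localhost');
        define('FTP_USER', 'user1');
        define('FTP_PASS', 'placeholder-password');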

  • PHP Apache XAMPP Run Multiple Scripts from CLI in Background

    - by Pamela
    How can I simultaneously run dozens of PHP scripts in the background from XAMPP's command line interface? Someone suggested a batch file, but when I tried executing this:

        start php 1.php
        start php 2.php
        start php 3.php

    it only opened a command prompt window; when I closed that window, two more command prompt windows opened up executing 2.php and 3.php. I want to run as many scripts as I want, all simultaneously and all in the background. What is the best way to accomplish this?
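
    For what it's worth, cmd's start command has a /B switch that runs the program without opening a new window; a sketch of the batch file under that assumption:

        start /B php 1.php
        start /B php 2.php
        start /B php 3.php

    The scripts then run concurrently without spawning one command prompt window apiece.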

  • Updating PHP on a Plesk managed Server

    - by mblaettermann
    I just updated PHP and MySQL on my VPS with the current versions from the Atomic repo. Everything worked out fine so far. From the console I get the new PHP 5.3:

        [root@server phpMyAdmin]# php -v
        PHP 5.3.16 (cli) (built: Aug 20 2012 11:18:05)
        Copyright (c) 1997-2012 The PHP Group
        Zend Engine v2.3.0, Copyright (c) 1998-2012 Zend Technologies
            with the ionCube PHP Loader v4.0.5, Copyright (c) 2002-2011, by ionCube Ltd.

    But through Apache I still get the old version (5.1.6). The server is running some old version of the crappy Plesk panel, which gives me the option to choose between Apache module, fCGI and CGI-BIN. Any hints on how to update Apache so it will use the new PHP version?

    EDIT: I just needed to restart httpd (/etc/init.d/httpd restart).

  • Mechanism behind user forwarding in ScriptAliasMatch

    - by jolivier
    I am following this tutorial to set up gitolite, and at some point the following ScriptAliasMatch is used:

        ScriptAliasMatch \
            "(?x)^/(.*/(HEAD | \
                    info/refs | \
                    objects/(info/[^/]+ | \
                             [0-9a-f]{2}/[0-9a-f]{38} | \
                             pack/pack-[0-9a-f]{40}\.(pack|idx)) | \
                    git-(upload|receive)-pack))$" \
            /var/www/bin/gitolite-suexec-wrapper.sh/$1

    And the target script starts with:

        USER=$1

    So I am guessing this is used to forward the user name from Apache to the suexec script (which indeed requires it), but I cannot see how this is done. The ScriptAliasMatch documentation makes me think that the /$1 will be replaced by the first matching group of the regexp before it. For me, that group captures from (?x)^/(.* to ))$, so there is nothing about a user here. My underlying problem is that USER is empty in my script, so I get no authorizations in gitolite. I give my username to Apache via basic authentication:

        <Location />
            # Crowd auth
            AuthType Basic
            AuthName "Git repositories"
            ...
            Require valid-user
        </Location>

    defined just under the previous ScriptAliasMatch. So I am really wondering how this is supposed to work, and what part of the mechanism I am missing that prevents the user from reaching my script.
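
    As a point of reference on the mechanism itself: in ScriptAliasMatch, $1 is substituted with whatever the first capturing group of the regex matched, so in the rule above it carries the matched repository path rather than a user name. A toy example of the substitution, with hypothetical paths:

        # a request for /git/myrepo becomes /var/www/bin/handler.sh/myrepo;
        # $1 is the text captured by the first (...) group -- here "myrepo"
        ScriptAliasMatch "^/git/(.*)$" /var/www/bin/handler.sh/$1

    Note also that for CGI execution, a trailing path segment normally reaches the script in the PATH_INFO environment variable, not as a shell positional parameter, which would be consistent with USER=$1 coming up empty.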

  • nginx reverse proxy to apache mod_wsgi doesn't work

    - by user11243
    I'm trying to run a Django site with Apache mod_wsgi, with nginx as the front end reverse-proxying into Apache.

    In my Apache ports.conf file:

        NameVirtualHost 192.168.0.1:7000
        Listen 192.168.0.1:7000
        <VirtualHost 192.168.0.1:7000>
            DocumentRoot /var/apps/example/
            ServerName example.com
            WSGIDaemonProcess example
            WSGIProcessGroup example
            Alias /m/ /var/apps/example/forum/skins/
            Alias /upfiles/ /var/apps/example/forum/upfiles/
            <Directory /var/apps/example/forum/skins>
                Order deny,allow
                Allow from all
            </Directory>
            WSGIScriptAlias / /var/apps/example/django.wsgi
        </VirtualHost>

    In my nginx config:

        server {
            listen 80;
            server_name example.com;
            location / {
                include /usr/local/nginx/conf/proxy.conf;
                proxy_pass http://192.168.0.1:7000;
                proxy_redirect default;
                root /var/apps/example/forum/skins/;
            }
            #error_page 404 /404.html;
            # redirect server error pages to the static page /50x.html
            error_page 500 502 503 504 /50x.html;
            location = /50x.html {
                root html;
            }
        }

    After restarting both Apache and nginx, nothing works: example.com simply hangs or serves the index.html in my /var/www/ folder. I'd appreciate any advice to point me in the right direction; I've tried several tutorials online to no avail.

  • Dynamic virtualhost causing "client denied by server configuration" error

    - by ridan
    I'm trying to configure a dynamic virtualhost on a Mac:

        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerName *.*.*
            ServerAlias *.*.*.*
            VirtualDocumentRoot "/Volumes/Work/webs/%2"
            VirtualScriptAlias "/Volumes/Work/webs/%2"
            <Directory "/Volumes/Work/webs/%2">
                Options Indexes FollowSymLinks Includes ExecCGI
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    It causes this error: "client denied by server configuration". When I replace by it works fine... Any ideas?
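
    One hedged observation: the %N placeholders are a mod_vhost_alias feature of VirtualDocumentRoot/VirtualScriptAlias and are not expanded inside <Directory>, so the block above may never match any real path, leaving the default deny in force. A sketch that grants access to the parent tree instead:

        <Directory "/Volumes/Work/webs">
            Options Indexes FollowSymLinks Includes ExecCGI
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>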

  • Why do HTTP loopback connections not work on my subdomains?

    - by memeLab
    I have a shared hosting account at Jumba running Linux kernel 2.6.9-103.ELsmp (don't know if that helps) with cPanel 1.0 (RC1). I am using the WordPress plugin BackupBuddy, which requires HTTP loopback connections to monitor/complete backups. This works fine on memelab.com.au, but doesn't work on any subdomain (e.g. staging.memelab.com.au). Is it possible to set up an A record or some such to remedy this? I'm aware of a workaround (setting WP_ALTERNATE_CRON), but I find it unsatisfactory due to the messy URLs. See BackupBuddy:_Frequent_Support_Issues#HTTP_Loopback_Connections_Disabled. Here is the reply from my host:

        ...as the main domain has its own separate DNS entry, it has a localhost entry
        which helps with loopback connections, whereas subdomains don't have a separate
        DNS zone, so it is not possible to create loopback connections for them.

    I have cPanel access to the Advanced Zone Editor - is there anything tricky I can do there? Maybe 127.0.0.2? (I remember reading that there were at least 8 local IPs available on (some) Linuxes.) All the A records point to the server IP, with the exception of localhost.memelab.com.au, which points to 127.0.0.1. I've just tried entering a new A record: localhost.itours.memelab.com.au pointing to 127.0.0.2. I still get the warning in BackupBuddy that loopback is not active, and cPanel won't let me enter 127.0.0.1 (guess it doesn't work like that!)

        nslookup itours.memelab.com.au
        Server:  203.88.112.33
        Address: 203.88.112.33#53

        Non-authoritative answer:
        Name:    itours.memelab.com.au
        Address: 117.55.224.177

  • Redirecting specific traffic to amazon AWS

    - by yoav r
    My server has received a sudden increase in (read) web traffic requesting many map image tiles, and Apache cannot handle it - it cannot even handle the redirections! The average load on my CentOS machine is more than 200. Is there software out there that can redirect SOME of the traffic, such as only the traffic for a specific directory (e.g. http://example.com/maptiles/abc.png), to a different address (such as http://s3.amazonaws.com/mytiles/abc.png)? Can this be done by HAProxy?
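
    Any lightweight front end that can match a path prefix should be able to do this; a minimal nginx sketch under that assumption (host, port and bucket names are placeholders taken from the question):

        server {
            listen 80;
            server_name example.com;
            # answer tile requests with a 302 to S3 -- cheap, no proxying involved
            location /maptiles/ {
                rewrite ^/maptiles/(.*)$ http://s3.amazonaws.com/mytiles/$1 redirect;
            }
            # everything else still goes to the local Apache
            location / {
                proxy_pass http://127.0.0.1:8080;
            }
        }

    HAProxy can do the same kind of prefix-based routing, but a plain redirect keeps the tile traffic off the box entirely.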

  • set all apache sites offline with temporary static cached original pages

    - by rubo77
    I would like to take all virtualhosts on my server down for maintenance for some time. The temporary page should contain something like: "sorry, the page www.xxx.com is down for maintenance. you can see the cached version here: ...". Then the trick: for as long as the server is down, the user should be pointed to the cached copy of the requested page from a cache like Google's (or similar). This would show the correct content for pages that are static anyway, giving visitors what they need in many cases, while I can shut down MySQL and the other services that would usually be needed to render those pages. How can I set up a global page on all virtualhosts that passes the originally requested URL through to PHP?
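
    A minimal sketch of the rewrite half, assuming a maintenance.php reachable under the same path on every vhost; the script receives the originally requested URL as a parameter and can print the matching cache link:

        # server-level config; note that virtual hosts may need
        # "RewriteOptions Inherit" before they pick up server-level rules
        RewriteEngine On
        RewriteCond %{REQUEST_URI} !^/maintenance\.php
        RewriteRule ^(.*)$ /maintenance.php?url=http://%{HTTP_HOST}$1 [L,QSA]

    maintenance.php could then emit the apology text plus a link to, say, Google's cached copy of the passed URL.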

  • reverse proxy only from one internal server

    - by hrost
    I have configured a reverse proxy, and it is working fine for one internal server - for example, our mail server. Now I would like to know whether it is possible to configure a reverse proxy for only one server/application (in this case, our intranet web). The problem is that the intranet calls other applications, both on the same intranet server and on other internal servers, and the only way I know to publish these resources is to reverse-proxy all the application servers through our DMZ Apache. What I would like instead is for the DMZ reverse proxy to expose only the intranet, with the other applications called by the intranet server itself rather than through the reverse proxy. I want this setup for security reasons, to allow external access to only one server. The system is Debian Squeeze with Apache 2.2. Is it possible? How?
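
    A sketch of a DMZ vhost that publishes only the intranet application, assuming mod_proxy and mod_proxy_http are loaded and the backend is reachable as intranet.internal (a placeholder name):

        <VirtualHost *:80>
            ServerName intranet.example.com
            ProxyRequests Off        # reverse proxy only, never a forward proxy
            ProxyPreserveHost On
            # only this one application is mapped; no other internal server is exposed
            ProxyPass        / http://intranet.internal/
            ProxyPassReverse / http://intranet.internal/
        </VirtualHost>

    Calls the intranet makes to other internal applications then happen server-to-server inside the LAN and need no entries on the DMZ proxy, as long as they are not links the browser itself must follow.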

  • Intermittent extrememly long response times when downloading documents

    - by pap
    I have a Java web application running on Tomcat 7, with an Apache httpd 2.2 front end connected via mod_jk/AJP. One part of the application serves files (up to 4 MB in size). Normally this all runs very smoothly with stable, low response times. However, in rare instances (<0.1% of downloads), the download time goes beyond 1 minute. After activating the ThreadStuckValve in Tomcat, I can see that the long responses seem to be stuck at org.apache.tomcat.jni.Socket.sendbb (native method), i.e. network I/O. At most, these long-running downloads take 5 minutes, which I strongly suspect is because of the default 300-second timeout in Apache 2.2 (http://httpd.apache.org/docs/2.2/mod/core.html, "TimeOut directive"). To me, this looks like network problems; the Apache timeout (if that is what kicks in at the 5-minute mark) suggests that ACK packets are not being transmitted correctly. My questions: what could be causing this - a browser closed at the receiving end without the socket being signaled as closed properly, or packet loss or some other network failure in transit? And where would I start troubleshooting? We're running Tomcat and Apache on Windows Server 2008 R2 in a VMware virtualized server.

  • Apache inflate application/ with mod_filter

    - by BGT
    I need to prevent PDF objects from being gzipped. Really, this only needs to happen if the request is from the Mozilla browser (but since I can't get something as seemingly simple as no-gzip for application/pdf to work, I figure it's wiser to start there). From the Apache documentation on mod_filter, I've got the following:

        <Location />
            FilterDeclare gzipDeflate CONTENT_SET
            FilterDeclare gzipInflate CONTENT_SET
            FilterProvider gzipDeflate deflate req=User-Agent $Mozilla/
            FilterProvider gzipInflate inflate resp=Content-Type $application/
            FilterChain +gzipDeflate +gzipInflate
        </Location>

    From my testing, the gzipDeflate filter is doing its job: all pages whose Content-Type does not start with application are being gzipped. But gzipInflate doesn't seem to be working at all. I've inspected the response in Firebug and verified that the Content-Type being sent down is application/pdf. I'll go ahead and ask a potentially stupid question, though: the response's Content-Type header in its entirety reads "application/pdf; charset=Windows-1252". Does that make any sort of difference, or is $application/ presumably enough to catch it? Any help is greatly appreciated. One other point: the URL that returns the PDF object does not have the .pdf extension; the PDF itself is stored in an Oracle database as a blob and appended to the page when appropriate (all the URLs in the system use the same baseline). This was part of an original inquiry by a helpful member at Stack Overflow, who pointed me towards mod_filter and suggested I post the question here.

  • How to block users from accessing *.php files directly in the Apache httpd server, so that content is accessed via the directory name instead

    - by Oxi
    My requirement looks simple, but Googling has not helped me yet. I want to show a 404 page to a user who tries to access *.php files on my website directly (not redirect to another folder or file). For example: when a client asks for www.example.com/home/ I want to show the content, but when the user requests www.example.com/home/index.php I want to show a 404 page. I tried different methods and nothing worked for me; one of my attempts is shown below:

        <Directory "C:/xampp/htdocs/*">
            <FilesMatch "^\.php">
                Order Deny,Allow
                Deny from all
                ErrorDocument 403 /test/404/
                ErrorDocument 404 /test/404/
            </FilesMatch>
        </Directory>

    Thanks in advance.
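
    One approach that produces a genuine 404 rather than a 403, sketched for Apache 2.4 with mod_rewrite: matching against THE_REQUEST (the raw request line) only catches URLs the client actually typed, so the internal DirectoryIndex subrequest for /home/index.php keeps working:

        RewriteEngine On
        # the request line looks like "GET /home/index.php HTTP/1.1"
        RewriteCond %{THE_REQUEST} \s[^\s]*\.php[\s?]
        RewriteRule ^ - [R=404,L]

    mod_alias's RedirectMatch also accepts a 404 status, but it may interfere with the index.php subrequest, so the THE_REQUEST variant is the safer sketch.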

  • Set up multiple websites on a local web server

    - by mickburkejnr
    I have spent the last few days setting up a CentOS 6 server on my local network so that I can host multiple projects that I'm currently working on. Everything has been set up so that I access the server by typing 192.168.1.10 and the Apache test page comes up. What I'm aiming for is to access different projects by typing 192.168.1.10/project, and then view each project as if it were on its own standalone server. I have thought about just sticking these sites inside folders on the server and accessing them that way, but a lot of my projects use CakePHP, so this isn't feasible. So what I need to do is create VirtualHosts in Apache to allow me to do this, but without using a domain name; I want to stick to using the (static) IP address of the machine. Any ideas?

    EDIT: I've followed Peter's suggestion, but now I have a new problem. In the httpd.conf file I have entered the following:

        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot /www/html/project1
            ServerName local.project1.com
            ErrorLog logs/local.project1.com-error_log
            CustomLog logs/local.project1.com-access_log common
        </VirtualHost>

    And now Apache is saying:

        Starting httpd: Warning: DocumentRoot [/www/html/project1] does not exist

    when it clearly does exist. I've disabled SELinux and can confirm it isn't turned on. I've also checked the ownership of the folder, and it's owned by root. I can save files to these folders using a guest FTP account (which isn't associated with root), so the folders are being listed and can be written to, but when I try the folder in a web browser it doesn't seem to work either. I've also rebooted the server and the problem persists. What should I change to resolve this?
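
    If name-based vhosts stay painful without domain names, the original goal (192.168.1.10/project1) can also be sketched with a plain Alias per project; CakePHP's .htaccess rewrites still apply through AllowOverride (CentOS 6 ships Apache 2.2, hence the Order/Allow syntax):

        Alias /project1 /var/www/html/project1
        <Directory /var/www/html/project1>
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>

    One caveat: CakePHP apps served from a sub-path may additionally need RewriteBase /project1 in their .htaccess files.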

  • How do I redirect/rewrite to the FQDN URL without setting ServerName?

    - by ChaimKut
    Often on intranets, users will direct URLs to a hostname without supplying the FQDN - for example, http://internalHost instead of http://internalHost.example.com. I would like to redirect users / rewrite URLs so that everything uses the FQDN. Here's the catch: I don't want to set ServerName explicitly. (This is for a product which will be deployed on multiple intranets, so we can't know the value of ServerName ahead of time.) According to http://wiki.apache.org/httpd/CouldNotDetermineServerName, Apache uses a reverse lookup to determine a default FQDN. How can I make use of / reference the FQDN that Apache is using, for a mod_rewrite rule or a redirect?
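
    A hedged sketch with mod_rewrite: treat any Host header without a dot as a bare intranet name, and rely on %{SERVER_NAME} resolving to Apache's reverse-lookup FQDN when no ServerName is set and UseCanonicalName is on:

        UseCanonicalName On
        RewriteEngine On
        # Host header contains no dot -> the client used the short name
        RewriteCond %{HTTP_HOST} !\.
        RewriteRule ^(.*)$ http://%{SERVER_NAME}$1 [R=301,L]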

  • Re-Include Module

    - by Nino55
    Hello, I need something like this:

        module One
          def test; puts 'Test One'; end
        end

        module Two
          def test; puts 'Test Two'; end
        end

        class Foo
          include One
          include Two
          include One
        end

    In this case I need 'Test One' as the result, but obviously it returns 'Test Two'. I need a clean, simple way to re-include my module. Any suggestion? Thanks!
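
    For context, Ruby silently skips an include when the module is already in the ancestor chain, which is why the third include is a no-op. One workaround, sketched below: include a duplicate of the module, since the copy is a distinct module object and gets inserted anew, ahead of Two:

        module One
          def test; puts 'Test One'; end
        end

        module Two
          def test; puts 'Test Two'; end
        end

        class Foo
          include One
          include Two
          include One.dup  # a fresh module object, so it lands above Two in the chain
        end

        Foo.new.test  # prints "Test One"

    The trade-off is that Foo.ancestors then contains an anonymous module, which can make debugging a little noisier.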

  • Only 192.168.0.3 can request most files, but anyone can request /public/file.html

    - by mattalexx
    I have the following virtual host on my development server:

        <VirtualHost *:80>
            ServerName example.com
            DocumentRoot /srv/web/example.com/pub
            <Directory /srv/web/example.com/pub>
                Order Deny,Allow
                Deny from all
                Allow from 192.168.0.3
            </Directory>
        </VirtualHost>

    The "Allow from 192.168.0.3" part is there to allow requests only from my workstation machine. I want to tweak this so that anyone can request a certain URL: http://example.com/public/file.html. How do I change this to let /public/file.html requests through from anyone? Note: /public/file.html doesn't actually exist as a file on the server; I redirect all incoming requests through a single index file using mod_rewrite.
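
    One hedged sketch: <Location> sections are merged after <Directory> sections, so a Location block for the public URL can override the directory-level deny. And since it matches the request URL rather than a file, it works even though /public/file.html only exists virtually through mod_rewrite:

        # merged after the <Directory> rules, so this wins for just this URL
        <Location /public/file.html>
            Order Deny,Allow
            Allow from all
        </Location>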

  • Any way to back up nginx before recompiling

    - by JM4
    I am looking to install the HttpGeoipModule for nginx, but have learned I have to recompile the whole thing from source in order to do so. I have a new Media Temple DV 4.0 server that comes with nginx 1.3.0 stock. I have never had to recompile from source before and am a bit nervous about making changes without being able to revert to a previous state in the event something messes up (that, and the fact it affects a live server, so I have no idea what the downtime would be). My plan was to list all the currently compiled modules (nginx -V shows them), then rebuild from source with that same configuration plus the ./configure --with-http_geoip_module flag. Is it possible to back up the existing nginx installation in case something goes wrong?
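
    A minimal sketch of a pre-build safety net (the paths are guesses for this kind of layout; adjust to where the binary and config actually live):

        # record the exact configure flags the current binary was built with
        nginx -V 2> /root/nginx-build-flags.txt

        # keep copies of the binary and the whole config tree
        cp /usr/sbin/nginx /usr/sbin/nginx.bak
        tar czf /root/nginx-conf-backup.tar.gz /etc/nginx

    Rolling back is then: stop nginx, restore the .bak binary over the new one, untar the config, and start it again.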

  • Set up an automatic server reboot when a particular service fails

    - by user1179459
    I am running a Linux-based server (CentOS 6.0) with cPanel and WHM. I have a critical website with a chat feature that uses Openfire as the chat backend. Over the last few weeks I have noticed this service crashes quite often; I have no way of knowing when it happens, and I have to wait until the next day to restart the server. (It can only be fixed with a server reboot, as it has something to do with a Java memory problem.) Is there a way I can set up monitoring on the server so that if this service goes down, the server reboots itself? Is this possible, or is there a better way to overcome the problem?
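
    A sketch of a simple cron-driven watchdog, assuming the Openfire process is visible to pgrep under that name; restarting just the service first (and rebooting only as a last resort) would be gentler than rebooting on every failure:

        #!/bin/bash
        # /root/check_openfire.sh -- run from cron, e.g. */5 * * * *
        if ! pgrep -f openfire > /dev/null; then
            logger "openfire is down -- rebooting"
            /sbin/reboot
        fi

    Purpose-built monitors such as monit can do the same with retry logic and alerting built in.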

  • getting a 404/403 error for payment gateway

    - by Obay Ouano
    We are setting up an online payment facility using a payment gateway. After the payment gateway finishes processing the credit card details for a payment, the user is redirected to a "403 Forbidden" page. The logs show:

        [MY_IP_ADDRESS_HERE] - - [SOME_DATE_HERE] "GET /POSTBACK_URL.php?txnid=1338434567&result=failure&reason=The+remote+server+returned+an+error%3a+(404)+Not+Found.&digest=7a115270c56df5945c43ad86e56b2e930a3cfd50 HTTP/1.1" 404 - "PAYMENT_GATEWAY_URL_HERE" "BROWSER_DETAILS_HERE"

    This means that when the PAYMENT_GATEWAY_URL attempts to open our POSTBACK_URL, it gets a 404 error - is that correct? But then why does the page say "403 Forbidden"? In any case, we tried copy-pasting that same URL into a browser window, and the page opened successfully with our programmed error-notification message. So why couldn't it be opened when the payment gateway tried to redirect to it, while we could? Is this some sort of permissions issue? If so, the postback URL's file permissions are already 755. What am I missing?

  • apache returning "The connection was reset"

    - by usjes
    One of my dedicated servers had a network issue today and the data center had to replace a router. Since then, the sites on that server return a "The connection was reset" error most of the time. I tried installing nginx, and pages open better with it, but the error still shows sometimes. Everything in the config seems normal; what could be causing this error? UPDATE: I just noticed that in the WHM Apache status there is always only 1 request currently being processed, with 8 idle workers. I know for sure the server receives thousands of requests per minute. What could be limiting this to such a low number?

  • Running multiple sites with multiple domains in Apache

    - by PsychoData
    I am having a rough time running Apache with multiple domain names. Here is a snippet of my config file:

        Listen *:80
        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName www.sample1.net
            DocumentRoot /var/www/sample1-net
        </VirtualHost>

        <VirtualHost *:80>
            ServerName www.example2.net
            DocumentRoot /var/www/example2-net
        </VirtualHost>

    I keep getting an error saying that NameVirtualHost has no VirtualHosts. I want both sites running on the same IP, and I'm not sure why this doesn't work. I've been digging through the documentation for VirtualHost, NameVirtualHost, and Apache's page about name-based virtual hosting - the example on the name-based page is almost exactly my config! What am I doing wrong?
