Search Results

Search found 3247 results on 130 pages for 'apache2 2'.

Page 76/130 | < Previous Page | 72 73 74 75 76 77 78 79 80 81 82 83  | Next Page >

  • Htaccess strange behaviour with Nginx

    - by Termos
    I have a site running on Nginx (v1.0.14) serving as a reverse proxy which passes requests to Apache (v2.2.19). Nginx runs on port 80, Apache is on 8080. Overall the site works fine, except that I cannot block access to certain directories with an .htaccess file. For example, I have 'my-protected-directory' on 'www.site.com', and inside it an .htaccess file with the following code:

        <Files *>
        order deny,allow
        deny from all
        allow from 1.2.3.4   <--- my ip address here
        </Files>

    When I try to access this page with my IP (1.2.3.4) I get a 404 error, which is not what I expect: http://www.site.com/my-protected-directory. However, everything works as expected when this page is served directly by Apache: I can see the page, everyone else can't (http://www.site.com:8080/my-protected-directory).

    Update. Nginx config (7.1.3.7 is the site IP):

        user apache;
        worker_processes 4;
        error_log logs/error.log;
        pid logs/nginx.pid;
        events {
            worker_connections 1024;
        }
        http {
            include mime.types;
            default_type application/octet-stream;
            log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                            '$status $body_bytes_sent "$http_referer" '
                            '"$http_user_agent" "$http_x_forwarded_for"';
            sendfile on;
            keepalive_timeout 65;
            gzip on;
            gzip_min_length 1024;
            gzip_http_version 1.1;
            gzip_proxied any;
            gzip_comp_level 5;
            gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript image/x-icon;
            server {
                listen 80;
                server_name www.site.com site.com 7.1.3.7;
                access_log logs/host.access.log main;
                # serve static files
                location ~* ^.+.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js)$ {
                    root /var/www/vhosts/www.site.com/httpdocs;
                    proxy_set_header Range "";
                    expires 30d;
                }
                # pass requests for dynamic content to Apache
                location / {
                    proxy_redirect off;
                    proxy_set_header X-Real-IP $remote_addr;
                    proxy_set_header Host $http_host;
                    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                    proxy_set_header Range "";
                    proxy_pass http://7.1.3.7:8080;
                }
            }
        }

    Could anyone please tell me what is wrong and how this can be fixed?
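
    A hedged aside: a frequent cause of this pattern is that Apache behind a proxy sees every request as coming from Nginx's address, so IP-based allow/deny rules in .htaccess compare against 127.0.0.1 instead of the real client and end up denying everyone. On Apache 2.2 the usual remedy is mod_rpaf, which restores the client address from the X-Forwarded-For header Nginx already sets. A minimal sketch, assuming mod_rpaf is installed and the module filename matches your build:

        # httpd.conf (sketch; module path varies per distro/build)
        LoadModule rpaf_module modules/mod_rpaf-2.0.so
        RPAFenable On
        # addresses of the trusted proxy
        RPAFproxy_ips 127.0.0.1 7.1.3.7
        # header carrying the real client IP
        RPAFheader X-Forwarded-For

    This alone doesn't explain a 404 rather than a 403, so treat it as a starting point, not a confirmed diagnosis.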

    Read the article

  • Apache on Windows random long wait times

    - by Jaxbot
    I have a development machine with Apache installed as a service on Windows. The installation is fresh out of the box, with no changes to the configuration aside from adding the PHP module. From day one, I've had a problem that looks like this: essentially, Apache freezes for about 11 seconds before replying to random requests. This appears to happen more frequently when the host hasn't been connected to in a while, but not always. I've eliminated MySQL, PHP, and the specific application; the long wait occurs even when loading a static file such as favicon.ico. Thus, the only factor remaining is Apache, which consistently freezes for around 10-11 seconds before replying. The problem is not the DNS issue that many people point to: the DNS lookup is instant, and the problem occurs both on localhost and 127.0.0.1. Thanks for the time.
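
    A hedged suggestion often given for this symptom on Windows builds of Apache 2.2: the Winsock AcceptEx() code path can stall on some network stacks, and disabling it is a low-risk experiment. In httpd.conf:

        # Assumption: Apache 2.2 on Windows. The 2.4 equivalent is
        # "AcceptFilter http none" / "AcceptFilter https none".
        Win32DisableAcceptEx

    Testing with KeepAlive Off can also help narrow down whether the stall happens on connection setup or on connection reuse.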

    Read the article

  • duplicate cache pages: Varnish

    - by Sukhjinder Singh
    Recently we configured Varnish on our server. It was set up successfully, but we noticed that if we open any page in multiple browsers, Varnish sends the request to Apache no matter whether the page is cached or not, and if we refresh twice in each browser it creates duplicate copies of the same page. What should happen instead: if a page is cached by Varnish, subsequent requests should be served from Varnish itself, whether we open the same page in another browser or from a different IP address. Following is my default.vcl file:

        backend default {
            .host = "127.0.0.1";
            .port = "80";
        }

        sub vcl_recv {
            if (req.url ~ "^/search/.*$") {
            } else {
                set req.url = regsub(req.url, "\?.*", "");
            }
            if (req.restarts == 0) {
                if (req.http.x-forwarded-for) {
                    set req.http.X-Forwarded-For = req.http.X-Forwarded-For + ", " + client.ip;
                } else {
                    set req.http.X-Forwarded-For = client.ip;
                }
            }
            if (!req.backend.healthy) {
                unset req.http.Cookie;
            }
            set req.grace = 6h;
            if (req.url ~ "^/status\.php$" ||
                req.url ~ "^/update\.php$" ||
                req.url ~ "^/admin$" ||
                req.url ~ "^/admin/.*$" ||
                req.url ~ "^/flag/.*$" ||
                req.url ~ "^.*/ajax/.*$" ||
                req.url ~ "^.*/ahah/.*$") {
                return (pass);
            }
            if (req.url ~ "(?i)\.(pdf|asc|dat|txt|doc|xls|ppt|tgz|csv|png|gif|jpeg|jpg|ico|swf|css|js)(\?.*)?$") {
                unset req.http.Cookie;
            }
            if (req.http.Cookie) {
                set req.http.Cookie = ";" + req.http.Cookie;
                set req.http.Cookie = regsuball(req.http.Cookie, "; +", ";");
                set req.http.Cookie = regsuball(req.http.Cookie, ";(SESS[a-z0-9]+|SSESS[a-z0-9]+|NO_CACHE)=", "; \1=");
                set req.http.Cookie = regsuball(req.http.Cookie, ";[^ ][^;]*", "");
                set req.http.Cookie = regsuball(req.http.Cookie, "^[; ]+|[; ]+$", "");
                if (req.http.Cookie == "") {
                    unset req.http.Cookie;
                } else {
                    return (pass);
                }
            }
            if (req.request != "GET" && req.request != "HEAD" &&
                req.request != "PUT" && req.request != "POST" &&
                req.request != "TRACE" && req.request != "OPTIONS" &&
                req.request != "DELETE") {
                return (pipe); /* Non-RFC2616 or CONNECT which is weird. */
            }
            if (req.request != "GET" && req.request != "HEAD") {
                return (pass);
            }
            if (req.http.Accept-Encoding) {
                if (req.url ~ "\.(jpg|png|gif|gz|tgz|bz2|tbz|mp3|ogg)$") {
                    # No point in compressing these
                    remove req.http.Accept-Encoding;
                } else if (req.http.Accept-Encoding ~ "gzip") {
                    set req.http.Accept-Encoding = "gzip";
                } else if (req.http.Accept-Encoding ~ "deflate") {
                    set req.http.Accept-Encoding = "deflate";
                } else {
                    # unknown algorithm
                    remove req.http.Accept-Encoding;
                }
            }
            return (lookup);
        }

        sub vcl_deliver {
            if (obj.hits > 0) {
                set resp.http.X-Varnish-Cache = "HIT";
            } else {
                set resp.http.X-Varnish-Cache = "MISS";
            }
        }

        sub vcl_fetch {
            if (beresp.status == 404 || beresp.status == 301 || beresp.status == 500) {
                set beresp.ttl = 10m;
            }
            if (req.url ~ "(?i)\.(pdf|asc|dat|txt|doc|xls|ppt|tgz|csv|png|gif|jpeg|jpg|ico|swf|css|js)(\?.*)?$") {
                unset beresp.http.set-cookie;
            }
            set beresp.grace = 6h;
        }

        sub vcl_hash {
            hash_data(req.url);
            if (req.http.host) {
                hash_data(req.http.host);
            } else {
                hash_data(server.ip);
            }
            return (hash);
        }

        sub vcl_pipe {
            set req.http.connection = "close";
        }

        sub vcl_hit {
            if (req.request == "PURGE") {
                ban_url(req.url);
                error 200 "Purged";
            }
            if (!obj.ttl > 0s) {
                return (pass);
            }
        }

        sub vcl_miss {
            if (req.request == "PURGE") {
                error 200 "Not in cache";
            }
        }
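
    A hedged observation about this symptom: one cached copy per browser is classically caused by the backend sending a "Vary: User-Agent" response header, which makes Varnish store a separate object for every browser string. Checking the backend response with curl -I against port 80 would confirm it. If that header is present, a vcl_fetch addition along these lines (a sketch, assuming your Varnish version uses the vcl_fetch/beresp naming shown above) lets all clients share one copy:

        sub vcl_fetch {
            # Assumption: backend sends "Vary: User-Agent".
            if (beresp.http.Vary ~ "User-Agent") {
                set beresp.http.Vary = regsuball(beresp.http.Vary, ",? *User-Agent", "");
                set beresp.http.Vary = regsub(beresp.http.Vary, "^, *", "");
                if (beresp.http.Vary == "") {
                    unset beresp.http.Vary;
                }
            }
        }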

    Read the article

  • Redirect to new domain and preserve username

    - by David Brown
    I recently switched to a new domain for a version control server I run. The server is usually accessed with a username included in the URL, such as https://user@sub.olddomain.com/some/stuff. I want to redirect requests on the old domain to the new domain and preserve everything else in the URL, including the username, so the former URL would be redirected to https://user@sub.newdomain.net/some/stuff. Currently I have the following rewrite condition and rule:

        RewriteCond %{HTTP_HOST} sub\.olddomain\.com
        RewriteRule (.*) https://sub.newdomain.net$1 [R=301,L]

    This works, except that it drops the userinfo part of the URL. Is there a way I can preserve the user info?

    Read the article

  • Fatal error: Out of memory (allocated ...) (tried to allocate ... bytes) not due to memory_limit setting

    - by Lorenz Meyer
    For a few days now, I have been getting the following error on my server: Fatal error: Out of memory (allocated 262144) (tried to allocate 393216 bytes). Usually this error is due to memory consumption exceeding the configured memory_limit, but in my case there is no relation: memory_limit is set to 128 MB, and here we do not even reach 1 MB. The server also does not have a big load; in fact it is an intranet server, and only a few people are connected to it. System: Windows Server 2003, 1 GB RAM, only 600 MB used; Apache 2.2.4; PHP 5.2.3. The error appears randomly, and the reported memory also varies randomly between a few kB and a few MB. Sometimes restarting Apache is required to get rid of the error, sometimes it disappears by itself; restarting Apache or the entire server helps only temporarily. Where could this problem come from? How could I narrow down the error source?
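
    One hedged way to start narrowing it down: log what PHP itself thinks it has allocated, so the failures can be correlated with actual usage and you can see whether the limit being hit belongs to PHP or to the process/OS. A minimal sketch that could sit at the top of a frequently hit script:

        <?php
        // Diagnostic sketch, not a fix: record real allocation, the peak,
        // and the configured limit in the error log.
        error_log(sprintf(
            "mem: %.2f MB (peak %.2f MB), limit %s",
            memory_get_usage(true) / 1048576,
            memory_get_peak_usage(true) / 1048576,
            ini_get('memory_limit')
        ));

    If the logged numbers stay tiny right up to a failure, that points away from PHP's own limit and toward the process environment (for example, per-process memory constraints of Apache on Windows).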

    Read the article

  • How to have PHP and mod_wsgi python app on the same domain?

    - by Lazik
    I am using Apache with mod_wsgi (Python 3) on Ubuntu 12.04. I have a Python app (Bottle) at www.mysite.com/, and in it I have routes like www.mysite.com/abbb?q=blab. I would like the path www.mysite.com/forum to resolve to a PHP app (Simple Machines Forum). Ideally, I would like Apache to handle the forum part and pass it to PHP, instead of coding it in the Python app; I don't know if that's possible. I'm new to this. I have read https://code.google.com/p/modwsgi/wiki/ConfigurationGuidelines#The_Apache_Alias_Directive but I don't understand how to use it. Here is my Apache conf for the mod_wsgi app; I don't know how to specify the PHP portion.

        <VirtualHost *:80>
            ServerName www.ex.com
            ServerAlias ex.com *.ex.com

            RewriteEngine On
            RewriteCond %{HTTP_HOST} !^www\.
            RewriteRule ^(.*)$ http://www.%{HTTP_HOST}$1 [R=301,L]

            WSGIDaemonProcess ex user=www-data group=www-data processes=1 threads=5
            WSGIScriptAlias / /var/www/vhosts/ex/app.wsgi

            <Directory /var/www/vhosts/ex>
                WSGIProcessGroup ex
                WSGIApplicationGroup %{GLOBAL}
                Order deny,allow
                Allow from all
            </Directory>
        </VirtualHost>
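
    A hedged sketch of what the linked wiki page describes: an Alias directive that is more specific than the WSGIScriptAlias takes precedence over it, so /forum can be carved out of the WSGI application and served as ordinary PHP files. The forum path below is an assumption, and mod_php is assumed to be enabled:

        # inside the same <VirtualHost>
        Alias /forum /var/www/vhosts/ex/forum

        <Directory /var/www/vhosts/ex/forum>
            DirectoryIndex index.php
            Order deny,allow
            Allow from all
        </Directory>

    With that in place, requests under /forum never reach app.wsgi, and everything else still does.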

    Read the article

  • Node.js apps and wordpress on the same vps

    - by Msencenb
    So currently my Linode (Ubuntu 11.10) serves up three Node.js apps for me using Connect's vhost middleware listening on port 80. Here is an example of how vhost sets up a domain:

        var portfolio = require('./bootstrap-portfolio/lib/app.js');
        var server = express();
        server.use(express.vhost('sencedev.com', portfolio));
        server.use(express.vhost('www.sencedev.com', portfolio));
        server.listen(80);

    However, I would now like to add a WordPress installation to my VPS as well. In the past this has meant a traditional Apache installation for me, but I'm a bit unsure how Node.js plus a different web server (Apache or nginx) should interact. Any thoughts on how I should approach hosting WordPress and Node.js on the same box?
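
    One hedged arrangement (hostnames and ports below are placeholders): give port 80 to a general-purpose server such as nginx, let it route by Host header, and move the node apps and Apache+PHP onto internal ports:

        # nginx as the only listener on :80 (sketch)
        server {
            listen 80;
            server_name sencedev.com www.sencedev.com;
            location / {
                proxy_set_header Host $host;
                proxy_pass http://127.0.0.1:3000;  # node app, port assumed
            }
        }
        server {
            listen 80;
            server_name blog.example.com;          # wordpress host assumed
            location / {
                proxy_set_header Host $host;
                proxy_pass http://127.0.0.1:8080;  # apache + wordpress
            }
        }

    The node code then calls server.listen(3000) instead of 80, and the vhost middleware keeps working unchanged behind the proxy.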

    Read the article

  • Loading wordpress admin page prevents and blocks further requests

    - by Auxiliary
    I've installed a WordPress site on a host (actually I moved it from localhost). The site loads and works perfectly; however, if I log in as Admin, the admin page doesn't completely load, and all further requests to the server are blocked for about 10 or 15 minutes. I can't even ping the site. Where could this problem come from? Is it firewall related, or...? Any help or idea is greatly appreciated.

    Read the article

  • logrotate> removing delaycompress function: should I compress the last log myself?

    - by user120006
    I'm removing the delaycompress option from my logrotate script. Before running logrotate again, should I compress the last rotated log myself? This is the current situation:

        -rw-r----- 1 root adm 4,7M  5 mag 18:38 access.log
        -rw-r----- 1 root adm 5,2M 29 apr 05:44 access.log.1
        -rw-r----- 1 root adm 473K 22 apr 05:45 access.log.2.gz
        -rw-r----- 1 root adm 605K 15 apr 05:44 access.log.3.gz
        -rw-r----- 1 root adm 588K  8 apr 05:44 access.log.4.gz

    The question is: should I compress access.log.1 and THEN launch logrotate? Or will logrotate understand that I removed the delaycompress option and fix things itself?
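
    For what it's worth, a hedged sketch of the manual route: logrotate normally compresses only the log it has just rotated, so an already rotated, still uncompressed file may simply be renamed onward and left uncompressed. Compressing it by hand before the next run keeps the sequence consistent (path assumed):

        # run before the next rotation
        gzip /var/log/apache2/access.log.1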

    Read the article

  • Access my local server by hostname or servername

    - by S.M.09
    I have a local server hosting a few applications in Tomcat, reached through an Apache proxy. The clients or users trying to access these applications have to address them like 10.XXX.XXX.XX:8080/appName or 10.XXX.XXX.XX/appName, but I want to replace the IP address with some other name related to my applications. I cannot go and enter the server's hostname in each user's /etc/hosts, nor do I want to set up DNS. Is there another way to do this? I am using ProxyPass XXX YYY to redirect all Tomcat applications to port 80.

    Read the article

  • .htaccess issue on Apache Web Server in Ubuntu VM

    - by Neon Flash
    I just installed the Apache web server on Ubuntu 11.04 in VMware Workstation. I created a basic HTML page, named it index.html, and placed it in /var/www (the document root). I am able to access this web page from my host OS (Windows 7) by pointing the browser to http://192.168.2.2/index.html, where 192.168.2.2 is the IP address of the Ubuntu VM. Next, to test various configurations of .htaccess files, I created a new directory in /var/www called members. Inside this directory I placed a .htaccess file with the following configuration:

        AuthUserFile /www/Neon/auth/.htpasswd
        AuthName "neon's home"
        AuthType Basic
        require valid-user
        IndexIgnore */*

    I created the directory path /var/www/Neon/auth/ and placed a .htpasswd file inside it. To put the username and hash into the .htpasswd file, I created a username "neon", calculated the DES hash of a password, and placed it in the file in the format username:hash. Now, when I try to access the web page http://192.168.2.2/members/ it does not prompt me for a username and password with a popup box. Instead it just displays the index.html placed inside the members directory. I would like to get this configuration working :)
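
    Two hedged observations that may explain the silent pass-through. First, Ubuntu's default site ships with AllowOverride None for /var/www, in which case Apache never reads the .htaccess at all; the vhost would need something like:

        # /etc/apache2/sites-available/default (stanza sketch)
        <Directory /var/www/members>
            AllowOverride AuthConfig
        </Directory>

    Second, the AuthUserFile path in the .htaccess (/www/Neon/auth/.htpasswd) does not match where the file was actually created (/var/www/Neon/auth/.htpasswd); once overrides are enabled, that mismatch would surface as an error rather than a silent skip, so both need fixing.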

    Read the article

  • Subversion error: Repository moved permanently to please relocate

    - by Bart S.
    I've set up Subversion and Apache on my server. If I browse to it through my web browser it works fine (http://svn.host.com/reposname). However, if I do a checkout on my machine I get the following error:

        Command: Checkout from http://svn.host.com/reposname, revision HEAD, Fully recursive, Externals included
        Error: Repository moved permanently to 'http://svn.host.com/reposname/'; please relocate

    I checked Apache's error log, but it didn't say anything (it does now; see edit). My repositories are stored under /var/www/svn/repos/ and my website under /var/www/vhosts/x/... Here's the conf file for the subdomain:

        <Location />
            DAV svn
            SVNParentPath /var/www/svn/repos/
            AuthType Basic
            AuthName "Authorization Realm"
            AuthUserFile /var/www/svn/auth/svn.htpasswd
            Require valid-user
        </Location>

    Authentication works fine. Does anyone know what might be causing this?

    -- Edit

    So I restarted Apache (again) and tried again, and now it gives me an error message, but it doesn't really help. Anyone have an idea what it means?

        [Wed Mar 31 23:41:55 2010] [error] [client my.ip.he.re] Could not fetch resource information. [403, #0]
        [Wed Mar 31 23:41:55 2010] [error] [client my.ip.he.re] (2)No such file or directory: The URI does not contain the name of a repository. [403, #190001]

    -- Edit 2

    If I do svn info it doesn't give anything useful:

        [root@eduro eduro.nl]# svn info http://svn.domain.com/repos/
        Username: username
        Password for 'username':
        svn: Repository moved permanently to 'http://svn.domain.com/repos/'; please relocate

    I also tried doing a local checkout (svn checkout file:///var/www/svn/repos/reposname) and that works fine (adding / committing works fine too). So it seems it has something to do with Apache. Some other information: I'm running CentOS 5.3, Plesk 9.3, and Subversion 1.6.9 (r901367).

    -- Edit 3

    I tried moving the repositories, but it didn't make any difference. SELinux is disabled, so that isn't it either.

    -- Edit 4

    Really? Nobody :(?
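
    A hedged reading of that second log line: with SVNParentPath, the URL handed to the client must name one repository below the <Location>, and pointing the client at the parent collection itself yields exactly this "URI does not contain the name of a repository" 403 plus the "moved permanently" relocation notice. Note that the svn info attempt above targets http://svn.domain.com/repos/ rather than a repository, so a check worth making (repository name assumed):

        # names a repository under the parent path -- should work
        svn info http://svn.domain.com/reposname

        # the parent path itself -- expected to fail with "moved permanently"
        svn info http://svn.domain.com/

    If the repository URL itself still fails, overlap between the <Location /> and whatever Plesk serves for that subdomain would be the next suspect.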

    Read the article

  • Apache keeps adding 8080 port by itself while I'm telling it to use 80 only

    - by laggingreflex
    Here's my httpd.conf. Inside it, I have the following in place:

        #Listen 12.34.56.78:80
        #Listen 127.0.0.1:8887
        Listen 127.0.0.1:80
        #Listen 127.0.0.1:8080
        Listen 192.168.1.4:80

    and I have a .htaccess in my local www folder:

        RewriteEngine On
        RewriteRule ^wordpress(.*)$ wp-oct/live$1

    with WordPress installed in /wp-oct/live/, to which /wordpress/ is supposed to redirect. But it doesn't: it redirects to http://localhost:8080/wp-oct/live/ instead. Why is 8080 showing up?
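
    A hedged guess: once the rewrite lands inside WordPress, WordPress itself issues a redirect to its stored site URL, and if the site was installed while Apache listened on 8080, that stored URL still carries the port; the Listen lines never add a port to redirects on their own. Checking the options table would confirm or rule this out (database name and table prefix assumed):

        mysql -u root -p wordpress -e \
          "SELECT option_name, option_value FROM wp_options
           WHERE option_name IN ('siteurl', 'home');"

    If those values end in :8080, updating them (or setting WP_HOME/WP_SITEURL in wp-config.php) should stop the port from reappearing.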

    Read the article

  • Using AWStats, cannot get MaxNbOfExtraX to limit rows in Extra Report

    - by user137519
    Folks, got something really odd here I'd like to resolve. I've been using AWStats and have a couple of extra reports. I cannot get any of them to limit the rows using MaxNbOfExtraX. Here are two examples:

        ExtraSectionName1="Top 100 Searches"
        ExtraSectionCodeFilter1="200 304"
        ExtraSectionCondition1="URL,/search/search_post.php"
        ExtraSectionFirstColumnTitle1="Search Parameters"
        ExtraSectionFirstColumnValues1="QUERY_STRING,(.*)"
        ExtraSectionFirstColumnFormat1="QueryParameters: %s"
        ExtraSectionStatTypes1=HL
        ExtraSectionAddAverageRow1=0
        ExtraSectionAddSumRow1=1
        MaxNbOfExtra1=100
        MinHitExtra1=4

        ExtraSectionName2="Top 100 Downloads"
        ExtraSectionCodeFilter2="200 304"
        ExtraSectionCondition2="URL,/filedownload.php"
        ExtraSectionFirstColumnTitle2="File Downloads"
        ExtraSectionFirstColumnValues2="QUERY_STRING,(.[0-9]{5})(h|p)?."
        ExtraSectionFirstColumnFormat2="File ID: %s"
        ExtraSectionStatTypes2=HL
        ExtraSectionAddAverageRow2=0
        ExtraSectionAddSumRow2=1
        MaxNbOfExtra2=100
        MinHitExtra2=3

    According to all the documentation I've read, MaxNbOfExtra1 should keep the limit to 100. However, when I run this with debug messages enabled, I get a message indicating the query would be in excess of 500 and would not run. I increased ExtraTrackedRowsLimit to 2000 and it worked, but the option I provided should have lowered that. I even tried MaxNbOfExtra1=100 without ExtraTrackedRowsLimit, but got the same result: no limit at 100, and the "excess of 500" error. I have URLWithQuery=1, and my reports do run properly along with my regex filters. I am using MinHitExtra1 to limit the rows and that works, but why can I not get the MaxNbOfExtraX option to work? Any ideas? Thanks in advance.

    Read the article

  • How to simulate Apache [END] flag on a redirect?

    - by Javier Méndez
    For business-specific reasons I created the following rewrite rule for Apache 2.2.22 (mod_rewrite):

        RewriteRule /site/(\d+)/([^/]+)\.html /site/$2/$1 [R=301,L]

    Given a URL like http://www.mydomain.com/site/0999/document.html, it is translated to http://www.mydomain.com/site/document/0999.html. That's the expected scenario. However, there are documents whose names are only numbers. Consider http://www.mydomain.com/site/0055/0666.html: it gets translated to http://www.mydomain.com/site/0666/0055.html, which also matches my rewrite rule pattern, so I end up with "The web page resulted in too many redirects" errors from browsers. I have researched for a long time and haven't found "good" solutions. Things I tried:

    1. Use the [END] flag. Unfortunately it is not available on my Apache version, nor does it work with redirects.
    2. Use %{ENV:REDIRECT_STATUS} in a RewriteCond clause to end the rewrite process (L). For some reason %{ENV:REDIRECT_STATUS} was empty every time I tried.
    3. Add a response header with the Header clause if my rule matches, and then check for that header (see here for details). It seems that a) REDIRECT_addHeader is empty, and b) headers can't be set explicitly on the 301 response.

    There is one more alternative: I could add a query parameter to the redirect URL indicating it comes from a redirect, but I don't like that solution, as it seems too hacky. Is there a way to do exactly what the [END] flag does in older Apache versions, such as mine (2.2.22)? Thanks!

    Read the article

  • How to configure the PHP libxml path after updating libxml from 2.2.26 to 2.9

    - by Cauliturtle
    Our servers need to update libxml2 from version 2.2.26 to 2.9 (the latest version). Installing libxml2-2.9 on the servers was no problem, but how can we configure the path to the libxml2 libraries in PHP? phpinfo() still shows the old version. What we have done so far: installed libxml2 2.7.x on CentOS 5.x, using yum to install local files; yum info libxml2 shows that 2.9 was installed. Thanks!
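
    A hedged pointer: PHP's libxml bindings are linked at build time, so phpinfo() keeps reporting whatever the PHP binary was compiled against, and swapping the system library is usually not enough on its own. The usual route is to rebuild PHP against the new library; the install prefix below is an assumption:

        # rebuild PHP pointing at the new libxml2 (sketch)
        ./configure --with-libxml-dir=/usr/local/libxml2-2.9 ...your other options...
        make && make install

    Packaged PHP builds would instead need a rebuilt RPM or a vendor package linked against the newer libxml2.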

    Read the article

  • Error configuring virtual hosts

    - by user148351
    I have a problem using my virtual hosts. When I try to connect to my server by direct IP address, for example http://111.11.11.111/, I see the following error in the Apache error log:

        script '/var/www/html/mmm/public/index.php' not found or unable to stat

    The file index.php exists and has correct access rights! I have virtual hosts configured:

        <VirtualHost *:80>
            DocumentRoot /var/www/html/mmm/public
            ServerName example.com
            ServerAlias example.com www.example..com
            <Directory var/www/html/mmm/public>
                AllowOverride All
            </Directory>
        </VirtualHost>

    Why, when I connect by IP address, does it look for index.php in the virtual host's root directory rather than in the server's root directory?
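
    Two hedged notes on the stanza above. First, a request that matches no ServerName or ServerAlias is served by the first *:80 vhost defined, so direct-IP traffic landing in this vhost is expected Apache behavior; a catch-all vhost declared before the others would change that:

        # sketch: declared first, this vhost catches direct-IP requests
        <VirtualHost *:80>
            ServerName catchall.invalid
            DocumentRoot /var/www/html
        </VirtualHost>

    Second, the <Directory> path is missing its leading slash ("var/www/..." instead of "/var/www/..."), so the AllowOverride in it never applies, and "www.example..com" in the ServerAlias carries a stray double dot.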

    Read the article

  • How to allow unprivileged apache/PHP to do a root task (CentOS)

    - by Chris
    I am setting up a sort of personal dropbox for our customers on a CentOS 6.3 machine. The server will be accessible through SFTP and a proprietary HTTP service based on PHP. This machine will be in our DMZ, so it has to be secure. Because of this I have Apache running as an unprivileged user, have hardened the security on Apache, the OS and PHP, applied a lot of filtering in iptables, and applied some restrictive TCP wrappers. Now, you might have suspected this one was coming: SELinux is also set to enforcing. I'm setting up PAM to use MySQL so my users in the web application can log in. These users will all be in a group that can use SSH only for SFTP, and users will be chrooted to their own 'home' folder. To allow this, SELinux wants the folders to have the user_home_t tag, and the parent directory needs to be writable by root only. If these restrictions are not met, SELinux kills the SSH pipe immediately. The files need to be accessible through both HTTP and SFTP, so I have made a SELinux module to allow Apache to search/attr/read/write etc. directories with the user_home_dir_t tag. As SFTP users are stored in MySQL, I want to set up their home directories upon user creation. This is a problem, since Apache has no write access to /home; it's writable only by root, which is required to keep SELinux and OpenSSH happy. Basically, I need to let Apache do only a few tasks as root, and only within /home. What I need Apache to do with root privileges is the following:

        mkdir /home/userdir/
        mkdir /home/userdir/userdir
        chmod -R 0755 /home/userdir
        umask 011 /home/userdir/userdir
        chcon -R -t user_home_t /home/userdir
        chown -R user:sftp_admin /home/userdir/userdir
        chmod 2770 /home/userdir/userdir

    This would create a home for the user. Now, I have an idea that might work: cron. That would mean the server checks every minute for users that have no home, so when creating a user the interface would freeze for an average of 30 seconds before the account creation could be confirmed, which I would prefer to avoid. Does anybody know if something can be done with sudoers? Any other ideas are welcome too... Thanks for your time!
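
    For the sudoers idea, a hedged sketch: rather than granting apache the individual commands (each of which is dangerous as root with free arguments), wrap the whole sequence in one audited script that validates its username argument, and allow only that script. The script path and name are assumptions:

        # /etc/sudoers.d/apache-mkhome  (edit with visudo -f)
        # CentOS 6 ships "Defaults requiretty", which blocks daemons;
        # relax it for this one command only.
        Defaults!/usr/local/sbin/mkhome.sh !requiretty
        apache ALL=(root) NOPASSWD: /usr/local/sbin/mkhome.sh

    The PHP side would then run something like sudo /usr/local/sbin/mkhome.sh username (with the argument passed through escapeshellarg), giving immediate home creation without the cron polling delay.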

    Read the article

  • apache url / filename with special characters

    - by Mario Delgado
    I have this URL: http://domain.com/wp-content/uploads/2012/10/Hvilke-vilkår-følger-med-når-du-bestiller-nyt-bredbånd.png

    If I ftp/ssh in or just browse to that folder (Apache index feature), I see the file Hvilke-vilkår-følger-med-når-du-bestiller-nyt-bredbånd.png, and if I click on the link from the Apache index, I can see the file. However, if I copy the URL and try to browse to it directly, I get the error:

        The requested URL /wp-content/uploads/2012/10/Hvilke-vilkÃ¥r-følger-med-nÃ¥r-du-bestiller-nyt-bredbÃ¥nd.png was not found on this server.

    Also my error log says:

        File does not exist: /wp-content/uploads/2012/10/Hvilke-vilk\xc3\xa5r-f\xc3\xb8lger-med-n\xc3\xa5r-du-bestiller-nyt-bredb\xc3\xa5nd.png

    Read the article

  • apache httpd cannot browse through browser

    - by nuttynibbles
    I've set up Apache and PHP on a virtual machine, and everything works fine inside the VM: I'm able to execute PHP files and run phpMyAdmin connecting to MySQL. From my host machine I'm able to ping and ssh into the VM. However, I'm unable to browse the PHP files in the host's browser using the IP address. In my httpd.conf I'm listening on port 80, and I enabled ServerName 192.168.75.102:80. Am I missing some settings? Port settings, maybe?
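
    A hedged first check, given that ping and ssh work but HTTP doesn't: the guest's firewall may admit port 22 while filtering 80. On an iptables-based guest, something like:

        # run on the VM: list rules and look for port 80
        iptables -L -n --line-numbers | grep 80

        # temporary test rule if 80 turns out to be filtered (not a fix)
        iptables -I INPUT -p tcp --dport 80 -j ACCEPT

    If the firewall is clear, the next suspect would be the Listen directive binding only to 127.0.0.1 instead of all interfaces.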

    Read the article

  • In Linux, what's the best way to delegate administration responsibilities, like for Apache, a database, or some other application?

    - by Andrew Banks
    In Linux, what's the best way to delegate administration responsibilities for Apache and other "applications"? File permissions? Sudo? A mix of both? Something else? At work we have two tiers of "administrators":

    1. Operating system administrators. These are your run-of-the-mill "server administrators." They are responsible for just the operating system.
    2. Application administrators. The people who build the web site. This includes not only writing the SQL, PHP, and HTML, but also setting up and running Apache and PostgreSQL or MySQL. The aforementioned OS admins will install this stuff, but it's mainly up to the app admins to edit all the config files, start and stop processes when needed, and so on.

    I am one of the app admins. This is different from what I am used to. I used to just write code; the sysadmin took care not only of the OS but also of installing, setting up, and maintaining the server software. But he left. Now I'm in charge of setting up Apache and the database. The new sysadmins say they just handle the operating system. It's no problem; I welcome learning new stuff. But there is a learning curve, even for the OS admins. Apache, by default, seems to be set up for administration by root directly: all the config files and scripts are 644 and owned by root:root. I'm not given the root password, naturally, so the OS admins must somehow give my ordinary OS user account all the rights necessary to edit Apache's config files, start and stop it, read its log files, and so on. Right now they're using a mix of: (1) giving me certain sudo rights, (2) adding me to certain groups, and (3) changing the file permissions of various directories to make them writable by one of the groups I'm in. This never goes smoothly. There's always a back-and-forth between me and the sysadmins. They say it's ready; then I try certain things, and half of them I still can't do. So they make some more changes. Then finally I seem to be independent and can administer Apache and the database without pestering them anymore. It's the sheer complication and number of changes that make me uncomfortable. Even though it finally works, more or less, it seems hackneyed. I feel like we're doing it wrong. It seems like the makers of the software would have anticipated this scenario (someone other than root administering it) and would have a clean two- or three-step program to delegate responsibility to me. But it feels like we are really chewing up the filesystem, moving far away from the default setup. Any suggestions? Are we doing it the recommended way?

    P.S. For PostgreSQL it seems a little better. Its files are owned by a system user named postgres, so giving me the right to run sudo su - postgres gives me just about everything. I'm just now getting into MySQL, but it seems to be set up similarly. Still, it seems a little weird doing all my work as another user.
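
    For what it's worth, one commonly seen pattern, sketched with assumed group names and RHEL-style paths: a dedicated group owns the config tree, and sudo grants only service control, so the app admins never need general root:

        # give app admins group ownership of the Apache config tree
        groupadd appadmin
        usermod -a -G appadmin andrew
        chgrp -R appadmin /etc/httpd
        chmod -R g+rwX /etc/httpd

        # /etc/sudoers.d/appadmin -- service control only
        %appadmin ALL=(root) NOPASSWD: /sbin/service httpd start, \
            /sbin/service httpd stop, /sbin/service httpd restart

    The same shape (group-owned files plus a narrow sudo whitelist) extends to log directories and to the database's config, which is roughly what the mix described above converges on.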

    Read the article

  • CentOS + cPanel results in php always running as fcgi, instead of cli as expected

    - by quamis
    I'm having problems with a server configured by someone else. It uses cPanel, with Apache and PHP. For some reason, when running php -v as a user, I get "cli" as the handler:

        # php -v | head -n 1
        PHP 5.3.27 (cli) (built: Oct 15 2013 16:06:48)

    But if I make a PHP script with echo shell_exec("php -v | tail -n 1"), I get:

        PHP 5.3.27 (cgi-fcgi) (built: Oct 15 2013 16:22:16)

    Why the cli/fcgi difference? I need to fix this so that scripts run by the apache user use "cli", not "fcgi". As a side note, I'm not sure how PHP got installed, because in the package list I see another version installed instead of the one reported by php -v:

        # rpm -qa | grep php
        [.... other packages...]
        cpanel-php53-5.3.17-5.cp1136.x86_64

    rebuild_phpconf --current:

        /usr/local/cpanel/bin/rebuild_phpconf --current
        Available handlers: suphp dso fcgi cgi none
        DEFAULT PHP: 5
        PHP4 SAPI: none
        PHP5 SAPI: dso
        SUEXEC: enabled
        RUID2: not installed

    redhat-release:

        # cat /etc/redhat-release
        CentOS release 6.4 (Final)

    Possibly related: http://superuser.com/questions/665809/centos-6-4-running-php-for-root-as-cgi-fcgi-not-cli
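
    A hedged explanation: the two outputs even show different build timestamps, so the shell and the web process are resolving "php" to two different binaries through PATH. A low-risk workaround is to call the CLI binary by absolute path; the location below is an assumption (running "which php" as the shell user reveals the real one):

        <?php
        // sketch: bypass PATH resolution inside the web process
        echo shell_exec('/usr/bin/php -v | tail -n 1');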

    Read the article

  • .htaccess deny from all does not work?

    - by jeffery_the_wind
    I am running Apache 2.2.20 on an Ubuntu 11.04 web server. I have a Joomla site running on it, but I have also added some custom content. In the main web directory I have added a folder /images/sub_folder, and in this sub_folder I have put a bunch of pictures. I do not want anyone to be able to access these pictures directly from the web, so I made a .htaccess file in that sub_folder and put the following line in it:

        deny from all

    There doesn't seem to be any effect; I can still access the images directly from a web browser. I have restarted the Apache service. What am I doing wrong? Thanks, Tim
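
    As with most silently ignored .htaccess files, a hedged first suspect is the vhost's AllowOverride setting: Deny directives belong to the Limit override class, and if the enclosing configuration says AllowOverride None (common in Ubuntu defaults for /var/www), Apache skips the file entirely. A sketch for the site's vhost:

        <Directory /var/www/images/sub_folder>
            AllowOverride Limit
        </Directory>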

    Read the article

  • Installing Bugzilla on Ubuntu 9.04 and Plesk

    - by makeflo
    Hey guys. I'm trying to install the latest Bugzilla version on my Ubuntu server. (I want to use a subdomain like bugs.domain.com.) I already installed all the necessary Perl modules, and check_modules.pl doesn't show any errors. But when I run the testserver.pl script I get the following:

        TEST-OK Webserver is running under group id in $webservergroup
        TEST-FAILED Fetch of images/padlock.png failed

    I'm also not able to visit ANY file within the bugzilla folder from the browser; I always get a 404 error. The bugzilla folder and all the files it contains are owned by apache. I tried putting the Apache configuration from the installation guide in the http.include file of the domain, and in the vhosts.conf file of the subdomain as well. I don't know what to do... Playing with Plesk's suexecgroup doesn't bring any solution... I hope you can help me! Thanks in advance!
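
    For reference, a hedged sketch of the Apache stanza the Bugzilla installation guide calls for, with the document root assumed to match the subdomain's; in Plesk it would typically go in the subdomain's vhost config, followed by a Plesk reconfigure and an Apache restart:

        <Directory /var/www/vhosts/domain.com/subdomains/bugs/httpdocs>
            AddHandler cgi-script .cgi
            Options +ExecCGI +FollowSymLinks
            DirectoryIndex index.cgi
            AllowOverride Limit FileInfo Indexes Options
        </Directory>

    A plain 404 for every file usually means the subdomain is serving from a different document root than the one Bugzilla was unpacked into, so comparing those two paths first costs nothing.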

    Read the article

  • Apache, logerror and logrotate: what is the best method?

    - by OlivierDofus
    Hi! Here's a vhost example from my sites:

        <VirtualHost *:80>
            DocumentRoot /datas/web/woog
            ServerName woog.com
            ServerAlias www.woog.com
            ErrorLog "|/httpd-2.2.8/bin/rotatelogs /logs/woog/error_log 86400"
            CustomLog "|/httpd-2.2.8/bin/rotatelogs /logs/woog/access_log 86400" combined
            DirectoryIndex index.php index.htm
            <Location />
                Allow from All
            </Location>
            <Directory /*>
                Options FollowSymLinks
                AllowOverride Limit AuthConfig
            </Directory>
        </VirtualHost>

    I've got 12 sites running now. This gives something like:

        [Shake]:/sources/software/mod_log_rotate# ps x | grep rotate
        /httpd-2.2.8/bin/rotatelogs /logs/[hidden siteweb]/error_log 86400
        /httpd-2.2.8/bin/rotatelogs /logs/[hidden siteweb]/error_log 86400
        [snap (as many error_log as virtual hosts)]
        /httpd-2.2.8/bin/rotatelogs /logs/[hidden siteweb]/access_log 86400
        /httpd-2.2.8/bin/rotatelogs /logs/[hidden siteweb]/access_log 86400
        [snap (as many access_log as virtual hosts)]
        grep rotate
        [Shake]:/sources/software/mod_log_rotate#

    I've been looking everywhere, but I've only found mod_log_rotate. The "little" problem is that the author (a very good C developer) explains: "Unfortunately Apache error logs are handled in such a way that we can't work the same log rotation magic on them. Like transfer logs they support piped logging though so you can still use rotatelogs for them."

    So my question is: what would be the best way to handle multiple logs? If I just used a classic log file and the system's "logrotate" program, couldn't that be a good deal? How would/do you deal with that? Thank you!
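
    To the closing question, a hedged sketch of the classical alternative: plain file logging per vhost plus one system logrotate policy, which replaces the two dozen rotatelogs processes with none (paths assumed to match the layout above):

        # /etc/logrotate.d/vhosts (sketch)
        /logs/*/error_log /logs/*/access_log {
            daily
            rotate 14
            compress
            delaycompress
            sharedscripts
            postrotate
                /httpd-2.2.8/bin/apachectl graceful > /dev/null
            endscript
        }

    The graceful restart in postrotate is what makes Apache reopen the renamed files, which is the part rotatelogs otherwise handles for you.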

    Read the article
