Search Results

Search found 19615 results for 'apache config'.

  • HAProxy authenticated httpchk (health check)

    - by Markel
    I am using HAProxy on EC2 with httpchk to manage node availability. I used a pseudo-unique path as the health-check route, in an attempt to make sure only my own servers responded to the check. Earlier today an EC2 server fell out of existence, and before the HAProxy config was auto-regenerated (controller issues), Amazon reassigned the IP to someone who returns 200 to every request (a honeypot?). My HAProxy host then pulled that server back into rotation and distributed some of my traffic there until the controller recovered and removed the IP from the list. TL;DR: Is there a way to add a server authentication method to HAProxy's httpchk?
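
    A minimal sketch of one way to do this, assuming HAProxy 1.4 or later (the path, header, and token values below are invented): have the health-check endpoint return a secret string, and make HAProxy verify the response body, so a stranger's blanket 200 no longer passes the check.

        backend app
            # Send a custom header with the check so the app can also verify it
            option httpchk GET /hc-3f9a HTTP/1.1\r\nHost:\ internal\r\nX-HC-Key:\ s3cr3t
            # Only mark the node up if the body contains our secret token
            http-check expect string hc-token-7d1c
            server web1 10.0.0.11:80 check inter 2000 rise 2 fall 3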

  • SELinux interfering with vboxwebsrv or phpVirtualBox

    - by Mike W
    I have a brand new installation of Fedora 18, with a brand new installation of VirtualBox 4.2. I have spent a painful few hours trying to get phpVirtualBox working. Apache 2.4 and PHP 5.4 are installed, along with the phpVirtualBox software. Attempting to access phpVirtualBox let me log in, but then I'd have a prolonged wait until an 'Error fetching HTTP headers' message appeared. Finally, I set SELinux to permissive, and bingo! Things started to work. For some reason the SELinux Troubleshooter isn't flagging any messages, so I don't know what to look for now. This is a development box, so I could leave SELinux set to permissive, but I will need to make this work in anger on the next project. My question, then, is this: what changes to SELinux policies do I need to make to allow phpVirtualBox and vboxwebsrv to work together? If there's more information I can post that will assist, I'll gladly post it; just let me know what it is.
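
    A hedged starting point rather than a confirmed fix: when Apache-hosted PHP talks to a local web service, the usual suspects are the network-connect boolean and denials suppressed by dontaudit rules (which would also explain a silent Troubleshooter).

        # Let Apache (and PHP under it) open outbound sockets to vboxwebsrv
        setsebool -P httpd_can_network_connect on

        # If it still hangs, surface hidden denials and build a local module
        semodule -DB                          # temporarily disable dontaudit rules
        ausearch -m avc -ts recent | audit2allow -M phpvboxlocal
        semodule -i phpvboxlocal.pp
        semodule -B                           # restore dontaudit rules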

  • Apache2 mod_proxy and post-multipart size

    - by Pietro
    Hi, I have Apache2 configured to proxy all traffic directed at a specific virtual host to a local Tomcat instance. All is good and fine except for multipart posts larger than roughly 100 KB: such posts fail on the Tomcat end with an exception like SocketTimeoutException. If I connect directly to Tomcat (which listens on a port other than 80), all posts are handled just fine. The Apache virtual host config goes like this:

        NameVirtualHost *
        SetOutputFilter DEFLATE
        <VirtualHost *>
            ServerName foo.bar.com
            ErrorLog c:/wamp/logs/foo_error.log
            CustomLog c:/wamp/logs/foo_access.log combined
            ProxyTimeout 60
            ProxyPass / http://localhost:10080/foo/
            ProxyPassReverse / http://localhost:10080/foo/
            ProxyPassReverseCookieDomain localhost bar.com
            ProxyPassReverseCookiePath /foo /
        </VirtualHost>

    I tried browsing the Apache2 and mod_proxy docs but found nothing useful. Any idea why Apache2 fails to proxy requests bigger than X bytes? Thanks!
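
    One hedged thing to try (a guess about body forwarding, not a confirmed diagnosis): mod_proxy_http can forward larger request bodies with chunked encoding, which some Tomcat connectors time out on, and the documented proxy-sendcl environment variable forces a Content-Length by spooling the body first.

        <VirtualHost *>
            ServerName foo.bar.com
            # Force mod_proxy_http to send Content-Length instead of chunked
            # encoding (large bodies get spooled before forwarding)
            SetEnv proxy-sendcl 1
            ProxyTimeout 300
            ProxyPass / http://localhost:10080/foo/
            ProxyPassReverse / http://localhost:10080/foo/
        </VirtualHost>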

  • Shutdown in background - PHP

    - by William
    I'm trying to shut down an Ubuntu machine from PHP and am running into an issue if I want to delay the shutdown. The PHP line I'm using is:

        exec("sudo shutdown -h +5 &", $output);

    where 5 is however many minutes in the future I want the shutdown to happen. My problem is that this won't go to the background, and Apache hangs until either the machine shuts down or someone else cancels the shutdown. shell_exec() has the same result. Is there another way to do this that will return immediately?
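
    A common workaround, hedged since behavior can vary with the PHP and sudo setup: exec() waits for the command's output streams to close, not just for the command to be launched, so redirecting stdout and stderr usually lets it return at once.

        <?php
        // Detach the command's output streams so exec() returns immediately;
        // the shutdown timer keeps running because nothing remains attached
        // to the PHP process.
        $minutes = 5;
        exec(sprintf('sudo shutdown -h +%d > /dev/null 2>&1 &', $minutes));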

  • What action should I take based on what I read in log files?

    - by Helene Bilbo
    For some weeks now I have been managing my first web server, a Seaside application behind an Apache proxy on Linode, and I installed logwatch to send me daily logs. Where can I get information on when I have to act as a consequence of what I read in these logwatch reports? For example, I read that all kinds of people try to log in to funny nonexistent accounts, all kinds of web crawlers test for nonexistent CMS login pages, and some IP addresses get banned and unbanned by fail2ban. I assume that's normal; is it? But how do I know when I probably have to do something? What do I look for in the logs?

  • DNS entry issues

    - by Yaman
    I am having some trouble with my DNS entries (or maybe my Apache conf). I have something like this:

        kira.mydomain.com           A      123.45.67.89
        youfood.mydomain.com        CNAME  kira.mydomain.com
        www.youfood.mydomain.com    CNAME  youfood.mydomain.com

    All is good when I check these entries with nslookup. But when I browse to http://www.youfood.mydomain.com it works, while http://youfood.mydomain.com does not. Here is my vhost:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            ServerName youfood.mydomain.com
            ServerAlias www.youfood.mydomain.com
            DocumentRoot /home/ftp_youfood/www/trunk
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /home/ftp_youfood/www>
                Options FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
            [...]
        </VirtualHost>

    Is there anything wrong?
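
    Two standard checks that usually narrow this down (stock Apache and DNS tooling, nothing specific to this setup): confirm which vhost Apache actually picks for the bare name, since when no ServerName or ServerAlias matches, the first vhost defined for the address wins, and confirm both names really resolve to the same place.

        apachectl -S    # dump the parsed vhost map; see which vhost answers
                        # for youfood.mydomain.com (an earlier vhost may win)
        dig +short youfood.mydomain.com
        dig +short www.youfood.mydomain.com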

  • open_basedir vs sessions

    - by liquorvicar
    On a virtual hosting server I have open_basedir set to .:/path/to/vhost/web:/tmp:/usr/share/pear for each virtual host. I have a client who's running WordPress, and he's complaining about open_basedir errors like this:

        PHP WARNING: file_exists() [function.file-exists]: open_basedir restriction
        in effect. File(/var/lib/php/session/sess_42k7jn3vjenj43g3njorrnrmf2) is not
        within the allowed path(s): (.:/path/to/vhost/web:/tmp:/usr/share/pear)

    So the PHP session save_path isn't included in open_basedir, yet sessions across all sites on the server seem to be working fine apart from this intermittent instance. I thought that perhaps the default session handler ignored open_basedir and this warning was caused by WP accessing the session file directly. However, from what I can see, PHP 5.2.4 introduced open_basedir checking for session.save_path: http://www.php.net/ChangeLog-5.php#5.2.4 (I am on PHP 5.2.13). Any ideas?
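
    A hedged per-vhost workaround (mod_php syntax; the paths are placeholders): instead of widening every site's open_basedir to the shared /var/lib/php/session, give the affected site a session directory that already lies inside its allowed paths.

        <VirtualHost *:80>
            ServerName client.example.com
            # Keep sessions inside this vhost's own tree so both the session
            # handler and direct file_exists() calls stay within open_basedir
            php_admin_value session.save_path /path/to/vhost/tmp/sessions
            php_admin_value open_basedir ".:/path/to/vhost/web:/path/to/vhost/tmp:/tmp:/usr/share/pear"
        </VirtualHost>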

  • Install CA root trust certificate in CentOS

    - by Shyamin Ayesh
    I installed an SSL certificate on my website and now I have some questions about it. The site works correctly in the Google Chrome browser, but it's not working in Firefox. One of my friends tells me the CA root trust certificate is not installed on the server. I need to know how to confirm that the CA trust chain is missing, and how to install it on CentOS 6.4 minimal with Apache. My SSL certificate was issued by AlphaSSL, and it's a domain-validated wildcard certificate (CA - G2). Thank you very much for a prompt reply!
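
    A sketch of the usual diagnosis and fix (the file names below are placeholders): Chrome often fetches a missing intermediate certificate on its own while Firefox does not, which matches these symptoms exactly; the cure is to serve the AlphaSSL intermediate from Apache.

        # See exactly which certificates the server sends
        openssl s_client -connect www.example.com:443 -showcerts

        # In the mod_ssl vhost (ssl.conf or the site's vhost file):
        #   SSLCertificateFile      /etc/pki/tls/certs/example.crt
        #   SSLCertificateKeyFile   /etc/pki/tls/private/example.key
        #   SSLCertificateChainFile /etc/pki/tls/certs/alphassl-intermediate-g2.pem
        service httpd restart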

  • ssh through a bastion machine works on someone else's desktop but not my own

    - by Terrence Brannon
    I have to ssh into a bastion (jump) server in order to get to the final server. On the jump server, my .ssh/config says:

        Host *
            ForwardAgent yes

    My co-worker uses PuTTY and Pageant. When I use a PuTTY shell to connect from his desktop to the final server as root via the jump server, it works fine. At my desk I cannot connect to the final server, only the jump server. However, if I go to his desk and successfully log into the final server via the jump server, I can then go back to my desk and also do so... but after a certain amount of time, my shells revert to the original behavior of not connecting to the final server via the jump server. The entire transcript of ssh -v -v -v final_server is here. The relevant part to me is when the public key is offered but then it says 'we did not send a packet':

        debug1: Offering public key: /home/CORP/t.brannon/.ssh/id_dsa
        debug3: send_pubkey_test
        debug2: we sent a publickey packet, wait for reply
        debug1: Authentications that can continue: publickey,password
        debug2: we did not send a packet, disable method
        debug3: authmethod_lookup password
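
    A hedged line of investigation (standard OpenSSH commands only): behavior that works only after touching a desk where Pageant holds the key, then decays over time, smells like agent state, because ForwardAgent can only forward keys an agent is actually holding.

        ssh-add -l               # is a local agent running, and does it hold the key?
        ssh-add ~/.ssh/id_dsa    # load it; forwarding only offers loaded keys
        ssh -A jump_server       # force agent forwarding for this hop
        echo $SSH_AUTH_SOCK      # on the jump host: empty means no forwarded agent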

  • F5/BigIP rule to redirect affinity-bound users from INACTIVE pool node to other ACTIVE node

    - by j pimmel
    We have several server nodes set up for the end users of our system, and because we don't use any kind of session replication in the app servers, F5 maintains affinity between each user and the ACTIVE node the client was first bound to. When we want to redeploy the app, we change the F5 config and take a node out of the ACTIVE pool. Gradually the users filter off and we can deploy, but the process is a bit slow. We can't just dump all the users onto a different node because, given the update-heavy nature of the user activities, we could cause them to lose changes. That said, there is one URL/endpoint (call it http://site/product/list) where we know that, when the client hits it, we could safely shove them off the INACTIVE node they had affinity with and onto a different ACTIVE node. We have had a few tries at writing an F5 rule along these lines but haven't had much success, so I thought I might ask here, assuming it's possible; I have no reason to think it's not, based on what we have found so far.
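
    A hedged iRule sketch (the pool name is a placeholder and this is untested against a real BIG-IP): at the safe URL, drop the persistence record and steer the request at the active pool, so the load balancer rebinds the client there.

        when HTTP_REQUEST {
            if { [HTTP::path] equals "/product/list" } {
                # Forget the old affinity and let the LB pick an ACTIVE member
                persist none
                pool active_pool
            }
        }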

  • TIME_WAIT connections not being cleaned up after timeout period expires

    - by Mark Dawson
    I am stress testing one of my servers by hitting it with a constant stream of new network connections. The tcp_fin_timeout is set to 60, so if I send a constant stream of something like 100 requests per second, I would expect to see a rolling average of 6000 (60 * 100) connections in a TIME_WAIT state. This is happening, but looking in netstat (using -o) to see the timers, I see connections like:

        TIME_WAIT timewait (0.00/0/0)

    where the timeout has expired but the connection is still hanging around, and I then eventually run out of connections. Does anyone know why these connections don't get cleaned up? If I stop creating new connections they do eventually disappear, but while I am constantly creating new connections they don't; it seems like the kernel isn't getting a chance to clean them up. Are there some other config options I need to set to remove the connections as soon as they have expired? The server is running Ubuntu and my web server is nginx. It also has iptables with connection tracking; I'm not sure if that would cause these TIME_WAIT connections to live on. Thanks, Mark.
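
    Some hedged knobs to try, with one correction of the premise: on Linux the TIME_WAIT period is a compiled-in 60-second constant (TCP_TIMEWAIT_LEN), and tcp_fin_timeout actually governs FIN_WAIT_2, so tuning it will not shorten TIME_WAIT. Under heavy connection churn these usually help:

        # Reuse sockets in TIME_WAIT for new outbound connections
        sysctl -w net.ipv4.tcp_tw_reuse=1
        # Widen the ephemeral port range so churn exhausts ports more slowly
        sysctl -w net.ipv4.ip_local_port_range="15000 65000"
        # If conntrack is the real ceiling, raise its table size instead
        sysctl -w net.netfilter.nf_conntrack_max=131072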

  • Nginx + PHP-FPM executes script, but returns 404

    - by MorfiusX
    I am using Nginx + PHP-FPM to run a WordPress-based site. I have a URL that should return dynamically generated JSON data for use with the DataTables jQuery plugin. The data is returned properly, but with a return code of 404. I think this is an Nginx config issue, but I haven't been able to figure out why. The script getTable.php works properly on the production version of the site, which is currently using Apache. Does anyone know how I can get this to work on Nginx?

    URL: http://dev.iloveskydiving.org/wp-content/plugins/ils-workflow/lib/getTable.php

    Server: CentOS 6 + Varnish (caching disabled for development) + Nginx + PHP-FPM + WordPress + W3 Total Cache

    Nginx config:

        server {
            # Server parameters
            listen 127.0.0.1:8082;
            server_name dev.iloveskydiving.org;
            root /var/www/dev.iloveskydiving.org/html;
            access_log /var/www/dev.iloveskydiving.org/logs/access.log main;
            error_log /var/www/dev.iloveskydiving.org/logs/error.log error;
            index index.php;

            # Rewrite minified CSS and JS files
            location ~* \.(css|js) {
                if (!-f $request_filename) {
                    rewrite ^/wp-content/w3tc/min/(.+\.(css|js))$ /wp-content/w3tc/min/index.php?file=$1 last;
                    expires max;
                }
            }

            # Set a variable to work around the lack of nested conditionals
            set $cache_uri $request_uri;

            # Don't cache URIs containing the following segments
            if ($request_uri ~* "(\/wp-admin\/|\/xmlrpc.php|\/wp-(app|cron|login|register|mail)\.php|wp-.*\.php|index\.php|wp\-comments\-popup\.php|wp\-links\-opml\.php|wp\-locations\.php)") {
                set $cache_uri "no cache";
            }

            # Don't use the cache for logged in users or recent commenters
            if ($http_cookie ~* "comment_author|wordpress_[a-f0-9]+|wp\-postpass|wordpress_logged_in") {
                set $cache_uri 'no cache';
            }

            # Use cached or actual file if they exist, otherwise pass request to WordPress
            location / {
                try_files /wp-content/w3tc/pgcache/$cache_uri/_index.html $uri $uri/ /index.php?q=$uri&$args;
            }

            # Cache static files for as long as possible
            location ~* \.(xml|ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|css|rss|atom|js|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ {
                try_files $uri =404;
                expires max;
                access_log off;
            }

            # Deny access to hidden files
            location ~* /\.ht {
                deny all;
                access_log off;
                log_not_found off;
            }

            location ~ \.php$ {
                try_files $uri =404;
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                include /etc/nginx/fastcgi_params;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_param PATH_INFO $fastcgi_script_name;
                fastcgi_intercept_errors on;
                fastcgi_pass unix:/var/lib/php-fpm/php-fpm.sock;  # socket where the FastCGI processes were spawned
            }
        }

    FastCGI params:

        fastcgi_param QUERY_STRING $query_string;
        fastcgi_param REQUEST_METHOD $request_method;
        fastcgi_param CONTENT_TYPE $content_type;
        fastcgi_param CONTENT_LENGTH $content_length;
        fastcgi_param SCRIPT_NAME $fastcgi_script_name;
        fastcgi_param REQUEST_URI $request_uri;
        fastcgi_param DOCUMENT_URI $document_uri;
        fastcgi_param DOCUMENT_ROOT $document_root;
        fastcgi_param SERVER_PROTOCOL $server_protocol;
        fastcgi_param HTTPS $https if_not_empty;
        fastcgi_param GATEWAY_INTERFACE CGI/1.1;
        fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
        fastcgi_param REMOTE_ADDR $remote_addr;
        fastcgi_param REMOTE_PORT $remote_port;
        fastcgi_param SERVER_ADDR $server_addr;
        fastcgi_param SERVER_PORT $server_port;
        fastcgi_param SERVER_NAME $server_name;
        # PHP only, required if PHP was built with --enable-force-cgi-redirect
        fastcgi_param REDIRECT_STATUS 200;

    UPDATE: Upon further digging, it looks like Nginx is generating the 404 while PHP-FPM is executing the script properly and returning a 200.

    UPDATE: Here are the contents of the script:

        <?php
        /**
         * Connect to WordPress
         */
        require(dirname(__FILE__) . '/../../../../wp-blog-header.php');

        /**
         * Define temporary array
         */
        $aaData = array();
        $aaData['aaData'] = array();

        /**
         * Execute query
         */
        $query = new WP_Query( array( 'post_type' => 'post', 'posts_per_page' => '-1' ) );
        foreach ($query->posts as $post) {
            array_push( $aaData['aaData'], array( $post->post_title ) );
        }

        /**
         * Echo JSON encoded array
         */
        echo json_encode($aaData);
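
    Given the second update, one hedged explanation is that the 404 comes from WordPress itself rather than from the Nginx config: wp-blog-header.php runs WordPress's main query, the plugin URL matches no WordPress route, and WordPress sets a 404 status before the script prints its JSON. A sketch of the usual workaround, bootstrapping with wp-load.php (which skips the router) and forcing the status:

        <?php
        // Bootstrap WordPress without running the router/template loader
        require(dirname(__FILE__) . '/../../../../wp-load.php');
        // Belt and braces: override any status WordPress may already have set
        status_header(200);

        $aaData = array('aaData' => array());
        $query = new WP_Query(array('post_type' => 'post', 'posts_per_page' => -1));
        foreach ($query->posts as $post) {
            $aaData['aaData'][] = array($post->post_title);
        }

        header('Content-Type: application/json');
        echo json_encode($aaData);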

  • Identify Web Server from HTML File System

    - by bumble_bee_tuna
    Hi, I've been given the following HTML file system with no knowledge of where it came from. To me this appears to be from a web server. I've been hitting Google for "Sam" and "smsLtd" but can't find much; is this "SAM" a proprietary web server? Can I throw this up on an Apache or IIS server? The reason I ask is that there are some XML data files that seem to assign permissions and logins. Figured I'd ask, as I have never seen it before.

  • Using WSUS to update machines not on the domain

    - by Arcath
    I have a WSUS server providing updates for the computers on my domain. We also bring a lot of machines back to our office and run Windows Update on them as we build images, which means we end up downloading the same updates over and over again. Is there any way to get a non-domain machine to download its updates from our WSUS server? I found that there's something running on port 8530, but it's just an empty document; in fact, every folder listed in the IIS config returns a blank document. Does anyone know if this is possible, and how I would do it?
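
    A hedged sketch using the documented WindowsUpdate policy registry keys, which also work on workgroup machines without a domain GPO (the server name is a placeholder; 8530 is the WSUS default port):

        reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate" /v WUServer /t REG_SZ /d "http://wsus.example.local:8530" /f
        reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate" /v WUStatusServer /t REG_SZ /d "http://wsus.example.local:8530" /f
        reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU" /v UseWUServer /t REG_DWORD /d 1 /f
        net stop wuauserv && net start wuauserv
        wuauclt /detectnow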

  • MediaTemple tcpsndbuf QoS Alerts

    - by theturninggate
    I'm hosting with MediaTemple on a (dv) Dedicated-Virtual 3.5 server. My site consists of a WordPress blog and some custom PHP pages (nothing too intense), and I serve 500-700 unique visitors per day. Despite my pretty modest numbers, I suffer regular Apache crashes on account of QoS alerts, mostly flagged as "tcpsndbuf". MediaTemple support, usually tops, has been pretty useless on this matter. I'm looking for answers as to how and why this is happening, and advice on how to stop it. My website is a good portion of my livelihood, and downtime equates to lost income. Any and all help is much appreciated. -Matt
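
    Some hedged context: tcpsndbuf is an OpenVZ/Virtuozzo bean counter (the (dv) line is Virtuozzo-based), not an Apache setting, so the first step is to see how often the container is hitting the limit; the Apache values below are illustrative starting points, not tuned recommendations.

        # failcnt in the last column shows how often tcpsndbuf has been hit
        cat /proc/user_beancounters

        # Then reduce per-connection send-buffer pressure in httpd.conf, e.g.:
        #   KeepAlive On
        #   KeepAliveTimeout 3
        #   MaxClients 40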

  • Trying to install driftnet

    - by Andrew
    I'm trying to install driftnet. I think I've installed all the dependencies per the website, but when I run make I get the error below.

        makedepend -- -g -Wall -I/usr/include/pcap -D_BSD_SOURCE `pkg-config --cflags gtk+-2.0` -DDRIFTNET_VERSION='"0.1.6"' `cat endianness` -- audio.c mpeghdr.c gif.c img.c jpeg.c png.c driftnet.c image.c display.c playaudio.c connection.c media.c util.c http.c
        cat: endianness: No such file or directory
        /bin/sh: makedepend: command not found
        make: *** [depend] Error 127

    What have I done wrong? Is there something similar but more current?
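
    A hedged reading of the error: the build isn't failing in driftnet's own code; the makedepend tool called by its Makefile simply isn't installed. On Debian/Ubuntu it ships in xutils-dev (the package name can vary by release), so something like this should get past it:

        # makedepend lives in xutils-dev on Debian/Ubuntu
        sudo apt-get install xutils-dev
        make clean && make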

  • Accessing Netatalk/AFP Shares from OS X Snow Leopard

    - by j4nus_
    I recently upgraded my Ubuntu home server from 8.04 client to 10.04 server and reinstalled all the services on it. One of them is a Netatalk daemon that I configured in a fashion similar to this website: http://www.kremalicious.com/2008/06/ubuntu-as-mac-file-server-and-time-machine-volume/ Finder recognizes my server and the AFP service, yet when I attempt to log in (using valid credentials), Finder tells me the username or password is wrong. I've tried altering some of the config files and used my Google-fu to look for solutions, but no luck. Any tips? (This was not an issue under 8.04, if it matters.)
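
    A hedged guess based on a common 10.04 pitfall: the netatalk package in the Ubuntu repositories is built without OpenSSL, so the encrypted DHX authentication modules that OS X insists on are missing, and every login fails exactly this way. The usual remedy from guides of that era is rebuilding the package with SSL enabled and checking the UAM list:

        # Rebuild netatalk with encrypted (DHX) authentication support
        sudo apt-get build-dep netatalk
        apt-get source netatalk
        cd netatalk-*
        sudo DEB_BUILD_OPTIONS=ssl dpkg-buildpackage -rfakeroot -b
        sudo dpkg -i ../netatalk_*.deb

        # In /etc/netatalk/afpd.conf, make sure the DHX UAMs are offered:
        #   - -tcp -noddp -uamlist uams_dhx.so,uams_dhx2.so -nosavepassword
        sudo /etc/init.d/netatalk restart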

  • Difference between "Redirect permanent" vs. mod_rewrite

    - by Stefan Lasiewski
    This is an Apache httpd 2.2 server. We require that access to this web server be encrypted with HTTPS. When web clients visit my site at http://www.example.org/$foo (port 80), I want to redirect their request to the HTTPS-encrypted website at https://www.example.org/$foo . There seem to be two common ways to do this. The first method uses the 'Redirect' directive from mod_alias:

        <VirtualHost *:80>
            Redirect permanent / https://www.example.org/
        </VirtualHost>

    The second method uses mod_rewrite:

        <VirtualHost *:80>
            RewriteEngine On
            RewriteCond %{HTTPS} off
            RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
        </VirtualHost>

    What is the difference between a "Redirect permanent" and the mod_rewrite stanza? Is one better than the other?
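
    One concrete difference worth flagging (easy to verify with curl -I against both setups): without explicit flags, the RewriteRule above issues a 302, while Redirect permanent issues a 301, so making the two stanzas equivalent requires flags on the rewrite:

        RewriteEngine On
        RewriteCond %{HTTPS} off
        # [R=301] makes it permanent; [L] stops further rule processing
        RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]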

  • NIC models for HVM guests in Xen 4.0

    - by Jed Daniels
    Is there a list of NIC models that I should use with various HVM guests in Xen 4.0? Is there even a list of the possible choices for the model= field? I'm planning on running both Windows and FreeBSD guests on the particular Xen box I'm configuring now, and ideally I'd like a setting that works on both of them and gets me gigabit speeds without a lot of extra hassle loading special drivers (similar to VMware's e1000 NIC option). Any suggestions? The vif line I'm currently using in my config is:

        vif = [ 'type=ioemu,bridge=eth0,model=ne2k_pci,script=vif-ovs' ]

    This works in my Windows XP SP3 VM, but it is only recognized as a 10 Mbit interface. Changing to model=e1000 or model=i82557b resulted in Windows not being able to find a driver for the NIC.
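
    A hedged suggestion rather than an authoritative list: QEMU's e1000 model is usually the best cross-guest bet. FreeBSD picks it up natively with its em(4) driver, while Windows XP needs Intel's PRO/1000 driver installed by hand, which would explain the missing-driver result above; the emulated ne2k_pci really is a 10/100-era device.

        # e1000: in-tree driver on FreeBSD; install Intel's driver on Windows XP
        vif = [ 'type=ioemu,bridge=eth0,model=e1000,script=vif-ovs' ]
        # For full gigabit+ throughput the usual route is paravirtual drivers
        # rather than emulation (GPLPV on Windows, xn on FreeBSD HVM).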

  • 301 redirect rule for F5 BIG-IP load balancer

    - by Kshah
    I have an F5 BIG-IP load balancer in front of my website. Currently I have a 302 redirect in place, but I want a 301 and don't know how to do it. For example: typing abc.com currently 302-redirects to abc.com/index, and typing www.abc.com 302-redirects to www.abc.com/index. I want rules that give me:

        abc.com        -> 301 redirect -> www.abc.com/index
        abc.com/index  -> 301 redirect -> www.abc.com/index
        www.abc.com    -> 301 redirect -> www.abc.com/index

    Below is the code my tech person is trying.

    Redirect to WWW:

        when HTTP_REQUEST {
            if { [HTTP::host] equals "abc.com" or [HTTP::host] equals "abc.co.in" or [HTTP::host] equals "www.abc.co.in" } {
                if { !( [HTTP::path] equals "/" ) } {
                    HTTP::respond 301 Location "http://www.abc.com[HTTP::path]"
                }
            }
        }

    Redirect POST:

        when HTTP_REQUEST {
            if { [HTTP::method] equals "POST" } {
                persist source_addr
                pool shop_shop_vr4_http
            }
        }

    Redirect-VR4 HOMEPAGE:

        when HTTP_REQUEST {
            if { [HTTP::path] equals "/" or [HTTP::path] starts_with "/target/" or [HTTP::path] starts_with "/logs/" or [HTTP::path] starts_with "/config/" } {
                HTTP::redirect "http://[HTTP::host]/index.jsp.vr"
            }
        }
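
    A hedged consolidated sketch (untested; hosts copied from the question): on BIG-IP, HTTP::redirect always answers with a 302, so the 301 has to be emitted via HTTP::respond, and the condition must exempt www.abc.com/index itself or the rule will loop.

        when HTTP_REQUEST {
            # Everything except www.abc.com/index gets the permanent redirect
            if { !([HTTP::host] equals "www.abc.com") or [HTTP::path] equals "/" } {
                HTTP::respond 301 Location "http://www.abc.com/index"
            }
        }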

  • MySQL open files limit

    - by Brian
    This question is similar to "set open_files_limit", but there was no good answer. I need to increase my table_open_cache, but first I need to increase the open_files_limit. I set the option in /etc/mysql/my.cnf:

        open-files-limit = 8192

    This worked fine in my previous install (Ubuntu 8.04), but now in Ubuntu 10.04, when I start the server up, open_files_limit is reported to be 1710. That seems like a pretty random number for the limit to be clipped to. Anyway, I tried getting around it by adding a line like this in /etc/security/limits.conf:

        mysql hard nofile 8192

    I also tried adding this to the pre-start script in MySQL's upstart config (/etc/init/mysql.conf):

        ulimit -n 8192

    Obviously neither of those things worked. So where is the hoop that has been added between Ubuntu 8.04 and 10.04 through which I must jump in order to actually increase the open files limit?
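
    A hedged pointer at the likely hoop: on 10.04 mysqld is started by upstart, which neither reads /etc/security/limits.conf (that file only applies to PAM logins) nor inherits a ulimit run inside a pre-start stanza, since each stanza runs in its own shell. Upstart has its own keyword for this:

        # In /etc/init/mysql.conf, at the top level of the job definition
        # (soft limit and hard limit):
        limit nofile 8192 8192

        # then restart the job
        sudo stop mysql && sudo start mysql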

  • My NGINX server doesn't use my *.less file

    - by Nicolas
    On my Nginx server, I use a LESS file instead of a CSS file, but my web page is displayed without any style. I tried the same thing on Apache and it works great. So I tried to add a LESS MIME type to Nginx in the file /etc/nginx/mime.types:

        types {
            text/css    css less;

    and:

        types {
            text/less   less;

    Neither of these attempts worked. Does anyone know how to use LESS files with Nginx?
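
    Two hedged possibilities. First, an extension may be mapped only once in mime.types, so if the stock "text/css css;" line still exists alongside the new one, the edit is a duplicate-extension config error and Nginx silently keeps running on the old config. Second, browsers only parse CSS, so a .less file with LESS-specific syntax needs compiling (lessc) or loading via less.js regardless of the MIME type.

        # Check whether the edited config actually loads, then reload
        nginx -t && nginx -s reload

        # To serve the raw file as a stylesheet without touching mime.types
        # (avoids duplicate-extension clashes):
        location ~ \.less$ {
            default_type text/css;
        }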

  • Load balancing with different nginx location context and backend context

    - by robinmag
    Hi, I use Nginx and the upstream module for load balancing, with the following config:

        upstream lb {
            server 127.0.0.1:8080;
            server 127.0.0.1:8081;
        }

        server {
            listen 88;
            server_name localhost;

            location /cas/ {
                proxy_pass http://lb;
                proxy_redirect off;
                proxy_connect_timeout 2;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }

    The problem is that the location has to match the context path on the backend servers, so a request for localhost/context/index.html is routed to 127.0.0.1:8080/context/index.html or 127.0.0.1:8081/context/index.html. Is it possible to have a different backend context and Nginx location? For example, with "location /", could Nginx route the request to 127.0.0.1:8080/context/index.html or 127.0.0.1:8081/context/index.html? Thank you.
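
    A hedged sketch of the standard answer: when proxy_pass carries a URI part, Nginx replaces the matched location prefix with that URI before forwarding, so the public path and the backend context can differ.

        location / {
            # "/" is replaced by "/context/" before the request reaches a
            # backend: GET /index.html -> GET /context/index.html on 8080/8081
            proxy_pass http://lb/context/;
            proxy_set_header Host $host;
        }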

  • OpenLDAP for Windows and phpLDAPadmin

    - by Dr Casper Black
    Hi, I'm having a problem connecting all of this. I'm new to LDAP and, after failing to install all of this on Ubuntu 10.04, I'm trying to set it up on my local PC. I installed OpenLDAP for Windows (http://www.userbooster.de/en/download/openldap-for-windows.aspx), enabled the PHP 5.3.1 extension for LDAP (c:\xampp\php\ext\php_ldap.dll) in php.ini, copied ssleay32.dll and libeay32.dll to Windows\System32 and Windows\System (Windows XP), set the password generated by c:\Program Files\OpenLDAP\slappasswd.exe in c:\Program Files\OpenLDAP\slapd.conf (rootpw {SSHA}hash), and ran c:\Program Files\OpenLDAP\slapd.exe. I then installed phpldapadmin and opened https://127.0.0.1/phpldapadmin/. When I enter the credentials I get:

        Invalid credentials (49) for user

    and in openldap.log I get:

        could not stat config file "%SYSCONFDIR%\slapd.conf": No such file or directory (2)

    Can someone help?
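
    A hedged reading of the log line: slapd.exe is being started in a way that leaves its %SYSCONFDIR% placeholder unresolved, so it never loads slapd.conf, never sees the rootpw, and every bind comes back as invalid credentials. Passing the config file explicitly and running in the foreground with debug output is a reasonable first test:

        cd "C:\Program Files\OpenLDAP"
        rem -d 256 keeps slapd in the foreground and prints config errors
        slapd.exe -d 256 -f "C:\Program Files\OpenLDAP\slapd.conf"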

  • Small web server hardware advice

    - by Dmitri
    We need to build a new web server for our organization. We have around 100 small-traffic websites, so our hardware requirements are not too tough. We run CentOS 6, Varnish + Apache, PHP, MySQL, and the TYPO3 CMS for most of the websites. Here's the hardware we want to buy:

        SuperMicro X9SCA-F-O (we need remote management capability) (or is the X9SCM-F better?)
        Intel Xeon E3-1220 v2
        2 x 4 GB DDR3 1600 MHz Kingston ECC (KVR16E11/4) (we currently have 4 GB and it feels like enough, so no reason for 16 GB yet)
        Procase EB140-B-0 (1U)
        350 W PSU, Procase MG1350, Active PFC

    We already have:

        Intel 335 120 GB SSD (for the OS, databases, and important websites)
        2 x 2 TB WD Green in RAID 1 (for other data and backups)

    Does this look like a reasonable choice for our needs? Any issues with hardware compatibility? Any other notes?
