Search Results

Search found 6253 results on 251 pages for 'apache2 ssl'.


  • Apache2 several sites and configuration question

    - by Hellnar
    Hello, I want to host several sites from my VPS via apache2 under Debian. For this I added the following under /etc/apache/sites-enabled/www.mysite.com:

        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerName mysite.com
            ServerAlias www.mysite.com
            Alias /media/ /home/myuser/mysite/media/
            Alias /admin_media/ /home/myuser/django/Django-1.2/django/contrib/admin/media/
            WSGIScriptAlias / /home/myuser/mysite/wsgi.py
            ErrorLog /home/myuser/mysite/logs/error.log
            CustomLog /home/myuser/mysite/logs/access.log combined
        </VirtualHost>

    After that I removed the default symbolic link from sites-enabled/ via a2dissite default and enabled the new one with a2ensite www.mysite.com. Still, I get this error:

        Restarting web server: apache2apache2: apr_sockaddr_info_get() failed for myuser
        apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1 for ServerName
        [Sat May 29 11:41:13 2010] [warn] NameVirtualHost *:80 has no VirtualHosts
        apache2: apr_sockaddr_info_get() failed for myuser
        apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1 for ServerName
        [Sat May 29 11:41:13 2010] [warn] NameVirtualHost *:80 has no VirtualHosts

    What could be the problem?
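    For what it's worth, the apr_sockaddr_info_get() failure is usually unrelated to the vhost itself: it means the machine's hostname ("myuser" in the log above) does not resolve. A hedged sketch of the standard Debian fix:

        # /etc/hosts -- make the hostname from the error message resolvable:
        127.0.1.1    myuser

        # /etc/apache2/conf.d/fqdn -- and give Apache an explicit global
        # ServerName so it stops guessing the FQDN:
        ServerName mysite.com

    The "NameVirtualHost *:80 has no VirtualHosts" warning is often a separate, harmless duplicate: Debian typically declares NameVirtualHost *:80 in ports.conf already, so the copy in the site file can usually be dropped.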

  • IPv6 working fine, IPv4 throws OpenSSL error

    - by jippie
    I am building a webserver (http://blog.linformatronics.nl/), which functions just fine on both IPv4 and IPv6 when using a non-SSL connection. However when I connect to it through https, IPv6 works as expected, but an IPv4 connection throws a client-side error; server-side logs are empty for the IPv4/https connection. Summarized in a table:

             | http  | https
        -----+-------+-------------------------------------------------------
        IPv4 | works | OpenSSL error, failed. No server side logging.
        -----+-------+-------------------------------------------------------
        IPv6 | works | self signed certificate warning, but works as expected

    Apparently the SSL tunnel isn't even set up, which accounts for the Apache logs being empty. But why does it work fine for IPv6 and fail for IPv4? My question is why this OpenSSL error is being thrown, and how I can solve it. Below is some extra information about the setup.

    IPv6 https -- command used to reproduce the behaviour:

        $ wget --no-check-certificate -O /dev/null -6 https://blog.linformatronics.nl
        --2012-11-03 15:46:48--  https://blog.linformatronics.nl/
        Resolving blog.linformatronics.nl (blog.linformatronics.nl)... 2001:980:1b7f:1:a00:27ff:fea6:a2e7
        Connecting to blog.linformatronics.nl (blog.linformatronics.nl)|2001:980:1b7f:1:a00:27ff:fea6:a2e7|:443... connected.
        WARNING: cannot verify blog.linformatronics.nl's certificate, issued by `/CN=localhost':
          Self-signed certificate encountered.
        WARNING: certificate common name `localhost' doesn't match requested host name `blog.linformatronics.nl'.
        HTTP request sent, awaiting response... 200 OK
        Length: 4556 (4.4K) [text/html]
        Saving to: `/dev/null'
        100%[=======================================================================>] 4,556       --.-K/s   in 0s
        2012-11-03 15:46:49 (62.5 MB/s) - `/dev/null' saved [4556/4556]

    IPv4 https -- command used to reproduce the behaviour:

        $ wget --no-check-certificate -O /dev/null -4 https://blog.linformatronics.nl
        --2012-11-03 15:47:28--  https://blog.linformatronics.nl/
        Resolving blog.linformatronics.nl (blog.linformatronics.nl)... 82.95.251.247
        Connecting to blog.linformatronics.nl (blog.linformatronics.nl)|82.95.251.247|:443... connected.
        OpenSSL: error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol
        Unable to establish SSL connection.

    Notes: I am on Ubuntu Server 12.04.1 LTS.
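    The "unknown protocol" error means the client received a non-TLS (typically plain HTTP) reply on port 443, so over the IPv4 path something other than the SSL vhost is answering; a router port-forward pointing at port 80, or a Listen/vhost bound only to the IPv6 address, are the usual suspects. A hedged check straight against the IPv4 address:

        # If this prints an HTTP error page instead of a certificate chain,
        # port 443 over IPv4 is being answered by a plain-HTTP listener:
        openssl s_client -connect 82.95.251.247:443 < /dev/null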

  • HTTPS Proxy which answers CONNECT with own certificate

    - by user1109542
    I'm configuring a DMZ which has the following scheme:

        Internet - Server A - Security Appliance - Server B - Intranet

    In this DMZ I need a proxy server for http(s) connections from the intranet to the Internet. The problem is that all traffic should be scanned by the security appliance. For this I have to terminate the SSL connection at Server B, proxy it as plain http to Server A through the security appliance, and then send it onwards as https into the Internet. Encryption is then maintained between the client and Server B, and between the target server and Server A; the communication between Server A and Server B is unencrypted. I know about the security risks, and that the client will see a warning about the unknown CA of Server B's certificate. As software I want to use Apache web servers on Server A and Server B. As a first step I tried to configure Server B so that it serves as the endpoint for the SSL encryption, i.e. it has to establish the encryption with the client (answering HTTP CONNECT):

        Listen 8443
        <VirtualHost *:8443>
            ProxyRequests On
            ProxyPreserveHost On
            AllowCONNECT 443

            # SSL
            ErrorLog logs/ssl_error_log
            TransferLog logs/ssl_access_log
            LogLevel debug
            SSLProxyEngine on
            SSLProxyMachineCertificateFile /etc/pki/tls/certs/localhost_private_public.crt

            <Proxy *>
                Order deny,allow
                Deny from all
                Allow from 192.168.0.0/22
            </Proxy>
        </VirtualHost>

    With this proxy, only the CONNECT request is passed through, and an encrypted connection is established directly between the client and the target. Unfortunately there seems to be no way to configure mod_proxy_connect to decrypt the SSL connection. Is there any way to accomplish this kind of proxying with Apache?
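    mod_proxy_connect is a blind tunnel by design, so stock Apache cannot man-in-the-middle a CONNECT request (interception proxies such as Squid with ssl-bump exist for that). The closest stock-Apache approach is a hedged sketch like the following on Server B: terminate TLS on a normal SSL vhost and proxy onwards as plain http, which works only if clients can be pointed at Server B directly rather than via CONNECT (certificate paths and the Server A name below are hypothetical):

        Listen 443
        <VirtualHost *:443>
            SSLEngine on
            SSLCertificateFile    /etc/pki/tls/certs/serverB.crt
            SSLCertificateKeyFile /etc/pki/tls/private/serverB.key
            # Plain http towards Server A, through the security appliance:
            ProxyPass        / http://serverA.internal/
            ProxyPassReverse / http://serverA.internal/
        </VirtualHost>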

  • Passenger 2.2.4, nginx 0.7.61 and SSL

    - by boompa
    Has anyone had any luck configuring Passenger and nginx with SSL? I've spent hours trying to get this configuration working as I'd like, using what few resources there are out there on the net, and I can't get any of the supposedly forwarded headers to show up in the Rails controller. For example, with a conf file of the following (and multiple variations thereof):

        server {
            listen 3000;
            server_name .example.com;
            root /Users/website/public;
            passenger_enabled on;
            rails_env development;
        }
        server {
            listen 3443;
            root /Users/website/public;
            rails_env development;
            passenger_enabled on;
            ssl on;
            #ssl_verify_client on;
            ssl_certificate /Users/website/ssl/server.crt;
            ssl_certificate_key /Users/website/ssl/server.key;
            #ssl_client_certificate /Users/website/ssl/CA.crt;
            ssl_session_timeout 5m;
            ssl_protocols SSLv3 TLSv1;
            ssl_ciphers ALL:!ADH:RC4+RSA:+HIGH:+MEDIUM:-LOW:-SSLv2:-EXP;
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X_FORWARDED_PROTO https;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            #proxy_set_header X-SSL-Subject $ssl_client_s_dn;
            #proxy_set_header X-SSL-Issuer $ssl_client_i_dn;
            proxy_redirect off;
            proxy_max_temp_file_size 0;
        }

    and Rails code in the controller like this:

        request.headers.each { |k, v| RAILS_DEFAULT_LOGGER.error "Header #{k} Val #{v}" }

    other headers appear, but not those set in nginx, e.g.:

        Header rack.multithread Val false
        Header REQUEST_URI Val /login/new
        Header REMOTE_PORT Val 64021
        Header rack.multiprocess Val true
        Header PASSENGER_USE_GLOBAL_QUEUE Val false
        Header PASSENGER_APP_TYPE Val rails
        Header SCGI Val 1
        Header SERVER_PORT Val 3443
        Header HTTP_ACCEPT_CHARSET Val ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Header rack.request.query_hash Val
        Header DOCUMENT_ROOT Val /Users/website/public

    I've even gone so far as to modify the main_loop method in Passenger's abstract_request_handler, i.e.:

        headers, input = parse_request(client)
        if headers
          if headers[REQUEST_METHOD] == PING
            process_ping(headers, input, client)
          else
            headers.each { |h,v| log.unknown "abstract_request_handler: #{h} = #{v}" }
            process_request(headers, input, client)
          end
        end

    only to find that the supposedly added headers do not exist there either:

        abstract_request_handler: HTTP_KEEP_ALIVE = 300
        abstract_request_handler: HTTP_USER_AGENT = Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.1) Gecko/20090624 Firefox/3.5
        abstract_request_handler: PASSENGER_SPAWN_METHOD = smart-lv2
        abstract_request_handler: CONTENT_LENGTH = 0
        abstract_request_handler: HTTP_IF_NONE_MATCH = "b6e8b9afbc1110ee3bf0c87e119252ad"
        abstract_request_handler: HTTP_ACCEPT_LANGUAGE = en-us,en;q=0.5
        abstract_request_handler: SERVER_PROTOCOL = HTTP/1.1
        abstract_request_handler: HTTPS = on
        abstract_request_handler: REMOTE_ADDR = 127.0.0.1
        abstract_request_handler: SERVER_SOFTWARE = nginx/0.7.61
        abstract_request_handler: SERVER_ADDR = 127.0.0.1
        abstract_request_handler: SCRIPT_NAME =
        abstract_request_handler: PASSENGER_ENVIRONMENT = development
        abstract_request_handler: REMOTE_PORT = 64021
        abstract_request_handler: REQUEST_URI = /login/new
        abstract_request_handler: HTTP_ACCEPT_CHARSET = ISO-8859-1,utf-8;q=0.7,*;q=0.7
        abstract_request_handler: SERVER_PORT = 3443
        abstract_request_handler: SCGI = 1
        abstract_request_handler: PASSENGER_APP_TYPE = rails
        abstract_request_handler: PASSENGER_USE_GLOBAL_QUEUE = false

    I'm tired of banging my head against the wall, so I'd truly appreciate any help I can get!
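    One likely explanation: proxy_set_header only affects requests forwarded with proxy_pass, and with passenger_enabled the app is served in-process, so those directives never reach the Rack environment at all, which matches the headers being absent even inside abstract_request_handler. A hedged sketch using Passenger's own mechanism instead, assuming this Passenger release supports the passenger_set_cgi_param directive:

        server {
            listen 3443;
            root /Users/website/public;
            rails_env development;
            passenger_enabled on;
            ssl on;
            ssl_certificate /Users/website/ssl/server.crt;
            ssl_certificate_key /Users/website/ssl/server.key;
            # Passenger bypasses nginx's proxy module, so pass values as CGI
            # variables rather than proxy headers:
            passenger_set_cgi_param HTTP_X_FORWARDED_PROTO https;
            passenger_set_cgi_param HTTP_X_REAL_IP $remote_addr;
        }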

  • Building an SSL server farm

    - by dan
    I'm interested in building the architecture in the article referenced below. I currently have a modestly-priced layer-4 load balancer, and my application servers are the SSL endpoints. I want to put an SSL server farm in between my load balancer and my app servers, and then another inexpensive load balancer between the SSL farm and my app servers to do layer-7 routing.

    My web application has a fairly high amount of consumer traffic, which 6 servers can handle at about 50% capacity. Additionally, I have infrastructure traffic that is several orders of magnitude heavier than my consumer traffic: data coming in from all over the world that must integrate with my web application in real time. In total I have 18 app servers to handle all the traffic, plus 6 database servers. I will be adding 6 more app servers over the next 2 weeks and another 6 in the 2 weeks after that. Conservatively, I estimate I will need to scale to 120 servers by the end of the year.

    My motivation right now is to separate the consumer traffic from the infrastructure traffic. The consumer traffic is higher priority, and I cannot allow a stampede on the infrastructure side to take down my consumer-facing servers: having a website that is always up is the top priority. However, if there is a failure in one of the consumer app servers, I want to route that traffic to the servers designated for infrastructure traffic. The complication is that all the traffic is addressed using the same hostname and is nearly 100% https. The only way in my case to distinguish infrastructure from consumer traffic is by URL (poor architecture I inherited), so I need a layer-7 load balancer to be able to route; but for that to work I need either a fancy hardware-based SSL terminator or an SSL server farm as described above.

    Because my user base is rapidly scaling, I worry that if I go down the hardware path it will become very expensive very fast, especially since I will need 4 of everything for high availability (2 identical setups in 2 facilities). Meanwhile, the above diagram seems very flexible and more horizontally scalable. Has anyone built this before? Are there pre-built configurations? What considerations should I make, and what software should I use (I've heard of people using Apache with mod_ssl, nginx, and stunnel)? Also, when does it make sense to buy an expensive load balancer vs. building an SSL server farm? http://1wt.eu/articles/2006_lb/index_05.html
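    For reference, the SSL-farm tier in the linked article can be built from plain software terminators. A minimal sketch of one farm node using nginx (one of the options mentioned above); the names, paths, and the layer-7 balancer address are hypothetical:

        # One node of the SSL farm: decrypt on 443, forward plain HTTP to the
        # layer-7 balancer, preserving client information in headers so the
        # app tier can still route and log correctly.
        server {
            listen 443 ssl;
            server_name example.com;
            ssl_certificate     /etc/ssl/farm.crt;
            ssl_certificate_key /etc/ssl/farm.key;
            location / {
                proxy_pass http://10.0.0.10:80;   # layer-7 LB
                proxy_set_header Host $http_host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-Proto https;
            }
        }

    Scaling out is then a matter of adding identical nodes behind the existing layer-4 balancer.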

  • Apache2, PHP 5.2.8 Working - CRM Install Wizard Doesn't See the PHP Version

    - by nicorellius
    The Apache2 server and PHP version seem to be in order, but when I launch the CRM installer at http://localhost/<CRM dir>/install.php, the wizard says I need a minimum of PHP 5.1 and preferably PHP 5.2.x. The thing is, I am running PHP 5.2.8, and I know this from running php --version. Plus, I spent a bunch of time learning how to compile PHP 5.2.8 and (I thought) doing so successfully. It is quite likely I screwed up and don't have some libraries I need, but I'm not sure where to look first. Thanks in advance.
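    A likely culprit: php --version reports the CLI binary, which can be a different build from the PHP that Apache actually runs (an older module or CGI left over from before the compile). A quick hedged check, served through Apache rather than the shell (the file name is just an example):

        <?php
        // version.php -- drop into the web root and load via
        // http://localhost/version.php; if this prints something other than
        // 5.2.8, Apache is wired to a different PHP than the CLI.
        echo PHP_VERSION;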

  • Apache2 - setting PERL5LIB via SetEnv under CGI

    - by j0nes
    Hi, my setup is as follows: I have one Apache2 webserver running different vhosts; one vhost is for the production website, the other is for a staging/preview system. Both vhosts have different DocumentRoots and also different (Perl) CGI folders. The modules used by each of these vhosts should live in different directories, so I did the following:

        <VirtualHost ...>
            ServerName production
            SetEnv PERL5LIB /home/production/modules
        </VirtualHost>

        <VirtualHost ...>
            ServerName staging
            SetEnv PERL5LIB /home/staging/modules
        </VirtualHost>

    However, I just noticed that in my Perl CGI scripts both paths end up in @INC, so I cannot separate the staging modules from the production modules; the SetEnv directive does not appear to be limited to a single virtual host but seems to work globally. How can I solve this? Thanks! Jonas
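    One hedged workaround is to stop relying on PERL5LIB (which the interpreter folds into @INC at startup, where any process-wide leakage between vhosts becomes visible) and have each script add its path explicitly. A sketch, assuming each vhost sets a hypothetical variable of its own via SetEnv MODULE_DIR /home/staging/modules:

        #!/usr/bin/perl
        # Sketch: prepend exactly one per-vhost module path to @INC, taken
        # from a variable only this vhost sets (MODULE_DIR is a made-up name).
        use strict;
        use warnings;
        use lib $ENV{MODULE_DIR};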

  • robot hammering apache2

    - by user1571418
    My apache2 log is bombarded with lines like:

        108.5.114.118 - - [03/Aug/2012:15:23:28 +0200] "GET http://xchecker.net/tmp_proxy2012/http/engine.php HTTP/1.0" 404 1690 "http://xchecker.net/tmp_proxy2012/http/engine.php" "Mozilla/4.0 (compatible; MSIE 6.0; Windows 98; Win 9x 4.90)"

    I am puzzled by this -- why is a request for some weird xchecker.net domain ending up on my server in the first place? The request comes every few dozen seconds; it must be a robot. Any ideas what it is? By the way, that URL is valid -- apparently it contains some test page...
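    A request line carrying an absolute URL like that is the signature of an open-proxy probe: a bot testing whether the server will forward requests on its behalf. As long as Apache is not configured as a forward proxy these remain harmless 404s; a hedged sketch of the usual defensive settings (the catch-all name is a placeholder):

        # Forward proxying is off by default; make it explicit:
        ProxyRequests Off

        # Optionally, let a first-listed catch-all vhost swallow requests
        # whose Host header matches none of your real sites:
        <VirtualHost *:80>
            ServerName catchall.invalid
            <Location />
                Order allow,deny
                Deny from all
            </Location>
        </VirtualHost>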

  • Local network cache of PHP and Apache2 on Win Server 2008 R2

    - by Ahmed Benlahsen
    Software configuration: I have a new server with Windows Server 2008 R2 installed via VMware. I have installed Apache 2.2, PHP 5.2 and MySQL 5.5 as separate packages.

    Issue: on my first installation of my application, everything works great. After I updated some JS and CSS files and accessed my application again from a PC on the local network, I got the old JS and CSS versions; when I access the same application on the local server itself, I get the latest versions of those files. The link to my application on the local server is http://localhost/BADIL; from the local network it is http://LOCAL_SERVER_IP/BADIL. I think there must be a cache somewhere, but I don't know where -- maybe in Windows Server 2008 R2 or in VMware? The question is: why does everything work fine when I access the application on the server itself, but from the local network I do not see the updated versions of the JS and CSS files?
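    Since the server itself always sees fresh files, the staleness is most plausibly browser caching on the client PCs (localhost and the LAN IP are different origins, so their caches diverge). A hedged sketch of response headers that force revalidation of JS/CSS, assuming mod_headers is enabled in httpd.conf:

        # Requires mod_headers (LoadModule headers_module modules/mod_headers.so)
        <FilesMatch "\.(js|css)$">
            Header set Cache-Control "no-cache, must-revalidate"
        </FilesMatch>

    A common alternative is versioned asset URLs (style.css?v=2), which lets far-future caching stay enabled.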

  • Multiple domains, one config, hosted on apache2

    - by Kristoffer Sall Hansen
    First a quick disclaimer: I'm not a 'server guy' or a 'unix pro' or anything like that; I'm a web programmer who got stuck doing server work since I ran linux (Ubuntu) on my netbook. I'm trying to set up an Apache server running on Debian to automagically serve multiple domains, where each domain has its own directory in /var/www. Since this is the last thing I do for this company, I really need it to be easy for my successor (who is even more of a beginner with servers than I am) to create more domains without having to muck around with ssh or /etc/apache2/sites-available. So what I'm looking for is basically any magic mumbo-jumbo in default (or apt-get, or conf.d) that makes the server start serving any domain that has a matching folder in /var/www; they will of course have to initiate domain transfers the usual way. I have no problem setting up domains individually. Ick... hope the above makes sense to someone.
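    What is described here sounds like mass virtual hosting, which Apache supports out of the box via mod_vhost_alias. A hedged sketch, assuming the module is enabled with a2enmod vhost_alias:

        # e.g. /etc/apache2/sites-available/catchall -- serve any domain that
        # has a matching folder in /var/www, with no per-domain config.
        <VirtualHost *:80>
            ServerAlias *
            UseCanonicalName Off
            # %0 expands to the full requested hostname, so a request for
            # www.example.com is served from /var/www/www.example.com
            VirtualDocumentRoot /var/www/%0
        </VirtualHost>

    Adding a site then only requires creating the folder (plus the DNS transfer), which fits the "successor-proof" requirement.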

  • Varnish 3.0.2 to Apache2 sometimes return error 503

    - by Ronnie Jespersen
    Hey guys, I hope you can help me out here. I have an Nginx passing http and https to a Varnish cache (3.0.2); from Varnish it is sent on to apache2. For some time now I have been tracking some strange 503 errors, but I can't seem to find the silver bullet. Currently I am logging the 503 errors through Varnish this way:

        sudo varnishlog -c -m TxStatus:503 >> /home/rj/varnishlog503.log

    and then referring to the Apache access log to see if any 503 requests have been handled. Today I had a health check from the firewall that failed:

        20 SessionOpen  c 127.0.0.1 34319 :8081
        20 ReqStart     c 127.0.0.1 34319 607335635
        20 RxRequest    c HEAD
        20 RxURL        c /health-check
        20 RxProtocol   c HTTP/1.0
        20 RxHeader     c X-Real-IP: 192.168.3.254
        20 RxHeader     c Host: 192.168.3.189
        20 RxHeader     c X-Forwarded-For: 192.168.3.254
        20 RxHeader     c Connection: close
        20 RxHeader     c User-Agent: Astaro Service Monitor 0.9
        20 RxHeader     c Accept: */*
        20 VCL_call     c recv lookup
        20 VCL_call     c hash
        20 Hash         c /health-check
        20 VCL_return   c hash
        20 VCL_call     c miss fetch
        20 Backend      c 33 aurum aurum
        20 FetchError   c http first read error: -1 11 (No error recorded)
        20 VCL_call     c error deliver
        20 VCL_call     c deliver deliver
        20 TxProtocol   c HTTP/1.1
        20 TxStatus     c 503
        20 TxResponse   c Service Unavailable
        20 TxHeader     c Server: Varnish
        20 TxHeader     c Content-Type: text/html; charset=utf-8
        20 TxHeader     c Retry-After: 5
        20 TxHeader     c Content-Length: 879
        20 TxHeader     c Accept-Ranges: bytes
        20 TxHeader     c Date: Wed, 06 Jun 2012 12:35:12 GMT
        20 TxHeader     c X-Varnish: 607335635
        20 TxHeader     c Age: 60
        20 TxHeader     c Via: 1.1 varnish
        20 TxHeader     c Connection: close
        20 Length       c 879
        20 ReqEnd       c 607335635 1338986052.649786949 1338986112.648169994 0.000160217 59.997980356 0.000402689

    The backend server (Apache) does not have any 503 error in its access log at this point, so I am confused. Is Varnish throwing a 503 because it thinks Apache is too slow? There is a lot of traffic coming through at this point, so I know the server is up and running. I do have other 503 error codes on POSTs and GETs, so there is really no pattern; it seems to happen at random times on random requests, even in the morning when the server doesn't seem to be doing anything. I do see another pattern in the log:

        4 VCL_call     c recv pass
        4 VCL_call     c hash
        4 Hash         c /?id=412
        4 VCL_return   c hash
        4 VCL_call     c pass pass
        4 FetchError   c no backend connection
        4 VCL_call     c error deliver
        4 VCL_call     c deliver deliver

    Here FetchError says "no backend connection". A summary of the FetchErrors in today's log:

        16 FetchError c http first read error: -1 11 (No error recorded)
         5 FetchError c http first read error: -1 11 (No error recorded)
         4 FetchError c http first read error: -1 11 (No error recorded)
        19 FetchError c http first read error: -1 11 (No error recorded)
         5 FetchError c http first read error: -1 11 (No error recorded)
        23 FetchError c http first read error: -1 11 (No error recorded)
        24 FetchError c http first read error: -1 11 (No error recorded)
        16 FetchError c http first read error: -1 11 (No error recorded)
         6 FetchError c http first read error: -1 11 (No error recorded)
         4 FetchError c http first read error: -1 11 (No error recorded)
         5 FetchError c http first read error: -1 11 (No error recorded)
         4 FetchError c http first read error: -1 11 (No error recorded)
         4 FetchError c http first read error: -1 11 (No error recorded)
        22 FetchError c http first read error: -1 11 (No error recorded)
         6 FetchError c http first read error: -1 11 (No error recorded)
        21 FetchError c http first read error: -1 11 (No error recorded)
        26 FetchError c no backend connection
         4 FetchError c no backend connection
        20 FetchError c http first read error: -1 11 (No error recorded)
        39 FetchError c http first read error: -1 11 (No error recorded)

    I haven't changed the default timeout values for Varnish. This is my configuration for one of the backend servers:

        backend xenon {
            .host = "192.168.3.187";
            .port = "80";
            .probe = {
                .url = "/health-check/";
                .interval = 3s;
                .window = 5;
                .threshold = 2;
            }
        }

    I'm running the prefork module on apache2 with this configuration:

        <IfModule mpm_prefork_module>
            StartServers          1
            MinSpareServers       2
            MaxSpareServers       5
            MaxClients          200
            MaxRequestsPerChild  75
        </IfModule>

    and only PHP files are sent to the server; every other static file is handled by Nginx. Any ideas?

    ------- EDIT --------------

    Some more debugging information. I have run varnishadm debug.health:

        Backend radon is Healthy
        Current states  good:  5 threshold:  2 window:  5
        Average responsetime of good probes: 0.002560
        Oldest                                                    Newest
        ================================================================
        4444444444444444444444444444444444444444444444444444444444444444 Good IPv4
        XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX Good Xmit
        RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR Good Recv
        HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH Happy

        Backend xenon is Healthy
        Current states  good:  5 threshold:  2 window:  5
        Average responsetime of good probes: 0.002760
        Oldest                                                    Newest
        ================================================================
        4444444444444444444444444444444444444444444444444444444444444444 Good IPv4
        XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX Good Xmit
        RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR Good Recv
        HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH Happy

        Backend iridium is Healthy
        Current states  good:  5 threshold:  2 window:  5
        Average responsetime of good probes: 0.000849
        Oldest                                                    Newest
        ================================================================
        4444444444444444444444444444444444444444444444444444444444444444 Good IPv4
        XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX Good Xmit
        RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR Good Recv
        HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH Happy

        Backend aurum is Healthy
        Current states  good:  5 threshold:  2 window:  5
        Average responsetime of good probes: 0.002100
        Oldest                                                    Newest
        ================================================================
        4444444444444444444444444444444444444444444444444444444444444444 Good IPv4
        XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX Good Xmit
        RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR Good Recv
        HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH Happy

    And I have been monitoring varnishstat from the two load balancers:

        3224774    3.99    2.61 backend_conn      - Backend conn. success
             27    0.00    0.00 backend_unhealthy - Backend conn. not attempted
             63    0.00    0.00 backend_fail      - Backend conn. failures
         358798    0.00    0.29 backend_reuse     - Backend conn. reuses
          21035    0.00    0.02 backend_toolate   - Backend conn. was closed
         379834    0.00    0.31 backend_recycle   - Backend conn. recycles
             26    0.00    0.00 backend_retry     - Backend conn. retry

        3217751    5.99    2.61 backend_conn      - Backend conn. success
             32    0.00    0.00 backend_fail      - Backend conn. failures
         364185    0.00    0.30 backend_reuse     - Backend conn. reuses
          27077    0.00    0.02 backend_toolate   - Backend conn. was closed
         391263    0.00    0.32 backend_recycle   - Backend conn. recycles
             36    0.00    0.00 backend_retry     - Backend conn. retry

    Notice that none of them have reported backend_fail. /Ronnie
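    One reading of the failed health check above: the ReqEnd line shows the request hanging for almost exactly 60 seconds, which matches Varnish 3's default first_byte_timeout, so Apache accepted the connection but never produced a first byte in time. With StartServers 1 and MaxRequestsPerChild 75 forcing constant prefork child churn, that points at backend saturation rather than Varnish. A hedged sketch giving the backend more room while investigating (timeout values are illustrative):

        backend xenon {
            .host = "192.168.3.187";
            .port = "80";
            # Sketch values -- defaults are roughly 0.7s / 60s / 60s:
            .connect_timeout       = 5s;
            .first_byte_timeout    = 120s;
            .between_bytes_timeout = 60s;
            .probe = {
                .url = "/health-check/";
                .interval = 3s;
                .window = 5;
                .threshold = 2;
            }
        }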

  • Apache2 alias in virtual host

    - by 0x7c00
    I have multiple virtual hosts on one server and plan to set up some aliases in one of them. So I added Alias /foo/ /path/to/foo/ in the virtualhost directive, but it has no effect: a request for host1/foo/ returns 404. But if I add the same line to /etc/apache2/mods-available/alias.conf, it works. The problem is that host2 will then also share this alias. Is there a way to make the alias work only for host1? By the way, when I run apache2ctl -l there is no mod_alias.c listed, which seems weird.
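    For what it's worth, apache2ctl -l lists only statically compiled modules, so mod_alias not appearing there is normal on Debian (it is loaded as a shared module). A per-vhost Alias is supported; a hedged sketch including the <Directory> grant that is often the missing piece when the target lives outside the DocumentRoot:

        <VirtualHost *:80>
            ServerName host1
            Alias /foo/ /path/to/foo/
            <Directory /path/to/foo/>
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    If the alias still only takes effect globally, it may be worth checking that requests for host1 are actually matching this vhost (apache2ctl -S shows the vhost mapping).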

  • serving static assets via http is really slow compared to sshfs (apache2/nginx)

    - by s1lv3r
    After migrating to a new VPS I had some users complaining about slow-loading images on their sites. After creating some test files with dd, I realized that I can download all files via sshfs at full speed, while downloads via the web are painfully slow; the larger the file and the longer the transfer, the slower the transfer speed gets. I thought I had a problem with Apache and just spent the whole evening replacing Apache2 with nginx for static file serving -- with no effect at all. There are no I/O wait states in top, tons of RAM free, no high CPU utilization, and hdparm shows decent I/O performance at all times. I just have no idea anymore what's happening on this server. This is a link to a demo file: http://master.dealux.de/file.tgz Anybody have an idea what I can check?
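    Since both Apache and nginx show the symptom while sshfs does not, the web server is probably not the bottleneck; speeds that decay as a transfer runs often point at the TCP path. A hedged diagnostic sketch (the interface-independent counters below are standard Linux tooling):

        # Client side: time the HTTP download and report average speed.
        curl -o /dev/null -w 'speed: %{speed_download} bytes/s\n' \
            http://master.dealux.de/file.tgz

        # Server side: watch retransmission counters while the download runs;
        # a steadily growing retrans count implicates the network path or the
        # VPS host's traffic shaping rather than the web server.
        netstat -s | grep -i retrans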

  • Ubuntu, apache2 wildcard dns to subdomain

    - by Mark van Velthoven
    Currently I'm hosting my own (Ubuntu) server with the following services: samba, ftp and a webserver. I've bought a domain and linked the DNS A-record to my ISP's IP, and this is working correctly. Now I'd like to use a DNS wildcard record to create subdomains, and I want to avoid waiting 24 hrs for a DNS change to complete every time I add one. Thus far I'm only able to redirect all incoming wildcards to the same directory:

        test1.domain.com -> /var/www
        test2.domain.com -> /var/www

    although I'd like to get:

        test1.domain.com -> /var/www/test1
        test2.domain.com -> /var/www/test2

    My guess would be to change the file /etc/apache2/sites-available/domain. Any help or tips would be welcome! Thanks, Mark
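    This hostname-to-directory mapping is exactly what mod_vhost_alias provides. A hedged sketch, assuming the module is enabled (a2enmod vhost_alias) and the wildcard DNS record already points at the server:

        <VirtualHost *:80>
            ServerAlias *.domain.com
            UseCanonicalName Off
            # %1 is the first dot-separated part of the requested hostname,
            # so test1.domain.com is served from /var/www/test1
            VirtualDocumentRoot /var/www/%1
        </VirtualHost>

    With the wildcard record in place, new subdomains need no further DNS changes -- only a new folder.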

  • Apache2 BufferedLogs On - anybody using it ?

    - by Qiqi
    Greetings, I am wondering whether anybody is using BufferedLogs On with Apache2 and has found any issues. The feature is marked as experimental, but it has been for many years now, so I guess it's pretty stable. I am running some servers with constrained disk IO capacity at the moment, so I turned it on hoping that even a small benefit could help in the long run ;-) I get several to several hundred requests per second, so to my mind there is really no need to write to the log after each request; honestly, I don't think my filesystem (OCFS2, shared among several DomUs in Xen) is the best handler for that many small writes.
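    For reference, the directive itself is a one-liner in the server-wide configuration (per my reading of the mod_log_config docs it cannot be set per-vhost), with a known trade-off worth noting on shared storage; a minimal sketch:

        # /etc/apache2/apache2.conf -- buffer log writes in memory and flush
        # in larger chunks. Trade-off: buffered entries can be lost on a
        # crash, and lines from different requests may land out of order.
        BufferedLogs On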

  • Missing APR on apache2 ./configure

    - by arby
    I want to build the latest stable version of apache2. I downloaded the source and put APR & APR-util in the srclib folder, then changed directories to ./srclib/apr and ran:

        ./configure --prefix=/usr/local/apr
        sudo make
        sudo make install

    This seemed to install APR ok, but when I run ./configure from the apr-util directory, I receive the error:

        configure: error: APR could not be located. Please use the --with-apr option.

    Using ./configure --prefix=/usr/local/apr-util --with-apr=/usr/local/apr, the error becomes:

        checking for APR... configure: error: the --with-apr parameter is incorrect. It must specify an install prefix, a build directory, or an apr-config file.

    Why can't it find APR?
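    The error message accepts three forms of --with-apr; a hedged sketch trying the most explicit one, the apr-config script itself (named apr-1-config for the APR 1.x series), which sidesteps prefix-detection quirks:

        # If APR really landed in /usr/local/apr, this file should exist:
        ls /usr/local/apr/bin/apr-1-config

        cd srclib/apr-util
        ./configure --prefix=/usr/local/apr-util \
                    --with-apr=/usr/local/apr/bin/apr-1-config

    Alternatively, since the APR sources are already in srclib/, httpd's own top-level ./configure supports --with-included-apr, which builds them as part of the httpd build and avoids the separate installs entirely.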

  • LameUser trying - apache2 webserver authentication - allow one IP range without a password prompt, ask others for credentials

    - by Mikee
    I have a (maybe silly) question regarding the apache2 webserver and security. I am trying to achieve this: users connecting from 192.168.1.24 are not prompted for a password and are allowed in; others are asked for a username and password, and if correct are then connected. I am trying to do this for the whole directory /var/www. No matter whether I put the code into a .htaccess file or into httpd.conf, it doesn't work for me:

        Order deny,allow
        Deny from all
        AuthName "PassRequest"
        AuthType Basic
        AuthUserFile /var/.htpasswd
        Require valid-user
        Allow from 192.168.1.24
        Satisfy Any

    If I try to connect to the page, I am allowed in from both the allowed IP and any other; if I remove the Satisfy Any line, I am prompted for a password; if I remove the password requirement too and try to connect from a different IP, I am NOT refused... Is there some module that needs to be activated, or why is the IP directive being skipped? Does it need to be put in every folder, or is /var/www/.htaccess enough? Can I just put it in httpd.conf instead? I have spent the last 4 hours trying to google why it is acting like that; any help will be highly appreciated :-)
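    The configuration shown is essentially the textbook Satisfy Any pattern for Apache 2.2, so the behaviour suggests the directives are not being applied where expected -- for example, a .htaccess being ignored because of AllowOverride None, or a later <Directory> block overriding the access rules. A hedged sketch of the complete block placed directly in the server configuration, which removes the AllowOverride dependency:

        <Directory /var/www>
            AuthType Basic
            AuthName "PassRequest"
            AuthUserFile /var/.htpasswd
            Require valid-user
            Order deny,allow
            Deny from all
            Allow from 192.168.1.24
            # Either condition is sufficient: matching IP, or valid login.
            Satisfy Any
        </Directory>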

  • How to configure cookieless virtual host in Apache2?

    - by xzyfer
    We run over a hundred web applications (growing daily) on a LAMP stack using Apache2 on Ubuntu 10.04, and we would like all requests to static content to be cookieless. We host applications on many different domains, a majority of which are SaaS applications. Many of the domains host instances of the applications on subdomains, i.e. myapp.example.com, myapp2.example.com, myapp.otherexample.com, etc. At the moment all static content is served relative to the (sub)domain requesting it. As far as I understand the process, I would need to set up a new domain, e.g. staticexample.com. In this case, is any special configuration required in the virtual host for this domain to ensure no cookies are served? Also, would it be possible to instead use static.example.com? In that case, what configuration would I need in my virtual host for this subdomain to ensure no cookies are served?
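    A clarifying note: Apache itself does not set cookies; the browser attaches them based on the cookie's domain scope. So static.example.com only stays cookieless if no application sets cookies scoped to the parent .example.com, whereas a wholly separate domain like staticexample.com is immune to that. No special vhost configuration is strictly required, but stripping stray Set-Cookie headers defensively is cheap; a hedged sketch (hypothetical names, assumes mod_headers):

        <VirtualHost *:80>
            ServerName static.example.com
            DocumentRoot /var/www/static
            # Belt and braces: never emit cookies from the static host.
            Header unset Set-Cookie
        </VirtualHost>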

  • apache2 slow responding (debian)

    - by baloo
    I'm running an apache2 2.2.9 webserver with mod_python and mpm_worker_module. The current config for the MPM is:

        ServerLimit          32
        StartServers         10
        MaxClients          800
        MinSpareThreads      25
        MaxSpareThreads      75
        ThreadsPerChild      25
        MaxRequestsPerChild   0

    The server has 1 GB of RAM and a 100 Mbit connection. Checking

        netstat -na | grep ESTABLISHED | wc -l

    gives me a number between 50 and 60, and the load is about 1.0. Every page load is also cached by memcached. I can't see why the server is so slow in responding to new connections, sometimes dropping them completely. I also tried disabling iptables to make sure it's not because of a full state table or something like that. The only thing in dmesg is a lot of spam about "TCP: Treason uncloaked!"

  • Apache2 memory usage when uploading large files

    - by abhaga
    Hi, I am running apache 2.2.12 along with PHP 5.2.10. PHP is configured to run as a separate process through fcgid. The problem is that when users upload a file, the size of the Apache process swells by almost the same amount: if somebody uploads a 200 MB file, one of the child processes swells to its current size + 200 MB, and if 2 users start uploading simultaneously, my server crashes. It is the virtual memory size that increases, but since I am on an OpenVZ-based VPS, that is what counts. My questions are:

    1. Is this normal Apache behavior, or can I do something to fix it?
    2. If not, is there a more memory-efficient way of handling big file uploads? Going by the current behavior, I will need 1 GB of free RAM for every Apache child accepting an upload.

    Thanks! Abhaya
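    A hedged sketch of one avenue worth checking: recent mod_fcgid releases (2.3.6 and later use the Fcgid-prefixed directive names) can spool request bodies beyond a small threshold to disk instead of holding them in the worker's memory, which would match the symptom if an older default is buffering the whole body:

        # Keep only small request bodies in RAM; spool the rest to disk.
        FcgidMaxRequestInMem 131072       # 128 KB in memory
        FcgidMaxRequestLen   209715200    # allow uploads up to 200 MB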

  • Apache2 proxypass

    - by gatsby
    I'm trying to figure out why my apache2 reverse proxy doesn't work; I hope someone can clarify. I'm using an Apache server as a gateway with ProxyPass (its IP is 10.184.1.2). These are the directives I inserted in the 000-default config file:

        ProxyPass / http://192.168.102.31/
        ProxyPassReverse / http://192.168.102.31/

    The host 192.168.102.31 is an internal IP on a subnet which is not reachable directly by clients, but only by the Apache gateway. When I try to access an address such as http://apache_gateway_name/dir, I see the client trying to reach the 192.168.102.31 address directly, and of course a timeout occurs. Can someone help? Best regards
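    If the client ends up chasing 192.168.102.31 itself, the backend is most likely emitting that address in redirects or in HTML links: ProxyPassReverse rewrites Location-style response headers, but not URLs baked into response bodies. A hedged sketch of the usual additions (the mod_proxy_html directives are only available if that module is installed):

        ProxyPreserveHost On
        ProxyPass        / http://192.168.102.31/
        ProxyPassReverse / http://192.168.102.31/

        # If absolute URLs appear inside the HTML itself, mod_proxy_html can
        # rewrite them (or the backend app can be told its public hostname):
        # ProxyHTMLEnable On
        # ProxyHTMLURLMap http://192.168.102.31/ /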

  • Run a script with Apache2 in a certain directory

    - by TheGatorade
    I am trying to run WebMCP on an Apache2 server. It has two executable files, which I have at /opt/webmcp/cgi-bin/webmcp.lua and /opt/webmcp/cgi-bin/webmcp-wrapper.lua. If I run the wrapper from a directory other than /opt/webmcp/cgi-bin, it says it cannot find webmcp.lua and gives a 500 error; if I run it from the correct directory, it works. My server has webmcp.lua set as the DirectoryIndex and it gives a 500 error -- maybe because of this problem? /opt/webmcp/cgi-bin/ is already set as the DocumentRoot and is accessible by www-data.
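    If the wrapper only works when started from its own directory, one hedged workaround is a tiny shim that pins the working directory before handing over, so it no longer matters where Apache invokes it from (the shim's file name is hypothetical):

        #!/bin/sh
        # /opt/webmcp/cgi-bin/webmcp-cgi.sh -- force the working directory the
        # wrapper expects, then exec it so no extra process lingers.
        cd /opt/webmcp/cgi-bin && exec ./webmcp-wrapper.lua "$@"

    Pointing the DirectoryIndex (or a ScriptAlias) at the shim instead of webmcp.lua should then reproduce the "run from the correct directory" case.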

  • Retrieving an RSA key from a running instance of Apache?

    - by Nathan Osman
    I created an RSA keypair for an SSL certificate and stored the private key in /etc/ssl/private/server.key. Unfortunately this was the only copy of the private key that I had. Then I accidentally overwrote the file on disk (yes, I know). Apache is still running and still serving SSL requests, leading me to believe that there may be hope in recovering the private key. (Perhaps there is a symbolic link somewhere in /proc or something?) This server is running Ubuntu 12.04 LTS.
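    A note on the /proc idea: deleted-but-open files can sometimes be rescued via /proc/<pid>/fd, but a key loaded at startup is not an open file; it lives in the httpd parent process's memory. A hedged sketch of the usual recovery attempt (do not restart Apache first; gcore ships with gdb, and passe-partout is one tool built for exactly this):

        # Dump the memory of the master apache2 process (pgrep -o = oldest):
        sudo gcore -o /tmp/apache2 $(pgrep -o apache2)
        # Then scan the core for RSA key material, e.g. with passe-partout
        # or by searching the dump for ASN.1-encoded private key structures.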

  • Extended validation certificate not changing browser bar green in Firefox

    - by Max
    I'm having a problem with an Extended Validation certificate on a site that isn't showing the green bar correctly in Firefox; Chrome and IE are working fine. When I load the page, the bar appears for a few seconds and then disappears once the page has fully loaded. Someone mentioned it could be because of images not loaded over HTTPS, but I'm not sure how valid that is. We have one image on the page that is loaded from another source over HTTPS; the rest of the images are stored in the file system on the server. FYI: it's Windows Server 2008 and ASP.NET.

    UPDATE: Solved -- the style sheet was loading a Google Font URL using http, not https. Changed it, and now it's working.

  • June 25 changes to BIS 742.15 - how does it impact SSL iPhone app export compliance?

    - by Rob
    This question isn't strictly development-related, but I hope it's still acceptable :) On June 25, 2010 the BIS updated 742.15, and of interest to me is the new 742.15(b)(4), "Exclusions from mass market classification request, encryption registration and self-classification reporting requirements", and in particular 742.15(b)(4)(ii), which states:

        (ii) Foreign products developed with or incorporating U.S.-origin encryption source code, components, or toolkits. Foreign products developed with or incorporating U.S. origin encryption source code, components or toolkits that are subject to the EAR, provided that the U.S. origin encryption items have previously been classified or registered and authorized by BIS and the cryptographic functionality has not been changed. Such products include foreign developed products that are designed to operate with U.S. products through a cryptographic interface.

    I take this to mean that my Canadian-produced product that uses https is now excluded from requiring a CCATS. What does everyone else think?
