Search Results



  • Configuring lighttpd for large downloads

    - by ahmedre
    I run a web site that hosts general script-driven pages (PHP, etc.) and MP3 downloads, some of which are fairly large (up to 200 MB). The servers run lighttpd on Linux (Ubuntu, 64-bit). Everything is fine, but under high load the server becomes inaccessible or very slow (even SSHing in takes a while), and I am guessing this is due to a huge number of simultaneous MP3 downloads. Consequently, DNS sees the server as down and redirects all the traffic to the other servers; after a while it comes back up and things work again. So what's the best way to fix this? Ideally, I want the server to keep running and the PHP pages to always work, even if downloads don't. Should I just run two web servers (one for the downloads and one for the PHP pages), or is this something I can fix in my lighttpd configuration? Here are the relevant snippets:

        server.max-worker = 4
        server.max-fds = 2048
        server.max-keep-alive-requests = 4
        server.max-keep-alive-idle = 4
        server.stat-cache-engine = "fam"

        fastcgi.server = ( ".php" =>
          (( "bin-path" => "/usr/bin/php-cgi",
             "socket" => "/tmp/php.socket",
             "max-procs" => 1,
             "idle-timeout" => 20,
             "bin-environment" => (
               "PHP_FCGI_CHILDREN" => "64",
               "PHP_FCGI_MAX_REQUESTS" => "1000"
             ),
             "bin-copy-environment" => ( "PATH", "SHELL", "USER" ),
             "broken-scriptfilename" => "enable"
          ))
        )

        # normal php site
        $HTTP["host"] =~ "bar.com" {
          server.document-root = "/usr/local/www/sites/bar.com/"
          accesslog.filename = "|/usr/sbin/cronolog /var/log/lighttpd/%m/%d/%H/bar.log"
        }

        # download site
        $HTTP["host"] =~ "(download|stream).foo.com" {
          server.document-root = "/home/audio/"
          dir-listing.activate = "enable"
          dir-listing.hide-dotfiles = "enable"
          evasive.max-conns-per-ip = 1
          evasive.silent = "enable"
          # connection.kbytes-per-second = 256
          accesslog.filename = "|/usr/sbin/cronolog /var/log/lighttpd/%m/%d/%H/download.log"
        }
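
    A sketch of one direction, assuming the problem really is download bandwidth crowding everything else out: lighttpd can throttle per connection, and the download host above already has that knob commented out. Something like the following (the numbers are guesses to be tuned, not recommendations):

        # per-connection cap inside the download vhost
        $HTTP["host"] =~ "(download|stream).foo.com" {
          connection.kbytes-per-second = 256
        }
        # optional global ceiling, set at server scope
        server.kbytes-per-second = 8192

    If throttling alone doesn't keep SSH and the PHP hosts responsive, splitting downloads onto a second lighttpd instance (or server) isolates the failure domain at the cost of more moving parts.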

  • mod_rewrite works fine except for missing directory index files

    - by j w
    I have a legacy web site hosted on Apache. It has a number of web pages sitting in the public web root and its subfolders:

        publicDocs/
            directorywith_no_defaultfile/
                some-legacy-flat-page.htm
            .htaccess
            index.php
            some-legacy-flat-page.htm

    I would like to start using Zend MVC for some of the newer pages. I have a .htaccess mod_rewrite rule working so that any request for a non-existent file is handled by the MVC bootstrap file (/index.php). With my current set-up, the following types of requests are routed to /index.php, the MVC bootstrap:

        /index.php
        /blah
        /directorywith_no_defaultfile/bloo

    The following types of requests are served by the old legacy (flat) pages:

        /some-legacy-flat-page.htm
        /directorywith_no_defaultfile/some-legacy-flat-page.htm

    But when I request a directory that has no index file, like these:

        /directorywith_no_defaultfile
        /directorywith_no_defaultfile/

    I get an error:

        Forbidden
        You don't have permission to access /directorywith_no_defaultfile/ on this server.
        Additionally, a 404 Not Found error was encountered while trying to use an
        ErrorDocument to handle the request.

    I suspect this may have something to do with the way Apache handles default files. Do you know which Apache directives could be causing this?
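
    A plausible mechanism: the directory exists, so a file-existence rewrite passes the request through; Apache then finds no DirectoryIndex file and falls back to mod_autoindex, and with listings disabled that produces exactly this 403. One hedged fix is a site-wide fallback index, since mod_dir accepts an absolute local URL:

        # in .htaccess: if a directory has no index.php of its own,
        # fall back to the bootstrap at the web root (sketch)
        DirectoryIndex index.php /index.php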

  • What performance degradation to expect with Nginx over raw Gunicorn+Gevent?

    - by bouke
    I'm trying to get a very high-performing web server set up for handling long-polling, websockets, etc. I have a VM running (Rackspace) with 1 GB RAM / 4 cores. I've set up a very simple gunicorn 'hello world' application with (async) gevent workers. In front of gunicorn I put Nginx with a simple proxy to gunicorn. Using ab, gunicorn spits out 7700 requests/sec, where Nginx only does 5000 requests/sec. Is such a performance degradation expected?

    Hello world:

        #!/usr/bin/env python
        def application(environ, start_response):
            start_response("200 OK", [("Content-type", "text/plain")])
            return ["Hello World!"]

    Gunicorn:

        gunicorn -w8 -k gevent --keep-alive 60 application:application

    Nginx (stripped):

        user www-data;
        worker_processes 4;
        pid /var/run/nginx.pid;
        events {
            worker_connections 768;
        }
        http {
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 65;
            types_hash_max_size 2048;
            upstream app_server {
                server 127.0.0.1:8000 fail_timeout=0;
            }
            server {
                listen 8080 default;
                keepalive_timeout 5;
                root /home/app/app/static;
                location / {
                    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                    proxy_set_header Host $http_host;
                    proxy_redirect off;
                    proxy_pass http://app_server;
                }
            }
        }

    Benchmark (results: nginx TCP, nginx UNIX, gunicorn):

        ab -c 32 -n 12000 -k http://localhost:[8000|8080]/

    Running gunicorn over a Unix socket gives somewhat higher throughput (5500 r/s), but it still doesn't match raw gunicorn's performance.
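
    Some overhead is expected, since every request now makes an extra hop. One thing worth checking in this particular config, though: ab runs with -k and nginx keeps the client connection alive, yet by default nginx proxies upstream with HTTP/1.0 and closes the gunicorn connection after every request. A sketch of upstream keepalive (needs nginx 1.1.4 or newer):

        upstream app_server {
            server 127.0.0.1:8000 fail_timeout=0;
            # pool of idle connections kept open to gunicorn
            keepalive 32;
        }
        ...
        location / {
            # HTTP/1.1 with no Connection header, so the pool is actually used
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            proxy_pass http://app_server;
        }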

  • How do you get AWS VPC EC2 instances to be able to see the AWS APIs?

    - by Peter Mounce
    We're spinning up infrastructure inside of an AWS VPC via CloudFormation. We're using auto-scaling groups to bring up VPC EC2 instances (so we don't bring up instances directly; ASGs manage that). Inside of a VPC, EC2 instances only have a private IP; they cannot see the outside world without further work. When these instances spin up, we have some bootstrap tasks that require talking to the various AWS APIs, and we also have some ongoing tasks that require AWS API traffic. How are you tackling this apparent chicken-and-egg problem? We've read about:

      - NAT instances: we don't like this so much because it's another layer in our stack.
      - Assigning Elastic IPs to each VPC instance that needs to talk: but (a) they all do, (b) since we're using ASGs, we don't know which instances to assign EIPs to at provision time, and (c) we'd need to set up something to monitor those ASGs and assign EIPs when instances are terminated and replaced.
      - Spinning up an instance (actually a load-balanced pair, probably spanning AZs) to act as an AWS API proxy for all API traffic.

    I guess I'm wondering whether there's some kind of back-door we can open that allows our VPC EC2 instances access to the AWS API endpoints, but nothing else, with cheap setup complexity and without adding another network-hop layer to our infrastructure for serving requests.
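
    For scale's sake, if the NAT instance ends up being the least-bad option, the CloudFormation footprint is small: a route table entry pointing the private subnet's default route at the NAT box. A sketch, where PrivateRouteTable and NatInstance are hypothetical logical names from the rest of the template:

        "PrivateDefaultRoute" : {
          "Type" : "AWS::EC2::Route",
          "Properties" : {
            "RouteTableId" : { "Ref" : "PrivateRouteTable" },
            "DestinationCidrBlock" : "0.0.0.0/0",
            "InstanceId" : { "Ref" : "NatInstance" }
          }
        }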

  • Squid bypass for a domain

    - by krisdigitx
    I am using squid with adzap. Is it possible to have squid/adzap not cache a particular domain, e.g. cnn.com? This is my squid.conf file:

        # Recommended minimum configuration:
        acl manager proto cache_object
        acl localhost src 127.0.0.1/32
        #acl localhost src ::1/128
        acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
        #acl to_localhost dst ::1/128

        # Example rule allowing access from your local networks.
        # Adapt to list your (internal) IP networks from where browsing
        # should be allowed
        acl localnet src 192.168.1.0/24
        acl localnet src 192.168.2.0/24

        acl SSL_ports port 443
        acl Safe_ports port 80          # http
        acl Safe_ports port 21          # ftp
        acl Safe_ports port 443         # https
        acl Safe_ports port 70          # gopher
        acl Safe_ports port 210         # wais
        acl Safe_ports port 1025-65535  # unregistered ports
        acl Safe_ports port 280         # http-mgmt
        acl Safe_ports port 488         # gss-http
        acl Safe_ports port 591         # filemaker
        acl Safe_ports port 777         # multiling http
        acl CONNECT method CONNECT

        # Recommended minimum Access Permission configuration:
        # Only allow cachemgr access from localhost
        http_access allow manager localhost
        http_access deny manager

        # Deny requests to certain unsafe ports
        http_access deny !Safe_ports

        # Deny CONNECT to other than secure SSL ports
        http_access deny CONNECT !SSL_ports

        # We strongly recommend the following be uncommented to protect innocent
        # web applications running on the proxy server who think the only
        # one who can access services on "localhost" is a local user
        #http_access deny to_localhost

        # INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS

        # Example rule allowing access from your local networks.
        # Adapt localnet in the ACL section to list your (internal) IP networks
        # from where browsing should be allowed
        http_access allow localnet
        http_access allow localhost

        # And finally deny all other access to this proxy
        http_access deny all

        # Squid normally listens to port 3128
        http_port xxx.xxx.xxx.yyy:3128 transparent
        visible_hostname proxyserver.local

        # We recommend you to use at least the following line.
        hierarchy_stoplist cgi-bin ?

        # Uncomment and adjust the following to add a disk cache directory.
        cache_dir ufs /var/spool/squid 1024 16 256

        # Leave coredumps in the first cache dir
        coredump_dir /var/spool/squid

        # Add any of your own refresh_pattern entries above these.
        refresh_pattern ^ftp:             1440  20%  10080
        refresh_pattern ^gopher:          1440   0%   1440
        refresh_pattern -i (/cgi-bin/|\?)    0   0%      0
        refresh_pattern .                    0  20%   4320

        access_log /var/log/squid/squid.log squid
        access_log syslog squid
        redirect_program /usr/local/adzap/scripts/wrapzap

    Fixed using:

        acl allow_domains dstdomain www.cnn.com
        always_direct allow allow_domains
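
    Worth noting: always_direct only controls whether the request goes straight to the origin server instead of through a cache peer; it does not by itself stop the reply from being stored. If the goal is to never cache objects from the domain at all, a sketch of the usual approach:

        # don't store anything from cnn.com (any subdomain)
        acl nocache_domains dstdomain .cnn.com
        cache deny nocache_domains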

  • libvirt's dnsmasq does not respond to DNS queries or provide DHCP

    - by Jeremy
    This is on Ubuntu 10.04 server, using KVM to run Ubuntu guests. This system has been working for a long time and I have not changed anything (other than applying security updates), but today I found that dnsmasq no longer responds to requests. I cannot say how long this has been broken because I don't frequently use the NAT'd guests, so it could have started just after the last updates or after some other event, and I only noticed it now. I can connect to port 53 with telnet at 192.168.122.1. I've flushed iptables to be sure it wasn't firewall rules, and that is not the problem. dnsmasq is running, and virsh reports the default network as started. I can't find ANY information on troubleshooting libvirt's dnsmasq except that it won't play well with other instances of dnsmasq, which is not the problem here. I cannot even find where log entries for this service might be. Any ideas on where to look for more information?

    Edit: I added another network and that one works fine. I guess I have a workaround, but I would still like to figure out how to troubleshoot this problem.
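
    A sketch of the checks I'd start with, assuming the stock virbr0 default bridge:

        # is libvirt's dnsmasq alive, and with what arguments?
        ps aux | grep dnsmasq
        # is anything actually bound for DNS and DHCP?
        netstat -lnptu | grep -E ':53|:67'
        # what does libvirt think the network looks like?
        virsh net-dumpxml default
        # watch whether guest queries even arrive on the bridge
        tcpdump -ni virbr0 port 53 or port 67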

  • What do "incoming" and "outgoing" traffic mean?

    - by mgibsonbr
    I've seen many resources explaining how to set up a server's firewall to allow incoming and outgoing traffic on the standard HTTP ports (80 and 443), but I can't figure out why I would need either of them. Do I need to unblock both for a "regular" web site to work? For file uploads to work? Are there situations where it would be advisable to unblock one and leave the other blocked? Sorry if that's a basic question, but I couldn't find it explained anywhere (also, I'm not a native English speaker). I know that for a "regular" web site the client is always the one who initiates a request, so I'm assuming a web server must accept incoming traffic on those ports, and my common sense tells me the server is allowed to send a response without unblocking anything else (otherwise it wouldn't make sense to have two types of rules). Is that correct? But what is outgoing web (service) traffic, and what would be its use? AFAIK, if the server wanted to initiate a connection with another machine, the port that matters is the one at the other end (i.e. the destination port would be 80); on its own end any free port could be used (the source port would be random). I can make HTTP requests from my server (using wget, for instance) without unblocking anything. So I'm assuming my concepts of "incoming" and "outgoing" are wrong somehow.
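
    The reasoning in the question is essentially right for a stateful firewall: only new inbound connections to the web ports need explicit rules, and replies ride on connection tracking. A minimal iptables sketch of that idea:

        # accept new connections to the web ports
        iptables -A INPUT -p tcp -m multiport --dports 80,443 -m state --state NEW -j ACCEPT
        # replies and related packets for flows in either direction
        iptables -A INPUT  -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

    "Outgoing" rules matter on stricter setups where the OUTPUT chain defaults to DROP; there, the server's own requests (wget, package updates, calls to external APIs) need their own openings.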

  • Tor proxy / Vidalia "New Identity" button not working

    - by Yisman
    I need to hide my IP from time to time. In Vidalia, I click on "New Identity", then I check http://myip.ozymo.com/ to see if my IP address has changed. But no, it hasn't. Why is that? And how can this be fixed? I tried waiting until the button is re-enabled, to make sure Vidalia is done processing the command, but the IP address is still the same. In Fiddler each request is tracked, so it's not a cached response: it's re-requested, but simply does not change. Fiddler does show one interesting thing, though. Here is the raw response of many of the requests:

        HTTP/1.1 200 OK
        Content-Length: 13
        Date: Mon, 23 May 2011 12:02:57 GMT
        Server: Apache
        X-Powered-By: PHP/5.2.14
        Content-Type: text/html; charset=UTF-8
        Age: 1
        Connection: keep-alive
        Warning: 110 localhost:8118 Object is stale

        26.32.120.106

    What is this warning? And is this the cause?
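
    Two hedged observations. First, the Warning: 110 header ("response is stale") is added by the local proxy on port 8118, which suggests that something in the local proxy chain is re-serving an old object rather than fetching a fresh one, so the unchanged IP on that page may not reflect the current exit node at all. Second, "New Identity" only sends Tor the NEWNYM signal, which requests new circuits but neither guarantees a different exit IP nor fires more often than Tor's built-in rate limit. The same signal can be sent by hand over the control port (9051 must be enabled in torrc for this to work):

        echo -e 'AUTHENTICATE ""\r\nSIGNAL NEWNYM\r\nQUIT' | nc 127.0.0.1 9051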

  • SVN checkout returns 400 error

    - by eboix
    I'm trying to download the http://code.opencv.org/svn/opencv/trunk/ repository of all of the OpenCV source code, as specified in an OpenCV installation tutorial. The tutorial uses the repository https://code.ros.org/svn/opencv/trunk/, but the code has moved to http://code.opencv.org/svn/opencv/trunk/, and you now need a password to access the code.ros.org repository. Anyway, I'm using TortoiseSVN to download the SVN repository (I get the same error with http://sourceforge.net/projects/win32svn/). I get this:

        Checkout from http://code.opencv.org/svn/opencv/trunk, revision HEAD,
        Fully recursive, Externals included
        Server sent unexpected return value (400 Bad request. Method Unknown)
        in response to REPORT request for '/svn/opencv/!svn/vcc/default'

    On the TortoiseSVN site I found this about the 400 error:

        You're behind a firewall which blocks DAV requests. Most firewalls do that.
        Either ask your administrator to change the firewall, or access the repository
        with https:// instead of http://, like https://svn.collab.net/repos/svn/.
        That way you connect to the repository with SSL encryption, which firewalls
        can't interfere with (if they don't block the SSL port completely). Also some
        virus scanners (i.e. Kaspersky) are known to interfere and cause this error.

    The code.ros.org repository is https://, so I would be able to access it, but I need a password, so I can't. I made an account on ros.org, but it seems that I still need a password (which I don't know) to access the code repository; my username-password combination does not work. I unblocked all of the TortoiseSVN programs in my firewall settings; nothing changed. I temporarily stopped my firewall to see if it was interfering with my request; I got the same error. How can I do an svn checkout of http://code.opencv.org/svn/opencv/trunk/opencv/ so that I don't get this error? Is there any way to make it https://? Any help would be appreciated!
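
    One way to see whether something on the path (AV proxy, corporate gateway, transparent proxy) is eating the REPORT verb, independent of any SVN client, is to issue one by hand and compare status lines; a sketch:

        curl -v -X REPORT 'http://code.opencv.org/svn/opencv/!svn/vcc/default'

    If the response seen here differs from what TortoiseSVN gets, the verb is being rewritten or blocked in between, which points at local AV or proxy software rather than at the client or the repository.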

  • Serve a specific set of error pages for different subdirectories

    - by navitronic
    I am currently trying to set up two different sets of error documents for separate folders within a website. I have two folders within the root of the site:

        demo/
        live/

    Any requests that return 404s or 403s within the demo folder need to load one set of pages for the Apache ErrorDocuments, e.g.:

        ErrorDocument 404 /statuses/demo-404.html
        ErrorDocument 403 /statuses/demo-403.html

    and live needs to go to similarly named files:

        ErrorDocument 404 /statuses/live-404.html
        ErrorDocument 403 /statuses/live-403.html

    So far I have tried placing a .htaccess file in both directories with the ErrorDocument directives pointing to the specific files. The 404s work fine and reference the correct page. However, the 403s do not work and revert to the server default when trying to access forbidden folders within the demo directory. The logs show the following:

        [Wed Jun 16 04:47:44 2010] [crit] [client 115.64.131.144] (13)Permission denied:
        /home/abstract/public_html/demo/xxx/.htaccess pcfg_openfile: unable to check
        htaccess file, ensure it is readable

    Is this correct? Would Apache revert to the default because it is trying to look for the .htaccess in a folder it doesn't have permission in? Why wouldn't it work its way back through the folder tree? Can I make it do this?
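
    The log line supports that theory: Apache hits a permission error while merging per-directory config, aborts with a server-scope 403, and never gets to apply the .htaccess ErrorDocument lines; it does not walk back up the tree looking for a usable one. A hedged workaround is to scope the directives in the vhost or server config instead, where they do not depend on readable .htaccess files:

        <Directory /home/abstract/public_html/demo>
            ErrorDocument 404 /statuses/demo-404.html
            ErrorDocument 403 /statuses/demo-403.html
        </Directory>
        <Directory /home/abstract/public_html/live>
            ErrorDocument 404 /statuses/live-404.html
            ErrorDocument 403 /statuses/live-403.html
        </Directory>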

  • Configure Apache + Passenger to serve static files from a different directory

    - by Rory Fitzpatrick
    I'm trying to set up Apache and Passenger to serve a Rails app. However, I also need them to serve static files from a directory other than /public, and to give precedence to these static files over anything in the Rails app. The Rails app is in /home/user/apps/testapp and the static files are in /home/user/public_html. For various reasons the static files cannot simply be moved into the Rails public folder. Also note that the root http://domain.com/ should be served by the index.html file in the public_html folder. Here is the config I'm using:

        <VirtualHost *:80>
            ServerName domain.com
            DocumentRoot /home/user/apps/testapp/public
            RewriteEngine On
            RewriteCond /home/user/public_html/%{REQUEST_FILENAME} -f
            RewriteCond /home/user/public_html/%{REQUEST_FILENAME} -d
            RewriteRule ^/(.*)$ /home/user/public_html/$1 [L]
        </VirtualHost>

    This serves the Rails application fine but gives 404 for any static content from public_html. I have also tried a configuration that uses DocumentRoot /home/user/public_html, but this doesn't serve the Rails app at all, presumably because Passenger doesn't know to process the request. Interestingly, if I change the conditions to !-f and !-d and make the rewrite rule redirect to another domain, it works as expected (e.g. http://domain.com/doesnt_exist gets redirected to http://otherdomain.com/doesnt_exist). How can I configure Apache to serve static files like this, but allow all other requests to continue to Passenger?
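
    One thing that stands out: the two RewriteConds are ANDed by default, and no path can pass both -f (regular file) and -d (directory) at once, so the rule never fires and everything falls through to Passenger. That would explain both the 404s and why the inverted !-f/!-d variant behaves. A sketch of the usual shape (whether rewriting straight to a filesystem path works in vhost context can depend on the Apache version, so treat this as a starting point, not a definitive fix):

        RewriteEngine On
        # the site root is explicitly the static index
        RewriteRule ^/?$ /home/user/public_html/index.html [L]
        # any URL that exists as a file under public_html wins over Rails
        RewriteCond /home/user/public_html%{REQUEST_URI} -f
        RewriteRule ^ /home/user/public_html%{REQUEST_URI} [L]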

  • Can I use iptables on my Varnish server to forward HTTPS traffic to a specific server?

    - by Dylan Beattie
    We use Varnish as our front-end web cache and load balancer: a Linux server in our development environment, running Varnish with some basic caching and load-balancing rules across a pair of Windows 2008 IIS web servers. We have a wildcard DNS rule that points *.development at this Varnish box, so we can browse http://www.mysite.com.development, http://www.othersite.com.development, etc. The problem is that since Varnish can't handle HTTPS traffic, we can't access https://www.mysite.com.development/. For dev/testing we don't need any acceleration or load balancing; all I need is to tell this box to act as a dumb proxy and forward any incoming requests on port 443 to a specific IIS server. I suspect iptables may offer a solution, but it's been a long while since I wrote an iptables rule. Some initial hacking has got me as far as:

        iptables -F
        iptables -A INPUT -p tcp -m tcp --sport 443 -j ACCEPT
        iptables -A OUTPUT -p tcp -m tcp --dport 443 -j ACCEPT
        iptables -t nat -A PREROUTING -p tcp --dport 443 -j DNAT --to 10.0.0.241:443
        iptables -t nat -A POSTROUTING -p tcp -d 10.0.0.241 --dport 443 -j MASQUERADE
        iptables -A INPUT -j LOG --log-level 4 --log-prefix 'PreRouting '
        iptables -A OUTPUT -j LOG --log-level 4 --log-prefix 'PostRouting '
        iptables-save > /etc/iptables.rules

    (where 10.0.0.241 is the IIS box hosting the HTTPS website), but this doesn't appear to be working. To clarify: I realize there are security implications to HTTPS proxying/caching. All I'm looking for is completely transparent IP traffic forwarding. I don't need to decrypt, cache or inspect any of the packets; I just want anything on port 443 to flow through the Linux box to the IIS box behind it as though the Linux box wasn't even there. Any help gratefully received...

    EDIT: Included full iptables config script.
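
    Two pieces are easy to miss with a DNAT like this: the kernel must be allowed to forward packets at all, and forwarded traffic traverses the FORWARD chain, not INPUT or OUTPUT. A sketch on top of the DNAT/MASQUERADE rules already shown:

        # let the kernel route packets between interfaces at all
        echo 1 > /proc/sys/net/ipv4/ip_forward
        # the forwarded HTTPS flow goes through FORWARD, not INPUT/OUTPUT
        iptables -A FORWARD -p tcp -d 10.0.0.241 --dport 443 -j ACCEPT
        iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT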

  • How to configure traffic from a specific IP hardcoded to an IP to forward to another IP:PORT using iptables

    - by cclark
    Unfortunately we have a client who has hardcoded a device to point at a specific IP and port. We'd like to redirect traffic from their IP to our load balancer, which will send the HTTP POSTs to a pool of servers able to handle that request. I would like existing traffic from all other IPs to be unaffected. I believe iptables is the best way to accomplish this, and I think this command should work:

        /sbin/iptables -t nat -A PREROUTING -s $CUST_IP -j DNAT -p tcp --dport 8080 \
            -d $CURR_SERVER_IP --to-destination $NEW_SERVER_IP:8080

    Unfortunately it isn't working as expected. I'm not sure if I need to add another rule, potentially in the POSTROUTING chain? Below I've substituted the variables above with real IPs and tried to replicate the layout in my test environment in incremental steps.

        $CURR_SERVER_IP = 192.168.2.11
        $NEW_SERVER_IP  = 192.168.2.12
        $CUST_IP        = 192.168.0.50

    Port forward on the same IP:

        /sbin/iptables -t nat -A PREROUTING -p tcp -d 192.168.2.11 --dport 16000 \
            -j DNAT --to-destination 192.168.2.11:8080

    Works exactly as expected.

    IP and port forward to a different machine:

        /sbin/iptables -t nat -A PREROUTING -p tcp -d 192.168.2.11 --dport 16000 \
            -j DNAT --to-destination 192.168.2.12:8080

    Connections seem to time out.

    Restrict the IP and port forward to requests from a specific IP:

        /sbin/iptables -t nat -A PREROUTING -p tcp -s 192.168.0.50 -d 192.168.2.11 \
            --dport 16000 -j DNAT --to-destination 192.168.2.12:8080

    Times out as well, probably for the same reason as the previous entry. Does anyone have any insights or suggestions? Thanks.
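
    The same-host case works because the packet never leaves the box, but DNAT to a different machine turns this box into a router: 192.168.2.12's replies would otherwise go straight back to the client with the "wrong" source address and be dropped. A sketch of the two pieces that are likely missing:

        # forwarding must be enabled for DNAT to another host
        sysctl -w net.ipv4.ip_forward=1
        # make replies return through this box rather than directly to the client
        /sbin/iptables -t nat -A POSTROUTING -p tcp -d 192.168.2.12 --dport 8080 -j MASQUERADE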

  • Creating security permissions for a non-domain-member user in Windows Server 2008

    - by Overhed
    Hello everyone, I apologize in advance for incorrect use of terminology, as I'm not an IT person by trade. I'm doing some remote work via a VPN for a client, and I need to add some DCOM service security permissions for my remote user. Even though I'm on the VPN, the request for access to the DCOM service is using my PC's native user (and since I'm running Vista Home Premium, it looks something like PC-NAME\Username). The request for access comes back denied, and I cannot add this user to the security permissions, as it "is not from a domain listed in the Select Location dialog box, and is therefore not valid". I'm pretty stuck and have no clue what steps I need to take here. Any help would be appreciated; thanks in advance.

    EDIT: I have no control over what credentials my computer passes to the server. This scenario occurs in an installation wizard that has a section requesting that you point it at the machine running the "server" version of the software I'm installing (it then tries to invoke the relevant COM service, but my user does not have the "Remote Activation" permission on that service, so I get request denied).

  • DNS request times out then succeeds on my local network. Why?

    - by Dan
    I have a W2K3 server that is the domain controller and also the DNS server. I wanted to make another DNS zone on my network called "something.local" and then create A records to point requests like admin.something.local and www.something.local at machines on my network. I keep getting DNS timeouts, but after two tries the lookup succeeds. Why would this happen? How can I troubleshoot it? From my desktop I run:

        nslookup admin.something.local

    and get:

        Server:  server.domain.com.au.local
        Address:  192.168.0.10

        DNS request timed out.
            timeout was 2 seconds.
        DNS request timed out.
            timeout was 2 seconds.
        Name:    admin.something.local
        Address:  192.168.0.191

    If I go back the other way:

        nslookup 192.168.0.191

    I get:

        Server:  server.domain.com.au.local
        Address:  192.168.0.10

        Name:    admin.something.local
        Address:  192.168.0.191

    My DNS server address is 192.168.0.10. The new DNS zone is not hooked up to Active Directory. I do not have much experience with DNS; yesterday it was working fine. I have tried doing an ipconfig /flushdns on both my desktop and the DNS server.
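
    A sketch of how to narrow down whether the delay comes from the zone itself or from client-side suffix searching (nslookup tries the name with each DNS suffix in turn, and each miss can look like a timeout). dnscmd ships with the Windows Support Tools on W2K3:

        :: show the full exchange, including which names are actually queried
        nslookup -debug admin.something.local 192.168.0.10

        :: on the server: is the zone loaded, and what does it contain?
        dnscmd /enumzones
        dnscmd /zoneprint something.local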

  • How should I deploy my JVM-based web application on ubuntu?

    - by Pieter Breed
    I've developed a web application using Clojure/Compojure (JVM based), and while developing I tested it using embedded Jetty running on 0.0.0.0:8080. I would now like to deploy it to run on port 80 on Ubuntu. I do dynamic virtual hosting, so any request for any host that arrives on port 80 should be handled by my application. The issues that worry me are:

      - I can still run it embedded, but I'm worried about running my app as root (needed for binding to port 80). I'm not sure if I can 'give up root' once inside the JVM. Do I need to be concerned about this?
      - Serving web applications is a known problem and I should be using known solutions (Jetty or Tomcat), but Tomcat in particular seems very heavyweight. Besides, I only have one application, which listens on /* and does routing internally (with Compojure/Ring). What I'm trying to say is that Tomcat by default assigns WARs to subfolders, which I don't want.

    So basically what I need is some very safe way of binding to port 80 on Ubuntu that can, with minimal interference, send all requests to my app. Any ideas?
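
    One common pattern that avoids both root and a second server: keep the JVM on an unprivileged port and let netfilter expose it on 80. A sketch, assuming eth0 as the public interface:

        # run Jetty on 8080 as a normal user; rewrite inbound port 80 to it
        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 8080

    authbind is the other frequently used route on Ubuntu: it grants a specific unprivileged user the right to bind low ports, so the JVM itself can listen on 80 without ever running as root.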

  • Log and debug/decrypt a Windows application's HTTPS traffic

    - by cweiske
    I've got a proprietary Windows-only application that uses HTTPS to speak with an (also proprietary, undocumented) web service. To ultimately be able to use the web service's functionality on my Linux machines, I want to reverse-engineer the web service API by analyzing the requests sent by the application. Now the question: how can I decrypt and log the HTTPS traffic? I know of several solutions which don't apply in my case:

      - Fiddler is a man-in-the-middle HTTPS proxy, which I cannot use since the application doesn't support proxies. Also, I do not (yet) know if it works with self-signed server certificates, which I doubt.
      - Wireshark is able to decrypt SSL streams if you have the server's private key, which I don't have.
      - Any browser extension, since the application is not a browser.

    If I remember correctly, there have been some trojans that captured online banking information by hooking into or replacing the Windows crypto API. Since the machine is mine, low-level changes are possible. Maybe there is a non-trojan (white-hat) network logging application out there which does the same? There is a black-hat presentation with some details available to read. They refer to Microsoft Research's Detours for easy API hooking.

  • Strange 400 error with IIS 7.5 and a webservice?

    - by Juw
    OK, this is a long shot. I have been pondering this for hours and have no clue how to solve it, but maybe someone here can recognize the problem and point me in the right direction. I have an IIS 7.5 server and an MSSQL database on a different server. On the IIS server there is a web service that communicates with the MSSQL server. The problem is that when there is data the MSSQL server needs to send back to the web service, and the web service delivers that back to the web browser (JSON), I get a 400 error. Looking through the logs for IIS, there is just a 400, nothing more. When I put a call to the service in my browser's URL field, I get this:

        The server encountered an error processing the request. Please see the
        service help page for constructing valid requests to the service.

    There is NOTHING wrong with how I call the web service; it has worked before on a different server (a dev server). Does someone have a clue what this could be about? 400 means a malformed request, and the request isn't malformed. And why is it that when there is no data to return to the user, everything works, but when there is data fetched from the MSSQL DB, the 400 error shows up? Hope someone has some tips on how to solve it. Thanks in advance.
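
    That generic "error processing the request" text is the stock WCF fault page, and a 400 that only appears when real data comes back is consistent with the service faulting while serializing the response (size quotas such as maxReceivedMessageSize or maxItemsInObjectGraph are a common culprit, though that is a guess here). WCF tracing usually names the exact exception; a sketch for web.config, with an assumed log path:

        <system.diagnostics>
          <sources>
            <source name="System.ServiceModel"
                    switchValue="Warning, ActivityTracing"
                    propagateActivity="true">
              <listeners>
                <add name="xmlTrace" />
              </listeners>
            </source>
          </sources>
          <sharedListeners>
            <add name="xmlTrace"
                 type="System.Diagnostics.XmlWriterTraceListener"
                 initializeData="C:\logs\wcf-trace.svclog" />
          </sharedListeners>
        </system.diagnostics>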

  • Poor performance on IIS7, only on Windows Server 2008, fine on Windows 7?

    - by user32005
    Hi, I'm new to IIS7 but have experience with other versions. I've been working on an application that works great in a dev environment (as always), but when I push it to a Windows Server 2008/IIS7 box, performance takes a noticeable hit. The dev environment is Windows 7/IIS7, and the configuration in IIS is the same on the dev box as on the server. I've tried all sorts of things to find a reason for this, but I can't come to any conclusion. I've ruled out database problems on the live box, as all data is cached after the first request; I've confirmed this and made sure there is no additional database traffic. I've ruled out network issues with a combination of monitoring requests with Fiddler and local debugging on the server. Whenever the code runs on the server, there seems to be a performance issue. The server: Intel Core 2 Quad CPU Q6600 @ 2.40 GHz with 2 GB RAM. I know this is not fantastic, but I was expecting it to at least perform as well as my dev environment (which runs much more on a lower spec). CPU usage peaks at under 60%, and memory usage is less than half of what's available. I've enabled failed request tracing, and most of the time is spent in a custom HttpModule that handles every request; I can't get any more detail as to what within the module may be causing the problem. Any ideas? I've been pulling my hair out for days now. Thanks.

  • Apache 2.4 with PHP-FPM

    - by tubaguy50035
    I'm trying to set up Apache 2.4 with PHP-FPM 5.4 using the new modules in Apache 2.4. The following is what I currently have in my virtual host file:

        <VirtualHost *:80>
            ServerAdmin root@localhost
            DocumentRoot /var/www

            # Directory permissions
            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Require all granted
            </Directory>

            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    I have PHP-FPM running using Unix sockets, with a sock file located at /var/run/php5-fpm.sock. How do I proxy my requests to this sock file? I've seen some sites say to use ProxyPassMatch and others say RewriteRule. Are there pros or cons on either side? Also, most sites show ProxyPassMatch with a regex that only passes .php files. Could I also send it .html files? For whatever reason, we have a ton of PHP inside .html files.

    Edit: As noted in the comments, it looks like mod_proxy_fcgi doesn't support Unix sockets. Is there another module I should be using?
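
    Given that limitation, the path of least resistance is to have the FPM pool listen on TCP instead (listen = 127.0.0.1:9000 in the pool config) and hand PHP requests over with ProxyPassMatch; a sketch:

        # inside the vhost: route every .php request to FPM over TCP
        ProxyPassMatch ^/(.*\.php(/.*)?)$ fcgi://127.0.0.1:9000/var/www/$1

    Widening the pattern to \.(php|html) would also push .html through PHP, but FPM's security.limit_extensions (which defaults to .php only) then has to be extended to match, or FPM will refuse the files.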

  • Apache override in sub-location

    - by Atmocreations
    This is my Apache vhost configuration:

        <VirtualHost subversion.domain.com:80>
            ServerAdmin [email protected]
            ServerName servername.domain.com
            DocumentRoot /srv/www/htdocs/svn
            ErrorLog /var/log/apache2/subversion-error_log
            CustomLog /var/log/apache2/subversion-access_log combined
            HostnameLookups Off
            UseCanonicalName Off
            ServerSignature Off
            <Location "/">
                AuthBasicProvider ldap
                AuthType Basic
                AuthzLDAPAuthoritative on
                AuthName "SVN"
                AuthLDAPURL "ldap://myldapurl/..." NONE
                AuthLDAPBindDN "mybinddn"
                AuthLDAPBindPassword mypwd
                DAV svn
                SVNParentPath /svn/
                SVNListParentPath on
                require ldap-group groupname
                Order allow,deny
                Allow from all
            </Location>
        </VirtualHost>

    This works perfectly. But I would now like to add a web frontend for the Subversion server, so I added these lines:

        <Location "/web">
            DAV off
            Order allow,deny
            Allow from all
        </Location>

    They don't work, though: the <Location "/">...</Location> block directs the requests to the SVN/DAV module, so Apache reports that it couldn't open the requested SVN filesystem. Does anybody know how to override this setting? Any hint is appreciated.
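
    Since Location blocks merge in file order and "/" matches every request, an alternative that avoids fighting the merge at all is to stop hanging the repository off the root: serve SVN under /svn and the frontend from disk. A sketch (the frontend path is an assumption):

        # repository moves under /svn, so "/" no longer triggers mod_dav_svn
        <Location "/svn">
            DAV svn
            SVNParentPath /svn/
            SVNListParentPath on
            # ... LDAP auth directives as before ...
        </Location>

        # the web frontend is plain files
        Alias /web /srv/www/htdocs/svn-web
        <Directory /srv/www/htdocs/svn-web>
            Order allow,deny
            Allow from all
        </Directory>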

  • Managing rolling deployments in the cloud

    - by Josh Nankin
    Recently I've been experimenting with various cloud management tools (RightScale, Scalr, custom scripts) for managing a variety of servers, each hosting several roles (app, db, load balancer, job queues, etc.). The one thing I find lacking in most solutions is a way to do rolling deployments, i.e. running deployments sequentially across a number of servers with the same role. For instance, I don't want to rebuild all of my web servers at the same time, as that will almost certainly result in some downtime or 500s for my customers. I'd rather have one or two servers rebuild at a time, while the other servers are still available to handle requests. The obvious alternative is to launch new servers that automatically update themselves on boot, but this isn't as cost-effective, and it most likely takes more time for the build to complete (it's faster to build on an existing server than to launch a new server and kill old ones). We've all heard of the big companies having the famous "push to build" button (Twilio, Etsy, etc.), but it seems they all have custom implementations of it. I'm not talking about a simple ssh loop, clusterssh, or even mcollective; I'd prefer something with a nice simple interface that allows me to specify something like a RightScript or a Scalr script to run on a set of servers with a specific role, and have it build them sequentially. Does anyone know of easy ways to get this done, or is this a candidate for a new open-source project?

  • Firefox takes a really long time to load some sites on Ubuntu

    - by Dave
    Hello guys, I have an issue here. Some sites (just a few) take a really long time to load in Firefox. One example is A List Apart (http://www.alistapart.com/), which takes more than 30 minutes (yes, minutes, not seconds). In Opera, or even through a telnet session, the problematic sites load without problems, as fast as expected. I am using Ubuntu 8.04, running Firefox 3.6.3 downloaded from the Mozilla site, with a 10M ADSL connection. I tried many tweaks I found while googling, like disabling IPv6 and changing the HTTP pipelining settings in Firefox's about:config. None worked. I also used Firebug to find which phase of the negotiation is the bottleneck; findings are in the screenshot. Well guys, any idea what the issue is, and how to solve it? I repeat: this only happens with Firefox (3.6.3 and prior versions), and only for a few sites (even sites with many more requests, images, javascripts and stylesheets work fine), and the HTTP pipelining and IPv6 tweaks in about:config didn't work. Thanks.

  • How do I prevent my ASP .NET site from continually prompting for user credentials?

    - by gilles27
    I'm trying to get an ASP.NET website up and running on IIS6. The site runs in its own application pool and uses Windows authentication, with anonymous access turned off. When I run the app pool under NETWORK SERVICE, everything works fine. However, we need the app pool to run under a different account, because this account needs some extra privileges (we are printing Word documents). This new account is a member of the local Users group and the IIS_WPG group, and it has also been granted the "Log on as a service" right. When I browse to the site I am prompted for credentials, not once, but several times. When the page finally loads it looks wrong because the style sheets have not been applied. My suspicion is that I am being prompted once for each file (e.g. all the images, style sheets and script files) the browser requests, and that for some reason the website is unable to validate those credentials in order to serve the files back. If I allow anonymous access, the page loads fine; we don't want to allow it, but I mention it in case it offers any further clues. My theory is that perhaps the account the app pool runs under needs permissions to validate domain credentials? If that is so, how do I enable this?
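
    If the new identity is a domain account, a classic cause of exactly this per-file prompting is Kerberos: NETWORK SERVICE authenticates with the machine account, whose HTTP service principal name already exists, but a custom account has no SPN registered, so Negotiate/Kerberos fails for every resource the browser requests. Registering the SPN on the new account is the usual fix; a sketch with placeholder names:

        setspn -A HTTP/mysite MYDOMAIN\AppPoolAccount
        setspn -A HTTP/mysite.mydomain.local MYDOMAIN\AppPoolAccount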

  • Last (I think and hope) problems configuring an SSL certificate with Apache and VirtualHosts

    - by user65567
    Finally I set up apache2 to use a single certificate for all subdomains:

        [...]
        # Go ahead and accept connections for these vhosts
        # from non-SNI clients
        SSLStrictSNIVHostCheck off

        # Apache setup which will listen for and accept SSL connections on port 443.
        Listen 443

        # Listen for virtual host requests on all IP addresses
        NameVirtualHost *:443

        # Because this virtual host is defined first, it will
        # be used as the default if the hostname is not received
        # in the SSL handshake, e.g. if the browser doesn't support
        # SNI.
        <VirtualHost *:443>
            ServerName domain.localhost
            DocumentRoot "/Users/<my_user_name>/Sites/domain/public"
            <Directory "/Users/<my_user_name>/Sites/domain/public">
                Order allow,deny
                Allow from all
            </Directory>
            # SSL Configuration
            SSLEngine on
            ...
        </VirtualHost>

        <VirtualHost *:443>
            ServerName subdomain1.domain.localhost
            DocumentRoot "/Users/<my_user_name>/Sites/subdomain1/public"
            <Directory "/Users/<my_user_name>/Sites/subdomain1/public">
                Order allow,deny
                Allow from all
            </Directory>
            # SSL Configuration
            SSLEngine on
            ...
        </VirtualHost>

        <VirtualHost *:443>
            ServerName subdomain2.domain.localhost
            DocumentRoot "/Users/<my_user_name>/Sites/subdomain2/public"
            <Directory "/Users/<my_user_name>/Sites/subdomain2/public">
                Order allow,deny
                Allow from all
            </Directory>
            # SSL Configuration
            SSLEngine on
            ...
        </VirtualHost>

    So, for example, I can correctly access:

        https://subdomain1.domain.localhost
        https://subdomain2.domain.localhost
        ...

    Now, however, I have problems accessing:

        http://subdomain1.domain.localhost
        http://subdomain2.domain.localhost
        ...

    Since I'm on Mac OS, when accessing the http:// version I get a default page reading "Your website." (instead of an error). Why does this happen?
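
    Nothing in this configuration defines any *:80 virtual hosts, so plain-HTTP requests fall through to the server's default site, which on Mac OS serves that stock "Your website." placeholder. A sketch of the missing piece, assuming the subdomains should simply bounce visitors to SSL:

        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerName subdomain1.domain.localhost
            # send plain HTTP over to the SSL site
            Redirect permanent / https://subdomain1.domain.localhost/
        </VirtualHost>

    (and similarly for the other hosts, or one vhost with ServerAlias entries covering them all).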
