Search Results

Search found 181 results on 8 pages for 'haproxy'.

  • 1K incoming http post requests per second, each with a 10-50K file

    - by Blankman
    I'm trying to figure out what kind of server setup I will need to support 1K HTTP POST requests per second, where each POST carries an XML file of 5-50 KB (25 KB on average). Even if I get a 100 Mb/s connection with my dedicated box (they usually give 10 Mb/s but you can upgrade), my calculations put that at about 12,500 KB/s, which means roughly 500 of the 25 KB files per second. So this means I need around 3 servers, each with a 100 Mb/s connection. Would a single server running HAProxy be able to redirect the requests to the other servers, or does this mean I need something that can handle more than 100 Mb/s to proxy things out to them? If my math is off I'd appreciate any corrections you may have.
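    A back-of-the-envelope check of that math (my own numbers, ignoring HTTP and TCP overhead):

        1,000 posts/s x 25 KB ≈ 25,000 KB/s ≈ 200 Mb/s of inbound payload
        100 Mb/s ≈ 12,500 KB/s; 12,500 / 25 ≈ 500 posts/s per 100 Mb/s link

    So two 100 Mb/s links are the theoretical floor and three give headroom - but note that a single HAProxy box in front would itself have to carry the full ~200 Mb/s unless traffic is split before it reaches the proxy (e.g. with round-robin DNS).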

  • How to proxy SQL queries (INSERT, UPDATE, etc.)

    - by XakRu
    I have installed a MySQL cluster (Galera with MariaDB). Apache is installed as the application server, and on the Apache server I installed HAProxy, which proxies requests from PHP (the cluster here backs a Zabbix server). But I've run into deadlocks, so now I want to proxy write queries (INSERT, UPDATE) to the second server and SELECT queries to the second and third servers. I would be happy to see your suggestions. Please do not write "use mysql-proxy"; I want to see what other program can proxy SQL requests. Scheme: http://www.gliffy.com/pubdoc/4474830/L.png
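    A proxy like HAProxy cannot parse SQL, so the read/write split has to happen at the connection level: the application opens writes against one port and reads against another. A minimal sketch of that idea (the node addresses are hypothetical, standing in for the second and third cluster members):

        # haproxy.cfg sketch: writes pinned to one node, reads balanced
        listen mysql_writes
            bind *:3307
            mode tcp
            server db2 192.168.0.2:3306 check

        listen mysql_reads
            bind *:3306
            mode tcp
            balance roundrobin
            server db2 192.168.0.2:3306 check
            server db3 192.168.0.3:3306 check

    Pinning all writes to a single Galera node is also a common way to avoid the certification deadlocks mentioned above.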

  • Handling OPTIONS request in nginx

    - by ctmiller
    We're using HAProxy as a load balancer at the moment, and it regularly makes requests to the downstream boxes to make sure they're alive using an OPTIONS request: OPTIONS /index.html HTTP/1.0 I'm working on getting nginx set up as a reverse proxy with caching (using ncache). For some reason, nginx returns a 405 when an OPTIONS request comes in: 192.168.1.10 - - [22/Oct/2008:16:36:21 -0700] "OPTIONS /index.html HTTP/1.0" 405 325 "-" "-" 192.168.1.10 When hitting the downstream webserver directly, I get a proper 200 response. My question is: how do you make nginx pass that response along to HAProxy, or how can I set the response in nginx.conf? Thanks!
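    One workaround (my own sketch, not from the thread) is to answer the probe in nginx itself instead of proxying it - HAProxy only needs a 2xx/3xx status to consider the box alive:

        # nginx.conf sketch: reply to the load balancer's OPTIONS probe directly
        location = /index.html {
            if ($request_method = OPTIONS) {
                return 200;
            }
            proxy_pass http://backend;   # "backend" is a hypothetical upstream
        }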

  • Does any Certificate Authority support both SAN and wildcards?

    - by nicholas a. evans
    My basic quandary is that wildcard certificates don't support subdomains of subdomains, nor do they help with alternate domain names. Basically, if my CN is example.com, I want a Subject Alternative Name field that looks roughly like so: DNS:example.com, DNS:*.example.com, DNS:*.beta.example.com, DNS:example.net, DNS:*.example.net, DNS:*.beta.example.net. Using a self-signed cert (generation sketched below), I verified that browsers will work just fine with this. Unfortunately, none of the Certificate Authorities that I looked into (Thawte, GoDaddy, Verisign, Digicert) seemed to support both wildcard certs and Subject Alternative Names (sometimes referred to as "Multiple Domain UCC"). I even called up GoDaddy tech support to confirm. Is there a CA (trusted by 99% of browsers) that supports wildcards in the Subject Alternative Name? One little restriction: I'm saddled with Amazon EC2's single-Elastic-IP-per-instance limitation. Here are what I see as my backup plans:
    1. Set up three extra EC2 instances, each configured for a different IP address and cert, and nginx reverse proxy from them into the app server(s). This introduces latency(?), and even the cheapest EC2 instance isn't that cheap.
    2. Instead of dedicated reverse proxy instances, set up four or more almost identical EC2 app servers, with nginx using the port to determine which cert to deliver, and HAProxy distributing the traffic amongst them. Complicated to configure and manage? And since I'm not using the cheapest EC2 instance type for my app servers, if I don't need 4+ of them for the load, it raises the cost.
    3. Set up an external server (outside of EC2) without EC2's Elastic IP restrictions, configure all of the alternate IP addresses and certificates on it, and nginx reverse proxy from there into the EC2 app servers. Extra IP addresses are almost free (you still pay for the server, of course), but they don't come with the robust "elasticity" that Amazon's Elastic IPs provide - and there's even more latency than in the first scenario.
    Are these approaches crazy or reasonable? Do you have another one to suggest?
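    A sketch of generating such a self-signed test certificate (my own commands, assuming a reasonably recent OpenSSL; not part of the original question):

        # write a minimal config carrying the SAN list
        printf '%s\n' \
          '[req]' \
          'distinguished_name = dn' \
          '[dn]' \
          '[san]' \
          'subjectAltName = DNS:example.com, DNS:*.example.com, DNS:*.beta.example.com, DNS:example.net, DNS:*.example.net, DNS:*.beta.example.net' \
          > san.cnf

        # self-signed cert with both wildcards and alternate domains
        openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
            -subj "/CN=example.com" -config san.cnf -extensions san \
            -keyout test.key -out test.crt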

  • Using nginx to rewrite urls inside outgoing responses

    - by Kev
    We have a customer with a site running on Apache. Recently the site has been seeing increased load, and as a stopgap we want to shift all the static content on the site to a cookieless domain, e.g. http://static.thedomain.com. The application is not well understood. So, to give the developers time to amend the code to point their links at the static content server (http://static.thedomain.com), I thought about proxying the site through nginx and rewriting the outgoing responses so that links to /images/... are rewritten as http://static.thedomain.com/images/.... For example, in the response from Apache to nginx there is a blob of headers + HTML. In the HTML returned from Apache we have <img> tags that look like: <img src="/images/someimage.png" /> I want to transform this to: <img src="http://static.thedomain.com/images/someimage.png" /> so that the browser, upon receiving the HTML page, requests the images directly from the static content server. Is this possible with nginx (or HAProxy)? I have had a cursory glance through the docs but nothing jumped out at me except rewriting inbound URLs.
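    nginx's sub module (ngx_http_sub_module) does exactly this kind of response-body rewriting, with caveats: it must be compiled in (--with-http_sub_module) and it only works on uncompressed responses. A sketch using the names from the question (the upstream is hypothetical):

        location / {
            proxy_pass http://apache_backend;
            # rewrite image paths in the outgoing HTML
            sub_filter 'src="/images/' 'src="http://static.thedomain.com/images/';
            sub_filter_once off;   # replace every occurrence, not just the first
        }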

  • What reverse proxy server will direct traffic to healthy servers whose health is based on a result string?

    - by joshua paul
    What reverse proxy server will direct traffic to healthy servers whose health is based on a result string? Ideally I'd like something like DNSMadeEasy or UltraDNS, but as a reverse proxy. I have looked at Pound, DeleGate, HAProxy, Squid, Varnish, nginx, Apache, and Cherokee, but I can't see that they will work - they only test for the HTTP result code. The scenario: a client requests www.aaa.com; www.aaa.com is a reverse proxy; the reverse proxy looks at "test.php" on servers 1.aaa.com, 2.aaa.com and 3.aaa.com for the result string "OK"; if a server is "OK", requests are proxied to it. Help!
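    For what it's worth, HAProxy 1.4 and later can match on the response body rather than just the status code, via http-check expect; a minimal sketch (the port is assumed):

        backend aaa_servers
            option httpchk GET /test.php
            http-check expect string OK
            server s1 1.aaa.com:80 check
            server s2 2.aaa.com:80 check
            server s3 3.aaa.com:80 check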

  • Keepalived alternative for Solaris 10

    - by antispam
    We are considering an architecture like the one in the picture for Solaris 10: high-availability software load balancers in front of the web and application servers. Unfortunately, Keepalived is not available for Solaris at the moment. Is there an equivalent to Keepalived that is supported on Solaris 10? Is there an equivalent architecture for Solaris using HA software load balancing? Thank you.

  • How to deal with 100s or 1000s of virtual hosts

    - by anton
    I am curious to know how services such as Heroku manage thousands of virtual hosts - i.e. if you create a web site/app and put it up on one of these services, you get your own virtual host name - foo.heroku.com etc. (the same applies to many other sites that have vanity URLs). I know that with various web servers and proxies you can configure as many virtual hosts as you want - but there must be some upper limit to this? Do they programmatically add virtual hosts - perhaps spreading the load? Or are there other solutions?
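    One common answer is that such services don't add a vhost per site at all: a single wildcard (or regex) vhost catches every subdomain, and the application dispatches on the Host header. A sketch in nginx (the domain and upstream are hypothetical):

        server {
            listen 80;
            # one catch-all vhost for every customer subdomain
            server_name ~^(?<account>.+)\.mydomain\.com$;
            location / {
                proxy_set_header Host $host;    # the app routes on Host
                proxy_pass http://app_backend;
            }
        }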

  • Load balancing a Windows File Share using HA-Proxy

    - by NathanE
    After pulling my hair out over DFS, I had this weird and potentially dangerous idea: just possibly, I might be able to use HAProxy to load balance a file share between servers. I've done some remedial packet traces, and it appears that TCP port 445 is the only thing involved in Windows file sharing. For many years I thought that ports 139, 135 etc. were also involved, at least in establishing the connection - but apparently not! So I set up a basic test:

        listen SMBTest *:445
            mode tcp
            server Smb1 172.16.61.201:445
            server Smb2 172.16.61.202:445

    And you'll never guess what... it works! Now obviously there is the whole concern about synchronisation between the file servers. That could easily be taken care of with a little Robocopy script, and since I only need a highly available read-only file share, there shouldn't be any issues with regard to file locking etc. Can anyone tell me if what I'm playing with here is fire? I really didn't think it would work at all, and now I'm a little shocked. What would be the downsides? Could this be relied upon for a production environment?

  • 250k connections for comet with node.js

    - by Nenad
    How can node.js be implemented to handle 250k connections as a comet server (on the client side we use socket.io)? Would using nginx as a proxy/load balancer be the right solution, or would HAProxy be the better way? Does anyone have real-world experience with 100k+ connections who can share their setup? Would a setup like this be the right one (a quad-core CPU per server, starting 4 instances of node.js per server)?

        nginx (as proxy / load balancing server)
           /              |              \
        node server #1  node server #2  node server #3
         4 instances     4 instances     4 instances
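    A sketch of what the nginx side of that diagram could look like, with one upstream entry per node.js process (addresses and ports are hypothetical):

        upstream comet_cluster {
            # node server #1: four processes on a quad-core box
            server 10.0.1.1:8001;
            server 10.0.1.1:8002;
            server 10.0.1.1:8003;
            server 10.0.1.1:8004;
            # node servers #2 and #3 repeat the pattern
            server 10.0.1.2:8001;
            server 10.0.1.2:8002;
            server 10.0.1.2:8003;
            server 10.0.1.2:8004;
            server 10.0.1.3:8001;
            server 10.0.1.3:8002;
            server 10.0.1.3:8003;
            server 10.0.1.3:8004;
        }

        server {
            listen 80;
            location / {
                proxy_pass http://comet_cluster;
            }
        }

    One caveat: each long-polling client holds an upstream connection open, so worker_connections and file-descriptor limits need to be sized for the full 250k, not for typical request/response traffic.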

  • Apache with mod_php high memory utilization

    - by Raj
    We have a Magento application deployed on Apache with mod_php and MySQL. I have observed that sometimes the Apache server starts consuming a lot of memory, which causes swapping and results in high load on the servers. Whenever there is high load on the Apache server, the Apache processes causing it are in sleep mode on the MySQL end and in the CLOSE_WAIT state on the client side. Any help is appreciated in resolving this issue.

  • Zero downtime deployment (Tomcat), Nginx or HAProxy, behind hardware LB - how to "starve" old server?

    - by alexeypro
    Currently we have the following setup:

        Hardware Load Balancer (LB)
        Box A running Tomcat on 8080 (TA)
        Box B running Tomcat on 8080 (TB)

    TA and TB are running behind LB. For now it's a pretty complicated and manual job to take Box A or Box B out of the LB for a zero-downtime deployment. I am thinking of something like this:

        Hardware Load Balancer (LB)
        Box A running Nginx on 8080 (NA)
        Box A running Tomcat on 8081 (TA1)
        Box A running Tomcat on 8082 (TA2)
        Box B running Nginx on 8080 (NB)
        Box B running Tomcat on 8081 (TB1)
        Box B running Tomcat on 8082 (TB2)

    Basically LB will now direct traffic between NA and NB. On each nginx we'll have TA1/TA2 and TB1/TB2 configured as upstream servers. Once one of the upstreams' healthcheck pages is unresponsive (shut down), the traffic goes to the other one (the HttpHealthcheckModule on nginx). So the deploy process is simple. Say TA1 is active with version 0.1 of the app, and its healthcheck is OK. We start TA2 with its healthcheck reporting ERROR, so nginx is not talking to it. We deploy app version 0.2 to TA2 and make sure it works. Then we switch the healthcheck on TA2 to OK and the healthcheck on TA1 to ERROR. Nginx will start serving TA2 and remove TA1 from rotation. Done! And now the same with the other box. While it all sounds cool and nice, how do we "starve" the nginx? Say we have pending connections and some users on TA1. If we just turn it off, sessions will break (we have cookie-based sessions). Not good. Is there any way to starve traffic to one of the upstream servers with nginx? Thanks!
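    On the "starving" question, one sketch at the nginx layer (the upstream names mirror the question; note this drains in-flight requests but does not migrate cookie-pinned sessions):

        upstream tomcats_a {
            server 127.0.0.1:8081;         # TA1, live
            server 127.0.0.1:8082 down;    # TA2, flag flipped at deploy time
        }

    After swapping the down flags, nginx -s reload starts new workers with the new upstream set while the old workers finish their active requests before exiting.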

  • Solaris TCP stack tuning

    - by disserman
    We have a large web project (about 2-3k requests per second) using haproxy (http://haproxy.1wt.eu/) as a frontend and load balancer between the Java application servers. The frontend (haproxy) is running on Linux, but we are going to migrate it to Solaris 10, as all our other servers run Solaris. After switching the traffic over I see two things: a) the web site loads slower (5-10 seconds with images, compared to 2-3 seconds on Linux); b) sometimes haproxy fails to perform a "lifecheck" (fetch a special web page and analyze the HTTP response code) due to a socket timeout. After switching traffic back to Linux, everything is okay. I've tried to tune all the parameters I found in /dev/tcp, but made no progress. I believe the problem is in some open-socket limitation. If someone can point me to the answer, I would greatly appreciate it. p.s. haproxy runs under a Xen DomU on Linux (kernel 2.6.18, Debian 5), and under a zone on Solaris (10 u8). The only thing we did on Linux was increase ip_conntrack_max (I believe the Solaris option tcp_conn_req_max_q is the closest equivalent).
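    Not an answer to the root cause, but for reference: Solaris TCP tunables are read and set with ndd(1M); a sketch of raising the listen backlogs, with illustrative values (tcp_conn_req_max_q is the parameter mentioned above):

        # inspect, then raise, the completed- and pending-connection backlogs
        ndd -get /dev/tcp tcp_conn_req_max_q
        ndd -set /dev/tcp tcp_conn_req_max_q 4096
        ndd -set /dev/tcp tcp_conn_req_max_q0 16384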

  • Self-healing Cloud vs Failover Boxes

    - by IMB
    Now that self-healing cloud servers are becoming more and more popular, I am torn between setting up an HAProxy failover for my VPS and saving myself the trouble by just putting my sites on a self-healing cloud server. Does it still make sense to set up your own failover system (HAProxy + 2 or more servers, for example) when a self-healing cloud seems like a practical solution? They seem to do the same job - or am I missing something?

  • Send Redirects To Specific Ports

    - by Garrett
    I have a Rails application server that is listening on port 9000 and is being called through HAProxy. All redirects from that server are sent back through port 9000, when they should go back on port 80. I am using a combination of HAProxy + nginx + Passenger. Is there a way to make sure all redirects are sent through port 80, regardless of what port the actual server is listening on? I don't care if it's an HAProxy, nginx, Passenger, or Rails change; I just need to make sure requests, unless specified otherwise, are sent back to port 80. Thanks!
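    One place to fix this is the nginx layer in front of Passenger; the directives below are standard nginx, while the backend address mirrors the question:

        location / {
            # pass the public hostname through so Rails builds port-80 URLs
            proxy_set_header Host $host;
            # keep nginx's own redirects from appending a nonstandard port
            port_in_redirect off;
            proxy_pass http://127.0.0.1:9000;
        }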

  • Apache with mod_perl eating memory when idle

    - by syneticon-dj
    An Apache webserver running a mod_perl application is exhibiting abnormal memory usage - after the "day load" ceases, the system's memory is exhausted by the Apache processes and oom_killer is invoked. As the load returns the following morning, memory usage normalizes - probably because Apache workers get recycled periodically once a sufficient number of hits is generated (a graph of memory usage and a graph of Apache hits per second correlate this). The remaining 2 hits per second throughout the night are induced by HAProxy checks - it runs HEAD http://mydomain.example.com/running HTTP/1.0 requests against the server every half second, with "running" being a static file (i.e. not invoking any Perl code). It also seems that disabling these checks remedies the memory usage problem, but that obviously cannot be a solution. All 3 similarly configured servers (behind HAProxy) show this behavior. The OS is Ubuntu 10.10, Apache version 2.2.16. This looks like a memory leak, but I have no idea how to start debugging it - any hints?
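    Given that worker recycling under load masks the problem, a stopgap sketch (not a fix for the leak itself) is to bound worker lifetime in the Apache 2.2 prefork configuration; the values here are illustrative:

        <IfModule mpm_prefork_module>
            MaxClients            64
            # recycle each worker after N requests so a slow per-request leak
            # cannot accumulate overnight (the default is often 0 = never)
            MaxRequestsPerChild 1000
        </IfModule>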

  • Solutions for a webserver dedicated to managing permissions/ACLs and (reverse) proxying API servers?

    - by giohappy
    I'm considering various layouts to expose several HTTP API services (each running on its own server) through a frontend server dedicated to managing permissions on behalf of the API services. I've considered various options, from classical ones like nginx and Apache, to HAProxy, to the Python webserver solutions like Tornado and Twisted (which would give me the opportunity to implement my own ACL system easily). The fundamental requirements are high performance and scalability, plus the ability to manage fine-grained ACL rules (similar to the HAProxy ACL system). I would like to know what the suggested approach is for setting up what I need, and whether (open source) ready-to-use solutions dedicated to this already exist.
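    For scale, the HAProxy ACL system mentioned above looks roughly like this; a sketch gating a hypothetical API backend by path and source network:

        frontend api_front
            bind *:80
            acl is_geo   path_beg /geo/
            acl internal src 10.0.0.0/8
            http-request deny if is_geo !internal
            use_backend geo_api if is_geo
            default_backend main_api

        backend geo_api
            server geo1 10.0.1.10:8080 check

        backend main_api
            server api1 10.0.1.20:8080 check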

  • Cheap way to scale a Rails application

    - by VP
    I have an application that is becoming big, but so far it isn't bringing in much revenue - which means there is little money to reinvest in it. In this scenario, I found a way to make a "cheap distributed Rails" deployment. I've got 4 VPSes, all on the same physical server. I added a load-balancing server running HAProxy on one dedicated VPS and pointed the virtual IP address my domain name is associated with at it. Behind this HAProxy I have two more VPSes running my Rails app, Passenger and memcached. Both app servers talk to the same database server, my 4th VPS. So for $44/month I have a distributed environment. It won't be my final choice, but for now, while the budget is short, is this a good way to deploy a Rails application? Any pros or cons? Is it worth the $44/month?

  • Slow Client connection blocks Mongrel

    - by Sanjay
    I have an Apache + HAProxy + Mongrel setup for my Rails application. When I hit a particular server page, Mongrel takes around 100 ms to process the request, but I get the page in around 5 seconds due to the data transmission time over my slow home connection. I can see that during these 5 seconds of data transmission, Mongrel does not serve any other request. I am surprised, as that means Mongrel is serving the response HTML to the client and is blocked until the client receives it. Shouldn't serving the response be the job of Apache? This puts a serious bottleneck on the number of requests Mongrel can serve, since that would depend on the speed of the client connections. Is there any way for the HTML generated by Mongrel to be served by Apache/HAProxy or another web server like nginx? I wonder how other high-traffic sites manage it.
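    This is the standard argument for putting a buffering proxy in front of Mongrel: nginx, for example, absorbs the response at LAN speed and trickles it out to the slow client, freeing Mongrel immediately. Buffering is on by default; a sketch making it explicit (the upstream name is hypothetical):

        location / {
            proxy_buffering on;      # take the whole response off Mongrel's hands
            proxy_buffers 16 32k;    # in-memory buffers before spilling to temp files
            proxy_pass http://mongrel_cluster;
        }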

  • Where is all the memory being consumed?

    - by Mark L
    Hello, I have a Dell R300 Ubuntu 9.10 box with 4GB of memory. All I'm running on there is haproxy, nagios and postfix yet there is ~2.7GB of memory being consumed. I've run ps and I can't get the sums to add up. Could anyone shed any light on where all the memory is being used? Cheers, Mark $ sudo free -m total used free shared buffers cached Mem: 3957 2746 1211 0 169 2320 -/+ buffers/cache: 256 3701 Swap: 6212 0 6212 Sorry for pasting all of ps' output but I'm keen to get to the bottom of this. $ sudo ps aux [sudo] password for mark: USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND root 1 0.0 0.0 19320 1656 ? Ss May20 0:05 /sbin/init root 2 0.0 0.0 0 0 ? S< May20 0:00 [kthreadd] root 3 0.0 0.0 0 0 ? S< May20 0:00 [migration/0] root 4 0.0 0.0 0 0 ? S< May20 0:16 [ksoftirqd/0] root 5 0.0 0.0 0 0 ? S< May20 0:00 [watchdog/0] root 6 0.0 0.0 0 0 ? S< May20 0:03 [migration/1] root 7 0.0 0.0 0 0 ? S< May20 3:10 [ksoftirqd/1] root 8 0.0 0.0 0 0 ? S< May20 0:00 [watchdog/1] root 9 0.0 0.0 0 0 ? S< May20 0:00 [migration/2] root 10 0.0 0.0 0 0 ? S< May20 0:19 [ksoftirqd/2] root 11 0.0 0.0 0 0 ? S< May20 0:00 [watchdog/2] root 12 0.0 0.0 0 0 ? S< May20 0:01 [migration/3] root 13 0.0 0.0 0 0 ? S< May20 0:41 [ksoftirqd/3] root 14 0.0 0.0 0 0 ? S< May20 0:00 [watchdog/3] root 15 0.0 0.0 0 0 ? S< May20 0:03 [events/0] root 16 0.0 0.0 0 0 ? S< May20 0:10 [events/1] root 17 0.0 0.0 0 0 ? S< May20 0:08 [events/2] root 18 0.0 0.0 0 0 ? S< May20 0:08 [events/3] root 19 0.0 0.0 0 0 ? S< May20 0:00 [cpuset] root 20 0.0 0.0 0 0 ? S< May20 0:00 [khelper] root 21 0.0 0.0 0 0 ? S< May20 0:00 [netns] root 22 0.0 0.0 0 0 ? S< May20 0:00 [async/mgr] root 23 0.0 0.0 0 0 ? S< May20 0:00 [kintegrityd/0] root 24 0.0 0.0 0 0 ? S< May20 0:00 [kintegrityd/1] root 25 0.0 0.0 0 0 ? S< May20 0:00 [kintegrityd/2] root 26 0.0 0.0 0 0 ? S< May20 0:00 [kintegrityd/3] root 27 0.0 0.0 0 0 ? S< May20 0:00 [kblockd/0] root 28 0.0 0.0 0 0 ? S< May20 0:01 [kblockd/1] root 29 0.0 0.0 0 0 ? S< May20 0:04 [kblockd/2] root 30 0.0 0.0 0 0 ? S< May20 0:02 [kblockd/3] root 31 0.0 0.0 0 0 ? S< May20 0:00 [kacpid] root 32 0.0 0.0 0 0 ? S< May20 0:00 [kacpi_notify] root 33 0.0 0.0 0 0 ? S< May20 0:00 [kacpi_hotplug] root 34 0.0 0.0 0 0 ? S< May20 0:00 [ata/0] root 35 0.0 0.0 0 0 ? S< May20 0:00 [ata/1] root 36 0.0 0.0 0 0 ? S< May20 0:00 [ata/2] root 37 0.0 0.0 0 0 ? S< May20 0:00 [ata/3] root 38 0.0 0.0 0 0 ? S< May20 0:00 [ata_aux] root 39 0.0 0.0 0 0 ? S< May20 0:00 [ksuspend_usbd] root 40 0.0 0.0 0 0 ? S< May20 0:00 [khubd] root 41 0.0 0.0 0 0 ? S< May20 0:00 [kseriod] root 42 0.0 0.0 0 0 ? S< May20 0:00 [kmmcd] root 43 0.0 0.0 0 0 ? S< May20 0:00 [bluetooth] root 44 0.0 0.0 0 0 ? S May20 0:00 [khungtaskd] root 45 0.0 0.0 0 0 ? S May20 0:00 [pdflush] root 46 0.0 0.0 0 0 ? S May20 0:09 [pdflush] root 47 0.0 0.0 0 0 ? S< May20 0:00 [kswapd0] root 48 0.0 0.0 0 0 ? S< May20 0:00 [aio/0] root 49 0.0 0.0 0 0 ? S< May20 0:00 [aio/1] root 50 0.0 0.0 0 0 ? S< May20 0:00 [aio/2] root 51 0.0 0.0 0 0 ? S< May20 0:00 [aio/3] root 52 0.0 0.0 0 0 ? S< May20 0:00 [ecryptfs-kthrea] root 53 0.0 0.0 0 0 ? S< May20 0:00 [crypto/0] root 54 0.0 0.0 0 0 ? S< May20 0:00 [crypto/1] root 55 0.0 0.0 0 0 ? S< May20 0:00 [crypto/2] root 56 0.0 0.0 0 0 ? S< May20 0:00 [crypto/3] root 70 0.0 0.0 0 0 ? S< May20 0:00 [scsi_eh_0] root 71 0.0 0.0 0 0 ? S< May20 0:00 [scsi_eh_1] root 74 0.0 0.0 0 0 ? S< May20 0:00 [scsi_eh_2] root 75 0.0 0.0 0 0 ? S< May20 0:00 [scsi_eh_3] root 82 0.0 0.0 0 0 ? S< May20 0:00 [kstriped] root 83 0.0 0.0 0 0 ? 
S< May20 0:00 [kmpathd/0] root 84 0.0 0.0 0 0 ? S< May20 0:00 [kmpathd/1] root 85 0.0 0.0 0 0 ? S< May20 0:00 [kmpathd/2] root 86 0.0 0.0 0 0 ? S< May20 0:00 [kmpathd/3] root 87 0.0 0.0 0 0 ? S< May20 0:00 [kmpath_handlerd] root 88 0.0 0.0 0 0 ? S< May20 0:00 [ksnapd] root 89 0.0 0.0 0 0 ? S< May20 0:00 [kondemand/0] root 90 0.0 0.0 0 0 ? S< May20 0:00 [kondemand/1] root 91 0.0 0.0 0 0 ? S< May20 0:00 [kondemand/2] root 92 0.0 0.0 0 0 ? S< May20 0:00 [kondemand/3] root 93 0.0 0.0 0 0 ? S< May20 0:00 [kconservative/0] root 94 0.0 0.0 0 0 ? S< May20 0:00 [kconservative/1] root 95 0.0 0.0 0 0 ? S< May20 0:00 [kconservative/2] root 96 0.0 0.0 0 0 ? S< May20 0:00 [kconservative/3] root 97 0.0 0.0 0 0 ? S< May20 0:00 [krfcommd] root 315 0.0 0.0 0 0 ? S< May20 0:09 [mpt_poll_0] root 317 0.0 0.0 0 0 ? S< May20 0:00 [mpt/0] root 547 0.0 0.0 0 0 ? S< May20 0:00 [scsi_eh_4] root 587 0.0 0.0 0 0 ? S< May20 0:11 [kjournald2] root 636 0.0 0.0 12748 860 ? S May20 0:00 upstart-udev-bridge --daemon root 657 0.0 0.0 17064 924 ? S<s May20 0:00 udevd --daemon root 666 0.0 0.0 8192 612 ? Ss May20 0:00 dd bs=1 if=/proc/kmsg of=/var/run/rsyslog/kmsg root 774 0.0 0.0 17060 888 ? S< May20 0:00 udevd --daemon root 775 0.0 0.0 17060 888 ? S< May20 0:00 udevd --daemon syslog 825 0.0 0.0 191696 1988 ? Sl May20 0:31 rsyslogd -c4 root 839 0.0 0.0 0 0 ? S< May20 0:00 [edac-poller] root 870 0.0 0.0 0 0 ? S< May20 0:00 [kpsmoused] root 1006 0.0 0.0 5988 604 tty4 Ss+ May20 0:00 /sbin/getty -8 38400 tty4 root 1008 0.0 0.0 5988 604 tty5 Ss+ May20 0:00 /sbin/getty -8 38400 tty5 root 1015 0.0 0.0 5988 604 tty2 Ss+ May20 0:00 /sbin/getty -8 38400 tty2 root 1016 0.0 0.0 5988 608 tty3 Ss+ May20 0:00 /sbin/getty -8 38400 tty3 root 1018 0.0 0.0 5988 604 tty6 Ss+ May20 0:00 /sbin/getty -8 38400 tty6 daemon 1025 0.0 0.0 16512 472 ? Ss May20 0:00 atd root 1026 0.0 0.0 18708 1000 ? Ss May20 0:03 cron root 1052 0.0 0.0 49072 1252 ? Ss May20 0:25 /usr/sbin/sshd root 1084 0.0 0.0 5988 604 tty1 Ss+ May20 0:00 /sbin/getty -8 38400 tty1 root 6320 0.0 0.0 19440 956 ? Ss May21 0:00 /usr/sbin/xinetd -pidfile /var/run/xinetd.pid -stayalive -inetd_compat -inetd_ipv6 nagios 8197 0.0 0.0 27452 1696 ? SNs May21 2:57 /usr/sbin/nagios3 -d /etc/nagios3/nagios.cfg root 10882 0.1 0.0 70280 3104 ? Ss 10:30 0:00 sshd: mark [priv] mark 10934 0.0 0.0 70432 1776 ? S 10:30 0:00 sshd: mark@pts/0 mark 10935 1.4 0.1 21572 4336 pts/0 Ss 10:30 0:00 -bash root 10953 1.0 0.0 15164 1136 pts/0 R+ 10:30 0:00 ps aux haproxy 12738 0.0 0.0 17208 992 ? Ss Jun08 0:49 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg root 23953 0.0 0.0 37012 2192 ? Ss Jun04 0:03 /usr/lib/postfix/master postfix 23955 0.0 0.0 39232 2356 ? S Jun04 0:00 qmgr -l -t fifo -u postfix 32603 0.0 0.0 39072 2132 ? 
S 09:05 0:00 pickup -l -t fifo -u -c Here's meminfo: $ cat /proc/meminfo MemTotal: 4052852 kB MemFree: 1240488 kB Buffers: 173172 kB Cached: 2376420 kB SwapCached: 0 kB Active: 1479288 kB Inactive: 1081876 kB Active(anon): 11792 kB Inactive(anon): 0 kB Active(file): 1467496 kB Inactive(file): 1081876 kB Unevictable: 0 kB Mlocked: 0 kB SwapTotal: 6361700 kB SwapFree: 6361700 kB Dirty: 44 kB Writeback: 0 kB AnonPages: 11568 kB Mapped: 5844 kB Slab: 155032 kB SReclaimable: 145804 kB SUnreclaim: 9228 kB PageTables: 1592 kB NFS_Unstable: 0 kB Bounce: 0 kB WritebackTmp: 0 kB CommitLimit: 8388124 kB Committed_AS: 51732 kB VmallocTotal: 34359738367 kB VmallocUsed: 282604 kB VmallocChunk: 34359453499 kB HugePages_Total: 0 HugePages_Free: 0 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: 2048 kB DirectMap4k: 6784 kB DirectMap2M: 4182016 kB Here's slabinfo: $ cat /proc/slabinfo slabinfo - version: 2.1 # name <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab> : tunables <limit> <batchcount> <sharedfactor> : slabdata <active_slabs> <num_slabs> <sharedavail> ip6_dst_cache 50 50 320 25 2 : tunables 0 0 0 : slabdata 2 2 0 UDPLITEv6 0 0 960 17 4 : tunables 0 0 0 : slabdata 0 0 0 UDPv6 68 68 960 17 4 : tunables 0 0 0 : slabdata 4 4 0 tw_sock_TCPv6 0 0 320 25 2 : tunables 0 0 0 : slabdata 0 0 0 TCPv6 72 72 1792 18 8 : tunables 0 0 0 : slabdata 4 4 0 dm_raid1_read_record 0 0 1064 30 8 : tunables 0 0 0 : slabdata 0 0 0 kcopyd_job 0 0 368 22 2 : tunables 0 0 0 : slabdata 0 0 0 dm_uevent 0 0 2608 12 8 : tunables 0 0 0 : slabdata 0 0 0 dm_rq_target_io 0 0 376 21 2 : tunables 0 0 0 : slabdata 0 0 0 uhci_urb_priv 0 0 56 73 1 : tunables 0 0 0 : slabdata 0 0 0 cfq_queue 0 0 168 24 1 : tunables 0 0 0 : slabdata 0 0 0 mqueue_inode_cache 18 18 896 18 4 : tunables 0 0 0 : slabdata 1 1 0 fuse_request 0 0 632 25 4 : tunables 0 0 0 : slabdata 0 0 0 fuse_inode 0 0 768 21 4 : tunables 0 0 0 : slabdata 0 0 0 ecryptfs_inode_cache 0 0 1024 16 4 : tunables 0 0 0 : slabdata 0 0 0 hugetlbfs_inode_cache 26 26 608 26 4 : tunables 0 0 0 : slabdata 1 1 0 journal_handle 680 680 24 170 1 : tunables 0 0 0 : slabdata 4 4 0 journal_head 144 144 112 36 1 : tunables 0 0 0 : slabdata 4 4 0 revoke_table 256 256 16 256 1 : tunables 0 0 0 : slabdata 1 1 0 revoke_record 512 512 32 128 1 : tunables 0 0 0 : slabdata 4 4 0 ext4_inode_cache 53306 53424 888 18 4 : tunables 0 0 0 : slabdata 2968 2968 0 ext4_free_block_extents 292 292 56 73 1 : tunables 0 0 0 : slabdata 4 4 0 ext4_alloc_context 112 112 144 28 1 : tunables 0 0 0 : slabdata 4 4 0 ext4_prealloc_space 156 156 104 39 1 : tunables 0 0 0 : slabdata 4 4 0 ext4_system_zone 0 0 40 102 1 : tunables 0 0 0 : slabdata 0 0 0 ext2_inode_cache 0 0 776 21 4 : tunables 0 0 0 : slabdata 0 0 0 ext3_inode_cache 0 0 784 20 4 : tunables 0 0 0 : slabdata 0 0 0 ext3_xattr 0 0 88 46 1 : tunables 0 0 0 : slabdata 0 0 0 dquot 0 0 256 16 1 : tunables 0 0 0 : slabdata 0 0 0 shmem_inode_cache 606 620 800 20 4 : tunables 0 0 0 : slabdata 31 31 0 pid_namespace 0 0 2112 15 8 : tunables 0 0 0 : slabdata 0 0 0 UDP-Lite 0 0 832 19 4 : tunables 0 0 0 : slabdata 0 0 0 RAW 183 210 768 21 4 : tunables 0 0 0 : slabdata 10 10 0 UDP 76 76 832 19 4 : tunables 0 0 0 : slabdata 4 4 0 tw_sock_TCP 80 80 256 16 1 : tunables 0 0 0 : slabdata 5 5 0 TCP 81 114 1664 19 8 : tunables 0 0 0 : slabdata 6 6 0 blkdev_integrity 144 144 112 36 1 : tunables 0 0 0 : slabdata 4 4 0 blkdev_queue 64 64 2024 16 8 : tunables 0 0 0 : slabdata 4 4 0 blkdev_requests 120 120 336 24 2 : tunables 0 0 0 : slabdata 5 5 0 fsnotify_event 156 156 104 39 
1 : tunables 0 0 0 : slabdata 4 4 0 bip-256 7 7 4224 7 8 : tunables 0 0 0 : slabdata 1 1 0 bip-128 0 0 2176 15 8 : tunables 0 0 0 : slabdata 0 0 0 bip-64 0 0 1152 28 8 : tunables 0 0 0 : slabdata 0 0 0 bip-16 84 84 384 21 2 : tunables 0 0 0 : slabdata 4 4 0 sock_inode_cache 224 276 704 23 4 : tunables 0 0 0 : slabdata 12 12 0 file_lock_cache 88 88 184 22 1 : tunables 0 0 0 : slabdata 4 4 0 net_namespace 0 0 1920 17 8 : tunables 0 0 0 : slabdata 0 0 0 Acpi-ParseExt 640 672 72 56 1 : tunables 0 0 0 : slabdata 12 12 0 taskstats 48 48 328 24 2 : tunables 0 0 0 : slabdata 2 2 0 proc_inode_cache 1613 1750 640 25 4 : tunables 0 0 0 : slabdata 70 70 0 sigqueue 100 100 160 25 1 : tunables 0 0 0 : slabdata 4 4 0 radix_tree_node 22443 22475 560 29 4 : tunables 0 0 0 : slabdata 775 775 0 bdev_cache 72 72 896 18 4 : tunables 0 0 0 : slabdata 4 4 0 sysfs_dir_cache 9866 9894 80 51 1 : tunables 0 0 0 : slabdata 194 194 0 inode_cache 2268 2268 592 27 4 : tunables 0 0 0 : slabdata 84 84 0 dentry 285907 286062 192 21 1 : tunables 0 0 0 : slabdata 13622 13622 0 buffer_head 256447 257472 112 36 1 : tunables 0 0 0 : slabdata 7152 7152 0 vm_area_struct 1469 1541 176 23 1 : tunables 0 0 0 : slabdata 67 67 0 mm_struct 82 95 832 19 4 : tunables 0 0 0 : slabdata 5 5 0 files_cache 104 161 704 23 4 : tunables 0 0 0 : slabdata 7 7 0 signal_cache 163 187 960 17 4 : tunables 0 0 0 : slabdata 11 11 0 sighand_cache 145 165 2112 15 8 : tunables 0 0 0 : slabdata 11 11 0 task_xstate 118 140 576 28 4 : tunables 0 0 0 : slabdata 5 5 0 task_struct 128 165 5808 5 8 : tunables 0 0 0 : slabdata 33 33 0 anon_vma 731 896 32 128 1 : tunables 0 0 0 : slabdata 7 7 0 shared_policy_node 85 85 48 85 1 : tunables 0 0 0 : slabdata 1 1 0 numa_policy 170 170 24 170 1 : tunables 0 0 0 : slabdata 1 1 0 idr_layer_cache 240 240 544 30 4 : tunables 0 0 0 : slabdata 8 8 0 kmalloc-8192 27 32 8192 4 8 : tunables 0 0 0 : slabdata 8 8 0 kmalloc-4096 291 344 4096 8 8 : tunables 0 0 0 : slabdata 43 43 0 kmalloc-2048 225 240 2048 16 8 : tunables 0 0 0 : slabdata 15 15 0 kmalloc-1024 366 432 1024 16 4 : tunables 0 0 0 : slabdata 27 27 0 kmalloc-512 536 544 512 16 2 : tunables 0 0 0 : slabdata 34 34 0 kmalloc-256 406 528 256 16 1 : tunables 0 0 0 : slabdata 33 33 0 kmalloc-128 503 576 128 32 1 : tunables 0 0 0 : slabdata 18 18 0 kmalloc-64 3467 3712 64 64 1 : tunables 0 0 0 : slabdata 58 58 0 kmalloc-32 1520 1920 32 128 1 : tunables 0 0 0 : slabdata 15 15 0 kmalloc-16 3547 3840 16 256 1 : tunables 0 0 0 : slabdata 15 15 0 kmalloc-8 4607 4608 8 512 1 : tunables 0 0 0 : slabdata 9 9 0 kmalloc-192 4620 5313 192 21 1 : tunables 0 0 0 : slabdata 253 253 0 kmalloc-96 1780 1848 96 42 1 : tunables 0 0 0 : slabdata 44 44 0 kmem_cache_node 0 0 64 64 1 : tunables 0 0 0 : slabdata 0 0 0
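    For what it's worth, a back-of-the-envelope reading of the free(1) output above (my arithmetic, not part of the original post) suggests the "missing" memory is the kernel's reclaimable page cache:

        used                2746 MB
        - buffers            169 MB   (reclaimable)
        - cached            2320 MB   (page cache, reclaimable)
        = 257 MB held by processes    (matches the -/+ buffers/cache "used" of 256)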

  • Dynamic subdomain routing

    - by Nader
    Hi everyone, I asked this question over at Stack Overflow but got very few views: http://stackoverflow.com/questions/2284917/route-web-requests-to-different-servers-based-on-subdomain Perhaps it's more applicable to this crowd. Here it is again for convenience: I have a platform where a user can create a new website using a subdomain. There will be thousands of these, e.g. abc.mydomain.com, def.mydomain.com - hopefully, if we are successful, hundreds of thousands. I need to be able to route these domains to different IPs to point at particular app servers. I have this mapping in a database right now. What are the best practices and recommended technologies here? I see a couple of options:
    1. DNS set up with a wildcard CNAME entry, so that all requests go to a single IP, where perhaps two machines using heartbeat (for failover) know how to look up the IP in the database and then do an HTTP redirect to the appropriate app server. This seems clunky and slow to me.
    2. Run my own DNS server that can be programmatically managed, so that when a new site is created a DNS entry is added. We also move sites around to different app servers, so I would need to be able to update DNS entries in close to real time.
    Thoughts anyone? Thanks.
    Update: Based on some suggestions below, it seems like reverse-proxy server(s) are the way to go. As I'll be rebalancing the domain-to-server mapping, these changes need to take effect instantly, and the TTL on a DNS solution could be a problem. Any recommendations on software to use, considering this domain-to-IP data is stored in a DB and I'll need it to be performant?
    Update 2: I've set up external wildcard DNS pointing at an HAProxy web server whose job is to route requests to backend servers. The mapping is stored in our internal PowerDNS server. The question now is how to get the HAProxy server (or another) to use the value of the internal DNS rather than some config file or access list.
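    On the final update: HAProxy cannot query DNS per request, but its later releases (1.5 and up) can pick a backend from a map file that a small script regenerates from the database and reloads; a sketch (the file path and backend names are hypothetical):

        frontend http_in
            bind *:80
            use_backend %[req.hdr(host),lower,map_dom(/etc/haproxy/domains.map,bk_default)]

        # /etc/haproxy/domains.map, regenerated from the DB on each rebalance:
        #   abc.mydomain.com   bk_app01
        #   def.mydomain.com   bk_app02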

  • Tuning Linux IP routing parameters -- secret_interval and tcp_mem

    - by Jeff Atwood
    We had a little failover problem with one of our HAProxy VMs today. When we dug into it, we found this: Jan 26 07:41:45 haproxy2 kernel: [226818.070059] __ratelimit: 10 callbacks suppressed Jan 26 07:41:45 haproxy2 kernel: [226818.070064] Out of socket memory Jan 26 07:41:47 haproxy2 kernel: [226819.560048] Out of socket memory Jan 26 07:41:49 haproxy2 kernel: [226822.030044] Out of socket memory Which, per this link, apparently has to do with low default settings for net.ipv4.tcp_mem. So we increased them by 4x from their defaults (this is Ubuntu Server, not sure if the Linux flavor matters): current values are: 45984 61312 91968 new values are: 183936 245248 367872 After that, we started seeing a bizarre error message: Jan 26 08:18:49 haproxy1 kernel: [ 2291.579726] Route hash chain too long! Jan 26 08:18:49 haproxy1 kernel: [ 2291.579732] Adjust your secret_interval! Shh.. it's a secret!! This apparently has to do with /proc/sys/net/ipv4/route/secret_interval which defaults to 600 and controls periodic flushing of the route cache The secret_interval instructs the kernel how often to blow away ALL route hash entries regardless of how new/old they are. In our environment this is generally bad. The CPU will be busy rebuilding thousands of entries per second every time the cache is cleared. However we set this to run once a day to keep memory leaks at bay (though we've never had one). While we are happy to reduce this, it seems odd to recommend dropping the entire route cache at regular intervals, rather than simply pushing old values out of the route cache faster. After some investigation, we found /proc/sys/net/ipv4/route/gc_elasticity which seems to be a better option for keeping the route table size in check: gc_elasticity can best be described as the average bucket depth the kernel will accept before it starts expiring route hash entries. This will help maintain the upper limit of active routes. We adjusted elasticity from 8 to 4, in the hopes of the route cache pruning itself more aggressively. The secret_interval does not feel correct to us. But there are a bunch of settings and it's unclear which are really the right way to go here. /proc/sys/net/ipv4/route/gc_elasticity (8) /proc/sys/net/ipv4/route/gc_interval (60) /proc/sys/net/ipv4/route/gc_min_interval (0) /proc/sys/net/ipv4/route/gc_timeout (300) /proc/sys/net/ipv4/route/secret_interval (600) /proc/sys/net/ipv4/route/gc_thresh (?) rhash_entries (kernel parameter, default unknown?) We don't want to make the Linux routing worse, so we're kind of afraid to mess with some of these settings. Can anyone advise which routing parameters are best to tune, for a high traffic HAProxy instance?
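    For reference, the two adjustments described above, expressed as sysctl commands (values copied from the post; add them to /etc/sysctl.conf to persist across reboots):

        # quadruple the default TCP memory pools (units are pages, not bytes)
        sysctl -w net.ipv4.tcp_mem="183936 245248 367872"
        # accept shallower hash buckets before expiring route cache entries
        sysctl -w net.ipv4.route.gc_elasticity=4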

  • *Simple* way to block DDoS by number of requests

    - by Eduard Luca
    I have 3 Varnish 3.0.2 servers with Apache 2 backends, load balanced through a separate HAProxy server. I need to find a very simple program (I'm not much of a sysadmin) that blocks requests from an IP if that IP has made more than X requests in Y seconds. Is something like this achievable with a simple solution? Right now I have to block all requests manually with iptables.
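    Since iptables is already in the picture, its recent module implements exactly an "X requests in Y seconds" rule without extra software; a sketch with illustrative thresholds (note that xt_recent caps --hitcount at its ip_pkt_list_tot module parameter, 20 by default):

        # record each new connection to port 80 ...
        iptables -A INPUT -p tcp --dport 80 -m state --state NEW \
                 -m recent --name HTTP --set
        # ... and drop IPs that open 20 or more in 10 seconds
        iptables -A INPUT -p tcp --dport 80 -m state --state NEW \
                 -m recent --name HTTP --update --seconds 10 --hitcount 20 -j DROP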

  • Advice on where to install Redis

    - by redsquare
    I have just introduced Redis into our application, and I am not sure where best to install it in production. I've read that the Windows port is not production quality, so I need to install it on Linux. I currently have 5 Red Hat boxes and cannot get any more provisioned at this time: an active/passive HAProxy load balancer pair and a cluster of three RabbitMQ boxes. Where would you install active/passive Redis instances?

  • Algorithms behind load-balancers?

    - by Vimvq1987
    I need to study load balancers such as Network Load Balancing, Linux Virtual Server, HAProxy, etc. There are some under-the-hood things I need to know: what algorithms/technologies are used in these load balancers? Which is the most popular? The most effective? I expect that these algorithms/technologies will not be too complicated. Are there any resources written about them? Thank you very much for your help.
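    As a starting point, the most common algorithms can be seen directly in HAProxy's configuration, where each is a one-line balance directive; a sketch (the server addresses are hypothetical):

        backend web
            balance roundrobin    # rotate across servers, honoring weights
            # balance leastconn   # or: prefer the server with fewest open connections
            # balance source      # or: hash the client IP for session stickiness
            server w1 10.0.0.1:80 check
            server w2 10.0.0.2:80 check

    Round-robin is the most widely used default; least-connections tends to behave better when request durations vary a lot.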
