Why is my concurrency capacity so low for my web app on a LAMP EC2 instance?

Posted by AMF on Server Fault
Published on 2012-09-07T17:25:06Z

I come from a web developer background and have been humming along building my PHP app, using the CakePHP framework. The problem arose when I began load testing with ab (ApacheBench) on the Amazon EC2 instance where the app resides. I'm getting pretty horrendous average page load times, even though I'm running a c1.medium instance (2 cores, 2GB RAM), and I think I'm doing everything right.

I would run:

ab -n 200 -c 20 http://localhost/heavy-but-view-cached-page.php

Here are the results:

Concurrency Level:      20 
Time taken for tests:   48.197 seconds 
Complete requests:      200 
Failed requests:        0 
Write errors:    0 
Total transferred:      392111200 bytes 
HTML transferred:       392047600 bytes 
Requests per second:    4.15 [#/sec] (mean) 
Time per request:       4819.723 [ms] (mean) 
Time per request:       240.986 [ms] (mean, across all concurrent requests) 
Transfer rate:          7944.88 [Kbytes/sec] received
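For context, the totals above work out to roughly 2 MB of HTML per response (back-of-envelope, using only the numbers ab reported):

```shell
# Rough math from the ab totals: total HTML transferred divided by
# request count gives the size of a single cached page response.
total_bytes=392111200
requests=200
echo $(( total_bytes / requests ))   # prints 1960556 (bytes per page, just under 2 MB)
```

So each page is a fairly heavy payload, which is part of what's behind that transfer-rate figure.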

While the ab test is running, I run vmstat, which shows that swap stays at 0 and CPU is constantly at 80-100% (although I'm not sure I can trust this on a VM). RAM utilization ramps up to about 1.6GB (leaving 400MB free), load average climbs to about 8, and the site slows to a crawl.

Here's what I think I'm doing right on the code side:

  • In the Chrome browser, uncached pages typically load in 800-1000ms, and cached pages load in 300-500ms. Not stunning, but not terrible either.
  • Thanks to view caching, there is at most one DB query per page load (to write session data), so we can rule out a DB bottleneck.
  • I have APC on.
  • I am using Memcached to serve the view cache and other site caches.
  • The xhprof code profiler shows that cached pages use 10-40MB of memory and 100-1000ms of wall time.

Pages that would be the worst offenders would look something like this in xhprof:

Total Incl. Wall Time (microsec):   330,143 microsecs
Total Incl. CPU (microsecs):    320,019 microsecs
Total Incl. MemUse (bytes): 36,786,192 bytes
Total Incl. PeakMemUse (bytes): 46,667,008 bytes
Number of Function Calls:   5,195
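As a sanity check (back-of-envelope, assuming pages really do burn roughly 320ms of CPU each, as xhprof reports above), two cores put a hard ceiling on throughput:

```shell
# Two cores divided by ~0.32s of CPU per page gives the best-case
# requests/sec this instance can serve, independent of Apache tuning.
cores=2
cpu_us_per_page=320019
awk -v c="$cores" -v us="$cpu_us_per_page" \
    'BEGIN { printf "max req/s = %.2f\n", c / (us / 1e6) }'
# prints: max req/s = 6.25
```

That 6.25 req/s ceiling is uncomfortably close to the 4.15 req/s ab measured, which makes me wonder whether the workload is simply CPU-bound.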

My Apache config:

KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 3

<IfModule mpm_prefork_module>
    StartServers           5
    MinSpareServers        5
    MaxSpareServers       10
    MaxClients            120
    MaxRequestsPerChild  1000
</IfModule>
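One thing I'm unsure about in that config (rough math, taking the ~46MB peak xhprof reports as a worst case): if all 120 prefork children were ever serving a heavy page at once, memory demand would far exceed the instance's 2GB:

```shell
# Worst-case resident memory if every prefork child peaks like xhprof's
# heaviest page (~46 MB each). Shared memory (APC, copy-on-write) makes
# the real number lower, but the order of magnitude is the concern.
max_clients=120
peak_mb_per_child=46
echo $(( max_clients * peak_mb_per_child ))   # prints 5520 (MB, vs 2048 MB available)
```

In practice MaxClients never gets that high in my tests, but it suggests MaxClients 120 is optimistic for this instance size.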

Is there something wrong with the server? Some gotcha with EC2? Or is it my code? Is there some obvious setting I should look into? Too many DNS lookups? What am I missing? I really want to get to a concurrency capacity of 1,000, but at this rate, it ain't gonna happen.

© Server Fault or respective owner
