How do I find the cause for a huge difference in performance between two identical Ubuntu servers?

Posted by the.duckman on Server Fault
Published on 2012-12-04T16:57:11Z

I am running two Dell R410 servers in the same rack of a data center. Both have the same hardware configuration, run Ubuntu 10.04, have the same packages installed, and run the same Java web servers. There is no other load.

One of them is consistently 20-30% faster than the other. I used dstat to check whether there were more context switches, more IO, swapping, or anything else, but I see no reason for the difference. Under the same workload (no swapping, virtually no IO), CPU usage and load are higher on one server.
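dstat reads these counters from /proc/stat, so as a quick cross-check that needs no extra tooling you can sample the kernel's cumulative context-switch counter directly on each box and compare the rates. A minimal sketch (the one-second window is an arbitrary choice):

```shell
#!/bin/sh
# Sample the cumulative context-switch counter twice, one second apart,
# and print the per-second rate. Run on both servers and compare.
c1=$(awk '/^ctxt/ {print $2}' /proc/stat)
sleep 1
c2=$(awk '/^ctxt/ {print $2}' /proc/stat)
echo "context switches/s: $((c2 - c1))"
```

A longer window (or averaging several samples) smooths out bursty workloads.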

So the difference appears to be mainly CPU-bound, but while a simple CPU benchmark using sysbench (with all other load turned off) did yield a difference, it was only 6%. So maybe it is not only CPU but also memory performance.
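To separate the two suspects, a memory run alongside the CPU run is worth recording on both machines. A sketch using the classic `--test=` sysbench syntax of that era; the prime limit and total size are arbitrary choices:

```shell
# CPU-bound benchmark: time to compute primes up to the given limit.
sysbench --test=cpu --cpu-max-prime=20000 run

# Memory throughput benchmark: sequential writes, 10 GB total.
sysbench --test=memory --memory-total-size=10G run
```

If the memory run shows a gap much larger than 6%, that points at RAM configuration (channel population, timings) rather than the CPUs themselves.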

I tried to figure out whether the BIOS settings differ in some parameter and did a dump using dmidecode, but that yielded no difference.
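One pitfall when diffing dmidecode output from two machines: per-box identifiers (serial numbers, UUIDs, asset tags) always differ and can bury a real difference in the noise. A sketch, assuming the dumps were saved as `a.dmi` and `b.dmi` (placeholder filenames):

```shell
# Strip fields that legitimately differ per machine, then diff the rest.
grep -vE 'Serial Number|UUID|Asset Tag' a.dmi > a.clean
grep -vE 'Serial Number|UUID|Asset Tag' b.dmi > b.clean
diff -u a.clean b.clean   # empty output means the tables really match
```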

I compared /proc/cpuinfo, no difference. I compared the output of cpufreq-info, no difference.
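cpufreq-info reports the configured scaling policy, but the effective clock can still diverge between two boxes (turbo headroom, thermal throttling, or a BIOS-level power profile). A quick per-core snapshot of the live frequency from /proc/cpuinfo on each server, as a sketch:

```shell
# Print the current clock speed of every core; compare the two servers
# while they are under the same load.
awk -F: '/^cpu MHz/ {printf "core %d: %s MHz\n", n++, $2}' /proc/cpuinfo
```

If one server's cores sit noticeably lower under identical load, the cause is in power or thermal management rather than the software stack.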

I am lost. What can I do to figure out what is going on?
