I've obtained a substantial performance improvement on a SPARC T4-2 Server running a J2EE Application Server Cluster by deploying the cluster members into Oracle Solaris Containers and binding those containers to cores of the SPARC T4 Processor. This is not a surprising result; in fact, it is consistent with other results that are available on the Internet (see the references below for some examples). Nonetheless, here is a summary of my configuration and results.
(1.0) Before deploying a J2EE Application Server Cluster into a virtualized environment, many decisions need to be made. I'm not claiming that all of the decisions that I have made will work well for every environment. In fact, I'm not even claiming that all of the decisions are the best possible for my environment. I'm only claiming that, of the small sample of configurations that I've tested, this is the one that is working best for me. Here are some of the decisions that needed to be made:
(1.1) Which virtualization option? There are several virtualization options and isolation levels that are available. Options include:
Hard partitions: Dynamic Domains on Sun SPARC Enterprise M-Series Servers
Hypervisor based virtualization such as Oracle VM Server for SPARC (LDOMs) on SPARC T-Series Servers
OS Virtualization using Oracle Solaris Containers
Resource management tools in the Oracle Solaris OS to control the amount of resources an application receives, such as CPU cycles, physical memory, and network bandwidth.
Oracle Solaris Containers provide the right level of isolation and flexibility for my environment. To borrow some words from my friends in marketing, "The SPARC T4 processor leverages the unique, no-cost virtualization capabilities of Oracle Solaris Zones."
(1.2) How to associate Oracle Solaris Containers with resources? There are several options available to associate containers with resources, including (a) resource pool association (b) dedicated-cpu resources and (c) capped-cpu resources. I chose to create resource pools and associate them with the containers because I wanted explicit control over the cores and virtual processors.
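For comparison, here is a sketch of the dedicated-cpu alternative that I did not choose. The zone name and CPU count below are illustrative, not from my tested configuration; with this approach Solaris creates and manages a temporary pool for the zone automatically, which is simpler but gives up explicit control over which cores and virtual processors are used:

```shell
# Illustrative only: assign CPUs to a zone via the dedicated-cpu resource
# instead of an explicitly created resource pool. Solaris builds a
# temporary pool for the zone at boot; you do not pick the cores yourself.
zonecfg -z test-z2 <<'EOF'
add dedicated-cpu
set ncpus=32
end
commit
EOF
```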
(1.3) Cluster Topology? Is it best to deploy (a) multiple application servers on one node, (b) one application server on multiple nodes, or (c) multiple application servers on multiple nodes? After a few quick tests, it appears that one application server per Oracle Solaris Container is a good solution.
(1.4) Number of cluster members to deploy? I chose to deploy four big 64-bit application servers. I would like to go back and test many 32-bit application servers, but that is left for another day.
(2.0) Configuration tested.
(2.1) I was using a SPARC T4-2 Server, which has 2 CPUs and 128 virtual processors. To understand the physical layout of the hardware on Solaris 10, I used the OpenSolaris psrinfo perl script available at http://hub.opensolaris.org/bin/download/Community+Group+performance/files/psrinfo.pl:
test# ./psrinfo.pl -pv
The physical processor has 8 cores and 64 virtual processors (0-63)
The core has 8 virtual processors (0-7)
The core has 8 virtual processors (8-15)
The core has 8 virtual processors (16-23)
The core has 8 virtual processors (24-31)
The core has 8 virtual processors (32-39)
The core has 8 virtual processors (40-47)
The core has 8 virtual processors (48-55)
The core has 8 virtual processors (56-63)
SPARC-T4 (chipid 0, clock 2848 MHz)
The physical processor has 8 cores and 64 virtual processors (64-127)
The core has 8 virtual processors (64-71)
The core has 8 virtual processors (72-79)
The core has 8 virtual processors (80-87)
The core has 8 virtual processors (88-95)
The core has 8 virtual processors (96-103)
The core has 8 virtual processors (104-111)
The core has 8 virtual processors (112-119)
The core has 8 virtual processors (120-127)
SPARC-T4 (chipid 1, clock 2848 MHz)
(2.2) The "before" test: without processor binding. I started with a 4-member cluster deployed into 4 Oracle Solaris Containers. Each container used a unique gigabit Ethernet port for HTTP traffic. The containers shared a 10 gigabit Ethernet port for JDBC traffic.
(2.3) The "after" test: with processor binding. I ran one application server in the Global Zone and another application server in each of the three non-global zones (NGZ):
(3.0) Configuration steps. The following steps need to be repeated for all three Oracle Solaris Containers.
(3.1) Stop AppServers from the BUI.
(3.2) Stop the NGZ.
test# ssh test-z2 init 5
(3.3) Enable resource pools:
test# svcadm enable pools
(3.4) Create the resource pool:
test# poolcfg -dc 'create pool pool-test-z2'
(3.5) Create the processor set:
test# poolcfg -dc 'create pset pset-test-z2'
(3.6) Specify the maximum number of CPUs that may be added to the processor set:
test# poolcfg -dc 'modify pset pset-test-z2 (uint pset.max=32)'
(3.7) Bash syntax to add virtual CPUs to the processor set:
test# for (( i = 64; i < 96; i++ )); do poolcfg -dc "transfer to pset pset-test-z2 (cpu $i)"; done
(3.8) Associate the resource pool with the processor set:
test# poolcfg -dc 'associate pool pool-test-z2 (pset pset-test-z2)'
(3.9) Tell the zone to use the resource pool that has been created:
test# zonecfg -z test-z2 set pool=pool-test-z2
(3.10) Boot the Oracle Solaris Container:
test# zoneadm -z test-z2 boot
(3.11) Save the configuration to /etc/pooladm.conf:
test# pooladm -s
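Since steps (3.3) through (3.11) repeat for each of the three containers, they can be collected into a small helper. The function below is my own packaging of the steps above, not part of the original procedure; it prints the commands for a given zone and virtual-processor range rather than running them, so it can be reviewed first and then piped through sh on the server to apply:

```shell
# Dry-run helper: print the poolcfg/zonecfg commands that bind one zone
# to a contiguous range of virtual processors. Review the output, then
# pipe it to sh to apply. Names follow the article's test-z2 example.
pool_cmds() {
  local zone=$1 first=$2 count=$3
  local pool=pool-$zone pset=pset-$zone
  echo "poolcfg -dc 'create pool $pool'"
  echo "poolcfg -dc 'create pset $pset'"
  echo "poolcfg -dc 'modify pset $pset (uint pset.max=$count)'"
  local i=$first
  while [ "$i" -lt $((first + count)) ]; do
    echo "poolcfg -dc \"transfer to pset $pset (cpu $i)\""
    i=$((i + 1))
  done
  echo "poolcfg -dc 'associate pool $pool (pset $pset)'"
  echo "zonecfg -z $zone set pool=$pool"
  echo "pooladm -s"
}

# Virtual processors 64-95 (four cores on chipid 1) for zone test-z2:
pool_cmds test-z2 64 32
```

After booting the zone, `poolbind -q` run against an application-server process ID should report the new pool, which is a quick way to confirm the binding took effect.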
(4.0) Results. Using the resource pools improved both throughput and response time.
(5.0) References:
System Administration Guide: Oracle Solaris Containers-Resource Management and Oracle Solaris Zones
Capitalizing on large numbers of processors with WebSphere Portal on Solaris
WebSphere Application Server and T5440 (Dileep Kumar's Weblog)
Reuters Market Data System, RMDS 6 Multiple Instances (Consolidated), Performance Test Results in Solaris, Containers/Zones Environment on Sun Blade X6270 by Amjad Khan, 2009.