Friday, April 8, 2011

Pushing 8Gb FC storage limits!

As part of our systems testing and validation of the new HP Virtual Connect enclosure design in the data center, today we made an interesting discovery! We fully populated an HP enclosure with (16) BL460c G7 servers. Each server has 192GB of memory and two quad-core E5650 Xeon processors. On each server we installed ESXi 4.1, and we created a 16-node cluster in VMware vCenter.

On this 16-node cluster we shared (4) 2TB volumes off the array. On each of these VMware hosts we created 2 VMs, each with 4 vCPUs and 96GB of memory. We originally launched the “HeavyLoad” memory and CPU burner to max out resources on all hosts; this created maximum input power draw on the enclosure (4,300W) but did not generate any storage load. I then kicked off multiple Storage vMotions of each VM to move them from one shared datastore to another. With (2) 8Gb uplinks on the Virtual Connect FC module and a single 8Gb connection to the array, we saw speeds of up to 1GB/sec. I then unplugged 1 of the 2 uplinks on the Virtual Connect, and the speed remained at 1GB/sec. For the next test I stopped all Storage vMotion traffic so that storage traffic was near zero. I then fired up 8 VMs on a single ESXi host (a single 8Gb adapter) and kicked off Storage vMotions of all of them. Because ESXi 4.1 can only run 2 concurrent Storage vMotions per host, the speed only reached around 700-800MB/s. I then kicked off IOmeter in the remaining VMs, and the speed once again reached 1GB/sec.
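To put rough numbers on the concurrency observation: if the 700-800MB/s plateau was caused purely by the 2-concurrent-svMotion cap (an assumption on my part; other bottlenecks could contribute), each migration stream was moving roughly 350-400MB/s, and it takes close to three streams of that size to fill the link. A quick sketch:

```python
# Rough per-stream throughput implied by the observed numbers, assuming
# the 700-800 MB/s plateau came purely from ESXi 4.1's limit of 2
# concurrent Storage vMotions per host (an assumption; other bottlenecks
# could also have contributed).

OBSERVED_PLATEAU_MB_S = (700, 800)   # range seen with 2 concurrent svMotions
CONCURRENT_SVMOTIONS = 2             # per-host limit noted in the test
LINK_DEMONSTRATED_MB_S = 1000        # ~1 GB/sec the link showed earlier

# Split the plateau evenly across the two concurrent migrations.
per_stream = tuple(v / CONCURRENT_SVMOTIONS for v in OBSERVED_PLATEAU_MB_S)
print(f"Per svMotion stream: {per_stream[0]:.0f}-{per_stream[1]:.0f} MB/s")

# How many streams of that average size would saturate the link?
avg_stream = sum(per_stream) / len(per_stream)
streams_needed = LINK_DEMONSTRATED_MB_S / avg_stream
print(f"Streams to reach ~{LINK_DEMONSTRATED_MB_S} MB/s: {streams_needed:.1f}")
```

This lines up with what we saw: two migrations alone plateaued below the link's demonstrated rate, and adding IOmeter traffic on the remaining VMs supplied the extra streams needed to reach 1GB/sec again.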

This test confirmed that a single HP BL460c G7 server running ESXi 4.1, with a Virtual Connect server profile configured with a single 8Gb adapter and a Virtual Connect storage fabric configured with a single 8Gb uplink connected to an 8Gb SAN and a single 8Gb array port, could push a storage load of over 1,000MB/s. Every component from the server edge to the array sustained this outrageous speed, beyond what the components are rated or advertised to handle.
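For reference, the rated payload capacity of one 8Gb FC link can be sketched from the published 8GFC figures (8.5GBaud serial line rate, 8b/10b encoding); these numbers come from the spec, not from this test:

```python
# Rated payload capacity of one 8Gb Fibre Channel (8GFC) link, from the
# published spec figures: 8.5 GBaud serial line rate with 8b/10b encoding
# (10 bits on the wire for every 8 bits of data).

LINE_RATE_BAUD = 8.5e9        # 8GFC serial line rate, symbols/sec
ENCODING_EFFICIENCY = 8 / 10  # 8b/10b encoding overhead

def fc8_payload_mb_per_sec():
    """Maximum payload data rate of one 8GFC link, per direction, in MB/s."""
    bits_per_sec = LINE_RATE_BAUD * ENCODING_EFFICIENCY
    return bits_per_sec / 8 / 1e6   # bits -> bytes -> MB

per_direction = fc8_payload_mb_per_sec()
print(f"Per direction: {per_direction:.0f} MB/s")      # 850 MB/s
print(f"Full duplex:   {2 * per_direction:.0f} MB/s")  # 1700 MB/s
```

So the observed ~1,000MB/s exceeds the ~850MB/s that one direction of the link can carry. A Storage vMotion both reads and writes through the same adapter, so some of that traffic may have been split across both directions of the full-duplex link; either way, the hardware delivered more than its headline "8Gb" rating suggests.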
