Monday, April 25, 2011
New VMware 5.0 is another game changer
I've just joined the new VMware 5.0 beta under NDA. Without talking about specific features, I can say that this new version is definitely another game changer in the world of x86 server virtualization. At this point I've only read the white papers, but if the product does everything they are claiming, then wow! My takeaway is that the current x86 virtualization market leader has found innovative new ways to mature its product, ways that the likes of Microsoft and Citrix can't match. With VMware vCloud Director becoming established and 5.0 set to be released at VMworld 2011 in Las Vegas, I would say VMware is poised to keep revolutionizing and leading server virtualization and cloud computing. Good job VMware!
Thursday, April 14, 2011
The future of streaming HD (1080p) to your house
Everyone knows the home entertainment arena is now dominated by flat-screen televisions, HD, 1080p, Blu-ray, and now 3D. What's still an open question is the best way to get that high-def content to your TV. Right now there is a plethora of content sources, from Blockbuster to Redbox, from Netflix to Hulu and the like. People like having a choice of sources, and they especially love the convenience of streaming movies over their home internet connection.
There's only one problem (for me anyway) with streaming content: it's not 1080p Blu-ray quality. This is why I still have an "old-fashioned" Blockbuster in-store account. I go to Blockbuster almost every other day to pick up a physical copy of 1080p Blu-ray content. Why? Because if I'm going to watch movies, I only want the best quality.
Why can't that quality be streamed over the internet? The problem is home bandwidth limitations. Streaming services can only stream and queue video as fast as your home internet connection allows, and because those connections are too slow to support high quality, we get low-quality, legacy-grade picture. Why is high quality tied to high bandwidth? A traditional DVD holds about 4.5 GB of data, while a Blu-ray disc holds about 50 GB. Blu-ray can deliver such higher quality because the media holds roughly 10x the data of its predecessor, and streaming that much data in real time demands a correspondingly faster connection.
But there is light at the end of the tunnel! Home internet speeds are getting faster every day, and streaming services are starting to match per-customer streaming quality to each subscriber's bandwidth. I'm happy we are finally getting to this point. I'm tired of driving down to Blockbuster, and I've never even used Redbox because they don't carry Blu-ray. Soon I'll have Blu-ray-quality streaming to my TV, and after that we'll be streaming 3D. All hail another luxury of our great first-world country and the many benefits of ever-increasing home bandwidth.
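As a back-of-the-envelope check (my numbers, assuming a full 50 GB disc played over a 2-hour movie), here's roughly the sustained bitrate a Blu-ray-quality stream would need:

# Rough estimate of the bandwidth needed to stream Blu-ray-quality
# video in real time. Assumes the entire 50 GB disc is consumed over
# 2 hours; real Blu-ray video bitrates run lower (the disc also holds
# audio tracks and extras), so treat this as an upper bound.
DISC_GB = 50          # Blu-ray capacity
MOVIE_HOURS = 2       # typical feature length

bits = DISC_GB * 8 * 1000**3           # disc size in bits (decimal GB)
seconds = MOVIE_HOURS * 3600
required_mbps = bits / seconds / 1e6

print(f"Sustained rate needed: ~{required_mbps:.0f} Mbps")
# -> ~56 Mbps: far beyond a typical 2011 cable/DSL line, but
#    comfortably inside Comcast's new 105 Mbps tier mentioned below.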
BTW - Comcast is now offering 105 Mbps, soon to be a common speed for all carriers.
http://www.digitaltrends.com/computing/comcast-takes-home-broadband-to-105-mbps/
Friday, April 8, 2011
Pushing 8Gb FC storage limits!
As part of our systems testing and validation of the new HP Virtual Connect enclosure design in the data center, today we made an interesting discovery! We fully populated an HP enclosure with (16) BL460c G7 servers, each with 192 GB of memory and two quad-core E5650 Xeon processors. On each server we installed ESXi 4.1, and we built a 16-node cluster in VMware vCenter.
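We built ours by hand in vCenter, but for anyone who'd rather script a similar build-out, here's a minimal sketch using pyVmomi (a Python vSphere SDK; the vCenter address, credentials, and host names below are placeholders, not our real environment):

# Minimal pyVmomi sketch: create a cluster and add 16 ESXi hosts.
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim
import ssl

ctx = ssl._create_unverified_context()  # lab only; verify certs in prod
si = SmartConnect(host="vcenter.example.com",
                  user="administrator", pwd="password", sslContext=ctx)
content = si.RetrieveContent()
datacenter = content.rootFolder.childEntity[0]  # first datacenter

# Create an empty cluster under the datacenter's host folder.
cluster = datacenter.hostFolder.CreateClusterEx(
    name="BladeCluster01", spec=vim.cluster.ConfigSpecEx())

# Add each blade to the cluster.
for i in range(1, 17):
    spec = vim.host.ConnectSpec(
        hostName=f"esxi{i:02d}.example.com",
        userName="root", password="password", force=True)
    WaitForTask(cluster.AddHost_Task(spec=spec, asConnected=True))

Disconnect(si)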
On this 16-node cluster we shared (4) 2TB volumes off the array, and on each of these VMware hosts we created 2 VMs, each with 4 vCPUs and 96 GB of memory. We first launched the "HeavyLoad" memory and CPU burner to max out resources on all hosts; this drove the enclosure to its maximum input power draw (4,300 W) but did not generate any storage load. I then kicked off multiple Storage vMotions to move each VM from one shared datastore to another. With (2) 8Gb uplinks on the Virtual Connect FC module and a single 8Gb connection to the array, we saw speeds of up to 1 GB/sec. I then unplugged one of the two uplinks on the Virtual Connect and the speed remained at 1 GB/sec. Next I stopped all Storage vMotion traffic so that storage traffic was near zero, fired up 8 VMs on a single ESX host (a single 8Gb adapter), and kicked off Storage vMotions of all of them. Because ESXi 4.1 only runs 2 concurrent Storage vMotions per host, the speed only reached around 700-800 MB/s. I then kicked off IOmeter in the remaining VMs and the speed again reached 1 GB/sec.
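We drove our Storage vMotions from the vSphere Client, but the same loop could be scripted with pyVmomi; here's a hedged sketch (cluster and datastore names are placeholders), continuing from the connection above:

# Sketch: bounce every VM in the cluster between two shared datastores
# to generate sustained Storage vMotion load.
def find(vimtype, name):
    """Look up a managed object by name."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vimtype], True)
    try:
        return next(o for o in view.view if o.name == name)
    finally:
        view.DestroyView()

src = find(vim.Datastore, "SharedDS01")
dst = find(vim.Datastore, "SharedDS02")
cluster = find(vim.ClusterComputeResource, "BladeCluster01")

for vm in cluster.resourcePool.vm:
    # Relocate to whichever datastore the VM is not currently on.
    target = dst if src in vm.datastore else src
    spec = vim.vm.RelocateSpec(datastore=target)
    WaitForTask(vm.RelocateVM_Task(spec=spec))  # serialized for simplicity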
This test confirmed that a single HP BL460c G7 running ESXi 4.1, with a Virtual Connect server profile configured with a single 8Gb adapter and a Virtual Connect storage fabric configured with a single 8Gb uplink connected to an 8Gb SAN and a single 8Gb array port, could push a storage load of over 1,000 MB/s. Every component from server edge to array sustained this outrageous speed, beyond what the components are rated or advertised to handle.
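For context, here's the quick conversion (my arithmetic) showing why ~1 GB/s from a single 8Gb FC link is beyond the nominal rating:

# Nominal payload ceiling of one 8Gb Fibre Channel link.
# 8GFC signals at 8.5 Gbaud with 8b/10b encoding, so only 80% of the
# line rate carries data; vendors usually quote ~800 MB/s per direction.
line_rate_gbaud = 8.5
payload_bits_per_sec = line_rate_gbaud * 1e9 * 8 / 10  # 8b/10b overhead
mb_per_sec = payload_bits_per_sec / 8 / 1e6

print(f"~{mb_per_sec:.0f} MB/s per direction")  # ~850 MB/s
# A sustained reading over 1,000 MB/s is therefore genuinely past the
# advertised single-link ceiling, which is what surprised us.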