TechOpsGuys.com Diggin' technology every day

November 14, 2011

Oracle throws in Xen virtualization towel?

Filed under: Virtualization — Nate @ 7:03 am

This just hit me a few seconds ago and it gave me something else to write about so here goes.

Oracle recently released Solaris 11, the first major rev to Solaris in many many years. I remember using Solaris 10 back in 2005, wow it’s been a while!

They’re calling it the first cloud OS. I can’t say I really agree with that; vSphere, and ESX before it, have been more cloudy than Solaris for many years now, and remain so today.

While their Xen-based Oracle VM is still around, the focus with Solaris 11 clearly seems to be Solaris Zones, which, as far as I know, is a more advanced version of User Mode Linux (which seems to be abandoned now?).

Zones and UML are nothing new; Zones were first released more than six years ago. It’s certainly a different approach than a full hypervisor, so it has less overhead, but overall I believe it’s an outdated approach to utility computing (using the term cloud computing makes me feel sick).

“Oracle Solaris Zones virtualization scales up to hundreds of zones per physical node at a 15x lower overhead than VMware and without artificial limits on memory, network, CPU and storage resources.”

It’s an interesting strategy, and a fairly unique one in today’s world, so it should give Oracle some differentiation. I have been following the Xen bandwagon off and on for many years and never felt it was a compelling platform without a rewrite. Red Hat, SuSE and several other open source folks have basically abandoned Xen at this point, and now it seems Oracle is shifting focus away from Xen as well.

I don’t see many new organizations gravitating towards Solaris Zones unless they are Solaris users already (or at least have Solaris expertise in house), and if they haven’t switched by now…

“New, integrated network virtualization allows customers to create high-performance, low-cost data center topologies within a single OS instance for ultimate flexibility, bandwidth control and observability.”

The terms “ultimate flexibility” and “single OS instance” seem to be in conflict here.

The efficiency of modern hypervisors is to the point now where the overhead doesn’t matter in probably 98% of cases. The other 2% can be handled by running jobs on physical hardware. I still don’t believe I would run a hypervisor on workloads that are truly hardware bound, ones that really exploit the performance of the underlying hardware. Those are few and far between outside of specialist niches these days though; I had one about a year and a half ago, but haven’t come across one since.

 

November 3, 2011

Virtualization Surveys, and insights from Xen creator

Filed under: General — Nate @ 7:06 pm

Two different stories caught my eye today. One was from our friends at The Register, about a survey by Veeam Software that polled several hundred companies with more than 1,000 employees and came to the conclusion that the average consolidation ratio was 5.1:1.

Across the four geographic regions and all the companies surveyed, the perceived consolidation ratio was 9.8 virtual machines per physical machine. But if you do the math and calculate the actual penetration ratio, companies are actually squeezing only 5.1 virtual machines per host on average.

It could just be that some IT managers garbled their responses and have screwed up the data, but perhaps Veeam is on to something.

I saw that and did a virtual face palm. 5.1:1? Even 9.8:1? I think I was doing about 7-9:1 back in 2007 with my first ESX 3.0 systems on HP DL380 G5s with 8 cores and 16GB of RAM. I came across a screen shot of one of those systems a couple weeks ago, and it brought back some good memories! (Oh how iSCSI sucked on ESX 3! Speaking of which, that brought up another memory: I was at a dinner thrown by Dell, I think a year or two ago, where they were pushing iSCSI for VMware via some 3rd party storage/VMware consultants or something. The presenter kept trying to emphasize how good iSCSI was and how there’s no reason not to use it with VMware, and I kept reminding him (in front of the group) how much iSCSI sucked in ESX 3, which is why people were still hesitant to use it even shortly after vSphere came out. He didn’t take it well; it was funny to watch.)

My last VMware projects, always memory constrained of course, were at the low end 14:1, and at the higher end maybe 24:1 (64GB of RAM on hardware circa ~2006 – an HP DL585 G1). This was without any benefit from transparent page sharing, since that stuff never worked for me on Linux anyways, and without swapping either. Just right-sizing the VMs to the workloads, even if it meant as little as 96MB of memory for a VM.
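
Out of curiosity, here is the back-of-the-envelope version of that memory-constrained math, as a minimal sketch (the host size, hypervisor reserve and average VM footprints below are illustrative assumptions, not real inventory):

```python
# Back-of-the-envelope consolidation math when memory is the constraint.
# All numbers here are illustrative assumptions, not measurements.

host_ram_mb = 64 * 1024            # e.g. a 64GB host along the lines of a DL585 G1
hypervisor_overhead_mb = 2 * 1024  # assumed reserve for the hypervisor itself
usable_mb = host_ram_mb - hypervisor_overhead_mb

# No transparent page sharing and no swapping assumed, so every VM
# gets dedicated RAM equal to its right-sized footprint.
for avg_vm_mb in (4096, 2560, 1024, 512, 96):
    ratio = usable_mb // avg_vm_mb
    print(f"avg {avg_vm_mb:>4} MB per VM -> about {ratio}:1 on one host")
```

It is memory-only math and ignores CPU and I/O entirely, but it makes the point: the smaller you can honestly size the VMs, the faster the ratio climbs.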

For my next project I’ll be surprised if we can’t get at least 30-40:1.

Seeing numbers like 5:1 makes me think back to when VMware went around to their customers to see what they were actually using before announcing their new memory-based price hikes, and then set the license limits to what their typical customer was using.

5:1? Sad. Unless you’re really running a CPU bound application, in which case that’s fine; there aren’t many of those out there though.

The second article was this one, where one of the “Godfathers” of Xen said one of the great things about virtualization is improving security through workload isolation.

Isolation — the ability to restrict what computing goes on in a given context — is a fundamental characteristic of virtualization that can be exploited to improve trustworthiness of processes on a physical system even if other processes have been compromised, says Crosby, a creator of the open source hypervisor and a founder of startup Bromium, which is looking to use Xen features to boost security.

I couldn’t agree more, which is why per-VM licensing strategies really piss me off: they work in direct opposition to that strategy. At the very least, have dual licensing so customers can license based on VMs or based on hardware.
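
As a purely illustrative bit of math (the prices below are invented for the example, not anyone’s real list price), here is what happens under the two licensing models as you carve workloads into more, smaller VMs for isolation:

```python
# Per-VM vs per-host licensing cost as workloads are split into more,
# smaller VMs for isolation. Prices are invented purely for illustration.

price_per_vm = 500     # assumed cost of one per-VM license
price_per_host = 5000  # assumed cost of one per-host (hardware) license

for vms_per_host in (1, 5, 15, 30):
    per_vm_total = vms_per_host * price_per_vm
    per_host_total = price_per_host  # flat regardless of VM count
    print(f"{vms_per_host:>2} VMs on the host: "
          f"per-VM licensing ${per_vm_total:,}, "
          f"per-host licensing ${per_host_total:,}")
```

Per-host licensing stays flat no matter how finely you split things up; per-VM licensing charges you for exactly the isolation Crosby is advocating.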

On a side note, I just saw an interesting interview on CNBC, where someone was talking about the IPO of Groupon, which I believe is supposed to go live tomorrow. Groupon is apparently trying to raise about $510M in a very paltry offering of something like less than 5% of the company. The funny part is they apparently owe about $505M in short term liabilities to vendors and such. The person being interviewed said that if Groupon doesn’t pull this off soon, they’ll go broke practically overnight.

They are also reporting there are 11 book runners on the deal, more than any other US IPO in history. I don’t know what a book runner is but it sounds fishy to have so many runners for such a small allocation of stock.

Burn, baby, burn.

 
