TechOpsGuys.com – Diggin' technology every day

December 9, 2013

HP Moonshot: VDI Edition

Filed under: Virtualization — Nate @ 8:30 pm

To date I have not been too excited about HP's Moonshot system; I've been far more interested in AMD's SeaMicro. However, HP has now launched a Moonshot-based solution that does look very innovative and interesting.

HP Moonshot: VDI Edition (otherwise known as HP ConvergedSystem 100 for hosted desktops) takes advantage of (semi-ironically enough) AMD APUs, which combine CPU and GPU in a single chip, and allows you to host up to 180 users in a 4.3U system, each with their own dedicated CPU and GPU! The GPU is the key thing here; that is an area where most VDI has fallen far short.

Everything, as you might imagine, is self-contained within the Moonshot chassis; there is no external storage.

EACH USER gets:

  • Quad-core 1.5GHz CPU
  • 128 GPU Cores (Radeon of course)
  • 8GB RAM (shared with GPU)
  • 32GB SSD
  • Windows 7 OS

That sounds luxurious! I’d be really interested to see how this solution stacks up against competing VDI solutions.

AMD X2150 APU: The brains of the HP Moonshot VDI experience

They claim you can be up and going in as little as two hours — with no hypervisor. This is bare metal.

You probably get superior availability as well: given there are 45 different cartridges in the chassis, if one fails you lose only four desktops. The operational advantages (on paper at least) of something like this seem quite compelling.

I swear it seems a day doesn't go by without an SSD storage vendor touting their VDI cost savings (and they never seem to mention things like, you know, servers, LICENSING, GPUs, etc. – it really annoys me).

VDI is not an area I have expertise in but I found this solution very interesting, and it seems like it is VERY dense at the same time.

HP Moonshot M700 cartridge with four servers on it (45 of these in a chassis)

HP doesn’t seem to get specific on power usage other than you save a lot vs desktop systems. The APUs themselves seem to be rated at 15W/ea on the specs, which implies a minimum power usage of 2,700W. Though it seems each power supply in the Moonshot has a rated steady-rate power output of 653W, with four of those that is 2,600W for the whole chassis, though HP says the Moonshot supports only 1200W PSUs, so it’s sort of confusing. The HP Power Advisor has no data for this module.

It wouldn’t surprise me if the power usage was higher than a typical VDI system, but given the target workload(and the capabilities offered) it still sounds like a very compelling solution.

Obviously the question is: might AMD one-up HP at their own game, given that AMD owns both these APUs and SeaMicro, and if so, might that upset HP?

October 2, 2012

Cisco drops price on Nexus vSwitch to free

Filed under: Networking,Virtualization — Nate @ 10:02 am

I saw news yesterday that Cisco dropped the price of their vSwitch to $free; they still have a premium version which has a few more features.

I’m really not all that interested in what Cisco does, but what got me thinking again is the lack of participation by other vendors in making a similar vSwitch, of integrating their stack down to the hypervisor itself.

Back in 2009, Arista Networks launched their own vSwitch (though now that I read more on it, it wasn't a "real" vSwitch), but you wouldn't know that by looking at their site today. I tried a bunch of different search terms because I thought they still had it, but it seems the product is dead and buried. I have not heard of any other manufacturers making a software vSwitch of any kind (for VMware at least). I suppose the customer demand is not there.

I asked Extreme back then if they would come out with a software vSwitch, and at the time at least they said there were no plans; instead they were focusing on direct attach, a strategy which, at least for VMware, appears to be dead for the moment, as the manufacturer of the NICs used to make it happen is no longer making NICs (as of about 1-2 years ago). I don't know why they still have the white paper on their site; I guess to show the concept, since you can't build it today.

Direct attach, at least taken to its logical conclusion, is a method to force all inter-VM switching out of the host and into the physical switch layer. I was told that this is possible with Extreme (and possibly others too) with KVM today (I don't know the details), just not with VMware.

They do have a switch that runs in VMware, though it's not a vSwitch, more of a demo type thing where you can play with commands. Their switching software has run on Intel CPUs since the initial release in 2003 (and they still have switches today that use Intel CPUs), so I imagine the work involved in making a vSwitch happen would not be herculean if they wanted to.

I have seen other manufacturers (Brocade at least if I remember right) that were also looking forward to direct attach as the approach to take instead of a vSwitch. I can never remember the official networking name for the direct attach technology…

With VMware’s $1.2B purchase of Nicira it seems they believe the future is not direct attach.

Myself I like the concept of switching within the host, though I have wanted to have an actual switching fabric (in hardware) to make it happen. Some day..

Off topic – but it seems the global economic cycle has now passed the peak and is now for sure headed downhill? One of my friends said yesterday the economy is "complete garbage"; I see tech company after company missing or warning, and layoffs abound, whether it's massive layoffs at HP or the smaller layoffs at Juniper that were announced this morning. Meanwhile the stock market is hitting new highs quite often.

I still maintain we are in a great depression. Lots of economists try to dispute that, though if you take away the social safety nets that we did not have in the '20s and '30s during the last depression, I am quite certain you'd see massive numbers of people lined up at soup kitchens and the like. I think the economists dispute it more because they fear a self-fulfilling prophecy than out of any willingness to have a serious talk on the subject. Whether or not we can get out of the depression, I don't know. We need a catalyst – last time it was WWII – and at least the last two major economic expansions were bubbles; it's been a long time since we've had a more normal economy. If we don't get a catalyst then I see stagnation for another few years, perhaps a decade, while we drift downwards towards a more serious collapse (something that would make 2008 look trivial by comparison).

September 14, 2012

VMware suggests swapping as a best practice

Filed under: Virtualization — Nate @ 11:26 am

Just came across this, and going through the PDF it says

Virtualization causes an increase in the amount of physical memory required due to the extra memory needed by ESXi for its own code and for data structures. This additional memory requirement can be separated into two components:
1. A system-wide memory space overhead for the VMkernel and various host agents (hostd, vpxa, etc.).

A new feature in ESXi 5.1 allows the use of a system swap file to reduce this memory overhead by up to 1GB when the host is under memory pressure.

That just scares me, that they advocate setting up a swap file to reduce memory usage by up to 1GB. How much memory does the average VMware host have? Maybe 64GB today? So that could save about 1.5% of physical memory, with the potential trade-off of impacting storage performance (assuming no local storage) for all the other systems in the environment.

Scares me just about as much as how 3PAR used to advocate that their storage systems could get you double the VM density per server because you could crank up the swapping and they could take the I/O hit (I don't think they advocate this today though).

Now if you can somehow be sure that the system won't be ACTIVELY swapping then it's not a big deal, but of course you don't really want to actively swap in any situation, unless your I/O is basically unlimited. You could go and equip your servers with, say, a pair of SSDs in RAID 1 to do this sort of swapping (remember it is 1GB). But it's just not worth it. I don't understand why VMware spent the time to come up with such a feature.
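To put that 1GB in perspective, here is a rough back-of-envelope look – my own arithmetic, assuming the full 1GB is ever actually reclaimed, which is the best case:

# What a 1GB overhead reduction is worth on various host sizes.
# Assumes the full 1GB is reclaimed, which is the best case.
SAVINGS_GB = 1
for host_gb in (32, 64, 128, 256):
    pct = SAVINGS_GB / host_gb * 100
    print(f"{host_gb:4d} GB host: {pct:.1f}% of physical memory")
# 64GB host -> ~1.6%, 128GB host -> ~0.8%; not much to trade storage I/O for.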

If anything the trend has been more memory in hosts not less, I’d imagine most serious deployments have well over 100GB of memory per host these days.

My best practice is: don't swap – ever. In the environments I have supported, performance/latency is important, so there is really no oversubscription of memory. I've had one time where VMware was swapping excessively at the host level; to me it was a bug, but to VMware it was a feature (there was tons of memory available on the host). I forget the term, but it was a documented behavior of how the hypervisor functions, just not commonly known I guess, and totally not obvious. The performance of the application obviously went in the toilet while this swapping was going on; it felt like the system was running on a 386 CPU.

Windows' memory footprint is significantly different from that of Linux, and Linux represents probably 98 or 99% of my VMs over the years.

Oh, and that transparent page sharing VMware touts so much? I just picked one of my servers at random, 31 VMs and 147GB of memory in use, and TPS is saving me a grand total of 3% of memory. Yay TPS.

The cost of I/Os (to spinning rust, or even enterprise SSDs) is just too much to justify the risk of allowing swap in any form, in my experience, unless your workload is very predictable and you do not have much active swapping. In fact the bulk of the VMs I run do have a local 500MB swap partition, enough for some very light swapping – but I'd rather have the VM fail and crash than have it swap like crazy and take the rest of the systems down with it.

But that’s me

April 10, 2012

What’s wrong with this picture?

Filed under: Datacenter,Virtualization — Nate @ 7:36 am

I was reading this article from our friends at The Register which has this picture for an entry level FlexPod from NetApp/Cisco.

It just seems wrong. I mean the networking stuff. Given NetApp's strong push for IP-based storage, one would think an entry-level solution would simply have 2x 48-port 10GbE stackable switches, or whatever Cisco's equivalent is (maybe this is it).

This solution is supposed to provide scalability for up to 1,000 users – what those 1,000 users are actually doing I have no idea. Does it mean VDI? Database? Web site users? File serving users?

It’s also unclear in the article if this itself scales to that level or it just provides the minimum building blocks to scale to 1,000 users (I assume the latter) – and if so what does 1,000 user configuration look like? (or put another way how many users does the below picture support)

I’ll be the first to admit I’m ignorant as to the details and the reason why Cisco needs 3 different devices with these things but whatever the reason seems major overkill for an entry level solution assuming the usage of IP-based storage.

The new entry level flex pod

The choice of a NetApp FAS2000 array seems curious to me – at least given the fact that it does not appear to support that Flex Cache stuff which they like to tout so much.

April 7, 2012

Interesting discussion on vSphere vs Hyper-V

Filed under: Virtualization — Nate @ 1:52 pm

I stumbled upon this a few days ago and just got around to reading it now. It came out about a month ago; I forget where I saw it, I think on Planet V12n or something.

Anyways it’s two people who sound experienced(I don’t see information on their particular backgrounds) each talking up their respective solution. Two things really stood out to me:

  • The guy pimping Microsoft was all about talking about a solution that doesn’t exist yet (“it’ll be fixed in the next version, just wait!”)
  • The guy pimping VMware was all about talking about how cost doesn’t matter because VMware is the best.

I think they are both right – and both wrong.

It’s good enough

I really believe that in Hyper-V's case, and also in KVM/RHEV's case, for the next generation of projects these products will be "good enough" (in Hyper-V's case, whenever Windows 8 comes out) for a large (majority) number of the use cases out there. I don't see Linux-centric shops considering Hyper-V or Windows-centric shops considering KVM/RHEV/etc., so VMware will still obviously have a horse in the race (as the pro-VMware person harps on in the debate).

Cost is becoming a very important issue

One thing that really rubbed me the wrong way was when the pro-VMware person said this:

Some people complain about VMware’s pricing but those are not the decision makers, they are the techies. People who have the financial responsibility for SLAs and customers aren’t going to bank on an unproven technology.

I’m sorry but that is just absurd. If cost wasn’t an issue then the techies wouldn’t be complaining about it because they know, first hand that it is an issue in their organizations. They know, first hand that they have to justify the purchase to those decision makers. The company I’m at now was in that same situation – the internal IT group could only get the most basic level of vSphere approved for purchase at the time for thier internal IT assets(this was a year or two or three ago). I hear them constantly complaining about the lack of things like vMotion, or shared storage etc. Cost was a major issue so the environment was built with disparate systems and storage and the cheap version of vSphere.

Give me an unlimited budget and I promise, PROMISE you will NEVER hear me complain about cost. I think the same is true of most people.

I’ve been there, more than once! I’ve done that exact same thing (Well in my case I managed to have good storage in most of the cases).

Those decision makers weigh the costs of maintaining that SLA against whatever solution they're going to provide. Breaking SLAs can be more cost effective than achieving them, especially if they are absurdly high SLAs. I remember at one company I was at, they signed all these high-level SLAs with their new customers – so I turned around and said: hey, in order to achieve those SLAs we need to do this laundry list of things. I think maybe 5-10% of the list got done before the budget ran out. You can continue to meet those high SLAs if you're lucky, even if you don't actually have the ability to sustain a failure and maintain uptime. More often than not such smaller companies prefer to rely on luck rather than doing things right.

Another company I was at had what could have been called a disaster in itself, during the same time I was working on a so-called disaster recovery project (no coincidence). Despite the disaster, at the end of the day management canned the disaster recovery project (which everyone agreed would have saved a lot of everything had it been in place at the time of the disaster). It's not that the budget wasn't approved – it was approved. The problem was that management wanted to do another project that they massively under-budgeted for, and decided to cannibalize the budget from DR to give to this other pet project.

Yet another company I was at signed a disaster recovery contract with SunGard just to tick the check box to say they have DR. The catch was – and the entire company knew this up front before they signed – that they would never be able to utilize the service. IT WOULD NEVER WORK. But they signed it anyways because they needed a plan, and they didn't want to pay for a plan that would have worked.

VMware touting VM density as king

I’ve always found it interesting how VMware touts VM density, they show an automatic density advantage to VMware which automatically reduces VMware’s costs regardless of the competition. This example was posted to one of their blogs a few days ago.

They tout their memory sharing, their memory ballooning and their memory compression all as things that can increase density vs. the competition.

My own experience with memory sharing on VMware, at least with Linux, is pretty simple – it doesn't work. It doesn't give results. Looking at one of my ESX 4.1 servers (yes, no ESXi here) which has 18 VMs on it and 101GB of memory in use, how much memory am I saving with transparent page sharing?

3,161 MB – or about 3%. Nothing to write home about.

For production loads, I don't want to be in a situation where memory ballooning kicks in, or where memory compression kicks in; I want to keep performance high – that means no swapping of any kind on any system. The last thing I want is my VMs starting to thrash my storage with active swapping. Don't even think about swapping if you're running Java apps either; once that garbage collection kicks in, your VM will grind to a halt while it performs that operation.

I would like a method to keep the Linux buffer cache under control, however; whether it is ballooning that specifically targets file system cache or some other method, that would be a welcome addition to my systems.

Another welcome addition would be the ability to flag VMs and/or resource pools to pro-actively utilize memory compression (regardless of memory usage on the host itself): low-priority VMs, VMs that sit at 1% CPU usage most of the time, VMs where the added latency of compression on otherwise idle CPU cores isn't that important (again – stay away from actively swapping!). As a bonus, provide the ability to limit the CPU capacity consumed by compression activities, such as limiting it to the resource pool that the VM is in, and/or having a per-host setting where you could say: set aside up to 1 CPU core or whatever for compression, and if you need more than that, don't compress unless it's an emergency.

YAWA (yet another welcome addition) with regards to compression would be to provide me with compression ratios – how effective is the compression when it's in use? Recommend to me low-utilization VMs where I could pro-actively reclaim memory by compressing them, or maybe only portions of their memory are worth compressing? The hypervisor, with the assistance of the VMware tools, has the ability to see what is really going on in the guest by nature of having an agent there. The actual capability doesn't appear to exist now, but I can't imagine it being too difficult to implement – sort of along the lines of pro-actively inflating the memory balloon.

So, for what it’s worth for me, you can take any VM density advantages for VMware off the table when it comes from a memory perspective. For me and VM density it’s more about the efficiency of the code and how well it handles all of those virtual processors running at the same time.

Taking the Oracle VM blog post above, VMware points out Oracle supports only 128 VMs per host vs. VMware at 512 – a good example, but they really need to show how well all those VMs can work on the same host and how much overhead there is. If my average VM CPU utilization is 2-4%, does that mean I can squeeze 512 VMs onto a 32-core system (memory permitting of course), when in theory I should be able to get around 640 – memory permitting again?
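The napkin math behind that 640 figure looks something like the sketch below – a purely CPU-side ceiling, with the per-VM utilization numbers being my own assumptions and memory, scheduler overhead and bursts all ignored:

# Napkin math: how many mostly-idle VMs fit on a host, CPU-wise only.
CORES = 32
for avg_vm_pct_of_core in (5, 4, 2):   # 5% of a core per VM gives the ~640 figure
    ceiling = int(CORES * 100 / avg_vm_pct_of_core)
    print(f"at {avg_vm_pct_of_core}% of a core per VM: ~{ceiling} VMs")
# 5% -> 640, 4% -> 800, 2% -> 1600; the 512-VM product limit becomes the binding constraint.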

Oh the number of times I was logged into an Amazon virtual machine that was suffering from CPU problems only to see that 20-30% of the CPU usage was being stolen from the VM by the hypervisor. From the sar man page

%steal

Percentage of time spent in involuntary wait by the virtual CPU or CPUs while the hypervisor was servicing another virtual processor.

Not sure if Windows has something similar.
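On the Linux side you don't even need sar to keep an eye on it; a quick sketch of my own (reading /proc/stat directly, where the 8th value on the aggregate cpu line is steal time) would be something like:

# Quick-and-dirty steal-time monitor reading /proc/stat directly (Linux only).
# The 8th value on the aggregate "cpu" line is steal, in USER_HZ ticks.
import time

def cpu_times():
    with open("/proc/stat") as f:
        return [int(x) for x in f.readline().split()[1:]]

prev = cpu_times()
while True:
    time.sleep(5)
    cur = cpu_times()
    deltas = [c - p for c, p in zip(cur, prev)]
    total = sum(deltas) or 1
    steal = deltas[7] if len(deltas) > 7 else 0
    print(f"steal: {100.0 * steal / total:.1f}%")
    prev = cur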

Back to costs vs Maturity

I was firmly in the VMware camp for many years, I remember purchasing ESX 3.1 (Standard edition – no budget for higher versions) for something like $3,500 for a two-socket license. I remember how cheap it felt at the time given the power it gave us to consolidate workloads. I would of been more than happy(myself at least) to pay double for what we got at the time. I remember the arguments I got in over VMware vs Xen with my new boss at the time, and the stories of the failed attempts to migrate to Xen after I left the company.

The pro-VMware guy in the original ZDNet debate doesn’t see the damage VMware is doing to itself when it comes to licensing. VMware can do no wrong in his eyes. I’m sure there are plenty of other die hards out there that are in the same boat. The old motto of you never got fired for buying IBM right. I can certainly respect the angle though as much as it pains my own history to admit that I think the tides have changed and VMware will have a harder and harder time pitching it’s wares in the future, especially if it keeps playing games with licensing on a technology which it’s own founders (I think — I wish I could find the article) predicted would become commodity by about now. With the perceived slow uptake of vSphere 5 amongst users I think the trend is already starting to form. The problem with the uptake isn’t just the licensing of course, it’s that for many situations there isn’t a compelling reason to upgrade – it’s good enough has set in.

I can certainly, positively understand VMware providing premium pricing for premium services, an Enterprise Plus Plus ..or whatever. But don’t vary the price based on provisioned utilization that’s just plain shooting yourself (and your loyal customers) in the feet. The provisioned part is another stickler for me – the hypervisor has the ability to measure actual usage, yet they stick their model to provisioned capacity – whether or not the VM is actually using the resource. It is a simpler model but it makes planning more complicated.

The biggest scam in this whole cloud computing era so many people think we’re getting into is the vast chasm between provisioned vs utilized capacity. With companies wanting to charge you for provisioned capacity and customers wanting to over provision so they don’t have to make constant changes to manage resources, knowing that they won’t be using all that capacity up front.

The technology exists, it’s just that few people are taking advantage of it and fewer yet have figured out how to leverage it (at least in the service provider space from what I have seen).

Take Terremark (now Verizon), a VMware-based public cloud provider (and one of only two partners listed on VMware’s site for this now old program). They built their systems on VMware, they build their storage on 3PAR. Yet for this vCloud express offering there is no ability to leverage resource pools, no ability to utilize thin provisioning (from a customer standpoint). I have to pay attention to exactly how much space I provision up front, and I don’t have the option to manage it like I would on my own 3PAR array.

Now Terremark has an enterprise offering that is more flexible and does offer resource pools, but this isn’t available on their on demand offering. I still have the original quote Terremark sent me for the disaster recovery project I was working on at the time, it makes me want to either laugh or cry to this day. I have to give Terremark credit though at least they have an offering that can utilize resource pools, most others (well I haven’t heard of even one – though I haven’t looked recently) does not. (Side note: I hosted my own personal stuff on their vCloud express platform for a year so I know it first hand – it was a pretty good experience what drove me away primarily was their billing for each and every TCP and UDP port I had open on an hourly rate. Also good to not be on their platform anymore so I don’t risk them killing my system if they see something I say and take it badly).

Obviously the trend in system design over recent years has bitten into the number of licenses that VMware is able to sell – and if their claims are remotely true – that 50% of the world’s workloads are virtualized and of that they have 80% market share – it’s going to be harder and harder to maintain a decent growth rate. It’s quite a pickle they are in, customers in large part apparently haven’t bought into the more premium products VMware has provided (that are not part of the Hypervisor), so they felt the pressure to increase the costs of the Hypervisor itself  to drive that growth in revenue.

Bottom Line

VMware is in trouble.

Simple as that.

December 7, 2011

Red Hat Bringing back UML ?

Filed under: Virtualization — Nate @ 11:05 am

User Mode Linux was kind of popular many years ago, especially with the cheap virtual hosting crowd, but interest seemed to die off a while ago; what seems to be a semi-official page for User Mode Linux has not been updated since the Fedora Core 5 days, which was around 2006.

Red Hat apparently just released RHEL 6.2, and among the features is something that looks remarkably similar to UML –

Linux Containers
  • Linux containers provide a flexible approach to application runtime containment on bare-metal without the need to fully virtualize the workload. This release provides application level containers to separate and control the application resource usage policies via cgroup and namespaces. This release introduces basic management of container life-cycle by allowing for creation, editing and deletion of containers via the libvirt API and the virt-manager GUI.
  • Linux Containers provides a means to run applications in a container, a deployment model familiar to UNIX administrators. Also provides container life-cycle management for these containerized applications through a graphical user interface (GUI) and user space utility (libvirt).
  • Linux Containers is in Technology Preview at this time.

This seems to be basically an attempt at a clone of Solaris Containers. It seems like a strange approach for Red Hat to take given their investment in KVM; I struggle to think of a good use case for Linux containers over KVM.
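Out of curiosity, the libvirt side of it looks roughly like the sketch below. This is untested on my part; the container name, memory size and init path are placeholder assumptions, not anything out of the RHEL 6.2 documentation.

# Rough sketch: defining and starting a container through libvirt's LXC driver.
# Untested; name, memory size and init path are placeholders.
import libvirt

container_xml = """
<domain type='lxc'>
  <name>demo-container</name>
  <memory>524288</memory>      <!-- 512MB, in KiB -->
  <vcpu>1</vcpu>
  <os>
    <type>exe</type>
    <init>/bin/sh</init>
  </os>
  <devices>
    <console type='pty'/>
  </devices>
</domain>
"""

conn = libvirt.open("lxc:///")          # connect to the local LXC driver
dom = conn.defineXML(container_xml)     # persistent definition (shows up in virt-manager)
dom.create()                            # start the container
print(dom.name(), "running:", bool(dom.isActive()))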

Red Hat has also enhanced KVM quite a bit; this update sort of caught my eye:

Virtual CPU timeslice sharing for multiprocessor guests is a new feature in Red Hat Enterprise Linux 6.2. Scheduler changes within the kernel now allow for virtual CPUs inside a guest to make more efficient use of the timeslice allocated to the guest, before processor time is yielded back to the host. This change is especially beneficial to large SMP systems that have traditionally experienced guest performance lag due to inherent lock holder preemption  issues. In summary, this new feature eliminates resource consuming system overhead so that a guest can use more of the CPU resources assigned to them much more efficiently.

No information on specifics as far as what constitutes a "large" system, or how many virtual CPUs were provisioned per physical CPU, etc. But it's interesting to see; I mean, it's one of those technical details in hypervisors that you just can't get an indication of by viewing a spec sheet or a manual or something. Such things are rarely talked about in presentations as well. I remember being at a VMware presentation a few years ago where they mentioned they could have enabled 8-way SMP on ESX 3.x – it was apparently an undocumented feature – but chose not to because the scheduler overhead didn't make it worthwhile.

Red Hat also integrated the beta of their RHEV 3 platform; I'm hopeful this new platform develops into something that can better compete with vSphere, though their website is really devoid of information at this point, which is unfortunate.

They also make an erroneous claim that RHEV 3 crushes the competition by running more VMs than anyone else, and cite a SPECvirt benchmark as the proof. While the results are impressive, they aren't really up front with the fact that the hardware, more than anything else, drove the performance: 80 x 2.4GHz CPU cores, 2TB of memory and more than 500 spindles. If you look at the results on a more level playing field, the performance of RHEV 3 and vSphere is more in line; RHEV still wins, but not by a crushing amount. I really wish these VM benchmarks gave some indication as to how much disk I/O was going on. It is interesting to see all the tuning measures that are disclosed; they give some good information on settings to go investigate – maybe they have broader applications than synthetic benchmarking.

Of course performance is only a part of what is needed in a hypervisor, hopefully RHEV 3 will be as functional as it is fast.

There is an Enterprise Hypervisor Comparison released recently by VMGuru.nl, which does a pretty good job of comparing the major hypervisors, though it does not include KVM. I'd like to see more of these comparisons from other angles; if you know of more guides, let me know.

One thing that stands out a lot is OS support; it's strange to me how VMware can support so many operating systems while other hypervisors don't. Is this simply a matter of choice? Or is VMware's VM technology so much better that it allows them to support a broader number of guest operating systems with little or no effort on their part? Or both? I mean, Hyper-V not supporting Windows NT? How hard can it be to support that old thing? Nobody other than VMware supporting Solaris?

I’ve talked off and on about KVM, as I watch and wait for it to mature more. I haven’t used KVM yet myself. I will target RHEV 3, when it is released, to try and see where it stands.

I’m kind of excited. Kind of because breaking up with VMware after 12 years is not going to be easy for me 🙂

November 14, 2011

Oracle throws in Xen virtualization towel?

Filed under: Virtualization — Nate @ 7:03 am

This just hit me a few seconds ago and it gave me something else to write about so here goes.

Oracle recently released Solaris 11, the first major rev to Solaris in many many years. I remember using Solaris 10 back in 2005, wow it’s been a while!

They’re calling it the first cloud OS. I can’t say I really agree with that, vSphere, and even ESX before that has been more cloudy than Solaris for many years now, and remains today.

While their Xen-based Oracle VM is still included in Solaris 11, the focus clearly seems to be Solaris Zones, which, as far as I know, is a more advanced version of the User Mode Linux concept (which seems to be abandoned now?).

Zones and UML are nothing new, Zones having been first released more than six years ago. It's certainly a different approach from a full hypervisor, so it has less overhead, but overall I believe it is an outdated approach to utility computing (using the term cloud computing makes me feel sick).

Oracle Solaris Zones virtualization scales up to hundreds of zones per physical node at a 15x lower overhead than VMware and without artificial limits on memory, network, CPU and storage resources.

It’s an interesting strategy, and a fairly unique one in today’s world, so it should give Oracle some differentiation.  I have been following the Xen bandwagon off and on for many years and never felt it a compelling platform, without a re-write. Red Hat, SuSE and several other open source folks have basically abandoned Xen at this point and now it seems Oracle is shifting focus away from Xen as well.

I don’t see many new organizations gravitating towards Solaris zones that aren’t Solaris users already (or at least have Solaris expertise in house), if they haven’t switched by now…

New, integrated network virtualization allows customers to create high-performance, low-cost data center topologies within a single OS instance for ultimate flexibility, bandwidth control and observability.

The terms ultimate flexibility and single OS instance seem to be in conflict here.

The efficiency of modern hypervisors is to the point now where the overhead doesn't matter in probably 98% of cases. The other 2% can be handled by running jobs on physical hardware. I still don't believe I would run a hypervisor on workloads that are truly hardware bound, ones that really exploit the performance of the underlying hardware. Those are few and far between outside of specialist niches these days though; I had one about a year and a half ago, but haven't come across one since.

 

November 4, 2011

Mass defections away from VMware coming?

Filed under: Virtualization — Nate @ 10:57 am

I have expected as much since VMware announced their abrupt licensing changes. In the same survey that I commented on last night for another reason, another site has reported on another aspect of it: nearly 40% of respondents are strongly considering moving away from VMware in the coming year, 47% of whom cite the licensing changes as the cause.

A Gartner analyst questions the numbers, saying the move will be more complicated than people think and that this will help VMware retain share. I don't agree with that myself; I suspect for most customers the move will probably not be complex at all.

Myself, I was just recently trying to dig a bit more into KVM, trying to figure out what they use for storage; it seems for block-based systems they are using GFS2 (can't find the link offhand)? Though I imagine they can run on top of NFS too. I wonder what the typical deployment is for KVM when it comes to storage – is shared storage widely used, or is it instead used mostly with local DAS?
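For what it's worth, pointing libvirt/KVM at NFS is simple enough; here's a rough sketch of defining an NFS-backed storage pool (the host name and export path are made up, and I haven't actually run this myself):

# Rough sketch of an NFS-backed storage pool for KVM via libvirt.
# Host name and paths are made up; untested, just to show the shape of it.
import libvirt

pool_xml = """
<pool type='netfs'>
  <name>vmstore</name>
  <source>
    <host name='nfs.example.local'/>
    <dir path='/export/vmstore'/>
    <format type='nfs'/>
  </source>
  <target>
    <path>/var/lib/libvirt/images/vmstore</path>
  </target>
</pool>
"""

conn = libvirt.open("qemu:///system")
pool = conn.storagePoolDefineXML(pool_xml, 0)   # persistent pool definition
pool.setAutostart(1)                            # mount it on host boot
pool.create(0)                                  # mount it now
print("pool active:", bool(pool.isActive()))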

I just read an interesting comment from a Xen user (I've never found Xen to be a compelling platform myself from a technology perspective; my own personal use of Xen has been mostly indirect by means of EC2, which in general is an absolutely terrible experience), from a thread on Slashdot about this topic –

Hyper-V is about 5 years behind and XenServer is about 3 years behind in terms of functionality and stability, mainly due to the fact that VMWare has been doing it for so long. VMWare is rock-solid and feature rich, and I’d love to use them. Currently we use XenServer, but with Citrix recently closing down their hardware API’s and not playing nicely with anyone it looks like it is going to be the first casualty. I’ve been very upset by XenServer’s HA so far, plain and simple it has sucked. I’ve had hosts reboot from crashes and the virtual machines go down, but the host thinks it has the machines and all of the other hosts think it has the machines. I’ve done everything XenServer has asked (HA quorum on a separate LUN, patches, etc), but it still just sucks. I’ve yet to see a host fail and the machines to go elsewhere, and the configuration is absolutely right and has been reviewed by Citrix. Maybe 6.0 will be better, but I just heard of major issues today with it. Hyper-V is really where the competition is going to come from, especially with how engrained it is in everything coming up. Want to run Exchange 2010 SP2? Recommendation is Hyper-V virtual machines.

God I miss VMWare.

I hope VMware comes through for me and produces a price point for the basic vSphere services that is more cost effective (basically I'd like to see vSphere Standard edition with, say, something crazy like 256GB/socket of vRAM at the current pricing). Though I'd settle for whatever vRAM is available in Enterprise Plus.

So you're actually paying more for the features.

I can certainly find ways to "make do" at a cost of $1,318/socket (with 1 year of enterprise support, based on this pricing) for Standard edition (which includes vMotion and HA), vs. $4,369/socket for Enterprise Plus. Two sockets would be around $2,600 — which is less than where ESX 3 was; that was in the $3,000-3,500 range per pair of sockets for Standard edition in 2007.

I’m not holding my breath though(since being kicked in the teeth with vSphere 5 licensing changes).

Time will tell if there are such defections; unlike Netflix, where the commitment is basically zero, we'll have to wait for the next round of hardware refreshes to kick in to see what sort of impact there is from the licensing change. Speaking of hardware refreshes (that need vSphere 5), what the hell is taking so long with the Opteron 6200s, AMD?! I really thought they'd show up in September, then couldn't imagine them not showing up in October, and here we are in November, and still no word.

VMware does need a "Netflix moment", a term that has been used quite a bit recently.

August 29, 2011

Farewell Terremark – back to co-lo

Filed under: General,Random Thought,Storage,Virtualization — Nate @ 9:43 pm

I mentioned not long ago that I was going co-lo once again. I was co-lo for a while for my own personal services, but then my server started to act up (the server would be 6 years old if it were still alive today) with disk "failure" after failure (or at least that's what the 3ware card was predicting; eventually it stopped complaining and the disk never died again). So I thought: do I spend a few grand to buy a new box, or go "cloud"? I knew up front cloud would cost more in the long run, but I ended up going cloud anyways as a stop-gap – I picked Terremark because it had the highest quality design at the time (still does).

During my time with Terremark I never had any availability issues. There was one day where there was some high latency on their 3PAR arrays, though they found and fixed whatever it was pretty quickly (it didn't impact me all that much).

I had one main complaint with regards to billing: they charge $0.01 per hour for each open TCP or UDP port on their system, and they have no way of doing 1:1 NAT. For a web server or something this is no big deal, but I needed a half dozen or more ports open per system (mail, DNS, VPN, SSH, etc.) even after cutting down on ports I might not need, so it starts to add up – indeed about 65% of my monthly bill ended up being these open TCP and UDP ports.
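Just to show how those pennies add up (my own rough numbers, not Terremark's exact billing):

# How $0.01/hour per open port adds up over a month.
PORT_RATE_PER_HOUR = 0.01
HOURS_PER_MONTH = 730

per_port = PORT_RATE_PER_HOUR * HOURS_PER_MONTH
print(f"each open port: ${per_port:.2f}/month")                 # ~$7.30
for total_ports in (6, 12, 20):
    print(f"{total_ports:2d} open ports: ${total_ports * per_port:.2f}/month")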

Once both of my systems were fully spun up (the 2nd system only recently got fully spun up as I was too lazy to move it off of co-lo) my bill was around $250/mo. My previous co-lo was around $100/mo and I think I had them throttle me to 1Mbit of traffic (this blog was never hosted at that co-lo).

The one limitation I ran into on their system was that they could not assign more than 1 IP address for outbound NAT per account. In order to run SMTP I needed each of my servers to have its own unique outbound IP, so I had to make a 2nd account to run the 2nd server. Not a big deal (for me; it ended up being a pain for them since their system wasn't set up to handle such a situation), since I only ran 2 servers (and the communication between them was minimal).

As I’ve mentioned before, the only part of the service that was truly “bill for what you use” was bandwidth usage, and for that I was charged between 10-30 cents/month for my main system and 10 cents/month for my 2nd system.

Oh – and they were more than willing to set up reverse DNS for me, which was nice (and required for running a mail server IMO). I had to agree to a lengthy little contract that said I wouldn't spam in order for them to open up port 25. Not a big deal. The IP addresses were "clean" as well, no worries about blacklisting.

Another nice thing to have, had they offered it, would be billing based on resource pools; as usual they charge for what you provision (per VM) instead of what you use. When I talked to them about their enterprise cloud offering they charged for the resource pool (unlimited VMs within a given amount of CPU/memory), but this is not available on their vCloud Express platform.

It was great to be able to VPN to their systems to use the remote console (after I spent an hour or two determining the VPN was not going to work in Linux, despite my best efforts to extract Linux versions of the VMware console plugin and try to use it). Mount an ISO over the VPN and install the OS – that's how it should be. I didn't need the functionality, but I don't doubt I would have been able to run my own DHCP/PXE server there as well if I wanted to install additional systems in a more traditional way. Each user gets their own VLAN, and is protected by a Cisco firewall and load balanced by a Citrix load balancer.

A couple of months ago the thought came up again of off site backups. I don’t really have much “critical” data but I felt I wanted to just back it all up, because it would be a big pain if I had to reconstruct all of my media files for example. I have about 1.7TB of data at the moment.

So I looked at various cloud systems, including Terremark, but it was clear pretty quickly that no cloud company was going to be able to offer this service in a cost-effective way, so I decided to go co-lo again. Rackspace was a good example; they have a handy little calculator on their site. This time around I went and bought a new, more capable server.

So I went to a company I used to buy a ton of equipment from in the Bay Area, and they hooked me up with not only a server with ESXi pre-installed on it, but also co-location services (with "unlimited" bandwidth) and on-site support, for a good price. The on-site support is mainly because I'm using their co-location services (which is itself a co-lo inside Hurricane Electric) and their techs visit the site frequently as it is.

My server has a single-socket quad-core processor, 4x 2TB SAS disks (~3.6TB usable, which also happens to match my usable disk space at home, which is nice – SAS because VMware doesn't support VMFS on SATA, though technically you can do it; the price premium for SAS wasn't nearly as high as I was expecting), a 3ware RAID controller with battery-backed write-back cache, a little USB thing for ESXi (I'd rather have ESXi on the HDD, but 3ware is not supported for booting ESXi), 8GB of Registered ECC RAM and redundant power supplies. It also has decent remote management with a web UI, remote KVM access, remote media, etc. For co-location I asked for (and received) 5 static IPs (3 IPs for VMs, 1 IP for ESX management, 1 IP for out-of-band management).

My bandwidth needs are really tiny, typically 1GB/month, though now with off site backups that may go up a bit (in bursts). The only real drawback to my system is that the SAS card does not have full integration with vSphere, so I have to use a CLI tool to check the RAID status; at some point I'll need to hook up Nagios again and run a monitor to check on the RAID status. Normally I set up the 3ware tools to email me when bad things happen – pretty simple, but not possible when running vSphere.
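Something like the sketch below is roughly what I have in mind for that Nagios check – the tw_cli install path, unit name and exact output format are assumptions on my part, so treat it as a starting point rather than a finished plugin:

# Sketch of a Nagios-style RAID check wrapping 3ware's tw_cli.
# The tw_cli path, unit name and output format are assumptions; adjust to taste.
import subprocess
import sys

TW_CLI = "/opt/3ware/CLI/tw_cli"   # assumed install path
UNIT = "/c0/u0"                    # first controller, first unit

try:
    out = subprocess.run([TW_CLI, UNIT, "show", "status"],
                         capture_output=True, text=True, timeout=30).stdout
except (OSError, subprocess.SubprocessError) as err:
    print(f"UNKNOWN: could not run tw_cli: {err}")
    sys.exit(3)

if "OK" in out:
    print(f"OK: {UNIT} reports healthy")
    sys.exit(0)
print(f"CRITICAL: {UNIT} status: {out.strip()}")
sys.exit(2)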

I expect the amount of storage on this box to last me a good 3-5 years. The 1.7TB includes every bit of data that I still have going back a decade or more – I'm sure there are at least a couple hundred gigs I could outright delete because I may never need them again. But right now I'm not hurting for space, so I keep it all there, online and accessible.

My current setup

  • One ESX virtual switch on the internet that has two systems on it – a bridging OpenBSD firewall, and a Xangati system sniffing packets (still playing with Xangati). No IP addresses are used here.
  • One ESX virtual switch for one internal network; the bridging firewall has another interface here, my main two internet-facing servers have interfaces here, and my firewall has another interface here as well for management. Only public IPs are used here.
  • One ESX virtual switch for another internal network, for things that will never have public IP addresses associated with them; I run NAT on the firewall (on its 3rd/4th interfaces) for these systems to get internet access.

I have a site-to-site OpenVPN connection between my OpenBSD firewall at home and my OpenBSD firewall on the ESX system, which gives me the ability to directly access the back-end, non-routable network on the other end.

Normally I wouldn’t deploy an independent firewall, but I did in this case because, well I can. I do like OpenBSD’s pf more than iptables(which I hate), and it gives me a chance to play around more with pf, and gives me more freedom on the linux end to fire up services on ports that I don’t want exposed and not have to worry about individually firewalling them off, so it allows me to be more lazy in the long run.

I bought the server before I moved. Once I got to the Bay Area I went and picked it up, kept it over a weekend to copy my main data set to it, then took it back; they hooked it up again and I switched my systems over to it.

The server was about $2,900 with 1 year of support, and co-location is about $100/mo. So for disk space alone, the first year (taking into account the cost of the server) my cost is about $0.09 per GB per month (for 3.6TB), with subsequent years being around $0.033 per GB per month (I took a swag at the support cost for the 2nd year, so that is included). That doesn't even take into account the virtual machines themselves and the cost savings there over any cloud. And I'm giving the cloud the benefit of the doubt by not even looking at their cost of bandwidth, just the cost of capacity. If I were using the cloud I probably wouldn't allocate all 3.6TB up front, but even if you use 1.8TB, which is about what I'm using now with my VMs and stuff, the cost still handily beats everyone out there.
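For anyone who wants to check my arithmetic, the sketch below is where those per-GB numbers come from – the year-two support cost is a swag of mine, everything else is straight from the figures above:

# Where the ~$0.09 and ~$0.03 per GB per month figures come from.
SERVER_COST = 2900.0            # includes year one of support
COLO_PER_MONTH = 100.0
USABLE_GB = 3.6 * 1024          # ~3,686 GB usable

year1 = SERVER_COST + 12 * COLO_PER_MONTH
print(f"year 1: ${year1 / 12 / USABLE_GB:.3f}/GB/month")    # ~$0.093

SUPPORT_YEAR2 = 300.0           # swag at the support renewal
year2 = SUPPORT_YEAR2 + 12 * COLO_PER_MONTH
print(f"year 2: ${year2 / 12 / USABLE_GB:.3f}/GB/month")    # ~$0.034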

What’s the most crazy is I lack the purchasing power of any of these clouds out there, I’m just a lone consumer, that bought one server. Granted I’m confident the vendor I bought from gave me excellent pricing due to my past relationship, though probably still not on the scale of the likes of Rackspace or Amazon and yet I can handily beat their costs without even working for it.

What surprised me most during my trips doing cost analysis of the "cloud" is how cheap enterprise storage is. I mean, Terremark charges $0.25/GB per month (on SATA-powered 3PAR arrays) and Rackspace charges $0.15/GB per month (I believe Rackspace just uses DAS). I kind of would have expected the enterprise storage route to cost say 3-5x more, not less than 2x. When I was doing real enterprise cloud pricing, storage for the solution I was looking for typically came in at 10-20% of the total cost, with 80%+ of the cost being CPU and memory. For me it's a no-brainer – I'd rather pay a bit more and have my storage on a 3PAR of course (when dealing with VM-based storage, not bulk archival storage). With the average cost of my storage for 3.6TB over 2 years coming in at around $0.06/GB per month, it makes more sense to just do it myself.

I just hope my new server holds up. My last one lasted a long time, so I sort of expect this one to last a while too; it got burned in before I started using it and the load on the box is minimal, so I would not be too surprised if I can get 5 years out of it – how big will HDDs be in 5 years?

I will miss Terremark because of the reliability and availability features they offer; they have a great service, and now of course are owned by Verizon. I don't need to worry about upgrading vSphere any time soon, as there's no reason to go to vSphere 5. The one thing I have been contemplating is whether or not to put my vSphere management interface behind the OpenBSD firewall (which is, of course, a VM on the same box). It kind of makes me miss the days of ESX 3, when it had a built-in firewall.

I’m probably going to have to upgrade my cable internet at home, right now I only have 1Mbps upload which is fine for most things but if I’m doing off site backups too I need more performance. I can go as high as 5Mbps with a more costly plan. 50Meg down 5 meg up for about $125, but I might as well go all in and get 100meg down 5 meg up for $150, both plans have a 500GB cap with $0.25/GB charge for going over. Seems reasonable. I certainly don’t need that much downstream bandwidth(not even 50Mbps I’d be fine with 10Mbps), but really do need as much upstream as I can get. Another option could be driving a USB stick to the co-lo, which is about 35 miles away, I suppose that is a possibility but kind of a PITA still given the distance, though if I got one of those 128G+ flash drives it could be worth it. I’ve never tried hooking up USB storage to an ESX VM before, assuming it works? hmmmm..

Another option I have is AT&T U-verse, which I've read good and bad things about – but looking at their site, their service is slower than what I can get through my local cable company (which truly is local; they only serve the city I am in). Another reason I didn't go with U-verse for TV is that, due to the technology they are using, I suspected it would not be compatible with my TiVo (with cable cards). AT&T doesn't mention their upstream speeds specifically, so I'll contact them and try to figure that out.

I kept the motherboard/CPUs/RAM from my old server; my current plan is to mount it to a piece of wood and hang it on the wall as some sort of art. It has lots of colors and little things to look at – I think it looks cool, at least. I'm no handyman, so hopefully I can make it work. I was honestly shocked at how heavy the copper (I assume) heatsinks were – wow, they felt like 1.5 pounds apiece, massive.

While my old server is horribly obsolete, one thing it has over my new server is the ability to support more RAM. The old server could go up to 24GB (I had a max of 6GB in it at the time); the new server tops out at 8GB (and has 8GB in it). Not a big deal, as I don't need 24GB for my personal stuff, but I thought it was kind of an interesting comparison.

This blog has been running on the new server for a couple of weeks now. One of these days I need to hook up some log analysis stuff to see how many dozen hits I get a month.

If Terremark could fix three areas of their vCloud Express service – one being resource pool-based billing, another being relaxing the costs of opening multiple ports in the firewall (or just offering 1:1 NAT as an option), and the last being thin-provisioning-friendly billing for storage – it would really be a much more awesome service than it already is.

August 23, 2011

Mac Daddy P10000

Filed under: Datacenter,Storage,Virtualization — Nate @ 9:55 pm

It’s finally here, the HP P10000 – aka 3PAR V Class. 3PAR first revealed this to their customers more than a year ago, but the eagle has landed now.

When it comes to the hardware – bigger is better (usually means faster too)

Comparisons of recent 3PAR arrays

  • 8-node P10000 (aka V800): 1,600 TB raw capacity, 288 Fibre Channel ports (192 host-facing), 512 GB data cache, 256 GB control cache, 1,920 disks, 112 GB/sec interconnect bandwidth, 96 GB/sec I/O bandwidth, 600,000 SPC-1 IOPS (guess)
  • 8-node T800: 800 TB raw capacity, 192 Fibre Channel ports (128 host-facing), 96 GB data cache, 32 GB control cache, 1,280 disks, 45 GB/sec interconnect bandwidth, 19.2 GB/sec I/O bandwidth, 225,000 SPC-1 IOPS
  • 4-node T800 (or 4-node T400): 400 TB raw capacity, 96 Fibre Channel ports (64 host-facing), 48 GB data cache, 16 GB control cache, 640 disks, 9.6 GB/sec interconnect bandwidth, I/O bandwidth unknown, ~112,000 SPC-1 IOPS (estimate)
  • 4-node F400: 384 TB raw capacity, 32 Fibre Channel ports (24 host-facing), 24 GB data cache, 16 GB control cache, 384 disks, 9.6 GB/sec interconnect bandwidth, I/O bandwidth unknown, 93,000 SPC-1 IOPS

Comparison between the F400, T400, T800 and the new V800. In all cases the numbers reflected are for a maximum configuration.

3PAR V800 ready to fight

The new system is based on their latest Generation 4 ASIC, and for the first time they are putting two ASICs in each controller. This is also the first system that supports PCI Express, with, if my memory serves, 9 PCI Express buses per controller. Front-end throughput is expected to be up in the 15 gigabytes/second range (up from ~6GB/sec on the T800). Just think: they have nearly eight times more interconnect bandwidth than the controllers have capacity to push data to hosts – that's just insane.

IOPS – HP apparently is not in a big rush to post SPC-1 numbers, but given the increased spindle count, cache, doubling up on ASICs, and the new ASIC design itself I would be surprised if the system would get less than say half a million IOPS on SPC-1 (by no means a perfect benchmark but at least it’s a level playing field).

It’s nice to see 3PAR finally bulk up on data cache (beefcake!!) – I mean traditionally they don’t need it all that much because their architecture blows the competition out of the water without breaking a sweat – but still – ram is cheap – it’s not as if they’re using the same type of memory you find in CPU cache – it’s industry standard ECC DIMMs. RAM may be cheap, but I’m sure HP won’t charge you industry standard DIMM pricing when you go to put 512GB in your system!

Now that they have PCI Express, 3PAR can natively support 8Gbps Fibre Channel as well as 10Gbit iSCSI and FCoE, which are coming soon.

The drive cages and magazines are more or less unchanged (physically) from the previous generation, but apparently new stuff is still coming down the pike there. The controller's physical design (how it fits in the cabinet) seems radically different from their previous S or T series.

Another enhancement for this system is that they expanded the number of drive chassis to 48, or 12 per node (up from 8 per node). Though if you go back in time you'll find their earliest S800 actually supported 64 drive chassis for a time; since then they have refrained from daisy-chaining drive chassis on their S/T/V class, which is how they achieved the original 64-drive-chassis configuration (or 2,560 disks, back when disks were 9GB in size). The V class obviously has more ports so they can support more cages. I have no doubt they could go to even more cages by taking ports assigned to hosts and assigning them to disks; it's just a matter of testing. Flipping a fibre port from host to disk is pretty trivial on the system.

The raw capacity doesn’t quite line up with the massive amount of control cache the system has, in theory at least if 4GB of control cache per controller is good enough for 200TB raw (per controller pair), then 32GB  per controller should be able to net you 1,600 TB raw (per controller pair or 6,400 TB for the whole system), but obviously with a limit put in of 1,600 TB for the entire system they are using a lot of control cache for something else.

As far as I know the T-class isn't going anywhere anytime soon; this V-class is all about even more massive scale, at a significantly higher entry-level price point than the T-class (at least $100,000 more at the baseline, from what I can tell), with the beauty of running the same operating system, the same user interfaces and the same software features across the entire product line. The T-class, as-is, is still mind-numbingly fast and efficient, even three years after it was released.

No mainframe connectivity on this baby.

Storage Federation

The storage federation stuff is pretty cool in that it is peer-based: you don't need any external appliances to move the data around, the arrays talk to each other directly to manage all of that. This is where we get the first real integration between 3PAR and HP, in that the entire line of 3PAR arrays as well as the LeftHand-based P4000 iSCSI systems (even including the virtual storage appliance!) support this new peer federation (it sort of makes me wonder where EVA support is – perhaps it's coming later, or maybe it's a sign HP is sort of deprecating EVA when it comes to this sort of thing – I'm sure the official party line will be that EVA is still a shining star).

The main advantage, I think, of storage federation technology over something like Storage vMotion is that the array has a more holistic view of what's going on in the storage system, rather than just what a particular host sees or what a particular LUN is doing. The federation should also have more information about the location of the various arrays – if they are in another data center or something – and make more intelligent choices about moving stuff around. I certainly would like to see it in action myself. And even though hypervisors have had thin provisioning for a while, by no means does it reduce the need for thin provisioning at the storage level (at least for larger deployments).

I’d imagine like most things on the platform the storage federation is licensed based on the capacity of the array.

If this sort of thing interests you anywhere near as much as it interests me, you should check out the architecture white paper from HP, which has some new stuff on the V class, here. You don't have to register to download it like you did back in the good ol' days.

I’d be surprised if I ever decided to work for a company large enough to be able to leverage a V-class, but if anyone from 3PAR is out there reading this (I’m sure there’s more than one) since I am in the Bay area – not far from your HQ – I wouldn’t turn down an invitation to see one of these in person 🙂

Oh HP.. first you kick me in the teeth by killing WebOS devices then before I know what happened you come out with a V-class and want to make things all better, I just don’t know what to feel.

The joys of working with a 3PAR array… it's been about a year since I last laid my hands on one (I'm working at a different company now), and I do miss it.
