
April 7, 2012

Interesting discussion on vSphere vs Hyper-V

Filed under: Virtualization — Nate @ 1:52 pm

I stumbled upon this a few days ago and just got around to reading it now. It came out about a month ago; I forget where I saw it, I think on Planet V12n or something.

Anyways, it’s two people who sound experienced (I don’t see information on their particular backgrounds), each talking up their respective solution. Two things really stood out to me:

  • The guy pimping Microsoft spent his time talking up a solution that doesn’t exist yet (“it’ll be fixed in the next version, just wait!”)
  • The guy pimping VMware spent his time arguing that cost doesn’t matter because VMware is the best.

I think they are both right – and both wrong.

It’s good enough

I really believe that in Hyper-V’s case, and in KVM/RHEV’s case as well, these products will be “good enough” for the next generation of projects (in Hyper-V’s case, whenever Windows 8 comes out) for a large (majority) number of use cases out there. I don’t see Linux-centric shops considering Hyper-V, or Windows-centric shops considering KVM/RHEV/etc, so VMware will still obviously have a horse in the race (as the pro-VMware person harps on in the debate).

Cost is becoming a very important issue

One thing that really rubbed me the wrong way was when the pro-VMware person said this:

Some people complain about VMware’s pricing but those are not the decision makers, they are the techies. People who have the financial responsibility for SLAs and customers aren’t going to bank on an unproven technology.

I’m sorry, but that is just absurd. If cost weren’t an issue the techies wouldn’t be complaining about it; they complain because they know, first hand, that it is an issue in their organizations and that they have to justify the purchase to those decision makers. The company I’m at now was in that same situation: the internal IT group could only get the most basic level of vSphere approved for purchase for their internal IT assets (this was a year or two or three ago). I hear them constantly complaining about the lack of things like vMotion, shared storage, etc. Cost was a major issue, so the environment was built with disparate systems and storage and the cheap version of vSphere.

Give me an unlimited budget and I promise, PROMISE you will NEVER hear me complain about cost. I think the same is true of most people.

I’ve been there, more than once! I’ve done that exact same thing (well, in my case I managed to have good storage in most cases).

Those decision makers weigh the cost of maintaining that SLA against whatever solution they’re going to provide. Breaking SLAs can be more cost effective than achieving them, especially if they are absurdly high SLAs. I remember at one company I was at, they signed all these high level SLAs with their new customers, so I turned around and said: hey, in order to achieve those SLAs we need to do this laundry list of things. I think maybe 5-10% of the list got done before the budget ran out. You can continue to meet those high SLAs if you’re lucky, even without the ability to actually sustain a failure and maintain uptime. More often than not, smaller companies prefer to rely on luck rather than doing things right.

Another company I was at had what could have been called a disaster in its own right, during the same time I was working on a so-called disaster recovery project (no coincidence). Despite the disaster, at the end of the day management canned the disaster recovery project, even though everyone agreed it would have saved a great deal had it been in place at the time of the disaster. It’s not that the budget wasn’t approved; it was approved. The problem was that management wanted to do another project that they massively under-budgeted for, and decided to cannibalize the DR budget to feed that other pet project.

Yet another company I was at signed a disaster recovery contract with SunGard just to tick the check box and say they had DR. The catch was that the entire company knew up front, before they signed, that they would never be able to utilize the service. IT WOULD NEVER WORK. But they signed it anyway, because they needed a plan and they didn’t want to pay for a plan that would have worked.

VMware touting VM density as king

I’ve always found it interesting how VMware touts VM density: they assume an automatic density advantage for VMware, which automatically reduces VMware’s costs regardless of the competition. This example was posted to one of their blogs a few days ago.

They tout their memory sharing, their memory ballooning and their memory compression as things that can increase density vs the competition.

My own experience with memory sharing on VMware, at least with Linux, is pretty simple: it doesn’t work. It doesn’t give results. Looking at one of my ESX 4.1 servers (yes, no ESXi here), which has 18 VMs on it and 101GB of memory in use, how much memory am I saving with transparent page sharing?

3,161 MB – or about 3%. Nothing to write home about.
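The math, for what it’s worth, as a trivial Python sketch using the numbers from that host:

# Sanity check on the transparent page sharing savings quoted above.
in_use_mb = 101 * 1024      # ~101 GB of memory in use on the host, in MB
tps_saved_mb = 3161         # memory reported as saved by page sharing, in MB

savings_pct = tps_saved_mb / float(in_use_mb) * 100
print("TPS is saving %.1f%% of in-use memory" % savings_pct)   # ~3.1%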

For production loads, I don’t want to be in a situation where memory ballooning kicks in, or where memory compression kicks in; I want to keep performance high, and that means no swapping of any kind from any system. The last thing I want is my VMs thrashing my storage with active swapping. Don’t even think about swapping if you’re running Java apps either: once garbage collection kicks in, your VM will grind to a halt while the collector walks a heap that is partly sitting on disk.

I would, however, like a method to keep the Linux buffer cache under control; whether it’s ballooning that specifically targets file system cache, or some other method, that would be a welcome addition to my systems.

Another welcome addition would be the ability to flag VMs and/or resource pools to pro-actively utilize memory compression (regardless of memory usage on the host itself): low priority VMs, VMs that sit at 1% CPU usage most of the time, VMs where the added latency of compression on otherwise idle CPU cores isn’t that important (again, stay away from active swapping!). As a bonus, provide the ability to limit the CPU capacity consumed by compression activities, such as limiting it to the resource pool the VM is in, and/or a per-host setting where you could say: set aside up to one CPU core (or whatever) for compression, and if you need more than that, don’t compress unless it’s an emergency.

YAWA (yet another welcome addition) with regards to compression would be to provide compression ratios: how effective is the compression when it’s in use? Recommend to me VMs with low utilization where I could pro-actively reclaim memory by compressing them, or maybe only the portions of their memory that are worth compressing. The hypervisor, with the assistance of the VMware tools agent inside the guest, has the ability to see what is really going on. The capability doesn’t appear to exist today, but I can’t imagine it being too difficult to implement; it’s sort of along the lines of pro-actively inflating the memory balloon.
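Something along these lines is what I have in mind. Every name and field below is made up (nothing like this exists in the vSphere API today); it just sketches the policy:

# Hypothetical selection of proactive compression candidates. All fields and
# function names here are invented for illustration; this is not a real API.
def pick_compression_candidates(vms, idle_cpu_threshold=0.01, cpu_budget_mhz=2600):
    # vms: list of dicts with 'name', 'avg_cpu' (fraction of a core, 0-1) and
    # 'est_compress_cost_mhz' (estimated CPU cost to compress its idle memory)
    budget_left = cpu_budget_mhz            # e.g. roughly one core set aside per host
    candidates = []
    # Most idle VMs first; they notice the added latency the least.
    for vm in sorted(vms, key=lambda v: v['avg_cpu']):
        if vm['avg_cpu'] > idle_cpu_threshold:
            break                           # the rest are too busy to bother
        if vm['est_compress_cost_mhz'] > budget_left:
            continue                        # respect the per-host CPU cap
        budget_left -= vm['est_compress_cost_mhz']
        candidates.append(vm['name'])
    return candidates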

So, for what it’s worth, for me you can take any VM density advantage for VMware off the table when it comes to memory. For me, VM density is more about the efficiency of the code and how well it handles all of those virtual processors running at the same time.

Taking the Oracle VM blog post above, VMware points out that Oracle supports only 128 VMs per host vs VMware at 512. A good example, but they really need to show how well all those VMs can work on the same host, and how much overhead there is. If my average VM CPU utilization is 2-4%, does that mean I can squeeze 512 VMs onto a 32-core system (memory permitting of course), when in theory I should be able to get around 640 (memory permitting again)?
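Back of the envelope (assuming each VM averages about 5% of one core, a bit above my 2-4% figure to leave some slack):

# Rough density math: how many mostly-idle VMs fit on 32 cores, CPU-wise.
cores = 32
avg_core_util_per_vm = 0.05                 # assume each VM averages ~5% of one core
print(int(cores / avg_core_util_per_vm))    # 640 VMs, memory and overhead permitting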

Oh, the number of times I was logged into an Amazon virtual machine that was suffering from CPU problems, only to see that 20-30% of the CPU was being stolen from the VM by the hypervisor. From the sar man page:

%steal

Percentage of time spent in involuntary wait by the virtual CPU or CPUs while the hypervisor was servicing another virtual processor.

Not sure if Windows has something similar.
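On Linux the same counter shows up as the 8th value on the cpu line of /proc/stat (which is where sar reads it from, on kernels new enough to report steal time). A quick sketch to watch it over 5-second intervals:

#!/usr/bin/env python
# Watch %steal by reading /proc/stat directly (Linux only).
import time

def cpu_counters():
    with open('/proc/stat') as f:
        return [int(x) for x in f.readline().split()[1:]]   # drop the "cpu" label

prev = cpu_counters()
while True:
    time.sleep(5)
    cur = cpu_counters()
    deltas = [c - p for c, p in zip(cur, prev)]
    steal_pct = 100.0 * deltas[7] / (sum(deltas) or 1)       # 8th field is steal
    print("%%steal over last 5s: %.1f%%" % steal_pct)
    prev = cur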

Back to costs vs Maturity

I was firmly in the VMware camp for many years. I remember purchasing ESX 3.1 (Standard edition, no budget for higher versions) for something like $3,500 for a two-socket license, and I remember how cheap it felt at the time given the power it gave us to consolidate workloads. I would have been more than happy (myself at least) to pay double for what we got at the time. I remember the arguments I got into over VMware vs Xen with my new boss at the time, and the stories of the failed attempts to migrate to Xen after I left the company.

The pro-VMware guy in the original ZDNet debate doesn’t see the damage VMware is doing to itself when it comes to licensing. VMware can do no wrong in his eyes, and I’m sure there are plenty of other die-hards out there in the same boat; the old motto that you never got fired for buying IBM, right? I can certainly respect the angle, but as much as it pains my own history to admit it, I think the tides have changed and VMware will have a harder and harder time pitching its wares in the future, especially if it keeps playing games with licensing on a technology which its own founders (I think; I wish I could find the article) predicted would become a commodity by about now. With the perceived slow uptake of vSphere 5 amongst users, I think the trend is already starting to form. The problem with the uptake isn’t just the licensing, of course; it’s that for many situations there isn’t a compelling reason to upgrade. “Good enough” has set in.

I can certainly, positively understand VMware charging premium pricing for premium services, an Enterprise Plus Plus or whatever. But don’t vary the price based on provisioned utilization; that’s just plain shooting yourself (and your loyal customers) in the feet. The provisioned part is another sticking point for me: the hypervisor has the ability to measure actual usage, yet they tie their model to provisioned capacity, whether or not the VM is actually using the resource. It is a simpler model, but it makes planning more complicated.

The biggest scam in this whole cloud computing era so many people think we’re getting into is the vast chasm between provisioned and utilized capacity: companies want to charge you for provisioned capacity, and customers want to over-provision so they don’t have to make constant changes to manage resources, knowing full well they won’t be using all that capacity up front.
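To put toy numbers on that chasm (the prices here are entirely made up, purely for illustration):

# Toy illustration of the provisioned-vs-utilized gap. All prices invented.
provisioned_gb = 16            # what the customer reserves "to be safe"
avg_used_gb = 4                # what the VM actually touches on average
price_per_gb_month = 10.0      # hypothetical rate, not any vendor's real price

print(provisioned_gb * price_per_gb_month)   # 160.0/month billed on provisioned
print(avg_used_gb * price_per_gb_month)      #  40.0/month billed on measured usage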

The technology exists, it’s just that few people are taking advantage of it and fewer yet have figured out how to leverage it (at least in the service provider space from what I have seen).

Take Terremark (now Verizon), a VMware-based public cloud provider (and one of only two partners listed on VMware’s site for this now old program). They built their systems on VMware, and they built their storage on 3PAR. Yet on their vCloud Express offering there is no ability to leverage resource pools and no ability to utilize thin provisioning (from a customer standpoint). I have to pay attention to exactly how much space I provision up front, and I don’t have the option to manage it like I would on my own 3PAR array.

Now, Terremark has an enterprise offering that is more flexible and does offer resource pools, but this isn’t available on their on-demand offering. I still have the original quote Terremark sent me for the disaster recovery project I was working on at the time; it makes me want to either laugh or cry to this day. I have to give Terremark credit though: at least they have an offering that can utilize resource pools, while most others do not (well, I haven’t heard of even one, though I haven’t looked recently). (Side note: I hosted my own personal stuff on their vCloud Express platform for a year, so I know it first hand. It was a pretty good experience; what drove me away primarily was their billing at an hourly rate for each and every TCP and UDP port I had open. It’s also good to not be on their platform anymore so I don’t risk them killing my system if they see something I say and take it badly.)

Obviously the trend in system design over recent years has bitten into the number of licenses VMware is able to sell, and if their claims are even remotely true (that 50% of the world’s workloads are virtualized, and that of those they have 80% market share) it’s going to be harder and harder to maintain a decent growth rate. It’s quite a pickle they are in: customers in large part apparently haven’t bought into the more premium products VMware provides (the ones that are not part of the hypervisor), so they felt the pressure to increase the cost of the hypervisor itself to drive that growth in revenue.

Bottom Line

VMware is in trouble.

Simple as that.

11 Comments

  1. I think that sometimes it’s easy to let the environment you manage color your perception of how others are deploying and managing technology as a whole. I know that when I write about certain aspects of technology I lean heavily on the assets I manage on a daily basis; it’s my familiarity with those systems that allows me to speak with authority about them. I sometimes take for granted that other IT shops do things differently, or may see my approach as one that won’t work for them. It’s very easy to generalize and jump to conclusions.
    I actually had a boss who would tell me that “cost is no issue” for every project I worked on. I knew full well that cost was the overriding issue, and would architect 3 different scenarios for each project: the “cost is no issue” one, the “what I think we would really need to meet the project’s needs” one, and the budget solution. Invariably, the budget solution would be the one that management opted for, and I’d have to make do with it. I think it really depends on the type of corporate culture at a given company as to whether cost will be an issue or not. Some companies still love to hate their IT departments, and will do their best to make do with as little as possible and take the risk that they will survive a real disaster. Others will throw money at IT and waste it as they cluelessly look for a solution to a perceived problem. I know of a company that bought an XIV system to test a workload, and then pulled all the drives and shredded them after the test due to the sensitivity of the data and their internal policies. Yes, that’s close to 800K down the drain, which is the annual IT budget for lots of shops.

    I can’t make a comparison of Hyper-V3 to VMware until it’s actually released. I refuse to take marketecture as fact, especially in the case of Microsoft products. I most certainly will not deploy any MS product until at least SP1 has come out. Those burns refuse to heal.

    We are 80/20 Windows/Linux, so for us the benefits involved with memory management within VMware work well. Linux workloads, not so well; that may improve since VMware has Linux proficiency, but will it with Hyper-V? I doubt it. We have achieved densities of around 35:1 with our existing host infrastructure. I don’t know of any Hyper-V shops currently getting that level with production-class workloads.
    I’ve yet to see a third party that I trust provide a cost comparison between Hyper-V and VMware. Those of us with MS ELAs can get discounts at a significant level, but then there are all the extra costs involved. Sure, the hypervisor is free, but you want to manage it? Well, that’s where it’s going to cost you plenty.
    You’re right that VMware is expensive, and it’s going to get even more expensive as time goes on. The licensing fiasco for vSphere 5 may have left people’s minds as time has gone by, but I tend to see it as VMware adopting some aspects of the Oracle pricing model. Oracle can charge what it does because they know they have many of their customers over a barrel, and VMware is approaching that threshold with a fair number of large installs. Sure, the SMB market may have the flexibility to move to Hyper-V, but shops with 1000+ VMs and all the infrastructure built out to support them will be hard pressed to make the move. This will mean you will see shops keeping their older, and cheaper, versions of VMware unless there is a true benefit to upgrade.

    If you look at the features being crafted with each subsequent release of vSphere, you will see that they are nearly all confined to I/O performance and storage management/offload. In robust environments, this is where the pain point continues to manifest. In the general VMware populace at large it’s memory constraints, but for those groups pushing the envelope and running an 85-100% virtualized platform, I/O and storage constraints are key. I think that’s why you see VMware doing what they are with the VAAI primitives and Storage DRS.

    As for Hyper-V, meh; good enough doesn’t cut it in my current organization. Then there is the lack of a real ecosystem devoted to Hyper-V and products that I can leverage. Sure, as the market share increases the ecosystem will develop, but really, if I’m going to stake my job on production workloads and the SLAs required, I’m not going to settle.

    Bottom line, I don’t doubt that feature parity is coming, but is it too late in the game for that vast swath of the VMware customer base to make the change mid-stream? Sure, shops may go evaluate Hyper-V and use it for test/dev, and in the case of some shops, production. Still, given my own scenario I don’t see Hyper-V making any headway in our organization; for many shops, though, it will be a decision they ultimately have to make.

    I do think that at some point the hypervisor will be given away free; it essentially is now, and it’s the management that will cost you. So if you want to run 100 ESXi hosts of vSphere 5 with 64GB of RAM and two sockets each, and get crafty with scripting, you can essentially run at a hypervisor cost of zero. Still, the amount of time to manage and craft that solution may cost you as much, if not more, than paying for the licenses.

    Comment by gchapman — April 7, 2012 @ 7:01 pm

  2. […] I read this post by Nate over at TechOpsGuys about Hyper-V vs VMware. It’s a good read and Nate brings several salient points to discussing the challenges facing VMware in the future as Hyper-V comes into feature parity with VMware. […]

    Pingback by Thankfully the RAID saved us. » Blog Archive » Hyper-V vs VMware a response to Nate at TechopsGuys — April 7, 2012 @ 7:04 pm

  3. Wow, those are some good points! Nice to hear feedback from someone in a mostly Windows shop; I really am not in touch with many on that side of the fence.

    The comparison with Oracle licensing is interesting, though at least with Oracle Database there is an option a lot of people seem to dismiss or not be aware of, which is Oracle Standard Edition: flat rate pricing per socket, unlimited cores per socket, though you do have a limit of 4 sockets per server (or per cluster). They even throw in RAC for “free” with Standard Edition (unless things have changed recently). There are no limits on the size of the database itself (on disk), no memory limits or whatever. There are limits on some of the features that are supported (e.g. no partitioning).

    As for I/O constraints, it seems the storage industry is working fast to address those with creative new uses of flash, tiering stuff down to the server level. Then of course there is the slightly older tech: sub-LUN auto tiering, EMC’s FAST Cache, NetApp’s PAM cards, Compellent’s Live Volume (automatic load balancing between arrays).

    I read this recently, which talked about how few people use Storage I/O Control; it seemed many of the comments were along the lines of “my array wide-stripes, so if one volume is slow all of them are.” I have SIOC enabled, though I don’t think it’s ever kicked in:

    http://blogs.vmware.com/vsphere/2012/03/debunking-storage-io-control-myths.html

    100 ESXi hosts with 64 gigs: at least for me, my workloads are entirely memory constrained, so that would turn into either 13 hosts with 512GB or maybe as many as 26 hosts with 256GB, which of course is a lot more manageable than 100!!
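    The quick math, ignoring per-host overhead and any N+1 spare capacity:

    # Same total memory footprint, consolidated onto bigger hosts.
    total_gb = 100 * 64               # 6,400 GB spread across 100 small hosts
    print(-(-total_gb // 512))        # 13 hosts at 512GB each (rounded up)
    print(-(-total_gb // 256))        # 25 hosts at 256GB; call it 26 with a little headroom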

    But I certainly understand the importance of manageability. It kind of makes me wonder why vSphere is still limping along with 32 hosts as the maximum cluster size; seems kind of paltry in this day and age, doesn’t it? They tout their ability to get 512 VMs on a single host, but then you’re faced with only 3,000 VMs in an entire cluster. The 32-node limit goes back at least to ESX 3.5.

    The excitement around the new Hyper-V seems pretty strong; it feels similar to the excitement around vSphere 4. Who knows if they’ll be able to deliver. It won’t matter to me, since I won’t use it in any case 🙂

    I’m still a believer in using physical resources when virtual doesn’t suffice; the situations where that comes into play are becoming more and more rare, but they’re still there. Despite my company’s 100% virtualized environment, I told my current boss that if we want to really scale our main money-driving application we should put it on physical hardware due to licensing. The application is licensed based on the number of installations (regardless of CPU/memory/disk/whatever), which is not VM-friendly in that respect. The application by itself is stateless (the state is stored in the database, or in memcache, or something else), so if a server goes down it doesn’t really matter, and it has minimal I/O needs. So I figure if we get to that point we get a couple of beefy systems with lots of CPU cores and throw them at the app: no VMware licensing, no worries about hypervisor scaling (especially given ESX 4.1 tops out at 8 vCPUs per VM). It just seems wrong to deploy a single app to a hypervisor and have it use the entire set of resources on a single system. That app is unique in our environment.

    VMware of course hates that approach: you can’t protect it with SRM, HA or other fancy things, and when I talk to them they don’t seem to understand that it’s stateless. If the box dies it doesn’t matter; there is no data on there that is critical. The vendor can come repair the system and it can be up the next day, nothing lost.

    Same goes for a lot of the stuff I work with; the motto ‘everything behind a load balancer’ rings true with almost everything I do. And nowadays with my new Citrix NetScalers I even have intelligent MySQL load balancing (prior to that, all of my load balancing background was F5). There are very few things in the environment that would cause a major issue if an ESX host went offline, very few single points of failure, and most of the single points aren’t in the critical path. There is one that is, though: memcache, and I am using VMware FT to protect it for now. With the load balancers, if a host goes down the application recovers in about 5 seconds, maybe 10 depending on the health check, a far sight better than HA 🙂 (Can’t really count FT since it’s limited to 1 vCPU).

    I think a lot of the stuff VMware is doing for storage they’re doing just to move it higher up in the stack and get more control; solutions for high I/O, high contention, etc. exist today and have for a while. Of course you do have to pay for them, but when I do, at least I know (or hope, at least) that it will benefit me no matter what is running on top of the storage, whether it’s a physical server, a hypervisor (any kind), a NAS gateway or whatever. The vast bulk of the I/O in my environments bypasses vSphere entirely, via RDMs, guest-iSCSI attachments or guest-NAS attachments. Good or bad, it keeps things flexible.

    thanks for the comment!!

    Comment by Nate — April 7, 2012 @ 9:06 pm

  4. I use a lot of RDMs and iSCSI-from-guest access as well. If you run a lot of Microsoft clusters and use physical RDMs (which they recommend), it makes a lot of the features within VMware pointless: no snapshots or vMotion, and it also borks the backup ability of software that uses snapshots to do VM backups. I’m not super wild about shoving all my data into VMFS datastores, and the flexibility RDMs provide for LUNs larger than 2TB is crucial to our environment. This has been addressed in vSphere 5, but we have not upgraded yet. I will say the improvements in VMFS5 are substantial, though VMware has been all over the map when it comes to their VMFS block sizes; they are continuing to tweak it to be more forgiving. 1MB blocks with 8k sub-blocks are now standard in VMFS5, as well as support for up to 60TB LUN sizes with RDM pass-through.

    http://virtualnoob.wordpress.com/2012/04/08/vsphere-5-how-to-vmfs-considerations-and-provisioning-part-1/

    I like the idea behind purpose-built storage arrays for VMware loads. Look at Tintri and what they are doing. I think as that platform matures, and if the company can bring the price point down to something the SMB market can absorb, it will be highly successful for the highly virtualized shops that want to ensure solid performance without the hassle of management and fine tuning. I see the industry as a whole moving towards set-and-forget appliances. There is just not enough manpower available in many companies, and we are forced to continually wear more and more hats.

    SIOC is on in our environment as well, but it never kicks in; it’s kind of a nice-to-have for when a VM goes off the rails, which they will on occasion.

    I would have loved to run Oracle Standard, but we went with the entire e-Business Suite and thus needed Enterprise, licensed at 9 cores for P-series boxes on AIX; not the cheap alternative. I know Oracle is pushing their ODA (Oracle Database Appliance), which forces RAC as well as a 4TB data limit unless you output to one of their Sun ZFS boxes. Still, at an entry point of 50k per appliance (without licensing) it’s not a cheap solution, and honestly Oracle’s bigger costs are on the support side (who doesn’t love cutting a check for 900k). The one thing that pisses me off to no end is Oracle’s virtualized licensing scheme, where all other hypervisors are forced to submit to soft CPU partitioning, while Oracle’s own hypervisor (Xen, last I checked) is not. This locks you into huge licensing costs if you virtualize your database tier, since most hosts you would want to do so on have 16 cores or more, and you will be forced to license all the cores even if you are not using them (soft partitioning), vs a Unix box where you can carve out LPARs (hard partitioning), even though in my view this is essentially the same thing. /rant off
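    To put rough numbers on that gripe (the license price below is completely made up; the 0.5 core factor is the typical one for x86 chips):

    # Soft vs hard partitioning, with an invented per-processor license price.
    price_per_proc_license = 50000.0   # hypothetical list price, not Oracle's real number
    core_factor = 0.5                  # typical core factor for x86 cores

    soft_partitioned_cores = 16        # e.g. a VMware host: license every core in the box
    hard_partitioned_cores = 4         # e.g. an LPAR: license only the cores you carve out

    print(soft_partitioned_cores * core_factor * price_per_proc_license)   # 400000.0
    print(hard_partitioned_cores * core_factor * price_per_proc_license)   # 100000.0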

    Comment by gchapman — April 8, 2012 @ 10:10 am

  5. I think VMware is in trouble as well with licensing, and their product sprawl is unreal right now. Have you seen their product listing lately?
    http://www.vmware.com/products/

    I get lost just trying to find the basic hypervisor most of the time. I remember when VMware was just the Enterprise licensing plus one bundle to get Lab Manager and SRM. Now most of the time when I order licensing I get some product for free as the “promotion of the month”.

    I’ve been arguing for a few years now that if you’re going to go upscale in your licensing, then do a Platinum license that covers EVERYTHING from the management tier plus the Enterprise Plus hypervisor, excluding the VDI product, which is another ball game in my opinion.

    Memory overcommitment is suicide in the Linux world, but Windows does do a better job of it in my experience. VMware is just heavily Microsoft-focused, as that’s what most companies want to consolidate.

    Truly, at the end of the day, if you’re building multi-tenant or consumer-facing applications, virtualization is a mistake; you should be pushing as much into the CPU as possible and maxing out your CPU capacity before you run out of memory, and high-CPU systems don’t play well in VMware.

    Comment by Justin — April 9, 2012 @ 2:25 pm

  6. I’ve always wondered why transparent page sharing never seemed to work with Linux. At first I thought it was the address space randomization the kernel does, but I think Windows added that a few years back too. The only other difference I can think of is that Windows likes to use the page file a lot more than Linux likes to use swap, so Windows may just keep more of its memory zeroed out, leading to higher de-dupe ratios.

    I have a Win2k3 VM here on my desktop at work which has 768MB of RAM; it says 329MB of page file in use, 398MB of file system cache and another 400MB of free memory. Is Windows’ strategy to shove everything into the page file and rely on the file system cache to speed up access? This goes back many years, long before VMs were around; I remember trying to disable page files years ago only to get an error saying one was required, even though I may have had 85% of memory free.

    Apparently the Linux folks don’t think that strategy is a good one for whatever reason.

    thanks for the post!

    Comment by Nate — April 9, 2012 @ 3:38 pm

  7. Windows loves the page file… I suspect your assumption may be correct. A Windows computer doing absolutely nothing but booting itself and not running any applications will be using the page file, and it uses it all the time even if your memory usage is less than 10%. It’s been this way since the beginning of the page file as far as I can tell. I’m sure it makes the SSD folks extremely happy, as it artificially shortens the lifetime of an SSD drive.

    Comment by Justin — April 10, 2012 @ 9:24 am

  8. Hi Nate,

    I’m Kenneth Denson, service manager for vCloud Express here at Terremark. I just wanted to chime in and thank you for the time you spent with us as a customer. I’m glad that your experience with vCloud Express was generally positive and I’m sorry that we lost you due to the cost per hour of ports. I spoke with my product manager today to ask about our reasoning behind the “penny-per-hour” approach.

    He explained to me that our pricing is broken into several components: instances, storage, bandwidth, internet services (port charges) and others. The real intent behind having such granular pricing is to fairly associate actual usage with the components that Terremark needs to buy, monitor, and manage in support of the cloud platform. We place Citrix NetScaler load balancers in front of the vCloud Express platform, and every internet service that is created consumes a portion, albeit a small portion, of the capacity on the redundant load balancers. Rather than spreading that cost over all customers, we associate it with the creation of an internet service, so any customers consuming more of the load balancers’ capacity will have a higher portion of their bill associated with internet services.

    And feel free to come back to us anytime, we’ll never kill your system just because we disagree with something you say on your site. You’re just calling it like you see it 🙂

    If there is ever anything I can help with, please feel free to email me. I’m happy to help.

    @KennethDenson

    Comment by Kenneth Denson — April 12, 2012 @ 3:24 pm

  9. Thanks for the comment Kenneth! I totally understand the model; the issue is billing based on provisioning vs usage. If I opened a TCP port to my server and left it open for 24 hours and nobody sent even one packet of data to it, I’d consider that zero usage, but in the Terremark model that is 24 hours of usage. I understand (as an operations person) the difficulties in actually determining usage that way, and I don’t have a good solution off the top of my head, but the model of billing for provisioning rather than actual usage needs to be fixed (not just by Terremark; it’s obviously a wide-scale issue). I’m well aware of your use of Citrix NetScalers and Cisco firewalls and 3PAR storage and VMware, all major reasons for me to choose Terremark in the first place. I had lots of interesting discussions with your technical teams when I was looking at the possibility of deploying some stuff on your Enterprise Cloud (I realize the two clouds are built differently, but a lot of the stuff is similar, I imagine).

    I’m a new user of NetScaler myself, having had primarily all of my load balancing background with F5. One feature I saw recently that F5 seems to offer now (it didn’t exist last time I used them) is per-virtual-server CPU utilization, something that would probably go a long way towards actually being able to calculate usage for billing customers. Maybe Citrix will get something like that at some point.

    My use case just didn’t align with what your vCloud Express was built for. For example, I needed two separate NAT IP addresses for outbound SMTP traffic (one for each of my two VMs). In order to pull that off (after consulting with support) I had to create a second vCloud Express account; your billing system had some issues with that, but they were able to get it worked out.

    A compromise that I think would have been cool to have, and would have fit me fine, is an option to just do IP-based 1:1 NAT. I haven’t tried it on NetScaler, but on F5 at least it was pretty easy to map one external IP to an internal IP: a single resource, give me everything.

    While Terremark’s costs are very high compared to what I can do with my own gear on tier 1 hardware, I still consider Terremark a really good platform, built with really good components underneath; stuff that is really built to be more of a utility and “just work”, rather than taking an approach like Amazon’s, where you have to build your stuff to expect failure at any given time and have absolutely no support whatsoever.

    Of all the cloud companies (about 5) I talked to during a disaster recovery project a couple of years ago, Terremark was the only one brave enough to provide a quote. It wasn’t a pretty quote, mind you, but everyone else bowed out pretty quickly once they realized what the cost structure was going to look like. I believe at the time the only site Terremark had that utilized 10GbE was in South America – Peru or something? Talk about a weird place to be. I was going to have to wait until a new facility in Washington DC (?) came online to get 10GbE, or build the solution from the ground up, and the installation charges for that were pretty astronomical.

    Also, how many providers can say they lived through a data center fire without any customer impact (not Internap, that’s for damn sure)?!! How cool is that! 🙂

    Thanks again for the kind post, and I will continue to keep an eye on Terremark, I mean Verizon, in the event I need cloud services in the future (though without adjusting the cost model I just can’t see how I would want to use any cloud, really; having been practically forced to use EC2 off and on for the past 1.5 years, I can’t put into words how much more of a pleasure it was to work with Terremark than EC2 – the politics behind EC2 alone are enough not to use it).

    Comment by Nate — April 12, 2012 @ 3:45 pm

  10. Thank you for the nice comments! Yeah, admittedly there are some limitations that you wouldn’t run into with Enterprise Cloud, so I understand that those can be a dealbreaker for some, in which case we always try to move the user over to eCloud.

    Speaking of Enterprise Cloud, the data center you mentioned near DC is online now and it is incredibly impressive. I sit right next to one of the Enterprise Cloud service managers; he’s visited it and always has stories about how secure it is. Really a top-notch facility.

    If you ever have questions about vCloud Express, Enterprise Cloud, or other Terremark “stuff” please feel free to contact me and I’ll be happy to help, or at least get you in touch with someone who can.

    Comment by Kenneth Denson — April 16, 2012 @ 12:38 pm

  11. […] tried to be a vocal opponent to this strategy and firmly believed it was going to hurt VMware, I haven't […]

    Pingback by The Screwballs have Spoken « TechOpsGuys.com — August 20, 2012 @ 2:09 pm
