TechOpsGuys.com Diggin' technology every day

August 29, 2011

Farewell Terremark – back to co-lo

Filed under: General, Random Thought, Storage, Virtualization — Nate @ 9:43 pm

I mentioned not long ago that I was going co-lo once again. I was co-lo for a while for my own personal services, but then my server started to act up (it would be six years old if it were still alive today) with disk “failure” after failure – or at least that’s what the 3ware card kept predicting; eventually it stopped complaining and the disk never died. So I thought: do I spend a few grand to buy a new box or go “cloud”? I knew up front cloud would cost more in the long run, but I ended up going cloud anyways as a stop gap. I picked Terremark because it had the highest quality design at the time (and still does).

During my time with Terremark I never had any availability issues. There was one day with some high latency on their 3PAR arrays, though they found and fixed whatever it was pretty quickly (it didn’t impact me all that much).

I had one main complaint with regards to billing: they charge $0.01 per hour for each open TCP or UDP port on their system, and they have no way of doing 1:1 NAT. For a web server or something this is no big deal, but I needed a half dozen or more ports open per system (mail, DNS, VPN, SSH, etc.) even after cutting out ports I might not need, so it starts to add up – indeed, about 65% of my monthly bill ended up being these open TCP and UDP ports.
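To put that port charge in perspective, here’s a quick back-of-the-envelope calculation (just a sketch; the per-port rate is the $0.01/hour figure above, and the port counts are illustrative rather than my exact configuration):

```python
# Rough monthly cost of per-port charges at $0.01 per open TCP/UDP port per hour.
HOURS_PER_MONTH = 730          # average hours in a month
RATE_PER_PORT_HOUR = 0.01      # USD per open port per hour

def monthly_port_cost(open_ports):
    return open_ports * HOURS_PER_MONTH * RATE_PER_PORT_HOUR

for ports in (1, 6, 12):
    print(f"{ports:2d} open ports ~ ${monthly_port_cost(ports):.2f}/month")
# Each open port runs about $7.30/month, so a half dozen or more ports per
# system across two systems adds up quickly.
```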

Once both of my systems were fully spun up (the second only recently, as I was too lazy to move it off of co-lo), my bill was around $250/mo. My previous co-lo was around $100/mo, and I think I had them throttle me to 1Mbit of traffic (this blog was never hosted at that co-lo).

The one limitation I ran into on their system was that they could not assign more than one IP address for outbound NAT per account. In order to run SMTP I needed each of my servers to have its own unique outbound IP, so I had to make a second account to run the second server. Not a big deal for me (it ended up being a pain for them, since their system wasn’t set up to handle such a situation), since I only ran two servers and the communication between them was minimal.

As I’ve mentioned before, the only part of the service that was truly “bill for what you use” was bandwidth usage, and for that I was charged between 10-30 cents/month for my main system and 10 cents/month for my 2nd system.

Oh – and they were more than willing to set up reverse DNS for me, which was nice (and required for running a mail server IMO). I had to agree to a lengthy little contract that said I wouldn’t spam in order for them to open up port 25. Not a big deal. The IP addresses were “clean” as well, no worries about blacklisting.

Another nice thing to have, if they had offered it, would be billing based on resource pools; as usual they charge for what you provision (per VM) instead of what you use. When I talked to them about their enterprise cloud offering they charged for the resource pool (unlimited VMs in a given amount of CPU/memory), but this is not available on their vCloud Express platform.

It was great to be able to VPN to their systems to use the remote console (after I spent an hour or two determining the VPN was not going to work in Linux, despite my best efforts to extract Linux versions of the VMware console plugin and try to use it). Mount an ISO over the VPN and install the OS – that’s how it should be. I didn’t need the functionality, but I don’t doubt I would have been able to run my own DHCP/PXE server there as well if I wanted to install additional systems in a more traditional way. Each user gets their own VLAN, is protected by a Cisco firewall, and is load balanced by a Citrix load balancer.

A couple of months ago the thought came up again of off site backups. I don’t really have much “critical” data but I felt I wanted to just back it all up, because it would be a big pain if I had to reconstruct all of my media files for example. I have about 1.7TB of data at the moment.

So I looked at various cloud systems, including Terremark, but it was clear pretty quickly that no cloud company was going to be able to offer this service in a cost effective way, so I decided to go co-lo again. Rackspace was a good example; they have a handy little calculator on their site. This time around I went and bought a new, more capable server.

So I went to a company I used to buy a ton of equipment from in the bay area, and they hooked me up with not only a server with ESXi pre-installed on it but also co-location services (with “unlimited” bandwidth) and on-site support, for a good price. The on-site support is mainly because I’m using their co-location services (which is itself a co-lo inside Hurricane Electric) and their techs visit the site frequently as it is.

My server has a single socket quad core processor, 4x2TB SAS disks (~3.6TB usable, which conveniently matches my usable disk space at home – SAS because VMware doesn’t support VMFS on SATA, though technically you can do it; the price premium for SAS wasn’t nearly as high as I was expecting), a 3ware RAID controller with battery-backed write-back cache, a little USB stick for ESXi (I’d rather have ESXi on the HDD, but 3ware is not supported for booting ESXi), 8GB of registered ECC RAM, and redundant power supplies. It also has decent remote management with a web UI, remote KVM access, remote media, etc. For co-location I asked for (and received) 5 static IPs (3 IPs for VMs, 1 IP for ESX management, 1 IP for out-of-band management).

My bandwidth needs are really tiny, typically 1GB/month, though now with off site backups that may go up a bit (in bursts). The only real drawback to my system is that the SAS card does not have full integration with vSphere, so I have to use a CLI tool to check the RAID status; at some point I’ll need to hook up Nagios again and run a monitor to check on it. Normally I set up the 3ware tools to email me when bad things happen – pretty simple, but not possible when running vSphere.
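When I do get around to the Nagios bit, it will probably look something like this minimal sketch. It assumes the 3ware CLI (tw_cli) can be invoked from wherever the check runs (for example over SSH to a box that can see the controller); the controller ID and the output parsing are assumptions that would need adjusting for the real hardware:

```python
#!/usr/bin/env python3
# Minimal Nagios-style check for 3ware RAID unit health via the tw_cli tool.
# The controller ID and output format are assumptions -- adjust for your box.
import subprocess
import sys

OK, WARNING, CRITICAL, UNKNOWN = 0, 1, 2, 3
CONTROLLER = "/c0"   # hypothetical controller ID; check "tw_cli show" on your system

def check_raid():
    try:
        out = subprocess.run(["tw_cli", CONTROLLER, "show"],
                             capture_output=True, text=True, timeout=30).stdout
    except (OSError, subprocess.TimeoutExpired) as exc:
        print(f"UNKNOWN - could not run tw_cli: {exc}")
        return UNKNOWN
    # Unit lines start with "u"; treat anything not OK or verifying as bad.
    bad = [line.strip() for line in out.splitlines()
           if line.startswith("u") and " OK " not in line and "VERIFY" not in line]
    if bad:
        print("CRITICAL - degraded unit(s): " + "; ".join(bad))
        return CRITICAL
    print("OK - all RAID units report OK")
    return OK

if __name__ == "__main__":
    sys.exit(check_raid())
```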

I expect the amount of storage on this box to last me a good 3-5 years. The 1.7TB includes every bit of data that I still have, going back a decade or more – I’m sure there are at least a couple hundred gigs I could outright delete because I may never need them again. But right now I’m not hurting for space, so I keep it all there, online and accessible.

My current setup

  • One ESX virtual switch on the internet with two systems on it – a bridging OpenBSD firewall, and a Xangati system sniffing packets (still playing with Xangati). No IP addresses are used here.
  • One ESX virtual switch for one internal network: the bridging firewall has another interface here, my two main internet-facing servers have interfaces here, and the firewall has a management interface here as well. Only public IPs are used here.
  • One ESX virtual switch for another internal network, for things that will never have public IP addresses associated with them; I run NAT on the firewall (on its 3rd/4th interfaces) for these systems to get internet access.

I have a site-to-site OpenVPN connection between my OpenBSD firewall at home and my OpenBSD firewall on the ESX system, which gives me the ability to directly access the back-end, non-routable network on the other end.

Normally I wouldn’t deploy an independent firewall, but I did in this case because, well I can. I do like OpenBSD’s pf more than iptables(which I hate), and it gives me a chance to play around more with pf, and gives me more freedom on the linux end to fire up services on ports that I don’t want exposed and not have to worry about individually firewalling them off, so it allows me to be more lazy in the long run.

I bought the server before I moved. Once I got to the bay area I went and picked it up, kept it over a weekend to copy my main data set to it, then took it back; they hooked it up again and I switched my systems over to it.

The server was about $2,900 with one year of support, and co-location is about $100/mo. So on disk space alone, the first year (taking into account the cost of the server) my cost is about $0.09 per GB per month (for 3.6TB), with subsequent years being about $0.033 per GB per month (I took a swag at the support cost for the 2nd year, so that is included). That doesn’t even take into account the virtual machines themselves and the cost savings there over any cloud. And I’m giving the cloud the benefit of the doubt by not even looking at their bandwidth costs, just the cost of capacity. If I were using the cloud I probably wouldn’t allocate all 3.6TB up front, but even at 1.8TB – about what I’m using now with my VMs and stuff – the cost still handily beats everyone out there.
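Here’s the arithmetic behind those per-GB numbers (a sketch; the 2nd-year support figure is my swag, roughly reverse-engineered from the $0.033 number):

```python
# $/GB/month for the co-lo box, vs. the cloud capacity pricing discussed below.
USABLE_GB = 3.6 * 1024       # ~3.6TB usable
SERVER = 2900.0              # server w/ 1 year of support
COLO_PER_MONTH = 100.0
SUPPORT_YEAR2 = 300.0        # swag at the 2nd-year support cost (assumption)

year1 = SERVER + 12 * COLO_PER_MONTH
year2 = SUPPORT_YEAR2 + 12 * COLO_PER_MONTH

print(f"Year 1:         ${year1 / (USABLE_GB * 12):.3f}/GB/month")            # ~$0.09
print(f"Year 2:         ${year2 / (USABLE_GB * 12):.3f}/GB/month")            # ~$0.03
print(f"2-year average: ${(year1 + year2) / (USABLE_GB * 24):.3f}/GB/month")  # ~$0.06
# vs. roughly $0.25/GB/month (Terremark, SATA 3PAR) and $0.15/GB/month (Rackspace),
# capacity only, ignoring bandwidth and the cost of the VMs themselves.
```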

What’s the most crazy is I lack the purchasing power of any of these clouds out there, I’m just a lone consumer, that bought one server. Granted I’m confident the vendor I bought from gave me excellent pricing due to my past relationship, though probably still not on the scale of the likes of Rackspace or Amazon and yet I can handily beat their costs without even working for it.

What surprised me most during my trips doing cost analysis of the “cloud” is how cheap enterprise storage is. I mean, Terremark charges $0.25/GB per month (on SATA-powered 3PAR arrays) and Rackspace charges $0.15/GB per month (I believe Rackspace just uses DAS). I kind of would have expected the enterprise storage route to cost say 3-5x more, not less than 2x. When I was doing real enterprise cloud pricing, storage for the solution I was looking for typically came in at 10-20% of the total cost, with 80%+ of the cost being CPU+memory. For me it’s a no-brainer – I’d rather pay a bit more and have my storage on a 3PAR of course (when dealing with VM-based storage, not bulk archival storage). With the average cost of my storage for 3.6TB over 2 years coming in at $0.06/GB it makes more sense to just do it myself.

I just hope my new server holds up. My last one lasted a long time, so I sort of expect this one to last a while too; it got burned in before I started using it and the load on the box is minimal, so I would not be too surprised if I can get 5 years out of it – how big will HDDs be in 5 years?

I will miss Terremark because of the reliability and availability features they offer – they have a great service, and now of course are owned by Verizon. I don’t need to worry about upgrading vSphere any time soon as there’s no reason to go to vSphere 5. The one thing I have been contemplating is whether or not to put my vSphere management interface behind the OpenBSD firewall (which is a VM, of course, on the same box). Kind of makes me miss the days of ESX 3, when it had a built-in firewall.

I’m probably going to have to upgrade my cable internet at home, right now I only have 1Mbps upload which is fine for most things but if I’m doing off site backups too I need more performance. I can go as high as 5Mbps with a more costly plan. 50Meg down 5 meg up for about $125, but I might as well go all in and get 100meg down 5 meg up for $150, both plans have a 500GB cap with $0.25/GB charge for going over. Seems reasonable. I certainly don’t need that much downstream bandwidth(not even 50Mbps I’d be fine with 10Mbps), but really do need as much upstream as I can get. Another option could be driving a USB stick to the co-lo, which is about 35 miles away, I suppose that is a possibility but kind of a PITA still given the distance, though if I got one of those 128G+ flash drives it could be worth it. I’ve never tried hooking up USB storage to an ESX VM before, assuming it works? hmmmm..

Another option I have is AT&T Uverse, which I’ve read good and bad things about, but looking at their site their service is slower than what I can get through my local cable company (which truly is local – they only serve the city I am in). Another reason I didn’t go with Uverse for TV is that, given the technology they are using, I suspected it would not be compatible with my Tivo (with cable cards). AT&T doesn’t mention their upstream speeds specifically, so I’ll contact them and try to figure that out.

I kept the motherboard/CPUs/RAM from my old server; my current plan is to mount it to a piece of wood and hang it on the wall as some sort of art. It has lots of colors and little things to look at – I think it looks cool at least. I’m no handyman so hopefully I can make it work. I was honestly shocked how heavy the copper (I assume) heatsinks were – wow, felt like 1.5 pounds apiece, massive.

While my old server is horribly obsolete, one thing it does have over my new server is the ability to support more RAM. The old server could go up to 24GB (I had a max of 6GB in it at the time); the new server tops out at 8GB (and has 8GB in it). Not a big deal as I don’t need 24GB for my personal stuff, but I thought it was kind of an interesting comparison.

This blog has been running on the new server for a couple of weeks now. One of these days I need to hook up some log analysis stuff to see how many dozen hits I get a month.

If Terremark could fix three areas of their vCloud Express service – resource pool-based billing, relaxing the costs of opening multiple ports in the firewall (or just offering 1:1 NAT as an option), and thin provisioning-friendly billing for storage – it would really be a much more awesome service than it already is.

August 3, 2011

VMware revamps vSphere 5 licensing again

Filed under: Virtualization — Nate @ 5:40 pm

I guess someone high up over there was listening; it’s nice to see the community had some kind of impact. VMware has adjusted their policies to some degree – far from perfect, but more bearable than the original plan.

The conspiracy theorist in me thinks VMware put bogus numbers out there to begin with, never having any intention of following through with them, just to gauge the reaction – then adjusted them to what they probably would have offered originally and tried to make people feel like they “won” by getting VMware to reduce the impact to some degree.

vSphere Enterprise List Pricing comparison (w/o support)

# of Sockets | RAM    | vSphere 4 Enterprise | vSphere 5 Enterprise (old) | vSphere 5 Enterprise (new) | Cost increase over vSphere 4
2            | 256GB  | 2 Licenses - $5,750  | 8 Licenses - $23,000       | 4 Licenses - $11,500       | 100%
4            | 512GB  | N/A                  | 16 Licenses - $46,000      | 8 Licenses - $23,000       | N/A
8            | 1024GB | N/A                  | 32 Licenses - $92,000      | 16 Licenses - $46,000      | N/A

vSphere Enterprise+ List Pricing comparison (w/o support)

# of Sockets | RAM    | vSphere 4 Enterprise+ | vSphere 5 Enterprise+ (old)    | vSphere 5 Enterprise+ (new)    | Cost increase over vSphere 4
2            | 256GB  | 2 Licenses - $6,990   | 5 Licenses (240GB) - $17,475   | 3 Licenses (288GB) - $10,485   | 50% higher
4            | 512GB  | 4 Licenses - $13,980  | 11 Licenses (528GB) - $38,445  | 5 Licenses (480GB) - $17,475   | 25% higher
8            | 1024GB | 8 Licenses - $27,960  | 21 Licenses (1008GB) - $73,995 | 11 Licenses (1056GB) - $38,445 | 37% higher

There were other changes too; see the official VMware blog post above for the details. They quadrupled the amount of vRAM available for the free ESXi to 32GB, which I still think is not enough – it should be, say, at least 128GB.

Also of course they are pooling their licenses so the numbers fudge out a bit more depending on the # of hosts and stuff.

One of the bigger changes is that VMs larger than 96GB will not need more than 1 license. Though I can’t imagine there are many 96GB VMs out there… even with 1 license, if I wanted several hundred gigs of RAM for a system I would put it on real hardware and get more CPU cores to boot (it’s not unlikely you’d have 48-64+ CPU cores for such a system, which is far beyond what vSphere 5 can scale to for a single VM).

I did some rounding in the price estimates, because the numbers are not divisible cleanly by the amount of ram specified.
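For what it’s worth, the license counts in the tables above fall out of a simple formula – licenses = max(sockets, ceil(RAM / vRAM entitlement)) – with per-license list prices implied by the table figures themselves (e.g. $23,000 / 8 licenses). Here’s a sketch; note it uses a strict ceiling, so it lands one license higher than a couple of the Enterprise+ entries I rounded, and pooling licenses across hosts changes the math further:

```python
from math import ceil

# (vRAM entitlement in GB per license, list price per license implied by the tables)
EDITIONS = {
    "Enterprise (old)":  (32, 2875),
    "Enterprise (new)":  (64, 2875),
    "Enterprise+ (old)": (48, 3495),
    "Enterprise+ (new)": (96, 3495),
}

def licenses(sockets, ram_gb, vram_per_license):
    # At least one license per socket, plus enough licenses to cover the vRAM.
    return max(sockets, ceil(ram_gb / vram_per_license))

for sockets, ram in ((2, 256), (4, 512), (8, 1024)):
    for name, (vram, price) in EDITIONS.items():
        n = licenses(sockets, ram, vram)
        print(f"{sockets} sockets / {ram}GB, {name:17s}: {n:2d} licenses = ${n * price:,}")
```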

It seems VMware has effectively priced their “Enterprise” product out of the market if you have any more than a trivial amount of memory. vSphere 4 Enterprise was, of course, limited to 256GB of RAM, but look at the cost of that compared to the new stuff – pretty staggering.

Quad socket 512GB looks like the best bet on these configurations anyways.

I still would like to see pricing based more on features than on hardware. E.g. give me vSphere Standard edition with 96GB per CPU of vRAM licensing, because a lot of those things in Enterprise+ I don’t need (some are nice to have, but very few are critical for most people I believe). As it is, users are forced into the higher tiers due to the arbitrary limits set on the licensing – not as bad as the original vSphere 5 pricing, but still pretty bad for some users when compared to vSphere 4.

Or give me free ESXi with the ability to individually license software features such as vMotion etc on top of it on a per-socket basis or something.

I think the licensing scheme needs more work. VMware could also do their customers a favor by communicating how this will change in the future, as bigger and bigger machines come out it’s logical to think the memory limits would be increased over time.

The biggest flaw in the licensing scheme remains it measures based on what is provisioned, rather than what is used. There is no excuse for this from VMware since they own the hypervisor and have all the data.

Billing based on provision vs usage is the biggest scam in this whole cloud era.

July 20, 2011

VMware Licensing models

Filed under: Virtualization — Nate @ 5:38 am

[ this was originally combined with another post but I decided to split it out ]

VMware has provided it’s own analysis of their customers hardware deployments and telling folks that ~95% of their customers won’t be impacted by the licensing changes. I feel pretty confident that most of those customers are likely massively under utilizing their hardware. I feel confident because I went through that phase as well. Very, very few workloads are truly cpu bound especially with 8-16+ cores per socket.

It wouldn’t surprise me at all that many of those customers when they go to refresh their hardware change their strategy pretty dramatically – provided the licensing permits it. The new licensing makes me think we should bring back 4GB memory sticks and 1 GbE. It is very wasteful to assign 11 CPU licenses to a quad socket system with 512GB of memory, memory only licenses should be available at a significant discount over CPU+memory licenses at the absolute minimum. Not only that but large amounts of memory are actually affordable now. It’s hard for me to imagine at least having a machine with a TB of memory in it for around $100k, it wasn’t TOO long ago that it would of run you 10 times that.

And as to VMware’s own claims that this new scheme will help align ANYTHING better, by using memory pools across the cluster – just keep this in mind. Before this change we didn’t have to care about memory at all, whether we used 1% or 95%, whether some hosts used all of their ram and others used hardly any. It didn’t matter. VMware is not making anything simpler. I read somewhere about them saying some crap about aligning more with IT as a service. Are you kidding me? How may buzz words do we need here?

The least VMware can do is license based on usage. Remember: pay for what you use, not what you provision. And when I say usage I mean actual usage – not charging me for the memory my Linux systems are allocating to (frequently) empty disk buffers (which goes back to the memory balloon argument). If I allocate 32GB of RAM to a VM that is only using 1GB of memory, I should be charged for 1GB, not 32GB. Using vSphere’s own active memory monitor would be an OK start.

Want to align better and be more dynamic? Align based on memory usage and CPU usage: let me run unlimited cores on the cluster and monitor actual usage on a per-socket basis, so if on average you’re using 40% of your CPU (say, billed at the 95th percentile, similar to bandwidth) then you only need 40% licensing. I still much prefer a flat licensing model in almost any arrangement over usage-based, but if you’re going to make it usage based, really make it usage based.
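Just to illustrate the idea (purely a toy sketch of my own, not anything VMware offers): bill each socket at the 95th percentile of its measured utilization, the way bandwidth is commonly billed. The sample data here is made up; a real version would pull utilization samples from vCenter.

```python
# Toy sketch of usage-based licensing: bill a socket at the 95th percentile
# of its utilization samples, bandwidth-billing style.
def percentile(samples, pct=95):
    ordered = sorted(samples)
    idx = max(0, min(len(ordered) - 1, round(pct / 100 * len(ordered)) - 1))
    return ordered[idx]

# Made-up 5-minute samples for one socket: mostly idle with a few busy periods.
samples = [0.05] * 200 + [0.20] * 60 + [0.40] * 25 + [0.90] * 3

billable = percentile(samples, 95)
print(f"95th percentile utilization: {billable:.0%}")       # -> 40%
print(f"Billable fraction of this socket's license: {billable:.2f}")
```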

Oh yeah – and forget about anything that charges you per VM too (hello SRM). That’s another bogus licensing scheme. It goes completely against the trend of splitting workloads up into more isolated VMs and instead favors fewer, much larger VMs that are doing a lot of things at the same time. Even on my own personal co-located ESXi server I have 5 VMs; I could consolidate them into two and provide similar end-user services, but it’s much cleaner to do it in 5 for my own sanity.

All of this new licensing stuff also makes me think back to a project I was working on about a year ago, trying to find some way of doing DR in the cloud. The payback period for doing it in house vs. any cloud on the market (I looked at about 5 different ones at the time) was never more than 3 months. In one case the up front costs for the cloud were 4 times the cost of doing it internally. The hardware needs were modest in my opinion, with the physical hardware not even requiring two full racks of equipment. The #1 cost driver was memory, #2 was CPU; storage was a distant third – assuming the storage the providers spec’d could meet the IOPS and throughput requirements, it came in at about 10-15% of the total cost of the cloud solution.

Since most of my VMware deployments have been in performance-sensitive situations (lots of Java), I run the systems with zero swapping – everything in memory has to stay in physical RAM.

Cluster DRS

Filed under: Virtualization — Nate @ 12:05 am

Given the recent price hikes that VMware is imposing on its customers (because they aren’t making enough money, obviously), and looking at the list of new things in vSphere 5 and being, well, underwhelmed (compared to vSphere 4), I brainstormed a bit and thought about what kind of things I’d like to see VMware add.

VMware seems to be getting more aggressive in going after service providers (their early attempts haven’t been successful – it seems they have fewer partners now than a year ago; btw I am a vCloud Express end-user at the moment). An area where VMware has always struggled is scalability in their clusters (granted, such figures have not been released for vSphere 5, but I am not holding my breath for a 10-100x+ increase in scale).

Whether it’s the number of virtual machines in a cluster, the number of nodes, the scalability of the VMFS file system itself (assuming that’s what your using) etc.

For the most part, of course, a cluster is like a management domain, which means it is, in a way, a single point of failure. So it’s pretty common for people to build multiple clusters when they have a decent number of systems; if someone has 32 servers, it is unlikely they are going to build a single 32-node cluster.

A feature I would like to see is Cluster DRS, and Cluster HA. Say for example you have several clusters: some are very memory heavy, built for loading a couple hundred VMs per host (typically 4-8 sockets with several hundred gigs of RAM), others are compute heavy with very low CPU consolidation ratios (probably dual socket with 128GB or less of memory). Each cluster by itself is a standalone cluster, but there is loose logic that binds them together to allow the seamless transport of VMs between clusters, either for load balancing or for fault tolerance. Combine and extend regular DRS to span clusters; on top of that you may need to do transparent storage vMotion (if required), along with the possibility of mapping storage to the target host (on the fly) in order to move the VM over (the forthcoming storage federation technologies could really help make hypervisor life simpler here, I think).

Maybe a lot of this could be done using yet another management cluster of some kind – a sort of independent proxy (running on independent hardware and perhaps even dedicated storage). In the unlikely event of a catastrophic cluster failure, the management cluster would pick up on this, move the VMs to other clusters and restart them (provided there are sufficient resources, of course!). In very large environments it may not be possible to map everything to everywhere, which would require multiple storage vMotions in order to get the VM from the source to a destination that the target host can access – if this can be done at the storage layer via the block-level replication stuff first introduced in VAAI, that could of course greatly speed up what otherwise might be a lengthy process.
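Since this whole feature is my own wish-list speculation, here’s only a toy sketch of the kind of placement logic such a broker might run: pick a target cluster for a VM being evacuated, preferring a cluster that can already see the VM’s datastore so no storage vMotion is needed. All of the names, fields and numbers are made up:

```python
# Toy "Cluster DRS" broker: pick a target cluster for a VM that has to move.
# Entirely hypothetical; real logic would also handle storage vMotion, HA
# admission control, affinity rules, etc.
from dataclasses import dataclass

@dataclass
class Cluster:
    name: str
    free_ghz: float
    free_ram_gb: float
    reachable_datastores: set

@dataclass
class VM:
    name: str
    cpu_ghz: float
    ram_gb: float
    datastore: str

def pick_target(vm, clusters):
    candidates = [c for c in clusters
                  if c.free_ghz >= vm.cpu_ghz and c.free_ram_gb >= vm.ram_gb]
    if not candidates:
        return None
    # Prefer a cluster that already sees the VM's datastore (no storage vMotion),
    # then the one with the most free memory.
    candidates.sort(key=lambda c: (vm.datastore not in c.reachable_datastores,
                                   -c.free_ram_gb))
    return candidates[0]

clusters = [
    Cluster("mem-heavy", free_ghz=40.0, free_ram_gb=600.0, reachable_datastores={"ds1"}),
    Cluster("cpu-heavy", free_ghz=120.0, free_ram_gb=64.0, reachable_datastores={"ds2"}),
]
vm = VM("app01", cpu_ghz=4.0, ram_gb=32.0, datastore="ds2")
target = pick_target(vm, clusters)
print(f"move {vm.name} -> {target.name if target else 'no capacity available'}")
```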

Since it is unlikely anyone is going to be able to build a single cluster with shared storage that spans a great many systems (100s+) and have it be bulletproof enough to provide 99.999% uptime, this kind of capability would be a stop gap – providing the flexibility and availability of a single massive cluster while reducing the complexity of trying to build software that can actually pull off the impossible (or what seems impossible today).

On the topic of automated cross-cluster migrations, having global spare hardware would be nice too, much like most storage arrays have global hot spares, which can be assigned to any degraded RAID group on the system regardless of what shelf it resides on. Global spare servers would be shared across clusters and assigned on demand. A high-end VM host is likely to cost upwards of $50,000 in hardware these days; multiply that by X number of clusters and, well.. you get the idea.

While I’m here, I might as well say I’d like the ability to hot remove memory, Hyper-V has dynamic memory which seems to provide this functionality. I’m sure the guest OSs would need to be re-worked a bit too in order to support this, since in the physical world it’s not too common to need to yank live memory from a system. In the virtual world it can be very handy.

Oh and I won’t forget – give us an ability to manually control the memory balloon.

Another area that could use some improvement is vMotion compatibility. There is EVC, but last I read you still couldn’t cross processor manufacturers when doing vMotion with EVC. KVM can apparently do it today.

July 12, 2011

VMware jacks up prices too

Filed under: Virtualization — Nate @ 4:34 pm

Not exactly hot on the heels of Red Hat’s 260% price increase, VMware has done something similar with the introduction of vSphere 5 which is due later this year.

The good: They seem to have eliminated the # of core/socket limit for each of the versions, and have raised the limit of vCPUs per guest to 8 from 4 on the low end, and to 32 from 8 on the high end.

The bad: They have tied licensing to the amount of memory on the server. Each CPU license is granted a set amount of memory it can address.

The ugly: The amount of memory addressable per CPU license is really low.

Example 1 – 4x[8-12] core CPUs with 512GB memory

  • vSphere 4 cost with Enterprise Plus w/o support (list pricing)  = ~$12,800
  • vSphere 5 cost with Enterprise Plus w/o support (list pricing)  = ~$38,445
  • vSphere 5 cost with Enterprise w/o support (list pricing)         = ~$46,000
  • vSphere 5 cost with Standard w/o support (list pricing)           = ~$21,890

So you pay almost double for the low end version of vSphere 5 vs the highest end version of vSphere 4.

Yes you read that right, vSphere 5 Enterprise costs more than Enterprise Plus in this example.

Example 2 – 8×10 core CPUs with 1024GB memory

  • vSphere 4 cost with Enterprise Plus w/o support (list pricing) = ~$25,600
  • vSphere 5 cost with Enterprise Plus w/o support (list pricing) = ~$76,890

It really is an unfortunate situation. While it is quite common to charge per CPU socket, or in some cases per CPU core, I have not heard of a licensing scheme that charges for memory.

I have been saying that I would expect to be using VMware vSphere myself until the 2012 time frame, at which point I hope KVM is mature enough to be a suitable replacement (I realize there are some folks out there using KVM now; it’s just not mature enough for my own personal taste).

The good news, if you can call it that, is that as far as I can tell you can still buy vSphere 4 licenses, and you can even convert vSphere 5 licenses to vSphere 4 (or 3). Hopefully VMware will keep the vSphere 4 license costs around for the life of the (vSphere 4) product, which would take customers to roughly 2015.

I have not seen much info about what is new in vSphere 5; for the most part all I see are scalability enhancements for the ultra high end (e.g. 36Gbit/s network throughput, 1 million IOPS, supporting more vCPUs per VM – the number of customers that need that I can probably count on one hand). With vSphere 4 there were many good technological improvements that made it compelling for pretty much any customer to upgrade (unless you were using RDM with SAN snapshots); I don’t see the same in vSphere 5 (at least at the core hypervisor level). My own personal favorite vSphere 4 enhancements over 3 were ESXi boot from SAN, round robin MPIO, and the significant improvements in the base hypervisor code itself.

I can’t think of a whole lot of things I would want to see in vSphere 5 that aren’t already in vSphere 4, my needs are somewhat limited though. Most of the features in vSphere 4 are nice to have though for my own needs are not requirements. For the most part I’d be happy on vSphere standard edition (with vMotion which was added to the licensed list for Standard edition about a year ago) the only reason I go for higher end versions is because of license limitations on hardware. The base hypervisor has to be solid as a rock though.

In my humble opinion, the memory limits should look more like

  • Standard = 48GB (Currently 24GB)
  • Enterprise = 96GB (Currently 32GB)
  • Enterprise Plus = 128GB (Currently 48GB)

It just seems wrong to have to load 22 CPU licenses of vSphere on a host with 8 CPUs and 1TB of memory.

I remember upgrading from ESX 3.5 to 4.0, it was so nice to see that it was a free upgrade for those with current support contracts.

I have been a very happy, loyal and satisfied user & customer of VMware’s products since 1999; put simply, they have created some of the most robust software I have ever used (second perhaps to Oracle). Maybe I have just been lucky over the years, but the number of real problems (e.g. ones that caused downtime) I have had with their products has been tiny – I don’t think I’d need more than one hand to count them. I have never once had an ESX or GSX server crash, for example. I see mentions of the PSOD that ESX belches out on occasion, but I have yet to see it in person myself.

I’ve really been impressed by the quality and performance (even going back as far as my first e-commerce launch on VMware GSX 3.0 in 2004 we did more transactions the first day than we were expecting for the entire first month), so I’m happy to admit I have become loyal to them over the years(for good reason IMO). Pricing moves like this though are very painful, and it will be difficult to break that addiction.

This also probably means that if you want to use the upcoming Opteron 6200 16-core CPUs (also due in Q3) on vSphere you have to use vSphere 5, since 4 is restricted to 12 cores per socket (though it would be interesting to see what would happen if you tried).

If I’m wrong about this math please let me know, I am going by what I read here.

Microsoft’s gonna have a field day with these changes.

And people say there’s no inflation going on out there..

sigh

October 7, 2010

Testing the limits of virtualization

Filed under: Datacenter, Virtualization — Nate @ 11:24 pm

You know I’m a big fan of the AMD Opteron 6100 series processor, also a fan of the HP c class blade system, specifically the BL685c G7 which was released on June 21st. I was and am very excited about it.

It is interesting to think that it really wasn’t that long ago that blade systems still weren’t all that viable for virtualization, primarily because they lacked memory density – I mean, so many of them offered a paltry 2 or maybe 4 DIMM sockets. That was my biggest complaint with them for the longest time. About a year or a year and a half ago that really started shifting. We all know that Cisco bought some small startup a few years ago that had their memory extender ASIC, but you know I’m not a Cisco fan, so I won’t give them any more real estate in this blog entry – I have better places to spend my mad typing skills.

A little over a year ago HP released their Opteron G6 blades; at the time I was looking at the half-height BL485c G6 (guessing here, too lazy to check). It had 16 DIMM sockets, which was just outstanding. The company I was with at the time really liked Dell (you know I hate Dell by now I’m sure); I was poking around their site at the time and they had no answer to that (they have since introduced answers) – the highest capacity half-height blade they had at the time was 8 DIMM sockets.

I had always assumed that due to the more advanced design in the HP blades you ended up paying a huge premium, but wow, I was surprised at the real world pricing – more so at the time because you of course needed significantly higher density memory modules in the Dell model to compete with the HP model.

Anyways, fast forward to the BL685c G7 powered by the Opteron 6174, a 12-core 2.2GHz 80W processor.

Load a chassis up with eight of those:

  • 384 CPU cores (~845GHz of compute)
  • 4 TB of memory (512GB/server w/32x16GB each)
  • 6,750 Watts @ 100% load (feel free to use HP dynamic power capping if you need it)

I’ve thought long and hard over the past 6 months on whether or not to go 8GB or 16GB, and all of my virtualization experience has taught me in every case I’m memory(capacity) bound, not CPU bound. I mean it wasn’t long ago we were building servers with only 32GB of memory on them!!!

There is indeed a massive premium associated with going with 16GB DIMMs, but if your capacity utilization is anywhere near the industry average then it is well worth investing in those DIMMs for this system: going from 2TB to 4TB of memory using 8GB chips in this configuration means buying a 2nd chassis and the associated rack/power/cooling + hypervisor licensing. You can easily halve your costs by just taking the jump to 16GB chips and keeping it in one chassis (or at least 8 blades – maybe you want to split them between two chassis, I’m not going to get into that level of detail here).

Low power memory chips aren’t available for the 16GB chips so the power usage jumps by 1.2kW/enclosure for 512GB/server vs 256GB/server. A small price to pay, really.

So, on to the point of my post – testing the limits of virtualization. When you’re running 32, 64, 128 or even 256GB of memory on a VM server that’s great, you really don’t have much to worry about. But step it up to 512GB of memory and you might just find yourself maxing out the capabilities of the hypervisor. In vSphere 4.1, for example, you are limited to only 512 vCPUs per server, or only 320 powered-on virtual machines. So it really depends on your memory requirements. If you’re able to achieve massive amounts of memory de-duplication (myself, I have not had much luck here with Linux, it doesn’t de-dupe well; Windows seems to dedupe a lot though), you may find yourself unable to fully use the memory on the system because you run out of the ability to fire up more VMs! I’m not going to cover other hypervisor technologies – they aren’t worth my time at this point – but like I mentioned I do have my eye on KVM for future use.

Keep in mind 320 VMs is only 6.6 VMs per CPU core on a 48-core server. That to me is not a whole lot for the workloads I have personally deployed in the past. Now of course everybody is different.
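Putting the 4.1 per-host limits next to a 48-core / 512GB blade makes the squeeze obvious (just the arithmetic, nothing more):

```python
# Which vSphere 4.1 per-host limit bites first on a 48-core / 512GB blade?
MAX_VMS, MAX_VCPUS = 320, 512
CORES, RAM_GB = 48, 512

print(f"VMs per core at the VM limit:  {MAX_VMS / CORES:.1f}")             # ~6.7
print(f"RAM per VM at the VM limit:    {RAM_GB * 1024 / MAX_VMS:.0f} MB")  # ~1.6GB
print(f"Average vCPUs per VM allowed:  {MAX_VCPUS / MAX_VMS:.1f}")
# To actually consume all 512GB within 320 VMs the average VM has to be ~1.6GB,
# so the VM-count limit tends to hit before the memory does.
```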

But it got me thinking. The Register has been touting off and on for the past several months, every time a new Xeon 7500-based system launches, ooh, they can get 1TB of RAM in the box. Or, in the case of the big new bad ass HP 8-way system, 2TB of RAM. Setting aside the fact that vSphere doesn’t go above 1TB: even if you go to 1TB, I bet in most cases you will run out of virtual CPUs before you run out of memory.

It was interesting to see the hypervisor technology really exploit hardware very well in the “early” years, and now we see the real possibility of hitting a scalability wall, at least as far as a single system is concerned. I have no doubt that VMware will address these scalability issues; it’s only a matter of time.

Are you concerned about running your servers with 512GB of RAM? After all, that is a lot of “eggs” in one basket (as one expert VMware consultant I know & respect put it). For me, at smaller scales, I am really not too concerned. I have been using HP hardware for a long time and on the enterprise end it really is pretty robust. I have the most concerns about memory failure, or memory errors. Fortunately HP has had Advanced ECC for a long time now (I think I remember even seeing it in the DL360 G2 back in ’03).

HP’s Advanced ECC spreads the error correcting over four different ECC chips, and it really does provide quite robust memory protection. When I was dealing with cheap crap white box servers the #1 problem BY FAR was memory, I can’t tell you how many memory sticks I had to replace it was sick. The systems just couldn’t handle errors (yes all the memory was ECC!).

By contrast, honestly I can’t even think of a time a enterprise HP server failed (e.g crashed) due to a memory problem. I recall many times the little amber status light come on and I log into the iLO and say, oh, memory errors on stick #2, so I go replace it. But no crash! There was a firmware bug in the HP DL585G1s I used to use that would cause them to crash if too many errors were encountered, but that was a bug that was fixed years ago, not a fault with the system design. I’m sure there have been other such bugs here and there, nothing is perfect.

Dell introduced their version of Advanced ECC about a year ago, but it doesn’t (or at least didn’t – maybe it does now) hold a candle to the HP stuff. The biggest issue with the Dell version of Advanced ECC was that if you enabled it, it disabled a bunch of your memory sockets! I could not get an answer out of Dell support at the time as to why it did that, so I left it disabled because I needed the memory capacity.

So combine Advanced ECC with ultra dense blades sporting 48 cores and 512GB of memory apiece and you’ve got yourself a serious compute resource pool.

Power/cooling issues aside (maybe if you’re lucky you can get into SuperNap down in Vegas), you can get up to 1,500 CPU cores and 16TB of memory in a single cabinet. That’s just nuts! WAY beyond what you’d expect to be able to support in a single VMware cluster (given that you’re limited to 3,000 powered-on VMs per cluster, the density would be only 2 VMs/core and about 5GB/VM!).

And if you manage to get a 47U rack, well, you can put one of those c3000 chassis in the rack on top of the four c7000s and get another 2TB of memory and 192 cores. We’re talking power kicking up into the 27kW range in a single rack! Like I said, you need SuperNap or the like!

Think about that for a minute: 1,500 CPU cores and 16TB of memory in a single rack. Multiply that by, say, 10 racks: 15,000 CPU cores and 160TB of memory. How many tens of thousands of physical servers could be consolidated into that? A conservative number may be 7 VMs/core – you’re talking 105,000 physical servers consolidated into ten racks. Well, excluding storage of course. Think about that! Insane! I mean, that’s consolidating multiple data centers into a high density closet! That’s taking tens to hundreds of megawatts of power off the grid and consolidating it into a measly 250 kW.
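The rack-level math behind those (rounded) numbers, for anyone who wants to check my arithmetic:

```python
# Per-rack and 10-rack consolidation math for c7000 chassis full of BL685c G7 blades.
CORES_PER_BLADE, RAM_PER_BLADE_GB = 48, 512
BLADES_PER_C7000, C7000_PER_RACK = 8, 4

cores_per_rack = CORES_PER_BLADE * BLADES_PER_C7000 * C7000_PER_RACK           # 1,536
ram_per_rack_tb = RAM_PER_BLADE_GB * BLADES_PER_C7000 * C7000_PER_RACK / 1024  # 16

racks, vms_per_core = 10, 7   # the conservative consolidation figure used above
print(f"Per rack: ~{cores_per_rack:,} cores, {ram_per_rack_tb:.0f}TB RAM")
print(f"{racks} racks at {vms_per_core} VMs/core: "
      f"~{cores_per_rack * racks * vms_per_core:,} physical servers consolidated")

# And the vSphere cluster limit check: 3,000 powered-on VMs spread over one rack
print(f"3,000 VMs over one rack = {3000 / cores_per_rack:.1f} VMs/core, "
      f"{ram_per_rack_tb * 1024 / 3000:.1f}GB/VM")
```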

I built out what was, to me, some pretty beefy server infrastructure back in 2005, as part of roughly a $7 million project. Part of it included roughly 300 servers in roughly 28 racks, with 336kW of power provisioned for those servers.

Think about that for a minute. And re-read the previous paragraph.

Because of this trend, I have thought for quite a while that there just won’t be as many traditional network guys or server guys around going forward. When you can consolidate that much crap into that small of a space, it’s just astonishing.

One reason I really do like the Opteron 6100 is the CPU cores – just raw cores. And they are pretty fast cores too. The more cores you have, the more things the hypervisor can do at the same time, and there is no possibility of contention like there is with hyperthreading. CPU processing capacity has gotten to a point, I believe, where raw CPU performance matters much less than getting more cores in the boxes. More cores means more consolidation. After all, industry utilization rates for CPUs are typically sub 30%, though in my experience it’s typically sub 10%, and a lot of times sub 5%. My own server sits at less than 1% CPU usage.

Now, fast raw speed is still important in some applications of course. I’m not one to promote the usage of a 100-core CPU with each core running at 100MHz (10GHz total); there is a balance that has to be achieved, and I really do believe the Opteron 6100 has achieved that balance. I look forward to the 6200 (socket compatible, 16 cores). Ask anyone that has known me this past decade – I have not been AMD’s strongest supporter for a very long period of time. But I see the light now.

September 7, 2010

vSphere VAAI only in the Enterprise

Filed under: Storage, Virtualization — Nate @ 7:04 pm

Beam me up!

Damn those folks at VMware..

Anyways, I was browsing around this afternoon looking at things and, while I suppose I shouldn’t have been, I was surprised to see that the new VAAI storage APIs are only available to people running Enterprise or Enterprise Plus licensing.

I think at least the block-level hardware assisted locking for VMFS should be available to all versions of vSphere – after all, VMware is offloading the work to a 3rd party product!

VAAI certainly looks like it offers some really useful capabilities. From the documentation on the 3PAR VAAI plugin (which is free), here are the highlights:

  • Hardware Assisted Locking is a new VMware vSphere storage feature designed to significantly reduce impediments to VM reliability and performance by locking storage at the block level instead of the logical unit number (LUN) level, which dramatically reduces SCSI reservation contentions. This new capability enables greater VM scalability without compromising performance or reliability. In addition, with the 3PAR Gen3 ASIC, metadata comparisons are executed in silicon, further improving performance in the largest, most demanding VMware vSphere and desktop virtualization environments.
  • The 3PAR Plug-In for VAAI works with the new VMware vSphere Block Zero feature to offload large, block-level write operations of zeros from virtual servers to the InServ array, boosting efficiency during several common VMware vSphere operations— including provisioning VMs from Templates and allocating new file blocks for thin provisioned virtual disks. Adding further efficiency benefits, the 3PAR Gen3 ASIC with built-in zero-detection capability prevents the bulk zero writes from ever being written to disk, so no actual space is allocated. As a result, with the 3PAR Plug-In for VAAI and the 3PAR Gen3 ASIC, these repetitive write operations now have “zero cost” to valuable server, storage, and network resources—enabling organizations to increase both VM density and performance.
  • The 3PAR Plug-In for VAAI adds support for the new VMware vSphere Full Copy feature to dramatically improve the agility of enterprise and cloud datacenters by enabling rapid VM deployment, expedited cloning, and faster Storage vMotion operations. These administrative tasks are now performed in half the time. The 3PAR plug-in not only leverages the built-in performance and efficiency advantages of the InServ platform, but also frees up critical physical server and network resources. With the use of 3PAR Thin Persistence and the 3PAR Gen3 ASIC to remove duplicated zeroed data, data copies become more efficient as well.

Cool stuff. I’ll tell you what. I really never had all that much interest in storage until I started using 3PAR about 3 and a half years ago. I mean I’ve spread my skills pretty broadly over the past decade, and I only have so much time to do stuff.

About five years ago some co-workers tried to get me excited about NetApp, though for some reason I never could get too excited about their stuff. Sure, it has tons of features, which is nice, though the core architectural limitations of the platform (from a spinning rust perspective at least) are, I guess, what kept me away from them for the most part. If you really like NetApp, put a V-series in front of a 3PAR and watch it scream. I know of a few 3PAR/NetApp users that are outright refusing to entertain the option of running NetApp storage on the back end – they like the NAS, so they keep the V-series, but the NetApp back end doesn’t perform.

On the topic of VMFS locking – I keep seeing folks pimping the NFS route attack VMFS locking as if there were no locking in NFS with vSphere. I’m sure that prior to block-level locking, NFS file-level locking (assuming it is file level) was more efficient than LUN level. Though to be honest, I’ve never encountered issues with SCSI reservations in the past few years I’ve been using VMFS – probably because of how I use it; I don’t do a lot of activities that trigger reservations short of writing data.

Another graphic I thought was kind of funny is the current Gartner group “magic quadrant” – someone posted a link to it for VMware in a somewhat recent post. Myself, I don’t rely on Gartner, but I did find the lopsidedness of the situation for VMware quite amusing.

I’ve been using VMware since before 1.0, I still have my VMware 1.0.2 CD for Linux. I deployed VMware GSX to production for an e-commerce site in 2004, I’ve been using it for a while, I didn’t start using ESX until 3.0 came out(from what I’ve read about the capabiltiies of previous versions I’m kinda glad I skipped them 🙂 ). It’s got to be the most solid piece of software I’ve ever used, besides Oracle I suppose. I mean I really, honestly can not remember it ever crashing. I’m sure it has, but it’s been so rare that I have no memory of it. It’s not flawless by any means, but it’s solid. And VMware has done a lot to build up my loyalty to them over the past, what is it now eleven years? Like most everyone else at the time, I had no idea that we’d be doing the stuff with virtualization today that we are back then.

I’ve kept my eyes on other hypervisors as they come around, though even now none of the rest look very compelling. About two and a half years ago my new boss at the time was wanting to cut costs, and was trying to pressure me into trying the “free” Xen that came with CentOS at the time. He figured a hypervisor is a hypervisor. Well it’s not. I refused. Eventually I left the company and my two esteemed colleges were forced into trying it after I left(hey Dave and Tycen!) they worked on it for a month before giving up and going back to VMware. What a waste of time..

I remember Tycen at about the same time being pretty excited about Hyper-V. Well, at a position he recently held he got to see Hyper-V in all its glory, and he was happy to get out of that position and not have to use Hyper-V anymore.

I do think KVM has a chance, though it’s too early to use it for anything too serious at this point – I’m sure that’s not stopping tons of people from doing it anyways, just like it didn’t stop me from running production on GSX way back when. But I suspect that by the time vSphere 5.0 comes out, which I’m just guessing here will be in the 2012 time frame, KVM as a hypervisor will be solid enough to use in a serious capacity. VMware will of course have a massive edge on management tools and fancy add-ons, but not everyone needs all that stuff (me included). I’m perfectly happy with just vSphere and vCenter (and I’d be even happier if there was a Linux version of course).

I can’t help but laugh at the grand claims Red Hat is making for KVM scalability though. Sorry I just don’t buy that the Linux kernel itself can reach such heights and be solid & scalable, yet alone a hypervisor running on top of Linux (and before anyone asks, NO ESX does NOT run on Linux).

I love Linux, I use it every day on my servers and my desktops and laptops, have been for more than a decade. Despite all the defectors to the Mac platform I still use Linux 🙂 (I actually honestly tried a MacBook Pro for a couple weeks recently and just couldn’t get it to a usable state).

Just because the system boots with X number of CPUs and X amount of memory doesn’t mean it’s going to be able to effectively scale to use it right. I’m sure Linux will get there some day, but I believe it is a ways off.

March 10, 2010

Save 50% off vSphere essentials for the next 90 days

Filed under: Virtualization — Nate @ 3:00 pm

Came across this today, which mentions you can save about 50% when licensing vSphere Essentials for the next ~90 days. As you may know, Essentials is a really cheap way to get your vSphere hosts managed by vCenter. For your average dual socket 16-blade system, as an example, it is 91% cheaper (a savings of ~$26,000) than going with vSphere Standard edition. Note that the vCenter included with Essentials needs to be thrown away if you’re managing more than three hosts with it; you’ll still need to buy vCenter Standard (regardless of what version of vSphere you buy).

August 25, 2009

Cheap vSphere installation managable by vCenter

Filed under: Virtualization — Nate @ 4:53 pm

UPDATED – I don’t mean to turn this into a Vmware blog or a storage blog as those have been almost all of my posts so far, but as someone who works for a company that hasn’t yet invested too much in vmware(was hard enough to convince them to buy any VM solution management wanted the free stuff), I wanted to point out that you can get the “basics” of vSphere in the vSphere essentials pack, what used to cost about $3k now is about $999, and support is optional. Not only that but at least in my testing a system running a “essentials” license is fully able to connect and be managed by a vCenter “standard” edition system.

I just wanted to point it out because when I proposed this scenario a month or so ago to my VAR, they wanted to call VMware to see if there were any gotchas. The initial VMware rep we talked to couldn’t find anything that said you could or could not do this specifically, but didn’t believe there was anything in the product that would block you from managing an “essentials” vSphere host with a “Standard” vCenter server. He spent what seemed like a week trying to track down a real answer but never got back to us. Then we called in again and got another person who said something similar: he couldn’t find anything that would prevent it, but apparently it’s not something that has been proposed too widely before. The quote I got from the VAR, who was still confused, had a note saying that you could not do what I wanted to do – but it does work. Yes, we basically throw out the “free” vCenter “foundation” edition, but it’s still a lot cheaper than going with vSphere Standard:

vSphere Essentials 6 CPUs = Year 1 – $999 with 1 year subscription, support on per incident basis

vSphere Standard 6 CPUs = Year 1 – $6,408 with 1 year subscription, and gold support

Unless you expect to file a lot of support requests that is.
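Quick per-socket math on those two quotes (not strictly apples to apples, since the Essentials price includes only per-incident support while the Standard quote includes gold support):

```python
# Per-socket year-1 cost implied by the two 6-CPU quotes above.
essentials_6cpu = 999     # 1 year subscription, per-incident support
standard_6cpu = 6408      # 1 year subscription + gold support

print(f"Essentials: ${essentials_6cpu / 6:,.2f}/socket")
print(f"Standard:   ${standard_6cpu / 6:,.2f}/socket")
print(f"Standard costs about {standard_6cpu / essentials_6cpu:.1f}x more per socket")
```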

It is true that you get a few extra things with vSphere Standard over Essentials, such as thin provisioning and high availability. In my case thin provisioning is built into the storage, so I don’t need that. And high availability isn’t that important either, as for most things we have more than one VM running the app and load balance using real load balancers for fault tolerance (there are exceptions like DB servers etc).

Something that is kind of interesting is that the “free” vSphere supports thin provisioning – I have 11 hosts running that version with local storage at remote sites. Odd that they throw that in with the free license but not with Essentials!

The main reason for going this route, to me at least, is that you can have a real vCenter server with your systems managed by it, have read-write access to the remote APIs, and of course have the option of running the full, hefty ESX instead of the “thin” ESXi. Myself, I prefer the big service console; I know it’s going away at some point, but I’ll use it while it’s there. I have plenty of memory to spare. A good chunk of my production ESX infrastructure is older re-purposed HP DL585 G1s with 64GB of memory – they are quad processor, dual core, which makes this licensing option even more attractive for them.

My next goal is to upgrade the infrastructure to HP c-Class blades with either 6-core Opterons or perhaps 12-core when they are out (assuming availability for 2-socket systems), 64GB of memory (the latest HP Istanbul blades have 16 memory slots), 10GbE VirtualConnect and 4Gbps Fiber VirtualConnect, and to upgrade to vSphere Advanced. That’ll be sometime in 2010 though. There’s no software “upgrade” path from Essentials to Advanced, so I’ll just re-purpose Essentials to other systems; I have at least 46 sockets in servers running the “free” license as is.

(I still remember how happy I was to pay the $3,500 two-socket fee a couple of years ago for ESX “Standard” edition; now it’s about 90% less on a per-socket basis for the same abilities.)

UPDATE – I haven’t done extensive testing yet but during my quick tests before a more recent entry that I posted I wanted to check to see if Essentials could/would boot a VM that was thin provisioned. Since I used storage vMotion to move some VMs over, that would be annoying if it could not. And it just so happens that I already have a VM running on my one Essentials ESX host that is thin provisioned! So it appears the license just limits you on the creation of thinly provisioned virtual disks, not the usage of them, which makes sense. It would be an Oracle-like tactic to do the former. And yes I did power off the VM and power it back on today to verify.  But that’s not all – I noticed what seems to be a loop hole in vSphere’s licensing, I mention above that vSphere Essentials does not support thin provisioning, as you can see here in their pricing PDF(and there is no mention of the option in the License configuration page on the host). When I create VMs I always use the Custom option, rather than use the Typical configuration. Anyways I found out that if you use Typical when creating a VM with the Essentials license you CAN USE THIN PROVISIONING. I created the disk, enabled the option, and even started the VM (didn’t go beyond that). If you use Custom the Thin Provisioning option flat out isn’t even offered. I wasn’t expecting the VM to be able to power on. I recall testing another unrelated but still premium option, forgot which one, and when I tried to either save the configuration or power up the VM the system stopped me saying the license didn’t permit that.

August 19, 2009

Does size matter?

Filed under: Storage, Virtualization — Nate @ 10:30 am

UPDATED – I’ve been a fan of VMware for what seems like more than a decade, still have my VMware 1.0.2 for Linux CD even. I just wanted to dispel a myth that ESXi has a small disk footprint. On VMware’s own site they mention the footprint being 32MB. I believe I saw another number in the ~75MB range or something at a vSphere launch event I attended a few months ago.

Not that it’s a big deal to me but it annoys me when companies spout bullshit like that. I just wanted to dispel the myth that ESXi has a small disk foot print. My storage array has thin provisoning technology and dedicates data in 16kB increments as it is written. So I can get a clear view on how big ESXi actually is.

And the number is: ~900 megabytes for ESXi v4. I confronted a VMware rep on this number at that event I mentioned earlier and he brushed me off, saying the extra space was other required components, not just the hypervisor. In the link above they compare against MS Hyper-V – they take MS’s “full stack” and compare it to their “hypervisor” (which by itself is unusable, you need those other required components) – hence my claim that their number is complete and total bullshit.

This is significantly smaller than the full ESX, which from the range of systems I have installed uses between 3-5 gigabytes. When I was setting up the network installer for ESX I believe it required at least 25GB for vSphere, which is slightly more than ESX 3.5. Again, with the array technology, despite me allocating 25GB worth of space to the volume, vSphere has only written between 3-5GB of it, so that is all that is used. In both cases I get accurate representations of how much real space each system requires.

ESXi v3.5 was unable to boot directly from SAN, so I can’t tell with the same level of accuracy how big it is (“df” says about 200MB), but I can say that our ESXi v3.5 systems are installed on 1GB USB sticks, and the image I decompressed onto those USB sticks is 750MB (VMware-VMvisor-big-3.5.0_Update_4-153875.i386.dd). Regardless, it’s FAR from 32MB or even 75MB – at best it’s 10x larger than what they claim.

So let this one rest, VMware: give it up, stop telling people ESXi has such a tiny disk footprint, because it’s NOT TRUE.

You can pry VMware from my cold dead hands, but I still want to dispel this myth about ESXi’s size.

UPDATED – I went back to my storage array again and found something that didn’t make sense (it’s pretty heavily virtualized itself), but after consulting with the vendor it turns out the volume is in fact 900MB of written space, rather than the 1.5GB that I originally posted. If you really want to know, I could share the details, but I don’t think that’s too important, and without knowing the terminology of their technology it wouldn’t make much sense to anyone anyways!

The first comment I got (thanks!) mentions a significant difference in size between the embedded version of ESXi and the installable (what I’m using). This could be where the confusion lies – I have not used any systems with the embedded ESXi yet (my company is mostly a Dell shop, and they charge a significant premium for the embedded ESXi and force you onto a high end support contract, so we decided to install it ourselves for free).

