TechOpsGuys.com Diggin' technology every day

March 30, 2010

Tough choice

Filed under: News — Nate @ 3:16 pm

I thought the Intel Xeon 7500 (Nehalem-EX) processors were supposed to launch today, and I was confused earlier when I didn't see any news on it. Well, The Register has finally posted some news.

I will keep it short and sweet this time:

  • Intel Nehalem-EX L7555 1.86GHz 95W 8-core processor price is $3,157 (in 1,000-unit quantities)
  • AMD Opteron 6174 2.2GHz 80W 12-core processor price is $1,165 (in 1,000-unit quantities)

Tough choice indeed.
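For a rough sense of what those list prices mean per core, here's a quick back-of-the-envelope sketch. It uses only the two list prices above and deliberately ignores per-core performance, platform cost, and street pricing:

# Back-of-the-envelope cost per core from the 1,000-unit list prices above.
# This ignores per-core performance differences and platform costs entirely.
chips = {
    "Intel Xeon L7555 (8 cores, 1.86GHz, 95W)": (3157.00, 8),
    "AMD Opteron 6174 (12 cores, 2.2GHz, 80W)": (1165.00, 12),
}

for name, (price, cores) in chips.items():
    print(f"{name}: ${price / cores:,.2f} per core")
# Roughly $395/core for the Xeon vs roughly $97/core for the Opteron.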

If you want more than four sockets you will likely not be able to use an off-the-shelf Opteron 6100 system (you may need to build your own chipset), but AMD's justification for this decision seems very reasonable:

That 100 GB/sec of memory bandwidth in the forthcoming four-socket Opteron 6100 machine is one of the reasons why AMD decided not to do eight-socket configurations with these chips. “It is a lot of development work for a not very large market,” says Fruehe, who reckons that there are only about 1,800 x64-based eight-socket servers sold worldwide each quarter and that the number is dwindling as four-socket boxes get more powerful.

“Intel is raving about having 15 different designs for its upcoming Nehalem EX machines, but how many is each vendor going to get out of those 1,800 units?” Fruehe says that the 8P boxes account for less than two-tenths of a percent of current shipments each quarter, and that while 4P boxes are only accounting for around 4 per cent, “even though it is a small space, AMD needs to be there.”

March 29, 2010

Opteron 6100s are here

Filed under: News,Virtualization — Nate @ 8:28 am

UPDATED I've been waiting for this for quite some time, and the 12-core AMD Opteron 6100s have finally arrived. AMD did the right thing this time by not waiting to develop a "true" 12-core chip and instead bolting a pair of dies together into a single package. You may recall AMD lambasted Intel when it released its first four-core CPUs a few years ago (composed of a pair of two-core chips bolted together), a strategy that paid off well for Intel; AMD's market share was hurt badly as a result, a painful lesson which they learned from.

I'd of course rather have a "true" 12-core processor, but I'm very happy to make do with these Opteron 6100s in the meantime; I don't want to have to wait another 2-3 years to get 12 cores in a socket.

Some highlights of the processor:

  • Clock speeds ranging from 1.7GHz (65W) to 2.2GHz (80W), with a turbo boost 2.3GHz model coming in at 105W
  • Prices ranging from $744 to $1,396 in 1,000-unit quantities
  • Twelve-core and eight-core models, with 512KB of L2 cache per core and 12MB of shared L3 cache
  • Quad-Channel LV & U/RDDR3, ECC, support for on-line spare memory
  • Supports up to 3 DIMMs/channel, for up to 12 DIMMs per CPU (see the quick capacity sketch after this list)
  • Quad 16-bit HyperTransport™ 3 technology (HT3) links, up to 6.4 GT/s per link (more than triple HT1 performance)
  • AMD SR56x0 chipset with I/O Virtualization and PCIe® 2.0
  • Socket compatibility with the planned AMD Opteron™ 6200 Series processors (16 cores?)
  • New advanced idle states allowing the processor to idle at lower power than the previous six-core parts (AMD seems to have long had the lead in idle power conservation).
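Here is a quick sketch of what those memory figures work out to. The DIMM sizes are just illustrative assumptions on my part, not anything from AMD's spec sheet:

# DIMM slot math from the bullet list above: 4 memory channels per socket,
# up to 3 DIMMs per channel. DIMM sizes below are illustrative assumptions.
channels_per_socket = 4
dimms_per_channel = 3
dimms_per_socket = channels_per_socket * dimms_per_channel  # 12 slots per CPU

for dimm_gb in (4, 8):
    for sockets in (1, 2, 4):
        slots = dimms_per_socket * sockets
        print(f"{sockets}-socket box with {dimm_gb}GB DIMMs: "
              f"{slots} slots, up to {slots * dimm_gb}GB of RAM")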


The new I/O virtualization looks quite nice as well – AMD-V 2.0, from their site:

Hardware features that enhance virtualization:

  • Unmatched Memory Bandwidth and Scalability – Direct Connect Architecture 2.0 supports a larger number of cores and memory channels so you can configure robust virtual machines, allowing your virtual servers to run as close as possible to physical servers.
  • Greater I/O virtualization efficiencies – I/O virtualization helps increase I/O efficiency by supporting direct device assignment, while improved address translation helps reduce the level of hypervisor intervention.
  • Improved virtual machine integrity and security – better isolation of virtual machines through I/O virtualization helps increase the integrity and security of each VM instance.
  • Efficient Power Management – AMD-P technology is a suite of power management features that are designed to drive lower power consumption without compromising performance.
  • Hardware-assisted Virtualization – AMD-V technology to enhance and accelerate software-based virtualization so you can run more virtual machines, support more users and transactions per virtual machine with less overhead. This includes Rapid Virtualization Indexing (RVI) to help accelerate the performance of many virtualized applications by enabling hardware-based VM memory management. AMD-V technology is supported by leading providers of hypervisor and virtualization software, including Citrix, Microsoft, Red Hat, and VMware.
  • Extended Migration – a hardware feature that helps virtualization software enable live migration of virtual machines between all available AMD Opteron™ processor generations.

I'm happy to see AMD return to the chipset design business as well; I was never comfortable with Nvidia as a server chipset maker.

The Register has a pair of great articles on the launch as well, though with the main one I was kind of annoyed that I had to scroll past so much Xeon news; I don't think they needed to recap it in such detail in an article about the Opterons, but oh well.

I thought this was an interesting note on the recent Intel announcement of integrated silicon for encryption –

While Intel was talking up the fact that it had embedded cryptographic instructions in the new Xeon 5600s to implement the Advanced Encryption Standard (AES) algorithm for encrypting and decrypting data, Opterons have had this feature since the quad-core “Barcelona” Opterons came out in late 2007, er, early 2008.

And as for performance –

Generally speaking, bin for bin, the twelve-core Magny-Cours chips provide about 88 per cent more integer performance and 119 per cent more floating point performance than the six-core “Istanbul” Opteron 2400 and 8400 chips they replace..

AMD seems geared towards reducing costs and prices as well with –

The Opteron 6100s will compete with the high-end of the Xeon 5600s in the 2P space and also take the fight on up to the 4P space. But, AMD’s chipsets and the chips themselves are really all the same. It is really a game of packaging some components in the stack up in different ways to target different markets.

Sounds like a great way to keep costs down by limiting the amount of development required to support the various configurations.

AMD themselves also blogged on the topic with some interesting tidbits of information –

You’re probably wondering why we wouldn’t put our highest speed processor up in this comparison. It’s because we realize that while performance is important, it is not the most important factor in server decisions.  In most cases, we believe price and power consumption play a far larger role.

[..]

Power consumption – Note that to get to the performance levels that our competitor has, they had to utilize a 130W processor that is not targeted at the mainstream server market, but is more likely to be used in workstations. Intel isn’t forthcoming on their power numbers so we don’t really have a good measurement of their maximum power, but their 130W TDP part is being beaten in performance by our 80W ACP part.  It feels like the power efficiency is clearly in our court.  The fact that we have doubled cores and stayed in the same power/thermal range compared to our previous generation is a testament to our power efficiency.

Price – This is an area that I don’t understand.  Coming out of one of the worst economic times in recent history, why Intel pushed up the top Xeon X series price from $1386 to $1663 is beyond me.  Customers are looking for more, not less for their IT dollar.  In the comparison above, while they still can’t match our performance, they really fall short in pricing.  At $1663 versus our $1165, their customers are paying 42% more money for the luxury of purchasing a slower processor. This makes no sense.  Shouldn’t we all be offering customers more for their money, not less?

In addition to our aggressive 2P pricing, we have also stripped away the “4P tax.” No longer do customers have to pay a premium to buy a processor capable of scaling up to 4 CPUs in a single platform.  As of today, the 4P tax is effectively $0. Well, of course, that depends on you making the right processor choice, as I am fairly sure that our competitor will still want to charge you a premium for that feature.  I recommend you don’t pay it.

As a matter of fact, a customer will probably find that a 4P server, with 32 total cores (4 x 8-core) based on our new pricing, will not only perform better than our competitor’s highest end 2P system, but it will also do it for a lower price. Suddenly, it is 4P for the masses!

While I am mainly interested in their 12-core chips, I also see significant value in the 8-core parts: being able to replace a pair of 4-core chips with a single-socket 8-core system is very appealing in certain situations. There is a decent premium on motherboards that need to support more than one socket, so being able to get 8 (and maybe even 12) cores in a single-socket system is just outstanding.

I also found this interesting –

Each one is capable of 105.6 Gigaflops (12 cores x 4 32-bit FPU instructions x 2.2GHz).  And that score is for the 2.2GHz model, which isn’t even the fastest one!

I still have a poster up on one of my walls, from the 1995-1996 era, on the world's first teraflop machine, which was –

The one-teraflops demonstration was achieved using 7,264 Pentium Pro processors in 57 cabinets.

With the same number of these new Opterons you could get about three quarters of the way to a petaflop.
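The arithmetic behind that, for anyone who wants to check it against the two quotes above:

# Peak FLOPS per chip from the quote above: 12 cores x 4 FP ops/cycle x 2.2GHz.
gflops_per_chip = 12 * 4 * 2.2              # 105.6 GFLOPS
pentium_pro_count = 7264                    # CPU count from the teraflop machine

total_gflops = gflops_per_chip * pentium_pro_count
print(f"{total_gflops:,.0f} GFLOPS = {total_gflops / 1e6:.2f} PFLOPS")
# ~767,078 GFLOPS, i.e. roughly three quarters of a petaflop.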

SGI is raising the bar as well –

This means as many as 2,208 cores in a single rack of our Rackable™ rackmount servers. And in the SGI ICE Cube modular data center, our containerized data center environment, you can now scale within a single container to 41,760 cores! Of course, density is only part of the picture. There’s as much to be excited about when it comes to power efficiency and the memory performance of SGI servers using AMD Opteron 6100 Series processor technology

Other systems announced today include:

  • HP DL165G7
  • HP SL165z G7
  • HP DL385 G7
  • Cray XT6 supercomputer
  • There is mention of a Dell R815, though it doesn't seem to be officially announced yet. The R815 specs seem kind of underwhelming in the memory department, as it only supports 32 DIMMs (the HP systems above support the full 12 DIMMs/socket). It is only 2U, however. Sun has had 2U quad-socket Opteron systems with 32 DIMMs for a couple of years now in the form of the X4440; strange that Dell did not step up and max out the system with 48 DIMMs.

I can't put into words how happy and proud I am of AMD for this launch; not only is it an impressive technological achievement, but the fact that they managed to pull it off on schedule is just amazing.

Congratulations AMD!!!

March 28, 2010

Vulnerable Smart Grid

Filed under: News,Security — Nate @ 9:27 am

As some of you who know me may know, I have been against the whole concept of a "smart grid" for a few years now. The main reason is security. The more intelligence you put into something, especially with computer technology, the more complex it becomes, and the more complex it becomes, the harder it is to protect.

Well, it seems the mainstream media has picked up on this with an article from the AP:

SAN FRANCISCO – Computer-security researchers say new “smart” meters that are designed to help deliver electricity more efficiently also have flaws that could let hackers tamper with the power grid in previously impossible ways.

Kind of reminds me of the RFID-based identification schemes that have come online in the past few years, which are just as prone to security issues. In the case of the smart grid, my understanding is that the goal is to improve energy efficiency by allowing the power company to intelligently inform downstream customers of power conditions, so that things like heavy appliances can be proactively turned off in the event of a surge in usage to prevent brownouts and blackouts.

Sounds nice in theory, like many things, but as someone who has worked with technology for about 20 years now I see the quality of the stuff that comes out of companies, and I just have no confidence that such technology can be made "secure" at the same time it is made "cost effective". At least not at our current level of technological sophistication; from an evolutionary standpoint "technology" is still a baby, we're still figuring stuff out, it's brand new. I don't mean to knock any company or organization in particular, they are not directly at fault; I just don't believe technology in general is ready for such a role, not in a society such as ours.

Today in many cases you can’t get a proper education in modern technology because the industries are moving too fast for the schools to keep up. Don’t get me started on organizations like OLPC and others trying to pitch laptop computers to schools in an attempt to make education better.

If you want to be green, in my opinion, get rid of the coal-fired power plants. It's the 21st century and we still have coal generating roughly half (or more) of our electricity? Hasn't anyone played Sim City?

Of course this concept doesn't just apply to the smart grid; it applies to everything as our civilization tries to put technology to work to improve our lives. Whether it's WiFi, RFID, or online banking, all of these (and many others) expose us to significant security threats when not deployed properly, and in my experience the implementations that are not secure outnumber the ones that are by probably 1000:1. So we have a very real trend of this in action (technology being deployed and then actively exploited). I'm sure you agree that our power grid is a fairly important resource; it was declared the most important engineering achievement of the 20th century.

While I don't believe it is possible yet, we are moving down the road where scenes like those portrayed in the movie Eagle Eye (saw it recently, so it's on my mind) will be achievable, especially now that many nations have spun up formal hacker teams to fight future cyber wars, and you have to admit we are a pretty tempting target.

There will be a very real cost to this continued penetration of technology into our lives. In the end I think the cost will be too high, but time will tell I guess.

You could say I long for the earlier days of technology, when for the most part security "threats" were just people who wanted to poke around in systems, or compromise a host to "share" its bandwidth and disk space for hosting pirated software. Rarely was there any real malice behind it; that's not true anymore.

And for those who are wondering – the answer is no. I have never, ever had a wireless access point hooked to my home network, and I do my online banking from Linux.

March 26, 2010

Enterprise EqualLogic

Filed under: Storage — Nate @ 6:33 am

So, I attended that Dell/Denali event I mentioned recently. They covered some interesting internals on the architecture of Exchange 2010, including technical topics like migrating to it, how it protects data, etc. It was interesting from that standpoint; they didn't just come out and say "Hey, we are the big market leader, you will use us, resistance is futile," and I appreciated that. Honestly though, I don't really deal with MS stuff in my line of work; I was mostly there for the food, because it was walking distance, and for an excuse to get out of the office.

The other topic that was heavily covered was Dell EqualLogic storage, which I was more interested in. I have known about EqualLogic for years and never really liked their iSCSI-only approach (I like iSCSI, but I don't like single-protocol arrays, and iSCSI is especially limiting as far as extending array functionality with other appliances goes – e.g. you can optionally extend a Fibre Channel-only array with iSCSI but not vice versa; please correct me if I'm wrong).

I came across another blog entry last year which I found extremely informative – "Three Years of EqualLogic" – which listed some great pros and some serious, legitimate cons to the system after nearly three years of using it.

Anyways, to be brutally honest, if there is anything I really did "take away" from the conference with regard to EqualLogic storage it is this – I'm glad I chose 3PAR for my storage needs (and thanks to my original 3PAR sales rep for making that cold call to me many years ago; I knew him from an earlier company).

So, where to begin? I've had a night to sleep on this information and absorb it in a more logical way, so I'll start with what I think are the pros of the EqualLogic platform:

  • Low cost – I haven’t priced it personally but people say over and over it’s low cost, which is important
  • Easy to use – It certainly looks very easy to use and very easy to set up; I'm sure they could get 20TB of EqualLogic storage up and running in less time than 3PAR could, no doubt.
  • Virtualized storage makes it flexible. It pales in comparison to 3PAR virtualization but it’s much better than legacy storage in any case.
  • All software is included – this is great too, no wild cards with licensing. 3PAR by contrast licenses their software heavily, and at times it can get complicated (their decision to license the zero-detection abilities of their new F/T-class arrays was a surprise to me).

So it certainly looks fine for low(ish)-cost workgroup storage. One of the things the Dell presenter tried to hammer on is how it is "Enterprise ready," and yes, I agree it is ready in the sense that lots of enterprises use workgroup storage for some situations (probably because their real legacy enterprise storage is too expensive to add more applications to, or doesn't scale to handle mixed workloads simultaneously).

Here’s where I get down & dirty.

As far as being really ready for enterprise storage – no way, it's not ready, not in 2010; maybe if it were 1999.

EqualLogic has several critical architectural deficiencies that would prevent me from wanting to use it or advising others to use it:

  • Active/passive controller design – I mean come on, in 2010 you're still doing active/passive? They tried to argue that you don't need to "worry" about balancing the load between controllers and then losing that performance when a controller fails. Thanks, but I'll take the extra performance from the other active controller(s) [with automagic load balancing, no worrying required], and keep performance high with 3PAR Persistent Cache in the event of a controller failure (or software/hardware upgrade/change).
  • Need to reserve space for volumes/snapshots. Hello, 21st century here – we have the technology for reservationless systems, and ditching reservations is especially critical when dealing with thin provisioning.
  • Lack of storage pools. This compounds the effects of a reservation-based storage system. Maybe EqualLogic has storage pools and I just did not hear it mentioned at the conference or anywhere else, but having to reserve space for each and every volume is just stupidly inefficient. At the very least you should be able to reserve a common pool of space and point multiple volumes at it to share. Again this hints at the lack of a completely virtualized design; you get the sense that a lot of these concepts were bolted on after the fact rather than designed into the system when you run into limitations like this.
  • No global hot spares – so the more shelves you have, the more spindles sit there idle, doing nothing. 3PAR by contrast does not use dedicated spares; each and every disk in the system has spare capacity on it. When a RAID failure occurs the rebuild is many:many instead of many:one, which improves rebuild times by 10x+ (a rough sketch after this list illustrates why). Also due to this design, 3PAR can take advantage of the I/O available on every disk in the array. There aren't even dedicated parity disks; parity is distributed evenly across all drives in the system.
  • Narrow striping. They were talking about how the system distributes volumes over all of the disks in the system, so I asked them how far you can stripe, say, a 2TB volume. They said over all of the shelves if you wanted to, but there is overhead from iSCSI because apparently you need an iSCSI session to each system that is hosting data for the volume; due to this overhead they don't see people "wide striping" a single volume over more than a few shelves. 3PAR by contrast stripes across every drive in the system by default, and the volume is accessible from any controller (up to 8 in their high end) transparently. Data moves over an extremely high speed backplane to the controller that is responsible for those blocks. In fact the system is so distributed that it is impossible to know where your data actually is (e.g. "data resides on controller 1, so I'll send my request to controller 1"), and the system is so fast that you don't need to worry about such things anyway.
  • Cannot easily sustain the failure of a whole shelf of storage. I asked the Dell rep sitting next to me if it was possible; he said it was, but you had to have a special sort of setup, and it didn't sound like it would be transparent to the host – perhaps involving synchronous replication from one array to another, where in the event of a failure you probably have to re-point your systems to the backup. I don't know, but my point is I have been spoiled by 3PAR: by default their system uses what they call cage level availability, which means data is automatically spread out over the system to ensure the failure of a shelf does not impact system availability. This requires no planning in advance, unlike other storage systems – it is automatic. You can turn it off if you want, as there are limitations on what RAID levels you can use depending on the number of shelves you have (e.g. you cannot run RAID 5 with cage level availability with only 2 shelves because you need at least 3), and the system will prevent you from making mistakes.
  • One RAID level per array (enclosure), from what the Dell rep sitting next to me said. Apparently even on their high end 48-drive arrays you can only run a single RAID level across all of the disks? That seems very limiting for an array with such grand virtualization claims. 3PAR of course doesn't limit you in this manner; you can run multiple RAID levels in the same enclosure, and you can even run multiple RAID levels on the same DISK, it is that virtualized.
  • Inefficient scale-out – while scale-out is probably linear, the overhead of so many iSCSI sessions across so many arrays has to carry some penalty. Ideally I'd like to see at least some sort of optional InfiniBand connectivity between the controllers to give them higher bandwidth and lower latency, and then do what 3PAR does – traffic can come in on any port and be routed to the appropriate active controller automatically. But their tiny controllers probably don't have the horsepower to do that anyway.
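To illustrate the many:many rebuild point from the list above, here is a very rough model; the disk capacity and per-disk rebuild rate are illustrative assumptions, not measurements of either vendor's array:

# Toy rebuild-time model: time ~= failed disk's data / aggregate rebuild bandwidth.
# Dedicated hot spare: all rebuild writes funnel into one disk (many:one).
# Distributed sparing: spare space lives on every disk, so rebuild traffic
# fans out across the whole array (many:many). Figures below are assumptions.
disk_capacity_gb = 600
per_disk_rebuild_mb_per_sec = 50

def rebuild_hours(disks_absorbing_rebuild: int) -> float:
    aggregate = per_disk_rebuild_mb_per_sec * disks_absorbing_rebuild
    return (disk_capacity_gb * 1024) / aggregate / 3600

print(f"many:one  (1 dedicated spare):  {rebuild_hours(1):.1f} hours")
print(f"many:many (39 surviving disks): {rebuild_hours(39):.2f} hours")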

There might be more, but those are the top offenders on my list. One part of the presentation I didn't think was very good was when the presenter streamed a video from the array and tested various failure scenarios. The amount of performance needed to stream a video from a storage array under failure conditions is a very weak illustration of how seamless a failure can be; surviving the loss of a hard disk, a disk controller, or a power supply really is trivial. To the uninformed I suppose it shows the desired effect (or lack of one), which is why it's done. A better test, I think, would be running something like IOzone on the array and showing real-time monitoring of IOPS and latency during the failure testing (preferably with the system at least 45-50% loaded).
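Something like the following is what I have in mind – a minimal stand-in that hammers a test LUN or big file with small random reads and prints rolling IOPS and p99 latency, so you can watch the numbers move while you pull a disk or controller. The path, block size, and runtime are assumptions to adjust for your environment, and reads will hit the page cache unless you arrange to bypass it, so treat this purely as an illustration of the idea rather than a proper benchmark:

import os
import random
import time

PATH = "/path/to/test-lun-or-large-file"  # hypothetical target, change for your setup
BLOCK = 4096        # bytes per random read
INTERVAL = 5        # seconds between stat lines
RUNTIME = 300       # total seconds to run

fd = os.open(PATH, os.O_RDONLY)
size = os.lseek(fd, 0, os.SEEK_END)

latencies = []
window_start = time.time()
deadline = window_start + RUNTIME

while time.time() < deadline:
    # Pick a block-aligned random offset and time a single small read.
    offset = random.randrange(0, size - BLOCK) // BLOCK * BLOCK
    start = time.time()
    os.pread(fd, BLOCK, offset)
    latencies.append(time.time() - start)

    if time.time() - window_start >= INTERVAL:
        latencies.sort()
        iops = len(latencies) / (time.time() - window_start)
        p99_ms = latencies[int(len(latencies) * 0.99)] * 1000
        print(f"{iops:8.0f} IOPS, p99 latency {p99_ms:7.2f} ms")
        latencies = []
        window_start = time.time()

os.close(fd)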

You never know what you're missing until you don't have it anymore. You can become complacent and accept what you have as "good enough" because you don't know any better. I remember feeling this especially strongly when I changed jobs a few years ago and went from managing systems in a good Tier 4 facility to another "Tier 4" facility that had significant power issues (it seemed like at least one major outage a year). I took power for granted at the first facility because we had gone so many years without so much as a hiccup. It's times like this that I realize (again) the value 3PAR storage brings to the market, and I am very thankful that I can take advantage of it.

What I'd like to see though is some SPC-1 numbers posted for a rack of EqualLogic arrays. They say it is enterprise ready, and they talk about the clouds surrounding iSCSI. Well, put your money where your mouth is and show the world what you can do with SPC-1.

March 17, 2010

Frightened

Filed under: General,Networking — Nate @ 8:15 pm

Frightened. That was the word that first came to my mind when I read this article from our friends at The Register.

The report also says that 60 per cent of Google’s traffic is now delivered directly to consumer networks. In addition to building out a network of roughly 36 data centers and co-locating in more than 60 public exchanges, the company has spent the past year deploying its Google Global Cache (GGC) servers inside consumer networks across the globe. Labovitz says that according to Arbor’s anecdotal conversations, more than half of all consumer providers in North America and Europe now have at least one rack of Google’s cache servers.

Honestly, I am speechless beyond the word frightened; you may want to refer to an earlier blog post, "Lesser of two Evils," for more details.

March 16, 2010

IBM partners with Red Hat for KVM cloud

Filed under: News,Virtualization — Nate @ 6:32 pm

One question: Why?

IBM has bombarded the IT world for years now with claims about how they can consolidate hundreds to thousands of Linux VMs onto a single mainframe.

IBM has recently announced a partnership with Red Hat to use KVM in a cloud offering. At first I thought, well, maybe they are doing it to offer Microsoft applications as well, but that doesn't appear to be the case:

Programmers who use the IBM Cloud for test and dev will be given RHEV to play with Red Hat Enterprise Linux or Novell SUSE Linux Enterprise Server images with a Java layer as they code their apps and run them through regression and other tests.

Let's see, Linux and Java – why not use the mainframes to do this? Why KVM? As far as the end users are concerned it really shouldn't matter; after all, it's Java and Linux.

Seems like a slap in the face to their mainframe division (I never bought into the mainframe/Linux/VM marketing myself; I suppose they don't either). I do remember briefly having access to an S/390 running a SuSE VM about 10 years ago; it was... interesting.

Deja Vu

Filed under: News — Nate @ 9:55 am

Intel released their 5600 CPUs today; I first saw an announcement on Supermicro's site last night (there are a dozen or two vendors whose sites I prowl regularly). I couldn't help but get a sense of deja vu when it came to new Intel 6-core CPUs. It seems like just yesterday^W nearly 2 years ago that they released their first hex-core processor, the Xeon 7000 series. Yeah, I know that clock for clock these new chips are much faster – new cores, more threads, etc. But purely from a core perspective, why would anyone want to go buy one of these new 5600 series systems with the new 8-core chips coming in a couple of weeks, and AMD's new 8 and 12-core chips coming at about the same time?

I think Intel got screwed on this one, mostly by their OEMs. That is, many (most? all?) of the large OEMs have adapted the upcoming 8-core Intel chips, which were intended for 4-socket and larger systems, to run in dual-socket configurations – something Intel obviously didn't anticipate when they were designing this new 5600 series chip. On the same note, I never understood why the 7000 series chips never made it into dual-socket systems, but oh well, it doesn't matter now.

Short of an upgrade cycle for existing 5500 series systems, probably next year (the new 5600s are socket compatible, but the 5500s are so new I can't imagine many customers needing to upgrade so soon), I think the 5600 is a dead product.

March 11, 2010

Panasas NFS performance posted

Filed under: Storage — Nate @ 5:48 pm

I have heard of Panasas on occasion, and recently I saw a story or a link to them somewhere, so I decided to poke around to see what they do. I like technology...

Anyways, I was shocked to see their system design. I've seen systems like Isilon, Xiotech and Pillar that have embedded controllers in each of their storage shelves, which is an interesting concept for boosting performance, though given the added complexity and hardware in each shelf I imagine it can boost the costs by quite a bit too – I don't know.

But Panasas has taken it to an even further extreme, putting in a disk controller for every two disks in the system! I'm sure it's great for maximum performance, but wow, it just seems like such massive overkill (which can be good for certain apps, I'm sure). I was/am still shocked 🙂

So today I was poking around the latest SPEC SFS results for NFS again, and saw they finally posted some numbers.

Fairly impressive numbers, but I just can't get past the number of CPUs they are using. They posted 77,137 IOPS with 160 disks hosting NAS data (80 SATA and 80 SSD). They used a total of 110 Intel CPUs (80 1.5GHz Celerons and 30 1.8GHz Pentium Ms) and 440 gigabytes of RAM cache.

By contrast, Avere, which I posted about recently (I've never used their stuff, never talked to them before), posted 131,591 IOPS with 72 disks hosting NAS data (48 15k SAS, 24 SATA), 14 Intel CPUs (2.5GHz quad core, so 56 cores) and 423 gigabytes of RAM cache. This is on a 6-node cluster, and this Avere configuration is not using SSD (they have released an SSD version since these results were posted).

The bar certainly is being raised by these players implementing massive caches. NetApp showed off some pretty impressive numbers as well with their PAM last year – more than 500GB of cache (PAM is a read cache only) – though again not nearly as effective as Avere, since they came in at 60,507 IOPS with 56 15k RPM disks.
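One crude way to line these results up is IOPS per data disk, using just the figures above; this glosses over cache sizes, SSD vs spinning disk, and cluster size, so treat it as a rough normalization only:

# SPEC SFS NFS numbers quoted above, normalized to IOPS per data disk.
# Ignores cache size, disk type (SSD/SAS/SATA) and node count differences.
results = {
    "Panasas (80 SATA + 80 SSD)": (77137, 160),
    "Avere 6-node (48 SAS + 24 SATA)": (131591, 72),
    "NetApp with PAM (56 x 15k)": (60507, 56),
}

for name, (iops, disks) in results.items():
    print(f"{name}: {iops / disks:,.0f} IOPS per disk ({iops:,} IOPS / {disks} disks)")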

March 10, 2010

Save 50% on vSphere Essentials for the next 90 days

Filed under: Virtualization — Nate @ 3:00 pm

Came across this today, which mentions you can save about 50% when licensing vSphere Essentials for the next ~90 days. As you may know, Essentials is a really cheap way to get your vSphere hosts managed by vCenter. For your average dual-socket 16-blade system, as an example, it is 91% cheaper (a savings of ~$26,000) than going with vSphere Standard edition. Note that the vCenter included with Essentials needs to be thrown away if you're managing more than three hosts with it; you'll still need to buy vCenter Standard (regardless of which version of vSphere you buy).
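If you want to sanity-check the savings for your own environment, the comparison boils down to per-kit vs per-socket licensing. A minimal sketch is below; the prices are placeholder assumptions (not VMware list prices), so plug in your own quotes, and remember vCenter Standard comes on top in both cases:

import math

# Placeholder assumptions -- substitute current quotes from VMware/your reseller.
HOSTS = 16                       # dual-socket blades
SOCKETS_PER_HOST = 2
ESSENTIALS_KIT_PRICE = 500       # assumed promo price per Essentials kit (covers 3 hosts)
HOSTS_PER_KIT = 3
STANDARD_PRICE_PER_SOCKET = 800  # assumed vSphere Standard price per socket

essentials_total = math.ceil(HOSTS / HOSTS_PER_KIT) * ESSENTIALS_KIT_PRICE
standard_total = HOSTS * SOCKETS_PER_HOST * STANDARD_PRICE_PER_SOCKET

print(f"Essentials kits:  ${essentials_total:,}")
print(f"vSphere Standard: ${standard_total:,}")
print(f"Savings:          ${standard_total - essentials_total:,} "
      f"({1 - essentials_total / standard_total:.0%})")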

March 9, 2010

The Atomic Unit of Compute

Filed under: Virtualization — Nate @ 5:16 pm

I found this pretty fascinating; as someone who has been talking to several providers, I think it raises some pretty good points.

[..]Another of the challenges you’ll face along the way of Cloud is that of how to measure exactly what it is you are offering. But having a look at what the industry is doing won’t give you much help… as with so many things in IT, there is no standard. Amazon have their EC2 unit, and state that it is roughly the equivalent of 1.0-1.2GHz of a 2007 Opteron or Xeon CPU. With Azure, Microsoft haven’t gone down the same path – their indicative pricing/sizing shows a base compute unit of 1.6GHz with no indication as to what is underneath. Rackspace flip the whole thing on it’s head by deciding that memory is the primary resource constraint, therefore they’ll just charge for that and presumably give you as much CPU as you want (but with no indication as to the characteristics of the underlying CPU). Which way should you go? IMHO, none of the above.[..]

We need to have a standard unit of compute, that applies to virtual _and_ physical, new hardware and old, irrespective of AMD or Intel (or even SPARC or Power). And of course, it’s not all just about GHz because all GHz are most definitely not equal and yes it _does_ matter to applications. And lets not forget the power needed to deliver those GHz.

In talking with Terremark, it seems their model is built around VMware resource pools, where they allocate you a set amount of GHz for your account. They have a mixture of Intel dual-socket systems and AMD quad-socket systems, and if you run a lot of multi-vCPU VMs you have a higher likelihood of ending up in the AMD pool vs the Intel one. I have been testing their vCloud Express product for my own personal needs (1 vCPU, 1.5GB RAM, 50GB HD), and noticed that my VM is on one of the AMD quad-socket systems.
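To make the comparison concrete, here is a minimal sketch of the kind of normalization the quote is arguing for: express each offering as GHz of a reference core, scaled by a per-platform efficiency factor. The efficiency factors and the Terremark entry are illustrative assumptions on my part, since (as the quote notes) most providers don't publish enough detail to do this properly:

# Crude "standard unit of compute": allocated GHz x per-platform efficiency
# relative to a 2007-era Opteron/Xeon core (the baseline the EC2 unit implies).
# Efficiency factors below are illustrative assumptions, not measured values.
offerings = [
    # (provider/unit, allocated GHz, relative per-GHz efficiency vs 2007 baseline)
    ("Amazon EC2 compute unit", 1.1, 1.0),
    ("Azure base compute unit (1.6GHz, CPU unspecified)", 1.6, 1.0),
    ("Terremark resource pool slice (assumed 2.2GHz AMD)", 2.2, 1.2),
]

for name, ghz, efficiency in offerings:
    print(f"{name}: ~{ghz * efficiency:.2f} normalized units")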

