TechOpsGuys.com – Diggin' technology every day

March 23, 2012

Hitachi trounces XIV in SPC-2 Costs

Filed under: Storage — Nate @ 9:37 am

This really sort of surprised me. I came across an HP storage blog post which mentioned some new SPC-2 results for the P9500, aka Hitachi VSP. Naturally I expected the system to cost quite a bit and offer good performance, but I was really not expecting these results.

A few months ago I wrote about what seemed like pretty impressive numbers from IBM XIV (albeit at a high cost) – I didn't realize just how high that cost was until these latest results came out.

Not that any of my workloads look like SPC-2 (which is primarily a throughput benchmark). I mean, if I had a data warehouse I'd probably run HP Vertica (which slashes I/O requirements due to its design), negating the need for such a high-performing system; if I was streaming media I would probably be running some sort of NAS – maybe Isilon, DDN, or BlueArc, I don't know. I'm pretty sure I would not be using one of these kinds of arrays though.

Anyways, the raw numbers came down to this (a quick check of the math follows the lists):

IBM XIV

  • 7.4 GB/sec throughput
  • $152.34 per MB/sec of throughput (42 MB/sec per disk)
  • ~$7,528 per usable TB (~150 TB usable)
  • Total system cost – $1.1M for 180 x 2TB SATA disks and 360 GB cache

HP P9500 aka Hitachi VSP

  • 13.1 GB/sec throughput
  • $88.34 per MB/sec of throughput (26 MB/sec per disk)
  • ~$9,218 per usable TB (~126 TB usable)
  • Total system cost – $1.1M for 512 x 300GB 10k SAS disks and 512 GB cache
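
If you want to check my arithmetic, here is a rough Python sketch (the ~$1.1M totals are rounded, so the outputs land near, not exactly on, the published SPC-2 figures):

    # Back-of-the-envelope check of the SPC-2 cost metrics quoted above.
    # The ~$1.1M totals are rounded, so results are approximate.
    systems = {
        "IBM XIV":  {"cost": 1100000.0, "mb_sec": 7400.0,  "disks": 180, "usable_tb": 150},
        "HP P9500": {"cost": 1100000.0, "mb_sec": 13100.0, "disks": 512, "usable_tb": 126},
    }

    for name, s in sorted(systems.items()):
        print("%s: $%.2f per MB/sec, %.0f MB/sec per disk, $%.0f per usable TB"
              % (name, s["cost"] / s["mb_sec"], s["mb_sec"] / s["disks"],
                 s["cost"] / s["usable_tb"]))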

The numbers are just startling to me – I never really expected the cost of the XIV to be so high in comparison to something like the P9500. In my original post I suspected that any SPC-1 numbers coming out of XIV (based on the SPC-2 configuration cost, anyways) would make the XIV the most expensive array on the market (per IOP), which is unfortunate given its limited scalability: 180 disks max, 7200RPM only, and RAID 10 only. I wonder what, if anything (other than margin), keeps the price so high on XIV.

I'm sure a good source for getting the cost lower on the P9500 side was the choice to use RAID 5 instead of RAID 10. The previous Hitachi results, released in 2008 for the previous-generation USP platform, used mirroring. And of course XIV only supports mirroring.

It seems clear to me that the VSP is the winner here. I suspect the XIV probably includes more software out of the box, while the VSP is likely not an all-inclusive system.

IBM gets some slack cut to them since they were running an SPC-2/E energy efficiency test, though not too much – if you're spending $1M on a storage system, the cost of energy isn't going to be all that important (at least given average U.S. energy rates). I'm sure the P9500, with its 2.5″ drives, is pretty energy efficient on its own anyways.

Where XIV really fell short was on the last test, Video on Demand – for some reason the performance tanked to less than 50% of the other tests (a full 10 gigabytes/second less than the VSP). I'm not sure what the weightings are for each of the tests, but if IBM was lucky and the VOD test wasn't there, it would have helped them a lot.

The XIV as tested is maxed out, so any expansion would require an additional XIV. The P9500 is nowhere close to maxed out (though throughput could be maxed out, I don’t know).

October 22, 2011

IBM posts XIV SPC-2 results

Filed under: Storage — Nate @ 8:35 pm

[UPDATED – as usual I re-read my posts probably 30 times after I post them and refine them a bit if needed; this one got quite a few changes. I don't run a newspaper here, so I don't aim to have a completely thought-out article when I hit post for the first time]

IBM finally came out and posted some SPC-2 results for their XIV platform – better than nothing, but unfortunately they did not post SPC-1 results.

SPC-2 is a sequential throughput test, geared more towards things like streaming media and data warehousing than the random I/O that represents a more typical workload.

The numbers are certainly very impressive though, coming in at 7.3 gigabytes/second and besting most other systems out there – 42 megabytes/second per disk. IBM's earlier high-end storage array was only able to eke out 12 megabytes/second per disk (with 4 times the number of disks) using disks that were twice as fast. That's at least 8 times the raw I/O capacity for only about 25% more performance than the XIV – a stark contrast!
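
To put those ratios in one place, a quick sketch using the rounded per-disk figures quoted above (which, for what it's worth, put the older array's total-throughput edge a bit under that 25%):

    # XIV vs IBM's earlier high-end array, using the per-disk figures above
    xiv_disks, xiv_mb_per_disk = 180, 42        # ~7.3 GB/sec total
    old_disks, old_mb_per_disk = 4 * 180, 12    # 4x the disks, each twice as fast

    print("raw I/O capacity ratio: %dx" % (4 * 2))  # 4x spindles at 2x speed = 8x
    print("per-disk throughput ratio: %.1fx" % (xiv_mb_per_disk / float(old_mb_per_disk)))
    print("total throughput ratio: %.2fx" %
          (old_disks * old_mb_per_disk / float(xiv_disks * xiv_mb_per_disk)))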

SATA/Nearline/7200RPM SAS disks are typically viewed as good at sequential operations, though I would expect 15k RPM disks to do at least as well, since the faster spindle speed should result in more data traveling under the head at a faster rate – perhaps a sign of a good architecture in XIV with its distributed mirrored RAID.

While the results are quite good, again they don't represent the most common type of workload out there, which is random I/O.

The $1.1M discounted price of the system seems quite high for something that only has 180 disks in it (discounts on the system seem to be around 70% for the most part), though there is more than 300 gigabytes of cache. I bought a 2-node 3PAR T400 with 200 SATA disks shortly after the T was released in 2008 for significantly less – of course, it only had 24GB of data cache!

I hope the $300 modem IBM is using (after the 70% discount) is a USR Courier! (Your Price: $264.99 – still leaves a good profit for IBM.) Such fond memories of the Courier.

I can only assume the reason IBM has refrained from posting SPC-1 results so far is that, with a SATA-only system, the results would not be impressive. In a fantasy world, with nearline disks and a massive 300GB cache, maybe they could achieve 200-250 IOPS/disk, which would put the $1.1M system with 180 disks at 36,000-45,000 SPC-1 IOPS, or $24-30/IOP.

A more realistic number is probably going to be 25,000 or less ($44/IOP), making it one of the most expensive systems out there for I/O (even if it could score 45,000 SPC-1). By contrast, 3PAR's I/O calculator says a 4-node F400 would do 14,000 IOPS (not SPC-1 IOPS mind you – the SPC-1 number would probably be lower) with 180 SATA disks and RAID 10 on an 80% read / 20% write workload, for about 50% less cost (after discounts).
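
For the curious, the per-IOP arithmetic behind those guesses – a sketch only, since the IOPS-per-disk figures are my speculation, not measured results:

    cost, disks = 1100000.0, 180

    # Fantasy-world scenario from above: 200-250 SPC-1 IOPS per nearline disk
    for iops_per_disk in (200, 250):
        total = disks * iops_per_disk
        print("%d SPC-1 IOPS -> $%.2f/IOP" % (total, cost / total))

    # More realistic guess: ~25,000 IOPS total
    print("25000 SPC-1 IOPS -> $%.2f/IOP" % (cost / 25000))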

One of the weak spots on 3PAR is the addressable capacity per controller pair. For I/O and disk-connectivity purposes a 2-node F200 (much cheaper) could easily handle 180 2TB SATA disks, but from a software perspective that is not the case. I have been complaining about this for more than 3 years now. They've finally addressed it to some extent in the V-class, though I am still disappointed by the supported limits that exist today (1.6PB; it should be more than double that) – but at least with the V they have enough memory on the box to scale up with software upgrades (time will tell whether such upgrades come about, however).

I would not even use an F400 for this if it were me, opting instead for a T800 (800TB) or a V-class (800-1600TB), because with 360TB raw on the system you are very close to the limit of the F400's addressable capacity (384TB), or the T400's (400TB). You could of course get a 4-node T800 (or a 2-node V400 or V800) to start, then add additional controllers to get beyond 400TB of capacity if/when the need arises. With the 4-controller design you also get the wonderful persistent cache feature built in (one of the rare software features that is not separately licensed).

But for this case, comparing a nearly maxed-out F400 against a maxed-out XIV is still fair – that limited scalability is one of the main reasons I did not consider XIV during my last couple of storage purchases.

So these results do point to a strong use case for XIV – throughput-oriented workloads! The XIV would absolutely destroy the F400 in throughput, which tops out at 2.6GB/sec (to disk).

With software such as Vertica out there, which slashes the need for disk I/O on data warehouses given its advanced design, and systems such as Isilon being so geared towards things like scale-out media serving (using NFS for media serving seems like a more ideal protocol anyways), I can't help but wonder what XIV's place is in the market, at this price point at least. It does seem like a very nice platform from a software perspective, and with their recent switch to InfiniBand from 1 Gigabit Ethernet a good part of the hardware has been improved as well; it also has SSD read cache coming.

I will say though that this XIV system would handily beat even a high-end 3PAR T800 for throughput. While 3PAR has never released SPC-2 numbers, the T800 tops out at 6.4 gigabytes/second (from disk), and it's quite likely its SPC-2 result would be lower than that.

With the 3PAR architecture being as optimized as it is for random I/O, I do believe it would suffer vs other platforms on sequential I/O. Not that the 3PAR would run slow, but it would quite likely run slower due to how data is distributed on the system. That is just speculation though, a result of not having real numbers to base it on. My own production random I/O workloads in the past have had 15k RPM disks running in the range of 3-4 MB/second (numbers are extrapolated, as I have only had SATA and 10k RPM disks in my 3PAR arrays to date, though my new one that is coming is 15k RPM). As such, with a random I/O workload you can scale up pretty high before you run into any throughput limits on the system – in fact, if you max out a T800 with 1,280 drives, you could do as high as 5 MB/second/disk before you would hit the limit. Though XIV is distributed RAID too, so who knows..
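
That 5 MB/second/disk figure falls straight out of the T800's ceiling – a quick sketch, assuming the 6.4 gigabytes/second to-disk limit mentioned above:

    # Per-disk sequential throughput headroom on a maxed-out T800
    t800_mb_sec = 6400.0    # ~6.4 GB/sec to disk
    max_drives = 1280
    print("%.1f MB/sec per disk at the limit" % (t800_mb_sec / max_drives))
    # -> 5.0 MB/sec/disk, comfortably above the 3-4 MB/sec my random workloads push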

Likewise, I suspect 3PAR/HP have not released SPC-2 numbers because they would not reflect their system in the most positive light, unlike SPC-1.

Sorry for the tangents on 3PAR 🙂
