TechOpsGuys.com Diggin' technology every day

March 23, 2012

Hitachi trounces XIV in SPC-2 Costs

Filed under: Storage — Nate @ 9:37 am

This really sort of surprised me. I came across an HP storage blog post mentioning some new SPC-2 results for the P9500, aka Hitachi VSP. Naturally I expected the system to cost quite a bit and offer good performance, but I was really not expecting these results.

A few months ago I wrote about what seemed like pretty impressive numbers from IBM XIV (albeit at a high cost); I didn’t realize how high that cost was until these latest results came out.

Not that any of my workloads are SPC-2 related (SPC-2 is primarily about throughput). I mean, if I had a data warehouse I’d probably run HP Vertica (which slashes I/O requirements due to its design), negating the need for such a high-performing system; if I were streaming media I would probably be running some sort of NAS – maybe Isilon, DDN or BlueArc – I don’t know. I’m pretty sure I would not be using one of these kinds of arrays though.

Anyways, the raw numbers came down to this:

IBM XIV

  • 7.4GB/sec throughput
  • $152.34 per MB/sec of throughput (42MB/sec per disk)
  • ~$7,528 per usable TB (~150TB Usable)
  • Total system cost – $1.1M for 180 x 2TB SATA disks and 360GB cache

HP P9500 aka Hitachi VSP

  • 13.1GB/sec throughput
  • $88.34 per MB/sec of throughput (26MB/sec per disk)
  • ~$9,218 per usable TB (~126TB Usable)
  • Total system cost – $1.1M for 512 x 300GB 10k SAS disks and 512GB cache
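The ratios in the bullet points above are just a few divisions; here is a rough sketch of the math. The exact SPC-2 dollar totals aren’t quoted in the post, so the cost inputs below are back-derived from the published $/MBps figures and should be treated as approximations, not the official submission numbers:

```python
def spc2_metrics(total_cost_usd, throughput_gb_s, usable_tb, disk_count):
    """Derive the cost/performance ratios quoted in SPC-2 summaries."""
    mb_s = throughput_gb_s * 1000  # GB/sec -> MB/sec (decimal units)
    return {
        "usd_per_mb_s": total_cost_usd / mb_s,       # price-performance
        "usd_per_usable_tb": total_cost_usd / usable_tb,
        "mb_s_per_disk": mb_s / disk_count,          # per-spindle throughput
    }

# Costs back-calculated from the quoted $/MBps figures (approximate).
xiv = spc2_metrics(1_127_316, 7.4, 150, 180)    # 180 x 2TB SATA
vsp = spc2_metrics(1_157_254, 13.1, 126, 512)   # 512 x 300GB 10k SAS

print(f"XIV: ${xiv['usd_per_mb_s']:.2f}/MBps, "
      f"{xiv['mb_s_per_disk']:.1f} MBps/disk")
print(f"VSP: ${vsp['usd_per_mb_s']:.2f}/MBps, "
      f"{vsp['mb_s_per_disk']:.1f} MBps/disk")
```

Per-disk numbers come out a hair under the rounded figures above depending on whether you count decimal or binary megabytes, but the shape of the comparison is the same: XIV gets far more throughput out of each (slower) spindle, while the VSP wins on total throughput and price-performance.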

The numbers are just startling to me; I never really expected the cost of the XIV to be so high in comparison to something like the P9500. In my original post I suspected that any SPC-1 numbers coming out of XIV (based on the SPC-2 configuration cost, anyway) would make the XIV the most expensive array on the market (per IOP), which is unfortunate given its limited scalability: 180 disks maximum, 7200RPM drives only and RAID 10 only. I wonder what, if anything (other than margin), keeps the price so high on XIV.

I’m sure a good source for getting the cost lower on the P9500 side was the choice to use RAID 5 instead of RAID 10. The previous Hitachi results, released in 2008 for the previous-generation USP platform, used mirroring. And of course XIV only supports mirroring.

It seems clear to me that the VSP is the winner here. I suspect the XIV probably includes more software out of the box, while the VSP is likely not an all-inclusive system.

IBM gets some slack cut to them since they were doing an SPC-2/E energy efficiency test, though not too much: if you’re spending $1M on a storage system, the cost of energy isn’t going to be all that important (at least given average U.S. energy rates). I’m sure the P9500 with its 2.5″ drives is pretty energy efficient on its own anyway.

Where XIV really fell short was on the last test, Video on Demand; for some reason the performance tanked to less than 50% of the other tests (a full 10 Gigabytes/second less than VSP). I’m not sure what the weightings are for each of the tests, but if IBM was lucky and the VOD test wasn’t there, it would have helped them a lot.

The XIV as tested is maxed out, so any expansion would require an additional XIV. The P9500 is nowhere close to maxed out (though throughput could be maxed out, I don’t know).

9 Comments

  1. /fanboy hat on/

Interesting to say the least, though I’d like to see the pricing with the software complement that most companies would almost certainly require upon purchase. I’d hazard a guess that the price would jump a bit, to near double, since most of their options are licensed based on capacity. I’m not saying the results are disingenuous, but with the XIV you don’t have to license anything; it’s pretty much a single-SKU unit. Of course then there is the simplicity-of-use factor as well. No RAID groups, no tiering, no special training required. No hassles. I guess it really depends on what you want to focus your administration time on.
    I will say the one thing that XIV needs to address is scaling beyond a single frame. I believe second frame connectivity is either in the works, or currently being done, but I have not seen it first-hand.

Sadly, the benchmarks I really want to see are for the 3PAR P10k. Given what I’ve heard about that rig, it’s the Galactus of storage.

Final thought: barring bragging rights, the SPC benchmarks are good for getting a generalized understanding of what a given array can do, but some submittals are lab queens *cough EMC’s all-flash VNX* while others are real-world arrays as actually sold. With the XIV, since there is no tuning and no alternate configurations, you don’t really have to worry about that.

Do you know of anyone running two full P9500s in this configuration? I don’t.

    /fanboy hat off/

    Yeah those results are pretty bitchen 🙂

    Comment by gchapman — March 23, 2012 @ 10:11 am

  2. Hi Nate.
    To be upfront, I work and blog for IBM.
    I read your article and while the HP/HDS numbers are very impressive, I don’t think the comparison (or the conclusions you draw) are entirely fair.

As you said, it is not clear what copy services or smart-function software is included with the HP/HDS config. I checked the Full Disclosure and could not see anything. The XIV comes with Thin Provisioning, Async and Sync mirror over FC and iSCSI, multipathing software and data migration, all included in the price. Can you confirm what the effect on pricing would be if HP added those features? I don’t see any clear information on this. If not, can you be certain of the facts backing the title of your blog post?

    The HP/HDS solution is dual rack and has no disclosed power consumption (unless I am missing something!). A TCO study has to include elements like floor space, power consumption and cooling. You state that 2.5″ drives are power efficient, but comparing 180 x 3.5″ to 512 x 2.5″ would provide some interesting maths. Did you do this maths? Again, without this information, can you be certain of the facts backing the title of your blog post?

    The XIV has very low storage admin cost. I work with many clients who have several 100 TB of XIV disk managed by a single person (who also often manages backups, the SAN and sometimes helps with the VMware farm). So understanding the cost of admin is key to doing a TCO study. Again, without this information, can you be certain of the facts backing the title of your blog post?

    In terms of scalability, IBM does also have a larger XIV now (243 TB using 3 TB drives).

    If I have failed to spot something, I am more than happy to correct myself, but I was hoping to find more detailed analysis to justify the title of your post.

    Regards,

    Anthony Vandewerdt

    Comment by Anthony Vandewerdt — March 23, 2012 @ 9:44 pm

I don’t think any copy services or smart functions are included; I think most people would not expect them to be, since they are not used in the test. This, I’m sure you’re well aware of if you’re an IBM blogger. This is common for systems that do not have all-inclusive licensing. As a 3PAR customer for the past nearly 6 years I know first hand... paying for thin provisioning, thin reclamation, dynamic optimization or snapshots — all separate.

I certainly agree having all the extra software adds value; how much, I dunno. I have never really felt much value in array-based replication, mainly because of the difficulties in getting things to be application consistent (does any array vendor have MySQL integration?), so for me the costs haven’t been worth it – I’d rather use application-level replication (with one exception – if I’m replicating lots of tiny files, using block replication for that is a lot more efficient and scalable). Don’t even get me started on VMware SRM – what a waste of money that is... (hate that per-VM licensing)! But thin provisioning, snapshots and the like do add value.

    Just look at the latest SPC-1 numbers from IBM SVC +Storewize, all I see is “base license” (for all I know that means everything – but it implies that it doesn’t).

I looked again at the Executive Summary for any mention of extra software being included on the XIV system and found no mention of it. If there is great value attributed to this extra software (I hope there would be), you’d think/hope/expect IBM would call it out in the document somewhere, like in the paragraph describing the system under test. After all, it’s not me you have to convince, it’s the less technical people that might come across that kind of thing. Perhaps you can go to the XIV team and get them to point this advantage out in a revised version of the document.

I looked at the full disclosure document too and ran a search for the word “software” – no mention of any advanced software functionality.

    So I think my post is fair given the disclosures provided by the results of each of the tests.

    As for larger – I didn’t mean to imply larger disks, but rather more disks, I would hope XIV would scale to say 500-800 disks in a single system (at least) – you guys got up to what, 120 CPU cores ? That’s quite a bit! 1 CPU core per 3 physical spindles, does anyone else come close?

I say this expecting IBM XIV software to be better — but from what I have seen on other low-cost platforms that have ‘all inclusive’ licensing, more often than not the software is no good (Equallogic comes to mind). When faced with “free crappy all-inclusive” software or “better a la carte for $$”, I’ll pay the extra to get the better stuff. Unfortunately, the all-inclusive vendors I have come across don’t have premium software offerings; either you take what comes with the system or you don’t take anything at all.

Perhaps another approach IBM could take is going the a la carte route: have a configuration that does NOT have all-inclusive licensing. Let users that want all-inclusive license it; let others that don’t want it skip it and pay less. Get the cost of the XIV down – could you get it down to say $600-700k without the fancy software add-ons? I think that would make IBM look better than trying to show how much more expensive the other guys are when you bolt on all the extra software to match your line-up (whether or not the customer wants or needs it).

    HP/3PAR has sort of done something similar with their thin provisioning package, until recently if you wanted full thin provisioning end-to-end you had to license 3 different software products, now they have a package deal where you get all 3 at a discount.

    thanks for the post!

    Comment by Nate — March 23, 2012 @ 11:46 pm

  4. Hi Nate.

On the one hand I personally think the SPC-1 cost comparisons are almost useless. The list prices of most vendors (my own employer included) are so high as to be almost bizarre. So a huge discount off list is needed, but the discount numbers chosen in the SPC-1 submissions by both IBM and other vendors do not reflect the street prices that I see. Again, I can only blame my own employer (and IBM’s competitors) for that, but it leaves me very dubious about analysis done on it. On the other hand... it is a publicly agreed measure done to an industry standard... so I can hardly tell you to ignore it. This leaves me a little conflicted. I also agree with Gabriel (who also commented) that many SPC-1 and SPC-2 configs are not ‘real world’. In that regard the XIV config for the SPC-2 result is exactly the same config that a client would buy. In other words... I genuinely sell the exact same config as shown in the SPC results. In comparison, while the 3PAR SPC-1 results for instance are fantastic, they still need a 1,920-disk machine... not sure how many of those HP actually sell? (Though to be clear – I am not saying that reduces their achievement in any way.)

    As for the software costs… I can tell you this with certainty. The vast bulk of machines sold in Aust/NZ use the mirroring capability (sync or async) that you get included in the base price. The fact that there is no extra license is a genuine selling feature: customers love it and they use it extensively. A great many use the data migration feature and the snapshot feature and the free performance monitoring tools. These are not features that sit on the shelf and get ignored, they are viewed as having real value.

I don’t think IBM would offer a bare-metal price; the whole selling point is to have a very simple price model, and that is frankly a REALLY REALLY popular feature. Client after client tells horror stories of being stung after sale on their previous setup.

    What I would really like to see is a client who has bought both solutions actually post the differences in cost as they saw them. Genuine street prices and genuine ‘what I really got value from’ feature comparisons. That would make for a very interesting blog post…

    BTW… I didn’t know about your blog till today… nice work…. am now following it.

    Comment by Anthony Vandewerdt — March 24, 2012 @ 11:32 pm

Check out Oracle’s latest SPC-2 submission... It leaves both IBM and HP/Hitachi in the dust in terms of $/MBps, and takes an extremely respectable 2nd place for MBps...

    Comment by Darius Zanganeh — April 17, 2012 @ 4:06 pm

Those are certainly very impressive results. And nice to see someone actually effectively exploiting multi-core CPUs(!!); that Oracle box has 10-core processors in it. I tried looking at more of what the 7240 array offers, but for some reason I get a 404 when I go to the page –
http://www.oracle.com/us/products/servers-storage/storage/nas/overview/index.html (then click on 7240)

I’ve been reading off and on about those ZFS-based storage boxes from Oracle – if you’ve used them, what is your experience? It seems there’s some discontent among customers since Oracle took over.

It is interesting that Oracle would submit a system they pitch as “NAS” to a benchmark that is SAN-based (at least according to their web site, NAS is where they pitch that product line). They must not be too proud of their Pillar stuff.

    The price difference is about what I would expect – I mean given that the HP P9500/VSP is the most high end expensive storage in the world.

    thanks for the post! Interesting stuff.

    Comment by Nate — April 17, 2012 @ 5:27 pm

  7. […] commented on here in response to my HP SPC-2 results pointing out that the new Oracle 7240 ZFS system has some new SPC-2 results that are very […]

    Pingback by Oracle not afraid to leverage Intel architecture « TechOpsGuys.com — April 20, 2012 @ 11:36 am

  8. The Oracle results are very impressive, but they skirt two things.

First up, the unused storage ratio for the Oracle result is 36.31%, versus 0.45% and 2.75% for the HP/HDS and XIV results. For an SPC-2 result the Oracle number is very, very high.

It then leads into a green question: spinning up 384 disks to get just 32 TB of usable space is going to consume a lot of power.

I note that no other vendor has submitted an SPC-2/E number, and certainly if this is the only way to get ‘world’s best’ numbers, I can see why.

    Comment by Anthony Vandewerdt — May 2, 2012 @ 2:19 am

  9. […] could not or would not do at the time. Whether it was mainframe connectivity, or perhaps ultra high speed data warehousing. When HP acquired 3PAR, the high end was still PCI-X based and there wasn't a prayer it was going […]

    Pingback by 3PAR: The Next Generation « TechOpsGuys.com — December 4, 2012 @ 12:43 am
