TechOpsGuys.com Diggin' technology every day

February 21, 2014

NetApp’s latest mid range SPC-1

Filed under: Storage — Nate @ 4:14 pm

NetApp came out with their latest generation of storage systems recently and they were good citizens and promptly published SPC-1 results for them.

When is clustering, clustering?

NetApp is running the latest Ontap 8.2 in cluster mode I suppose, though there is only a single pair of nodes in the tested cluster. I’ve never really considered this a real cluster, it’s more of a workgroup of systems. Volumes live on a controller (pair) and can be moved around if needed, they probably have some fancy global management thing for the “cluster” but it’s just a collection of storage systems that are only loosely integrated with each other. I like to compare the NetApp style of clustering to a cluster of VMware hosts (where the VMs would be the storage volumes).

This strategy has its benefits as well, the main one being a lower likelihood that the entire cluster could be taken down by a failure (normally I'd consider this failure to be triggered by a software fault). This is the same reason why 3PAR has elected, to date, not to go beyond 8 nodes in their cluster; the risk/return is not worth it in their mind. In their latest generation of high end boxes 3PAR decided to double up the ASICs to give them more performance/capacity rather than add more controllers, though technically there is nothing stopping them from extending the cluster further (to my knowledge).

The downside to workgroup style clustering is that optimal performance is significantly harder to obtain.

3PAR clustering is vastly more sophisticated and integrated by comparison. To steal a quote from their architecture document –

The HP 3PAR Architecture was designed to provide cost-effective, single-system scalability through a cache-coherent, multi-node, clustered implementation. This architecture begins with a multi-function node design and, like a modular array, requires just two initial Controller Nodes for redundancy. However, unlike traditional modular arrays, an optimized interconnect is provided between the Controller Nodes to facilitate Mesh-Active processing. With Mesh-Active controllers, volumes are not only active on all controllers, but they are autonomically provisioned and seamlessly load-balanced across all systems resources to deliver high and predictable levels of performance. The interconnect is optimized to deliver low latency, high-bandwidth communication and data movement between Controller Nodes through dedicated, point-to-point links and a low overhead protocol which features rapid inter-node messaging and acknowledgement.

Sounds pretty fancy right? It’s not something that is for high end only. They have extended the same architecture down as low as a $25,000 entry level price point on the 3PAR 7200 (that price may be out of date, it’s from an old slide).

I had the opportunity to ask what seemed to be a NetApp expert about some of the finer details of clustering in Ontap 8.1 (the latest version is 8.2) a couple of years ago and he provided some very informative responses.

Anyway, on to the results. After reading up on them it was hard for me not to compare them with the now five year old 3PAR F400 results.

Also I want to point out that the 3PAR F400 is End of Life, and is no longer available to purchase as new as of November 2013 (support on existing systems continues for another couple of years).

Metric                                          | NetApp FAS8040                        | 3PAR F400
Date tested                                     | 2/19/2014                             | 4/27/2009
Controller count                                | 2                                     | 4 (hey, it's an actual cluster)
SPC-1 IOPS                                      | 86,072                                | 93,050
SPC-1 usable capacity                           | 32,219 GB (RAID 6)                    | 27,046 GB (RAID 1)
Raw capacity                                    | 86,830 GB                             | 56,377 GB
SPC-1 unused storage ratio (may not exceed 45%) | 41.79%                                | 0.03%
Tested storage configuration pricing            | $495,652                              | $548,432
SPC-1 cost per IOP                              | $5.76                                 | $5.89
Disk size and number                            | 192 x 450GB 10k RPM                   | 384 x 146GB 15k RPM
Data cache                                      | 64GB data cache + 1,024GB Flash cache | 24GB data cache

I find the comparison fascinating, myself at least. It is certainly hard to compare the pricing, given that the 3PAR results are five years old and the 3PAR mid range pricing model has changed significantly with the introduction of the 7000 series in late 2012. I believe the pricing 3PAR provided to SPC-1 was discounted (I can't find an indication either way, I just believe that based on my own 3PAR pricing from back then) whereas NetApp's is list (it says so in the document). But again, it is hard to compare pricing given the massive difference in elapsed time between the tests.

Unused storage ratio

What is this number and why is there such a big difference? Well, this is an SPC-1 metric and they say in the case of NetApp:

Total Unused Capacity (36,288.553 GB) divided by Physical Storage Capacity (86,830.090 GB) and may not exceed 45%.

An unused storage ratio of 42% is fairly typical for NetApp results.
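
To make the arithmetic explicit, here is a minimal sketch of that calculation in Python (the two capacity figures are taken straight from the quote above); it reproduces the 41.79% in the table:

# SPC-1 unused storage ratio, using the figures from the NetApp quote above
total_unused_gb = 36288.553
physical_capacity_gb = 86830.090

unused_ratio = total_unused_gb / physical_capacity_gb
print("Unused storage ratio: {:.2%}".format(unused_ratio))  # ~41.79%, under the 45% limit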

In the case of 3PAR you have to go to the bigger full disclosure document (72 pages), as the executive summary format has evolved over time and that specific quote is not in the 3PAR documents.

So for the 3PAR F400, SPC says:

The Physical Storage Capacity consisted of 56,377.243 GB distributed over 384 disk drives each with a formatted capacity of 146.816 GB. There was 0.00 GB (0.00%) of Unused Storage within the Physical Storage Capacity. Global Storage Overhead consisted of 199.071 GB (0.35%) of Physical Storage Capacity. There was 61.203 GB (0.11%) of Unused Storage within the Configured Storage Capacity. The Total ASU Capacity utilized 99.97% of the Addressable Storage Capacity resulting in 6.43 GB (0.03%) of Unused Storage within the Addressable Storage Capacity.

3PAR F400 Storage Hierarchy Ratios

The full disclosure document is not (yet) available for NetApp as of 2/21/2014. It most certainly will become available at some point.

The metrics above and beyond the headline numbers are one of the main reasons I like SPC-1.

With so much wasted space on the NetApp side it is confusing to me why they don’t just use RAID 1 (I think the answer is they don’t support it).

Benefits from cache

The NetApp system is able to leverage its terabyte of flash cache to accelerate what is otherwise a slower set of 10k RPM disks, which is nice for them.

They also certainly have much faster CPUs, and more than double the data cache (3PAR’s architecture isolates data cache from the operating system, so I am not sure how much memory on the NetApp side is actually used for data cache vs operating system/meta data etc). 3PAR by contrast has their proprietary ASIC which is responsible for most of the magic when it comes to data processing on their systems.

3PAR does not have any flash cache capabilities so they do require (in this comparison) double the spindle count to achieve the same performance results. Obviously in a newer system configuration 3PAR would likely configure a system with SSDs and sub LUN auto tiering to compensate for the lack of a flash based cache. This does not completely compensate however, and of course I have been hounding 3PAR and HP for at least four years now to develop some sort of cache technology that leverages flash. They announced SmartCache in December 2012 (host-based SSD caching for Gen8 servers), however 3PAR integration has yet to materialize.

However keep in mind the NetApp flash cache only accelerates reads. If you have a workload like mine which is 90%+ write the flash cache doesn’t help.

Conclusion

NetApp certainly makes good systems, they offer a lot of features, and have respectable performance. The systems are very very flexible and they have a very unified product line up (same software runs across the board).

For me personally after seeing results like this I feel continually reassured that the 3PAR architecture was the right choice for my systems vs  NetApp (or other 3 letter storage companies).  But not everyone’s priorities are the same. I give NetApp props for continuing to support SPC-1 and being public with their numbers. Maybe some day these flashy storage startups will submit SPC-1 results…….not holding my breath though.

May 23, 2013

3PAR 7400 SSD SPC-1

Filed under: Storage — Nate @ 10:41 am

I’ve been waiting to see these final results for a while, and now they are out! The numbers (performance + cost + latency) are actually better than I was expecting.

You can see a massive write up I did on this platform when it was released last year.

(last minute edits to add a new Huawei result that was released yesterday)

(more last minute edits to add an HP P6500 EVA SPC-1E result)

SPC-1 Recap

I’ll say this again in case this happens to be read by someone who is new here. Myself, I see value in the SPC-1 as it provides a common playing field for reporting on performance in random transactional workloads (the vast majority of workloads are transactional). On top of the level playing field the more interesting stuff comes in the disclosures of the various vendors. You get to see things like

  • Cost (SpecSFS for example doesn’t provide this, and the resulting claims from vendors showing high performance relative to others at a massive cost premium, without disclosing the costs, are very sad)
  • Utilization (SPC-1 minimum protected utilization is 55%)
  • Configuration complexity (only available in the longer full disclosure report)
  • Other compromises the vendor might have made (see the note about disabling cache mirroring)
  • 3 year 24×7 4 hour on site hardware support costs

There is a brief executive summary as well as what is normally a 50-75 page full disclosure report with the nitty gritty details.

SPC-1 also has maximum latency requirements – no I/O request can take longer than 30ms to serve or the test is invalid.

There is another test suite – SPC-2 – which tests throughput in various ways. Far fewer systems participate in that test (3PAR never has, though I’d certainly like them to).

Having gone through several storage purchases over the years I can say from personal experience it is a huge pain to try to evaluate stuff under real workloads – often times vendors don’t even want to give evaluation gear (that is in fact in large part why I am a 3PAR customer today). Even if you do manage to get something in house to test, there are many things out there, with wide ranging performance / utilization ratios. At least with something like SPC-1 you can get some idea how the system performs relative to other systems at non trivial utilization rates. This example is rather extreme but is a good illustration.

I have no doubt the test is far from perfect, but in my opinion at least it’s far better than the alternatives, like people running 100% read tests with IOMeter to show they can get 1 million IOPS.

I find it quite strange that none of the new SSD startups have participated in SPC-1. I’ve talked to a couple of different ones and they don’t like the test; they give the usual lines: it’s not real world, customers should take the gear and test it out themselves. Typical stuff. That usually means they would score poorly – especially those that leverage SSD as a cache tier. With the high utilization rates of SPC-1 you are quite likely to blow out that tier, and once that happens performance tanks. I have heard reports of some of these systems getting yanked out of production because they fail to perform after utilization goes up. The system shines like a star during a brief evaluation – then after several months of usage and increasing utilization, performance no longer holds up.

One person said their system is optimized for multiple workloads and SPC-1 is a single workload. I don’t really agree with that; SPC-1 does a ton of reads and writes all over the system, usually from multiple servers simultaneously. I look back to 3PAR specifically, who have been touting multiple workload (and mixed workload) support since their first array was released more than a decade ago. They have participated in SPC-1 for over a decade as well, so arguments that testing is too expensive and so on don’t hold water either. They did it when they were small, on systems designed from the ground up for multiple workloads (not just riding a wave of fast underlying storage and hoping that can carry them), so these new small folks can do it too. If they can come up with a better test with similar disclosures I’m all ears too.

3PAR Architecture with mixed workloads

The one place where I think SPC-1 could be improved is in failure testing. Testing a system in a degraded state to see how it performs.

The results below are from what I could find of all-SSD SPC-1 results. If there are one or more I have missed (other than TMS, see the note below), let me know. I did not include the IBM servers with SSDs, since those are.. servers.

Test Dates

System Name                     | Date Tested
HP 3PAR 7400                    | May 23, 2013
HP P6500 EVA (SPC-1E)           | February 17, 2012
IBM Storwize V7000              | June 4, 2012
HDS Unified Storage 150         | March 26, 2013
Huawei OceanStor Dorado2100 G2  | May 22, 2013
Huawei OceanStor Dorado5100     | August 13, 2012

I left out the really old TMS (now IBM) SPC-1 results as they were from 2011, too old for a worthwhile comparison.

Performance / Latency

System Name                     | SPC-1 IOPS | Avg Latency (all utilization levels) | Avg Latency (max utilization) | # of times above 1ms latency | # of SSDs
HP 3PAR 7400                    | 258,078    | 0.66ms                               | 0.86ms                        | 0 / 15                       | 32 x 200GB
HP P6500 EVA (SPC-1E)           | 20,003     | 4.01ms                               | 11.23ms                       | 13 / 15                      | 8 x 200GB
IBM Storwize V7000              | 120,492    | 2.6ms                                | 4.32ms                        | 15 / 15                      | 18 x 200GB
HDS Unified Storage 150         | 125,018    | 0.86ms                               | 1.09ms                        | 12 / 15                      | 20 x 200GB
Huawei OceanStor Dorado2100 G2  | 400,587    | 0.60ms                               | 0.75ms                        | 0 / 15                       | 50 x 200GB
Huawei OceanStor Dorado5100     | 600,052    | 0.87ms                               | 1.09ms                        | 7 / 15                       | 96 x 200GB

A couple of my own data points:

  • Avg latency (All utilization levels) – I just took aggregate latency of “All ASUs” for each of the utilization levels and divided it by 6 (the number of utilization levels)
  • Number of times above 1ms of latency – I just counted the number of cells in the I/O throughput table for each of the ASUs (15 cells total) that the test reported above 1ms of latency
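
To make those two derivations concrete, here is a minimal sketch in Python; the latency values and the 3 x 5 cell layout below are placeholders for illustration only, not figures from any one report:

# Sketch of the two derived latency metrics described in the bullets above.
# Placeholder numbers -- the real values come from each report's "All ASUs" rows.

# Aggregate "All ASUs" latency (ms) at each of the six SPC-1 utilization levels
all_asu_latency_ms = [0.45, 0.52, 0.60, 0.68, 0.74, 0.86]
avg_latency_all_levels = sum(all_asu_latency_ms) / len(all_asu_latency_ms)

# I/O throughput table latency cells for the ASUs (15 cells total, shown here as 3 x 5)
asu_cells_ms = [
    [0.70, 0.72, 0.75, 0.78, 0.80],
    [0.65, 0.66, 0.70, 0.73, 0.76],
    [0.40, 0.42, 0.45, 0.48, 1.10],
]
cells_above_1ms = sum(1 for row in asu_cells_ms for cell in row if cell > 1.0)

print("avg latency (all utilization levels): {:.2f}ms".format(avg_latency_all_levels))
print("cells above 1ms: {} / 15".format(cells_above_1ms))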

Cost

System Name                     | Total Cost | Cost per SPC-1 IOP | Cost per Usable TB
HP 3PAR 7400                    | $148,737   | $0.58              | $133,019
HP P6500 EVA (SPC-1E)           | $130,982   | $6.55              | $260,239
IBM Storwize V7000              | $181,029   | $1.50              | $121,389
HDS Unified Storage 150         | $198,367   | $1.59              | $118,236
Huawei OceanStor Dorado2100 G2  | $227,062   | $0.57              | $61,186
Huawei OceanStor Dorado5100     | $488,617   | $0.81              | $77,681
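
The per-unit columns fall straight out of the disclosed totals; a quick sketch using the 3PAR 7400 row (cost per usable TB works the same way, dividing total cost by the usable capacity shown in the table further down):

# Cost per SPC-1 IOP = tested configuration price / SPC-1 IOPS (3PAR 7400 row)
total_cost_usd = 148737
spc1_iops = 258078
print("cost per SPC-1 IOP: ${:.2f}".format(total_cost_usd / spc1_iops))  # ~$0.58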

Capacity Utilization

System Name                     | Raw Capacity | Usable Capacity | Protected Application Utilization
HP 3PAR 7400                    | 3,250 GB     | 1,159 GB        | 70.46%
HP P6500 EVA (SPC-1E)           | 1,600 GB     | 515 GB          | 64.41%
IBM Storwize V7000              | 3,600 GB     | 1,546 GB        | 84.87%
HDS Unified Storage 150         | 3,999 GB     | 1,717 GB        | 85.90%
Huawei OceanStor Dorado2100 G2  | 10,002 GB    | 3,801 GB        | 75.97%
Huawei OceanStor Dorado5100     | 19,204 GB    | 6,442 GB        | 67.09%

Thoughts

The new utilization charts in the latest 3PAR/Huawei tests are quite nice to see, really good illustrations as to where the space is being used. They consume a full 3 pages in the executive summary. I wish SPC would go back and revise previous reports so they have these new easier forms of disclosure in them. The data is there for users to compute on their own.

HP EVA

This is an SPC-1E result rather than SPC-1 – I believe the workload is the same(?), they just measure power draw in addition to everything else. The stark contrast between the new 3PAR and the older P6500 is remarkable from every angle, whether it is cost, performance, capacity or latency. Any way you slice it (well, except power – I am sure the 3PAR draws more power 🙂 )

It is somewhat interesting in the power results for the P6500 that there is only a 16 watt difference between 0% load and 100% load.

I noticed that the P6500 is no longer being sold (P6550 was released to replace it – and the 3PAR 7000-series was released to replace the P6550 which is still being sold).

Huawei

While I don’t expect Huawei to be a common rival for the other three outside of China perhaps, I find their configuration very curious. On the 5100, with such a large number of apparently low cost SLC(!) SSDs, and “short stroking” (even though there are no spindles I guess the term can still apply), they have managed to provide a significant amount of performance at a reasonable cost. I am confused though: they claim SLC, yet they have so many drives (you would think you’d need fewer with SLC), and at a much lower cost. Doesn’t compute..

No software

Huawei appears to have absolutely no software options for these products – no thin provisioning, no snapshots, no replication, nothing. Usually vendors don’t include any software options as part of the testing since they are not used. In this case the options don’t appear to exist at all.

They seem to be more in line with something like the LSI/NetApp E-series, or Infortrend, rather than an enterprise storage system. Though looking at Infortrend’s site earlier this morning shows them supporting thin provisioning, snapshots, and replication on some arrays. Even NetApp seems to include thin provisioning on their E-series.

3PAR

3PAR’s metadata

3PAR’s utilization in this test is hampered by (relatively) excessive metadata. The utilization results show only a 7% unused storage ratio, which on the surface is an excellent number, but this number excludes metadata, which in this case is 13% (418GB) of the system. Given the small capacity of the system this has a significant impact on utilization (compared to 3PAR’s past results). They are working to improve this.

The next largest metadata size among the above systems is IBM, which has only 1GB of metadata (about 99.8% less than 3PAR). I would be surprised if 3PAR was not able to significantly slash the metadata size in the future.

3PAR 7400 SSD SPC-1 Configured Storage Capacity (one of the new charts from the SPC-1 report)

In the grand scheme of things this problem is pretty trivial. It’s not as if the meta data scales linearly with the system.

Only quad controller system

3PAR is the only SSD solution above tested with 4 controllers (totaling 4 Gen4 ASICs, 24 x 1.8GHz Xeon CPU cores, 64GB of data cache, and 32GB of control cache), meaning with their persistent cache technology (which is included at no extra cost) you can lose a controller and keep a fully protected and mirrored write cache. I don’t believe any of the other systems are even capable of such a configuration regardless of cost.

3PAR Persistent Cache mirrors cache from a degraded controller pair to another pair in the cluster automatically.

The 7400 managed to stay below 1 millisecond response times even at maximum utilization which is quite impressive.

Thin provisioning built in

The new license model of the 3PAR 7000 series means this is the first SPC-1 result to include thin provisioning, for a 3PAR system at least. I’m sure they did not use thin provisioning (no point when you’re driving to max utilization), but from a cost perspective it is something good to keep in mind. In the past thin provisioning would add significant costs onto a 3PAR system. I believe thin provisioning is still a separate license on the P10000-series (though I would not be surprised if that changes as well).

Low cost model

They managed to do all of this while remaining a lower cost offering than the competition – the economics of this new 7000 series are remarkable.

IBM’s poor latency

IBM’s V7000 latency is really terrible relative to HDS and HP. I guess that is one reason they bought TMS.  Though it may take some time for them to integrate TMS technology (assuming they even try) to have similar software/availability capabilities as their main enterprise offerings.

Conclusion

With these results I believe 3PAR is showing well that they too can easily compete for the all-SSD market opportunities without requiring excessive amounts of rack space or power circuits as some of their previous systems required. All of that performance (only 32 of the 48 drive bays are occupied!) in a small 4U package. Previously you’d likely be looking at an absolute minimum of half a rack!

I don’t know whether or not 3PAR will release performance results for the 7000 series on spinning rust; it’s not too important at this point though. The system architecture is distributed and they have proven time and again they can drive high utilization, so it’s just a matter of knowing the performance capacity of the controllers (which we have here) and throwing as much disk as you want at it. The 7400 series tops out at 480 disks at the moment – even if you loaded it up with 15k spindles you wouldn’t come close to the peak performance of the controllers, as the rough numbers below illustrate.
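
A back-of-the-envelope sketch of that claim, assuming something in the neighborhood of 240 SPC-1 IOPS per 15k spindle (roughly in line with the per-disk numbers 3PAR's spinning-disk results have shown):

# Could 480 x 15k spindles saturate controllers that just did ~258k SPC-1 IOPS on SSD?
max_spindles = 480           # 3PAR 7400 maximum at the time of writing
iops_per_15k_disk = 240      # assumption, see the lead-in above
controller_ceiling = 258078  # SPC-1 IOPS from the all-SSD result above

disk_limited_iops = max_spindles * iops_per_15k_disk
print("disk-limited: ~{:,} IOPS vs controller ceiling ~{:,}".format(
    disk_limited_iops, controller_ceiling))
# ~115,200 IOPS from a fully loaded 15k configuration -- well under the controllers' peak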

It is, of course, nice to see 3PAR trouncing the primary competition in price, performance and latency. They have some work to do on utilization, as mentioned above.

June 22, 2012

NetApp Cluster SPC-1

Filed under: Storage — Nate @ 7:19 pm

Sorry for the off topic posts recently – here is something a little more on topic.

I don’t write about NetApp much, mainly because I believe they have some pretty decent technology; they aren’t a Pillar or an Equallogic. Though sometimes I poke fun. BTW did you hear about that senior Oracle guy that got canned recently and the comments he made about Sun? Oh my, was that funny. I can only imagine what he thought of Pillar. Then there are the folks saying Oracle is heavily discounting software so they can sell hardware at list price, thus propping up the revenues; the net result is Oracle software folks hate Sun. Not a good situation to be in. I don’t know why Oracle couldn’t have just been happy owning the BEA JRockit JVM and let Sun wither away.

Anyways…

NetApp tried to make some big news recently when they released their newest OS, Ontap 8.1.1. For such a minor version number change (8.1 -> 8.1.1) they sure did try to raise a big fuss about it. Shortly after 8.1 came out I came across some NetApp guy’s blog who was touting this release quite heavily. I was interested in some of the finer points and tried to ask some fair technical questions – I like to know the details. Despite me being a 3PAR person I tried really hard to be polite and balanced, and the blogger was very thoughtful, informed and responsive and gave a great reply to my questions.

Anyways, I’m still sort of unclear on what is really new in 8.1.1 vs 8.1 – it sounds to me like it’s just some minor changes from a technical side and they slapped some new marketing on top of it. Well, I think the new hybrid aggregates are perhaps specifically new to 8.1.1 (also, I think, some new form of Ontap that can run in a VM for small sites). Maybe 8.1 by itself didn’t make a big enough splash. Or maybe 8.1.1 is what 8.1 was supposed to be (I think I saw someone mention that perhaps 8.1 was a release candidate or something). The SpecSFS results posted by NetApp for their clusters are certainly pretty impressive from a raw performance standpoint. They illustrate excellent scalability up to 24 nodes.

But the whole story isn’t told in the SpecSFS results – partially because things like cost are not disclosed in the results, and partially because it doesn’t illustrate the main weakness of the system: it’s not a single file system, and it’s not automatically balanced from either a load or a space perspective.

But I won’t harp on that much; this post is about their recent SPC-1 results which I just stumbled upon. These are the first real SPC-1 results NetApp has posted in almost four years – you sort of have to wonder what took them so long. I mean, they did release some SPC-1E results a while back but those are purely targeting energy measurements. For me at least, energy usage is probably not even in the top 5 things I look for when I want some new storage. The only time I really care about energy usage is if the installation is really, really small. I mean like the whole site being less than one rack. Energy efficiency is nice but there are a lot of things that are higher on my priority list.

This SPC-1 result from them is built using a 6-node cluster, 3TB of flash cache and 288 GB of data cache spread across the controllers, and only 432 disks – 144 x 450GB per pair of controllers protected with RAID DP. The cost given is $1.67M for the setup. They say it is list pricing – so not being a customer of theirs I’m not sure if it’s apples to apples compared to other setups – some folks show discounted pricing and some show list – I would think it would heavily benefit the tester to illustrate the typical price a customer would pay for the configuration.

  • 250,039 IOPS  @ 3.35ms latency  ($6.89 per SPC-1 IOP)
  • 69.8TB Usable capacity ($23,947 per Usable TB)

Certainly a very respectable I/O number and really amazing latency – I think this is the first real SPC-1 result that is flash accelerated (as opposed to being entirely flash).

What got me thinking though was the utilization. I ragged on what could probably be considered a tier 3 or 4 storage company a while back for just inching by the SPC-1 minimum efficiency requirements. The maximum unused storage cannot exceed 45% and that company was at 44.77%.

Where’s NetApp with this? Honestly higher than I thought, especially considering RAID DP – they are at 43.20% unused storage. I mean really – would it not make more sense to simply use RAID 10 and get the extra performance? I understand that NetApp doesn’t support RAID 10 but it just seems a crying shame to have such low utilization of the spindles. I really would have expected the flash cache to allow them to drive utilization up. But I suppose they decided to inch out more performance at the cost of usable capacity. I’d honestly be fascinated to see results where they drive the unused storage ratio down to, say, 20%.

The flash cache certainly does a nice job at accelerating reads and letting the spindles run more writes as a result. Chuck over at EMC wrote an interesting post where he picked apart the latest NetApp release. What I found interesting from an outsider perspective is how so much of this new NetApp technology feels bolted on rather than integrated. They seem unable to adapt the core of their OS to this (now old) scale out Spinnaker stuff even after this many years have elapsed. From a high level perspective the new announcements really do sound pretty cool. But once I got to know more about what’s on the inside, I became less enthusiastic about them. There’s some really neat stuff there but at the same time some pretty dreadful shortcomings in the system still (see the NetApp blog posts above for info).

The plus side though is that at least parts of NetApp are becoming more up front with where they target their technology. Some of the posts I have seen recently both in comments on The Register as well as the NetApp blog above have been really excellent. These posts are honest in that they acknowledge they can’t be everything to everyone, they can’t be the best in all markets. There isn’t one storage design to rule them all. As EMC’s Chuck said – compromise. All storage systems have some degree of compromise in them, NetApp always seems to have had less compromise on the features and more compromise on the design. That honesty is nice to see coming from a company like them.

I met with a system engineer of theirs about a year ago now when I had a bunch of questions to ask and I was tired of getting pitched nothing but dedupe. This guy from NetApp came out and we had a great talk for what must’ve been 90 minutes. Not once was the word dedupe used and I learned a whole lot more about the innards of the platform. It was the first honest discussion I had had with a NetApp rep in all the years I had dealt with them off and on.

At the end of the day I still wasn’t interested in using the storage but felt that hey – if some day I really feel a need to combine the best storage hardware with what many argue is the best storage software (management headaches aside, e.g. no true active-active automagic load balanced clustering), I can – just go buy a V-series and slap it in front of a 3PAR. I did it once before (really only because there was no other option at the time). I could do it again. I don’t plan to (at least not at the company I’m at now). But the option is there. Just as long as I don’t have to deal with the NetApp team in the Northwest and their dirty underhanded threatening tactics. I’m in the Bay area now so that shouldn’t be hard. The one surprising thing I heard from the reps here is they still can’t do evaluations. Which just seems strange to me. The guy told me if a deal hinged on an evaluation he wouldn’t know what to do.

3PAR of course has no such flash cache technology shipping today, something I’ve brought up with the head of HP storage before. I have been wanting them to release something like it for some time now (more specifically something like EMC’s FAST Cache – EMC has made some really interesting investments in flash over recent years, but like NetApp, at least for me the other compromises involved in using an EMC platform don’t make me want to use it over a 3PAR even though they have this flash technology). I am going to be visiting 3PAR HQ soon and will learn a lot of cool things I’m sure that I won’t be able to talk about for some time to come.

February 3, 2012

IBM shows it still has some horses left

Filed under: Storage — Nate @ 11:15 am

I noticed a few days ago that IBM posted some new SPC-1 results based on their SVC system, this time using different back end storage – their Storwize product (something I had not heard of before, but I don’t pay too close attention to what IBM does; they have so many things it’s hard to keep track).

The performance results are certainly very impressive, coming in at over 520,000 IOPS at a price of $6.92 per IOP. This is the sort of result I was kind of expecting from the Hitachi VSP a while back. IBM tested with 1,920 drives, the same number as the 3PAR V800. They bested the 3PAR performance by a good 70,000 IOPS with half the latency, on the same number of disks and less data cache.

The capacity numbers were, and still are, sort of difficult to interpret; they seem to give conflicting information. IBM is using ~138TB of disk space to protect ~99TB of disk space, while 3PAR is using ~263TB of disk space to protect ~263TB of disk space. Both results say there is 30TB+ of “unused storage” in that protection scheme.

Bottom line is the IBM box is presented with roughly 280TB of storage, and of that, 100TB is usable, or about 35%. That brings their cost per usable TB number to $36,881/TB vs the 3PAR V800 which is roughly $12,872. The V800 I/O cost was $6.59, which IBM comes real close to.
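
A rough reconstruction of that math, using only the figures quoted above (so treat the totals as approximate):

# Rough reconstruction of the IBM SVC/Storwize usable-capacity math above
cost_per_iop = 6.92
spc1_iops = 520000                      # "over 520,000"
total_cost = cost_per_iop * spc1_iops   # roughly $3.6M for the tested configuration

presented_tb = 280                      # roughly what the box is presented with
cost_per_usable_tb = 36881
usable_tb = total_cost / cost_per_usable_tb

print("usable: ~{:.0f} TB ({:.0%} of presented)".format(usable_tb, usable_tb / presented_tb))
# ~98 TB usable, or roughly 35% -- vs the 3PAR V800 at roughly $12,872 per usable TB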

IBM has apparently gone the same route as HDS in that the only 3.5″ drives they support on their Storwize systems are 3TB SATA disks. They hamper their own cost structure by not supporting larger 3.5″ 15k RPM SAS disks, which just doesn’t make sense to me. There are 300GB 15k SAS drives out there and Storwize doesn’t support those either (yet, at least).

It took about five pages of scripting to configure the system from the looks of the full disclosure report.

Certainly looks like a halfway decent system. I mean, if you compare it to the VSP for example: it has the same array virtualization abilities via the SVC, it sports almost double the number of disk drives and almost double the raw performance, and configuration at least appears to be less complicated. It uses those power efficient 2.5″ disks just like the VSP. It also costs quite a bit less than the VSP on both a per-IOP and per-TB basis. It also appears to have mainframe support for those that need that. From the looks of Seagate’s 15k RPM disks at least, the 2.5″ drives have an average of 15% less latency for random reads and writes than their 3.5″ counterparts. I thought the difference might be bigger than that given how much less distance the disk heads have to travel.

If I was in the market for such a big system, these results wouldn’t lead me away from 3PAR, at least based on the pricing disclosed for each system (and the level of complexity to configure). I was interviewing a candidate a few weeks ago and this guy had a strong storage background. Having worked for Symantec, I think, for a while he was doing some sort of storage consulting at various companies. I asked him how he provisioned storage, what his strategies were. His response was quite surprising. He said usually the vendors come out, deploy their systems and provision everything up front; all he does is carve out LUNs and present them to users. He had never been involved in the architecture planning or deployment of a storage system. He acted as if what he was doing was the standard practice (maybe it is at large companies – I’ve never worked at such an organization), and that it was perfectly normal.

But it certainly seems like a good system when put up against at least the VSP, and probably the V-MAX too.

I’ve always been interested in the SVC by itself; it certainly seems like a cool concept. I’ve never used one of course, but having the ability to cluster at that intermediate level (in this case an 8-node cluster, which may be the max, I’m not sure) and then scale out storage behind it is appealing. Clearly they’ve shown with this you can pump one hell of a lot of I/O through the thing. They also seem to have SSD tiering support built into it which is nice as well.

Hopefully HP can come up with something similar at some point, as much as they talk smack about the likes of SVC today.

November 4, 2011

Hitachi VSP SPC-1 Results posted

Filed under: Storage — Nate @ 7:41 pm

[(More) Minor updates since original post] I don’t think I’ll ever work for a company that is in a position to need to leverage something like a VSP, but that doesn’t stop me from being interested in how it performs. After all, it has been almost four years since Hitachi released results for their USP-V, which had, at the time, very impressive numbers (record breaking even, only to be dethroned by the 3PAR T800 in 2008) from a performance perspective (not so much from a cost perspective, not surprisingly).

So naturally, ever since Hitachi released their VSP (which is OEM’d by HP as their P9500) about a year ago, I have been very curious as to how well it performs; it certainly sounded like a very impressive system on paper with regards to performance. After all, it can scale to 2,000 disks and a full terabyte of cache. I read an interesting white paper (I guess you could call it that) recently on the VSP out of curiosity.

One of the things that sort of stood out is the claim of being purpose built; they’re still using a lot of custom components and a monolithic architecture on the system, vs most of the rest of the industry which is trying to be more of a hybrid of modular commodity and purpose built – the EMC VMAX is a good example of this trend. The white paper, which was released about a year ago, even notes:

There are many storage subsystems on the market but only few can be considered as real hi-end or tier-1. EMC announced its VMAX clustered subsystem based on clustered servers in April 2009, but it is often still offering its predecessor, the DMX, as the first choice. It is rare in the industry that a company does not start ramping the new product approximately half a year after launching. Why EMC is not pushing its newer product more than a year after its announcement remains a mystery. The VMAX scales up by adding disks (if there are empty slots in a module) or adding modules, the latter of which are significantly more expensive. EMC does not participate in SPC benchmarks.

The other interesting aspect of the VSP is its dual chassis design (each chassis in a separate rack), which, the white paper says, is not a cluster, unlike a VMAX or a 3PAR system (3PAR isn’t even a cluster the way VMAX is a cluster – intentionally, with the belief that the more isolated design would lead to higher availability). I would assume this is in response to earlier criticisms of the USP-V design, in which the virtualization layer was not redundant (when I learned that I was honestly pretty shocked) – Hitachi later rectified the situation on the USP-V by adding some magic sauce that allowed you to link a pair of them together.

Anyways with all this fancy stuff, obviously I was pretty interested when I noticed they had released SPC-1 numbers for their VSP recently. Surely, customers don’t go out and buy a VSP because of the raw performance, but because they may need to leverage the virtualization layers to abstract other arrays, or perhaps the mainframe connectivity, or maybe they get a comfortable feeling knowing that Hitachi has a guarantee on the array where they will compensate you in the event there is data loss that is found to be the fault of the array (I believe the guarantee only covers data loss and not downtime, but am not completely sure). After all, it’s the only storage system on the planet that has such a stamp of approval from the manufacturer (earlier high end HDS systems had the same guarantee).

Whatever the reason,  performance is still a factor given the amount of cache and the sheer number of drives the system supports.

One thing that is probably never a factor is ease of use – the USP/VSP are complex beasts to manage, something for which you very likely need significant amounts of training. One story I remember being told is that a local HDS rep in the Seattle area mentioned to a 3PAR customer “after a few short weeks of training you’ll feel as comfortable on our platform as you do on 3PAR”. Something like that; the customer said “you just made my point for me”. Something like that anyways.

Would the VSP dethrone the HP P10000? I found the timing of the release of the numbers an interesting coincidence – after all, the VSP is over a year old at this point, why release now?

So I opened the PDF, and hesitated .. what would be the results? I do love 3PAR stuff and I still view them as  somewhat of an underdog even though they are under the HP umbrella.

Wow, was I surprised at the results.

Not. Even. Close.

The VSP’s performance was not much better than that of the USP-V which was released more than four years ago. The performance itself is not bad but it really puts the 3PAR V800 performance in some perspective:

  • VSP -   ~270,000 IOPS @ $8.18/IOP
  • V800 – ~450,000 IOPS @ $6.59/IOP

But it doesn’t stop there

  • VSP -  ~$45,600 per usable TB (39% unused storage ratio)
  • V800 – ~$13,200 per usable TB (14% unused storage ratio)

Hitachi managed to squeeze in just below the limit for unused storage – which is not allowed to be above 45% – sounds kind of familiar. The VSP had only 17TB more usable capacity than the original USP-V as tested. This really shows how revolutionary sub disk distributed RAID is for getting higher capacity utilization out of your system, and why I was quite disappointed when the VSP came out without anything resembling it.

Hitachi managed to improve their cost per IOP on the VSP vs the USP but their cost per usable TB has skyrocketed, about triple the price of the original USP-V results. I wonder why this is? (I miscounted the digits in my original calculations; in fact the VSP is significantly cheaper than the USP was when it was tested!) One big change in the VSP is the move to 2.5″ disk drives. The decision to use 2.5″ drives did kind of hamper the results I believe, since this is not an SPC-1/E Energy Efficiency test so power usage was never touted. But the largest 2.5″ 15k RPM drive that is available for this platform is 146GB (which is what was used).

One of the customers (I presume) at the HP Storage presentation I was at recently was kind of dogging 3PAR, saying the VSP needs less power per rack than the T or V class (which requires 4 x 208V 30A per fully loaded rack).

I presume it needs less power because of the 2.5″ drives; also, overall the drive chassis on 3PAR boxes do draw quite a bit of power by themselves (200W empty on T/V, 150W empty on F – given the T/V hold 250% more drives per chassis it’s a pretty nice upgrade).

Though to me especially on a system like this, power usage is the last thing I would care about. The savings from disk space would pay for the next 10 years of power on the V800 vs the VSP.

My last big 3PAR array cut the number of spindles in half and the power usage in half vs the array it replaced, while giving us roughly 50% better performance at the same time and the same amount of usable storage. So knowing that, and the efficiency in general I’m much less concerned as to how much power a rack requires.

So Hitachi can get 384 x 2.5″ 15k RPM disks in a single rack, and draw less power than 3PAR does with 320 disks in a single rack.

You could also think of it this way: Hitachi can get ~56TB RAW of 15k RPM space in a single rack, and 3PAR can get 192TB RAW of 15k RPM space in a single rack – nearly four times the RAW space for double the power, and nearly five times the usable space (4.72x precisely – V800 @ 80% utilization, VSP @ 58%) due to the architecture. For me that’s a good trade off, a no brainer really.
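
The 4.72x figure works out roughly like this (a quick sketch using the per-rack raw numbers above, which imply 600GB 15k drives on the 3PAR side, and the utilization rates from the two SPC-1 results):

# Usable 15k capacity per rack: VSP vs 3PAR V800, from the figures above
vsp_raw_tb = 384 * 0.146          # 384 x 146GB 2.5" 15k drives -> ~56 TB raw per rack
v800_raw_tb = 320 * 0.600         # 320 x 600GB 3.5" 15k drives -> 192 TB raw per rack

vsp_usable = vsp_raw_tb * 0.58    # ~58% utilization in the VSP SPC-1 result
v800_usable = v800_raw_tb * 0.80  # ~80% utilization in the V800 SPC-1 result

print("usable per rack: VSP ~{:.1f} TB, V800 ~{:.1f} TB ({:.2f}x)".format(
    vsp_usable, v800_usable, v800_usable / vsp_usable))
# roughly 32.5 TB vs 153.6 TB, or about 4.7x the usable space per rack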

The things I am not clear on with regards to these results: is this the best the VSP can do? This does appear to be a dual chassis system. The VSP supports 2,000 spindles; the system tested only had 1,152, which is not much more than the 1,024 tested on the original USP-V. Also, the VSP supports up to 1TB of cache, however the system tested only had 512GB (the 3PAR V800 had 512GB of data cache too).

Maybe it is one of those situations where this is the optimal bang for the buck on the platform; perhaps the controllers just don’t have enough horsepower to push 2,000 spindles at full tilt – I don’t know.

Hitachi may have purpose built hardware, but it doesn’t seem to do them a whole lot of good when it comes to raw performance. I don’t know about you but I’d honestly feel let down if I was an HDS customer. Where is the incentive to upgrade from USP-V to VSP? The cost for usable storage is far higher, the performance is not much better. Cost per IOP is less, but I suspect the USP-V at this stage of the game, with more current pricing for disks and such, would be even more competitive than the VSP. (Correction from above, due to my misreading of the digits – the USP-V was $135k/TB, not $13.5k/TB, so the VSP is in fact cheaper, making it a good upgrade from the USP from that perspective! Sorry for the error 🙂 )

Maybe it’s in the software – I’ve of course never used a USP/VSP system but perhaps the VSP has newer software that is somehow not backwards compatible with the USP, though I would not expect this to be the case.

Complexity is still high, the configuration of the array stretches from page 67 of the full disclosure documentation to page 100. 3PAR’s configuration by contrast is roughly a single page.

Why do you think someone would want to upgrade from a USP-V to a VSP?

I still look forward to the day when the likes of EMC make their VMAX architecture across their entire product line, and HDS also unifies their architecture. I don’t know if it’ll ever happen but it should be possible at least with VMAX given their sound thumping of the Intel CPU drum set.

The HDS AMS 2000 series of products was by no means compelling either, when you could get higher availability (by means of persistent cache with 4 controllers), three times the amount of usable storage, and about the same performance on a 3PAR F400 for about the same amount of money.

Come on EMC, show us some VMAX SPC-1 love, you are, after all a member of SPC now (though kind of odd your logo doesn’t show up, just the text of the company name).

One thing that has me wondering on this VSP configuration – with so little disk space available on the system, I have to wonder why anyone would bother with spinning rust on this platform for performance rather than just going SSD instead. Well, one reason may be that a 400GB SSD would run $35-40,000 (after 50% discount, assuming that is list price), ouch. 220 of those (the system maximum is 256) would net you roughly 88TB raw (maybe 40TB usable), but cost $7.7M for the SSDs alone.

On the V800 side a 4-pack of 200GB SSDs will cost you roughly $25,000 after discount (assuming that is list price). Fully loading a V800 with the maximum of 384 SSDs (77TB raw) would cost roughly $2.4M for the SSDs alone (and still consume 7 racks). I think I like the 3PAR SSD strategy more — not that I would ever do such a configuration!

Space would be more of course if the SSD tests used RAID 5 instead of RAID 1, I just used RAID 1 for simplicity.
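
Here is the back-of-the-envelope math behind those two SSD figures, using the assumed discounted prices above (raw capacity and SSD cost are unaffected by the RAID 1 simplification):

# SSD-only cost comparison from the paragraphs above (assumed discounted pricing)
# VSP: 400GB SSDs at roughly $35k each after discount, 220 drives of a 256 maximum
vsp_ssd_count, vsp_ssd_tb, vsp_ssd_price = 220, 0.4, 35000
vsp_raw_tb = vsp_ssd_count * vsp_ssd_tb                  # ~88 TB raw
vsp_ssd_cost = vsp_ssd_count * vsp_ssd_price             # ~$7.7M for the SSDs alone

# V800: 200GB SSDs at roughly $25k per 4-pack after discount, 384 drive maximum
v800_ssd_count, v800_ssd_tb, v800_pack_price = 384, 0.2, 25000
v800_raw_tb = v800_ssd_count * v800_ssd_tb               # ~77 TB raw
v800_ssd_cost = (v800_ssd_count // 4) * v800_pack_price  # ~$2.4M for the SSDs alone

print("VSP : {:.0f} TB raw, ${:.1f}M in SSDs".format(vsp_raw_tb, vsp_ssd_cost / 1e6))
print("V800: {:.0f} TB raw, ${:.1f}M in SSDs".format(v800_raw_tb, v800_ssd_cost / 1e6))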

Goes to show that these big boxes have a long way to go before they can truly leverage the raw performance of SSD. With 200-300 SSDs in either of these boxes the controllers would be maxed out and the SSDs would probably, for the most part, be idle. I’ve been saying for a while how stupid it seems for such a big storage system to be overwhelmed so completely by SSDs, and that the storage systems need to be an order of magnitude (or more) faster. 3PAR made a good start with their doubling of performance, but I’m thinking these boxes need to do at least 1M IOPS to disk, if not 2M — how long will it take to get there?

Maybe HDS could show better results with 900GB 10k RPM disks (putting in something like 1,700 disks instead of 1,152) in place of the 146GB 15k RPM disks; that should show much lower per-usable-TB costs, though their IOPS cost would probably shoot up. Though from what I can see, 600GB is the largest 2.5″ 10k RPM disk supported. 900GB @ 58% utilization would yield about 522GB, and 600GB on 3PAR @ 80% utilization would yield about 480GB.

I suspect they could get their costs down significantly more if they went old school and supported larger 3.5″ 15k RPM disks and used those instead (the VSP supports 3.5″ disks, just nothing other than 2TB Nearline (7200 RPM) and 400GB Flash – a curious decision). If you were a customer isn’t that something you’d be interested in? Though another compromise you make with 3.5″ disks on the VSP is that you’re then limited to 1,280 spindles, rather than 2,048 with 2.5″. Though this could be a side effect of a maximum addressable capacity of 2.5PB, which is reached with 1,280 2TB disks; they very well could probably support 2,048 3.5″ disks if they could fit in the 2.5PB limit.

Their documentation says the actual maximum usable with Nearline 2TB disks is 1.256PB (with RAID 10). With 58% capacity utilization that 1.25PB drops dramatically to 742TB. With RAID 5 (7+1) they say 2.19PB usable; take that 58% number again and you get roughly 1.2PB at 58% capacity utilization.

V800 by contrast would top out at 640TB with RAID 10 @ 80% utilization (stupid raw capacity limits!! *shakes fist at sky*),  or somewhere in the 880TB-1.04PB (@80%) range with RAID 5 (depending on data/parity ratio).

Another strange decision on both HDS and 3PAR’s part was to only certify the 2TB disk on their platforms, and not anything smaller, since both systems become tapped out at roughly half the number of supported spindles when using 2TB disks due to capacity limits.

An even more creative solution (to work around the limitation!!), which I doubt is possible, would be to somehow restrict the addressable capacity of each disk to half its size, so you could in effect get 1,600 2TB disks in each with 1TB of capacity; then when they release software updates to scale the box to even higher levels of capacity (32GB of control cache can EASILY handle more capacity), just unlock the drives and get that extra capacity for free. That would be nice at least, though it probably won’t happen. I bet it’s technically possible on the 3PAR platform due to their chunklets. 3PAR in general doesn’t seem as interested in creative solutions as I am 🙂 (I’m sure HDS is even more rigid)

Bottom Line

So HDS has managed to get their cost for usable space down quite a bit, from $135,000/TB to around $45,000/TB, and improve performance by about 25%.

They still have a long way to go in the areas of efficiency, performance, scalability, simplicity and unified architecture. I’m sure there will be plenty of customers out there that don’t care about those things (or are just not completely informed) and will continue to buy the VSP for other reasons; it’s still a solid platform if you’re willing to make those trade offs.

October 19, 2011

Linear scalability

Filed under: Storage — Nate @ 9:56 am

So 3PAR released their SPC-1 results for their Mac daddy P10000, and the results aren’t as high as I originally guessed they might be.

HP claims it is a world record result for a single system. I haven’t had the time yet to try to verify but they are probably right.

I’m going to a big HP/3PAR event later today and will ask my main question – was the performance constrained by the controllers or by the disks? I’m thinking disks, given the IOPS/disk numbers below.

Here are some of the results

System     | Date Tested | SPC-1 IOPS | IOPS per Disk | SPC-1 Cost per IOP | SPC-1 Cost per usable TB
3PAR V800  | 10/17/2011  | 450,212    | 234           | $6.59              | $12,900
3PAR F400  | 4/27/2009   | 93,050     | 242           | $5.89              | $20,308
3PAR T800  | 9/2/2008    | 224,989    | 175           | $9.30              | $26,885

The cost per TB number was slashed because they are using disks that are much larger (300GB vs 147GB on earlier tests).

The cost was pretty reasonable as well coming in at under $7/IOP which is actually less than their previous results on their T800 from 2008 which was already cheap at $9.30/IOP.

It is interesting that they used Windows to run the test, which is a first for them I believe, having used real Unix in the past (AIX and Solaris for T800 and F400 respectively).

The one kind of strange thing, which is typical in 3PAR SPC-1 numbers, is the sheer number of volumes they used (almost 700). I’m not sure what the advantage would be to doing that; another question I will try to seek the answer to.

The system was, as expected, remarkably easy to configure; the entire storage configuration process consisted of this:

# Configure the host-facing FC ports on all eight controller nodes
for n in {0..7}
do
    for s in 1 4 7
    do
        if (($s==1))
        then
            for p in 4
            do
                controlport offline -f $n:$s:$p
                controlport config host -ct point -f $n:$s:$p
                controlport rst -f $n:$s:$p
            done
        fi

        for p in 2
        do
            controlport offline -f $n:$s:$p
            controlport config host -ct point -f $n:$s:$p
            controlport rst -f $n:$s:$p
        done
    done
done

PORTS[0]=":7:2"
PORTS[1]=":1:2"
PORTS[2]=":1:4"
PORTS[3]=":4:2"

# One RAID 1 CPG per node, then the ASU volumes and their VLUN exports
for nd in {0..7}
do
    createcpg -t r1 -rs 120 -sdgs 120g -p -nd $nd cpgfc$nd

    for hba in {0..3}
    do
        # ASU-1 volumes (240GB each)
        for i in {0..14} ; do
            id=$((1+60*nd+15*hba+i))
            createvv -i $id cpgfc${nd} asu1.${id} 240g;
            createvlun -f asu1.${id} $((15*nd+i+1)) ${nd}${PORTS[$hba]}
        done
        # ASU-3 volumes (360GB each)
        for i in {0..1} ; do
            id=$((681+8*nd+2*hba+i))
            j=$((id-680))
            createvv -i $id cpgfc${nd} asu3.${j} 360g;
            createvlun -f asu3.${j} $((2*nd+i+181)) ${nd}${PORTS[$hba]}
        done
        # ASU-2 volumes (840GB each)
        for i in {0..3} ; do
            id=$((481+16*nd+4*hba+i))
            j=$((id-480))
            createvv -i $id cpgfc${nd} asu2.${j} 840g;
            createvlun -f asu2.${j} $((4*nd+i+121)) ${nd}${PORTS[$hba]}
        done
    done
done

Think about that – a $3 million storage system (after discount) configured in less than 50 lines of script?

Not a typical way to configure a system – I had to look at it a couple of times – but it seems they are still pinning volumes to particular controller pairs, and LUNs to particular FC ports. This is what they have done in the past so it’s nothing new, but I would like to see how the system runs without such pinning of resources, letting the inter-node routing do its magic, since that is how customers would run the system.

But that’s what full disclosure is all about, right! Another reason I like the SPC-1 is the in-depth configuration information that you don’t need an NDA to see (and in 3PAR’s case you probably don’t need to attend a 3-week training course to understand!)

I’m trying to think of one but I can’t think of another storage architecture out there that scales as well as the 3PAR Inspire architecture from the low end (F200) to the high end (V800).

The cost of the V800 was a lot more reasonable than I was fearing it might be; it’s only roughly 45% more expensive than the T800 tested in September 2008, and for that extra 45% you get 50% more disks, double the I/O capacity, and almost three times the usable capacity. Oh, and five times more data cache, and 8 times more control cache to boot!

I’m suspecting the ASICs are not being pushed to their limits here in the V800, and that the system can go quite a bit faster provided there is not an I/O bottleneck on the disks behind the controllers.

On the back of these numbers The Register is reporting HP is beefing up the 3PAR sales team after having experienced massive growth over the past year – it seems to be at least roughly a 300% increase in sales over the past year, so much that they are having a hard time keeping up with demand.

I haven’t been happy with the hefty price increases HP has put into the 3PAR product line though in a lot of cases those come back out in the form of discounts. I guess it’s what the market will bear right – as long as things are selling as fast as they can make them HP doesn’t have any need to reduce the price.

I saw an interview with the chairman of HP a few weeks ago, when they announced their new CEO. He mentioned how 3PAR had exceeded their sales expectations significantly, as justification for paying that lofty price to acquire them about a year ago.

So congrats to 3PAR, I knew you could do it!

November 24, 2010

More inefficient storage

Filed under: Storage — Nate @ 8:32 am

Another random thought: I got woken up this morning and wound up checking what’s new on SPC-1, and a couple of weeks ago the Chinese company Huawei posted results for their Oceanspace 8100 8-node storage system. This system seems to be similar to the likes of the HDS USP/VSP and IBM SVC in that it has the ability to virtualize other storage systems behind it. The system is powered by 32 quad core processors, or 128 CPU cores.

The thing that caught my eye is the paragraph that appears in every SPC-1 disclosure:

Unused Storage Ratio: Total Unused Capacity (XXX GB) divided by Physical
Storage Capacity (XXX GB) and may not exceed 45%.

So what is Huawei’s Unused storage ratio? – 44.77%

I wonder how hard it was for them to get under the 45% limit, I bet they were probably at 55-60% and had to yank a bunch of drives out or something to decrease their ratio.

From their full disclosure document it appears their tested system has roughly 261TB of unused storage on it. That’s pretty bad; the 3PAR F400 has a mere 75GB of unused capacity (0.14%) by contrast. The bigger T800 has roughly 21TB of unused capacity (15%).

One would think that Huawei would have been better off using 146GB disks instead of the 300GB, 450GB and 600GB disks (another question is what is the point of mismatched disks for this test – maybe they didn’t have enough of one drive type, which would be odd for a drive array manufacturer, or maybe they mixed drive types to drive the unused capacity down, perhaps after having started with nothing but 600GB disks).

Speaking of drive sizes, one company I know well has a lot of big Oracle databases and is I/O bound more than space bound, so it benefits them to use smaller disk drives; their current array manufacturer no longer offers 146GB disk drives so they are forced to pay quite a bit more for the bigger disks.

Lots of IOPS to be sure, 300,000 of them (260 IOPS per drive) and 320GB of cache (see note below!), but it certainly seems that you could do this a better way..

Looking deeper into the full disclosure documents (Appendix C, page 64) for the Huawei system reveals this little gem:

The creatlun command creates a LUN with a capacity 1,716,606 MiB. The -p 0 parameter, in the creatlun command sets the read cache policy as no prefetch and the -m 0 parameter sets the write cache policy as write cache with no mirroring.

So they seem to be effectively disabling the read cache and disabling cache mirroring, making all cache a write back cache that is not protected? I would imagine they ran the test and found their read cache ineffective, so they disabled it, devoted it to write cache and re-ran the test.

Submitting results without mirrored cache seems, well, misleading to say the least. Glad there is full disclosure!

The approximate cost of the Huawei system seems to be about $2.2 million according to the Google exchange rate.

While I am here, what is it with 8 node storage systems? What is magical about that number? I’ve seen a bunch of different ones both SAN and NAS that top out at eight. Not 10? not 6? Seems a strange coincidence, and has always bugged me for some reason.

October 9, 2010

Capacity Utilization: Storage

Filed under: Storage — Nate @ 9:49 am

So I was browsing through that little drop down address bar in Firefox hitting the sites I usually hit, and I decided hey let’s go look at what Pillar is doing. I’ve never used their stuff but I dig technology you know, so I like to try to keep tabs on companies and products that I haven’t used, and may never consider using, good to see what the competition is up to, because you never know they may come out with something good.

Tired of the thin argument

So the CEO of Pillar has a blog, and he went on a mini rant about how 3PAR^H^H^H^HHP is going around telling people you can get 100TB of capacity in one of their 70TB arrays. I haven't read too deeply into the actual claim they are making, but being so absolutely well versed in 3P..HP technology I can comment with confidence on what their strategy is and how they can achieve those results. Whether or not they are effective at communicating that is another story; I don't know, because I don't read everything they say.

Pillar notes that HP is saying that, thanks to the 3PAR technologies, you can get by with less, and he's tired of hearing that old story over and over.

Forget about thin for the moment!

So let me spray paint another angle for everyone to see. As you know, I follow SPC-1 numbers pretty carefully. Again, not that I really use them to make decisions; I just find the numbers and the disclosure behind them very interesting, and entertaining at times. It is "fun" to see what others can do with their stuff in a way that can be compared on a level playing field.

I wrote what I consider a good article on SPC-1 benchmarks a while back. EMC gave me some flak over it because they don't believe SPC-1 is a valid test; I believe EMC just doesn't like the disclosure requirements, but I'm sure you won't ever hear EMC say that.

SPC-1 Results

So let’s take the one and only number that Pillar published, because, well that’s all I have to go on, I have no personal experience with their stuff, and don’t know anyone that uses it. So if this information is wrong it’s wrong because the results they submitted were wrong.

So, the Pillar Axiom 600's results have not stood the test of time well at all, as you would have noticed in my original article, but to highlight:

  • System tested: January 12, 2009
  • SPC-1 IOPS performance: 64,992
  • SPC-1 Usable space: 10TB
  • Disks used: 288 x 146G 15k RAID 1
  • IOPS per disk: 226 IOPS/disk
  • Average Latency at peak load: 20.92ms
  • Capacity Utilization (my own metric I just thought of): 34.72 GB/disk
  • Cost per usable TB (my own metric extrapolated from SPC-1): $57,097 per TB
  • Cost per IOP (SPC-1 publishes this): $8.79

The 3PAR F400, by contrast, was tested just 105 days later and absolutely destroyed the Pillar numbers, and unlike Pillar's, the F400's numbers have held up very well all the way to the present day (the math behind my derived metrics is sketched out after this list):

  • System tested: April 27, 2009
  • SPC-1 IOPS performance: 93,050
  • SPC-1 Usable space: 27 TB
  • Disks used: 384 x 146G 15k RAID 1
  • IOPS per disk: 242 IOPS/disk
  • Average Latency at peak load: 8.85ms
  • Capacity Utilization: 70.432 GB/disk
  • Cost per usable TB: $20,312 per TB
  • Cost per IOP: $5.89
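
Since a few of these are metrics I derive myself rather than ones SPC-1 publishes directly, here is roughly how they fall out of the published numbers. This is just a sketch; the total prices are back-computed from the rounded $/IOP figures, so they are ballpark rather than the exact tested prices from the disclosure reports.

# Derived metrics for the two SPC-1 results above. IOPS, usable capacity,
# disk counts and $/IOP are the published (rounded) figures; total price is
# back-computed from $/IOP, so treat the dollar outputs as approximate.

def derived_metrics(name, iops, usable_tb, disks, cost_per_iop):
    total_cost = iops * cost_per_iop              # approximate tested price
    print(f"{name}")
    print(f"  IOPS per disk:      {iops / disks:.0f}")
    print(f"  GB usable per disk: {usable_tb * 1000 / disks:.1f}")
    print(f"  Cost per usable TB: ${total_cost / usable_tb:,.0f}")

derived_metrics("Pillar Axiom 600", 64_992, 10.0, 288, 8.79)
derived_metrics("3PAR F400",        93_050, 27.0, 384, 5.89)

# Head-to-head ratios: ~2.7x the usable space, ~1.43x the IOPS.
print(f"Space ratio: {27.0 / 10.0:.2f}x   IOPS ratio: {93_050 / 64_992:.2f}x")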

Controller Capacity

Now, in my original post I indicated stark differences in some configurations that tested substantially fewer physical disks than the controllers supported. There are a couple of possibilities I can think of for this:

  • The people running the test didn’t have enough disks to test (less likely)
  • The controllers on the system couldn’t scale beyond the configuration tested, so to illustrate the best bang for your $ they tested with the optimal number of spindles to maximize performance (more likely)

So in Pillar’s case I think the latter is the case as they tested with a pretty small fraction of what their system is advertised as being capable of supporting.

Efficiency

So taking that into account, the 3PAR gives you 27TB of usable capacity. Note that we aren't even taking the thin technologies into account here; throw those out the window for a moment and let's simplify this.

The Pillar system gives you 10TB of usable capacity; the 3PAR system gives you roughly 2.7 times the space and over 40% more performance, for less money.

What would a Pillar system look like (or systems, I guess I should say, since we need more than one) that could give us 27TB of usable capacity and 93,000 SPC-1 IOPS using 146G 15k RPM disks (again, trying to keep a level playing field here)?

Well, I can only really guess. To reach the same level of performance Pillar would need an extra 124 disks, so 412 spindles. Maintaining the same level of short stroking that they are doing (34.7GB/disk), those 412 spindles only get you to roughly 14.3TB.

And I’m assuming here because my comments earlier about optimal number of disks to achieve performance, if you wanted to get those extra 124 spindles in you need a 2nd Axiom 600, and all the costs with the extra controllers and stuff. Controllers obviously carry a hefty premium over the disk drives. While the costs are published in Pillar’s results I don’t want to spend the time to try to extrapolate that angle.

And if you do in fact need more controllers (the Pillar system was tested with two, while the tested 3PAR F400 has four), 3PAR has another advantage completely unrelated to SPC-1: the ability to maintain performance levels under degraded conditions (controller failure, software upgrade, whatever) with Persistent Cache. Run the same SPC-1 test, yank a controller out of each system (3PAR and Pillar), and see what the results are. The numbers would be even more embarrassingly in 3PAR's favor thanks to their architecture and this key caching feature. Unlike most of 3PAR's feature add-ons, this one comes at no cost to the customer; the only requirement is that you have at least four controllers in the box.

So you still need to get to 27TB of usable capacity. From here it can get really fuzzy, because you need to add enough spindles to get that high, but then you need to adjust the level of short stroking you're doing so you use more of the space per drive. It wouldn't surprise me if this wasn't even possible on the Pillar system (I'm not sure if any system can do it, really, but I don't know).

If Pillar can’t adjust the size of the short stroking then the numbers are easy, at 34.7GB/disk they need 778 drives to get to 27TB of usable capacity, roughly double what 3PAR has.

Of course a two-system Axiom 600 setup with 778 drives would likely outperform a 384-disk F400 (I should hope so, at least), but you see where I'm going.

I’m sure Pillar engineers could come up with a way to configure the system more optimally my 778 drive solution is crazy but from a math perspective it’s the easiest and quickest thing I could come up with, with the data I have available to me.

This is also a good illustration of why, when I look at what Xiotech posts, I really can't compare them against 3PAR or anybody else, because they only submit results for ~16-drive systems. To me, it is not valid to compare a ~16-drive system against something that has hundreds of drives and try to extrapolate results. Xiotech really does post stellar results as far as capacity utilization, IOPS/disk and such, but they haven't yet demonstrated that those numbers scale beyond a single ISE enclosure, let alone to several hundred disks.

I also believe the 3PAR T800 results could be better; the person at 3PAR who was responsible for running the test was new to the company at the time, and the way he laid out the system was odd, to say the least. The commands he used were even deprecated. But 3PAR isn't all that interested in re-testing; they're still the record holder for spinning rust in a single system (more than two years running now, no doubt!).

Better response times to boot

You can see the 3PAR system performs with less than half the latency of the Pillar system, despite the Pillar system short stroking its disks. Distributed RAID with full mesh architecture at work, baby. I didn't even mention that the Pillar system has double the cache of the F400. I mean, the comparison really almost isn't fair.

I’m sure Pillar has bigger and better things out now since they released the SPC-1 numbers for the Axiom, so this post has the obvious caveat that I am reporting based on what is published. They’d need to pull more than a rabbit out of a hat to make up these massive gaps though I think.

Another Angle

We could look at this another way as well: assuming for simplicity's sake that both systems scale linearly up or down, we can configure a 3PAR F400 with the same performance specs as the Pillar that was tested.

You’d need 268 disks on the 3PAR F400 to match the performance of the Pillar system. With those 268 disks you’d get 18.4 TB of usable space, same performance, fewer disks, and 184% additional usable capacity. And scaling the cost down like we scaled the performance down, the cost would drop to roughly $374,000, a full $200,000 less than Pillar for the same performance and more space.

Conclusion

So hopefully this explains with more clarity why you can buy less storage in the form of a 3PAR F400 and still get the same or better performance and usable capacity than you would going with a Pillar Axiom 600. At the end of the day 3PAR drives higher capacity utilization and delivers superior results for significantly fewer greenbacks. And I didn't even take 3PAR's thin technologies into account; the math there can become even fuzzier depending on the customer's actual requirements and how well they can leverage the built-in thin features.

You may be able to understand why HP was willing to go to the ends of the earth to acquire 3PAR technology. And you may be able to understand why I am so drawn to that very same technology. And here I'm just talking about performance, something that, unlike other things (ease of use, etc.), is really easy to put hard numbers on.

The numbers are pretty simple to understand, and you can see why the big cheese at HP responsible for spear heading the 3PAR purchase said:

The thin provisioning, automatic storage tiering, multi-tenancy, shared-memory architecture, and built-in workload management and load balancing in the 3PAR arrays are years ahead of the competition, according to Donatelli, and therefore justify the $2.4bn the company paid to acquire 3PAR in a bidding war with rival Dell.

Maybe if I’m lucky I can trigger interest in The Register again by starting a blog war or something and make tech news! woohoo! that would be cool. Of course now that I said that it probably won’t happen.

I’m told by people who know the Pillar CEO he is “raw”, much like me. so it will be interesting to see the response. I think the best thing they can do is post new SPC-1 numbers with whatever the latest technology they have is, preferably on 146G 15k disks!

Side note

It was in fact my 3PAR rep who inspired me to write about this SPC-1 stuff. I was having a conversation with him earlier in the year where he didn't think the F400 was as competitive against the HDS AMS2500 as he felt it needed to be. I pointed out to him that despite the AMS2500 having similar SPC-1 IOPS and similar cost, the F400 offered almost twice the usable capacity, and the cost per usable TB was far higher on the 2500. He hadn't realized this. I did see that angle, so I felt the need to illustrate it; hence my Cost per SPC-1 Usable TB. It's not a performance metric, but in my opinion it is, from a cost perspective, a very essential one, at least for highly efficient systems.

(In case it wasn’t obvious, I am by no means compensated by 3PAR in any way for anything I write, I have a deep passion for technology and they have some really amazing technology, and they make it easy to use and cost effective to boot)

September 26, 2010

Still waiting for Xiotech..

Filed under: Random Thought,Storage — Tags: , , , — Nate @ 2:55 pm

So I was browsing the SPC-1 pages again to see if there was anything new and lo and behold, Xiotech posted some new numbers.

But once again, they appear too timid to release numbers for their 7000 series, or the 9000 series that came out somewhat recently. Instead they prefer to extrapolate performance from their individual boxes and aggregate the results. That doesn't count, of course; performance can be radically different at larger scale.

Why do I mention this? Well, nearly a year ago their CEO blogged in response to one of my posts, and that was one of the first times I made news in The Register (yay! I really was excited), and in part the CEO said:

Responding to the Techopsguy blog view that 3PAR’s T800 outperforms an Emprise 7000, the Xiotech writer claims that Xiotech has tested “a large Emprise 7000 configuration” on what seems to be the SPC-1 benchmark; “Those results are not published yet, but we can say with certainty that the results are superior to the array mentioned in the blog (3PAR T800) in several terms: $/IOP, IOPS/disk and IOPS/controller node, amongst others.”

So here we are, almost a year and more than one SPC-1 result later, and still no sign of Xiotech's SPC-1 numbers for their higher end units. I'm sorry, but I can't help but feel they are hiding something.

If I were them I would put my customers more at ease by publishing said numbers, and be prepared to justify the results if they don’t match up to Xiotech’s extrapolated numbers from the 5000 series.

Maybe they are worried they might end up like Pillar, whose CEO was pretty happy with their SPC-1 results. Shortly afterwards the 3PAR F400 launched and absolutely destroyed the Pillar numbers from every angle. You can see more info on those results here.

At the end of the day I don’t care of course, it just was a thought in my head and gave me something to write about 🙂

I just noticed that these past two posts put me over the top for the most posts I have done in a month since this TechOpsGuys thing started. I'm glad I have my friends Dave, Jake and Tycen generating tons of content too; after all, this site was their idea!

June 15, 2010

Storage Benchmarks

Filed under: Storage — Tags: , — Nate @ 11:27 pm

There are two main storage benchmarks I pay attention to:

Of course benchmarks are far from perfect, but they can provide a good starting point when determining what type of system to look at based on your performance needs. Bringing in everything under the sun to test in house is a lot of work; much of it can be avoided by setting some reasonable expectations up front. Both of these benchmarks do a pretty good job, and it's really nice to have a database of performance results for easy comparison. There are tons of other benchmarks out there, but very few have a good set of results you can check against.

SPC-1 is better as a process, primarily because it forces the vendor to disclose the cost of the configuration along with 3 years of support. They could improve on this further by forcing the vendor to provide updated pricing for the configuration over those 3 years; while the performance of the configuration should not change (given the same components), the price certainly should decline over time, so the cost aspects become harder to compare the further apart the tests are (duh).

SPC-1 also forces full disclosure of everything required to configure the system, down to the CLI commands to configure the storage. You can get a good idea on how simple or complex the system is by looking at this information.

SPC-1 doesn’t have a specific disclosure field for Cost per Usable TB. But it is easy enough to extrapolate from the other numbers in the reports, it would be nice if this was called out(well nice for some vendors, not so much for others). Cost per usable TB can really make systems that utilize short stroking to get performance stand out like a sore thumb. Another metric that would be interesting would be Watts per IOP and Watts per usable TB. The SPC-1E test reports Watts per IOP,  though I have hard time telling whether or not the power usage is taken at max load, seems to indicate power usage was calculated at 80% load.

Storage performance is by no means the only aspect you need to consider when getting a new array, but it usually is in at least the top 5.

I made a few graphs for some SPC-1 numbers. Note the cost numbers need to be taken with a few grains of salt, of course, depending on how far apart the systems were tested; the USP, for example, was tested in 2007. But you can see trends at least.

The majority of the systems tested used 146GB 15k RPM disks.

Somewhat odd configurations:

  • From IBM, the Easy Tier config is the only one that uses SSD+SATA, and the other IBM system is using their SAN Volume Controller in a clustered configuration with two storage arrays behind it (two thousand spindles).
  • Fujitsu is short stroking 300GB disks.
  • NetApp is using RAID 6 (aside from that and IBM's SSDs, everyone else is RAID 1)

I’ve never been good with spreadsheets or anything so I’m sure I could make these better, but they’ll do for now.

Links to results:

