IBM recently announced that they are adding an “Easy Tier” of storage to some of their storage systems. This seems to be their take on what I have been calling automagic storage tiering. They are doing it at the sub-LUN level in 1GB increments, and they recently posted SPC-1 numbers for this new system. Finally, someone posted numbers.
Configuration of the system included:
- 1 IBM DS8700
- 96 1TB SATA drives
- 16 146GB SSDs
- Total ~100TB raw space
- 256GB Cache
Performance of the system:
- 32,998 IOPS
- 34.1 TB Usable space
Cost of the system:
- $1.58 Million for the system
- $47.92 per SPC-1 IOP
- $46,545 per usable TB
Now I’m sure the system is fairly power efficient given that it only has 96 spindles in it, but I don’t think that justifies the price tag. Just take a look at this 3PAR F400, which posted results almost a year ago:
- 384 disks, 4 controllers, 24GB data cache
- 93,050 SPC-1 IOPS
- 26.4 TB Usable space (~56TB raw)
- $548k for the system (I’m sure prices have come down since)
- $5.89 per SPC-1 IOP
- $20,757 per usable TB
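Just to sanity-check those ratios, here is a quick back-of-the-envelope pass (a minimal sketch in Python; the dollar figures are the rounded prices quoted above, so the output lands within rounding distance of the posted numbers rather than matching them to the penny):

```python
# Back-of-the-envelope $/IOPS and $/TB for the two tested configs.
# Prices are the rounded figures quoted above, so results land within
# rounding distance of the official SPC-1 submissions.

systems = {
    "IBM DS8700 (Easy Tier)": {"price": 1_580_000, "iops": 32_998, "usable_tb": 34.1},
    "3PAR F400":              {"price": 548_000,   "iops": 93_050, "usable_tb": 26.4},
}

for name, s in systems.items():
    print(f"{name}: ${s['price'] / s['iops']:.2f} per SPC-1 IOP, "
          f"${s['price'] / s['usable_tb']:,.0f} per usable TB")

# IBM DS8700 (Easy Tier): $47.88 per SPC-1 IOP, $46,334 per usable TB
# 3PAR F400: $5.89 per SPC-1 IOP, $20,758 per usable TB
```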
The F400 test configuration used 146GB disks; today the 450GB disks seem very reasonably priced, so I would opt for those instead and get the extra space for not much of a premium.
Take a 3PAR F400 with 130 450GB 15k RPM disks: that would be about 26TB of usable space with RAID 1+0 (the tested configuration above is 1+0). With 130 disks instead of 384 you would expect roughly 33.8% of the performance of the 384-disk system, so say 31,487 SPC-1 IOPS, very close to the IBM system, and I bet the price of that 3PAR would be close to half of the $548k above (taking into account that the controllers in any system are a good chunk of the cost). 3PAR's near-linear scalability is what makes extrapolations like this possible and reasonably accurate. And you can sleep well at night knowing you can triple your space and performance online without a service disruption.
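The math behind that estimate is nothing fancier than a ratio of spindle counts, assuming near-linear scaling; a minimal sketch, which reproduces the figure above to within rounding:

```python
# Linear extrapolation from the tested 384-disk F400 result,
# assuming performance scales ~linearly with spindle count.
tested_disks, tested_iops = 384, 93_050
proposed_disks = 130

fraction = proposed_disks / tested_disks   # ~0.3385
estimated_iops = tested_iops * fraction    # ~31,500 SPC-1 IOPS

print(f"{fraction:.1%} of the tested config -> ~{estimated_iops:,.0f} SPC-1 IOPS")
# 33.9% of the tested config -> ~31,501 SPC-1 IOPS
```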
Note of course that you can equip a 3PAR system with SSDs and use automagic storage tiering as well (they call it Adaptive Optimization), if you really wanted to. By contrast, the 3PAR system moves data around in much finer 128MB increments.
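To put that granularity difference in perspective, here is a trivial illustration of how many movable extents each scheme carves a region into (the extent sizes are from the post; the 10TB region is just my own example):

```python
# How many movable extents a 10TB region breaks into at each
# tiering granularity (extent sizes from the post; 10TB is an example).
GB = 1024  # MB per GB, binary units

region_mb = 10 * 1024 * GB  # a 10TB region, expressed in MB
for name, extent_mb in [("IBM Easy Tier (1GB)", 1 * GB),
                        ("3PAR Adaptive Optimization (128MB)", 128)]:
    print(f"{name}: {region_mb // extent_mb:,} movable extents")

# IBM Easy Tier (1GB): 10,240 movable extents
# 3PAR Adaptive Optimization (128MB): 81,920 movable extents
```

Finer extents let the array promote just the hot blocks rather than dragging a whole gigabyte of cold neighbors up to SSD along with them.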
It seems the cost of the SSDs and the massive amount of cache IBM dedicated to the system more than offsets the savings from using lower-cost nearline SATA disks. If the savings get eaten up like that, what’s the point?
So consider me not impressed with these first results for automagic storage tiering; I expected significantly more out of it. Maybe it’s IBM-specific, maybe not. Time will tell.
[…] you recall not long ago IBM released some SPC-1 numbers with their automagic storage tiering technology Easy Tier. It was noted that […]
Pingback by EMC and IBM’s Thick chunks for automagic storage tiering « TechOpsGuys.com — August 24, 2010 @ 12:59 pm
So I am very confused about the blog. I don’t work for Pillar, 3Par or any other vendor, but I have a wealth of experience with storage and have run both 3Par and Pillar. Surprisingly, both perform well under real-world workloads. 3Par has some unique advantages like density, but the fundamental concepts, such as the “chunklet” in 3Par speak or the “MAU” in Pillar speak, are the same.
An awfully big rant considering you were not intelligent enough to look at more than one dimension. Next time try to look at things that matter in the real world, like disparate workloads, where Pillar will do a better job due to its Quality of Service. A test of a single volume spread across the entire array (who does that in real life?) is hardly an accurate test.
I would write more, but the apparent lack of understanding of IO, and of what is meaningful in a storage platform, bores me.
Comment by Confused — October 12, 2010 @ 9:13 pm
Just having fun here 🙂
I think you commented on the wrong post here, since this one was about IBM and has nothing to do with Pillar.
Comment by Nate — October 12, 2010 @ 9:26 pm
[…] Typical stuff. Usually means they would score poorly – especially those that leverage SSD as a cache tier, with high utilization rates of SPC-1 you are quite likely to blow out that tier, once that happens […]
Pingback by 3PAR 7400 SSD SPC-1 « TechOpsGuys.com — May 23, 2013 @ 11:09 am