TechOpsGuys.com Diggin' technology every day

October 19, 2011

Linear scalability

Filed under: Storage — Nate @ 9:56 am

So 3PAR released their SPC-1 results for their Mac daddy P10000, and the results aren’t as high as I originally guessed they might be.

HP claims it is a world record result for a single system. I haven’t had time yet to verify that, but they are probably right.

I’m going to a big HP/3PAR event later today and will ask my main question – was the performance constrained by the controllers or by the disks? I’m thinking disks, given the IOPS/disk numbers below.

Here are some of the results:

System      Date Tested   SPC-1 IOPS   IOPS per Disk   SPC-1 Cost per IOP   SPC-1 Cost per usable TB
3PAR V800   10/17/2011    450,212      234             $6.59                $12,900
3PAR F400   4/27/2009     93,050       242             $5.89                $20,308
3PAR T800   9/2/2008      224,989      175             $9.30                $26,885
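
The disk counts aren’t in the table, but you can back them out by dividing the IOPS by the IOPS-per-disk figure (approximate, since the per-disk column is rounded). If I remember the disclosures right that works out to roughly 1,920 drives for the V800 and 1,280 for the T800, which also lines up with the “50% more disks” comparison further down. Quick sketch of the math:

awk 'BEGIN {
	# back-of-the-envelope: disks = SPC-1 IOPS / (IOPS per disk), both from the table above
	printf "V800: ~%.0f disks\n", 450212 / 234
	printf "F400: ~%.0f disks\n",  93050 / 242
	printf "T800: ~%.0f disks\n", 224989 / 175
}'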

The cost per usable TB was slashed because they are using much larger disks (300GB vs 147GB in the earlier tests).

The cost was pretty reasonable as well, coming in at under $7 per IOP, which is actually less than their previous result on the T800 from 2008, and that was already cheap at $9.30 per IOP.
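
As a quick sanity check (my math, not a number from the disclosure), multiplying the cost per IOP by the IOPS gets you back to roughly the total tested price:

awk 'BEGIN {
	# total price ~= (SPC-1 cost per IOP) x (SPC-1 IOPS)
	printf "V800: ~$%.2f million\n", 450212 * 6.59 / 1e6   # ~2.97, in line with the ~$3 million (after discount) figure below
}'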

It is interesting that they used Windows to run the test, which I believe is a first for them, having used real Unix in the past (AIX and Solaris for the T800 and F400 respectively).

The one kind of strange thing, which is typical of 3PAR SPC-1 submissions, is the sheer number of volumes they used (almost 700). I’m not sure what the advantage of doing that would be – another question I will try to get answered.

The system was, as expected, remarkably easy to configure; the entire storage configuration process consisted of this:

# Reconfigure the fibre channel host ports on every controller node:
# take each port offline, set it to point-to-point host mode, then reset it.
for n in {0..7}        # controller nodes 0-7
do
	for s in 1 4 7     # HBA slots used on each node
	do
		if (( s == 1 ))
		then
			for p in 4     # slot 1 also gets port 4 configured
			do
				controlport offline -f $n:$s:$p
				controlport config host -ct point -f $n:$s:$p
				controlport rst -f $n:$s:$p
			done
		fi

		for p in 2         # port 2 is configured on every slot
		do
			controlport offline -f $n:$s:$p
			controlport config host -ct point -f $n:$s:$p
			controlport rst -f $n:$s:$p
		done
	done
done

# Slot:port suffixes for the four host-facing ports per node
# (the node number gets prefixed when the VLUN is created below)
PORTS[0]=":7:2"
PORTS[1]=":1:2"
PORTS[2]=":1:4"
PORTS[3]=":4:2"

for nd in {0..7}
do
	# One RAID 1 (-t r1) CPG per node, restricted (-p -nd) to the disks behind that node
	createcpg -t r1 -rs 120 -sdgs 120g -p -nd $nd cpgfc$nd

	for hba in {0..3}
	do
		# ASU-1: 15 x 240GB volumes per node/HBA pair (480 total), each exported on one pinned port
		for i in {0..14} ; do
			id=$((1+60*nd+15*hba+i))
			createvv -i $id cpgfc${nd} asu1.${id} 240g
			createvlun -f asu1.${id} $((15*nd+i+1)) ${nd}${PORTS[$hba]}
		done
		# ASU-3: 2 x 360GB volumes per node/HBA pair (64 total)
		for i in {0..1} ; do
			id=$((681+8*nd+2*hba+i))
			j=$((id-680))
			createvv -i $id cpgfc${nd} asu3.${j} 360g
			createvlun -f asu3.${j} $((2*nd+i+181)) ${nd}${PORTS[$hba]}
		done
		# ASU-2: 4 x 840GB volumes per node/HBA pair (128 total)
		for i in {0..3} ; do
			id=$((481+16*nd+4*hba+i))
			j=$((id-480))
			createvv -i $id cpgfc${nd} asu2.${j} 840g
			createvlun -f asu2.${j} $((4*nd+i+121)) ${nd}${PORTS[$hba]}
		done
	done
done

Think about that: a $3 million storage system (after discount) configured in less than 50 lines of script.

It’s not a typical way to configure a system (I had to look at it a couple of times), but it seems they are still pinning volumes to particular controller pairs and LUNs to particular FC ports. This is what they have done in the past, so it’s nothing new, but I would like to see how the system runs without such pinning of resources, letting the inter-node routing do its magic, since that is how customers would actually run the system.
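
For contrast, a non-pinned setup would look more like everyday usage: define the host, export the volumes to it by name, and let the array present them on whatever ports see that host. A purely hypothetical sketch (the host name testhost and the LUN numbers are mine, and it assumes the host was already defined with createhost), not something from the disclosure:

# Hypothetical alternative: export the 480 ASU-1 volumes to a host by name instead of
# pinning each one to a single node:slot:port, leaving path selection to the array and multipathing
for i in {1..480}
do
	createvlun -f asu1.${i} ${i} testhost
done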

But that’s what full disclosure is all about, right? Another reason I like the SPC-1 is the in-depth configuration information that you don’t need an NDA to see (and in 3PAR’s case you probably don’t need to attend a 3-week training course to understand!).

I’m trying, but I can’t think of another storage architecture out there that scales as well as the 3PAR Inspire architecture does from the low end (F200) to the high end (V800).

The cost of the V800 was a lot more reasonable than I was fearing it might be: it’s only roughly 45% more expensive than the T800 tested in September 2008, and for that extra 45% you get 50% more disks, double the I/O capacity, and almost three times the usable capacity. Oh, and five times more data cache and eight times more control cache to boot!
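
That 45% is my own rough math from the table above, treating the total price as cost per IOP times IOPS, so take it as back-of-the-envelope:

awk 'BEGIN {
	v800 = 450212 * 6.59   # ~$2.97 million
	t800 = 224989 * 9.30   # ~$2.09 million
	printf "V800 vs T800 price ratio: %.2f\n", v800 / t800   # ~1.42, in the ballpark of the roughly 45% above
}'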

I suspect the ASICs are not being pushed to their limits here in the V800, and that the system can go quite a bit faster provided there is not an I/O bottleneck on the disks behind the controllers.

On the back of these numbers, The Register is reporting that HP is beefing up the 3PAR sales team after massive growth over the past year (something like a 300% increase in sales), so much so that they are having a hard time keeping up with demand.

I haven’t been happy with the hefty price increases HP has put into the 3PAR product line, though in a lot of cases those come back out in the form of discounts. I guess it’s what the market will bear, right – as long as things are selling as fast as they can make them, HP doesn’t have any need to reduce the price.

I saw an interview with the chairman of HP a few weeks ago, when they announced their new CEO. He cited how significantly 3PAR had exceeded their sales expectations as justification for the lofty price HP paid to acquire them about a year ago.

So congrats to 3PAR, I knew you could do it!

8 Comments

  1. Impressive results for 3PAR, sad to see HP is bumping up the pricing though.

    I’d like to see SPC-1 results for the new Gen3 XIV system. The addition of an InfiniBand backend, SAS disks, Nehalem procs and the 7.5TB of read cache should make for a pretty smoking rig. I’ve seen some published test results in the 455k IOPS range.

    http://storagebuddhist.wordpress.com/2011/07/15/xiv-gen3-at-full-speed/

    Comment by tbik — October 19, 2011 @ 11:06 am

  2. I did not know there was a new XIV – did they improve the back end? Last I recall it was 1GbE and they were using regular Ethernet switches, which seemed to contribute to a less scalable design. I will try to poke around! But yeah, it would be nice to see something from XIV; IBM has posted results for some of their other systems, so I’m not sure what is holding them back from XIV results to date.

    And, 7.5TB of read cache? really? holy crap.. that is quite a bit!

    Comment by Nate — October 19, 2011 @ 11:10 am

  3. To answer my own Q – looks like the new XIV uses InfiniBand, which is a nice boost compared to 1GbE!

    Communication between modules takes place over an internal, redundant network equipped with massive bandwidth (InfiniBand for XIV Gen3 models; 1 Gbps Ethernet for second-generation XIV models) which supports rapid rebuilding when necessary.

    Do you, or anyone else, happen to know if the XIV’s UPS modules are required for battery backup, or does it have built-in batteries? I had some minor complications getting UPSes installed in a colo to support an Exanet cluster; it was for sure a drawback of that system not having internal batteries.

    Looks like the cache tops out at 360GB, not 7TB. There does appear to be an SSD cache option, but not much info on it other than mentioning that it exists (on second look it seems to be a future upgrade, so it doesn’t exist right now).

    The thing that puzzles me the most though is why IBM is limiting the system to only 180 disks, and it looks like it still only supports RAID 10. Certainly seems like a nice boost from Gen 2 to Gen 3 though! Very nice software feature set on the surface.

    Comment by Nate — October 19, 2011 @ 1:09 pm

  4. The UPS is integrated into the frame; the bottom of the array is the 3 UPS units that provide power to the array during a power outage. There is no battery backup as you would have in a standard array with dual controllers. Once a power outage occurs, the system initiates a graceful shutdown.

    As I understand it, with the new Gen3 model the 7.5TB of read cache comes in the form of flash modules that slide into slots on the back of each of the 15 nodes. The standard cache for the system will still be the 360GB. This option is supposed to be available in Q2 of 2012, I believe.

    Keep in mind, Gen3 moved to nearline SAS disks from the SATA that was in the Gen1/Gen2 units, and the move to InfiniBand for the interconnect was a much needed improvement. So right now the limit on a new Gen3 array will be around 160TB usable. But no, the system as a whole does not scale (yet).
    XIV doesn’t really use RAID 10; it falls more in line with a RAID-X type configuration. Data is spread out across all disks in the array in a pseudo-random fashion in 1MB chunks. When LUNs are created on the system, they come in 17GB increments, so 17, 34, 51, etc. (I know it’s kind of odd, but that’s how the system works).

    Yes, the array has its limitations, and it’s not a one-size-fits-all, more of a niche-function array, though for me it gobbles up SQL, Domino, Oracle, Exchange, and VMware without any issues.

    Comment by tbik — October 19, 2011 @ 7:29 pm

  5. Oh yeah, I was aware that XIV distributes the data over all the disks; the RAID 10 thing was more of a question about the overhead associated with mirroring (the space overhead at least). It’s certainly the fastest way to go, though being locked into a single RAID level (same as NetApp with RAID-DP) is suboptimal.

    Do you get any indication from IBM as to if/when the system will scale (and what, if anything, is holding them back)? I recall reading about some tense politics internally at IBM between the XIV folks and the rest of the storage group at one point, though I seem to remember that the founder/head of XIV left IBM a while back, so maybe things have healed somewhat. Conceptually it sounds like a much better system than, say, their current high-end boxes from a software functionality and overall architecture standpoint. But the decisions to have nearline drives only, RAID 10 only, and no scaling beyond 180 spindles are all very strange to me. I hope IBM takes the leap and gives XIV more love and makes it more flexible/scalable.

    Comment by Nate — October 19, 2011 @ 7:46 pm

  6. XIV was a Moshe Yanai company (he more or less designed the Symmetrix for EMC) that he essentially sold to IBM. When we were evaluating our decision to move to XIV for several of our workloads, he came out to pitch the design and go over it with us. Apparently he left IBM under somewhat of a cloud with regard to how the array should be improved going forward. I know several XIV members who stayed on with IBM, but also several who left; I’m guessing this is par for the course when one company buys another. The IBM people I deal with on the tech side are very excited about the system; many of them have been with IBM for 20+ years and see the XIV as a play they have been needing for some time.

    I have not really heard anything solid yet about scale-out of the array. My best guess would be some SVC-style secret sauce; XIV integrates with SVC currently, so it wouldn’t be that much of a stretch. I think a lot of it has to do with the software and how data is distributed on the array, more than the functional ability of connecting two or more systems together.

    From an ease-of-use standpoint, the system is probably the simplest array you could ever use. Far simpler than Unisphere. The only other system that is even close to being as easy to drive is an EqualLogic.

    Sorry to hijack your 3PAR thread. I’d love to get my hands on one of them myself. Though I’m looking at the new Storwize V7000U since it has file/block in the same box along with SVC, and it uses the same XIV GUI for administration.

    Comment by tbik — October 19, 2011 @ 8:02 pm

  7. Hey, no worries – it is nice to hear some stuff about XIV; it has always sounded like an interesting product, but I did not know anyone who has used it. I dig technology! 🙂

    Comment by Nate — October 19, 2011 @ 9:49 pm

  8. […] http://www.techopsguys.com/2011/10/19/linear-scalability/#comments […]

    Pingback by Why I belive you should stop worrying about SPC-1 benchmarks. « ausstorageguy — November 14, 2012 @ 6:12 am
