TechOpsGuys.com Diggin' technology every day

23 Aug 2011

Mac Daddy P10000

It's finally here, the HP P10000 - aka 3PAR V Class. 3PAR first revealed this to their customers more than a year ago, but the eagle has landed now.

When it comes to the hardware - bigger is better (usually means faster too)

Comparisons of recent 3PAR arrays

| Array | Raw Capacity | Fibre Ports | Data Cache | Control Cache | Disks | Interconnect Bandwidth | I/O Bandwidth | SPC-1 IOPS |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 8-node P10000 (aka V800) | 1,600 TB | 288 (192 host) | 512 GB | 256 GB | 1,920 | 112 GB/sec | 96 GB/sec | 600,000 (guess) |
| 8-node T800 | 800 TB | 192 (128 host) | 96 GB | 32 GB | 1,280 | 45 GB/sec | 19.2 GB/sec | 225,000 |
| 4-node T800 (or 4-node T400) | 400 TB | 96 (64 host) | 48 GB | 16 GB | 640 | 9.6 GB/sec | ? | ~112,000 (estimate) |
| 4-node F400 | 384 TB | 32 (24 host) | 24 GB | 16 GB | 384 | 9.6 GB/sec | ? | 93,000 |
Comparison between the F400, T400, T800 and the new V800. In all cases the numbers reflected are in a maximum configuration.

3PAR V800 ready to fight

The new system is based on their latest Generation 4 ASIC, and for the first time they are putting two ASICs in each controller. This is also the first system that supports PCI Express, with (if my memory serves) nine PCI Express buses per controller. Front end throughput is expected to be up in the 15 Gigabytes/second range (up from ~6 GB/sec on the T800). Just think: they have nearly eight times more interconnect bandwidth than the controllers have capacity to push data to hosts, that's just insane.
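As a quick sanity check on that claim (using the 112 GB/sec interconnect figure from the table and the ~15 GB/sec front-end estimate above - my arithmetic, not HP's), the ratio does land right around "nearly eight times":

```python
# Back-of-the-envelope check of the bandwidth claim above.
# Figures come from the comparison table and this post, not official HP specs.
interconnect_gb_s = 112   # V800 backplane interconnect bandwidth
front_end_gb_s = 15       # expected front-end (host-facing) throughput

ratio = interconnect_gb_s / front_end_gb_s
print(f"interconnect is ~{ratio:.1f}x front-end throughput")  # ~7.5x
```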

IOPS - HP apparently is not in a big rush to post SPC-1 numbers, but given the increased spindle count, the extra cache, the doubling up on ASICs, and the new ASIC design itself, I would be surprised if the system got less than, say, half a million IOPS on SPC-1 (by no means a perfect benchmark, but at least it's a level playing field).

It's nice to see 3PAR finally bulk up on data cache (beefcake!!). Traditionally they haven't needed all that much of it, because their architecture blows the competition out of the water without breaking a sweat - but still, RAM is cheap. It's not as if they're using the same type of memory you find in CPU cache; it's industry standard ECC DIMMs. RAM may be cheap, but I'm sure HP won't charge you industry standard DIMM pricing when you go to put 512GB in your system!

Now that they have PCI Express, 3PAR can natively support 8Gbps Fibre Channel, with 10Gbit iSCSI and FCoE coming soon.

The drive cages and magazines are more or less unchanged (physically) from the previous generation but apparently new stuff is still coming down the pike there.  The controller's physical design (how it fits in the cabinet) seems radically different than their previous S or T series.

Another enhancement for this system is they expanded the number of drive chassis to 48, or 12 per node pair (up from 8 per node pair). Though if you go back in time you'll find their earliest S800 actually supported 64 drive chassis for a time; that was achieved by daisy chaining drive chassis, which they have since refrained from doing on the S/T/V class (the original 64-chassis configuration held 2,560 disks, back when disks were 9GB in size). The V class obviously has more ports, so it can support more cages. I have no doubt they could go to even more cages by taking ports assigned to hosts and assigning them to disks; it's just a matter of testing. Flipping a fibre port from host to disk is pretty trivial on the system.
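The cage counts cross-check neatly against the disk counts in the comparison table, assuming the usual 40 disks per cage (ten four-disk magazines) - that per-cage figure is my assumption, not something HP states here:

```python
# Cross-check: cages x disks-per-cage should match the disk counts in the
# comparison table. 40 disks per cage (ten 4-disk magazines) is my
# assumption, not a number taken from HP's spec sheet.
disks_per_cage = 40

v800_cages = 48   # new V-class maximum
t800_cages = 32   # 8-node T800 maximum

print(v800_cages * disks_per_cage)  # 1920, matching the V800 row
print(t800_cages * disks_per_cage)  # 1280, matching the 8-node T800 row
```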

The raw capacity doesn't quite line up with the massive amount of control cache the system has. In theory at least, if 4GB of control cache per controller is good enough for 200TB raw (per controller pair), then 32GB per controller should be able to net you 1,600 TB raw per controller pair (or 6,400 TB for the whole system). But with a limit of 1,600 TB for the entire system, they are obviously using a lot of that control cache for something else.
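Sketching out that arithmetic (the 4 GB-per-200 TB starting ratio is the figure from the paragraph above; everything else follows from it):

```python
# Back-of-the-envelope version of the control-cache argument above.
# Starting assumption (from the post): 4 GB of control cache per
# controller covers 200 TB raw per controller pair.
tb_per_gb_cache = 200 / 4      # TB raw per GB of control cache, per pair

v800_cache_per_ctrl = 32       # GB control cache per V800 controller
controller_pairs = 4           # 8-node system = 4 controller pairs

per_pair_tb = v800_cache_per_ctrl * tb_per_gb_cache
print(per_pair_tb)                       # 1600.0 TB per controller pair
print(per_pair_tb * controller_pairs)    # 6400.0 TB if the old ratio held
# The actual system limit is 1,600 TB total, so the extra control cache
# is evidently being used for something beyond raw-capacity metadata.
```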

As far as I know the T-class isn't going anywhere anytime soon. This V class is all about even more massive scale, at a significantly higher entry-level price point than the T-class (at least $100,000 more at the baseline from what I can tell), with the beauty of running the same operating system, the same user interfaces, and the same software features across the entire product line. The T-class as-is is still mind-numbingly fast and efficient, even three years after it was released.

No mainframe connectivity on this baby.

Storage Federation

The storage federation stuff is pretty cool in that it is peer based: you don't need any external appliances to move the data around; the arrays talk to each other directly to manage all of that. This is where we get the first real integration between 3PAR and HP, in that the entire line of 3PAR arrays as well as the LeftHand-based P4000 iSCSI systems (even including the virtual storage appliance!) support this new peer federation. It sort of makes me wonder where EVA support is - perhaps it's coming later, or maybe it's a sign HP is quietly deprecating EVA when it comes to this sort of thing - I'm sure the official party line will be that EVA is still a shining star.

The main advantage I see in storage federation technology over something like Storage vMotion is that the array has a more holistic view of what's going on in the storage system, rather than just what a particular host sees or what a particular LUN is doing. The federation should also have more information about the location of the various arrays - whether they are in another data center, say - and make more intelligent choices about moving stuff around. Certainly would like to see it in action myself. And even though hypervisors have had thin provisioning for a while, that by no means reduces the need for thin provisioning at the storage level (at least for larger deployments).

I'd imagine like most things on the platform the storage federation is licensed based on the capacity of the array.

If this sort of thing interests you anywhere near as much as it interests me, you should check out the architecture white paper from HP, which has some new stuff on the V class, here. You don't have to register to download it like you did back in the good 'ol days.

I'd be surprised if I ever decided to work for a company large enough to be able to leverage a V-class, but if anyone from 3PAR is out there reading this (I'm sure there's more than one): since I am in the Bay Area - not far from your HQ - I wouldn't turn down an invitation to see one of these in person :)

Oh HP.. first you kick me in the teeth by killing WebOS devices then before I know what happened you come out with a V-class and want to make things all better, I just don't know what to feel.

Ah, the joys of working with a 3PAR array. It's been about a year since I last laid my hands on one (I'm working at a different company now); I do miss it.

TechOps Guy: Nate

Comments (4)
  1. Seems rushed to market.

    Still no SAS, no 10GB iSCSI, no FCOE (yet). Also, who wants to pay for FC disk anymore?

  2. I think it’s fairly safe to say that regardless of SAS or FC you’re going to be paying a LOT for disks on this platform; if you want something cheap then you’ll want another array :)

    As for the 10Gig they probably did rush it to market to some degree, in order to give them something big to talk about at VMworld.

    thanks for the comment!

  3. Re: Rushed to market
    - not so, these have been in test/dev for the last 12 months (that I know of, could be longer) to ensure compatibility etc. They’re targeting high-end tier-1 enterprise and cloud providers with this, so it has to *just work*.

    Re sas/10gb/fcoe – since the platform is now PCIe based the line cards should be easy to qualify, but they weren’t in the spec due to limited deployment in the market (little demand). SAS loop length is the reason for staying with FC, as the SAS loops would severely limit the number of disks in a system.

  4. Interesting point on the SAS loop length; I thought SAS might limit length but wasn’t sure by how much (I can’t imagine many customers deploying disks 100 meters away from their controllers, even though technically it’s possible). Also suspected the market share issue on the iSCSI/FCOE – though it still feels like HP wanted to get in for VMworld, instead of holding off 3-4 weeks or however long it takes to qualify the HBAs before announcing.

    thanks for the more official info Andre!

