
June 11, 2012

3PAR and NPIV

Filed under: Storage — Nate @ 7:20 am

I was invited to a little preview of some of the storage things being announced at HP Discover last week; I just couldn’t talk about it until the announcement was made. Since I was busy in Amsterdam all last week I really didn’t have a lot of time to think about blogging here.

But I’m back and am mostly adjusted to the time zone difference, I hope. HP made at least two storage-related announcements last Monday: one related to scaling of their StoreOnce dedupe setup, and another related to 3PAR. The StoreOnce announcement seemed to be controversial; since I have only minimal exposure to that sort of product I won’t talk about it much. On the surface it sounded pretty impressive, but if the EMC claims are true then it’s unfortunate.

Anyways, onto the 3PAR announcement, which, while it had a ton of marketing around it, basically comes down to three words:

3PAR Supports NPIV (finally)

NPIV (N_Port ID Virtualization), in a nutshell and the way I understand it, is a way of virtualizing connections between points in a fibre channel network: it lets a single physical fibre channel port register multiple virtual port identities (WWPNs) with the fabric. Most often in the past it seems to have been used to present storage directly to VM hosts, via FC switches. NPIV is also used by HP’s Virtual Connect technology on the FC side to connect the VC modules to an NPIV-aware FC switch (which is pretty much all of them these days?), and then the switch is connected to the storage (duh). I assume that NPIV is required by Virtual Connect because the VC module isn’t really a switch, it’s more of a funky bridge.
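To make that concrete, here’s a toy sketch of my own (nothing HP or 3PAR ships, and the names and address values are made up) of what NPIV does conceptually: the physical port logs into the fabric once with FLOGI, and then each additional virtual port behind it gets its own WWPN and fabric address via FDISC.

```python
# Toy model of NPIV: one physical FC port carrying many virtual port logins.
# Purely illustrative; the classes, WWPNs and address scheme are invented.

from dataclasses import dataclass, field
from itertools import count


@dataclass
class Fabric:
    """A pretend FC fabric that hands out 24-bit port addresses."""
    _next_id: count = field(default_factory=lambda: count(0x010100))
    logins: dict = field(default_factory=dict)  # fc_id -> wwpn

    def login(self, wwpn: str) -> int:
        fc_id = next(self._next_id)
        self.logins[fc_id] = wwpn
        return fc_id


@dataclass
class PhysicalPort:
    """One physical N_Port (think of a Virtual Connect FC uplink)."""
    wwpn: str
    fabric: Fabric
    virtual_ports: dict = field(default_factory=dict)

    def flogi(self) -> int:
        # The physical port logs into the fabric once (FLOGI).
        return self.fabric.login(self.wwpn)

    def fdisc(self, virtual_wwpn: str) -> int:
        # With NPIV, each extra WWPN behind the same physical port
        # gets its own fabric address via FDISC.
        fc_id = self.fabric.login(virtual_wwpn)
        self.virtual_ports[virtual_wwpn] = fc_id
        return fc_id


fabric = Fabric()
uplink = PhysicalPort(wwpn="50:01:43:80:aa:bb:cc:01", fabric=fabric)
uplink.flogi()
for i in range(4):  # four server HBAs multiplexed over one uplink
    uplink.fdisc(f"50:01:43:80:aa:bb:dd:{i:02x}")
print(len(fabric.logins), "logins over one physical port")  # -> 5
```

Without NPIV, the target port only expects a single login per physical connection, which is exactly why a port-aggregating bridge like Virtual Connect couldn’t plug straight into the array before.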

Because 3PAR did not support NPIV (for what reason I don’t know; I kept asking them about it for years but never got a solid response as to why not or when they might support it), there was no way to directly connect a Virtual Connect module (either the new FlexFabric or the older dedicated FC VC modules) to a 3PAR array; you had to have a switch as a middleman. Which just seemed like a waste. I mean, here you have a T-class or now a V-class system with tons of ports, you have these big blade chassis with a bunch of servers in them, with the VC modules acting like a switch (acting as in aggregating points), and you can’t directly connect it to the 3PAR storage! It was an unfortunate situation. Even going back to the 3cV, which was a bundle of sorts of 3PAR, HP c-Class blades and VMware (long before HP bought 3PAR of course), I would have thought getting NPIV support would have been a priority, but it didn’t happen until now (well, last Monday I suppose).

So at scale you have up to 96 host fibre channel ports on a V400, or 192 FC ports on a V800, operating at 8Gbps. At a maximum you could get by with 48 blade enclosures (2 FC/VC modules each with a single connection) on a V400, or of course double that to 96 on a V800. Cut it in half if you want higher redundancy with dual paths on each FC/VC module. That’s one hell of a lot of systems directly connected to the array. Users may wish to stick to a single connection per VC module, allowing the 2nd connection to be connected to something else, maybe another 3PAR array; you still have full redundancy with two modules and one path per module. 3PAR 4Gbps HBAs (note the V-class has 8Gbps) have queue depths of something like 1,536 (not sure what the 8Gbps HBAs have). If you’re leveraging full-height blades you get 8 per chassis, so in the absolute worst case you could set a queue depth of 192/server (I use 128/server on my gear). You could probably pretty safely go quite a bit higher, though more thought may be needed in certain circumstances. I’ve found 128 has been more than enough for my own needs.
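As a back-of-the-envelope check, here’s that queue depth math written out as a little script. The 1,536 figure and the blade counts are just the assumptions from the paragraph above, not something pulled from an HP spec sheet:

```python
# Rough queue-depth budgeting for servers sharing one 3PAR HBA port.
# Assumes the ~1,536 outstanding-command figure quoted above for a
# 4Gbps HBA port and full-height blades (8 per c-Class chassis).

PORT_QUEUE_DEPTH = 1536      # commands a single array HBA port can queue
BLADES_PER_CHASSIS = 8       # full-height blades in a c-Class enclosure

def per_server_queue_depth(port_queue_depth: int, servers: int) -> int:
    """Worst-case even split of one array port's queue across servers."""
    return port_queue_depth // servers

print(per_server_queue_depth(PORT_QUEUE_DEPTH, BLADES_PER_CHASSIS))  # 192

# A more conservative setting, like the 128/server mentioned above,
# leaves headroom: 8 * 128 = 1024 of the 1536 slots in the worst case.
print(BLADES_PER_CHASSIS * 128)  # 1024
```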

It’s cost effective today to easily get 4TB worth of memory per blade chassis, and memory is the primary driver of VM density, so you’re talking anywhere from 96 to 384 TB of memory hooked up to a single 3PAR array. From a CPU perspective that’s anywhere from 7,680 CPU cores all the way up to 36,684 CPU cores in front of a single storage system, a system that has been tested to run at over 450,000 SPC-1 IOPS. The numbers are just insane.
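The memory range works out from the chassis counts above (24 chassis for a dual-path V400, 96 for a single-path V800); the per-chassis core count behind the CPU figures isn’t spelled out, so I’ll leave that part alone. A quick sketch:

```python
# Memory scaling behind the 96 - 384 TB range quoted above.
# Chassis counts follow from the V400/V800 port math earlier in the post.

MEMORY_PER_CHASSIS_TB = 4     # easily reachable per c-Class chassis today

scenarios = {
    "V400, dual FC paths per VC module": 24,   # 96 ports / 4 per chassis
    "V800, single FC path per VC module": 96,  # 192 ports / 2 per chassis
}

for name, chassis in scenarios.items():
    print(f"{name}: {chassis * MEMORY_PER_CHASSIS_TB} TB of memory")
# -> 96 TB and 384 TB respectively
```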

All we need now is a flat ethernet fabric to connect the Virtual Connect switches to; oh wait, we have that too, though it’s not from HP. A single pair of Black Diamond X-Series switches could scale to the max here as well, supporting a full eight 10Gbit/second connections per blade chassis with 96 blade chassis directly connected, which, guess what, is the maximum number of 10GbE ports on a pair of FlexFabric Virtual Connect modules (assuming you’re using two ports for FC). Of course all of the bandwidth is non-blocking. I don’t know what the state of interoperability is, but Extreme touts their VEPA support in scaling up to 128,000 VMs on an X-Series, and Virtual Connect appears to tout its own VEPA support as well. Given the lack of more traditional switching functionality in the VC modules, it would probably be advantageous to leverage VEPA (whether or not this extends to the hypervisor I don’t know; I suspect not based on what I last heard from VMware, though I believe it is doable in KVM) to route inter-server traffic through the upstream switches in order to gain more insight into it and even control it. If you have upwards of 80Gbps of connectivity per chassis anyway, there seems to be abundant bandwidth to do it. All HP needs to do now is follow Dell and revise their VC modules to natively support 40GbE (the Dell product is a regular blade Ethernet switch by contrast, and is not yet shipping).

You’d have to cut at least one chassis out of that configuration (or reduce port counts) in order to have enough ports on the X-Series to uplink to other infrastructure. (When I did the original calculations I forgot there would be two switches, not one, so there are more than enough ports to support 96 blade chassis between a pair of X-8s going full bore with 8x10GbE per chassis, and you could even use M-LAG to go active-active if you prefer.) I’m thinking load balancers, and some sort of scale-out NAS for file sharing, maybe the interwebs too.
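For what it’s worth, here’s the port budget as I understand it, assuming the X8’s advertised figure of 768 wire-speed 10GbE ports per chassis. Treat it as a sketch rather than a validated design:

```python
# Uplink port budget for a pair of BlackDiamond X8 switches feeding
# 96 blade chassis at 8x10GbE each. The 768-port figure is the
# advertised per-switch maximum; everything else follows from above.

X8_10GBE_PORTS = 768          # advertised max 10GbE ports per X8 chassis
SWITCHES = 2                  # one pair, e.g. running M-LAG active-active
CHASSIS = 96
UPLINKS_PER_CHASSIS = 8       # 10GbE uplinks left after 2 FC ports per VC pair

total_ports = X8_10GBE_PORTS * SWITCHES
used_ports = CHASSIS * UPLINKS_PER_CHASSIS
print(f"ports available: {total_ports}")   # 1536
print(f"ports consumed:  {used_ports}")    # 768
print(f"ports left for LB / NAS / internet uplinks: {total_ports - used_ports}")
```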

Think about that: up to 30,000 cores and more than 300 TB of memory; sure, you do have a bunch of bridges, but all of it is connected by only two switches and one storage array (perhaps two). Just insane.

One HP spokesperson mentioned that even a single V800 isn’t spec’d to support their maximum blade system configuration of 25,000 VMs. 25k VMs on a single array does seem quite high (that comes to an average of 18 SPC-1 IOPS/VM), but it really depends on what those VMs are doing. I don’t see how folks can go around tossing solutions around saying X number of VMs when workloads and applications can vary so widely.

So in short, the announcement was simple: 3PAR supports NPIV now. The benefits of that simple feature addition are pretty big though.

