TechOpsGuys.com Diggin' technology every day

November 16, 2010

HP serious about blade networking

Filed under: Networking — Nate @ 10:32 am

I was doing my rounds and noticed that HP launched a new blade for the Xeon 6500/7500 processors (I don't yet see news of this breaking on The Reg, so I beat them for once!), the BL620c G7. They also have another blade, the BL680c G7, which is a double-wide solution that to me looks like nothing more than a pair of BL620c G7s stacked together, using the backplane to link the two systems. IBM does something similar on their BladeCenter to connect a memory expansion blade to their HX5 blade.

But what really caught my eye more than anything else is how much networking HP is including on their latest blades, whether it is the BL685c G7, or these two newer systems.

  • BL685c G7 & BL620c G7 both include 4x10GbE FlexFabric ports on board (no need to use expansion slots) – that is up to 16 FlexNICs per server – with three expansion slots you can get a max of 10x10GbE ports per server (or 40 FlexNICs per server)
  • BL680c G7 has 6x10GbE FlexFabric ports on board, providing up to 24 FlexNICs per server – with seven expansion slots you can get a max of 20x10GbE ports per server (or 80 FlexNICs per server) – the arithmetic behind these numbers is sketched below
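For anyone who wants to sanity-check those numbers, here is the arithmetic in a quick Python sketch. It assumes each FlexFabric 10GbE port can be carved into 4 FlexNICs and that each expansion (mezzanine) slot holds a dual-port 10GbE card – both are my own assumptions for illustration, though they line up with the totals above.

```python
# Rough arithmetic behind the FlexNIC counts above.
# Assumptions (mine, for illustration): each 10GbE FlexFabric port can be
# partitioned into 4 FlexNICs, and each expansion slot holds a dual-port
# 10GbE card.
FLEXNICS_PER_PORT = 4
PORTS_PER_MEZZ_CARD = 2

def max_ports_and_flexnics(onboard_ports, expansion_slots):
    total_ports = onboard_ports + expansion_slots * PORTS_PER_MEZZ_CARD
    return total_ports, total_ports * FLEXNICS_PER_PORT

blades = {
    "BL685c G7 / BL620c G7": (4, 3),   # 4 onboard ports, 3 expansion slots
    "BL680c G7":             (6, 7),   # 6 onboard ports, 7 expansion slots
}

for name, (onboard, slots) in blades.items():
    ports, flexnics = max_ports_and_flexnics(onboard, slots)
    print(f"{name}: {ports} x 10GbE ports max, {flexnics} FlexNICs max")
# BL685c G7 / BL620c G7: 10 x 10GbE ports max, 40 FlexNICs max
# BL680c G7: 20 x 10GbE ports max, 80 FlexNICs max
```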

Side note: FlexFabric is HP's term for its converged network adapter (CNA) technology.

Looking at the stock networking from Cisco, Dell, and IBM

  • Cisco – their site is complex as usual, but from what I can make out their B230 M1 blade has 2x10Gbps CNAs
  • Dell and IBM are stuck in 1GbE land, with IBM providing 2x1GbE on their HX5 and Dell providing 4x1GbE on their M910

What is even nicer about the extra NICs on the HP side, at least on the BL685c G7 (and I presume the BL620c G7), is that because they are full height, the connections from the extra 2x10GbE ports on the blade feed into the same interconnect slots on the backplane. That means with a single pair of 10GbE modules in the chassis you can get the full 4x10GbE per server (8 full-height blades per chassis). Normally, if you put extra NICs in the expansion slots, those ports are wired to different slots in the back, which requires additional networking modules in those slots.
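Here is a toy model of that wiring difference – the interconnect slot labels are illustrative placeholders of my own, not HP's actual port-mapping documentation:

```python
# Toy model of the port-to-interconnect-slot wiring described above.
# Slot labels are illustrative placeholders, not HP's official mapping.
full_height_onboard = {
    "LOM port 1": "interconnect slot A",
    "LOM port 2": "interconnect slot B",
    "LOM port 3": "interconnect slot A",   # extra onboard ports land in the
    "LOM port 4": "interconnect slot B",   # same slots -> one pair of modules
}
mezzanine_card = {
    "Mezz port 1": "interconnect slot C",  # expansion ports are wired to
    "Mezz port 2": "interconnect slot D",  # different slots -> more modules
}

print("Onboard 4x10GbE needs modules in:",
      sorted(set(full_height_onboard.values())))
print("Adding a mezzanine card also needs:",
      sorted(set(mezzanine_card.values())))
```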

You might be asking yourself: what if you don't have 10GbE and only have 1GbE networking? Well, first off – upgrade. 10GbE is dirt cheap now; there is absolutely no excuse for getting these new higher-end blade systems and trying to run them off 1GbE. You're only hurting yourself by attempting it. But in the worst case, if you really don't know what you're doing and happen to get these HP blades with 10GbE on them and want to connect them to 1GbE switches – well, you can. They are backwards compatible with 1GbE, either through HP's various 1GbE interconnect modules or the 10GbE pass-through module, which supports both SFP and SFP+ optics.

So there you have it: 4x10GbE ports per blade, standard. If it were me, I would take one port from each network ASIC and assign FlexNICs on it for VM traffic, then take the other port from each ASIC and enable jumbo frames on it for things like vMotion, Fault Tolerance, iSCSI and NFS traffic (a sketch of that layout is below). I'm sure the cost of adding the extra dual-port card is trivial when it is integrated onto the board, and HP is smart enough to recognize that!
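The FlexNIC names, bandwidth shares and MTUs in this sketch are made-up examples of mine, not an HP or VMware reference configuration; it just illustrates splitting each ASIC's two ports between VM traffic and jumbo-frame kernel traffic:

```python
# Illustrative FlexNIC layout for the 4 onboard 10GbE ports, split across
# the two network ASICs as described above. All names, bandwidth shares
# and MTUs are made-up examples, not a vendor-recommended configuration.
flexnic_plan = {
    # ASIC 1 port 1 and ASIC 2 port 1: VM traffic
    "asic1-port1": [("vm-traffic-a", 10.0, 1500)],
    "asic2-port1": [("vm-traffic-b", 10.0, 1500)],
    # ASIC 1 port 2 and ASIC 2 port 2: jumbo-frame kernel traffic
    "asic1-port2": [("vmotion-a", 4.0, 9000), ("iscsi-a", 4.0, 9000),
                    ("ft-a", 2.0, 9000)],
    "asic2-port2": [("vmotion-b", 4.0, 9000), ("nfs-b", 4.0, 9000),
                    ("ft-b", 2.0, 9000)],
}

# Sanity check: the FlexNIC bandwidth shares on each 10GbE port
# must not exceed the port's 10Gb of physical bandwidth.
for port, nics in flexnic_plan.items():
    total = sum(bw for _, bw, _ in nics)
    assert total <= 10.0, f"{port} is oversubscribed at {total}Gb"
    print(f"{port}: {len(nics)} FlexNICs, {total}Gb allocated")
```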

Having more FlexNICs on board means you can use those expansion slots for other things, such as Fusion-io accelerators, or maybe InfiniBand or native Fibre Channel connectivity. Having more FlexNICs on board also allows for greater flexibility in network configuration, of course. Take, for example, the Citrix NetScaler VPX, which, last I checked, required essentially dedicated network ports in vSphere in order to work.

As for me, I'm still not sold on the CNA concept at this point. I'm perfectly happy to run a couple of FC switches per chassis and a few extra cables to the storage system.
