TechOpsGuys.com Diggin' technology every day

August 17, 2009

FCoE Hype

Filed under: Storage — Nate @ 6:41 pm

I feel like I’ve been bombarded by hype about FCoE (Fibre Channel over Ethernet) over the past five months or so, and wanted to rant a bit about it. I’ve been to several conferences lately and they all seem to hammer on it.

First a little background on what FCoE is and this whole converged networking stuff that some companies are pushing.

The idea behind it is to combine Fibre Channel and traditional Ethernet networking onto a single cable. So instead of separate FC and network cabling, you have two 10 Gigabit connections coming out of your server carrying both your traditional FC traffic and your regular networking. The converged HBA presents itself to the server as independent FC and Ethernet connectivity. From a 10,000-foot view it sounds like a really cool thing to have, but then you get into the details.

They re-worked the foundations of Ethernet networking (the lossless “data center bridging” extensions) to be better suited for storage traffic, which is a good thing, but it simultaneously makes this new FCoE technology incompatible with all existing Ethernet switches. You don’t get a true “converged” network based on Ethernet; in many cases you can’t even use the same cabling you would for 10GbE. You cannot “route” your storage (FC) traffic across a traditional 10GbE switch, despite it running over “Ethernet”.

The way it’s being pitched, for the most part, is as something of an aggregation layer: you link your servers to an FCoE switch, and that switch splits the traffic out, uplinking the Ethernet side to upstream 10GbE switches and the FC traffic to FC switches (or FC storage). So what are you left with?

  • You still need two separate networks – one for your regular Ethernet traffic, the other for the FCoE traffic
  • You still need to do things like zone your SAN, since the FCoE adapter presents itself as Fibre Channel HBAs (see the zoning sketch after this list)
  • At least right now you end up paying quite a premium for the FCoE technology; from the numbers I’ve seen (mostly list pricing on both sides), an FCoE solution can cost 2x more than a 10GbE + 8Gb Fibre Channel solution (never mind that the split solution, as an aggregate, can deliver much more performance).
  • With more and more people deploying blades these days, you’re really not cutting much of the cable clutter with FCoE, since your cables are already aggregated at the chassis level. I even saw one consultant who seemed to imply that some people use cables to connect their blades to their blade chassis? He was very confusing. Reduce your cable clutter! Cut your cables in half! Going from four, or even six, cables down to two really isn’t much to get excited about.
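
To put a finer point on the zoning item above, here’s a minimal sketch in Python, with made-up WWPNs, of the bookkeeping you’re still signed up for: single-initiator/single-target zones for every server CNA port against every array port, exactly like you would with plain FC. This isn’t tied to any particular switch vendor’s CLI.

```python
# Sketch of the zoning bookkeeping FCoE still requires (hypothetical WWPNs).
# Each zone pairs one initiator (server CNA port) with one target (array port),
# the usual single-initiator/single-target zoning practice.

initiators = {                      # server -> CNA port WWPN (made up)
    "web01": "10:00:00:00:c9:aa:bb:01",
    "db01":  "10:00:00:00:c9:aa:bb:02",
}

targets = {                         # array port name -> WWPN (made up)
    "array_ctl_a": "50:01:43:80:00:00:00:10",
    "array_ctl_b": "50:01:43:80:00:00:00:11",
}

def build_zones(initiators, targets):
    """Return a dict of zone name -> [initiator WWPN, target WWPN]."""
    zones = {}
    for host, i_wwpn in initiators.items():
        for port, t_wwpn in targets.items():
            zones[f"z_{host}_{port}"] = [i_wwpn, t_wwpn]
    return zones

if __name__ == "__main__":
    for name, members in build_zones(initiators, targets).items():
        print(name, "->", "; ".join(members))
```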

What would I like to see? Let the FCoE folks keep their stuff; if it makes them happy, I’m happy for them. What I’d like to see, as far as this converged networking goes, is more 10GbE iSCSI converged HBAs. I see that Chelsio has one, for example, which combines 10GbE iSCSI offload and a 10GbE NIC in one package. I have no experience with their products, so I don’t know how good it is or isn’t. I’m not personally aware of any storage arrays that offer 10GbE iSCSI connectivity, though I haven’t checked recently. But what I’d like to see as an alternative is more focus on standardized Ethernet as a storage transport, rather than this incompatible stuff.
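
And the nice thing about the iSCSI path is that it’s ordinary Ethernet end to end. As a rough illustration, here’s a minimal sketch assuming a Linux host with open-iscsi installed; the portal address is made up, and the array on the other end could be anything that speaks iSCSI.

```python
# Minimal sketch: discover and log in to an iSCSI target over ordinary
# Ethernet, assuming a Linux host with open-iscsi (iscsiadm) installed.
# The portal IP below is hypothetical.
import subprocess

PORTAL = "192.168.50.10"  # hypothetical 10GbE iSCSI array portal

def discover_targets(portal):
    """Run sendtargets discovery and return the raw iscsiadm output."""
    out = subprocess.run(
        ["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal],
        check=True, capture_output=True, text=True,
    )
    return out.stdout

def login_all(portal):
    """Log in to every discovered target record on that portal."""
    subprocess.run(
        ["iscsiadm", "-m", "node", "-p", portal, "--login"],
        check=True,
    )

if __name__ == "__main__":
    print(discover_targets(PORTAL))
    login_all(PORTAL)
```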

Ethernet switches are so incredibly fast these days, and cheap! Line-rate, non-blocking 1U 10GbE switches are dirt cheap, and many of them can even do 10GbE over regular old Cat 5E, at least over short runs, though I’m sure Cat 6A would provide better performance and/or latency. But the point I’m driving towards is not having to care what I’m plugging into: have it just work.

Maybe I’m just mad because I got somewhat excited about the concept of FCoE and feel totally let down by the details.

What I’d really like to see is an HP VirtualConnect 10GbE “converged” iSCSI+NIC. That’d just be cool. Toss onto that the ability to run a mix of jumbo and non-jumbo frames on the same NIC (different VLANs, of course). Switches can do it; NICs should be able to do it too! I absolutely want jumbo frames on any storage network, but I probably do not want jumbo frames on my regular network, for compatibility reasons.
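
To show what I mean, here’s a minimal sketch assuming a Linux host where the storage VLAN is a tagged sub-interface on the same NIC; the interface names and VLAN IDs are made up. It just reads MTUs out of sysfs to confirm jumbo frames on the storage VLAN and standard frames on the regular one.

```python
# Sketch: verify jumbo frames on the storage VLAN while the regular VLAN
# stays at the standard MTU. Assumes Linux; interface names are hypothetical
# (eth0 = physical NIC, eth0.20 = storage VLAN, eth0.10 = regular VLAN).
from pathlib import Path

EXPECTED_MTU = {
    "eth0": 9000,      # physical NIC must carry the largest frame
    "eth0.20": 9000,   # storage VLAN: jumbo frames
    "eth0.10": 1500,   # regular VLAN: standard frames for compatibility
}

def read_mtu(iface):
    """Read the current MTU for an interface from sysfs."""
    return int(Path(f"/sys/class/net/{iface}/mtu").read_text())

if __name__ == "__main__":
    for iface, want in EXPECTED_MTU.items():
        got = read_mtu(iface)
        status = "OK" if got == want else f"MISMATCH (want {want})"
        print(f"{iface}: mtu {got} {status}")
```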

3 Comments

  1. […] Networking, it’s all FCoE based, I’ve already written a blog entry on that, you can read about my thoughts on FCoE here. […]

    Pingback by The new Cisco/EMC/Vmware alliance – the vBlock « TechOpsGuys.com — November 3, 2009 @ 6:04 pm

  2. Dell Equallogic PS6010 and PS6510 are 10 Gbit iSCSI arrays, with two 10 Gbit ports per controller.

    http://www.equallogic.com/products/ps6010-series.aspx?id=8945&slider6010=1

    I’m hoping to get to test/benchmark them in the near future. I’m expecting a lot, since Equallogic has been producing the best 1 Gbit iSCSI arrays out there, and has been since 2003.

    And yeah, I completely agree with your FCoE rant. It feels just like another attempt to extend the lifetime of FC. These companies have invested a lot in FC.

    Comment by Pasi Karkkainen — January 14, 2010 @ 12:56 am

  3. […] wrote back in 2009, wow was it really that long ago, one of my first posts, about how I wasn’t buying into the […]

    Pingback by Lackluster FCoE adoption « TechOpsGuys.com — February 14, 2011 @ 9:22 pm

