TechOpsGuys.com Diggin' technology every day

30 Apr 2013

OpenFlow inventor admits SDN is hype

The whole SDN thing has bugged me for a long time now. The amount of hype behind it has driven me mad. I have asked folks to explain to me what SDN is, and I have never really gotten a good answer. I have a decent background in networking, but it has never been my full-time responsibility (nor do I wish it to be).

I am happy to report that it seems I am not crazy. Yesterday I came across an article on Slashdot from the inventor of OpenFlow, the same guy who sold his little networking startup Nicira for a cool $1.2B (and people thought HP paid too much for 3PAR).

He admits he doesn't know what SDN is either anymore.

Reading that made me feel better :)

On top of that our friends over at El Reg recently described SDN as an industry "hype gasm". That too was refreshing to see. Finally more people are starting to cut through the hype.

I've always felt that the whole SDN push is entirely too narrow in vision - focused almost entirely on switching & routing. Most of the interesting stuff happens higher up, in advanced layer 7 load balancing, where you have far more insight into what is actually traversing the wire from an application perspective.
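To make that point concrete, here is a toy Python sketch (all the pool names and routing rules are made up for illustration) of what a layer-4 device can see versus what a layer-7 balancer can act on:

```python
# Toy illustration, not any real load balancer's API: an L2/L3/L4
# device sees only addresses and ports, while an L7 proxy can route
# on the HTTP request itself.

def l4_route(packet):
    # All a layer-4 device can use: fields from the 5-tuple.
    return "web-pool" if packet["dst_port"] in (80, 443) else "default-pool"

def l7_route(request_line, headers):
    # A layer-7 balancer sees the application payload, so it can
    # send API traffic, image requests, and everything else to
    # different backend pools.
    method, path, _version = request_line.split(" ")
    if path.startswith("/api/"):
        return "api-pool"
    if headers.get("Accept", "").startswith("image/"):
        return "static-pool"
    return "web-pool"
```

A switch forwarding frames has no idea whether a flow on port 443 is an API call or a static asset; the L7 device does, which is where the application-aware smarts live.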

I have no doubt that the concepts behind SDN will be/are very useful for massive-scale service providers and the like (though somehow they managed without it, at least as it is being defined now). I don't see it as very useful for most other organizations, unlike, say, virtualized storage.

I cringed badly when I first saw the term software defined storage last year; it makes me shudder to think how much hype people might try to pump into that. HP seems to be using the term more and more often. I believe others are too, though I can't bring myself to google it.

TechOps Guy: Nate

Comments (5)
  1. Hi Nate, how goes?

    The more I played with networks over the years, and the more I thought about things like cache hits, memory latency, protocol offloads, buffering, and routing, the more I came to think that at some point all this needs to be flatter.

    I think that I first started along these thought paths when I had a 2 by 2 core 3.9GHz Xeon machine doing a Boeing impression under my desk, ca. ’05.

    The clock differentials were enormous, and besides, the 747 impression was half or more down to the whopping ton of 15K drives stashed in there to get a measly peak 450MB/s, which now is a comfy single SSD for streaming, or RAID 1 if you push small frames.

    We likely both grew up at a time when networking was only a part of everyday thinking to the likes of guys in labs at PARC. Certainly the Symbolics and other cabals didn’t really think of a computer as disconnected, and wrote object systems that inherently treated the network as the computer. My apologies to SUN, but UNIX I/O is a very different thing from inherent object communication. Nevertheless props for stealing the best marketing tagline of that decade.

    So the problem I think Casado has is that there ought not to be a distinct difference between networking, programming, process communications and compute. Only the hardware level is different in each: CMOS versus twisted pair, high-K dielectric versus SFP+, and there is always a necessity for a layer that abstracts the transport of data. Historically, though, that abstraction layer (to get to L2, say, or to create a gate in an ALU) has been hard enough to create very narrow specialisms. And because it has been a big market, and historically a very diverse one (then the UNIX vendors; now whoever can sell you microservers or network gear or storage software), specialism and differentiation play a major role in the self-description of businesses supplying the field.

    To me, all these distinctions, whilst I have spent much of my life thoroughly enjoying learning their detail, would be better served by accelerating towards logical homogeneity.

    The network is the computer is the app is the silicon, is the optical polarizing gate, is the quantum observation, is the molecular exchange.

    I think the real shift (oh how I wanted to say "paradigm shift", but it is not a paradigm in my book if it is merely an integration) will come when the distinctions are sufficiently blurred that any component can appear, as if by magic, however you need it to appear to another.

    Bring it on!

    Great to have you back, Nate!

    ’till soon,

    ~ j

  2. Hey John! Yes, I can see the use case of having everything be dynamic and built in similar ways. I just think that for the most part many networks don’t face frequent change. Things get built and, relative to the servers, stay fairly static. There isn’t a big need to be highly dynamic outside of service providers, which may be provisioning new customers with dedicated network segments or something similar. I do think flat(ter) networks are good from a physical perspective (fewer tiers), though I still value layer 3 separation that differentiates between the different purposes on the network (e.g. production, non-production, load balancers, VPNs, etc.). Arista today came out with some new gear and proposed upwards of 100,000 devices on a single flat network; to me that seems really excessive, though I guess some folks may like it.

    Having the abstraction layers is a good thing in most cases, isn’t it? I mean, it allows the different components to evolve at their own rate while maintaining some level of compatibility between them.

    thanks for the comment! (as always)
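The layer 3 separation described above can be sketched with Python's standard-library ipaddress module (the segment names and address ranges here are hypothetical, purely for illustration):

```python
import ipaddress

# Hypothetical subnets, one per network purpose. A router or firewall
# between the segments is the policy enforcement point; hosts within
# the same subnet talk freely at layer 2.
SEGMENTS = {
    "production":     ipaddress.ip_network("10.1.0.0/16"),
    "non-production": ipaddress.ip_network("10.2.0.0/16"),
    "load-balancers": ipaddress.ip_network("10.3.0.0/24"),
}

def segment_of(host):
    # Return the name of the segment a host address belongs to.
    addr = ipaddress.ip_address(host)
    for name, net in SEGMENTS.items():
        if addr in net:
            return name
    return None

def crosses_l3_boundary(src, dst):
    # Traffic between segments must pass through a router, where
    # ACLs or firewall rules can be applied.
    return segment_of(src) != segment_of(dst)
```

The point being made in the comment is exactly this: the segment map changes rarely once built, but the boundary it defines is doing useful work every day.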

  3. I agree with L3 abstraction, but that’s a pragmatic / programmatic necessity.

    Obviously my argument ignores inherent hardware properties, as ASICs and FPGAs and RF components are all built differently. But the situation is increasingly that, much as a 286's FPU was an expensive addition most did not need until integrated silicon brought the FPU on die, SoCs have been firmly here a long while now, and so silicon can be deployed to create an abstraction layer above anything.

    AMD seem to be on the case with hUMA, as to memory flatness: http://www.theregister.co.uk/2013/05/01/amd_huma/

    Good, makes me think of the heyday of SGI!

    Ha! We’ve been reading much the same: that Arista kit is rather nice, eh? :-)

    Indeed, very few networks need much alteration once built, but I can think of how block lists might work with IPv6, or how internal allocations (which comprise much of those v4 /8 uses) would benefit. My own use case for OpenFlow is building a very small scale distributed facility on a tiny budget, without expensive WAN routing. I looked, thanks to your exposition, at Extreme gear, for their very neat L3 handling. But that is still at least another 1U, and power budget. And then we get the fun of MPLS and all that to tie together the endpoints. Now the proprietary management layer of networks stands under threat from OpenFlow and variants, and anyone can write an OpenFlow controller (dare I hope never to see IOS again, with legacy from when they bought I forget who for their NAT tech, and 500 or more other acquisitions all kludged in?), so they're bringing out the big hardware guns. That wasn't overnight. Expect 40Gb/s in your den any time now!

    Must dash; will return. This is a super hot subject, I think, in PR terms, but no-one's looking at it as clearly as you are.

    atb, ~ j
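The "anyone can write an OpenFlow controller" idea above rests on one core concept: the controller installs (match, action) entries into a switch's flow table, and the switch forwards by table lookup instead of by a vendor's embedded control plane. Here is a plain-Python sketch of that flow table; the class and field names are hypothetical (a real controller speaks the OpenFlow wire protocol to the switch):

```python
# Sketch of an OpenFlow-style flow table: prioritised (match, action)
# entries pushed down by a controller. Not a real controller framework.

class FlowTable:
    def __init__(self):
        self.entries = []  # list of (priority, match_fields, action)

    def install(self, priority, match, action):
        # A controller would send this as a flow-mod message.
        self.entries.append((priority, match, action))
        # Highest-priority entry is consulted first, as in OpenFlow.
        self.entries.sort(key=lambda e: -e[0])

    def lookup(self, packet):
        # The switch's data path: first matching entry wins.
        for _prio, match, action in self.entries:
            if all(packet.get(k) == v for k, v in match.items()):
                return action
        # Table miss: punt the packet to the controller to decide.
        return "send-to-controller"
```

For example, installing a specific entry at priority 100 ({"dst_ip": "10.0.0.5"} → "output:2") and a catch-all drop at priority 10 gives you, in a dozen lines, the separation of control plane from data plane that the whole SDN argument turns on.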

  4. Just an aside: I would like to see a multi-purpose backplane standard arrive for 19″ racks.

    PCIe and/or 100 Gb/s Ethernet; standard 4 slots plus controller per unit.

    Plug in a wedge of Atom chips, a sluice of GPUs, a quad of Xeons, a stack of SSD, that sort of thing.

    I get very frustrated at using up unit space when you have to deploy a rack-top switch that is under-utilised. Usually the feature set does not come with fewer ports for a lower budget, which is where OpenFlow starts to look very handy.

    The buzzword might be Private Cloud, but I suggest instead we have Cloud Racks, where the hardware to do the job is simply miniaturized.

    Take my desktops: SFF Dell Vostro 270s, i5 3450s, 16GB RAM, FirePro 3900. An Icy Dock 5.25″ bay splits out a slim DVD-XL drive for tons of parity on an archive for sending my project code off site (120GB capacity; I massively up the parity against damage, or use M-Disc). A StarTech 6Gb/s RAID tiers two Intel 225s, straight level 1, above a near-line enterprise-grade WD rust spinner.

    That’s one heck of a lot in a box just bigger than a ream of paper, and a very, very sweet dev workstation. There’s no reason the same chassis could not take 8 cores of Xeon, and more RAM. I compile anything big on a machine that has ECC, and CAD would be a pain on this box, but it is very useable.

    Well, things can be really small. I just cited the specs because that’s awesome kit for the money and for the silence. At load I can hardly hear it above the aircon.

    Thing is, you can just about use a setup like that, virtualized, for many a small office server, for many office-type purposes (especially now that taking down an AD server instance does not wreck the network). Okay, it lacks many a feature you might desire, but not so many.

    So, why not have a multi-functional rack system?

    I am thinking about power budgets also. Space budgets.

    Power, as in on-grid power, say in the City of London, is scary scarce and expensive. Floor acreage is just as painful to lease.

    What I am suggesting is no less than a truly open blade architecture. I want to have 100 Gb/s Ethernet, and L3/4 features. But why do I need to buy a huge cage? Why does that have to be separate from the CPUs I need? I can think of many apps, mostly message passing, that do not need to scale in compute, but which are designed for many CPUs on a switch. Why recode when you can just bung all the bits into a smaller enclosure? Why, if I have 4 workstations now on my desk, can they not fan out from a small (and secured from theft) wall rack?

    Just shooting the breeze with this. I’d like to see proprietary blades go, even though in some cases they save much, even with the inherent premia attached. HP c-Class chassis are cute, but why do I need their storage, and not Hitachi? That peeves me, as the owner of some legacy VMS stuff that chugs along happily. I can get Itanium blades, but why, for a small install, can I not shove in a Hitachi controller?

    My argument may be edge case, may be specious, but enterprise capability is always forced down the chain.

    One of the good arguments for cloud infrastructure is ease of provisioning, and of course the hardware boys and girls are having a nice time selling into that, where they are not being bypassed straight to OEM, as the big boys do. By adopting the kind of backplane standard I propose, they would have an in to all those little shops who want to sink their capital for a mini cloud on premise in one go. My company gets to amortize capex really fast, and we do, so no plant / fixed carry / depreciation arguments. I’d love that for any small company. Really small companies worry about cash management and cash flow, so sunk costs for flexible gear are important. The true SOHO market, not just a tiny outfit with experience of enterprise systems, is yet to be cracked by the high end.

    Furthermore, with SDN, you can effectively sell hardware by software key license.

    Who says Arista would not be attractive to SOHO shops, if they sold small modular units?

    I bought some Netgear recently, for our branch. It really is not bad at all. I do feel a bit wimpish, mind you, without a “big name” on the tin! I know, I know, it’s sad. But think how Cisco are oversold to unknowing small-scale shops. There’s cachet in a name, and I think of it, with my advertising hat on, like how jewellery is sold: Tiffany sell small items, which most people who are not spendthrift can afford as a gift, and the name impresses. I can see it now, the blaggery of the local fix-it IT man: “But it works because I put an Arista card in there!” And it might do so, as all this SDN stuff makes management so much easier. Or potentially it does, if you allow that there’s a wide playing field for 3rd-party applications. Actually, SOHO is just where networks get reconfigured all too often…

    / reverie ends!

  5. I think the complexities involved in what you propose make it cost prohibitive. Take just the CPUs, for example: there’s tons of extra logic in those chips and the chipsets to link them together. AMD used to participate in the 8-way race; now they do not. The Intel 8-way systems are obviously quite expensive. There are systems that use AMD processors that go far beyond 8-way, but again they cost a lot, at least in part due to the custom chipset work that makes the magic happen (same goes for Intel as well, of course).

    Getting the abstraction layers out of the way is nice for performance but really helps kill flexibility and interoperability, which drives the cost higher, which drives the niche deeper, and that sort of feeds on itself for a while.

    Dell came out a few years ago with basically an external PCIe expansion enclosure:
    http://www.dell.com/us/business/p/poweredge-c410x/pd

    Though it is PCIe cards only. You can connect multiple systems (apparently up to 8) to the enclosure.

    SGI has a product called CloudRack, and it is a full rack, though the backplane is limited to power & cooling (no fans or PSUs in the servers, similar to blades).

    http://www.techopsguys.com/2010/10/05/hp-launches-new-denser-sl-series/