Diggin' technology every day

February 14, 2011

Lackluster FCoE adoption

Filed under: Networking — Tags: — Nate @ 9:22 pm

I wrote back in 2009 (wow, was it really that long ago?), in one of my first posts, about how I wasn't buying into the FCoE movement: at first glance it sounded really nice, but once you got into the details it fell apart. Well, it seems that I'm not alone; not long ago, in an earnings announcement, Brocade said they were seeing lackluster FCoE adoption, lower than they expected.

He discussed what Stifel’s report calls “continued lacklustre FCoE adoption.” FCoE is the running of Fibre Channel storage networking block-access protocol over Ethernet instead of using physical Fibre Channel cabling and switchgear. It has been, is being, assumed that this transition to Ethernet would happen, admittedly taking several years, because Ethernet is cheap, steamrolls all networking opposition, and is being upgraded to provide the reliable speed and lossless transmission required by Fibre Channel-using devices.

Maybe it's just something specific to investors. I was at a conference for Brocade products, I think in 2009 even, where they talked about FCoE among many other things, and if memory serves they didn't expect much out of FCoE for several years. So maybe it was management higher up setting the wrong expectations, I don't know.

Then more recently I saw an article posted on Slashdot that basically talks about the same thing.

Even today I am not sold on FCoE. I do like Fibre Channel as a protocol, but I don't see a big advantage at this point to running it over native Ethernet. These days people seem to be consolidating onto fewer, larger systems; I would expect the people more serious about consolidation are using quad-socket systems with much, much larger memory configurations (hundreds of gigs). You can power that quad-socket system, with its hundreds of gigs of memory, with a single dual-port 8Gbps Fibre Channel HBA. Those who know storage and random I/O understand better than anyone how much I/O it would really take to max out an 8Gbps Fibre Channel card; you're not likely to ever manage it with a virtualization workload, or even with most database workloads. And if you do, you're probably running at a 1:1 ratio of storage arrays to servers.

The cost of the Fibre Channel network is trivial at that point (assuming you have more than one server). I really like the latest HP blades because you just get a ton of bandwidth options with them right out of the box. Why stop at running everything over a measly single dual-port 10GbE NIC when you can have double the NICs, AND throw in a dual-port Fibre Channel adapter for not much more cash? Not only does this give more bandwidth, but more flexibility and traffic isolation as well (storage/network etc.). On the blades at least it seems you can go even beyond that (more 10GbE ports), though I was reading in one of the spec sheets for the PCIe 10GbE cards that on the ProLiant servers no more than two adapters are supported:

NOTE: No more than two 10GbE I/O devices are supported in a single ProLiant server.

I suspect that NOTE may be out of date with the more recent ProLiant systems that have been introduced; after all, they are shipping a quad-socket Intel ProLiant blade with three dual-port 10GbE devices on it from the get-go. And I can't help but think the beast of a DL980 has enough PCIe buses to handle a handful of 10GbE ports. The 10GbE FlexFabric cards list the BL685c G7 as supported as well, meaning you can get at least six ports on that blade too. So who knows…

Do the math: the added cost of a dedicated Fibre Channel network really is nothing. Now, if you happen to go out and choose the most complicated-to-manage Fibre Channel infrastructure along with the most complicated Fibre Channel storage array(s), then all bets are off. But just because there are really complicated things out there doesn't mean you're forced to use them, of course.

Another factor is staff, I guess. If you have monkeys running your IT department, maybe Fibre Channel is not a good thing and you should stick to something like NFS. And you can secure your network by routing all of your VLANs through your firewall while you're at it, because you know your firewall can keep up with your line-rate gigabit switches, right? riiight.

I'm not saying FCoE is dead; I think it'll get here eventually, but I'm not holding my breath for it. With present technology it's really more of a step back than a step forward.

Vertica snatched by HP

Filed under: News — Tags: , , — Nate @ 9:00 pm

Funny timing! One of my friends who used to work for 3PAR left not long after HP completed the acquisition and went to Vertica, which makes a scale-out, column-based, distributed high-performance database. Certainly not an area I am well versed in, but I got a bit of info a couple of weeks ago and the performance numbers are just outstanding, the kind of performance gains you really have to see to believe. Fortunately for users their software is free to download, and it sounds like it is easy to get up and running (I have no personal experience with it, but would like to see it in action at some point soon). Performance gains of up to 10,000% over traditional databases are not uncommon.

It really sounds like an awesome product that can do more real-time analysis on large amounts of data (from a few gigs to over a petabyte). Something the Hadoop users out there should take notice of. If you recall, last year I wrote a bit about organizations I have talked to that were trying to do real time with Hadoop, with (most likely) disastrous results; it's not built for that and never was, which is why Google abandoned it (well, not Hadoop, since they never used the thing, but MapReduce technology in general, at least as far as their search index is concerned; they may use it for other things). Vertica is unique in that it is the only product of its kind in the world with a software connector that can connect Hadoop to Vertica. Quite a market opportunity. Of course a lot of the PHB types are attracted to Hadoop because it is a buzzword and because it's free. They'll find out the hard way that it's not the holy grail they thought it was going to be, and go to something like Vertica kicking and screaming.

So back to my friend, he’s back at HP again, he just couldn’t quite escape the gravitational pull that was HP.

Also somewhat funny, as it wasn't very long ago that HP announced a partnership with Microsoft to do data warehousing applications. Sort of reminds me of when NetApp tried to go after Data Domain; mere days before they announced their bid, they put out a press release saying how good their dedupe was…

Oh and here’s the news article from our friends at The Register.

The database runs in parallel across multiple machines, but has a shared-nothing architecture, so the query is routed to the data and runs locally. And the data for each column is stored in main memory, so a query can run anywhere from 50 to 1,000 times faster than a traditional data warehouse and its disk-based I/O – according to Vertica.

The Vertica Analytics Database went from project to commercial status very quickly – in under a year – and has been available for more than five years. In addition to real-time query functions, the Vertica product continuously loads data from production databases, so any queries done on the data sets is up to date. The data chunks are also replicated around the x64-based cluster for high availability and load balancing for queries. Data compression is heavily used to speed up data transfers and reduce the footprint of a relational database, something on the order of a 5X to 10X compression.
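That 5X to 10X compression figure is plausible precisely because the data is stored by column: within a single column, values are similar and often sorted, so consecutive duplicates collapse to almost nothing. Here's a toy illustration (not Vertica's actual implementation, just the general idea) using simple run-length encoding on a sorted, low-cardinality column:

```python
# Toy illustration of why a sorted, low-cardinality column compresses so
# well with run-length encoding. A row store interleaves this column with
# all the others, destroying these runs.
from itertools import groupby

def rle(column):
    """Collapse consecutive duplicates into (value, run_length) pairs."""
    return [(value, sum(1 for _ in run)) for value, run in groupby(column)]

# A column of 1 million rows with only three distinct values, stored sorted:
column = ["east"] * 400_000 + ["west"] * 350_000 + ["north"] * 250_000
encoded = rle(column)

print(encoded)  # three (value, count) pairs instead of a million strings
print(f"{len(column):,} values -> {len(encoded)} runs")
```

The same property is what lets a column store answer aggregate queries without ever decompressing: a count or sum over the region column above only needs to touch three runs.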

Vertica's front page now has a picture of a c-Class blade enclosure. Just think of what you can analyze with an enclosure filled with 384 x 2.3GHz Opteron 6100 cores (which were released today as well, and HP announced support for them on my favorite BL685c G7) and 4TB of memory, all squeezed into 10U of space.

If you're in the market for a data warehouse / BI platform of sorts, I urge you to at least see what Vertica has to offer. It really does seem revolutionary, and they make it easy enough to use that you don't need an army of PhDs to design and build it yourself (i.e., Google).

Speakin' of HP, I did look at what the new Palm stuff will be and I'm pretty excited; I just wish it was going to get here sooner. I went out and bought a new phone in the interim, until I can get my hands on the Pre 3 and the TouchPad. My Pre 1 was not just on its last legs, it was in a wheelchair with an oxygen bottle. The new phone isn't anything fancy, just a feature phone, but it does have one thing I'm not used to having: battery life. The damn thing can easily go 3 days without the battery even going down by one bar. And I have heard from folks that the Pre 3 will be available on Sprint, which makes me happy as a Sprint customer. Still, I didn't take a chance and extend my contract, just in case that changes.
