TechOpsGuys.com Diggin' technology every day

October 8, 2010

I/O Virtualization for mortals

Filed under: Networking,Virtualization — Nate @ 10:24 pm

This product isn’t that new, but I haven’t seen many people talk about it. I first came across it a few weeks ago and it certainly looked very innovative.

I don’t know who started doing I/O virtualization first; maybe it was someone like Xsigo, maybe it was HP with their VirtualConnect, or maybe it was someone else. Either way, the space has heated up in the past couple of years.

Neterion is a name that sounds familiar but I can’t quite place it; a company by the name of Exar may have bought them or something. Anyway, they have an interesting virtualized NIC, the X3120 V-NIC, which looks pretty cool:

Neterion’s family of 10 Gigabit Ethernet adapters offer a unique multi-channel device model. Depending upon the product, a total of between eight and seventeen fully independent, hardware-based transmit and receive paths are available; each path may be prioritized for true Quality-of-Service support.

I/O Virtualization Support

  • Special “multi-function PCI device” mode brings true IOV to any industry-standard server. In multi-function mode, up to 8 physical functions are available (more in ARI-capable systems). Each physical function appears to the system as an independent Ethernet card
  • Unique, hardware-based multi-channel architecture mitigates head-of-line blocking and allows direct data transfer between hardware channels and host-based Virtual Machines without hypervisor intervention (greatly reducing CPU workload)
  • VMware® NetQueue support
  • Dedicated per-VF statistics and interrupts
  • Support for function-level reset (FLR)
  • Fully integrated Layer 2 switching function

I removed some bullet points to shorten the entry a bit; they covered things I wasn’t exactly sure about anyway! Anyone know what ARI means above?
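For what it’s worth, here is roughly what “each physical function appears to the system as an independent Ethernet card” looks like from the Linux side. This is just a generic sysfs walk, nothing Neterion-specific; the device paths are the standard ones, and the output obviously depends on your hardware:

```python
#!/usr/bin/env python3
# Rough sketch: group PCI network functions by physical device on a Linux host.
# Generic sysfs walk, nothing Neterion-specific; a multi-function (or SR-IOV)
# 10GbE card shows up as several functions sharing one bus:device address.
import os
from collections import defaultdict

PCI_ROOT = "/sys/bus/pci/devices"
NET_CLASS_PREFIX = "0x0200"   # PCI class code for Ethernet controllers

slots = defaultdict(list)
for dev in sorted(os.listdir(PCI_ROOT)):
    path = os.path.join(PCI_ROOT, dev)
    try:
        with open(os.path.join(path, "class")) as f:
            if not f.read().startswith(NET_CLASS_PREFIX):
                continue
    except OSError:
        continue
    slot, func = dev.rsplit(".", 1)          # e.g. 0000:05:00.1 -> 0000:05:00 / 1
    net_dir = os.path.join(path, "net")
    netdevs = os.listdir(net_dir) if os.path.isdir(net_dir) else []
    slots[slot].append((func, netdevs))

for slot, funcs in sorted(slots.items()):
    print("%s: %d function(s)" % (slot, len(funcs)))
    for func, netdevs in sorted(funcs):
        print("  .%s -> %s" % (func, ", ".join(netdevs) or "no netdev bound"))
```

On a card running in the multi-function mode described above, you would expect to see several functions (.0, .1 and so on) grouped behind a single slot, each bound to its own netdev.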

I’ve never used the product, but it is very nice to see something like it in the marketplace: you get Virtual Connect “like” functionality (at least as far as the virtual NICs go; I know there are a lot of other advantages to VC) in regular rack mount systems from any vendor, and you can at least potentially connect to any 10GbE switch. As far as I can tell there are no special requirements for a specific type of switch.

October 6, 2010

Who’s next

Filed under: Networking,Random Thought — Tags: , , — Nate @ 9:42 pm

I was thinking about this earlier this week, or maybe late last week, I forget.

It wasn’t long ago that IBM acquired Blade Network Technologies, a long-time partner of IBM. Blade made a lot of switches for the IBM BladeCenter, and I believe for the HP blade system as well.

I don’t think Blade Networks was really well known outside of their niche of being a back-end supplier to HP and IBM (and maybe others; I don’t recall and haven’t checked recently). I certainly never heard of them until the past year or two, and I do keep my eyes out for such companies.

Anyways, that is what started my train of thought. The next step in the process was watching several reports on CNBC about companies pulling their IPOs due to market conditions, which to me is confusing considering how high the “market” has come recently. It apparently just boils down to investors and IPO companies not being able to agree on a “market price” or whatever. I don’t really care what the reason is, but the point is this: earlier this year Force10 Networks filed for an IPO, and we haven’t heard much of a peep since.

The recent fight over 3PAR between Dell and HP, and the continuing saga of the stack wars, got me speculating.

What I think should happen is that Dell should go buy Force10 before they IPO. Dell obviously has no networking talent in house; last I recall their PowerConnect crap was OEM’d from someone like SMC or one of those really low-tier providers. I remember someone else making the decision to use that product last year, and when we tried to send 5% of our network traffic to the site that was running those switches they flat out died; we had to get remote hands to reboot them. Shortly afterwards one of them bricked itself during a firmware upgrade and had to be RMA’d. I just pointed and laughed, since I knew it was a mistake to go with them to begin with; the people making the decisions just didn’t know any better. Several outages later they ended up replacing them, and I taught them the benefits of a true layer 3 network, no more static routes.

Then HP should go buy Extreme Networks, my favorite network switching company; I think HP could do well with them. Yes, we all know HP bought 3Com last year, but we also know HP didn’t buy 3Com for the technology (no matter what the official company line is), they bought them for their presence in China. 3Com was practically a Chinese company by the time HP bought them, really! And yes, I did read the news that HP finished kicking Cisco out of their data centers, replacing their gear with a combination of ProCurve and 3Com. Juniper tried and failed to buy Extreme a few years ago, shortly after they bought NetScreen.

That would make my day though, a c-Class blade system with an Extreme XOS-powered VirtualConnect Ethernet fabric combined with 3PAR storage on the back end. Hell, that’d make my year 🙂

And after that, given that HP bought Palm earlier in the year (yes, I own a Palm Pre, mainly so I can run older Palm apps; otherwise I’d still be on a feature phone) and clearly likes the consumer space, they should go buy TiVo and break into the set-top box market. Did I mention I use TiVo too? I have three of them.

September 15, 2010

Time to drop a tier?

Filed under: Networking — Tags: , — Nate @ 8:30 am

I came across an interesting slide show at Network World, The Ultimate Guide to the Flat Data Center Network. From page 7:

All of the major switch vendors have come out with approaches that flatten the network down to two tiers, and in some cases one tier. The two-tier network eliminates the aggregation layer and creates a switch fabric based on a new protocol dubbed TRILL for Transparent Interconnection of Lots of Links. Perlman is a member of the IETF working group developing TRILL.

For myself, I have been designing two-tier networks for about six years now with my favorite protocol, ESRP. I won’t go into too much detail this time around (click the link for an in-depth article), but here is a diagram I modified from Extreme to show what my deployments have looked like:

Sample ESRP Mesh network

ESRP is very simple to manage, scalable, and mature, and with a mesh design like the above, the only place it needs to run is on the core. The edge switches can be any model from any vendor; managed and even unmanaged switches will work without trouble. Failover is sub-second, not quite the 25-50ms that EAPS provides for voice grade. I haven’t had any way to accurately measure it, but I would say it’s reasonable to expect a ~500ms failover in an all-Extreme network (where the switches communicate via EDP), or ~750-1000ms for switches that are not Extreme.
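Since I mentioned I haven’t had a good way to measure the failover accurately, here is about as far as my thinking has gone: a throwaway script that pings the ESRP-protected gateway in a tight loop and reports the longest gap in replies while you fail the master. The gateway address is a placeholder and the system ping only gives you coarse resolution, so treat this as a sketch, not a benchmark:

```python
#!/usr/bin/env python3
# Crude ESRP failover timer: ping the protected gateway in a loop and report the
# longest gap between successful replies while you fail the master switch.
# 10.0.0.1 is a placeholder; resolution is limited by the 1-second ping timeout.
import subprocess
import time

GATEWAY = "10.0.0.1"   # placeholder: the virtual gateway the ESRP pair answers for
DURATION = 120         # seconds to keep probing

def ping_once(host):
    return subprocess.call(
        ["ping", "-c", "1", "-W", "1", host],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL) == 0

start = time.time()
last_ok = None
worst_gap = 0.0
while time.time() - start < DURATION:
    now = time.time()
    if ping_once(GATEWAY):
        if last_ok is not None:
            worst_gap = max(worst_gap, now - last_ok)
        last_ok = now
    time.sleep(0.05)

print("worst gap between successful pings: %.0f ms" % (worst_gap * 1000))
```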

Why ESRP? Well, because as far as I have seen since I started using it, there is no other protocol on the market that can do what it can do (at all, let alone as easily as it can do it).

Looking at TRILL briefly, it is unclear to me if it provides layer 3 fault tolerance or if you still must use a second protocol like VRRP, ESRP or HSRP (ugh!) to do it.

The indication I get is that it is a layer 2 only protocol. If that is the case, it seems very short-sighted to design a fancy new protocol like that and not integrate at least optional layer 3 support; we’ve been running layer 3 on switches for more than a decade.

In case you didn’t know, or didn’t click the link yet, ESRP by default runs in both layer 2 and layer 3, though it can optionally be configured to run in only one layer if you prefer.

August 28, 2010

What a mouthful

Filed under: Networking,Random Thought — Tags: , — Nate @ 9:25 am

I’ve thought about this off and on and I better write about it so I can forget about it.

I think Force10 is way too verbose in the phrase they chose to describe their company; it’s quite a mouthful:

Force10 Networks, Inc., a global technology leader that data center, service provider and enterprise customers rely on when the network is their business[..]

I like Force10, and I have been watching them for five years now. I just think any phrase you choose to describe your company should be short enough to say in one (casual) breath.

How about “Force10 Networks Inc., a global networking technology leader”.

Force10’s marketers are very nice folks. I’ve sent them two corrections to their web site over the years (one concerning the number of ports a competitor offers in their products, the other a math error in a graphic showing how much you can save with their products), and they were very kind and responsive (and fixed both problems pretty quickly too). This one I won’t send to them directly since it’s more than a cosmetic change 🙂

August 25, 2010

Moving on up to Number two

Filed under: Networking — Tags: , — Nate @ 4:06 pm

Brings a tear to me eye, my favorite switching vendor had a pretty impressive announcement today:

Extreme Networks commanded the #2 revenue position for data center Top-of-Rack switches according to the quarterly Ethernet market share report, behind only Cisco, driven by its industry leading Summit(R) X650, Summit X450 and Summit X480 switches. In the “Top of Rack” switch port shipment category, Extreme Networks increased its port shipments by 194% compared to the same quarter one year ago. This demonstrates continued momentum for the Company in the dynamic and demanding data center Ethernet market.

If you haven’t already seen the X650, X480, and even the X450 series of switches, check them out. They offer several capabilities that no other vendor on the market provides, and they are very affordable.

I have blogged on some of my favorite topics in the past with regards to their technology. I’ve been using Extreme stuff for just about 10 years now, I think.

[tangent — begin]

I remember the 2nd switch I bought (this one for my employer), a Summit 48 with an external power supply; I think it was in 2001. I bought it off eBay from what I assume was a crashed dot-com or something. Anyway, they didn’t include the cable (it was sold “as is”) to connect the switch to the redundant power supply. So I hunted around trying to find what part to order and couldn’t find anything. So I called support.

The support tech had me recite the serial number of the unit to him, and he said they didn’t have a part number for that cable, so they couldn’t sell me one. But he happened to have a few cables laying around, so he put one in a FedEx pouch and shipped it to me, free. I didn’t have a support contract (and didn’t get one until I made a much larger purchase several years later). But I guess you could say that friendly support engagement certainly played a factor in me keeping tabs on the company and their products going forward, leading up to a million-dollar purchase several years later (at a different company) of more than 3,000 ports.

I used my first switch, also a Summit 48, as my home network switch for a good five years, before I decided it drew too much power for what I needed (a 48-port switch running maybe 5-6 ports total) and was pretty noisy (as are pretty much all switches from that era; I think it was manufactured in ’98). I got a good deal on a Summit 48si and upgraded to that! For another year, anyway, and then I retired it to a shelf. It drew half the power, and after replacing all of the fans in the unit (the original fans were too loud) it was quieter, but my network needs shrank even more, from ~5-6 systems to ~2-3 (yay VMware), and I wanted to upgrade to gigabit.

From the Summit 48 article above, I thought this was a good indication of how easy their stuff was to use, even more than 10 years ago:

[..]We tested it with and without the QoS enabled. Without the QoS enabled, I began to see glitches in the video. The video halted abruptly at rates over 98 percent. With two commands, I enabled QoS on the Summit switches. Summit48 intelligently discarded the packets with lower priority, preserving the video stream’s quality even at 100 percent utilization.

I eventually recycled my Summit 48, along with an old Cisco switch (which I never used) and a couple of really old Foundry load balancers (never used them either), a couple of years ago. I was too lazy to try to eBay them or put them on Craigslist. I still have my 48si; it’s a really nice switch and I like it a lot. In fact they still sell it even today, and still release updates (ExtremeWare 7.x) for it. The Summit 48 code base (ExtremeWare 1.x-4.x) was retired probably in 2002, so nothing new has been released for it in a long time.

[tangent — end]

So, congratulations Extreme for doing such a great job.

August 23, 2010

HP FlexFabric module launched

Filed under: Datacenter,Networking,Storage,Virtualization — Tags: , , , , — Nate @ 5:03 pm

While they announced it a while back, it seems the HP VirtualConnect FlexFabric module is now available for purchase for $18,500 (web price). Pretty impressive technology, sort of a mix of FCoE and combining a Fibre Channel switch and a 10Gbps Flex-10 switch into one. The switch has two ports on it that (apparently) can uplink directly to 2/4/8Gbps Fibre Channel. I haven’t read too much into it yet, but I assume it can uplink directly to a storage array, unlike the previous Fibre Channel Virtual Connect module, which had to be connected to a switch first (due to NPIV).

HP Virtual Connect FlexFabric 10Gb/24-port Modules are the simplest, most flexible way to connect virtualized server blades to data or storage networks. VC FlexFabric modules eliminate up to 95% of network sprawl at the server edge with one device that converges traffic inside enclosures and directly connects to external LANs and SANs. Using Flex-10 technology with Fibre Channel over Ethernet and accelerated iSCSI, these modules converge traffic over high speed 10Gb connections to servers with HP FlexFabric Adapters (HP NC551i or HP NC551m Dual Port FlexFabric 10Gb Converged Network Adapters or HP NC553i 10Gb 2-port FlexFabric Converged Network Adapter). Each redundant pair of Virtual Connect FlexFabric modules provides 8 adjustable connections (six Ethernet and two Fibre Channel, or six Ethernet and two iSCSI, or eight Ethernet) to dual port 10Gb FlexFabric Adapters. VC FlexFabric modules avoid the confusion of traditional and other converged network solutions by eliminating the need for multiple Ethernet and Fibre Channel switches, extension modules, cables and software licenses. Also, Virtual Connect wire-once connection management is built in, enabling server adds, moves and replacement in minutes instead of days or weeks.

[..]

  • 16 x 10Gb Ethernet downlinks to server blade NICs and FlexFabric Adapters
  • Each 10Gb downlink supports up to 3 FlexNICs and 1 FlexHBA or 4 FlexNICs
  • Each FlexHBA can be configured to transport either Fibre Channel over Ethernet/CEE or Accelerated iSCSI protocol.
  • Each FlexNIC and FlexHBA is recognized by the server as a PCI-e physical function device with adjustable speeds from 100Mb to 10Gb in 100Mb increments when connected to a HP NC553i 10Gb 2-port FlexFabric Converged Network Adapter or any Flex-10 NIC and from 1Gb to 10Gb in 100Mb increments when connected to a NC551i Dual Port FlexFabric 10Gb Converged Network Adapter or NC551m Dual Port FlexFabric 10Gb Converged Network Adapter
  • 4 SFP+ external uplink ports configurable as either 10Gb Ethernet or 2/4/8Gb auto-negotiating Fibre Channel connections to external LAN or SAN switches
  • 4 SFP+ external uplink ports configurable as 1/10Gb auto-negotiating Ethernet connected to external LAN switches
  • 8 x 10Gb SR, LR fiber and copper SFP+ uplink ports (4 ports also support 10Gb LRM fiber SFP+)
  • Extended list of direct attach copper cable connections supported
  • 2 x 10Gb shared internal cross connects for redundancy and stacking
  • HBA aggregation on FC configured uplink ports using ANSI T11 standards-based N_Port ID Virtualization (NPIV) technology
  • Allows up to 255 virtual machines running on the same physical server to access separate storage resources
  • Up to 128 VLANs supported per Shared Uplink Set
  • Low latency (1.2 µs Ethernet ports and 1.7 µs Enet/Fibre Channel ports) throughput provides switch-like performance.
  • Line Rate, full-duplex 240Gbps bridging fabric
  • MTU up to 9216 Bytes – Jumbo Frames
  • Configurable up to 8192 MAC addresses and 1000 IGMP groups
  • VLAN Tagging, Pass-Thru and Link Aggregation supported on all uplinks
  • Stack multiple Virtual Connect FlexFabric modules with other VC FlexFabric, VC Flex-10 or VC Ethernet Modules across up to 4 BladeSystem enclosures allowing any server Ethernet port to connect to any Ethernet uplink

Management

  • Pre-configure server I/O configurations prior to server installation for easy deployment
  • Move, add, or change server network connections on the fly without LAN and SAN administrator involvement
  • Supported by Virtual Connect Enterprise Manager (VCEM) v6.2 and higher for centralized connection and workload management for hundreds of Virtual Connect domains. Learn more at: www.hp.com/go/vcem
  • Integrated Virtual Connect Manager included with every module, providing out-of-the-box, secure HTTP and scriptable CLI interfaces for individual Virtual Connect domain configuration and management.
  • Configuration and setup consistent with VC Flex-10 and VC Fibre Channel Modules
  • Monitoring and management via industry standard SNMP v.1 and v.2
  • Role-based security for network and server administration with LDAP compatibility
  • Port error and Rx/Tx data statistics displayed via CLI
  • Port Mirroring on any uplink provides network troubleshooting support with Network Analyzers
  • IGMP Snooping optimizes network traffic and reduces bandwidth for multicast applications such as streaming applications
  • Recognizes and directs Server-Side VLAN tags
  • Transparent device to the LAN Manager and SAN Manager
  • Provisioned storage resource is associated directly to a specific virtual machine – even if the virtual server is re-allocated within the BladeSystem
  • Server-side NPIV removes storage management constraint of a single physical HBA on a server blade
  • Does not add to SAN switch domains or require traditional SAN management
  • Centralized configuration of boot from iSCSI or Fibre Channel network storage via Virtual Connect Manager GUI and CLI
  • Remotely update Virtual Connect firmware on multiple modules using Virtual Connect Support Utility 1.5.0

Options

  • Virtual Connect Enterprise Manager (VCEM), provides a central console to manage network connections and workload mobility for thousands of servers across the datacenter
  • Optional HP 10Gb SFP+ SR, LR, and LRM modules and 10Gb SFP+ Copper cables in 0.5m, 1m, 3m, 5m, and 7m lengths
  • Optional HP 8 Gb SFP+ and 4 Gb SFP optical transceivers
  • Supports all Ethernet NICs and Converged Network adapters for BladeSystem c-Class server blades: HP NC551i 10Gb FlexFabric Converged Network Adapters, HP NC551m 10Gb FlexFabric Converged Network Adapters, 1/10Gb Server NICs including LOM and Mezzanine card options and the latest 10Gb KR NICs
  • Supports use with other VC modules within the same enclosure (VC Flex-10 Ethernet Module, VC 1/10Gb Ethernet Module, VC 4 and 8 Gb Fibre Channel Modules).

So in effect this allows you to cut the number of switches per chassis down from four to two, which can save quite a bit. HP had a cool graphic showing the number of cables saved even against Cisco UCS, but I can’t seem to find it at the moment.

The most recently announced G7 blade servers have the new FlexFabric technology built in (which is also backwards compatible with Flex-10).
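To make the FlexNIC carve-up in the spec sheet above a little more concrete, here is a quick sketch of the per-downlink constraints as I read them: at most four functions per 10Gb downlink (at most one of them a FlexHBA), speeds in 100Mb increments, and the total no more than 10Gb. The example allocation is made up by me, not anything from HP:

```python
# Sketch of the per-downlink FlexNIC/FlexHBA constraints as I understand them:
# at most 4 functions per 10Gb downlink (at most one FlexHBA), each speed a
# multiple of 100Mb, total no more than 10Gb. The example allocation is made up.

DOWNLINK_MBPS = 10000
STEP_MBPS = 100

def check_downlink(functions):
    """functions: list of (name, kind, speed_mbps) where kind is 'nic' or 'hba'."""
    problems = []
    if len(functions) > 4:
        problems.append("more than 4 functions on one downlink")
    if sum(1 for _, kind, _ in functions if kind == "hba") > 1:
        problems.append("more than one FlexHBA on one downlink")
    for name, _, speed in functions:
        if speed <= 0 or speed % STEP_MBPS:
            problems.append("%s: speed must be a positive multiple of %dMb" % (name, STEP_MBPS))
    if sum(speed for _, _, speed in functions) > DOWNLINK_MBPS:
        problems.append("total allocation exceeds the %dMb downlink" % DOWNLINK_MBPS)
    return problems

# e.g. management, vMotion and production FlexNICs plus an FCoE FlexHBA
example = [("mgmt",    "nic", 500),
           ("vmotion", "nic", 2000),
           ("prod",    "nic", 3500),
           ("fcoe",    "hba", 4000)]

errors = check_downlink(example)
print("allocation OK" if not errors else "; ".join(errors))
```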

VCEM seems pretty scalable

Built on the Virtual Connect architecture integrated into every BladeSystem c-Class enclosure, VCEM provides a central console to administer network address assignments, perform group-based configuration management and rapidly deploy, move and fail over server connections for 250 Virtual Connect domains (up to 1,000 BladeSystem enclosures and 16,000 blade servers).

With each enclosure consuming roughly 5kW with low-voltage memory and power capping, 1,000 enclosures should consume roughly 5 megawatts. From what I see, “experts” say it costs roughly ~$18 million per megawatt to build a data center, so one VCEM system can manage a $90 million data center; that’s pretty bad ass. I can’t think of who would need so many blades...
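The back-of-the-envelope math there fits in a couple of lines; the 5kW-per-enclosure and ~$18M-per-megawatt figures are the rough numbers quoted above, nothing official:

```python
# Back-of-the-envelope: what one VCEM instance could theoretically manage,
# using the rough numbers from the text above (not official figures).
enclosures = 1000              # max BladeSystem enclosures per VCEM
kw_per_enclosure = 5           # ~5kW with low-voltage memory and power capping
dollars_per_megawatt = 18e6    # rough "expert" build-out cost per MW

total_mw = enclosures * kw_per_enclosure / 1000.0
print("total power: %.1f MW" % total_mw)                                            # 5.0 MW
print("data center cost: $%.0f million" % (total_mw * dollars_per_megawatt / 1e6))  # $90 million
```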

If I were building a new system today I would probably get this new module, but I’d have to think hard about whether to stick with the regular Fibre Channel module to let the technology bake a bit more on the storage side.

The module is built based on Qlogic technology.

April 26, 2010

40GbE for $1,000 per port

Filed under: Networking,News — Tags: , — Nate @ 8:32 am

It seems it wasn’t too long ago that 10GbE broke the $1,000/port price barrier. Now we have reached it with 40GbE as well: my own personal favorite networking company, Extreme Networks, today announced the availability of an expansion module for the X650 and X480 stackable switches that adds 40GbE support. Top-of-rack line-rate 10GbE just got more feasible.

LAS VEGAS, NV, Apr 26, 2010 (MARKETWIRE via COMTEX News Network) — Extreme Networks, Inc. (NASDAQ: EXTR) today announced highly scalable 40 Gigabit Ethernet (GbE) network solutions at Interop Las Vegas. The VIM3-40G4X adds four 40 GbE connections to the award-winning Summit(R) X650 Top-of-Rack stackable switches for $3,995, or less than $1,000 per port. The new module is fully compatible with the existing Summit X650 and Summit X480 stackable switches, preserving customers’ investments while providing a smooth upgrade to greatly increased scalability of both virtualized and non-virtualized data centers.

[..]

Utilizing Ixia’s IxYukon and IxNetwork test solutions, Extreme Networks demonstrates wire-speed 40Gbps performance and can process 60 million packets per second (120Mpps full duplex) of data center traffic between ToR and EoR switches.

April 19, 2010

Arista ignites networks with groundbreaking 10GbE performance

Filed under: Networking,News — Tags: , — Nate @ 8:53 am

In a word: Wow

I just read an article from our friends at The Register on a new 384-port chassis 10GbE switch that Arista is launching. From a hardware perspective the numbers are just jaw dropping.

A base Arista 7500 costs $140,000, and a fully configured machine with all 384 ports and other bells and whistles runs to $460,800, or $1,200 per port. This machine will draw 5,072 watts of juice and take up a little more than quarter of a rack.

Compare this to a Cisco Nexus 7010 setup to get 384 wirespeed ports and deliver the same 5.76 Bpps of L3 throughput, and you need to get 18 of the units at a cost of $13.7m. Such a configuration will draw 160 kilowatts and take up 378 rack units of space – nine full racks. Arista can do the 384 ports in 1/34th the space and 1/30th the price.
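Those 1/30th and 1/34th figures check out if you run the quoted numbers yourself; the only assumption on my part is calling “a little more than a quarter of a rack” roughly 11U of a 42U rack:

```python
# Sanity-checking The Register's ratios using only the figures quoted above.
arista_price = 460800.0    # fully configured 7500, 384 ports
arista_watts = 5072.0
arista_ru    = 11.0        # assumption: "a little more than a quarter" of a 42U rack
cisco_price  = 13700000.0  # 18x Nexus 7010 for 384 wirespeed ports
cisco_watts  = 160000.0
cisco_ru     = 378.0       # nine full racks

print("Arista per port: $%.0f" % (arista_price / 384))         # ~$1,200
print("price ratio: 1/%.0f" % (cisco_price / arista_price))    # ~1/30
print("space ratio: 1/%.0f" % (cisco_ru / arista_ru))          # ~1/34
print("power ratio: 1/%.0f" % (cisco_watts / arista_watts))    # ~1/32
```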

I love the innovation that comes from these smaller players, really inspiring.

April 8, 2010

What can you accomplish in two microseconds?

Filed under: Networking — Nate @ 4:30 pm

An interesting post on the Data Center Knowledge site about the growth in low-latency data centers; the two things that were pretty shocking to me at the end were:

“I still find it amazing,” said McPartland. “A blink of an eye is 300 milliseconds. That’s an eternity in this business.”

How much of an eternity: “You can do a heck of a lot in 2 microseconds,” said Kaplan.

It’s interesting, the latency requirements these fast stock traders are looking for. It reminded me of a network upgrade the NYSE did a while back, deploying some Juniper gear, as reported by The Register:

With the New York Stock Exchange down on Wall Street being about ten miles away from the data center in New Jersey, the delay between Wall Street and the systems behind the NYSE is about 105 microseconds. This is not a big deal for some trading companies, but means millions of dollars for others.

[..]

NYSE Technologies, which is the part of the company that actually hooks people into the NYSE and Euronext exchanges, has rolled out a market data system based on the Vantage 8500 switches. The system offers latencies per core switch in the range of 25 microseconds for one million messages per second on messages that are 200 bytes in size.
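For some context on that 105 microsecond figure, most of it is just the speed of light in glass. Here is a quick back-of-the-envelope; the 1.47 refractive index and the straight-line ten mile run are my assumptions, and real fiber routes are longer than the crow flies, which is part of why the quoted number is higher:

```python
# Rough propagation-delay estimate for a ~10 mile Wall Street <-> New Jersey run.
# Assumptions: straight-line distance, refractive index of ~1.47 for the fiber.
C_KM_PER_S = 299792.458
REFRACTIVE_INDEX = 1.47     # light in fiber travels at roughly c / 1.47

miles = 10
km = miles * 1.609344
one_way_us = km / (C_KM_PER_S / REFRACTIVE_INDEX) * 1e6

print("one-way propagation: %.0f microseconds" % one_way_us)    # ~79 us
print("round trip: %.0f microseconds" % (2 * one_way_us))       # ~158 us
```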

The Vantage 8500 switch, which was announced last year, seems pretty scalable, claiming non-blocking 10GbE scalability for up to 3,400 servers.

Arista Networks somewhat recently launched an initiative aimed at this market segment as well.

Since the Juniper announcement, Force10 has announced that the NYSE has chosen its gear for the exchange’s next-generation data centers. The Juniper switching gear so far hasn’t looked all that great compared to the competition, so I’d be curious how the deployment of Force10 gear relates to the earlier deployment of Juniper gear:

SAN JOSE, Calif., November 9, 2009 – Force10 Networks, Inc., the global technology leader that data center, service provider and enterprise customers rely on when the network is their business, today announced that the NYSE Euronext has selected its high-performance 10 Gigabit Ethernet (10 GbE) core and access switches to power the management network in their next-generation data centers in the greater New Jersey and London metro areas.

Force10 of course has been one of the early innovators and leaders in 10GbE port density and raw throughput (at least on paper; I’ve never used their stuff personally, though I have heard good things). On a related note, it wasn’t long ago that they filed for an IPO. I wish them the best, as Force10 really is an innovative company and I’ve admired their technology for several years now.

(how do I remember all of these news articles?)

March 17, 2010

Frightened

Filed under: General,Networking — Tags: — Nate @ 8:15 pm

Frightened. That was the word that first came to my mind when I read this article from our friends at The Register.

The report also says that 60 per cent of Google’s traffic is now delivered directly to consumer networks. In addition to building out a network of roughly 36 data centers and co-locating in more than 60 public exchanges, the company has spent the past year deploying its Google Global Cache (GGC) servers inside consumer networks across the globe. Labovitz says that according to Arbor’s anecdotal conversations, more than half of all consumer providers in North American and Europe now have at least one rack of Google’s cache servers.

Honestly, I am speechless beyond the word frightened; you may want to refer to an earlier blog post, “Lesser of two Evils”, for more details.

