TechOpsGuys.com - Diggin' technology every day

April 10, 2012

Oracle first to release 10GbaseT as standard ?

Filed under: Networking — Tags: , , — Nate @ 2:21 pm

Sun has had some innovative x86-64 designs in the past, particularly on the AMD front. Of course Oracle dumped AMD a while back to focus on Intel, even as their x86 server market share continues to collapse (in good part, probably, because from what I recall they screwed over many of their partners by going direct with so many customers, among other things).

In any case they launched a new server lineup today, which otherwise is not really news (who uses Sun/Oracle x86-64 boxes anyway?), but I thought it was interesting since it seems to include 4 x 10GbaseT ports on board as standard.

Rear of Sun Fire X4170 M3 Server

The Sun Fire X4170 M3 and the X4270 M3 systems both appear to have quad port 10GbaseT on the motherboard. I haven't heard of any other servers yet that have this as standard. Out of curiosity, if you know of others I'd be interested to hear who they are.

The data sheet is kind of confusing: it says the system has 4 onboard 10GbE ports, but then says "Four 100/1,000/10 Base-T Ethernet ports" in the network section below. Of course 10/100/1000 BaseT was common before, so after seeing the physical rear of the system I'm fairly convinced they really are 10GbaseT ports.

Nice goin’ Oracle.

 

March 19, 2012

10GbaseT making a comeback ?

Filed under: Networking — Tags: — Nate @ 12:20 pm

Say it's true… I've been a fan of 10GbaseT for a while now, though it hasn't really caught on in the industry. Off the top of my head I can only think of Arista and Extreme who have embraced the standard from a switching perspective, with everyone else going with SFP+, XFP or something else. Both Arista and Extreme obviously have SFP+ products as well, maybe XFP too, though I haven't looked into why someone would use XFP over SFP+ or vice versa.

From what I know, the biggest thing holding back 10GbaseT adoption has been power usage, and I think parts of the industry had simply given up waiting for it to materialize. Cost was a factor too: I recall that Extreme's 24-port 10GbaseT switch was about $10k more than their SFP+ switch (without any SFP+ adapters or cables), so it was priced similarly to an optical switch that was fairly fully populated with modules, which made entry-level pricing quite a bit higher if you only needed, say, 10 ports to start.

But I have recently read two different things (and heard a third), which I'm sure are related, and which hopefully point to a turning point in 10GbaseT adoption.

The first was a banner on Arista’s website.

The second is this blog post talking about a new 10GbaseT chip from Intel.

Then the third thing I probably can’t talk about, so I won’t 🙂

I would love to have 10GbaseT in place of the passive copper cabling that most folks use now; that stuff is a pain to work with. While there are at least two switching companies that have 10GbaseT (I recall a Dell blade switch that had 10GbaseT support too), the number of NICs out there that support it is just about as sparse.

Not only that, but I do like to color code my cables, and while CAT6 cables are obviously easy to get in many colors, it's much less common and harder to get those passive 10GbE cables in multiple colors; it seems most everyone just has black.

Also, cable lengths are quite a bit more flexible with CAT6 than with passive copper. From Extreme at least (I know I could go third party if I wanted), the short cables come in 1 meter and 3 meters, and there's a massive amount of distance between those two. CAT6 can easily be made to any length, and pre-made cables (I don't make my own) can fairly easily be found in 1 foot (or even half a foot) increments.

SFP+ passive copper 10GbE cable

I wonder if there are (or will there be) 10GbaseT SFP+ modules, so existing switches could support 10GbaseT without wholesale replacement? I know there are 1000BaseT SFP GBICs.

 

October 18, 2011

Cisco’s new 10GbE push – a little HP and Dell too

Filed under: Networking — Tags: , , , , — Nate @ 7:56 pm

Just got done reading this from our friends at The Register.

More than anything else this caught my eye:

On the surface it looks pretty impressive. It would be interesting to see exactly how Cisco configured the competing products: which 60 Juniper devices or 70 HP devices did they use, and how were they connected?

One thing that would have been interesting to call out in such a configuration is the number of logical devices needed for management. For example, I know Brocade's VDX product is a fancy way of connecting lots of devices, sort of like more traditional stacking just at a larger scale, for ease of management. I'm not sure whether the VDX technology extends to their chassis products, as Cisco's configuration above seems to imply using chassis switches. I believe Juniper's QFabric is similar. I'm not sure if HP or Arista have such technology (I don't believe they do). I don't think Cisco does either, but they don't claim to need it with this big switch. So a big part of the question is managing so many devices versus managing just one. Cost of the hardware/software is one thing…

HP recently announced a revamp of their own 10GbE products, at least the 1U variety. I've been working off and on with HP people recently and there was a brief push to use HP networking equipment, but they gave up pretty quickly. They mentioned they were going to have "their version" of the 48-port 10-gig switch soon, but it turns out it's still a ways away: early next year is when it's supposed to ship, so even if I wanted it (which I don't), it's too late for this project.

I dug into their fact sheet, which was really light on information, to see what, if anything, stood out with these products. I did not see anything that stood out in a positive manner, but I did see this, which I thought was kind of amusing:

Industry-leading HP Intelligent Resilient Framework (IRF) technology radically simplifies the architecture of server access networks and enables massive scalability—this provides up to 300% higher scalability as compared to other ToR products in the market.

Correct me if I'm wrong, but that looks like what other vendors would call stacking, or a virtual chassis: an age-old technology. The key point here is the "up to 300% higher scalability". Another way of putting it is "at least 50% less scalable" when you're comparing it to the Extreme Networks Summit X670V (which is shipping; I just ordered some).

The Summit X670 series is available in two models: Summit X670V and Summit X670. Summit X670V provides high density for 10 Gigabit Ethernet switching in a small 1RU form factor. The switch supports up to 64 ports in one system and 448 ports in a stacked system using high-speed SummitStack-V160*, which provides 160 Gbps throughput and distributed forwarding. The Summit X670 model provides up to 48 ports in one system and up to 352 ports in a stacked system using SummitStack-V longer distance (up to 40 km with 10GBASE-ER SFP+) stacking technology.

In short, the Extreme product is more than twice as scalable as the HP IRF feature, because it goes up to 8 devices (56 x 10GbE each, or 448 ports), while HP's goes up to 4 devices (48 x 10GbE each, or 192 ports; perhaps they can do 56 as well with breakout cables, since both switches have the same number of physical 10GbE and 40GbE ports).
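A quick sketch of that math in plain Python, using the per-switch numbers quoted above (the 56- and 48-port figures are my reading of the two data sheets, so treat them as assumptions rather than gospel):

    # Rough stack-capacity comparison using the port counts quoted above.
    # Assumption: X670V stacks 8 switches at 56 usable 10GbE ports each,
    # while HP IRF joins 4 switches at 48 usable 10GbE ports each.
    extreme_stack = 8 * 56   # SummitStack-V160, per Extreme's data sheet
    hp_irf_stack = 4 * 48    # HP IRF, per their fact sheet
    print("Extreme stack: %d x 10GbE" % extreme_stack)   # 448
    print("HP IRF stack:  %d x 10GbE" % hp_irf_stack)    # 192
    print("HP comes in %.0f%% less scalable" % ((1 - hp_irf_stack / float(extreme_stack)) * 100))  # ~57%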

The list price on the HP switches is WAY high too; The Register calls it out at $38,000 for a 24-port switch. The X670 from Extreme has a list price of about $25,000 for 48 ports (I see it online for as low as about $17k). There was no disclosure of HP's pricing for their 48-port switch.

Extreme has another 48-port switch which is cheaper (almost half the cost if I recall right; I see it online going for as low as $11,300), but it's for very specialized applications where latency is really important. If I recall right, they removed the PHY (?) from the switch, which dramatically reduces functionality and introduces things like very short cable length limits, but also slashes the latency (and cost). You wouldn't want to use those for your VMware setup (well, if you were really cost constrained these are probably better than some other alternatives, especially if you're deciding between this and 1GbE), but you may want them if you're doing HPC, something with shared memory, or high frequency stock trading (ugh!).

The X670 also has (or will have? I'll find out soon) a motion sensor on the front of the switch, which I thought was curious but seems like a neat security feature: being able to tell if someone is standing in front of your switch screwing with it. It also apparently has the ability (or will have the ability) to turn off all of the LEDs on the switch when someone gets near it, and turn them back on when they go away.

(ok back on topic, Cisco!)

I looked at the Cisco slide above and thought to myself: really, can they be that far ahead? I certainly do not go out on a routine basis and work out how many devices, and how much connectivity between them, I would need to achieve X number of line rate ports. I keep it simple: if you need a large number of line rate ports, just use a chassis product (you may need a few of them). It is interesting to see though, assuming it's anywhere close to being accurate.

When I asked myself the question "Can they be that far ahead?" I wasn't thinking of Cisco. I think I'm up to 7 readers now; you know me better than that! 🙂

I was thinking of the Extreme Networks Black Diamond X-Series which was announced (note not yet shipping…) a few months ago.

  • Cisco claims to do 768 x 10GbE ports in 25U (Extreme will do it in 14.5U)
  • Cisco claims to do 10W per 10GbE port (Extreme will do it in 5W per port)
  • Cisco claims to do it with 1 device. Well, that's hard to beat, but Extreme can match them; it's hard to do it with less than one device.
  • Cisco's new top end taps out at a very respectable 550Gbps per slot (Extreme will do 1.2Tbps)
  • Cisco claims to do it at a list price of $1,200/port. I don't know what Extreme's pricing will be, but typically Cisco is on the very high end for cost (see the quick math after this list).
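Here is that quick math, a rough sketch built only from the claims listed above (the Extreme figures are pre-release claims, and the totals are simple multiplication on my part, not measurements):

    # Back-of-the-envelope comparison of the claimed chassis specs listed above.
    ports = 768  # line rate 10GbE ports per chassis, both claims
    cisco = {"rack_units": 25.0, "watts_per_port": 10}
    extreme = {"rack_units": 14.5, "watts_per_port": 5}  # pre-release claims

    for name, spec in (("Cisco", cisco), ("Extreme", extreme)):
        density = ports / spec["rack_units"]
        power = ports * spec["watts_per_port"]
        print("%-8s ~%.0f ports per U, ~%d W for all %d ports" % (name, density, power, ports))
    # Cisco:   ~31 ports per U, ~7680 W
    # Extreme: ~53 ports per U, ~3840 W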

I don't know exactly how Cisco gets to 768 ports; Extreme does it via 40GbE ports and breakout cables (as far as I know), so in reality the X-Series is a 40GbE switch (and I think 40GbE only, at least to start, unless you use the breakout cables to get down to 10GbE). It was a little over a year ago that Extreme was planning on shipping 40GbE at a cost of $1,000/port. Certainly the X-Series is a different class of product than what they were talking about back then, but prices have also come down since.

X-Series is shipping "real soon now". I'm sure if you ask them they'll tell you more specifics.

It is interesting to me, and kind of sad, how far Force10 has fallen in the 10GbE area. They seemed to basically build themselves on the back of 10GbE (or at least tried to), but looking at their current products on the very high end, aside from the impressive little 40GbE switch they have, they seem to top out at 140 line rate 10GbE ports in 21U. Dell will probably do well with them; I'm sure it'll be a welcome upgrade for those customers using Procurve, uh I mean PowerConnect? That's what Dell call(ed) their switches, right?

As much as it pains me, I do have to give Dell some props for doing all of these acquisitions recently and beefing up their own technology base; whether it's in storage or networking, they've come a long way (more so in storage; networking needs more time to tell). I have not liked Dell myself for quite some time. A good chunk of it is because they really had no innovation, but part of it goes back to the days before Dell shipped AMD chips, when Dell was getting tons of kickbacks from Intel for staying an Intel-exclusive provider.

In the grand scheme of things such numbers don't mean a whole lot; I mean, how many networks in the world can actually push this kind of bandwidth? Outside of the labs I really think any organization would be very hard pressed to need such fabric capacity, but it's there, and it's not all that expensive.

I just dug up an old price list I had from Extreme, from late November 2005. A 6-port 10GbE module for their Black Diamond 10808 switch (I had two at the time) had a list price of $36,000. For you math buffs out there, that comes to $9,000 per line rate port.

That particular product was oversubscribed (hence it not being $6,000/port), having a mere 40Gbps of switch fabric capacity per slot, or a total of 320Gbps for the entire switch (it was marketed as a 1.2Tb switch, but hardware never came out to push the backplane to those levels; I had to dig into the depths of the documentation to find that little disclosure, and naturally I found it after I purchased. It didn't matter for us though, I'd be surprised if we ever pushed more than 5Gbps at any one point!). If I recall right the switch was 24U too. My switches were 1GbE only, for cost reasons 🙂
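For the math buffs, here is how that $9,000 figure falls out of the module price and the fabric limit, as a small sketch (the 40Gbps-per-slot number is the one I dug out of the documentation):

    # How the 2005 per-port figures work out for the 6-port 10GbE module.
    module_list_price = 36000.0
    physical_ports = 6
    fabric_gbps_per_slot = 40  # the figure buried in the documentation
    line_rate_ports = fabric_gbps_per_slot // 10  # only 4 ports' worth of line rate

    print("Per physical port:  $%.0f" % (module_list_price / physical_ports))   # $6,000
    print("Per line rate port: $%.0f" % (module_list_price / line_rate_ports))  # $9,000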

How far we’ve come..

May 11, 2011

2000+ 10GbE ports in a single rack

Filed under: Datacenter,Networking — Tags: , , , — Nate @ 9:41 pm

The best word I can come up with when I saw this was

oof

What I'm talking about is the announcement of the Black Diamond X-Series from my favorite switching company, Extreme Networks. I have been hearing a lot about other switching companies coming out with new next-gen 10GbE and 40GbE switches, more than one of them using Broadcom chips (which Extreme uses as well), so I have been patiently awaiting their announcements.

I don't have a lot to say, so I'll let the specs do the talking:

Extreme Networks Black Diamond X-Series

 

  • 14.5 U
  • 20 Tbps switching fabric (up ~4x from previous models)
  • 1.2 Tbps fabric per line slot (up ~10x from previous models)
  • 2,304 line rate 10GbE ports per rack (5 watts per port; 768 line rate per chassis; see the quick math after this list)
  • 576 line rate 40GbE ports per rack (192 line rate per chassis)
  • Built in support to switch up to 128,000 virtual machines using their VEPA/Direct Attach system
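The quick math behind the per-rack numbers above; a sketch that assumes three 14.5U chassis stacked in a 44U-or-taller rack (my assumption, not Extreme's):

    # How the per-rack figures above follow from the per-chassis numbers.
    chassis_height_u = 14.5
    ports_10g = 768       # line rate 10GbE per chassis
    ports_40g = 192       # line rate 40GbE per chassis
    chassis_per_rack = 3  # 3 x 14.5U = 43.5U, so a 44U or taller rack (assumption)

    print("Rack space used:  %.1f U" % (chassis_per_rack * chassis_height_u))  # 43.5
    print("10GbE per rack:   %d" % (chassis_per_rack * ports_10g))             # 2,304
    print("40GbE per rack:   %d" % (chassis_per_rack * ports_40g))             # 576
    print("Power at 5W/port: ~%d W per rack" % (chassis_per_rack * ports_10g * 5))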

This was fascinating to me:

Ultra high scalability is enabled by an industry-leading fabric design with an orthogonal direct mating system between I/O modules and fabric modules, which eliminates the performance bottleneck of pure backplane or midplane designs.

I was expecting their next-gen platform to be a midplane design (like that of the Black Diamond 20808); their previous high density 10GbE enterprise switch, the Black Diamond 8800, by contrast, was a backplane design (originally released about six years ago). The physical resemblance to the Arista Networks chassis switches is remarkable. I would like to see how this direct mating system looks in a diagram of some kind, to get a better idea of what this new design is.

Mini RJ21 adapters, 1 plug on the switch, goes to 6x1GbE ports

To put that port density into some perspective, their older system (the Black Diamond 8800) has an option to use Mini RJ21 adapters to achieve 768 x 1GbE ports in a chassis (14U), so an extra inch of space gets you the same number of ports running at 10 times the speed, and at line rate (the 768 x 1GbE is not quite line rate, but still damn fast). It's the only way to fit so many copper ports in such a small space.

It seems they have phased out the Black Diamond 10808 (I deployed a pair of these several years ago; it was first released in 2003), the Black Diamond 12804C (first released around 2007), the Black Diamond 12804R (also released around 2007) and the Black Diamond 20808 (this one is kind of surprising given how recent it was, though of course it didn't have anything approaching this level of performance; I think it was released around 2009). They also finally seem to have dropped the really ancient Alpine series (10+ year old technology) as well.

They also seem to have announced a new high density stackable 10GbE switch, the Summit X670, the successor to the X650, which was already an outstanding product offering several features that until recently nobody else in the market was providing.

Extreme Networks Summit X670

  • 1U
  • 1.28 Tbps switching fabric (roughly double that of the X650)
  • 48 x 10Gbps line rate standard (64 x 10Gbps max)
  • 4 x 40Gbps line rate (or 16 x 10Gbps)
  • Long distance stacking support (up to 40 kilometers)

From purely a port configuration standpoint, the X670 looks similar to some of the other recently announced products from other companies, like Arista and Force10, both of whom are using the Broadcom Trident+ chipset; I assume Extreme is using the same. These days, given that so many manufacturers are using the same type of hardware, you have to differentiate yourself in the software, which is really what drives me to Extreme more than anything else: their Linux-based, easy-to-use ExtremeWare XOS operating system.

Neither of these products appears to be shipping yet; I'm not sure when they might ship, maybe sometime in Q3 or so.

40GbE has taken longer than I expected to finalize. They were one of the first to demonstrate 40GbE, at Interop Las Vegas last year, but the parts have yet to ship (or if they have, the web site is not updated).

For the most part, the number of companies that are able to drive even 10% of the performance of these new lines of networking products is really tiny. But the peace of mind that comes with everything being line rate really is worth something!

x86 or ASIC? I'm sure performance levels like the ones offered here pretty much guarantee that x86 (or any general purpose CPU for that matter) will not be driving high speed networking for a very long time to come.

Myself, I am not yet sold on this emerging trend in the networking industry of trying to drive everything to be massive layer 2 domains. I still love me some ESRP! I think part of it has to do with selling the public on getting rid of STP. I haven't used STP in 7+ years, so not using any form of STP is nothing new for me!

April 19, 2010

Arista ignites networks with groundbreaking 10GbE performance

Filed under: Networking,News — Tags: , — Nate @ 8:53 am

In a word: Wow

I just read an article from our friends at The Register on a new 384-port chassis 10GbE switch that Arista is launching. From a hardware perspective the numbers are just jaw dropping.

A base Arista 7500 costs $140,000, and a fully configured machine with all 384 ports and other bells and whistles runs to $460,800, or $1,200 per port. This machine will draw 5,072 watts of juice and take up a little more than quarter of a rack.

Compare this to a Cisco Nexus 7010 setup to get 384 wirespeed ports and deliver the same 5.76 Bpps of L3 throughput, and you need to get 18 of the units at a cost of $13.7m. Such a configuration will draw 160 kilowatts and take up 378 rack units of space – nine full racks. Arista can do the 384 ports in 1/34th the space and 1/30th the price.
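Those ratios check out if you run the numbers. Here is a small sketch, where the roughly 11U chassis height for the Arista 7500 is my own assumption based on "a little more than quarter of a rack":

    # Sanity-checking the Arista 7500 vs Nexus 7010 figures quoted by El Reg.
    ports = 384
    arista_price, arista_watts, arista_ru = 460800.0, 5072.0, 11.0  # 11U is my assumption
    cisco_price, cisco_watts, cisco_ru = 13.7e6, 160000.0, 378.0

    print("Arista per port: $%.0f" % (arista_price / ports))      # $1,200
    print("Price ratio:  ~%.0fx" % (cisco_price / arista_price))  # ~30x
    print("Space ratio:  ~%.0fx" % (cisco_ru / arista_ru))        # ~34x
    print("Power ratio:  ~%.0fx" % (cisco_watts / arista_watts))  # ~32x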

I love the innovation that comes from these smaller players, really inspiring.

November 17, 2009

Affordable 10GbE has arrived

Filed under: Networking — Tags: , — Nate @ 6:00 pm

10 Gigabit Ethernet has been around for many years, and for much of that time it has been (and with most vendors still is) restricted to more expensive chassis switches. For most of these switches the port density available for 10GbE is quite low as well, often maxing out at fewer than 10 ports per slot.

Within the past year Extreme Networks launched their X650 series of 1U switches, which currently consists of 3 models:

  • 24-port 10GbE SFP+
  • 24-port 10GbaseT first generation
  • 24-port 10GbaseT second generation (added a link to the press release; I didn't even know they announced the product yesterday, it's been available for a little while at least)

For those that aren't into networking too much, 10GbaseT is an Ethernet standard that provides 10 Gigabit speeds over standard twisted pair copper cabling (CAT6 for shorter runs, CAT6a for the full 100 meters).

All three of them are line rate and fully layer 3 capable, and they even have high speed stacking (ranging from 40Gbps to 512Gbps depending on configuration). Really, nobody else in the industry has this combination at this time, at least among:

  • Brocade (Foundry Networks) – Layer 2 only (L3 coming at some point via software update), no stacking, no 10GbaseT
  • Force10 Networks – Layer 2 only, no stacking, no 10GbaseT
  • Juniper Networks – Layer 2 only, no stacking, no 10GbaseT. An interesting tidbit here is the Juniper 1U 10GbE switch is an OEM’d product, does not run their “JunOS” operating system, and will never have Layer 3 support. They will at some point I’m sure have a proper 10GbE switch but they don’t at the moment.
  • Arista Networks – Partial Layer 3 (more coming in a software update at some point), no stacking; they do have 10GbaseT and offer a 48-port version of the switch.
  • Brocade 8000 – Layer 2 only, no stacking, no 10GbaseT (This is a FCoE switch but you can run 10GbE on it as well)
  • Cisco Nexus 5000 – Layer 2 only, no stacking, no 10GbaseT (This is a FCoE switch but you can run 10GbE on it as well)
  • Fulcrum Micro Monte Carlo – I had not heard of these guys until 30 seconds ago; I found them just now. I'm not sure if this is a real product; it says reference design, so I think you can get it, but it seems targeted at OEMs rather than end users. Perhaps this is what Juniper OEMs for their stuff (the Fulcrum Monaco looks the same as the Juniper switch). Anyways, they do have 10GbaseT, no mention of Layer 3 that I can find beyond basic IP routing, and no stacking. Probably not something you want to use in your data center directly due to its reference design intentions.

The biggest complaints against 10GbaseT have been that it was late to market (the first switches appeared fairly recently) and that it is more power hungry. Fortunately for it, the adoption rate of 10GbE has been pretty lackluster over the past few years, with few deployments outside of really high end networks, because the cost was too prohibitive.

As for the power usage, the earlier 10GbaseT switches did use more power, because it usually requires more power to drive a signal over copper than over fiber. But the second generation X650-24T from Extreme has lowered the power requirements by ~30% (a reduction of 200W per switch), making it draw less power than the SFP+ version of the product! All models have an expansion slot on the rear for stacking and additional 10GbE ports. For example, if you wanted all copper ports on the front but needed a few optical, you could get an expansion module for the back that provides 8 x 10GbE SFP+ ports on the rear. As standard it comes with a module that has 4 x 1GbE SFP ports and 40Gbps stacking ports.
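Working backwards from that claim gives a rough idea of the absolute draw; this is my own back-of-the-envelope estimate, not a figure from Extreme's data sheet:

    # Back-deriving approximate power draw from the "~30% / 200W lower" claim.
    # Assumption: both figures describe the same gen-1 to gen-2 change.
    reduction_watts = 200.0
    reduction_fraction = 0.30
    gen1_watts = reduction_watts / reduction_fraction  # roughly 667 W
    gen2_watts = gen1_watts - reduction_watts          # roughly 467 W

    print("First gen 24-port 10GbaseT:  ~%d W (~%.0f W/port)" % (gen1_watts, gen1_watts / 24))
    print("Second gen 24-port 10GbaseT: ~%d W (~%.0f W/port)" % (gen2_watts, gen2_watts / 24))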

So what does it really cost? I poked around some sites trying to find some of the "better" fully layer 3 1U switches out there from various vendors, to show how cost effective 10GbE can be; at least on a per-gigabit basis it is cheaper than 1GbE is today. This is street pricing, not list pricing, and not "back room" discount pricing. YMMV.

Vendor                     | Model             | Number of ports on the front | Bandwidth for front ports (Full Duplex) | Priced From | Street Price | Cost per Gigabit | Support Costs?
Extreme Networks           | X650-24t          | 24 x 10GbE                   | 480 Gbps                                | CDW         | $19,755 *    | $41.16           | Yes
Force10 Networks           | S50N              | 48 x 1GbE                    | 96 Gbps                                 | Insight     | $5,078       | $52.90           | Yes
Extreme Networks           | X450e-48p         | 48 x 1GbE                    | 96 Gbps                                 | Dell        | $5,479       | $57.07           | Optional
Extreme Networks           | X450a-48t         | 48 x 1GbE                    | 96 Gbps                                 | Dell        | $6,210       | $64.69           | Yes
Juniper Networks           | EX4200            | 48 x 1GbE                    | 96 Gbps                                 | CDW         | $8,323       | $86.69           | Yes
Brocade (Foundry Networks) | NetIron CES 2048C | 48 x 1GbE                    | 96 Gbps                                 | Pending     | Pending      | Pending          | Yes
Cisco Systems              | 3750E-48TD        | 48 x 1GbE                    | 96 Gbps                                 | CDW         | $13,500      | $140.63          | Yes

* The Extreme X650 switch by default does not include a power supply (it has two internal power supply bays for AC or DC PSUs), so the price includes the cost of a single AC power supply.
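The "Cost per Gigabit" column is just the street price divided by the full-duplex bandwidth of the front ports; a minimal sketch using two rows from the table:

    # Cost per gigabit = street price / full-duplex bandwidth of the front ports.
    switches = [
        ("Extreme X650-24t", 19755.0, 24 * 10 * 2),  # 480 Gbps full duplex
        ("Cisco 3750E-48TD", 13500.0, 48 * 1 * 2),   #  96 Gbps full duplex
    ]
    for name, street_price, gbps in switches:
        print("%-18s $%.2f per gigabit" % (name, street_price / gbps))
    # roughly $41 vs $141 per gigabit, in line with the table's figures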
