TechOpsGuys.com – Diggin' technology every day

July 20, 2011

I called it! – Force10 bought by Dell

Filed under: Networking — Nate @ 11:35 am

Not that it matters to me too much either way, but Dell just bought Force10. I called it! Well, it matters to me in that I didn't want Dell near my Extreme Networks 🙂

It is kind of sad that Force10 was never able to pull off their IPO. I have heard that they have been losing quite a bit of talent recently, but I don't know to what degree. It's also unfortunate they weren't able to fully capitalize on their early leadership in the 10 gigabit arena. Arista seems to be the new Force10 in some respects, though it wouldn't surprise me if they have a hard time growing too, barring some next-gen revolutionary product.

I wonder if anyone will scoop up BlueArc; they have been trying to IPO as well for a couple of years now, and I'd be surprised if they can pull it off in this market. They have good technology, just a whole lot of debt. Though recently I read they started turning a profit.

July 8, 2011

Wired or Wireless?

Filed under: Networking, Random Thought, Uncategorized — Nate @ 9:58 am

I'll start out by saying I've never been a fan of wifi; it's always felt like a nice gimmick-like feature to have, but other than that I usually steered clear. Wifi has been deployed at all the companies I have worked at in the past 7-8 years, though in all cases I was never responsible for it (I haven't done internal IT since 2002, at which time wifi was still in its early stages, assuming it was even out yet, I don't remember, and was not deployed widely at all – including at my company). I could probably count on one hand the number of public wifi networks I have used over the years, excluding hotels (of which there were probably ten).

In the early days it was mostly because of paranoia around security/encryption, though over the past several years encryption has really picked up and helped that area a lot. There is still a little bit of fear in me that the encryption is not up to snuff, and I would prefer using a VPN on top of wifi to make it even more secure; only then would I really feel comfortable using wifi from a security standpoint.

From a security standpoint I am less concerned about people intercepting my transmissions over wifi than I am about people breaking into my home network over wifi (which usually happens by intercepting transmissions). My point is that the content of what I'm transferring, if it is important, is always protected by SSL or SSH, and in the case of communicating with my colo or cloud hosted server there is an OpenVPN SSL layer under that as well.

Many years ago, I want to say in the 2005-2006 time frame, there was quite a bit of hype around the Linksys WRT-54G wifi router for being easy to replace the firmware with custom stuff and get more functionality out of it. So I ordered one at the time and put dd-wrt on it, a custom firmware that was talked about a lot back then (is there something better out there? I haven't looked). I never ended up hooking it to my home network, just a crossover cable to my laptop to look at the features.

Then I put it back in its box and put it in storage.

Until earlier this week, when I decided to break it out again to play with in combination with my new HP Touchpad, which can only talk over Wifi.

My first few days with the Touchpad involved having it use my Sprint 3G/4G Mifi access point. As I mentioned earlier, I don't care about people seeing my wifi transmissions; I care about protecting my home network. Since the Mifi is not even remotely related to my home network, I had no problem using it for extended periods.

The problem with the Mifi, from my apartment, is the performance. At best I can get 20% signal strength for 4G, and maybe 80% signal strength for 3G; latency is quite bad in both cases, and throughput isn't the best either. A lot of times it felt like I was on a 56k modem, other times it was faster. For the most part I used 3G because it was more reliable for my location. However, I do have a 5 gig data cap per month for 3G, so considering I started using the Touchpad on the 1st of the month I got kind of concerned I might run into that cap playing with the new toy during the first month. I just checked Sprint's site and I don't see a way to see intra-month data usage, only data usage for the month once it's completed. The Mifi tracks data usage while it is running, but this data is not persisted across reboots, and I think it's also reset if the Mifi changes between 3G and 4G services. I have unlimited 4G data, but the signal strength where I'm at just isn't strong enough.

I looked into the possibility of replacing my Mifi with newer technology, but after reading some customer reviews of the newer stuff it seemed unlikely I would get a significant enough improvement in performance at my location to justify the cost of the upgrade, so I decided against that for now.

So I broke out the WRT-54G access point and hooked it up. Installed the latest recommended version of firmware, configured the thing and hooked up the touchpad.

I knew there was a pretty high number of personal access points deployed near me; it was not uncommon to see more than 20 SSIDs being broadcast at any given time, so interference was going to be an issue. At one point my laptop showed me that 42 access points were broadcasting SSIDs. And that of course does not even count the ones that are not broadcasting; who knows how many of those there are, I haven't tried to get that number.

With my laptop and Touchpad located no more than 5 feet away from the AP, I had signal strengths of roughly 65-75%. To me that seemed really low given the proximity, and I suspected significant interference was causing signal loss. Only when I put the Touchpad within say 10 inches of the AP's antenna did the signal strength go above 90%.

Looking into the large number of receive errors told me that those errors were caused almost entirely by interference.

So then I wanted to see which channels were being used the most and try to use a channel with less congestion; the AP defaulted to channel 6.

The last time I mucked with wifi on Linux there seemed to be an endless stream of wireless scanning, cracking, and hacking tools. Much to my shock and surprise, most of those tools haven't been maintained in 5-8+ years; there aren't many left. Sadly enough, the default Ubuntu wifi apps do not report channels, they just report SSIDs. So I went on a quest to find a tool I could use and finally came across something called wifi radar, which did the job more or less.
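
If you're in the same spot and just want a per-channel count without hunting for a GUI tool, parsing iwlist scan output gets you most of the way there. A minimal sketch (the interface name wlan0 is an assumption, the output format varies a bit between wireless-tools versions, and triggering a fresh scan generally needs root):

```python
import re
import subprocess
from collections import Counter

# Count visible SSIDs per 2.4GHz channel by parsing `iwlist` scan output,
# roughly what wifi radar shows.
scan = subprocess.run(
    ["iwlist", "wlan0", "scan"], capture_output=True, text=True, check=True
).stdout

per_channel = Counter()
channel = None
for line in scan.splitlines():
    line = line.strip()
    match = re.match(r"Channel:(\d+)", line)
    if match:
        channel = int(match.group(1))
    elif line.startswith("ESSID:") and channel is not None:
        per_channel[channel] += 1

for chan, count in per_channel.most_common():
    print(f"channel {chan}: {count} SSIDs")
```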

I counted about 25 broadcasting SSIDs using wifi radar; nearly half of them, if I recall right, were on channel 6, with a bunch more on 11 and 1, the other two major channels. My WRT54G had channels going all the way up to 14. I recall reading several years ago about frequency restrictions in different places, but in any case I tried channel 14 (which is banned in the US). The wifi router said it was on channel 14, but neither my laptop nor the Touchpad would connect, I suspect because they flat out don't support it. No big deal.

Then I went to channel 13. Laptop immediately connected, Touchpad did not. Channel 13 is banned in many areas, but is allowed in the U.S. if the power level is low.

Next I went to channel 12. Laptop immediately connected again, Touchpad did not. This time I got suspicious of the Touchpad. So I fired up my Palm Pre, which uses an older version of the same operating system. It saw my wifi router on channel 12 no problem. But the Touchpad remained unable to connect even if I manually input the SSID. Channel 12 is also allowed in the U.S. if the power level is low enough.

So I ended up on channel 11. Everything could see everything at that point. I enabled WPA2 encryption and enabled MAC address filtering (yes, I know you can spoof MACs pretty easily on wifi, but at the same time I have only 2 devices I'll ever connect, so blah). I don't have a functional VPN yet, mainly because I don't have a way (yet) to access a VPN on the Touchpad; it has built-in support for two types of Cisco VPNs but that's it. I installed OpenVPN on it but I have no way to launch it on demand without being connected to the USB terminal. I suppose I could just leave it running, and in theory it should automatically connect when it finds a network, but I haven't tried that.

So on to my last point on wifi – interference. As I mentioned earlier, signal quality was not good even a few feet away from the access point. I decided to try out speedtest.net to run a basic throughput test on both the Touchpad and the laptop. All tests used the same Comcast consumer broadband connection:

  • HP Touchpad, 802.11g wireless – 5.32 Megabits down, 4.78 Megabits up (18 milliseconds latency)
  • Toshiba dual core laptop (Ubuntu 10.04, Firefox 3.6), 802.11g wireless – 9.46 Megabits down, 4.89 Megabits up (13 milliseconds latency)
  • Toshiba dual core laptop (Ubuntu 10.04, Firefox 3.6), 1 Gigabit ethernet – 27.48 Megabits down, 5.09 Megabits up (9 milliseconds latency)

The test runs in Flash, and as you can see, the Touchpad's browser (or its Flash) is of course not nearly as fast as the laptop's, which is not too unexpected.

Comparing LAN transfer speeds was even more of a joke, of course. I didn't bother involving the Touchpad in this test, just the laptop. I used iperf to test throughput (no special options, just default settings).

  • Wireless – 7.02 Megabits/second (3.189 milliseconds latency)
  • Wired – 930 Megabits/second (0.3 milliseconds latency)
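
For reference, what iperf does with default settings is essentially time a one-way TCP stream between two hosts for about ten seconds. A rough Python stand-in (not iperf itself; the port, duration and address below are placeholders):

```python
import socket
import time

PORT = 5001            # iperf's traditional default port (placeholder here)
DURATION = 10          # seconds, roughly iperf's default run length
CHUNK = b"\0" * 65536

def server():
    # Receive as fast as possible and report the observed rate.
    with socket.create_server(("0.0.0.0", PORT)) as srv:
        conn, addr = srv.accept()
        total, start = 0, time.time()
        while True:
            data = conn.recv(len(CHUNK))
            if not data:
                break
            total += len(data)
        elapsed = time.time() - start
        print(f"received {total * 8 / elapsed / 1e6:.1f} Mbit/s from {addr[0]}")

def client(host):
    # Blast data at the server for DURATION seconds.
    with socket.create_connection((host, PORT)) as conn:
        sent, end = 0, time.time() + DURATION
        while time.time() < end:
            conn.sendall(CHUNK)
            sent += len(CHUNK)
    print(f"sent {sent * 8 / DURATION / 1e6:.1f} Mbit/s")

# Run server() on one box and client("192.168.1.10") on the other
# (the address is a placeholder for whatever the server's IP is).
```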

What honestly surprised me though was over the WAN: how much slower wifi was on the laptop vs the wired connection, almost 1/3rd the performance on the same laptop/browser. I just measured to be sure – my laptop's screen (where I believe the antenna is) is 52 inches from the WRT54G router.

It's "fast enough" for the Touchpad's casual browsing, but I certainly wouldn't want to run my home network on it; that defeats the purpose of paying for the faster connectivity.

I don't know how typical these results are. One place I recently worked at was plagued with wireless problems; performance was so terrible and unreliable. They upgraded the network and I still wasn't able to maintain a connection for more than two minutes, which sucks for SSH. To make matters worse, the vast majority of their LAN was in fact wireless; there was very little cable infrastructure in the office. Smart people hooked up switches and stuff for their own tables, which made things more usable, though still a far cry from optimal.

In a world of ever denser populations, where technology continues to penetrate and drive more deployments of wifi, I suspect interference problems will only get worse.

I’m sure it’s great if the only APs within range are your own, if you live or work at a place that is big enough. But small/medium businesses frequently won’t be so lucky, and if you live in a condo or apartment like me, ouch…

My AP is not capable of operating in the 5GHz range (802.11a/n), which could very well be significantly less congested. I don't know if it is accurate or not, but wifi radar claims every AP within range of my laptop (47 at the moment) is 802.11g (same as me). My laptop's specs say it supports 802.11b/g/n, so I'd expect that if anyone around me was using N then wifi radar would pick it up, assuming the data being reported by wifi radar is accurate.

Since I am moving in about two weeks, I'll wait till I'm at my new apartment before I think more about the possibility of going to an 802.11n-capable device for reduced interference. On that note, do any of my 3-4 readers have AP suggestions?

Hopefully my new place will get better 4G wireless coverage as well. I already checked the coverage maps and there are two towers within one mile of me, so it all depends on the apartment itself and how much interference is caused by the building and stuff around it.

I’m happy I have stuck with ethernet for as long as I have at my home, and will continue to use ethernet at home and at work wherever possible.

May 11, 2011

2000+ 10GbE ports in a single rack

Filed under: Datacenter, Networking — Nate @ 9:41 pm

The best word I can come up with when I saw this was

oof

What I'm talking about is the announcement of the Black Diamond X-Series from my favorite switching company, Extreme Networks. I have been hearing a lot about other switching companies coming out with new next gen 10GbE and 40GbE switches, more than one using Broadcom chips (which Extreme uses as well), so I have been patiently awaiting their announcements.

I don’t have a lot to say so I’ll let the specs do the talking

Extreme Networks Black Diamond X-Series

  • 14.5 U
  • 20 Tbps switching fabric (up ~4x from previous models)
  • 1.2 Tbps fabric per line slot (up ~10x from previous models)
  • 2,304 line rate 10GbE ports per rack (5 watts per port) (768 line rate per chassis)
  • 576 line rate 40GbE ports per rack (192 line rate per chassis)
  • Built in support to switch up to 128,000 virtual machines using their VEPA/ Direct Attach system
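
A quick back-of-envelope on those rack-level numbers (my arithmetic from the figures above, not anything from Extreme's materials): three of the 14.5U chassis account for the 2,304-port figure, and at 5 watts per port that works out to roughly 11.5 kW of 10GbE switching per rack.

```python
# Back-of-envelope from the spec list above; my arithmetic, not Extreme's figures.
ports_per_chassis = 768
ports_per_rack = 2304
watts_per_port = 5
chassis_height_u = 14.5

chassis_per_rack = ports_per_rack // ports_per_chassis    # 3 chassis
rack_space_u = chassis_per_rack * chassis_height_u        # 43.5U, so a taller-than-42U rack
switching_kw = ports_per_rack * watts_per_port / 1000     # 11.52 kW for the ports alone

print(chassis_per_rack, "chassis,", rack_space_u, "U,", switching_kw, "kW of 10GbE ports per rack")
```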

This was fascinating to me:

Ultra high scalability is enabled by an industry-leading fabric design with an orthogonal direct mating system between I/O modules and fabric modules, which eliminates the performance bottleneck of pure backplane or midplane designs.

I was expecting their next gen platform to be a midplane design (like that of the Black Diamond 20808); their previous high density 10GbE enterprise switch, the Black Diamond 8800, was by contrast a backplane design (originally released about six years ago). The physical resemblance to the Arista Networks chassis switches is remarkable. I would like to see how this direct mating system looks in a diagram of some kind to get a better idea of what this new design is.

Mini RJ21 adapters, 1 plug on the switch, goes to 6x1GbE ports

To put that port density into some perspective, their older system (the Black Diamond 8800) has an option to use Mini RJ21 adapters to achieve 768 1GbE ports in a 14U chassis, so an extra inch of space gets you the same number of ports running at 10 times the speed, and at line rate (the 768x1GbE is not quite line rate but still damn fast). Mini RJ21 is the only way to fit so many copper ports in such a small space.

It seems they have phased out the Black Diamond 10808 (first released in 2003; I deployed a pair of these several years ago), the Black Diamond 12804C (first released about 2007), the Black Diamond 12804R (also released around 2007) and the Black Diamond 20808 (this one is kind of surprising given how recent it was, though it didn't have anything approaching this level of performance of course; I think it was released around 2009). They also finally seem to have dropped the really ancient Alpine series (10+ year old technology) as well.

They also seem to have announced a new high density stackable 10GbE switch, the Summit X670, the successor to the X650, which was already an outstanding product offering several features that until recently nobody else in the market was providing.

Extreme Networks Summit X670

  • 1U
  • 1.28 Tbps switching fabric (roughly double that of the X650)
  • 48 x 10Gbps line rate standard (64 x 10Gbps max)
  • 4 x 40Gbps line rate (or 16 x 10Gbps)
  • Long distance stacking support (up to 40 kilometers)

From purely a port configuration standpoint the X670 looks similar to some other recently announced products from other companies, like Arista and Force10, both of whom are using the Broadcom Trident+ chipset; I assume Extreme is using the same. These days, given that so many manufacturers are using the same type of hardware, you have to differentiate yourself in software, which is really what drives me to Extreme more than anything else: their Linux-based, easy-to-use Extremeware XOS operating system.

Neither of these products appears to be shipping yet; I'm not sure when they might ship, maybe sometime in Q3 or so.

40GbE has taken longer than I expected to finalize; they were one of the first to demonstrate 40GbE, at Interop Las Vegas last year, but the parts have yet to ship (or if they have, the web site has not been updated).

For the most part, the number of companies that are able to drive even 10% of the performance of these new lines of networking products is really tiny. But the peace of mind that comes with everything being line rate really is worth something!

x86 or ASIC? I'm sure performance boosts like the ones offered here pretty much guarantee that x86 (or any general purpose CPU for that matter) will not be driving high speed networking for a very long time to come.

Myself I am not yet sold on this emerging trend in the networking industry that is trying to drive everything to be massive layer 2 domains. I still love me some ESRP! I think part of it has to do with selling the public on getting rid of STP. I haven’t used STP in 7+ years so not using any form of STP is nothing new for me!

March 14, 2011

Right vs Privilege for Broadband

Filed under: Networking, Random Thought — Nate @ 8:38 am

I wrote about this a while back; at the time the topic was AT&T imposing caps on mobile data plans, so I won't go into all the same arguments again.

But this time it is AT&T imposing caps on their various broadband plans. I don't know whether to laugh at or feel sorry for some of these people (see the comments on the site) who believe they have a right to maximum performance and unlimited bandwidth for a few bucks a month.

**** YOU and your troll crap. DONT BE MAD BECAUSE IM TELLING THE TRUTH YOU AT&T DRONE. YOU WEEP FOR THE COMPANY WHO HAS MORE MONEY THAN THE U.S TREASURY. GET YOUR HEAD EXAMINED.

IF YOUR CRAPPY DSL IS SLOW IT’S BECAUSE YOUR ISP IS TOO CHEAP TO UPGRADE TO THE NEEDS OF THE WORLD IN 2011!!!!

This post really is funny

If only two percent of people are affected, why do you feel the need to screw the rest of the 98%?!

As a Comcast broadband customer I have a 250GB monthly cap. I have no doubt though that I fall far short of it; I'd be surprised if I do more than 20GB a month (that is with occasional Netflix streaming, though these days I can't find anything I want to stream on Netflix, I've watched one or two things in the past month). UPDATE – I forgot Comcast does have a bandwidth meter you can check, so I got my account info and checked it out. I wonder where I stand as a percentile of their customers – low usage on average? Medium?

I do run a server in the Terremark cloud as well (it hosts this blog along with my email services, other web sites, etc.), so I checked out the bandwidth on it too:

  • February – 6GB data transfer (I assume they charge on inbound and outbound transfers?)
  • January – 2GB data transfer
  • December – 2GB data transfer

The world is built on oversubscription; that's a big driver in keeping costs low, whether it's bandwidth, phone/mobile call capacity, or even your local grocery store.

I for one think AT&T's plan is very reasonable: they will charge you $10 per 50GB over their limits, so $100 for 500GB of overage. They will also provide notifications when you hit certain levels of that cap.
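
The overage math is simple enough to sketch. A trivial example (the 250GB cap used as the default below is just for illustration, and I'm assuming the charge applies to each 50GB block started):

```python
import math

def overage_charge(used_gb, cap_gb=250, block_gb=50, price_per_block=10):
    """Overage cost under a $10-per-50GB scheme like the one described above.

    Assumes the charge applies to each 50GB block started once the cap is
    crossed; the 250GB default cap is just an illustration, not AT&T's
    actual tier.
    """
    over = max(0, used_gb - cap_gb)
    return math.ceil(over / block_gb) * price_per_block

print(overage_charge(240))   # 0   - under the cap
print(overage_charge(300))   # 10  - 50GB over
print(overage_charge(750))   # 100 - 500GB over
```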

The big mistake all of these providers made was of course to offer unlimited plans in the first place.

February 14, 2011

Lackluster FCoE adoption

Filed under: Networking — Nate @ 9:22 pm

I wrote back in 2009 (wow, was it really that long ago?), in one of my first posts, about how I wasn't buying into the FCoE movement; at first glance it sounded really nice, until you got into the details, and that's when it fell apart. Well, it seems that I'm not alone; not long ago in an earnings announcement Brocade said they were seeing lackluster FCoE adoption, lower than they expected.

He discussed what Stifel’s report calls “continued lacklustre FCoE adoption.” FCoE is the running of Fibre Channel storage networking block-access protocol over Ethernet instead of using physical Fibre Channel cabling and switchgear. It has been, is being, assumed that this transition to Ethernet would happen, admittedly taking several years, because Ethernet is cheap, steamrolls all networking opposition, and is being upgraded to provide the reliable speed and lossless transmission required by Fibre Channel-using devices.

Maybe it's just something specific to investors. I was at a conference for Brocade products, I think in 2009 even, where they talked about FCoE among many other things, and if memory serves they didn't expect much out of FCoE for several years, so maybe it was management higher up that was setting the wrong expectations or something, I don't know.

Then more recently I saw this article posted on Slashdot which basically talks about the same thing.

Even today I am not sold on FCoE. I do like Fibre Channel as a protocol but don't see a big advantage at this point to running it over native Ethernet. These days people seem to be consolidating onto fewer, larger systems; I would expect the people more serious about consolidation are using quad socket systems and much, much larger memory configurations (hundreds of gigs). You can power that quad socket system with hundreds of gigs of memory with a single dual port 8Gbps Fibre Channel HBA. Those that know about storage and random I/O understand more than anyone how much I/O it would really take to max out an 8Gbps Fibre Channel card; you're not likely to ever really manage to do it with a virtualization workload, or even with most database workloads. And if you do, you're probably running at a 1:1 ratio of storage arrays to servers.
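
To put a rough number on what maxing out an 8Gbps card means, a back-of-envelope sketch (my assumptions: 8b/10b encoding gives roughly 800 MB/s of payload per 8Gb port, and an 8KB average random I/O size):

```python
# Rough numbers only: with 8b/10b encoding, 8Gb Fibre Channel moves about
# 800 MB/s of payload per port. The 8KB average I/O size is an assumption.
usable_mb_per_s = 8 * 100          # ~100 MB/s of payload per Gb of line rate
io_size_kb = 8

iops_per_port = usable_mb_per_s * 1024 / io_size_kb
print(f"~{iops_per_port:,.0f} IOPS to saturate one 8Gb port")    # ~102,400
print(f"~{iops_per_port * 2:,.0f} IOPS for a dual port HBA")     # ~204,800
```

That is an awful lot of random I/O to sustain from a single host.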

The cost of the Fibre Channel network is trivial at that point (assuming you have more than one server). I really like the latest HP blades because you just get a ton of bandwidth options with them right out of the box. Why stop at running everything over a measly single dual port 10GbE NIC when you can have double the NICs AND throw in a dual port Fibre adapter for not much more cash? Not only does this give more bandwidth, but more flexibility and traffic isolation as well (storage/network etc). On the blades at least it seems you can go even beyond that (more 10 gig ports); I was reading in one of the spec sheets for the PCIe 10GbE cards that on the ProLiant servers no more than two adapters are supported:

NOTE: No more than two 10GbE I/O devices are supported in a single ProLiant server.

I suspect that note may be out of date with the more recent ProLiant systems that have been introduced; after all, they are shipping a quad socket Intel ProLiant blade with three dual port 10GbE devices on it from the get go. And I can't help but think the beastly DL980 has enough PCI buses to handle a handful of 10GbE ports. The 10GbE FlexFabric cards list the BL685c G7 as supported as well, meaning you can get at least six ports on that blade too. So who knows…

Do the math: the added cost of a dedicated Fibre Channel network really is nothing. Now if you happen to go out and choose the most complicated-to-manage Fibre Channel infrastructure along with the most complicated Fibre Channel storage array(s), then all bets are off. But just because there are really complicated things out there doesn't mean you're forced to use them, of course.

Another factor is staff, I guess. If you have monkeys running your IT department, maybe Fibre Channel is not a good thing and you should stick to something like NFS, and you can secure your network by routing all of your VLANs through your firewall while you're at it, because you know your firewall can keep up with your line rate gigabit switches, right? Riiight.

I'm not saying FCoE is dead; I think it'll get here eventually, but I'm not holding my breath for it. It's really more of a step back than a step forward with present technology.

February 2, 2011

Oh no! We Ran outta IPs yesterday!

Filed under: Networking, Random Thought — Nate @ 9:37 pm

The Register put it better than I could:

World shrugs as IPv4 addresses finally exhausted

Count me among those that shrugged; I commented on this topic a few months ago.

November 16, 2010

HP serious about blade networking

Filed under: Networking — Nate @ 10:32 am

I was doing my rounds and noticed that HP launched a new blade for the Xeon 6500/7500 processors (I don't yet see news breaking of this on The Reg, so I beat them for once!), the BL620c G7. They also have another blade, the BL680c G7, which is a double wide solution that to me looks like nothing more than a pair of BL620c G7s stacked together, using the backplane to link the systems; IBM does something similar on their BladeCenter to connect a memory expansion blade onto their HX5 blade.

But what really caught my eye more than anything else is how much networking HP is including on their latest blades, whether it is the BL685c G7, or these two newer systems.

  • BL685c G7 & BL620c G7 both include 4 x 10GbE Flexfabric ports on board (no need to use expansion ports) – that is up to 16 FlexNICs per server – with three expansion slots you can get a max of 10x10GbE ports per server (or 40 FlexNICs per server)
  • BL680c G7 has 6 x 10GbE Flexfabric ports on board providing up to 24 FlexNICs per server – with seven expansion slots you can get a max of 20x10GbE ports per server (or 80 FlexNICs per server)

Side note: FlexFabric is HP's term for their Converged Network Adapter (CNA) technology.

Looking at the stock networking from Cisco, Dell, and IBM

  • Cisco – their site is complex as usual but from what I can make out their B230M1 blade has 2x10Gbps CNAs
  • Dell and IBM are stuck in 1GbE land, with IBM providing 2x1GbE on their HX5 and Dell providing 4x1GbE on their M910

What is even nicer about the extra NICs on the HP side, at least on the BL685c G7 and I presume the BL620c G7, is that because they are full height, the connections from the extra 2x10GbE ports on the blade feed into the same slots on the backplane. That means with a single pair of 10GbE modules on the chassis you can get a full 4x10GbE per server (8 full height blades per chassis). Normally, if you put extra NICs on the expansion ports, those ports are wired to different slots in the back, needing additional networking components in those slots.

You might be asking yourself: what if you don't have 10GbE and only have 1GbE networking? Well, first off – upgrade. 10GbE is dirt cheap now; there is absolutely no excuse for getting these new higher end blade systems and trying to run them off 1GbE. You're only hurting yourself by attempting it. But in the worst case, if you really don't know what you're doing and you happen to get these HP blades with 10GbE on them and want to connect them to 1GbE switches — well you can, they are backwards compatible with 1GbE switches, either with the various 1GbE modules or the 10GbE pass through module supporting both SFP and SFP+ optics.

So there you have it, 4x10GbE ports per blade standard. If it were me, I would take one port from each network ASIC and assign FlexNICs for VM traffic, and take the other port from each ASIC and enable jumbo frames for things like vMotion, fault tolerance, iSCSI, NFS, etc. I'm sure the cost of adding the extra dual port card is trivial when integrated onto the board, and HP is smart enough to recognize that!

Having more FlexNICs on board means you can use those expansion slots for other things, such as Fusion I/O accelerators, or maybe InfiniBand or native Fibre Channel connectivity. Having more FlexNICs on board also allows for greater flexibility in network configuration, of course; take for example the Citrix NetScaler VPX, which, last I checked, required essentially dedicated network ports in vSphere in order to work.

Myself I’m still not sold on the CNA concept at this point. I’m perfectly happy to run a couple FC switches per chassis, and a few extra cables to run to the storage system.

November 11, 2010

Extreme VMware

Filed under: Networking, Virtualization — Nate @ 7:29 pm

So I was browsing some of the headlines of the companies I follow during lunch and came across this article (seems available on many outlets), which I thought was cool.

I've known VMware has been a very big, happy user of Extreme Networks gear for a good long time now, though I wasn't aware of anything public about it, at least until today. It really makes me feel good that despite VMware's partnerships with EMC and NetApp, which include Cisco networking gear, at the end of the day they chose not to run Cisco for their own business.

But going beyond even that, it makes me feel good that politics didn't win out here; obviously the people running the network have a preference, and they were either able to fight, or didn't have to fight, to get what they wanted. Given that VMware is a big company with a big relationship with Cisco, I would kind of think that Cisco would try to muscle their way in. Many times they can succeed, depending on the management at the client company, but fortunately for the likes of VMware they did not.

SYDNEY, November 12. Extreme Networks, Inc., (Nasdaq: EXTR) today announced that VMware, the global leader in virtualisation and cloud infrastructure, has deployed its innovative enterprise, data centre and Metro Ethernet networking solutions.

VMware’s network features over 50,000 Ethernet ports that deliver connectivity to its engineering lab and supports the IT infrastructure team for its converged voice implementation.

Extreme Networks met VMware’s demanding requirements for highly resilient and scalable network connectivity. Today, VMware’s thousands of employees across multiple campuses are served by Extreme Networks’ leading Ethernet switching solutions featuring 10 Gigabit Ethernet, Gigabit Ethernet and Fast Ethernet, all powered by the ExtremeXOS® modular operating system.

[..]

“We required a robust, feature rich and energy efficient network to handle our data, virtualised applications and converged voice, and we achieved this through a trusted vendor like Extreme Networks, as they help it to achieve maximum availability so that we can drive continuous development,” said Drew Kramer, senior director of technical operations and R&D for VMware. “Working with Extreme Networks, from its high performance products to its knowledgeable and dedicated staff, has resulted in a world class infrastructure.”

Nice to see technology win out for once instead of back room deals which often end up screwing the customer over in the long run.

Since I'm here I guess I should mention the release of the X460 series of switches, which came out a week or two ago, intended to replace the now 4-year-old X450 series (both "A" and "E"). Notable differences and improvements include:

  • Dual hot swap internal power supplies
  • User swappable fan tray
  • Long distance stacking over 10GbE – up to 40 kilometers
  • Clear-Flow is now available when the switches are stacked (with the prior hardware, switches could not be stacked and use Clear-Flow)
  • Stacking module is now optional (X450 it was built in)
  • Standard license is Edge license (X450A was Advanced Edge) – still software upgradable all the way to Core license (BGP etc). My favorite protocol ESRP requires Advanced Edge and not Core licensing.
  • Hardware support for IPFIX, which they say is complementary to sFlow
  • Lifetime hardware warranty with advanced hardware replacement (X450E had lifetime, X450A did not)
  • Layer 3 Virtual Switching (yay!) – I first used this functionality on the Black Diamond 10808 back in 2005, it’s really neat.

The X460 seems to be aimed at the mid to upper range of GbE switches, with the X480 being the high end offering.

October 22, 2010

IPv4 address space exhaustion – tired

Filed under: Networking — Nate @ 11:21 am

Just saw YASOSAIV6 (Yet another story on Slashdot about IPv6).

They've been saying it for years, maybe even a decade: that we are running out of IPs and we need to move to IPv6. It's taken forever for various software and hardware manufacturers to get IPv6 into their stacks, and even now most of those stacks haven't seen much real world testing. IPv6 is of course a chicken-and-egg problem.

My take on it: from a technological standpoint I do not look forward to IPv6, not at all. Really, for one simple reason – IPv4 addresses are fairly simple to remember, and simpler to recognize. IPv6 – forget about it. I'm a simple minded person and that is a simple reason not to look forward to IPv6.

I don't have a problem with Network Address Translation (NAT); it's amazing to me how many people out there absolutely despise it. I won't spend much time talking about why I think NAT is a good thing because I have better things to spend my time on 🙂 [And yes, when I'm not using NAT I absolutely run my firewalls in bridging mode, again for simplicity purposes.]

I don't believe we have an IPv4 crisis yet. Sure, IANA (or whoever the organization that assigns IP addresses is) says the free pool is low, but guess what: service providers around the world have gobs of unused IPs. I talk to service providers fairly often and none of them are concerned about it; they do want you to be smart about IP allocation, however. I suppose if you're some big company and want to get 5,000 IP addresses you may need to be concerned, but for smaller organizations who may need a dozen or two dozen IPs at the most – really nothing to worry about.

One thing I think could free up a bunch of IPs and allow IPv4 to scale even further is to somehow fix the SSL/TLS/HTTPS protocol(s) so that they can support virtual hosts (short of using wildcard certs). I'm sure it's possible, but it won't be easy to get the software out to the field, to all the various edge devices, in order to support it. One company I worked at needed about a hundred IPs JUST for SSL (wildcard certs were not an option at the time due to lack of client side support).
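
For what it's worth, TLS does have a mechanism for exactly this in Server Name Indication (SNI): the client sends the hostname it wants during the handshake, so one IP address can present a different certificate per site. The sticking point back then, much like with wildcard certs, was client side support. A minimal sketch of server-side SNI certificate selection using Python's ssl module (the hostnames, cert paths and port are hypothetical):

```python
import socket
import ssl

# One listening socket, multiple certificates chosen by the SNI hostname the
# client sends during the handshake. Hostnames, cert/key paths and the port
# are all hypothetical.
contexts = {}
for name in ("site-a.example.com", "site-b.example.com"):
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(f"/etc/ssl/{name}.crt", f"/etc/ssl/{name}.key")
    contexts[name] = ctx

default_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
default_ctx.load_cert_chain("/etc/ssl/default.crt", "/etc/ssl/default.key")

def pick_certificate(tls_socket, server_name, initial_context):
    # Called mid-handshake with the SNI name; swapping the context swaps
    # which certificate gets presented to this client.
    if server_name in contexts:
        tls_socket.context = contexts[server_name]

default_ctx.sni_callback = pick_certificate

with socket.create_server(("0.0.0.0", 8443)) as listener:
    with default_ctx.wrap_socket(listener, server_side=True) as tls_listener:
        conn, addr = tls_listener.accept()   # TLS handshake happens here
        conn.sendall(b"HTTP/1.0 200 OK\r\n\r\nhello\r\n")
        conn.close()
```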

I know we'll get to IPv6 eventually, and I'll accept that when we get there, though it may be far enough out that I won't be dealing with lower level stuff anymore and so won't need to be concerned about it. I don't know.

October 11, 2010

Qlogic answers my call for help

Filed under: Networking, Virtualization — Nate @ 8:53 am

THANK YOU QLOGIC. I have been a long time user of Qlogic stuff and like them a lot. If you have been reading this blog for a while, you may have noticed earlier in the year I was criticizing the network switch industry (which includes my favorite manufacturers as well) for going down the route of trying to "reclaim the network" by working on standards that would move the inter-VM switching traffic out of the host and back into the network switches. I think the whole concept is really stupid, and a desperate attempt to hold onto what will be a dramatically declining ports market in the coming years. Look no further than my recent post on testing the limits of virtualization.

My answer to the dilemma? Put a layer 2 hardware switching fabric into the server: less latency, faster performance.

And Qlogic has done just that. I will refrain from using colorful metaphors to describe my glee, but I certainly hope this is a trend going forward.

According to our friends at The Register, Qlogic has released new Converged Network Adapters (CNAs) that include an integrated layer 2 switch for virtual machines.

EMEA Marketing head for QLogic, Henrik Hansen, said: “Within the ASIC we have embedded a layer 2 Ethernet switch [and] can carve up the two physical ports into 4 NIC partitions or NPARs, which can each be assigned to a specific VM. There can be eight of them with dual-port product.” An Ethernet message from one VM to another in the same server goes to the QLogic ASIC and is switched back to the target VM. This is reminiscent of Emulex’ VNIC feature.

From the specs:

  • PCI Express Gen2 x8
  • Dual 10Gbps and quad 1Gbps ports on a single controller
  • Integrated 10GBase-KR and 10GBase-T PHYs
  • Concurrent TCP/IP, FCoE, and iSCSI protocol support with full hardware offload
  • Industry standard SR-IOV and QLogic’s switch-agnostic NIC Partitioning (NPAR)
  • Wake-on-LAN including Magic Packet recognition
  • Common drivers and API’s with existing QLogic NIC, FCoE, and iSCSI products

Side note: I love that they have 10GbaseT too!!

I think the ASIC functionality needs more work, as it seems limited to supporting only a couple of VMs rather than being a more generic switching fabric, but we gotta start somewhere!

The higher end 8200 CNA looks like it has much of the same technology available in the HP FlexFabric (at least part of which I know is already based on Qlogic technology, though it might not be these specific ASICs, I don't know).

VMflex. With QLogic’s new VMflex technology, one Converged Network Adapter is viewed by the server operating system (OS) as a flexible mix (up to  four per physical port) of standalone NICs, FCoE adapters, and iSCSI adapters, with the ability to allocate guaranteed bandwidth to each virtual adapter.  This unique feature can be switch dependent or switch agnostic— it is not necessary to pair an 8200 Series adapter with any specific 10GbE switch model to enable partitioning.

I would love to see more technical information on VMflex and the layer 2 switching fabric; I tried poking around on Qlogic's site but didn't come up with anything too useful.

So I say again, thank you Qlogic, and I hope you have started a trend here. I firmly believe that offloading the switching functionality to an ASIC rather than performing it in software is critical, and when you have several hundred VMs running on a single server, not wasting your uplink bandwidth to talk between them is just as critical. The ASIC need not offer too much functionality; for me the main things would be VLAN tagging and sFlow, and some folks may want QoS as well.

My other request, and I don't know if it is already possible or not, is to be able to run a mix of jumbo frames and standard frame sizes on different virtual NICs riding on the same physical network adapter, without configuring everything for jumbo frames, because that causes compatibility issues (especially for anything using UDP!).

The networking industry has it backwards in my opinion, but I can certainly understand the problem they face.
