TechOpsGuys.com Diggin' technology every day

December 13, 2009

Save MySQL from Oracle

Filed under: News — Nate @ 11:02 am

One of the creators (the creator?) of MySQL is pleading with the public to write to the EC to save MySQL.

Myself, I’m not so sure of the future of MySQL in any case; it seems since Sun bought them it has gotten into nothing but trouble. I’m sure the MySQL guys enjoyed the big payout, but it may have cost them even more. I’m still using versions of MySQL that were released before Sun bought them because there is so much uncertainty around the versions that are out now. It’s been forked at least once, and I have questions about the stability of the latest official branches.

Keep in mind, if you do use MySQL and want to secure some sort of guarantee from Oracle, that Oracle already owns InnoDB and BerkeleyDB, InnoDB of course being probably the most widely deployed engine for MySQL.

I for one am against the merger, not for MySQL but for Java. Split Java (and MySQL I suppose) out and Oracle can have the rest of Sun. Oracle already has one of the big enterprise JVMs – JRockit, acquired when they bought BEA. The only other big JVM I know of is from IBM.

December 10, 2009

Lesser of two evils

Filed under: General — Nate @ 10:05 pm

Thanks to The Register for another interesting thing to write about. This time it’s about a Mozilla guy, who apparently was the one who wrote Firefox (I still miss Phoenix, it was a really light browser, unlike Firefox today), suggesting people should switch their search engines from Google to Bing because Bing has a better privacy policy.

So which is the lesser of the two evils, Microsoft or Google? For me, at least for the moment, it is Google, but with each passing day my distrust of them grows. I have never signed up for their services, I have never accepted their cookies, and while I do use their search engine it’s unlikely the searches I do provide much value to their advertisers. I used to use alltheweb.com as my search engine; I resisted Google for as long as I could. The thing that drove me away from alltheweb at the time was when they introduced banner ads. I even told them as much, and they thanked me for the feedback and said they would take it under consideration for future improvements, or something along those lines. I notice now they do not have banner ads. I’m not against advertising myself (I do not, and never have, used ad-blocking browser plugins), but I am against collection of data on me for that purpose. I don’t bother trying to opt out of such systems, because I don’t trust the opt-out in the first place; I would much rather take the time to block the data collection on my end (wherever possible). I’m sure I won’t get them all, but I’ll get most.

Anyways, on the topic of privacy on the net: I’m probably one of the few that take it fairly seriously. That is, I rarely sign up for any offers, and I do create unique email addresses for each organization I have a relationship with (which as of last count is roughly 230 unique email addresses, each with an associated inbox). I host my own email, DNS, and web services on a server I physically own at a local co-location facility that I pay for. I have hosted my own email, web, and DNS for more than ten years now. This blog is not hosted there because it wasn’t set up by me, and there isn’t much private information here anyways.

On to web browsing. I have had my web browser prompt me for each and every web cookie that comes in for at least the last five years now (I do love that feature that saves the preferences for the site). Checking the sqlite database in Firefox reveals the following (a rough query sketch for reproducing these counts follows the list):

  • Reject cookies outright from 2,099 web sites
  • Accept cookies from 216 web sites
  • Accept cookies from 470 web sites for “session” only
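Here is a minimal sketch of pulling those counts yourself. I’m assuming the Firefox 3.x layout here (a permissions.sqlite file in the profile, a moz_hosts table, and permission codes 1=accept, 2=reject, 8=session-only), so double check against your own profile before trusting the numbers:

import sqlite3

# Path to your Firefox profile's permission store (adjust as needed).
DB = "/path/to/your/profile/permissions.sqlite"

# Assumed permission codes for the 'cookie' type in Firefox 3.x.
LABELS = {1: "accept", 2: "reject", 8: "accept for session only"}

conn = sqlite3.connect(DB)
rows = conn.execute("SELECT permission, COUNT(*) FROM moz_hosts "
                    "WHERE type = 'cookie' GROUP BY permission")
for permission, count in rows:
    print "%s: %d web sites" % (LABELS.get(permission, permission), count)
conn.close()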

I read recently that Flash cookies are becoming a more common means of tracking users as well, because they are more difficult to detect/delete. In fact I had no idea that there was such a thing as cookies in Flash until I read the article (thanks again to The Reg). I have been using the Prefbar Firefox plugin for years now (since the Phoenix days I believe), which provides a couple of handy things for Flash: one is to enable/disable the plugin on demand, the other is to immediately kill all Flash objects in the page. It works pretty well. I usually keep Flash off unless I specifically need it, not for privacy reasons but more for performance and stability reasons (and most Flash advertisements are very annoying). I know there are more advanced plugins that deal with Flash and advertisements in general; I’m just too lazy to try them. I’ve used the same basic plugins for several years and haven’t really tried anything new.
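If you are curious how many Flash cookies you have already accumulated, here is a quick sketch for Linux. The assumption is that the Flash player drops its .sol files under ~/.macromedia (Windows and Mac use different paths):

import os

# Default Flash LSO ("Flash cookie") location on Linux.
BASE = os.path.expanduser("~/.macromedia/Flash_Player/#SharedObjects")

count = 0
for dirpath, dirnames, filenames in os.walk(BASE):
    for name in filenames:
        if name.endswith(".sol"):
            print os.path.join(dirpath, name)
            count += 1
print "%d Flash cookies found" % count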

I am becoming more convinced as time goes on that Google is nothing more than a front for the NSA/CIA or some other three-letter organization that you’ve never heard of, trying to get you to willingly give them all of your information, whether it is email, IM, DNS, voice mail, phone calls, or pictures; hell, I can’t think of all of the services they offer since I don’t use them. I see comments on slashdot and am shocked to see people say things like they’d rather Google have their private data than their ISP. Me, I’m the opposite. I’d rather have my ISP have my data; there’s a lot less chance they’ll have any interest in it, and even less of a chance they’ll be able to effectively use it against me than the data mining masterminds at Google.

I have (to put it mildly, as anyone who knows me will attest) a deep-rooted mistrust of Microsoft as well; it has bonded with my DNA at this point. That is somewhat of a different post though.

I’m not quite to the point where I tunnel my internet traffic over a VPN to my co-located server, but who knows, perhaps in a few years that’s what I will have to resort to. My DNS traffic is tunneled to my co-located server today, mainly because I host my own internal DNS and the master zones live on the other end of the connection, so I rely on it for my internal and external DNS.

So, lesser of two evils, Microsoft or Google? Tough choice indeed. Perhaps the one or two readers of this blog can contribute links to other search engines, hopefully less obvious ones that might be worthwhile to use.

December 9, 2009

AT&T plans on stricter mobile data plans

Filed under: Networking — Nate @ 6:03 pm

You know one thing that really drives me crazy about users? It’s those people that think they have a right to megabits, if not tens of megabits, of bandwidth for pennies a month. Those people that complain $50/mo is such a ripoff for 5-10Mbit broadband!

I have always had a problem with unlimited plans myself; I recall in the mid 90s getting kicked off more than a few ISPs for being connected to their modems 24/7 for days on end. The plan was unlimited, so I used it. I asked, even pleaded, for the ISPs to tell me what the real limit was. You know what? Of all of the ones I tried at the time there was only one. I was in Orange County, California at the time and the ISP was neptune.net. I still recall to this day the owner’s answer. He did the math calculating the number of hours in a day/week/month (24 hours times roughly 30 days is about 720 hours a month) and said that’s how many I can use. So I signed up and used that ISP for a few years (until I moved to Washington) and he never complained (and I almost never got a busy signal). I have absolutely no problem paying more for premium service; it’s just that I appreciate full disclosure on any service I get, especially if it is advertised as unlimited.

Companies are starting to realize that the internet wasn’t built to scale at the edge. It’s somewhat fast at the core, but the pipes from the edge to the core are a tiny fraction of what they could be (and if we increased those edge pipes, you would need to increase the core by an order of magnitude or more). Take for example streaming video. There is almost non-stop chatter on the net about how people are going to ditch TV and watch everything (or many things) on the internet. I had a lengthy job interview with one such company that wanted to try to make that happen; they are now defunct, but they specialized in peer-to-peer video streaming with a touch of CDN. I remember the CTO telling me some stat he saw from Akamai, which is one of the largest CDNs out there (certainly the most well known, I believe), bragging at one point about having something like 10,000 simultaneous video streams flowing through their system (or maybe it was 50,000 or something).

Put that in some perspective: think about the region around you, how many cable/satellite subscribers there are, and how well your local broadband provider could handle that many of them unicast-streaming video from sites out on the net. Things would come to a grinding halt very quickly.

It certainly is a nice concept to be able to stream video (I love that video, it’s the perfect example illustrating the promise of the internet) and other high-bit-rate content (maybe video games), but the fact is it just doesn’t scale. It works fine when there are only a few users. We need an order of magnitude (or more) of additional bandwidth towards the edge to be able to handle this. Or, in theory at least, high-grade multicast and vast amounts of edge caching. Though multicast is complicated enough that I’m not holding my breath for it being deployed on a wide scale on the internet anytime soon; the best hope might be when everyone is on IPv6, but I’m not sure. On paper it sounds good; I don’t know how well it might work in practice on a massive scale.

So as a result companies are wising up: a small percentage of users are abusing their systems by actually using them for what they are worth. The rest of the users haven’t caught on yet. These power users are forcing the edge bandwidth providers to realize that the plans dreamed up by the marketing departments just aren’t going to cut it (at least not right now, maybe in the future). So they are doing things like capping data transfers, charging slightly excessive fees, or cutting users off entirely.

The biggest missing piece of the puzzle has been an easy way for the end user to know how much bandwidth they are using, so they can control the usage themselves and not blow their monthly cap in 24 hours. It seems that Comcast is working on this now, and AT&T is working on it for their wireless subscribers. That’s great news. Provide solid limits for the various tiers of service, and provide an easy way for users to monitor their progress against those limits. I only wish wireless companies did that for their voice plans (how hard can it be for a phone to keep track of your minutes?). That said, I did sign up for Sprint’s Simply Unlimited plan so I wouldn’t have to worry about minutes myself; it saved a good chunk off my previous 2000-minute plan. Even though I don’t use anywhere near what I used to (I seem to average 300-500 minutes/month at the most), I still like the unlimited plan just in case.
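Until the providers ship something, you can get a crude version of this yourself on a Linux gateway by watching the interface counters. A minimal sketch, assuming a box routing all your traffic through eth0 and a made-up 250GB monthly cap (note these counters reset at boot, so a real version would persist the deltas somewhere):

# Compare bytes moved on an interface against a hypothetical monthly cap.
IFACE = "eth0"
CAP_GB = 250.0  # made-up cap for illustration

def counters(iface):
    # /proc/net/dev: first field after the colon is rx bytes, ninth is tx bytes.
    for line in open("/proc/net/dev"):
        if line.strip().startswith(iface + ":"):
            fields = line.split(":", 1)[1].split()
            return long(fields[0]), long(fields[8])
    raise ValueError("interface not found: " + iface)

rx, tx = counters(IFACE)
used_gb = (rx + tx) / (1024.0 ** 3)
print "%s: %.2f GB since boot (%.1f%% of a %.0f GB cap)" % (
    IFACE, used_gb, 100.0 * used_gb / CAP_GB, CAP_GB)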

Anyways, I suppose it’s unfortunate that the users get the shaft in the end. They should have gotten the shaft from the beginning, but I suppose the various network providers wanted to get their foot in the door with the users, get them addicted (or at least try), then jack up the rates later once they realized their original ideas were not possible.

Bandwidth isn’t cheap; at low volumes it can cost upwards of $100/Mbit or even more at a data center (where you don’t need to be concerned about telco charges or things like local loops). At that rate a dedicated 10Mbit commit would run $1,000 a month. So if you think you’re getting the shaft paying $50/mo for a 10Mbit+ burstable connection, shut up and be thankful you’re not paying more than 10x that.

So no, I’m not holding my breath for wide scale deployment of video streaming over the internet, or wireless data plans that simultaneously allow you to download at multi-megabit speeds while providing really unlimited data plans at consumer level pricing. The math just doesn’t work.

I’m not bitter or anything; you’d probably be shocked at how little bandwidth I actually use on my own broadband connection. It’s a tiny amount, mainly because there isn’t a whole lot of stuff on the internet that I find interesting anymore. I was much more excited back in the 90s, but as time has gone on my interest in the internet in general has declined (it probably doesn’t help that my job for the past several years has been supporting various companies whose main business was internet-facing).

I suppose the next step beyond basic bandwidth monitoring might be something along the lines of internet roaming, in which you can get a data plan with a very high cap (or unlimited), but only for certain networks (perhaps mainly local ones, to avoid going over the backbones), and pay a different rate for general access to the internet. Myself, I’m very much for net neutrality only where it relates to restricting bandwidth providers from directly charging content companies for access for their users (e.g. Comcast charging Google extra so Comcast users can watch YouTube). They should be charging the users for that access, not the content providers.

(In case you’re wondering what inspired this post, it was the AT&T iPhone data plan changes that I linked to above.)

December 8, 2009

Fusion IO throughput benchmarks

Filed under: Storage — Nate @ 4:51 pm

I don’t visit the MySQL Performance Blog too often, but today I happened to run across a very interesting post here comparing a Fusion IO card to an 8-disk 15k RPM RAID 1+0 array. Myself, I’ve been interested in Fusion IO since I first heard about it; very interesting technology, though I have not used it personally yet.

The most interesting numbers to me were the comparatively poor sequential write performance versus random write performance on the same card. Random write was upwards of 3 times faster.

December 2, 2009

Extremely Simple Redundancy Protocol

Filed under: Networking — Nate @ 7:31 am

ESRP. That is what I have started calling it, at least. The official designation is Extreme Standby Router Protocol. It’s one of the main reasons, if not the main reason, I prefer Extreme switches at the core of any layer 3 network. I’ll try to explain why here, because Extreme really doesn’t spend any time promoting this protocol; I’m still pushing them to change that.

I’ve deployed ESRP at two different companies in the past five years.

What are two basic needs of any modern network?

  1. Layer 2 loop prevention
  2. Layer 3 fault tolerance

Traditionally these are handled by separate protocols that are completely oblivious to one another, mainly some form of STP/RSTP and VRRP (or maybe HSRP if you’re crazy). There have also long been interoperability issues between various implementations of STP, further complicating things because STP often needs to run on every network device for it to work right.

With ESRP life is simpler.

Advantages of ESRP include:

  • Collapsing of layer 2 loop prevention and layer 3 fault tolerance(with IP/MAC takeover) into a single protocol
  • Can run in either layer 2 only mode, layer 3 only mode or in combination mode(default).
  • Sub second convergence/recovery times.
  • Eliminates the need to run protocols of any sort on downstream network equipment
  • Virtually all downstream devices are supported. Does not require an Extreme-only network; fully interoperable with other vendors like Cisco, HP, Foundry, Linksys, Netgear, etc.
  • Supports both managed and unmanaged downstream switches
  • Able to override loop prevention on a per-port basis (e.g. hook a firewall or load balancer directly to the core switches, trusting they will handle loop prevention themselves in active/failover mode)
  • The “who is master?” question is determined by setting an ESRP priority level, a number from 0-254 (255 puts the switch in a standby state).
  • Set up from scratch in as little as three commands(for each core switch)
  • Protect a new vlan with as little as 1 command (for each core switch)
  • Only one IP address per vlan needed for layer 3 fault tolerance(IP-based management provided by dedicated out of band management port)
  • Supports protecting up to 3000 vlans per ESRP instance
  • Optional “load balancing” by running core switches in active-active mode with some vlans on one, and others on the other.
  • Additional fail over based on tracking of pings, route table entries or vlans.
  • For small to medium sized networks you can use a pair of X450A(48x1GbE) or X650(24x10GbE) switches as your core for a very low priced entry level solution.
  • Mature protocol. I don’t know exactly how old it is, but doing some searches indicates it is at least 10 years old at this point
  • Can provide significantly higher overall throughput vs ring based protocols(depending on the size of the ring), as every edge switch is directly connected to the core.
  • Nobody else in the industry has a protocol that can do this. If you know of another protocol that combines layer 2 and layer 3 into a single protocol, let me know. For a while I thought Foundry’s VSRP was it, but it turns out that is mainly layer 2 only. I swear I read a PDF that talked about limited layer 3 support in VSRP back in the 2004/2005 time frame, but not anymore. I haven’t spent the time to determine the use cases between VSRP and Foundry’s MRP, which sounds similar to Extreme’s EAPS, a layer 2 ring protocol heavily promoted by Extreme.

Downsides to ESRP:

  • Extreme proprietary protocol. To me this is not a big deal, as you only run this protocol at the core; downstream switches can be from any vendor.
  • Perceived complexity due to the wide variety of options, but they are optional; basic configurations should work fine for most people and are simple to configure.
  • Default election algorithm includes port weighting; this can be good or bad depending on your point of view. Port weighting means that if you have an equal number of active links of the same speed on each core switch, and the master switch has a link go down, the network will fail over. If you have non-switches connected directly to the core (e.g. a firewall) I will usually disable the port weighting on those specific ports so I can reboot the firewall without causing the core network to fail over. I like port weighting myself, viewing it as the network trying to maintain its highest level of performance/availability. That is, who knows why that port was disconnected: bad cable? bad ASIC? bad port? Fail over to the other switch that has all of its links in a healthy state.
  • Not applicable to all network designs(is anything?)

The optimal network configuration for ESRP is very simple: two core switches cross connected to each other (with at least two links), and a number of edge switches, each with at least one link to each core switch. You can have as few as three switches in your network, or you can have several hundred (as many as you can connect to your core switches; the max today, I think, is around 760 switches using high-density 1GbE ports on a Black Diamond 8900, plus 8x1Gbps ports for cross connect).

ESRP Mesh Network Design

ESRP Domains

ESRP uses a concept of domains to scale itself. A single switch is master of a particular domain, which can include any number of vlans up to 3000. Health packets are sent for the domain itself rather than for the individual vlans, which dramatically simplifies things and makes them more scalable at the same time.

This does mean that if there is a failure in one vlan, all of the vlans for that domain will fail over, not just that one specific vlan. You can configure multiple domains if you want; I configure my networks with one domain per ESRP instance. Multiple domains can come in handy if you want to distribute the load between the core switches. A vlan can be a member of only one ESRP domain (I expect; I haven’t tried to verify).
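As a sketch of the load distribution idea (same command syntax as the full example later in this post; the backoffice vlan and the priority values are made up for illustration), core switch A might carry:

config esrp-prod priority 100
create esrp esrp-prod2
config esrp-prod2 add master backoffice
config esrp-prod2 priority 50

while core switch B gets the mirror image (priority 50 for esrp-prod, 100 for esrp-prod2), so each core is master of one domain and both carry traffic until one fails.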

Layer 2 loop prevention

The way ESRP loop prevention works is that the links going to the slave switch are placed in a blocking state, which eliminates the need for downstream protocols and allows you to support even unmanaged switches transparently.

Layer 3 fault tolerance

Layer 3 fault tolerance in ESRP operates in two different modes depending on whether or not the downstream switches are Extreme. It assumes by default that they are; you can override this behavior on a per-port basis. In an all-Extreme network ESRP uses EDP [Extreme Discovery Protocol] (similar to Cisco’s CDP) to inform downstream switches that the core has failed over and to flush their forwarding entries for the core switch.

If the downstream switches are not Extreme switches, and you leave the core switch in its default configuration, it will likely take some time (seconds to minutes) for those switches to expire their forwarding table entries and discover the network has changed.

Port Restart

If you know you have downstream switches that are not Extreme, I suggest, for best availability, configuring the core switches to restart the ports those switches are on. Port restart is a feature of ESRP which causes the core switch to reset the links of the ports you configure, to try to force those switches to flush their forwarding tables. This process takes more time than in an Extreme-only network, but in my own tests, specifically with older Cisco layer 2 switches, F5 BigIP v9, and Cisco PIX, it takes less than one second (if you have a ping session going and trigger a fail over event, rarely is a ping lost).

Host attached ports

If you are connecting devices like a load balancer or a firewall directly to the switch, you typically want to hand off loop prevention to those devices, so that the slave core switch will allow traffic to traverse those specific ports regardless of the state of the network. Host attached mode is an ESRP feature that is enabled on a per-port basis.

Integration with ELRP

ESRP does not protect you from every type of loop in the network; by design it is intended to prevent a loop from occurring between an edge switch and the two core switches. If someone plugs an edge switch back into itself, for example, that will still cause a loop.

ESRP integrates with another Extreme-specific protocol named ELRP, or Extreme Loop Recovery Protocol. Again, I know of no other protocol in the industry that is similar; if you do, let me know.

What ELRP does is send packets out on the ports you configure and look at the number of responses; if there are more than it expects, it sees that as a loop. There are three modes to ELRP (this is getting a bit off topic but is still related). The simplest mode is one-shot mode, where you have ELRP send its packets once and report. The second mode is periodic mode, where you configure the switch to send packets periodically (I usually use 10 seconds or so), and it will log if loops are detected (it tells you specifically which ports the loops are originating on).

The third mode is integrated mode, which is how it relates to ESRP. Myself, I don’t use integrated mode and suggest you don’t either, at least if you follow an architecture like mine. What integrated mode does is, if a loop is detected, tell ESRP to fail over, hoping that the standby switch has no such loop. In my setups the entire network is flat, so if a loop is detected on one core switch, chances are extremely (no pun intended) high that the same loop exists on the other switch, so there’s no point in trying to fail over. But I still configure all of my Extreme switches (both edge and core) with ELRP in periodic mode, so if a loop occurs I can track it down more easily.
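From memory, setting up periodic mode looks something like the following (the exact syntax has varied between code revs, so verify it against the documentation for your release; the 10 second interval is just my usual choice):

enable elrp-client
config elrp-client periodic webservers ports all interval 10 log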

Example of an ESRP configuration

We will start with this configuration:

  • A pair of Summit X450A-48T switches as our core
  • 4x1Gbps trunked cross connects between the switches (on ports 1-4)
  • Two downstream switches, each with 2x1Gbps uplinks on ports 5-6 and 7-8 respectively, which are trunked as well.
  • One VLAN named “webservers” with a tag of 3500 and an IP address of 10.60.1.1
  • An ESRP domain named esrp-prod

The non-ESRP portion of this configuration is:

enable sharing 1 grouping 1-4 address-based L3_L4
enable sharing 5 grouping 5-6 address-based L3_L4
enable sharing 7 grouping 7-8 address-based L3_L4
create vlan webservers
config webservers tag 3500
config webservers ipaddress 10.60.1.1 255.255.255.0
config webservers add ports 1,5,7 tagged

What this configuration does

  • Creates a port sharing group(802.3ad) grouping ports 1-4 into a virtual port 1.
  • Creates a port sharing group(802.3ad) grouping ports 5-6 into a virtual port 5.
  • Creates a port sharing group(802.3ad) grouping ports 7-8 into a virtual port 7.
  • Creates a vlan named webservers
  • Assigns tag 3500 to the vlan webservers
  • Assigns the IP 10.60.1.1 with the netmask 255.255.255.0 to the vlan webservers
  • Adds the virtual ports 1,5,7 in a tagged mode to the vlan webservers

The ESRP portion of this configuration is:

create esrp esrp-prod
config esrp-prod add master webservers
config esrp-prod priority 100
config esrp-prod ports mode 1 host
enable esrp

The only difference between the master and the slave is the priority. From 0-254, higher numbers mean higher priority; 255 is reserved for putting the switch into a standby state.
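For example, with the master at priority 100 as configured above, the slave’s ESRP configuration is identical except for this one line (90 here is an arbitrary lower value):

config esrp-prod priority 90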

What this configuration does

  • Creates an ESRP domain named esrp-prod.
  • Adds a master vlan to the domain; I believe the master vlan carries the control traffic
  • Configures the switch for a specific priority [optional – I highly recommend doing it]
  • Enables host attach mode for port 1, which is a virtual trunk for ports 1-4. This allows traffic for potentially other host attached ports on the slave switch to traverse to the master to reach other hosts on the network. [optional – I highly recommend doing it]
  • Enables ESRP itself (you can use the command show esrp at this point to view the status)

Protecting additional vlans with ESRP

It is a simple one-line command on each core switch. Extending the example above, say you added a vlan named appservers with its associated parameters and wanted to protect it; the command is:

config esrp-prod add member appservers

That’s it.
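Putting it together: if the appservers vlan didn’t exist yet, the whole thing, following the same pattern as the webservers vlan (the tag and IP here are made up for the example), would be:

create vlan appservers
config appservers tag 3600
config appservers ipaddress 10.60.2.1 255.255.255.0
config appservers add ports 1,5,7 tagged
config esrp-prod add member appservers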

Gotchas with ESRP

There is only one gotcha that I can think of offhand specific to ESRP. I believe it is a bug; I reported it a couple of years ago (against code rev 11.6.3.3 and earlier; the current code rev is 12.3.x) and I don’t know if it is fixed yet. If you are using port-restart configured ports on your switches and you add a vlan to your ESRP domain, those links will get restarted (as expected); what is not expected is that this causes the network to fail over, because for a moment the port weighting kicks in, detects link failure, and forces the switch to a slave state. I think the software could be aware of why the ports are going down and not go to a slave state.

Somewhat related, again with port weightings: if you are connecting a new switch to the network and you happen to connect it to the slave switch first, port weighting will kick in, since the slave switch now has more active ports than the master, and will trigger ESRP to fail over.

The workaround to this, and in general a good practice anyways with ESRP, is to put the slave switch in a standby state when you are doing maintenance on it; this will prevent any unintentional network fail overs from occurring while you’re messing with ports/vlans etc. You can do this by setting the ESRP priority to 255. Just remember to put it back to a normal priority after you are done. Even in a standby state, if you have ports that are in host attached mode (again, e.g. firewalls or load balancers) those ports are not impacted by any state changes in ESRP.
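For example, before doing maintenance on the slave core switch:

config esrp-prod priority 255

and when you are done, set it back to whatever its normal priority was (e.g. the 90 from the earlier example):

config esrp-prod priority 90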

Sample Modern Network design with ESRP

Switches:

  • 2 x Extreme Networks Summit X650-24t with 10GbaseT for the core
  • 22 x Extreme Networks Summit X450A-48T, each with an XGM2-2xn expansion module which provides 2x10GbaseT uplinks, providing 1,056 ports of highest performance edge connectivity (optionally select the X450e for lower, or the X350 for lowest cost edge connectivity; feel free to mix/match, all of them use the same 10GbaseT uplink module).

Cross connect the X650 switches to each other using 2x10GbE links with CAT6A UTP cable. Connect each of the edge switches to each of the core switches with CAT5e/CAT6/CAT6A UTP cable. Since we are working at 10Gbps speeds there is no link aggregation/trunking needed for the edge (there is still aggregation used between the core switches), simplifying configuration even further.
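As a sketch, if you dedicate the first two ports of each X650 to the cross connect (a hypothetical port assignment), the aggregation command follows the same pattern as in the configuration example above:

enable sharing 1 grouping 1-2 address-based L3_L4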

Is a thousand ports not enough? Break out the 512Gbps stacking for the X650 and add another pair of X650s; your configuration changes to include:

  • Two stacks of 2 x Extreme Networks X650-24t switches, each stack joined by a 512Gbps interconnect (which exceeds many chassis switch backplanes in performance).
  • 46 x 48-port edge switches providing 2,208 ports of edge connectivity.

Two thousand ports not enough, really? You can go further, though the stacking interconnect performance drops in half; add another pair of X650s and your configuration changes to include:

  • Two stacks of 3 x Extreme Networks X650-24t switches, each stack with a 256Gbps interconnect (still exceeding many chassis switch backplanes in performance).
  • 70 x 48-port edge switches providing 3,360 ports of edge connectivity.

The maximum number of switches in an X650 stack is eight. My personal preference with this sort of setup is not to go beyond three. There’s only so much horsepower to do all of the routing and related work, and when you’re talking about more than three thousand ports connected, I just feel more comfortable with a bigger switch beyond that point.

Take a look at the Black Diamond 8900 series switch modules for the 8800 series chassis. It is a more traditional chassis-based core switch. The 8900 series modules are new, providing high density 10GbE and even high density 1GbE (96 ports per slot). It does not support 10GbaseT at the moment, but I’m sure that support isn’t far off. It does offer a 24-port 10GbE line card with SFP+ ports (there is an SFP+ variant of the X650 as well). I believe the 512Gbps stacking between a pair of X650s is faster than the backplane interconnect on the Black Diamond 8900, which is between 80-128Gbps per slot depending on the size of the chassis (this performance is expected to double in 2010). While the backplane is not as fast, the CPUs are much faster, and there is a lot more memory for routing/management tasks than is available on the X650.

The upgrade process for going from an X650-based stack to a Black Diamond based infrastructure is fairly straightforward. They run the same operating system and use the same configuration files. You can take down your slave ESRP switch, copy the configuration to the Black Diamond, re-establish all of the links, and then repeat the process with the master ESRP switch. You can do all of this with approximately one second of combined downtime.

So I hope, in part with this posting, you can see what draws me to the Extreme portfolio of products. It’s not just the hardware or the lower cost, but the unique software components that tie it together. In fact, as far as I know, Extreme doesn’t even make their own network chipsets anymore. I think the last one was in the Black Diamond 10808, released in 2003, which is a high end FPGA-based architecture (they call it programmable ASICs; I suspect that means high end FPGAs, but I’m not certain). They primarily (if not exclusively) use Broadcom chipsets now. They’ve used Broadcom in their Summit series for many years, but their decision to stop making their own chips is interesting in that it lowers their costs quite a bit. And their software is modular enough to adapt to many configurations (e.g. their Black Diamond 10808 uses dual Pentium III CPUs; the Summit X450 series uses ARM-based CPUs, I think).
