Diggin' technology every day


Extremely Simple Redundancy Protocol

ESRP. That is what I have started calling it, at least. The official designation is Extreme Standby Router Protocol. It's one of the main reasons, if not the main reason, I prefer Extreme switches at the core of any Layer 3 network. I'll try to explain why here, because Extreme really doesn't spend any time promoting this protocol; I'm still pushing them to change that.

I've deployed ESRP at two different companies in the past five years.

What are two basic needs of any modern network?

  1. Layer 2 loop prevention
  2. Layer 3 fault tolerance

Traditionally these are handled by separate protocols that are completely oblivious to one another, typically some form of STP/RSTP and VRRP (or maybe HSRP if you're crazy). There have also long been interoperability issues between the various implementations of STP, further complicating matters because STP often needs to run on every network device for it to work right.

With ESRP life is simpler.

Advantages of ESRP include:

  • Collapses layer 2 loop prevention and layer 3 fault tolerance (with IP/MAC takeover) into a single protocol
  • Can run in layer 2 only mode, layer 3 only mode, or combination mode (the default)
  • Sub-second convergence/recovery times
  • Eliminates the need to run protocols of any sort on downstream network equipment
  • Virtually all downstream devices are supported. It does not require an Extreme-only network, and is fully interoperable with other vendors such as Cisco, HP, Foundry, Linksys, Netgear, etc.
  • Supports both managed and unmanaged downstream switches
  • Able to override loop prevention on a per-port basis (e.g. hook a firewall or load balancer directly to the core switches when you trust them to handle loop prevention themselves in active/failover mode)
  • The "who is master?" question is determined by setting an ESRP priority, a number from 0-254; 255 is reserved for standby state
  • Set up from scratch in as little as three commands (for each core switch)
  • Protect a new vlan with as little as one command (for each core switch)
  • Only one IP address per vlan needed for layer 3 fault tolerance (IP-based management provided by a dedicated out-of-band management port)
  • Supports protecting up to 3000 vlans per ESRP instance
  • Optional "load balancing" by running the core switches in active-active mode, with some vlans on one switch and the others on the other
  • Additional failover triggers based on tracking of pings, route table entries, or vlans
  • For small to medium sized networks you can use a pair of X450A (48x1GbE) or X650 (24x10GbE) switches as your core for a very low-priced entry-level solution
  • Mature protocol. I don't know exactly how old it is, but some searching indicates it is at least 10 years old at this point
  • Can provide significantly higher overall throughput vs. ring-based protocols (depending on the size of the ring), since every edge switch is directly connected to the core
  • Nobody else in the industry has a protocol that can do this. If you know of another protocol that combines layer 2 and layer 3 into a single protocol, let me know. For a while I thought Foundry's VSRP was it, but it turns out that is mainly layer 2 only. I swear I read a PDF that talked about limited layer 3 support in VSRP back in the 2004/2005 time frame, but not anymore. I haven't spent the time to determine the use cases between VSRP and Foundry's MRP, which sounds similar to Extreme's EAPS, a layer 2 ring protocol heavily promoted by Extreme.

Downsides to ESRP:

  • Extreme proprietary protocol. To me this is not a big deal, as you only run this protocol at the core; downstream switches can be from any vendor.
  • Perceived complexity due to the wide variety of options, but those options are just that: optional. Basic configurations should work fine for most people, and it is simple to configure.
  • The default election algorithm includes port weighting, which can be good or bad depending on your point of view. Port weighting means that if you have an equal number of active links of the same speed on each core switch and the master switch has a link go down, the network will fail over. If you have non-switches connected directly to the core (e.g. a firewall), I will usually disable port weighting on those specific ports so I can reboot the firewall without causing the core network to fail over. I like port weighting myself, viewing it as the network trying to maintain its highest level of performance/availability. That is, who knows why that port went down: bad cable? bad ASIC? bad port? Fail over to the other switch that has all of its links in a healthy state.
  • Not applicable to all network designs(is anything?)

The optimal network configuration for ESRP is very simple: two core switches cross connected to each other (with at least two links) and a number of edge switches, each edge switch having at least one link to each core switch. You can have as few as three switches in your network, or several hundred; the maximum today, I think, is something like 760 switches using high-density 1GbE ports on a Black Diamond 8900, plus 8x1Gbps ports for the cross connect.

ESRP Mesh Network Design

ESRP Domains

ESRP uses a concept of domains to scale itself. A single switch is master of a particular domain, which can include any number of vlans up to 3000. Health packets are sent for the domain itself rather than for the individual vlans, which dramatically simplifies things and makes them more scalable at the same time.

This does mean that if there is a failure in one vlan, all of the vlans for that domain will fail over, not just that one vlan. You can configure multiple domains if you want; I configure my networks with one domain per ESRP instance. Multiple domains can come in handy if you want to distribute the load between the core switches. A vlan can be a member of only one ESRP domain (I expect; I haven't tried to verify).
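To sketch the load-distribution idea: give each core switch the higher priority for a different domain. The domain and vlan names here are hypothetical, and the command style mirrors the example later in this post; core switch A would look something like:

create esrp esrp-a
config esrp-a add master vlans-a
config esrp-a priority 100
create esrp esrp-b
config esrp-b add master vlans-b
config esrp-b priority 50
enable esrp

Core switch B gets the same commands with the two priorities swapped, so each switch is master of one domain and backup for the other.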

Layer 2 loop prevention

The way ESRP loop prevention works is that the links going to the slave switch are placed in a blocking state, which eliminates the need for downstream protocols and allows you to support even unmanaged switches transparently.

Layer 3 fault tolerance

Layer 3 fault tolerance in ESRP operates in two different modes depending on whether or not the downstream switches are Extreme. By default it assumes they are; you can override this behavior on a per-port basis. In an all-Extreme network, ESRP uses EDP (Extreme Discovery Protocol, similar to Cisco's CDP) to inform downstream switches that the core has failed over and that they should flush their forwarding entries for the core switch.

If the downstream switches are not Extreme switches and you leave the core switch in its default configuration, it will likely take some time (seconds to minutes) for those switches to expire their forwarding table entries and discover the network has changed.

Port Restart

If you know you have downstream switches that are not Extreme, for best availability I suggest configuring the core switches to restart the ports those switches are on. Port restart is an ESRP feature that causes the core switch to reset the links on the ports you configure, to force those switches to flush their forwarding tables. This process takes more time than in an Extreme-only network, but in my own tests, specifically with older Cisco layer 2 switches, F5 BigIP v9, and Cisco PIX, it takes less than one second (if you have a ping session going when you trigger a failover event, rarely is a ping lost).
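To sketch what that looks like, following the command style used later in this post: enabling port restart on two uplink trunks would be something like the line below. I'm writing the syntax from memory, so treat it as an assumption and verify against the CLI reference for your code rev.

config esrp-prod ports restart 5,7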

Host attached ports

If you are connecting devices like a load balancer, or a firewall directly to the switch, you typically want to hand off loop prevention to those devices, so that the slave core switch will allow traffic to traverse those specific ports regardless of the state of the network. Host attached mode is an ESRP feature that is enabled on a per-port basis.

Integration with ELRP

ESRP does not protect you from every type of loop in the network; by design it is intended to prevent a loop from occurring between an edge switch and the two core switches. If someone plugs an edge switch back into itself, for example, that will still cause a loop.

ESRP integrates with another Extreme-specific protocol named ELRP, the Extreme Loop Recovery Protocol. Again, I know of no other protocol in the industry that is similar; if you do, let me know.

What ELRP does is send packets out on the ports you configure and count the responses; if there are more than it expects, it sees that as a loop. There are three modes to ELRP (this is getting a bit off topic, but it is still related). The simplest is one-shot mode, where ELRP sends its packets once and reports. The second is periodic mode, where you configure the switch to send packets periodically (I usually use 10 seconds or so), and it will log if loops are detected (it tells you specifically which ports the loops are originating on).

The third mode is integrated mode, which is how it relates to ESRP. Myself, I don't use integrated mode, and I suggest you don't either, at least if you follow an architecture like mine. What integrated mode does is, when a loop is detected, tell ESRP to fail over, in the hope that the standby switch has no such loop. In my setups the entire network is flat, so if a loop is detected on one core switch, chances are extremely (no pun intended) high that the same loop exists on the other switch, so there's no point in trying to fail over. But I still configure all of my Extreme switches (both edge and core) with ELRP in periodic mode, so if a loop occurs I can track it down more easily.
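The periodic mode configuration is small. This is a sketch from memory, so check the exact parameters (especially the interval and logging options) against the CLI reference for your code rev; the vlan name here matches the example below:

enable elrp-client
configure elrp-client periodic webservers ports all interval 10 log

One-shot mode is the same idea as a single on-demand command, something like configure elrp-client one-shot webservers ports all.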

Example of an ESRP configuration

We will start with this configuration:

  • A pair of Summit X450A-48T switches as our core
  • 4x1Gbps trunked cross connects between the switches (on ports 1-4)
  • Two downstream switches, each with 2x1Gbps uplinks on ports 5-6 and 7-8 respectively, which are trunked as well.
  • One VLAN named "webservers" with a tag of 3500 and an IP address of
  • An ESRP domain named esrp-prod

The non ESRP portion of this configuration is:

enable sharing 1 grouping 1-4 address-based L3_L4
enable sharing 5 grouping 5-6 address-based L3_L4
enable sharing 7 grouping 7-8 address-based L3_L4
create vlan webservers
config webservers tag 3500
config webservers ipaddress
config webservers add ports 1,5,7 tagged

What this configuration does

  • Creates a port sharing group(802.3ad) grouping ports 1-4 into a virtual port 1.
  • Creates a port sharing group(802.3ad) grouping ports 5-6 into a virtual port 5.
  • Creates a port sharing group(802.3ad) grouping ports 7-8 into a virtual port 7.
  • Creates a vlan named webservers
  • Assigns tag 3500 to the vlan webservers
  • Assigns the IP with the netmask to the vlan webservers
  • Adds the virtual ports 1,5,7 in a tagged mode to the vlan webservers

The ESRP portion of this configuration is:

create esrp esrp-prod
config esrp-prod add master webservers
config esrp-prod priority 100
config esrp-prod ports mode 1 host
enable esrp

The only difference between the master and the slave is the priority. From 0-254, a higher number is a higher priority; 255 is reserved for putting the switch in standby state.

What this configuration does

  • Creates an ESRP domain named esrp-prod.
  • Adds a master vlan to the domain; I believe the master vlan carries the control traffic
  • Configures the switch for a specific priority [optional - I highly recommend doing it]
  • Enables host attach mode for port 1, which is a virtual trunk for ports 1-4. This allows traffic for potentially other host attached ports on the slave switch to traverse to the master to reach other hosts on the network. [optional - I highly recommend doing it]
  • Enables ESRP itself (you can use the command show esrp at this point to view the status)
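For completeness, the slave core switch gets the identical ESRP configuration apart from the priority; for example (the value 90 is arbitrary, anything lower than the master's 100 works):

create esrp esrp-prod
config esrp-prod add master webservers
config esrp-prod priority 90
config esrp-prod ports mode 1 host
enable esrp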

Protecting additional vlans with ESRP

It is a simple one-line command on each core switch. Extending the example above, say you added a vlan named appservers with its associated parameters and wanted to protect it; the command is:

config esrp-prod add member appservers

That's it.
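For reference, the "associated parameters" for the new vlan mirror the webservers example; something like the following, where the tag value 3600 is purely illustrative:

create vlan appservers
config appservers tag 3600
config appservers add ports 1,5,7 tagged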

Gotchas with ESRP

There is only one gotcha I can think of offhand that is specific to ESRP. I believe it is a bug; I reported it a couple of years ago (on a code rev earlier than the current 12.3.x) and I don't know if it is fixed yet. If you are using port-restart-configured ports on your switches and you add a vlan to your ESRP domain, those links will get restarted (as expected); what is not expected is that this causes the network to fail over, because for a moment port weighting kicks in, detects link failure, and forces the switch to a slave state. I think the software could be aware of why the ports are going down and not transition to a slave state.

Somewhat related, again with port weighting: if you are connecting a new switch to the network and you happen to connect it to the slave switch first, port weighting will kick in, since the slave switch now has more active ports than the master, and will trigger ESRP to fail over.

The workaround to this, and in general a good practice anyways with ESRP, is to put the slave switch in a standby state when you are doing maintenance on it; this will prevent any unintentional network failovers from occurring while you're messing with ports/vlans etc. You can do this by setting the ESRP priority to 255. Just remember to put it back to a normal priority after you are done. Even in a standby state, ports that are in host attached mode (again, e.g. firewalls or load balancers) are not impacted by any state changes in ESRP.
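Using the domain from the example above, the maintenance procedure is just (assuming 100 was the switch's normal priority):

config esrp-prod priority 255

and when you are done:

config esrp-prod priority 100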

Sample Modern Network design with ESRP


  • 2 x Extreme Networks Summit X650-24t with 10GbaseT for the core
  • 22 x Extreme Networks Summit X450A-48T, each with an XGM2-2xn expansion module providing 2x10GbaseT uplinks, for 1,056 ports of highest-performance edge connectivity (optionally select the X450e for lower, or the X350 for lowest, cost edge connectivity; feel free to mix and match, all of them use the same 10GbaseT uplink module).

Cross connect the X650 switches to each other using 2x10GbE links over CAT6A UTP cable. Connect each of the edge switches to each of the core switches with CAT5e/CAT6/CAT6a UTP cable. Since we are working at 10Gbps speeds, no link aggregation/trunking is needed at the edge (aggregation is still used between the core switches), simplifying configuration even further.
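The core cross connect aggregation mirrors the sharing command from the configuration example above; assuming the two 10GbE ports on each X650 are 25 and 26 (the port numbers are illustrative), it would be roughly:

enable sharing 25 grouping 25-26 address-based L3_L4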

Is a thousand ports not enough? Break out the 512Gbps stacking for the X650, add another pair of X650s, and your configuration changes to include:

  • Two pairs of 2 x Extreme Networks X650-24t switches in stacked mode with a 512Gbps interconnect(exceeds many chassis switch backplane performance).
  • 46 x 48-port edge switches providing 2,208 ports of edge connectivity.

Two thousand ports not enough, really? You can go further, though the stacking interconnect performance drops in half; add another pair of X650s and your configuration changes to include:

  • Two pairs of 3 x Extreme Networks X650-24t switches in stacked mode with a 256Gbps interconnect(still exceeds many chassis switch backplane performance).
  • 70 x 48-port edge switches providing 3,360 ports of edge connectivity.

The maximum number of switches in an X650 stack is eight. My personal preference with this sort of setup is not to go beyond three. There's only so much horsepower to do all of the routing and such, and when you're talking about having more than three thousand ports connected, I just feel more comfortable with a bigger switch beyond that point.

Take a look at the Black Diamond 8900 series switch modules on the 8800 series chassis. It is a more traditional, chassis-based core switch. The 8900 series modules are new, providing high-density 10GbE and even high-density 1GbE (96 ports per slot). It does not support 10GbaseT at the moment, but I'm sure that support isn't far off. It does offer a 24-port 10GbE line card with SFP+ ports (there is an SFP+ variant of the X650 as well). I believe the 512Gbps stacking between a pair of X650s is faster than the backplane interconnect on the Black Diamond 8900, which is between 80-128Gbps per slot depending on the size of the chassis (this performance is expected to double in 2010). While the backplane is not as fast, the CPUs are much faster, and there is a lot more memory for routing/management tasks, than is available on the X650.

The upgrade process for going from an X650-based stack to a Black Diamond-based infrastructure is fairly straightforward. They run the same operating system and use the same configuration files. You can take down your slave ESRP switch, copy the configuration to the Black Diamond, re-establish all of the links, and then repeat the process with the master ESRP switch. You can do this all with approximately one second of combined downtime.

So I hope, in part with this posting, you can see what draws me to the Extreme portfolio of products. It's not just the hardware or the lower cost, but the unique software components that tie it all together. In fact, as far as I know Extreme doesn't even make their own network chipsets anymore. I think the last one was in the Black Diamond 10808 released in 2003, a high-end architecture they call programmable ASICs (I suspect that means high-end FPGAs, but I'm not certain). They primarily (if not exclusively) use Broadcom chipsets now. They've used Broadcom in their Summit series for many years, but their decision to stop making their own chips is interesting in that it lowers their costs quite a bit. And their software is modular enough to adapt to many configurations (e.g. the Black Diamond 10808 uses dual Pentium III CPUs, while the Summit X450 series uses ARM-based CPUs, I think).

TechOps Guy: Nate

Comments (17) Trackbacks (6)
  1. Just curious.. why not just stack the core and edge as much as possible. The core could just be two or more 650s stacked and the edge could be several stacked 450s uplinked to the core switches (same logical switch). You get the added benefit of 802.3ad across the two physical core switches since they are acting as one. Perhaps I’m missing something though.

    I guess what I’m curious about is the pro/con of running a stacked core in comparison exclusively (one large logical switch) to multiple independent core switches (not stacked using layer 2-3 failover protocols). Seems like you could really simplify the topology using stacked exclusively in the core as long as hitless upgrades as some vendors call them can be performed. Nice article and I post because my team has recently had some pre-sales discussions with Extreme.

  2. I don’t trust a single stack. I really don’t like stacking period, but having a pair of stacks would be passable for a HA core. A single stack wouldn’t be sufficient for an HA core. The integration is too tight, doesn’t take too much to (in theory) take down an entire stack. Also frequently with major software upgrades the entire stack has to be taken off line to re-negotiate the links. Using something like ESRP which is OS independent because it doesn’t rely on the integration that stacking provides gives much higher availability, in my mind anyways.

    I wouldn’t want a single stack any more than I would want a single chassis switch(even with redundant modules). Well a single chassis switch has better availability than a stack, but I wouldn’t promote any solution myself that consisted of a single chassis switch any more than I would a single stack, at least for the core. You certainly could go with stacked switches at the edge and reduce the # of management devices but the system is so easy to manage as it is I don’t see a lot of benefit, and again there’s a bigger chance (theoretically anyways) of issues with the stacking.

    Also, some Extreme switches have an Extreme-specific security system called ClearFLOW(probably should write about that at some point), which basically allows the switch to do somewhat deep packet inspection on every port at line rate and respond in real time to certain threats. This functionality is not available for switches operating in a stack. It used to be restricted to their biggest chassis switches I think because of the CPU requirements, but they re-introduced it into their Summit series in the past couple of years.

    Stacking is certainly a valid config for many situations, it just depends on the needs, and the scale that your at/think you might get to.

  3. Hy,

    I use ESRP in our network and one strange thing that i didnt understand. If the neighbor switch is offline my esrp domain goes to neutral.. ? should it not go to master ? hopefully some one can give me a hint…

    thanx’s a lot

  4. From what I see in the docs a switch is in neutral state if esrp is not initialized, have you run ‘enable esrp’ on both switches?

    If you want you can email me your esrp config along with a sample cabling diagram(or description) and the output of the show esrp command and I can take a look. email address you can use is – blog (at) techopsguys (dot) com

  5. Excellent article. I have primarily Cisco and HP experience and have done some L2 implementations of Extremeware switches. Your article suggests that downstream switches (Extreme or others) can be run without any STP or L2 loop prevention features. Will it make sense to not have STP on the edge switches, but at least have some sort of L2 loop prevention (like keepalive in Cisco that sends out L2 packet every ten seconds and expects to receive it back on same port, or loop-protection in HP that uses multicast packet and if two ports are short circuited on same switches, then it shuts down those two to insulate network from them.

    Further, assuming Extreme switches capable of running OSPF (advanced edge), is there any thing to be careful about running OSPF with blade server chassis with blade switches, so that automatic failover can happen? I will though like to use ECMP OSPF, if this is supported, so that vlan based load balancing can happen for different subnets.

    Thanks and keep up the good work.

  6. Thanks for the kind comment!

    For Extreme, yes, you can build networks entirely without STP, whether you’re using Extreme at the core, or in some cases at the edge (without an Extreme core).

    At the core there are 3 main options I am aware of to run an entirely STP-free network(with Extreme gear at either the core or edge or both)

    - ESRP – my preferred method and what the article was about, this protocol runs at the core – typically active/passive, and downstream switches can be Extreme, Cisco, HP, any brand any type, managed or unmanaged. Layer 2 or Layer 3 or both.
    - EAPS – this is a layer 2 ring protocol (there seem to be several implementations of this design from various manufacturers). Designed for metro area networks I suppose you could say, this can work in data centers as well though the configuration is more complex since it is layer 2 only. EAPS is a “industry standard” (accepted by some standards body, forgot which), and there are a couple other manufacturers out there (I don’t recall names, I think they are names associated with more service provider switching rather than enterprise/data center) that have implemented EAPS.
    - M-LAG – It makes a pair (or perhaps more? I’m not sure) of core switches act as one logical switch from a layer 2 perspective. The downstream devices can use their uplinks in an active-active fashion and the core switches prevent the loop from occurring. This, like ESRP requires no downstream switch support, any device that supports regular link aggregation can use M-LAG, the downstream device does not know (nor does it need to) that M-LAG is in use since the technology is transparent. This is layer 2 only as well. So you still need VRRP or ESRP for layer 3.

    EAPS and ESRP are 10+ years old at this point with MLAG being introduced to Extreme in the past 1.5 years, and TRILL still seems to be fairly bleeding edge. I believe Brocade/Foundry uses TRILL on their VCS product line, and Arista and Force10 recently announced(I believe it was recently) TRILL support.

    I’m not too interested in wide scale deployment of M-LAG myself primarily because I am worried about the bandwidth in between the two core switches, at this point in time I’d rather keep my network active/passive with ESRP and have full line rate connectivity between every up link on the core switch. I haven’t dealt with a chassis switch(one of the reasons I like Extreme is they are good at deploying layer 3 support across the board) in many years so I don’t have the benefit of hitless fail over during things like software upgrades or something(I think stacking addresses that to some point).

    Another protocol, which as far as I know Extreme does not yet support is TRILL. Which is a larger scale protocol which kind of reminds me of STP really – however in TRILL all links are active-active. I’m not yet sold on TRILL myself, not the protocol specifically but just having so many devices integrated together like that in a full active mesh feels significantly more risky than a more traditional active/passive design. It’s a mental barrier I’ll have to try to get over at some point. TRILL is layer 2 only as well.

    As for OSPF. Equal cost multi path is certainly supported with Extreme. I have limited OSPF experience myself(I used it on the black diamond 10k back in 2005 – though I had consultants do the initial configuration) on Extreme (and no OSPF experience on other platforms).

    My networks are smaller scale so I stick to static routing (on the switches) and just design the subnets so that routing is easy and I don’t have to add static routes to the switches often. As for being careful, myself I would not use OSPF at the edge because of the longer convergence times. When VRRP and ESRP can fail over in less than a second, OSPF can typically take 10 to 20 seconds or perhaps more (depending on whether you change the default timer settings – and there are warnings about changing the defaults, which can cause route flapping and such if you’re not careful and don’t really know what you’re doing).

    I don’t anticipate being in a situation where I would need a “real” routing protocol like OSPF or BGP or whatever in the future, keeping to clean network design makes things a lot simpler, and I outsource the routing of the internet connection to my upstream ISP who can do a lot better job at it than I can. I was at a company not too long ago where they did BGP internally (as in managed internally – BGP was used for internet connectivity at two different sites — I wasn’t responsible for the network there they had dedicated staff to handle it), and it caused more problems than it was worth, things got a lot better when we got rid of our BGP routers and let our ISP(Internap at the time) handle it.

    Here’s a basic diagram of the network I designed back in 2004/2005 which involved using OSPF, Extreme’s virtual router capability as well as leveraging the 128,000 ACL capacity of the Black Diamond 10k switches for greater isolation between environments within a single layer 2 domain while maintaining line rate performance.

    From a physical network perspective the design was unusual for the time(maybe still is for now) since the design involved using just two core switches and carving them up into multiple switches. We used cables to link the virtual switches together, which under normal circumstances would cause a network loop if you plugged a cable into two different ports of the same physical switch, but due to the logical isolation it was the only way(at the time – not sure if that is changed now I don’t think it is) to get connectivity between virtual switches – and of course there were no loops. OSPF was used on all virtual routers but thinking back we only really needed it on the firewalls, since I had the firewalls in layer 2 bridging (active-active) mode I let OSPF handle the fault tolerance, if a firewall went down then the route would go down and the switch would fail over to the other route, which had a different firewall. Even though the firewalls were active active I didn’t trust the state replication to be real time enough to actually use them in true active-active form, in the event state replication took a few extra milliseconds causing traffic to get dropped because the other firewall wasn’t aware of the connection. The firewalls were OpenBSD.

    In my new network that is going to be deployed next month I took even greater care with the network design, I have a mixture of mostly /24s and a couple /22s, and encompass those within larger /20s, with the entire “facility” within a /16, to make things like routing, access control etc much simpler, than I’ve even done in the past.

  7. Hi
    I am configuring ELRP on a Extreme summit, on edge ports.

    I would like to know if it is good or not to activate it on trunk ports.
    Another question: do you know if ELRP packets will be sent on blocking ports ?

  8. I enable it on all ports on a per-VLAN basis. It looks like you can specifically exclude certain ports for ELRP, default is on all ports. I remember a bug in XOS many years ago that I reported where ELRP would falsely identify loops present on the ESRP slave switch at least on the platform I was using at the time X450A, that was fixed probably 3-4 years ago though. There was no harm in the bug other than filling up the logs.

    I haven’t used ELRP in its new capability where it can shut off the port that has a loop for a few seconds to see if that solves the loop. Prior to that feature some folks would use CLI scripting to accomplish the same goal. I’m still old school; loops in my networks are rare enough that I just keep it in logging mode so I can get alerted and investigate manually.

  9. Hi,
    I have some other question – does anyone knows what exact modules of Broadcom chipsets are used in Summit X650 switches ??

    I have problems with FDB, that is already ~50% full (16k of 32k) but for a long time I’am observing that there are some MAC addresses that don’t get into FDB so traffic is flooded.
    Just a few “hardware table full” messages recently, no other information…

  10. I am not sure no — I suggest you file a support ticket. Are you sure that it is the L2 FDB that is getting full and not the L3 ?

    In doing a little research looking at the iparp commands, it appears the default setting for a virtual router is a max of 8192 iparp entries (configure iparp max_entries). The absolute maximum is 20,480. I believe this mainly applies only to VLANs with a routing interface on them – but this is beyond my own personal experience for scale.

    It’s also possible there is an uneven distribution of FDB entries, perhaps one or more of the broadcom chips is overloaded while the others are not — you can use the command show fdb stats ports all to see how many fdb entries there are on each port.

  11. Hi,

    I am implementing a very similar setup, but my downstream switches are 2 z9000 connected to each other with VLT so they act as one switch. Sometimes when one of them is restarted, the x670s loose contact with whatever is downstream. I have noted that one of the lacp pairs from x670v to the restarted z9000 stayed down after the restart, shutting it down and and re enabling fixed lacp but did not restore communication.

    Tcpdump in the hosts shows that the x670 sends arp requests to the host, the host replies, but apparently(?) the reply does not reach the x670.

    Normally my esrped VLAN will be up on the master and down on the slave, but when the problem occurs it is up on both (as seen in show esrp detail).

    If I disable both ports in the z9000 that I just rebooted to trigger the problem, communication is restored. If I re enable them, the problem comes back.

    any ideas?

  12. That is not an issue I have heard of before – what version of XOS? Are both Z9000s connected to both X670Vs ? Can you provide me with the basic lacp, esrp and port configurations (blog (at) techopsguys #dot# com)? I’ve never worked with Force10 before though can’t imagine there being a compatibility issue.

    Also have you filed a support request with either vendor?

  13. Figured it out – thnx for the hints Nate. While the z9000s are one switch as far as the x670s are concerned, the x670s are 2 different switches in active/passive, so 2 separate lags (port channels) have to be created in each switch-originally I was using 1 per switch. Each LAG has to contain the port(s) that goes to one particular 670-the other member(s) of the lag will come from the other z9000.


  14. I do not even know how I ended up here, but I thought this post was great.

    I do not know who you are but definitely you are going to a famous
    blogger if you are not already ;) Cheers!

  15. Dimitri –
    Phew, glad to hear it!

    thanks for reading :)

    Thanks visual kei!

  16. Hi,

    I use ESRP in our network and one strange thing that i didnt understand. If the neighbor switch(master or slave) is offline , my esrp domain goes to neutral.. ? should it not go to master ? Business interruption,hopefully some one can give me a hint…

  17. feel free to contact me directly (email address on right side of page) – need network config, diagram etc.