robert at raszuk
May 4, 2012, 11:18 AM
Post #4 of 6
> it isn't quite that easy. I had never heard of the diverse-path
> feature on Cisco RRs before, but looking at your link it seems to
> have this probably-limiting restriction in most setups: 'Path
> diversity is configured within an AS, within a single RR cluster.
> That is, the RR will advertise the diverse path to its RR client
> peers only.'
Well, all it says is that you can enable it towards your RR clients.
The case becomes appealing where you already have more than one path
on the RRs and would like your clients to receive more than the
overall best path.
For example, clients may be configured with best-external, allowing
them to advertise their external path towards the RR mesh even if the
overall best is an iBGP-learned one.
> In case you have one RR cluster per datacenter and multiple DNS
> anycast servers per datacenter, only the best path per datacenter
> will be distributed to the iBGP full-mesh and only the local DC
> routers will know about local multiple paths.
True, but let's discuss this a bit. The entire point of anycasting is
not to advertise multiple paths. The anycast address is an identical
address (or BGP next hop) advertised simultaneously from more than one
location. It is the IGP that takes on the burden of distributing the
load (or switching over to alternative servers when one goes down).
I am really not clear what anycast "paths" are, or why you would
actually end up with more than one per data center. In fact I would go
the opposite way: I would set up the anycast to cover more than one DC
(just to protect against entire-DC failures). Then each prefix
advertised by any DC would have the anycast address as next hop. The
IGP would, however, know how to reach the closest live such next hop
from any point in the network.
I would argue that you do not need anything in BGP for this functionality.
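To make the "IGP does the work" point concrete, the usual recipe is to
put the shared anycast address on a loopback of every router that
fronts a DNS instance and inject it into the IGP as a /32. A rough
sketch (192.0.2.53 is just a documentation-range placeholder, and OSPF
stands in for whatever IGP you run):

    interface Loopback1
     description shared DNS anycast address (same on every instance)
     ip address 192.0.2.53 255.255.255.255
    !
    router ospf 1
     network 192.0.2.53 0.0.0.0 area 0

When a server dies, that router stops advertising the /32 (e.g. via a
route health-injection mechanism tied to the service) and the IGP
converges to the next closest instance.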
> In case the backbone
> routers connected to the DC can directly reach all DC routers, only
> one of the DNS anycast servers will be contacted (assuming the
> anycast servers are connected to different DC distribution routers).
> So no traffic balancing will happen for traffic coming from your
> backbone-routers (part of the full mesh).
As mentioned, load balancing will be done at the IGP level, most
likely ECMP (unless you have some tunneling/encapsulation in place
that would allow unequal-cost balancing as well).
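In IOS terms the ECMP part is just the IGP's maximum-paths knob; with
something like the sketch below (again OSPF as a stand-in for your
IGP), up to four equal-cost routes to the anycast /32 are installed
and CEF spreads the flows across them:

    router ospf 1
     maximum-paths 4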
> If you use a global RR cluster for all datacenters, even traffic
> distribution across several datacenters won't happen if your setup
> includes full-meshed iBGP peers.
> So it's not only a matter of turning that feature on on your RRs:
> you'll have to consider how your RR clusters are set up and how they
> are placed in your topology (for anycast it is more or less the same
> as trying to get BGP-based multipathing to work in an RR environment).
I agree that you need to know your RR topology and the goals.
Typically, diverse-path can be used in all cases where your RRs have
more than one path for each BGP net. There are many ways to achieve
that, but I think this may need more bandwidth than email :)
> Or did I miss something?
> Cheers, Matthias
> On Fri, 04 May 2012 17:47:39 +0200, Robert Raszuk <robert [at] raszuk> wrote:
>> Hi Henry,
>>> Currently we have issues with the RR (Only select the main route)
>> That's an easy one to solve :)
>> Try using either add-paths or diverse-path on the RR. The latter is
>> much easier, as it does not require upgrading all of your BGP
>> speakers!
>> Best, R.
>>> We want to work with DNS servers that span geographies. Our DNS
>>> servers share the same IP. We need to configure the backbone
>>> (BGP) to distribute this IP (anycast). Could you give any
>>> examples of how to deploy anycast? Currently we have issues with
>>> the RR (it only selects the main route).
>>> Thanks a lot!
>>> Henry
cisco-nsp mailing list
cisco-nsp [at] puck
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/