
Mailing List Archive: Cisco: NSP

Anycast//DNS - BGP

 

 



henry.huaman at yahoo

May 4, 2012, 8:19 AM

Post #1 of 6
Anycast//DNS - BGP

We want to deploy DNS servers that are geographically dispersed. Our DNS servers share the same IP address.
We need to configure BGP on the backbone to distribute this IP (anycast).
Do you have any examples of how to deploy anycast?
Currently we have an issue with the RR (it only selects the main route).

Thanks a lot!

Henry
_______________________________________________
cisco-nsp mailing list cisco-nsp [at] puck
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


robert at raszuk

May 4, 2012, 8:47 AM

Post #2 of 6
Re: Anycast//DNS - BGP

Hi Henry,

> Currently we have an issue with the RR (it only selects the main route).

That's an easy one to solve :)

Try using either add-paths or diverse-path on the RR. The latter is much
easier, as it does not require an upgrade of all of your BGP speakers!

http://goo.gl/KDjlg
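
A rough sketch of what diverse-path looks like on an IOS/IOS-XE route
reflector (the AS number and neighbor address are illustrative and the
exact syntax can vary by release, so please check the feature guide):

   router bgp 65000
    address-family ipv4 unicast
     ! compute a second-best (backup) path in addition to the best path
     bgp additional-paths select backup
     ! advertise the diverse (backup) path to this RR client
     neighbor 192.0.2.11 advertise diverse-path backup

The client needs no new BGP capabilities, which is why it is easier to
deploy than add-paths.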

Best,
R.



cnsp at matthias-mueller

May 4, 2012, 10:40 AM

Post #3 of 6
Re: Anycast//DNS - BGP

Hi,

it isn't quite that easy. I had never heard of the diverse-path feature for Cisco RRs before, but judging from your link it has a restriction that will probably be limiting in most setups:
'Path diversity is configured within an AS, within a single RR cluster. That is, the RR will advertise the diverse path to its RR client peers only.'

If you have one RR cluster per datacenter and multiple DNS anycast servers per datacenter, only the best path per datacenter will be distributed into the iBGP full mesh, and only the local DC routers will know about the multiple local paths. If the backbone routers connected to the DC can reach all DC routers directly, only one of the DNS anycast servers will be contacted (assuming the anycast servers are connected to different DC distribution routers). So no traffic balancing will happen for traffic coming from your backbone routers (which are part of the full mesh).

If you use a global RR cluster for all datacenters, even traffic distribution across several datacenters won't happen if your setup includes full-meshed iBGP peers.

So it's not just a matter of turning that feature on on your RRs; you also have to consider how your RR clusters are set up and where they sit in your topology (for anycast it is more or less the same problem as getting BGP-based multipathing to work in an RR environment).
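
On the multipathing point, note that a receiving router will only install
more than one of the paths it learns if iBGP multipath is enabled; a
minimal sketch (AS number illustrative):

   router bgp 65000
    address-family ipv4 unicast
     ! install up to 4 equal iBGP paths for the same prefix
     maximum-paths ibgp 4

Without this, even a router that receives several copies of the anycast
prefix will still forward to only one of them.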

Or did I miss something?

Cheers,
Matthias



robert at raszuk

May 4, 2012, 11:18 AM

Post #4 of 6
Re: Anycast//DNS - BGP

Hi Matthias,

> it isn't quite that easy. I had never heard of the diverse-path
> feature for Cisco RRs before, but judging from your link it has a
> restriction that will probably be limiting in most setups: 'Path
> diversity is configured within an AS, within a single RR cluster.
> That is, the RR will advertise the diverse path to its RR client
> peers only.'

Well, all it says is that you can enable it towards your RR clients.

The case becomes appealing when you already have more than one path on
the RRs and would like your clients to receive more than just the overall
best path.

For example, clients may be configured with best-external, allowing them
to advertise an external path towards the RR mesh even if the overall
best is the iBGP-learned one.

> In case you have one RR cluster per datacenter and multiple DNS
> anycast servers per datacenter, only the best path per datacenter
> will be distributed to the iBGP full-mesh and only the local DC
> routers will know about local multiple paths.

True, but let's discuss this a bit. The entire point of anycasting is
not to advertise multiple paths. The anycast address is an identical
address (or BGP next hop) advertised simultaneously from more than one
location. It is the IGP that takes on the burden of distributing the load
(or switching over upon failure) to alternative servers when one goes down.

I am really not clear what anycast "paths" are, or why you would actually
end up with more than one per data center. In fact I would go the
opposite way: I would set up the anycast to cover more than one DC (just
to protect against entire DC failures). Then each prefix advertised by
any DC will have the anycast address as next hop. The IGP, however, will
know how to reach the closest live such next hop from any point in the
network.

I would argue that you do not need anything in BGP for this functionality.
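
To make that concrete, a sketch of an anycast DNS site done purely at the
IGP level (all addresses, interface names and process numbers are
illustrative, and the ICMP probe stands in for a proper DNS health check):

   ! probe the local DNS server
   ip sla 10
    icmp-echo 10.10.10.53 source-interface GigabitEthernet0/1
   ip sla schedule 10 life forever start-time now
   track 10 ip sla 10 reachability
   !
   ! static route to the anycast /32, withdrawn if the probe fails
   ip route 192.0.2.53 255.255.255.255 10.10.10.53 track 10
   !
   router ospf 1
    redistribute static subnets route-map ANYCAST-ONLY
    maximum-paths 4
   !
   ip prefix-list ANYCAST seq 5 permit 192.0.2.53/32
   route-map ANYCAST-ONLY permit 10
    match ip address prefix-list ANYCAST

Each site originates the same /32, the IGP routes clients to the closest
live site, and ECMP takes care of load sharing where costs are equal.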

> In case the backbone
> routers connected to the DC can directly reach all DC routers, only
> one of the DNS anycast servers will be contacted (assuming the
> anycast servers are connected to different DC distribution routers).
> So no traffic balancing will happen for traffic coming from your
> backbone-routers (part of the full mesh).

As mentioned, the load balancing will be done at the IGP level, most
likely via ECMP (unless you have some tunneling/encapsulation in place,
which would allow unequal-cost load sharing as well).

> If you use a global RR cluster for all datacenters, even traffic
> distribution across several datacenters won't happen if your setup
> includes full-meshed iBGP peers.

see above.

> So it's not only turning that feature on on your RRs, but you'll have
> to consider how your RR-clusters are setup and how they are placed in
> your topology (for anycast it is more or less the same as trying to
> get BGP-based multipathing to work in an RR environment).

I agree that you need to know your RR topology and your goals. Typically
diverse-path can be used in all cases where your RRs have more than one
path for each BGP net. There are many ways to achieve that, but I think
this may need more bandwidth than email :)

best,
r.




adam.vitkovsky at swan

May 6, 2012, 11:43 PM

Post #5 of 6
Re: Anycast//DNS - BGP

Hi Henry,

Please note that even though the diverse-path RR feature is easy to
deploy, there is one possible drawback to take into consideration.
Once configured with the diverse-path feature, the RR will start to
advertise all paths for all prefixes to all its clients, so you should
make sure all the RR clients can cope with the table growth (depending on
the number of prefixes and the number of alternate paths per prefix).
Please note that this will affect all the BGP speakers, not just those
clients that really need the diverse paths for load-sharing or anycasting
purposes.

If you are using MPLS in your backbone, you would definitely want to use
a different RD per BGP speaker to make the prefix unique from the RR's
perspective.
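
For illustration, a unique RD per PE for the same VRF might look like this
(VRF name, AS and RD values are made up):

   ! on PE1
   ip vrf DNS
    rd 65000:101
    route-target both 65000:100

   ! on PE2 - same route-target, different RD
   ip vrf DNS
    rd 65000:102
    route-target both 65000:100

With identical RDs the RR would run best-path selection across the two
advertisements and reflect only one of them; with distinct RDs both VPNv4
prefixes survive.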

Matthias,
What I used for anycast or multipath in pure-IP BGP RR environments,
prior to the diverse-path and add-path features, was adding a path-based
RR plane on top of the existing address-based or topology-based RRs.
So for 6 paths you would have 6 RRs, each choosing the same prefix with a
different path as best and passing it down to all clients, and you would
have the clients peer with all 6 RRs.
Please note these were pure control-plane RRs.
As you can see, this does not scale well, since a separate RR
infrastructure needs to be built for each diverse path that has to be
propagated to the RR clients.


adam



robert at raszuk

May 7, 2012, 12:50 AM

Post #6 of 6
Re: Anycast//DNS - BGP

Hi Adam,

> Hi Henry,
>
> Please note that even though the diverse-path RR feature is easy to deploy
> there's one possible drawback to be taken into consideration
> Once configured with the diverse-path feature the RR will start to advertise
> all paths for all prefixes to all its clients

Completely wrong. You are mixing up diverse-path with add-paths - two
completely different features.

With diverse-path you are sending _only_ the second-best path, on the
session dedicated to the second best. The benefit is that you control the
number of paths on your clients very well and do not need to upgrade the
clients, yet you can still benefit from BGP-level load balancing as well
as PIC.

> -so you should make sure all
> the RR clients can cope with the table growth (depending on the # of
> prefixes and the # of alternate paths per each prefix) -please note that
> this will affect all the bgp speakers not just those clients that really
> need the diverse paths for loadsharing or anycasting purposes

Even with add-paths, the number of paths that need to be advertised can
be chosen via the CLI.
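
For example, the RR could be limited to advertising only the two best
paths per prefix, roughly as follows (illustrative AS number and neighbor
address; check the add-paths documentation for the exact syntax on your
release):

   router bgp 65000
    address-family ipv4 unicast
     bgp additional-paths send receive
     ! compute and mark up to 2 best paths per prefix
     bgp additional-paths select best 2
     neighbor 192.0.2.11 additional-paths send
     neighbor 192.0.2.11 advertise additional-paths best 2

Unlike diverse-path, this does require add-paths support on the receiving
clients.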

> If you are using mpls in your backbone you'd definitely want to be using
> different RD per each bgp speaker to make the prefix unique from RR's
> perspective

An RD per VRF is a good recommendation. But this runs into your own
concern of having all paths on each PE wherever a non-null RT
intersection exists.

Best,
R.


