
Mailing List Archive: nsp: ipv6

Greenfield IPv4 + IPv6 broadband deployment


lists at memetic

Feb 26, 2011, 10:07 AM

Post #1 of 29
Greenfield IPv4 + IPv6 broadband deployment

Hi All,

I'm currently in the planning stages of a large scale broadband
deployment, with the hopes of doing sane dual-stacked v4/v6 to every
subscriber from day one.

I know the CPE issue has been talked about to death, and I'm pretty
unhappy with the situation there at the moment, but for the time being
I'm assuming CPE are not an issue.

All transport is ethernet, with subs being dragged back to a small
number of central gateways. I'm looking at a mix of DHCP and DHCPv6-PD
to distribute addresses. PPP isn't an option.

Is anyone else running a similar setup? Are there any recommendations on
how to handle customers with static assignments?

Is it better to try to give every subscriber a static v6 assignment, to
reduce issues with internal addressing?

Does anyone have any recommendations on how to manage many static v6
assignments in a DHCP environment?

As I'm not dropping a v6 deployment on top of an existing v4 deployment,
I'd like to make sure I've done it right for both.

Does anyone have any recommendations regarding 1:1 or 1:N VLAN models?

Thanks in advance,
adam.


martin at millnert

Feb 26, 2011, 10:47 AM

Post #2 of 29
Re: Greenfield IPv4 + IPv6 broadband deployment [In reply to]

On Sat, 2011-02-26 at 18:07 +0000, Adam Armstrong wrote:
>
> Does anyone have any recommendations on how to manage many static v6
> assignments in a DHCP environment?

PostgreSQL, and optionally, go crazy and invest heavily in (PL/Pgsql)
triggers and functions for automation. The 'inet' data type pgsql
sports, with the functions you can run on it, lets you do _a lot_ of
clever things.

Regards,
Martin
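
To make that concrete: a minimal sketch, in stdlib Python rather than
PL/pgSQL, of the containment lookup that PostgreSQL's inet/cidr types and
operators (e.g. '<<=') give you natively. The assignments here are
hypothetical documentation prefixes.

    import ipaddress

    # Hypothetical customer->prefix assignments, as they might live in an
    # IPAM table; in PostgreSQL this lookup is a single WHERE clause.
    assignments = {
        ipaddress.ip_network("2001:db8:100::/56"): "cust-0001",
        ipaddress.ip_network("2001:db8:101::/56"): "cust-0002",
    }

    def owner_of(addr):
        """Return the customer whose assigned prefix contains addr."""
        ip = ipaddress.ip_address(addr)
        for net, cust in assignments.items():
            if ip in net:  # same test as "addr <<= net" in PostgreSQL
                return cust
        return None

    print(owner_of("2001:db8:100:42::1"))  # -> cust-0001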


frnkblk at iname

Feb 26, 2011, 11:02 AM

Post #3 of 29
RE: Greenfield IPv4 + IPv6 broadband deployment [In reply to]

Adam:

Best practices haven't been established for broadband deployment, though
there are a few informational RFCs that discuss a variety of approaches.

In our network we have four physical access platforms (xDSL, FTTH (GPON),
cable broadband, and broadband wireless) and each of those is configured
differently: xDSL: mostly PPPoA, a lot of PPPoE, and a few bridged; FTTH:
mostly 1:N, a few PPPoE left; cable broadband: all bridged; broadband
wireless: all bridged.

I've started with our FTTH first, and with the 1:N approach I already ran
into an issue with our access platform. The mode that enforces L2
separation prevents stateful DHCPv6 from succeeding because multicast is
blocked. The vendor is aware of the issue but a solution won't be GA until
Q4 at the earliest. In the meantime I've configured trial customers with L2
separation turned off so that the multicast flows through. Unfortunately I
can't scale L2-separation-off because the vendor has a limit of 210 such
subs per VLAN for other reasons. I'm also waiting for LDRA support. Going
to 1:1 would resolve all these issues, but that would mean purchasing an
ES+/ES20 card for our 7609-S and completely changing our approach.

I've chosen to use stateful DHCPv6 (with prefix delegation) because
CableLabs requires it (and we might as well make configuration as similar
as possible between access platforms) and because it makes it easier for
us to comply with CALEA requests and general tracking. I'm assigning a
dynamic /56 to each broadband sub using a separate DHCP server for v6. I
haven't tried any static prefix delegation, but I presume I can do that if
I know the customer's IAID (just like if I know the customer's WAN MAC)?

That's as far as I am today. It will be Q3 (at the earliest) before our
CMTS vendor has an IPv6-ready load for us to try. I'm not sure if I'm going
to do bridged for our xDSL, but I will for our VDSL2. Our broadband
wireless will be bridged.

If you have a certain access vendor in mind, I would strongly encourage you
to ask your sales engineer for advice on how to deploy IPv6 access on their
products, and then test and scale it. It's likely your sales engineer will
say "this is the first time anyone has asked me about this".

Frank


js at yllq

Feb 26, 2011, 11:54 AM

Post #4 of 29
Re: Greenfield IPv4 + IPv6 broadband deployment [In reply to]

On Sat, 2011-02-26 at 18:07 +0000, Adam Armstrong wrote:
> Hi All,
>
> I'm currently in the planning stages of a large scale broadband
> deployment, with the hopes of doing sane dual-stacked v4/v6 to every
> subscriber from day one.
>
> I know the CPE issue has been talked about to death, and I'm pretty
> unhappy with the situation there at the moment, but for the time being
> I'm assuming CPE are not an issue.

Have a look at UK ISP AAISP; they are pretty open about their setup and
have been a v4/v6 provider for years. It's worth skimming through RevK's
blog at www.me.uk (he runs that business).


--
Mateusz Pawlowski <js [at] yllq>


lists at memetic

Feb 26, 2011, 12:23 PM

Post #5 of 29
Re: Greenfield IPv4 + IPv6 broadband deployment [In reply to]

On 26/02/2011 19:54, Mateusz Pawlowski wrote:
> On Sat, 2011-02-26 at 18:07 +0000, Adam Armstrong wrote:
>> Hi All,
>>
>> I'm currently in the planning stages of a large scale broadband
>> deployment, with the hopes of doing sane dual-stacked v4/v6 to every
>> subscriber from day one.
>>
>> I know the CPE issue has been talked about to death, and I'm pretty
>> unhappy with the situation there at the moment, but for the time being
>> I'm assuming CPE are not an issue.
> Have a look at UK ISP AAISP; they are pretty open about their setup and
> have been a v4/v6 provider for years. It's worth skimming through RevK's
> blog at www.me.uk (he runs that business).
Different access medium. I'm specifically looking for experience with
large scale ethernet dhcpv6+v4 deployments :)

Thanks,
adam.


martin at millnert

Feb 26, 2011, 12:57 PM

Post #6 of 29
Re: Greenfield IPv4 + IPv6 broadband deployment [In reply to]

On Sat, 2011-02-26 at 20:23 +0000, Adam Armstrong wrote:
> Different access medium. I'm specifically looking for experience with
> large scale ethernet dhcpv6+v4 deployments :)

The proper way to do v4+v6 *from scratch* (from scratch implies you can
buy equipment compatible with your requirements) on ethernet, to me,
seems to be along the following lines:
1) L2 separation of each customer,
2) Statically mapped address spaces per customer, both v4 and v6
3) DHCPv4 with Option 82 to deliver v4-addresses to the customer,
4) Baseline DHCPv6 RA:ed /64 on the cust ethernet. /64 taken from a
reserved /56 or shorter for the customer in question,
5) DHCPv6 PD delivering more prefixes from that same /56 (minus the 1
link /64 done with RA, obviously),
6) Option 82 equivalence for DHCPv6 allows for having a DHCPv6 PD
server not running on the PE itself, but further away (dhcp-helper
functionality to assist getting packets there)

That's the way to do the access IMO. Interface/link separation of users
lets you map addresses more easily and forget entirely about customer
device mappings, which to me is such an easing of the administrative
burden that it is absolutely worth investing extra to get.

The above is based on experience running a 2400-port Ethernet access
network that fork-lift morphed into this design. I have no operational
scaling experience with larger networks, but with routed access
interfaces and an IGP on the inside, it ought to scale pretty far.

My $0.02.

Cheers,
Martin
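
A minimal sketch of how steps (4) and (5) above fit inside one reserved
/56, using stdlib Python (the prefix is a documentation example, not from
the thread):

    import ipaddress

    # Step 2: the block statically reserved for one customer.
    customer_block = ipaddress.ip_network("2001:db8:ab00::/56")

    # Step 4: the first /64 of the /56 is announced on the customer link
    # via RA.
    subnets = customer_block.subnets(new_prefix=64)
    link_64 = next(subnets)

    # Step 5: the remaining /64s stay available for DHCPv6-PD.
    pd_pool = list(subnets)

    print("RA link prefix:", link_64)     # 2001:db8:ab00::/64
    print("first PD /64:", pd_pool[0])    # 2001:db8:ab00:1::/64
    print("PD /64s left:", len(pd_pool))  # 255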


martin at millnert

Feb 26, 2011, 1:11 PM

Post #7 of 29
Re: Greenfield IPv4 + IPv6 broadband deployment [In reply to]

(Clarification needed, apologies for list-spam)

On Sat, 2011-02-26 at 15:57 -0500, Martin Millnert wrote:
> 4) Baseline DHCPv6 RA:ed /64 on the cust ethernet. /64 taken from a
> reserved /56 or shorter for the customer in question,

The prefix, gateway and name server go into the RA. The RA can also set a
flag telling stacks to do a DHCPv6 query for additional information (NTP,
DNS again for backwards compatibility, etc., but not an address). This is
the "stateless" baseline that any customer should expect to be able to
have, IMO. More magic should add to, but not substitute for, the above.

Regards,
Martin
($0.04 now.)
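
For illustration, an RA along these lines could be composed as below. This
is a hedged sketch using scapy (an assumption; the thread names no
tooling), with documentation addresses throughout:

    # The "stateless" baseline: prefix info for SLAAC, RDNSS for DNS, and
    # the O flag telling hosts to ask DHCPv6 for other info (no address).
    from scapy.all import (IPv6, ICMPv6ND_RA, ICMPv6NDOptPrefixInfo,
                           ICMPv6NDOptRDNSS)

    ra = (
        IPv6(src="fe80::1", dst="ff02::1")  # router link-local -> all-nodes
        / ICMPv6ND_RA(M=0, O=1)             # M=0: no addresses via DHCPv6;
                                            # O=1: other config via DHCPv6
        / ICMPv6NDOptPrefixInfo(prefix="2001:db8:ab00::", prefixlen=64,
                                L=1, A=1)   # on-link, autonomous (SLAAC)
        / ICMPv6NDOptRDNSS(dns=["2001:db8::53"])
    )
    ra.show()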


frnkblk at iname

Feb 26, 2011, 1:12 PM

Post #8 of 29
RE: Greenfield IPv4 + IPv6 broadband deployment [In reply to]

If someone figures out how to do (4) and (5) automated and at scale, please
share. I don't know how one would reserve multiple /64s in a /56 for just
one CE and have it function in a way that the CE could get successive /64s
within its /56. It would seem much easier to hand the full /56 to the CE
and have it dole it out.

We're doing DHCPv6-relay for everything.

Frank
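
One way the reservation side can be automated at scale (a sketch, not a
feature of any particular DHCP server): derive each customer's /56
deterministically from a stable subscriber index, so no per-lease state is
needed for the reservation itself. The /32 and the index scheme are
assumptions:

    import ipaddress

    ISP_BLOCK = ipaddress.ip_network("2001:db8::/32")  # hypothetical block

    def customer_56(index):
        """Map a stable subscriber index to a unique /56 inside the /32."""
        if not 0 <= index < 2 ** (56 - ISP_BLOCK.prefixlen):
            raise ValueError("index out of range for /56s within the /32")
        base = int(ISP_BLOCK.network_address)
        return ipaddress.ip_network((base + (index << (128 - 56)), 56))

    print(customer_56(0))     # 2001:db8::/56
    print(customer_56(1))     # 2001:db8:0:100::/56
    print(customer_56(4095))  # 2001:db8:f:ff00::/56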


sjk at psimonkey

Feb 26, 2011, 1:53 PM

Post #9 of 29
Re: Greenfield IPv4 + IPv6 broadband deployment [In reply to]

On Sat, 26 Feb 2011, Adam Armstrong wrote:

> On 26/02/2011 19:54, Mateusz Pawlowski wrote:
>
>> Have a look at UK ISP AAISP, they are pretty open about their setup and
>> they are v4/v6 provider for years. It's worth skimming through Revks
>> blog at www.me.uk ( he runs that business )
>
> Different access medium. I'm specifically looking for experience with
> large scale ethernet dhcpv6+v4 deployments :)

AAISP do ethernet access products as well as ADSL. I'm not sure if they
count as large scale though.

--
Simon Key


nanog at 85d5b20a518b8f6864949bd940457dc124746ddc

Feb 26, 2011, 3:01 PM

Post #10 of 29
Re: Greenfield IPv4 + IPv6 broadband deployment [In reply to]

Hi Martin,

On Sat, 26 Feb 2011 15:57:22 -0500
Martin Millnert <martin [at] millnert> wrote:

> On Sat, 2011-02-26 at 20:23 +0000, Adam Armstrong wrote:
> > Different access medium. I'm specifically looking for experience with
> > large scale ethernet dhcpv6+v4 deployments :)
>
> The proper way to do v4+v6 *from scratch* (from scratch implies you can
> buy equipment compatible with your requirements) on ethernet, to me,
> seems to be along the following lines:
> 1) L2 separation of each customer,
> 2) Statically mapped address spaces per customer, both v4 and v6
> 3) DHCPv4 with Option 82 to deliver v4-addresses to the customer,
> 4) Baseline DHCPv6 RA:ed /64 on the cust ethernet. /64 taken from a
> reserved /56 or shorter for the customer in question,

How are you preventing the DHCPv6-PD server and the CPE from both using
this prefix? Does your CPE recognise that the /64 it has received for
SLAAC is from within its DHCPv6-PD prefix, and avoid using it on its
other interfaces? I'm aware that there is an Internet Draft proposing how
to officially facilitate this, but it's a draft at this stage, and I'm
not sure I'd want to be confident of the small variety of available IPv6
CPE always working correctly - or are you also managing/owning the CPE
and can therefore dictate/control feature support such as this?

What benefits are there of taking a /64 from the delegated prefix for
this purpose? I generally like the idea of saying to the customer (via
DHCPv6-PD), "here's your delegated prefix, use it how you want, I'll
use this different separate /64 that I choose and manage for the link
between us." Is it that you reduce the number of per-customer routes
that you're exporting from your customer aggregation router e.g. from 2
(/56 PD, /64 cust link) to 1 (/56 PD, cust link /64 inclusive)?

> 5) DHCPv6 PD delivering more prefixes from that same /56 (minus the 1
> link /64 done with RA, obviously),
> 6) Option 82 equivalence for DHCPv6 allows for having a DHCPv6 PD
> server not running on the PE itself, but further away (dhcp-helper
> functionality to assist getting packets there)
>
> That's the way to do the access IMO. Interface/link separation of users
> lets you map addresses more easily and forget entirely about customer
> device mappings, which to me is such an easing of the administrative
> burden that it is absolutely worth investing extra to get.
>
> The above is based on experience running a 2400-port Ethernet access
> network that fork-lift morphed into this design. I have no operational
> scaling experience with larger networks, but with routed access
> interfaces and an IGP on the inside, it ought to scale pretty far.
>

If I understand you correctly, you're using an IGP to push these
per-customer routes around. I think BGP would make this scale a lot
further if necessary. Depending on the sorts of possible outages you have,
and how many customer connections are impacted by them, BGP might be worth
using anyway: because it runs over TCP, a BGP peer that is struggling with
temporary processing load can use TCP windowing to tell its peers to back
off for a while.

Regards,
Mark.


martin at millnert

Feb 26, 2011, 3:30 PM

Post #11 of 29
Re: Greenfield IPv4 + IPv6 broadband deployment [In reply to]

Hi Mark,

I realize I might have given the impression that what I described was
rolling today. It is not. The design only exists on paper atm, and
equipment is only being delivered as we speak. Your feedback is
appreciated.

On Sun, 2011-02-27 at 09:31 +1030, Mark Smith wrote:
> Hi Martin,
> What benefits are there of taking a /64 from the delegated prefix for
> this purpose? I generally like the idea of saying to the customer (via
> DHCPv6-PD), "here's your delegated prefix, use it how you want, I'll
> use this different separate /64 that I choose and manage for the link
> between us."

Well, yeah, keeping routes down would be the motivation. But you are
correct in that you could just as well use a /64 from a separate range
for the RA prefixes. (Aggregatable per PE box, as well)

> If I understand you, you're using an IGP to push these per customer
> routes around. I think BGP would make this scale a lot further if
> necessary. Depending on the sorts of possible outages you have, and how
> many customer connections are impacted by them, BGP might be worth
> using anyway, as because it uses TCP, if a BGP peer is struggling
> with temporary processing load, it can use TCP windows to tell it's
> peers to back off for a while.

Possibly. It is entirely a topic of its own though. :) Keep in mind,
the "PE" switches in question are 24 or 48p switches: there are a lot of
them. How do you set it up? (Personal experience with larger scale
shops is limited.)

A full mesh of iBGP sessions across so many devices requires very clever
configuration management, and has inherent scaling problems.
Things you could do to avoid the scaling problems, I guess, include
"hacks" such as confederations (each cross-connect room could in theory
be its own private ASN then, peering with other cross-connect rooms
and/or the core - an interesting idea actually), or route-reflectors (not
a very attractive idea IMO).


Regards,
Martin


dwhite at olp

Feb 26, 2011, 3:30 PM

Post #12 of 29
Re: Greenfield IPv4 + IPv6 broadband deployment [In reply to]

On 26/02/11 18:07 +0000, Adam Armstrong wrote:
>Hi All,
>
>I'm currently in the planning stages of a large scale broadband
>deployment, with the hopes of doing sane dual-stacked v4/v6 to every
>subscriber from day one.
>
>I know the CPE issue has been talked about to death, and I'm pretty
>unhappy with the situation there at the moment, but for the time
>being I'm assuming CPE are not an issue.
>
>All transport is ethernet, with subs being dragged back to a small
>number of central gateways. I'm looking at a mix of DHCP and
>DHCPv6-PD to distribute addresses. PPP isn't an option.

Some of this has already been mentioned by Frank and Martin and others.

I'd recommend investing in a good router (or routers) which support
subscriber management, and try to design your network so that your
customers terminate to it via Q-in-Q VLANs (or ATM or PPPoX where
appropriate), and handle your layer-3 enforcement on that router rather
than at the edge.

Assign static v4 addresses, or enforce DHCPv4 leases on the router. Use
proxy ARP to allow customers to talk to each other if you want (a good
subscriber management router is going to have all that).

For IPv6, assign or identify customers via subnet rather than individual v6
addresses, where you can get away with it. Assign a /64 per layer-2
broadcast domain (one broadcast domain per customer if you can), and
provide a unique RA per customer. Set up a pool of DHCPv6-PD subnets (/56
or /48 per customer) that customer routers can request from, or configure a
static DHCPv6-PD pool per customer if that makes sense. Configure the
'Other configuration' flag in your RAs so customer routers retrieve DNS
servers dynamically.

Consider how you're going to handle the inevitable abuse complaints
you're going to receive (spam and copyright violations), and how you're
going to identify which customer triggered the complaint.

--
Dan White


nanog at 85d5b20a518b8f6864949bd940457dc124746ddc

Feb 26, 2011, 3:57 PM

Post #13 of 29
Re: Greenfield IPv4 + IPv6 broadband deployment [In reply to]

Hi Martin,

On Sat, 26 Feb 2011 18:30:16 -0500
Martin Millnert <martin [at] millnert> wrote:

> Hi Mark,
>
> I realize I might have given the impression that what I described was
> rolling today. It is not. The design only exists on paper atm, and
> equipment is only being delivered as we speak. Your feedback is
> appreciated.
>
> On Sun, 2011-02-27 at 09:31 +1030, Mark Smith wrote:
> > Hi Martin,
> > What benefits are there of taking a /64 from the delegated prefix for
> > this purpose? I generally like the idea of saying to the customer (via
> > DHCPv6-PD), "here's your delegated prefix, use it how you want, I'll
> > use this different separate /64 that I choose and manage for the link
> > between us."
>
> Well, yeah, keeping routes down would be the motivation. But you are
> correct in that you could just as well use a /64 from a separate range
> for the RA prefixes. (Aggregatable per PE box, as well)
>
> > If I understand you, you're using an IGP to push these per customer
> > routes around. I think BGP would make this scale a lot further if
> > necessary. Depending on the sorts of possible outages you have, and how
> > many customer connections are impacted by them, BGP might be worth
> > using anyway, as because it uses TCP, if a BGP peer is struggling
> > with temporary processing load, it can use TCP windows to tell it's
> > peers to back off for a while.
>
> Possibly. It is entirely a topic of its own though. :) Keep in mind,
> the "PE" switches in question are 24 or 48p switches: there are a lot of
> them. How do you set it up? (Personal experience with larger scale
> shops is limited.)
>

I'd probably stick to a "BGP for everything but loopbacks" model. Once you
have your route-reflectors configured, and are liberally using templated
configurations (e.g. BGP peer-groups corresponding to device roles
(core, peer, edge, etc.), route maps, route filters via prefix-lists,
etc.), configuring and operating BGP is mainly a cut-and-paste job. For
edge devices, sending them just a default route and applying basic
inbound filtering (which may just be a "customer route" community, which
_shouldn't_ be applied by default by the edge device; use an aggregate
prefix-list to apply it - uncontrolled redistribution is a hair trigger
in my opinion) is enough.

Alternatively you might run an IGP instance within clusters of edge
devices and then have a couple of them (or more likely upstream
distribution routers) inject those routes into BGP. Following the "less
is more" principle, I think I'd still use BGP for this purpose though
if all my edge devices can talk it.

BGP scales much better than IGPs. For example, an IGP having to deal
with 5K+ fluctuating routes is a potential nightmare I'd never want to
experience, whereas with BGP it's pretty much a walk in the park (for
a reasonably good implementation). With a goal of providing stable IPv6
addresses to customers, I think there is value in pushing around
individual customer routes within your routing domain within a limited
scope (e.g. geographic region, PoP or chosen cluster of customer
aggregation routers), rather than having a single edge device be the
customer route aggregation boundary. BGP is much more suited to that
task.

> A full mesh of iBGP sessions across so many devices requires very clever
> configuration management, and has inherent scaling problems.
> Things you could do to avoid the scaling problems, I guess, include
> "hacks" such as confederations (each cross-connect room could in theory
> be its own private ASN then, peering with other cross-connect rooms
> and/or the core - an interesting idea actually), or route-reflectors (not
> a very attractive idea IMO).
>

Why do you say that about route-reflectors? My experience using them
has been that they just work. Their location tends to follow the
hierarchy of layer 3 traffic aggregation within your network, so your
route-reflector topology matches 1:1 with your layer 3 aggregation
hierarchy.

If you've got access to a copy of "BGP Design and Implementation", the
case studies on ISP and large enterprise networks are worth having a
look at.

Regards,
Mark.


lists at memetic

Feb 26, 2011, 4:23 PM

Post #14 of 29
Re: Greenfield IPv4 + IPv6 broadband deployment [In reply to]

On 26/02/2011 23:30, Dan White wrote:
> On 26/02/11 18:07 +0000, Adam Armstrong wrote:
>> Hi All,
>>
>> I'm currently in the planning stages of a large scale broadband
>> deployment, with the hopes of doing sane dual-stacked v4/v6 to every
>> subscriber from day one.
>>
>> I know the CPE issue has been talked about to death, and I'm pretty
>> unhappy with the situation there at the moment, but for the time
>> being I'm assuming CPE are not an issue.
>>
>> All transport is ethernet, with subs being dragged back to a small
>> number of central gateways. I'm looking at a mix of DHCP and
>> DHCPv6-PD to distribute addresses. PPP isn't an option.
>
> Some of this has already been mentioned by Frank and Martin and others.
>
> I'd recommend investing in a good router (or routers) which support
> subscriber management, and try to design your network so that your
> customers terminate to it via Q-in-Q VLANs (or ATM or PPPoX where
> appropriate), and handle your layer-3 enforcement on that router rather
> than at the edge.

That's the plan currently. Purely layer 2 back to a couple of very large
devices doing layer 3 aggregation. Still deciding on 1:1 or 1:N VLANs.

> Assign static v4 addresses, or enforce DHCPv4 leases on the router. Use
> proxy ARP to allow customers to talk to each other if you want (a good
> subscriber management router is going to have all that).
> For IPv6, assign or identify customers via subnet rather than
> individual v6
> addresses, where you can get away with it. Assign a /64 per layer-2
> broadcast domain (one broadcast domain per customer if you can), and
> provide a unique RA per customer. Set up a pool of DHCPv6-PD subnets (/56
> or /48 per customer) that customer routers can request from, or
> configure a
> static DHCPv6-PD pool per customer if that makes sense. Configure the
> 'Other configuration' flag in your RAs so customer routers retrieve DNS
> servers dynamically.

My primary issue at the moment is that I can't see a clean way to manage
100K static v6 prefixes via DHCP.

It's possible I'm missing something obvious, but it doesn't seem to be
coming to me no matter how hard I look.

> Consider how you're going to handle the inevitable abuse complaints
> you're going to receive (spam and copyright violations), and how you're
> going to identify which customer triggered the complaint.
Argh :)

adam.
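
One commonly suggested pattern (a sketch under assumptions the thread
doesn't settle: an ISC-dhcpd-style server supporting fixed-prefix6 host
reservations keyed by client DUID) is to treat the DHCP config as a build
artifact rendered from the subscriber database, never edited by hand:

    # Render static PD reservations from a subscriber DB into an
    # ISC-dhcpd-style config fragment. Names, DUIDs and prefixes are
    # hypothetical; at 100K customers this is simply a bigger loop.
    subscribers = [
        # (name, DUID as colon-hex, delegated prefix)
        ("cust-00001", "00:03:00:01:aa:bb:cc:dd:ee:01", "2001:db8:100::/56"),
        ("cust-00002", "00:03:00:01:aa:bb:cc:dd:ee:02", "2001:db8:101::/56"),
    ]

    HOST_TEMPLATE = """host {name} {{
        host-identifier option dhcp6.client-id {duid};
        fixed-prefix6 {prefix};
    }}
    """

    with open("dhcpd6-hosts.conf", "w") as f:
        for name, duid, prefix in subscribers:
            f.write(HOST_TEMPLATE.format(name=name, duid=duid, prefix=prefix))

At 100K entries that is a few megabytes of generated config; regenerating
and reloading on change is more a change-management problem than a scale
problem.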


lists at memetic

Feb 26, 2011, 4:27 PM

Post #15 of 29
Re: Greenfield IPv4 + IPv6 broadband deployment [In reply to]

On 26/02/2011 23:30, Martin Millnert wrote:
> A full mesh iBGP with so many devices requires very clever configuration
> management, and has inherent scaling problems.
> Things you could do to avoid the scaling problems, I guess includes
> "hacks" such as confederation (each cross-connect room could in theory
> be its own private ASN then, peering with other cross-connect rooms
> and/or core - interesting idea actually), or use route-reflectors (Not a
> very attractive idea IMO).
Route-reflectors are the correct way to do this.

I would use route-reflectors with only a handful of devices. Think using
something with a very fast control-plane like an ASR RP-2 to handle your
100 BGP sessions, whereas something like a SUP720 only has two sessions
to look after, instead of 99.

adam.


swmike at swm

Feb 26, 2011, 4:30 PM

Post #16 of 29
Re: Greenfield IPv4 + IPv6 broadband deployment [In reply to]

On Sun, 27 Feb 2011, Adam Armstrong wrote:

> That's the plan currently. Purely layer 2 back to a couple of very large
> devices doing layer 3 aggregation. Still deciding on 1:1 or 1:N VLANs.

I recommend against that. I'd go for a much more distributed L3 switch
model, let's say one L3 switch per 1000 subscribers or so.

If there were such a device, I'd use an L3 switch that the customer
connects to directly, so you never need L2 backhaul and all the pain that
comes with it (duplicate MAC addresses, needing to handle Q-in-Q on the
L3 device, etc.).

I'd also require a CPE that does routing, so your big devices never have
to handle all the customer devices and do ND etc. with them. ND is very
chatty and it's a lot of state to keep, lots of TCAM slots to handle, etc.
Do link-local only to the customer and DHCPv6-PD only, no DHCPv6/SLAAC at
all. If the customer doesn't have a CPE that does this then they won't get
IPv6, only IPv4+NAT.

> My primary issue at the moment is that I can't see a clean way to manage 100K
> static v6 prefixes via DHCP.

I'd say you need the equivalent of option 82 and a DB? I don't know how to
do this currently though.

--
Mikael Abrahamsson email: swmike [at] swm


frnkblk at iname

Feb 26, 2011, 6:35 PM

Post #17 of 29
RE: Greenfield IPv4 + IPv6 broadband deployment [In reply to]

If you use the 1:1 model with Q-in-Q, where each VLAN has its own RA
configuration with a unique /64, then you could have a unique pool per
VLAN with just one customer block (/48, /56, /60 or /64) per pool. I don't
plan to implement it that way, but it would be possible. Of course, if the
customer changes CPE then the pool would be out of IP addresses until the
previous lease expired.

Frank


frnkblk at iname

Feb 26, 2011, 6:38 PM

Post #18 of 29
RE: Greenfield IPv4 + IPv6 broadband deployment [In reply to]

Why would you have a separate /64 stubnet (between CE and PE)? Then each
customer would use a /64 plus a /48, /56 or /64.

I'm currently using a /56 in the 1:N model so that each CE's WAN interface
is in the same /56.

Frank


frnkblk at iname

Feb 26, 2011, 6:42 PM

Post #19 of 29
RE: Greenfield IPv4 + IPv6 broadband deployment [In reply to]

It's a bit much, in our customer base, to require a router.

Frank


swmike at swm

Feb 26, 2011, 6:57 PM

Post #20 of 29
RE: Greenfield IPv4 + IPv6 broadband deployment [In reply to]

On Sat, 26 Feb 2011, Frank Bulk wrote:

> It's a bit much, in our customer base, to require a router.

Could you please elaborate on that?

It's my world view that a majority of people already have a NAT gateway
(because they want wifi etc), and the people who don't, do they really
need IPv6 connectivity right away? When they want it, they can purchase a
router and then have it.

I just see so many downsides with supporting a vendor routed /64 that I
don't see that I can recommend it. Of course, large deployment hasn't
happened yet so operationally we don't know what's going to happen.

If you're going to be the default gw of the /64, I think it's a good idea
to enforce a limit on the number of IPv6 addresses and MAC addresses you
support in the service. Handling lots of ND is not going to scale; you
really don't want to be part of the home network if you can avoid it.
Think 50 devices which might have multiple IPv6 addresses each. That's a
lot of ND and TCAM usage.

I'm sure you can get away with it initially, but isn't it better to do it
right from the start than to have to stop doing it and convert the
users later?

--
Mikael Abrahamsson email: swmike [at] swm


martin at millnert

Feb 26, 2011, 9:02 PM

Post #21 of 29
Re: Greenfield IPv4 + IPv6 broadband deployment [In reply to]

On Sun, 2011-02-27 at 01:30 +0100, Mikael Abrahamsson wrote:
> If there was such a device, I'd do L3 switch that the customer
> connects to directly, so you never ever need L2 backhaul and all that
> pain that comes with it (duplicate MAC addresses, needing to handle
> q-in-q on the L3 device etc).

There are more than a handful of 1U 48-port GE L3 switch models nowadays
that can even speak BGP. That's what we bought ~2400 ports of, anyway.
(Why one would greenfield less than GE in 2011 is beyond me.)

L3 to the customer makes your world *A LOT* simpler indeed. The list is
very long.

Regards,
Martin


frnkblk at iname

Feb 26, 2011, 9:04 PM

Post #22 of 29
RE: Greenfield IPv4 + IPv6 broadband deployment [In reply to]

I'm just not aware of an ISP that requires a customer to provide their
own router -- if a service provider uses a modem or ONT, the customer is
free to plug in a router or their PC directly. At least that's the way it
is in North America.

That said, I would say that definitely less than 10%, and probably less
than 5%, of our customers don't use a router. I'm not worried about a
customer having 50 devices -- they would likely have a router in that
case. Most of our configs hand out only one IPv4 address, so customers are
already "programmed" to know that if they want multiple devices online in
the home they need to use a router, and 99.9% of the time it's a wireless
router.

So I'm aware of the ND concerns, but with our small operations (<8,500
broadband users across 4 different access platforms), the ND concerns are
minimal.

Frank


martin at millnert

Feb 26, 2011, 9:16 PM

Post #24 of 29
Re: Greenfield IPv4 + IPv6 broadband deployment [In reply to]

Hi Adam,

On Sun, 2011-02-27 at 00:23 +0000, Adam Armstrong wrote:
> My primary issue at the moment is that I can't see a clean way to manage
> 100K static v6 prefixes via DHCP.

100k rows in PostgreSQL is not a problem.
n*100k rows in your DHCP-server's config shouldn't be a big problem
either, for n < 100 or so. If your DHCP server can't do this, look for
another one (or replace the Pentium). :)

> It's possible I'm missing something obvious, but it doesn't seem to be
> coming to me no matter how hard I look.

http://tools.ietf.org/html/rfc4649 , I believe.

It becomes simple:
1) Connect each customer to an RFC 4649-capable interface,
2) Relay DHCP to a backend DHCP server (or several), adding the device
and port ID to the packet,
3) Take care of the mappings in the backend DHCPv6 server(s).

It's either that, or serving DHCPv6 directly from the device the
customers connect to, and taking care of the mappings there.

> > Consider how you're going to handle the inevitable abuse complaints your
> > going to receive (SPAM and Copyright violations), and how you're going
> > to identify which customer triggered the complaint.
> Argh :)

See above. Statically mapping customer interfaces to prefixes (we opted
to have the customer interface receive a new mapping if a person moves
out and another one moves in) is the only sane way to do this. Do not
attempt to look up customers by the interface-identifier part of the
address. Do attempt to look up customers by the network prefix part of
the address.

Regards,
Martin
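
As a sketch of step (3) above (wire parsing of the relay options is
omitted; the device and port names are hypothetical), the backend mapping
is just a keyed table, which is what lets the database approach hold up
at 100K customers:

    # The relay stamps each request with remote-id (RFC 4649, option 37)
    # and interface-id (RFC 3315, option 18); the backend keys on the pair.
    reservations = {
        # (switch remote-id, port interface-id) -> (link /64, delegated /56)
        ("sw-edge-017", "ge-0/0/12"): ("2001:db8:0:1200::/64",
                                       "2001:db8:1200::/56"),
        ("sw-edge-017", "ge-0/0/13"): ("2001:db8:0:1300::/64",
                                       "2001:db8:1300::/56"),
    }

    def lookup(remote_id, interface_id):
        """Return (link prefix, PD prefix) for a customer port, or None."""
        return reservations.get((remote_id, interface_id))

    print(lookup("sw-edge-017", "ge-0/0/12"))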


martin at millnert

Feb 26, 2011, 9:32 PM

Post #25 of 29
RE: Greenfield IPv4 + IPv6 broadband deployment [In reply to]

Mikael,

On Sun, 2011-02-27 at 03:57 +0100, Mikael Abrahamsson wrote:
> If you're going to be the default gw of the /64, I think it's a good
> idea to enforce what number of IPv6 addresses and mac addresses you
> support in the service. Handling lots of ND is not going to scale, you
> really don't want to be part of the home network if you can avoid it.
> Think 50 devices which might have multiple IPv6 addresses each. That's
> a lot of ND and TCAM usage.

On a relatively high-end 48-port L3 access switch, I'm not sure I agree
that ~16k ND entries will be insufficient for many, many years to come.
I'd be willing to make a bet with you that the device in question will
have been replaced long before the issue ever appears. (I keep customers'
habit of hooking up routers well in mind.)

You are certainly right that one should investigate the limits of the
equipment before deployment though, and I suspect that ND flood
protection etc is mandated as well on customer interfaces.

All things considered, this is another reason why it is very beneficial
to go the route of connecting customers directly on L3 edge devices, IMO.

Regards,
Martin
