
Mailing List Archive: nsp: foundry

Features on Brocade Ethernet platforms

 

 



robhass at gmail

Mar 12, 2011, 1:37 AM

Post #1 of 14 (4262 views)
Features on Brocade Ethernet platforms

Hi

I'm leading a project to migrate a customer's network of D-Link
switches (6 core nodes, around 30 access) plus Linux routers (6 PEs
running BGP + OSPF) onto a new hardware platform. The goal is to
replace all the equipment and upgrade the existing 4Gbps (4x1Gbps LACP)
backbone links to 10GE. We're considering a few vendors now, and
Brocade looks very promising, but we don't have much experience with
this equipment. So I have a couple of questions about Brocade gear.

1) CER as PE router

Which of the features below are supported on the CER 2000 platform:

- SNMP Counters on VE interfaces
- Inter-Operability with Cisco VTP
- Port Monitoring Local Destination
- Port Monitoring Remote (RSPAN)
- Spanning-Tree BPDU Filter support
- Ingress Shaping/Policing on physical interfaces
- Ingress Shaping/Policing on VE interfaces
- Ingress Shaping/Policing on L2 VLANs
- Egress Shaping/Policing on physical interfaces
- Egress Shaping/Policing on VE interfaces
- Egress Shaping/Policing on L2 VLANs
- IPv4/IPv6 FIB import filter (e.g. full BGP table in the RIB, but
only a partial table in the FIB - filtered via route-map, prefix-list, etc.)
- DDoS control-plane protection (like CoPP on Cisco, or a loopback
inbound ACL to protect the RE on Junipers)
- Port Security (max MAC addresses)
- Broadcast/Multicast storm control - configured in pps
- QoS Marking
- QoS Scheduling (assigning cos/dscp to specific queue)
- QoS Scheduling per port
- QoS Strict-Priority Queue
- ECMP L2 (LAG) Hashing (src-dst-ip or src-dst-mixed-ip-port)
- ECMP L3 (L3 equal cost routing) (src-dst-ip or src-dst-mixed-ip-port)
- NetFlow v5
- NetFlow v9
- sFlow
- VTPv2 / VTPv3
- DOM for OEM 1G/10G optics
- Mixed (DC+AC) Power Supply Option

2) TurboIron 24X - as core platform

Is this switch a good solution for a pure 10GE core? We want to use it for:
- interconnect to other core nodes by 1x10GE and 2x10GE LAGs
- interconnect to upstream providers (eg. GlobalCrossing, KPN and DE-CIX)
- downlinks to PE routers by 2x10GE LAGs
- downlinks to access switches (customer access switches)
- at the core we transport multicast IPTV (around 900Mbps) - will
that be a problem? (e.g. microbursts at the core)
- is SFP+ ER (40km, 1550nm) supported in this switch?
- is QinQ supported on this switch?

If the TurboIron is not a good idea here, what should we consider as
an alternative? MLX/RX is not an option as it's too expensive. We need
just 8 x 10GE ports (XFP preferred, but SFP+ is fine with us too).

3) FCX624 - as one of access-switches

- can we do ingress/egress policing on GE ports on this switch?
- can we do ingress/egress policing on specific VLANs on 802.1q trunk
ports? (e.g. on a trunk carrying VLANs 100, 101 and 102, I'd like to
police VLAN 100 to 20Mb and VLAN 101 to 40Mb, and leave VLAN 102 unpoliced)
- the optional FCX-4XG module apparently provides 4 x 10GE SFP+ ports -
will SFP+ ER (40km, 1550nm) optics work with this module?
- how deep are its buffers? (re: the microburst problem)

4) Other L2 platforms - which other Brocade 1U switches have deep
buffers and 2x10GE capability? I also need QinQ and ingress/egress
policing on GE ports/VLANs.

Thanks a lot,
Robert
_______________________________________________
foundry-nsp mailing list
foundry-nsp [at] puck
http://puck.nether.net/mailman/listinfo/foundry-nsp


frnkblk at iname

Mar 12, 2011, 7:46 AM

Post #2 of 14 (4037 views)
Re: Features on Brocade Ethernet platforms [In reply to]

Lots to like about the CER but we had to look at the MLX because the CER
only has two 10GE ports.

Frank



robhass at gmail

Mar 12, 2011, 9:27 AM

Post #3 of 14 (4014 views)
Re: Features on Brocade Ethernet platforms [In reply to]

On Sat, Mar 12, 2011 at 4:46 PM, Frank Bulk <frnkblk [at] iname> wrote:
> Lots to like about the CER but we had to look at the MLX because the CER
> only has two 10GE ports.

Frank, the CER is just fine for a PE (2x10GE is enough). I need more
10GE ports on an L2 switch - that's where I'm trying to use the TI24X.
If the TI24X won't fit, we'll probably go with a Cisco Catalyst 4900M
(8 x 10GE is OK since it's expandable to 16 x 10GE).

Robert


nick at foobar

Mar 12, 2011, 11:16 AM

Post #4 of 14 (4072 views)
Re: Features on Brocade Ethernet platforms [In reply to]

On 12/03/2011 09:37, Robert Hass wrote:
> 2) TurboIron 24X - as core platform
>
> Is this switch is good solution to pure 10GE core ? We want use them for :
> - interconnect to other core nodes by 1x10GE and 2x10GE LAGs
> - interconnect to upstream providers (eg. GlobalCrossing, KPN and DE-CIX)

TI24X may be useful in this situation if you're aware of its limitations
and how they might interact with your network. It's a cut-thru switch with
very small buffers (2M shared per chassis), which means that if you're
microbursting *outbound* on your 10G ports, you have enough space for 1.6ms
of buffered traffic, best case. This may or may not work for your network
configuration.
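[The 1.6ms figure above is just buffer size divided by line rate; a
quick back-of-the-envelope check in Python, using the 2M and 10G
numbers from this post:]

```python
# Best-case buffering headroom: how long a shared buffer can absorb a
# burst arriving at line rate on a single congested 10GE port.
buffer_bytes = 2 * 2**20        # 2 MB shared buffer per chassis (TI24X)
link_rate_bps = 10 * 10**9      # 10 Gbit/s

headroom_ms = buffer_bytes * 8 / link_rate_bps * 1000
print(f"{headroom_ms:.2f} ms")  # ~1.68 ms, and that's the whole chassis
```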

It also does L3. I don't use this capability, so have no opinion on how
good it is.

> - downlinks to PE routers by 2x10GE LAGs
> - downlinks to access switches (customer access switches)
> - at core we transporting multicast with IPTV (around 900Mbps) it will
> be not problem ? (eg. with microbursts at core)
> - is SFP+ ER (40KM) 1550nm is supported in this switch ?

As a general principle, I'm not convinced that running ER/ZR SFP+ optics is
a good idea. Apart from any heat dissipation issues (which may or may not
be a problem on the TI24X, depending on the quantity of ER transceivers
installed), SFP+ transceivers do not have on-board electronic dispersion
compensation, but instead delegate this process to the switch motherboard.
This means that instead of having all analog->digital signal processing
happen on the transceiver, with a purely digital electrical hand-off to
the switch mobo (as happens in an XFP), you end up with two potential
kit manufacturers involved in the opto-electrical conversion process: the
SFP+ manufacturer and the switch mobo manufacturer.

This may or may not matter to you, and it will depend entirely on the type
of SFP+ used, whether it's been tested on the TI24X, and the dispersion
characteristics of the fibre run. However, if you're running on longer
fibre links, I wouldn't do it directly into any SFP+ switch, not just the
TI24X. There's too much that can potentially go wrong.

If Brocade produced an ER SFP+ for this switch, you'd be OK, but I
don't think they do.

> - is QinQ supported on this switch ?

Yes, 4.2.00 or later. Haven't used it myself, but the documentation claims
it works.

> If TurboIron is not good idea here then what we should consider as
> alternative ? MLX/RX is not option here as It's too expensive. We need
> just 8 x 10GE ports (best if XFP, but SFP+ is fine with us too).

Big buffers, lots of 10G ports, cheap. Choose two. :-D

> 4) Other L2 platforms - which other Brocade 1U switch has deep buffers
> and 2x10GE capability ? I also need QinQ, Ingress/Egress policing at
> GE/VLANs ports.

fes-x600 series has 64 megs shared buffers per chassis and has an optional
2x10GE card. This works reasonably well for GE access ports with heavy
outbound traffic (32 megs shared ingress + 32 megs shared egress - i.e.
250ms line rate buffering best case). It's a 2U switch though.

Nick


georgeb at gmail

Mar 12, 2011, 10:55 PM

Post #5 of 14 (4252 views)
Re: Features on Brocade Ethernet platforms [In reply to]

> 2) TurboIron 24X - as core platform
>
> Is this switch is good solution to pure 10GE core ? We want use them for :
> - interconnect to other core nodes by 1x10GE and 2x10GE LAGs
> - interconnect to upstream providers (eg. GlobalCrossing, KPN and DE-CIX)
> - downlinks to PE routers by 2x10GE LAGs
> - downlinks to access switches (customer access switches)
> - at core we transporting multicast with IPTV (around 900Mbps) it will
> be not problem ? (eg. with microbursts at core)
> - is SFP+ ER (40KM) 1550nm is supported in this switch ?
> - is QinQ supported on this switch ?
>
> If TurboIron is not good idea here then what we should consider as
> alternative ? MLX/RX is not option here as It's too expensive. We need
> just 8 x 10GE ports (best if XFP, but SFP+ is fine with us too).
>
>
An alternative to the TurboIron 24x might be something like the Arista
7100 series, depending on what features you need. They produce an ER
optic. One nice feature of this switch is mLAG ("multi-chassis LAG"),
like the Brocade MCT available on the MLX/XMR/CER/CES. It allows you to
have a pair of uplinks bonded as a LAG from the access switches: one
link to one core switch, the other link to the second one, active/active.
Basically it allows you to get rid of spanning tree without pushing
layer 3 out to the access switches. They do QinQ but don't do v6 routing
in hardware at this time (it's coming later this year).

I use these as aggregation switches in cages remote from my core
infrastructure. I might have nearly a dozen FCX top-of-rack switches in
a remote cage aggregated to a pair of Aristas, and those switches then
go over a pair of uplinks (one each in an mLAG) to a pair of MLX core
switches, instead of having to do a long-distance uplink from each of
the access switches.

I have considered using the Aristas for a core application, but the
lack of a v6 routing protocol clobbered that option. In that case I
went instead with a pair of FCX units doing core routing, with a pair
of Aristas for 10G port fanout. So the Aristas act as layer 2 10G ports
hanging off the FCX pair, which actually does the routing in that
application.


robhass at gmail

Mar 12, 2011, 11:32 PM

Post #6 of 14 (4053 views)
Re: Features on Brocade Ethernet platforms [In reply to]

On Sat, Mar 12, 2011 at 8:16 PM, Nick Hilliard <nick [at] foobar> wrote:

>It's a cut-thru switch with very small buffers (2M shared per chassis), which
> means that if you're microbursting *outbound* on your 10G ports, you have
> enough space for 1.6ms of buffered traffic, best case. This may or may not
> work for your network configuration.

All links will be local (<1ms RTT) except the link to DE-CIX, where we
take 10G via a 3rd-party DWDM system and the RTT is around 4-5ms.

> As a general principle, I'm not convinced that running ER/ZR SFP+ optics is
> a good idea.

I'm thinking about ER because I have one node where the fiber distance
is around 24km - LR optics won't handle that at all. In the past I saw
LR20 optics (X2/XENPAK/XFP) - LR but with the power budget increased to
20km. I'll check whether they're available in SFP+ form factor.

Thanks a lot,
Robert

_______________________________________________
foundry-nsp mailing list
foundry-nsp [at] puck
http://puck.nether.net/mailman/listinfo/foundry-nsp


robhass at gmail

Mar 12, 2011, 11:35 PM

Post #7 of 14 (4125 views)
Re: Features on Brocade Ethernet platforms [In reply to]

On Sun, Mar 13, 2011 at 7:55 AM, George B. <georgeb [at] gmail> wrote:

> An alternative to the TurboIron 24x might be something like the Arista 7100
> series depending on what features you need. They produce an ER optic.

George, how does the Arista 7100 gear handle the microbursts Nick
mentioned? How deep are its buffers? And how does the pricing compare
to the TI24X (which is $12k GPL)? I just need 8-16 10GE ports in a
1-2U form factor.

Robert



georgeb at gmail

Mar 13, 2011, 8:18 PM

Post #8 of 14 (4051 views)
Re: Features on Brocade Ethernet platforms [In reply to]

On Sun, Mar 13, 2011 at 12:35 AM, Robert Hass <robhass [at] gmail> wrote:

> On Sun, Mar 13, 2011 at 7:55 AM, George B. <georgeb [at] gmail> wrote:
>
> > An alternative to the TurboIron 24x might be something like the Arista
> 7100
> > series depending on what features you need. They produce an ER optic.
>
> George, and how about preventing micro-bursts mentioned by Nick at
> Arista 7100 gear ? How deep buffers it has ? How about pricing
> comparing to TI24X (it's $12k GPL). I just need 8-16 10GE ports in
> 1-2U form factor.
>
> Robert
>

I would say first go here:

http://www.aristanetworks.com/en/products/7100series

Notice the PDF link to "Myths about microbursts"

Here is a link to the data sheet:

http://www.aristanetworks.com/media/system/pdf/Datasheets/7100_Datasheet.pdf

And if you have been following Jim Gettys' work ... big buffers
probably hurt performance rather than help, particularly if there is
congestion anywhere in the path (including outside your network, in the
Internet path). The "every packet is sacred" approach of using huge
buffers to prevent packet loss under congestion probably hurts
performance more than it helps. Dropping a packet and having a flow
back off a little is how TCP was designed to deal with congestion.

Having several seconds' worth of traffic sitting in various buffers
along the path means that by the time you learn a path is congested, it
might be too late to do anything about it. So you lose a packet and
back off, but you're still losing packets that were buffered somewhere,
so TCP ends up overcompensating: it backs off far too much, and the
network goes into an "accordion" mode, speeding up and slowing down,
because all that buffering isn't letting TCP react to real-world
conditions. Google "bufferbloat". Gettys hasn't done the world's best
job of communicating it in a way most people can understand, but he is
dead on the money. Bigger buffers are probably a bad thing.
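[The core of the bufferbloat argument reduces to one formula: a
standing queue of B bytes on an R bit/s bottleneck adds B*8/R seconds
to every packet's RTT, and that added delay is exactly how long TCP's
congestion signal is postponed. A rough sketch - the CPE numbers are
illustrative, not from this thread:]

```python
def queue_delay_s(backlog_bytes: int, rate_bps: float) -> float:
    """Seconds of delay added by a full standing queue at a bottleneck."""
    return backlog_bytes * 8 / rate_bps

# A 2 MB switch buffer draining on a 10GE port: milliseconds of delay.
switch_ms = queue_delay_s(2 * 2**20, 10e9) * 1000   # ~1.7 ms
# A 256 MB buffer on an 8 Mbit/s DSL uplink: minutes of delay.
cpe_s = queue_delay_s(256 * 2**20, 8e6)             # ~268 s
print(f"switch: {switch_ms:.2f} ms, CPE: {cpe_s:.0f} s")
```

[Same formula, wildly different outcomes: milliseconds for a small
switch buffer, minutes for an oversized CPE buffer - which is why the
two cases behave so differently.]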


georgeb at gmail

Mar 13, 2011, 8:27 PM

Post #9 of 14 (4042 views)
Re: Features on Brocade Ethernet platforms [In reply to]

Oh, about price ... the 7124s are about US$13,000 list. Don't expect as
much of a discount off list, as they price their stuff pretty low
anyway. The ER optics are about $6K I think; SR optics are under $600.

So the optics for a long-distance link will cost about half as much as
the switch.






nick at foobar

Mar 14, 2011, 2:58 AM

Post #10 of 14 (4023 views)
Re: Features on Brocade Ethernet platforms [In reply to]

On 14/03/2011 03:18, George B. wrote:
> bit is how TCP was designed to deal with congestion. Having several
> seconds worth of traffic sitting in various buffers along the path means
> that by the time you learn that a path is congested, it might be too late
> to really do anything about it.

Exactly - but we're not talking about several seconds worth of buffering on
these switches. 2M per chassis on a 24 port 10G switch means 1.6ms
headroom across 24 high speed ports. That's not a lot.

The only switch which comes close to this is the Force 10 S60, which
provides 1.25G of shared buffer space for 24x1G + 4x10G. That would equate
to 10 seconds worth of buffering on a single port, assuming pathological
lab conditions and lab configuration (i.e. much less in real life).
However it's worth pointing out that F10 specifically aim it at streaming
and other applications where latency is much less of an issue than packet
drops.
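[For reference, the 10-second figure is the same arithmetic: drain
1.25 GB of shared buffer through a single 1 Gbit/s port. A minimal
check, assuming the decimal-gigabyte reading of "1.25G":]

```python
# Pathological best case: the entire S60 shared buffer queued behind
# one 1GE port, with nothing else competing for it.
buffer_bytes = 1.25e9       # 1.25 GB shared buffer (decimal units assumed)
port_rate_bps = 1e9         # one 1GE port

drain_s = buffer_bytes * 8 / port_rate_bps
print(drain_s)              # 10.0 seconds of line-rate buffering
```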

IOW, choose your equipment to match your requirements.

> Google "bufferbloat". He hasn't done the world's best
> job in actually communicating it in a way most people can understand, but
> he is dead on the money. Bigger buffers is actually probably a bad thing.

We're talking apples and oranges here. Big buffers on a decent quality
switch are not the same as ridiculous buffers on a trashy CPE device.

On 14/03/2011 03:27, George B. wrote:
> Oh, about price ... the 7124's are about US$13,000 list. Don't expect as
> much of a discount off list as they price their stuff pretty low anyway.
> The ER optics are about $6K I think, SR optics are under $600

Last I heard, Arista vendor-locks their transceiver ports - although that
was a couple of years ago and maybe things have changed since then. This
means that if you're using third party SFP+ (where you might expect to pay
e.g. 120 per SR transceiver in small quantities), you will need to get
them vendor coded.

Nick



naskrfan at yahoo

Mar 17, 2011, 6:17 PM

Post #11 of 14 (4003 views)
Re: Features on Brocade Ethernet platforms [In reply to]

Robert,

If you are looking for a 10GE switch that can scale out to 16 ports or
higher, and all you need is L2, then the new VDX 6720-24 or 6720-60
could work for you. It is capable of doing either 1GbE or 10GbE at line
rate. These switches are very easy to expand should the need arise.

While they may not "stack" like the FCX, for sheer port count and
scalability (up to 600 1GbE or 10GbE ports) they can not be beat.


Terry






robhass at gmail

Mar 18, 2011, 4:50 PM

Post #12 of 14 (3985 views)
Re: Features on Brocade Ethernet platforms [In reply to]

On Fri, Mar 18, 2011 at 2:17 AM, Mr. Dickerson <naskrfan [at] yahoo> wrote:
> If you are looking for a 10GB switch that can scale out to 16 ports or
> higher and all you need it for is L2, then the new VDX 6720-24 or 6720-60
> can work for you. It is capable of doing either 1 or 10GB, transmitting at
> line rates. These switches are very easy to expand should the need arise.

The VDX looks very nice, but I was unable to find:

- Where is the configuration guide PDF?
- Is the CLI the same as on the TurboIrons, FastIrons, etc.?
- How large are the buffers (per port / per port-group)?

Robert
_______________________________________________
foundry-nsp mailing list
foundry-nsp [at] puck
http://puck.nether.net/mailman/listinfo/foundry-nsp


naskrfan at yahoo

Mar 19, 2011, 7:28 AM

Post #13 of 14 (4096 views)
Re: Features on Brocade Ethernet platforms [In reply to]

> The VDX looks very nice, but I was unable to find:
>
> - Where is the configuration guide PDF?
> - Is the CLI the same as on the TurboIrons, FastIrons, etc.?
> - How large are the buffers (per port / per port-group)?

Yes, there is a configuration and command reference guide for the VDX. It is
available on the Brocade public website. If you can not get to it, then let me
know, and I can send it to you.


The CLI is very much like Cisco's. It is built on a completely new OS
called NOS (Network Operating System).


While I can not find the exact size of the buffers, we do support jumbo
frames of 9216 bytes. We also support port groups, with latency between
ports of the same group of 600 nanoseconds. Maximum latency within the
VDX switch (ingress to egress) is 1.8 microseconds when going between
port groups.


Terry






nick at foobar

Mar 19, 2011, 10:57 AM

Post #14 of 14 (3982 views)
Re: Features on Brocade Ethernet platforms [In reply to]

On 19/03/2011 14:28, Mr. Dickerson wrote:
> While I can not find the exact size of the buffers, we do support jumbo
> frames of 9216 bytes. We also support port-groups with latency between
> ports of the same group at 600 nanoseconds. Maximum latency within the
> VDX switch (ingress to egress) is 1.8 microseconds if going between port
> groups.

Terry, could you confirm which transceivers are supported on this unit?

Nick
