joelja at bogus
Oct 30, 2011, 8:29 PM
Post #16 of 17
Sorry, this is late as far as this thread goes, but I think I'd add one
more thing, since I've got OOB networks big enough to have to add L3
boundaries in them...
Juniper's not the only vendor with this issue, by far...
On 9/19/11 13:59 , Jonathan Lassoff wrote:
> On Mon, Sep 19, 2011 at 1:42 PM, Pavel Lunin <plunin [at] senetsy> wrote:
>> 2011/9/17 Chris Evans <chrisccnpspam2 [at] gmail>
>>> Juniper devices have out of band ethernet ports, but have the HUGE HUGE
>>> downfall of being in the main routing table conflicting with every other
>> BTW, can anyone give a good real-world example of a _routed_ OOB management
>> network usage?
>> As far as I understand, the whole concept of the OOB MGT IP interface was
>> invented to make the management network totally isolated from any transit
>> traffic, for security reasons, back in the days when firewalls were not
>> trusted enough and lack of Internet connectivity was not that big an issue.
>> If you really need to implement this, you won't run into any routing
>> conflict, since it's a truly separate network, will you?
>> But nowadays not many folks run separate PCs for OOB MGT, totally apart
>> from their LAN, corporate environment, email, Internet, etc. Even if some
>> conservatives may still desire this sort of design, most NMSes need an
>> Internet connection to update something. In this case, yes, you bump into a
>> routing conflict using fxp0, but why use fxp0 in such a scenario?
So, I have a routed OOB, and proxy-arp is your friend. The netmask on
the management interface needs to be big enough to cover your whole OOB
network... organizing your address space such that this is feasible is
left as an exercise for the reader.
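To make that concrete, here's a minimal sketch of the routed-OOB-plus-proxy-arp
arrangement described above. The interface names, addresses, and the /16
supernet are my own illustrative assumptions, not from this thread: the managed
box gets a mask wide enough to cover the whole OOB range, and the OOB gateway
at the L3 boundary answers ARP on behalf of off-segment hosts.

```
## On the managed device: fxp0's mask covers the entire (assumed)
## 10.200.0.0/16 OOB space, so it ARPs directly for any OOB host.
set interfaces fxp0 unit 0 family inet address 10.200.3.5/16

## On the OOB gateway at the L3 boundary: proxy-arp on the interface
## facing this segment answers those ARPs for hosts behind the gateway.
set interfaces ge-0/0/1 unit 0 family inet address 10.200.3.1/24
set interfaces ge-0/0/1 unit 0 proxy-arp unrestricted
```

With this in place the managed device never needs a route for the rest of the
OOB network; it just ARPs, and the gateway replies with its own MAC.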
>> The only exception I know of (looks like a real design flaw by Juniper) is
>> the SRX cluster case, where you MUST use fxp0 interfaces if you want to
>> have access to particular members of a cluster. Otherwise you can only
>> access the virtual device as a whole, with no clue which particular node you
>> connect to. Not so big a problem in the real world, to be honest, but if you
>> are required to connect it to, say, NSM, which goes to the Internet through
>> the same SRX cluster, you've got a real pain in the rear (workarounds exist,
>> of course).
>> Sure, there are some good applications of fxp0 in the field, but this does
>> not much relate to real routing issues.
> I see two ways one can go about this. Either programmatically tunnel into an
> OOB L2 segment via a "bastion" host in an on-demand fashion, or point some
> routes (dynamically, or otherwise) into your internal network for management.
> The risk of pointing routes into your internal network, IMO, is that the
> distinction drawn by very specific ACLs for management access can begin to
> blur. RFC 1918 space can overlap, and public IPs within an internal
> network can sometimes overlap with an active transit path.
> It's more work to script things to work nicely, but I believe the dynamic
> tunneling method is the safer thing to do.
> In the spirit of Juniper's clean separation of the control and forwarding
> planes, it seems too bad that they end up using the kernel routing table to
> hold what goes into the forwarding hardware as well.
> If you have the blessing of being able to have an all-J network, or one with
> devices that all support multiple routing tables and routing-protocol
> separation (a la logical systems or routing instances), setting up
> separate routing tables for management vs. traffic-carrying is probably a
> good thing.
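A hedged sketch of what that separation can look like on a Junos box that
supports routing instances (instance name, interface, and next-hop are my own
assumptions): a virtual-router keeps the management routes, including their
default route, out of inet.0, so they can never conflict with transit routing.

```
## Hypothetical MGMT virtual-router: management routes live in MGMT.inet.0,
## not in the main inet.0 table used for transit traffic.
set routing-instances MGMT instance-type virtual-router
set routing-instances MGMT interface ge-0/0/0.0
set routing-instances MGMT routing-options static route 0.0.0.0/0 next-hop 10.200.0.1
```

Note that fxp0 itself historically could not be placed in a routing instance;
newer Junos releases address this with `set system management-instance`, which
moves fxp0 into a dedicated mgmt_junos instance.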
>> juniper-nsp mailing list juniper-nsp [at] puck