lists at alteeve
Apr 2, 2012, 12:57 AM
Post #13 of 25
On 04/02/2012 12:39 AM, Felix Frank wrote:
> On 04/01/2012 11:57 PM, Arnold Krille wrote:
>> It's not the "network layer in drbd", it's "the sending buffer, the
>> switch, the receiving buffer, the remote disk latency, the sending
>> buffer, the switch, the receiving buffer" of DRBD with protocol C.
> if your DRBD setup comprises a switch (and hence probably lots of
> DRBD-unrelated traffic on the same NIC), the performance issues are
> well-deserved punishment you're getting.
> Whenever possible, DRBD should use a dedicated back-to-back link.
> Buffers should not pose much of an issue then, either.
Whoops, meant to say more on switches:
Using a switch will contain traffic between the ports used by DRBD. Of
course, you will very much want a dedicated interface (or, ideally, two
in Active/Passive bonding). You also need to look at the switch's
capabilities; there is more to a switch than its rated port speed. Make
sure the switch's internal performance is high enough to handle all of
your network load while leaving enough headroom for the additional DRBD
traffic. You will also want to raise your MTU as high as your equipment
allows, assuming you're using decent quality equipment. Realtek is
terrible, generally speaking.
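For what it's worth, the Active/Passive bond plus raised MTU described
above might look roughly like this on a Debian-style host using ifupdown
with the ifenslave package; interface names and addresses here are
placeholders, and both your NICs and your switch must actually pass
9000-byte frames for the MTU setting to work:

```
# /etc/network/interfaces -- a sketch, assuming eth1/eth2 are the
# dedicated DRBD NICs (names and addressing are examples only)
auto bond0
iface bond0 inet static
    address 10.20.0.1
    netmask 255.255.255.0
    mtu 9000                 # jumbo frames; NICs and switch must support this
    bond-slaves eth1 eth2
    bond-mode active-backup  # Active/Passive bonding, as above
    bond-miimon 100          # check link state every 100 ms
```

DRBD's resource config would then point at the bond0 address, so
replication traffic stays off the public interface entirely.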
Personally speaking, I use D-Link DGS-3120 series switches with Intel
NICs. I've just started testing the much less expensive DGS-1210 series,
and initial testing shows them to be perfectly capable.
TL;DR - Network equipment can't be crap, and not all gigabit is created
equal.
Papers and Projects: https://alteeve.com