lmb at suse
Jul 18, 2012, 1:15 PM
Re: Antw: Bond mode for 2 node direct link
On 2012-07-18T20:01:35, Arnold Krille <arnold [at] arnoldarts> wrote:
> That would mean that your system runs the same whether one or two links are
That's not what I said. What I said (or at least meant ;-) is that, even
in the degraded state, the performance must still be within acceptable
limits. Hence, the performance boost gained by actually utilizing the
redundancy during the fault-free phase cannot be critical, only
optional. (Or put differently: nice, but not required.)
> And selling "two with double throughput but it also works when in fault-state"
> sells better than "two but you won't see it except for a slightly better
> fault-tolerance". And then count in the failure-probability of a direct link.
Agreed. But the question was what provides maximum fault tolerance.
I've seen too many cases where link down events were not detected
through bonding (because of intermittent switches or weird failure
modes). I'd be much happier if, instead of dumb bonding, something like
OSPF were used across the hosts ;-)
FWIW, heartbeat used to broadcast its traffic all the time.
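For what it's worth, the bonding driver's ARP monitor can catch some of the failure modes that plain carrier sensing (miimon) misses, since it probes the peer end to end rather than trusting the local link state. A minimal sketch, assuming a two-node direct link; the peer address is a placeholder you'd have to adjust:

```conf
# /etc/modprobe.d/bonding.conf -- sketch only, names/addresses are examples
# arp_interval/arp_ip_target make the driver send ARP probes to the peer,
# so a link that reports "up" but passes no traffic is still declared down.
# miimon, by contrast, only notices loss of local carrier.
options bonding mode=active-backup arp_interval=1000 arp_ip_target=192.168.1.2
```

Whether this actually helps depends on the failure mode, of course; it's still no substitute for a real routing protocol between the hosts.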
> And when the scenario is the prototype of HA: one service provided in HA with
> an active-backup-setup of two machines that do nothing else? then I want the
> interlink to be as reliable _and_ as fast as possible so I don't lose the
> last bit of information because the disk-mirroring hasn't pushed out the data
> fast enough because of using only one 1GB link where two were available.
You don't lose it. Mirroring is not asymmetric, and writes aren't
confirmed before fsync() et al. return. The performance impact is, of
course, a given.
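The acknowledgement point can be illustrated with a plain local-I/O sketch (nothing DRBD-specific, but synchronous mirroring gives the same contract across nodes): data is only guaranteed durable once fsync() has returned, so a slow interlink delays the application rather than silently dropping data.

```python
import os
import tempfile

# Sketch: write() only hands data to the page cache; durability is
# guaranteed only once fsync() returns. With synchronous mirroring the
# ack likewise arrives only after both copies have the data.
fd, path = tempfile.mkstemp()
try:
    os.write(fd, b"critical state\n")   # may still live only in the cache
    os.fsync(fd)                        # returns once data is on stable storage
finally:
    os.close(fd)

with open(path, "rb") as f:
    print(f.read())                     # b'critical state\n'
os.unlink(path)
```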
SUSE LINUX Products GmbH, GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer, HRB 21284 (AG Nürnberg)
"Experience is the name everyone gives to their mistakes." -- Oscar Wilde
Linux-HA mailing list
Linux-HA [at] lists
See also: http://linux-ha.org/ReportingProblems