andrew at beekhof
Dec 18, 2011, 2:07 PM
Post #2 of 2
Re: Antw: Re: Q on http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active

On Fri, Dec 16, 2011 at 11:34 PM, Ulrich Windl
<Ulrich.Windl [at] rz> wrote:
>>>> Dominik Klein <dominik.klein [at] googlemail> wrote on 16.12.2011 at 12:34 in
> message <4EEB2CDB.6020609 [at] googlemail>:
>> On 12/15/2011 11:19 AM, Ulrich Windl wrote:
>> > Hi!
>> > I have a problem with some client-server software (I don't want to
>> > name it here) where client and server both need an entry for inetd
>> > (xinetd). It's also possible that client and server are running on
>> > one machine.
>> > For a cluster solution I set up the server to run on one node only
>> > (using -INFINITY location constraints for the other nodes). I've
>> > added an RA that disables the inetd service when the server is to be
>> > down, and enables it when the server is to be up.
>> > Summary of a longer story: even when the resource is never intended
>> > to run anywhere else, the cluster checks the inetd service through
>> > the RA on every node, and then complains:
>> > Dec 15 09:10:55 h03 pengine: : WARN: See
>> > http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active for more
>> > information.
>> Then your RA is wrong. It should not say that things are running when
>> they are not.
> Please read again: The RA is right;
No. It's not. We check the resource state everywhere, not just where
it's "supposed" to live.
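The practical consequence is that the RA's monitor action must return OCF_NOT_RUNNING (7) on nodes where the service is absent or disabled, rather than claiming it is running. A minimal sketch of such a monitor, assuming the service counts as "running" when its xinetd config file exists and does not contain "disable = yes" (the RA name, config path, and parsing rule are illustrative, not from this thread):

```shell
#!/bin/sh
# Sketch of a monitor action for a hypothetical inetd-service RA.
# Standard OCF return codes:
OCF_SUCCESS=0
OCF_NOT_RUNNING=7

monitor() {
    conf="$1"   # e.g. /etc/xinetd.d/myservice (placeholder path)
    # A probe on a node where the service is absent must report
    # "not running", not an error -- otherwise the cluster believes
    # the resource is active in more than one place.
    [ -f "$conf" ] || return $OCF_NOT_RUNNING
    if grep -q 'disable[[:space:]]*=[[:space:]]*yes' "$conf"; then
        return $OCF_NOT_RUNNING
    fi
    return $OCF_SUCCESS
}
```

With a monitor shaped like this, the cluster-wide probes Andrew describes simply see "not running" on the other nodes and stay quiet.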
> the software just uses some strange concepts. I need to start something when it's down on the local node, whatever happened on the other nodes. Maybe I should just use a clone to start it everywhere, but I'm not sure I fully understand clones and their constraints.
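For the clone idea, a minimal crm shell sketch would look roughly like the following. The primitive and clone names, and the RA class/provider, are placeholders for whatever the actual agent is called:

```
# Hypothetical crm configuration sketch -- names are illustrative.
primitive inetd-svc ocf:local:inetd-service \
    op monitor interval=30s
clone cl-inetd-svc inetd-svc \
    meta interleave=true
```

A clone tells the cluster the resource is legitimately active on every node, so the "too active" warning no longer applies.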
>> > I'm afraid this might cause unintended fencing at some time.
>> > Are there any rather clean solutions for this problem?
>> > I was thinking of sending the node's name to the RA that checks the
>> > inetd service, and making it lie about the state on the other
>> > nodes, but I think that is a terrible solution.
>> > Cool ideas?
>> > Regards, Ulrich
>> > _______________________________________________ Linux-HA mailing
>> > list Linux-HA [at] lists
>> > http://lists.linux-ha.org/mailman/listinfo/linux-ha See also:
>> > http://linux-ha.org/ReportingProblems