
Mailing List Archive: Linux-HA: Pacemaker

Proposed new stonith topology syntax

 

 



andrew at beekhof

Jan 2, 2012, 10:19 PM

Post #1 of 25
Proposed new stonith topology syntax

Does anyone have an opinion on the following schema and example?
I'm not a huge fan of the index field, but nor am I a fan of making it
order-sensitive (like groups).

Please keep in mind that the new topology section is optional and
would only be defined if:
- you wanted to specify the order in which multiple devices were tried, or
- if multiple devices need to be triggered for the node to be
considered fenced.

Most people will /NOT/ need to add this section to their configuration.

-- Andrew

<fencing-topology>
  <!-- pcmk-0 requires the devices named disk + network to complete -->
  <fencing-rule id="f-p0" node="pcmk-0">
    <device id-ref="disk"/>
    <device id-ref="network"/>
  </fencing-rule>

  <!-- pcmk-1 needs either the poison-pill or power device to complete
successfully -->
  <fencing-rule id="f-p1.1" node="pcmk-1" index="1" device="poison-pill"/>
  <fencing-rule id="f-p1.2" node="pcmk-1" index="2" device="power"/>

  <!-- pcmk-2 needs both the disk and network devices to complete
successfully OR the device named power -->
  <fencing-rule id="f-p2.1" node="pcmk-2" index="1">
    <device id-ref="disk"/>
    <device id-ref="network"/>
  </fencing-rule>
  <fencing-rule id="f-p2.2" node="pcmk-2" index="2" device="power"/>

</fencing-topology>

Conforming to:

<define name="element-stonith">
  <element name="fencing-topology">
    <zeroOrMore>
      <ref name="element-fencing"/>
    </zeroOrMore>
  </element>
</define>

<define name="element-fencing">
  <element name="fencing-rule">
    <attribute name="id"><data type="ID"/></attribute>
    <attribute name="node"><text/></attribute>
    <attribute name="index"><text/></attribute>
    <choice>
      <attribute name="device"><text/></attribute>
      <zeroOrMore>
        <element name="device">
          <attribute name="id-ref"><data type="IDREF"/></attribute>
        </element>
      </zeroOrMore>
    </choice>
  </element>
</define>

</grammar>

_______________________________________________
Pacemaker mailing list: Pacemaker [at] oss
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


linux at alteeve

Jan 3, 2012, 5:28 AM

Post #2 of 25
Re: Proposed new stonith topology syntax

On 01/03/2012 01:19 AM, Andrew Beekhof wrote:
> Does anyone have an opinion on the following schema and example?
> I'm not a huge fan of the index field, but nor am I of making it
> sensitive to order (like groups).
>
> Please keep in mind that the new topology section is optional and
> would only be defined if:
> - you wanted to specify the order in which multiple devices were tried, or
> - if multiple devices need to be triggered for the node to be
> considered fenced.
>
> Most people will /NOT/ need to add this section to their configuration.

A common configuration (at least in my world) is to use IPMI/iLO/etc +
switched PDU for fencing. Whenever possible, the IPMI fencing should be
the primary device, because it has the ability to confirm a node's "off"
state, making it more trustworthy than fencing via PDU.

When a PDU is needed though (i.e. the node lost its PSU so the BMC is down),
with redundant power supplies, two separate PDUs need to both
successfully cut power to consider the fence complete.

I mention this to show that ordered and multiple device fencing isn't
that unusual. :)
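
For concreteness, this IPMI-first, dual-PDU-fallback setup might look
like the following under the proposed syntax (the node and device names
here are illustrative, not from the thread):

```xml
<fencing-topology>
  <!-- try IPMI first; if it fails, both PDU outlets must be cut -->
  <fencing-rule id="f-n1.1" node="node-1" index="1" device="ipmi-node1"/>
  <fencing-rule id="f-n1.2" node="node-1" index="2">
    <device id-ref="pdu1-outlet1"/>
    <device id-ref="pdu2-outlet1"/>
  </fencing-rule>
</fencing-topology>
```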

> [...]

I wish I was more familiar with pacemaker to make an intelligent
comment. However, this looks good to me. I don't see an example of a
multi-port fence device, but I am assuming that's abstracted away for
the simplicity of the example?

Cheers!

--
Digimer
E-Mail: digimer [at] alteeve
Freenode handle: digimer
Papers and Projects: http://alteeve.com
Node Assassin: http://nodeassassin.org
"omg my singularity battery is dead again.
stupid hawking radiation." - epitron



andrew at beekhof

Jan 3, 2012, 3:03 PM

Post #3 of 25
Re: Proposed new stonith topology syntax

On Wed, Jan 4, 2012 at 12:28 AM, Digimer <linux [at] alteeve> wrote:
> On 01/03/2012 01:19 AM, Andrew Beekhof wrote:
>> Does anyone have an opinion on the following schema and example?
>> I'm not a huge fan of the index field, but nor am I of making it
>> sensitive to order (like groups).
>>
>> Please keep in mind that the new topology section is optional and
>> would only be defined if:
>>  - you wanted to specify the order in which multiple devices were tried, or
>>  - if multiple devices need to be triggered for the node to be
>> considered fenced.
>>
>> Most people will /NOT/ need to add this section to their configuration.
>
> A common configuration (at least in my world) is to use IPMI/iLO/etc +
> switched PDU for fencing. When ever possible, the IPMI fencing should be
> primary device, because it has the ability to confirm a node's "off"
> state making it more trustworthy than fencing via PDU.
>
> When a PDU is needed though (ie: node lost it's PSU so the BMC is down),
> with redundant power supplies, two separate PDUs need to both
> successfully cut power to consider the fence complete.
>
> I mention this to show that ordered and multiple device fencing isn't
> that unusual. :)

What you describe is already quite complex relative to "but I don't
want /any/ fencing" :-)

>
>> [...]
>
> I wish I was more familiar with pacemaker to make an intelligent
> comment. However, this looks good to me. I don't see an example of a
> multi-port fence device, but I am assuming that's abstracted away for
> the simplicity of the example?

Right, the devices are defined elsewhere and do support multi-port switches.
This section defines how those devices are used in combination with each other.

(Similar to how resources are defined together and then the
constraints describe how they relate to each other).

> Cheers!
>
> --
> Digimer
> E-Mail:              digimer [at] alteeve
> Freenode handle:     digimer
> Papers and Projects: http://alteeve.com
> Node Assassin:       http://nodeassassin.org
> "omg my singularity battery is dead again.
> stupid hawking radiation." - epitron



dejanmm at fastmail

Jan 17, 2012, 11:00 AM

Post #4 of 25
Re: Proposed new stonith topology syntax

Hello,

On Tue, Jan 03, 2012 at 05:19:14PM +1100, Andrew Beekhof wrote:
> Does anyone have an opinion on the following schema and example?
> I'm not a huge fan of the index field, but nor am I of making it
> sensitive to order (like groups).

What is wrong with order in XML elements? It seems like a very
clear way to express order to me.

> Please keep in mind that the new topology section is optional and
> would only be defined if:
> - you wanted to specify the order in which multiple devices were tried, or
> - if multiple devices need to be triggered for the node to be
> considered fenced.

Triggered serially I guess? Is there a possibility to express
fencing nodes simultaneously?

> Most people will /NOT/ need to add this section to their configuration.
>
> [...]

I'd rather use "stonith-resource" than "device", because what is
referenced is a stonith resource (one device may be used in more
than one stonith resource). Or "stonith-rsc" if you're in the
shortcuts mood. Or perhaps even "agent".

"fencing-rule" for whatever reason doesn't sound just right, but
I have no alternative suggestion.

IMO, as I already said earlier, index is superfluous.

It could also be helpful to consider multiple nodes in a single
element.

Otherwise, looks fine to me.

Thanks,

Dejan




andrew at beekhof

Jan 17, 2012, 11:58 PM

Post #5 of 25
Re: Proposed new stonith topology syntax

On Wed, Jan 18, 2012 at 6:00 AM, Dejan Muhamedagic <dejanmm [at] fastmail> wrote:
> Hello,
>
> On Tue, Jan 03, 2012 at 05:19:14PM +1100, Andrew Beekhof wrote:
>> Does anyone have an opinion on the following schema and example?
>> I'm not a huge fan of the index field, but nor am I of making it
>> sensitive to order (like groups).
>
> What is wrong with order in XML elements? It seems like a very
> clear way to express order to me.

Because we end up with the same update issues as for groups.

>
>> Please keep in mind that the new topology section is optional and
>> would only be defined if:
>>  - you wanted to specify the order in which multiple devices were tried, or
>>  - if multiple devices need to be triggered for the node to be
>> considered fenced.
>
> Triggered serially I guess?

Yes.

> Is there a possibility to express
> fencing nodes simultaneously?

No. It's regular boolean short-circuit semantics.
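
Those short-circuit semantics can be sketched as follows (an
illustrative model only, not Pacemaker code; all names are made up):
levels are tried serially in index order; within a level every device
must succeed (AND, stopping at the first failure); the node is
considered fenced as soon as one whole level succeeds (OR across
levels).

```python
def fence_node(levels, try_device):
    """levels: lists of device names, ordered by index.
    try_device: callable that attempts one device, True on success."""
    for level in levels:
        # AND within a level: all() short-circuits on the first failure
        if all(try_device(dev) for dev in level):
            return True  # this level completed in full; stop here
    return False  # every level failed; the node could not be fenced
```

For example, with levels `[["disk", "network"], ["power"]]` and only
"power" working, the first level fails at "disk" and the second level
succeeds, so the node counts as fenced.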

>> [...]
>
> I'd rather use "stonith-resource" than "device", because what is
> referenced is a stonith resource (one device may be used in more
> than one stonith resource).

Can you rephrase that? I don't follow. Are you talking about a group
of fencing devices?

> Or "stonith-rsc" if you're in the
> shortcuts mood. Or perhaps even "agent".
>
> "fencing-rule" for whatever reason doesn't sound just right, but
> I have no alternative suggestion.

Agreed.

>
> IMO, as I already said earlier, index is superfluous.
>
> It could also be helpful to consider multiple nodes in a single
> element.
>
> Otherwise, looks fine to me.
>
> Thanks,
>
> Dejan
>



dejanmm at fastmail

Jan 18, 2012, 5:15 AM

Post #6 of 25
Re: Proposed new stonith topology syntax

On Wed, Jan 18, 2012 at 06:58:20PM +1100, Andrew Beekhof wrote:
> On Wed, Jan 18, 2012 at 6:00 AM, Dejan Muhamedagic <dejanmm [at] fastmail> wrote:
> > Hello,
> >
> > On Tue, Jan 03, 2012 at 05:19:14PM +1100, Andrew Beekhof wrote:
> >> Does anyone have an opinion on the following schema and example?
> >> I'm not a huge fan of the index field, but nor am I of making it
> >> sensitive to order (like groups).
> >
> > What is wrong with order in XML elements? It seems like a very
> > clear way to express order to me.
>
> Because we end up with the same update issues as for groups.

OK.

[...]

> > Is there a possibility to express
> > fencing nodes simultaneously?
>
> No. It's regular boolean short-circuit semantics.

As digimer mentioned, it is one common use case, e.g. for hosts
with multiple power supplies. So far, we have recommended lights-out
devices for such hardware configurations, and if those are
monitored and more or less reliable, such a setup should be fine.
It would still be good to have a way to express it if some day
somebody actually implements it. I guess that the schema can be
easily extended by adding a "simultaneous" attribute to the
"fencing-rule" element.

> >> [...]
> >
> > I'd rather use "stonith-resource" than "device", because what is
> > referenced is a stonith resource (one device may be used in more
> > than one stonith resource).
>
> Can you rephrase that? I don't follow. Are you talking about a group
> of fencing devices?

No, just about naming. The element/attribute name "device"
doesn't seem right to me, because it references a stonith
resource. One (physical) device may be used by more than one
stonith resource. Even though "device" certainly sounds nicer,
it isn't precise. What I'm worried about is that it may be
confusing (and we have enough confusion with stonith).
(Or did I completely misunderstand the meaning of "device"?)

Thanks,

Dejan

> > Or "stonith-rsc" if you're in the
> > shortcuts mood. Or perhaps even "agent".
> >
> > "fencing-rule" for whatever reason doesn't sound just right, but
> > I have no alternative suggestion.
>
> Agreed.
>
> >
> > IMO, as I already said earlier, index is superfluous.
> >
> > It could also be helpful to consider multiple nodes in a single
> > element.
> >
> > Otherwise, looks fine to me.
> >
> > Thanks,
> >
> > Dejan
> >



linux at alteeve

Jan 18, 2012, 7:08 AM

Post #7 of 25
Re: Proposed new stonith topology syntax

On 01/18/2012 08:15 AM, Dejan Muhamedagic wrote:
>>> Is there a possibility to express
>>> fencing nodes simultaneously?
>>
>> No. Its regular boolean shortcut semantics.
>
> As digimer mentioned, it is one common use case, i.e. for hosts
> with multiple power supplies. So far, we recommended lights-out
> devices for such hardware configurations and if those are
> monitored and more or less reliable such a setup should be fine.
> It would still be good to have a way to express it if some day
> somebody actually implements it. I guess that the schema can be
> easily extended by adding a "simultaneous" attribute to the
> "fencing-rule" element.

If I may restate:

Out-of-band management devices (iLO, IPMI, whatever) have two fatal flaws
which make them unreliable as sole fence devices: they share their power
with the host, and they (generally) have only one network link. If the
node's PSU fails, or if the network link/BMC fails, fencing fails.

A PDU as a backup protects against this, but is not ideal because it can't
confirm a node's power state. This is why I strongly recommend that
people use ordered fencing; out-of-band management should always be
tried first, because if it works, you know for certain the node is dead.
The PDU must be available as a backup, but only be used as such.

This is why I argue so strongly for ordered fencing.

>>> I'd rather use "stonith-resource" than "device", because what is
>>> referenced is a stonith resource (one device may be used in more
>>> than one stonith resource).
>>
>> Can you rephrase that? I don't follow. Are you talking about a group
>> of fencing devices?
>
> No, just about naming. The element/attribute name "device"
> doesn't seem right to me, because it references a stonith
> resource. One (physical) device may be used by more than one
> stonith resource. Even though "device" certainly sounds nicer,
> it isn't precise. What I'm worried about is that it may be
> confusing (and we have enough confusion with stonith).
> (Or did I completely misunderstand the meaning of "device"?)
>
> Thanks,
>
> Dejan

Red Hat clusters call these "fence methods", with each "method"
containing one or more fence "devices". With IPMI, there is only one
device. With redundant PSUs across two PDUs, you have two devices in the
"method". All devices in a method must succeed for the fence method to
succeed.

It would, if nothing else, help people migrating to pacemaker from rhcs
if similar names were used.

<fence>
  <method name="ipmi">
    <device name="ipmi_an01" action="reboot" />
  </method>
  <method name="pdu">
    <device name="pdu1" port="1" action="reboot" />
    <device name="pdu2" port="1" action="reboot" />
  </method>
</fence>

--
Digimer
E-Mail: digimer [at] alteeve
Freenode handle: digimer
Papers and Projects: http://alteeve.com
Node Assassin: http://nodeassassin.org
"omg my singularity battery is dead again.
stupid hawking radiation." - epitron



dejanmm at fastmail

Jan 18, 2012, 10:02 AM

Post #8 of 25
Re: Proposed new stonith topology syntax

Hi,

On Wed, Jan 18, 2012 at 10:08:28AM -0500, Digimer wrote:
> On 01/18/2012 08:15 AM, Dejan Muhamedagic wrote:
> >>> Is there a possibility to express
> >>> fencing nodes simultaneously?
> >>
> >> No. Its regular boolean shortcut semantics.
> >
> > As digimer mentioned, it is one common use case, i.e. for hosts
> > with multiple power supplies. So far, we recommended lights-out
> > devices for such hardware configurations and if those are
> > monitored and more or less reliable such a setup should be fine.
> > It would still be good to have a way to express it if some day
> > somebody actually implements it. I guess that the schema can be
> > easily extended by adding a "simultaneous" attribute to the
> > "fencing-rule" element.
>
> If I may restate;
>
> Out of band management devices (iLO, IPMI, w/e) have two fatal flaws
> which make them unreliable as sole fence devices; They share their power
> with the host and they (generally) have only one network link. If the
> node's PSU fails, or if the network link/BMC fails, fencing fails.

I thought we were talking about computers with two PSUs. If both
fail, that's already two faults, and (our) clusters don't protect
against multiple faults. As for the rest (network connection, etc.),
it's not shared with the host, and if there's a failure in any of
these components it should be detected by the next monitor
operation on the stonith resource, giving enough time to repair.
In short, a fencing device is not a SPOF.

> A PDU as a backup protects against this, but is not ideal as it can't
> confirm a node's power state.

Why is that? If you ask PDU to disconnect power to the host and
that command succeeds how high is the probability that the CPU is
still running? Or am I missing something?

> This is why I strongly recommend for
> people to use ordered fencing; out-of-band management should always be
> tried first because if it works, you know for certain the node is dead.
> The PDU must be available as a backup, but only be used as such.
>
> This is why I argue so strongly for ordered fencing.
>
> >>> I'd rather use "stonith-resource" than "device", because what is
> >>> referenced is a stonith resource (one device may be used in more
> >>> than one stonith resource).
> >>
> >> Can you rephrase that? I don't follow. Are you talking about a group
> >> of fencing devices?
> >
> > No, just about naming. The element/attribute name "device"
> > doesn't seem right to me, because it references a stonith
> > resource. One (physical) device may be used by more than one
> > stonith resource. Even though "device" certainly sounds nicer,
> > it isn't precise. What I'm worried about is that it may be
> > confusing (and we have enough confusion with stonith).
> > (Or did I completely misunderstand the meaning of "device"?)
> >
> > Thanks,
> >
> > Dejan
>
> Red Hat clusters call these "Fence Methods", with each "method"
> containing one or more fence "devices". With the IPMI, there is only one
> device. With Redundant PSUs across two PDUs, you have two devices in the
> "method". All devices in a method must succeed for the fence method to
> succeed.
>
> It would, if nothing else, help people migrating to pacemaker from rhcs
> if similar names were used.

Pacemaker is already using terminology different from RHCS. I'm
not at all against using similar (or same) names, but it's
too late for that. Introducing RHCS-specific names to co-exist
with Pacemaker names... well, how is that going to help?

Thanks,

Dejan

> <fence>
> <method name="ipmi">
> <device name="ipmi_an01" action="reboot" />
> </method>
> <method name="pdu">
> <device name="pdu1" port="1" action="reboot" />
> <device name="pdu2" port="1" action="reboot" />
> </method>
> </fence>
>
> --
> Digimer
> E-Mail: digimer [at] alteeve
> Freenode handle: digimer
> Papers and Projects: http://alteeve.com
> Node Assassin: http://nodeassassin.org
> "omg my singularity battery is dead again.
> stupid hawking radiation." - epitron
>



linux at alteeve

Jan 18, 2012, 1:23 PM

Post #9 of 25
Re: Proposed new stonith topology syntax

On 01/18/2012 01:02 PM, Dejan Muhamedagic wrote:
>> If I may restate;
>>
>> Out of band management devices (iLO, IPMI, w/e) have two fatal flaws
>> which make them unreliable as sole fence devices; They share their power
>> with the host and they (generally) have only one network link. If the
>> node's PSU fails, or if the network link/BMC fails, fencing fails.
>
> I thought we were talking about computers with two PSU. If both
> fail, that's already two faults and (our) clusters don't protect
> from multiple faults. As for the rest (network connection, etc)
> it's not shared with the host and if there's a failure in any of
> these components it should be detected by the next monitor
> operation on the stonith resource giving enough time to repair.
> In short, a fencing device is not a SPOF.

I was talking about what a fence needs in order to succeed. So: a node
has a redundant PSU, with each power cable going to a different PDU. For
the fence method to succeed, both actions must succeed (confirmed
switching off of both outlets).

So I was talking (in this case) about the actual fence action succeeding
or failing.
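In the proposed syntax, that requirement (both outlets confirmed off
before the node counts as fenced) would presumably map to a single rule
listing both devices; the device names here are illustrative:

  <fencing-rule id="f-pdu" node="pcmk-0">
    <device id-ref="pdu1-outlet"/>
    <device id-ref="pdu2-outlet"/>
  </fencing-rule>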

>> A PDU as a backup protects against this, but is not ideal as it can't
>> confirm a node's power state.
>
> Why is that? If you ask PDU to disconnect power to the host and
> that command succeeds how high is the probability that the CPU is
> still running? Or am I missing something?

Two cases where this fails, both PEBKAC, but still real.

One: a redundant PSU where only one link was configured (or two, or
three, whatever).
Two: an admin moves the power cable to another outlet sometime between
the original configuration/testing and the need to fence.

Never underestimate the power of stupidity or the dangers of working
late. :)

>> Red Hat clusters call these "Fence Methods", with each "method"
>> containing one or more fence "devices". With the IPMI, there is only one
>> device. With Redundant PSUs across two PDUs, you have two devices in the
>> "method". All devices in a method must succeed for the fence method to
>> succeed.
>>
>> It would, if nothing else, help people migrating to pacemaker from rhcs
>> if similar names were used.
>
> Pacemaker is already using terminology different from RHCS. I'm
> not at all against using similar (or same) names, but it's
> too late for that. Introducing RHCS specific names to co-exist
> with Pacemaker names... well, how is that going to help?
>
> Thanks,
>
> Dejan

If it's set, then it is set and there is no more discussion to be had.
To answer your question though;

Come EL7 (or whenever Pacemaker gains full support), as rgmanager is
phased out, all the existing RHCS clusters will need to be migrated.
More pressing: the admins who managed those clusters will need to be
retrained. I would argue that everything that can be done to smooth that
migration should be done, including seemingly trivial things like naming
conventions.

Cheers



andrew at beekhof

Jan 19, 2012, 6:09 PM

Post #10 of 25 (3809 views)
Permalink
Re: Proposed new stonith topology syntax [In reply to]

On Thu, Jan 19, 2012 at 12:15 AM, Dejan Muhamedagic <dejanmm [at] fastmail> wrote:
> On Wed, Jan 18, 2012 at 06:58:20PM +1100, Andrew Beekhof wrote:
>> On Wed, Jan 18, 2012 at 6:00 AM, Dejan Muhamedagic <dejanmm [at] fastmail> wrote:
>> > Hello,
>> >
>> > On Tue, Jan 03, 2012 at 05:19:14PM +1100, Andrew Beekhof wrote:
>> >> Does anyone have an opinion on the following schema and example?
>> >> I'm not a huge fan of the index field, but nor am I of making it
>> >> sensitive to order (like groups).
>> >
>> > What is wrong with order in XML elements? It seems like a very
>> > clear way to express order to me.
>>
>> Because we end up with the same update issues as for groups.
>
> OK.
>
> [...]
>
>> > Is there a possibility to express
>> > fencing nodes simultaneously?
>>
>> No. It's regular boolean short-circuit semantics.
>
> As digimer mentioned, it is one common use case, i.e. for hosts
> with multiple power supplies. So far, we recommended lights-out
> devices for such hardware configurations and if those are
> monitored and more or less reliable such a setup should be fine.
> It would still be good to have a way to express it if some day
> somebody actually implements it. I guess that the schema can be
> easily extended by adding a "simultaneous" attribute to the
> "fencing-rule" element.

So in the example below, you'd want the ability to not just trigger
the 'disk' and 'network' devices, but the ability to trigger them at
the same time?
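(Dejan's suggested extension would presumably look something like the
following; the "simultaneous" attribute is hypothetical and not part of
the proposed schema:

  <fencing-rule id="f-p0" node="pcmk-0" simultaneous="true">
    <device id-ref="disk"/>
    <device id-ref="network"/>
  </fencing-rule>

All devices in the rule would still have to succeed; the attribute would
only change whether they are triggered in sequence or together.)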

>
>> >> Most people will /NOT/ need to add this section to their configuration.
>> >>
>> >> -- Andrew
>> >>
>> >> <fencing-topology>
>> >>   <!-- pcmk-0 requires the devices named disk + network to complete -->
>> >>   <fencing-rule id="f-p0" node="pcmk-0">
>> >>     <device id-ref="disk"/>
>> >>     <device id-ref="network"/>
>> >>   </fencing-rule>
>> >>
>> >>   <!-- pcmk-1 needs either the poison-pill or power device to complete
>> >> successfully -->
>> >>   <fencing-rule id="f-p1.1" node="pcmk-1" index="1" device="poison-pill"/>
>> >>   <fencing-rule id="f-p1.2" node="pcmk-1" index="2" device="power">
>> >>
>> >>   <!-- pcmk-1 needs either the disk and network devices to complete
>> >> successfully OR the device named power -->
>> >>   <fencing-rule id="f-p2.1" node="pcmk-2" index="1">
>> >>     <device id-ref="disk"/>
>> >>     <device id-ref="network"/>
>> >>   </fencing-rule>
>> >>   <fencing-rule id="f-p2.2" node="pcmk-2" index="2" device="power"/>
>> >>
>> >> </fencing-topology>
>> >>
>> >> Conforming to:
>> >>
>> >>   <define name="element-stonith">
>> >>     <element name="fencing-topology">
>> >>       <zeroOrMore>
>> >>       <ref name="element-fencing"/>
>> >>       </zeroOrMore>
>> >>     </element>
>> >>   </define>
>> >>
>> >>   <define name="element-fencing">
>> >>     <element name="fencing-rule">
>> >>       <attribute name="id"><data type="ID"/></attribute>
>> >>       <attribute name="node"><text/></attribute>
>> >>       <attribute name="index"><text/></attribute>
>> >>       <choice>
>> >>       <attribute name="device"><text/></attribute>
>> >>       <zeroOrMore>
>> >>         <element name="device">
>> >>           <attribute name="id-ref"><data type="IDREF"/></attribute>
>> >>         </element>
>> >>       </zeroOrMore>
>> >>       </choice>
>> >>     </element>
>> >>   </define>
>> >
>> > I'd rather use "stonith-resource" than "device", because what is
>> > referenced is a stonith resource (one device may be used in more
>> > than one stonith resource).
>>
>> Can you rephrase that? I don't follow.  Are you talking about a group
>> of fencing devices?
>
> No, just about naming. The element/attribute name "device"
> doesn't seem right to me, because it references a stonith
> resource. One (physical) device may be used by more than one
> stonith resource. Even though "device" certainly sounds nicer,
> it isn't precise.

Oh, I see what you mean. I'll see what I can come up with.

> What I'm worried about is that it may be
> confusing (and we have enough confusion with stonith).
> (Or did I completely misunderstand the meaning of "device"?)
>
> Thanks,
>
> Dejan
>
>> > Or "stonith-rsc" if you're in the
>> > shortcuts mood. Or perhaps even "agent".
>> >
>> > "fencing-rule" for whatever reason doesn't sound just right, but
>> > I have no alternative suggestion.
>>
>> Agreed.
>>
>> >
>> > IMO, as I already said earlier, index is superfluous.
>> >
>> > It could also be helpful to consider multiple nodes in a single
>> > element.
>> >
>> > Otherwise, looks fine to me.
>> >
>> > Thanks,
>> >
>> > Dejan
>> >
>> >> </grammar>
>> >>


andrew at beekhof

Jan 19, 2012, 6:12 PM

Post #11 of 25 (3814 views)
Permalink
Re: Proposed new stonith topology syntax [In reply to]

On Thu, Jan 19, 2012 at 8:23 AM, Digimer <linux [at] alteeve> wrote:
> [...]
>
> If it's set, then it is set and there is no more discussion to be had.

It's not set in stone yet, but I don't think the term "method" works in
the Pacemaker context.



dejanmm at fastmail

Jan 20, 2012, 5:18 AM

Post #12 of 25 (3802 views)
Permalink
Re: Proposed new stonith topology syntax [In reply to]

On Fri, Jan 20, 2012 at 01:09:56PM +1100, Andrew Beekhof wrote:
> On Thu, Jan 19, 2012 at 12:15 AM, Dejan Muhamedagic <dejanmm [at] fastmail> wrote:
> > On Wed, Jan 18, 2012 at 06:58:20PM +1100, Andrew Beekhof wrote:
> >> On Wed, Jan 18, 2012 at 6:00 AM, Dejan Muhamedagic <dejanmm [at] fastmail> wrote:
> >> > Hello,
> >> >
> >> > On Tue, Jan 03, 2012 at 05:19:14PM +1100, Andrew Beekhof wrote:
> >> >> Does anyone have an opinion on the following schema and example?
> >> >> I'm not a huge fan of the index field, but nor am I of making it
> >> >> sensitive to order (like groups).
> >> >
> >> > What is wrong with order in XML elements? It seems like a very
> >> > clear way to express order to me.
> >>
> >> Because we end up with the same update issues as for groups.
> >
> > OK.
> >
> > [...]
> >
> >> > Is there a possibility to express
> >> > fencing nodes simultaneously?
> >>
> >> No.  Its regular boolean shortcut semantics.
> >
> > As digimer mentioned, it is one common use case, i.e. for hosts
> > with multiple power supplies. So far, we recommended lights-out
> > devices for such hardware configurations and if those are
> > monitored and more or less reliable such a setup should be fine.
> > It would still be good to have a way to express it if some day
> > somebody actually implements it. I guess that the schema can be
> > easily extended by adding a "simultaneous" attribute to the
> > "fencing-rule" element.
>
> So in the example below, you'd want the ability to not just trigger
> the 'disk' and 'network' devices, but the ability to trigger them at
> the same time?

Right.

> [...]
> >> >
> >> > I'd rather use "stonith-resource" than "device", because what is
> >> > referenced is a stonith resource (one device may be used in more
> >> > than one stonith resource).
> >>
> >> Can you rephrase that? I don't follow.  Are you talking about a group
> >> of fencing devices?
> >
> > No, just about naming. The element/attribute name "device"
> > doesn't seem right to me, because it references a stonith
> > resource. One (physical) device may be used by more than one
> > stonith resource. Even though "device" certainly sounds nicer,
> > it isn't precise.
>
> Oh, I see what you mean. I'll see what I can come up with.

OK.

Cheers,

Dejan



andrew at beekhof

Jan 22, 2012, 1:55 PM

Post #13 of 25 (3787 views)
Permalink
Re: Proposed new stonith topology syntax [In reply to]

On Sat, Jan 21, 2012 at 12:18 AM, Dejan Muhamedagic <dejanmm [at] fastmail> wrote:
> [...]
>>
>> So in the example below, you'd want the ability to not just trigger
>> the 'disk' and 'network' devices, but the ability to trigger them at
>> the same time?
>
> Right.

For any particular reason? Or just in case?



dejanmm at fastmail

Jan 23, 2012, 12:20 PM

Post #14 of 25 (3774 views)
Permalink
Re: Proposed new stonith topology syntax [In reply to]

On Mon, Jan 23, 2012 at 08:55:02AM +1100, Andrew Beekhof wrote:
> [...]
> >>
> >> So in the example below, you'd want the ability to not just trigger
> >> the 'disk' and 'network' devices, but the ability to trigger them at
> >> the same time?
> >
> > Right.
>
> For any particular reason? Or just in case?

For nodes with multiple PSUs and without a (supported) management
board. I think that one of our APC stonith agents can turn more
than one port off simultaneously.

Thanks,

Dejan



andrew at beekhof

Jan 23, 2012, 8:11 PM

Post #15 of 25 (3773 views)
Permalink
Re: Proposed new stonith topology syntax [In reply to]

On Tue, Jan 24, 2012 at 7:20 AM, Dejan Muhamedagic <dejanmm [at] fastmail> wrote:
> [...]
>> >>
>> >> So in the example below, you'd want the ability to not just trigger
>> >> the 'disk' and 'network' devices, but the ability to trigger them at
>> >> the same time?
>> >
>> > Right.
>>
>> For any particular reason?  Or just in case?
>
> For nodes with multiple PSUs and without a (supported) management
> board.

That still doesn't explain why the 'off' commands would need to be
simultaneous though.
To turn the node off, both devices just need to turn the port off...
there's no requirement that this happens simultaneously.

> I think that one of our APC stonith agents can turn more
> than one port off simultaneously.

If they're for the same host and device, then you don't even need this.
Just specify two ports in the host_map.
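For example, a stonith resource along these lines (the agent name,
address and port numbers are illustrative) maps two PDU ports to one
host via the pcmk_host_map parameter:

  <primitive id="pdu" class="stonith" type="fence_apc_snmp">
    <instance_attributes id="pdu-params">
      <nvpair id="pdu-ip" name="ipaddr" value="192.168.1.10"/>
      <nvpair id="pdu-map" name="pcmk_host_map" value="pcmk-1:1,2"/>
    </instance_attributes>
  </primitive>

A single fencing operation against pcmk-1 then acts on both outlets.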

If they're not for the same host, then they're not even covered by the
same fencing operation and will never be simultaneous.

If they're for the same host but different devices, then at most
you'll get the commands sent in parallel; guaranteeing simultaneity is
near impossible.



dejanmm at fastmail

Jan 24, 2012, 7:22 AM

Post #16 of 25 (3785 views)
Permalink
Re: Proposed new stonith topology syntax [In reply to]

On Tue, Jan 24, 2012 at 03:11:31PM +1100, Andrew Beekhof wrote:
> [...]
> >>
> >> For any particular reason?  Or just in case?
> >
> > For nodes with multiple PSUs and without a (supported) management
> > board.
>
> That still doesn't explain why the 'off' commands would need to be
> simultaneous though.
> To turn the node off, both devices just need to turn the port off...
> there's no requirement that this happens simultaneously.

OK, right. What I had in mind was actually the default reset
action.

> > I think that one of our APC stonith agents can turn more
> > than one port off simultaneously.
>
> If they're for the same host and device, then you don't even need this.
> Just specify two ports in the host_map.

Cool. Didn't look into it. How would that work with, say,
external/rackpdu (which uses snmpset(8) to manage ports)? That agent
can either use the names_oid to fetch ports by itself (in which
case they must be named after the nodes) or this:

outlet_config (string): Configuration file; another way to recognize
the outlet number by node name. The configuration file contains
node_name=outlet_number
strings.

Example:
server1=1
server2=2

Now, how does stonithd know which parameter to use to pass the
outlet (port) number from the host_map list to the agent? I
assume that the agent should have a matching API. Does this work
only with RH fence agents?

> If they're not for the same host, then they're not even covered by the
> same fencing operation and will never be simultaneous.
>
> If they're for the same host but different devices, then at most
> you'll get the commands sent in parallel, guaranteeing simultaneous is
> near impossible.

Yes, what I meant is almost simultaneous, i.e. that both ports
are turned "off" at the same time for a while. I'm not sure how
it works in reality. For instance, how long does the reset
command keep the power off on the outlet? So, it should be
"simultaneous enough" :)

Cheers,

Dejan

> _______________________________________________
> Pacemaker mailing list: Pacemaker [at] oss
> http://oss.clusterlabs.org/mailman/listinfo/pacemaker
>
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org

_______________________________________________
Pacemaker mailing list: Pacemaker [at] oss
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


andrew at beekhof

Jan 24, 2012, 4:24 PM

Post #17 of 25 (3772 views)
Permalink
Re: Proposed new stonith topology syntax [In reply to]

On Wed, Jan 25, 2012 at 2:22 AM, Dejan Muhamedagic <dejanmm [at] fastmail> wrote:
> On Tue, Jan 24, 2012 at 03:11:31PM +1100, Andrew Beekhof wrote:
>> On Tue, Jan 24, 2012 at 7:20 AM, Dejan Muhamedagic <dejanmm [at] fastmail> wrote:
>> > On Mon, Jan 23, 2012 at 08:55:02AM +1100, Andrew Beekhof wrote:
>> >> On Sat, Jan 21, 2012 at 12:18 AM, Dejan Muhamedagic <dejanmm [at] fastmail> wrote:
>> >> > On Fri, Jan 20, 2012 at 01:09:56PM +1100, Andrew Beekhof wrote:
>> >> >> On Thu, Jan 19, 2012 at 12:15 AM, Dejan Muhamedagic <dejanmm [at] fastmail> wrote:
>> >> >> > On Wed, Jan 18, 2012 at 06:58:20PM +1100, Andrew Beekhof wrote:
>> >> >> >> On Wed, Jan 18, 2012 at 6:00 AM, Dejan Muhamedagic <dejanmm [at] fastmail> wrote:
>> >> >> >> > Hello,
>> >> >> >> >
>> >> >> >> > On Tue, Jan 03, 2012 at 05:19:14PM +1100, Andrew Beekhof wrote:
>> >> >> >> >> Does anyone have an opinion on the following schema and example?
>> >> >> >> >> I'm not a huge fan of the index field, but nor am I of making it
>> >> >> >> >> sensitive to order (like groups).
>> >> >> >> >
>> >> >> >> > What is wrong with order in XML elements? It seems like a very
>> >> >> >> > clear way to express order to me.
>> >> >> >>
>> >> >> >> Because we end up with the same update issues as for groups.
>> >> >> >
>> >> >> > OK.
>> >> >> >
>> >> >> > [...]
>> >> >> >
>> >> >> >> > Is there a possibility to express
>> >> >> >> > fencing nodes simultaneously?
>> >> >> >>
>> >> >> >> No.  Its regular boolean shortcut semantics.
>> >> >> >
>> >> >> > As digimer mentioned, it is one common use case, i.e. for hosts
>> >> >> > with multiple power supplies. So far, we recommended lights-out
>> >> >> > devices for such hardware configurations and if those are
>> >> >> > monitored and more or less reliable such a setup should be fine.
>> >> >> > It would still be good to have a way to express it if some day
>> >> >> > somebody actually implements it. I guess that the schema can be
>> >> >> > easily extended by adding a "simultaneous" attribute to the
>> >> >> > "fencing-rule" element.
>> >> >>
>> >> >> So in the example below, you'd want the ability to not just trigger
>> >> >> the 'disk' and 'network' devices, but the ability to trigger them at
>> >> >> the same time?
>> >> >
>> >> > Right.
>> >>
>> >> For any particular reason?  Or just in case?
>> >
>> > For nodes with multiple PSU and without (supported) management
>> > board.
>>
>> That still doesn't explain why the 'off' commands would need to be
>> simultaneous though.
>> To turn the node off, both devices just need to turn the port off...
>> there's no requirement that this happens simultaneously.
>
> OK, right. What I had in mind was actually the default reset
> action.
>
>> > I think that one of our APC stonith agents can turn more
>> > than one port off simultaneously.
>>
>> If they're for the same host and device, then you don't even need this.
>> Just specify two ports in the host_map.
>
> Cool. Didn't look into it. How would that work with say
> external/rackpdu (uses snmpset(8) to manage ports)?

We'll supply something like port=1,2 or port=1-3 and it's up to the
agent to map that into something the device understands.
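As a rough sketch of what "up to the agent" might mean in practice (hypothetical code, not any actual agent; the function name is made up for illustration), an agent could expand such a port value into individual outlet numbers:

```python
# Hypothetical sketch: expand a port spec such as "1,2" or "1-3"
# (as stonithd might supply it) into a list of outlet numbers.
def expand_ports(spec):
    ports = []
    for part in spec.split(","):
        if "-" in part:
            # A range like "1-3" covers every outlet in between.
            lo, hi = part.split("-", 1)
            ports.extend(range(int(lo), int(hi) + 1))
        else:
            ports.append(int(part))
    return ports

print(expand_ports("1,2"))   # [1, 2]
print(expand_ports("1-3"))   # [1, 2, 3]
```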

> That agent
> can either use the names_oid to fetch ports by itself (in which
> case they must be named after nodes) or this:
>
> outlet_config (string): Configuration file. Other way to
> recognize outlet number by nodename.
>    Configuration file. Other way to recognize outlet number by nodename.
>    Configuration file which contains
>    node_name=outlet_number
>    strings.
>
>    Example:
>    server1=1
>    server2=2
>
> Now, how does stonithd know which parameter to use to pass the
> outlet (port) number from the host_map list to the agent?

Item 6:
http://www.clusterlabs.org/doc/en-US/Pacemaker/1.1/html/Pacemaker_Explained/s-stonith-configure.html

I do try to document these things.

> I
> assume that the agent should have a matching API. Does this work
> only with RH fence agents?
>
>> If they're not for the same host, then they're not even covered by the
>> same fencing operation and will never be simultaneous.
>>
>> If they're for the same host but different devices, then at most
>> you'll get the commands sent in parallel, guaranteeing simultaneous is
>> near impossible.
>
> Yes, what I meant is almost simultaneous, i.e. that both ports
> are for a while turned "off" at the same time. I'm not sure how
> does it work in reality. For instance, how long does the reset
> command keep the power off on the outlet. So, it should be
> "simultanous enough" :)

I don't think 'reboot' is an option if you're using multiple devices.
You have to use 'off' (followed by a manual 'on') for any kind of reliability.

Agents that fake 'reboot' with 'off' + sleep + 'on' would be OK, but
that's an implementation detail that the daemon shouldn't know about.



dejanmm at fastmail

Jan 26, 2012, 7:00 AM

Post #18 of 25 (3764 views)
Permalink
Re: Proposed new stonith topology syntax [In reply to]

On Wed, Jan 25, 2012 at 11:24:43AM +1100, Andrew Beekhof wrote:
> On Wed, Jan 25, 2012 at 2:22 AM, Dejan Muhamedagic <dejanmm [at] fastmail> wrote:
> > On Tue, Jan 24, 2012 at 03:11:31PM +1100, Andrew Beekhof wrote:
> >> On Tue, Jan 24, 2012 at 7:20 AM, Dejan Muhamedagic <dejanmm [at] fastmail> wrote:
[...]
> > Cool. Didn't look into it. How would that work with say
> > external/rackpdu (uses snmpset(8) to manage ports)?
>
> We'll supply something like port=1,2 or port=1-3 and its up to the
> agent to map that into something the device understands.

OK.

> > That agent
> > can either use the names_oid to fetch ports by itself (in which
> > case they must be named after nodes) or this:
> >
> > outlet_config (string): Configuration file. Other way to
> > recognize outlet number by nodename.
> >    Configuration file. Other way to recognize outlet number by nodename.
> >    Configuration file which contains
> >    node_name=outlet_number
> >    strings.
> >
> >    Example:
> >    server1=1
> >    server2=2
> >
> > Now, how does stonithd know which parameter to use to pass the
> > outlet (port) number from the host_map list to the agent?
>
> Item 6:
> http://www.clusterlabs.org/doc/en-US/Pacemaker/1.1/html/Pacemaker_Explained/s-stonith-configure.html
>
> I do try to document these things.

That seems to be mostly user documentation.[*]

Trying it out with this configuration:

primitive Fencing stonith:external/ssh \
params hostlist="xen-d xen-e xen-f" livedangerously="yes" pcmk_host_map="xen-d:1;xen-e:2,3;xen-f:1-3"

there was nothing new in the environment passed to the agent, and
fencing actually wasn't tried at all:

Jan 26 15:38:29 xen-d stonith-ng: [1815]: info: can_fence_host_with_device: Fencing can not fence xen-f (aka. '1-3'): dynamic-list

Looks like I misunderstood the feature.

Thanks,

Dejan

[*] All documentation on the glue set of stonith agents is gone.
Or at least I couldn't find it on this page. Is that intentional?



andrew at beekhof

Jan 29, 2012, 2:14 PM

Post #19 of 25 (3751 views)
Permalink
Re: Proposed new stonith topology syntax [In reply to]

On Fri, Jan 27, 2012 at 2:00 AM, Dejan Muhamedagic <dejanmm [at] fastmail> wrote:
> On Wed, Jan 25, 2012 at 11:24:43AM +1100, Andrew Beekhof wrote:
>> On Wed, Jan 25, 2012 at 2:22 AM, Dejan Muhamedagic <dejanmm [at] fastmail> wrote:
>> > On Tue, Jan 24, 2012 at 03:11:31PM +1100, Andrew Beekhof wrote:
>> >> On Tue, Jan 24, 2012 at 7:20 AM, Dejan Muhamedagic <dejanmm [at] fastmail> wrote:
> [...]
>> > Cool. Didn't look into it. How would that work with say
>> > external/rackpdu (uses snmpset(8) to manage ports)?
>>
>> We'll supply something like port=1,2 or port=1-3 and its up to the
>> agent to map that into something the device understands.
>
> OK.
>
>> > That agent
>> > can either use the names_oid to fetch ports by itself (in which
>> > case they must be named after nodes) or this:
>> >
>> > outlet_config (string): Configuration file. Other way to
>> > recognize outlet number by nodename.
>> >    Configuration file. Other way to recognize outlet number by nodename.
>> >    Configuration file which contains
>> >    node_name=outlet_number
>> >    strings.
>> >
>> >    Example:
>> >    server1=1
>> >    server2=2
>> >
>> > Now, how does stonithd know which parameter to use to pass the
>> > outlet (port) number from the host_map list to the agent?
>>
>> Item 6:
>>   http://www.clusterlabs.org/doc/en-US/Pacemaker/1.1/html/Pacemaker_Explained/s-stonith-configure.html
>>
>> I do try to document these things.
>
> That seems to be mostly user documentation.[*]
>
> Trying it out with this configuration:
>
>        primitive Fencing stonith:external/ssh \
>                params hostlist="xen-d xen-e xen-f" livedangerously="yes" pcmk_host_map="xen-d:1;xen-e:2,3;xen-f:1-3"
>
> there was nothing new in the environment to the agent and fencing
> actually wasn't tried at all:
>
> Jan 26 15:38:29 xen-d stonith-ng: [1815]: info: can_fence_host_with_device: Fencing can not fence xen-f (aka. '1-3'): dynamic-list
>
> Looks like I misunderstood the feature.

For now you'd need to set pcmk_host_list (item 5), but we should be
able to pull the information out of pcmk_host_map in the future.
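As a sketch only (assuming the same nodes and agent as the test configuration quoted above), the primitive with pcmk_host_list added might look like:

```
primitive Fencing stonith:external/ssh \
    params hostlist="xen-d xen-e xen-f" livedangerously="yes" \
        pcmk_host_list="xen-d xen-e xen-f" \
        pcmk_host_map="xen-d:1;xen-e:2,3;xen-f:1-3"
```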

>
> Thanks,
>
> Dejan
>
> [*] All documentation on the glue set of stonith agents is gone.
> Or at least I couldn't find it on this page. Is that intentional?

You mean from Pacemaker Explained?
I just switched the example to use an RH agent since that's what I have
on my system.
The only thing that changes for the glue agents is the value of 'type'.



bubble at hoster-ok

Feb 3, 2012, 10:50 AM

Post #20 of 25 (3704 views)
Permalink
Re: Proposed new stonith topology syntax [In reply to]

Hi Andrew, Dejan, all,

25.01.2012 03:24, Andrew Beekhof wrote:
[snip]
>>> If they're for the same host but different devices, then at most
>>> you'll get the commands sent in parallel, guaranteeing simultaneous is
>>> near impossible.
>>
>> Yes, what I meant is almost simultaneous, i.e. that both ports
>> are for a while turned "off" at the same time. I'm not sure how
>> does it work in reality. For instance, how long does the reset
>> command keep the power off on the outlet. So, it should be
>> "simultanous enough" :)
>
> I dont think 'reboot' is an option if you're using multiple devices.
> You have to use 'off' (followed by a manual 'on') for any kind of reliability.
>

Why not implement the subsequent 'on's after all the 'off's are
confirmed, with some configurable delay, e.g.?
That would be great for careful admins who keep their fencing device
lists up to date.
From an admin's PoV, reset and reset-like off-on operations should not
differ in their result: the offending host should be restarted if the
admin says 'restart' or 'reboot' in the fencing parameters for that
host (sorry, I don't remember which one is used).
Needing a manual 'on' looks like a limitation to me, so I wouldn't use
such a fencing mechanism. I prefer to have everything as automated and
predictable as possible.
If the 'on' is not done, then fencing is not doing what you've specified
(for the 'reboot'/'reset' action).

Even more, if we need to 'reset' a host which has two PSUs
connected to two different PDUs, then it should be translated to
'all-off' - 'delay' - 'all-on' automatically. I would like such a
powerful fencing system very much (yes, I'm a careful admin).

I understand that the implementation will require some effort (even for
so great a programmer as you, Andrew), but it would be a really useful
feature,

Best,
Vladislav



linux at alteeve

Feb 3, 2012, 10:58 AM

Post #21 of 25 (3705 views)
Permalink
Re: Proposed new stonith topology syntax [In reply to]

On 02/03/2012 01:50 PM, Vladislav Bogdanov wrote:
> [snip]
> Why not to implement subsequent 'ons' after all 'offs' are confirmed?
> With some configurable delay f.e.
> That would be great for careful admins who keep fencing device lists actual.
>>From admin's PoV, reset and reset-like on-off operations should not
> differ in a result, offending host should be restarted if admin says
> 'restart' or 'reboot' in fencing parameters for that host (sorry, do not
> remember which one is used).
> Need in manual 'on' looks like a limitation for me so I wouldn't use
> such fencing mechanism. I prefer to have everything automated and
> predictable as much as possible.
> If 'on' is not done, then fencing is not doing what you've specified
> (for 'reboot/reset' action).
>
> Even more, if we need to do 'reset' of a host which has two PSUs
> connected to two different PDUs, then it should be translated to
> 'all-off' - 'delay' - 'all-on' automatically. I would like such powerful
> fencing system very much (yes, I'm a careful admin).
>
> I understand that implementation will require some efforts (even for so
> great programmer like you Andrew), but that would be a really useful
> feature,
>
> Best,
> Vladislav

In RHCS, this is how "reset" works: first it 'off's all devices in the
larger method and then checks them all to make sure they are, in fact,
off. At this point, the fence action is deemed to have succeeded and a
cursory "on" is sent to the same devices. Whether they actually come
back on or not is of no concern to the fence action.
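That sequence could be sketched roughly like this (hypothetical code, not the actual RHCS implementation; the outlet class is made up for illustration):

```python
# Sketch of the described "reset" sequence: 'off' every device in the
# method, verify, declare success, then send a cursory best-effort 'on'.

class FakeOutlet:
    """Stand-in for a fence device port; purely illustrative."""
    def __init__(self):
        self.state = "on"
    def off(self):
        self.state = "off"
    def on(self):
        self.state = "on"
    def is_off(self):
        return self.state == "off"

def fence_reset(devices):
    for d in devices:
        d.off()
    # The fence action succeeds only once every device confirms 'off'.
    if not all(d.is_off() for d in devices):
        return False
    # A cursory 'on' follows; its outcome is of no concern to the action.
    for d in devices:
        d.on()
    return True
```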

--
Digimer
E-Mail: digimer [at] alteeve
Papers and Projects: https://alteeve.com



andrew at beekhof

Feb 5, 2012, 2:55 PM

Post #22 of 25 (3697 views)
Permalink
Re: Proposed new stonith topology syntax [In reply to]

On Sat, Feb 4, 2012 at 5:50 AM, Vladislav Bogdanov <bubble [at] hoster-ok> wrote:
> Hi Andrew, Dejan, all,
>
> 25.01.2012 03:24, Andrew Beekhof wrote:
> [snip]
>>>> If they're for the same host but different devices, then at most
>>>> you'll get the commands sent in parallel, guaranteeing simultaneous is
>>>> near impossible.
>>>
>>> Yes, what I meant is almost simultaneous, i.e. that both ports
>>> are for a while turned "off" at the same time. I'm not sure how
>>> does it work in reality. For instance, how long does the reset
>>> command keep the power off on the outlet. So, it should be
>>> "simultanous enough" :)
>>
>> I dont think 'reboot' is an option if you're using multiple devices.
>> You have to use 'off' (followed by a manual 'on') for any kind of reliability.
>>
>
> Why not to implement subsequent 'ons' after all 'offs' are confirmed?

That could be possible in the future.
However, since none of this was possible in the old stonithd, it's not
something I plan for the initial implementation.

Also, you're requiring an extra level of intelligence in stonith-ng:
to know that even though the admin asked for 'reboot' and the devices
support 'reboot', we should ignore that and do 'off' + 'on' in
some specific scenarios.

> With some configurable delay f.e.
> That would be great for careful admins who keep fencing device lists actual.
> From admin's PoV, reset and reset-like on-off operations should not
> differ in a result, offending host should be restarted if admin says
> 'restart' or 'reboot' in fencing parameters for that host (sorry, do not
> remember which one is used).
> Need in manual 'on' looks like a limitation for me so I wouldn't use
> such fencing mechanism. I prefer to have everything automated and
> predictable as much as possible.

Then don't put a node under the control of two devices.
Have it be two ports on the same host and you won't hit this limitation.

> If 'on' is not done, then fencing is not doing what you've specified
> (for 'reboot/reset' action).
>
> Even more, if we need to do 'reset' of a host which has two PSUs
> connected to two different PDUs, then it should be translated to
> 'all-off' - 'delay' - 'all-on' automatically. I would like such powerful
> fencing system very much (yes, I'm a careful admin).
>
> I understand that implementation will require some efforts (even for so
> great programmer like you Andrew), but that would be a really useful
> feature,
>
> Best,
> Vladislav
>


bubble at hoster-ok

Feb 5, 2012, 8:29 PM

Post #23 of 25 (3694 views)
Permalink
Re: Proposed new stonith topology syntax [In reply to]

06.02.2012 01:55, Andrew Beekhof wrote:
> On Sat, Feb 4, 2012 at 5:50 AM, Vladislav Bogdanov <bubble [at] hoster-ok> wrote:
>> Hi Andrew, Dejan, all,
>>
>> 25.01.2012 03:24, Andrew Beekhof wrote:
>> [snip]
>>>>> If they're for the same host but different devices, then at most
>>>>> you'll get the commands sent in parallel, guaranteeing simultaneous is
>>>>> near impossible.
>>>>
>>>> Yes, what I meant is almost simultaneous, i.e. that both ports
>>>> are for a while turned "off" at the same time. I'm not sure how
>>>> does it work in reality. For instance, how long does the reset
>>>> command keep the power off on the outlet. So, it should be
>>>> "simultanous enough" :)
>>>
>>> I dont think 'reboot' is an option if you're using multiple devices.
>>> You have to use 'off' (followed by a manual 'on') for any kind of reliability.
>>>
>>
>> Why not to implement subsequent 'ons' after all 'offs' are confirmed?
>
> That could be possible in the future.
> However since none of this was possible in the old stonithd, its not
> something I plan for the initial implementation.
>
> Also, you're requiring an extra level of intelligence in stonith-ng,
> to know that even though the admin asked for 'reboot' and the devices
> support 'reboot', that we should ignore that and do 'off' + 'on' in
> some specific scenarios.
>
>> With some configurable delay f.e.
>> That would be great for careful admins who keep fencing device lists actual.
>> From admin's PoV, reset and reset-like on-off operations should not
>> differ in a result, offending host should be restarted if admin says
>> 'restart' or 'reboot' in fencing parameters for that host (sorry, do not
>> remember which one is used).
>> Need in manual 'on' looks like a limitation for me so I wouldn't use
>> such fencing mechanism. I prefer to have everything automated and
>> predictable as much as possible.
>
> Then don't put a node under the control of two devices.
> Have it be two ports on the same host and you wont hit this limitation.

It's a SPOF in the case of PDUs.

I do not use PDUs at all; I have everything ready to short the 'reset'
lines on the servers instead of pulling power cords, and am just waiting
for linear fencing topology to be implemented in both stonith-ng and crmsh.

So I just care about the generic admin who wants to use PDUs for fencing.

>
>> If 'on' is not done, then fencing is not doing what you've specified
>> (for 'reboot/reset' action).
>>
>> Even more, if we need to do 'reset' of a host which has two PSUs
>> connected to two different PDUs, then it should be translated to
>> 'all-off' - 'delay' - 'all-on' automatically. I would like such powerful
>> fencing system very much (yes, I'm a careful admin).
>>
>> I understand that implementation will require some efforts (even for so
>> great programmer like you Andrew), but that would be a really useful
>> feature,
>>
>> Best,
>> Vladislav
>>


andrew at beekhof

Feb 6, 2012, 1:22 PM

Post #24 of 25 (3689 views)
Permalink
Re: Proposed new stonith topology syntax [In reply to]

Stonith is never a SPOF.

Something else needs to have failed before fencing has even a chance to do so.

Unless you put all the nodes on the same PDU... but that would be silly.

On Mon, Feb 6, 2012 at 3:29 PM, Vladislav Bogdanov <bubble [at] hoster-ok> wrote:
> 06.02.2012 01:55, Andrew Beekhof wrote:
>> On Sat, Feb 4, 2012 at 5:50 AM, Vladislav Bogdanov <bubble [at] hoster-ok> wrote:
>>> Hi Andrew, Dejan, all,
>>>
>>> 25.01.2012 03:24, Andrew Beekhof wrote:
>>> [snip]
>>>>>> If they're for the same host but different devices, then at most
>>>>>> you'll get the commands sent in parallel, guaranteeing simultaneous is
>>>>>> near impossible.
>>>>>
>>>>> Yes, what I meant is almost simultaneous, i.e. that both ports
>>>>> are for a while turned "off" at the same time. I'm not sure how
>>>>> does it work in reality. For instance, how long does the reset
>>>>> command keep the power off on the outlet. So, it should be
>>>>> "simultanous enough" :)
>>>>
>>>> I dont think 'reboot' is an option if you're using multiple devices.
>>>> You have to use 'off' (followed by a manual 'on') for any kind of reliability.
>>>>
>>>
>>> Why not to implement subsequent 'ons' after all 'offs' are confirmed?
>>
>> That could be possible in the future.
>> However since none of this was possible in the old stonithd, its not
>> something I plan for the initial implementation.
>>
>> Also, you're requiring an extra level of intelligence in stonith-ng,
>> to know that even though the admin asked for 'reboot' and the devices
>> support 'reboot', that we should ignore that and do 'off' + 'on' in
>> some specific scenarios.
>>
>>> With some configurable delay f.e.
>>> That would be great for careful admins who keep fencing device lists actual.
>>> From admin's PoV, reset and reset-like on-off operations should not
>>> differ in a result, offending host should be restarted if admin says
>>> 'restart' or 'reboot' in fencing parameters for that host (sorry, do not
>>> remember which one is used).
>>> Need in manual 'on' looks like a limitation for me so I wouldn't use
>>> such fencing mechanism. I prefer to have everything automated and
>>> predictable as much as possible.
>>
>> Then don't put a node under the control of two devices.
>> Have it be two ports on the same host and you wont hit this limitation.
>
> It's a SPOF in the case of PDUs.
>
> I do not use PDUs at all, I have everything ready to shorten 'reset'
> lines on servers instead of plugging off power cords, just waiting for
> linear fencing topology to be implemented in both snonith-ng and crmsh.
>
> So, I just care about generic admin who wants to use PDUs for fencing.
>
>>
>>> If 'on' is not done, then fencing is not doing what you've specified
>>> (for 'reboot/reset' action).
>>>
>>> Even more, if we need to do 'reset' of a host which has two PSUs
>>> connected to two different PDUs, then it should be translated to
>>> 'all-off' - 'delay' - 'all-on' automatically. I would like such powerful
>>> fencing system very much (yes, I'm a careful admin).
>>>
>>> I understand that implementation will require some efforts (even for so
>>> great programmer like you Andrew), but that would be a really useful
>>> feature,
>>>
>>> Best,
>>> Vladislav
>>>


bubble at hoster-ok

Feb 6, 2012, 10:51 PM

Post #25 of 25 (3694 views)
Permalink
Re: Proposed new stonith topology syntax [In reply to]

07.02.2012 00:22, Andrew Beekhof wrote:
> Stonith is never a SPOF.

Sorry for being unclear.

I meant that having a redundant PSU connected to two outlets of the same
PDU (which is in turn connected to one power source) is a SPOF for a
node, not for the cluster.

So I assume that everybody connects each redundant PSU to two different
PDUs and to two different power sources.

Then it is impossible to do a reset-like (all offs, then all ons)
operation on two power outlets from within a single instance of a
fencing agent (which knows about one PDU only), so that logic should be
moved one layer up.

>
> Something else needs to have failed before fencing has even a chance to do so.
>
> Unless you put all the nodes on the same PDU... but that would be silly.
>
> On Mon, Feb 6, 2012 at 3:29 PM, Vladislav Bogdanov <bubble [at] hoster-ok> wrote:
>> 06.02.2012 01:55, Andrew Beekhof wrote:
>>> On Sat, Feb 4, 2012 at 5:50 AM, Vladislav Bogdanov <bubble [at] hoster-ok> wrote:
>>>> Hi Andrew, Dejan, all,
>>>>
>>>> 25.01.2012 03:24, Andrew Beekhof wrote:
>>>> [snip]
>>>>>>> If they're for the same host but different devices, then at most
>>>>>>> you'll get the commands sent in parallel, guaranteeing simultaneous is
>>>>>>> near impossible.
>>>>>>
>>>>>> Yes, what I meant is almost simultaneous, i.e. that both ports
>>>>>> are for a while turned "off" at the same time. I'm not sure how
>>>>>> does it work in reality. For instance, how long does the reset
>>>>>> command keep the power off on the outlet. So, it should be
>>>>>> "simultanous enough" :)
>>>>>
>>>>> I dont think 'reboot' is an option if you're using multiple devices.
>>>>> You have to use 'off' (followed by a manual 'on') for any kind of reliability.
>>>>>
>>>>
>>>> Why not to implement subsequent 'ons' after all 'offs' are confirmed?
>>>
>>> That could be possible in the future.
>>> However since none of this was possible in the old stonithd, its not
>>> something I plan for the initial implementation.
>>>
>>> Also, you're requiring an extra level of intelligence in stonith-ng,
>>> to know that even though the admin asked for 'reboot' and the devices
>>> support 'reboot', that we should ignore that and do 'off' + 'on' in
>>> some specific scenarios.
>>>

>>>> With some configurable delay f.e.
>>>> That would be great for careful admins who keep fencing device lists actual.
>>>> From admin's PoV, reset and reset-like on-off operations should not
>>>> differ in a result, offending host should be restarted if admin says
>>>> 'restart' or 'reboot' in fencing parameters for that host (sorry, do not
>>>> remember which one is used).
>>>> Need in manual 'on' looks like a limitation for me so I wouldn't use
>>>> such fencing mechanism. I prefer to have everything automated and
>>>> predictable as much as possible.
>>>
>>> Then don't put a node under the control of two devices.
>>> Have it be two ports on the same host and you wont hit this limitation.
>>
>> It's a SPOF in the case of PDUs.
>>
>> I do not use PDUs at all, I have everything ready to shorten 'reset'
>> lines on servers instead of plugging off power cords, just waiting for
>> linear fencing topology to be implemented in both snonith-ng and crmsh.
>>
>> So, I just care about generic admin who wants to use PDUs for fencing.
>>
>>>
>>>> If 'on' is not done, then fencing is not doing what you've specified
>>>> (for 'reboot/reset' action).
>>>>
>>>> Even more, if we need to do 'reset' of a host which has two PSUs
>>>> connected to two different PDUs, then it should be translated to
>>>> 'all-off' - 'delay' - 'all-on' automatically. I would like such powerful
>>>> fencing system very much (yes, I'm a careful admin).
>>>>
>>>> I understand that implementation will require some efforts (even for so
>>>> great programmer like you Andrew), but that would be a really useful
>>>> feature,
>>>>
>>>> Best,
>>>> Vladislav
>>>>
