
Mailing List Archive: Linux-HA: Pacemaker

Problem understanding resource stickiness



berni at birkenwald

Apr 24, 2012, 2:30 AM

Post #1 of 4 (607 views)
Permalink
Problem understanding resource stickiness

Hi everyone,

I have a small problem with a simple two-node virtualization cluster
that runs a pair of firewall VMs. The VMs live on DRBD devices and may
both run on one host, but they should be distributed across both nodes
when both are available. Also, if a VM has to be migrated, it should be
the second one (the VMs have internal HA and the first one is usually
active, so it should stay where it is).

I'm using the following configuration:

primitive drbd-greatfw1 ocf:linbit:drbd \
params drbd_resource="greatfw1" \
op monitor interval="15s"
primitive drbd-greatfw2 ocf:linbit:drbd \
params drbd_resource="greatfw2" \
op monitor interval="15s"

primitive kvm-greatfw1 heartbeat:kvm \
params 1="greatfw1" \
meta resource-stickiness="1000" target-role="Started"
primitive kvm-greatfw2 heartbeat:kvm \
params 1="greatfw2"

ms ms-drbd-greatfw1 drbd-greatfw1 \
meta master-max="1" master-node-max="1" clone-max="2" \
clone-node-max="1" notify="true"
ms ms-drbd-greatfw2 drbd-greatfw2 \
meta master-max="1" master-node-max="1" clone-max="2" \
clone-node-max="1" notify="true"

colocation vm-greatfw1 inf: kvm-greatfw1 ms-drbd-greatfw1:Master
colocation vm-greatfw2 inf: kvm-greatfw2 ms-drbd-greatfw2:Master
colocation col-greatfw1-greatfw2 -2000: kvm-greatfw1 kvm-greatfw2

order vm-greatfw1-order inf: ms-drbd-greatfw1:promote kvm-greatfw1:start
order vm-greatfw2-order inf: ms-drbd-greatfw2:promote kvm-greatfw2:start

property $id="cib-bootstrap-options" \
dc-version="1.0.9-74392a28b7f31d7ddc86689598bd23114f58978b" \
cluster-infrastructure="Heartbeat" \
stonith-enabled="false" \
no-quorum-policy="ignore" \
default-resource-stickiness="200" \
last-lrm-refresh="1332228434" \
start-failure-is-fatal="false"
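To see how these scores interact, the allocation scores the policy engine computes can be dumped from the live CIB. On the 1.0 series used here the tool is ptest; newer releases ship crm_simulate (a sketch; the flags assume a running cluster):

```shell
# Show the allocation scores computed from the live CIB.
# -L / --live-check reads the running cluster's CIB,
# -s / --show-scores prints the per-node score for each resource.
ptest -L -s

# On Pacemaker >= 1.1 the equivalent command is:
# crm_simulate -L -s
```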

As I understand it, greatfw1 is configured exactly like greatfw2 but
has a higher resource stickiness, so it should stay where it is. But
when I put one host into standby (migrating both VMs to the same host)
and then bring it back online, greatfw1 is the one migrated to the
other node.

Debian Stable, pacemaker 1.0.9.1+hg15626-1 with heartbeat 1:3.0.3-2.

Best Regards,
Bernhard


_______________________________________________
Pacemaker mailing list: Pacemaker [at] oss
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


dvossel at redhat

Apr 25, 2012, 10:31 AM

Post #2 of 4 (575 views)
Permalink
Re: Problem understanding resource stickiness [In reply to]

----- Original Message -----
> From: "Bernhard Schmidt" <berni [at] birkenwald>
> To: pacemaker [at] clusterlabs
> Sent: Tuesday, April 24, 2012 4:30:55 AM
> Subject: [Pacemaker] Problem understanding resource stickiness
>
> [...]
> colocation vm-greatfw1 inf: kvm-greatfw1 ms-drbd-greatfw1:Master
> colocation vm-greatfw2 inf: kvm-greatfw2 ms-drbd-greatfw2:Master
> colocation col-greatfw1-greatfw2 -2000: kvm-greatfw1 kvm-greatfw2

Switch the order of the resources in the above colocation constraint so it looks like this:

colocation col-greatfw1-greatfw2 -2000: kvm-greatfw2 kvm-greatfw1

Here kvm-greatfw1 is the with-rsc argument; basically, just reverse the two resources. This fixed it for me, but I'm not convinced this isn't a bug. I'm looking further into this.
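For reference, in the crm shell's colocation syntax the dependent resource comes first (a sketch of the semantics; the resource names are from the thread):

```
# colocation <id> <score>: <rsc>[:<role>] <with-rsc>[:<role>]
# The cluster first decides where <with-rsc> runs, then scores the
# placement of <rsc> relative to it. With the reversed order below,
# kvm-greatfw1 (and its stickiness of 1000) anchors the decision, and
# kvm-greatfw2 is the resource pushed away by the -2000 score.
colocation col-greatfw1-greatfw2 -2000: kvm-greatfw2 kvm-greatfw1
```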

-- Vossel




berni at birkenwald

Apr 25, 2012, 12:25 PM

Post #3 of 4 (576 views)
Permalink
Re: Problem understanding resource stickiness [In reply to]

David Vossel <dvossel [at] redhat> wrote:

Hello David,


>> colocation vm-greatfw1 inf: kvm-greatfw1 ms-drbd-greatfw1:Master
>> colocation vm-greatfw2 inf: kvm-greatfw2 ms-drbd-greatfw2:Master
>> colocation col-greatfw1-greatfw2 -2000: kvm-greatfw1 kvm-greatfw2
>
> Switch the order on the above colocation constraint to look like this.
>
> colocation col-greatfw1-greatfw2 -2000: kvm-greatfw2 kvm-greatfw1
>
> Here kvm-greatfw1 is the with-rsc argument; basically, just
> reverse the two resources. This fixed it for me, but I'm not
> convinced this isn't a bug. I'm looking further into this.

Thanks, that indeed fixed the problem.

Best Regards,
Bernhard




dvossel at redhat

Apr 25, 2012, 1:09 PM

Post #4 of 4 (572 views)
Permalink
Re: Problem understanding resource stickiness [In reply to]

----- Original Message -----
> From: "Bernhard Schmidt" <berni [at] birkenwald>
> To: pacemaker [at] clusterlabs
> Sent: Wednesday, April 25, 2012 2:25:42 PM
> Subject: Re: [Pacemaker] Problem understanding resource stickiness
>
> David Vossel <dvossel [at] redhat> wrote:
>
> Hello David,
>
>
> >> colocation vm-greatfw1 inf: kvm-greatfw1 ms-drbd-greatfw1:Master
> >> colocation vm-greatfw2 inf: kvm-greatfw2 ms-drbd-greatfw2:Master
> >> colocation col-greatfw1-greatfw2 -2000: kvm-greatfw1 kvm-greatfw2
> >
> > Switch the order on the above colocation constraint to look like
> > this.
> >
> > colocation col-greatfw1-greatfw2 -2000: kvm-greatfw2 kvm-greatfw1
> >
> > Here kvm-greatfw1 is the with-rsc argument; basically, just
> > reverse the two resources. This fixed it for me, but I'm not
> > convinced this isn't a bug. I'm looking further into this.
>
> Thanks, this fixed the problem indeed.

Awesome :)

Here's another observation I've made.

Using your original configuration, if you set kvm-greatfw1's resource-stickiness higher than the 2000-point colocation penalty, kvm-greatfw1 will stay put... but kvm-greatfw2 will not move either, which also seems undesirable.
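That variant can be sketched as follows, keeping the thread's original constraint order and using a hypothetical stickiness of 2500 (any value above the 2000-point penalty would do):

```
# Hypothetical: stickiness 2500 outweighs the -2000 colocation score,
# so kvm-greatfw1 stays put when the standby node comes back...
primitive kvm-greatfw1 heartbeat:kvm \
params 1="greatfw1" \
meta resource-stickiness="2500" target-role="Started"
# ...but with the original rsc/with-rsc order, kvm-greatfw2 does not
# move off the shared node either, which is the surprising part.
colocation col-greatfw1-greatfw2 -2000: kvm-greatfw1 kvm-greatfw2
```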

-- Vossel



