
Mailing List Archive: Linux-HA: Pacemaker

VIP on Active/Active cluster

 

 



zen.suite at gmail

May 9, 2012, 12:10 PM

Post #1 of 10 (1685 views)
VIP on Active/Active cluster

Hello,

I wonder if someone can enlighten me on how to handle the following cluster
scenario:

2 Nodes Cluster (Active/Active)
1 Cluster managed VIP - RoundRobin ?
SAN Shared Storage (DLM CLVM O2CB) = "OCFS2"

My main question is, can one VIP serve 2 nodes?

Thanks in advance.


jsmith at argotec

May 9, 2012, 12:27 PM

Post #2 of 10 (1626 views)
Re: VIP on Active/Active cluster [In reply to]

----- Original Message -----
> From: "Paul Damken" <zen.suite [at] gmail>
> To: pacemaker [at] oss
> Sent: Wednesday, May 9, 2012 3:10:03 PM
> Subject: [Pacemaker] VIP on Active/Active cluster
>
>
> Hello,
>
>
> I wonder if someone can light me on how to handle the following
> cluster scene:
>
>
> 2 Nodes Cluster (Active/Active)
> 1 Cluster managed VIP - RoundRobin ?
> SAN Shared Storage (DLM CLVM O2CB) = "OCFS2"
>
>
> My main question is, can one VIP serve 2 nodes?
>

crm ra info ocf:heartbeat:IPaddr2

specifically clusterip_hash

Example:
primitive p_ip_vip ocf:heartbeat:IPaddr2 \
    params ip="192.168.0.254" nic="eth0" cidr_netmask="22" broadcast="192.168.3.255" clusterip_hash="sourceip-sourceport" iflabel="VIP" \
    operations $id="p_ip_vip-operations" \
    op start interval="0" timeout="20" \
    op stop interval="0" timeout="20" \
    op monitor interval="10" timeout="20" start-delay="0"

clone cl_vip p_ip_vip

HTH

Jake


>
> Thanks in advance.
> _______________________________________________
> Pacemaker mailing list: Pacemaker [at] oss
> http://oss.clusterlabs.org/mailman/listinfo/pacemaker
>
> Project Home: http://www.clusterlabs.org
> Getting started:
> http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org
>

_______________________________________________
Pacemaker mailing list: Pacemaker [at] oss
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


david at davidcoulson

May 9, 2012, 12:28 PM

Post #3 of 10 (1628 views)
Re: VIP on Active/Active cluster [In reply to]

What application is running on the nodes?

Sent from my iPad

On May 9, 2012, at 3:10 PM, Paul Damken <zen.suite [at] gmail> wrote:

> Hello,
>
> I wonder if someone can light me on how to handle the following cluster scene:
>
> 2 Nodes Cluster (Active/Active)
> 1 Cluster managed VIP - RoundRobin ?
> SAN Shared Storage (DLM CLVM O2CB) = "OCFS2"
>
> My main question is, can one VIP serve 2 nodes?
>
> Thanks in advance.
> _______________________________________________
> Pacemaker mailing list: Pacemaker [at] oss
> http://oss.clusterlabs.org/mailman/listinfo/pacemaker
>
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org

_______________________________________________
Pacemaker mailing list: Pacemaker [at] oss
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


misch at clusterbau

May 9, 2012, 9:38 PM

Post #4 of 10 (1625 views)
Re: VIP on Active/Active cluster [In reply to]

> Hello,
>
> I wonder if someone can light me on how to handle the following cluster
> scene:
>
> 2 Nodes Cluster (Active/Active)
> 1 Cluster managed VIP - RoundRobin ?
> SAN Shared Storage (DLM CLVM O2CB) = "OCFS2"
>
> My main question is, can one VIP serve 2 nodes?
>
> Thanks in advance.

Yes. But I would use the "localnode" feature of the Linux Virtual Server (LVS).
LVS is a real load balancer that offers more features than the clustered IP
address of the normal cluster.
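A rough sketch of how the localnode setup can look with ipvsadm (addresses, port, and scheduler here are hypothetical, just to show the shape; the node currently acting as director also appears as one of the real servers, so it answers requests itself):

```shell
# hypothetical: VIP 192.168.1.100, nodes 192.168.1.57 and .58, HTTP service
ipvsadm -A -t 192.168.1.100:80 -s rr               # virtual service, round-robin
ipvsadm -a -t 192.168.1.100:80 -r 192.168.1.57 -g  # peer node, direct routing
ipvsadm -a -t 192.168.1.100:80 -r 192.168.1.58 -g  # the director itself ("localnode")
```

In practice you would let ldirectord (or a Pacemaker-managed resource) maintain this table and move the VIP/director role with the cluster.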

--
Dr. Michael Schwartzkopff
Guardinistr. 63
81375 München

Tel: (0163) 172 50 98
Attachments: signature.asc (0.19 KB)


zen.suite at gmail

May 12, 2012, 11:44 AM

Post #5 of 10 (1590 views)
Re: VIP on Active/Active cluster [In reply to]

> Hello,
> >
> > I wonder if someone can light me on how to handle the following cluster
> > scene:
> >
> > 2 Nodes Cluster (Active/Active)
> > 1 Cluster managed VIP - RoundRobin ?
> > SAN Shared Storage (DLM CLVM O2CB) = "OCFS2"
> >
> > My main question is, can one VIP serve 2 nodes?
> >
> > Thanks in advance.
>
> Yes. But I would use the "localnode" feature of the Linux Virtual Server.
> The
> LVS is a real loadbalancer that offers more features than the clustered IP
> address of the normal cluster.
>
> --
> Dr. Michael Schwartzkopff
> Guardinistr. 63
> 81375 München
>
> Tel: (0163) 172 50 98
>

How could I do that?
I tried setting up the VIP plus a clone of it, and once the resource is cloned
and started on both nodes, it is no longer reachable.

crm(live)configure# show
node havc1
node havc2
primitive failover-ip1 ocf:heartbeat:IPaddr2 \
    params ip="192.168.1.20" cidr_netmask="24" broadcast="192.168.1.255" nic="eth0" clusterip_hash="sourceip-sourceport" \
    op monitor interval="20s"
clone ip1-clone failover-ip1 \
    meta globally-unique="true" clone-max="2" clone-node-max="2" target-role="Started"
property $id="cib-bootstrap-options" \
    dc-version="1.1.2-2e096a41a5f9e184a1c1537c82c6da1093698eb5" \
    cluster-infrastructure="openais" \
    expected-quorum-votes="2" \
    stonith-enabled="false" \
    last-lrm-refresh="1336841278"
rsc_defaults $id="rsc-options" \
    resource-stickiness="100"

-------------------------------------------------------------------------

Chain INPUT (policy ACCEPT)
target prot opt source destination
CLUSTERIP  all  --  anywhere  192.168.1.20  CLUSTERIP hashmode=sourceip clustermac=81:30:6E:B7:6D:AF total_nodes=2 local_node=1 hash_init=0

Chain FORWARD (policy ACCEPT)
target prot opt source destination

Chain OUTPUT (policy ACCEPT)
target prot opt source destination


Any idea what is wrong, or what causes the VIP to become unreachable once the
IPaddr2 RA starts on both nodes?


jsmith at argotec

May 13, 2012, 7:36 AM

Post #6 of 10 (1597 views)
Re: VIP on Active/Active cluster [In reply to]

clone-node-max="2" should be 1, not 2.
How about the output from "crm_mon -fr1"
and "ip a s" on each node?

Jake

----- Reply message -----
From: "Paul Damken" <zen.suite [at] gmail>
To: <pacemaker [at] oss>
Subject: [Pacemaker] VIP on Active/Active cluster
Date: Sat, May 12, 2012 2:49 pm


> Hello,
> >
> > I wonder if someone can light me on how to handle the following cluster
> > scene:
> >
> > 2 Nodes Cluster (Active/Active)
> > 1 Cluster managed VIP - RoundRobin ?
> > SAN Shared Storage (DLM CLVM O2CB) = "OCFS2"
> >
> > My main question is, can one VIP serve 2 nodes?
> >
> > Thanks in advance.
>
> Yes. But I would use the "localnode" feature of the Linux Virtual Server.
> The
> LVS is a real loadbalancer that offers more features than the clustered IP
> address of the normal cluster.
>
> --
> Dr. Michael Schwartzkopff
> Guardinistr. 63
> 81375 München
>
> Tel: (0163) 172 50 98
>

How could I do that?
I tried setting up the VIP plus a clone of it, and once the resource is cloned
and started on both nodes, it is no longer
reachable.

crm(live)configure# show
node havc1
node havc2
primitive failover-ip1 ocf:heartbeat:IPaddr2 \
    params ip="192.168.1.20" cidr_netmask="24" broadcast="192.168.1.255" nic="eth0" clusterip_hash="sourceip-sourceport" \
    op monitor interval="20s"
clone ip1-clone failover-ip1 \
    meta globally-unique="true" clone-max="2" clone-node-max="2" target-role="Started"
property $id="cib-bootstrap-options" \
    dc-version="1.1.2-2e096a41a5f9e184a1c1537c82c6da1093698eb5" \
    cluster-infrastructure="openais" \
    expected-quorum-votes="2" \
    stonith-enabled="false" \
    last-lrm-refresh="1336841278"
rsc_defaults $id="rsc-options" \
    resource-stickiness="100"

-------------------------------------------------------------------------

Chain INPUT (policy ACCEPT)
target prot opt source destination
CLUSTERIP  all  --  anywhere  192.168.1.20  CLUSTERIP hashmode=sourceip clustermac=81:30:6E:B7:6D:AF total_nodes=2 local_node=1 hash_init=0

Chain FORWARD (policy ACCEPT)
target prot opt source destination

Chain OUTPUT (policy ACCEPT)
target prot opt source destination


Any idea what is wrong, or what causes the VIP to become unreachable once the
IPaddr2 RA starts on both nodes?


zen.suite at gmail

May 14, 2012, 6:45 AM

Post #7 of 10 (1604 views)
Re: VIP on Active/Active cluster [In reply to]

Jake Smith <jsmith@...> writes:

>
>
> clone-node-max="2" should only be one. How about the output from
> crm_mon -fr1 and ip a s on each node? Jake
> ----- Reply message -----
> From: "Paul Damken" <zen.suite <at> gmail.com>
> To: <pacemaker <at> oss.clusterlabs.org>
> Subject: [Pacemaker] VIP on Active/Active cluster
> Date: Sat, May 12, 2012 2:49 pm
>
>
>
>
> _______________________________________________
> Pacemaker mailing list: Pacemaker@...
> http://oss.clusterlabs.org/mailman/listinfo/pacemaker
>
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org
>

Jake, thanks. Here is the whole info. Same behavior: the VIP is neither
pingable nor otherwise reachable.

Do you think a shared VIP should work on SLES 11 SP1 HAE?
I cannot get this VIP to work.

Resources:

primitive ip_vip ocf:heartbeat:IPaddr2 \
    params ip="192.168.1.100" nic="bond0" cidr_netmask="22" broadcast="192.168.1.255" clusterip_hash="sourceip-sourceport" iflabel="VIP1" \
    op start interval="0" timeout="20" \
    op stop interval="0" timeout="20" \
    op monitor interval="10" timeout="20" start-delay="0"

clone cl_vip ip_vip \
    meta interleave="true" globally-unique="true" clone-max="2" clone-node-max="1" target-role="Started" is-managed="true"

crm_mon:

============
Last updated: Mon May 14 08:27:50 2012
Stack: openais
Current DC: hanode1 - partition with quorum
Version: 1.1.5-5bd2b9154d7d9f86d7f56fe0a74072a5a6590c60
2 Nodes configured, 2 expected votes
37 Resources configured.
============

Online: [ hanode2 hanode1 ]

Full list of resources:

cluster_mon (ocf::pacemaker:ClusterMon): Started hanode1
Clone Set: HASI [HASI_grp]
Started: [ hanode2 hanode1 ]
hanode1-stonith (stonith:external/ipmi-operator): Started hanode2
hanode2-stonith (stonith:external/ipmi-operator): Started hanode1
vghanode1 (ocf::heartbeat:LVM): Started hanode1
vghanode2 (ocf::heartbeat:LVM): Started hanode2
Clone Set: ora [ora_grp]
Started: [ hanode2 hanode1 ]
Clone Set: cl_vip [ip_vip] (unique)
ip_vip:0 (ocf::heartbeat:IPaddr2): Started hanode2
ip_vip:1 (ocf::heartbeat:IPaddr2): Started hanode1



hanode1:~ # ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 brd 127.255.255.255 scope host lo
inet 127.0.0.2/8 brd 127.255.255.255 scope host secondary lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
master bond0 state UP qlen 1000
link/ether 9c:8e:99:24:72:a0 brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
master bond0 state UP qlen 1000
link/ether 9c:8e:99:24:72:a0 brd ff:ff:ff:ff:ff:ff
4: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state
UP
link/ether 9c:8e:99:24:72:a0 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.58/22 brd 192.168.1.255 scope global bond0
inet 192.168.1.100/22 brd 192.168.1.255 scope global secondary bond0:VIP1
inet6 fe80::9e8e:99ff:fe24:72a0/64 scope link
valid_lft forever preferred_lft forever

-----------------------


_______________________________________________
Pacemaker mailing list: Pacemaker [at] oss
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


david at davidcoulson

May 14, 2012, 9:23 AM

Post #8 of 10 (1595 views)
Re: VIP on Active/Active cluster [In reply to]

Cloning IPaddr2 resources utilizes the iptables CLUSTERIP rule. It's probably a good idea to start looking at it with tcpdump, to see whether either box gets the ICMP echo-request packet (from a ping) and to determine whether it just doesn't respond properly, doesn't get it at all, or something else.

I'd say it's more of an iptables/networking issue than a Pacemaker problem at this point. That said, you didn't explain why you wanted a shared VIP in the first place, or what the application is, so it may cause more problems than it's worth (e.g. if your app is running but broken on one box, the VIP will still route users to it).
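One way to start that investigation, reusing the interface and VIP from the configuration posted earlier in the thread:

```shell
# on each node, watch whether the echo-request for the VIP arrives at all
tcpdump -ni bond0 'icmp and host 192.168.1.100'
# on each node, inspect the CLUSTERIP rule's node numbering
iptables -L INPUT -n -v | grep CLUSTERIP
```

The two nodes should report different local_node values (1 on one node, 2 on the other) with the same total_nodes and clustermac; if both claim the same local_node, responsibility for the hash buckets overlaps and replies become unreliable.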



On May 14, 2012, at 9:45 AM, Paul Damken wrote:

> Jake Smith <jsmith@...> writes:
>
>>
>>
>> clone-node-max="2" should only be one. How about the output from crm_mon -
> fr1And ip a s on each node? Jake
>> ----- Reply message -----From: "Paul Damken" <zen.suite <at> gmail.com>To:
> <pacemaker <at> oss.clusterlabs.org>Subject: [Pacemaker] VIP on Active/Active
> clusterDate: Sat, May 12, 2012 2:49 pm
>>
>>
>>
>>
>> _______________________________________________
>> Pacemaker mailing list: Pacemaker@...
>> http://oss.clusterlabs.org/mailman/listinfo/pacemaker
>>
>> Project Home: http://www.clusterlabs.org
>> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
>> Bugs: http://bugs.clusterlabs.org
>>
>
> Jake, Thanks here is the whole info. Same behavior. VIP not pingable nor
> reachable.
>
> Do you think that Share VIP should work on SLES 11 SP1 HAE?
> I cannot get this VIP to work.
>
> Resources:
>
> primitive ip_vip ocf:heartbeat:IPaddr2 \
>     params ip="192.168.1.100" nic="bond0" cidr_netmask="22" broadcast="192.168.1.255" clusterip_hash="sourceip-sourceport" iflabel="VIP1" \
>     op start interval="0" timeout="20" \
>     op stop interval="0" timeout="20" \
>     op monitor interval="10" timeout="20" start-delay="0"
>
> clone cl_vip ip_vip \
>     meta interleave="true" globally-unique="true" clone-max="2" clone-node-max="1" target-role="Started" is-managed="true"
>
> crm_mon:
>
> ============
> Last updated: Mon May 14 08:27:50 2012
> Stack: openais
> Current DC: hanode1 - partition with quorum
> Version: 1.1.5-5bd2b9154d7d9f86d7f56fe0a74072a5a6590c60
> 2 Nodes configured, 2 expected votes
> 37 Resources configured.
> ============
>
> Online: [ hanode2 hanode1 ]
>
> Full list of resources:
>
> cluster_mon (ocf::pacemaker:ClusterMon): Started hanode1
> Clone Set: HASI [HASI_grp]
> Started: [ hanode2 hanode1 ]
> hanode1-stonith (stonith:external/ipmi-operator): Started hanode2
> hanode2-stonith (stonith:external/ipmi-operator): Started hanode1
> vghanode1 (ocf::heartbeat:LVM): Started hanode1
> vghanode2 (ocf::heartbeat:LVM): Started hanode2
> Clone Set: ora [ora_grp]
> Started: [ hanode2 hanode1 ]
> Clone Set: cl_vip [ip_vip] (unique)
> ip_vip:0 (ocf::heartbeat:IPaddr2): Started hanode2
> ip_vip:1 (ocf::heartbeat:IPaddr2): Started hanode1
>
>
>
> hanode1:~ # ip a s
> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> inet 127.0.0.1/8 brd 127.255.255.255 scope host lo
> inet 127.0.0.2/8 brd 127.255.255.255 scope host secondary lo
> inet6 ::1/128 scope host
> valid_lft forever preferred_lft forever
> 2: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
> master bond0 state UP qlen 1000
> link/ether 9c:8e:99:24:72:a0 brd ff:ff:ff:ff:ff:ff
> 3: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
> master bond0 state UP qlen 1000
> link/ether 9c:8e:99:24:72:a0 brd ff:ff:ff:ff:ff:ff
> 4: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state
> UP
> link/ether 9c:8e:99:24:72:a0 brd ff:ff:ff:ff:ff:ff
> inet 192.168.1.58/22 brd 192.168.1.255 scope global bond0
> inet 192.168.1.100/22 brd 192.168.1.255 scope global secondary bond0:VIP1
> inet6 fe80::9e8e:99ff:fe24:72a0/64 scope link
> valid_lft forever preferred_lft forever
>
> -----------------------
>
>
> _______________________________________________
> Pacemaker mailing list: Pacemaker [at] oss
> http://oss.clusterlabs.org/mailman/listinfo/pacemaker
>
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org


_______________________________________________
Pacemaker mailing list: Pacemaker [at] oss
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org




jsmith at argotec

May 14, 2012, 9:28 AM

Post #10 of 10 (1599 views)
Re: VIP on Active/Active cluster [In reply to]

----- Original Message -----
> From: "Paul Damken" <zen.suite [at] gmail>
> To: pacemaker [at] clusterlabs
> Sent: Monday, May 14, 2012 9:45:30 AM
> Subject: Re: [Pacemaker] VIP on Active/Active cluster
>
> Jake Smith <jsmith@...> writes:
>
> >
> >
> > clone-node-max="2" should only be one. How about the output from
> > crm_mon -fr1 and ip a s on each node? Jake
> > ----- Reply message -----
> > From: "Paul Damken" <zen.suite <at> gmail.com>
> > To: <pacemaker <at> oss.clusterlabs.org>
> > Subject: [Pacemaker] VIP on Active/Active cluster
> > Date: Sat, May 12, 2012 2:49 pm
> >
> >
> >
> >
> > _______________________________________________
> > Pacemaker mailing list: Pacemaker@...
> > http://oss.clusterlabs.org/mailman/listinfo/pacemaker
> >
> > Project Home: http://www.clusterlabs.org
> > Getting started:
> > http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> > Bugs: http://bugs.clusterlabs.org
> >
>
> Jake, Thanks here is the whole info. Same behavior. VIP not pingable
> nor
> reachable.
>
> Do you think that Share VIP should work on SLES 11 SP1 HAE?
> I cannot get this VIP to work.

I use Ubuntu so I can't say 100% but I would expect so... I use it successfully in my cluster so I know it *can* work in general.

Your cidr_netmask looks odd to me given the broadcast address... should it be 24 or 23 not 22?

>
> Resources:
>
> primitive ip_vip ocf:heartbeat:IPaddr2 \
>     params ip="192.168.1.100" nic="bond0" cidr_netmask="22" broadcast="192.168.1.255" clusterip_hash="sourceip-sourceport" iflabel="VIP1" \
>     op start interval="0" timeout="20" \
>     op stop interval="0" timeout="20" \
>     op monitor interval="10" timeout="20" start-delay="0"
>
> clone cl_vip ip_vip \
>     meta interleave="true" globally-unique="true" clone-max="2" clone-node-max="1" target-role="Started" is-managed="true"

You don't really need any of these parameters... just "clone cl_vip ip_vip" and nothing else. globally-unique could be part of the problem too.

interleave defaults to false if not defined, and I'm pretty sure you want it false.
globally-unique defaults to false and should not be true for your use case.
clone-max defaults to the number of nodes in the cluster, so with 2 nodes you get 2 clones.
clone-node-max defaults to 1.
target-role and is-managed were auto-generated by certain cluster actions; they are fine as-is or removed.
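Relying on those defaults, one way to apply the simplification from the crm shell (stop the clone first if the shell refuses to delete a running resource):

```shell
crm resource stop cl_vip
crm configure delete cl_vip
crm configure clone cl_vip ip_vip   # defaults: clone-max = number of nodes, clone-node-max = 1
crm resource start cl_vip
```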


>
> crm_mon:
>
> ============
> Last updated: Mon May 14 08:27:50 2012
> Stack: openais
> Current DC: hanode1 - partition with quorum
> Version: 1.1.5-5bd2b9154d7d9f86d7f56fe0a74072a5a6590c60
> 2 Nodes configured, 2 expected votes
> 37 Resources configured.
> ============
>
> Online: [ hanode2 hanode1 ]
>
> Full list of resources:
>
> cluster_mon (ocf::pacemaker:ClusterMon): Started hanode1
> Clone Set: HASI [HASI_grp]
> Started: [ hanode2 hanode1 ]
> hanode1-stonith (stonith:external/ipmi-operator): Started
> hanode2
> hanode2-stonith (stonith:external/ipmi-operator): Started
> hanode1
> vghanode1 (ocf::heartbeat:LVM): Started hanode1
> vghanode2 (ocf::heartbeat:LVM): Started hanode2
> Clone Set: ora [ora_grp]
> Started: [ hanode2 hanode1 ]
> Clone Set: cl_vip [ip_vip] (unique)
> ip_vip:0 (ocf::heartbeat:IPaddr2): Started hanode2
> ip_vip:1 (ocf::heartbeat:IPaddr2): Started hanode1
>

It should not show (unique), as I stated above.

>
>
> hanode1:~ # ip a s
> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> inet 127.0.0.1/8 brd 127.255.255.255 scope host lo
> inet 127.0.0.2/8 brd 127.255.255.255 scope host secondary lo
> inet6 ::1/128 scope host
> valid_lft forever preferred_lft forever
> 2: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc
> pfifo_fast
> master bond0 state UP qlen 1000
> link/ether 9c:8e:99:24:72:a0 brd ff:ff:ff:ff:ff:ff
> 3: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc
> pfifo_fast
> master bond0 state UP qlen 1000
> link/ether 9c:8e:99:24:72:a0 brd ff:ff:ff:ff:ff:ff
> 4: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc
> noqueue state
> UP
> link/ether 9c:8e:99:24:72:a0 brd ff:ff:ff:ff:ff:ff
> inet 192.168.1.58/22 brd 192.168.1.255 scope global bond0
> inet 192.168.1.100/22 brd 192.168.1.255 scope global secondary
> bond0:VIP1
> inet6 fe80::9e8e:99ff:fe24:72a0/64 scope link
> valid_lft forever preferred_lft forever
>

I would try the changes above to the clone and (possibly) the netmasks.

Then if it's still not pingable I would stop any firewall on the servers temporarily and test just to rule the firewall out.

If that doesn't work, how about the output from "crm_mon -fr1" and "crm configure show", and "ip a s" from each node?
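On SLES 11 the firewall test could look like this (assuming SuSEfirewall2, the SLES default, is in use):

```shell
rcSuSEfirewall2 stop      # temporarily, on both nodes
iptables -L INPUT -n      # only the CLUSTERIP rule should remain for the VIP
```

If the VIP becomes pingable with the firewall down, add an allow rule for the VIP/service instead of leaving the firewall off.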

HTH

Jake

_______________________________________________
Pacemaker mailing list: Pacemaker [at] oss
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org
