
Mailing List Archive: DRBD: Users

drbd 8.3 - 6 nodes

 

 



umarzuki at gmail | Mar 4, 2012, 9:26 PM | Post #1 of 11
drbd 8.3 - 6 nodes

hi,

for a 2-cluster setup (A1, A2, A3 in cluster 1; B1, B2, B3 in cluster 2):

is there any example of how A3 could mount A1's LUN whenever A1 fails
in the cluster?

OS used: CentOS 5.7 amd64, with rgmanager and cman.

--
Regards,

Umarzuki Mochlis
http://debmal.my


ff at mpexnet | Mar 5, 2012, 3:44 AM | Post #2 of 11
Re: drbd 8.3 - 6 nodes [In reply to]

Hi,

sorry, I forgot to CC the list (again), so let me bring this back into
everyone's view.

On 03/05/2012 11:01 AM, Umarzuki Mochlis wrote:
> On 5 March 2012 at 5:21 PM, Felix Frank <ff [at] mpexnet> wrote:
>> Hi,
>>
>> this is unfortunately not very clear at all.
>>
>>
>> So A1 and A2 are failover partners with DRBD? And A3 mounts a replicated
>> volume via remote block storage (iSCSI)?
>>
>> This would be a rather standard setup requiring a DRBD resource and a
>> floating IP address shared by A1 and A2. A3 uses services provided by
>> the node owning the IP address.
>>
>> I suspect you're aiming for something more complex, so please specify :-)
>>
>> Regards,
>> Felix
>
> well sir, this setup is for zimbra-cluster with rgmanager + cman + drbd 8.3
>
> A1, A2 & A3 form a zimbra cluster group (currently running). What I
> understand is that with external metadata, I could simply use the A1 &
> A2 disks as DRBD devices without having to reformat (mkfs) them

That's true, and works for internal metadata as well, if you've got the
space at the end of your filesystem.
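
For illustration, a DRBD 8.3 resource with external metadata might look
like the sketch below; the hostnames, devices and metadata volume are
invented, so adapt them to your own layout:

  resource zimbra_a1 {
    protocol C;
    on a1 {
      device    /dev/drbd0;
      disk      /dev/vg0/mailstore;    # existing filesystem, no mkfs needed
      address   10.0.0.1:7788;
      meta-disk /dev/vg0/drbd-md[0];   # metadata kept on a separate volume
    }
    on a3 {
      device    /dev/drbd0;
      disk      /dev/vg0/mailstore;
      address   10.0.0.3:7788;
      meta-disk /dev/vg0/drbd-md[0];
    }
  }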

I don't really know what a zimbra cluster group comprises.

> but what I did not understand/know is: would I be able to make A3
> mount the disk/LUN of A1 or A2 so A3 can resume A1's or A2's
> zimbra-cluster service? Without DRBD, A3 would automatically mount
> A1's LUN & run as A1, resuming A1's role via rgmanager

Typically, A2 will assume A1's role in case of failure, using the DRBD
device.

I'm still not sure how your 3rd node comes into play. For mere High
Availability, 2 nodes generally suffice. Adding a 3rd one makes things a
bit harder.

Cheers,
Felix


umarzuki at gmail | Mar 5, 2012, 7:27 PM | Post #3 of 11
Re: drbd 8.3 - 6 nodes [In reply to]

On 5 March 2012 at 7:44 PM, Felix Frank <ff [at] mpexnet> wrote:

>> but what I did not understand/know is: would I be able to make A3
>> mount the disk/LUN of A1 or A2 so A3 can resume A1's or A2's
>> zimbra-cluster service? Without DRBD, A3 would automatically mount
>> A1's LUN & run as A1, resuming A1's role via rgmanager
>
> Typically, A2 will assume A1's role in case of failure, using the DRBD
> device.
>
> I'm still not sure how your 3rd node comes into play. For mere High
> Availability, 2 nodes generally suffice. Adding a 3rd one makes things a
> bit harder.
>
> Cheers,
> Felix

Maybe I explained it wrongly.

These are all mailbox servers:

A1, in case of failure, will fail over to A3.
A2, in case of failure, will fail over to A3.
In case the whole cluster (A1, A2 & A3) fails, manual intervention
via clusvcadm will be done so services will be available from B1, B2 &
B3 respectively.

A3 & B3 merely act as standby mailbox servers for A1, A2, B1, B2 respectively.

Mailbox storage for A1 is synched with B1.
Mailbox storage for A2 is synched with B2.
There is no mailbox storage for A3 & B3, since they will automatically
mount the mailbox storage from A1/A2/B1/B2 respectively.

I hope I explained it correctly this time. Forgive me for my terrible
command of the English language.

--
Regards,

Umarzuki Mochlis
http://debmal.my


ff at mpexnet | Mar 6, 2012, 12:45 AM | Post #4 of 11
Re: drbd 8.3 - 6 nodes [In reply to]

Hi,

now you've cleared things up for me, thanks.

On 03/06/2012 04:27 AM, Umarzuki Mochlis wrote:
> without DRBD, A3 would automatically mount A1's LUN & run as
> A1, resuming A1's role via rgmanager

Without DRBD? How does this work? Are those LUNs on a SAN?

With DRBD, you would need to set things up so that A3 has 2 DRBD
volumes: one shared with A1, another with A2. Your cluster manager
would see to it that A1 and A2 are normally primary, with A3 taking
over in the respective failover conditions.
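
As a sketch of that layout (names, devices and addresses invented): one
resource pairs A1 with A3, a second pairs A2 with A3, each on its own
minor device and TCP port:

  resource r_a1 {
    protocol C;
    on a1 {
      device    /dev/drbd0;
      disk      /dev/vg0/mail1;
      address   10.0.0.1:7788;
      meta-disk internal;
    }
    on a3 {
      device    /dev/drbd0;
      disk      /dev/vg0/mail1;
      address   10.0.0.3:7788;
      meta-disk internal;
    }
  }

  resource r_a2 {
    protocol C;
    on a2 {
      device    /dev/drbd1;       # distinct minor device ...
      disk      /dev/vg0/mail2;
      address   10.0.0.2:7789;    # ... and distinct port
      meta-disk internal;
    }
    on a3 {
      device    /dev/drbd1;
      disk      /dev/vg0/mail2;
      address   10.0.0.3:7789;
      meta-disk internal;
    }
  }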

I'm not sure how you separate A1's from A2's services on A3, but your
cluster configuration probably takes care of that already.

> A1, in case of failure, will fail over to A3.
> A2, in case of failure, will fail over to A3.

OK, fine.

> i hope i explained it correctly this time. forgive me for my terrible
> command of the english language.

Trust me, I've seen far, far worse :-)

Regards,
Felix


umarzuki at gmail | Mar 6, 2012, 2:18 AM | Post #5 of 11
Re: drbd 8.3 - 6 nodes [In reply to]

On 6 March 2012 at 4:45 PM, Felix Frank <ff [at] mpexnet> wrote:

> On 03/06/2012 04:27 AM, Umarzuki Mochlis wrote:
>> without DRBD, A3 would automatically mount A1's LUN & run as
>> A1, resuming A1's role via rgmanager
>
> Without DRBD? How does this work? Are those LUNs on a SAN?
>
> With DRBD, you would need to set things up so that A3 has 2 DRBD
> volumes: one shared with A1, another with A2. Your cluster manager
> would see to it that A1 and A2 are normally primary, with A3 taking
> over in the respective failover conditions.
>
> I'm not sure how you separate A1's from A2's services on A3, but your
> cluster configuration probably takes care of that already.
>
>> A1, in case of failure, will fail over to A3.
>> A2, in case of failure, will fail over to A3.
>
> OK, fine.
>

Yes, the mailbox storages are LUNs on a SAN. It manages to do
clustering with rgmanager + cman, so A3 would take over where A1 or A2
left off by mounting A1's or A2's mailbox storage on itself, since it
is already running a standby mailbox service (zimbra-cluster standby
server). I believe this is called 2+1 clustering.

But now I have to set up another cluster for disaster recovery, as I
described before, using DRBD.

Is there any way, with this setup, that I could achieve what I
intended? FYI, all hardware had been bought and storage had been
calculated beforehand, which I had no say in. So this is a bit of a
problem for me.

--
Regards,

Umarzuki Mochlis
http://debmal.my


ff at mpexnet | Mar 6, 2012, 2:23 AM | Post #6 of 11
Re: drbd 8.3 - 6 nodes [In reply to]

On 03/06/2012 11:18 AM, Umarzuki Mochlis wrote:
> Yes, the mailbox storages are LUNs on a SAN. It manages to do
> clustering with rgmanager + cman, so A3 would take over where A1 or A2
> left off by mounting A1's or A2's mailbox storage on itself, since it
> is already running a standby mailbox service (zimbra-cluster standby
> server). I believe this is called 2+1 clustering.
>
> But now I have to set up another cluster for disaster recovery, as I
> described before, using DRBD.
>
> Is there any way, with this setup, that I could achieve what I
> intended? FYI, all hardware had been bought and storage had been
> calculated beforehand, which I had no say in. So this is a bit of a
> problem for me.

Ah, I see now.

Technically, you'd want to establish DRBD synchronisation between the
SAN at your A site and the SAN at the B site. Manual failover would
include making SAN B Primary.
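
A hedged sketch of such a manual failover, run at the B site once site
A is confirmed dead (the resource name, mount point and service name
are assumptions):

  # on the surviving B-site node:
  drbdadm primary r_mbx1                   # promote the DR replica; if the local
                                           # data is not UpToDate, consult the DRBD
                                           # docs before forcing anything
  mount /dev/drbd0 /opt/zimbra             # mount point is an assumption
  clusvcadm -e service:zimbra-mbx1 -m b1   # rgmanager: enable the service on b1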

Now, if said SANs are proprietary all-in-one products, your
possibilities for adding DRBD to them may be severely limited.
Your SAN vendor may or may not offer cross-site synchronisation of
its own.

HTH,
Felix


kkovachev at varna | Mar 6, 2012, 2:34 AM | Post #7 of 11
Re: drbd 8.3 - 6 nodes [In reply to]

On Tue, 06 Mar 2012 11:23:13 +0100, Felix Frank <ff [at] mpexnet> wrote:
> On 03/06/2012 11:18 AM, Umarzuki Mochlis wrote:
>> Yes, the mailbox storages are LUNs on a SAN. It manages to do
>> clustering with rgmanager + cman, so A3 would take over where A1 or A2
>> left off by mounting A1's or A2's mailbox storage on itself, since it
>> is already running a standby mailbox service (zimbra-cluster standby
>> server). I believe this is called 2+1 clustering.
>>
>> But now I have to set up another cluster for disaster recovery, as I
>> described before, using DRBD.
>>
>> Is there any way, with this setup, that I could achieve what I
>> intended?

Yes. You may use a floating IP for DRBD and have one instance (IP) in
site A and another in site B for each service.
Do not use the service IP as the floating IP, as you will have problems
moving the service from A to B.

If A1 is active, you have the DRBD_A1 IP on that node, which will move
to A3 in case of failure, before the service does ... now you have
DRBD_A1 and the service running on A3 over DRBD_A1, while DRBD_B1 runs
independently on B1 or B3.
Now your A site goes down: you promote DRBD_B1 to Primary and start the
A1 service on B1 over DRBD_B1.
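
DRBD can match peers by address rather than hostname, which is what
makes such a scheme possible; a sketch with invented addresses
(DRBD_A1 = 10.0.1.101, DRBD_B1 = 10.0.2.101), using drbd.conf floating
sections:

  resource r_mbx1 {
    protocol C;                    # or A over a slow WAN link, see below
    floating 10.0.1.101:7788 {     # DRBD_A1: lives on A1, moves to A3 on failure
      device    /dev/drbd0;
      disk      /dev/vg0/mail1;
      meta-disk internal;
    }
    floating 10.0.2.101:7788 {     # DRBD_B1: lives on B1 or B3
      device    /dev/drbd0;
      disk      /dev/vg0/mail1;
      meta-disk internal;
    }
  }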

>> FYI, all hardware had been bought and storage had been calculated
>> beforehand, which I had no say in. So this is a bit of a problem
>> for me.
>
> Ah, I see now.
>
> Technically, you'd want to establish DRBD synchronisation between the
> SAN at your A site and the SAN at the B site. Manual failover would
> include making SAN B Primary.
>
> Now, if said SANs are proprietary all-in-one products, your
> possibilities for adding DRBD to them may be severely limited.
> Your SAN vendor may or may not offer cross-site synchronisation of
> its own.

If the SAN vendor offers that, there is no need to use DRBD at all.

>
> HTH,
> Felix


ff at mpexnet | Mar 6, 2012, 2:40 AM | Post #8 of 11
Re: drbd 8.3 - 6 nodes [In reply to]

Hi,

On 03/06/2012 11:34 AM, Kaloyan Kovachev wrote:
> Yes. You may use a floating IP for DRBD and have one instance (IP) in
> site A and another in site B for each service.
> Do not use the service IP as the floating IP, as you will have
> problems moving the service from A to B.
>
> If A1 is active, you have the DRBD_A1 IP on that node, which will move
> to A3 in case of failure, before the service does ... now you have
> DRBD_A1 and the service running on A3 over DRBD_A1, while DRBD_B1 runs
> independently on B1 or B3.
> Now your A site goes down: you promote DRBD_B1 to Primary and start
> the A1 service on B1 over DRBD_B1.

Interesting. So you suggest that A1 should DRBD-sync with B1 at all
times, etc.?

Keep in mind that this is shared storage we're talking about here, no
local disks in either A1 *or* B1. I believe DRBD could be made to
operate thus, but there might be performance issues.

Cheers,
Felix


kkovachev at varna | Mar 6, 2012, 2:57 AM | Post #9 of 11
Re: drbd 8.3 - 6 nodes [In reply to]

On Tue, 06 Mar 2012 11:40:22 +0100, Felix Frank <ff [at] mpexnet> wrote:
> Hi,
>
> On 03/06/2012 11:34 AM, Kaloyan Kovachev wrote:
>> Yes. You may use a floating IP for DRBD and have one instance (IP)
>> in site A and another in site B for each service.
>> Do not use the service IP as the floating IP, as you will have
>> problems moving the service from A to B.
>>
>> If A1 is active, you have the DRBD_A1 IP on that node, which will
>> move to A3 in case of failure, before the service does ... now you
>> have DRBD_A1 and the service running on A3 over DRBD_A1, while
>> DRBD_B1 runs independently on B1 or B3.
>> Now your A site goes down: you promote DRBD_B1 to Primary and start
>> the A1 service on B1 over DRBD_B1.
>
> Interesting. So you suggest that A1 should DRBD-sync with B1 at all
> times, etc.?

Yes. That's the only option for syncing both SANs if they do not
provide such functionality themselves.

>
> Keep in mind that this is shared storage we're talking about here, no
> local disks in either A1 *or* B1. I believe DRBD could be made to
> operate thus, but there might be performance issues.

Again, yes. Performance will be lower, and protocol A with DRBD Proxy
may be required, but again, if the SAN does not provide native
cross-site replication, there is not much else to do ... rsync from a
snapshot is one alternative I can think of, but even from the currently
inactive node (A3) performance will suffer, and additionally there will
be a delay in synchronization and possible data loss, so DRBD is still
better.
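
For the cross-site link, the relevant drbd.conf knobs might look like
this (the resync rate is an invented example; DRBD Proxy itself is a
separate, licensed LINBIT product with its own configuration):

  resource r_mbx1 {
    protocol A;          # asynchronous: a write completes once it is in
                         # the local TCP send buffer
    syncer {
      rate 10M;          # cap resync bandwidth so it doesn't saturate the WAN
    }
    # on/floating host sections as in the earlier sketches
  }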

>
> Cheers,
> Felix


umarzuki at gmail | Mar 6, 2012, 3:13 AM | Post #10 of 11
Re: drbd 8.3 - 6 nodes [In reply to]

On 6 March 2012 at 6:57 PM, Kaloyan Kovachev <kkovachev [at] varna> wrote:

>> Keep in mind that this is shared storage we're talking about here, no
>> local disks in either A1 *or* B1. I believe DRBD could be made to
>> operate thus, but there might be performance issues.
>
> Again, yes. Performance will be lower, and protocol A with DRBD Proxy
> may be required, but again, if the SAN does not provide native
> cross-site replication, there is not much else to do ... rsync from a
> snapshot is one alternative I can think of, but even from the
> currently inactive node (A3) performance will suffer, and additionally
> there will be a delay in synchronization and possible data loss, so
> DRBD is still better.

Addendum:

What I want to do is storage failover, since the email service's
clustering is handled by zimbra-cluster and rgmanager + cman on CentOS.

The reason we're using DRBD is that we have no budget for remote
mirroring, which would require 2 SAN routers on the current network
setup.

--
Regards,

Umarzuki Mochlis
http://debmal.my


umarzuki at gmail | Apr 18, 2012, 12:41 AM | Post #11 of 11
Re: drbd 8.3 - 6 nodes [In reply to]

On 6 March 2012 at 7:13 PM, Umarzuki Mochlis <umarzuki [at] gmail> wrote:
> On 6 March 2012 at 6:57 PM, Kaloyan Kovachev <kkovachev [at] varna> wrote:
>
> Addendum:
>
> What I want to do is storage failover, since the email service's
> clustering is handled by zimbra-cluster and rgmanager + cman on
> CentOS.
>
> The reason we're using DRBD is that we have no budget for remote
> mirroring, which would require 2 SAN routers on the current network
> setup.
>

What would happen if one of the mailbox nodes (e.g. mailbox 1) in the
primary site were restarted (fenced) and brought up on the standby
mailbox (mailbox 3)? Would it be possible for DRBD to fail and the LUN
to be corrupted if I were to set up the email servers as below?

Primary
-------
master LDAP
replica LDAP
1st mailbox
2nd mailbox
3rd mailbox

Secondary
---------
replica LDAP (replicates master LDAP in primary)
replica LDAP (replicates master LDAP in primary)
1st mailbox (syncs its own LUN with the 1st mailbox in primary via DRBD)
2nd mailbox (syncs its own LUN with the 2nd mailbox in primary via DRBD)
3rd mailbox
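
On the corruption worry: DRBD refuses a second Primary unless told
otherwise, and fencing plus split-brain policies can be declared per
resource. A sketch of the relevant 8.3 options (the policy values are
examples; the fence-peer helper shown is the one DRBD ships for
Pacemaker setups, so with rgmanager/cman an equivalent script would be
needed and the path is an assumption):

  resource r_mbx1 {
    net {
      after-sb-0pri discard-zero-changes;   # split-brain auto-recovery policies
      after-sb-1pri discard-secondary;
      after-sb-2pri disconnect;
    }
    disk {
      fencing resource-only;    # outdate the peer before promoting elsewhere
    }
    handlers {
      # helper path is an assumption; adapt to your cluster stack
      fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
    }
  }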

--
Regards,

Umarzuki Mochlis
http://debmal.my
