Mailing List Archive: DRBD: Users

Can I clone primary after failure of secondary?

 

 



marini.maurizio at gmail

Jan 8, 2012, 6:34 AM

Post #1 of 3
Can I clone primary after failure of secondary?

Hello,

We have two Dell 1425 servers running CentOS 5.4, each with three disks
in RAID 5, and DRBD 8.3 replicating between them.

We had a disk failure on _all_ three disks of the secondary node.
Dell has sent us new disks.

At this point we could clone the three disks of the primary node using
the RAID controller, without booting CentOS (not sure whether this will
work, though).

Then we could put the three cloned disks into the secondary node, power
on the primary first, then power on the second node but keep it
disconnected.

We would change the network configuration on the second node before
reconnecting it to the network.

We're worried about the DRBD partition metadata being identical on both
nodes: does this method have any chance of success, or are we wasting
our time?
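
If it helps make the question concrete, what we had in mind before
reconnecting is roughly the following check; this is only a sketch, and
"r0" is a placeholder resource name rather than our actual configuration:

[code]
# Sketch only: "r0" is a placeholder resource name.
# With the resource stopped on a node (drbdadm down r0), dump the on-disk
# metadata so the generation UUIDs can be compared between the two machines:
drbdadm dump-md r0

# After booting the restored secondary with replication still disconnected,
# check what DRBD thinks its state is before allowing any connection:
cat /proc/drbd        # overall status of all resources
drbdadm cstate r0     # connection state (e.g. StandAlone / WFConnection)
drbdadm dstate r0     # disk state
[/code]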

Thanks,

M.


arnold at arnoldarts

Jan 8, 2012, 11:38 AM

Post #2 of 3
Re: Can I clone primary after failure of secondary?

On Sunday 08 January 2012 15:34:29 Maurizio Marini Gmail wrote:
> Hello,
>
> We have two Dell 1425 servers running CentOS 5.4, each with three disks
> in RAID 5, and DRBD 8.3 replicating between them.
>
> We had a disk failure on _all_ three disks of the secondary node.
> Dell has sent us new disks.
>
> At this point we could clone the three disks of the primary node using
> the RAID controller, without booting CentOS (not sure whether this will
> work, though).
>
> Then we could put the three cloned disks into the secondary node, power
> on the primary first, then power on the second node but keep it
> disconnected.
>
> We would change the network configuration on the second node before
> reconnecting it to the network.
>
> We're worried about the DRBD partition metadata being identical on both
> nodes: does this method have any chance of success, or are we wasting
> our time?

This is essentially truck-based replication, so it should be solvable by
"just" looking at the relevant part of the docs...

Have a nice week,

Arnold


marini.maurizio at gmail

Jan 10, 2012, 3:20 AM

Post #3 of 3
Re: Can I clone primary after failure of secondary?

> This is essentially truck-based replication, so it should be solvable by
> "just" looking at the relevant part of the docs...

That's it :)

http://www.drbd.org/users-guide-8.3/s-using-truck-based-replication.html

but I ran "man drbdsetup" on nodo1 and I see that there are two use cases;
in the first one the command is

drbdadm -- --clear-bitmap new-current-uuid resource

but in the second use case the command is

drbdsetup device new-current-uuid --clear-bitmap

and by the description I am in the second case, with disk shipping...

[code]
The necessary steps on the current active server are:

1. drbdsetup device new-current-uuid --clear-bitmap

2. Take the copy of the current active server. E.g. by pulling a disk
out of the RAID1 controller, or by copying with dd. You need to copy
the actual data, and the meta data.

3. drbdsetup device new-current-uuid


[/code]

but this procedure is not on
http://www.drbd.org/users-guide-8.3/s-using-truck-based-replication.html
and I am confused about why it is missing...
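
If I understand it right, drbdadm is only a front end that ends up calling
drbdsetup, so the two spellings should describe the same procedure. Here is
how I read the whole disk-shipping sequence on the currently active node;
this is only my sketch, and "r0" / /dev/drbd0 are example names that are not
from our setup:

[code]
# Sketch only: "r0" and /dev/drbd0 are placeholder names.
# drbdadm is a wrapper around drbdsetup, so both forms should be equivalent.

# 1. Start a new data generation and clear the bitmap, so the copy taken
#    next will be considered fully in sync:
drbdadm -- --clear-bitmap new-current-uuid r0
# (low-level equivalent: drbdsetup /dev/drbd0 new-current-uuid --clear-bitmap)

# 2. Take a verbatim copy of the data AND the metadata, e.g. by pulling
#    the cloned disks out of the controller or copying with dd.

# 3. Start another new data generation, so that anything written after the
#    copy is tracked and resynced once the shipped node connects:
drbdadm new-current-uuid r0
# (low-level equivalent: drbdsetup /dev/drbd0 new-current-uuid)
[/code]

If that reading is correct, the shipped disks then only need to catch up on
whatever changed on the primary after step 2, instead of doing a full sync.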


Hopefully I have not destroyed all the data...

 
 

