Hi everyone,
I want to set up replication on Debian Etch using the latest DRBD's (DRBD 8.3) "Third Node" feature. How can I do that? Any ideas or useful information regarding this would be appreciated.
The recent release of DRBD 8.3 now includes "The Third Node" feature as a freely available component.
DRBD is a block device designed for building high-availability clusters. It mirrors a whole block device over a (dedicated) network; you can think of it as a network RAID-1.
DRBD accepts the data, writes it to the local disk, and sends it to the other host, which writes it to its own disk there.
The other components needed are a cluster membership service, typically Heartbeat, and some kind of application that works on top of a block device.
Each device (DRBD provides more than one of these devices) has a state, which can be 'primary' or 'secondary'. The application is supposed to run on the node with the primary device and to access it there (/dev/drbdX; formerly /dev/nbX). Every write is sent to the local 'lower-level block device' and to the node whose device is in the 'secondary' state. The secondary device simply writes the data to its lower-level block device. Reads are always carried out locally.
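In practice these states are inspected and changed with drbdadm. A minimal sketch (the resource name r0 here is hypothetical; the setup below uses the names data-lower and data-upper):

drbdadm role r0        # show the local and peer roles, e.g. Primary/Secondary
drbdadm primary r0     # promote the local device to primary
drbdadm secondary r0   # demote it back to secondary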
If the primary node fails, Heartbeat switches the secondary device into the
primary state and starts the application there. (If you are using it with a
non-journaling FS this involves running fsck.)
If the failed node comes back up, it becomes a new secondary node and has to
synchronise its content with the primary. This, of course, happens in the
background without interruption of service.
And, of course, only those parts of the device that have actually been changed
are resynchronized. DRBD has always done intelligent resynchronization when possible. Starting with the DRBD 0.7 series, you can define an "active set" of a certain size. This makes it possible to have a total resync time of 1--3 min, regardless of device size (currently up to 4TB), even after a hard crash of an active node.
The Third Node Setup
The setup is as follows -
- Three servers: alpha, bravo, foxtrot
- alpha and bravo are the primary and secondary local nodes
- foxtrot is the third node which is on a remote network
- Both alpha and bravo have interfaces on the 192.168.1.x network (eth0) for external connectivity.
- A crossover link exists on alpha and bravo (eth1) for replication using 172.16.6.10 and .20
- Heartbeat provides a virtual IP of 192.168.5.2 to communicate with the disaster recovery node located in a geographically diverse location
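For reference, name resolution for the three hosts can be pinned in /etc/hosts on each node. The addresses for alpha and bravo below are the ones used in the Heartbeat configuration later in this article, and foxtrot's public IP is the one that appears in drbd.conf:

192.168.1.10   alpha
192.168.1.20   bravo
192.168.5.3    foxtrot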
Installing The Source
These steps need to be done on each of the 3 nodes.
Prerequisites:
make
gcc
glibc development libraries
flex scanner generator
headers for the current kernel
Enter the following at the command line as a privileged user to satisfy these dependencies:
apt-get install make gcc libc6 flex linux-headers-`uname -r` libc6-dev linux-kernel-headers
Once the dependencies are installed, download DRBD.
After the download is complete:
- Uncompress DRBD
- Enter the source directory
- Compile the source
- Install DRBD
cd /usr/src
tar -xzvf drbd-8.3.0.tar.gz
cd /usr/src/drbd-8.3.0/
make clean all
make install
Now load and verify the module:
modprobe drbd
cat /proc/drbd
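If the module loaded correctly, /proc/drbd will start with a version line similar to the following (illustrative output; no resources are configured yet at this point, so only the header appears):

version: 8.3.0 (api:88/proto:86-89)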
Once you have completed installing the source, the next step is to install Heartbeat.
Setting up a third node entails stacking DRBD on top of DRBD. A virtual IP is needed for the third node to connect to; for this we will set up a simple Heartbeat v1 configuration. This section is only done on alpha and bravo.
Install Heartbeat:
apt-get install heartbeat
Edit the authkeys file:
vi /etc/ha.d/authkeys
auth 1
1 sha1 yoursupersecretpasswordhere
Once the file has been created, change the permissions on the file. Heartbeat will not start if this step is not followed.
chmod 600 /etc/ha.d/authkeys
Copy the authkeys file to bravo:
scp /etc/ha.d/authkeys bravo:/etc/ha.d/
Edit the ha.cf file:
vi /etc/ha.d/ha.cf
debugfile /var/log/ha-debug
logfile /var/log/ha-log
logfacility local0
keepalive 1
deadtime 10
warntime 5
initdead 60
udpport 694
ucast eth0 192.168.1.10
ucast eth0 192.168.1.20
auto_failback off
node alpha
node bravo
Copy the ha.cf file to bravo:
scp /etc/ha.d/ha.cf bravo:/etc/ha.d/
Edit the haresources file; the IP created here will be the IP that our third node refers to.
vi /etc/ha.d/haresources
alpha IPaddr::192.168.5.2/24/eth0
Copy the haresources file to bravo:
scp /etc/ha.d/haresources bravo:/etc/ha.d/
Start the heartbeat service on both servers to bring up the virtual IP:
alpha:/# /etc/init.d/heartbeat start
bravo:/# /etc/init.d/heartbeat start
Heartbeat will bring up the new interface (eth0:0).
Note: It may take heartbeat up to one minute to bring the interface up.
alpha:/# ifconfig eth0:0
eth0:0  Link encap:Ethernet  HWaddr 00:08:C7:DB:01:CC
        UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
DRBD Configuration
Configuration for DRBD is done via the drbd.conf file. This needs to be the same on all nodes (alpha, bravo, foxtrot).
Code:
global { usage-count yes; }

resource data-lower {
  protocol C;
  net { shared-secret "LINBIT"; }
  syncer { rate 12M; }

  on alpha {
    device    /dev/drbd1;
    disk      /dev/hdb1;
    address   172.16.6.10:7788;
    meta-disk internal;
  }

  on bravo {
    device    /dev/drbd1;
    disk      /dev/hdd1;
    address   172.16.6.20:7788;
    meta-disk internal;
  }
}

resource data-upper {
  protocol A;
  syncer {
    after      data-lower;
    rate       12M;
    al-extents 513;
  }
  net { shared-secret "LINBIT"; }

  stacked-on-top-of data-lower {
    device  /dev/drbd3;
    address 192.168.5.2:7788;   # IP provided by Heartbeat
  }

  on foxtrot {
    device    /dev/drbd3;
    disk      /dev/sdb1;
    address   192.168.5.3:7788; # Public IP of the backup node
    meta-disk internal;
  }
}
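With drbd.conf in place, the resources still have to be initialised and brought up. The following is a sketch of the standard DRBD 8.3 procedure for stacked resources (run as root; the --overwrite-data-of-peer flag deliberately chooses the sync source for the very first synchronisation, so use it on one node only).

On alpha and bravo:

drbdadm create-md data-lower
drbdadm up data-lower

On alpha only, to start the initial sync of the lower resource:

drbdadm -- --overwrite-data-of-peer primary data-lower

On alpha (the node currently holding the virtual IP), for the stacked resource:

drbdadm --stacked create-md data-upper
drbdadm --stacked up data-upper
drbdadm --stacked -- --overwrite-data-of-peer primary data-upper

On foxtrot:

drbdadm create-md data-upper
drbdadm up data-upper

Progress of both synchronisations can then be watched in /proc/drbd.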