What is DRBD (Distributed Replicated Block Device)?
DRBD (Distributed Replicated Block Device) is a Linux software component that mirrors (replicates) individual block devices (such as hard disks or partitions) from one node to one or more other nodes over a network connection. DRBD keeps data consistent across the systems in the cluster and is commonly used to provide high availability (HA) for Linux applications. DRBD supports three distinct replication modes, offering three degrees of replication synchronicity:
- Protocol A: Asynchronous replication. A local write is considered complete as soon as the data has reached the local disk and has been placed in the local TCP send buffer.
- Protocol B: Memory synchronous (semi-synchronous) replication. A local write is considered complete when the data has reached the local disk and the replication packet has reached the peer node.
- Protocol C: Synchronous replication. A local write is considered complete only after both the local and the remote disk have confirmed the write.
In this tutorial, we are going to create and configure a DRBD cluster across two servers. Each server has an empty disk attached at /dev/sdb.
Environment
drbd01.test.local 192.168.1.20 CentOS 7
drbd02.test.local 192.168.1.21 CentOS 7
Installing DRBD
In order to install DRBD, you will need to enable the ELRepo repository on both nodes, because this software package is not distributed through the standard CentOS and Red Hat Enterprise Linux repositories. Use the following commands to import the GPG key and install the ELRepo repository on both nodes:
# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
# rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
Run the following command on both nodes to install the DRBD software and all the necessary kernel modules:
# yum install drbd90-utils kmod-drbd90
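Once the installation finishes, you can confirm that both packages are present:
# rpm -qa | grep -i drbd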
Once the installation is complete, check whether the kernel module is loaded correctly, using this command:
# lsmod | grep -i drbd
If it is not loaded automatically, you can load the module into the kernel on both nodes using the following command:
# modprobe drbd
Note that the modprobe command only loads the kernel module for the current session. For the module to be loaded at boot time, make use of the systemd-modules-load service by creating a file inside /etc/modules-load.d/ so that the DRBD module is loaded each time the system boots:
# echo drbd > /etc/modules-load.d/drbd.conf
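To verify, the file should contain just the module name we echoed above, and the module should show up in lsmod once loaded:
# cat /etc/modules-load.d/drbd.conf
drbd
# lsmod | grep -i drbd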
Configuring DRBD
After having successfully installed DRBD on both nodes, we need to modify the DRBD global and common settings by editing the file /etc/drbd.d/global_common.conf.
Let’s backup the original settings on both nodes with the following command:
# mv /etc/drbd.d/global_common.conf /etc/drbd.d/global_common.conf.orig
Create a new global_common.conf file on both nodes with the following contents:
# vi /etc/drbd.d/global_common.conf
global {
    usage-count no;
}
common {
    net {
        protocol C;
    }
}
Next, we will need to create a new configuration file called /etc/drbd.d/drbd0.res for the new resource named drbd0, with the following contents:
# vi /etc/drbd.d/drbd0.res
resource drbd0 {
    disk /dev/sdb;
    device /dev/drbd0;
    meta-disk internal;
    on drbd01 {
        address 192.168.1.20:7789;
    }
    on drbd02 {
        address 192.168.1.21:7789;
    }
}
In the above resource file, we define a new resource drbd0, where 192.168.1.20 and 192.168.1.21 are the IP addresses of our two nodes and 7789 is the port used for replication traffic; the backing disk /dev/sdb is used to create the new device /dev/drbd0.
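Two details are worth checking at this point. First, the name after each on keyword must match the output of uname -n on that node; if your nodes report the fully qualified name (drbd01.test.local in our environment), either use the FQDN in the resource file or set the short hostname with hostnamectl. Second, drbdadm can parse the configuration and print it back, which is an easy way to catch syntax errors before going further:
# uname -n
# hostnamectl set-hostname drbd01
# drbdadm dump drbd0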
Initialize the metadata storage by executing the following command on both nodes:
# drbdadm create-md drbd0
Start and enable the DRBD daemon on both nodes:
# systemctl start drbd
# systemctl enable drbd
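Optionally, verify that the service started cleanly on both nodes:
# systemctl status drbd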
Let’s define the first node “drbd01” as the DRBD primary node. Bring the resource up and promote it:
# drbdadm up drbd0
# drbdadm primary drbd0
Note: if you get an error when promoting the node to primary (common before the initial synchronization has taken place), use the following command to force the promotion:
# drbdadm primary --force drbd0
You can check the current status of the synchronization while it is being performed. The cat /proc/drbd command has traditionally displayed the creation and synchronization progress of the resource:
# cat /proc/drbd
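Note that with the DRBD 9 packages installed above (drbd90-utils and kmod-drbd90), /proc/drbd no longer shows the detailed per-resource state it did in DRBD 8. The drbdadm status command is the preferred way to watch the resource state and synchronization progress:
# drbdadm status drbd0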
Adjust the firewall on both nodes using the following commands, replacing ip_address with the IP address of the peer node:
# firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="ip_address" port port="7789" protocol="tcp" accept'
# firewall-cmd --reload
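In our environment, that means allowing the peer’s address on each node. For example, on drbd01:
# firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.168.1.21" port port="7789" protocol="tcp" accept'
# firewall-cmd --reload
Run the equivalent commands on drbd02 with source address 192.168.1.20.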
Testing DRBD
In order to test DRBD functionality, we will create a file system, mount the volume, and write some data on the primary node “drbd01”, and finally switch the primary role over to “drbd02”.
Run the following commands on the primary node to create an XFS file system on /dev/drbd0 and mount it under /mnt:
# mkfs.xfs /dev/drbd0
# mount /dev/drbd0 /mnt
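You can quickly confirm the mount, for example:
# df -h /mnt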
Create some data using the following commands:
# touch /mnt/file{1..5}
# ls -l /mnt/
total 0
-rw-r--r--. 1 root root 0 Sep 22 21:43 file1
-rw-r--r--. 1 root root 0 Sep 22 21:43 file2
-rw-r--r--. 1 root root 0 Sep 22 21:43 file3
-rw-r--r--. 1 root root 0 Sep 22 21:43 file4
-rw-r--r--. 1 root root 0 Sep 22 21:43 file5
Let’s now switch the primary role from “drbd01” to the second node “drbd02” to check whether data replication works.
First, unmount the volume on the first DRBD cluster node “drbd01”:
# umount /mnt
Demote the first DRBD cluster node “drbd01” from primary to secondary:
# drbdadm secondary drbd0
Promote the second DRBD cluster node “drbd02” from secondary to primary:
# drbdadm primary drbd0
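Optionally, confirm the role change before mounting; drbdadm role prints the resource role as seen from the node it is run on:
# drbdadm role drbd0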
Mount the volume on “drbd02” and check whether the data is available:
# mount /dev/drbd0 /mnt
# ls -l /mnt
total 0
-rw-r--r--. 1 root root 0 Sep 22 21:43 file1
-rw-r--r--. 1 root root 0 Sep 22 21:43 file2
-rw-r--r--. 1 root root 0 Sep 22 21:43 file3
-rw-r--r--. 1 root root 0 Sep 22 21:43 file4
-rw-r--r--. 1 root root 0 Sep 22 21:43 file5