How to Configure DRBD on CentOS 6.5
Introduction
The Distributed Replicated Block Device (DRBD) is a distributed replicated storage system for the Linux platform. It is implemented as a kernel module, several userspace management applications, and some shell scripts, and is normally used in high-availability (HA) computer clusters. "DRBD" refers both to the logical block devices provided by the scheme and to the software that implements it.
The DRBD software is free software released under the terms of the GNU General Public License version 2. DRBD is part of the Lisog open source stack initiative.
The Distributed Replicated Block Device is essentially a network-based RAID 1. If you need to protect the data on a disk by mirroring it to another disk over the network, you need to configure DRBD on your system.
In this tutorial, let us see how to install and configure DRBD on CentOS 6.5.
Requirements
- Two disks (preferably the same size)
- Networking between the machines (node1 and node2)
- Working DNS resolution (/etc/hosts file)
- NTP-synchronized time on both nodes
- SELinux in permissive mode
- Iptables port 7788 allowed
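Before installing anything, it can help to verify the prerequisites above on each node. A minimal sketch (run on both node1 and node2; `your.ntp.server` is a placeholder, substitute your own NTP server):

```shell
# Quick prerequisite checks on each node.
getenforce                                        # should print Permissive
setenforce 0                                      # make SELinux permissive for this boot
ntpdate -q your.ntp.server                        # query-only clock check (placeholder server)
iptables -I INPUT -p tcp --dport 7788 -j ACCEPT   # allow the DRBD replication port
service iptables save                             # persist the rule on CentOS 6
```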
Let us start DRBD installation.
Install the ELRepo repository on both systems:
rpm -Uvh http://www.elrepo.org/elrepo-release-6-6.el6.elrepo.noarch.rpm
Update packages on both nodes:
yum update -y
Set SELinux to permissive mode on both nodes:
setenforce 0
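Note that `setenforce 0` only lasts until the next reboot. To keep SELinux permissive across reboots on CentOS 6, you can edit /etc/selinux/config, for example:

```shell
# Persist permissive mode across reboots (CentOS 6 stores the setting
# in /etc/selinux/config; back the file up first if unsure).
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
grep ^SELINUX= /etc/selinux/config   # should now show SELINUX=permissive
```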
Install DRBD:
[root@node1 ~]# yum -y install drbd83-utils kmod-drbd83
[root@node2 ~]# yum -y install drbd83-utils kmod-drbd83
Load the drbd module manually on both machines, or reboot:
/sbin/modprobe drbd
Create a partition for DRBD on both machines:
[root@node1 ~]# fdisk -cu /dev/sdb
[root@node2 ~]# fdisk -cu /dev/sdb
Example:
[root@node1 yum.repos.d]# fdisk -cu /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x2a0f1472.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): p

Disk /dev/sdb: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders, total 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x2a0f1472

   Device Boot      Start         End      Blocks   Id  System

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First sector (2048-4194303, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-4194303, default 4194303):
Using default value 4194303

Command (m for help): w
The partition table has been altered!
Create the Distributed Replicated Block Device resource file (/etc/drbd.d/clusterdb.res):
[root@node1 ~]# vi /etc/drbd.d/clusterdb.res
resource clusterdb {
    startup {
        wfc-timeout 30;
        outdated-wfc-timeout 20;
        degr-wfc-timeout 30;
    }
    net {
        cram-hmac-alg sha1;
        shared-secret sync_disk;
    }
    syncer {
        rate 10M;
        al-extents 257;
        on-no-data-accessible io-error;
    }
    on node1 {
        device /dev/drbd0;
        disk /dev/sdb1;
        address 192.168.1.110:7788;
        meta-disk internal;
    }
    on node2 {
        device /dev/drbd0;
        disk /dev/sdb1;
        address 192.168.1.111:7788;
        meta-disk internal;
    }
}
Make sure that DNS resolution is working:
/etc/hosts
192.168.1.110 node1 node1.example.com
192.168.1.111 node2 node2.example.com
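With those entries in place, each node should resolve the other. A quick sanity check from node1 (hostnames and IPs as configured above):

```shell
getent hosts node2            # should print 192.168.1.111 node2 node2.example.com
ping -c 2 node2.example.com   # confirms basic network reachability
```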
Set the NTP server and add it to the crontab on both machines:
vi /etc/crontab
5 * * * * root ntpdate your.ntp.server
Copy the DRBD configuration and the hosts file to node2:
[root@node1 ~]# scp /etc/drbd.d/clusterdb.res node2:/etc/drbd.d/clusterdb.res
[root@node1 ~]# scp /etc/hosts node2:/etc/
Initialize the DRBD meta data storage on both machines:
[root@node1 ~]# drbdadm create-md clusterdb
[root@node2 ~]# drbdadm create-md clusterdb
You want me to create a v08 style flexible-size internal meta data block.
There appears to be a v08 flexible-size internal meta data block
already in place on /dev/sdb1 at byte offset 2146430976
Do you really want to overwrite the existing v08 meta-data?
[need to type 'yes' to confirm] yes
Writing meta data...
initializing activity log
NOT initialized bitmap
New drbd meta data block successfully created.
Start DRBD on both nodes:
[root@node1 ~]# service drbd start
[root@node2 ~]# service drbd start
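Optionally (this is not in the original steps), you may want DRBD to start automatically at boot on both nodes; CentOS 6 uses SysV init, so this is done with chkconfig:

```shell
chkconfig drbd on          # register the init script for the default runlevels
chkconfig --list drbd      # verify: runlevels 2-5 should show "on"
service drbd status        # current resource state; also visible in /proc/drbd
```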
On the PRIMARY node, run the drbdadm command:
[root@node1 ~]# drbdadm -- --overwrite-data-of-peer primary all
Wait for the initial device synchronization to complete (100%), and confirm that you are on the primary node:
[root@node1 yum.repos.d]# cat /proc/drbd
version: 8.3.16 (api:88/proto:86-97)
GIT-hash: a798fa7e274428a357657fb52f0ecf40192c1985 build by phil@Build32R6, 2013-09-27 15:59:12
 0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r-----
    ns:78848 nr:0 dw:0 dr:79520 al:0 bm:4 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:2017180
        [>....................] sync'ed:  4.0% (2017180/2096028)K
        finish: 0:02:58 speed: 11,264 (11,264) K/sec
Once synchronization has finished, the statistics line shows oos:0 (no out-of-sync blocks):
    ns:1081628 nr:0 dw:33260 dr:1048752 al:14 bm:64 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
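If you want to script the wait rather than eyeballing /proc/drbd, the sync percentage can be extracted from the status line. A small illustrative helper (the sample line is taken from the output above; in real use you would feed it `grep "sync'ed" /proc/drbd`):

```shell
# Extract the "sync'ed" percentage from a /proc/drbd status line.
line="[>....................] sync'ed:  4.0% (2017180/2096028)K"
pct=$(printf '%s\n' "$line" | sed -n "s/.*sync'ed: *\([0-9.]*\)%.*/\1/p")
echo "$pct"   # prints 4.0
```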
Create a filesystem on the DRBD device:
[root@node1 yum.repos.d]# /sbin/mkfs.ext4 /dev/drbd0
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
131072 inodes, 524007 blocks
26200 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=536870912
16 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912

Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 26 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
You can now mount the DRBD device on the primary node:
[root@node1 ~]# mkdir /data
[root@node1 ~]# mount /dev/drbd0 /data
Check:
[root@node1 ~]# df -h
Filesystem                              Size  Used Avail Use% Mounted on
/dev/mapper/vg_unixmencentos65-lv_root   19G  3.6G   15G  20% /
tmpfs                                   1.2G   44M  1.2G   4% /dev/shm
/dev/sda1                               485M   80M  380M  18% /boot
/dev/drbd0                              2.0G   36M  1.9G   2% /data
Please note: you do not need to mount the disk on the secondary machine. All data written to the /data folder will be replicated to node2.
To verify this, unmount /data on the primary node, promote the secondary node to primary, and mount /data on the second machine; you will see the same contents in the /data folder.
TIPS:
1. Switch the Primary/Secondary roles:
[root@node1 ~]# drbdadm secondary clusterdb
[root@node2 ~]# drbdadm -- --overwrite-data-of-peer primary all
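A complete role switch also has to move the filesystem. A sketch of the full sequence, using the resource name and mount point from this tutorial (once both sides are UpToDate, a plain `drbdadm primary clusterdb` on node2 is sufficient; `--overwrite-data-of-peer` is only needed for the very first synchronization):

```shell
# On node1 (current primary): release the filesystem and demote.
umount /data
drbdadm secondary clusterdb

# On node2: promote and mount the replicated device.
drbdadm primary clusterdb
mkdir -p /data
mount /dev/drbd0 /data
```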
Enjoy!