Installing DRBD across two Debian 7 VPS

At Network Presence we don’t meter or charge for data exchanged between customer VPS, so you can purchase 2 x VPS from us and use open-source Disk Clustering technologies described below to share a common filesystem between your VPS.

Here are the steps (down to the commands to run as root) to install and configure a DRBD Network Disk on Debian 7 running across a 2 Node Cluster:

Some Linux and system-level requirements and pre-install setup tasks are:

1) Ensure the /etc/hosts file is the same on each Node and that it contains an entry for each Node’s IP address and hostname (both the short hostname and the fully qualified hostname).

eg:
192.168.0.1 node1 node1.domain.name
192.168.0.2 node2 node2.domain.name
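
You can quickly verify that both names resolve the same way on each Node with:

getent hosts node1 node2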

2) Ensure that each Node’s firewall permits the other Node’s IP address access to TCP ports 7788 to 7799.
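
For example, a minimal iptables rule on Node 1 (assuming Node 2’s IP is 192.168.0.2, as per the hosts file above) might be:

iptables -A INPUT -p tcp -s 192.168.0.2 --dport 7788:7799 -j ACCEPT

Run the matching rule on Node 2 with Node 1’s IP (192.168.0.1), and persist it with your usual firewall tooling.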

3) The example setup below assumes that you have a Disk Device of the same size on both nodes; in this example that device is referred to as “/dev/xvdX”, but it could be a “/dev/loop0” device created on each node with commands like:

dd if=/dev/zero of=/storage/somewhere/drbd.disk bs=1024k count=1024
losetup /dev/loop0 /storage/somewhere/drbd.disk

If you do that, use “/dev/loop0” in the DRBD config below instead of “/dev/xvdX”.
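
Note that a loop device set up this way won’t persist across a reboot; on Debian 7 you can re-attach it at boot by adding the losetup command to /etc/rc.local (above the ‘exit 0’ line), eg:

losetup /dev/loop0 /storage/somewhere/drbd.disk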

Then run the following on each Node to get the DRBD Disk live, syncing, and running on the first Node ASAP:

a) Install the prerequisite Debian package:

apt-get -y install drbd8-utils
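
You can check that the DRBD kernel module is available and loads cleanly before continuing:

modprobe drbd
lsmod | grep drbd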

b) Configure a DRBD Disk in your VPS:

cat << EOF > /etc/drbd.d/DISK1.res
resource DISK1 {
    startup {
        become-primary-on both;
    }
    net {
        # Enabling allow-two-primaries at initial configuration is not
        # recommended; ideally enable it only after the initial resource
        # synchronization has completed. Once it is enabled, the resource
        # can be promoted to the Primary role on both nodes.
        allow-two-primaries;
        after-sb-0pri discard-zero-changes;
        after-sb-1pri discard-secondary;
        after-sb-2pri disconnect;
    }
    on node1 {
        device    /dev/drbd1;
        disk      /dev/xvdX;
        address   192.168.0.1:7789;
        meta-disk internal;
    }
    on node2 {
        device    /dev/drbd1;
        disk      /dev/xvdX;
        address   192.168.0.2:7789;
        meta-disk internal;
    }
}
EOF
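
The resource file must be identical on both Nodes, and the ‘on node1’ / ‘on node2’ names must match each Node’s hostname as reported by ‘uname -n’. Assuming SSH access between the Nodes, copy the file across and syntax-check it on each Node, eg:

scp /etc/drbd.d/DISK1.res node2:/etc/drbd.d/
drbdadm dump DISK1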

c) Initialize the device for DRBD use:

drbdadm create-md DISK1
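
If the backing device previously held a filesystem, create-md may complain about an existing data signature. Assuming the device holds nothing you need (this destroys its contents), one way around that is to zero the start of the device and retry, eg:

dd if=/dev/zero of=/dev/xvdX bs=1M count=128
drbdadm create-md DISK1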

d) Sync the DRBD Disk across your VPS Nodes:

On both Nodes, run:

drbdadm up all

Then, on one Node only (eg: Node 1), make it the sync source and Primary:

drbdadm -- --overwrite-data-of-peer primary all
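
You can watch the initial sync’s progress from either Node with:

watch -n1 cat /proc/drbd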

Speed the sync up with a command like this (only needed on one node):

drbdsetup /dev/drbd1 syncer -r 110M

If you want this faster sync rate by default, then add the following to your DRBD config file (listed above) before the ‘resource’ line:

common { syncer { rate 100M; } }
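
With that in place, the top of /etc/drbd.d/DISK1.res would start something like:

common { syncer { rate 100M; } }
resource DISK1 {
...

(with the rest of the resource block unchanged from above)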

e) Activate the disk as Primary/Primary:

The easiest way, after the disk sync (above) has finished, is to reboot Node 2.

Alternatively, you can run the following command on Node 2:

drbdadm primary DISK1

(where DISK1 is the resource name listed in the DRBD config file above)
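
You can also check the roles from either Node without rebooting, eg:

drbdadm role DISK1

which reports the local and peer roles (eg: Primary/Primary once both Nodes are promoted).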

f) Once Node 2 restarts, the DRBD Cluster should show Primary/Primary across the 2 Nodes.

root@node2:~# cat /proc/drbd
version: 8.3.[..]
1: cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate C r-----

g) From here, you can create a Clustered Filesystem on /dev/drbd1 and mount it on the two nodes. eg: Use OCFS2 or GFS2 or another clustered filesystem type that supports concurrent mounting from multiple hosts.

A filesystem which specifically supports multiple-host mounting (ie: a Clustered Filesystem) is required to share the filesystem and have it mounted on each host. ie: EXT3 or EXT4 are not capable of this and can only be mounted on one host at a time (even though the underlying disk device (/dev/drbd1) is common between the nodes).

Further posts here will build on this post to describe creating and using OCFS2 as a Clustered Filesystem across the two DRBD Nodes.

FYI and regards,
Richard.
