Installing DRBD across two Debian 7 VPS

At Network Presence we don’t meter or charge for data exchanged between customer VPS, so you can purchase two VPS from us and use the open-source disk clustering technology described below to share a common filesystem between them.

Here are the steps (down to the commands to run as root) to install and configure a DRBD network disk on Debian 7, running across a 2-node cluster:

Some Linux or system-level requirements and pre-install setup tasks are:

1) Ensure the /etc/hosts file is the same on each node and that it contains entries for each node’s IP address and hostname (both the short hostname and the fully qualified hostname).

eg:
192.168.0.1 node1 node1.domain.name
192.168.0.2 node2 node2.domain.name

2) Ensure that each node’s firewall permits the other node’s IP address access to TCP ports 7788 to 7799.
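With iptables, for example, this could be done with rules like the following (a sketch only; the IP addresses match the example /etc/hosts entries above, and your firewall setup may differ):

```shell
# On node1: allow DRBD traffic (TCP ports 7788-7799) from node2
iptables -A INPUT -s 192.168.0.2 -p tcp --dport 7788:7799 -j ACCEPT
# On node2: the mirror-image rule, allowing node1
iptables -A INPUT -s 192.168.0.1 -p tcp --dport 7788:7799 -j ACCEPT
```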

3) The example setup below assumes that you have a disk device of the same size on both nodes. In this example that device is referred to as “/dev/xvdX”, but it could be a “/dev/loop0” device created on each node with commands like:

dd if=/dev/zero of=/storage/somewhere/drbd.disk bs=1024k count=1024
losetup /dev/loop0 /storage/somewhere/drbd.disk

If so, use “/dev/loop0” instead of “/dev/xvdX” in the DRBD config below.
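As an aside, if you’d rather not write out a gigabyte of zeroes up front, the backing file can instead be created sparse (an alternative to the dd command above; the /tmp path here is just illustrative):

```shell
# Create a 1 GiB sparse backing file; blocks are only allocated as written
truncate -s 1G /tmp/drbd-test.disk
# Apparent size is 1 GiB (1073741824 bytes) even though almost no disk is used yet
stat -c %s /tmp/drbd-test.disk
```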

Then run the following on each node to get the DRBD disk live, sync’ing, and running on the first node ASAP:

a) Install pre-requisite Debian packages:

apt-get -y install drbd8-utils

b) Configure a DRBD Disk in your VPS:

cat << EOF > /etc/drbd.d/DISK1.res
resource DISK1 {
  startup {
    become-primary-on both;
  }
  net {
    ## It is not recommended to enable allow-two-primaries upon initial
    ## configuration; enable it after the initial resource synchronization
    ## has completed. Once enabled, the resource can be promoted to the
    ## primary role on both nodes.
    allow-two-primaries;
    after-sb-0pri discard-zero-changes;
    after-sb-1pri discard-secondary;
    after-sb-2pri disconnect;
  }
  on node1 {
    device /dev/drbd1;
    disk /dev/xvdX;
    address 192.168.0.1:7789;
    meta-disk internal;
  }
  on node2 {
    device /dev/drbd1;
    disk /dev/xvdX;
    address 192.168.0.2:7789;
    meta-disk internal;
  }
}
EOF
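Before initializing anything, it’s worth confirming that DRBD parses the new resource file cleanly (an extra verification step, not strictly required):

```shell
# Print the parsed configuration for DISK1; any syntax error in
# /etc/drbd.d/DISK1.res will be reported instead
drbdadm dump DISK1
```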

c) Initialize the device for DRBD use:

drbdadm create-md DISK1

d) Sync the DRBD Disk across your VPS Nodes:

First bring the resource up on both nodes:

drbdadm up all

Then, on one node only (eg: Node 1), run:

drbdadm -- --overwrite-data-of-peer primary all
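The initial sync can take a while, and you can watch its progress on either node via the /proc/drbd status file (the same file checked in step f below):

```shell
# Refresh the DRBD status every 2 seconds; during the initial sync the
# ds: field reads Inconsistent/UpToDate and a progress bar is shown,
# changing to UpToDate/UpToDate once the sync completes
watch -n2 cat /proc/drbd
```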

Speed the sync up with a command like (only needed on one node):

drbdsetup /dev/drbd1 syncer -r 110M

If you want this faster sync rate by default, add the following to your DRBD config file (listed above) before the ‘resource’ line:

common { syncer { rate 110M; } }

e) Activate the disk as Primary/Primary:

Easiest way after the disk sync (above) has finished: Reboot Node 2.

Or, run the following command on Node 2:

drbdadm primary DISK1

(where DISK1 is the resource name listed in the DRBD config file above)

f) Once Node 2 restarts, the DRBD cluster should show Primary/Primary across the 2 nodes:

root@node2:~# cat /proc/drbd
version: 8.3.[..]
1: cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate C r-----

g) From here, you can create a clustered filesystem on /dev/drbd1 and mount it on the two nodes. eg: Use OCFS2 or GFS2, or layer a distributed filesystem such as GlusterFS on top of a local filesystem.

A filesystem that specifically supports multiple-host mounting (ie: a clustered filesystem) is required to share and mount the filesystem on each host simultaneously. Ordinary filesystems such as EXT3, EXT4 or XFS are not capable of this and can only be safely mounted on one host at a time, even though the underlying disk device (/dev/drbd1) is common to both nodes.
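If you go the OCFS2 route, the broad shape would be along these lines (a sketch only, assuming the ocfs2-tools package is installed and an O2CB cluster configuration is already in place on both nodes; the label and mount point names are just examples):

```shell
# Create the clustered filesystem once, on ONE node only
mkfs.ocfs2 -L drbd-disk /dev/drbd1
# Then mount it on BOTH nodes; OCFS2 coordinates concurrent access
mkdir -p /mnt/shared
mount -t ocfs2 /dev/drbd1 /mnt/shared
```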

Further posts here will build on this post to describe creating and using OCFS2 as a Clustered Filesystem across the two DRBD Nodes.

FYI and regards,
Richard.

Posted in Network Presence | Tagged , , | Comments Off on Installing DRBD across two Debian 7 VPS

RT @sage_au: We would like to congratulate prominent member @simonhackett for his recent appointment to Australia’s NBNCo Board: http://t.c…

RT @GuardianUS: Netflix and YouTube now make up a majority of US internet traffic. See for yourself: trib.al/yszTnsV http://t.co/5uc…

Full WHM/cPanel VPS with dual IP addresses & fast resources in Sydney. networkpresence.com.au/hosting/cpanel… #NetPres

“The EC2 APIs are very ugly & the OpenStack ones are very elegant. But…” thecloudcast.net/2013/11/the-cl… HT @thecloudcastnet

Short Scheduled Network Maintenance – Sunday, Nov 10th, 8:30am AEDT

Our main Upstream Internet Provider has requested a short partial network restart of their core network in Sydney. We connect to their network via two paths and one of those paths will experience an outage while the provider’s equipment is upgraded and restarted.

This network maintenance is scheduled from 8:30am (Sydney, Australia AEDT timezone) this coming Sunday morning, November 10th, and is expected to result in a short transition of our network fully onto our other network path to the Internet in Sydney.

However, we request a 30 minute window, during which there may be short (a few minutes) outages of our Internet connectivity, in case the upstream provider has any problems with their works, or if some network routers (using BGP) don’t transition to our second uplink in Sydney.

We’ll be using this scheduled maintenance effectively as a “live test” of our network redundancy in Sydney and we’ll be able to report to customers if things don’t go as expected or optimally.

I’ll be personally contactable by phone & online during and after these works, in case any issues arise.

FYI and regards,
Richard.

Good to see Redhat going “all in!” too on OpenStack. zdnet.com/red-hat-wants-… #cloud #NetPres

RT @NORA_aus: Google maps inside shopping centre: bit.ly/1cF1c58 #SouthAustralia

Our Managed OpenStack Colo service. Your own physical private cloud server in our POP at Equinix in Sydney. networkpresence.com.au/hosting/co-loc… #NetPres

Good to see more OpenStack offerings. This week’s Cloudcast podcast interviewing @blueboxjesse thecloudcast.net/2013/11/the-cl…
