Online course from Udacity for Big Data & Hadoop

Online course from Udacity for Big Data & Hadoop training. blog.udacity.com/2013/11/sebast…

Posted in Tweets | Tagged , | Comments Off on Online course from Udacity for Big Data & Hadoop

Short Scheduled Network Maintenance – Sunday, Nov 17th, 8:30am AEDT

As per last weekend, our main Upstream Internet Provider has requested a short partial network restart of their core network in Sydney. We connect to their network via two paths and one of those paths will experience an outage while the provider’s equipment is upgraded and restarted.

This network maintenance is scheduled for 8:30am (Sydney, Australia AEDT timezone) this coming Sunday morning, November 17th, and is expected to result in a short period while our network transitions fully to our other network path to the Internet in Sydney. This transition occurred seamlessly last Sunday and we expect the same during these works by our supplier this Sunday.

However, we request a 30 minute window, during which there may be short (a few minutes) outages of our Internet connectivity if the upstream provider has any problems with their works. Our network may also see a short period of connectivity loss in the Sydney area at this time if some network routers (using BGP) don’t transition to using our second uplink in Sydney.

We’ll be using this scheduled maintenance effectively as a “live test” of another aspect of our network redundancy in Sydney and we’ll be able to report to customers if things don’t go as expected or optimally.

I’ll be personally contactable by phone & online during and after these works in case any issues arise.

FYI and regards,
Richard.

Posted in Network Presence | Tagged | Comments Off on Short Scheduled Network Maintenance – Sunday, Nov 17th, 8:30am AEDT

RT @JordanKueh: @netpres Yes! Brilliant! :D

RT @JordanKueh: @netpres Yes! Brilliant! 😀

Posted in Tweets | Comments Off on RT @JordanKueh: @netpres Yes! Brilliant! :D

Sub $8/month VPS via Xen with 20GB of Disk & 1…

Sub $8/month VPS via Xen with 20GB of Disk & 1TB of fast Network Data. Add IPv6 for $1/m. networkpresence.com.au/hosting/vps-pl… #NetPres

Posted in Tweets | 5 Comments

Need “Follow the Sun” Linux/Open-Source Support & Project Personnel?

Network Presence has highly-skilled and competent tele-workers on deck between 9am and 9pm Sydney, Australia time every day, able to take on deep (Level 2 or Level 3) Support tasks or Project-based Linux and Open-Source software and infrastructure work within those timeframes.

We can provide a valuable linchpin for IT and online businesses that want to keep highly capable and proficient IT&C/Internet Technology workers active after US and European staff have ended their daily work hours, and before Asia wakes up for the day.

This is known as “Follow the Sun” operations.

To explain where and when we can work for US businesses:

8pm in San Francisco and 11pm in New York is 3pm in Sydney, while 8pm in Sydney is 1am in San Francisco and 4am in New York.

For European timeframes, we’re available for telework from 10pm through to 10am London time, which is 9am to 9pm Sydney time.
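
If you’d like to double-check those overlaps yourself, GNU date (as found on our Debian VPS) can do the conversions using the standard IANA timezone names, for example:

# 3pm Sydney expressed in San Francisco and New York local time:
TZ="America/Los_Angeles" date -d 'TZ="Australia/Sydney" 15:00'
TZ="America/New_York" date -d 'TZ="Australia/Sydney" 15:00'
# 9pm Sydney expressed in London local time:
TZ="Europe/London" date -d 'TZ="Australia/Sydney" 21:00'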

Our skillset includes my own capabilities (Richard Siggs on LinkedIn), and other personnel who work daily at Network Presence on similar tasks are also available, with Richard as backup/assurance.

Please contact us online or feel free to call me directly on +61 414 882 225 to discuss if you’re interested.

Regards,
Richard.

Posted in Network Presence, Rich, Sales | Tagged , , | Comments Off on Need “Follow the Sun” Linux/Open-Source Support & Project Personnel?

Installing MariaDB Galera MySQL Cluster on Debian 7 VPS

At Network Presence we don’t meter or charge for data exchanged between customer VPS, so you can purchase 3 x VPS from us and use the open-source Clustered Database technologies described below to create a High Availability, scaled, MySQL-compatible database for your sites and content. And it’s a Cluster that doesn’t need any shared, distributed or clustered filesystem for the storage of the MySQL Databases themselves, because that data is replicated to each node by the MariaDB Galera software itself.

Here are the steps (down to the commands to run as root) to install and configure the MariaDB Galera Clustered Database server software on Debian 7, running across a 3 Node Cluster (3 nodes improves DB consistency and response time, and reduces split-brain occurrences and full DB re-syncs).

Also note that currently (in MariaDB 5.5 at least) the MariaDB Galera Cluster only supports the InnoDB/XtraDB storage engine, so ensure that the Databases you want to Cluster use InnoDB tables.
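
For example, you can list any MyISAM tables and convert them to InnoDB with commands along these lines (the database and table names "mydb.mytable" are placeholders for your own):

mysql -uroot -pMYSQLROOTPASSWD -e "SELECT table_schema, table_name, engine FROM information_schema.tables WHERE engine = 'MyISAM' AND table_schema NOT IN ('mysql','information_schema','performance_schema');"
mysql -uroot -pMYSQLROOTPASSWD -e "ALTER TABLE mydb.mytable ENGINE=InnoDB;"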

Some Linux or system level requirements or pre-install setup tasks are:

1) Ensure the /etc/hosts file is the same on each Node and contains entries for each Node’s IP address and hostname (both the short hostname and the fully qualified hostname).

eg:
cat << EOF >> /etc/hosts
192.168.0.1 node1 node1.domain.name
192.168.0.2 node2 node2.domain.name
192.168.0.3 node3 node3.domain.name
EOF

2) Ensure that each Node’s Firewall permits the other Nodes’ IP addresses access to ports 4444 and 4567, plus the standard MySQL port of 3306.
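
For example, with iptables on node1, rules along these lines would allow node2 and node3 in (adjust for your own firewall setup, and add the equivalent rules on the other Nodes):

iptables -A INPUT -p tcp -s 192.168.0.2 -m multiport --dports 3306,4444,4567 -j ACCEPT
iptables -A INPUT -p tcp -s 192.168.0.3 -m multiport --dports 3306,4444,4567 -j ACCEPT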

Then, run/do the following on each Node to get the MariaDB Galera software installed:

a) Install prerequisite Debian Packages:

apt-get -y install perl python-software-properties rsync

b) Setup the MariaDB 5.5 Repository (localised for Australia):

apt-key adv --recv-keys --keyserver keyserver.ubuntu.com 0xcbcb082a1bb943db
add-apt-repository 'deb http://mirror.aarnet.edu.au/pub/MariaDB/repo/5.5/debian wheezy main'

Note: using MariaDB version 5.5 as 10.0 is still alpha/beta at this time.

c) Install MariaDB Galera:

apt-get install mariadb-galera-server galera

d) Ensure that there are no “Empty” MySQL Users:

mysql -e "SET wsrep_on = OFF; DELETE FROM mysql.user WHERE user = '';"

Once all 3 Nodes have the above software installed and the initial MySQL settings applied, bring them all up and online (running mysql with the default Debian settings on each node is fine at this point).

Install the relevant Database content on the “first” Node, which we’ll call “Node 1” or reference as “node1”. This may well involve using and knowing the MySQL ‘root’ username and its password.
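
For example, restoring an existing database dump onto Node 1 could look like the following (the database name and dump file path are placeholders):

mysql -uroot -pMYSQLROOTPASSWD -e "CREATE DATABASE mydb;"
mysql -uroot -pMYSQLROOTPASSWD mydb < /root/mydb_dump.sql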

After that, restart Node 1’s MySQL software as the initiating node of a Galera Cluster, by installing the following configuration file and restarting Node 1’s MySQL server software, as per:

cat << EOF > /etc/mysql/conf.d/cluster.cnf
[server]
bind_address = 0.0.0.0 ## Be reachable via the network
#
[mysqld]
##
## gcomm:// only for initial start of first Node
## After that it should list each Node's IP as it's joined to the Cluster, ending up with:
##wsrep_cluster_address = 'gcomm://192.168.0.2,192.168.0.3'
wsrep_cluster_address = 'gcomm://'
wsrep_provider = /usr/lib/galera/libgalera_smm.so
##wsrep_retry_autocommit = 0
wsrep_sst_method = rsync
binlog_format=ROW
query_cache_size=0
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
innodb_locks_unsafe_for_binlog=1
#
# optional additional suggested settings:
##innodb_buffer_pool_size=28G
##innodb_log_file_size=100M
##innodb_file_per_table
##innodb_flush_log_at_trx_commit=2
#
[mysqld_safe]
log-error=/var/log/mysql/mysqld_safe.log
EOF

e) Restart the MySQL Server on Node 1 with the above /etc/mysql/conf.d/cluster.cnf file installed:

/etc/init.d/mysql restart

f) Check how the MySQL Galera Cluster server is going after the restart with:

tail -f /var/log/mysql/mysqld_safe.log

g) Check how the MariaDB Galera Server is running as a Cluster with the local mysql client command:

mysql -uroot -pMYSQLROOTPASSWD -e "SHOW STATUS LIKE 'wsrep_%';"

You should see output like:

+----------------------------+--------------------------------------+
| Variable_name | Value |
+----------------------------+--------------------------------------+
| wsrep_local_state_uuid | cafebeef-0123-1234-9876-5ebcdef01234 |
| wsrep_protocol_version | 4 |
| wsrep_last_committed | 1 |
| wsrep_replicated | 0 |
| wsrep_replicated_bytes | 0 |
| wsrep_received | 2 |
| wsrep_received_bytes | 232 |
| wsrep_local_commits | 0 |
| wsrep_local_cert_failures | 0 |
| wsrep_local_bf_aborts | 0 |
| wsrep_local_replays | 0 |
| wsrep_local_send_queue | 0 |
| wsrep_local_send_queue_avg | 0.500000 |
| wsrep_local_recv_queue | 0 |
| wsrep_local_recv_queue_avg | 0.000000 |
| wsrep_flow_control_paused | 0.000000 |
| wsrep_flow_control_sent | 0 |
| wsrep_flow_control_recv | 0 |
| wsrep_cert_deps_distance | 0.000000 |
| wsrep_apply_oooe | 0.000000 |
| wsrep_apply_oool | 0.000000 |
| wsrep_apply_window | 0.000000 |
| wsrep_commit_oooe | 0.000000 |
| wsrep_commit_oool | 0.000000 |
| wsrep_commit_window | 0.000000 |
| wsrep_local_state | 4 |
| wsrep_local_state_comment | Synced |
| wsrep_cert_index_size | 0 |
| wsrep_causal_reads | 0 |
| wsrep_incoming_addresses | 192.168.0.1:3306 |
| wsrep_cluster_conf_id | 4 |
| wsrep_cluster_size | 1 |
| wsrep_cluster_state_uuid | cafebeef-0123-1234-9876-5ebcdef01234 |
| wsrep_cluster_status | Primary |
| wsrep_connected | ON |
| wsrep_local_index | 1 |
| wsrep_provider_name | Galera |
| wsrep_provider_vendor | Codership Oy |
| wsrep_provider_version | 23.2.7-wheezy(r) |
| wsrep_ready | ON |
+----------------------------+--------------------------------------+

With Node 1 now running a single-node Galera Cluster, you can configure and start Nodes 2 and 3 with the following commands, run and verified on Node 2 and then on Node 3:

h) Load the MariaDB Cluster config file, listing Node 1’s IP address as the initial ‘gcomm’ Cluster address. On Node 2 do:

cat << EOF > /etc/mysql/conf.d/cluster.cnf
[server]
bind_address = 0.0.0.0 ## Be reachable via the network
#
[mysqld]
##
## gcomm:// only for initial start of first Node
## After that it should list each Node's IP as it's joined to the Cluster, ending up with:
##wsrep_cluster_address = 'gcomm://192.168.0.1,192.168.0.3' (for Node 2) and
##wsrep_cluster_address = 'gcomm://192.168.0.1,192.168.0.2' (for Node 3) once all nodes are in the Cluster.
wsrep_cluster_address = 'gcomm://192.168.0.1'
wsrep_provider = /usr/lib/galera/libgalera_smm.so
##wsrep_retry_autocommit = 0
wsrep_sst_method = rsync
binlog_format=ROW
query_cache_size=0
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
innodb_locks_unsafe_for_binlog=1
#
# optional additional suggested settings:
##innodb_buffer_pool_size=28G
##innodb_log_file_size=100M
##innodb_file_per_table
##innodb_flush_log_at_trx_commit=2
#
[mysqld_safe]
log-error=/var/log/mysql/mysqld_safe.log
EOF

i) Restart the MySQL Server on Node 2 with the above /etc/mysql/conf.d/cluster.cnf file installed:

/etc/init.d/mysql restart

j) Check how the MySQL Galera Cluster server is going after the restart with:
Note: Any UUIDs are replaced with ‘…’ in the following output

tail -f /var/log/mysql/mysqld_safe.log

You should see log file entries about each node joining and being synchronised into the Galera MySQL Cluster on Node 1, with entries like:

[Note] WSREP: declaring ... stable
[Note] WSREP: Node ... state prim
[Note] WSREP: (..., 'tcp://0.0.0.0:4567') turning message relay requesting on, nonlive peers: tcp://192.168.0.3:4567
[Note] WSREP: view(view_id(PRIM,...,11) memb {
...,
...,
} joined {
} left {
} partitioned {
...,
})
[Note] WSREP: New COMPONENT: primary = yes, bootstrap = no, my_idx = 1, memb_num = 2
[Note] WSREP: STATE EXCHANGE: Waiting for state UUID.
[Note] WSREP: forgetting ... (tcp://192.168.0.3:4567)
[Note] WSREP: (..., 'tcp://0.0.0.0:4567') turning message relay requesting off
[Note] WSREP: STATE EXCHANGE: sent state msg: ...
[Note] WSREP: STATE EXCHANGE: got state msg: ... from 0 (node1.domain.name)
[Note] WSREP: STATE EXCHANGE: got state msg: ... from 1 (node2.domain.name)
[Note] WSREP: Quorum results:
version = 2,
component = PRIMARY,
conf_id = 8,
members = 2/2 (joined/total),
act_id = 162,
last_appl. = 0,
protocols = 0/4/2 (gcs/repl/appl),
group UUID = ...
[Note] WSREP: Flow-control interval: [23, 23]
[Note] WSREP: New cluster view: global state: ...:162, view# 9: Primary, number of nodes: 2, my index: 1, protocol version 2
[Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
[Note] WSREP: Assign initial position for certification: 162, protocol version: 2
14:24:37 [Note] WSREP: cleaning up ... (tcp://192.168.0.3:4567)
[Note] WSREP: declaring ... stable
[Note] WSREP: declaring ... stable
[Note] WSREP: Node ... state prim
[Note] WSREP: view(view_id(PRIM,...,12) memb {
...,
...,
...,
} joined {
} left {
} partitioned {
})
[Note] WSREP: New COMPONENT: primary = yes, bootstrap = no, my_idx = 1, memb_num = 3
[Note] WSREP: STATE EXCHANGE: Waiting for state UUID.
[Note] WSREP: STATE EXCHANGE: sent state msg: ...
[Note] WSREP: STATE EXCHANGE: got state msg: ... from 0 (node1.domain.name)
[Note] WSREP: STATE EXCHANGE: got state msg: ... from 1 (node2.domain.name)
[Note] WSREP: STATE EXCHANGE: got state msg: ... from 2 (node3.domain.name)
[Note] WSREP: Quorum results:
version = 2,
component = PRIMARY,
conf_id = 9,
members = 3/3 (joined/total),
act_id = 162,
last_appl. = 0,
protocols = 0/4/2 (gcs/repl/appl),
group UUID = ...
[Note] WSREP: Flow-control interval: [28, 28]
[Note] WSREP: New cluster view: global state: ...:162, view# 10: Primary, number of nodes: 3, my index: 1, protocol version 2
[Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
[Note] WSREP: Assign initial position for certification: 162, protocol version: 2
[Note] WSREP: Member 2 (node3.domain.name) synced with group.

k) Again, use the mysql client on Node 2 to query Node 2’s Galera Server for its view of how many Nodes the Cluster has, etc.

Run on Node 2:

mysql -uroot -pMYSQLROOTPASSWD -e "SHOW STATUS LIKE 'wsrep_%';"

With Nodes 1 and 2 now running a dual-node Galera Cluster, you can configure and start Node 3 with the following commands, run and verified on Node 3:

l) Load the MariaDB Cluster config file, listing Node 1’s and Node 2’s IP addresses as the initial ‘gcomm’ Cluster addresses. On Node 3 do:

cat << EOF > /etc/mysql/conf.d/cluster.cnf
[server]
bind_address = 0.0.0.0 ## Be reachable via the network
#
[mysqld]
##
## gcomm:// only for initial start of first Node
## After that it should list each Node's IP as it's joined to the Cluster, ending up with:
##wsrep_cluster_address = 'gcomm://192.168.0.1,192.168.0.2' (for Node 3) once all nodes are in the Cluster.
wsrep_cluster_address = 'gcomm://192.168.0.1,192.168.0.2'
wsrep_provider = /usr/lib/galera/libgalera_smm.so
##wsrep_retry_autocommit = 0
wsrep_sst_method = rsync
binlog_format=ROW
query_cache_size=0
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
innodb_locks_unsafe_for_binlog=1
#
# optional additional suggested settings:
##innodb_buffer_pool_size=28G
##innodb_log_file_size=100M
##innodb_file_per_table
##innodb_flush_log_at_trx_commit=2
#
[mysqld_safe]
log-error=/var/log/mysql/mysqld_safe.log
EOF

m) Restart the MySQL Server on Node 3 with the above /etc/mysql/conf.d/cluster.cnf file installed:

/etc/init.d/mysql restart

n) Check the mysqld_safe.log logfiles on all nodes to see that Node 3 also joins and synchronises with the Cluster.

At this point you have all 3 Nodes in the Galera MySQL Cluster, and you can now update each Node’s cluster.cnf file to list the IP addresses of all Nodes in the Cluster.

o) On Node 1, change the line in /etc/mysql/conf.d/cluster.cnf that says:

From:
wsrep_cluster_address = 'gcomm://'

To:
wsrep_cluster_address = 'gcomm://192.168.0.1,192.168.0.2,192.168.0.3'

Then restart MySQL on Node 1 and, once Node 1’s mysql server software has restarted, check the relevant log file and query the Cluster membership with the mysql client:

/etc/init.d/mysql restart
tail -f /var/log/mysql/mysqld_safe.log
mysql -uroot -pMYSQLROOTPASSWD -e "SHOW STATUS LIKE 'wsrep_%';"

Once Node 1 has been reconfigured to know of all Cluster Members, do the same on Node 2 and Node 3, setting their wsrep_cluster_address in cluster.cnf to:

wsrep_cluster_address = 'gcomm://192.168.0.1,192.168.0.2,192.168.0.3'

Then restart their MySQL Servers and check that each of them rejoins the live Cluster.
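
A quick way to make that cluster.cnf change on Nodes 2 and 3 (assuming the file matches the examples above) is:

sed -i "s|^wsrep_cluster_address = .*|wsrep_cluster_address = 'gcomm://192.168.0.1,192.168.0.2,192.168.0.3'|" /etc/mysql/conf.d/cluster.cnf
/etc/init.d/mysql restart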

By now you should have a live, running 3 Node MariaDB Galera MySQL Cluster, whose status on each Node should list 3 members of the Cluster, checked via:

mysql -uroot -pMYSQLROOTPASSWD -e "SHOW STATUS LIKE 'wsrep_%';" | egrep "wsrep_incoming_addresses|wsrep_cluster_size"
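
On a healthy Cluster you should see something along these lines on every Node (the ordering of the addresses may vary):

| wsrep_incoming_addresses | 192.168.0.1:3306,192.168.0.2:3306,192.168.0.3:3306 |
| wsrep_cluster_size | 3 |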

FYI and regards,
Richard.

Posted in Network Presence | Tagged , , , , | Comments Off on Installing MariaDB Galera MySQL Cluster on Debian 7 VPS

Installing DRBD across two Debian 7 VPS

At Network Presence we don’t meter or charge for data exchanged between customer VPS, so you can purchase 2 x VPS from us and use open-source Disk Clustering technologies described below to share a common filesystem between your VPS.

Here are the steps (down to the commands to run as root) to install and configure a DRBD Network Disk on Debian 7, running across a 2 Node Cluster:

Some Linux or system level requirements or pre-install setup tasks are:

1) Ensure the /etc/hosts file is the same on each Node and contains entries for each Node’s IP address and hostname (both the short hostname and the fully qualified hostname).

eg:
192.168.0.1 node1 node1.domain.name
192.168.0.2 node2 node2.domain.name

2) Ensure that each Node’s Firewall permits the other Node’s IP address access to ports 7788 to 7799.
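
For example, with iptables on node1 you could allow node2 in with the rule below (and add the equivalent rule on node2, allowing node1’s IP address):

iptables -A INPUT -p tcp -s 192.168.0.2 --dport 7788:7799 -j ACCEPT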

3) The example setup below assumes that you have a Disk Device of the same size on both nodes. In this example that device is referred to as “/dev/xvdX”, but it could be a “/dev/loop0” device created on each node with commands like:

dd if=/dev/zero of=/storage/somewhere/drbd.disk bs=1024k count=1024
losetup /dev/loop0 /storage/somewhere/drbd.disk

In that case, use “/dev/loop0” in the DRBD config below instead of “/dev/xvdX”.

Then, run/do the following on each Node to get the DRBD Disk live, sync’ing and running on the first node ASAP:

a) Install prerequisite Debian Packages:

apt-get -y install drbd8-utils

b) Configure a DRBD Disk in your VPS:

cat << EOF > /etc/drbd.d/DISK1.res
resource DISK1 {

startup {
become-primary-on both;
}
net {
## not recommended to enable the allow-two-primaries option upon initial configuration. You should do so after the initial resource synchronization has completed.
allow-two-primaries; ## After you enable the allow-two-primaries option for this resource, you will be able to promote the resource to the primary role on both nodes.
after-sb-0pri discard-zero-changes;
after-sb-1pri discard-secondary;
after-sb-2pri disconnect;
}

on node1 {
device /dev/drbd1;
disk /dev/xvdX;
address 192.168.0.1:7789;
meta-disk internal;
}
on node2 {
device /dev/drbd1;
disk /dev/xvdX;
address 192.168.0.2:7789;
meta-disk internal;
}
}
EOF

c) Initialize the device for DRBD use:

drbdadm create-md DISK1

d) Sync the DRBD Disk across your VPS Nodes:

On one of the nodes (eg: Node 1), run:

drbdadm up all
drbdadm -- --overwrite-data-of-peer primary all
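
You can watch the initial sync progress on either node with:

watch -n1 cat /proc/drbd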

Speed the sync up with a command like the following (only needed on one node):

drbdsetup /dev/drbd1 syncer -r 110M

If you want this faster sync rate by default, then add the following to your DRBD config file (listed above) before the ‘resource’ line:

common { syncer { rate 100M; } }

e) Activate the disk as Primary/Primary:

The easiest way, once the disk sync (above) has finished, is to reboot Node 2.

Or, you can run the following command on Node 2:

drbdadm primary DISK1

(where DISK1 is the resource name listed in the DRBD config file above)

f) Once Node 2 restarts, the DRBD Cluster should show Primary/Primary across the 2 Nodes:

root@node2:~# cat /proc/drbd
version: 8.3.[..]
1: cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate C r-----

g) From here, you can create a Clustered Filesystem on /dev/drbd1 and mount it on both nodes at once. eg: Use OCFS2 or GFS2 or another clustered filesystem type.

A filesystem which specifically supports multiple-host mounting (ie: a Clustered Filesystem) is required to be able to share and have the filesystem mounted on each host at the same time. Regular filesystems such as EXT3, EXT4 or XFS are not capable of this and can only be mounted on one host at a time (even though the underlying disk device (/dev/drbd1) is common between the nodes).
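
As a quick sketch of where that leads (more detail in a follow-up post), formatting the DRBD device with OCFS2 for two node slots would look something like the lines below. The filesystem label is arbitrary, and the o2cb cluster configuration in /etc/ocfs2/cluster.conf still needs to be in place before the filesystem can be mounted on both nodes:

apt-get -y install ocfs2-tools
mkfs.ocfs2 -N 2 -L DISK1 /dev/drbd1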

Further posts here will build on this post to describe creating and using OCFS2 as a Clustered Filesystem across the two DRBD Nodes.

FYI and regards,
Richard.

Posted in Network Presence | Tagged , , | Comments Off on Installing DRBD across two Debian 7 VPS

RT @sage_au: We would like to congratulate promine…

RT @sage_au: We would like to congratulate prominent member @simonhackett for his recent appointment to Australia’s NBNCo Board: http://t.c…

Posted in Tweets | Comments Off on RT @sage_au: We would like to congratulate promine…

RT @GuardianUS: Netflix and YouTube now make up a…

RT @GuardianUS: Netflix and YouTube now make up a majority of US internet traffic. See for yourself: trib.al/yszTnsV http://t.co/5uc…

Posted in Tweets | Comments Off on RT @GuardianUS: Netflix and YouTube now make up a…

Full WHM/cPanel VPS with dual IP addresses & f…

Full WHM/cPanel VPS with dual IP addresses & fast resources in Sydney. networkpresence.com.au/hosting/cpanel… #NetPres

Posted in Tweets | Comments Off on Full WHM/cPanel VPS with dual IP addresses & f…