Ending commercial hosting services through Network Presence

After some 15 years of operation, I've decided to shut down the public
retail cloud/VPS operations side of Network Presence.

The first phase of this is to shut down the Sydney POP quickly, with the
last day of service from Network Presence in Sydney being February
28th.

I apologise for the inconvenience, and I'm sorry to have to ask you to
relocate your services provided by Network Presence as soon as possible,
and by February 28th at the latest: any service not migrated away from
Network Presence in Sydney by that date will no longer be available.

The Adelaide POP will continue to operate, but I won’t be offering
commercial-grade QoS and products in Adelaide from April 2026.

Any payments made to date for service beyond February will be refunded.
Please also cancel any PayPal subscriptions or scheduled payments to
Network Presence.

I understand you'd like to know why I'm doing this. It's because the
Sydney POP and the retail public cloud side of Network Presence have
been deeply unprofitable for some time, with costs increasing
substantially over the last year or so.

I've been happy to support it while doing mostly remote IT contracting
work around Adelaide. But my work is changing in 2026, and I can no
longer give the time, let alone the funding, to operate this service to
the public.

I'm available to discuss this with you, so please let me know anything
you need me to action for you as I quickly wind down these services in
my business.

I've enjoyed working with you, and I thank you for your business over
these years. I wish you all the best in your future endeavours.

Regards,
Richard.


Get all the logs and output from a systemctl based service

Use the following command to get the full output from a recent service start attempt:

SYSTEMD_LESS="FRXMK" journalctl -xeu $SERVICENAME

eg: # SYSTEMD_LESS="FRXMK" journalctl -xeu httpd
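
If you want to watch a service's output live while you restart or test it, journalctl can also follow the log; for example, with the httpd unit from above:

# journalctl -fu httpd --since "10 minutes ago"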


tcpdump params to find the initial connection packets to a port

'tcp[tcpflags] & (tcp-syn) != 0 and tcp[tcpflags] & (tcp-ack) == 0 and dst port $PORTNUM'
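
For example, to watch for new inbound connections to port 443 (eth0 and 443 are just placeholders for your interface and port):

# tcpdump -ni eth0 'tcp[tcpflags] & (tcp-syn) != 0 and tcp[tcpflags] & (tcp-ack) == 0 and dst port 443'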


Upgrading Adelaide POP – Adding new VPS Hypervisors in Adelaide

We’re bringing online new VPS hosting servers (hypervisors) in our Adelaide POP, and moving some existing Adelaide POP customers to these new servers, with short outages for each customer VPS as these migrations occur.

This is possible now that we've integrated our second fibre-based link to the Adelaide POP earlier this year.

These new servers in Adelaide will help host the growth that the POP is seeing.


Sydney Data Centre costs increase by 5% every year

Once again, like clockwork, it’s the season for multi-national companies to be raising their billing rates to Australians.

This time it's our Sydney data centre, Equinix, increasing their charges to us by 5% from January, as is their contracted right and as they do every year without fail.

As with our other posts about such cost rises, we'd ask customers to consider buying more VPS resources (eg: more CPUs, RAM or disk space) via our website; this helps fund these cost increases while giving your own VPS more resources.

Regards,
Richard.


Installing Slurm across a multi-node cluster

In this example Slurm cluster we have three nodes: node1, node2 and node3.

node1 is on IP 10.0.0.1, node2 is on 10.0.0.2 and node3 is on 10.0.0.3.

A critical prerequisite is that forward and reverse lookups (via /etc/hosts or DNS) work on every node and return the same, correct hostname and IP address information for each node in the cluster.
In this lab cluster, the /etc/hosts file on each node lists all hostnames and IPs used in the cluster.
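
For example, the /etc/hosts entries on each node in this lab would look like the following (the distro's localhost lines stay as they are):

10.0.0.1 node1
10.0.0.2 node2
10.0.0.3 node3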

All our cluster nodes are running the latest CentOS Stream 9 Linux, updated and rebooted after running "dnf distro-sync" so they're all on the same CentOS/RHEL software versions.

Firewalld is enabled, and the following firewall-cmd commands have been run on each node (6817, 6818 and 6819 are the default slurmctld, slurmd and slurmdbd ports, and 60001-60100 is the srun port range we configure below):

firewall-cmd --permanent --zone=public --add-rich-rule='rule family="ipv4" source address="10.0.0.0/24" port protocol="tcp" port="6817" accept'
firewall-cmd --permanent --zone=public --add-rich-rule='rule family="ipv4" source address="10.0.0.0/24" port protocol="tcp" port="6818" accept'
firewall-cmd --permanent --zone=public --add-rich-rule='rule family="ipv4" source address="10.0.0.0/24" port protocol="tcp" port="6819" accept'
firewall-cmd --permanent --zone=public --add-rich-rule='rule family="ipv4" source address="10.0.0.0/24" port protocol="tcp" port="60001-60100" accept' && firewall-cmd --reload
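
As a quick check, you can list the rich rules on each node to confirm they took effect:

firewall-cmd --zone=public --list-rich-rules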

Slurm uses munge for its default authentication between nodes, so it's best to install it on every node:

dnf -y install epel-release
dnf install -y munge munge-libs

On node1 (your "main" or "head" node) run: /usr/sbin/create-munge-key

Then set the file and directory permissions that munge needs on RHEL-family systems by running the following on each node:

sudo chown -R munge: /etc/munge/ /var/log/munge/ /var/lib/munge/ /run/munge/
sudo chmod 0700 /etc/munge/ /var/log/munge/ /var/lib/munge/
sudo chmod 0755 /run/munge/
sudo chmod 0700 /etc/munge/munge.key


Then copy the /etc/munge/munge.key file to all your nodes.

Get munge going on each node with the above dnf install command and then:

systemctl enable munge && systemctl start munge
systemctl status munge

Some tests and debugging steps need each node to be able to ssh into the others, so set up your authorized_keys files on each node as you need; see the sketch below.
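
A minimal sketch of that key setup, plus a cross-node munge check, assuming you're working as root and node2 is reachable by name (repeat for node3 and from the other nodes):

ssh-keygen -t ed25519            # accept the defaults; no passphrase is fine for a lab
ssh-copy-id root@node2           # copy your public key to the other node
munge -n | ssh node2 unmunge     # should decode cleanly if the munge key and clocks match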

Now install Slurm with the following on node1:

dnf install -y slurm slurm-slurmctld slurm-slurmdbd mariadb-server slurm-slurmd

And do the following installs on every compute node:

dnf install -y slurm slurm-slurmd

To configure Slurm, all nodes share the same /etc/slurm/slurm.conf file, which has the following critical changes from the default:

  • Set ClusterName
  • Set SlurmctldHost=node1(10.0.0.1)
  • List all nodes in NodeName= entries:
    NodeName=node1 CPUs=2 State=UNKNOWN
    NodeName=node2 CPUs=2 State=UNKNOWN
    NodeName=node3 CPUs=2 State=UNKNOWN
  • Add a critical firewall-compatibility setting, matching the port range opened above:
    SrunPortRange=60001-60100

Restart slurmctld on the head/main node (node1) and then restart slurmd on all nodes; see the sketch below.
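
Putting those together, a minimal sketch of the changed slurm.conf lines for this lab could look like the following (the ClusterName and PartitionName values here are only illustrative placeholders, not from a real deployment):

ClusterName=lab
SlurmctldHost=node1(10.0.0.1)
SrunPortRange=60001-60100
NodeName=node1 CPUs=2 State=UNKNOWN
NodeName=node2 CPUs=2 State=UNKNOWN
NodeName=node3 CPUs=2 State=UNKNOWN
PartitionName=debug Nodes=node[1-3] Default=YES MaxTime=INFINITE State=UP

The daemons can then be enabled and (re)started with, on node1:

systemctl enable --now slurmctld slurmd

and on node2 and node3:

systemctl enable --now slurmd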

Check the Slurm cluster status with the sinfo command from any node; in normal operation every node should return the same information.
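
For example, from any node (the partition name shown will depend on your slurm.conf):

# sinfo
# srun -N3 hostname   # should print node1, node2 and node3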

Debug with logfiles in /var/log/slurm/


Installing GPFS

IBM's GPFS distributed file system is used in many HPC environments and is now known as IBM Storage Scale. It is not open-source software and is downloaded from IBM's sites. There is a Developer edition which can be downloaded for trial use after registering with IBM.

Here's how to install the Developer edition of IBM Storage Scale on an AlmaLinux 9.6 operating system environment, as of September 2025.

Start from a standard minimal install of AlmaLinux 9.6 (say from its ISO or an image template), which matches the newest kernel and software releases currently supported by IBM Storage Scale; see https://www.ibm.com/docs/en/storage-scale?topic=STXKQY/gpfsclustersfaq.html#fsi

Run the following commands to ensure you're on the latest known working kernel version for GPFS/IBM Storage Scale, which at this date is
5.14.0-570.30.1.el9_6 on RHEL 9.6 or AlmaLinux 9.6:

dnf -y update

dnf -y install kernel-5.14.0-570.30.1.el9_6 kernel-devel-5.14.0-570.30.1.el9_6 kernel-headers-5.14.0-570.30.1.el9_6 bzip2 gcc gcc-c++ make tar unzip zip

grubby --set-default /boot/vmlinuz-5.14.0-570.30.1.el9_6.x86_64

reboot
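
After the reboot, confirm you're actually running the kernel you pinned with grubby before building anything:

# uname -r
5.14.0-570.30.1.el9_6.x86_64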

Once rebooted on this particular kernel version with the build environment ready, you can run the "install" file supplied by IBM as the root user.
At the end of the installer's run, part of its output will list:

To install a cluster manually: Use the GPFS packages located within /usr/lpp/mmfs/5.2.3.3/gpfs_rpms/

To do this, cd to the relevant directory for your RPM-based Linux distribution and run the dnf install command with the following list of packages:

cd /usr/lpp/mmfs/5.2.3.3/gpfs_rpms/

dnf -y install gpfs.base*.rpm gpfs.gpl*.rpm gpfs.license.dev*.rpm gpfs.gskit*.rpm gpfs.docs*.rpm gpfs.msg*.rpm gpfs.adv*.rpm gpfs.crypto*.rpm

Now you can build the GPFS software suite and kernel modules with the single command provided by IBM, being: /usr/lpp/mmfs/bin/mmbuildgpl

# /usr/lpp/mmfs/bin/mmbuildgpl

mmbuildgpl: Building GPL (5.2.3.3) module begins at ....
--------------------------------------------------------
Verifying Kernel Header...
kernel version = 51400570 (514000570030000, 5.14.0-570.30.1.el9_6.x86_64, 5.14.0-570.30)
module include dir = /lib/modules/5.14.0-570.30.1.el9_6.x86_64/build/include
module build dir = /lib/modules/5.14.0-570.30.1.el9_6.x86_64/build
kernel source dir = /usr/src/linux-5.14.0-570.30.1.el9_6.x86_64/include
Found valid kernel header file under /usr/src/kernels/5.14.0-570.30.1.el9_6.x86_64/include
Getting Kernel Cipher mode...
Will use skcipher routines
Verifying Compiler...
make is present at /bin/make
cpp is present at /bin/cpp
gcc is present at /bin/gcc
g++ is present at /bin/g++
ld is present at /bin/ld
Verifying Additional System Headers...
Verifying kernel-headers is installed ...
Command: /bin/rpm -q kernel-headers
The required package kernel-headers is installed
make World ...
make InstallImages ...
--------------------------------------------------------
mmbuildgpl: Building GPL module completed successfully at ....
--------------------------------------------------------


Install Claude command line

Reimage your VPS with our Ubuntu 24.04 template.

Login to your VPS as root and run:

apt update && apt upgrade

Then install nodejs & npm with:

apt install nodejs npm

Now you can install claude-code with:

npm install -g @anthropic-ai/claude-code
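
You can confirm the install worked before starting a project; the exact output will vary with the release:

# claude --version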

With the above installed you can then run the "claude" command in your project's directory, eg:

mkdir -p claude-project/1 && cd claude-project/1 && claude


CentOS Vault

Here's the site where the final repositories of the old CentOS Linux distributions are archived.

http://archive.kernel.org/centos-vault/


How to SSH to Older RHEL 6 hosts from modern RHEL 9 hosts

Final command line is:

OPENSSL_CONF=~/.ssh/openssl.cnf ssh -o StrictHostKeyChecking=no -o ConnectTimeout=10 -o HostKeyAlgorithms=+ssh-rsa,ssh-dss -o PubkeyAcceptedAlgorithms=+ssh-rsa,ssh-dss -o KexAlgorithms=+diffie-hellman-group-exchange-sha1 -o MACs=+hmac-sha1 $USER@$HOST

Where ~/.ssh/openssl.cnf is:

.include /etc/ssl/openssl.cnf
[openssl_init]
alg_section = evp_properties
[evp_properties]
rh-allow-sha1-signatures = yes
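
If you connect to these older hosts regularly, the ssh options can be made persistent in ~/.ssh/config instead of the command line. A sketch, where "oldhost" is just a placeholder for your RHEL 6 machine, and OPENSSL_CONF still has to be set in the environment when you run ssh:

Host oldhost
    HostKeyAlgorithms +ssh-rsa,ssh-dss
    PubkeyAcceptedAlgorithms +ssh-rsa,ssh-dss
    KexAlgorithms +diffie-hellman-group-exchange-sha1
    MACs +hmac-sha1
    StrictHostKeyChecking no
    ConnectTimeout 10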
