{"id":9378,"date":"2025-12-16T19:34:23","date_gmt":"2025-12-17T02:34:23","guid":{"rendered":"http:\/\/blog.networkpresence.co\/?p=9378"},"modified":"2026-02-22T21:21:55","modified_gmt":"2026-02-23T04:21:55","slug":"installing-openmpi-across-centos-rhel-9-based-nodes","status":"publish","type":"post","link":"http:\/\/blog.networkpresence.co\/?p=9378","title":{"rendered":"Installing OpenMPI across CentOS\/RHEL 9 based Nodes"},"content":{"rendered":"\n<p>OpenMPI is available for install from the AppStream Repo and EPEL Repo is also best enabled for its required associated packages in a CentOS 9 (RHEL) environment of separate like installed compute nodes in a cluster.<\/p>\n\n\n\n<p>Install OpenMPI with:<\/p>\n\n\n\n<p><code>dnf -y install openmpi<\/code><\/p>\n\n\n\n<p>The package in RHEL AppStream installs OpenMPI and its shell module file to the system, and given OpenMPI is a runtime based environment, there&#8217;s no server or such to run, the OpenMPI command line toolkit uses SSH to connect to the nodes in the OpenMPI cluster of hosts.<\/p>\n\n\n\n<p>So ensure that the users who will be running OpenMPI commands have SSH Key based authentication configured on all hosts in the cluster. 
That is, populate the authorized_keys file on every relevant host with the user&#8217;s SSH public key.<\/p>\n\n\n\n<p>Once OpenMPI is invoked on a host in the cluster, the OpenMPI processes on each node communicate over dynamically assigned network ports. In a RHEL FirewallD environment, OpenMPI therefore needs to be configured to use a known range of network ports, which can then be opened via firewall-cmd.<\/p>\n\n\n\n<p>In this example, we&#8217;ll set OpenMPI to use ports 50000 to 51999, opened with the following FirewallD command on each node in the cluster:<\/p>\n\n\n\n<p><code>firewall-cmd --permanent --zone=public --add-rich-rule='rule family=\"ipv4\" source address=\"10.0.0.0\/24\" port protocol=\"tcp\" port=\"50000-51999\" accept' &amp;&amp; firewall-cmd --reload<\/code><\/p>\n\n\n\n<p>Note: this example assumes the nodes are on the 10.0.0.0\/24 network.<\/p>\n\n\n\n<p>These port ranges can then be passed to the OpenMPI &#8220;mpirun&#8221; command line with the following parameters:<\/p>\n\n\n\n<p><code><strong>--mca btl_tcp_port_min_v4 50001 --mca btl_tcp_port_range_v4 30 --mca oob_tcp_dynamic_ipv4_ports 51001-51031<\/strong><\/code><\/p>\n\n\n\n<p>The shell user who will run the OpenMPI commands needs an MPI hosts file listing the hostname and other settings of each node in the OpenMPI cluster. An example file, named &#8220;mpi_hosts&#8221; in the user&#8217;s home directory, for a 3 node cluster where each node&#8217;s hostname and IP address is resolvable via DNS or listed in each host&#8217;s \/etc\/hosts file, is:<\/p>\n\n\n\n<p><code>node1 slots=2<br>node2 slots=2<br>node3 slots=2<\/code><\/p>\n\n\n\n<p>An example complete mpirun command line to run the &#8220;hostname&#8221; command 6 times across a 3 node cluster is:<\/p>\n\n\n\n<p><code>\/usr\/lib64\/openmpi\/bin\/mpirun --mca btl_tcp_port_min_v4 
50001 --mca btl_tcp_port_range_v4 30 --mca oob_tcp_dynamic_ipv4_ports 51001-51031 --hostfile ~\/mpi_hosts --path \/usr\/lib64\/openmpi\/bin -np 6 hostname<\/code><\/p>\n\n\n\n<p>Note: the <code>--path<\/code> option is required so the OpenMPI binaries can be found on each node.<\/p>\n\n\n\n<p>The above command results in the hostname command being run 6 times across the 3 nodes, with sample output (the ordering will vary between runs) looking like:<\/p>\n\n\n\n<p><code>node1<br>node3<br>node2<br>node1<br>node2<br>node3<\/code><\/p>\n","protected":false},"excerpt":{"rendered":"<p>OpenMPI is available for install from the AppStream repository, and the EPEL repository should also be enabled for its associated packages, on each of the separate, identically installed compute nodes in a CentOS 9 (RHEL) cluster. Install OpenMPI with: dnf &hellip; <a href=\"http:\/\/blog.networkpresence.co\/?p=9378\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[70],"tags":[168],"class_list":["post-9378","post","type-post","status-publish","format-standard","hentry","category-sales","tag-hpc"],"_links":{"self":[{"href":"http:\/\/blog.networkpresence.co\/index.php?rest_route=\/wp\/v2\/posts\/9378","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/blog.networkpresence.co\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/blog.networkpresence.co\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/blog.networkpresence.co\/index.php?rest_route=\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"http:\/\/blog.networkpresence.co\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=9378"}],"version-history":[{"count":1,"href":"http:\/\/blog.networkpresence.co\/index.php?rest_route=\/wp\/v2\/posts\/9378\/revisions"}],"predecessor-version":[{"id":9379,"href":"http:\/\
/blog.networkpresence.co\/index.php?rest_route=\/wp\/v2\/posts\/9378\/revisions\/9379"}],"wp:attachment":[{"href":"http:\/\/blog.networkpresence.co\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=9378"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/blog.networkpresence.co\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=9378"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/blog.networkpresence.co\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=9378"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}