Setting Up a (Raspberry Pi) Cluster with MPI

Anthony Morast
Oct 31, 2019

In 2018 my wife got me a few Raspberry Pi 3s for Christmas. I had the intention of setting up a small cluster of these single-board computers to test parallel algorithms using MPI and pthreads.

Having set up clusters of low-powered laptops in the past, I had a rough idea of what needed to be done, but it had been a few years since I'd actually set one up. Looking for information online, I found quite a few posts about getting MPI to work with Python on the Pi but not a lot about setting up the communication between the devices. Luckily, I had some old course notes and host files from doing this during my undergrad. With those notes and a little searching, I finally got the cluster set up and decided to write this post to help anyone needing the instructions (and to help me remember the process).

This walkthrough assumes you’ve already done all of the networking to connect the Pis. In my case, I used a 5-port switch connected to my router (so I can access the cluster remotely), which meant the IP addresses were managed by the router. For simplicity, I used my router’s admin panel to assign a static IP to each Raspberry Pi.

Step 1: Setting Up the Pis

The Raspberry Pis I purchased (or rather, that were purchased for me) came in a kit with a micro SD card pre-flashed with Raspbian. If that’s not the case for you, this post is a good walkthrough for getting the OS onto your Pi.

Once you have an operating system, you will need to enable SSH. Since my Pis were all headless (I didn’t want to connect each one to a monitor just for setup), I found this documentation from the Raspberry Pi organization on enabling the functionality. To do this, a file named ssh, with no extension, needs to be placed on the boot partition of the SD card. If this file is found on boot, the Pi will automatically enable SSH and delete the file.
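For example, with the SD card plugged into another Linux machine, the file can be created with touch. The mount point below is an assumption; check where your system actually mounted the boot partition:

touch /media/$USER/boot/ssh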

Another thing I took care of while messing around with files in the Pi’s boot partition was overclocking the Pis. This is only recommended if you have some way of dealing with the excess heat created by overclocking; in my case, the kits I purchased came with heatsinks and the cases came with fans. For the Raspberry Pi 3, editing this file is the only way to overclock the CPU and GPU since the options are not exposed in the raspi-config menu. The overclocking is as simple as adding or modifying some lines in the /boot/config.txt file, as explained in this forum post.
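As a rough sketch, the relevant lines in /boot/config.txt look something like the following. The specific values here are illustrative assumptions, not recommendations; what is actually stable depends on your cooling and your particular board.

# illustrative Pi 3 overclock settings in /boot/config.txt
arm_freq=1300      # CPU clock in MHz (the Pi 3B's stock clock is 1200)
gpu_freq=500       # GPU clock in MHz
over_voltage=4     # raise the core voltage to keep the higher clocks stable
sdram_freq=500     # RAM clock in MHz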

Step 2: Install MPI

I used MPICH as my implementation of the MPI standard. On Raspbian/Debian it can be installed with the apt package manager via

sudo apt install mpich

Feel free to use another MPI implementation if you so choose.
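As a quick sanity check, you can ask the runtime which version was installed (MPICH’s mpiexec supports a version flag):

mpiexec --version

Remember that the install needs to be repeated on every Pi in the cluster, not just the master.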

Step 3: SSH Without Entering a Password

One thing you’ll likely run into your first time setting up a cluster for MPI is the annoying “issue” of password prompting. To bypass this, we need to enable SSH’ing into each node of the cluster without having to type a password.

To do so, first generate an SSH key pair with ssh-keygen:

ssh-keygen -t rsa

There will be a few prompts after running this; just accept the defaults by pressing Enter without typing any values. Those more familiar with ssh-keygen can set these options however they like.

The public key then needs to be copied to the other machines. This is done with another SSH utility, ssh-copy-id:

ssh-copy-id <other node IP>

Here, <other node IP> is replaced with the IP address of the cluster node you want to connect to without a password. This step needs to be repeated for every other node in the cluster.

In my cluster there is one master node and four slave nodes, so I only copied the SSH key from the master to each slave, as the slave nodes do not communicate directly with one another. After this is done, SSH into each slave node from the master to complete the setup and verify you are not prompted for a password.
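If you have more than a couple of slave nodes, a small shell loop run on the master saves some typing. The IP addresses below are placeholders for whatever static addresses you assigned, and pi is Raspbian’s default user:

for ip in 192.168.1.101 192.168.1.102 192.168.1.103 192.168.1.104; do
    ssh-copy-id pi@$ip
done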

Step 4: Create Host Names and Edit Hosts File

At this point, you can SSH into each node and change its hostname by editing the /etc/hosts and /etc/hostname files. The hostname file should contain only one line, which needs to be changed to whatever you want to call the node. As mentioned above, I chose to name one node master and gave the others the naming convention slave{1, 2, …}, so the first compute node is slave1, the second is slave2, and so on. Next, the very last line of the /etc/hosts file needs to be updated to reflect the new hostname set in /etc/hostname.
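For instance, on the first compute node the two files end up looking something like this (127.0.1.1 is the address Raspbian/Debian conventionally maps to the machine’s own hostname):

# /etc/hostname
slave1

# last line of /etc/hosts
127.0.1.1       slave1

A reboot makes the new name take effect.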

Now that each host has a name, we can add those names to the /etc/hosts file on our master node. Append to the end of this file a line of the form

<node IP>     <hostname>

for each node in the network (the exact amount of whitespace between the two fields doesn’t matter). Similarly, on every non-master compute node, you will need to add the master node’s IP address and hostname, in the same format, to its /etc/hosts file.
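With my naming scheme, the master’s /etc/hosts ends up with entries like these, where the IPs are placeholders for the static addresses your router assigned:

192.168.1.101     slave1
192.168.1.102     slave2
192.168.1.103     slave3
192.168.1.104     slave4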

Step 5: Create an MPI Host File

This is fairly straightforward. We need to create a file that lets MPI know where our compute resources are and, optionally, how many processes each can run (or rather, how many we want it to use). This file just needs to hold the names/IPs of all the nodes (hosts) in the cluster, and it should be a plain text file that is accessible when running MPI programs.

The format is

<hostname> slots=<num procs>

where <hostname> is the hostname set up in step 4 and slots=<num procs> is an optional parameter that limits the number of processes to be created on that host (I set mine to the number of cores on each Pi). Note that <hostname> can be either a hostname from step 4 or the IP address of a compute node, though much of step 4’s purpose was to make this file simpler by avoiding raw IP addresses.

If you want to use the extra processors on your master node, be sure to include it in this host file as well. Below are examples of host files with and without the number of slots specified.

Host File With Slots

master slots=2 
slave1 slots=4
slave2 slots=4
slave3 slots=4
slave4 slots=4

Host File Without Slots

master 
slave1
slave2
slave3
slave4

Step 6: Test

Now you should be able to do a test run on your cluster. To do so, we will avoid any programming and use the built-in hostname program. To test, run the command

mpiexec -n <number procs> -hostfile <host file name (step 5)> hostname

This runs MPI using the hosts/slots specified in the host file from step 5. The command to be executed comes at the very end, namely hostname. Running this should print the hostname of every node in the cluster, since printing the hostname is exactly what the hostname command does.

The number of processes (<number procs>) should be large enough to ‘hit’ each node in the cluster if the optional slots parameter is specified in the host file. That is, if there are three hosts in your host file and the first two allow for 3 processes each, passing 5 to -n above would not allow the third host to receive any work from the master, as all 5 processes would be started on the first two nodes (which can handle a total of 6 processes between them).
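Putting it together with the slots host file above (assuming it was saved as a file named hostfile), a run that fills every slot looks like this:

mpiexec -n 18 -hostfile hostfile hostname

With 2 slots on the master and 4 on each of the four slaves, the 18 processes should print master twice and slave1 through slave4 four times each, though the order of the output lines is not deterministic.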

Originally published at https://www.anthonymorast.com on October 31, 2019.

