
Building a Raspberry Pi mini cluster - Part 2

In Part 1 I looked at the hardware required to build a mini Raspberry Pi cluster with 5 machines, where one machine acts as the "head" node, connecting the cluster to the internet over a wireless LAN link, and the other four are "worker" nodes.

In this post I'll describe how to configure Raspbian OS on each of the machines. We'll want to be able to log in to each of the worker nodes from the head node without a password, and we'll want to enable internet access on the head node and each of the worker nodes.

I started by installing Raspbian (build 2013-07-26-wheezy-raspbian) on SD cards for each of the machines in the cluster. The commands described here should all be executed as the pi user. First we need to set up the head node. To configure it, attach a screen and keyboard and boot up the node.

Note - to edit the files on the head and worker nodes, I used the command sudo vi filename or alternatively sudo nano filename. You can use any text editor, but you will need to prefix the command with sudo because the files cannot be edited without root privileges.

Configuring networking on the head node

I decided that my cluster would use network addresses of the form 192.168.1.X for its ethernet network, because my router will assign the head node an address in the range 192.168.0.X for the wireless lan. The two subnets (ethernet and wireless lan) need to be different. In the light of these choices I configured my /etc/network/interfaces file (for example, use sudo vi /etc/network/interfaces to edit this file) to look like the following:

auto lo

iface lo inet loopback

allow-hotplug eth0
iface eth0 inet static
  # replace XXX with the static address you choose for the head node
  # on the cluster ethernet (in the 192.168.1.X range)
  address 192.168.1.XXX
  netmask

allow-hotplug wlan0
iface wlan0 inet dhcp
  wpa-ssid "my wireless ssid"
  wpa-psk "my wireless password"

Note - you'll need to replace "my wireless ssid" and "my wireless password" with your router's SSID and password (but keep the enclosing double quotes).

Next we need to make some changes to allow the other machines in the cluster to access the internet via the head node. First edit the kernel parameters file /etc/sysctl.conf and uncomment the following line:

net.ipv4.ip_forward=1
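On the stock Raspbian image this forwarding line ships commented out as #net.ipv4.ip_forward=1, so the change can also be scripted with sed. The sketch below rehearses the edit on a scratch copy, so it's safe to run anywhere; the sudo'd version against the real file is shown as a comment.

```shell
# Rehearse uncommenting the forwarding line on a scratch copy of the file.
tmp=$(mktemp)
echo '#net.ipv4.ip_forward=1' > "$tmp"
sed -i 's/^#\(net\.ipv4\.ip_forward=1\)/\1/' "$tmp"
cat "$tmp"
rm "$tmp"
# For the real file, on the head node:
#   sudo sed -i 's/^#\(net\.ipv4\.ip_forward=1\)/\1/' /etc/sysctl.conf
```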
We also need to execute the following command on startup. This can be achieved by adding the following line to /etc/rc.local (NOTE: if the file ends with the line exit 0, add the new line BEFORE that point in the file).

/sbin/iptables --table nat -A POSTROUTING -o wlan0 -j MASQUERADE
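For reference, after this change the end of /etc/rc.local on the head node looks like the sketch below (your file may contain other lines above it):

```shell
# end of /etc/rc.local on the head node: enable NAT for the workers,
# then fall through to the file's usual final line
/sbin/iptables --table nat -A POSTROUTING -o wlan0 -j MASQUERADE

exit 0
```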

The next step for configuring the head node is to assign names and IP addresses to each of the other machines in the cluster, and record these details in the head node's /etc/hosts file. Recall that I decided to use addresses in the range 192.168.1.X for the cluster ethernet. I added the following lines to the /etc/hosts file, naming the 4 other systems (the worker nodes) pi1, pi2, pi3 and pi4 (NOTE: replace each XXX with the address you assign to that worker):

192.168.1.XXX   pi1
192.168.1.XXX   pi2
192.168.1.XXX   pi3
192.168.1.XXX   pi4
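If it helps to see one concrete scheme, the fragment below prints hosts entries for the four workers using the hypothetical addresses 192.168.1.101-104 (an assumption for illustration; substitute whatever addresses you actually assign):

```shell
# Print /etc/hosts entries for the four workers, using a hypothetical
# addressing scheme of 192.168.1.101..104 (replace with your own).
for i in 1 2 3 4; do
    printf '192.168.1.10%d\tpi%d\n' "$i" "$i"
done
```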

Now that the head node configuration is complete, we can reboot it and move on to the worker nodes.

Configuring networking on the worker nodes

For each of the worker nodes, attach a screen and keyboard to the node and boot it, then follow these steps to set the host name and IP address of each worker (many thanks to Simon the Pi Man's post for steps 1 and 2):

(1) edit the file /etc/hostname and change the name from the default (raspberrypi) to the required name (for example, pi1 for the first of my worker nodes).

(2) edit the file /etc/hosts and replace the original name raspberrypi with the new name where it appears.

(3) now we'll set the IP address of the worker. Edit the file /etc/network/interfaces and enter the following (NOTE: replace the XXX in the address with the address you've assigned to each worker)

auto lo

iface lo inet loopback

iface eth0 inet static
  address 192.168.1.XXX
  netmask

(4) one last step for each worker is to create a directory called ~/.ssh, which will be needed when we configure passwordless access. Use the command mkdir ~/.ssh.
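The edits in steps (1)-(3) can also be scripted. The sketch below is a dry run against scratch copies of the files (NAME and the 192.168.1.101 address are hypothetical placeholders), so you can check the commands anywhere before applying them, with sudo and the real paths, on a worker.

```shell
# Dry run of steps (1)-(3) against scratch copies; ROOT stands in for /.
NAME=pi1
ADDR=192.168.1.101          # hypothetical address; use your own scheme
ROOT=$(mktemp -d)
mkdir -p "$ROOT/etc/network"
echo raspberrypi > "$ROOT/etc/hostname"
printf '127.0.0.1\tlocalhost\n127.0.1.1\traspberrypi\n' > "$ROOT/etc/hosts"

# (1) set the new host name
echo "$NAME" > "$ROOT/etc/hostname"
# (2) rename raspberrypi wherever it appears in /etc/hosts
sed -i "s/raspberrypi/$NAME/g" "$ROOT/etc/hosts"
# (3) write the static interfaces file
printf 'auto lo\niface lo inet loopback\n\niface eth0 inet static\n  address %s\n' "$ADDR" \
    > "$ROOT/etc/network/interfaces"

cat "$ROOT/etc/hostname" "$ROOT/etc/hosts"
```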

Now reboot the worker and move on to the next one. When all 4 workers are configured and rebooted, I also applied steps 1 and 2 on the head node to rename it from raspberrypi to pi0. Anyway, once all this is done we can test the connections.

Testing the network connections

From the head node, check that you can ping and ssh to each of the worker nodes. For ssh access you'll need to enter the password for the pi user on each worker node; we'll remove that password prompt in a minute.

ping pi1
PING pi1 ( 56(84) bytes of data.
64 bytes from pi1 ( icmp_req=1 ttl=64 time=2.04 ms
64 bytes from pi1 ( icmp_req=2 ttl=64 time=0.955 ms

ssh pi@pi1
pi@pi1's password:

Assuming the head node and worker nodes are now connected, we can configure passwordless access. The steps listed below are outlined in more detail at this link.

Configuring passwordless access from the head node to worker nodes

On the head node, run ssh-keygen and press Enter at each prompt to create a key with an empty passphrase. Then copy the file ~/.ssh/id_rsa.pub to ~/.ssh/authorized_keys on each worker node. One easy way to do this is from the head node, using scp (at this point you will still be prompted for the pi account's password on each worker node), with the following commands.

scp -p /home/pi/.ssh/id_rsa.pub pi1:/home/pi/.ssh/authorized_keys
scp -p /home/pi/.ssh/id_rsa.pub pi2:/home/pi/.ssh/authorized_keys
scp -p /home/pi/.ssh/id_rsa.pub pi3:/home/pi/.ssh/authorized_keys
scp -p /home/pi/.ssh/id_rsa.pub pi4:/home/pi/.ssh/authorized_keys

You should now be able to ssh directly to each of the worker nodes from the head node, without requiring a password. If you encounter problems, you may need to set the permissions of the copied file to 600 (read-write by owner only) on each worker node:

chmod 600 ~/.ssh/authorized_keys

Now check that you can ssh from the head node to each worker node (for example, ssh pi1) without being prompted for a password.

Executing commands on each worker in turn

The last thing we'll do is to implement a simple method for executing the same command on all of the worker nodes. On the head node, create script /usr/local/bin/worker.sh (using, for example, sudo vi /usr/local/bin/worker.sh) and after entering the script below, set the script to be executable using sudo chmod a+x /usr/local/bin/worker.sh. Note - if you've used different names for your worker nodes, replace the names pi1, ... pi4 with the names you used.


#!/bin/sh
# run the supplied command on each worker node in turn
HOSTS="pi1 pi2 pi3 pi4"
for HOSTNAME in $HOSTS; do
    echo executing command on $HOSTNAME
    ssh `whoami`@$HOSTNAME "$@"
done

Invoking worker.sh with no arguments will simply ssh into each worker in turn (enter the command exit to move on to the next worker). However, normally you would want to invoke worker.sh followed by a command, to run the command on each worker in turn. For example, to shut down all the workers, run the following command:

worker.sh sudo shutdown -hP now
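To sanity-check the host list without touching the workers, the same loop can be dry-run locally with ssh replaced by echo:

```shell
# Dry run of the worker.sh loop: prints what would be executed where,
# without opening any ssh connections.
HOSTS="pi1 pi2 pi3 pi4"
CMD="sudo shutdown -hP now"
for HOSTNAME in $HOSTS; do
    echo "would run on $HOSTNAME: $CMD"
done
```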

