In the second post of the series I dive into the details of what is involved in setting up an ever-growing number of applications on a handful of physical servers in my Home Lab.
As I mentioned in the previous post, I use small Linux boards from Pine64 to host my services. They are ideal for this use-case, don't take up too much space, consume little power but are still powerful enough. Both the Pine64 and the Rock64 boards feature a 64-bit ARMv8 processor with the available memory ranging from 512 MB to 4 GB. My current setup is a bit mixed, having instances with 1, 2 and 4 GB of memory.
The beauty of this cluster is that you can keep adding new devices whenever you need to. Docker really doesn't care whether it's running on identical or even just similar servers.
My only gripe with them is the fairly long shipping time. They ship from China and it can take close to a month for them to arrive here in the UK. If this bothers you, have a look at the Raspberry Pi 3, which features a similar ARMv8 CPU and is more likely to be available from local distributors. It is also an awesome platform in the sub-$50 range, capable of running the official (32-bit only) Raspbian distro based on Debian, or you can find a 64-bit capable image from the community. The makers of HypriotOS are doing some pretty cool things with the device, and their OS is optimized for Docker usage.
If you do decide to go with Raspberry Pis, make sure to check out Alex Ellis' blog! He has loads of educational content and tutorials on it and on Docker.
Their available memory maxes out at 1 GB though and I needed something that can comfortably run more memory-hungry applications, like Elasticsearch on the JVM.
The Pine64 community has a wide range of Linux flavors available for the boards. I needed something that plays nicely with Docker, which is Debian or Ubuntu for this CPU architecture. The Armbian guys host great Ubuntu images for lots of ARM boards. The older Pine64 has a stable one with a fairly recent kernel version. The Rock64 only has a testing build with a slightly older kernel but it seems to work just as well for me so far.
You'll need a micro SD card for these devices. I use 32 GB class 10 SanDisk ones but honestly, you could probably get away with 16 or even 8 GB storage to get started. Download the Armbian image from their site, then write it to the SD card.
$ ls -lh
-rw-r--r-- 1 xyz xyz 223M Dec 30 13:16 Armbian_5.34.171121_Rock64_Ubuntu_xenial_default_4.4.77.7z
$ 7z x Armbian_5.34.171121_Rock64_Ubuntu_xenial_default_4.4.77.7z
$ sudo cp Armbian_5.34.171121_Rock64_Ubuntu_xenial_default_4.4.77.img /dev/mmcblk0
It is really as easy as this: unpack the image file and copy it over to the SD card device. YMMV; make sure you write it to the right device and not your laptop's hard drive by accident. The SD card also needs to be unmounted during the write. I usually run a sudo sync afterwards as well to flush the write buffers, though I'm not 100% sure it's necessary.
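Putting it together, the whole write looks something like this (assuming the card shows up as /dev/mmcblk0 like above; double-check the device name with lsblk first):

$ lsblk                          # find the SD card device, e.g. mmcblk0
$ sudo umount /dev/mmcblk0p*     # unmount any auto-mounted partitions first
$ sudo cp Armbian_5.34.171121_Rock64_Ubuntu_xenial_default_4.4.77.img /dev/mmcblk0
$ sudo sync                      # flush the write buffers before removing the card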
It may take a few minutes to finish. Once it's done, insert the card in the board, connect it to the network and give it power. The boot can take around 30-60 seconds. The Armbian installations come with SSH access by default and a predefined root account; you can get the password for it on the download pages. I usually check my router's attached devices list to find the IP address of the newly started device, then just log in and do the initial setup over SSH.
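If you'd rather not dig through the router's UI, a quick ping scan of your subnet finds it too; the subnet and address below are just examples from my network, and nmap needs to be installed on your machine:

$ nmap -sn 192.168.1.0/24     # ping scan to spot the new board on the network
$ ssh root@192.168.1.50       # log in with the default root password from the Armbian download page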
You'll need to change the root password on the first login and should create a new user as well to use for further sessions. Log out and log back in with the user you just created. After this is done, make sure the timezone settings are correct on the box; some applications are a bit fussy about it when running in a cluster. Portainer, for example, uses JWT authentication which needs the time to be in sync.
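As a rough sketch, the first-login housekeeping boils down to something like this; the username and timezone are placeholders, substitute your own:

# as root, after changing the root password
$ adduser youruser                              # create your day-to-day user
$ usermod -aG sudo youruser                     # allow it to use sudo
# log back in as the new user, then sort out the clock
$ sudo timedatectl set-timezone Europe/London   # pick your own timezone here
$ timedatectl                                   # verify the time and timezone look right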
Now that this is done, we can go ahead and install Docker. As I mentioned, the Armbian images are based on Ubuntu which is supported by Docker on the 64-bit ARMv8 platform. This means the installation is done with a one-liner:
$ curl -fsSL get.docker.com | sudo sh
That's it! You may get some warnings and a lot of output about the installation progress, but once finished, your Docker engine is running on the box. If you've followed the setup so far, then you're using your new user account, which doesn't have permission to interact with the Docker daemon yet.
$ docker version
Client:
 Version:      17.10.0-ce
 API version:  1.33
 Go version:   go1.8.3
 Git commit:   f4ffd25
 Built:        Tue Oct 17 19:02:43 2017
 OS/Arch:      linux/amd64
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.33/version: dial unix /var/run/docker.sock: connect: permission denied
To resolve it, just add your user to the docker group:
$ sudo usermod -aG docker $USER
$ docker version
Client:
 Version:      17.10.0-ce
 API version:  1.33
 Go version:   go1.8.3
 Git commit:   f4ffd25
 Built:        Tue Oct 17 19:02:43 2017
 OS/Arch:      linux/amd64

Server:
 Version:      17.10.0-ce
 API version:  1.33 (minimum version 1.12)
 Go version:   go1.8.3
 Git commit:   f4ffd25
 Built:        Tue Oct 17 19:01:22 2017
 OS/Arch:      linux/amd64
 Experimental: false
Now you can invoke docker commands with your own user (you may need to log out and back in first for the new group membership to take effect). How simple was that? If you want to test that it works OK, you can check it quickly:
$ docker run --rm -it alpine echo 'Hello world!'
If all is well, then you should see it pulling the alpine image from the Docker Hub and printing the Hello world! text. I should mention that we didn't have to specify the CPU architecture. Recent official images are now (mostly) multi-arch images, so the exact same command would work on other platforms and CPU architectures as well.
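If you're curious which variant you ended up with, you can inspect the image you just pulled; on one of these boards it should report a 64-bit ARM architecture, something like this:

$ docker image inspect alpine --format '{{.Os}}/{{.Architecture}}'
linux/arm64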
If you have a single box at this point and just want to run a couple of services on it, then
docker-compose might be the easiest option to get started with.
From experience, the move from Docker Compose to Swarm wasn't as straightforward as I hoped it would be, so if you're planning to expand your home lab, then keep an eye out for the next post explaining the Swarm setup.
You'll need to install the executable first:
$ sudo apt-get update
$ sudo apt-get install docker-compose
Once finished, quickly check that it works.
$ docker-compose version
docker-compose version 1.8.0, build unknown
docker-py version: 1.9.0
CPython version: 2.7.13
OpenSSL version: OpenSSL 1.1.0f  25 May 2017
Great! To configure the services in your local cluster, define them in a Composefile like this:
version: '2'
services:
  web:
    image: nginx
    restart: always
    ports:
      - 80:80
    volumes:
      - ./www:/usr/share/nginx/html
  internal:
    image: httpd
    restart: always
    ports:
      - 8080:80
    volumes:
      - ./internal-www:/usr/local/apache2/htdocs
This example creates two services. The
web service is an Nginx instance listening on port
80 for HTTP requests. The
internal service is an Apache httpd instance listening on port
8080 from outside the container - internally it also listens on port 80, but this won't cause any issues when running with Docker. You can have as many applications listening on the same container port as you want; they just have to bind to different external ports.
It's time to try them out! Let's save the YAML content above as
docker-compose.yml, create an index HTML file for each of them and check if they serve them up OK.
$ docker-compose up -d
Creating network "sample_default" with the default driver
Creating sample_internal_1
Creating sample_web_1
$ docker-compose ps
       Name                Command          State          Ports
-----------------------------------------------------------------------
sample_internal_1   httpd-foreground        Up      0.0.0.0:8080->80/tcp
sample_web_1        nginx -g daemon off;    Up      0.0.0.0:80->80/tcp
$ ls
docker-compose.yml  internal-www  www
$ echo 'Hello!' | sudo tee www/index.html
Hello!
$ echo 'Secret hello!' | sudo tee internal-www/index.html
Secret hello!
$ curl -s http://localhost/
Hello!
$ curl -s http://localhost:8080/
Secret hello!
The docker-compose up -d command will start both services, pulling the images first if you don't already have them. We can check that they started OK with the
docker-compose ps command. As you can see above, they are listening on the ports we defined and they have created their respective docroot folders. We then created two simple plain-text files with
echo, both of them called
index.html, and tested that they get served back with curl. If you want, you can also check them in your browser; just replace
localhost with the IP address of your server.
The sudo is necessary if you are using your own user in the terminal. The directories created by the containers will be owned by root, the user the processes inside the containers run as.
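You can verify this with a quick listing; the dates will obviously differ, but the owner should show up as root:

$ ls -ld www internal-www
drwxr-xr-x 2 root root 4096 Jan  5 21:24 internal-www
drwxr-xr-x 2 root root 4096 Jan  5 21:24 www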
The restart: always lines in the Composefile make sure that Docker restarts the containers, should they exit for whatever reason. You can see what they have logged so far as well:
$ docker-compose logs
Attaching to sample_web_1, sample_internal_1
web_1       | 172.22.0.1 - - [05/Jan/2018:21:25:07 +0000] "GET / HTTP/1.1" 200 7 "-" "curl/7.52.1" "-"
internal_1  | AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 172.22.0.2. Set the 'ServerName' directive globally to suppress this message
internal_1  | AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 172.22.0.2. Set the 'ServerName' directive globally to suppress this message
internal_1  | [Fri Jan 05 21:24:51.008938 2018] [mpm_event:notice] [pid 1:tid 139822354126720] AH00489: Apache/2.4.29 (Unix) configured -- resuming normal operations
internal_1  | [Fri Jan 05 21:24:51.009102 2018] [core:notice] [pid 1:tid 139822354126720] AH00094: Command line: 'httpd -D FOREGROUND'
internal_1  | 172.22.0.1 - - [05/Jan/2018:21:25:10 +0000] "GET / HTTP/1.1" 200 14
You can add -f to this command to follow the logs. Just press Ctrl+C to exit when you're done.
If you've had enough fun with these, you can easily stop and delete them with a simple command.
$ docker-compose down
Stopping sample_web_1 ... done
Stopping sample_internal_1 ... done
Removing sample_web_1 ... done
Removing sample_internal_1 ... done
Removing network sample_default
Use docker-compose stop instead if you only want to stop the containers without removing them.
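For example, to stop everything in place and bring the same containers back later:

$ docker-compose stop     # the containers are kept around, just stopped
$ docker-compose start    # start the existing containers again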
This was fun, I hope, but we can do much better than manually editing files over an SSH session and executing commands on the server. You could commit your Composefile into a Git repository and set up basic automation to update the services automatically whenever something changes in it. If your stack is not private and doesn't contain any sensitive data, you could use GitHub. Alternatively, sign up for BitBucket, where you can have public and private repositories as well. Once your
docker-compose.yml file is in the cloud, install
git on the servers and clone the repository into a folder. You can now easily commit changes to the file and update the containers on the box:
$ cd /to/your/cloned/folder
$ git pull
$ docker-compose pull
$ docker-compose up -d
git pull will get the changes from your Git repository, then
docker-compose pull will make sure that you have all the Docker images locally that are defined in the (possibly) updated YAML file. The final
docker-compose up -d command will start any new services, recreate the ones whose configuration has changed, and leave the rest alone. Now you can use your favorite desktop text editor instead of relying on terminal editors like nano if you don't like those. On that note though, they are pretty awesome; you should definitely look into them!
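From your desktop, the whole update flow then becomes edit, commit, push; the repository URL below is only a placeholder for wherever you host yours:

$ git clone git@github.com:yourname/homelab.git    # one-off checkout on your desktop
$ cd homelab
$ nano docker-compose.yml                          # or whatever editor you prefer
$ git commit -am 'Switch the web service to a newer nginx tag'
$ git push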
Having this set up, there's nothing stopping us from completely automating the updates after any change in the configuration. We can use cron to do what we've just done every now and then. I used this approach initially and it worked OK for me while I only had a handful of services. Let's say we want to check for changes every 15 minutes. Just edit your
cron schedule with
crontab -e and add a line like this:
*/15 * * * * cd /to/your/cloned/folder && git pull && docker-compose pull && docker-compose up -d
For bonus points, you could wrap this in a Bash script and just invoke that from cron.
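A minimal wrapper could look something like this; the script path and name are hypothetical, adjust them to your own setup:

#!/usr/bin/env bash
# /home/youruser/update-stack.sh - pull config changes and refresh the containers
set -euo pipefail

cd /to/your/cloned/folder
git pull
docker-compose pull
docker-compose up -d

The crontab entry then shrinks to */15 * * * * /home/youruser/update-stack.sh.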
Congratulations! You now have a set of Docker containers running, which can be updated and configured by editing a simple YAML file and doing a git push.
This solution is OK for running a few applications but doesn't scale very well. It's also not super reliable: with everything on a single server, if that goes down, nothing is accessible anymore.
In the next part of the Home Lab series I'll show a way to build on what we have here and expand the setup to multiple servers. We will keep the easy configuration and the continuous deployments but we'll hopefully remove the single point of failure.
Check out the other posts in the series: