draft | title | date |
---|---|---|
true | Encrypting a Docker API for Remote Access Using Portainer | 2021-05-16 |
tl;dr This script has everything you need. Just run
./docker-tcp.sh -h
(after making it executable) for help.
Introduction
To manage my little army of servers, I use Portainer CE. It's an open-source management tool for controlling Dockerized applications across multiple hosts. It can handle regular Docker containers, Compose stacks, Kubernetes clusters and Docker Swarm mode. It's a really useful tool to keep track of everything, and nowadays I really can't do without it.
Before we can add a host to Portainer, its Docker API has to be exposed over the network, and in order to do this safely, we need to protect it using encryption (unless of course you like random people controlling your server). This post will explain how this can be done, and I've also written a script that automates the "heavy" lifting.
Note: This tutorial is only for Linux. I have no experience with managing a Windows server and therefore can't confirm these steps will also work on a Windows machine.
I recommend running these commands on your local Linux machine and copying the certificates to the server afterwards, as you'll need all the files to add the host to Portainer later.
Server-side
To make the connection as secure as possible, we'll use both a server- & a client-side certificate. This first section describes how to generate the former:
openssl genrsa -aes256 -out ca-key.pem 4096
openssl req -new -x509 -days 365 -key ca-key.pem -sha256 -out ca.pem
These first two commands generate the CA key and the CA certificate. You'll be asked for some basic information, e.g. your country, state, city, organization, etc. The most important one is the password. Keep this one safe, as you'll be asked for it later when signing the server and client certificates.
One thing to note here is the -days 365 flag. This defines after how many days this certificate will expire (but only when the -x509 flag is specified). By default, its value is set at 30 days, but I find this to be rather short. After this time, you'll have to repeat these steps and generate a new certificate. You'll have to figure out for yourself how long you'd like your certificate to be valid for.
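If you're ever unsure how long a certificate is still valid for, openssl can print its validity window. This is just a quick sanity check, assuming the ca.pem generated above is in your current directory:
openssl x509 -noout -dates -in ca.pem
This prints the notBefore and notAfter dates of the certificate.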
Now we can generate the server key:
openssl genrsa -out server-key.pem 4096
openssl req -subj "/CN=<HOST>" -sha256 -new -key server-key.pem -out server.csr
In the above snippet, replace <HOST> with the hostname of the machine whose API you want to expose. By hostname, I mean the domain from which your server is accessible, e.g. server.example.com. We've now created server-key.pem and server.csr.
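If you want to make sure the CSR contains the right hostname, you can optionally inspect its subject; this only assumes the server.csr created above:
openssl req -noout -subject -in server.csr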
After this, we need to create a file named extfile.cnf with the following content:
subjectAltName = DNS:<HOST>,IP:<IP>,IP:127.0.0.1
extendedKeyUsage = serverAuth
Here, we once again replace <HOST> with the machine's domain name, and <IP> with the machine's public IP.
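As a made-up example, with server.example.com as the hostname and 203.0.113.10 as the public IP, extfile.cnf would look like this:
subjectAltName = DNS:server.example.com,IP:203.0.113.10,IP:127.0.0.1
extendedKeyUsage = serverAuth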
This file can now be used to generate the actual signed certificate:
openssl x509 -req -days 365 -sha256 -in server.csr -CA ca.pem \
-CAkey ca-key.pem -CAcreateserial -out server-cert.pem -extfile extfile.cnf
Here, we can once again change the days argument to the value we want. After all these steps, we're left with a signed server-side certificate.
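If you'd like to double-check the result, you can verify that the new certificate was indeed signed by your CA (assuming ca.pem and server-cert.pem are in the current directory):
openssl verify -CAfile ca.pem server-cert.pem
This should print server-cert.pem: OK.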
Client-side
Now we'll generate the client-side certificates. We start by creating a CSR file:
openssl genrsa -out key.pem 4096
openssl req -subj '/CN=client' -new -key key.pem -out client.csr
After this, we create another .cnf file, this time to configure the client-side keys. Add this to a file named extfile-client.cnf:
extendedKeyUsage = clientAuth
And then, we generate the client-side certificate:
openssl x509 -req -days 365 -sha256 -in client.csr -CA ca.pem -CAkey \
ca-key.pem -CAcreateserial -out cert.pem -extfile extfile-client.cnf
Once again change the days value to whatever you want. Now we're left with all the files we need to securely expose the API.
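Optionally, you can tighten the file permissions at this point, so the private keys are only readable by you and the certificates can't be accidentally overwritten. This is a common follow-up step, not something Portainer requires:
chmod 0400 ca-key.pem key.pem server-key.pem
chmod 0444 ca.pem server-cert.pem cert.pem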
Exposing the API
Note: the following steps will restart the Docker engine and all running containers, so make sure this won't break anything.
Start by creating a directory on the host that you're not going to delete. In the following steps, replace <DIR> with the absolute path to this directory. After this, copy ca.pem, server-cert.pem and server-key.pem to this directory.
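If you generated the files on your local machine as recommended, something like scp can get them onto the server. This is just a sketch; user@server.example.com stands in for your own SSH login, and <DIR> is the directory you just created:
scp ca.pem server-cert.pem server-key.pem user@server.example.com:<DIR>/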
We're going to create a systemd drop-in file for the Docker service (this guide assumes the use of systemd). In /etc/systemd/system/docker.service.d/startup_options.conf, put the following:
[Service]
ExecStart=
ExecStart=/usr/sbin/dockerd --tlsverify --tlscacert='<DIR>/ca.pem' --tlscert='<DIR>/server-cert.pem' --tlskey='<DIR>/server-key.pem' -H fd:// -H tcp://0.0.0.0:2376
Don't forget to replace <DIR> with the path to your actual directory.
The final step is restarting the Docker engine:
systemctl daemon-reload
systemctl restart docker.service
Note: these commands require root.
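Before heading over to Portainer, you can optionally test the connection from the machine holding the client certificates. This sketch assumes ca.pem, cert.pem and key.pem are in your current directory and <HOST> is your server's domain:
docker --tlsverify \
  --tlscacert=ca.pem \
  --tlscert=cert.pem \
  --tlskey=key.pem \
  -H tcp://<HOST>:2376 version
If the TLS setup is correct, this prints version information for both your local client and the remote daemon.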
After all this, you should have a Docker API that's accessible using an encrypted connection. Let's test it by adding it to Portainer!
Adding engine to Portainer
Thankfully this is the easy part. In Portainer, add a new endpoint and choose the "Docker" type. Pick a name for your endpoint, fill in the endpoint URL including the port number (Docker's default TLS port is 2376) and enable the "TLS" switch. We choose "TLS with server and client verification", as this is the safest option. The files to upload are ca.pem for the TLS CA certificate, cert.pem for the TLS certificate and key.pem for the TLS key. If all goes well, you should now be able to connect to the host!
Now, I know these steps can be quite tedious to repeat, so I've written a script that can automate this process for you.