After you’ve got your initial Kubernetes cluster up and running, the next step is to decide what core cluster services you want to install. I’ll walk through what I’ve deployed in my DigitalOcean cluster. This post is Part 2 of my experience running a personal Kubernetes Cluster on DigitalOcean’s managed Kubernetes platform. If you missed part 1, you can find it here.
The core services I chose to deploy to my Kubernetes cluster are Bitnami’s Sealed Secrets Controller, Helm’s Tiller, and the Traefik ingress controller.
Below I’m going to walk through what each of these does in detail and some specifics about how I use each of them.
Kubernetes ships with a Secrets API that gives cluster users the hooks to protect credentials and other sensitive information. Secrets themselves, however, don’t provide any kind of encryption on the data provided to them. The Kubernetes default “Opaque” secrets are just bags of keys with base64-encoded values. A base64-encoded secret isn’t something you’d want to put into, say, GitHub. Cluster users essentially have two options for secure secret storage.
The first is to manage secrets totally apart from version control. The second option is to deploy tooling that handles the encryption and decryption of secrets into their base64-encoded counterparts for Kubernetes.
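To make the problem concrete, here’s a minimal sketch of a plain Opaque secret (the name and values are made up); the data values are only base64-encoded, which anyone can reverse:

apiVersion: v1
kind: Secret
metadata:
  name: my-secret-name
type: Opaque
data:
  KEY1: VkFMVUUx   # just base64("VALUE1"), not encrypted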
I chose to use Bitnami’s Sealed Secrets Controller.
When you deploy the controller to your cluster it generates a public and private key pair for encryption and decryption, respectively. It also defines a new resource type, sealedsecret, that can be used from kubectl and other tooling. A sealed secret is encrypted using the public key the controller generates. When you push a new sealedsecret resource to your cluster, the controller decrypts it using its private key and creates a corresponding Kubernetes secret that can be mounted into a Pod.
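For illustration, a sealedsecret resource looks roughly like the following; the name, namespace, and encrypted blobs here are placeholders rather than real kubeseal output:

apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: my-secret-name
  namespace: default
spec:
  encryptedData:
    KEY1: AgBy8hC...   # ciphertext only the controller's private key can decrypt
    KEY2: AgCt4kL...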
The YAML file for the sealed secret is safe to check into GitHub alongside your other Kubernetes resources because it can only be decrypted with the private key in your cluster. If it were to fall into the hands of an unsavory foe, it wouldn’t do them much good unless they are the NSA.
If you are deploying the Sealed Secrets Controller on a cluster with many users, I recommend using role-based access control to block individual users from interacting with the secret resource from kubectl. This will prevent users from being able to access the base64-encoded versions of secrets. It will also serve as a useful reminder that folks should be using sealedsecret in your production cluster (because they won’t be able to directly create the less secure secret object).
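As a rough sketch of that idea (the Role name and namespace are hypothetical), you could grant users access to sealedsecrets while simply never granting them the core secrets resource:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: sealedsecret-editor
  namespace: default
rules:
- apiGroups: ["bitnami.com"]
  resources: ["sealedsecrets"]
  verbs: ["get", "list", "watch", "create", "update", "delete"]
# No rule covers the core "secrets" resource, so subjects bound to this Role
# can't read or create plain secrets from kubectl.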
The repository for the Sealed Secrets Controller has great installation instructions. Just a few simple kubectl commands stand between you and the ability to create sealed secrets to your heart’s content.
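At the time of writing the install boiled down to applying the controller manifest from a release, something along these lines (check the repository for the current release name and exact URL):

$ kubectl apply -f https://github.com/bitnami-labs/sealed-secrets/releases/download/<release>/controller.yaml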
You’ll also want to install kubeseal on your local machine to create sealed secrets. You can download the latest binary for Linux and macOS from their releases page on GitHub or install it using Homebrew:
$ brew install kubeseal
Creating a sealed secret is a two-step process: first, use kubectl with the --dry-run flag to generate the secret definition without actually creating it in the cluster; then pipe that output to kubeseal to do the encryption, and output it to YAML so you can store it somewhere. (You could also output it to JSON, but meh.) In Bash that process looks like so:
kubectl create secret generic \
--dry-run \
my-secret-name \
--from-literal KEY1=VALUE1 \
--from-literal KEY2=VALUE2 \
-o json | kubeseal --format yaml
To make this a bit easier, I’ve written a shell script that lives on my path that I’ve named ezseal:
#!/bin/bash
set -e

# Generate (but don't apply) a secret definition with kubectl, then pipe it
# through kubeseal to produce a sealed secret in YAML.
kubectl create secret generic \
--dry-run \
"$@" \
-o json | kubeseal --format yaml
This allows me to get the same result by running:
$ ezseal my-secret --from-literal KEY1=VALUE1 --from-literal KEY2=VALUE2
The resulting YAML can be pushed as a sealedsecret resource to your cluster. Once the controller picks up that the sealed secret is there, a corresponding secret resource will appear in your cluster with the decrypted values.
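That last bit is just a couple of kubectl commands; the file and secret names below are placeholders:

$ kubectl apply -f my-secret-name-sealed.yaml
$ kubectl get secret my-secret-name   # created by the controller once it unseals the resource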
Helm is a popular way to distribute applications that can be deployed to a Kubernetes cluster. Tiller is the cluster-side component that the helm command line tool requires. To use helm, you need Tiller.
One option for deploying Tiller is to run it on your local machine and configure it to talk to a remote Kubernetes cluster. This option is viable because Tiller persists everything it needs in the cluster itself. It’s a perfectly fine option if you’d rather not run Tiller in the cluster. I decided to run Tiller in my cluster for little other reason than I wanted to.
Note: The default Tiller configuration does not enforce TLS on Tiller’s API endpoint. Any multi-user, production deployment of Kubernetes should take care to enable TLS on Tiller.
Helm’s documentation is pretty comprehensive. I won’t waste my effort talking about how to install Tiller and the helm client. I will, however, share a few things I’ve done for configuring my helm client locally.
Helm’s documentation walks you through setting up Tiller’s TLS using the raw openssl commands to create a Certificate Authority and sign the certificates. Each user should have their own client certificate to authenticate to Tiller, so whoever administers these keys can end up doing a lot of certificate signing.
You have a few options here to make this easier with out-of-the-box tools.
The first is to use mkcert. The mkcert utility was designed for local development certificates, but it also makes running a public key infrastructure for something like Tiller a lot easier because it can issue and sign certificates from a CA with a single command. If you’re going to have mostly one person managing certificates for a Tiller install, you might want to consider this option.
The second is to look at deploying something like Vault to manage the PKI for Tiller and other services. The deployment of Vault is not a small undertaking. However, at a certain scale it is a preferable way to manage these certificates.
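If you go the mkcert route, the flow is roughly the following sketch; the certificate name is a placeholder and it’s worth double-checking the flags against mkcert’s README:

$ mkcert -install          # create the local CA and trust it on this machine
$ mkcert -client helm-user # issue a CA-signed cert/key pair suitable for client auth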
The helm client does not default to using TLS. If you attempt to use helm against a Tiller setup that has TLS turned on without providing the --tls flag, you will get an opaque error message.
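Passing the flag explicitly looks something like this; the certificate paths assume Helm 2’s default $HELM_HOME layout under ~/.helm:

$ helm ls --tls \
--tls-ca-cert ~/.helm/ca.pem \
--tls-cert ~/.helm/cert.pem \
--tls-key ~/.helm/key.pem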
I fixed my forgetfulness by adding the following environment variable to my shell profile:
export HELM_TLS_ENABLE=true
Now my local helm client will default to using TLS when it connects to Tiller.
Kubernetes cluster admins can choose to deploy an Ingress Controller that’s capable of processing Kubernetes ingress definitions and routing incoming requests that match that definition to the proper backend. At the end of the day, an ingress controller is a reverse proxy with some wiring that permits it to update its configuration when new ingress definitions are added to Kubernetes.
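As a reminder of what an ingress controller actually consumes, a bare-bones ingress definition looks something like this (the host, service, and port are placeholders; older clusters used the extensions/v1beta1 apiVersion with a slightly different backend shape):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: blog
spec:
  rules:
  - host: blog.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: blog    # route matching requests to this Service
            port:
              number: 80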
My proxy of choice is Traefik. (Apparently, pronounced “traffic” because letters don’t mean anything.)
A few things about Traefik make it my go-to solution for an edge proxy.
I used the official Helm chart to deploy Traefik to my cluster. The README there does a good job of laying out the various options for the deployment.
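For reference, with Helm 2 the install was something along these lines; the release name, namespace, and values file reflect my setup rather than anything the chart requires:

$ helm install stable/traefik \
--name traefik \
--namespace kube-system \
--values traefik-values.yaml \
--tls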
My cluster is a pretty simple one. A full deployment at a multi-person organization could be much, much more complex. But for one dude running a blog that occasionally gets some readers, these cluster services work pretty well.