Building Docker Images with Kaniko on Kubernetes-based GitLab Runners
Building Docker images inside Docker containers has always been a pain. Between Docker-in-Docker (DinD) images, mounted sockets, volumes, TLS certificates, privileged containers, potential security risks, and tons of different guides (most of which I found no longer working), I spent hours fiddling around with this topic. It's far too complex, and even when it finally works, the outcome is not really satisfying.
Looking for alternatives, I stumbled upon kaniko, which looks promising and has the advantage that it requires neither privileged containers nor other bigger configuration stunts. It actually works pretty straightforwardly.
In this short guide I want to show how I use kaniko to build Docker images in GitLab pipelines with a self-hosted Kubernetes-based GitLab runner and push them into a private Docker registry. The source code is available on my GitLab instance.
For this example I am using the Kubernetes namespace gitlab-runner and the private Docker registry my-registry.example.com. Obviously, username has to be replaced with a valid username for the real private Docker registry and password with the corresponding password.
Kaniko automatically looks for Docker credentials under /kaniko/.docker/config.json. This file basically contains the URL of the private registry and the credentials in Basic Authentication form, i.e. the base64-encoded string username:password.
$ echo -n "username:password" | base64
dXNlcm5hbWU6cGFzc3dvcmQ=
The echo command on Debian automatically appends a line break, which would also be encoded, hence I am using the -n parameter to suppress it.
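To double-check the encoding, the string can simply be decoded again; this is just a quick sanity check, not part of the setup itself:

$ echo -n "dXNlcm5hbWU6cGFzc3dvcmQ=" | base64 -d
username:password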
Next I create a temporary file temp-auth.json containing the credentials for the Docker registry:
{
  "auths": {
    "my-registry.example.com": {
      "auth": "dXNlcm5hbWU6cGFzc3dvcmQ="
    }
  }
}
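The file can also be generated in one step; here is a small sketch using a heredoc with the same placeholder credentials as above:

$ cat > temp-auth.json <<EOF
{
  "auths": {
    "my-registry.example.com": {
      "auth": "$(echo -n "username:password" | base64)"
    }
  }
}
EOF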
It is possible to inject this JSON into a GitLab pipeline via the GitLab CI/CD environment variable DOCKER_AUTH_CONFIG, but this would have to be done for every project or group in which one wants to build Docker images, which is cumbersome; besides, storing credentials in GitLab environment variables is something I usually try to avoid. It is also possible to set this environment variable within the configuration of the runner, which avoids the first problem but introduces a new one: I check everything into Git, including the runner configuration, and checking plain-text credentials into Git is a no-go. Therefore, I am using Mozilla sops to encrypt all Kubernetes secrets in Git.
Hence, for my setup it's best to let the GitLab runner automatically mount the file /kaniko/.docker/config.json from a Kubernetes secret into every newly created runner pod.
For that I create a secret kaniko-secret like so:
$ kubectl create secret generic kaniko-secret \
    --from-file=config.json=temp-auth.json \
    --namespace gitlab-runner \
    --dry-run=client \
    -o yaml > kaniko-secret.yaml
The created file kaniko-secret.yaml contains the base64-encoded JSON string and some additional metadata:
apiVersion: v1
data:
  config.json: ewogICJhdXRocyI6IHsKICAgICJteS1yZWdpc3RyeS5leGFtcGxlLmNvbSI6IHsKICAgICAgImF1dGgiOiAiZFhObGNtNWhiV1U2Y0dGemMzZHZjbVE9IgogICAgfQogIH0KfQo=
kind: Secret
metadata:
  creationTimestamp: null
  name: kaniko-secret
  namespace: gitlab-runner
I am not deploying the secret directly into the Kubernetes cluster because I am using Flux, which follows a GitOps approach for provisioning resources in Kubernetes. Instead, I check the sops-encrypted YAML secret into the Flux Git repository and let Flux reconcile it in order to create the secret in Kubernetes. I really like the Flux GitOps approach and create most of my Kubernetes resources with it.
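Encrypting the manifest before committing it could look like the following; this is a sketch and assumes a .sops.yaml with a matching creation rule, while the regex limits encryption to the secret's data fields:

$ sops --encrypt --encrypted-regex '^(data|stringData)$' --in-place kaniko-secret.yaml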
Alternatively, the secret can be deployed directly into the cluster with:
$ kubectl create secret generic kaniko-secret \
    --from-file=config.json=temp-auth.json \
    --namespace gitlab-runner
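Either way, it is worth verifying that the secret ended up in the cluster as expected; the backslash in the jsonpath expression escapes the dot in the key name:

$ kubectl get secret kaniko-secret \
    --namespace gitlab-runner \
    -o jsonpath='{.data.config\.json}' | base64 -d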
Since I am using the official Helm chart for deploying the GitLab runner into Kubernetes, the next step is adjusting the chart values so that the runner automatically mounts the secret created above into new runner pods. I am using a Flux HelmRelease for the deployment (the full YAML is omitted here for simplicity):
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: gitlab-runner
  namespace: gitlab-runner
spec:
  chart:
    spec:
      chart: gitlab-runner
  values:
    concurrent: 5
    runners:
      executor: kubernetes
      config: |
        [[runners.kubernetes.volumes.secret]]
          name = "kaniko-secret"
          mount_path = "/kaniko/.docker"
Everything under values is passed on to the chart as configuration values, which means that if you are using the plain Helm chart without Flux, you can simply ignore everything above the values section. The [[runners.kubernetes.volumes.secret]] block mounts the secret as a volume: name refers to the name of the Kubernetes secret and mount_path declares where the secret will be mounted into the pod.
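With plain Helm and without Flux, the same values could go into a values.yaml and be applied like this (a sketch; the chart repository alias and release name are assumptions):

$ helm repo add gitlab https://charts.gitlab.io
$ helm upgrade --install gitlab-runner gitlab/gitlab-runner \
    --namespace gitlab-runner \
    -f values.yaml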
So far, so good. Finally, we need a pipeline to create the Docker image and push it into the private Docker registry. A very basic pipeline could look like this:
image: alpine:latest

stages:
  - build

build:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:v1.10.0-debug
    entrypoint: [""]
  script:
    - /kaniko/executor
      --context "${CI_PROJECT_DIR}"
      --dockerfile "${CI_PROJECT_DIR}/Dockerfile"
      --destination "my-registry.example.com/image:latest"
This example pipeline contains only a single build step, which uses the kaniko executor image gcr.io/kaniko-project/executor:v1.10.0-debug as base. The debug variant is used because it contains a shell, which GitLab CI needs to run the script commands, and the entrypoint has to be overwritten for the same reason. Then, in the script section, the kaniko executor is started with three parameters:
- --context, which specifies the base directory from where the build starts. In this case I am using the base directory of the Git repository itself, but there are plenty of other options available,
- --dockerfile, which specifies the path to the Dockerfile to be used for creating the image (a minimal example follows after this list), and
- --destination, which tells the executor where to push the image. In this example I push the image image:latest to the private registry my-registry.example.com.
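For completeness, a minimal Dockerfile the pipeline could build might look like this; it is purely illustrative and not part of the original setup:

FROM alpine:latest
# Install curl as an example build step
RUN apk add --no-cache curl
# Print the curl version when the container starts
CMD ["curl", "--version"]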
And that's it. The pipeline successfully builds the Docker image and pushes it as expected into the private registry for further use. There are loads of additional flags available for adjusting the behaviour of kaniko in many aspects, but for building a Docker image from the Git repository where the pipeline itself runs, this is sufficient.
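As a sketch of what such flags enable, the destination can be derived from GitLab's predefined CI variables and kaniko's layer cache can be turned on; the cache repository used here is an assumption:

script:
  - /kaniko/executor
    --context "${CI_PROJECT_DIR}"
    --dockerfile "${CI_PROJECT_DIR}/Dockerfile"
    --destination "my-registry.example.com/${CI_PROJECT_NAME}:${CI_COMMIT_SHORT_SHA}"
    --cache=true
    --cache-repo "my-registry.example.com/cache"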
Conclusion
Kaniko has made my life much easier because it greatly simplifies building Docker images inside Docker containers. No more DinD fiddling, no more privileged containers, and so on. There are some restrictions to keep in mind, though; for example, as of now it is not possible to build multi-arch images with it. Still, for the time being it is my tool of choice when it comes to building Docker images in pipelines - easily and without headache.