Building Docker Images with Kaniko on Kubernetes-based Gitlab Runners

Building Docker images inside Docker containers has always been a pain. Dealing with Docker-in-Docker (DinD) images, mounted sockets and volumes, TLS certificates, privileged containers, tons of different guides (most of which I found no longer work), and the potential security risks, I spent hours fiddling around with this topic. It is far too complex, and even when it finally works, the outcome is not really satisfying.

Looking for alternatives, I stumbled upon kaniko, which looks promising and has the advantage that it requires neither privileged containers nor other bigger configuration stunts. It actually works pretty straightforwardly.

In this short guide I want to show how I use kaniko to build Docker images in GitLab pipelines with a self-hosted, Kubernetes-based GitLab runner and push them into a private Docker registry. The source code is available on my GitLab instance. For this example I am using the Kubernetes namespace gitlab-runner and the private Docker registry my-registry.example.com. Obviously, username has to be replaced with a valid username of the real private Docker registry and password with the corresponding password.

Kaniko automatically looks for Docker credentials under /kaniko/.docker/config.json. This file basically contains the URL of the private registry and the credentials in HTTP Basic Authentication form, i.e. the base64-encoded string username:password.

$ echo -n "username:password" | base64

The echo command on Debian automatically appends a line break which would also be encoded, hence I am using the -n parameter to suppress it. Next, I create a temporary file temp-auth.json containing the credentials for the Docker registry:
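The effect of -n is easy to verify in a shell. The following sketch encodes the placeholder credentials (using printf, which behaves like echo -n but is more portable) and decodes them again to confirm that no newline sneaked into the encoded value:

```shell
# Encode the credentials without a trailing newline.
AUTH=$(printf '%s' "username:password" | base64)
echo "$AUTH"    # dXNlcm5hbWU6cGFzc3dvcmQ=

# Decoding restores the original string, proving nothing extra was encoded.
DECODED=$(printf '%s' "$AUTH" | base64 -d)
echo "$DECODED" # username:password
```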

{
  "auths": {
    "my-registry.example.com": {
      "auth": "dXNlcm5hbWU6cGFzc3dvcmQ="
    }
  }
}
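For convenience, the whole file can also be generated in one go. This sketch assumes the example registry my-registry.example.com and the placeholder credentials from above:

```shell
# Build temp-auth.json with the encoded credentials inlined.
REGISTRY="my-registry.example.com"
AUTH=$(printf '%s' "username:password" | base64)

cat > temp-auth.json <<EOF
{
  "auths": {
    "${REGISTRY}": {
      "auth": "${AUTH}"
    }
  }
}
EOF

cat temp-auth.json
```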

It is possible to inject this JSON into a GitLab pipeline via the GitLab CI/CD environment variable DOCKER_AUTH_CONFIG, but this would have to be done for every project or group in which one wants to build Docker images, which is cumbersome; besides, storing credentials in GitLab environment variables is something I usually try to avoid. It is also possible to set this environment variable within the configuration of the runner, which would avoid the first problem but introduces a new one: I am checking everything into Git, including the runner configuration, and checking plain-text credentials into Git is a no-go. Therefore, I am using Mozilla sops to encrypt all Kubernetes secrets in Git. Hence, for my setup it is best to let the GitLab runner automatically mount the file /kaniko/.docker/config.json from a Kubernetes secret into every newly created runner pod.

For that I create a secret kaniko-secret like so:

$ kubectl create secret generic kaniko-secret \
    --from-file=config.json=temp-auth.json \
    --namespace gitlab-runner \
    --dry-run=client \
    -o yaml > kaniko-secret.yaml

The created file kaniko-secret.yaml contains the base64 encoded JSON string and additional metadata:

apiVersion: v1
data:
  config.json: ewogICJhdXRocyI6IHsKICAgICJteS1yZWdpc3RyeS5leGFtcGxlLmNvbSI6IHsKICAgICAgImF1dGgiOiAiZFhObGNtNWhiV1U2Y0dGemMzZHZjbVE9IgogICAgfQogIH0KfQo=
kind: Secret
metadata:
  creationTimestamp: null
  name: kaniko-secret
  namespace: gitlab-runner
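To double-check what ended up in the secret, the base64 value of the config.json data field can simply be decoded again; it yields the content of temp-auth.json:

```shell
# Decoding the data field restores the original config.json payload.
printf '%s' 'ewogICJhdXRocyI6IHsKICAgICJteS1yZWdpc3RyeS5leGFtcGxlLmNvbSI6IHsKICAgICAgImF1dGgiOiAiZFhObGNtNWhiV1U2Y0dGemMzZHZjbVE9IgogICAgfQogIH0KfQo=' | base64 -d
```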

I am not deploying the secret directly into the Kubernetes cluster because I am using Flux, which follows a GitOps approach for provisioning resources in Kubernetes. Instead, I check the sops-encrypted YAML secret into the Flux Git repository and let Flux do a reconciliation in order to create the secret in Kubernetes. I really like the Flux GitOps approach and create most of my Kubernetes resources with it.
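As an illustration of the sops side, a minimal .sops.yaml creation rule could restrict encryption to the sensitive fields of Kubernetes secrets. This is only a sketch; the path regex is specific to this example and the age recipient is a placeholder, not a real key:

```yaml
# Hypothetical .sops.yaml creation rule; replace the age recipient
# with your own public key.
creation_rules:
  - path_regex: .*kaniko-secret\.yaml
    encrypted_regex: ^(data|stringData)$
    age: age1...your-public-key...
```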

Alternatively the secret can be deployed directly into the cluster with

$ kubectl create secret generic kaniko-secret \
    --from-file=config.json=temp-auth.json \
    --namespace gitlab-runner

Since I am using the official Helm chart for deploying the GitLab runner into Kubernetes, the next step is adjusting the values rendered into the chart so that the runner automatically mounts the secret created above into new runner pods. I am using a Flux HelmRelease for the deployment (full YAML omitted here for simplicity):

apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: gitlab-runner
  namespace: gitlab-runner
spec:
  chart:
    spec:
      chart: gitlab-runner
  values:
    concurrent: 5
    runners:
      executor: kubernetes
      config: |
        [[runners.kubernetes.volumes.secret]]
          name = "kaniko-secret"
          mount_path = "/kaniko/.docker"

Everything under values is passed on to the chart as configuration values, which means that if you are using the plain Helm chart without Flux, you can simply ignore everything above the values section. The [[runners.kubernetes.volumes.secret]] part mounts the secret as a volume: name refers to the name of the Kubernetes secret and mount_path declares where the secret will be mounted into the pod.
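For reference, the same volume mount in a standalone runner config.toml (i.e. without Helm or Flux) would look roughly like this sketch, assuming the gitlab-runner namespace from this example:

```toml
# Sketch of the equivalent section in a plain config.toml.
[[runners]]
  executor = "kubernetes"
  [runners.kubernetes]
    namespace = "gitlab-runner"
    [[runners.kubernetes.volumes.secret]]
      name = "kaniko-secret"
      mount_path = "/kaniko/.docker"
```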

So far, so good. Finally, we need a pipeline to create the Docker image and push it into the private Docker registry. A very basic pipeline could look like this:

image: alpine:latest

stages:
  - build

build:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    - /kaniko/executor
      --context "${CI_PROJECT_DIR}"
      --dockerfile "${CI_PROJECT_DIR}/Dockerfile"
      --destination "my-registry.example.com/image:latest"

This example pipeline contains only a single build step, which uses the kaniko executor image as its base. The entrypoint has to be overwritten with an empty string in order for the job to work correctly. Then, in the script section, the kaniko executor is started with three parameters:

  • --context, which specifies the base directory from where the build starts. In this case I am using the base directory of the Git repository itself, but there are plenty of other options available,
  • --dockerfile, which specifies the path to the Dockerfile to be used for creating the image,
  • and --destination, which tells the executor where to push the image. In this example I push the image image:latest to the private registry my-registry.example.com.

And that's it. The pipeline successfully builds the Docker image and pushes it into the private registry for further use, as expected. There are loads of additional flags available for adjusting kaniko's behaviour in many aspects, but for building a Docker image from the Git repository in which the pipeline itself runs, this is sufficient.
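If unique tags per commit are wanted, a variant of the script section could use GitLab's predefined variables and kaniko's layer cache. CI_PROJECT_NAME and CI_COMMIT_SHORT_SHA are standard predefined GitLab CI variables; the registry URL is the example one from above:

```yaml
  script:
    - /kaniko/executor
      --context "${CI_PROJECT_DIR}"
      --dockerfile "${CI_PROJECT_DIR}/Dockerfile"
      --destination "my-registry.example.com/${CI_PROJECT_NAME}:${CI_COMMIT_SHORT_SHA}"
      --cache=true
```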


Kaniko has made my life much easier because it greatly simplifies building Docker images inside Docker containers. No more DinD fiddling, no more privileged containers, and so on. There are some restrictions to keep in mind when using it, e.g. as of now it is not possible to build multi-arch images. Still, for the time being it is my tool of choice when it comes to building Docker images in pipelines - easily and without headache.

Feedback Welcome

I am always happy to receive comments, corrections, improvements or feedback in general! Please drop me a note on Mastodon or by E-Mail anytime.