K8s is saying goodbye to Docker… really!?

Chamsedine Mansouri
4 min read · Dec 20, 2020
Bye Bye Docker :(

Two weeks ago, one of the buzziest pieces of news in the Kubernetes world was announced: “K8s is deprecating Docker support starting from the new release v1.20.0”. That breakup announcement caused some misconceptions about what is going to happen to the orchestration platform landscape. Therefore, we (some MaibornWolff DC fellas) wanted to clear these up by explaining the whole change and its implications from a software engineering standpoint.

What is actually happening?

As part of the never-ending journey of introducing new features and fixing bugs in the Kubernetes project, the latest release (v1.20.0) announced that Docker is now deprecated. Eventually, Docker as a container runtime will no longer be supported in future releases (initially planned for v1.22). This means that only CRI-compliant container runtimes like containerd or CRI-O will be supported going forward.

We should highlight that we're talking here about a deprecation that targets Docker as a container runtime and its relationship with the kubelet.

The kubelet is an essential K8s worker node component, responsible for driving the container runtime to start workloads scheduled on the node and for monitoring their status.
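As a quick illustration, you can already check which runtime each node's kubelet is driving; the wide output of kubectl includes a CONTAINER-RUNTIME column (the node names and versions below are placeholders):

```bash
# List nodes together with the container runtime their kubelet uses
kubectl get nodes -o wide

# Example output (illustrative only):
# NAME     STATUS   ...   CONTAINER-RUNTIME
# node-1   Ready    ...   docker://19.3.14
# node-2   Ready    ...   containerd://1.4.3
```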

It's also worth noting that “CRI-compliant” means a runtime supports the Container Runtime Interface, a plugin interface that enables the kubelet to use a wide variety of container runtimes.
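To make that concrete, here is a sketch of how a kubelet of that era is pointed at a CRI runtime via a socket endpoint (the flags are the v1.20-era ones; where they are set and the socket path can vary per distribution):

```bash
# Sketch: kubelet flags selecting containerd as the CRI runtime
# (commonly set via KUBELET_EXTRA_ARGS or a systemd drop-in file)
KUBELET_EXTRA_ARGS="--container-runtime=remote \
  --container-runtime-endpoint=unix:///run/containerd/containerd.sock"
```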

Why did that happen?

Performance is a key topic in the K8s realm. In contrast to Docker, CRI-compliant runtimes like containerd deliver better performance, mainly optimized in terms of pod startup latency and daemon resource usage. That can be explained by the complexity of the two implementations. Containerd is already used under the hood by Docker Engine, but Docker bundles many other components around containerd.

So yes… Kubernetes's kubelet needs to communicate with this whole program and its components that K8s doesn't need, like the networking/storage/UX stuff, just to be able to use containerd as a container runtime. That means more resource usage and more latency. Additionally, Docker doesn't support the CRI, so an additional bridge service had to be implemented to translate the kubelet's CRI gRPC calls for Docker. That component is called “Dockershim”. Again, one more layer in the stack.
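You can roughly see this layering on a Docker-based worker node: the Docker daemon and its bundled containerd run as separate processes, while the dockershim bridge lives inside the kubelet itself. A rough sketch (process names may differ slightly across versions):

```bash
# On a Docker-based node, both daemons show up in the process list
ps -e -o comm= | grep -E 'dockerd|containerd'
# dockerd            <- Docker Engine (networking/storage/UX layers)
# containerd         <- the component actually starting containers
# containerd-shim    <- one shim process per running container

# The dockershim CRI bridge is compiled into the kubelet and listens on:
ls /var/run/dockershim.sock
```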

Dockershim, the bridge

To overcome the complexities of that model, the Kubernetes and containerd projects introduced a much more straightforward approach that enables those gRPC calls directly with the container runtime, by introducing a new CRI plugin in the containerd project. The result was eliminating multiple hops in the stack and keeping only the parts needed for managing containers on the platform. A simpler model for enhanced performance.
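One practical note: if you install containerd from Docker's package repositories, this CRI plugin is often disabled by default. A minimal sketch of enabling it (file paths assume a standard Linux package install):

```bash
# Check whether the CRI plugin is disabled in containerd's config
grep disabled_plugins /etc/containerd/config.toml
# disabled_plugins = ["cri"]   <- remove "cri" here to enable the plugin

# Or regenerate a full default config (CRI enabled) and restart the daemon
containerd config default | sudo tee /etc/containerd/config.toml
sudo systemctl restart containerd
```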

What is the impact on DEV/OPS end users?

This change will definitely have a variety of consequences depending on your role and interaction with the Docker-Kubernetes ecosystem. Software developers can literally chill, as they will be able to continue using Docker images and Dockerfiles like they always have. The images Docker produces are OCI (Open Container Initiative) compliant, which means Kubernetes will still be able to manage the full lifecycle of those containers. That's guaranteed.

Therefore, the community can still use Docker-built containers in their workflows, and they will continue to run great on the orchestration platform.
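In other words, the familiar developer loop stays exactly the same. A minimal sketch (the image name and registry are placeholders):

```bash
# Build and push an image with Docker, exactly as before
docker build -t registry.example.com/my-app:1.0 .
docker push registry.example.com/my-app:1.0

# Kubernetes pulls and runs the OCI image regardless of the node's runtime
kubectl create deployment my-app --image=registry.example.com/my-app:1.0
```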

However, if you're an administrator or someone who handles hands-on tasks around Kubernetes clusters, like ssh-ing into the nodes and collecting application/system logs using Docker CLI commands, this won't work in future Kubernetes releases. We recommend using an alternative CRI CLI tool like crictl. It's arguably the most adequate alternative for this transition. Please note, though, that this tool is meant for debugging purposes and won't serve as a replacement for the traditional Docker CLI capabilities.
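For reference, the crictl commands map fairly closely to the Docker CLI ones you would run on a node. A short sketch (the endpoint below assumes containerd, and the container IDs are placeholders):

```bash
# Tell crictl which CRI socket to talk to (or set it in /etc/crictl.yaml)
export CONTAINER_RUNTIME_ENDPOINT=unix:///run/containerd/containerd.sock

crictl ps                          # roughly: docker ps
crictl pods                        # list pod sandboxes (no Docker equivalent)
crictl logs <container-id>         # roughly: docker logs
crictl exec -it <container-id> sh  # roughly: docker exec
```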

Over the last weeks, Kubernetes service providers like Microsoft Azure and Google Cloud have migrated to containerd as their Kubernetes runtime. Thus, these vendors will roll out the runtime change as you upgrade to future Kubernetes versions. For instance, AKS clusters on v1.19 or greater use containerd as the default runtime. Azure provides this guideline on how to start previewing the change for AKS versions prior to 1.19.
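If you're on AKS, the switch simply arrives with a regular cluster upgrade. A hedged sketch (resource group, cluster name, and the exact version are placeholders):

```bash
# Upgrade an AKS cluster to a version where containerd is the default runtime
az aks upgrade --resource-group my-rg --name my-cluster --kubernetes-version 1.19.7

# Verify the runtime on the nodes afterwards
kubectl get nodes -o wide   # CONTAINER-RUNTIME should read containerd://...
```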

Moreover, if you're managing an on-premises K8s cluster, we suggest starting the installation of a CRI-compliant runtime like containerd. This official guide details the required steps for that.
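As a minimal sketch of what that looks like on a Debian/Ubuntu node (package names and socket paths may differ on your distro; the official guide remains the authoritative source):

```bash
# Install containerd and generate a default config with the CRI plugin enabled
sudo apt-get update && sudo apt-get install -y containerd
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
sudo systemctl restart containerd

# Then point the kubelet at the containerd socket (see the flags above)
sudo systemctl restart kubelet
```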

Things will be relatively different in the future when it comes to this Docker-Kubernetes interaction, giving room for issues to arise. One common limitation is using Docker-in-Docker (DinD) images, which many integrate into their CI workflows. Additionally, relying on the Docker Engine socket (/var/run/docker.sock) to collect logs and data will no longer be possible. Those limitations are being addressed by the community, and some workarounds have been shared, like this one.
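A quick way to spot workloads that will break is to audit your cluster for pods that mount the Docker socket directly. A simple sketch:

```bash
# Find pods that mount the Docker Engine socket directly
kubectl get pods --all-namespaces -o yaml | grep -n "docker.sock"

# Any hit points at a workload (DinD builds, log collectors, etc.) that
# needs a CRI-aware alternative before the runtime switch
```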

Ultimately, the whole community will navigate its way to a stable position soon, and starting to use containerd as your cluster's container runtime is the safest path.
