Helm vs. Operators… Still a better love story than Twilight

Chamsedine Mansouri
Jun 13, 2020

As part of our never-ending obsession with the coolest of cloud native tools, we recently had a brief conversation about Operators and Helm… and about how many people think these two technologies are locked in an epic rivalry.

First things first: Helm and Operators do not share the same aim. Each solution targets a different use case and was designed for a specific goal. Helm is a package manager for Kubernetes applications, meaning that Helm defines a structured way of creating all the resources (Deployment, ConfigMap, Secret, Service, etc.) of an application that is meant to be deployed on the orchestration platform. The configuration of an application's chart is done via the values.yaml file. Frankly, Helm's intricacies need a deep-dive session that goes way beyond the scope of this post, and we can only recommend jumping to the official Helm documentation for such a learning journey.
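
To make this concrete, here is a minimal sketch of installing a chart with a value override (the chart is real, but the release name and the value key are illustrative and depend on the chart version you pull):

$ helm repo add bitnami https://charts.bitnami.com/bitnami

$ helm install my-release bitnami/mongodb --set replicaCount=2

Every knob exposed in the chart's values.yaml can be overridden this way, or through a custom values file passed with -f.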

Operators, on the other hand, focus on a completely different purpose: creating, configuring, and managing (watching and reconciling the state of) different instances of a complex stateful/stateless app in Kubernetes. For a more comprehensive understanding of Operators, please read this brilliant article written by Brandon Philips, the former CTO and co-founder of the great CoreOS.
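
In practice, the desired state is expressed declaratively in a Custom Resource that the Operator watches; a purely illustrative sketch (the kind and fields here are hypothetical):

apiVersion: example.com/v1
kind: MongoDB
metadata:
  name: my-database
spec:
  replicas: 3

The Operator's control loop then continuously compares this declared spec against what actually runs in the cluster and reconciles any drift.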

The complexities of the "Love & Hate" relationship destined for this pair of tools come essentially from the installation/configuration capabilities that both can provide for an application. To avoid any conceptual overlap, we should highlight that an Operator manages a set of instances in a seamless way, reconciling their state according to the specs of their CRs, and Helm can't do that. But how about combining the packaging flexibility that Helm offers with the lifecycle management of Operators? This can be a brilliant SRE engineering model when it comes to managing multiple instances of an application derived from a Helm chart… and that's exactly why the team behind the Operator Framework conceived Helm as one of the three approaches to creating Operators, alongside the Ansible and Go SDKs.

The following section will focus on creating a Helm Operator to demonstrate the kind of challenges this duo can overcome. Doing this will require the installation of operator-sdk, docker, kubectl and, of course, a K8s cluster.
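
If you don't have the SDK installed yet, one convenient route on macOS/Linux is Homebrew (a release binary from the project's GitHub releases page works too; we assume a pre-1.0 SDK throughout, as the commands below use the v0.x CLI layout):

$ brew install operator-sdk

The Bitnami repository we added earlier also needs to be known to the local Helm client, since the SDK will fetch the chart from it.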

Together, we will create a MongoDB Helm Operator based on the Bitnami MongoDB Helm chart:

$ operator-sdk new mongodb-helm-operator --api-version=example.com/v1 --kind=MongoDB --type=helm --helm-chart=bitnami/mongodb
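
The scaffolded project will look roughly like this (exact file names vary slightly across SDK versions):

mongodb-helm-operator/
├── build/
├── deploy/
│   ├── crds/
│   ├── operator.yaml
│   ├── role.yaml
│   ├── role_binding.yaml
│   └── service_account.yaml
├── helm-charts/
│   └── mongodb/
└── watches.yaml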

The SDK created all the resources needed to run an instance of the MongoDB chart. The values.yaml data is used to populate the CR of our Operator. Of course, we can (and should) change and customize the instance details (database name, user/password, replicas, etc.) from the CR spec. It's also worth mentioning that the generated CRD can and should be extended/modified when dealing with real-world lifecycle management logic. Many teams have put a lot of effort into creating powerful Helm Operators, and the MongoDB Enterprise Helm Operator is a great example. Here we're dealing with a simple scenario, so we're gonna stick with the generated CRD.
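
As a sketch, a customized CR could look like this (the spec keys mirror the chart's values.yaml, so the exact names depend on the chart version you pulled; these are illustrative):

apiVersion: example.com/v1
kind: MongoDB
metadata:
  name: mongodb-1
spec:
  mongodbDatabase: mydb
  mongodbUsername: myuser
  mongodbPassword: mypassword
  replicaSet:
    enabled: true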

Now, in order to deploy our Operator, we need to build and push its container image:

$ operator-sdk build sunnychams/mongodb-operator:v1

$ docker push sunnychams/mongodb-operator:v1

Next, we configure the deployment by updating the deploy/operator.yaml file that the SDK generated with the name of the built image. The field to update is named image and can be found under spec -> template -> spec -> containers.
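
The relevant excerpt should end up looking like this (abridged):

# deploy/operator.yaml (excerpt)
spec:
  template:
    spec:
      containers:
        - name: mongodb-helm-operator
          image: sunnychams/mongodb-operator:v1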

Next, deploy the CRD:

$ kubectl apply -f deploy/crds/*_crd.yaml
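
For reference, the generated CRD looks roughly like this (abridged; older v0.x SDKs emit the apiextensions.k8s.io/v1beta1 schema):

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: mongodbs.example.com
spec:
  group: example.com
  names:
    kind: MongoDB
    listKind: MongoDBList
    plural: mongodbs
    singular: mongodb
  scope: Namespaced
  version: v1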

Then apply the generated RBAC resources needed to manage the instance lifecycle:

$ kubectl apply -f deploy/service_account.yaml

$ kubectl apply -f deploy/role.yaml

$ kubectl apply -f deploy/role_binding.yaml
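
The generated role grants the Operator permissions over the resource kinds the chart templates out; an abridged, illustrative excerpt (the SDK derives the real rules from the chart, so yours will differ):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: mongodb-helm-operator
rules:
  - apiGroups: [""]
    resources: ["pods", "services", "configmaps", "secrets"]
    verbs: ["*"]
  - apiGroups: ["apps"]
    resources: ["deployments", "statefulsets"]
    verbs: ["*"]
  - apiGroups: ["example.com"]
    resources: ["*"]
    verbs: ["*"]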

The last step is to deploy the Operator itself. You can use the previously edited operator.yaml file to deploy the Operator image into the cluster:

$ kubectl apply -f deploy/operator.yaml
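
A quick sanity check that the Operator came up (the Deployment name matches the project name):

$ kubectl get deployment mongodb-helm-operator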

Now we can submit our MongoDB CR, which the Operator is watching for in order to create our application:

$ kubectl apply -f deploy/crds/*_cr.yaml

The coolest thing now is that we can create multiple instances of our MongoDB app with different configs, in different namespaces (the operator-sdk generates a namespace-scoped CRD by default), all managed by the same Operator… and we will not run a single Helm command to modify the application.
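
For example, a second instance with a different configuration is just another CR (again, the spec keys are illustrative and mirror the chart's values.yaml):

apiVersion: example.com/v1
kind: MongoDB
metadata:
  name: mongodb-2
spec:
  mongodbDatabase: analytics
  replicaSet:
    enabled: false

Save it as mongodb-2_cr.yaml (a hypothetical file name) and apply it with kubectl; the Operator will render and install a second, independent Helm release.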

We can list all the existing instances:

$ kubectl get mongodbs

NAME        AGE
mongodb-1   5m
mongodb-2   30s

The Operator will watch for any change and reconcile the state of the multiple instances in a seamless way.
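
To see the reconciliation in action, try mutating a spec field in place; a hedged example (the key is, again, chart-dependent):

$ kubectl patch mongodb mongodb-1 --type=merge -p '{"spec": {"replicaSet": {"enabled": true}}}'

The Operator will pick up the change and upgrade the underlying Helm release accordingly.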

Anyone with previous experience (read: pain in the ass) managing upgrades of stateful applications and rolling between different versions will see such a management model as a no-brainer.

Additionally, the Operator Framework comes with a component called OLM (Operator Lifecycle Manager) that ensures smooth installation, rolling updates, and upgrades of Operators. In fact, it's the preferred way to manage any Operator generated with the Framework's toolkit.
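
Recent SDK versions can even bootstrap OLM into a cluster directly (the subcommand landed around v0.15, so check your version):

$ operator-sdk olm install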

(PS: OpenShift, with its native Operator adoption, now ships with OLM pre-installed, which aids cluster administrators in installing, upgrading, and granting access to Operators running on their cluster.)

Finally, we ought to mention that this was a simple demo that doesn't go deep into the logic needed to manage real stateful applications. That requires extensive work, and Go/Ansible Operators may be better suited for reaching a mature level of management.

-The Operator Maturity Model-
