How to Migrate from Helm v2 to v3

Version 3 of Helm, the package manager for Kubernetes, was released a few months ago. This release comes with a lot of new changes and improvements. I tried out the beta releases of v3 with the cluster setup we have. In this post, I will talk about the changes in this release and how to migrate your charts and releases.

What is Helm?
“Helm helps you manage Kubernetes applications — Helm Charts help you define, install, and upgrade even the most complex Kubernetes application.
Charts are easy to create, version, share, and publish — so start using Helm and stop the copy-and-paste.”

Before taking a look at v3, let’s see what the issue was with the Tiller component in v2.

Security issue with Tiller from Helm v2

Helm v2 comes with a server-side component called Tiller. Tiller takes care of creating all the resources which are part of a chart. It usually needs admin privileges so that it can create all the required resources in the cluster.

Tiller exposes a gRPC port, which the helm client uses to communicate with it. By default, this port is accessible to any authenticated user in the cluster. This also means that any application running in the cluster can access it (if you don’t have a proper NetworkPolicy in place). This creates the risk of a compromised application running delete/install operations in the cluster.

To mitigate this issue, one can do the following things:

  • Give Tiller fewer privileges.
  • Install Tiller in each namespace and give it access to that namespace only.
  • Enable TLS authentication between the helm client and Tiller.

The Tiller component has been removed in Helm v3. The helm client now gets the privileges of the user who is running the helm command (it uses KUBECONFIG).

What’s new in Helm 3

While Helm v3 is a major rewrite of v2, here is a list of a few notable changes in the tool.

  • Tiller has been removed completely. All the operations happen using the client binary itself (the helm command).
  • The Helm code base can now be used as a package by other tools. This can be utilized to achieve the same results as the helm command using Go code.
  • Chart dependencies have moved to Chart.yaml instead of a separate requirements.yaml file.
  • The apiVersion of Chart.yaml has been updated to v2.
  • CRDs are now handled differently than normal resources.
  • Ability to use a Docker registry to distribute charts, modifications to commands, and so on.
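
To illustrate the new chart format, here is a minimal Chart.yaml using apiVersion: v2 with dependencies declared inline (the chart name and the dependency shown are made up for the example):

```yaml
apiVersion: v2          # Helm 3 chart format
name: my-app            # hypothetical chart name
version: 0.1.0
appVersion: "1.0"
description: Example chart using the Helm 3 chart format
# In Helm 2, this section lived in a separate requirements.yaml file
dependencies:
  - name: redis
    version: "10.x.x"
    repository: https://kubernetes-charts.storage.googleapis.com/
```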

Check out Overview of Helm 3 Changes and Changes since Helm 2 for more details about these changes.

Migrating to Helm 3

Before migrating existing releases from clusters, we need to make sure that the charts we use are compatible with Helm 3. The easiest way is to test all the charts in a separate cluster which is an exact replica of the existing cluster. While doing that, I found that almost all the charts I use were working just fine.

The prometheus-operator chart had issues, as it uses CRDs to create Prometheus, Alertmanager, and other resources.

What has changed when it comes to CRDs

The way Helm 3 handles CRDs has changed. They are now treated as special resources and are never upgraded by Helm once installed. The upgrade operation should be done by the cluster operators (admins) with extra care. A few things about this change:

  • CRDs should be put in the crds directory at the top level of the chart directory.
  • The YAML files in this directory cannot be templated like the resources in the templates directory.
  • The crd-install hook, which used to take care of installing CRD files from the templates directory, has been removed. If there are any files with the crd-install annotation, those are skipped by Helm.
  • Files from the crds directory are applied first, before rendering the chart. They are never applied again if the CRDs already exist in the cluster.
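
Putting these points together, a chart that ships CRDs would look roughly like this (a directory sketch; the file names are illustrative):

```
my-chart/
  Chart.yaml
  crds/                      # plain YAML, applied before templates, never templated or upgraded
    servicemonitor-crd.yaml
  templates/
    deployment.yaml          # regular, templated resources
  values.yaml
```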

The snippet of the sort() function from the releaseutil package can be found here. It calls continue when it finds an unknown hook. More information about this change is available at Proposal: Manage CRDs.

Here are a few links from Helm’s official documentation about CRDs: Helm | Charts – CRDs, Helm | Custom Resource Definitions.

Modifying the charts to be compatible with both versions

Out of all the charts I had, the prometheus-operator chart was failing, as it uses CRDs. Helm 3 was skipping the CRDs in the templates directory. After experimenting, I came up with a solution. The idea was to copy the CRD YAML files from templates to the crds directory, removing any templating from those files in the process. Helm 2 ignores files in crds, while Helm 3 skips CRD files in templates.

After proposing this change (helm/charts#18721), I got a suggestion from vsliouniaev to use .Files.Glob instead of keeping the same file in two places. There were two ways to achieve this. One was to have an object with kind: List which holds all the CRD objects. The other was to have one YAML file with multiple YAML documents of CRDs separated by the --- separator. The second way worked well and was backward compatible with older releases as well.
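
The pattern can be sketched roughly like this (this is not the exact chart code): the CRDs live in the crds directory, and a template inlines them with .Files.Glob so that Helm 2, which ignores crds, still installs them. A values flag (here a hypothetical createCustomResource) can guard the template so the CRDs are not applied a second time:

```yaml
# templates/crds.yaml -- hypothetical Helm 2 compatibility template
{{- if .Values.createCustomResource }}
{{- range $path, $bytes := .Files.Glob "crds/*.yaml" }}
{{ $.Files.Get $path }}
---
{{- end }}
{{- end }}
```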

Detailed explanation can be found in helm/charts#18721 (comment).

With this change merged, prometheus-operator became compatible with both versions 2 and 3 of Helm. The same solution can be applied to other charts so that you can install them with Helm 3. All the charts from the stable repository are now compatible. Thanks to everyone who helped with [stable/*] Helm 3 backwards-compatibility for community charts. Make sure you check all the points mentioned in that issue to maintain compatibility.

Migrating the client configurations

Helm 3 uses the XDG Base Directory Specification. Data, configuration, and cache are now stored in different directories instead of ~/.helm ($HELM_HOME). The migration plugin developed by the Helm community can migrate the repositories, plugins, etc. according to the new directory structure.

helm-2to3 plugin on GitHub.

Follow the Setting up Helm v3, helm-2to3 plugin, and Migrate Helm v2 configuration sections from the official blog post about migration to v3. After following the steps, you will have the helm3 command along with the 2to3 plugin installed on your machine.
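
In short, the plugin installation and configuration migration look like this (a sketch, assuming the Helm v3 binary on your machine is named helm3):

```shell
# Install the 2to3 plugin into Helm v3
helm3 plugin install https://github.com/helm/helm-2to3

# Preview what would be migrated from ~/.helm (repositories, plugins, config)
helm3 2to3 move config --dry-run

# Perform the actual migration of the client configuration
helm3 2to3 move config
```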

Migrating the installed releases

When we install or upgrade a chart, Helm creates a release in the cluster. In Helm 2, it is stored as either a ConfigMap or a Secret within the cluster. This makes rollbacks possible and keeps a history. The format in which Helm 3 stores the release information is different from v2. The 2to3 plugin does the work of converting these releases to the new format.

Before migrating the releases, make sure that you are on the latest version of all the charts installed in the cluster. Some of the charts might have been updated to make them compatible with Helm 3.

Take a look at the Readme before starting the migration.

1. Backup all the existing releases

It’s important to take a backup of all the releases before starting with the migration. Though the first step of migration won’t delete Helm 2’s release information (ConfigMaps or Secrets), it’s always better to take a backup.

Using helm-backup plugin by maorfr

helm-backup is a Helm 2 plugin which can take a backup of releases and also has the ability to restore them. Here is how it achieves the backup and restore:

  • It finds the storage method used by Tiller, then backs up the ConfigMaps/Secrets along with release names as a tar file.
  • While doing the restore, it first applies the ConfigMaps/Secrets. Then it tries to find the release with the STATUS=DEPLOYED label, gets the manifest (YAML) for that release, and applies it.

Take a look at these functions for more details, Backup(namespace string), Restore(namespace string), helm_restore.Restore(releaseName, tillerNamespace, label string).
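
A sketch of those two steps, based on the plugin’s documented usage (the Helm 2 binary is assumed to be helm):

```shell
# Install the helm-backup plugin for Helm 2
helm plugin install https://github.com/maorfr/helm-backup

# Back up all releases from the cluster-tools namespace into a tar file
helm backup cluster-tools
```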

This installs the plugin and takes a backup of all the releases from the cluster-tools namespace.

Using kubectl to backup the releases

Another way is to simply back up all the ConfigMaps or Secrets which hold the release information. The easiest way to do this is to run the following command. Based on the storage backend configuration of Tiller, use ConfigMaps or Secrets.
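
A sketch of such a backup, assuming Tiller uses its default ConfigMap backend in the kube-system namespace (Tiller labels its release objects with OWNER=TILLER):

```shell
# Back up Helm 2 release data stored as ConfigMaps
kubectl get configmaps -n kube-system -l "OWNER=TILLER" -o yaml > helm2-releases.yaml

# If Tiller is configured with the Secret backend, back up Secrets instead
kubectl get secrets -n kube-system -l "OWNER=TILLER" -o yaml > helm2-releases.yaml
```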

2. Convert the Helm 2 releases to Helm 3

The 2to3 plugin has a dry-run mode which can be used to check if a release will be converted properly. I wrote a small script which finds all the installed releases and runs the convert command on each with --dry-run. After that, it asks the user whether they want to convert the selected release.

The complete script can be downloaded from here.
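
The idea behind the script can be sketched as follows (a minimal version, assuming the Helm 2 binary is helm and the Helm 3 binary is helm3):

```shell
#!/usr/bin/env bash
# For each Helm 2 release: dry-run the conversion, then ask before converting.
for release in $(helm ls --short); do
    echo "==> Dry run for ${release}"
    helm3 2to3 convert "${release}" --dry-run

    read -r -p "Convert ${release}? [y/N] " answer
    if [ "${answer}" = "y" ]; then
        helm3 2to3 convert "${release}"
    fi
done
```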

Run helm3 ls to see if the releases are converted correctly. (Note: releases are namespace scoped in Helm 3.)

Once all the releases are converted, run an upgrade for all of them using helm3. It is possible for the migration of a release to succeed even though its chart is incompatible with Helm 3; in that case, the next upgrade of that release using helm3 might fail.
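
For example, an upgrade of a converted release might look like this (the release name, chart, and namespace are hypothetical; --reuse-values keeps the values from the last release):

```shell
helm3 upgrade grafana stable/grafana --namespace monitoring --reuse-values
```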

3. Cleanup the Helm 2 data and resources

After converting all the releases successfully (and testing upgrades using helm3), it’s time to clean up the cluster resources which were used by Helm 2. The 2to3 cleanup command will remove the ‘Helm v2 client Configuration’, ‘Release Data’, and ‘Tiller’ from the cluster.
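
As with convert, the cleanup command supports a dry-run mode, so you can see what would be removed first:

```shell
# Show what would be removed (v2 configuration, release data, Tiller)
helm3 2to3 cleanup --dry-run

# Actually remove the Helm v2 configuration, release data, and Tiller
helm3 2to3 cleanup
```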

Follow the Clean up of Helm v2 data section from the official blog post.

Frequently asked questions

These are a few of the questions which might be useful to someone who is doing the migration or starting with Helm 3.

  1. Is it possible to upgrade from v1 of Chart.yaml to v2?
    Yes. The upgrade is the same as a normal upgrade.
  2. How to list the releases from all the namespaces?
    helm3 ls --all-namespaces or helm3 ls -A (new in v3.1.0)
  3. How to add the stable or incubator repositories?
    References 1, 2.
  4. What is going to happen with stable and incubator chart repositories?
    TL;DR. Those are going to be deprecated. Read more about it here.

Note: This post is also available on my blog. See the article here.

Author Bhavin Gandhi

Bhavin works as a Software Engineer at InfraCloud, where he builds tools around container orchestration and related areas. Bhavin's main areas of interest are Free and Open Source software, containers, and Kubernetes.
