Chart Tests

In this blog, we will see what chart tests are.
Not all Kubernetes clusters are the same. A chart that works in one cluster may fail to install correctly in another. Maybe a necessary feature is lacking. Maybe there are not enough resources. Or maybe some Kubernetes objects that our chart creates fail to interconnect properly. Since success is not guaranteed no matter how good the chart is, Helm supports what are called chart tests. These tests help a chart user make sure that the package they just installed is fully functional and working correctly.
As usual, let’s see this in practice, to understand it better.
Building a Helm Chart Test
The test we’ll write for a chart is pretty much a regular template file and should be placed in the templates/ directory. But Helm needs a way to know that this is not just some regular template file, but a special one that should only be used when the user runs the helm test command. This is easily done by adding the following line to the file’s contents, in the metadata section:
helm.sh/hook: test
So we’re basically adding a simple Helm hook annotation here and that’s enough to tell our utility what this is. But what about the test itself? How should this be built?
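Here is roughly how that annotation sits in a template’s metadata (a minimal sketch; the pod name is hypothetical and the full test pod will be built later):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: "example-test"        # hypothetical name
  annotations:
    "helm.sh/hook": test      # marks this template as a chart test
```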
A test should describe a container. This in turn should run a command which somehow verifies that the installed chart is working alright. To understand this we should first explore a different subject.
Understanding Exit Codes
A command, in Linux, normally returns an exit code. This is usually not displayed to the user, but rather silently returned to the program that invoked the command. For example, when you run
ls
to list files and directories, the ls command returns an exit code. So if we don’t see it, where is this exit code returned to? It goes to the parent process, which in this case is the Bash shell that lets us enter commands and see their output. In Bash, we can see the last returned exit code with this command:
echo $?
In this case, the exit code will be “0”, which signals success: zero errors, the command finished its job without a problem.
Example output:
user@host:~$ ls
bin        helm-diff-linux.tgz  nginx-0.1.0.tgz       Pictures   Videos
Desktop    mariadb-9.3.16.tgz   nginx-0.1.0.tgz.prov  Public     wordpress
diff       Music                nginx-backup          Templates  wordpress-11.0.12.tgz
Documents  mypublickey          nginx-chart-files     test
Downloads  nginx                nginx-templateORIG    values.yaml
user@host:~$ echo $?
0
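The shell’s built-in true and false commands make this easy to experiment with: they do nothing except return exit codes 0 and 1, respectively.

```shell
true       # a built-in that always succeeds
echo $?    # prints 0
false      # a built-in that always fails
echo $?    # prints 1
```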
Now, if we pass a parameter that the ls command does not support, it should return some kind of error in its exit code:
ls --invalidparameter
Let’s see the exit code:
echo $?
This time we see the code “2”, which ls’ documentation tells us signifies the following:
Exit status:
0 if OK,
1 if minor problems (e.g., cannot access subdirectory),
2 if serious trouble (e.g., cannot access command-line argument).
Example output:
user@host:~$ ls --invalidparameter
ls: unrecognized option '--invalidparameter'
Try 'ls --help' for more information.
user@host:~$ echo $?
2
Each command (or rather, its developers) can choose what each exit code above 0 signifies, but by convention, 0 should always mean that the command executed successfully. This helps us a lot in our scenario.
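To illustrate the convention, here is a small sketch of a shell function (hypothetical, not part of our chart) that follows the same scheme ls uses: 0 for success, 2 for serious trouble.

```shell
# check_path: hypothetical helper illustrating the exit-code convention.
# Returns 0 if the given path exists, or 2 if it cannot be accessed.
check_path() {
  if [ -e "$1" ]; then
    return 0
  else
    echo "check_path: cannot access '$1'" >&2
    return 2
  fi
}

check_path /
echo $?    # prints 0, since / always exists
```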
We mentioned that a chart test should describe a container that runs a command that somehow verifies that the package we just installed with Helm is working correctly. Helm knows if the test was successful with the help of the command’s exit code. If whatever command ran, returned 0, then Helm will consider the test passed. If the code is any other number, the test is considered failed.
So what could we test? Here are some examples:
- A command that tries to log in to the database server with the username and password supplied in the values.yaml file. This way we test both that the admin username and password were set correctly, and also that the database server is fully functional.
- Test that the web server is working correctly by trying to establish a connection to it.
- Make a specific request to a web application and check that it returns a valid answer. For example, say you installed some kind of website that has a special API. You can use a command to send a specially crafted HTTP POST request that asks for the website’s version. If the API is working, it will return a valid answer. If it is not working, there will be no answer. For example, the curl command could be used for such a purpose.
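As a sketch of that last idea, a test pod could use curl to probe a version endpoint. Everything here is an assumption for illustration: the image, the service name, and the /version path are hypothetical, not part of our chart.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: "{{ .Release.Name }}-api-test"   # hypothetical test pod
  annotations:
    "helm.sh/hook": test
spec:
  containers:
    - name: curl
      image: curlimages/curl             # assumed image that provides curl
      # The -f flag makes curl exit with a non-zero code on HTTP errors,
      # so Helm can tell whether the API answered correctly.
      command: ['curl']
      args: ['-f', 'http://{{ .Release.Name }}-api/version']
  restartPolicy: Never
```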
You can build one or multiple tests and place them anywhere in the templates/ directory. But for clarity and better structure, it’s recommended that you create a subdirectory called tests and place them all in templates/tests. Grouping them there makes it easier to differentiate between your regular template files and your tests. Let’s create this subdirectory.
mkdir ~/nginx/templates/tests
And now let’s create our first test.
nano ~/nginx/templates/tests/test-nginx.yaml
apiVersion: v1
kind: Pod
metadata:
  name: "{{ .Release.Name }}-nginx-test"
  labels:
    {{- include "nginx.labels" . | nindent 4 }}
  annotations:
    "helm.sh/hook": test
spec:
  containers:
    - name: wget
      image: busybox
      command: ['wget']
      args: ['{{ .Release.Name }}-nginx:80']
  restartPolicy: Never
We can once again see why the named template we defined in the _helpers.tpl file, in a previous blog, is useful. We used the nginx.labels named template to produce the same labels we used in our deployment object. The test itself can be seen in the containers: section, and it is rather simple. We used busybox as the container image, a very lightweight image that gives us access to a few useful commands, usually used for debugging. In this case, we use the wget command provided by busybox, passing it some command-line arguments in the args: section. wget is a simple download utility, and the arguments instruct it to connect to the Nginx web server that our chart installs. We make sure we use the same name we used in our deployment.yaml file, {{ .Release.Name }}-nginx, and tell wget to connect to port 80, which Nginx listens on by default for incoming connections.
Running Helm Chart Tests
To see this in action, we’ll first need to install our chart again.
The objects our chart will create in Kubernetes are few and simple so this gets up and running fast. But for more complex charts, you might need to wait for a few minutes to make sure everything is ready, before running the tests.
Generically, Helm tests are executed with the following command: helm test NAME_OF_RELEASE.
So, with the name of our release, my-website, we can run our tests with this command:
helm test my-website
The following output confirms that the test was successful.
user@host:~$ helm test my-website
NAME: my-website
LAST DEPLOYED: Mon Jun 14 00:16:35 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: my-website-nginx-test
Last Started: Mon Jun 14 00:19:47 2021
Last Completed: Mon Jun 14 00:19:50 2021
Phase: Succeeded
NOTES:
Please wait a few seconds until the nginx chart is fully installed.
After installation is complete, you can access your website by running this command:
kubectl get --namespace default service my-website-nginx
and then entering the IP address you get, into your web browser's address bar.
But, as usual, we shouldn’t be convinced that this is working correctly until we see the other scenario. What if our Nginx web server were down? Would the test detect the anomaly? We can simulate our Nginx server failing by setting the number of replicas to zero, so that no Nginx pods are running anymore.
kubectl scale --replicas=0 deployment/my-website-nginx
Now if we rerun the Helm test, we’ll see a different result, indicating that the test has failed.
user@host:~$ helm test my-website
NAME: my-website
LAST DEPLOYED: Mon Jun 14 00:16:35 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: my-website-nginx-test
Last Started: Mon Jun 14 00:23:35 2021
Last Completed: Mon Jun 14 00:23:39 2021
Phase: Failed
NOTES:
Please wait a few seconds until the nginx chart is fully installed.
After installation is complete, you can access your website by running this command:
kubectl get --namespace default service my-website-nginx
and then entering the IP address you get, into your web browser's address bar.
Error: pod my-website-nginx-test failed
Nice! We can now give our chart users a quick way to verify that everything has deployed correctly in their clusters.
Since tests are Helm hooks, it’s worth noting that if you create multiple tests for a chart, you can make sure they run in a certain order by using helm.sh/hook-weight annotations.
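For example (the weight values here are illustrative), two test pods could be ordered through their metadata like this; weights are quoted strings, and lower weights run first:

```yaml
# In the first test's metadata:
annotations:
  "helm.sh/hook": test
  "helm.sh/hook-weight": "1"   # runs before higher weights

# In the second test's metadata:
annotations:
  "helm.sh/hook": test
  "helm.sh/hook-weight": "2"
```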
Also, remember that hooks (and hence tests, too) create Kubernetes objects that can get left behind. For example, in this case, the tests create pods that, although they run only once, still show up in the output of kubectl get pods commands.
NAME                    READY   STATUS      RESTARTS   AGE
my-website-nginx-test   0/1     Completed   0          103s
test-nginx-test         0/1     Error       0          28m
We may use helm.sh/hook-delete-policy annotations to ensure these objects get deleted after they run. If we add this annotation to our test-nginx.yaml file, we can ensure that these pod resources get deleted:
- Before a new (repeated) test is executed.
- After the test passes.
- After the test fails.
It’s the same hook delete policy we learned about in a previous lesson.
If we group all policies together, we can ensure that the Kubernetes objects created by the tests always get cleaned up when the test finishes.
Example test with all three hook delete policies enabled:
apiVersion: v1
kind: Pod
metadata:
  name: "{{ .Release.Name }}-nginx-test"
  labels:
    {{- include "nginx.labels" . | nindent 4 }}
  annotations:
    "helm.sh/hook": test
    "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded,hook-failed
spec:
  containers:
    - name: wget
      image: busybox
      command: ['wget']
      args: ['{{ .Release.Name }}-nginx:80']
  restartPolicy: Never
But, remember, this may not always be desired. Sometimes, you may want those objects to stick around.
Check out the Helm for the Absolute Beginners course here
Check out the Complete Kubernetes learning path here