
This document relates to investigative work being carried out under Jira ticket POLICY-3809. This work specification is in response to requirements set out by IDUN for integrating the Policy Framework Kubernetes pods / Helm charts into their system. The general requirements of the investigation are below:

  • How to create a Kubernetes environment that can be spun up and made available on demand on suitable K8S infrastructure
  • How suitable test suites could be written to verify the functional requirements below.
  • How such test suites could be implemented using "Contract Testing".

Functional Requirements Detail

Note that many of the features below are available in Postgres itself. In the verification environment, we want to verify that the Policy Framework continues to work in the following scenarios:

  • Synchronization and Load Balancing
  • Failover
  • Backup and Restore

In addition, the environment should:

  • Support measurement of Performance Lag
  • Use secure communication towards the Database
  • Verify that auditing of database operations is working


Investigated Testing Approaches

This section outlines some commonly used testing approaches, as well as some less common ones.

Chart Tests

Chart tests are built into Helm, and details on them can be found here: https://helm.sh/docs/topics/chart_tests/. The task of a chart test is to verify that a chart works as expected once it is installed. Each Helm chart has a templates directory under it, and the test file placed there contains the YAML definition of a Kubernetes Job. A Job in Kubernetes is a resource that creates one or more Pods to carry out a specific task and runs them to completion. In the test, the Job runs a specified command and is considered a success if the container exits successfully (exit 0).

Examples:

  • Validate that your configuration from the values.yaml file was properly injected.
    • Make sure your username and password work correctly
    • Make sure an incorrect username and password does not work
  • Assert that your services are up and correctly load balancing
  • Test successful connection to a database using a specified secret

The simplicity of specifying tests in this way is a major advantage. Tests can then simply be run with a "helm test" command.
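
For illustration, a minimal sketch of such a test is shown below. It is not taken from the Policy Framework charts: the release-derived service name, secret name, database name and user are hypothetical and would need to match the actual chart. The file would live under the chart's templates directory (e.g. templates/tests/test-db-connection.yaml), and the "helm.sh/hook": test annotation marks the Job as a test resource.

ChartTest
apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Release.Name }}-test-db-connection"
  annotations:
    # marks this resource as a chart test, so it only runs on "helm test"
    "helm.sh/hook": test
spec:
  backoffLimit: 0
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: db-connection-test
          image: postgres:13
          env:
            - name: PGPASSWORD
              valueFrom:
                secretKeyRef:
                  # hypothetical secret holding the database credentials
                  name: "{{ .Release.Name }}-postgres-secret"
                  key: password
          # psql exits 0 if the query succeeds, which "helm test" treats as a pass
          command: ["psql", "-h", "{{ .Release.Name }}-postgresql",
                    "-U", "policy_user", "-d", "policydb", "-c", "SELECT 1"]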

Helm Unit Test Plugin

There is an open source project on GitHub - https://github.com/quintush/helm-unittest. It is easy to install as it is designed as a Helm plugin. The plugin allows tests to be defined in YAML to validate the rendered manifests of a chart. It is operated very simply: you define a tests/ directory under your chart, e.g. YOUR_CHART/tests/deployment_test.yaml. An example test suite is defined below:

suite: test deployment
templates:
  - deployment.yaml
tests:
  - it: should work
    set:
      image.tag: latest
    asserts:
      - isKind:
          of: Deployment
      - matchRegex:
          path: metadata.name
          pattern: -my-chart$
      - equal:
          path: spec.template.spec.containers[0].image
          value: nginx:latest

The test asserts a few different things: that the template renders a Deployment, that the resource name ends with the chart name, and that the expected container image is used. A simple CLI command is then used to run the test:

helm unittest $YOUR_CHART

Although this library is useful, it does not actually serve to test the functionality of the chart, only the specification.


Octopus

The Kyma project is a cloud native application runtime that uses Kubernetes, Helm and a number of other components. They used Helm tests extensively and appreciated how easy the tests were to specify. However, they found some shortcomings:

  • Running the whole suite of integration tests took a long time, so they needed an easy way of selecting tests they wanted to run.
  • The number of flaky tests increased, and they wanted to ensure they are automatically rerun.
  • They needed a way of verifying the tests' stability and detecting flaky tests.
  • They wanted to run tests concurrently to reduce the overall testing time.

For these reasons, Kyma developed their own tool called Octopus and it tackles all of the issues above: https://github.com/kyma-incubator/octopus/blob/master/README.md

In developing tests using Octopus, the tester defines two files:

  • TestDefinition file: Defines a test for a single component or a cross-component scenario. We can see in the example below that the custom TestDefinition resource is used to define a Pod with a specified image for the container and a simple command is carried out. This is not dissimilar to the way that helm test defines tests for the charts.
TestDefinition
apiVersion: testing.kyma-project.io/v1alpha1
kind: TestDefinition
metadata:
  labels:
    component: service-catalog
  name: test-example
spec:
  template:
    spec:
      containers:
        - name: test
          image: alpine:latest
          command:
            - "pwd"

  • ClusterTestSuite file: This file defines which tests to run on the cluster and how to run them. In the example below, only tests with the "service-catalog" label are selected. The file specifies how many times each test should be executed and how many retries should be attempted on failure. Concurrency defines the maximum number of tests that may run at the same time.

ClusterTestSuite
apiVersion: testing.kyma-project.io/v1alpha1
kind: ClusterTestSuite
metadata:
  labels:
    controller-tools.k8s.io: "1.0"
  name: testsuite-selected-by-labels
spec:
  count: 1
  maxRetries: 1
  concurrency: 2
  selectors:
    matchLabelExpressions:
      - component=service-catalog

Although this project seems to improve on the Helm chart tests, it is unclear how mature it is. The documentation details how to define the specified files and how to use the kubectl CLI to execute a test (a rough sketch is shown below) - https://github.com/kyma-incubator/octopus/blob/master/docs/tutorial.md
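
For illustration, a rough sketch of the flow (assuming the Octopus controller is already installed in the cluster and the two resources above are saved as testdef.yaml and suite.yaml; the exact steps are described in the tutorial):

kubectl apply -f testdef.yaml
kubectl apply -f suite.yaml
kubectl get clustertestsuites.testing.kyma-project.io testsuite-selected-by-labels -o yaml

The Octopus controller picks up the ClusterTestSuite, creates test Pods for the matching TestDefinitions, and records progress and results in the suite's status.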

Terratest

This testing framework comes from the Terraform ecosystem, but it can be used for Helm charts independently of Terraform. All it requires is a Kubernetes cluster, a Helm installation and the Go language. A very simple example of how the tests are created is found here: https://github.com/gruntwork-io/terratest-helm-testing-example. Tests are specified in the Go language and can include the instructions for deploying a chart. An example outlined here: https://blog.gruntwork.io/automated-testing-for-kubernetes-and-helm-charts-using-terratest-a4ddc4e67344 shows how tests can be specified for two different scenarios - template testing and integration testing.

  • Template Testing: This is used to catch syntax or logical issues in your defined Helm charts. The example shown below points to an example Helm chart directory and then sets the image value in the chart. It renders the template without actually deploying a Pod, and then confirms that the rendered template has the correct image set. After the test is run, output is provided that displays the rendered template and whether the test succeeded. These tests are very quick because they do not involve deploying any Pods.
TemplateTesting
func TestPodTemplateRendersContainerImage(t *testing.T) {
    // Path to the helm chart we will test
    helmChartPath := "../charts/minimal-pod"
    // Setup the args.
    // For this test, we will set the following input values:
    // - image=nginx:1.15.8
    options := &helm.Options{
        SetValues: map[string]string{"image": "nginx:1.15.8"},
    }
    // Run RenderTemplate to render the template
    // and capture the output.
    output := helm.RenderTemplate(
        t, options, helmChartPath, "nginx",
        []string{"templates/pod.yaml"})
    // Now we use kubernetes/client-go library to render the
    // template output into the Pod struct. This will
    // ensure the Pod resource is rendered correctly.
    var pod corev1.Pod
    helm.UnmarshalK8SYaml(t, output, &pod)
    // Finally, we verify the pod spec is set to the expected 
    // container image value
    expectedContainerImage := "nginx:1.15.8"
    podContainers := pod.Spec.Containers
    if podContainers[0].Image != expectedContainerImage {
        t.Fatalf(
            "Rendered container image (%s) is not expected (%s)",
            podContainers[0].Image,
            expectedContainerImage,
        )
    }
}

This is fine as long as you don't want to test any functionality that depends on your chart being up-and-running.

  • Integration Testing: These tests deploy the rendered template from above onto an actual Kubernetes cluster, so inputs to create the actual Pods must be provided in the testing script. The test installs the chart with helm install and, once the test is finished, uninstalls it. See the example below:
IntegrationTesting
func TestPodDeploysContainerImage(t *testing.T) {
    // Path to the helm chart we will test
    helmChartPath := "../charts/minimal-pod"
    // Setup the kubectl config and context.
    // Here we choose to use the defaults, which is:
    // - HOME/.kube/config for the kubectl config file
    // - Current context of the kubectl config file
    // Change this to target a different Kubernetes cluster
    // We also specify to use the default namespace
    kubectlOptions := k8s.NewKubectlOptions("", "", "default")
    // Setup the args.
    // For this test, we will set the following input values:
    // - image=nginx:1.15.8
    options := &helm.Options{
        SetValues: map[string]string{"image": "nginx:1.15.8"},
    }
    // We generate a unique release name that we can refer to.
    // By doing so, we can schedule the delete call here so that
    // at the end of the test, we run `helm delete RELEASE_NAME`
    // to clean up any resources that were created.
    releaseName := fmt.Sprintf(
        "nginx-%s", strings.ToLower(random.UniqueId()))
    defer helm.Delete(t, options, releaseName, true) 
    // Deploy the chart using `helm install`.
    helm.Install(t, options, helmChartPath, releaseName)
    // Wait for the pod to come up.  It takes some time for the Pod
    // to start, so retry a few times.
    podName := fmt.Sprintf("%s-minimal-pod", releaseName)
    retries := 15
    sleep := 5 * time.Second
    k8s.WaitUntilPodAvailable(
        t, kubectlOptions, podName, retries, sleep) 
    // Now let's verify the pod. We will first open a tunnel to
    // the pod, making sure to close it at the end of the test.
    tunnel := k8s.NewTunnel(
        kubectlOptions, k8s.ResourceTypePod, podName, 0, 80)
    defer tunnel.Close()
    tunnel.ForwardPort(t) 
    // ... and now that we have the tunnel, we will verify that we
    // get back a 200 OK with the nginx welcome page.
    endpoint := fmt.Sprintf("http://%s", tunnel.Endpoint())
    http_helper.HttpGetWithRetryWithCustomValidation(
        t,
        endpoint,
        retries,
        sleep,
        func(statusCode int, body string) bool {
            isOk := statusCode == 200
            isNginx := strings.Contains(body, "Welcome to nginx")
            return isOk && isNginx               
        },
    )
}

Although the use of the Go language is appealing for this kind of testing, this method has some drawbacks compared to the others.

  • Terratest was built to work with Terraform. It will work independently, but there could be some "gotchas" that end up requiring Terraform features.
  • Terratest seems to do much the same thing as Chart Tests, and one could argue that Chart Tests are easier to use.
  • Terratest does not provide the concurrency options that are present in Octopus.


