
The CLAMP Kubernetes Participant performs Helm chart installation and lifecycle management (LCM) for Kubernetes (K8S) microservices that take part in control loops. It implements the participant intermediary API to receive events from the CLAMP runtime and then interacts with the Helm CLI. It acts as a wrapper around the Helm CLI to manage Helm charts and K8s pods deployed in the cluster.

Helm 3 overview:

In Helm 3, the Helm CLI acts as an interface towards a Kubernetes cluster and allows the user to deploy and manage Helm charts. It also supports configuration of multiple chart repositories for the Helm client and enables access to charts from those repositories for installation of K8S microservices. The repositories can be local chart servers running on the same machine or third-party chart servers running elsewhere. The URL of a chart server can be added to the Helm repo list to permit access to the Helm charts hosted in that repository. Users can add repositories and push Helm charts to them via Helm CLI commands.

Example: the configured chart repositories and the charts available in them can be listed via the Helm CLI.
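For instance, a repository can be registered and the configured repositories and charts inspected with standard Helm 3 commands (the repository name, URL and chart keyword below are illustrative):

# Register a chart repository with the Helm client (name and URL are examples)
helm repo add chartmuseum http://chart-museum.example.com:8080
# Refresh the locally cached chart metadata for all configured repositories
helm repo update
# List the repositories configured on the Helm client
helm repo list
# Search the configured repositories for charts matching a keyword
helm search repo hello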

CLAMP Kubernetes Participant Architecture in Istanbul:

Prerequisites for Kubernetes participant:

  • A running Kubernetes cluster.
  • The Helm CLI installed and running.
  • Currently, the operator is expected either to store the Helm charts in one of the configured Helm repositories via Helm CLI commands or to ensure that the charts are available in the local storage directory of the kubernetes-participant (this directory is a preconfigured path specified in the participant config and acts as a local chart repository); see the example after this list.
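As a rough sketch of the local-storage option, a packaged chart can simply be copied into the participant's preconfigured local chart directory (the chart name and directory path below are examples; the real path comes from the participant configuration):

# Package the chart sources into a .tgz archive
helm package ./hello
# Copy the packaged chart into the participant's local chart directory
# (example path; the actual directory is set in the kubernetes-participant config)
cp hello-0.1.0.tgz /var/helm-manager/local-charts/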

Note: When the Kubernetes participant is containerized in the upcoming release, the kubeconfig file of the required kubernetes cluster should be copied to the k8s-participant's docker container in order to make the Helm CLI work with the external cluster.
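For example (the container name and destination path below are illustrative), the kubeconfig could be copied into the container with:

# Copy the kubeconfig of the target cluster into the participant container
# (container name and destination path are assumptions)
docker cp ~/.kube/config k8s-participant:/root/.kube/config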


                              


In the Istanbul release, the CLAMP Kubernetes participant supports the following methods for installation of Helm charts:

  • Installation of a Helm chart that is present on the same local file system where the Kubernetes participant is hosted
  • Installation of Helm charts from the Helm repositories configured on the Helm client (see the example after this list)
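As a sketch (the release names, chart references and paths are taken from the sample TOSCA further below), these two methods correspond to the two forms of the Helm install command:

# Install a chart from a configured repository (repo/chart reference)
helm install helloworld chartmuseum/hello --version 0.1.0 --namespace onap
# Install a chart from the local file system (path to an unpacked chart or a .tgz)
helm install pmshmicroservice /home/oom/helm-charts/PMSH --namespace onap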

The CLAMP Kubernetes participant acts as a mediator between the CLAMP runtime and the Helm Client.

When a control loop is commissioned, the Helm chart parameters are passed via the TOSCA template to the control loop runtime database. When the control loop is instantiated, the Kubernetes participant receives a control loop element update event from the CLAMP runtime. It then invokes the Helm client running on the host machine to install the Helm charts associated with the affected control loop elements.

The k8s-participant gets the chart parameters (chart name, version, release name and namespace) from the control loop runtime. If the repository of the chart has not been specified in the TOSCA, it performs a chart lookup on all the repositories configured on the Helm client as well as on the local chart directory where the Helm charts are manually onboarded by the operator. It then fetches the appropriate repository information and installs the chart via the Helm CLI.
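The repository lookup is conceptually equivalent to searching the configured repositories for a chart with the requested name and version (chart name and version below are taken from the sample TOSCA):

# Look up which configured repository provides the requested chart and version
helm search repo hello --version 0.1.0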


The k8s-participant takes care of creating the namespace on the cluster if required, fetching the Helm chart from the available repositories, and installing and uninstalling the chart into the cluster.
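In Helm CLI terms this maps roughly to commands like the following (release, chart and namespace names are illustrative; --create-namespace requires Helm 3.2 or later):

# Install the chart, creating the target namespace if it does not yet exist
helm install helloworld chartmuseum/hello --version 0.1.0 --namespace onap --create-namespace
# Uninstall the release when the corresponding control loop element is removed
helm uninstall helloworld --namespace onap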

In the upcoming release, additional options will be supported in the control loop TOSCA for onboarding Helm charts to the repositories (both configured Helm repositories and the local chart directory) before instantiation. (Under discussion)

                                                                           


                                         



Below is a sample TOSCA template passed during commissioning of control loops, showing the different ways in which charts can be commissioned via TOSCA.

The chart parameters (chart_name, version, release_name and namespace) are mandatory in the TOSCA template.

The repository can be specified as either the name of a Helm repository configured on the Helm client or a local directory path where the Helm charts are available.

(Note: the repository is an optional parameter in the control loop TOSCA template. If it is not specified, the k8s-participant will do a look-up on the available repositories.)

TOSCA template

org.onap.domain.database.HelloWorld_K8SMicroserviceControlLoopElement:
  # Chart from any chart repository configured on helm client.
  version: 1.2.3
  type: org.onap.policy.clamp.controlloop.K8SMicroserviceControlLoopElement
  type_version: 1.0.0
  description: Control loop element for the K8S microservice for Hello World
  properties:
    provider: ONAP
    participant_id:
      name: org.onap.k8s.controlloop.K8SControlLoopParticipant
      version: 2.3.4
    chart:
      release_name: helloworld
      chart_name: hello
      version: 0.1.0
      repository: chartMuseum
      namespace: onap

org.onap.domain.database.PMSH_K8SMicroserviceControlLoopElement:
  # Chart from local file system
  version: 1.2.3
  type: org.onap.policy.clamp.controlloop.K8SMicroserviceControlLoopElement
  type_version: 1.0.0
  description: Control loop element for the K8S microservice for PMSH
  properties:
    provider: ONAP
    participant_id:
      name: org.onap.k8s.controlloop.K8SControlLoopParticipant
      version: 2.3.4
    chart:
      release_name: pmshmicroservice
      chart_name: pmsh
      version: 0.1.0
      repository: /home/oom/helm-charts/PMSH
      namespace: onap

org.onap.domain.database.Local_K8SMicroserviceControlLoopElement:
  # Chart installation without passing repository name
  version: 1.2.3
  type: org.onap.policy.clamp.controlloop.K8SMicroserviceControlLoopElement
  type_version: 1.0.0
  description: Control loop element for the K8S microservice for any chart
  properties:
    provider: ONAP
    participant_id:
      name: org.onap.k8s.controlloop.K8SControlLoopParticipant
      version: 2.3.4
    chart:
      release_name: nginxms
      chart_name: nginx-ingress
      version: 0.9.1
      namespace: onap







