Introduction

The DCAE platform in the Dublin release supports a new feature to deploy components via Helm charts. This is enabled by integrating the Cloudify Helm plugin into the Cloudify Manager instance that the DCAE platform uses to deploy its other required services. The Cloudify Helm plugin itself is maintained under the CCSDK project and was delivered as part of Casablanca. For Dublin, this plugin has been integrated into the DCAE ONAP deployment. Any chart available under the chart repo URL specified as a configuration input can be deployed.


Dublin Scope

The Helm plugin was intended to support the deployment scenario of a stand-alone application, similar to the capability offered under OOM. With this plugin integration, any chart packaged under ONAP OOM can be deployed through the DCAE platform in ONAP. This gives operators the option of using a single orchestrator, Cloudify, for both Helm and TOSCA workflows if required.

As all DCAE microservices are currently TOSCA workflow based, the Helm plugin is not used for DCAE component deployment.

Artifacts

Repository Path: https://gerrit.onap.org/r/gitweb?p=ccsdk/platform/plugins.git;a=tree;f=helm;h=945eb3159f61071a348791b1f00d1cf4c3c97e7d;hb=HEAD

Plugin:  https://nexus.onap.org/content/sites/raw/org.onap.ccsdk.platform.plugins/plugins/helm-4.0.0-py27-none-linux_x86_64.wgn

Type file: https://nexus.onap.org/content/sites/raw/org.onap.ccsdk.platform.plugins/type_files/helm/4.0.0/helm-type.yaml
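
If the plugin ever needs to be installed on a Cloudify Manager manually (the DCAE bootstrap normally takes care of this, and the deployment log below shows an existing managed plugin being reused), it can be uploaded with the standard Cloudify CLI. A minimal sketch, assuming the two artifacts above have been downloaded into the working directory:

cfy plugins upload helm-4.0.0-py27-none-linux_x86_64.wgn -y helm-type.yaml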

Blueprint Template:

# ============LICENSE_START==========================================
# ===================================================================
# Copyright (c) 2019 AT&T
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#============LICENSE_END============================================
tosca_definitions_version: cloudify_dsl_1_3

imports:
  - http://www.getcloudify.org/spec/cloudify/4.3.1/types.yaml
  - "https://nexus.onap.org/service/local/repositories/raw/content/org.onap.ccsdk.platform.plugins/type_files/helm/4.0.0/helm-type.yaml"

inputs:
  tiller-server-ip:
    description: IP address of Kubernetes master node
  tiller-server-port:
    description: Nodeport of tiller server
  namespace:
    description: Target namespace to be installed under (must be a new namespace)
  chart-repo-url:
    default: https://nexus.onap.org/content/sites/oom-helm-staging
  chart-version:
    description: Chart version for identified component-name
  stable-repo-url:
    description: URL for stable repository
    type: string
    default: 'https://kubernetes-charts.storage.googleapis.com'
  config-url:
    default: ''
  config-format:
    default: 'yaml'
  component-name:
    description: onap component name
node_templates:
  dcaecomponent:
    type: onap.nodes.component
    properties:
      tiller-server-ip: { get_input: tiller-server-ip }
      tiller-server-port: { get_input: tiller-server-port }
      component-name: { get_input: component-name }
      chart-repo-url: { get_input: chart-repo-url }
      chart-version: { get_input: chart-version }
      namespace: { get_input: namespace }
      stable-repo-url: { get_input: stable-repo-url }
      config-url: { get_input: config-url }
      config-format: { get_input: config-format }
outputs:
  dcaecomponent_install_status:
    value: { get_attribute: [ dcaecomponent, install-status ] }


There is also an option to override the chart defaults (the equivalent of supplying a values.yaml) using the following blueprint template. An illustrative override file for the config-url input is sketched after the template.


Helm Template with Override
# ============LICENSE_START==========================================
# ===================================================================
# Copyright (c) 2019 AT&T
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#============LICENSE_END============================================
tosca_definitions_version: cloudify_dsl_1_3

imports:
  - http://www.getcloudify.org/spec/cloudify/4.3.1/types.yaml
  - "https://nexus.onap.org/service/local/repositories/raw/content/org.onap.ccsdk.platform.plugins/type_files/helm/4.0.0/helm-type.yaml"

inputs:
  tiller-server-ip:
    description: IP address of Kubernetes master node
  tiller-server-port:
    description: Nodeport of tiller server
  namespace:
    description: Target namespace to be installed under (must be a new namespace)
  chart-repo-url:
    default: https://nexus.onap.org/content/sites/oom-helm-staging
  chart-version:
    description: Chart version for identified component-name
  stable-repo-url:
    description: URL for stable repository
    type: string
    default: 'https://kubernetes-charts.storage.googleapis.com'
  config-url:
    default: ''
  config-format:
    default: 'yaml'
  component-name:
    description: onap component name
node_templates:
  onap_env:
    type: onap.nodes.component
    properties:
      tiller-server-ip: { get_input: tiller-server-ip }
      tiller-server-port: { get_input: tiller-server-port }
      component-name: onap
      chart-repo-url: { get_input: chart-repo-url }
      chart-version: { get_input: chart-version }
      namespace: { get_input: namespace }
      stable-repo-url: { get_input: stable-repo-url }
      config: '{ "aaf": {"enabled": false}, "aai": {"enabled": false}, "appc": {"enabled": false}, "clamp": {"enabled": false}, "cli": {"enabled": false}, "consul": {"enabled": false}, "dcaegen2": {"enabled": false}, "dmaap": {"enabled": false}, "esr": {"enabled": false}, "log": {"enabled": false}, "sniro-emulator": {"enabled": false}, "msb": {"enabled": false}, "multicloud": {"enabled": false}, "nbi": {"enabled": false}, "oof": {"enabled": false}, "policy": {"enabled": false}, "pomba": {"enabled": false}, "portal": {"enabled": false}, "robot": {"enabled": false}, "sdc": {"enabled": false}, "sdnc": {"enabled": false}, "so": {"enabled": false}, "uui": {"enabled": false}, "vfc": {"enabled": false}, "vid": {"enabled": false}, "vnfsdk": {"enabled": false} }'

  dcaecomponent:
    type: onap.nodes.component
    properties:
      tiller-server-ip: { get_input: tiller-server-ip }
      tiller-server-port: { get_input: tiller-server-port }
      component-name: { get_input: component-name }
      chart-repo-url: { get_input: chart-repo-url }
      chart-version: { get_input: chart-version }
      namespace: { get_input: namespace }
      stable-repo-url: { get_input: stable-repo-url }
      config-url: { get_input: config-url }
      config-format: { get_input: config-format }
    relationships:
      - type: cloudify.relationships.connected_to
        target: onap_env
outputs:
  dcaecomponent_install_status:
    value: { get_attribute: [ dcaecomponent, install-status ] }
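
In the template above, the onap_env node overrides chart defaults inline through its config property, while the dcaecomponent node can instead pull an override from the config-url input at deployment time. A minimal sketch of such an override file; the hosting URL and the keys shown are hypothetical and must match the target chart's values.yaml:

# Hypothetical override file served over HTTP, e.g. at
# http://my-config-host/overrides/robot-values.yaml (passed as config-url,
# with config-format: 'yaml'); the keys depend entirely on the target chart
flavor: small
replicaCount: 1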


Pre-Configuration Steps

  1. Helm needs to be installed on the CM pod            


kubectl exec -it -n onap <Cloudify Manager pod> /bin/bash

wget http://storage.googleapis.com/kubernetes-helm/helm-v2.9.1-linux-amd64.tar.gz
tar -zxvf helm-v2.9.1-linux-amd64.tar.gz
sudo mv linux-amd64/helm /usr/local/bin/helm

   Note: If wget is not found, install it with "sudo yum install wget" on the CM pod.
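
   Optionally, run a quick sanity check that the client binary is on the path (Tiller connectivity is configured in the next step):

helm version --client
# Expect output similar to: Client: &version.Version{SemVer:"v2.9.1", ...}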

  2. The Tiller service should be updated to expose a NodePort

You can let Kubernetes assign an unused random port by changing the service "type" from "ClusterIP" to "NodePort" in the service definition.

kubectl edit svc -n kube-system tiller-deploy -o yaml
# Change the type from ClusterIP to NodePort; Kubernetes will assign an unused
# node port from the cluster's NodePort range.

# Before the edit, the service is exposed as ClusterIP only:
kubectl get svc --all-namespaces | grep tiller
kube-system   tiller-deploy             ClusterIP      10.43.218.97   <none>                                44134/TCP                       5d

# After the update, the service definition should reflect the assigned node port;
# rerun the command above to verify.

Below is an example of the modified tiller service with nodePort 32764 assigned:

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2019-11-12T17:48:22Z"
  labels:
    app: helm
    name: tiller
  name: tiller-deploy
  namespace: kube-system
  resourceVersion: "12675524"
  selfLink: /api/v1/namespaces/kube-system/services/tiller-deploy
  uid: 9ee66cde-0574-11ea-baf9-fa163e7033c0
spec:
  clusterIP: 10.43.128.185
  externalTrafficPolicy: Cluster
  ports:
  - name: tiller
    nodePort: 32764
    port: 44134
    protocol: TCP
    targetPort: tiller
  selector:
    app: helm
    name: tiller
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
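
As a non-interactive alternative to kubectl edit, the same change can be scripted; a sketch using standard kubectl commands:

# Switch the service type to NodePort and let Kubernetes pick an unused port
kubectl patch svc tiller-deploy -n kube-system -p '{"spec": {"type": "NodePort"}}'

# Print the node port that was assigned
kubectl get svc tiller-deploy -n kube-system -o jsonpath='{.spec.ports[0].nodePort}'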



Installation


  1. Modify the blueprint templates 
    kubectl exec -it -n onap <dcae-bootstrap pod> /bin/bash
    cd blueprints
    ls k8s-helm.yaml k8s-helm-override.yaml
    # The Helm blueprint templates are available under this directory
    # Verify and update the blueprint parameters if required
    # Create a corresponding input file
    
    

    Note: The parameters are explained on the CCSDK wiki page "Introduction of Helm Plugin".

  2. Create an input file with the parameters below - /blueprints/k8s-helm-inputs.yaml

    tiller-server-ip: 10.12.7.116
    tiller-server-port: 32764
    namespace: onap
    chart-repo-url: https://nexus.onap.org/content/sites/oom-helm-staging
    chart-version: 3.0.0
    config-url: ''
    config-format: 'yaml'
    component-name: robot
  3. Validate the blueprint with CM
    cfy blueprints validate /blueprints/k8s-helm.yaml
    
  4. Deploy the blueprint
    cfy blueprints upload -b k8s-helm-test /blueprints/k8s-helm.yaml
    cfy deployments create -b k8s-helm-test -i /blueprints/k8s-helm-inputs.yaml k8s-helm-test
    cfy executions start -d k8s-helm-test install
    
    OR (upload, create the deployment, and install in a single command as below)
    
    cfy install -b k8s-helm-test -d k8s-helm-test -i /blueprints/k8s-helm-inputs.yaml /blueprints/k8s-helm.yaml
    
    Deployment Output
    
    [root@dev-dcaegen2-dcae-bootstrap-869957c7dc-kswcz /]# cfy install -b k8s-helm-test -d k8s-helm-test -i /blueprints/k8s-helm-inputs.yaml /blueprints/k8s-helm.yaml
    Uploading blueprint /blueprints/k8s-helm.yaml...
     k8s-helm.yaml |#######################################################| 100.0%
    Blueprint uploaded. The blueprint's id is k8s-helm-test
    Creating new deployment from blueprint k8s-helm-test...
    Deployment created. The deployment's id is k8s-helm-test
    Executing workflow install on deployment k8s-helm-test [timeout=900 seconds]
    Deployment environment creation is pending...
    2020-02-14 15:45:43.958  CFY <k8s-helm-test> Starting 'create_deployment_environment' workflow execution
    2020-02-14 15:45:44.553  CFY <k8s-helm-test> Installing deployment plugins
    2020-02-14 15:45:44.553  CFY <k8s-helm-test> Sending task 'cloudify_agent.operations.install_plugins'
    2020-02-14 15:45:44.553  CFY <k8s-helm-test> Task started 'cloudify_agent.operations.install_plugins'
    2020-02-14 15:45:45.196  LOG <k8s-helm-test> INFO: Installing plugin: helm-plugin
    2020-02-14 15:45:45.196  LOG <k8s-helm-test> INFO: Using existing installation of managed plugin: f827178d-d100-4129-a309-55a2939863b6 [package_name: helm, package_version: 4.0.0, supported_platform: linux_x86_64, distribution: centos, distribution_release: core]
    2020-02-14 15:45:45.196  CFY <k8s-helm-test> Task succeeded 'cloudify_agent.operations.install_plugins'
    2020-02-14 15:45:45.196  CFY <k8s-helm-test> Skipping starting deployment policy engine core - no policies defined
    2020-02-14 15:45:45.196  CFY <k8s-helm-test> Creating deployment work directory
    2020-02-14 15:45:45.863  CFY <k8s-helm-test> 'create_deployment_environment' workflow execution succeeded
    2020-02-14 15:45:48.029  CFY <k8s-helm-test> Starting 'install' workflow execution
    2020-02-14 15:45:48.608  CFY <k8s-helm-test> [dcaecomponent_kbvx8d] Creating node instance: nothing to do
    2020-02-14 15:45:48.608  CFY <k8s-helm-test> [dcaecomponent_kbvx8d] Configuring node instance
    2020-02-14 15:45:49.324  CFY <k8s-helm-test> [dcaecomponent_kbvx8d.configure] Sending task 'plugin.tasks.config'
    2020-02-14 15:45:53.638  CFY <k8s-helm-test> [dcaecomponent_kbvx8d.configure] Task succeeded 'plugin.tasks.config'
    2020-02-14 15:45:53.638  CFY <k8s-helm-test> [dcaecomponent_kbvx8d] Node instance configured
    2020-02-14 15:45:54.267  CFY <k8s-helm-test> [dcaecomponent_kbvx8d] Starting node instance
    2020-02-14 15:45:54.267  CFY <k8s-helm-test> [dcaecomponent_kbvx8d.start] Sending task 'plugin.tasks.start'
    2020-02-14 15:45:58.230  CFY <k8s-helm-test> [dcaecomponent_kbvx8d.start] Task succeeded 'plugin.tasks.start'
    2020-02-14 15:45:58.230  CFY <k8s-helm-test> [dcaecomponent_kbvx8d] Node instance started
    2020-02-14 15:45:58.946  CFY <k8s-helm-test> 'install' workflow execution succeeded
    Finished executing workflow install on deployment k8s-helm-test
    * Run 'cfy events list -e 74187dc8-480c-4b99-8ad9-10d8a5864bf3' to retrieve the execution's events/logs
  5. Validation (additional status checks are sketched after this list)
    # Verify that the new namespace identified in the blueprint configuration was created
    kubectl get ns
    
    # Verify that the required component was deployed
    kubectl get pods -n <ns specified> 
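
Beyond kubectl, status can also be checked from the Cloudify and Helm side. A minimal sketch, assuming the deployment id k8s-helm-test from above and the Tiller address supplied in the inputs file:

# Retrieve the blueprint output (dcaecomponent_install_status)
cfy deployments outputs k8s-helm-test

# List the Helm releases known to the exposed Tiller (Helm 2 client on the CM pod)
helm list --host <tiller-server-ip>:<tiller-server-port>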

Any error during deployment will be reported on the console. Additional logs can also be found on the Cloudify Manager pod (under /var/log/cloudify/mg*work/logs).

Future Enhancements

  • Support the Tiller clusterIP/port as an option instead of NodePort alone.
  • Support deployment into existing namespaces.
  • Logging enhancements (capture deployment errors, if any, as well).

