
-------------------- Work in progress --------------------

This guide explains how to do service design in order to automate instantiation and day-0 configuration.


    Installation

    ONAP is meant to be deployed within a Kubernetes environment. Hence, the de-facto way to deploy CDS is through Kubernetes.

    ONAP also packages its Kubernetes manifests as Charts, using Helm.

    Prerequisite

    https://docs.onap.org/en/latest/guides/onap-developer/settingup/index.html

    Setup local Helm

    helm repo
    helm init --history-max 200 # To install tiller to target Kubernetes if not yet installed
    helm serve &
    helm repo add local http://127.0.0.1:8879

    Get the chart

    Make sure to check out the release to use, by replacing $release-tag in the below command.

    git clone
    git clone https://gerrit.onap.org/r/oom
    git checkout tags/$release-tag
    cd oom/kubernetes
    make common
    make cds

    Install CDS

    helm install
    helm install --name cds cds

    Result

    kubectl output
    $ kubectl get all --selector=release=cds
    NAME                                             READY     STATUS    RESTARTS   AGE
    pod/cds-blueprints-processor-54f758d69f-p98c2    0/1       Running   1          2m
    pod/cds-cds-6bd674dc77-4gtdf                     1/1       Running   0          2m
    pod/cds-cds-db-0                                 1/1       Running   0          2m
    pod/cds-controller-blueprints-545bbf98cf-zwjfc   1/1       Running   0          2m
    NAME                            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
    service/blueprints-processor    ClusterIP   10.43.139.9     <none>        8080/TCP,9111/TCP   2m
    service/cds                     NodePort    10.43.254.69    <none>        3000:30397/TCP      2m
    service/cds-db                  ClusterIP   None            <none>        3306/TCP            2m
    service/controller-blueprints   ClusterIP   10.43.207.152   <none>        8080/TCP            2m
    NAME                                        DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/cds-blueprints-processor    1         1         1            0           2m
    deployment.apps/cds-cds                     1         1         1            1           2m
    deployment.apps/cds-controller-blueprints   1         1         1            1           2m
    NAME                                                   DESIRED   CURRENT   READY     AGE
    replicaset.apps/cds-blueprints-processor-54f758d69f    1         1         0         2m
    replicaset.apps/cds-cds-6bd674dc77                     1         1         1         2m
    replicaset.apps/cds-controller-blueprints-545bbf98cf   1         1         1         2m
    NAME                          DESIRED   CURRENT   AGE
    statefulset.apps/cds-cds-db   1         1         2m

    Design time

    Below are the requirements to enable automation for a service within ONAP.

    For instantiation, the goal is to be able to automatically resolve all the HEAT/Helm variables, called cloud parameters.

    For post-instantiation, the goal is to configure the VNF with initial configuration.


      Prerequisite

      1. Gather the cloud parameters:

          Have the HEAT template along with the HEAT environment file.

          or

          Have the Helm chart along with the Values.yaml file

          (CDS supports this, but whether SO → Multicloud supports Helm/K8s is a different story)

          or

          ...

          Have the configuration template to apply on the VNF:

            • XML for NETCONF
            • JSON / XML for RESTCONF
            • JSON for Ansible
            • etc...
        • Identify which template parameters are static and dynamic
        • Create and fill in a table for all the dynamic values

          While doing so, identify the resources that are resolved using the same process; for instance, if two IPs have to be resolved through the same IPAM, the process to resolve the IP is the same.

          Here is the information to capture for each dynamic cloud parameter:

          Parameter Name

            Either the cloud parameter name or the placeholder given for the dynamic property.

          Data Dictionary Resource source

            How the value will be resolved:

            • Input: the value will be given as input in the request.

            • Default: the value will be defaulted in the model.

            • REST: the value will be resolved by sending a query to a REST system.

              Required ingredients: Auth, URL, URI, Payload, VERB.

              Supported auth types:
                • token based authentication: token
                • basic authentication: username, password
                • SSL basic authentication: keystore type, truststore, truststore password, keystore, keystore password

              URL: http(s)://<host>:<port>
              URI: /xyz
              Payload: JSON formatted payload
              VERB: HTTP method

            • DB: the value will be resolved by sending a SQL statement to the DB system.

              Required ingredients: Type, URL, Query, Username, Password.

              Type: only maria-db is supported for now
              URL: jdbc:mysql://<host>:<port>/db
              Query: SQL statement

            • Script: the value will be resolved through the execution of a script.

          Data Dictionary Ingredients for resolution

            These are all the parameters required to process the resolution of that particular resource.

            • REST: list of placeholders used in the URI and the Payload
            • DB: list of placeholders used in the SQL statement

          Output of resolution

            This is the expected result from the system, and you should know which value out of the response is of interest for you.

            If it's a JSON payload, you should think about the JSON path used to access the value of interest.
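          As an illustration (hypothetical parameter name and values, aligned with the IPAM example further below), one row of that table could look like:

          Parameter Name: oam_ip
          Data Dictionary Resource source: REST (POST request to the IPAM system)
          Data Dictionary Ingredients for resolution: prefix-id, used as a placeholder in the URI
          Output of resolution: the address field of the JSON response, e.g. 192.168.10.2/32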

              Data dictionary

              What is a data dictionary?

              For each uniquely identified dynamic resource, along with all its ingredients, we need to create a data dictionary.

              Here is the modeling guideline: Modeling Concepts#resourceDefinition-modeling


              Below are examples of data dictionaries.

                Value will be passed as input.

                unit-number
                {
                    "tags": "unit-number",
                    "name": "unit-number",
                    "property": {
                      "description": "unit-number",
                      "type": "string"
                    },
                    "updated-by": "adetalhouet",
                    "sources": {
                      "input": {
                        "type": "source-input"
                      }
                    }
                  }

                Value will be defaulted.

                prefix-id
                {
                  "tags": "prefix-id",
                  "name": "prefix-id",
                  "property" :{
                    "description": "prefix-id",
                    "type": "integer"
                  },
                  "updated-by": "adetalhouet",
                  "sources": {
                    "default": {
                      "type": "source-default"
                    }
                  }
                }

                Value will be resolved through REST.

                Modeling reference: Modeling Concepts#rest


                primary-config-data via rest source type

                In this example, we're making a POST request to an IPAM system with no payload.

                Some ingredients are required to perform the query, in this case $prefixId. Hence it is provided as an input-key-mapping and defined as a key-dependency. Please refer to the modeling guideline for a more in-depth understanding.

                As part of this request, the expected response will be as below. What is of interest is the address field, as this is what we're trying to resolve.

                response
                {
                    "id": 4,
                    "address": "192.168.10.2/32",
                    "vrf": null,
                    "tenant": null,
                    "status": 1,
                    "role": null,
                    "interface": null,
                    "description": "",
                    "nat_inside": null,
                    "created": "2018-08-30",
                    "last_updated": "2018-08-30T14:59:05.277820Z"
                }

                To tell the resolution framework what is of interest in the response, the path property is used; it uses JSON_PATH to get the value.

                create_netbox_ip_address
                {
                    "tags" : "oam-local-ipv4-address",
                    "name" : "create_netbox_ip",
                    "property" : {
                      "description" : "netbox ip",
                      "type" : "string"
                    },
                    "updated-by" : "adetalhouet",
                    "sources" : {
                      "primary-config-data" : {
                        "type" : "source-rest",
                        "properties" : {
                          "type" : "JSON",
                          "verb" : "POST",
                          "endpoint-selector" : "ipam-1",
                          "url-path" : "/api/ipam/prefixes/$prefixId/available-ips/",
                          "path" : "/address",
                          "input-key-mapping" : {
                            "prefixId" : "prefix-id"
                          },
                          "output-key-mapping" : {
                            "address" : "address"
                          },
                          "key-dependencies" : [ "prefix-id" ]
                        }
                      }
                    }
                  }
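                In this data dictionary, the path "/address" points at the address field of the response shown above, so the resolved value would be 192.168.10.2/32.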



                primary-aai-data via rest source type

                TBD


                primary-aai-data sample
                {
                  "name" : "primary-aai-data",
                  "tags" : "primary-aai-data",
                  "updated-by" : "Steve, Siani <steve.djissitchi@bell.ca>",
                  "property" : {
                    "description" : "primary-aai-data",
                    "type" : "string"
                  },
                  "sources" : {
                    "default": {
                      "type": "source-default",
                      "properties": {
                      }
                    },
                    "input": {
                      "type": "source-input",
                      "properties": {
                      }
                    },
                    "primary-aai-data" : {
                      "type" : "source-rest",
                      "properties": {
                        "type": "JSON",
                        "url-path": "$aai-port/aai/v14/network/generic-vnfs/generic-vnf/$vnf-id",
                        "path": "",
                        "input-key-mapping": {
                          "aai-port": "port",
                          "vnf-id": "vnf-id"
                        },
                        "output-key-mapping": {
                        },
                        "key-dependencies": [
                          "port",
                          "vnf-id"
                        ]
                      }
                    }
                  }
                }



                Value will be resolved through a database.

                Modeling reference: Modeling Concepts#sql

                In this example, we're making a SQL query to the primary database.

                Some ingredients are required to perform the query, in this case $vfmoduleid. Hence it is provided as an input-key-mapping and defined as a key-dependency. Please refer to the modeling guideline for a more in-depth understanding.

                As part of this request, the expected response will be returned in the value column (as aliased in the query). In the output-key-mapping section, that value is mapped to the name of the resource to resolve.

                vf-module-type
                {
                  "name": "vf-module-type",
                  "tags": "vf-module-type",
                  "property": {
                    "description": "vf-module-type",
                    "type": "string"
                  },
                  "updated-by": "adetalhouet",
                  "sources": {
                    "primary-db": {
                      "type": "source-db",
                      "properties": {
                        "type": "SQL",
                        "query": "select sdnctl.demo.value as value from sdnctl.demo where sdnctl.demo.id=:vfmoduleid",
                        "input-key-mapping": {
                          "vfmoduleid": "vf-module-number"
                        },
                        "output-key-mapping": {
                          "vf-module-type": "value"
                        },
                        "key-dependencies": [
                          "vf-module-number"
                        ]
                      }
                    }
                  }
                }

                Value will be resolved through the execution of a script.

                Modeling reference: Modeling Concepts#Capability

                In this example, we're making use of a Python script.

                Some ingredients are required to perform the resolution, in this case $vf-module-type. Hence it is provided as a key-dependency. Please refer to the modeling guideline for a more in-depth understanding.

                As part of this request, the expected result will be set within the script itself.

                interface-description
                {
                  "tags": "interface-description",
                  "name": "interface-description",
                  "property": {
                    "description": "interface-description",
                    "type": "string"
                  },
                  "updated-by": "adetalhouet",
                  "sources": {
                    "capability": {
                      "type": "source-capability",
                      "properties": {
                        "script-type": "jython",
                        "script-class-reference": "Scripts/python/DescriptionExample.py",       
                        "key-dependencies": [
                          "vf-module-type"
                        ]
                      }
                    }
                  }
                }

                The script itself is shown below.

                The key is to have the script class derived from the framework standards.

                In the case of resource resolution, the class to derive from is AbstractRAProcessor

                It will give the required methods to implement: process and recover, along with some utility functions, such as set_resource_data_value or addError.

                These functions either come from the AbstractRAProcessor class, or from the class it derives from.

                If the resolution fails, the recover method will get called with the exception as a parameter.

                Scripts/python/DescriptionExample.py
                #  Copyright (c) 2019 Bell Canada.
                #
                #  Licensed under the Apache License, Version 2.0 (the "License");
                #  you may not use this file except in compliance with the License.
                #  You may obtain a copy of the License at
                #
                #      http://www.apache.org/licenses/LICENSE-2.0
                #
                #  Unless required by applicable law or agreed to in writing, software
                #  distributed under the License is distributed on an "AS IS" BASIS,
                #  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
                #  See the License for the specific language governing permissions and
                #  limitations under the License.
                
                from abstract_ra_processor import AbstractRAProcessor
                from blueprint_constants import *
                from java.lang import Exception as JavaException
                
                class DescriptionExample(AbstractRAProcessor):
                
                    def process(self, resource_assignment):
                        try:
                            # get key-dependencies value
                            value = self.raRuntimeService.getStringFromResolutionStore("vf-module-type")
                            
                            # logic based on key-dependency outcome
                            result = ""
                            if value == "vfw":
                                result = "This is the Virtual Firewall entity"
                            elif value == "vsn":
                                result = "This is the Virtual Sink entity"
                            elif value == "vpg":
                                result = "This is the Virtual Packet Generator"
                
                            # set the value of resource getting currently resolved
                            self.set_resource_data_value(resource_assignment, result)
                
                        except JavaException, err:
                          log.error("Java Exception in the script {}", err)
                        except Exception, err:
                          log.error("Python Exception in the script {}", err)
                        return None
                
                    def recover(self, runtime_exception, resource_assignment):
                        print self.addError(runtime_exception.getMessage())
                        return None
                
                
                

                Value will be resolved through REST, and the output will be a complex type.

                Modeling reference: Modeling Concepts#rest

                In this example, we're making a POST request to an IPAM system with no payload.

                Some ingredients are required to perform the query, in this case $prefixId. Hence it is provided as an input-key-mapping and defined as a key-dependency. Please refer to the modeling guideline for a more in-depth understanding.

                As part of this request, the expected response will be as below.

                response
                {
                    "id": 4,
                    "address": "192.168.10.2/32",
                    "vrf": null,
                    "tenant": null,
                    "status": 1,
                    "role": null,
                    "interface": null,
                    "description": "",
                    "nat_inside": null,
                    "created": "2018-08-30",
                    "last_updated": "2018-08-30T14:59:05.277820Z"
                }

                What is of interest are the address and id fields. For the process to return these two values, we need to create a custom data type, as below:

                dt-netbox-ip
                {
                  "version": "1.0.0",
                  "description": "This is Netbox IP Data Type",
                  "properties": {
                    "address": {
                      "required": true,
                      "type": "string"
                    },
                    "id": {
                      "required": true,
                      "type": "integer"
                    }
                  },
                  "derived_from": "tosca.datatypes.Root"
                }

                The type of the data dictionary will be dt-netbox-ip.

                To tell the resolution framework what is of interest in the response, the output-key-mapping section is used. The process will map the output-key-mapping to the defined data-type.

                create_netbox_ip_address
                {
                    "tags" : "oam-local-ipv4-address",
                    "name" : "create_netbox_ip",
                    "property" : {
                      "description" : "netbox ip",
                      "type" : "dt-netbox-ip"
                    },
                    "updated-by" : "adetalhouet",
                    "sources" : {
                      "primary-config-data" : {
                        "type" : "source-rest",
                        "properties" : {
                          "type" : "JSON",
                          "verb" : "POST",
                          "endpoint-selector" : "ipam-1",
                          "url-path" : "/api/ipam/prefixes/$prefixId/available-ips/",
                          "path" : "",
                          "input-key-mapping" : {
                            "prefixId" : "prefix-id"
                          },
                          "output-key-mapping" : {
                			"address" : "address",
                            "id" : "id"
                          },
                          "key-dependencies" : [ "prefix-id" ]
                        }
                      }
                    }
                  }

                Workflows

                The following workflows are contracts established between SO, SDNC and CDS to cover the instantiation and the post-instantiation use cases.

                Please refer to the modeling guide to understand workflow concept: Modeling Concepts#workflow


                  resource-assignment

                  This action is meant to assign resources needed to instantiate the service, e.g. to resolve all the cloud parameters.

                  Also, this action has the ability to perform a dry-run, meaning that the result of the resolution is made visible to the user.

                  If the user is fine with the result, they can proceed; else (TBD) they will have the opportunity to re-trigger the resolution.

                  Context

                  This action is triggered by Generic-Resource-API (GR-API) within SDNC as part of the AssignBB orchestrated by SO.

                  It will be triggered for the service, and for each VNF and VF-Module (referred to as "entity" below).

                  See SO Building blocks Assignment.

                  Steps

                  This is a single-action workflow, hence the target refers to a node_template of type component-resource-resolution.

                  Inputs

                  Property: template-prefix

                  Description:

                  This action requires resource accumulator templates for each VNF and VF-Module; this will be covered in the User Guide component explanation.

                  These templates are identified using an artifact prefix. See Modeling Concepts#template

                  So in order to know for which entity the action is triggered, this input is required.

                  Output

                  In order to perform dry-run, it is necessary to provide the meshed resolved template as output. To do so, the use of Modeling Concepts#getAttribute expression is required.

                  Also, as mentioned here Modeling Concepts#resourceResolution, the resource resolution component node will populate an attribute named assignment-params with the result.

                  Finally, the name of the output has to be meshed-template so SDNC GR-API knows how to properly parse the response.

                  Example

                  Here is an example of the resource-assignment workflow:

                  resource-assignment
                  {
                    "workflows": {
                      "resource-assignment": {
                        "steps": {
                          "resource-assignment-process": {
                            "description": "Resource Assign Workflow",
                            "target": "resource-assignment-process"
                          }
                        },
                        "inputs": {
                          "template-prefix": {
                            "required": true,
                            "type": "string"
                          },
                          "resolution-key": {
                            "required": true,
                            "type": "string"
                          },
                          "resource-assignment-properties": {
                            "description": "Dynamic PropertyDefinition for workflow(resource-assignment).",
                            "required": true,
                            "type": "dt-resource-assignment-properties"
                          }
                        },
                        "outputs": {
                          "meshed-template": {
                            "type": "json",
                            "value": {
                              "get_attribute": [
                                "SELF",
                                "assignment-params"
                              ]
                            }
                          }
                        }
                      }
                    }
                  }
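                  The target node_template (resource-assignment-process) is not shown on this page. As a minimal sketch only (the artifact names, file names and exact inputs below are assumptions for illustration; the real component interfaces are listed in the node template types section further below), it could look like:

                  resource-assignment-process (illustrative sketch)
                  {
                    "node_templates": {
                      "resource-assignment-process": {
                        "type": "component-resource-resolution",
                        "interfaces": {
                          "ResourceResolutionComponent": {
                            "operations": {
                              "process": {
                                "inputs": {
                                  "resolution-key": { "get_input": "resolution-key" },
                                  "artifact-prefix-names": { "get_input": "template-prefix" }
                                }
                              }
                            }
                          }
                        },
                        "artifacts": {
                          "vf-module-1-template": {
                            "type": "artifact-template-velocity",
                            "file": "Templates/vf-module-1-template.vtl"
                          },
                          "vf-module-1-mapping": {
                            "type": "artifact-mapping-resource",
                            "file": "Templates/vf-module-1-mapping.json"
                          }
                        }
                      }
                    }
                  }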

                  config-assign

                  This action is meant to assign all the resources and mesh the templates needed for the configuration to apply during post-instantiation (day0 config).

                  If the user is fine with the result, they can proceed; else (TBD) they will have the opportunity to re-trigger the resolution.

                  Context

                  This action is triggered by SO after the AssignBB has been executed for Service, VNF and VF-Module. It corresponds to the ConfigAssignBB.

                  See SO Building blocks Assignment.

                  Steps

                  This is a single-action workflow, hence the target refers to a node_template of type component-resource-resolution.

                  Inputs

                  Property: resolution-key

                  Description:

                  The dry-run functionality requires the ability to retrieve, at a later point in the process, the resolution that has been made.

                  The combination of the artifact-name and the resolution-key will be used to uniquely identify the result.

                  Output

                  In order to perform dry-run, it is necessary to provide the meshed resolved template as output. To do so, the use of Modeling Concepts#getAttribute expression is required.

                  Also, as mentioned here Modeling Concepts#resourceResolution, the resource resolution component node will populate an attribute named assignment-params with the result.

                  Example

                  Here is an example of the config-assign workflow:

                  config-assign
                  {
                    "workflows": {
                      "config-assign": {
                        "steps": {
                          "config-assign-process": {
                            "description": "Config Assign Workflow",
                            "target": "config-assign-process"
                          }
                        },
                        "inputs": {
                          "resolution-key": {
                            "required": true,
                            "type": "string"
                          },
                          "config-assign-properties": {
                            "description": "Dynamic PropertyDefinition for workflow(config-assign).",
                            "required": true,
                            "type": "dt-config-assign-properties"
                          }
                        },
                        "outputs": {
                          "dry-run": {
                            "type": "json",
                            "value": {
                              "get_attribute": [
                                "SELF",
                                "assignment-params"
                              ]
                            }
                          }
                        }
                      }
                    }
                  }
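                  Similarly, a minimal sketch of the config-assign-process node_template referenced above (artifact and prefix names are hypothetical; storing the result is what allows config-deploy to retrieve it later using the resolution-key):

                  config-assign-process (illustrative sketch)
                  {
                    "node_templates": {
                      "config-assign-process": {
                        "type": "component-resource-resolution",
                        "interfaces": {
                          "ResourceResolutionComponent": {
                            "operations": {
                              "process": {
                                "inputs": {
                                  "resolution-key": { "get_input": "resolution-key" },
                                  "store-result": true,
                                  "artifact-prefix-names": [ "baseconfig" ]
                                }
                              }
                            }
                          }
                        },
                        "artifacts": {
                          "baseconfig-template": {
                            "type": "artifact-template-velocity",
                            "file": "Templates/baseconfig-template.vtl"
                          },
                          "baseconfig-mapping": {
                            "type": "artifact-mapping-resource",
                            "file": "Templates/baseconfig-mapping.json"
                          }
                        }
                      }
                    }
                  }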

                  config-deploy

                  This action is meant to push the configuration templates defined during the config-assign step for the post-instantiation.

                  This action is triggered by SO after the CreateBB has been executed for all the VF-Modules.

                  Context

                  This action is triggered by SO after the CreateVnfBB has been executed. It corresponds to the ConfigDeployBB.

                  See SO Building blocks Assignment.

                  Steps

                  This is a single-action workflow, hence the target refers to a node_template of type component-netconf-executor, component-jython-executor, or component-restconf-executor.

                  Inputs

                  Property: resolution-key

                  Description:

                  Needed to retrieve the resolution that has been made at an earlier point in the process.

                  The combination of the artifact-name and the resolution-key will be used to uniquely identify the result.

                  Output

                  SUCCESS or FAILURE

                  Example

                  Here is an example of the config-deploy workflow:

                  config-deploy
                  {
                    "workflow": {
                      "config-deploy": {
                        "steps": {
                          "config-deploy": {
                            "description": "Config Deploy using Python (Netconf) script",
                            "target": "config-deploy-process"
                          }
                        },
                        "inputs": {
                          "resolution-key": {
                            "required": true,
                            "type": "string"
                          },
                          "config-deploy-properties": {
                            "description": "Dynamic PropertyDefinition for workflow(config-deploy).",
                            "required": true,
                            "type": "dt-config-deploy-properties"
                          }
                        }
                      }
                    }
                  }
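                  A complete node_template example of type component-netconf-executor (with its netconf-device requirement and Jython script) is shown further below on this page, in the "Component Netconf executor" listing.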

                    resource-assignment-process

                    config-assign-process

                    config-deploy-process



                    Starting from the Dublin release, CDS offers a new package configuration to design service provisioning. This section describes, step by step, the procedure for designing a new CBA from scratch.

                    The CBA package content is well described in CDS Modeling Concepts and also in the Design Time section, which show the structure of a CBA and the different definitions/artifacts. This section focuses more on the creation of a new CBA (the structure: required folders and files), and on the enrichment procedure to generate the complete config file.

                    CBA directory and structure


                    CBA directory structure
                    ├── CBA-archive-name                             # CBA Root Directory        
                    |   └── Definitions/        
                    │       └── CBA_configuration_file.json          # CBA configuration file (Mandatory)               
                    |   └── Environments/                            # All environment files contained in this folder are loaded in Blueprint processor run-time       
                    │       └── env-prod.properties                                             
                    │       └── env-test.properties        
                    |   └── Plans/        
                    │       └── CONFIG_DirectedGraphExample.xml      # Directed graph artifact        
                    |   └── Scripts/                                 # Script used for capability resource resolution
                    │       └── kotlin/          
                    │           └── script_kotlin.kt
                    │       └── ansible/          
                    │           └── ansible_file.yaml
                    │       └── python/          
                    │           └── SamplePython.py                     
                    |   └── TOSCA-Metadata/        
                    │       └── TOSCA.meta                           # CBA entry point (Mandatory)                
                    |   └── Templates/        
                    │       └── example1-template.jinja              # Template file that will dynamically represent a payload in some execution node (extensions supported: .vtl and .jinja)
                    │       └── example1-mapping.json                # List of variables that will be resolved to fulfill the jinja template
                    │       └── example2-template.vtl                # Velocity Template file 
                    │       └── example2-mapping.json                # Mapping file for velocity template
                    

                    Fig. CBA config file structure

                     A. CBA configuration file sections description

                    The above diagram shows a simple CBA with one workflow and one node template. The following describes each section defined in the CBA config file.

                    • CBA Metadata:

                    This section specifies information about the CBA, such as:

                       - The Author: Name and email

                       - User privileges for this self-service provisioning execution

                       - CBA identifier: Template name and Version (Ex. Template name: My-self-service-name, Version: 1.0.0)

                       - Template tags: Reference words that can be used to find this CBA.


                    • DSL Definition:

                    We define here all parameters, in JSON, needed in service provisioning.

                    Ex. Endpoint selector to provide remote Ansible server parameters.

                    ansible-remote-endpoint
                    "ansible-remote-endpoint" : {
                       "type" : "token-auth",
                       "url" : "http://ANSIBLE_IP_ADDRESS",
                       "token" : "Bearer J9gEtMDqf7P4YsJ74fioY9VAhLDIs1"
                    }
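                    Entries defined in the DSL section are referenced by name from elsewhere in the CBA: for example, the endpoint-selector property of the REST resource source shown earlier points to a DSL entry by its name, and node template inputs can reference a DSL entry using the *entry-name syntax (as in the node template example below).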


                    • Workflows execution:

                                  - my-workflow1: This is a workflow describing the action that will trigger the self-service provisioning at run-time. A workflow can take inputs and return outputs. It can also follow one or many steps. In this example, only one step is defined.

                    Workflow: my-workflow1
                    "my-workflow1" : {
                       "steps" : {
                          "execute-script" : {
                             "description" : "some description",
                             "target" : "my-workflow-target-node-node-template",
                             "activities" : [ {
                                "call_operation" : ""
                             } ]
                          }
                       },
                       "inputs" : {
                          "my-input" : {
                             "required" : false,
                             "type" : "string"
                          }
                       }
                    }

                    Each step points to a target which is the corresponding node template, and the target specified here is: my-workflow-target-node-node-template.


                    • Node templates: This section provides the self-service execution plan; usually a DG (Directed Graph) is used here to describe complex workflows. But the above CBA contains a simple node template (my-workflow-target-node-node-template) without a DG:
                    my-workflow-target-node-node-template
                    "my-workflow-target-node-node-template" : {
                       "type" : "node-template-execution-type",
                       "interfaces" : {
                          "NodeTemplateInterface" : {
                             "operations" : {
                                "process" : {
                                   "implementation" : {
                                      "primary" : "component-script"
                                   },
                                   "inputs" : {
                                      "command" : "python SamplePython.py",
                                      "packages" : [ {
                                         "type" : "pip",
                                         "package" : [ "pyaml" ]
                                      } ],
                                      "argument-properties" : "*remote-argument-properties",
                                      "dynamic-properties" : "*remote-argument-properties"
                                   }
                                }
                             }
                          }
                       },
                       "artifacts" : {
                          "component-script" : {
                             "type" : "artifact-script-python",
                             "file" : "Scripts/python/SamplePython.py"
                          }
                       }
                    }


                    The node template is defined by its type (node-template-execution-type in the example above). This type specifies the component function to use for this node template's execution. The following shows the different components that can be executed as a node template:

                    Node template types
                    ├── component-resource-resolution                             # Component to resolve resources
                    |   └── Interface:
                    │       ├── ResourceResolutionComponent
                    │           └── Resolution approaches:
                    │               ├── rr-processor-source-capability             # Resolve using capability scripts such as Jython or Kotlin
                    │               ├── rr-processor-source-processor-db           # Resolve using a database query
                    │               ├── rr-processor-source-default                # Resolve by getting the default value provided
                    │               ├── rr-processor-source-rest                   # Resolve using a REST API request
                    ├── component-jython-executor                                 # Component to execute Jython scripts
                    |   └── Interface:        
                    │       ├── ComponentJythonExecutor
                    ├── component-remote-python-executor                          # Component to execute remote python scripts
                    |   └── Interface:        
                    │       ├── ComponentRemotePythonExecutor
                    ├── component-restconf-executor                               # Component to execute Restconf operations 
                    |   └── Interface:        
                    │       ├── ComponentRestconfExecutor
                    ├── component-netconf-executor                                # Component to execute netconf operations
                    |   └── Interface:        
                    │       ├── ComponentNetconfExecutor
                    ├── component-cli-executor                                    # Cli component
                    |   └── Interface:        
                    │       ├── ComponentCliExecutor
                    ├── component-remote-ansible-executor                         # Component to execute remote ansible playbook
                    |   └── Interface:        
                    │       ├── ComponentRemoteAnsibleExecutor
                    


                    In case the workflow points to a DG node template, this DG will describe the execution sequence to run for the corresponding workflow steps. In the following, the workflow points to a DG and executes two node templates:

                    • Workflow with DG
                    Workflow: my-workflow2
                    "my-workflow2" : {
                       "steps" : {
                          "execute-script" : {
                             "description" : "some description here...",
                             "target" : "my-workflow-target-node-template-with-DG",
                             "activities" : [ {
                                "call_operation" : ""
                             } ]
                          }
                       },
                       "inputs" : {
                          "my-input" : {
                             "required" : false,
                             "type" : "string"
                          }
                       }
                    }
                    • Node templates with DG
                    my-workflow-target-node-template-with-DG
                    "my-workflow-target-node-template-with-DG" : {
                       "type" : "dg-generic",
                       "properties" : {
                          "content" : {
                             "get_artifact" : [ "SELF", "dg-my-workflow1-target-node-template-with-DG" ]
                          },
                          "dependency-node-templates" : [ "target-node-template1", "target-node-template2" ]
                       },   
                       "artifacts" : {
                           "dg-my-workflow1-target-node-template-with-DG" : {
                              "type" : "artifact-directed-graph",
                              "file" : "Plans/CONFIG_DirectedGraphExample.xml"
                           }
                       }
                    }


                    In the DG below, we define the following sequence: [target-node-template1] then [target-node-template2]

                    CONFIG_DirectedGraphExample.xml
                    <service-logic
                      xmlns='http://www.onap.org/sdnc/svclogic'
                      xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance'
                      xsi:schemaLocation='http://www.onap.org/sdnc/svclogic ./svclogic.xsd' module='CONFIG' version='1.0.0'>
                        <method rpc='dg-operation' mode='sync'>
                            <block atomic="true">
                                <execute plugin="target-node-template1" method="process">
                                    <outcome value='failure'>
                                        <return status="failure">
                                        </return>
                                    </outcome>
                                    <outcome value='success'>
                                        <execute plugin="target-node-template2" method="process">
                                            <outcome value='failure'>
                                                <return status="failure">
                                                </return>
                                            </outcome>
                                            <outcome value='success'>
                                                <return status='success'>
                                                </return>
                                            </outcome>
                                        </execute>
                                    </outcome>
                                </execute>
                            </block>
                        </method>
                    </service-logic>


                     B. Other artifacts in CBA

                    This section describes the different artifacts of the CBA that are needed to have a model-driven package for self-service provisioning:

                    • CBA Entry point: TOSCA.meta file
                    TOSCA.meta
                    TOSCA-Meta-File-Version: 1.0.0
                    CSAR-Version: 1.0
                    Created-By: Steve Siani <alphonse.steve.siani.djissitchi@ibm.com>
                    Entry-Definitions: Definitions/CBA_configuration_file_name.json
                    Template-Name: baseconfiguration
                    Template-version: 1.0.0
                    Template-Tags: Steve Siani, remote_ansible
                    • Environment files: Some parameters need to be resolved to fulfill the template. It is possible to provide, in your CBA, additional variables in environment files. In this approach, the service will get some parameters from the environment files. The designer can define many environment variables across files, and those environment files are loaded automatically in the running self-service:

                                     Constraint: Save environment files in [CBA Root Folder]/Environments/

                    Environment files in CBA
                    ├── CBA-archive-name                             # CBA Root Directory        
                    |   .
                    |   .
                    |   .               
                    |   └── Environments/                            # All environment files contained in this folder are loaded in Blueprint processor run-time       
                    │       └── env-prod.properties                                             
                    │       └── env-test.properties   
                    │       └── AdditionalApplications.properties      
                    |   .
                    |   .
                    |   . 
                    
                    env-prod.properties
                    env-prod.ansible_ssh_user=<username>
                    env-prod.ansible_ssh_pass=<password>
                    env-prod.evi_id=<id>
                    env-prod.service_db_url=<service_db_url>
                    env-prod.topology_url=<topology_url>
                    env-prod.resource_allocator_url=<resource_allocator>
                    ...
                    env-test.properties
                    env-test.ansible_ssh_user=<username>
                    env-test.ansible_ssh_pass=<password>
                    env-test.evi_id=<id>
                    env-test.service_db_url=<service_db_url>
                    env-test.topology_url=<topology_url>
                    env-test.resource_allocator_url=<resource_allocator>
                    ...

                    Note: When environment files are provided in the CBA under the Environments directory, the variables contained in those files are loaded into the Blueprint run-time context as attributes of a node template named "BPP". So, accessing those variables is possible by calling the function getNodeTemplateAttributeValue("BPP", attribute) of the Blueprint Runtime Service, where "attribute" refers to the environment variable defined in the environment file.

                    Ex. Getting environment variables from run time
                    val username = blueprintRuntimeService.getNodeTemplateAttributeValue("BPP", "env-test.ansible_ssh_user").asText()


                    • Template artifacts: Contain the template file and the corresponding template mapping. The template provides dynamic content to the self-service for applying the configuration.

                    Ex. Jinja template sample

                    example-template.jinja
                    site_id: {{ site_id }}
                    tenant_name: {{ tenant_name }}
                    Interfaces:
                    {%- for interface in interfaces %}
                        interface {{ interface.name }}
                        description {{ interface.description }}
                        ipv4 address {{ interface.ipv4 }}
                        mtu {{ interface.mtu }}
                    {%- endfor %}

                    Ex. Velocity template sample

                    example-template.vtl
                    site_id: ${site_id}
                    tenant_name: ${tenant_name}
                    Interfaces:
                    #foreach( $interface in $interfaces )
                        interface $interface.name
                        description $interface.description
                        ipv4 address $interface.ipv4
                        mtu $interface.mtu
                    #end

                    Ex. Corresponding template mapping file sample

                    example-mapping.json
                    [
                    	{
                    		"name": "environment",
                    		"input-param": true,
                    		"property": {
                    			"type": "string"
                    		},
                    		"dictionary-name": "input-source",
                    		"dictionary-source": "input",
                    		"dependencies": []
                    	},
                    	{
                    		"name": "site_id",
                    		"input-param": true,
                    		"property": {
                    			"type": "string"
                    		},
                    		"dictionary-name": "input-source",
                    		"dictionary-source": "input",
                    		"dependencies": []
                    	},
                    	{
                    		"name": "tenant_name",
                    		"input-param": true,
                    		"property": {
                    		  "type": "string"
                    		},
                    		"dictionary-name": "input-source",
                    		"dictionary-source": "input",
                    		"dependencies": []
                    	},
                        {
                    		"name": "interfaces",
                    		"input-param": true,
                    		"property": {
                    		   "type": "list",
                    		   "entry_schema": {
                    		      "type": "string"
                    		   }
                    		},
                    		"dictionary-name": "properties-capability-source",
                    		"dictionary-source": "capability",
                    		"dependencies": ["environment"]
                    	}
                    ]

                    In this template, some parameters are resolved using the input source and some are resolved using properties-capability-source.
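                    As an illustration (hypothetical values; the interfaces entries are shown as objects so they match the fields used in the templates above), the resolution of this mapping could produce the following assignments, which are then substituted into the Jinja or Velocity template:

                    Example resolved values (hypothetical)
                    {
                      "environment": "test",
                      "site_id": "site-001",
                      "tenant_name": "tenant-a",
                      "interfaces": [
                        {
                          "name": "GigabitEthernet0/0",
                          "description": "uplink to core",
                          "ipv4": "10.0.0.1/30",
                          "mtu": 9000
                        }
                      ]
                    }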


                    • Script artifacts: You may need to resolve resources using a customized script (Kotlin or Python), or to execute a remote Python script on a device. In this case, you will define scripts in your CBA under the Scripts directory.

                                       - Resource resolution using python script 

                    In the CBA, you may need to define and resolve variables. This is possible by declaring these variables as data types, where each data type belongs to a resource dictionary. Let's take the example of the variable declared above in the template mapping.

                    Variable: interfaces
                        {
                    		"name": "interfaces",
                    		"input-param": true,
                    		"property": {
                    		   "type": "list",
                    		   "entry_schema": {
                    		      "type": "string"
                    		   }
                    		},
                    		"dictionary-name": "properties-capability-source",
                    		"dictionary-source": "capability",
                    		"dependencies": ["environment"]
                    	}

                    This variable is declared as a list, resolved using the resource dictionary named "properties-capability-source" from the dictionary source "capability", and it depends on the variable called "environment". The dependency means that the "environment" variable should be resolved before the "interfaces" variable is resolved.

                    The resource dictionary "properties-capability-source" must be load in CDS run time and will point to the python script to execute as Jython in order to resolve "interfaces" variable.

                    Resource dictionary: properties-capability-source
                        {
                           "name": "properties-capability-source",
                           "updated-by": "Steve Alphonse Siani, alphonse.steve.siani.djissitchi@ibm.com",
                           "tags": "properties-capability-source",
                           "property" :{
                               "description": "Data dictionary used to read properties.",
                               "type": "string"
                           },
                           "sources": {
                               "input": {
                                  "type": "source-input"
                                },
                               "default": {
                                  "type": "source-default",
                                  "properties": {}
                               },
                               "capability": {
                                  "type": "source-capability",
                                  "properties" : {
                                     "script-type" : "jython",
                                     "script-class-reference" : "Scripts/python/ResolvProperties.py"
                                  }
                               }
                           }
                        }
                    Scripts/python/ResolvProperties.py
                    from abstract_ra_processor import AbstractRAProcessor
                    from blueprint_constants import *
                    
                    class ResolvProperties(AbstractRAProcessor):
                    
                        def process(self, resource_assignment):
                            result = ""
                            env = ""
                            attribute = ""
                            # get dependencies result
                            value = self.raRuntimeService.getStringFromResolutionStore("environment")
                            
                            #logic based on dependency outcome
                            env = "env-" + value
                    
                            if resource_assignment.name == "ansible_ssh_user":
                                attribute = env + ".ansible_ssh_user"
                            if resource_assignment.name == "ansible_ssh_pass":
                                attribute = env + ".ansible_ssh_pass"
                            if resource_assignment.name == "evi_id":
                                attribute = env + ".evi_id"
                            if resource_assignment.name == "service_db_url":
                                attribute = env + ".service_db_url"
                            if resource_assignment.name == "topology_url":
                                attribute = env + ".topology_url"
                            if resource_assignment.name == "resource_allocator_url":
                                attribute = env + ".resource_allocator_url"
                    
                            result = self.raRuntimeService.getNodeTemplateAttributeValue("BPP", attribute).asText()
                    
                            # set value for resource getting currently resolved
                            self.set_resource_data_value(resource_assignment, result)
                            return None
                    
                        def recover(self, runtime_exception, resource_assignment):
                            log.error("Exception in the script {}", runtime_exception)
                            print self.addError(runtime_exception.cause.message)
                            return None

                           

                                       - Component execution on Netconf device with python script 

                    In the following, we define a node template of type "component-netconf-executor", and in the inputs we specify the script to run against the Netconf device.

                    Component Netconf executor
                    "node_templates": {
                        "config-deploy": {
                            "type": "component-netconf-executor",
                            "requirements": {
                              "netconf-connection": {
                                "capability": "netconf",
                                "node": "netconf-device",
                                "relationship": "tosca.relationships.ConnectsTo"
                              }
                            },
                            "interfaces": {
                              "ComponentScriptExecutor": {
                                "operations": {
                                  "process": {
                                    "inputs": {
                                      "script-type": "jython",
                                      "script-class-reference": "Scripts/python/ConfigDeploy.py",                 
                                      "dynamic-properties": "*config-deploy-properties"
                                    }
                                  }
                                }
                              }
                            }
                         },
                         "netconf-device": {
                            "type": "vnf-netconf-device",
                            "capabilities": {
                              "netconf": {
                                "properties": {
                                  "login-key": {
                                    "get_input": "password"
                                  },
                                  "login-account": {
                                    "get_input": "username"
                                  },
                                  "target-ip-address": {
                                    "get_input": "ip"
                                  },
                                  "port-number": 830,
                                  "connection-time-out": 5
                                }
                              }
                            }
                         }
                    }


                     C. Enrich the CBA to have complete package

                    Once the CBA design is done, you need to perform the enrichment action to obtain a fully model-driven package able to execute the self-service provisioning at run-time. Please refer to the section "Enriching (or enhancing) a blueprint".
