
CBA

The Controller Blueprint Archive (CBA) is the overall service design package, fully model-driven, needed to automate the instantiation and any configuration provisioning operation, such as day0 or day2 configuration.

The CBA is a .zip file, comprised of the following structure:

Code Block
.
├── Definitions
│   ├── blueprint.json
│   ├── artifact_types.json
│   ├── data_types.json
│   ├── node_types.json
│   ├── policy_types.json
│   ├── relationship_types.json
│   ├── resources_definition_types.json
│   └── *-mapping.json
├── Plans
│   ├── ResourceAssignment.xml
│   ├── ConfigAssign.xml
│   └── ConfigDeploy.xml
├── Scripts
│   └── python
│       ├── ConfigDeployExample.py
│       ├── ResourceResolutionExample.py
│       └── __init__.py
├── TOSCA-Metadata
│   └── TOSCA.meta
└── Templates

-------------------- Work in progress --------------------

This guide explains how to do service design to automate instantiation and day0 configuration.

Installation

ONAP is meant to be deployed within a Kubernetes environment; hence, the de facto way to deploy CDS is through Kubernetes.

ONAP also packages Kubernetes manifests as Charts, using Helm.

Prerequisite

https://docs.onap.org/en/latest/guides/onap-developer/settingup/index.html

Setup local Helm

Deck of Cards
idUser Guide
Card
defaulttrue
labelInstallation
Code Block
titlehelm repo
collapsetrue
helm init --history-max 200 # To install tiller to target Kubernetes if not yet installed
helm serve &
helm repo add local http://127.0.0.1:8879

Get the chart

Make sure to check out the release to use, by replacing $release-tag in the command below.

Code Block
titlegit clone
collapsetrue
git clone https://gerrit.onap.org/r/oom
git checkout tags/$release-tag
cd oom/kubernetes
make common
make cds

Install CDS

Code Block
titlehelm install
collapsetrue
helm install --name cds cds

Result

Code Block
titlekubectl output
collapsetrue
$ kubectl get all --selector=release=cds
NAME                                             READY   STATUS    RESTARTS   AGE
pod/cds-blueprints-processor-54f758d69f-p98c2    0/1     Running   1          2m
pod/cds-cds-6bd674dc77-4gtdf                     1/1     Running   0          2m
pod/cds-cds-db-0                                 1/1     Running   0          2m
pod/cds-controller-blueprints-545bbf98cf-zwjfc   1/1     Running   0          2m

NAME                            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
service/blueprints-processor    ClusterIP   10.43.139.9     <none>        8080/TCP,9111/TCP   2m
service/cds                     NodePort    10.43.254.69    <none>        3000:30397/TCP      2m
service/cds-db                  ClusterIP   None            <none>        3306/TCP            2m
service/controller-blueprints   ClusterIP   10.43.207.152   <none>        8080/TCP            2m

NAME                                        DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/cds-blueprints-processor    1         1         1            0           2m
deployment.apps/cds-cds                     1         1         1            1           2m
deployment.apps/cds-controller-blueprints   1         1         1            1           2m

NAME                                                   DESIRED   CURRENT   READY   AGE
replicaset.apps/cds-blueprints-processor-54f758d69f    1         1         0       2m
replicaset.apps/cds-cds-6bd674dc77                     1         1         1       2m
replicaset.apps/cds-controller-blueprints-545bbf98cf   1         1         1       2m

NAME                          DESIRED   CURRENT   AGE
statefulset.apps/cds-cds-db   1         1         2m
Card
labelDesign Time

CDS Design time

Below are the requirements to enable automation for a service within ONAP.

For instantiation, the goal is to be able to automatically resolve all the HEAT/Helm variables, called cloud parameters.

For post-instantiation, the goal is to configure the VNF with initial configuration.

As part of SDC design time, when defining the topology, for resources of type VF or PNF, you need to specify the following:

Deck of Cards
idDesign time
Card
defaulttrue
labelPrerequisite

Prerequisite

Gather the parameters:

Deck of Cards
idprerequisite
Card
labelinstantiation

Have the HEAT template along with the HEAT environment file.

or

Have the Helm chart along with the Values.yaml file (integration between MultiCloud and CDS TBD).

Card
labelconfiguration

Have the configuration template to apply on the VNF.

  1. XML for NETCONF
  2. JSON / XML for RESTCONF
  3. JSON for Ansible
  4. CLI
  5. ...
  • Identify which template parameters are static and dynamic
  • Create and fill in a table for all the dynamic values

    While doing so, identify the resources resolved using the same process; for instance, if two IPs have to be resolved through the same IPAM, the process to resolve each IP is the same.

    Card
    labelinstantiation

    Here is the information to capture for each dynamic cloud parameter:

    Capture the following for each parameter: Parameter Name, Data Dictionary Resource source, Data Dictionary Ingredients for resolution, and Output of resolution.

    Parameter Name: either the cloud parameter name or the placeholder given for the dynamic property.

    Deck of Cards
    idhow to resolve
    Card
    labelInput

    Value will be given as input in the request.

    Card
    labelDefault

    Value will be defaulted in the model.

    Card
    labelREST

    Value will be resolved by sending a query to the REST system

    Required ingredients: Auth, URL, URI, Payload, VERB

    Supported Auth types

    Deck of Cards
    idauth
    Card
    labelToken

    Use token based authentication

    • token
    Card
    labelBasic

    Use basic authentication

    • username
    • password
    Card
    labelSSL

    Use SSL basic authentication

    • keystore type
    • truststore
    • truststore password
    • keystore
    • keystore password
    URL: http(s)://<host>:<port>
    URI: /xyz
    Payload: JSON formatted payload
    VERB: HTTP method
    Card
    labelSQL

    Value will be resolved by sending a SQL statement to the DB system

    Required ingredients: Type, URL, Query, Username, Password

    Type: only maria-db is supported for now
    URL: jdbc:mysql://<host>:<port>/db
    Query: SQL statement
    Card
    labelCapability

    Value will be resolved through the execution of a script.

    These are all the required parameters to process the resolution of that particular resource.

    Deck of Cards
    idinput
    Card
    labelREST

    List of placeholders used for

    • URI
    • Payload
    Card
    labelDB

    List of placeholders used for

    • SQL statement

    This is the expected result from the system; you should know which value in the response is of interest to you.

    If it's a JSON payload, then you should think about the JSON path used to access the value of interest.
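    As an illustrative sketch (this is plain Python, not CDS code; the helper name and sample payload are hypothetical), a slash-separated path such as /address can be walked through a parsed JSON payload like this:

```python
import json

def resolve_json_path(payload, path):
    """Walk a slash-separated path (e.g. "/a/b/0") through a parsed JSON payload."""
    value = json.loads(payload) if isinstance(payload, str) else payload
    for key in filter(None, path.split("/")):
        # Support list indices as well as object keys
        value = value[int(key)] if isinstance(value, list) else value[key]
    return value

response = '{"id": 4, "address": "192.168.10.2/32", "status": 1}'
print(resolve_json_path(response, "/address"))  # -> 192.168.10.2/32
```

    The same idea applies whatever the depth of the value: each path segment selects one level of the JSON structure.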

    Card
    labelData Dictionary

    Data dictionary

    What is a data dictionary?

    For each unique identified dynamic resource, along with all their ingredients, we need to create a data dictionary.

    Here is the modeling guideline: Modeling Concepts#resourceDefinition-modeling

    Below are examples of data dictionaries.

    Deck of Cards
    idDD
    Card
    labelinput

    Value will be passed as input.

    Code Block
    themeEclipse
    titleunit-number
    {
        "tags": "unit-number",
        "name": "unit-number",
        "property": {
          "description": "unit-number",
          "type": "string"
        },
        "updated-by": "adetalhouet",
        "sources": {
          "input": {
            "type": "source-input"
          }
        }
      }
    Card
    labeldefault

    Value will be defaulted.

    Code Block
    themeEclipse
    titleprefix-id
    {
      "tags": "prefix-id",
      "name": "prefix-id",
      "property" :{
        "description": "prefix-id",
        "type": "integer"
      },
      "updated-by": "adetalhouet",
      "sources": {
        "default": {
          "type": "source-default"
        }
      }
    }
    Card
    labelrest

    Value will be resolved through REST.

    Modeling reference: Modeling Concepts#rest

    Panel
    titleprimary-config-data via rest source type

    In this example, we're making a POST request to an IPAM system with no payload.

    Some ingredients are required to perform the query, in this case $prefixId. Hence it is provided as an input-key-mapping and defined as a key-dependencies. Please refer to the modeling guideline for a more in-depth understanding.

    As part of this request, the expected response will be as below. What is of interest is the address field, as this is what we're trying to resolve.

    Code Block
    themeEclipse
    titleresponse
    collapsetrue
    {
        "id": 4,
        "address": "192.168.10.2/32",
        "vrf": null,
        "tenant": null,
        "status": 1,
        "role": null,
        "interface": null,
        "description": "",
        "nat_inside": null,
        "created": "2018-08-30",
        "last_updated": "2018-08-30T14:59:05.277820Z"
    }

    To tell the resolution framework what is of interest in the response, use the path property, which uses JSON_PATH to get the value.

    Code Block
    themeEclipse
    titlecreate_netbox_ip_address
    {
        "tags" : "oam-local-ipv4-address",
        "name" : "create_netbox_ip",
        "property" : {
          "description" : "netbox ip",
          "type" : "string"
        },
        "updated-by" : "adetalhouet",
        "sources" : {
          "primary-config-data" : {
            "type" : "source-rest",
            "properties" : {
              "type" : "JSON",
              "verb" : "POST",
              "endpoint-selector" : "ipam-1",
              "url-path" : "/api/ipam/prefixes/$prefixId/available-ips/",
              "path" : "/address",
              "input-key-mapping" : {
                "prefixId" : "prefix-id"
              },
              "output-key-mapping" : {
                "address" : "address"
              },
              "key-dependencies" : [ "prefix-id" ]
            }
          }
        }
      }
    Panel
    titleprimary-aai-data via rest source type

    primary-aai-data via type source-rest

    TBD

    Code Block
    titleprimary-aai-data sample
    {
      "name" : "primary-aai-data",
      "tags" : "primary-aai-data",
      "updated-by" : "Steve, Siani <steve.djissitchi@bell.ca>",
      "property" : {
        "description" : "primary-aai-data",
        "type" : "string"
      },
      "sources" : {
        "default": {
          "type": "source-default",
          "properties": {
          }
        },
        "input": {
          "type": "source-input",
          "properties": {
          }
        },
        "primary-aai-data" : {
          "type" : "source-rest",
          "properties": {
            "type": "JSON",
            "url-path": "$aai-port/aai/v14/network/generic-vnfs/generic-vnf/$vnf-id",
            "path": "",
            "input-key-mapping": {
              "aai-port": "port",
              "vnf-id": "vnf-id"
            },
            "output-key-mapping": {
            },
            "key-dependencies": [
              "port",
              "vnf-id"
            ]
          }
        }
      }
    }
    Card
    labeldb

    Value will be resolved through a database.

    Modeling reference: Modeling Concepts#sql

    In this example, we're making a SQL query to the primary database.

    Some ingredients are required to perform the query, in this case $vfmoduleid. Hence it is provided as an input-key-mapping and defined as a key-dependencies. Please refer to the modeling guideline for a more in-depth understanding.

    As part of this request, the expected response is the value returned by the query. In the output-key-mapping section, that value is mapped to the name of the resource to resolve.

    Code Block
    themeEclipse
    titlevf-module-type
    {
      "name": "vf-module-type",
      "tags": "vf-module-type",
      "property": {
        "description": "vf-module-type",
        "type": "string"
      },
      "updated-by": "adetalhouet",
      "sources": {
        "primary-db": {
          "type": "source-db",
          "properties": {
            "type": "SQL",
            "query": "select sdnctl.demo.value as value from sdnctl.demo where sdnctl.demo.id=:vfmoduleid",
            "input-key-mapping": {
              "vfmoduleid": "vf-module-number"
            },
            "output-key-mapping": {
              "vf-module-type": "value"
            },
            "key-dependencies": [
              "vf-module-number"
            ]
          }
        }
      }
    }
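    The resolution flow above can be sketched in plain Python. This is only an illustration — it uses sqlite3 instead of MariaDB so the snippet is self-contained, and the table content is made up:

```python
import sqlite3

# Toy in-memory database standing in for the primary (MariaDB) database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE demo (id TEXT, value TEXT)")
conn.execute("INSERT INTO demo VALUES ('vfmodule-1', 'vfw')")

# input-key-mapping: the :vfmoduleid placeholder is fed with the value of
# the already-resolved "vf-module-number" resource
resolved_store = {"vf-module-number": "vfmodule-1"}
row = conn.execute(
    "SELECT value FROM demo WHERE id = :vfmoduleid",
    {"vfmoduleid": resolved_store["vf-module-number"]},
).fetchone()

# output-key-mapping: the "value" column is mapped to "vf-module-type"
vf_module_type = row[0]
print(vf_module_type)  # -> vfw
```

    The key points mirrored from the data dictionary are the named placeholder in the query and the column-to-resource mapping of the result.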
    Card
    labelcapability

    Value will be resolved through the execution of a script.

    Modeling reference: Modeling Concepts#Capability

    In this example, we're making use of a Python script.

    Some ingredients are required to perform the resolution, in this case $vf-module-type. Hence it is provided as a key-dependencies. Please refer to the modeling guideline for a more in-depth understanding.

    As part of this request, the expected response will be set within the script itself.

    Code Block
    themeEclipse
    titleinterface-description
    {
      "tags": "interface-description",
      "name": "interface-description",
      "property": {
        "description": "interface-description",
        "type": "string"
      },
      "updated-by": "adetalhouet",
      "sources": {
        "capability": {
          "type": "source-capability",
          "properties": {
            "script-type": "jython",
            "script-class-reference": "Scripts/python/DescriptionExample.py",       
            "key-dependencies": [
              "vf-module-type"
            ]
          }
        }
      }
    }

    The script itself is as below.

    The key is to have the script class derived from the framework standards.

    In the case of resource resolution, the class to derive from is AbstractRAProcessor

    It will give the required methods to implement: process and recover, along with some utility functions, such as set_resource_data_value or addError.

    These functions either come from the AbstractRAProcessor class, or from the class it derived from.

    If the resolution fails, the recover method will be called with the exception as a parameter.

    Code Block
    themeEclipse
    titleScripts/python/DescriptionExample.py
    collapsetrue
    #  Copyright (c) 2019 Bell Canada.
    #
    #  Licensed under the Apache License, Version 2.0 (the "License");
    #  you may not use this file except in compliance with the License.
    #  You may obtain a copy of the License at
    #
    #      http://www.apache.org/licenses/LICENSE-2.0
    #
    #  Unless required by applicable law or agreed to in writing, software
    #  distributed under the License is distributed on an "AS IS" BASIS,
    #  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    #  See the License for the specific language governing permissions and
    #  limitations under the License.
    
    from abstract_ra_processor import AbstractRAProcessor
    from blueprint_constants import *
    from java.lang import Exception as JavaException
    
    class DescriptionExample(AbstractRAProcessor):
    
        def process(self, resource_assignment):
            try:
                # get key-dependencies value
                value = self.raRuntimeService.getStringFromResolutionStore("vf-module-type")
                
                # logic based on key-dependency outcome
                result = ""
                if value == "vfw":
                    result = "This is the Virtual Firewall entity"
                elif value == "vsn":
                    result = "This is the Virtual Sink entity"
                elif value == "vpg":
                    result = "This is the Virtual Packet Generator"
    
                # set the value of resource getting currently resolved
                self.set_resource_data_value(resource_assignment, result)
    
            except JavaException, err:
              log.error("Java Exception in the script {}", err)
            except Exception, err:
              log.error("Python Exception in the script {}", err)
            return None
    
        def recover(self, runtime_exception, resource_assignment):
            print self.addError(runtime_exception.getMessage())
            return None
    
    
    
    Card
    labelcomplex type

    Value will be resolved through REST, and the output will be a complex type.

    Modeling reference: Modeling Concepts#rest

    In this example, we're making a POST request to an IPAM system with no payload.

    Some ingredients are required to perform the query, in this case $prefixId. Hence it is provided as an input-key-mapping and defined as a key-dependencies. Please refer to the modeling guideline for a more in-depth understanding.

    As part of this request, the expected response will be as below.

    Code Block
    themeEclipse
    titleresponse
    collapsetrue
    {
        "id": 4,
        "address": "192.168.10.2/32",
        "vrf": null,
        "tenant": null,
        "status": 1,
        "role": null,
        "interface": null,
        "description": "",
        "nat_inside": null,
        "created": "2018-08-30",
        "last_updated": "2018-08-30T14:59:05.277820Z"
    }

    What is of interest are the address and id fields. For the process to return these two values, we need to create a custom data type, as below.

    Code Block
    titledt-netbox-ip
    collapsetrue
    {
      "version": "1.0.0",
      "description": "This is Netbox IP Data Type",
      "properties": {
        "address": {
          "required": true,
          "type": "string"
        },
        "id": {
          "required": true,
          "type": "integer"
        }
      },
      "derived_from": "tosca.datatypes.Root"
    }

    The type of the data dictionary will be dt-netbox-ip.

    To tell the resolution framework what is of interest in the response, the output-key-mapping section is used. The process will map the output-key-mapping to the defined data-type.

    Code Block
    themeEclipse
    titlecreate_netbox_ip_address
    {
        "tags" : "oam-local-ipv4-address",
        "name" : "create_netbox_ip",
        "property" : {
          "description" : "netbox ip",
          "type" : "dt-netbox-ip"
        },
        "updated-by" : "adetalhouet",
        "sources" : {
          "primary-config-data" : {
            "type" : "source-rest",
            "properties" : {
              "type" : "JSON",
              "verb" : "POST",
              "endpoint-selector" : "ipam-1",
              "url-path" : "/api/ipam/prefixes/$prefixId/available-ips/",
              "path" : "",
              "input-key-mapping" : {
                "prefixId" : "prefix-id"
              },
              "output-key-mapping" : {
                "address" : "address",
                "id" : "id"
              },
              "key-dependencies" : [ "prefix-id" ]
            }
          }
        }
      }
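    To illustrate how an empty path combined with output-key-mapping yields a complex value, here is a plain-Python sketch (not CDS code); the response content comes from the example above:

```python
# Sample response from the IPAM system (trimmed)
response = {
    "id": 4,
    "address": "192.168.10.2/32",
    "vrf": None,
    "status": 1,
}

# output-key-mapping: project the response onto the dt-netbox-ip fields
output_key_mapping = {"address": "address", "id": "id"}
dt_netbox_ip = {name: response[key] for name, key in output_key_mapping.items()}
print(dt_netbox_ip)  # -> {'address': '192.168.10.2/32', 'id': 4}
```

    Only the mapped fields survive the projection, which is what makes the resolved value conform to the dt-netbox-ip data type.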
    Card
    labelCBA Scaffolding
    AnchorCBA_SAFFHOLDINGCBA_SAFFHOLDING

    CBA scaffolding

    The overall purpose of this document is to constitute a CBA; see Modeling Concepts#ControllerBlueprintArchive for an understanding of what a CBA is.

    Now is the time to create the scaffolding for your CBA.

    What you will need is the following base directory/file structure:

    Code Block
    ├── Definitions
    │   └── blueprint.json                          Overall TOSCA service template (workflow + node_template)
    ├── Environments                                Contains *.properties files as required by the service
    ├── Plans                                       Contains Directed Graph
    ├── Scripts                                     Contains scripts
    │   ├── python                                  Python scripts
    │   └── kotlin                                  Kotlin scripts
    ├── TOSCA-Metadata
    │   └── TOSCA.meta                              Meta-data of overall package
    └── Templates                                   Contains combination of mapping and template

    The TOSCA.meta should have this information

    Code Block
    TOSCA-Meta-File-Version: 1.0.0
    CSAR-Version: 1.0
    Created-By: Alexis de Talhouët (adetalhouet89@gmail.com)
    Entry-Definitions: Definitions/blueprint.json					<- Path reference to the blueprint.json file. If the file name is changed, change here accordingly.
    Template-Tags: ONAP, CBA, Test
    Content-Type: application/vnd.oasis.bpmn

    The blueprint.json should have the following metadata

    Code Block
    {
      "metadata": {
        "template_author": "Alexis de Talhouët",
        "author-email": "adetalhouet89@gmail.com",
        "user-groups": "ADMIN, OPERATION",
    "template_name": "golden",									<- This is the overall CBA name; it will be referred to later as sdnc_blueprint_name
    "template_version": "1.0.0",								<- This is the overall CBA version; it will be referred to later as sdnc_blueprint_version
        "template_tags": "ONAP, CBA, Test"
      }
    . . .
    Card
    labelWorkflow

    Workflows

    The following workflows are contracts established between SO, SDNC and CDS to cover the instantiation and the post-instantiation use cases.

    Please refer to the modeling guide to understand workflow concept: Modeling Concepts#workflow

    The workflow definition will be added within the blueprint.json file, see CBA scaffolding.

    Deck of Cards
    idWorkflow
    Card
    labelresource-assignment

    resource-assignment

    This action is meant to assign resources needed to instantiate the service, e.g. to resolve all the cloud parameters.

    Also, this action has the ability to perform a dry-run, meaning that the result from the resolution will be made visible to the user.

    If the user is fine with the result, they can proceed; otherwise (TBD), they will have the opportunity to re-trigger the resolution.

    Context

    This action is triggered by Generic-Resource-API (GR-API) within SDNC as part of the AssignBB orchestrated by SO.

    It will be triggered for each VNF and VF-Module (referred to as entity below).

    See SO Building blocks Assignment.

    Component

    This action type requires a node_template of type component-resource-resolution

    Templates

    Understand resource accumulator templates

    These templates are specific to the instantiation scenario, and rely on GR-API within SDNC.

    There are two categories of resources: the ones that get created (and will be released when destroying the service), and the ones that get resolved, which already existed. A capability defines the former.

    The resource accumulator template is composed of the following sections:

    resource-accumulator-resolved-data

    Defines all the resources that can be resolved directly from the context. It expresses a direct mapping between the name of the resource and its value.

    Code Block
    titleRA resolved data
    collapsetrue
      "resource-accumulator-resolved-data": [
        {
          "param-name": "service-instance-id",
          "param-value": "${service-instance-id}"
        },
        {
          "param-name": "vnf_id",
          "param-value": "${vnf-id}"
        }
      ]
    capability-data

    Defines the logic to use to create a specific resource, along with the ingredients required to invoke the capability and the output mapping. See the ingredients as function parameters, and output mapping as returned value.

    The logic to resolve the resource is a DG, hence DG development is required to support a new capability.

    Currently the following capabilities exist:

    Netbox: netbox-ip-assign

    Code Block
    titleExample
    collapsetrue
        {
          "capability-name": "netbox-ip-assign",
          "key-mapping": [
            {
              "payload": [
                {
                  "param-name": "service-instance-id",
                  "param-value": "${service-instance-id}"
                },
                {
                  "param-name": "prefix-id",
                  "param-value": "${private-prefix-id}"
                },
                {
                  "param-name": "vf-module-id",
                  "param-value": "${vf-module-id}"
                },
                {
                  "param-name": "external_key",
                  "param-value": "${vf-module-id}-vpg_private_ip_1"
                }
              ],
              "output-key-mapping": [
                {
                  "resource-name": "vpg_private_ip_1",
                  "resource-value": "${vpg_private_ip_1}"
                }
              ]
            }
          ]
        }

    Name generation: generate-name

    Code Block
    titleExample
    collapsetrue
        {
          "capability-name": "generate-name",
          "key-mapping": [
            {
              "payload": [
                {
                  "param-name": "resource-name",
                  "param-value": "vnf_name"
                },
                {
                  "param-name": "resource-value",
                  "param-value": "${vnf_name}"
                },
                {
                  "param-name": "external-key",
                  "param-value": "${vnf-id}_vnf_name"
                },
                {
                  "param-name": "policy-instance-name",
                  "param-value": "${vf-naming-policy}"
                },
                {
                  "param-name": "nf-role",
                  "param-value": "${nf-role}"
                },
                {
                  "param-name": "naming-type",
                  "param-value": "VNF"
                },
                {
                  "param-name": "AIC_CLOUD_REGION",
                  "param-value": "${aic-cloud-region}"
                }
              ],
              "output-key-mapping": [
                {
                  "resource-name": "vnf_name",
                  "resource-value": "${vnf_name}"
                }
              ]
            }
          ]
        }

    Add a new capability

    In order to add a new capability, you need to do the following:

  • Create the DG that will handle the logic to resolve the resource
    If your DG requires properties, or templates, etc., use a similar concept from step 2 to load them in the SDNC container (FYI, using a persistent volume for them is highly recommended).
  • Load the DG within SDNC

    Code Block
    titleExample of script to automate deployment of DG
    collapsetrue
    #!/bin/sh
    
    # This script takes care of loading the DG into the runtime of SDNC.
    # The DG file name has to follow this pattern:
    # GENERIC-RESOURCE-API_{rpc_name}_{version}
    
    usage() {
      echo "./load-dg.sh <dg> <sdnc_ip>"
      exit
    }
    
    if [[ -z $1 ]]
    then
        usage
    fi
    
    rpc_name=`echo "$1" | cut -d'_' -f2 | cut -d'.' -f1`
    version=`echo "$1" | cut -d'_' -f3`
    content=`cat $1`
    ip=$2
    
    data="$(curl -s -o /dev/null -w %{url_effective} --get --data-urlencode "$content" "")"
    dg_xml_escaped="${data##/?}"
    
    echo -e "module=GENERIC-RESOURCE-API&rpc=$rpc_name&flowXml=$dg_xml_escaped" > payload
    
    echo -e "    Installing $rpc_name version ${version%.*}"
    curl -X  POST \
      http://$ip:$SDNC_NODE_PORT/uploadxml \
      -H 'Authorization: Basic ZGd1c2VyOnRlc3QxMjM=' \
      -H 'Content-Type: application/x-www-form-urlencoded' \
      -d @payload
    
    rm payload
    
    echo -e "    Activating $rpc_name version ${version%.*}"
    activate_uri="activateDG?module=GENERIC-RESOURCE-API&rpc=$rpc_name&mode=sync&version=${version%.*}&displayOnlyCurrent=true"
    curl -X GET \
      -H 'Accept: application/json' \
      -H 'Authorization: Basic ZGd1c2VyOnRlc3QxMjM=' \
      -H 'Content-Type: application/json' \
      http://$ip:$SDNC_NODE_PORT/$activate_uri
    
    

    Add the capability in the self-serve-vnf-assign DG and/or self-serve-vf-module-assign in the node named set ss.capability.execution-order[] then upload the updated version of this DG.
    When doing so, make sure to increment the last parameter ss.capability.execution-order_length


    Required templates

    See Modeling Concepts#template

    The name of the templates is very important and can't be random. Below are the requirements.

    VNF

    The VNF Resource Accumulator Template prefix name can be anything, but what is very important is that when integrating with SDC the sdnc_artifact_name property of the VF or PNF needs to be the same; see here.

    VF-Modules

    Each vf-module will have its own resource accumulator template, and its prefix name must be the vf-module-label, which is nothing but the name of the HEAT file defining the OS::Nova::Server

    Example:

    If the file is named vfw.yaml, the vf-module-label will be vfw

    For instance, with the vFW service HEAT definition, you will see the following screen in the VSP within SDC, showing the label of each vf-module

    Expand
    titleVSP attachement


    Mapping

    Each template requires its associated mapping file, see Modeling Concepts#ArtifactMappingResource

    Required Inputs

    Required input property: template-prefix

    SDNC will populate this input with the name of the template to execute. If doing VNF Assign, it will use sdnc_artifact_name as template-prefix. If doing VF-Module Assign, it will use the vf-module-label as template-prefix.

    Output

    It is necessary to provide the resolved template as output. To do so, we will use the Modeling Concepts#getAttribute expression.

    Also, as mentioned here Modeling Concepts#resourceResolution, the resource resolution component node will populate an attribute named assignment-params with the result.

    Finally, the name of the output has to be meshed-template so SDNC GR-API knows how to properly parse the response.

    Example

    Here is an example of the resource-assignment workflow.

    Code Block
    themeEclipse
    titleresource-assignment
    {
      "workflows": {
        "resource-assignment": {
          "steps": {
            "resource-assignment-process": {
              "description": "Resource Assign Workflow",
              "target": "resource-assignment-process"
            }
          },
          "inputs": {
            "template-prefix": {
              "required": true,
              "type": "string"
            }
          },
          "outputs": {
            "meshed-template": {
              "type": "string",
              "value": {
                "get_attribute": [
                  "SELF",
                  "assignment-params"
                ]
              }
            }
          }
        }
      }
    }

    Understand SDNC DG flow logic

    Logic for vnf and vf-module assignment is pretty much the same.

    This is the general DG logic of the VNF assign flow and sub-flows:

    1. call vnf-topology-operation
      1. call vnf-topology-operation-assign
        1. call self-serve-vnf-assign
          1. set capability.execution-order
          2. call self-serve-vnf-ra-assignment
            1. execute REST call to CDS blueprint processor
            2. put resource-accumulator-resolved-data in MDSAL GR-API/services/service/$serviceInstanceId/vnfs/vnf/$vnfId
          3. call self-serve- + capability-name
          4. put vnf information in AAI (including the selflink)
        2. call naming-policy-generate-name
        3. put generic-vnf relationship in AAI

    This is the general logic of the vf-module assign flow and sub-flows:

    1. call vf-module-topology-operation
      1. call vf-module-topology-operation-assign
        1. set service-data based on SO request (userParams / cloudParams)
        2. call self-serve-vf-module-assign
          1. set capability.execution-order
          2. call self-serve-vfmodule-ra-assignment
            1. execute REST call to CDS blueprint processor
              1. put resource-accumulator-resolved-data in MDSAL GR-API/services/service/$serviceInstanceId/vnfs/vnf/$vnfId/vf-modules/vf-module
          3. call self-serve- + capability-name
        3. put vf-module information in AAI
        4. put vnfc information in AAI
    Card
    labelconfig-assign

    config-assign

    This action is meant to assign all the resources and mesh the templates needed for the configuration to apply during post-instantiation (day0 config).

    If the user is satisfied with the result, they can proceed; otherwise (TBD), they will have the opportunity to re-trigger the resolution.

    Context

    This action is triggered by SO after the AssignBB has been executed for Service, VNF and VF-Module. It corresponds to the ConfigAssignBB.

    See SO Building blocks Assignment.

    Steps

    This is a single-action workflow; hence, the target refers to a node_template of type component-resource-resolution.

    Inputs

    Property: resolution-key

    Description: The dry-run functionality requires the ability to retrieve, later in the process, the resolution that was made.

    The combination of the artifact-name and the resolution-key uniquely identifies the result.

    Output

    In order to perform dry-run, it is necessary to provide the meshed resolved template as output. To do so, the use of Modeling Concepts#getAttribute expression is required.

    Also, as mentioned here Modeling Concepts#resourceResolution, the resource resolution component node will populate an attribute named assignment-params with the result.

    Example

    Here is an example of the config-assign workflow:

    Code Block
    themeEclipse
    titleconfig-assign
    {
      "workflows": {
        "config-assign": {
          "steps": {
            "config-assign-process": {
              "description": "Config Assign Workflow",
              "target": "config-assign-process"
            }
          },
          "inputs": {
            "resolution-key": {
              "required": true,
              "type": "string"
            },
            "config-assign-properties": {
              "description": "Dynamic PropertyDefinition for workflow(config-assign).",
              "required": true,
              "type": "dt-config-assign-properties"
            }
          },
          "outputs": {
            "dry-run": {
              "type": "json",
              "value": {
                "get_attribute": [
                  "SELF",
                  "assignment-params"
                ]
              }
            }
          }
        }
      }
    }
    Card
    labelconfig-deploy

    config-deploy

    This action is meant to push the configuration templates defined during the config-assign step for the post-instantiation.

    This action is triggered by SO after the CreateBB has been executed for all the VF-Modules.

    Context

    This action is triggered by SO after the CreateVnfBB has been executed. It corresponds to the ConfigDeployBB.

    See SO Building blocks Assignment.

    Steps

    This is a single-action workflow; hence, the target refers to a node_template of type component-netconf-executor, component-jython-executor, or component-restconf-executor.

    Inputs

    Property: resolution-key

    Description: Needed to retrieve the resolution that was made earlier in the process.

    The combination of the artifact-name and the resolution-key uniquely identifies the result.

    Output

    SUCCESS or FAILURE

    Example

    Here is an example of the config-deploy workflow:

    Code Block
    themeEclipse
    titleconfig-deploy
    {
      "workflows": {
        "config-deploy": {
          "steps": {
            "config-deploy": {
              "description": "Config Deploy using Python (Netconf) script",
              "target": "config-deploy-process"
            }
          },
          "inputs": {
            "resolution-key": {
              "required": true,
              "type": "string"
            },
            "config-deploy-properties": {
              "description": "Dynamic PropertyDefinition for workflow(config-deploy).",
              "required": true,
              "type": "dt-config-deploy-properties"
            }
          }
        }
      }
    }
    Card
    labelComponent
    Deck of Cards
    idComponent
    Card
    labelresource-assignment-process

    resource-assignment-process

    Card
    labelconfig-assign-process

    config-assign-process

    Card
    labelconfig-deploy-process

    config-deploy-process

    Card
    labelTemplate
    Card
    labelRequirement
    Card
    labelSDC Modeling & Distribution

    Introduction

    The purpose of this section is to describe the integration of CDS within SDC.

    What's new

    At the VF and PNF level, a new artifact type, CONTROLLER_BLUEPRINT_ARCHIVE, allows the designer to load the previously designed CBA as part of the resource.

    How to add the CBA in SDC VF resource (similar for PNF)

    Create the VF resource

    Image Removed

    Click on Deployment Artifact, then Add other artifacts, and select your CBA.

    Image Removed
    Image Removed

    Check the artifact is uploaded OK, and click on Certify.

    Image Removed

    Create a new service model, and add the newly created VF (including CBA artifact) to the new service model. Click on "Add Service"

    Image Removed

    Click on "Composition", and drag the VF we created from the palette on the left onto the canvas in the middle.

    Then, click on "Submit for Testing".

    Image Removed

    AnchorSDC_CBA_PROPERTIESSDC_CBA_PROPERTIES

    Click on Properties Assignments, then click on the service name, e.g. "CDS-VNF-TEST" from the right bar.

    Type "sdnc" in the filter box, add the sdnc_model_name, sdnc_model_version, and sdnc_artifact_name properties, and click "Save".

    • sdnc_model_name - This is the name of the blueprint (e.g. CBA name)
    • sdnc_model_version - This is the version of the blueprint
    • sdnc_artifact_name - This is the name of the VNF resource accumulator template

    Image Removed

    Type "skip" in the filter box, and set "skip post instantiation" to FALSE, then click "Save".

    Image Removed

    Login as Tester (jm0007/demo123456!) and accept the new service.

    Login as Governor (gv0001/demo123456!) and approve for distribution.

    Login as Operator (op0001/demo123456!) and click on "Distribute".

    Click on "Monitor" to check the progress of the distribution, and check that all ONAP components were notified, and downloaded the artifacts, and deployed OK.

    Image Removed

    Card
    labelDesign a new CBA
    titleHow to create a new CBA from scratch.

    Starting from Dublin release, CDS offers a new package configuration to design the services provisioning. This section describes step by step the procedure of designing a new CBA from scratch.

    The CBA package content is well described in CDS Modeling Concepts and in the Design Time section, which show the structure of a CBA and its different definitions/artifacts. This section focuses on the creation of a new CBA (the structure: required folders and files) and the enrichment procedure to generate the complete config file.

    CBA directory and structure

    Code Block
    titleCBA directory structure
    ├── CBA-archive-name                             # CBA Root Directory        
    |   └── Definitions/        
    │       └── CBA_configuration_file.json          # CBA configuration file (Mandatory)               
    |   └── Environments/                            # All environment files contained in this folder are loaded in Blueprint processor run-time       
    │       └── env-prod.properties                                             
    │       └── env-test.properties        
    |   └── Plans/        
    │       └── CONFIG_DirectedGraphExample.xml      # Directed graph artifact        
    |   └── Scripts/                                 # Script used for capability resource resolution
    │       └── kotlin/          
    │           └── script_kotlin.kt
    │       └── ansible/          
    │           └── ansible_file.yaml
    │       └── python/          
    │           └── SamplePython.py                     
    |   └── TOSCA-Metadata/        
    │       └── TOSCA.meta                           # CBA entry point (Mandatory)                
    |   └── Templates/        
    │       └── example1-template.jinja              # Template file that will dynamic represent a payload in some execution node (Extensions supported: .vtl and .jinja)    
    │       └── example1-mapping.json                # List of variables that will be resolved to fulfill the jinja template
    │       └── example2-template.vtl                # Velocity Template file 
    │       └── example2-mapping.json                # Mapping file for velocity template
    

    Image Removed

                                                                                                                                             Fig. CBA config file structure

       

     A. CBA configuration file sections description

    The above diagram shows a simple CBA with one workflow and one node template. The following describes each section defined in CBA config file.

    • CBA Metadata:

    This section specifies information about the CBA, such as:

       - The Author: Name and email

       - User privileges for this self-service provisioning execution

       - CBA identifier: Template name and Version (Ex. Template name: My-self-service-name, Version: 1.0.0)

       - Template tags: Reference words that can be used to find this CBA.

    • DSL Definition:

    Here we define, in JSON, all parameters needed for service provisioning.

    Ex. Endpoint selector to provide remote Ansible server parameters.

    Code Block
    languageactionscript3
    titleansible-remote-endpoint
    linenumberstrue
    "ansible-remote-endpoint" : {
       "type" : "token-auth",
       "url" : "http://ANSIBLE_IP_ADDRESS",
       "token" : "Bearer J9gEtMDqf7P4YsJ74fioY9VAhLDIs1"
    }
    • Workflows execution:

                  - my-workflow1: This workflow describes the action that will trigger the self-service provisioning at run time. A workflow can take inputs and return outputs. It can also follow one or more steps. In this example, only one step is defined.

    Code Block
    languageperl
    titleWorkflow: my-workflow1
    linenumberstrue
    collapsetrue
    "my-workflow1" : {
       "steps" : {
          "execute-script" : {
             "description" : "some description",
             "target" : "my-workflow-target-node-node-template",
             "activities" : [ {
                "call_operation" : ""
             } ]
          }
       },
       "inputs" : {
          "my-input" : {
             "required" : false,
             "type" : "string"
          }
       }
    }

    Each step points to a target which is the corresponding node template, and the target specified here is: my-workflow-target-node-node-template.

    • Node templates: This section provides the self-service execution plan; usually a DG is used here to describe a complex workflow. However, the above CBA contains a simple node template (my-workflow-target-node-node-template) without a DG:
    Code Block
    languageperl
    titlemy-workflow-target-node-node-template
    linenumberstrue
    collapsetrue
    "my-workflow-target-node-node-template" : {
       "type" : "node-template-execution-type",
       "interfaces" : {
          "NodeTemplateInterface" : {
             "operations" : {
                "process" : {
                   "implementation" : {
                      "primary" : "component-script"
                   },
                   "inputs" : {
                      "command" : "python SamplePython.py",
                      "packages" : [ {
                         "type" : "pip",
                         "package" : [ "pyaml" ]
                      } ],
                      "argument-properties" : "*remote-argument-properties",
                      "dynamic-properties" : "*remote-argument-properties"
                   }
                }
             }
          }
       },
       "artifacts" : {
          "component-script" : {
             "type" : "artifact-script-python",
             "file" : "Scripts/python/SamplePython.py"
          }
       }
    }

    The node template is defined by the node-template-execution-type. This type specifies the component function to use for this node template execution. The following shows the different components that can be executed as a node template:

    Code Block
    titleNode template types
    ├── component-resource-resolution                             # CBA Root Directory        
    |   └── Interface:        
    │       ├── ResourceResolutionComponent                       # Component to resolve resources               
    │           └── Resolution approaches:                        
    │       	    ├── rr-processor-source-capability             # Resolve using Capability scripts such as jython or kotlin
    │       		├── rr-processor-source-processor-db           # Resolve using database query
    │       		├── rr-processor-source-default                # resolve by getting default value provided
    │       		├── rr-processor-source-rest                   # Resolve using REST API request
    ├── component-jython-executor                                 # Component to execute Jython scripts
    |   └── Interface:        
    │       ├── ComponentJythonExecutor
    ├── component-remote-python-executor                          # Component to execute remote python scripts
    |   └── Interface:        
    │       ├── ComponentRemotePythonExecutor
    ├── component-restconf-executor                               # Component to execute Restconf operations 
    |   └── Interface:        
    │       ├── ComponentRestconfExecutor
    ├── component-netconf-executor                                # Component to execute netconf operations
    |   └── Interface:        
    │       ├── ComponentNetconfExecutor
    ├── component-cli-executor                                    # Cli component
    |   └── Interface:        
    │       ├── ComponentCliExecutor
    ├── component-remote-ansible-executor                         # Component to execute remote ansible playbook
    |   └── Interface:        
    │       ├── ComponentRemoteAnsibleExecutor
    

         When the workflow points to a DG node template, the DG describes the full execution sequence to run for the corresponding workflow steps. In the following, the workflow points to a DG and executes two node templates:

    • Workflow with DG
    Code Block
    languageperl
    titleWorkflow: my-workflow2
    linenumberstrue
    collapsetrue
    "my-workflow2" : {
       "steps" : {
          "execute-script" : {
             "description" : "some description here...",
             "target" : "my-workflow-target-node-template-with-DG",
             "activities" : [ {
                "call_operation" : ""
             } ]
          }
       },
       "inputs" : {
          "my-input" : {
             "required" : false,
             "type" : "string"
          }
       }
    }
    • Node templates with DG
    Code Block
    languageperl
    titlemy-workflow-target-node-template-with-DG
    linenumberstrue
    collapsetrue
    "my-workflow-target-node-template-with-DG" : {
       "type" : "dg-generic",
       "properties" : {
          "content" : {
             "get_artifact" : [ "SELF", "dg-my-workflow1-target-node-template-with-DG" ]
          },
          "dependency-node-templates" : [ "target-node-template1", "target-node-template2" ]
       },   
       "artifacts" : {
           "dg-my-workflow1-target-node-template-with-DG" : {
              "type" : "artifact-directed-graph",
              "file" : "Plans/CONFIG_DirectedGraphExample.xml"
           }
       }
    }

    In the DG below, we define the following sequence: [target-node-template1] → [target-node-template2]

    Code Block
    languagexml
    titleCONFIG_DirectedGraphExample.xml
    linenumberstrue
    collapsetrue
    <service-logic
      xmlns='http://www.onap.org/sdnc/svclogic'
      xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance'
      xsi:schemaLocation='http://www.onap.org/sdnc/svclogic ./svclogic.xsd' module='CONFIG' version='1.0.0'>
        <method rpc='dg-operation' mode='sync'>
            <block atomic="true">
                <execute plugin="target-node-template1" method="process">
                    <outcome value='failure'>
                        <return status="failure">
                        </return>
                    </outcome>
                    <outcome value='success'>
                        <execute plugin="target-node-template2" method="process">
                            <outcome value='failure'>
                                <return status="failure">
                                </return>
                            </outcome>
                            <outcome value='success'>
                                <return status='success'>
                                </return>
                            </outcome>
                        </execute>
                    </outcome>
                </execute>
            </block>
        </method>
    </service-logic>

     B. Other artifacts in CBA

    This section describes the other artifacts of the CBA needed to have a model-driven package for self-service provisioning:

    • CBA Entry point: TOSCA.meta file
    Code Block
    languagecss
    titleTOSCA.meta
    linenumberstrue
    TOSCA-Meta-File-Version: 1.0.0
    CSAR-Version: 1.0
    Created-By: Steve Siani <alphonse.steve.siani.djissitchi@ibm.com>
    Entry-Definitions: Definitions/CBA_configuration_file_name.json
    Template-Name: baseconfiguration
    Template-version: 1.0.0
    Template-Tags: Steve Siani, remote_ansible
    • Environment files: Some parameters need to be resolved to fulfill the template. It is possible to provide additional variables in environment files within your CBA; in this approach, the service gets some parameters from the environment files. The designer can define many environment variables across files, and those files are loaded automatically into the running self-service:

                     Constraint: Save environment files in [CBA Root Folder]/Environments/

    Code Block
    titleEnvironment files in CBA
    ├── CBA-archive-name                             # CBA Root Directory        
    |   .
    |   .
    |   .               
    |   └── Environments/                            # All environment files contained in this folder are loaded in Blueprint processor run-time       
    │       └── env-prod.properties                                             
    │       └── env-test.properties   
    │       └── AdditionalApplications.properties      
    |   .
    |   .
    |   . 
    
    Code Block
    languagexml
    themeEmacs
    titleenv-prod.properties
    linenumberstrue
    collapsetrue
    env-prod.ansible_ssh_user=<username>
    env-prod.ansible_ssh_pass=<password>
    env-prod.evi_id=<id>
    env-prod.service_db_url=<service_db_url>
    env-prod.topology_url=<topology_url>
    env-prod.resource_allocator_url=<resource_allocator>
    ...
    Code Block
    languagexml
    themeEmacs
    titleenv-test.properties
    linenumberstrue
    collapsetrue
    env-test.ansible_ssh_user=<username>
    env-test.ansible_ssh_pass=<password>
    env-test.evi_id=<id>
    env-test.service_db_url=<service_db_url>
    env-test.topology_url=<topology_url>
    env-test.resource_allocator_url=<resource_allocator>
    ...

    Note: When environment files are provided in the CBA under the Environments directory, the variables contained in those files are loaded into the Blueprint run-time context under a node template named "BPP". Accessing those variables is then possible by calling getNodeTemplateAttributeValue("BPP", attribute) on the Blueprint Runtime Service, where "attribute" refers to the environment variable defined in the environment file.

    Code Block
    languagejava
    titleEx. Getting environment variables from run time
    linenumberstrue
    val username = blueprintRuntimeService.getNodeTemplateAttributeValue("BPP", "env-test.ansible_ssh_user").asText()
    • Template artifacts: Contain the template file and the corresponding template mapping. The template provides dynamic content to the self-service for applying configuration.

    Ex. Jinja template sample

    Code Block
    languageyml
    titleexample-template.jinja
    linenumberstrue
    collapsetrue
    site_id: {{ site_id }}
    tenant_name: {{ tenant_name }}
    Interfaces:
    {%- for interface in interfaces %}
        interface {{ interface.name }}
        description {{ interface.description }}
        ipv4 address {{ interface.ipv4 }}
        mtu {{ interface.mtu }}
    {%- endfor %}

    Ex. Velocity template sample

    Code Block
    languageyml
    titleexample-template.vtl
    linenumberstrue
    collapsetrue
    site_id: ${site_id}
    tenant_name: ${tenant_name}
    Interfaces:
    #foreach( $interface in $interfaces )
        interface $interface.name
        description $interface.description
        ipv4 address $interface.ipv4
        mtu $interface.mtu
    #end

    Ex. Corresponding template mapping file sample

    Code Block
    languageyml
    titleexample-mapping.json
    linenumberstrue
    collapsetrue
    [
    	{
    		"name": "environment",
    		"input-param": true,
    		"property": {
    			"type": "string"
    		},
    		"dictionary-name": "input-source",
    		"dictionary-source": "input",
    		"dependencies": []
    	},
    	{
    		"name": "site_id",
    		"input-param": true,
    		"property": {
    			"type": "string"
    		},
    		"dictionary-name": "input-source",
    		"dictionary-source": "input",
    		"dependencies": []
    	},
    	{
    		"name": "tenant_name",
    		"input-param": true,
    		"property": {
    		  "type": "string"
    		},
    		"dictionary-name": "input-source",
    		"dictionary-source": "input",
    		"dependencies": []
    	},
        {
    		"name": "interfaces",
    		"input-param": true,
    		"property": {
    		   "type": "list",
    		   "entry_schema": {
    		      "type": "string"
    		   }
    		},
    		"dictionary-name": "properties-capability-source",
    		"dictionary-source": "capability",
    		"dependencies": ["environment"]
    	}
    ]

    In this template, some parameters are resolved using the input source and some are resolved using properties-capability-source.
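
    To sanity-check a template/mapping pair outside CDS, the simple ${...} placeholders of a Velocity template can be previewed with Python's standard-library string.Template. This is a local sketch only, under the assumption that you just want to eyeball the rendered output; CDS itself renders templates with its own Velocity/Jinja engines, and loop constructs such as #foreach are not covered by this sketch:

    ```python
    from string import Template

    # Simplified Velocity-style template (simple ${...} placeholders only)
    vtl = "site_id: ${site_id}\ntenant_name: ${tenant_name}\n"

    # Sample values that CDS would resolve via the mapping file (input source)
    resolved = {"site_id": "site-001", "tenant_name": "tenant-A"}

    # Render the template with the resolved values
    rendered = Template(vtl).substitute(resolved)
    print(rendered)
    ```

    A missing key raises KeyError, which mimics a resolution failure for a required parameter.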

    • Script artifacts: You may need to resolve resources using a customized script (Kotlin or Python) or execute a remote Python script on a device. In that case, define the scripts in your CBA under the Scripts directory.

                       - Resource resolution using python script 

    In the CBA, you may need to define and resolve variables. This is possible by declaring these variables as data types, each belonging to a resource dictionary. Let's take the example of the variable declared above in the template mapping.

    Code Block
    languageyml
    titleVariable: interfaces
    linenumberstrue
        {
    		"name": "interfaces",
    		"input-param": true,
    		"property": {
    		   "type": "list",
    		   "entry_schema": {
    		      "type": "string"
    		   }
    		},
    		"dictionary-name": "properties-capability-source",
    		"dictionary-source": "capability",
    		"dependencies": ["environment"]
    	}

    This variable is declared as a list resolved using the resource dictionary named "properties-capability-source", from the dictionary source "capability", and depends on the variable called "environment". The dependency means that the "environment" variable must be resolved before the "interfaces" variable.
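
    The dependency mechanism can be illustrated with a small sketch (this is illustrative Python, not CDS code): each mapping entry declares its "dependencies", and those dependencies imply the order in which resources are resolved.

    ```python
    # Illustrative only: mapping names -> their declared dependencies
    mappings = {
        "environment": [],
        "site_id": [],
        "interfaces": ["environment"],  # resolved only after "environment"
    }

    def resolution_order(mappings):
        """Return a valid resolution order honoring the declared dependencies."""
        order, seen = [], set()

        def visit(name):
            if name in seen:
                return
            for dep in mappings[name]:
                visit(dep)  # resolve dependencies first
            seen.add(name)
            order.append(name)

        for name in mappings:
            visit(name)
        return order

    print(resolution_order(mappings))  # ['environment', 'site_id', 'interfaces']
    ```

    CDS performs the equivalent ordering internally; the point is simply that "interfaces" can rely on the outcome of "environment" being present in the resolution store.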

    The resource dictionary "properties-capability-source" must be loaded into the CDS run time; it points to the Python script, executed as Jython, that resolves the "interfaces" variable.

    Code Block
    languageyml
    titleResource dictionary: properties-capability-source
    linenumberstrue
    collapsetrue
        {
           "name": "properties-capability-source",
           "updated-by": "Steve Alphonse Siani, alphonse.steve.siani.djissitchi@ibm.com",
           "tags": "properties-capability-source",
           "property" :{
               "description": "Data dictionary used to read properties.",
               "type": "string"
           },
           "sources": {
               "input": {
                  "type": "source-input"
                },
               "default": {
                  "type": "source-default",
                  "properties": {}
               },
               "capability": {
                  "type": "source-capability",
                  "properties" : {
                     "script-type" : "jython",
                     "script-class-reference" : "Scripts/python/ResolvProperties.py"
                  }
               }
           }
        }
    Code Block
    languagepy
    titleScripts/python/ResolvProperties.py
    linenumberstrue
    collapsetrue
    from abstract_ra_processor import AbstractRAProcessor
    from blueprint_constants import *
    
    class ResolvProperties(AbstractRAProcessor):
    
        def process(self, resource_assignment):
            result = ""
            env = ""
            attribute = ""
            # get dependencies result
            value = self.raRuntimeService.getStringFromResolutionStore("environment")
            
            #logic based on dependency outcome
            env = "env-" + value
    
            if resource_assignment.name == "ansible_ssh_user":
                attribute = env + ".ansible_ssh_user"
            if resource_assignment.name == "ansible_ssh_pass":
                attribute = env + ".ansible_ssh_pass"
            if resource_assignment.name == "evi_id":
                attribute = env + ".evi_id"
            if resource_assignment.name == "service_db_url":
                attribute = env + ".service_db_url"
            if resource_assignment.name == "topology_url":
                attribute = env + ".topology_url"
            if resource_assignment.name == "resource_allocator_url":
                attribute = env + ".resource_allocator_url"
    
            result = self.raRuntimeService.getNodeTemplateAttributeValue("BPP", attribute).asText()
    
            # set value for resource getting currently resolved
            self.set_resource_data_value(resource_assignment, result)
            return None
    
        def recover(self, runtime_exception, resource_assignment):
            log.error("Exception in the script {}", runtime_exception)
            print self.addError(runtime_exception.cause.message)
            return None

           

                       - Component execution on Netconf device with python script 

    In the following, we define a node template execution as a "component-netconf-executor", and in the inputs we specify the script to run against the Netconf device.

    Code Block
    languageyml
    titleComponent Netconf executor
    linenumberstrue
    collapsetrue
    "node_templates": {
      "config-deploy": {
        "type": "component-netconf-executor",
        "requirements": {
          "netconf-connection": {
            "capability": "netconf",
            "node": "netconf-device",
            "relationship": "tosca.relationships.ConnectsTo"
          }
        },
        "interfaces": {
          "ComponentScriptExecutor": {
            "operations": {
              "process": {
                "inputs": {
                  "script-type": "jython",
                  "script-class-reference": "Scripts/python/ConfigDeploy.py",
    ...


    Data Dictionary

    A data dictionary defines a specific resource that can be resolved using the supported sources listed below.

    A data dictionary can support multiple resources.

    The main goal of a data dictionary is to define a generic entity that can be shared across the service catalog.

    Resolution sources

    • Input
    • Default
    • SQL
      • Default (SDNC DB)
      • Generic
    • REST
      • Default (SDNC MDSAL)
      • Generic
    • Capability (scripts)
      • Python
      • Kotlin script
      • Netconf (through Python)

    Workflow

    A workflow defines an overall action to be taken for the service; it can be composed of a set of nodes to execute. Currently, workflows are backed by the Directed Graph engine.

    A CBA can have as many workflows as needed.

    Required workflows

    The following workflows are contracts being established between SO, SDNC and CDS to cover the instantiation and the post-instantiation use cases.

    resource-assignment

    This action is meant to assign the resources needed to instantiate the service. The goal is to resolve all the HEAT environment variables.

    This action is triggered by Generic-Resource-API (GR-API) within SDNC as part of the AssignBB orchestrated by SO. Hence it is triggered for each VNF and VF-Module.

    In order to know what to resolve, one input is required: the artifact prefix (see below for an explanation).

    artifacts

    For each VNF and VF-Module comprising the service, a combination of a template and a mapping is needed.

    The requirement is as follows for a VNF:

    ${vnf-name}-template
    ${vnf-name}-mapping

    and as follows for a VF-Module, where the vf-module-label is the name of the HEAT template file.

    ${vf-module-label}-template
    ${vf-module-label}-mapping

    ${vnf-name} and ${vf-module-label} are what we call the artifact prefix, so the requirement can be seen as follows:

    ${artifact-prefix}-template
    ${artifact-prefix}-mapping
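
    The naming convention can be captured in a one-liner; the helper below is a hypothetical sketch (not part of CDS) that derives the expected artifact names from a prefix:

    ```python
    def artifact_pair(artifact_prefix):
        """Return the (template, mapping) artifact names for a given prefix."""
        return (f"{artifact_prefix}-template", f"{artifact_prefix}-mapping")

    # For a VNF whose artifact prefix is "vfw", the expected artifacts are:
    print(artifact_pair("vfw"))  # ('vfw-template', 'vfw-mapping')
    ```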
    template

    The template has to be a resource accumulator template, composed of the following sections:

    • resource-accumulator-resolved-data: defines all the resources that can be resolved directly from the context. It expresses a direct mapping between the name of the resource and its value.

      Code Block
      titleRA resolved data
      collapsetrue
        "resource-accumulator-resolved-data": [
          {
            "param-name": "service-instance-id",
            "param-value": "${service-instance-id}"
          },
          {
            "param-name": "vnf_id",
            "param-value": "${vnf-id}"
          }
        ]


    • capability-data: defines what capability to use to create a specific resource, along with the ingredients required to invoke the capability and the output mapping.

      Code Block
      titleRA capability payload
      collapsetrue
          {
            "capability-name": "netbox-ip-assign",
            "key-mapping": [
              {
                "payload": [
                  {
                    "param-name": "service-instance-id",
                    "param-value": "${service-instance-id}"
                  },
                  {
                    "param-name": "prefix-id",
                    "param-value": "${private-prefix-id}"
                  },
                  {
                    "param-name": "vf-module-id",
                    "param-value": "${vf-module-id}"
                  },
                  {
                    "param-name": "external_key",
                    "param-value": "${vf-module-id}-vpg_private_ip_1"
                  }
                ],
                "output-key-mapping": [
                  {
                    "resource-name": "vpg_private_ip_1",
                    "resource-value": "${vpg_private_ip_1}"
                  }
                ]
              }
            ]
          }

    mapping

    Defines the contract of each resource to be resolved. Each placeholder in the template must have a corresponding mapping definition.

    A mapping is comprised of:

    • name
    • required / optional
    • type (support complex type)
    • dictionary-name
    • dictionary-source
    • dependencies: ensures the listed resources are resolved before the resource that declares the dependency.

    The dictionary fields refer to a specific data dictionary.
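
    Put together, a minimal mapping entry using the fields above might look like the following (the resource name, dictionary name, and source are illustrative; the referenced data dictionary must be loaded in the CDS run time):

    ```json
    {
      "name": "vnf_name",
      "input-param": false,
      "property": {
        "type": "string"
      },
      "dictionary-name": "vnf_name",
      "dictionary-source": "rest",
      "dependencies": ["service-instance-id"]
    }
    ```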

    scripts

    Required if any of the mappings uses the capability source to resolve a parameter.

    config-assign

    This action is meant to assign all the resources and mesh the templates needed for the configuration to apply post-instantiation.

    This action is triggered by SO after the AssignBB has been executed for the Service, VNF, and VF-Module.

    artifacts

    Combination of templates with their respective mappings

    Scripts if needed

    config-deploy

    This action is meant to push the configuration templates defined during the config-assign step for the post-instantiation.

    This action is triggered by SO after the CreateBB has been executed for all the VF-Modules.

    artifacts

    Combination of templates with their respective mappings

    Scripts using Netconf or Restconf to push configuration to the network element.