
CBA

The Controller Blueprint Archive (CBA) is the overall service design package, fully model-driven, needed to automate service instantiation and any configuration provisioning operation, such as day0 or day2 configuration.

The CBA is a .zip file, comprised of the following structure:

Code Block
.
├── Definitions
│   ├── blueprint.json
│   ├── artifact_types.json
│   ├── data_types.json
│   ├── node_types.json
│   ├── policy_types.json
│   ├── relationship_types.json
│   ├── resources_definition_types.json
│   └── *-mapping.json
├── Plans
│   ├── ResourceAssignment.xml
│   ├── ConfigAssign.xml
│   └── ConfigDeploy.xml
├── Scripts
│   └── python
│       ├── ConfigDeployExample.py
│       ├── ResourceResolutionExample.py
│       └── __init__.py
├── TOSCA-Metadata
│   └── TOSCA.meta
└── Templates
    └── *-template.vtl


Data Dictionary

A data dictionary defines a specific resource that can be resolved using one of the supported sources listed below.

A data dictionary can support multiple resources.

The main goal of a data dictionary is to define a generic entity that can be shared across the service catalog.

Resolution sources

  • Input
  • Default
  • SQL
      • Default (SDNC DB)
      • Generic
  • REST
      • Default (SDNC MDSAL)
      • Generic
  • Capability (scripts)
      • Python
      • Kotlin script
      • Netconf (through Python)
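In the data dictionary examples later in this guide, these options map to the following source types. The snippet below is a condensed, illustrative view only; a real dictionary typically declares a single source, and source names such as primary-db or primary-config-data depend on how the environment is configured.

Code Block: resolution source types (illustrative)
"sources": {
  "input":               { "type": "source-input" },
  "default":             { "type": "source-default" },
  "primary-db":          { "type": "source-db" },
  "primary-config-data": { "type": "source-rest" },
  "capability":          { "type": "source-capability" }
}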

Workflow

A workflow defines an overall action to be taken for the service; it is composed of a set of nodes to execute. Currently, workflows are backed by the Directed Graph engine.

A CBA can have as many workflows as needed.

Required workflows

The following workflows are contracts established between SO, SDNC and CDS to cover the instantiation and post-instantiation use cases.

resource-assignment

This action is meant to assign the resources needed to instantiate the service. The goal is to resolve all the HEAT environment variables.

This action is triggered by the Generic-Resource-API (GR-API) within SDNC as part of the AssignBB orchestrated by SO. Hence it will be triggered for each VNF and VF-Module.

In order to know what to resolve, one input is required: the artifact prefix (see below for an explanation).

artifacts

For each VNF and VF-Module comprising the service, a combination of a template and a mapping is needed.

The requirement is as follows for a VNF:

${vnf-name}-template
${vnf-name}-mapping

and as follows for a VF-Module, where the vf-module-label is the name of the HEAT template file:

${vf-module-label}-template
${vf-module-label}-mapping

${vnf-name} and ${vf-module-label} are what we call the artifact prefix, so the requirement can be seen as follows:

${artifact-prefix}-template
${artifact-prefix}-mapping
template

The template has to be a resource accumulator template; it is composed of the following sections:

  • resource-accumulator-resolved-data: defines all the resources that can be resolved directly from the context. It expresses a direct mapping between the name of the resource and its value.

    Code Block: RA resolved data
      "resource-accumulator-resolved-data": [
        {
          "param-name": "service-instance-id",
          "param-value": "${service-instance-id}"
        },
        {
          "param-name": "vnf_id",
          "param-value": "${vnf-id}"
        }
      ]


  • capability-data: defines what capability to use to create a specific resource, along with the ingredients required to invoke the capability and the output mapping.

    Code Block: RA capability payload
        {
          "capability-name": "netbox-ip-assign",
          "key-mapping": [
            {
              "payload": [
                {
                  "param-name": "service-instance-id",
                  "param-value": "${service-instance-id}"
                },
                {
                  "param-name": "prefix-id",
                  "param-value": "${private-prefix-id}"
                },
                {
                  "param-name": "vf-module-id",
                  "param-value": "${vf-module-id}"
                },
                {
                  "param-name": "external_key",
                  "param-value": "${vf-module-id}-vpg_private_ip_1"
                }
              ],
              "output-key-mapping": [
                {
                  "resource-name": "vpg_private_ip_1",
                  "resource-value": "${vpg_private_ip_1}"
                }
              ]
            }
          ]
        }

This guide provides information on how to build a CBA.


Installation

ONAP is meant to be deployed within a Kubernetes environment. Hence, the de-facto way to deploy CDS is through Kubernetes.

ONAP also packages the Kubernetes manifests as Charts, using Helm.


Prerequisite

https://docs.onap.org/en/latest/guides/onap-developer/settingup/index.html

Setup local Helm

Code Block: helm repo
helm init --history-max 200 # To install tiller to target Kubernetes if not yet installed
helm serve &
helm repo add local http://127.0.0.1:8879

Get the chart

Make sure to check out the release to use, by replacing $release-tag in the command below.

Code Block: git clone
git clone https://gerrit.onap.org/r/oom
git checkout tags/$release-tag
cd oom/kubernetes
make common
make cds


Install CDS

Code Block: helm install
helm install --name cds cds

Result

Code Block: kubectl output
$ kubectl get all --selector=release=cds
NAME                                             READY     STATUS    RESTARTS   AGE
pod/cds-blueprints-processor-54f758d69f-p98c2    0/1       Running   1          2m
pod/cds-cds-6bd674dc77-4gtdf                     1/1       Running   0          2m
pod/cds-cds-db-0                                 1/1       Running   0          2m
pod/cds-controller-blueprints-545bbf98cf-zwjfc   1/1       Running   0          2m
NAME                            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
service/blueprints-processor    ClusterIP   10.43.139.9     <none>        8080/TCP,9111/TCP   2m
service/cds                     NodePort    10.43.254.69    <none>        3000:30397/TCP      2m
service/cds-db                  ClusterIP   None            <none>        3306/TCP            2m
service/controller-blueprints   ClusterIP   10.43.207.152   <none>        8080/TCP            2m
NAME                                        DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/cds-blueprints-processor    1         1         1            0           2m
deployment.apps/cds-cds                     1         1         1            1           2m
deployment.apps/cds-controller-blueprints   1         1         1            1           2m
NAME                                                   DESIRED   CURRENT   READY     AGE
replicaset.apps/cds-blueprints-processor-54f758d69f    1         1         0         2m
replicaset.apps/cds-cds-6bd674dc77                     1         1         1         2m
replicaset.apps/cds-controller-blueprints-545bbf98cf   1         1         1         2m
NAME                          DESIRED   CURRENT   AGE
statefulset.apps/cds-cds-db   1         1         2m


Swagger

Can be found at http://$ip:$runtimePort/swagger-ui.html#/


CDS Design time

Below are the requirements to enable automation for a service within ONAP.

For instantiation, the goal is to be able to automatically resolve all the HEAT/Helm variables, called cloud parameters.

For post-instantiation, the goal is to configure the VNF with initial configuration.

As part of SDC design time, when defining the topology, for the resource of type VF or PNF, you need to specify


Helper scripts / tools to help with the design time activities

Here's a helper script to facilitate the deployment of data-type, data-dictionary, CBA enrichment and CBA upload.

Make sure to update the following parameters in the script below:

  • NODE_IP: IP of one of the K8S cluster nodes

The script assumes the following folder structure is in place; update the script according to your environment.

Code Block
└── service
    ├── cba
    ├── tmp
    │   ├── cba.zip (temporary file)
    │   └── cba_enriched.zip (temporary file)
    ├── data-dictionary
    └── data-type
Code Block: CBA Helper script (bash)
#!/bin/bash
# Push the model-types and data dictionaries, then enrich and publish the CBA to the blueprint processor.
IP=NODE_IP
BLUEPRINT_PROCESSOR_PORT=30499
BLUEPRINT_PROCESSOR_URI=http://${IP}:${BLUEPRINT_PROCESSOR_PORT}
URL_ENRICH=${BLUEPRINT_PROCESSOR_URI}/api/v1/blueprint-model/enrich
URL_PUBLISH=${BLUEPRINT_PROCESSOR_URI}/api/v1/execution-service/upload
URL_DD=${BLUEPRINT_PROCESSOR_URI}/api/v1/dictionary
URL_DT=${BLUEPRINT_PROCESSOR_URI}/api/v1/model-type
CBA_ZIP=/service/tmp/cba.zip
CBA_ZIP_ENRICHED=/service/tmp/cba_enriched.zip
CBA_PATH=/service/cba
DD_PATH=/service/data-dictionary
DT_PATH=/service/data-type

for f in $DT_PATH/*.json; do
  echo "Pushing model-type '$f'"
  curl -sS -X POST $URL_DT -H 'Content-Type: application/json' -H 'Authorization: Basic Y2NzZGthcHBzOmNjc2RrYXBwcw==' -d "@$f" 
  echo " "
done

for f in $DD_PATH/*.json; do
  echo "Pushing data dictionary '$f'"
  curl -sS -X POST $URL_DD -H 'Content-Type: application/json' -H 'Authorization: Basic Y2NzZGthcHBzOmNjc2RrYXBwcw==' -d "@$f"
  echo " "
done


[ -f "$CBA_ZIP" ] && rm "$CBA_ZIP"
[ -f "$CBA_ZIP_ENRICHED" ] && rm "$CBA_ZIP_ENRICHED"

pushd $CBA_PATH
zip -uqr $CBA_ZIP . --exclude=*.git*
popd
echo "Doing enrichment..."
curl -sS -X POST $URL_ENRICH -H 'content-type: multipart/form-data' -H 'Authorization: Basic Y2NzZGthcHBzOmNjc2RrYXBwcw==' -F file=@$CBA_ZIP -o $CBA_ZIP_ENRICHED
echo "Publishing..."
curl -X POST $URL_PUBLISH -H 'content-type: multipart/form-data' -H 'Authorization: Basic Y2NzZGthcHBzOmNjc2RrYXBwcw==' -F file=@$CBA_ZIP_ENRICHED

# rm $CBA_ZIP $CBA_ZIP_ENRICHED


Prerequisite

Gather the parameters:

Instantiation

Have the HEAT template along with the HEAT environment file.

or

Have the Helm chart along with the values.yaml file (integration between MultiCloud and CDS TBD).

Configuration

Have the configuration template to apply on the VNF.

  1. XML for NETCONF
  2. JSON / XML for RESTCONF
  3. JSON for Ansible
  4. CLI
  5. ...

Create and fill in a table for all the dynamic values.

While doing so, identify the resources that are resolved using the same process; for instance, if two IPs have to be resolved through the same IPAM, the process to resolve the IP is the same.

Here is the information to capture for each dynamic cloud parameter:

How to resolve

Input

Value will be given as input in the request.

Default

Value will be defaulted in the model.

REST

Value will be resolved by sending a query to the REST system

Supported Auth type

Token

Use token based authentication

  • token
Basic

Use basic authentication

  • username
  • password
SSL

Use SSL basic authentication

  • keystore type
  • truststore
  • truststore password
  • keystore
  • keystore password

SQL

Value will be resolved by sending a SQL statement to the DB system

Connection URL format: jdbc:mysql://<host>:<port>/db

Capability

Value will be resolved through the execution of a script.

These are all the required parameters to process the resolution of that particular resource.

Inputs

REST

List of placeholders used for

  • URI
  • Payload
DB

List of placeholders used for

  • SQL statement

This is the expected result from the system, and you should know which value from the response is of interest to you.

If it's a JSON payload, then you should think about the JSON path to access the value of interest.
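For example, using the IPAM response shown later in this guide (illustrative snippet), the JSON path /address selects the value of interest:

Code Block: JSON path illustration
{
  "id": 4,
  "address": "192.168.10.2/32",
  "status": 1
}

Here /address resolves to "192.168.10.2/32"; that path is what you would later put in the path property of a source-rest data dictionary.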


Data dictionary

What is a data dictionary?

For each uniquely identified dynamic resource, along with all its ingredients, we need to create a data dictionary.

Here is the modeling guideline: Modeling Concepts#resourceDefinition-modeling

Below are examples of data dictionaries.

Input

Value will be passed as input.

Code Block: unit-number
{
    "tags": "unit-number",
    "name": "unit-number",
    "property": {
      "description": "unit-number",
      "type": "string"
    },
    "updated-by": "adetalhouet",
    "sources": {
      "input": {
        "type": "source-input"
      }
    }
  }

Default

Value will be defaulted.

Code Block: prefix-id
{
  "tags": "prefix-id",
  "name": "prefix-id",
  "property" :{
    "description": "prefix-id",
    "type": "integer"
  },
  "updated-by": "adetalhouet",
  "sources": {
    "default": {
      "type": "source-default"
    }
  }
}

REST

Value will be resolved through REST.

Modeling reference: Modeling Concepts#rest

Example: primary-config-data via rest source type

In this example, we're making a POST request to an IPAM system with no payload.

Some ingredients are required to perform the query, in this case $prefixId. Hence it is provided as an input-key-mapping and defined as a key-dependencies. Please refer to the modeling guideline for a more in-depth understanding.

As part of this request, the expected response will be as below. What is of interest is the address field, as this is what we're trying to resolve.

Code Block: response
{
    "id": 4,
    "address": "192.168.10.2/32",
    "vrf": null,
    "tenant": null,
    "status": 1,
    "role": null,
    "interface": null,
    "description": "",
    "nat_inside": null,
    "created": "2018-08-30",
    "last_updated": "2018-08-30T14:59:05.277820Z"
}

To tell the resolution framework what is of interest in the response, the path property can be used, which uses JSON_PATH to get the value.

Code Block: create_netbox_ip_address
{
    "tags" : "oam-local-ipv4-address",
    "name" : "create_netbox_ip",
    "property" : {
      "description" : "netbox ip",
      "type" : "string"
    },
    "updated-by" : "adetalhouet",
    "sources" : {
      "primary-config-data" : {
        "type" : "source-rest",
        "properties" : {
          "type" : "JSON",
          "verb" : "POST",
          "endpoint-selector" : "ipam-1",
          "url-path" : "/api/ipam/prefixes/$prefixId/available-ips/",
          "path" : "/address",
          "input-key-mapping" : {
            "prefixId" : "prefix-id"
          },
          "output-key-mapping" : {
            "address" : "address"
          },
          "key-dependencies" : [ "prefix-id" ]
        }
      }
    }
  }

DB

Value will be resolved through a database.

Modeling reference: Modeling Concepts#sql

In this example, we're making a SQL query to the primary database.

Some ingredients are required to perform the query, in this case $vfmoduleid. Hence it is provided as an input-key-mapping and defined as a key-dependencies. Please refer to the modeling guideline for a more in-depth understanding.

As part of this request, the expected response is returned in the value column. In the output-key-mapping section, that value will be mapped to the name of the resource to resolve.

Code Block: vf-module-type
{
  "name": "vf-module-type",
  "tags": "vf-module-type",
  "property": {
    "description": "vf-module-type",
    "type": "string"
  },
  "updated-by": "adetalhouet",
  "sources": {
    "primary-db": {
      "type": "source-db",
      "properties": {
        "type": "SQL",
        "query": "select sdnctl.demo.value as value from sdnctl.demo where sdnctl.demo.id=:vfmoduleid",
        "input-key-mapping": {
          "vfmoduleid": "vf-module-number"
        },
        "output-key-mapping": {
          "vf-module-type": "value"
        },
        "key-dependencies": [
          "vf-module-number"
        ]
      }
    }
  }
}

Capability

Value will be resolved through the execution of a script.

Modeling reference: Modeling Concepts#Capability

In this example, we're making use of a Python script.

Some ingredients are required to perform the resolution, in this case $vf-module-type. Hence it is provided as a key-dependencies. Please refer to the modeling guideline for a more in-depth understanding.

As part of this request, the expected response will be set within the script itself.

Code Block: interface-description
{
  "tags": "interface-description",
  "name": "interface-description",
  "property": {
    "description": "interface-description",
    "type": "string"
  },
  "updated-by": "adetalhouet",
  "sources": {
    "capability": {
      "type": "source-capability",
      "properties": {
        "script-type": "jython",
        "script-class-reference": "Scripts/python/DescriptionExample.py",       
        "key-dependencies": [
          "vf-module-type"
        ]
      }
    }
  }
}

The script itself is shown below.

The key is to have the script class derived from the framework standards.

In the case of resource resolution, the class to derive from is AbstractRAProcessor

It will give the required methods to implement: process and recover, along with some utility functions, such as set_resource_data_value or addError.

These functions either come from the AbstractRAProcessor class, or from the class it derives from.

If the resolution fails, the recover method will get called with the exception as a parameter.

Code Block: Scripts/python/DescriptionExample.py
#  Copyright (c) 2019 Bell Canada.
#
#  Licensed under the Apache License, Version 2.0 (the "License");
#  you may not use this file except in compliance with the License.
#  You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
#  Unless required by applicable law or agreed to in writing, software
#  distributed under the License is distributed on an "AS IS" BASIS,
#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#  See the License for the specific language governing permissions and
#  limitations under the License.

from abstract_ra_processor import AbstractRAProcessor
from blueprint_constants import *
from java.lang import Exception as JavaException

class DescriptionExample(AbstractRAProcessor):

    def process(self, resource_assignment):
        try:
            # get key-dependencies value
            value = self.raRuntimeService.getStringFromResolutionStore("vf-module-type")
            
            # logic based on key-dependency outcome
            result = ""
            if value == "vfw":
                result = "This is the Virtual Firewall entity"
            elif value == "vsn":
                result = "This is the Virtual Sink entity"
            elif value == "vpg":
                result = "This is the Virtual Packet Generator"

            # set the value of resource getting currently resolved
            self.set_resource_data_value(resource_assignment, result)

        except JavaException, err:
          log.error("Java Exception in the script {}", err)
        except Exception, err:
          log.error("Python Exception in the script {}", err)
        return None

    def recover(self, runtime_exception, resource_assignment):
        print self.addError(runtime_exception.getMessage())
        return None


Complex type

Value will be resolved through REST, and the output will be a complex type.

Modeling reference: Modeling Concepts#rest

In this example, we're making a POST request to an IPAM system with no payload.

Some ingredients are required to perform the query, in this case $prefixId. Hence it is provided as an input-key-mapping and defined as a key-dependencies. Please refer to the modeling guideline for a more in-depth understanding.

As part of this request, the expected response will be as below.

Code Block: response
{
    "id": 4,
    "address": "192.168.10.2/32",
    "vrf": null,
    "tenant": null,
    "status": 1,
    "role": null,
    "interface": null,
    "description": "",
    "nat_inside": null,
    "created": "2018-08-30",
    "last_updated": "2018-08-30T14:59:05.277820Z"
}

What is of interest is the address and id fields. For the process to return these two values, we need to create a custom data-type, as below:

Code Block: dt-netbox-ip
{
  "version": "1.0.0",
  "description": "This is Netbox IP Data Type",
  "properties": {
    "address": {
      "required": true,
      "type": "string"
    },
    "id": {
      "required": true,
      "type": "integer"
    }
  },
  "derived_from": "tosca.datatypes.Root"
}

The type of the data dictionary will be dt-netbox-ip.

To tell the resolution framework what is of interest in the response, the output-key-mapping section is used. The process will map the output-key-mapping to the defined data-type.

Code Block: create_netbox_ip_address
{
    "tags" : "oam-local-ipv4-address",
    "name" : "create_netbox_ip",
    "property" : {
      "description" : "netbox ip",
      "type" : "dt-netbox-ip"
    },
    "updated-by" : "adetalhouet",
    "sources" : {
      "primary-config-data" : {
        "type" : "source-rest",
        "properties" : {
          "type" : "JSON",
          "verb" : "POST",
          "endpoint-selector" : "ipam-1",
          "url-path" : "/api/ipam/prefixes/$prefixId/available-ips/",
          "path" : "",
          "input-key-mapping" : {
            "prefixId" : "prefix-id"
          },
          "output-key-mapping" : {
            "address" : "address",
            "id" : "id"
          },
          "key-dependencies" : [ "prefix-id" ]
        }
      }
    }
  }


CBA scaffolding

The overall purpose of this document is to constitute a CBA; see Modeling Concepts#ControllerBlueprintArchive for an understanding of what a CBA is.

Now is the time to create the scaffolding for your CBA.

What you will need is the following base directory/file structure:

Code Block
├── Definitions
│   └── blueprint.json                          Overall TOSCA service template (workflow + node_template)
├── Environments                                Contains *.properties files as required by the service
├── Plans                                       Contains Directed Graph
├── Scripts                                     Contains scripts
│   ├── python                                  Python scripts
│   └── kotlin                                  Kotlin scripts
├── TOSCA-Metadata
│   └── TOSCA.meta                              Meta-data of overall package
└── Templates                                   Contains combination of mapping and template

The TOSCA.meta file should contain this information:

Code Block
TOSCA-Meta-File-Version: 1.0.0
CSAR-Version: 1.0
Created-By: Alexis de Talhouët (adetalhouet89@gmail.com)
Entry-Definitions: Definitions/blueprint.json					<- Path reference to the blueprint.json file. If the file name is changed, change here accordingly.
Template-Tags: ONAP, CBA, Test
Content-Type: application/vnd.oasis.bpmn

The blueprint.json file should have the following metadata:

Code Block
{
  "metadata": {
    "template_author": "Alexis de Talhouët",
    "author-email": "adetalhouet89@gmail.com",
    "user-groups": "ADMIN, OPERATION",
    "template_name": "golden",									<- This is the overall CBA name; it will be referred to later as sdnc_model_name
    "template_version": "1.0.0",								<- This is the overall CBA version; it will be referred to later as sdnc_model_version
    "template_tags": "ONAP, CBA, Test"
  }
. . .


ONAP Specific Workflows

The following workflows are contracts established between SO, SDNC and CDS to cover the instantiation and the post-instantiation use cases.

Code Block
User -> SO (Macro Service Create)
  SO -> AssignBB (service, vnf, vf-module) - instantiation
          -> SDNC GR-API
                  -> CDS (resource-assignment workflow)
  SO -> ConfigAssignBB - day0 config assign
          -> CDS (config-assign workflow)
  SO -> CreateBB (VF-Module)
          -> OpenStack adapter / Multi-Cloud
  SO -> ConfigDeployBB - day0 config push
          -> CDS (config-deploy workflow)

Please refer to the modeling guide to understand the workflow concept: Modeling Concepts#workflow

The workflow definition will be added within the blueprint.json file; see CBA scaffolding.


resource-assignment

This action is meant to assign resources needed to instantiate the service, e.g. to resolve all the cloud parameters.

Also, this action has the ability to perform a dry-run, meaning the result of the resolution will be made visible to the user.


Context

This action is triggered by Generic-Resource-API (GR-API) within SDNC as part of the AssignBB orchestrated by SO.

It will be triggered for each VNF and VF-Module (referred to as entity below).

See SO Building blocks Assignment.


Templates

Understand resource accumulator templates

These templates are specific to the instantiation scenario, and rely on GR-API within SDNC.

The resource accumulator template is composed of the following sections:


resource-accumulator-resolved-data

Defines all the resources that can be resolved directly from the context. It expresses a direct mapping between the name of the resource and its value.

Code Block: RA resolved data
  "resource-accumulator-resolved-data": [
    {
      "param-name": "service-instance-id",
      "param-value": "${service-instance-id}"
    },
    {
      "param-name": "vnf_id",
      "param-value": "${vnf-id}"
    }
  ]


capability-data

Defines the logic to use to create a specific resource, along with the ingredients required to invoke the capability and the output mapping. See the ingredients as function parameters, and output mapping as returned value.

The logic to resolve the resource is a DG, hence DG development is required to support a new capability.

Currently the following capabilities exist:

Netbox: netbox-ip-assign

Code Block: Example
    {
      "capability-name": "netbox-ip-assign",
      "key-mapping": [
        {
          "payload": [
            {
              "param-name": "service-instance-id",
              "param-value": "${service-instance-id}"
            },
            {
              "param-name": "prefix-id",
              "param-value": "${private-prefix-id}"
            },
            {
              "param-name": "vf-module-id",
              "param-value": "${vf-module-id}"
            },
            {
              "param-name": "external_key",
              "param-value": "${vf-module-id}-vpg_private_ip_1"
            }
          ],
          "output-key-mapping": [
            {
              "resource-name": "vpg_private_ip_1",
              "resource-value": "${vpg_private_ip_1}"
            }
          ]
        }
      ]
    }

Name generation: generate-name

Code Block: Example
    {
      "capability-name": "generate-name",
      "key-mapping": [
        {
          "payload": [
            {
              "param-name": "resource-name",
              "param-value": "vnf_name"
            },
            {
              "param-name": "resource-value",
              "param-value": "${vnf_name}"
            },
            {
              "param-name": "external-key",
              "param-value": "${vnf-id}_vnf_name"
            },
            {
              "param-name": "policy-instance-name",
              "param-value": "${vf-naming-policy}"
            },
            {
              "param-name": "nf-role",
              "param-value": "${nf-role}"
            },
            {
              "param-name": "naming-type",
              "param-value": "VNF"
            },
            {
              "param-name": "AIC_CLOUD_REGION",
              "param-value": "${aic-cloud-region}"
            }
          ],
          "output-key-mapping": [
            {
              "resource-name": "vnf_name",
              "resource-value": "${vnf_name}"
            }
          ]
        }
      ]
    }


Required templates

See Modeling Concepts#template

The name of the templates is very important and can't be random. Below are the requirements.

VNF

The VNF Resource Accumulator Template prefix name can be anything, but what is very important is that when integrating with SDC the sdnc_artifact_name property of the VF or PNF needs to be the same; see here.

VF-Modules

Each vf-module will have its own resource accumulator template, and its prefix name must be the vf-module-label, which is nothing but the name of the HEAT file defining the OS::Nova::Server

Example:

If the file is named vfw.yaml, the vf-module-label will be vfw.

For instance, with the vFW service HEAT definition, the VSP attachments view within SDC shows the label of each vf-module.

In this case, we will have 4 resource accumulator templates, following the template convention, hence ending with -template

  • base_template-template.vtl
  • vfw-template.vtl
  • vsn-template.vtl
  • vpg-template.vtl

Mapping

Each template requires its associated mapping file, see Modeling Concepts#ArtifactMappingResource

Example:

Taking the same vFW example, we would have 4 mapping files following the convention, hence ending with -mapping:

  • base_template-mapping.json
  • vfw-mapping.json
  • vsn-mapping.json
  • vpg-mapping.json

Required Inputs

SDNC will populate this input with the name of the template to execute.

If doing VNF Assign, it will use sdnc_artifact_name as template-prefix.

If doing VF-Module Assign, it will use the vf-module-label as template-prefix.

Code Block
"template-prefix" : {
   "required" : true,
   "type" : "list",
   "entry_schema" : {
      "type" : "string"
   }
}
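As an illustration (hypothetical value), a VF-Module Assign request for the vfw module of the vFW example would carry something like:

Code Block: example input (hypothetical)
{
  "template-prefix": [ "vfw" ]
}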


Output

It is necessary to provide the resolved template as output. To do so, we will use the Modeling Concepts#getAttribute expression.

Also, as mentioned here Modeling Concepts#resourceResolution, the resource resolution component node will populate an attribute named assignment-params with the result.

Finally, the name of the output has to be meshed-template so SDNC GR-API knows how to properly parse the response.
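Putting these together, the workflow outputs section looks like this (taken from the full example below):

Code Block: meshed-template output
"outputs": {
  "meshed-template": {
    "type": "json",
    "value": {
      "get_attribute": [
        "resource-assignment",
        "assignment-params"
      ]
    }
  }
}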


Component

This action requires a node_template of type component-resource-resolution

The name of the node_template is important, as it will be used within the Workflow definition (see step.target property Modeling Concepts#workflowProperties)

Finally, you can see the component has a list of artifacts, being the template/mapping defined before.

Example:

Taking the same vFW example, we have a node_template named resource-assignment:

Code Block: Example
    "node_templates": {
      "resource-assignment" : {
        "type" : "component-resource-resolution",
        "interfaces" : {
          "ResourceResolutionComponent" : {
            "operations" : {
              "process" : {
                "inputs" : {
                  "artifact-prefix-names" : {
                    "get_input" : "template-prefix"
                  }
                }
              }
            }
          }
        },
        "artifacts": {
          "base-template": {
            "type": "artifact-template-velocity",
            "file": "Templates/base-template.vtl"
          },
          "base-mapping": {
            "type": "artifact-mapping-resource",
            "file": "Templates/base-mapping.json"
          },
          "vfw-template": {
            "type": "artifact-template-velocity",
            "file": "Templates/vfw-template.vtl"
          },
          "vfw-mapping": {
            "type": "artifact-mapping-resource",
            "file": "Templates/vfw-mapping.json"
          },
          "vfw-vnf-template": {
            "type": "artifact-template-velocity",
            "file": "Templates/vfw-vnf-template.vtl"
          },
          "vfw-vnf-mapping": {
            "type": "artifact-mapping-resource",
            "file": "Templates/vfw-vnf-mapping.json"
          },
          "vpg-template": {
            "type": "artifact-template-velocity",
            "file": "Templates/vpg-template.vtl"
          },
          "vpg-mapping": {
            "type": "artifact-mapping-resource",
            "file": "Templates/vpg-mapping.json"
          },
          "vsn-template": {
            "type": "artifact-template-velocity",
            "file": "Templates/vsn-template.vtl"
          },
          "vsn-mapping": {
            "type": "artifact-mapping-resource",
            "file": "Templates/vsn-mapping.json"
          }
        }
      }
    }
  }


Overall workflow example w/ component and artifact

Code Block: resource-assignment
{
  "metadata": {
    "template_author": "Alexis de Talhouët",
    "author-email": "adetalhouet89@gmail.com",
    "user-groups": "ADMIN, OPERATION",
    "template_name": "vFW_spinup",
    "template_version": "1.0.0",
    "template_tags": "vFW"
  },
  "topology_template": {
    "workflows": {
      "resource-assignment": {
        "steps": {
          "resource-assignment": {
            "description": "Resource Assign Workflow",
            "target": "resource-assignment"
          }
        },
        "inputs" : {
          "template-prefix" : {
            "required" : true,
            "type" : "list",
            "entry_schema" : {
              "type" : "string"
            }
          }
        },
        "outputs": {
          "meshed-template": {
            "type": "json",
            "value": {
              "get_attribute": [
                "resource-assignment",
                "assignment-params"
              ]
            }
          }
        }
      }
    },
    "node_templates": {
      "resource-assignment" : {
        "type" : "component-resource-resolution",
        "interfaces" : {
          "ResourceResolutionComponent" : {
            "operations" : {
              "process" : {
                "inputs" : {
                  "artifact-prefix-names" : {
                    "get_input" : "template-prefix"
                  }
                }
              }
            }
          }
        },
        "artifacts": {
          "base-template": {
            "type": "artifact-template-velocity",
            "file": "Templates/base-template.vtl"
          },
          "base-mapping": {
            "type": "artifact-mapping-resource",
            "file": "Templates/base-mapping.json"
          },
          "vfw-template": {
            "type": "artifact-template-velocity",
            "file": "Templates/vfw-template.vtl"
          },
          "vfw-mapping": {
            "type": "artifact-mapping-resource",
            "file": "Templates/vfw-mapping.json"
          },
          "vfw-vnf-template": {
            "type": "artifact-template-velocity",
            "file": "Templates/vfw-vnf-template.vtl"
          },
          "vfw-vnf-mapping": {
            "type": "artifact-mapping-resource",
            "file": "Templates/vfw-vnf-mapping.json"
          },
          "vpg-template": {
            "type": "artifact-template-velocity",
            "file": "Templates/vpg-template.vtl"
          },
          "vpg-mapping": {
            "type": "artifact-mapping-resource",
            "file": "Templates/vpg-mapping.json"
          },
          "vsn-template": {
            "type": "artifact-template-velocity",
            "file": "Templates/vsn-template.vtl"
          },
          "vsn-mapping": {
            "type": "artifact-mapping-resource",
            "file": "Templates/vsn-mapping.json"
          }
        }
      }
    }
  }
}


Add a new capability

When adding a capability, consider whether it should be available both at VNF and VF-Module level. This is important for its implementation.

You need to do the following:


Load the DG within SDNC

Code Block: Example of script to automate deployment of DG
#!/bin/bash

# This script takes care of loading the DG into the runtime of SDNC.
# The DG file name has to follow this pattern:
# GENERIC-RESOURCE-API_{rpc_name}_{version}

usage() {
  echo "./load-dg.sh <dg>"
  exit
}

if [[ -z $1 ]]
then
    usage
fi

rpc_name=`echo "$1" | cut -d'_' -f2 | cut -d'.' -f1`
version=`echo "$1" | cut -d'_' -f3`
content=`cat $1`
ip=$2

data="$(curl -s -o /dev/null -w %{url_effective} --get --data-urlencode "$content" "")"
dg_xml_escaped="${data##/?}"

echo -e "module=GENERIC-RESOURCE-API&rpc=$rpc_name&flowXml=$dg_xml_escaped" > payload

echo -e "    Installing $rpc_name version ${version%.*}"
curl -X  POST \
  http://$ip:$SDNC_NODE_PORT/uploadxml \
  -H 'Authorization: Basic ZGd1c2VyOnRlc3QxMjM=' \
  -H 'Content-Type: application/x-www-form-urlencoded' \
  -d @payload

rm payload

echo -e "    Activating $rpc_name version ${version%.*}"
activate_uri="activateDG?module=GENERIC-RESOURCE-API&rpc=$rpc_name&mode=sync&version=${version%.*}&displayOnlyCurrent=true"
curl -X GET \
  -H 'Accept: application/json' \
  -H 'Authorization: Basic ZGd1c2VyOnRlc3QxMjM=' \
  -H 'Content-Type: application/json' \
  http://$ip:$SDNC_NODE_PORT/$activate_uri

Add the capability in the self-serve-vnf-assign and/or self-serve-vf-module-assign DG, in the node named set ss.capability.execution-order[], then upload the updated version of this DG.
When doing so, make sure to increment the last parameter ss.capability.execution-order_length.


Understand overall SDNC DG flow logic

The logic for vnf and vf-module assignment is pretty much the same.

This is the general DG logic of the VNF assign flow and sub-flows:

  1. call vnf-topology-operation
    1. call vnf-topology-operation-assign
      1. call self-serve-vnf-assign
        1. set capability.execution-order
        2. call self-serve-vnf-ra-assignment
          1. execute REST call to CDS blueprint processor
          2. put resource-accumulator-resolved-data in MDSAL GR-API/services/service/$serviceInstanceId/vnfs/vnf/$vnfId
        3. call self-serve- + capability-name
        4. put vnf information in AAI (including the selflink)
      2. call naming-policy-generate-name
      3. put generic-vnf relationship in AAI

This is the general logic of the vf-module assign flow and sub-flows:

  1. call vf-module-topology-operation
    1. call vf-module-topology-operation-assign
      1. set service-data based on SO request (userParams / cloudParams)
      2. call self-serve-vf-module-assign
        1. set capability.execution-order
        2. call self-serve-vfmodule-ra-assignment
          1. execute REST call to CDS blueprint processor
            1. put resource-accumulator-resolved-data in MDSAL GR-API/services/service/$serviceInstanceId/vnfs/vnf/$vnfId/vf-modules/vf-module
        3. call self-serve- + capability-name
      3. put vf-module information in AAI
      4. put vnfc information in AAI


config-assign

This action is meant to assign all the resources and generate the configuration to apply post-instantiation (day0 config).


Context

This action is triggered by SO after the AssignBB has been executed for Service, VNF and VF-Module. It corresponds to the ConfigAssignVnfBB.

See SO Building blocks Assignment.


Templates

For this action, you can define as many templates as needed. Make sure each template follows the convention and has its mapping file, as follows:

  • xyz-template.vtl
  • xyz-mapping.json

Required Input

Code Block
"template-prefix" : {
   "required" : true,
   "type" : "list",
   "entry_schema" : {
      "type" : "string"
   }
}


The functionality requires the ability to retrieve, at a later point in time in the process (during the config-deploy action), the resolution that has been made.

Code Block
"resolution-key" : {
   "required" : true,
   "type" : "string"
}


Output

In order to perform a dry-run, it is necessary to provide the meshed resolved template as output. To do so, the use of the Modeling Concepts#getAttribute expression is required.

Also, as mentioned here Modeling Concepts#resourceResolution, the resource resolution component node will populate an attribute named assignment-params with the result.
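As in the full example below, the dry-run output is expressed as:

Code Block: dry-run output
"outputs": {
  "dry-run": {
    "type": "json",
    "value": {
      "get_attribute": [
        "config-assign",
        "assignment-params"
      ]
    }
  }
}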


Component

This action requires a node_template of type component-resource-resolution

The name of the node_template is important, as it will be used within the Workflow definition (see step.target property Modeling Concepts#workflowProperties)

Finally, you can see the component has a list of artifacts, being the template/mapping defined before.

Example:

Taking the vDNS example, we have a node_template named config-assign:

Code Block: Example
      "config-assign" : {
        "type" : "component-resource-resolution",
        "interfaces" : {
          "ResourceResolutionComponent" : {
            "operations" : {
              "process" : {
                "inputs" : {
                  "resolution-key" : {
                    "get_input" : "resolution-key"
                  },
                  "store-result" : true,
                  "artifact-prefix-names" : [ "baseconfig", "incremental-config" ]
                }
              }
            }
          }
        },
        "artifacts" : {
          "baseconfig-template" : {
            "type" : "artifact-template-velocity",
            "file" : "Templates/baseconfig-template.vtl"
          },
          "baseconfig-mapping" : {
            "type" : "artifact-mapping-resource",
            "file" : "Templates/baseconfig-mapping.json"
          },
          "incremental-config-template" : {
            "type" : "artifact-template-velocity",
            "file" : "Templates/incremental-config-template.vtl"
          },
          "incremental-config-mapping" : {
            "type" : "artifact-mapping-resource",
            "file" : "Templates/incremental-config-mapping.json"
          }
        }
      },


Overall workflow example w/ component and artifact

Here is an example of the config-assign workflow:

Code Block: config-assign
{
  "tosca_definitions_version": "controller_blueprint_1_0_0",
  "metadata": {
    "template_author": "Abdelmuhaimen Seaudi",
    "author-email": "abdelmuhaimen.seaudi@orange.com",
    "user-groups": "ADMIN, OPERATION",
    "template_name": "test",
    "template_version": "1.0.0",
    "template_tags": "test, vDNS-CDS, SCALE-OUT, MARCO"
  },
  "topology_template": {
    "workflows": {
      "config-assign": {
        "steps": {
          "config-assign": {
            "description": "Config Assign Workflow",
            "target": "config-assign"
          }
        },
        "inputs": {
          "resolution-key": {
            "required": true,
            "type": "string"
          },
          "config-assign-properties": {
            "description": "Dynamic PropertyDefinition for workflow(config-assign).",
            "required": true,
            "type": "dt-config-assign-properties"
          }
        },
        "outputs": {
          "dry-run": {
            "type": "json",
            "value": {
              "get_attribute": [
                "config-assign",
                "assignment-params"
              ]
            }
          }
        }
      }
    },
    "node_templates": {
      "config-assign": {
        "type": "component-resource-resolution",
        "interfaces": {
          "ResourceResolutionComponent": {
            "operations": {
              "process": {
                "inputs": {
                  "resolution-key": {
                    "get_input": "resolution-key"
                  },
                  "store-result": true,
                  "artifact-prefix-names": [
                    "baseconfig",
                    "incremental-config"
                  ]
                }
              }
            }
          }
        },
        "artifacts": {
          "baseconfig-template": {
            "type": "artifact-template-velocity",
            "file": "Templates/baseconfig-template.vtl"
          },
          "baseconfig-mapping": {
            "type": "artifact-mapping-resource",
            "file": "Templates/baseconfig-mapping.json"
          },
          "incremental-config-template": {
            "type": "artifact-template-velocity",
            "file": "Templates/incremental-config-template.vtl"
          },
          "incremental-config-mapping": {
            "type": "artifact-mapping-resource",
            "file": "Templates/incremental-config-mapping.json"
          }
        }
      }
    }
  }
}


config-deploy

This action is meant to push the configuration templates defined during the config-assign step for the post-instantiation.

This action is triggered by SO after the CreateBB has been executed for all the VF-Modules.


Context

This action is triggered by SO after the CreateVnfBB has been executed. It corresponds to the ConfigDeployBB.

See SO Building blocks Assignment.


Templates

If need be, some templates can be defined. They can be resolved through a node_template of type component-resource-resolution, which will then have to be combined with another node_template in order to push the config to the network or to a third-party system. In this case, you will want to leverage the multi-action workflow.

Else, the template can be resolved directly through a node_template of type component-script-executor, using the helper functions provided.

Required Inputs

Needed to retrieve the resolution that was made at an earlier point in time in the process.

The combination of the artifact-name and the resolution-key will be used to uniquely identify the result.


Output

SUCCESS or FAILURE


Component

If you want to have a multi-action workflow, then the action will refer to a node_template of type dg-generic.

If you want to have a single-action workflow, then you should use one of the following node types: component-script-executor, component-remote-script-executor, component-remote-ansible-executor

The name of the node_template is important, as it will be used within the Workflow definition (see step.target property Modeling Concepts#workflowProperties)

Finally, you can see the component(s) might have a list of artifacts, being the template/mapping defined before.

Example:

Taking the vDNS example, we have a node_template named config-deploy-process, which is of type dg-generic; hence we also have the dependent node_templates.

Code Block: Example
      "config-deploy-process" : {
        "type" : "dg-generic",
        "properties" : {
          "content" : {
            "get_artifact" : [ "SELF", "dg-config-deploy-process" ]
          },
          "dependency-node-templates" : [ "nf-account-collection", "execute" ]
        },
        "artifacts" : {
          "dg-config-deploy-process" : {
            "type" : "artifact-directed-graph",
            "file" : "Plans/CONFIG_ConfigDeploy.xml"
          }
        }
      },
      "nf-account-collection" : {
        "type" : "component-resource-resolution",
        "interfaces" : {
          "ResourceResolutionComponent" : {
            "operations" : {
              "process" : {
                "inputs" : {
                  "artifact-prefix-names" : [ "nf-params" ]
                }
              }
            }
          }
        },
        "artifacts" : {
          "nf-params-template" : {
            "type" : "artifact-template-velocity",
            "file" : "Templates/nf-params-template.vtl"
          },
          "nf-params-mapping" : {
            "type" : "artifact-mapping-resource",
            "file" : "Templates/nf-params-mapping.json"
          }
        }
      },
      "execute" : {
        "type" : "component-netconf-executor",
        "requirements" : {
          "netconf-connection" : {
            "capability" : "netconf",
            "node" : "netconf-device",
            "relationship" : "tosca.relationships.ConnectsTo"
          }
        },
        "interfaces" : {
          "ComponentNetconfExecutor" : {
            "operations" : {
              "process" : {
                "inputs" : {
                  "script-type" : "jython",
                  "script-class-reference" : "Scripts/python/ConfigDeploy.py",
                  "instance-dependencies" : [ ],
                  "dynamic-properties" : "*config-deploy-properties"
                }
              }
            }
          }
        },
        "artifacts" : {
          "baseconfig-template" : {
            "type" : "artifact-template-velocity",
            "file" : "Templates/baseconfig-template.vtl"
          },
          "baseconfig-mapping" : {
            "type" : "artifact-mapping-resource",
            "file" : "Templates/baseconfig-mapping.json"
          },
          "incremental-config-template" : {
            "type" : "artifact-template-velocity",
            "file" : "Templates/incremental-config-template.vtl"
          },
          "incremental-config-mapping" : {
            "type" : "artifact-mapping-resource",
            "file" : "Templates/incremental-config-mapping.json"
          }
        }
      }
    }
  }


Overall workflow example w/ component and artifact

An example of the config-deploy workflow follows the same pattern as the config-assign workflow, combined with the node_templates shown above.
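A minimal sketch of what the workflow definition could look like, assuming it mirrors the config-assign pattern and targets the config-deploy-process node_template from the previous example (the exact inputs, such as config-deploy-properties, depend on your blueprint):

Code Block: config-deploy workflow (sketch)
"workflows": {
  "config-deploy": {
    "steps": {
      "config-deploy": {
        "description": "Config Deploy Workflow",
        "target": "config-deploy-process"
      }
    },
    "inputs": {
      "resolution-key": {
        "required": true,
        "type": "string"
      },
      "config-deploy-properties": {
        "description": "Dynamic PropertyDefinition for workflow(config-deploy).",
        "required": true,
        "type": "dt-config-deploy-properties"
      }
    }
  }
}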

Build your own workflow

TBD

mapping

Defines the contract of each resource to be resolved. Each placeholder in the template must have a corresponding mapping definition.

A mapping is comprised of:

  • name
  • required / optional
  • type (support complex type)
  • dictionary-name
  • dictionary-source
  • dependencies: this makes sure the given resources get resolved prior to the resolution of the resource defining the dependency.

The dictionary fields reference a specific data dictionary.
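A hypothetical mapping entry for the interface-description resource resolved through the capability source shown earlier could look like this (illustrative only; the exact field set depends on the mapping schema in use):

Code Block: mapping entry (illustrative)
{
  "name": "interface-description",
  "property": {
    "required": true,
    "type": "string"
  },
  "dictionary-name": "interface-description",
  "dictionary-source": "capability",
  "dependencies": [
    "vf-module-type"
  ]
}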

scripts

Scripts are needed if any of the mappings uses a capability source to resolve a parameter.

config-assign

This action is meant to assign all the resources and mesh the templates needed for the configuration to apply post-instantiation.

This action is triggered by SO after the AssignBB has been executed for Service, VNF and VF-Module.

artifacts

Combination of templates with their respective mappings.

Scripts if needed

config-deploy

This action is meant to push the configuration templates defined during the config-assign step for the post-instantiation.

This action is triggered by SO after the CreateBB has been executed for all the VF-Modules.

artifacts

Combination of templates with their respective mappings.

Scripts using Netconf or Restconf to push the configuration to the network element.

SDC Modeling & Distribution

Introduction

The purpose of this section is to describe the integration of CDS within SDC.

What's new

At the VF and PNF level, a new artifact type, CONTROLLER_BLUEPRINT_ARCHIVE, allows the designer to load the previously designed CBA as part of the resource.


How to add the CBA in SDC VF resource (similar for PNF)

Create the VF resource


Click on Deployment Artifact, then Add other artifacts, and select your CBA.


Check the artifact is uploaded OK, and click on Certify.


Create a new service model, and add the newly created VF (including CBA artifact) to the new service model. Click on "Add Service"


Click on "Composition", and drag the VF we created from the palette on the left onto the canvas in the middle.

Then, click on "Submit for Testing".



Click on Properties Assignments, then click on the service name, e.g. "CDS-VNF-TEST" from the right bar.

Type "sdnc" in the filter box, add the sdnc_model_name, sdnc_model_version, and sdnc_artifact_name, and click "Save".

  • sdnc_model_name - This is the name of the blueprint (e.g. CBA name)
  • sdnc_model_version - This is the version of the blueprint
  • sdnc_artifact_name - This is the name of the VNF resource accumulator template
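For example, using the vFW blueprint shown earlier in this guide, the values could be as below (illustrative; shown here as a simple key/value listing, entered through the SDC properties form):

Code Block: property assignment (illustrative values)
{
  "sdnc_model_name": "vFW_spinup",
  "sdnc_model_version": "1.0.0",
  "sdnc_artifact_name": "vfw-vnf"
}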


Type "skip" in the filter box, and set "skip post instantiation" to FALSE, then click "Save".


Login as Tester (jm0007/demo123456!) and accept the new service.

Login as Governor (gv0001/demo123456!) and approve for distribution.

Login as Operator (op0001/demo123456!) and click on "Distribute".

Click on "Monitor" to check the progress of the distribution, and check that all ONAP components were notified, and downloaded the artifacts, and deployed OK.
