-------------------- Work in progress --------------------
This guide explains how to do service design to automate instantiation and day-0 configuration.
Installation
ONAP is meant to be deployed within a Kubernetes environment. Hence, the de-facto way to deploy CDS is through Kubernetes.
ONAP also packages its Kubernetes manifests as charts, using Helm.
Prerequisite
https://docs.onap.org/en/latest/guides/onap-developer/settingup/index.html
Setup local Helm
Get the chart
Make sure to check out the release to use by replacing $release-tag
in the command below.
Install CDS
Result
CDS Design time
Below are the requirements to enable automation for a service within ONAP.
For instantiation, the goal is to be able to automatically resolve all the HEAT/Helm variables, called cloud parameters.
For post-instantiation, the goal is to configure the VNF with initial configuration.
As part of SDC design time, when defining the topology, for a resource of type VF or PNF, you need to specify the required SDNC properties; see the SDC integration section of this guide.
Prerequisite
Gather the parameters:
- Identify which template parameters are static and dynamic
Create and fill in a table for all the dynamic values.
While doing so, identify the resources that use the same process to be resolved; for instance, if two IPs have to be resolved through the same IPAM, the process to resolve each IP is the same.
Here is the information to capture for each dynamic cloud parameter:
Data dictionary
For each uniquely identified dynamic resource, along with all its ingredients, we need to create a data dictionary.
Here is the modeling guideline: Modeling Concepts#resourceDefinition-modeling
Below are examples of data dictionaries.
Value will be passed as input.
{
  "tags": "unit-number",
  "name": "unit-number",
  "property": {
    "description": "unit-number",
    "type": "string"
  },
  "updated-by": "adetalhouet",
  "sources": {
    "input": {
      "type": "source-input"
    }
  }
}
Value will be defaulted.
{
  "tags": "prefix-id",
  "name": "prefix-id",
  "property": {
    "description": "prefix-id",
    "type": "integer"
  },
  "updated-by": "adetalhouet",
  "sources": {
    "default": {
      "type": "source-default"
    }
  }
}
Value will be resolved through REST.
Modeling reference: Modeling Concepts#rest
In this example, we're making a POST request to an IPAM system with no payload.
Some ingredients are required to perform the query, in this case $prefixId. Hence it is provided as an input-key-mapping and defined as a key-dependency.
Please refer to the modeling guideline for more in depth understanding.
As part of this request, the expected response will be as below. What is of interest is the address field, as this is what we're trying to resolve.
To tell the resolution framework what is of interest in the response, the path property can be used; it uses JSON_PATH to get the value.
{
  "tags" : "oam-local-ipv4-address",
  "name" : "create_netbox_ip",
  "property" : {
    "description" : "netbox ip",
    "type" : "string"
  },
  "updated-by" : "adetalhouet",
  "sources" : {
    "primary-config-data" : {
      "type" : "source-rest",
      "properties" : {
        "type" : "JSON",
        "verb" : "POST",
        "endpoint-selector" : "ipam-1",
        "url-path" : "/api/ipam/prefixes/$prefixId/available-ips/",
        "path" : "/address",
        "input-key-mapping" : {
          "prefixId" : "prefix-id"
        },
        "output-key-mapping" : {
          "address" : "address"
        },
        "key-dependencies" : [ "prefix-id" ]
      }
    }
  }
}
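To illustrate how the path property selects the field of interest, here is a minimal, self-contained sketch. The actual framework uses a JSON_PATH implementation; this simplified walker and the sample response are for illustration only.

```python
import json

def extract_by_path(response_json, path):
    """Walk a JSON document following a '/'-separated path.

    Simplified stand-in for the framework's JSON_PATH handling, just
    enough to show why "path": "/address" picks the address field.
    """
    node = json.loads(response_json)
    for key in path.strip("/").split("/"):
        # Lists are indexed numerically, objects by key.
        node = node[int(key)] if isinstance(node, list) else node[key]
    return node

# Hypothetical IPAM response; the real reply carries more fields.
response = '{"id": 8, "address": "10.0.0.5/32", "prefix": 8}'
address = extract_by_path(response, "/address")
```

With the data dictionary above, the resolved value of create_netbox_ip would then be the extracted address string.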
primary-aai-data via type source-rest
TBD
{
  "name" : "primary-aai-data",
  "tags" : "primary-aai-data",
  "updated-by" : "Steve, Siani <steve.djissitchi@bell.ca>",
  "property" : {
    "description" : "primary-aai-data",
    "type" : "string"
  },
  "sources" : {
    "default": {
      "type": "source-default",
      "properties": { }
    },
    "input": {
      "type": "source-input",
      "properties": { }
    },
    "primary-aai-data" : {
      "type" : "source-rest",
      "properties": {
        "type": "JSON",
        "url-path": "$aai-port/aai/v14/network/generic-vnfs/generic-vnf/$vnf-id",
        "path": "",
        "input-key-mapping": {
          "aai-port": "port",
          "vnf-id": "vnf-id"
        },
        "output-key-mapping": { },
        "key-dependencies": [ "port", "vnf-id" ]
      }
    }
  }
}
Value will be resolved through a database.
Modeling reference: Modeling Concepts#sql
In this example, we're making a SQL query to the primary database.
Some ingredients are required to perform the query, in this case $vfmoduleid. Hence it is provided as an input-key-mapping and defined as a key-dependency.
Please refer to the modeling guideline for more in depth understanding.
As part of this request, the expected result will be put in value. In the output-key-mapping section, that value will be mapped to the expected resource name to resolve.
{
  "name": "vf-module-type",
  "tags": "vf-module-type",
  "property": {
    "description": "vf-module-type",
    "type": "string"
  },
  "updated-by": "adetalhouet",
  "sources": {
    "primary-db": {
      "type": "source-db",
      "properties": {
        "type": "SQL",
        "query": "select sdnctl.demo.value as value from sdnctl.demo where sdnctl.demo.id=:vfmoduleid",
        "input-key-mapping": {
          "vfmoduleid": "vf-module-number"
        },
        "output-key-mapping": {
          "vf-module-type": "value"
        },
        "key-dependencies": [ "vf-module-number" ]
      }
    }
  }
}
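Conceptually, the SQL source runs a parameterised query whose named parameters come from the input-key-mapping and whose selected columns are mapped back through the output-key-mapping. A self-contained sketch with an in-memory sqlite3 database standing in for the primary database (the table name and content are made up for illustration):

```python
import sqlite3

# In-memory stand-in for the sdnctl.demo table of the example.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE demo (id TEXT, value TEXT)")
conn.execute("INSERT INTO demo VALUES ('vfmodule-1', 'vfw')")

# input-key-mapping: the already-resolved resource vf-module-number
# feeds the named query parameter :vfmoduleid.
inputs = {"vfmoduleid": "vfmodule-1"}
row = conn.execute(
    "SELECT value FROM demo WHERE id = :vfmoduleid", inputs
).fetchone()

# output-key-mapping: the selected column "value" becomes the
# resolved resource vf-module-type.
vf_module_type = row[0]
```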
Value will be resolved through the execution of a script.
Modeling reference: Modeling Concepts#Capability
In this example, we're making use of a Python script.
Some ingredients are required to perform the query, in this case $vf-module-type. Hence it is provided as a key-dependency.
Please refer to the modeling guideline for more in depth understanding.
As part of this request, the expected response will be set within the script itself.
{
  "tags": "interface-description",
  "name": "interface-description",
  "property": {
    "description": "interface-description",
    "type": "string"
  },
  "updated-by": "adetalhouet",
  "sources": {
    "capability": {
      "type": "source-capability",
      "properties": {
        "script-type": "jython",
        "script-class-reference": "Scripts/python/DescriptionExample.py",
        "key-dependencies": [ "vf-module-type" ]
      }
    }
  }
}
The script itself is as below.
The key is to have the script class derive from the framework standards. In the case of resource resolution, the class to derive from is AbstractRAProcessor. It provides the required methods to implement, process and recover, along with some utility functions, such as set_resource_data_value or addError. These functions either come from the AbstractRAProcessor class or from the class it derives from.
If the resolution fails, the recover method will be called with the exception as a parameter.
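A minimal, self-contained sketch of such a script is shown here, with a stand-in stub for the AbstractRAProcessor base class; in a real CBA the class, and the exact signatures of its methods, are provided by the blueprint processor runtime, so treat the shapes below as illustrative only.

```python
# Stand-in stub for the CDS AbstractRAProcessor base class, so the
# example is self-contained; in a real CBA the class comes from the
# blueprint processor runtime and has richer signatures.
class AbstractRAProcessor(object):
    def __init__(self):
        self.value_to_resolve = None  # e.g. the resolved vf-module-type
        self.resolved = {}
        self.errors = []

    def set_resource_data_value(self, resource_assignment, value):
        # The real implementation writes the value back into the
        # resolution context; here we just record it.
        self.resolved[resource_assignment] = value

    def addError(self, message):
        self.errors.append(message)


class DescriptionExample(AbstractRAProcessor):
    def process(self, resource_assignment):
        # vf-module-type is a key-dependency, so it has been resolved
        # before this script runs and is available here.
        description = "Interface for vf-module of type %s" % self.value_to_resolve
        self.set_resource_data_value(resource_assignment, description)

    def recover(self, runtime_exception, resource_assignment):
        # Called by the framework if process() raised an exception.
        self.addError(str(runtime_exception))
```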
Value will be resolved through REST, and the output will be a complex type.
Modeling reference: Modeling Concepts#rest
In this example, we're making a POST request to an IPAM system with no payload.
Some ingredients are required to perform the query, in this case $prefixId. Hence it is provided as an input-key-mapping and defined as a key-dependency.
Please refer to the modeling guideline for more in depth understanding.
As part of this request, the expected response will be as below.
What is of interest is the address and id fields. For the process to return these two values, we need to create a custom data type, as below. The type of the data dictionary will be dt-netbox-ip.
To tell the resolution framework what is of interest in the response, the output-key-mapping section is used. The process will map the output-key-mapping to the defined data-type.
{
  "tags" : "oam-local-ipv4-address",
  "name" : "create_netbox_ip",
  "property" : {
    "description" : "netbox ip",
    "type" : "dt-netbox-ip"
  },
  "updated-by" : "adetalhouet",
  "sources" : {
    "primary-config-data" : {
      "type" : "source-rest",
      "properties" : {
        "type" : "JSON",
        "verb" : "POST",
        "endpoint-selector" : "ipam-1",
        "url-path" : "/api/ipam/prefixes/$prefixId/available-ips/",
        "path" : "",
        "input-key-mapping" : {
          "prefixId" : "prefix-id"
        },
        "output-key-mapping" : {
          "address" : "address",
          "id" : "id"
        },
        "key-dependencies" : [ "prefix-id" ]
      }
    }
  }
}
CBA scaffolding
The overall purpose of the document is to constitute a CBA; see Modeling Concepts#ControllerBlueprintArchive for an understanding of what a CBA is.
Now is the time to create the scaffolding for your CBA.
What you will need is the following base directory/file structure:
├── Definitions
│   └── blueprint.json       Overall TOSCA service template (workflow + node_template)
├── Environments             Contains *.properties files as required by the service
├── Plans                    Contains Directed Graphs
├── Scripts                  Contains scripts
│   ├── python               Python scripts
│   └── kotlin               Kotlin scripts
├── TOSCA-Metadata
│   └── TOSCA.meta           Meta-data of the overall package
└── Templates                Contains combinations of mapping and template
The TOSCA.meta should have this information
TOSCA-Meta-File-Version: 1.0.0
CSAR-Version: 1.0
Created-By: Alexis de Talhouët (adetalhouet89@gmail.com)
Entry-Definitions: Definitions/blueprint.json   <- Path reference to the blueprint.json file. If the file name is changed, change it here accordingly.
Template-Tags: ONAP, CBA, Test
Content-Type: application/vnd.oasis.bpmn
The blueprint.json should have the following metadata
{
  "metadata": {
    "template_author": "Alexis de Talhouët",
    "author-email": "adetalhouet89@gmail.com",
    "user-groups": "ADMIN, OPERATION",
    "template_name": "golden",        <- This is the overall CBA name; it will be referred to later as sdnc_blueprint_name
    "template_version": "1.0.0",      <- This is the overall CBA version; it will be referred to later as sdnc_blueprint_version
    "template_tags": "ONAP, CBA, Test"
  }
  . . .
Workflows
The following workflows are contracts established between SO, SDNC and CDS to cover the instantiation and the post-instantiation use cases.
Please refer to the modeling guide to understand the workflow concept: Modeling Concepts#workflow
The workflow definition will be added within the blueprint.json file; see CBA scaffolding.
resource-assignment
This action is meant to assign resources needed to instantiate the service, e.g. to resolve all the cloud parameters.
Also, this action has the ability to perform a dry-run, meaning that the result of the resolution will be made visible to the user.
If the user is fine with the result, he can proceed; else (TBD) he will have the opportunity to re-trigger the resolution.
Context
This action is triggered by Generic-Resource-API (GR-API) within SDNC as part of the AssignBB orchestrated by SO.
It will be triggered for each VNF and VF-Module (referred to as entity below).
See SO Building blocks Assignment.
Component
This action type requires a node_template of type component-resource-resolution.
Templates
Understand resource accumulator templates
These templates are specific to the instantiation scenario and rely on GR-API within SDNC.
There are two categories of resources: the ones that get created (and can be released when the service is destroyed), and the ones that already exist and only get resolved. A capability defines the former.
The resource accumulator template is composed of the following sections:
resource-accumulator-resolved-data
Defines all the resources that can be resolved directly from the context. It expresses a direct mapping between the name of the resource and its value.
capability-data
Defines the logic to use to create a specific resource, along with the ingredients required to invoke the capability and the output mapping. See the ingredients as function parameters, and output mapping as returned value.
The logic to resolve the resource is a DG, hence DG development is required to support a new capability.
Currently the following capabilities exist:
- Netbox: netbox-ip-assign
- Name generation: generate-name
Add a new capability
In order to add a new capability, you need to do the following:
- Create the DG that will handle the logic to resolve the resource. If your DG requires properties, templates, etc., use the same concept from step 2 to load them in the SDNC container (FYI, using a persistent volume for them is highly recommended).
- Load the DG within SDNC.
- Add the capability in the self-serve-vnf-assign DG, in the node named set ss.capability.execution-order[], then upload the updated version of this DG. When doing so, make sure to increment the last parameter ss.capability.execution-order_length.
Required templates
The names of the templates are very important and can't be random. Below are the requirements.
VNF
The VNF Resource Accumulator Template prefix name can be anything, but what is very important is that when integrating with SDC the sdnc_artifact_name property of the VF or PNF needs to be the same; see here.
VF-Modules
Each vf-module will have its own resource accumulator template, and its prefix name must be the vf-module-label, which is simply the name of the HEAT file defining the OS::Nova::Server.
Example: if the file is named vfw.yaml, the vf-module-label will be vfw.
For instance, with the vFW service HEAT definition, you will see in the VSP within SDC the following screen, showing you the label of each vf-module.
Mapping
Each template requires its associated mapping file; see Modeling Concepts#template.
The mapping file basically contains a reference to the data dictionary to use to resolve a particular resource.
The data dictionary defines the HOW and the mapping defines the WHAT.
Relation between data dictionary, mapping and template.
Below are two examples using color coding to help understand the relationships.
In orange is the information regarding the template. As mentioned before, template is part of the blueprint itself, and for the blueprint to know what template to use, the name has to match.
In green is the relationship between the value resolved within the template, and how it's mapped coming from the blueprint.
In blue is the relationship between a resource mapping to a data dictionary.
In red is the relationship between the resource name to be resolved and the HEAT environment variables.
The key takeaway here is that whatever the value is for each color, it has to match all across. This means both right and left hand side are equivalent; it's all on the designer to express the modeling for the service. That said, best practice is example 1.
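The "has to match all across" rule can be checked mechanically. A small sketch, assuming Velocity-style ${name} placeholders and a mapping file that is a JSON list of entries carrying a "name" field; the template and mapping content here are illustrative, not taken from a real CBA:

```python
import json
import re

def check_template_against_mapping(template_text, mapping_json):
    """Return the set of ${placeholders} that have no mapping entry.

    An empty set means every resource named in the template can be
    resolved through the mapping file.
    """
    placeholders = set(re.findall(r"\$\{([\w-]+)\}", template_text))
    mapped_names = {entry["name"] for entry in json.loads(mapping_json)}
    return placeholders - mapped_names

# Illustrative template and mapping; names must line up exactly.
template = '{"vf_module_type": "${vf-module-type}", "unit": "${unit-number}"}'
mapping = json.dumps([
    {"name": "vf-module-type", "dictionary-name": "vf-module-type"},
    {"name": "unit-number", "dictionary-name": "unit-number"},
])
missing = check_template_against_mapping(template, mapping)
```

A non-empty result points at exactly the kind of mismatch the color-coded examples warn about.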
Inputs
| Property | Description |
|---|---|
| template-prefix | These templates are identified using an artifact prefix; see Modeling Concepts#template. In order to know for which entity the action is triggered, this is required as input. |
Output
In order to perform dry-run, it is necessary to provide the meshed resolved template as output. To do so, the use of Modeling Concepts#getAttribute expression is required.
Also, as mentioned here Modeling Concepts#resourceResolution, the resource resolution component node will populate an attribute named assignment-params with the result.
Finally, the name of the output has to be meshed-template so SDNC GR-API knows how to properly parse the response.
Example
Here is an example of the resource-assignment workflow:
{
  "workflows": {
    "resource-assignment": {
      "steps": {
        "resource-assignment-process": {
          "description": "Resource Assign Workflow",
          "target": "resource-assignment-process"
        }
      },
      "inputs": {
        "template-prefix": {
          "required": true,
          "type": "string"
        },
        "resolution-key": {
          "required": true,
          "type": "string"
        },
        "resource-assignment-properties": {
          "description": "Dynamic PropertyDefinition for workflow(resource-assignment).",
          "required": true,
          "type": "dt-resource-assignment-properties"
        }
      },
      "outputs": {
        "meshed-template": {
          "type": "json",
          "value": {
            "get_attribute": [ "SELF", "assignment-params" ]
          }
        }
      }
    }
  }
}
Understand SDNC DG flow logic
The logic for VNF and vf-module assignment is pretty much the same.
This is the general DG logic of the VNF assign flow and sub-flows:
- call vnf-topology-operation
  - call vnf-topology-operation-assign
    - call self-serve-vnf-assign
      - set capability.execution-order
      - call self-serve-vnf-ra-assignment
        - execute REST call to CDS blueprint processor
        - put resource-accumulator-resolved-data in MDSAL GR-API/services/service/$serviceInstanceId/vnfs/vnf/$vnfId
      - call self-serve- + capability-name
    - put vnf information in AAI (including the selflink)
    - call naming-policy-generate-name
    - put generic-vnf relationship in AAI
This is the general logic of the vf-module assign flow and sub-flows:
- call vf-module-topology-operation
  - call vf-module-topology-operation-assign
    - set service-data based on SO request (userParams / cloudParams)
    - call self-serve-vf-module-assign
      - set capability.execution-order: this is where we're adding our capability flow: netbox-ip-assign
      - call self-serve-vfmodule-ra-assignment
        - execute ConfigAssignment (Blueprint Processor micro-service)
        - put resource-accumulator-resolved-data in MDSAL service-data
      - call self-serve- + capability-name, e.g. self-serve-netbox-ip-assign
        - execute NetboxClient assignIpAddress
    - put vf-module information in AAI
    - put vnfc information in AAI
config-assign
This action is meant to assign all the resources and mesh the templates needed for the configuration to apply during post-instantiation (day0 config).
If the user is fine with the result, he can proceed; else (TBD) he will have the opportunity to re-trigger the resolution.
Context
This action is triggered by SO after the AssignBB has been executed for Service, VNF and VF-Module. It corresponds to the ConfigAssignBB.
See SO Building blocks Assignment.
Steps
This is a single-action type of workflow; hence the target will refer to a node_template of type component-resource-resolution.
Inputs
| Property | Description |
|---|---|
| resolution-key | The dry-run functionality requires the ability to retrieve, at a later point in the process, the resolution that has been made. The combination of the artifact-name and the resolution-key will be used to uniquely identify the result. |
Output
In order to perform dry-run, it is necessary to provide the meshed resolved template as output. To do so, the use of Modeling Concepts#getAttribute expression is required.
Also, as mentioned here Modeling Concepts#resourceResolution, the resource resolution component node will populate an attribute named assignment-params with the result.
Example
Here is an example of the config-assign workflow:
{
  "workflows": {
    "config-assign": {
      "steps": {
        "config-assign-process": {
          "description": "Config Assign Workflow",
          "target": "config-assign-process"
        }
      },
      "inputs": {
        "resolution-key": {
          "required": true,
          "type": "string"
        },
        "config-assign-properties": {
          "description": "Dynamic PropertyDefinition for workflow(config-assign).",
          "required": true,
          "type": "dt-config-assign-properties"
        }
      },
      "outputs": {
        "dry-run": {
          "type": "json",
          "value": {
            "get_attribute": [ "SELF", "assignment-params" ]
          }
        }
      }
    }
  }
}
config-deploy
This action is meant to push the configuration templates defined during the config-assign step for the post-instantiation.
This action is triggered by SO after the CreateBB has been executed for all the VF-Modules.
Context
This action is triggered by SO after the CreateVnfBB has been executed. It corresponds to the ConfigDeployBB.
See SO Building blocks Assignment.
Steps
This is a single-action type of workflow; hence the target will refer to a node_template of type component-netconf-executor, component-jython-executor, or component-restconf-executor.
Inputs
| Property | Description |
|---|---|
| resolution-key | Needed to retrieve the resolution that was made at an earlier point in the process. The combination of the artifact-name and the resolution-key will be used to uniquely identify the result. |
Output
SUCCESS or FAILURE
Example
Here is an example of the config-deploy workflow:
{
  "workflows": {
    "config-deploy": {
      "steps": {
        "config-deploy": {
          "description": "Config Deploy using Python (Netconf) script",
          "target": "config-deploy-process"
        }
      },
      "inputs": {
        "resolution-key": {
          "required": true,
          "type": "string"
        },
        "config-deploy-properties": {
          "description": "Dynamic PropertyDefinition for workflow(config-deploy).",
          "required": true,
          "type": "dt-config-deploy-properties"
        }
      }
    }
  }
}
Introduction
The purpose of this section is to describe the integration of CDS within SDC.
What's new
At the VF and PNF level, a new artifact type, CONTROLLER_BLUEPRINT_ARCHIVE, allows the designer to load the previously designed CBA as part of the resource.
How to add the CBA in SDC VF resource (similar for PNF)
Create the VF resource
Click on Deployment Artifact, then Add other artifacts, and select your CBA.
Check that the artifact uploaded OK, and click on Certify.
Create a new service model, and add the newly created VF (including CBA artifact) to the new service model. Click on "Add Service"
Click on "Composition", and drag the VF we created from the palette on the left onto the canvas in the middle.
Then, click on "Submit for Testing".
Click on Properties Assignments, then click on the service name, e.g. "CDS-VNF-TEST" from the right bar.
Type "sdnc" in the filter box, add the sdnc_model_name, sdnc_model_version, and sdnc_artifact_name, and click "Save".
sdnc_model_name - This is the name of the blueprint (e.g. CBA name)
sdnc_model_version - This is the version of the blueprint
sdnc_artifact_name - This is the name of the VNF resource accumulator template
Type "skip" in the filter box, and set "skip post instantiation" to FALSE, then click "Save".
Login as Tester (jm0007/demo123456!) and accept the new service.
Login as Governor (gv0001/demo123456!) and approve for distribution.
Login as Operator (op0001/demo123456!) and click on "Distribute".
Click on "Monitor" to check the progress of the distribution, and check that all ONAP components were notified, and downloaded the artifacts, and deployed OK.
Starting from the Dublin release, CDS offers a new package configuration to design service provisioning. This section describes, step by step, the procedure for designing a new CBA from scratch.
The CBA package content is well described in CDS Modeling Concepts and also in the Design Time section, which shows the structure of a CBA and the different definitions/artifacts. This section focuses more on the creation of a new CBA (the structure: required folders and files) and the enrichment procedure to generate the complete config file.
CBA directory and structure
├── CBA-archive-name                          # CBA Root Directory
│   ├── Definitions/
│   │   └── CBA_configuration_file.json       # CBA configuration file (Mandatory)
│   ├── Environments/                         # All environment files contained in this folder are loaded in Blueprint processor run-time
│   │   ├── env-prod.properties
│   │   └── env-test.properties
│   ├── Plans/
│   │   └── CONFIG_DirectedGraphExample.xml   # Directed graph artifact
│   ├── Scripts/                              # Scripts used for capability resource resolution
│   │   ├── kotlin/
│   │   │   └── script_kotlin.kt
│   │   ├── ansible/
│   │   │   └── ansible_file.yaml
│   │   └── python/
│   │       └── SamplePython.py
│   ├── TOSCA-Metadata/
│   │   └── TOSCA.meta                        # CBA entry point (Mandatory)
│   └── Templates/
│       ├── example1-template.jinja           # Template file that will dynamically represent a payload in some execution node (extensions supported: .vtl and .jinja)
│       ├── example1-mapping.json             # List of variables that will be resolved to fulfill the jinja template
│       ├── example2-template.vtl             # Velocity template file
│       └── example2-mapping.json             # Mapping file for the velocity template
Fig. CBA config file structure
A. CBA configuration file sections description
The above diagram shows a simple CBA with one workflow and one node template. The following describes each section defined in CBA config file.
- CBA Metadata:
This section specifies information about the CBA, such as:
- The Author: Name and email
- User privileges for this self-service provisioning execution
- CBA identifier: Template name and Version (Ex. Template name: My-self-service-name, Version: 1.0.0)
- Template tags: Reference words that can be used to find this CBA.
- DSL Definition:
We define here all parameters, in JSON, needed in service provisioning.
Ex. Endpoint selector to provide remote Ansible server parameters.
"ansible-remote-endpoint" : {
  "type" : "token-auth",
  "url" : "http://ANSIBLE_IP_ADDRESS",
  "token" : "Bearer J9gEtMDqf7P4YsJ74fioY9VAhLDIs1"
}
- Workflows execution:
- my-workflow1: This is a workflow to describe the action that will trigger the self-service provisioning in run-time. A workflow can take input and return output. It can also follow one or many steps. In this example, only one step is defined.
Each step points to a target which is the corresponding node template, and the target specified here is: my-workflow-target-node-node-template.
- Node templates: This section provides the self-service execution plan; usually a DG is used here to describe a complex workflow. But the above CBA contains a simple node template (my-workflow-node-node-template) without a DG:
The node template is defined by the node-template-execution-type. This type specifies the component function to use for this node template execution. The following shows the different components that can be executed as a node template:
├── component-resource-resolution             # Component to resolve resources
│   ├── Interface:
│   │   └── ResourceResolutionComponent
│   └── Resolution approaches:
│       ├── rr-processor-source-capability    # Resolve using capability scripts such as Jython or Kotlin
│       ├── rr-processor-source-processor-db  # Resolve using a database query
│       ├── rr-processor-source-default       # Resolve by getting the default value provided
│       └── rr-processor-source-rest          # Resolve using a REST API request
├── component-jython-executor                 # Component to execute Jython scripts
│   └── Interface:
│       └── ComponentJythonExecutor
├── component-remote-python-executor          # Component to execute remote Python scripts
│   └── Interface:
│       └── ComponentRemotePythonExecutor
├── component-restconf-executor               # Component to execute Restconf operations
│   └── Interface:
│       └── ComponentRestconfExecutor
├── component-netconf-executor                # Component to execute Netconf operations
│   └── Interface:
│       └── ComponentNetconfExecutor
├── component-cli-executor                    # CLI component
│   └── Interface:
│       └── ComponentCliExecutor
└── component-remote-ansible-executor         # Component to execute remote Ansible playbooks
    └── Interface:
        └── ComponentRemoteAnsibleExecutor
In the case where the workflow points to a DG node template, the DG describes the execution sequence to run for the corresponding workflow steps. In the following, the workflow points to a DG and executes two node templates:
- Workflow with DG
- Node templates with DG
In the below DG, we define the following sequence: [target-node-template1] → [target-node-template2]
B. Other artifacts in CBA
This section describes the different parts of the CBA, artifacts needed to have a model-driven package for self-service provisioning:
- CBA Entry point: TOSCA.meta file
TOSCA-Meta-File-Version: 1.0.0
CSAR-Version: 1.0
Created-By: Steve Siani <alphonse.steve.siani.djissitchi@ibm.com>
Entry-Definitions: Definitions/CBA_configuration_file_name.json
Template-Name: baseconfiguration
Template-version: 1.0.0
Template-Tags: Steve Siani, remote_ansible
- Environment files: Some parameters need to be resolved to fulfill the template. It is possible to provide additional variables in environment files within your CBA; the service will then get those parameters from the environment files. The designer can define many environment variables across several files, and those files are loaded automatically in the running self-service:
Constraint: Save environment files in [CBA Root Folder]/Environments/
├── CBA-archive-name                          # CBA Root Directory
│   .
│   ├── Environments/                         # All environment files contained in this folder are loaded in Blueprint processor run-time
│   │   ├── env-prod.properties
│   │   ├── env-test.properties
│   │   └── AdditionalApplications.properties
│   .
Note: When environment files are provided in the CBA under the Environments directory, the variables contained in those files are loaded into the Blueprint run-time context as a node template "BPP". Accessing those variables is then possible by calling the function getNodeTemplateAttributeValue("BPP", attribute) in the Blueprint Runtime Service, where "attribute" refers to the environment variable defined in the environment file.
val username = blueprintRuntimeService.getNodeTemplateAttributeValue("BPP", "env-test.ansible_ssh_user").asText()
- Template artifacts: Contain the template file and the corresponding template mapping. The template provides dynamic content to the self-service for configuration application.
Ex. Jinja template sample
Ex. Velocity template sample
Ex. Corresponding template mapping file sample
In this template, some parameters are resolved using the input source and some are resolved using properties-capability-source.
- Script artifacts: You may need to resolve resources using a customized script (Kotlin or Python) or execute a remote Python script on a device. In this case, you will define scripts in your CBA under the Scripts directory.
- Resource resolution using python script
In the CBA, you may need to define and resolve variables. This is possible by declaring these variables as data types, where each data type belongs to a resource dictionary. Let's take the example of the variable declared above in the template mapping.
{
  "name": "interfaces",
  "input-param": true,
  "property": {
    "type": "list",
    "entry_schema": {
      "type": "string"
    }
  },
  "dictionary-name": "properties-capability-source",
  "dictionary-source": "capability",
  "dependencies": [ "environment" ]
}
This variable is declared as an array list resolved using the resource dictionary named "properties-capability-source", from the dictionary source "capability", and it depends on a variable called "environment". A dependency means that the "environment" variable should be resolved before the "interfaces" variable is resolved.
The resource dictionary "properties-capability-source" must be loaded in the CDS run time and will point to the Python script to execute as Jython in order to resolve the "interfaces" variable.
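The dependency rule generalises: resources are resolved in an order in which each resource's dependencies come first. A small sketch of that ordering using a topological sort; the real resolution framework is more involved, so this only illustrates the ordering principle, with the dependency data mirroring the mapping entry above:

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Each resource maps to the set of resources it depends on,
# mirroring the "dependencies" field of the mapping entries.
dependencies = {
    "interfaces": {"environment"},  # from the mapping entry above
    "environment": set(),
}

# static_order() yields an order where dependencies come first,
# so "environment" is resolved before "interfaces".
resolution_order = list(TopologicalSorter(dependencies).static_order())
```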
- Component execution on Netconf device with python script
In the following, we define a node template execution as a "component-netconf-executor", and in the input we specify the script to run on the Netconf device.
C. Enrich the CBA to have complete package
Once the CBA design is done, you need to perform the enrichment action to have a fully model-driven package to execute self-service provisioning at run time. Please refer to the section "Enriching (or enhancing) a blueprint".