Introduction — UNDER CONSTRUCTION
The purpose of this page is to describe the instantiation of the vFW use case using the Controller Design Studio (CDS) in the ONAP Dublin release.
What's new:
We can see from the demo that we no longer need to perform SDNC preloading to instantiate the service; VNF naming and IP addressing are auto-assigned.
Also, we can see that a configuration is generated and saved in the CDS database via the new ConfigAssignBB.
Later, during the instantiation process, that configuration is deployed on the VF module by CDS via the new ConfigDeployBB.
Video demo of the vFW instantiation — TBD
Summary of the video demo
Before running the instantiation, we need to distribute the vFW service model in ONAP.
Then, we can use a Postman collection, as shown in the video, that contains 3 REST calls and some code to automate the instantiation of the vFW use case:
STEP CDS2: Expose the SO Catalog API
The SO Catalog DB is not exposed by default; you will need a command like the one below (using the pod name from your own deployment) to be able to send the CDS #2 REST call to the SO Catalog and get back the service VNF model details:
kubectl -n onap expose pod onap-so-so-catalog-db-adapter-56d9cc554b-9fszd --type=LoadBalancer
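After exposing the pod, a quick way to confirm that the new service was created and received an external address (a sketch; the service name pattern depends on your deployment) is:

```shell
# List the exposed catalog-db-adapter service and check its EXTERNAL-IP column
# (the name below follows the pod name used in the expose command above).
kubectl -n onap get svc | grep catalog-db-adapter
```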
Stack creation in OpenStack — TBD
We can see the final stack created in OpenStack, and the final network topology, below.
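If the OpenStack CLI is configured against the same tenant, the resulting stack can also be inspected from the command line (a sketch; the stack name "vfw" is illustrative, use the name from your own deployment):

```shell
# List Heat stacks, then drill into the vFW stack's resources and servers.
openstack stack list
openstack stack resource list vfw
openstack server list
```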
Environment preparation for the Postman Collection:
In order to run the Postman collection correctly, we need to create 3 environment variables in Postman:
- cds-service-model: the name of the service model distributed by the robot script; you can find it by running the CDS #1 call once and looking for the VNF that has today's date and time.
- cds-instance-name: the name of the service instance we will instantiate.
- k8s: our ONAP load balancer IP address.
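For the k8s variable, one way to find a usable address (a sketch, assuming a standard OOM Kubernetes deployment) is:

```shell
# Show cluster node addresses; a node IP usually works for NodePort access.
kubectl get nodes -o wide
# Or list ONAP services of type LoadBalancer and use one of their external IPs.
kubectl -n onap get svc | grep LoadBalancer
```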
Also, we need to update our IaaS OpenStack parameters in the body of the SO service instantiation REST call (CDS #3):
- lcpCloudRegionId: the cloud region name
- tenantId: the tenant ID
- public_net_id: the public network ID in OpenStack
- onap_private_net_id: the private network ID in OpenStack; we need this because it is not created by the auto-assignment service
- onap_private_subnet_id: the private subnet ID
- pub_key: the public key to be put on the VMs
- image_name: the Ubuntu 16 image name
- flavor_name: the flavor
- sec_group: the security group that will be applied to the VMs
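Most of these values can be looked up with the OpenStack CLI (a sketch; assumes the client is configured for the target tenant):

```shell
# Look up IDs and names for the instantiation parameters.
openstack network list         # public_net_id, onap_private_net_id
openstack subnet list          # onap_private_subnet_id
openstack image list           # image_name (e.g. an Ubuntu 16.04 image)
openstack flavor list          # flavor_name
openstack security group list  # sec_group
openstack keypair list         # pub_key (or paste the key's public half directly)
```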
SO Workflow BBs
After the service instantiation REST call to SO, we can see that SO decomposes the service into 1 VNF + 4 VF modules, and creates 18 building blocks (BBs) that will be executed to instantiate the use case.
In the video above we can see the BBs progressing until the full workflow is completed, and the video also shows the stack as it comes up in OpenStack.
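Besides watching the BBs in the video, the progress of the request can be polled through SO's orchestrationRequests API (a sketch; the request ID, host, port and credentials below are assumptions to adapt to your deployment):

```shell
# Poll the status of the macro instantiation request started by the CDS #3 call.
# REQUEST_ID is the requestId returned in the instantiation response (hypothetical value).
REQUEST_ID=00000000-0000-0000-0000-000000000000
curl -s -u 'InfraPortalClient:password1$' \
  "http://<so-api-handler-host>:8080/onap/so/infra/orchestrationRequests/v7/${REQUEST_ID}"
```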
Results of the Postman Rest calls:
Below is the output of the calls shown in the video:
Service instance in SDNC MD-SAL (attached in the file vFW_sdnc_mdsal.txt, as it is too big for a collapsible code block).
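The same SDNC MD-SAL content can also be fetched directly over RESTCONF (a sketch; host, port and credentials are assumptions for a typical deployment):

```shell
# Dump the GENERIC-RESOURCE-API service tree that holds the service instance data.
curl -s -u "admin:${SDNC_PASSWORD}" \
  "http://<sdnc-host>:8282/restconf/config/GENERIC-RESOURCE-API:services" \
  | python -m json.tool
```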
The main reference for the CDS sequence flows is here: Instantiation - SDN-C Generic Resource API (Enhancement)
To monitor or troubleshoot the vFW instantiation with CDS in Dublin, we can check several ONAP component logs, as described below:
#Some commands to quickly check the CDS processing.
#If you like these commands, you can create aliases, so you can quickly call them anytime.
#E.g.: alias l-sdnc='kubectl -n onap exec -it onap-sdnc-sdnc-0 -- cat /var/log/onap/sdnc/karaf.log'
#Author: abdelmuhaimen.seaudi@orange.com

#Check SDNC logs:
kubectl -n onap exec -it onap-sdnc-sdnc-0 -- cat /var/log/onap/sdnc/karaf.log
kubectl -n onap exec -it onap-sdnc-sdnc-0 -- tail -f /var/log/onap/sdnc/karaf.log

#Check SO OpenStack Adapter logs:
kubectl -n onap get pods | grep so-openstack | grep Running | cut -f1 -d" " | xargs -i kubectl -n onap exec {} -- cat /app/logs/openstack/debug.log
kubectl -n onap get pods | grep so-openstack | grep Running | cut -f1 -d" " | xargs -i kubectl -n onap exec {} -- tail -f /app/logs/openstack/debug.log

#Check SO BPMN logs:
kubectl -n onap get pods | grep so-bpmn | grep Running | cut -f1 -d" " | xargs -i kubectl -n onap exec {} -- cat /app/logs/bpmn/debug.log
kubectl -n onap get pods | grep so-bpmn | grep Running | cut -f1 -d" " | xargs -i kubectl -n onap exec {} -- tail -f /app/logs/bpmn/debug.log

#Check CDS Blueprint Processor logs:
kubectl -n onap get pods | grep blueprints-processor | grep Running | cut -f1 -d" " | xargs -i kubectl -n onap logs {}
kubectl -n onap get pods | grep blueprints-processor | grep Running | cut -f1 -d" " | xargs -i kubectl -n onap logs {} -f

#Check Netbox logs:
kubectl -n onap get pods | grep netbox-app | grep Running | cut -f1 -d" " | xargs -i kubectl -n onap logs {}
kubectl -n onap get pods | grep netbox-app | grep Running | cut -f1 -d" " | xargs -i kubectl -n onap logs {} -f

#Check Naming Service logs:
kubectl -n onap get pods | grep name-gen | grep Running | cut -f1 -d" " | xargs -i kubectl -n onap logs {}
kubectl -n onap get pods | grep name-gen | grep Running | cut -f1 -d" " | xargs -i kubectl -n onap logs {} -f
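The alias idea mentioned in the comments above can be taken further by collecting the commands in ~/.bashrc so they survive new shells (a sketch; the alias names are illustrative and the pod names must match your deployment):

```shell
# Shortcut aliases for the log checks above; append these lines to ~/.bashrc.
alias l-sdnc='kubectl -n onap exec -it onap-sdnc-sdnc-0 -- cat /var/log/onap/sdnc/karaf.log'
alias l-so-bpmn='kubectl -n onap get pods | grep so-bpmn | grep Running | cut -f1 -d" " | xargs -i kubectl -n onap exec {} -- tail -f /app/logs/bpmn/debug.log'
alias l-cds='kubectl -n onap get pods | grep blueprints-processor | grep Running | cut -f1 -d" " | xargs -i kubectl -n onap logs {} -f'
```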