Introduction
This document explains the steps to on-board and instantiate vFW on Azure.
Pre-requisites
- ONAP environment on Azure as explained here
Demo Video
Refer to the video on how to on-board and instantiate vFW on Azure (E2E).
The detailed steps are provided below:
Onboarding and Service Design
- Onboard the vSINC VNF using the TOSCA template designed with Simple Profile nodes. The TOSCA CSAR is available on GitHub: https://github.com/onapdemo/demo/blob/beijing/tosca/aria_csars/simple_vfw_vSNC.csar?raw=true
- Onboard the vPG VNF using the TOSCA template designed with Simple Profile nodes. The TOSCA CSAR is available on GitHub: https://github.com/onapdemo/demo/blob/beijing/tosca/aria_csars/simple_vfw_vPG.csar?raw=true
- Import the vSINC VSP and create the VNF as shown in the video.
- After creating the VNF, use the deployment artifact link to add the Azure-specific TOSCA in the OTHER folder.
- The Azure-specific TOSCA for vSINC is available on GitHub: https://github.com/onapdemo/demo/blob/beijing/tosca/aria_csars/azurevsnk.csar?raw=true
- Similarly, import the vPG VSP and add the Azure-specific TOSCA.
- The Azure-specific TOSCA for vPG is available on GitHub: https://github.com/onapdemo/demo/blob/beijing/tosca/aria_csars/azurevpkg.csar?raw=true
- Using the SDC catalog, create the vFW service by adding the two VNFs that were imported.
Distribute the service to SO and AAI.
Naming of CSAR files
The CSAR names of the ARIA TOSCA files should be the same as on GitHub. This allows the MultiVIM adapter to pick up the correct TOSCA file for instantiation.
Service Provisioning
Once the service model is distributed to SO and AAI, the service can be instantiated using the VID UI.
Pre-requisites
Refer to pre-requisites page.
Steps to instantiate
- Log in to VID and click on Browse Service Models (left menu).
- Click on Deploy, then enter the service instance name and other details to create the service.
- Once created, you can see the service instance details with an option to "Add VNF".
- Click on Add VNF and select the VNF module related to vSINC. Enter the VNF name "zdfw1fwl01vfw01" and other details to create the VNF for vSINC.
- The VNF name must be the same as in the step above for the closed loop to run.
- Preload SDNC data by capturing the VNF model data information:
POST /restconf/operations/VNF-API:preload-vnf-topology-operation
In the preload parameters, repo_url_blob and repo_url_artifacts refer to the GitHub links where the modified scripts for vSINC and vFW installation are kept:
{ "vnf-parameter-name": "repo_url_blob", "vnf-parameter-value": "https://raw.githubusercontent.com/onapdemo/onap-scripts/master/usecases" },
{ "vnf-parameter-name": "repo_url_artifacts", "vnf-parameter-value": "https://raw.githubusercontent.com/onapdemo/onap-scripts/beijing" }
Also, update dcae_collector_ip (the load balancer IP) in the request body to initiate the closed loop.
- Click on Add VF Module. Enter the details and submit the request. This will instantiate the FW and SINC VMs on Azure.
- Similarly, follow step 4 (select the module related to vPG) and create the VNF for vPG.
- Preload SDNC data by capturing the VNF model data information:
POST /restconf/operations/VNF-API:preload-vnf-topology-operation
In the preload parameters, repo_url_blob refers to the GitHub link where the modified scripts for vPG installation are kept:
{ "vnf-parameter-name": "repo_url_blob", "vnf-parameter-value": "https://raw.githubusercontent.com/onapdemo/onap-scripts/master/usecases" }
Also, update dcae_collector_ip (the load balancer IP) in the request body to initiate the closed loop.
- Click on Add VF Module. Enter the details and submit the request. This will instantiate the PG on Azure.
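The preload request bodies used in the two SDNC preload steps above can be assembled programmatically. The following is a minimal sketch in Python: the parameter names (repo_url_blob, repo_url_artifacts, dcae_collector_ip) and the VNF name come from this page, while the build_preload_body helper, the example vnf-type value, and the exact VNF-API envelope shape are assumptions that should be checked against your SDNC's VNF-API schema before use.

```python
import json

def build_preload_body(vnf_name, vnf_type, dcae_collector_ip,
                       repo_url_blob, repo_url_artifacts=None):
    """Assemble a preload-vnf-topology-operation body (shape assumed from VNF-API)."""
    params = [
        {"vnf-parameter-name": "repo_url_blob",
         "vnf-parameter-value": repo_url_blob},
        {"vnf-parameter-name": "dcae_collector_ip",
         "vnf-parameter-value": dcae_collector_ip},
    ]
    # The vPG preload only needs repo_url_blob, so artifacts are optional.
    if repo_url_artifacts:
        params.append({"vnf-parameter-name": "repo_url_artifacts",
                       "vnf-parameter-value": repo_url_artifacts})
    return {
        "input": {
            "vnf-topology-information": {
                "vnf-topology-identifier": {
                    "vnf-name": vnf_name,
                    "vnf-type": vnf_type,
                },
                "vnf-parameters": params,
            }
        }
    }

body = build_preload_body(
    "zdfw1fwl01vfw01",
    "VfwsnkVf..base_vfw..module-0",  # hypothetical vnf-type, take yours from SDC
    "10.0.0.100",                    # example load balancer IP
    "https://raw.githubusercontent.com/onapdemo/onap-scripts/master/usecases",
    "https://raw.githubusercontent.com/onapdemo/onap-scripts/beijing",
)
print(json.dumps(body, indent=2))
```

The printed JSON is what would be POSTed to /restconf/operations/VNF-API:preload-vnf-topology-operation.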
Traffic flow test
Open the browser and enter the URL: http://vsnctestapp.eastus.cloudapp.azure.com:667
This will show a graph of the packets arriving at the SINC VM.
ClosedLoop Execution
Once the instantiation of vFW is done, the VES agent in the vFW VM will send measurement data to DCAE using the IP and port given in the preload parameters.
Two manual steps, described below, are needed to run the closed loop flow:
Push policies
First, go through the link below and validate the health of the policy pods:
https://wiki.onap.org/display/DW/Policy+on+OOM
Then perform these steps:
- Go to the pap container
- Go to /tmp/policy-install/config/
- Execute the command "export PRELOAD_POLICIES=true"
- Copy push-policies.sh to /tmp
- Go to /tmp and open push-policies.sh
- Go to VID, search for the service instance, and note the "model id" of the vPG VNF.
- Find resourceID in push-policies.sh and change its value to the "model id" of vPG (as shown below):
curl -v --silent -X PUT --header 'Content-Type: application/json' --header 'Accept: text/html' --header 'ClientAuth: cHl0aG9uOnRlc3Q=' --header 'Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==' --header 'Environment: TEST' -d '{
"policyConfigType": "BRMS_PARAM",
"policyName": "com.BRMSParamvFirewall",
"policyDescription": "BRMS Param vFirewall policy",
"policyScope": "com",
"attributes": {
"MATCHING": {
"controller" : "amsterdam"
},
"RULE": {
"templateName": "ClosedLoopControlName",
"closedLoopControlName": "ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a",
"controlLoopYaml": "controlLoop%3A%0D%0A++version%3A+2.0.0%0D%0A++controlLoopName%3A+ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a%0D%0A++trigger_policy%3A+unique-policy-id-1-modifyConfig%0D%0A++timeout%3A+1200%0D%0A++abatement%3A+false%0D%0A+%0D%0Apolicies%3A%0D%0A++-+id%3A+unique-policy-id-1-modifyConfig%0D%0A++++name%3A+modify+packet+gen+config%0D%0A++++description%3A%0D%0A++++actor%3A+APPC%0D%0A++++recipe%3A+ModifyConfig%0D%0A++++target%3A%0D%0A++++++%23+TBD+-+Cannot+be+known+until+instantiation+is+done%0D%0A++++++resourceID%3A+%973ef-7b55-41ce-a633-62af3462a8220D%0A++++++type%3A+VNF%0D%0A++++retry%3A+0%0D%0A++++timeout%3A+300%0D%0A++++success%3A+final_success%0D%0A++++failure%3A+final_failure%0D%0A++++failure_timeout%3A+final_failure_timeout%0D%0A++++failure_retries%3A+final_failure_retries%0D%0A++++failure_exception%3A+final_failure_exception%0D%0A++++failure_guard%3A+final_failure_guard"
}
}
}' 'http://pdp:8081/pdp/api/createPolicy'
- Now execute push-policies.sh (./push-policies.sh)
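The resourceID substitution has to survive the percent-encoding of the controlLoopYaml field in the payload above. This sketch shows, in Python, how the encoded YAML can be regenerated after the vPG model id is swapped in; the model id value below is a hypothetical example, and the YAML is an abbreviated version of the control loop shown in the payload.

```python
from urllib.parse import quote_plus, unquote_plus

# vPG "model id" taken from VID; the value below is a hypothetical example.
vpg_model_id = "f17face5-69cb-4c88-9e0b-7426db7edddd"

# Abbreviated control loop definition, with the vPG model id substituted in.
control_loop_yaml = """controlLoop:
  version: 2.0.0
  controlLoopName: ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a
  trigger_policy: unique-policy-id-1-modifyConfig
  timeout: 1200
  abatement: false

policies:
  - id: unique-policy-id-1-modifyConfig
    name: modify packet gen config
    actor: APPC
    recipe: ModifyConfig
    target:
      resourceID: {model_id}
      type: VNF
""".format(model_id=vpg_model_id)

# quote_plus percent-encodes the YAML, using '+' for spaces as in
# push-policies.sh (newlines here are \n rather than the \r\n in the script).
encoded = quote_plus(control_loop_yaml)
print(encoded[:60], "...")
```

Pasting the regenerated string as the controlLoopYaml value avoids hand-editing a percent-encoded blob.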
- Create APPC Mount
- Get the VNF instance ID of vPG, either through VID or through AAI.
- Get the public IP address of the Packet Generator from your deployment.
- Create a file appc-mount.xml with the following content, replacing VPG_IP with the packet generator IP and VPG_VNF_INSTANCE_ID with the vPG VNF instance ID.
<node xmlns="urn:TBD:params:xml:ns:yang:network-topology">
<node-id>VPG_VNF_INSTANCE_ID</node-id>
<host xmlns="urn:opendaylight:netconf-node-topology">VPG_IP</host>
<port xmlns="urn:opendaylight:netconf-node-topology">2831</port>
<username xmlns="urn:opendaylight:netconf-node-topology">admin</username>
<password xmlns="urn:opendaylight:netconf-node-topology">admin</password>
<tcp-only xmlns="urn:opendaylight:netconf-node-topology">false</tcp-only>
<!-- non-mandatory fields with default values, you can safely remove these if you do not wish to override any of these values-->
<reconnect-on-changed-schema xmlns="urn:opendaylight:netconf-node-topology">false</reconnect-on-changed-schema>
<connection-timeout-millis xmlns="urn:opendaylight:netconf-node-topology">20000</connection-timeout-millis>
<max-connection-attempts xmlns="urn:opendaylight:netconf-node-topology">0</max-connection-attempts>
<between-attempts-timeout-millis xmlns="urn:opendaylight:netconf-node-topology">2000</between-attempts-timeout-millis>
<sleep-factor xmlns="urn:opendaylight:netconf-node-topology">1.5</sleep-factor>
<!-- keepalive-delay set to 0 turns off keepalives-->
<keepalive-delay xmlns="urn:opendaylight:netconf-node-topology">120</keepalive-delay>
</node>
Create the network config in APPC using the API below:
curl -v --user "admin":"admin" -d @appc-mount.xml -H "Accept: application/xml" -H "Content-type: application/xml" -X PUT http://<load_balancer_ip>:30230/restconf/config/network-topology:network-topology/topology/topology-netconf/node/<VNF_INSTANCE_ID>
Use the GET below to validate that the PUT API created the config correctly:
curl -v --user "admin":"admin" -H "Accept: application/xml" -H "Content-type: application/xml" -X GET http://<load_balancer_ip>:30230/restconf/config/network-topology:network-topology/topology/topology-netconf/node/<VNF_INSTANCE_ID>
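The appc-mount.xml payload above is a simple template; the sketch below generates its mandatory fields in Python so the node-id and host substitutions cannot be missed. The namespaces and port 2831 are taken from the payload above, while build_appc_mount itself is a hypothetical helper (the optional fields with default values are omitted, as the payload's comments say they may be).

```python
import xml.etree.ElementTree as ET

TOPO_NS = "urn:TBD:params:xml:ns:yang:network-topology"
NETCONF_NS = "urn:opendaylight:netconf-node-topology"

def build_appc_mount(vnf_instance_id, vpg_ip):
    """Render the mandatory part of the netconf mount payload for the vPG node."""
    node = ET.Element("{%s}node" % TOPO_NS)
    ET.SubElement(node, "{%s}node-id" % TOPO_NS).text = vnf_instance_id
    # Values below mirror the appc-mount.xml template on this page.
    for tag, value in [("host", vpg_ip), ("port", "2831"),
                       ("username", "admin"), ("password", "admin"),
                       ("tcp-only", "false")]:
        ET.SubElement(node, "{%s}%s" % (NETCONF_NS, tag)).text = value
    return ET.tostring(node, encoding="unicode")

# Example values: a hypothetical VNF instance ID and public vPG IP.
xml_payload = build_appc_mount("example-vpg-vnf-instance-id", "52.170.12.34")
print(xml_payload)
```

The output can be saved as appc-mount.xml and PUT to the same URL as above.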
Note:
Related link https://wiki.onap.org/display/DW/Creating+a+Netconf+Mount
Running vFW with Robot Framework
This vFW automation handles model creation and instantiation.
Run the vFW script as follows:
- Connect to the Azure VM where ONAP is deployed via SSH and execute the following commands:
[root@onap-tanmay:/home/ubuntu/oom/kubernetes/robot]# cd /opt/aee/oom/kubernetes/robot/
[root@onap-tanmay:/home/ubuntu/oom/kubernetes/robot]# ./demo-k8s.sh instantiateAzureVFW
Output of the execution (console, Azure Portal, and log screenshots were shown here).
Additional information
Based on their preferences, users can modify the following files to instantiate vFW on Azure.
- integration_preload_parameters.py
This file is available under /dockerdata-nfs/<namespace>/robot/eteshare/config/ on the Azure VM where ONAP is deployed. Users can modify the preload parameters for vFW instantiation under "azurevfwsnk_preload.template" and "azurevpkg_preload.template".
- integration_robot_properties.py
This file is available under /dockerdata-nfs/<namespace>/robot/eteshare/config/ on the Azure VM where ONAP is deployed. Users can modify fields such as "AZURE_SUBSCRIPTION_ID", "AZURE_TENANT_ID", "AZURE_CLIENT_ID", "AZURE_CLIENT_SECRET", "AZURE_CLOUD_OWNER", and "AZURE_CLOUD_REGION".
Make sure to compile these Python files after any modification using the command: python <filename>
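The `python <filename>` check above executes the file; a way to get the same syntax check without running any module-level code is the standard py_compile module. This is a sketch of that alternative (an assumption about the intent of the check), demonstrated on a throwaway file rather than the real config files.

```python
import os
import sys
import tempfile
import py_compile

def check_syntax(path):
    """Byte-compile a Python file, reporting syntax errors without executing it."""
    try:
        py_compile.compile(path, doraise=True)
        return True
    except py_compile.PyCompileError as err:
        print("Syntax error in %s: %s" % (path, err.msg), file=sys.stderr)
        return False

# Demo on a throwaway file standing in for integration_robot_properties.py.
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write("AZURE_CLOUD_REGION = 'eastus'\n")
    tmp = f.name
print(check_syntax(tmp))
os.remove(tmp)
```

Running check_syntax on both config files after editing catches typos before the robot scripts import them.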
8 Comments
Eric Multanen
Is there any relationship between the contents of the csar's that are onboarded into SDC (e.g. simple_vfw_vPG.csar) and the Azure csar that is added later (e.g. azurevpkg.car) ?
The Azure csar seems to have the 'real' VNF defined in it. Does the VNF described in the onboarded CSAR have any meaning or use?
(I tried to onboard with Casablanca SDC to see what would happen - it fails immediately with "Manifest must contain Source".)
Brian Freeman
./demo-k8s.sh instantiateAzureVFW - where is that tag added, robot or demo-k8s.sh? I must not have read some pre-requisite carefully, since that tag is not in testsuite nor in the Casablanca branch demo-k8s.sh?
Brian Freeman
I found the changes in the robot container that is referenced in the onapdemo GitHub repo. Those changes should be submitted to ONAP.
Srinivasa Addepalli
Hi Brian Freeman and integration team,
I always have this question. Is the robot container for testing existing sample use cases, or is it expected to be used even in production deployments for other use cases? I always thought that it is for integration testing of use cases with sample VNFs. But a few think that it is a required component even for production deployment. Please let us know.
Srini
Brian Freeman
Robot is for testing only, but it's an example of things that could be done by a service provider with their OSSs and tooling.
It is not intended to be deployed in a production instance, although it's easy to imagine that a service provider might want to take pieces of it and re-package them for a monitoring agent, etc.
Perhaps more appropriately, port it to their preferred scripting engine.
Srinivasa Addepalli
Brian Freeman Yes, that is my understanding too. Thank you for confirming.
Brian Freeman
WRT the robot changes for Azure - it's not clear why we need all the cloud differences in the SO interface and demo_preload.robot for things like AZURE_CLOUD_OWNER etc. It seems like the standard owner/region/site would work as long as integration_override.yaml had the correct labels?
Tanmay Nakhate
Brian Freeman This was done to handle both oob vFW flow and azure vFW flow using the same script.