Make sure that you have installed the ONAP R2 release. For installation instructions, please refer to ONAP Installation in Vanilla OpenStack.
Make sure that all components pass the health check by doing the following:
- ssh to the robot VM and run '/opt/ete.sh health'
You will need to update your /etc/hosts so that you can access the ONAP Portal in your browser. You may also want to add the IP addresses of the so, sdnc, aai, and other VMs so that you can easily ssh to them. Below is a sample just for your reference:
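(The host names below are the standard simpledemo names; the IPs are placeholders that you must replace with your own deployment's addresses.)

```
# /etc/hosts sample (IPs are placeholders -- use your own floating IPs)
10.12.5.66   portal.api.simpledemo.onap.org
10.12.5.66   vid.api.simpledemo.onap.org
10.12.5.61   sdc.api.fe.simpledemo.onap.org
10.12.5.63   policy.api.simpledemo.onap.org
10.12.5.65   aai.api.sparky.simpledemo.onap.org
10.12.5.110  so
10.12.5.120  sdnc
10.12.5.115  aai
10.12.5.118  robot
```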
You can try to log in to the portal at http://portal.api.simpledemo.onap.org:8989/ONAPPORTAL/login.htm as one of the following users, all of which are used later in this tutorial: cs0008 (Designer), jm0007 (Tester), gv0001 (Governor), op0001 (Operator), and demo. The password is demo123456! for all users.
Create images for vBRG, vBNG, vGMUX, and vG
Follow the instructions in ONAP vCPE VPP-based VNF Installation and Usage Information to build an image for each VNF and save it in your OpenStack.
To avoid unexpected mistakes, give each image a meaningful name and be careful when mixing upper- and lower-case characters. After this you should see images like the ones below; the Casablanca image names contain 'casa', e.g., "vbng-casa-base-ubuntu-16-04".
Create license model in SDC
Log in to SDC portal as designer. Create a license that will be used by the subsequent steps. The detailed steps are here: Creating a Licensing Model
Prepare HEAT templates
vCPE uses five VNFs: Infra, vBRG, vBNG, vGMUX, and vG, which are described using five HEAT templates. For each HEAT template, you will need to fill in the env file with appropriate parameters. The HEAT templates can be obtained from gerrit: [demo.git] / heat / vCPE /
Note that for each VNF, the env file name and yaml file name are tied together by the file MANIFEST.json. If for any reason you change the env or yaml file names, remember to update MANIFEST.json accordingly.
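For reference, a minimal MANIFEST.json sketch is shown below; the file names are assumptions based on the demo repo layout and may differ in your copy:

```
{
  "name": "vcpe_infra",
  "description": "vCPE infra HEAT package",
  "data": [
    {
      "file": "base_vcpe_infra.yaml",
      "type": "HEAT",
      "isBase": "true",
      "data": [
        { "file": "base_vcpe_infra.env", "type": "HEAT_ENV" }
      ]
    }
  ]
}
```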
For each VNF, compress the env, yaml, and MANIFEST.json files into a zip package, which will be used for onboarding. If you want the zip packages I used for reference, download them here: infra-sb02.zip, vbng-sb02.zip, vbrg-sb02.zip, vgmux-sb02.zip, vgw-sb02.zip.
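For example, assuming the infra file names sketched above (placeholders; match them to your MANIFEST.json), the package can be built like this:

```
zip infra.zip base_vcpe_infra.yaml base_vcpe_infra.env MANIFEST.json
```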
VNF onboarding in SDC
Onboard the VNFs in SDC one by one. The process is the same for all VNFs. The suggested names for the VNFs are given below (all lower case). The suffix can be a date plus a sequence letter, e.g., 1222a.
Below is an example for onboarding infra.
Sign into SDC as cs0008, choose ONBOARD and then click 'CREATE NEW VSP'.
Now enter the name of the VSP. For naming, I'd suggest all lower case in the format vcpevsp_[vnf]_[suffix]; see the example below.
After clicking 'Create', click 'missing' and then select to use the license model created previously.
Click 'Overview' on the left side panel, then drag and drop infra.zip to the webpage to upload the HEAT.
Now click 'Proceed To Validation' to validate the HEAT template.
You may see a lot of warnings. In most cases, you can ignore those warnings.
Click 'Check in', and then 'Submit'
Go to SDC home, and then click 'Import VSP'.
In the search box, type in the suffix of the VSP you onboarded a moment ago to locate it easily. Then click 'Import VSP'.
Click 'Create' without changing anything.
Now a VF based on the HEAT is created successfully. Click 'Submit for Testing'.
Sign out and sign back in as tester: jm0007, select the VF you created a moment ago, test and accept it.
Note: in Casablanca you can simply Certify the VSP and continue with Service Design and Creation.
Service Design and Creation
The entire vCPE use case is divided into five services as shown below. Each service is described below with suggested names.
- vcpesvc_infra_[suffix]: includes two generic neutron networks named cpe_signal and cpe_public (all names are lower case) and a VNF infra.
- vcpesvc_vbng_[suffix]: includes two generic neutron networks named brg_bng and bng_mux and a VNF vBNG.
- vcpesvc_vgmux_[suffix]: includes a generic neutron network named mux_gw and a VNF vGMUX.
- vcpesvc_vbrg_[suffix]: includes a VNF vBRG.
- vcpesvc_rescust_[suffix]: includes a VNF vGW and two allotted resources that will be explained shortly.
Service design and distribution for infra, vBNG, vGMUX, and vBRG
The process for creating these four services is the same; however, make sure to use the VNFs and networks as described above. Below are the steps to create vcpesvc_infra_1222a; follow the same process to create the other three services, changing the networks and VNFs accordingly. Log back in as designer (username cs0008).
In SDC, click 'Add Service' to create a new service
Enter name, category, description, product code, and click 'Create'.
Click 'Composition' from left side panel. Drag and drop VF vcpevsp_infra_1222a to the design.
Drag and drop a generic neutron network to the design, select its icon in the design, and click the pen in the upper right corner (next to the trash bin icon); a window will pop up as shown below. Change the instance name to 'cpe_signal'.
Click and select the network icon in the design again. From the right side panel, click the icon and then select 'network_role'. In the pop-up window, enter 'cpe_signal' as shown below.
Add another generic neutron network the same way. This time change the instance name and network role to 'cpe_public'. The service design is now complete. Click 'Submit for Testing'.
Sign out and sign back in as tester 'jm0007'. Test and approve this service.
Sign out and sign back in as governor 'gv0001'. Approve this service.
Sign out and sign back in as operator 'op0001'. Distribute this service, and click 'Monitor' to see the results. After some time (30 seconds or more), you should see the service being distributed to AAI, SO, and SDNC.
Service design and distribution for customer service
First of all, make sure that all the previous four services have been created and distributed successfully.
The customer service includes a VNF vGW and two allotted resources: tunnelxconn and brg. We will need to create the two allotted resources first and then use them together with vG (which was already onboarded and imported as a VF previously) to compose the service.
Check Sub Category Tag in SDC
You may need to add an Allotted Resource Category Tag to SDC for the BRG.
Log in as the "demo" account and go to SDC.
Select "Category Management"
Select "Allotted Resource"
You should see both "Tunnel XConn" and "BRG".
If the "BRG" subcategory is missing, click 'New' and add it.
If the Chrome browser fails to add BRG, try Firefox.
Create allotted resource tunnelxconn
This allotted resource depends on the previously created service vcpesvc_vgmux_1222a. The dependency is described by filling the allotted resource with the UUID, invariant UUID, and service name of vcpesvc_vgmux_1222a. In preparation, first download the csar file of vcpesvc_vgmux_1222a from SDC.
Sign in to SDC as designer cs0008, create a new VF, select 'Tunnel XConnect' as the category, and enter other information as needed. See below for an example; I'm using vcpear_tunnelxconn_1222a as the name of this allotted resource.
Click 'Create', then click 'Composition' and drag an 'AllottedResource' from the left side panel to the design.
Click the VF name link between the HOME link and 'Composition' in the top menu, then click 'Properties Assignment' in the left-hand menu. Now open the csar file of vcpesvc_vgmux_1222a and, under 'Definitions', open the file 'service-VcpesvcVgmux1222a-template.yml'. (Note that the actual file name depends on what you named the service in the first place.) Put the yml file and the SDC window side by side, then copy and paste the invariantUUID, UUID, and node name into the corresponding fields in SDC. See the two screenshots below and the sketch after this paragraph. Save and then submit for testing.
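For orientation, the values to copy come from the metadata block near the top of the service template inside the csar. A sketch with placeholder values (the UUIDs below are not real):

```
# Definitions/service-VcpesvcVgmux1222a-template.yml (UUIDs are placeholders)
metadata:
  invariantUUID: 11111111-2222-3333-4444-555555555555   # copy into SDC
  UUID: 66666666-7777-8888-9999-000000000000            # copy into SDC
  name: vcpesvc_vgmux_1222a                              # the service (node) name
```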
Create allotted resource brg
This allotted resource depends on the previously created service vcpesvc_vbrg_1222a. The dependency is described by filling the allotted resource with the UUID, invariant UUID, and service name of vcpesvc_vbrg_1222a. In preparation, first download the csar file of vcpesvc_vbrg_1222a from SDC.
We name this allotted resource vcpear_brg_1222a. The process to create it is the same as for vcpear_tunnelxconn_1222a above, except that the category is 'BRG'. The only other differences are the UUID, invariant UUID, and service name parameters being used, so the steps and screenshots are not repeated here.
Sign out and sign back in as tester 'jm0007'. Test and approve both Allotted Resources.
Create customer service
Log back in as designer (username cs0008). We name the service vcpesvc_rescust_1222a and follow the steps below to create it.
Add a new service and fill in the parameters as below, then click 'Create'.
Click 'Composition' from the left side panel. Drag and drop the following three components to the design: the vG VF and the two allotted resources (tunnelxconn and brg) created above.
Point your mouse to the arrow next to 'Composition' and then click 'Properties Assignment' (see below).
First select tunnelxconn from the right side panel, then fill nf_role and nf_type with value 'TunnelXConn'.
Next select brg from the right side panel, then fill nf_role and nf_type with value 'BRG'.
Click 'Submit for Testing'.
Now sign out and sign back in as tester 'jm0007' to complete the test of vcpesvc_rescust_1222a.
Sign out and sign back in as governor 'gv0001'. Approve this service.
Distribute the customer service to AAI, SO, and SDNC
Before distributing the customer service, make sure that the other four services for infra, vBNG, vGMUX, and vBRG all have been successfully distributed.
Now distribute the customer service: sign out and sign back in as operator 'op0001', distribute the service, and check the status to ensure the distribution succeeds. It may take tens of seconds to complete; the results should look like the screenshot below.
Initial Configuration of ONAP to Deploy vCPE
ssh to the robot VM, execute:
- /opt/demo.sh init_robot
- /opt/demo.sh init
Add an availability zone to AAI by executing the following:
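The original page provides the exact request; as a hedged sketch, the AAI call has roughly the following shape (the API version, cloud owner, and region names here are assumptions; adjust them to your deployment):

```
curl -k -u AAI:AAI -X PUT \
  -H "X-FromAppId: vcpe-tutorial" -H "X-TransactionId: 1" \
  -H "Content-Type: application/json" \
  "https://<aai_vm_ip>:8443/aai/v11/cloud-infrastructure/cloud-regions/cloud-region/CloudOwner/RegionOne/availability-zones/availability-zone/<az_name>" \
  -d '{"availability-zone-name": "<az_name>", "hypervisor-type": "KVM"}'
```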
Add the operation user ID to AAI. Note that you will need to replace the tenant ID 087050388b204c73a3e418dd2c1fe30b (in two places) and the tenant name with the values you use.
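Again as a sketch only (the field values and API version are assumptions; the authoritative JSON is on the original page), this is a PUT of the customer record whose tenant relationship carries the tenant ID twice, plus the tenant name:

```
curl -k -u AAI:AAI -X PUT \
  -H "X-FromAppId: vcpe-tutorial" -H "X-TransactionId: 2" \
  -H "Content-Type: application/json" \
  "https://<aai_vm_ip>:8443/aai/v11/business/customers/customer/SDN-ETHERNET-INTERNET" \
  -d '{
    "global-customer-id": "SDN-ETHERNET-INTERNET",
    "subscriber-name": "SDN-ETHERNET-INTERNET",
    "subscriber-type": "INFRA",
    "service-subscriptions": { "service-subscription": [ {
      "service-type": "<your_service_type>",
      "relationship-list": { "relationship": [ {
        "related-to": "tenant",
        "related-link": "/aai/v11/cloud-infrastructure/cloud-regions/cloud-region/CloudOwner/RegionOne/tenants/tenant/087050388b204c73a3e418dd2c1fe30b",
        "relationship-data": [
          { "relationship-key": "tenant.tenant-id",
            "relationship-value": "087050388b204c73a3e418dd2c1fe30b" } ],
        "related-to-property": [
          { "property-key": "tenant.tenant-name",
            "property-value": "<your_tenant_name>" } ]
      } ] } } ] } }'
```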
ssh to the SDNC VM (HEAT) or to the host node for pod sdnc-sdnc-0 (OOM), and do the following (this step ensures SDNC can reach the BRG later for configuration):
- Add a route in HEAT: ip route add 10.3.0.0/24 via 10.0.101.10 dev eth0
- Add a route in OOM: ip route add 10.3.0.0/24 via 10.0.101.10 dev ens3
- Enter the sdnc controller docker
  - HEAT: "docker exec -it sdnc_controller_container bash"
  - OOM: "kubectl -n onap exec -it dev-sdnc-sdnc-0 bash"
In the container, run the following to create IP address pool: /opt/sdnc/bin/addIpAddresses.sh VGW 10.5.0 22 250
- You can also run addIpAddresses.sh remotely: kubectl -n onap exec -it dev-sdnc-sdnc-0 -- /opt/sdnc/bin/addIpAddresses.sh VGW 10.5.0 22 250
- For healthcheck-k8s.py, also install curl inside the sdnc container.
Download and modify automation code
A Python program has been developed to automate the deployment. Download the ONAP integration repo with 'git clone https://gerrit.onap.org/r/integration'; the script is under integration/test/vcpe.
Now go to the vcpe directory and modify vcpecommon.py. You will need to enter your cloud and network information into the following two dictionaries.
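As a sketch (the key names follow the integration repo's vcpecommon.py of that era and may have changed since), the two dictionaries look roughly like this:

```
# vcpecommon.py (values are placeholders -- fill in your own cloud/network info)
self.cloud = {
    '--os-auth-url': 'http://<keystone_ip>:5000',
    '--os-username': '<your_user>',
    '--os-user-domain-id': 'default',
    '--os-project-domain-id': 'default',
    '--os-tenant-id': '<your_tenant_id>',
    '--os-region-name': 'RegionOne',
    '--os-password': '<your_password>',
    '--os-project-domain-name': '<your_project>'
}

self.common_preload_config = {
    'oam_onap_net': '<onap_oam_network_name>',
    'oam_onap_subnet': '<onap_oam_subnet_name>',
    'public_net': '<public_network_name>',
    'public_net_id': '<public_network_uuid>'
}
```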
Create subdirectories csar/ and __var/, then download the service csar files from SDC and put them under the csar directory (see the sketch after the next step).
Install python-pip and other Python modules (see the comments section of the original page).
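A minimal sketch of these two steps (the module list is an assumption; install whatever the script reports as missing):

```
cd integration/test/vcpe
mkdir -p csar __var
apt-get install -y python-pip
pip install requests netaddr python-novaclient python-openstackclient mysql-connector-python
```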
Run automation program to deploy services
Sign in to SDC as designer and download the five csar files for infra, vbng, vgmux, vbrg, and rescust. Copy all the csar files into the csar directory.
Now you can simply run 'vcpe.py' to see the instructions.
To get ready for service deployment, first run 'vcpe.py init'. This will modify the SO and SDNC databases to add service-related information.
Once that is done, run 'vcpe.py infra'. This will deploy the four services created above (infra, vBNG, vGMUX, and vBRG). It may take 7-10 minutes to complete depending on the cloud infrastructure.
If the deployment succeeds, you will see a summary of the deployment from the program.
Validate deployed VNFs
By now you will be able to see 7 VMs in Horizon. However, this does not mean all the VNFs are functioning properly; in many cases we found that a VNF may need to be restarted multiple times before it functions properly. We perform validation as follows:
- Run healthcheck.py. It checks for three things:
- vGMUX honeycomb server is running
- vBRG honeycomb server is running
- vBRG has obtained an IP address and its MAC/IP data has been captured by SDNC
If this healthcheck passes, skip the following and start to deploy the customer service. Otherwise, do the following and redo the healthcheck.
- If vGMUX check does not pass, restart vGMUX, make sure it can be connected using ssh.
- If vBRG check does not pass, restart vBRG, make sure it can be connected using ssh.
(Please note that the four VPP-based VNFs (vBRG, vBNG, vGMUX, and vGW) were developed by the ONAP community on a tight schedule. We are aware that the vBRG may not be stable and sometimes needs to be restarted multiple times to get it to work. The team is investigating the problem and hopes to improve it in the near future. Your patience is appreciated.)
Deploy Customer Service and Test Data Plane
After passing healthcheck, we can deploy the customer service by running 'vcpe.py customer'. This will take around 3 minutes depending on the cloud infrastructure. Once finished, the program will print the next few steps to test the data plane connection from the vBRG to the web server. If you check Horizon, you should see a stack for the vGW created a moment ago.
Tips for troubleshooting:
- There could be situations where the vGW is not fully functioning and cannot be reached over ssh. Restart the VM to solve this problem.
- isc-dhcp-server is supposed to be installed on the vGW after it is instantiated, but it may happen that the server is not properly installed. If so, ssh to the vGW VM and install it manually with 'apt install isc-dhcp-server'.
Closed Loop Test
Step 2. Push the closed loop policy from the PAP.
Step 3. Run heatbridge with the following parameters (a command sketch follows the list):
stack_name: from the OpenStack Horizon → Orchestration → Stacks page
oam_ip_address: the vGMUX VM's OAM network IP; you can get it from Horizon
service_instance_id: take it from the __var/svc_instance_uuid file; copy the value for gmux without the letter 'V'.
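A sketch of the invocation from the Robot/Rancher side, assuming the heatbridge tag of the demo script (the argument order follows the script's usage text; verify against your copy of demo-k8s.sh):

```
# heatbridge <stack_name> <service_instance_id> <service> <oam-ip-address>
./demo-k8s.sh onap heatbridge <stack_name> <service_instance_id> vGMUX <oam_ip_address>
```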
Step 4. Make sure the APPC VNF_DB_MAPPING table has Restart with Generic_Restart as DG_NAME and 3.0.0 as DG_VERSION.
Step 5. Update RegionOne with the identity-url. First query RegionOne from Postman and add identity-url, then PUT the updated content back to AAI.
Get RegionOne data
Add only the identity-url with the OpenStack Keystone endpoint, and PUT it back to AAI.
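Equivalently, without Postman, a curl sketch (the API version is an assumption; remember that the PUT must include the resource-version returned by the GET):

```
# Get RegionOne data
curl -k -u AAI:AAI -H "X-FromAppId: vcpe-tutorial" -H "X-TransactionId: 3" \
  -H "Accept: application/json" \
  "https://<aai_vm_ip>:8443/aai/v11/cloud-infrastructure/cloud-regions/cloud-region/CloudOwner/RegionOne"

# Edit the JSON: add "identity-url": "http://<keystone_ip>:5000/v3",
# keep the resource-version field, then PUT it back to the same URL
curl -k -u AAI:AAI -X PUT -H "X-FromAppId: vcpe-tutorial" -H "X-TransactionId: 4" \
  -H "Content-Type: application/json" -d @regionone.json \
  "https://<aai_vm_ip>:8443/aai/v11/cloud-infrastructure/cloud-regions/cloud-region/CloudOwner/RegionOne"
```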
Step 6. Run 'vcpe.py loop'. You don't need to stop/start policy (which is suggested by the vcpe script and will be changed).
Checklist for Casablanca Release
Assuming you run the vcpe script from the Rancher node, the steps above are summarized here; see the tutorial above for the details of each step.
0. Enable the dev-sdnc-sdnc-0 docker karaf log by editing StatefulSet/dev-sdnc-sdnc (remove the log mount), then delete pod dev-sdnc-sdnc-0 to restart it. Note that the pod may move to a different cluster node after the restart; write down the cluster node IP.
1. Model distribution by `demo-k8s.sh onap init`. This will onboard the VNFs and 4 services, i.e., infrastructure, brg, bng, and gmux.
2. Log in to the Portal as the demo user, then go to the SDC portal to add the BRG subcategory to AllottedResource (the SDC FE API is not working yet).
3. (No need anymore for Casablanca MR) Update SO catalogdb tables temp_network_heat_template_lookup and network_resource tables by setting aic_version_max=3.0 (SO-1184)
4. Update SO catalogdb table heat_template to set the Generic NeutronNet entry's BODY field with the correct yaml format.
5. Manually create and distribute customer service according to the steps in tutorial
Note: in the Casablanca maintenance release, this step is automated in Robot by running `ete-k8s.sh onap distributevCPEResCust`.
5.1 Create the csar directory under vcpe, and copy the 5 csar files (infra, vbng, vgmux, vbrg, and rescust) from the Robot docker's /tmp/csar/.
6. Create availability zone in A&AI
7. Add customer SDN-ETHERNET-INTERNET
7.1 Add a route on the sdnc cluster node: `ip route add 10.3.0.0/24 via 10.0.101.10 dev ens3`. You can find the sdnc cluster node name by running kubectl describe on the sdnc pod.
7.2 Run from Rancher node `kubectl -n onap exec -it dev-sdnc-sdnc-0 -- /opt/sdnc/bin/addIpAddresses.sh VGW 10.5.0 22 250`
8. Install python-pip and other Python libraries. See the tutorial comments section.
9. Change the env- and service-related parameters in vcpecommon.py, then run `vcpe.py init`. You may see some SQL command failures; they are OK to ignore.
10. Run `vcpe.py infra`
11. Make sure the SNIRO configuration is run as part of the above step.
12. Install the curl command in the sdnc-sdnc container.
13. Run healthcheck-k8s.py to check connectivity from sdnc to the brg and gmux. If healthcheck-k8s.py fails, check /opt/config/sdnc_ip.txt to see that it has the correct SDNC host IP; if you need to change the SDNC host IP, you need to clean up and rerun `vcpe.py infra`. Also verify that the tap interfaces tap-0 and tap-1 are up by running vppctl with the 'show int' command. If the tap interfaces are not up, delete them with 'vppctl tap delete' and then run `/opt/bind_nic.sh` followed by `/opt/set_nat.sh` (see the sketch below).
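The vppctl sequence for that last check, grounded in the commands named in this step (run inside the affected VNF):

```
vppctl show int          # check that tap-0 and tap-1 are in 'up' state
vppctl tap delete tap-0  # if not, delete the tap interfaces...
vppctl tap delete tap-1
/opt/bind_nic.sh         # ...then rebind the NIC
/opt/set_nat.sh          # and reapply NAT
```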
14. Run `vcpe.py customer`
15. Verify that the tunnelxconn and brg vxlan tunnels are set up correctly.
16. Set up vgw and brg dhcp and route, and ping from brg to vgw. Note that the vgw public IP shown in OpenStack Horizon may be wrong; use the vgw OAM IP to log in.
17. Add identity-url property in RegionOne with Postman
18. Add new DG in APPC for closed loop. See APPC release note for steps. CCSDK-741
19. Update the gmux libevel.so. See Eric's comments on the vCPE test status wiki.
20. Run heatbridge Robot script
21. Push the closed loop policy on the PAP.
22. Run `vcpe.py loop` and verify that vgmux is restarted.
23. To repeat the create-infra step, first delete the infra vf-module stacks and the network stacks from the OpenStack Horizon Orchestration → Stacks page, then clean up the records in the SDNC DHCP_MAC table before rerunning `vcpe.py infra`.
24. To repeat the create-customer step, delete the customer stack, then clean up the tunnels by running `cleanGMUX.py gmux_public_ip` and `cleanGMUX.py brg_public_ip`. After that you can rerun the create-customer command.
Typical Errors and Solutions
SDNC DG error
If you run vcpe.py customer and see an error similar to the following:
"finishTime": "Wed, 22 Aug 2018 18:46:09 GMT",
"statusMessage": "Received error from SDN-C: Not Found"
Enter the SDNC docker and do the following (a combined command sketch follows these steps):
1. Make a copy of GENERIC-RESOURCE-API_vnf-topology-operation-assign.xml in the sdnc_controller_container under /opt/sdnc/svclogic/graphs/generic-resource-api.
2. Edit GENERIC-RESOURCE-API_vnf-topology-operation-assign.xml to replace "<break> </break>" with "<break/>" or "<break></break>".
a. Optionally you can change the version to something like 1.3.3-SNAPSHOT-FIX and update graph.versions to match, but that is not needed if the xml failed to load.
3. Run /opt/sdnc/svclogic/bin/install.sh. This will install the edited DG and make it active, as long as the version in the xml and the version in graph.versions match.
4. Run /opt/sdnc/svclogic/bin/showActiveGraphs.sh and you should see the active DG.
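A combined sketch of steps 1-4, run inside the sdnc_controller_container (the sed expression implements the replacement described in step 2):

```
cd /opt/sdnc/svclogic/graphs/generic-resource-api
cp GENERIC-RESOURCE-API_vnf-topology-operation-assign.xml GENERIC-RESOURCE-API_vnf-topology-operation-assign.xml.bak
sed -i 's|<break> </break>|<break/>|g' GENERIC-RESOURCE-API_vnf-topology-operation-assign.xml
/opt/sdnc/svclogic/bin/install.sh
/opt/sdnc/svclogic/bin/showActiveGraphs.sh   # verify the active DG
```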
DHCP server doesn't work
- ssh to the dhcp server
- systemctl status kea-dhcp4-server.service
- If the service is not installed, install it with 'apt install kea-dhcp4-server' (the package name, without the .service suffix).
- If the service is installed, most likely /usr/local/lib/kea-sdnc-notify.so is missing. Download this file from the following link and put it in /usr/local/lib. Link: kea-sdnc-notify.so
- systemctl restart kea-dhcp4-server.service
vBRG not responding to configuration from SDNC
Symptom: running healthcheck.py fails to connect to the vBRG. (Note that you need to edit healthcheck.py to use the correct IP address for the vBRG; the default is 10.3.0.2.)
This is caused by vpp not working properly inside the vBRG. There is no deterministic fix for this problem until we have a stable vBRG image. Temporarily, you may try to either restart the vBRG VM, or ssh to the vBRG and run 'systemctl restart vpp', and then retry healthcheck.py. Note that 'systemctl restart vpp' may work better than rebooting the VM, but there is no guarantee.
Inside the vBRG you can also check the status with 'vppctl show int'. If vpp is working properly, you should see both tap-0 and tap-1 in the 'up' state.
Unable to change subnet name
When running "vcpe.py infra" command, if you see error message about subnet can't be found. It may be because your python-openstackclient is not the latest version and don't support "openstack subnet set --name" command option. Upgrade the module with "pip install --upgrade python-openstackclient".