This page provides technical information about the VPP-based VNFs in the vCPE use case for ONAP R1-R3.


The information builds upon, and in some areas updates, the legacy information documented here: ONAP vCPE VNF Installation Guide v1.docx.

For information on preparing the VNF images in previous releases, refer to Preparing the vCPE VPP VNF Images for Amsterdam and Beijing.

The VPP-based VNFs covered are:

  • vBRG
  • vBNG
  • vG-MUX
  • vGW

Preparing the vCPE VPP VNF Images for Casablanca

The VNFs are instantiated from a heat template and environment file, which start with a plain Ubuntu 16.04 image and then build the VPP code and, in several cases, the Honeycomb agent code for the VNF.

The compilation of these components is time-consuming (30+ minutes) and occasionally unsuccessful. The plan, therefore, is to create snapshot images for each VNF with the time-consuming VPP and Honeycomb code pre-built.


For building and deploying VNF images in the Casablanca release, there are two sets of heat templates, environment files, and scripts:

  • To create a VNF image, use the build_vcpe_xxx.yaml and build_vcpe_xxx.env files, which invoke the v_xxx_build.sh script.
  • To instantiate a VNF from a pre-built image, use the base_vcpe_xxx.yaml and base_vcpe_xxx.env files, which invoke the v_xxx_install.sh script.
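For example, for the vG-MUX the two flows look like this (the stack name is illustrative, and the file names follow the xxx patterns above; remove the build stack before reusing the name, as noted below):

    # Build a snapshot image with VPP and Honeycomb pre-compiled:
    openstack stack create -t build_vcpe_vgmux.yaml -e build_vcpe_vgmux.env vGMUX
    # Later, instantiate a VNF from the resulting pre-built image:
    openstack stack create -t base_vcpe_vgmux.yaml -e base_vcpe_vgmux.env vGMUX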

Building a pre-built VNF Image

Using the vG-MUX as an example, the following steps are used to create a vG-MUX image which can then be used as the image for instantiating a vG-MUX VNF.

The build template (.yaml and .env) files are located in the ONAP 'demo' repository here:   demo/heat/vCPE_build/[ vgmux | vbng | vbrgemu | vgw ]

  1. The build .yaml file and the associated build .env file are used to launch the VM which will build the image. The build .yaml invokes the build script "v_gmux_build.sh".
  2. Create a 'stack' - using an appropriately populated build .env file
    1. openstack stack create -t build_vcpe_vgmux.yaml -e build_vcpe_vgmux.env vGMUX

  3. Log into the VM as the 'ubuntu' user and check that the build script has finished executing.
    1. cat /opt/script_status.txt (if that file does not exist, the script is probably still running)
    2. If the script has executed completely and successfully, the output will be "Execution of vG-MUX build script completed".
    3. If the script has failed, the output should specify the reason for the failure. For more information, query the contents of the systemd journal using the 'journalctl' command. See also: Debugging and troubleshooting.
  4. Clean up some files not required for the final image (this will save several gigabytes):
    1. sudo su -
    2. rm -fr /opt/vpp /opt/hc2vpp /opt/demo /opt/script_status.txt
  5. Save an image of the VNF.
    1. openstack server image create --name vgmux-base-ubuntu-16-04 <VM Name or ID>
    2. "vgmux-base-ubuntu-16-04" will be the name of the new vG-MUX image.


Manually Instantiate a VNF based on the pre-built Image

The template (.yaml and .env) files used to prepare a .zip file for onboarding with SDC and deployment by ONAP are located in the ONAP 'demo' repository here: demo/heat/vCPE/[ infra | vgmux | vbng | vbrgemu | vgw ]

For manual testing, the template files can be used to orchestrate a 'stack' directly in an OpenStack cloud; just fill out the environment file appropriately. All of the resources required by the VNFs, such as the various Neutron networks and flavors, must be set up in the OpenStack environment in advance.
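For instance, networks and flavors referenced by the .env file can be created ahead of time with standard OpenStack CLI commands (the names below are hypothetical placeholders; use the names your .env file expects):

    openstack network create vcpe_example_net
    openstack subnet create --network vcpe_example_net --subnet-range 10.5.0.0/24 vcpe_example_subnet
    openstack flavor create --vcpus 2 --ram 4096 --disk 40 m1.medium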

  1. Change the base .env file to use the VNF image created using the process described above.

    1. For example - replace "ubuntu-16-04-cloud-amd64" with "vgmux-base-ubuntu-16-04" in "base_vcpe_vgmux.env".

  2. Ensure the base .yaml file does not have the install script commented out.
    1. For example, ensure "v_gmux_install.sh" is not commented out in "base_vcpe_vgmux.yaml".
  3. Create a 'stack'
    1. CLI command:  openstack stack create -t base_vcpe_vgmux.yaml -e base_vcpe_vgmux.env vGMUX

    2. Note: remove the "vGMUX" stack that was created during the "Building a pre-built VNF Image" stage.
    3. Using the image created above, the install script will perform some configuration steps and complete much more quickly since the VPP and Honeycomb code has already been compiled.
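Once the stack has been created, its status can be checked with standard OpenStack CLI commands before proceeding:

    openstack stack show vGMUX -c stack_status
    openstack server list | grep -i vgmux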

Pre-built VNF images available

Prebuilt images in the ONAP-vCPE Project (as of 08/15/18):

VNF      ONAP-vCPE Image Name            Checksum
vBRG     vbrg-casa-base-ubuntu-16-04     a7e1bb0b991f8807e2c6ee9008b83e21
vBNG     vbng-casa-base-ubuntu-16-04     f30e6f8d07bf68450f0315a6d593e138
vG-MUX   vgmux-casa-base-ubuntu-16-04    f6b46d1133e5576afff245650a354768
vGW      vgw-casa-base-ubuntu-16-04      2482ae8dbe3d7a339f7ffa47478c995e
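The checksums appear to be MD5 sums (32 hex digits). Assuming that, a downloaded image file can be verified before uploading it (the .qcow2 file name is a hypothetical placeholder):

    md5sum vgmux-casa-base-ubuntu-16-04.qcow2
    # The output should match the table entry, e.g. f6b46d1133e5576afff245650a354768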

Compatibility with Amsterdam and Beijing Releases

The VNF heat templates and environment files for the Casablanca release are backwards compatible with previous releases. The build heat template creates the compile_state.txt file containing a status of 'done'.

In the Amsterdam and Beijing releases, the build process required manually changing the content of compile_state.txt to 'done' before saving the image of the VNF. Because the Casablanca build does this automatically, the pre-built Casablanca VNF images can also be used with the heat templates and environment files of previous releases.
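A minimal sketch of how an install script can use this flag (the actual logic in v_xxx_install.sh may differ):

    # Skip the lengthy compile step when the image was pre-built
    if [ "$(cat /opt/config/compile_state.txt 2>/dev/null)" = "done" ]; then
        echo "VPP/Honeycomb already built; skipping compilation"
    else
        echo "No pre-built code found; building from source"
        # ... compile VPP and Honeycomb here (30+ minutes) ...
    fi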

Debugging and troubleshooting

  • To see the full output of the build script:

    journalctl

  • To check that the build script is running:

    ps aux | grep v_xxx_build.sh
  • To check that vpp and honeycomb are running:

    systemctl status vpp.service
    systemctl status honeycomb.service

Note that the vpp service should be ‘inactive’ after the completion of the build script, and it should be ‘running’ on all machines after the completion of the install script.


The honeycomb service should be running on the vBRG, vG-MUX, and vGW after the completion of the install script.


This can also be confirmed by running 'cat /var/log/honeycomb.log'.

  • To check status on vGW of dhcp server:

    service isc-dhcp-server status
  • To check whether a package is installed using dpkg:

    dpkg -s [package name]
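For example, to confirm the VPP package itself is installed (assuming the package is named 'vpp'):

    dpkg -s vpp | grep -E 'Status|Version'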

VNF Specific Usage Information

vG-MUX VES Configuration and Usage Information

The vG-MUX provides integrated VES functionality to generate sample events for demonstrating ONAP closed loop functionality.  The VES functionality can be configured via command line (CLI) or via the Honeycomb agent.

Configuration of VES via CLI

Configure VES agent to generate events:
    vppctl set ves agent server 127.0.0.1 port 88 intval 20

Query VES agent configuration:
    vppctl show ves agent

Modify VES agent configuration (2 steps - delete, then configure):
    vppctl set ves agent del server 127.0.0.1 port 88 intval 20
    vppctl set ves agent server 127.0.0.1 port 95 intval 30


The VES can be configured to generate events in 'real' or 'demo' mode.  In 'demo' mode, the value of the 'Packet Loss Rate' attribute can be configured.

Configure VES mode to 'demo' and 40% Packet Loss Rate:
    vppctl set ves mode demo base 40

This will cause events to be generated with output data that looks like the following sample.

Note that 'sourceId' and 'sourceName' are populated with the value of the 'vnf_id' metadata.

Also note that Packet-Loss-Rate has a value of "40.0" per the example configuration command shown above.

Sample vG-MUX VES Event:

{
    "event": {
        "commonEventHeader": {
            "domain": "measurementsForVfScaling",
            "eventId": "Generic_traffic",
            "eventName": "Measurement_vGMUX",
            "eventType": "HTTP request rate",
            "lastEpochMicrosec": 1508871671489135,
            "priority": "Normal",
            "reportingEntityId": "No UUID available",
            "reportingEntityName": "zdcpe1cpe01mux01",
            "sequence": 2,
            "sourceId": "vCPE_Infrastructure_vGMUX_demo_app",
            "sourceName": "vCPE_Infrastructure_vGMUX_demo_app",
            "startEpochMicrosec": 1508871661489135,
            "version": 1.2
        },
        "measurementsForVfScalingFields": {
            "additionalMeasurements": [
                {
                    "arrayOfFields": [
                        {
                            "name": "Packet-Loss-Rate",
                            "value": "40.0"
                        }
                    ],
                    "name": "ONAP-DCAE"
                }
            ],
            "cpuUsageArray": [
                {
                    "cpuIdentifier": "cpu1",
                    "cpuIdle": 66.7,
                    "cpuUsageSystem": 0.0,
                    "cpuUsageUser": 3.3,
                    "percentUsage": 0.0
                }
            ],
            "measurementInterval": 10,
            "measurementsForVfScalingVersion": 2.1,
            "requestRate": 886,
            "vNicUsageArray": [
                {
                    "receivedOctetsDelta": 0.0,
                    "receivedTotalPacketsDelta": 0.0,
                    "transmittedOctetsDelta": 0.0,
                    "transmittedTotalPacketsDelta": 0.0,
                    "vNicIdentifier": "eth0",
                    "valuesAreSuspect": "true"
                }
            ]
        }
    }
}


The VES mode can be changed as desired to modify the Packet-Loss-Rate value.

Configuration of VES via Honeycomb

The following sample 'curl' commands, executed from the VM itself (hence 127.0.0.1), configure the VES agent and mode.

Configure the VES agent:
    curl -i -H "Content-Type:application/json" --data '{"config":{"server-addr":"127.0.0.1","server-port":80,"read-interval":10,"is-add":1}}' -X POST -u admin:admin http://127.0.0.1:8183/restconf/config/vesagent:vesagent

Delete the VES agent configuration (must be done before changing the configuration):
    curl -i -H "Content-Type:application/json" -X DELETE -u admin:admin http://127.0.0.1:8183/restconf/config/vesagent:vesagent/config

Query the VES agent configuration:
    curl -i -H "Content-Type:application/json" -X GET -u admin:admin http://127.0.0.1:8183/restconf/config/vesagent:vesagent/config

Configure the VES mode to 'demo' and 40% packet loss:
    curl -i -H "Content-Type:application/json" --data '{"mode":{"working-mode":"demo","base-packet-loss":40,"source-name":""}}' -X POST -u admin:admin http://127.0.0.1:8183/restconf/config/vesagent:vesagent

Note: The "source-name" attribute must be supplied when configuring the VES mode via Honeycomb.  If set to "" (as shown), the default source name will be used in the VES event message (currently the 'vnf-id').  If set to a string other than "", then that string will be used as the source name.
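For example, a hypothetical configuration that overrides the source name (the string "my-vgmux-01" is purely illustrative):

    curl -i -H "Content-Type:application/json" --data '{"mode":{"working-mode":"demo","base-packet-loss":40,"source-name":"my-vgmux-01"}}' -X POST -u admin:admin http://127.0.0.1:8183/restconf/config/vesagent:vesagent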

Delete the VES mode (need to delete before changing it via Honeycomb):
    curl -i -H "Content-Type:application/json" -X DELETE -u admin:admin http://127.0.0.1:8183/restconf/config/vesagent:vesagent/mode

Query the VES mode configuration:
    curl -i -H "Content-Type:application/json" -X GET -u admin:admin http://127.0.0.1:8183/restconf/config/vesagent:vesagent/mode


Configuring vBRG and vG-MUX via REST from SDNC

The following are sample configurations made to the VNFs running in the ONAP lab.

These are curl commands issued from the SDNC VM to the vBRG and vG-MUX respectively.

In order to allow the SDNC VM to talk to the vBRG via its WAN IP address, the following routing entry was made on the SDNC VM.

10.3.0.0/24 is the subnet for the vBRG and 10.0.1.10 is the ONAP OAM address of the vBNG.

ubuntu@vm-vcpe-sdnc:~$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
...
10.3.0.0        10.0.1.10       255.255.255.0   UG    0      0        0 eth0
...
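An entry like the one above can be added with a standard Linux command along these lines (the eth0 interface name matches the output above):

    sudo ip route add 10.3.0.0/24 via 10.0.1.10 dev eth0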

Configuration to the vBRG

Create VXLAN port for tunnel to the vG-MUX:
    curl -H 'Content-Type: application/json' -H 'Accept: application/json' -u admin:admin -X PUT -d '{"interface":[{"name":"vxlanTun0", "type":"v3po:vxlan-tunnel", "enabled":"true", "link-up-down-trap-enable": "enabled", "vxlan":{"src":"10.3.0.9", "dst":"10.1.0.20", "vni":"100", "encap-vrf-id":"0"}}]}' http://10.3.0.9:8183/restconf/config/ietf-interfaces:interfaces/interface/vxlanTun0

Add the VXLAN port to the bridge-domain:
    curl -H 'Content-Type: application/json' -H 'Accept: application/json' -u admin:admin -X PUT -d '{"l2":{"bridge-domain":"bridge-domain-10", "bridged-virtual-interface": false, "split-horizon-group": 2}}' http://10.3.0.9:8183/restconf/config/ietf-interfaces:interfaces/interface/vxlanTun0/v3po:l2


Configuration to the vG-MUX

Create VXLAN port for tunnel to vBRG:
    curl -H 'Content-Type: application/json' -H 'Accept: application/json' -u admin:admin -X PUT -d '{"interface":[{"name":"vxlanTun1", "type":"v3po:vxlan-tunnel", "enabled":"true", "link-up-down-trap-enable": "enabled", "vxlan":{"src":"10.1.0.20", "dst":"10.3.0.9", "vni":"100", "encap-vrf-id":"0"}}]}' http://10.0.101.20:8183/restconf/config/ietf-interfaces:interfaces/interface/vxlanTun1

Create VXLAN port for tunnel to vGW:
    curl -H 'Content-Type: application/json' -H 'Accept: application/json' -u admin:admin -X PUT -d '{"interface":[{"name":"vxlanTun0", "type":"v3po:vxlan-tunnel", "enabled":"true", "link-up-down-trap-enable": "enabled", "vxlan":{"src":"10.5.0.20", "dst":"10.5.0.21", "vni":"100", "encap-vrf-id":"0"}}]}' http://10.0.101.20:8183/restconf/config/ietf-interfaces:interfaces/interface/vxlanTun0

Create XConnect between the two VXLAN tunnels:
    curl -H 'Content-Type: application/json' -H 'Accept: application/json' -u admin:admin -X PUT -d '{"l2":{"xconnect-outgoing-interface":"vxlanTun1"}}' http://10.0.101.20:8183/restconf/config/ietf-interfaces:interfaces/interface/vxlanTun0/v3po:l2
    curl -H 'Content-Type: application/json' -H 'Accept: application/json' -u admin:admin -X PUT -d '{"l2":{"xconnect-outgoing-interface":"vxlanTun0"}}' http://10.0.101.20:8183/restconf/config/ietf-interfaces:interfaces/interface/vxlanTun1/v3po:l2
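The resulting configuration can be read back with a RESTCONF GET against the same nodes, for example:

    curl -H 'Accept: application/json' -u admin:admin -X GET http://10.0.101.20:8183/restconf/config/ietf-interfaces:interfaces/interface/vxlanTun0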


Configuration of routing entry from vG-MUX to the vBRG via the vBNG

Configure static route from vG-MUX to vBRG via the vBNG:
    curl -H 'Content-Type: application/json' -H 'Accept: application/json' -u admin:admin -X PUT -d '{ "routing-protocol":[ { "name":"learned-protocol-0", "description":"static route to vbrg", "enabled":"true", "type":"hc2vpp-ietf-routing:static", "vpp-protocol-attributes": { "primary-vrf": "0"}, "static-routes":{ "ipv4":{ "route":[ { "id":1, "description":"static route to vbrg", "destination-prefix":"10.3.0.9/32", "next-hop":"10.1.0.10", "outgoing-interface":"GigabitEthernet0/4/0" }]}}}]}' http://10.0.101.20:8183/restconf/config/hc2vpp-ietf-routing:routing/routing-instance/vpp-routing-instance/routing-protocols/routing-protocol/learned-protocol-0

Alternatively, a generic route to the subnet could be added:

CLI command to add the static route:
    vppctl ip route add 10.3.0.0/24 via 10.1.0.10 GigabitEthernet0/4/0
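Either way, the installed route can be checked from the VPP CLI (output format varies by VPP version):

    vppctl show ip fib 10.3.0.0/24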