

Introduction

The Swisscom virtual BNG and Edge SDN M&C for the ONAP BBS use-case is a demo system. It is a functional prototype and is not meant for production use. The whole system runs in a single OpenStack VM and can therefore only support a small number of subscribers. The forwarding dataplane is implemented entirely inside the VM's networking stack and is therefore not designed for high data rates. The system interacts with ONAP according to the BBS use-case definition.

Below is a diagram of the whole end-to-end architecture; the Edge SDN M&C + vBNG VM is highlighted:

[Figure: vbng-overview, end-to-end architecture with the Edge SDN M&C + vBNG VM highlighted]

As shown in the diagram above, both the DHCP and the dataplane traffic are brought to the VM by VxLAN tunneling. Since it is just routed L3 traffic, the DHCP traffic does not strictly require VxLAN; for the sake of not mixing two different tunnel technologies, we use VxLAN tunneling for both DHCP and dataplane traffic. The remaining question is whether the OLT supports native VxLAN tunneling. In our case it does not, so a VxLAN tunnel encapsulation device is required (see the transport middle boxes above). Those boxes are very simple to set up; a sketch of the idea follows below, and the very end of this document shows how to build such a middle box.
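As a rough sketch of the idea (all names and addresses here are placeholders, not values from this guide), such an encapsulation device simply bridges the OLT-facing L2 port into a VxLAN tunnel toward the vBNG:

ip link add br0 type bridge                  # L2 bridge between OLT port and tunnel
ip link set br0 up
ip link set <olt_nic> master br0 up          # NIC facing the OLT
ip link add vxlan0 type vxlan id <vni> local <mb_ip> remote <vbng_ip> dstport 4789
ip link set vxlan0 master br0 up             # tunnel toward the vBNG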

Installation by Heat Template

The Swisscom virtual BNG and Edge SDN M&C is installed and initially set up by a single Heat Orchestration Template. The OpenStack cloud you would like to use for testing should therefore support orchestration with Heat. The complete initial vBNG + Edge SDN M&C configuration is provided as Heat stack parameters. The Heat stack creates all the required OpenStack infrastructure, e.g. router, network, security group, port and vBNG instance.

Once the stack is created, the vBNG configuration file is deployed to '$HOME/vbng.conf' on the instance, and the latest vBNG code is pulled directly from the specified upstream Git repository (default is the Swisscom repository) by the cloud-init user-data script. The stack output shows the initial vBNG configuration, the floating IP and how to connect to the instance by SSH. To create a stack in OpenStack Heat you first need the Heat template from here: https://git.swisscom.com/projects/ZTXGSPON/repos/opnfv/browse/heat/vbng.yaml (drop a note to Michail Salichos, David Perez Caparros or Daniel Balsiger in case you do not have access).

Option A) Upload template in Horizon

The stack can be created directly in OpenStack Horizon by:

  • Navigating to 'Orchestration -> Stacks' in the sidebar
  • Pressing the 'Launch Stack' button
  • In 'Template Source' selecting to upload the template file 'vbng.yaml' and pressing 'Next'
  • Entering the desired values for the stack input parameters Heat now asks for in the Horizon form

The template defines a hopefully useful default for each parameter, so not much has to be changed for the Swisscom Lab installation. However, a few things, e.g. image, flavor and key, have to be selected in the drop-down menus. All of the initial configuration can also be changed there. For a full list of the supported stack parameters see the appendix below.

Option B) OpenStack Commandline Client

Source the openrc.sh file of your OpenStack tenant and create the Heat stack as follows:

source Downloads/vbng-openrc.sh
Please enter your OpenStack Password for project vBNG as user bng:

openstack stack create -t Downloads/vbng.yaml vbngstack

In case you would like to override the default parameters with custom values, add '--parameter <key=value>', e.g.:

openstack stack create -t Downloads/vbng.yaml vbngstack --parameter key='your_key' --parameter flavor='your_flavor' --parameter image='your_image'
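Alternatively, Heat accepts an environment file, which is more convenient when overriding many parameters. A minimal sketch (the file name env.yaml is just an example), followed by the standard commands to inspect the created resources and read the stack outputs:

cat > env.yaml << EOF
parameters:
  key: your_key
  flavor: your_flavor
  image: your_image
EOF
openstack stack create -t Downloads/vbng.yaml -e env.yaml vbngstack

openstack stack resource list vbngstack     # router, network, security group, port, instance
openstack stack output show vbngstack --all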

A full list of the supported stack parameters is shown in the following table:

Appendix: Stack Parameters

| Key | Default Value | Description | Notes |
| --- | --- | --- | --- |
| OpenStack Settings | | | |
| key | vbng | Name of the SSH keypair for logging in to the instance | constraint: nova.keypair |
| image | "CentOS 7 x86_64 GenericCloud 1901" | Name of the Glance image | Supported are upstream cloud images for Ubuntu 16.04 / Ubuntu 18.04 / CentOS 7; constraint: glance.image |
| flavor | a1.tiny | Flavor to use for the instance | Can be a small one (1 vCPU / 4 GB RAM / 10 GB disk); constraint: nova.flavor |
| extnet | external | Name of the external network | The existing OpenStack external network containing the floating IPs |
| int_cidr | 192.168.1.0/24 | Internal network IPv4 addressing in CIDR notation | Can be anything in the private IP space if your OpenStack supports overlapping tenant IP ranges |
| dns1 | 8.8.8.8 | DNS server 1 for the internal network | The first DNS server the OpenStack VMs will use |
| dns2 | 8.8.4.4 | DNS server 2 for the internal network | The second DNS server the OpenStack VMs will use |
| vBNG Git Repository Settings | | | |
| git_repo | ssh://git@git.swisscom.com:7999/ztxgspon/vbng.git | Virtual BNG Git repository URL (ssh://) | This repository holds the vBNG code and is cloned by cloud-init |
| git_sshkey | NOT SHOWN HERE | SSH private key for the Git repository (read-only access) | For cloud-init read-only access |
| git_hostkey | NOT SHOWN HERE | SSH host key of the Git host (git.swisscom.com) | |
| vBNG Settings | | | |
| cust_cidr | 10.66.0.0/16 | Customer IPv4 network in CIDR notation | The network for your subscribers |
| cust_gw | 10.66.0.1 | Customer IPv4 network gateway | The IPv4 gateway your subscribers will use |
| cust_dns | 8.8.8.8 | Customer DNS server | The DNS server your subscribers will use |
| cust_start | 10.66.1.1 | Customer IPv4 range start address | Subscriber IP range for DHCP |
| cust_end | 10.66.1.254 | Customer IPv4 range end address | Subscriber IP range for DHCP |
| dhcp_cidr | 172.24.24.0/24 | DHCP server/relay network in CIDR notation | The network between the DHCP server and the DHCP L3 relay on the OLT |
| dhcp_ip | 172.24.24.1 | DHCP server IPv4 address | The DHCP server binds/listens to this address |
| in_tun_port | 4789 | UDP port for incoming VxLAN tunnels | For incoming VxLAN UDP packets; used to configure the OpenStack security groups |
| onap_dcae_ves_collector_url | http://172.30.0.126:30235/eventListener/v7 | ONAP DCAE VES Collector URL | The URL the VES agent streams VES events to |

vBNG Initial Configuration by cloud-init

Once the stack is created by Heat, the cloud-init user-data script checks out the vBNG Git repository and runs the scripts 00-installdeps.sh, 01-setupdatapath.sh and 02-setupcontainers.sh contained in the repository. The parameters passed to them are kept in $HOME/vbng.conf. Once cloud-init has finished its job, it creates the file $HOME/vbng_provisioning_done on your instance. Logs are kept in /var/log/cloud-init-output.log. You may re-run those scripts as often as you wish; the work will only be done once. You have to re-run these 3 scripts after an instance reboot (see the sketch after the list below), and keep in mind that a reboot may be required after kernel updates have been installed.

  • vbng/00-installdeps.sh

    • Updates the system, installs dependent packages, installs and sets up Docker.

  • vbng/01-setupdatapath.sh

    • Sets up the datapath part, including shaping, routing and NAT.

  • vbng/02-setupcontainers.sh

    • Creates the Docker images and starts all containers: database, message queue, Restconf server, VES agent and DHCP server.
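A quick way to check the provisioning state and to re-run the three scripts after a reboot (a sketch, assuming the scripts pick up their parameters from $HOME/vbng.conf as described above):

ls -l $HOME/vbng_provisioning_done      # exists once cloud-init has finished
tail /var/log/cloud-init-output.log     # provisioning logs
# after a reboot: re-run the three scripts, work is only done once
vbng/00-installdeps.sh
vbng/01-setupdatapath.sh
vbng/02-setupcontainers.sh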

OLT Onboarding Configuration

OLT onboarding configuration is not done by cloud-init, since the OLT parameters are normally not known at stack creation time. For OLT onboarding, the two tunnels for datapath and DHCP transport as well as the DHCP L3 relay on the OLT have to be configured. Another script is therefore used once the vBNG instance has been provisioned initially:

  • vbng/03-setupolt.sh

    The script accepts exactly 8 parameters to specify the tunnel and DHCP relay options. Already configured OLTs are kept in $HOME/oltmap.txt. The parameters are:
    1. vxlan_data_ip: The IP Address of the VxLAN remote tunnel endpoint for OLT datapath

    2. vxlan_data_port: The UDP Port of the VxLAN remote tunnel endpoint for OLT datapath

    3. vxlan_data_vni: The VNI of the VxLAN remote tunnel endpoint for OLT datapath

    4. vxlan_dhcp_ip: The IP Address of the VxLAN remote tunnel endpoint for DHCP server / relay traffic

    5. vxlan_dhcp_port: The UDP Port of the VxLAN remote tunnel endpoint for DHCP server / relay traffic

    6. vxlan_dhcp_vni: The VNI of the VxLAN remote tunnel endpoint for DHCP server / relay traffic

    7. relay_north_ip: The Northbound IP of the L3 DHCP relay on the OLT. (Where the DHCP server routes its replies to)

    8. relay_south_ip: The Southbound IP of the L3 DHCP relay on the OLT. (Where the DHCP replies are injected into datapath)

      [centos@vbng ~]$ vbng/03-setupolt.sh 
      Usage: vbng/03-setupolt.sh [vxlan_data_ip] [vxlan_data_port] [vxlan_data_vni] \
                                 [vxlan_dhcp_ip] [vxlan_dhcp_port] [vxlan_dhcp_vni] \
                                 [relay_north_ip] [relay_south_ip]
      [centos@vbng ~]$ vbng/03-setupolt.sh 172.30.0.252 4789 88888  172.30.0.253 4789 100 172.24.24.2 10.66.0.2
      Setting up VxLAN tunnel interface olt0 (172.30.0.252:4789 VNI=88888)
      Setting up VxLAN tunnel interface dhcp0 (172.30.0.253:4789 VNI=100)
      Adding port dhcp0 to bride dhcp...
      Adding relay route to 10.66.0.2 over 172.24.24.2 inside bbs-edge-dhcp-server container...
      [centos@vbng ~]$
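To verify the result, the tunnel interfaces and the OLT map can be inspected with standard tools (the interface names olt0 and dhcp0 are taken from the output above):

ip -d link show olt0       # shows VxLAN details: VNI, remote endpoint, dstport
ip -d link show dhcp0
cat $HOME/oltmap.txt       # OLTs configured so far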
      

ONT/Subscriber Configuration

Subscribers are usually configured by calls to the bbs-edge-restconf-server directly from ONAP. In case you would like to test this functionality, you can of course trigger it directly with curl against the floating IP of the vBNG instance, TCP port 5000:

curl -H "Content-Type: application/json" -X POST -d '{"remote_id":"AC9.000.990.001","ont_sn":"serial","service_type":"Internet","mac":"00:00:00:00:00:00","service_id":"1","up_speed":"100","down_speed":"100","s_vlan":10,"c_vlan":333}' 172.30.0.134:5000/CreateInternetProfileInstance

The important parameters are "remote_id":"AC9.000.990.001", "s_vlan":10 and "c_vlan":333; the configured values must of course match what the OLT/ONT in the Lab sends. DHCP authentication is done solely on the correct value of remote_id. Once a subscriber is successfully authenticated and given a lease by DHCP, the dataplane configuration is delegated to a host process by publishing a message to the queue. The host process consumes the message from the queue and configures the subscriber's dataplane with the help of these two scripts (see the example after the list):

  • vbng/04-setupcustomer.sh
    • Enable a particular customer
    • Usage: vbng/04-setupcustomer.sh [olt_id] [s-vlan] [c-vlan] [customer_ip] [traffic_profile_id]
  • vbng/05-removecustomer.sh
    • Remove a particular customer
    • Usage: vbng/05-removecustomer.sh [customer_ip]
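For reference, a hedged example of invoking the two scripts by hand. The argument values are purely illustrative: olt_id is assumed to correspond to the remote_id used above, and traffic_profile_id 1 is one of the four profiles mentioned below.

vbng/04-setupcustomer.sh AC9.000.990.001 10 333 10.66.1.23 1   # enable subscriber 10.66.1.23
vbng/05-removecustomer.sh 10.66.1.23                           # remove the subscriber again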

Currently only 4 subscriber profiles are supported (1/2/3/4): two at 100 Mbit/s symmetrical and two at 20 Mbit/s symmetrical, respectively. This should be enough to run all test cases of the BBS use-case.

ONAP Configuration

The installation and initial configuration of Edge SDN M&C + vBNG is done by a Heat stack template, see above. The parameters which must be modified in ONAP are the following:

  • The IP of Edge SDN M&C, required so it can be accessed from SDN-C, is currently hardcoded in the DG -> GENERIC-RESOURCE-API_bbs-internet-profile-network-topology-operation-common-huawei.json (<parameter name='prop.sdncRestApi.thirdpartySdnc.url' value='http://172.30.0.121:5000' />). The Edge SDN M&C external controller is not registered in ESR for this release. Note: the IP above is provided by the Heat stack output; it is the floating IP of the vBNG instance in Swisscom's Lab.
  • To update the IP of Edge SDN M&C in the corresponding DG, one must export the relevant DG mentioned above, update the IP, import it back and finally enable the DG (see the sketch below).
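A minimal sketch of the update step, assuming the DG has been exported to the JSON file named above and <your_vbng_floating_ip> is the floating IP from your Heat stack output:

sed -i 's|http://172.30.0.121:5000|http://<your_vbng_floating_ip>:5000|' \
    GENERIC-RESOURCE-API_bbs-internet-profile-network-topology-operation-common-huawei.json
# afterwards import the modified DG back into SDN-C and enable it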


Setup Transport Middle Box for VxLAN Tunneling

We built our middle boxes on top of CentOS 7; Ubuntu and other distributions will work in a similar way. The commands shown here refer to CentOS 7. The middle box can be any x86 server with two 10 Gbit/s NICs: one NIC faces the OLT on L2, the other NIC sits in the external network to communicate with the vBNG. To set up such a middle box, run the following configuration steps on top of a minimal CentOS 7 installation:

  • Copy the team members SSH public keys and disable SSH password auth:

    cat > authorized_keys << __EOF
    ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCjD/+BIg4c28lHlHw464vbfUYjfDJ1sSKgrEYcMkL+qO6LagkDAWkWdelmAmpcUJlOPYjxDwmKj8Bu6/fd+WfVzk6y33YVmAFN4jAmv/87dYCNuAMr4gDWc3cU5lsNdpsPzQqGUCFfJCvldyUZeu21YZ2rkYB1+Q9VObUSaa5Z74sKNYQJi0AgnZh63cYOyqVDCwIloWd2FzC+4o04cVL3P1R+COGRq1EUUmy5LSI9rsCO59mLCt8Wm4h5OiY84nEbQVZUH3QyYw/ihmGm2qtklkbNMPOPZ7+8ZN5+of4u/7bpEiZk3FcMh7lYwi6dMyUzwv47Il633JP6GDgOxuCH Daniel Balsiger SSH
    ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDH8lM+qleGIvXI3wgqIp73pKZwwxKfr9BDCdoVP3/zWRQ/7zpw98nvx7gqfVLlt+P2TjxHbSJqGrSECSmKFCHsYzuA+khmg/aca/IQa2FYFpUR1sT4czWQC14PiGGIoSbMukeUZvddZwZlalNZmOKjzY1Flz3w7+W+XHyFuwy6qfaIt1hIBKkqTUxECYq0O6OkdK6gzouKuAY/4AM+VvcIkdHMm9x3LCXWBAH24QzCG/IzydqXfi4FkVtmGJv2AgEMyR0seSoU3drCXvpY91WjXT8i6m7EMB739hw0V32UaqslY3qHtuNTGake5JFWJn9zYF6lZwGXpU94Bw7YjQL1 Michail Salichos SSH
    ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCRxCsboa1ERMgiJCP2iA8Zcm2LuAOALQHIZIQEvbcwMifdeXMTawC0tDnU6qy35q+cr5W3+4HJDyBLSAKmDosZepm1a/27cRlgXK/vtkxM5UlDk+lZsF/YGXBzZvWepM4XhozzCMNfvWWxkz5SnEl/ZYfdN2H5psXReNTgBX33ax2cI+aOBZxsX2Y0FYBuqlJFT7htgblGjHLq43nL/cF9w9cXkMv+mPUQJN4wNf1HU5JBjX6sKl6Y3IIPxEVGFohu8c9tDHa8JoWxIzKZz3z9Zd8KkfTTsRtXh3MH7mMRZkVTgHHVU3NA4/psEVMJHFtXI6R/laOv8Lpytdky7tkv taapeda0@UM01183
    
    __EOF
    mkdir .ssh
    chmod 0700 .ssh
    cp authorized_keys .ssh # copy, do not move (preserves the correct SELinux context)
    chmod 0600 .ssh/authorized_keys
    rm -f authorized_keys
    sed -e 's|^PasswordAuthentication yes|PasswordAuthentication no|' -i /etc/ssh/sshd_config
    systemctl restart sshd
  • Disable NetworkManager, Firewalld and Postfix services, enable legacy networking:

    systemctl disable NetworkManager
    systemctl stop NetworkManager
    systemctl disable firewalld
    systemctl stop firewalld
    systemctl disable postfix
    systemctl stop postfix
    
    systemctl enable network
    systemctl start network
  • Create the network interface configuration files in /etc/sysconfig/network-scripts/:

    • ifcfg-bridge:

      DEVICE=bridge
      TYPE=Bridge
      MTU=1400
      ONBOOT=yes
      BOOTPROTO=none
      IPV6INIT=no
      IPV6_AUTOCONF=no
    • ifcfg-nic1 (facing OLT):

      DEVICE=nic1
      TYPE=Ethernet
      MTU=1400
      ONBOOT=yes
      BOOTPROTO=none
      IPV6INIT=no
      IPV6_AUTOCONF=no
      BRIDGE=bridge
    • ifcfg-nic2 (in external network, facing vBNG):

      DEVICE=nic2
      TYPE=Ethernet
      MTU=1450
      ONBOOT=yes
      BOOTPROTO=none
      IPV6INIT=no
      IPV6_AUTOCONF=no
      IPADDR=172.30.0.252
      PREFIX=24
      DEFROUTE=yes
      GATEWAY=172.30.0.1
      DNS1=8.8.8.8
      DNS2=8.8.4.4
  • Create the VxLAN tunnel interface when the bridge comes up:

    cat > /sbin/ifup-local << __EOF
    #!/bin/sh
    # Hook called by the legacy network service whenever an interface comes up.
    # Once the bridge is up, create the VxLAN tunnel toward the vBNG and attach it to the bridge.
    if [[ "\$1" == "bridge" ]]
    then
      ip link add vxlan0 type vxlan id 88888 local 172.30.0.252 remote 172.30.0.121 dstport 4789 dev nic2
      ip link set up dev vxlan0
      ip link set master bridge dev vxlan0
    fi
    __EOF
    
    chmod 755 /sbin/ifup-local
    restorecon -Fv /sbin/ifup-local
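
  • With the configuration files and the hook script in place, bring the interfaces up and verify the setup (a sketch, assuming the device names used above):

    systemctl restart network
    bridge link show           # nic1 and vxlan0 should both be ports of the bridge
    ip -d link show vxlan0     # VNI 88888, remote 172.30.0.121, dstport 4789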


