
This document explains how to run the ONAP demos on Azure using the Beijing release of ONAP.

The Beijing release has certain limitations, for which fixes/workarounds have been provided so that the demos can be executed. This document contains the details of those fixes/workarounds and the steps to deploy them.



...

Limitations of Beijing Release and Workarounds


1. SDC
   Issue detail: The VNFs are onboarded using TOSCA. However, SDC does not support the 'Group' construct (aka VFModule), which is required when the TOSCA is ingested in SO after distribution.
   Current status: A partial fix has been provided to overcome this issue.
   Further actions: A proposal to include the fix and enhance SDC to support the TOSCA 'Groups' construct has been submitted to the SDC PTL. It is expected to be discussed and included in the Casablanca release.

2. VID
   Issue detail: VID is unable to show the 'VFModule' of a TOSCA service definition. This is linked to the SDC issue above.
   Current status: A fix has been provided.
   Further actions: The fix will be submitted along with the SDC 'Group' fix for the Casablanca release.

3. MultiVIM Broker
   Issue detail: The MultiVIM broker does not support HTTP requests with content-type 'multipart/form-data'.
   Current status: Fixed in the Beijing release.
   Further actions: NA

4. SDC
   Issue detail: The SDC UI shows the service distribution as unsuccessful even though the TOSCA was successfully deployed in SO.
   Current status: Fixed in the Beijing release.
   Further actions: NA

5. SO
   Issue detail: The VFModule is not available in the TOSCA definitions, due to which SO does not consume the service properly.
   Current status: A fix has been provided in the 'ASDC Controller' module so that the VFModule tables in the Catalog schema can be populated.
   Further actions: A proper solution depends on the SDC fix explained above. The changes to support 'Groups'/'VFModule' are part of the SDC client library provided by SDC; in addition, the ASDC controller would need fixes to handle the SDC client library changes. Expected to be fixed in the Casablanca release.

6. SO
   Issue detail: A custom workflow is needed to call the MultiVIM adapter.
   Current status: A downstream image of SO is available on GitHub which contains the custom workflow BPMN along with the MultiVIM adapter.
   Further actions: A base version of the code has been pushed to Gerrit that supports the SO-Multicloud interaction, but it does not support multipart data (CSAR artifacts passed to the plugin). This needs to be upstreamed.

7. MultiCloud plugin
   Issue detail: The current Azure plugin on ONAP Gerrit does not support the vFW and vDNS use-cases.
   Current status: The downstream image from GitHub is used, and a custom chart has been developed in OOM (downstream) to deploy it as part of the MultiCloud component set.
   Further actions: The azure-plugin code needs to be upstreamed to support the vFW and vDNS use-cases.



High level Solution Architecture

...

Note
Not all ONAP components are shown in the high-level solution; only the new components/modules introduced in this solution are shown. Everything else remains the same.

Deploying ONAP on Azure using Beijing Release

ONAP needs to be deployed with the Docker images containing the fixes/workarounds provided for the limitations in the Beijing release. Some fixes have already been merged into the Beijing release, and those Docker images will be used.

Note

This section explains deploying the ONAP Beijing release on Azure for executing the demo scenarios.

If you want to deploy other releases of ONAP, or to deploy without the fixes for the Beijing release, please refer here.

The OOM deployment values charts have also been modified to deploy the Docker images with the fixes (a quick way to verify the image references is sketched after the table below).

The detailed list of changes is given below:

 

S.No | Project Name | Docker Image (pull from the Docker Hub repo) | Remarks
1 | OOM | elhaydox/oom | Contains the latest values.yaml files, which point to the downstream images of SO and Multicloud-azure-plugin below and include fixes for the SDC POL:5000 error during distribution and the Firefox browser crash.
2 | OOM Config | elhaydox/oomconfig | Contains the configuration files.
3 | SO | elhaydox/mso:1.2.2_azure-1.1.0 | Contains the VFModule fix along with the newly developed BPMN and MultiVIM adapter.
4 | multicloud-azure | elhaydox/multicloud-azure | Aria plugin to interface with Azure and instantiate VNFs.
5 | SDC | elhaydox/sdc-backend | Contains the partial fix to support the 'Group' construct.
6 | VID | elhaydox/vid | Contains the partial fix to support the 'Group' construct so that the VF-Module can be instantiated from VID.
7 | Robot | elhaydox/testsuite | Robot automation code to instantiate vFW and vDNS on Azure.
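As a quick sanity check that the modified values charts point at the Docker Hub images listed above, the image references can be grepped out of the downstream OOM clone. This is a minimal sketch; it assumes the oom clone described in the "Deploying ONAP" section below is already present on the VM.

Code Block
# list every values.yaml in the downstream OOM clone that references the
# elhaydox Docker Hub images from the table above
grep -rn --include="values.yaml" "elhaydox" oom/kubernetes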

Deploying ONAP on Azure

Deploying ONAP on Azure consists of two steps:

  • Creation of a Kubernetes cluster on Azure
  • Deploying ONAP

Creation of Kubernetes cluster on Azure

  • Login to Azure (a sketch for creating the service principal credentials used here is given at the end of this section)

    Code Block
    az login --service-principal -u <client_id> -p <client_secret> --tenant <tenant_id/directory_id>

     

  •  Create a resource group  

    Code Block
     az group create --name <resource_group_name> --location <location_name>
  • Get the deployment templates from ONAP gerrit

    Code Block
     git clone -o gerrit https://gerrit.onap.org/r/integration
     cd integration/deployment/Azure_ARM_Template
  • Change arm_cluster_deploy_parameters.json file data (if required)
  • Run the deployment template  

    Code Block
        az group deployment create --resource-group deploy_onap --template-file arm_cluster_deploy_beijing.json --parameters @arm_cluster_deploy_parameters.json

      Change the parameters file accordingly.

      Files attached:

Note
The original OOM templates are here - https://jira.onap.org/browse/LOG-321. However, those templates require the fixes described above to be merged in ONAP. Until then, use the attached files above.
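If you do not already have the <client_id>, <client_secret> and <tenant_id> values used in the "az login" step above, they can be generated by creating an Azure service principal. The sketch below uses a standard Azure CLI command; the service principal name "onap-deploy-sp" is only an example.

Code Block
# create a service principal; the JSON output maps to the az login parameters:
#   appId    -> <client_id>
#   password -> <client_secret>
#   tenant   -> <tenant_id/directory_id>
az ad sp create-for-rbac --name onap-deploy-sp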

Deploying ONAP on VM

  • The deployment process takes around 30 minutes to complete. A cluster of 12 VMs will be created on Azure (as per the parameters). The VM whose name has the post-index "0" runs the Rancher server, and the remaining VMs form the Kubernetes cluster. The VMs and their public IPs can be listed with the Azure CLI sketch below.
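The following Azure CLI commands can be used to confirm the cluster VMs and find the public IP of the Rancher VM (post-index "0"). This is a minimal sketch and assumes the resource group name used for the deployment above.

Code Block
# list the VMs created by the ARM template
az vm list --resource-group <resource_group_name> --output table

# list the public IP addresses, to find the address of the VM with post-index "0"
az network public-ip list --resource-group <resource_group_name> --output table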

Deploying ONAP

  • SSH as the root user to the VM where the Rancher server is installed (the VM with post-index "0", as mentioned above).

    Note - Helm upgrade

    When you log in to the Rancher server VM for the first time, run "helm ls" to make sure the Helm client and server (Tiller) versions are compatible. If it returns an error such as "Error: incompatible versions client[v2.9.1] server[v2.8.2]", execute "helm init --upgrade" (see the sketch after this note).
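    The commands from the note above, collected as a single sketch (Helm v2 syntax, matching the versions shown in the error message):

    Code Block
    # check the client and server (Tiller) versions and list the releases
    helm version
    helm ls
    # if the versions are incompatible, upgrade Tiller to match the client
    helm init --upgrade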

  • Download the OOM repo from GitHub onto the VM created in the step above (it references the downstream images). The clone contains the modified image references in the values charts; the Docker images contain the fixes explained above and are pulled from Docker Hub instead of the ONAP Nexus.

    Code Block - Get OOM on Azure VM
    git clone -b beijing --single-branch https://github.com/onapdemo/oom.git

  • Alternatively, for the Amsterdam release, download the deploy_onap.sh wrapper script instead:

    Code Block - Get install script on Azure VM
    # this script is based/branched-off the scripts developed for the CD system under LOG-320 - refer to the merges there for updated scripts
    wget https://raw.githubusercontent.com/onapdemo/onap-scripts/master/entrypoint/deploy_onap.sh
    chmod 777 deploy_onap.sh

    The deploy_onap.sh script is a wrapper/utility script that does the following:
      1. Executes the ONAP script to install Rancher - oom_rancher_setup.sh
      2. Clones OOM from GitHub (the clone with the modified image references and Docker Hub images described above)
      3. Installs ONAP based on the modified values charts in oom

    Note
    This script should be used only for the Amsterdam release on Azure. The script is a wrapper for OOM that installs the required images from Docker Hub. Once the fixes are merged in ONAP, which could happen in the Casablanca release, it may no longer be required; the original OOM scripts will then be sufficient to install ONAP.
    Please refer to the original ONAP script at https://git.onap.org/logging-analytics/tree/deploy/cd.sh for installing other releases.
  • To deploy ONAP using the wrapper script, execute the command below.

Code Block
./deploy_onap.sh -e onap -t single -r true -n $dns_name

    -r : set to true to deploy Rancher and Kubernetes on the VM
    $dns_name : the public IP address/DNS name assigned to the VM

  • To delete a previously deployed ONAP and deploy a new one, execute:

Code Block
./deploy_onap.sh -e onap -t single -c true -d true -n $dns_name
  • To install ONAP from the OOM clone, execute the commands below in sequence (a monitoring sketch is given after this list).

    Code Block - Install ONAP with Helm
    cd oom/kubernetes
    make all   # creates and stores the helm charts in the local repo
    helm install local/onap --name dev --namespace onap

    Note
    Due to network glitches on the public cloud, the installation sometimes fails with the error "Error: release dev failed: client: etcd member http://etcd.kubernetes.rancher.internal:2379 has no leader". If this happens during deployment, re-install ONAP:
    helm del --purge dev
    rm -rf /dockerdata-nfs/*   # wait for a few minutes
    helm install local/onap --name dev --namespace onap
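After the helm install completes, the ONAP pods take a while to come up. The sketch below is a minimal way to monitor progress; it assumes kubectl is configured on the Rancher VM by the cluster setup above and uses the release name "dev" from the install command.

Code Block
# watch the ONAP pods until they reach the Running/Completed state
kubectl get pods --namespace onap -o wide

# check the overall status of the helm release
helm status dev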

Running ONAP use-cases

...

Refer to the below pages to run the ONAP use-cases

...