Introduction


This wiki page details the steps to automate Network Slicing use case testing using the O-RAN-SC SMO package.


The Network Slicing use case normally involves deploying and configuring many ONAP components and starting many simulators. The SMO package provides an ideal platform to ease the deployment and configuration of such a use case and eventually to automate its testing.


The first use case we have chosen is Network Slicing Option2. The detailed steps to run the use case manually can be found at the link below.

https://wiki.onap.org/display/DW/E2E+network+slicing+use+case+-+Option2


There are, in general, 4 steps to run the use case, as listed below.

Currently the first 2 steps are complete, while the last 2 are still under development.


  1. Deploy ONAP components using SMO starting script
  2. Run use case preparation scripts to configure all the components
  3. Start the needed simulators
  4. Run test script and verify the result


The source code is stored under the SMO package on the O-RAN-SC Gerrit in the "it/dep" repo:

https://gerrit.o-ran-sc.org/r/gitweb?p=it/dep.git;a=tree;f=smo-install;h=2e4539d6c3c2e2a274d1913c89df371c956f0793;hb=HEAD


Step 1: Deploy ONAP components using SMO starting script


To start the platform for the use case, we can use the SMO starting scripts.

If you are not familiar with the SMO starting scripts, please read the SMO package introduction and watch the attached videos first.


In general, you first run the scripts under layer-0 to install the needed software on your lab.

Then trigger the script under layer-1 to build all the needed Helm charts.

At last, go to the layer-2 folder and run the following command to deploy the components needed for the network-slicing option2 use case.

$ cd ./scripts/layer-2
$ ./2-install-onap-only.sh network-slicing



The command above takes its configuration parameters from onap-override.yaml under the folder helm-override/network-slicing.

If you want different configuration values, update them in the onap-override.yaml file.


The deployment will take about 1 hour to fully start and stabilize, since it involves many components.

! Before going to the next step, please make sure all the components are running OK. !
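One way to spot unhealthy pods is to filter the pod listing for anything that is not Running or Completed. The sketch below runs the filter over two invented sample lines so it can be tried anywhere; on the lab, feed it the live "kubectl get pods -n onap --no-headers" output instead.

```shell
# Filter a pod listing for pods that are not yet healthy.
# The sample listing below stands in for live kubectl output.
pods='onap-so-bpmn-infra-5f7b9   1/1   Running   0   10m
onap-oof-has-api-7c6d4   0/1   Pending   0   10m'
echo "$pods" | awk '$3 != "Running" && $3 != "Completed"'
# prints only the Pending pod line
```

Proceed to the next step only when this filter prints nothing for the live listing.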


Note: the Network Slicing use case involves deploying lots of components. Depending on the system you are using, the number of pods to start might exceed the max-pod allowance of the system, which will cause many pods to stay in Pending status. If you use "kubectl describe" to show the details of such a pod, you will see the error message "too many pods".

Increasing the max-pod number to 200 will solve the issue. Different systems set the max pod count in different ways. With microk8s, open the file /var/snap/microk8s/current/args/kubelet, add the line "--max-pods=200" at the end of the file, restart the service with the command "service snap.microk8s.daemon-kubelite restart", and verify that the value has been updated with the command "kubectl describe node <node_value> | grep -i capacity -A 13".
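As a safe illustration of the microk8s change, the snippet below applies the edit to a scratch copy instead of the real /var/snap/microk8s/current/args/kubelet; the pre-existing content line is a made-up placeholder.

```shell
# Append --max-pods=200 to a scratch copy of the kubelet args file.
args_file=$(mktemp)
echo '--cluster-dns=10.152.183.10' > "$args_file"   # placeholder existing content
echo '--max-pods=200' >> "$args_file"               # the new pod limit
grep -c -- '--max-pods=200' "$args_file"
# prints 1
```

On the real file, run the service restart and the verification command from the note above afterwards.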


Step 2: Run use case preparation scripts to configure all the components


The preparation scripts are Python scripts based on the ONAP pythonsdk framework. A detailed introduction of the framework can be found in the SMO package introduction.


The scripts dedicated to the network slicing are located in folder test/pythonsdk/src/orantests/network_slicing.


Before running the script, please open settings.py under the folder test/pythonsdk/src/orantests/configuration. Make sure the URL settings for all the components have the correct values. You can either update settings.py with the correct IP/hostname values or update /etc/hosts on the lab, adding the IP/hostname mappings.
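If you choose the /etc/hosts route, the entries look like the sketch below. Both the IPs and hostnames here are hypothetical placeholders; use the addresses and names your settings.py actually references, and write to the real /etc/hosts (with sudo) instead of the scratch file used in this safe-to-run version.

```shell
# Append hypothetical IP/hostname mappings to a scratch stand-in for /etc/hosts.
hosts_file=$(mktemp)
cat >> "$hosts_file" <<'EOF'
10.0.0.10  sdc-fe.example.lab   # hypothetical SDC front-end mapping
10.0.0.11  so-api.example.lab   # hypothetical SO API mapping
EOF
grep -c 'example.lab' "$hosts_file"
# prints 2
```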


Once the settings are correct, go to the folder test/pythonsdk/src/orantests/network-slicing and run the following command to trigger the preparation script:

$ cd ./test/pythonsdk/src/orantests/network-slicing
$ tox -e ns-tests


The command will trigger the main script test_network_slicing.py, which in turn triggers the preparation script of each component.

The preparation process configures the components and also briefly verifies, at the end of each step, that the configuration was successful.

The whole process may take about 1 hour to complete. You can monitor the progress using the log file pythonsdk.debug.log located in the folder network_slicing/preparation.
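To surface failures quickly while the preparation runs, you can filter the log for ERROR lines. The two log lines below are invented samples so the filter is self-contained; on the lab you would follow the real file from the preparation folder with something like "tail -f pythonsdk.debug.log | grep ERROR".

```shell
# Keep only ERROR lines from a (sample) preparation log.
log='2024-05-01 10:00:01 INFO  step completed
2024-05-01 10:05:42 ERROR step failed'
echo "$log" | grep 'ERROR'
# prints the ERROR line only
```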


If everything goes fine, you will see logs similar to the ones shown below at the end.


If things go wrong, please read the logs to identify which part failed and try to fix that step manually.

Then you can update test_network_slicing.py, disable the steps that are already complete, and rerun the tox command to finish the rest of the configuration.


Please note, when examining test_network_slicing.py in detail, you will find that some of the preparation steps require extra input parameters, such as cst_id, cst_invariant_id and sp_id. These values can be found in both the logs and the SDC UI.


If the process fails in the middle of the SDC template creation, please update the sdc_template_suffix variable inside test_network_slicing.py and then rerun the script with the tox command.

Since SDC supports neither creating a template with an existing name nor deleting templates, you have to add a suffix to the original name so that the template is created under a new name.
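The suffix change can be scripted as below. The file here is a scratch stand-in for test_network_slicing.py, and both the original empty value and the "_retry1" replacement are hypothetical; pick any suffix not used before.

```shell
# Bump the SDC template suffix in a scratch copy of the test script.
test_file=$(mktemp)
echo 'sdc_template_suffix = ""' > "$test_file"   # assumed original assignment
sed -i 's/sdc_template_suffix = .*/sdc_template_suffix = "_retry1"/' "$test_file"
cat "$test_file"
# prints: sdc_template_suffix = "_retry1"
```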



Step 3: Start needed simulators


The Network Slicing Option2 use case involves 3 simulators: the external core NSSMF simulator, the ACTN simulator and the external RAN NSSMF simulator.

The SMO package prepares them for the lab using helm charts, which are under the tests_oom folder.

When the testing script starts, the needed simulators are started at the beginning. After the tests are completed, the simulators are deleted.


Step 4: Run test script and verify the result


When you type the tox command, it first runs the preparation scripts and then triggers the real testing script.

The real testing script is written in the file test_network_slicing.py, under the method test_network_slicing_option2.

It will trigger the network slicing use case and use assertions to verify the result.

At this moment, the method is still empty; the testing script is still in progress.

If you want to continue the Network Slicing Option2 use case test, please run it manually.



Currently known issues

Currently the SMO package is based on the ONAP OOM Jakarta release. There are already some known issues regarding the Network Slicing related code.


1) OOF timeout issue

The details of the issue are described in JIRA ticket OPTFRA-1080. A fix for this issue exists, but it is not merged yet.

To fix the issue manually, please follow the steps below:

  • Edit onap-oof-has-configmap to extend the default timeout for nginx

Open the config map with the command "kubectl edit cm onap-oof-has-configmap -n onap"

Add the line "uwsgi_read_timeout 300;" under the server section

Restart the oof-has-api pod with the command "kubectl delete pod <onap-oof-has-api-pod-name> -n onap"
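The nginx edit itself can be sketched as below on a scratch copy of the server block; in practice you add the same line inside "kubectl edit cm onap-oof-has-configmap -n onap". The sample server block is minimal and illustrative.

```shell
# Insert the uwsgi timeout line after 'server {' in a scratch nginx config.
conf=$(mktemp)
printf 'server {\n    listen 8091;\n}\n' > "$conf"   # minimal sample server block
sed -i '/server {/a uwsgi_read_timeout 300;' "$conf"
cat "$conf"
```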

  • Add interfaces.py as a config map entry for the oof pod with an updated timeout value

Open the config map of oof with the command "kubectl edit cm onap-oof-configmap -n onap"

Add the whole content of the interfaces.txt file (see attachment) at the end of the config map section

Open the oof deployment with the command "kubectl edit deployment onap-oof -n onap"

Add the mount point (under the container's volumeMounts) and the config map entry (under the config map volume's items) as shown below:

- mountPath: /opt/osdf/osdf/utils/interfaces.py

  name: onap-oof-config

  subPath: interfaces.py


- key: interfaces.py

  path: interfaces.py



2) SO timeout issue

The details of the issue are described in JIRA ticket SO-3968. The fix is still in progress.

To fix the issue manually, please follow the steps below:

Open the so-bpmn-infra config map with the command "kubectl edit cm onap-so-bpmn-infra-app-configmap -n onap"

Find the oof timeout setting under the override.yaml section and update the value to PT15m

Restart the so-bpmn-infra pod with the command "kubectl delete pod <onap-so-bpmn-infra-pod-name> -n onap"
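The value bump can be sketched like this on a scratch override.yaml fragment; the key name "oof.timeout" and the original PT30S value are both hypothetical, so locate the actual timeout key inside the config map before editing.

```shell
# Update a (hypothetical) oof timeout key to PT15m in a scratch fragment.
ovr=$(mktemp)
echo 'oof.timeout: PT30S' > "$ovr"   # assumed original value
sed -i 's/^oof.timeout: .*/oof.timeout: PT15m/' "$ovr"
cat "$ovr"
# prints: oof.timeout: PT15m
```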


Demos:
