

Known issues and workarounds:

Issue: "Connection to ASDC server failed" in SO and AAI logs after re-distribution of templates in SDC.
Applies to: Option 2
Solution/Workaround: Redeploy SO and AAI with the latest charts.

Issue: NST selection failure.
Applies to: Option 2
Solution/Workaround: The NST template in SDC should have the property "latency" as a separate property of type integer instead of type NSCapabilities.

Issue: Distribution into AAI fails for the SliceProfileXXX resources (missing Allotted Resource service model).
Applies to: Option 1
Solution/Workaround: Add Slice_AR (the allotted resource composing each SliceProfile) as a service-model into AAI:

PUT https://{{k8s}}:30233/aai/v21/service-design-and-creation/models/model/5d179b7a-8d8a-4317-9318-349b09fcde2c

{
  "model-invariant-id": "5d179b7a-8d8a-4317-9318-349b09fcde2c",
  "model-type": "Resource",
  "model-vers": {
    "model-ver": [
      {
        "model-version-id": "3c532edd-4c72-4558-b892-8d518ca03c56",
        "model-name": "Slice_AR",
        "model-version": "1.0"
      }
    ]
  }
}


This page explains the manual configuration required to set up the E2E network slicing use case (Option 1).

1. SDC

ONAP Portal: https://portal.api.simpledemo.onap.org:30225/ONAPPORTAL/login.htm (Username:cs0008, Password:demo123456!)

SDC UI: https://sdc.api.fe.simpledemo.onap.org:30207/sdc1/portal#!/dashboard

Refer to Template Design for Option 1 for the creation and distribution of the respective templates.

2. UUI Configuration

Configure the CST template UUID and invariant UUID in the slicing.properties file of the uui-server microservice.

In the uui-server microservice, modify the configuration file /home/UUI/config/slicing.properties.

Add or update the parameters slicing.serviceInvariantUuid and slicing.serviceUuid.

The values of these two parameters come from the CST template and can be found on the SDC page.
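
For reference, the resulting entries in slicing.properties look like this (placeholder values; use the UUIDs of your own CST template from SDC):

slicing.serviceInvariantUuid=<CST invariant UUID>
slicing.serviceUuid=<CST UUID>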

3. MSB Configuration

Register the so-orchestrationTasks and so-serviceInstances interfaces with MSB.

Interface registration can be done through the portal.

Steps (Portal):

Link: https://{{master server ip}}:30284/iui/microservices/default.html

 1. Select 'Service Discover' in the left pane.

 2. Click the 'Service Register' button.

 3. Input the basic info for so-serviceInstances (see the reference registration info after these steps).

 4. Click the 'Add Host' button.

    Input the IP address and port, then click the 'SAVE' button. (Use the command 'kubectl get svc -n onap so' to confirm the IP and port.)

 5. Repeat the registration for so-orchestrationTasks (see the reference registration info after these steps).
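
The original page carried the registration info in screenshots that are not reproduced here. For reference, a plausible set of values (the URLs and version are assumptions based on the SO northbound API; verify against your deployment):

Service Name: so-serviceInstances
Url: /onap/so/infra/serviceInstantiation/v7/serviceInstances
Protocol: REST
Version: v7

Service Name: so-orchestrationTasks
Url: /onap/so/infra/orchestrationTasks/v7
Protocol: REST
Version: v7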

6. You should add an aai-business service for MSB.
Steps:

  • Go to MSB: https://{{master server ip}}:30284/iui/microservices/default.html
  • Select "Service Discover" from the left panel
  • Click the "Service Register" button
    • Add the following info:
      Service Name: aai-business
      Url: /aai/v13/business
      Protocol: REST
      Enable SSL: True
      Version: v13
      Load balancer: round-robin
      Visual Range: InSystem
    • Add host:
      AAI service IP and port (8443)
  • Save all


7. You should add an aai-externalSystem service for MSB.
Steps:

  • Go to MSB: https://{{master server ip}}:30284/iui/microservices/default.html
  • Select "Service Discover" from the left panel
  • Click the "Service Register" button
    • Add the following info:
      Service Name: aai-externalSystem
      Url: /aai/v11/external-system
      Protocol: REST
      Enable SSL: True
      Version: v11
      Load balancer: round-robin
      Visual Range: InSystem
    • Add host:
      AAI service IP and port (8443)
  • Save all
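
Alternatively, the two AAI services above can be registered through the MSB REST API instead of the portal; a sketch for aai-business (the msb-discovery NodePort 30281 and the exact field names are assumptions, adjust to your deployment):

curl -X POST -H "Content-Type: application/json" \
  "http://{{master server ip}}:30281/api/microservices/v1/services" \
  -d '{
    "serviceName": "aai-business",
    "version": "v13",
    "url": "/aai/v13/business",
    "protocol": "REST",
    "enable_ssl": true,
    "visualRange": "1",
    "nodes": [{"ip": "<aai-service-ip>", "port": "8443"}]
  }'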

4. SO

Copy subnetCapability.json (sample below) to the SO API Handler pod to configure subnet capabilities at run time.

{
  "AN_NF": {
    "latency": 5,
    "maxNumberofUEs": 200,
    "maxThroughput": 90,
    "termDensity": 40
  },
  "AN": {
    "latency": 20,
    "maxNumberofUEs": 100,
    "maxThroughput": 150,
    "termDensity": 50
  },
  "CN": {
    "latency": 10,
    "maxThroughput": 50,
    "maxNumberofConns": 100
  },
  "TN_FH": {
    "latency": 10,
    "maxThroughput": 90
  },
  "TN_MH": {
    "latency": 5,
    "maxThroughput": 90
  },
  "TN_BH": {
    "latency": 10,
    "maxThroughput": 100
  }
}


You can copy the file to the pod using the following command:

kubectl cp subnetCapability.json -n onap <so-apih-pod-name>:/app
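
You can then verify the file inside the pod:

kubectl exec -n onap <so-apih-pod-name> -- cat /app/subnetCapability.json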


SO Database Update

Insert ORCHESTRATION_URI entries into service_recipe, with SERVICE_MODEL_UUID replaced by CST.ModelId.

INSERT INTO `catalogdb`.`service_recipe`(`ACTION`, `VERSION_STR`, `DESCRIPTION`, `ORCHESTRATION_URI`, `SERVICE_PARAM_XSD`, `RECIPE_TIMEOUT`, `SERVICE_TIMEOUT_INTERIM`, `SERVICE_MODEL_UUID`) VALUES ('createInstance', '1', 'Custom recipe to create communication service-instance if no custom BPMN flow is found', '/mso/async/services/CreateCommunicationService', NULL, 180, NULL, 'c9252b26-f9cd-4e6c-988c-4d6ff39c6dda');

INSERT INTO `catalogdb`.`service_recipe`(`ACTION`, `VERSION_STR`, `DESCRIPTION`, `ORCHESTRATION_URI`, `SERVICE_PARAM_XSD`, `RECIPE_TIMEOUT`, `SERVICE_TIMEOUT_INTERIM`, `SERVICE_MODEL_UUID`) VALUES ('deleteInstance', '1', 'Custom recipe to delete communication service if no custom BPMN flow is found', '/mso/async/services/DeleteCommunicationService', NULL, 180, NULL, 'c9252b26-f9cd-4e6c-988c-4d6ff39c6dda');

INSERT INTO `catalogdb`.`service_recipe`(`ACTION`, `VERSION_STR`, `DESCRIPTION`, `ORCHESTRATION_URI`, `SERVICE_PARAM_XSD`, `RECIPE_TIMEOUT`, `SERVICE_TIMEOUT_INTERIM`, `SERVICE_MODEL_UUID`) VALUES ('activateInstance', '1.0', 'activate communication service', '/mso/async/services/ActivateCommunicationService', NULL, 180, NULL, 'c9252b26-f9cd-4e6c-988c-4d6ff39c6dda');

Insert ORCHESTRATION_URI entries into service_recipe, with SERVICE_MODEL_UUID replaced by ServiceProfile.ModelId.

INSERT INTO `catalogdb`.`service_recipe`(`ACTION`, `VERSION_STR`, `DESCRIPTION`, `ORCHESTRATION_URI`, `SERVICE_PARAM_XSD`, `RECIPE_TIMEOUT`, `SERVICE_TIMEOUT_INTERIM`, `SERVICE_MODEL_UUID`) VALUES ('createInstance', '1', 'Custom recipe to create slice service-instance if no custom BPMN flow is found', '/mso/async/services/CreateSliceService', NULL, 180, NULL, 'bfca8b32-3404-4e5c-a441-dc42b6823e88');
  
INSERT INTO `catalogdb`.`service_recipe`(`ACTION`, `VERSION_STR`, `DESCRIPTION`, `ORCHESTRATION_URI`, `SERVICE_PARAM_XSD`, `RECIPE_TIMEOUT`, `SERVICE_TIMEOUT_INTERIM`, `SERVICE_MODEL_UUID`) VALUES ('deleteInstance', '1', 'Custom recipe to delete slice service-instance if no custom BPMN flow is found', '/mso/async/services/DeleteSliceService', NULL, 180, NULL, 'bfca8b32-3404-4e5c-a441-dc42b6823e88');
  
INSERT INTO `catalogdb`.`service_recipe`(`ACTION`, `VERSION_STR`, `DESCRIPTION`, `ORCHESTRATION_URI`, `SERVICE_PARAM_XSD`, `RECIPE_TIMEOUT`, `SERVICE_TIMEOUT_INTERIM`, `SERVICE_MODEL_UUID`) VALUES ('activateInstance', '1.0', 'Gr api recipe to activate service-instance', '/mso/async/services/ActivateSliceService', NULL, 180, NULL, 'bfca8b32-3404-4e5c-a441-dc42b6823e88');
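
To confirm the recipes, query the SO catalog DB (UUIDs as used in the statements above):

SELECT ACTION, ORCHESTRATION_URI, SERVICE_MODEL_UUID
FROM catalogdb.service_recipe
WHERE SERVICE_MODEL_UUID IN ('c9252b26-f9cd-4e6c-988c-4d6ff39c6dda', 'bfca8b32-3404-4e5c-a441-dc42b6823e88');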


5. OOF Configuration


HAS-API - Add data dictionary

In conductor/conductor/data/plugins/inventory_provider/candidates/slice_profiles_candidate.py, add the following:
    "max_bandwidth": copy_first,
    "jitter": sum,
    "sst": copy_first,
    "latency": sum,
    "resource_sharing_level": copy_first,
    "s_nssai": copy_first,
    "s_nssai_list": copy_first,
    "plmn_id_list": copy_first,
    "plmn_id_List": copy_first,
    "availability": copy_first,
    "throughput": min,
    "reliability": copy_first,
    "max_number_of_ues": copy_first,
    "exp_data_rate_ul": copy_first,
    "exp_data_rate_dl": copy_first,
    "ue_mobility_level": copy_first,
    "activity_factor": copy_first,
    "survival_time": copy_first,
    "max_number_of_conns": copy_first,
    "coverage_area_ta_list": copy_first,
    "max_number_of_pdu_session": copy_first,
    "max_throughput": copy_first,
    "perf_req": copy_first,
    "terminal_density": copy_first

Update the file and restart the container.
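
In a Kubernetes deployment, one way to restart the HAS data component after the edit is to delete its pod and let the deployment recreate it (the pod name pattern is an assumption; check it with the first command):

kubectl get pods -n onap | grep oof-has-data
kubectl delete pod -n onap <oof-has-data-pod-name>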


OSDF - Change slicing_config.yaml

  • Add the mappings:
     plmn_id_list: pLMNIdList
     plmn_id_List: plmnIdList
  • Change the mapping (remove the character 's' from maxNumberofPDUSessions):
     max_number_of_pdu_session: maxNumberofPDUSession
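
After the edit, the attribute map should carry these entries in both directions; a sketch, assuming the two-direction layout (camel to snake and snake to camel) that the note below refers to — exact key names per the file in the OSDF repo:

camel_to_snake:
  pLMNIdList: plmn_id_list
  maxNumberofPDUSession: max_number_of_pdu_session
snake_to_camel:
  plmn_id_list: pLMNIdList
  max_number_of_pdu_session: maxNumberofPDUSession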


6. Policy Creation Steps

Refer to Optimization Policy Creation Steps for optimization policy creation and deployment.

policies.zip

Copy the policy files

unzip policies.zip

kubectl cp policies -n onap <oof-pod-name>:/opt/osdf

kubectl exec -ti -n onap <oof-pod-name> -- bash

cd policies/nsi

python3 policy_utils.py create_policy_types policy_types

python3 policy_utils.py create_and_push_policies nst_policies

python3 policy_utils.py generate_nsi_policies NSTO1

python3 policy_utils.py create_and_push_policies gen_nsi_policies

cd ../nssi

python3 policy_utils.py generate_nsi_policies TESTRANTOPNSST

python3 policy_utils.py create_and_push_policies gen_nsi_policies

python3 policy_utils.py generate_nssi_policies RAN_NF_NSST minimize latency

python3 policy_utils.py create_and_push_policies gen_nssi_policies

python3 policy_utils.py generate_nssi_policies CN_NSST minimize latency

python3 policy_utils.py create_and_push_policies gen_nssi_policies

Refer to Policy Models and Sample policies - NSI selection for sample policies.

Updated slice/service profile mapping - https://gerrit.onap.org/r/gitweb?p=optf/osdf.git;a=blob;f=config/slicing_config.yaml;h=179f54a6df150a62afdd72938c2f33d9ae1bd202;hb=HEAD

NOTE:

  • The service name given for creating the policy must match the service name in the request.
  • The scope fields in the policies should match the value of resourceSharingLevel (non-shared/shared); modify the policy accordingly.
  • Check the case of the attributes in the OOF request against the attribute map (camel to snake and snake to camel) in config/slicing_config.yaml; if any mismatch is found, modify the attribute map accordingly.
  • Restart the OOF docker container once you have updated slicing_config.yaml, using the following steps:

    • Log in to the worker VM where the OOF container is running. You can find the worker node by running kubectl get pods -n onap -o wide | grep dev-oof
    • Find the container using docker ps | grep optf-osdf
    • Restart the container using docker restart <container id>


7. AAI Configuration

Create customer ID:

curl --user AAI:AAI -X PUT -H "X-FromAppId:AAI" -H "X-TransactionId:get_aai_subscr" -H "Accept:application/json" -H "Content-Type:application/json" -k -d '{
  "global-customer-id": "5GCustomer",
  "subscriber-name": "5GCustomer",
  "subscriber-type": "INFRA"
}' "https://<worker-vm-ip>:30233/aai/v21/business/customers/customer/5GCustomer"

Create service type:

curl --user AAI:AAI -X PUT -H "X-FromAppId:AAI" -H  "X-TransactionId:get_aai_subscr" -H "Accept:application/json" -H "Content-Type:application/json" -k https://<worker-vm-ip>:30233/aai/v21/business/customers/customer/5GCustomer/service-subscriptions/service-subscription/5G 
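
To verify both objects, read the customer back (the service subscription appears under the customer; the depth parameter is a standard AAI query option):

curl --user AAI:AAI -X GET -H "X-FromAppId:AAI" -H "X-TransactionId:get_aai_subscr" -H "Accept:application/json" -k "https://<worker-vm-ip>:30233/aai/v21/business/customers/customer/5GCustomer?depth=all"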

8. ConfigDB

Config DB is a Spring Boot application that works with MariaDB. DB schema details are available at Config DB.

Install the Config DB application in a separate VM. The MariaDB container should be up and running to access the Config DB APIs.

Refer to https://wiki.onap.org/display/DW/Config+DB+setup for the Config DB setup. The latest source is available in the Config DB Preload Info section of "Image versions, preparation steps and useful info".

The necessary RAN network function data is preloaded into Config DB while booting the MariaDB container.

Note: Refer to the latest templates from gerrit, committed in June 2021: https://gerrit.onap.org/r/gitweb?p=ccsdk/distribution.git;a=commit;h=8b86f34f6ea29728e31c4f6799009e8562ef3b6f

9. SDNC

Install SDNC using the OOM charts; the pods listed below should be running. As the ran-slice RPCs are not visible in the latest SDN-C image, use image version 2.1.0 for sdnc-image and the dmaap-listener. Manually load the RANSlice DGs as follows (see the example after this list):

  • Copy the DG XMLs from /distribution/platform-logic/ran-slice-api/src/main/xml (gerrit repo) to /opt/onap/sdnc/svclogic/graphs/ranSliceapi (sdnc container)
  • Install the DGs: (a) navigate to /opt/onap/sdnc/svclogic/bin (sdnc container); (b) run ./install.sh
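
For example, both steps can be done from outside the pod (the pod and container names are examples; see the pod listing below):

kubectl cp ran-slice-api/src/main/xml -n onap dev-sdnc-0:/opt/onap/sdnc/svclogic/graphs/ranSliceapi -c sdnc
kubectl exec -ti -n onap dev-sdnc-0 -c sdnc -- bash -c 'cd /opt/onap/sdnc/svclogic/bin && ./install.sh'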

SDNC Pods

kubectl get pods -n onap | grep sdnc
dev-sdnc-0                                                        2/2     Running                           0          46d
dev-sdnc-ansible-server-6b449f8d8-7mjld                           1/1     Running                           0          46d
dev-sdnc-dbinit-job-mwr8s                                         0/1     Completed                         0          46d
dev-sdnc-dgbuilder-86c9cb55bb-svcsh                               1/1     Running                           0          46d
dev-sdnc-dmaap-listener-6bd7fbc64f-dl4ch                          1/1     Running                           0          46d
dev-sdnc-sdnrdb-init-job-824vl                                    0/1     Completed                         0          46d
dev-sdnc-ueb-listener-769f74cb4b-wgcw7                            1/1     Running                           0          46d
dev-sdnc-web-5b75c68fd8-zfsn6                                     1/1     Running                           0          46d

Check the below in SDNC pod (dev-sdnc-0).

  1. The latest ran-slice-api-dg.properties (/distribution/odlsli/src/main/properties/ran-slice-api-dg.properties) should be available at /opt/onap/ccsdk/data/properties/
  2. All ranSlice*.json template files (/distribution/platform-logic/restapi-templates/src/main/json) should be present at /opt/onap/ccsdk/restapi/templates/
  3. DG XML files from /distribution/platform-logic/ran-slice-api/src/main/xml should be present at /opt/onap/sdnc/svclogic/graphs/ranSliceapi
  4. Go to /opt/onap/sdnc/svclogic/bin and run ./install.sh (this should re-install and activate all DGs)

Note:

If SDN-C deletion is unsuccessful due to leftover resources, use the commands below to delete it completely.

kubectl get secrets -n onap --no-headers=true | awk '/dev-sdnc/{print $1}' | xargs kubectl delete secrets -n onap
kubectl get configmap -n onap --no-headers=true | awk '/dev-sdnc/{print $1}' | xargs kubectl delete configmap -n onap
kubectl get svc -n onap --no-headers=true | awk '/^sdn/{print $1}' | xargs kubectl delete svc -n onap
kubectl get deployment -n onap --no-headers=true | awk '/dev-sdnc/{print $1}' | xargs kubectl delete deployment -n onap
kubectl get statefulsets -n onap --no-headers=true | awk '/dev-sdnc/{print $1}' | xargs kubectl delete statefulsets -n onap
kubectl get jobs -n onap --no-headers=true | awk '/dev-sdnc/{print $1}' | xargs kubectl delete jobs -n onap
kubectl get pvc -n onap --no-headers=true | awk '/dev-sdnc/{print $1}' | xargs kubectl delete pvc -n onap
kubectl get pv -n onap --no-headers=true | awk '/dev-sdnc/{print $1}' | xargs kubectl delete pv -n onap
kubectl get secrets -n onap --no-headers=true | awk '/dev-elastic/{print $1}' | xargs kubectl delete secrets -n onap
kubectl get configmap -n onap --no-headers=true | awk '/dev-elastic/{print $1}' | xargs kubectl delete configmap -n onap
kubectl get svc -n onap --no-headers=true | awk '/^elastic/{print $1}' | xargs kubectl delete svc -n onap
kubectl get deployment -n onap --no-headers=true | awk '/dev-elastic/{print $1}' | xargs kubectl delete deployment -n onap
kubectl get statefulsets -n onap --no-headers=true | awk '/dev-elastic/{print $1}' | xargs kubectl delete statefulsets -n onap
kubectl get jobs -n onap --no-headers=true | awk '/dev-elastic/{print $1}' | xargs kubectl delete jobs -n onap
kubectl get pvc -n onap --no-headers=true | awk '/dev-elastic/{print $1}' | xargs kubectl delete pvc -n onap
kubectl get pv -n onap --no-headers=true | awk '/dev-elastic/{print $1}' | xargs kubectl delete pv -n onap
kubectl get secrets -n onap --no-headers=true | awk '/dev-neng/{print $1}' | xargs kubectl delete secrets -n onap
kubectl get configmap -n onap --no-headers=true | awk '/dev-neng/{print $1}' | xargs kubectl delete configmap -n onap
kubectl get svc -n onap --no-headers=true | awk '/^neng/{print $1}' | xargs kubectl delete svc -n onap
kubectl get deployment -n onap --no-headers=true | awk '/dev-neng/{print $1}' | xargs kubectl delete deployment -n onap
kubectl get statefulsets -n onap --no-headers=true | awk '/dev-neng/{print $1}' | xargs kubectl delete statefulsets -n onap
kubectl get jobs -n onap --no-headers=true | awk '/dev-neng/{print $1}' | xargs kubectl delete jobs -n onap
kubectl get pvc -n onap --no-headers=true | awk '/dev-neng/{print $1}' | xargs kubectl delete pvc -n onap
kubectl get pv -n onap --no-headers=true | awk '/dev-neng/{print $1}' | xargs kubectl delete pv -n onap
kubectl delete secret -n onap dev-aai-keystore
kubectl delete secret -n onap dev-pol-basic-auth-secret
kubectl get configmap -n onap --no-headers=true | awk '/dev-sdnr/{print $1}' | xargs kubectl delete configmap -n onap
kubectl get pvc -n onap --no-headers=true | awk '/dev-sdn/{print $1}' | xargs kubectl patch pvc -n onap -p '{"metadata":{"finalizers":null}}'
kubectl get deployment -n onap --no-headers=true | awk '/dev-sdn/{print $1}' | xargs kubectl delete deployment -n onap
kubectl delete deployment -n onap dev-network-name-gen
kubectl get statefulsets -n onap --no-headers=true | awk '/dev-sdnr/{print $1}' | xargs kubectl delete statefulsets -n onap

To delete the PVs:

kubectl get pv -n onap --no-headers=true | awk '/dev-sdn/{print $1}' | xargs kubectl patch pv -n onap -p '{"metadata":{"finalizers":null}}'

kubectl delete pv -n onap dev-sdnrdb-master-pv-0 --grace-period=0 --force
kubectl delete pv -n onap dev-sdnrdb-master-pv-1 --grace-period=0 --force
kubectl delete pv -n onap dev-sdnrdb-master-pv-2 --grace-period=0 --force

DMaaP Messages

Refer to SDN-R_impacts for the DMaaP messages that can be used as SDN-R input for RAN slice instantiation, modification, activation, deactivation and termination.


ACTN Simulator:

The simulator section is bypassed, and a workaround, implemented in the SDNC DG, is used to continue the flow.


ranSliceApi not deployed into SDNC

This issue occurs when the env variable SDNR_NORTHBOUND=true is not set for the sdnc-image (it defaults to false). With this flag set, all sdnr-northbound features are installed during startup.

How to fix it?

    • Set the env variable in the SDNC helm chart:
    • Update the sdnc statefulset k8s resource by adding the new env param SDNR_NORTHBOUND=true:

      - name: SDNR_NETCONF_CALLHOME_ENABLED
        value: "true"
      - name: SDNR_NORTHBOUND
        value: "true"
      image: nexus3.onap.org:10001/onap/sdnc-image:2.2.1

  • Rebuild the SDNC package and redeploy it.
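
Alternatively, the env variable can be set on the running statefulset without rebuilding the chart (the statefulset name below assumes a standard OOM install named dev; the pods restart automatically):

kubectl set env statefulset/dev-sdnc -n onap SDNR_NORTHBOUND=true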



Skipping TN Allocation 

In the file oom/kubernetes/so/charts/so-bpmn-infra/resources/config/overrides/override.yaml, add a new config flag:

mso:
  workflow:
    TnNssmf:
      enableSDNCNetworkConfig: 'false'
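
To apply the change, upgrade the SO release so the override is re-rendered, then restart the BPMN pod; a sketch, assuming an OOM release named dev-so and the local chart repository:

helm upgrade dev-so local/so -n onap --reuse-values
kubectl rollout restart deployment dev-so-bpmn-infra -n onap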

