...

The instructions for creating an ONAP installation using the OOM Rancher/Kubernetes approach are on the ONAP wiki (be sure to select the Casablanca version of the instructions).  Once ONAP is installed, there are further instructions for deploying it at this wiki page.  To install the development image rather than the nexus3 image, we must update parameter values in the Helm chart for SDNC in the OOM repository, shown here.

oom
├── docs
├── kubernetes
...
│   ├── sdnc
│   │   ├── charts
│   │   │   ├── dmaap-listener
│   │   │   ├── sdnc-ansible-server
│   │   │   ├── sdnc-portal
│   │   │   └── ueb-listener
│   │   ├── resources
│   │   ├── sdnc-prom
│   │   ├── templates
│   │   └── values.yaml
...
└── TOSCA
Code Block
% ls -F git/oom/kubernetes/sdnc
charts/  Chart.yaml  Makefile  requirements.lock  requirements.yaml  resources/  sdnc-prom/  templates/  values.yaml

Override file for the SDNC values.yaml file

The simplest way to override the values is to copy the entire values.yaml file and modify the relevant parameters.  The new values are shown here in a separate override-sdnc.yaml file.  We identify the repository with the source image name and tag, create a cluster of three ODL members, and create a redundant MySQL deployment of two instances.

...

#################################################################
# Application configuration defaults.
#################################################################
# application images
repository: nexus3.onap.org:10001
repositoryOverride: registry.hub.docker.com
pullPolicy: Always
#image: onap/sdnc-image:1.4.1
image: ft3e0tab7p92qsoceonq/oof-pci-sdnr:1.4.2-SNAPSHOT

...

mysql:
  nameOverride: sdnc-db
  service:
    name: sdnc-dbhost
    internalPort: 3306
  nfsprovisionerPrefix: sdnc
  sdnctlPrefix: sdnc
  persistence:
    mountSubPath: sdnc/mysql
    enabled: true
  disableNfsProvisioner: true
  replicaCount: 2
  geoEnabled: false

...

# default number of instances
replicaCount: 3

...

Override file for the ONAP values.yaml file

By default, the OOM Rancher/Kubernetes script installs all of the ONAP components, not all of which are needed for the proof-of-concept.  We select which components to install by copying the ~/git/oom/kubernetes/onap/values.yaml file into a separate "override" file and changing "enabled: true" to "enabled: false" for the unneeded components.  Currently, these are the settings for each component.

aaf: false
aai: true
appc: false
clamp: false
cli: false
consul: false
contrib: false
dcaegen2: false
dmaap: true
esr: false
log: true
sniro-emulator: true
oof: true
msb: false
multicloud: false
nbi: false
policy: true
pomba: false
portal: true
robot: true
sdc: true
sdnc: true
so: true
uui: false
vfc: false
vid: false
vnfsdk: false
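In the override file, these settings take the same form as in onap/values.yaml: each component is a top-level key with an "enabled" flag.  A minimal fragment (component names from the table above; the file path is the one used in the install command later):

```yaml
# ~/oof-pci/override-onap.yaml (fragment)
# one top-level key per ONAP component, with its "enabled" flag
aaf:
  enabled: false
aai:
  enabled: true
sdnc:
  enabled: true
```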

Command to install ONAP with the development image

Following the guidelines at the OOM wiki page, I use this command to install ONAP with the desired configuration.

Code Block
cd ~/git/oom/kubernetes
sudo helm deploy demo ./onap --namespace onap -f ~/oof-pci/override-onap.yaml -f ~/oof-pci/override-sdnc.yaml

The parameter "demo" prefixes each ONAP component name with "demo-", so we have "demo-sdnc," for example.  The "./onap" parameter instructs Helm to use that directory to guide the deployment.  The "--namespace onap" parameter causes ONAP to be deployed into the Kubernetes namespace "onap."  The "-f ~/oof-pci/override-onap.yaml -f ~/oof-pci/override-sdnc.yaml" parameters instruct Helm to override the parameters in the ~/git/oom/kubernetes/onap/values.yaml file with the values in the files following each "-f" option.  There can be a series of override files; the last file takes precedence.

Commands to install the development image

If there is already an instance of SDNC installed, it must be deleted before a new version is installed.  I use these commands.

Code Block
helm del demo-sdnc --purge
kubectl get persistentvolumes      -n onap | grep demo-sdnc | sed 's/\(^[^ ]\+\).*/kubectl delete persistentvolumes      -n onap \1/'
kubectl get persistentvolumeclaims -n onap | grep demo-sdnc | sed 's/\(^[^ ]\+\).*/kubectl delete persistentvolumeclaims -n onap \1/'
kubectl get secrets                -n onap | grep demo-sdnc | sed 's/\(^[^ ]\+\).*/kubectl delete secrets                -n onap \1/'
kubectl get clusterrolebindings    -n onap | grep demo-sdnc | sed 's/\(^[^ ]\+\).*/kubectl delete clusterrolebindings    -n onap \1/'

The first command deletes SDNC but, despite the "--purge" option, some residual resources remain.  The subsequent commands discover those resources and generate delete commands that can be copied and pasted into your terminal session for execution.  If you know how to pipe the generated strings into bash so they execute directly, kindly update this code.  The "helm del" command takes some time, so please be patient.  Once SDNC has been deleted, you can install the new version using the commands in the previous section.
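One answer to the request above: the generated commands can be piped straight into bash so they execute immediately.  This is a sketch using the same resource types, namespace, and sed pattern as the commands above (it assumes kubectl is configured for the cluster):

```shell
# same discovery commands as above, but piping the generated
# 'kubectl delete' lines straight into bash so they run immediately
kubectl get persistentvolumes      -n onap | grep demo-sdnc | sed 's/\(^[^ ]\+\).*/kubectl delete persistentvolumes      -n onap \1/' | bash
kubectl get persistentvolumeclaims -n onap | grep demo-sdnc | sed 's/\(^[^ ]\+\).*/kubectl delete persistentvolumeclaims -n onap \1/' | bash
kubectl get secrets                -n onap | grep demo-sdnc | sed 's/\(^[^ ]\+\).*/kubectl delete secrets                -n onap \1/' | bash
kubectl get clusterrolebindings    -n onap | grep demo-sdnc | sed 's/\(^[^ ]\+\).*/kubectl delete clusterrolebindings    -n onap \1/' | bash
```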

Accessing SDNC/SDNR

Now that SDNC/SDNR is deployed, how can you access it?  I use this sequence of commands.  First:

Code Block
% kubectl get pods -n onap -o wide | grep NODE && kubectl get pods -n onap -o wide | grep sdnc
NAME                                                   READY     STATUS                  RESTARTS   AGE       IP              NODE
demo-sdnc-controller-blueprints-694b7ff9d-gmrtd        1/1       Running                 0          20h       10.42.42.8      sb4-k8s-1
demo-sdnc-controller-blueprints-db-0                   1/1       Running                 0          20h       10.42.100.157   sb4-k8s-1
demo-sdnc-nengdb-0                                     1/1       Running                 0          20h       10.42.101.202   sb4-k8s-3
demo-sdnc-network-name-gen-7fc56878b6-sz8ps            1/1       Running                 0          20h       10.42.55.243    sb4-k8s-3
demo-sdnc-sdnc-0                                       2/2       Running                 0          20h       10.42.105.225   sb4-k8s-4
demo-sdnc-sdnc-1                                       2/2       Running                 0          20h       10.42.11.48     sb4-k8s-1
demo-sdnc-sdnc-2                                       2/2       Running                 0          20h       10.42.9.208     sb4-k8s-2
demo-sdnc-sdnc-ansible-server-7ddf4c54dd-vq877         1/1       Running                 0          20h       10.42.137.38    sb4-k8s-1
demo-sdnc-sdnc-db-0                                    2/2       Running                 0          20h       10.42.119.112   sb4-k8s-3
demo-sdnc-sdnc-db-1                                    2/2       Running                 0          20h       10.42.26.168    sb4-k8s-4
demo-sdnc-sdnc-dgbuilder-647d9bddb8-b2gxp              1/1       Running                 0          20h       10.42.93.148    sb4-k8s-4
demo-sdnc-sdnc-dmaap-listener-f9c9fd74c-w42rq          0/1       Init:0/1                0          20h       10.42.38.155    sb4-k8s-3
demo-sdnc-sdnc-portal-6fcd6b8445-bhf48                 1/1       Running                 0          20h       10.42.249.112   sb4-k8s-4
demo-sdnc-sdnc-ueb-listener-849d6498b5-mf8pw           0/1       Init:0/1                0          20h       10.42.0.101     sb4-k8s-3
demo-so-so-sdnc-adapter-5b7787596d-bm9xn               1/1       Running                 0          2d        10.42.170.141   sb4-k8s-1
% ping sb4-k8s-4
PING sb4-k8s-4 (10.31.1.79) 56(84) bytes of data.
64 bytes from sb4-k8s-4 (10.31.1.79): icmp_seq=1 ttl=64 time=0.505 ms

We see that there are three instances of SDNC and two instances of SDNC-DB running, and that they are deployed on different nodes, as expected.  All of the pods have private IP addresses that are not accessible from outside the ONAP deployment, but demo-sdnc-sdnc-0 is running on node sb4-k8s-4, which has IP address 10.31.1.79.  We now enter this command.
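If you want to script the node lookup, the NODE column is the last field of the wide pod listing.  A sketch that parses a captured sample line (against a live cluster you would feed it the output of "kubectl get pods -n onap -o wide"):

```shell
# parse the NODE column (last field) from a 'kubectl get pods -o wide' line;
# a captured sample line stands in for live kubectl output here
sample='demo-sdnc-sdnc-0    2/2    Running    0    20h    10.42.105.225    sb4-k8s-4'
node=$(echo "$sample" | awk '{print $NF}')
echo "$node"    # prints the node name: sb4-k8s-4
```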

Code Block
% kubectl get svc -n onap | grep NAME && kubectl get svc -n onap | grep sdnc
NAME                          TYPE           CLUSTER-IP      EXTERNAL-IP                            PORT(S)                                                       AGE
sdnc                          NodePort       10.43.141.133   <none>                                 8282:30202/TCP,8202:30208/TCP,8280:30246/TCP,8443:30267/TCP   20h
sdnc-ansible-server           ClusterIP      10.43.41.91     <none>                                 8000/TCP                                                      20h
sdnc-cluster                  ClusterIP      None            <none>                                 2550/TCP                                                      20h
sdnc-dbhost                   ClusterIP      None            <none>                                 3306/TCP                                                      20h
sdnc-dbhost-read              ClusterIP      10.43.100.184   <none>                                 3306/TCP                                                      20h
sdnc-dgbuilder                NodePort       10.43.16.12     <none>                                 3000:30203/TCP                                                20h
sdnc-dmaap-listener           ClusterIP      None            <none>                                 <none>                                                        20h
sdnc-portal                   NodePort       10.43.40.149    <none>                                 8843:30201/TCP                                                20h
sdnc-sdnctldb01               ClusterIP      None            <none>                                 3306/TCP                                                      20h
sdnc-sdnctldb02               ClusterIP      None            <none>                                 3306/TCP                                                      20h
sdnc-ueb-listener             ClusterIP      None            <none>                                 <none>                                                        20h
so-sdnc-adapter               ClusterIP      10.43.141.124   <none>                                 8086/TCP                                                      2d

SDNC presents a service at a NodePort that is accessible from outside the ONAP installation.  "8282:30202/TCP" means that port 30202 is accessible externally and maps to internal port 8282 (I'm not sure why 8282 rather than 8181; a port mapping from 8282 to 8181 may be set in a Dockerfile).  Therefore, SDNC is listening at sb4-k8s-4:30202, or 10.31.1.79:30202.  By creating an SSH tunnel to sb4-k8s-4 (described here), one can open a browser to localhost:30202/apidoc/explorer/index.html and see this.

[Screenshot: the apidoc explorer page served by SDNC]
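For reference, the tunnel can be sketched like this.  The user and jump host are placeholders you must substitute; the node name and port come from the output above:

```shell
NODE=sb4-k8s-4   # node hosting an SDNC pod (NODE column in the pod listing)
PORT=30202       # external NodePort mapped to SDNC's internal port 8282
# create the tunnel (substitute your own login and jump host):
#   ssh -N -L ${PORT}:${NODE}:${PORT} <user>@<jump-host>
echo "http://localhost:${PORT}/apidoc/explorer/index.html"
```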

Conclusion

Please feel free to edit this page to make corrections or improvements.  Your assistance will be greatly appreciated.