
...

Run the following command on the master node to clone the OOM project from ONAP gerrit into any directory you prefer; that directory will be referred to as "{$OOM}" on this page.

...

Info

The source of the new startODL.sh script, gerrit change 25475, has been merged into the sdnc/oam project on December 15th, 2017. Skip this section if your cloned project already includes this change.


Do the following to get the new startODL.sh script, which provides the configuration of ODL clustering for SDN-C

...

cluster.

# | Purpose | Command Examples
1 | Get the content of the new shared startODL.sh script

Go to gerrit change 25475

Click installation/sdnc/src/main/scripts/startODL.sh under the Files section to view the text of the script.

Click the Download button to download the startODL_new.sh.zip file,

extract the sh file inside the zip file,

and rename it to "startODL.sh" (its content will be used in step 2).

2 | Create the new startODL.sh on the Kubernetes node VM

mkdir -p /dockerdata-nfs/cluster/script

vi /dockerdata-nfs/cluster/script/startODL.sh

Paste the content copied in step 1 into this file.

3 | Give execution permission to the new startODL.sh script

chmod 777 /dockerdata-nfs/cluster/script/startODL.sh

sudo chown $(id -u):$(id -g) /dockerdata-nfs/cluster/script/startODL.sh
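Steps 2 and 3 above can be sketched as a small shell function; the paths are parameters, so the real target (/dockerdata-nfs/cluster/script) is only a default, and the source path in the example is an assumption about where you extracted the zip:

```shell
# A minimal sketch of steps 2-3 as a function.
install_startodl() {
  src="$1"                                      # the extracted startODL.sh from step 1
  dest="${2:-/dockerdata-nfs/cluster/script}"   # script directory from the table above
  mkdir -p "$dest"                              # step 2: create the script directory
  cp "$src" "$dest/startODL.sh"                 # step 2: place the renamed script
  chmod 777 "$dest/startODL.sh"                 # step 3: give execution permission
}

# Example (path is an assumption):
# install_startodl ~/Downloads/startODL.sh
```

The chown from step 3 is left to run manually, since it needs sudo.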


Get SDN-C Cluster Templates From Gerrit Topic SDNC-163

Only do this before the SDN-C cluster code is merged into gerrit OOM project.

Info

The source of the templates, gerrit change 25467, has been merged into sdnc/oam project on December 20th, 2017.

Skip step 1 and 2 if your cloned OOM project includes this change.

Skip step 3 if you skipped previous section (adding startODL.sh script).

Skip step 4 if you don't want to add/deploy extra features/bundles/packages.

Step 5 is important. It determines the number of sdnc and db pods.

# | Purpose | Command and Examples
1 | Get the git fetch command for the shared templates code

Go to gerrit change 25467

Click the Download downward arrow,

select anonymous http from the drop list in the bottom right corner,

and click the clipboard icon in the same line as Checkout to copy the git commands (which include the git fetch and checkout commands).

Expand
titleAn example of the git commands with patch set 23
git fetch https://gerrit.onap.org/r/oom refs/changes/67/25467/23 && git checkout FETCH_HEAD
2 | Fetch the shared template into the oom directory on the Kubernetes node VM

cd {$OOM}

Execute the git commands from step 1.

3 | Link the new startODL.sh
Info

Skip this step if you skipped the "get new startODL.sh script" section.

Be careful when editing YAML files: they are sensitive to the number of spaces, so take care with copy/paste from a browser.


vi kubernetes/sdnc/templates/sdnc-statefulset.yaml

Make the following changes:

Purpose | Changes

mount point for the new startODL.sh script

Field | Added Value

.spec.template.spec.containers.volumeMounts

(of container sdnc-controller-container)

- mountPath: /opt/onap/sdnc/bin/startODL.sh
name: sdnc-startodl

.spec.template.spec.volumes

- name: sdnc-startodl
hostPath:
path: /dockerdata-nfs/cluster/script/startODL.sh
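Taken together, the two additions above form a fragment like the following in sdnc-statefulset.yaml (a sketch only; the indentation and surrounding fields must match the existing file):

```yaml
# Fragment of sdnc-statefulset.yaml - only the sdnc-startodl entries are new.
spec:
  template:
    spec:
      containers:
        - name: sdnc-controller-container
          volumeMounts:
            - mountPath: /opt/onap/sdnc/bin/startODL.sh
              name: sdnc-startodl
      volumes:
        - name: sdnc-startodl
          hostPath:
            path: /dockerdata-nfs/cluster/script/startODL.sh
```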


4 | Link the ODL deploy directory
Info

If you are not going to use the test bundle to test SDN-C cluster and load balancing, you can skip this step.

ODL automatically installs bundles/packages that are put under its deploy directory. This mount point lets you drop a bundle/package into the Kubernetes node at the /dockerdata-nfs/cluster/deploy directory, and it will automatically be installed in the sdnc pods (under the /opt/opendaylight/current/deploy directory).


vi kubernetes/sdnc/templates/sdnc-statefulset.yaml

Make the following changes:

Purpose | Changes

mount point for the ODL deploy directory

Field | Added Value

.spec.template.spec.containers.volumeMounts

(of container sdnc-controller-container)

- mountPath: /opt/opendaylight/current/deploy
name: sdnc-deploy

.spec.template.spec.volumes

- name: sdnc-deploy
hostPath:
path: /dockerdata-nfs/cluster/deploy
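As with the startODL.sh mount, the deploy-directory additions combine into a fragment like this (a sketch only; indentation and surrounding fields must match the existing file):

```yaml
# Fragment of sdnc-statefulset.yaml - only the sdnc-deploy entries are new.
spec:
  template:
    spec:
      containers:
        - name: sdnc-controller-container
          volumeMounts:
            - mountPath: /opt/opendaylight/current/deploy
              name: sdnc-deploy
      volumes:
        - name: sdnc-deploy
          hostPath:
            path: /dockerdata-nfs/cluster/deploy
```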


5 | Enable cluster configuration

vi kubernetes/sdnc/values.yaml

Change the following fields to the new values:

field | new value | old value
enableODLCluster | true | false
numberOfODLReplicas | 3 | 1
numberOfDbReplicas | 2 | 1
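After the edit, the affected lines of kubernetes/sdnc/values.yaml should read:

```yaml
enableODLCluster: true
numberOfODLReplicas: 3
numberOfDbReplicas: 2
```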

...

Make the nfs-provisioner Pod Run on the Node Where the NFS Server Runs

Info

Skip the following section if you have skipped "Share the /dockerdata-nfs Folder between Kubernetes Nodes".

...

, if any of the following conditions matches.

  • you have skipped 2

...


Verify (from the master node) by running the following:

# | Purpose | Command and Example
1 | Find the node name
Expand
titlefind the nfs server node

Run the command "ps -ef | grep nfs"; you should see that:

  • the node with the nfs server runs nfsd processes:

ubuntu@sdnc-k8s:~$ ps -ef|grep nfs
root 3473 2 0 Dec07 ? 00:00:00 [nfsiod]
root 11072 2 0 Dec06 ? 00:00:00 [nfsd4_callbacks]
root 11074 2 0 Dec06 ? 00:00:00 [nfsd]
root 11075 2 0 Dec06 ? 00:00:00 [nfsd]
root 11076 2 0 Dec06 ? 00:00:00 [nfsd]
root 11077 2 0 Dec06 ? 00:00:00 [nfsd]
root 11078 2 0 Dec06 ? 00:00:00 [nfsd]
root 11079 2 0 Dec06 ? 00:00:03 [nfsd]
root 11080 2 0 Dec06 ? 00:00:13 [nfsd]
root 11081 2 0 Dec06 ? 00:00:42 [nfsd]
ubuntu@sdnc-k8s:~$

  • the node with the nfs client runs the nfs svc process:

ubuntu@sdnc-k8s-2:~$ ps -ef|grep nfs
ubuntu 5911 5890 0 20:10 pts/0 00:00:00 grep --color=auto nfs
root 18739 2 0 Dec06 ? 00:00:00 [nfsiod]
root 18749 2 0 Dec06 ? 00:00:00 [nfsv4.0-svc]
ubuntu@sdnc-k8s-2:~$

kubectl get node

Expand
titleExample of response

ubuntu@sdnc-k8s:~$ kubectl get node
NAME STATUS ROLES AGE VERSION
sdnc-k8s Ready master 6d v1.8.4
sdnc-k8s-2 Ready <none> 6d v1.8.4
ubuntu@sdnc-k8s:~$

2 | Set a label on the node

kubectl label nodes <NODE_NAME_FROM_LAST_STEP> disktype=ssd

Expand
titleAn example

ubuntu@sdnc-k8s:~$ kubectl label nodes sdnc-k8s disktype=ssd

node "sdnc-k8s" labeled

ubuntu@sdnc-k8s:~$

3 | Check that the label has been set on the node

kubectl get node --show-labels

Expand
titleAn example

ubuntu@sdnc-k8s:~$ kubectl get node --show-labels
NAME STATUS ROLES AGE VERSION LABELS
sdnc-k8s Ready master 6d v1.8.4 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd,kubernetes.io/hostname=sdnc-k8s,node-role.kubernetes.io/master=
sdnc-k8s-2 Ready <none> 6d v1.8.4 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=sdnc-k8s-2
ubuntu@sdnc-k8s:~$

4 | Update the nfs-provisioner pod template to force it to run on the nfs server node

In the nfs-provisioner-deployment.yaml file, add "spec.template.spec.nodeSelector" for the pod "nfs-provisioner".

Expand
titleAn example of the nfs-provisioner pod with nodeSelector
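The addition can be sketched as follows, assuming the disktype=ssd label applied in step 2 (the surrounding fields are illustrative; only nodeSelector is new):

```yaml
# Fragment of nfs-provisioner-deployment.yaml - only nodeSelector is added.
spec:
  template:
    spec:
      nodeSelector:
        disktype: ssd   # schedules the pod onto the labeled nfs server node
      containers:
        - name: nfs-provisioner
```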


Create the ONAP Config

...

Setup onap-parameters.yaml file

The following commands must be run on the master node before creating the ONAP configuration.

...

cd {$OOM}/kubernetes/config

cp onap-parameters-sample.yaml onap-parameters.yaml


Run createConfig

Info
titleTo simplify steps in this section

You can skip the steps in this section by following the instructions in autoCreateOnapConfig of the Scripts section to

  • create {$OOM}/kubernetes/oneclick/tools/autoCreateOnapConfig.bash file
  • run it and wait until the script completes
# | Purpose | Command and Examples
0

Set the OOM Kubernetes config environment

cd {$OOM}/kubernetes/oneclick

source setenv.bash

1

Run the createConfig script to create the ONAP config

cd {$OOM}/kubernetes/config
./createConfig.sh -n onap

Expand
titleExample of output

**** Creating configuration for ONAP instance: onap

namespace "onap" created

NAME:   onap-config

LAST DEPLOYED: Wed Nov  8 20:47:35 2017

NAMESPACE: onap

STATUS: DEPLOYED

 

RESOURCES:

==> v1/ConfigMap

NAME                   DATA  AGE

global-onap-configmap  15    0s

 

==> v1/Pod

NAME    READY  STATUS             RESTARTS  AGE

config  0/1    ContainerCreating  0         0s

 

 

**** Done ****

Wait for the config-init container to finish

Use the following command to monitor the onap config init until it reaches Completed STATUS:

kubectl get pod --all-namespaces -a

Expand
titleExample of final output

The final output shows the onap config pod in Completed STATUS.
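Rather than re-running the command by hand, the wait can be scripted; the following is a generic polling sketch (the function name and timeout are illustrative, not part of OOM):

```shell
# Poll a command until its output contains a pattern, or give up.
# Usage: wait_until_contains CMD PATTERN TIMEOUT_SECONDS
wait_until_contains() {
  cmd="$1"; pattern="$2"; timeout="$3"; elapsed=0
  while [ "$elapsed" -lt "$timeout" ]; do
    # run the command and look for the pattern in its output
    if sh -c "$cmd" 2>/dev/null | grep -q "$pattern"; then
      return 0
    fi
    sleep 1
    elapsed=$((elapsed + 1))
  done
  return 1
}

# For the config pod you would call, for example:
# wait_until_contains "kubectl get pod --all-namespaces -a" "Completed" 600
```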

Additional checks for config-init
helm

helm ls --all

Expand
titleExample of output

NAME REVISION UPDATED STATUS CHART NAMESPACE
onap-config 1 Tue Nov 21 17:07:13 2017 DEPLOYED config-1.1.0 onap

helm status onap-config

Expand
titleExample of output

LAST DEPLOYED: Tue Nov 21 17:07:13 2017
NAMESPACE: onap
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME DATA AGE
global-onap-configmap 15 2d

==> v1/Pod
NAME READY STATUS RESTARTS AGE
config 0/1 Completed 0 2d

 kubernetes namespaces

kubectl get namespaces

Expand
titleExample of output

NAME STATUS AGE
default Active 15d
kube-public Active 15d
kube-system Active 15d
onap Active 2d

...

Deploy the SDN-C Application

...

Info
titleTo simplify steps in this section

You can skip the steps in this section by following the instructions in autoDeploySdnc of the Scripts section to

  • create {$OOM}/kubernetes/oneclick/tools/autoDeploySdnc.bash file
  • run it and wait until the script completes

Execute the following on the master node.

Goal:

Verify that the SDN-C ODL cluster is running properly.

Prerequisites
  1. Run this test on one of your Kubernetes nodes
  2. Make sure python-pycurl is installed
    • If not, on Ubuntu use "apt-get install python-pycurl" to install it
Use the ODL integration tool to monitor the ODL cluster
# | Purpose | Command and Examples
0

Set the OOM Kubernetes config environment

(If you have already set the OOM Kubernetes config environment in the same terminal, you can skip this step.)

cd {$OOM}/kubernetes/oneclick

source setenv.bash
1

Run the createAll script to deploy the SDN-C application

cd {$OOM}/kubernetes/oneclick

./createAll.bash -n onap -a sdnc

Expand
titleExample of output

********** Creating instance 1 of ONAP with port range 30200 and 30399

********** Creating ONAP:


********** Creating deployments for sdnc **********

Creating namespace **********
namespace "onap-sdnc" created

Creating service account **********
clusterrolebinding "onap-sdnc-admin-binding" created

Creating registry secret **********
secret "onap-docker-registry-key" created

Creating deployments and services **********
NAME: onap-sdnc
LAST DEPLOYED: Thu Nov 23 20:13:32 2017
NAMESPACE: onap
STATUS: DEPLOYED

RESOURCES:
==> v1/PersistentVolume
NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM REASON AGE
onap-sdnc-db 2Gi RWX Retain Bound onap-sdnc/sdnc-db 1s

==> v1/PersistentVolumeClaim
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
sdnc-db Bound onap-sdnc-db 2Gi RWX 1s

==> v1/Service
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dbhost None <none> 3306/TCP 1s
sdnctldb01 None <none> 3306/TCP 1s
sdnctldb02 None <none> 3306/TCP 1s
sdnc-dgbuilder 10.43.97.219 <nodes> 3000:30203/TCP 1s
sdnhost 10.43.99.163 <nodes> 8282:30202/TCP,8201:30208/TCP 1s
sdnc-portal 10.43.72.72 <nodes> 8843:30201/TCP 1s

==> extensions/v1beta1/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
sdnc-dgbuilder 1 1 1 0 1s

==> apps/v1beta1/StatefulSet
NAME DESIRED CURRENT AGE
sdnc-dbhost 2 1 1s
sdnc 3 3 1s
sdnc-portal 2 2 1s

 


**** Done ****


Ensure that the SDN-C application has started

Use the kubectl get pods command to monitor the SDN-C startup; you should observe:

  • sdnc-dbhost-0 pod starts and gets into Running STATUS first,
    • while
      • sdnc-dbhost-1 pod does not exist and
      • sdnc, sdnc-dgbuilder and sdnc-portal pods stay in Init:0/1 STATUS
  • once sdnc-dbhost-0 pod is fully started with READY "1/1",
    • sdnc-dbhost-1 starts from ContainerCreating STATUS and runs up to Running STATUS
  • once sdnc-dbhost-1 pod is in Running STATUS,
    • the sdnc pods start from PodInitializing STATUS and end up in Running STATUS in parallel
    • while
      • sdnc-dgbuilder and sdnc-portal pods stay in Init:0/1 STATUS
  • once the sdnc pods are in Running STATUS,
    • sdnc-dgbuilder and sdnc-portal start from PodInitializing STATUS and end up in Running STATUS in parallel
Expand
titleExample of start up status changes through "kubectl get pods --all-namespaces -a -o wide"


2

Validate that all SDN-C pods and services are created properly

helm ls --all

Expand
titleExample of SDNC release

ubuntu@sdnc-k8s:~$ helm ls --all
NAME REVISION UPDATED STATUS CHART NAMESPACE
onap-config 1 Tue Nov 21 17:07:13 2017 DEPLOYED config-1.1.0 onap
onap-sdnc 1 Thu Nov 23 20:13:32 2017 DEPLOYED sdnc-0.1.0 onap
ubuntu@sdnc-k8s:~$

kubectl get namespaces

Expand
titleExample of SDNC namespace

ubuntu@sdnc-k8s:~$ kubectl get namespaces
NAME STATUS AGE
default Active 15d
kube-public Active 15d
kube-system Active 15d
onap Active 2d
onap-sdnc Active 12m
ubuntu@sdnc-k8s:~$

kubectl get deployment --all-namespaces

Expand
titleExample of SDNC deployment

ubuntu@sdnc-k8s-2:~$ kubectl get deployment --all-namespaces
NAMESPACE NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
kube-system heapster 1 1 1 1 15d
kube-system kube-dns 1 1 1 1 15d
kube-system kubernetes-dashboard 1 1 1 1 15d
kube-system monitoring-grafana 1 1 1 1 15d
kube-system monitoring-influxdb 1 1 1 1 15d
kube-system tiller-deploy 1 1 1 1 15d
onap-sdnc sdnc-dgbuilder 1 1 1 0 26m
ubuntu@sdnc-k8s-2:~$

kubectl get clusterrolebinding --all-namespaces

Expand
titleExample of SDNC cluster role binding

ubuntu@sdnc-k8s:~$ kubectl get clusterrolebinding --all-namespaces
NAMESPACE NAME AGE
addons-binding 15d
onap-sdnc-admin-binding 13m
ubuntu@sdnc-k8s:~$

kubectl get serviceaccounts --all-namespaces

Expand
titleExample of SDNC service account

ubuntu@sdnc-k8s:~$ kubectl get serviceaccounts --all-namespaces
NAMESPACE NAME SECRETS AGE
default default 1 15d
kube-public default 1 15d
kube-system default 1 15d
kube-system io-rancher-system 1 15d
onap default 1 2d
onap-sdnc default 1 14m
ubuntu@sdnc-k8s:~$

kubectl get service --all-namespaces

Expand
titleExample of all SDNC services
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 15d
kube-system heapster ClusterIP 10.43.210.11 <none> 80/TCP 15d
kube-system kube-dns ClusterIP 10.43.0.10 <none> 53/UDP,53/TCP 15d
kube-system kubernetes-dashboard ClusterIP 10.43.196.205 <none> 9090/TCP 15d
kube-system monitoring-grafana ClusterIP 10.43.90.8 <none> 80/TCP 15d
kube-system monitoring-influxdb ClusterIP 10.43.52.1 <none> 8086/TCP 15d
kube-system tiller-deploy ClusterIP 10.43.106.73 <none> 44134/TCP 15d
onap-sdnc dbhost ClusterIP None <none> 3306/TCP 17m
onap-sdnc sdnc-dgbuilder NodePort 10.43.97.219 <none> 3000:30203/TCP 17m
onap-sdnc sdnc-portal NodePort 10.43.72.72 <none> 8843:30201/TCP 17m
onap-sdnc sdnctldb01 ClusterIP None <none> 3306/TCP 17m
onap-sdnc sdnctldb02 ClusterIP None <none> 3306/TCP 17m
onap-sdnc sdnhost NodePort 10.43.99.163 <none> 8282:30202/TCP,8201:30208/TCP 17m
ubuntu@sdnc-k8s:~/oom/kubernetes/oneclick$


docker ps |grep sdnc

Expand
titleExample of SDNC docker container

On Server 1:

$ docker ps |grep sdnc |wc -l
9

$ docker ps |grep sdnc
ebcb2f7f1a4a docker.elastic.co/beats/filebeat@sha256:fe7602b641ed8ee288f067f7b31ebde14644c4722d9f7960f176d621097a5942 "filebeat -e" About an hour ago Up About an hour k8s_filebeat-onap_sdnc-1_onap-sdnc_263071db-e69e-11e7-a01e-026c942e0e8c_0
55a82019ce10 nexus3.onap.org:10001/onap/sdnc-image@sha256:f5f85d498c397677fa2df850d9a2d8149520c5be3dce6c646c8fe4f25f9bab10 "bash -c 'sed -i 's/d" About an hour ago Up About an hour k8s_sdnc-controller-container_sdnc-1_onap-sdnc_263071db-e69e-11e7-a01e-026c942e0e8c_0
bbdfdfc2b1f0 gcr.io/google-samples/xtrabackup@sha256:29354f70c9d9207e757a1bae6a4cbf2f57a56b18fe5c2b0acc1198a053b24b38 "bash -c 'set -ex\ncd " About an hour ago Up About an hour k8s_xtrabackup_sdnc-dbhost-1_onap-sdnc_6e9a6c9a-e69e-11e7-a01e-026c942e0e8c_0
26854595164d mysql@sha256:1f95a2ba07ea2ee2800ec8ce3b5370ed4754b0a71d9d11c0c35c934e9708dcf1 "docker-entrypoint.sh" About an hour ago Up About an hour k8s_sdnc-db-container_sdnc-dbhost-1_onap-sdnc_6e9a6c9a-e69e-11e7-a01e-026c942e0e8c_0
b577493b5725 gcr.io/google_containers/pause-amd64:3.0 "/pause" About an hour ago Up About an hour k8s_POD_sdnc-dbhost-1_onap-sdnc_6e9a6c9a-e69e-11e7-a01e-026c942e0e8c_0
14dcc0985259 gcr.io/google_containers/pause-amd64:3.0 "/pause" About an hour ago Up About an hour k8s_POD_sdnc-1_onap-sdnc_263071db-e69e-11e7-a01e-026c942e0e8c_1
b52be823997b quay.io/kubernetes_incubator/nfs-provisioner@sha256:b5328a3825032d7e1719015260260347bda99c5d830bcd5d9da1175e7d1da989 "/nfs-provisioner -pr" About an hour ago Up About an hour k8s_nfs-provisioner_nfs-provisioner-860992464-dz5bt_onap-sdnc_261c3d30-e69e-11e7-a01e-026c942e0e8c_0
dc6dfd3fde3b gcr.io/google_containers/pause-amd64:3.0 "/pause" About an hour ago Up About an hour k8s_POD_nfs-provisioner-860992464-dz5bt_onap-sdnc_261c3d30-e69e-11e7-a01e-026c942e0e8c_0
006aaa34c5af mysql@sha256:1f95a2ba07ea2ee2800ec8ce3b5370ed4754b0a71d9d11c0c35c934e9708dcf1 "docker-entrypoint.sh" 32 hours ago Up 32 hours k8s_sdnc-db-container_sdnc-dbhost-1_onap-sdnc_d1cc3b28-e59e-11e7-a01e-026c942e0e8c_0


kubectl get pods --all-namespaces -a

Expand
titleExample of all SDNC pods

ubuntu@sdnc-k8s:~/oom/kubernetes/oneclick$ kubectl get pods --all-namespaces -a
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system heapster-4285517626-32km8 1/1 Running 0 15d
kube-system kube-dns-638003847-vqz8t 3/3 Running 0 15d
kube-system kubernetes-dashboard-716739405-tnxj6 1/1 Running 0 15d
kube-system monitoring-grafana-2360823841-qfhzm 1/1 Running 0 15d
kube-system monitoring-influxdb-2323019309-41q0l 1/1 Running 0 15d
kube-system tiller-deploy-737598192-5663c 1/1 Running 0 15d
onap config 0/1 Completed 0 2d
onap-sdnc sdnc-0 2/2 Running 0 17m
onap-sdnc sdnc-1 0/2 CrashLoopBackOff 16 17m
onap-sdnc sdnc-2 2/2 Running 0 17m
onap-sdnc sdnc-dbhost-0 1/1 Running 0 17m
onap-sdnc sdnc-dbhost-1 1/1 Running 0 16m
onap-sdnc sdnc-dgbuilder-356329770-cpfzj 0/1 Running 6 17m
onap-sdnc sdnc-portal-0 0/1 Running 6 17m
onap-sdnc sdnc-portal-1 0/1 CrashLoopBackOff 7 17m
ubuntu@sdnc-k8s:~/oom/kubernetes/oneclick$

On Server 2:

docker ps |grep sdnc

Expand
titleExample of SDNC docker container

$ docker ps |grep sdnc
(the listing shows this node's filebeat, sdnc-controller, sdnc-portal, sdnc-dgbuilder, mysql, xtrabackup and pause containers)

$

In the above example, the sdnc-1 container is missing because it had failed due to a directory-sharing issue.

3

Validate that the SDN-C bundles are up

Expand
titleEnter pod container with 2 options
Expand
titleOption 1: through pod name from anywhere

Use command

kubectl exec -it <POD_NAME_WITH_NAME_SPACE> bash

Expand
titleExample


Expand
titleOption 2: through docker container ID from where the container is

Use command

docker exec -it <DOCKER_CONTAINER_ID> bash

Expand
titleExample


Expand
titleCheck SDNC bundles in ODL client
Expand
titleEnter ODL client

Expand
titleCheck SDNC bundles

4

Validate that the SDN-C APIs are shown on the ODL RestConf page

Access the ODL RestConf page from the following URL:

http://<Kubernetes-Master-Node-IP>:30202/apidoc/explorer/index.html

Expand
titleExample of SDNC APIs in ODL RestConf page


5

Validate the SDN-C ODL cluster

Clone ODL Integration-test project

git clone https://github.com/opendaylight/integration-test.git

Enter the cluster-monitor folder

cd integration-test/tools/clustering/cluster-monitor

Create update-cluster.bash script

vi update-cluster.bash

Code Block
languagebash
titleContent of update-cluster.bash
linenumberstrue
collapsetrue
#!/bin/bash -x
# get the pod IPs of the sdnc pods via kubectl
ips_string=$(kubectl get pods --all-namespaces -o wide | grep 'sdnc-[0-9]' | awk '{print $7}')
ip_list=($(echo ${ips_string} | tr ' ' '\n'))
# loop and replace the existing IPs in cluster.json
for ((i=0;i<=2;i++));
do
   if [ "${ip_list[$i]}" == "<none>" ]; then
     echo "Ip of deleted pod is not ready yet"
     exit 1;
   fi
   let "j=$i+4"
   sed -i -r "${j}s/(\b[0-9]{1,3}\.){3}[0-9]{1,3}\b/${ip_list[$i]}/" cluster.json
done
python monitor.py

This script fetches the IPs of all SDN-C pods and automatically updates the cluster.json file.
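The key trick in the script is the line-addressed sed substitution: "Ns/regex/replacement/" rewrites the IP only on line N of cluster.json. A self-contained illustration (the sample file content below is made up, not the real cluster.json):

```shell
# Demonstrate replacing the IP on one specific line, as update-cluster.bash does.
tmpfile=$(mktemp)
printf '{\n  "cluster": [\n    "member-1",\n    "10.0.0.1",\n    "10.0.0.2"\n  ]\n}\n' > "$tmpfile"

new_ip="192.168.1.7"
line=4                                    # replace the IP on line 4 only
sed -i -r "${line}s/(\b[0-9]{1,3}\.){3}[0-9]{1,3}\b/${new_ip}/" "$tmpfile"

grep -n "$new_ip" "$tmpfile"              # line 4 now holds 192.168.1.7
rm -f "$tmpfile"
```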

Start the cluster monitor UI

./update-cluster.bash

Note:

If the applications inside any of these three SDNC pods are not fully started, this script won't execute successfully, failing with issues such as connection errors or value errors.

Otherwise, you should see the monitoring UI.
Use testCluster RPC to test SDN-C load sharing

The testCluster-bundle.zip provides a testBundle which offers a testCluster API to help with validating SDN-C RPC load sharing in the deployed SDN-C cluster.

It is as easy as doing the following:

download testCluster-bundle.zip from the attachment, and place it in the sdnc-deploy hostPath which has been defined in .spec.volumes of the sdnc-statefulset.yaml file
  • As this hostPath is mounted as ODL's deploy directory, the testBundle will be loaded by ODL automatically and the testCluster API will be available to use.
  • The testCluster API can be accessed from the ODL RestConf page, postman or a curl command:

    • Expand
      titlecurl command

      curl -u admin:Kp8bJ4SXszM0WXlhak3eHlcse2gAw84vaoGGmJvUy2U -H 'Accept: application/json' -X POST 'http://${kubernetes-master-api}:30202/restconf/operations/testCluster:who-am-i'

    • Expand
      titleAn example of testCluster API response
      {
      	"output": {
      		"node": "sdnc-2"
      	}
      }