The steps described on this page are run as "ubuntu", a non-root user.



Clone the OOM project from ONAP gerrit

Run the following command on the master node to clone the OOM project from ONAP gerrit into any directory you prefer; that directory is referred to as "{$OOM}" on this page.

git clone http://gerrit.onap.org/r/oom

SDN-C Cluster Deployment

Configure SDN-C Cluster Deployment

We are using Kubernetes replicas to achieve the SDN-C cluster deployment (see About SDN-C Clustering for details on the desired goal).

This only needs to be done once and, at the moment, all modifications are done manually (they can be automated via scripting in the future should the need arise).


Get New startODL.sh Script From Gerrit Topic SDNC-163

The source of the new startODL.sh script, gerrit change 25475, has been merged into sdnc/oam project on December 15th, 2017.


Do the following to get the new startODL.sh script which provides the configuration of ODL clustering for SDN-C cluster.

# | Purpose | Command Examples
1 | Get the content of the new startODL.sh script

Go to gerrit change 25475

Click on installation/sdnc/src/main/scripts/startODL.sh under the Files section to view the text of the script.

Click the Download button to download the startODL_new.sh.zip file,

then extract the .sh file from the zip and rename it to "startODL.sh".

2 | Create the new startODL.sh on the Kubernetes node VM

mkdir -p /dockerdata-nfs/cluster/script

vi /dockerdata-nfs/cluster/script/startODL.sh

Paste the content copied in step 1 into this file.

3 | Give execute permission to the new startODL.sh script

chmod 777 /dockerdata-nfs/cluster/script/startODL.sh

sudo chown $(id -u):$(id -g) /dockerdata-nfs/cluster/script/startODL.sh
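The three manual steps above can be condensed into one shell helper; this is a hedged sketch (the download from gerrit itself stays manual, and the target directory is parameterized here purely for illustration; the page uses /dockerdata-nfs/cluster/script):

```shell
# Sketch of steps 1-3: stage the new startODL.sh under the shared NFS path.
# Assumes startODL.sh (extracted from startODL_new.sh.zip) is already in the
# current directory.
prepare_startodl() {
  script_dir="${1:-/dockerdata-nfs/cluster/script}"
  mkdir -p "$script_dir"
  cp startODL.sh "$script_dir/startODL.sh"
  # Same permissions as the manual step above.
  chmod 777 "$script_dir/startODL.sh"
}
```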


Get SDN-C Cluster Templates From Gerrit Topic SDNC-163

The source of the templates, gerrit change 25467, has been merged into sdnc/oam project on December 20th, 2017.

Skip steps 1 and 2 if your cloned OOM project includes this change.

Skip step 3 if you skipped the previous section (adding the startODL.sh script).

Skip step 4 if you don't want to add/deploy extra features/bundles/packages.

Step 5 is important: it determines the number of sdnc and db pods.

# | Purpose | Command and Examples
1 | Get the git fetch command for the shared template code

Go to gerrit change 25467

Click the Download downward arrow.

From the drop-down list at the bottom right corner, select anonymous http.

Click the clipboard icon on the Checkout line to copy the git commands (which include the git fetch and checkout commands) to the clipboard.

git fetch https://gerrit.onap.org/r/oom refs/changes/67/25467/23 && git checkout FETCH_HEAD
2 | Fetch the shared template to the oom directory on the Kubernetes node VM

cd {$OOM}

Execute the git command from step 1.

3 | Link the new startODL.sh

Skip this step if you skipped the "Get New startODL.sh Script" section.

Be careful when editing YAML files: they are sensitive to indentation, so take extra care when copying/pasting from a browser.


vi kubernetes/sdnc/templates/sdnc-statefulset.yaml

Make the following changes:

Purpose | Changes


mount point for new startODL.sh script

Field | Added Value

.spec.template.spec.containers.volumeMounts

(of container sdnc-controller-container)

- mountPath: /opt/onap/sdnc/bin/startODL.sh
  name: sdnc-startodl

 .spec.template.spec.volumes

- name: sdnc-startodl
  hostPath:
    path: /dockerdata-nfs/cluster/script/startODL.sh

4 | Link the ODL deploy directory

If you are not going to use the test bundle to test out SDN-C cluster and load balancing, you can skip this step.

ODL automatically installs bundles/packages placed under its deploy directory. This mount point lets you drop a bundle/package into /dockerdata-nfs/cluster/deploy on the Kubernetes node, and it will automatically be installed in the sdnc pods (under /opt/opendaylight/current/deploy).


vi kubernetes/sdnc/templates/sdnc-statefulset.yaml

Make the following changes:

Purpose | Changes

mount point for ODL deploy directory

Field | Added Value

.spec.template.spec.containers.volumeMounts

(of container sdnc-controller-container)

- mountPath: /opt/opendaylight/current/deploy
  name: sdnc-deploy

.spec.template.spec.volumes

- name: sdnc-deploy
  hostPath:
    path: /dockerdata-nfs/cluster/deploy


5 | Enable cluster configuration

vi kubernetes/sdnc/values.yaml

Change the following fields with the new value:

field | new value | old value
enableODLCluster | true | false
numberOfODLReplicas | 3 | 1
numberOfDbReplicas | 2 | 1
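The changes in step 5 can also be scripted. The sketch below runs the sed commands against a scratch copy so it is self-contained; in a real run, point them at kubernetes/sdnc/values.yaml, after verifying the fields appear as simple key: value pairs (an assumption here):

```shell
# Demo on a scratch file; substitute kubernetes/sdnc/values.yaml in a real run.
cat > /tmp/values-demo.yaml <<'EOF'
enableODLCluster: false
numberOfODLReplicas: 1
numberOfDbReplicas: 1
EOF

# Flip the three fields to the new values from the table above.
sed -i 's/enableODLCluster: false/enableODLCluster: true/' /tmp/values-demo.yaml
sed -i 's/numberOfODLReplicas: 1/numberOfODLReplicas: 3/' /tmp/values-demo.yaml
sed -i 's/numberOfDbReplicas: 1/numberOfDbReplicas: 2/' /tmp/values-demo.yaml

cat /tmp/values-demo.yaml
```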


Make the nfs-provisioner Pod Run on the Node Where the NFS Server Runs

Skip the following section if any of the following conditions matches.


Verify (from Master node)

# | Purpose | Command and Example
1 | Find the node name

Run the command "ps -ef|grep nfs"; you should see that:

  • the node with the nfs server runs nfsd processes:

ubuntu@sdnc-k8s:~$ ps -ef|grep nfs
root 3473 2 0 Dec07 ? 00:00:00 [nfsiod]
root 11072 2 0 Dec06 ? 00:00:00 [nfsd4_callbacks]
root 11074 2 0 Dec06 ? 00:00:00 [nfsd]
root 11075 2 0 Dec06 ? 00:00:00 [nfsd]
root 11076 2 0 Dec06 ? 00:00:00 [nfsd]
root 11077 2 0 Dec06 ? 00:00:00 [nfsd]
root 11078 2 0 Dec06 ? 00:00:00 [nfsd]
root 11079 2 0 Dec06 ? 00:00:03 [nfsd]
root 11080 2 0 Dec06 ? 00:00:13 [nfsd]
root 11081 2 0 Dec06 ? 00:00:42 [nfsd]
ubuntu@sdnc-k8s:~$

  • the node with the nfs client runs the nfs svc process:

ubuntu@sdnc-k8s-2:~$ ps -ef|grep nfs
ubuntu 5911 5890 0 20:10 pts/0 00:00:00 grep --color=auto nfs
root 18739 2 0 Dec06 ? 00:00:00 [nfsiod]
root 18749 2 0 Dec06 ? 00:00:00 [nfsv4.0-svc]
ubuntu@sdnc-k8s-2:~$

kubectl get node

ubuntu@sdnc-k8s:~$ kubectl get node
NAME STATUS ROLES AGE VERSION
sdnc-k8s Ready master 6d v1.8.4
sdnc-k8s-2 Ready <none> 6d v1.8.4
ubuntu@sdnc-k8s:~$

2 | Set a label on the node

kubectl label nodes <NODE_NAME_FROM_LAST_STEP> disktype=ssd

ubuntu@sdnc-k8s:~$ kubectl label nodes sdnc-k8s disktype=ssd

node "sdnc-k8s" labeled

ubuntu@sdnc-k8s:~$

3 | Check that the label has been set on the node

kubectl get node --show-labels

ubuntu@sdnc-k8s:~$ kubectl get node --show-labels
NAME STATUS ROLES AGE VERSION LABELS
sdnc-k8s Ready master 6d v1.8.4 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd,kubernetes.io/hostname=sdnc-k8s,node-role.kubernetes.io/master=
sdnc-k8s-2 Ready <none> 6d v1.8.4 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=sdnc-k8s-2
ubuntu@sdnc-k8s:~$

4 | Update the nfs-provisioner pod template to force it to run on the nfs server node

In the nfs-provisoner-deployment.yaml file, add "spec.template.spec.nodeSelector" for the "nfs-provisioner" pod, matching the label set in step 2 (e.g., disktype: ssd).
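Alternatively, the nodeSelector can be applied with kubectl patch instead of editing the file by hand. This is a hedged sketch: the deployment name (nfs-provisioner), namespace (onap-sdnc) and label (disktype=ssd) are assumptions taken from the examples on this page, and the deployment must already exist when the function is invoked.

```shell
# Sketch: pin the nfs-provisioner deployment to the node labeled disktype=ssd.
# Wrapped in a function so it can be called once the deployment exists.
pin_nfs_provisioner() {
  kubectl patch deployment nfs-provisioner -n onap-sdnc --patch \
    '{"spec":{"template":{"spec":{"nodeSelector":{"disktype":"ssd"}}}}}'
}
```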


Create the ONAP Config

Setup onap-parameters.yaml file

The following commands must be run on the master node before creating the ONAP configuration.

cd {$OOM}/kubernetes/config

cp onap-parameters-sample.yaml onap-parameters.yaml


Run createConfig

To simplify steps in this section

You can skip the steps in this section by following the instructions in autoCreateOnapConfig of the Scripts section to

  • create {$OOM}/kubernetes/oneclick/tools/autoCreateOnapConfig.bash file
  • run it and wait until the script completes
# | Purpose | Command and Examples
0

Set the OOM Kubernetes config environment

cd {$OOM}/kubernetes/oneclick

source setenv.bash

1

Run the createConfig script to create the ONAP config

cd {$OOM}/kubernetes/config
./createConfig.sh -n onap

**** Creating configuration for ONAP instance: onap

namespace "onap" created

NAME:   onap-config

LAST DEPLOYED: Wed Nov  8 20:47:35 2017

NAMESPACE: onap

STATUS: DEPLOYED

 

RESOURCES:

==> v1/ConfigMap

NAME                   DATA  AGE

global-onap-configmap  15    0s

 

==> v1/Pod

NAME    READY  STATUS             RESTARTS  AGE

config  0/1    ContainerCreating  0         0s

 

 

**** Done ****

Wait for the config-init container to finish

Use the following command to monitor the onap config init until it reaches Completed STATUS:

kubectl get pod --all-namespaces -a

The final output should look like the following, with the onap config pod in Completed STATUS:
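The wait can be automated with a small polling loop; a sketch, assuming the pod is named "config" in the "onap" namespace as shown in the output above (a pod shown as Completed by kubectl reports the Succeeded phase):

```shell
# Poll until the onap config pod reports phase Succeeded (shown as Completed
# in "kubectl get pod" output), checking every 10 seconds.
wait_for_onap_config() {
  while true; do
    phase=$(kubectl get pod config -n onap -o jsonpath='{.status.phase}' 2>/dev/null)
    [ "$phase" = "Succeeded" ] && break
    sleep 10
  done
}
```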

Additional checks for config-init
helm

helm ls --all

NAME REVISION UPDATED STATUS CHART NAMESPACE
onap-config 1 Tue Nov 21 17:07:13 2017 DEPLOYED config-1.1.0 onap

helm status onap-config

LAST DEPLOYED: Tue Nov 21 17:07:13 2017
NAMESPACE: onap
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME DATA AGE
global-onap-configmap 15 2d

==> v1/Pod
NAME READY STATUS RESTARTS AGE
config 0/1 Completed 0 2d

 kubernetes namespaces

kubectl get namespaces

NAME STATUS AGE
default Active 15d
kube-public Active 15d
kube-system Active 15d
onap Active 2d


Deploy the SDN-C Application

To simplify steps in this section

You can skip the steps in this section by following the instructions in autoDeploySdnc of the Scripts section to

  • create {$OOM}/kubernetes/oneclick/tools/autoDeploySdnc.bash file
  • run it and wait until the script completes

Execute the following on the master node.

# | Purpose | Command and Examples
0

Set the OOM Kubernetes config environment

(If you have already set the OOM Kubernetes config environment in the same terminal, you can skip this step)

cd {$OOM}/kubernetes/oneclick

source setenv.bash

1

Run the createAll script to deploy the SDN-C application

cd {$OOM}/kubernetes/oneclick

./createAll.bash -n onap -a sdnc

********** Creating instance 1 of ONAP with port range 30200 and 30399

********** Creating ONAP:


********** Creating deployments for sdnc **********

Creating namespace **********
namespace "onap-sdnc" created

Creating service account **********
clusterrolebinding "onap-sdnc-admin-binding" created

Creating registry secret **********
secret "onap-docker-registry-key" created

Creating deployments and services **********
NAME: onap-sdnc
LAST DEPLOYED: Thu Nov 23 20:13:32 2017
NAMESPACE: onap
STATUS: DEPLOYED

RESOURCES:
==> v1/PersistentVolume
NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM REASON AGE
onap-sdnc-db 2Gi RWX Retain Bound onap-sdnc/sdnc-db 1s

==> v1/PersistentVolumeClaim
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
sdnc-db Bound onap-sdnc-db 2Gi RWX 1s

==> v1/Service
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dbhost None <none> 3306/TCP 1s
sdnctldb01 None <none> 3306/TCP 1s
sdnctldb02 None <none> 3306/TCP 1s
sdnc-dgbuilder 10.43.97.219 <nodes> 3000:30203/TCP 1s
sdnhost 10.43.99.163 <nodes> 8282:30202/TCP,8201:30208/TCP 1s
sdnc-portal 10.43.72.72 <nodes> 8843:30201/TCP 1s

==> extensions/v1beta1/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
sdnc-dgbuilder 1 1 1 0 1s

==> apps/v1beta1/StatefulSet
NAME DESIRED CURRENT AGE
sdnc-dbhost 2 1 1s
sdnc 3 3 1s
sdnc-portal 2 2 1s

 


**** Done ****


Ensure that the SDN-C application has started

Use the kubectl get pods command to monitor the SDN-C startup; you should observe:

  • sdnc-dbhost-0 pod starts and gets into Running STATUS first,
    • while
      • sdnc-dbhost-1 pod does not exist and
      • sdnc, sdnc-dgbuilder and sdnc-portal pods stay in Init:0/1 STATUS
  • once sdnc-dbhost-0 pod is fully started with READY "1/1",
    • sdnc-dbhost-1 starts from ContainerCreating STATUS and runs up to Running STATUS
  • once sdnc-dbhost-1 pod is in Running STATUS,
    • the sdnc pods start from PodInitializing STATUS and end up in Running STATUS in parallel
    • while
      • sdnc-dgbuilder and sdnc-portal pods stay in Init:0/1 STATUS
  • once the sdnc pods are in Running STATUS,
    • sdnc-dgbuilder and sdnc-portal start from PodInitializing STATUS and end up in Running STATUS in parallel
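The startup sequence above can be watched live; a sketch, assuming the onap-sdnc namespace shown in the deployment output:

```shell
# Watch SDN-C pods transition through the states described above;
# Ctrl-C to stop once everything is Running. Wrapped in a function
# so it can be invoked when the namespace exists.
watch_sdnc_pods() {
  kubectl get pods -n onap-sdnc -o wide -w
}
```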

2

Validate that all SDN-C pods and services are created properly

helm ls --all

ubuntu@sdnc-k8s:~$ helm ls --all
NAME REVISION UPDATED STATUS CHART NAMESPACE
onap-config 1 Tue Nov 21 17:07:13 2017 DEPLOYED config-1.1.0 onap
onap-sdnc 1 Thu Nov 23 20:13:32 2017 DEPLOYED sdnc-0.1.0 onap
ubuntu@sdnc-k8s:~$

kubectl get namespace

ubuntu@sdnc-k8s:~$ kubectl get namespaces
NAME STATUS AGE
default Active 15d
kube-public Active 15d
kube-system Active 15d
onap Active 2d
onap-sdnc Active 12m
ubuntu@sdnc-k8s:~$

kubectl get deployment --all-namespaces

ubuntu@sdnc-k8s-2:~$ kubectl get deployment --all-namespaces
NAMESPACE NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
kube-system heapster 1 1 1 1 15d
kube-system kube-dns 1 1 1 1 15d
kube-system kubernetes-dashboard 1 1 1 1 15d
kube-system monitoring-grafana 1 1 1 1 15d
kube-system monitoring-influxdb 1 1 1 1 15d
kube-system tiller-deploy 1 1 1 1 15d
onap-sdnc sdnc-dgbuilder 1 1 1 0 26m
ubuntu@sdnc-k8s-2:~$

kubectl get clusterrolebinding --all-namespaces

ubuntu@sdnc-k8s:~$ kubectl get clusterrolebinding --all-namespaces
NAMESPACE NAME AGE
addons-binding 15d
onap-sdnc-admin-binding 13m
ubuntu@sdnc-k8s:~$

kubectl get serviceaccounts --all-namespaces

ubuntu@sdnc-k8s:~$ kubectl get serviceaccounts --all-namespaces
NAMESPACE NAME SECRETS AGE
default default 1 15d
kube-public default 1 15d
kube-system default 1 15d
kube-system io-rancher-system 1 15d
onap default 1 2d
onap-sdnc default 1 14m
ubuntu@sdnc-k8s:~$

kubectl get service -n onap-sdnc

kubectl get pods --all-namespaces -a

docker ps |grep sdnc

On Server 1:

$ docker ps |grep sdnc |wc -l
9

$ docker ps |grep sdnc
ebcb2f7f1a4a docker.elastic.co/beats/filebeat@sha256:fe7602b641ed8ee288f067f7b31ebde14644c4722d9f7960f176d621097a5942 "filebeat -e" About an hour ago Up About an hour k8s_filebeat-onap_sdnc-1_onap-sdnc_263071db-e69e-11e7-a01e-026c942e0e8c_0
55a82019ce10 nexus3.onap.org:10001/onap/sdnc-image@sha256:f5f85d498c397677fa2df850d9a2d8149520c5be3dce6c646c8fe4f25f9bab10 "bash -c 'sed -i 's/d" About an hour ago Up About an hour k8s_sdnc-controller-container_sdnc-1_onap-sdnc_263071db-e69e-11e7-a01e-026c942e0e8c_0
bbdfdfc2b1f0 gcr.io/google-samples/xtrabackup@sha256:29354f70c9d9207e757a1bae6a4cbf2f57a56b18fe5c2b0acc1198a053b24b38 "bash -c 'set -ex\ncd " About an hour ago Up About an hour k8s_xtrabackup_sdnc-dbhost-1_onap-sdnc_6e9a6c9a-e69e-11e7-a01e-026c942e0e8c_0
26854595164d mysql@sha256:1f95a2ba07ea2ee2800ec8ce3b5370ed4754b0a71d9d11c0c35c934e9708dcf1 "docker-entrypoint.sh" About an hour ago Up About an hour k8s_sdnc-db-container_sdnc-dbhost-1_onap-sdnc_6e9a6c9a-e69e-11e7-a01e-026c942e0e8c_0
b577493b5725 gcr.io/google_containers/pause-amd64:3.0 "/pause" About an hour ago Up About an hour k8s_POD_sdnc-dbhost-1_onap-sdnc_6e9a6c9a-e69e-11e7-a01e-026c942e0e8c_0
14dcc0985259 gcr.io/google_containers/pause-amd64:3.0 "/pause" About an hour ago Up About an hour k8s_POD_sdnc-1_onap-sdnc_263071db-e69e-11e7-a01e-026c942e0e8c_1
b52be823997b quay.io/kubernetes_incubator/nfs-provisioner@sha256:b5328a3825032d7e1719015260260347bda99c5d830bcd5d9da1175e7d1da989 "/nfs-provisioner -pr" About an hour ago Up About an hour k8s_nfs-provisioner_nfs-provisioner-860992464-dz5bt_onap-sdnc_261c3d30-e69e-11e7-a01e-026c942e0e8c_0
dc6dfd3fde3b gcr.io/google_containers/pause-amd64:3.0 "/pause" About an hour ago Up About an hour k8s_POD_nfs-provisioner-860992464-dz5bt_onap-sdnc_261c3d30-e69e-11e7-a01e-026c942e0e8c_0
006aaa34c5af mysql@sha256:1f95a2ba07ea2ee2800ec8ce3b5370ed4754b0a71d9d11c0c35c934e9708dcf1 "docker-entrypoint.sh" 32 hours ago Up 32 hours k8s_sdnc-db-container_sdnc-dbhost-1_onap-sdnc_d1cc3b28-e59e-11e7-a01e-026c942e0e8c_0

On Server 2:

$ docker ps |grep sdnc|wc -l
20

$ docker ps |grep sdnc

5155b7fcd0b9 nexus3.onap.org:10001/onap/admportal-sdnc-image@sha256:9cfdfa8aac18da5571479e0c767b92dbc72a3a5b475be37bd84fb65400696564 "/bin/bash -c 'cd /op" About an hour ago Up About an hour k8s_sdnc-portal-container_sdnc-portal-1380828306-7xxdz_onap-sdnc_261d5989-e69e-11e7-a01e-026c942e0e8c_0
d3a58f2bb662 nexus3.onap.org:10001/onap/ccsdk-dgbuilder-image@sha256:c52ad4dacc00da4a882d31b7022e59e3dd4bd4ec104380910949bce4d2d0c7b9 "/bin/bash -c 'cd /op" About an hour ago Up About an hour k8s_sdnc-dgbuilder-container_sdnc-dgbuilder-3612718752-m189c_onap-sdnc_262301b1-e69e-11e7-a01e-026c942e0e8c_0
275e0173f109 nexus3.onap.org:10001/onap/sdnc-image@sha256:f5f85d498c397677fa2df850d9a2d8149520c5be3dce6c646c8fe4f25f9bab10 "bash -c 'sed -i 's/d" About an hour ago Up About an hour k8s_sdnc-controller-container_sdnc-2_onap-sdnc_263d495f-e69e-11e7-a01e-026c942e0e8c_0
2566062bd408 docker.elastic.co/beats/filebeat@sha256:fe7602b641ed8ee288f067f7b31ebde14644c4722d9f7960f176d621097a5942 "filebeat -e" About an hour ago Up About an hour k8s_filebeat-onap_sdnc-2_onap-sdnc_263d495f-e69e-11e7-a01e-026c942e0e8c_0
53df8341738c docker.elastic.co/beats/filebeat@sha256:fe7602b641ed8ee288f067f7b31ebde14644c4722d9f7960f176d621097a5942 "filebeat -e" About an hour ago Up About an hour k8s_filebeat-onap_sdnc-0_onap-sdnc_2629ed07-e69e-11e7-a01e-026c942e0e8c_0
e147f48ffb5b nexus3.onap.org:10001/onap/sdnc-image@sha256:f5f85d498c397677fa2df850d9a2d8149520c5be3dce6c646c8fe4f25f9bab10 "bash -c 'sed -i 's/d" About an hour ago Up About an hour k8s_sdnc-controller-container_sdnc-0_onap-sdnc_2629ed07-e69e-11e7-a01e-026c942e0e8c_0
c5f63561e196 gcr.io/google-samples/xtrabackup@sha256:29354f70c9d9207e757a1bae6a4cbf2f57a56b18fe5c2b0acc1198a053b24b38 "bash -c 'set -ex\ncd " About an hour ago Up About an hour k8s_xtrabackup_sdnc-dbhost-0_onap-sdnc_265aef90-e69e-11e7-a01e-026c942e0e8c_0
4a0cc594d12d mysql@sha256:1f95a2ba07ea2ee2800ec8ce3b5370ed4754b0a71d9d11c0c35c934e9708dcf1 "docker-entrypoint.sh" About an hour ago Up About an hour k8s_sdnc-db-container_sdnc-dbhost-0_onap-sdnc_265aef90-e69e-11e7-a01e-026c942e0e8c_0
6c02ac51ea56 gcr.io/google_containers/pause-amd64:3.0 "/pause" About an hour ago Up About an hour k8s_POD_sdnc-dbhost-0_onap-sdnc_265aef90-e69e-11e7-a01e-026c942e0e8c_0
8eafb15b5e7c gcr.io/google_containers/pause-amd64:3.0 "/pause" About an hour ago Up About an hour k8s_POD_sdnc-2_onap-sdnc_263d495f-e69e-11e7-a01e-026c942e0e8c_0
887ac4e733e8 gcr.io/google_containers/pause-amd64:3.0 "/pause" About an hour ago Up About an hour k8s_POD_sdnc-0_onap-sdnc_2629ed07-e69e-11e7-a01e-026c942e0e8c_0
41044c2ed166 gcr.io/google_containers/pause-amd64:3.0 "/pause" About an hour ago Up About an hour k8s_POD_sdnc-dgbuilder-3612718752-m189c_onap-sdnc_262301b1-e69e-11e7-a01e-026c942e0e8c_0
6771b1319b20 gcr.io/google_containers/pause-amd64:3.0 "/pause" About an hour ago Up About an hour k8s_POD_sdnc-portal-1380828306-7xxdz_onap-sdnc_261d5989-e69e-11e7-a01e-026c942e0e8c_0
fd57d6e9577b mysql@sha256:1f95a2ba07ea2ee2800ec8ce3b5370ed4754b0a71d9d11c0c35c934e9708dcf1 "docker-entrypoint.sh" 2 hours ago Up 2 hours k8s_sdnc-db-container_sdnc-dbhost-0_onap-sdnc_7c3c3068-e699-11e7-a01e-026c942e0e8c_0
0be895c2ccad mysql@sha256:1f95a2ba07ea2ee2800ec8ce3b5370ed4754b0a71d9d11c0c35c934e9708dcf1 "docker-entrypoint.sh" 2 hours ago Up 2 hours k8s_sdnc-db-container_sdnc-dbhost-0_onap-sdnc_ea765fb6-e696-11e7-a01e-026c942e0e8c_0
7ddaf2cf9806 mysql@sha256:1f95a2ba07ea2ee2800ec8ce3b5370ed4754b0a71d9d11c0c35c934e9708dcf1 "docker-entrypoint.sh" 4 hours ago Up 4 hours k8s_sdnc-db-container_sdnc-dbhost-0_onap-sdnc_e04ab438-e689-11e7-a01e-026c942e0e8c_0
8ad307977830 mysql@sha256:1f95a2ba07ea2ee2800ec8ce3b5370ed4754b0a71d9d11c0c35c934e9708dcf1 "docker-entrypoint.sh" 25 hours ago Up 25 hours k8s_sdnc-db-container_sdnc-dbhost-0_onap-sdnc_87db0033-e5d9-11e7-a01e-026c942e0e8c_0
979b3ff11974 mysql@sha256:1f95a2ba07ea2ee2800ec8ce3b5370ed4754b0a71d9d11c0c35c934e9708dcf1 "docker-entrypoint.sh" 25 hours ago Up 25 hours k8s_sdnc-db-container_sdnc-dbhost-0_onap-sdnc_6c1b0064-e5d8-11e7-a01e-026c942e0e8c_0
b903e3e52f51 mysql@sha256:1f95a2ba07ea2ee2800ec8ce3b5370ed4754b0a71d9d11c0c35c934e9708dcf1 "docker-entrypoint.sh" 25 hours ago Up 25 hours k8s_sdnc-db-container_sdnc-dbhost-0_onap-sdnc_744c98a4-e5d3-11e7-a01e-026c942e0e8c_0
36857a112463 mysql@sha256:1f95a2ba07ea2ee2800ec8ce3b5370ed4754b0a71d9d11c0c35c934e9708dcf1 "docker-entrypoint.sh" 32 hours ago Up 32 hours k8s_sdnc-db-container_sdnc-dbhost-0_onap-sdnc_abe261cd-e59d-11e7-a01e-026c942e0e8c_0
$


3

Validate that the SDN-C bundles are up

Use the following command to enter an sdnc pod:

kubectl exec -it <POD_NAME_WITH_NAME_SPACE> bash

Or enter the container directly with:

docker exec -it <DOCKER_CONTAINER_ID> bash
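From either shell, ODL bundle status can then be listed with the karaf client. The sketch below wraps this in a non-interactive kubectl exec call; the pod name (sdnc-0), namespace (onap-sdnc) and karaf path (/opt/opendaylight/current) are assumptions based on the examples on this page.

```shell
# Sketch: list the first ODL bundles in one sdnc pod without an
# interactive shell (karaf's bundle:list shows the framework start level
# and per-bundle states).
check_sdnc_bundles() {
  pod="${1:-sdnc-0}"
  kubectl exec -n onap-sdnc "$pod" -- \
    /opt/opendaylight/current/bin/client 'bundle:list' | head -20
}
```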

4

Validate that the SDN-C APIs are shown on the ODL RestConf page

Access the ODL RestConf page from the following URL:

http://<Kubernetes-Master-Node-IP>:30202/apidoc/explorer/index.html
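Reachability can be checked from the command line before opening a browser; a sketch, where the master node IP is supplied by you:

```shell
# Return the HTTP status code of the apidoc explorer page; 200 (or a 401
# asking for credentials) indicates the RestConf endpoint is up.
check_apidoc() {
  master_ip="$1"
  curl -s -o /dev/null -w '%{http_code}' \
    "http://${master_ip}:30202/apidoc/explorer/index.html"
}
```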


5

Validate the SDN-C ODL cluster

Goal:

Verify that the SDN-C ODL cluster is running properly

Prerequisites
  1. Run this test on one of your Kubernetes nodes
  2. Make sure python-pycurl is installed
    • If not, on Ubuntu use "apt-get install python-pycurl" to install it
Use the ODL integration tool to monitor the ODL cluster

Clone ODL Integration-test project

git clone https://github.com/opendaylight/integration-test.git

Enter the cluster-monitor folder

cd integration-test/tools/clustering/cluster-monitor

Create cluster-monitor.bash script

vi cluster-monitor.bash

Content of cluster-monitor.bash
#!/bin/bash
########################################################################################
# This script wraps ODL monitor.py and dynamically picks up the clustered IPs of the   #
# sdnc pods and updates the IP addresses in cluster.json, which feeds the ODL          #
# monitor.py.                                                                          #
# This script also changes the username and password in the cluster.json file.         #
#                                                                                      #
# If the sdnc pods' IP addresses are re-assigned, the running session of this script   #
# should be restarted.                                                                 #
#                                                                                      #
# To run it, just enter the following command:                                         #
#    ./cluster-monitor.bash                                                            #
########################################################################################

# get IPs string by using kubectl
ips_string=$(kubectl get pods --all-namespaces -o wide | grep 'sdnc-[0-9]' | awk '{print $7}')
ip_list=($(echo ${ips_string} | tr ' ' '\n'))

# loop and replace existing IP
for ((i=0;i<=2;i++));
do
   if [ "${ip_list[$i]}" == "<none>" ]; then
     echo "Ip of deleted pod is not ready yet"
     exit 1;
   fi

   let "j=$i+4"
   sed -i -r "${j}s/(\b[0-9]{1,3}\.){3}[0-9]{1,3}\b/${ip_list[$i]}/" cluster.json
done

# replace port, username and password
sed -i  's/8181/8080/g' cluster.json
sed -i 's/username/admin/' cluster.json
sed -i 's/password/admin/' cluster.json

# start monitoring
python monitor.py

This script fetches the IPs of all SDN-C pods and automatically updates the cluster.json file.

Start the cluster monitor UI

./cluster-monitor.bash

Note:

If the applications inside any of the three SDNC pods are not fully started, this script will fail with errors such as connection errors or value errors.

Otherwise, you should see the monitoring UI like the following:

Use testCluster RPC to test SDN-C load sharing

The testCluster-bundle.zip provides a testBundle which offers a testCluster API to help with validating SDN-C RPC load sharing in the deployed SDN-C cluster.

It's as easy as doing the following:

  1. Download testCluster-bundle.zip (by clicking on the hyperlinked text) and place it in the sdnc-deploy hostPath that was defined in .spec.volumes of the sdnc-statefulset.yaml file.
  2. Unzip testCluster-bundle.zip. If the unzip command is not installed, install it with "sudo apt install unzip" and then run "unzip testCluster-bundle.zip".

    ubuntu@sdnc-k8s-1:~/cluster/deploy$ unzip testCluster-0.2.0.zip

    The program 'unzip' is currently not installed. You can install it by typing:
    sudo apt install unzip
    ubuntu@sdnc-k8s-1:~/cluster/deploy$ sudo apt install unzip
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Suggested packages:
    zip
    The following NEW packages will be installed:
    unzip
    0 upgraded, 1 newly installed, 0 to remove and 117 not upgraded.
    Need to get 158 kB of archives.
    After this operation, 530 kB of additional disk space will be used.
    Get:1 http://nova.clouds.archive.ubuntu.com/ubuntu xenial/main amd64 unzip amd64 6.0-20ubuntu1 [158 kB]
    Fetched 158 kB in 0s (230 kB/s)
    Selecting previously unselected package unzip.
    (Reading database ... 93492 files and directories currently installed.)
    Preparing to unpack .../unzip_6.0-20ubuntu1_amd64.deb ...
    Unpacking unzip (6.0-20ubuntu1) ...
    Processing triggers for mime-support (3.59ubuntu1) ...
    Processing triggers for man-db (2.7.5-1) ...
    Setting up unzip (6.0-20ubuntu1) ...
    ubuntu@sdnc-k8s-1:~/cluster/deploy$ unzip testCluster-0.2.0.zip
    Archive: testCluster-0.2.0.zip
    inflating: testDataBroker-api-0.2.0-SNAPSHOT.jar
    inflating: testDataBroker-features-0.2.0-SNAPSHOT.jar
    inflating: testDataBroker-impl-0.2.0-SNAPSHOT.jar
    ubuntu@sdnc-k8s-1:~/cluster/deploy$

  3. As this hostPath is mounted as ODL's deploy directory, once the zip file is unzipped, the testBundle will be automatically loaded by ODL and the testCluster API will be available as an ODL RestConf API.
    • testCluster API is accessible (if you don't see it there, try deleting the SDNC pods; this will restart them.)
      • Example of postman code snippets
        POST /restconf/operations/testCluster:who-am-i HTTP/1.1
        Host: ${KUBERNETES MASTER VM IP}:30202
        Accept: application/json
        Authorization: Basic YWRtaW46S3A4Yko0U1hzek0wV1hsaGFrM2VIbGNzZTJnQXc4NHZhb0dHbUp2VXkyVQ==
        Cache-Control: no-cache
        Postman-Token: 9683538b-de47-dec8-3e88-c491be9dd6ef
        
        
        
        
      • curl -u admin:Kp8bJ4SXszM0WXlhak3eHlcse2gAw84vaoGGmJvUy2U -H 'Accept: application/json' -X POST 'http://${KUBERNETES MASTER VM IP}:30202/restconf/operations/testCluster:who-am-i'
    • {
      	"output": {
      		"node": "sdnc-2"
      	}
      }
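Load sharing can be demonstrated by calling who-am-i several times and comparing the returned node names; a hedged sketch using the same endpoint and credential form as the curl example above:

```shell
# Call testCluster:who-am-i repeatedly; the "node" field in the responses
# should vary across sdnc-0/1/2 when RPC load sharing works.
test_load_sharing() {
  master_ip="$1"
  auth="$2"   # user:password, as used with curl -u above
  for i in 1 2 3 4 5 6; do
    curl -s -u "$auth" -H 'Accept: application/json' -X POST \
      "http://${master_ip}:30202/restconf/operations/testCluster:who-am-i"
    echo
  done
}
```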



Undeploy the SDN-C Application


To simplify steps in this section

You can skip the steps in this section by following the instructions in autoCleanSdnc of the Scripts section to

  • create {$OOM}/kubernetes/oneclick/tools/autoCleanSdnc.bash file
  • run it and wait until the script completes
# | Purpose | Command and Examples
0
  • Set the OOM Kubernetes config environment

(If you have already set the OOM Kubernetes config environment in the same terminal, you can skip this step)

cd {$OOM}/kubernetes/oneclick

source setenv.bash

1
  • Run the deleteAll script to delete all SDN-C pods and services

./deleteAll.bash -n onap -a sdnc

********** Cleaning up ONAP:
release "onap-sdnc" deleted
namespace "onap-sdnc" deleted
clusterrolebinding "onap-sdnc-admin-binding" deleted
Service account onap-sdnc-admin-binding deleted.

Waiting for namespaces termination...

********** Gone **********

2
  • Validate that all SDN-C pods and services are cleaned up

docker ps |grep sdnc

ubuntu@sdnc-k8s:~$ docker ps |grep sdnc
ubuntu@sdnc-k8s:~$

kubectl get pods --all-namespaces -a

ubuntu@sdnc-k8s:~/oom/kubernetes/oneclick$ kubectl get pods --all-namespaces -a
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system heapster-4285517626-32km8 1/1 Running 0 15d
kube-system kube-dns-638003847-vqz8t 3/3 Running 0 15d
kube-system kubernetes-dashboard-716739405-tnxj6 1/1 Running 0 15d
kube-system monitoring-grafana-2360823841-qfhzm 1/1 Running 0 15d
kube-system monitoring-influxdb-2323019309-41q0l 1/1 Running 0 15d
kube-system tiller-deploy-737598192-5663c 1/1 Running 0 15d
onap config 0/1 Completed 0 2d
ubuntu@sdnc-k8s:~/oom/kubernetes/oneclick$

kubectl get service --all-namespaces

ubuntu@sdnc-k8s:~/oom/kubernetes/oneclick$ kubectl get service --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 15d
kube-system heapster ClusterIP 10.43.210.11 <none> 80/TCP 15d
kube-system kube-dns ClusterIP 10.43.0.10 <none> 53/UDP,53/TCP 15d
kube-system kubernetes-dashboard ClusterIP 10.43.196.205 <none> 9090/TCP 15d
kube-system monitoring-grafana ClusterIP 10.43.90.8 <none> 80/TCP 15d
kube-system monitoring-influxdb ClusterIP 10.43.52.1 <none> 8086/TCP 15d
kube-system tiller-deploy ClusterIP 10.43.106.73 <none> 44134/TCP 15d
ubuntu@sdnc-k8s:~/oom/kubernetes/oneclick$

kubectl get serviceaccounts --all-namespaces

ubuntu@sdnc-k8s:~$ kubectl get serviceaccounts --all-namespaces
NAMESPACE NAME SECRETS AGE
default default 1 15d
kube-public default 1 15d
kube-system default 1 15d
kube-system io-rancher-system 1 15d
onap default 1 2d
ubuntu@sdnc-k8s:~$

kubectl get clusterrolebinding --all-namespaces

ubuntu@sdnc-k8s:~$ kubectl get clusterrolebinding --all-namespaces
NAMESPACE NAME AGE
addons-binding 15d
ubuntu@sdnc-k8s:~$

kubectl get deployment --all-namespaces

ubuntu@sdnc-k8s:~$ kubectl get deployment --all-namespaces

NAMESPACE     NAME                   DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE

kube-system   heapster               1         1         1            1           2d

kube-system   kube-dns               1         1         1            1           2d

kube-system   kubernetes-dashboard   1         1         1            1           2d

kube-system   monitoring-grafana     1         1         1            1           2d

kube-system   monitoring-influxdb    1         1         1            1           2d

kube-system   tiller-deploy          1         1         1            1           2d

ubuntu@sdnc-k8s:~$

kubectl get namespaces

ubuntu@sdnc-k8s:~$ kubectl get namespaces
NAME STATUS AGE
default Active 15d
kube-public Active 15d
kube-system Active 15d
onap Active 2d
ubuntu@sdnc-k8s:~$

helm ls --all

ubuntu@sdnc-k8s:~$ helm ls --all
NAME REVISION UPDATED STATUS CHART NAMESPACE
onap-config 1 Tue Nov 21 17:07:13 2017 DEPLOYED config-1.1.0 onap
ubuntu@sdnc-k8s:~$


Remove the ONAP Config

To simplify steps in this section

You can skip the steps in this section by following the instructions in autoCleanOnapConfig of the Scripts section to

  • create {$OOM}/kubernetes/oneclick/tools/autoCleanOnapConfig.bash file
  • run it and wait until the script completes
# | Purpose | Command and Examples
0
  • Set the OOM Kubernetes config environment

(If you have already set the OOM Kubernetes config environment in the same terminal, you can skip this step)

cd {$OOM}/kubernetes/oneclick

source setenv.bash

1
  • Remove the ONAP config and any deployed applications in one shot

./deleteAll.bash -n onap

ubuntu@sdnc-k8s:~/oom/kubernetes/oneclick$ ./deleteAll.bash -n onap

 

********** Cleaning up ONAP:

Error: release: not found

Error from server (NotFound): namespaces "onap-consul" not found

Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-consul-admin-binding" not found

Service account onap-consul-admin-binding deleted.

 

Error: release: not found

Error from server (NotFound): namespaces "onap-msb" not found

Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-msb-admin-binding" not found

Service account onap-msb-admin-binding deleted.

 

Error: release: not found

Error from server (NotFound): namespaces "onap-mso" not found

Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-mso-admin-binding" not found

Service account onap-mso-admin-binding deleted.

 

Error: release: not found

Error from server (NotFound): namespaces "onap-message-router" not found

Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-message-router-admin-binding" not found

Service account onap-message-router-admin-binding deleted.

 

 

release "onap-sdnc" deleted

namespace "onap-sdnc" deleted

clusterrolebinding "onap-sdnc-admin-binding" deleted

Service account onap-sdnc-admin-binding deleted.

 

Error: release: not found

Error from server (NotFound): namespaces "onap-vid" not found

Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-vid-admin-binding" not found

Service account onap-vid-admin-binding deleted.

 

Error: release: not found

Error from server (NotFound): namespaces "onap-robot" not found

Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-robot-admin-binding" not found

Service account onap-robot-admin-binding deleted.

 

Error: release: not found

Error from server (NotFound): namespaces "onap-portal" not found

Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-portal-admin-binding" not found

Service account onap-portal-admin-binding deleted.

 

Error: release: not found

Error from server (NotFound): namespaces "onap-policy" not found

Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-policy-admin-binding" not found

Service account onap-policy-admin-binding deleted.

 

Error: release: not found

Error from server (NotFound): namespaces "onap-appc" not found

Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-appc-admin-binding" not found

Service account onap-appc-admin-binding deleted.

 

Error: release: not found

Error from server (NotFound): namespaces "onap-aai" not found

Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-aai-admin-binding" not found

Service account onap-aai-admin-binding deleted.

 

Error: release: not found

Error from server (NotFound): namespaces "onap-sdc" not found

Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-sdc-admin-binding" not found

Service account onap-sdc-admin-binding deleted.

 

Error: release: not found

Error from server (NotFound): namespaces "onap-dcaegen2" not found

Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-dcaegen2-admin-binding" not found

Service account onap-dcaegen2-admin-binding deleted.

 

Error: release: not found

Error from server (NotFound): namespaces "onap-log" not found

Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-log-admin-binding" not found

Service account onap-log-admin-binding deleted.

 

Error: release: not found

Error from server (NotFound): namespaces "onap-cli" not found

Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-cli-admin-binding" not found

Service account onap-cli-admin-binding deleted.

 

Error: release: not found

Error from server (NotFound): namespaces "onap-multicloud" not found

Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-multicloud-admin-binding" not found

Service account onap-multicloud-admin-binding deleted.

 

Error: release: not found

Error from server (NotFound): namespaces "onap-clamp" not found

Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-clamp-admin-binding" not found

Service account onap-clamp-admin-binding deleted.

 

Error: release: not found

Error from server (NotFound): namespaces "onap-vnfsdk" not found

Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-vnfsdk-admin-binding" not found

Service account onap-vnfsdk-admin-binding deleted.

 

Error: release: not found

Error from server (NotFound): namespaces "onap-uui" not found

Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-uui-admin-binding" not found

Service account onap-uui-admin-binding deleted.

 

Error: release: not found

Error from server (NotFound): namespaces "onap-aaf" not found

Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-aaf-admin-binding" not found

Service account onap-aaf-admin-binding deleted.

 

Error: release: not found

Error from server (NotFound): namespaces "onap-vfc" not found

Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-vfc-admin-binding" not found

Service account onap-vfc-admin-binding deleted.

 

Error: release: not found

Error from server (NotFound): namespaces "onap-kube2msb" not found

Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-kube2msb-admin-binding" not found

Service account onap-kube2msb-admin-binding deleted.

 

Waiting for namespaces termination...

 

********** Gone **********
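Once deleteAll reports "Gone", it can be worth confirming that no onap-* namespaces actually remain before re-deploying. A minimal sketch; the helper function and the canned sample listing are illustrative, not part of OOM:

```shell
# Count remaining onap-* namespaces in `kubectl get namespaces` output.
count_onap_ns() {
  echo "$1" | grep -c '^onap' || true
}

# In practice: count_onap_ns "$(kubectl get namespaces)"
sample="NAME          STATUS    AGE
default       Active    15d
kube-system   Active    15d"
count_onap_ns "$sample"   # prints 0 when cleanup is complete
```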

2
  • Manually clean up

This step cleans up the leftover items that were created by the config/createConfig.sh script but are not removed by the oneclick/deleteAll.bash script.

ubuntu@sdnc-k8s:~/oom/kubernetes/config$ ./createConfig.sh -n onap

**** Creating configuration for ONAP instance: onap
Error from server (AlreadyExists): namespaces "onap" already exists
Error: a release named "onap-config" already exists.
Please run: helm ls --all "onap-config"; helm del --help
**** Done ****
ubuntu@sdnc-k8s:

ONAP serviceaccount

No action needed.

The serviceaccount cannot be removed by a specific command; as the output below shows, deleting it only causes Kubernetes to recreate it immediately. It is deleted automatically when its namespace is deleted.

ubuntu@sdnc-k8s:~/oom/kubernetes/config$ kubectl get serviceaccounts --all-namespaces
NAMESPACE NAME SECRETS AGE
default default 1 15d
kube-public default 1 15d
kube-system default 1 15d
kube-system io-rancher-system 1 15d
onap default 1 2d
ubuntu@sdnc-k8s:~/oom/kubernetes/config$ kubectl delete serviceaccounts default -n onap
serviceaccount "default" deleted
ubuntu@sdnc-k8s:~/oom/kubernetes/config$ kubectl get serviceaccounts --all-namespaces
NAMESPACE NAME SECRETS AGE
default default 1 15d
kube-public default 1 15d
kube-system default 1 15d
kube-system io-rancher-system 1 15d
onap default 1 6s
ubuntu@sdnc-k8s:

... after ONAP namespace is deleted...

ubuntu@sdnc-k8s:~/oom/kubernetes/config$ kubectl get namespaces
NAME STATUS AGE
default Active 15d
kube-public Active 15d
kube-system Active 15d
ubuntu@sdnc-k8s:~/oom/kubernetes/config$ kubectl get serviceaccounts --all-namespaces
NAMESPACE NAME SECRETS AGE
default default 1 15d
kube-public default 1 15d
kube-system default 1 15d
kube-system io-rancher-system 1 15d
ubuntu@sdnc-k8s:~/oom/kubernetes/config$

ONAP namespace 

ubuntu@sdnc-k8s:~/oom/kubernetes/config$ kubectl get namespaces
NAME STATUS AGE
default Active 15d
kube-public Active 15d
kube-system Active 15d
onap Active 2d
ubuntu@sdnc-k8s:~/oom/kubernetes/config$ kubectl delete namespace onap
namespace "onap" deleted
ubuntu@sdnc-k8s:~/oom/kubernetes/config$ kubectl get namespaces
NAME STATUS AGE
default Active 15d
kube-public Active 15d
kube-system Active 15d
onap Terminating 2d
ubuntu@sdnc-k8s:~/oom/kubernetes/config$ kubectl get namespaces
NAME STATUS AGE
default Active 15d
kube-public Active 15d
kube-system Active 15d
ubuntu@sdnc-k8s:~/oom/kubernetes/config$

ONAP release

ubuntu@sdnc-k8s:~/oom/kubernetes/config$ helm ls --all
NAME REVISION UPDATED STATUS CHART NAMESPACE
onap-config 1 Tue Nov 21 17:07:13 2017 DEPLOYED config-1.1.0 onap
ubuntu@sdnc-k8s:~/oom/kubernetes/config$ helm delete onap-config --purge
release "onap-config" deleted
ubuntu@sdnc-k8s:~/oom/kubernetes/config$ helm ls --all
ubuntu@sdnc-k8s:~/oom/kubernetes/config$

3
  • Delete the shared folder

 sudo rm -rf /dockerdata-nfs/onap



Scripts


The following scripts are intended to be placed under the {$OOM}/kubernetes/oneclick/tools directory; they simplify the procedures above by automating them.

autoCreateOnapConfig
#!/bin/bash
########################################################################################
# This script wraps the {$OOM}/kubernetes/config/createConfig.sh script and            #
# terminates only when the ONAP configuration has reached Completed status.            #
#                                                                                      #
# Before using it, do the following to prepare the bash file:                          #
#   1. cd {$OOM}/kubernetes/oneclick/tools                                             #
#   2. vi autoCreateOnapConfig.bash                                                    #
#   3. paste the full content here into autoCreateOnapConfig.bash and save the file    #
#   4. chmod 777 autoCreateOnapConfig.bash                                             #
#                                                                                      #
# To run it, enter the following command:                                              #
#   ./autoCreateOnapConfig.bash <namespace, default is "onap">                         #
########################################################################################

NS=$1
if [[ -z $NS ]]
then
  echo "Namespace is not specified, use onap namespace."
  NS="onap"
fi


echo "Create $NS config under config directory..."
cd ../../config
./createConfig.sh -n $NS
cd -


echo "...done : kubectl get namespace
-----------------------------------------------
>>>>>>>>>>>>>> k8s namespace"
kubectl get namespace


echo "
-----------------------------------------------
>>>>>>>>>>>>>> helm : helm ls --all"
helm ls --all


echo "
-----------------------------------------------
>>>>>>>>>>>>>> pod : kubectl get pods -n $NS -a"
kubectl get pods -n $NS -a


while true
do
  echo "wait for $NS config pod to reach Completed status"
  sleep 5
  echo "-----------------------------------------------"
  kubectl get pods -n $NS -a

  status=`kubectl get pods -n $NS -a |grep config |xargs echo | cut -d' ' -f3`

  if [ "$status" = "Completed" ]
  then
    echo "$NS config is Completed!!!"
    break
  fi

  if [ "$status" = "Error" ]
  then
    echo "$NS config failed with Error!!!
POD logs are:"
    kubectl logs config -n $NS -f
    break
  fi
done
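The `grep … | xargs … | cut` pipeline above is sensitive to column spacing in the kubectl output; parsing the STATUS column with awk is a bit more robust. A minimal sketch, tested against a canned pod listing (the sample output is illustrative):

```shell
# Extract the STATUS column of the config pod from `kubectl get pods` output.
config_status() {
  echo "$1" | awk '$1 == "config" {print $3}'
}

# In practice: config_status "$(kubectl get pods -n $NS -a)"
sample="NAME      READY     STATUS      RESTARTS   AGE
config    0/1       Completed   0          5m"
config_status "$sample"   # prints Completed
```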
autoCleanOnapConfig
#!/bin/bash
########################################################################################
# This script wraps the {$OOM}/kubernetes/oneclick/deleteAll.bash script along with    #
# the following steps to clean up the ONAP configuration:                              #
#     - remove the ONAP namespace                                                      #
#     - remove the ONAP release                                                        #
#     - remove the ONAP shared directory                                               #
#                                                                                      #
# Before using it, do the following to prepare the bash file:                          #
#   1. cd {$OOM}/kubernetes/oneclick/tools                                             #
#   2. vi autoCleanOnapConfig.bash                                                     #
#   3. paste the full content here into autoCleanOnapConfig.bash and save the file     #
#   4. chmod 777 autoCleanOnapConfig.bash                                              #
#                                                                                      #
# To run it, enter the following command:                                              #
#   ./autoCleanOnapConfig.bash <namespace, default is "onap">                          #
########################################################################################


NS=$1
if [[ -z $NS ]]
then
  echo "Namespace is not specified, use onap namespace."
  NS="onap"
fi

echo "Clean up $NS configuration"
cd ..
./deleteAll.bash -n $NS
cd -


echo "----------------------------------------------
Force remove namespace..."
kubectl delete namespace $NS
echo "...done : kubectl get namespace
-----------------------------------------------
>>>>>>>>>>>>>> k8s namespace"
kubectl get namespace
while [[ ! -z `kubectl get namespace|grep $NS` ]]
do
  echo "Wait for namespace $NS to be deleted
-----------------------------------------------
>>>>>>>>>>>>>> k8s namespace"
  kubectl get namespace
  sleep 2
done


echo "Force delete helm process ..."
helm delete $NS-config --purge --debug
echo "...done : helm ls --all
 -----------------------------------------------
>>>>>>>>>>>>>> helm"
helm ls --all

echo "Remove $NS dockerdata..."
sudo rm -rf /dockerdata-nfs/$NS
echo "...done : ls -altr /dockerdata-nfs
 -----------------------------------------------
>>>>>>>>>>>>>> /dockerdata-nfs directory"
ls -altr /dockerdata-nfs
autoDeploySdnc
#!/bin/bash
########################################################################################
# This script wraps the {$OOM}/kubernetes/oneclick/createAll.bash script along with    #
# the following steps to deploy the ONAP SDNC application:                             #
#     - wait until sdnc-0 is running properly with both (2) containers up              #
#                                                                                      #
# Before using it, do the following to prepare the bash file:                          #
#   1. cd {$OOM}/kubernetes/oneclick/tools                                             #
#   2. vi autoDeploySdnc.bash                                                          #
#   3. paste the full content here into autoDeploySdnc.bash and save the file          #
#   4. chmod 777 autoDeploySdnc.bash                                                   #
#                                                                                      #
# To run it, enter the following command:                                              #
#   ./autoDeploySdnc.bash                                                              #
########################################################################################

cd ..
echo "Deploy SDNC..."
./createAll.bash -n onap -a sdnc
returnCode=$?
echo "deploy result: $returnCode"
cd -

if [ "$returnCode" != "0" ]
then
  echo "Abort..."
  exit
fi

echo "...done
-----------------------------------------------
>>>>>>>>>>>>>> pod : kubectl get pods --all-namespaces -a -o wide"
kubectl get pods --all-namespaces -a -o wide

while true
do
  echo "wait for onap sdnc-0 to reach fully running state"
  sleep 5
  echo "-----------------------------------------------"
  kubectl get pods --all-namespaces -a -o wide

  status=`kubectl get pods --all-namespaces -a |grep sdnc-0 |xargs echo | cut -d' ' -f3`
  if [ "$status" = "2/2" ]
  then
    echo "onap sdnc-0 is running!!!"
    break
  fi
done
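The sdnc-0 readiness check above keys off the READY column with the same fragile pipeline; an awk-based sketch that could be dropped into the loop instead (the canned sample listing is illustrative):

```shell
# Extract the READY column of sdnc-0 from `kubectl get pods --all-namespaces` output.
sdnc_ready() {
  echo "$1" | awk '$2 == "sdnc-0" {print $3}'
}

# In practice: sdnc_ready "$(kubectl get pods --all-namespaces -a)"
sample="NAMESPACE   NAME     READY     STATUS    RESTARTS   AGE
onap-sdnc   sdnc-0   2/2       Running   0          3m"
sdnc_ready "$sample"   # prints 2/2
```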
autoCleanSdnc
#!/bin/bash
########################################################################################
# This script wraps the {$OOM}/kubernetes/oneclick/deleteAll.bash script along with    #
# the following steps to fully un-deploy the ONAP SDNC application:                    #
#     - force remove the clusterrolebinding for onap-sdnc                              #
#     - force remove the namespace for onap-sdnc                                       #
#     - force remove the release for onap-sdnc                                         #
#     - wait until the onap-sdnc namespace is removed                                  #
#                                                                                      #
# Before using it, do the following to prepare the bash file:                          #
#   1. cd {$OOM}/kubernetes/oneclick/tools                                             #
#   2. vi autoCleanSdnc.bash                                                           #
#   3. paste the full content here into autoCleanSdnc.bash and save the file           #
#   4. chmod 777 autoCleanSdnc.bash                                                    #
#                                                                                      #
# To run it, enter the following command:                                              #
#   ./autoCleanSdnc.bash                                                               #
########################################################################################

cd ..
./deleteAll.bash -n onap -a sdnc
cd -



echo "----------------------------------------------
Remove clusterrolebinding..."
kubectl delete clusterrolebinding onap-sdnc-admin-binding
echo "...done : kubectl get clusterrolebinding"
kubectl get clusterrolebinding

echo "Remove onap-sdnc namespace..."
kubectl delete namespaces onap-sdnc
echo "...done : kubectl get namespaces"
kubectl get namespaces

echo "Delete onap-sdnc release..."
helm delete onap-sdnc --purge
echo "...done: helm ls --all"
helm ls --all


while true
do
  echo "wait for onap-sdnc namespace to be removed"
  sleep 5
  echo "-----------------------------------------------"
  kubectl get namespaces

  sdncCount=`kubectl get namespaces | grep onap-sdnc | wc -l`
  if [ "$sdncCount" = "0" ]
  then
    echo "sdnc removed!!!"
    break
  fi
done

7 Comments

  1. I think the version of SDNC that you are using is out of date.

    There should be containers for ueb-listener and dmaap-listener from the Amsterdam release.

    Will this work for the non-voting cluster members in the geo-redundant site ?

    Brian


    1. Yes, Brian Freeman, we are aware that the SDNC image is out of date; we will update the image reference in the OOM kubernetes/sdnc/values.yaml file a bit later.

      Good point about the `ueb-listener` and `dmaap-listener` containers; we will take that into account.


      Hao Kuang, would you please answer the question:

      Will this work for the non-voting cluster members in the geo-redundant site ?

    2. Good point. We discussed ueb-listener and dmaap-listener internally before. A possible solution could be a message-bus container behind its own service, but this needs more investigation.


      For the second question, theoretically it should work, but we haven't tried it yet. For a geo-redundant site, a possible solution is to utilize Kubernetes clusters: for example, we could have one node (VM) X in place A and one node (VM) Y in place B. Starting from version 1.2, Kubernetes supports nodeAffinity, which provides the ability to use matchExpressions to deploy pods on specific nodes. Based on this, we could deploy 3 voting nodes on X and 3 non-voting nodes on Y. The drawback is that places A and B would need to be very close to each other; otherwise federation has to be considered.

      Reference: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
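For reference, a nodeAffinity stanza of the kind described in the comment above might look like the following pod-spec fragment; the `site` label key and its values are hypothetical and not part of the SDN-C charts:

```yaml
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: site          # hypothetical node label applied by the operator
            operator: In
            values:
            - placeA           # voting members land on nodes labeled site=placeA
```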


  2. Opendaylight implements a RAFT protocol for replicating data and picking the shard leader so I doubt that K8 alone can do the stateful data replication and consistency needed. My question was about the configuration of Opendaylight clustering in startODL.sh etc. so that the mate site that has non-voting members in the ODL cluster (not to be confused with the K8 cluster)

    1. I guess you are talking about turning on persistence of data replication for the ODL data cluster. A K8S cluster can do data replication among nodes as you said, but at the application level the ODL cluster, which uses Akka's implementation of RAFT, still has to be used. Assume we have three SDN-C pods and two K8S nodes. My concern is that if the MD-SAL data of each pod is persisted on each K8S node, then since the MD-SAL replicas are identical, we waste extra storage saving the same data. Maybe we can use an external storage cluster system.

      ---

      Currently, this solution doesn't include non-voting members. We thought it would be better to test that with a geo-redundant site using K8S.

  3. I pulled the oom latest source, but it does not contain the oom/kubernetes/oneclick directory

    I am now stuck at the create onap config step.

    How can I proceed ?

    cd {$OOM}/kubernetes/oneclick
    source setenv.bash
    ./createAll.bash -n onap -a sdnc


    1. Beijing switched to a Helm-based install about 6-8 weeks ago; those instructions are for an earlier version of master.

      /michael