...
Clone the OOM project from ONAP gerrit
On the node where you have installed Helm (from 3. Set Up the Undercloud), run the following command to clone the OOM project from ONAP gerrit. You may run it in any directory you prefer; that directory will be referred to as "{$OOM}" in this page.
...
Info |
---|
The source of the new startODL.sh script, gerrit change 25475, has been merged into sdnc/oam project on December 15th, 2017. Skip this section if your SDN-C image includes this change. |
Do the following to get the new startODL.sh script, which provides the ODL clustering configuration for the SDN-C cluster.
# | Purpose | Command Examples |
---|
1 | Get the new startODL.sh script content | Go to gerrit change 25475 and click on installation/sdnc/src/main/scripts/startODL.sh under the Files section to view the changed script. Click the Download button to download the startODL_new.sh.zip file, extract the sh file inside the zip, and copy its content (to be used in step 2). |
2 | Create the new startODL.sh on the Kubernetes node VM | mkdir -p /dockerdata-nfs/cluster/script vi /dockerdata-nfs/cluster/script/startODL.sh Paste the content copied in step 1 into this file. |
3 | Give execution permission to the new startODL.sh script | chmod 777 /dockerdata-nfs/cluster/script/startODL.sh sudo chown $(id -u):$(id -g) /dockerdata-nfs/cluster/script/startODL.sh
|
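Steps 2 and 3 above can be sanity-checked with a short shell sketch. It uses a temporary directory as a stand-in for /dockerdata-nfs/cluster/script; on the real Kubernetes node you would use the actual path (and sudo where needed):

```shell
# Stand-in check for the startODL.sh placement and permission steps.
# A temp dir replaces /dockerdata-nfs/cluster/script for illustration only.
dir=$(mktemp -d)
mkdir -p "$dir/cluster/script"
printf '#!/bin/bash\necho placeholder\n' > "$dir/cluster/script/startODL.sh"
chmod 777 "$dir/cluster/script/startODL.sh"
chown "$(id -u):$(id -g)" "$dir/cluster/script/startODL.sh"
# The script must be executable, or the copy mounted into the pod will fail to run.
[ -x "$dir/cluster/script/startODL.sh" ] && echo "startODL.sh is executable"
```

If the final check prints nothing, the chmod step did not take effect.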
Get SDN-C Cluster Templates From Gerrit Topic SDNC-163
Info |
---|
The source of the templates, gerrit change 25467, has been merged into the sdnc/oam project on December 20th, 2017. Skip steps 1 and 2 if your cloned OOM project already includes this change. Skip step 3 if you skipped the previous section (adding the startODL.sh script). Skip step 4 if you don't want to add/deploy extra features/bundles/packages. Step 5 is important: it determines the number of sdnc and db pods. |
# | Purpose | Command and Examples |
---|
1 | Get the git fetch command for the shared templates | Go to gerrit change 25467. Click the Download downward arrow, select anonymous http from the drop list at the bottom right corner, and click the clipboard icon in the same line as Checkout to copy the git commands (which include the git fetch and checkout commands). Expand |
---|
title | An example of the git commands with patch set 23 |
---|
| git fetch https://gerrit.onap.org/r/oom refs/changes/67/25467/23 && git checkout FETCH_HEAD |
|
2 | Fetch the shared template into the oom directory on the Kubernetes node VM | cd {$OOM} Execute the git commands copied in step 1. |
3 | Link the new startODL.sh | Info |
---|
Skip this change if you have skipped the section for getting the new startODL.sh script. Be careful when editing YAML files: they are sensitive to the number of spaces, and copy/paste from a browser can alter whitespace. |
vi kubernetes/sdnc/templates/sdnc-statefulset.yaml Make the following changes: Purpose | Changes |
---|
mount point for new startODL.sh script | Field | Added Value |
---|
.spec.template.spec.containers.volumeMounts (of container sdnc-controller-container) | - mountPath: /opt/onap/sdnc/bin/startODL.sh name: sdnc-startodl | .spec.template.spec.volumes | - name: sdnc-startodl hostPath: path: /dockerdata-nfs/cluster/script/startODL.sh |
|
|
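Reassembled from the table above, the added fragments sit in sdnc-statefulset.yaml roughly as follows. The indentation here is a sketch; align it with the surrounding file:

```yaml
# under .spec.template.spec.containers of sdnc-controller-container, add to volumeMounts:
- mountPath: /opt/onap/sdnc/bin/startODL.sh
  name: sdnc-startodl
# under .spec.template.spec, add to volumes:
- name: sdnc-startodl
  hostPath:
    path: /dockerdata-nfs/cluster/script/startODL.sh
```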
4 | Link the ODL deploy directory | Info |
---|
If you are not going to use the test bundle to test out SDN-C cluster and load balancing, you can skip this step. |
ODL automatically installs bundles/packages that are put under its deploy directory. This mount point lets you drop a bundle/package into the /dockerdata-nfs/cluster/deploy directory on the Kubernetes node, and it will automatically be installed in the sdnc pods (under the /opt/opendaylight/current/deploy directory).
vi kubernetes/sdnc/templates/sdnc-statefulset.yaml Make the following changes: Purpose | Changes |
---|
mount point for ODL deploy directory | Field | Added Value |
---|
.spec.template.spec.containers.volumeMounts (of container sdnc-controller-container) | - mountPath: /opt/opendaylight/current/deploy name: sdnc-deploy | .spec.template.spec.volumes | - name: sdnc-deploy hostPath: path: /dockerdata-nfs/cluster/deploy |
|
|
5 | Enable cluster configuration | vi kubernetes/sdnc/values.yaml Change the following fields to the new values: field | new value | old value |
---|
enableODLCluster | true | false | numberOfODLReplicas | 3 | 1 | numberOfDbReplicas | 2 | 1 |
|
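The three edits in step 5 can also be done non-interactively with sed. The sketch below creates a minimal stand-in values.yaml so it is self-contained; on a real setup you would run the sed commands against {$OOM}/kubernetes/sdnc/values.yaml instead:

```shell
# Flip the cluster fields in values.yaml (sed instead of vi).
# A stand-in file is created here for illustration only.
cat > /tmp/values.yaml <<'EOF'
enableODLCluster: false
numberOfODLReplicas: 1
numberOfDbReplicas: 1
EOF

sed -i \
  -e 's/^enableODLCluster: false/enableODLCluster: true/' \
  -e 's/^numberOfODLReplicas: 1/numberOfODLReplicas: 3/' \
  -e 's/^numberOfDbReplicas: 1/numberOfDbReplicas: 2/' \
  /tmp/values.yaml

cat /tmp/values.yaml
```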
...
Info |
---|
Skip the following section if any of the following conditions matches. |
Verify (from the master node): on the node where you have configured the nfs server (from step 2. Share the /dockerdata-nfs Folder between Kubernetes Nodes), run the following:
# | Purpose | Command and Example |
---|
1 | Find the node name | Expand |
---|
title | find nfs server node |
---|
| Run the command "ps -ef|grep nfs"; you should see that: - the node with the nfs server runs nfsd processes:
ubuntu@sdnc-k8s:~$ ps -ef|grep nfs root 3473 2 0 Dec07 ? 00:00:00 [nfsiod] root 11072 2 0 Dec06 ? 00:00:00 [nfsd4_callbacks] root 11074 2 0 Dec06 ? 00:00:00 [nfsd] root 11075 2 0 Dec06 ? 00:00:00 [nfsd] root 11076 2 0 Dec06 ? 00:00:00 [nfsd] root 11077 2 0 Dec06 ? 00:00:00 [nfsd] root 11078 2 0 Dec06 ? 00:00:00 [nfsd] root 11079 2 0 Dec06 ? 00:00:03 [nfsd] root 11080 2 0 Dec06 ? 00:00:13 [nfsd] root 11081 2 0 Dec06 ? 00:00:42 [nfsd] ubuntu@sdnc-k8s:~$
- node with nfs client runs nfs svc process:
ubuntu@sdnc-k8s-2:~$ ps -ef|grep nfs ubuntu 5911 5890 0 20:10 pts/0 00:00:00 grep --color=auto nfs root 18739 2 0 Dec06 ? 00:00:00 [nfsiod] root 18749 2 0 Dec06 ? 00:00:00 [nfsv4.0-svc] ubuntu@sdnc-k8s-2:~$
|
kubectl get node Expand |
---|
| ubuntu@sdnc-k8s:~$ kubectl get node NAME STATUS ROLES AGE VERSION sdnc-k8s Ready master 6d v1.8.4 sdnc-k8s-2 Ready <none> 6d v1.8.4 ubuntu@sdnc-k8s:~$ |
|
2 | Set label on the node | kubectl label nodes <NODE_NAME_FROM_LAST_STEP> disktype=ssd Expand |
---|
| ubuntu@sdnc-k8s:~$ kubectl label nodes sdnc-k8s disktype=ssd node "sdnc-k8s" labeled ubuntu@sdnc-k8s:~$ |
|
3 | Check that the label has been set on the node | kubectl get node --show-labels |
4 | Update the nfs-provisioner pod template to force it to run on the nfs server node | In the nfs-provisoner-deployment.yaml file, add "spec.template.spec.nodeSelector" for the "nfs-provisioner" pod Expand |
---|
title | An example of the nfs-provisioner pod with nodeSelector |
---|
|
|
|
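The nodeSelector addition in step 4 can be sketched as follows, using the disktype=ssd label set in step 2. Only the added lines are shown; merge them into the existing pod template of the nfs-provisioner deployment:

```yaml
spec:
  template:
    spec:
      nodeSelector:
        disktype: ssd
```

With this selector in place, the scheduler will only place the nfs-provisioner pod on nodes carrying the disktype=ssd label, i.e. the nfs server node labeled earlier.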
...
Run the following commands once on the master node before creating the ONAP configuration.
cd {$OOM}/kubernetes/config
cp onap-parameters-sample.yaml onap-parameters.yaml
...
Run createConfig
Info |
---|
title | To simplify steps in this section |
---|
|
You can skip the steps in this section by following the instructions in autoCreateOnapConfig of the Scripts section to - create the {$OOM}/kubernetes/oneclick/tools/autoCreateOnapConfig.bash file
- run it and wait until the script completes
|
...
Info |
---|
title | To simplify steps in this section |
---|
|
You can skip the steps in this section by following the instructions in autoDeploySdnc of the Scripts section to - create the {$OOM}/kubernetes/oneclick/tools/autoDeploySdnc.bash file
- run it and wait until the script completes
|
Execute the following on the master node.
# | Purpose | Command and Examples |
---|
0 | Set the OOM Kubernetes config environment (if you have already set the OOM Kubernetes config environment in the same terminal, you can skip this step) | cd {$OOM}/kubernetes/oneclick source setenv.bash |
1 | Run the createAll script to deploy the SDN-C application | cd {$OOM}/kubernetes/oneclick ./createAll.bash -n onap -a sdnc Expand |
---|
| ********** Creating instance 1 of ONAP with port range 30200 and 30399 ********** Creating ONAP: ********** Creating deployments for sdnc **********
Creating namespace ********** namespace "onap-sdnc" created Creating service account ********** clusterrolebinding "onap-sdnc-admin-binding" created Creating registry secret ********** secret "onap-docker-registry-key" created Creating deployments and services ********** NAME: onap-sdnc LAST DEPLOYED: Thu Nov 23 20:13:32 2017 NAMESPACE: onap STATUS: DEPLOYED RESOURCES: ==> v1/PersistentVolume NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM REASON AGE onap-sdnc-db 2Gi RWX Retain Bound onap-sdnc/sdnc-db 1s ==> v1/PersistentVolumeClaim NAME STATUS VOLUME CAPACITY ACCESSMODES AGE sdnc-db Bound onap-sdnc-db 2Gi RWX 1s ==> v1/Service NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE dbhost None <none> 3306/TCP 1s sdnctldb01 None <none> 3306/TCP 1s sdnctldb02 None <none> 3306/TCP 1s sdnc-dgbuilder 10.43.97.219 <nodes> 3000:30203/TCP 1s sdnhost 10.43.99.163 <nodes> 8282:30202/TCP,8201:30208/TCP 1s sdnc-portal 10.43.72.72 <nodes> 8843:30201/TCP 1s ==> extensions/v1beta1/Deployment NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE sdnc-dgbuilder 1 1 1 0 1s ==> apps/v1beta1/StatefulSet NAME DESIRED CURRENT AGE sdnc-dbhost 2 1 1s sdnc 3 3 1s sdnc-portal 2 2 1s **** Done ****
|
|
| Ensure that the SDN-C application has started | Use the kubectl get pods command to monitor the SDN-C startup; you should observe: - sdnc-dbhost-0 pod starts and gets into Running STATUS first,
- while
- sdnc-dbhost-1 pod does not exist and
- sdnc, sdnc-dgbuilder and sdnc-portal pods are staying in Init:0/1 STATUS
- once sdnc-dbhost-0 pod is fully started with the READY "1/1",
- sdnc-dbhost-1 will be starting from ContainerCreating STATUS and runs up to Running STATUS
- once sdnc-dbhost-1 pod is in Running STATUS,
- sdnc pods will be starting from PodInitializing STATUS and end up with Running STATUS in parallel
- while
- sdnc-dgbuilder and sdnc-portal pods are staying in Init:0/1 STATUS
- once the sdnc pods are in Running STATUS,
- sdnc-dgbuilder and sdnc-portal will be starting from PodInitializing STATUS and end up with Running STATUS in parallel
Expand |
---|
title | Example of start up status changes through "kubectl get pods --all-namespaces -a -o wide" |
---|
|
|
|
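The startup ordering above amounts to polling pod status until Running. A self-contained sketch of that loop follows; the canned status list stands in for repeated kubectl calls, which on a live cluster would be something like `kubectl get pod sdnc-dbhost-0 -n onap-sdnc -o jsonpath='{.status.phase}'` (or plain `kubectl get pods` to see the full STATUS column):

```shell
# Polling-loop sketch of the startup monitoring above.
# The canned list below stands in for repeated kubectl status queries.
polls=0
for status in "Pending" "ContainerCreating" "Running"; do
  polls=$((polls + 1))
  echo "poll $polls: sdnc-dbhost-0 is $status"
  [ "$status" = "Running" ] && break
  # in practice: sleep 10 between kubectl calls
done
echo "sdnc-dbhost-0 reached Running after $polls polls"
```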
2 | Validate that all SDN-C pods and services are created properly | helm ls --all Expand |
---|
title | Example of SDNC release |
---|
| ubuntu@sdnc-k8s:~$ helm ls --all NAME REVISION UPDATED STATUS CHART NAMESPACE onap-config 1 Tue Nov 21 17:07:13 2017 DEPLOYED config-1.1.0 onap onap-sdnc 1 Thu Nov 23 20:13:32 2017 DEPLOYED sdnc-0.1.0 onap ubuntu@sdnc-k8s:~$ |
kubectl get namespace Expand |
---|
title | Example of SDNC namespace |
---|
| ubuntu@sdnc-k8s:~$ kubectl get namespaces NAME STATUS AGE default Active 15d kube-public Active 15d kube-system Active 15d onap Active 2d onap-sdnc Active 12m ubuntu@sdnc-k8s:~$ |
kubectl get deployment --all-namespaces Expand |
---|
title | Example of SDNC deployment |
---|
| ubuntu@sdnc-k8s-2:~$ kubectl get deployment --all-namespaces NAMESPACE NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE kube-system heapster 1 1 1 1 15d kube-system kube-dns 1 1 1 1 15d kube-system kubernetes-dashboard 1 1 1 1 15d kube-system monitoring-grafana 1 1 1 1 15d kube-system monitoring-influxdb 1 1 1 1 15d kube-system tiller-deploy 1 1 1 1 15d onap-sdnc sdnc-dgbuilder 1 1 1 0 26m ubuntu@sdnc-k8s-2:~$ |
kubectl get clusterrolebinding --all-namespaces Expand |
---|
title | Example of SDNC cluster role binding |
---|
| ubuntu@sdnc-k8s:~$ kubectl get clusterrolebinding --all-namespaces NAMESPACE NAME AGE addons-binding 15d onap-sdnc-admin-binding 13m ubuntu@sdnc-k8s:~$ |
kubectl get serviceaccounts --all-namespaces Expand |
---|
title | Example of SDNC service account |
---|
| ubuntu@sdnc-k8s:~$ kubectl get serviceaccounts --all-namespaces NAMESPACE NAME SECRETS AGE default default 1 15d kube-public default 1 15d kube-system default 1 15d kube-system io-rancher-system 1 15d onap default 1 2d onap-sdnc default 1 14m ubuntu@sdnc-k8s:~$ |
kubectl get service -n onap-sdnc Expand |
---|
title | Example of all SDNC services |
---|
|
|
kubectl get pods --all-namespaces -a Expand |
---|
title | Example of all SDNC pods |
---|
|
|
docker ps |grep sdnc Expand |
---|
title | Example of SDNC docker container |
---|
| On Server 1: $ docker ps |grep sdnc |wc -l 9 $ docker ps |grep sdnc ebcb2f7f1a4a docker.elastic.co/beats/filebeat@sha256:fe7602b641ed8ee288f067f7b31ebde14644c4722d9f7960f176d621097a5942 "filebeat -e" About an hour ago Up About an hour k8s_filebeat-onap_sdnc-1_onap-sdnc_263071db-e69e-11e7-a01e-026c942e0e8c_0 55a82019ce10 nexus3.onap.org:10001/onap/sdnc-image@sha256:f5f85d498c397677fa2df850d9a2d8149520c5be3dce6c646c8fe4f25f9bab10 "bash -c 'sed -i 's/d" About an hour ago Up About an hour k8s_sdnc-controller-container_sdnc-1_onap-sdnc_263071db-e69e-11e7-a01e-026c942e0e8c_0 bbdfdfc2b1f0 gcr.io/google-samples/xtrabackup@sha256:29354f70c9d9207e757a1bae6a4cbf2f57a56b18fe5c2b0acc1198a053b24b38 "bash -c 'set -ex\ncd " About an hour ago Up About an hour k8s_xtrabackup_sdnc-dbhost-1_onap-sdnc_6e9a6c9a-e69e-11e7-a01e-026c942e0e8c_0 26854595164d mysql@sha256:1f95a2ba07ea2ee2800ec8ce3b5370ed4754b0a71d9d11c0c35c934e9708dcf1 "docker-entrypoint.sh" About an hour ago Up About an hour k8s_sdnc-db-container_sdnc-dbhost-1_onap-sdnc_6e9a6c9a-e69e-11e7-a01e-026c942e0e8c_0 b577493b5725 gcr.io/google_containers/pause-amd64:3.0 "/pause" About an hour ago Up About an hour k8s_POD_sdnc-dbhost-1_onap-sdnc_6e9a6c9a-e69e-11e7-a01e-026c942e0e8c_0 14dcc0985259 gcr.io/google_containers/pause-amd64:3.0 "/pause" About an hour ago Up About an hour k8s_POD_sdnc-1_onap-sdnc_263071db-e69e-11e7-a01e-026c942e0e8c_1 b52be823997b quay.io/kubernetes_incubator/nfs-provisioner@sha256:b5328a3825032d7e1719015260260347bda99c5d830bcd5d9da1175e7d1da989 "/nfs-provisioner -pr" About an hour ago Up About an hour k8s_nfs-provisioner_nfs-provisioner-860992464-dz5bt_onap-sdnc_261c3d30-e69e-11e7-a01e-026c942e0e8c_0 dc6dfd3fde3b gcr.io/google_containers/pause-amd64:3.0 "/pause" About an hour ago Up About an hour k8s_POD_nfs-provisioner-860992464-dz5bt_onap-sdnc_261c3d30-e69e-11e7-a01e-026c942e0e8c_0 006aaa34c5af mysql@sha256:1f95a2ba07ea2ee2800ec8ce3b5370ed4754b0a71d9d11c0c35c934e9708dcf1 "docker-entrypoint.sh" 32 hours 
ago Up 32 hours k8s_sdnc-db-container_sdnc-dbhost-1_onap-sdnc_d1cc3b28-e59e-11e7-a01e-026c942e0e8c_0 On Server 2: $ docker ps |grep sdnc|wc -l 20 $ docker ps |grep sdnc 5155b7fcd0b9 nexus3.onap.org:10001/onap/admportal-sdnc-image@sha256:9cfdfa8aac18da5571479e0c767b92dbc72a3a5b475be37bd84fb65400696564 "/bin/bash -c 'cd /op" About an hour ago Up About an hour k8s_sdnc-portal-container_sdnc-portal-1380828306-7xxdz_onap-sdnc_261d5989-e69e-11e7-a01e-026c942e0e8c_0 d3a58f2bb662 nexus3.onap.org:10001/onap/ccsdk-dgbuilder-image@sha256:c52ad4dacc00da4a882d31b7022e59e3dd4bd4ec104380910949bce4d2d0c7b9 "/bin/bash -c 'cd /op" About an hour ago Up About an hour k8s_sdnc-dgbuilder-container_sdnc-dgbuilder-3612718752-m189c_onap-sdnc_262301b1-e69e-11e7-a01e-026c942e0e8c_0 275e0173f109 nexus3.onap.org:10001/onap/sdnc-image@sha256:f5f85d498c397677fa2df850d9a2d8149520c5be3dce6c646c8fe4f25f9bab10 "bash -c 'sed -i 's/d" About an hour ago Up About an hour k8s_sdnc-controller-container_sdnc-2_onap-sdnc_263d495f-e69e-11e7-a01e-026c942e0e8c_0 2566062bd408 docker.elastic.co/beats/filebeat@sha256:fe7602b641ed8ee288f067f7b31ebde14644c4722d9f7960f176d621097a5942 "filebeat -e" About an hour ago Up About an hour k8s_filebeat-onap_sdnc-2_onap-sdnc_263d495f-e69e-11e7-a01e-026c942e0e8c_0 53df8341738c docker.elastic.co/beats/filebeat@sha256:fe7602b641ed8ee288f067f7b31ebde14644c4722d9f7960f176d621097a5942 "filebeat -e" About an hour ago Up About an hour k8s_filebeat-onap_sdnc-0_onap-sdnc_2629ed07-e69e-11e7-a01e-026c942e0e8c_0 e147f48ffb5b nexus3.onap.org:10001/onap/sdnc-image@sha256:f5f85d498c397677fa2df850d9a2d8149520c5be3dce6c646c8fe4f25f9bab10 "bash -c 'sed -i 's/d" About an hour ago Up About an hour k8s_sdnc-controller-container_sdnc-0_onap-sdnc_2629ed07-e69e-11e7-a01e-026c942e0e8c_0 c5f63561e196 gcr.io/google-samples/xtrabackup@sha256:29354f70c9d9207e757a1bae6a4cbf2f57a56b18fe5c2b0acc1198a053b24b38 "bash -c 'set -ex\ncd " About an hour ago Up About an hour 
k8s_xtrabackup_sdnc-dbhost-0_onap-sdnc_265aef90-e69e-11e7-a01e-026c942e0e8c_0 4a0cc594d12d mysql@sha256:1f95a2ba07ea2ee2800ec8ce3b5370ed4754b0a71d9d11c0c35c934e9708dcf1 "docker-entrypoint.sh" About an hour ago Up About an hour k8s_sdnc-db-container_sdnc-dbhost-0_onap-sdnc_265aef90-e69e-11e7-a01e-026c942e0e8c_0 6c02ac51ea56 gcr.io/google_containers/pause-amd64:3.0 "/pause" About an hour ago Up About an hour k8s_POD_sdnc-dbhost-0_onap-sdnc_265aef90-e69e-11e7-a01e-026c942e0e8c_0 8eafb15b5e7c gcr.io/google_containers/pause-amd64:3.0 "/pause" About an hour ago Up About an hour k8s_POD_sdnc-2_onap-sdnc_263d495f-e69e-11e7-a01e-026c942e0e8c_0 887ac4e733e8 gcr.io/google_containers/pause-amd64:3.0 "/pause" About an hour ago Up About an hour k8s_POD_sdnc-0_onap-sdnc_2629ed07-e69e-11e7-a01e-026c942e0e8c_0 41044c2ed166 gcr.io/google_containers/pause-amd64:3.0 "/pause" About an hour ago Up About an hour k8s_POD_sdnc-dgbuilder-3612718752-m189c_onap-sdnc_262301b1-e69e-11e7-a01e-026c942e0e8c_0 6771b1319b20 gcr.io/google_containers/pause-amd64:3.0 "/pause" About an hour ago Up About an hour k8s_POD_sdnc-portal-1380828306-7xxdz_onap-sdnc_261d5989-e69e-11e7-a01e-026c942e0e8c_0 fd57d6e9577b mysql@sha256:1f95a2ba07ea2ee2800ec8ce3b5370ed4754b0a71d9d11c0c35c934e9708dcf1 "docker-entrypoint.sh" 2 hours ago Up 2 hours k8s_sdnc-db-container_sdnc-dbhost-0_onap-sdnc_7c3c3068-e699-11e7-a01e-026c942e0e8c_0 0be895c2ccad mysql@sha256:1f95a2ba07ea2ee2800ec8ce3b5370ed4754b0a71d9d11c0c35c934e9708dcf1 "docker-entrypoint.sh" 2 hours ago Up 2 hours k8s_sdnc-db-container_sdnc-dbhost-0_onap-sdnc_ea765fb6-e696-11e7-a01e-026c942e0e8c_0 7ddaf2cf9806 mysql@sha256:1f95a2ba07ea2ee2800ec8ce3b5370ed4754b0a71d9d11c0c35c934e9708dcf1 "docker-entrypoint.sh" 4 hours ago Up 4 hours k8s_sdnc-db-container_sdnc-dbhost-0_onap-sdnc_e04ab438-e689-11e7-a01e-026c942e0e8c_0 8ad307977830 mysql@sha256:1f95a2ba07ea2ee2800ec8ce3b5370ed4754b0a71d9d11c0c35c934e9708dcf1 "docker-entrypoint.sh" 25 hours ago Up 25 hours 
k8s_sdnc-db-container_sdnc-dbhost-0_onap-sdnc_87db0033-e5d9-11e7-a01e-026c942e0e8c_0 979b3ff11974 mysql@sha256:1f95a2ba07ea2ee2800ec8ce3b5370ed4754b0a71d9d11c0c35c934e9708dcf1 "docker-entrypoint.sh" 25 hours ago Up 25 hours k8s_sdnc-db-container_sdnc-dbhost-0_onap-sdnc_6c1b0064-e5d8-11e7-a01e-026c942e0e8c_0 b903e3e52f51 mysql@sha256:1f95a2ba07ea2ee2800ec8ce3b5370ed4754b0a71d9d11c0c35c934e9708dcf1 "docker-entrypoint.sh" 25 hours ago Up 25 hours k8s_sdnc-db-container_sdnc-dbhost-0_onap-sdnc_744c98a4-e5d3-11e7-a01e-026c942e0e8c_0 36857a112463 mysql@sha256:1f95a2ba07ea2ee2800ec8ce3b5370ed4754b0a71d9d11c0c35c934e9708dcf1 "docker-entrypoint.sh" 32 hours ago Up 32 hours k8s_sdnc-db-container_sdnc-dbhost-0_onap-sdnc_abe261cd-e59d-11e7-a01e-026c942e0e8c_0 $
|
|
3 | Validate that the SDN-C bundles are up | Expand |
---|
title | Enter pod container with 2 options |
---|
| Expand |
---|
title | Option 1: through pod name from anywhere |
---|
| Use command kubectl exec -it <POD_NAME_WITH_NAME_SPACE> bash
Expand |
---|
|
|
|
Expand |
---|
title | Option 2: through docker container ID from where the container is |
---|
| Use command docker exec -it <DOCKER_CONTAINER_ID> bash
Expand |
---|
|
|
|
|
Expand |
---|
title | Check SDNC bundles in ODL client |
---|
| Expand |
---|
|
|
Expand |
---|
|
|
|
|
4 | Validate that the SDN-C APIs are shown on the ODL RestConf page | Access the ODL RestConf page from the following URL: http://<Kubernetes-Master-Node-IP>:30202/apidoc/explorer/index.html
Expand |
---|
title | Example of SDNC APIs in ODL RestConf page |
---|
|
|
|
5 | Validate the SDN-C ODL cluster | Goal: Verify that the SDN-C ODL cluster is running properly. Prerequisites: - This test is run on one of your Kubernetes nodes
- Make sure python-pycurl is installed
- If not, on Ubuntu use "apt-get install python-pycurl" to install it
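The python-pycurl prerequisite can be checked up front with a one-liner. This sketch assumes monitor.py runs under the system "python" interpreter:

```shell
# Pre-check that pycurl is importable before starting monitor.py.
# Assumption: the same "python" interpreter that will run monitor.py.
if python -c 'import pycurl' 2>/dev/null; then
  pycurl_status="pycurl OK"
else
  pycurl_status="install python-pycurl first"
fi
echo "$pycurl_status"
```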
Clone ODL Integration-test project | git clone https://github.com/opendaylight/integration-test.git |
---|
Enter the cluster-monitor folder | cd integration-test/tools/clustering/cluster-monitor |
---|
Create cluster-monitor.bash script | vi cluster-monitor.bash Code Block |
---|
language | bash |
---|
title | Content of cluster-monitor.bash |
---|
linenumbers | true |
---|
collapse | true |
---|
| #!/bin/bash
########################################################################################
# This script wraps ODL monitor.py: it dynamically picks up the IPs of the ODL-       #
# clustered sdnc pods and updates the IP addresses in cluster.json, which feeds the   #
# ODL monitor.py.                                                                     #
# This script also changes the username and password in the cluster.json file.        #
#                                                                                     #
# If the sdnc pods' IP addresses are re-assigned, the running session of this script  #
# should be restarted.                                                                #
#                                                                                     #
# To run it, just enter the following command:                                        #
#   ./cluster-monitor.bash                                                            #
########################################################################################
# get IPs string by using kubectl
ips_string=$(kubectl get pods --all-namespaces -o wide | grep 'sdnc-[0-9]' | awk '{print $7}')
ip_list=($(echo ${ips_string} | tr ' ' '\n'))
# loop and replace existing IP
for ((i=0;i<=2;i++));
do
if [ "${ip_list[$i]}" == "<none>" ]; then
echo "Ip of deleted pod is not ready yet"
exit 1;
fi
let "j=$i+4"
sed -i -r "${j}s/(\b[0-9]{1,3}\.){3}[0-9]{1,3}\b/${ip_list[$i]}/" cluster.json
done
# replace port, username and password
sed -i 's/8181/8080/g' cluster.json
sed -i 's/username/admin/' cluster.json
sed -i 's/password/admin/' cluster.json
# start monitoring
python monitor.py |
This script fetches the IPs of all SDN-C pods and automatically updates the cluster.json file |
---|
Start the cluster monitor UI | ./cluster-monitor.bash Note: if the applications inside any of the three SDNC pods are not fully started, this script won't execute successfully, due to issues such as connection errors or value errors. Otherwise, you should see the monitoring UI as follows: |
---|
Use testCluster RPC to test SDN-C load sharing The testCluster-bundle.zip provides a testBundle which offers a testCluster API to help validate SDN-C RPC load sharing in the deployed SDN-C cluster. Do the following: - download testCluster-bundle.zip (by clicking on the hyperlinked text) and place it in the sdnc-deploy hostPath that was defined in .spec.volumes of the sdnc-statefulset.yaml file
- unzip testCluster-bundle.zip. If the unzip command is not installed, install it with "sudo apt install unzip" and then run "unzip testCluster-bundle.zip". Expand |
---|
| ubuntu@sdnc-k8s-1:~/cluster/deploy$ unzip testCluster-0.2.0.zip The program 'unzip' is currently not installed. You can install it by typing: sudo apt install unzip ubuntu@sdnc-k8s-1:~/cluster/deploy$ sudo apt install unzip Reading package lists... Done Building dependency tree Reading state information... Done Suggested packages: zip The following NEW packages will be installed: unzip 0 upgraded, 1 newly installed, 0 to remove and 117 not upgraded. Need to get 158 kB of archives. After this operation, 530 kB of additional disk space will be used. Get:1 http://nova.clouds.archive.ubuntu.com/ubuntu xenial/main amd64 unzip amd64 6.0-20ubuntu1 [158 kB] Fetched 158 kB in 0s (230 kB/s) Selecting previously unselected package unzip. (Reading database ... 93492 files and directories currently installed.) Preparing to unpack .../unzip_6.0-20ubuntu1_amd64.deb ... Unpacking unzip (6.0-20ubuntu1) ... Processing triggers for mime-support (3.59ubuntu1) ... Processing triggers for man-db (2.7.5-1) ... Setting up unzip (6.0-20ubuntu1) ... ubuntu@sdnc-k8s-1:~/cluster/deploy$ unzip testCluster-0.2.0.zip Archive: testCluster-0.2.0.zip inflating: testDataBroker-api-0.2.0-SNAPSHOT.jar inflating: testDataBroker-features-0.2.0-SNAPSHOT.jar inflating: testDataBroker-impl-0.2.0-SNAPSHOT.jar ubuntu@sdnc-k8s-1:~/cluster/deploy$ |
- As this hostPath is mounted as ODL's deploy directory, once the zip file is unzipped, the testBundle will be automatically loaded by ODL and the testCluster API will be available as an ODL RestConf API.
- testCluster API is accessible from (if you don't see it there, try deleting the SDNC pods; this will restart them)
Expand |
---|
title | ODL RestConf API Documentation |
---|
|
|
Expand |
---|
|
Code Block |
---|
language | actionscript3 |
---|
title | Example of postman code snippets |
---|
linenumbers | true |
---|
collapse | true |
---|
| POST /restconf/operations/testCluster:who-am-i HTTP/1.1
Host: ${KUBERNETES MASTER VM IP}:30202
Accept: application/json
Authorization: Basic YWRtaW46S3A4Yko0U1hzek0wV1hsaGFrM2VIbGNzZTJnQXc4NHZhb0dHbUp2VXkyVQ==
Cache-Control: no-cache
Postman-Token: 9683538b-de47-dec8-3e88-c491be9dd6ef
|
|
Expand |
---|
| curl -u admin:Kp8bJ4SXszM0WXlhak3eHlcse2gAw84vaoGGmJvUy2U -H 'Accept: application/json' -X POST 'http://${KUBERNETES MASTER VM IP}:30202/restconf/operations/testCluster:who-am-i' |
Expand |
---|
title | An example of testCluster API response |
---|
| {
"output": {
"node": "sdnc-2"
}
} |
|
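With the RPC reachable, load sharing can be confirmed by invoking testCluster:who-am-i repeatedly and tallying which node answers. A local sketch of the tallying step follows; the three canned JSON lines stand in for real responses, which on a live cluster you would collect by repeating the curl call above:

```shell
# Tally which ODL node answered each testCluster:who-am-i call.
# Canned responses stand in for real curl output in this sketch.
responses='{"output":{"node":"sdnc-0"}}
{"output":{"node":"sdnc-1"}}
{"output":{"node":"sdnc-0"}}'

node_counts=$(printf '%s\n' "$responses" \
  | sed -n 's/.*"node" *: *"\([^"]*\)".*/\1/p' \
  | sort | uniq -c)
echo "$node_counts"
```

Roughly even counts across the sdnc pods indicate that RPC load sharing is working; all answers coming from a single node would suggest it is not.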
...