Info |
---|
Steps described on this page are run as the non-root user "ubuntu". |
|
...
Clone the OOM project from ONAP gerrit
Run the following command on the master node to clone the OOM project from ONAP gerrit into any directory you prefer; that directory will be referred to as "{$OOM}" on this page.
git clone http://gerrit.onap.org/r/oom
SDN-C Cluster Deployment
Configure
...
SDN-C Cluster Deployment
We use Kubernetes replicas to achieve the SDN-C cluster deployment (see About SDN-C Clustering for the desired goal).
...
Get New startODL.sh Script From Gerrit Topic SDNC-163
...
Info |
---|
The source of the new startODL.sh script, gerrit change 25475, has been merged into the sdnc/oam project on December 15th, 2017. |
Do the following to get the new startODL.sh script, which provides the ODL clustering configuration for the SDN-C cluster.
# | Purpose | Command Examples |
---|---|---|
1 | Get the shared new startODL.sh script content | Go to gerrit change 25475, click on installation/sdnc/src/main/scripts/startODL.sh under the Files section to view the text of the script, or click on the Download button to download the startODL_new.sh.zip file, extract the sh file inside the zip, and rename it to "startODL.sh". |
2 | Create the new startODL.sh on the Kubernetes node VM | mkdir -p /dockerdata-nfs/cluster/script vi /dockerdata-nfs/cluster/script/startODL.sh paste the copied content from step 1 into this file |
3 | Give execution permission to the new startODL.sh script | chmod 777 /dockerdata-nfs/cluster/script/startODL.sh
|
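The steps above can be sketched as one shell sequence. The paths come from the table; the script body here is only a placeholder, not the real startODL.sh content from gerrit change 25475, and a temporary directory stands in for the node's root filesystem so the sketch is harmless to run anywhere:

```shell
# Sketch of steps 2-3: create the script directory, drop in startODL.sh, make it executable.
# BASE would be / on a real Kubernetes node; a temp dir is used here for safety.
BASE=$(mktemp -d)
mkdir -p "$BASE/dockerdata-nfs/cluster/script"
# Paste the real content from gerrit change 25475 instead of this placeholder:
printf '#!/bin/bash\necho "startODL placeholder"\n' > "$BASE/dockerdata-nfs/cluster/script/startODL.sh"
chmod 777 "$BASE/dockerdata-nfs/cluster/script/startODL.sh"
ls -l "$BASE/dockerdata-nfs/cluster/script/startODL.sh"
```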
Get SDN-C Cluster Templates From Gerrit Topic SDNC-163
Only do this if the SDN-C cluster code has not yet been merged into the gerrit OOM project.
Info |
---|
The source of the templates, gerrit change 25467, has been merged into the sdnc/oam project on December 20th, 2017. Skip steps 1 and 2 if your cloned OOM project includes this change. Skip step 3 if you skipped the previous section (adding the startODL.sh script). Skip step 4 if you don't want to add/deploy extra features/bundles/packages. Step 5 is important: it determines the number of sdnc and db pods. |
# | Purpose | Command and Examples |
---|---|---|
1 | Get the shared templates code git fetch command | Go to gerrit change 25467, click the Download downward arrow, and click the clipboard icon in the same line as Checkout to copy the git commands (which include the git fetch and checkout commands). |
2 | Fetch the shared template to the oom directory on the Kubernetes node VM | cd {$OOM} and execute the git fetch command obtained from step 1. |
3 | Link the new startODL.sh |
vi kubernetes/sdnc/templates/sdnc-statefulset.yaml Make the following changes:
- mountPath: /opt/opendaylight/current/deploy name: sdnc-deploy
- name: sdnc-startodl hostPath: path: /dockerdata-nfs/cluster/script/startODL.sh
| ||||||||||||||||||||||
4 | Link the ODL deploy directory |
ODL automatically installs bundles/packages that are put under its deploy directory. This mount point provides the capability to drop a bundle/package on the Kubernetes node under the /dockerdata-nfs/cluster/deploy directory, and it will automatically be installed in the sdnc pods (under the /opt/opendaylight/current/deploy directory). |
| ||||||||||||||||
5 | Enable cluster configuration | vi kubernetes/sdnc/values.yaml Change the following fields with the new value:
|
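The fetch in step 2 follows the standard gerrit change-ref pattern (last two digits of the change number, then the change number, then the patchset). A sketch, assuming patchset 1 of change 25467; verify the actual patchset number and repository URL in the gerrit Download dialog:

```shell
# Build the gerrit change ref for change 25467.
CHANGE=25467
PATCHSET=1   # assumption - use the patchset number shown in the gerrit Download dialog
SUFFIX=$(printf %s "$CHANGE" | tail -c 2)
REF="refs/changes/${SUFFIX}/${CHANGE}/${PATCHSET}"
echo "$REF"
# Then, inside the cloned {$OOM} directory:
# git fetch http://gerrit.onap.org/r/oom "$REF" && git checkout FETCH_HEAD
```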
...
Make the nfs-provisioner Pod Run on the Node Where the NFS Server Runs
Info |
---|
...
Skip the following section if any of the following conditions matches.
|
...
|
Verify from the master node by running the following:
# | Purpose | Command and Example | ||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|
1 | Find the node name |
kubectl get node
| ||||||||||
2 | Set label on the node | kubectl label nodes <NODE_NAME_FROM_LAST_STEP> disktype=ssd
| ||||||||||
3 | Check the label has been set on the node | kubectl get node --show-labels
| ||||||||||
4 | Update the nfs-provisioner pod template to force it to run on the nfs server node | In the nfs-provisioner-deployment.yaml file, add “spec.template.spec.nodeSelector” for the pod “nfs-provisioner”
|
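The nodeSelector edit in step 4 can be sketched as the following fragment. The disktype: ssd label matches the one applied in step 2; the surrounding keys are the standard Deployment layout, and only the relevant keys are shown:

```yaml
# Fragment of nfs-provisioner-deployment.yaml (only the relevant keys shown)
spec:
  template:
    spec:
      nodeSelector:
        disktype: ssd    # label set on the NFS server node in step 2
      containers:
        - name: nfs-provisioner
          # ...existing container spec unchanged...
```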
Create the ONAP Config
...
Set up the onap-parameters.yaml file
The following commands must be run on the master node before creating the ONAP configuration.
...
cd {$OOM}/kubernetes/config
cp onap-parameters-sample.yaml onap-parameters.yaml
Run createConfig
Info | ||
---|---|---|
| ||
You can skip the steps in this section by following the instruction in autoCreateOnapConfig of the Scripts section to automate them.
|
# | Purpose | Command and Examples | ||||||||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | Set the OOM Kubernetes config environment | cd {$OOM}/kubernetes/oneclick source setenv.bash | ||||||||||||||||||
1 | Run the createConfig script to create the ONAP config | cd {$OOM}/kubernetes/config ./createConfig.sh -n onap
| ||||||||||||||||||
2 | Wait for the config-init container to finish | Use the following command to monitor the onap config init until it reaches Completed STATUS: kubectl get pods --all-namespaces -a
| ||||||||||||||||||
Additional checks for config-init |
|
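The wait in step 2 can be scripted; the autoCreateOnapConfig script in the Scripts section automates the same wait. A minimal sketch of the polling loop, with kubectl stubbed out so the fragment is self-contained (on a real master node, delete the stub and let the real kubectl answer):

```shell
# Poll until the config pod reports Completed (STATUS is the third column of kubectl get pods).
kubectl() {   # stub standing in for the real kubectl on the master node
  echo "config          0/1   Completed   0   2m"
}

while true; do
  status=$(kubectl get pods -n onap -a | grep config | awk '{print $3}')
  if [ "$status" = "Completed" ]; then
    echo "onap config is Completed"
    break
  fi
  sleep 5
done
```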
...
Deploy the SDN-C Application
...
Info | ||
---|---|---|
| ||
You can skip the steps in this section by following the instruction in autoDeploySdnc of the Scripts section to automate them.
|
Execute the following on the master node.
# | Purpose | Command and Examples | ||||||||||||||||||||||||||||||||||||||||||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | Set the OOM Kubernetes config environment (if you have set the OOM Kubernetes config environment in the same terminal, you can skip this step) | cd {$OOM}/kubernetes/oneclick source setenv.bash |
1 | Run the createAll script to deploy the SDN-C application | cd {$OOM}/kubernetes/oneclick ./createAll.bash -n onap -a sdnc
| ||||||||||||||||||||||||||||||||||||||||||||||||||||
Ensure that the SDN-C application has started | Use the kubectl get pods command to monitor the SDN-C startup; you should observe:
| |||||||||||||||||||||||||||||||||||||||||||||||||||||
2 | Validate that all SDN-C pods and services are created properly | helm ls --all
kubectl get namespaces
kubectl get deployment --all-namespaces
kubectl get clusterrolebinding --all-namespaces
kubectl get serviceaccounts --all-namespaces
kubectl get service -n onap-sdnc
kubectl get pods --all-namespaces -a
docker ps |grep sdnc
Example output of kubectl get service --all-namespaces:
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 15d
kube-system heapster ClusterIP 10.43.210.11 <none> 80/TCP 15d
kube-system kube-dns ClusterIP 10.43.0.10 <none> 53/UDP,53/TCP 15d
kube-system kubernetes-dashboard ClusterIP 10.43.196.205 <none> 9090/TCP 15d
kube-system monitoring-grafana ClusterIP 10.43.90.8 <none> 80/TCP 15d
kube-system monitoring-influxdb ClusterIP 10.43.52.1 <none> 8086/TCP 15d
kube-system tiller-deploy ClusterIP 10.43.106.73 <none> 44134/TCP 15d
onap-sdnc dbhost ClusterIP None <none> 3306/TCP 17m
onap-sdnc sdnc-dgbuilder NodePort 10.43.97.219 <none> 3000:30203/TCP 17m
onap-sdnc sdnc-portal NodePort 10.43.72.72 <none> 8843:30201/TCP 17m
onap-sdnc sdnctldb01 ClusterIP None <none> 3306/TCP 17m
onap-sdnc sdnctldb02 ClusterIP None <none> 3306/TCP 17m
onap-sdnc sdnhost NodePort 10.43.99.163 <none> 8282:30202/TCP,8201:30208/TCP 17m
| ||||||||||||||||||||||||||||||||||||||||||||||||||||
3 | Validate that the SDN-C bundles are up |
| ||||||||||||||||||||||||||||||||||||||||||||||||||||
4 | Validate that the SDN-C APIs are shown on the ODL RestConf page | Access the ODL RestConf page from the following URL:
| 5 | Validate the SDN-C ODL cluster |
Clone ODL Integration-test project | git clone https://github.com/opendaylight/integration-test.git | |||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|
Enter the cluster-monitor folder | cd integration-test/tools/clustering/cluster-monitor |
Create the update-cluster.bash script | vi update-cluster.bash
This script is used to fetch all the IPs of the SDN-C pods and automatically update the cluster.json file |
Start the cluster monitor UI | ./update-cluster.bash Note: if the applications inside any of these three SDNC pods are not fully started, this script won't execute successfully due to issues such as connection errors or value errors. Otherwise, you should see the monitoring UI as the following: |
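What update-cluster.bash does can be sketched as follows. The pod listing is stubbed so the fragment is self-contained, and the cluster.json field names and port 8181 are assumptions about the monitor tool, not confirmed from its source:

```shell
# Collect the SDN-C pod IPs (IP is the 6th column of `kubectl get pods -o wide`)
# and write a minimal cluster.json for the monitor. kubectl is stubbed here;
# on a real master node, remove the stub.
kubectl() {
  printf 'sdnc-0 2/2 Running 0 5m 10.42.0.10 node1\n'
  printf 'sdnc-1 2/2 Running 0 5m 10.42.0.11 node1\n'
  printf 'sdnc-2 2/2 Running 0 5m 10.42.0.12 node2\n'
}

ips=$(kubectl get pods -n onap-sdnc -o wide | awk '/^sdnc-/{print $6}')
{
  printf '{ "cluster": { "controllers": ['
  first=1
  for ip in $ips; do
    [ "$first" -eq 0 ] && printf ', '
    printf '{ "ip": "%s", "port": "8181" }' "$ip"
    first=0
  done
  printf ' ] } }\n'
} > cluster.json
cat cluster.json
```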
Use testCluster RPC to test SDN-C load sharing
The testCluster-bundle.zip provides a testBundle which offers a testCluster API to help with validating SDN-C RPC load sharing in the deployed SDN-C cluster.
It's as easy as doing the following:
Download testCluster-bundle.zip from the attachment and place it in the sdnc-deploy hostPath which has been defined in .spec.volumes of the sdnc-deployment.yaml file. As this hostPath is mounted as ODL's deploy directory, the testBundle will be automatically loaded by ODL and the testCluster API will be available to use.
The testCluster API can be accessed from the ODL RestConf page, Postman, or a curl command:
curl -u admin:Kp8bJ4SXszM0WXlhak3eHlcse2gAw84vaoGGmJvUy2U -H 'Accept: application/json' -X POST 'http://${kubernetes-master-api}:30202/restconf/operations/testCluster:who-am-i'
An example of the testCluster API response: { "output": { "node": "sdnc-2" } }
Undeploy the SDN-C Application
...
Set the OOM Kubernetes config environment
(If you have set the OOM Kubernetes config environment in the same terminal, you can skip this step)
...
cd {$OOM}/kubernetes/oneclick
source setenv.bash
...
Run the deleteAll script to delete all SDN-C pods and services
...
./deleteAll.bash -n onap -a sdnc
Expand | ||
---|---|---|
| ||
********** Cleaning up ONAP: Waiting for namespaces termination... ********** Gone ********** |
...
Validate that all SDN-C pods and services are cleaned up
...
docker ps |grep sdnc
Expand | ||
---|---|---|
| ||
ubuntu@sdnc-k8s:~$ docker ps |grep sdnc |
kubectl get pods --all-namespaces -a
Expand | ||
---|---|---|
| ||
ubuntu@sdnc-k8s:~/oom/kubernetes/oneclick$ kubectl get pods --all-namespaces -a |
kubectl get service --all-namespaces
Expand | ||
---|---|---|
| ||
ubuntu@sdnc-k8s:~/oom/kubernetes/oneclick$ kubectl get service --all-namespaces |
kubectl get serviceaccounts --all-namespaces
Expand | ||
---|---|---|
| ||
ubuntu@sdnc-k8s:~$ kubectl get serviceaccounts --all-namespaces |
kubectl get clusterrolebinding --all-namespaces
Expand | ||
---|---|---|
| ||
ubuntu@sdnc-k8s:~$ kubectl get clusterrolebinding --all-namespaces |
kubectl get deployment --all-namespaces
Expand | ||
---|---|---|
| ||
ubuntu@sdnc-k8s:~$ kubectl get deployment --all-namespaces NAMESPACE NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE kube-system heapster 1 1 1 1 2d kube-system kube-dns 1 1 1 1 2d kube-system kubernetes-dashboard 1 1 1 1 2d kube-system monitoring-grafana 1 1 1 1 2d kube-system monitoring-influxdb 1 1 1 1 2d kube-system tiller-deploy 1 1 1 1 2d ubuntu@sdnc-k8s:~$ |
kubectl get namespaces
Expand | ||
---|---|---|
| ||
ubuntu@sdnc-k8s:~$ kubectl get namespaces |
helm ls --all
Expand | ||
---|---|---|
| ||
ubuntu@sdnc-k8s:~$ helm ls --all |
Remove the ONAP Config
Info | ||
---|---|---|
| ||
You can skip the steps in this section by following the instruction in autoCleanOnapConfig of the Scripts section to automate them.
|
# | Purpose | Command and Examples | ||||||||||||||||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | Set the OOM Kubernetes config environment
(If you have set the OOM Kubernetes config environment in the same terminal, you can skip this step) | cd {$OOM}/kubernetes/oneclick source setenv.bash |
1 |
| ./deleteAll.bash -n onap
| ||||||||||||||||||||||||||
2 | Manually clean up |
This step is to clean up the leftover items which were created by the config/createConfig script but not cleaned up by the oneclick/deleteAll script.
|
| ||||||||||||||||||||||||||
3 | Delete the shared folder | sudo rm -rf /dockerdata-nfs/onap |
Remove the ONAP Config
...
Set the OOM Kubernetes config environment
(If you have set the OOM Kubernetes config environment in the same terminal, you can skip this step)
...
cd {$OOM}/kubernetes/oneclick
source setenv.bash
...
Remove the ONAP config and any deployed applications in one shot
...
./deleteAll.bash -n onap
Expand | ||
---|---|---|
| ||
ubuntu@sdnc-k8s:~/oom/kubernetes/oneclick$ ./deleteAll.bash -n onap
********** Cleaning up ONAP: Error: release: not found Error from server (NotFound): namespaces "onap-consul" not found Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-consul-admin-binding" not found Service account onap-consul-admin-binding deleted.
Error: release: not found Error from server (NotFound): namespaces "onap-msb" not found Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-msb-admin-binding" not found Service account onap-msb-admin-binding deleted.
Error: release: not found Error from server (NotFound): namespaces "onap-mso" not found Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-mso-admin-binding" not found Service account onap-mso-admin-binding deleted.
Error: release: not found Error from server (NotFound): namespaces "onap-message-router" not found Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-message-router-admin-binding" not found Service account onap-message-router-admin-binding deleted.
release "onap-sdnc" deleted namespace "onap-sdnc" deleted clusterrolebinding "onap-sdnc-admin-binding" deleted Service account onap-sdnc-admin-binding deleted.
Error: release: not found Error from server (NotFound): namespaces "onap-vid" not found Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-vid-admin-binding" not found Service account onap-vid-admin-binding deleted.
Error: release: not found Error from server (NotFound): namespaces "onap-robot" not found Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-robot-admin-binding" not found Service account onap-robot-admin-binding deleted.
Error: release: not found Error from server (NotFound): namespaces "onap-portal" not found Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-portal-admin-binding" not found Service account onap-portal-admin-binding deleted.
Error: release: not found Error from server (NotFound): namespaces "onap-policy" not found Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-policy-admin-binding" not found Service account onap-policy-admin-binding deleted.
Error: release: not found Error from server (NotFound): namespaces "onap-appc" not found Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-appc-admin-binding" not found Service account onap-appc-admin-binding deleted.
Error: release: not found Error from server (NotFound): namespaces "onap-aai" not found Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-aai-admin-binding" not found Service account onap-aai-admin-binding deleted.
Error: release: not found Error from server (NotFound): namespaces "onap-sdc" not found Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-sdc-admin-binding" not found Service account onap-sdc-admin-binding deleted.
Error: release: not found Error from server (NotFound): namespaces "onap-dcaegen2" not found Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-dcaegen2-admin-binding" not found Service account onap-dcaegen2-admin-binding deleted.
Error: release: not found Error from server (NotFound): namespaces "onap-log" not found Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-log-admin-binding" not found Service account onap-log-admin-binding deleted.
Error: release: not found Error from server (NotFound): namespaces "onap-cli" not found Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-cli-admin-binding" not found Service account onap-cli-admin-binding deleted.
Error: release: not found Error from server (NotFound): namespaces "onap-multicloud" not found Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-multicloud-admin-binding" not found Service account onap-multicloud-admin-binding deleted.
Error: release: not found Error from server (NotFound): namespaces "onap-clamp" not found Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-clamp-admin-binding" not found Service account onap-clamp-admin-binding deleted.
Error: release: not found Error from server (NotFound): namespaces "onap-vnfsdk" not found Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-vnfsdk-admin-binding" not found Service account onap-vnfsdk-admin-binding deleted.
Error: release: not found Error from server (NotFound): namespaces "onap-uui" not found Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-uui-admin-binding" not found Service account onap-uui-admin-binding deleted.
Error: release: not found Error from server (NotFound): namespaces "onap-aaf" not found Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-aaf-admin-binding" not found Service account onap-aaf-admin-binding deleted.
Error: release: not found Error from server (NotFound): namespaces "onap-vfc" not found Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-vfc-admin-binding" not found Service account onap-vfc-admin-binding deleted.
Error: release: not found Error from server (NotFound): namespaces "onap-kube2msb" not found Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "onap-kube2msb-admin-binding" not found Service account onap-kube2msb-admin-binding deleted.
Waiting for namespaces termination...
********** Gone ********** |
...
Manually clean up
This step is to clean up the leftover items which were created by the config/createConfig script but not cleaned up by the oneclick/deleteAll script.
Expand | ||
---|---|---|
| ||
ubuntu@sdnc-k8s:~/oom/kubernetes/config$ ./createConfig.sh -n onap **** Creating configuration for ONAP instance: onap |
...
ONAP serviceaccount | No action needed. It cannot be deleted by a specific command, but will instead be automatically deleted when the namespace is deleted. |
---|---|
ONAP namespace | kubectl delete namespace onap |
release | helm delete onap-config --purge |
|
...
Delete the shared folder
...
sudo rm -rf /dockerdata-nfs/onap
Scripts
The following scripts are intended to be placed under the directory {$OOM}/kubernetes/oneclick/tools and help to simplify various procedures by automating them.
Code Block | ||||||||
---|---|---|---|---|---|---|---|---|
| ||||||||
########################################################################################
# This script wraps {$OOM}/kubernetes/config/createConfig.sh script                    #
# and will only terminate when the ONAP configuration is Completed                     #
#                                                                                      #
# Before using it, do the following to prepare the bash file:                          #
# 1, cd {$OOM}/kubernetes/oneclick/tools                                               #
# 2, vi autoCreateOnapConfig.bash                                                      #
# 3, paste the full content here to autoCreateOnapConfig.bash file and save the file   #
# 4, chmod 777 autoCreateOnapConfig.bash                                               #
#                                                                                      #
# To run it, just enter the following command:                                         #
# ./autoCreateOnapConfig.bash <namespace, default is "onap">                           #
########################################################################################
#!/bin/bash
NS=$1
if [[ -z $NS ]]
then
  echo "Namespace is not specified, use onap namespace."
  NS="onap"
fi

echo "Create $NS config under config directory..."
cd ../../config
./createConfig.sh -n $NS
cd -
echo "...done"

echo "----------------------------------------------- >>>>>>>>>>>>>> k8s namespace : kubectl get namespace"
kubectl get namespace
echo "----------------------------------------------- >>>>>>>>>>>>>> helm : helm ls --all"
helm ls --all
echo "----------------------------------------------- >>>>>>>>>>>>>> pod : kubectl get pods -n $NS -a"
kubectl get pods -n $NS -a

while true
do
  echo "wait for $NS config pod reach to Completed STATUS"
  sleep 5
  echo "-----------------------------------------------"
  kubectl get pods -n $NS -a
  status=`kubectl get pods -n $NS -a |grep config |xargs echo | cut -d' ' -f3`
  if [ "$status" = "Completed" ]
  then
    echo "$NS config is Completed!!!"
    break
  fi
  if [ "$status" = "Error" ]
  then
    echo "$NS config is failed with Error!!! POD logs are:"
    kubectl logs config -n $NS -f
    break
  fi
done
Code Block | ||||||||
| ||||||||
########################################################################################
# This script wraps {$OOM}/kubernetes/oneclick/deleteAll.bash script along with        #
# the following steps to clean up the ONAP configuration:                              #
# - remove ONAP namespace                                                              #
# - remove ONAP release                                                                #
# - remove ONAP shared directory                                                       #
#                                                                                      #
# Before using it, do the following to prepare the bash file:                          #
# 1, cd {$OOM}/kubernetes/oneclick/tools                                               #
# 2, vi autoCleanOnapConfig.bash                                                       #
# 3, paste the full content here to autoCleanOnapConfig.bash file and save the file    #
# 4, chmod 777 autoCleanOnapConfig.bash                                                #
#                                                                                      #
# To run it, just enter the following command:                                         #
# ./autoCleanOnapConfig.bash <namespace, default is "onap">                            #
########################################################################################
#!/bin/bash
NS=$1
if [[ -z $NS ]]
then
  echo "Namespace is not specified, use onap namespace."
  NS="onap"
fi

echo "Clean up $NS configuration"
cd ..
./deleteAll.bash -n $NS
cd -

echo "---------------------------------------------- Force remove namespace..."
kubectl delete namespace $NS
echo "...done : kubectl get namespace"
kubectl get namespace
while [[ ! -z `kubectl get namespace|grep $NS` ]]
do
  echo "Wait for namespace $NS to be deleted"
  kubectl get namespace
  sleep 2
done

echo "Force delete helm process ..."
helm delete $NS-config --purge --debug
echo "...done : helm ls --all"
helm ls --all

echo "Remove $NS dockerdata..."
sudo rm -rf /dockerdata-nfs/onap
echo "...done : ls -altr /dockerdata-nfs"
ls -altr /dockerdata-nfs
Code Block | ||||||||
---|---|---|---|---|---|---|---|---|
| ||||||||
########################################################################################
# This script wraps {$OOM}/kubernetes/oneclick/createAll.bash script along with        #
# the following steps to deploy the ONAP SDNC application:                             #
# - wait until sdnc-0 is running properly with both (2) containers up                  #
#                                                                                      #
# Before using it, do the following to prepare the bash file:                          #
# 1, cd {$OOM}/kubernetes/oneclick/tools                                               #
# 2, vi autoDeploySdnc.bash                                                            #
# 3, paste the full content here to autoDeploySdnc.bash file and save the file         #
# 4, chmod 777 autoDeploySdnc.bash                                                     #
#                                                                                      #
# To run it, just enter the following command:                                         #
# ./autoDeploySdnc.bash                                                                #
########################################################################################
#!/bin/bash
cd ..
echo "Deploy SDNC..."
./createAll.bash -n onap -a sdnc
returnCode=$?
echo "deploy result: $returnCode"
cd -
if [ "$returnCode" != "0" ]
then
  echo "Abort..."
  exit
fi
echo "...done"

echo "----------------------------------------------- >>>>>>>>>>>>>> pod : kubectl get pods --all-namespaces -a -o wide"
kubectl get pods --all-namespaces -a -o wide

while true
do
  echo "wait for onap sdnc-0 reaches fully running"
  sleep 5
  echo "-----------------------------------------------"
  kubectl get pods --all-namespaces -a -o wide
  status=`kubectl get pods --all-namespaces -a |grep sdnc-0 |xargs echo | cut -d' ' -f3`
  if [ "$status" = "2/2" ]
  then
    echo "onap sdnc-0 is running!!!"
    break
  fi
done
Code Block | ||||||||
---|---|---|---|---|---|---|---|---|
| ||||||||
########################################################################################
# This script wraps {$OOM}/kubernetes/oneclick/deleteAll.bash script along with        #
# the following steps to un-deploy the ONAP SDNC application fully:                    #
# - force remove clusterrolebinding for onap-sdnc                                      #
# - force remove namespace for onap-sdnc                                               #
# - force remove release for onap-sdnc                                                 #
# - wait until onap-sdnc namespace is removed                                          #
#                                                                                      #
# Before using it, do the following to prepare the bash file:                          #
# 1, cd {$OOM}/kubernetes/oneclick/tools                                               #
# 2, vi autoCleanSdnc.bash                                                             #
# 3, paste the full content here to autoCleanSdnc.bash file and save the file          #
# 4, chmod 777 autoCleanSdnc.bash                                                      #
#                                                                                      #
# To run it, just enter the following command:                                         #
# ./autoCleanSdnc.bash                                                                 #
########################################################################################
#!/bin/bash
cd ..
./deleteAll.bash -n onap -a sdnc
cd -

echo "---------------------------------------------- Remove clusterrolebinding..."
kubectl delete clusterrolebinding onap-sdnc-admin-binding
echo "...done : kubectl get clusterrolebinding"
kubectl get clusterrolebinding

echo "Remove onap-sdnc namespace..."
kubectl delete namespaces onap-sdnc
echo "...done : kubectl get namespaces"
kubectl get namespaces

echo "Delete onap-sdnc release..."
helm delete onap-sdnc --purge
echo "...done: helm ls --all"
helm ls --all

while true
do
  echo "wait for onap-sdnc namespace to be removed"
  sleep 5
  echo "-----------------------------------------------"
  kubectl get namespaces
  sdncCount=`kubectl get namespaces | grep onap-sdnc | wc -l`
  if [ "$sdncCount" = "0" ]
  then
    echo "sdnc removed!!!"
    break
  fi
done