Overview
For the ONAP SDN-R load and stress test and proof of concept (June 2019), a three-node SDN-R cluster is used. The ONAP version is El Alto.
Preconditions
- A running, sufficiently performant physical server cluster (see the hardware requirements)
- OpenStack installed
- Ubuntu 18.04 server cloud image available for the VMs
The goal of this description is to set up a partial ONAP deployment with a focus on the SDN-R services.
Steps
- Installing Kubernetes and Rancher
- Initializing Helm
- Downloading the OOM charts
- Installing SDN-R, LOG, AAI, SO, PORTAL, ROBOT Framework, OOF
- Installing the device simulators
Installing Kubernetes and Rancher
Create the 3-node Rancher control cluster named onap-control on OpenStack
The following instructions describe how to create 3 OpenStack VMs to host the Highly-Available Kubernetes Control Plane. ONAP workloads will not be scheduled on these Control Plane nodes.
Launch new VMs in OpenStack.
Select Ubuntu 18.04 as the boot image for the VM without any volume
Select m1.large flavor
Networking
Apply customization script for Control Plane VMs
The script to be copied:
#!/bin/bash

DOCKER_VERSION=18.09.5
KUBECTL_VERSION=1.13.5
HELM_VERSION=2.12.3

sudo apt-get update

curl https://releases.rancher.com/install-docker/$DOCKER_VERSION.sh | sh
mkdir -p /etc/systemd/system/docker.service.d/
cat > /etc/systemd/system/docker.service.d/docker.conf << EOF
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// --insecure-registry=nexus3.onap.org:10001
EOF
sudo usermod -aG docker ubuntu
systemctl daemon-reload
systemctl restart docker
apt-mark hold docker-ce

IP_ADDR=`ip address |grep ens|grep inet|awk '{print $2}'| awk -F / '{print $1}'`
HOSTNAME=`hostname`
sudo echo "$IP_ADDR $HOSTNAME" >> /etc/hosts

docker login -u docker -p docker nexus3.onap.org:10001

sudo apt-get install make -y

wget https://storage.googleapis.com/kubernetes-release/release/v$KUBECTL_VERSION/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv ./kubectl /usr/local/bin/kubectl

wget http://storage.googleapis.com/kubernetes-helm/helm-v${HELM_VERSION}-linux-amd64.tar.gz
tar -zxvf helm-v${HELM_VERSION}-linux-amd64.tar.gz
sudo mv linux-amd64/helm /usr/local/bin/helm

sudo apt-get update

exit 0
This customization script will:
- update ubuntu
- install docker
- install make
- download and install kubectl
- download and install helm
- once again update & upgrade ubuntu
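The same control-plane VMs can also be launched from the OpenStack CLI instead of Horizon. A minimal sketch, assuming the customization script above is saved as control.sh and using hypothetical image, flavor, network and key-pair names that must be adapted to your project:

#!/bin/bash
# Hypothetical names - replace image/flavor/network/key with the ones in your OpenStack project.
for i in 1 2 3; do
  openstack server create \
    --image "ubuntu-18.04" \
    --flavor "m1.large" \
    --network "oam_network" \
    --key-name "onap-key" \
    --user-data control.sh \
    "onap-control-${i}"
done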
Launched Instances
Create the 12-node Kubernetes worker cluster named onap-k8s on OpenStack
The following instructions describe how to create OpenStack VMs to host the Highly-Available Kubernetes Workers. ONAP workloads will only be scheduled on these nodes.
Launch new VM instances in OpenStack
Select Ubuntu 18.04 as base image
Select Flavor
The size of the Kubernetes hosts depends on the size of the ONAP deployment being installed.
If only a small subset of ONAP applications is being deployed (e.g. for testing purposes), then 16GB or 32GB of RAM per host may be sufficient.
Networking
Apply customization script for Kubernetes VM(s)
The script to be copied:
#!/bin/bash

DOCKER_VERSION=18.09.5
KUBECTL_VERSION=1.13.5

sudo apt-get update

curl https://releases.rancher.com/install-docker/$DOCKER_VERSION.sh | sh
mkdir -p /etc/systemd/system/docker.service.d/
cat > /etc/systemd/system/docker.service.d/docker.conf << EOF
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// --insecure-registry=nexus3.onap.org:10001
EOF
sudo usermod -aG docker ubuntu
systemctl daemon-reload
systemctl restart docker
apt-mark hold docker-ce

IP_ADDR=`ip address |grep ens|grep inet|awk '{print $2}'| awk -F / '{print $1}'`
HOSTNAME=`hostname`
sudo echo "$IP_ADDR $HOSTNAME" >> /etc/hosts

docker login -u docker -p docker nexus3.onap.org:10001

sudo apt-get install make -y

wget https://storage.googleapis.com/kubernetes-release/release/v$KUBECTL_VERSION/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv ./kubectl /usr/local/bin/kubectl

sudo apt-get update

exit 0
This customization script will:
- update ubuntu
- install docker
- install make
- download and install kubectl
- update and upgrade ubuntu
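As with the control nodes, the worker VMs can be created from the OpenStack CLI. A minimal sketch, assuming the worker customization script above is saved as worker.sh and using hypothetical image, flavor, network and key names:

#!/bin/bash
# Hypothetical names - adapt image/flavor/network/key to your environment and sizing needs.
for i in $(seq 1 12); do
  openstack server create \
    --image "ubuntu-18.04" \
    --flavor "m1.xlarge" \
    --network "oam_network" \
    --key-name "onap-key" \
    --user-data worker.sh \
    "onap-k8s-${i}"
done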
Launched k8s instances
Configure Rancher Kubernetes Engine
Install RKE
Download and install RKE on a VM, desktop or laptop. Binaries can be found here for Linux and Mac: https://github.com/rancher/rke/releases/tag/v0.2.1
Perform this on a Linux-based laptop, local machine, or a VM that has connectivity to the VMs created on OpenStack and whose key was configured during creation of the VMs.
In our setup, the key of the OpenStack controller has been used to log in to the VMs, so these steps have been performed on the OpenStack controller.
mkdir rke
cd rke/
wget https://github.com/rancher/rke/releases/download/v0.2.1/rke_linux-amd64
mv rke_linux-amd64 rke
chmod 754 rke
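To confirm that the downloaded binary is executable and is the expected release, you can run (illustrative):

./rke --version
# should report something like: rke version v0.2.1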
RKE requires a cluster.yml file as input. An example file is shown below that describes a Kubernetes cluster that will be mapped onto the OpenStack VMs created earlier in this guide.
Example: cluster.yml
The example file:
# An example of an HA Kubernetes cluster for ONAP
nodes:
- address: 10.31.3.2
  port: "22"
  role:
  - controlplane
  - etcd
  hostname_override: "onap-control-1"
  user: ubuntu
  ssh_key_path: "~/.ssh/id_rsa"
- address: 10.31.3.3
  port: "22"
  role:
  - controlplane
  - etcd
  hostname_override: "onap-control-2"
  user: ubuntu
  ssh_key_path: "~/.ssh/id_rsa"
- address: 10.31.3.16
  port: "22"
  role:
  - controlplane
  - etcd
  hostname_override: "onap-control-3"
  user: ubuntu
  ssh_key_path: "~/.ssh/id_rsa"
- address: 10.31.3.15
  port: "22"
  role:
  - worker
  hostname_override: "onap-k8s-1"
  user: ubuntu
  ssh_key_path: "~/.ssh/id_rsa"
- address: 10.31.3.9
  port: "22"
  role:
  - worker
  hostname_override: "onap-k8s-2"
  user: ubuntu
  ssh_key_path: "~/.ssh/id_rsa"
- address: 10.31.3.29
  port: "22"
  role:
  - worker
  hostname_override: "onap-k8s-3"
  user: ubuntu
  ssh_key_path: "~/.ssh/id_rsa"
- address: 10.31.3.8
  port: "22"
  role:
  - worker
  hostname_override: "onap-k8s-4"
  user: ubuntu
  ssh_key_path: "~/.ssh/id_rsa"
- address: 10.31.3.5
  port: "22"
  role:
  - worker
  hostname_override: "onap-k8s-5"
  user: ubuntu
  ssh_key_path: "~/.ssh/id_rsa"
- address: 10.31.3.23
  port: "22"
  role:
  - worker
  hostname_override: "onap-k8s-6"
  user: ubuntu
  ssh_key_path: "~/.ssh/id_rsa"
- address: 10.31.3.1
  port: "22"
  role:
  - worker
  hostname_override: "onap-k8s-7"
  user: ubuntu
  ssh_key_path: "~/.ssh/id_rsa"
- address: 10.31.3.24
  port: "22"
  role:
  - worker
  hostname_override: "onap-k8s-8"
  user: ubuntu
  ssh_key_path: "~/.ssh/id_rsa"
- address: 10.31.3.11
  port: "22"
  role:
  - worker
  hostname_override: "onap-k8s-9"
  user: ubuntu
  ssh_key_path: "~/.ssh/id_rsa"
- address: 10.31.3.35
  port: "22"
  role:
  - worker
  hostname_override: "onap-k8s-10"
  user: ubuntu
  ssh_key_path: "~/.ssh/id_rsa"
- address: 10.31.3.13
  port: "22"
  role:
  - worker
  hostname_override: "onap-k8s-11"
  user: ubuntu
  ssh_key_path: "~/.ssh/id_rsa"
- address: 10.31.3.10
  port: "22"
  role:
  - worker
  hostname_override: "onap-k8s-12"
  user: ubuntu
  ssh_key_path: "~/.ssh/id_rsa"
services:
  kube-api:
    service_cluster_ip_range: 10.43.0.0/16
    pod_security_policy: false
    always_pull_images: false
  kube-controller:
    cluster_cidr: 10.42.0.0/16
    service_cluster_ip_range: 10.43.0.0/16
  kubelet:
    cluster_domain: cluster.local
    cluster_dns_server: 10.43.0.10
    fail_swap_on: false
network:
  plugin: canal
authentication:
  strategy: x509
ssh_key_path: "~/.ssh/id_rsa"
ssh_agent_auth: false
authorization:
  mode: rbac
ignore_docker_version: false
kubernetes_version: "v1.13.5-rancher1-2"
private_registries:
- url: nexus3.onap.org:10001
  user: docker
  password: docker
  is_default: true
cluster_name: "onap"
restore:
  restore: false
  snapshot_name: ""
Prepare cluster.yml
Before this configuration file can be used, the IP addresses of the control and worker nodes must be updated in this file to match your environment.
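Since rke connects to every node over SSH with the key configured in cluster.yml, it can save time to verify connectivity first. A small sketch, assuming the node IPs from the example above and the ubuntu user:

for ip in 10.31.3.2 10.31.3.3 10.31.3.16 10.31.3.15 10.31.3.9 10.31.3.29 10.31.3.8 \
          10.31.3.5 10.31.3.23 10.31.3.1 10.31.3.24 10.31.3.11 10.31.3.35 10.31.3.13 10.31.3.10; do
  # check SSH login and that docker was installed by the customization script
  ssh -i ~/.ssh/id_rsa -o BatchMode=yes -o ConnectTimeout=5 ubuntu@$ip 'hostname && docker --version' \
    || echo "cannot reach $ip"
done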
Run RKE
From within the same directory as the cluster.yml file, simply execute:
native@node1-1:~/rke$ ./rke up
The output will look something like:
native@node1-1:~/rke$ ./rke up
INFO[0000] Initiating Kubernetes cluster
INFO[0000] [certificates] Generating CA kubernetes certificates
INFO[0000] [certificates] Generating Kubernetes API server aggregation layer requestheader client CA certificates
INFO[0000] [certificates] Generating Kubernetes API server certificates
INFO[0000] [certificates] Generating Service account token key
INFO[0000] [certificates] Generating Kube Controller certificates
INFO[0001] [certificates] Generating Node certificate
INFO[0001] [certificates] Generating admin certificates and kubeconfig
INFO[0001] [certificates] Generating Kubernetes API server proxy client certificates
. . . .
. . . .
. . . .
INFO[0309] [addons] Setting up Metrics Server
INFO[0309] [addons] Saving ConfigMap for addon rke-metrics-addon to Kubernetes
INFO[0309] [addons] Successfully saved ConfigMap for addon rke-metrics-addon to Kubernetes
INFO[0309] [addons] Executing deploy job rke-metrics-addon
INFO[0315] [addons] Metrics Server deployed successfully
INFO[0315] [ingress] Setting up nginx ingress controller
INFO[0315] [addons] Saving ConfigMap for addon rke-ingress-controller to Kubernetes
INFO[0316] [addons] Successfully saved ConfigMap for addon rke-ingress-controller to Kubernetes
INFO[0316] [addons] Executing deploy job rke-ingress-controller
INFO[0322] [ingress] ingress controller nginx deployed successfully
INFO[0322] [addons] Setting up user addons
INFO[0322] [addons] no user addons defined
INFO[0322] Finished building Kubernetes cluster successfully
Validate RKE deployment
Copy the file "kube_config_cluster.yml" to the onap-control-1 VM from the VM/system where rke was executed. In WinLab this is node1-1 (the OpenStack controller node).
Example: use sftp to connect to the onap-control-1 node:
native@node1-1:~/rke$ sftp ubuntu@10.31.3.2
Connected to 10.31.3.2.
sftp> put kube_config_cluster.yml
sftp> exit
Log in to the onap-control-1 VM, move the "kube_config_cluster.yml" file into the .kube/ folder and rename it to "config":
In "onap-control-1" vm:
ubuntu@onap-control-1:~$ mkdir .kube
ubuntu@onap-control-1:~$ mv kube_config_cluster.yml .kube/config
ubuntu@onap-control-1:~$ kubectl config set-context --current --namespace=onap
Perform the above operations on all control and worker nodes on which kubectl and helm commands are to be run:
mkdir .kube
mv kube_config_cluster.yml .kube/config
kubectl config set-context --current --namespace=onap
Verify the kubernetes cluster
ubuntu@onap-control-1:~$ kubectl get nodes -o=wide
Result:
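Beyond the node listing, the health of the cluster system components can also be checked (illustrative):

kubectl get pods -n kube-system -o wide
# all kube-system pods (canal, coredns, ingress, metrics-server, ...) should be Running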
Initialize Kubernetes Cluster for use by Helm
Perform this on onap-control-1 VM only during the first setup.
kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller
kubectl -n kube-system rollout status deploy/tiller-deploy
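Once the tiller deployment has rolled out, a quick sanity check confirms that the Helm client can reach tiller (illustrative):

helm version
# both the Client and Server version lines should be printed (v2.12.3 in this setup)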
Setting up the NFS share for multinode kubernetes cluster:
Deploying applications to a Kubernetes cluster requires Kubernetes nodes to share a common, distributed filesystem. In this tutorial, we will set up an NFS master and configure all worker nodes of the Kubernetes cluster to play the role of NFS slaves.
It is recommended that a separate VM, outside of the Kubernetes cluster, be used. This ensures that the NFS master does not compete for resources with the Kubernetes control plane or worker nodes.
Launch new NFS Server VM instance
Select Ubuntu 18.04 as base image
Select Flavor
Networking
Apply customization script for NFS Server VM
Script to be added:
#!/bin/bash

DOCKER_VERSION=18.09.5

apt-get update

curl https://releases.rancher.com/install-docker/$DOCKER_VERSION.sh | sh
mkdir -p /etc/systemd/system/docker.service.d/
cat > /etc/systemd/system/docker.service.d/docker.conf << EOF
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// --insecure-registry=nexus3.onap.org:10001
EOF
sudo usermod -aG docker ubuntu
systemctl daemon-reload
systemctl restart docker
apt-mark hold docker-ce

IP_ADDR=`ip address |grep ens|grep inet|awk '{print $2}'| awk -F / '{print $1}'`
HOSTNAME=`hostname`
sudo echo "$IP_ADDR $HOSTNAME" >> /etc/hosts

docker login -u docker -p docker nexus3.onap.org:10001

sudo apt-get install make -y

# install nfs
sudo apt-get install nfs-common -y

sudo apt update
sudo apt upgrade -y

exit 0
This customization script will:
- update ubuntu
- install docker, make and nfs common
- update and upgrade ubuntu
Resulting example
Configure NFS Share on Master node
Log in to onap-nfs-server and perform the commands below:
wget https://onap.readthedocs.io/en/latest/_downloads/25aa3e27223d311da3d00de8ed6768f8/master_nfs_node.sh
chmod +x master_nfs_node.sh
sudo ./master_nfs_node.sh {list kubernetes worker nodes ip}

example from the WinLab setup:
sudo ./master_nfs_node.sh 10.31.3.15 10.31.3.9 10.31.3.29 10.31.3.8 10.31.3.5 10.31.3.23 10.31.3.1 10.31.3.24 10.31.3.11 10.31.3.35 10.31.3.13 10.31.3.10
Log in to each Kubernetes worker node (the onap-k8s VMs) and perform the commands below:
wget https://onap.readthedocs.io/en/latest/_downloads/604dc45241b03eac1c92b22e0b32b5f3/slave_nfs_node.sh
chmod +x slave_nfs_node.sh
sudo ./slave_nfs_node.sh {master nfs node IP address}

example from the WinLab setup:
sudo ./slave_nfs_node.sh 10.31.3.38
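After running the slave script, each worker should have the shared directory mounted from the NFS server. A quick check on any worker node (illustrative):

mount | grep dockerdata-nfs
df -h /dockerdata-nfs
# the mount should point at the onap-nfs-server IP (10.31.3.38 in the WinLab setup)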
ONAP Installation
Perform the following steps in onap-control-1 VM.
Clone the OOM helm repository
Use the master branch, as the Dublin branch is not available.
Perform the following from the home directory:
git clone http://gerrit.onap.org/r/oom --recurse-submodules
mkdir .helm
cp -R ~/oom/kubernetes/helm/plugins/ ~/.helm
cd oom/kubernetes/sdnc
Edit the values.yaml file
...
# Replace the image ID from image: onap/sdnc-image:1.5.2
image: onap/sdnc-image:1.6.0-STAGING-20190520T202605Z

# Add sdnrwt as true at the end of the config
config:
  ...
  sdnrwt: true

# mariadb-galera
mariadb-galera:
  ...
  replicaCount: 1

# node port for ports 5-7
nodePortPrefix: 312

# set replica count to 3 as default for a SDN cluster
replicaCount: 3

service:
  ...
  internalPort5: 8085
  internalPort6: 8185
  internalPort7: 9200
  ...
  externalPort5: 8285
  nodePort5: 85

  externalPort6: 8295
  nodePort6: 86

  externalPort7: 9200
  nodePort7: 92
Save the file.
Navigate to the templates folder:
cd templates/
Edit the statefulset.yaml file
...
spec:
  ...
  template:
    ...
    spec:
      ...
      containers:
        - name: {{ include "common.name" . }}
          ...
          # Add 3 new ports under ports
          ports:
            ...
            - containerPort: {{ .Values.service.internalPort5 }}
            - containerPort: {{ .Values.service.internalPort6 }}
            - containerPort: {{ .Values.service.internalPort7 }}
          ...
          # add sdnrwt flag set to true under env
          env:
            ...
            - name: SDNRWT
              value: "{{ .Values.config.sdnrwt}}"
Save the file.
Edit the service.yaml file
...
spec:
  type: {{ .Values.service.type }}
  ports:
    {{if eq .Values.service.type "NodePort" -}}
    ...
    - port: {{ .Values.service.externalPort5 }}
      targetPort: {{ .Values.service.internalPort5 }}
      nodePort: {{ .Values.nodePortPrefix }}{{ .Values.service.nodePort5 }}
      name: "{{ .Values.service.portName }}-8285"
    - port: {{ .Values.service.externalPort6 }}
      targetPort: {{ .Values.service.internalPort6 }}
      nodePort: {{ .Values.nodePortPrefix }}{{ .Values.service.nodePort6 }}
      name: "{{ .Values.service.portName }}-8295"
    - port: {{ .Values.service.externalPort7 }}
      targetPort: {{ .Values.service.internalPort7 }}
      nodePort: {{ .Values.nodePortPrefix }}{{ .Values.service.nodePort7 }}
      name: "{{ .Values.service.portName }}-9200"
Copy override files
cd
cp -r ~/oom/kubernetes/onap/resources/overrides .
cd overrides/
Edit the onap-all.yaml file
# Copyright © 2019 Amdocs, Bell Canada
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#       http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

###################################################################
# This override file enables helm charts for all ONAP applications.
###################################################################
cassandra:
  enabled: true
mariadb-galera:
  enabled: true
  replicaCount: 1
aaf:
  enabled: true
aai:
  enabled: true
appc:
  enabled: false
clamp:
  enabled: false
cli:
  enabled: false
consul:
  enabled: true
contrib:
  enabled: false
dcaegen2:
  enabled: true
dmaap:
  enabled: true
esr:
  enabled: false
log:
  enabled: true
sniro-emulator:
  enabled: true
oof:
  enabled: true
msb:
  enabled: true
multicloud:
  enabled: false
nbi:
  enabled: false
policy:
  enabled: true
pomba:
  enabled: false
portal:
  enabled: true
robot:
  enabled: true
sdc:
  enabled: false
sdnc:
  enabled: true
  replicaCount: 3
so:
  enabled: true
uui:
  enabled: false
vfc:
  enabled: false
vid:
  enabled: false
vnfsdk:
  enabled: false
Save the file.
Start helm server
Go to the home directory and start the Helm server and local repository:
cd
helm serve &
Press the ENTER key to return to the prompt if helm serve prints some logs.
Add helm repository
Note the port number listed in the helm serve output and use it in the helm repo add command as follows:
helm repo add local http://127.0.0.1:8879
Verify helm repository
helm repo list
output:
ubuntu@onap-control-1:~$ helm repo list
NAME    URL
stable  https://kubernetes-charts.storage.googleapis.com
local   http://127.0.0.1:8879
ubuntu@onap-control-1:~$
Make onap helm charts available in local helm repository
cd ~/oom/kubernetes
make all; make onap
Do not use sudo to perform the above 'make' operation
This takes around 10-15 minutes to build all the helm charts and save them into the local helm repository.
output:
ubuntu@onap-control-1:~$ cd ~/oom/kubernetes/
ubuntu@onap-control-1:~/oom/kubernetes$ make all; make onap

[common]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
make[2]: Entering directory '/home/ubuntu/oom/kubernetes/common'
[common]
make[3]: Entering directory '/home/ubuntu/oom/kubernetes/common'
==> Linting common
[INFO] Chart.yaml: icon is recommended

1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/common-4.0.0.tgz
make[3]: Leaving directory '/home/ubuntu/oom/kubernetes/common'
...
...
...
[onap]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈Happy Helming!⎈
Saving 33 charts
Downloading aaf from repo http://127.0.0.1:8879
Downloading aai from repo http://127.0.0.1:8879
Downloading appc from repo http://127.0.0.1:8879
Downloading cassandra from repo http://127.0.0.1:8879
Downloading clamp from repo http://127.0.0.1:8879
Downloading cli from repo http://127.0.0.1:8879
Downloading common from repo http://127.0.0.1:8879
Downloading consul from repo http://127.0.0.1:8879
Downloading contrib from repo http://127.0.0.1:8879
Downloading dcaegen2 from repo http://127.0.0.1:8879
Downloading dmaap from repo http://127.0.0.1:8879
Downloading esr from repo http://127.0.0.1:8879
Downloading log from repo http://127.0.0.1:8879
Downloading sniro-emulator from repo http://127.0.0.1:8879
Downloading mariadb-galera from repo http://127.0.0.1:8879
Downloading msb from repo http://127.0.0.1:8879
Downloading multicloud from repo http://127.0.0.1:8879
Downloading nbi from repo http://127.0.0.1:8879
Downloading nfs-provisioner from repo http://127.0.0.1:8879
Downloading pnda from repo http://127.0.0.1:8879
Downloading policy from repo http://127.0.0.1:8879
Downloading pomba from repo http://127.0.0.1:8879
Downloading portal from repo http://127.0.0.1:8879
Downloading oof from repo http://127.0.0.1:8879
Downloading robot from repo http://127.0.0.1:8879
Downloading sdc from repo http://127.0.0.1:8879
Downloading sdnc from repo http://127.0.0.1:8879
Downloading so from repo http://127.0.0.1:8879
Downloading uui from repo http://127.0.0.1:8879
Downloading vfc from repo http://127.0.0.1:8879
Downloading vid from repo http://127.0.0.1:8879
Downloading vnfsdk from repo http://127.0.0.1:8879
Downloading modeling from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting onap
Lint OK

1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/onap-4.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
ubuntu@onap-control-1:~/oom/kubernetes$
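You can also confirm that the ONAP charts were published to the local repository (illustrative, Helm 2 syntax):

helm search local/ | head
# the list should include local/onap and the individual component charts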
Deploy ONAP
The release is named 'demo', the namespace is 'onap', and a timeout of 300 seconds is set because 'dmaap' and 'so' take some time to deploy while waiting for other components.
Perform the commands below on onap-control-1, where the git repository was cloned.
helm deploy demo local/onap --namespace onap -f ~/overrides/onap-all.yaml --timeout 300
This takes some 15-20 minutes due to the added timeout parameter.
The output:
ubuntu@onap-control-1:~/oom/kubernetes$ helm deploy demo local/onap --namespace onap -f ~/overrides/onap-all.yaml --timeout 300
fetching local/onap
release "demo" deployed
release "demo-aaf" deployed
release "demo-aai" deployed
release "demo-cassandra" deployed
release "demo-consul" deployed
release "demo-dcaegen2" deployed
release "demo-dmaap" deployed
release "demo-log" deployed
release "demo-mariadb-galera" deployed
release "demo-msb" deployed
release "demo-oof" deployed
release "demo-policy" deployed
release "demo-portal" deployed
release "demo-robot" deployed
release "demo-sdnc" deployed
release "demo-sniro-emulator" deployed
release "demo-so" deployed
ubuntu@onap-control-1:~/oom/kubernetes$
Verify the deploy
ubuntu@onap-control-1:~/oom/kubernetes$ helm ls
NAME                    REVISION        UPDATED                         STATUS          CHART                   APP VERSION     NAMESPACE
demo                    1               Thu Jun 20 06:57:24 2019        DEPLOYED        onap-4.0.0              Dublin          onap
demo-aaf                1               Thu Jun 20 06:57:25 2019        DEPLOYED        aaf-4.0.0                               onap
demo-aai                1               Thu Jun 20 06:57:27 2019        DEPLOYED        aai-4.0.0                               onap
demo-cassandra          1               Thu Jun 20 06:57:34 2019        DEPLOYED        cassandra-4.0.0                         onap
demo-consul             1               Thu Jun 20 06:57:35 2019        DEPLOYED        consul-4.0.0                            onap
demo-dcaegen2           1               Thu Jun 20 06:57:37 2019        DEPLOYED        dcaegen2-4.0.0                          onap
demo-dmaap              1               Thu Jun 20 06:57:40 2019        DEPLOYED        dmaap-4.0.1                             onap
demo-log                1               Thu Jun 20 07:06:22 2019        DEPLOYED        log-4.0.0                               onap
demo-mariadb-galera     1               Thu Jun 20 07:06:23 2019        DEPLOYED        mariadb-galera-4.0.0                    onap
demo-msb                1               Thu Jun 20 07:06:24 2019        DEPLOYED        msb-4.0.0                               onap
demo-oof                1               Thu Jun 20 07:06:27 2019        DEPLOYED        oof-4.0.0                               onap
demo-policy             1               Thu Jun 20 07:06:30 2019        DEPLOYED        policy-4.0.0                            onap
demo-portal             1               Thu Jun 20 07:06:33 2019        DEPLOYED        portal-4.0.0                            onap
demo-robot              1               Thu Jun 20 07:06:35 2019        DEPLOYED        robot-4.0.0                             onap
demo-sdnc               1               Thu Jun 20 07:06:37 2019        DEPLOYED        sdnc-4.0.0                              onap
demo-sniro-emulator     1               Thu Jun 20 07:06:40 2019        DEPLOYED        sniro-emulator-4.0.0                    onap
demo-so                 1               Thu Jun 20 07:06:40 2019        DEPLOYED        so-4.0.0                                onap
ubuntu@onap-control-1:~/oom/kubernetes$
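Note that helm ls only reports the release status; the individual pods take considerably longer to become ready. One way to watch for pods that are not yet up (illustrative):

kubectl get pods -n onap -o wide | grep -vE 'Running|Completed'
# an empty result (apart from the header) means all pods have come up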
In case of failures in deployment
If the deployment of any onap module fails, please go through these steps to redeploy the modules.
In this example, we demonstrate failure of dmaap, which normally occurs due to timeout issues.
Check the failed modules
Perform 'helm ls' on the control node.
ubuntu@onap-control-1:~$ helm ls
NAME                    REVISION        UPDATED                         STATUS          CHART                   APP VERSION     NAMESPACE
demo                    1               Thu Jun 20 11:40:24 2019        DEPLOYED        onap-4.0.0              Dublin          onap
demo-aaf                1               Thu Jun 20 11:40:24 2019        DEPLOYED        aaf-4.0.0                               onap
demo-aai                1               Thu Jun 20 11:40:26 2019        DEPLOYED        aai-4.0.0                               onap
demo-cassandra          1               Thu Jun 20 11:40:34 2019        DEPLOYED        cassandra-4.0.0                         onap
demo-consul             1               Thu Jun 20 11:40:35 2019        DEPLOYED        consul-4.0.0                            onap
demo-dcaegen2           1               Thu Jun 20 11:40:37 2019        DEPLOYED        dcaegen2-4.0.0                          onap
demo-dmaap              1               Mon Jun 24 09:17:06 2019        FAILED          dmaap-4.0.1                             onap
demo-log                1               Sat Jun 22 14:28:43 2019        DEPLOYED        log-4.0.0                               onap
demo-mariadb-galera     1               Thu Jun 20 11:46:06 2019        DEPLOYED        mariadb-galera-4.0.0                    onap
demo-msb                1               Thu Jun 20 11:46:07 2019        DEPLOYED        msb-4.0.0                               onap
demo-oof                1               Thu Jun 20 11:46:09 2019        DEPLOYED        oof-4.0.0                               onap
demo-policy             1               Thu Jun 20 11:46:13 2019        DEPLOYED        policy-4.0.0                            onap
demo-portal             1               Thu Jun 20 11:46:15 2019        DEPLOYED        portal-4.0.0                            onap
demo-robot              1               Thu Jun 20 11:46:17 2019        DEPLOYED        robot-4.0.0                             onap
demo-sdnc               1               Thu Jun 20 11:46:19 2019        DEPLOYED        sdnc-4.0.0                              onap
demo-sniro-emulator     1               Thu Jun 20 11:46:22 2019        DEPLOYED        sniro-emulator-4.0.0                    onap
demo-so                 1               Thu Jun 20 11:46:23 2019        DEPLOYED        so-4.0.0                                onap
Delete the failed module
Use the correct release name as shown in helm ls.
The below commands will delete all the pods and dependencies.
helm delete demo-dmaap --purge

kubectl get persistentvolumeclaims -n onap | grep demo-dmaap | sed -r 's/(^[^ ]+).*/kubectl delete persistentvolumeclaims -n onap \1/' | bash
kubectl get persistentvolumes -n onap | grep demo-dmaap | sed -r 's/(^[^ ]+).*/kubectl delete persistentvolumes -n onap \1/' | bash
kubectl get secrets -n onap | grep demo-dmaap | sed -r 's/(^[^ ]+).*/kubectl delete secrets -n onap \1/' | bash
kubectl get clusterrolebindings -n onap | grep demo-dmaap | sed -r 's/(^[^ ]+).*/kubectl delete clusterrolebindings -n onap \1/' | bash
kubectl get jobs -n onap | grep demo-dmaap | sed -r 's/(^[^ ]+).*/kubectl delete jobs -n onap \1/' | bash
kubectl get pods -n onap | grep demo-dmaap | sed -r 's/(^[^ ]+).*/kubectl delete pods -n onap \1/' | bash
Delete any residue data from onap-nfs-server.
cd /dockerdata-nfs/
sudo rm -r demo-dmaap/
Reinstall module
Reinstall the deleted module with the same release name as used in the deletion
helm install --namespace onap -n demo-dmaap local/dmaap
You can verify the deployment in a parallel terminal by checking the pods using the command:
kubectl get pods -n onap | grep demo-dmaap
This shows the status of the pods coming up during the deployment wait time.
Once this is deployed, you can verify with the "helm ls" command that all the required modules are up and running.
Undeploy ONAP
For the release name 'demo' and namespace 'onap':
Perform the commands below on onap-control-1, where helm was deployed.
helm undeploy demo --purge

kubectl get persistentvolumeclaims -n onap | grep demo | sed -r 's/(^[^ ]+).*/kubectl delete persistentvolumeclaims -n onap \1/' | bash
kubectl get persistentvolumes -n onap | grep demo | sed -r 's/(^[^ ]+).*/kubectl delete persistentvolumes -n onap \1/' | bash
kubectl get secrets -n onap | grep demo | sed -r 's/(^[^ ]+).*/kubectl delete secrets -n onap \1/' | bash
kubectl get clusterrolebindings -n onap | grep demo | sed -r 's/(^[^ ]+).*/kubectl delete clusterrolebindings -n onap \1/' | bash
kubectl get jobs -n onap | grep demo | sed -r 's/(^[^ ]+).*/kubectl delete jobs -n onap \1/' | bash
kubectl get pods -n onap | grep demo | sed -r 's/(^[^ ]+).*/kubectl delete pods -n onap \1/' | bash
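After the purge it is worth confirming that no releases, pods or persistent volumes are left behind (illustrative):

helm ls
kubectl get pods -n onap
kubectl get pv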
Perform the below commands in onap-nfs-server.
This deletes all the shared network data, which is not removed by the purge delete of the helm release.
cd /dockerdata-nfs/
sudo rm -r *

e.g.
ubuntu@onap-nfs-server:~$ cd /dockerdata-nfs/
ubuntu@onap-nfs-server:/dockerdata-nfs$ sudo rm -r *
Access SDN-R
ODLUX-GUI at winlab
The {user} should be replaced by your user id created in orbit-lab.
ssh -A -t {user}@console.sb10.orbit-lab.org -L 30202:localhost:30202 \
    ssh -A -t native@node1-1 -L 30202:localhost:30202 \
    ssh -A -t ubuntu@10.31.3.2 -L 30202:localhost:30202
Or, using the stable port 31202 (only at WinLab):
ssh -A -t {user}@console.sb10.orbit-lab.org -L 31202:localhost:31202 \
    ssh -A -t native@node1-1 -L 31202:localhost:31202 \
    ssh -A -t ubuntu@10.31.3.2 -L 31202:localhost:31202
On your browser:
http://localhost:30202/odlux/index.html
or a stable link only at WinLab:
http://localhost:31202/odlux/index.html
The login credentials are:
username: admin
password: Kp8bJ4SXszM0WXlhak3eHlcse2gAw84vaoGGmJvUy2U
ODLUX-GUI at your labs
http://{ip-address}:30202/odlux/index.html
The {ip-address} is the IP address of the onap-control-1 machine.
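The NodePorts 30202/31202 are specific to this deployment; if they differ in yours, the actual port mapping for the SDN-R web UI can be looked up from the sdnc services (illustrative):

kubectl get services -n onap | grep sdnc
# look for the NodePort mapped to the ODLUX/web port configured in the sdnc chart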