Table of Contents
Overview
For the ONAP SDN-R load and stress test and proof of concept of June 19, a three-node SDN-R cluster is used. The version is El Alto.
...
- Installing Kubernetes and Rancher
- Initializing Helm
- Downloading OOM charts
- Installation of SDN-R, LOG, AAI, AAF, DMAAP, DCAEGEN2, MSB, POLICY, CONSUL, SO, PORTAL, ROBOT Framework, OOF
- Installation of Device Simulators
Installing Kubernetes and Rancher
Create the Rancher control cluster (3 nodes, named onap-control) on OpenStack
The following instructions describe how to create 3 OpenStack VMs to host the Highly-Available Kubernetes Control Plane. ONAP workloads will not be scheduled on these Control Plane nodes.
Launch new VMs in OpenStack.
Select Ubuntu 18.04 as the boot image for the VM without any volume
Select m1.large flavor
Networking
Apply customization script for Control Plane VMs
The script to be copied:
```bash
#!/bin/bash

DOCKER_VERSION=18.09.5
KUBECTL_VERSION=1.13.5
HELM_VERSION=2.12.3

sudo apt-get update

curl https://releases.rancher.com/install-docker/$DOCKER_VERSION.sh | sh
mkdir -p /etc/systemd/system/docker.service.d/
cat > /etc/systemd/system/docker.service.d/docker.conf << EOF
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// --insecure-registry=nexus3.onap.org:10001
EOF

sudo usermod -aG docker ubuntu
systemctl daemon-reload
systemctl restart docker
apt-mark hold docker-ce

IP_ADDR=`ip address |grep ens|grep inet|awk '{print $2}'| awk -F / '{print $1}'`
HOSTNAME=`hostname`
sudo echo "$IP_ADDR $HOSTNAME" >> /etc/hosts

docker login -u docker -p docker nexus3.onap.org:10001

sudo apt-get install make -y

wget https://storage.googleapis.com/kubernetes-release/release/v$KUBECTL_VERSION/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv ./kubectl /usr/local/bin/kubectl

wget http://storage.googleapis.com/kubernetes-helm/helm-v${HELM_VERSION}-linux-amd64.tar.gz
tar -zxvf helm-v${HELM_VERSION}-linux-amd64.tar.gz
sudo mv linux-amd64/helm /usr/local/bin/helm

sudo apt-get update
exit 0
```
...
- update ubuntu
- install docker
- install make
- download and install kubectl
- download and install helm
- once again update & upgrade ubuntu
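The IP_ADDR line in the customization script extracts the VM's private address from `ip address` output. As a minimal sketch of that same pipeline, run against captured sample output (the interface name `ens3` and the addresses below are hypothetical, not from a live VM):

```shell
# Hypothetical sample of `ip address` output for an ens* interface
SAMPLE='2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
    inet 10.31.3.2/24 brd 10.31.3.255 scope global dynamic ens3'

# Same pipeline as in the customization script:
# keep the "inet" line of the ens* interface, take the CIDR field,
# then strip the /prefix length
IP_ADDR=$(echo "$SAMPLE" | grep ens | grep inet | awk '{print $2}' | awk -F / '{print $1}')
echo "$IP_ADDR"
```

On a real VM the pipeline reads the live `ip address` output instead of the sample variable.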
Launched Instances
Create the Kubernetes worker cluster (12 nodes, named onap-k8s) on the OpenStack cloud
The following instructions describe how to create OpenStack VMs to host the Highly-Available Kubernetes Workers. ONAP workloads will only be scheduled on these nodes.
Launch new VM instances in OpenStack
Select Ubuntu 18.04 as base image
Select Flavor
The size of the Kubernetes hosts depends on the size of the ONAP deployment being installed.
If a small subset of ONAP applications are being deployed (i.e. for testing purposes), then 16GB or 32GB may be sufficient.
Networking
Apply customization script for Kubernetes VM(s)
The script to be copied:
```bash
#!/bin/bash

DOCKER_VERSION=18.09.5
KUBECTL_VERSION=1.13.5

sudo apt-get update

curl https://releases.rancher.com/install-docker/$DOCKER_VERSION.sh | sh
mkdir -p /etc/systemd/system/docker.service.d/
cat > /etc/systemd/system/docker.service.d/docker.conf << EOF
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// --insecure-registry=nexus3.onap.org:10001
EOF

sudo usermod -aG docker ubuntu
systemctl daemon-reload
systemctl restart docker
apt-mark hold docker-ce

IP_ADDR=`ip address |grep ens|grep inet|awk '{print $2}'| awk -F / '{print $1}'`
HOSTNAME=`hostname`
sudo echo "$IP_ADDR $HOSTNAME" >> /etc/hosts

docker login -u docker -p docker nexus3.onap.org:10001

sudo apt-get install make -y

wget https://storage.googleapis.com/kubernetes-release/release/v$KUBECTL_VERSION/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv ./kubectl /usr/local/bin/kubectl

sudo apt-get update
exit 0
```
...
- update ubuntu
- install docker
- install make
- download and install kubectl
- update and upgrade ubuntu
Launched k8s instances
Configure Rancher Kubernetes Engine
Install RKE
Download and install RKE on a VM, desktop or laptop. Binaries can be found here for Linux and Mac: https://github.com/rancher/rke/releases/tag/v0.2.1
...
```yaml
# An example of an HA Kubernetes cluster for ONAP
nodes:
- address: 10.31.3.2
  port: "22"
  role:
  - controlplane
  - etcd
  hostname_override: "onap-control-1"
  user: ubuntu
  ssh_key_path: "~/.ssh/id_rsa"
- address: 10.31.3.3
  port: "22"
  role:
  - controlplane
  - etcd
  hostname_override: "onap-control-2"
  user: ubuntu
  ssh_key_path: "~/.ssh/id_rsa"
- address: 10.31.3.16
  port: "22"
  role:
  - controlplane
  - etcd
  hostname_override: "onap-control-3"
  user: ubuntu
  ssh_key_path: "~/.ssh/id_rsa"
- address: 10.31.3.15
  port: "22"
  role:
  - worker
  hostname_override: "onap-k8s-1"
  user: ubuntu
  ssh_key_path: "~/.ssh/id_rsa"
- address: 10.31.3.9
  port: "22"
  role:
  - worker
  hostname_override: "onap-k8s-2"
  user: ubuntu
  ssh_key_path: "~/.ssh/id_rsa"
- address: 10.31.3.29
  port: "22"
  role:
  - worker
  hostname_override: "onap-k8s-3"
  user: ubuntu
  ssh_key_path: "~/.ssh/id_rsa"
- address: 10.31.3.8
  port: "22"
  role:
  - worker
  hostname_override: "onap-k8s-4"
  user: ubuntu
  ssh_key_path: "~/.ssh/id_rsa"
- address: 10.31.3.5
  port: "22"
  role:
  - worker
  hostname_override: "onap-k8s-5"
  user: ubuntu
  ssh_key_path: "~/.ssh/id_rsa"
- address: 10.31.3.23
  port: "22"
  role:
  - worker
  hostname_override: "onap-k8s-6"
  user: ubuntu
  ssh_key_path: "~/.ssh/id_rsa"
- address: 10.31.3.1
  port: "22"
  role:
  - worker
  hostname_override: "onap-k8s-7"
  user: ubuntu
  ssh_key_path: "~/.ssh/id_rsa"
- address: 10.31.3.24
  port: "22"
  role:
  - worker
  hostname_override: "onap-k8s-8"
  user: ubuntu
  ssh_key_path: "~/.ssh/id_rsa"
- address: 10.31.3.11
  port: "22"
  role:
  - worker
  hostname_override: "onap-k8s-9"
  user: ubuntu
  ssh_key_path: "~/.ssh/id_rsa"
- address: 10.31.3.35
  port: "22"
  role:
  - worker
  hostname_override: "onap-k8s-10"
  user: ubuntu
  ssh_key_path: "~/.ssh/id_rsa"
- address: 10.31.3.13
  port: "22"
  role:
  - worker
  hostname_override: "onap-k8s-11"
  user: ubuntu
  ssh_key_path: "~/.ssh/id_rsa"
- address: 10.31.3.10
  port: "22"
  role:
  - worker
  hostname_override: "onap-k8s-12"
  user: ubuntu
  ssh_key_path: "~/.ssh/id_rsa"
services:
  kube-api:
    service_cluster_ip_range: 10.43.0.0/16
    pod_security_policy: false
    always_pull_images: false
  kube-controller:
    cluster_cidr: 10.42.0.0/16
    service_cluster_ip_range: 10.43.0.0/16
  kubelet:
    cluster_domain: cluster.local
    cluster_dns_server: 10.43.0.10
    fail_swap_on: false
network:
  plugin: canal
authentication:
  strategy: x509
ssh_key_path: "~/.ssh/id_rsa"
ssh_agent_auth: false
authorization:
  mode: rbac
ignore_docker_version: false
kubernetes_version: "v1.13.5-rancher1-2"
private_registries:
- url: nexus3.onap.org:10001
  user: docker
  password: docker
  is_default: true
cluster_name: "onap"
restore:
  restore: false
  snapshot_name: ""
```
Prepare cluster.yml
Before this configuration file can be used, the IP address of each control and worker node must be filled in.
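As a hypothetical sketch of this mapping step, a placeholder address in a cluster.yml snippet can be replaced with the node's real IP via `sed` (the file name, placeholder, and address below are examples, not part of the original instructions):

```shell
# Create a small cluster.yml snippet with a placeholder address
cat > cluster-snippet.yml << 'EOF'
- address: 0.0.0.0
  hostname_override: "onap-control-1"
EOF

# Substitute the real IP of onap-control-1 (example value)
NEW_IP=10.31.3.2
sed -i "s/- address: 0.0.0.0/- address: $NEW_IP/" cluster-snippet.yml

cat cluster-snippet.yml
```

In practice each of the 15 node entries gets its own address this way, or the file is simply edited by hand.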
Run RKE
From within the same directory as the cluster.yml file, simply execute:
...
```
native@node1-1:~/rke$ ./rke up
INFO[0000] Initiating Kubernetes cluster
INFO[0000] [certificates] Generating CA kubernetes certificates
INFO[0000] [certificates] Generating Kubernetes API server aggregation layer requestheader client CA certificates
INFO[0000] [certificates] Generating Kubernetes API server certificates
INFO[0000] [certificates] Generating Service account token key
INFO[0000] [certificates] Generating Kube Controller certificates
INFO[0001] [certificates] Generating Node certificate
INFO[0001] [certificates] Generating admin certificates and kubeconfig
INFO[0001] [certificates] Generating Kubernetes API server proxy client certificates
. . . .
INFO[0309] [addons] Setting up Metrics Server
INFO[0309] [addons] Saving ConfigMap for addon rke-metrics-addon to Kubernetes
INFO[0309] [addons] Successfully saved ConfigMap for addon rke-metrics-addon to Kubernetes
INFO[0309] [addons] Executing deploy job rke-metrics-addon
INFO[0315] [addons] Metrics Server deployed successfully
INFO[0315] [ingress] Setting up nginx ingress controller
INFO[0315] [addons] Saving ConfigMap for addon rke-ingress-controller to Kubernetes
INFO[0316] [addons] Successfully saved ConfigMap for addon rke-ingress-controller to Kubernetes
INFO[0316] [addons] Executing deploy job rke-ingress-controller
INFO[0322] [ingress] ingress controller nginx deployed successfully
INFO[0322] [addons] Setting up user addons
INFO[0322] [addons] no user addons defined
INFO[0322] Finished building Kubernetes cluster successfully
```
Validate RKE deployment
Copy the file "kube_config_cluster.yml" to the onap-control-1 VM.
...
```bash
mkdir .kube
mv kube_config_cluster.yml .kube/config
kubectl config set-context --current --namespace=onap
```
Verify the kubernetes cluster
```
ubuntu@onap-control-1:~$ kubectl get nodes -o=wide
```
Result:
Initialize Kubernetes Cluster for use by Helm
Perform this on onap-control-1 VM only during the first setup.
```bash
kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller
kubectl -n kube-system rollout status deploy/tiller-deploy
```
Setting up the NFS share for multinode kubernetes cluster:
Deploying applications to a Kubernetes cluster requires the Kubernetes nodes to share a common, distributed filesystem. In this tutorial, we will set up an NFS master and configure all worker nodes of the Kubernetes cluster to act as NFS slaves.
It is recommended that a separate VM, outside of the Kubernetes cluster, be used. This ensures that the NFS master does not compete for resources with the Kubernetes control plane or worker nodes.
Launch new NFS Server VM instance
Select Ubuntu 18.04 as base image
Select Flavor
Networking
Apply customization script for NFS Server VM
Script to be added:
```bash
#!/bin/bash

DOCKER_VERSION=18.09.5

apt-get update

curl https://releases.rancher.com/install-docker/$DOCKER_VERSION.sh | sh
mkdir -p /etc/systemd/system/docker.service.d/
cat > /etc/systemd/system/docker.service.d/docker.conf << EOF
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// --insecure-registry=nexus3.onap.org:10001
EOF

sudo usermod -aG docker ubuntu
systemctl daemon-reload
systemctl restart docker
apt-mark hold docker-ce

IP_ADDR=`ip address |grep ens|grep inet|awk '{print $2}'| awk -F / '{print $1}'`
HOSTNAME=`hostname`
sudo echo "$IP_ADDR $HOSTNAME" >> /etc/hosts

docker login -u docker -p docker nexus3.onap.org:10001

sudo apt-get install make -y

# install nfs
sudo apt-get install nfs-common -y

sudo apt update
sudo apt upgrade -y
exit 0
```
...
- update ubuntu
- install nfs server
- update and upgrade ubuntu
Resulting example
Configure NFS Share on Master node
Log in to onap-nfs-server and run the commands below
```bash
wget https://onap.readthedocs.io/en/latest/_downloads/25aa3e27223d311da3d00de8ed6768f8/master_nfs_node.sh
chmod +x master_nfs_node.sh
sudo ./master_nfs_node.sh {list of kubernetes worker node IPs}

# example from the WinLab setup:
sudo ./master_nfs_node.sh 10.31.3.15 10.31.3.9 10.31.3.29 10.31.3.8 10.31.3.5 10.31.3.23 10.31.3.1 10.31.3.24 10.31.3.11 10.31.3.35 10.31.3.13 10.31.3.10
```
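The worker IP list passed to master_nfs_node.sh is the same set of addresses as the `worker` entries in cluster.yml, so it can be generated from that file instead of typed by hand. A minimal sketch against an abbreviated sample of the RKE node list (the sample file and the exact cluster.yml layout are assumptions):

```shell
# Abbreviated sample of the RKE node list (one control node, two workers)
cat > cluster-sample.yml << 'EOF'
- address: 10.31.3.2
  role:
  - controlplane
- address: 10.31.3.15
  role:
  - worker
- address: 10.31.3.9
  role:
  - worker
EOF

# Remember the most recent address; print it when the node's role is worker
WORKER_IPS=$(awk '/- address:/ {ip=$3} /- worker/ {print ip}' cluster-sample.yml \
  | tr '\n' ' ' | sed 's/ $//')
echo "$WORKER_IPS"
```

The resulting space-separated list can then be passed directly: `sudo ./master_nfs_node.sh $WORKER_IPS`.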
Log in to each Kubernetes worker node (the onap-k8s VMs) and run the commands below
```bash
wget https://onap.readthedocs.io/en/latest/_downloads/604dc45241b03eac1c92b22e0b32b5f3/slave_nfs_node.sh
chmod +x slave_nfs_node.sh
sudo ./slave_nfs_node.sh {master nfs node IP address}

# example from the WinLab setup:
sudo ./slave_nfs_node.sh 10.31.3.38
```
ONAP Installation
Perform the following steps in onap-control-1 VM.
Clone the OOM helm repository
Use the master branch, as the Dublin branch is not available.
Run the following from the home directory:
```bash
git clone http://gerrit.onap.org/r/oom --recurse-submodules
mkdir .helm
cp -R ~/oom/kubernetes/helm/plugins/ ~/.helm
cd oom/kubernetes/sdnc
```
Edit the values.yaml file
```yaml
...
# Replace image ID from image: onap/sdnc-image:1.5.2
image: onap/sdnc-image:1.6.0-STAGING-20190520T202605Z

# Add sdnrwt as true at the end of the config
config:
  ...
  sdnrwt: true

# mariadb-galera
mariadb-galera:
  ...
  replicaCount: 1

# node port for ports 5-7
nodePortPrefix: 312

# set replica count to 3 as default for a SDN cluster
replicaCount: 3

service:
  ...
  internalPort5: 8085
  internalPort6: 8185
  internalPort7: 9200
  ...
  externalPort5: 8285
  nodePort5: 85
  externalPort6: 8295
  nodePort6: 86
  externalPort7: 9200
  nodePort7: 92
```
Save the file.
Navigate to the templates folder:
```bash
cd templates/
```
Edit the statefulset.yaml file
```yaml
...
spec:
  ...
  template:
    ...
    spec:
      ...
      containers:
        - name: {{ include "common.name" . }}
          ...
          # Add 3 new ports under ports
          ports:
          ...
          - containerPort: {{ .Values.service.internalPort5 }}
          - containerPort: {{ .Values.service.internalPort6 }}
          - containerPort: {{ .Values.service.internalPort7 }}
          ...
          # add sdnrwt flag set to true under env
          env:
          ...
            - name: SDNRWT
              value: "{{ .Values.config.sdnrwt}}"
```
Save the file.
Edit the service.yaml file
```yaml
...
spec:
  type: {{ .Values.service.type }}
  ports:
    {{if eq .Values.service.type "NodePort" -}}
    ...
    - port: {{ .Values.service.externalPort5 }}
      targetPort: {{ .Values.service.internalPort5 }}
      nodePort: {{ .Values.nodePortPrefix }}{{ .Values.service.nodePort5 }}
      name: "{{ .Values.service.portName }}-8285"
    - port: {{ .Values.service.externalPort6 }}
      targetPort: {{ .Values.service.internalPort6 }}
      nodePort: {{ .Values.nodePortPrefix }}{{ .Values.service.nodePort6 }}
      name: "{{ .Values.service.portName }}-8295"
    - port: {{ .Values.service.externalPort7 }}
      targetPort: {{ .Values.service.internalPort7 }}
      nodePort: {{ .Values.nodePortPrefix }}{{ .Values.service.nodePort7 }}
      name: "{{ .Values.service.portName }}-9200"
```
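The template composes each NodePort by plain concatenation of `nodePortPrefix` and the two-digit `nodePortN` value, so with the values set earlier (prefix 312, suffixes 85/86/92) the services are published on 31285, 31286 and 31292. A minimal sketch of the composition:

```shell
# NodePort = {{ nodePortPrefix }}{{ nodePortN }} as string concatenation
NODE_PORT_PREFIX=312
NODE_PORT5="${NODE_PORT_PREFIX}85"   # externalPort5 8285
NODE_PORT6="${NODE_PORT_PREFIX}86"   # externalPort6 8295
NODE_PORT7="${NODE_PORT_PREFIX}92"   # externalPort7 9200
echo "$NODE_PORT5 $NODE_PORT6 $NODE_PORT7"
```

This also means the suffix must stay in the valid NodePort range once prefixed (Kubernetes allows 30000-32767 by default).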
Copy override files
```bash
cd
cp -r ~/oom/kubernetes/onap/resources/overrides .
cd overrides/
```
Edit the onap-all.yaml file
```yaml
# Copyright © 2019 Amdocs, Bell Canada
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#       http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

###################################################################
# This override file enables helm charts for all ONAP applications.
###################################################################
cassandra:
  enabled: true
mariadb-galera:
  enabled: true
  replicaCount: 1
aaf:
  enabled: true
aai:
  enabled: true
appc:
  enabled: false
clamp:
  enabled: false
cli:
  enabled: false
consul:
  enabled: true
contrib:
  enabled: false
dcaegen2:
  enabled: true
dmaap:
  enabled: true
esr:
  enabled: false
log:
  enabled: true
sniro-emulator:
  enabled: true
oof:
  enabled: true
msb:
  enabled: true
multicloud:
  enabled: false
nbi:
  enabled: false
policy:
  enabled: true
pomba:
  enabled: false
portal:
  enabled: true
robot:
  enabled: true
sdc:
  enabled: false
sdnc:
  enabled: true
  replicaCount: 3
so:
  enabled: true
uui:
  enabled: false
vfc:
  enabled: false
vid:
  enabled: false
vnfsdk:
  enabled: false
```
Save the file.
Start helm server
Go to the home directory and start the helm server and local repository.
...
Press ENTER to return to the shell prompt if helm serve keeps printing logs.
Add helm repository
Note the port number that is listed and use it in the Helm repo add as follows
```bash
helm repo add local http://127.0.0.1:8879
```
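Helm v2's `helm serve` listens on 127.0.0.1:8879 by default; if it logs a different address, the port can be pulled out of its log line before running `helm repo add`. A small sketch, where the sample log line is an assumption rather than captured from a live run:

```shell
# Hypothetical helm serve log line (helm v2 prints a "Now serving" message)
LOG_LINE='Now serving you on 127.0.0.1:8879'

# Take the last field (host:port), then strip everything up to the colon
ADDR=$(echo "$LOG_LINE" | awk '{print $NF}')
PORT=${ADDR##*:}
echo "http://$ADDR"
```

The printed URL is what goes into `helm repo add local http://$ADDR`.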
Verify helm repository
```bash
helm repo list
```
output:
```
ubuntu@onap-control-1:~$ helm repo list
NAME    URL
stable  https://kubernetes-charts.storage.googleapis.com
local   http://127.0.0.1:8879
ubuntu@onap-control-1:~$
```
Make onap helm charts available in local helm repository
```bash
cd ~/oom/kubernetes
make all; make onap
```
...
```
ubuntu@onap-control-1:~$ cd ~/oom/kubernetes/
ubuntu@onap-control-1:~/oom/kubernetes$ make all; make onap

[common]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
make[2]: Entering directory '/home/ubuntu/oom/kubernetes/common'
[common]
make[3]: Entering directory '/home/ubuntu/oom/kubernetes/common'
==> Linting common
[INFO] Chart.yaml: icon is recommended

1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/common-4.0.0.tgz
make[3]: Leaving directory '/home/ubuntu/oom/kubernetes/common'
...
...
...
[onap]
make[1]: Entering directory '/home/ubuntu/oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈Happy Helming!⎈
Saving 33 charts
Downloading aaf from repo http://127.0.0.1:8879
Downloading aai from repo http://127.0.0.1:8879
Downloading appc from repo http://127.0.0.1:8879
Downloading cassandra from repo http://127.0.0.1:8879
Downloading clamp from repo http://127.0.0.1:8879
Downloading cli from repo http://127.0.0.1:8879
Downloading common from repo http://127.0.0.1:8879
Downloading consul from repo http://127.0.0.1:8879
Downloading contrib from repo http://127.0.0.1:8879
Downloading dcaegen2 from repo http://127.0.0.1:8879
Downloading dmaap from repo http://127.0.0.1:8879
Downloading esr from repo http://127.0.0.1:8879
Downloading log from repo http://127.0.0.1:8879
Downloading sniro-emulator from repo http://127.0.0.1:8879
Downloading mariadb-galera from repo http://127.0.0.1:8879
Downloading msb from repo http://127.0.0.1:8879
Downloading multicloud from repo http://127.0.0.1:8879
Downloading nbi from repo http://127.0.0.1:8879
Downloading nfs-provisioner from repo http://127.0.0.1:8879
Downloading pnda from repo http://127.0.0.1:8879
Downloading policy from repo http://127.0.0.1:8879
Downloading pomba from repo http://127.0.0.1:8879
Downloading portal from repo http://127.0.0.1:8879
Downloading oof from repo http://127.0.0.1:8879
Downloading robot from repo http://127.0.0.1:8879
Downloading sdc from repo http://127.0.0.1:8879
Downloading sdnc from repo http://127.0.0.1:8879
Downloading so from repo http://127.0.0.1:8879
Downloading uui from repo http://127.0.0.1:8879
Downloading vfc from repo http://127.0.0.1:8879
Downloading vid from repo http://127.0.0.1:8879
Downloading vnfsdk from repo http://127.0.0.1:8879
Downloading modeling from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting onap
Lint OK

1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /home/ubuntu/oom/kubernetes/dist/packages/onap-4.0.0.tgz
make[1]: Leaving directory '/home/ubuntu/oom/kubernetes'
ubuntu@onap-control-1:~/oom/kubernetes$
```
Deploy ONAP
The name of the release is 'demo', the namespace is 'onap', and a timeout of 300 seconds is used because 'dmaap' and 'so' take some time to deploy while waiting for other components.
...
```
ubuntu@onap-control-1:~/oom/kubernetes$ helm deploy demo local/onap --namespace onap -f ~/overrides/onap-all.yaml --timeout 300
fetching local/onap
release "demo" deployed
release "demo-aaf" deployed
release "demo-aai" deployed
release "demo-cassandra" deployed
release "demo-consul" deployed
release "demo-dcaegen2" deployed
release "demo-dmaap" deployed
release "demo-log" deployed
release "demo-mariadb-galera" deployed
release "demo-msb" deployed
release "demo-oof" deployed
release "demo-policy" deployed
release "demo-portal" deployed
release "demo-robot" deployed
release "demo-sdnc" deployed
release "demo-sniro-emulator" deployed
release "demo-so" deployed
ubuntu@onap-control-1:~/oom/kubernetes$
```
Verify the deploy
```
ubuntu@onap-control-1:~/oom/kubernetes$ helm ls
NAME                 REVISION  UPDATED                   STATUS    CHART                 APP VERSION  NAMESPACE
demo                 1         Thu Jun 20 06:57:24 2019  DEPLOYED  onap-4.0.0            Dublin       onap
demo-aaf             1         Thu Jun 20 06:57:25 2019  DEPLOYED  aaf-4.0.0                          onap
demo-aai             1         Thu Jun 20 06:57:27 2019  DEPLOYED  aai-4.0.0                          onap
demo-cassandra       1         Thu Jun 20 06:57:34 2019  DEPLOYED  cassandra-4.0.0                    onap
demo-consul          1         Thu Jun 20 06:57:35 2019  DEPLOYED  consul-4.0.0                       onap
demo-dcaegen2        1         Thu Jun 20 06:57:37 2019  DEPLOYED  dcaegen2-4.0.0                     onap
demo-dmaap           1         Thu Jun 20 06:57:40 2019  DEPLOYED  dmaap-4.0.1                        onap
demo-log             1         Thu Jun 20 07:06:22 2019  DEPLOYED  log-4.0.0                          onap
demo-mariadb-galera  1         Thu Jun 20 07:06:23 2019  DEPLOYED  mariadb-galera-4.0.0               onap
demo-msb             1         Thu Jun 20 07:06:24 2019  DEPLOYED  msb-4.0.0                          onap
demo-oof             1         Thu Jun 20 07:06:27 2019  DEPLOYED  oof-4.0.0                          onap
demo-policy          1         Thu Jun 20 07:06:30 2019  DEPLOYED  policy-4.0.0                       onap
demo-portal          1         Thu Jun 20 07:06:33 2019  DEPLOYED  portal-4.0.0                       onap
demo-robot           1         Thu Jun 20 07:06:35 2019  DEPLOYED  robot-4.0.0                        onap
demo-sdnc            1         Thu Jun 20 07:06:37 2019  DEPLOYED  sdnc-4.0.0                         onap
demo-sniro-emulator  1         Thu Jun 20 07:06:40 2019  DEPLOYED  sniro-emulator-4.0.0               onap
demo-so              1         Thu Jun 20 07:06:40 2019  DEPLOYED  so-4.0.0                           onap
ubuntu@onap-control-1:~/oom/kubernetes$
```
In case of failures in deployment
If the deployment of any onap module fails, please go through these steps to redeploy the modules.
In this example, we demonstrate failure of dmaap, which normally occurs due to timeout issues.
Check the failed modules
Run 'helm ls' on the control node.
```
ubuntu@onap-control-1:~$ helm ls
NAME                 REVISION  UPDATED                   STATUS    CHART                 APP VERSION  NAMESPACE
demo                 1         Thu Jun 20 11:40:24 2019  DEPLOYED  onap-4.0.0            Dublin       onap
demo-aaf             1         Thu Jun 20 11:40:24 2019  DEPLOYED  aaf-4.0.0                          onap
demo-aai             1         Thu Jun 20 11:40:26 2019  DEPLOYED  aai-4.0.0                          onap
demo-cassandra       1         Thu Jun 20 11:40:34 2019  DEPLOYED  cassandra-4.0.0                    onap
demo-consul          1         Thu Jun 20 11:40:35 2019  DEPLOYED  consul-4.0.0                       onap
demo-dcaegen2        1         Thu Jun 20 11:40:37 2019  DEPLOYED  dcaegen2-4.0.0                     onap
demo-dmaap           1         Mon Jun 24 09:17:06 2019  FAILED    dmaap-4.0.1                        onap
demo-log             1         Sat Jun 22 14:28:43 2019  DEPLOYED  log-4.0.0                          onap
demo-mariadb-galera  1         Thu Jun 20 11:46:06 2019  DEPLOYED  mariadb-galera-4.0.0               onap
demo-msb             1         Thu Jun 20 11:46:07 2019  DEPLOYED  msb-4.0.0                          onap
demo-oof             1         Thu Jun 20 11:46:09 2019  DEPLOYED  oof-4.0.0                          onap
demo-policy          1         Thu Jun 20 11:46:13 2019  DEPLOYED  policy-4.0.0                       onap
demo-portal          1         Thu Jun 20 11:46:15 2019  DEPLOYED  portal-4.0.0                       onap
demo-robot           1         Thu Jun 20 11:46:17 2019  DEPLOYED  robot-4.0.0                        onap
demo-sdnc            1         Thu Jun 20 11:46:19 2019  DEPLOYED  sdnc-4.0.0                         onap
demo-sniro-emulator  1         Thu Jun 20 11:46:22 2019  DEPLOYED  sniro-emulator-4.0.0               onap
demo-so              1         Thu Jun 20 11:46:23 2019  DEPLOYED  so-4.0.0                           onap
```
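With seventeen releases in the listing, the failed ones can be filtered out rather than spotted by eye. A small sketch over a captured sample of the `helm ls` table (abbreviated here; on the control node the input would be `helm ls` itself):

```shell
# Abbreviated sample of `helm ls` output, saved to a file
cat > helm-ls.txt << 'EOF'
NAME            REVISION  UPDATED                   STATUS    CHART           NAMESPACE
demo-dcaegen2   1         Thu Jun 20 11:40:37 2019  DEPLOYED  dcaegen2-4.0.0  onap
demo-dmaap      1         Mon Jun 24 09:17:06 2019  FAILED    dmaap-4.0.1     onap
demo-log        1         Sat Jun 22 14:28:43 2019  DEPLOYED  log-4.0.0       onap
EOF

# Print the release name of every row whose STATUS says FAILED
FAILED=$(awk '/FAILED/ {print $1}' helm-ls.txt)
echo "$FAILED"
```

Each printed release name is then the one to delete and redeploy as described below.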
Delete the failed module
Use the release name exactly as shown in 'helm ls'.
...
```bash
cd /dockerdata-nfs/
sudo rm -r demo-dmaap/
```
Reinstall module
Reinstall the deleted module with the same release name as used in the deletion
...
Once this is deployed, you can verify with the "helm ls" command that all the required modules are up and running.
Undeploy ONAP
For the release name 'demo' and namespace 'onap':
...
```bash
cd /dockerdata-nfs/
sudo rm -r *

# e.g.
ubuntu@onap-nfs-server:~$ cd /dockerdata-nfs/
ubuntu@onap-nfs-server:/dockerdata-nfs$ sudo rm -r *
```
Access SDN-R
ODLUX-GUI at winlab
The {user} should be replaced by your user id created in orbit-lab.
...
password: Kp8bJ4SXszM0WXlhak3eHlcse2gAw84vaoGGmJvUy2U
ODLUX-GUI at your labs
```
http://{ip-address}:30202/odlux/index.html
```
The {ip-address} is the IP address of the onap-control-1 machine.
Troubleshooting
Documentation for troubleshooting.
...