...

This page details the Rancher RI installation independent of the deployment target (physical/bare-metal, OpenStack, AWS, Azure, GCE, VMware)

Prerequisites

The supported versions are as follows:

...

Delete all the containers (and services)

# master
./deleteAll.bash -n onap -y
# amsterdam only (no -y flag)
./deleteAll.bash -n onap
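
To confirm the teardown actually finished before refreshing the config, a simple wait loop (a sketch; assumes all ONAP namespaces are prefixed with "onap"):

# wait until all onap pods are gone
while kubectl get pods --all-namespaces | grep -q '^onap'; do
  echo "waiting for onap pods to terminate..."
  sleep 10
done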

Delete/Rerun config-init container for /dockerdata-nfs refresh

...

For example, a pull brings in files like the following (20170902):

root@ip-172-31-93-160:~/oom/kubernetes/oneclick# git pull
Resolving deltas: 100% (135/135), completed with 24 local objects.
From http://gerrit.onap.org/r/oom
   bf928c5..da59ee4  master     -> origin/master
Updating bf928c5..da59ee4
 kubernetes/config/docker/init/src/config/aai/aai-config/cookbooks/aai-resources/aai-resources-auth/metadata.rb                                  |    7 +
 kubernetes/config/docker/init/src/config/aai/aai-config/cookbooks/aai-resources/aai-resources-auth/recipes/aai-resources-aai-keystore.rb        |    8 +
 kubernetes/config/docker/init/src/config/aai/aai-config/cookbooks/{ajsc-aai-config => aai-resources/aai-resources-config}/CHANGELOG.md          |    2 +-
 kubernetes/config/docker/init/src/config/aai/aai-config/cookbooks/{ajsc-aai-config => aai-resources/aai-resources-config}/README.md             |    4 +-


See OOM-257 (DevOps: OOM config reset procedure for new /dockerdata-nfs content, CLOSED; worked with Zoran).

# check for the pod
kubectl get pods --all-namespaces -a
# delete all the pod/services
# master
./deleteAll.bash -n onap -y
# amsterdam
./deleteAll.bash -n onap
# delete the fs
rm -rf /dockerdata-nfs/onap
# at this point the environment is empty
# pull the repo
git pull
# rerun the config
cd ../config
./createConfig.sh -n onap
# if you get an error that release "onap-config" already exists, run: helm del --purge onap-config
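
For convenience, here is the whole reset collected into one sketch (assumes the master branch -y flag and the oom/kubernetes directory layout used above):

#!/bin/bash
# full /dockerdata-nfs reset sketch - master branch
cd ~/oom/kubernetes/oneclick
./deleteAll.bash -n onap -y               # delete all pods/services
rm -rf /dockerdata-nfs/onap               # remove the old shared config
git pull                                  # pull the latest charts/config
cd ../config
helm del --purge onap-config 2>/dev/null  # clear a stale config release if present
./createConfig.sh -n onap                 # recreate the config pod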
 
 
Example from 20170907:
root@kube0:~/oom/kubernetes/oneclick# rm -rf /dockerdata-nfs/
root@kube0:~/oom/kubernetes/oneclick# cd ../config/
root@kube0:~/oom/kubernetes/config# ./createConfig.sh -n onap
**** Creating configuration for ONAP instance: onap
Error from server (AlreadyExists): namespaces "onap" already exists
Error: a release named "onap-config" already exists.
Please run: helm ls --all "onap-config"; helm del --help
**** Done ****
root@kube0:~/oom/kubernetes/config# helm del --purge onap-config
release "onap-config" deleted
# rerun ./createConfig.sh -n onap, then ./createAll.bash -n onap

Container Endpoint access

...

robot.onap-robot:30209 TCP

kubectl get services --all-namespaces -o wide

onap-vid      vid-mariadb            None           <none>        3306/TCP         1h        app=vid-mariadb
onap-vid      vid-server             10.43.14.244   <nodes>       8080:30200/TCP   1h        app=vid-server
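
In the listing above, 8080:30200/TCP means service port 8080 is exposed as NodePort 30200 on every kubernetes node, so the endpoint can be checked from outside the cluster (substitute any node IP for <node-ip>):

# quick NodePort check against the vid-server service
curl -I http://<node-ip>:30200/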


Container Logs

kubectl --namespace onap-vid logs -f vid-server-248645937-8tt6p

16-Jul-2017 02:46:48.707 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in 22520 ms

kubectl --namespace onap-portal logs portalapps-2799319019-22mzl -f

root@obriensystemskub0:~/oom/kubernetes/oneclick# kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE       IP              NODE
onap-robot    robot-44708506-dgv8j                    1/1       Running   0          36m       10.42.240.80    obriensystemskub0
root@obriensystemskub0:~/oom/kubernetes/oneclick# kubectl --namespace onap-robot logs -f robot-44708506-dgv8j
2017-07-16 01:55:54: (log.c.164) server started
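
If a container has crashed and restarted, the current logs may not show the original failure; kubectl can also fetch the previous instance's logs with the standard --previous flag:

# logs from the previous (crashed) instance of the container
kubectl --namespace onap-robot logs robot-44708506-dgv8j --previous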


A pod may be set up to log to a volume, which can be inspected outside of the container. If you cannot connect to the container, you can inspect the backing volume instead. For a pod using an emptyDir volume, the log files can be found on the kubernetes node hosting the pod. More details can be found here: https://kubernetes.io/docs/concepts/storage/volumes/#emptydir

Here is an example of finding SDNC logs on the VM hosting a kubernetes node:

# find the sdnc pod name and which kubernetes node it is running on
kubectl -n onap-sdnc get all -o wide
# describe the pod to see the empty dir volume names and the pod uid
kubectl -n onap-sdnc describe po/sdnc-5b5b7bf89c-97qkx
# ssh to the VM hosting the kubernetes node if you are not already on the vm
ssh root@vm-host
# search the /var/lib/kubelet/pods/ directory for the log files
sudo find /var/lib/kubelet/pods/ | grep sdnc-logs
# the result is a path of the form /var/lib/kubelet/pods/<pod-uid>/volumes/kubernetes.io~empty-dir/<volume-name>
 
/var/lib/kubelet/pods/d6041229-d614-11e7-9516-fa163e6ff8e8/volumes/kubernetes.io~empty-dir/sdnc-logs
/var/lib/kubelet/pods/d6041229-d614-11e7-9516-fa163e6ff8e8/volumes/kubernetes.io~empty-dir/sdnc-logs/sdnc
/var/lib/kubelet/pods/d6041229-d614-11e7-9516-fa163e6ff8e8/volumes/kubernetes.io~empty-dir/sdnc-logs/sdnc/karaf.log
/var/lib/kubelet/pods/d6041229-d614-11e7-9516-fa163e6ff8e8/plugins/kubernetes.io~empty-dir/sdnc-logs
/var/lib/kubelet/pods/d6041229-d614-11e7-9516-fa163e6ff8e8/plugins/kubernetes.io~empty-dir/sdnc-logs/ready
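
Once the backing directory is known, the logs can be tailed directly on the node, using the path from the find output above:

tail -f /var/lib/kubelet/pods/d6041229-d614-11e7-9516-fa163e6ff8e8/volumes/kubernetes.io~empty-dir/sdnc-logs/sdnc/karaf.log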


Robot Logs

Yogini and I needed the logs in OOM Kubernetes; they were already there, behind robot:robot basic auth:

http://<your_dns_name>:30209/logs/demo/InitDistribution/report.html

for example, after running:

oom/kubernetes/robot$./demo-k8s.sh distribute

Find your path to the logs, for example:

root@ip-172-31-57-55:/dockerdata-nfs/onap/robot# kubectl --namespace onap-robot exec -it robot-4251390084-lmdbb bash

root@robot-4251390084-lmdbb:/# ls /var/opt/OpenECOMP_ETE/html/logs/demo/InitD                                                            

InitDemo/         InitDistribution/ 

The path is:

http://<your_dns_name>:30209/logs/demo/InitDemo/log.html#s1-s1-s1-s1-t1



SSH into ONAP containers

Normally I would follow https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/

Get the pod name via

kubectl get pods --all-namespaces -o wide

Bash into the pod via

kubectl -n onap-mso exec -it mso-1648770403-8hwcf /bin/bash
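
exec also runs one-off commands without opening an interactive shell, which is handy for quick checks:

# run a single command in the pod instead of opening a shell
kubectl -n onap-mso exec mso-1648770403-8hwcf -- ps -ef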


Push Files to Pods

Trying to get an authorization file into the robot pod

root@obriensystemskub0:~/oom/kubernetes/oneclick# kubectl cp authorization onap-robot/robot-44708506-nhm0n:/home/ubuntu

The copy above works; copying over an existing file fails:
root@obriensystemskub0:~/oom/kubernetes/oneclick# kubectl cp authorization onap-robot/robot-44708506-nhm0n:/etc/lighttpd/authorization
tar: authorization: Cannot open: File exists
tar: Exiting with failure status due to previous errors
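
A workaround sketch: copy to a writable temp path first, then move the file into place from inside the pod:

# stage the file in /tmp, then overwrite the existing target via exec
kubectl cp authorization onap-robot/robot-44708506-nhm0n:/tmp/authorization
kubectl -n onap-robot exec robot-44708506-nhm0n -- mv /tmp/authorization /etc/lighttpd/authorization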

Redeploying code (war/jar) in a Docker container

...

Check for the vnc-portal port via the following (it is always 30211):

obrienbiometrics:onap michaelobrien$ ssh ubuntu@dev.onap.info
ubuntu@ip-172-31-93-122:~$ sudo su -
root@ip-172-31-93-122:~# kubectl get services --all-namespaces -o wide
NAMESPACE             NAME                          CLUSTER-IP      EXTERNAL-IP   PORT(S)                                                                      AGE       SELECTOR
onap-portal           vnc-portal                    10.43.78.204    <nodes>       6080:30211/TCP,5900:30212/TCP                                                4d        app=vnc-portal
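
Before opening a browser, you can sanity-check the NodePort from any node (substitute your node IP or DNS name for <node-ip>):

# vnc-portal answers on NodePort 30211 (container port 6080)
curl -I http://<node-ip>:30211/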

launch the vnc-portal in a browser

...

See OOM-282 (vnc-portal requires /etc/hosts URL fix for SDC: sdc.ui should be sdc.api, CLOSED).

[before/after screenshots of the vnc-portal /etc/hosts fix - images not included in this export]



Log in and run SDC.


Continue with the normal ONAP demo flow at (Optional) Tutorial: Onboarding and Distributing a Vendor Software Product (VSP)

...

Having issues after a reboot of a colocated server/agent

Installing Clean Ubuntu

apt-get install ssh

apt-get install ubuntu-desktop
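
It is worth refreshing the package index first; a standard apt sketch of the two installs above:

# update the package index before installing
sudo apt-get update
sudo apt-get install -y ssh ubuntu-desktop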

DNS resolution

ignore - not relevant

...

Make sure your OpenStack parameters are set in onap-parameters.yaml if you get the following error starting up the config pod:

root@obriensystemsu0:~# kubectl get pods --all-namespaces -a
NAMESPACE     NAME                                   READY     STATUS    RESTARTS   AGE
kube-system   heapster-4285517626-l9wjp              1/1       Running   4          22d
kube-system   kube-dns-2514474280-4411x              3/3       Running   9          22d
kube-system   kubernetes-dashboard-716739405-fq507   1/1       Running   4          22d
kube-system   monitoring-grafana-3552275057-w3xml    1/1       Running   4          22d
kube-system   monitoring-influxdb-4110454889-bwqgm   1/1       Running   4          22d
kube-system   tiller-deploy-737598192-841l1          1/1       Running   4          22d
onap          config                                 0/1       Error     0          1d
root@obriensystemsu0:~# vi /etc/hosts
root@obriensystemsu0:~# kubectl logs -n onap config
Validating onap-parameters.yaml has been populated
Error: OPENSTACK_UBUNTU_14_IMAGE must be set in onap-parameters.yaml
+ echo 'Validating onap-parameters.yaml has been populated'
+ [[ -z '' ]]
+ echo 'Error: OPENSTACK_UBUNTU_14_IMAGE must be set in onap-parameters.yaml'
+ exit 1
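
After populating onap-parameters.yaml, a quick sketch to verify the value is no longer empty before re-running the config:

# should print a non-empty value
grep OPENSTACK_UBUNTU_14_IMAGE onap-parameters.yaml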
 
Fix:
root@obriensystemsu0:~/onap_1007/oom/kubernetes/config# helm delete --purge onap-config
release "onap-config" deleted
root@obriensystemsu0:~/onap_1007/oom/kubernetes/config# ./createConfig.sh -n onap
 
**** Creating configuration for ONAP instance: onap
Error from server (AlreadyExists): namespaces "onap" already exists
NAME:   onap-config
LAST DEPLOYED: Mon Oct  9 21:35:27 2017
NAMESPACE: onap
STATUS: DEPLOYED
 
RESOURCES:
==> v1/ConfigMap
NAME                   DATA  AGE
global-onap-configmap  15    0s
 
==> v1/Pod
NAME    READY  STATUS             RESTARTS  AGE
config  0/1    ContainerCreating  0         0s
 
**** Done ****
root@obriensystemsu0:~/onap_1007/oom/kubernetes/config# kubectl get pods --all-namespaces -a
NAMESPACE     NAME                                   READY     STATUS    RESTARTS   AGE
kube-system   heapster-4285517626-l9wjp              1/1       Running   4          22d
kube-system   kube-dns-2514474280-4411x              3/3       Running   9          22d
kube-system   kubernetes-dashboard-716739405-fq507   1/1       Running   4          22d
kube-system   monitoring-grafana-3552275057-w3xml    1/1       Running   4          22d
kube-system   monitoring-influxdb-4110454889-bwqgm   1/1       Running   4          22d
kube-system   tiller-deploy-737598192-841l1          1/1       Running   4          22d
onap          config                                 1/1       Running   0          25s
root@obriensystemsu0:~/onap_1007/oom/kubernetes/config# kubectl get pods --all-namespaces -a
NAMESPACE     NAME                                   READY     STATUS      RESTARTS   AGE
kube-system   heapster-4285517626-l9wjp              1/1       Running     4          22d
kube-system   kube-dns-2514474280-4411x              3/3       Running     9          22d
kube-system   kubernetes-dashboard-716739405-fq507   1/1       Running     4          22d
kube-system   monitoring-grafana-3552275057-w3xml    1/1       Running     4          22d
kube-system   monitoring-influxdb-4110454889-bwqgm   1/1       Running     4          22d
kube-system   tiller-deploy-737598192-841l1          1/1       Running     4          22d
onap          config                                 0/1       Completed   0          1m
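
Rather than re-running get pods manually, the config pod can be watched until it flips from Running to Completed:

# watch until the config pod reaches Completed
kubectl get pods -n onap -a -w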