This wiki is under construction; content here may not be fully specified or may be missing. TODO: get DCAE yamls working, fix health tracking issues for healing.
There are three ways to provision or connect to a Kubernetes cluster: kubeadm, Cloudify, and Rancher.
Official Documentation: https://onap.readthedocs.io/en/latest/submodules/oom.git/docs/OOM%20User%20Guide/oom_user_guide.html?highlight=oom
Integration: https://wiki.opnfv.org/pages/viewpage.action?pageId=12389095
The OOM (ONAP Operations Manager) project has pushed Kubernetes-based deployment code to the oom repository, based on ONAP 1.1. This page details getting ONAP running (specifically the vFirewall demo) on Kubernetes for various virtual and native environments. It assumes you have access to any type of bare metal or VM running a clean Ubuntu 16.04 image, whether on Rackspace, OpenStack, your laptop, or an AWS spot EC2 instance.
Architectural details of the OOM project are described in the OOM User Guide.
See Alexis' page, ONAP on Kubernetes on Rancher in OpenStack,
and the SDN-C clustering work, SDN-C Clustering on Kubernetes.
Server | URL | Notes |
---|---|---|
Live Amsterdam server | http://amsterdam.onap.info:8880 | Login to Rancher/Kubernetes only in the last 45 min of the hour. Use the system only in the last 10 min of the hour. |
Jenkins server | http://jenkins.onap.info/job/oom-cd/ | view deployment status, deployment (pod up status) |
Kibana server | http://kibana.onap.info:5601 | query "message" logs or view the dashboard |
Metric | Min | Full System | Notes
---|---|---|---
vCPU | 8 | 64 recommended (16/32 OK) | The full ONAP system of 85+ containers is CPU and network bound on startup; if you pre-pull the docker images to remove the network slowdown, vCPU utilization will peak at 52 cores on a 64-core system and bring the system up in under 4 min. On a system with 8/16 cores you will see the normal 13/7 min startup time as the 52-core demand is throttled to 8/16.
RAM | 7G (a couple of components) | 55G (75 containers) | You need at least 53G of RAM; 3G is for Rancher/Kubernetes itself (the VFs exist on a separate OpenStack). 53G to start and 57G after running the system for a day.
HD | 60G | 120G+ |
Software | | | Rancher v1.6.10, kubectl 1.8+ (not 1.9 yet), Docker 1.12, Helm 2.3 (do not use 2.6/2.7 yet)
We need a Kubernetes installation, either a base installation or one behind a thin API wrapper like Rancher.
There are several options; currently Rancher with Helm on Ubuntu 16.04 is the focus as a thin wrapper on Kubernetes. Other alternative platforms are covered in the subpage ONAP on Kubernetes (Alternatives).
OS | VIM | Description | Status | Nodes | Links |
---|---|---|---|---|---|
Ubuntu 16.04.2 | Bare Metal, VMware | Rancher | Recommended approach. Issue with Kubernetes support only in Docker 1.12 (obsolete docker-machine) on OSX | 1-4 | http://rancher.com/docs/rancher/v1.6/en/quick-start-guide/ |
Latest: 20171206 AWS install from a clean Ubuntu 16.04 VM using the rancher setup script below and the cd.sh script to bring up OOM. After the 20 min pre-pull of docker images, OOM comes up fully with only the known aaf issue (84 of 85 containers); all healthchecks pass except DCAE at 29/30; the portal was tested, along with an AAI cloud-region PUT.
Run as root
See the video for AWS at ONAP on Amazon EC2.
This assumes the root user, not ubuntu.
Browse to http://<your_dns_name>:8880/ and follow the instructions in the manual quickstart section below (create the k8s env, register the host, run the rancher client, copy the token to ~/.kube/config).
The curl of this script is incorporated into the cd.sh script below.
You can run this step repeatedly.
Run the following (requires onap-parameters.yaml and the JSON body below for the AAI cloud-region PUT so robot init can run, both on the filesystem beside the cd.sh script).
The ./deleteAll.bash call inside cd.sh currently runs the -y override in master/beijing but not in amsterdam; this is being fixed in https://github.com/obrienlabs/onap-root/blob/master/cd.sh
./cd.sh -b master (or -b amsterdam)
https://github.com/obrienlabs/onap-root/blob/master/cd.sh
This is the same script that runs on our jenkins CD hourly job for master
http://jenkins.onap.info/job/oom-cd/
(Manual instructions)
oom/kubernetes/oneclick/setenv.bash may be updated to the following reduced app set.
HELM_APPS=('mso' 'message-router' 'sdnc' 'vid' 'robot' 'portal' 'policy' 'appc' 'aai' 'sdc' 'log')
#HELM_APPS=('consul' 'msb' 'mso' 'message-router' 'sdnc' 'vid' 'robot' 'portal' 'policy' 'appc' 'aai' 'sdc' 'dcaegen2' 'log' 'cli' 'multicloud' 'clamp' 'vnfsdk' 'uui' 'aaf' 'vfc' 'kube2msb' 'esr')
1) Install rancher, clone oom, run the config-init pod, and run one or all ONAP components.
Note: uninstall Docker if it is already installed, as Kubernetes only supports Docker 1.12.x (as of 20170809).
ONAP deployment in Kubernetes is modelled in the oom project as a 1:1 set of service:pod sets (one pod per docker container). The fastest way to get ONAP on Kubernetes up is via Rancher on any bare metal or VM that supports a clean Ubuntu 16.04 install and has more than 50G of RAM.
(On each host) Add an entry to your /etc/hosts that points your IP to your hostname (add your hostname to the end of the line). Add entries for all other hosts in your cluster.
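A minimal sketch of that /etc/hosts step (it assumes the first address reported by `hostname -I` is the routable one; run as root and repeat for the other cluster hosts):

```shell
# Build the "<ip> <hostname>" entry for this host.
# Assumption: the first address from `hostname -I` is the routable one.
HOST_IP=$(hostname -I | awk '{print $1}')
HOST_NAME=$(hostname)
HOSTS_LINE="${HOST_IP} ${HOST_NAME}"
# Append it (run as root); add equivalent lines for every other cluster host.
echo "${HOSTS_LINE}" >> /etc/hosts
```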
Open ports: on most hosts, such as OpenStack or EC2, you can open all the ports or they are open by default; on some environments, such as Rackspace VMs, you need to open them.
Fix virtual memory allocation (to allow onap-log:elasticsearch to come up under Rancher 1.6.11)
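The fix Elasticsearch documents is raising `vm.max_map_count` to at least 262144; a sketch (run as root):

```shell
# Elasticsearch refuses to start when vm.max_map_count is below 262144.
SYSCTL_LINE="vm.max_map_count=262144"
sysctl -w vm.max_map_count=262144
# Persist the setting across reboots.
echo "${SYSCTL_LINE}" >> /etc/sysctl.conf
```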
Clone oom (scp your onap_rsa private key first, or clone anonymously; ideally get a full Gerrit account and join the community). See the ssh/http/https access links at https://gerrit.onap.org/r/#/admin/projects/oom
or use https (substitute your user/pass).
(On each host, server and client(s), which may be the same machine) Install only the 1.12.x version of Docker (currently 1.12.6), the only version that works with Kubernetes in Rancher 1.6.
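Rancher publishes version-pinned Docker install scripts; a sketch assuming the 1.12.6 script is available at releases.rancher.com (remove any newer Docker first):

```shell
# Remove any existing Docker, then install the pinned 1.12.x engine.
DOCKER_VERSION=1.12.6
apt-get remove -y docker docker-engine docker.io || true
curl -s "https://releases.rancher.com/install-docker/${DOCKER_VERSION}.sh" | sh
docker --version
```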
Pre-pull the docker images the first time you install ONAP. Currently the pre-pull takes 10-35 min depending on the throttling, what you have already pulled, and the load on nexus3.onap.org:10001. Pre-pulling the images allows all of ONAP to start in 3-8 min instead of up to 3 hours. This is a WIP: https://jira.onap.org/secure/attachment/10501/prepull_docker.sh Use the script above in oom/kubernetes/config once it is merged.
To monitor when the prepull is finished, see the section Prepulldockerimages. (On the master only) Install Rancher (optional: use 8880 instead of 8080 if there is a conflict). Note there may be issues with the dns pod in Rancher after a reboot or when running clustered hosts; a clean system will be OK.
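The Rancher server itself runs as a single container; a sketch pinned to the v1.6.10 tag from the software table, mapping the UI to 8880 as suggested above:

```shell
# Start the Rancher 1.6 server UI on port 8880 (8080 is often in use).
RANCHER_PORT=8880
docker run -d --restart=unless-stopped -p ${RANCHER_PORT}:8080 rancher/server:v1.6.10
# The UI takes a minute or two to come up; then browse to http://<host>:8880
```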
In the Rancher UI, don't use http://127.0.0.1:8880; use the real IP address so the client configs are populated correctly with callbacks. You must deactivate the default CATTLE environment by adding a KUBERNETES environment and deactivating the older default CATTLE one; your added hosts will attach to the default.
Register your host(s): run the following on each host (including the master if you are collocating the master/host on a single machine/VM). For each host, in Rancher > Infrastructure > Hosts select "Add Host". The first time you add a host you will be presented with a screen containing the routable IP; hit save only on a routable IP. Enter the IP of the host if you launched Rancher with 127.0.0.1/localhost; otherwise keep it empty and it will autopopulate the registration with the real IP. Copy the command to register the host with Rancher and execute it on each host, for example:
Wait for the Kubernetes menu to populate with the CLI. Install kubectl: the following will install kubectl (1.8 only, see https://github.com/kubernetes/kubernetes/issues/57528) on a Linux host. Once configured, this client tool provides management of a Kubernetes cluster.
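A sketch of the kubectl install (v1.8.10 is an assumed 1.8.x patch level; any 1.8.x release from the standard kubernetes-release bucket should do):

```shell
# Download a pinned 1.8.x kubectl and put it on the PATH.
KUBECTL_VERSION=v1.8.10
curl -LO "https://storage.googleapis.com/kubernetes-release/release/${KUBECTL_VERSION}/bin/linux/amd64/kubectl"
chmod +x kubectl && mv kubectl /usr/local/bin/kubectl
# The config content is pasted into ~/.kube/config from the Rancher UI.
mkdir -p ~/.kube
```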
Paste the kubectl config from Rancher (you will see the CLI menu in Rancher / Kubernetes after the k8s pods are up on your host). Click "Generate Config" to get the content to add into ~/.kube/config. Verify that the Kubernetes config is good.
Install Helm: the following will install Helm (use 2.3.0, not the current 2.7.0) on a Linux host. Helm is used by OOM for package and configuration management. TODO: document why 2.6 is an issue - see Prerequisite: Install Kubectl.
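A sketch of the Helm 2.3.0 install (the kubernetes-helm bucket URL is the standard release location for Helm 2; verify it for your mirror):

```shell
# Fetch and install the pinned Helm client (2.3.0; avoid 2.6/2.7 here).
HELM_VERSION=v2.3.0
wget "https://storage.googleapis.com/kubernetes-helm/helm-${HELM_VERSION}-linux-amd64.tar.gz"
tar -zxf "helm-${HELM_VERSION}-linux-amd64.tar.gz"
mv linux-amd64/helm /usr/local/bin/helm
helm version --client
```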
Undercloud done - move on to the ONAP installation. You can install OOM manually as below, or run the cd.sh script (below, or attached to the top of this page - Install/Refresh OOM, https://github.com/obrienlabs/onap-root/blob/master/cd.sh). Wait until all the hosts show green in Rancher; then we are ready to configure and deploy the ONAP environment in Kubernetes. These scripts are found in the following folders:
First source oom/kubernetes/oneclick/setenv.bash. This will set your helm list of components to start/delete
Second, we need to configure ONAP before deployment. This is a one-time operation that spawns a temporary config pod. It mounts the volume /dockerdata/ contained in the config-init pod and also creates the directory /dockerdata-nfs on the Kubernetes node. This mount is required for all other ONAP pods to function. Note: the pod will stop after NFS creation - this is normal. https://git.onap.org/oom/tree/kubernetes/config/onap-parameters-sample.yaml
**** Creating configuration for ONAP instance: onap
Wait until the config-init pod is gone before trying to bring up a component or all of ONAP - around 60 sec (up to 10 min) - see https://wiki.onap.org/display/DW/ONAP+on+Kubernetes#ONAPonKubernetes-Waitingforconfig-initcontainertofinish-20sec
root@ip-172-31-93-122:~/oom_20170908/oom/kubernetes/config# kubectl get pods --all-namespaces -a
onap config 0/1 Completed 0 1m
Note: when using the -a option the config container will show up with its status; without the -a flag it will not be present.
Cluster configuration (optional - do not use if your server/client are co-located): share the /dockerdata-nfs folder between the Kubernetes nodes running ONAP.
Don't run all the pods unless you have at least 52G allocated; if you have a laptop/VM with 16G, you can only run enough pods to fit in around 11G.
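The wait described above can be scripted; a minimal polling sketch (assumes the kubectl context from the previous steps and the default `onap` namespace; `wait_for_config` is a hypothetical helper name):

```shell
# Poll until the one-time config pod reports Completed, up to ~10 minutes.
wait_for_config() {
  for i in $(seq 1 40); do
    # STATUS column is field 3 of `kubectl get pods` output.
    STATUS=$(kubectl -n onap get pods -a 2>/dev/null | awk '/^config/ {print $3}')
    [ -z "$STATUS" ] && echo "config pod not found" && return 1
    [ "$STATUS" = "Completed" ] && return 0
    sleep 15
  done
  return 1
}
wait_for_config && echo "config pod completed - safe to bring up components"
```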
(To bring up a single service at a time) Use the default "onap" namespace if you want to run robot tests out of the box, as in "onap-robot". Bring up the core components:
Only if you have >52G run the following (all namespaces)
ONAP is OK if everything is 1/1 in the following
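A one-liner sketch of that check, counting pods whose READY column is not n/n (with --all-namespaces, READY is the third column):

```shell
# ONAP is up when this reports 0 pods not ready.
NOT_READY=$(kubectl get pods --all-namespaces 2>/dev/null \
  | awk 'NR>1 {split($3,a,"/"); if (a[1] != a[2]) print $2}' | wc -l)
echo "pods not ready: ${NOT_READY}"
```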
Run the ONAP portal via the instructions at RunningONAPusingthevnc-portal. Wait until the containers are all up, then check the AAI endpoints:
root@ip-172-31-93-160:/dockerdata-nfs/onap/robot# kubectl -n onap-aai exec -it aai-service-3321436576-2snd6 bash
root@aai-service-3321436576-2snd6:/# ps -ef
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 15:50 ? 00:00:00 /usr/local/sbin/haproxy-systemd-
root 7 1 0 15:50 ? 00:00:00 /usr/local/sbin/haproxy-master
root@ip-172-31-93-160:/dockerdata-nfs/onap/robot# curl https://127.0.0.1:30233/aai/v11/service-design-and-creation/models
curl: (60) server certificate verification failed. CAfile: /etc/ssl/certs/ca-certificates.crt CRLfile: none
Run the initial healthcheck directly on the host. Initialize robot:
cd oom/kubernetes/robot
root@ip-172-31-83-168:~/oom/kubernetes/robot# ./demo-k8s.sh init_robot # password for test:test
then health:
root@ip-172-31-83-168:~/oom/kubernetes/robot# ./ete-k8s.sh health
When running as the non-root ubuntu and jenkins users, the NFS share needs its permissions upgraded in order for a delete to occur on VM reset.
ubuntu@ip-172-31-85-6:~$ sudo chmod 777 -R /dockerdata-nfs/
The total pod count is 75.
Docker container list - may not be fully up to date: https://git.onap.org/integration/tree/packaging/docker/docker-images.csv
Get the health status via:
NAMESPACE master:20170715 | NAME | Debug port | Containers (2 includes filebeat) | Log Volume External | Log Locations docker internal | Public Ports | Notes |
---|---|---|---|---|---|---|---|
default | config-init | The mount "config-init-root" is in the following location (user configurable VF parameter file below) /dockerdata-nfs/onapdemo/mso/mso/mso-docker.json | |||||
onap-aaf | |||||||
onap-aaf | |||||||
onap-aai | aai-resources | /opt/aai/logroot/AAI-RES | |||||
onap-aai | aai-service | ||||||
onap-aai | aai-traversal | /opt/aai/logroot/AAI-GQ | |||||
onap-aai | data-router | ||||||
onap-aai | elasticsearch | ||||||
onap-aai | hbase | ||||||
onap-aai | model-loader-service | ||||||
onap-aai | search-data-service | ||||||
onap-aai | sparky-be | ||||||
onap-appc | appc | 2 | |||||
onap-appc | appc-dbhost | ||||||
onap-appc | appc-dgbuilder | ||||||
onap-appc | sdnctldb01 (internal) ||||||
onap-appc | sdnctldb02 (internal) | ||||||
onap-cli | |||||||
onap-clamp | clamp | ||||||
onap-clamp | clamp-mariadb | ||||||
onap-consul | consul-agent | ||||||
onap-consul | consul-server | ||||||
onap-consul | consul-server | ||||||
onap-consul | consul-server | ||||||
onap-dcae | dcae-zookeeper | ||||||
onap-dcae | dcae-kafka | Note: currently there are no DCAE containers running yet (we are missing 6 yaml files (1 for the controller and 5 for the collector,staging,3-cdap pods)) - therefore DMaaP, VES collectors and APPC actions as the result of policy actions (closed loop) - will not function yet. In review: https://gerrit.onap.org/r/#/c/7287/ | |||||
onap-dcae | dcae-dmaap | ||||||
onap-dcae | pgaas (PostgreSQL aaS) | https://hub.docker.com/r/oomk8s/pgaas/tags/ |||||
onap-dcae | dcae-collector-common-event | persistent volume: dcae-collector-pvs | |||||
onap-dcae | dcae-collector-dmaapbc | ||||||
onap-dcae | dcae-ves-collector | ||||||
onap-dcae | cdap-0 | ||||||
onap-dcae | cdap-1 | ||||||
onap-dcae | cdap-2 | ||||||
onap-kube2msb | kube2msb-registrator | ||||||
onap-log | elasticsearch | ||||||
onap-log | kibana | ||||||
onap-log | logstash | ||||||
onap-message-router | dmaap | ||||||
onap-message-router | global-kafka | ||||||
onap-message-router | zookeeper | ||||||
onap-msb | msb-consul | bring onap-msb up before the rest of onap follow | |||||
onap-msb | msb-discovery | ||||||
onap-msb | msb-eag | ||||||
onap-msb | msb-iag | ||||||
onap-mso | mariadb | ||||||
onap-mso | mso | 2 | |||||
onap-multicloud | framework | ||||||
onap-multicloud | multicloud-ocata | ||||||
onap-multicloud | multicloud-vio | ||||||
onap-multicloud | multicloud-windriver | ||||||
onap-policy | brmsgw | ||||||
onap-policy | drools | 2 | |||||
onap-policy | mariadb | ||||||
onap-policy | nexus | ||||||
onap-policy | pap | 2 | |||||
onap-policy | pdp | 2 | |||||
onap-portal | portalapps | 2 | |||||
onap-portal | portaldb | ||||||
onap-portal | portalwidgets | ||||||
onap-portal | vnc-portal | ||||||
onap-robot | robot | ||||||
onap-sdc | sdc-be | /dockerdata-nfs/onap/sdc/logs/SDC/SDC-BE | ${log.home}/${OPENECOMP-component-name}/ ${OPENECOMP-subcomponent-name}/transaction.log.%i ./var/lib/jetty/logs/SDC/SDC-BE/metrics.log ./var/lib/jetty/logs/SDC/SDC-BE/audit.log ./var/lib/jetty/logs/SDC/SDC-BE/debug_by_package.log ./var/lib/jetty/logs/SDC/SDC-BE/debug.log ./var/lib/jetty/logs/SDC/SDC-BE/transaction.log ./var/lib/jetty/logs/SDC/SDC-BE/error.log ./var/lib/jetty/logs/importNormativeAll.log ./var/lib/jetty/logs/ASDC/ASDC-FE/audit.log ./var/lib/jetty/logs/ASDC/ASDC-FE/debug.log ./var/lib/jetty/logs/ASDC/ASDC-FE/transaction.log ./var/lib/jetty/logs/ASDC/ASDC-FE/error.log ./var/lib/jetty/logs/2017_09_06.stderrout.log | ||||
onap-sdc | sdc-cs | ||||||
onap-sdc | sdc-es | 2 | |||||
onap-sdc | sdc-fe | 2 | ./var/lib/jetty/logs/SDC/SDC-BE/metrics.log ./var/lib/jetty/logs/SDC/SDC-BE/audit.log ./var/lib/jetty/logs/SDC/SDC-BE/debug_by_package.log ./var/lib/jetty/logs/SDC/SDC-BE/debug.log ./var/lib/jetty/logs/SDC/SDC-BE/transaction.log ./var/lib/jetty/logs/SDC/SDC-BE/error.log ./var/lib/jetty/logs/importNormativeAll.log ./var/lib/jetty/logs/2017_09_07.stderrout.log ./var/lib/jetty/logs/ASDC/ASDC-FE/audit.log ./var/lib/jetty/logs/ASDC/ASDC-FE/debug.log ./var/lib/jetty/logs/ASDC/ASDC-FE/transaction.log ./var/lib/jetty/logs/ASDC/ASDC-FE/error.log ./var/lib/jetty/logs/2017_09_06.stderrout.log | ||||
onap-sdc | sdc-kb | ||||||
onap-sdnc | sdnc | 2 | ./opt/opendaylight/distribution-karaf-0.5.1-Boron-SR1/journal/000006.log ./opt/opendaylight/distribution-karaf-0.5.1-Boron-SR1/data/cache/1504712225751.log ./opt/opendaylight/distribution-karaf-0.5.1-Boron-SR1/data/cache/1504712002358.log ./opt/opendaylight/distribution-karaf-0.5.1-Boron-SR1/data/tmp/xql.log ./opt/opendaylight/distribution-karaf-0.5.1-Boron-SR1/data/log/karaf.log ./opt/opendaylight/distribution-karaf-0.5.1-Boron-SR1/taglist.log ./var/log/dpkg.log ./var/log/apt/history.log ./var/log/apt/term.log ./var/log/fontconfig.log ./var/log/alternatives.log ./var/log/bootstrap.log | ||||
onap-sdnc | sdnc-dbhost | ||||||
onap-sdnc | sdnc-dgbuilder | ||||||
onap-sdnc | sdnctldb01 (internal) ||||||
onap-sdnc | sdnctldb02 (internal) ||||||
onap-sdnc | sdnc-portal | ./opt/openecomp/sdnc/admportal/server/npm-debug.log ./var/log/dpkg.log ./var/log/apt/history.log ./var/log/apt/term.log ./var/log/fontconfig.log ./var/log/alternatives.log ./var/log/bootstrap.log | |||||
onap-vid | vid-mariadb | ||||||
onap-vid | vid-server | 2 | |||||
onap-vfc | vfc-catalog | ||||||
onap-vfc | vfc-emsdriver | ||||||
onap-vfc | vfc-gvnfmdriver | ||||||
onap-vfc | vfc-hwvnfmdriver | ||||||
onap-vfc | vfc-jujudriver | ||||||
onap-vfc | vfc-nslcm | ||||||
onap-vfc | vfc-resmgr | ||||||
onap-vfc | vfc-vnflcm | ||||||
onap-vfc | vfc-vnfmgr | ||||||
onap-vfc | vfc-vnfres | ||||||
onap-vfc | vfc-workflow | ||||||
onap-vfc | vfc-ztesdncdriver | ||||||
onap-vfc | vfc-ztevmanagerdrive | ||||||
onap-vnfsdk | postgres | ||||||
onap-vnfsdk | refrepo |
The diagram above describes the init dependencies for the ONAP pods when first deploying OOM through Kubernetes.
https://git.onap.org/oom/tree/kubernetes/config/onap-parameters-sample.yaml
Before running pod-config-init.yaml, make sure your config for OpenStack is set up correctly, so you can, for example, deploy the vFirewall VMs and run the demo robot scripts.
vi oom/kubernetes/config/config/onap-parameters.yaml
Rackspace example:
# defaults OK
OPENSTACK_UBUNTU_14_IMAGE: "Ubuntu_14.04.5_LTS"
OPENSTACK_FLAVOUR_MEDIUM: "m1.medium"
OPENSTACK_SERVICE_TENANT_NAME: "service"
DMAAP_TOPIC: "AUTO"
DEMO_ARTIFACTS_VERSION: "1.1.0-SNAPSHOT"
# Modify for your OS/RS account
OPENSTACK_API_KEY: "uuid-key"
https://kubernetes.io/docs/user-guide/kubectl-cheatsheet/
Delete all the containers (and services)
./deleteAll.bash -n onap -y
# in amsterdam only:
./deleteAll.bash -n onap
Delete the config-init container and its generated /dockerdata-nfs share
There may be cases where new configuration content needs to be deployed after a pull of a new version of ONAP.
for example after pull brings in files like the following (20170902)
root@ip-172-31-93-160:~/oom/kubernetes/oneclick# git pull
Resolving deltas: 100% (135/135), completed with 24 local objects.
From http://gerrit.onap.org/r/oom
bf928c5..da59ee4 master -> origin/master
Updating bf928c5..da59ee4
kubernetes/config/docker/init/src/config/aai/aai-config/cookbooks/aai-resources/aai-resources-auth/metadata.rb | 7 +
kubernetes/config/docker/init/src/config/aai/aai-config/cookbooks/aai-resources/aai-resources-auth/recipes/aai-resources-aai-keystore.rb | 8 +
kubernetes/config/docker/init/src/config/aai/aai-config/cookbooks/{ajsc-aai-config => aai-resources/aai-resources-config}/CHANGELOG.md | 2 +-
kubernetes/config/docker/init/src/config/aai/aai-config/cookbooks/{ajsc-aai-config => aai-resources/aai-resources-config}/README.md | 4 +-
see (worked with Zoran)
# check for the pod
kubectl get pods --all-namespaces -a
# delete all the pods/services
# master
./deleteAll.bash -n onap -y
# amsterdam
./deleteAll.bash -n onap
# delete the fs
rm -rf /dockerdata-nfs/onap
# at this moment, it is an empty env
# pull the repo
git pull
# rerun the config
cd ../config
./createConfig.bash -n onap
If you get an error saying the release onap-config already exists, then run:
helm del --purge onap-config
example 20170907:
root@kube0:~/oom/kubernetes/oneclick# rm -rf /dockerdata-nfs/
root@kube0:~/oom/kubernetes/oneclick# cd ../config/
root@kube0:~/oom/kubernetes/config# ./createConfig.sh -n onap
**** Creating configuration for ONAP instance: onap
Error from server (AlreadyExists): namespaces "onap" already exists
Error: a release named "onap-config" already exists. Please run: helm ls --all "onap-config"; helm del --help
**** Done ****
root@kube0:~/oom/kubernetes/config# helm del --purge onap-config
release "onap-config" deleted
# rerun createAll.bash -n onap
root@ip-172-31-93-160:~/oom/kubernetes/config# kubectl get pods --all-namespaces -a
NAMESPACE NAME READY STATUS RESTARTS AGE
onap config-init 0/1 ContainerCreating 0 6s
root@ip-172-31-93-160:~/oom/kubernetes/config# kubectl get pods --all-namespaces -a
NAMESPACE NAME READY STATUS RESTARTS AGE
onap config-init 1/1 Running 0 9s
root@ip-172-31-93-160:~/oom/kubernetes/config# kubectl get pods --all-namespaces -a
NAMESPACE NAME READY STATUS RESTARTS AGE
onap config-init 0/1 Completed 0 14s
Check the services view in the Kubernetes API under robot:
robot.onap-robot:88 TCP
robot.onap-robot:30209 TCP
kubectl get services --all-namespaces -o wide
onap-vid vid-mariadb None <none> 3306/TCP 1h app=vid-mariadb
onap-vid vid-server 10.43.14.244 <nodes> 8080:30200/TCP 1h app=vid-server
kubectl --namespace onap-vid logs -f vid-server-248645937-8tt6p
16-Jul-2017 02:46:48.707 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in 22520 ms
kubectl --namespace onap-portal logs portalapps-2799319019-22mzl -f
root@obriensystemskub0:~/oom/kubernetes/oneclick# kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
onap-robot robot-44708506-dgv8j 1/1 Running 0 36m 10.42.240.80 obriensystemskub0
root@obriensystemskub0:~/oom/kubernetes/oneclick# kubectl --namespace onap-robot logs -f robot-44708506-dgv8j
2017-07-16 01:55:54: (log.c.164) server started
A pod may be set up to log to a volume which can be inspected outside of a container. If you cannot connect to the container, you can inspect the backing volume instead. This is how you find the backing directory for a pod which is using an emptyDir volume; the log files can be found on the Kubernetes node hosting the pod. More details can be found here: https://kubernetes.io/docs/concepts/storage/volumes/#emptydir
Here is an example of finding SDNC logs on a VM hosting a Kubernetes node.
#find the sdnc pod name and which kubernetes node it's running on
kubectl -n onap-sdnc get all -o wide
#describe the pod to see the emptyDir volume names and the pod uid
kubectl -n onap-sdnc describe po/sdnc-5b5b7bf89c-97qkx
#ssh to the VM hosting the kubernetes node if you are not already on the vm
ssh root@vm-host
#search the /var/lib/kubelet/pods/ directory for the log file
sudo find /var/lib/kubelet/pods/ | grep sdnc-logs
#The result is a path of the format /var/lib/kubelet/pods/<pod-uid>/volumes/kubernetes.io~empty-dir/<volume-name>
/var/lib/kubelet/pods/d6041229-d614-11e7-9516-fa163e6ff8e8/volumes/kubernetes.io~empty-dir/sdnc-logs
/var/lib/kubelet/pods/d6041229-d614-11e7-9516-fa163e6ff8e8/volumes/kubernetes.io~empty-dir/sdnc-logs/sdnc
/var/lib/kubelet/pods/d6041229-d614-11e7-9516-fa163e6ff8e8/volumes/kubernetes.io~empty-dir/sdnc-logs/sdnc/karaf.log
/var/lib/kubelet/pods/d6041229-d614-11e7-9516-fa163e6ff8e8/plugins/kubernetes.io~empty-dir/sdnc-logs
/var/lib/kubelet/pods/d6041229-d614-11e7-9516-fa163e6ff8e8/plugins/kubernetes.io~empty-dir/sdnc-logs/ready
Yogini and I needed the logs in OOM Kubernetes - they were already there, behind robot:robot auth:
http://<your_dns_name>:30209/logs/demo/InitDistribution/report.html
for example after a oom/kubernetes/robot$ ./demo-k8s.sh distribute
Find your path to the logs by using for example:
root@ip-172-31-57-55:/dockerdata-nfs/onap/robot# kubectl --namespace onap-robot exec -it robot-4251390084-lmdbb bash
root@robot-4251390084-lmdbb:/# ls /var/opt/OpenECOMP_ETE/html/logs/demo/InitD
InitDemo/ InitDistribution/
The path is http://<your_dns_name>:30209/logs/demo/InitDemo/log.html#s1-s1-s1-s1-t1
Normally I would do this via https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/
Get the pod name via:
kubectl get pods --all-namespaces -o wide
bash into the pod via:
kubectl -n onap-mso exec -it mso-1648770403-8hwcf /bin/bash
Trying to get an authorization file into the robot pod
root@obriensystemskub0:~/oom/kubernetes/oneclick# kubectl cp authorization onap-robot/robot-44708506-nhm0n:/home/ubuntu
(above works?)
via URL
http://cd.onap.info:30223/mso/logging/debug
via logback.xml
see (Optional) Tutorial: Onboarding and Distributing a Vendor Software Product (VSP)
or run the vnc-portal container to access ONAP using the traditional port mappings. See the following recorded video by Mike Elliot of the OOM team for an audio-visual reference.
Check for the vnc-portal port via (it is always 30211)
obrienbiometrics:onap michaelobrien$ ssh ubuntu@dev.onap.info
ubuntu@ip-172-31-93-122:~$ sudo su -
root@ip-172-31-93-122:~# kubectl get services --all-namespaces -o wide
NAMESPACE NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
onap-portal vnc-portal 10.43.78.204 <nodes> 6080:30211/TCP,5900:30212/TCP 4d app=vnc-portal
Launch the vnc-portal in a browser.
The password is "password".
Open firefox inside the VNC vm - launch portal normally
http://portal.api.simpledemo.onap.org:8989/ONAPPORTAL/login.htm
(20170906) Before running SDC, fix the /etc/hosts (thanks Yogini for catching this); edit your /etc/hosts as follows
(change sdc.ui to sdc.api):
before | after | notes |
---|---|---|
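A sketch of the edit inside the vnc-portal container (the 10.43.x.x address is a placeholder for whatever cluster IP is already on that line; the simpledemo.onap.org names are the assumed Amsterdam naming convention):

```
# /etc/hosts inside the vnc-portal container - before:
10.43.x.x  sdc.ui.simpledemo.onap.org
# after:
10.43.x.x  sdc.api.simpledemo.onap.org
```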
login and run SDC
Continue with the normal ONAP demo flow at (Optional) Tutorial: Onboarding and Distributing a Vendor Software Product (VSP)
Run multiple environments on the same machine - TODO
For example, we can see when the AAI model-loader container was created:
http://127.0.0.1:8880/v1/containerevents
Having issues after a reboot of a colocated server/agent?
apt-get install ssh
apt-get install ubuntu-desktop
ignore - not relevant
Search Line limits were exceeded, some dns names have been omitted, the applied search line is: default.svc.cluster.local svc.cluster.local cluster.local kubelet.kubernetes.rancher.internal kubernetes.rancher.internal rancher.internal
https://github.com/rancher/rancher/issues/9303
Make sure your Openstack parameters are set if you get the following starting up the config pod
root@obriensystemsu0:~# kubectl get pods --all-namespaces -a
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system heapster-4285517626-l9wjp 1/1 Running 4 22d
kube-system kube-dns-2514474280-4411x 3/3 Running 9 22d
kube-system kubernetes-dashboard-716739405-fq507 1/1 Running 4 22d
kube-system monitoring-grafana-3552275057-w3xml 1/1 Running 4 22d
kube-system monitoring-influxdb-4110454889-bwqgm 1/1 Running 4 22d
kube-system tiller-deploy-737598192-841l1 1/1 Running 4 22d
onap config 0/1 Error 0 1d
root@obriensystemsu0:~# vi /etc/hosts
root@obriensystemsu0:~# kubectl logs -n onap config
Validating onap-parameters.yaml has been populated
Error: OPENSTACK_UBUNTU_14_IMAGE must be set in onap-parameters.yaml
+ echo 'Validating onap-parameters.yaml has been populated'
+ [[ -z '' ]]
+ echo 'Error: OPENSTACK_UBUNTU_14_IMAGE must be set in onap-parameters.yaml'
+ exit 1
fix:
root@obriensystemsu0:~/onap_1007/oom/kubernetes/config# helm delete --purge onap-config
release "onap-config" deleted
root@obriensystemsu0:~/onap_1007/oom/kubernetes/config# ./createConfig.sh -n onap
**** Creating configuration for ONAP instance: onap
Error from server (AlreadyExists): namespaces "onap" already exists
NAME: onap-config
LAST DEPLOYED: Mon Oct 9 21:35:27 2017
NAMESPACE: onap
STATUS: DEPLOYED
RESOURCES:
==> v1/ConfigMap
NAME DATA AGE
global-onap-configmap 15 0s
==> v1/Pod
NAME READY STATUS RESTARTS AGE
config 0/1 ContainerCreating 0 0s
**** Done ****
root@obriensystemsu0:~/onap_1007/oom/kubernetes/config# kubectl get pods --all-namespaces -a
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system heapster-4285517626-l9wjp 1/1 Running 4 22d
kube-system kube-dns-2514474280-4411x 3/3 Running 9 22d
kube-system kubernetes-dashboard-716739405-fq507 1/1 Running 4 22d
kube-system monitoring-grafana-3552275057-w3xml 1/1 Running 4 22d
kube-system monitoring-influxdb-4110454889-bwqgm 1/1 Running 4 22d
kube-system tiller-deploy-737598192-841l1 1/1 Running 4 22d
onap config 1/1 Running 0 25s
root@obriensystemsu0:~/onap_1007/oom/kubernetes/config# kubectl get pods --all-namespaces -a
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system heapster-4285517626-l9wjp 1/1 Running 4 22d
kube-system kube-dns-2514474280-4411x 3/3 Running 9 22d
kube-system kubernetes-dashboard-716739405-fq507 1/1 Running 4 22d
kube-system monitoring-grafana-3552275057-w3xml 1/1 Running 4 22d
kube-system monitoring-influxdb-4110454889-bwqgm 1/1 Running 4 22d
kube-system tiller-deploy-737598192-841l1 1/1 Running 4 22d
onap config 0/1 Completed 0 1m
The curl of this script is incorporated into the cd.sh script below.
Watch the parallel set of pulls complete by doing the following grep for 15-40 min
ps -ef | grep docker | grep pull | wc -l
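To block until the pulls drain, a minimal polling sketch around the same `ps` check (the `[d]ocker` pattern keeps grep from matching itself):

```shell
# Poll until no `docker pull` processes remain, reporting progress.
while true; do
  PULLS=$(ps -ef | grep '[d]ocker pull' | wc -l)
  [ "$PULLS" -eq 0 ] && break
  echo "${PULLS} docker pulls still running"
  sleep 60
done
echo "prepull complete - safe to run createAll.bash / cd.sh"
```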
20171231 master (beijing)
ubuntu@ip-172-31-73-50:~$ ps -ef | grep docker | grep pull
root 18285 14573 8 02:13 pts/0 00:00:00 /bin/bash ./prepull_docker.sh
root 18335 18285 3 02:13 pts/0 00:00:00 docker pull oomk8s/readiness-check:1.0.0
root 18350 18285 3 02:13 pts/0 00:00:00 docker pull mysql:5.7
root 18365 18285 0 02:13 pts/0 00:00:00 docker pull gcr.io/google-samples/xtrabackup:1.0
root 18380 18285 0 02:13 pts/0 00:00:00 docker pull nexus3.onap.org:10001/onap/ccsdk-dgbuilder-image:v0.1.0
root 18395 18285 0 02:13 pts/0 00:00:00 docker pull nexus3.onap.org:10001/onap/sdnc-image:v1.2.1
root 18412 18285 0 02:13 pts/0 00:00:00 docker pull nexus3.onap.org:10001/onap/admportal-sdnc-image:v1.2.1
root 18475 18285 0 02:13 pts/0 00:00:00 docker pull nexus3.onap.org:10001/onap/msb/msb_discovery:1.0.0
root 18535 18285 3 02:13 pts/0 00:00:00 docker pull oomk8s/readiness-check:1.0.0
root 18560 18285 0 02:13 pts/0 00:00:00 docker pull nexus3.onap.org:10001/onap/refrepo/postgres:latest
root 18587 18285 0 02:13 pts/0 00:00:00 docker pull nexus3.onap.org:10001/onap/refrepo:1.0-STAGING-latest
root 18657 18285 0 02:13 pts/0 00:00:00 docker pull nexus3.onap.org:10001/onap/aai/esr-server:v1.0.0
root 18701 18285 3 02:13 pts/0 00:00:00 docker pull oomk8s/readiness-check:1.0.0
root 18716 18285 0 02:13 pts/0 00:00:00 docker pull docker.elastic.co/logstash/logstash:5.4.3
root 18731 18285 0 02:13 pts/0 00:00:00 docker pull docker.elastic.co/kibana/kibana:5.5.0
root 18746 18285 0 02:13 pts/0 00:00:00 docker pull docker.elastic.co/elasticsearch/elasticsearch:5.5.0
root 18786 18285 0 02:13 pts/0 00:00:00 docker pull nexus3.onap.org:10001/onap/sniroemulator
root 18824 18285 3 02:13 pts/0 00:00:00 docker pull oomk8s/readiness-check:1.0.0
root 18849 18285 0 02:13 pts/0 00:00:00 docker pull nexus3.onap.org:10001/onap/policy/policy-pe:v1.1.1
root 18875 18285 0 02:13 pts/0 00:00:00 docker pull nexus3.onap.org:10001/onap/policy/policy-drools:v1.1.1
root 18901 18285 0 02:13 pts/0 00:00:00 docker pull nexus3.onap.org:10001/onap/policy/policy-db:v1.1.1
root 18927 18285 0 02:13 pts/0 00:00:00 docker pull nexus3.onap.org:10001/onap/policy/policy-nexus:v1.1.1
root 18953 18285 0 02:13 pts/0 00:00:00 docker pull ubuntu:16.04
root 18990 18285 3 02:13 pts/0 00:00:00 docker pull oomk8s/readiness-check:1.0.0
root 19005 18285 0 02:13 pts/0 00:00:00 docker pull nexus3.onap.org:10001/onap/aaf/authz-service:latest
root 19031 18285 0 02:13 pts/0 00:00:00 docker pull nexus3.onap.org:10001/library/cassandra:2.1.17
root 19092 18285 3 02:13 pts/0 00:00:00 docker pull oomk8s/readiness-check:1.0.0
root 19107 18285 0 02:13 pts/0 00:00:00 docker pull nexus3.onap.org:10001/openecomp/sdc-kibana:v1.1.0
root 19124 18285 0 02:13 pts/0 00:00:00 docker pull nexus3.onap.org:10001/openecomp/sdc-frontend:v1.1.0
root 19139 18285 0 02:13 pts/0 00:00:00 docker pull nexus3.onap.org:10001/openecomp/sdc-elasticsearch:v1.1.0
root 19155 18285 0 02:13 pts/0 00:00:00 docker pull nexus3.onap.org:10001/openecomp/sdc-cassandra:v1.1.0
root 19170 18285 0 02:13 pts/0 00:00:00 docker pull nexus3.onap.org:10001/openecomp/sdc-backend:v1.1.0
root 19209 18285 3 02:13 pts/0 00:00:00 docker pull oomk8s/readiness-check:1.0.0
root 19224 18285 0 02:13 pts/0 00:00:00 docker pull aaionap/haproxy:1.1.0
root 19249 18285 1 02:13 pts/0 00:00:00 docker pull aaionap/hbase:1.2.0
root 19274 18285 0 02:13 pts/0 00:00:00 docker pull nexus3.onap.org:10001/onap/model-loader:v1.1.0
root 19300 18285 0 02:13 pts/0 00:00:00 docker pull nexus3.onap.org:10001/openecomp/aai-resources:v1.1.0
root 19327 18285 0 02:13 pts/0 00:00:00 docker pull nexus3.onap.org:10001/openecomp/aai-traversal:v1.1.0
root 19353 18285 0 02:13 pts/0 00:00:00 docker pull nexus3.onap.org:10001/onap/data-router:v1.1.0
root 19378 18285 0 02:13 pts/0 00:00:00 docker pull elasticsearch:2.4.1
root 19404 18285 0 02:13 pts/0 00:00:00 docker pull nexus3.onap.org:10001/onap/search-data-service:v1.1.0
root 19432 18285 0 02:13 pts/0 00:00:00 docker pull nexus3.onap.org:10001/onap/sparky-be:v1.1.0
root 19459 18285 0 02:13 pts/0 00:00:00 docker pull aaionap/gremlin-server
root 19496 18285 5 02:13 pts/0 00:00:00 docker pull oomk8s/readiness-check:1.0.0
root 19511 18285 0 02:13 pts/0 00:00:00 docker pull nexus3.onap.org:10001/onap/multicloud/framework:v1.0.0
root 19526 18285 0 02:13 pts/0 00:00:00 docker pull nexus3.onap.org:10001/onap/multicloud/vio:v1.0.0
root 19543 18285 0 02:13 pts/0 00:00:00 docker pull nexus3.onap.org:10001/onap/multicloud/openstack-ocata:v1.0.0
root 19586 18285 5 02:13 pts/0 00:00:00 docker pull oomk8s/readiness-check:1.0.0
root 19601 18285 1 02:13 pts/0 00:00:00 docker pull nexus3.onap.org:10001/library/mariadb:10
root 19619 18285 0 02:13 pts/0 00:00:00 docker pull nexus3.onap.org:10001/openecomp/vid:v1.1.1
root 19657 18285 5 02:13 pts/0 00:00:00 docker pull oomk8s/readiness-check:1.0.0
root 19673 18285 0 02:13 pts/0 00:00:00 docker pull nexus3.onap.org:10001/onap/portal-apps:v1.3.0
root 19687 18285 0 02:13 pts/0 00:00:00 docker pull nexus3.onap.org:10001/onap/portal-db:v1.3.0
root 19711 18285 2 02:13 pts/0 00:00:00 docker pull oomk8s/mariadb-client-init:1.0.0
root 19728 18285 0 02:13 pts/0 00:00:00 docker pull nexus3.onap.org:10001/onap/portal-wms:v1.3.0
root 19743 18285 2 02:13 pts/0 00:00:00 docker pull oomk8s/ubuntu-init:1.0.0
root 19762 18285 0 02:13 pts/0 00:00:00 docker pull dorowu/ubuntu-desktop-lxde-vnc
root 19830 18285 5 02:13 pts/0 00:00:00 docker pull oomk8s/readiness-check:1.0.0
root 19845 18285 0 02:13 pts/0 00:00:00 docker pull nexus3.onap.org:10001/openecomp/mso:v1.1.1
root 19860 18285 0 02:13 pts/0 00:00:00 docker pull nexus3.onap.org:10001/mariadb:10.1.11
root 19906 18285 0 02:13 pts/0 00:00:00 docker pull nexus3.onap.org:10001/onap/vfc/nslcm:v1.0.2
root 19927 18285 0 02:13 pts/0 00:00:00 docker pull nexus3.onap.org:10001/onap/vfc/resmanagement:v1.0.0
root 19951 18285 0 02:13 pts/0 00:00:00 docker pull nexus3.onap.org:10001/onap/vfc/gvnfmdriver:v1.0.1
root 19972 18285 0 02:13
pts/0 00:00:00 docker pull nexus3.onap.org:10001/onap/vfc/ztevmanagerdriver:v1.0.2 root 19994 18285 0 02:13 pts/0 00:00:00 docker pull nexus3.onap.org:10001/onap/vfc/nfvo/svnfm/huawei:v1.0.2 root 20026 18285 0 02:13 pts/0 00:00:00 docker pull nexus3.onap.org:10001/onap/vfc/ztesdncdriver:v1.0.0 root 20048 18285 0 02:13 pts/0 00:00:00 docker pull nexus3.onap.org:10001/onap/vfc/nfvo/svnfm/nokia:v1.0.2 root 20075 18285 0 02:13 pts/0 00:00:00 docker pull nexus3.onap.org:10001/onap/vfc/jujudriver:v1.0.0 root 20099 18285 0 02:13 pts/0 00:00:00 docker pull nexus3.onap.org:10001/onap/vfc/vnflcm:v1.0.1 root 20121 18285 0 02:13 pts/0 00:00:00 docker pull nexus3.onap.org:10001/onap/vfc/vnfres:v1.0.1 root 20141 18285 0 02:13 pts/0 00:00:00 docker pull nexus3.onap.org:10001/onap/vfc/vnfmgr:v1.0.1 root 20165 18285 0 02:13 pts/0 00:00:00 docker pull nexus3.onap.org:10001/onap/vfc/emsdriver:v1.0.1 root 20187 18285 0 02:13 pts/0 00:00:00 docker pull nexus3.onap.org:10001/onap/vfc/wfengine-mgrservice:v1.0.0 root 20210 18285 0 02:13 pts/0 00:00:00 docker pull nexus3.onap.org:10001/onap/vfc/wfengine-activiti:v1.0.0 root 20265 18285 5 02:13 pts/0 00:00:00 docker pull oomk8s/readiness-check:1.0.0 root 20281 18285 0 02:13 pts/0 00:00:00 docker pull attos/dmaap:latest root 20295 18285 0 02:13 pts/0 00:00:00 docker pull wurstmeister/kafka:latest root 20338 18285 0 02:13 pts/0 00:00:00 docker pull nexus3.onap.org:10001/onap/usecase-ui:v1.0.1 root 20478 18285 5 02:13 pts/0 00:00:00 docker pull oomk8s/readiness-check:1.0.0 root 20494 18285 0 02:13 pts/0 00:00:00 docker pull nexus3.onap.org:10001/onap/clamp:v1.1.0 root 20522 18285 0 02:13 pts/0 00:00:00 docker pull nexus3.onap.org:10001/mariadb:10.1.11 root 20560 18285 5 02:13 pts/0 00:00:00 docker pull oomk8s/readiness-check:1.0.0 root 20575 18285 1 02:13 pts/0 00:00:00 docker pull mysql:5.7 root 20590 18285 0 02:13 pts/0 00:00:00 docker pull gcr.io/google-samples/xtrabackup:1.0 root 20605 18285 0 02:13 pts/0 00:00:00 docker pull 
nexus3.onap.org:10001/onap/ccsdk-dgbuilder-image:v0.1.0 root 20621 18285 1 02:13 pts/0 00:00:00 docker pull nexus3.onap.org:10001/onap/sdnc-image:v1.2.1 root 20638 18285 1 02:13 pts/0 00:00:00 docker pull nexus3.onap.org:10001/onap/admportal-sdnc-image:v1.2.1 root 20688 18285 5 02:13 pts/0 00:00:00 docker pull oomk8s/readiness-check:1.0.0 root 20703 18285 0 02:13 pts/0 00:00:00 docker pull oomk8s/pgaas:1 root 20717 18285 0 02:13 pts/0 00:00:00 docker pull oomk8s/cdap-fs:1.0.0 root 20734 18285 0 02:13 pts/0 00:00:00 docker pull oomk8s/cdap:1.0.7 root 20749 18285 0 02:13 pts/0 00:00:00 docker pull attos/dmaap:latest root 20764 18285 0 02:13 pts/0 00:00:00 docker pull wurstmeister/kafka:latest root 20781 18285 0 02:13 pts/0 00:00:00 docker pull wurstmeister/zookeeper:latest root 20797 18285 0 02:13 pts/0 00:00:00 docker pull nexus3.onap.org:10001/openecomp/dcae-dmaapbc:1.1-STAGING-latest root 20812 18285 0 02:13 pts/0 00:00:00 docker pull nexus3.onap.org:10001/openecomp/dcae-collector-common-event:1.1-STAGING-latest root 20902 18285 4 02:13 pts/0 00:00:00 docker pull oomk8s/readiness-check:1.0.0 root 20917 18285 0 02:13 pts/0 00:00:00 docker pull nexus3.onap.org:10001/openecomp/appc-image:v1.2.0 root 20932 18285 0 02:13 pts/0 00:00:00 docker pull mysql/mysql-server:5.6 root 20949 18285 1 02:13 pts/0 00:00:00 docker pull nexus3.onap.org:10001/onap/ccsdk-dgbuilder-image:v0.1.0 |
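The listing above shows the core of the pre-pull pattern: the parent script forks one `docker pull` per image and lets them run concurrently, which is what removes the network slowdown from startup. A minimal sketch of that pattern is below; the `prepull` function name and the sample image list are illustrative (the real prepull_docker.sh in the oom repo derives its image list from the chart values).

```shell
# Sketch of a parallel image pre-pull, modeled on the ps listing above.
# The function name and sample images are illustrative, not the oom script.
prepull() {
  for image in "$@"; do
    docker pull "$image" &   # fork one concurrent pull per image
  done
  wait                       # block until every background pull exits
}

# Example (requires a running docker daemon):
# prepull oomk8s/readiness-check:1.0.0 mysql:5.7 ubuntu:16.04
```

Because every pull is backgrounded before `wait`, the pulls overlap and total time is bounded by the slowest image rather than the sum of all of them.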
https://lists.onap.org/pipermail/onap-discuss/2017-July/002084.html
https://kubernetes.io/docs/user-guide/kubectl-cheatsheet/
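From the cheatsheet above, a few commands that are handy while watching an OOM deployment come up can be grouped as below. The `onap_status` helper name and the `onap-sdnc` namespace are illustrative; oom creates one `onap-<component>` namespace per project.

```shell
# Illustrative helper wrapping common kubectl status checks during startup.
onap_status() {
  kubectl get pods --all-namespaces        # up/down state of every pod
  kubectl get services --all-namespaces    # exposed ports per service
  kubectl -n onap-sdnc get pods            # pods of a single component
}

# Example (requires a configured kubectl):
# onap_status
```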
Please help out our OPNFV friends
https://wiki.opnfv.org/pages/viewpage.action?pageId=12389095
Of interest
https://github.com/cncf/cross-cloud/
https://en.wikipedia.org/wiki/Fiddler_(software) - thanks Rahul
http://www.opencontrail.org/opencontrail-quick-start-guide/
https://github.com/prometheus/prometheus
http://zipkin.io/pages/quickstart
http://cloudify.co/2017/09/27/model-driven-onap-operations-manager-oom-boarding-tosca-cloudify/
https://gerrit.onap.org/r/#/c/6179/
https://gerrit.onap.org/r/#/c/9849/
https://gerrit.onap.org/r/#/c/9839/