
Warning: Draft Content

This wiki is under construction - content here may be incomplete or missing.

TODO: determine/fix containers not ready, get DCAE yamls working, fix health tracking issues for healing

Official Documentation: https://onap.readthedocs.io/en/latest/submodules/oom.git/docs/OOM%20User%20Guide/oom_user_guide.html?highlight=oom

Integration: https://wiki.opnfv.org/pages/viewpage.action?pageId=12389095 

The OOM (ONAP Operation Manager) project has pushed Kubernetes based deployment code to the oom repository - based on ONAP 1.1.  This page details getting ONAP running (specifically the vFirewall demo) on Kubernetes for various virtual and native environments.  This page assumes you have access to any type of bare metal or VM running a clean Ubuntu 16.04 image - either on Rackspace, Openstack, your laptop or AWS spot EC2.

Architectural details of the OOM project are described in the OOM User Guide.

Kubernetes on ONAP Arch

Status


Undercloud Installation

Requirements

Metric | Min | Full System | Notes

vCPU | 4 | 64 recommended (16/32 ok) | The full ONAP system of 50+ containers is CPU and network bound on startup. If you pre-pull the docker images to remove the network slowdown, vCPU utilization will peak at 44 cores on a 64-core system and bring the system up in under 4 min; on a 16-core system you will see the normal 7 min startup time as the 44 cores are throttled to 16.

RAM | 7g (a couple of components) | 55g (75 containers) | You need at least 51g of RAM (3g is for Rancher/Kubernetes itself; the VFs run on a separate OpenStack) - 51g to start and 55g after running the system for a day.

HD | 60g | 120g+ |



We need a Kubernetes installation - either a base installation or one behind a thin API wrapper like Rancher.

There are several options - currently Rancher with Helm on Ubuntu 16.04 is the focus, as a thin wrapper on Kubernetes; alternative platforms are covered in the subpage ONAP on Kubernetes (Alternatives).

OS | VIM | Description | Status | Nodes | Links

Ubuntu 16.04.2 (not Redhat) | Bare Metal, VMWare | Rancher | Recommended approach. Issue with Kubernetes support only in Docker 1.12 (obsolete docker-machine) on OSX | 1-4 | http://rancher.com/docs/rancher/v1.6/en/quick-start-guide/

ONAP Installation

Quickstart Installation

1) install rancher, clone oom, run config-init pod, run one or all onap components

*****************

Note: uninstall docker if it is already installed - Kubernetes in Rancher only supports Docker 1.12.x (as of 20170809)

% sudo apt-get remove docker-engine

*****************


Install Rancher

ONAP deployment in kubernetes is modelled in the oom project as a 1:1 set of service:pod sets (1 pod per docker container).  The fastest way to get ONAP Kubernetes up is via Rancher on any bare metal or VM that supports a clean Ubuntu 16.04 install and more than 50G ram.


(on each host) Add an entry to /etc/hosts that points your IP to your hostname (add your hostname to the end), and add entries for all other hosts in your cluster.

sudo vi /etc/hosts
<your-ip> <your-hostname>


Try to use root - if you use the ubuntu user then you will need to enable docker access separately for that user.

sudo su -
apt-get update 
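
If you stay with the ubuntu user rather than root, a minimal sketch of granting it docker access (assumes the docker group exists after the docker install below):

# allow the ubuntu user to talk to the docker daemon - log out/in afterwards
sudo usermod -aG docker ubuntu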


(to fix possible modprobe: FATAL: Module aufs not found in directory /lib/modules/4.4.0-59-generic)
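
One commonly used fix (a sketch, assuming the stock Ubuntu 16.04 kernel) is to install the extra kernel modules for the running kernel:

sudo apt-get install -y linux-image-extra-$(uname -r) linux-image-extra-virtual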


(on each host - server and client(s), which may be the same machine) Install only the 1.12.x (currently 1.12.6) version of Docker - the only version that works with Kubernetes in Rancher 1.6.

curl https://releases.rancher.com/install-docker/1.12.sh | sh


(on the master only) Install rancher (use 8880 instead of 8080) - note there may be issues with the dns pod in Rancher after a reboot or when running clustered hosts; a clean system will be OK - OOM-236

docker run -d --restart=unless-stopped -p 8880:8080 rancher/server


In the Rancher UI - don't use http://127.0.0.1:8880 - use the real IP address, so the client configs are populated correctly with callbacks.

You must deactivate the default CATTLE environment by adding a KUBERNETES environment and deactivating the older default CATTLE one - otherwise your added hosts will attach to the default.

    • Default → Manage Environments
    • Select "Add Environment" button
    • Give the Environment a name and description, then select Kubernetes as the Environment Template
    • Hit the "Create" button. This will create the environment and bring you back to the Manage Environments view
    • At the far right column of the Default Environment row, left-click the menu ( looks like 3 stacked dots ), and select Deactivate. This will make your new Kubernetes environment the new default.


Register your host(s) - run the following on each host (including the master if you are collocating the master/host on a single machine/VM).

For each host, In Rancher > Infrastructure > Hosts. Select "Add Host"

The first time you add a host - you will be presented with a screen containing the routable IP - hit save only on a routable IP.

Enter the IP of the host only if you launched rancher with 127.0.0.1/localhost - otherwise leave it empty and it will auto-populate the registration with the real IP.



Copy the command to register the host with Rancher and execute it on the host, for example:

% docker run --rm --privileged -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/rancher:/var/lib/rancher rancher/agent:v1.2.2 http://192.168.163.131:8880/v1/scripts/BBD465D9B24E94F5FBFD:1483142400000:IDaNFrug38QsjZcu6rXh8TwqA4


Wait for the KUBERNETES menu to populate with the CLI.


Install Kubectl

The following will install kubectl on a linux host. Once configured, this client tool will provide management of a Kubernetes cluster.

curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
chmod +x ./kubectl
mv ./kubectl /usr/local/bin/kubectl
mkdir ~/.kube
vi ~/.kube/config

Paste kubectl config from Rancher (you will see the CLI menu in Rancher / Kubernetes after the k8s pods are up on your host)

Click on "Generate Config" to get your content to add into .kube/config


Verify that Kubernetes config is good

root@obrien-kube11-1:~# kubectl cluster-info
Kubernetes master is running at ....
Heapster is running at....
KubeDNS is running at ....
kubernetes-dashboard is running at ...
monitoring-grafana is running at ....
monitoring-influxdb is running at ...
tiller-deploy is running at....


Install Helm

The following will install Helm (use 2.3.0 not current 2.6.0) on a linux host. Helm is used by OOM for package and configuration management.

Prerequisite: Install Kubectl

wget http://storage.googleapis.com/kubernetes-helm/helm-v2.3.0-linux-amd64.tar.gz
tar -zxvf helm-v2.3.0-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/helm
# Test Helm
helm help
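
A quick sanity check - with the tiller-deploy pod already running in the Rancher Kubernetes environment (see the cluster-info output above), both Client and Server versions should be reported:

helm version
helm list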


Undercloud done - move to ONAP


Clone oom (scp your onap_rsa private key first - or clone anonymously; ideally get a full gerrit account and join the community)

see ssh/http/http access links below

https://gerrit.onap.org/r/#/admin/projects/oom

anonymous http
git clone http://gerrit.onap.org/r/oom
or using your key
git clone -b release-1.0.0 ssh://michaelobrien@gerrit.onap.org:29418/oom

or use https (substitute your user/pass)

git clone -b release-1.0.0 https://michaelnnnn:uHaBPMvR47nnnnnnnnRR3Keer6vatjKpf5A@gerrit.onap.org/r/oom


Wait until all the hosts show green in Rancher, then run the createConfig/createAll scripts that wrap the kubectl commands.

OOM-115

Source the setenv.bash script in oom/kubernetes/oneclick/ - this sets your helm list of components to start/delete.

source setenv.bash

Run the one-time config pod (config-init) - it creates and populates the /dockerdata-nfs/ share that all other ONAP pods mount and require to function.

Note: the pod will stop after NFS creation - this is normal.
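
The config pod validates onap-parameters.yaml and exits with an error if required values such as OPENSTACK_UBUNTU_14_IMAGE are empty (see Troubleshooting below); a quick pre-check sketch before creating the pod:

# verify the OPENSTACK_* values in onap-parameters.yaml have been filled in (sketch)
grep OPENSTACK onap-parameters.yaml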

% cd oom/kubernetes/config
# edit or copy the config for MSO data
vi onap-parameters.yaml
# or
cp onap-parameters-sample.yaml onap-parameters.yaml 
# run the config pod creation
% ./createConfig.sh -n onap 


**** Creating configuration for ONAP instance: onap
namespace "onap" created
pod "config-init" created
**** Done ****


Wait until the config-init pod has completed before trying to bring up a component or all of ONAP - around 60 sec (up to 10 min) - see https://wiki.onap.org/display/DW/ONAP+on+Kubernetes#ONAPonKubernetes-Waitingforconfig-initcontainertofinish-20sec

root@ip-172-31-93-122:~/oom_20170908/oom/kubernetes/config# kubectl get pods --all-namespaces -a

onap          config                                 0/1       Completed   0          1m

Note: when using the -a option the config container will show up with its status; without the -a flag it will not be listed.
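
If you want to script the wait instead of polling by hand, a minimal sketch (namespace and pod name as used above):

# block until the config pod reports Completed
while ! kubectl get pods -n onap -a | grep config | grep -q Completed; do
  echo "waiting for config-init to finish..."
  sleep 10
done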


Cluster Configuration

A shared mount is better than copying.

Try to run your host and client on a single VM (a 64g one) - if not, you can run rancher and several clients across several machines/VMs. The /dockerdata-nfs share must then be replicated across the cluster, either via a mount or by copying the directory to the other servers from the one where the "config" pod actually ran. To verify, check the / root fs on each node.

# in this case a 4 node cluster of 16g vms - the config node is on the 4th
root@kube16-3:~# ls /
bin  boot  dev  dockerdata-nfs  etc  home  initrd.img  lib  lib64  lost+found  media  mnt  opt  proc  root  run  sbin  snap  srv  sys  tmp  usr  var  vmlinuz
root@kube16-3:~# scp -r /dockerdata-nfs/ root@kube160.onap.info:~
root@kube16-3:~# scp -r /dockerdata-nfs/ root@kube161.onap.info:~
root@kube16-3:~# scp -r /dockerdata-nfs/ root@kube162.onap.info:~     
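
Alternatively, a sketch of sharing /dockerdata-nfs over NFS instead of copying (assumes the standard Ubuntu nfs-kernel-server/nfs-common packages; replace <master-ip> with the node that ran the config pod):

# on the node that owns /dockerdata-nfs
apt-get install -y nfs-kernel-server
echo "/dockerdata-nfs *(rw,no_root_squash,no_subtree_check)" >> /etc/exports
exportfs -a
# on every other node
apt-get install -y nfs-common
mkdir -p /dockerdata-nfs
mount <master-ip>:/dockerdata-nfs /dockerdata-nfs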


Running ONAP

Pre-pull the docker images once a day. This currently takes 30-48 min depending on throttling and the load on nexus3.onap.org:10001. Pre-pulling the images allows the entire ONAP system to start in 3-8 min instead of up to 3 hours.

This is a WIP

https://jira.onap.org/secure/attachment/10501/prepull_docker.sh

OOM-328
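
A minimal hand-rolled pre-pull sketch, assuming a two-column image,tag CSV such as the docker-images.csv referenced in the List of Containers section below (the attached prepull_docker.sh is the maintained version):

# pull every image listed in docker-images.csv from the ONAP nexus3 proxy
while IFS=, read -r image tag; do
  docker pull nexus3.onap.org:10001/${image}:${tag:-latest}
done < docker-images.csv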

Don't run all the pods unless you have at least 52G allocated - if you have a laptop/VM with 16G - then you can only run enough pods to fit in around 11G

% cd ../oneclick
% vi createAll.bash 
% ./createAll.bash -n onap -a robot|appc|aai 


(to bring up a single service at a time)

Use the default "onap" namespace if you want to run robot tests out of the box - as in "onap-robot"

Bring up core components

root@kos1001:~/oom1004/oom/kubernetes/oneclick# cat setenv.bash
#HELM_APPS=('consul' 'msb' 'mso' 'message-router' 'sdnc' 'vid' 'robot' 'portal' 'policy' 'appc' 'aai' 'sdc' 'dcaegen2' 'log' 'cli' 'multicloud' 'clamp' 'vnfsdk' 'kube2msb' 'aaf' 'vfc')
HELM_APPS=('consul' 'msb' 'mso' 'message-router' 'sdnc' 'vid' 'robot' 'portal' 'policy' 'appc' 'aai' 'sdc' 'log' 'kube2msb')
# pods with the ELK filebeat container for capturing logs
root@kos1001:~/oom1004/oom/kubernetes/oneclick# kubectl get pods --all-namespaces -a | grep 2/2
onap-aai              aai-resources-338473047-8k6vr           2/2       Running     0          1h
onap-aai              aai-traversal-2033243133-6cr9v          2/2       Running     0          1h
onap-aai              model-loader-service-3356570452-25fjp   2/2       Running     0          1h
onap-aai              search-data-service-2366687049-jt0nb    2/2       Running     0          1h
onap-aai              sparky-be-3141964573-f2mhr              2/2       Running     0          1h
onap-appc             appc-1335254431-v1pcs                   2/2       Running     0          1h
onap-mso              mso-3911927766-bmww7                    2/2       Running     0          1h
onap-policy           drools-2302173499-t0zmt                 2/2       Running     0          1h
onap-policy           pap-1954142582-vsrld                    2/2       Running     0          1h
onap-policy           pdp-4137191120-qgqnj                    2/2       Running     0          1h
onap-portal           portalapps-4168271938-4kp32             2/2       Running     0          1h
onap-portal           portaldb-2821262885-0t32z               2/2       Running     0          1h
onap-sdc              sdc-be-2986438255-sdqj6                 2/2       Running     0          1h
onap-sdc              sdc-fe-1573125197-7j3gp                 2/2       Running     0          1h
onap-sdnc             sdnc-3858151307-w9h7j                   2/2       Running     0          1h
onap-vid              vid-server-1837290631-x4ttc             2/2       Running     0          1h
# failed containers
root@kos1001:~/oom1004/oom/kubernetes/oneclick# kubectl get pods --all-namespaces -a | grep 0/1
onap                  config                                  0/1       Completed   0          3h
onap-kube2msb         kube2msb-registrator-1562795675-xsmjz   0/1       Error       24         1h


Only if you have >52G run the following (all namespaces)

% ./createAll.bash -n onap


ONAP is OK if everything is 1/1 in the following

% kubectl get pods --all-namespaces
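
To follow startup progress without re-typing the command, a simple sketch:

# refresh the pod list every 30 seconds until everything is Running/Completed
watch -n 30 "kubectl get pods --all-namespaces -a"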


Run the ONAP portal via the instructions at Running ONAP using the vnc-portal (below).

Wait until the containers are all up

Run Initial healthcheck directly on the host

cd /dockerdata-nfs/onap/robot

./ete-docker.sh health


check AAI endpoints

root@ip-172-31-93-160:/dockerdata-nfs/onap/robot# kubectl -n onap-aai exec -it aai-service-3321436576-2snd6 bash

root@aai-service-3321436576-2snd6:/# ps -ef

UID        PID  PPID  C STIME TTY          TIME CMD

root         1     0  0 15:50 ?        00:00:00 /usr/local/sbin/haproxy-systemd-

root         7     1  0 15:50 ?        00:00:00 /usr/local/sbin/haproxy-master  

root@ip-172-31-93-160:/dockerdata-nfs/onap/robot# curl https://127.0.0.1:30233/aai/v11/service-design-and-creation/models

curl: (60) server certificate verification failed. CAfile: /etc/ssl/certs/ca-certificates.crt CRLfile: none
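
The certificate failure above is expected with the self-signed AAI certificate; a sketch that skips verification just to confirm the endpoint answers (AAI will still expect its usual credentials/headers):

curl -sk https://127.0.0.1:30233/aai/v11/service-design-and-creation/models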


Ports


onap-aai              aai-service            10.43.238.197   <nodes>       8443:30233/TCP,8080:30232/TCP
onap-aai              hbase                  None            <none>        2181/TCP,8080/TCP,8085/TCP,9090/TCP,16000/TCP,16010/TCP,16201/TCP
onap-aai              model-loader-service   10.43.161.44    <nodes>       8443:30229/TCP,8080:30210/TCP
onap-appc             dgbuilder              10.43.67.192    <nodes>       3000:30228/TCP
onap-appc             sdnhost                10.43.250.74    <nodes>       8282:30230/TCP,1830:30231/TCP
onap-clamp            clamp                  10.43.145.101   <nodes>       8080:30295/TCP
onap-cli              cli                    10.43.225.34    <nodes>       80:30260/TCP
onap-consul           consul-server          10.43.56.151    <nodes>       8500:30270/TCP,8301:30271/TCP
onap-log              elasticsearch          10.43.81.172    <nodes>       9200:30254/TCP
onap-log              kibana                 10.43.71.77     <nodes>       5601:30253/TCP
onap-message-router   dmaap                  10.43.226.159   <nodes>       3904:30227/TCP,3905:30226/TCP
onap-msb              msb-consul             10.43.128.166   <nodes>       8500:30500/TCP
onap-msb              msb-discovery          10.43.6.205     <nodes>       10081:30081/TCP
onap-msb              msb-eag                10.43.239.63    <nodes>       80:30082/TCP
onap-msb              msb-iag                10.43.220.233   <nodes>       80:30080/TCP
onap-mso              mariadb                10.43.254.57    <nodes>       3306:30252/TCP
onap-mso              mso                    10.43.206.123   <nodes>       8080:30223/TCP,3904:30225/TCP,3905:30224/TCP,9990:30222/TCP,8787:30250/TCP
onap-multicloud       framework              10.43.5.154     <nodes>       9001:30291/TCP
onap-multicloud       multicloud-ocata       10.43.135.158   <nodes>       9006:30293/TCP
onap-multicloud       multicloud-vio         10.43.43.13     <nodes>       9004:30292/TCP
onap-multicloud       multicloud-windriver   10.43.43.197    <nodes>       9005:30294/TCP
onap-policy           brmsgw                 10.43.116.61    <nodes>       9989:30216/TCP
onap-policy           drools                 10.43.213.147   <nodes>       6969:30217/TCP
onap-policy           pap                    10.43.177.82    <nodes>       8443:30219/TCP,9091:30218/TCP
onap-policy           pdp                    10.43.167.146   <nodes>       8081:30220/TCP
onap-portal           portalapps             10.43.130.189   <nodes>       8006:30213/TCP,8010:30214/TCP,8989:30215/TCP
onap-portal           vnc-portal             10.43.8.67      <nodes>       6080:30211/TCP,5900:30212/TCP
onap-robot            robot                  10.43.245.43    <nodes>       88:30209/TCP
onap-sdc              sdc-be                 10.43.132.126   <nodes>       8443:30204/TCP,8080:30205/TCP
onap-sdc              sdc-fe                 10.43.96.120    <nodes>       9443:30207/TCP,8181:30206/TCP
onap-sdnc             sdnc-dgbuilder         10.43.177.11    <nodes>       3000:30203/TCP
onap-sdnc             sdnc-portal            10.43.108.205   <nodes>       8843:30201/TCP
onap-sdnc             sdnhost                10.43.185.180   <nodes>       8282:30202/TCP,8201:32147/TCP
onap-vid              vid-server             10.43.36.97     <nodes>       8080:30200/TCP
onap-vnfsdk           refrepo                10.43.121.92    <nodes>       8702:30297/TCP


pod | ext port | internal port | container | type | rest calls

aai-service | 30233 | 8443 | aai-service | https | (haproxy only)

sdc-be | 30205 | 8080 | sdc-be | http | /sdc/* /sdc2/*

sdc-fe | 30206 | 8181 | sdc-fe | http | /sdc1/*


List of Containers

The total pod count is 75.

Docker container list - may not be fully up to date: https://git.onap.org/integration/tree/packaging/docker/docker-images.csv

Get health via the robot healthcheck (./ete-docker.sh health) described above.

 



Columns: NAMESPACE | NAME | Image | Debug port | Containers (2 includes filebeat) | Log Volume External | Log Locations (docker internal) | Public Ports | Notes

(master:20170715)
default config-init





The mount "config-init-root" is in the following location

(user configurable VF parameter file below)

/dockerdata-nfs/onapdemo/mso/mso/mso-docker.json

onap-aaf







onap-aaf







onap-aai 

aai-resources





/opt/aai/logroot/AAI-RES



onap-aai 

aai-service








onap-aai 

aai-traversal





/opt/aai/logroot/AAI-GQ



onap-aai 

data-router








onap-aai 

elasticsearch








onap-aai 

hbase








onap-aai 

model-loader-service








onap-aai 

search-data-service








onap-aai 

sparky-be








onap-appc appc

2

onap-appc appc-dbhost

onap-appc appc-dgbuilder

onap-appc sdnctldb01 (internal)

onap-appc sdnctldb02 (internal)






onap-cli







onap-clamp clamp






onap-clamp

clamp-mariadb








onap-consul

consul-agent








onap-consul

consul-server








onap-consul

consul-server








onap-consul

consul-server








onap-dcae dcae-zookeeper
wurstmeister/zookeeper:latest

onap-dcae dcae-kafka
dockerfiles_kafka:latest





Note: currently there are no DCAE containers running yet (we are missing 6 yaml files - 1 for the controller and 5 for the collector, staging and 3 cdap pods) - therefore DMaaP, VES collectors and APPC actions resulting from policy actions (closed loop) will not function yet.

In review: https://gerrit.onap.org/r/#/c/7287/

OOM-5

OOM-62

onap-dcae dcae-dmaap
attos/dmaap:latest






onap-dcae pgaas (PostgreSQL aaS)
obrienlabs/pgaas

https://hub.docker.com/r/oomk8s/pgaas/tags/
onap-dcae dcae-collector-common-event

persistent volume: dcae-collector-pvs

onap-dcae dcae-collector-dmaapbc






onap-dcae

not required

dcae-controller







persistent volume: dcae-controller-pvs
onap-dcae dcae-ves-collector

onap-dcae cdap-0

onap-dcae cdap-1

onap-dcae cdap-2






onap-kube2msb

kube2msb-registrator








onap-log

elasticsearch








onap-log kibana

onap-log logstash

onap-message-router dmaap

onap-message-router global-kafka

onap-message-router zookeeper






onap-msb

msb-consul







bring onap-msb up before the rest of onap

follow OOM-113

onap-msb

msb-discovery








onap-msb

msb-eag








onap-msb

msb-iag








onap-mso mariadb

onap-mso mso

2



onap-multicloud

framework








onap-multicloud

multicloud-ocata








onap-multicloud

multicloud-vio








onap-multicloud

multicloud-windriver








onap-policy brmsgw






onap-policy drools

2



onap-policy mariadb






onap-policy nexus






onap-policy pap

2



onap-policy pdp

2



onap-portal portalapps

2



onap-portal portaldb






onap-portal

portalwidgets








onap-portal vnc-portal






onap-robot robot






onap-sdc sdc-be


/dockerdata-nfs/onap/sdc/logs/SDC/SDC-BE

${log.home}/${OPENECOMP-component-name}/ ${OPENECOMP-subcomponent-name}/transaction.log.%i


./var/lib/jetty/logs/SDC/SDC-BE/metrics.log

./var/lib/jetty/logs/SDC/SDC-BE/audit.log

./var/lib/jetty/logs/SDC/SDC-BE/debug_by_package.log

./var/lib/jetty/logs/SDC/SDC-BE/debug.log

./var/lib/jetty/logs/SDC/SDC-BE/transaction.log

./var/lib/jetty/logs/SDC/SDC-BE/error.log

./var/lib/jetty/logs/importNormativeAll.log

./var/lib/jetty/logs/ASDC/ASDC-FE/audit.log

./var/lib/jetty/logs/ASDC/ASDC-FE/debug.log

./var/lib/jetty/logs/ASDC/ASDC-FE/transaction.log

./var/lib/jetty/logs/ASDC/ASDC-FE/error.log

./var/lib/jetty/logs/2017_09_06.stderrout.log



onap-sdc sdc-cs






onap-sdc sdc-es

2



onap-sdc sdc-fe

2

./var/lib/jetty/logs/SDC/SDC-BE/metrics.log

./var/lib/jetty/logs/SDC/SDC-BE/audit.log

./var/lib/jetty/logs/SDC/SDC-BE/debug_by_package.log

./var/lib/jetty/logs/SDC/SDC-BE/debug.log

./var/lib/jetty/logs/SDC/SDC-BE/transaction.log

./var/lib/jetty/logs/SDC/SDC-BE/error.log

./var/lib/jetty/logs/importNormativeAll.log

./var/lib/jetty/logs/2017_09_07.stderrout.log

./var/lib/jetty/logs/ASDC/ASDC-FE/audit.log

./var/lib/jetty/logs/ASDC/ASDC-FE/debug.log

./var/lib/jetty/logs/ASDC/ASDC-FE/transaction.log

./var/lib/jetty/logs/ASDC/ASDC-FE/error.log

./var/lib/jetty/logs/2017_09_06.stderrout.log



onap-sdc sdc-kb






onap-sdnc sdnc

2

./opt/opendaylight/distribution-karaf-0.5.1-Boron-SR1/journal/000006.log

./opt/opendaylight/distribution-karaf-0.5.1-Boron-SR1/data/cache/1504712225751.log

./opt/opendaylight/distribution-karaf-0.5.1-Boron-SR1/data/cache/1504712002358.log

./opt/opendaylight/distribution-karaf-0.5.1-Boron-SR1/data/tmp/xql.log

./opt/opendaylight/distribution-karaf-0.5.1-Boron-SR1/data/log/karaf.log

./opt/opendaylight/distribution-karaf-0.5.1-Boron-SR1/taglist.log

./var/log/dpkg.log

./var/log/apt/history.log

./var/log/apt/term.log

./var/log/fontconfig.log

./var/log/alternatives.log

./var/log/bootstrap.log



onap-sdnc sdnc-dbhost






onap-sdnc sdnc-dgbuilder






onap-sdnc sdnctldb01 (internal)






onap-sdnc sdnctldb02 (internal)






onap-sdnc sdnc-portal



./opt/openecomp/sdnc/admportal/server/npm-debug.log

./var/log/dpkg.log

./var/log/apt/history.log

./var/log/apt/term.log

./var/log/fontconfig.log

./var/log/alternatives.log

./var/log/bootstrap.log



onap-vid vid-mariadb






onap-vid vid-server

2



onap-vfc

vfc-catalog








onap-vfc

vfc-emsdriver








onap-vfc

vfc-gvnfmdriver








onap-vfc

vfc-hwvnfmdriver








onap-vfc

vfc-jujudriver








onap-vfc

vfc-nslcm








onap-vfc

vfc-resmgr








onap-vfc

vfc-vnflcm








onap-vfc

vfc-vnfmgr








onap-vfc

vfc-vnfres








onap-vfc

vfc-workflow








onap-vfc

vfc-ztesdncdriver








onap-vfc

vfc-ztevmanagerdrive








onap-vnfsdk postgres






onap-vnfsdk refrepo






Fix MSO mso-docker.json (deprecated)

Before running pod-config-init.yaml, make sure your OpenStack config is set up correctly so you can deploy the vFirewall VMs, for example.

vi oom/kubernetes/config/docker/init/src/config/mso/mso/mso-docker.json

Original | Replacement for Rackspace (the first block below is the original, the second is the Rackspace replacement)

"mso-po-adapter-config": {
"checkrequiredparameters": "true",
"cloud_sites": [{
"aic_version": "2.5",
"id": "Ottawa",
"identity_service_id": "KVE5076_OPENSTACK",
"lcp_clli": "RegionOne",
"region_id": "RegionOne"
}],
"identity_services": [{
"admin_tenant": "services",
"dcp_clli": "KVE5076_OPENSTACK",
"identity_authentication_type": "USERNAME_PASSWORD",
"identity_server_type": "KEYSTONE",
"identity_url": "http://OPENSTACK_KEYSTONE_IP_HERE:5000/v2.0",
"member_role": "admin",
"mso_id": "dev",
"mso_pass": "dcdc0d9e4d69a667c67725a9e466e6c3",
"tenant_metadata": "true"
}],

"mso-po-adapter-config": {
"checkrequiredparameters": "true",
"cloud_sites": [{
"aic_version": "2.5",
"id": "Dallas",
"identity_service_id": "RAX_KEYSTONE",
"lcp_clli": "DFW", # or IAD
"region_id": "DFW"
}],
"identity_services": [{
"admin_tenant": "service",
"dcp_clli": "RAX_KEYSTONE",
"identity_authentication_type": "RACKSPACE_APIKEY",
"identity_server_type": "KEYSTONE",
"identity_url": "https://identity.api.rackspacecloud.com/v2.0",
"member_role": "admin",
"mso_id": "9998888",
"mso_pass": "YOUR_API_KEY",
"tenant_metadata": "true"
}],

Kubernetes DevOps


Kubernetes specific config

https://kubernetes.io/docs/user-guide/kubectl-cheatsheet/

Deleting All Containers

Delete all the containers (and services)

./deleteAll.bash -n onap

Delete/Rerun config-init container for /dockerdata-nfs refresh

Delete the config-init container and its generated /dockerdata-nfs share

There may be cases where new configuration content needs to be deployed after a pull of a new version of ONAP.

for example after a pull brings in files like the following (20170902)

root@ip-172-31-93-160:~/oom/kubernetes/oneclick# git pull

Resolving deltas: 100% (135/135), completed with 24 local objects.

From http://gerrit.onap.org/r/oom

   bf928c5..da59ee4  master     -> origin/master

Updating bf928c5..da59ee4

kubernetes/config/docker/init/src/config/aai/aai-config/cookbooks/aai-resources/aai-resources-auth/metadata.rb                                  |    7 +

 kubernetes/config/docker/init/src/config/aai/aai-config/cookbooks/aai-resources/aai-resources-auth/recipes/aai-resources-aai-keystore.rb        |    8 +

 kubernetes/config/docker/init/src/config/aai/aai-config/cookbooks/{ajsc-aai-config => aai-resources/aai-resources-config}/CHANGELOG.md          |    2 +-

 kubernetes/config/docker/init/src/config/aai/aai-config/cookbooks/{ajsc-aai-config => aai-resources/aai-resources-config}/README.md             |    4 +-


see OOM-257 (worked with Zoran)

# check for the pod
kubectl get pods --all-namespaces -a
# delete all the pods/services
./deleteAll.bash -n onap
# delete the fs
rm -rf /dockerdata-nfs
# at this point the environment is empty
# pull the repo
git pull
# rerun the config
./createConfig.sh -n onap
# if you get an error saying release onap-config already exists, run: helm del --purge onap-config


example 20170907 
root@kube0:~/oom/kubernetes/oneclick# rm -rf /dockerdata-nfs/
root@kube0:~/oom/kubernetes/oneclick# cd ../config/
root@kube0:~/oom/kubernetes/config# ./createConfig.sh -n onap
**** Creating configuration for ONAP instance: onap
Error from server (AlreadyExists): namespaces "onap" already exists
Error: a release named "onap-config" already exists.
Please run: helm ls --all "onap-config"; helm del --help
**** Done ****
root@kube0:~/oom/kubernetes/config# helm del --purge onap-config
release "onap-config" deleted
# rerun createAll.bash -n onap


Waiting for config-init container to finish - 20sec

root@ip-172-31-93-160:~/oom/kubernetes/config# kubectl get pods --all-namespaces -a

NAMESPACE     NAME                                   READY     STATUS              RESTARTS   AGE

onap          config-init                            0/1       ContainerCreating   0          6s

root@ip-172-31-93-160:~/oom/kubernetes/config# kubectl get pods --all-namespaces -a

NAMESPACE     NAME                                   READY     STATUS    RESTARTS   AGE

onap          config-init                            1/1       Running   0          9s

root@ip-172-31-93-160:~/oom/kubernetes/config# kubectl get pods --all-namespaces -a

NAMESPACE     NAME                                   READY     STATUS      RESTARTS   AGE

onap          config-init                            0/1       Completed   0          14s


Container Endpoint access

Check the services view in the Kubernetes API under robot

robot.onap-robot:88 TCP

robot.onap-robot:30209 TCP

kubectl get services --all-namespaces -o wide

onap-vid      vid-mariadb            None           <none>        3306/TCP         1h        app=vid-mariadb

onap-vid      vid-server             10.43.14.244   <nodes>       8080:30200/TCP   1h        app=vid-server


Container Logs

kubectl --namespace onap-vid logs -f vid-server-248645937-8tt6p

16-Jul-2017 02:46:48.707 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in 22520 ms

kubectl --namespace onap-portal logs portalapps-2799319019-22mzl -f

root@obriensystemskub0:~/oom/kubernetes/oneclick# kubectl get pods --all-namespaces -o wide

NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE       IP              NODE

onap-robot    robot-44708506-dgv8j                    1/1       Running   0          36m       10.42.240.80    obriensystemskub0

root@obriensystemskub0:~/oom/kubernetes/oneclick# kubectl --namespace onap-robot logs -f robot-44708506-dgv8j

2017-07-16 01:55:54: (log.c.164) server started

Robot Logs

Yogini and I needed the logs in OOM Kubernetes - they were already there, behind robot:robot auth

http://test.onap.info:30209/logs/demo/InitDistribution/report.html

for example after a

root@ip-172-31-57-55:/dockerdata-nfs/onap/robot# ./demo-k8s.sh distribute

find your path to the logs by using for example

root@ip-172-31-57-55:/dockerdata-nfs/onap/robot# kubectl --namespace onap-robot exec -it robot-4251390084-lmdbb bash

root@robot-4251390084-lmdbb:/# ls /var/opt/OpenECOMP_ETE/html/logs/demo/InitD                                                            

InitDemo/         InitDistribution/ 

path is

http://test.onap.info:30209/logs/demo/InitDemo/log.html#s1-s1-s1-s1-t1



SSH into ONAP containers

Normally I would do this via https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/

Get the pod name via

kubectl get pods --all-namespaces -o wide

bash into the pod via

kubectl -n onap-mso exec -it  mso-1648770403-8hwcf /bin/bash


Push Files to Pods

Trying to get an authorization file into the robot pod

root@obriensystemskub0:~/oom/kubernetes/oneclick# kubectl cp authorization onap-robot/robot-44708506-nhm0n:/home/ubuntu

The copy above works; copying over an existing file fails:
root@obriensystemskub0:~/oom/kubernetes/oneclick# kubectl cp authorization onap-robot/robot-44708506-nhm0n:/etc/lighttpd/authorization
tar: authorization: Cannot open: File exists
tar: Exiting with failure status due to previous errors
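
A workaround sketch for the "File exists" case - delete the file inside the pod first, then copy (pod name as above):

kubectl -n onap-robot exec robot-44708506-nhm0n -- rm /etc/lighttpd/authorization
kubectl cp authorization onap-robot/robot-44708506-nhm0n:/etc/lighttpd/authorization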

Redeploying Code war/jar in a docker container

Attaching a debugger to a docker container



Running ONAP Portal UI Operations

Running ONAP using the vnc-portal

see Installing and Running the ONAP Demos

or run the vnc-portal container to access ONAP using the traditional port mappings. See the following video recorded by Mike Elliott of the OOM team for an audio-visual reference

https://wiki.onap.org/download/attachments/13598723/zoom_0.mp4?version=1&modificationDate=1502986268000&api=v2

Check for the vnc-portal port via (it is always 30211)

obrienbiometrics:onap michaelobrien$ ssh ubuntu@dev.onap.info
ubuntu@ip-172-31-93-122:~$ sudo su -
root@ip-172-31-93-122:~# kubectl get services --all-namespaces -o wide
NAMESPACE             NAME                          CLUSTER-IP      EXTERNAL-IP   PORT(S)                                                                      AGE       SELECTOR
onap-portal           vnc-portal                    10.43.78.204    <nodes>       6080:30211/TCP,5900:30212/TCP                                                4d        app=vnc-portal

launch the vnc-portal in a browser

http://dev.onap.info:30211/

password is "password"

Open firefox inside the VNC vm - launch portal normally

http://portal.api.simpledemo.openecomp.org:8989/ECOMPPORTAL/login.htm

(20170906) Before running SDC - fix the /etc/hosts (thanks Yogini for catching this) - edit your /etc/hosts as follows

(change sdc.ui to sdc.api)

OOM-282

before | after | notes



Log in and run SDC


Continue with the normal ONAP demo flow at (Optional) Tutorial: Onboarding and Distributing a Vendor Software Product (VSP)

Running Multiple ONAP namespaces

Run multiple environments on the same machine - TODO

root@obriensystemsu0:~/onap/oom/kubernetes/oneclick# ./createAll.bash -n onap -a robot

********** Creating instance 1 of ONAP with port range 30200 and 30399

********** Creating ONAP:

********** Creating deployments for robot **********

Creating namespace **********

namespace "onap-robot" created

Creating registry secret **********

secret "onap-docker-registry-key" created

Creating deployments and services **********

NAME:   onap-robot

LAST DEPLOYED: Sun Sep 17 14:58:29 2017

NAMESPACE: onap

STATUS: DEPLOYED

RESOURCES:

==> v1/Service

NAME   CLUSTER-IP  EXTERNAL-IP  PORT(S)       AGE

robot  10.43.6.5   <nodes>      88:30209/TCP  0s

==> extensions/v1beta1/Deployment

NAME   DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE

robot  1        1        1           0          0s

**** Done ****

root@obriensystemsu0:~/onap/oom/kubernetes/oneclick# ./createAll.bash -n onap2 -a robot

********** Creating instance 1 of ONAP with port range 30200 and 30399

********** Creating ONAP:

********** Creating deployments for robot **********

Creating namespace **********

namespace "onap2-robot" created

Creating registry secret **********

secret "onap2-docker-registry-key" created

Creating deployments and services **********

Error: release onap2-robot failed: Service "robot" is invalid: spec.ports[0].nodePort: Invalid value: 30209: provided port is already allocated

The command helm returned with error code 1

Rest API

There are experimental v1 and v2 REST endpoints that allow us to automate Rancher

http://127.0.0.1:8880/v1
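
These are plain HTTP/JSON endpoints, so they can be scripted; a minimal sketch (python used only for pretty-printing):

curl -s http://127.0.0.1:8880/v1/containerevents | python -m json.tool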

For example we can see when the AAI model-loader container was created

http://127.0.0.1:8880/v1/containerevents
{
  • "id": "1ce88",
  • "type": "containerEvent",
  • "links": {},
  • "actions": {},
  • "baseType": "containerEvent",
  • "state": "created",
  • "accountId": "1a7",
  • "created": "2017-09-17T20:07:37Z",
  • "createdTS": 1505678857000,
  • "data": {
    • "fields": {
      • "dockerInspect": {
        • "Id": "59ec11e257fb9061b250fe7ce6a7c86ffd10a82a2f26776c0adc9ac0eb3c6e54",
        • "Created": "2017-09-17T20:07:37.750772403Z",
        • "Path": "/pause",
        • "Args": [ ],
        • "State": {
          • "Status": "running",
          • "Running": true,
          • "Paused": false,
          • "Restarting": false,
          • "OOMKilled": false,
          • "Dead": false,
          • "Pid": 25115,
          • "ExitCode": 0,
          • "Error": "",
          • "StartedAt": "2017-09-17T20:07:37.92889179Z",
          • "FinishedAt": "0001-01-01T00:00:00Z"
          },
        • "Image": "sha256:99e59f495ffaa222bfeb67580213e8c28c1e885f1d245ab2bbe3b1b1ec3bd0b2",
        • "ResolvConfPath": "/var/lib/docker/containers/59ec11e257fb9061b250fe7ce6a7c86ffd10a82a2f26776c0adc9ac0eb3c6e54/resolv.conf",
        • "HostnamePath": "/var/lib/docker/containers/59ec11e257fb9061b250fe7ce6a7c86ffd10a82a2f26776c0adc9ac0eb3c6e54/hostname",
        • "HostsPath": "/var/lib/docker/containers/59ec11e257fb9061b250fe7ce6a7c86ffd10a82a2f26776c0adc9ac0eb3c6e54/hosts",
        • "LogPath": "/var/lib/docker/containers/59ec11e257fb9061b250fe7ce6a7c86ffd10a82a2f26776c0adc9ac0eb3c6e54/59ec11e257fb9061b250fe7ce6a7c86ffd10a82a2f26776c0adc9ac0eb3c6e54-json.log",
        • "Name": "/k8s_POD_model-loader-service-849987455-532vd_onap-aai_d9034afb-9be3-11e7-ac87-024d93e255bc_0",
        • "RestartCount": 0,
        • "Driver": "aufs",
        • "MountLabel": "",
        • "ProcessLabel": "",
        • "AppArmorProfile": "",
        • "ExecIDs": null,
        • "HostConfig": {
          • "Binds": null,
          • "ContainerIDFile": "",
          • "LogConfig": {
            • "Type": "json-file",
            • "Config": { }
            },
          • "NetworkMode": "none",
          • "PortBindings": { },
          • "RestartPolicy": {
            • "Name": "",
            • "MaximumRetryCount": 0
            },
          • "AutoRemove": false,
          • "VolumeDriver": "",
          • "VolumesFrom": null,
          • "CapAdd": null,
          • "CapDrop": null,
          • "Dns": null,
          • "DnsOptions": null,
          • "DnsSearch": null,
          • "ExtraHosts": null,
          • "GroupAdd": null,
          • "IpcMode": "",
          • "Cgroup": "",
          • "Links": null,
          • "OomScoreAdj": -998,
          • "PidMode": "",
          • "Privileged": false,
          • "PublishAllPorts": false,
          • "ReadonlyRootfs": false,
          • "SecurityOpt": [
            • "seccomp=unconfined"
            ],
          • "UTSMode": "",
          • "UsernsMode": "",
          • "ShmSize": 67108864,
          • "Runtime": "runc",
          • "ConsoleSize": [ 2 items
            • 0,
            • 0
            ],
          • "Isolation": "",
          • "CpuShares": 2,
          • "Memory": 0,
          • "CgroupParent": "/kubepods/besteffort/podd9034afb-9be3-11e7-ac87-024d93e255bc",
          • "BlkioWeight": 0,
          • "BlkioWeightDevice": null,
          • "BlkioDeviceReadBps": null,
          • "BlkioDeviceWriteBps": null,
          • "BlkioDeviceReadIOps": null,
          • "BlkioDeviceWriteIOps": null,
          • "CpuPeriod": 0,
          • "CpuQuota": 0,
          • "CpuRealtimePeriod": 0,
          • "CpuRealtimeRuntime": 0,
          • "CpusetCpus": "",
          • "CpusetMems": "",
          • "Devices": null,
          • "DiskQuota": 0,
          • "KernelMemory": 0,
          • "MemoryReservation": 0,
          • "MemorySwap": 0,
          • "MemorySwappiness": -1,
          • "OomKillDisable": false,
          • "PidsLimit": 0,
          • "Ulimits": null,
          • "CpuCount": 0,
          • "CpuPercent": 0,
          • "IOMaximumIOps": 0,
          • "IOMaximumBandwidth": 0
          },
        • "GraphDriver": {
          • "Name": "aufs",
          • "Data": null
          },
        • "Mounts": [ ],
        • "Config": {
          • "Hostname": "model-loader-service-849987455-532vd",
          • "Domainname": "",
          • "User": "",
          • "AttachStdin": false,
          • "AttachStdout": false,
          • "AttachStderr": false,
          • "Tty": false,
          • "OpenStdin": false,
          • "StdinOnce": false,
          • "Env": null,
          • "Cmd": null,
          • "Image": "gcr.io/google_containers/pause-amd64:3.0",
          • "Volumes": null,
          • "WorkingDir": "",
          • "Entrypoint": [
            • "/pause"
            ],
          • "OnBuild": null,
          • "Labels": {}
          },
        • "NetworkSettings": {
          • "Bridge": "",
          • "SandboxID": "6ebd4ae330d1fded82301d121e604a1e7193f20c538a9ff1179e98b9e36ffa5f",
          • "HairpinMode": false,
          • "LinkLocalIPv6Address": "",
          • "LinkLocalIPv6PrefixLen": 0,
          • "Ports": { },
          • "SandboxKey": "/var/run/docker/netns/6ebd4ae330d1",
          • "SecondaryIPAddresses": null,
          • "SecondaryIPv6Addresses": null,
          • "EndpointID": "",
          • "Gateway": "",
          • "GlobalIPv6Address": "",
          • "GlobalIPv6PrefixLen": 0,
          • "IPAddress": "",
          • "IPPrefixLen": 0,
          • "IPv6Gateway": "",
          • "MacAddress": "",
          • "Networks": {
            • "none": {
              • "IPAMConfig": null,
              • "Links": null,
              • "Aliases": null,
              • "NetworkID": "9812f79a4ddb086db1b60cd10292d729842b2b42e674b400ac09101541e2b845",
              • "EndpointID": "d4cb711ea75ed4d27b9d4b3a71d1b3dd5dfa9f4ebe277ab4280d98011a35b463",
              • "Gateway": "",
              • "IPAddress": "",
              • "IPPrefixLen": 0,
              • "IPv6Gateway": "",
              • "GlobalIPv6Address": "",
              • "GlobalIPv6PrefixLen": 0,
              • "MacAddress": ""
              }
            }
          }
        }
      }
    },


Troubleshooting

Rancher fails to restart on server reboot

Having issues after a reboot of a colocated server/agent

Installing Clean Ubuntu

apt-get install ssh

apt-get install ubuntu-desktop

DNS resolution

ignore - not relevant

Search Line limits were exceeded, some dns names have been omitted, the applied search line is: default.svc.cluster.local svc.cluster.local cluster.local kubelet.kubernetes.rancher.internal kubernetes.rancher.internal rancher.internal

https://github.com/rancher/rancher/issues/9303

Config Pod fails to start with Error

Make sure your OpenStack parameters are set in onap-parameters.yaml if you get the following when starting up the config pod

root@obriensystemsu0:~# kubectl get pods --all-namespaces -a
NAMESPACE     NAME                                   READY     STATUS    RESTARTS   AGE
kube-system   heapster-4285517626-l9wjp              1/1       Running   4          22d
kube-system   kube-dns-2514474280-4411x              3/3       Running   9          22d
kube-system   kubernetes-dashboard-716739405-fq507   1/1       Running   4          22d
kube-system   monitoring-grafana-3552275057-w3xml    1/1       Running   4          22d
kube-system   monitoring-influxdb-4110454889-bwqgm   1/1       Running   4          22d
kube-system   tiller-deploy-737598192-841l1          1/1       Running   4          22d
onap          config                                 0/1       Error     0          1d
root@obriensystemsu0:~# vi /etc/hosts
root@obriensystemsu0:~# kubectl logs -n onap config
Validating onap-parameters.yaml has been populated
Error: OPENSTACK_UBUNTU_14_IMAGE must be set in onap-parameters.yaml
+ echo 'Validating onap-parameters.yaml has been populated'
+ [[ -z '' ]]
+ echo 'Error: OPENSTACK_UBUNTU_14_IMAGE must be set in onap-parameters.yaml'
+ exit 1

fix
root@obriensystemsu0:~/onap_1007/oom/kubernetes/config# helm delete --purge onap-config
release "onap-config" deleted
root@obriensystemsu0:~/onap_1007/oom/kubernetes/config# ./createConfig.sh -n onap

**** Creating configuration for ONAP instance: onap
Error from server (AlreadyExists): namespaces "onap" already exists
NAME:   onap-config
LAST DEPLOYED: Mon Oct  9 21:35:27 2017
NAMESPACE: onap
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME                   DATA  AGE
global-onap-configmap  15    0s

==> v1/Pod
NAME    READY  STATUS             RESTARTS  AGE
config  0/1    ContainerCreating  0         0s

**** Done ****
root@obriensystemsu0:~/onap_1007/oom/kubernetes/config# kubectl get pods --all-namespaces -a
NAMESPACE     NAME                                   READY     STATUS    RESTARTS   AGE
kube-system   heapster-4285517626-l9wjp              1/1       Running   4          22d
kube-system   kube-dns-2514474280-4411x              3/3       Running   9          22d
kube-system   kubernetes-dashboard-716739405-fq507   1/1       Running   4          22d
kube-system   monitoring-grafana-3552275057-w3xml    1/1       Running   4          22d
kube-system   monitoring-influxdb-4110454889-bwqgm   1/1       Running   4          22d
kube-system   tiller-deploy-737598192-841l1          1/1       Running   4          22d
onap          config                                 1/1       Running   0          25s
root@obriensystemsu0:~/onap_1007/oom/kubernetes/config# kubectl get pods --all-namespaces -a
NAMESPACE     NAME                                   READY     STATUS      RESTARTS   AGE
kube-system   heapster-4285517626-l9wjp              1/1       Running     4          22d
kube-system   kube-dns-2514474280-4411x              3/3       Running     9          22d
kube-system   kubernetes-dashboard-716739405-fq507   1/1       Running     4          22d
kube-system   monitoring-grafana-3552275057-w3xml    1/1       Running     4          22d
kube-system   monitoring-influxdb-4110454889-bwqgm   1/1       Running     4          22d
kube-system   tiller-deploy-737598192-841l1          1/1       Running     4          22d
onap          config                                 0/1       Completed   0          1m

 



Questions

https://lists.onap.org/pipermail/onap-discuss/2017-July/002084.html

https://kubernetes.io/docs/user-guide/kubectl-cheatsheet/

Please help out our OPNFV friends

https://wiki.opnfv.org/pages/viewpage.action?pageId=12389095

Of interest

https://github.com/cncf/cross-cloud/

Reference Reviews

https://gerrit.onap.org/r/#/c/6179/

https://gerrit.onap.org/r/#/c/9849/

https://gerrit.onap.org/r/#/c/9839/
