
Warning: Draft Content

This wiki is under construction - content here may not be fully specified yet, or may be missing.

TODO: determine/fix containers not ready, get DCAE yamls working, fix health tracking issues for healing


The OOM (ONAP Operations Manager) project has pushed Kubernetes-based deployment code to the oom repository.  This page details getting ONAP (specifically the vFirewall demo) running on Kubernetes in various virtual and native environments.

ONAP on Kubernetes Architecture

Undercloud Installation

Note: you need at least 37 GB of RAM (34 GB for the ONAP services - this is without DCAE and without running the vFirewall demo yet).

We need a Kubernetes installation - either a base installation, or one behind a thin API wrapper like Rancher or Redhat.

There are several options - currently Rancher on Ubuntu 16.04 is the focus, as a thin wrapper on Kubernetes - alternative platforms are covered in the subpage ONAP on Kubernetes (Alternatives).

OS:           Ubuntu 16.04.2, Redhat
VIM:          Bare Metal, VMWare
Description:  Rancher - recommended approach
Status:       Issue with Kubernetes support only in 1.12 (obsolete docker-machine) on OSX
Nodes:        1-4
Links:        http://rancher.com/docs/rancher/v1.6/en/quick-start-guide/

ONAP Installation

Quickstart Installation

ONAP deployment in Kubernetes is modelled in the oom project as a 1:1 service:pod mapping (one pod per docker container).  The fastest way to get ONAP up on Kubernetes is via Rancher.

The primary platform is virtual Ubuntu 16.04 VMs on VMWare Workstation 12.5, on up to two separate 64 GB/6-core 5820K Windows 10 systems.

The secondary platform is 4 bare-metal NUCs (i7/i5/i3 with 16 GB each).

Install only the 1.12.x (currently 1.12.6) version of Docker (the only version that works with Kubernetes in Rancher 1.6)
curl https://releases.rancher.com/install-docker/1.12.sh | sh
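
A quick check that the pinned version is what got installed (output format may vary by build):

docker --version
# expect 1.12.x, e.g. "Docker version 1.12.6"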

Install Rancher (use port 8880 instead of 8080)
sudo docker run -d --restart=unless-stopped -p 8880:8080 rancher/server

In the Rancher UI (http://127.0.0.1:8880), set the IP of the master node under host configuration, create a new onap environment with Kubernetes as the framework (this will set up the kube system containers), and stop the default environment.

register your host(s) - run the following on each host (get the exact command from the "Add Host" menu) - install docker 1.12 first if it is not already on the host

curl https://releases.rancher.com/install-docker/1.12.sh | sh
docker run --rm --privileged -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/rancher:/var/lib/rancher rancher/agent:v1.2.2 http://192.168.163.131:8880/v1/scripts/BBD465D9B24E94F5FBFD:1483142400000:IDaNFrug38QsjZcu6rXh8TwqA4


install kubectl
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl

paste the kubectl config from Rancher into ~/.kube/config

mkdir ~/.kube

vi ~/.kube/config
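
A quick sanity check that kubectl can reach the Rancher-managed cluster (a sketch - your output will differ):

kubectl cluster-info
kubectl get nodes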

clone oom (scp your onap_rsa private key to the host first)

git clone ssh://michaelobrien@gerrit.onap.org:29418/oom

fix nexus3 security temporarily (OOM-3)

vi oom/kubernetes/oneclick/createAll.bash

create_namespace() {
  kubectl create namespace $1-$2
+  kubectl --namespace $1-$2 create secret docker-registry regsecret --docker-server=nexus3.onap.org:10001 --docker-username=docker --docker-password=docker --docker-email=email@email.com
+  kubectl --namespace $1-$2 patch serviceaccount default -p '{"imagePullSecrets": [{"name": "regsecret"}]}'
}
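
After createAll.bash creates the namespaces, a hedged spot-check that the secret and the serviceaccount patch landed (onap-mso is just an example namespace):

kubectl --namespace onap-mso get secret regsecret
kubectl --namespace onap-mso get serviceaccount default -o yaml | grep -A2 imagePullSecrets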


Wait until all the hosts show green in rancher, then run the script that wraps all the kubectl commands

run the one-time config pod (which creates the config-init-root mount used by all the other pods) - the pod will stop normally once it completes

cd oom/kubernetes/config

Before running pod-config-init.yaml - make sure your OpenStack config is set up correctly, so that you can deploy the vFirewall VMs for example

vi oom/kubernetes/config/docker/init/src/config/mso/mso/mso-docker.json

replace, for example, the Keystone identity URL:

"identity_services": [{
"identity_url": "http://OPENSTACK_KEYSTONE_IP_HERE:5000/v2.0",

~/onap/oom/kubernetes/config# kubectl create -f pod-config-init.yaml

pod "config-init" created

Fix DNS resolution before running any more pods (add service.ns.svc.cluster.local to the search line in /etc/resolv.conf temporarily)

~/oom/kubernetes/oneclick# cat /etc/resolv.conf

nameserver 192.168.241.2

search localdomain service.ns.svc.cluster.local

Note: use only the hardcoded "onap" namespace prefix - as URLs in the config pod are set as follows "workflowSdncadapterCallback": "http://mso.onap-mso:8080/mso/SDNCAdapterCallbackService",

cd ../oneclick
vi createAll.bash 

./createAll.bash -n onap
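
While the containers come up, one way to watch progress (a sketch - the 30s interval is arbitrary):

watch -n 30 'kubectl get pods --all-namespaces'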

Wait until the containers are all up - you should see...

Four host Kubernetes cluster in Rancher

In this case 4 Intel NUCs running Ubuntu 16.04.2 natively


Target Deployment State

root@obriensystemsucont0:~/onap/oom/kubernetes/oneclick# kubectl get pods --all-namespaces -o wide

(output to be updated)

Captured on a 5820K 4.1 GHz, 12 vCore, 48 GB Ubuntu 16.04.2 VM on a 64 GB host.


root@obriensystemskub0:~/oom/kubernetes/oneclick# kubectl get services --all-namespaces -o wide

NAMESPACE             NAME                   CLUSTER-IP      EXTERNAL-IP   PORT(S)                                                                      AGE       SELECTOR

default               kubernetes             10.43.0.1       <none>        443/TCP                                                                      45m       <none>

kube-system           heapster               10.43.39.217    <none>        80/TCP                                                                       45m       k8s-app=heapster

kube-system           kube-dns               10.43.0.10      <none>        53/UDP,53/TCP                                                                45m       k8s-app=kube-dns

kube-system           kubernetes-dashboard   10.43.106.248   <none>        9090/TCP                                                                     45m       k8s-app=kubernetes-dashboard

kube-system           monitoring-grafana     10.43.18.184    <none>        80/TCP                                                                       45m       k8s-app=grafana

kube-system           monitoring-influxdb    10.43.58.26     <none>        8086/TCP                                                                     45m       k8s-app=influxdb

kube-system           tiller-deploy          10.43.235.104   <none>        44134/TCP                                                                    45m       app=helm,name=tiller

onap-aai              aai-service            10.43.181.245   <nodes>       8443:30233/TCP,8080:30232/TCP                                                27m       app=aai-service

onap-aai              hbase                  None            <none>        8020/TCP                                                                     27m       app=hbase

onap-aai              model-loader-service   10.43.74.55     <nodes>       8443:30229/TCP,8080:30210/TCP                                                27m       app=model-loader-service

onap-appc             dbhost                 None            <none>        3306/TCP                                                                     27m       app=appc-dbhost

onap-appc             dgbuilder              10.43.146.180   <nodes>       3000:30228/TCP                                                               27m       app=appc-dgbuilder

onap-appc             sdnctldb01             None            <none>        3306/TCP                                                                     27m       app=appc-dbhost

onap-appc             sdnctldb02             None            <none>        3306/TCP                                                                     27m       app=appc-dbhost

onap-appc             sdnhost                10.43.196.103   <nodes>       8282:30230/TCP,1830:30231/TCP                                                27m       app=appc

onap-message-router   dmaap                  10.43.95.58     <nodes>       3904:30227/TCP,3905:30226/TCP                                                27m       app=dmaap

onap-message-router   global-kafka           None            <none>        9092/TCP                                                                     27m       app=global-kafka

onap-message-router   zookeeper              None            <none>        2181/TCP                                                                     27m       app=zookeeper

onap-mso              mariadb                10.43.5.166     <nodes>       3306:30252/TCP                                                               27m       app=mariadb

onap-mso              mso                    10.43.156.155   <nodes>       8080:30223/TCP,3904:30225/TCP,3905:30224/TCP,9990:30222/TCP,8787:30250/TCP   27m       app=mso

onap-policy           brmsgw                 10.43.81.249    <nodes>       9989:30216/TCP                                                               27m       app=brmsgw

onap-policy           drools                 10.43.154.250   <nodes>       6969:30217/TCP                                                               27m       app=drools

onap-policy           mariadb                None            <none>        3306/TCP                                                                     27m       app=mariadb

onap-policy           nexus                  None            <none>        8081/TCP                                                                     27m       app=nexus

onap-policy           pap                    10.43.37.182    <nodes>       8443:30219/TCP,9091:30218/TCP                                                27m       app=pap

onap-policy           pdp                    10.43.123.239   <nodes>       8081:30220/TCP                                                               27m       app=pdp

onap-policy           pypdp                  10.43.226.208   <nodes>       8480:30221/TCP                                                               27m       app=pypdp

onap-portal           portalapps             10.43.225.107   <nodes>       8006:30213/TCP,8010:30214/TCP,8989:30215/TCP                                 27m       app=portalapps

onap-portal           portaldb               None            <none>        3306/TCP                                                                     27m       app=portaldb

onap-portal           vnc-portal             10.43.216.210   <nodes>       6080:30211/TCP,5900:30212/TCP                                                27m       app=vnc-portal

onap-robot            robot                  10.43.52.131    <nodes>       88:30209/TCP                                                                 34m       app=robot

onap-sdc              sdc-be                 10.43.123.5     <nodes>       8443:30204/TCP,8080:30205/TCP                                                27m       app=sdc-be

onap-sdc              sdc-cs                 None            <none>        9042/TCP,9160/TCP                                                            27m       app=sdc-cs

onap-sdc              sdc-es                 None            <none>        9200/TCP,9300/TCP                                                            27m       app=sdc-es

onap-sdc              sdc-fe                 10.43.233.17    <nodes>       9443:30207/TCP,8181:30206/TCP                                                27m       app=sdc-fe

onap-sdc              sdc-kb                 None            <none>        5601/TCP                                                                     27m       app=sdc-kb

onap-sdnc             dbhost                 None            <none>        3306/TCP                                                                     27m       app=sdnc-dbhost

onap-sdnc             sdnc-dgbuilder         10.43.253.47    <nodes>       3000:30203/TCP                                                               27m       app=sdnc-dgbuilder

onap-sdnc             sdnc-portal            10.43.248.245   <nodes>       8843:30201/TCP                                                               27m       app=sdnc-portal

onap-sdnc             sdnctldb01             None            <none>        3306/TCP                                                                     27m       app=sdnc-dbhost

onap-sdnc             sdnctldb02             None            <none>        3306/TCP                                                                     27m       app=sdnc-dbhost

onap-sdnc             sdnhost                10.43.51.170    <nodes>       8282:30202/TCP                                                               27m       app=sdnc

onap-vid              vid-mariadb            None            <none>        3306/TCP                                                                     31m       app=vid-mariadb

onap-vid              vid-server             10.43.54.34     <nodes>       8080:30200/TCP                                                               31m       app=vid-server

sdnc-portal is still downloading node packages

root@obriensystemskub0:~/oom/kubernetes/oneclick# kubectl --namespace onap-sdnc logs -f sdnc-portal-3375812606-01s1d

npm http GET https://registry.npmjs.org/minimist/0.0.8
npm http GET https://registry.npmjs.org/basic-auth/1.0.0
npm http GET https://registry.npmjs.org/utils-merge/1.0.0
npm http GET https://registry.npmjs.org/minimist/0.0.8

root@obriensystemskub0:~/oom/kubernetes/oneclick# kubectl --namespace onap-sdnc logs -f sdnc-portal-3375812606-01s1d | grep ERR
npm ERR! fetch failed https://registry.npmjs.org/is-utf8/-/is-utf8-0.2.1.tgz

#2 - pod status snapshot

root@obriensystemskub0:~/oom/kubernetes/oneclick# kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
kube-system heapster-859001963-kz210 1/1 Running 5 7h 10.42.82.15 obriensystemskub0
kube-system kube-dns-1759312207-jd5tf 3/3 Running 8 7h 10.42.46.137 obriensystemskub0
kube-system kubernetes-dashboard-2463885659-xv986 1/1 Running 4 7h 10.42.83.159 obriensystemskub0
kube-system monitoring-grafana-1177217109-sm5nq 1/1 Running 4 7h 10.42.176.36 obriensystemskub0
kube-system monitoring-influxdb-1954867534-vvb84 1/1 Running 4 7h 10.42.30.22 obriensystemskub0
kube-system tiller-deploy-1933461550-gdxch 1/1 Running 4 7h 10.42.32.200 obriensystemskub0
onap-aai aai-service-301900780-70xv0 1/1 Running 0 10m 10.42.233.63 obriensystemskub0
onap-aai hbase-2985919495-bv216 1/1 Running 0 10m 10.42.239.209 obriensystemskub0
onap-aai model-loader-service-2352751609-7rp0x 1/1 Running 0 10m 10.42.192.198 obriensystemskub0
onap-appc appc-4266112350-92s6l 1/1 Running 0 10m 10.42.129.60 obriensystemskub0
onap-appc appc-dbhost-981835105-gcz3b 1/1 Running 0 10m 10.42.132.39 obriensystemskub0
onap-appc appc-dgbuilder-939982213-xb7t8 1/1 Running 0 10m 10.42.219.42 obriensystemskub0
onap-message-router dmaap-1381770224-jz059 1/1 Running 0 10m 10.42.56.184 obriensystemskub0
onap-message-router global-kafka-3488253347-8wpn7 1/1 Running 0 10m 10.42.82.226 obriensystemskub0
onap-message-router zookeeper-3757672320-xgl3m 1/1 Running 0 10m 10.42.127.222 obriensystemskub0
onap-mso mariadb-2610811658-s2jdw 1/1 Running 0 10m 10.42.86.175 obriensystemskub0
onap-mso mso-2217182437-pvkmh 1/1 Running 0 10m 10.42.92.123 obriensystemskub0
onap-policy brmsgw-554754608-rm4vt 1/1 Running 0 10m 10.42.120.11 obriensystemskub0
onap-policy drools-1184532483-g1b49 0/1 Running 0 10m 10.42.182.219 obriensystemskub0
onap-policy mariadb-546348828-6t88h 1/1 Running 0 10m 10.42.35.111 obriensystemskub0
onap-policy nexus-2933631225-hhn9s 1/1 Running 0 10m 10.42.227.139 obriensystemskub0
onap-policy pap-235069217-s8mhh 1/1 Running 0 10m 10.42.79.125 obriensystemskub0
onap-policy pdp-819476266-v47r8 1/1 Running 0 10m 10.42.225.9 obriensystemskub0
onap-policy pypdp-3646772508-b2wrr 1/1 Running 0 10m 10.42.41.167 obriensystemskub0
onap-portal portalapps-157357486-vbbhl 1/1 Running 0 10m 10.42.252.245 obriensystemskub0
onap-portal portaldb-351714684-4l76z 1/1 Running 0 10m 10.42.145.255 obriensystemskub0
onap-portal vnc-portal-1027553126-j1ggq 1/1 Running 0 10m 10.42.250.184 obriensystemskub0
onap-robot robot-44708506-lwfkk 1/1 Running 0 10m 10.42.251.228 obriensystemskub0
onap-sdc sdc-be-4018435632-kq4rw 1/1 Running 0 10m 10.42.146.30 obriensystemskub0
onap-sdc sdc-cs-2973656688-bnft4 1/1 Running 0 10m 10.42.255.194 obriensystemskub0
onap-sdc sdc-es-2628312921-pd4z3 1/1 Running 0 10m 10.42.235.113 obriensystemskub0
onap-sdc sdc-fe-4051669116-3c0tl 1/1 Running 0 10m 10.42.242.79 obriensystemskub0
onap-sdc sdc-kb-4011398457-f9mnm 1/1 Running 0 10m 10.42.31.16 obriensystemskub0
onap-sdnc sdnc-1672832555-r38fb 1/1 Running 0 10m 10.42.233.166 obriensystemskub0
onap-sdnc sdnc-dbhost-2119410126-7k6d0 1/1 Running 0 10m 10.42.126.192 obriensystemskub0
onap-sdnc sdnc-dgbuilder-730191098-4s7v4 1/1 Running 0 10m 10.42.27.68 obriensystemskub0
onap-sdnc sdnc-portal-3375812606-mp4nt 0/1 Running 0 10m 10.42.241.175 obriensystemskub0
onap-vid vid-mariadb-1357170716-9l2p3 1/1 Running 0 10m 10.42.189.186 obriensystemskub0
onap-vid vid-server-248645937-dkpjc 1/1 Running 0 10m 10.42.180.230 obriensystemskub0



Total pods is 49.

Below, any coloured container had issues getting to the Running state (currently 32 of 33 come up after 45 min). There are an additional 11 DCAE pods - bringing the total to 44 - plus an additional 4 for appc/sdnc and the config-init pod, for the total of 49.

master:20170715 (restart counts are over 14h)

NAMESPACE            NAME                                    READY  STATUS                  RESTARTS          HOST  NOTES
default              config-init                             0/0    Terminated (Succeeded)  0                 1     see config-init note below
onap-aai             aai-service-346921785-624ss             1/1    Running                 0                 1
onap-aai             hbase-139474849-7fg0s                   1/1    Running                 0                 2
onap-aai             model-loader-service-1795708961-wg19w   0/1    Init:1/2                82                2
onap-appc            appc-2044062043-bx6tc                   1/1    Running                 0                 1
onap-appc            appc-dbhost-2039492951-jslts            1/1    Running                 0                 2
onap-appc            appc-dgbuilder-2934720673-mcp7c         1/1    Running                 0                 2
onap-appc            sdnctldb01 (internal)
onap-appc            sdnctldb02 (internal)
onap-dcae            dcae-zookeeper                          -      debugging
onap-dcae            dcae-kafka                              -      debugging
onap-dcae            dcae-dmaap                              -      debugging
onap-dcae            pgaas                                   -      debugging
onap-dcae            dcae-collector-common-event             -      debugging                                       persistent volume: dcae-collector-pvs
onap-dcae            dcae-collector-dmaapbc                  -      debugging
onap-dcae            dcae-controller                         -      debugging                                       persistent volume: dcae-controller-pvs
onap-dcae            dcae-ves-collector                      -      debugging
onap-dcae            cdap-0-dep                              -      debugging
onap-dcae            cdap-1-dep                              -      debugging
onap-dcae            cdap-2-dep                              -      debugging
onap-message-router  dmaap-3842712241-gtdkp                  0/1    CrashLoopBackOff        164               1
onap-message-router  global-kafka-89365896-5fnq9             1/1    Running                 0                 2
onap-message-router  zookeeper-1406540368-jdscq              1/1    Running                 0                 1
onap-mso             mariadb-2638235337-758zr                1/1    Running                 0                 1
onap-mso             mso-3192832250-fq6pn                    0/1    CrashLoopBackOff        167               2     fixed by config-init and resolv.conf
onap-policy          brmsgw-568914601-d5z71                  0/1    Init:0/1                82                1     fixed by config-init and resolv.conf
onap-policy          drools-1450928085-099m2                 0/1    Init:0/1                82                1     start time 45m - fixed by config-init and resolv.conf
onap-policy          mariadb-2932363958-0l05g                1/1    Running                 0                 0
onap-policy          nexus-871440171-tqq4z                   0/1    Running                 0                 2
onap-policy          pap-2218784661-xlj0n                    1/1    Running                 0                 1
onap-policy          pdp-1677094700-75wpj                    0/1    Init:0/1                82                2     fixed by config-init and resolv.conf
onap-policy          pypdp-3209460526-bwm6b                  0/1    Init:0/1                82                2     fixed by config-init and resolv.conf
onap-portal          portalapps-1708810953-trz47             0/1    Init:CrashLoopBackOff   163               2     initial dockerhub mariadb download issue - fixed
onap-portal          portaldb-3652211058-vsg8r               1/1    Running                 0                 0
onap-portal          vnc-portal-948446550-76kj7              0/1    Init:0/5                82                1     fixed by config-init and resolv.conf
onap-robot           robot-964706867-czr05                   1/1    Running                 0                 2
onap-sdc             sdc-be-2426613560-jv8sk                 0/1    Init:0/2                82                2     fixed by config-init and resolv.conf
onap-sdc             sdc-cs-2080334320-95dq8                 0/1    CrashLoopBackOff        163               2     fixed by config-init and resolv.conf
onap-sdc             sdc-es-3272676451-skf7z                 1/1    Running                 0                 1
onap-sdc             sdc-fe-931927019-nt94t                  0/1    Init:0/1                82                1     fixed by config-init and resolv.conf
onap-sdc             sdc-kb-3337231379-8m8wx                 0/1    Init:0/1                82                1     fixed by config-init and resolv.conf
onap-sdnc            sdnc-1788655913-vvxlj                   1/1    Running                 0                 0
onap-sdnc            sdnc-dbhost-240465348-kv8vf             1/1    Running                 0                 0
onap-sdnc            sdnc-dgbuilder-4164493163-cp6rx         1/1    Running                 0                 0
onap-sdnc            sdnctldb01 (internal)
onap-sdnc            sdnctldb02 (internal)
onap-sdnc            sdnc-portal-2324831407-50811            0/1    Running                 3 (vm) / 0 (nuc)  1     npm fetch failures - see log below
onap-vid             vid-mariadb-4268497828-81hm0            0/1    CrashLoopBackOff        169               2     fixed by config-init and resolv.conf
onap-vid             vid-server-2331936551-6gxsp             0/1    Init:0/1                82                1     fixed by config-init and resolv.conf

config-init note: the "config-init-root" mount holds the user-configurable VF parameter file at /dockerdata-nfs/onapdemo/mso/mso/mso-docker.json

DCAE note: currently there are no DCAE containers running yet - we are missing 6 yaml files (1 for the controller and 5 for the collector, staging and 3 cdap pods) - therefore DMaaP, VES collectors and APPC actions as the result of policy actions (closed loop) will not function yet. See OOM-5.

sdnc-portal log excerpt:

root@obriensystemskub0:~/oom/kubernetes/oneclick# kubectl --namespace onap-sdnc logs -f sdnc-portal-3375812606-01s1d | grep ERR
npm ERR! fetch failed https://registry.npmjs.org/is-utf8/-/is-utf8-0.2.1.tgz


Run VID if required to verify that config-init has mounted properly (and to verify resolv.conf)

Cloning details

Install the latest version of the OOM (ONAP Operations Manager) project repo - specifically the ONAP on Kubernetes work uploaded in June 2017.

https://gerrit.onap.org/r/gitweb?p=oom.git

git clone ssh://yourgerrituserid@gerrit.onap.org:29418/oom

cd oom/kubernetes/oneclick

Versions

oom : master (1.1.0-SNAPSHOT)

onap deployments: 1.0.0

Rancher environment for Kubernetes

Set up a separate onap Kubernetes environment and disable the existing default environment.

Adding hosts to the Kubernetes environment will kick off the k8s system containers.

Rancher kubectl config

To be able to run the kubectl scripts - install kubectl

Nexus3 security settings

Fix nexus3 security for each namespace

in createAll.bash add the following two lines just before namespace creation - to create a secret and attach it to the namespace (thanks to Jason Hunt of IBM last Friday for helping us attach it - when we were all getting our pods to come up).  A better fix for the future will be to pass these in as parameters from a prod/stage/dev ecosystem config.

create_namespace() {
  kubectl create namespace $1-$2
+  kubectl --namespace $1-$2 create secret docker-registry regsecret --docker-server=nexus3.onap.org:10001 --docker-username=docker --docker-password=docker --docker-email=email@email.com
+  kubectl --namespace $1-$2 patch serviceaccount default -p '{"imagePullSecrets": [{"name": "regsecret"}]}'
}


Fix MSO mso-docker.json

Before running pod-config-init.yaml - make sure your config for openstack is setup correctly - so you can deploy the vFirewall VMs for example

vi oom/kubernetes/config/docker/init/src/config/mso/mso/mso-docker.json


Original:

"mso-po-adapter-config": {
    "checkrequiredparameters": "true",
    "cloud_sites": [{
        "aic_version": "2.5",
        "id": "Ottawa",
        "identity_service_id": "KVE5076_OPENSTACK",
        "lcp_clli": "RegionOne",
        "region_id": "RegionOne"
    }],
    "identity_services": [{
        "admin_tenant": "services",
        "dcp_clli": "KVE5076_OPENSTACK",
        "identity_authentication_type": "USERNAME_PASSWORD",
        "identity_server_type": "KEYSTONE",
        "identity_url": "http://OPENSTACK_KEYSTONE_IP_HERE:5000/v2.0",
        "member_role": "admin",
        "mso_id": "dev",
        "mso_pass": "dcdc0d9e4d69a667c67725a9e466e6c3",
        "tenant_metadata": "true"
    }],

Replacement for Rackspace:

"mso-po-adapter-config": {
    "checkrequiredparameters": "true",
    "cloud_sites": [{
        "aic_version": "2.5",
        "id": "Dallas",
        "identity_service_id": "RAX_KEYSTONE",
        "lcp_clli": "DFW", # or IAD
        "region_id": "DFW"
    }],
    "identity_services": [{
        "admin_tenant": "service",
        "dcp_clli": "RAX_KEYSTONE",
        "identity_authentication_type": "RACKSPACE_APIKEY",
        "identity_server_type": "KEYSTONE",
        "identity_url": "https://identity.api.rackspacecloud.com/v2.0",
        "member_role": "admin",
        "mso_id": "9998888",
        "mso_pass": "YOUR_API_KEY",
        "tenant_metadata": "true"
    }],

delete/recreate the config pod

root@obriensystemskub0:~/oom/kubernetes/config# kubectl --namespace default delete -f pod-config-init.yaml
pod "config-init" deleted
root@obriensystemskub0:~/oom/kubernetes/config# kubectl create -f pod-config-init.yaml
pod "config-init" created

or copy over your changes directly to the mount

root@obriensystemskub0:~/oom/kubernetes/config# cp docker/init/src/config/mso/mso/mso-docker.json /dockerdata-nfs/onapdemo/mso/mso/mso-docker.json

Use only "onap" namespace

Note: use only the hardcoded "onap" namespace prefix - as URLs in the config pod are set as follows "workflowSdncadapterCallback": "http://mso.onap-mso:8080/mso/SDNCAdapterCallbackService",


Monitor Container Deployment

First verify your Kubernetes system is up.

Then wait 25-45 min for all pods to attain the 1/1 Running state.

Kubernetes specific config

https://kubernetes.io/docs/user-guide/kubectl-cheatsheet/

Nexus Docker repo Credentials

Checking out use of a kubectl secret in the yaml files via - https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/

Container Endpoint access

Check the services view in the Kubernetes API under robot

robot.onap-robot:88 TCP

robot.onap-robot:30209 TCP

root@obriensystemskub0:~/oom/kubernetes/oneclick# kubectl get services --all-namespaces -o wide

onap-vid      vid-mariadb            None           <none>        3306/TCP         1h        app=vid-mariadb

onap-vid      vid-server             10.43.14.244   <nodes>       8080:30200/TCP   1h        app=vid-server
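
A quick reachability check against the robot NodePort from outside the cluster (the node IP below is just an example from this particular setup):

curl -I http://192.168.163.132:30209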


Container Logs

root@obriensystemskub0:~/oom/kubernetes/oneclick# kubectl --namespace onap-vid logs -f vid-server-248645937-8tt6p

16-Jul-2017 02:46:48.707 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in 22520 ms


root@obriensystemskub0:~/oom/kubernetes/oneclick# kubectl get pods --all-namespaces -o wide

NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE       IP              NODE

onap-robot    robot-44708506-dgv8j                    1/1       Running   0          36m       10.42.240.80    obriensystemskub0

root@obriensystemskub0:~/oom/kubernetes/oneclick# kubectl --namespace onap-robot logs -f robot-44708506-dgv8j

2017-07-16 01:55:54: (log.c.164) server started

SSH into ONAP containers

Normally I would shell in via the standard method - https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/

kubectl exec -it robot -- /bin/bash

In theory the pod id alone should be sufficient - but the attempts below fail (tracked in OOM-47)

root@obriensystemsucont0:~/onap/oom/kubernetes/oneclick# kubectl describe node obriensystemsucont0 | grep robot

  Namespace            Name                        CPU Requests    CPU Limits    Memory Requests    Memory Limits

  ---------            ----                        ------------    ----------    ---------------    -------------
  onap-robot            robot-964706867-95hjd                0 (0%)        0 (0%)        0 (0%)        0 (0%)
root@obriensystemsucont0:~/onap/oom/kubernetes/oneclick# kubectl exec -it robot-964706867-95hjd /bin/bash
Error from server (NotFound): pods "robot-964706867-95hjd" not found
root@obriensystemsucont0:~/onap/oom/kubernetes/oneclick# kubectl exec -it robot-964706867 /bin/bash
Error from server (NotFound): pods "robot-964706867" not found
root@obriensystemsucont0:~/onap/oom/kubernetes/oneclick# kubectl exec -it robot /bin/bash
Error from server (NotFound): pods "robot" not found
root@obriensystemsucont0:~/onap/oom/kubernetes/oneclick# kubectl exec -it onap-robot /bin/bash
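
The NotFound errors above are likely because kubectl exec defaults to the "default" namespace - passing --namespace with the full pod name (from the describe output) should work:

kubectl --namespace onap-robot exec -it robot-964706867-95hjd -- /bin/bash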

https://jira.onap.org/browse/OOM-47

in queue....

Push Files to Pods

Trying to get an authorization file into the robot pod

root@obriensystemskub0:~/oom/kubernetes/oneclick# kubectl cp authorization onap-robot/robot-44708506-nhm0n:/home/ubuntu

The copy above appears to work; copying over an existing file fails:
root@obriensystemskub0:~/oom/kubernetes/oneclick# kubectl cp authorization onap-robot/robot-44708506-nhm0n:/etc/lighttpd/authorization
tar: authorization: Cannot open: File exists
tar: Exiting with failure status due to previous errors
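
A hedged workaround - delete the existing file inside the container first, then copy (pod name from above):

kubectl --namespace onap-robot exec robot-44708506-nhm0n -- rm /etc/lighttpd/authorization
kubectl cp authorization onap-robot/robot-44708506-nhm0n:/etc/lighttpd/authorization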


Running ONAP Portal UI Operations

see Installing and Running the ONAP Demos

Get the mapped external port by checking the service in Kubernetes - here 30200 for VID on a particular node in our cluster - or list it with kubectl get services as shown earlier.

fix /etc/hosts as usual

192.168.163.132 portal.api.simpledemo.openecomp.org

192.168.163.132 sdc.api.simpledemo.openecomp.org

192.168.163.132 policy.api.simpledemo.openecomp.org

192.168.163.132 vid.api.simpledemo.openecomp.org

In order to map internal ports like 8989 to external ones like 30215 - we will need to reconfigure the onap config links. The internal:external pairs can be listed per service as below.
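
For example (portalapps carries the 8989:30215 mapping above):

kubectl get services --all-namespaces | grep portal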


Kubernetes Installation Options

Rancher on Ubuntu 16.04

Install Rancher

http://rancher.com/docs/rancher/v1.6/en/quick-start-guide/

http://rancher.com/docs/rancher/v1.6/en/installing-rancher/installing-server/#single-container

Install a docker version that Rancher and Kubernetes support, which is currently 1.12.6

http://rancher.com/docs/rancher/v1.5/en/hosts/#supported-docker-versions

curl https://releases.rancher.com/install-docker/1.12.sh | sh
docker run -d --restart=unless-stopped -p 8880:8080 rancher/server:stable


Verify your Rancher admin console is up on the external port you configured above

Wait for the docker container to finish DB startup


http://rancher.com/docs/rancher/v1.6/en/hosts/

Registering Hosts in Rancher

Having issues registering a combined single VM (controller + host)? Use your real IP, not localhost.

In settings | Host Configuration | set your IP

[root@obrien-b2 etcd]# sudo docker run -e CATTLE_AGENT_IP="192.168.163.128"  --rm --privileged -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/rancher:/var/lib/rancher rancher/agent:v1.2.2 http://192.168.163.128:8080/v1/scripts/A9487FC88388CC31FB76:1483142400000:IypSDQCtA4SwkRnthKqH53Vxoo
INFO: Launched Rancher Agent: 1130bdae106396623a01e34a54f72627da2673e466fc78229688330f597ea247

See your host registered

Troubleshooting

Rancher fails to restart on server reboot

Having issues after a reboot of a colocated server/agent

Docker Nexus Config

OOM-3

Out of the box we can't pull images - currently working on a config step around https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/

kubectl create secret docker-registry regsecret --docker-server=nexus3.onap.org:10001 --docker-username=docker --docker-password=docker --docker-email=someone@amdocs.com

      imagePullSecrets:

       - name: regsecret
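
For reference, a minimal sketch of where imagePullSecrets sits in a pod spec (hypothetical pod name; the image and secret name are taken from this page):

# sketch only - pod name is hypothetical
apiVersion: v1
kind: Pod
metadata:
  name: testsuite-example
spec:
  containers:
  - name: testsuite
    image: nexus3.onap.org:10001/openecomp/testsuite:1.0-STAGING-latest
  imagePullSecrets:
  - name: regsecret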





Failed to pull image "nexus3.onap.org:10001/openecomp/testsuite:1.0-STAGING-latest": image pull failed for nexus3.onap.org:10001/openecomp/testsuite:1.0-STAGING-latest, this may be because there are no credentials on this request. details: (unauthorized: authentication required)
kubelet 172.17.4.99

OOM Repo changes

20170629: fix on 20170626 on a hardcoded proxy - (for those who run outside the firewall) - https://gerrit.onap.org/r/gitweb?p=oom.git;a=commitdiff;h=131c2a42541fb807f395fe1f39a8482a53f92c60

DNS resolution

add "service.ns.svc.cluster.local" to fix

Search Line limits were exceeded, some dns names have been omitted, the applied search line is: default.svc.cluster.local svc.cluster.local cluster.local kubelet.kubernetes.rancher.internal kubernetes.rancher.internal rancher.internal


https://github.com/rancher/rancher/issues/9303

root@obriensystemskub0:~/oom/kubernetes/oneclick# cat /etc/resolv.conf

# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)

#     DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN

nameserver 192.168.241.2

search localdomain service.ns.svc.cluster.local
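
A hedged one-liner to apply this on a host (the glibc resolver honours the last search line, though resolvconf may regenerate the file on reboot):

echo "search localdomain service.ns.svc.cluster.local" | sudo tee -a /etc/resolv.conf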



OOM-1

Design Issues

DI 10: 20170724: DCAE Integration

OOM-5

todo: 

docker images need to be pushed to nexus

from: registry.stratlab.local:30002/onap/dcae/cdap:1.0.7

to: nexus3.onap.org:10001/openecomp

OOM-62

2 persistent volumes are also created (controller-pvs, collector-pvs)
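
To confirm the volumes exist once the DCAE yamls are in place (a sketch):

kubectl get pv | grep pvs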

root@obriensystemskub0:~/oom/kubernetes/oneclick# kubectl get pods --all-namespaces -o wide | grep dcae

onap-dcae             cdap0-3177500262-j03zs                         0/1       Init:ImagePullBackOff   0          44m       10.42.231.218   obriensystemskub0

onap-dcae             cdap1-3117270807-7wr1l                         0/1       Init:ImagePullBackOff   0          44m       10.42.82.255    obriensystemskub0

onap-dcae             cdap2-212125479-22j6b                          0/1       Init:ImagePullBackOff   0          44m       10.42.99.3      obriensystemskub0

onap-dcae             dcae-collector-common-event-1873968642-h5prb   0/1       ImagePullBackOff        0          44m       10.42.32.193    obriensystemskub0

onap-dcae             dcae-collector-dmaapbc-894256738-7z6hh         0/1       ImagePullBackOff        0          44m       10.42.129.230   obriensystemskub0

onap-dcae             dcae-controller-3570678936-zvst8               0/1       ContainerCreating       0          44m       <none>          obriensystemskub0

onap-dcae             dcae-pgaas-3690783998-18hjq                    0/1       ImagePullBackOff        0          44m       10.42.173.31    obriensystemskub0

onap-dcae             dcae-ves-collector-2165043810-nlq5g            0/1       ImagePullBackOff        0          40m       10.42.217.235   obriensystemskub0

onap-dcae             dmaap-2584402605-k9drh                         0/1       ImagePullBackOff        0          44m       10.42.28.94     obriensystemskub0

onap-dcae             kafka-3224242826-pj465                         0/1       ImagePullBackOff        0          44m       10.42.228.190   obriensystemskub0

onap-dcae             zookeeper-1478538356-9zblp                     0/1       ImagePullBackOff        0          44m       10.42.199.199   obriensystemskub0

root@obriensystemskub0:~/oom/kubernetes/oneclick# kubectl get services --all-namespaces -o wide | grep dcae

onap-dcae             dcae-collector-common-event   10.43.244.26    <nodes>       8080:30236/TCP,8443:30237/TCP,9999:30238/TCP                                 44m       app=dcae-collector-common-event

onap-dcae             dcae-collector-dmaapbc        10.43.85.114    <nodes>       8080:30239/TCP,8443:30240/TCP                                                44m       app=dcae-collector-dmaapbc

onap-dcae             dcae-controller               10.43.71.115    <nodes>       8000:30234/TCP,9998:30235/TCP                                                44m       app=dcae-controller

onap-dcae             dcae-ves-collector            10.43.204.227   <nodes>       8080:30241/TCP,9999:30242/TCP                                                44m       app=dcae-ves-collector

onap-dcae             zldciad4vipstg00              10.43.150.187   <nodes>       5432:30245/TCP                                                               44m       app=dcae-pgaas



