Table of Contents
Note: This wiki is under construction; content here may be not fully specified or missing.
...
Reviews
https://gerrit.onap.org/r/#/c/6179/
Undercloud Installation
Note: you need at least 37 GB of RAM (34 GB for the ONAP services - this is without DCAE yet and without running the vFirewall yet).
We need a Kubernetes installation - either a base installation or one with a thin API wrapper like Rancher or Red Hat.
There are several options - currently Rancher on Ubuntu 16.04 is the focus as a thin wrapper on Kubernetes - other alternative platforms are covered in the subpage ONAP on Kubernetes (Alternatives)
...
Ubuntu 16.04.2
Red Hat
...
Bare Metal
VMWare
...
Recommended approach
Known issue: on OSX, Kubernetes is supported only with Docker 1.12 (via the obsolete docker-machine)
...
ONAP Installation
Quickstart Installation
ONAP deployment in Kubernetes is modelled in the oom project as a 1:1 set of service:pod sets (one pod per docker container). The fastest way to get ONAP Kubernetes up is via Rancher.
The primary platform is virtual Ubuntu 16.04 VMs on VMware Workstation 12.5 on up to two separate 64 GB/6-core 5820K Windows 10 systems.
The secondary platform is bare-metal: 4 NUCs (i7/i5/i3 with 16 GB each)
Install only the 1.12.x (currently 1.12.6) version of Docker (the only version that works with Kubernetes in Rancher 1.6)
curl https://releases.rancher.com/install-docker/1.12.sh | sh
Install rancher (use 8880 instead of 8080)
sudo docker run -d --restart=unless-stopped -p 8880:8080 rancher/server
In the Rancher UI (http://127.0.0.1:8880), set the IP name of the master node in the config, create a new onap environment as Kubernetes (this will set up the kube containers), and stop the default environment
register your host(s) - run the following on each host (get the command from the "Add Host" menu) - install docker 1.12 first if it is not already on the host
curl https://releases.rancher.com/install-docker/1.12.sh | sh
docker run --rm --privileged -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/rancher:/var/lib/rancher rancher/agent:v1.2.2 http://192.168.163.131:8880/v1/scripts/BBD465D9B24E94F5FBFD:1483142400000:IDaNFrug38QsjZcu6rXh8TwqA4
install kubectl
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
paste kubectl config from rancher
mkdir ~/.kube
vi ~/.kube/config
clone oom (scp your onap_rsa private key first)
git clone ssh://michaelobrien@gerrit.onap.org:29418/oom
fix nexus3 security temporarily:
vi oom/kubernetes/oneclick/createAll.bash
create_namespace() {
kubectl create namespace $1-$2
+ kubectl --namespace $1-$2 create secret docker-registry regsecret --docker-server=nexus3.onap.org:10001 --docker-username=docker --docker-password=docker --docker-email=email@email.com
+ kubectl --namespace $1-$2 patch serviceaccount default -p '{"imagePullSecrets": [{"name": "regsecret"}]}'
}
Wait until all the hosts show green in rancher, then run the script that wraps all the kubectl commands
run the one-time config pod (which mounts config-init-root for all the other pods) - the pod will stop normally
cd oom/kubernetes/config
Before running pod-config-init.yaml - make sure your OpenStack config is set up correctly - so you can deploy the vFirewall VMs, for example
vi oom/kubernetes/config/docker/init/src/config/mso/mso/mso-docker.json
replace for example
"identity_services": [{
"identity_url": "http://OPENSTACK_KEYSTONE_IP_HERE:5000/v2.0",
~/onap/oom/kubernetes/config# kubectl create -f pod-config-init.yaml
pod "config-init" created
Fix DNS resolution before running any more pods (add service.ns.svc.cluster.local or svc.cluster.local temporarily)
~/oom/kubernetes/oneclick# cat /etc/resolv.conf
nameserver 192.168.241.2
search localdomain service.ns.svc.cluster.local
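The temporary search-line edit can be scripted so it is idempotent. A minimal sketch, demonstrated on a throwaway copy of resolv.conf (the nameserver value is just the one shown above; point RESOLV at /etc/resolv.conf on a real host):

```shell
# Sketch: extend the resolv.conf search line with the Kubernetes domain,
# but only if it is not already present. Operates on a temp copy for safety.
RESOLV=$(mktemp)
printf 'nameserver 192.168.241.2\nsearch localdomain\n' > "$RESOLV"

DOMAIN=service.ns.svc.cluster.local
if ! grep -q "$DOMAIN" "$RESOLV"; then
  # append to the existing search line rather than adding a second one
  sed -i "s/^search .*/& $DOMAIN/" "$RESOLV"
fi
RESULT=$(cat "$RESOLV")
echo "$RESULT"
rm -f "$RESOLV"
```

Remember (per the warning in the file header) that resolvconf may overwrite this change on reboot, so re-check after restarting a host.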
Note: use only the hardcoded "onap" namespace prefix - as URLs in the config pod are set as follows "workflowSdncadapterCallback": "http://mso.onap-mso:8080/mso/SDNCAdapterCallbackService",
cd ../oneclick
vi createAll.bash
./createAll.bash -n onap
Wait until the containers are all up - you should see...
Four host Kubernetes cluster in Rancher
In this case 4 Intel NUCs running Ubuntu 16.04.2 natively
Target Deployment State
root@obriensystemsucont0:~/onap/oom/kubernetes/oneclick# kubectl get pods --all-namespaces -o wide
to update
on 5820k 4.1GHz 12 vCores 48g Ubuntu 16.04.2 VM on 64g host
root@obriensystemskub0:~/oom/kubernetes/oneclick# kubectl get services --all-namespaces -o wide
NAMESPACE NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
default kubernetes 10.43.0.1 <none> 443/TCP 45m <none>
kube-system heapster 10.43.39.217 <none> 80/TCP 45m k8s-app=heapster
kube-system kube-dns 10.43.0.10 <none> 53/UDP,53/TCP 45m k8s-app=kube-dns
kube-system kubernetes-dashboard 10.43.106.248 <none> 9090/TCP 45m k8s-app=kubernetes-dashboard
kube-system monitoring-grafana 10.43.18.184 <none> 80/TCP 45m k8s-app=grafana
kube-system monitoring-influxdb 10.43.58.26 <none> 8086/TCP 45m k8s-app=influxdb
kube-system tiller-deploy 10.43.235.104 <none> 44134/TCP 45m app=helm,name=tiller
onap-aai aai-service 10.43.181.245 <nodes> 8443:30233/TCP,8080:30232/TCP 27m app=aai-service
onap-aai hbase None <none> 8020/TCP 27m app=hbase
onap-aai model-loader-service 10.43.74.55 <nodes> 8443:30229/TCP,8080:30210/TCP 27m app=model-loader-service
onap-appc dbhost None <none> 3306/TCP 27m app=appc-dbhost
onap-appc dgbuilder 10.43.146.180 <nodes> 3000:30228/TCP 27m app=appc-dgbuilder
onap-appc sdnctldb01 None <none> 3306/TCP 27m app=appc-dbhost
onap-appc sdnctldb02 None <none> 3306/TCP 27m app=appc-dbhost
onap-appc sdnhost 10.43.196.103 <nodes> 8282:30230/TCP,1830:30231/TCP 27m app=appc
onap-message-router dmaap 10.43.95.58 <nodes> 3904:30227/TCP,3905:30226/TCP 27m app=dmaap
onap-message-router global-kafka None <none> 9092/TCP 27m app=global-kafka
onap-message-router zookeeper None <none> 2181/TCP 27m app=zookeeper
onap-mso mariadb 10.43.5.166 <nodes> 3306:30252/TCP 27m app=mariadb
onap-mso mso 10.43.156.155 <nodes> 8080:30223/TCP,3904:30225/TCP,3905:30224/TCP,9990:30222/TCP,8787:30250/TCP 27m app=mso
onap-policy brmsgw 10.43.81.249 <nodes> 9989:30216/TCP 27m app=brmsgw
onap-policy drools 10.43.154.250 <nodes> 6969:30217/TCP 27m app=drools
onap-policy mariadb None <none> 3306/TCP 27m app=mariadb
onap-policy nexus None <none> 8081/TCP 27m app=nexus
onap-policy pap 10.43.37.182 <nodes> 8443:30219/TCP,9091:30218/TCP 27m app=pap
onap-policy pdp 10.43.123.239 <nodes> 8081:30220/TCP 27m app=pdp
onap-policy pypdp 10.43.226.208 <nodes> 8480:30221/TCP 27m app=pypdp
onap-portal portalapps 10.43.225.107 <nodes> 8006:30213/TCP,8010:30214/TCP,8989:30215/TCP 27m app=portalapps
onap-portal portaldb None <none> 3306/TCP 27m app=portaldb
onap-portal vnc-portal 10.43.216.210 <nodes> 6080:30211/TCP,5900:30212/TCP 27m app=vnc-portal
onap-robot robot 10.43.52.131 <nodes> 88:30209/TCP 34m app=robot
onap-sdc sdc-be 10.43.123.5 <nodes> 8443:30204/TCP,8080:30205/TCP 27m app=sdc-be
onap-sdc sdc-cs None <none> 9042/TCP,9160/TCP 27m app=sdc-cs
onap-sdc sdc-es None <none> 9200/TCP,9300/TCP 27m app=sdc-es
onap-sdc sdc-fe 10.43.233.17 <nodes> 9443:30207/TCP,8181:30206/TCP 27m app=sdc-fe
onap-sdc sdc-kb None <none> 5601/TCP 27m app=sdc-kb
onap-sdnc dbhost None <none> 3306/TCP 27m app=sdnc-dbhost
onap-sdnc sdnc-dgbuilder 10.43.253.47 <nodes> 3000:30203/TCP 27m app=sdnc-dgbuilder
onap-sdnc sdnc-portal 10.43.248.245 <nodes> 8843:30201/TCP 27m app=sdnc-portal
onap-sdnc sdnctldb01 None <none> 3306/TCP 27m app=sdnc-dbhost
onap-sdnc sdnctldb02 None <none> 3306/TCP 27m app=sdnc-dbhost
onap-sdnc sdnhost 10.43.51.170 <nodes> 8282:30202/TCP 27m app=sdnc
onap-vid vid-mariadb None <none> 3306/TCP 31m app=vid-mariadb
onap-vid vid-server 10.43.54.34 <nodes> 8080:30200/TCP 31m app=vid-server
sdnc-portal is still downloading node packages
root@obriensystemskub0:~/oom/kubernetes/oneclick# kubectl --namespace onap-sdnc logs -f sdnc-portal-3375812606-01s1d
...
#2
root@obriensystemskub0:~/oom/kubernetes/oneclick# kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
kube-system heapster-859001963-kz210 1/1 Running 5 7h 10.42.82.15 obriensystemskub0
kube-system kube-dns-1759312207-jd5tf 3/3 Running 8 7h 10.42.46.137 obriensystemskub0
kube-system kubernetes-dashboard-2463885659-xv986 1/1 Running 4 7h 10.42.83.159 obriensystemskub0
kube-system monitoring-grafana-1177217109-sm5nq 1/1 Running 4 7h 10.42.176.36 obriensystemskub0
kube-system monitoring-influxdb-1954867534-vvb84 1/1 Running 4 7h 10.42.30.22 obriensystemskub0
kube-system tiller-deploy-1933461550-gdxch 1/1 Running 4 7h 10.42.32.200 obriensystemskub0
onap-aai aai-service-301900780-70xv0 1/1 Running 0 10m 10.42.233.63 obriensystemskub0
onap-aai hbase-2985919495-bv216 1/1 Running 0 10m 10.42.239.209 obriensystemskub0
onap-aai model-loader-service-2352751609-7rp0x 1/1 Running 0 10m 10.42.192.198 obriensystemskub0
onap-appc appc-4266112350-92s6l 1/1 Running 0 10m 10.42.129.60 obriensystemskub0
onap-appc appc-dbhost-981835105-gcz3b 1/1 Running 0 10m 10.42.132.39 obriensystemskub0
onap-appc appc-dgbuilder-939982213-xb7t8 1/1 Running 0 10m 10.42.219.42 obriensystemskub0
onap-message-router dmaap-1381770224-jz059 1/1 Running 0 10m 10.42.56.184 obriensystemskub0
onap-message-router global-kafka-3488253347-8wpn7 1/1 Running 0 10m 10.42.82.226 obriensystemskub0
onap-message-router zookeeper-3757672320-xgl3m 1/1 Running 0 10m 10.42.127.222 obriensystemskub0
onap-mso mariadb-2610811658-s2jdw 1/1 Running 0 10m 10.42.86.175 obriensystemskub0
onap-mso mso-2217182437-pvkmh 1/1 Running 0 10m 10.42.92.123 obriensystemskub0
onap-policy brmsgw-554754608-rm4vt 1/1 Running 0 10m 10.42.120.11 obriensystemskub0
onap-policy drools-1184532483-g1b49 0/1 Running 0 10m 10.42.182.219 obriensystemskub0
onap-policy mariadb-546348828-6t88h 1/1 Running 0 10m 10.42.35.111 obriensystemskub0
onap-policy nexus-2933631225-hhn9s 1/1 Running 0 10m 10.42.227.139 obriensystemskub0
onap-policy pap-235069217-s8mhh 1/1 Running 0 10m 10.42.79.125 obriensystemskub0
onap-policy pdp-819476266-v47r8 1/1 Running 0 10m 10.42.225.9 obriensystemskub0
onap-policy pypdp-3646772508-b2wrr 1/1 Running 0 10m 10.42.41.167 obriensystemskub0
onap-portal portalapps-157357486-vbbhl 1/1 Running 0 10m 10.42.252.245 obriensystemskub0
onap-portal portaldb-351714684-4l76z 1/1 Running 0 10m 10.42.145.255 obriensystemskub0
onap-portal vnc-portal-1027553126-j1ggq 1/1 Running 0 10m 10.42.250.184 obriensystemskub0
onap-robot robot-44708506-lwfkk 1/1 Running 0 10m 10.42.251.228 obriensystemskub0
onap-sdc sdc-be-4018435632-kq4rw 1/1 Running 0 10m 10.42.146.30 obriensystemskub0
onap-sdc sdc-cs-2973656688-bnft4 1/1 Running 0 10m 10.42.255.194 obriensystemskub0
onap-sdc sdc-es-2628312921-pd4z3 1/1 Running 0 10m 10.42.235.113 obriensystemskub0
onap-sdc sdc-fe-4051669116-3c0tl 1/1 Running 0 10m 10.42.242.79 obriensystemskub0
onap-sdc sdc-kb-4011398457-f9mnm 1/1 Running 0 10m 10.42.31.16 obriensystemskub0
onap-sdnc sdnc-1672832555-r38fb 1/1 Running 0 10m 10.42.233.166 obriensystemskub0
onap-sdnc sdnc-dbhost-2119410126-7k6d0 1/1 Running 0 10m 10.42.126.192 obriensystemskub0
onap-sdnc sdnc-dgbuilder-730191098-4s7v4 1/1 Running 0 10m 10.42.27.68 obriensystemskub0
onap-sdnc sdnc-portal-3375812606-mp4nt 0/1 Running 0 10m 10.42.241.175 obriensystemskub0
onap-vid vid-mariadb-1357170716-9l2p3 1/1 Running 0 10m 10.42.189.186 obriensystemskub0
onap-vid vid-server-248645937-dkpjc 1/1 Running 0 10m 10.42.180.230 obriensystemskub0
List of Containers
Total pods is 49
below, any coloured container had issues getting to the running state (currently 32 of 33 come up after 45 min) - there are an additional 11 dcae pods - bringing the total to 44, with an additional 4 for appc/sdnc and the config-init pod
...
NAMESPACE
master:20170715
...
RESTARTS
(in 14h)
...
The mount "config-init-root" is in the following location
(user configurable VF parameter file below)
/dockerdata-nfs/onapdemo/mso/mso/mso-docker.json
...
Note: currently there are no DCAE containers running yet (we are missing 6 yaml files: 1 for the controller and 5 for the collector, staging and 3 cdap pods) - therefore DMaaP, VES collectors and APPC actions resulting from policy actions (closed loop) will not function yet.
...
List of Docker Images
...
root@obriensystemskub0:~/oom/kubernetes/dcae# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
nexus3.onap.org:10001/openecomp/dcae-collector-common-event latest 325d5b115fc2 7 days ago 538.8 MB
nexus3.onap.org:10001/openecomp/dcae-dmaapbc latest 46891e265574 7 days ago 328.1 MB
wurstmeister/kafka latest f26f76f2e50d 13 days ago 267.7 MB
nexus3.onap.org:10001/openecomp/model-loader 1.0-STAGING-latest 0c7a3eb0682b 2 weeks ago 758.9 MB
nexus3.onap.org:10001/openecomp/ajsc-aai 1.0-STAGING-latest f5c97da83393 2 weeks ago 1.352 GB
nexus3.onap.org:10001/openecomp/vid 1.0-STAGING-latest b80d37a7ac5e 2 weeks ago 725.9 MB
nexus3.onap.org:10001/openecomp/dgbuilder-sdnc-image 1.0-STAGING-latest cb3912913a43 2 weeks ago 850.9 MB
nexus3.onap.org:10001/openecomp/admportal-sdnc-image 1.0-STAGING-latest 825d60fd3abd 2 weeks ago 780.9 MB
nexus3.onap.org:10001/openecomp/sdnc-image 1.0-STAGING-latest c6eff373cc29 2 weeks ago 1.433 GB
nexus3.onap.org:10001/openecomp/testsuite 1.0-STAGING-latest 1b9a4aaa9649 2 weeks ago 1.097 GB
nexus3.onap.org:10001/openecomp/appc-image 1.0-STAGING-latest c5172ae773c5 2 weeks ago 2.235 GB
mariadb 10 b101c8399ee3 2 weeks ago 386.7 MB
nexus3.onap.org:10001/openecomp/mso 1.0-STAGING-latest b45b72cbc99d 2 weeks ago 1.429 GB
oomk8s/config-init 1.0.0 f19045125a44 5 weeks ago 322 MB
ubuntu 16.04 d355ed3537e9 5 weeks ago 119.2 MB
attos/dmaap latest b0ae220fcf1f 5 weeks ago 747.4 MB
mysql/mysql-server 5.6 05712b2a4b84 7 weeks ago 214.3 MB
oomk8s/ubuntu-init 1.0.0 14bb4db11858 8 weeks ago 207 MB
oomk8s/readiness-check 1.0.0 d3923ba1f99c 8 weeks ago 578.6 MB
oomk8s/mariadb-client-init 1.0.0 a5fa953bd4e0 9 weeks ago 251.1 MB
nexus3.onap.org:10001/openecomp/policy/policy-pe 1.0-STAGING-latest 2ffa01f2a8a2 4 months ago 1.421 GB
nexus3.onap.org:10001/openecomp/policy/policy-drools 1.0-STAGING-latest 389f86326698 4 months ago 1.347 GB
nexus3.onap.org:10001/openecomp/policy/policy-db 1.0-STAGING-latest 4f0ed6e92fed 4 months ago 1.142 GB
nexus3.onap.org:10001/openecomp/policy/policy-nexus 1.0-STAGING-latest 82c38af9208a 4 months ago 981.9 MB
nexus3.onap.org:10001/openecomp/sdc-cassandra 1.0-STAGING-latest 8f7c4a92530a 4 months ago 902 MB
nexus3.onap.org:10001/openecomp/sdc-kibana 1.0-STAGING-latest 287d79893e52 4 months ago 463.7 MB
nexus3.onap.org:10001/openecomp/sdc-elasticsearch 1.0-STAGING-latest 6ddf30823600 4 months ago 517.9 MB
nexus3.onap.org:10001/openecomp/sdc-frontend 1.0-STAGING-latest 1e63e5d9dff7 4 months ago 592.8 MB
nexus3.onap.org:10001/openecomp/sdc-backend 1.0-STAGING-latest 163918b87ae7 4 months ago 941.1 MB
nexus3.onap.org:10001/openecomp/portalapps 1.0-STAGING-latest ee0e08b1704b 4 months ago 1.04 GB
nexus3.onap.org:10001/openecomp/portaldb 1.0-STAGING-latest f2a8dc705ba2 4 months ago 394.5 MB
aaidocker/aai-hbase-1.2.3 latest aba535a6f8b5 7 months ago 1.562 GB
wurstmeister/zookeeper latest 351aa00d2fe9 8 months ago 478.3 MB
nexus3.onap.org:10001/mariadb 10.1.11 d1553bc7007f 17 months ago 346.5 MB
Run VID if required to verify that config-init has mounted properly (and to verify resolv.conf)
Cloning details
Install the latest version of the OOM (ONAP Operations Manager) project repo - specifically the ONAP on Kubernetes work just uploaded June 2017
https://gerrit.onap.org/r/gitweb?p=oom.git
...
git clone ssh://yourgerrituserid@gerrit.onap.org:29418/oom
cd oom/kubernetes/oneclick
Versions
oom : master (1.1.0-SNAPSHOT)
onap deployments: 1.0.0
Rancher environment for Kubernetes
Set up a separate onap kubernetes environment and disable the existing default environment.
Adding hosts to the Kubernetes environment will start the k8s containers
Rancher kubectl config
To be able to run the kubectl scripts - install kubectl
Nexus3 security settings
Fix nexus3 security for each namespace
In createAll.bash, add the following two lines just before namespace creation - to create a secret and attach it to the namespace (thanks to Jason Hunt of IBM last Friday for helping us attach it, when we were all getting our pods to come up). A better fix for the future would be to pass these in as parameters from a prod/stage/dev ecosystem config.
...
create_namespace() {
kubectl create namespace $1-$2
+ kubectl --namespace $1-$2 create secret docker-registry regsecret --docker-server=nexus3.onap.org:10001 --docker-username=docker --docker-password=docker --docker-email=email@email.com
+ kubectl --namespace $1-$2 patch serviceaccount default -p '{"imagePullSecrets": [{"name": "regsecret"}]}'
}
Fix MSO mso-docker.json
Before running pod-config-init.yaml - make sure your OpenStack config is set up correctly - so you can deploy the vFirewall VMs, for example
vi oom/kubernetes/config/docker/init/src/config/mso/mso/mso-docker.json
...
"mso-po-adapter-config": {
"checkrequiredparameters": "true",
"cloud_sites": [{
"aic_version": "2.5",
"id": "Ottawa",
"identity_service_id": "KVE5076_OPENSTACK",
"lcp_clli": "RegionOne",
"region_id": "RegionOne"
}],
"identity_services": [{
"admin_tenant": "services",
"dcp_clli": "KVE5076_OPENSTACK",
"identity_authentication_type": "USERNAME_PASSWORD",
"identity_server_type": "KEYSTONE",
"identity_url": "http://OPENSTACK_KEYSTONE_IP_HERE:5000/v2.0",
"member_role": "admin",
"mso_id": "dev",
"mso_pass": "dcdc0d9e4d69a667c67725a9e466e6c3",
"tenant_metadata": "true"
}],
...
"mso-po-adapter-config": {
"checkrequiredparameters": "true",
"cloud_sites": [{
"aic_version": "2.5",
"id": "Dallas",
"identity_service_id": "RAX_KEYSTONE",
"lcp_clli": "DFW", # or IAD
"region_id": "DFW"
}],
"identity_services": [{
"admin_tenant": "service",
"dcp_clli": "RAX_KEYSTONE",
"identity_authentication_type": "RACKSPACE_APIKEY",
"identity_server_type": "KEYSTONE",
"identity_url": "https://identity.api.rackspacecloud.com/v2.0",
"member_role": "admin",
"mso_id": "9998888",
"mso_pass": "YOUR_API_KEY",
"tenant_metadata": "true"
}],
The official documentation for installation of ONAP with OOM / Kubernetes is located in Read the Docs:
- OOM User Guide — onap master documentation
- OOM Quick Start Guide — onap master documentation
- OOM Cloud Setup Guide — onap master documentation
delete/recreate the config pod
root@obriensystemskub0:~/oom/kubernetes/config# kubectl --namespace default delete -f pod-config-init.yaml
pod "config-init" deleted
root@obriensystemskub0:~/oom/kubernetes/config# kubectl create -f pod-config-init.yaml
pod "config-init" created
or copy over your changes directly to the mount
root@obriensystemskub0:~/oom/kubernetes/config# cp docker/init/src/config/mso/mso/mso-docker.json /dockerdata-nfs/onapdemo/mso/mso/mso-docker.json
Use only "onap" namespace
Note: use only the hardcoded "onap" namespace prefix - as URLs in the config pod are set as follows "workflowSdncadapterCallback": "http://mso.onap-mso:8080/mso/SDNCAdapterCallbackService",
Monitor Container Deployment
first verify your kubernetes system is up
Then wait 25-45 min for all pods to attain 1/1 state
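Rather than eyeballing the READY column, the wait can be scripted. A sketch that counts pods not yet fully ready from `kubectl get pods --all-namespaces` output (the demo feeds it two sample lines taken from the deployment listing on this page; on a live cluster pipe the real kubectl output in, e.g. in a sleep loop):

```shell
# Sketch: count pods whose READY column (x/y, 3rd field) is not yet full.
# Live usage: kubectl get pods --all-namespaces | not_ready
not_ready() {
  awk 'NR > 1 { split($3, r, "/"); if (r[1] != r[2]) n++ } END { print n + 0 }'
}

# demo on sample lines from the deployment output earlier on this page
COUNT=$(not_ready <<'EOF'
NAMESPACE NAME READY STATUS RESTARTS AGE
onap-sdnc sdnc-1672832555-r38fb 1/1 Running 0 10m
onap-sdnc sdnc-portal-3375812606-mp4nt 0/1 Running 0 10m
EOF
)
echo "$COUNT pods not ready"
```

When the count reaches 0 the deployment has settled; expect that to take the 25-45 min noted above.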
Kubernetes specific config
https://kubernetes.io/docs/user-guide/kubectl-cheatsheet/
Nexus Docker repo Credentials
Checking out use of a kubectl secret in the yaml files via - https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
Container Endpoint access
Check the services view in the Kubernetes API under robot
robot.onap-robot:88 TCP
robot.onap-robot:30209 TCP
...
root@obriensystemskub0:~/oom/kubernetes/oneclick# kubectl get services --all-namespaces -o wide
onap-vid vid-mariadb None <none> 3306/TCP 1h app=vid-mariadb
onap-vid vid-server 10.43.14.244 <nodes> 8080:30200/TCP 1h app=vid-server
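The external NodePort can be pulled out of the PORT(S) column in a script instead of reading it by eye. A sketch using a sample line from the services listing above (on a live cluster pipe in `kubectl get services --all-namespaces | grep vid-server` instead):

```shell
# Sketch: extract the external NodePort from a kubectl services line.
# PORT(S) is the 5th column, e.g. "8080:30200/TCP" -> 30200.
node_port() {
  awk '{ split($5, p, "[:/]"); print p[2] }'
}
PORT=$(echo 'onap-vid vid-server 10.43.14.244 <nodes> 8080:30200/TCP 1h app=vid-server' | node_port)
echo "$PORT"
```

Note that for services exposing several ports (mso, sdc-be, etc.) this returns only the first mapping; extend the split if you need the others.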
Container Logs
root@obriensystemskub0:~/oom/kubernetes/oneclick# kubectl --namespace onap-vid logs -f vid-server-248645937-8tt6p
16-Jul-2017 02:46:48.707 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in 22520 ms
root@obriensystemskub0:~/oom/kubernetes/oneclick# kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
onap-robot robot-44708506-dgv8j 1/1 Running 0 36m 10.42.240.80 obriensystemskub0
root@obriensystemskub0:~/oom/kubernetes/oneclick# kubectl --namespace onap-robot logs -f robot-44708506-dgv8j
2017-07-16 01:55:54: (log.c.164) server started
SSH into ONAP containers
Normally I would do this via https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/
...
kubectl exec -it robot -- /bin/bash
The pod id should be sufficient
...
root@obriensystemsucont0:~/onap/oom/kubernetes/oneclick# kubectl describe node obriensystemsucont0 | grep robot
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
onap-robot robot-964706867-95hjd 0 (0%) 0 (0%) 0 (0%) 0 (0%)
root@obriensystemsucont0:~/onap/oom/kubernetes/oneclick# kubectl exec -it robot-964706867-95hjd /bin/bash
Error from server (NotFound): pods "robot-964706867-95hjd" not found
root@obriensystemsucont0:~/onap/oom/kubernetes/oneclick# kubectl exec -it robot-964706867 /bin/bash
Error from server (NotFound): pods "robot-964706867" not found
root@obriensystemsucont0:~/onap/oom/kubernetes/oneclick# kubectl exec -it robot /bin/bash
Error from server (NotFound): pods "robot" not found
root@obriensystemsucont0:~/onap/oom/kubernetes/oneclick# kubectl exec -it onap-robot /bin/bash
https://jira.onap.org/browse/OOM-47
in queue....
Push Files to Pods
Trying to get an authorization file into the robot pod
root@obriensystemskub0:~/oom/kubernetes/oneclick# kubectl cp authorization onap-robot/robot-44708506-nhm0n:/home/ubuntu
...
Running ONAP Portal UI Operations
see Installing and Running the ONAP Demos
Get the mapped external port by checking the service in kubernetes - here 30200 for VID on a particular node in our cluster.
or run a kube
fix /etc/hosts as usual
...
192.168.163.132 portal.api.simpledemo.openecomp.org
192.168.163.132 sdc.api.simpledemo.openecomp.org
192.168.163.132 policy.api.simpledemo.openecomp.org
192.168.163.132 vid.api.simpledemo.openecomp.org
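These /etc/hosts entries can be added idempotently. The sketch below works on a temporary copy (CLUSTER_IP is the node IP used above; point HOSTS at /etc/hosts on a real workstation):

```shell
# Sketch: append the simpledemo hostnames to /etc/hosts only if absent.
CLUSTER_IP=192.168.163.132
HOSTS=$(mktemp)
printf '127.0.0.1 localhost\n' > "$HOSTS"
for h in portal.api.simpledemo.openecomp.org \
         sdc.api.simpledemo.openecomp.org \
         policy.api.simpledemo.openecomp.org \
         vid.api.simpledemo.openecomp.org; do
  grep -q "$h" "$HOSTS" || echo "$CLUSTER_IP $h" >> "$HOSTS"
done
RESULT=$(cat "$HOSTS")
echo "$RESULT"
rm -f "$HOSTS"
```

Running it twice leaves the file unchanged, so it is safe to re-run after pointing CLUSTER_IP at a different node.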
In order to map internal 8989 ports to external ones like 30215 - we will need to reconfigure the onap config links as below.
Kubernetes Installation Options
Rancher on Ubuntu 16.04
Install Rancher
http://rancher.com/docs/rancher/v1.6/en/quick-start-guide/
http://rancher.com/docs/rancher/v1.6/en/installing-rancher/installing-server/#single-container
Install a docker version that Rancher and Kubernetes support which is currently 1.12.6
http://rancher.com/docs/rancher/v1.5/en/hosts/#supported-docker-versions
...
curl https://releases.rancher.com/install-docker/1.12.sh | sh
docker run -d --restart=unless-stopped -p 8880:8080 rancher/server:stable
...
Wait for the docker container to finish DB startup
http://rancher.com/docs/rancher/v1.6/en/hosts/
Registering Hosts in Rancher
If you have issues registering a combined single VM (controller + host) - use your real IP, not localhost
In settings | Host Configuration | set your IP
...
See your host registered
Troubleshooting
Rancher fails to restart on server reboot
Having issues after a reboot of a colocated server/agent
Docker Nexus Config
Out of the box we can't pull images - currently working on a config step around https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
...
imagePullSecrets:
- name: regsecret
...
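For reference, here is a minimal, hypothetical pod spec showing where the imagePullSecrets stanza sits relative to the containers list. The pod name is made up, the image is one from the list above, and the file is only written to a temp path here, not applied:

```shell
# Sketch: generate a minimal pod spec that pulls from nexus3 using the
# regsecret created earlier. Written to a temp file; not applied.
POD_YAML=$(mktemp)
cat > "$POD_YAML" <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: vid-test
spec:
  containers:
  - name: vid
    image: nexus3.onap.org:10001/openecomp/vid:1.0-STAGING-latest
  imagePullSecrets:
  - name: regsecret
EOF
# on a live cluster: kubectl --namespace onap-vid create -f "$POD_YAML"
RESULT=$(grep -c 'regsecret' "$POD_YAML")
echo "$RESULT"
rm -f "$POD_YAML"
```

With the serviceaccount patch from createAll.bash the default serviceaccount injects regsecret automatically, so the explicit stanza is only needed for pods outside those namespaces.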
OOM Repo changes
20170629: fix on 20170626 on a hardcoded proxy - (for those who run outside the firewall) - https://gerrit.onap.org/r/gitweb?p=oom.git;a=commitdiff;h=131c2a42541fb807f395fe1f39a8482a53f92c60
DNS resolution
add "service.ns.svc.cluster.local" to fix
Search Line limits were exceeded, some dns names have been omitted, the applied search line is: default.svc.cluster.local svc.cluster.local cluster.local kubelet.kubernetes.rancher.internal kubernetes.rancher.internal rancher.internal
https://github.com/rancher/rancher/issues/9303
root@obriensystemskub0:~/oom/kubernetes/oneclick# cat /etc/resolv.conf
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 192.168.241.2
search localdomain service.ns.svc.cluster.local
Design Issues
DI 10: 20170724: DCAE Integration
todo:
docker images need to be pushed to nexus
from: registry.stratlab.local:30002/onap/dcae/cdap:1.0.7
to: nexus3.onap.org:10001/openecomp
2 persistent volumes are also created (controller-pvs, collector-pvs)
root@obriensystemskub0:~/oom/kubernetes/oneclick# kubectl get pods --all-namespaces -o wide | grep dcae
onap-dcae cdap0-801098998-1j83b 0/1 Init:ImagePullBackOff 0 7m 10.42.170.56 obriensystemskub0
onap-dcae cdap1-1109312935-sv8g0 0/1 Init:ImagePullBackOff 0 7m 10.42.184.43 obriensystemskub0
onap-dcae cdap2-2495595959-fxnmg 0/1 Init:ImagePullBackOff 0 7m 10.42.69.133 obriensystemskub0
onap-dcae dcae-collector-common-event-2687859322-jcv7n 1/1 Running 0 7m 10.42.219.171 obriensystemskub0
onap-dcae dcae-collector-dmaapbc-2087600858-sb98v 1/1 Running 0 7m 10.42.225.93 obriensystemskub0
onap-dcae dcae-controller-1960065296-95xxx 0/1 ContainerCreating 0 7m <none> obriensystemskub0
onap-dcae dcae-pgaas-3690783998-v4w60 0/1 ImagePullBackOff 0 7m 10.42.28.81 obriensystemskub0
onap-dcae dcae-ves-collector-1184035059-t0t7s 0/1 ImagePullBackOff 0 7m 10.42.223.26 obriensystemskub0
onap-dcae dmaap-3637563410-2s7b2 0/1 CrashLoopBackOff 6 7m 10.42.7.93 obriensystemskub0
onap-dcae kafka-2923495538-tb218 0/1 CrashLoopBackOff 6 7m 10.42.49.109 obriensystemskub0
onap-dcae zookeeper-2122426841-2n6h5 1/1 Running 0 7m 10.42.192.205 obriensystemskub0
root@obriensystemskub0:~/oom/kubernetes/oneclick# kubectl get services --all-namespaces -o wide | grep dcae
onap-dcae dcae-collector-common-event 10.43.97.177 <nodes> 8080:30236/TCP,8443:30237/TCP,9999:30238/TCP 8m app=dcae-collector-common-event
onap-dcae dcae-collector-dmaapbc 10.43.100.153 <nodes> 8080:30239/TCP,8443:30240/TCP 8m app=dcae-collector-dmaapbc
onap-dcae dcae-controller 10.43.117.220 <nodes> 8000:30234/TCP,9998:30235/TCP 7m app=dcae-controller
onap-dcae dcae-ves-collector 10.43.215.194 <nodes> 8080:30241/TCP,9999:30242/TCP 8m app=dcae-ves-collector
onap-dcae zldciad4vipstg00 10.43.110.169 <nodes> 5432:30245/TCP 8m app=dcae-pgaas
Links
...