...

(on each host) Add an entry to your /etc/hosts that points your IP to your hostname (add your hostname to the end of the line). Add entries for all other hosts in your cluster as well.

Code Block
sudo vi /etc/hosts
<your-ip> <your-hostname>
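
For example, on a three node cluster the finished file might look like this (the hostnames and IPs below are placeholders - substitute your own):

Code Block
10.0.0.11 kube-master
10.0.0.12 kube-node1
10.0.0.13 kube-node2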


Try to use root - if you use the ubuntu user then you will need to enable Docker separately for that user.

Code Block
sudo su -
apt-get update 


(this addresses a possible "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.4.0-59-generic" error)
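
One way to clear that aufs warning on Ubuntu 16.04 is to install the extra kernel module packages - a sketch, assuming a stock 4.4 kernel (skip it if Docker already starts cleanly):

Code Block
# aufs kernel modules for the running kernel (Ubuntu 16.04)
apt-get install -y linux-image-extra-$(uname -r) linux-image-extra-virtual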


(on each host - server and client(s), which may be the same machine) Install only the 1.12.x version of Docker (currently 1.12.6) - the only version that works with Kubernetes in Rancher 1.6.

Code Block
curl https://releases.rancher.com/install-docker/1.12.sh | sh
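
To confirm the expected 1.12.x engine was installed (the build hash will vary):

Code Block
docker --version
# expect something like: Docker version 1.12.6, build 78d1802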


(on the master only) Install Rancher (use port 8880 instead of 8080). Note that there may be issues with the DNS pod in Rancher after a reboot or when running clustered hosts - a clean system will be OK - see

Jira: OOM-236 (ONAP JIRA)

Code Block
docker run -d --restart=unless-stopped -p 8880:8080 rancher/server
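
To confirm the Rancher server came up and to watch it finish initializing - a quick check (the container id will differ):

Code Block
docker ps | grep rancher/server
docker logs -f $(docker ps -q --filter ancestor=rancher/server)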


In the Rancher UI don't use http://127.0.0.1:8880 - use the real IP address - so the client configs are populated with the correct callbacks.

You must deactivate the default CATTLE environment by adding a KUBERNETES environment and then deactivating the older default CATTLE one - your added hosts will attach to the default environment.

    • Default → Manage Environments
    • Select "Add Environment" button
    • Give the Environment a name and description, then select Kubernetes as the Environment Template
    • Hit the "Create" button. This will create the environment and bring you back to the Manage Environments view
    • At the far right column of the Default Environment row, left-click the menu ( looks like 3 stacked dots ), and select Deactivate. This will make your new Kubernetes environment the new default.


Register your host(s) - run the following on each host (including the master if you are collocating the master/host on a single machine/VM).

For each host, in Rancher go to Infrastructure > Hosts and select "Add Host".

The first time you add a host you will be presented with a screen containing the routable IP - hit Save only on a routable IP.

Enter the IP of the host only if you launched Rancher with 127.0.0.1/localhost - otherwise keep it empty and it will autopopulate the registration with the real IP.



Copy the command to register the host with Rancher.

Execute the command on the host, for example:

Code Block
% docker run --rm --privileged -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/rancher:/var/lib/rancher rancher/agent:v1.2.2 http://192.168.163.131:8880/v1/scripts/BBD465D9B24E94F5FBFD:1483142400000:IDaNFrug38QsjZcu6rXh8TwqA4
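
After the registration command returns, the Rancher agent containers should appear on the host within a minute or so - a quick check:

Code Block
docker ps | grep rancher/agent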


Wait for the Kubernetes menu to populate with the CLI.


Install Kubectl

The following will install kubectl on a linux host. Once configured, this client tool will provide management of a Kubernetes cluster.

Code Block
% curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
% chmod +x ./kubectl
% mv ./kubectl /usr/local/bin/kubectl
% mkdir ~/.kube
% vi ~/.kube/config

Paste kubectl config from Rancher (you will see the CLI menu in Rancher / Kubernetes after the k8s pods are up on your host)

Click on "Generate Config" to get your content to add into .kube/config


Verify that Kubernetes config is good

Code Block
root@obrien-kube11-1:~# kubectl cluster-info
Kubernetes master is running at ....
Heapster is running at....
KubeDNS is running at ....
kubernetes-dashboard is running at ...
monitoring-grafana is running at ....
monitoring-influxdb is running at ...
tiller-deploy is running at....


Install Helm

The following will install Helm (use 2.3.0, not the current 2.6.0) on a Linux host. Helm is used by OOM for package and configuration management.

Prerequisite: Install Kubectl

Code Block
# wget http://storage.googleapis.com/kubernetes-helm/helm-v2.3.0-linux-amd64.tar.gz
# tar -zxvf helm-v2.3.0-linux-amd64.tar.gz
# mv linux-amd64/helm /usr/local/bin/helm
# Test Helm
# helm help
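
Since tiller-deploy is already running in the Rancher-created cluster (see the cluster-info output above), only the local client side should need initializing - a hedged sketch:

Code Block
# set up the local helm home without installing another tiller
helm init --client-only
# client and server should both report a 2.x version
helm version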


Undercloud done - move to ONAP


Clone oom (scp your onap_rsa private key first - or clone anonymously - ideally you get a full Gerrit account and join the community).

See the ssh/http/https access links below.

https://gerrit.onap.org/r/#/admin/projects/oom

Code Block
# anonymous http
git clone http://gerrit.onap.org/r/oom
# or using your key
git clone -b release-1.0.0 ssh://michaelobrien@gerrit.onap.org:29418/oom

or use https (substitute your user/pass)

Code Block
git clone -b release-1.0.0 https://michaelnnnn:uHaBPMvR47nnnnnnnnRR3Keer6vatjKpf5A@gerrit.onap.org/r/oom


Wait until all the hosts show green in Rancher, then run the createConfig/createAll scripts that wrap all the kubectl commands.

Jira: OOM-115 (ONAP JIRA)

Source the setenv.bash script in oom/kubernetes/oneclick/ - it will set your Helm list of components to start/delete.

Code Block
source setenv.bash

Run the one-time config pod, which creates the shared /dockerdata-nfs/ mount via the config-init pod. This mount is required for all other ONAP pods to function.

Note: the pod will stop after NFS creation - this is normal.

Code Block
% cd oom/kubernetes/config
# edit or copy the config for MSO data
vi onap-parameters.yaml
# or
cp onap-parameters-sample.yaml onap-parameters.yaml 
# run the config pod creation
% ./createConfig.sh -n onap 


**** Creating configuration for ONAP instance: onap
namespace "onap" created
pod "config-init" created
**** Done ****


Wait until the config-init pod is gone before trying to bring up a component or all of ONAP - around 60 sec (up to 10 min) - see https://wiki.onap.org/display/DW/ONAP+on+Kubernetes#ONAPonKubernetes-Waitingforconfig-initcontainertofinish-20sec

Code Block
root@ip-172-31-93-122:~/oom_20170908/oom/kubernetes/config# kubectl get pods --all-namespaces -a
onap          config                                 0/1       Completed   0          1m

Note: the config container only shows up (with its Completed status) when the -a flag is used; without -a it will not be listed.
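
If you prefer to script the wait rather than re-running the command by hand, a minimal sketch (the 10 second interval is arbitrary):

Code Block
# poll until the one-time config pod reports Completed
while ! kubectl get pods --all-namespaces -a | grep config | grep -q Completed; do
  sleep 10
done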



Cluster Configuration

A shared mount is better than copying. Try to run your Rancher server and client host on a single VM (a 64G one) - if not, you can run Rancher and several clients across several machines/VMs. The /dockerdata-nfs share must be replicated across the cluster, either using a mount or by copying the directory to the other servers from the one where the "config" pod actually runs. To verify this, check your / root fs on each node.

Code Block
# in this case a 4 node cluster of 16g vms - the config node is on the 4th
root@kube16-3:~# ls /
bin  boot  dev  dockerdata-nfs  etc  home  initrd.img  lib  lib64  lost+found  media  mnt  opt  proc  root  run  sbin  snap  srv  sys  tmp  usr  var  vmlinuz
root@kube16-3:~# scp -r /dockerdata-nfs/ root@kube160.onap.info:~
root@kube16-3:~# scp -r /dockerdata-nfs/ root@kube161.onap.info:~
root@kube16-3:~# scp -r /dockerdata-nfs/ root@kube162.onap.info:~     
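
If you prefer the shared mount over scp copies, a minimal NFS sketch - assuming Ubuntu 16.04, that the node holding /dockerdata-nfs (kube16-3 above) acts as the NFS server, and that <config-node-ip> is its address:

Code Block
# on the node that holds /dockerdata-nfs (NFS server)
apt-get install -y nfs-kernel-server
echo "/dockerdata-nfs *(rw,no_root_squash,no_subtree_check)" >> /etc/exports
exportfs -a
service nfs-kernel-server restart

# on every other node (NFS clients)
apt-get install -y nfs-common
mkdir -p /dockerdata-nfs
mount <config-node-ip>:/dockerdata-nfs /dockerdata-nfs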


Running ONAP

Don't run all the pods unless you have at least 52G of RAM allocated - if you have a laptop/VM with 16G then you can only run enough pods to fit in around 11G.

Code Block
% cd ../oneclick
% vi createAll.bash 
% ./createAll.bash -n onap -a robot|appc|aai 


(to bring up a single service at a time)

Only if you have >52G run the following (all namespaces)

Code Block
% ./createAll.bash -n onap


ONAP is OK if everything is 1/1 in the following

Code Block
% kubectl get pods --all-namespaces
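
To spot anything that is not yet 1/1 at a glance (the header row will also show up in the output):

Code Block
% kubectl get pods --all-namespaces | grep -v 1/1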


Run the ONAP portal via the instructions at Running ONAP using the vnc-portal.

Wait until the containers are all up


Run Initial healthcheck directly on the host

Code Block
cd /dockerdata-nfs/onap/robot
./ete-docker.sh health


check AAI endpoints

Code Block
root@ip-172-31-93-160:/dockerdata-nfs/onap/robot# kubectl -n onap-aai exec -it aai-service-3321436576-2snd6 bash
root@aai-service-3321436576-2snd6:/# ps -ef
UID        PID  PPID  C STIME TTY          TIME CMD
root         1     0  0 15:50 ?        00:00:00 /usr/local/sbin/haproxy-systemd-
root         7     1  0 15:50 ?        00:00:00 /usr/local/sbin/haproxy-master
root@ip-172-31-93-160:/dockerdata-nfs/onap/robot# curl https://127.0.0.1:30233/aai/v11/service-design-and-creation/models
curl: (60) server certificate verification failed. CAfile: /etc/ssl/certs/ca-certificates.crt CRLfile: none
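
The certificate error above is expected with the self-signed AAI certificate. A retry that simply skips verification - note the endpoint may still require AAI basic-auth credentials and X-FromAppId/X-TransactionId headers depending on your configuration:

Code Block
curl -sk https://127.0.0.1:30233/aai/v11/service-design-and-creation/models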


List of Containers

The total pod count is 57.

Docker container list - source of truth: https://git.onap.org/integration/tree/packaging/docker/docker-images.csv

get health via

Code Block
~# kubectl get pods --all-namespaces -a | grep 1/1
kube-system           heapster-4285517626-7wdct               1/1       Running     0          2d
kube-system           kubernetes-dashboard-716739405-xxn5k    1/1       Running     0          2d
kube-system           monitoring-grafana-3552275057-hvfw8     1/1       Running     0          2d
kube-system           monitoring-influxdb-4110454889-7s5fj    1/1       Running     0          2d
kube-system           tiller-deploy-737598192-jpggg           1/1       Running     0          2d
onap-aai              aai-dmaap-522748218-5rw0v               1/1       Running     0          1d
onap-aai              aai-kafka-2485280328-6264m              1/1       Running     0          1d
onap-aai              aai-resources-3302599602-fn4xm          1/1       Running     0          1d
onap-aai              aai-service-3321436576-2snd6            1/1       Running     0          1d
onap-aai              aai-traversal-2747464563-3c8m7          1/1       Running     0          1d
onap-aai              aai-zookeeper-1010977228-l2h3h          1/1       Running     0          1d
onap-aai              data-router-1397019010-t60wm            1/1       Running     0          1d
onap-aai              elasticsearch-2660384851-k4txd          1/1       Running     0          1d
onap-aai              gremlin-1786175088-m39jb                1/1       Running     0          1d
onap-aai              hbase-3880914143-vp8zk                  1/1       Running     0          1d
onap-aai              model-loader-service-226363973-wx6s3    1/1       Running     0          1d
onap-aai              search-data-service-1212351515-q4k68    1/1       Running     0          1d
onap-aai              sparky-be-2088640323-h2pbx              1/1       Running     0          1d
onap-appc             appc-1972362106-4zqh8                   1/1       Running     0          1d
onap-appc             appc-dbhost-2280647936-s041d            1/1       Running     0          1d
onap-appc             appc-dgbuilder-2616852186-g9sng         1/1       Running     0          1d
onap-message-router   dmaap-3565545912-w5lp4                  1/1       Running     0          1d
onap-message-router   global-kafka-701218468-091rt            1/1       Running     0          1d
onap-message-router   zookeeper-555686225-vdp8w               1/1       Running     0          1d
onap-mso              mariadb-2814112212-zs7lk                1/1       Running     0          1d
onap-mso              mso-2505152907-xdhmb                    1/1       Running     0          1d
onap-policy           brmsgw-362208961-ks6jb                  1/1       Running     0          1d
onap-policy           drools-3066421234-rbpr9                 1/1       Running     0          1d
onap-policy           mariadb-2520934092-3jcw3                1/1       Running     0          1d
onap-policy           nexus-3248078429-4k29f                  1/1       Running     0          1d
onap-policy           pap-4199568361-p3h0p                    1/1       Running     0          1d
onap-policy           pdp-785329082-3c8m5                     1/1       Running     0          1d
onap-policy           pypdp-3381312488-q2z8t                  1/1       Running     0          1d
onap-portal           portalapps-2799319019-00qhb             1/1       Running     0          1d
onap-portal           portaldb-1564561994-50mv0               1/1       Running     0          1d
onap-portal           portalwidgets-1728801515-r825g          1/1       Running     0          1d
onap-portal           vnc-portal-700404418-r61hm              1/1       Running     0          1d
onap-robot            robot-349535534-lqsvp                   1/1       Running     0          1d
onap-sdc              sdc-be-1839962017-n3hx3                 1/1       Running     0          1d
onap-sdc              sdc-cs-2640808243-tc9ck                 1/1       Running     0          1d
onap-sdc              sdc-es-227943957-f6nfv                  1/1       Running     0          1d
onap-sdc              sdc-fe-3467675014-v8jxm                 1/1       Running     0          1d
onap-sdc              sdc-kb-1998598941-57nj1                 1/1       Running     0          1d
onap-sdnc             sdnc-250717546-xmrmw                    1/1       Running     0          1d
onap-sdnc             sdnc-dbhost-3807967487-tdr91            1/1       Running     0          1d
onap-sdnc             sdnc-dgbuilder-3446959187-dn07m         1/1       Running     0          1d
onap-sdnc             sdnc-portal-4253352894-hx9v8            1/1       Running     0          1d
onap-vid              vid-mariadb-2932072366-n5qw1            1/1       Running     0          1d
onap-vid              vid-server-377438368-kn6x4              1/1       Running     0          1d
# busted containers 0/1 filter (ignore config-init - it is a 1-time container)
kubectl get pods --all-namespaces -a | grep 0/1
onap                  config-init                             0/1       Completed   0          1d

Table columns: NAMESPACE | NAME (master:20170715) | Image | Log Volume External | Log Locations (docker internal) | Public / Debug Ports | Notes (empty cells omitted below)

default | config-init
    Notes: the mount "config-init-root" is in the following location (user-configurable VF parameter file below):
    /dockerdata-nfs/onapdemo/mso/mso/mso-docker.json

onap-aai | aai-dmaap-522748218-5rw0v
onap-aai | aai-kafka-2485280328-6264m
onap-aai | aai-resources-3302599602-fn4xm
    Log Locations (docker internal): /opt/aai/logroot/AAI-RES
onap-aai | aai-service-3321436576-2snd6
onap-aai | aai-traversal-2747464563-3c8m7
    Log Locations (docker internal): /opt/aai/logroot/AAI-GQ
onap-aai | aai-zookeeper-1010977228-l2h3h
onap-aai | data-router-1397019010-t60wm
onap-aai | elasticsearch-2660384851-k4txd
onap-aai | gremlin-1786175088-m39jb
onap-aai | hbase-3880914143-vp8z
onap-aai | model-loader-service-226363973-wx6s3
onap-aai | search-data-service-1212351515-q4k6
onap-aai | sparky-be-2088640323-h2pbx
onap-appc | appc-2044062043-bx6tc
onap-appc | appc-dbhost-2039492951-jslts
onap-appc | appc-dgbuilder-2934720673-mcp7c
onap-appc | sdntldb01 (internal)
onap-appc | sdnctldb02 (internal)
onap-cli |
onap-dcae | dcae-zookeeper
    Image: wurstmeister/zookeeper:latest
onap-dcae | dcae-kafka
    Image: dockerfiles_kafka:latest

Note: currently there are no DCAE containers running yet (we are missing 6 yaml files: 1 for the controller and 5 for the collector, staging and 3 cdap pods) - therefore DMaaP, VES collectors and APPC actions as the result of policy actions (closed loop) will not function yet.

In review: https://gerrit.onap.org/r/#/c/7287/

Jira: OOM-5 (ONAP JIRA)

Jira: OOM-62 (ONAP JIRA)

onap-dcae | dcae-dmaap
    Image: attos/dmaap:latest
onap-dcae | pgaas (PostgreSQL aaS)
    Image: obrienlabs/pgaas - https://hub.docker.com/r/oomk8s/pgaas/tags/
onap-dcae | dcae-collector-common-event
    Persistent volume: dcae-collector-pvs
onap-dcae | dcae-collector-dmaapbc
onap-dcae | dcae-controller (not required)
    Persistent volume: dcae-controller-pvs
onap-dcae | dcae-ves-collector
onap-dcae | cdap-0
onap-dcae | cdap-1
onap-dcae | cdap-2
onap-message-router | dmaap-3842712241-gtdkp
onap-message-router | global-kafka-89365896-5fnq9
onap-message-router | zookeeper-1406540368-jdscq
onap-msb |
    Notes: bring onap-msb up before the rest of onap - follow Jira: OOM-113 (ONAP JIRA)
onap-msb |
onap-msb |
onap-msb |
onap-mso | mariadb-2638235337-758zr
onap-mso | mso-3192832250-fq6pn
onap-multicloud |
onap-multicloud |
onap-policy | brmsgw-568914601-d5z71
onap-policy | drools-1450928085-099m2
onap-policy | mariadb-2932363958-0l05g
onap-policy | nexus-871440171-tqq4z
onap-policy | pap-2218784661-xlj0n
onap-policy | pdp-1677094700-75wpj
onap-policy | pypdp-3209460526-bwm6b
    Notes: 1.0.0 only
onap-portal | portalapps-1708810953-trz47
onap-portal | portaldb-3652211058-vsg8r
onap-portal | portalwidgets-1728801515-r825g
onap-portal | vnc-portal-948446550-76kj7
onap-robot | robot-964706867-czr05
onap-sdc | sdc-be-2426613560-jv8sk
    Log Volume External: /dockerdata-nfs/onap/sdc/logs/SDC/SDC-BE
    (${log.home}/${OPENECOMP-component-name}/${OPENECOMP-subcomponent-name}/transaction.log.%i)
    Log Locations (docker internal):
    ./var/lib/jetty/logs/SDC/SDC-BE/metrics.log
    ./var/lib/jetty/logs/SDC/SDC-BE/audit.log
    ./var/lib/jetty/logs/SDC/SDC-BE/debug_by_package.log
    ./var/lib/jetty/logs/SDC/SDC-BE/debug.log
    ./var/lib/jetty/logs/SDC/SDC-BE/transaction.log
    ./var/lib/jetty/logs/SDC/SDC-BE/error.log
    ./var/lib/jetty/logs/importNormativeAll.log
    ./var/lib/jetty/logs/ASDC/ASDC-FE/audit.log
    ./var/lib/jetty/logs/ASDC/ASDC-FE/debug.log
    ./var/lib/jetty/logs/ASDC/ASDC-FE/transaction.log
    ./var/lib/jetty/logs/ASDC/ASDC-FE/error.log
    ./var/lib/jetty/logs/2017_09_06.stderrout.log
onap-sdc | sdc-cs-2080334320-95dq8
onap-sdc | sdc-es-3272676451-skf7z
onap-sdc | sdc-fe-931927019-nt94t
    Log Locations (docker internal):
    ./var/lib/jetty/logs/SDC/SDC-BE/metrics.log
    ./var/lib/jetty/logs/SDC/SDC-BE/audit.log
    ./var/lib/jetty/logs/SDC/SDC-BE/debug_by_package.log
    ./var/lib/jetty/logs/SDC/SDC-BE/debug.log
    ./var/lib/jetty/logs/SDC/SDC-BE/transaction.log
    ./var/lib/jetty/logs/SDC/SDC-BE/error.log
    ./var/lib/jetty/logs/importNormativeAll.log
    ./var/lib/jetty/logs/2017_09_07.stderrout.log
    ./var/lib/jetty/logs/ASDC/ASDC-FE/audit.log
    ./var/lib/jetty/logs/ASDC/ASDC-FE/debug.log
    ./var/lib/jetty/logs/ASDC/ASDC-FE/transaction.log
    ./var/lib/jetty/logs/ASDC/ASDC-FE/error.log
    ./var/lib/jetty/logs/2017_09_06.stderrout.log
onap-sdc | sdc-kb-3337231379-8m8wx
onap-sdnc | sdnc-1788655913-vvxlj
    Log Locations (docker internal):
    ./opt/opendaylight/distribution-karaf-0.5.1-Boron-SR1/journal/000006.log
    ./opt/opendaylight/distribution-karaf-0.5.1-Boron-SR1/data/cache/1504712225751.log
    ./opt/opendaylight/distribution-karaf-0.5.1-Boron-SR1/data/cache/1504712002358.log
    ./opt/opendaylight/distribution-karaf-0.5.1-Boron-SR1/data/tmp/xql.log
    ./opt/opendaylight/distribution-karaf-0.5.1-Boron-SR1/data/log/karaf.log
    ./opt/opendaylight/distribution-karaf-0.5.1-Boron-SR1/taglist.log
    ./var/log/dpkg.log
    ./var/log/apt/history.log
    ./var/log/apt/term.log
    ./var/log/fontconfig.log
    ./var/log/alternatives.log
    ./var/log/bootstrap.log
onap-sdnc | sdnc-dbhost-240465348-kv8vf
onap-sdnc | sdnc-dgbuilder-4164493163-cp6rx
onap-sdnc | sdnctlbd01 (internal)
onap-sdnc | sdnctlb02 (internal)
onap-sdnc | sdnc-portal-2324831407-50811
    Log Locations (docker internal):
    ./opt/openecomp/sdnc/admportal/server/npm-debug.log
    ./var/log/dpkg.log
    ./var/log/apt/history.log
    ./var/log/apt/term.log
    ./var/log/fontconfig.log
    ./var/log/alternatives.log
    ./var/log/bootstrap.log
onap-vid | vid-mariadb-4268497828-81hm0
onap-vid | vid-server-2331936551-6gxsp




Fix MSO mso-docker.json (deprecated)

Before running pod-config-init.yaml, make sure your OpenStack config is set up correctly so that you can deploy the vFirewall VMs, for example.

...