
Warning: Draft Content

This wiki is under construction - this means that content here may not be fully specified or may be missing.

TODO: determine/fix containers not ready, get DCAE yamls working, fix health tracking issues for healing

Official Documentation: https://onap.readthedocs.io/en/latest/submodules/oom.git/docs/OOM%20User%20Guide/oom_user_guide.html?highlight=oom

Integration: https://wiki.opnfv.org/pages/viewpage.action?pageId=12389095 

The OOM (ONAP Operations Manager) project has pushed Kubernetes-based deployment code to the oom repository, based on ONAP 1.1.  This page details getting ONAP running (specifically the vFirewall demo) on Kubernetes for various virtual and native environments.  It assumes you have access to any type of bare metal or VM running a clean Ubuntu 16.04 image - on Rackspace, OpenStack, your laptop or an AWS spot EC2 instance.

Architectural details of the OOM project are described here - OOM User Guide

Kubernetes on ONAP Arch

Status


Undercloud Installation

Requirements

Metric | Min | Full System | Notes
vCPU | 4 | 64 recommended (16/32 OK) | The full ONAP system of 50+ containers is CPU- and network-bound on startup. If you pre-pull the docker images to remove the network slowdown, vCPU utilization will peak at 44 cores on a 64-core system and bring the system up in under 4 min. On a 16-core system you will see the normal 7 min startup time as the 44 cores are throttled down to 16.
RAM | 7g (a couple of components) | 55g (75 containers) | You need at least 51g of RAM (3g is for Rancher/Kubernetes itself; the VFs run on a separate OpenStack). Expect 51g to start and 55g after running the system for a day.
HD | 60g | 120g+ |



We need a Kubernetes installation - either a base installation or one with a thin API wrapper like Rancher.

There are several options - currently Rancher with Helm on Ubuntu 16.04 is the focus, as a thin wrapper on Kubernetes; alternative platforms are covered in the subpage - ONAP on Kubernetes (Alternatives)

OS | VIM | Description | Status | Nodes | Links
Ubuntu 16.04.2 (not Redhat) | Bare Metal, VMWare | Rancher | Recommended approach. Known issue: Kubernetes support only with Docker 1.12 (obsolete docker-machine) on OSX | 1-4 | http://rancher.com/docs/rancher/v1.6/en/quick-start-guide/

ONAP Installation

Quickstart Installation

1) install rancher, clone oom, run config-init pod, run one or all onap components

*****************

Note: uninstall docker if already installed - Kubernetes only supports 1.12.x - as of 20170809

% sudo apt-get remove docker-engine

*****************


Install Rancher

ONAP deployment in Kubernetes is modelled in the oom project as a 1:1 set of service:pod sets (1 pod per docker container).  The fastest way to get ONAP on Kubernetes up is via Rancher on any bare metal or VM that supports a clean Ubuntu 16.04 install and has more than 50G of RAM.


(on each host) Add an entry to your /etc/hosts that points your IP to your hostname (add your hostname to the end of the line). Add entries for all other hosts in your cluster.

sudo vi /etc/hosts
<your-ip> <your-hostname>

Clone oom (scp your onap_rsa private key first - or clone anonymously - ideally you get a full Gerrit account and join the community)

see ssh/http/http access links below

https://gerrit.onap.org/r/#/admin/projects/oom

anonymous http
git clone http://gerrit.onap.org/r/oom
or using your key
git clone ssh://michaelobrien@gerrit.onap.org:29418/oom

or use https (substitute your user/pass)

git clone https://michaelnnnn:uHaBPMvR47nnnnnnnnRR3Keer6vatjKpf5A@gerrit.onap.org/r/oom

(on each host (server and client(s), which may be the same machine)) Install only the 1.12.x version of Docker (currently 1.12.6) - the only version that works with Kubernetes in Rancher 1.6.

Install Docker

curl https://releases.rancher.com/install-docker/1.12.sh | sh
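A quick sanity check that the pinned Docker version is what ended up installed (the exact build string will vary per release):

docker --version
# expect something like: Docker version 1.12.6, build ...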

Pre-pull the docker images the first time you install ONAP. Currently the pre-pull takes 10-35 min depending on the throttling, what you have already pulled, and the load on nexus3.onap.org:10001. Pre-pulling the images allows the entire ONAP system to start in 3-8 min instead of up to 3 hours.

This is a WIP

https://jira.onap.org/secure/attachment/10501/prepull_docker.sh

OOM-328

curl https://jira.onap.org/secure/attachment/10501/prepull_docker.sh > prepull_docker.sh
chmod 777 prepull_docker.sh
nohup ./prepull_docker.sh > prepull.log & 


(on the master only) Install Rancher (use port 8880 instead of 8080) - note there may be issues with the DNS pod in Rancher after a reboot or when running clustered hosts - a clean system will be OK - see OOM-236

docker run -d --restart=unless-stopped -p 8880:8080 rancher/server

In the Rancher UI - don't use http://127.0.0.1:8880 - use the real IP address - so the client configs are populated correctly with callbacks

You must deactivate the default CATTLE environment - add a KUBERNETES environment and deactivate the older default CATTLE one - your added hosts will attach to whichever environment is the default

    • Default → Manage Environments
    • Select "Add Environment" button
    • Give the Environment a name and description, then select Kubernetes as the Environment Template
    • Hit the "Create" button. This will create the environment and bring you back to the Manage Environments view
    • At the far right column of the Default Environment row, left-click the menu ( looks like 3 stacked dots ), and select Deactivate. This will make your new Kubernetes environment the new default.


Register your host(s) - run the following on each host (including the master if you are collocating the master/host on a single machine/VM).

For each host, in Rancher go to Infrastructure > Hosts and select "Add Host".

The first time you add a host you will be presented with a screen containing the routable IP - hit save only on a routable IP.

Enter the IP of the host only if you launched Rancher with 127.0.0.1/localhost - otherwise keep it empty and it will autopopulate the registration with the real IP.



Copy the command to register the host with Rancher.

Execute the command on the host, for example:

% docker run --rm --privileged -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/rancher:/var/lib/rancher rancher/agent:v1.2.2 http://192.168.163.131:8880/v1/scripts/BBD465D9B24E94F5FBFD:1483142400000:IDaNFrug38QsjZcu6rXh8TwqA4


Wait for the Kubernetes menu to populate with the CLI.


Install Kubectl

The following will install kubectl on a linux host. Once configured, this client tool will provide management of a Kubernetes cluster.

curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
chmod +x ./kubectl
mv ./kubectl /usr/local/bin/kubectl
mkdir ~/.kube
vi ~/.kube/config

Paste the kubectl config from Rancher (you will see the CLI menu under Kubernetes in Rancher after the k8s pods are up on your host)

Click on "Generate Config" to get your content to add into .kube/config


Verify that Kubernetes config is good

root@obrien-kube11-1:~# kubectl cluster-info
Kubernetes master is running at ....
Heapster is running at....
KubeDNS is running at ....
kubernetes-dashboard is running at ...
monitoring-grafana is running at ....
monitoring-influxdb is running at ...
tiller-deploy is running at....


Install Helm

The following will install Helm (use 2.3.0, not the current 2.6.0) on a linux host. Helm is used by OOM for package and configuration management.

Prerequisite: Install Kubectl

wget http://storage.googleapis.com/kubernetes-helm/helm-v2.3.0-linux-amd64.tar.gz
tar -zxvf helm-v2.3.0-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/helm
# Test Helm
helm help
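Optionally confirm that the client can also reach the tiller server (deployed in the Rancher Kubernetes environment, as seen in the cluster-info output above) - both a client and server version should be reported:

helm version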


Undercloud done - move to ONAP Installation


Wait until all the hosts show green in Rancher, then run the createConfig/createAll scripts that wrap all the kubectl commands - from oom/kubernetes/config and oom/kubernetes/oneclick (wherever you pulled oom).

Source the setenv.bash script in oom/kubernetes/oneclick/ - this sets your helm list of components to start/delete:

source setenv.bash

Run the one-time config pod - which mounts the volume /dockerdata/ contained in the pod config-init. This mount is required for all other ONAP pods to function.

Note: the pod will stop after NFS creation - this is normal.

% cd oom/kubernetes/config
# edit or copy the config for MSO data
vi onap-parameters.yaml
# or
cp onap-parameters-sample.yaml onap-parameters.yaml 
# run the config pod creation
% ./createConfig.sh -n onap 


**** Creating configuration for ONAP instance: onap
namespace "onap" created
pod "config-init" created
**** Done ****


Wait until the config-init pod has completed before trying to bring up a component or all of ONAP - around 60 sec (up to 10 min) - see https://wiki.onap.org/display/DW/ONAP+on+Kubernetes#ONAPonKubernetes-Waitingforconfig-initcontainertofinish-20sec

root@ip-172-31-93-122:~/oom_20170908/oom/kubernetes/config# kubectl get pods --all-namespaces -a

onap          config                                 0/1       Completed   0          1m

Note: when using the -a option the config container will show up with its status; without the -a flag it will not be listed.


Cluster Configuration (optional - do not use if your server/client are co-located)

A shared mount is better (see the NFS sketch after the scp example below).

Try to run your host and client on a single VM (a 64g one) - if not, you can run Rancher and several clients across several machines/VMs. The /dockerdata-nfs share must be replicated across the cluster, either with a mount or by copying the directory to the other servers from the one the "config" pod actually ran on. To verify this, check your / root fs on each node.

# in this case a 4 node cluster of 16g vms - the config node is on the 4th
root@kube16-3:~# ls /
bin  boot  dev  dockerdata-nfs  etc  home  initrd.img  lib  lib64  lost+found  media  mnt  opt  proc  root  run  sbin  snap  srv  sys  tmp  usr  var  vmlinuz
root@kube16-3:~# scp -r /dockerdata-nfs/ root@kube160.onap.info:~
root@kube16-3:~# scp -r /dockerdata-nfs/ root@kube161.onap.info:~
root@kube16-3:~# scp -r /dockerdata-nfs/ root@kube162.onap.info:~     
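As noted above, a real shared mount is preferable to scp copies. A minimal NFS sketch, assuming Ubuntu 16.04 and that the node that ran the config pod exports the share (the package names and export options are assumptions, not part of the OOM scripts):

# on the node that owns /dockerdata-nfs (the NFS server)
sudo apt-get install -y nfs-kernel-server
echo "/dockerdata-nfs *(rw,no_root_squash,no_subtree_check)" | sudo tee -a /etc/exports
sudo systemctl restart nfs-kernel-server

# on every other node (the NFS clients)
sudo apt-get install -y nfs-common
sudo mkdir -p /dockerdata-nfs
sudo mount <nfs-server-ip>:/dockerdata-nfs /dockerdata-nfs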

Running ONAP

Don't run all the pods unless you have at least 52G allocated - if you have a laptop/VM with 16G - then you can only run enough pods to fit in around 11G

% cd ../oneclick
% vi createAll.bash 
% ./createAll.bash -n onap -a robot|appc|aai 


(to bring up a single service at a time)

Use the default "onap" namespace if you want to run robot tests out of the box - as in "onap-robot"

Bring up core components

root@kos1001:~/oom1004/oom/kubernetes/oneclick# cat setenv.bash
#HELM_APPS=('consul' 'msb' 'mso' 'message-router' 'sdnc' 'vid' 'robot' 'portal' 'policy' 'appc' 'aai' 'sdc' 'dcaegen2' 'log' 'cli' 'multicloud' 'clamp' 'vnfsdk' 'kube2msb' 'aaf' 'vfc')
HELM_APPS=('consul' 'msb' 'mso' 'message-router' 'sdnc' 'vid' 'robot' 'portal' 'policy' 'appc' 'aai' 'sdc' 'log' 'kube2msb')
# pods with the ELK filebeat container for capturing logs
root@kos1001:~/oom1004/oom/kubernetes/oneclick# kubectl get pods --all-namespaces -a | grep 2/2
onap-aai              aai-resources-338473047-8k6vr           2/2       Running     0          1h
onap-aai              aai-traversal-2033243133-6cr9v          2/2       Running     0          1h
onap-aai              model-loader-service-3356570452-25fjp   2/2       Running     0          1h
onap-aai              search-data-service-2366687049-jt0nb    2/2       Running     0          1h
onap-aai              sparky-be-3141964573-f2mhr              2/2       Running     0          1h
onap-appc             appc-1335254431-v1pcs                   2/2       Running     0          1h
onap-mso              mso-3911927766-bmww7                    2/2       Running     0          1h
onap-policy           drools-2302173499-t0zmt                 2/2       Running     0          1h
onap-policy           pap-1954142582-vsrld                    2/2       Running     0          1h
onap-policy           pdp-4137191120-qgqnj                    2/2       Running     0          1h
onap-portal           portalapps-4168271938-4kp32             2/2       Running     0          1h
onap-portal           portaldb-2821262885-0t32z               2/2       Running     0          1h
onap-sdc              sdc-be-2986438255-sdqj6                 2/2       Running     0          1h
onap-sdc              sdc-fe-1573125197-7j3gp                 2/2       Running     0          1h
onap-sdnc             sdnc-3858151307-w9h7j                   2/2       Running     0          1h
onap-vid              vid-server-1837290631-x4ttc             2/2       Running     0          1h
# failed containers
root@kos1001:~/oom1004/oom/kubernetes/oneclick# kubectl get pods --all-namespaces -a | grep 0/1
onap                  config                                  0/1       Completed   0          3h
onap-kube2msb         kube2msb-registrator-1562795675-xsmjz   0/1       Error       24         1h


Only if you have >52G run the following (all namespaces)

% ./createAll.bash -n onap


ONAP is OK if everything is 1/1 in the following

% kubectl get pods --all-namespaces


Run the ONAP portal via the instructions in the section Running ONAP using the vnc-portal below.

Wait until the containers are all up

Run Initial healthcheck directly on the host

cd /dockerdata-nfs/onap/robot

./ete-docker.sh health


check AAI endpoints

root@ip-172-31-93-160:/dockerdata-nfs/onap/robot# kubectl -n onap-aai exec -it aai-service-3321436576-2snd6 bash

root@aai-service-3321436576-2snd6:/# ps -ef

UID        PID  PPID  C STIME TTY          TIME CMD

root         1     0  0 15:50 ?        00:00:00 /usr/local/sbin/haproxy-systemd-

root         7     1  0 15:50 ?        00:00:00 /usr/local/sbin/haproxy-master  

root@ip-172-31-93-160:/dockerdata-nfs/onap/robot# curl https://127.0.0.1:30233/aai/v11/service-design-and-creation/models

curl: (60) server certificate verification failed. CAfile: /etc/ssl/certs/ca-certificates.crt CRLfile: none
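The certificate failure above is expected with AAI's self-signed certificate. For a quick manual check you can tell curl to skip verification (note: depending on the AAI configuration you may also need basic auth and the X-FromAppId / X-TransactionId headers that the robot scripts normally supply):

curl -sk https://127.0.0.1:30233/aai/v11/service-design-and-creation/models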


Ports


onap-aai              aai-service            10.43.238.197   <nodes>       8443:30233/TCP,8080:30232/TCP                                                


onap-aai              hbase                  None            <none>        2181/TCP,8080/TCP,8085/TCP,9090/TCP,16000/TCP,16010/TCP,16201/TCP            


onap-aai              model-loader-service   10.43.161.44    <nodes>       8443:30229/TCP,8080:30210/TCP                                                


onap-appc             dgbuilder              10.43.67.192    <nodes>       3000:30228/TCP                                                               


onap-appc             sdnhost                10.43.250.74    <nodes>       8282:30230/TCP,1830:30231/TCP                                                


onap-clamp            clamp                  10.43.145.101   <nodes>       8080:30295/TCP                                                              


onap-cli              cli                    10.43.225.34    <nodes>       80:30260/TCP                                                                 


onap-consul           consul-server          10.43.56.151    <nodes>       8500:30270/TCP,8301:30271/TCP                                                


onap-log              elasticsearch          10.43.81.172    <nodes>       9200:30254/TCP                                                               


onap-log              kibana                 10.43.71.77     <nodes>       5601:30253/TCP                                                               


onap-message-router   dmaap                  10.43.226.159   <nodes>       3904:30227/TCP,3905:30226/TCP                                                


onap-msb              msb-consul             10.43.128.166   <nodes>       8500:30500/TCP                                                               


onap-msb              msb-discovery          10.43.6.205     <nodes>       10081:30081/TCP                                                              


onap-msb              msb-eag                10.43.239.63    <nodes>       80:30082/TCP                                                                 


onap-msb              msb-iag                10.43.220.233   <nodes>       80:30080/TCP                                                                


onap-mso              mariadb                10.43.254.57    <nodes>       3306:30252/TCP                                                               


onap-mso              mso                    10.43.206.123   <nodes>       8080:30223/TCP,3904:30225/TCP,3905:30224/TCP,9990:30222/TCP,8787:30250/TCP   


onap-multicloud       framework              10.43.5.154     <nodes>       9001:30291/TCP                                                               


onap-multicloud       multicloud-ocata       10.43.135.158   <nodes>       9006:30293/TCP                                                               


onap-multicloud       multicloud-vio         10.43.43.13     <nodes>       9004:30292/TCP                                                               


onap-multicloud       multicloud-windriver   10.43.43.197    <nodes>       9005:30294/TCP                                                               


onap-policy           brmsgw                 10.43.116.61    <nodes>       9989:30216/TCP                                                               


onap-policy           drools                 10.43.213.147   <nodes>       6969:30217/TCP                                                               


onap-policy           pap                    10.43.177.82    <nodes>       8443:30219/TCP,9091:30218/TCP                                                


onap-policy           pdp                    10.43.167.146   <nodes>       8081:30220/TCP                                                               


onap-portal           portalapps             10.43.130.189   <nodes>       8006:30213/TCP,8010:30214/TCP,8989:30215/TCP                                 


onap-portal           vnc-portal             10.43.8.67      <nodes>       6080:30211/TCP,5900:30212/TCP                                                


onap-robot            robot                  10.43.245.43    <nodes>       88:30209/TCP                                                                 


onap-sdc              sdc-be                 10.43.132.126   <nodes>       8443:30204/TCP,8080:30205/TCP                                                


onap-sdc              sdc-fe                 10.43.96.120    <nodes>       9443:30207/TCP,8181:30206/TCP                                                


onap-sdnc             sdnc-dgbuilder         10.43.177.11    <nodes>       3000:30203/TCP                                                               


onap-sdnc             sdnc-portal            10.43.108.205   <nodes>       8843:30201/TCP                                                               


onap-sdnc             sdnhost                10.43.185.180   <nodes>       8282:30202/TCP,8201:32147/TCP                                                


onap-vid              vid-server             10.43.36.97     <nodes>       8080:30200/TCP                                                               


onap-vnfsdk           refrepo                10.43.121.92    <nodes>       8702:30297/TCP 


pod | ext port | internal port | container | type | rest calls
aai-service | 30233 | 8443 | aai-service | https | (haproxy only)
sdc-be | 30205 | 8080 | sdc-be | http | /sdc/* /sdc2/*
sdc-fe | 30206 | 8181 | sdc-fe | http | /sdc1/*


List of Containers

Total pods is 75

Docker container list - may not be fully up to date: https://git.onap.org/integration/tree/packaging/docker/docker-images.csv

Get health via the Robot health check described above (cd /dockerdata-nfs/onap/robot; ./ete-docker.sh health).



Table columns (master:20170715): NAMESPACE | NAME | Image | Debug port | Containers (2 includes filebeat) | Log Volume External | Log Locations (docker internal) | Public Ports | Notes

default config-init





The mount "config-init-root" is in the following location

(user configurable VF parameter file below)

/dockerdata-nfs/onapdemo/mso/mso/mso-docker.json

onap-aaf







onap-aaf







onap-aai 

aai-resources





/opt/aai/logroot/AAI-RES



onap-aai 

aai-service








onap-aai 

aai-traversal





/opt/aai/logroot/AAI-GQ



onap-aai 

data-router








onap-aai 

elasticsearch








onap-aai 

hbase








onap-aai 

model-loader-service








onap-aai 

search-data-service








onap-aai 

sparky-be








onap-appc appc

2



onap-appc appc-dbhost






onap-appc appc-dgbuilder






onap-appc sdnctldb01 (internal)






onap-appc sdnctldb02 (internal)






onap-cli







onap-clamp clamp






onap-clamp

clamp-mariadb








onap-consul

consul-agent








onap-consul

consul-server








onap-consul

consul-server








onap-consul

consul-server








onap-dcae dcae-zookeeper
wurstmeister/zookeeper:latest






onap-dcae dcae-kafka
dockerfiles_kafka:latest





Note: currently there are no DCAE containers running yet (we are missing 6 yaml files - 1 for the controller and 5 for the collector, staging, and 3-cdap pods) - therefore DMaaP, VES collectors and APPC actions resulting from policy actions (closed loop) will not function yet.

In review: https://gerrit.onap.org/r/#/c/7287/

OOM-5

OOM-62

onap-dcae dcae-dmaap
attos/dmaap:latest






onap-dcae pgaas (PostgreSQL aaS) obrienlabs/pgaas




https://hub.docker.com/r/oomk8s/pgaas/tags/
onap-dcae dcae-collector-common-event





persistent volume: dcae-collector-pvs
onap-dcae dcae-collector-dmaapbc






onap-dcae

not required

dcae-controller







persistent volume: dcae-controller-pvs
onap-dcae dcae-ves-collector






onap-dcae cdap-0






onap-dcae cdap-1






onap-dcae cdap-2






onap-kube2msb

kube2msb-registrator








onap-log

elasticsearch








onap-log kibana






onap-log logstash






onap-message-router dmaap






onap-message-router global-kafka






onap-message-router zookeeper






onap-msb

msb-consul







bring onap-msb up before the rest of onap

follow OOM-113

onap-msb

msb-discovery








onap-msb

msb-eag








onap-msb

msb-iag








onap-mso mariadb






onap-mso mso

2



onap-multicloud

framework








onap-multicloud

multicloud-ocata








onap-multicloud

multicloud-vio








onap-multicloud

multicloud-windriver








onap-policy brmsgw






onap-policy drools

2



onap-policy mariadb






onap-policy nexus






onap-policy pap

2



onap-policy pdp

2



onap-portal portalapps

2



onap-portal portaldb






onap-portal

portalwidgets








onap-portal vnc-portal






onap-robot robot






onap-sdc sdc-be


/dockerdata-nfs/onap/sdc/logs/SDC/SDC-BE

${log.home}/${OPENECOMP-component-name}/ ${OPENECOMP-subcomponent-name}/transaction.log.%i


./var/lib/jetty/logs/SDC/SDC-BE/metrics.log

./var/lib/jetty/logs/SDC/SDC-BE/audit.log

./var/lib/jetty/logs/SDC/SDC-BE/debug_by_package.log

./var/lib/jetty/logs/SDC/SDC-BE/debug.log

./var/lib/jetty/logs/SDC/SDC-BE/transaction.log

./var/lib/jetty/logs/SDC/SDC-BE/error.log

./var/lib/jetty/logs/importNormativeAll.log

./var/lib/jetty/logs/ASDC/ASDC-FE/audit.log

./var/lib/jetty/logs/ASDC/ASDC-FE/debug.log

./var/lib/jetty/logs/ASDC/ASDC-FE/transaction.log

./var/lib/jetty/logs/ASDC/ASDC-FE/error.log

./var/lib/jetty/logs/2017_09_06.stderrout.log



onap-sdc sdc-cs






onap-sdc sdc-es

2



onap-sdc sdc-fe

2

./var/lib/jetty/logs/SDC/SDC-BE/metrics.log

./var/lib/jetty/logs/SDC/SDC-BE/audit.log

./var/lib/jetty/logs/SDC/SDC-BE/debug_by_package.log

./var/lib/jetty/logs/SDC/SDC-BE/debug.log

./var/lib/jetty/logs/SDC/SDC-BE/transaction.log

./var/lib/jetty/logs/SDC/SDC-BE/error.log

./var/lib/jetty/logs/importNormativeAll.log

./var/lib/jetty/logs/2017_09_07.stderrout.log

./var/lib/jetty/logs/ASDC/ASDC-FE/audit.log

./var/lib/jetty/logs/ASDC/ASDC-FE/debug.log

./var/lib/jetty/logs/ASDC/ASDC-FE/transaction.log

./var/lib/jetty/logs/ASDC/ASDC-FE/error.log

./var/lib/jetty/logs/2017_09_06.stderrout.log



onap-sdc sdc-kb






onap-sdnc sdnc

2

./opt/opendaylight/distribution-karaf-0.5.1-Boron-SR1/journal/000006.log

./opt/opendaylight/distribution-karaf-0.5.1-Boron-SR1/data/cache/1504712225751.log

./opt/opendaylight/distribution-karaf-0.5.1-Boron-SR1/data/cache/1504712002358.log

./opt/opendaylight/distribution-karaf-0.5.1-Boron-SR1/data/tmp/xql.log

./opt/opendaylight/distribution-karaf-0.5.1-Boron-SR1/data/log/karaf.log

./opt/opendaylight/distribution-karaf-0.5.1-Boron-SR1/taglist.log

./var/log/dpkg.log

./var/log/apt/history.log

./var/log/apt/term.log

./var/log/fontconfig.log

./var/log/alternatives.log

./var/log/bootstrap.log



onap-sdnc sdnc-dbhost






onap-sdnc sdnc-dgbuilder






onap-sdnc sdnctldb01 (internal)






onap-sdnc sdnctldb02 (internal)






onap-sdnc sdnc-portal



./opt/openecomp/sdnc/admportal/server/npm-debug.log

./var/log/dpkg.log

./var/log/apt/history.log

./var/log/apt/term.log

./var/log/fontconfig.log

./var/log/alternatives.log

./var/log/bootstrap.log



onap-vid vid-mariadb






onap-vid vid-server

2



onap-vfc

vfc-catalog








onap-vfc

vfc-emsdriver








onap-vfc

vfc-gvnfmdriver








onap-vfc

vfc-hwvnfmdriver








onap-vfc

vfc-jujudriver








onap-vfc

vfc-nslcm








onap-vfc

vfc-resmgr








onap-vfc

vfc-vnflcm








onap-vfc

vfc-vnfmgr








onap-vfc

vfc-vnfres








onap-vfc

vfc-workflow








onap-vfc

vfc-ztesdncdriver








onap-vfc

vfc-ztevmanagerdriver








onap-vnfsdk postgres






onap-vnfsdk refrepo






OOM Pod Init Dependencies

OOM Pod Init Dependencies

The diagram above describes the init dependencies for the ONAP pods when first deploying OOM through Kubernetes. The diagram currently covers the core components initially used in the vFW demo.

Configure Openstack settings in onap-parameters.yaml

Before running pod-config-init.yaml, make sure your config for OpenStack is set up correctly - so you can, for example, deploy the vFirewall VMs and run the demo robot scripts.

vi oom/kubernetes/config/config/onap-parameters.yaml

Rackspace example

defaults OK

OPENSTACK_UBUNTU_14_IMAGE: "Ubuntu_14.04.5_LTS"
OPENSTACK_PUBLIC_NET_ID: "e8f51956-00dd-4425-af36-045716781ffc"
OPENSTACK_OAM_NETWORK_ID: "d4769dfb-c9e4-4f72-b3d6-1d18f4ac4ee6"
OPENSTACK_OAM_SUBNET_ID: "191f7580-acf6-4c2b-8ec0-ba7d99b3bc4e"
OPENSTACK_OAM_NETWORK_CIDR: "192.168.30.0/24"

OPENSTACK_FLAVOUR_MEDIUM: "m1.medium"

OPENSTACK_SERVICE_TENANT_NAME: "services"

DMAAP_TOPIC: "AUTO"

DEMO_ARTIFACTS_VERSION: "1.1.0-SNAPSHOT"

Modify the following for your OpenStack/Rackspace account:
OPENSTACK_USERNAME: "yourlogin"
OPENSTACK_API_KEY: "uuid-key"
OPENSTACK_TENANT_NAME: "numeric"
OPENSTACK_TENANT_ID: "numeric"
OPENSTACK_REGION: "IAD"
OPENSTACK_KEYSTONE_URL: "https://identity.api.rackspacecloud.com"

Kubernetes DevOps


Kubernetes specific config

https://kubernetes.io/docs/user-guide/kubectl-cheatsheet/

Deleting All Containers

Delete all the containers (and services)

./deleteAll.bash -n onap

Delete/Rerun config-init container for /dockerdata-nfs refresh

Delete the config-init container and its generated /dockerdata-nfs share

There may be cases where new configuration content needs to be deployed after a pull of a new version of ONAP.

for example, after a pull brings in files like the following (20170902):

root@ip-172-31-93-160:~/oom/kubernetes/oneclick# git pull

Resolving deltas: 100% (135/135), completed with 24 local objects.

From http://gerrit.onap.org/r/oom

   bf928c5..da59ee4  master     -> origin/master

Updating bf928c5..da59ee4

kubernetes/config/docker/init/src/config/aai/aai-config/cookbooks/aai-resources/aai-resources-auth/metadata.rb                                  |    7 +

 kubernetes/config/docker/init/src/config/aai/aai-config/cookbooks/aai-resources/aai-resources-auth/recipes/aai-resources-aai-keystore.rb        |    8 +

 kubernetes/config/docker/init/src/config/aai/aai-config/cookbooks/{ajsc-aai-config => aai-resources/aai-resources-config}/CHANGELOG.md          |    2 +-

 kubernetes/config/docker/init/src/config/aai/aai-config/cookbooks/{ajsc-aai-config => aai-resources/aai-resources-config}/README.md             |    4 +-


see OOM-257 (worked with Zoran)

# check for the pod
kubectl get pods --all-namespaces -a
# delete all the pod/services
./deleteAll.bash -n onap
# delete the fs
rm -rf /dockerdata-nfs
# at this moment, it is an empty environment
#Pull the repo
git pull
# rerun the config
./createConfig.sh -n onap
If you get an error saying the release onap-config already exists, then run: helm del --purge onap-config


example 20170907 
root@kube0:~/oom/kubernetes/oneclick# rm -rf /dockerdata-nfs/
root@kube0:~/oom/kubernetes/oneclick# cd ../config/
root@kube0:~/oom/kubernetes/config# ./createConfig.sh -n onap
**** Creating configuration for ONAP instance: onap
Error from server (AlreadyExists): namespaces "onap" already exists
Error: a release named "onap-config" already exists.
Please run: helm ls --all "onap-config"; helm del --help
**** Done ****
root@kube0:~/oom/kubernetes/config# helm del --purge onap-config
release "onap-config" deleted
# rerun createAll.bash -n onap


Waiting for config-init container to finish - 20sec

root@ip-172-31-93-160:~/oom/kubernetes/config# kubectl get pods --all-namespaces -a

NAMESPACE     NAME                                   READY     STATUS              RESTARTS   AGE

onap          config-init                            0/1       ContainerCreating   0          6s

root@ip-172-31-93-160:~/oom/kubernetes/config# kubectl get pods --all-namespaces -a

NAMESPACE     NAME                                   READY     STATUS    RESTARTS   AGE

onap          config-init                            1/1       Running   0          9s

root@ip-172-31-93-160:~/oom/kubernetes/config# kubectl get pods --all-namespaces -a

NAMESPACE     NAME                                   READY     STATUS      RESTARTS   AGE

onap          config-init                            0/1       Completed   0          14s


Container Endpoint access

Check the services view in the Kubernetes API under robot:

robot.onap-robot:88 TCP

robot.onap-robot:30209 TCP

kubectl get services --all-namespaces -o wide

onap-vid      vid-mariadb            None           <none>        3306/TCP         1h        app=vid-mariadb

onap-vid      vid-server             10.43.14.244   <nodes>       8080:30200/TCP   1h        app=vid-server


Container Logs

kubectl --namespace onap-vid logs -f vid-server-248645937-8tt6p

16-Jul-2017 02:46:48.707 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in 22520 ms

kubectl --namespace onap-portal logs portalapps-2799319019-22mzl -f

root@obriensystemskub0:~/oom/kubernetes/oneclick# kubectl get pods --all-namespaces -o wide

NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE       IP              NODE

onap-robot    robot-44708506-dgv8j                    1/1       Running   0          36m       10.42.240.80    obriensystemskub0

root@obriensystemskub0:~/oom/kubernetes/oneclick# kubectl --namespace onap-robot logs -f robot-44708506-dgv8j

2017-07-16 01:55:54: (log.c.164) server started

Robot Logs

Yogini and I needed the logs in OOM Kubernetes - they were already there, behind robot:robot auth

http://test.onap.info:30209/logs/demo/InitDistribution/report.html

for example after a

root@ip-172-31-57-55:/dockerdata-nfs/onap/robot# ./demo-k8s.sh distribute

Find your path to the logs by using, for example:

root@ip-172-31-57-55:/dockerdata-nfs/onap/robot# kubectl --namespace onap-robot exec -it robot-4251390084-lmdbb bash

root@robot-4251390084-lmdbb:/# ls /var/opt/OpenECOMP_ETE/html/logs/demo/InitD                                                            

InitDemo/         InitDistribution/ 

path is

http://test.onap.info:30209/logs/demo/InitDemo/log.html#s1-s1-s1-s1-t1



SSH into ONAP containers

Normally I would do this via https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/

Get the pod name via

kubectl get pods --all-namespaces -o wide

bash into the pod via

kubectl -n onap-mso exec -it  mso-1648770403-8hwcf /bin/bash


Push Files to Pods

Trying to get an authorization file into the robot pod

root@obriensystemskub0:~/oom/kubernetes/oneclick# kubectl cp authorization onap-robot/robot-44708506-nhm0n:/home/ubuntu

The above appears to work; however, copying over an existing file fails:
root@obriensystemskub0:~/oom/kubernetes/oneclick# kubectl cp authorization onap-robot/robot-44708506-nhm0n:/etc/lighttpd/authorization
tar: authorization: Cannot open: File exists
tar: Exiting with failure status due to previous errors
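A workaround sketch when the destination file already exists: copy to a temporary path in the pod, then move it into place with kubectl exec (the pod name and paths below follow the robot example above):

kubectl cp authorization onap-robot/robot-44708506-nhm0n:/tmp/authorization
kubectl -n onap-robot exec robot-44708506-nhm0n -- mv /tmp/authorization /etc/lighttpd/authorization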

Redeploying Code war/jar in a docker container

Attaching a debugger to a docker container
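No procedure has been written up here yet; a generic sketch for a Java-based container, assuming the image honours JAVA_OPTS (or an equivalent) and using the mso pod name from the exec example above purely as an illustration:

# enable remote debugging in the container's JVM (mechanism varies per image), e.g.
# JAVA_OPTS="-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=8000"
# then forward the debug port from your workstation to the pod
kubectl -n onap-mso port-forward mso-1648770403-8hwcf 8000:8000
# attach your IDE's remote debugger to localhost:8000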



Running ONAP Portal UI Operations

Running ONAP using the vnc-portal

see Installing and Running the ONAP Demos

or run the vnc-portal container to access ONAP using the traditional port mappings.  See the following recorded video by Mike Elliot of the OOM team for an audio-visual reference

https://wiki.onap.org/download/attachments/13598723/zoom_0.mp4?version=1&modificationDate=1502986268000&api=v2

Check for the vnc-portal port via (it is always 30211)

obrienbiometrics:onap michaelobrien$ ssh ubuntu@dev.onap.info
ubuntu@ip-172-31-93-122:~$ sudo su -
root@ip-172-31-93-122:~# kubectl get services --all-namespaces -o wide
NAMESPACE             NAME                          CLUSTER-IP      EXTERNAL-IP   PORT(S)                                                                      AGE       SELECTOR
onap-portal           vnc-portal                    10.43.78.204    <nodes>       6080:30211/TCP,5900:30212/TCP                                                4d        app=vnc-portal

launch the vnc-portal in a browser

http://dev.onap.info:30211/

password is "password"

Open firefox inside the VNC vm - launch portal normally

http://portal.api.simpledemo.openecomp.org:8989/ECOMPPORTAL/login.htm

(20170906) Before running SDC - fix the /etc/hosts (thanks Yogini for catching this) - edit your /etc/hosts as follows

(change sdc.ui to sdc.api)
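Inside the vnc-portal container this is a one-line edit of /etc/hosts; the host names below are an assumption based on the simpledemo naming used elsewhere on this page - follow OOM-282 for the authoritative fix:

# inside the vnc-portal container
vi /etc/hosts
# change the sdc entry so it reads (IP unchanged):
#   <sdc-fe-ip>  sdc.api.simpledemo.openecomp.org
# instead of:
#   <sdc-fe-ip>  sdc.ui.simpledemo.openecomp.org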

OOM-282

before | after | notes



login and run SDC


Continue with the normal ONAP demo flow at (Optional) Tutorial: Onboarding and Distributing a Vendor Software Product (VSP)

Running Multiple ONAP namespaces

Run multiple environments on the same machine - TODO

root@obriensystemsu0:~/onap/oom/kubernetes/oneclick# ./createAll.bash -n onap -a robot

********** Creating instance 1 of ONAP with port range 30200 and 30399

********** Creating ONAP:

********** Creating deployments for robot **********

Creating namespace **********

namespace "onap-robot" created

Creating registry secret **********

secret "onap-docker-registry-key" created

Creating deployments and services **********

NAME:   onap-robot

LAST DEPLOYED: Sun Sep 17 14:58:29 2017

NAMESPACE: onap

STATUS: DEPLOYED

RESOURCES:

==> v1/Service

NAME   CLUSTER-IP  EXTERNAL-IP  PORT(S)       AGE

robot  10.43.6.5   <nodes>      88:30209/TCP  0s

==> extensions/v1beta1/Deployment

NAME   DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE

robot  1        1        1           0          0s

**** Done ****

root@obriensystemsu0:~/onap/oom/kubernetes/oneclick# ./createAll.bash -n onap2 -a robot

********** Creating instance 1 of ONAP with port range 30200 and 30399

********** Creating ONAP:

********** Creating deployments for robot **********

Creating namespace **********

namespace "onap2-robot" created

Creating registry secret **********

secret "onap2-docker-registry-key" created

Creating deployments and services **********

Error: release onap2-robot failed: Service "robot" is invalid: spec.ports[0].nodePort: Invalid value: 30209: provided port is already allocated

The command helm returned with error code 1

Rest API

There are v1 and v2 experimental REST endpoints that allow us to automate Rancher:

http://127.0.0.1:8880/v1

For example, we can see when the AAI model-loader container was created (a command-line query sketch follows the JSON listing below):

http://127.0.0.1:8880/v1/containerevents
{
  • "id": "1ce88",
  • "type": "containerEvent",
  • "links": {},
  • "actions": {},
  • "baseType": "containerEvent",
  • "state": "created",
  • "accountId": "1a7",
  • "created": "2017-09-17T20:07:37Z",
  • "createdTS": 1505678857000,
  • "data": {
    • "fields": {
      • "dockerInspect": {
        • "Id": "59ec11e257fb9061b250fe7ce6a7c86ffd10a82a2f26776c0adc9ac0eb3c6e54",
        • "Created": "2017-09-17T20:07:37.750772403Z",
        • "Path": "/pause",
        • "Args": [ ],
        • "State": {
          • "Status": "running",
          • "Running": true,
          • "Paused": false,
          • "Restarting": false,
          • "OOMKilled": false,
          • "Dead": false,
          • "Pid": 25115,
          • "ExitCode": 0,
          • "Error": "",
          • "StartedAt": "2017-09-17T20:07:37.92889179Z",
          • "FinishedAt": "0001-01-01T00:00:00Z"
          },
        • "Image": "sha256:99e59f495ffaa222bfeb67580213e8c28c1e885f1d245ab2bbe3b1b1ec3bd0b2",
        • "ResolvConfPath": "/var/lib/docker/containers/59ec11e257fb9061b250fe7ce6a7c86ffd10a82a2f26776c0adc9ac0eb3c6e54/resolv.conf",
        • "HostnamePath": "/var/lib/docker/containers/59ec11e257fb9061b250fe7ce6a7c86ffd10a82a2f26776c0adc9ac0eb3c6e54/hostname",
        • "HostsPath": "/var/lib/docker/containers/59ec11e257fb9061b250fe7ce6a7c86ffd10a82a2f26776c0adc9ac0eb3c6e54/hosts",
        • "LogPath": "/var/lib/docker/containers/59ec11e257fb9061b250fe7ce6a7c86ffd10a82a2f26776c0adc9ac0eb3c6e54/59ec11e257fb9061b250fe7ce6a7c86ffd10a82a2f26776c0adc9ac0eb3c6e54-json.log",
        • "Name": "/k8s_POD_model-loader-service-849987455-532vd_onap-aai_d9034afb-9be3-11e7-ac87-024d93e255bc_0",
        • "RestartCount": 0,
        • "Driver": "aufs",
        • "MountLabel": "",
        • "ProcessLabel": "",
        • "AppArmorProfile": "",
        • "ExecIDs": null,
        • "HostConfig": {
          • "Binds": null,
          • "ContainerIDFile": "",
          • "LogConfig": {
            • "Type": "json-file",
            • "Config": { }
            },
          • "NetworkMode": "none",
          • "PortBindings": { },
          • "RestartPolicy": {
            • "Name": "",
            • "MaximumRetryCount": 0
            },
          • "AutoRemove": false,
          • "VolumeDriver": "",
          • "VolumesFrom": null,
          • "CapAdd": null,
          • "CapDrop": null,
          • "Dns": null,
          • "DnsOptions": null,
          • "DnsSearch": null,
          • "ExtraHosts": null,
          • "GroupAdd": null,
          • "IpcMode": "",
          • "Cgroup": "",
          • "Links": null,
          • "OomScoreAdj": -998,
          • "PidMode": "",
          • "Privileged": false,
          • "PublishAllPorts": false,
          • "ReadonlyRootfs": false,
          • "SecurityOpt": [
            • "seccomp=unconfined"
            ],
          • "UTSMode": "",
          • "UsernsMode": "",
          • "ShmSize": 67108864,
          • "Runtime": "runc",
          • "ConsoleSize": [ 2 items
            • 0,
            • 0
            ],
          • "Isolation": "",
          • "CpuShares": 2,
          • "Memory": 0,
          • "CgroupParent": "/kubepods/besteffort/podd9034afb-9be3-11e7-ac87-024d93e255bc",
          • "BlkioWeight": 0,
          • "BlkioWeightDevice": null,
          • "BlkioDeviceReadBps": null,
          • "BlkioDeviceWriteBps": null,
          • "BlkioDeviceReadIOps": null,
          • "BlkioDeviceWriteIOps": null,
          • "CpuPeriod": 0,
          • "CpuQuota": 0,
          • "CpuRealtimePeriod": 0,
          • "CpuRealtimeRuntime": 0,
          • "CpusetCpus": "",
          • "CpusetMems": "",
          • "Devices": null,
          • "DiskQuota": 0,
          • "KernelMemory": 0,
          • "MemoryReservation": 0,
          • "MemorySwap": 0,
          • "MemorySwappiness": -1,
          • "OomKillDisable": false,
          • "PidsLimit": 0,
          • "Ulimits": null,
          • "CpuCount": 0,
          • "CpuPercent": 0,
          • "IOMaximumIOps": 0,
          • "IOMaximumBandwidth": 0
          },
        • "GraphDriver": {
          • "Name": "aufs",
          • "Data": null
          },
        • "Mounts": [ ],
        • "Config": {
          • "Hostname": "model-loader-service-849987455-532vd",
          • "Domainname": "",
          • "User": "",
          • "AttachStdin": false,
          • "AttachStdout": false,
          • "AttachStderr": false,
          • "Tty": false,
          • "OpenStdin": false,
          • "StdinOnce": false,
          • "Env": null,
          • "Cmd": null,
          • "Image": "gcr.io/google_containers/pause-amd64:3.0",
          • "Volumes": null,
          • "WorkingDir": "",
          • "Entrypoint": [
            • "/pause"
            ],
          • "OnBuild": null,
          • "Labels": {}
          },
        • "NetworkSettings": {
          • "Bridge": "",
          • "SandboxID": "6ebd4ae330d1fded82301d121e604a1e7193f20c538a9ff1179e98b9e36ffa5f",
          • "HairpinMode": false,
          • "LinkLocalIPv6Address": "",
          • "LinkLocalIPv6PrefixLen": 0,
          • "Ports": { },
          • "SandboxKey": "/var/run/docker/netns/6ebd4ae330d1",
          • "SecondaryIPAddresses": null,
          • "SecondaryIPv6Addresses": null,
          • "EndpointID": "",
          • "Gateway": "",
          • "GlobalIPv6Address": "",
          • "GlobalIPv6PrefixLen": 0,
          • "IPAddress": "",
          • "IPPrefixLen": 0,
          • "IPv6Gateway": "",
          • "MacAddress": "",
          • "Networks": {
            • "none": {
              • "IPAMConfig": null,
              • "Links": null,
              • "Aliases": null,
              • "NetworkID": "9812f79a4ddb086db1b60cd10292d729842b2b42e674b400ac09101541e2b845",
              • "EndpointID": "d4cb711ea75ed4d27b9d4b3a71d1b3dd5dfa9f4ebe277ab4280d98011a35b463",
              • "Gateway": "",
              • "IPAddress": "",
              • "IPPrefixLen": 0,
              • "IPv6Gateway": "",
              • "GlobalIPv6Address": "",
              • "GlobalIPv6PrefixLen": 0,
              • "MacAddress": ""
              }
            }
          }
        }
      }
    },
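For scripted access the same endpoints can be queried from the command line; a minimal sketch (pretty-printing with python is optional, and depending on your Rancher access-control settings you may need to pass API keys with -u):

curl -s http://127.0.0.1:8880/v1/containerevents | python -m json.tool | head -40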


Troubleshooting

Rancher fails to restart on server reboot

Having issues after a reboot of a colocated server/agent
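No fix is documented here yet; a hedged starting point, assuming the Rancher server was started with --restart=unless-stopped as shown above, is to check whether the server and agent containers came back after the reboot and restart them manually if not:

docker ps -a | grep rancher
# restart the server and/or agent container if it is in an Exited state
docker restart <rancher-server-or-agent-container-id>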

Installing Clean Ubuntu

apt-get install ssh

apt-get install ubuntu-desktop

DNS resolution

ignore - not relevant

Search Line limits were exceeded, some dns names have been omitted, the applied search line is: default.svc.cluster.local svc.cluster.local cluster.local kubelet.kubernetes.rancher.internal kubernetes.rancher.internal rancher.internal

https://github.com/rancher/rancher/issues/9303

Config Pod fails to start with Error

Make sure your OpenStack parameters are set if you get the following when starting up the config pod:

root@obriensystemsu0:~# kubectl get pods --all-namespaces -a
NAMESPACE     NAME                                   READY     STATUS    RESTARTS   AGE
kube-system   heapster-4285517626-l9wjp              1/1       Running   4          22d
kube-system   kube-dns-2514474280-4411x              3/3       Running   9          22d
kube-system   kubernetes-dashboard-716739405-fq507   1/1       Running   4          22d
kube-system   monitoring-grafana-3552275057-w3xml    1/1       Running   4          22d
kube-system   monitoring-influxdb-4110454889-bwqgm   1/1       Running   4          22d
kube-system   tiller-deploy-737598192-841l1          1/1       Running   4          22d
onap          config                                 0/1       Error     0          1d
root@obriensystemsu0:~# vi /etc/hosts
root@obriensystemsu0:~# kubectl logs -n onap config
Validating onap-parameters.yaml has been populated
Error: OPENSTACK_UBUNTU_14_IMAGE must be set in onap-parameters.yaml
+ echo 'Validating onap-parameters.yaml has been populated'
+ [[ -z '' ]]
+ echo 'Error: OPENSTACK_UBUNTU_14_IMAGE must be set in onap-parameters.yaml'
+ exit 1

fix
root@obriensystemsu0:~/onap_1007/oom/kubernetes/config# helm delete --purge onap-config
release "onap-config" deleted
root@obriensystemsu0:~/onap_1007/oom/kubernetes/config# ./createConfig.sh -n onap

**** Creating configuration for ONAP instance: onap
Error from server (AlreadyExists): namespaces "onap" already exists
NAME:   onap-config
LAST DEPLOYED: Mon Oct  9 21:35:27 2017
NAMESPACE: onap
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME                   DATA  AGE
global-onap-configmap  15    0s

==> v1/Pod
NAME    READY  STATUS             RESTARTS  AGE
config  0/1    ContainerCreating  0         0s

**** Done ****
root@obriensystemsu0:~/onap_1007/oom/kubernetes/config# kubectl get pods --all-namespaces -a
NAMESPACE     NAME                                   READY     STATUS    RESTARTS   AGE
kube-system   heapster-4285517626-l9wjp              1/1       Running   4          22d
kube-system   kube-dns-2514474280-4411x              3/3       Running   9          22d
kube-system   kubernetes-dashboard-716739405-fq507   1/1       Running   4          22d
kube-system   monitoring-grafana-3552275057-w3xml    1/1       Running   4          22d
kube-system   monitoring-influxdb-4110454889-bwqgm   1/1       Running   4          22d
kube-system   tiller-deploy-737598192-841l1          1/1       Running   4          22d
onap          config                                 1/1       Running   0          25s
root@obriensystemsu0:~/onap_1007/oom/kubernetes/config# kubectl get pods --all-namespaces -a
NAMESPACE     NAME                                   READY     STATUS      RESTARTS   AGE
kube-system   heapster-4285517626-l9wjp              1/1       Running     4          22d
kube-system   kube-dns-2514474280-4411x              3/3       Running     9          22d
kube-system   kubernetes-dashboard-716739405-fq507   1/1       Running     4          22d
kube-system   monitoring-grafana-3552275057-w3xml    1/1       Running     4          22d
kube-system   monitoring-influxdb-4110454889-bwqgm   1/1       Running     4          22d
kube-system   tiller-deploy-737598192-841l1          1/1       Running     4          22d
onap          config                                 0/1       Completed   0          1m

 



Questions

https://lists.onap.org/pipermail/onap-discuss/2017-July/002084.html

https://kubernetes.io/docs/user-guide/kubectl-cheatsheet/

Please help out our OPNFV friends

https://wiki.opnfv.org/pages/viewpage.action?pageId=12389095

Of interest

https://github.com/cncf/cross-cloud/

Reference Reviews

https://gerrit.onap.org/r/#/c/6179/

https://gerrit.onap.org/r/#/c/9849/

https://gerrit.onap.org/r/#/c/9839/


92 Comments

  1. Hi F. Michael O'Brien Does this include DCAE as well? I think this is the best way to install ONAP. Does this include any config files as well to talk to openstack cloud to instantiate VNFs?

    1. Sorry, 

      DCAE is not currently in the repo yet - that will require consolidation of the DCAE Controller (a lot of work)

      ../oneclick/dcae.sh is listed as "under construction"

      As far as I know VNFs like the vFirewall come up, however closed loop operations will need DCAE.

      /michael

  2. I see the recently added update about not being able to pull images because of missing credentials.  I encountered this yesterday and was able to get a workaround done by creating the secret and embedding the imagePullSecrets in the *-deployment.yaml file.

    Here's steps just for the robot:

    • kubectl --namespace onap-robot create secret docker-registry nexuscreds2  --docker-server=nexus3.onap.org:10001 --docker-username=docker --docker-password=docker --docker-email=x@abc.com
    • then added to the robot-deployment.yaml (above volumes):

      •         imagePullSecrets:
      •         - name: nexuscreds2

    This has to be done for each namespace, and each script with the image would need to be updated.  An alternative that I'm looking at is:

    -       modify the default service account for the namespace to use this secret as an imagePullSecret.

    -        kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "myregistrykey"}]}'

    -       Now, any new pods created in the current namespace will have this added to their spec:

    -        spec:
    -          imagePullSecrets:
    -          - name: myregistrykey

    (from  https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/ )

    This would probably have to be done in the createAll.bash script, possibly with the userid/password as parameters to that script.

    Is there a suggested approach?  if so, I can submit some updates.

  3. Talk about parallel development - google served me

    https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-secret-that-holds-your-authorization-token

    kubectl create secret docker-registry regsecret --docker-server=nexus3.onap.org:10001 --docker-username=docker --docker-password=docker --docker-email=email@email.com

    testing this now

    /michael

    1. Jason, 

          In our current environment (namespace 1:1 → service 1:1 → pod 1:1 → docker container) it looks like the following single command will have a global scope (no need to modify individual yaml files - a slight alternative to what you have suggested which would work as well.

      kubectl create secret docker-registry regsecret --docker-server=nexus3.onap.org:10001 --docker-username=docker --docker-password=docker --docker-email=email@email.com

          So no code changes which is good.  Currently everything seems to be coming up - but my 70G VM is at 99% so we need more HD space.

      Edit: actually even though it looked to work

      2017-06-30T19:31 UTC
      2017-06-30T19:31 UTC
      pulling image "nexus3.onap.org:10001/openecomp/sdc-elasticsearch:1.0-STAGING-latest"
      kubelet 172.17.4.99
      spec.containers{sdc-es}
      2
      2017-06-30T19:31 UTC
      2017-06-30T19:31 UTC

      still getting errors without the namespace for each service like in your example - if we wait long enough

      So a better fix Yves and I are testing is to put the line just after the namespace creation in createAll.bash

      create_namespace() {
        kubectl create namespace $1-$2
        kubectl --namespace $1-$2 create secret docker-registry regsecret --docker-server=nexus3.onap.org:10001 --docker-username=docker --docker-password=docker --docker-email=email@email.com
      }

      /michael

      1. Michael,

        I'm surprised that it appears to work for you, as it doesn't for my environment.  First, you should have to specify the imagePullSecrets for it to work... that can either be done in the yaml or by using the patch serviceaccount command.  Second, the scope of the secret for imagePullSecrets is just that namespace:

        Pods can only reference image pull secrets in their own namespace, so this process needs to be done one time per namespace.

        source: https://kubernetes.io/docs/concepts/containers/images/#creating-a-secret-with-a-docker-config

        In your environment, had you previously pulled the images before?  I noticed in my environment that it would find a previously pulled image even if I didn't have the authentication credentials.  To test that out, I had to add " imagePullPolicy: Always " to the *-deployment.yaml file under the container scope, so it would always try to pull it.

        So I think a fix is necessary.  I can submit a suggested change to the createAll.bash script that creates the secret and updates the service account in each namespace? 

      2. I think you'll need to add to the service account, too, so....

        create_namespace() {
          kubectl create namespace $1-$2
          kubectl --namespace $1-$2 create secret docker-registry regsecret --docker-server=nexus3.onap.org:10001 --docker-username=docker --docker-password=docker --docker-email=email@email.com
        kubectl --namespace $1-$2 patch serviceaccount default -p '{"imagePullSecrets": [{"name": "regsecret"}]}'
        }

        I will test now.

        1. We previously saw a successful pull from nexus3 - but that turned out to be a leftover mod in my branch yaml for a specific pod.

          Yes, I should know in about 10 min (in the middle of a redeploy) if I need to patch - makes sense because it would assume a magical 1:1 association - what if I created several secrets.

          I'll adjust and retest.

          btw, thanks for working with us getting Kubernetes/oom up!

          /michael

          1. My test of the updated create_namespace() method eliminated all of the "no credentials" errors.  I have plenty of other errors (most seem to be related to the readiness check timing out), but I think this one is licked.

            Is there a better way to track this than the comments here?  Jira?

            1. JIRA is

              OOM-3

          2. Looks like we will need to specify the secret on each yaml file - because of our mixed nexus3/dockerhub repos

            When we try to pull from dockerhub - the secret gets applied

            Failed to pull image "oomk8s/readiness-check:1.0.0": unexpected EOF
            Error syncing pod, skipping: failed to "StartContainer" for "mso-readiness" with ErrImagePull: "unexpected EOF"
            MountVolume.SetUp failed for volume "kubernetes.io/secret/3a7b5084-5dd2-11e7-b73a-08002723e514-default-token-fs361" (spec.Name: "default-token-fs361") pod "3a7b5084-5dd2-11e7-b73a-08002723e514" (UID: "3a7b5084-5dd2-11e7-b73a-08002723e514") with: Get http://127.0.0.1:8080/api/v1/namespaces/onap3-mso/secrets/default-token-fs361: dial tcp 127.0.0.1:8080: getsockopt: connection refused


            retesting

            1. Actually our mso images loaded fine after internal retries - bringing up the whole system (except dcae) - so this is without a secret override on the yamls that target nexus3.

              It includes your patch line from above

              My vagrant vm ran out of HD space at 19G - resizing

              v.customize ["modifyhd", "aa296a7e-ae13-4212-a756-5bf2a8461b48", "--resize",  "32768"]

              wont work on the coreos image - moving up one level of virtualization (docker on virtualbox on vmware-rhel73 in win10) to (docker on virtualbox on win10)

              vid still failing on FS

              Failed to start container with docker id 47b63e352857 with error: Error response from daemon: {"message":"oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:359: container init caused \\\"rootfs_linux.go:54: mounting \\\\\\\"/dockerdata-nfs/onapdemo/vid/vid/lf_config/vid-my.cnf\\\\\\\" to rootfs \\\\\\\"/var/lib/docker/overlay2/0638a5d171ddacf7346133ee5e53104992243e897370bb054383f2e121e5d63f/merged\\\\\\\" at \\\\\\\"/var/lib/docker/overlay2/0638a5d171ddacf7346133ee5e53104992243e897370bb054383f2e121e5d63f/merged/etc/mysql/my.cnf\\\\\\\" caused \\\\\\\"not a directory\\\\\\\"\\\"\"\n: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type"}


              /michael

            2. I am getting the "Error syncing Pod"  errors in bringing currently only aai and vid pod up.

              I implemented even both the fix mentioned in OOM-3 -

              1)  

              create_namespace() {
              kubectl create namespace $1-$2
              kubectl --namespace $1-$2 create secret docker-registry regsecret --docker-server=nexus3.onap.org:10001 --docker-username=docker --docker-password=docker --docker-email=email@email.com
              kubectl --namespace $1-$2 patch serviceaccount default -p '{"imagePullSecrets": [{"name": "regsecret"}]}'

              }


              2)  Adding below in vid-server-deployment.yaml

              spec:
                imagePullSecrets:
                  - name: regsecret

              Errors:-

              aai-service-403142545-f620t
              onap-aai
              Waiting: PodInitializing

              Search Line limits were exceeded, some dns names have been omitted, the applied search line is: onap-aai.svc.cluster.local svc.cluster.local cluster.local kubelet.kubernetes.rancher.internal kubernetes.rancher.internal rancher.internal
              Error syncing pod


              vid-mariadb-1108617343-zgnbd
              onap-vid
              Waiting: rpc error: code = 2 desc = failed to start container "c4966c8f8dbfdf460ca661afa94adc7f536fd4b33ed3af7a0857ecdeefed1225": Error response from daemon: {"message":"invalid header field value \"oci runtime error: container_linux.go:247: starting container process caused \\\"process_linux.go:359: container init caused \\\\\\\"rootfs_linux.go:53: mounting \\\\\\\\\\\\\\\"/dockerdata-nfs/onap/vid/vid/lf_config/vid-my.cnf\\\\\\\\\\\\\\\" to rootfs \\\\\\\\\\\\\\\"/var/lib/docker/aufs/mnt/8a2abc00538b1bec820b272692b4367922893fb7eed6851cfca6e4d3445d1b36\\\\\\\\\\\\\\\" at \\\\\\\\\\\\\\\"/var/lib/docker/aufs/mnt/8a2abc00538b1bec820b272692b4367922893fb7eed6851cfca6e4d3445d1b36/etc/mysql/my.cnf\\\\\\\\\\\\\\\" caused \\\\\\\\\\\\\\\"not a directory\\\\\\\\\\\\\\\"\\\\\\\"\\\"\\n\""}

              Search Line limits were exceeded, some dns names have been omitted, the applied search line is: onap-vid.svc.cluster.local svc.cluster.local cluster.local kubelet.kubernetes.rancher.internal kubernetes.rancher.internal rancher.internal
              Error: failed to start container "vid-mariadb": Error response from daemon: {"message":"invalid header field value \"oci runtime error: container_linux.go:247: starting container process caused \\\"process_linux.go:359: container init caused \\\\\\\"rootfs_linux.go:53: mounting \\\\\\\\\\\\\\\"/dockerdata-nfs/onap/vid/vid/lf_config/vid-my.cnf\\\\\\\\\\\\\\\" to rootfs \\\\\\\\\\\\\\\"/var/lib/docker/aufs/mnt/8a2abc00538b1bec820b272692b4367922893fb7eed6851cfca6e4d3445d1b36\\\\\\\\\\\\\\\" at \\\\\\\\\\\\\\\"/var/lib/docker/aufs/mnt/8a2abc00538b1bec820b272692b4367922893fb7eed6851cfca6e4d3445d1b36/etc/mysql/my.cnf\\\\\\\\\\\\\\\" caused \\\\\\\\\\\\\\\"not a directory\\\\\\\\\\\\\\\"\\\\\\\"\\\"\\n\""}
              Error syncing pod


              Is there anything I am missing here?

              1. Vaibhav,

                  Hi, OOM-3 has been deprecated (it is in the closed state) - the secrets fix is implemented differently now - you don't need the workaround.

                  Also, the "Search Line limits were exceeded" message is a Rancher bug that you can ignore - it warns that more than 5 DNS search terms were used - not an issue - see my other comments on this page

                https://github.com/rancher/rancher/issues/9303

                  The only real issue is "Error syncing pod" - most likely an intermittent timing issue that we are working on - a faster system with more cores should see less of this.

                  If you only have 2 working pods you might not have run the config-init pod - verify that /dockerdata-nfs exists on your host FS (a quick check is sketched below).
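                  A minimal sketch of that check, assuming the oom clone sits at ~/oom and the 1.0/1.1 config pod yaml is still at kubernetes/config/pod-config-init.yaml (as used further down this page) - adjust paths to your clone:

                  # the shared config mount should already exist on the host
                  ls /dockerdata-nfs
                  # if it is missing, create the config-init pod first and wait for it to complete
                  cd ~/oom/kubernetes/config
                  kubectl create -f pod-config-init.yaml
                  kubectl get pods --all-namespaces -a | grep config-init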

                for vid you should see (20170831 1.1 build)

                onap-vid              vid-mariadb-2932072366-gw6b7           1/1       Running            0          1h

                onap-vid              vid-server-377438368-bt6zg             1/1       Running            0          1h

                /michael

                1. Hi Michael,


                  I have run config-init, but at that time I was only installing components one by one. Now I tried to install everything in one go and the following came up successfully:


                  kube-system heapster-4285517626-q0996 1/1 Running 5 19h 10.42.41.231 storm0220.cloud.com
                  kube-system kube-dns-2514474280-kvcvx 3/3 Running 12 19h 10.42.4.230 storm0220.cloud.com
                  kube-system kubernetes-dashboard-716739405-fjxpm 1/1 Running 7 19h 10.42.35.168 storm0220.cloud.com
                  kube-system monitoring-grafana-3552275057-0v7mk 1/1 Running 6 19h 10.42.128.254 storm0220.cloud.com
                  kube-system monitoring-influxdb-4110454889-vxv19 1/1 Running 6 19h 10.42.159.54 storm0220.cloud.com
                  kube-system tiller-deploy-737598192-t56wv 1/1 Running 2 19h 10.42.61.18 storm0220.cloud.com
                  onap-aai hbase-2720973979-p2btt 0/1 Running 0 17h 10.42.12.51 storm0220.cloud.com
                  onap-appc appc-dbhost-3721796594-v9k2k 1/1 Running 0 17h 10.42.215.107 storm0220.cloud.com
                  onap-message-router zookeeper-4131483451-r5msz 1/1 Running 0 17h 10.42.76.76 storm0220.cloud.com
                  onap-mso mariadb-786536066-dx5px 1/1 Running 0 17h 10.42.88.165 storm0220.cloud.com
                  onap-policy mariadb-1621559354-nbrvh 1/1 Running 0 17h 10.42.108.42 storm0220.cloud.com
                  onap-portal portaldb-3934803085-fj217 1/1 Running 0 17h 10.42.145.204 storm0220.cloud.com
                  onap-robot robot-1597903591-fffz3 1/1 Running 0 1h 10.42.253.121 storm0220.cloud.com
                  onap-sdnc sdnc-dbhost-3459361889-7xdmw 1/1 Running 0 17h 10.42.58.17 storm0220.cloud.com
                  onap-vid vid-mariadb-1108617343-gsv8f 1/1 Running 0 17h 10.42.175.190 storm0220.cloud.com


                  but again, many of them are stuck with the same error: "Error syncing pod"


                  The server I am using now has 128 GB RAM. (I have configured the proxy as best I know how, but if you think this could also be proxy-related I will dig more in that direction.)


                  BR/

                  VC

                  1. I'll contact you directly about proxy access.

                    Personally I try to run on machines/VMs outside the corporate proxy - to avoid the proxy part of the triage equation

                    /michael

                    1. Sure, Thanks Frank,


                      Will check the proxy. In any case, other than the proxy, whenever you find a fix for "Error syncing pod", please update us.

                      Currently, 20 out of 34 ONAP pods are running fine and the rest are failing with "Error syncing pod".

                      BR/

                      VC

  4. Update: containers are loading now - for example both pods for VID come up ok if we first run the config-init pod to bring up the config mounts.  Also there is an issue with unresolved DNS entries that is fixed temporarily by adding to /etc/resolv.conf

    1) mount config files

    root@obriensystemsucont0:~/onap/oom/kubernetes/config# kubectl create -f pod-config-init.yaml

    pod "config-init" created

    2) fix DNS search

    https://github.com/rancher/rancher/issues/9303

    Fix DNS resolution before running any more pods ( add service.ns.svc.cluster.local)

    root@obriensystemskub0:~/oom/kubernetes/oneclick# cat /etc/resolv.conf

    nameserver 192.168.241.2

    search localdomain service.ns.svc.cluster.local

    3) run or restart the VID service as an example (one of 10 failing pods) - a restart sketch follows at the end of this comment

    root@obriensystemskub0:~/oom/kubernetes/oneclick# kubectl get pods --all-namespaces

    onap-vid      vid-mariadb-1357170716-k36tm            1/1       Running   0          10m

    onap-vid      vid-server-248645937-8tt6p              1/1       Running   0          10m


    root@obriensystemskub0:~/oom/kubernetes/oneclick# kubectl --namespace onap-vid logs -f vid-server-248645937-8tt6p

    16-Jul-2017 02:46:48.707 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in 22520 ms


    tomcat comes up on 127.0.0.1:30200 for this colocated setup

    root@obriensystemskub0:~/oom/kubernetes/oneclick# kubectl get services --all-namespaces -o wide


    onap-vid      vid-mariadb            None           <none>        3306/TCP         1h        app=vid-mariadb


    onap-vid      vid-server             10.43.14.244   <nodes>       8080:30200/TCP   1h        app=vid-server
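    For step 3, one common way to restart a failing pod (an assumption - not necessarily how it was restarted here) is to delete it and let its deployment recreate it:

    # deleting the pod forces the vid-server deployment to schedule a fresh replica
    kubectl --namespace onap-vid delete pod vid-server-248645937-8tt6p
    # watch the replacement come up
    kubectl --namespace onap-vid get pods -w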

  5. Good news – 32 of 33 pods are up (sdnc-portal is going through a restart).

    Ran 2 parallel Rancher systems on 48G Ubuntu 16.04.2 VM’s on two 64G servers

    Stats: Without DCAE (which is up to 40% of ONAP) we run at 33G – so I would expect a full system to be around 50G which means we can run on a P70 Thinkpad laptop with 64G.

    Had to add some dns-search domains for k8s to /etc/network/interfaces so they appear in resolv.conf after running the config pod.
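    A minimal sketch of that change, assuming the host uses ifupdown/resolvconf and the primary interface is named ens160 (the interface name is an assumption - check with ip addr):

    # append the k8s search domains to the interface stanza in /etc/network/interfaces, e.g.
    #   dns-search svc.cluster.local cluster.local
    # then bounce the interface so resolvconf regenerates /etc/resolv.conf
    sudo ifdown ens160 && sudo ifup ens160
    grep search /etc/resolv.conf   # the added domains should now be listed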

     

    Issues:

    after these 2 config changes the pods come up within 25 min except policy-drools which takes 45 min (on 1 machine but not the other) and sdnc-portal (which is having issues with some npm/node module downloads)

    root@obriensystemskub0:~/oom/kubernetes/oneclick# kubectl --namespace onap-sdnc logs -f sdnc-portal-3375812606-01s1d | grep ERR
    npm ERR! fetch failed 
    https://registry.npmjs.org/is-utf8/-/is-utf8-0.2.1.tgz

     

    I’ll look at instantiating the vFirewall VM’s and integrating DCAE next.

     

    on 5820k 4.1GHz 12 vCores 48g Ubuntu 16.04.2 VM on 64g host

    root@obriensystemskub0:~/oom/kubernetes/oneclick# kubectl get pods --all-namespaces -o wide

    NAMESPACE             NAME                                    READY     STATUS    RESTARTS   AGE       IP              NODE

    kube-system           heapster-859001963-bmlff                1/1       Running   5          43m       10.42.143.118   obriensystemskub0

    kube-system           kube-dns-1759312207-0x1xx               3/3       Running   8          43m       10.42.246.144   obriensystemskub0

    kube-system           kubernetes-dashboard-2463885659-jl5jf   1/1       Running   5          43m       10.42.117.156   obriensystemskub0

    kube-system           monitoring-grafana-1177217109-7gkl6     1/1       Running   4          43m       10.42.79.40     obriensystemskub0

    kube-system           monitoring-influxdb-1954867534-8nr2q    1/1       Running   5          43m       10.42.146.215   obriensystemskub0

    kube-system           tiller-deploy-1933461550-w77c5          1/1       Running   4          43m       10.42.1.66      obriensystemskub0

    onap-aai              aai-service-301900780-wp3w1             1/1       Running   0          25m       10.42.104.101   obriensystemskub0

    onap-aai              hbase-2985919495-zfs2c                  1/1       Running   0          25m       10.42.208.135   obriensystemskub0

    onap-aai              model-loader-service-2352751609-4qb0x   1/1       Running   0          25m       10.42.25.139    obriensystemskub0

    onap-appc             appc-4266112350-gscxh                   1/1       Running   0          25m       10.42.90.128    obriensystemskub0

    onap-appc             appc-dbhost-981835105-lp6tn             1/1       Running   0          25m       10.42.201.58    obriensystemskub0

    onap-appc             appc-dgbuilder-939982213-41znl          1/1       Running   0          25m       10.42.30.127    obriensystemskub0

    onap-message-router   dmaap-1381770224-c5xp8                  1/1       Running   0          25m       10.42.133.232   obriensystemskub0

    onap-message-router   global-kafka-3488253347-zt8x9           1/1       Running   0          25m       10.42.235.227   obriensystemskub0

    onap-message-router   zookeeper-3757672320-bxkvs              1/1       Running   0          25m       10.42.14.4      obriensystemskub0

    onap-mso              mariadb-2610811658-r22z9                1/1       Running   0          25m       10.42.46.110    obriensystemskub0

    onap-mso              mso-2217182437-1r8fm                    1/1       Running   0          25m       10.42.120.204   obriensystemskub0

    onap-policy           brmsgw-554754608-gssf8                  1/1       Running   0          25m       10.42.84.128    obriensystemskub0

    onap-policy           drools-1184532483-kg8sr                 1/1       Running   0          25m       10.42.62.198    obriensystemskub0

    onap-policy           mariadb-546348828-1ck21                 1/1       Running   0          25m       10.42.118.120   obriensystemskub0

    onap-policy           nexus-2933631225-s1qjz                  1/1       Running   0          25m       10.42.73.217    obriensystemskub0

    onap-policy           pap-235069217-qdf2r                     1/1       Running   0          25m       10.42.157.211   obriensystemskub0

    onap-policy           pdp-819476266-zvncc                     1/1       Running   0          25m       10.42.38.47     obriensystemskub0

    onap-policy           pypdp-3646772508-n801j                  1/1       Running   0          25m       10.42.244.206   obriensystemskub0

    onap-portal           portalapps-157357486-gjnnc              1/1       Running   0          25m       10.42.83.144    obriensystemskub0

    onap-portal           portaldb-351714684-1n956                1/1       Running   0          25m       10.42.8.80      obriensystemskub0

    onap-portal           vnc-portal-1027553126-h6dhd             1/1       Running   0          25m       10.42.129.60    obriensystemskub0

    onap-robot            robot-44708506-t10kk                    1/1       Running   0          31m       10.42.185.118   obriensystemskub0

    onap-sdc              sdc-be-4018435632-3k6k2                 1/1       Running   0          25m       10.42.246.193   obriensystemskub0

    onap-sdc              sdc-cs-2973656688-kktn8                 1/1       Running   0          25m       10.42.240.176   obriensystemskub0

    onap-sdc              sdc-es-2628312921-bg0dg                 1/1       Running   0          25m       10.42.67.214    obriensystemskub0

    onap-sdc              sdc-fe-4051669116-3b9bh                 1/1       Running   0          25m       10.42.42.203    obriensystemskub0

    onap-sdc              sdc-kb-4011398457-fgpkl                 1/1       Running   0          25m       10.42.47.218    obriensystemskub0

    onap-sdnc             sdnc-1672832555-1h4s7                   1/1       Running   0          25m       10.42.120.148   obriensystemskub0

    onap-sdnc             sdnc-dbhost-2119410126-48mt9            1/1       Running   0          25m       10.42.133.166   obriensystemskub0

    onap-sdnc             sdnc-dgbuilder-730191098-gj6g9          1/1       Running   0          25m       10.42.154.99    obriensystemskub0

    onap-sdnc             sdnc-portal-3375812606-01s1d            0/1       Running   0          25m       10.42.105.164   obriensystemskub0

    onap-vid              vid-mariadb-1357170716-vnmhr            1/1       Running   0          28m       10.42.218.225   obriensystemskub0

    onap-vid              vid-server-248645937-m67r9              1/1       Running   0          28m       10.42.227.81    obriensystemskub0

  6. Michael OBrien I've got to the point where I can access the portal login page, but after inputting the credentials it keeps redirecting to port 8989 and fails, instead of using the external mapped port (30215 in my case) - any thoughts?

    I'm running on GCE with 40GB and only running sdc, message-router and portal for now.


    1. Nagaraja,  yes good question.  I actually have been able to get to the point of running portal - as the 1.0.0 system is pretty stable now

      onap-portal           portalapps             255.255.255.255   <nodes>       8006:30213/TCP,8010:30214/TCP,8989:30215/TCP                                 2h

      I was recording a demo and ran into the same issue - I will raise a JIRA as we fix this and post here

      http://portal.api.simpledemo.openecomp.org:30215/ECOMPPORTAL/login.htm

      redirects to

      Request URL:

      http://portal.api.simpledemo.openecomp.org:8989/ECOMPPORTAL/applicationsHome


      because of hardcoded parameters like the following in the Dockerfile

      ENV VID_ECOMP_REDIRECT_URL http://portal.api.simpledemo.openecomp.org:8989/ECOMPPORTAL/login.htm
      ENV VID_ECOMP_REST_URL http://portal.api.simpledemo.openecomp.org:8989/ECOMPPORTAL/auxapi
      ENV VID_ECOMP_SHARED_CONTEXT_REST_URL http://portal.api.simpledemo.openecomp.org:8989/ECOMPPORTAL/context



  7. Are you accessing the ECOMP Portal via the 'onap-portal  vnc-portal-1027553126-h6dhd' container? 

    This container was added to the standard ONAP deployment so one may VNC into the ONAP deployment instance (namespace) and have networking fully resolved within K8s.

    1. Mike,  Was just writing a question to you - yes looks like I am using the wrong container - reworking now

      thank you

      1. Nagaraja,

           Portal access via the vnc-portal container (port 30211) is documented above now in

        Running ONAP using the vnc-portal

        /michael

  8. Hi all,

    I am new to this kubernetes installation of ONAP and am installing ONAP components one by one on my VM (due to memory constraints)

    I want to see if the PODs are working fine

    I launched the robot component:

    onap-robot robot-1597903591-1tx35 1/1 Running 0 2h 10.42.104.187 localhost


    and logged in to same via

    kubectl -n onap-robot exec -it robot-1597903591-1tx35 /bin/bash

    Now, do I need to mount some directory to see the containers, and how will the docker processes run in them?


    BR/

    VC

    1. The docker processes are not running on their own, maybe due to the proxied internet connection being used. I am trying to run the install and setup manually by logging in to each component.

      1. Vaibhav,

           Hi, there are a combination of files - some are in the container itself - see /var/opt

           some are off the shared file system on the host - see /dockerdata-nfs

           In the case of robot - you have spun up one pod - each pod has a single docker container, to see the other pods/containers - kubectl into each like you have into robot - just change the pod name.  kubectl is an abstraction on top of docker - so you don't need to directly access docker containers.

        /michael

        1. Vaibhav, if you are trying to see the status of the pod or look at the log file, you can also do it through the Rancher / Kubernetes dashboard:

        2. Hi Michael,

          Yes, I can see the mounted directories and found robot_install.sh in /var/opt/OpenECOMP_ETE/demo/boot

          On the K8s dashboard and CLI the pods are in the running state, but when I log in to any of them (via kubectl) I am unable to see any docker processes running via docker ps (docker itself is not even installed inside the container).

          I think this is ideally taken care of by the pod itself, right? Or do we need to go inside each component and run that component's installation script?


          BR/

          VC

          1. Vaibhav,  Hi, the architecture of kubernetes is such that it manages docker containers - we are not running docker on docker.  Docker ps will only be possible on the host machine(s)/vm(s) that kubernetes is running on - you will see the wrapper docker containers running the kubernetes and rancher undercloud.

            When you "kubectl exec -it" - into a pod you have entered a docker container the same as a "docker exec -it"  at that point you are in a container process, try doing a "ps -ef | grep java" to see if a java process is running for example.  Note that by the nature of docker most containers will have a minimal linux install - so some do not include the ps command for example.

            If you check the instructions above you will see the first step is to install docker 1.12 only on the host - as you end up with 1 or more hosts running a set of docker containers after ./createAll.bash finishes

            example - try the mso jboss container - it is one of the heavyweight containers


            root@ip-172-31-93-122:~# kubectl -n onap-mso exec -it mso-371905462-w0mcj  bash

            root@mso-371905462-w0mcj:/# ps -ef | grep java

            root      1920  1844  0 Aug27 ?        00:28:33 java -D[Standalone] -server -Xms64m -Xmx512m -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true -Xms64m -Xmx4g -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=1g -Djboss.bind.address=0.0.0.0 -Djboss.bind.address.management=0.0.0.0 -Dmso.db=MARIADB -Dmso.config.path=/etc/mso/config.d/ -Dorg.jboss.boot.log.file=/opt/jboss/standalone/log/server.log -Dlogging.configuration=file:/opt/jboss/standalone/configuration/logging.properties -jar /opt/jboss/jboss-modules.jar -mp /opt/jboss/modules org.jboss.as.standalone -Djboss.home.dir=/opt/jboss -Djboss.server.base.dir=/opt/jboss/standalone -c standalone-full-ha-mso.xml

            if you want to see the k8s wrapped containers - do a docker ps on the host

            root@ip-172-31-93-122:~# docker ps | grep mso

            9fed2b7ebd1d        nexus3.onap.org:10001/openecomp/mso@sha256:ab3a447956577a0f339751fb63cc2659e58b9f5290852a90f09f7ed426835abe                    "/docker-files/script"   4 days ago          Up 4 days                                                k8s_mso_mso-371905462-w0mcj_onap-mso_11da22bf-8b3d-11e7-9e1a-0289899d0a5f_0

            e4171a2b73d8        nexus3.onap.org:10001/mariadb@sha256:3821f92155bf4311a59b7ec6219b79cbf9a42c75805000a7c8fe5d9f3ad28276                          "/docker-entrypoint.s"   4 days ago          Up 4 days                                                k8s_mariadb_mariadb-786536066-87g9d_onap-mso_11bc6958-8b3d-11e7-9e1a-0289899d0a5f_0

            8ba86442fbde        gcr.io/google_containers/pause-amd64:3.0                                                                                       "/pause"                 4 days ago          Up 4 days                                                k8s_POD_mso-371905462-w0mcj_onap-mso_11da22bf-8b3d-11e7-9e1a-0289899d0a5f_0

            f099c5613bf1        gcr.io/google_containers/pause-amd64:3.0                                                                                       "/pause"                 4 days ago          Up 4 days                                                k8s_POD_mariadb-786536066-87g9d_onap-mso_11bc6958-8b3d-11e7-9e1a-0289899d0a5f_0

             

  9. Hi all,
    I am new to the kubernetes installation of ONAP and have problems cloning the onap repository.
    I have tried git clone -b release-1.0.0 http://gerrit.onap.org/r/oom
    but ended up with the following error
    fatal: unable to access 'http://gerrit.onap.org/r/oom/': The requested URL returned error: 403

    I also tried to use ssh git clone -b release-1.0.0 ssh://cnleng@gerrit.onap.org:29418/oom
    but I cannot access settings on https://gerrit.onap.org (Already have an account on Linux foundation) to copy my ssh keys
    Any help will be appreciated.
    Thanks

    1. 403 in your case might be due to your proxy or firewall - check access away from your company if possible


      Verified the URL

      root@ip-172-31-90-90:~/test# git clone -b release-1.0.0 http://gerrit.onap.org/r/oom

      Cloning into 'oom'...

      remote: Counting objects: 896, done

      remote: Finding sources: 100% (262/262)

      remote: Total 1701 (delta 96), reused 1667 (delta 96)

      Receiving objects: 100% (1701/1701), 1.08 MiB | 811.00 KiB/s, done.

      Resolving deltas: 100% (588/588), done.

      Checking connectivity... done.


      If you login to gerrit and navigate to the oom project, it will supply you with anonymous http, https and ssh urls - try each of them; they should work.


  10. Hi, I am trying to install ONAP components though oom, but getting the following errors:

    Search Line limits were exceeded, some dns names have been omitted, the applied search line is: onap-appc.svc.cluster.local svc.cluster.local cluster.local kubelet.kubernetes.rancher.internal kubernetes.rancher.internal rancher.internal

    I tried to edit /etc/resolv.conf according to Michael's comment above:

    nameserver <server ip>

    search localdomain service.ns.svc.cluster.local

    but it does not seem to help


    Please advise how to resolve this DNS issue

    Thanks

    Geora





    1. Geora, hi, that is a red herring unfortunately - there is a bug in rancher where more than 5 domains are added to the search tree - you can ignore these warnings - the resolv.conf edit turned out to have no effect and has been removed (it only survives in the comment history)

      https://github.com/rancher/rancher/issues/9303

      /michael

  11. todo: update table/diagram on aai for 1.1 coming in

    root@obriensystemskub0:~/11/oom/kubernetes/oneclick# kubectl get pods --all-namespaces -a

    NAMESPACE             NAME                                    READY     STATUS             RESTARTS   AGE

    default               config-init                             0/1       Completed          0          45d

    kube-system           heapster-859001963-kz210                1/1       Running            5          46d

    kube-system           kube-dns-1759312207-jd5tf               3/3       Running            8          46d

    kube-system           kubernetes-dashboard-2463885659-xv986   1/1       Running            4          46d

    kube-system           monitoring-grafana-1177217109-sm5nq     1/1       Running            4          46d

    kube-system           monitoring-influxdb-1954867534-vvb84    1/1       Running            4          46d

    kube-system           tiller-deploy-1933461550-gdxch          1/1       Running            4          46d

    onap                  config-init                             0/1       Completed          0          1h

    onap-aai              aai-dmaap-2612279050-g4qjj              1/1       Running            0          1h

    onap-aai              aai-kafka-3336540298-kshzc              1/1       Running            0          1h

    onap-aai              aai-resources-2582573456-n1v1q          0/1       CrashLoopBackOff   9          1h

    onap-aai              aai-service-3847504356-03rk2            0/1       Init:0/1           3          1h

    onap-aai              aai-traversal-1020522763-njrw7          0/1       Completed          10         1h

    onap-aai              aai-zookeeper-3839400401-160pk          1/1       Running            0          1h

    onap-aai              data-router-1134329636-f5g2j            1/1       Running            0          1h

    onap-aai              elasticsearch-2888468814-4pmgd          1/1       Running            0          1h

    onap-aai              gremlin-1948549042-j56p9                0/1       CrashLoopBackOff   7          1h

    onap-aai              hbase-1088118705-f29c1                  1/1       Running            0          1h

    onap-aai              model-loader-service-784161734-3njbr    1/1       Running            0          1h

    onap-aai              search-data-service-237180539-0sj6c     1/1       Running            0          1h

    onap-aai              sparky-be-3826115676-c2wls              1/1       Running            0          1h

    onap-appc             appc-2493901092-041m9                   1/1       Running            0          1h

    onap-appc             appc-dbhost-3869943665-5d0vb            1/1       Running            0          1h

    onap-appc             appc-dgbuilder-2279934547-t2qqx         0/1       Running            1          1h

    onap-message-router   dmaap-3009751734-w59nn                  1/1       Running            0          1h

    onap-message-router   global-kafka-1350602254-f8vj6           1/1       Running            0          1h

    onap-message-router   zookeeper-2151387536-sw7bn              1/1       Running            0          1h

    onap-mso              mariadb-3820739445-qjrmn                1/1       Running            0          1h

    onap-mso              mso-278039889-4379l                     1/1       Running            0          1h

    onap-policy           brmsgw-1958800448-p855b                 1/1       Running            0          1h

    onap-policy           drools-3844182126-31hmg                 0/1       Running            0          1h

    onap-policy           mariadb-2047126225-4hpdb                1/1       Running            0          1h

    onap-policy           nexus-851489966-h1l4b                   1/1       Running            0          1h

    onap-policy           pap-2713970993-kgssq                    1/1       Running            0          1h

    onap-policy           pdp-3122086202-dqfz6                    1/1       Running            0          1h

    onap-policy           pypdp-1774542636-vp3tt                  1/1       Running            0          1h

    onap-portal           portalapps-2603614056-4030t             1/1       Running            0          1h

    onap-portal           portaldb-122537869-8h4hd                1/1       Running            0          1h

    onap-portal           portalwidgets-3462939811-9rwtl          1/1       Running            0          1h

    onap-portal           vnc-portal-2396634521-7zlvf             0/1       Init:2/5           3          1h

    onap-robot            robot-2697244605-cbkzp                  1/1       Running            0          1h

    onap-sdc              sdc-be-2266987346-r321s                 0/1       Running            0          1h

    onap-sdc              sdc-cs-1003908407-46k1q                 1/1       Running            0          1h

    onap-sdc              sdc-es-640345632-7ldhv                  1/1       Running            0          1h

    onap-sdc              sdc-fe-783913977-ccg59                  0/1       Init:0/1           3          1h

    onap-sdc              sdc-kb-1525226917-j2n48                 1/1       Running            0          1h

    onap-sdnc             sdnc-2490795740-pfwdz                   1/1       Running            0          1h

    onap-sdnc             sdnc-dbhost-2647239646-5spg0            1/1       Running            0          1h

    onap-sdnc             sdnc-dgbuilder-1138876857-1b40z         0/1       Running            0          1h

    onap-sdnc             sdnc-portal-3897220020-0tt9t            0/1       Running            1          1h

    onap-vid              vid-mariadb-2479414751-n33qf            1/1       Running            0          1h

    onap-vid              vid-server-1654857885-jd1jc             1/1       Running            0          1h



    20170902 update - everything up (minus to-be-merged-dcae)

    root@ip-172-31-93-160:~/oom/kubernetes/oneclick# kubectl get pods --all-namespaces -a | grep 0/1

    onap                  config-init                            0/1       Completed         0          21m

    onap-aai              aai-service-3321436576-2snd6           0/1       PodInitializing   0          18m

    onap-policy           drools-3066421234-rbpr9                0/1       Init:0/1          1          18m

    onap-portal           vnc-portal-700404418-r61hm             0/1       Init:2/5          1          18m

    onap-sdc              sdc-fe-3467675014-v8jxm                0/1       Running           0          18m

    root@ip-172-31-93-160:~/oom/kubernetes/oneclick# kubectl get pods --all-namespaces | grep 0/1

    root@ip-172-31-93-160:~/oom/kubernetes/oneclick# kubectl get pods --all-namespaces

    NAMESPACE             NAME                                   READY     STATUS    RESTARTS   AGE

    kube-system           heapster-4285517626-7wdct              1/1       Running   0          1d

    kube-system           kube-dns-2514474280-kmd6v              3/3       Running   3          1d

    kube-system           kubernetes-dashboard-716739405-xxn5k   1/1       Running   0          1d

    kube-system           monitoring-grafana-3552275057-hvfw8    1/1       Running   0          1d

    kube-system           monitoring-influxdb-4110454889-7s5fj   1/1       Running   0          1d

    kube-system           tiller-deploy-737598192-jpggg          1/1       Running   0          1d

    onap-aai              aai-dmaap-522748218-5rw0v              1/1       Running   0          21m

    onap-aai              aai-kafka-2485280328-6264m             1/1       Running   0          21m

    onap-aai              aai-resources-3302599602-fn4xm         1/1       Running   0          21m

    onap-aai              aai-service-3321436576-2snd6           1/1       Running   0          21m

    onap-aai              aai-traversal-2747464563-3c8m7         1/1       Running   0          21m

    onap-aai              aai-zookeeper-1010977228-l2h3h         1/1       Running   0          21m

    onap-aai              data-router-1397019010-t60wm           1/1       Running   0          21m

    onap-aai              elasticsearch-2660384851-k4txd         1/1       Running   0          21m

    onap-aai              gremlin-1786175088-m39jb               1/1       Running   0          21m

    onap-aai              hbase-3880914143-vp8zk                 1/1       Running   0          21m

    onap-aai              model-loader-service-226363973-wx6s3   1/1       Running   0          21m

    onap-aai              search-data-service-1212351515-q4k68   1/1       Running   0          21m

    onap-aai              sparky-be-2088640323-h2pbx             1/1       Running   0          21m

    onap-appc             appc-1972362106-4zqh8                  1/1       Running   0          21m

    onap-appc             appc-dbhost-2280647936-s041d           1/1       Running   0          21m

    onap-appc             appc-dgbuilder-2616852186-g9sng        1/1       Running   0          21m

    onap-message-router   dmaap-3565545912-w5lp4                 1/1       Running   0          21m

    onap-message-router   global-kafka-701218468-091rt           1/1       Running   0          21m

    onap-message-router   zookeeper-555686225-vdp8w              1/1       Running   0          21m

    onap-mso              mariadb-2814112212-zs7lk               1/1       Running   0          21m

    onap-mso              mso-2505152907-xdhmb                   1/1       Running   0          21m

    onap-policy           brmsgw-362208961-ks6jb                 1/1       Running   0          21m

    onap-policy           drools-3066421234-rbpr9                1/1       Running   0          21m

    onap-policy           mariadb-2520934092-3jcw3               1/1       Running   0          21m

    onap-policy           nexus-3248078429-4k29f                 1/1       Running   0          21m

    onap-policy           pap-4199568361-p3h0p                   1/1       Running   0          21m

    onap-policy           pdp-785329082-3c8m5                    1/1       Running   0          21m

    onap-policy           pypdp-3381312488-q2z8t                 1/1       Running   0          21m

    onap-portal           portalapps-2799319019-00qhb            1/1       Running   0          21m

    onap-portal           portaldb-1564561994-50mv0              1/1       Running   0          21m

    onap-portal           portalwidgets-1728801515-r825g         1/1       Running   0          21m

    onap-portal           vnc-portal-700404418-r61hm             1/1       Running   0          21m

    onap-robot            robot-349535534-lqsvp                  1/1       Running   0          21m

    onap-sdc              sdc-be-1839962017-n3hx3                1/1       Running   0          21m

    onap-sdc              sdc-cs-2640808243-tc9ck                1/1       Running   0          21m

    onap-sdc              sdc-es-227943957-f6nfv                 1/1       Running   0          21m

    onap-sdc              sdc-fe-3467675014-v8jxm                1/1       Running   0          21m

    onap-sdc              sdc-kb-1998598941-57nj1                1/1       Running   0          21m

    onap-sdnc             sdnc-250717546-xmrmw                   1/1       Running   0          21m

    onap-sdnc             sdnc-dbhost-3807967487-tdr91           1/1       Running   0          21m

    onap-sdnc             sdnc-dgbuilder-3446959187-dn07m        1/1       Running   0          21m

    onap-sdnc             sdnc-portal-4253352894-hx9v8           1/1       Running   0          21m

    onap-vid              vid-mariadb-2932072366-n5qw1           1/1       Running   0          21m

    onap-vid              vid-server-377438368-kn6x4             1/1       Running   0          21m

    root@ip-172-31-93-160:~/oom/kubernetes/oneclick# kubectl get pods --all-namespaces | grep 0/1


    health passes except for to-be-merged dcae

    root@ip-172-31-93-160:/dockerdata-nfs/onap/robot# ls

    authorization  demo-docker.sh  demo-k8s.sh  ete-docker.sh  ete-k8s.sh  eteshare  robot

    root@ip-172-31-93-160:/dockerdata-nfs/onap/robot# ./ete-docker.sh health

    ------------------------------------------------------------------------------

    Basic SDNGC Health Check                                              | PASS |

    ------------------------------------------------------------------------------

    Basic A&AI Health Check                                               | PASS |

    ------------------------------------------------------------------------------

    Basic Policy Health Check                                             | PASS |

    ------------------------------------------------------------------------------

    Basic MSO Health Check                                                | PASS |

    ------------------------------------------------------------------------------

    Basic ASDC Health Check                                               | PASS |

    ------------------------------------------------------------------------------

    Basic APPC Health Check                                               | PASS |

    ------------------------------------------------------------------------------

    Basic Portal Health Check                                             | PASS |

    ------------------------------------------------------------------------------

    Basic Message Router Health Check                                     | PASS |

    ------------------------------------------------------------------------------

    Basic VID Health Check                                                | PASS |


  12. Has anyone managed to run ONAP on Kubernetes with more than one node? I'm unclear about how the /dockerdata-nfs volume mount works in the case of multiple nodes.

    1) In my azure setup I have one master node and 4 agent nodes (Standard D3 - 4 CPU / 14GB). After running the config-init pod (and it completing) I do not see the /dockerdata-nfs directory being created on the master node, and I am not sure how to check this directory on all the agent nodes. Is this directory expected to be created on all the agent nodes? If so, are they kept synchronized?

    2) After the cluster is restarted there is a possibility that pods will run on a different set of nodes, so if /dockerdata-nfs is not kept in sync between the agent nodes the data will not be persisted.


    ps: I did not use rancher; I created the k8s cluster using acs-engine.


    1. Hi nagaraja,

      The mounting of the shared dockerdata-nfs volume does not appear to happen automatically. You can install nfs-kernel-server and mount a shared drive manually. If you are running rancher on the master node (the one with the files in the /dockerdata-nfs directory), mount that directory on the agent nodes:

      On Master:

      # apt-get install nfs-kernel-server 

      Modify /etc/exports to share directory from master to agent nodes

      # vi /etc/exports 

      #systemctl restart nfs-kernel-server


      On client nodes:

      #apt-get install nfs-common

      delete existing data:

      #rm -fr dockerdata-nfs/

      #mkdir -p /dockerdata-nfs

      #mount <master ip>:/dockerdata-nfs/ /dockerdata-nfs/
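      A minimal sketch of the export entry on the master, assuming the agent nodes sit on a 10.0.0.0/24 subnet (the subnet and mount options are assumptions - adjust to your environment):

      # on the master - share /dockerdata-nfs read-write, preserving root access for the containers
      echo '/dockerdata-nfs 10.0.0.0/24(rw,sync,no_root_squash,no_subtree_check)' | sudo tee -a /etc/exports
      sudo exportfs -ra                  # re-read /etc/exports
      # on each agent node - confirm the export is visible before mounting
      showmount -e <master ip>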

  13. Hi All,

    I am trying to install ONAP on Kubernetes and I got the following error while trying to run ./createConfig.sh -n onap command:


    sudo: unable to execute ./createConfig.sh: No such file or directory
    Hangup


    Does anyone have an idea? (kubernetes /helm is already up and running)

    Thanks,




    1. Please check whether the file is in DOS format; you might want to run dos2unix on it (and the other scripts).

      1. Thank you for your help. Indeed this was the cause of the problem.


    2. we need to 755 the file - it was committed with the wrong permissions to the 1.0.0 branch

      OOM-218 - Getting issue details... STATUS

      the instructions reference this.

      % chmod 777 createConfig.sh    (1.0 branch only)
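      A minimal sketch of both fixes together (line endings plus permissions), assuming the clone sits at ~/oom - the path is an assumption:

      cd ~/oom/kubernetes/config
      dos2unix createConfig.sh      # strip DOS/CRLF line endings
      chmod 755 createConfig.sh     # restore execute permission (1.0 branch only)
      ./createConfig.sh -n onap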

  14. Hi All,

    I am trying to install ONAP on Kubernetes and I got the following error while trying to run ./createAll.bash -n onap -a robot|appc|aai command:

    Command 'mppc' from package 'makepp' (universe)
    Command 'ppc' from package 'pearpc' (universe)
    appc: command not found
    No command 'aai' found, did you mean:
    Command 'axi' from package 'afnix' (universe)
    Command 'ali' from package 'nmh' (universe)
    Command 'ali' from package 'mailutils-mh' (universe)
    Command 'aa' from package 'astronomical-almanac' (universe)
    Command 'fai' from package 'fai-client' (universe)
    Command 'cai' from package 'emboss' (universe)
    aai: command not found



    Does anyone have an idea? (kubernetes /helm is already up and running)

    Thanks,

  15. You need to run the command for each ONAP component one by one - the unquoted "|" characters in "robot|appc|aai" are interpreted by the shell as pipes, which is why it tried to execute appc and aai as commands.

    i.e, ./createAll.bash -n onap -a robot

    when that's completed, 

    ./createAll.bash -n onap -a aai

    ./createAll.bash -n onap -a appc

    and so on for each onap component you wish to install.
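    If you want to script it, a minimal sketch that loops over the components one at a time (the component list here is just an example subset):

    cd ~/oom/kubernetes/oneclick
    for app in robot aai appc; do
      ./createAll.bash -n onap -a "$app"
    done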

    1. Thanks for the help,

      but right now it looks like Kubernetes is not able to pull an image from registry

      kubectl get pods --all-namespaces -a

      NAMESPACE     NAME                                  READY     STATUS             RESTARTS   AGE

      onap-robot    robot-3494393958-8fl0q                0/1       ImagePullBackOff   0          5m


      Do you have any idea why?

      1. There was an issue (happens periodically) with the nexus3 repo.

        Also check that you are not having proxy issues.

        Usually we post the ONAP partner we are with either via our email or on our profile - thank you in advance.

        /michael

  16. Hi All,

    I am trying to install ONAP on Kubernetes and I got the following behavior while trying to run the ./createAll.bash -n onap -a robot|appc|aai command:



    but right now it looks like Kubernetes is not able to pull an image from registry


    kubectl get pods --all-namespaces -a


    NAMESPACE     NAME                                  READY     STATUS             RESTARTS   AGE


    onap-robot    robot-3494393958-8fl0q                0/1       ImagePullBackOff   0          5m




    Do you have any idea why?


  17. Hi, F. Michael O'Brien. I am trying to install ONAP following the steps above and encountered a problem.

    The hbase pod in kubernetes returns "Readiness probe failed: dial tcp 10.42.76.162:8020: getsockopt: connection refused". It seems the hbase service did not start as expected. The container named hbase logs the following in Rancher:

    Starting namenodes on [hbase]
    hbase: chown: missing operand after '/opt/hadoop-2.7.2/logs'
    hbase: Try 'chown --help' for more information.
    hbase: starting namenode, logging to /opt/hadoop-2.7.2/logs/hadoop--namenode-hbase.out
    localhost: starting datanode, logging to /opt/hadoop-2.7.2/logs/hadoop--datanode-hbase.out
    Starting secondary namenodes [0.0.0.0]
    0.0.0.0: starting secondarynamenode, logging to /opt/hadoop-2.7.2/logs/hadoop--secondarynamenode-hbase.out
    starting zookeeper, logging to /opt/hbase-1.2.3/bin/../logs/hbase--zookeeper-hbase.out
    starting master, logging to /opt/hbase-1.2.3/bin/../logs/hbase--master-hbase.out
    starting regionserver, logging to /opt/hbase-1.2.3/bin/../logs/hbase--1-regionserver-hbase.out

    1. Nexus3 usually has intermittent connection issues - you may have to wait up to 30 min.  Yesterday I was able to bring it up on 3 systems with the 20170906 tag (all outside the firewall)

      I assume MSO (earlier in the startup) worked - so you don't have a proxy issue

      /michael

  18. verified


    root@ip-172-31-93-122:~/oom_20170908/oom/kubernetes/oneclick# kubectl get pods --all-namespaces -a


    NAMESPACE             NAME                                   READY     STATUS      RESTARTS   AGE


    kube-system           heapster-4285517626-q5vns              1/1       Running     3          12d


    kube-system           kube-dns-646531078-tzhbj               3/3       Running     6          12d


    kube-system           kubernetes-dashboard-716739405-zc56m   1/1       Running     3          12d


    kube-system           monitoring-grafana-3552275057-gwcv0    1/1       Running     3          12d


    kube-system           monitoring-influxdb-4110454889-m29w3   1/1       Running     3          12d


    kube-system           tiller-deploy-737598192-rndtq          1/1       Running     3          12d


    onap                  config                                 0/1       Completed   0          10m


    onap-aai              aai-resources-3302599602-6mggg         1/1       Running     0          7m


    onap-aai              aai-service-3321436576-qc7tx           1/1       Running     0          7m


    onap-aai              aai-traversal-2747464563-bvbqn         1/1       Running     0          7m


    onap-aai              data-router-1397019010-d4bh1           1/1       Running     0          7m


    onap-aai              elasticsearch-2660384851-r9v3k         1/1       Running     0          7m


    onap-aai              gremlin-1786175088-q5z1k               1/1       Running     1          7m


    onap-aai              hbase-3880914143-0nn8x                 1/1       Running     0          7m


    onap-aai              model-loader-service-226363973-2wr0k   1/1       Running     0          7m


    onap-aai              search-data-service-1212351515-b04rz   1/1       Running     0          7m


    onap-aai              sparky-be-2088640323-kg4ts             1/1       Running     0          7m


    onap-appc             appc-1972362106-j27bp                  1/1       Running     0          7m


    onap-appc             appc-dbhost-4156477017-13mhs           1/1       Running     0          7m


    onap-appc             appc-dgbuilder-2616852186-4rtxz        1/1       Running     0          7m


    onap-message-router   dmaap-3565545912-nqcs1                 1/1       Running     0          8m


    onap-message-router   global-kafka-3548877108-x4gqb          1/1       Running     0          8m


    onap-message-router   zookeeper-2697330950-6l8ht             1/1       Running     0          8m


    onap-mso              mariadb-2019543522-1jc0v               1/1       Running     0          8m


    onap-mso              mso-2505152907-cj74x                   1/1       Running     0          8m


    onap-policy           brmsgw-3913376880-5v5p4                1/1       Running     0          7m


    onap-policy           drools-873246297-1h059                 1/1       Running     0          7m


    onap-policy           mariadb-922099840-qbpj7                1/1       Running     0          7m


    onap-policy           nexus-2268491532-pqt8t                 1/1       Running     0          7m


    onap-policy           pap-1694585402-7mdtg                   1/1       Running     0          7m


    onap-policy           pdp-3638368335-zptqk                   1/1       Running     0          7m


    onap-portal           portalapps-2799319019-twhn2            1/1       Running     0          8m


    onap-portal           portaldb-2714869748-bt1c8              1/1       Running     0          8m


    onap-portal           portalwidgets-1728801515-gr616         1/1       Running     0          8m


    onap-portal           vnc-portal-1920917086-s9mj9            1/1       Running     0          8m


    onap-robot            robot-1085296500-jkkln                 1/1       Running     0          8m


    onap-sdc              sdc-be-1839962017-nh4bm                1/1       Running     0          7m


    onap-sdc              sdc-cs-428962321-hhnmk                 1/1       Running     0          7m


    onap-sdc              sdc-es-227943957-mrnng                 1/1       Running     0          7m


    onap-sdc              sdc-fe-3467675014-nq72v                1/1       Running     0          7m


    onap-sdc              sdc-kb-1998598941-2bd73                1/1       Running     0          7m


    onap-sdnc             sdnc-250717546-0dtr7                   1/1       Running     0          8m


    onap-sdnc             sdnc-dbhost-2348786256-96gvr           1/1       Running     0          8m


    onap-sdnc             sdnc-dgbuilder-3446959187-9993t        1/1       Running     0          8m


    onap-sdnc             sdnc-portal-4253352894-sd7mg           1/1       Running     0          8m


    onap-vid              vid-mariadb-2940400992-mmtbn           1/1       Running     0          8m


    onap-vid              vid-server-377438368-z3tfv             1/1       Running     0          8m


    From: onap-discuss-bounces@lists.onap.org [mailto:onap-discuss-bounces@lists.onap.org] On Behalf Of Mandeep Khinda

    Sent: Friday, September 8, 2017 14:36
    To: onap-discuss@lists.onap.org
    Subject: [onap-discuss] [oom] config pod changes


    OOM users,

    I’ve just pushed a change that requires a re-build of the /dockerdata-nfs/onap/ mount on your K8s host.


    Basically, what I’ve tried to do is port over the heat stack version of ONAPs configuration mechanism.  The heat way of running ONAP writes files to /opt/config/ based on the stack’s environment file that has the details related to each users environment.  These values are then swapped in to the various VMs containers using scripts.


    Now that we are using helm for OOM, I was able to do something similar in order to start trying to run the vFW/vLB demo use cases. 

    This story tracks the functionality that was needed: https://jira.onap.org/browse/OOM-277


    I have also been made aware that this change requires K8s 1.6 as I am making use of the “envFrom”  https://kubernetes.io/docs/api-reference/v1.6/#container-v1-core.  We stated earlier that we are setting minimum requirements of K8s 1.7 and rancher 1.6 for OOM so hopefully this isn’t a big issue.


    It boils down to this:

    /oom/kubernetes/config/onap-parameters.yaml is kind of like file “onap_openstackRC.env” and you will need to define some required values otherwise the config pod deployment will fail.


    A sample can be found here:

    /oom/kubernetes/config/onap-parameters-sample.yaml

    Note: If you don’t care about interacting with openstack to launch VNFs then, you can just use the sample file contents.


    continue to run createConfig.sh -n onap and it will install the config files and swap in your environment specific values before it completes.


    createAll.bash -n onap to recreate your ONAP K8s environment and go from there.


    Thx,

    Mandeep

    -- 
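    Putting the steps from Mandeep's note together, a minimal sketch of rebuilding the config with the new parameters file (the values in onap-parameters.yaml are environment-specific; the sample contents are only enough if you are not launching VNFs against openstack):

    cd ~/oom/kubernetes/config
    cp onap-parameters-sample.yaml onap-parameters.yaml   # then edit with your openstack values
    ./createConfig.sh -n onap        # installs the config files and swaps in your values
    cd ../oneclick
    ./createAll.bash -n onap         # recreate the ONAP K8s environment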


  19. Hi, ALL


    1) I am trying to install ONAP on Kubernetes and encountered a problem.

    I created the msb pods first with the command "./createAll.bash -n onap -a msb", then
    created the aai pods with "./createAll.bash -n onap -a aai".
    The problem is that none of the aai serviceNames and urls register with msb as expected.
    I find that the aai project code has these lines:


    msb.onap.org/service-info: '[

     {
    "serviceName": "aai-cloudInfrastructure",
    "version": "v11",
    "url": "/aai/v11/cloud-infrastructure",
    "protocol": "REST",
    "port": "8443",
    "enable_ssl":"True",
    "visualRange":"1"
    },{...}]"

      

    so I think msb cannot support domain names right now?

    2) Also, three of the aai pods cannot be created normally.


     








  20. Hi all,

    Goal: I want to deploy and manage vFirewall router using ONAP.

    I installed ONAP on Kubernetes using oom(release-1.0.0). All Services are running except DCAE as it is not yet completely implemented in Kubernetes. Also, I have an OpenStack cluster configured separately. 

    How can I integrate DCAE to the above Kubernetes cluster?

    Thanks,
    Sathvik M

    1. DCAE is still coming in (1.0 version in 1.1) - this component is an order of magnitude more complex than any other ONAP deployment - you can track


      https://jira.onap.org/browse/OOM-176


      1. DCAE is in OOM Kubernetes as of 20170913

        onap-dcae cdap0-4078069992-ql1fk 1/1 Running 0 41m
        onap-dcae cdap1-4039904165-r8f2v 1/1 Running 0 41m
        onap-dcae cdap2-422364317-827g3 1/1 Running 0 41m
        onap-dcae dcae-collector-common-event-1149898616-1f8vt 1/1 Running 0 41m
        onap-dcae dcae-collector-dmaapbc-1520987080-9drlt 1/1 Running 0 41m
        onap-dcae dcae-controller-2121147148-1kd7f 1/1 Running 0 41m
        onap-dcae dcae-pgaas-2006588677-0wlf1 1/1 Running 0 41m
        onap-dcae dmaap-1927146826-6wt83 1/1 Running 0 41m
        onap-dcae kafka-2590900334-29qsk 1/1 Running 0 41m
        onap-dcae zookeeper-2166102094-4jgw0 1/1 Running 0 41m

        1. That means DCAE is working... Is it available in 1.0 version of OOM or 1.1?

          Thanks,

          Sathvik M


        2. Hi Michael,

          As DCAE is available in OOM 1.1, I started the installation of 1.1. Out of the 10 A&AI containers, 2 of them are not coming up.

          root@node2:~# kubectl get pods --all-namespaces | grep "onap"| grep "0/1"
          onap-aai              aai-service-3869033750-mtfk1                   0/1       Init:0/1   9          2h
          onap-aai              aai-traversal-50329389-4gqz9                   0/1       Running    0          2h
          root@node2:~#


          Repeatedly I am seeing the below prints in

          aai-traversal-50329389-4gqz9_onap-aai_aai-traversal-8df844f689667062e7cffdcfc531c255a170d4721f996e27097d7c45a4474f1f.log


          {"log":"Waiting for resources to be up\n","stream":"stdout","time":"2017-09-21T18:23:53.274547381Z"}
          {"log":"aai-resources.api.simpledemo.openecomp.org: forward host lookup failed: Unknown host\n","stream":"stderr","time":"2017-09-21T18:23:58.279615776Z"}
          {"log":"Waiting for resources to be up\n","stream":"stdout","time":"2017-09-21T18:23:58.279690784Z"}


          Can someone help me fix this issue?


          Thanks,

          Sathvik M

  21. Hi,
    I sense that there is a bit of information lacking here, which I would be happy to acquire.

    There is a file that describes the onap environment, "onap-parameters.yaml". I think it would be good practice to provide guidance on how to fill it in (or how to acquire the values that should reside in it).

    F. Michael O'Brien, any available document about it?

    1. Mor, You are welcome to help us finish the documentation for OOM-277

      The config was changed on Friday - those of us here are playing catch-up on some of the infrastructure changes as we are testing the deploys every couple of days - you are welcome to add to the documentation here - usually the first to encounter an issue/workaround documents it so the rest of us can benefit.

      Most of the content on this tutorial is added by developers like yourself that would like to get OOM deployed and fully functional - at ONAP we self document anything that is missing

      OOM-277 - Getting issue details... STATUS

      There was a section added on Friday for those switching from the old-style config to the new - you run a helm purge (a sketch follows below).
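      A minimal sketch of that switch, with the release name left as a placeholder (the exact name depends on how the old config was installed - check helm list first):

      helm list                                   # find the old config release
      helm delete --purge <old-config-release>    # remove it so the new-style config can be recreated
      cd ~/oom/kubernetes/config && ./createConfig.sh -n onap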

      The configuration parameters will be specific to your rackspace/openstack config - usually you match your rc export.  There is a sample posted from before when it was in the json file in mso - see the screen cap.

      The major issue is that so far no one using pure public ONAP has actually deployed a vFirewall yet (mostly due to stability issues with ONAP that are being fixed)

      ./michael

  22. TODO

    good to go : 20170913:2200h

    root@ip-172-31-57-55:~/oom/kubernetes/oneclick# kubectl get pods --all-namespaces -a | grep 0/1

    onap                  config                                         0/1       Completed         0          37m

    onap-aai              aai-service-3321436576-790w2                   0/1       Init:0/1          1          34m

    onap-aai              aai-traversal-2747464563-pb8ns                 0/1       Running           0          34m

    onap-appc             appc-dgbuilder-2616852186-htwkl                0/1       Running           0          35m

    onap-dcae             dmaap-1927146826-6wt83                         0/1       Running           0          34m

    onap-policy           brmsgw-3913376880-qznzv                        0/1       Init:0/1          1          35m

    onap-policy           drools-873246297-twxtq                         0/1       Init:0/1          1          35m

    onap-policy           pap-1694585402-hwkdk                           0/1       PodInitializing   0          35m

    onap-policy           pdp-3638368335-l00br                           0/1       Init:0/1          1          35m

    onap-portal           vnc-portal-1920917086-0q786                    0/1       Init:1/5          1          35m

    onap-sdc              sdc-be-1839962017-16zc3                        0/1       Init:0/2          1          34m

    onap-sdc              sdc-fe-3467675014-qp7f5                        0/1       Init:0/1          1          34m

    onap-sdc              sdc-kb-1998598941-6z0w2                        0/1       PodInitializing   0          34m

    onap-sdnc             sdnc-dgbuilder-3446959187-lspd6                0/1       Running           0          35m

    root@ip-172-31-57-55:~/oom/kubernetes/oneclick# kubectl get pods --all-namespaces -a | grep 0/1

    onap                  config                                         0/1       Completed         0          39m

    onap-policy           brmsgw-3913376880-qznzv                        0/1       Init:0/1          1          36m

    onap-policy           drools-873246297-twxtq                         0/1       Init:0/1          1          36m

    onap-policy           pdp-3638368335-l00br                           0/1       PodInitializing   0          36m

    onap-portal           vnc-portal-1920917086-0q786                    0/1       Init:2/5          1          36m

    onap-sdc              sdc-be-1839962017-16zc3                        0/1       PodInitializing   0          36m

    onap-sdc              sdc-fe-3467675014-qp7f5                        0/1       Init:0/1          1          36m

    root@ip-172-31-57-55:~/oom/kubernetes/oneclick# kubectl get pods --all-namespaces -a | grep 0/1

    onap                  config                                         0/1       Completed         0          40m

    onap-policy           drools-873246297-twxtq                         0/1       PodInitializing   0          38m

    onap-portal           vnc-portal-1920917086-0q786                    0/1       Init:2/5          1          38m

    onap-sdc              sdc-fe-3467675014-qp7f5                        0/1       Running           0          38m

    root@ip-172-31-57-55:~/oom/kubernetes/oneclick# kubectl get pods --all-namespaces -a | grep 0/1

    onap                  config                                         0/1       Completed         0          41m

    onap-policy           drools-873246297-twxtq                         0/1       PodInitializing   0          39m

    onap-portal           vnc-portal-1920917086-0q786                    0/1       Init:3/5          1          39m

    root@ip-172-31-57-55:~/oom/kubernetes/oneclick# kubectl get pods --all-namespaces -a | grep 0/1

    onap                  config                                         0/1       Completed         0          42m

    onap-policy           drools-873246297-twxtq                         0/1       Running           0          40m

    onap-portal           vnc-portal-1920917086-0q786                    0/1       PodInitializing   0          40m

    root@ip-172-31-57-55:~/oom/kubernetes/oneclick# kubectl get pods --all-namespaces -a | grep 0/1

    onap                  config                                         0/1       Completed         0          42m

    onap-portal           vnc-portal-1920917086-0q786                    0/1       PodInitializing   0          40m

    root@ip-172-31-57-55:~/oom/kubernetes/oneclick# kubectl get pods --all-namespaces -a | grep 0/1

    onap                  config                                         0/1       Completed         0          43m

    onap-portal           vnc-portal-1920917086-0q786                    0/1       PodInitializing   0          40m

    root@ip-172-31-57-55:~/oom/kubernetes/oneclick# kubectl get pods --all-namespaces -a | grep 0/1

    onap                  config                                         0/1       Completed   0          43m

    root@ip-172-31-57-55:~/oom/kubernetes/oneclick# kubectl get pods --all-namespaces 

    NAMESPACE             NAME                                           READY     STATUS    RESTARTS   AGE

    kube-system           heapster-4285517626-7212s                      1/1       Running   1          1d

    kube-system           kube-dns-2514474280-lmr1k                      3/3       Running   3          1d

    kube-system           kubernetes-dashboard-716739405-qfjgd           1/1       Running   1          1d

    kube-system           monitoring-grafana-3552275057-gj3x8            1/1       Running   1          1d

    kube-system           monitoring-influxdb-4110454889-2dq44           1/1       Running   1          1d

    kube-system           tiller-deploy-737598192-46l1m                  1/1       Running   2          1d

    onap-aai              aai-resources-3302599602-c894z                 1/1       Running   0          41m

    onap-aai              aai-service-3321436576-790w2                   1/1       Running   0          41m

    onap-aai              aai-traversal-2747464563-pb8ns                 1/1       Running   0          41m

    onap-aai              data-router-1397019010-fwqmz                   1/1       Running   0          41m

    onap-aai              elasticsearch-2660384851-chf2n                 1/1       Running   0          41m

    onap-aai              gremlin-1786175088-smqgx                       1/1       Running   0          41m

    onap-aai              hbase-3880914143-9cksj                         1/1       Running   0          41m

    onap-aai              model-loader-service-226363973-nlcnm           1/1       Running   0          41m

    onap-aai              search-data-service-1212351515-5wkb2           1/1       Running   0          41m

    onap-aai              sparky-be-2088640323-xs1dg                     1/1       Running   0          41m

    onap-appc             appc-1972362106-lx2t0                          1/1       Running   0          41m

    onap-appc             appc-dbhost-4156477017-9vbf9                   1/1       Running   0          41m

    onap-appc             appc-dgbuilder-2616852186-htwkl                1/1       Running   0          41m

    onap-dcae             cdap0-4078069992-ql1fk                         1/1       Running   0          41m

    onap-dcae             cdap1-4039904165-r8f2v                         1/1       Running   0          41m

    onap-dcae             cdap2-422364317-827g3                          1/1       Running   0          41m

    onap-dcae             dcae-collector-common-event-1149898616-1f8vt   1/1       Running   0          41m

    onap-dcae             dcae-collector-dmaapbc-1520987080-9drlt        1/1       Running   0          41m

    onap-dcae             dcae-controller-2121147148-1kd7f               1/1       Running   0          41m

    onap-dcae             dcae-pgaas-2006588677-0wlf1                    1/1       Running   0          41m

    onap-dcae             dmaap-1927146826-6wt83                         1/1       Running   0          41m

    onap-dcae             kafka-2590900334-29qsk                         1/1       Running   0          41m

    onap-dcae             zookeeper-2166102094-4jgw0                     1/1       Running   0          41m

    onap-message-router   dmaap-3565545912-2f19k                         1/1       Running   0          41m

    onap-message-router   global-kafka-3548877108-ns5v6                  1/1       Running   0          41m

    onap-message-router   zookeeper-2697330950-9fbmf                     1/1       Running   0          41m

    onap-mso              mariadb-2019543522-nqqbz                       1/1       Running   0          41m

    onap-mso              mso-2505152907-pg17g                           1/1       Running   0          41m

    onap-policy           brmsgw-3913376880-qznzv                        1/1       Running   0          41m

    onap-policy           drools-873246297-twxtq                         1/1       Running   0          41m

    onap-policy           mariadb-922099840-x5xsq                        1/1       Running   0          41m

    onap-policy           nexus-2268491532-025jf                         1/1       Running   0          41m

    onap-policy           pap-1694585402-hwkdk                           1/1       Running   0          41m

    onap-policy           pdp-3638368335-l00br                           1/1       Running   0          41m

    onap-portal           portalapps-3572242008-qr51z                    1/1       Running   0          41m

    onap-portal           portaldb-2714869748-wxtvh                      1/1       Running   0          41m

    onap-portal           portalwidgets-1728801515-33bm7                 1/1       Running   0          41m

    onap-portal           vnc-portal-1920917086-0q786                    1/1       Running   0          41m

    onap-robot            robot-1085296500-d3l2g                         1/1       Running   0          41m

    onap-sdc              sdc-be-1839962017-16zc3                        1/1       Running   0          41m

    onap-sdc              sdc-cs-428962321-z87js                         1/1       Running   0          41m

    onap-sdc              sdc-es-227943957-5ssh3                         1/1       Running   0          41m

    onap-sdc              sdc-fe-3467675014-qp7f5                        1/1       Running   0          41m

    onap-sdc              sdc-kb-1998598941-6z0w2                        1/1       Running   0          41m

    onap-sdnc             sdnc-250717546-476sv                           1/1       Running   0          41m

    onap-sdnc             sdnc-dbhost-2348786256-wsf9z                   1/1       Running   0          41m

    onap-sdnc             sdnc-dgbuilder-3446959187-lspd6                1/1       Running   0          41m

    onap-sdnc             sdnc-portal-4253352894-73mzq                   1/1       Running   0          41m

    onap-vid              vid-mariadb-2940400992-twp1r                   1/1       Running   0          41m

    onap-vid              vid-server-377438368-mkgpc                     1/1       Running   0          41m
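
    A convenience variant of the repeated grep above, using the standard watch utility instead of re-typing the command (nothing OOM specific here):

    # poll the pending-pod filter every 15 seconds until only the completed config pod remains
    watch -n 15 'kubectl get pods --all-namespaces -a | grep 0/1'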

  23. Hi,

    I just went through the installation tutorial:

    1 - I am wondering how OpenStack impacts ONAP operations?

    2 - When will the DCAE component be available on Kubernetes?

    Thanks,

    1. DCAE is in OOM Kubernetes as of 20170913

      onap-dcae cdap0-4078069992-ql1fk 1/1 Running 0 41m
      onap-dcae cdap1-4039904165-r8f2v 1/1 Running 0 41m
      onap-dcae cdap2-422364317-827g3 1/1 Running 0 41m
      onap-dcae dcae-collector-common-event-1149898616-1f8vt 1/1 Running 0 41m
      onap-dcae dcae-collector-dmaapbc-1520987080-9drlt 1/1 Running 0 41m
      onap-dcae dcae-controller-2121147148-1kd7f 1/1 Running 0 41m
      onap-dcae dcae-pgaas-2006588677-0wlf1 1/1 Running 0 41m
      onap-dcae dmaap-1927146826-6wt83 1/1 Running 0 41m
      onap-dcae kafka-2590900334-29qsk 1/1 Running 0 41m
      onap-dcae zookeeper-2166102094-4jgw0 1/1 Running 0 41m

      1. Which branch are those changes available in?

  24. Is there any reason for using port 8880 instead of port 8080 when installing Rancher?

    Port 8880 seems to be blocked in our environment and using 8080 was working fine. I hope I will not run into other issues because I am using 8080?

    1. Kiran Kamineni, You can use whatever port you prefer. It should cause no issues.
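
      For context, the port is only the host side of the docker port mapping used when the Rancher server container is started - a minimal sketch, assuming the stock rancher/server image used elsewhere on this page:

      # the host port (left of the colon) is arbitrary - 8880 and 8080 both map to Rancher's internal 8080
      sudo docker run -d --restart=unless-stopped -p 8080:8080 rancher/server
      # then point your browser / host registration at http://<your-host-ip>:8080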


  25. Hi, I managed to install all ONAP components using Kubernetes. They seem to be running and I can access the Portal and authenticate.

    Problem:

    I cannot access SDC - it always gives the error "Sorry, you are not authorized to view this page, contact ...the administrators".

    I tried with all the available users (demo, cs0008, jh0003) but none of them works.

    Can I get a bit of help with this?

    Thanks in advance.

    Mohamed Aly, Aalto University.

    1. Please try accessing it from the VNC. (<your node IP>:30211).

      1. I am having the same issue as Mohamed. I am accessing it via the VNC portal on port 30211

      2. Hi, thank you for your reply. I'm accessing it from the VNC node on port 30211; it still doesn't work and gives the same error.

        Any update on this issue?

          1. First verify that your portal containers are running in K8s (including the vnc-portal). Take note of the 2/2 and 1/1 Ready states. If there is a 0 to the left of those numbers then the container is not fully running.

          kubectl get pods --all-namespaces -o=wide

          onap-portal     portalapps-4168271938-gllr1          2/2 Running
          onap-portal     portaldb-2821262885-rs4qj            2/2 Running
          onap-portal     portalwidgets-1837229812-r8cn2   1/1 Running
          onap-portal     vnc-portal-2366268378-c71z9        1/1 Running


          If the containers (pods) are in a good state, ensure your k8s host has a routable IP address and substitute it into the example URL below:

          http://<ip address>:30211/vnc.html?autoconnect=1&autoscale=0&quality=3
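
          If the portal pods are healthy but SDC still rejects the login, it can also be worth confirming the name resolution inside the vnc-portal container itself (the pod name below is just the example from the listing above - substitute your own):

          # confirm the sdc/portal simpledemo host entries resolve from inside the vnc container
          kubectl -n onap-portal exec -it vnc-portal-2366268378-c71z9 -- cat /etc/hosts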

          1. This is not our problem, thanks anyway.

          2. I am also facing the same issue.

            From the Wireshark logs, the GET /sdc2/rest/version API is having some issues.

            The pods seem to be running fine:

            onap1-portal           portaldb-3931461499-x03wg               2/2       Running            0          1h

            onap1-portal           portalwidgets-3077832546-jz647          1/1       Running            0          1h

            onap1-portal           vnc-portal-3037811218-hj3wj             1/1       Running            0          1h

            onap1-sdc              sdc-be-3901137770-h7d65                 2/2       Running            0          1h

            onap1-sdc              sdc-cs-372240393-kqlw7                  1/1       Running            0          1h

            onap1-sdc              sdc-es-140478562-r1fx9                  1/1       Running            0          1h

            onap1-sdc              sdc-fe-3405834798-pjvkh                 2/2       Running            0          1h

            onap1-sdc              sdc-kb-3782380369-hzb6q                 1/1       Running            0          1h

    2. Any update on this issue?

  26. Hi, is there a page available where we could find any sort of updated list/diagram of the dependencies between the different ONAP components? Also, is there a breakdown of the memory requirements for the various OOM components?

    1. Hi Samuel,

      No official documentation on the dependencies at this point. But a very good idea to add. I will look into doing this.

      For now you can see the dependencies in each of the deployment descriptors, like the AAI traversal example (see below), which depends on the aai-resources and hbase containers before it starts up. In OOM we make use of Kubernetes init-containers and readiness probes to implement the dependencies. This prevents the main container in the deployment descriptor from starting until its dependencies are "ready".


      oom/kubernetes/aai/templates] vi aai-traversal-deployment.yaml


      pod.beta.kubernetes.io/init-containers: '[
        {
          "args": [
            "--container-name",
            "hbase",
            "--container-name",
            "aai-resources"
          ],
          "command": [
            "/root/ready.py"
          ],
          "env": [
            {
              "name": "NAMESPACE",
              "valueFrom": {
                "fieldRef": {
                  "apiVersion": "v1",
                  "fieldPath": "metadata.namespace"
                }
              }
            }
          ],
          "image": "{{ .Values.image.readiness }}",
          "imagePullPolicy": "{{ .Values.pullPolicy }}",
          "name": "aai-traversal-readiness"
        }
      ]'
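
      You can watch a dependency like this being enforced at runtime by describing a pod that is still in an Init state (the pod name is taken from an earlier listing in this thread - use your own):

      # Init:0/1 in "get pods" plus the pod events show which init container is still waiting on a dependency
      kubectl -n onap-aai describe pod aai-traversal-2747464563-pb8ns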

    2. Samuel, to add to the dependency discussion by Mike - ideally I would like to extend the deployment diagram below with the dependencies listed in the YAMLs he refers to.

      The diagram can be edited by anyone - I will take time this week and update it.

      Overall Deployment Architecture#Version1.1.0/R1

      /michael

  27. There seems to be an error in the VFC service definition template when creating all services on Ubuntu 16.04 with 64 GB RAM:

    Creating namespace **********

    namespace "onap-vfc" created

    Creating registry secret **********
    secret "onap-docker-registry-key" created

    Creating deployments and services **********
    Error: yaml: line 27: found unexpected end of stream
    The command helm returned with error code 1


    1. Gopinath,

         Hi, VFC is still a work in progress - the VFC team is working through issues with their containers.  You don't currently need VFC for ONAP to function - you can comment it out of the oneclick/setenv.bash helm line (ideally we would leave out services that are still WIP).

      # full
      HELM_APPS=('consul' 'msb' 'mso' 'message-router' 'sdnc' 'vid' 'robot' 'portal' 'policy' 'appc' 'aai' 'sdc' 'dcaegen2' 'log' 'cli' 'multicloud' 'clamp' 'vnfsdk' 'kube2msb' 'aaf' 'vfc')

      # required
      HELM_APPS=('consul' 'msb' 'mso' 'message-router' 'sdnc' 'vid' 'robot' 'portal' 'policy' 'appc' 'aai' 'sdc' 'dcaegen2' 'log' 'cli')
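
      After trimming the list, a typical re-deploy looks roughly like this (a sketch only - the -n/-a flags are the ones the oneclick scripts take in this release; check the usage text in your checkout):

      cd ~/oom/kubernetes/oneclick
      source setenv.bash                  # pick up the edited HELM_APPS list
      ./deleteAll.bash -n onap -a vfc     # clean up the partially created vfc namespace
      ./createAll.bash -n onap            # bring up the remaining components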


         thank you

         /michael

      1. Hi Michael,

        Checking back to see if the VFC container issues are resolved so that I can continue with the full install, including the other components.

        Thanks!

        Gopinath

  28. Hi,

    I am trying to bring up ONAP using Kubernetes. Can you please tell me whether I should pull only OOM release-1.0.0, or whether a pull from the master branch is also fine, to get ONAP up and running and to run the demo on it?

    Thanks!

    1. Rajesh,  Hi, the latest master is 1.1/R1 - the wiki is now targeting 1.1 - I'll remove the 1.0 link.  Be aware that ONAP in general is undergoing stabilization at this point.
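
      Concretely, after cloning you can list the remote branches and stay on master for 1.1/R1 (plain git, nothing OOM specific):

      cd oom
      git branch -a         # e.g. remotes/origin/master, remotes/origin/release-1.0.0
      git checkout master   # master currently carries the 1.1/R1 content discussed on this page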

      /michael

  29. Hi,

    I am getting the same error as a few people above when accessing SDC: it says I am not authorized to view this page, and it also gives me a 500 error. My initial impression is that this might be because I cannot reach the IP corresponding to sdc.api.simpledemo.openecomp.org in the /etc/hosts file from my vnc container.

    Could anybody confirm if this may cause an issue? And if so, which container/host/service IP should be paired with the sdc url?

    Thanks,

    Sam

    1. Actually, I believe the resolution is correct, as it maps to the sdc-fe service, and if I change the IP to any other service the SDC web page times out. Also, if I curl <sdc-url>:8080 I do get information back. I am still not sure what might be causing this issue. Currently I am looking through the SDC logs for hints, but no luck as of yet.
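
      For anyone comparing notes, the same check can be made explicit against the version API mentioned earlier in this thread (run it from wherever the simpledemo hostname resolves, e.g. inside the vnc container):

      # expect version information back from sdc-fe on 8080
      curl -v http://sdc.api.simpledemo.openecomp.org:8080/sdc2/rest/version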

      1. The request is failing on the sdc-fe side. I posted the outputs of a tcpdump from the sdc-fe container here https://pastebin.com/bA46vqUk 

        1. There are general SDC issues - I'll look them up and paste them. We are also investigating issues with the sdc-be container.

          see

          SDC-451

          and

          INT-106

  30. I am also facing the same SDC issue, due to sdc-es not being ready.

    sdc-es shows the error below in its log.


    [2017-10-11T17:49:50+05:30] INFO: HTTP Request Returned 404 Not Found: Object not found: chefzero://localhost:8889/environments/AUTO
    10/11/2017 5:49:50 PM
    10/11/2017 5:49:50 PM================================================================================
    10/11/2017 5:49:50 PMError expanding the run_list:
    10/11/2017 5:49:50 PM================================================================================
    10/11/2017 5:49:50 PMUnexpected API Request Failure:
    10/11/2017 5:49:50 PM-------------------------------
    10/11/2017 5:49:50 PMObject not found: chefzero://localhost:8889/environments/AUTO
    10/11/2017 5:49:50 PMPlatform:
    10/11/2017 5:49:50 PM---------
    10/11/2017 5:49:50 PMx86_64-linux
    10/11/2017 5:49:50 PM[2017-10-11T17:49:50+05:30] ERROR: Running exception handlers
    10/11/2017 5:49:50 PM[2017-10-11T17:49:50+05:30] ERROR: Exception handlers complete
    10/11/2017 5:49:50 PM[2017-10-11T17:49:50+05:30] FATAL: Stacktrace dumped to /root/chef-solo/cache/chef-stacktrace.out
    10/11/2017 5:49:50 PM[2017-10-11T17:49:50+05:30] FATAL: Please provide the contents of the stacktrace.out file if you file a bug report
    10/11/2017 5:49:50 PM[2017-10-11T17:49:50+05:30] ERROR: 404 "Not Found"
    10/11/2017 5:49:51 PM[2017-10-11T17:49:50+05:30] FATAL: Chef::Exceptions::ChildConvergeError: Chef run process exited unsuccessfully (exit code 1)
    10/11/2017 5:49:52 PM[2017-10-11T17:49:52+05:30] INFO: Started chef-zero at chefzero://localhost:8889 with repository at /root/chef-solo
    10/11/2017 5:49:52 PM One version per cookbook
    10/11/2017 5:49:52 PM[2017-10-11T17:49:52+05:30] INFO: Forking chef instance to converge...
    10/11/2017 5:49:52 PM[2017-10-11T17:49:52+05:30] INFO: *** Chef 12.19.36 ***
    10/11/2017 5:49:52 PM[2017-10-11T17:49:52+05:30] INFO: Platform: x86_64-linux
    10/11/2017 5:49:52 PM[2017-10-11T17:49:52+05:30] INFO: Chef-client pid: 927
    10/11/2017 5:49:53 PM[2017-10-11T17:49:53+05:30] INFO: Setting the run_list to ["role[elasticsearch]"] from CLI options
    10/11/2017 5:49:53 PM[2017-10-11T17:49:53+05:30] WARN: Run List override has been provided.
    10/11/2017 5:49:53 PM[2017-10-11T17:49:53+05:30] WARN: Original Run List: [role[elasticsearch]]
    10/11/2017 5:49:53 PM[2017-10-11T17:49:53+05:30] WARN: Overridden Run List: [recipe[sdc-elasticsearch::ES_6_create_kibana_dashboard_virtualization]]
    10/11/2017 5:49:53 PM[2017-10-11T17:49:53+05:30] INFO: HTTP Request Returned 404 Not Found: Object not found: chefzero://localhost:8889/environments/AUTO
    10/11/2017 5:49:53 PM

    1. R1 is still in RC0 fix mode as we prep for the release - pull yesterday's (the 13th).

      Mandeep's

      https://gerrit.onap.org/r/#/c/18803/

      fixes

      OOM-359

      SDC-451

      and some of

      OOM-110


      Actually, those are for sdc-be. I see a chef error on sdc-es - but the pod starts up OK (need to verify the endpoints though) - also note that this pod is not slated for the ELK filebeat sister container.


      [2017-10-14T11:06:17-05:00] ERROR: cookbook_file[/usr/share/elasticsearch/config/kibana_dashboard_virtualization.json] (sdc-elasticsearch::ES_6_create_kibana_dashboard_virtualization line 1) had an error: Chef::Exceptions::FileNotFound: Cookbook 'sdc-elasticsearch' (0.0.0) does not contain a file at any of these locations:
        files/debian-8.6/kibana_dashboard_virtualization.json
        files/debian/kibana_dashboard_virtualization.json
        files/default/kibana_dashboard_virtualization.json
        files/kibana_dashboard_virtualization.json
      
      This cookbook _does_ contain: ['files/default/dashboard_BI-Dashboard.json','files/default/dashboard_Monitoring-Dashboared.json','files/default/visualization_JVM-used-CPU.json','files/default/visualization_JVM-used-Threads-Num.json','files/default/visualization_number-of-user-accesses.json','files/default/logging.yml','files/default/visualization_JVM-used-Memory.json','files/default/visualization_host-used-Threads-Num.json','files/default/visualization_Show-all-certified-services-ampersand-resources-(per-day).json','files/default/visualization_Show-all-created-Resources-slash-Services-slash-Products.json','files/default/visualization_host-used-CPU.json','files/default/visualization_Show-all-distributed-services.json']
      [2017-10-14T11:06:17-05:00] FATAL: Chef::Exceptions::ChildConvergeError: Chef run process exited unsuccessfully (exit code 1)
      

      getting a chef exit on missing ELK components in sdc-es - even though this one is not slated for the sister filebeat container - likely a reused script across all pods in sdc - will take a look

      see original OOM-110 commit

      https://gerrit.onap.org/r/#/c/15941/1

      We can likely ignore this one in sdc-es - need to check the endpoints though - the pod comes up OK regardless of the failed cookbook.

      root@obriensystemsu0:~/onap/oom/kubernetes/oneclick# kubectl logs -f -n onap-sdc sdc-es-2514443912-nt3r3

      /michael

  31. todo add to devops

    root@obriensystemsu0:~/onap/oom/kubernetes/oneclick# kubectl logs -f -n onap-aai aai-traversal-3982333463-vb89g aai-traversal
    Cloning into 'aai-config'...
    [2017-10-14T10:50:36-05:00] INFO: Started chef-zero at chefzero://localhost:1 with repository at /var/chef/aai-config
      One version per cookbook
      environments at /var/chef/aai-data/environments

    [2017-10-14T10:50:36-05:00] INFO: Forking chef instance to converge...
    Starting Chef Client, version 13.4.24
    [2017-10-14T10:50:36-05:00] INFO: *** Chef 13.4.24 ***
    [2017-10-14T10:50:36-05:00] INFO: Platform: x86_64-linux
    [2017-10-14T10:50:36-05:00] INFO: Chef-client pid: 43
    [

  32. Hi Michael,

    I am trying to set up ONAP using Kubernetes. I am using Rancher to set up the Kubernetes cluster. I have 5 machines with 16 GB of memory each and have configured Kubernetes successfully. When I run createAll.bash to set up the ONAP application, some of the components are configured and running successfully, but some of them fail with an "ImagePullBackOff" error.

    When I try to pull images independently I am able to download them from nexus successfully, but not when running through the createAll script. When I went through the script everything seemed fine, and I am not able to understand what is wrong. Could you please help me understand the issue?

    ~Vijendra

    1. Vijendra,

      Hi, try running the docker pre-pull script on all of your machines first. Also, you may need to duplicate /dockerdata-nfs across all machines - manually or via a shared drive.
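
      A rough sketch of both steps for a multi-node setup - the pre-pull script name/location below is from memory (look for it under oom/kubernetes/config in your checkout), and the NFS mount assumes you have already exported /dockerdata-nfs from the master:

      # on every k8s host: pre-pull the ONAP docker images so startup is not network bound
      nohup ./prepull_docker.sh > prepull.log 2>&1 &

      # simplest shared /dockerdata-nfs: export it from the master, then on each worker:
      sudo mount -t nfs <master-ip>:/dockerdata-nfs /dockerdata-nfs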

         /michael

  33. Hi,

    I started getting an error with the MSO when I redeployed yesterday

    Starting Xvfb on display :88 with res 1280x1024x24
    Executing robot tests at log level TRACE
    ==============================================================================
    OpenECOMP ETE
    ==============================================================================
    OpenECOMP ETE.Robot
    ==============================================================================
    OpenECOMP ETE.Robot.Testsuites
    ==============================================================================
    .

    .

    .

    ------------------------------------------------------------------------------
    Basic SDNGC Health Check | PASS |
    ------------------------------------------------------------------------------
    Basic A&AI Health Check | PASS |
    ------------------------------------------------------------------------------
    Basic Policy Health Check | PASS |
    ------------------------------------------------------------------------------
    Basic MSO Health Check | FAIL |
    503 != 200
    ------------------------------------------------------------------------------
    Basic ASDC Health Check | PASS |
    ------------------------------------------------------------------------------
    Basic APPC Health Check | PASS |
    ------------------------------------------------------------------------------
    Basic Portal Health Check | PASS |
    ------------------------------------------------------------------------------
    Basic Message Router Health Check | PASS |
    ------------------------------------------------------------------------------
    Basic VID Health Check | PASS |
    ------------------------------------------------------------------------------
    Basic Microservice Bus Health Check | FAIL |
    Variable '${MSB_ENDPOINT}' not found. Did you mean:
    ${MSO_ENDPOINT}
    ${MR_ENDPOINT}
    ------------------------------------------------------------------------------
    OpenECOMP ETE.Robot.Testsuites.Health-Check :: Testing ecomp compo... | FAIL |
    11 critical tests, 8 passed, 3 failed
    11 tests total, 8 passed, 3 failed
    ==============================================================================
    OpenECOMP ETE.Robot.Testsuites | FAIL |
    11 critical tests, 8 passed, 3 failed
    11 tests total, 8 passed, 3 failed
    ==============================================================================
    OpenECOMP ETE.Robot | FAIL |
    11 critical tests, 8 passed, 3 failed
    11 tests total, 8 passed, 3 failed
    ==============================================================================
    OpenECOMP ETE | FAIL |
    11 critical tests, 8 passed, 3 failed
    11 tests total, 8 passed, 3 failed
    ==============================================================================
    Output: /var/opt/OpenECOMP_ETE/html/logs/ete/ETE_11572/output.xml
    Log: /var/opt/OpenECOMP_ETE/html/logs/ete/ETE_11572/log.html

    Has anybody else gotten this error, or does anyone know how to determine the root cause?

    1. Yes, we have been getting this since last Friday - I have been too busy to raise an issue as I normally would. This is not as simple as onap-parameters.yaml; it looks like a robot change related to the SO rename - will post a JIRA/workaround shortly. In any case, SO is not fully up on OOM/Heat currently.
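
      While waiting on the JIRA, the 503 itself can usually be chased down by looking at the mso pod directly (the pod name is from an earlier listing on this page - use your own):

      kubectl -n onap-mso get pods
      kubectl -n onap-mso logs -f mso-2505152907-pg17g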

      /michael