Note: this wiki is under construction - content here may be incomplete or not fully specified.
Official Documentation: https://onap.readthedocs.io/en/latest/submodules/oom.git/docs/OOM%20User%20Guide/oom_user_guide.html?highlight=oom
Integration: https://wiki.opnfv.org/pages/viewpage.action?pageId=12389095
The OOM (ONAP Operation Manager) project has pushed Kubernetes-based deployment code to the oom repository, based on ONAP 1.1. This page details getting ONAP running (specifically the vFirewall demo) on Kubernetes for various virtual and native environments. It assumes you have access to any type of bare metal or VM running a clean Ubuntu 16.04 image - on Rackspace, OpenStack, your laptop, or an AWS spot EC2 instance.
Architectural details of the OOM project are described in the OOM User Guide.
...
Status
Undercloud Installation
Requirements
...
7g (enough for only a couple of components)
...
Note: you need at least 51g of RAM (3g of which is for Rancher/Kubernetes itself; the VFs run on a separate OpenStack) - 51g to start, rising to about 55g after running the system for a day.
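A quick pre-flight memory check can be sketched in shell (the 51g threshold is from the note above; reading /proc/meminfo is Linux-specific, and the script itself is an illustration, not part of OOM):

```shell
# Linux-specific: read total memory from /proc/meminfo and compare against
# the ~51g a full ONAP deployment needs (~55g after a day of uptime).
total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo 2>/dev/null || echo 0)
total_gb=$((total_kb / 1024 / 1024))
echo "Total RAM: ${total_gb}g"
if [ "$total_gb" -lt 51 ]; then
  echo "WARNING: below 51g - run only a subset of ONAP components"
fi
```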
...
60g
...
We need a Kubernetes installation - either a base installation or one behind a thin API wrapper like Rancher.
There are several options - currently Rancher with Helm on Ubuntu 16.04 is the focus as a thin wrapper on Kubernetes; alternative platforms are covered in the subpage ONAP on Kubernetes (Alternatives)
...
Ubuntu 16.04.2
Redhat (not currently supported)
...
Bare Metal
VMWare
...
Recommended approach
OSX: an issue with Kubernetes support only on Docker 1.12 (obsolete docker-machine)
...
ONAP Installation
Quickstart Installation
1) Install Rancher, clone oom, run the config-init pod, then run one or all ONAP components.
...
Note: uninstall docker if already installed, as Kubernetes only supports 1.12.x (as of 20170809):
```
% sudo apt-get remove docker-engine
```
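A version check before proceeding can be sketched as follows (an illustration, not an OOM script; it only reports what is installed):

```shell
# Report the installed Docker server version; only 1.12.x works with
# Kubernetes under Rancher 1.6.
ver=$(docker version --format '{{.Server.Version}}' 2>/dev/null || true)
ver=${ver:-"not installed"}
echo "Docker server version: $ver"
case "$ver" in
  1.12.*)          echo "version OK for Rancher 1.6 Kubernetes" ;;
  "not installed") echo "docker not present - the 1.12 install script below will add it" ;;
  *)               echo "remove this version first: sudo apt-get remove docker-engine" ;;
esac
```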
Install Rancher
ONAP deployment in Kubernetes is modelled in the oom project as a 1:1 set of service:pod sets (1 pod per docker container). The fastest way to get ONAP Kubernetes up is via Rancher on any bare metal or VM that supports a clean Ubuntu 16.04 install and more than 50g of RAM.
(on each host) Add an entry to your /etc/hosts pointing your IP to your hostname (append your hostname to the line). Add entries for all other hosts in your cluster.
```
sudo vi /etc/hosts
<your-ip> <your-hostname>
```
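For a non-interactive alternative, an idempotent append can be sketched as below - the IP, hostname, and temp file are placeholders; on a real host you would target /etc/hosts as root:

```shell
# Append "<ip> <hostname>" only if not already present (idempotent).
MY_IP="10.0.0.5"           # placeholder - use this host's routable IP
MY_HOSTNAME="kube-host-1"  # placeholder - use this host's name
HOSTS_FILE=$(mktemp)       # stand-in for /etc/hosts in this sketch
entry="$MY_IP $MY_HOSTNAME"
grep -qF "$entry" "$HOSTS_FILE" || echo "$entry" >> "$HOSTS_FILE"
cat "$HOSTS_FILE"
```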
Try to use root - if you use the ubuntu user you will need to enable docker for that user separately.
```
sudo su -
apt-get update
```
(to fix a possible "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.4.0-59-generic" error)
(on each host - server and client(s), which may be the same machine) Install only the 1.12.x version of Docker (currently 1.12.6) - the only version that works with Kubernetes in Rancher 1.6:
```
curl https://releases.rancher.com/install-docker/1.12.sh | sh
```
(on the master only) Install Rancher (use port 8880 instead of 8080). Note: there may be issues with the DNS pod in Rancher after a reboot or when running clustered hosts - a clean system will be OK - see OOM-236.
```
docker run -d --restart=unless-stopped -p 8880:8080 rancher/server
```
In the Rancher UI, don't use http://127.0.0.1:8880 - use the real IP address, so that the client configs are populated correctly with callbacks.
You must deactivate the default CATTLE environment: add a KUBERNETES environment and deactivate the older default CATTLE one - your added hosts will attach to the default environment.
- Default → Manage Environments
- Select "Add Environment" button
- Give the Environment a name and description, then select Kubernetes as the Environment Template
- Hit the "Create" button. This will create the environment and bring you back to the Manage Environments view
- At the far right column of the Default Environment row, left-click the menu ( looks like 3 stacked dots ), and select Deactivate. This will make your new Kubernetes environment the new default.
Register your host(s) - run the following on each host (including the master if you are collocating the master/host on a single machine/VM).
For each host, in Rancher > Infrastructure > Hosts, select "Add Host".
The first time you add a host you will be presented with a screen containing the routable IP - hit Save only on a routable IP.
Enter the IP of the host only if you launched Rancher with 127.0.0.1/localhost; otherwise leave it empty and it will autopopulate the registration with the real IP.
Copy the registration command and execute it on the host, for example:
```
% docker run --rm --privileged -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/rancher:/var/lib/rancher rancher/agent:v1.2.2 http://192.168.163.131:8880/v1/scripts/BBD465D9B24E94F5FBFD:1483142400000:IDaNFrug38QsjZcu6rXh8TwqA4
```
Wait for the Kubernetes menu to populate with the CLI.
Install Kubectl
The following will install kubectl on a linux host. Once configured, this client tool will provide management of a Kubernetes cluster.
```
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
chmod +x ./kubectl
mv ./kubectl /usr/local/bin/kubectl
mkdir ~/.kube
vi ~/.kube/config
```
Paste the kubectl config from Rancher (you will see the CLI menu under Rancher / Kubernetes once the k8s pods are up on your host).
Click "Generate Config" to get the content to add into ~/.kube/config.
Verify that the Kubernetes config is good:
```
root@obrien-kube11-1:~# kubectl cluster-info
Kubernetes master is running at ....
Heapster is running at ....
KubeDNS is running at ....
kubernetes-dashboard is running at ...
monitoring-grafana is running at ....
monitoring-influxdb is running at ...
tiller-deploy is running at ....
```
Install Helm
The following will install Helm (use 2.3.0, not the current 2.6.0) on a Linux host. Helm is used by OOM for package and configuration management.
Prerequisite: Install Kubectl
```
wget http://storage.googleapis.com/kubernetes-helm/helm-v2.3.0-linux-amd64.tar.gz
tar -zxvf helm-v2.3.0-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/helm
# Test Helm
helm help
```
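To guard against an unpinned client, a version check can be sketched like this (illustrative only; the grep pattern assumes Helm 2's `helm version --client` output format):

```shell
# Warn if the helm client on the PATH is not the pinned 2.3.0.
ver=$(helm version --client 2>/dev/null | grep -o 'v2\.[0-9][0-9.]*' | head -1)
msg="helm client: ${ver:-not installed}"
echo "$msg"
if [ "$ver" != "v2.3.0" ]; then
  echo "note: expected v2.3.0 - newer releases such as 2.6.0 do not work here"
fi
```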
Undercloud done - move to ONAP
Clone oom (scp your onap_rsa private key first, or clone anonymously - ideally get a full Gerrit account and join the community).
See the ssh/http/https access links below:
https://gerrit.onap.org/r/#/admin/projects/oom
```
# anonymous http
git clone http://gerrit.onap.org/r/oom
# or using your key
git clone -b release-1.0.0 ssh://michaelobrien@gerrit.onap.org:29418/oom
```
or use https (substitute your user/pass)
```
git clone -b release-1.0.0 https://michaelnnnn:uHaBPMvR47nnnnnnnnRR3Keer6vatjKpf5A@gerrit.onap.org/r/oom
```
Wait until all the hosts show green in Rancher, then run the createConfig/createAll scripts that wrap the kubectl commands.
Source the setenv.bash script in /oom/kubernetes/oneclick/ - it sets your Helm list of components to start/delete:
```
source setenv.bash
```
Run the one-time config pod, which mounts the volume /dockerdata/ contained in the pod config-init. This mount is required for all other ONAP pods to function.
Note: the pod will stop after NFS creation - this is normal.
```
% cd oom/kubernetes/config
# edit or copy the config for MSO data
vi onap-parameters.yaml
# or
cp onap-parameters-sample.yaml onap-parameters.yaml
# run the config pod creation
% ./createConfig.sh -n onap

**** Creating configuration for ONAP instance: onap
namespace "onap" created
pod "config-init" created
**** Done ****
```
Wait until the config-init pod has completed before trying to bring up a component or all of ONAP - around 60 sec (up to 10 min) - see https://wiki.onap.org/display/DW/ONAP+on+Kubernetes#ONAPonKubernetes-Waitingforconfig-initcontainertofinish-20sec
```
root@ip-172-31-93-122:~/oom_20170908/oom/kubernetes/config# kubectl get pods --all-namespaces -a
onap          config                 0/1       Completed   0          1m
```
Note: the completed config container only shows up in the listing when the -a flag is used; without -a it will not be present.
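The wait can be scripted as a status check; the here-doc below stands in for live `kubectl get pods --all-namespaces -a` output so the logic is runnable on its own (in practice you would poll the real command in a loop):

```shell
# Sample listing - in a live cluster replace the here-doc with:
#   kubectl get pods --all-namespaces -a
pods=$(cat <<'EOF'
NAMESPACE   NAME          READY   STATUS      RESTARTS   AGE
onap        config-init   0/1     Completed   0          1m
EOF
)
# The config-init pod reports Completed once /dockerdata-nfs is populated
status=$(printf '%s\n' "$pods" | awk '$2 == "config-init" {print $4}')
echo "config-init status: $status"
[ "$status" = "Completed" ] && echo "safe to bring up ONAP components"
```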
Cluster Configuration
A shared mount is better than copying.
Try to run your server and client on a single VM (a 64g one); if not, you can run Rancher and several clients across several machines/VMs. The /dockerdata-nfs share must be replicated across the cluster - either via a mount or by copying the directory from the node where the "config" pod actually runs to the other servers. To verify this, check your / root fs on each node.
```
# in this case a 4 node cluster of 16g vms - the config node is on the 4th
root@kube16-3:~# ls /
bin boot dev dockerdata-nfs etc home initrd.img lib lib64 lost+found media mnt opt proc root run sbin snap srv sys tmp usr var vmlinuz
root@kube16-3:~# scp -r /dockerdata-nfs/ root@kube160.onap.info:~
root@kube16-3:~# scp -r /dockerdata-nfs/ root@kube161.onap.info:~
root@kube16-3:~# scp -r /dockerdata-nfs/ root@kube162.onap.info:~
```
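The per-host copies can be generated in a loop; this sketch only prints the scp commands for review rather than executing them (the hostnames are the example cluster's - substitute your own):

```shell
# Print (rather than execute) the copy command for each non-config node.
cmds=$(for host in kube160.onap.info kube161.onap.info kube162.onap.info; do
  echo "scp -r /dockerdata-nfs/ root@${host}:~"
done)
printf '%s\n' "$cmds"
```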
Running ONAP
Don't run all the pods unless you have at least 52g allocated - if you have a laptop/VM with 16g, you can only run enough pods to fit in around 11g.
```
% cd ../oneclick
% vi createAll.bash
% ./createAll.bash -n onap -a robot|appc|aai
```
(to bring up a single service at a time)
Only if you have more than 52g, run the following (all namespaces):
```
% ./createAll.bash -n onap
```
ONAP is OK if everything is 1/1 in the following
```
% kubectl get pods --all-namespaces
```
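The 1/1 check can be automated; the here-doc is sample output standing in for a live `kubectl get pods --all-namespaces` call, and the awk compares the two halves of the READY column:

```shell
# Sample listing - substitute: kubectl get pods --all-namespaces
pods=$(cat <<'EOF'
NAMESPACE    NAME                           READY     STATUS    RESTARTS   AGE
onap-robot   robot-44708506-dgv8j           1/1       Running   0          36m
onap-aai     aai-service-3321436576-2snd6   0/1       Pending   0          36m
EOF
)
# READY is column 3 as "ready/total"; flag rows where the halves differ
not_ready=$(printf '%s\n' "$pods" | awk 'NR > 1 { split($3, r, "/"); if (r[1] != r[2]) print $2 }')
echo "pods not ready: ${not_ready:-none}"
```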
Run the ONAP portal via the instructions at Running ONAP using the vnc-portal
Wait until the containers are all up
Run the initial healthcheck directly on the host:
```
cd /dockerdata-nfs/onap/robot
./ete-docker.sh health
```
Check the AAI endpoints:
```
root@ip-172-31-93-160:/dockerdata-nfs/onap/robot# kubectl -n onap-aai exec -it aai-service-3321436576-2snd6 bash
root@aai-service-3321436576-2snd6:/# ps -ef
UID        PID  PPID  C STIME TTY          TIME CMD
root         1     0  0 15:50 ?        00:00:00 /usr/local/sbin/haproxy-systemd-
root         7     1  0 15:50 ?        00:00:00 /usr/local/sbin/haproxy-master
root@ip-172-31-93-160:/dockerdata-nfs/onap/robot# curl https://127.0.0.1:30233/aai/v11/service-design-and-creation/models
curl: (60) server certificate verification failed. CAfile: /etc/ssl/certs/ca-certificates.crt CRLfile: none
```
(The certificate is self-signed - pass -k to curl to skip verification for a quick manual check.)
List of Containers
The total pod count is 75.
Docker container list - source of truth: https://git.onap.org/integration/tree/packaging/docker/docker-images.csv
get health via
...
NAMESPACE
master:20170715
...
Containers
(2 includes filebeat)
...
The mount "config-init-root" is in the following location
(user configurable VF parameter file below)
/dockerdata-nfs/onapdemo/mso/mso/mso-docker.json
...
aai-resources
...
/opt/aai/logroot/AAI-RES
...
aai-service
...
aai-traversal
...
/opt/aai/logroot/AAI-GQ
...
data-router
...
elasticsearch
...
hbase
...
model-loader-service
...
search-data-service
...
sparky-be
...
clamp-mariadb
...
consul-agent
...
consul-server
...
consul-server
...
consul-server
...
wurstmeister/zookeeper:latest
...
dockerfiles_kafka:latest
...
Note: currently there are no DCAE containers running (we are missing 6 yaml files: 1 for the controller and 5 for the collector, staging, and 3 cdap pods) - therefore DMaaP, the VES collectors, and APPC actions resulting from policy actions (closed loop) will not function yet.
In review: https://gerrit.onap.org/r/#/c/7287/
...
attos/dmaap:latest
...
not required
dcae-controller
...
kube2msb-registrator
...
elasticsearch
...
msb-consul
...
Bring onap-msb up before the rest of ONAP - follow the related Jira issue.
...
msb-discovery
...
msb-eag
...
msb-iag
...
framework
...
multicloud-ocata
...
multicloud-vio
...
multicloud-windriver
...
portalwidgets
...
/dockerdata-nfs/onap/sdc/logs/SDC/SDC-BE
${log.home}/${OPENECOMP-component-name}/ ${OPENECOMP-subcomponent-name}/transaction.log.%i
./var/lib/jetty/logs/SDC/SDC-BE/metrics.log
./var/lib/jetty/logs/SDC/SDC-BE/audit.log
./var/lib/jetty/logs/SDC/SDC-BE/debug_by_package.log
./var/lib/jetty/logs/SDC/SDC-BE/debug.log
./var/lib/jetty/logs/SDC/SDC-BE/transaction.log
./var/lib/jetty/logs/SDC/SDC-BE/error.log
./var/lib/jetty/logs/importNormativeAll.log
./var/lib/jetty/logs/ASDC/ASDC-FE/audit.log
./var/lib/jetty/logs/ASDC/ASDC-FE/debug.log
./var/lib/jetty/logs/ASDC/ASDC-FE/transaction.log
./var/lib/jetty/logs/ASDC/ASDC-FE/error.log
./var/lib/jetty/logs/2017_09_06.stderrout.log
...
./var/lib/jetty/logs/SDC/SDC-BE/metrics.log
./var/lib/jetty/logs/SDC/SDC-BE/audit.log
./var/lib/jetty/logs/SDC/SDC-BE/debug_by_package.log
./var/lib/jetty/logs/SDC/SDC-BE/debug.log
./var/lib/jetty/logs/SDC/SDC-BE/transaction.log
./var/lib/jetty/logs/SDC/SDC-BE/error.log
./var/lib/jetty/logs/importNormativeAll.log
./var/lib/jetty/logs/2017_09_07.stderrout.log
./var/lib/jetty/logs/ASDC/ASDC-FE/audit.log
./var/lib/jetty/logs/ASDC/ASDC-FE/debug.log
./var/lib/jetty/logs/ASDC/ASDC-FE/transaction.log
./var/lib/jetty/logs/ASDC/ASDC-FE/error.log
./var/lib/jetty/logs/2017_09_06.stderrout.log
...
./opt/opendaylight/distribution-karaf-0.5.1-Boron-SR1/journal/000006.log
./opt/opendaylight/distribution-karaf-0.5.1-Boron-SR1/data/cache/1504712225751.log
./opt/opendaylight/distribution-karaf-0.5.1-Boron-SR1/data/cache/1504712002358.log
./opt/opendaylight/distribution-karaf-0.5.1-Boron-SR1/data/tmp/xql.log
./opt/opendaylight/distribution-karaf-0.5.1-Boron-SR1/data/log/karaf.log
./opt/opendaylight/distribution-karaf-0.5.1-Boron-SR1/taglist.log
./var/log/dpkg.log
./var/log/apt/history.log
./var/log/apt/term.log
./var/log/fontconfig.log
./var/log/alternatives.log
./var/log/bootstrap.log
...
./opt/openecomp/sdnc/admportal/server/npm-debug.log
./var/log/dpkg.log
./var/log/apt/history.log
./var/log/apt/term.log
./var/log/fontconfig.log
./var/log/alternatives.log
./var/log/bootstrap.log
...
vfc-catalog
...
vfc-emsdriver
...
vfc-gvnfmdriver
...
vfc-hwvnfmdriver
...
vfc-jujudriver
...
vfc-nslcm
...
vfc-resmgr
...
vfc-vnflcm
...
vfc-vnfmgr
...
vfc-vnfres
...
vfc-workflow
...
vfc-ztesdncdriver
...
vfc-ztevmanagerdrive
...
Fix MSO mso-docker.json (deprecated)
Before running pod-config-init.yaml, make sure your OpenStack config is set up correctly, so that you can deploy the vFirewall VMs for example:
vi oom/kubernetes/config/docker/init/src/config/mso/mso/mso-docker.json
...
"mso-po-adapter-config": {
"checkrequiredparameters": "true",
"cloud_sites": [{
"aic_version": "2.5",
"id": "Ottawa",
"identity_service_id": "KVE5076_OPENSTACK",
"lcp_clli": "RegionOne",
"region_id": "RegionOne"
}],
"identity_services": [{
"admin_tenant": "services",
"dcp_clli": "KVE5076_OPENSTACK",
"identity_authentication_type": "USERNAME_PASSWORD",
"identity_server_type": "KEYSTONE",
"identity_url": "http://OPENSTACK_KEYSTONE_IP_HERE:5000/v2.0",
"member_role": "admin",
"mso_id": "dev",
"mso_pass": "dcdc0d9e4d69a667c67725a9e466e6c3",
"tenant_metadata": "true"
}],
...
"mso-po-adapter-config": {
"checkrequiredparameters": "true",
"cloud_sites": [{
"aic_version": "2.5",
"id": "Dallas",
"identity_service_id": "RAX_KEYSTONE",
"lcp_clli": "DFW", # or IAD
"region_id": "DFW"
}],
"identity_services": [{
"admin_tenant": "service",
"dcp_clli": "RAX_KEYSTONE",
"identity_authentication_type": "RACKSPACE_APIKEY",
"identity_server_type": "KEYSTONE",
"identity_url": "https://identity.api.rackspacecloud.com/v2.0",
"member_role": "admin",
"mso_id": "9998888",
"mso_pass": "YOUR_API_KEY",
"tenant_metadata": "true"
}],
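mso-docker.json must remain valid JSON (note that an inline comment like `# or IAD` above is for the reader only and would break the real file), so a post-edit syntax check can be sketched as below; the sample fragment stands in for the real file path:

```shell
# Validate JSON syntax; the sample fragment below stands in for the real
# oom/kubernetes/config/docker/init/src/config/mso/mso/mso-docker.json
f=$(mktemp)
cat > "$f" <<'EOF'
{"mso-po-adapter-config": {"checkrequiredparameters": "true"}}
EOF
if python3 -m json.tool < "$f" > /dev/null 2>&1; then
  result="valid JSON"
else
  result="syntax error - fix before restarting the mso pod"
fi
echo "$result"
```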
Kubernetes DevOps
...
https://kubernetes.io/docs/user-guide/kubectl-cheatsheet/
Deleting All Containers
Delete all the containers (and services)
```
./deleteAll.bash -n onap
```
Delete/Rerun config-init container for /dockerdata-nfs refresh
Delete the config-init container and its generated /dockerdata-nfs share.
There may be cases where new configuration content needs to be deployed after pulling a new version of ONAP - for example when the pull brings in files like the following (20170902):
...
```
root@ip-172-31-93-160:~/oom/kubernetes/oneclick# git pull
Resolving deltas: 100% (135/135), completed with 24 local objects.
From http://gerrit.onap.org/r/oom
   bf928c5..da59ee4  master     -> origin/master
Updating bf928c5..da59ee4
 kubernetes/config/docker/init/src/config/aai/aai-config/cookbooks/aai-resources/aai-resources-auth/metadata.rb                             |  7 +
 kubernetes/config/docker/init/src/config/aai/aai-config/cookbooks/aai-resources/aai-resources-auth/recipes/aai-resources-aai-keystore.rb   |  8 +
 kubernetes/config/docker/init/src/config/aai/aai-config/cookbooks/{ajsc-aai-config => aai-resources/aai-resources-config}/CHANGELOG.md     |  2 +-
 kubernetes/config/docker/init/src/config/aai/aai-config/cookbooks/{ajsc-aai-config => aai-resources/aai-resources-config}/README.md        |  4 +-
```
see the related Jira (worked with Zoran)
```
# check for the pod
kubectl get pods --all-namespaces -a
# delete all the pods/services
./deleteAll.bash -n onap
# delete the fs
rm -rf /dockerdata-nfs
# (at this point /dockerdata-nfs is an empty env)
# pull the repo
git pull
# rerun the config
./createConfig.sh -n onap
# if you get an error saying the release onap-config already exists, run: helm del --purge onap-config

# example 20170907
root@kube0:~/oom/kubernetes/oneclick# rm -rf /dockerdata-nfs/
root@kube0:~/oom/kubernetes/oneclick# cd ../config/
root@kube0:~/oom/kubernetes/config# ./createConfig.sh -n onap
**** Creating configuration for ONAP instance: onap
Error from server (AlreadyExists): namespaces "onap" already exists
Error: a release named "onap-config" already exists.
Please run: helm ls --all "onap-config"; helm del --help
**** Done ****
root@kube0:~/oom/kubernetes/config# helm del --purge onap-config
release "onap-config" deleted
# rerun createAll.bash -n onap
```
Waiting for config-init container to finish - 20sec
...
```
root@ip-172-31-93-160:~/oom/kubernetes/config# kubectl get pods --all-namespaces -a
NAMESPACE     NAME          READY     STATUS              RESTARTS   AGE
onap          config-init   0/1       ContainerCreating   0          6s
root@ip-172-31-93-160:~/oom/kubernetes/config# kubectl get pods --all-namespaces -a
NAMESPACE     NAME          READY     STATUS    RESTARTS   AGE
onap          config-init   1/1       Running   0          9s
root@ip-172-31-93-160:~/oom/kubernetes/config# kubectl get pods --all-namespaces -a
NAMESPACE     NAME          READY     STATUS      RESTARTS   AGE
onap          config-init   0/1       Completed   0          14s
```
The official documentation for installation of ONAP with OOM / Kubernetes is located in Read the Docs:
- OOM User Guide — onap master documentation
- OOM Quick Start Guide — onap master documentation
- OOM Cloud Setup Guide — onap master documentation
Container Endpoint access
Check the services view in the Kubernetes API under robot:
robot.onap-robot:88 TCP
robot.onap-robot:30209 TCP
...
```
kubectl get services --all-namespaces -o wide
onap-vid      vid-mariadb            None           <none>        3306/TCP         1h        app=vid-mariadb
onap-vid      vid-server             10.43.14.244   <nodes>       8080:30200/TCP   1h        app=vid-server
```
...
```
kubectl --namespace onap-vid logs -f vid-server-248645937-8tt6p
16-Jul-2017 02:46:48.707 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in 22520 ms
kubectl --namespace onap-portal logs portalapps-2799319019-22mzl -f
root@obriensystemskub0:~/oom/kubernetes/oneclick# kubectl get pods --all-namespaces -o wide
NAMESPACE      NAME                     READY     STATUS    RESTARTS   AGE       IP             NODE
onap-robot     robot-44708506-dgv8j     1/1       Running   0          36m       10.42.240.80   obriensystemskub0
root@obriensystemskub0:~/oom/kubernetes/oneclick# kubectl --namespace onap-robot logs -f robot-44708506-dgv8j
2017-07-16 01:55:54: (log.c.164) server started
```
Robot Logs
Yogini and I needed the logs in OOM Kubernetes - they were already there, protected by robot:robot basic auth:
http://test.onap.info:30209/logs/demo/InitDistribution/report.html
for example after a
```
root@ip-172-31-57-55:/dockerdata-nfs/onap/robot# ./demo-k8s.sh distribute
```
find your path to the logs by using, for example:
```
root@ip-172-31-57-55:/dockerdata-nfs/onap/robot# kubectl --namespace onap-robot exec -it robot-4251390084-lmdbb bash
root@robot-4251390084-lmdbb:/# ls /var/opt/OpenECOMP_ETE/html/logs/demo/InitD
InitDemo/     InitDistribution/
```
the path is
http://test.onap.info:30209/logs/demo/InitDemo/log.html#s1-s1-s1-s1-t1
SSH into ONAP containers
Normally I would via https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/
...
Get the pod name via
```
kubectl get pods --all-namespaces -o wide
```
bash into the pod via
```
kubectl -n onap-mso exec -it mso-1648770403-8hwcf /bin/bash
```
...
Trying to get an authorization file into the robot pod
...
```
root@obriensystemskub0:~/oom/kubernetes/oneclick# kubectl cp authorization onap-robot/robot-44708506-nhm0n:/home/ubuntu
```
The above works; copying over an existing file fails:
```
root@obriensystemskub0:~/oom/kubernetes/oneclick# kubectl cp authorization onap-robot/robot-44708506-nhm0n:/etc/lighttpd/authorization
tar: authorization: Cannot open: File exists
tar: Exiting with failure status due to previous errors
```
Redeploying Code war/jar in a docker container
Attaching a debugger to a docker container
Running ONAP Portal UI Operations
Running ONAP using the vnc-portal
see Installing and Running the ONAP Demos
or run the vnc-portal container to access ONAP using the traditional port mappings. See the recorded video by Mike Elliot of the OOM team for an audio-visual reference.
Check for the vnc-portal port via the following (it is always 30211):
```
obrienbiometrics:onap michaelobrien$ ssh ubuntu@dev.onap.info
ubuntu@ip-172-31-93-122:~$ sudo su -
root@ip-172-31-93-122:~# kubectl get services --all-namespaces -o wide
NAMESPACE     NAME         CLUSTER-IP     EXTERNAL-IP   PORT(S)                         AGE       SELECTOR
onap-portal   vnc-portal   10.43.78.204   <nodes>       6080:30211/TCP,5900:30212/TCP   4d        app=vnc-portal
```
Launch the vnc-portal in a browser - the password is "password".
Open Firefox inside the VNC VM and launch the portal normally:
http://portal.api.simpledemo.openecomp.org:8989/ECOMPPORTAL/login.htm
(20170906) Before running SDC, fix the /etc/hosts (thanks Yogini for catching this) - edit your /etc/hosts as follows (change sdc.ui to sdc.api):
...
Continue with the normal ONAP demo flow at (Optional) Tutorial: Onboarding and Distributing a Vendor Software Product (VSP)
Running Multiple ONAP namespaces
Run multiple environments on the same machine - TODO
...
```
root@obriensystemsu0:~/onap/oom/kubernetes/oneclick# ./createAll.bash -n onap -a robot
********** Creating instance 1 of ONAP with port range 30200 and 30399
********** Creating ONAP:
********** Creating deployments for robot **********
Creating namespace **********
namespace "onap-robot" created
Creating registry secret **********
secret "onap-docker-registry-key" created
Creating deployments and services **********
NAME:   onap-robot
LAST DEPLOYED: Sun Sep 17 14:58:29 2017
NAMESPACE: onap
STATUS: DEPLOYED
RESOURCES:
==> v1/Service
NAME   CLUSTER-IP  EXTERNAL-IP  PORT(S)       AGE
robot  10.43.6.5   <nodes>      88:30209/TCP  0s
==> extensions/v1beta1/Deployment
NAME   DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
robot  1        1        1           0          0s
**** Done ****
root@obriensystemsu0:~/onap/oom/kubernetes/oneclick# ./createAll.bash -n onap2 -a robot
********** Creating instance 1 of ONAP with port range 30200 and 30399
********** Creating ONAP:
********** Creating deployments for robot **********
Creating namespace **********
namespace "onap2-robot" created
Creating registry secret **********
secret "onap2-docker-registry-key" created
Creating deployments and services **********
Error: release onap2-robot failed: Service "robot" is invalid: spec.ports[0].nodePort: Invalid value: 30209: provided port is already allocated
The command helm returned with error code 1
```
REST API
There are experimental v1 and v2 REST endpoints that allow us to automate Rancher.
For example, we can see when the AAI model-loader container was created:
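A query against the endpoint can be sketched as follows; the host/port match the Rancher server started earlier, and the key variables are assumptions (API keys are created in the Rancher UI):

```shell
# Build the container-events query URL; uncomment the curl on a live system.
RANCHER="http://192.168.163.131:8880"   # assumption: your Rancher server URL
url="$RANCHER/v1/containerevents"
echo "GET $url"
# curl -s -u "$RANCHER_ACCESS_KEY:$RANCHER_SECRET_KEY" "$url"
```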
...
(abbreviated - the full dockerInspect payload also includes HostConfig, NetworkSettings, and GraphDriver details)
```
{
  "id": "1ce88",
  "type": "containerEvent",
  "links": {
    "self": "…/v1/containerevents/1ce88",
    "account": "…/v1/containerevents/1ce88/account",
    "host": "…/v1/containerevents/1ce88/host"
  },
  "baseType": "containerEvent",
  "state": "created",
  "accountId": "1a7",
  "created": "2017-09-17T20:07:37Z",
  "createdTS": 1505678857000,
  "data": {
    "fields": {
      "dockerInspect": {
        "Id": "59ec11e257fb9061b250fe7ce6a7c86ffd10a82a2f26776c0adc9ac0eb3c6e54",
        "Created": "2017-09-17T20:07:37.750772403Z",
        "Path": "/pause",
        "State": {
          "Status": "running",
          "Running": true,
          "Pid": 25115,
          "StartedAt": "2017-09-17T20:07:37.92889179Z"
        },
        "Image": "sha256:99e59f495ffaa222bfeb67580213e8c28c1e885f1d245ab2bbe3b1b1ec3bd0b2",
        "Name": "/k8s_POD_model-loader-service-849987455-532vd_onap-aai_d9034afb-9be3-11e7-ac87-024d93e255bc_0",
        "Driver": "aufs",
        "Config": {
          "Hostname": "model-loader-service-849987455-532vd",
          "Image": "gcr.io/google_containers/pause-amd64:3.0",
          "Labels": {
            "app": "model-loader-service",
            "io.kubernetes.container.name": "POD",
            "io.kubernetes.pod.name": "model-loader-service-849987455-532vd",
            "io.kubernetes.pod.namespace": "onap-aai",
            "pod-template-hash": "849987455"
          }
        }
      }
    }
  }
}
```
Troubleshooting
Rancher fails to restart on server reboot
Having issues after a reboot of a colocated server/agent
Installing Clean Ubuntu
...
```
apt-get install ssh
apt-get install ubuntu-desktop
```
DNS resolution
Ignore - this warning is not relevant:
"Search Line limits were exceeded, some dns names have been omitted, the applied search line is: default.svc.cluster.local svc.cluster.local cluster.local kubelet.kubernetes.rancher.internal kubernetes.rancher.internal rancher.internal"
https://github.com/rancher/rancher/issues/9303
Questions
https://lists.onap.org/pipermail/onap-discuss/2017-July/002084.html
Links
https://kubernetes.io/docs/user-guide/kubectl-cheatsheet/
Please help out our OPNFV friends
https://wiki.opnfv.org/pages/viewpage.action?pageId=12389095
Of interest
https://github.com/cncf/cross-cloud/
Reference Reviews
https://gerrit.onap.org/r/#/c/6179/
https://gerrit.onap.org/r/#/c/9849/
...