...
The OOM (ONAP Operations Manager) project has pushed Kubernetes-based deployment code to the oom repository. This page details getting ONAP running (specifically the vFirewall demo) on Kubernetes for various environments.
Undercloud Installation
We need a Kubernetes installation with the proper architecture components running. This architecture can be provided either as a base installation or with a thin API wrapper, by vendors such as Red Hat or Rancher.
https://kubernetes.io/docs/concepts/overview/components/
There are several options - currently Rancher is the focus.
OS | VIM | Description | Status | Links
---|---|---|---|---
Ubuntu 16 / RHEL / OSX | Rancher | Rancher on bare metal or VMWare VMs - recommended approach. Issue on OSX: Kubernetes support only in 1.12 (obsolete docker-machine) | In progress | http://rancher.com/docs/rancher/v1.6/en/quick-start-guide/
Linux | Bare Metal | Kubernetes directly on RHEL 7.3 (VMs in this case) | In progress | https://kubernetes.io/docs/getting-started-guides/scratch/
OSX / Linux | CoreOS | On Vagrant (thanks Yves) | In progress - issue: the CoreOS VM 19G size is insufficient | https://coreos.com/kubernetes/docs/latest/kubernetes-on-vagrant-single.html - OSX fix for Vagrant 1.9.6: https://github.com/mitchellh/vagrant/issues/7747 - avoid the kubectl lock: https://github.com/coreos/coreos-kubernetes/issues/886 - Nexus auth issues fixed
OSX | Minikube on VMWare Fusion | minikube VM not restartable | | https://github.com/kubernetes/minikube
RHEL 7.3 | Redhat Kubernetes | Services deploy, but pod IPs are not reachable - likely missing 2 networks (public, onap_oam); retry with kubectl exec | | https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_atomic_host/7/html-single/getting_started_with_kubernetes/
ONAP Installation
Quickstart Installation
ONAP deployment in Kubernetes is modelled in the oom project as a 1:1 mapping of services to pods (1 pod per docker container). The fastest way to get ONAP Kubernetes up is via Rancher.
The platform is Ubuntu 16.04 VMs on VMWare Workstation 12.5, on up to two 64 GB / 6-core 5820K Windows 10 systems (a bare metal set of Ubuntu servers will work the same).
Currently editing this (adding rancher details) over the morning of 20170706 so bear with me...
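The 1:1 service-to-pod pattern described above can be sketched as a minimal manifest pair. This is illustrative only - the names, namespace, port, and image below are hypothetical, not the actual OOM files (those live under oom/kubernetes/):

```yaml
# Illustrative sketch only - not an actual OOM manifest.
# One Service fronting one single-container pod (via a Deployment),
# the 1:1 service:pod pattern OOM uses per docker container.
apiVersion: v1
kind: Service
metadata:
  name: example-service
  namespace: onap-example
spec:
  ports:
  - port: 8080
  selector:
    app: example
---
apiVersion: extensions/v1beta1   # Deployment API group for the k8s 1.6/1.7 era
kind: Deployment
metadata:
  name: example
  namespace: onap-example
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
      - name: example
        image: example:1.0   # placeholder image
```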
Target Deployment State
root@obriensystemsucont0:~/onap/oom/kubernetes/oneclick# kubectl get pods --all-namespaces -o wide
In the listing below, any highlighted container has issues getting to the Running state.
NAMESPACE (master:20170705) | NAME | READY | STATUS | RESTARTS (in 14h) | Notes
---|---|---|---|---|---
onap-aai | aai-service-346921785-624ss | 1/1 | Running | 0 |
onap-aai | hbase-139474849-7fg0s | 1/1 | Running | 0 |
onap-aai | model-loader-service-1795708961-wg19w | 0/1 | Init:1/2 | 82 |
onap-appc | appc-2044062043-bx6tc | 1/1 | Running | 0 |
onap-appc | appc-dbhost-2039492951-jslts | 1/1 | Running | 0 |
onap-appc | appc-dgbuilder-2934720673-mcp7c | 1/1 | Running | 0 |
onap-dcae | not yet pushed | | | | Currently there are no DCAE containers running - we are missing 6 yaml files (1 for the controller and 5 for the collector, staging, and 3 cdap pods). Therefore DMaaP, VES collectors, and APPC actions resulting from policy actions (closed loop) will not function yet.
onap-dcae-cdap | not yet pushed | | | |
onap-dcae-stg | not yet pushed | | | |
onap-dcae-coll | not yet pushed | | | |
onap-message-router | dmaap-3842712241-gtdkp | 0/1 | CrashLoopBackOff | 164 |
onap-message-router | global-kafka-89365896-5fnq9 | 1/1 | Running | 0 |
onap-message-router | zookeeper-1406540368-jdscq | 1/1 | Running | 0 |
onap-mso | mariadb-2638235337-758zr | 0/1 | Running | 0 |
onap-mso | mso-3192832250-fq6pn | 1/1 | CrashLoopBackOff | 167 |
onap-policy | brmsgw-568914601-d5z71 | 0/1 | Init:0/1 | 82 |
onap-policy | drools-1450928085-099m2 | 0/1 | Init:0/1 | 82 |
onap-policy | mariadb-2932363958-0l05g | 1/1 | Running | 0 |
onap-policy | nexus-871440171-tqq4z | 0/1 | Running | 0 |
onap-policy | pap-2218784661-xlj0n | 1/1 | Running | 0 |
onap-policy | pdp-1677094700-75wpj | 0/1 | Init:0/1 | 82 |
onap-policy | pypdp-3209460526-bwm6b | 0/1 | Init:0/1 | 82 |
onap-portal | portalapps-1708810953-trz47 | 0/1 | Init:CrashLoopBackOff | 163 |
onap-portal | portaldb-3652211058-vsg8r | 1/1 | Running | 0 |
onap-portal | vnc-portal-948446550-76kj7 | 0/1 | Init:0/5 | 82 |
onap-robot | robot-964706867-czr05 | 1/1 | Running | 0 |
onap-sdc | sdc-be-2426613560-jv8sk | 0/1 | Init:0/2 | 82 |
onap-sdc | sdc-cs-2080334320-95dq8 | 0/1 | CrashLoopBackOff | 163 |
onap-sdc | sdc-es-3272676451-skf7z | 1/1 | Running | 0 |
onap-sdc | sdc-fe-931927019-nt94t | 0/1 | Init:0/1 | 82 |
onap-sdc | sdc-kb-3337231379-8m8wx | 0/1 | Init:0/1 | 82 |
onap-sdnc | sdnc-1788655913-vvxlj | 1/1 | Running | 0 |
onap-sdnc | sdnc-dbhost-240465348-kv8vf | 1/1 | Running | 0 |
onap-sdnc | sdnc-dgbuilder-4164493163-cp6rx | 1/1 | Running | 0 |
onap-sdnc | sdnc-portal-2324831407-50811 | 0/1 | Running | 25 |
onap-vid | vid-mariadb-4268497828-81hm0 | 0/1 | CrashLoopBackOff | 169 |
onap-vid | vid-server-2331936551-6gxsp | 0/1 | Init:0/1 | 82 |
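With this many pods across namespaces, a quick filter helps spot the containers stuck outside the Running state. A sketch - the sample rows below are abridged from the table above; on a live cluster, pipe the output of `kubectl get pods --all-namespaces` directly into the awk filter instead of the here-doc:

```shell
# Sample 'kubectl get pods --all-namespaces' output, abridged from the table above.
pods=$(cat <<'EOF'
NAMESPACE             NAME                                    READY   STATUS             RESTARTS
onap-aai              aai-service-346921785-624ss             1/1     Running            0
onap-mso              mso-3192832250-fq6pn                    1/1     CrashLoopBackOff   167
onap-sdc              sdc-be-2426613560-jv8sk                 0/1     Init:0/2           82
EOF
)
# Print only the pods whose STATUS column is not "Running" (NR>1 skips the header row).
echo "$pods" | awk 'NR>1 && $4 != "Running" {print $1 "/" $2 ": " $4}'
```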
Clone
Install the latest version of the OOM (ONAP Operations Manager) project repo - specifically the ONAP on Kubernetes work uploaded in June 2017.
...
git clone ssh://yourgerrituserid@gerrit.onap.org:29418/oom
cd oom/kubernetes/oneclick

Versions: oom: master (1.1.0-SNAPSHOT); onap deployments: 1.0.0
Kubernetes specific config
...
Running ONAP Portal UI Operations
see Installing and Running the ONAP Demos
In queue.....
Kubernetes Installation Options
Bare RHEL 7.3 VM - Multi Node Cluster
In progress as of 20170701
Rancher on Ubuntu 16.04
Install Rancher
http://rancher.com/docs/rancher/v1.6/en/quick-start-guide/
...
https://kubernetes.io/docs/getting-started-guides/scratch/
...
...
https://github.com/kubernetes/kubernetes/releases/tag/v1.7.0
https://github.com/kubernetes/kubernetes/releases/download/v1.7.0/kubernetes.tar.gz
tar -xvf kubernetes.tar.gz
Optional: build from source
cd kubernetes/
vi Vagrantfile
cat README.md
ls client/
git clone https://github.com/kubernetes/kubernetes
systemctl start docker
docker ps
cd kubernetes/
make quick-release
Or go directly to the binaries:
/run/media/root/sec/onap_kub/kubernetes/cluster
./get-kube-binaries.sh
export PATH=/run/media/root/sec/onap_kub/kubernetes/client/bin:$PATH
[root@obrien-b2 server]# pwd
/run/media/root/sec/onap_kub/kubernetes/server
kubernetes-manifests.tar.gz kubernetes-salt.tar.gz kubernetes-server-linux-amd64.tar.gz README
tar -xvf kubernetes-server-linux-amd64.tar.gz
/run/media/root/sec/onap_kub/kubernetes/server/kubernetes/server/bin
build images
[root@obrien-b2 etcd]# make
...
(Go is required - see the install link below)
https://golang.org/doc/install?download=go1.8.3.linux-amd64.tar.gz
CoreOS on Vagrant on RHEL/OSX
(Yves alerted me to this) - currently blocked by the 19G VM size (changing the HD size of the VM is unsupported in the VirtualBox driver)
https://coreos.com/kubernetes/docs/latest/kubernetes-on-vagrant-single.html
Implement OSX fix for Vagrant 1.9.6 https://github.com/mitchellh/vagrant/issues/7747
Adjust the Vagrantfile for your system
NODE_VCPUS = 1
NODE_MEMORY_SIZE = 2048
to (for a 5820K on 64G for example)
NODE_VCPUS = 8
NODE_MEMORY_SIZE = 32768
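The edit above can also be scripted. A sketch, demonstrated on a stand-in file - point sed at the real Vagrantfile in coreos-kubernetes/single-node instead:

```shell
# Stand-in for the real Vagrantfile (replace Vagrantfile.demo with Vagrantfile).
cat > Vagrantfile.demo <<'EOF'
NODE_VCPUS = 1
NODE_MEMORY_SIZE = 2048
EOF
# Bump CPU and memory in place; -i.bak keeps a backup and works on GNU and BSD sed.
sed -i.bak \
    -e 's/^NODE_VCPUS = .*/NODE_VCPUS = 8/' \
    -e 's/^NODE_MEMORY_SIZE = .*/NODE_MEMORY_SIZE = 32768/' \
    Vagrantfile.demo
cat Vagrantfile.demo
```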
curl -O https://storage.googleapis.com/kubernetes-release/release/v1.6.1/bin/darwin/amd64/kubectl
chmod +x kubectl
skipped (mv kubectl /usr/local/bin/kubectl) - already there
ls /usr/local/bin/kubectl
git clone https://github.com/coreos/coreos-kubernetes.git
cd coreos-kubernetes/single-node/
vagrant box update
http://rancher.com/docs/rancher/v1.6/en/installing-rancher/installing-server/#single-container
Install a docker version that Rancher and Kubernetes support which is currently 1.12.6
http://rancher.com/docs/rancher/v1.5/en/hosts/#supported-docker-versions
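Before running the install script, it may help to check whether a docker is already present and at the supported level. A sketch (`docker` may not be on the path yet on a fresh host):

```shell
# Print the installed docker server version, or note that docker is absent.
# Rancher/Kubernetes support 1.12.6 at the time of writing.
docker version --format '{{.Server.Version}}' 2>/dev/null \
  || echo "docker not installed - run the install-docker script"
```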
curl https://releases.rancher.com/install-docker/1.12.sh | sh
Verify your Rancher admin console is up on the external port you configured above
Wait for the docker container to finish DB startup
http://rancher.com/docs/rancher/v1.6/en/hosts/
Registering Hosts in Rancher
Having issues registering a combined single VM (controller + host) - use your real IP not localhost
In Settings | Host Configuration, set your IP, then run:
[root@obrien-b2 etcd]# sudo docker run -e CATTLE_AGENT_IP="192.168.163.128" --rm --privileged -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/rancher:/var/lib/rancher rancher/agent:v1.2.2 http://192.168.163.128:8080/v1/scripts/A9487FC88388CC31FB76:1483142400000:IypSDQCtA4SwkRnthKqH53Vxoo
See your host registered
sudo ln -sf /usr/local/bin/openssl /opt/vagrant/embedded/bin/openssl
vagrant up
Wait at least 5 min (Yves is good)
(rerun from here)
export KUBECONFIG="${KUBECONFIG}:$(pwd)/kubeconfig"
kubectl config use-context vagrant-single
obrienbiometrics:single-node michaelobrien$ export KUBECONFIG="${KUBECONFIG}:$(pwd)/kubeconfig"
obrienbiometrics:single-node michaelobrien$ kubectl config use-context vagrant-single
Switched to context "vagrant-single".
obrienbiometrics:single-node michaelobrien$ kubectl proxy &
[1] 4079
obrienbiometrics:single-node michaelobrien$ Starting to serve on 127.0.0.1:8001
goto
$ kubectl get nodes
$ kubectl get service --all-namespaces
$ kubectl cluster-info
git clone ssh://michaelobrien@gerrit.onap.org:29418/oom
cd oom/kubernetes/oneclick/
obrienbiometrics:oneclick michaelobrien$ ./createAll.bash -n onap
**** Done ****
obrienbiometrics:oneclick michaelobrien$ kubectl get service --all-namespaces
...
onap-vid vid-server 10.3.0.31 <nodes> 8080:30200/TCP 32s
obrienbiometrics:oneclick michaelobrien$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system heapster-v1.2.0-4088228293-3k7j1 2/2 Running 2 4h
kube-system kube-apiserver-172.17.4.99 1/1 Running 1 4h
kube-system kube-controller-manager-172.17.4.99 1/1 Running 1 4h
kube-system kube-dns-782804071-jg3nl 4/4 Running 4 4h
kube-system kube-dns-autoscaler-2715466192-k45qg 1/1 Running 1 4h
kube-system kube-proxy-172.17.4.99 1/1 Running 1 4h
kube-system kube-scheduler-172.17.4.99 1/1 Running 1 4h
kube-system kubernetes-dashboard-3543765157-qtnnj 1/1 Running 1 4h
onap-aai aai-service-346921785-w3r22 0/1 Init:0/1 0 1m
...
reset
obrienbiometrics:single-node michaelobrien$ rm -rf ~/.vagrant.d/boxes/coreos-alpha/
...
Install Rancher
http://rancher.com/docs/rancher/v1.6/en/quick-start-guide/
http://rancher.com/docs/rancher/v1.6/en/installing-rancher/installing-server/#single-container
Install a docker version that Rancher and Kubernetes support which is currently 1.12.6
http://rancher.com/docs/rancher/v1.5/en/hosts/#supported-docker-versions
...
curl https://releases.rancher.com/install-docker/1.12.sh | sh
docker run -d --restart=unless-stopped -p 8880:8080 rancher/server:stable
Verify your Rancher admin console is up on the external port you configured above
Wait for the docker container to finish DB startup
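The startup wait can be scripted as a polling loop against the admin console port (8880 in the docker run above). A sketch - demonstrated here against a stand-in local python web server so the loop can be exercised without a Rancher install; point RANCHER_URL at your real host:port instead:

```shell
# Stand-in server so the loop has something to poll (remove for real use).
python3 -m http.server 18880 --bind 127.0.0.1 >/dev/null 2>&1 &
SRV_PID=$!
RANCHER_URL=http://127.0.0.1:18880/   # replace with your Rancher host:port
RANCHER_OK=0
# Poll until the console answers, up to ~30s.
for i in 1 2 3 4 5 6 7 8 9 10; do
  if curl -sf "$RANCHER_URL" >/dev/null; then
    RANCHER_OK=1
    echo "admin console is answering"
    break
  fi
  sleep 3
done
kill "$SRV_PID" 2>/dev/null
```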
http://rancher.com/docs/rancher/v1.6/en/hosts/
Registering Hosts in Rancher
Having issues registering a combined single VM (controller + host) - use your real IP not localhost
In settings | Host Configuration | set your IP
...
See your host registered
...
OSX Minikube
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
kubectl cluster-info
kubectl completion -h
brew install bash-completion
curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.19.0/minikube-darwin-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/
minikube start --vm-driver=vmwarefusion
kubectl run hello-minikube --image=gcr.io/google_containers/echoserver:1.4 --port=8080
kubectl expose deployment hello-minikube --type=NodePort
kubectl get pod
curl $(minikube service hello-minikube --url)
minikube stop
...