Quickstart Guide
Advanced Kubernetes Installations
...
All of DCAE is up via OOM (including the 7 CDAP nodes).

Issue was: each tenant exhausts its floating IP allocation after about 2.5 DCAE installs - we run out of IPs because they are not deleted. Fix: delete all unassociated floating IPs before bringing up OOM/DCAE. We cannot mix Cloudify blueprint orchestration with manual OpenStack deletion - once in a blueprint, everything orchestrated on top of HEAT must be removed via the Cloudify Manager - or do as the integration team does and clean the tenant before a deployment.

After deleting all floating IPs and rerunning the OOM deployment - time: 35 min for the Heat-side dcae-bootstrap install, 55 min total from the one-click OOM install.

```
obrienbiometrics:lab_logging michaelobrien$ ssh ubuntu@10.12.6.124
Last login: Fri Feb  9 16:50:48 2018 from 10.12.25.197
ubuntu@onap-oom-obrien:~$ kubectl -n onap-dcaegen2 exec -it heat-bootstrap-4010086101-fd5p2 bash
root@heat-bootstrap:/# cd /opt/heat
root@heat-bootstrap:/opt/heat# source DCAE-openrc-v3.sh
root@heat-bootstrap:/opt/heat# openstack server list
| ID                                   | Name                | Status | Networks                                           | Image                    | Flavor     |
| 29990fcb-881f-457c-a386-aa32691d3beb | dcaepgvm00          | ACTIVE | oam_onap_3QKg=10.99.0.13, 10.12.6.144              | ubuntu-16-04-cloud-amd64 | m1.medium  |
| 7b4b63f3-c436-41a8-96dd-665baa94a698 | dcaecdap01          | ACTIVE | oam_onap_3QKg=10.99.0.19, 10.12.5.219              | ubuntu-16-04-cloud-amd64 | m1.large   |
| f4e6c499-8938-4e04-ab78-f0e753fe3cbb | dcaecdap00          | ACTIVE | oam_onap_3QKg=10.99.0.9, 10.12.6.69                | ubuntu-16-04-cloud-amd64 | m1.large   |
| 60ccff1f-e7c3-4ab4-b749-96aef7ee0b8c | dcaecdap04          | ACTIVE | oam_onap_3QKg=10.99.0.16, 10.12.5.106              | ubuntu-16-04-cloud-amd64 | m1.large   |
| df56d059-dc91-4122-a8de-d59ea14c5062 | dcaecdap05          | ACTIVE | oam_onap_3QKg=10.99.0.15, 10.12.6.131              | ubuntu-16-04-cloud-amd64 | m1.large   |
| 648ea7d3-c92f-4cd8-870f-31cb80eb7057 | dcaecdap02          | ACTIVE | oam_onap_3QKg=10.99.0.20, 10.12.6.128              | ubuntu-16-04-cloud-amd64 | m1.large   |
| c13fb83f-1011-44bb-bc6c-36627845a468 | dcaecdap06          | ACTIVE | oam_onap_3QKg=10.99.0.18, 10.12.6.134              | ubuntu-16-04-cloud-amd64 | m1.large   |
| 5ed7b172-1203-45a3-91e1-c97447ef201e | dcaecdap03          | ACTIVE | oam_onap_3QKg=10.99.0.6, 10.12.6.123               | ubuntu-16-04-cloud-amd64 | m1.large   |
| 80ada3ca-745e-42db-b67c-cdd83140e68e | dcaedoks00          | ACTIVE | oam_onap_3QKg=10.99.0.12, 10.12.6.173              | ubuntu-16-04-cloud-amd64 | m1.medium  |
| 5e9ef7af-abb3-4311-ae96-a2d27713f4c5 | dcaedokp00          | ACTIVE | oam_onap_3QKg=10.99.0.17, 10.12.6.168              | ubuntu-16-04-cloud-amd64 | m1.medium  |
| d84bbb08-f496-4762-8399-0aef2bb773c2 | dcaecnsl00          | ACTIVE | oam_onap_3QKg=10.99.0.7, 10.12.6.184               | ubuntu-16-04-cloud-amd64 | m1.medium  |
| 53f41bfc-9512-4a0f-b431-4461cd42839e | dcaecnsl01          | ACTIVE | oam_onap_3QKg=10.99.0.11, 10.12.6.188              | ubuntu-16-04-cloud-amd64 | m1.medium  |
| b6177cb2-5920-40b8-8f14-0c41b73b9f1b | dcaecnsl02          | ACTIVE | oam_onap_3QKg=10.99.0.4, 10.12.6.178               | ubuntu-16-04-cloud-amd64 | m1.medium  |
| 5e6fd14b-e75b-41f2-ad61-b690834df458 | dcaeorcl00          | ACTIVE | oam_onap_3QKg=10.99.0.8, 10.12.6.185               | CentOS-7                 | m1.medium  |
| 5217dabb-abd7-4e57-972a-86efdd5252f5 | dcae-dcae-bootstrap | ACTIVE | oam_onap_3QKg=10.99.0.3, 10.12.6.183               | ubuntu-16-04-cloud-amd64 | m1.small   |
| 87569b68-cd4c-4a1f-9c6c-96ea7ce3d9b9 | onap-oom-obrien     | ACTIVE | oam_onap_w37L=10.0.16.1, 10.12.6.124               | ubuntu-16-04-cloud-amd64 | m1.xxlarge |
| d80f35ac-1257-47fc-828e-dddc3604d3c1 | oom-jenkins         | ACTIVE | appc-multicloud-integration=10.10.5.14, 10.12.6.49 |                          | v1.xlarge  |
root@heat-bootstrap:/opt/heat#
```
Quickstart Installation
(Manual instructions)
ONAP Minimum R1 Installation Helm Apps
...
(On each host) Add your IP and hostname to /etc/hosts (append your hostname to the line for your IP), and add entries for all other hosts in your cluster. For example, on OpenLab you will need to add the name of your host before you install Docker, to avoid the error: sudo: unable to resolve host onap-oom
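The entry can be scripted; a minimal sketch (the IP and hostname below are examples - substitute your own, and repeat for every other host in the cluster):

```shell
# Example values only - use your host's private IP and hostname.
HOST_IP="10.0.16.1"
HOST_NAME="onap-oom"
HOSTS_LINE="${HOST_IP} ${HOST_NAME}"
echo "${HOSTS_LINE}"
# Append to /etc/hosts if not already present (run as root on each host):
#   grep -q "${HOSTS_LINE}" /etc/hosts || echo "${HOSTS_LINE}" | sudo tee -a /etc/hosts
```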
Open ports. On most hosts, such as OpenStack or EC2, you can open all the ports, or they are open by default; on some environments, such as Rackspace VMs, you need to open them manually.
Fix the virtual memory allocation (to allow onap-log's Elasticsearch to come up under Rancher 1.6.11).
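Elasticsearch's standard bootstrap check requires vm.max_map_count of at least 262144 (this is a general Elasticsearch requirement, not specific to this guide); a minimal sketch of the fix:

```shell
# Elasticsearch refuses to start if vm.max_map_count is below 262144.
SETTING="vm.max_map_count=262144"
echo "${SETTING}"
# Apply to the running kernel (run as root on each host):
#   sudo sysctl -w vm.max_map_count=262144
# Persist across reboots:
#   echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf
```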
Clone oom (scp your onap_rsa private key first, or clone anonymously; ideally get a full Gerrit account and join the community). See the ssh/http/https access links at https://gerrit.onap.org/r/#/admin/projects/oom
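For an anonymous clone, the https URL from the Gerrit project page works without an account; a sketch:

```shell
# Anonymous https clone - no Gerrit account needed.
REPO_URL="https://gerrit.onap.org/r/oom"
echo "git clone ${REPO_URL}"
# With a Gerrit account, use the ssh URL shown on the project page instead,
# e.g. git clone ssh://<your-user>@gerrit.onap.org:29418/oom  (<your-user> is a placeholder)
```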
(On each host - server and client(s), which may be the same machine) Install only the 17.03.2 version of Docker (the only version that works with Kubernetes in Rancher 1.6.13+). Install Docker
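Rancher publishes versioned Docker engine install scripts; a sketch assuming their 17.03 script URL (verify it for your environment before running):

```shell
# Rancher's install script pins the engine version Kubernetes-under-Rancher
# 1.6.13+ supports (17.03.x).
DOCKER_VERSION="17.03"
SCRIPT_URL="https://releases.rancher.com/install-docker/${DOCKER_VERSION}.sh"
echo "${SCRIPT_URL}"
# On each host, download and run the script as a sudo-capable user:
#   curl -fsSL "${SCRIPT_URL}" | sh
#   docker version   # engine should report 17.03.x
```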
Pre-pull the Docker images the first time you install ONAP. Currently the pre-pull takes 16-180 min depending on your network. Pre-pulling the images allows all of ONAP to start in 3-8 min instead of up to 3 hours.
Use the script in oom/kubernetes/config once it is merged: https://git.onap.org/oom/tree/kubernetes/config/prepull_docker.sh
To monitor when the pre-pull is finished, see the section Prepulldockerimages. It is advised to wait until the pre-pull has finished before continuing. (On the master only) Install Rancher (optional: use port 8880 instead of 8080 if there is a conflict). Note: there may be issues with the DNS pod in Rancher after a reboot or when running clustered hosts - a clean system will be OK.
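The Rancher server runs as a single container on the master; a sketch of the launch command (the v1.6.14 tag is an assumption - use the 1.6.x tag current for your install):

```shell
# Start the Rancher 1.6 server; map host port 8880 if 8080 conflicts.
RANCHER_TAG="v1.6.14"  # assumption - pick your 1.6.x version
CMD="docker run -d --restart=unless-stopped -p 8880:8080 rancher/server:${RANCHER_TAG}"
echo "${CMD}"
# Run the echoed command on the master, then browse to http://<master-ip>:8880
```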
In the Rancher UI, don't use http://127.0.0.1:8880 - use the real IP address, so the client configs are populated correctly with callbacks. You must deactivate the default CATTLE environment by adding a KUBERNETES environment and deactivating the older default CATTLE one - your added hosts will attach to the default environment.
Register your host(s) - run the following on each host (including the master if you are collocating the master/host on a single machine/VM). For each host, in Rancher > Infrastructure > Hosts, select "Add Host". The first time you add a host you will be presented with a screen containing the routable IP - hit save only on a routable IP. Enter the IP of the host (only if you launched Rancher with 127.0.0.1/localhost; otherwise keep it empty and it will auto-populate the registration with the real IP). Copy the command to register the host with Rancher and execute it on each host, for example:
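The registration command has the shape below; always copy the real one (with your token) from the Rancher "Add Host" screen. The server IP, agent tag, and token here are illustrative placeholders:

```shell
# Shape of the Rancher host-registration command - copy the real one from
# Rancher > Infrastructure > Hosts > Add Host. All values below are placeholders.
RANCHER_URL="http://10.12.6.124:8880"           # example server address
TOKEN="REPLACE_WITH_REGISTRATION_TOKEN"         # hypothetical placeholder
CMD="sudo docker run --rm --privileged \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /var/lib/rancher:/var/lib/rancher \
  rancher/agent:v1.2.9 ${RANCHER_URL}/v1/scripts/${TOKEN}"
echo "${CMD}"
```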
Wait for the Kubernetes menu to populate with the CLI. Install kubectl. The following will install kubectl (for Kubernetes 1.9.2 - see https://github.com/kubernetes/kubernetes/issues/57528) on a Linux host. Once configured, this client tool provides management of the Kubernetes cluster.
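A sketch of the download, assuming the standard kubernetes-release bucket URL for the 1.9.2 client:

```shell
# Fetch the kubectl client matching the cluster version (1.9.2 here).
KUBECTL_VERSION="v1.9.2"
KUBECTL_URL="https://storage.googleapis.com/kubernetes-release/release/${KUBECTL_VERSION}/bin/linux/amd64/kubectl"
echo "${KUBECTL_URL}"
# On the host:
#   curl -LO "${KUBECTL_URL}"
#   chmod +x kubectl && sudo mv kubectl /usr/local/bin/kubectl
#   kubectl version --client
```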
Paste the kubectl config from Rancher (you will see the CLI menu in Rancher / Kubernetes after the k8s pods are up on your host). Click "Generate Config" to get the content to add to ~/.kube/config, then verify that the Kubernetes config is good.
Install Helm. The following will install Helm (currently 2.8.0) on a Linux host. Helm is used by OOM for package and configuration management. See https://lists.onap.org/pipermail/onap-discuss/2018-January/007674.html. Prerequisite: install kubectl.
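A sketch of the Helm 2.8.0 client install, assuming the standard kubernetes-helm release bucket URL:

```shell
# Fetch the Helm 2.8.0 client tarball.
HELM_VERSION="v2.8.0"
HELM_URL="https://storage.googleapis.com/kubernetes-helm/helm-${HELM_VERSION}-linux-amd64.tar.gz"
echo "${HELM_URL}"
# On the host:
#   wget "${HELM_URL}"
#   tar -zxvf "helm-${HELM_VERSION}-linux-amd64.tar.gz"
#   sudo mv linux-amd64/helm /usr/local/bin/helm
#   helm version   # client should report v2.8.0
```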
Undercloud done - move on to the ONAP installation. You can install OOM manually as below, or run the cd.sh script attached to the top of this page (Install/RefreshOOM, https://github.com/obrienlabs/onap-root/blob/master/cd.sh). Wait until all the hosts show green in Rancher; then we are ready to configure and deploy the ONAP environment in Kubernetes. These scripts are found in the following folders:
First, source oom/kubernetes/oneclick/setenv.bash. This will set your Helm list of components to start/delete.
Second, we need to configure ONAP before deployment. This is a one-time operation that spawns a temporary config pod. It mounts the volume /dockerdata/ contained in the config-init pod and also creates the directory /dockerdata-nfs on the Kubernetes node. This mount is required for all other ONAP pods to function. Note: the pod will stop after the NFS creation - this is normal. https://git.onap.org/oom/tree/kubernetes/config/onap-parameters-sample.yaml
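The one-time config step is driven by a script under oom/kubernetes/config; a sketch (the createConfig.sh name follows the Amsterdam-era tree - verify the script name in your checkout):

```shell
# One-time config pod creation; run from the root of your oom clone after
# filling in onap-parameters.yaml. Script name is the Amsterdam-era one.
NAMESPACE="onap"
CMD="cd oom/kubernetes/config && ./createConfig.sh -n ${NAMESPACE}"
echo "${CMD}"
```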
**** Creating configuration for ONAP instance: onap

Wait until the config-init pod is gone before trying to bring up a component or all of ONAP - around 60 sec (up to 10 min) - see https://wiki.onap.org/display/DW/ONAP+on+Kubernetes#ONAPonKubernetes-Waitingforconfig-initcontainertofinish-20sec

```
root@ip-172-31-93-122:~/oom_20170908/oom/kubernetes/config# kubectl get pods --all-namespaces -a
onap          config    0/1       Completed   0          1m
```

Note: when using the -a option the config container will show up with its status; without the -a flag it will not be present.

Cluster configuration (optional - do not use if your server/client are co-located): 3. Share the /dockerdata-nfs folder between Kubernetes nodes.

Running/Deploying ONAP
Don't run all the pods unless you have at least 52G allocated - if you have a laptop/VM with 16G, you can only run enough pods to fit in around 11G.
(To bring up a single service at a time) Use the default "onap" namespace if you want to run the Robot tests out of the box - as in "onap-robot". Bring up the core components.
Only if you have >52G, run the following (all namespaces):
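Both the all-namespaces and single-component deployments go through the oneclick scripts; a sketch (run from the root of your oom clone after sourcing setenv.bash):

```shell
# oneclick deployment commands; createAll.bash lives in oom/kubernetes/oneclick.
NAMESPACE="onap"
ALL_CMD="cd oom/kubernetes/oneclick && ./createAll.bash -n ${NAMESPACE}"
ONE_CMD="cd oom/kubernetes/oneclick && ./createAll.bash -n ${NAMESPACE} -a robot"
echo "${ALL_CMD}"   # everything - needs >= 52G allocated
echo "${ONE_CMD}"   # one component at a time, e.g. robot
```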
ONAP is OK if everything is 1/1 or 2/2 in the following:
Run the ONAP portal via the instructions at RunningONAPusingthevnc-portal. Wait until the containers are all up, then check the AAI endpoints:

```
root@ip-172-31-93-160:/dockerdata-nfs/onap/robot# kubectl -n onap-aai exec -it aai-service-3321436576-2snd6 bash
root@aai-service-3321436576-2snd6:/# ps -ef
UID        PID  PPID  C STIME TTY          TIME CMD
root         1     0  0 15:50 ?        00:00:00 /usr/local/sbin/haproxy-systemd-
root         7     1  0 15:50 ?        00:00:00 /usr/local/sbin/haproxy-master
root@ip-172-31-93-160:/dockerdata-nfs/onap/robot# curl https://127.0.0.1:30233/aai/v11/service-design-and-creation/models
curl: (60) server certificate verification failed. CAfile: /etc/ssl/certs/ca-certificates.crt CRLfile: none
```
...