...
(on each host) Add an entry to your /etc/hosts mapping the host's IP to its hostname (append the hostname to the line). Add entries for all other hosts in your cluster as well.
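As a sketch of the /etc/hosts step above (the IPs and hostnames below are illustrative placeholders, not values from this guide):

```shell
# append an entry for this host plus every other cluster host
# (replace the IPs/hostnames with your own)
sudo tee -a /etc/hosts <<'EOF'
10.12.5.171 onap-master
10.12.5.172 onap-host-1
10.12.5.173 onap-host-2
EOF
```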
Clone oom (scp your onap_rsa private key first, or clone anonymously - ideally you get a full Gerrit account and join the community). See the ssh/https access links at https://gerrit.onap.org/r/#/admin/projects/oom, or use https (substitute your user/password).
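The clone step can be sketched as follows; YOUR_USER is a placeholder for your Gerrit username, and the anonymous https form is shown as the alternative:

```shell
# with a Gerrit account (29418 is Gerrit's standard ssh port)
git clone ssh://YOUR_USER@gerrit.onap.org:29418/oom

# or anonymously over https
git clone https://gerrit.onap.org/r/oom
```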
(on each host - server and client(s), which may be the same machine) Install only the 1.12.x version of Docker (currently 1.12.6) - the only version that works with Kubernetes in Rancher 1.6. Install Docker
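One way to pin Docker to 1.12.x is Rancher's version-specific install script; this sketch assumes an internet-connected Ubuntu host and that the script is still published at this path:

```shell
# install the 1.12.x Docker engine supported by Rancher 1.6
curl https://releases.rancher.com/install-docker/1.12.sh | sh
# optional: allow the current user to run docker without sudo
sudo usermod -aG docker $USER
# verify - expect 1.12.6
docker --version
```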
(on the master only) Install Rancher (use port 8880 instead of 8080). Note there may be issues with the dns pod in Rancher after a reboot or when running clustered hosts - a clean system will be OK.
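A minimal sketch of starting the Rancher server, mapping host port 8880 to the container's 8080 as the step above requires (the image tag is left implicit; pick the Rancher 1.6.x release you intend to run):

```shell
# start the Rancher 1.6 server; UI becomes available on port 8880
sudo docker run -d --restart=unless-stopped \
  -p 8880:8080 rancher/server
```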
In the Rancher UI, don't use http://127.0.0.1:8880 - use the real IP address, so the client configs are populated with the correct callbacks. You must deactivate the default CATTLE environment - add a KUBERNETES environment, then deactivate the older default CATTLE one - your added hosts will attach to the default environment.
Register your host(s) - run the following on each host (including the master if you are collocating the master/host on a single machine/VM). For each host, in Rancher > Infrastructure > Hosts, select "Add Host". The first time you add a host you will be presented with a screen containing the routable IP - hit Save only on a routable IP. Enter the IP of the host (only if you launched Rancher with 127.0.0.1/localhost - otherwise leave it empty and it will autopopulate the registration with the real IP). Copy the command to register the host with Rancher and execute it on the host, for example:
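The exact registration command must be copied from the Rancher UI, since it embeds a server URL and a per-environment token; its shape is roughly the following, where the agent tag, server IP, and token are placeholders:

```shell
# run on each host to be registered (values come from the "Add Host" screen)
sudo docker run --rm --privileged \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /var/lib/rancher:/var/lib/rancher \
  rancher/agent:v1.2.6 \
  http://<rancher-server-ip>:8880/v1/scripts/<REGISTRATION_TOKEN>
```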
Wait for the Kubernetes menu to populate with the CLI. Install Kubectl: the following will install kubectl on a Linux host. Once configured, this client tool provides management of a Kubernetes cluster.
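A typical kubectl install on Linux looks like this; the version below is an era-appropriate example, not one mandated by this guide:

```shell
# download a kubectl binary (substitute the version you need)
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.7.7/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/kubectl
```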
Paste the kubectl config from Rancher (you will see the CLI menu in Rancher / Kubernetes after the k8s pods are up on your host). Click "Generate Config" to get the content to add into ~/.kube/config. Verify that the Kubernetes config is good.
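The paste-and-verify step above can be sketched as:

```shell
mkdir -p ~/.kube
vi ~/.kube/config                    # paste the content from Rancher's "Generate Config"
kubectl config view                  # confirm the cluster/user entries are present
kubectl get pods --all-namespaces    # should reach the cluster without errors
```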
Install Helm: the following will install Helm (use 2.3.0, not the current 2.6.0) on a Linux host. Helm is used by OOM for package and configuration management. Prerequisite: install kubectl.
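A sketch of installing the pinned Helm 2.3.0 client (the download URL follows Helm's historical release-bucket layout):

```shell
wget http://storage.googleapis.com/kubernetes-helm/helm-v2.3.0-linux-amd64.tar.gz
tar -zxvf helm-v2.3.0-linux-amd64.tar.gz
sudo mv linux-amd64/helm /usr/local/bin/helm
helm version    # client should report v2.3.0
```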
Undercloud done - move to ONAP installation. Wait until all the hosts show green in Rancher, then run the createConfig/createAll scripts that wrap all the kubectl commands, from oom/kubernetes/config and oom/kubernetes/oneclick wherever you pulled oom. Source the setenv.bash script in oom/kubernetes/oneclick/ - it sets your helm list of components to start/delete.
Run the one-time config pod, which mounts the volume /dockerdata/ contained in the pod config-init. This mount is required for all other ONAP pods to function. Note: the pod will stop after NFS creation - this is normal.
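The one-time config pod is created with the createConfig script under oom/kubernetes/config; "onap" is the default namespace used throughout this guide:

```shell
cd oom/kubernetes/config
./createConfig.sh -n onap
```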
**** Creating configuration for ONAP instance: onap
Wait until the config-init pod is gone before trying to bring up a component or all of ONAP - around 60 sec (up to 10 min) - see https://wiki.onap.org/display/DW/ONAP+on+Kubernetes#ONAPonKubernetes-Waitingforconfig-initcontainertofinish-20sec
root@ip-172-31-93-122:~/oom_20170908/oom/kubernetes/config# kubectl get pods --all-namespaces -a
onap config 0/1 Completed 0 1m
Note: when using the -a option the config container will show up with its status; without the -a flag it will not be present.
Cluster Configuration (optional - do not use if your server/client are co-located): a mount is better. Try to run your host and client on a single VM (a 64g one) - if not, you can run Rancher and several clients across several machines/VMs. The /dockerdata-nfs share must be replicated across the cluster, either using a mount or by copying the directory to the other servers from the one where the "config" pod actually runs. To verify this, check your / root fs on each node.
Running ONAP
Pre-pull docker images the first time you install ONAP. Currently the pre-pull takes 10-35 min depending on throttling, what you have already pulled, and the load on nexus3.onap.org:10001. Pre-pulling the images allows the entire ONAP to start in 3-8 min instead of up to 3 hours. This is a WIP: https://jira.onap.org/secure/attachment/10501/prepull_docker.sh
Don't run all the pods unless you have at least 52G allocated - if you have a laptop/VM with 16G, you can only run enough pods to fit in around 11G.
(To bring up a single service at a time) Use the default "onap" namespace if you want to run robot tests out of the box - as in "onap-robot". Bring up core components.
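For example, after sourcing setenv.bash, a single component can be brought up with createAll.bash and its -a flag ("robot" here is just one example app):

```shell
cd oom/kubernetes/oneclick
source setenv.bash
./createAll.bash -n onap -a robot
```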
Only if you have >52G, run the following (all namespaces):
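Bringing up every component - omit the -a flag so createAll.bash starts the full set configured by setenv.bash:

```shell
cd oom/kubernetes/oneclick
source setenv.bash
./createAll.bash -n onap
```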
ONAP is OK if everything is 1/1 in the following
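To watch readiness across all namespaces:

```shell
kubectl get pods --all-namespaces
# ONAP is healthy when every pod shows READY 1/1 and STATUS Running
```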
Run the ONAP portal via instructions at RunningONAPusingthevnc-portal. Wait until the containers are all up.
Run the initial healthcheck directly on the host:
cd /dockerdata-nfs/onap/robot
./ete-docker.sh health
Check AAI endpoints:
root@ip-172-31-93-160:/dockerdata-nfs/onap/robot# kubectl -n onap-aai exec -it aai-service-3321436576-2snd6 bash
root@aai-service-3321436576-2snd6:/# ps -ef
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 15:50 ? 00:00:00 /usr/local/sbin/haproxy-systemd-
root 7 1 0 15:50 ? 00:00:00 /usr/local/sbin/haproxy-master
root@ip-172-31-93-160:/dockerdata-nfs/onap/robot# curl https://127.0.0.1:30233/aai/v11/service-design-and-creation/models
curl: (60) server certificate verification failed. CAfile: /etc/ssl/certs/ca-certificates.crt CRLfile: none
...