...
(on each host) Add an entry to your /etc/hosts pointing your IP to your hostname (append the hostname to the end of the line). Add entries for all other hosts in your cluster as well.
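For example, on a three-node cluster each host's /etc/hosts might contain entries like the following (the IPs and hostnames are placeholders - use your own):
10.0.0.1 onap-master
10.0.0.2 onap-host-1
10.0.0.3 onap-host-2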
Clone oom (scp your onap_rsa private key to the host first, or clone anonymously - ideally you get a full Gerrit account and join the community). See the ssh/https access links at https://gerrit.onap.org/r/#/admin/projects/oom
or use https (substitute your user/pass)
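For example (anonymous https clone first, ssh clone second - the ssh form assumes you have a Gerrit account and have uploaded your public key):
git clone https://gerrit.onap.org/r/oom
git clone ssh://<your_gerrit_user>@gerrit.onap.org:29418/oom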
(on each host (server and client(s), which may be the same machine)) Install only the 1.12.x version of Docker (currently 1.12.6) - the only version that works with Kubernetes in Rancher 1.6.
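One common way to get this specific version on Ubuntu is Rancher's versioned install script (assuming https://releases.rancher.com/install-docker/1.12.sh is still published); this is a sketch, not the only supported method:
curl https://releases.rancher.com/install-docker/1.12.sh | sh
sudo usermod -aG docker <your_user>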
Pre-pull the docker images the first time you install ONAP. Currently the pre-pull takes 10-35 min depending on throttling, what you have already pulled, and the load on nexus3.onap.org:10001. Pre-pulling the images allows all of ONAP to start in 3-8 min instead of up to 3 hours. This is a WIP: https://jira.onap.org/secure/attachment/10501/prepull_docker.sh
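A typical way to run the pre-pull (the script URL is the one above; running it under nohup lets the pulls continue if your session drops). Individual images can also be pulled manually - the image name and tag below are placeholders only:
wget https://jira.onap.org/secure/attachment/10501/prepull_docker.sh
chmod +x prepull_docker.sh
nohup ./prepull_docker.sh > prepull.log 2>&1 &
docker pull nexus3.onap.org:10001/<image>:<tag>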
(on the master only) Install Rancher (use port 8880 instead of 8080). Note there may be issues with the DNS pod in Rancher after a reboot or when running clustered hosts - a clean system will be OK.
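A typical Rancher 1.6 server start on the master, mapping the UI to 8880 as noted above (you may want to pin a specific v1.6.x tag rather than the default):
docker run -d --restart=unless-stopped -p 8880:8080 rancher/server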
In the Rancher UI, don't use http://127.0.0.1:8880 - use the real IP address, so the client configs are populated correctly with callbacks. You must deactivate the default CATTLE environment by adding a KUBERNETES environment and deactivating the older default CATTLE one - your added hosts will attach to the default environment.
Register your host(s) - run the following on each host (including the master if you are collocating the master/host on a single machine/VM). For each host, in Rancher > Infrastructure > Hosts select "Add Host". The first time you add a host you will be presented with a screen containing the routable IP - hit save only on a routable IP. Enter the IP of the host only if you launched Rancher with 127.0.0.1/localhost - otherwise keep it empty and it will auto-populate the registration with the real IP. Copy the command to register the host with Rancher and execute it on the host, for example:
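The copied command will look roughly like the sketch below - the agent version and registration token are specific to your Rancher server, so always use the command from your own UI rather than this example:
sudo docker run --rm --privileged -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/rancher:/var/lib/rancher rancher/agent:v1.2.6 http://<your_rancher_ip>:8880/v1/scripts/<registration_token>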
Wait for the Kubernetes menu to populate with the CLI. Install kubectl: the following will install kubectl on a Linux host. Once configured, this client tool provides management of a Kubernetes cluster.
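A typical kubectl install on Linux (substitute the version recommended in the Rancher CLI pane; v1.7.7 below is only an example):
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.7.7/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/kubectl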
Paste the kubectl config from Rancher (you will see the CLI menu under Rancher / Kubernetes after the k8s pods are up on your host). Click on "Generate Config" to get the content to add into ~/.kube/config, then verify that the Kubernetes config is good.
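For example, after pasting the "Generate Config" content into ~/.kube/config:
mkdir -p ~/.kube
vi ~/.kube/config
kubectl cluster-info
kubectl get pods --all-namespaces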
Install Helm: the following will install Helm (use 2.3.0, not the current 2.6.0) on a Linux host. Helm is used by OOM for package and configuration management. Prerequisite: install kubectl.
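A typical Helm 2.3.0 client install on Linux (the download URL follows the standard Helm release naming - verify it against the Helm release page before use):
wget https://storage.googleapis.com/kubernetes-helm/helm-v2.3.0-linux-amd64.tar.gz
tar -zxvf helm-v2.3.0-linux-amd64.tar.gz
sudo mv linux-amd64/helm /usr/local/bin/helm
helm version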
Undercloud done - move to ONAP installation. Wait until all the hosts show green in Rancher, then run the createConfig/createAll scripts that wrap all the kubectl commands, from oom/kubernetes/config and oom/kubernetes/oneclick wherever you pulled oom. Source the setenv.bash script in oom/kubernetes/oneclick/ - it sets your Helm list of components to start/delete.
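For example, from wherever you cloned oom:
cd oom/kubernetes/oneclick
source setenv.bash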
Run the one-time config pod, which mounts the volume /dockerdata-nfs/ contained in the pod config-init. This mount is required for all other ONAP pods to function. Note: the pod will stop after NFS creation - this is normal.
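A typical invocation of the config script (the -n flag sets the namespace prefix; "onap" matches the default expected by the robot tests) - assuming the script in oom/kubernetes/config is named createConfig.sh as referenced above:
cd oom/kubernetes/config
./createConfig.sh -n onap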
**** Creating configuration for ONAP instance: onap
Wait until the config-init pod is gone before trying to bring up a component or all of ONAP - around 60 sec (up to 10 min) - see https://wiki.onap.org/display/DW/ONAP+on+Kubernetes#ONAPonKubernetes-Waitingforconfig-initcontainertofinish-20sec
root@ip-172-31-93-122:~/oom_20170908/oom/kubernetes/config# kubectl get pods --all-namespaces -a
onap config 0/1 Completed 0 1m
Note: when using the -a option the config container will show up with its status; without the -a flag it will not be present.
Cluster Configuration (optional - do not use if your server/client are co-located): use an NFS mount (a mount is better). Try to run your host and client on a single VM (a 64G one); if not, you can run Rancher and several clients across several machines/VMs. The /dockerdata-nfs share must be replicated across the cluster, either via a mount or by copying the directory to the other servers from the one where the "config" pod actually runs. To verify this, check your / root fs on each node.
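A minimal NFS sketch for the clustered case, assuming the config pod ran on the master and that nfs-kernel-server / nfs-common are installed on the master and clients respectively (adjust export options to your security needs):
# on the master (NFS server)
echo "/dockerdata-nfs *(rw,no_root_squash,no_subtree_check)" | sudo tee -a /etc/exports
sudo exportfs -a
# on each other host (NFS client)
sudo mkdir -p /dockerdata-nfs
sudo mount <master_ip>:/dockerdata-nfs /dockerdata-nfs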
Running ONAP: Don't run all the pods unless you have at least 52G allocated - if you have a laptop/VM with 16G, you can only run enough pods to fit in around 11G.
(to bring up a single service at a time) Use the default "onap" namespace if you want to run robot tests out of the box - as in "onap-robot". Bring up the core components, for example:
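For example, to bring up only robot and then A&AI in the onap namespace (createAll.bash and its -a flag are in oom/kubernetes/oneclick; component names follow the list set by setenv.bash):
cd oom/kubernetes/oneclick
./createAll.bash -n onap -a robot
./createAll.bash -n onap -a aai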
Only if you have >52G run the following (all namespaces)
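That is, run createAll without the -a filter so every component comes up:
./createAll.bash -n onap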
ONAP is OK if everything is 1/1 in the following
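For example:
kubectl get pods --all-namespaces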
Run the ONAP portal via the instructions at Running ONAP using the vnc-portal. Wait until the containers are all up.
Run the initial healthcheck directly on the host:
cd /dockerdata-nfs/onap/robot
./ete-docker.sh health
Check the AAI endpoints:
root@ip-172-31-93-160:/dockerdata-nfs/onap/robot# kubectl -n onap-aai exec -it aai-service-3321436576-2snd6 bash
root@aai-service-3321436576-2snd6:/# ps -ef
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 15:50 ? 00:00:00 /usr/local/sbin/haproxy-systemd-
root 7 1 0 15:50 ? 00:00:00 /usr/local/sbin/haproxy-master
root@ip-172-31-93-160:/dockerdata-nfs/onap/robot# curl https://127.0.0.1:30233/aai/v11/service-design-and-creation/models
curl: (60) server certificate verification failed. CAfile: /etc/ssl/certs/ca-certificates.crt CRLfile: none
Upgrade OOM
Preparation Files
Running an OOM full refresh
Ports
NAMESPACE NAME CLUSTER-IP EXTERNAL-IP PORT(S)
onap-aai aai-service 10.43.238.197 <nodes> 8443:30233/TCP,8080:30232/TCP
onap-aai hbase None <none> 2181/TCP,8080/TCP,8085/TCP,9090/TCP,16000/TCP,16010/TCP,16201/TCP
onap-aai model-loader-service 10.43.161.44 <nodes> 8443:30229/TCP,8080:30210/TCP
onap-appc dgbuilder 10.43.67.192 <nodes> 3000:30228/TCP
onap-appc sdnhost 10.43.250.74 <nodes> 8282:30230/TCP,1830:30231/TCP
onap-clamp clamp 10.43.145.101 <nodes> 8080:30295/TCP
onap-cli cli 10.43.225.34 <nodes> 80:30260/TCP
onap-consul consul-server 10.43.56.151 <nodes> 8500:30270/TCP,8301:30271/TCP
onap-log elasticsearch 10.43.81.172 <nodes> 9200:30254/TCP
onap-log kibana 10.43.71.77 <nodes> 5601:30253/TCP
onap-message-router dmaap 10.43.226.159 <nodes> 3904:30227/TCP,3905:30226/TCP
onap-msb msb-consul 10.43.128.166 <nodes> 8500:30500/TCP
onap-msb msb-discovery 10.43.6.205 <nodes> 10081:30081/TCP
onap-msb msb-eag 10.43.239.63 <nodes> 80:30082/TCP
onap-msb msb-iag 10.43.220.233 <nodes> 80:30080/TCP
onap-mso mariadb 10.43.254.57 <nodes> 3306:30252/TCP
onap-mso mso 10.43.206.123 <nodes> 8080:30223/TCP,3904:30225/TCP,3905:30224/TCP,9990:30222/TCP,8787:30250/TCP
onap-multicloud framework 10.43.5.154 <nodes> 9001:30291/TCP
onap-multicloud multicloud-ocata 10.43.135.158 <nodes> 9006:30293/TCP
onap-multicloud multicloud-vio 10.43.43.13 <nodes> 9004:30292/TCP
onap-multicloud multicloud-windriver 10.43.43.197 <nodes> 9005:30294/TCP
onap-policy brmsgw 10.43.116.61 <nodes> 9989:30216/TCP
onap-policy drools 10.43.213.147 <nodes> 6969:30217/TCP
onap-policy pap 10.43.177.82 <nodes> 8443:30219/TCP,9091:30218/TCP
onap-policy pdp 10.43.167.146 <nodes> 8081:30220/TCP
onap-portal portalapps 10.43.130.189 <nodes> 8006:30213/TCP,8010:30214/TCP,8989:30215/TCP
onap-portal vnc-portal 10.43.8.67 <nodes> 6080:30211/TCP,5900:30212/TCP
onap-robot robot 10.43.245.43 <nodes> 88:30209/TCP
onap-sdc sdc-be 10.43.132.126 <nodes> 8443:30204/TCP,8080:30205/TCP
onap-sdc sdc-fe 10.43.96.120 <nodes> 9443:30207/TCP,8181:30206/TCP
onap-sdnc sdnc-dgbuilder 10.43.177.11 <nodes> 3000:30203/TCP
onap-sdnc sdnc-portal 10.43.108.205 <nodes> 8843:30201/TCP
onap-sdnc sdnhost 10.43.185.180 <nodes> 8282:30202/TCP,8201:32147/TCP
onap-vid vid-server 10.43.36.97 <nodes> 8080:30200/TCP
onap-vnfsdk refrepo 10.43.121.92 <nodes> 8702:30297/TCP
...
haproxy only
...
...
List of Containers
Total pods is 75
...