...

Secondary platform is bare-metal: 4 NUCs (i7/i5/i3 with 16G RAM each)

(on each host) Fix your /etc/hosts so that your IP address points to your hostname (and append your hostname to the end of the 127.0.0.1 localhost line)

Code Block
languagebash
sudo vi /etc/hosts
127.0.0.1 localhost <your-hostname>
<your-ip> <your-hostname>


Try to use root; if you use the ubuntu user you will need to enable Docker separately for that user

Code Block
sudo su -
apt-get update 


(to fix a possible "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.4.0-59-generic" error)
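
One way to provide the missing aufs module on Ubuntu (an assumption - this package set is not part of the original instructions) is to install the extra kernel modules for the running kernel:

Code Block
# assumption: Ubuntu 16.04-style kernel packaging; pulls in the aufs module for the running kernel
apt-get install -y linux-image-extra-$(uname -r) linux-image-extra-virtual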


(on each host) Install only the 1.12.x (currently 1.12.6) version of Docker (the only version that works with Kubernetes in Rancher 1.6)

Code Block
curl https://releases.rancher.com/install-docker/1.12.sh | sh
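
A quick sanity check (not part of the original instructions) to confirm that the pinned Docker version was installed:

Code Block
# should print 1.12.6 (or another 1.12.x)
docker version --format '{{.Server.Version}}'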


(on the master) Install Rancher (use port 8880 instead of 8080). Note: there may be issues with the DNS pod in Rancher after a reboot or when running clustered hosts - a clean system will be OK:

Jira: OOM-236 (ONAP JIRA)

Code Block
docker run -d --restart=unless-stopped -p 8880:8080 rancher/server


In the Rancher UI, don't use http://127.0.0.1:8880 - use the real IP address so that the client configs are populated with the correct callback URLs

You must deactivate the default CATTLE environment by adding a KUBERNETES environment and then deactivating the older default CATTLE one:

    • Default → Manage Environments
    • Select "Add Environment" button
    • Give the Environment a name and description, then select Kubernetes as the Environment Template
    • Hit the "Create" button. This will create the environment and bring you back to the Manage Environments view
    • At the far right column of the Default Environment row, left-click the menu (it looks like 3 stacked dots), and select Deactivate. This will make your new Kubernetes environment the new default.


Register your host(s) by running the following on each host.

For each host, in Rancher go to Infrastructure > Hosts and select "Add Host".

Copy the command to register the host with Rancher.

Execute the command on the host, for example:

Code Block
% docker run --rm --privileged -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/rancher:/var/lib/rancher rancher/agent:v1.2.2 http://192.168.163.131:8880/v1/scripts/BBD465D9B24E94F5FBFD:1483142400000:IDaNFrug38QsjZcu6rXh8TwqA4


Wait for the Kubernetes menu to populate with the CLI.


Install kubectl:

Code Block
% curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
% chmod +x ./kubectl
% mv ./kubectl /usr/local/bin/kubectl
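% # optional check (not in the original steps): confirm the kubectl client binary runs
% kubectl version --client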
% mkdir ~/.kube
% vi ~/.kube/config


Paste the kubectl config from Rancher (you will see the CLI menu in Rancher | Kubernetes after the k8s pods are up on your host).

Click on "Generate Config" to get your content to add into .kube/config


Verify that the Kubernetes config is good:

Code Block
root@obrien-kube11-1:~# kubectl cluster-info
Kubernetes master is running at ....
Heapster is running at....
KubeDNS is running at ....
kubernetes-dashboard is running at ...
monitoring-grafana is running at ....
monitoring-influxdb is running at ...
tiller-deploy is running at....


Install Helm (use 2.3.0, not the current 2.6.0):

Code Block
wget http://storage.googleapis.com/kubernetes-helm/helm-v2.3.0-linux-amd64.tar.gz
tar -zxvf helm-v2.3.0-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/helm
# test helm
helm help
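# optional check (not part of the original steps): confirm the client and the cluster's
# tiller-deploy respond - the client should report v2.3.0
helm version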


Undercloud done - move to ONAP


Clone oom (scp your onap_rsa private key first, or clone anonymously - ideally you get a full Gerrit account and join the community).

See the ssh/http/https access links below:

https://gerrit.onap.org/r/#/admin/projects/oom

Code Block
# anonymous
git clone -b release-1.0.0 http://gerrit.onap.org/r/oom

# or using your key
git clone -b release-1.0.0 ssh://michaelobrien@gerrit.onap.org:29418/oom

# or use https
git clone -b release-1.0.0 https://michaelnnnn:uHaBPMvR47nnnnnnnnRR3Keer6vatjKpf5A@gerrit.onap.org/r/oom


Wait until all the hosts show green in Rancher, then run the script that wraps all the kubectl commands.

Jira: OOM-115 (ONAP JIRA)

Run the setenv.bash script in /oom/kubernetes/oneclick/ (new since 20170817)

Code Block
source setenv.bash


(only if you are planning on closed-loop) Before running createConfig.sh (see below), make sure your OpenStack config is set up correctly so that you can deploy the vFirewall VMs, for example:

vi oom/kubernetes/config/docker/init/src/config/mso/mso/mso-docker.json

Replace, for example:

Code Block
"identity_services": [{
    "identity_url": "http://OPENSTACK_KEYSTONE_IP_HERE:5000/v2.0",


Run the one-time config pod - it mounts the volume /dockerdata/ contained in the config-init pod. This mount is required for all other ONAP pods to function.

Note: the pod will stop after NFS creation - this is normal.

Code Block
% cd oom/kubernetes/config
% chmod 777 createConfig.sh    
% ./createConfig.sh -n onap 


**** Creating configuration for ONAP instance: onap
namespace "onap" created
pod "config-init" created
**** Done ****


Wait until the config-init pod is gone before trying to bring up a component or all of ONAP.
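
A minimal way to watch for this (assuming the "onap" namespace used above and a working kubectl config):

Code Block
# repeat until the config-init pod no longer shows as Running
kubectl get pods --namespace onap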

Note: use only the hardcoded "onap" namespace prefix, as URLs in the config pod are set as follows: "workflowSdncadapterCallback": "http://mso.onap-mso:8080/mso/SDNCAdapterCallbackService"

Don't run all the pods unless you have at least 40G of RAM (without DCAE) or 50G allocated; if you have a laptop/VM with 16G, you can only run enough pods to fit in around 11G.

Ignore the errors introduced around 20170816 - they are non-blocking and the create will proceed:

Jira: OOM-146 (ONAP JIRA)

Code Block
% cd ../oneclick
% vi createAll.bash 
% ./createAll.bash -n onap -a robot|appc|aai    # choose one app per invocation


(to bring up a single service at a time)

Only if you have >50G of RAM, run the following (all namespaces):

Code Block
% ./createAll.bash -n onap


1.0.0 is OK

Run the ONAP portal via the instructions at "Running ONAP using the vnc-portal".

1.1 is currently having Helm issues as of 20170825:

Jira: OOM-219 (ONAP JIRA)

Wait until the containers are all up
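
One way to watch progress (a simple check, assuming kubectl is configured as above):

Code Block
# repeat until all ONAP pods show Running and fully Ready (e.g. 1/1)
kubectl get pods --all-namespaces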

...