
Introduction

This tutorial explains how to set up a local Kubernetes cluster and a minimal Helm setup to run and deploy SDC (the approach can be extended to several or all ONAP components) on a single host.

...

2) it does not use Docker as the container runtime; it uses containerd. This is not an issue, just be aware of it, as you won't see the containers with classic Docker commands


How to install/remove microk8s?

If you have a previous version of microk8s, you first need to uninstall it. An upgrade is possible, but it is not recommended between major versions, so I recommend uninstalling, as it's fast and safe.
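Assuming the previous version was installed with snap, removal is a single command:

```shell
# completely removes microk8s and its data
sudo snap remove microk8s
```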

...

This tutorial is focused on the Honolulu release, so we will use k8s version 1.19; to do so, you just need to select the appropriate channel

Code Block
sudo snap install microk8s --classic --channel=1.18/stable
sudo snap refresh microk8s --classic --channel=1.19/stable

Or (once the master node bug on the 1.19 kubernetes snap install is fixed):

Code Block
sudo snap install microk8s --classic --channel=1.19/stable
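Whichever path you used, you can ask microk8s to wait until the cluster is ready before going further:

```shell
# blocks until all core services report ready
sudo microk8s status --wait-ready
```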

...

Storage addon: we will enable the default Host storage class; this allows local volume storage, which is used by some pods to exchange folders between containers.

Code Block
sudo microk8s enable dns storage

...

As Helm is self-contained, it's pretty straightforward to install/upgrade.

I recommend putting helm in the local bin folder as a soft link; this way it's easy to switch between versions if you need to. We can also use snap to install the right version

Code Block
sudo snap install helm --classic --channel=3.5/stable

Note: you may encounter some log issues when installing helm with snap.

Normally the helm logs are available in "~/.local/share/helm/plugins/deploy/cache/onap/logs"; if you notice that the log files are all empty (size 0), you can uninstall the snap helm and reinstall it manually

Code Block
wget https://get.helm.sh/helm-v3.5.2-linux-amd64.tar.gz

Code Block
tar -zxvf helm-v3.5.2-linux-amd64.tar.gz

Code Block
sudo mv linux-amd64/helm /usr/local/bin/helm-v3.5.2
sudo ln -s /usr/local/bin/helm-v3.5.2 /usr/local/bin/helm

3) Tweak Microk8s

The tweaks below are not strictly necessary, but they help make the setup simpler and more flexible.

A) Increase the max number of pods & add privileged config

As ONAP may deploy a significant number of pods, we need to tell kubelet to allow more than the default configuration (as we plan an all-in-one-box setup). If you only plan to run a limited number of components, this is not needed.

To change the max number of pods, we need to add a parameter to the startup arguments of kubelet.

1. Edit the file located at :

Code Block
sudo nano /var/snap/microk8s/current/args/kubelet
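In that file, the limit is raised by appending a kubelet flag at the end; the value below is an example, size it to the number of components you plan to deploy:

```
--max-pods=250
```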

...

Code Block
sudo service snap.microk8s.daemon-kubelet restart

2. Edit the file located at :

Code Block
sudo nano /var/snap/microk8s/current/args/kube-apiserver

add the following line at the end :

Code Block
--allow-privileged=true

save the file and restart the apiserver to apply the change :

Code Block
sudo service snap.microk8s.daemon-apiserver restart


B) Run a local copy of kubectl

Microk8s comes bundled with kubectl; you can interact with it by doing:

Code Block
sudo microk8s kubectl describe node
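If you prefer to keep using the bundled client, a shell alias avoids typing the prefix each time (an optional convenience; add it to your ~/.bashrc):

```
alias kubectl='sudo microk8s kubectl'
```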

...

We need kubectl 1.19 to match the cluster we have installed.

Code Block
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.19.7/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
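You can quickly confirm the client version:

```shell
# should report v1.19.7 for the binary downloaded above
kubectl version --client
```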

...

Let's again use snap to quickly choose and install the one we need

Code Block
sudo snap install kubectl --classic --channel=1.19/stable

Now we need to provide our local kubectl client with a proper config file so that it can access the cluster; microk8s makes it easy to retrieve the cluster config.

Simply create a .kube folder in your home directory and dump the config there

Code Block
cd
mkdir .kube
cd .kube
sudo microk8s.config > config
chmod 700 config
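A quick sanity check that the client now reaches the cluster (node name and status will depend on your machine):

```shell
kubectl get nodes
```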

...

The example below pulls the latest version from master; it's probably wiser to select the right version (the honolulu branch, or a specific review you want to test).

Code Block
cd
git clone --recursive "https://gerrit.onap.org/r/oom"

...

Code Block
helm plugin install --version v0.9.0 https://github.com/chartmuseum/helm-push.git

Once all plugins are installed, you should see them as available helm commands when doing :

Code Block
helm --help


6) Install the

...

chartmuseum repository

To align with how the previous releases were deployed, we will set up a local chart repository.
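As an illustration of what such a repository can look like (the image tag, port, and storage path below are assumptions to adapt to your setup), chartmuseum can be run as a container and registered as the helm repo the build pushes to:

```shell
# run chartmuseum with local storage (port and path are example values)
docker run -d --name chartmuseum -p 8080:8080 \
  -e STORAGE=local -e STORAGE_LOCAL_ROOTDIR=/charts \
  -v ~/chartstorage:/charts chartmuseum/chartmuseum:latest

# register it under the 'local' name used by the oom build
helm repo add local http://127.0.0.1:8080
```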

...

unless you already have docker, in which case you can skip this part altogether.

Or use snap:

Code Block
sudo snap install docker

8) Build all oom charts and store them in the chart repo

You should be ready to build all helm charts; go into the oom/kubernetes folder and run a full make.

Ensure you have "make" installed:

Code Block
sudo apt install make

Then build OOM

Code Block
cd ~/oom/kubernetes
make all
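A full build takes a while; if your oom branch supports these Makefile options (they appear in the OOM documentation but may vary per branch), you can skip linting and parallelize the build:

```shell
cd ~/oom/kubernetes
make all -e SKIP_LINT=TRUE -j4
```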

...

Code Block
# ONAP
127.0.0.1 aaf-gui
127.0.0.1 aai.ui.simpledemo.onap.org
127.0.0.1 appc.api.simpledemo.onap.org
127.0.0.1 cds.api.simpledemo.onap.org
127.0.0.1 cdt.api.simpledemo.onap.org
127.0.0.1 clamp.api.simpledemo.onap.org
127.0.0.1 nbi.api.simpledemo.onap.org
127.0.0.1 policy.api.simpledemo.onap.org
127.0.0.1 portal.api.simpledemo.onap.org
127.0.0.1 robot-onap.onap.org
127.0.0.1 sdc.api.fe.simpledemo.onap.org
127.0.0.1 sdc.api.simpledemo.onap.org
127.0.0.1 sdc.workflow.plugin.simpledemo.onap.org
127.0.0.1 so-monitoring
127.0.0.1 vid.api.simpledemo.onap.org

You can then access the portal UI by opening your browser to :
https://portal.api.simpledemo.onap.org:30225/ONAPPORTAL/login.htm
user/pass is cs0008/demo123456!

...