...

It is not yet clear which version of Microk8s works for which ONAP release. Microk8s 1.15 has been successfully tested with the Frankfurt release; e.g. the latest Microk8s version does not work with Frankfurt. Run the following command to install a specific version of Microk8s.
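
For example, installing Microk8s 1.15 via snap could look like this (a sketch; the exact channel depends on the release you target):

Code Block
languagebash
titleInstall a specific Microk8s version
# the 1.15/stable channel is assumed here to match the version tested above
sudo snap install microk8s --classic --channel=1.15/stable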

...

Code Block
languagebash
titleEnable add-ons of Microk8s
microk8s enable storage

microk8s enable dns helm

Microk8s 1.15 comes with helm v2.14.3, but the ONAP Frankfurt release requires helm v2.16.6. You can use snap to install the required helm version:

Code Block
languagebash
titleInstall Helm
snap install helm --classic --channel=2.16/stable


The default storage class is required in the Helm deploy command later on, so take note of it with the following command.
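
A quick way to list it (the Microk8s storage add-on enabled above typically provides the microk8s-hostpath class as the default):

Code Block
languagebash
# lists storage classes; the default one is marked "(default)"
microk8s kubectl get storageclass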

...

  1. Clone the ONAP Helm charts for a specific release, including submodules

    git clone --branch frankfurt --recurse-submodules "https://gerrit.onap.org/r/oom"

  2. Change to the kubernetes directory

    cd oom/kubernetes

  3. Initialize Helm and configure Tiller

    Code Block
    languagebash
    titleConfigure Tiller
    microk8s kubectl -n kube-system create serviceaccount tiller
     
    microk8s kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
     
    microk8s helm init --service-account tiller
     
    microk8s kubectl -n kube-system rollout status deploy/tiller-deploy
    


  4. Start Helm service and initialize local Helm repository

    make repo
    Info

    The Makefile script calls helm directly, so you have to create an alias to allow it to find the helm binary:

    sudo snap alias microk8s.helm helm
    # This does the same as:
    helm serve &
    helm repo add local http://127.0.0.1:8879

      

  5. Package ONAP Helm charts

    make

...

Code Block
languagebash
titleDeploy CDS
microk8s helm upgrade --install dev ./onap --namespace onap \
--set cds.enabled=true \
--set mariadb-galera.enabled=true \
--set global.masterPassword=random \
--set cds.cds-blueprints-processor.dmaapEnabled=false \
--set global.persistence.storageClass=microk8s-hostpath

...

As an end result you should see the CDS pods running. Note that the "cds-sdc-listener" pod will never come up with this setup because it depends on SDC.

Code Block
$ microk8s kubectl -n onap get pod
NAME                                            READY   STATUS     RESTARTS   AGE
dev-cds-blueprints-processor-5d74bff479-kn4jw   1/1     Running    0          8m51s
dev-cds-command-executor-745869c5f7-pt8sj       1/1     Running    0          8m51s
dev-cds-db-0                                    1/1     Running    0          8m51s
dev-cds-py-executor-7c5458f747-t88ws            1/1     Running    0          8m51s
dev-cds-sdc-listener-b5f59bdf5-ztsjc            0/1     Init:0/1   0          8m51s
dev-cds-ui-679cbf49fd-hswkd                     1/1     Running    0          8m51s
dev-mariadb-galera-0                            1/1     Running    0          8m51s
dev-mariadb-galera-1                            1/1     Running    1          4m25s
dev-mariadb-galera-2                            1/1     Running    0          2m36s

...

Ports can be forwarded, e.g. to access the CDS-UI from another machine. To forward the CDS-UI port, use the following command.

Code Block
microk8s kubectl port-forward dev-cds-ui-679cbf49fd-hswkd -n onap --address 0.0.0.0 3000:3000

Take care to provide the right pod name and port number. The port of the CDS-UI can be displayed with:

Code Block
microk8s kubectl -n onap get pod dev-cds-ui-679cbf49fd-hswkd --template='{{(index (index .spec.containers 0).ports 0).containerPort}}{{"\n"}}'
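
The pod name suffix will differ in your deployment; the exact name can be looked up with a simple filter, for example:

Code Block
# lists the CDS-UI pod, whose generated name suffix varies per deployment
microk8s kubectl -n onap get pod | grep cds-ui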

Afterwards the CDS-UI should be accessible externally.