This wiki describes how to set up a Kubernetes cluster with kubeadm, and then deploy SDN-C within that Kubernetes cluster.

(To view the current page, Chrome is the preferred browser. IE may add an extra "CR LF" to each line, which causes problems.)

Table of Contents

What is OpenStack? What is Kubernetes? What is Docker?

...

Setting up an OpenStack lab is out of scope for this tutorial. Assuming that you have a lab, you will need to create 1+n VMs: one to be configured as the Kubernetes master node, and "n" to be configured as Kubernetes worker nodes. We will create 3 Kubernetes worker nodes for this tutorial because we want each of our SDN-C replicas to land on a different VM for resiliency.

Note

There are some changes committed in the OOM repo. Two new pods were added: dmaap and ueb-listener. We observed issues deploying the SDNC pods using one master and three worker nodes. We were able to deploy SDNC with the latest OOM using one master and four worker nodes.

Hence, if you wish to use the latest OOM for SDNC deployment, it is recommended to add another compute node (VM 5) as a worker node (k8s-node4).


Create the Undercloud

The examples here use the OpenStackClient; however, the OpenStack Horizon GUI could be used instead. Start by creating 4 VMs with the hostnames k8s-master, k8s-node1, k8s-node2, and k8s-node3 (a hedged OpenStackClient sketch follows the list below). Each VM should have internet access and approximately:

  • 16384 MB of RAM
  • 20 GB of disk
  • 4 vCPUs
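
If you are creating the VMs with the OpenStackClient, a minimal sketch could look like the following. The image, flavor, keypair and network names are assumptions; replace them with the values used in your own lab.

Code Block
languagebash
# Placeholder names: adjust the image, keypair and network to your OpenStack lab.
openstack flavor create m1.k8s --ram 16384 --disk 20 --vcpus 4

for NODE in k8s-master k8s-node1 k8s-node2 k8s-node3; do
  openstack server create \
    --image ubuntu-16.04-server \
    --flavor m1.k8s \
    --key-name my-keypair \
    --network oam-network \
    ${NODE}
done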

Warning
How many resources are needed?

There was no evaluation of how much quota is actually needed; the above numbers were arbitrarily chosen as being sufficient. A lot more is likely needed if the full ONAP environment is deployed. For just SDN-C, this is more than plenty.

...

As the ubuntu user, run the following commands.

Code Block
languagebash
# (Optional) fix a vi bug in some versions of MobaXterm (it changes the first letter of an edited file to "g" after opening)
vi ~/.vimrc  ==> repeat for root/ubuntu and any other user which will edit files
# Add the following 2 lines.
syntax on
set background=dark



# Add the hostnames of the Kubernetes nodes (master and workers) to /etc/hosts
sudo vi /etc/hosts
# <IP address> <hostname>

# Turn off firewall and allow all incoming HTTP connections through IPTABLES
sudo ufw disable
sudo iptables -I INPUT -j ACCEPT

# Fix server timezone and select your timezone.
sudo dpkg-reconfigure tzdata


# (Optional) create a bash history file as the ubuntu user so that it does not accidentally get created as the root user.
touch ~/.bash_history  

# (Optional) turn on ssh password authentication and give the ubuntu user a password if you do not like using ssh keys.
# Set the "PasswordAuthentication yes" in the /etc/ssh/sshd_config file and then set the ubuntu password
sudo vi /etc/ssh/sshd_config;sudo systemctl restart sshd;sudo passwd ubuntu;

# Update the VM with the latest core packages
sudo apt clean
sudo apt update
sudo apt -y full-upgrade
sudo reboot

# Setup NTP on your image if needed. It is important that all the VMs' clocks are in sync, or joining Kubernetes nodes to the Kubernetes cluster will cause problems
sudo apt install ntp
sudo apt install ntpdate 

# It is recommended to add your local ntp-hostname or NTP server's IP address to /etc/ntp.conf
# sudo vi /etc/ntp.conf
# Sync up your VM clock with that of your NTP server. The best choice for the NTP server is a solid machine different from the Kubernetes VMs. Make sure you can ping it!
# A service restart is needed to sync the time up. You can run the following from the command line for an immediate change.


sudo vi /etc/ntp.conf
# Append the following lines to /etc/ntp.conf, to make them permanent.
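# For example (assumption - replace with your own NTP server's hostname or IP address), a line such as:
# server 10.247.5.11 iburst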

date 
sudo service ntp stop
sudo ntpdate -s <ntp-hostname | ntp server's IP address>   # e.g.: sudo ntpdate -s 10.247.5.11
sudo service ntp start
date


sudo reboot


# Some of the clustering scripts (switch_voting.sh and sdnc_cluster.sh) require JSON parsing, so install jq on the master only
sudo apt install -y jq

Question: Did you check the date on all K8S nodes to make sure they are in sync?
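
If you want to check all nodes from one place, a minimal sketch (assuming the ubuntu user can ssh to each node using the hostnames added to /etc/hosts above) is:

Code Block
languagebash
# Print the current date on every node; the clocks should match to within a second or two.
for NODE in k8s-master k8s-node1 k8s-node2 k8s-node3; do
  echo -n "${NODE}: "; ssh ubuntu@${NODE} date
done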

Install Docker

The ONAP apps are packaged in Docker containers, so Docker must be installed on all the VMs.

The following snippet was taken from https://docs.docker.com/engine/installation/linux/docker-ce/ubuntu/#install-docker-ce:

Code Block
languagebash
sudo apt-get install linux-image-extra-$(uname -r) linux-image-extra-virtual
sudo apt-get install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo apt-key fingerprint 0EBFCD88

# Add the docker repository to "/etc/apt/sources.list". It is the latest stable one for the ubuntu flavour on the machine ("lsb_release -cs")
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get -y install docker-ce

sudo docker run hello-world

Install the Kubernetes Packages

Just install the packages; there is no need to configure them yet.

The following snippet was taken from https://kubernetes.io/docs/setup/independent/install-kubeadm/:

Code Block
languagebash
# the "sudo -i" changes user to root.
sudo -i
apt-get update && apt-get install -y apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF


# run the following commands
apt-get update
apt-get install -y kubelet kubeadm kubectl


exit

Configure the Kubernetes Cluster with kubeadm

kubeadm is a utility provided by Kubernetes which simplifies the process of configuring a Kubernetes cluster.  Details about how to use it can be found here: https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/.

Configure the Kubernetes Master Node

The kubeadm init command sets up the Kubernetes master node. SSH to the k8s-master VM and invoke the following command. It is important to capture the output into a log file as there is information which you will need to refer to afterwards.

Code Block
languagebash
#on the k8s-master vm setup the kubernetes master node.  
# the "sudo -i" changes user to root.
sudo -i
kubeadm init | tee ~/kubeadm_init.log
# the "exist" reverts user back to ubuntu.
exit

The output of "kubeadm init" will look like below:

Code Block
languagebash
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.8.5
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.147.101.4]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] This often takes around a minute; or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 59.503216 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node k8s-master as master by adding a label and a taint
[markmaster] Master k8s-master tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: 76d969.a408a6a63c9c4b73
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy


Your Kubernetes master has initialized successfully!


To start using your cluster, you need to run (as a regular user):

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node
as root:

sudo kubeadm join --token b17b41.5095dd3db0129b0b 10.147.132.36:6443 --discovery-token-ca-cert-hash sha256:139ff626a5b177c4e4efc5dedd8ba5f2a84edbf42e3f2f70543095358ff97d3b

Execute the following snippet as the ubuntu and root user to get kubectl to work. 

Code Block
languagebash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

A "Pod network" must be deployed to use the cluster. This will let pods to communicate with eachother.

There are many different pod networks to choose from. See https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#pod-network for choices. For this tutorial, the Weave pod network was arbitrarily chosen (see https://www.weave.works/docs/net/latest/kubernetes/kube-addon/ for more information).

The following snippet will install the Weave pod network:

Code Block
languagebash
sudo kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"


# Wait for a few minutes and verify
$ kubectl get pods -n kube-system
NAME                                    READY     STATUS    RESTARTS   AGE
etcd-k8s-s2-master                      1/1       Running   0          1m
kube-apiserver-k8s-s2-master            1/1       Running   0          1m
kube-controller-manager-k8s-s2-master   1/1       Running   0          1m
kube-dns-6f4fd4bdf-7pb9j                3/3       Running   0          2m
kube-proxy-prnfr                        1/1       Running   0          2m
kube-scheduler-k8s-s2-master            1/1       Running   0          1m
weave-net-hhjct                         2/2       Running   0          1m

Install Helm and Tiller on the Kubernetes Master Node

ONAP uses Helm, so we need to install Helm. The following instructions were taken from https://docs.helm.sh/using_helm/#installing-helm:

Code Block
languagebash
#As a root user, download helm and install it
curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh
chmod 700 get_helm.sh
./get_helm.sh

Tiller requires a service account to be set up in Kubernetes before being initialized. The following snippet will do that for you:

Code Block
languagebash
# As a root user, create the kubernetes spec to define the helm service account
cat > tiller-serviceaccount.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: tiller-clusterrolebinding
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: ""
EOF


# Install Tiller
kubectl create -f tiller-serviceaccount.yaml

#init helm 
helm init --service-account tiller --upgrade

If you need to reset Helm, follow the steps below:

Code Block
languagebash
#uninstalls Tiller from a cluster
helm reset


#clean up any existing artifacts 
kubectl -n kube-system delete deployment tiller-deploy
kubectl -n kube-system delete serviceaccount tiller
kubectl -n kube-system delete ClusterRoleBinding tiller-clusterrolebinding


kubectl create -f tiller-serviceaccount.yaml

#init helm 
helm init --service-account tiller --upgrade

Configure the Kubernetes Worker Nodes 

Setting up cluster nodes is very easy: just refer back to the "kubeadm init" output log. In the log, there was a "kubeadm join" command with token parameters. Capture those parameters and then execute the command as root on each of the VMs which will be Kubernetes worker nodes: k8s-node1, k8s-node2, and k8s-node3.

The command looks like the following snippet:
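
The token, IP address and hash below are copied from the sample "kubeadm init" log shown earlier; use the values from your own log:

Code Block
languagebash
sudo kubeadm join --token b17b41.5095dd3db0129b0b 10.147.132.36:6443 --discovery-token-ca-cert-hash sha256:139ff626a5b177c4e4efc5dedd8ba5f2a84edbf42e3f2f70543095358ff97d3b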



# Verify the Docker installation (the hello-world container from the earlier "docker run hello-world"):
sudo docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                     PORTS               NAMES
c66d903a0b1f        hello-world         "/hello"            10 seconds ago      Exited (0) 9 seconds ago                       vigorous_bhabha


Install the Kubernetes Packages

Just install the packages; there is no need to configure them yet.

The following snippet was taken from https://kubernetes.io/docs/setup/independent/install-kubeadm/:

Code Block
languagebash
# The "sudo -i" changes user to root.
sudo -i
apt-get update && apt-get install -y apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -

# Add a kubernetes repository for the latest stable one for the ubuntu flavour on the machine (here: xenial)
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF

apt-get update

# As of today (late April 2018) version 1.10.1 of kubernetes packages are available. To install that version, you can run:
apt-get install -y kubelet=1.10.1-00
apt-get install -y kubectl=1.10.1-00
apt-get install -y kubeadm

# To install the latest version of the Kubernetes packages (recommended):
apt-get install -y kubelet kubeadm kubectl

# To install an older version of the kubernetes packages, use the following lines.
# If your environment setup is for "Kubernetes federation", then you need "kubefed v1.10.1". We recommend that all Kubernetes packages be of the same version.
apt-get install -y kubelet=1.8.6-00 kubernetes-cni=0.5.1-00
apt-get install -y kubectl=1.8.6-00
apt-get install -y kubeadm



# Verify version 
kubectl version
kubeadm version
kubelet --version

exit
# Append the following lines to ~/.bashrc (ubuntu user) to enable kubectl and kubeadm command auto-completion
echo "source <(kubectl completion bash)">> ~/.bashrc
echo "source <(kubeadm completion bash)">> ~/.bashrc

Note: If you intend to remove the kubernetes packages, use "apt autoremove kubelet; apt autoremove kubeadm; apt autoremove kubectl".

Configure the Kubernetes Cluster with kubeadm

kubeadm is a utility provided by Kubernetes which simplifies the process of configuring a Kubernetes cluster.  Details about how to use it can be found here: https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/.

Configure the Kubernetes Master Node (k8s-master)

The kubeadm init command sets up the Kubernetes master node. SSH to the k8s-master VM and invoke the following command. It is important to capture the output into a log file as there is information which you will need to refer to afterwards.

Note: A new add-on named "kube-dns" will be added to the master node. However, there is a recommended option to replace it with "CoreDNS" by providing the "--feature-gates=CoreDNS=true" parameter to the "kubeadm init" command.

Code Block
languagebash
# On the k8s-master vm setup the kubernetes master node.  
# The "sudo -i" changes user to root.
sudo -i

# There is no kubernetes app running. 
ps -ef | grep -i kube | grep -v grep

# Pick one DNS add-on: either "kube-dns" or "CoreDNS".  If your environment setup is for "Kubernetes federation" or "SDN-C Geographic Redundancy" then use "CoreDNS" addon.
# Note that kubeadm version 1.8.x does not have support for coredns feature gate. 
# Upgrade kubeadm to latest version before running below command:

# With "CoreDNS" addon (recommended)
kubeadm init --feature-gates=CoreDNS=true | tee ~/kubeadm_init.log 

# with kube-dns addon
kubeadm init | tee ~/kubeadm_init.log

# Verify that many kubernetes apps are running (kubelet, kube-scheduler, etcd, kube-apiserver, kube-proxy, kube-controller-manager)
ps -ef | grep -i kube | grep -v grep

# The "exit" reverts user back to ubuntu.
exit

The output of "kubeadm init" (with kube-dns addon) will look like below:

Code Block
languagebash
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.8.7
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks
[preflight] WARNING: docker version is greater than the most recently validated version. Docker version: 17.12.0-ce. Max validated version: 17.03
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [kubefed-1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.147.114.12]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] This often takes around a minute; or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 44.002324 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node kubefed-1 as master by adding a label and a taint
[markmaster] Master kubefed-1 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: 2246a6.83b4c7ca38913ce1
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token 2246a6.83b4c7ca38913ce1 10.147.114.12:6443 --discovery-token-ca-cert-hash sha256:ef25f42843927c334981621a1a3d299834802b2e2a962ae720640f74e361db2a

Execute the following snippet (as the ubuntu user) to get kubectl to work:

Code Block
languagebash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config


Verify that a set of pods has been created. The coredns or kube-dns pod will be in Pending state.

Code Block
languagebash
# If you installed the coredns addon
sudo kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                 READY     STATUS    RESTARTS   AGE       IP              NODE
kube-system   coredns-65dcdb4cf-8dr7w              0/1       Pending   0          10m       <none>          <none>
kube-system   coredns-65dcdb4cf-8ez2s              0/1       Pending   0          10m       <none>          <none>
kube-system   etcd-k8s-master                      1/1       Running   0          9m        10.147.99.149   k8s-master
kube-system   kube-apiserver-k8s-master            1/1       Running   0          9m        10.147.99.149   k8s-master
kube-system   kube-controller-manager-k8s-master   1/1       Running   0          9m        10.147.99.149   k8s-master
kube-system   kube-proxy-jztl4                     1/1       Running   0          10m       10.147.99.149   k8s-master
kube-system   kube-scheduler-k8s-master            1/1       Running   0          9m        10.147.99.149   k8s-master

# (There will be 2 coredns pods with kubernetes version 1.10.1 and higher)


# If you did not install the coredns addon, a kube-dns pod will be created instead (the other kube-system pods look the same as above)
sudo kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                 READY     STATUS    RESTARTS   AGE       IP              NODE
kube-system   kube-dns-6f4fd4bdf-czn68             3/3       Pending   0          10m       <none>          <none>

You now have a Kubernetes cluster with 3 worker nodes and 1 master node.

Configure dockerdata-nfs

This is a shared directory which must be mounted on all of the Kubernetes VMs (master node and worker nodes), because many of the ONAP pods use this directory to share data.

See 3. Share the /dockerdata-nfs Folder between Kubernetes Nodes for instruction on how to set this up.

Configuring SDN-C ONAP

Clone the OOM project only on the Kubernetes Master Node

As ubuntu user, clone the oom repository. 

Code Block
languagebash
git clone https://gerrit.onap.org/r/oom

...



# (Optional) run the following commands if you are curious.
sudo kubectl get node
sudo kubectl get secret
sudo kubectl config view
sudo kubectl config current-context
sudo kubectl get componentstatus
sudo kubectl get clusterrolebinding --all-namespaces
sudo kubectl get serviceaccounts --all-namespaces
sudo kubectl get pods --all-namespaces -o wide
sudo kubectl get services --all-namespaces -o wide
sudo kubectl cluster-info


A "Pod network" must be deployed to use the cluster. This will let pods to communicate with eachother.

There are many different pod networks to choose from. See https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#pod-network for choices. For this tutorial, the Weave pod network was arbitrarily chosen (see https://www.weave.works/docs/net/latest/kubernetes/kube-addon/ for more information).

The following snippet will install the Weave pod network:

Code Block
languagebash
sudo kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

# Sample output:
serviceaccount "weave-net" configured
clusterrole "weave-net" created
clusterrolebinding "weave-net" created
role "weave-net" created
rolebinding "weave-net" created
daemonset "weave-net" created

Pay attention to the new pod (and serviceaccount) for "weave-net". This pod provides pod-to-pod connectivity.

Verify the status of the pods. After a short while, the "Pending" status of "coredns" or "kube-dns" will change to "Running".

Code Block
languagebash
sudo kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE       IP               NODE
kube-system   etcd-k8s-master                      1/1       Running   0          1m        10.147.112.140   k8s-master
kube-system   kube-apiserver-k8s-master            1/1       Running   0          1m        10.147.112.140   k8s-master
kube-system   kube-controller-manager-k8s-master   1/1       Running   0          1m        10.147.112.140   k8s-master
kube-system   kube-dns-545bc4bfd4-jcklm            3/3       Running   0          44m       10.32.0.2        k8s-master
kube-system   kube-proxy-lnv7r                     1/1       Running   0          44m       10.147.112.140   k8s-master
kube-system   kube-scheduler-k8s-master            1/1       Running   0          1m        10.147.112.140   k8s-master
kube-system   weave-net-b2hkh                      2/2       Running   0          1m        10.147.112.140   k8s-master


#(There will be 2 coredns pods with different IP addresses, with kubernetes version 1.10.1)

# Verify that the AVAILABLE flag for the deployment "kube-dns" or "coredns" has changed to 1. (2 with kubernetes version 1.10.1)
#For coredns
sudo kubectl get deployment --all-namespaces
NAMESPACE     NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-system   coredns   2         2         2            2           2m

#For kubedns
sudo kubectl get deployment --all-namespaces
NAMESPACE     NAME       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-system   kube-dns   1         1         1            1           1h

Troubleshooting tip: 

  • If any of the weave pods faces a problem and gets stuck in the "ImagePullBackOff" state, you can try running the " sudo kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')" " command again. 
  • Sometimes you need to delete the problematic pod, to let it terminate and start fresh. Use "kubectl delete po/<pod-name> -n <name-space>" to delete a pod.
  • To "unjoin" a worker node, run "kubectl delete node <node-name>" (go through the "Undeploy SDNC" process at the end if you have an SDNC cluster running).
  • If for any reason you need to re-create the kubernetes cluster, first remove /etc/kubernetes/, /var/lib/etcd and /etc/systemd/system/kubelet.service.d/, then run the kubeadm init command (see the sketch after this list).
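
A hedged sketch of that last re-create step (destructive; only run it on a cluster you really intend to rebuild):

Code Block
languagebash
# Remove the old cluster state listed in the tip above, then re-initialize the master.
sudo rm -rf /etc/kubernetes/ /var/lib/etcd /etc/systemd/system/kubelet.service.d/
sudo kubeadm init | tee ~/kubeadm_init.log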


Install Helm and Tiller on the Kubernetes Master Node (k8s-master)

ONAP uses Helm, a package manager for kubernetes.

Install helm (client side). The following instructions were taken from https://docs.helm.sh/using_helm/#installing-helm:

Note: You may need to install an older version of helm; if so, follow the "Downgrade helm" section (scroll down).

Code Block
languagebash
# As a root user, download helm and install it
curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh
chmod 700 get_helm.sh
./get_helm.sh


Install Tiller (server side of helm)

Tiller manages installation of helm packages (charts).  Tiller requires ServiceAccount setup in Kubernetes before being initialized. The following snippet will do that for you:

(Chrome is the preferred browser. IE may add an extra "CR LF" to each line, which causes problems.)

Code Block
languagebash
# id
ubuntu


# As the ubuntu user, create a yaml file to define the helm service account and cluster role binding.
cat > tiller-serviceaccount.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: tiller-clusterrolebinding
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: ""
EOF


# Create a ServiceAccount and ClusterRoleBinding based on the created file. 
sudo kubectl create -f tiller-serviceaccount.yaml

# Verify 
which helm
helm version
# Only the client version is shown. Expect a delay in getting the prompt back; press CTRL+C to get the prompt back!


Initialize helm. This command installs Tiller. It also discovers Kubernetes clusters by reading $KUBECONFIG (default '~/.kube/config') and using the default context.

Code Block
languagebash
helm init --service-account tiller --upgrade


# A new pod is created, but will be in pending status.
kubectl get pods --all-namespaces -o wide  | grep tiller
kube-system   tiller-deploy-b6bf9f4cc-vbrc5           0/1       Pending   0          7m        <none>           <none>


# A new service is created 
kubectl get services --all-namespaces -o wide | grep tiller
kube-system   tiller-deploy   ClusterIP   10.102.74.236   <none>        44134/TCP       47m       app=helm,name=tiller

# A new deployment is created, but the AVAILABLE flag is set to "0".

kubectl get deployments --all-namespaces
NAMESPACE     NAME            DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-system   kube-dns        1         1         1            1           1h
kube-system   tiller-deploy   1         1         1            0           8m


Configure the Kubernetes Worker Nodes (k8s-node<n>)

Setting up cluster nodes is very easy. Just refer back to the "kubeadm init" output log (/root/kubeadm_init.log). In the last line of the log, there is a "kubeadm join" command with token information and other parameters.

Capture those parameters and then execute it as root on each of the Kubernetes worker nodes:  k8s-node1, k8s-node2, and k8s-node3.

After running the "kubeadm join" command on a worker node,

  • 2 new pods (proxy and weave) will be created on Master node and will be assigned to the worker node. 
  • The tiller pod status will change to "running" . 
  • The AVAILABLE flag for tiller-deploy deployment will be changed to "1".
  • The worker node will join the cluster.


The command looks like the following snippet (find the command at the bottom of /root/kubeadm_init.log):

Code Block
languagebash
# Should change to root user on the worker node.
kubeadm join --token 2246a6.83b4c7ca38913ce1 10.147.114.12:6443 --discovery-token-ca-cert-hash sha256:ef25f42843927c334981621a1a3d299834802b2e2a962ae720640f74e361db2a


# Make sure in the output, you see "This node has joined the cluster:".

Verify the results from master node:

Code Block
languagebash
kubectl get pods --all-namespaces -o wide  

kubectl get nodes
# Sample Output:
NAME            STATUS    ROLES     AGE       VERSION
k8s-master   Ready     master    2h        v1.8.6
k8s-node1    Ready     <none>    53s       v1.8.6

Make sure you run the same "kubeadm join" command once on each worker node and verify the results. 


Return to the Kubernetes master node VM and execute the "kubectl get nodes" command to see all the Kubernetes nodes. It might take some time, but eventually each node will have a status of "Ready":

Code Block
languagebash
kubectl get nodes

# Sample Output:
NAME         STATUS    ROLES     AGE       VERSION
k8s-master   Ready     master    1d        v1.8.5
k8s-node1    Ready     <none>    1d        v1.8.5
k8s-node2    Ready     <none>    1d        v1.8.5
k8s-node3    Ready     <none>    1d        v1.8.5


Make sure that the tiller pod is running. Execute the following command (from master node) and look for a po/tiller-deploy-xxxx with a “Running” status. For example:

(If you are using coredns instead of kube-dns, you will notice the tiller pod has only one container.)

Code Block
languagebash
kubectl get pods --all-namespaces -o wide
# Sample output:
NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE       IP               NODE
kube-system   etcd-k8s-master                         1/1       Running   0          1h        10.147.112.140   k8s-master
kube-system   kube-apiserver-k8s-master               1/1       Running   0          1h        10.147.112.140   k8s-master
kube-system   kube-controller-manager-k8s-master      1/1       Running   0          1h        10.147.112.140   k8s-master
kube-system   kube-dns-545bc4bfd4-jcklm               3/3       Running   0          2h        10.32.0.2        k8s-master
kube-system   kube-proxy-4zztj                        1/1       Running   0          2m        10.147.112.150   k8s-node2
kube-system   kube-proxy-lnv7r                        1/1       Running   0          2h        10.147.112.140   k8s-master
kube-system   kube-proxy-t492g                        1/1       Running   0          20m       10.147.112.164   k8s-node1
kube-system   kube-proxy-xx8df                        1/1       Running   0          2m        10.147.112.169   k8s-node3
kube-system   kube-scheduler-k8s-master               1/1       Running   0          1h        10.147.112.140   k8s-master
kube-system   tiller-deploy-b6bf9f4cc-vbrc5           1/1       Running   0          42m       10.44.0.1        k8s-node1
kube-system   weave-net-b2hkh                         2/2       Running   0          1h        10.147.112.140   k8s-master
kube-system   weave-net-s7l27                         2/2       Running   1          2m        10.147.112.169   k8s-node3
kube-system   weave-net-vmlrq                         2/2       Running   0          20m       10.147.112.164   k8s-node1
kube-system   weave-net-xxgnq                         2/2       Running   1          2m        10.147.112.150   k8s-node2

Now you have a Kubernetes cluster with 3 worker nodes and 1 master node.

Cluster's Full Picture

You can run " kubectl describe node" on the Master node and get a complete report on nodes (including workers) and thier system resources.

Configure dockerdata-nfs

This is a shared directory which must be mounted on all of the Kubernetes VMs (master node and worker nodes), because many of the ONAP pods use this directory to share data.

See 3. Share the /dockerdata-nfs Folder between Kubernetes Nodes for instruction on how to set this up.
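
The linked page is the authoritative procedure. As a rough sketch only, assuming the master node doubles as the NFS server and the worker nodes mount the share from it:

Code Block
languagebash
# On the NFS server (assumption: k8s-master exports the folder)
sudo apt install -y nfs-kernel-server
sudo mkdir -p /dockerdata-nfs
echo "/dockerdata-nfs *(rw,no_root_squash,no_subtree_check)" | sudo tee -a /etc/exports
sudo systemctl restart nfs-kernel-server

# On each worker node, mount the share
sudo apt install -y nfs-common
sudo mkdir -p /dockerdata-nfs
sudo mount -t nfs k8s-master:/dockerdata-nfs /dockerdata-nfs
# Add a line like the following to /etc/fstab to make the mount persistent:
# k8s-master:/dockerdata-nfs /dockerdata-nfs nfs auto 0 0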


Configuring SDN-C ONAP


Clone the OOM project only on the Kubernetes Master Node

Note
Helm support

OOM deployment is now done using helm.

It is highly recommended to stop here and continue from this page: Deploying SDN-C using helm chart


As ubuntu user, clone the oom repository. 

Code Block
languagebash
git clone https://gerrit.onap.org/r/oom
Note

You may use any specific known stable OOM release for SDNC deployment. The above URL downloads the latest OOM.

Info

We identified some issues with the latest OOM deployment after the namespace change. The details and resolutions for these issues are provided below:

There are a few things missing after the namespace change:

  1. PV is not getting created, but the PVC is. So we need to provide a PV explicitly so it is available for the PVC to claim. Refer to the attached files - pv-volume-1.yaml and pv-volume-2.yaml (a hedged sketch of such manifests appears after this list). To create, use: kubectl create -f <filename>.yaml

    Code Block
    # Verify PVC is "Bound"
    ubuntu@k8s-s1-master:/home/ubuntu# kubectl get pvc --all-namespaces
    NAMESPACE   NAME                      STATUS    VOLUME        CAPACITY   ACCESS MODES   STORAGECLASS     AGE
    onap        sdnc-data-sdnc-dbhost-0   Bound     nfs-volume4   11Gi       RWO,RWX        onap-sdnc-data   1h
    onap        sdnc-data-sdnc-dbhost-1   Bound     nfs-volume5   11Gi       RWO,RWX        onap-sdnc-data   43m
    
    
    # Verify PV is "Bound"
    ubuntu@k8s-s1-master:/home/ubuntu# kubectl get pv --all-namespaces
    NAMESPACE   NAME          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                          STORAGECLASS     REASON    AGE
                nfs-volume4   11Gi       RWO,RWX        Retain           Bound     onap/sdnc-data-sdnc-dbhost-0   onap-sdnc-data             1h
                nfs-volume5   11Gi       RWO,RWX        Retain           Bound     onap/sdnc-data-sdnc-dbhost-1   onap-sdnc-data             2m
    
    
  2. ServiceAccount – "default" in the new namespace (onap) is not bound to the cluster-admin role. As a result, it gives this issue:
    E0319 15:40:32.717436       1 reflector.go:201] github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:369: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:serviceaccount:onap:default" cannot list storageclasses.storage.k8s.io at the cluster scope
    Resolution: Create a cluster role binding for this service account explicitly. Refer to the attached file - binding.yaml. To create, use: kubectl create -f <filename>.yaml

  3. The secret for docker registry to pull images is not getting created. As a result, it gives issue:

Warning  FailedSync             <invalid> (x3 over <invalid>)  kubelet, k8s-s1-node3  Error syncing pod

Normal   BackOff                <invalid>                      kubelet, k8s-s1-node3  Back-off pulling image "nexus3.onap.org:10001/onap/sdnc-image:v1.2.1"

Normal   Pulling                <invalid> (x3 over <invalid>)  kubelet, k8s-s1-node3  pulling image "nexus3.onap.org:10001/onap/sdnc-image:v1.2.1"

Warning  Failed                 <invalid> (x3 over <invalid>)  kubelet, k8s-s1-node3  Failed to pull image "nexus3.onap.org:10001/onap/sdnc-image:v1.2.1": rpc error: code = Unknown desc = Error response from daemon: Get https://nexus3.onap.org:10001/v2/onap/sdnc-image/manifests/v1.2.1: no basic auth credentials

Resolution: Create the secret explicitly using command: kubectl --namespace onap create secret docker-registry onap-docker-registry-key --docker-server=nexus3.onap.org:10001 --docker-username=docker --docker-password=docker --docker-email=docker@nexus3.onap.org
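
The attached pv-volume-*.yaml and binding.yaml files are not reproduced on this page. As a hedged sketch only, manifests along the following lines would match the capacity, access modes and storage class shown in the PVC output above and bind the "default" ServiceAccount in the onap namespace to cluster-admin; the PV name, NFS server and export path are assumptions to adjust for your environment:

Code Block
languagebash
# Hypothetical PV matching the PVC output above; adjust name, size, server and path to your setup.
cat > pv-volume-example.yaml << EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-volume4
spec:
  capacity:
    storage: 11Gi
  accessModes:
    - ReadWriteOnce
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: onap-sdnc-data
  nfs:
    server: k8s-s1-master             # assumption: your NFS server
    path: /dockerdata-nfs/sdnc-data   # assumption: an exported directory
EOF

# Hypothetical ClusterRoleBinding for the "default" ServiceAccount in the onap namespace.
cat > binding-example.yaml << EOF
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: onap-default-cluster-admin
subjects:
- kind: ServiceAccount
  name: default
  namespace: onap
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
EOF

kubectl create -f pv-volume-example.yaml
kubectl create -f binding-example.yaml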

We were able to deploy the latest OOM (after the namespace change) successfully after these resolutions:

Code Block
root@k8s-s1-master:~# kubectl get pods --all-namespaces
NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE
kube-system   coredns-65dcdb4cf-2vmwp                 1/1       Running   0          4d
kube-system   etcd-k8s-s1-master                      1/1       Running   0          4d
kube-system   kube-apiserver-k8s-s1-master            1/1       Running   0          4d
kube-system   kube-controller-manager-k8s-s1-master   1/1       Running   0          4d
kube-system   kube-proxy-pjtgh                        1/1       Running   0          4d
kube-system   kube-proxy-pmzmw                        1/1       Running   0          4d
kube-system   kube-proxy-zbbjp                        1/1       Running   0          4d
kube-system   kube-proxy-zrhd2                        1/1       Running   0          4d
kube-system   kube-proxy-zrn7d                        1/1       Running   0          4d
kube-system   kube-scheduler-k8s-s1-master            1/1       Running   0          4d
kube-system   tiller-deploy-7bf964fff8-g5rnm          1/1       Running   0          4d
kube-system   weave-net-8hdkl                         2/2       Running   0          4d
kube-system   weave-net-bq5rx                         2/2       Running   0          4d
kube-system   weave-net-jxdb8                         2/2       Running   0          4d
kube-system   weave-net-nb8sw                         2/2       Running   0          4d
kube-system   weave-net-wnrbw                         2/2       Running   0          4d
onap          sdnc-0                                  2/2       Running   0          1d
onap          sdnc-1                                  2/2       Running   0          1d
onap          sdnc-2                                  2/2       Running   0          1d
onap          sdnc-dbhost-0                           2/2       Running   0          1d
onap          sdnc-dbhost-1                           2/2       Running   1          1d
onap          sdnc-dgbuilder-65444884c7-k2h67         1/1       Running   0          1d
onap          sdnc-dmaap-listener-567c7b744b-xrld2    1/1       Running   0          1d
onap          sdnc-nfs-provisioner-6db9648675-25bnb   1/1       Running   0          1d
onap          sdnc-portal-5f74449bb5-rffzt            1/1       Running   0          1d
onap          sdnc-ueb-listener-5bb66785c8-6xv7m      1/1       Running   0          1d



Get the following 2 gerrit changes from Configure SDN-C Cluster Deployment .

Local Nexus

Optional: if you have a local nexus3 for your docker repo, you can use the following snippet to update the oom to pull from your local repo. This will speed up deployment time.

Code Block
languagebash
# Update nexus3 to your local repo
find ~/oom -type f -exec \
    sed -i 's/nexus3\.onap\.org:10001/yournexus:port/g' {} +


Configure ONAP

As ubuntu user, 

Code Block
languagebash
cd ~/oom/kubernetes/oneclick/
source setenv.bash

cd ~/oom/kubernetes/config/
# Dummy values can be used as we will not be deploying a VM
cp onap-parameters-sample.yaml onap-parameters.yaml
./createConfig.sh -n onap


Wait for the ONAP config pod to change state from ContainerCreating to Running and finally to Completed. It should have a "Completed" status:

Code Block
languagebash
kubectl -n onap get pods --show-all
# Sample output:
NAME      READY     STATUS      RESTARTS   AGE
config    0/1       Completed   0          9m


$ kubectl get pod --all-namespaces -a | grep onap
onap          config                                  0/1       Completed   0          9m


$ kubectl get namespaces
# Sample output:
NAME          STATUS    AGE
default       Active    50m
kube-public   Active    50m
kube-system   Active    50m
onap          Active    8m


Deploy the SDN-C Pods

Code Block
languagebash
cd ~/oom/kubernetes/oneclick/
source setenv.bash
./createAll.bash -n onap -a sdnc


# Verify: Repeat the following command until installation is completed and all pods are running. 
kubectl -n onap-sdnc get pod -o wide

3 SDNC pods will be created, each assigned to a separate Kubernetes worker node. 

2 DBhost pods will be created, each assigned to a separate Kubernetes worker node. 

Verify SDNC Clustering

Refer to Validate the SDN-C ODL cluster.

Undeploy SDNC

Code Block
languagebash
$ cd ~/oom/kubernetes/oneclick/
$ source setenv.bash
$ ./deleteAll.bash -n onap
$ ./deleteAll.bash -n onap -a sdnc
$ sudo rm -rf /dockerdata-nfs

Get the details from the Kubernetes Master Node


Access to the RestConf UI is via https://<Kubernetes-Master-Node-IP>:30202/apidoc/explorer/index.html (admin user)
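
A quick check that the RestConf endpoint answers (the IP and the admin password are placeholders for your own values):

Code Block
languagebash
# Expect an HTTP 200 if the apidoc explorer is reachable.
curl -k -o /dev/null -w "%{http_code}\n" -u admin:<admin-password> "https://<k8s-master-ip>:30202/apidoc/explorer/index.html"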

Run the following command to make sure installation is error free.

Code Block
languagebash
$ kubectl cluster-info
Kubernetes master is running at https://10.147.112.158:6443
KubeDNS is running at https://10.147.112.158:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Code Block
languagebash
$ kubectl -n onap-sdnc get all
NAME                            DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/nfs-provisioner          1         1         1            1           5h
deploy/sdnc-dgbuilder           1         1         1            1           5h
deploy/sdnc-portal              1         1         1            1           5h

NAME                            DESIRED   CURRENT   READY     AGE
rs/nfs-provisioner-6cb95b597d   1         1         1         5h
rs/sdnc-dgbuilder-557b6879cd    1         1         1         5h
rs/sdnc-portal-7bb789ccd6       1         1         1         5h

NAME                       DESIRED   CURRENT   AGE
statefulsets/sdnc          3         3         5h
statefulsets/sdnc-dbhost   2         2         5h

NAME                                  READY     STATUS    RESTARTS   AGE
po/nfs-provisioner-6cb95b597d-jjhv5   1/1       Running   0          5h
po/sdnc-0                             2/2       Running   0          5h
po/sdnc-1                             2/2       Running   0          5h
po/sdnc-2                             2/2       Running   0          5h
po/sdnc-dbhost-0                      2/2       Running   0          5h
po/sdnc-dbhost-1                      2/2       Running   0          5h
po/sdnc-dgbuilder-557b6879cd-9nkv4    1/1       Running   0          5h
po/sdnc-portal-7bb789ccd6-5z9w4       1/1       Running   0          5h

NAME                  TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                                        AGE
svc/dbhost            ClusterIP   None             <none>        3306/TCP                                       5h
svc/dbhost-read       ClusterIP   10.109.106.173   <none>        3306/TCP                                       5h
svc/nfs-provisioner   ClusterIP   10.101.77.92     <none>        2049/TCP,20048/TCP,111/TCP,111/UDP             5h
svc/sdnc-dgbuilder    NodePort    10.111.87.223    <none>        3000:30203/TCP                                 5h
svc/sdnc-portal       NodePort    10.102.21.163    <none>        8843:30201/TCP                                 5h
svc/sdnctldb01        ClusterIP   None             <none>        3306/TCP                                       5h
svc/sdnctldb02        ClusterIP   None             <none>        3306/TCP                                       5h
svc/sdnhost           NodePort    10.108.100.101   <none>        8282:30202/TCP,8201:30208/TCP,8280:30246/TCP   5h
svc/sdnhost-cluster   ClusterIP   None             <none>        2550/TCP                                       5h
Code Block
languagebash
$ kubectl -n onap-sdnc get pod
NAME                               READY     STATUS              RESTARTS
nfs-provisioner-6cb95b597d-jjhv5   0/1       ContainerCreating   0
sdnc-0                             0/2       Init:0/1            0
sdnc-1                             0/2       Init:0/1            0
sdnc-2                             0/2       Init:0/1            0
sdnc-dbhost-0                      0/2       Init:0/2            0
sdnc-dgbuilder-557b6879cd-9nkv4    0/1       Init:0/1            0
sdnc-portal-7bb789ccd6-5z9w4       0/1       Init:0/1            0


# Wait few minutes


$ kubectl -n onap-sdnc get pod
NAME                               READY     STATUS    RESTARTS   AGE
nfs-provisioner-6cb95b597d-jjhv5   1/1       Running   0          23m
sdnc-0                             2/2       Running   0          23m
sdnc-1                             2/2       Running   0          23m
sdnc-2

...

Local Nexus

Optional: if you have a local nexus3 for your docker repo, you can use the following snippet to update the oom to pull from your local repo. This will speed up deployment time.

Code Block
languagebash
# Update nexus3 to your local repo
find ~/oom -type f -exec \
    sed -i 's/nexus3\.onap\.org:10001/yournexus:port/g' {} +

Configure ONAP

As ubuntu user, 

Code Block
languagebash
cd ~/oom/kubernetes/oneclick/
source setenv.bash

cd ~/oom/kubernetes/config/
#dummy values can be used as we will not be deploying a VM
cp onap-parameters-sample.yaml onap-parameters.yaml
./createConfig.sh -n onap

Wait for the ONAP config pod to change state from ContainerCreating to Running and finally to Completed. It should have a "Completed" status:

Code Block
languagebash
kubectl -n onap get pods --show-all
#Sample output:
NAME      READY     STATUS      RESTARTS   AGE
config    0/1       Completed   0          9m


$ kubectl get pod --all-namespaces -a | grep onap
onap          config                                  0/1       Completed   0          9m


$ kubectl get namespaces
#Sample output:
NAME          STATUS    AGE
default       Active    50m
kube-public   Active    50m
kube-system   Active    50m
onap          Active    8m

Deploy the SDN-C Pods

As ubuntu user, apply the following patch to the kubernetes/sdnc/templates/nfs-provisoner-deployment.yaml file so that it does not try to export the /dockerdata-nfs as an NFS mount (because this folder was exported by the NFS server during configuration earlier):

Code Block
languagebash
cd ~/
cat > nfs-provisoner.patch <<EOF
diff --git a/kubernetes/sdnc/templates/nfs-provisoner-deployment.yaml b/kubernetes/sdnc/templates/nfs-provisoner-deployment.yaml
index 75f57fd..0952da8 100644
--- a/kubernetes/sdnc/templates/nfs-provisoner-deployment.yaml
+++ b/kubernetes/sdnc/templates/nfs-provisoner-deployment.yaml
@@ -51,5 +51,6 @@ spec:
       volumes:
         - name: export-volume
           hostPath:
-            path: /dockerdata-nfs/{{ .Values.nsPrefix }}/sdnc/data
+            path: /nfs-provisioner/{{ .Values.nsPrefix }}/sdnc/data
+            type: DirectoryOrCreate
 #{{ end }}
EOF

cd ~/oom
git apply ~/nfs-provisoner.patch

cd ~/oom/kubernetes/oneclick/
source setenv.bash
./createAll.bash -n onap -a sdnc


# Repeat the following command until installation is completed and all pods are running.
kubectl -n onap-sdnc get pod -o wide

3 SDNC pods will be created, each assigned to a separate Kubernetes worker node. 

2 DBhost pods will be created, each assigned to a separate Kubernetes worker node. 

Verify SDNC Clustering

Refer to Validate the SDN-C ODL cluster.

Undeploy SDNC

Code Block
languagebash
$ cd ~/oom/kubernetes/oneclick/
$ source setenv.bash
$ ./deleteAll.bash -n onap
$ ./deleteAll.bash -n onap -a sdnc
$ sudo rm -rf /dockerdata-nfs

Get the details from the Kubernetes Master Node

Access to the RestConf UI is via https://<Kubernetes-Master-Node-IP>:30202/apidoc/explorer/index.html (admin user)

Run the following command to make sure installation is error free.

Code Block
languagebash
$ kubectl cluster-info
Kubernetes master is running at https://10.147.112.158:6443
KubeDNS is running at https://10.147.112.158:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Code Block
languagebash
$ kubectl -n onap-sdnc get all
NAME                            DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/nfs-provisioner          1         1         1            1           5h
deploy/sdnc-dgbuilder           1         1         1            1           5h
deploy/sdnc-portal              1         1         1            1           5h

NAME                            DESIRED   CURRENT   READY     AGE
rs/nfs-provisioner-6cb95b597d   1         1         1         5h
rs/sdnc-dgbuilder-557b6879cd    1         1         1         5h
rs/sdnc-portal-7bb789ccd6       1         1         1         5h

NAME                       DESIRED   CURRENT   AGE
statefulsets/sdnc          3         3         5h
statefulsets/sdnc-dbhost   2         2         5h

NAME                                  READY     STATUS    RESTARTS   AGE
po/nfs-provisioner-6cb95b597d-jjhv5   1/1       Running   0          5h
po/sdnc-0                             2/2       Running   0          5h
po/sdnc-1                             2/2       Running   0          5h
po/sdnc-2                             2/2       Running   0          5h
po/sdnc-dbhost-0                      2/2       Running   0          5h
po/sdnc-dbhost-1                      2/2       Running   0          5h
po/sdnc-dgbuilder-557b6879cd-9nkv4    1/1       Running   0          5h
po/sdnc-portal-7bb789ccd6-5z9w4       1/1       Running   0          5h

NAME                  TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                                        AGE
svc/dbhost            ClusterIP   None             <none>        3306/TCP                                       5h
svc/dbhost-read       ClusterIP   10.109.106.173   <none>        3306/TCP                                       5h
svc/nfs-provisioner   ClusterIP   10.101.77.92     <none>        2049/TCP,20048/TCP,111/TCP,111/UDP             5h
svc/sdnc-dgbuilder    NodePort    10.111.87.223    <none>        3000:30203/TCP                                 5h
svc/sdnc-portal       NodePort    10.102.21.163    <none>        8843:30201/TCP                                 5h
svc/sdnctldb01        ClusterIP   None             <none>        3306/TCP                                       5h
svc/sdnctldb02        ClusterIP   None             <none>        3306/TCP                                       5h
svc/sdnhost           NodePort    10.108.100.101   <none>        8282:30202/TCP,8201:30208/TCP,8280:30246/TCP   5h
svc/sdnhost-cluster   ClusterIP   None             <none>        2550/TCP                                       5h
Code Block
languagebash
$ kubectl -n onap-sdnc get pod
NAME         READY     STATUS      RESTARTS   AGE
kube-system   etcd-k8s-s2-master                      1/1       Running     0          13h
kube-system   kube-apiserver-k8s-s2-master            1/1       Running     0          13h
kube-system   kube-controller-manager-k8s-s2-master   1/1       Running     0          13h
kube-system   kube-dns-6f4fd4bdf-n8rgs                3/3       Running     0          14h
kube-system   kube-proxy-l8gsk                        1/1       Running     0          12h
kube-system   kube-proxy-pdz6h                        1/1       Running     0          12h
kube-system   kube-proxy-q7zz2                        1/1       Running     0        READY  12h
kube-system   STATUSkube-proxy-r76g9              RESTARTS
nfs-provisioner-6cb95b597d-jjhv5   0/1       ContainerCreating1/1   0
sdnc-0    Running     0          14h
kube-system   kube-scheduler-k8s-s2-master       0/2       Init:01/1       Running     0
sdnc-1          13h
kube-system   tiller-deploy-6657cd6b8d-f6p9h           1/1     0/2  Running     Init:0/1          12h
kube-system   0
sdnc-2weave-net-mwdjd                         2/2     0/2  Running     Init:0/1          12h
kube-system  0
sdnc-dbhost-0 weave-net-sl7gg                         02/2       Running Init:0/2    2         0 12h
sdnc-dgbuilder-557b6879cd-9nkv4kube-system   weave-net-t6nmx      0/1       Init:0/1            0
sdnc-portal-7bb789ccd6-5z9w42/2       0/1Running     1  Init:0/1        13h
kube-system    0


# wait few minutes


$ kubectl -n onap-sdnc get pod
NAMEweave-net-zmqcf                         2/2       Running     2   READY     STATUS  12h
onap-sdnc  RESTARTS   AGE
nfs-provisioner-6cb95b597d-jjhv5        1/1       Running     0          23m5h
onap-sdnc     sdnc-0                                  2/2       Running     0          23m
5h
onap-sdnc     sdnc-1                                  2/2       Running     0          23m5h
onap-sdnc     sdnc-2                                  2/2       Running     0          23m5h
onap-sdnc     sdnc-dbhost-0                           2/2       Running     0  Running   0     5h
onap-sdnc     23m
sdnc-dbhost-1                           2/2       Running     0          21m5h
onap-sdnc     sdnc-dgbuilder-557b6879cd-9nkv4         1/1       Running     0          23m5h
onap-sdnc     sdnc-portal-7bb789ccd6-5z9w4            1/1       Running   0   0       23m
Code Block
languagebash
$ kubectl get pod --all-namespaces -a
NAMESPACE     NAME                                    READY     STATUS      RESTARTS   AGE
kube-system   etcd-k8s-s2-master                      1/1       Running     0          13h
kube-system   kube-apiserver-k8s-s2-master            1/1       Running     0          13h
kube-system   kube-controller-manager-k8s-s2-master   1/1       Running     0          13h
kube-system   kube-dns-6f4fd4bdf-n8rgs                3/3       Running     0          14h
kube-system   kube-proxy-l8gsk                        1/1       Running     0          12h
kube-system   kube-proxy-pdz6h                        1/1       Running     0          12h
kube-system   kube-proxy-q7zz2                        1/1       Running     0          12h
kube-system   kube-proxy-r76g9                        1/1       Running     0          14h
kube-system   kube-scheduler-k8s-s2-master            1/1       Running     0          13h
kube-system   tiller-deploy-6657cd6b8d-f6p9h          1/1       Running     0          12h
kube-system   weave-net-mwdjd                         2/2       Running     1          12h
kube-system   weave-net-sl7gg                         2/2       Running     2          12h
kube-system   weave-net-t6nmx                         2/2       Running     1          13h
kube-system   weave-net-zmqcf                         2/2       Running     2          12h
onap          config                                  0/1       Completed   0          5h
onap-sdnc     nfs-provisioner-6cb95b597d-jjhv5        1/1       Running     0          5h
onap-sdnc     sdnc-0                                  2/2       Running     0          5h
onap-sdnc     sdnc-1                                  2/2       Running     0          5h
onap-sdnc     sdnc-2                                  2/2       Running     0          5h
onap-sdnc     sdnc-dbhost-0                           2/2       Running     0          5h
onap-sdnc     sdnc-dbhost-1                           2/2       Running     0          5h
onap-sdnc     sdnc-dgbuilder-557b6879cd-9nkv4         1/1       Running     0          5h
onap-sdnc     sdnc-portal-7bb789ccd6-5z9w4            1/1       Running     0          5h

Code Block
languagebash
$ kubectl -n onap-sdnc get pod -o wide
NAME                               READY     STATUS    RESTARTS   AGE       IP          NODE
nfs-provisioner-6cb95b597d-jjhv5   1/1       Running   0          5h        10.36.0.1   k8s-s2-node0
sdnc-0                             2/2       Running   0          5h        10.36.0.2   k8s-s2-node0
sdnc-1                             2/2       Running   0          5h        10.42.0.1   k8s-s2-node1
sdnc-2                             2/2       Running   0          5h        10.44.0.3   k8s-s2-node2
sdnc-dbhost-0                      2/2       Running   0          5h        10.44.0.4   k8s-s2-node2
sdnc-dbhost-1                      2/2       Running   0          5h        10.42.0.3   k8s-s2-node1
sdnc-dgbuilder-557b6879cd-9nkv4    1/1       Running   0          5h        10.44.0.2   k8s-s2-node2
sdnc-portal-7bb789ccd6-5z9w4       1/1       Running   0          5h        10.42.0.2   k8s-s2-node1

Code Block
languagebash
$ kubectl get services --all-namespaces -o wide
NAMESPACE     NAME              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                                        AGE       SELECTOR
default       kubernetes        ClusterIP   10.96.0.1        <none>        443/TCP                                        14h       <none>
kube-system   kube-dns          ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP                                  14h       k8s-app=kube-dns
kube-system   tiller-deploy     ClusterIP   10.96.194.1      <none>        44134/TCP                                      12h       app=helm,name=tiller
onap-sdnc     dbhost            ClusterIP   None             <none>        3306/TCP                                       5h        app=sdnc-dbhost
onap-sdnc     dbhost-read       ClusterIP   10.109.106.173   <none>        3306/TCP                                       5h        app=sdnc-dbhost
onap-sdnc     nfs-provisioner   ClusterIP   10.101.77.92     <none>        2049/TCP,20048/TCP,111/TCP,111/UDP             5h        app=nfs-provisioner
onap-sdnc     sdnc-dgbuilder    NodePort    10.111.87.223    <none>        3000:30203/TCP                                 5h        app=sdnc-dgbuilder
onap-sdnc     sdnc-portal       NodePort    10.102.21.163    <none>        8843:30201/TCP                                 5h        app=sdnc-portal
onap-sdnc     sdnctldb01        ClusterIP   None             <none>        3306/TCP                                       5h        app=sdnc-dbhost
onap-sdnc     sdnctldb02        ClusterIP   None             <none>        3306/TCP                                       5h        app=sdnc-dbhost
onap-sdnc     sdnhost           NodePort    10.108.100.101   <none>        8282:30202/TCP,8201:30208/TCP,8280:30246/TCP   5h        app=sdnc
onap-sdnc     sdnhost-cluster   ClusterIP   None             <none>        2550/TCP                                       5h        app=sdnc

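The NodePort services above (sdnc-portal, sdnc-dgbuilder and sdnhost) are reachable on any Kubernetes node's IP at the mapped 30xxx ports. The quick checks below are a sketch only; <k8s-node-ip> is a placeholder for one of your worker node addresses, and the ODL credentials are whatever your SDN-C installation uses.

Code Block
languagebash
# dgbuilder web UI: NodePort 30203 maps to container port 3000
curl -s -o /dev/null -w "%{http_code}\n" http://<k8s-node-ip>:30203/

# ODL RESTCONF on sdnhost: NodePort 30202 maps to port 8282
curl -u <odl-user>:<odl-password> http://<k8s-node-ip>:30202/restconf/modules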
Get more detail about a single pod by using "describe" with the resource name; the resource names are shown in the "get all" output above.

Code Block
languagebash
$ kubectl -n onap-sdnc describe po/sdnc-0

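The Events section at the bottom of the "describe" output is usually the quickest way to see why a pod is stuck in Init or Pending. You can also list the namespace events directly; the sort flag below is a standard kubectl option, shown here as a convenience:

Code Block
languagebash
# list recent events in the onap-sdnc namespace, oldest first
kubectl -n onap-sdnc get events --sort-by=.metadata.creationTimestamp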

Get logs of containers inside each pod:

Code Block
languagebash
# Add -v=n (n: 1..10) to "kubectl logs" to get verbose logs.


$ kubectl describe pod sdnc-0  -n onap-sdnc
$ kubectl logs sdnc-0 sdnc-readiness -n onap-sdnc  # add -v=n (n: 1..10) to get verbose logs
$ kubectl logs sdnc-0 sdnc-controller-container -n onap-sdnc
$ kubectl logs sdnc-0 filebeat-onap -n onap-sdnc

$ kubectl describe pod sdnc-dbhost-0  -n onap-sdnc
$ kubectl logs sdnc-dbhost-0 sdnc-db-container -n onap-sdnc
$ kubectl logs sdnc-dbhost-0 init-mysql -n onap-sdnc
$ kubectl logs sdnc-dbhost-0 clone-mysql  -n onap-sdnc
$ kubectl logs sdnc-dbhost-0 xtrabackup  -n onap-sdnc

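When a container keeps restarting, two more standard "kubectl logs" options are useful in addition to the commands above:

Code Block
languagebash
# stream the controller logs in real time
kubectl logs -f sdnc-0 sdnc-controller-container -n onap-sdnc

# fetch the logs of the previous container instance after a crash or restart
kubectl logs --previous sdnc-0 sdnc-controller-container -n onap-sdnc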
List of Persistent Volumes:

Each DB pod has a persistent volume claim (PVC) bound to a persistent volume (PV). The PVC capacity must be less than or equal to the PV capacity, and both must show a status of "Bound".

Code Block
languagebash
$ kubectl get pv -n onap-sdnc
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM                               STORAGECLASS     REASON    AGE
pvc-75411a66-f640-11e7-9949-fa163ee2b421   1Gi        RWX            Delete           Bound      onap-sdnc/sdnc-data-sdnc-dbhost-0   onap-sdnc-data             23h
pvc-824cb3cc-f620-11e7-9949-fa163ee2b421   1Gi        RWX            Delete           Released   onap-sdnc/sdnc-data-sdnc-dbhost-0   onap-sdnc-data             1d
pvc-cb380eda-f640-11e7-9949-fa163ee2b421   1Gi        RWX            Delete           Bound      onap-sdnc/sdnc-data-sdnc-dbhost-1   onap-sdnc-data             23h


$ kubectl get pvc -n onap-sdnc
NAME                      STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS     AGE
sdnc-data-sdnc-dbhost-0   Bound     pvc-75411a66-f640-11e7-9949-fa163ee2b421   1Gi        RWX            onap-sdnc-data   23h
sdnc-data-sdnc-dbhost-1   Bound     pvc-cb380eda-f640-11e7-9949-fa163ee2b421   1Gi        RWX            onap-sdnc-data   23h
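If a DB pod will not start, the claims above are the first thing to check. The jsonpath queries below are a convenience sketch; the PVC and PV names are the ones from this example deployment, so substitute your own.

Code Block
languagebash
# print each PVC in the namespace with its binding phase (they should all be Bound)
kubectl -n onap-sdnc get pvc -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'

# compare requested (PVC) capacity against provisioned (PV) capacity for one claim
kubectl -n onap-sdnc get pvc sdnc-data-sdnc-dbhost-0 -o jsonpath='{.spec.resources.requests.storage}{"\n"}'
kubectl get pv pvc-75411a66-f640-11e7-9949-fa163ee2b421 -o jsonpath='{.spec.capacity.storage}{"\n"}'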
Code Block
languagebash
$ kubectl get serviceaccounts --all-namespaces
$ kubectl get clusterrolebinding --all-namespaces


$ kubectl get deployment --all-namespaces
NAMESPACE     NAME            DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-system   kube-dns        1         1         1            1           1d
kube-system   tiller-deploy   1         1         1            1           1d

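To drill into any entry from those listings, the usual describe and "-o yaml" forms work; the names below are placeholders, so use whatever appears in your own output.

Code Block
languagebash
# show a single service account in full
kubectl -n kube-system get serviceaccount <serviceaccount-name> -o yaml

# show which subjects and role a clusterrolebinding ties together
kubectl describe clusterrolebinding <clusterrolebinding-name>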

Scale up or down SDNC or DB pods

Code Block
languagebash
# decrease sdnc pods to 1
$ kubectl scale statefulset sdnc -n onap-sdnc --replicas=1
statefulset "sdnc" scaled


# verify: 2 sdnc pods will terminate
$ kubectl get pods --all-namespaces -a | grep sdnc
onap-sdnc    nfs-provisioner-5fb9fcb48f-cj8hm        1/1       Running       0          21h
onap-sdnc    sdnc-0                                  2/2       Running       0          2h
onap-sdnc    sdnc-1                                  0/2       Terminating   0          40m
onap-sdnc    sdnc-2                                  0/2       Terminating   0          15m


# increase sdnc pods to 5
$ kubectl scale statefulset sdnc -n onap-sdnc --replicas=5
statefulset "sdnc" scaled

# increase db pods to 5
$ kubectl scale statefulset sdnc-dbhost -n onap-sdnc --replicas=5
statefulset "sdnc-dbhost" scaled

$ kubectl get pods --all-namespaces -o wide | grep onap-sdnc
onap-sdnc     nfs-provisioner-7fd7b4c6b7-d6k5t        1/1       Running   0          13h       10.42.0.149     sdnc-k8s
onap-sdnc     sdnc-0                                  2/2       Running   0          13h       10.42.134.186   sdnc-k8s
onap-sdnc     sdnc-1                                  2/2       Running   0          13h       10.42.186.72    sdnc-k8s
onap-sdnc     sdnc-2                                  2/2       Running   0          13h       10.42.51.86     sdnc-k8s
onap-sdnc     sdnc-dbhost-0                           2/2       Running   0          13h       10.42.190.88    sdnc-k8s
onap-sdnc     sdnc-dbhost-1                           2/2       Running   0          12h       10.42.213.221   sdnc-k8s
onap-sdnc     sdnc-dbhost-2                           2/2       Running   0          5m        10.42.63.197    sdnc-k8s
onap-sdnc     sdnc-dbhost-3                           2/2       Running   0          5m        10.42.199.38    sdnc-k8s
onap-sdnc     sdnc-dbhost-4                           2/2       Running   0          4m        10.42.148.85    sdnc-k8s
onap-sdnc     sdnc-dgbuilder-6ff8d94857-hl92x         1/1       Running   0          13h       10.42.255.132   sdnc-k8s
onap-sdnc     sdnc-portal-0                           1/1       Running   0          13h       10.42.141.70    sdnc-k8s
onap-sdnc     sdnc-portal-1                           1/1       Running   0          13h       10.42.60.71     sdnc-k8s
onap-sdnc     sdnc-portal-2



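After a scale operation it is worth confirming that the StatefulSets have converged on the requested replica counts. This is a small convenience sketch using standard kubectl commands:

Code Block
languagebash
# desired vs. ready replicas for both StatefulSets
kubectl -n onap-sdnc get statefulset sdnc sdnc-dbhost

# for a StatefulSet using the RollingUpdate strategy, this blocks until all replicas are ready
kubectl -n onap-sdnc rollout status statefulset sdnc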