
This wiki describes how to set up a Kubernetes cluster with kubeadm, and then deploy SDN-C within that Kubernetes cluster.

What is OpenStack? What is Kubernetes? What is Docker?

In the OpenStack lab, the controller performs the function of partitioning resources. The compute nodes are the collection of resources (memory, CPUs, hard drive space) to be partitioned. When creating a VM with "X" memory, "Y" CPUs and "Z" hard drive space, OpenStack's controller reviews its pool of available resources, allocates the quota, and then creates the VM on one of the available compute nodes. Many VMs can be created on a single compute node. OpenStack's controller uses many criteria to choose a compute node, but if an application spans multiple VMs, affinity rules can be used to ensure the VMs do not all end up on a single compute node, which would be bad for resilience.

Kubernetes is similar to OpenStack in that it manages resources. Instead of scheduling VMs, Kubernetes schedules Pods. In a Kubernetes cluster, there is a single master and multiple nodes. The Kubernetes master is like the OpenStack controller in that it allocates resources for Pods. Kubernetes nodes are the pool of resources to be allocated, similar to OpenStack's compute nodes. Pods, like VMs, can have affinity rules configured in order to increase an application's resilience.
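
As an illustration (a minimal sketch, not taken from the OOM charts; the app: sdnc label is just an assumed example), a pod anti-affinity rule like the following asks the Kubernetes scheduler to avoid placing two pods that carry the same label on the same node:

# sketch: keep pods labelled app: sdnc on different nodes
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: sdnc
        topologyKey: kubernetes.io/hostname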

If you would like more information on these subjects, please explore the official OpenStack, Kubernetes and Docker documentation.

Deployment Architecture

The Kubernetes deployment in this tutorial will be set up on top of OpenStack VMs. Let's call this the undercloud. The undercloud can consist of physical machines or VMs. The VMs can come from different cloud providers, but in this tutorial we will use OpenStack. The following table shows the layers of software that need to be considered when thinking about resilience:

Hardware   | OpenStack Software Configured on Base OS | VM Deployed by OpenStack | Kubernetes Software Configured on VM | Pods Deployed by Kubernetes | Docker Containers Deployed within the Pod
Computer 1 | Controller Node                          |                          |                                      |                             |
Computer 2 | Compute                                  | VM 1                     | k8s-master                           |                             |
Computer 3 | Compute                                  | VM 2                     | k8s-node1                            | sdnc-0                      | sdnc-controller-container, filebeat-onap
           |                                          |                          |                                      | sdnc-dbhost-0               | sdnc-db-container, xtrabackup
Computer 4 | Compute                                  | VM 3                     | k8s-node2                            | sdnc-0                      | sdnc-controller-container, filebeat-onap
           |                                          |                          |                                      | sdnc-dbhost-0               | sdnc-db-container, xtrabackup
Computer 5 | Compute                                  | VM 4                     | k8s-node3                            | sdnc-0                      | sdnc-controller-container, filebeat-onap
           |                                          |                          |                                      | nfs-provisioner-xxx         | nfs-provisioner

Setting up an OpenStack lab is out of scope for this tutorial. Assuming that you have a lab, you will need to create 1+n VMs: one to be configured as the Kubernetes master, and "n" to be configured as Kubernetes nodes. We will create 3 Kubernetes nodes for this tutorial because we want each SDN-C replica to run on a different VM for resiliency.

Create the Undercloud

The examples here will use the OpenStackClient; however, the OpenStack Horizon GUI could also be used. Start by creating 4 VMs with the hostnames k8s-master, k8s-node1, k8s-node2, and k8s-node3. Each VM should have internet access and approximately the following resources:

  • 16384 MB of RAM
  • 20 GB of disk space
  • 4 vCPUs

How many resources are needed?

There was no formal evaluation of how much quota is actually needed; the above numbers were chosen as being sufficient. A lot more is likely needed if the full ONAP environment is deployed. For just SDN-C, this is more than enough.
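
If the lab does not already have a flavor with these sizes, one can be created with the OpenStackClient (a sketch; the flavor name "m1.k8s" is just an example used here):

#create a flavor matching the sizes above (16 GB RAM, 20 GB disk, 4 vCPUs)
openstack flavor create --ram 16384 --disk 20 --vcpus 4 "m1.k8s"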


Use the Ubuntu 16.04 cloud image to create the VMs. The image can be found at https://cloud-images.ubuntu.com/.

#download the Ubuntu 16.04 cloud image
wget https://cloud-images.ubuntu.com/releases/16.04/release/ubuntu-16.04-server-cloudimg-amd64-disk1.img

#upload the image into OpenStack (Glance)
openstack image create ubuntu-16.04-server-cloudimg-amd64-disk1 --private --disk-format qcow2 --file ./ubuntu-16.04-server-cloudimg-amd64-disk1.img


Exactly how to create VMs in OpenStack is out of scope for this tutorial. However, here are some examples of OpenStackClient commands that can be used to perform this task:

#list the existing resources; their names/IDs are needed for the server create commands below
openstack server list;
openstack network list;
openstack flavor list;
openstack keypair list;
openstack image list;
openstack security group list

#create the 4 VMs (substitute the flavor, keypair, network and security group values from your environment)
openstack server create --flavor "flavor-name" --image "ubuntu-16.04-server-cloudimg-amd64-disk1" --key-name "keypair-name" --nic net-id="net-name" --security-group "security-group-id" "k8s-master"
openstack server create --flavor "flavor-name" --image "ubuntu-16.04-server-cloudimg-amd64-disk1" --key-name "keypair-name" --nic net-id="net-name" --security-group "security-group-id" "k8s-node1"
openstack server create --flavor "flavor-name" --image "ubuntu-16.04-server-cloudimg-amd64-disk1" --key-name "keypair-name" --nic net-id="net-name" --security-group "security-group-id" "k8s-node2"
openstack server create --flavor "flavor-name" --image "ubuntu-16.04-server-cloudimg-amd64-disk1" --key-name "keypair-name" --nic net-id="net-name" --security-group "security-group-id" "k8s-node3"


Configure Each VM 

Repeat the following steps on each VM:

Pre-Configure Each VM

Make sure the VMs are:

  • Up to date
  • Synchronized to a common clock source (NTP)

#(Optional) create a bash history file as the ubuntu user so that it does not accidentally get created as the root user.
touch ~/.bash_history

#(Optional) turn on ssh password authentication and give the ubuntu user a password if you do not like using ssh keys.
# Set "PasswordAuthentication yes" in the /etc/ssh/sshd_config file and then set the ubuntu password
sudo vi /etc/ssh/sshd_config;sudo systemctl restart sshd;sudo passwd ubuntu;

#setup ntp on your image if needed. It is important that all the VMs' clocks are in sync or it will cause problems joining kubernetes nodes to the kubernetes cluster
sudo apt install ntp
sudo apt install ntpdate

#if needed, add your local ntp server hostname to ntp.conf
sudo vi /etc/ntp.conf
#sync your VM clock with your ntp server
date
sudo service ntp stop
sudo ntpdate -s ntp-hostname
sudo service ntp start
date

#update the VM with the latest core packages
sudo apt clean
sudo apt update
sudo apt -y full-upgrade
sudo reboot
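
After the reboot, it is worth confirming that NTP is actually keeping the clock in sync before joining the cluster. A quick check (ntpq is installed with the ntp package above):

#an asterisk in the first column marks the peer currently used for synchronization
ntpq -p
date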


Install Docker

The ONAP applications are packaged in Docker containers, so Docker must be installed on all the VMs.

The following snippet was taken from https://docs.docker.com/engine/installation/linux/docker-ce/ubuntu/#install-docker-ce:

#install the extra kernel modules Docker may need, plus packages that allow apt to use a repository over HTTPS
sudo apt-get install linux-image-extra-$(uname -r) linux-image-extra-virtual
sudo apt-get install apt-transport-https ca-certificates curl software-properties-common

#add Docker's official GPG key and apt repository
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo apt-key fingerprint 0EBFCD88
sudo add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   stable"

#install docker-ce (use the second form instead if you need a specific version)
sudo apt-get update
sudo apt-get -y install docker-ce
sudo apt-get install docker-ce=<VERSION>

#verify that Docker works
sudo docker run hello-world
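
Optionally, add the ubuntu user to the docker group so that docker commands can be run without sudo (log out and back in for the group change to take effect):

sudo usermod -aG docker ubuntu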


Install the Kubernetes Packages

Just install the packages; there is no need to configure them yet.

The following snippet was taken from https://kubernetes.io/docs/setup/independent/install-kubeadm/:

sudo -i
apt-get update && apt-get install -y apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF


# run the following commands
apt-get update
apt-get install -y kubelet kubeadm kubectl
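
Optionally, pin the Kubernetes packages so that a later apt upgrade does not unexpectedly move the cluster to a newer version (a suggestion beyond the original steps):

#prevent kubelet, kubeadm and kubectl from being upgraded automatically
sudo apt-mark hold kubelet kubeadm kubectl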


Configure the Kubernetes Cluster with kubeadm

kubeadm is a utility provided by Kubernetes which simplifies the process of configuring a Kubernetes cluster.  Details about how to use it can be found here: https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/.

Configure the Kubernetes Master Node

The kubeadm init command sets up the Kubernetes master. SSH to the k8s-master VM and invoke the following command. It is important to capture the output into a log file because it contains information you will need to refer to afterwards.

#on the k8s-master VM, set up the Kubernetes master
sudo -i
kubeadm init | tee ~/kubeadm_init.log
exit

The output will look like this:

[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.8.5
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.147.101.4]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] This often takes around a minute; or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 59.503216 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node k8s-master as master by adding a label and a taint
[markmaster] Master k8s-master tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: 76d969.a408a6a63c9c4b73
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy



Your Kubernetes master has initialized successfully!
...


Execute the following snippet as the ubuntu user to get kubectl to work. 

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Note: If you need to run kubeadm, kubectl, or kubelet as the root user, repeat the above steps for the root user.
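
At this point kubectl should be able to reach the cluster as the ubuntu user. Note that the master typically reports a NotReady status until a pod network (the next step) has been installed:

kubectl get nodes
#the master usually shows NotReady here; it becomes Ready once the pod network is deployed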

A "pod network" must be deployed in order to use the cluster. It allows pods to communicate with each other.

There are many different pod networks to choose from. See https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#pod-network for the choices. For this tutorial, the Weave pod network was arbitrarily chosen (see https://www.weave.works/docs/net/latest/kubernetes/kube-addon/ for more information).

The following snippet installs the Weave pod network:

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
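
You can watch the Weave and DNS pods come up in the kube-system namespace; once they are running, the master node should report a Ready status:

kubectl -n kube-system get pods
kubectl get nodes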


Install Helm and Tiller on the Kubernetes Master

ONAP uses Helm, so we need to install Helm. The following instructions were taken from https://docs.helm.sh/using_helm/#installing-helm:

#download helm and install it
curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh
chmod 700 get_helm.sh
./get_helm.sh
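
To confirm that the Helm client was installed correctly:

#only the client is installed at this point; the server side (Tiller) is initialized below
helm version --client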


Tiller requires a service account to be set up in Kubernetes before it is initialized. The following snippet will do that for you:

#create the kubernetes spec that defines the helm (tiller) service account
cat > tiller-serviceaccount.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: tiller-clusterrolebinding
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: ""
EOF


# Install Tiller
kubectl create -f tiller-serviceaccount.yaml

#init helm 
helm init --service-account tiller --upgrade


If you need to reset Helm:

#uninstalls Tiller from a cluster
helm reset


#clean up any existing artifacts 
kubectl -n kube-system delete deployment tiller-deploy
kubectl -n kube-system delete serviceaccount tiller
kubectl -n kube-system delete ClusterRoleBinding tiller-clusterrolebinding


kubectl create -f tiller-serviceaccount.yaml

#init helm 
helm init --service-account tiller --upgrade



Configure the Kubernetes Nodes 

Setting up the cluster nodes is very easy: just refer back to the kubeadm init log. The log contains a "kubeadm join" command with the token parameters. Capture those parameters and then execute the command as root on each of the VMs which will be Kubernetes nodes: k8s-node1, k8s-node2, and k8s-node3.

The command looks like the following snippet:

sudo kubeadm join --token b17b41.5095dd3db0129b0b 10.147.132.36:6443 \
  --discovery-token-ca-cert-hash sha256:139ff626a5b177c4e4efc5dedd8ba5f2a84edbf42e3f2f70543095358ff97d3b
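
If the kubeadm init log was lost, the join parameters can be recovered on the master. A sketch (the openssl pipeline is the one suggested in the kubeadm documentation for computing the CA certificate hash):

#on k8s-master: list the bootstrap tokens
sudo kubeadm token list

#on k8s-master: recompute the --discovery-token-ca-cert-hash value
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
  openssl dgst -sha256 -hex | sed 's/^.* //'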


Back on the k8s-master VM, execute the “kubectl get nodes“ command to see all the Kubernetes nodes. It might take some time, but eventually each node will have a status of "Ready":

kubectl get nodes

#Sample Output:
#NAME         STATUS    ROLES     AGE       VERSION
#k8s-master   Ready     master    1d        v1.8.5
#k8s-node1    Ready     <none>    1d        v1.8.5
#k8s-node2    Ready     <none>    1d        v1.8.5
#k8s-node3    Ready     <none>    1d        v1.8.5

You now have a Kubernetes cluster.

Configure ONAP SDN-C

Configure dockerdata-nfs

This is a shared folder which must be mounted on all of the Kubernetes node VMs because many of the ONAP pods use this folder to share data. See "3. Share the /dockerdata-nfs Folder between Kubernetes Nodes" for instructions on how to set this up.
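
The linked page has the full instructions. As a rough sketch (assuming the k8s-master VM acts as the NFS server and exports /dockerdata-nfs to the nodes; adjust the hostnames to your lab), the setup looks like this:

#on the NFS server VM
sudo apt install nfs-kernel-server
sudo mkdir -p /dockerdata-nfs
#add an export line such as:  /dockerdata-nfs *(rw,no_root_squash,no_subtree_check)
sudo vi /etc/exports
sudo systemctl restart nfs-kernel-server

#on each k8s-node VM
sudo apt install nfs-common
sudo mkdir -p /dockerdata-nfs
sudo mount -t nfs k8s-master:/dockerdata-nfs /dockerdata-nfs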

Deploy ONAP SDN-C

Clone the OOM repository. The latest patch from the cluster topic can be found here: https://gerrit.onap.org/r/#/c/25467/.

The following clones the repository and checks out patch set 22:

git clone https://gerrit.onap.org/r/oom
cd oom
git fetch https://gerrit.onap.org/r/oom refs/changes/67/25467/22 && git checkout FETCH_HEAD


Make sure that the tiller pod is running. Execute the following command and look for a po/tiller-deploy-xxxx with a “Running” status. For example:

kubectl -n kube-system get pods
#Sample output:
#kube-system   po/tiller-deploy-546cf9696c-r7j6z       1/1       Running   0          35s


Configure ONAP

cd ~/oom/kubernetes/oneclick/
source setenv.bash
cd ~/oom/kubernetes/config/
#dummy values can be used as we will not be deploying a VM
cp onap-parameters-sample.yaml onap-parameters.yaml
./createConfig.sh -n onap


Wait for the ONAP config pod to complete. It should have a "Completed" status:

kubectl -n onap get pods --show-all
#Sample output:
#NAME      READY     STATUS      RESTARTS   AGE
#config    0/1       Completed   0          1d


Deploy the SDN-C Pods

Apply the following patch to the kubernetes/sdnc/templates/nfs-provisoner-deployment.yaml file so that it does not try to export /dockerdata-nfs as an NFS mount (that folder was already exported by the NFS server during the configuration step earlier):

cd ~/
cat > nfs-provisoner.patch <<EOF
diff --git a/kubernetes/sdnc/templates/nfs-provisoner-deployment.yaml b/kubernetes/sdnc/templates/nfs-provisoner-deployment.yaml
index 75f57fd..0952da8 100644
--- a/kubernetes/sdnc/templates/nfs-provisoner-deployment.yaml
+++ b/kubernetes/sdnc/templates/nfs-provisoner-deployment.yaml
@@ -51,5 +51,6 @@ spec:
       volumes:
         - name: export-volume
           hostPath:
-            path: /dockerdata-nfs/{{ .Values.nsPrefix }}/sdnc/data
+            path: /nfs-provisioner/{{ .Values.nsPrefix }}/sdnc/data
+            type: DirectoryOrCreate
 #{{ end }}
EOF
cd ~/oom
git apply ~/nfs-provisoner.patch


Deploy the SDN-C pods:

cd ~/oom/kubernetes/oneclick/
./createAll.bash -n onap -a sdnc


Get the Details from Kubernetes

kubectl -n onap-sdnc get all


To see which VM each pod is running on, switch "all" to "pod" and add the "-o wide" parameter, which shows the Kubernetes node each pod was deployed on:

kubectl -n onap-sdnc get pod -o wide


Get more detail about a single pod by using "describe" with the resource name. The resource name is shown in the output of the get all command used above.

kubectl -n onap-sdnc describe po/sdnc-0
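
To look at the logs of a specific container inside the pod (for example the sdnc-controller-container from the table above), select it with the -c flag:

kubectl -n onap-sdnc logs sdnc-0 -c sdnc-controller-container
#add -f to follow the logs
kubectl -n onap-sdnc logs -f sdnc-0 -c sdnc-controller-container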