This wiki describes how to set up a Kubernetes cluster with kubeadm, and then deploy SDN-C within that Kubernetes cluster.

(To view the current page, Chrome is the preferred browser. IE may add an extra "CR LF" to each line, which causes problems.)

What is OpenStack? What is Kubernetes? What is Docker?

In the OpenStack lab, the controller executes the function of partitioning resources. The compute nodes are the collection of resources (memory, CPUs, hard drive space) to be partitioned. When creating a VM with "X" memory, "Y" CPUs and "Z" hard drive space, OpenStack's controller reviews its pool of available resources, allocates the quota, and then creates the VM on one of the available compute nodes. Many VMs can be created on a single compute node. OpenStack's controller uses many criteria to choose a compute node, but if an application spans multiple VMs, affinity rules can be used to ensure the VMs do not congregate on a single compute node, which would be bad for resilience.
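
As an illustration, anti-affinity in OpenStack is typically expressed through a server group. The group and VM names below are hypothetical; this is a sketch, not part of the lab setup:

# Create a server group whose members must land on different compute nodes
openstack server group create --policy anti-affinity my-app-group

# Schedule a VM into that group (repeat for each VM of the application)
openstack server create --flavor "flavor-name" --image "image-name" \
    --hint group=<server-group-uuid> "my-app-vm1"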

Kubernetes is similar to OpenStack in that it manages resources. Instead of scheduling VMs, Kubernetes schedules Pods. In a Kubernetes cluster, there is a single master node and multiple worker nodes. The Kubernetes master node is like the OpenStack controller in that it allocates resources for Pods. Kubernetes worker nodes are the pool of resources to be allocated, similar to OpenStack's compute nodes. Pods, like VMs, can have affinity rules configured in order to increase an application's resilience.
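
For example, a pod anti-affinity rule shaped like the following (a minimal sketch with hypothetical names and labels, not taken from the SDN-C charts) asks the scheduler to keep replicas of an app on different worker nodes:

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  labels:
    app: example
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - example
        # one "app=example" pod per hostname
        topologyKey: kubernetes.io/hostname
  containers:
  - name: example
    image: nginx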

If you would like more information on these subjects, please explore these links:

Deployment Architecture

The Kubernetes deployment in this tutorial will be set up on top of OpenStack VMs. Let's call this the undercloud. The undercloud can be physical boxes or VMs. The VMs can come from different cloud providers, but in this tutorial we will use OpenStack. The following table shows the layers of software that need to be considered when thinking about resilience:

Hardware   | Software Configured on Base OS | VM Deployed by OpenStack | Kubernetes Software Configured on VM | Pods Deployed by Kubernetes | Docker Containers Deployed within a Pod
-----------|--------------------------------|--------------------------|--------------------------------------|-----------------------------|-----------------------------------------
Computer 1 | OpenStack (Controller Node)    |                          |                                      |                             |
Computer 2 | OpenStack (Compute)            | VM 1                     | k8s-master                           |                             |
Computer 3 | OpenStack (Compute)            | VM 2                     | k8s-node1                            | sdnc-0                      | sdnc-controller-container, filebeat-onap
           |                                |                          |                                      | sdnc-dbhost-0               | sdnc-db-container, xtrabackup
Computer 4 | OpenStack (Compute)            | VM 3                     | k8s-node2                            | sdnc-0                      | sdnc-controller-container, filebeat-onap
           |                                |                          |                                      | sdnc-dbhost-0               | sdnc-db-container, xtrabackup
Computer 5 | OpenStack (Compute)            | VM 4                     | k8s-node3                            | sdnc-0                      | sdnc-controller-container, filebeat-onap
           |                                |                          |                                      | nfs-provisioner-xxx         | nfs-provisioner

Setting up an OpenStack lab is out of scope for this tutorial. Assuming that you have a lab, you will need to create 1+n VMs: one to be configured as the Kubernetes master node, and "n" to be configured as Kubernetes worker nodes. We will create 3 Kubernetes worker nodes for this tutorial, because we want each of our SDN-C replicas to land on a different VM for resiliency.

There are some changes committed in the OOM repo: two new pods, dmaap and ueb-listener, were added. We observed issues with the SDNC pod deployment using one master and three worker nodes, but were able to deploy SDNC with the latest OOM using one master and four worker nodes.

Hence, if you wish to use the latest OOM for the SDNC deployment, it is recommended to add another compute node (VM 5) as a worker node (k8s-node4).


Create the Undercloud

The examples here will use the OpenStackClient; however, the OpenStack Horizon GUI could be used instead. Start by creating 4 VMs with the hostnames k8s-master, k8s-node1, k8s-node2, and k8s-node3. Each VM should have internet access and approximately:

  • 16384 MB of RAM
  • 20 GB of disk space
  • 4 vCPUs

How many resources are needed?

There was no evaluation of how much quota is actually needed; the above numbers were arbitrarily chosen as being sufficient. A lot more is likely needed if the full ONAP environment is deployed. For just SDN-C, this is more than enough.


Use the ubuntu 16.04 cloud image to create the VMs. This image can be found at https://cloud-images.ubuntu.com/.

wget https://cloud-images.ubuntu.com/releases/16.04/release/ubuntu-16.04-server-cloudimg-amd64-disk1.img

openstack image create ubuntu-16.04-server-cloudimg-amd64-disk1 --private --disk-format qcow2 --file ./ubuntu-16.04-server-cloudimg-amd64-disk1.img


Exactly how to create VMs in OpenStack is out of scope for this tutorial. However, here are some examples of OpenStackClient commands that can be used to perform this job:

openstack server list;
openstack network list;
openstack flavor list;
openstack keypair list;
openstack image list;
openstack security group list

openstack server create --flavor "flavor-name" --image "ubuntu-16.04-server-cloudimg-amd64-disk1" --key-name "keypair-name" --nic net-id="net-name" --security-group "security-group-id" "k8s-master"
openstack server create --flavor "flavor-name" --image "ubuntu-16.04-server-cloudimg-amd64-disk1" --key-name "keypair-name" --nic net-id="net-name" --security-group "security-group-id" "k8s-node1"
openstack server create --flavor "flavor-name" --image "ubuntu-16.04-server-cloudimg-amd64-disk1" --key-name "keypair-name" --nic net-id="net-name" --security-group "security-group-id" "k8s-node2"
openstack server create --flavor "flavor-name" --image "ubuntu-16.04-server-cloudimg-amd64-disk1" --key-name "keypair-name" --nic net-id="net-name" --security-group "security-group-id" "k8s-node3"


Configure Each VM 

Repeat the following steps on each VM:

Pre-Configure Each VM

Make sure that on each VM:

  • The software is up to date
  • The clock is synchronized

As the ubuntu user, run the following.

# (Optional) fix a vi bug in some versions of MobaXterm (it changes the first letter of an edited file to "g" after opening)
vi ~/.vimrc  ==> repeat for root/ubuntu and any other user who will edit files.
# Add the following 2 lines.
syntax on
set background=dark



# Add the hostnames of the kubernetes nodes (master and workers) to /etc/hosts
sudo vi /etc/hosts
# <IP address> <hostname>

# Turn off firewall and allow all incoming HTTP connections through IPTABLES
sudo ufw disable
sudo iptables -I INPUT -j ACCEPT

# Fix server timezone and select your timezone.
sudo dpkg-reconfigure tzdata


# (Optional) create a bash history file as the ubuntu user so that it does not accidentally get created as the root user.
touch ~/.bash_history  

# (Optional) turn on ssh password authentication and give ubuntu user a password  if you do not like using ssh keys. 
# Set the "PasswordAuthentication yes" in the /etc/ssh/sshd_config file and then set the ubuntu password
sudo vi /etc/ssh/sshd_config;sudo systemctl restart sshd;sudo passwd ubuntu;

# Update the VM with the latest core packages
sudo apt clean
sudo apt update
sudo apt -y full-upgrade
sudo reboot

# Set up ntp on your image if needed. It is important that all the VMs' clocks are in sync, or joining kubernetes nodes to the kubernetes cluster will cause problems
sudo apt install ntp
sudo apt install ntpdate 

# It is recommended to add a local ntp-hostname or your ntp server's IP address to ntp.conf
# Sync your VM's clock with that of your ntp server. The best choice of ntp server is a solid machine different from the Kubernetes VMs. Make sure you can ping it!
# A service restart is needed for the time to sync; run the following from the command line for an immediate change.


sudo vi /etc/ntp.conf
# Append your ntp server to /etc/ntp.conf to make the setting permanent, e.g. a line of the form:
# server <ntp-hostname | ntp server's IP address>

date 
sudo service ntp stop
sudo ntpdate -s <ntp-hostname | ntp server's IP address>  ==>e.g.: sudo ntpdate -s 10.247.5.11
sudo service ntp start
date


# Some of the clustering scripts (switch_voting.sh and sdnc_cluster.sh) require JSON parsing, so install jq on the master only
sudo apt install -y jq
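
A quick sanity check that jq parses JSON as expected (an illustrative one-liner, unrelated to the cluster itself):

echo '{"status":"ok"}' | jq -r .status
# Expected output: ok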

Question: Did you check the date on all K8S nodes to make sure the clocks are in sync?
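
One way to spot-check the clocks (a sketch assuming the hostnames used in this tutorial and working ssh access from your shell):

for h in k8s-master k8s-node1 k8s-node2 k8s-node3; do
    echo -n "$h: "; ssh ubuntu@$h date
done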

Install Docker

The ONAP apps are packaged in Docker containers.

The following snippet was taken from https://docs.docker.com/engine/installation/linux/docker-ce/ubuntu/#install-docker-ce:

sudo apt-get install linux-image-extra-$(uname -r) linux-image-extra-virtual
sudo apt-get install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo apt-key fingerprint 0EBFCD88

# Add a docker repository to "/etc/apt/sources.list". It uses the latest stable repo for the ubuntu flavour on the machine ("lsb_release -cs")
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get -y install docker-ce

sudo docker run hello-world


# Verify:
sudo docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                     PORTS               NAMES
c66d903a0b1f        hello-world         "/hello"            10 seconds ago      Exited (0) 9 seconds ago                       vigorous_bhabha


Install the Kubernetes Packages

Just install the packages; there is no need to configure them yet.

The following snippet was taken from https://kubernetes.io/docs/setup/independent/install-kubeadm/:

# The "sudo -i" changes user to root.
sudo -i
apt-get update && apt-get install -y apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -

# Add a kubernetes repository for the latest stable release for the ubuntu flavour on the machine (here: xenial)
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF

apt-get update

# As of today (late April 2018), version 1.10.1 of the kubernetes packages is available. To install that version, run:
apt-get install -y kubelet=1.10.1-00
apt-get install -y kubectl=1.10.1-00
apt-get install -y kubeadm

# To install the latest version of the Kubernetes packages (recommended):
apt-get install -y kubelet kubeadm kubectl

# To install an older version of the kubernetes packages, use the lines below.
# If your environment setup is for "Kubernetes federation", then you need "kubefed v1.10.1". We recommend that all Kubernetes packages be of the same version.
apt-get install -y kubelet=1.8.6-00 kubernetes-cni=0.5.1-00
apt-get install -y kubectl=1.8.6-00
apt-get install -y kubeadm



# Verify version 
kubectl version
kubeadm version
kubelet --version

exit
# Append the following lines to ~/.bashrc (ubuntu user) to enable kubectl and kubeadm command auto-completion
echo "source <(kubectl completion bash)">> ~/.bashrc
echo "source <(kubeadm completion bash)">> ~/.bashrc

Note: If you intend to remove the kubernetes packages, use "apt autoremove kubelet; apt autoremove kubeadm; apt autoremove kubectl".
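
Optionally, if you want routine apt upgrades to leave the Kubernetes packages at the version you installed, put them on hold (a standard apt feature; undo with "apt-mark unhold"):

sudo apt-mark hold kubelet kubeadm kubectl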

Configure the Kubernetes Cluster with kubeadm

kubeadm is a utility provided by Kubernetes which simplifies the process of configuring a Kubernetes cluster.  Details about how to use it can be found here: https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/.

Configure the Kubernetes Master Node (k8s-master)

The kubeadm init command sets up the Kubernetes master node. SSH to the k8s-master VM and invoke the following command. It is important to capture the output into a log file as there is information which you will need to refer to afterwards.

Note: An add-on named "kube-dns" will be added to the master node. However, there is a recommended option to replace it with "CoreDNS", by providing the "--feature-gates=CoreDNS=true" parameter to the "kubeadm init" command.

# On the k8s-master vm setup the kubernetes master node.  
# The "sudo -i" changes user to root.
sudo -i

# Verify that no kubernetes apps are running yet.
ps -ef | grep -i kube | grep -v grep

# Pick one DNS add-on: either "kube-dns" or "CoreDNS".  If your environment setup is for "Kubernetes federation" or "SDN-C Geographic Redundancy" then use "CoreDNS" addon.
# Note that kubeadm version 1.8.x does not have support for coredns feature gate. 
# Upgrade kubeadm to latest version before running below command:

# With "CoreDNS" addon (recommended)
kubeadm init --feature-gates=CoreDNS=true | tee ~/kubeadm_init.log 

# with kube-dns addon
kubeadm init | tee ~/kubeadm_init.log

# Verify many kubernetes app running (kubelet,  kube-scheduler, etcd, kube-apiserver, kube-proxy, kube-controller-manager)
ps -ef | grep -i kube | grep -v grep

# The "exit" reverts user back to ubuntu.
exit

The output of "kubeadm init" (with kube-dns addon) will look like below:

[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.8.7
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks
[preflight] WARNING: docker version is greater than the most recently validated version. Docker version: 17.12.0-ce. Max validated version: 17.03
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [kubefed-1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.147.114.12]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] This often takes around a minute; or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 44.002324 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node kubefed-1 as master by adding a label and a taint
[markmaster] Master kubefed-1 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: 2246a6.83b4c7ca38913ce1
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token 2246a6.83b4c7ca38913ce1 10.147.114.12:6443 --discovery-token-ca-cert-hash sha256:ef25f42843927c334981621a1a3d299834802b2e2a962ae720640f74e361db2a

Execute the following snippet (as ubuntu user) to get kubectl to work. 

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config


Verify that a set of pods has been created. The coredns (or kube-dns) pods will be in the Pending state.

# If you installed coredns addon
sudo kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                 READY     STATUS    RESTARTS   AGE       IP              NODE
kube-system   coredns-65dcdb4cf-8dr7w              0/1       Pending   0          10m       <none>          <none>
kube-system   coredns-65dcdb4cf-8ez2s              0/1       Pending   0          10m       <none>          <none>
kube-system   etcd-k8s-master                      1/1       Running   0          9m        10.147.99.149   k8s-master
kube-system   kube-apiserver-k8s-master            1/1       Running   0          9m        10.147.99.149   k8s-master
kube-system   kube-controller-manager-k8s-master   1/1       Running   0          9m        10.147.99.149   k8s-master
kube-system   kube-proxy-jztl4                     1/1       Running   0          10m       10.147.99.149   k8s-master
kube-system   kube-scheduler-k8s-master            1/1       Running   0          9m        10.147.99.149   k8s-master

#(There will be 2 coredns pods with kubernetes version 1.10.1 and higher)


# If you did not install the coredns addon, the kube-dns pod will be created instead
sudo kubectl get pods --all-namespaces -o wide
NAME                                    READY     STATUS    RESTARTS   AGE       IP              NODE
etcd-k8s-s1-master                      1/1       Running   0          23d       10.147.99.131   k8s-s1-master
kube-apiserver-k8s-s1-master            1/1       Running   0          23d       10.147.99.131   k8s-s1-master
kube-controller-manager-k8s-s1-master   1/1       Running   0          23d       10.147.99.131   k8s-s1-master
kube-dns-6f4fd4bdf-czn68                3/3       Pending   0          23d        <none>          <none>    
kube-proxy-ljt2h                        1/1       Running   0          23d       10.147.99.148   k8s-s1-node0
kube-scheduler-k8s-s1-master            1/1       Running   0          23d       10.147.99.131   k8s-s1-master


# (Optional) run the following commands if you are curious.
sudo kubectl get node
sudo kubectl get secret
sudo kubectl config view
sudo kubectl config current-context
sudo kubectl get componentstatus
sudo kubectl get clusterrolebinding --all-namespaces
sudo kubectl get serviceaccounts --all-namespaces
sudo kubectl get pods --all-namespaces -o wide
sudo kubectl get services --all-namespaces -o wide
sudo kubectl cluster-info


A "Pod network" must be deployed to use the cluster. This will let pods to communicate with eachother.

There are many different pod networks to choose from. See https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#pod-network for choices. For this tutorial, the Weave pod network was arbitrarily chosen (see https://www.weave.works/docs/net/latest/kubernetes/kube-addon/ for more information).

The following snippet will install the Weave pod network:

sudo kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

# Sample output:
serviceaccount "weave-net" configured
clusterrole "weave-net" created
clusterrolebinding "weave-net" created
role "weave-net" created
rolebinding "weave-net" created
daemonset "weave-net" created

Pay attention to the new pod (and serviceaccount) for "weave-net". This pod provides pod-to-pod connectivity.

Verify the status of the pods. After a short while, the "Pending" status of "coredns" (or "kube-dns") will change to "Running".

sudo kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE       IP               NODE
kube-system   etcd-k8s-master                      1/1       Running   0          1m        10.147.112.140   k8s-master
kube-system   kube-apiserver-k8s-master            1/1       Running   0          1m        10.147.112.140   k8s-master
kube-system   kube-controller-manager-k8s-master   1/1       Running   0          1m        10.147.112.140   k8s-master
kube-system   kube-dns-545bc4bfd4-jcklm            3/3       Running   0          44m       10.32.0.2        k8s-master
kube-system   kube-proxy-lnv7r                     1/1       Running   0          44m       10.147.112.140   k8s-master
kube-system   kube-scheduler-k8s-master            1/1       Running   0          1m        10.147.112.140   k8s-master
kube-system   weave-net-b2hkh                      2/2       Running   0          1m        10.147.112.140   k8s-master


#(There will be 2 coredns pods with different IP addresses, with kubernetes version 1.10.1)

# Verify that the AVAILABLE count for the deployment "kube-dns" or "coredns" has changed to 1. (2 with kubernetes version 1.10.1)
#For coredns
sudo kubectl get deployment --all-namespaces
NAMESPACE     NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-system   coredns   2         2         2            2           2m

#For kubedns
sudo kubectl get deployment --all-namespaces
NAMESPACE     NAME       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-system   kube-dns   1         1         1            1           1h

Troubleshooting tips:

  • If any of the weave pods gets stuck in the "ImagePullBackOff" state, you can try running the sudo kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')" command again.
  • Sometimes you need to delete a problematic pod to let it terminate and start fresh. Use "kubectl delete po/<pod-name> -n <name-space>" to delete a pod.
  • To "unjoin" a worker node, use "kubectl delete node <node-name>" (go through the "Undeploy SDNC" process at the end if you have an SDNC cluster running).
  • If for any reason you need to re-create the kubernetes cluster, first remove /etc/kubernetes/, /var/lib/etcd and /etc/systemd/system/kubelet.service.d/, then run the kubeadm init command again (or see the kubeadm reset sketch below).
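
As an alternative to removing those directories by hand, kubeadm provides a reset subcommand that performs a similar cleanup. A sketch, to be run as root on the node being rebuilt:

# Tear down what "kubeadm init" or "kubeadm join" configured on this node
kubeadm reset
# Then re-run "kubeadm init" (master) or "kubeadm join" (worker)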


Install Helm and Tiller on the Kubernetes Master Node (k8s-master)

ONAP uses Helm, a package manager for Kubernetes.

Install helm (client side). The following instructions were taken from https://docs.helm.sh/using_helm/#installing-helm:

Note: You may need to install an older version of helm; if so, follow the "Downgrade helm" section (scroll down).

# As a root user, download helm and install it
curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh
chmod 700 get_helm.sh
./get_helm.sh


Install Tiller (the server side of Helm)

Tiller manages the installation of helm packages (charts). Tiller requires a ServiceAccount to be set up in Kubernetes before being initialized. The following snippet will do that for you:

(Chrome is the preferred browser. IE may add an extra "CR LF" to each line, which causes problems.)

# id
ubuntu


# As a ubuntu user, create  a yaml file to define the helm service account and cluster role binding. 
cat > tiller-serviceaccount.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: tiller-clusterrolebinding
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: ""
EOF


# Create a ServiceAccount and ClusterRoleBinding based on the created file. 
sudo kubectl create -f tiller-serviceaccount.yaml

# Verify 
which helm
helm version
# Only Client version is shown. Expect delay in getting prompt back.  CTRL+C to get the prompt back!


Initialize helm. This command installs Tiller. It also discovers Kubernetes clusters by reading $KUBECONFIG (default '~/.kube/config') and using the default context.

helm init --service-account tiller --upgrade


# A new pod is created, but will be in pending status.
kubectl get pods --all-namespaces -o wide  | grep tiller
kube-system   tiller-deploy-b6bf9f4cc-vbrc5           0/1       Pending   0          7m        <none>           <none>


# A new service is created 
kubectl get services --all-namespaces -o wide | grep tiller
kube-system   tiller-deploy   ClusterIP   10.102.74.236   <none>        44134/TCP       47m       app=helm,name=tiller

# A new deployment is created, but the AVAILABLE flag is set to "0".

kubectl get deployments --all-namespaces
NAMESPACE     NAME            DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-system   kube-dns        1         1         1            1           1h
kube-system   tiller-deploy   1         1         1            0           8m


Configure the Kubernetes Worker Nodes (k8s-node<n>)

Setting up the worker nodes is very easy. Just refer back to the "kubeadm init" output log (/root/kubeadm_init.log). The last line of the log is a "kubeadm join" command with the token information and other parameters.

Capture those parameters and then execute the command as root on each of the Kubernetes worker nodes: k8s-node1, k8s-node2, and k8s-node3.

After running the "kubeadm join" command on a worker node,

  • 2 new pods (proxy and weave) will be created and assigned to the worker node.
  • The tiller pod status will change to "running" . 
  • The AVAILABLE flag for tiller-deploy deployment will be changed to "1".
  • The worker node will join the cluster.


The command looks like the following snippet (find the command at the bottom of /root/kubeadm_init.log):

# Change to the root user on the worker node, then run:
kubeadm join --token 2246a6.83b4c7ca38913ce1 10.147.114.12:6443 --discovery-token-ca-cert-hash sha256:ef25f42843927c334981621a1a3d299834802b2e2a962ae720640f74e361db2a


# Make sure in the output, you see "This node has joined the cluster:".
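
Note that bootstrap tokens expire after 24 hours by default. If the token in kubeadm_init.log has expired, you can generate a fresh join command on the master (this assumes kubeadm 1.9 or newer, where the --print-join-command flag exists):

# On k8s-master, as root
kubeadm token create --print-join-command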

Verify the results from master node:

kubectl get pods --all-namespaces -o wide  

kubectl get nodes
# Sample Output:
NAME            STATUS    ROLES     AGE       VERSION
k8s-master   Ready     master    2h        v1.8.6
k8s-node1    Ready     <none>    53s       v1.8.6

Run the same "kubeadm join" command on each remaining worker node (once per node) and verify the results.


Return to the Kubernetes master node VM and execute the "kubectl get nodes" command to see all of the Kubernetes nodes. It might take some time, but eventually each node will have a status of "Ready":

kubectl get nodes

# Sample Output:
NAME         STATUS    ROLES     AGE       VERSION
k8s-master   Ready     master    1d        v1.8.5
k8s-node1    Ready     <none>    1d        v1.8.5
k8s-node2    Ready     <none>    1d        v1.8.5
k8s-node3    Ready     <none>    1d        v1.8.5


Make sure that the tiller pod is running. Execute the following command (from master node) and look for a po/tiller-deploy-xxxx with a “Running” status. For example:

(If you are using coredns instead of kube-dns, you will notice the dns pod has only one container)

kubectl get pods --all-namespaces -o wide
# Sample output:
NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE       IP               NODE
kube-system   etcd-k8s-master                         1/1       Running   0          1h        10.147.112.140   k8s-master
kube-system   kube-apiserver-k8s-master               1/1       Running   0          1h        10.147.112.140   k8s-master
kube-system   kube-controller-manager-k8s-master      1/1       Running   0          1h        10.147.112.140   k8s-master
kube-system   kube-dns-545bc4bfd4-jcklm               3/3       Running   0          2h        10.32.0.2        k8s-master
kube-system   kube-proxy-4zztj                        1/1       Running   0          2m        10.147.112.150   k8s-node2
kube-system   kube-proxy-lnv7r                        1/1       Running   0          2h        10.147.112.140   k8s-master
kube-system   kube-proxy-t492g                        1/1       Running   0          20m       10.147.112.164   k8s-node1
kube-system   kube-proxy-xx8df                        1/1       Running   0          2m        10.147.112.169   k8s-node3
kube-system   kube-scheduler-k8s-master               1/1       Running   0          1h        10.147.112.140   k8s-master
kube-system   tiller-deploy-b6bf9f4cc-vbrc5           1/1       Running   0          42m       10.44.0.1        k8s-node1
kube-system   weave-net-b2hkh                         2/2       Running   0          1h        10.147.112.140   k8s-master
kube-system   weave-net-s7l27                         2/2       Running   1          2m        10.147.112.169   k8s-node3
kube-system   weave-net-vmlrq                         2/2       Running   0          20m       10.147.112.164   k8s-node1
kube-system   weave-net-xxgnq                         2/2       Running   1          2m        10.147.112.150   k8s-node2

Now you have a Kubernetes cluster with 3 worker nodes and 1 master node.

Cluster's Full Picture

You can run "kubectl describe node" on the master node to get a complete report on the nodes (including the workers) and their system resources.
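
For example, to inspect a single worker and pull out its resource summary (the grep pattern is just a convenience; adjust the node name to your cluster):

kubectl describe node k8s-node1
kubectl describe node k8s-node1 | grep -A 5 "Allocated resources"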

Configure dockerdata-nfs

This is a shared directory which must be mounted on all of the Kubernetes VMs (master and worker nodes), because many of the ONAP pods use this directory to share data.

See 3. Share the /dockerdata-nfs Folder between Kubernetes Nodes for instructions on how to set this up.
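
As a rough sketch of what that page sets up, an NFS-based share looks like the following; the export options and <master-ip> are placeholders to adapt to your lab:

# On the master (NFS server):
sudo apt install -y nfs-kernel-server
sudo mkdir -p /dockerdata-nfs
echo "/dockerdata-nfs *(rw,no_root_squash,no_subtree_check)" | sudo tee -a /etc/exports
sudo systemctl restart nfs-kernel-server

# On each worker (NFS client):
sudo apt install -y nfs-common
sudo mkdir -p /dockerdata-nfs
sudo mount <master-ip>:/dockerdata-nfs /dockerdata-nfs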


Configuring SDN-C ONAP


Clone the OOM project (only on the Kubernetes master node)

Helm support

OOM deployment is now done using helm.

It is highly recommended to stop here and continue from this page: Deploying SDN-C using helm chart.


As ubuntu user, clone the oom repository. 

git clone https://gerrit.onap.org/r/oom

You may use any specific known stable OOM release for the SDNC deployment. The above URL downloads the latest OOM.

We identified some issues with the latest OOM deployment after the namespace change. The details and resolutions for these issues are provided below.

A few things are missing after the namespace change:

  1. The PV is not getting created, but the PVC is. So we need to provide a PV explicitly to be available for the PVC to claim. Refer to the attached files - pv-volume-1.yaml and pv-volume-2.yaml (a PV sketch is also shown after this list). To create, use: kubectl create -f <filename>.yaml

    # Verify PVC is "Bound"
    ubuntu@k8s-s1-master:/home/ubuntu# kubectl get pvc --all-namespaces
    NAMESPACE   NAME                      STATUS    VOLUME        CAPACITY   ACCESS MODES   STORAGECLASS     AGE
    onap        sdnc-data-sdnc-dbhost-0   Bound     nfs-volume4   11Gi       RWO,RWX        onap-sdnc-data   1h
    onap        sdnc-data-sdnc-dbhost-1   Bound     nfs-volume5   11Gi       RWO,RWX        onap-sdnc-data   43m
    
    
    # Verify PV is "Bound"
    ubuntu@k8s-s1-master:/home/ubuntu# kubectl get pv --all-namespaces
    NAMESPACE   NAME          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                          STORAGECLASS     REASON    AGE
                nfs-volume4   11Gi       RWO,RWX        Retain           Bound     onap/sdnc-data-sdnc-dbhost-0   onap-sdnc-data             1h
                nfs-volume5   11Gi       RWO,RWX        Retain           Bound     onap/sdnc-data-sdnc-dbhost-1   onap-sdnc-data             2m
    
    
  2. The ServiceAccount "default" in the new namespace (onap) is not bound to the cluster-admin role. As a result, it gives this issue:
    E0319 15:40:32.717436       1 reflector.go:201] github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:369: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:serviceaccount:onap:default" cannot list storageclasses.storage.k8s.io at the cluster scope
    Resolution: Create a cluster role binding for this service account explicitly. Refer to the attached file - binding.yaml. To create, use: kubectl create -f <filename>.yaml

  3. The secret for the docker registry to pull images is not getting created. As a result, it gives this issue:

Warning  FailedSync             <invalid> (x3 over <invalid>)  kubelet, k8s-s1-node3  Error syncing pod

Normal   BackOff                <invalid>                      kubelet, k8s-s1-node3  Back-off pulling image "nexus3.onap.org:10001/onap/sdnc-image:v1.2.1"

Normal   Pulling                <invalid> (x3 over <invalid>)  kubelet, k8s-s1-node3  pulling image "nexus3.onap.org:10001/onap/sdnc-image:v1.2.1"

Warning  Failed                 <invalid> (x3 over <invalid>)  kubelet, k8s-s1-node3  Failed to pull image "nexus3.onap.org:10001/onap/sdnc-image:v1.2.1": rpc error: code = Unknown desc = Error response from daemon: Get https://nexus3.onap.org:10001/v2/onap/sdnc-image/manifests/v1.2.1: no basic auth credentials

Resolution: Create the secret explicitly using the command: kubectl --namespace onap create secret docker-registry onap-docker-registry-key --docker-server=nexus3.onap.org:10001 --docker-username=docker --docker-password=docker --docker-email=docker@nexus3.onap.org
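
The attached pv-volume files from item 1 are not reproduced here, but a PV consistent with the "kubectl get pv" output above would look roughly like this sketch (the NFS server and path are placeholders you must adapt):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-volume4
spec:
  capacity:
    storage: 11Gi
  accessModes:
  - ReadWriteOnce
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: onap-sdnc-data
  nfs:
    server: <nfs-server-ip>          # placeholder
    path: /dockerdata-nfs/sdnc-data  # placeholder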

We were able to deploy the latest OOM (after the namespace change) successfully once these resolutions were applied:

root@k8s-s1-master:~# kubectl get pods --all-namespaces
NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE
kube-system   coredns-65dcdb4cf-2vmwp                 1/1       Running   0          4d
kube-system   etcd-k8s-s1-master                      1/1       Running   0          4d
kube-system   kube-apiserver-k8s-s1-master            1/1       Running   0          4d
kube-system   kube-controller-manager-k8s-s1-master   1/1       Running   0          4d
kube-system   kube-proxy-pjtgh                        1/1       Running   0          4d
kube-system   kube-proxy-pmzmw                        1/1       Running   0          4d
kube-system   kube-proxy-zbbjp                        1/1       Running   0          4d
kube-system   kube-proxy-zrhd2                        1/1       Running   0          4d
kube-system   kube-proxy-zrn7d                        1/1       Running   0          4d
kube-system   kube-scheduler-k8s-s1-master            1/1       Running   0          4d
kube-system   tiller-deploy-7bf964fff8-g5rnm          1/1       Running   0          4d
kube-system   weave-net-8hdkl                         2/2       Running   0          4d
kube-system   weave-net-bq5rx                         2/2       Running   0          4d
kube-system   weave-net-jxdb8                         2/2       Running   0          4d
kube-system   weave-net-nb8sw                         2/2       Running   0          4d
kube-system   weave-net-wnrbw                         2/2       Running   0          4d
onap          sdnc-0                                  2/2       Running   0          1d
onap          sdnc-1                                  2/2       Running   0          1d
onap          sdnc-2                                  2/2       Running   0          1d
onap          sdnc-dbhost-0                           2/2       Running   0          1d
onap          sdnc-dbhost-1                           2/2       Running   1          1d
onap          sdnc-dgbuilder-65444884c7-k2h67         1/1       Running   0          1d
onap          sdnc-dmaap-listener-567c7b744b-xrld2    1/1       Running   0          1d
onap          sdnc-nfs-provisioner-6db9648675-25bnb   1/1       Running   0          1d
onap          sdnc-portal-5f74449bb5-rffzt            1/1       Running   0          1d
onap          sdnc-ueb-listener-5bb66785c8-6xv7m      1/1       Running   0          1d



Get the following 2 gerrit changes from Configure SDN-C Cluster Deployment.

Local Nexus

Optional: if you have a local nexus3 docker repo, you can use the following snippet to update OOM to pull from your local repo. This will speed up deployment time.

# Point the nexus3 references at your local repo
find ~/oom -type f -exec \
    sed -i 's/nexus3\.onap\.org:10001/yournexus:port/g' {} +


Configure ONAP

As ubuntu user, 

cd ~/oom/kubernetes/oneclick/
source setenv.bash

cd ~/oom/kubernetes/config/
# Dummy values can be used as we will not be deploying a VM
cp onap-parameters-sample.yaml onap-parameters.yaml
./createConfig.sh -n onap


Wait for the ONAP config pod to change state from ContainerCreating to Running and finally to Completed. It should end with a "Completed" status:

kubectl -n onap get pods --show-all
# Sample output:
NAME      READY     STATUS      RESTARTS   AGE
config    0/1       Completed   0          9m


$ kubectl get pod --all-namespaces -a | grep onap
onap          config                                  0/1       Completed   0          9m


$ kubectl get namespaces
# Sample output:
NAME          STATUS    AGE
default       Active    50m
kube-public   Active    50m
kube-system   Active    50m
onap          Active    8m


Deploy the SDN-C Pods

cd ~/oom/kubernetes/oneclick/
source setenv.bash
./createAll.bash -n onap -a sdnc


# Verify: repeat the following command until the installation is complete and all pods are running.
kubectl -n onap-sdnc get pod -o wide

3 SDNC pods will be created, each assigned to a separate Kubernetes worker node.

2 DBhost pods will be created, each assigned to a separate Kubernetes worker node.

Verify SDNC Clustering

Refer to Validate the SDN-C ODL cluster.

Undeploy SDNC

$ cd ~/oom/kubernetes/oneclick/
$ source setenv.bash
$ ./deleteAll.bash -n onap
$ ./deleteAll.bash -n onap -a sdnc
$ sudo rm -rf /dockerdata-nfs

Get the details from the Kubernetes Master Node


Access to the RestConf UI is via https://<Kubernetes-Master-Node-IP>:30202/apidoc/explorer/index.html (admin user).
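
A quick reachability check from the command line (illustrative; substitute your master IP and the admin credentials configured for your deployment — the endpoint shown is a common OpenDaylight RestConf path):

curl -k -u admin:<admin-password> \
    "https://<k8s-master-ip>:30202/restconf/operational/network-topology:network-topology"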

Run the following command to make sure the installation is error-free.

$ kubectl cluster-info
Kubernetes master is running at https://10.147.112.158:6443
KubeDNS is running at https://10.147.112.158:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
$ kubectl -n onap-sdnc get all
NAME                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/nfs-provisioner   1         1         1            1           5h
deploy/sdnc-dgbuilder    1         1         1            1           5h
deploy/sdnc-portal       1         1         1            1           5h

NAME                            DESIRED   CURRENT   READY   AGE
rs/nfs-provisioner-6cb95b597d   1         1         1       5h
rs/sdnc-dgbuilder-557b6879cd    1         1         1       5h
rs/sdnc-portal-7bb789ccd6       1         1         1       5h

NAME                       DESIRED   CURRENT   AGE
statefulsets/sdnc          3         3         5h
statefulsets/sdnc-dbhost   2         2         5h

NAME                                  READY     STATUS    RESTARTS   AGE
po/nfs-provisioner-6cb95b597d-jjhv5   1/1       Running   0          5h
po/sdnc-0                             2/2       Running   0          5h
po/sdnc-1                             2/2       Running   0          5h
po/sdnc-2                             2/2       Running   0          5h
po/sdnc-dbhost-0                      2/2       Running   0          5h
po/sdnc-dbhost-1                      2/2       Running   0          5h
po/sdnc-dgbuilder-557b6879cd-9nkv4    1/1       Running   0          5h
po/sdnc-portal-7bb789ccd6-5z9w4       1/1       Running   0          5h

NAME                  TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                                        AGE
svc/dbhost            ClusterIP   None             <none>        3306/TCP                                       5h
svc/dbhost-read       ClusterIP   10.109.106.173   <none>        3306/TCP                                       5h
svc/nfs-provisioner   ClusterIP   10.101.77.92     <none>        2049/TCP,20048/TCP,111/TCP,111/UDP             5h
svc/sdnc-dgbuilder    NodePort    10.111.87.223    <none>        3000:30203/TCP                                 5h
svc/sdnc-portal       NodePort    10.102.21.163    <none>        8843:30201/TCP                                 5h
svc/sdnctldb01        ClusterIP   None             <none>        3306/TCP                                       5h
svc/sdnctldb02        ClusterIP   None             <none>        3306/TCP                                       5h
svc/sdnhost           NodePort    10.108.100.101   <none>        8282:30202/TCP,8201:30208/TCP,8280:30246/TCP   5h
svc/sdnhost-cluster   ClusterIP   None             <none>        2550/TCP                                       5h
$ kubectl -n onap-sdnc get pod
NAME                               READY     STATUS              RESTARTS
nfs-provisioner-6cb95b597d-jjhv5   0/1       ContainerCreating   0
sdnc-0                             0/2       Init:0/1            0
sdnc-1                             0/2       Init:0/1            0
sdnc-2                             0/2       Init:0/1            0
sdnc-dbhost-0                      0/2       Init:0/2            0
sdnc-dgbuilder-557b6879cd-9nkv4    0/1       Init:0/1            0
sdnc-portal-7bb789ccd6-5z9w4       0/1       Init:0/1            0


# Wait a few minutes


$ kubectl -n onap-sdnc get pod
NAME                               READY     STATUS    RESTARTS   AGE
nfs-provisioner-6cb95b597d-jjhv5   1/1       Running   0          23m
sdnc-0                             2/2       Running   0          23m
sdnc-1                             2/2       Running   0          23m
sdnc-2                             2/2       Running   0          23m
sdnc-dbhost-0                      2/2       Running   0          23m
sdnc-dbhost-1                      2/2       Running   0          21m
sdnc-dgbuilder-557b6879cd-9nkv4    1/1       Running   0          23m
sdnc-portal-7bb789ccd6-5z9w4       1/1       Running   0          23m
$ kubectl get pod --all-namespaces -a
NAMESPACE     NAME                                    READY     STATUS      RESTARTS   AGE
kube-system   etcd-k8s-s2-master                      1/1       Running     0          13h
kube-system   kube-apiserver-k8s-s2-master            1/1       Running     0          13h
kube-system   kube-controller-manager-k8s-s2-master   1/1       Running     0          13h
kube-system   kube-dns-6f4fd4bdf-n8rgs                3/3       Running     0          14h
kube-system   kube-proxy-l8gsk                        1/1       Running     0          12h
kube-system   kube-proxy-pdz6h                        1/1       Running     0          12h
kube-system   kube-proxy-q7zz2                        1/1       Running     0          12h
kube-system   kube-proxy-r76g9                        1/1       Running     0          14h
kube-system   kube-scheduler-k8s-s2-master            1/1       Running     0          13h
kube-system   tiller-deploy-6657cd6b8d-f6p9h          1/1       Running     0          12h
kube-system   weave-net-mwdjd                         2/2       Running     1          12h
kube-system   weave-net-sl7gg                         2/2       Running     2          12h
kube-system   weave-net-t6nmx                         2/2       Running     1          13h
kube-system   weave-net-zmqcf                         2/2       Running     2          12h
onap-sdnc     nfs-provisioner-6cb95b597d-jjhv5        1/1       Running     0          5h
onap-sdnc     sdnc-0                                  2/2       Running     0          5h
onap-sdnc     sdnc-1                                  2/2       Running     0          5h
onap-sdnc     sdnc-2                                  2/2       Running     0          5h
onap-sdnc     sdnc-dbhost-0                           2/2       Running     0          5h
onap-sdnc     sdnc-dbhost-1                           2/2       Running     0          5h
onap-sdnc     sdnc-dgbuilder-557b6879cd-9nkv4         1/1       Running     0          5h
onap-sdnc     sdnc-portal-7bb789ccd6-5z9w4            1/1       Running     0          5h
onap          config                                  0/1       Completed   0          5h

$ kubectl -n onap-sdnc get pod -o wide
NAME                               READY     STATUS    RESTARTS   AGE       IP          NODE
nfs-provisioner-6cb95b597d-jjhv5   1/1       Running   0          5h        10.36.0.1   k8s-s2-node0
sdnc-0                             2/2       Running   0          5h        10.36.0.2   k8s-s2-node0
sdnc-1                             2/2       Running   0          5h        10.42.0.1   k8s-s2-node1
sdnc-2                             2/2       Running   0          5h        10.44.0.3   k8s-s2-node2
sdnc-dbhost-0                      2/2       Running   0          5h        10.44.0.4   k8s-s2-node2
sdnc-dbhost-1                      2/2       Running   0          5h        10.42.0.3   k8s-s2-node1
sdnc-dgbuilder-557b6879cd-9nkv4    1/1       Running   0          5h        10.44.0.2   k8s-s2-node2
sdnc-portal-7bb789ccd6-5z9w4       1/1       Running   0          5h        10.42.0.2   k8s-s2-node1

$ kubectl get services --all-namespaces -o wide
NAMESPACE     NAME              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                                        AGE       SELECTOR
default       kubernetes        ClusterIP   10.96.0.1        <none>        443/TCP                                        14h       <none>
kube-system   kube-dns          ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP                                  14h       k8s-app=kube-dns
kube-system   tiller-deploy     ClusterIP   10.96.194.1      <none>        44134/TCP                                      12h       app=helm,name=tiller
onap-sdnc     dbhost            ClusterIP   None             <none>        3306/TCP                                       5h        app=sdnc-dbhost
onap-sdnc     dbhost-read       ClusterIP   10.109.106.173   <none>        3306/TCP                                       5h        app=sdnc-dbhost
onap-sdnc     nfs-provisioner   ClusterIP   10.101.77.92     <none>        2049/TCP,20048/TCP,111/TCP,111/UDP             5h        app=nfs-provisioner
onap-sdnc     sdnc-dgbuilder    NodePort    10.111.87.223    <none>        3000:30203/TCP                                 5h        app=sdnc-dgbuilder
onap-sdnc     sdnc-portal       NodePort    10.102.21.163    <none>        8843:30201/TCP                                 5h        app=sdnc-portal
onap-sdnc     sdnctldb01        ClusterIP   None             <none>        3306/TCP                                       5h        app=sdnc-dbhost
onap-sdnc     sdnctldb02        ClusterIP   None             <none>        3306/TCP                                       5h        app=sdnc-dbhost
onap-sdnc     sdnhost           NodePort    10.108.100.101   <none>        8282:30202/TCP,8201:30208/TCP,8280:30246/TCP   5h        app=sdnc
onap-sdnc     sdnhost-cluster   ClusterIP   None             <none>        2550/TCP                                       5h        app=sdnc


Get more detail about a single pod by using "describe" with the resource name. The resource name is shown in the "get all" output used above.

$ kubectl -n onap-sdnc describe po/sdnc-0


Get logs of containers inside each pod:

# add -v=n (n: 1..10) to "kubectl logs" to get verbose logs.


$ kubectl describe pod sdnc-0  -n onap-sdnc
$ kubectl logs sdnc-0 sdnc-readiness -n onap-sdnc  # add -v=n (n: 1..10) to get verbose logs.
$ kubectl logs sdnc-0 sdnc-controller-container -n onap-sdnc
$ kubectl logs sdnc-0 filebeat-onap -n onap-sdnc

$ kubectl describe pod sdnc-dbhost-0  -n onap-sdnc
$ kubectl logs sdnc-dbhost-0 sdnc-db-container -n onap-sdnc
$ kubectl logs sdnc-dbhost-0 init-mysql -n onap-sdnc
$ kubectl logs sdnc-dbhost-0 clone-mysql  -n onap-sdnc
$ kubectl logs sdnc-dbhost-0 xtrabackup  -n onap-sdnc

List of Persistent Volumes

Each DB pod has a persistent volume claim (PVC), linked to a PV. The PVC capacity must be less than or equal to that of the PV. Their status must be "Bound".

$ kubectl get pv -n onap-sdnc
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM                               STORAGECLASS     REASON    AGE
pvc-75411a66-f640-11e7-9949-fa163ee2b421   1Gi        RWX            Delete           Bound      onap-sdnc/sdnc-data-sdnc-dbhost-0   onap-sdnc-data             23h
pvc-824cb3cc-f620-11e7-9949-fa163ee2b421   1Gi        RWX            Delete           Released   onap-sdnc/sdnc-data-sdnc-dbhost-0   onap-sdnc-data             1d
pvc-cb380eda-f640-11e7-9949-fa163ee2b421   1Gi        RWX            Delete           Bound      onap-sdnc/sdnc-data-sdnc-dbhost-1   onap-sdnc-data             23h




$ kubectl get pvc -n onap-sdnc
NAME                      STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS     AGE
sdnc-data-sdnc-dbhost-0   Bound     pvc-75411a66-f640-11e7-9949-fa163ee2b421   1Gi        RWX            onap-sdnc-data   23h
sdnc-data-sdnc-dbhost-1   Bound     pvc-cb380eda-f640-11e7-9949-fa163ee2b421   1Gi        RWX            onap-sdnc-data   23h
$ kubectl get serviceaccounts --all-namespaces
$ kubectl get clusterrolebinding --all-namespaces


$ kubectl get deployment --all-namespaces
NAMESPACE     NAME            DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-system   kube-dns        1         1         1            1           1d
kube-system   tiller-deploy   1         1         1            1           1d


Scale up or down SDNC or DB pods

decrease sdnc pods to 1 
$ kubectl scale statefulset  sdnc -n onap-sdnc --replicas=1
statefulset "sdnc" scaled


# verify: 2 sdnc pods will terminate
$ kubectl get pods --all-namespaces -a | grep sdnc
onap-sdnc    nfs-provisioner-5fb9fcb48f-cj8hm      1/1       Running       0          21h
onap-sdnc    sdnc-0                                2/2       Running       0          2h
onap-sdnc    sdnc-1                                0/2       Terminating   0         40m
onap-sdnc    sdnc-2                                0/2       Terminating   0         15m


increase sdnc pods to 5
$ kubectl scale statefulset  sdnc -n onap-sdnc --replicas=5
statefulset "sdnc" scaled

increase db pods to 5
$ kubectl scale statefulset sdnc-dbhost -n onap-sdnc --replicas=5
statefulset "sdnc-dbhost" scaled

$ kubectl get pods --all-namespaces -o wide | grep onap-sdnc
onap-sdnc     nfs-provisioner-7fd7b4c6b7-d6k5t        1/1       Running   0          13h       10.42.0.149     sdnc-k8s
onap-sdnc     sdnc-0                                  2/2       Running   0          13h       10.42.134.186   sdnc-k8s
onap-sdnc     sdnc-1                                  2/2       Running   0          13h       10.42.186.72    sdnc-k8s
onap-sdnc     sdnc-2                                  2/2       Running   0          13h       10.42.51.86     sdnc-k8s
onap-sdnc     sdnc-dbhost-0                           2/2       Running   0          13h       10.42.190.88    sdnc-k8s
onap-sdnc     sdnc-dbhost-1                           2/2       Running   0          12h       10.42.213.221   sdnc-k8s
onap-sdnc     sdnc-dbhost-2                           2/2       Running   0          5m        10.42.63.197    sdnc-k8s
onap-sdnc     sdnc-dbhost-3                           2/2       Running   0          5m        10.42.199.38    sdnc-k8s
onap-sdnc     sdnc-dbhost-4                           2/2       Running   0          4m        10.42.148.85    sdnc-k8s
onap-sdnc     sdnc-dgbuilder-6ff8d94857-hl92x         1/1       Running   0          13h       10.42.255.132   sdnc-k8s
onap-sdnc     sdnc-portal-0                           1/1       Running   0          13h       10.42.141.70    sdnc-k8s
onap-sdnc     sdnc-portal-1                           1/1       Running   0          13h       10.42.60.71     sdnc-k8s
onap-sdnc     sdnc-portal-2   






3 Comments

  1. Hi,

    Are you able to install the whole ONAP in this Cluster? 

    I created a 6 node cluster and installed ONAP on it. All the containers are up and running.

    However, there is some issue with kube2msb.

    Keep getting this error from kube2msb log:

    E0130 21:33:32.464069 7 reflector.go:216] kube2msb/kube2msb.go:214: Failed to list *api.Pod: pods is forbidden: User "system:serviceaccount:kube-system:default" cannot list pods at the cluster scope
    E0130 21:33:32.464161 7 reflector.go:216] kube2msb/kube2msb.go:148: Failed to list *api.Service: services is forbidden: User "system:serviceaccount:kube-system:default" cannot list services at the cluster scope
    E0130 21:33:33.465608 7 reflector.go:216] kube2msb/kube2msb.go:148: Failed to list *api.Service: services is forbidden: User "system:serviceaccount:kube-system:default" cannot list services at the cluster scope
    E0130 21:33:33.465672 7 reflector.go:216] kube2msb/kube2msb.go:214: Failed to list *api.Pod: pods is forbidden: User "system:serviceaccount:kube-system:default" cannot list pods at the cluster scope

    As a result, not all of the services are registered on MSB, which causes the health check to fail.

    The cluster, which was created by kubeadm, uses RBAC. However, kube2msb uses a TOKEN in its API calls.

    So kube2msb is not able to use "system:serviceaccount:kube-system:default" to access api.Pod and api.Service.

    pods:

    [centos@server-186-kubernetes-master-host-8z2tnf oom]$ kubectl get pod --all-namespaces -a | grep onap
    onap-aaf aaf-c68d8877c-gdzbh 0/1 Running 0 44m
    onap-aaf aaf-cs-75464db899-zwhbj 1/1 Running 0 44m
    onap-aai aai-resources-78db876764-dlfsd 2/2 Running 0 1h
    onap-aai aai-service-7cc55b7849-nnqpc 1/1 Running 0 1h
    onap-aai aai-traversal-5bbdd868dc-grsf5 2/2 Running 0 1h
    onap-aai data-router-857d5f69-2znlv 1/1 Running 0 1h
    onap-aai elasticsearch-6b577bf757-x85c2 1/1 Running 0 1h
    onap-aai hbase-576777bd56-pf8k7 1/1 Running 0 1h
    onap-aai model-loader-service-6c87b9cb79-ztm75 2/2 Running 0 1h
    onap-aai search-data-service-67cc45d55b-g4cqz 2/2 Running 0 1h
    onap-aai sparky-be-77c7874c9d-n5rvq 2/2 Running 0 1h
    onap-appc appc-775c4db7db-7lhl7 2/2 Running 0 45m
    onap-appc appc-dbhost-7bd58565d9-r2q7r 1/1 Running 0 45m
    onap-appc appc-dgbuilder-5dc9cff85-8rq45 1/1 Running 0 45m
    onap-clamp clamp-7cd5579f97-66g9m 1/1 Running 0 45m
    onap-clamp clamp-mariadb-78c46967b8-nvprj 1/1 Running 0 45m
    onap-cli cli-6885486887-hf4wb 1/1 Running 0 45m
    onap-consul consul-agent-5c744c8758-cl8mk 1/1 Running 0 1h
    onap-consul consul-server-687f6f6556-65mdq 1/1 Running 0 1h
    onap-consul consul-server-687f6f6556-lw8lc 1/1 Running 0 1h
    onap-consul consul-server-687f6f6556-sbm6x 1/1 Running 0 1h
    onap-esr esr-esrgui-68cdbd94f5-zrt79 1/1 Running 0 32m
    onap-esr esr-esrserver-7fd9c6b6fc-ktk8j 1/1 Running 0 32m
    onap-kube2msb kube2msb-registrator-5d8547df4c-grt7h 1/1 Running 0 1h
    onap-log elasticsearch-7689bd6fbd-x8gbj 1/1 Running 0 45m
    onap-log kibana-5dfc4d4f86-jqfsh 1/1 Running 0 45m
    onap-log logstash-cd4cb99f8-69lrc 1/1 Running 0 45m
    onap-message-router dmaap-84bd655dd6-hhsr2 1/1 Running 0 45m
    onap-message-router global-kafka-54d5cf7b5b-sxcnt 1/1 Running 0 45m
    onap-message-router zookeeper-7df6479654-6dbwf 1/1 Running 0 45m
    onap-msb msb-consul-6c79b86c79-gqxks 1/1 Running 0 1h
    onap-msb msb-discovery-845db56dc5-hqj4k 1/1 Running 0 1h
    onap-msb msb-eag-65bd96b98-j5b45 1/1 Running 0 1h
    onap-msb msb-iag-7bb5b74cd9-6n8qd 1/1 Running 0 1h
    onap-mso mariadb-6487b74997-62ldp 1/1 Running 0 46m
    onap-mso mso-6d6f86958b-xn6mv 2/2 Running 0 46m
    onap-multicloud framework-77f46548ff-n8xll 1/1 Running 0 45m
    onap-multicloud multicloud-ocata-766474f955-c6sv4 1/1 Running 0 45m
    onap-multicloud multicloud-vio-64598b4c84-tqkl6 1/1 Running 0 45m
    onap-multicloud multicloud-windriver-68765f8898-dv2kv 1/1 Running 0 45m
    onap-policy brmsgw-785897ff54-4z97r 1/1 Running 0 45m
    onap-policy drools-7889c5d865-95m95 2/2 Running 0 45m
    onap-policy mariadb-7c66956bf-gmbj5 1/1 Running 0 45m
    onap-policy nexus-77487c4c5d-8z8gc 1/1 Running 0 45m
    onap-policy pap-796d8d946f-4f4v4 2/2 Running 0 45m
    onap-policy pdp-8775c6cc8-54llt 2/2 Running 0 45m
    onap-portal portalapps-dd4f99c9b-9pl86 2/2 Running 0 45m
    onap-portal portaldb-7f8547d599-kcf9j 1/1 Running 0 45m
    onap-portal portalwidgets-6f884fd4b4-h7l88 1/1 Running 0 45m
    onap-portal vnc-portal-687cdf7845-mt2mr 1/1 Running 0 45m
    onap-robot robot-7747c7c475-kkqkv 1/1 Running 0 45m
    onap-sdc sdc-be-7f6bb5884f-6hvjl 2/2 Running 0 45m
    onap-sdc sdc-cs-7bd5fb9dbc-hgcqk 1/1 Running 0 45m
    onap-sdc sdc-es-69f77b4778-4sb9k 1/1 Running 0 45m
    onap-sdc sdc-fe-7c7b64c6c-hm8d6 2/2 Running 0 45m
    onap-sdc sdc-kb-c4dc46d47-znmr5 1/1 Running 0 45m
    onap-sdnc dmaap-listener-5d5584bb9d-mjkt8 1/1 Running 0 45m
    onap-sdnc nfs-provisioner-64b5fc8744-qm924 1/1 Running 0 45m
    onap-sdnc sdnc-0 2/2 Running 0 45m
    onap-sdnc sdnc-dbhost-0 2/2 Running 0 45m
    onap-sdnc sdnc-dgbuilder-8667587c65-l54fg 1/1 Running 0 45m
    onap-sdnc sdnc-portal-6587c6dbdf-s96t6 1/1 Running 0 45m
    onap-sdnc ueb-listener-69ccb5bdcb-gm4tk 1/1 Running 0 45m
    onap-uui uui-578cd988b6-dd86h 1/1 Running 0 44m
    onap-uui uui-server-576998685c-rb8p5 1/1 Running 0 44m
    onap-vfc vfc-catalog-6ff7b74b68-qf8wg 1/1 Running 0 44m
    onap-vfc vfc-emsdriver-7845c8f9f-7j6z2 1/1 Running 0 44m
    onap-vfc vfc-gvnfmdriver-56cf469b46-hfrw8 1/1 Running 0 44m
    onap-vfc vfc-hwvnfmdriver-588d5b679f-qw757 1/1 Running 0 44m
    onap-vfc vfc-jujudriver-6db77bfdd5-wmpld 1/1 Running 0 44m
    onap-vfc vfc-nokiavnfmdriver-6c78675f8d-x5gql 1/1 Running 0 44m
    onap-vfc vfc-nslcm-796b678d-9b9ws 1/1 Running 0 44m
    onap-vfc vfc-resmgr-74f858b688-dwcl9 1/1 Running 0 44m
    onap-vfc vfc-vnflcm-5849759444-5qnff 1/1 Running 0 44m
    onap-vfc vfc-vnfmgr-77df547c78-6kngz 1/1 Running 0 44m
    onap-vfc vfc-vnfres-5bddd7fc68-5ngxx 1/1 Running 0 44m
    onap-vfc vfc-workflow-5849854569-nwmw8 1/1 Running 0 44m
    onap-vfc vfc-workflowengineactiviti-699f669db9-wptfp 1/1 Running 0 44m
    onap-vfc vfc-ztesdncdriver-5dcf694c4-5q7zn 1/1 Running 0 44m
    onap-vfc vfc-ztevnfmdriver-585d8db4f7-k5rtb 0/1 ImagePullBackOff 0 44m
    onap-vid vid-mariadb-575fd8f48-hn98f 1/1 Running 0 45m
    onap-vid vid-server-65c4f7ddb9-fxdjj 2/2 Running 0 45m
    onap-vnfsdk postgres-5679d856cf-6cvds 1/1 Running 0 45m
    onap-vnfsdk refrepo-594969bf89-rt7fw 1/1 Running 0 45m
    onap config 0/1 Completed 0 1h

    1. I have not tried to install whole ONAP in a cluster.

      See https://github.com/kubernetes/kops/issues/3551 for your error.. might help

    2. Hi Jun Hu, this looks like an issue with the service account not having the cluster admin role. You may try to create a cluster role binding that binds your service account "default" in the "kube-system" namespace explicitly; that may resolve your issue. For reference, see the yaml file below - clusterrolebind.yaml:

      apiVersion: rbac.authorization.k8s.io/v1beta1
      kind: ClusterRoleBinding
      metadata:
        name: system:serviceaccount:default
        namespace: kube-system
      subjects:
      - kind: ServiceAccount
        name: default
        namespace: kube-system
      roleRef:
        kind: ClusterRole
        name: cluster-admin
        apiGroup: rbac.authorization.k8s.io

      To create use: kubectl create -f <filename>.yaml

      I was able to resolve a similar issue for "default" service account in "onap" namespace using above process. Details below:

      Issue: 

      E0319 15:40:32.717436       1 reflector.go:201] github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:369: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:serviceaccount:onap:default" cannot list storageclasses.storage.k8s.io at the cluster scope

      Resolution: Create cluster role binding to bind service account "default" in "onap" namespace with cluster-admin role.