This wiki describes how to set up a Kubernetes cluster with kubeadm, and then deploy SDN-C within that Kubernetes cluster.
What is OpenStack? What is Kubernetes? What is Docker?
In the OpenStack lab, the controller performs the function of partitioning resources. The compute nodes are the collection of resources (memory, CPUs, hard drive space) to be partitioned. When creating a VM with "X" memory, "Y" CPUs and "Z" hard drive space, OpenStack's controller reviews its pool of available resources, allocates the quota, and then creates the VM on one of the available compute nodes. Many VMs can be created on a single compute node. OpenStack's controller uses many criteria to choose a compute node, and if an application spans multiple VMs, affinity rules can be used to ensure the VMs do not all congregate on a single compute node, which would be bad for resilience.
Kubernetes is similar to OpenStack in that it manages resources. Instead of scheduling VMs, Kubernetes schedules Pods. In a Kubernetes cluster, there is a single master node and multiple worker nodes. The Kubernetes master node is like the OpenStack controller in that it allocates resources for Pods. Kubernetes worker nodes are the pool of resources to be allocated, similar to OpenStack's compute nodes. Pods, like VMs, can have affinity rules configured to increase an application's resilience.
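For example, a pod anti-affinity rule tells the Kubernetes scheduler to keep replicas of the same app off the same worker node. The following is a minimal, hypothetical pod spec sketch (the name and label "example" are placeholders, not part of this tutorial's deployment):

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  labels:
    app: example
spec:
  affinity:
    podAntiAffinity:
      # do not schedule this pod on a node that already runs a pod labelled app=example
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: example
        topologyKey: kubernetes.io/hostname
  containers:
  - name: app
    image: nginx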
If you would like more information on these subjects, please explore these links:
Deployment Architecture
The Kubernetes deployment in this tutorial will be set up on top of OpenStack VMs. Let's call this the undercloud. The undercloud can be physical boxes or VMs. The VMs can come from different cloud providers, but in this tutorial we will use OpenStack. The following table shows the layers of software that need to be considered when thinking about resilience:
Hardware Base OS | OpenStack Software Configured on Base OS | VMs Deployed by OpenStack | Kubernetes Software Configured on VMs | Pods Deployed by Kubernetes | Docker Containers Deployed within a Pod
---|---|---|---|---|---
Computer 1 | Controller Node | | | |
Computer 2 | Compute | VM 1 | k8s-master | |
Computer 3 | Compute | VM 2 | k8s-node1 | sdnc-0 | sdnc-controller-container, filebeat-onap
 | | | | sdnc-dbhost-0 | sdnc-db-container, xtrabackup
Computer 4 | Compute | VM 3 | k8s-node2 | sdnc-1 | sdnc-controller-container, filebeat-onap
 | | | | sdnc-dbhost-1 | sdnc-db-container, xtrabackup
Computer 5 | Compute | VM 4 | k8s-node3 | sdnc-2 | sdnc-controller-container, filebeat-onap
 | | | | nfs-provisioner-xxx | nfs-provisioner
Setting up an OpenStack lab is out of scope for this tutorial. Assuming that you have a lab, you will need to create 1+n VMs: one to be configured as the Kubernetes master node, and "n" to be configured as Kubernetes worker nodes. We will create 3 Kubernetes worker nodes for this tutorial because we want each of our SDN-C replicas to land on a different VM for resiliency.
Create the Undercloud
The examples here will use the OpenStackClient; however, the OpenStack Horizon GUI could be used instead. Start by creating 4 VMs with the hostnames k8s-master, k8s-node1, k8s-node2, and k8s-node3. Each VM should have internet access and approximately the following resources (a sample flavor-creation command follows the list):
- 16384 MB of RAM
- 20 GB of disk
- 4 vCPUs
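If a matching flavor does not already exist in your lab, one can be created with the OpenStackClient; the flavor name "m1.k8s" below is just an example:

openstack flavor create --ram 16384 --disk 20 --vcpus 4 m1.k8s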
How many resources are needed?
There was no evaluation of how much quota is actually needed; the above numbers were arbitrarily chosen as being sufficient. A lot more is likely needed if the full ONAP environment is deployed. For just SDN-C, this is more than enough.
Use the Ubuntu 16.04 cloud image to create the VMs. This image can be found at https://cloud-images.ubuntu.com/.
wget https://cloud-images.ubuntu.com/releases/16.04/release/ubuntu-16.04-server-cloudimg-amd64-disk1.img
openstack image create ubuntu-16.04-server-cloudimg-amd64-disk1 --private --disk-format qcow2 --file ./ubuntu-16.04-server-cloudimg-amd64-disk1.img
Exactly how to create VMs in OpenStack is out of scope for this tutorial. However, here are some examples of OpenStackClient commands that can be used to perform this job:
openstack server list; openstack network list; openstack flavor list; openstack keypair list; openstack image list; openstack security group list

openstack server create --flavor "flavor-name" --image "ubuntu-16.04-server-cloudimg-amd64-disk1" --key-name "keypair-name" --nic net-id="net-name" --security-group "security-group-id" "k8s-master"
openstack server create --flavor "flavor-name" --image "ubuntu-16.04-server-cloudimg-amd64-disk1" --key-name "keypair-name" --nic net-id="net-name" --security-group "security-group-id" "k8s-node1"
openstack server create --flavor "flavor-name" --image "ubuntu-16.04-server-cloudimg-amd64-disk1" --key-name "keypair-name" --nic net-id="net-name" --security-group "security-group-id" "k8s-node2"
openstack server create --flavor "flavor-name" --image "ubuntu-16.04-server-cloudimg-amd64-disk1" --key-name "keypair-name" --nic net-id="net-name" --security-group "security-group-id" "k8s-node3"
Configure Each VM
Repeat the following steps on each VM:
Pre-Configure Each VM
Make sure the VMs are:
- Up to date
- Synchronized (the clocks on all VMs must match)
As the ubuntu user, run the following:
# fix vi bug (changes first letter of file after opening to "g")
vi ~/.vimrc   # repeat for both the root and ubuntu users; add the following 2 lines:
syntax on
set background=dark

# add the hostnames of the kubernetes nodes (master and workers) to /etc/hosts
sudo vi /etc/hosts
#<IP address> <hostname>

# turn off the firewall and allow all incoming HTTP connections through IPTABLES
sudo ufw disable
sudo iptables -I INPUT -j ACCEPT

# fix the server timezone and select your timezone
sudo dpkg-reconfigure tzdata

# (Optional) create a bash history file as the ubuntu user so that it does not accidentally get created as the root user
touch ~/.bash_history

# (Optional) turn on ssh password authentication and give the ubuntu user a password if you do not like using ssh keys.
# Set "PasswordAuthentication yes" in the /etc/ssh/sshd_config file and then set the ubuntu password
sudo vi /etc/ssh/sshd_config; sudo systemctl restart sshd; sudo passwd ubuntu

# update the VM with the latest core packages
sudo apt clean
sudo apt update
sudo apt -y full-upgrade
sudo reboot

# set up ntp on your image if needed. It is important that all the VMs' clocks are in sync,
# or it will cause problems joining kubernetes nodes to the kubernetes cluster
sudo apt install ntp
sudo apt install ntpdate

# if needed, add your local ntp-hostname to ntp.conf
sudo vi /etc/ntp.conf

# sync your VM clock with your ntp server. The best choice for the ntp server is a server
# other than the Kubernetes VMs. An ntp restart is needed to sync the time.
date
sudo service ntp stop
sudo ntpdate -s <ntp-hostname>
sudo service ntp start
date
Install Docker
The ONAP applications are packaged in Docker containers.
The following snippet was taken from https://docs.docker.com/engine/installation/linux/docker-ce/ubuntu/#install-docker-ce:
sudo apt-get install linux-image-extra-$(uname -r) linux-image-extra-virtual
sudo apt-get install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo apt-key fingerprint 0EBFCD88
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get -y install docker-ce
sudo docker run hello-world
Install the Kubernetes Packages
Just install the packages; there is no need to configure them yet.
The following snippet was taken from https://kubernetes.io/docs/setup/independent/install-kubeadm/:
(Chrome is the preferred browser for copying this snippet. IE may add an extra "CR LF" to each line, which causes problems.)
# the "sudo -i" changes the user to root
sudo -i
apt-get update && apt-get install -y apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update

# If your environment is set up for "Kubernetes federation", then you need "kubefed v1.8.6".
# We recommend that all Kubernetes packages be of the same version.
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.8.6/bin/linux/amd64/kubelet
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.8.6/bin/linux/amd64/kubeadm
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.8.6/bin/linux/amd64/kubectl
chmod +x kube*
mv kubelet /usr/local/bin/
mv kubeadm /usr/local/bin/
mv kubectl /usr/local/bin/

# If your environment is NOT set up for "Kubernetes federation", then you can use the latest version of the Kubernetes packages.
apt-get install -y kubelet kubeadm kubectl
exit
Configure the Kubernetes Cluster with kubeadm
kubeadm is a utility provided by Kubernetes which simplifies the process of configuring a Kubernetes cluster. Details about how to use it can be found here: https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/.
Configure the Kubernetes Master Node
The kubeadm init command sets up the Kubernetes master node. SSH to the k8s-master VM and invoke the following command. It is important to capture the output in a log file, because it contains information you will need to refer to afterwards.
# on the k8s-master vm, set up the kubernetes master node
# the "sudo -i" changes the user to root
sudo -i
kubeadm init | tee ~/kubeadm_init.log
# "exit" reverts the user back to ubuntu
exit
The output of "kubeadm init" will look like the following:
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.8.5
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.147.101.4]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] This often takes around a minute; or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 59.503216 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node k8s-master as master by adding a label and a taint
[markmaster] Master k8s-master tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: 76d969.a408a6a63c9c4b73
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node as root:

  sudo kubeadm join --token b17b41.5095dd3db0129b0b 10.147.132.36:6443 --discovery-token-ca-cert-hash sha256:139ff626a5b177c4e4efc5dedd8ba5f2a84edbf42e3f2f70543095358ff97d3b
Execute the following snippet, as both the ubuntu and root users, to get kubectl to work:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
A "pod network" must be deployed to use the cluster. This lets the pods communicate with each other.
There are many different pod networks to choose from. See https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#pod-network for choices. For this tutorial, the Weave Net pod network was arbitrarily chosen (see https://www.weave.works/docs/net/latest/kubernetes/kube-addon/ for more information).
The following snippet will install the Weave Net pod network:
sudo kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

# wait a few minutes and verify
$ kubectl get pods -n kube-system
NAME                                    READY     STATUS    RESTARTS   AGE
etcd-k8s-s2-master                      1/1       Running   0          1m
kube-apiserver-k8s-s2-master            1/1       Running   0          1m
kube-controller-manager-k8s-s2-master   1/1       Running   0          1m
kube-dns-6f4fd4bdf-7pb9j                3/3       Running   0          2m
kube-proxy-prnfr                        1/1       Running   0          2m
kube-scheduler-k8s-s2-master            1/1       Running   0          1m
weave-net-hhjct                         2/2       Running   0          1m
Install Helm and Tiller on the Kubernetes Master Node
ONAP uses Helm, so we need to install Helm. The following instructions were taken from https://docs.helm.sh/using_helm/#installing-helm:
# as the root user, download helm and install it
curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh
chmod 700 get_helm.sh
./get_helm.sh
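At this point only the Helm client exists; Tiller (the server side) is installed next. A quick sanity check of the client alone (the --client flag stops helm from also querying Tiller, which is not running yet):

helm version --client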
Tiller requires a service account to be set up in Kubernetes before it is initialized. The following snippet will do that for you:
(Chrome is the preferred browser for copying this snippet. IE may add an extra "CR LF" to each line, which causes problems.)
# as the root user, create the kubernetes spec to define the helm service account
cat > tiller-serviceaccount.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: tiller-clusterrolebinding
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: ""
EOF

# install Tiller
kubectl create -f tiller-serviceaccount.yaml

# init helm
helm init --service-account tiller --upgrade
If you need to reset Helm, follow the below steps:
# uninstall Tiller from the cluster
helm reset

# clean up any existing artifacts
kubectl -n kube-system delete deployment tiller-deploy
kubectl -n kube-system delete serviceaccount tiller
kubectl -n kube-system delete clusterrolebinding tiller-clusterrolebinding

# recreate the service account and re-init helm
kubectl create -f tiller-serviceaccount.yaml
helm init --service-account tiller --upgrade
Configure the Kubernetes Worker Nodes
Setting up the worker nodes is very easy: just refer back to the "kubeadm init" output log. The log contains a "kubeadm join" command with token parameters. Capture those parameters and then execute the command as root on each of the VMs that will be Kubernetes worker nodes: k8s-node1, k8s-node2, and k8s-node3.
The command looks like the following snippet:
sudo kubeadm join --token b17b41.5095dd3db0129b0b 10.147.132.36:6443 --discovery-token-ca-cert-hash sha256:139ff626a5b177c4e4efc5dedd8ba5f2a84edbf42e3f2f70543095358ff97d3b
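As the kubeadm init output warned, the bootstrap token expires after 24 hours by default. If you add a worker node later, a sketch like the following (based on the commands documented at the kubeadm link above) generates a fresh token and recomputes the CA certificate hash on the master:

# on k8s-master: create a new bootstrap token and list existing ones
sudo kubeadm token create
sudo kubeadm token list

# recompute the value for --discovery-token-ca-cert-hash
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | \
  openssl rsa -pubin -outform der 2>/dev/null | \
  openssl dgst -sha256 -hex | sed 's/^.* //'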
Back on the Kubernetes master node VM, execute the “kubectl get nodes“ command to see all the Kubernetes nodes. It might take some time, but eventually each node will have a status of "Ready":
kubectl get nodes

#Sample Output:
#NAME         STATUS    ROLES     AGE       VERSION
#k8s-master   Ready     master    1d        v1.8.5
#k8s-node1    Ready     <none>    1d        v1.8.5
#k8s-node2    Ready     <none>    1d        v1.8.5
#k8s-node3    Ready     <none>    1d        v1.8.5
Make sure that the tiller pod is running. Execute the following command and look for a po/tiller-deploy-xxxx with a “Running” status. For example:
kubectl -n kube-system get pods

#Sample output:
NAME                                    READY     STATUS    RESTARTS   AGE
etcd-k8s-s1-master                      1/1       Running   0          2d
kube-apiserver-k8s-s1-master            1/1       Running   0          2d
kube-controller-manager-k8s-s1-master   1/1       Running   0          2d
kube-dns-6f4fd4bdf-4dvg7                3/3       Running   0          2d
kube-proxy-6bftw                        1/1       Running   0          1d
kube-proxy-ds8cj                        1/1       Running   0          2d
kube-proxy-fchnx                        1/1       Running   0          1d
kube-proxy-xf2nv                        1/1       Running   0          1d
kube-scheduler-k8s-s1-master            1/1       Running   0          2d
tiller-deploy-6657cd6b8d-hqc5l          1/1       Running   0          1d
weave-net-fl2lg                         2/2       Running   1          1d
weave-net-mgkqp                         2/2       Running   2          1d
weave-net-tqmsm                         2/2       Running   1          2d
weave-net-xj2v2                         2/2       Running   1          1d
You now have a Kubernetes cluster with 3 worker nodes and 1 master node.
Configure dockerdata-nfs
This is a shared directory that must be mounted on all of the Kubernetes VMs (master node and worker nodes), because many of the ONAP pods use this directory to share data.
See 3. Share the /dockerdata-nfs Folder between Kubernetes Nodes for instructions on how to set this up.
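The linked page is the authoritative procedure; as a rough sketch of what it amounts to (assuming the master node doubles as the NFS server and the workers can reach it over the lab network):

# on the NFS server (e.g. the k8s-master VM)
sudo apt install nfs-kernel-server
sudo mkdir -p /dockerdata-nfs
# export the folder to the other nodes; "*" is a lab-only shortcut -- restrict it to your subnet in practice
echo "/dockerdata-nfs *(rw,no_root_squash,no_subtree_check)" | sudo tee -a /etc/exports
sudo systemctl restart nfs-kernel-server

# on each worker node
sudo apt install nfs-common
sudo mkdir -p /dockerdata-nfs
sudo mount <master-ip>:/dockerdata-nfs /dockerdata-nfs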
Configuring SDN-C ONAP
Clone the OOM Project (Only on the Kubernetes Master Node)
As the ubuntu user, clone the oom repository.
git clone https://gerrit.onap.org/r/oom
Follow ONLY the following 2 steps from Configure SDN-C Cluster Deployment:
- Get New startODL.sh Script From Gerrit Topic SDNC-163
- Get SDN-C Cluster Templates From Gerrit Topic SDNC-163 (apply gerrit change 25467, update sdnc-statefulset.yaml, modify values.yaml)
Local Nexus
Optional: if you have a local Nexus3 Docker repository, you can use the following snippet to update the oom files to pull from your local repository. This will speed up deployment time.
# update the nexus3 address to point at your local repository
find ~/oom -type f -exec \
  sed -i 's/nexus3\.onap\.org:10001/yournexus:port/g' {} +
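A quick way to confirm the substitution took effect (using the same yournexus:port placeholder as above):

grep -rl "yournexus:port" ~/oom/kubernetes | head
# each listed file now points at your local nexus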
Configure ONAP
As the ubuntu user, run:
cd ~/oom/kubernetes/oneclick/
source setenv.bash

cd ~/oom/kubernetes/config/
# dummy values can be used as we will not be deploying a VM
cp onap-parameters-sample.yaml onap-parameters.yaml
./createConfig.sh -n onap
Wait for the ONAP config pod to change state from ContainerCreating to Running and finally to Completed:
kubectl -n onap get pods --show-all

#Sample output:
NAME      READY     STATUS      RESTARTS   AGE
config    0/1       Completed   0          9m

$ kubectl get pod --all-namespaces -a | grep onap
onap      config    0/1       Completed   0          9m

$ kubectl get namespaces

#Sample output:
NAME          STATUS    AGE
default       Active    50m
kube-public   Active    50m
kube-system   Active    50m
onap          Active    8m
Deploy the SDN-C Pods
As the ubuntu user, apply the following patch to the kubernetes/sdnc/templates/nfs-provisoner-deployment.yaml file so that it does not try to export /dockerdata-nfs as an NFS mount (because this folder was already exported by the NFS server during the configuration step earlier):
cd ~/
cat > nfs-provisoner.patch <<EOF
diff --git a/kubernetes/sdnc/templates/nfs-provisoner-deployment.yaml b/kubernetes/sdnc/templates/nfs-provisoner-deployment.yaml
index 75f57fd..0952da8 100644
--- a/kubernetes/sdnc/templates/nfs-provisoner-deployment.yaml
+++ b/kubernetes/sdnc/templates/nfs-provisoner-deployment.yaml
@@ -51,5 +51,6 @@ spec:
       volumes:
       - name: export-volume
         hostPath:
-          path: /dockerdata-nfs/{{ .Values.nsPrefix }}/sdnc/data
+          path: /nfs-provisioner/{{ .Values.nsPrefix }}/sdnc/data
+          type: DirectoryOrCreate
 #{{ end }}
EOF

cd ~/oom
git apply ~/nfs-provisoner.patch

cd ~/oom/kubernetes/oneclick/
source setenv.bash
./createAll.bash -n onap -a sdnc

# repeat the following command until the installation is completed
kubectl -n onap-sdnc get pod -o wide
3 SDN-C pods will be created, each assigned to a separate Kubernetes worker node.
2 DB host pods will be created, each assigned to a separate Kubernetes worker node.
Verify SDNC Clustering
Refer to Validate the SDN-C ODL cluster.
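The linked page contains the full validation procedure. As one quick, hedged example: ODL exposes cluster shard state over Jolokia, so a query along these lines (assuming the Jolokia endpoint is reachable through the sdnhost NodePort 30202 and the default admin credentials — adjust both for your lab) should report the config datastore shard manager status:

curl -u admin:admin \
  "http://<k8s-master-ip>:30202/jolokia/read/org.opendaylight.controller:Category=ShardManager,name=shard-manager-config,type=DistributedConfigDatastore"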
Undeploy SDNC
$ cd ~/oom/kubernetes/oneclick/
$ source setenv.bash
$ ./deleteAll.bash -n onap
$ ./deleteAll.bash -n onap -a sdnc
$ sudo rm -rf /dockerdata-nfs
Get the Details from the Kubernetes Master Node
Access to the RestConf UI is via https://<Kubernetes-Master-Node-IP>:30202/apidoc/explorer/index.html (admin user).
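A quick reachability check from the command line (the -k flag skips certificate verification, assuming the lab uses a self-signed certificate; you will be prompted for the admin password):

curl -k -u admin "https://<Kubernetes-Master-Node-IP>:30202/apidoc/explorer/index.html" -s -o /dev/null -w "%{http_code}\n"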
Run the following commands to make sure the installation is error-free.
$ kubectl cluster-info
Kubernetes master is running at https://10.147.112.158:6443
KubeDNS is running at https://10.147.112.158:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
$ kubectl -n onap-sdnc get all
NAME                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/nfs-provisioner   1         1         1            1           5h
deploy/sdnc-dgbuilder    1         1         1            1           5h
deploy/sdnc-portal       1         1         1            1           5h

NAME                            DESIRED   CURRENT   READY     AGE
rs/nfs-provisioner-6cb95b597d   1         1         1         5h
rs/sdnc-dgbuilder-557b6879cd    1         1         1         5h
rs/sdnc-portal-7bb789ccd6       1         1         1         5h

NAME                       DESIRED   CURRENT   AGE
statefulsets/sdnc          3         3         5h
statefulsets/sdnc-dbhost   2         2         5h

NAME                                  READY     STATUS    RESTARTS   AGE
po/nfs-provisioner-6cb95b597d-jjhv5   1/1       Running   0          5h
po/sdnc-0                             2/2       Running   0          5h
po/sdnc-1                             2/2       Running   0          5h
po/sdnc-2                             2/2       Running   0          5h
po/sdnc-dbhost-0                      2/2       Running   0          5h
po/sdnc-dbhost-1                      2/2       Running   0          5h
po/sdnc-dgbuilder-557b6879cd-9nkv4    1/1       Running   0          5h
po/sdnc-portal-7bb789ccd6-5z9w4       1/1       Running   0          5h

NAME                  TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                                        AGE
svc/dbhost            ClusterIP   None             <none>        3306/TCP                                       5h
svc/dbhost-read       ClusterIP   10.109.106.173   <none>        3306/TCP                                       5h
svc/nfs-provisioner   ClusterIP   10.101.77.92     <none>        2049/TCP,20048/TCP,111/TCP,111/UDP             5h
svc/sdnc-dgbuilder    NodePort    10.111.87.223    <none>        3000:30203/TCP                                 5h
svc/sdnc-portal       NodePort    10.102.21.163    <none>        8843:30201/TCP                                 5h
svc/sdnctldb01        ClusterIP   None             <none>        3306/TCP                                       5h
svc/sdnctldb02        ClusterIP   None             <none>        3306/TCP                                       5h
svc/sdnhost           NodePort    10.108.100.101   <none>        8282:30202/TCP,8201:30208/TCP,8280:30246/TCP   5h
svc/sdnhost-cluster   ClusterIP   None             <none>        2550/TCP                                       5h
$ kubectl -n onap-sdnc get pod
NAME                               READY     STATUS              RESTARTS
nfs-provisioner-6cb95b597d-jjhv5   0/1       ContainerCreating   0
sdnc-0                             0/2       Init:0/1            0
sdnc-1                             0/2       Init:0/1            0
sdnc-2                             0/2       Init:0/1            0
sdnc-dbhost-0                      0/2       Init:0/2            0
sdnc-dgbuilder-557b6879cd-9nkv4    0/1       Init:0/1            0
sdnc-portal-7bb789ccd6-5z9w4       0/1       Init:0/1            0

# wait a few minutes
$ kubectl -n onap-sdnc get pod
NAME                               READY     STATUS    RESTARTS   AGE
nfs-provisioner-6cb95b597d-jjhv5   1/1       Running   0          23m
sdnc-0                             2/2       Running   0          23m
sdnc-1                             2/2       Running   0          23m
sdnc-2                             2/2       Running   0          23m
sdnc-dbhost-0                      2/2       Running   0          23m
sdnc-dbhost-1                      2/2       Running   0          21m
sdnc-dgbuilder-557b6879cd-9nkv4    1/1       Running   0          23m
sdnc-portal-7bb789ccd6-5z9w4       1/1       Running   0          23m
$ kubectl get pod --all-namespaces -a
NAMESPACE     NAME                                    READY     STATUS      RESTARTS   AGE
kube-system   etcd-k8s-s2-master                      1/1       Running     0          13h
kube-system   kube-apiserver-k8s-s2-master            1/1       Running     0          13h
kube-system   kube-controller-manager-k8s-s2-master   1/1       Running     0          13h
kube-system   kube-dns-6f4fd4bdf-n8rgs                3/3       Running     0          14h
kube-system   kube-proxy-l8gsk                        1/1       Running     0          12h
kube-system   kube-proxy-pdz6h                        1/1       Running     0          12h
kube-system   kube-proxy-q7zz2                        1/1       Running     0          12h
kube-system   kube-proxy-r76g9                        1/1       Running     0          14h
kube-system   kube-scheduler-k8s-s2-master            1/1       Running     0          13h
kube-system   tiller-deploy-6657cd6b8d-f6p9h          1/1       Running     0          12h
kube-system   weave-net-mwdjd                         2/2       Running     1          12h
kube-system   weave-net-sl7gg                         2/2       Running     2          12h
kube-system   weave-net-t6nmx                         2/2       Running     1          13h
kube-system   weave-net-zmqcf                         2/2       Running     2          12h
onap-sdnc     nfs-provisioner-6cb95b597d-jjhv5        1/1       Running     0          5h
onap-sdnc     sdnc-0                                  2/2       Running     0          5h
onap-sdnc     sdnc-1                                  2/2       Running     0          5h
onap-sdnc     sdnc-2                                  2/2       Running     0          5h
onap-sdnc     sdnc-dbhost-0                           2/2       Running     0          5h
onap-sdnc     sdnc-dbhost-1                           2/2       Running     0          5h
onap-sdnc     sdnc-dgbuilder-557b6879cd-9nkv4         1/1       Running     0          5h
onap-sdnc     sdnc-portal-7bb789ccd6-5z9w4            1/1       Running     0          5h
onap          config                                  0/1       Completed   0          5h
$ kubectl -n onap-sdnc get pod -o wide
NAME                               READY     STATUS    RESTARTS   AGE       IP          NODE
nfs-provisioner-6cb95b597d-jjhv5   1/1       Running   0          5h        10.36.0.1   k8s-s2-node0
sdnc-0                             2/2       Running   0          5h        10.36.0.2   k8s-s2-node0
sdnc-1                             2/2       Running   0          5h        10.42.0.1   k8s-s2-node1
sdnc-2                             2/2       Running   0          5h        10.44.0.3   k8s-s2-node2
sdnc-dbhost-0                      2/2       Running   0          5h        10.44.0.4   k8s-s2-node2
sdnc-dbhost-1                      2/2       Running   0          5h        10.42.0.3   k8s-s2-node1
sdnc-dgbuilder-557b6879cd-9nkv4    1/1       Running   0          5h        10.44.0.2   k8s-s2-node2
sdnc-portal-7bb789ccd6-5z9w4       1/1       Running   0          5h        10.42.0.2   k8s-s2-node1
$ kubectl get services --all-namespaces -o wide
NAMESPACE     NAME              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                                        AGE       SELECTOR
default       kubernetes        ClusterIP   10.96.0.1        <none>        443/TCP                                        14h       <none>
kube-system   kube-dns          ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP                                  14h       k8s-app=kube-dns
kube-system   tiller-deploy     ClusterIP   10.96.194.1      <none>        44134/TCP                                      12h       app=helm,name=tiller
onap-sdnc     dbhost            ClusterIP   None             <none>        3306/TCP                                       5h        app=sdnc-dbhost
onap-sdnc     dbhost-read       ClusterIP   10.109.106.173   <none>        3306/TCP                                       5h        app=sdnc-dbhost
onap-sdnc     nfs-provisioner   ClusterIP   10.101.77.92     <none>        2049/TCP,20048/TCP,111/TCP,111/UDP             5h        app=nfs-provisioner
onap-sdnc     sdnc-dgbuilder    NodePort    10.111.87.223    <none>        3000:30203/TCP                                 5h        app=sdnc-dgbuilder
onap-sdnc     sdnc-portal       NodePort    10.102.21.163    <none>        8843:30201/TCP                                 5h        app=sdnc-portal
onap-sdnc     sdnctldb01        ClusterIP   None             <none>        3306/TCP                                       5h        app=sdnc-dbhost
onap-sdnc     sdnctldb02        ClusterIP   None             <none>        3306/TCP                                       5h        app=sdnc-dbhost
onap-sdnc     sdnhost           NodePort    10.108.100.101   <none>        8282:30202/TCP,8201:30208/TCP,8280:30246/TCP   5h        app=sdnc
onap-sdnc     sdnhost-cluster   ClusterIP   None             <none>        2550/TCP                                       5h        app=sdnc
Get more detail about a single pod by using "describe" with the resource name. The resource name is shown in the output of the "get all" command used above.
$ kubectl -n onap-sdnc describe po/sdnc-0
Get logs of containers inside each pod:
$ kubectl describe pod sdnc-0 -n onap-sdnc
$ kubectl logs sdnc-0 sdnc-readiness -n onap-sdnc
$ kubectl logs sdnc-0 sdnc-controller-container -n onap-sdnc
$ kubectl logs sdnc-0 filebeat-onap -n onap-sdnc

$ kubectl describe pod sdnc-dbhost-0 -n onap-sdnc
$ kubectl logs sdnc-dbhost-0 sdnc-db-container -n onap-sdnc
$ kubectl logs sdnc-dbhost-0 init-mysql -n onap-sdnc
$ kubectl logs sdnc-dbhost-0 clone-mysql -n onap-sdnc
$ kubectl logs sdnc-dbhost-0 xtrabackup -n onap-sdnc
$ kubectl get pv -n onap-sdnc
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM                               STORAGECLASS     REASON    AGE
pvc-75411a66-f640-11e7-9949-fa163ee2b421   1Gi        RWX            Delete           Bound      onap-sdnc/sdnc-data-sdnc-dbhost-0   onap-sdnc-data             23h
pvc-824cb3cc-f620-11e7-9949-fa163ee2b421   1Gi        RWX            Delete           Released   onap-sdnc/sdnc-data-sdnc-dbhost-0   onap-sdnc-data             1d
pvc-cb380eda-f640-11e7-9949-fa163ee2b421   1Gi        RWX            Delete           Bound      onap-sdnc/sdnc-data-sdnc-dbhost-1   onap-sdnc-data             23h

$ kubectl get pvc -n onap-sdnc
NAME                      STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS     AGE
sdnc-data-sdnc-dbhost-0   Bound     pvc-75411a66-f640-11e7-9949-fa163ee2b421   1Gi        RWX            onap-sdnc-data   23h
sdnc-data-sdnc-dbhost-1   Bound     pvc-cb380eda-f640-11e7-9949-fa163ee2b421   1Gi        RWX            onap-sdnc-data   23h
$ kubectl get serviceaccounts --all-namespaces
$ kubectl get clusterrolebinding --all-namespaces

$ kubectl get deployment --all-namespaces
NAMESPACE     NAME            DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-system   kube-dns        1         1         1            1           1d
kube-system   tiller-deploy   1         1         1            1           1d