Installation

Execute the following steps on the master node.

1) Create a self-signed certificate

ubuntu@k8s-s1-master:~$ mkdir certs
ubuntu@k8s-s1-master:~$ cd certs/
ubuntu@k8s-s1-master:~/certs$ openssl genrsa -des3 -passout pass:x -out dashboard.pass.key 2048
Generating RSA private key, 2048 bit long modulus
......+++
........................+++
e is 65537 (0x10001)
ubuntu@k8s-s1-master:~/certs$ ll
total 12
drwxrwxr-x 2 ubuntu ubuntu 4096 Feb  2 15:51 ./
drwxr-xr-x 8 ubuntu ubuntu 4096 Feb  2 15:48 ../
-rw-rw-r-- 1 ubuntu ubuntu 1751 Feb  2 15:51 dashboard.pass.key
ubuntu@k8s-s1-master:~/certs$ openssl rsa -passin pass:x -in dashboard.pass.key -out dashboard.key
writing RSA key
ubuntu@k8s-s1-master:~/certs$
ubuntu@k8s-s1-master:~/certs$
ubuntu@k8s-s1-master:~/certs$ ll
total 16
drwxrwxr-x 2 ubuntu ubuntu 4096 Feb  2 15:51 ./
drwxr-xr-x 8 ubuntu ubuntu 4096 Feb  2 15:48 ../
-rw-rw-r-- 1 ubuntu ubuntu 1679 Feb  2 15:51 dashboard.key
-rw-rw-r-- 1 ubuntu ubuntu 1751 Feb  2 15:51 dashboard.pass.key
ubuntu@k8s-s1-master:~/certs$ rm dashboard.pass.key
ubuntu@k8s-s1-master:~/certs$ openssl req -new -key dashboard.key -out dashboard.csr
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:CA
State or Province Name (full name) [Some-State]:ONTARIO
Locality Name (eg, city) []:OTTAWA
Organization Name (eg, company) [Internet Widgits Pty Ltd]:AMDOCS
Organizational Unit Name (eg, section) []:R&D
Common Name (e.g. server FQDN or YOUR name) []:REZA
Email Address []:myname@amdocs.com

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:
ubuntu@k8s-s1-master:~/certs$ ll
total 16
drwxrwxr-x 2 ubuntu ubuntu 4096 Feb  2 15:53 ./
drwxr-xr-x 8 ubuntu ubuntu 4096 Feb  2 15:48 ../
-rw-rw-r-- 1 ubuntu ubuntu 1037 Feb  2 15:53 dashboard.csr
-rw-rw-r-- 1 ubuntu ubuntu 1679 Feb  2 15:51 dashboard.key
ubuntu@k8s-s1-master:~/certs$ openssl x509 -req -sha256 -days 365 -in dashboard.csr -signkey dashboard.key -out dashboard.crt
Signature ok
subject=/C=CA/ST=ONTARIO/L=OTTAWA/O=AMDOCS/OU=R&D/CN=REZA/emailAddress=myname@amdocs.com
Getting Private key
ubuntu@k8s-s1-master:~/certs$
ubuntu@k8s-s1-master:~/certs$ ll
total 20
drwxrwxr-x 2 ubuntu ubuntu 4096 Feb  2 15:53 ./
drwxr-xr-x 8 ubuntu ubuntu 4096 Feb  2 15:48 ../
-rw-rw-r-- 1 ubuntu ubuntu 1273 Feb  2 15:53 dashboard.crt
-rw-rw-r-- 1 ubuntu ubuntu 1037 Feb  2 15:53 dashboard.csr
-rw-rw-r-- 1 ubuntu ubuntu 1679 Feb  2 15:51 dashboard.key
ubuntu@k8s-s1-master:~/certs$


ubuntu@k8s-s5-master:~/certs$ kubectl create secret generic kubernetes-dashboard-certs --from-file=$HOME/certs -n kube-system
secret "kubernetes-dashboard-certs" created
ubuntu@k8s-s5-master:~/certs$
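
To confirm that the secret was created with the expected contents before moving on (a quick check, not part of the original transcript), describe it; the Data section should include dashboard.crt and dashboard.key (the dashboard.csr left in the directory will be picked up as well, since the whole directory was supplied with --from-file):

~$ kubectl -n kube-system describe secret kubernetes-dashboard-certs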


2) Install the Kubernetes Dashboard service

ubuntu@k8s-s1-master:~$ kubectl  apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/alternative/kubernetes-dashboard.yaml
serviceaccount "kubernetes-dashboard" created
role "kubernetes-dashboard-minimal" created
rolebinding "kubernetes-dashboard-minimal" created
deployment "kubernetes-dashboard" created
service "kubernetes-dashboard" created
ubuntu@k8s-s1-master:~$
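
Before continuing, it is worth checking that the Dashboard pod has reached the Running state. A small sketch, assuming the upstream manifest labels the pod with k8s-app=kubernetes-dashboard (adjust the selector if your copy of the manifest differs):

~$ kubectl -n kube-system get pods -l k8s-app=kubernetes-dashboard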


3) Modify the Kubernetes Dashboard service

ubuntu@k8s-s5-master:~/certs$ kubectl -n kube-system get service kubernetes-dashboard
NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes-dashboard   ClusterIP   10.108.52.94    <none>        80/TCP    57s
ubuntu@k8s-s5-master:~/certs$ 

ubuntu@k8s-s1-master:~$ kubectl -n kube-system edit service kubernetes-dashboard
# Change the value of spec.type from "ClusterIP" to "NodePort", then save the file and exit (:wq)
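
If you prefer a non-interactive alternative to editing the service by hand (not part of the original steps, but it produces the same result), the same change can be made with kubectl patch:

~$ kubectl -n kube-system patch service kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'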


4) Check the port on which the Dashboard is exposed

ubuntu@k8s-s1-master:~$ kubectl -n kube-system get service kubernetes-dashboard
NAME                   TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes-dashboard   NodePort   10.108.52.94   <none>        80:30830/TCP   2h
ubuntu@k8s-s1-master:~$


# In this example, the exposed NodePort is 30830
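
The assigned NodePort can also be read directly with a jsonpath query instead of scanning the table output (a convenience, not part of the original steps):

~$ kubectl -n kube-system get service kubernetes-dashboard -o jsonpath='{.spec.ports[0].nodePort}'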

Web-based Interface

5) Navigate to the UI via a browser

Use the master node IP address and the exposed port: http://<master-node-ip-address>:<exposed-port>
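
If the page does not load, a quick reachability check from the master node can help rule out network issues. The sketch below assumes the example port 30830 from step 4; substitute your own address and port:

~$ curl -s -o /dev/null -w '%{http_code}\n' http://<master-node-ip-address>:30830/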


6) Grant full admin privileges to the Dashboard Service Account

The browser does not ask for credentials to log in. The default user is "system:serviceaccount:kube-system:kubernetes-dashboard", which does not have access to the default namespace.

To fix this, create a new ClusterRoleBinding that grants the required privileges to the Dashboard Service Account.

Create a YAML file named dashboard-admin.yaml that defines the ClusterRoleBinding and deploy it, or apply the equivalent file directly from GitHub as shown below.
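
The exact contents of dashboard-admin.yaml are not included on this page. A minimal sketch is shown below, assuming the goal stated above: binding the cluster-admin ClusterRole to the kubernetes-dashboard service account in kube-system (the rolebindingdashboard.yaml applied below appears to serve the same purpose):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

If you create the file locally, deploy it with 'kubectl apply -f dashboard-admin.yaml'; otherwise apply the ready-made file from the URL below.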


~$ kubectl apply -f https://raw.githubusercontent.com/shdowofdeath/dashboard/master/rolebindingdashboard.yaml 
clusterrolebinding "kubernetes-dashboard" created
~$
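
To verify that the binding now exists (an optional check), list it by name:

~$ kubectl get clusterrolebinding kubernetes-dashboard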


7) Navigate to the UI via a browser

You can now access the Dashboard UI without being prompted for credentials.

Monitoring SDN-C Site Health

The Kubernetes Dashboard GUI can be used to monitor the health of the components of the SDN-C site by changing the Namespace selector to the namespace in which SDN-C was created.

In order to see the status of each pod in the site, select the 'Pods' pane (under the 'Workloads' heading). You can also use the following URL: http://server:31497/#!/pod?namespace=onap-sdnc.
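
The same information is available from the command line if the GUI is unreachable; a small sketch, assuming SDN-C was deployed into the onap-sdnc namespace as in the URL above:

~$ kubectl -n onap-sdnc get pods -o wide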

When a pod fails, the GUI will show the failure, for example during an ODL outage or a database outage.

Selecting the 'Overview' pane provides a broader, less detailed view of failures.

If an application inside a pod, such as ODL, dies or is stopped, the pod itself will be recreated, so the outage will appear the same as a pod outage.

The operator of the site can use this information to help determine when a manual failover to the remote site is required. Normally, a failover is desired when a component has lost redundancy, such as when only one database or only one ODL instance remains available. The operator should first determine whether the site that experienced the outage(s) is the 'active' site by running the '/dockerdata-nfs/cluster/script/sdnc.cluster' script.
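
For reference, the script is invoked directly on the master node; the sketch below assumes it takes no arguments, and its output (which identifies the active site) is not reproduced here:

~$ /dockerdata-nfs/cluster/script/sdnc.cluster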
