The aim of this step is to give the cluster, context and user a unique name on each site (aka Kubernetes cluster). This makes the sites distinguishable when they are joined to a federation.

This step needs to be done on the master node of every Kubernetes cluster (site). Here we use s1 and s2 to denote the first and second site.

The following is an example for site 2 (k8s-s2-master).

# Change contexts.context.user to a unique user name for this site
ubuntu@k8s-s2-master:~# kubectl config set-context kubernetes-admin@kubernetes --user='kubernetes-admins2'
Context "kubernetes-admin@kubernetes" modified.
ubuntu@k8s-s2-master:~# kubectl config get-contexts
CURRENT   NAME                          CLUSTER      AUTHINFO             NAMESPACE
*         kubernetes-admin@kubernetes   kubernetes   kubernetes-admins2
ubuntu@k8s-s2-master:~#
 
# Change contexts.context.cluster to a unique cluster name for this site
ubuntu@k8s-s2-master:~# kubectl config set-context kubernetes-admin@kubernetes --cluster='kubernetes-s2'
Context "kubernetes-admin@kubernetes" modified.
ubuntu@k8s-s2-master:~# kubectl config get-contexts
CURRENT   NAME                          CLUSTER         AUTHINFO             NAMESPACE
*         kubernetes-admin@kubernetes   kubernetes-s2   kubernetes-admins2
ubuntu@k8s-s2-master:~#
 
# Rename the context (contexts.context.name) to uniquely identify this site
ubuntu@k8s-s2-master:~# kubectl config rename-context kubernetes-admin@kubernetes kubernetes-admin-s2
Context "kubernetes-admin@kubernetes" renamed to "kubernetes-admin-s2".
ubuntu@k8s-s2-master:~# kubectl config get-contexts
CURRENT   NAME                          CLUSTER         AUTHINFO             NAMESPACE
*         kubernetes-admin-s2           kubernetes-s2   kubernetes-admins2
ubuntu@k8s-s2-master:~#


# Verify the changes
ubuntu@k8s-s2-master:~$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://10.147.112.140:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes-s2
    user: kubernetes-admins2
  name: kubernetes-admin-s2
current-context: kubernetes-admin-s2
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
ubuntu@k8s-s2-master:~$

For site 1, repeat the above commands with "s2" replaced by "s1".

Furthermore, make the following changes manually in ~/.kube/config on the master node of both sites:

  • Change clusters.name to the cluster name (contexts.context.cluster) chosen in the commands above.
  • Change users.name to the user name (contexts.context.user) chosen in the commands above.
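kubectl offers no subcommand for renaming an existing cluster or user entry, which is why this edit is manual. It can also be scripted with sed, sketched below on a sample file with the site-2 names; to apply it for real, point CFG at ~/.kube/config (after taking a backup).

```shell
# Sketch: the manual edit done with sed, shown on a sample file.
# Set CFG to ~/.kube/config (after a backup!) to apply it for real.
CFG=./sample-kubeconfig
cat > "$CFG" <<'EOF'
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: https://10.147.114.14:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes-s2
    user: kubernetes-admins2
  name: kubernetes-admin-s2
current-context: kubernetes-admin-s2
users:
- name: kubernetes-admin
  user: {}
EOF
cp "$CFG" "${CFG}.bak"   # keep a backup
# clusters[].name: kubernetes -> kubernetes-s2 (two-space indent in the file)
sed -i 's/^  name: kubernetes$/  name: kubernetes-s2/' "$CFG"
# users[].name: kubernetes-admin -> kubernetes-admins2
sed -i 's/^- name: kubernetes-admin$/- name: kubernetes-admins2/' "$CFG"
grep -n 'name:' "$CFG"
```

The anchored patterns (leading `^`, trailing `$`) keep the substitutions from touching the already-renamed context entry.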


Verify the changes on both sites:

ubuntu@k8s-s2-master:~# kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://10.147.114.14:6443
  name: kubernetes-s2
contexts:
- context:
    cluster: kubernetes-s2
    user: kubernetes-admins2
  name: kubernetes-admin-s2
current-context: kubernetes-admin-s2
kind: Config
preferences: {}
users:
- name: kubernetes-admins2
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
ubuntu@k8s-s2-master:~#
 
# Switch current-context to the new context name chosen above
ubuntu@k8s-s2-master:~# kubectl config use-context kubernetes-admin-s2
Switched to context "kubernetes-admin-s2".
ubuntu@k8s-s2-master:~#
 
ubuntu@k8s-s2-master:~# kubectl config current-context
kubernetes-admin-s2
