
All of the following commands are executed on the federation master node.


  • 6.1 Modify kube config of federation cluster

Follow the steps at 5. Deploying Federation Control Plane for the federation master node, and change the default Kubernetes config values to make them unique.

As we intend to use the federation cluster as the "host cluster", we added "host" to the names in its kube config.
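
kubectl can rename the context directly; the cluster and user names have no dedicated rename subcommand, so they must be edited in ~/.kube/config by hand or with a script. Below is a minimal sketch, assuming the kubeadm defaults (cluster kubernetes, user kubernetes-admin, context kubernetes-admin@kubernetes):

#rename the context (kubectl supports this directly; current-context is updated too)
ubuntu@kubefed-1:~$ kubectl config rename-context kubernetes-admin@kubernetes kubernetes-admin-host

#rewrite the cluster and user names, including every reference to them
ubuntu@kubefed-1:~$ sed -i \
    -e 's/name: kubernetes$/name: kubernetes-host/' \
    -e 's/cluster: kubernetes$/cluster: kubernetes-host/' \
    -e 's/user: kubernetes-admin$/user: kubernetes-adminhost/' \
    -e 's/name: kubernetes-admin$/name: kubernetes-adminhost/' \
    ~/.kube/config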

The final result should look something like the output below:

ubuntu@kubefed-1:~$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://10.147.113.131:6443
  name: kubernetes-host
contexts:
- context:
    cluster: kubernetes-host
    user: kubernetes-adminhost
  name: kubernetes-admin-host
current-context: kubernetes-admin-host
kind: Config
preferences: {}
users:
- name: kubernetes-adminhost
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
ubuntu@kubefed-1:~$


  • 6.2 Merge kube config of sites into kube config of federation

Manual Merge

Edit ~/.kube/config on the federation master node to incorporate the underlying Kubernetes sites' configuration. From the ~/.kube/config file on each site's master node, extract the blocks of data for cluster, context and user, and insert them into the ~/.kube/config file of the federation master node. Make sure you follow YAML file rules when putting the same kind of data together. In other words, the cluster, context and user blocks for site-1, site-2, .. and the federation site must be merged into a single file, under the clusters, contexts and users sections respectively (the merged file is shown under "Verify config after merging" below).

This way, the ~/.kube/config file on the federation master node will carry all the information it needs to communicate with the underlying sites (their master nodes).


Merge using commands

Copy the kube config files to be merged (from the site-1 and site-2 master nodes) to some path on the federation master node, and copy the federation master node's own config file to the same path. The kube config files can then be merged using the command below:

Syntax: 

KUBECONFIG=<path-to-first-site.kubeconfig>:<path-to-second-site.kubeconfig>:<path-to-federation-cluster.kubeconfig> \
kubectl config view --raw --flatten > federation.kubeconfig

As a result, the federation.kubeconfig file will contain the definitions of the local federation cluster, site-1 cluster, and site-2 cluster. This file can then be moved to the kube config path as "config".

Example below:

#copy config files to be merged at some path
ubuntu@kubefed-1:~$ ls -lrt | grep config
-rw-r--r-- 1 root root  5448 Feb  1 14:51 first.config
-rw-r--r-- 1 root root  5453 Feb  1 14:53 second.config
-rw-r--r-- 1 root root  5453 Feb  1 14:53 fed.config

#below example command merges the two sites' config files with the fed config file
ubuntu@kubefed-1:~$ KUBECONFIG=first.config:second.config:fed.config \
> kubectl config view --raw --flatten > federation.kubeconfig
 
#notice the new config file federation.kubeconfig
ubuntu@kubefed-1:~$ ls -lrt | grep config
-rw-r--r-- 1 root root  5448 Feb  1 14:51 first.config
-rw-r--r-- 1 root root  5453 Feb  1 14:53 second.config
-rw-r--r-- 1 root root  5453 Feb  1 14:53 fed.config
-rw-r--r-- 1 root root 10785 Feb  1 14:55 federation.kubeconfig
 
#move the file to the kube config path and give it appropriate permissions
ubuntu@kubefed-1:~$ mv federation.kubeconfig $HOME/.kube/config
ubuntu@kubefed-1:~$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
ubuntu@kubefed-1:~$


Verify config after merging:

ubuntu@kubefed-1:~$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://10.147.113.131:6443
  name: kubernetes-host
- cluster:
    certificate-authority-data: REDACTED
    server: https://10.147.112.140:6443
  name: kubernetes-s1
- cluster:
    certificate-authority-data: REDACTED
    server: https://10.147.113.15:6443
  name: kubernetes-s2
contexts:
- context:
    cluster: kubernetes-host
    user: kubernetes-adminhost
  name: kubernetes-admin-host
- context:
    cluster: kubernetes-s1
    user: kubernetes-admins1
  name: kubernetes-admin-s1
- context:
    cluster: kubernetes-s2
    user: kubernetes-admins2
  name: kubernetes-admin-s2
current-context: kubernetes-admin-host
kind: Config
preferences: {}
users:
- name: kubernetes-adminhost
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
- name: kubernetes-admins1
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
- name: kubernetes-admins2
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
ubuntu@kubefed-1:~$


  • 6.3 Change current context  

    We picked the federation cluster as the federation host cluster, so we need to execute "kubectl config use-context <contexts.context.name of host cluster>" on the federation master node to change the current-context to that of the federation host cluster.


ubuntu@kubefed-1:~# kubectl config use-context kubernetes-admin-host
Switched to context "kubernetes-admin-host".
ubuntu@kubefed-1:~#

# Verify 
ubuntu@kubefed-1:~# kubectl config current-context
kubernetes-admin-host
ubuntu@kubefed-1:~#


  • 6.4 Deploy federation control plane

The command "kubefed init" is used to setup federation control plane and it should be fed with several parameters. See below syntax:

kubefed init <federation-name> \
--host-cluster-context=<your-host-cluster-context> \
--dns-provider="coredns" \
--dns-zone-name="example.com." \
--dns-provider-config="path to your coredns-provider.conf" \
--api-server-service-type="NodePort" \
--api-server-advertise-address="<federation worker node ip address>"

where:

  • <federation-name>: the name of the new federation (here: enterprise); it also becomes the name of the new cluster entry added to kube config.
  • host-cluster-context: the kube config context of the cluster that hosts the federation control plane.
  • dns-provider: the DNS provider used for cross-cluster service discovery (here: coredns).
  • dns-zone-name: the DNS zone (domain suffix) used for federated service records.
  • dns-provider-config: the path to the DNS provider's configuration file.
  • api-server-service-type: the Kubernetes service type used to expose the federation API server (here: NodePort).
  • api-server-advertise-address: the address advertised for the federation API server; with NodePort, the IP address of a federation worker node.
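
For reference, coredns-provider.conf is an INI-style file that tells kubefed how to reach the etcd instance backing CoreDNS. A minimal sketch, with assumed endpoint addresses that must be adjusted to your CoreDNS deployment:

[Global]
etcd-endpoints = http://10.147.113.19:32379
zones = example.com.
coredns-endpoints = 10.147.113.19:30053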

The Federation Control Plane stores its state in etcd, which is backed by a persistent volume.
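
For reference, a sketch of what pv-volume2.yaml from step 5 might look like, reconstructed from the "kubectl describe pv" output shown later in this section (the actual file may differ):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: enterprise-apiserver-etcd-volume2
  labels:
    app: federated-cluster
    type: local
  annotations:
    volume.alpha.kubernetes.io/storage-class: "yes"
spec:
  capacity:
    storage: 11Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /mnt/data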

If you followed 5. Deploying Federation Control Plane and created the PV, then the following example should work for you. Make sure to set api-server-advertise-address to the right value.

ubuntu@kubefed-1:~# kubefed init enterprise \
    --host-cluster-context=kubernetes-admin-host \
    --dns-provider="coredns" \
    --dns-zone-name="example.com." \
    --dns-provider-config="$HOME/coredns-provider.conf" \
    --api-server-service-type="NodePort" \
    --api-server-advertise-address="10.147.113.19"


If you skipped 5. Deploying Federation Control Plane and have not created the PV, set the "etcd-persistent-storage" parameter to false. The following example should then work for you. Make sure to set api-server-advertise-address to the right value.

ubuntu@kubefed-1:~# kubefed init enterprise \
    --host-cluster-context=kubernetes-admin-host \
    --dns-provider="coredns" \
    --dns-zone-name="example.com." \
    --dns-provider-config="$HOME/coredns-provider.conf" \
    --etcd-persistent-storage=false \
    --api-server-service-type="NodePort" \
    --api-server-advertise-address="10.147.113.19"

Example output of a successful execution:

Creating a namespace federation-system for federation system components... done
Creating federation control plane service... done
Creating federation control plane objects (credentials, persistent volume claim)... done
Creating federation component deployments... done
Updating kubeconfig... done
Waiting for federation control plane to come up.................. done
Federation API server is running at: 10.147.113.19:31417
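
The advertised address and port map to the NodePort service that kubefed created in the federation-system namespace; it can be inspected as below:

#check the federation API server service and its NodePort
ubuntu@kubefed-1:~$ kubectl get svc --namespace=federation-system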

Verification list 

  • PV and PVC status on host cluster

If persistent storage is disabled using the parameter "etcd-persistent-storage=false", there is no need to verify the PV and PVC status.


Upon successful execution of the "kubefed init" command:

  • A new PVC (PV claim) is created.
  • The status of the existing PV and the new PVC changes from "Available" to "Bound".
  • 2 new deployments, for the apiserver and the controller-manager, are created.
  • 2 new federation pods are created under the "federation-system" namespace.
  • 2 new service accounts are created under the "federation-system" namespace.
  • A new cluster is created. Its name is the "federation name" we provided as the first parameter to "kubefed init" (here: enterprise). The ~/.kube/config is also updated with the information of the new cluster.

ubuntu@kubefed-1:~# kubectl get pv --all-namespaces | grep enterprise
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                                               STORAGECLASS     REASON    AGE
enterprise-apiserver-etcd-volume1   11Gi       RWX            Retain           Available                                                                                3h
enterprise-apiserver-etcd-volume2   11Gi       RWO            Retain           Bound       federation-system/enterprise-apiserver-etcd-claim                            3m
ubuntu@kubefed-1:~#

#A new PVC is created and bound to one of the PVs created earlier (per pv-volume1.yaml or pv-volume2.yaml); its status becomes "Bound"
ubuntu@kubefed-1:~# kubectl get pvc --all-namespaces | grep enterprise
NAMESPACE           NAME                              STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS     AGE
federation-system   enterprise-apiserver-etcd-claim   Bound     enterprise-apiserver-etcd-volume2          11Gi       RWO                             1d
ubuntu@kubefed-1:~#
 

ubuntu@kubefed-1:~# kubectl describe pv enterprise-apiserver-etcd-volume2
Name:            enterprise-apiserver-etcd-volume2
Labels:          app=federated-cluster
                 type=local
Annotations:     pv.kubernetes.io/bound-by-controller=yes
                 volume.alpha.kubernetes.io/storage-class=yes
StorageClass:
Status:          Bound
Claim:           federation-system/enterprise-apiserver-etcd-claim
Reclaim Policy:  Retain
Access Modes:    RWO
Capacity:        11Gi
Message:
Source:
    Type:          HostPath (bare host directory volume)
    Path:          /mnt/data
    HostPathType:
Events:            <none>
ubuntu@kubefed-1:~#
 

ubuntu@kubefed-4:~$ kubectl describe pvc enterprise-apiserver-etcd-claim -n federation-system
Name:          enterprise-apiserver-etcd-claim
Namespace:     federation-system
StorageClass:
Status:        Bound
Volume:        enterprise-apiserver-etcd-volume2
Labels:        app=federated-cluster
Annotations:   federation.alpha.kubernetes.io/federation-name=enterprise
               pv.kubernetes.io/bind-completed=yes
               pv.kubernetes.io/bound-by-controller=yes
Finalizers:    []
Capacity:      11Gi
Access Modes:  RWO
Events:        <none>
ubuntu@kubefed-4:~$


  • New federation pods on host cluster

Two new pods are created under the "federation-system" namespace.

#check newly created pods
root@kubefed-1:~# kubectl get pods --all-namespaces  -o wide
NAMESPACE           NAME                                                              READY     STATUS    RESTARTS   AGE       IP              NODE
federation-system   enterprise-apiserver-759656c886-t84hn                             2/2       Running   0          15h       10.44.0.6       kubefed-1w
federation-system   enterprise-controller-manager-59d95bcf6c-9nx9p                    1/1       Running   1          15h       10.44.0.7       kubefed-1w
 
#check newly created service accounts
ubuntu@kubefed-1:~$ kubectl get serviceaccounts --all-namespaces | grep federation-system
NAMESPACE           NAME                                                SECRETS   AGE
federation-system   default                                             1         1h
federation-system   federation-controller-manager                       1         1h
ubuntu@kubefed-1:~$


#check deployments
root@kubefed-1:~#  kubectl get deployments --namespace=federation-system
NAME                            DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
enterprise-apiserver            1         1         1            1           1d
enterprise-controller-manager   1         1         1            1           1d
root@kubefed-1:~# 


  • Kube config update on federation master node

The ~/.kube/config file on the federation master node is updated and a new cluster (here: enterprise) is added. Check below:

root@kubefed-1:~# kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://10.147.113.19:31417
  name: enterprise
- cluster:
    certificate-authority-data: REDACTED
    server: https://10.147.113.35:6443
  name: kubernetes-host
- cluster:
    certificate-authority-data: REDACTED
    server: https://10.147.113.132:6443
  name: kubernetes-s3
- cluster:
    certificate-authority-data: REDACTED
    server: https://10.147.112.160:6443
  name: kubernetes-s5
contexts:
- context:
    cluster: enterprise
    user: enterprise
  name: enterprise
- context:
    cluster: kubernetes-host
    user: kubernetes-adminhost
  name: kubernetes-admin-host
- context:
    cluster: kubernetes-s3
    user: kubernetes-admins3
  name: kubernetes-admin-s3
- context:
    cluster: kubernetes-s5
    user: kubernetes-admins5
  name: kubernetes-admin-s5
current-context: enterprise
kind: Config
preferences: {}
users:
- name: enterprise
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
- name: kubernetes-adminhost
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
- name: kubernetes-admins3
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
- name: kubernetes-admins5
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
root@kubefed-1:~#


Facing errors? Need to redeploy federation? 

Undeploy the federation control plane with the steps below:

  • Delete the federation-system namespace. This terminates the 2 pods that were created in the federation-system namespace, and the PV which was bound to the PVC becomes "Released".
  • Delete the PV which is in "Released" status.
  • Remove the new cluster "enterprise" from ~/.kube/config and make sure "kubectl config view" comes back clean (see the commands after the transcripts below).

ubuntu@kubefed-1:~$ kubectl delete ns federation-system
namespace "federation-system" deleted
 

ubuntu@kubefed-1:~$ kubectl get pv --all-namespaces | grep enterprise
enterprise-apiserver-etcd-volume1   11Gi       RWX            Retain           Available                                                                                3h
enterprise-apiserver-etcd-volume2   11Gi       RWO            Retain           Released    federation-system/enterprise-apiserver-etcd-claim                            3h
ubuntu@kubefed-1:~$ 
 
ubuntu@kubefed-1:~$ kubectl delete pv enterprise-apiserver-etcd-volume2
persistentvolume "enterprise-apiserver-etcd-volume2" deleted
ubuntu@kubefed-1:~$
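
The "enterprise" entries can be removed from ~/.kube/config with kubectl itself, as sketched below:

#remove the federation context, cluster and user entries from kube config
ubuntu@kubefed-1:~$ kubectl config delete-context enterprise
ubuntu@kubefed-1:~$ kubectl config delete-cluster enterprise
ubuntu@kubefed-1:~$ kubectl config unset users.enterprise

#point current-context back at the host cluster
ubuntu@kubefed-1:~$ kubectl config use-context kubernetes-admin-host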

You can now try to redeploy the federation control plane using kubefed init.
