All of the following commands are executed on the federation master node.

Table of Contents

  • 5.1 Modify kube config of federation cluster
  • 5.2 Merge kube config of sites into kube config of federation
  • 5.3 Change current context
  • 5.4 Deploy federation control plane
  • 5.5 Verification list

5.1 Modify kube config of federation cluster

Follow the below steps on the federation master node to change the default kubernetes config values and make them unique.

By default, we have the below kube config after k8s cluster installation:

Code Block
ubuntu@kubefed-1:~$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://10.147.113.131:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
ubuntu@kubefed-1:~$


As we intend to use the federation cluster as the "host cluster", we added "host" to the cluster, user and context names in the existing kube config so as to make these names unique.

The config view on the federation master node after making the changes will look something like below:

Code Block
ubuntu@kubefed-1:~$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://10.147.113.131:6443
  name: kubernetes-host
contexts:
- context:
    cluster: kubernetes-host
    user: kubernetes-adminhost
  name: kubernetes-admin-host
current-context: kubernetes-admin-host
kind: Config
preferences: {}
users:
- name: kubernetes-adminhost
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
ubuntu@kubefed-1:~$
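One way to apply these renames is sketched below, assuming the default names shown in the first config view (adjust if yours differ). "kubectl config rename-context" handles the context; the cluster and user entries are renamed here with sed:

Code Block
# Rename the cluster entry and the context's cluster reference
sed -i 's/name: kubernetes$/name: kubernetes-host/' $HOME/.kube/config
sed -i 's/cluster: kubernetes$/cluster: kubernetes-host/' $HOME/.kube/config
# Rename the user entry and the context's user reference
sed -i 's/kubernetes-admin$/kubernetes-adminhost/' $HOME/.kube/config
# Rename the context itself (this also updates current-context)
kubectl config rename-context kubernetes-admin@kubernetes kubernetes-admin-host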


5.2 Merge kube config of sites into kube config of federation

Manual Merge

Edit ~/.kube/config on the federation master node to incorporate each underlying kubernetes site's configuration. From the ~/.kube/config file on each site's master node, extract the blocks of data for cluster, context and user, and insert them into the ~/.kube/config file of the federation master node. Make sure you follow yaml file rules when putting the same set of data together. In other words, make sure the blocks of data for cluster, context and user for site-1, site-2, ... and the federation site are merged into a single file, under clusters, contexts and users respectively.
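Alternatively, kubectl can do the merge itself once all names are unique (section 5.1 applied on each site as well). A minimal sketch, assuming the site configs have been copied to the federation master node as $HOME/site1-config and $HOME/site2-config (illustrative paths):

Code Block
# Merge all configs; --flatten inlines the certificate data
export KUBECONFIG=$HOME/.kube/config:$HOME/site1-config:$HOME/site2-config
kubectl config view --flatten > /tmp/merged-config
mv /tmp/merged-config $HOME/.kube/config
unset KUBECONFIG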

...

Code Block
ubuntu@kubefed-1:~$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://10.147.113.131:6443
  name: kubernetes-host
- cluster:
    certificate-authority-data: REDACTED
    server: https://10.147.112.140:6443
  name: kubernetes-s1
- cluster:
    certificate-authority-data: REDACTED
    server: https://10.147.113.15:6443
  name: kubernetes-s2
contexts:
- context:
    cluster: kubernetes-host
    user: kubernetes-adminhost
  name: kubernetes-admin-host
- context:
    cluster: kubernetes-s1
    user: kubernetes-admins1
  name: kubernetes-admin-s1
- context:
    cluster: kubernetes-s2
    user: kubernetes-admins2
  name: kubernetes-admin-s2
current-context: kubernetes-admin-host
kind: Config
preferences: {}
users:
- name: kubernetes-adminhost
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
- name: kubernetes-admins1
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
- name: kubernetes-admins2
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
ubuntu@kubefed-1:~$


5.3 Change current context

We picked the federation cluster as the federation host cluster, so we need to execute "kubectl config use-context <contexts.context.name of host cluster>" on the federation master node to change the current-context to that of the federation host cluster.

...

Code Block
ubuntu@kubefed-1:~# kubectl config use-context kubernetes-admin-host
Switched to context "kubernetes-admin-host".
ubuntu@kubefed-1:~#

# Verify 
ubuntu@kubefed-1:~# kubectl config current-context
kubernetes-admin-host
ubuntu@kubefed-1:~#


5.4 Deploy federation control plane

The command "kubefed init" is used to set up the federation control plane and must be supplied with several parameters. See the syntax below:

kubefed init <federation-name> \
--host-cluster-context=<your-host-cluster-context> \
--dns-provider="coredns" \
--dns-zone-name="example.com." \
--dns-provider-config="path to your coredns-provider.conf" \
--api-server-service-type="NodePort" \
--api-server-advertise-address="<federation cluster node ip address>"

where:

  • <federation-name> is the name of your federation (e.g. "enterprise").
  • --host-cluster-context is the kubeconfig context name of the host cluster.
  • --dns-provider selects the DNS provider (here "coredns").
  • --dns-zone-name is the DNS zone managed by the federation.
  • --dns-provider-config is the path to the CoreDNS provider configuration file.
  • --api-server-service-type exposes the federation API server as a NodePort service.
  • --api-server-advertise-address is the node IP address on which the federation API server is advertised.

The Federation Control Plane stores its state in etcd, which is backed by a persistent volume.

If you followed the earlier steps and created the PV, then the following example should work for you. Make sure to set the api-server-advertise-address to the right value.

Code Block
ubuntu@kubefed-1:~# kubefed init enterprise \
    --host-cluster-context=kubernetes-admin-host \
    --dns-provider="coredns" \
    --dns-zone-name="example.com." \
    --dns-provider-config="$HOME/coredns-provider.conf" \
    --api-server-service-type="NodePort" \
    --api-server-advertise-address="10.147.113.19"


If you skipped PV creation, set the "etcd-persistent-storage" parameter to false. Then the following example should work for you. Make sure to set the api-server-advertise-address to the right value.
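A sketch of the command in this case (same values as the example above, with persistence disabled via the --etcd-persistent-storage flag of kubefed init):

Code Block
ubuntu@kubefed-1:~# kubefed init enterprise \
    --host-cluster-context=kubernetes-admin-host \
    --dns-provider="coredns" \
    --dns-zone-name="example.com." \
    --dns-provider-config="$HOME/coredns-provider.conf" \
    --api-server-service-type="NodePort" \
    --api-server-advertise-address="10.147.113.19" \
    --etcd-persistent-storage=false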

...

Code Block
Creating a namespace federation-system for federation system components... done
Creating federation control plane service... done
Creating federation control plane objects (credentials, persistent volume claim)... done
Creating federation component deployments... done
Updating kubeconfig... done
Waiting for federation control plane to come up.................. done
Federation API server is running at: 10.147.113.19:31417
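To confirm the advertised endpoint, you can list the NodePort service kubefed created on the host cluster; its port should match the one reported above (sketch, output omitted):

Code Block
ubuntu@kubefed-1:~$ kubectl get svc --namespace=federation-system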
5.5 Verification list

  • PV and PVC status on host cluster

Note

If persistent storage is disabled using the parameter "etcd-persistent-storage=false", we do not need to verify PV and PVC status.
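In that case you can optionally confirm that no etcd claim was created (a quick sketch):

Code Block
# With etcd persistence disabled, no enterprise claim should be listed
ubuntu@kubefed-1:~$ kubectl get pvc --namespace=federation-system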

...

Code Block
ubuntu@kubefed-1:~# kubectl get pv --all-namespaces | grep enterprise
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                                               STORAGECLASS     REASON    AGE
enterprise-apiserver-etcd-volume1   11Gi       RWX            Retain           Available                                                                                3h
enterprise-apiserver-etcd-volume2   11Gi       RWO            Retain           Bound       federation-system/enterprise-apiserver-etcd-claim                            3m
ubuntu@kubefed-1:~#

#A new PVC is created (per pv-volume1.yaml or pv-volume2.yaml) and its status becomes "Bound"
ubuntu@kubefed-1:~# kubectl get pvc --all-namespaces | grep enterprise
NAMESPACE           NAME                              STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS     AGE
federation-system   enterprise-apiserver-etcd-claim   Bound     enterprise-apiserver-etcd-volume2          11Gi       RWO                             1d
ubuntu@kubefed-1:~#
 

ubuntu@kubefed-1:~# kubectl describe pv enterprise-apiserver-etcd-volume2
Name:            enterprise-apiserver-etcd-volume2
Labels:          app=federated-cluster
                 type=local
Annotations:     pv.kubernetes.io/bound-by-controller=yes
                 volume.alpha.kubernetes.io/storage-class=yes
StorageClass:
Status:          Bound
Claim:           federation-system/enterprise-apiserver-etcd-claim
Reclaim Policy:  Retain
Access Modes:    RWO
Capacity:        11Gi
Message:
Source:
    Type:          HostPath (bare host directory volume)
    Path:          /mnt/data
    HostPathType:
Events:            <none>
ubuntu@kubefed-1:~#
 

ubuntu@kubefed-4:~$ kubectl describe pvc enterprise-apiserver-etcd-claim -n federation-system
Name:          enterprise-apiserver-etcd-claim
Namespace:     federation-system
StorageClass:
Status:        Bound
Volume:        enterprise-apiserver-etcd-volume2
Labels:        app=federated-cluster
Annotations:   federation.alpha.kubernetes.io/federation-name=enterprise
               pv.kubernetes.io/bind-completed=yes
               pv.kubernetes.io/bound-by-controller=yes
Finalizers:    []
Capacity:      11Gi
Access Modes:  RWO
Events:        <none>
ubuntu@kubefed-4:~$


  • New federation pods on host cluster

Two new pods are created under the "federation-system" namespace.

Code Block
#check newly created pods
root@kubefed-1:~# kubectl get pods --all-namespaces  -o wide
NAMESPACE           NAME                                                              READY     STATUS    RESTARTS   AGE       IP              NODE
federation-system   enterprise-apiserver-759656c886-t84hn                             2/2       Running   0          15h       10.44.0.6       kubefed-1w
federation-system   enterprise-controller-manager-59d95bcf6c-9nx9p                    1/1       Running   1          15h       10.44.0.7       kubefed-1w
 
#check newly created service accounts
ubuntu@kubefed-1:~$ kubectl get serviceaccounts --all-namespaces | grep federation-system
NAMESPACE           NAME                                                SECRETS   AGE
federation-system   default                                             1         1h
federation-system   federation-controller-manager                       1         1h
ubuntu@kubefed-1:~$


#check deployments
root@kubefed-1:~#  kubectl get deployments --namespace=federation-system
NAME                            DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
enterprise-apiserver            1         1         1            1           1d
enterprise-controller-manager   1         1         1            1           1d
root@kubefed-1:~# 


  • Kube config update on federation master node

The ~/.kube/config file on the federation master node is updated and a new cluster (here: enterprise) is added. Check below:

Code Block
root@kubefed-1:~# kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://10.147.113.19:31417
  name: enterprise
- cluster:
    certificate-authority-data: REDACTED
    server: https://10.147.113.35:6443
  name: kubernetes-host
- cluster:
    certificate-authority-data: REDACTED
    server: https://10.147.113.132:6443
  name: kubernetes-s3
- cluster:
    certificate-authority-data: REDACTED
    server: https://10.147.112.160:6443
  name: kubernetes-s5
contexts:
- context:
    cluster: enterprise
    user: enterprise
  name: enterprise
- context:
    cluster: kubernetes-host
    user: kubernetes-adminhost
  name: kubernetes-admin-host
- context:
    cluster: kubernetes-s3
    user: kubernetes-admins3
  name: kubernetes-admin-s3
- context:
    cluster: kubernetes-s5
    user: kubernetes-admins5
  name: kubernetes-admin-s5
current-context: enterprise
kind: Config
preferences: {}
users:
- name: enterprise
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
- name: kubernetes-adminhost
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
- name: kubernetes-admins3
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
- name: kubernetes-admins5
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
root@kubefed-1:~#
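With the new entries in place, you can switch to the federation context and talk to the federation API server directly. A short sketch, assuming the federation is named enterprise as above (the list stays empty until sites are joined with "kubefed join"):

Code Block
ubuntu@kubefed-1:~$ kubectl config use-context enterprise
ubuntu@kubefed-1:~$ kubectl get clusters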


Facing errors? Need to redeploy federation?

Undeploy the federation control plane with the below steps:
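In essence the cleanup looks like the sketch below (assuming the federation is named enterprise, as above); the detailed steps follow:

Code Block
# Delete the federation control plane from the host cluster
kubectl delete namespace federation-system

# Remove the entries kubefed added to the kubeconfig
kubectl config delete-context enterprise
kubectl config delete-cluster enterprise
kubectl config unset users.enterprise

# If you created a PV for etcd, clean up the PV as well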

...