Kubernetes Federation supports the following DNS providers: Google Cloud DNS, AWS Route 53, and CoreDNS.
This page describes setting up the CoreDNS provider for Kubernetes Federation. The commands must be executed on the federation master node. The steps are taken from the following link:
https://kubernetes.io/docs/tasks/federation/set-up-coredns-provider-federation/
Installing the etcd-operator chart
ubuntu@k8s-kubefed:~# helm install --namespace kube-system --name etcd-operator stable/etcd-operator
# Sample output start
NAME: etcd-operator
LAST DEPLOYED: Wed Jan 17 10:15:14 2018
NAMESPACE: kube-system
STATUS: DEPLOYED
RESOURCES:
==> v1/ServiceAccount
NAME SECRETS AGE
etcd-operator-etcd-operator-etcd-backup-operator 1 <invalid>
etcd-operator-etcd-operator-etcd-operator 1 <invalid>
etcd-operator-etcd-operator-etcd-restore-operator 1 <invalid>
==> v1beta1/ClusterRole
NAME AGE
etcd-operator-etcd-operator-etcd-operator <invalid>
==> v1beta1/ClusterRoleBinding
NAME AGE
etcd-operator-etcd-operator-etcd-backup-operator <invalid>
etcd-operator-etcd-operator-etcd-operator <invalid>
etcd-operator-etcd-operator-etcd-restore-operator <invalid>
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
etcd-restore-operator ClusterIP 10.100.70.132 <none> 19999/TCP <invalid>
==> v1beta1/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
etcd-operator-etcd-operator-etcd-backup-operator 1 1 1 0 <invalid>
etcd-operator-etcd-operator-etcd-operator 1 1 1 0 <invalid>
etcd-operator-etcd-operator-etcd-restore-operator 1 1 1 0 <invalid>
NOTES:
1. etcd-operator deployed.
If you would like to deploy an etcd-cluster set cluster.enabled to true in values.yaml
Check the etcd-operator logs
export POD=$(kubectl get pods -l app=etcd-operator-etcd-operator-etcd-operator --namespace kube-system --output name)
kubectl logs $POD --namespace=kube-system
# Sample output end
# Upgrade etcd-operator and enable clustering.
ubuntu@k8s-kubefed:~# helm upgrade --namespace kube-system --set cluster.enabled=true etcd-operator stable/etcd-operator
# Sample output start
Release "etcd-operator" has been upgraded. Happy Helming!
LAST DEPLOYED: Wed Jan 17 10:16:33 2018
NAMESPACE: kube-system
STATUS: DEPLOYED
RESOURCES:
==> v1beta1/ClusterRole
NAME AGE
etcd-operator-etcd-operator-etcd-operator 24s
==> v1beta1/ClusterRoleBinding
NAME AGE
etcd-operator-etcd-operator-etcd-backup-operator 24s
etcd-operator-etcd-operator-etcd-operator 24s
etcd-operator-etcd-operator-etcd-restore-operator 24s
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
etcd-restore-operator ClusterIP 10.100.70.132 <none> 19999/TCP 24s
==> v1beta1/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
etcd-operator-etcd-operator-etcd-backup-operator 1 1 1 1 24s
etcd-operator-etcd-operator-etcd-operator 1 1 1 1 24s
etcd-operator-etcd-operator-etcd-restore-operator 1 1 1 1 24s
==> v1/ServiceAccount
NAME SECRETS AGE
etcd-operator-etcd-operator-etcd-backup-operator 1 24s
etcd-operator-etcd-operator-etcd-operator 1 24s
etcd-operator-etcd-operator-etcd-restore-operator 1 24s
NOTES:
1. etcd-operator deployed.
If you would like to deploy an etcd-cluster set cluster.enabled to true in values.yaml
Check the etcd-operator logs
export POD=$(kubectl get pods -l app=etcd-operator-etcd-operator-etcd-operator --namespace kube-system --output name)
kubectl logs $POD --namespace=kube-system
# Sample output end
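After clustering is enabled, the operator creates the etcd cluster pods asynchronously, so they may take a short while to appear. A small retry helper can be used to wait for them; this is a hedged sketch, and the helper name and the etcd-cluster service name are assumptions, not chart output:

```shell
# wait_for: poll a command until it succeeds or the retry budget runs out.
# Not part of the chart -- a generic helper sketch.
wait_for() {
  retries=$1; shift
  i=0
  while [ "$i" -lt "$retries" ]; do
    if "$@"; then
      return 0            # command succeeded
    fi
    i=$((i + 1))
    sleep 1               # wait a second between attempts
  done
  return 1                # gave up
}

# Example usage against the cluster (assumed service name created by the operator):
# wait_for 60 kubectl get svc etcd-cluster --namespace kube-system
```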
Verify objects created and running
# Now, verify the new pods
ubuntu@k8s-kubefed:~# kubectl get pods --all-namespaces | grep "etcd-operator"
# Sample output
kube-system etcd-operator-etcd-operator-etcd-backup-operator-5d59fdf4frbswb 1/1 Running 0 1m
kube-system etcd-operator-etcd-operator-etcd-operator-84f5ddddbf-vsjcj 1/1 Running 0 1m
kube-system etcd-operator-etcd-operator-etcd-restore-operator-d8b667cdb47dp 1/1 Running 0 1m
# Now, verify the new Service Accounts
ubuntu@k8s-kubefed:~# kubectl get sa --all-namespaces | grep "etcd-operator"
# Sample output
kube-system etcd-operator-etcd-operator-etcd-backup-operator 1 7m
kube-system etcd-operator-etcd-operator-etcd-operator 1 7m
kube-system etcd-operator-etcd-operator-etcd-restore-operator 1 7m
# Now, verify the new Cluster Role Bindings
ubuntu@k8s-kubefed:~# kubectl get clusterrolebinding --all-namespaces | grep "etcd-operator"
# Sample output
etcd-operator-etcd-operator-etcd-backup-operator 9m
etcd-operator-etcd-operator-etcd-operator 9m
etcd-operator-etcd-operator-etcd-restore-operator 9m
# Now, verify the new deployments
ubuntu@k8s-kubefed:~# kubectl get deployment --all-namespaces | grep "etcd-operator"
# Sample output
kube-system etcd-operator-etcd-operator-etcd-backup-operator 1 1 1 1 12m
kube-system etcd-operator-etcd-operator-etcd-operator 1 1 1 1 12m
kube-system etcd-operator-etcd-operator-etcd-restore-operator 1 1 1 1 12m
# Now, verify the new service
ubuntu@k8s-kubefed:~# kubectl get service --all-namespaces | grep "operator"
# Sample output
kube-system etcd-restore-operator ClusterIP 10.103.24.135 <none> 19999/TCP 19m
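The manual checks above can also be scripted. A hedged sketch that parses the deployment listing and fails if any etcd-operator deployment reports zero available replicas (AVAILABLE is field 6 of `kubectl get deployment --all-namespaces`; the helper name is ours):

```shell
# check_operators_available: read a deployment listing on stdin and exit
# non-zero if any etcd-operator deployment has 0 AVAILABLE replicas.
check_operators_available() {
  awk '$2 ~ /etcd-operator/ && $6 == 0 { bad = 1 } END { exit bad }'
}

# Usage against the cluster:
# kubectl get deployment --all-namespaces | check_operators_available && echo "all available"
```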
Deploying CoreDNS
ubuntu@k8s-kubefed:~# cat > Values.yaml << EOF
isClusterService: false
serviceType: "NodePort"
middleware:
  kubernetes:
    enabled: false
  etcd:
    enabled: true
    zones:
    - "example.com."
    endpoint: "http://etcd-cluster.kube-system:2379"
EOF
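With the etcd middleware enabled as above, CoreDNS stores DNS records in etcd under SkyDNS-style keys: the domain labels are reversed and prefixed with /skydns. A hedged sketch of that key layout (the helper is illustrative, not part of CoreDNS):

```shell
# dns_to_etcd_key: convert a DNS name to the etcd key path used by CoreDNS's
# etcd middleware, e.g. nginx.example.com. -> /skydns/com/example/nginx
dns_to_etcd_key() {
  echo "$1" | awk -F. '{
    key = "/skydns"
    for (i = NF; i >= 1; i--)   # walk labels right-to-left
      if ($i != "")             # skip the empty field after a trailing dot
        key = key "/" $i
    print key
  }'
}

dns_to_etcd_key "nginx.example.com."   # -> /skydns/com/example/nginx
```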
ubuntu@k8s-kubefed:~# helm install --namespace kube-system --name coredns -f Values.yaml stable/coredns
NAME: coredns
LAST DEPLOYED: Mon Jan 29 17:43:06 2018
NAMESPACE: kube-system
STATUS: DEPLOYED
RESOURCES:
==> v1/ConfigMap
NAME DATA AGE
coredns-coredns 1 <invalid>
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
coredns-coredns NodePort 10.109.54.134 <none> 53:30878/UDP,53:30878/TCP,9153:32108/TCP <invalid>
==> v1beta1/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
coredns-coredns 1 1 1 0 <invalid>
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
coredns-coredns-5fc6cdb5d7-82trc 0/1 ContainerCreating 0 <invalid>
NOTES:
CoreDNS is now running in the cluster.
It can be accessed using the below endpoint
export NODE_PORT=$(kubectl get --namespace kube-system -o jsonpath="{.spec.ports[0].nodePort}" services coredns-coredns)
export NODE_IP=$(kubectl get nodes --namespace kube-system -o jsonpath="{.items[0].status.addresses[0].address}")
echo "$NODE_IP:$NODE_PORT"
It can be tested with the following:
1. Launch a Pod with DNS tools:
kubectl run -it --rm --restart=Never --image=infoblox/dnstools:latest dnstools
2. Query the DNS server:
/ # host kubernetes
ubuntu@k8s-kubefed:~#
Verify the CoreDNS pod and other objects are running as expected
# Now, verify the new pods
ubuntu@k8s-kubefed:/# kubectl get pods --all-namespaces | grep coredns
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-coredns-798f955756-z5bfm
# Now, verify the new deployments
ubuntu@k8s-kubefed:/# kubectl get deployment --all-namespaces | grep coredns
NAMESPACE NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
kube-system coredns-coredns 1 1 1 1 56s
# Now, verify the new service
ubuntu@k8s-kubefed:/# kubectl get svc --all-namespaces | grep coredns
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-system coredns-coredns NodePort 10.97.177.10 <none> 53:30099/UDP,53:30099/TCP,9153:32673/TCP 2m
ubuntu@k8s-kubefed:~$ kubectl get configmap --all-namespaces | grep coredns
NAMESPACE NAME DATA AGE
kube-system coredns-coredns 1 3m
kube-system coredns.v1 1 3m
If you are interested in knowing more details about the CoreDNS configuration, run the following commands.
root@kubefed-1:~# kubectl -n kube-system get configmap coredns-coredns -o yaml
apiVersion: v1
data:
  Corefile: |-
    .:53 {
        cache 30
        errors
        health
        kubernetes cluster.local 10.3.0.0/24
        loadbalance round_robin
        prometheus 0.0.0.0:9153
        proxy . /etc/resolv.conf
    }
kind: ConfigMap
metadata:
  creationTimestamp: 2018-01-29T22:43:10Z
  name: coredns-coredns
  namespace: kube-system
  resourceVersion: "1823"
  selfLink: /api/v1/namespaces/kube-system/configmaps/coredns-coredns
  uid: c80b6d98-0545-11e8-8bce-fa163e155847
root@kubefed-1:~# kubectl -n kube-system get configmap coredns.v1 -o yaml
apiVersion: v1
data:
release: H4sIAAAAAAAC/+x8TWwdx304nL+DJON/44AF2sTo4delUkmMdh9JWY60iYLIpJKwtSVWZBwYiiDO25333pizM+uZWVLPFAP01lsvPfdQoL300ENP7SFAgSJADj31UKBFC/RSoMdeeyrma3f2fVCPsoLWhWWA5M7H72t+ 8 /uY+Y3RlwohScnV2j+9jv7i9S+/lvz562hHSLL74ACoAi5OQTacUz4GykFPCBSsUZrIDO1pKDCHIQFcFEQpUkKjzEAzakiYOAXCy1pQrhEAeVYLqeHBw937T/cfPjq8e+XacTMkhWYwJhrSlOOKqBoXBEx7qqZKkwpSAR8pwWusJ3eTs0zVpMgMIPV480nGRUn2hdTnCSgiT2hBFHh+Uv/ 7 +gzqvf0ZxAaGWhU91aSymJXGulEZLktpOLdt/uM8sTiLiYDkiseZX2n5TlAkOE2UJiWcUj2xUhsJxsQp5eMcoa0M3sMNLyaAYV/4QWZZtBBM5QgFLmTDIaVGhLIyP4jSWOq7D8gJkZCmtMJjcpfykRgy8WxQcuUAMGywQ/hGaDuD32+InFpKDCIjUyJzhAawDhOhtBWN5EQThdbe+PJ
kind: ConfigMap
metadata:
  creationTimestamp: 2018-01-29T22:43:10Z
  labels:
    MODIFIED_AT: "1517265786"
    NAME: coredns
    OWNER: TILLER
    STATUS: DEPLOYED
    VERSION: "1"
  name: coredns.v1
  namespace: kube-system
  resourceVersion: "1831"
  selfLink: /api/v1/namespaces/kube-system/configmaps/coredns.v1
  uid: c8096702-0545-11e8-8bce-fa163e155847
root@kubefed-1:~#
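The release value in the coredns.v1 ConfigMap is Tiller's stored release record: a base64-encoded, gzip-compressed payload (the decoded data starts with the H4sI gzip signature). A hedged sketch of a helper to decode such a field (the helper name is ours):

```shell
# decode_release: base64-decode and gunzip a Tiller release field from stdin.
decode_release() {
  base64 -d | gunzip
}

# Usage against the cluster (not run here):
# kubectl -n kube-system get configmap coredns.v1 \
#   -o jsonpath='{.data.release}' | decode_release
```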
Create coredns-provider config
This custom coredns-provider.conf file will be used when deploying the federation control plane. The etcd endpoint matches the one configured in Values.yaml above.
ubuntu@k8s-kubefed:~# cat > $HOME/coredns-provider.conf << EOF
[Global]
etcd-endpoints = http://etcd-cluster.kube-system:2379
zones = example.com.
EOF
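For context, this file is consumed when initializing the federation control plane with kubefed init, per the Kubernetes docs linked above. A hedged sketch that prints the invocation; the federation name "federation" and the host cluster context "kubefed-host" are placeholders, not values from this page:

```shell
# print_kubefed_init: print the kubefed init invocation that consumes the
# coredns-provider.conf created above. Names are placeholders.
print_kubefed_init() {
  cat <<EOF
kubefed init federation \\
  --host-cluster-context=kubefed-host \\
  --dns-provider=coredns \\
  --dns-zone-name=example.com. \\
  --dns-provider-config=\$HOME/coredns-provider.conf
EOF
}

print_kubefed_init
```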
Need to delete/re-install CoreDNS?
The command below deletes the coredns release. Follow this step only if you need to re-install CoreDNS.
ubuntu@k8s-kubefed:~# helm del --purge coredns
release "coredns" deleted
root@kubefed-4:~#