Geo-Redundancy:-
Problem/Requirement:-
Solution Delivered (POC):-
To achieve this, we used a 4-node setup as shown below.
We relied on the default labels that Kubernetes assigns to each node.
Now let's see what the configuration looks like for the anti-affinity between the VNFSDK-POSTGRES pods (for the POC we increased the replica count to 4):
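The default per-node labels, including kubernetes.io/hostname which serves as the topologyKey in the snippets below, can be inspected with kubectl (the node name shown is illustrative):

```shell
# List all nodes together with the labels Kubernetes assigns by default;
# kubernetes.io/hostname is the label used as topologyKey below.
kubectl get nodes --show-labels

# Or inspect the labels of a single node (node name is illustrative):
kubectl describe node k8s-1 | grep -A 5 Labels
```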
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
            - key: app
              operator: In
              values:
                - vnfsdk-postgres
        topologyKey: "kubernetes.io/hostname"
This snippet, in values.yaml under the vnfsdk-postgres directory, ensures that no two vnfsdk-postgres DB pods are ever scheduled on the same node.
For the affinity between the DB and APP pods, and the anti-affinity between the APP pods themselves, we used the following snippet in the values.yaml of vnfsdk:
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
            - key: app
              operator: In
              values:
                - vnfsdk
        topologyKey: "kubernetes.io/hostname"
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
            - key: app
              operator: In
              values:
                - vnfsdk-postgres
        topologyKey: "kubernetes.io/hostname"
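The combined effect of these rules can be illustrated with a small toy placement check. This is a sketch, not the real Kubernetes scheduler: the node names, the greedy placement order, and the assumption that DB pods are scheduled before APP pods are all ours, but the constraint logic mirrors the required podAffinity/podAntiAffinity rules above.

```python
def can_place(pod_app, node, placements):
    """Return True if a pod with label app=pod_app may run on node,
    under the POC's required affinity/anti-affinity rules."""
    apps_on_node = {a for n, a in placements if n == node}
    if pod_app == "vnfsdk-postgres":
        # Anti-affinity: no two postgres pods on the same node.
        return "vnfsdk-postgres" not in apps_on_node
    if pod_app == "vnfsdk":
        # Anti-affinity with itself, affinity with a postgres pod:
        # the node must not already host a vnfsdk pod, and it must
        # already host a vnfsdk-postgres pod.
        return ("vnfsdk" not in apps_on_node
                and "vnfsdk-postgres" in apps_on_node)
    return True

nodes = ["k8s-1", "k8s-2", "k8s-3", "k8s-4"]
placements = []

# Place the 4 DB replicas first, then the 4 APP replicas
# (greedy: each pod goes to the first feasible node).
for app in ["vnfsdk-postgres"] * 4 + ["vnfsdk"] * 4:
    node = next(n for n in nodes if can_place(app, n, placements))
    placements.append((node, app))

for node, app in placements:
    print(node, app)
```

The result is one DB pod and one APP pod per node, which is exactly the layout observed in the POC below. Note that in a real cluster the affinity rule means an APP pod cannot schedule until a DB pod exists on some node, so the DB deployment effectively anchors the placement.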
When this configuration was deployed, the result was as below: the replicas are spread across different nodes, with each APP pod colocated with a DB pod on the same node, while no two APP pods (and no two DB pods) ever share a node.
| K8s-1 | K8s-2 | K8s-3 | K8s-4 |
|---|---|---|---|
| goodly-squid-vnfsdk-556f59ccd9-xtzff | goodly-squid-vnfsdk-556f59ccd9-n95jh | goodly-squid-vnfsdk-556f59ccd9-snlzc | goodly-squid-vnfsdk-556f59ccd9-jx9q8 |
| goodly-squid-vnfsdk-postgres-78d58775c4-9rnhh | goodly-squid-vnfsdk-postgres-78d58775c4-s4l8r | goodly-squid-vnfsdk-postgres-78d58775c4-9cf5g | goodly-squid-vnfsdk-postgres-78d58775c4-98dr9 |
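This placement can be checked with a command like the following (pod and node names will differ per deployment):

```shell
# Show each pod together with the node it landed on; the NODE column
# should show one vnfsdk pod and one vnfsdk-postgres pod per node.
kubectl get pods -o wide | grep vnfsdk
```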
With this, we achieved an Active-Active geo-redundant deployment.