
Introduction

As part of the HA (High Availability) investigation, we decided to focus on a small set of tools for testing and implementation.

As a database HA solution for Postgres, the Patroni project was chosen. Patroni uses a primary-replica structure for replication: writes and reads go to the primary, and the data is streamed asynchronously to the replicas. The documentation refers to the primary as the leader; a database instance becomes leader by winning a leader election. The elected leader holds a lock, and all other instances see this and assume the role of replica. If the leader node dies, a new election takes place and a new leader is selected. This process is extremely quick.

For backup and restore, a tool called Velero was chosen. Velero is a widely used and comprehensive tool for backing up Kubernetes resources, including persistent volumes. The CLI provided by Velero is easy to use and adopts much of the syntax used by kubectl.

The remainder of this document will show a deployment of a 5-pod Patroni cluster inside a Kubernetes environment. The database cluster will then be backed up, destroyed and restored, in that order.

Notes

  • All installation takes place on Ubuntu 20.04.
  • This demonstration assumes that you have a running Kubernetes cluster and a helm repo (the repo name used in this case is "local").
  • A tool called "minio" is used to simulate a connection to AWS S3. In a real-world situation, minio would not be needed because the Kubernetes cluster would be backed by cloud provider storage.
  • This demo is based on Kubernetes server version 1.22.6 and client (kubectl) version 1.23.3.

Running minio

Firstly, we will run minio with Docker on the machine you are using. This command pulls the minio image and creates the container with a volume called "data" mapped to the /data directory.

docker run --name minio -p 9000:9000 -v data:/data minio/minio server /data

The output should be similar to the below. Pay particular attention to the credentials and the address of the web UI. With the port mapping above, the web UI is served on http://localhost:9000, and the default credentials are minioadmin/minioadmin unless you have overridden them.

Navigate to the web UI and enter the credentials.

Once the credentials are entered, you can create an S3 bucket called "kubedemo".
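If you prefer the command line, the bucket can also be created by pointing the AWS CLI at the minio endpoint. This is a minimal sketch; it assumes the AWS CLI is installed and that minio is reachable on localhost:9000 with the default credentials:

# Hypothetical alternative to the web UI: create the bucket with the AWS CLI.
# Adjust the endpoint and credentials to match your minio instance.
export AWS_ACCESS_KEY_ID=minioadmin
export AWS_SECRET_ACCESS_KEY=minioadmin
aws --endpoint-url http://localhost:9000 s3 mb s3://kubedemo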

That is all the setup required for minio.

Velero Installation

Firstly we download the required version of the velero client, untar it, put the binary in the appropriate place and delete the tar.gz:

wget https://github.com/vmware-tanzu/velero/releases/download/v1.8.0-rc.1/velero-v1.8.0-rc.1-linux-amd64.tar.gz
tar zxf velero-v1.8.0-rc.1-linux-amd64.tar.gz
sudo mv velero-v1.8.0-rc.1-linux-amd64/velero /usr/local/bin/
rm -rf velero-v1.8.0-rc.1-linux-amd64*

This completes the installation of the Velero client.
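To confirm the client is on your PATH, you can print its version (the server side is not installed yet, so we only query the client):

# Print the velero client version without contacting a cluster.
velero version --client-only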

Then run the following to create a credentials file that Velero will use to access minio. Change the credentials accordingly.

cat <<EOF > minio.credentials
[default]
aws_access_key_id=minioadmin
aws_secret_access_key=minioadmin
EOF

Finally, we can install the Velero components in the cluster with the following. Change the IP of your minio installation accordingly and make sure you provide the correct path to "minio.credentials".

velero install --provider aws --bucket kubedemo \
--secret-file ./minio.credentials \
--backup-location-config region=minio,s3ForcePathStyle=true,s3Url=http://172.17.0.2:9000 \
--plugins velero/velero-plugin-for-aws:v1.4.0-rc1 \
--use-volume-snapshots=false --use-restic

Once this executes successfully, Velero is fully installed. If you check your Kubernetes namespaces you should now see one named "velero" with both Velero and Restic pods running. Restic enables us to back up persistent volumes that are stored in our local file system.
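A quick way to check is to list the pods in the new namespace:

# The velero deployment pod and one restic pod per node should be Running.
kubectl get pods -n velero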

Patroni Setup and Installation

Once minio and Velero are set up, we can start to bring up the Patroni installation. Details about Patroni for Postgres and what it does can be found here and in our investigation wiki article. In brief, what Patroni does is make a Postgres deployment highly available. It sets up a cluster of database instances (the number is configurable) and elects one of them as the leader; this election is done quickly and efficiently to minimise downtime. Data is synchronised across the databases so that they always hold the same data, and Kubernetes will quickly spin up a new instance if one of the current instances fails.

We have provided a helm chart for Patroni that uses the OOM common helm named templates for some of its operations, so the Patroni chart will fit into the OOM repo quite easily. Both charts are provided here:

patroni-0.12.1.tgz

common-9.0.0.tgz

These can be used in your local/remote setup. Here we will detail how to set this up in your environment.
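As a sketch, pushing the charts to a chartmuseum-backed repo named "local" might look like the following; the plugin and repo name are assumptions based on the OOM setup, so adjust to your environment:

# Push the provided charts to the "local" repo.
# Assumes the helm push (cm-push) plugin is installed; on newer plugin
# versions the command is "helm cm-push" instead of "helm push".
helm push common-9.0.0.tgz local
helm push patroni-0.12.1.tgz local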

Once these charts are pushed to your "local" repo and Velero and minio are running, you should create a new Kubernetes namespace called "patroni". You can do this with:

kubectl create ns patroni

Make sure the charts are visible in your repo.
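For example, with a repo named "local":

# Both the patroni and common charts should appear in the listing.
helm search repo local/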

All of the Patroni Postgres database instances will be created in that namespace. Next, we can simply install our helm chart in the patroni namespace.

helm install patroni-velero patroni -n patroni

Once the installation runs without error, you should be able to observe several things in your cluster. First, we look at the pods that have been created.
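With the release name used above, the 5-pod cluster should show pods patroni-velero-0 through patroni-velero-4 in the Running state:

# List the Patroni database pods.
kubectl get pods -n patroni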

We can take a quick look to see which of these pods is currently the elected leader of the cluster.
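One way to do this is via the role label that Patroni maintains on each pod; the label name here assumes Patroni's default ("role"), which the chart may override:

# Show each pod together with its role label (leader/master vs replica).
kubectl get pods -n patroni -L role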

Here we can see that the pod patroni-velero-0 has been elected leader and the others are replicas. Let's see what happens if we delete the leader:

kubectl delete po patroni-velero-0 -n patroni

Very quickly we will see that a new leader has been elected and that the deleted pod has been recreated as a replica.
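You can watch the failover as it happens by re-running the listing above in watch mode (again assuming the default "role" label):

# Watch the pods and their role labels as the new leader takes over
# and the deleted pod is recreated; Ctrl-C to stop.
kubectl get pods -n patroni -L role -w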

So, at this stage we know that the failover mechanism and leader election are working properly.
