
Introduction

The current policy-framework supports several communication endpoint types, defined in policy/common/policy-endpoints, as below:
-> DMaaP
-> UEB
-> NOOP
-> Http Servers
-> Http Clients

DMaaP, UEB, and NOOP are message-based communication infrastructures that operate in an asynchronous, unidirectional mode; hence the terminology of sources and sinks, denoting their directionality into or out of the controller, respectively.

HTTP, which provides synchronous, bi-directional communication with remote endpoints, is out of scope for this page.


This investigation focuses on adding a Kubernetes-friendly Kafka/Strimzi backend as another communication endpoint choice:

-> DMaaP (Kafka/Zookeeper backend)
-> Kafka (Kafka/Strimzi backend)
-> UEB
-> NOOP

Policy framework application properties

As an example, the PAP topic configuration below uses DMaaP as the communication infrastructure for all of its topic sources and sinks:

topicParameterGroup:
    topicSources:
    - topic: POLICY-PDP-PAP
      servers:
      - message-router
      topicCommInfrastructure: dmaap
      fetchTimeout: 15000
    - topic: POLICY-HEARTBEAT
      effectiveTopic: POLICY-PDP-PAP
      consumerGroup: policy-pap
      servers:
      - message-router
      topicCommInfrastructure: dmaap
      fetchTimeout: 15000
    topicSinks:
    - topic: POLICY-PDP-PAP
      servers:
      - message-router
      topicCommInfrastructure: dmaap
    - topic: POLICY-NOTIFICATION
      servers:
      - message-router
      topicCommInfrastructure: dmaap
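
For comparison, a minimal sketch of what the same topic might look like with a Kafka/Strimzi backend. This assumes a new kafka value for topicCommInfrastructure and a Strimzi bootstrap service named my-cluster-kafka-bootstrap; the actual property names and values would depend on the implementation chosen by this investigation.

topicParameterGroup:
    topicSources:
    - topic: POLICY-PDP-PAP
      servers:
      - my-cluster-kafka-bootstrap:9092
      topicCommInfrastructure: kafka
      fetchTimeout: 15000
    topicSinks:
    - topic: POLICY-PDP-PAP
      servers:
      - my-cluster-kafka-bootstrap:9092
      topicCommInfrastructure: kafka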

Minio Setup

Firstly, we install minio on the machine we are using, with Docker. The command pulls the minio image and creates the container with a volume called "data" mapped to the /data directory. Pay particular attention to the startup output: it shows the credentials and the address of the web UI.
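
A minimal sketch of that command, assuming the quay.io/minio/minio image and the default ports 9000 (API) and 9001 (web console); since the container is detached, the startup output can be viewed with docker logs:

docker run -d --name minio \
  -p 9000:9000 -p 9001:9001 \
  -v data:/data \
  quay.io/minio/minio server /data --console-address ":9001"
docker logs minio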

Navigate to the web UI and enter the credentials.

Once the credentials are entered, you can create an S3 bucket called "kubedemo".
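
Alternatively, a sketch of creating the bucket from the command line, assuming the minio mc client is installed and using the access/secret keys noted from the startup output:

mc alias set local http://localhost:9000 <access-key> <secret-key>
mc mb local/kubedemo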

That is all the setup required for minio.



Delete the Patroni Namespace & Create a Velero Restore

To delete the namespace and its contents, run:

helm uninstall patroni-velero -n patroni
kubectl delete ns patroni

Ensure that all PVs and PVCs are deleted, for example with the commands below.
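
For example (PVCs are namespaced while PVs are cluster-scoped, so the checks differ slightly):

kubectl get pvc -n patroni
kubectl get pv | grep patroni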

Creating the restore is, again, a single command:

velero restore create patroni-restore --from-backup patroni-backup

Check the result of the restore with:

velero restore describe patroni-restore

It should show that everything completed successfully. We can then check the pods, PVs, and PVCs, and we should see everything working as expected, including leader election:

kubectl get pods -l spilo-role -L spilo-role -n patroni
kubectl get pv
kubectl get pvc -n patroni

We should now check the database to see that our data has been restored. The NodePort will have changed on the restart, so we need to look it up again. Once we have it, we connect to the database and check the table we created to confirm the data has been restored, as sketched below.
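
A sketch of those checks, assuming psql is available locally and the default postgres superuser is used; <node-ip>, <node-port>, and the schema/table names are placeholders for the values used earlier in the demo:

kubectl get svc -n patroni
psql -h <node-ip> -p <node-port> -U postgres -c "SELECT * FROM <schema>.<table>;"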

This marks the end of the demo. To summarise:

  • We ran minio to simulate AWS S3 storage infrastructure
  • We installed Velero in our cluster
  • We used the provided charts to install Patroni with Helm on Kubernetes
  • We added a schema, a table, and some data on the Patroni cluster
  • We created a Velero backup of the patroni namespace, including persistent volumes
  • We deleted the namespace and volumes
  • We used Velero to restore the namespace and volumes
  • We confirmed the data was restored