
What's desired

For replicating the database (MySQL) in SDNC, the following were needed:

  1. Have a Master and at least one Slave.
  2. Reuse MySQL's built-in data replication to replicate data between the Master and the Slaves.
  3. Use the Kubernetes scaling mechanism to scale the pods.


How Kubernetes enabled it

  • StatefulSet: 
    • Used to manage stateful applications. 
    • Guarantees a fixed ordinal index for each Pod. 
    • Combined with a headless Service, Pods are registered in DNS with their own unique FQDN; this makes it possible for other Pods to find a Pod even after a restart (with a different IP address).
  • Since we were using a single Kubernetes VM, dynamically provisioning a volume on the VM's local store for newly spun-up Slaves was not straightforward (there is no built-in support). However, Kubernetes does support writing external provisioners, which did the job for us. This provisioner creates a virtual NFS server on top of a local store: the nfs-provisioner instance watches for PersistentVolumeClaims that request its StorageClass and automatically creates NFS-backed PersistentVolumes for them.
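As a sketch, the dynamic provisioning described above works through a StorageClass served by the nfs-provisioner; any PersistentVolumeClaim naming that class gets an NFS-backed volume automatically. The names and sizes below (`example-nfs`, `mysql-data`, `1Gi`) are illustrative, not the actual SDNC configuration:

```yaml
# Hypothetical StorageClass served by the nfs-provisioner instance.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: example-nfs
provisioner: example.com/nfs        # must match the name the provisioner runs under
---
# A claim that requests the class; the provisioner watches for such claims
# and creates an NFS-backed PersistentVolume to satisfy each one.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mysql-data
spec:
  storageClassName: example-nfs
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```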

We used the Kubernetes example for replicating a MySQL server; this was modified to suit the needs of SDNC-DB.
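For orientation, a trimmed-down StatefulSet along the lines of that example might look as follows. The images shown are the ones from the upstream Kubernetes example; the names, replica count, and storage size are illustrative, and the real SDNC chart differs:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: sdnc-dbhost
spec:
  serviceName: dbhost            # headless Service providing per-Pod DNS names
  replicas: 2                    # ordinal 0 = Master, 1..n = Slaves
  selector:
    matchLabels:
      app: sdnc-dbhost
  template:
    metadata:
      labels:
        app: sdnc-dbhost
    spec:
      initContainers:
        - name: init-mysql       # writes server-id.cnf, copies master.cnf/slave.cnf
          image: mysql:5.7
        - name: clone-mysql      # clones data from an existing Pod on first start
          image: gcr.io/google-samples/xtrabackup:1.0
      containers:
        - name: mysql            # the actual MySQL server
          image: mysql:5.7
        - name: xtrabackup       # sidecar handling replication and clone requests
          image: gcr.io/google-samples/xtrabackup:1.0
  volumeClaimTemplates:          # one dynamically provisioned volume per Pod
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```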


Internals

For each MySQL Pod, 2 init containers and 2 containers are spawned.

  • 2 init containers:
    • init-mysql:
      1. Generates a special MySQL config file based on the Pod's ordinal index (the index is saved in server-id.cnf).
      2. Uses a ConfigMap to copy the master.cnf/slave.cnf files into the conf.d directory.
    • clone-mysql:
      1. Performs a clone operation the first time a Slave comes up, since the Master may already hold data when the Slave starts.
      2. Uses the open-source Percona XtraBackup tool for this job.
  • 2 containers:
    • mysqld:
      • The actual MySQL server.
    • xtrabackup sidecar:
      1. Handles all replication between this server and the Master.
      2. Handles requests from other Pods for data cloning.
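The ordinal-handling step of init-mysql can be sketched in shell roughly as below. This is illustrative only, not the actual init script: the `POD_HOSTNAME` variable, the server-id offset of 100, and the file paths are assumptions.

```shell
#!/bin/sh
# Sketch of the init-mysql step: derive the MySQL server-id from the Pod's
# ordinal index and pick the role-specific config. Illustrative only.
pod_hostname="${POD_HOSTNAME:-sdnc-dbhost-2}"   # StatefulSet Pods end in -<ordinal>
ordinal="${pod_hostname##*-}"                   # strip everything up to the last '-'

# Save the ordinal-derived server-id, as described above.
mkdir -p conf.d
printf '[mysqld]\nserver-id=%d\n' "$((100 + ordinal))" > conf.d/server-id.cnf

# Ordinal 0 is the Master; every other ordinal is a Slave.
if [ "$ordinal" -eq 0 ]; then
  role=master
else
  role=slave
fi
echo "role=$role"    # the real script would copy $role.cnf into conf.d/
```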

As mentioned above, we used the nfs-provisioner to dynamically satisfy PersistentVolumeClaims, which enables dynamic scaling of Slaves.

Master Failure

Unfortunately, if the Master fails, we need to write a script (or an application) to promote one of the Slaves to be the new Master and instruct the other Slaves and applications to switch to it. You can see more details here.

Another way is to use GTID-based replication.
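For illustration only, a manual promotion plus a GTID-based repoint could look roughly like the following (this is a sketch, not an SDNC-provided failover script; the host name and the exact statements depend on the MySQL version and which Slave is chosen):

```sql
-- On the Slave chosen for promotion:
STOP SLAVE;
RESET SLAVE ALL;   -- it now acts as the new Master and accepts writes

-- On each remaining Slave: with GTID-based replication (gtid_mode=ON),
-- the new Master can be pointed to without hunting for binlog
-- file/position coordinates.
STOP SLAVE;
CHANGE MASTER TO
  MASTER_HOST = 'sdnc-dbhost-1.dbhost.onap-sdnc',
  MASTER_AUTO_POSITION = 1;
START SLAVE;
```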


Advantages 

  • Can have multiple Slaves with one Master server.
  • Allows scaling Slaves dynamically.
  • Any data write goes to the Master, but data reads can be served by the Slaves as well. Hence a 'DBHost-Read' Service was introduced, which Clients should use for data-fetch operations.
  • For any write operation, the write service - DBHost - can be used.
  • Once a Slave has replicated from the Master, that Slave is then used to seed any new Slave; this keeps the impact on the Master server low.
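The read/write split above maps onto two Kubernetes Services, sketched below. The Service and namespace names follow the naming used elsewhere on this page; the selector labels and port are assumptions:

```yaml
# Headless Service backing the StatefulSet: gives each Pod its own DNS name,
# e.g. sdnc-dbhost-0.dbhost.onap-sdnc, so writes can target the Master directly.
apiVersion: v1
kind: Service
metadata:
  name: dbhost
  namespace: onap-sdnc
spec:
  clusterIP: None
  selector:
    app: sdnc-dbhost
  ports:
    - port: 3306
---
# Normal Service for reads: load-balances SELECT traffic across the Pods.
apiVersion: v1
kind: Service
metadata:
  name: dbhost-read
  namespace: onap-sdnc
spec:
  selector:
    app: sdnc-dbhost
  ports:
    - port: 3306
```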


Examples:

  1. Running the mysql client to create a database and a table, then fetching the data using the DBHost-Read service:

    kubectl run mysql-client --image=mysql:5.7 -i --rm --restart=Never -- mysql -h sdnc-dbhost-0.dbhost.onap-sdnc -uroot -popenECOMP1.0 <<EOF
    CREATE DATABASE test;
    CREATE TABLE test.messages (message VARCHAR(250));
    INSERT INTO test.messages VALUES ('hello');
    EOF
    
    kubectl run mysql-client --image=mysql:5.7 -i -t --rm --restart=Never -- mysql -uroot -popenECOMP1.0 -h dbhost-read.onap-sdnc -e "SELECT * FROM test.messages"
  2. To demonstrate that DBHost-Read distributes queries across Slaves, watch the server ID change in its responses:

    kubectl run mysql-client-loop --image=mysql:5.7 -i -t --rm --restart=Never --  bash -ic "while sleep 0.5; do mysql -uroot -popenECOMP1.0 -h dbhost-read.onap-sdnc -e 'SELECT @@server_id,NOW()'; done"
  3. MySQL can be scaled up or down dynamically:

    Scale up:
    kubectl scale statefulset sdnc-dbhost -n onap-sdnc  --replicas=5
     
    Scale down:
    kubectl scale statefulset sdnc-dbhost -n onap-sdnc  --replicas=2