
Often we require a multi-cluster deployment in which a stack is replicated across multiple locations.
These clusters should be able to connect and communicate among themselves.

We deployed such a stack through the EMCO platform.

While most of the deployment is automated, a few steps were done manually; each of them is documented here.

Eventually these steps will also be automated, making the entire stack a one-click deployment through EMCO.

The components of the stack are shown below:



Step-by-step guide

For a smooth deployment we have created three scripts. Edit each script to provide the correct cluster details, for example the kubeconfig file, the cluster IPs,
and the correct URLs where the EMCO binaries such as orchestrator and clm are running.

  1. m3operatorScript - This script must be run first, as it deploys the m3operator, which is a prerequisite for the m3db cluster deployment.
    Make the necessary changes in the script as per your cluster details. Helm charts and profile can be found at: operatorHelmChartsAndProfile

  2. m3dbInstallerScript - This script installs the m3db nodes. Helm charts and profile: m3dbHelmCharts

  3. CollectD-PrometheusScript - This script installs collectd and Prometheus. Helm charts for collectd and Prometheus: collectd and prometheus

  4. Once all pods are deployed correctly, the topology should look like the one below:


Manual steps for getting m3db up and running:

As noted earlier, some steps in the deployment of the stack are not yet automated.
In due course these may be automated as well, but until then the steps are:

1. The three nodes on which m3db is to be deployed must be labelled before the m3db script is run. The commands:

NodeLabelling
# Label all three nodes with a common region and one distinct zone each;
# m3db uses the zone labels as isolation groups for replica placement.
NODES=($(kubectl get nodes --output=jsonpath={.items..metadata.name}))
kubectl label node/${NODES[0]} failure-domain.beta.kubernetes.io/region=us-west1
kubectl label node/${NODES[1]} failure-domain.beta.kubernetes.io/region=us-west1
kubectl label node/${NODES[2]} failure-domain.beta.kubernetes.io/region=us-west1
kubectl label node/${NODES[0]} failure-domain.beta.kubernetes.io/zone=us-west1-a --overwrite=true
kubectl label node/${NODES[1]} failure-domain.beta.kubernetes.io/zone=us-west1-b --overwrite=true
kubectl label node/${NODES[2]} failure-domain.beta.kubernetes.io/zone=us-west1-c --overwrite=true
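The labelling above can also be written as a loop. The sketch below only prints the commands for review rather than applying them; the node names are hypothetical stand-ins, since on a real cluster they would come from the kubectl query shown in the block above.

```shell
# Sketch: generate the label commands for three nodes, one zone each.
# NODES here are placeholders; on a live cluster populate them with:
#   NODES=($(kubectl get nodes --output=jsonpath={.items..metadata.name}))
NODES=(node-0 node-1 node-2)
ZONES=(us-west1-a us-west1-b us-west1-c)
for i in 0 1 2; do
  echo "kubectl label node/${NODES[$i]} failure-domain.beta.kubernetes.io/region=us-west1"
  echo "kubectl label node/${NODES[$i]} failure-domain.beta.kubernetes.io/zone=${ZONES[$i]} --overwrite=true"
done
```

Printing the commands first makes it easy to confirm each node gets a distinct zone before touching the cluster.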



2. Create the db namespace and bootstrap m3db

Bootstrap M3db
# Run the port-forward in a separate terminal (or background it) so the
# coordinator API is reachable on localhost:7201:
kubectl -n training port-forward svc/m3coordinator-m3db-cluster 7201

curl -vvv -X POST http://localhost:7201/api/v1/database/create -d '{
  "type": "cluster",
  "namespaceName": "collectd",
  "retentionTime": "168h",
  "numShards": "64",
  "replicationFactor": "3",
  "hosts": [
        {
            "id": "m3db-cluster-rep0-0",
            "isolationGroup": "us-west1-a",
            "zone": "embedded",
            "weight": 100,
            "address": "m3db-cluster-rep0-0.m3dbnode-m3db-cluster:9000",
            "port": 9000
        },
        {
            "id": "m3db-cluster-rep1-0",
            "isolationGroup": "us-west1-b",
            "zone": "embedded",
            "weight": 100,
            "address": "m3db-cluster-rep1-0.m3dbnode-m3db-cluster:9000",
            "port": 9000
        },
        {
            "id": "m3db-cluster-rep2-0",
            "isolationGroup": "us-west1-c",
            "zone": "embedded",
            "weight": 100,
            "address": "m3db-cluster-rep2-0.m3dbnode-m3db-cluster:9000",
            "port": 9000
        }
    ]
}'
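Once the create call returns, the bootstrap can be sanity-checked through the same port-forward. A minimal sketch, assuming the coordinator is still reachable on localhost:7201; the placement and namespace read endpoints are part of the standard m3coordinator API, and the curl calls are left commented so the snippet can be reviewed before running against a live cluster.

```shell
# Sketch: endpoints for verifying the bootstrap via the existing port-forward.
COORDINATOR="http://localhost:7201"
PLACEMENT_URL="${COORDINATOR}/api/v1/services/m3db/placement"
NAMESPACE_URL="${COORDINATOR}/api/v1/services/m3db/namespace"
# The placement should list all three hosts once the bootstrap completes:
#   curl "$PLACEMENT_URL"
# The namespace listing should include "collectd":
#   curl "$NAMESPACE_URL"
echo "$PLACEMENT_URL"
echo "$NAMESPACE_URL"
```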


3. Connecting Prometheus and the m3coordinator service.

After the pods for m3db and Prometheus are running, we need to make the m3coordinator service a NodePort in case we are not using a load balancer.
This can be done with the kubectl edit command:


Make m3coordinator service a NodePort
kubectl edit svc/m3coordinator-m3db-cluster
Change the service type to NodePort and set nodePort: 32701
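With the m3coordinator service exposed on nodePort 32701, Prometheus can be pointed at it through its remote storage settings. A sketch of the prometheus.yml fragment, assuming `<node-ip>` stands in for a cluster node address reachable from Prometheus; the /api/v1/prom/remote/* paths are the m3coordinator remote write/read endpoints.

```yaml
# prometheus.yml fragment (sketch): ship samples to m3coordinator via the
# NodePort created above. <node-ip> is a placeholder for a cluster node IP.
remote_write:
  - url: "http://<node-ip>:32701/api/v1/prom/remote/write"
remote_read:
  - url: "http://<node-ip>:32701/api/v1/prom/remote/read"
```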






