...

Scenario 01 - Considering microservice replication across multiple locations with replication within each cluster

Diagram



Testing Steps

  1. Install ISTIO - Deploy istio control plane in each cluster. (NOTE - For testing use common root CA)
  2. Configure DNS - To resolve services in remote clusters, Istio uses its own DNS, istiocoredns, which resolves remote Istio service names (e.g. names under the .global domain).

NOTE - In order to utilize istiocoredns, the cluster's Kubernetes DNS must be configured to stub the global domain to the istiocoredns service.
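If the cluster runs kube-dns rather than CoreDNS, the same stubbing can be sketched with the stubDomains field. This is a minimal sketch only; the IP is a placeholder for the istiocoredns service ClusterIP (reusing 10.43.57.78 from this setup):

```yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  stubDomains: |
    {"global": ["10.43.57.78"]}
```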

...

Code Block: configmap coredns (yml)
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        log
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           upstream
           fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
    # Stub the global domain to the istiocoredns service ClusterIP
    global:53 {
        errors
        cache 30
        proxy . 10.43.57.78
    }

4. Add an Istio ServiceEntry with the details of the remote servers (server service 03 and server service 04) to the cluster where the client is running. (For Istio multi-cluster communication, use of the SNI port on the istio-ingressgateway is mandatory at both ends.)

Code Block: ServiceEntry (yml)
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: servicenameservicename04-bar
spec:
  hosts:
  # template for the remote service name - <servicename.namespace.global>
  - httpbinserverservice04.bar.global
  # Treat remote cluster services as part of the service mesh
  # as all clusters in the service mesh share the same root of trust.
  location: MESH_INTERNAL
  ports:
  - name: http1
    number: 8000
    protocol: http
  resolution: DNS
  addresses:
  # the IP address to which httpbinserverservice04.bar.global will resolve;
  # must be unique for each remote service, within a given cluster.
  # This address need not be routable. Traffic for this IP will be captured
  # by the sidecar and routed appropriately.
  - 240.0.0.2
  endpoints:
  # This is the routable address of the ingress gateway in the remote cluster
  # that sits in front of the httpbinserverservice04.bar service. Traffic from
  # the sidecar will be routed to this address.
  - address: 172.25.55.50
    ports:
      http1: 15443  # Do not change this port value
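For the SNI requirement above, the remote cluster's istio-ingressgateway is expected to expose port 15443 with TLS passthrough. A minimal sketch of such a Gateway, assuming the standard istio: ingressgateway selector (resource name is illustrative):

```yml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: cluster-aware-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      # Dedicated SNI port for cross-cluster mTLS traffic
      number: 15443
      name: tls
      protocol: TLS
    tls:
      # Pass the TLS connection through to the target workload unterminated
      mode: AUTO_PASSTHROUGH
    hosts:
    - "*.global"
```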


5. Create an Istio VirtualService on the client cluster listing all the destination servers the client wants to connect to. The API calls from the client can be load balanced by assigning a weight to each destination. This can also be achieved using a DestinationRule.

Code Block: VirtualService (yml)
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews-route
spec:
  hosts:
  - service01.bar.svc.cluster.local
  http:
  - match:
    - uri:
        prefix: "/headers"
    route:
    - destination:
        host: serviceserver04.bar.global
        port:
          number: 8000
      weight: 50
    - destination:
        host: serviceserver01.bar.svc.cluster.local
        port:
          number: 8000
      weight: 25
    - destination:
        host: serviceserver02.bar.svc.cluster.local
        port:
          number: 8000
      weight: 25
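As noted above, a DestinationRule can also shape the load balancing, e.g. by setting the policy used for a given destination. A hedged sketch (the resource name and ROUND_ROBIN policy are illustrative choices, not part of this setup):

```yml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: serviceserver01-lb
spec:
  host: serviceserver01.bar.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      # Distribute requests evenly across the endpoints of this host
      simple: ROUND_ROBIN
```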

6. Verify that the client pod sends requests to the servers in proportion to the assigned weights

Code Block: Traffic distribution (bash)
#!/bin/bash
# Send 10 requests through the weighted route defined in the VirtualService
COUNTER=0
while [ $COUNTER -lt 10 ]; do
  curl -v service01.bar.svc.cluster.local/headers
  sleep 2
  COUNTER=$((COUNTER+1))
done
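To quantify the split, the responses can be tallied per backend. Assuming each server identifies itself somewhere in the /headers response (the Served-By field below is hypothetical), the tallying logic itself can be sketched locally:

```shell
#!/bin/bash
# Count lines of the form "Served-By: <host>" and print counts per backend,
# highest first. The heredoc stands in for captured curl output.
tally() {
  grep '^Served-By:' | sort | uniq -c | sort -rn
}

tally <<'EOF'
Served-By: serviceserver04
Served-By: serviceserver01
Served-By: serviceserver04
Served-By: serviceserver02
EOF
```

With weights 50/25/25 and enough requests, the counts are expected to approach a 2:1:1 ratio.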


IN PROGRESS......