Table of Contents


Test architecture

Client tests are conducted in the following architecture:

(Image: test architecture diagram)

  • HV-VES Client - produces a high volume of events for processing.
  • Processing Consumer - consumes events from Kafka topics and creates performance metrics.
  • Offset Consumer - reads Kafka offsets.
  • Prometheus - polls HV-VES, the Processing Consumer and the Offset Consumer for performance metrics, and provides the data to Grafana.
  • Grafana - delivers analytics and visualization.

The link between the HV-VES Client and HV-VES is TLS-secured (the provided scripts generate certificates and place them in the proper containers).

Info

Note: In the Without DMaaP Kafka tests, the DMaaP/Kafka service was substituted with wurstmeister Kafka.

Environment and Resources

A Kubernetes cluster with 4 worker nodes, sharing the hardware configuration shown in the table below, is deployed in the OpenStack cloud operating system. The test components are then deployed on the Kubernetes cluster as docker containers.

Configuration


CPU

Model: Intel(R) Xeon(R) CPU E5-2680 v4
No. of cores: 24
CPU clock speed [GHz]: 2.40
Total RAM [GB]: 62.9

Network Performance

Pod measurement method

To check cluster network performance, tests using iperf3 were performed. Iperf is a tool for measuring the maximum achievable bandwidth on IP networks; it runs in two modes: server and client. We used the docker image networkstatic/iperf3.

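For orientation, a minimal sketch of the two iperf3 modes as they would be run by hand (the server name iperf3-server matches the Service defined below):

Code Block
titleiperf3 modes (sketch)
# Server mode: listen for incoming tests on the default port 5201
iperf3 -s
# Client mode: run a 10-second TCP bandwidth test against the server
iperf3 -c iperf3-server
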
The following deployment creates a pod with iperf in server mode on one worker, and one iperf client pod on each worker.

Code Block
titleDeployment
collapsetrue
apiVersion: apps/v1
kind: Deployment
metadata:
  name: iperf3-server
  namespace: onap
  labels:
    app: iperf3-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: iperf3-server
  template:
    metadata:
      labels:
        app: iperf3-server
    spec:
      containers:
        - name: iperf3-server
          image: networkstatic/iperf3
          args: ['-s']
          ports:
            - containerPort: 5201
              name: server


---

apiVersion: v1
kind: Service
metadata:
  name: iperf3-server
  namespace: onap
spec:
  selector:
    app: iperf3-server
  ports:
    - protocol: TCP
      port: 5201
      targetPort: server



---

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: iperf3-clients
  namespace: onap
  labels:
    app: iperf3-client
spec:
  selector:
    matchLabels:
      app: iperf3-client
  template:
    metadata:
      labels:
        app: iperf3-client
    spec:
      containers:
        - name: iperf3-client
          image: networkstatic/iperf3
          command: ['/bin/sh', '-c', 'sleep infinity']

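With the server and clients deployed, each measurement can be taken by running iperf3 from a client pod against the server Service; a sketch of one run (the pod name placeholder is illustrative):

Code Block
titleRunning a measurement from a client pod (sketch)
# List the iperf3 client pods (one per worker)
kubectl get pods -n onap -l app=iperf3-client -o wide
# Run a 10-second TCP test from a chosen client pod against the server Service
kubectl exec -n onap <iperf3-client-pod> -- iperf3 -c iperf3-server
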
...

  • worker1 (136 MBytes/sec)

    Code Block
    titleresults
    collapsetrue
    Connecting to host iperf3-server, port 5201
    [  4] local 10.42.5.127 port 39752 connected to 10.43.25.161 port 5201
    [ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
    [  4]   0.00-1.00   sec   141 MBytes   141 MBytes/sec   32    673 KBytes       
    [  4]   1.00-2.00   sec   139 MBytes   139 MBytes/sec    0    817 KBytes       
    [  4]   2.00-3.00   sec   139 MBytes   139 MBytes/sec    0    936 KBytes       
    [  4]   3.00-4.00   sec   138 MBytes   137 MBytes/sec    0   1.02 MBytes       
    [  4]   4.00-5.00   sec   138 MBytes   137 MBytes/sec    0   1.12 MBytes       
    [  4]   5.00-6.00   sec   129 MBytes   129 MBytes/sec    0   1.20 MBytes       
    [  4]   6.00-7.00   sec   129 MBytes   129 MBytes/sec    0   1.27 MBytes       
    [  4]   7.00-8.00   sec   134 MBytes   134 MBytes/sec    0   1.35 MBytes       
    [  4]   8.00-9.00   sec   135 MBytes   135 MBytes/sec    0   1.42 MBytes       
    [  4]   9.00-10.00  sec   135 MBytes   135 MBytes/sec   45   1.06 MBytes       
    - - - - - - - - - - - - - - - - - - - - - - - - -
    [ ID] Interval           Transfer     Bandwidth       Retr
    [  4]   0.00-10.00  sec  1.32 GBytes   136 MBytes/sec   77             sender
    [  4]   0.00-10.00  sec  1.32 GBytes   135 MBytes/sec                  receiver


  • worker2 (87 MBytes/sec)

    Code Block
    titleresults
    collapsetrue
    Connecting to host iperf3-server, port 5201
    [  4] local 10.42.3.188 port 35472 connected to 10.43.25.161 port 5201
    [ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
    [  4]   0.00-1.00   sec  88.3 MBytes  88.3 MBytes/sec  121    697 KBytes       
    [  4]   1.00-2.00   sec  96.2 MBytes  96.3 MBytes/sec    0    796 KBytes       
    [  4]   2.00-3.00   sec  92.5 MBytes  92.5 MBytes/sec    0    881 KBytes       
    [  4]   3.00-4.00   sec  90.0 MBytes  90.0 MBytes/sec    0    957 KBytes       
    [  4]   4.00-5.00   sec  87.5 MBytes  87.5 MBytes/sec    0   1.00 MBytes       
    [  4]   5.00-6.00   sec  88.8 MBytes  88.7 MBytes/sec    0   1.06 MBytes       
    [  4]   6.00-7.00   sec  80.0 MBytes  80.0 MBytes/sec    0   1.12 MBytes       
    [  4]   7.00-8.00   sec  81.2 MBytes  81.3 MBytes/sec   25    895 KBytes       
    [  4]   8.00-9.00   sec  85.0 MBytes  85.0 MBytes/sec    0    983 KBytes       
    [  4]   9.00-10.00  sec  83.8 MBytes  83.7 MBytes/sec    0   1.03 MBytes       
    - - - - - - - - - - - - - - - - - - - - - - - - -
    [ ID] Interval           Transfer     Bandwidth       Retr
    [  4]   0.00-10.00  sec   873 MBytes  87.3 MBytes/sec  146             sender
    [  4]   0.00-10.00  sec   870 MBytes  87.0 MBytes/sec                  receiver


  • worker3 (135 MBytes/sec)

    Code Block
    titleresults
    collapsetrue
    Connecting to host iperf3-server, port 5201
    [  4] local 10.42.4.182 port 35288 connected to 10.43.25.161 port 5201
    [ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
    [  4]   0.00-1.00   sec   129 MBytes   129 MBytes/sec   45   1.17 MBytes       
    [  4]   1.00-2.00   sec   134 MBytes   134 MBytes/sec   32   1.25 MBytes       
    [  4]   2.00-3.00   sec   135 MBytes   135 MBytes/sec    0   1.32 MBytes       
    [  4]   3.00-4.00   sec   139 MBytes   139 MBytes/sec    0   1.40 MBytes       
    [  4]   4.00-5.00   sec   144 MBytes   144 MBytes/sec    0   1.47 MBytes       
    [  4]   5.00-6.00   sec   131 MBytes   131 MBytes/sec   45   1.14 MBytes       
    [  4]   6.00-7.00   sec   129 MBytes   129 MBytes/sec    0   1.25 MBytes       
    [  4]   7.00-8.00   sec   134 MBytes   134 MBytes/sec    0   1.33 MBytes       
    [  4]   8.00-9.00   sec   138 MBytes   138 MBytes/sec    0   1.39 MBytes       
    [  4]   9.00-10.00  sec   135 MBytes   135 MBytes/sec    0   1.44 MBytes       
    - - - - - - - - - - - - - - - - - - - - - - - - -
    [ ID] Interval           Transfer     Bandwidth       Retr
    [  4]   0.00-10.00  sec  1.31 GBytes   135 MBytes/sec  122             sender
    [  4]   0.00-10.00  sec  1.31 GBytes   134 MBytes/sec                  receiver


  • worker0 (2282 MBytes/sec) (iperf client and server run on the same worker)

    Code Block
    titleresults
    collapsetrue
    Connecting to host iperf3-server, port 5201
    [  4] local 10.42.6.132 port 51156 connected to 10.43.25.161 port 5201
    [ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
    [  4]   0.00-1.00   sec  2.13 GBytes  2185 MBytes/sec    0    536 KBytes       
    [  4]   1.00-2.00   sec  1.66 GBytes  1702 MBytes/sec    0    621 KBytes       
    [  4]   2.00-3.00   sec  2.10 GBytes  2154 MBytes/sec    0    766 KBytes       
    [  4]   3.00-4.00   sec  1.89 GBytes  1937 MBytes/sec    0   1.01 MBytes       
    [  4]   4.00-5.00   sec  1.87 GBytes  1914 MBytes/sec    0   1.39 MBytes       
    [  4]   5.00-6.00   sec  2.76 GBytes  2826 MBytes/sec    0   1.39 MBytes       
    [  4]   6.00-7.00   sec  1.81 GBytes  1853 MBytes/sec  792   1.09 MBytes       
    [  4]   7.00-8.00   sec  2.54 GBytes  2600 MBytes/sec    0   1.21 MBytes       
    [  4]   8.00-9.00   sec  2.70 GBytes  2763 MBytes/sec    0   1.34 MBytes       
    [  4]   9.00-10.00  sec  2.82 GBytes  2889 MBytes/sec    0   1.34 MBytes       
    - - - - - - - - - - - - - - - - - - - - - - - - -
    [ ID] Interval           Transfer     Bandwidth       Retr
    [  4]   0.00-10.00  sec  22.3 GBytes  2282 MBytes/sec  792             sender
    [  4]   0.00-10.00  sec  22.3 GBytes  2282 MBytes/sec                  receiver

Average speed (without worker0): (136 + 87 + 135) / 3 ≈ 119 MBytes/sec


Test Setup

Preconditions

  • Installed ONAP (Frankfurt)
  • Plain TCP connection between HV-VES and clients (default configuration)
  • Metric port exposed on HV-VES service

To reach the metrics endpoint of HV-VES, the following lines need to be added to the ports section of the HV-VES service configuration file:

Code Block
titleLines to add to ports section of HV-VES service configuration file
  - name: port-t-6060
    port: 6060
    protocol: TCP
    targetPort: 6060

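After redeploying the service, reachability of the new port can be verified with a port-forward; a sketch (the service name follows the default ONAP deployment, and the metrics URL path is an assumption that may differ between HV-VES versions):

Code Block
titleChecking the metric port (sketch)
# Forward the newly exposed metric port to localhost
kubectl port-forward -n onap svc/dcae-hv-ves-collector 6060:6060 &
# Probe the endpoint (URL path is an assumption; adjust as needed)
curl http://localhost:6060/monitoring/prometheus
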
Before starting the tests, download the docker image of the producer, which is available here:

View file
namehv-collector-go-client.tar.gz
height250

To load the image locally, use the command:

Code Block
docker load < hv-collector-go-client.tar.gz  

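To verify that the image is now present locally (the image name comes from the producer-pod.yaml snippet below):

Code Block
docker images | grep hv-collector-go-client
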
Modify the tools/performance/cloud/producer-pod.yaml file to use the above image and set imagePullPolicy to IfNotPresent:

Code Block
languageyml
...
spec:
  containers:
    - name: hv-collector-producer
      image: onap/org.onap.dcaegen2.collectors.hv-ves.hv-collector-go-client:latest
      imagePullPolicy: IfNotPresent
      volumeMounts:
...


To execute performance tests, we run functions from the shell script cloud-based-performance-test.sh in the HV-VES project directory ~/tools/performance/cloud/:

  1. First, we generate certificates in the ~/tools/ssl folder using gen_certs. This step only needs to be performed during the first test setup (or if the generated files have been deleted).

    Code Block
    titleGenerating certificates
    ./cloud-based-performance-test.sh gen_certs


  2. Then we call setup to send the certificates to HV-VES, deploy the Consumers, Prometheus and Grafana, and create their ConfigMaps.

    Code Block
    titleSetting up the test environment
    ./cloud-based-performance-test.sh setup


  3. After completing the previous steps, we can call the start function, which deploys the Producers and starts the test.

    Code Block
    titlePerforming the test
    ./cloud-based-performance-test.sh start

    For the start function we can use optional arguments:

    --load: should the test keep the defined number of running producers until script interruption (default: false)
    --containers: number of producer containers to create (default: 1)
    --properties-file: path to the file with benchmark properties (default: ./test.properties)
    --retention-time-minutes: retention time of messages in Kafka, in minutes (default: 60)

    Example invocations of test start:

    Code Block
    titleStarting performance test with single producers creation
    ./cloud-based-performance-test.sh start --containers 10

    The command above starts a test that creates 10 producers, each of which sends the number of messages defined in test.properties once.

    Code Block
    titleStarting performance test with constant messages load
    ./cloud-based-performance-test.sh start --load true --containers 10 --retention-time-minutes 30

    This invocation starts a load test, meaning the script will try to keep the number of running containers at 10, with a Kafka message retention of 30 minutes.


    The test.properties file contains the Producer and Consumer configurations and allows setting the following properties (defaults in parentheses):

    Producer
    hvVesAddress: HV-VES address (dcae-hv-ves-collector.onap:6061)
    client.count: number of clients per pod (1)
    message.size: size of a single message in bytes (16384)
    message.count: number of messages to be sent by each client (1000)
    message.interval: interval between messages in milliseconds (1)
    Certificates paths
    client.cert.path: path to the cert file (/ssl/client.p12)
    client.cert.pass.path: path to the cert's pass file (/ssl/client.pass)
    Consumer
    kafka.bootstrapServers: address of the Kafka service to consume from (message-router-kafka:9092)
    kafka.topics: Kafka topics to subscribe to (HV_VES_PERF3GPP)

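    Putting the defaults together, a minimal test.properties could be written as in the sketch below (assuming the usual Java-style key=value format; all values are the defaults listed above):

    Code Block
    titleExample test.properties (sketch)
    cat > test.properties <<'EOF'
    # Producer
    hvVesAddress=dcae-hv-ves-collector.onap:6061
    client.count=1
    message.size=16384
    message.count=1000
    message.interval=1
    # Certificates paths
    client.cert.path=/ssl/client.p12
    client.cert.pass.path=/ssl/client.pass
    # Consumer
    kafka.bootstrapServers=message-router-kafka:9092
    kafka.topics=HV_VES_PERF3GPP
    EOF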

    Results can be accessed under the following links:

HV-VES Performance test results

With DMaaP Kafka

Conditions

...

Raw results data with screenshots can be found in the following files:

Test Results - series 1


The tables below show the test results across a range of producer container counts.

NUMBER OF PRODUCERS | TOTAL MESSAGES PROCESSED | DIFFERENCE BETWEEN ALL MESSAGES AND SENT TO HV-VES | AVERAGE PROCESSING TIME IN HV-VES WITHOUT ROUTING [ms] | AVERAGE LATENCY TO HV-VES OUTPUT WITH ROUTING [ms] | PEAK INCOMING DATA RATE [MB/s] | PEAK PROCESSING MESSAGE QUEUE SIZE | PEAK CPU LOAD [%] | PEAK MEMORY USAGE [GB] | RESULTS PRESENTED IN GRAFANA
218000000.03541.61752.90.38


218000000.0218.51.663.20.37


218000000.02214.81.6662.40.38


218000000.02319.41.6592.20.38


218000000.0217.61.6582.60.36



NUMBER OF PRODUCERS | TOTAL MESSAGES PROCESSED | DIFFERENCE BETWEEN ALL MESSAGES AND SENT TO HV-VES | AVERAGE PROCESSING TIME IN HV-VES WITHOUT ROUTING [ms] | AVERAGE LATENCY TO HV-VES OUTPUT WITH ROUTING [ms] | PEAK INCOMING DATA RATE [MB/s] | PEAK PROCESSING MESSAGE QUEUE SIZE | PEAK CPU LOAD [%] | PEAK MEMORY USAGE [GB] | RESULTS PRESENTED IN GRAFANA
431586844824822603,2240030,4

43591938073660104083,231004,90,78


4359401591135034863.730004,90,56


436000001.51303.312004.10.39

436000000.02573.31803.50.38



NUMBER OF PRODUCERS | TOTAL MESSAGES PROCESSED | DIFFERENCE BETWEEN ALL MESSAGES AND SENT TO HV-VES | AVERAGE PROCESSING TIME IN HV-VES WITHOUT ROUTING [ms] | AVERAGE LATENCY TO HV-VES OUTPUT WITH ROUTING [ms] | PEAK INCOMING DATA RATE [MB/s] | PEAK PROCESSING MESSAGE QUEUE SIZE | PEAK CPU LOAD [%] | PEAK MEMORY USAGE [GB] | RESULTS PRESENTED IN GRAFANA
635847515256600178754.949005.80.97


635849415067080201824.745004.50.97


63592447564900172314.919003.90.93


635894410566150172004.9480050.97


635868913115410172024.9410050.96



Test Results - series 2



NUMBER OF PRODUCERS | TOTAL MESSAGES PROCESSED | DIFFERENCE BETWEEN ALL MESSAGES AND SENT TO HV-VES | AVERAGE PROCESSING TIME IN HV-VES WITHOUT ROUTING [ms] | AVERAGE LATENCY TO HV-VES OUTPUT WITH ROUTING [ms] | PEAK INCOMING DATA RATE [MB/s] | PEAK PROCESSING MESSAGE QUEUE SIZE | PEAK CPU LOAD [%] | PEAK MEMORY USAGE [GB] | RESULTS PRESENTED IN GRAFANA
218000000.0268.91.61130.35


218000000.02611.31.6233.20.35



NUMBER OF PRODUCERS | TOTAL MESSAGES PROCESSED | DIFFERENCE BETWEEN ALL MESSAGES AND SENT TO HV-VES | AVERAGE PROCESSING TIME IN HV-VES WITHOUT ROUTING [s] | AVERAGE LATENCY TO HV-VES OUTPUT WITH ROUTING [ms] | PEAK INCOMING DATA RATE [MB/s] | PEAK PROCESSING MESSAGE QUEUE SIZE | PEAK CPU LOAD [%] | PEAK MEMORY USAGE [GB] | RESULTS PRESENTED IN GRAFANA
436000000.02241.33.213040.35


436000000.02340.73.23704.20.35



NUMBER OF PRODUCERS | TOTAL MESSAGES PROCESSED | DIFFERENCE BETWEEN ALL MESSAGES AND SENT TO HV-VES | AVERAGE PROCESSING TIME IN HV-VES WITHOUT ROUTING [ms] | AVERAGE LATENCY TO HV-VES OUTPUT WITH ROUTING [ms] | PEAK INCOMING DATA RATE [MB/s] | PEAK PROCESSING MESSAGE QUEUE SIZE | PEAK CPU LOAD [%] | PEAK MEMORY USAGE [GB] | RESULTS PRESENTED IN GRAFANA
63593176836240159654.845004.31


63592177836490168344.842005.81




No DMaaP Kafka Setup

...

Modify the tools/performance/cloud scripts to match the names in your deployment, as described in the previous step. Here is a diff file (you may need to adapt it to the current state of the code):

tools.diff 

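The diff can be applied from the project root; a sketch (assuming the sources are a git checkout; otherwise patch -p1 < tools.diff works equally well):

Code Block
titleApplying the diff (sketch)
git apply tools.diff
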
Go to tools/performance/cloud and reboot the environment:

Code Block
titleRun the script
./reboot-test-environment.sh -v

Now you are ready to run the test.

Without DMaaP Kafka

Conditions

To gather tcpdump data, an additional container was added to the hv-ves deployment in Kubernetes and to producer-pod.yaml (command to retrieve the tcpdump data file: kubectl cp -n onap <pod-name>:/tcpdump.pcap -c tcpdump ./<pod-name>.pcap):

Code Block
titleTcpdump container
collapsetrue
...
containers:
  ...
  <<default containers>>
  ...
  - name: tcpdump
    image: onap-dev-local.esisoj70.emea.nsn-net.net/rjanecze/my-tcpdump:1.0.7
...
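
Once a test run has finished and the capture has been copied out with the kubectl cp command above, it can be inspected locally; a minimal sketch (assumes tcpdump is installed on the workstation):

Code Block
titleInspecting the capture (sketch)
# Read the first packets of the retrieved capture
tcpdump -r ./<pod-name>.pcap | head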

Tests were performed with the following configuration:

...

Raw results data with screenshots can be found in the following files:

To see custom Kafka metrics, you may want to change kafka-and-producers.json (located in the HV-VES project directory tools/performance/cloud/grafana/dashboards) to

...


The tables below show the test results across a range of producer container counts.

NUMBER OF PRODUCERS | TOTAL MESSAGES PROCESSED | DIFFERENCE BETWEEN ALL MESSAGES AND SENT TO HV-VES | AVERAGE PROCESSING TIME IN HV-VES WITHOUT ROUTING [ms] | AVERAGE LATENCY TO HV-VES OUTPUT WITH ROUTING [ms] | PEAK INCOMING DATA RATE [MB/s] | PEAK PROCESSING MESSAGE QUEUE SIZE | PEAK CPU LOAD [%] | PEAK MEMORY USAGE [GB] | RESULTS PRESENTED IN GRAFANA
218000000.0241.51.633.50.37


436000000.0201.73.235.70.37


654000000.022.64.8246.00.37

872000000.022.86.4148.50.37


1090000000.027.38.15528.50.38


121080000000.02789.710777.50.41

1412600000061401900013.0106309.80.99



Test Results - series 2


The tables below show the test results across a range of producer container counts.

NUMBER OF PRODUCERS | TOTAL MESSAGES PROCESSED | DIFFERENCE BETWEEN ALL MESSAGES AND SENT TO HV-VES | AVERAGE PROCESSING TIME IN HV-VES WITHOUT ROUTING [ms] | AVERAGE LATENCY TO HV-VES OUTPUT WITH ROUTING [ms] | PEAK INCOMING DATA RATE [MB/s] | PEAK PROCESSING MESSAGE QUEUE SIZE | PEAK CPU LOAD [%] | PEAK MEMORY USAGE [GB] | RESULTS PRESENTED IN GRAFANA
218000000.0251.71.633.40.37


436000000.0212.13.2144.90.37


654000000.022.54.8606.20.38


872000000.022.96.4247.40.37


1090000000.0185.58.02018.10.36



121080000000.019141.79.717169.10.44


1412600000031756816.057788.60.50
