Test architecture

Client tests are conducted using the following architecture:

(test architecture diagram)

  • HV-VES Client - produces a high volume of events for processing.
  • Processing Consumer - consumes events from Kafka topics and creates performance metrics.
  • Offset Consumer - reads Kafka offsets.
  • Prometheus - requests performance metrics from HV-VES, the Processing Consumer and the Offset Consumer, and provides the data to Grafana.
  • Grafana - delivers analytics and their visualization.

The link between the HV-VES Client and HV-VES is TLS-secured (the provided scripts generate certificates and place them on the proper containers).

Info

Note: In the "Without DMaaP Kafka" tests, the DMaaP/Kafka service was substituted with wurstmeister/kafka.
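
For illustration, a minimal standalone substitution could look like the sketch below. The container name, published port and the Zookeeper address are assumptions made for this sketch, not the exact setup used in the tests (wurstmeister/kafka needs a reachable Zookeeper instance):

Code Block
titleStandalone Kafka substitute (sketch)
# Run a single wurstmeister/kafka broker; 'zookeeper:2181' is an assumed address
docker run -d --name kafka -p 9092:9092 \
  -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
  -e KAFKA_ADVERTISED_HOST_NAME=kafka \
  -e KAFKA_ADVERTISED_PORT=9092 \
  wurstmeister/kafka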

Environment and Resources

A Kubernetes cluster with 4 worker nodes, sharing the hardware configuration shown in the table below, is deployed in the OpenStack cloud operating system. The test components are then deployed on the Kubernetes cluster in Docker containers.

Configuration

CPU

Model: Intel(R) Xeon(R) CPU E5-2680 v4
No. of cores: 24
CPU clock speed [GHz]: 2.40
Total RAM [GB]: 62.9

Network Performance

Pod measurement method

To check cluster network performance, tests using iperf3 were performed. Iperf is a tool for measuring the maximum achievable bandwidth on IP networks; it runs in two modes: server and client. We used the networkstatic/iperf3 Docker image.

The following deployment creates one pod with iperf in server mode on one worker, plus an iperf client pod on each worker.

Code Block
titleDeployment
collapsetrue
apiVersion: apps/v1
kind: Deployment
metadata:
  name: iperf3-server
  namespace: onap
  labels:
    app: iperf3-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: iperf3-server
  template:
    metadata:
      labels:
        app: iperf3-server
    spec:
      containers:
        - name: iperf3-server
          image: networkstatic/iperf3
          args: ['-s']
          ports:
            - containerPort: 5201
              name: server


---

apiVersion: v1
kind: Service
metadata:
  name: iperf3-server
  namespace: onap
spec:
  selector:
    app: iperf3-server
  ports:
    - protocol: TCP
      port: 5201
      targetPort: server



---

apiVersion: apps/v1
kind: DaemonSet

metadata:
  name: iperf3-clients
  namespace: onap
  labels:
    app: iperf3-client
spec:
  selector:
    matchLabels:
      app: iperf3-client
  template:
    metadata:
      labels:
        app: iperf3-client
    spec:
      containers:
        - name: iperf3-client
          image: networkstatic/iperf3
          command: ['/bin/sh', '-c', 'sleep infinity']

To create the deployment, execute the following command:

Code Block
kubectl create -f deployment.yaml
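
When the measurements are finished, the same manifest can be used to clean up the iperf pods (a standard kubectl pattern, not part of the original procedure):

Code Block
kubectl -n onap delete -f deployment.yaml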

To find all iperf pods, execute:

Code Block
kubectl -n onap get pods -o wide | grep iperf

To measure the connection between pods, run iperf on an iperf-client pod using the following command:

Code Block
kubectl -n onap exec -it <iperf-client-pod> -- iperf3 -c iperf3-server

To change the output format from Mbits/sec to MBytes/sec:

Code Block
kubectl -n onap exec -it <iperf-client-pod> -- iperf3 -c iperf3-server -f MBytes

To change the measurement time:

Code Block
kubectl -n onap exec -it <iperf-client-pod> -- iperf3 -c iperf3-server -t <time-in-seconds>

To gather the results, the following command was executed:

Code Block
kubectl -n onap exec -it <iperf-client-pod> -- iperf3 -c iperf3-server -f MBytes

Results of performed tests

  • worker1 (136 MBytes/sec)

    Code Block
    titleresults
    collapsetrue
    Connecting to host iperf3-server, port 5201
    [  4] local 10.42.5.127 port 39752 connected to 10.43.25.161 port 5201
    [ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
    [  4]   0.00-1.00   sec   141 MBytes   141 MBytes/sec   32    673 KBytes
    [  4]   1.00-2.00   sec   139 MBytes   139 MBytes/sec    0    817 KBytes
    [  4]   2.00-3.00   sec   139 MBytes   139 MBytes/sec    0    936 KBytes
    [  4]   3.00-4.00   sec   138 MBytes   137 MBytes/sec    0   1.02 MBytes
    [  4]   4.00-5.00   sec   138 MBytes   137 MBytes/sec    0   1.12 MBytes
    [  4]   5.00-6.00   sec   129 MBytes   129 MBytes/sec    0   1.20 MBytes
    [  4]   6.00-7.00   sec   129 MBytes   129 MBytes/sec    0   1.27 MBytes
    [  4]   7.00-8.00   sec   134 MBytes   134 MBytes/sec    0   1.35 MBytes
    [  4]   8.00-9.00   sec   135 MBytes   135 MBytes/sec    0   1.42 MBytes
    [  4]   9.00-10.00  sec   135 MBytes   135 MBytes/sec   45   1.06 MBytes
    - - - - - - - - - - - - - - - - - - - - - - - - -
    [ ID] Interval           Transfer     Bandwidth       Retr
    [  4]   0.00-10.00  sec  1.32 GBytes   136 MBytes/sec   77             sender
    [  4]   0.00-10.00  sec  1.32 GBytes   135 MBytes/sec                  receiver


  • worker2 (87 MBytes/sec)

    Code Block
    titleresults
    collapsetrue
    Connecting to host iperf3-server, port 5201
    [  4] local 10.42.3.188 port 35472 connected to 10.43.25.161 port 5201
    [ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
    [  4]   0.00-1.00   sec  88.3 MBytes  88.3 MBytes/sec  121    697 KBytes
    [  4]   1.00-2.00   sec  96.2 MBytes  96.3 MBytes/sec    0    796 KBytes
    [  4]   2.00-3.00   sec  92.5 MBytes  92.5 MBytes/sec    0    881 KBytes
    [  4]   3.00-4.00   sec  90.0 MBytes  90.0 MBytes/sec    0    957 KBytes
    [  4]   4.00-5.00   sec  87.5 MBytes  87.5 MBytes/sec    0   1.00 MBytes
    [  4]   5.00-6.00   sec  88.8 MBytes  88.7 MBytes/sec    0   1.06 MBytes
    [  4]   6.00-7.00   sec  80.0 MBytes  80.0 MBytes/sec    0   1.12 MBytes
    [  4]   7.00-8.00   sec  81.2 MBytes  81.3 MBytes/sec   25    895 KBytes
    [  4]   8.00-9.00   sec  85.0 MBytes  85.0 MBytes/sec    0    983 KBytes
    [  4]   9.00-10.00  sec  83.8 MBytes  83.7 MBytes/sec    0   1.03 MBytes
    - - - - - - - - - - - - - - - - - - - - - - - - -
    [ ID] Interval           Transfer     Bandwidth       Retr
    [  4]   0.00-10.00  sec   873 MBytes  87.3 MBytes/sec  146             sender
    [  4]   0.00-10.00  sec   870 MBytes  87.0 MBytes/sec                  receiver


  • worker3 (135 MBytes/sec)

    Code Block
    titleresults
    collapsetrue
    Connecting to host iperf3-server, port 5201
    [  4] local 10.42.4.182 port 35288 connected to 10.43.25.161 port 5201
    [ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
    [  4]   0.00-1.00   sec   129 MBytes   129 MBytes/sec   45   1.17 MBytes
    [  4]   1.00-2.00   sec   134 MBytes   134 MBytes/sec   32   1.25 MBytes
    [  4]   2.00-3.00   sec   135 MBytes   135 MBytes/sec    0   1.32 MBytes
    [  4]   3.00-4.00   sec   139 MBytes   139 MBytes/sec    0   1.40 MBytes
    [  4]   4.00-5.00   sec   144 MBytes   144 MBytes/sec    0   1.47 MBytes
    [  4]   5.00-6.00   sec   131 MBytes   131 MBytes/sec   45   1.14 MBytes
    [  4]   6.00-7.00   sec   129 MBytes   129 MBytes/sec    0   1.25 MBytes
    [  4]   7.00-8.00   sec   134 MBytes   134 MBytes/sec    0   1.33 MBytes
    [  4]   8.00-9.00   sec   138 MBytes   138 MBytes/sec    0   1.39 MBytes
    [  4]   9.00-10.00  sec   135 MBytes   135 MBytes/sec    0   1.44 MBytes
    - - - - - - - - - - - - - - - - - - - - - - - - -
    [ ID] Interval           Transfer     Bandwidth       Retr
    [  4]   0.00-10.00  sec  1.31 GBytes   135 MBytes/sec  122             sender
    [  4]   0.00-10.00  sec  1.31 GBytes   134 MBytes/sec                  receiver


  • worker0 (2282 MBytes/sec) (iperf client and server exist on the same worker)

    Code Block
    titleresults
    collapsetrue
    Connecting to host iperf3-server, port 5201
    [  4] local 10.42.6.132 port 51156 connected to 10.43.25.161 port 5201
    [ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
    [  4]   0.00-1.00   sec  2.13 GBytes  2185 MBytes/sec    0    536 KBytes
    [  4]   1.00-2.00   sec  1.66 GBytes  1702 MBytes/sec    0    621 KBytes
    [  4]   2.00-3.00   sec  2.10 GBytes  2154 MBytes/sec    0    766 KBytes
    [  4]   3.00-4.00   sec  1.89 GBytes  1937 MBytes/sec    0   1.01 MBytes
    [  4]   4.00-5.00   sec  1.87 GBytes  1914 MBytes/sec    0   1.39 MBytes
    [  4]   5.00-6.00   sec  2.76 GBytes  2826 MBytes/sec    0   1.39 MBytes
    [  4]   6.00-7.00   sec  1.81 GBytes  1853 MBytes/sec  792   1.09 MBytes
    [  4]   7.00-8.00   sec  2.54 GBytes  2600 MBytes/sec    0   1.21 MBytes
    [  4]   8.00-9.00   sec  2.70 GBytes  2763 MBytes/sec    0   1.34 MBytes
    [  4]   9.00-10.00  sec  2.82 GBytes  2889 MBytes/sec    0   1.34 MBytes
    - - - - - - - - - - - - - - - - - - - - - - - - -
    [ ID] Interval           Transfer     Bandwidth       Retr
    [  4]   0.00-10.00  sec  22.3 GBytes  2282 MBytes/sec  792             sender
    [  4]   0.00-10.00  sec  22.3 GBytes  2282 MBytes/sec                  receiver

Average speed (without worker0): 119 MBytes/sec ((136 + 87 + 135) / 3 ≈ 119)

HV-VES Performance


Test Setup

Preconditions

  • Installed ONAP (Frankfurt)
  • Plain TCP connection between HV-VES and clients (default configuration)
  • Metric port exposed on HV-VES service

To reach the metrics endpoint in HV-VES, add the following lines to the ports section of the HV-VES service configuration file:

Code Block
titleLines to add to the ports section of the HV-VES service configuration file
  - name: port-t-6060
    port: 6060
    protocol: TCP
    targetPort: 6060
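
One way to apply this change, assuming the HV-VES service is named dcae-hv-ves-collector (the name used by the default hvVesAddress later in this page), is to edit the service in place:

Code Block
kubectl -n onap edit service dcae-hv-ves-collector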

Before starting the tests, download the Docker image of the producer, which is available here:

(attachment: hv-collector-go-client.tar.gz)

To load the image locally, use the command:

Code Block
docker load < hv-collector-go-client.tar.gz  
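
To verify that the image was loaded, you can list the local images (the image name is the one referenced in producer-pod.yaml below):

Code Block
docker images | grep hv-collector-go-client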

Modify the tools/performance/cloud/producer-pod.yaml file to use the above image and set imagePullPolicy to IfNotPresent:

Code Block
languageyml
...
spec:
  containers:
    - name: hv-collector-producer
      image: onap/org.onap.dcaegen2.collectors.hv-ves.hv-collector-go-client:latest
      imagePullPolicy: IfNotPresent
      volumeMounts:
...


To execute the performance tests, we run functions from the shell script cloud-based-performance-test.sh in the HV-VES project directory ~/tools/performance/cloud/:

  1. First, we have to generate certificates in the ~/tools/ssl folder by using gen_certs. This step only needs to be performed during the first test setup (or if the generated files have been deleted).

    Code Block
    titleGenerating certificates
    ./cloud-based-performance-test.sh gen_certs


  2. Then we call setup in order to send the certificates to HV-VES, deploy the Consumers, Prometheus and Grafana, and create their ConfigMaps.

    Code Block
    titleSetting up the test environment
    ./cloud-based-performance-test.sh setup


  3. After completing the previous steps, we can call the start function, which deploys the Producers and starts the test.

    Code Block
    titlePerforming the test
    ./cloud-based-performance-test.sh start

    For the start function we can use optional arguments (defaults in parentheses):

    --load - should the test keep the defined number of running producers until the script is interrupted (false)
    --containers - number of producer containers to create (1)
    --properties-file - path to the file with benchmark properties (./test.properties)
    --retention-time-minutes - retention time of messages in Kafka, in minutes (60)

    Example invocations of test start:

    Code Block
    titleStarting performance test with single producers creation
    ./cloud-based-performance-test.sh start --containers 10

    The command above starts a test that creates 10 producers, each of which sends the number of messages defined in test.properties once.

    Code Block
    titleStarting performance test with constant messages load
    ./cloud-based-performance-test.sh start --load true --containers 10 --retention-time-minutes 30

    This invocation starts a load test, meaning the script will try to keep the number of running containers at 10, with a Kafka message retention time of 30 minutes.


    The test.properties file contains the Producer and Consumer configuration and allows setting the following properties (defaults in parentheses):

    Producer
    hvVesAddress - HV-VES address (dcae-hv-ves-collector.onap:6061)
    client.count - number of clients per pod (1)
    message.size - size of a single message in bytes (16384)
    message.count - number of messages to be sent by each client (1000)
    message.interval - interval between messages in milliseconds (1)
    Certificates paths
    client.cert.path - path to the cert file (/ssl/client.p12)
    client.cert.pass.path - path to the cert's password file (/ssl/client.pass)
    Consumer
    kafka.bootstrapServers - address of the Kafka service to consume from (message-router-kafka:9092)
    kafka.topics - Kafka topics to subscribe to (HV_VES_PERF3GPP)
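
    For reference, a test.properties sketch built from the defaults listed above (the actual file in the repository may differ in layout and comments):

    Code Block
    titletest.properties (sketch)
    hvVesAddress=dcae-hv-ves-collector.onap:6061
    client.count=1
    message.size=16384
    message.count=1000
    message.interval=1
    client.cert.path=/ssl/client.p12
    client.cert.pass.path=/ssl/client.pass
    kafka.bootstrapServers=message-router-kafka:9092
    kafka.topics=HV_VES_PERF3GPP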


    Results can be accessed under the following links:

HV-VES Performance test results

With DMaaP Kafka

Conditions

...

Raw results data with screenshots can be found in the following files:

Test Results - series 1

Expand
titleClick here to see results...

The tables below show the test results across a wide range of container counts.

NUMBER OF PRODUCERS | TOTAL MESSAGES PROCESSED | DIFFERENCE BETWEEN ALL MESSAGES AND SENT TO HV-VES | AVERAGE PROCESSING TIME IN HV-VES WITHOUT ROUTING [ms] | AVERAGE LATENCY TO HV-VES OUTPUT WITH ROUTING [ms] | PEAK INCOMING DATA RATE [MB/s] | PEAK PROCESSING MESSAGE QUEUE SIZE | PEAK CPU LOAD [%] | PEAK MEMORY USAGE [GB] | RESULTS PRESENTED IN GRAFANA
2 | 1800000 | 0.0 | 35 | 4 | 1.6 | 175 | 2.9 | 0.38 | (screenshots)
2 | 1800000 | 0.0 | 21 | 8.5 | 1.6 | 6 | 3.2 | 0.37 | (screenshots)
2 | 1800000 | 0.0 | 22 | 14.8 | 1.6 | 66 | 2.4 | 0.38 | (screenshots)
2 | 1800000 | 0.0 | 23 | 19.4 | 1.6 | 59 | 2.2 | 0.38 | (screenshots)
2 | 1800000 | 0.0 | 21 | 7.6 | 1.6 | 58 | 2.6 | 0.36 | (screenshots)


NUMBER OF PRODUCERS | TOTAL MESSAGES PROCESSED | DIFFERENCE BETWEEN ALL MESSAGES AND SENT TO HV-VES | AVERAGE PROCESSING TIME IN HV-VES WITHOUT ROUTING [ms] | AVERAGE LATENCY TO HV-VES OUTPUT WITH ROUTING [ms] | PEAK INCOMING DATA RATE [MB/s] | PEAK PROCESSING MESSAGE QUEUE SIZE | PEAK CPU LOAD [%] | PEAK MEMORY USAGE [GB] | RESULTS PRESENTED IN GRAFANA
4 | 3158684 | 4 | 82482 | 260 | 3.2 | 2400 | 3 | 0.4 | (screenshots)
4 | 3591938 | 0 | 73660 | 10408 | 3.2 | 3100 | 4.9 | 0.78 | (screenshots)
4 | 3594015 | 9 | 11350 | 3486 | 3.7 | 3000 | 4.9 | 0.56 | (screenshots)
4 | 3600000 | 1.5 | 1 | 30 | 3.3 | 1200 | 4.1 | 0.39 | (screenshots)
4 | 3600000 | 0.0 | 25 | 7 | 3.3 | 180 | 3.5 | 0.38 | (screenshots)


NUMBER OF PRODUCERS | TOTAL MESSAGES PROCESSED | DIFFERENCE BETWEEN ALL MESSAGES AND SENT TO HV-VES | AVERAGE PROCESSING TIME IN HV-VES WITHOUT ROUTING [ms] | AVERAGE LATENCY TO HV-VES OUTPUT WITH ROUTING [ms] | PEAK INCOMING DATA RATE [MB/s] | PEAK PROCESSING MESSAGE QUEUE SIZE | PEAK CPU LOAD [%] | PEAK MEMORY USAGE [GB] | RESULTS PRESENTED IN GRAFANA
6 | 3584751 | 5 | 256600 | 17875 | 4.9 | 4900 | 5.8 | 0.97 | (screenshots)
6 | 3584941 | 50 | 67080 | 20182 | 4.7 | 4500 | 4.5 | 0.97 | (screenshots)
6 | 3592447 | 5 | 64900 | 17231 | 4.9 | 1900 | 3.9 | 0.93 | (screenshots)
6 | 3589441 | 0 | 566150 | 17200 | 4.9 | 4800 | 5 | 0.97 | (screenshots)
6 | 3586891 | 3 | 115410 | 17202 | 4.9 | 4100 | 5 | 0.96 | (screenshots)


Test Results - series 2

Expand
titleClick here to see results...


NUMBER OF PRODUCERS | TOTAL MESSAGES PROCESSED | DIFFERENCE BETWEEN ALL MESSAGES AND SENT TO HV-VES | AVERAGE PROCESSING TIME IN HV-VES WITHOUT ROUTING [ms] | AVERAGE LATENCY TO HV-VES OUTPUT WITH ROUTING [ms] | PEAK INCOMING DATA RATE [MB/s] | PEAK PROCESSING MESSAGE QUEUE SIZE | PEAK CPU LOAD [%] | PEAK MEMORY USAGE [GB] | RESULTS PRESENTED IN GRAFANA
2 | 1800000 | 0.0 | 26 | 8.9 | 1.6 | 11 | 3 | 0.35 | (screenshots)
2 | 1800000 | 0.0 | 26 | 11.3 | 1.6 | 23 | 3.2 | 0.35 | (screenshots)


NUMBER OF PRODUCERS | TOTAL MESSAGES PROCESSED | DIFFERENCE BETWEEN ALL MESSAGES AND SENT TO HV-VES | AVERAGE PROCESSING TIME IN HV-VES WITHOUT ROUTING [ms] | AVERAGE LATENCY TO HV-VES OUTPUT WITH ROUTING [ms] | PEAK INCOMING DATA RATE [MB/s] | PEAK PROCESSING MESSAGE QUEUE SIZE | PEAK CPU LOAD [%] | PEAK MEMORY USAGE [GB] | RESULTS PRESENTED IN GRAFANA
4 | 3600000 | 0.0 | 22 | 41.3 | 3.2 | 130 | 4 | 0.35 | (screenshots)
4 | 3600000 | 0.0 | 23 | 40.7 | 3.2 | 370 | 4.2 | 0.35 | (screenshots)


NUMBER OF PRODUCERS | TOTAL MESSAGES PROCESSED | DIFFERENCE BETWEEN ALL MESSAGES AND SENT TO HV-VES | AVERAGE PROCESSING TIME IN HV-VES WITHOUT ROUTING [ms] | AVERAGE LATENCY TO HV-VES OUTPUT WITH ROUTING [ms] | PEAK INCOMING DATA RATE [MB/s] | PEAK PROCESSING MESSAGE QUEUE SIZE | PEAK CPU LOAD [%] | PEAK MEMORY USAGE [GB] | RESULTS PRESENTED IN GRAFANA
6 | 3593176 | 8 | 36240 | 15965 | 4.8 | 4500 | 4.3 | 1 | (screenshots)
6 | 3592177 | 8 | 36490 | 16834 | 4.8 | 4200 | 5.8 | 1 | (screenshots)



No DMaaP Kafka Setup

...

Raw results data with screenshots can be found in the following files:

To see custom Kafka metrics, you may want to change kafka-and-producers.json (located in the HV-VES project directory tools/performance/cloud/grafana/dashboards) to

...

Expand
titleClick here to see results...

The tables below show the test results across a wide range of container counts.

NUMBER OF PRODUCERS | TOTAL MESSAGES PROCESSED | DIFFERENCE BETWEEN ALL MESSAGES AND SENT TO HV-VES | AVERAGE PROCESSING TIME IN HV-VES WITHOUT ROUTING [ms] | AVERAGE LATENCY TO HV-VES OUTPUT WITH ROUTING [ms] | PEAK INCOMING DATA RATE [MB/s] | PEAK PROCESSING MESSAGE QUEUE SIZE | PEAK CPU LOAD [%] | PEAK MEMORY USAGE [GB] | RESULTS PRESENTED IN GRAFANA
2 | 1800000 | 0.0 | 24 | 1.5 | 1.6 | 3 | 3.5 | 0.37 | (screenshots)
4 | 3600000 | 0.0 | 20 | 1.7 | 3.2 | 3 | 5.7 | 0.37 | (screenshots)
6 | 5400000 | 0.0 | 2 | 2.6 | 4.8 | 24 | 6.0 | 0.37 | (screenshots)
8 | 7200000 | 0.0 | 2 | 2.8 | 6.4 | 14 | 8.5 | 0.37 | (screenshots)
10 | 9000000 | 0.0 | 2 | 7.3 | 8.1 | 552 | 8.5 | 0.38 | (screenshots)
12 | 10800000 | 0.0 | 2 | 78 | 9.7 | 1077 | 7.5 | 0.41 | (screenshots)
14 | 12600000 | 6 | 1401 | 9000 | 13.0 | 10630 | 9.8 | 0.99 | (screenshots)


Test Results - series 2

Expand
titleClick here to see results...

The tables below show the test results across a wide range of container counts.

NUMBER OF PRODUCERS | TOTAL MESSAGES PROCESSED | DIFFERENCE BETWEEN ALL MESSAGES AND SENT TO HV-VES | AVERAGE PROCESSING TIME IN HV-VES WITHOUT ROUTING [ms] | AVERAGE LATENCY TO HV-VES OUTPUT WITH ROUTING [ms] | PEAK INCOMING DATA RATE [MB/s] | PEAK PROCESSING MESSAGE QUEUE SIZE | PEAK CPU LOAD [%] | PEAK MEMORY USAGE [GB] | RESULTS PRESENTED IN GRAFANA
2 | 1800000 | 0.0 | 25 | 1.7 | 1.6 | 3 | 3.4 | 0.37 | (screenshots)
4 | 3600000 | 0.0 | 21 | 2.1 | 3.2 | 14 | 4.9 | 0.37 | (screenshots)
6 | 5400000 | 0.0 | 2 | 2.5 | 4.8 | 60 | 6.2 | 0.38 | (screenshots)
8 | 7200000 | 0.0 | 2 | 2.9 | 6.4 | 24 | 7.4 | 0.37 | (screenshots)
10 | 9000000 | 0.0 | 18 | 5.5 | 8.0 | 201 | 8.1 | 0.36 | (screenshots)
12 | 10800000 | 0.0 | 19 | 141.7 | 9.7 | 1716 | 9.1 | 0.44 | (screenshots)
14 | 12600000 | 3 | 17 | 568 | 16.0 | 5778 | 8.6 | 0.50 | (screenshots)