

Environment and Resources

A Kubernetes cluster with 4 worker nodes, sharing the hardware configuration shown in the table below, is deployed in the OpenStack cloud operating system. The test components, packaged in docker containers, are then deployed on the Kubernetes cluster.

Configuration

CPU model: Intel(R) Xeon(R) CPU E5-2680 v4
No. of cores: 24
CPU clock speed [GHz]: 2.40
Total RAM [GB]: 62.9

Network Performance

Pod measurement method

To check cluster network performance, tests using Iperf3 were performed. Iperf is a tool for measuring the maximum bandwidth on IP networks; it runs in two modes: server and client. We used the docker image networkstatic/iperf3.

The following deployment creates a pod with iperf3 in server mode on one worker, plus one iperf3 client pod on each worker (as a DaemonSet).

Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: iperf3-server
  namespace: onap
  labels:
    app: iperf3-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: iperf3-server
  template:
    metadata:
      labels:
        app: iperf3-server
    spec:
      containers:
        - name: iperf3-server
          image: networkstatic/iperf3
          args: ['-s']
          ports:
            - containerPort: 5201
              name: server


---

apiVersion: v1
kind: Service
metadata:
  name: iperf3-server
  namespace: onap
spec:
  selector:
    app: iperf3-server
  ports:
    - protocol: TCP
      port: 5201
      targetPort: server



---

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: iperf3-clients
  namespace: onap
  labels:
    app: iperf3-client
spec:
  selector:
    matchLabels:
      app: iperf3-client
  template:
    metadata:
      labels:
        app: iperf3-client
    spec:
      containers:
        - name: iperf3-client
          image: networkstatic/iperf3
          command: ['/bin/sh', '-c', 'sleep infinity']

The client containers only sleep, so tests are started manually. To create the deployment, execute the following command:

kubectl create -f deployment.yaml

To find all iperf pods, execute:

kubectl -n onap get pods -o wide | grep iperf

To measure the connection between pods, run iperf3 on an iperf client pod using the following command:

kubectl -n onap exec -it <iperf-client-pod> -- iperf3 -c iperf3-server

To change the output format from MBits/sec to MBytes/sec:

kubectl -n onap exec -it <iperf-client-pod> -- iperf3 -c iperf3-server -f MBytes

To change the measurement time:

kubectl -n onap exec -it <iperf-client-pod> -- iperf3 -c iperf3-server -t <time-in-seconds>

To gather the results, the following command was executed:

kubectl -n onap exec -it <iperf-client-pod> -- iperf3 -c iperf3-server -f MBytes
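To collect results from all clients in one pass, a small shell loop can be used (a sketch: it assumes kubectl access to the onap namespace and relies on the app=iperf3-client label defined in the DaemonSet above):

for pod in $(kubectl -n onap get pods -l app=iperf3-client -o jsonpath='{.items[*].metadata.name}'); do
  # Print a header per pod, run a default 10-second test and keep a copy of the output
  echo "=== $pod ==="
  kubectl -n onap exec "$pod" -- iperf3 -c iperf3-server -f MBytes | tee "result-$pod.txt"
done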

Results of performed tests

  • worker1 (136 MBytes/sec)

    results
    Connecting to host iperf3-server, port 5201
    [  4] local 10.42.5.127 port 39752 connected to 10.43.25.161 port 5201
    [ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
    [  4]   0.00-1.00   sec   141 MBytes   141 MBytes/sec   32    673 KBytes       
    [  4]   1.00-2.00   sec   139 MBytes   139 MBytes/sec    0    817 KBytes       
    [  4]   2.00-3.00   sec   139 MBytes   139 MBytes/sec    0    936 KBytes       
    [  4]   3.00-4.00   sec   138 MBytes   137 MBytes/sec    0   1.02 MBytes       
    [  4]   4.00-5.00   sec   138 MBytes   137 MBytes/sec    0   1.12 MBytes       
    [  4]   5.00-6.00   sec   129 MBytes   129 MBytes/sec    0   1.20 MBytes       
    [  4]   6.00-7.00   sec   129 MBytes   129 MBytes/sec    0   1.27 MBytes       
    [  4]   7.00-8.00   sec   134 MBytes   134 MBytes/sec    0   1.35 MBytes       
    [  4]   8.00-9.00   sec   135 MBytes   135 MBytes/sec    0   1.42 MBytes       
    [  4]   9.00-10.00  sec   135 MBytes   135 MBytes/sec   45   1.06 MBytes       
    - - - - - - - - - - - - - - - - - - - - - - - - -
    [ ID] Interval           Transfer     Bandwidth       Retr
    [  4]   0.00-10.00  sec  1.32 GBytes   136 MBytes/sec   77             sender
    [  4]   0.00-10.00  sec  1.32 GBytes   135 MBytes/sec                  receiver
  • worker2 (87 MBytes/sec)

    results
    Connecting to host iperf3-server, port 5201
    [  4] local 10.42.3.188 port 35472 connected to 10.43.25.161 port 5201
    [ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
    [  4]   0.00-1.00   sec  88.3 MBytes  88.3 MBytes/sec  121    697 KBytes       
    [  4]   1.00-2.00   sec  96.2 MBytes  96.3 MBytes/sec    0    796 KBytes       
    [  4]   2.00-3.00   sec  92.5 MBytes  92.5 MBytes/sec    0    881 KBytes       
    [  4]   3.00-4.00   sec  90.0 MBytes  90.0 MBytes/sec    0    957 KBytes       
    [  4]   4.00-5.00   sec  87.5 MBytes  87.5 MBytes/sec    0   1.00 MBytes       
    [  4]   5.00-6.00   sec  88.8 MBytes  88.7 MBytes/sec    0   1.06 MBytes       
    [  4]   6.00-7.00   sec  80.0 MBytes  80.0 MBytes/sec    0   1.12 MBytes       
    [  4]   7.00-8.00   sec  81.2 MBytes  81.3 MBytes/sec   25    895 KBytes       
    [  4]   8.00-9.00   sec  85.0 MBytes  85.0 MBytes/sec    0    983 KBytes       
    [  4]   9.00-10.00  sec  83.8 MBytes  83.7 MBytes/sec    0   1.03 MBytes       
    - - - - - - - - - - - - - - - - - - - - - - - - -
    [ ID] Interval           Transfer     Bandwidth       Retr
    [  4]   0.00-10.00  sec   873 MBytes  87.3 MBytes/sec  146             sender
    [  4]   0.00-10.00  sec   870 MBytes  87.0 MBytes/sec                  receiver
  • worker3 (135 MBytes/sec)

    results
    Connecting to host iperf3-server, port 5201
    [  4] local 10.42.4.182 port 35288 connected to 10.43.25.161 port 5201
    [ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
    [  4]   0.00-1.00   sec   129 MBytes   129 MBytes/sec   45   1.17 MBytes       
    [  4]   1.00-2.00   sec   134 MBytes   134 MBytes/sec   32   1.25 MBytes       
    [  4]   2.00-3.00   sec   135 MBytes   135 MBytes/sec    0   1.32 MBytes       
    [  4]   3.00-4.00   sec   139 MBytes   139 MBytes/sec    0   1.40 MBytes       
    [  4]   4.00-5.00   sec   144 MBytes   144 MBytes/sec    0   1.47 MBytes       
    [  4]   5.00-6.00   sec   131 MBytes   131 MBytes/sec   45   1.14 MBytes       
    [  4]   6.00-7.00   sec   129 MBytes   129 MBytes/sec    0   1.25 MBytes       
    [  4]   7.00-8.00   sec   134 MBytes   134 MBytes/sec    0   1.33 MBytes       
    [  4]   8.00-9.00   sec   138 MBytes   138 MBytes/sec    0   1.39 MBytes       
    [  4]   9.00-10.00  sec   135 MBytes   135 MBytes/sec    0   1.44 MBytes       
    - - - - - - - - - - - - - - - - - - - - - - - - -
    [ ID] Interval           Transfer     Bandwidth       Retr
    [  4]   0.00-10.00  sec  1.31 GBytes   135 MBytes/sec  122             sender
    [  4]   0.00-10.00  sec  1.31 GBytes   134 MBytes/sec                  receiver
  • worker0 (2282 MBytes/sec) (iperf client and server run on the same worker)

    results
    Connecting to host iperf3-server, port 5201
    [  4] local 10.42.6.132 port 51156 connected to 10.43.25.161 port 5201
    [ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
    [  4]   0.00-1.00   sec  2.13 GBytes  2185 MBytes/sec    0    536 KBytes       
    [  4]   1.00-2.00   sec  1.66 GBytes  1702 MBytes/sec    0    621 KBytes       
    [  4]   2.00-3.00   sec  2.10 GBytes  2154 MBytes/sec    0    766 KBytes       
    [  4]   3.00-4.00   sec  1.89 GBytes  1937 MBytes/sec    0   1.01 MBytes       
    [  4]   4.00-5.00   sec  1.87 GBytes  1914 MBytes/sec    0   1.39 MBytes       
    [  4]   5.00-6.00   sec  2.76 GBytes  2826 MBytes/sec    0   1.39 MBytes       
    [  4]   6.00-7.00   sec  1.81 GBytes  1853 MBytes/sec  792   1.09 MBytes       
    [  4]   7.00-8.00   sec  2.54 GBytes  2600 MBytes/sec    0   1.21 MBytes       
    [  4]   8.00-9.00   sec  2.70 GBytes  2763 MBytes/sec    0   1.34 MBytes       
    [  4]   9.00-10.00  sec  2.82 GBytes  2889 MBytes/sec    0   1.34 MBytes       
    - - - - - - - - - - - - - - - - - - - - - - - - -
    [ ID] Interval           Transfer     Bandwidth       Retr
    [  4]   0.00-10.00  sec  22.3 GBytes  2282 MBytes/sec  792             sender
    [  4]   0.00-10.00  sec  22.3 GBytes  2282 MBytes/sec                  receiver

Average speed (excluding worker0): 119 MBytes/sec
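This is the arithmetic mean of the three cross-node measurements: (136 + 87 + 135) / 3 ≈ 119 MBytes/sec.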

HV-VES Performance

Preconditions

Before starting the tests, download the docker image of the producer, which is available here. To extract the image locally, use the command:

docker load < hv-collector-go-client.tar.gz  
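To verify that the image is available locally, list the docker images (the image name used in the grep below is an assumption; check the output of docker load for the exact repository and tag):

docker images | grep hv-collector-go-client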

With DMaaP Kafka

Conditions

Tests were performed with 5 repetitions for each configuration shown in the table below.

Number of producers | Messages per producer | Payload size [B] | Interval [ms]
2 | 90000 | 8192 | 10
4 | 90000 | 8192 | 10
6 | 60000 | 8192 | 10
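As a sanity check on these settings: with an 8192 B payload sent every 10 ms, each producer generates about 8192 B × 100/s ≈ 0.82 MB/s, so the expected incoming data rate is roughly 1.6 MB/s for 2 producers, 3.3 MB/s for 4, and 4.9 MB/s for 6, which matches the peak incoming data rates reported in the results below.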

Raw results data

Raw results data with screenshots can be found in the following files:

Test Results - series 1

The tables below show the test results for increasing numbers of producer containers.

Number of producers | Total messages processed | Difference between all messages and sent to HV-VES | Avg processing time in HV-VES without routing [ms] | Avg latency to HV-VES output with routing [ms] | Peak incoming data rate [MB/s] | Peak processing message queue size | Peak CPU load [%] | Peak memory usage [GB]
2 | 180000 | 0.0 | 3 | 54 | 1.6 | 175 | 2.9 | 0.38
2 | 180000 | 0.0 | 2 | 18.5 | 1.6 | 6 | 3.2 | 0.37
2 | 180000 | 0.0 | 2 | 214.8 | 1.6 | 66 | 2.4 | 0.38
2 | 180000 | 0.0 | 2 | 319.4 | 1.6 | 59 | 2.2 | 0.38
2 | 180000 | 0.0 | 2 | 17.6 | 1.6 | 58 | 2.6 | 0.36


Number of producers | Total messages processed | Difference between all messages and sent to HV-VES | Avg processing time in HV-VES without routing [ms] | Avg latency to HV-VES output with routing [ms] | Peak incoming data rate [MB/s] | Peak processing message queue size | Peak CPU load [%] | Peak memory usage [GB]
4 | 315868 | 44824 | 8 | 2260 | 3.2 | 2400 | 3 | 0.4
4 | 359193 | 807 | 3660 | 10408 | 3.2 | 3100 | 4.9 | 0.78
4 | 359401 | 591 | 1350 | 3486 | 3.7 | 3000 | 4.9 | 0.56
4 | 360000 | 0 | 1.5 | 130 | 3.3 | 1200 | 4.1 | 0.39
4 | 360000 | 0.0 | 2 | 57 | 3.3 | 180 | 3.5 | 0.38


Number of producers | Total messages processed | Difference between all messages and sent to HV-VES | Avg processing time in HV-VES without routing [ms] | Avg latency to HV-VES output with routing [ms] | Peak incoming data rate [MB/s] | Peak processing message queue size | Peak CPU load [%] | Peak memory usage [GB]
6 | 358475 | 1525 | 6600 | 17875 | 4.9 | 4900 | 5.8 | 0.97
6 | 358494 | 1506 | 7080 | 20182 | 4.7 | 4500 | 4.5 | 0.97
6 | 359244 | 756 | 4900 | 17231 | 4.9 | 1900 | 3.9 | 0.93
6 | 358944 | 1056 | 6150 | 17200 | 4.9 | 4800 | 5 | 0.97
6 | 358689 | 1311 | 5410 | 17202 | 4.9 | 4100 | 5 | 0.96

Test Results - series 2


Number of producers | Total messages processed | Difference between all messages and sent to HV-VES | Avg processing time in HV-VES without routing [ms] | Avg latency to HV-VES output with routing [ms] | Peak incoming data rate [MB/s] | Peak processing message queue size | Peak CPU load [%] | Peak memory usage [GB]
2 | 180000 | 0.0 | 2 | 68.9 | 1.6 | 11 | 3 | 0.35
2 | 180000 | 0.0 | 2 | 611.3 | 1.6 | 23 | 3.2 | 0.35


Number of producers | Total messages processed | Difference between all messages and sent to HV-VES | Avg processing time in HV-VES without routing [ms] | Avg latency to HV-VES output with routing [ms] | Peak incoming data rate [MB/s] | Peak processing message queue size | Peak CPU load [%] | Peak memory usage [GB]
4 | 360000 | 0.0 | 2 | 241.3 | 3.2 | 130 | 4 | 0.35
4 | 360000 | 0.0 | 2 | 340.7 | 3.2 | 370 | 4.2 | 0.35


Number of producers | Total messages processed | Difference between all messages and sent to HV-VES | Avg processing time in HV-VES without routing [ms] | Avg latency to HV-VES output with routing [ms] | Peak incoming data rate [MB/s] | Peak processing message queue size | Peak CPU load [%] | Peak memory usage [GB]
6 | 359317 | 683 | 6240 | 15965 | 4.8 | 4500 | 4.3 | 1
6 | 359217 | 783 | 6490 | 16834 | 4.8 | 4200 | 5.8 | 1


