Environment and Resources
A Kubernetes cluster with 4 worker nodes, sharing the hardware configuration shown in the table below, is deployed on the OpenStack cloud operating system. The test components, packaged as Docker containers, are deployed on this Kubernetes cluster.
Configuration | | |
---|---|---|
CPU | Model | Intel(R) Xeon(R) CPU E5-2680 v4 |
CPU | No. of cores | 24 |
CPU | Clock speed [GHz] | 2.40 |
RAM | Total RAM [GB] | 62.9 |
Network Performance
Pod measurement method
To check cluster network performance, tests using iperf3 were performed. Iperf is a tool for measuring the maximum achievable bandwidth on IP networks; it runs in two modes: server and client. We used the Docker image networkstatic/iperf3.
The following deployment creates one pod with iperf3 in server mode on a single worker, and a DaemonSet with one iperf3 client pod on each worker.
Code Block
apiVersion: apps/v1
kind: Deployment
metadata:
  name: iperf3-server
  namespace: onap
  labels:
    app: iperf3-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: iperf3-server
  template:
    metadata:
      labels:
        app: iperf3-server
    spec:
      containers:
        - name: iperf3-server
          image: networkstatic/iperf3
          args: ['-s']
          ports:
            - containerPort: 5201
              name: server
---
apiVersion: v1
kind: Service
metadata:
  name: iperf3-server
  namespace: onap
spec:
  selector:
    app: iperf3-server
  ports:
    - protocol: TCP
      port: 5201
      targetPort: server
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: iperf3-clients
  namespace: onap
  labels:
    app: iperf3-client
spec:
  selector:
    matchLabels:
      app: iperf3-client
  template:
    metadata:
      labels:
        app: iperf3-client
    spec:
      containers:
        - name: iperf3-client
          image: networkstatic/iperf3
          command: ['/bin/sh', '-c', 'sleep infinity']
To create the deployment, execute the following command:
Code Block
kubectl create -f deployment.yaml
To find all iperf3 pods, execute:
Code Block
kubectl -n onap get pods -o wide | grep iperf
To measure the connection between pods, run iperf3 on an iperf3 client pod using the following command:
Code Block
kubectl -n onap exec -it <iperf-client-pod> -- iperf3 -c iperf3-server
To change the output format from MBits/sec to MBytes/sec:
Code Block
kubectl -n onap exec -it <iperf-client-pod> -- iperf3 -c iperf3-server -f MBytes
To change the measurement time:
Code Block
kubectl -n onap exec -it <iperf-client-pod> -- iperf3 -c iperf3-server -t <time-in-seconds>
To gather the results, the following command was executed:
Code Block
kubectl -n onap exec -it <iperf-client-pod> -- iperf3 -c iperf3-server -f MBytes
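To avoid copying the sender-side average out of each report by hand, the saved iperf3 output can be parsed with awk. This is a sketch that was not part of the original procedure; it assumes each client's report has been redirected to a file (e.g. `kubectl -n onap exec <iperf-client-pod> -- iperf3 -c iperf3-server -f MBytes > results.txt`), and the sample line below is copied from the worker1 results in this section.

```shell
# Extract transfer and bandwidth from the iperf3 sender summary line.
# Fields: [ 4] 0.00-10.00 sec 1.32 GBytes 136 MBytes/sec 77 sender
#          1  2     3       4    5    6     7      8      9    10
line='[  4]   0.00-10.00  sec  1.32 GBytes   136 MBytes/sec   77   sender'
printf '%s\n' "$line" | awk '/sender/ {print $7, $8}'   # prints: 136 MBytes/sec
```

In practice the `awk` filter would read the saved report file instead of a single hard-coded line.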
Results of performed tests
worker1 (136 MBytes/sec)
Code Block (title: results, collapse: true)
Connecting to host iperf3-server, port 5201
[  4] local 10.42.5.127 port 39752 connected to 10.43.25.161 port 5201
[ ID] Interval           Transfer     Bandwidth        Retr  Cwnd
[  4]   0.00-1.00   sec   141 MBytes   141 MBytes/sec   32    673 KBytes
[  4]   1.00-2.00   sec   139 MBytes   139 MBytes/sec    0    817 KBytes
[  4]   2.00-3.00   sec   139 MBytes   139 MBytes/sec    0    936 KBytes
[  4]   3.00-4.00   sec   138 MBytes   137 MBytes/sec    0   1.02 MBytes
[  4]   4.00-5.00   sec   138 MBytes   137 MBytes/sec    0   1.12 MBytes
[  4]   5.00-6.00   sec   129 MBytes   129 MBytes/sec    0   1.20 MBytes
[  4]   6.00-7.00   sec   129 MBytes   129 MBytes/sec    0   1.27 MBytes
[  4]   7.00-8.00   sec   134 MBytes   134 MBytes/sec    0   1.35 MBytes
[  4]   8.00-9.00   sec   135 MBytes   135 MBytes/sec    0   1.42 MBytes
[  4]   9.00-10.00  sec   135 MBytes   135 MBytes/sec   45   1.06 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth        Retr
[  4]   0.00-10.00  sec  1.32 GBytes   136 MBytes/sec   77          sender
[  4]   0.00-10.00  sec  1.32 GBytes   135 MBytes/sec               receiver
worker2 (87 MBytes/sec)
Code Block (title: results, collapse: true)
Connecting to host iperf3-server, port 5201
[  4] local 10.42.3.188 port 35472 connected to 10.43.25.161 port 5201
[ ID] Interval           Transfer      Bandwidth         Retr  Cwnd
[  4]   0.00-1.00   sec  88.3 MBytes   88.3 MBytes/sec   121    697 KBytes
[  4]   1.00-2.00   sec  96.2 MBytes   96.3 MBytes/sec     0    796 KBytes
[  4]   2.00-3.00   sec  92.5 MBytes   92.5 MBytes/sec     0    881 KBytes
[  4]   3.00-4.00   sec  90.0 MBytes   90.0 MBytes/sec     0    957 KBytes
[  4]   4.00-5.00   sec  87.5 MBytes   87.5 MBytes/sec     0   1.00 MBytes
[  4]   5.00-6.00   sec  88.8 MBytes   88.7 MBytes/sec     0   1.06 MBytes
[  4]   6.00-7.00   sec  80.0 MBytes   80.0 MBytes/sec     0   1.12 MBytes
[  4]   7.00-8.00   sec  81.2 MBytes   81.3 MBytes/sec    25    895 KBytes
[  4]   8.00-9.00   sec  85.0 MBytes   85.0 MBytes/sec     0    983 KBytes
[  4]   9.00-10.00  sec  83.8 MBytes   83.7 MBytes/sec     0   1.03 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth         Retr
[  4]   0.00-10.00  sec   873 MBytes   87.3 MBytes/sec   146          sender
[  4]   0.00-10.00  sec   870 MBytes   87.0 MBytes/sec               receiver
worker3 (135 MBytes/sec)
Code Block (title: results, collapse: true)
Connecting to host iperf3-server, port 5201
[  4] local 10.42.4.182 port 35288 connected to 10.43.25.161 port 5201
[ ID] Interval           Transfer     Bandwidth        Retr  Cwnd
[  4]   0.00-1.00   sec   129 MBytes   129 MBytes/sec   45   1.17 MBytes
[  4]   1.00-2.00   sec   134 MBytes   134 MBytes/sec   32   1.25 MBytes
[  4]   2.00-3.00   sec   135 MBytes   135 MBytes/sec    0   1.32 MBytes
[  4]   3.00-4.00   sec   139 MBytes   139 MBytes/sec    0   1.40 MBytes
[  4]   4.00-5.00   sec   144 MBytes   144 MBytes/sec    0   1.47 MBytes
[  4]   5.00-6.00   sec   131 MBytes   131 MBytes/sec   45   1.14 MBytes
[  4]   6.00-7.00   sec   129 MBytes   129 MBytes/sec    0   1.25 MBytes
[  4]   7.00-8.00   sec   134 MBytes   134 MBytes/sec    0   1.33 MBytes
[  4]   8.00-9.00   sec   138 MBytes   138 MBytes/sec    0   1.39 MBytes
[  4]   9.00-10.00  sec   135 MBytes   135 MBytes/sec    0   1.44 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth        Retr
[  4]   0.00-10.00  sec  1.31 GBytes   135 MBytes/sec  122          sender
[  4]   0.00-10.00  sec  1.31 GBytes   134 MBytes/sec               receiver
worker0 (2282 MBytes/sec) (iperf3 client and server on the same worker)
Code Block (title: results, collapse: true)
Connecting to host iperf3-server, port 5201
[  4] local 10.42.6.132 port 51156 connected to 10.43.25.161 port 5201
[ ID] Interval           Transfer     Bandwidth         Retr  Cwnd
[  4]   0.00-1.00   sec  2.13 GBytes  2185 MBytes/sec     0    536 KBytes
[  4]   1.00-2.00   sec  1.66 GBytes  1702 MBytes/sec     0    621 KBytes
[  4]   2.00-3.00   sec  2.10 GBytes  2154 MBytes/sec     0    766 KBytes
[  4]   3.00-4.00   sec  1.89 GBytes  1937 MBytes/sec     0   1.01 MBytes
[  4]   4.00-5.00   sec  1.87 GBytes  1914 MBytes/sec     0   1.39 MBytes
[  4]   5.00-6.00   sec  2.76 GBytes  2826 MBytes/sec     0   1.39 MBytes
[  4]   6.00-7.00   sec  1.81 GBytes  1853 MBytes/sec   792   1.09 MBytes
[  4]   7.00-8.00   sec  2.54 GBytes  2600 MBytes/sec     0   1.21 MBytes
[  4]   8.00-9.00   sec  2.70 GBytes  2763 MBytes/sec     0   1.34 MBytes
[  4]   9.00-10.00  sec  2.82 GBytes  2889 MBytes/sec     0   1.34 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth         Retr
[  4]   0.00-10.00  sec  22.3 GBytes  2282 MBytes/sec   792          sender
[  4]   0.00-10.00  sec  22.3 GBytes  2282 MBytes/sec               receiver
Average speed (without worker0): 119 MBytes/sec
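The quoted average is the mean of the three cross-node sender rates (worker1, worker2, worker3); worker0 is excluded because its traffic never leaves the node. The arithmetic can be verified with a one-liner:

```shell
# Mean of the three cross-node sender rates (worker0 excluded):
# (136 + 87 + 135) / 3 = 358 / 3 ≈ 119 MBytes/sec
awk 'BEGIN { printf "%.0f MBytes/sec\n", (136 + 87 + 135) / 3 }'
```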
HV-VES Performance
Preconditions
Before starting the tests, download the Docker image of the producer, which is available here. To load the image locally, use the command:
Code Block
docker load < hv-collector-go-client.tar.gz
With DMaaP Kafka
Conditions
Tests were performed with 5 repetitions for each configuration shown in the table below.
Number of producers | Messages per producer | Payload size [B] | Interval [ms] |
---|---|---|---|
2 | 90000 | 8192 | 10 |
4 | 90000 | 8192 | 10 |
6 | 60000 | 8192 | 10 |
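Each configuration row implies a fixed total payload volume: producers × messages per producer × payload size. As an illustration (these totals are derived here, not part of the original results), the implied volumes can be computed directly:

```shell
# Total payload per configuration row: producers * messages * payload_size,
# converted from bytes to GiB (1 GiB = 1073741824 B).
awk 'BEGIN {
  printf "%.2f GiB\n", 2 * 90000 * 8192 / 1073741824   # 2 producers x 90000 msgs
  printf "%.2f GiB\n", 4 * 90000 * 8192 / 1073741824   # 4 producers x 90000 msgs
  printf "%.2f GiB\n", 6 * 60000 * 8192 / 1073741824   # 6 producers x 60000 msgs
}'
```

With the 10 ms send interval, each producer sends roughly 100 messages per second, so these volumes are spread over about 15 minutes (90000 messages) or 10 minutes (60000 messages) per producer.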
Raw results data
Raw results data with screenshots can be found in the following files:
- Series 1 - results_series_1.zip
- Series 2 - results_series_2.zip