A Kubernetes cluster with 4 worker nodes, each with the hardware configuration shown in the table below, is deployed on the OpenStack cloud operating system. The test components run as Docker containers deployed on this Kubernetes cluster.
Configuration | Value |
---|---
CPU model | Intel(R) Xeon(R) CPU E5-2680 v4 |
No. of cores | 24 |
CPU clock speed [GHz] | 2.40 |
Total RAM [GB] | 62.9 |
To check the cluster network performance, tests with Iperf3 were performed. Iperf is a tool for measuring the maximum achievable bandwidth on IP networks; it runs in two modes: server and client. We used the networkstatic/iperf3 Docker image.
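For reference, the same image can be exercised directly with Docker before deploying it on the cluster. A minimal sketch, assuming the image's entrypoint is iperf3 so that the trailing arguments are passed straight to it:

# Start iperf3 in server mode, listening on the default port 5201.
docker run -d --rm --name iperf3-server -p 5201:5201 networkstatic/iperf3 -s
# Run a client against the server (replace <server-ip> with the server host address).
docker run --rm networkstatic/iperf3 -c <server-ip>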
The following deployment creates one pod with iperf in server mode on a single worker, and one iperf client pod on each worker.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: iperf3-server
  namespace: onap
  labels:
    app: iperf3-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: iperf3-server
  template:
    metadata:
      labels:
        app: iperf3-server
    spec:
      containers:
        - name: iperf3-server
          image: networkstatic/iperf3
          args: ['-s']
          ports:
            - containerPort: 5201
              name: server
---
apiVersion: v1
kind: Service
metadata:
  name: iperf3-server
  namespace: onap
spec:
  selector:
    app: iperf3-server
  ports:
    - protocol: TCP
      port: 5201
      targetPort: server
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: iperf3-clients
  namespace: onap
  labels:
    app: iperf3-client
spec:
  selector:
    matchLabels:
      app: iperf3-client
  template:
    metadata:
      labels:
        app: iperf3-client
    spec:
      containers:
        - name: iperf3-client
          image: networkstatic/iperf3
          command: ['/bin/sh', '-c', 'sleep infinity']
To create the deployment, execute the following command:
kubectl create -f deployment.yaml
To find all iperf pods, execute:
kubectl -n onap get pods -o wide | grep iperf
To measure the connection between pods, run iperf on an iperf client pod using the following command:
kubectl -n onap exec -it <iperf-client-pod> -- iperf3 -c iperf3-server
To change the output format from MBits/sec to MBytes/sec:
kubectl -n onap exec -it <iperf-client-pod> -- iperf3 -c iperf3-server -f MBytes
To change the measurement time, pass the duration in seconds:
kubectl -n onap exec -it <iperf-client-pod> -- iperf3 -c iperf3-server -t <time-in-seconds>
To gather the results, the following command was executed on each iperf client pod:
kubectl -n onap exec -it <iperf-client-pod> -- iperf3 -c iperf3-server -f MBytes
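To repeat the measurement from every client pod in one pass (one pod per worker, as created by the DaemonSet above), a short loop such as the following can be used; it selects the pods by the app=iperf3-client label defined in the deployment:

# Run iperf3 from each client pod against the iperf3-server Service
# and print the per-pod bandwidth in MBytes/sec.
for pod in $(kubectl -n onap get pods -l app=iperf3-client -o jsonpath='{.items[*].metadata.name}'); do
  echo "=== $pod ==="
  kubectl -n onap exec "$pod" -- iperf3 -c iperf3-server -f MBytes
done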
Average speed (excluding worker 0): 119 MBytes/sec
Before starting the tests, download the producer Docker image, which is available here. To load the image locally, use the command:
docker load < hv-collector-go-client.tar.gz
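To confirm that the image was loaded, list the local images; the exact repository name and tag depend on how the archive was built, so the filter below is only an assumption:

# List locally available images and filter for the producer image.
docker images | grep hv-collector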
Tests were performed with 5 repetitions for each configuration shown in the table below.
Number of producers | Messages per producer | Payload size [B] | Interval [ms] |
---|---|---|---
2 | 90000 | 8192 | 10 |
4 | 90000 | 8192 | 10 |
6 | 60000 | 8192 | 10 |
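For illustration, one configuration row could be launched as a Kubernetes Job with one pod per producer. This is only a sketch under assumptions: the image name reflects the loaded archive, and the flags (--msg-count, --msg-size, --interval-ms) are hypothetical placeholders for the producer's real parameters:

apiVersion: batch/v1
kind: Job
metadata:
  name: hv-ves-producers
  namespace: onap
spec:
  completions: 2    # number of producers (first configuration row)
  parallelism: 2    # run all producers concurrently
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: producer
          image: hv-collector-go-client   # image from the loaded archive (assumption)
          # hypothetical flags mirroring the table: messages per producer, payload size, interval
          args: ['--msg-count=90000', '--msg-size=8192', '--interval-ms=10']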
Raw results data with screenshots can be found in the attached files.
The tables below show the test results for each tested number of producer containers.
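As a quick sanity check on the peak incoming data rate column: with an 8192 B payload sent every 10 ms, a single producer generates about 8192 B / 0.01 s ≈ 0.8 MB/s, so 2, 4 and 6 producers correspond to roughly 1.6, 3.3 and 4.9 MB/s, which matches the measured peaks below.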
NUMBER OF PRODUCERS | TOTAL MESSAGES PROCESSED | DIFFERENCE BETWEEN ALL MESSAGES AND SENT TO HV-VES | AVERAGE PROCESSING TIME IN HV-VES WITHOUT ROUTING [ms] | AVERAGE LATENCY TO HV-VES OUTPUT WITH ROUTING [ms] | PEAK INCOMING DATA RATE [MB/s] | PEAK PROCESSING MESSAGE QUEUE SIZE | PEAK CPU LOAD [%] | PEAK MEMORY USAGE [GB] | RESULTS PRESENTED IN GRAFANA |
---|---|---|---|---|---|---|---|---|---
2 | 180000 | 0 | 0.03 | 54 | 1.6 | 175 | 2.9 | 0.38 | |
2 | 180000 | 0 | 0.021 | 8.5 | 1.6 | 6 | 3.2 | 0.37 | |
2 | 180000 | 0 | 0.022 | 14.8 | 1.6 | 66 | 2.4 | 0.38 | |
2 | 180000 | 0 | 0.023 | 19.4 | 1.6 | 59 | 2.2 | 0.38 | |
2 | 180000 | 0 | 0.021 | 7.6 | 1.6 | 58 | 2.6 | 0.36 | |
NUMBER OF PRODUCERS | TOTAL MESSAGES PROCESSED | DIFFERENCE BETWEEN ALL MESSAGES AND SENT TO HV-VES | AVERAGE PROCESSING TIME IN HV-VES WITHOUT ROUTING [ms] | AVERAGE LATENCY TO HV-VES OUTPUT WITH ROUTING [ms] | PEAK INCOMING DATA RATE [MB/s] | PEAK PROCESSING MESSAGE QUEUE SIZE | PEAK CPU LOAD [%] | PEAK MEMORY USAGE [GB] | RESULTS PRESENTED IN GRAFANA |
---|---|---|---|---|---|---|---|---|---
4 | 315868 | 44824 | 82 | 260 | 3.2 | 2400 | 3 | 0.4 | ![](https://confluence.ext.net.nokia.com/download/thumbnails/983925650/Screenshot%20from%202020-04-22%2013-43-46.png?version=1&modificationDate=1587556763000&api=v2) ![](https://confluence.ext.net.nokia.com/download/thumbnails/983925650/Screenshot%20from%202020-04-22%2013-43-41.png?version=1&modificationDate=1587556763000&api=v2) |
4 | 359193 | 807 | 3660 | 10408 | 3.2 | 3100 | 4.9 | 0.78 | |
4 | 359401 | 591 | 1350 | 3486 | 3.7 | 3000 | 4.9 | 0.56 | |
4 | 360000 | 0 | 1.5 | 130 | 3.3 | 1200 | 4.1 | 0.39 | |
4 | 360000 | 0 | 0.02 | 57 | 3.3 | 180 | 3.5 | 0.38 | |
NUMBER OF PRODUCERS | TOTAL MESSAGES PROCESSED | DIFFERENCE BETWEEN ALL MESSAGES AND SENT TO HV-VES | AVERAGE PROCESSING TIME IN HV-VES WITHOUT ROUTING [ms] | AVERAGE LATENCY TO HV-VES OUTPUT WITH ROUTING [ms] | PEAK INCOMING DATA RATE [MB/s] | PEAK PROCESSING MESSAGE QUEUE SIZE | PEAK CPU LOAD [%] | PEAK MEMORY USAGE [GB] | RESULTS PRESENTED IN GRAFANA |
---|---|---|---|---|---|---|---|---|---
6 | 358475 | 1525 | 6600 | 17875 | 4.9 | 4900 | 5.8 | 0.97 | |
6 | 358494 | 1506 | 7080 | 20182 | 4.7 | 4500 | 4.5 | 0.97 | |
6 | 359244 | 756 | 4900 | 17231 | 4.9 | 1900 | 3.9 | 0.93 | |
6 | 358944 | 1056 | 6150 | 17200 | 4.9 | 4800 | 5 | 0.97 | |
6 | 358689 | 1311 | 5410 | 17202 | 4.9 | 4100 | 5 | 0.96 | |
NUMBER OF PRODUCERS | TOTAL MESSAGES PROCESSED | DIFFERENCE BETWEEN ALL MESSAGES AND SENT TO HV-VES | AVERAGE PROCESSING TIME IN HV-VES WITHOUT ROUTING [ms] | AVERAGE LATENCY TO HV-VES OUTPUT WITH ROUTING [ms] | PEAK INCOMING DATA RATE [MB/s] | PEAK PROCESSING MESSAGE QUEUE SIZE | PEAK CPU LOAD [%] | PEAK MEMORY USAGE [GB] | RESULTS PRESENTED IN GRAFANA |
---|---|---|---|---|---|---|---|---|---
2 | 180000 | 0 | 0.026 | 8.9 | 1.6 | 11 | 3 | 0.35 | |
2 | 180000 | 0 | 0.026 | 11.3 | 1.6 | 23 | 3.2 | 0.35 | ![](https://confluence.ext.net.nokia.com/download/thumbnails/983925650/Screenshot%20from%202020-04-23%2013-24-30.png?version=1&modificationDate=1587641506000&api=v2) |
NUMBER OF PRODUCERS | TOTAL MESSAGES PROCESSED | DIFFERENCE BETWEEN ALL MESSAGES AND SENT TO HV-VES | AVERAGE PROCESSING TIME IN HV-VES WITHOUT ROUTING [ms] | AVERAGE LATENCY TO HV-VES OUTPUT WITH ROUTING [ms] | PEAK INCOMING DATA RATE [MB/s] | PEAK PROCESSING MESSAGE QUEUE SIZE | PEAK CPU LOAD [%] | PEAK MEMORY USAGE [GB] | RESULTS PRESENTED IN GRAFANA |
---|---|---|---|---|---|---|---|---|---
4 | 360000 | 0 | 0.022 | 41.3 | 3.2 | 130 | 4 | 0.35 | ![](https://confluence.ext.net.nokia.com/download/thumbnails/983925650/Screenshot%20from%202020-04-23%2012-42-54.png?version=1&modificationDate=1587641670000&api=v2) ![](https://confluence.ext.net.nokia.com/download/thumbnails/983925650/Screenshot%20from%202020-04-23%2012-42-56.png?version=1&modificationDate=1587641670000&api=v2) ![](https://confluence.ext.net.nokia.com/download/thumbnails/983925650/Screenshot%20from%202020-04-23%2012-43-01.png?version=1&modificationDate=1587641673000&api=v2) |
4 | 360000 | 0 | 0.023 | 40.7 | 3.2 | 370 | 4.2 | 0.35 | |
NUMBER OF PRODUCERS | TOTAL MESSAGES PROCESSED | DIFFERENCE BETWEEN ALL MESSAGES AND SENT TO HV-VES | AVERAGE PROCESSING TIME IN HV-VES WITHOUT ROUTING [ms] | AVERAGE LATENCY TO HV-VES OUTPUT WITH ROUTING [ms] | PEAK INCOMING DATA RATE [MB/s] | PEAK PROCESSING MESSAGE QUEUE SIZE | PEAK CPU LOAD [%] | PEAK MEMORY USAGE [GB] | RESULTS PRESENTED IN GRAFANA |
---|---|---|---|---|---|---|---|---|---
6 | 359317 | 683 | 6240 | 15965 | 4.8 | 4500 | 4.3 | 1 | |
6 | 359217 | 783 | 6490 | 16834 | 4.8 | 4200 | 5.8 | 1 | |