...
Info: In the "Without DMaaP Kafka" tests, the DMaaP/Kafka service was substituted with wurstmeister kafka.
Environment and Resources
A Kubernetes cluster with 4 worker nodes, sharing the hardware configuration shown in the table below, is deployed in the OpenStack cloud operating system. The test components, running in docker containers, are then deployed on the Kubernetes cluster.
Configuration | |
---|---|
CPU model | Intel(R) Xeon(R) CPU E5-2680 v4 |
No. of cores | 24 |
CPU clock speed [GHz] | 2.40 |
Total RAM [GB] | 62.9 |
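The node resources can be cross-checked directly on the cluster, for example with kubectl's custom columns (a quick sketch; the column paths follow the standard Node status fields):

```
# Print capacity (CPU cores and memory) for every node in the cluster
kubectl get nodes -o custom-columns=NAME:.metadata.name,CPU:.status.capacity.cpu,MEMORY:.status.capacity.memory
```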
Network Performance
Pod measurement method
To check cluster network performance, tests using iperf3 were performed. iperf is a tool for measuring the maximum achievable bandwidth on IP networks; it runs in two modes: server and client. We used the networkstatic/iperf3 docker image.
The following deployment creates one pod running iperf3 in server mode on one worker, plus an iperf3 client pod on every worker.
```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: iperf3-server
  namespace: onap
  labels:
    app: iperf3-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: iperf3-server
  template:
    metadata:
      labels:
        app: iperf3-server
    spec:
      containers:
        - name: iperf3-server
          image: networkstatic/iperf3
          args: ['-s']
          ports:
            - containerPort: 5201
              name: server
---
apiVersion: v1
kind: Service
metadata:
  name: iperf3-server
  namespace: onap
spec:
  selector:
    app: iperf3-server
  ports:
    - protocol: TCP
      port: 5201
      targetPort: server
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: iperf3-clients
  namespace: onap
  labels:
    app: iperf3-client
spec:
  selector:
    matchLabels:
      app: iperf3-client
  template:
    metadata:
      labels:
        app: iperf3-client
    spec:
      containers:
        - name: iperf3-client
          image: networkstatic/iperf3
          command: ['/bin/sh', '-c', 'sleep infinity']
```
To create the deployment, execute the following command:

```
kubectl create -f deployment.yaml
```
To find all iperf3 pods, execute:

```
kubectl -n onap get pods -o wide | grep iperf
```
To measure the connection between pods, run iperf3 on an iperf3-client pod using the following command:

```
kubectl -n onap exec -it <iperf-client-pod> -- iperf3 -c iperf3-server
```
To change the output format from MBits/sec to MBytes/sec:

```
kubectl -n onap exec -it <iperf-client-pod> -- iperf3 -c iperf3-server -f MBytes
```
To change the measurement time:

```
kubectl -n onap exec -it <iperf-client-pod> -- iperf3 -c iperf3-server -t <time-in-seconds>
```
The results below were gathered with the following command:

```
kubectl -n onap exec -it <iperf-client-pod> -- iperf3 -c iperf3-server -f MBytes
```
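Alternatively, to run the measurement from every worker in one pass, a small shell loop can be used (a sketch; it assumes the client pods carry the app=iperf3-client label defined in the DaemonSet above):

```
# Run a 10-second iperf3 test from every client pod against the iperf3-server service
for pod in $(kubectl -n onap get pods -l app=iperf3-client -o jsonpath='{.items[*].metadata.name}'); do
  echo "--- ${pod} ---"
  kubectl -n onap exec "${pod}" -- iperf3 -c iperf3-server -f MBytes -t 10
done
```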
Results of performed tests
worker1 (136 MBytes/sec)
```
Connecting to host iperf3-server, port 5201
[  4] local 10.42.5.127 port 39752 connected to 10.43.25.161 port 5201
[ ID] Interval           Transfer     Bandwidth        Retr  Cwnd
[  4]   0.00-1.00   sec   141 MBytes  141 MBytes/sec    32    673 KBytes
[  4]   1.00-2.00   sec   139 MBytes  139 MBytes/sec     0    817 KBytes
[  4]   2.00-3.00   sec   139 MBytes  139 MBytes/sec     0    936 KBytes
[  4]   3.00-4.00   sec   138 MBytes  137 MBytes/sec     0   1.02 MBytes
[  4]   4.00-5.00   sec   138 MBytes  137 MBytes/sec     0   1.12 MBytes
[  4]   5.00-6.00   sec   129 MBytes  129 MBytes/sec     0   1.20 MBytes
[  4]   6.00-7.00   sec   129 MBytes  129 MBytes/sec     0   1.27 MBytes
[  4]   7.00-8.00   sec   134 MBytes  134 MBytes/sec     0   1.35 MBytes
[  4]   8.00-9.00   sec   135 MBytes  135 MBytes/sec     0   1.42 MBytes
[  4]   9.00-10.00  sec   135 MBytes  135 MBytes/sec    45   1.06 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth        Retr
[  4]   0.00-10.00  sec  1.32 GBytes  136 MBytes/sec    77          sender
[  4]   0.00-10.00  sec  1.32 GBytes  135 MBytes/sec                receiver
```
worker2 (87 MBytes/sec)

```
Connecting to host iperf3-server, port 5201
[  4] local 10.42.3.188 port 35472 connected to 10.43.25.161 port 5201
[ ID] Interval           Transfer      Bandwidth         Retr  Cwnd
[  4]   0.00-1.00   sec  88.3 MBytes   88.3 MBytes/sec   121    697 KBytes
[  4]   1.00-2.00   sec  96.2 MBytes   96.3 MBytes/sec     0    796 KBytes
[  4]   2.00-3.00   sec  92.5 MBytes   92.5 MBytes/sec     0    881 KBytes
[  4]   3.00-4.00   sec  90.0 MBytes   90.0 MBytes/sec     0    957 KBytes
[  4]   4.00-5.00   sec  87.5 MBytes   87.5 MBytes/sec     0   1.00 MBytes
[  4]   5.00-6.00   sec  88.8 MBytes   88.7 MBytes/sec     0   1.06 MBytes
[  4]   6.00-7.00   sec  80.0 MBytes   80.0 MBytes/sec     0   1.12 MBytes
[  4]   7.00-8.00   sec  81.2 MBytes   81.3 MBytes/sec    25    895 KBytes
[  4]   8.00-9.00   sec  85.0 MBytes   85.0 MBytes/sec     0    983 KBytes
[  4]   9.00-10.00  sec  83.8 MBytes   83.7 MBytes/sec     0   1.03 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer      Bandwidth         Retr
[  4]   0.00-10.00  sec   873 MBytes   87.3 MBytes/sec   146          sender
[  4]   0.00-10.00  sec   870 MBytes   87.0 MBytes/sec                receiver
```
worker3 (135 MBytes/sec)

```
Connecting to host iperf3-server, port 5201
[  4] local 10.42.4.182 port 35288 connected to 10.43.25.161 port 5201
[ ID] Interval           Transfer     Bandwidth        Retr  Cwnd
[  4]   0.00-1.00   sec   129 MBytes  129 MBytes/sec    45   1.17 MBytes
[  4]   1.00-2.00   sec   134 MBytes  134 MBytes/sec    32   1.25 MBytes
[  4]   2.00-3.00   sec   135 MBytes  135 MBytes/sec     0   1.32 MBytes
[  4]   3.00-4.00   sec   139 MBytes  139 MBytes/sec     0   1.40 MBytes
[  4]   4.00-5.00   sec   144 MBytes  144 MBytes/sec     0   1.47 MBytes
[  4]   5.00-6.00   sec   131 MBytes  131 MBytes/sec    45   1.14 MBytes
[  4]   6.00-7.00   sec   129 MBytes  129 MBytes/sec     0   1.25 MBytes
[  4]   7.00-8.00   sec   134 MBytes  134 MBytes/sec     0   1.33 MBytes
[  4]   8.00-9.00   sec   138 MBytes  138 MBytes/sec     0   1.39 MBytes
[  4]   9.00-10.00  sec   135 MBytes  135 MBytes/sec     0   1.44 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth        Retr
[  4]   0.00-10.00  sec  1.31 GBytes  135 MBytes/sec   122          sender
[  4]   0.00-10.00  sec  1.31 GBytes  134 MBytes/sec                receiver
```
worker0 (2282 MBytes/sec) (iperf client and server exist on the same worker)
```
Connecting to host iperf3-server, port 5201
[ 4] local 10.42.6.132 port 51156 connected to 10.43.25.161 port 5201
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 4] 0.00-1.00 sec 2.13 GBytes 2185 MBytes/sec 0 536 KBytes
[ 4] 1.00-2.00 sec 1.66 GBytes 1702 MBytes/sec 0 621 KBytes
[ 4] 2.00-3.00 sec 2.10 GBytes 2154 MBytes/sec 0 766 KBytes
[ 4] 3.00-4.00 sec 1.89 GBytes 1937 MBytes/sec 0 1.01 MBytes
[ 4] 4.00-5.00 sec 1.87 GBytes 1914 MBytes/sec 0 1.39 MBytes
[ 4] 5.00-6.00 sec 2.76 GBytes 2826 MBytes/sec 0 1.39 MBytes
[ 4] 6.00-7.00 sec 1.81 GBytes 1853 MBytes/sec 792 1.09 MBytes
[ 4] 7.00-8.00 sec 2.54 GBytes 2600 MBytes/sec 0 1.21 MBytes
[ 4] 8.00-9.00 sec 2.70 GBytes 2763 MBytes/sec 0 1.34 MBytes
[ 4] 9.00-10.00 sec 2.82 GBytes 2889 MBytes/sec 0 1.34 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 22.3 GBytes 2282 MBytes/sec 792 sender
[ 4]   0.00-10.00  sec  22.3 GBytes  2282 MBytes/sec              receiver
```

Average speed (without worker0): 119 MBytes/sec
Test Setup
Preconditions
Before starting the tests, download the producer's docker image, which is available here. To load the image locally, use the command:
```
docker load < hv-collector-go-client.tar.gz
```
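To confirm that the image is available locally, you can list it (a sketch; the repository name is an assumption based on the archive name):

```
# The loaded image should appear in the local image list
docker images | grep hv-collector-go-client
```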
To execute the performance tests, we run functions from the shell script cloud-based-performance-test.sh, located in the HV-VES project directory: ~/tools/performance/cloud/.
First, we have to generate certificates in the ~/tools/ssl folder using gen_certs. This step only needs to be performed during the first test setup (or if the generated files have been deleted).
```
./cloud-based-performance-test.sh gen_certs
```
Then we call setup to upload the certificates to HV-VES, deploy the Consumers, Prometheus, and Grafana, and create their ConfigMaps.
```
./cloud-based-performance-test.sh setup
```
After that, we have to change the HV-VES configuration in the Consul KEY/VALUE tab (typically, Consul can be accessed on port 30270 of any Controller node, e.g. http://slave1:30270/ui/#/dc1/kv/dcae-hv-ves-collector/edit):
```
{
  "security.sslDisable": false,
  "logLevel": "INFO",
  "server.listenPort": 6061,
  "server.idleTimeoutSec": 300,
  "cbs.requestIntervalSec": 5,
  "streams_publishes": {
    "perf3gpp": {
      "type": "kafka",
      "aaf_credentials": {
        "username": "admin",
        "password": "admin_secret"
      },
      "kafka_info": {
        "bootstrap_servers": "message-router-kafka:9092",
        "topic_name": "HV_VES_PERF3GPP"
      }
    }
  },
  "security.keys.trustStoreFile": "/etc/ves-hv/ssl/custom/trust.p12",
  "security.keys.keyStoreFile": "/etc/ves-hv/ssl/custom/server.p12",
  "security.keys.trustStorePasswordFile": "/etc/ves-hv/ssl/custom/trust.pass",
  "security.keys.keyStorePasswordFile": "/etc/ves-hv/ssl/custom/server.pass"
}
```
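The same configuration can also be pushed without the UI through Consul's HTTP KV API (a sketch; it assumes the API is served on the same port as the UI, that config.json is a local copy of the JSON above, and the key name is taken from the UI URL):

```
# PUT the configuration under the collector's key in Consul's KV store
curl -X PUT --data-binary @config.json http://slave1:30270/v1/kv/dcae-hv-ves-collector
```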
After completing the previous steps, we can call the start function, which deploys the Producers and starts the test.
```
./cloud-based-performance-test.sh start
```
For the start function we can use the following optional arguments:

Argument | Description (default) |
---|---|
--load | whether the test should keep the defined number of running producers until the script is interrupted (false) |
--containers | number of producer containers to create (1) |
--properties-file | path to the file with benchmark properties (./test.properties) |
--retention-time-minutes | retention time of messages in Kafka, in minutes (60) |

Example invocations of the test start:
```
./cloud-based-performance-test.sh start --containers 10
```
The command above starts a test that creates 10 producers, each of which sends the number of messages defined in test.properties once.
```
./cloud-based-performance-test.sh start --load true --containers 10 --retention-time-minutes 30
```
This invocation starts a load test, meaning the script will try to keep the number of running containers at 10, with a Kafka message retention of 30 minutes.
The test.properties file contains the Producer and Consumer configuration and allows setting the following properties:
Producer | |
---|---|
hvVesAddress | HV-VES address (dcae-hv-ves-collector.onap:6061) |
client.count | Number of clients per pod (1) |
message.size | Size of a single message in bytes (16384) |
message.count | Number of messages to be sent by each client (1000) |
message.interval | Interval between messages in milliseconds (1) |

Certificates paths | |
---|---|
client.cert.path | Path to the cert file (/ssl/client.p12) |
client.cert.pass.path | Path to the cert's pass file (/ssl/client.pass) |

Consumer | |
---|---|
kafka.bootstrapServers | Address of the Kafka service to consume from (message-router-kafka:9092) |
kafka.topics | Kafka topics to subscribe to (HV_VES_PERF3GPP) |
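Put together, a test.properties file using the documented defaults could look like this (a sketch; the standard Java properties format is an assumption based on the file name):

```
# Producer
hvVesAddress=dcae-hv-ves-collector.onap:6061
client.count=1
message.size=16384
message.count=1000
message.interval=1

# Certificates paths
client.cert.path=/ssl/client.p12
client.cert.pass.path=/ssl/client.pass

# Consumer
kafka.bootstrapServers=message-router-kafka:9092
kafka.topics=HV_VES_PERF3GPP
```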
To remove the created ConfigMaps, Consumers, Producers, Grafana, and Prometheus from the Kubernetes cluster, we call the clean function. Note: clean doesn't remove the certificates from HV-VES.
```
./cloud-based-performance-test.sh clean
```
To restart the test environment, which means redeploying the HV-VES pod, resetting the Kafka topic, and performing the setup again, we use reboot-test-environment.sh.
```
./reboot-test-environment.sh
```
Results can be accessed at the following links:
- Prometheus: http://slave1:30000/graph?g0.range_input=1h&g0.expr=hv_kafka_consumer_travel_time_seconds_count&g0.tab=1
- Grafana: http://slave1:30001/d/V94Kjlwmz/hv-ves-processing?orgId=1&refresh=5s
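For a quick throughput estimate, the consumer-side counter from the Prometheus link above can be rated over a time window via the HTTP API (a sketch; it assumes the hv_kafka_consumer_travel_time_seconds histogram is exposed by the test Consumers, as the link suggests):

```
# Per-second rate of messages observed by the consumers over the last minute
curl -G 'http://slave1:30000/api/v1/query' \
  --data-urlencode 'query=rate(hv_kafka_consumer_travel_time_seconds_count[1m])'
```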
HV-VES Performance test results
With DMaaP Kafka
Conditions
...