Environment and Resources
A Kubernetes cluster with 4 worker nodes, all sharing the hardware configuration shown in the table below, is deployed in the OpenStack cloud operating system. The test components, packaged as docker containers, are then deployed on the Kubernetes cluster.
| Configuration | | |
|---|---|---|
| CPU | Model | Intel(R) Xeon(R) CPU E5-2680 v4 |
| | No. of cores | 24 |
| | CPU clock speed [GHz] | 2.40 |
| Total RAM [GB] | 62.9 | |
Network Performance
Pod measurement method
To check cluster network performance, tests were carried out with iperf3. iperf is a tool for measuring the maximum achievable bandwidth on IP networks; it runs in two modes: server and client. We used the docker image networkstatic/iperf3.
The following deployment creates a pod running iperf in server mode on one worker and an iperf client pod on each worker.
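A minimal sketch of what such a deployment.yaml could look like (the actual file may differ; the object names and the `onap` namespace here are assumptions, except for the Service name `iperf3-server`, which the client commands below connect to, and the image networkstatic/iperf3 mentioned above):

```yaml
# Illustrative sketch only: one iperf3 server pod, a Service exposing it
# under the name "iperf3-server", and a DaemonSet so that every worker
# node gets one iperf client pod.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: iperf3-server
  namespace: onap
spec:
  replicas: 1
  selector:
    matchLabels:
      app: iperf3-server
  template:
    metadata:
      labels:
        app: iperf3-server
    spec:
      containers:
        - name: iperf3-server
          image: networkstatic/iperf3
          args: ["-s"]          # run iperf3 in server mode
          ports:
            - containerPort: 5201
---
apiVersion: v1
kind: Service
metadata:
  name: iperf3-server
  namespace: onap
spec:
  selector:
    app: iperf3-server
  ports:
    - port: 5201
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: iperf3-client
  namespace: onap
spec:
  selector:
    matchLabels:
      app: iperf3-client
  template:
    metadata:
      labels:
        app: iperf3-client
    spec:
      containers:
        - name: iperf3-client
          image: networkstatic/iperf3
          # Keep the container idle; measurements are run via kubectl exec.
          command: ["sleep", "infinity"]
```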
To create the deployment, execute the following command:
kubectl create -f deployment.yaml
To find all iperf pods, execute:
kubectl -n onap get pods -o wide | grep iperf
To measure the connection between pods, run iperf on an iperf-client pod using the following command:
kubectl -n onap exec -it <iperf-client-pod> -- iperf3 -c iperf3-server
To change the output format from MBits/sec to MBytes/sec:
kubectl -n onap exec -it <iperf-client-pod> -- iperf3 -c iperf3-server -f MBytes
To change the measurement time:
kubectl -n onap exec -it <iperf-client-pod> -- iperf3 -c iperf3-server -t <time-in-seconds>
To gather the results, the following command was executed:
kubectl -n onap exec -it <iperf-client-pod> -- iperf3 -c iperf3-server -f MBytes
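The average receiver throughput can be pulled out of iperf3's final summary line with a small awk filter. A sketch (the sample line below is illustrative of iperf3's summary format with `-f MBytes`; in practice you would pipe the `kubectl exec` output into awk instead):

```shell
# Sample final summary line as printed by "iperf3 ... -f MBytes" (illustrative).
summary='[  5]   0.00-10.00  sec  1330 MBytes   136 MBytes/sec                  receiver'

# The throughput value and its unit are the two fields before "receiver".
throughput=$(echo "$summary" | awk '/receiver/ { print $(NF-2), $(NF-1) }')
echo "$throughput"   # prints "136 MBytes/sec"
```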
Results of performed tests
- worker1: 136 MBytes/sec
- worker2: 87 MBytes/sec
- worker3: 135 MBytes/sec
- worker0: 2282 MBytes/sec (iperf client and server ran on the same worker)

Average speed (excluding worker0): 119 MBytes/sec
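The reported average is the mean over workers 1-3 only; worker0 is excluded because its client and server ran on the same node, so its result does not measure inter-node network throughput. The arithmetic:

```shell
# Mean throughput across workers 1-3: (136 + 87 + 135) / 3 = 358 / 3 ≈ 119.
avg=$(( (136 + 87 + 135) / 3 ))
echo "${avg} MBytes/sec"   # prints "119 MBytes/sec"
```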
HV-VES Performance
Preconditions
Before starting the tests, download the docker image of the producer, which is available here. To load the image locally, use the command:
docker load < hv-collector-go-client.tar.gz
With DMaaP Kafka
Conditions
Tests were performed with 5 repetitions for each configuration shown in the table below.
| Number of producers | Messages per producer | Payload size [B] | Interval [ms] |
|---|---|---|---|
| 2 | 90000 | 8192 | 10 |
| 4 | 90000 | 8192 | 10 |
| 6 | 60000 | 8192 | 10 |
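The table implies the aggregate load offered to the collector in each configuration: every producer sends an 8192 B payload each 10 ms, i.e. 100 messages per second. A quick calculation (my own arithmetic, not part of the original test tooling):

```shell
# Each producer: 8192 B every 10 ms = 8192 * 100 = 819200 B/s.
per_producer=$(( 8192 * 100 ))
for producers in 2 4 6; do
  echo "${producers} producers: $(( producers * per_producer )) B/s offered"
done
```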
Raw results data
Raw results data with screenshots can be found in the following files:
- Series 1 - results_series_1.zip
- Series 2 - results_series_2.zip
Test Results - series 1
Test Results - series 2
No DMaaP Kafka Setup
Install Kafka Docker on Kubernetes
(based on: ultimate-guide-to-installing-kafka-docker-on-kuber)
Create config maps
Config maps are required by the zookeeper and kafka-broker deployments.
kubectl -n onap create cm kafka-config-map --from-file=kafka_server_jaas.conf
kubectl -n onap create cm zk-config-map --from-file=zk_server_jaas.conf
Create deployments
kubectl -n onap create -f zookeeper.yml
kubectl -n onap create -f kafka-service.yml
kubectl -n onap create -f kafka-broker.yml
Verify that pods are up and running
kubectl -n onap get pods | grep 'zookeeper-deployment-1\|broker0'
kubectl -n onap get svc | grep kafka-service
If you need to change a variable or anything else in a yml file, delete the current deployment first, for example:
kubectl -n onap delete deploy kafka-broker0
Then, after modifying the file, create a new deployment as described above.
Run the test
Modify the tools/performance/cloud scripts to match the names used in your deployments, as described in the previous step. Here is a diff file (you may need to adapt it to the current state of the code):
Go to tools/performance/cloud and reboot the environment:
./reboot-test-environment.sh -v
Now you are ready to run the test.
Without DMaaP Kafka
Conditions
Tests were performed with the following configuration:
| Messages per producer | Payload size [B] | Interval [ms] |
|---|---|---|
| 90000 | 8192 | 10 |
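This configuration fixes the total data volume and the minimum run duration per producer; working them out (my own arithmetic, for orientation when reading the raw results):

```shell
# 90000 messages of 8192 B sent every 10 ms, per producer.
messages=90000
payload=8192
interval_ms=10
echo "total payload per producer: $(( messages * payload )) B"       # 737280000 B
echo "minimum run duration: $(( messages * interval_ms / 1000 )) s"  # 900 s
```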
Raw results data
Raw results data with screenshots can be found in the following files:
- Series 1 - results_series_1.zip
- Series 2 - results_series_2.zip
To see custom Kafka metrics, you may want to change kafka-and-producers.json (located in the HV-VES project directory under tools/performance/cloud/grafana/dashboards) to