
Setup

Environment:

OS: Zorin OS 16.2

RAM: 32 GB

CPU: Intel® Core™ i7-10610U CPU @ 1.80GHz × 8

Data: 

Included in ZIP file (at bottom)

  1. All data under 1 anchor
    1. Under /openroadm-devices there is a list of 10,000 openroadm-device[..] elements
  2. Tree size per device: 86 fragments
  3. Size per device: 333 KB
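As a quick cross-check (a hypothetical shell helper, not part of the attached scripts), the fragment and size totals used in the tables below follow directly from these per-device figures:

```shell
# Expected totals from the per-device figures: 86 fragments and 333 KB each.
devices=10000
echo "fragments=$((devices * 86))"   # → fragments=860000
echo "size_kb=$((devices * 333))"    # → size_kb=3330000 (about 3.3 GB)
```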


Single-large object request

 (1 object out of many)

...

Query:

cps/api/v1/dataspaces/openroadm/anchors/owb-msa221-anchor/node?xpath=/openroadm-devices/openroadm-device[@device-id='C201-7-13A-5A1']&include-descendants=true

Durations are average of 100 measurements
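The reported durations are means over repeated runs; the averaging can be sketched as below (the sample timing values are hypothetical; in the real test each line would be one curl `%{time_total}` measurement of the query above):

```shell
# Average N timing samples, as done for the reported durations.
# Each line of timings.txt would be one curl measurement, captured with:
#   curl -s -o /dev/null -w '%{time_total}\n' "<query URL>" >> timings.txt
printf '0.045\n0.046\n0.044\n' > timings.txt   # hypothetical sample values
awk '{ sum += $1 } END { printf "avg=%.3f\n", sum / NR }' timings.txt
# → avg=0.045
rm timings.txt
```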


1) Baseline

https://gerrit.onap.org/r/c/cps/+/133482

Devices | E2E duration (s) | Fragment Query duration (s) | Service Overhead (s)
1,000   | 0.045 | 0.023 | 0.022
2,000   | 0.054 | 0.035 | 0.018
5,000   | 0.144 | 0.117 | 0.027
10,000  | 0.290 | 0.260 | 0.030

(graph image removed)
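The Service Overhead column is simply the E2E duration minus the fragment query duration; for example, for the 10,000-device baseline row:

```shell
# Service overhead = E2E duration - fragment query duration
awk 'BEGIN { printf "overhead=%.3f\n", 0.290 - 0.260 }'
# → overhead=0.030
```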
2)

https://gerrit.onap.org/r/c/cps/+/133511/2

Devices | E2E duration (s) | Fragment Query duration (s) | Service Overhead (s)
1,000   | 0.054 | 0.053 | 0.001
2,000   | 0.100 | 0.100 | 0.000
5,000   | 0.229 | 0.229 | 0.000
10,000  | 0.213 | 0.212 | 0.000

(graph image removed)
3) Merged

Devices | E2E duration (s) | Fragment Query duration (s) | Service Overhead (s)
1,000   | 0.020 | 0.016 | 0.004
2,000   | 0.030 | 0.026 | 0.003
5,000   | 0.113 | 0.108 | 0.005
10,000  | 0.100 | 0.096 | 0.003

(graph image removed)

Graphs:

https://gerrit.onap.org/r/c/cps/+/133482

https://gerrit.onap.org/r/c/cps/+/133511/2

https://gerrit.onap.org/r/c/cps/+/133511/12

(graph images removed)


Observations (patch 3)

  1. Is 'findByAnchorAndCspPath' being used? (it shouldn't be!)
  2. Query time increases until the list size reaches 6,000 elements and then levels off

Whole data tree as one request

1 object containing all nodes as descendants (mainly one big list)

Query: cps/api/v1/dataspaces/openroadm/anchors/owb-msa221-anchor/node?xpath=/openroadm-device&include-descendants=true

All queries were run 10 times; durations are averages.

1) Baseline

https://gerrit.onap.org/r/c/cps/+/133482

Devices | E2E duration (s) | Fragment Query duration (s) | Service duration (s) | Object Size (GB) | #Fragments
1,000   | 11.8  | <0.1 * | 12  | 0.3 | 86,000
2,000   | 28.5  | <0.1 * | 28  | 0.7 | 172,000
5,000   | 87.0  | <0.1 * | 86  | 1.7 | 430,000
10,000  | 201.0 | <0.1 * | 201 | 3.3 | 860,000

(graph image removed)

2)

https://gerrit.onap.org/r/c/cps/+/133511/2

Devices | E2E duration (s) | Fragment Query duration (s) | Service duration (s) | Object Size (GB) | #Fragments
1,000   | 0.5 | 0.2 | 0.3 | 0.3 | 86,000
2,000   | 1.0 | 0.4 | 0.6 | 0.7 | 172,000
5,000   | 2.5 | 1.1 | 1.4 | 1.7 | 430,000
10,000  | 7.0 | 2.9 | 4.0 | 3.3 | 860,000

(graph image removed)

3) Merged

Devices | E2E duration (s) | Fragment Query duration (s) | Service duration (s) | Object Size (GB) | #Fragments
1,000   | 3.0  | 1.3  | 1.7  | 0.3 | 86,000
2,000   | 5.5  | 2.3  | 3.2  | 0.7 | 172,000
5,000   | 11.0 | 5.4  | 5.6  | 1.7 | 430,000
10,000  | 25.4 | 11.7 | 13.7 | 3.3 | 860,000

(graph image removed)

Graphs:

https://gerrit.onap.org/r/c/cps/+/133482

(graph image removed)

*Only the initial Hibernate query; Hibernate will lazily fetch data later, which is reflected in the E2E time.

Observations:

  1. Patch set #2 performed better than the latest patch! Needs comparison; Daniel Hanrahan will follow up.

Get nodes parallel

Fetch 1 device from a database with 10,000 devices

Bash parallel curl commands; each thread executed 10 sequential requests with no delays. Average response times are reported.

Query: cps/api/v1/dataspaces/openroadm/anchors/owb-msa221-anchor/node?xpath=/openroadm-devices/openroadm-device[@device-id='C201-7-13A-5A1']&include-descendants=true
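The parallel driver structure can be sketched as follows; `do_request` here is a stand-in for the real curl query above, so the skeleton stays runnable without a live CPS instance:

```shell
# Launch several "threads" (background subshells); each fires 10 sequential
# requests with no delay, mirroring the test setup. do_request is a stand-in
# for the real curl call against the CPS endpoint.
do_request() { echo "thread $1 request $2" >> results.txt; }

rm -f results.txt
threads=5
for t in $(seq 1 "$threads"); do
  (
    for i in $(seq 1 10); do
      do_request "$t" "$i"
    done
  ) &
done
wait   # wait for all background threads to finish
wc -l < results.txt   # 5 threads x 10 requests = 50 lines
rm results.txt
```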

Patch: https://gerrit.onap.org/r/c/cps/+/133511/2

https://gerrit.onap.org/r/c/cps/+/133511/12

(graph images removed)

Threads | E2E duration (s) | Success Ratio | Fragment Query duration (s)
1     | 0.082 | 100%  | 0.2
2     | 0.091 | 100%  | 0.1
3     | 0.120 | 100%  | 0.1
5     | 0.3   | 100%  | 0.2
10    | 0.3   | 99.9% | 0.3
20    | 0.5   | 99.5% | 0.5
50    | 1.0   | 99.4% | 1.0
100   | 2.3   | 99.7% | 2.3
200   | 7.6   | 99.7% | 6.2
500   | 17.1  | 41.4% | 13.8
1,000 | 15.3 (many connection errors) | 26.0% | 11.9

Graphs:

  1. Average E2E Execution Time
  2. Internal Method Counts (total)


Observations

  1. From 10 parallel requests (each of 10 sequential requests) the client can't always connect and we see timeout errors (success ratio <100%)
    1. Sequential requests are fired faster than responses arrive, so from the DB perspective they are almost parallel requests as well
  2. The database probably already becomes a bottleneck with 2 threads, effectively firing a total of 20 calls very quickly. It is known that the DB connection pool will slow down from 12 or more 'parallel' requests

Get 1000 nodes in Parallel with varying thread count

In this test, 1000 requests are sent using curl, but with varying thread count (using the --parallel-max option).

Code Block (bash)

echo -e "Threads\tTime"
for threads in 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 20 30 40 50; do
	echo -n -e "$threads\t"
	/usr/bin/time -f "%e" curl --silent --output /dev/null --fail --show-error \
		--header "Authorization: Basic Y3BzdXNlcjpjcHNyMGNrcyE=" \
		--get "http://localhost:8883/cps/api/v1/dataspaces/openroadm/anchors/owb-msa221-anchor/node?xpath=/openroadm-devices/openroadm-device\[@device-id='C201-7-[1-25]A-[1-40]A1'\]&include-descendants=true" \
		--parallel --parallel-max $threads --parallel-immediate
done

Note the above curl command performs 1000 requests. It is based on globbing in the URL - curl allows ranges such as [1-25]  in the URL, for example:

  http://example.com/archive[1996-1999]/vol[1-4].html

which would expand into a series of 16 requests to:

  • http://example.com/archive1996/vol1.html
  • http://example.com/archive1996/vol2.html
  • ...
  • http://example.com/archive1999/vol4.html
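The number of generated requests is simply the product of the range sizes; for the device-id glob used in the script above:

```shell
# [1-25] yields 25 values and [1-40] yields 40; curl fires one request
# per combination.
echo $((25 * 40))   # → 1000, matching "1000 requests" in the text
# And for the archive example: [1996-1999] x [1-4]
echo $((4 * 4))     # → 16
```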

Results

Threads | Time (s) | Speedup | Comments
1  | 140.4 | 1.0 |
2  | 71.6  | 2.0 | 2 threads is 2x faster than 1 thread
3  | 48.5  | 2.9 |
4  | 37.2  | 3.8 |
5  | 31.0  | 4.5 |
6  | 26.6  | 5.3 |
7  | 23.8  | 5.9 |
8  | 21.6  | 6.5 |
9  | 20.0  | 7.0 |
10 | 18.7  | 7.5 | 10 threads is 7.5x faster than 1 thread
11 | 17.7  | 7.9 |
12 | 16.8  | 8.4 | There are exactly 12 (logical) CPU cores on the test machine
13 | 16.7  | 8.4 |
14 | 16.7  | 8.4 |
15 | 16.8  | 8.4 |
20 | 16.8  | 8.4 |
30 | 16.7  | 8.4 |
40 | 16.8  | 8.4 |
50 | 16.7  | 8.4 |
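The Speedup column is the single-thread time divided by the measured time; for example, for 12 threads:

```shell
# Speedup relative to the single-threaded run (140.4 s total).
awk 'BEGIN { printf "speedup=%.1f\n", 140.4 / 16.8 }'
# → speedup=8.4
```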

Graphs

(graph image removed)

Observations

  • There were no failures during the tests (e.g. timeouts or refused connections).
  • Performance increases nearly linearly with increasing thread count, up to the number of CPU cores.
  • Performance stops increasing when the number of threads equals the number of CPU cores (expected).
  • Verbose statistics show that each individual request takes around 0.14 seconds, regardless of thread count (but with multiple CPU cores, requests are really done in parallel).

Data sheets

...

Graph:

...

Attached files:

  • CpsPerformance.xlsx
  • performanceTest.zip
  • Performance test.postman_collection.json

Test scripts overview 

- performanceTest.sh
   Get a single large object 1000 times from databases with thousands of devices (1000, 2000, ..., 10000) and create a metric after each run
- performanceRootTest.sh
   Get the whole data tree as one object 10 times from databases with thousands of devices (1000, 2000, ..., 10000) and create a metric after each run
- parallelGetRequestTest.sh
   Get one device in parallel from a database with 10,000 devices, executed 10 times sequentially

- buildup.sh
   Create the dataspace, the schema set, the anchor and the root node
- owb-msa221.zip
   The schema set for the tests
- outNode.json
   The input for the root node creation
- createThousandNode.sh
   Helper script for the database creation
- innerNode.json
   The input for the sub-node creation
- createMetric.sh
   Helper script for metric creation