References

CPS-2146

Purpose

The long-term goal of this Study & Implementation Proposal is to gain precise control over memory consumption in CPS and NCMP, going from linear O(N) to constant O(1) space complexity. Note that current memory consumption during queries is linear both in the amount of data returned by each query AND in the number of simultaneous queries (the number of concurrent Rest client requests).

The immediate objective is to fix Out Of Memory Errors encountered in NCMP while performing CM handles searches.

Summary of Problem

CPS and NCMP receive queries (CPS path queries and NCMP CM handle queries respectively) for which the amount of data to be returned is not known in advance. Since internal and external APIs return Collections, these Collections containing arbitrary amounts of data are held in memory. Additionally, these Collections undergo many transformations, and each collection cannot be garbage collected until it has been fully processed/transformed.
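The memory problem described above can be sketched with plain Java stand-ins (the names and strings below are illustrative, not the actual CPS types): each transformation materializes a complete new List, so earlier copies stay referenced until the whole chain finishes.

```java
import java.util.List;
import java.util.stream.Collectors;

public class MaterializedChainSketch {
    // Each step materializes a complete new List, so the source list stays
    // referenced until the entire chain finishes: peak memory holds several
    // O(N) copies at once. The strings are illustrative stand-ins for the
    // Tuple -> FragmentEntity -> DataNode conversion chain.
    static List<String> transform(List<String> tuples) {
        List<String> fragments = tuples.stream()                   // first O(N) copy still live
                .map(tuple -> tuple.replace("tuple", "fragment"))
                .collect(Collectors.toList());                     // second O(N) copy
        return fragments.stream()
                .map(fragment -> fragment.replace("fragment", "dataNode"))
                .collect(Collectors.toList());                     // third O(N) copy
    }

    public static void main(String[] args) {
        System.out.println(transform(List.of("tuple-1", "tuple-2", "tuple-3")));
        // prints [dataNode-1, dataNode-2, dataNode-3]
    }
}
```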

In many cases, such as CM handle search, NCMP may be viewed as simply transforming lists of data. Here is an example flow in NCMP for getting all CM handle IDs for a given DMI: GET http://ncmp-1/ncmpInventory/v1/ch/cmHandles?dmi-plugin-identifier=http://dmi-1

Example heap dump from OOME during CM-handle search

Summary of Test

In a test deployment using one instance of NCMP (limited to 2 CPUs and 1GB memory), 20000 CM-handles were registered with some public properties (10K handles having different properties). Then five CM-handle searches were run in parallel using curl. An OOME was observed within 3 seconds.

Analysis

Analyzing the heap dump produced from the crash shows that each of five searches consumed substantial memory. (Hazelcast was also observed to be a lesser but still significant memory consumer. In fact, the thread triggering the OOME was hz.hazelcastInstanceTrustLevelPerDmiPluginMap.cached.thread-1)

From this graph, we see that each search returning 10K handles consumed around 25 MB. (Important: this is the memory in use at the moment of the OOME; the actual peak memory required is higher, likely around 50 MB.)

Looking more closely at each thread executing the queries shows that there were many ArrayLists in memory, two of which are very large.

Looking more closely at the ArrayLists, we see one contains many thousands of Postgres Tuples, while the other contains CPS FragmentEntities:

Note that in the above case, the system ran out of memory before the Tuples were fully converted to FragmentEntities, so the peak memory requirement is larger than illustrated above (around 50 MB per 10K nodes). In reality, actual memory use will depend on the complexity of the data (e.g. the number of public properties per CM-handle) as well as on how many search parameters are used, since each search parameter results in an additional DB query.

This illustrates the core problem that large collections are stored in memory, and the full collections cannot be garbage collected until the collection is fully processed/transformed.

Details of Test Setup

In a test deployment using a single instance of NCMP run using docker (with resources limited to 2 CPUs and 1GB memory), 20000 CM-handles were registered with some public properties (10K of them using different properties):

{
    "dmiPlugin": "http://ncmp-dmi-plugin-demo-and-csit-stub:8092",
    "createdCmHandles": [
        {
            "cmHandle": "ch-1",
            "cmHandleProperties": { "neType": "RadioNode" },
            "publicCmHandleProperties": {
                "Color": "yellow",
                "Size": "small",
                "Shape": "cube"
            }
        }
    ]
}

Then five CM-handle searches were run in parallel using curl (each search using two condition parameters):

curl --location 'http://localhost:8883/ncmp/v1/ch/searches' \
--header 'Content-Type: application/json' \
--data '{
    "cmHandleQueryParameters": [
        {
            "conditionName": "hasAllProperties",
            "conditionParameters": [ {"Color": "yellow"}, {"Size": "small"} ]
        }
    ]
}'

Implementation of CM-handle Search and ID Search

While it is difficult to give a complete explanation of CM-handle Search functionality, the workflow for a particular CM-handle Search is given here as an illustration. In this case, the following Rest request will be issued:

POST http://{{CPS_HOST}}:{{CPS_PORT}}/ncmp/v1/ch/searches

with the following request body:

{
    "cmHandleQueryParameters": [
        {
            "conditionName": "hasAllModules",
            "conditionParameters": [ {"moduleName": "ietf-netconf"} ]
        },
        {
            "conditionName": "hasAllProperties",
            "conditionParameters": [ {"Color": "yellow"}, {"Size": "small"} ]
        }
    ]
}

In this case, a search will be executed returning all CM-handles having the Yang module "ietf-netconf" and the public property Color="yellow" and the public property Size="small". These three condition parameters will be combined using logical AND (or set intersection), meaning only CM-handles satisfying all three criteria will be returned.

The relevant code is in cps-ncmp-service/src/main/java/org/onap/cps/ncmp/api/impl/NetworkCmProxyCmHandleQueryServiceImpl.java

  • executeModuleNameQuery: The module name search works by executing a search for Anchors with the given module references (a single DB query is run for all modules)

    • cpsAnchorService.queryAnchorNames(NFP_OPERATIONAL_DATASTORE_DATASPACE_NAME, moduleNamesForQuery)
  • queryCmHandlesByPublicProperties: The properties search works by executing a separate Cps Path Query (thus separate DB query) for each property pair:

    • //public-properties[@name='Color' and @value='yellow']

    • //public-properties[@name='Size' and @value='small']
  • Note additional query parameters are supported, such as Cps Path Query and query by Trust Level. These would result in additional DB queries.

The results of each of these are sets of CM-handle IDs, which are combined using set intersection (Set::retainAll). After the final set of CM-handle IDs has been computed:

  • In the case of ID search, the set will be returned as a list of IDs from the Rest controller:
    • return ResponseEntity.ok(List.copyOf(cmHandleIds));
  • In the case of CM-handle search, the DB will be queried again to find all CM-handles with the given IDs:
    • return getNcmpServiceCmHandles(cmHandleIds);
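As a small self-contained sketch (with made-up handle IDs, not the actual NCMP code), the AND-combination of per-condition result sets via Set::retainAll looks like this:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class CmHandleIdIntersectionSketch {
    // Combine per-condition result sets with logical AND (set intersection),
    // mirroring the service's use of Set::retainAll.
    static Set<String> intersectAll(List<Set<String>> resultSets) {
        Set<String> combined = new HashSet<>(resultSets.get(0));
        for (Set<String> resultSet : resultSets.subList(1, resultSets.size())) {
            combined.retainAll(resultSet);  // keep only IDs present in both sets
        }
        return combined;
    }

    public static void main(String[] args) {
        Set<String> moduleMatches = Set.of("ch-1", "ch-2", "ch-3");
        Set<String> colorMatches = Set.of("ch-2", "ch-3", "ch-4");
        Set<String> sizeMatches = Set.of("ch-3", "ch-5");
        System.out.println(intersectAll(List.of(moduleMatches, colorMatches, sizeMatches)));
        // prints [ch-3]
    }
}
```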

Here is a diagram of the flow:

Proposed Solution

It is proposed to create an end-to-end streaming solution, from Persistence layer to Controller. A Proof of Concept will be constructed to document challenges and investigate performance characteristics.

An important observation as to why this solution achieves O(1) constant space complexity is that the streams will not be terminated (consumed) until they leave the Rest controller.
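A minimal sketch of why unterminated streams keep memory bounded: intermediate Stream operations are lazy, so nothing executes (and no collection is materialized) until a terminal operation pulls elements, one at a time.

```java
import java.util.stream.Stream;

public class LazyStreamSketch {
    // Intermediate operations (map, filter) are lazy: nothing executes until
    // a terminal operation consumes the stream, so only one element is in
    // flight at a time instead of a fully materialized collection.
    static int firstDoubledMultipleOfThreeAbove(int threshold) {
        return Stream.iterate(1, n -> n + 1)       // unbounded source, evaluated lazily
                .map(n -> n * 2)                   // not executed until pulled
                .filter(n -> n % 3 == 0)           // not executed until pulled
                .filter(n -> n > threshold)
                .findFirst()                       // terminal operation pulls one element at a time
                .orElseThrow();
    }

    public static void main(String[] args) {
        System.out.println(firstDoubledMultipleOfThreeAbove(10)); // prints 12
    }
}
```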

Streaming all the way

This will require adding Stream versions of CPS Core read operations, e.g.

Stream<DataNode> queryDataNodesAsStream(String dataspaceName, String anchorName, String cpsPath, FetchDescendantsOption fetchDescendantsOption);

This Stream could implement pagination when fetching data from the FragmentRepository. (This may be implemented in a variety of ways: using the Spring JpaRepository Pageable interface, or alternatively via Spring Data's direct repository streaming support, but this needs to be investigated for suitability.) The example below shows the Stream<FragmentEntity> using pagination internally to control memory usage, with the transformed data being streamed to the Rest client (no client changes needed):

NOTE: Spring Data has stream support and will page results given appropriate settings. For example, JdbcTemplate::queryForStream will fetch 100 rows at a time when the following settings are used:

  • spring.jdbc.template.fetch-size=100


  • spring.datasource.hikari.auto-commit=false
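To illustrate the page-at-a-time approach independently of Spring, here is a self-contained sketch in which a hypothetical in-memory page fetcher stands in for the FragmentRepository; Stream.iterate/takeWhile/flatMap turn successive pages into one lazy Stream:

```java
import java.util.List;
import java.util.stream.Stream;

public class PagedStreamSketch {
    // Hypothetical in-memory stand-in for a repository: ALL_ROWS plays the
    // role of the full table, fetchPage plays the role of a Pageable query.
    static final List<String> ALL_ROWS = List.of("f1", "f2", "f3", "f4", "f5");
    static final int PAGE_SIZE = 2;

    static List<String> fetchPage(int pageNumber) {
        int from = pageNumber * PAGE_SIZE;
        if (from >= ALL_ROWS.size()) {
            return List.of();  // past the last page
        }
        return ALL_ROWS.subList(from, Math.min(from + PAGE_SIZE, ALL_ROWS.size()));
    }

    // Lazily fetch page 0, 1, 2, ... until an empty page is returned,
    // flattening the pages into a single Stream of rows.
    static Stream<String> streamAll() {
        return Stream.iterate(0, pageNumber -> pageNumber + 1)
                .map(PagedStreamSketch::fetchPage)
                .takeWhile(page -> !page.isEmpty())
                .flatMap(List::stream);
    }

    public static void main(String[] args) {
        System.out.println(streamAll().toList()); // prints [f1, f2, f3, f4, f5]
    }
}
```

Because the Stream is lazy, at most one page of rows is held in memory at a time, regardless of the total result size.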


Here is some source code showing how the streams API would be used:

// In CPS core
Stream<DataNode> queryDataNodesAsStream(String dataspaceName, String anchorName, String cpsPath, FetchDescendantsOption fetchDescendantsOption) {
    return fragmentRepository.streamByAnchorAndCpsPath(getAnchor(dataspaceName, anchorName), cpsPath)
            .map(fragment -> fetchDescendants(fragment, fetchDescendantsOption))
            .map(fragment -> convertToDataNode(fragment));
}

// In NCMP
private YangModelCmHandle getAnyReadyCmHandleByModuleSetTag(final String moduleSetTag) {
    return cmHandleQueries.queryNcmpRegistryByCpsPath("/dmi-registry/cm-handles[@module-set-tag='" + moduleSetTag + "']", DIRECT_CHILDREN_ONLY)
            .map(YangDataConverter::convertCmHandleToYangModel)
            .filter(cmHandle -> cmHandle.getCompositeState().getCmHandleState() == CmHandleState.READY)
            .findFirst()
            .orElse(null);
}

CPS and NCMP Rest APIs

Instead of returning Collections from Rest APIs, a Stream may be returned, reducing memory pressure on the server.

Current Rest APIs return ResponseEntity using Lists. This means the whole structure must be held in memory before the response is returned.

    @Override
    public ResponseEntity<List<String>> searchCmHandleIds(final CmHandleQueryParameters cmHandleQueryParameters) {
        final CmHandleQueryServiceParameters cmHandleQueryServiceParameters = ncmpRestInputMapper.toCmHandleQueryServiceParameters(cmHandleQueryParameters);
        final Collection<String> cmHandleIds = networkCmProxyDataService.executeCmHandleIdSearchForInventory(cmHandleQueryServiceParameters);
        return ResponseEntity.ok(List.copyOf(cmHandleIds));
    }

It is proposed to return a Stream instead. Spring Boot supports this, allowing very large results to be returned without incurring a memory penalty.

    @Override
    public ResponseEntity<Stream<String>> searchCmHandleIds(final CmHandleQueryParameters cmHandleQueryParameters) {
        final CmHandleQueryServiceParameters cmHandleQueryServiceParameters = ncmpRestInputMapper.toCmHandleQueryServiceParameters(cmHandleQueryParameters);
        final Stream<String> cmHandleIds = networkCmProxyDataService.executeCmHandleIdSearchForInventory(cmHandleQueryServiceParameters); 
        return ResponseEntity.ok(cmHandleIds);
    }

Consideration: the OpenAPI definition may need to change to use a stream instead of an array type. This could affect client consumers.

Further Improvements

The use of pagination in the FragmentEntity Stream could later be made to self-optimize using adaptive paging. The use of Java Streams could also allow for faster processing using parallel streams.
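As a sketch of the parallel-stream idea (illustrative only, not CPS code): a stateless, CPU-bound transformation can be spread across cores simply by switching the stream to parallel mode.

```java
import java.util.stream.IntStream;

public class ParallelStreamSketch {
    // A stateless CPU-bound mapping can be spread across cores by switching
    // the stream to parallel mode. Ordering of side effects is no longer
    // guaranteed, so this suits pure transformations like the one below.
    static long sumOfSquares(int upToExclusive) {
        return IntStream.range(0, upToExclusive)
                .parallel()
                .mapToLong(n -> (long) n * n)
                .sum();
    }

    public static void main(String[] args) {
        System.out.println(sumOfSquares(10)); // prints 285
    }
}
```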
