Overview
Ideally one anchor could support more than one data tree instance, using different independent models provided in the schema set for that anchor. Refer to CPS-341.
The problem statement
The YANG specification allows multiple containers (lists, etc.) to be defined at a module's top level; however, CPS currently does not accept the corresponding eligible JSON data (data with multiple top-level nodes, each backed by the schema set). When such data is parsed using YangUtils.parseJsonData(...), an exception like the one below occurs:
```
org.opendaylight.yangtools.yang.data.impl.schema.ResultAlreadySetException: Normalized Node result was already set.
    at org.opendaylight.yangtools.yang.data.impl.schema.NormalizedNodeResult.setResult(NormalizedNodeResult.java:39)
    at org.opendaylight.yangtools.yang.data.impl.schema.NormalizedNodeResultBuilder.addChild(NormalizedNodeResultBuilder.java:57)
    at org.opendaylight.yangtools.yang.data.impl.schema.ImmutableNormalizedNodeStreamWriter.writeChild(ImmutableNormalizedNodeStreamWriter.java:305)
    at org.opendaylight.yangtools.yang.data.impl.schema.ImmutableNormalizedNodeStreamWriter.endNode(ImmutableNormalizedNodeStreamWriter.java:283)
    at org.opendaylight.yangtools.yang.data.util.ContainerNodeDataWithSchema.write(ContainerNodeDataWithSchema.java:37)
    at org.opendaylight.yangtools.yang.data.util.CompositeNodeDataWithSchema.write(CompositeNodeDataWithSchema.java:273)
    at org.opendaylight.yangtools.yang.data.util.AbstractNodeDataWithSchema.write(AbstractNodeDataWithSchema.java:74)
    at org.opendaylight.yangtools.yang.data.codec.gson.JsonParserStream.parse(JsonParserStream.java:170)
    at org.onap.cps.utils.YangUtils.parseJsonData(YangUtils.java:73)
    ...
```
Goal of spike
The current limitation of only one data tree (a single top-level data node) being supported by CPS does not seem justifiable. The goal of this study is to find the cause of the above error and bring in support for JSON data with multiple top-level nodes.
| Problem Statement | Possible Solutions | Sub-Tasks | Notes/Comments | Pros/Cons |
|---|---|---|---|---|
| | Identify the top-level data nodes in the existing JSON payload, then iterate over them and store them individually, one by one, using the Create node endpoint | | | Pros: Cons: |
| | Use a JSON array and store the JSON array in the CPS DB using the Create node endpoint | | Sample: `[{...},{...},{...}]` | Pros: Cons: |
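The first option can be sketched as follows. This is a minimal illustration, assuming the payload has already been deserialized into a `Map`; a real implementation would work on the raw JSON and then call the Create node endpoint once per resulting payload. `PayloadSplitter` and `splitTopLevelNodes` are hypothetical names, not CPS APIs.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of solution 1: given a payload with multiple top-level
// data nodes (represented here as a Map rather than raw JSON), split it into
// single-node payloads that could each be stored via the Create node endpoint.
public class PayloadSplitter {
    public static List<Map<String, Object>> splitTopLevelNodes(final Map<String, Object> payload) {
        final List<Map<String, Object>> singleNodePayloads = new ArrayList<>();
        for (final Map.Entry<String, Object> topLevelNode : payload.entrySet()) {
            // wrap each top-level node in its own single-entry payload
            final Map<String, Object> singleNodePayload = new LinkedHashMap<>();
            singleNodePayload.put(topLevelNode.getKey(), topLevelNode.getValue());
            singleNodePayloads.add(singleNodePayload);
        }
        return singleNodePayloads;
    }
}
```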
Analysis of ODL yang parser
YANG-modeled data is deserialized into a NormalizedNode using the JSON parser provided by ODL YANG Tools. One of the limitations of the ODL parser, as mentioned in its documentation, is that "The parser also expects that the YANG-modeled data in the JSON source are wrapped in a root element."
From the above statement we can infer that there cannot be multiple elements of the same type under the parent (in the case of multiple data trees, there would be multiple containers, each representing a data tree). To overcome this problem and support multiple data trees, ODL makes use of a normalized node type called DataContainerNode.
A DataContainerNode is an abstract node which has no value and no direct representation in the YANG syntax. It contains valid DataContainerChild nodes; these are direct children of the DataContainerNode and include node types such as ContainerNode, AugmentationNode, etc.
A DataContainerNode can be thought of as an imaginary container that wraps around the multiple data trees. Since it has no direct representation in the YANG syntax, existing YANG models and JSON data can be used with a DataContainerNode without any changes, as ODL handles the wrapping of data trees within the DataContainerNode.
Full documentation of Normalized DOM Model is here.
Problem with current use of ODL yang parser in CPS
As stated, for deserialization of YANG-modeled data into a NormalizedNode, ODL makes use of a NormalizedNodeStreamWriter. The parser walks through the JSON document containing YANG-modeled data based on the provided SchemaContext and emits node events into the NormalizedNodeStreamWriter. ODL provides two ways of instantiating a NormalizedNodeStreamWriter:
- Using NormalizedNodeResult (currently used in CPS)
- Using NormalizedNodeContainerBuilder (in the case of CPS we are going to use DataContainerNodeBuilder, which is a sub-interface of NormalizedNodeContainerBuilder)
Here we compare both ways of instantiating the NormalizedNodeStreamWriter and see how the current approach fails when parsing JSON data with multiple trees.
**NormalizedNodeResult (current approach)**

```java
private static NormalizedNode<?, ?> parseJsonData(final String jsonData,
        final SchemaContext schemaContext,
        final Optional<DataSchemaNode> optionalParentSchemaNode) {
    final var jsonCodecFactory = JSONCodecFactorySupplier.DRAFT_LHOTKA_NETMOD_YANG_JSON_02
        .getShared((EffectiveModelContext) schemaContext);
    final var normalizedNodeResult = new NormalizedNodeResult();
    final var normalizedNodeStreamWriter = ImmutableNormalizedNodeStreamWriter
        .from(normalizedNodeResult);
    try (final JsonParserStream jsonParserStream = optionalParentSchemaNode.isPresent()
            ? JsonParserStream.create(normalizedNodeStreamWriter, jsonCodecFactory, optionalParentSchemaNode.get())
            : JsonParserStream.create(normalizedNodeStreamWriter, jsonCodecFactory)) {
        final var jsonReader = new JsonReader(new StringReader(jsonData));
        jsonParserStream.parse(jsonReader);
    } catch (final IOException | JsonSyntaxException exception) {
        //TODO
    } catch (final IllegalStateException illegalStateException) {
        //TODO
    }
    return normalizedNodeResult.getResult();
}
```

**NormalizedNodeContainerBuilder**

```java
private static Collection<DataContainerChild<? extends YangInstanceIdentifier.PathArgument, ?>> parseJsonData(
        final String jsonData, final SchemaContext schemaContext,
        final Optional<DataSchemaNode> optionalParentSchemaNode) {
    final var jsonCodecFactory = JSONCodecFactorySupplier.DRAFT_LHOTKA_NETMOD_YANG_JSON_02
        .getShared((EffectiveModelContext) schemaContext);
    final DataContainerNodeBuilder<YangInstanceIdentifier.NodeIdentifier, ContainerNode> dataContainerNodeBuilder =
        Builders.containerBuilder()
            .withNodeIdentifier(new YangInstanceIdentifier.NodeIdentifier(schemaContext.getQName()));
    final var normalizedNodeStreamWriter = ImmutableNormalizedNodeStreamWriter
        .from(dataContainerNodeBuilder);
    try (final JsonParserStream jsonParserStream = optionalParentSchemaNode.isPresent()
            ? JsonParserStream.create(normalizedNodeStreamWriter, jsonCodecFactory, optionalParentSchemaNode.get())
            : JsonParserStream.create(normalizedNodeStreamWriter, jsonCodecFactory)) {
        final var jsonReader = new JsonReader(new StringReader(jsonData));
        jsonParserStream.parse(jsonReader);
    } catch (final IOException | JsonSyntaxException exception) {
        throw new DataValidationException("Failed to parse json data: " + jsonData,
            exception.getMessage(), exception);
    } catch (final IllegalStateException illegalStateException) {
        throw new DataValidationException("Failed to parse json data. Unsupported xpath or json data: " + jsonData,
            illegalStateException.getMessage(), illegalStateException);
    }
    // getValue() exposes the parsed top-level children of the built container,
    // matching the declared Collection return type
    return dataContainerNodeBuilder.build().getValue();
}
```

Parsing the following JSON data with the NormalizedNodeResult approach triggers the ResultAlreadySetException: the value of the NormalizedNodeResult is already set after the first tree is parsed, so the second tree cannot be written.

```json
{
  "first-container": {
    "a-leaf": "a-value",
    "b-leaf": "b-value"
  },
  "last-container": {
    "x-leaf": "x-value",
    "y-leaf": "y-value"
  }
}
```
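The failure mode of the NormalizedNodeResult approach can be reproduced with a plain-Java sketch. `SingleResultHolder` below is a hypothetical stand-in (not an ODL class) that mirrors the behaviour of NormalizedNodeResult.setResult(...): the holder accepts exactly one result, so writing a second top-level tree fails.

```java
// Hypothetical stand-in mirroring NormalizedNodeResult: it can hold exactly
// one parsed result, so a second top-level tree fails in the same way as the
// "Normalized Node result was already set" error seen in CPS.
public class SingleResultHolder<T> {
    private T result;

    public void setResult(final T newResult) {
        if (result != null) {
            throw new IllegalStateException("Normalized Node result was already set.");
        }
        result = newResult;
    }

    public T getResult() {
        return result;
    }
}
```

A container-builder-based writer avoids this by accumulating each top-level tree as a child of one wrapping container instead of setting a single result.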
Impact analysis on remaining CPS core APIs
After updating the POST operation to support multiple data trees, the following issues were noticed with the remaining APIs in CPS.
Get a node
When passing the xpath value as root (/) in the query parameter, the response should return all the data trees stored under a particular anchor. But currently the Get API only returns the first data tree under that anchor.

Expected response with xpath=/ as query parameter:

```json
{
  "first-container": {
    "a-leaf": "a-Value"
  },
  "last-container": {
    "x-leaf": "x-value"
  }
}
```

Response received:

```json
{
  "multipleDataTree:first-container": {
    "a-leaf": "a-Value"
  }
}
```

Confirmation test to check that all data trees exist under the particular anchor, by passing individual container names:

Response when xpath=/first-container in query parameter:

```json
{
  "multipleDataTree:first-container": {
    "a-leaf": "a-Value"
  }
}
```

Response when xpath=/last-container in query parameter:

```json
{
  "multipleDataTree:first-container": {
    "a-leaf": "a-Value"
  }
}
```
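The expected behaviour of a root Get can be sketched in plain Java: the fragments of every data tree under the anchor (each represented here as a single-entry Map rather than a FragmentEntity) would be merged into one response. `RootGetSketch` and `combineDataTrees` are illustrative names, not the actual CPS implementation.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch: a Get with xpath=/ merges the top-level node of every
// data tree stored under the anchor into a single response object.
public class RootGetSketch {
    public static Map<String, Object> combineDataTrees(final List<Map<String, Object>> dataTrees) {
        final Map<String, Object> response = new LinkedHashMap<>();
        for (final Map<String, Object> dataTree : dataTrees) {
            // each tree contributes its own top-level node to the response
            response.putAll(dataTree);
        }
        return response;
    }
}
```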
Update a node
The Update node API performs the update operation on a single data tree. In order to perform the update operation on an existing data tree, the update API first performs a get operation using the following method:

```java
public void updateDataNodeAndDescendants(final String dataspaceName, final String anchorName,
        final DataNode dataNode) {
    // performing a Get operation to retrieve a Fragment entity
    final FragmentEntity fragmentEntity =
        getFragmentWithoutDescendantsByXpath(dataspaceName, anchorName, dataNode.getXpath());
    // updated code returns a list of Fragment entities
    final List<FragmentEntity> fragmentEntities =
        getFragmentWithoutDescendantsByXpath(dataspaceName, anchorName, dataNode.getXpath());
    updateFragmentEntityAndDescendantsWithDataNode(fragmentEntity, dataNode);
    try {
        fragmentRepository.save(fragmentEntity);
    } catch (final StaleStateException staleStateException) {
        throw new ConcurrencyException("Concurrent Transactions",
            String.format("dataspace :'%s', Anchor : '%s' and xpath: '%s' is updated by another transaction.",
                dataspaceName, anchorName, dataNode.getXpath()));
    }
}
```
The getFragmentWithoutDescendantsByXpath() method used to return a single FragmentEntity using the getFragmentByXpath() method.
- The same method was also used by the GET API to get a single data tree.
- But now that we have support for multiple data trees under a single anchor, getFragmentByXpath() returns multiple FragmentEntities, one for each data tree, when the xpath is set to root (/).
- Since getFragmentWithoutDescendantsByXpath() also uses getFragmentByXpath() to get FragmentEntities, it now returns a list of FragmentEntities.
- It should be noted, however, that this operation will not be feasible for the Update operation with the xpath set to root (/).
- For instance:
- If there are 10 data trees stored under an anchor,
- and an update operation is to be performed on any 2 of the data trees,
- then it is not feasible to get all the data trees by passing the xpath as root and updating only the desired data trees (in this example, 2 data trees);
- rather, it is better to update the data trees individually.
- An argument can be made: what if there is a need to update all, or the majority of, the data trees?
- An update operation with the xpath as root could be feasible in such a case, but the probability of such a scenario is something to be thought upon.
- Also, in a scenario where a major chunk of the old data trees changes, a fresh data node can be created containing the updated data trees, and the old data trees can be deleted.
- The same considerations apply to the Replace a node API as well.