Overview

Ideally, one anchor could support more than one data tree instance, using different independent models provided in the schema set for that anchor. Refer to CPS-341.

The Problem statement

The YANG specification allows multiple containers (lists, etc.) to be defined at a module's top level. However, CPS currently does not accept otherwise-eligible JSON data (data with corresponding schema definitions) that contains multiple top-level nodes. When such data is parsed using YangUtils.parseJsonData(...), an exception like the one below occurs:

org.opendaylight.yangtools.yang.data.impl.schema.ResultAlreadySetException: Normalized Node result was already set.
 at org.opendaylight.yangtools.yang.data.impl.schema.NormalizedNodeResult.setResult(NormalizedNodeResult.java:39)
 at org.opendaylight.yangtools.yang.data.impl.schema.NormalizedNodeResultBuilder.addChild(NormalizedNodeResultBuilder.java:57)
 at org.opendaylight.yangtools.yang.data.impl.schema.ImmutableNormalizedNodeStreamWriter.writeChild(ImmutableNormalizedNodeStreamWriter.java:305)
 at org.opendaylight.yangtools.yang.data.impl.schema.ImmutableNormalizedNodeStreamWriter.endNode(ImmutableNormalizedNodeStreamWriter.java:283)
 at org.opendaylight.yangtools.yang.data.util.ContainerNodeDataWithSchema.write(ContainerNodeDataWithSchema.java:37)
 at org.opendaylight.yangtools.yang.data.util.CompositeNodeDataWithSchema.write(CompositeNodeDataWithSchema.java:273)
 at org.opendaylight.yangtools.yang.data.util.AbstractNodeDataWithSchema.write(AbstractNodeDataWithSchema.java:74)
 at org.opendaylight.yangtools.yang.data.codec.gson.JsonParserStream.parse(JsonParserStream.java:170)
 at org.onap.cps.utils.YangUtils.parseJsonData(YangUtils.java:73)
...

Goal of spike

The current limitation of only one data tree (a single top-level data node) being supported by CPS does not seem justifiable. The goal of this study is to find the cause of the above error and add support for JSON data with multiple top-level nodes.

Possible Solutions

Solution 1: Add support for JSON data with multiple top-level nodes by storing them individually. Refer to CPS-341.

Identify the top-level data nodes in the existing JSON payload, then iterate over them and store them one by one using the Create Node endpoint.

Sub-tasks:
  • Make a separate PoC where JSON data is parsed individually.
  • The Get operation needs to be updated accordingly; currently it returns only the first data tree even if multiple data trees exist under the same anchor. A new Get Data Node API is needed.

Notes/Comments:
  • Analysis of the current parser used in CPS, i.e. the ODL YANG parser.
  • gson has support for this.
  • The CPS team preferred Jackson over gson.
  • A PoC was presented to the CPS team; since it involved handling JSON data at the controller layer, the approach was scrapped.
  • It is preferred to resolve the issue at the service layer.
  • This requires analysis of how the ODL YANG parser works and whether there is any issue with its current use in CPS.

Pros:

  • No changes to the current payload
  • No need to write a new API; the existing API will serve the purpose
  • Backwards compatible

Cons:

  • The approach is complex compared to using a JSON array as the payload. Logic is needed to iterate over multiple data trees: most libraries use the top-level JSON elements (in this case the container names) to identify the number of data trees and return only the corresponding values under them, so the data must be handled within CPS in order to retain the entire JSON data as-is.
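The splitting step of this solution can be sketched outside of CPS. The following is a minimal illustration in Python using the standard json module, not the actual CPS/ODL code; the function name is hypothetical:

```python
import json

def split_top_level_trees(json_payload):
    """Split a JSON payload containing multiple top-level data trees
    into one single-tree payload per top-level element."""
    parsed = json.loads(json_payload)
    # Each top-level key (a container name) identifies one data tree;
    # each resulting payload could then be sent to the Create Node endpoint.
    return [json.dumps({name: subtree}) for name, subtree in parsed.items()]

payload = '''{
  "first-container": {"a-leaf": "a-value", "b-leaf": "b-value"},
  "last-container": {"x-leaf": "x-value", "y-leaf": "y-value"}
}'''
print(split_top_level_trees(payload))
```

Note that this sketch only shows the iteration idea; in CPS the equivalent handling would have to live at the service layer, per the notes above.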
Solution 2: Use a JSON array as the payload and store it in the CPS DB using the Create Node endpoint.

Sub-tasks:
  • A new payload needs to be defined. Sample:

    [{...},{...},{...}]

  • New test cases need to be added.
  • The Get operation also needs to be updated to retrieve multiple data trees, so a new endpoint is needed.
  • The existing Get Nodes API needs to be tested in depth.
  • The payload needs to be changed. The payload format will be added here later.
  • The endpoints that might be affected by this change need to be identified.

Notes/Comments:
  • A PoC was presented to the CPS team; since it involved handling JSON data at the controller layer, the approach was scrapped.
  • It is preferred to resolve the issue at the service layer.
  • This requires analysis of how the ODL YANG parser works and whether there is any issue with its current use in CPS.

Pros:

  • Minimal code change (the approach requires iterating over the multiple data trees passed as a JSON array)

Cons:

  • A new endpoint needs to be added; the endpoint name needs to be decided because the current POST API already uses "nodes".
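With the JSON-array payload of this solution, the per-tree iteration becomes trivial, since each array element is already one data tree. A minimal Python sketch (illustrative only, not CPS code; the function name is hypothetical):

```python
import json

def trees_from_array_payload(json_payload):
    """With a JSON array payload, each array element is one data tree."""
    trees = json.loads(json_payload)
    if not isinstance(trees, list):
        # The new payload format would require an array: [{...},{...},...]
        raise ValueError("expected a JSON array payload")
    return trees

payload = '[{"first-container": {"a-leaf": "a-value"}}, {"last-container": {"x-leaf": "x-value"}}]'
print(trees_from_array_payload(payload))
```

The trade-off against Solution 1 is visible here: the parsing logic is simpler, but the payload shape (and therefore the API contract) changes.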

Analysis of ODL yang parser

YANG-modeled data is deserialized into a NormalizedNode using the JSON parser provided by ODL YANG Tools. One limitation of the ODL parser, as mentioned in the documentation, is: "The parser also expects that the YANG-modeled data in the JSON source are wrapped in a root element."

From the above statement it follows that there cannot be multiple elements of the same type without a common parent (in the case of multiple data trees, there would be multiple containers, each representing a data tree). To overcome this problem and support multiple data trees, ODL provides a normalized node type called DataContainerNode.

A DataContainerNode is an abstract node which does not have a value and has no direct representation in YANG syntax. It contains valid DataContainerChild nodes; these are direct children of the DataContainerNode and include node types such as ContainerNode, AugmentationNode, etc.

A DataContainerNode can be thought of as an imaginary container that wraps around the multiple data trees. Since it has no direct representation in YANG syntax, existing YANG models and JSON data can be used with a DataContainerNode without any changes, as ODL handles the wrapping of data trees within the DataContainerNode.

Full documentation of the Normalized DOM Model is available in the OpenDaylight documentation.

Problem with current use of ODL yang parser in CPS

As stated above, ODL uses a NormalizedNodeStreamWriter for deserialization of YANG-modeled data into a NormalizedNode. The parser walks through the JSON document containing YANG-modeled data based on the provided SchemaContext and emits node events into a NormalizedNodeStreamWriter. ODL provides two ways of instantiating a NormalizedNodeStreamWriter:

  • Using NormalizedNodeResult (currently used in CPS)
  • Using NormalizedNodeContainerBuilder (in case of CPS we are going to use DataContainerNodeBuilder, which is a sub-interface of NormalizedNodeContainerBuilder).

Here we compare both ways of instantiating the NormalizedNodeStreamWriter, and see how the current approach fails to parse JSON data with multiple trees.

Using NormalizedNodeResult (current approach in CPS):
private static NormalizedNode<?, ?> parseJsonData(final String jsonData, final SchemaContext schemaContext,
        final Optional<DataSchemaNode> optionalParentSchemaNode) {

    final var jsonCodecFactory = JSONCodecFactorySupplier.DRAFT_LHOTKA_NETMOD_YANG_JSON_02
            .getShared((EffectiveModelContext) schemaContext);
    final var normalizedNodeResult = new NormalizedNodeResult();
    final var normalizedNodeStreamWriter = ImmutableNormalizedNodeStreamWriter
            .from(normalizedNodeResult);

    try (final JsonParserStream jsonParserStream = optionalParentSchemaNode.isPresent()
            ? JsonParserStream.create(normalizedNodeStreamWriter, jsonCodecFactory, optionalParentSchemaNode.get())
            : JsonParserStream.create(normalizedNodeStreamWriter, jsonCodecFactory)) {

        final var jsonReader = new JsonReader(new StringReader(jsonData));
        jsonParserStream.parse(jsonReader);

    } catch (final IOException | JsonSyntaxException exception) {
        //TODO
    } catch (final IllegalStateException illegalStateException) {
        //TODO
    }
    return normalizedNodeResult.getResult();
}

Using a NormalizedNodeContainerBuilder (here a DataContainerNodeBuilder):

private static Collection<DataContainerChild<? extends YangInstanceIdentifier.PathArgument, ?>> parseJsonData(
        final String jsonData, final SchemaContext schemaContext,
        final Optional<DataSchemaNode> optionalParentSchemaNode) {

    final var jsonCodecFactory = JSONCodecFactorySupplier.DRAFT_LHOTKA_NETMOD_YANG_JSON_02
            .getShared((EffectiveModelContext) schemaContext);
    final DataContainerNodeBuilder<YangInstanceIdentifier.NodeIdentifier, ContainerNode> dataContainerNodeBuilder =
            Builders.containerBuilder().withNodeIdentifier(
                    new YangInstanceIdentifier.NodeIdentifier(schemaContext.getQName()));
    final var normalizedNodeStreamWriter = ImmutableNormalizedNodeStreamWriter
            .from(dataContainerNodeBuilder);

    try (final JsonParserStream jsonParserStream = optionalParentSchemaNode.isPresent()
            ? JsonParserStream.create(normalizedNodeStreamWriter, jsonCodecFactory, optionalParentSchemaNode.get())
            : JsonParserStream.create(normalizedNodeStreamWriter, jsonCodecFactory)) {
        final var jsonReader = new JsonReader(new StringReader(jsonData));
        jsonParserStream.parse(jsonReader);
    } catch (final IOException | JsonSyntaxException exception) {
        throw new DataValidationException(
                "Failed to parse json data: " + jsonData, exception.getMessage(), exception);
    } catch (final IllegalStateException illegalStateException) {
        throw new DataValidationException(
                "Failed to parse json data. Unsupported xpath or json data: " + jsonData,
                illegalStateException.getMessage(), illegalStateException);
    }

    // The built ContainerNode exposes its children, one per data tree
    return dataContainerNodeBuilder.build().getValue();
}
  • In the first method above, the NormalizedNodeStreamWriter is instantiated using a NormalizedNodeResult.
  • Instantiating the NormalizedNodeStreamWriter with a NormalizedNodeResult creates "one" instance of a top-level NormalizedNode, whose type is determined by the first node event (in this case the container representing the first data tree, hence the NormalizedNode is a single ContainerNode).
  • So when JSON data with multiple data trees is passed, the single NormalizedNode instance created by the NormalizedNodeStreamWriter is consumed while parsing the first data tree.
{
  "first-container": {
    "a-leaf": "a-value",
    "b-leaf": "b-value"
  },						//Value of NormalizedNodeResult is set after parsing first tree
  "last-container": {
    "x-leaf": "x-value",
    "y-leaf": "y-value"
  }
}
  • Once the first data tree is parsed, the value of the NormalizedNodeResult, which acts as a flag indicating that a data tree has been successfully parsed, is set.
  • At this point the code tries to parse the remaining data trees, but since the NormalizedNodeResult value was already set after parsing the first tree, parsing fails and throws:
    ResultAlreadySetException: Normalized Node result was already set.
  • In the updated method, the NormalizedNodeStreamWriter is instantiated using a DataContainerNodeBuilder, which is a builder for a NormalizedNode.
  • Instantiating with a DataContainerNodeBuilder allows the NormalizedNodeStreamWriter to create multiple instances of top-level NormalizedNodes (the containers representing each data tree).
  • The type of each NormalizedNode is determined by its individual node events.
  • All the created nodes are written to the builder.
  • The builder finally returns a ContainerNode which contains the collection of child nodes, each representing a data tree.
  • This collection of nodes can then be used to store multiple data trees in the database.
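The difference between the two approaches can be illustrated with a toy model. The classes below are plain Python stand-ins for the ODL types, written only to show the single-slot-versus-builder behavior; they are not the actual yangtools implementation:

```python
class ResultAlreadySetError(Exception):
    """Mimics ODL's ResultAlreadySetException, for illustration only."""

class SingleNodeResult:
    # Toy stand-in for NormalizedNodeResult: holds exactly one top-level node.
    def __init__(self):
        self._result = None

    def set_result(self, node):
        if self._result is not None:
            raise ResultAlreadySetError("Normalized Node result was already set.")
        self._result = node

class ContainerBuilder:
    # Toy stand-in for DataContainerNodeBuilder: accepts any number of children.
    def __init__(self):
        self._children = []

    def add_child(self, node):
        self._children.append(node)

    def build(self):
        return list(self._children)

trees = ["first-container", "last-container"]

single = SingleNodeResult()
single.set_result(trees[0])
try:
    single.set_result(trees[1])     # the second data tree fails
except ResultAlreadySetError as error:
    print(error)                    # mirrors the error seen in CPS

builder = ContainerBuilder()
for tree in trees:
    builder.add_child(tree)         # every data tree is accepted
print(builder.build())
```

The single-slot result reproduces the failure mode from the stack trace above, while the builder collects all top-level nodes, which is exactly why the DataContainerNodeBuilder approach works.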

Impact analysis on remaining CPS core APIs

After updating the post operation to support multiple data trees, the following issues were noticed with the remaining CPS APIs:

  • Get a node
    When passing the xpath value as root (/) in the query parameter, the expected response should return all the data trees stored under the particular anchor, but currently the Get API only returns the first data tree under that anchor.

    //Expected Response with xpath= / as query parameter
    {
      "first-container": {
        "a-leaf": "a-Value"
      },
      "last-container": {
        "x-leaf": "x-value"
      }
    }
    //Response received
    {    
    "multipleDataTree:first-container": {
            "a-leaf": "a-Value"
        }
    }
    //Confirmation test to check that all data trees exist under the particular anchor by passing individual container names
    
    // Response when xpath=/first-container in query parameter
    {
         "multipleDataTree:first-container": {
             "a-leaf": "a-Value"
         }
     }
    
    
    // Response when xpath=/last-container in query parameter
    {
         "multipleDataTree:last-container": {
            "x-leaf": "x-value"
        }
    }
  • Update a node
    The Update Node API performs the update operation on a single data tree. In order to update an existing data tree, the update API first performs a get operation using the following method:

    public void updateDataNodeAndDescendants(final String dataspaceName, final String anchorName,
                                             final DataNode dataNode) {

        // Old code: a Get operation retrieving a single FragmentEntity
        final FragmentEntity fragmentEntity =
            getFragmentWithoutDescendantsByXpath(dataspaceName, anchorName, dataNode.getXpath());

        // Updated code: the same call now returns a list of FragmentEntities
        final List<FragmentEntity> fragmentEntities =
            getFragmentWithoutDescendantsByXpath(dataspaceName, anchorName, dataNode.getXpath());

        updateFragmentEntityAndDescendantsWithDataNode(fragmentEntity, dataNode);
        try {
            fragmentRepository.save(fragmentEntity);
        } catch (final StaleStateException staleStateException) {
            throw new ConcurrencyException("Concurrent Transactions",
                    String.format("dataspace :'%s', Anchor : '%s' and xpath: '%s' is updated by another transaction.",
                            dataspaceName, anchorName, dataNode.getXpath()));
        }
    }
    • The getFragmentWithoutDescendantsByXpath() method used to return a single FragmentEntity via the getFragmentByXpath() method.
    • The same method was also used by the GET API to get a single data tree.
    • Now that multiple data trees are supported under a single anchor, getFragmentByXpath() returns multiple FragmentEntities, one for each data tree, when the xpath is set to root (/).
    • Since getFragmentWithoutDescendantsByXpath() also uses getFragmentByXpath() to get FragmentEntities, it now returns a list of FragmentEntities.
    • Note, however, that this is not feasible for the Update operation with the xpath set to root (/).
    • For instance:
      • If there are 10 data trees stored under an anchor,
      • and an update operation is to be performed on only 2 of the data trees,
      • then it is not feasible to fetch all the data trees by passing the xpath as root and update only the desired ones (in this example, 2 data trees).
      • Rather, it is better to update the data trees individually.
    • An argument can be made: what if all, or a majority of, the data trees need to be updated?
    • An update operation with the xpath as root could be feasible in such a case, but the probability of that scenario needs further thought.
    • Also, in a scenario where a major chunk of the old data trees changes, a fresh data node containing the updated data trees can be created and the old data trees deleted.
  • The same scenario applies to the Replace a Node API as well (details to be added here).