...

Problem Statement

Possible Solutions | Sub-Tasks | Notes/Comments | Pros/Cons

Add support for JSON data with multiple top-level nodes. Refer: CPS-341 (ONAP Jira)

Identify the top-level data nodes in the existing JSON payload, then iterate over them and store them individually, one by one, using the Create node endpoint
  • Make a separate PoC where we parse JSON data individually
  • The Get operation needs to be updated accordingly: currently it returns only the first data tree even if multiple data trees exist under the same anchor, so a new Get Data node API is needed


  • Analysis of the current parser used in CPS, i.e. the ODL yang parser
  • gson has support for this
  • The CPS team preferred Jackson over gson


  • A PoC was presented to the CPS team; since it involved handling JSON data at the controller layer, the approach was scrapped.
  • It is preferred to resolve the issue at the service layer
  • This requires analysis of how the ODL yang parser works and whether there is any issue with its current implementation in CPS

Pros: 

  • No changes to current payload
  • No need to write new API, existing API will serve the purpose
  • Backwards compatible

Cons:

  • The approach is complex compared to using a JSON array as payload. Logic needs to be written to parse over multiple data trees: most libraries use the top-level JSON elements (in this case the container names) to identify the number of data trees and return only the corresponding value under them, so the data must be handled within CPS in order to get the entire JSON data as is.
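The "handling data within CPS" complexity mentioned above can be illustrated with a minimal sketch. This is not CPS or ODL code: the class and method names are illustrative, and it assumes each top-level member is itself a JSON object with no `{`/`}` or escaped quotes inside string values. It splits a payload with multiple top-level nodes into one JSON string per data tree by tracking brace depth:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch only: splits a JSON object whose top-level members are
// individual data trees into one self-contained JSON string per tree.
// A real implementation would use a proper JSON parser (e.g. Jackson).
public class TopLevelTreeSplitter {

    public static Map<String, String> splitTopLevelTrees(final String json) {
        final Map<String, String> trees = new LinkedHashMap<>();
        final int n = json.length();
        int i = 0;
        while (i < n && json.charAt(i) != '{') {
            i++;                                   // skip to the outer '{'
        }
        i++;
        while (i < n) {
            while (i < n && json.charAt(i) != '"' && json.charAt(i) != '}') {
                i++;                               // skip commas and whitespace
            }
            if (i >= n || json.charAt(i) == '}') {
                break;                             // end of the outer object
            }
            final int keyStart = ++i;
            while (json.charAt(i) != '"') {
                i++;                               // read the member name
            }
            final String key = json.substring(keyStart, i);
            while (json.charAt(i) != '{') {
                i++;                               // skip ':' and whitespace
            }
            final int valueStart = i;
            int depth = 0;
            do {                                   // match the value's braces
                final char c = json.charAt(i++);
                if (c == '{') {
                    depth++;
                } else if (c == '}') {
                    depth--;
                }
            } while (depth > 0);
            trees.put(key, json.substring(valueStart, i));
        }
        return trees;
    }

    public static void main(final String[] args) {
        final String payload =
            "{\"first-container\":{\"a-leaf\":\"a-value\"},"
            + "\"last-container\":{\"x-leaf\":\"x-value\"}}";
        splitTopLevelTrees(payload)
            .forEach((name, tree) -> System.out.println(name + " -> " + tree));
    }
}
```

Each resulting entry could then be stored individually via the Create node endpoint, which is exactly the per-tree handling the cons above describe.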
Use a JSON array and store it in the CPS DB using the Create node endpoint
  • Need to define a new payload

Sample:

Code Block
[{...},{...},{...}]
  • New test cases need to be added
  • The Get operation also needs updating to retrieve multiple data trees, so a new endpoint is needed
  • Need to test the existing GET nodes API in depth.
  • The payload needs to be changed. Will add the payload format soon.
  • Need to identify which endpoints might get affected by this change


  • A PoC was presented to the CPS team; since it involved handling JSON data at the controller layer, the approach was scrapped.
  • It is preferred to resolve the issue at the service layer
  • This requires analysis of how the ODL yang parser works and whether there is any issue with its current implementation in CPS

Pros:

  • Minimal code change (approach will require us to iterate over multiple data trees passed as JSON array)

Cons:

  • A new endpoint needs to be added. Need to decide on the endpoint path because the current POST API uses "nodes".
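The iteration that this option requires (see the pros above) can be sketched in a few lines. This is illustrative only, not CPS code, and it assumes each array element is a JSON object with no `{`/`}` characters inside string values:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: splits a payload like [{...},{...},{...}] into the
// individual element strings by tracking brace depth, so each data tree can
// be stored with one Create-node call.
public class JsonArraySplitter {

    public static List<String> splitArrayElements(final String jsonArray) {
        final List<String> elements = new ArrayList<>();
        int depth = 0;
        int start = -1;
        for (int i = 0; i < jsonArray.length(); i++) {
            final char c = jsonArray.charAt(i);
            if (c == '{') {
                if (depth++ == 0) {
                    start = i;                    // first '{' opens an element
                }
            } else if (c == '}') {
                if (--depth == 0) {               // matching '}' closes it
                    elements.add(jsonArray.substring(start, i + 1));
                }
            }
        }
        return elements;
    }

    public static void main(final String[] args) {
        splitArrayElements("[{\"a-leaf\":\"a-value\"},{\"x-leaf\":\"x-value\"}]")
            .forEach(System.out::println);
    }
}
```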

...

NormalizedNodeResult | NormalizedNodeContainerBuilder


Code Block
private static NormalizedNode<?, ?> parseJsonData(final String jsonData, final SchemaContext schemaContext,
        final Optional<DataSchemaNode> optionalParentSchemaNode) {

    final var jsonCodecFactory = JSONCodecFactorySupplier.DRAFT_LHOTKA_NETMOD_YANG_JSON_02
        .getShared((EffectiveModelContext) schemaContext);
    final var normalizedNodeResult = new NormalizedNodeResult();
    final var normalizedNodeStreamWriter = ImmutableNormalizedNodeStreamWriter
        .from(normalizedNodeResult);

    try (final JsonParserStream jsonParserStream = optionalParentSchemaNode.isPresent()
            ? JsonParserStream.create(normalizedNodeStreamWriter, jsonCodecFactory, optionalParentSchemaNode.get())
            : JsonParserStream.create(normalizedNodeStreamWriter, jsonCodecFactory)) {
        final var jsonReader = new JsonReader(new StringReader(jsonData));
        jsonParserStream.parse(jsonReader);
    } catch (final IOException | JsonSyntaxException exception) {
        //TODO
    } catch (final IllegalStateException illegalStateException) {
        //TODO
    }
    return normalizedNodeResult.getResult();
}



Code Block
private static Collection<DataContainerChild<? extends YangInstanceIdentifier.PathArgument, ?>> parseJsonData(
        final String jsonData, final SchemaContext schemaContext,
        final Optional<DataSchemaNode> optionalParentSchemaNode) {

    final var jsonCodecFactory = JSONCodecFactorySupplier.DRAFT_LHOTKA_NETMOD_YANG_JSON_02
        .getShared((EffectiveModelContext) schemaContext);
    final DataContainerNodeBuilder<YangInstanceIdentifier.NodeIdentifier, ContainerNode> dataContainerNodeBuilder =
        Builders.containerBuilder()
            .withNodeIdentifier(new YangInstanceIdentifier.NodeIdentifier(schemaContext.getQName()));
    final var normalizedNodeStreamWriter = ImmutableNormalizedNodeStreamWriter
        .from(dataContainerNodeBuilder);

    try (final JsonParserStream jsonParserStream = optionalParentSchemaNode.isPresent()
            ? JsonParserStream.create(normalizedNodeStreamWriter, jsonCodecFactory, optionalParentSchemaNode.get())
            : JsonParserStream.create(normalizedNodeStreamWriter, jsonCodecFactory)) {
        final var jsonReader = new JsonReader(new StringReader(jsonData));
        jsonParserStream.parse(jsonReader);
    } catch (final IOException | JsonSyntaxException exception) {
        throw new DataValidationException(
            "Failed to parse json data: " + jsonData, exception.getMessage(), exception);
    } catch (final IllegalStateException illegalStateException) {
        throw new DataValidationException(
            "Failed to parse json data. Unsupported xpath or json data:" + jsonData,
            illegalStateException.getMessage(), illegalStateException);
    }
    final ContainerNode result = dataContainerNodeBuilder.build();
    return result.getValue();
}


  • In the above method, the NormalizedNodeStreamWriter is instantiated using a NormalizedNodeResult
  • A NormalizedNodeStreamWriter instantiated from a NormalizedNodeResult creates "one" instance of a top-level NormalizedNode, and the type of this NormalizedNode is determined by the start of the first node event (in this case the container representing the first data tree, hence the NormalizedNode is a single ContainerNode).
  • So when JSON data with multiple data trees is passed, the single NormalizedNode instance created by the NormalizedNodeStreamWriter is consumed while parsing the first data tree.
Code Block
{
  "first-container": {
    "a-leaf": "a-value",
    "b-leaf": "b-value"
  },						//Value of NormalizedNodeResult is set after parsing first tree
  "last-container": {
    "x-leaf": "x-value",
    "y-leaf": "y-value"
  }
}
  • Once the first data tree is parsed, the value of NormalizedNodeResult, which acts as a flag indicating that a data tree has been successfully parsed, is set.
  • At this point the code tries to parse the remaining data trees as well, but since the NormalizedNodeResult value was already set after parsing the first tree, parsing the remaining data trees fails and throws:
    ResultAlreadySetException: Normalized Node result was already set.
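The set-once behaviour that produces this exception can be mimicked with a minimal sketch. `OnceOnlyResult` is an illustrative stand-in, not ODL code; only the exception message is taken from the observed failure above:

```java
// Illustrative stand-in for NormalizedNodeResult: it accepts exactly one
// top-level result, so a writer that finishes a second top-level data tree
// fails, mirroring ODL's ResultAlreadySetException.
public class OnceOnlyResult {

    private Object result;

    public void setResult(final Object node) {
        if (result != null) {
            throw new IllegalStateException("Normalized Node result was already set.");
        }
        result = node;
    }

    public Object getResult() {
        return result;
    }

    public static void main(final String[] args) {
        final OnceOnlyResult onceOnlyResult = new OnceOnlyResult();
        onceOnlyResult.setResult("first-container");      // first data tree parses fine
        try {
            onceOnlyResult.setResult("last-container");   // second data tree fails
        } catch (final IllegalStateException exception) {
            System.out.println(exception.getMessage());
        }
    }
}
```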
  • In the updated method, the NormalizedNodeStreamWriter is instantiated using a DataContainerNodeBuilder, which builds a NormalizedNode
  • A NormalizedNodeStreamWriter instantiated from a DataContainerNodeBuilder creates multiple instances of top-level NormalizedNodes (the containers representing each data tree)
  • The type of NormalizedNode is determined by individual node events.
  • All the created nodes are written to the builder
  • The builder finally returns a ContainerNode which contains the collection of DataNodes, each DataNode representing a Data tree.
  • This collection of Data nodes can then be used to store multiple data trees to the database.
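The builder-based behaviour described above can be contrasted with the set-once result using a minimal sketch. `CollectingContainerBuilder` is an illustrative stand-in, not ODL's DataContainerNodeBuilder:

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

// Illustrative stand-in for DataContainerNodeBuilder: every finished
// top-level tree is written as one more child (no "already set" state),
// and build() returns all of them together.
public class CollectingContainerBuilder {

    private final List<Object> children = new ArrayList<>();

    public CollectingContainerBuilder withChild(final Object child) {
        children.add(child);              // one child per parsed data tree
        return this;
    }

    public Collection<Object> build() {
        return List.copyOf(children);     // the collection of data nodes
    }

    public static void main(final String[] args) {
        final Collection<Object> dataTrees = new CollectingContainerBuilder()
            .withChild("first-container")
            .withChild("last-container")
            .build();
        System.out.println(dataTrees.size()); // prints 2
    }
}
```

The returned collection corresponds to the data nodes that can then be stored individually in the database.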

...

  • Get a node
    When passing the xpath value as root (/) in the query parameter, the expected response should return all the data trees stored under a particular anchor. But currently the Get API only returns the first data tree under that anchor

    Code Block
    //Expected Response with xpath= / as query parameter
    {
      "first-container": {
        "a-leaf": "a-Value"
      },
      "last-container": {
        "x-leaf": "x-value"
      }
    }


    Code Block
    //Response received
    {
        "multipleDataTree:first-container": {
            "a-leaf": "a-Value"
        }
    }


    Code Block
    //Confirmation test to check that all data trees exist under the particular anchor by passing individual container names
    
    // Response when xpath=/first-container in query parameter
    {
         "multipleDataTree:first-container": {
             "a-leaf": "a-Value"
         }
     }
    
    
    // Response when xpath=/last-container in query parameter
    {
        "multipleDataTree:last-container": {
            "x-leaf": "x-value"
        }
    }


     

  • Update a node
    The update node API performs update operation on a single data tree. In order to perform the update operation on an existing data tree the update API first performs a get operation using the following method:

    Code Block
    public void updateDataNodeAndDescendants(final String dataspaceName, final String anchorName,
            final DataNode dataNode) {

        // performing a Get operation to retrieve a Fragment entity
        final FragmentEntity fragmentEntity =
            getFragmentWithoutDescendantsByXpath(dataspaceName, anchorName, dataNode.getXpath());

        // updated code returns a list of Fragment entities
        final List<FragmentEntity> fragmentEntities =
            getFragmentWithoutDescendantsByXpath(dataspaceName, anchorName, dataNode.getXpath());

        updateFragmentEntityAndDescendantsWithDataNode(fragmentEntity, dataNode);
        try {
            fragmentRepository.save(fragmentEntity);
        } catch (final StaleStateException staleStateException) {
            throw new ConcurrencyException("Concurrent Transactions",
                String.format("dataspace :'%s', Anchor : '%s' and xpath: '%s' is updated by another transaction.",
                    dataspaceName, anchorName, dataNode.getXpath()));
        }
    }


    • The getFragmentWithoutDescendantsByXpath() method used to return a single FragmentEntity using the getFragmentByXpath() method.

    • The same method was also used by the GET API to get a single data tree.
    • But now that we have support for multiple data trees under a single Anchor, getFragmentByXpath() returns multiple FragmentEntities, one for each respective data tree, when the xpath is set to root (/).
    • Since getFragmentWithoutDescendantsByXpath() also uses getFragmentByXpath() to get FragmentEntities, it now returns a list of Fragment Entities
    • But it is to be noted that this operation will not be feasible for the Update operation with xpath set to root (/)
    • For instance:
      • If there are 10 data trees stored under an Anchor
      • And an update operation is to be performed on any 2 of the data trees
      • Then it is not feasible to get all the data trees by passing the xpath as root and then update only the desired data trees (in this example, 2 data trees)
      • Rather, it is better to update the data trees individually
    • An argument can be made: what if there is a need to update all, or the majority of, the data trees?
    • An update operation with xpath as root could be feasible in such a case, but the probability of such a scenario is something to be thought upon
    • Also, in a scenario where a major chunk of the old data trees is changed, a fresh data node can be created containing the updated data trees and the old data trees can be deleted.
  • The same scenario can be brought forward for the Replace a Node API as well (need to add it here)
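The "update individually" recommendation above can be sketched with an in-memory stand-in for the fragment store. The Map-based store and the method name `updateIndividually` are illustrative, not CPS code; each entry represents one data tree keyed by its xpath:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch: updating selected data trees individually touches only
// the targeted entries, instead of fetching everything under root (/) first.
public class SelectiveTreeUpdate {

    public static void updateIndividually(final Map<String, String> store,
                                          final Map<String, String> updates) {
        updates.forEach((xpath, newTree) -> {
            if (!store.containsKey(xpath)) {
                throw new IllegalArgumentException("No data tree at xpath " + xpath);
            }
            store.put(xpath, newTree);            // one targeted write per tree
        });
    }

    public static void main(final String[] args) {
        final Map<String, String> store = new LinkedHashMap<>();
        for (int i = 1; i <= 10; i++) {
            store.put("/tree-" + i, "original");  // 10 data trees under one anchor
        }
        // Update only 2 of the 10 trees, as in the example above.
        updateIndividually(store, Map.of("/tree-3", "updated", "/tree-7", "updated"));
        System.out.println(store.get("/tree-3") + " " + store.get("/tree-1")); // prints: updated original
    }
}
```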