...
Problem Statement | Possible Solutions | Sub-Tasks | Notes/Comments | Pros/Cons |
---|
Add support for JSON data with multiple top-level nodes. Refer: ONAP Jira CPS-341 |
|
| Identify the top-level data nodes in the existing JSON payload, then iterate over them and store them individually using the Create node endpoint | - Build a separate PoC that parses each top-level JSON node individually
- The Get operation needs to be updated accordingly; it currently returns only the first data tree even when multiple data trees exist under the same anchor. A new Get data node API is needed
- Analyse the parser currently used in CPS, i.e. the ODL YANG parser
| - gson has support for this
- The CPS team preferred Jackson over gson
- A PoC was presented to the CPS team; since it handled JSON data at the controller layer, the approach was scrapped.
- It is preferred to resolve the issue at the service layer
- This requires analysing how the ODL YANG parser works and whether there is any issue with its current use in CPS
| Pros: - No changes to the current payload
- No need to write a new API; the existing API will serve the purpose
- Backwards compatible
Cons: - This approach is more complex than using a JSON array as the payload. Logic is needed to iterate over multiple data trees: most libraries use the top-level JSON elements (in this case the container names) to identify the number of data trees and return the corresponding value under each, so the data must be handled within CPS in order to obtain the entire JSON payload as is.
|
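The parsing logic this approach requires (identify each top-level node and store it individually) can be sketched in plain Java. This is a hypothetical illustration using simple brace-depth scanning, not the actual CPS code; a real implementation would use Jackson or the YANG parser at the service layer, and this sketch assumes a well-formed JSON object with no escaped quotes in keys.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: split a JSON object with multiple top-level members
// into one single-member JSON object per data tree, so each tree could be
// stored individually via the existing Create node endpoint.
public final class TopLevelSplitter {

    public static List<String> splitTopLevel(final String json) {
        final String body = json.trim();
        // strip the outer '{' and '}'
        final String inner = body.substring(1, body.length() - 1);
        final List<String> trees = new ArrayList<>();
        int depth = 0;
        boolean inString = false;
        int start = 0;
        for (int i = 0; i < inner.length(); i++) {
            final char c = inner.charAt(i);
            if (c == '"' && (i == 0 || inner.charAt(i - 1) != '\\')) {
                inString = !inString;
            }
            if (inString) {
                continue; // ignore braces and commas inside string literals
            }
            if (c == '{' || c == '[') depth++;
            if (c == '}' || c == ']') depth--;
            if (c == ',' && depth == 0) {
                // a top-level comma separates two data trees
                trees.add('{' + inner.substring(start, i).trim() + '}');
                start = i + 1;
            }
        }
        trees.add('{' + inner.substring(start).trim() + '}');
        return trees;
    }

    public static void main(final String[] args) {
        final String payload =
            "{\"first-container\":{\"a-leaf\":\"a-value\"},"
            + "\"last-container\":{\"x-leaf\":\"x-value\"}}";
        // each element is a complete single-tree JSON document
        splitTopLevel(payload).forEach(System.out::println);
    }
}
```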
Use a JSON array as the payload and store it in the CPS DB using the Create node endpoint | - A new payload needs to be defined
Sample: Code Block |
---|
[{...},{...},{...}] |
- New test cases need to be added
- The Get operation also needs to be updated to retrieve multiple data trees, so a new endpoint is needed
- The existing Get nodes API needs to be tested in depth.
| - The payload needs to be changed. The payload format will be added soon.
- Need to identify which endpoints might be affected by this change
- A PoC was presented to the CPS team; since it handled JSON data at the controller layer, the approach was scrapped.
- It is preferred to resolve the issue at the service layer
- This requires analysing how the ODL YANG parser works and whether there is any issue with its current use in CPS
| Pros: - Minimal code change (the approach requires iterating over multiple data trees passed as a JSON array)
Cons: - A new endpoint needs to be added; the endpoint name needs to be decided because the current Post API uses "nodes".
|
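For the JSON array approach, each array element is already a complete data tree, so the create logic only needs to iterate over the elements. A minimal hypothetical sketch (plain Java string scanning standing in for a proper JSON library; assumes well-formed JSON with no escaped quotes):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of handling the proposed JSON array payload: each
// element of the array is one data tree that could be passed to the
// existing create-node logic in turn.
public final class ArrayPayloadSplitter {

    public static List<String> elements(final String arrayPayload) {
        final String trimmed = arrayPayload.trim();
        final String body = trimmed.substring(1, trimmed.length() - 1); // strip [ and ]
        final List<String> dataTrees = new ArrayList<>();
        int depth = 0;
        boolean inString = false;
        int start = 0;
        for (int i = 0; i < body.length(); i++) {
            final char c = body.charAt(i);
            if (c == '"') inString = !inString;
            if (inString) continue; // ignore separators inside string literals
            if (c == '{' || c == '[') depth++;
            if (c == '}' || c == ']') depth--;
            if (c == ',' && depth == 0) {
                // a top-level comma separates two array elements
                dataTrees.add(body.substring(start, i).trim());
                start = i + 1;
            }
        }
        dataTrees.add(body.substring(start).trim());
        return dataTrees;
    }

    public static void main(final String[] args) {
        final String payload = "[{\"first-container\":{}},{\"last-container\":{}}]";
        elements(payload).forEach(System.out::println);
    }
}
```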
...
NormalizedNodeResult | NormalizedNodeContainerBuilder |
---|
Code Block |
---|
private static NormalizedNode<?, ?> parseJsonData(final String jsonData, final SchemaContext schemaContext,
final Optional<DataSchemaNode> optionalParentSchemaNode) {
final var jsonCodecFactory = JSONCodecFactorySupplier.DRAFT_LHOTKA_NETMOD_YANG_JSON_02
.getShared((EffectiveModelContext) schemaContext);
final var normalizedNodeResult = new NormalizedNodeResult();
final var normalizedNodeStreamWriter = ImmutableNormalizedNodeStreamWriter
.from(normalizedNodeResult);
try (final JsonParserStream jsonParserStream = optionalParentSchemaNode.isPresent()
? JsonParserStream.create(normalizedNodeStreamWriter, jsonCodecFactory, optionalParentSchemaNode.get())
: JsonParserStream.create(normalizedNodeStreamWriter, jsonCodecFactory)) {
final var jsonReader = new JsonReader(new StringReader(jsonData));
jsonParserStream.parse(jsonReader);
} catch (final IOException | JsonSyntaxException exception) {
//TODO
} catch (final IllegalStateException illegalStateException) {
//TODO
}
return normalizedNodeResult.getResult();
}
|
|
Code Block |
---|
private static Collection<DataContainerChild<? extends YangInstanceIdentifier.PathArgument,
?>> parseJsonData(final String jsonData, final SchemaContext schemaContext,
final Optional<DataSchemaNode> optionalParentSchemaNode) {
final var jsonCodecFactory = JSONCodecFactorySupplier.DRAFT_LHOTKA_NETMOD_YANG_JSON_02
.getShared((EffectiveModelContext) schemaContext);
final DataContainerNodeBuilder<YangInstanceIdentifier.NodeIdentifier, ContainerNode>
dataContainerNodeBuilder = Builders.containerBuilder().withNodeIdentifier(new
YangInstanceIdentifier.NodeIdentifier(schemaContext.getQName()));
final var normalizedNodeStreamWriter = ImmutableNormalizedNodeStreamWriter
.from(dataContainerNodeBuilder);
try (final JsonParserStream jsonParserStream = optionalParentSchemaNode.isPresent()
? JsonParserStream.create(normalizedNodeStreamWriter, jsonCodecFactory,
optionalParentSchemaNode.get()) : JsonParserStream.create(normalizedNodeStreamWriter,
jsonCodecFactory)) {
final var jsonReader = new JsonReader(new StringReader(jsonData));
jsonParserStream.parse(jsonReader);
} catch (final IOException | JsonSyntaxException exception) {
throw new DataValidationException(
"Failed to parse json data: " + jsonData, exception.getMessage(), exception);
} catch (final IllegalStateException illegalStateException) {
throw new DataValidationException(
"Failed to parse json data. Unsupported xpath or json data:" + jsonData,
illegalStateException.getMessage(), illegalStateException);
}
final ContainerNode result = dataContainerNodeBuilder.build();
return result.getValue();
} |
|
- In the above method, the NormalizedNodeStreamWriter is instantiated using a NormalizedNodeResult
- A NormalizedNodeStreamWriter created from a NormalizedNodeResult produces exactly one top-level NormalizedNode, whose type is determined by the first node-start event (in this case the container representing the first data tree, so the NormalizedNode is a single ContainerNode).
- So when JSON data with multiple data trees is passed, the single NormalizedNode instance created by the NormalizedNodeStreamWriter is consumed while parsing the first data tree, and the remaining trees are lost.
Code Block |
---|
{
"first-container": {
"a-leaf": "a-value",
"b-leaf": "b-value"
}, //Value of NormalizedNodeResult is set after parsing first tree
"last-container": {
"x-leaf": "x-value",
"y-leaf": "y-value"
}
} |
| - In the updated method, the NormalizedNodeStreamWriter is instantiated using a DataContainerNodeBuilder, which builds a type of NormalizedNode
- A NormalizedNodeStreamWriter created from a DataContainerNodeBuilder produces multiple top-level NormalizedNodes (the containers representing each data tree)
- The type of each NormalizedNode is determined by its individual node events.
- All the created nodes are written to the builder
- The builder finally returns a ContainerNode containing a collection of DataNodes, each DataNode representing one data tree.
- This collection of data nodes can then be used to store multiple data trees in the database.
|
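The behavioural difference described above can be illustrated with stand-in types (these are hypothetical classes, not the actual yangtools API): a single-result holder accepts only one top-level node, while a container builder accumulates every child.

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

// Minimal sketch contrasting a single-result holder (mirroring
// NormalizedNodeResult) with a container builder that accepts multiple
// top-level children (mirroring DataContainerNodeBuilder).
public final class BuilderSketch {

    // Mirrors NormalizedNodeResult: only one top-level node can be set.
    static final class SingleResult {
        private String result;
        void set(final String node) {
            if (result != null) {
                throw new IllegalStateException("Result already set");
            }
            result = node;
        }
        String get() { return result; }
    }

    // Mirrors DataContainerNodeBuilder: accumulates every child node.
    static final class ContainerBuilder {
        private final List<String> children = new ArrayList<>();
        ContainerBuilder withChild(final String node) {
            children.add(node);
            return this;
        }
        Collection<String> build() { return children; }
    }

    public static void main(final String[] args) {
        final SingleResult singleResult = new SingleResult();
        singleResult.set("first-container");
        // singleResult.set("last-container"); // would throw IllegalStateException

        final ContainerBuilder builder = new ContainerBuilder();
        builder.withChild("first-container").withChild("last-container");
        System.out.println(builder.build()); // prints [first-container, last-container]
    }
}
```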
Impact analysis on
...
remaining CPS core APIs
After updating the post operation to support multiple data trees, the following issues were noticed with the remaining APIs in CPS:
Get a node
When passing the xpath value as root (/) in the query parameter, the response should contain all the data trees stored under the particular anchor, but currently the Get API returns only the first data tree under that anchor.
Code Block |
---|
//Expected Response with xpath= / as query parameter
{
"first-container": {
"a-leaf": "a-Value"
},
"last-container": {
"x-leaf": "x-value"
}
} |
Code Block |
---|
//Response received
{
"multipleDataTree:first-container": {
"a-leaf": "a-Value"
}
} |
Code Block |
---|
//Confirmation test to check that all data trees exist under the particular anchor by passing individual container names
// Response when xpath=/first-container in query parameter
{
"multipleDataTree:first-container": {
"a-leaf": "a-Value"
}
}
// Response when xpath=/last-container in query parameter
{
"multipleDataTree:last-container": {
"x-leaf": "x-value"
}
} |
Update a node
The update node API performs an update operation on a single data tree. To update an existing data tree, the update API first performs a get operation using the following method:
Code Block |
---|
public void updateDataNodeAndDescendants(final String dataspaceName, final String anchorName,
final DataNode dataNode) {
//current code performs a get operation that retrieves a single fragment entity
final FragmentEntity fragmentEntity =
getFragmentWithoutDescendantsByXpath(dataspaceName, anchorName, dataNode.getXpath());
//updated code would instead return a list of fragment entities, one per data tree
final List<FragmentEntity> fragmentEntities =
getFragmentWithoutDescendantsByXpath(dataspaceName, anchorName, dataNode.getXpath());
updateFragmentEntityAndDescendantsWithDataNode(fragmentEntity, dataNode);
try {
fragmentRepository.save(fragmentEntity);
} catch (final StaleStateException staleStateException) {
throw new ConcurrencyException("Concurrent Transactions",
String.format("dataspace :'%s', Anchor : '%s' and xpath: '%s' is updated by another transaction.",
dataspaceName, anchorName, dataNode.getXpath()));
}
} |
- The same scenario applies to the Replace a Node API as well (need to add it here)
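A hypothetical sketch of the direction the update flow would take once the get operation returns a list of fragment entities: each entity is updated and saved in turn. All types below are stand-ins for the real CPS persistence classes, not the actual CPS code.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: update and save every fragment entity returned by
// the get operation, rather than a single one.
public final class UpdateSketch {

    // Stand-in for the real FragmentEntity persistence class.
    static final class FragmentEntity {
        String xpath;
        String attributes;
        FragmentEntity(final String xpath) { this.xpath = xpath; }
    }

    // Stand-in for fragmentRepository.save(...)
    static final List<FragmentEntity> saved = new ArrayList<>();

    static void save(final FragmentEntity fragmentEntity) {
        saved.add(fragmentEntity);
    }

    static void updateDataNodeAndDescendants(final List<FragmentEntity> fragmentEntities,
                                             final String newAttributes) {
        for (final FragmentEntity fragmentEntity : fragmentEntities) {
            fragmentEntity.attributes = newAttributes; // update each data tree
            save(fragmentEntity);
        }
    }

    public static void main(final String[] args) {
        final List<FragmentEntity> fragmentEntities = List.of(
            new FragmentEntity("/first-container"),
            new FragmentEntity("/last-container"));
        updateDataNodeAndDescendants(fragmentEntities, "{\"leaf\":\"new-value\"}");
        System.out.println(saved.size() + " fragment(s) saved"); // prints 2 fragment(s) saved
    }
}
```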