
Table of Contents

Needed Stories

Resources needs to handle GET & PUT of network sequence time (truth) - DONE

Resources needs to handle GET by DB Time and Network Sequence Time (truth) - DONE

Gizmo needs to be updated to handle a PUT of network sequence time (truth) - DONE

Spike needs to be updated to handle network sequence time (truth) - DONE

Synapse needs to be updated to handle GET flow-through of the network sequence time and DB time parameters to Chameleon - DONE

Chameleon needs to be updated to handle the network sequence time (truth) - DONE

LIFESPAN not to be supported in Casablanca.

Chameleon/Gallifrey need to handle all deployment stories (dockerization/runbook/etc.) and adhere to JUnit coverage requirements - Amdocs to create these stories

Historical Meta-properties

Outlined in the following ppt: historyScenarios.pptx

Historical Work Breakdown

Historical Tracking Iterations Story Assignment.xlsx

mS Pros/Cons

Why are we breaking the functionality out into these granular levels? Mainly for scalability and deployment/maintenance flexibility:

  • Independent Development – each microservice can be developed independently around its own functionality
  • Independent Deployment – microservices can be deployed individually
  • Fault Isolation – if one service fails, the rest of the system continues to function and the fault is easy to localize
  • Mixed Technology Stack – different languages and technologies can be used to build different services of the same application
  • Granular Scaling – individual components can scale as needed; there is no need to scale all components together
  • Ease of Unit Testing – unit tests are easier to maintain because functionality is isolated per microservice

Some disadvantages of splitting the functionality into these granular microservices:

  • Full-stack error/log traceability is harder
  • Added inter-service latency
  • Deployment is more complicated
  • End-to-end testing is more complicated
  • More points of failure

High Level Design of microservice flow


Resources - Client exposed endpoints

Gizmo - CRUD abstraction subsystem

Synapse - Data/request router; proxies traffic to the various microservices based on its built-in rules

Champ - General purpose graph database abstraction

Spike - Publishes DMaaP or Kafka events and attempts to ensure ordering

Chameleon - Subsystem that processes Spike events from the real-time graph updates, formats the requests into the format needed by Gallifrey/Time Machine, and serves as the entry point to Gallifrey/Time Machine

Gallifrey/Time Machine - Subsystem that makes & retrieves historical assertions


PUT/POST/PATCH/DELETE to real-time flow

Resources > Gizmo > Champ > Real-time DB Cluster

This triggers the historical storage flow:

Champ > Spike > Chameleon > Gallifrey > Historical Champ > Historical DB Cluster
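To make the hand-off in this flow concrete, here is a hedged sketch of how a Chameleon-style consumer might translate a Spike graph-update event into the Gallifrey/TM assertion call. The event field names and helper are assumptions — the real Spike event schema is not shown on this page; only the query parameters (actor, changes-only, create, nt-k) come from the API spec below.

```python
# Illustrative only: map a (hypothetical) Spike graph-update event onto
# the Gallifrey/TM PUT described in the API spec. Event fields and the
# helper name are assumptions, not the actual interfaces.

def to_gallifrey_request(event: dict) -> dict:
    """Translate a Spike-style event into a PUT with actor + network
    timestamp query params, per the Gallifrey/TM API spec."""
    kind = "relationship" if event.get("relationship-id") else "entity"
    obj_id = event.get("relationship-id") or event["entity-id"]
    return {
        "method": "PUT",
        "uri": f"{kind}/{obj_id}",
        "params": {
            "actor": event.get("source-of-truth", "unknown"),
            "changes-only": "true",  # let Gallifrey/TM diff the payload
            "create": "true" if event.get("operation") == "CREATE" else "false",
            "nt-k": event.get("network-timestamp"),
        },
        "body": event.get("payload", {}),
    }

req = to_gallifrey_request({
    "entity-id": "pserver-1",
    "operation": "CREATE",
    "source-of-truth": "A&AI",
    "network-timestamp": "1527084000",
    "payload": {"hostname": "pserver-1"},
})
print(req["uri"], req["params"]["create"])  # entity/pserver-1 true
```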

GET of real-time data

Resources > Gizmo > Synapse > Real-time Champ > Real-time DB Cluster

GET of historical data

Resources > Gizmo > Synapse > Chameleon > Gallifrey/Time Machine > Historical Champ > Historical DB Cluster

I feel Chameleon could be bypassed for this flow, treating Chameleon's functionality as just the DMaaP history processor, and Synapse would call Gallifrey/TM directly. This way, if Chameleon goes down, Gallifrey/TM could still service historical GET requests.


...

Resources will be updated to accept a timestamp or a network timestamp, which will trickle down through Gizmo to Synapse and then to Chameleon.

If the timestamp is sent on a non-singular node call, we will return a message stating that this functionality is not supported.
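As an illustrative sketch of that guard (function and parameter handling are hypothetical, not the actual Resources code), a collection-level call carrying a history timestamp would be rejected:

```python
# Hypothetical sketch: reject history-timestamp queries on non-singular
# (collection) calls, per the behavior described above. Names are
# illustrative, not the actual Resources implementation.

HISTORY_PARAMS = {"t-k", "nt-k"}

def validate_history_request(path: str, params: dict) -> tuple[bool, str]:
    """A call is 'singular' here if it addresses one node, e.g.
    'entity/<ID>'; a bare collection path like 'entity' is not."""
    is_singular = len([p for p in path.strip("/").split("/") if p]) >= 2
    if HISTORY_PARAMS & set(params) and not is_singular:
        return False, "historical query on non-singular node call is not supported"
    return True, "ok"

ok, msg = validate_history_request("entity/abc-123", {"nt-k": "1527084000"})
print(ok)          # True: singular call with a network timestamp is allowed
ok2, msg2 = validate_history_request("entity", {"t-k": "1527084000"})
print(ok2, msg2)   # False: collection call with a history timestamp is rejected
```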

Gallifrey/Time Machine API Spec:


Type

URI

Query Params

Description

Champ Interaction

GET

relationship/<ID>

t-k=Timestamp that specifies knowledge, i.e. when we received the assertion in Gallifrey/TM

nt-k=Timestamp that specifies a time assertion made by the client for when the change (update/add/delete) occurred in the network

meta=[true/false] if true, the payload retrieved will hold the metaproperties at t-k or nt-k

Retrieve a relationship by ID

Champ needs to handle accepting a relationship id and the timestamp to run a historical query on the graph db. A subgraph strategy will be used in champ to filter on the relative timestamp provided.

GET

relationship/<ID>/lifespan

NOT in scope for Casablanca - decided on call 5/23/2018

None

Retrieve all the timestamps for create, update, delete operations against this relationship

Champ would be called to retrieve the lifespan on the relationship with meta=true and t-k=lifespan to retrieve all the metaproperties on the relationship

GET

entity/<ID>

t-k=Timestamp that specifies knowledge, i.e. when we received the assertion in Gallifrey/TM

nt-k=Timestamp that specifies a time assertion made by the client for when the change (update/add/delete) occurred in the network

meta=[true/false] if true, the payload retrieved will hold the metaproperties at t-k or nt-k

Retrieve an entity by ID

Champ needs to handle accepting an entity id and the timestamp to run a historical query on the graph db, retrieving the asserted state. By default metaproperties are not sent back; if metaproperties are needed, a parameter must be sent to champ. A subgraph strategy will be used in champ to filter on the relative timestamp provided.

GET

entity/<ID>/lifespan

NOT in scope for Casablanca - decided on call 5/23/2018

None

Retrieve all the timestamps for create, update, delete operations against this entity

Champ would be called to retrieve the lifespan on the entity with meta=true and t-k=lifespan to retrieve all the metaproperties on the entity

PUT

relationship/<ID>

actor=name of the system making the assertion

changes-only=[true|false] if true, Gallifrey/TM will determine what has changed between the PUT payload and the most recent set of assertions for the relationship. If false, the entire PUT body will be considered a new set of assertions whether anything has changed or not.

create=[true|false] if true, Gallifrey/TM assumes this is a create request; if false, it assumes it is an update

nt-k=Timestamp that specifies a time assertion made by the client for when the change (update/add/delete) occurred in the network

Asserts that a relationship is to be created or updated (depending on the query parameters passed in). This API appends new assertions against the specified relationship.

t-k=Timestamp that specifies knowledge, i.e. when we received the assertion - generated in Gallifrey/TM (why are we generating this here? if there is a maintenance issue, our timings would be out of sync with when these changes took place in the real-time db)

create = false (changes-only true (execute diff) / false (assume everything changed)):

When an assertion is made without a network timestamp, Gallifrey/TM will call champ requesting the relationship with its most current metaproperties. Gallifrey/TM would then adjust the metaproperties (of the updated properties) and send the payload back to champ with the new current state and the updated previous state's metaproperties. Champ would override its current metaproperties (for the updated/deleted properties) with the old and current metaproperties sent from Gallifrey/TM. Added properties would be added directly with the metaproperties sent from Gallifrey/TM.

When an assertion is made with a network timestamp, Gallifrey/TM will call champ requesting the relationship with all of its metaproperties. Gallifrey/TM would then insert the new assertion where appropriate (adjusting neighboring metaproperties) and send the modified payload back to champ for a replace.

create = true:

POST - this would be a new create, and Gallifrey/TM would pass the metaproperties on each of its property values and on the relationship itself.

PUT

entity/<ID>

actor=name of the system making the assertion

changes-only=[true|false] if true, Gallifrey/TM will determine what has changed between the PUT payload and the most recent set of assertions for the entity. If false, the entire PUT body will be considered a new set of assertions whether anything has changed or not.

create=[true|false] if true, Gallifrey/TM assumes this is a create request; if false, it assumes it is an update

nt-k=Timestamp that specifies a time assertion made by the client for when the change (update/add/delete) occurred in the network

Asserts that an entity is to be created or updated (depending on the query parameters passed in). This API appends new assertions against the specified entity.

t-k=Timestamp that specifies knowledge, i.e. when we received the assertion - generated in Gallifrey/TM

create = false (changes-only true (execute diff) / false (assume everything changed)):

When an assertion is made without a network timestamp, Gallifrey/TM will call champ requesting the entity with its most current metaproperties. Gallifrey/TM would then adjust the metaproperties (of the updated properties) and send the payload back to champ with the new current state and the updated previous state's metaproperties. Champ would override its current metaproperties (for the updated/deleted properties) with the old and current metaproperties sent from Gallifrey/TM. Added properties would be added directly with the metaproperties sent from Gallifrey/TM.

When an assertion is made with a network timestamp, Gallifrey/TM will call champ requesting the entity with all of its metaproperties. Gallifrey/TM would then insert the new assertion where appropriate (adjusting neighboring metaproperties) and send the modified payload back to champ for a replace.

create = true:

POST - this would be a new create, and Gallifrey/TM would pass the metaproperties on each of its property values and on the entity itself.

DELETE

relationship/<ID>

actor=name of the system making the assertion

nt-k=Timestamp that specifies a time assertion made by the client for when the change (update/add/delete) occurred in the network

Asserts that a relationship has been deleted.

t-k=Timestamp that specifies knowledge, i.e. when we received the assertion - generated in Gallifrey/TM

Gallifrey/TM would request the latest relationship from champ and would set the dbEndTime of all of its properties to t-k, along with the dbEndTime on the relationship itself.

DELETE

entity/<ID>

actor=name of the system making the assertion

nt-k=Timestamp that specifies a time assertion made by the client for when the change (update/add/delete) occurred in the network

Asserts that an entity has been deleted.

t-k=Timestamp that specifies knowledge, i.e. when we received the assertion - generated in Gallifrey/TM

Gallifrey/TM would request the latest entity from champ and would set the dbEndTime of all of its properties to t-k, along with the dbEndTime on the entity itself.
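The create = false (changes-only) behavior described above can be modeled as a diff-then-close operation over per-property assertions. The following is an illustrative model only — field names follow the metaproperty definitions on this page, dbStartTime is an assumed counterpart of dbEndTime, and the logic is a sketch, not the actual Gallifrey/TM implementation:

```python
import copy

# Illustrative model of the create=false / changes-only flow: diff the
# PUT payload against the most recent assertions, close out superseded
# values (set dbEndTime / endSOT), and append new ones. Field names
# mirror the metaproperty list on this page; dbStartTime is assumed.

def apply_assertion(current: dict, payload: dict, actor: str, t_k: int) -> dict:
    """current maps property name -> list of assertions, newest last.
    Each assertion: {"value", "dbStartTime", "dbEndTime", "startSOT", "endSOT"}."""
    updated = copy.deepcopy(current)
    for prop, new_value in payload.items():
        history = updated.setdefault(prop, [])
        latest = history[-1] if history else None
        if latest and latest["dbEndTime"] is None and latest["value"] == new_value:
            continue  # changes-only=true: unchanged property, no new assertion
        if latest and latest["dbEndTime"] is None:
            latest["dbEndTime"] = t_k   # previous state is no longer current
            latest["endSOT"] = actor
        history.append({"value": new_value, "dbStartTime": t_k,
                        "dbEndTime": None, "startSOT": actor, "endSOT": None})
    return updated

state = {"oper-status": [{"value": "up", "dbStartTime": 100,
                          "dbEndTime": None, "startSOT": "A&AI", "endSOT": None}]}
state = apply_assertion(state, {"oper-status": "down"}, actor="SDNC", t_k=200)
print(state["oper-status"][0]["dbEndTime"])  # 200: old value closed out
print(state["oper-status"][1]["value"])      # down
```

Re-asserting the same value with changes-only semantics appends nothing, which is exactly the dedup the flag is meant to buy.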

...

Type

URI

Query Params

Description

GET

relationship/<ID>

t-k=Timestamp that specifies knowledge ie: when we received the assertion in Gallifrey/TM

nt-k=Timestamp that specifies a time assertion made by the client for when the change (update/add/delete) occurred in the network

Retrieve a relationship by ID

GET

entity/<ID>

t-k=Timestamp that specifies knowledge ie: when we received the assertion in Gallifrey/TM

nt-k=Timestamp that specifies a time assertion made by the client for when the change (update/add/delete) occurred in the network

Retrieve an entity by ID
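As a minimal sketch of assembling the historical GET described in this table — the base URL is a placeholder; only the t-k / nt-k / meta parameters come from the spec on this page:

```python
from typing import Optional
from urllib.parse import urlencode

# Sketch: build a historical GET against the retrieval API above.
# Host/base path are placeholders; the query parameters are the ones
# defined in the spec (t-k, nt-k, meta).

def historical_get_url(base: str, kind: str, obj_id: str,
                       t_k: Optional[str] = None, nt_k: Optional[str] = None,
                       meta: bool = False) -> str:
    params = {}
    if t_k is not None:
        params["t-k"] = t_k      # knowledge time (when Gallifrey/TM learned it)
    if nt_k is not None:
        params["nt-k"] = nt_k    # network time asserted by the client
    if meta:
        params["meta"] = "true"  # include metaproperties in the payload
    query = urlencode(params)
    return f"{base}/{kind}/{obj_id}" + (f"?{query}" if query else "")

url = historical_get_url("https://host:9522/gallifrey/v1", "entity", "pserver-1",
                         nt_k="t1", meta=True)
print(url)  # https://host:9522/gallifrey/v1/entity/pserver-1?nt-k=t1&meta=true
```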

...

Between Chameleon->Gallifrey/TM, the following calls are made:

...

Chameleon needs to be updated with the following calls:

GET entity/<ID>?nt-k=<timestamp>

GET relationship/<ID>?nt-k=<timestamp>

Champ API Spec

...

https://<host>:9522/services/champ-service/v1/objects/<key>?nt-k=t1
https://<host>:9522/services/champ-service/v1/objects/<key>?nt-k=t1&meta=true

UPDATE an object

...

URL: https://<host>:9522/services/champ-service/v1/relationships/<key>?nt-k=t1
URL: https://<host>:9522/services/champ-service/v1/relationships/<key>?nt-k=t1&meta=true
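Conceptually, the subgraph strategy behind these timestamped calls reduces to filtering each property's assertion window against the requested time. The following is a toy stand-in for that predicate, not the real Champ/graph-db strategy:

```python
# Toy illustration of the timestamp filter behind these calls: keep
# only property assertions whose [dbStartTime, dbEndTime) window
# contains the requested time. A conceptual stand-in for Champ's
# subgraph strategy, not its real implementation.

def valid_at(assertion: dict, ts: int) -> bool:
    start = assertion["dbStartTime"]
    end = assertion["dbEndTime"]   # None means "still current"
    return start <= ts and (end is None or ts < end)

def snapshot(properties: dict, ts: int) -> dict:
    """Collapse full history into the single value each property had at ts."""
    return {name: a["value"]
            for name, history in properties.items()
            for a in history if valid_at(a, ts)}

history = {"oper-status": [
    {"value": "up", "dbStartTime": 100, "dbEndTime": 200},
    {"value": "down", "dbStartTime": 200, "dbEndTime": None},
]}
print(snapshot(history, 150))  # {'oper-status': 'up'}
print(snapshot(history, 250))  # {'oper-status': 'down'}
```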

UPDATE an object

...

dbEndTime - when an entity or relationship was deleted from the db, or a property's value was asserted to another state

ntStartTime - asserted by the client as to when the change took place in the network

ntEndTime - set when an assertion provided by the client makes the current state no longer true

startSOT - the source of truth that made the assertion

endSOT - the source of truth that made an assertion to make the current state no longer true
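Putting the fields above together, one asserted property value carried through history might look like the following. Values are made up, and dbStartTime is an assumed counterpart of dbEndTime that this page implies but does not define:

```python
import json

# Illustrative record combining the metaproperties defined above for
# one asserted property value. All values are invented; field names are
# the ones listed on this page, plus the assumed dbStartTime.

assertion = {
    "property": "oper-status",
    "value": "up",
    "dbStartTime": 1527084000,  # assumed: when this value entered the historical db
    "dbEndTime": 1527090000,    # when it was superseded or deleted
    "ntStartTime": 1527083000,  # client-asserted time of the change in the network
    "ntEndTime": 1527089000,    # client assertion that made this state untrue
    "startSOT": "A&AI",         # source of truth that made the assertion
    "endSOT": "SDNC",           # source of truth that ended it
}
print(json.dumps(assertion, indent=2))
```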

Schema

Separate db edge rules file: with all relationships many to many except parent child which could be one to many

Separate schema file: no properties on vertices are unique except for aai-uuid


Open question: what happens in the event that db edge rules change etc. (migrations)

When To Record T-K value (Live DB vs Historical DB)

Below is an explanation of the thoughts around when to record the t-k value, examples explaining outcome of storing at live vs historical DB and their pros/cons.

Attached file: HistoricalData - WritingTK Value.pdf

GUI Mocks

New integrated functionality (updates for history)

...