Backlog

Number | Notes | Relates To | Type | Assignee | Rank

Investigate ways of persisting policy state information with different structures at run time. One approach could be to use Avro & Apache Hive. [There is also a NoSQL/JSON option available in MariaDB and Postgres]

Another option is the Apache Cassandra database. DataStax provides a Java Driver for Apache Cassandra (see the sketch below).

Writing plugins towards state information stored in CPS might be another approach that could work across all the PDPs.
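
As a rough illustration of the Cassandra option, the sketch below uses the DataStax Java Driver (4.x) to store per-policy state as an opaque JSON string. The keyspace, table and column names are made up for the example, and a local Cassandra node is assumed.

    import com.datastax.oss.driver.api.core.CqlSession;

    public class CassandraStateStoreSketch {
        public static void main(String[] args) {
            // With no explicit contact point, the driver connects to 127.0.0.1:9042
            try (CqlSession session = CqlSession.builder().build()) {
                session.execute("CREATE KEYSPACE IF NOT EXISTS policy WITH replication = "
                    + "{'class': 'SimpleStrategy', 'replication_factor': 1}");
                session.execute("CREATE TABLE IF NOT EXISTS policy.policy_state "
                    + "(policy_id text PRIMARY KEY, state text)");

                // The state column holds an opaque JSON document, so each policy
                // can persist a differently structured payload at run time
                session.execute(
                    "INSERT INTO policy.policy_state (policy_id, state) VALUES (?, ?)",
                    "example-policy", "{\"phase\":\"ACTIVE\",\"counters\":{\"fired\":3}}");
            }
        }
    }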

Relates To: POLICY-2898 | Type: Epic | Rank: 5

Multi-cluster database support

Relates To: POLICY-1821 | Type: Story | Rank: 1

Allow the DB to be changed based on user needs, e.g. Postgres instead of MariaDB.

Relates To: POLICY-1787 | Type: Story | Rank: 3 (xacml & drools need to be tested with Postgres)

Upgrade/rollback of database tables

Relates To: POLICY-2715 | Type: Story | Assignee: Kevin Timoney | Rank: 1 (follows backup/restore JIRA)

References to the DB should be moved from persistence.xml to a properties file to facilitate the use of alternate databases.

We can modify the current code to read the location of a properties file from a system property:


-Djdbc.properties=/path/to/jdbc.properties

    import java.io.FileInputStream;
    import java.util.Properties;

    String propertiesPath = System.getProperty("jdbc.properties");
    Properties jdbcProperties = new Properties();

    if (propertiesPath != null)
    {
        // try-with-resources closes the stream even if load() throws
        try (FileInputStream in = new FileInputStream(propertiesPath))
        {
            // load() is an instance method on the Properties object
            jdbcProperties.load(in);
        }
    }
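
Once loaded, those properties can be passed to JPA at bootstrap so they override whatever persistence.xml declares. A minimal sketch, assuming a persistence unit named PolicyDb and jdbc.* keys in the properties file (both names are made up here); the javax.persistence.jdbc.* keys are the standard JPA override names:

    import java.util.HashMap;
    import java.util.Map;
    import java.util.Properties;
    import javax.persistence.EntityManagerFactory;
    import javax.persistence.Persistence;

    public class JpaBootstrapSketch {
        public static EntityManagerFactory create(Properties jdbcProperties) {
            Map<String, String> overrides = new HashMap<>();
            // Standard JPA keys; values come from the externalized properties file
            overrides.put("javax.persistence.jdbc.driver", jdbcProperties.getProperty("jdbc.driver"));
            overrides.put("javax.persistence.jdbc.url", jdbcProperties.getProperty("jdbc.url"));
            overrides.put("javax.persistence.jdbc.user", jdbcProperties.getProperty("jdbc.user"));
            overrides.put("javax.persistence.jdbc.password", jdbcProperties.getProperty("jdbc.password"));

            // Overrides take precedence over matching entries in persistence.xml,
            // so pointing at Postgres instead of MariaDB needs no code change
            return Persistence.createEntityManagerFactory("PolicyDb", overrides);
        }
    }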

Kubernetes

There are four different ways that you can use a ConfigMap to configure a container inside a Pod:

  • Inside a container command and args
  • Environment variables for a container
  • Add a file in read-only volume, for the application to read
  • Write code to run inside the Pod that uses the Kubernetes API to read a ConfigMap

Kubernetes ConfigMap
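
For example, options 2 and 3 above could feed the same externalized DB configuration discussed earlier. In the sketch below, the JDBC_URL environment variable and the /etc/config/jdbc.properties mount path are hypothetical names that a ConfigMap would populate via env.valueFrom.configMapKeyRef or a volume mount:

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.util.Properties;

    public class DbConfigSketch {
        public static Properties load() throws IOException {
            Properties props = new Properties();
            String url = System.getenv("JDBC_URL");   // option 2: env var injected from a ConfigMap key
            if (url != null) {
                props.setProperty("jdbc.url", url);
            } else {
                // option 3: ConfigMap mounted as a read-only volume inside the Pod
                try (FileInputStream in = new FileInputStream("/etc/config/jdbc.properties")) {
                    props.load(in);
                }
            }
            return props;
        }
    }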


Type: Story | Rank: 3

Invalid target-database property in persistence.xml in apex pdp.

Persistence.xml should use eclipselink.target-database

e.g. <property name="eclipselink.target-database" value="MySQL" />


Type: Task | Assignee: Ajith Sreekumar (reassign to Kevin) | Rank: 3 (bug fix) | Status: Done

Currently, the models Provider classes manage transactions. Transaction management should be moved to the client for better performance and atomicity. This will also eliminate the need for caching on the client side.

Note: The @Transactional annotation can be used to group several operations into a single transaction.

Ideally transactions should be executed serially in order to avoid dirty reads, non-repeatable reads and phantom reads. 

Note: DatabasePolicyModelsProviderImpl in the models-provider package provides APIs such as createServiceTemplate, updateServiceTemplate and deleteServiceTemplate, which in turn call the corresponding APIs from AuthorativeToscaProvider in the models-tosca package.
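
As one possible shape for client-side transaction management, the sketch below assumes a Spring-managed transaction manager; the service and repository names are illustrative, not existing Policy Framework classes:

    import org.springframework.stereotype.Service;
    import org.springframework.transaction.annotation.Transactional;

    // Stand-in repository; the real provider/DAO layer has different signatures
    interface ServiceTemplateRepository {
        void deleteByName(String name);
        void save(String name, String body);
    }

    @Service
    public class ServiceTemplateClientSketch {
        private final ServiceTemplateRepository repository;

        public ServiceTemplateClientSketch(ServiceTemplateRepository repository) {
            this.repository = repository;
        }

        // Grouping both operations in one @Transactional method means they commit
        // or roll back together, so readers never observe a half-applied change
        // and the client no longer needs its own cache to hide partial updates
        @Transactional
        public void replaceTemplate(String name, String newBody) {
            repository.deleteByName(name);
            repository.save(name, newBody);
        }
    }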


Type: Story | Rank: 5 (depends on Spring direction)

How best to deal with CRUD of data types in policy-api and policy-models. Currently, data types can be created only indirectly, via policy type create requests; update and delete of data types are not possible.

SimpleToscaProvider contains the following methods not available in AuthorativeToscaProvider: getDataTypes, getCascadedDataTypes, createDataTypes, updateDataTypes and deleteDataType.
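
One possible direction, sketched below with placeholder parameter and return types (the real SimpleToscaProvider signatures differ), is to expose thin pass-through methods on AuthorativeToscaProvider so policy-api can offer full data type CRUD:

    // Stub standing in for the real SimpleToscaProvider; placeholder types only
    interface SimpleToscaProviderStub {
        Object getDataTypes(Object dao, String name, String version);
    }

    public class AuthorativeDataTypeFacadeSketch {
        private final SimpleToscaProviderStub simpleProvider;

        public AuthorativeDataTypeFacadeSketch(SimpleToscaProviderStub simpleProvider) {
            this.simpleProvider = simpleProvider;
        }

        // Pass-through so data type reads no longer require going via policy types
        public Object getDataTypes(Object dao, String name, String version) {
            return simpleProvider.getDataTypes(dao, name, version);
        }
    }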


Type: Story | Assignee: assign to Liam | Rank: 5

Database backup and restore

Relates To: MDEV-23119 | Type: Story | Rank: 1

Note: rename to backup/restore

What policy components need to be centralized vs. de-centralized and moved to tenant namespaces. (R8 ONAP to support multi-tenancy)

Multi-tenancy support in Policy Framework

Note: EclipseLink supports table-per-tenant multi-tenancy: Using Table-Per-Tenant Multi-Tenancy
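
A minimal sketch of the EclipseLink feature, using an illustrative PolicyAudit entity (not an existing Policy Framework table); each tenant gets its own table in its own schema:

    import javax.persistence.Entity;
    import javax.persistence.Id;
    import org.eclipse.persistence.annotations.Multitenant;
    import org.eclipse.persistence.annotations.MultitenantType;
    import org.eclipse.persistence.annotations.TenantTableDiscriminator;
    import org.eclipse.persistence.annotations.TenantTableDiscriminatorType;

    // EclipseLink creates/uses a per-tenant copy of this entity's table
    @Entity
    @Multitenant(MultitenantType.TABLE_PER_TENANT)
    @TenantTableDiscriminator(type = TenantTableDiscriminatorType.SCHEMA)
    public class PolicyAudit {
        @Id
        private long id;
        private String action;
    }

The tenant is then selected at run time by setting the eclipselink.tenant-id property when the EntityManagerFactory or EntityManager is created.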


Type: Story | Rank: 8 (design is available)

Clean up/roll up of old DB data. A purge/archive job should be created and run at a scheduled interval.

MariaDB Event Scheduler | Postgres pg_cron

MariaDB allows for the partitioning of tables: Partitioning Overview
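
If a DB-native scheduler is not available, the purge could also run application-side. A sketch, assuming a local MariaDB instance, an operationshistory table with an endtime column (both names are assumptions) and a 30-day retention window:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class PurgeJobSketch {
        public static void main(String[] args) {
            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
            scheduler.scheduleAtFixedRate(() -> {
                try (Connection conn = DriverManager.getConnection(
                         "jdbc:mariadb://localhost:3306/policy", "user", "password");
                     PreparedStatement stmt = conn.prepareStatement(
                         "DELETE FROM operationshistory WHERE endtime < NOW() - INTERVAL 30 DAY")) {
                    int purged = stmt.executeUpdate();   // rows past the retention window
                    System.out.println("Purged " + purged + " rows");
                } catch (Exception e) {
                    e.printStackTrace();                 // keep the scheduler alive on failure
                }
            }, 0, 24, TimeUnit.HOURS);                   // run once a day
        }
    }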



Type: Story | Rank: 4 (for purge), 6 (for roll-up)

Note: create separate tickets for purge & roll-up, and separate tickets for operations history and statistics

Concurrent DB access issues in control loop POC.
Type: Bug | Assignee: Ramesh Murugan Iyer | Rank: bug (related to transaction handling?)

Investigate what's involved in switching to Spring.
Type: Story | Assignee: Liam Fallon | Rank: 4

Externalizing ONAP DBs to a separate namespace

We would like to improve the current situation by implementing a two-stage deployment:

1) Deploy all required DB engines (can be done using community charts or any user chart)

2) Deploy ONAP components & configure them to make use of those engines


This would allow users to bring their own database for ONAP, whether it is running in the same k8s cluster or has been provided by some DBaaS solution. Additionally, it makes our deployment more modular and configurable and thus may result in significant footprint savings.




Rank: 6

How much statistics data should be stored
Type: Task

JPA table-creation errors
Type: Task | Rank: 2







Jira relationships