Backlog
Notes | Relates To | Type | Assignee |
---|---|---|---|
Investigate ways of persisting policy state information with different structures at run time. One approach could be to use Avro and Apache Hive. [There is also a NoSQL/JSON option available in MariaDB and PostgreSQL.] Another option is the Apache Cassandra database. Writing plugins against state information stored in CPS might be another approach that could work across all the PDPs. | POLICY-2898 | Epic | |
Need to determine a strategy and/or design for multi-cluster support (MariaDB Galera Cluster, PostgreSQL clustering). Note: PostgreSQL does not natively support a multi-master clustering solution, as MySQL and Oracle do. | POLICY-1821 | Story | |
Allow the DB to be changed based on user needs, e.g. substituting PostgreSQL for MariaDB. | POLICY-1787 | Epic | |
Move table creation into the upgrade/downgrade/install scripts in order to support upgrade/rollback of ONAP releases. Liquibase may be the required solution. The policy database DDL can be exported from MariaDB to a file using HeidiSQL. | POLICY-2715 | Story | Jorge Hernandez |
References to the DB should be moved from persistence.xml to a properties file to facilitate the use of alternate databases. We can modify the current code to read the location of a JDBC properties file from a system property, e.g. `-Djdbc.properties=/path/to/jdbc.properties`: `String propertiesPath = System.getProperty("jdbc.properties"); if (propertiesPath != null) { FileInputStream in = new FileInputStream(propertiesPath); try { jdbcProperties = new Properties(); jdbcProperties.load(in); } finally { in.close(); } }` Kubernetes note: there are four different ways to use a ConfigMap to configure a container inside a Pod. | | Story | |
Invalid target-database property in persistence.xml in apex-pdp. persistence.xml should use eclipselink.target-database, e.g. `<property name="eclipselink.target-database" value="MySQL" />` | | Task | Ajith Sreekumar |
Currently, the models Provider classes manage transactions. Transaction management should be moved to the client for better performance and atomicity; this will also eliminate the need for caching on the client side. Note: DatabasePolicyModelsProviderImpl in the models-provider package provides REST APIs such as createServiceTemplate, updateServiceTemplate and deleteServiceTemplate, which in turn call the corresponding APIs from AuthorativeToscaProvider in the models-tosca package. | | Story | |
How best to deal with CRUD of data types in policy-api and policy-models. Data types can currently only be created indirectly, via policy type create requests; update and delete of data types are not possible. SimpleToscaProvider contains the following methods that are not available in AuthorativeToscaProvider: getDataTypes, getCascadedDataTypes, createDataTypes, updateDataTypes and deleteDataType. | | Story | |
Recover from corruption of the policy database. [Possibly a bug in MariaDB] | MDEV-23119 | Story | |
The CLC (Control Loop Coordinator) lockingStrategy should allow policy designers to specify the target locking behaviour according to the needs of the use case. [Policy template issue] | POLICY-2588 | Story | Pramod Jamkhedkar |
Ability to implement target locking mechanisms over sub-parts of a target or a collection of targets. [Policy template issue] Note: AuthorativeToscaProvider provides synchronized object thread locking in its methods. | POLICY-2587 | Story | Pramod Jamkhedkar |
Determine which policy components need to be centralized vs. decentralized and moved to tenant namespaces (R8 ONAP to support multi-tenancy). Note: EclipseLink supports table-per-tenant multi-tenancy. | | Story | |
Clean up/roll up of old DB data. A purge/archive job should be created and run at a scheduled interval. | | Story | Jorge Hernandez |
Concurrent DB access issues in the control loop POC. | | Bug | Ramesh Murugan Iyer |
Investigate what's involved in switching to Spring. | | Story | Liam Fallon |
Externalizing ONAP DBs to a separate namespace. We would like to improve the current situation by implementing a two-staged deployment: 1) deploy all required DB engines (can be done using community charts or any user chart); 2) deploy the ONAP components and configure them to use those engines. This would allow a user to bring their own database for ONAP, whether it runs in the same k8s cluster or is provided by some DBaaS solution. Additionally, it makes the deployment more modular and configurable, and may thus result in significant footprint savings. | | | |
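The JDBC-properties backlog item above can be sketched as a small self-contained helper. This is a minimal sketch, not ONAP Policy code: the class name `JdbcConfig` is invented for illustration, and a try-with-resources block replaces the explicit finally/close from the note.

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

/**
 * Illustrative sketch (not actual ONAP code): read the location of a
 * JDBC properties file from the -Djdbc.properties system property,
 * falling back to supplied defaults when the property is not set.
 */
public final class JdbcConfig {

    private JdbcConfig() {
        // utility class, no instances
    }

    /** Loads JDBC properties from the file named by -Djdbc.properties, if any. */
    public static Properties loadJdbcProperties(Properties defaults) throws IOException {
        Properties jdbcProperties = new Properties(defaults);
        String propertiesPath = System.getProperty("jdbc.properties");
        if (propertiesPath != null) {
            // try-with-resources closes the stream even if load() throws
            try (FileInputStream in = new FileInputStream(propertiesPath)) {
                jdbcProperties.load(in);
            }
        }
        return jdbcProperties;
    }
}
```

A caller would then build its EntityManagerFactory (or DataSource) from the returned `Properties` instead of hard-coding values in persistence.xml.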
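For the eclipselink.target-database item, a minimal persistence.xml fragment might look like the following. The persistence-unit name here is a placeholder, not the actual apex-pdp unit; the `MySQL` platform value is the one EclipseLink documents for MySQL-compatible databases such as MariaDB.

```xml
<persistence-unit name="ExamplePdpUnit" transaction-type="RESOURCE_LOCAL">
  <properties>
    <!-- eclipselink.target-database selects the SQL dialect EclipseLink generates -->
    <property name="eclipselink.target-database" value="MySQL" />
  </properties>
</persistence-unit>
```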
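The scheduled purge/archive item above could be expressed as a MariaDB event, shown here as a hedged sketch: the table name `policy_audit_log`, column `created_ts`, and the 90-day retention interval are all invented for illustration and are not the actual policy schema.

```sql
-- Requires the event scheduler to be enabled on the MariaDB server
SET GLOBAL event_scheduler = ON;

-- Hypothetical daily purge of rows older than the retention window
CREATE EVENT IF NOT EXISTS purge_old_policy_audit
ON SCHEDULE EVERY 1 DAY
DO
  DELETE FROM policy_audit_log
  WHERE created_ts < NOW() - INTERVAL 90 DAY;
```

An equivalent approach is an external cron job, which keeps the schedule visible in deployment configuration rather than inside the database.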