In the current implementation, ACM supports multiple participants with the same supported element type but different participantId values, so each participant needs its own properties file.

In order to support replicas, ACM needs to support multiple participants that use the same properties file.

Note: 

  • In a scenario with a high number of compositions, a participant restart will be slowed down: ACM-runtime sends the participant a message for each composition primed and each instance deployed.
    To avoid the restarting action, the participant needs database support;
  • In a scenario where a participant is stuck in deploying, the instance will go into TIMEOUT and the user can take an action such as deploying again or undeploying. In that scenario the intermediary-participant has to receive the next message, kill the thread that is stuck in deploying and create a new thread.
  • In a scenario where the number of participants increases, it could be useful to have different topic names for source and sink. This would reduce the number of useless messages in Kafka (a configuration sketch follows the example below).
    Example: 
    • for ACM-runtime:
      • sink: POLICY-ACM-PARTICIPANT
      • source: POLICY-ACM-RUNTIME
    • for participant: 
      • sink: POLICY-ACM-RUNTIME
      • source: POLICY-ACM-PARTICIPANT
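
A minimal sketch of how the split topics might look in the participant configuration, assuming the usual topicSources/topicSinks parameter layout; the property paths and values are illustrative, not the current configuration:

participant-topics.yaml (illustrative)
participant:
  intermediaryParameters:
    clampAutomationCompositionTopics:
      topicSources:
        - topic: POLICY-ACM-PARTICIPANT     # participant source: messages from ACM-runtime
          servers:
            - ${topicServer:kafka:9092}
          topicCommInfrastructure: kafka
          fetchTimeout: 15000
      topicSinks:
        - topic: POLICY-ACM-RUNTIME         # participant sink: messages to ACM-runtime
          servers:
            - ${topicServer:kafka:9092}
          topicCommInfrastructure: kafka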


Sequence diagram (deploy / timeout / undeploy) between ACM Runtime, Participant-intermediary and Participant: ACM Runtime sends an [ASYNC] "Deploying the instance" message; the intermediary creates a Deploy thread, which gets stuck, and the instance is set to TIMEOUT; when the [ASYNC] "Undeploying the instance" message arrives, the intermediary terminates the Deploy thread, creates an Undeploy thread, and the "instance Undeployed" result is reported back [ASYNC] to ACM Runtime.
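
A minimal sketch of how the intermediary could terminate a stuck deploy task and start the undeploy task, assuming an ExecutorService-based design; class and method names are hypothetical, not the current intermediary API:

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical sketch: one running task per instance, cancelled when the next message arrives.
public class InstanceTaskManager {
    private final ExecutorService executor = Executors.newCachedThreadPool();
    private final Map<UUID, Future<?>> runningTasks = new ConcurrentHashMap<>();

    // Called when the undeploy message arrives while a deploy task may still be stuck.
    public void undeploy(UUID instanceId, Runnable undeployAction) {
        var stuckDeploy = runningTasks.remove(instanceId);
        if (stuckDeploy != null) {
            stuckDeploy.cancel(true); // interrupt the thread that is stuck in deploying
        }
        runningTasks.put(instanceId, executor.submit(undeployAction));
    }
}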

Solutions

Solution 1: Replicas and Dynamic participantId - still using cache

Changes in Participant intermediary:

  • The UUID participantId will be generated in memory instead of being fetched from the properties file.
  • The consumerGroup will be generated in memory instead of being fetched from the properties file (see the sketch below).
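
A minimal sketch of the in-memory generation, assuming the identity is created once at start-up; class and field names are illustrative:

import java.util.UUID;

// Illustrative only: each replica generates its own identity at start-up
// instead of reading it from the properties file.
public class ParticipantIdentity {
    private final UUID participantId = UUID.randomUUID();
    private final String consumerGroup = "ppnt-" + UUID.randomUUID();

    public UUID getParticipantId() {
        return participantId;
    }

    public String getConsumerGroup() {
        return consumerGroup;
    }
}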

Changes in ACM-runtime:

  • When a participant goes OFF_LINE:
    • if there are compositions connected to that participant, ACM-runtime will look for another ON_LINE participant with the same supported element type (see the sketch after this list);
    • if another ON_LINE participant is present, it will move the connection of all compositions and instances to it;
    • after that, it will execute a restart of all those compositions and instances on the ON_LINE participant.
  • When a participant REGISTER is received:
    • it will check whether there are compositions connected to an OFF_LINE participant with the same supported element type;
    • if there are, it will move the connection of all compositions and instances to the newly registered participant;
    • after that, it will execute a restart of all the compositions and instances that were moved.
    • Refactor the restarting scenario to apply the restart only to compositions and instances in transition.
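
A minimal sketch of the handover step described above, with provider and concept names that are hypothetical rather than the current ACM-runtime API:

import java.util.List;
import java.util.UUID;

// Illustrative handover: reassign compositions of an OFF_LINE participant to an
// ON_LINE participant with the same supported element types, then restart them.
public class ParticipantHandover {

    public void handleOffline(Participant offline, List<Participant> participants) {
        participants.stream()
                .filter(Participant::isOnline)
                .filter(p -> p.getSupportedElementTypes().equals(offline.getSupportedElementTypes()))
                .findFirst()
                .ifPresent(target -> reassignAndRestart(offline.getId(), target.getId()));
    }

    private void reassignAndRestart(UUID from, UUID to) {
        // 1. update the participantId on all compositions and instances connected to 'from'
        // 2. send restart messages for the moved compositions and instances to 'to'
    }

    // Minimal participant view used by this sketch.
    public interface Participant {
        UUID getId();
        boolean isOnline();
        List<String> getSupportedElementTypes();
    }
}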

Issues:

  • Participants create the participantId and the Kafka consumerGroup randomly. This solution has been tested and has the issue of creating a new Kafka queue in the restarting scenario.
    During the restart scenario a new consumerGroup is created, which causes some initial messages to be missed while the new Kafka queue is being created. The result is that the participant fails to receive the messages from ACM-runtime needed to restore compositions and instances.

Solution 2: StatefulSets - still uses cache

Participant replicas can be run as a Kubernetes StatefulSet whose pods consume different properties files, each with a unique UUID and a unique consumer group.

The StatefulSet uses the SPRING_CONFIG_NAME environment variable pointing to the Spring application properties file that is unique to each participant replica.

Each of the properties files, with names such as pod-0.yaml and pod-1.yaml, is mounted into a volume, and the SPRING_CONFIG_NAME variable can be set to /path/to/$HOSTNAME.yaml to use the corresponding properties file.

With this approach the participant can have multiple replicas with different UUIDs and Kafka consumer groups working on shared data.

env:
  - name: HOSTNAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  # $(HOSTNAME) is the Kubernetes dependent-variable syntax; it resolves to the pod name defined above
  - name: SPRING_CONFIG_NAME
    value: /path/to/$(HOSTNAME).yaml

For example, considering the http participant replicas, ${HOSTNAME} will be "policy-http-ppnt-0" and "policy-http-ppnt-1", and their corresponding properties files, named "http-ppnt-0.yaml" and "http-ppnt-1.yaml", are volume mounted.
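
A minimal sketch of the StatefulSet with the per-replica files mounted from a ConfigMap; resource names, image and paths are illustrative assumptions, not the current deployment:

statefulset.yaml (illustrative)
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: policy-http-ppnt
spec:
  serviceName: policy-http-ppnt
  replicas: 2
  selector:
    matchLabels:
      app: policy-http-ppnt
  template:
    metadata:
      labels:
        app: policy-http-ppnt
    spec:
      containers:
        - name: policy-http-ppnt
          image: onap/policy-clamp-ac-http-ppnt   # illustrative image name
          env:
            - name: HOSTNAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: SPRING_CONFIG_NAME
              value: /opt/app/policy/config/$(HOSTNAME).yaml
          volumeMounts:
            - name: ppnt-config
              mountPath: /opt/app/policy/config
      volumes:
        - name: ppnt-config
          configMap:
            name: policy-http-ppnt-config   # holds one properties file per pod name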

Note: In a scenario with two participant replicas (here called "policy-http-ppnt-0" and "policy-http-ppnt-1"), ACM-runtime will randomly assign each composition definition at prime time to a specific participant based on the supported element definition type. So we could have a scenario where the composition definition "composition 1.0.0" and its instance are assigned to policy-http-ppnt-0, while the new composition "composition 1.0.1" is assigned to policy-http-ppnt-1. In that scenario the migration of an instance from "composition 1.0.0" to "composition 1.0.1" would not work, because policy-http-ppnt-0 does not have "composition 1.0.1" assigned.

Issues:

  • At migration time - the migration of an instance from "composition 1.0.0" to "composition 1.0.1" would not work, because policy-http-ppnt-0 does not have "composition 1.0.1" assigned. This is a critical issue.

Solution 3: Replicas and Database support - no cache

Changes in Participant intermediary:

  • Redesign the TimeOut scenario: the participant has the responsibility to stop the thread in execution after a specific time.
  • Add client support for a database (MariaDB or PostgreSQL).
  • Add a mock database for Unit Tests.
  • Refactor CacheProvider into a ParticipantProvider that supports insert/update in the intermediary-participant with transactions (see the sketch after this list).
  • Refactor the Intermediary to use the insert/update of ParticipantProvider.
  • Refactor Participants that use their own HashMap in memory (the Policy Participant saves policies and policy types in memory).
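
A minimal sketch of what the persistence-backed provider could look like, assuming Spring Data JPA in the intermediary persistence packages named below; the entity, repository and method names are hypothetical:

import java.util.Optional;
import java.util.UUID;
import jakarta.persistence.Entity;
import jakarta.persistence.Id;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

// Hypothetical JPA repository replacing the in-memory cache.
interface AutomationCompositionRepository extends JpaRepository<JpaAutomationComposition, String> {
}

// Hypothetical entity: a persisted copy of the data previously held by CacheProvider.
@Entity
class JpaAutomationComposition {
    @Id
    private String instanceId;
    // composition definition, deploy state, element states, ...
}

// ParticipantProvider sketch: insert/update wrapped in transactions.
@Service
class ParticipantProvider {
    private final AutomationCompositionRepository repository;

    ParticipantProvider(AutomationCompositionRepository repository) {
        this.repository = repository;
    }

    @Transactional
    public void saveAutomationComposition(JpaAutomationComposition automationComposition) {
        repository.save(automationComposition);
    }

    @Transactional(readOnly = true)
    public Optional<JpaAutomationComposition> findAutomationComposition(UUID instanceId) {
        return repository.findById(instanceId.toString());
    }
}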

Changes in Participant:

  • Add @EnableJpaRepositories and @EntityScan in Application:
  • Application
    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.boot.autoconfigure.domain.EntityScan;
    import org.springframework.boot.context.properties.ConfigurationPropertiesScan;
    import org.springframework.context.annotation.ComponentScan;
    import org.springframework.data.jpa.repository.config.EnableJpaRepositories;

    @SpringBootApplication
    @EnableJpaRepositories({
        "org.onap.policy.clamp.acm.participant.intermediary.persistence.repository"
    })
    @ComponentScan({
        "org.onap.policy.clamp.acm.participant.sim",
        "org.onap.policy.clamp.acm.participant.intermediary"
    })
    @EntityScan({
        "org.onap.policy.clamp.acm.participant.intermediary.persistence.concepts"
    })
    @ConfigurationPropertiesScan("org.onap.policy.clamp.acm.participant.sim.parameters")
    public class Application {
    
        public static void main(String[] args) {
            SpringApplication.run(Application.class, args);
        }
    }
    
    
  • Add the database connection to the properties file, and add a properties file for tests:
properties.yaml
spring:
  security:
    user:
      name: participantUser
      password: zb!XztG34
  mvc:
    converters:
      preferred-json-mapper: gson
  datasource:
    url: jdbc:mariadb://${mariadb.host:localhost}:${mariadb.port:3306}/participantsim
    driverClassName: org.mariadb.jdbc.Driver
    username: policy
    password: P01icY
    hikari:
      connectionTimeout: 30000
      idleTimeout: 600000
      maxLifetime: 1800000
      maximumPoolSize: 10
  jpa:
    hibernate:
      ddl-auto: update
      naming:
        physical-strategy: org.hibernate.boot.model.naming.PhysicalNamingStrategyStandardImpl
        implicit-strategy: org.onap.policy.common.spring.utils.CustomImplicitNamingStrategy
    properties:
      hibernate:
        format_sql: true
properties-test.yaml
spring:
  datasource:
    url: jdbc:h2:mem:testdb
    driverClassName: org.h2.Driver
    hikari:
      maxLifetime: 1800000
      maximumPoolSize: 3
  jpa:
    hibernate:
      ddl-auto: create
    open-in-view: false
  • Unit Tests may need some changes

Changes in docker/Kubernetes environment

  • Refactor CSIT to support database configuration for participants
  • Refactor OOM to support database configuration for participants
  • The DB Migrator must be added to the helm chart and docker environments. The database schema of the older and newer versions will be different, since the cached data is now stored in the DB.

Addition of DB Migrator

  • The DB Migrator will alter the old version of the DB to add the new parts of the schema required by this participant change
  • Liquibase is used for script generation (see the sketch after this list)
  • A separate image is needed for the DB Migrator - this will have to be released as a new dependency
  • A new Job in Kubernetes and a new service in Docker should be added for this migration
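
A minimal sketch of a Liquibase changelog for this kind of schema change; the table and column names are purely illustrative, since the real schema depends on the intermediary entities:

changelog.yaml (illustrative)
databaseChangeLog:
  - changeSet:
      id: participant-intermediary-0100
      author: policy-clamp
      changes:
        - createTable:
            tableName: automationcompositionelement
            columns:
              - column:
                  name: elementId
                  type: varchar(255)
                  constraints:
                    primaryKey: true
                    nullable: false
              - column:
                  name: participantId
                  type: varchar(255)
              - column:
                  name: deployState
                  type: varchar(255)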

Advantages of DB use

  • Multiple participant replicas are possible - the solution can deal with messages across many participant replicas
  • All participant replicas should have the same group-id in Kafka
  • All should have the same participant-id (see the sketch below).
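
A minimal sketch of the shared configuration that every replica would load, assuming the participantId and the consumer group become fixed values in a single properties file; the property paths are assumptions based on the existing participant configuration layout:

shared-participant.yaml (illustrative)
participant:
  intermediaryParameters:
    participantId: 101c62b3-8918-41b9-a747-d21eb79c6c03   # same UUID for every replica
    clampAutomationCompositionTopics:
      topicSources:
        - topic: policy-acruntime-participant
          servers:
            - ${topicServer:kafka:9092}
          topicCommInfrastructure: kafka
          consumerGroup: policy-clamp-ppnt                 # same consumer group for every replica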

Solution 4: Distributed Cache

Issues:

  • Not persistent - if the application that handles the cache server restarts, the data is lost.
  • Approval issues - with Redis, Etcd, Search Engine.

Optimal Solution:

After analysis, it is clear that the best solution to use is number 3.

  • An arbitrary number of participants is possible
  • The DB Migrator upgrades older versions
  • The restart scenario is not applicable anymore and could be removed
  • Approval is not an issue - PostgreSQL is already used by ACM
  • The DB will be created automatically - as are the required tables

Older participant versions support (Regression)

  • Do they have to upgrade to the newest participant version? No, but if they want the new functionality they need to upgrade.

