Related Jira: CPS-433

VES-HV Collector (Option 1)

The HV-VES collector has been proposed based on the need to process high volumes of data generated frequently by a large number of NFs. It uses plain TCP connections. Connections are stream-based (as opposed to request-based) and long-running. The payload is binary-encoded (currently using Google Protocol Buffers).

Pros: 

  • Designed to support high volumes of data with minimal latency.
  • HV-VES uses a direct connection to DMaaP's Kafka.

Cons:

  • Added dependency on HV-VES DCAE components

Kafka Interfacing using DMaaP client (Option 2)

Message Router is an additional layer on top of DMaaP Kafka that supports message service API interaction with ZooKeeper/Kafka. DmaapClient is a deliverable jar that can be used to interact with the DMaaP Message Router API.

Pros:

  • Designed to support REST calls to Kafka from both publishers and consumers.
  • Predefined APIs in Message Router to create/view/delete a topic in Kafka, to publish a message to a topic, and to subscribe to a topic (a publish sketch follows the cons list below).

Cons:

  • Additional overhead, as an extra layer (Message Router) would sit between CPS and Kafka.
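
For illustration, below is a minimal sketch of publishing an event through the Message Router REST API from Java. The host, port and topic name are assumptions, and the endpoint path follows the standard DMaaP Message Router message-service API; this is not the option selected for CPS.

Message Router publish sketch (hypothetical)
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Hypothetical sketch: publishing an event via DMaaP Message Router's REST API
// instead of connecting to Kafka directly. Host, port and topic are assumptions.
public class MessageRouterPublishSketch {

    public static void main(String[] args) throws Exception {
        String publishUrl = "http://message-router:3904/events/cps.cfg-state-events"; // assumed MR endpoint
        String eventJson = "{\"eventType\":\"update\"}"; // sample payload

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(publishUrl))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(eventJson))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("Message Router responded with status " + response.statusCode());
    }
}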

Kafka Direct interface without using DMaaP client: To be used in CPS (Option 3)

Pros:

  • No additional layer between CPS and DMaaP Kafka.
  • Spring Boot enables easier configuration.
  • CPS can interface directly with Kafka using spring-kafka, which also provides support for message-driven POJOs for publishing and subscribing to events, so CPS does not require a message service API for interacting with Kafka (see the sketch below).
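
As referenced above, a minimal sketch of publishing with spring-kafka from cps-core (the service class is hypothetical and the topic name is taken from the configuration later in this page; the actual CPS code may differ):

spring-kafka publisher sketch (hypothetical)
import org.onap.cps.event.model.CpsDataUpdatedEvent;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

// Minimal illustration of publishing an event with spring-kafka.
// The service class and topic name are assumptions for this sketch.
@Service
public class CpsEventPublisherSketch {

    private final KafkaTemplate<String, CpsDataUpdatedEvent> kafkaTemplate;

    public CpsEventPublisherSketch(final KafkaTemplate<String, CpsDataUpdatedEvent> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    public void publish(final CpsDataUpdatedEvent event) {
        // JSON serialization is handled by the JsonSerializer configured in the application yaml
        kafkaTemplate.send("cps.cfg-state-events", event);
    }
}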

Kafka configuration details need to be added in the application yaml of both the publisher (cps-core) and the consumer (cps-temporal) of the events published to Kafka. This configuration should preferably be defined in the application-helm.yaml included in the OOM charts, to provide flexibility while deploying the application.

The required configuration may change depending on the encryption and authentication mechanism used. It is therefore suggested to use override files to configure the required values for the target environment.

Encryption and Authentication Listener Configuration

The supported security protocols are:

1. PLAINTEXT: a listener without any encryption or authentication. By default, the CPS application is configured to use PLAINTEXT, both with Testcontainers and with docker-compose.


Default Kafka configuration
kafka:
    bootstrap-servers: ${KAFKA_BOOTSTRAP_SERVER}
    security:
        protocol: PLAINTEXT
    # to be added only in cps-core(producer)
    producer:
        client-id: cps-core
        value-serializer: org.springframework.kafka.support.serializer.JsonSerializer
    # to be added only in cps-temporal(consumer)
    consumer:
        group-id: ${KAFKA_CONSUMER_GROUP_ID:cps-temporal-group}
        client-id: cps-temporal
        # Configures the Spring Kafka ErrorHandlingDeserializer that delegates to the 'real' deserializers
        # See https://docs.spring.io/spring-kafka/docs/2.5.11.RELEASE/reference/html/#error-handling-deserializer
        # and https://www.confluent.io/blog/spring-kafka-can-your-kafka-consumers-handle-a-poison-pill/
        key-deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
        value-deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
        properties:
            spring.deserializer.key.delegate.class: org.apache.kafka.common.serialization.StringDeserializer
            spring.deserializer.value.delegate.class: org.springframework.kafka.support.serializer.JsonDeserializer
            spring.json.value.default.type: org.onap.cps.event.model.CpsDataUpdatedEvent
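
As an illustration of how the PLAINTEXT default is exercised in tests, below is a minimal Testcontainers sketch. The Kafka image tag and test class are assumptions; the actual CPS test setup may differ.

Testcontainers PLAINTEXT sketch (hypothetical)
import org.junit.jupiter.api.Test;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.DynamicPropertyRegistry;
import org.springframework.test.context.DynamicPropertySource;
import org.testcontainers.containers.KafkaContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;
import org.testcontainers.utility.DockerImageName;

// Minimal sketch: a Kafka broker started by Testcontainers is reachable over PLAINTEXT,
// so no security configuration is needed in the test profile.
@SpringBootTest
@Testcontainers
class KafkaPlaintextIntegrationSketch {

    @Container
    static final KafkaContainer kafka =
            new KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:5.5.1")); // assumed image tag

    @DynamicPropertySource
    static void kafkaProperties(final DynamicPropertyRegistry registry) {
        registry.add("spring.kafka.bootstrap-servers", kafka::getBootstrapServers);
    }

    @Test
    void contextLoadsWithPlaintextKafka() {
        // Publish/consume assertions against the test broker would go here.
    }
}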


Any other security protocol can be configured using the OOM charts in a Kubernetes environment.


2. SASL_PLAINTEXT using the PLAIN mechanism:

Implements authentication based on usernames and passwords, which are stored locally in the Kafka configuration.

DMaaP Message Router Kafka uses SASL_PLAINTEXT by default.

Properties to be added in values.yaml
kafka:
  sasl_plaintext:
    security:
      protocol: SASL_PLAINTEXT
    ssl:
      trust-store-type:
      trust-store-location:
      trust-store-password:
    properties:
      sasl.mechanism: PLAIN
      sasl.jaas.config: org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="admin_secret";
      ssl.endpoint.identification.algorithm:


The Kafka configuration details can then be provided in the override files as shown below:

Override file configuration
kafka:
    security:
        protocol: '{{ .Values.kafka.sasl_plaintext.security.protocol }}'
    ssl:
        trust-store-type: '{{ .Values.kafka.sasl_plaintext.ssl.trust-store-type }}'
        trust-store-location: '{{ .Values.kafka.sasl_plaintext.ssl.trust-store-location }}'
        trust-store-password: '{{ .Values.kafka.sasl_plaintext.ssl.trust-store-password }}'
    properties:
        sasl.mechanism: '{{ index .Values.kafka.sasl_plaintext.properties "sasl.mechanism" }}'
        sasl.jaas.config: '{{ index .Values.kafka.sasl_plaintext.properties "sasl.jaas.config" }}'


3. SASL_SSL using SCRAM-SHA-256 and SCRAM-SHA-512:

Implements authentication using Salted Challenge Response Authentication Mechanism (SCRAM). SCRAM credentials are stored centrally in ZooKeeper. SCRAM can be used in situations where ZooKeeper cluster nodes are running isolated in a private network.

spring.kafka.ssl-related configuration is required. In order to use TLS encryption and server authentication, a truststore containing the broker's trusted certificates has to be provided, usually as a file in the Java KeyStore (JKS) format.
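
For reference, the same SASL_SSL/SCRAM settings expressed as raw Kafka client properties are sketched below (truststore location, password and credentials are placeholders; the Helm values and override configuration that follow are the form actually proposed):

Equivalent Kafka client properties (sketch)
import java.util.Properties;
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.common.config.SaslConfigs;
import org.apache.kafka.common.config.SslConfigs;

// Sketch only: the raw client properties behind the spring.kafka YAML below.
// Truststore location, password and credentials are placeholders.
public class ScramClientPropertiesSketch {

    public static Properties scramSaslSslProperties() {
        Properties properties = new Properties();
        properties.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_SSL");
        properties.put(SaslConfigs.SASL_MECHANISM, "SCRAM-SHA-512");
        properties.put(SaslConfigs.SASL_JAAS_CONFIG,
                "org.apache.kafka.common.security.scram.ScramLoginModule required "
                        + "username=\"admin\" password=\"admin_secret\";");
        properties.put(SslConfigs.SSL_TRUSTSTORE_TYPE_CONFIG, "JKS");
        properties.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, "/path/to/truststore.jks");
        properties.put(SslConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG, "secret");
        return properties;
    }
}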

Properties to be added in values.yaml
kafka:
  sasl_ssl:
    security:
      protocol: SASL_SSL
    ssl:
      trust-store-type: JKS
      trust-store-location: file:///C:/Users/adityaputhuparambil/ltec-com-strimzi.jks
      trust-store-password: secret
    properties:
      sasl.mechanism: SCRAM-SHA-512
      sasl.jaas.config: org.apache.kafka.common.security.scram.ScramLoginModule required username="admin" password="admin_secret";
      ssl.endpoint.identification.algorithm:


A few additional properties related to SSL also need to be configured, as shown below:

Override file configuration for SASL_SSL
kafka:
    security:
        protocol: '{{ .Values.kafka.sasl_ssl.security.protocol }}'
    ssl:
        trust-store-type: '{{ .Values.kafka.sasl_ssl.ssl.trust-store-type }}'
        trust-store-location: '{{ .Values.kafka.sasl_ssl.ssl.trust-store-location }}'
        trust-store-password: '{{ .Values.kafka.sasl_ssl.ssl.trust-store-password }}'
    properties:
        sasl.mechanism: '{{ index .Values.kafka.sasl_ssl.properties "sasl.mechanism" }}'
        sasl.jaas.config: '{{ index .Values.kafka.sasl_ssl.properties "sasl.jaas.config" }}'

Application-helm configuration:

The final configuration required in application-helm.yaml:

Application-helm changes in OOM charts
spring:
    kafka:
        bootstrap-servers: ${KAFKA_BOOTSTRAP_SERVER}
        security:
            protocol: {{ .Values.kafka.security.protocol }}
        ssl:
            trust-store-type: {{ .Values.kafka.ssl.trust-store-type }}
            trust-store-location: {{ .Values.kafka.ssl.trust-store-location }}
            trust-store-password: {{ .Values.kafka.ssl.trust-store-password }}
        properties:
            sasl.mechanism: '{{ index .Values.kafka.properties "sasl.mechanism" }}'
            sasl.jaas.config: '{{ index .Values.kafka.properties "sasl.jaas.config" }}'
            ssl.endpoint.identification.algorithm:


NOTE: Topics are auto-generated in ONAP DMaaP Kafka; hence topic creation is not covered in the scope of CPS.

Proof of Concept:

The PoC was performed with ONAP DMaaP Message Router Kafka running in a k8s environment (172.16.1.205) in the Nordix lab. The configuration details for both cps-core and cps-temporal are shared below:

Configuration in cps-core using SASL_PLAINTEXT
spring:
    kafka:
        bootstrap-servers: 172.16.3.38:30490
        security:
            protocol: SASL_PLAINTEXT
        properties:
            sasl.mechanism: PLAIN
            sasl.jaas.config: org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="admin_secret";
            ssl.endpoint.identification.algorithm:
        producer:
            client-id: cps-core
            # JSON serialization of the published CpsDataUpdatedEvent payloads
            value-serializer: org.springframework.kafka.support.serializer.JsonSerializer

app:
    kafka:
        consumer:
            topic: ${KAFKA_CONSUMER_TOPIC:cps.cfg-state-events}


Configuration in cps-temporal using SASL_PLAINTEXT
spring:
    kafka:
        bootstrap-servers: 172.16.3.38:30490
        security:
            protocol: SASL_PLAINTEXT
        properties:
            sasl.mechanism: PLAIN
            sasl.jaas.config: org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="admin_secret";
            ssl.endpoint.identification.algorithm:
        consumer:
            group-id: ${KAFKA_CONSUMER_GROUP_ID:cps-temporal-group}
            # Configures the Spring Kafka ErrorHandlingDeserializer that delegates to the 'real' deserializers
            # See https://docs.spring.io/spring-kafka/docs/2.5.11.RELEASE/reference/html/#error-handling-deserializer
            # and https://www.confluent.io/blog/spring-kafka-can-your-kafka-consumers-handle-a-poison-pill/
            key-deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
            value-deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
            properties:
                spring.deserializer.key.delegate.class: org.apache.kafka.common.serialization.StringDeserializer
                spring.deserializer.value.delegate.class: org.springframework.kafka.support.serializer.JsonDeserializer
                spring.json.value.default.type: org.onap.cps.event.model.CpsDataUpdatedEvent

app:
    kafka:
        consumer:
            topic: ${KAFKA_CONSUMER_TOPIC:cps.cfg-state-events}
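
With the consumer configuration above, the message-driven POJO on the cps-temporal side can be as simple as the sketch below (the listener class is hypothetical; the actual listener in cps-temporal may differ):

spring-kafka listener sketch (hypothetical)
import org.onap.cps.event.model.CpsDataUpdatedEvent;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

// Minimal sketch of a message-driven POJO consuming CPS data-updated events.
// Topic and group id are resolved from the application yaml shown above.
@Component
public class DataUpdatedEventListenerSketch {

    @KafkaListener(topics = "${app.kafka.consumer.topic}")
    public void onDataUpdatedEvent(final CpsDataUpdatedEvent event) {
        // The JsonDeserializer delegate has already converted the payload into the event POJO.
        // Persisting the event to the temporal store would happen here.
    }
}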


Note: AAF integration is not included in this documentation, as there is already a Jira (CPS-281) to handle the integration, which is still under discussion.








6 Comments

  1. client-id: ${KAFKA_client_ID:cps}

    Probably good to be able to track producer and consumer applications by having 'cps-core' and 'cps-temporal'.

    Should it be configurable?
    1. I don't think they need to be configurable. As per the discussion, I agree that cps-core and cps-temporal are two different clients and have hence renamed the client-id.

  2. Configuration in values.yaml

    This configuration is related to authentication and encryption, so it should be removed from section 1 and split between section 3 (SASL_PLAINTEXT) and section 4 (SASL_SSL).
  3. Application-helm changes in OOM charts
    I would suggest focusing on the connectivity configuration only; it keeps the documentation simple and scoped. This is the common configuration needed to connect any client application (producers or consumers).

    Configuration related to serializers or deserializers is application specific and not related to connectivity; it neither impacts nor depends on how the application is connected to the broker.
  4. To complete the spike, we should now run a PoC of CPS Core and Temporal connected to DMaaP Kafka, and then document this PoC here.

  5. Hi aditya puthuparambil, can you add a section on the 'with DMaaP Client' option describing the pros and cons for that?