
Jira: CPS-433


VES-HV Collector 

The HV-VES collector has been proposed based on the need to process high volumes of data generated frequently by a large number of NFs. It uses plain TCP connections; the connections are stream-based (as opposed to request-based) and long-running. The payload is binary-encoded (currently using Google Protocol Buffers).

Pros: 

Designed to support high volume of data with minimal latency

HV-VES uses a direct connection to DMaaP's Kafka.

Cons:

Added dependency on HV-VES DCAE components

Direct Kafka interface without using the Message Router / DMaaP client:

Kafka configuration details need to be added in the application yaml of both the publisher (cps-core) and the consumer (cps-temporal) of the events published to Kafka. This configuration should preferably be defined in the application-helm.yaml included in the OOM charts, to provide flexibility while deploying the application.

Based on the encryption and authentication mechanism used, the required configuration could change. Hence it is suggested to use override files for configuring the required values according to the target environment.
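For example, an environment-specific override file can be passed to Helm at deployment time (helm install ... -f kafka-overrides.yaml); the file name here is illustrative.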

Encryption and Authentication Listener Configuration

Supported security protocols are:

1. PLAINTEXT: listener without any encryption or authentication. The CPS application is configured to use PLAINTEXT by default, both with testcontainers and with docker-compose.


Default Kafka configuration
kafka:
    bootstrap-servers: ${KAFKA_BOOTSTRAP_SERVER}
    security:
        protocol: PLAINTEXT
    # to be added only in cps-core(producer)
    producer:
        # 'group-id' applies to consumers only, so it is not set on the producer
        client-id: ${KAFKA_CLIENT_ID:cps}
        key-serializer: org.apache.kafka.common.serialization.StringSerializer
        value-serializer: org.springframework.kafka.support.serializer.JsonSerializer
    # to be added only in cps-temporal(consumer)
    consumer:
        group-id: ${KAFKA_CONSUMER_GROUP_ID:cps-temporal-group}
        client-id: ${KAFKA_CLIENT_ID:cps}
        # Configures the Spring Kafka ErrorHandlingDeserializer that delegates to the 'real' deserializers
        # See https://docs.spring.io/spring-kafka/docs/2.5.11.RELEASE/reference/html/#error-handling-deserializer
        # and https://www.confluent.io/blog/spring-kafka-can-your-kafka-consumers-handle-a-poison-pill/
        key-deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
        value-deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
        properties:
             spring.deserializer.key.delegate.class: org.apache.kafka.common.serialization.StringDeserializer
             spring.deserializer.value.delegate.class: org.springframework.kafka.support.serializer.JsonDeserializer
             spring.json.value.default.type: org.onap.cps.event.model.CpsDataUpdatedEvent
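

With the producer part of this configuration in place, publishing an event from cps-core amounts to sending a CpsDataUpdatedEvent through a KafkaTemplate. A minimal sketch follows; the topic name and bean wiring are illustrative assumptions, not taken from the CPS codebase:

Example: publishing a CpsDataUpdatedEvent (illustrative)
import org.onap.cps.event.model.CpsDataUpdatedEvent;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

@Service
public class CpsEventPublisher {

    private final KafkaTemplate<String, CpsDataUpdatedEvent> kafkaTemplate;

    public CpsEventPublisher(final KafkaTemplate<String, CpsDataUpdatedEvent> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    public void publish(final String key, final CpsDataUpdatedEvent event) {
        // The value is serialized to JSON by the JsonSerializer configured above.
        kafkaTemplate.send("cps.cfg-state-events", key, event);
    }
}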


Any other security protocol can be configured via the OOM charts in a Kubernetes environment.


2. SASL_PLAINTEXT using the PLAIN mechanism:

Implements authentication based on usernames and passwords, which are stored locally in the Kafka configuration.

DMaaP Message Router Kafka uses SASL_PLAINTEXT by default.
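Note that with PLAIN the client-side sasl.jaas.config shown below only names the credentials that the client presents; the matching users must exist in the broker's own JAAS configuration (for DMaaP Kafka this is part of its server configuration). The admin/admin_secret values used throughout this page are illustrative.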

Properties to be added in values.yaml
kafka:
  sasl_plaintext:
    security:
      protocol: SASL_PLAINTEXT
    ssl:
      trust-store-type:
      trust-store-location:
      trust-store-password:
    properties:
      sasl.mechanism: PLAIN
      sasl.jaas.config: org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="admin_secret";
      ssl.endpoint.identification.algorithm:


The Kafka configuration details can then be supplied through the override files as shown below:

Override file configuration
kafka:
    security:
        protocol: '{{ .Values.kafka.sasl_plaintext.security.protocol }}'
    ssl:
        trust-store-type: '{{ .Values.kafka.sasl_plaintext.ssl.trust-store-type }}'
        trust-store-location: '{{ .Values.kafka.sasl_plaintext.ssl.trust-store-location }}'
        trust-store-password: '{{ .Values.kafka.sasl_plaintext.ssl.trust-store-password }}'
    properties:
        sasl.mechanism: '{{ index .Values.kafka.sasl_plaintext.properties "sasl.mechanism" }}'
        sasl.jaas.config: '{{ index .Values.kafka.sasl_plaintext.properties "sasl.jaas.config" }}'
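Note: YAML keys that contain dots (sasl.mechanism, sasl.jaas.config) cannot be addressed with a plain dotted path in Helm templates, since the dots would be interpreted as nested maps; this is why the index function is used above.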


3. SASL_SSL using SCRAM-SHA-256 or SCRAM-SHA-512:

Implements authentication using the Salted Challenge Response Authentication Mechanism (SCRAM). SCRAM credentials are stored centrally in ZooKeeper, so SCRAM can be used in situations where the ZooKeeper cluster nodes run isolated in a private network.
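The SCRAM credentials have to be created on the broker side before clients can authenticate, for example with the standard Kafka tooling: kafka-configs.sh --zookeeper <zookeeper-host>:2181 --alter --add-config 'SCRAM-SHA-512=[password=admin_secret]' --entity-type users --entity-name admin (host and credentials here are illustrative).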

spring.kafka.ssl related configuration is required. In order to use TLS encryption and server authentication, a truststore containing the certificates to trust (and, if client authentication is also required, a keystore with the client's private and public keys) has to be provided. This is usually done using files in the Java KeyStore (JKS) format.
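Such a truststore can be built with the JDK keytool, e.g. keytool -importcert -alias kafka-broker -file broker.crt -keystore truststore.jks -storepass secret (alias, file names and password are illustrative).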

Properties to be added in values.yaml
kafka:
  sasl_ssl:
    security:
      protocol: SASL_SSL
    ssl:
      trust-store-type: JKS
      trust-store-location: file:///C:/Users/adityaputhuparambil/ltec-com-strimzi.jks
      trust-store-password: secret
    properties:
      sasl.mechanism: SCRAM-SHA-512
      # SCRAM requires the ScramLoginModule (PlainLoginModule is for the PLAIN mechanism)
      sasl.jaas.config: org.apache.kafka.common.security.scram.ScramLoginModule required username="admin" password="admin_secret";
      ssl.endpoint.identification.algorithm:


A few additional SSL-related properties also need to be configured, as shown below:

Override file configuration for SASL_SSL
kafka:
    security:
        protocol: '{{ .Values.kafka.sasl_ssl.security.protocol }}'
    ssl:
        trust-store-type: '{{ .Values.kafka.sasl_ssl.ssl.trust-store-type }}'
        trust-store-location: '{{ .Values.kafka.sasl_ssl.ssl.trust-store-location }}'
        trust-store-password: '{{ .Values.kafka.sasl_ssl.ssl.trust-store-password }}'
    properties:
        sasl.mechanism: '{{ index .Values.kafka.sasl_ssl.properties "sasl.mechanism" }}'
        sasl.jaas.config: '{{ index .Values.kafka.sasl_ssl.properties "sasl.jaas.config" }}'

Application-helm configuration:

The final configuration required in application-helm.yaml:

Application-helm changes in OOM charts
spring:
    kafka:
        bootstrap-servers: ${KAFKA_BOOTSTRAP_SERVER}
        security:
            protocol: '{{ .Values.kafka.security.protocol }}'
        ssl:
            trust-store-type: '{{ .Values.kafka.ssl.trust-store-type }}'
            trust-store-location: '{{ .Values.kafka.ssl.trust-store-location }}'
            trust-store-password: '{{ .Values.kafka.ssl.trust-store-password }}'
        properties:
            sasl.mechanism: '{{ index .Values.kafka.properties "sasl.mechanism" }}'
            sasl.jaas.config: '{{ index .Values.kafka.properties "sasl.jaas.config" }}'
            ssl.endpoint.identification.algorithm:
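The protocol-specific blocks defined in values.yaml (kafka.sasl_plaintext, kafka.sasl_ssl) are mapped onto the generic kafka.* keys referenced here through the override files shown above, so switching to another security protocol only requires pointing the override file at a different block; application-helm.yaml itself stays unchanged.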


NOTE: Topics are auto-created in ONAP DMaaP Kafka, hence topic creation is not covered in the scope of CPS.

Proof of Concept:

The POC was performed with the ONAP DMaaP Message Router Kafka running in a k8s environment (172.16.1.205) in the Nordix lab. The configuration details for both cps-core and cps-temporal are shared below:

Configuration in cps-core using SASL_PLAINTEXT
spring:
    kafka:
        bootstrap-servers: 172.16.3.38:30490
        security:
            protocol: SASL_PLAINTEXT
        properties:
            sasl.mechanism: PLAIN
            sasl.jaas.config: org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="admin_secret";
            ssl.endpoint.identification.algorithm:
        producer:
            # A producer needs serializers rather than deserializers; the key is
            # sent as a plain string and the value as JSON, matching the
            # ErrorHandlingDeserializer/JsonDeserializer setup on the cps-temporal side.
            key-serializer: org.apache.kafka.common.serialization.StringSerializer
            value-serializer: org.springframework.kafka.support.serializer.JsonSerializer

app:
    kafka:
        consumer:
            topic: ${KAFKA_CONSUMER_TOPIC:cps.cfg-state-events}
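

To verify the flow end-to-end, the published events can be inspected directly on the topic with a console consumer, e.g. kafka-console-consumer.sh --bootstrap-server 172.16.3.38:30490 --topic cps.cfg-state-events --from-beginning --consumer.config client.properties, where client.properties carries the same SASL settings as above (the exact invocation is illustrative).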



Configuration in cps-temporal using SASL_PLAINTEXT
spring:
    kafka:
        bootstrap-servers: 172.16.3.38:30490
        security:
            protocol: SASL_PLAINTEXT
        properties:
            sasl.mechanism: PLAIN
            sasl.jaas.config: org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="admin_secret";
            ssl.endpoint.identification.algorithm:
        consumer:
            group-id: ${KAFKA_CONSUMER_GROUP_ID:cps-temporal-group}
            # Configures the Spring Kafka ErrorHandlingDeserializer that delegates to the 'real' deserializers
            # See https://docs.spring.io/spring-kafka/docs/2.5.11.RELEASE/reference/html/#error-handling-deserializer
            # and https://www.confluent.io/blog/spring-kafka-can-your-kafka-consumers-handle-a-poison-pill/
            key-deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
            value-deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
            properties:
                spring.deserializer.key.delegate.class: org.apache.kafka.common.serialization.StringDeserializer
                spring.deserializer.value.delegate.class: org.springframework.kafka.support.serializer.JsonDeserializer
                spring.json.value.default.type: org.onap.cps.event.model.CpsDataUpdatedEvent

app:
    kafka:
        consumer:
            topic: ${KAFKA_CONSUMER_TOPIC:cps.cfg-state-events}
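

For completeness, a minimal sketch of the receiving side under this configuration; the listener class is illustrative, not the actual cps-temporal implementation:

Example: consuming CpsDataUpdatedEvent in cps-temporal (illustrative)
import org.onap.cps.event.model.CpsDataUpdatedEvent;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Service;

@Service
public class CpsDataUpdatedEventListener {

    @KafkaListener(topics = "${app.kafka.consumer.topic:cps.cfg-state-events}")
    public void onCpsDataUpdatedEvent(final CpsDataUpdatedEvent event) {
        // The payload has already been deserialized to CpsDataUpdatedEvent by the
        // delegating JsonDeserializer configured above; persist it to the
        // temporal store here.
    }
}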







