
Related Jira: CPS-433


VES-HV Collector 

The HV-VES collector has been proposed based on the need to process high volumes of data generated frequently by a large number of NFs. It uses plain TCP connections. Connections are stream-based (as opposed to request-based) and long-running. The payload is binary-encoded (currently using Google Protocol Buffers).

Pros: 

Designed to support high volume of data with minimal latency

HV-VES uses a direct connection to DMaaP's Kafka.

Cons:

Added dependency on HV-VES DCAE components

Kafka direct interface without using the Message Router / DMaaP client:

The configuration details below need to be added to the application YAML of both the publisher (cps-core) and the consumer (cps-temporal) of the events published to Kafka. These configurations should be defined in the application-helm.yaml included in the OOM charts to provide flexibility while deploying the application. The environment variables can also be replaced by override values.

spring:
    kafka:
        bootstrap-servers: ${KAFKA_BOOTSTRAP_SERVER}
        security:
            protocol: ${KAFKA_SECURITY_PROTOCOL}
        ssl:
            trust-store-type: ${KAFKA_SSL_TRUST_TYPE}
            trust-store-location: ${KAFKA_SSL_TRUST_STORE_LOCATION}
            trust-store-password: ${KAFKA_SSL_TRUST_STORE_PASSWORD}
        properties:
            sasl.mechanism: ${KAFKA_SASL_MECHANISM}
            sasl.jaas.config: ${KAFKA_SASL_JAAS_CONFIG}
            ssl.endpoint.identification.algorithm:

app:
    kafka:
        consumer:
            topic: ${KAFKA_CONSUMER_TOPIC:cps.cfg-state-events}
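For illustration, these environment variables could be supplied from the chart; a hypothetical values.yaml fragment is sketched below (the key names are illustrative and must match whatever the actual CPS OOM charts expose):

# Hypothetical values.yaml fragment; key names are illustrative, not the actual OOM chart keys
config:
    eventPublisher:
        spring.kafka.bootstrap-servers: message-router-kafka:9092
        spring.kafka.security.protocol: PLAINTEXT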


Topics are auto-generated in ONAP DMaaP Kafka, so topic creation is not covered in the scope of CPS.

Encryption and Authentication

AMQ Streams supports encryption and authentication, which are configured as part of the listener configuration.

Listener Configuration

Encryption and authentication in Kafka brokers are configured per listener.

Each listener in the Kafka broker is configured with its own security protocol. The configuration property listener.security.protocol.map defines which listener uses which security protocol: it maps each listener name to its security protocol.
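As a minimal sketch (listener names and ports are illustrative), a broker could expose one internal and one external listener with different protocols in its server.properties:

# Two named listeners, each mapped to its own security protocol
listeners=INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:9094
listener.security.protocol.map=INTERNAL:SASL_PLAINTEXT,EXTERNAL:SASL_SSL
inter.broker.listener.name=INTERNAL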

Supported security protocols are:

PLAINTEXT: Listener without any encryption or authentication.

SSL: Listener using TLS encryption and, optionally, authentication using TLS client certificates.

SASL_PLAINTEXT: Listener without encryption but with SASL-based authentication.

SASL_SSL: Listener with TLS-based encryption and SASL-based authentication.


By default, the CPS application is configured to use PLAINTEXT.

Default Kafka configuration
kafka:
    bootstrap-servers: ${KAFKA_BOOTSTRAP_SERVER}
    security:
        protocol: PLAINTEXT


Any other security protocol can be configured using the OOM charts.
The values below are to be defined in values.yaml:
kafka:
    sasl_plaintext:
        security:
            protocol: SASL_PLAINTEXT
        ssl:
            trust-store-type:
            trust-store-location:
            trust-store-password:
        properties:
            sasl.mechanism: PLAIN
    sasl_ssl:
        security:
            protocol: SASL_SSL
        ssl:
            trust-store-type: JKS
            trust-store-location: file:///C:/Users/adityaputhuparambil/ltec-com-strimzi.jks
            trust-store-password: secret
        properties:
            sasl.mechanism: SCRAM-SHA-512
            sasl.jaas.config: org.apache.kafka.common.security.scram.ScramLoginModule required username="admin" password="admin_secret";
            ssl.endpoint.identification.algorithm:
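These values can also be overridden at deployment time with Helm's --set flag, for example (release and chart names are illustrative):

helm upgrade --install cps onap/cps --set kafka.sasl_ssl.ssl.trust-store-password=secret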

SASL Authentication

SASL authentication is supported both through plain unencrypted connections and through TLS connections.

PLAIN: Implements authentication based on usernames and passwords. Usernames and passwords are stored locally in the Kafka configuration.

DMaaP message-router-kafka by default uses SASL_PLAINTEXT.
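For reference, a minimal sketch of the JAAS entry used with the PLAIN mechanism (credentials are illustrative):

sasl.jaas.config: org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="admin_secret";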

The Kafka configuration details can be provided in the override files as shown below:

Override file configuration
kafka:
    security:
        protocol: '{{ .Values.kafka.sasl_plaintext.security.protocol }}'
    ssl:
        trust-store-type: '{{ .Values.kafka.sasl_plaintext.ssl.trust-store-type }}'
        trust-store-location: '{{ .Values.kafka.sasl_plaintext.ssl.trust-store-location }}'
        trust-store-password: '{{ .Values.kafka.sasl_plaintext.ssl.trust-store-password }}'
    properties:
        sasl.mechanism: '{{ index .Values.kafka.sasl_plaintext.properties "sasl.mechanism" }}'
        sasl.jaas.config: '{{ index .Values.kafka.sasl_plaintext.properties "sasl.jaas.config" }}'


Sample configuration:

spring:
    kafka:
        bootstrap-servers: ${KAFKA_BOOTSTRAP_SERVER}
        security:
            protocol: '{{ .Values.kafka.security.protocol }}'
        ssl:
            trust-store-type: '{{ .Values.kafka.ssl.trust-store-type }}'
            trust-store-location: '{{ .Values.kafka.ssl.trust-store-location }}'
            trust-store-password: '{{ .Values.kafka.ssl.trust-store-password }}'
        properties:
            sasl.mechanism: '{{ index .Values.kafka.properties "sasl.mechanism" }}'
            sasl.jaas.config: '{{ index .Values.kafka.properties "sasl.jaas.config" }}'
            ssl.endpoint.identification.algorithm:


SCRAM-SHA-256 and SCRAM-SHA-512: Implement authentication using the Salted Challenge Response Authentication Mechanism (SCRAM). SCRAM credentials are stored centrally in ZooKeeper. SCRAM can be used in situations where ZooKeeper cluster nodes are running isolated in a private network.
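Because the credentials are held in ZooKeeper, they are typically created with the standard Kafka tooling before clients connect; a minimal sketch (host, user and password are illustrative):

kafka-configs.sh --zookeeper localhost:2181 --alter --add-config 'SCRAM-SHA-512=[password=admin_secret]' --entity-type users --entity-name admin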

spring.kafka.ssl-related configuration is required. In order to use TLS encryption and server authentication, a keystore containing private and public keys has to be provided. This is usually done using a file in the Java KeyStore (JKS) format.
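A JKS truststore can be created with the JDK's keytool, for example (file names and password are illustrative):

keytool -importcert -trustcacerts -noprompt -alias kafka-ca -file ca.crt -keystore truststore.jks -storetype JKS -storepass secret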

A few additional SSL-related properties also need to be configured, as shown below:

Override file configuration for SASL_SSL
kafka:
    security:
        protocol: '{{ .Values.kafka.sasl_ssl.security.protocol }}'
    ssl:
        trust-store-type: '{{ .Values.kafka.sasl_ssl.ssl.trust-store-type }}'
        trust-store-location: '{{ .Values.kafka.sasl_ssl.ssl.trust-store-location }}'
        trust-store-password: '{{ .Values.kafka.sasl_ssl.ssl.trust-store-password }}'
    properties:
        sasl.mechanism: '{{ index .Values.kafka.sasl_ssl.properties "sasl.mechanism" }}'
        sasl.jaas.config: '{{ index .Values.kafka.sasl_ssl.properties "sasl.jaas.config" }}'


spring:
    kafka:
        bootstrap-servers: ${KAFKA_BOOTSTRAP_SERVER}
        security:
            protocol: SASL_SSL
        ssl:
            trust-store-type: JKS
            trust-store-location: file:///C:/Users/adityaputhuparambil/ltec-com-strimzi.jks
            trust-store-password: secret
        properties:
            sasl.mechanism: SCRAM-SHA-512
            sasl.jaas.config: org.apache.kafka.common.security.scram.ScramLoginModule required username="admin" password="admin_secret";
            ssl.endpoint.identification.algorithm:
        consumer:
            # Configures the Spring Kafka ErrorHandlingDeserializer that delegates to the 'real' deserializers
            # See https://docs.spring.io/spring-kafka/docs/2.5.11.RELEASE/reference/html/#error-handling-deserializer
            # and https://www.confluent.io/blog/spring-kafka-can-your-kafka-consumers-handle-a-poison-pill/
            key-deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
            value-deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
            properties:
                spring.deserializer.key.delegate.class: org.apache.kafka.common.serialization.StringDeserializer
                spring.deserializer.value.delegate.class: org.springframework.kafka.support.serializer.JsonDeserializer
                spring.json.value.default.type: org.onap.cps.event.model.CpsDataUpdatedEvent
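For completeness, the publisher (cps-core) would use the matching serializers on the producer side; a minimal sketch using Spring Kafka's standard String and JSON serializers:

spring:
    kafka:
        producer:
            key-serializer: org.apache.kafka.common.serialization.StringSerializer
            value-serializer: org.springframework.kafka.support.serializer.JsonSerializer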

