CPS-433


VES-HV Collector 

HV-VES collector has been proposed based on the need to process high volumes of data generated frequently by a large number of NFs. It uses plain TCP connections; connections are stream-based (as opposed to request-based) and long-running. The payload is binary-encoded (currently using Google Protocol Buffers).

Pros:

  • Designed to support high volumes of data with minimal latency

  • HV-VES uses a direct connection to DMaaP's Kafka.

Cons:

  • Added dependency on HV-VES DCAE components

DMaaP Kafka:

Listener Configuration

Encryption and authentication in Kafka brokers is configured per listener. 

Each listener in the Kafka broker is configured with its own security protocol. The configuration property listener.security.protocol.map defines which listener uses which security protocol: it maps each listener name to its security protocol.

Supported security protocols are:

  • PLAINTEXT

Listener without any encryption or authentication.

  • SSL

Listener using TLS encryption and, optionally, authentication using TLS client certificates.

  • SASL_PLAINTEXT

Listener without encryption but with SASL-based authentication.

  • SASL_SSL

Listener with TLS-based encryption and SASL-based authentication.

SASL authentication is supported both over plain unencrypted connections and over TLS connections. SASL can be enabled individually for each listener. To enable it, the security protocol in listener.security.protocol.map has to be either SASL_PLAINTEXT or SASL_SSL, as in the broker sketch below.
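For illustration, a minimal broker-side sketch of a mixed listener setup; the listener names, hosts, and ports are placeholders, not values taken from the DMaaP deployment:

# server.properties (illustrative): one internal SASL_PLAINTEXT listener,
# one external SASL_SSL listener
listeners=INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:9094
advertised.listeners=INTERNAL://broker-0.internal:9092,EXTERNAL://broker-0.example.com:9094
listener.security.protocol.map=INTERNAL:SASL_PLAINTEXT,EXTERNAL:SASL_SSL
inter.broker.listener.name=INTERNAL
sasl.enabled.mechanisms=PLAIN,SCRAM-SHA-512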

SASL authentication in Kafka supports several different mechanisms:

  • PLAIN

Implements authentication based on usernames and passwords. Usernames and passwords are stored locally in the Kafka configuration.

  • SCRAM-SHA-256 and SCRAM-SHA-512

Implements authentication using the Salted Challenge Response Authentication Mechanism (SCRAM). SCRAM credentials are stored centrally in ZooKeeper. SCRAM can be used in situations where the ZooKeeper cluster nodes are running isolated in a private network.
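To illustrate the difference between the two mechanisms: with PLAIN, users can be declared directly in the broker's JAAS configuration for a listener, whereas SCRAM credentials are created in ZooKeeper with the kafka-configs tool. Usernames and secrets below are placeholders:

# PLAIN: users defined locally in the broker configuration
listener.name.sasl_plaintext.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
    username="admin" \
    password="admin_secret" \
    user_admin="admin_secret" \
    user_client="client_secret";

# SCRAM: credentials stored centrally in ZooKeeper
bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
    --add-config 'SCRAM-SHA-512=[password=client_secret]' \
    --entity-type users --entity-name client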

DMaaP Message Router Kafka by default uses SASL_PLAINTEXT.

Configuration required at the publisher end:

spring:
  kafka:
    bootstrap-servers: host:port
    security:
      protocol: SASL_PLAINTEXT
    properties:
      sasl.mechanism: PLAIN
      sasl.jaas.config: org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="admin_secret";
      # Empty value disables hostname verification (not applicable to plaintext traffic)
      ssl.endpoint.identification.algorithm:
    producer:
      # The publisher serializes String keys and JSON values
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.springframework.kafka.support.serializer.JsonSerializer
app:
  kafka:
    consumer:
      # Topic used for CPS data updated events
      topic: ${KAFKA_CONSUMER_TOPIC:cps.cfg-state-events}
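For reference, a minimal sketch of a publisher built on this configuration. The class name is illustrative; the KafkaTemplate is auto-configured by Spring Boot from the spring.kafka.* properties above, and only CpsDataUpdatedEvent and the topic property come from this page:

import org.onap.cps.event.model.CpsDataUpdatedEvent;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

@Service
public class CpsDataUpdatedEventPublisher {

    private final KafkaTemplate<String, CpsDataUpdatedEvent> kafkaTemplate;
    private final String topicName;

    public CpsDataUpdatedEventPublisher(KafkaTemplate<String, CpsDataUpdatedEvent> kafkaTemplate,
                                        @Value("${app.kafka.consumer.topic}") String topicName) {
        this.kafkaTemplate = kafkaTemplate;
        this.topicName = topicName;
    }

    public void publish(CpsDataUpdatedEvent event) {
        // Key is left null here; a data-space or anchor identifier could be used instead
        kafkaTemplate.send(topicName, event);
    }
}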


Configuration at the consumer end:

spring:
  kafka:
    bootstrap-servers: host:port
    security:
      protocol: SASL_PLAINTEXT
    properties:
      sasl.mechanism: PLAIN
      sasl.jaas.config: org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="admin_secret";
      # Empty value disables hostname verification (not applicable to plaintext traffic)
      ssl.endpoint.identification.algorithm:
    consumer:
      group-id: ${KAFKA_CONSUMER_GROUP_ID:cps-temporal-group}
      # Configures the Spring Kafka ErrorHandlingDeserializer that delegates to the 'real' deserializers
      # See https://docs.spring.io/spring-kafka/docs/2.5.11.RELEASE/reference/html/#error-handling-deserializer
      # and https://www.confluent.io/blog/spring-kafka-can-your-kafka-consumers-handle-a-poison-pill/
      key-deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
      value-deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
      properties:
        spring.deserializer.key.delegate.class: org.apache.kafka.common.serialization.StringDeserializer
        spring.deserializer.value.delegate.class: org.springframework.kafka.support.serializer.JsonDeserializer
        spring.json.value.default.type: org.onap.cps.event.model.CpsDataUpdatedEvent
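And a matching minimal consumer sketch; the listener class, method, and topic property name are illustrative, while the group id and the deserializer chain come from the YAML above:

import org.onap.cps.event.model.CpsDataUpdatedEvent;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Service;

@Service
public class CpsDataUpdatedEventListener {

    // Group id is taken from spring.kafka.consumer.group-id; the topic property is assumed
    @KafkaListener(topics = "${app.kafka.consumer.topic:cps.cfg-state-events}")
    public void onCpsDataUpdatedEvent(CpsDataUpdatedEvent event) {
        // Handle the event; with ErrorHandlingDeserializer in place a poison-pill
        // record invokes the error handler instead of causing an endless retry loop
    }
}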


Using the SASL_SSL mechanism: in order to use TLS encryption and server authentication, a truststore holding the certificate(s) used to verify the broker has to be provided. This is usually done using a file in the Java KeyStore (JKS) format.

A few additional SSL-related properties also need to be configured, as shown below:

spring:
  kafka:
    bootstrap-servers: hostname:port
    security:
      protocol: SASL_SSL
    ssl:
      trust-store-type: JKS
      trust-store-location: file:///C:/Users/adityaputhuparambil/ltec-com-strimzi.jks
      trust-store-password: secret
    properties:
      sasl.mechanism: SCRAM-SHA-512
      sasl.jaas.config: org.apache.kafka.common.security.scram.ScramLoginModule required username="admin" password="admin_secret";
      # Empty value disables TLS hostname verification
      ssl.endpoint.identification.algorithm:
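For completeness, one way to build such a JKS truststore from the broker's CA certificate using the JDK keytool; the file names, alias, and password are placeholders:

# Import the broker CA certificate into a new truststore
keytool -importcert -trustcacerts -noprompt \
    -alias kafka-ca \
    -file ca.crt \
    -keystore truststore.jks \
    -storepass secret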