VES-HV Collector
...
(Option 1)
The HV-VES collector has been proposed, based on a need to process high volumes of data generated frequently by a large number of NFs. It uses plain TCP connections; connections are stream-based (as opposed to request-based) and long-running. The payload is binary-encoded (currently using Google Protocol Buffers).
Pros:
- Designed to support high volume of data with minimal latency
- HV-VES uses direct connection to DMaaP’s Kafka.
Cons:
- Added dependency on HV-VES DCAE components
Kafka
...
Interfacing using DMaaP client
...
(Option 2)
Message Router is added as an additional layer on top of DMaaP to support message-service API interaction with ZooKeeper/Kafka. DmaapClient is a deliverable jar that can be used to interact with the DMaaP Message Router API.
Pros:
- Designed to support REST calls to Kafka from both publishers and consumers.
- Pre-defined APIs in Message Router to create/view/delete a topic in Kafka, as well as to publish messages to a topic and subscribe to a topic.
Cons:
- Additional overhead, as an extra layer (Message Router) would sit between CPS and Kafka.
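For illustration, the Message Router API can be exercised with plain HTTP calls. This is a sketch only: the host and topic name are placeholders, and it assumes Message Router's conventional port 3904.

```shell
# publish a JSON message to a topic (host and topic name are placeholders)
curl -X POST -H "Content-Type: application/json" \
     -d '{"event":"example"}' \
     http://<message-router-host>:3904/events/<topic>

# subscribe: fetch messages for a given consumer group and consumer id
curl -X GET \
     http://<message-router-host>:3904/events/<topic>/<consumer-group>/<consumer-id>
```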
Kafka Direct interface without using DMaaP client: To be used in CPS (Option 3)
Pros:
- No additional layer used between CPS and DMaaP Kafka.
- Spring boot enables easier configuration.
- CPS can interface directly with Kafka using spring-kafka. Spring-kafka, already used in CPS, also provides support for message-driven POJOs for publishing and subscribing to events, so CPS does not require a message-service API for interacting with Kafka.
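As a sketch of the message-driven POJO support mentioned above: the class, bean wiring, and topic names below are illustrative assumptions, not the actual CPS implementation, and the snippet assumes the spring-kafka dependency and a configured broker.

```java
import org.onap.cps.event.model.CpsDataUpdatedEvent;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

// Illustrative sketch only - names and wiring are assumptions, not CPS code.
@Component
class CpsDataUpdatedEventHandler {

    private final KafkaTemplate<String, CpsDataUpdatedEvent> kafkaTemplate;

    CpsDataUpdatedEventHandler(KafkaTemplate<String, CpsDataUpdatedEvent> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    // publishing side (cps-core): the event is serialized as JSON by the configured serializer
    void publish(CpsDataUpdatedEvent event) {
        kafkaTemplate.send("cps.cfg-state-events", event);
    }

    // consuming side (cps-temporal): message-driven POJO, no polling loop needed
    @KafkaListener(topics = "${app.kafka.consumer.topic:cps.cfg-state-events}")
    void onCpsDataUpdatedEvent(CpsDataUpdatedEvent event) {
        // handle/persist the event
    }
}
```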
Kafka configuration details need to be added to the application yaml of both the publisher (cps-core) and the consumer (cps-temporal) of the events published to Kafka. This configuration should preferably be defined in the application-helm.yaml included in the OOM charts, to provide flexibility when deploying the application.
Based on the encryption and authentication mechanism used, the required configuration could change. Hence it is suggested to use override files to configure the required values according to the target environment.
Encryption and Authentication Listener Configuration
Supported security protocols are:
1. PLAINTEXT: Listener without any encryption or authentication. The CPS application is configured to use PLAINTEXT by default, both with testcontainers and docker-compose.
Code Block:

kafka:
  bootstrap-servers: ${KAFKA_BOOTSTRAP_SERVER}
  security:
    protocol: PLAINTEXT
  # to be added only in cps-core(producer)
  producer:
    client-id: cps-core
    value-serializer: org.springframework.kafka.support.serializer.JsonSerializer
  # to be added only in cps-temporal(consumer)
  consumer:
    group-id: ${KAFKA_CONSUMER_GROUP_ID:cps-temporal-group}
    client-id: cps-temporal
    # Configures the Spring Kafka ErrorHandlingDeserializer that delegates to the 'real' deserializers
    # See https://docs.spring.io/spring-kafka/docs/2.5.11.RELEASE/reference/html/#error-handling-deserializer
    # and https://www.confluent.io/blog/spring-kafka-can-your-kafka-consumers-handle-a-poison-pill/
    key-deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
    value-deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
    properties:
      spring.deserializer.key.delegate.class: org.apache.kafka.common.serialization.StringDeserializer
      spring.deserializer.value.delegate.class: org.springframework.kafka.support.serializer.JsonDeserializer
      spring.json.value.default.type: org.onap.cps.event.model.CpsDataUpdatedEvent
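The ${KAFKA_BOOTSTRAP_SERVER} and ${KAFKA_CONSUMER_GROUP_ID} placeholders are resolved from environment variables at startup. A minimal docker-compose sketch of how they could be supplied; the service and image names here are illustrative assumptions, not the actual CPS deployment files.

```yaml
# Illustrative docker-compose fragment - service/image names are assumptions
services:
  cps-core:
    image: onap/cps-core:latest
    environment:
      KAFKA_BOOTSTRAP_SERVER: kafka:9092
  cps-temporal:
    image: onap/cps-temporal:latest
    environment:
      KAFKA_BOOTSTRAP_SERVER: kafka:9092
      KAFKA_CONSUMER_GROUP_ID: cps-temporal-group
```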
...
Any other security protocol to be used could be configured using the OOM charts on a k8s environment.
2. SASL_PLAINTEXT using Plain mechanism:
Implements authentication based on usernames and passwords, which are stored locally in the Kafka configuration.
DMaaP-Message-Router-Kafka uses SASL_PLAINTEXT by default.
Code Block:

kafka:
  security:
    protocol: SASL_PLAINTEXT
  ssl:
    trust-store-type:
    trust-store-location:
    trust-store-password:
  properties:
    sasl.mechanism: PLAIN
    sasl.jaas.config: org.apache.kafka.common.security.plain.PlainLoginModule required username=admin password=admin_secret;
    ssl.endpoint.identification.algorithm:
The Kafka configuration details could be configured in the override files as below:
Code Block:

kafka:
  security:
    protocol: '{{ .Values.kafka.sasl_plaintext.security.protocol }}'
  ssl:
    trust-store-type: '{{ .Values.kafka.sasl_plaintext.security.trust-store-type }}'
    trust-store-location: '{{ .Values.kafka.sasl_plaintext.security.trust-store-location }}'
    trust-store-password: '{{ .Values.kafka.sasl_plaintext.security.trust-store-password }}'
  properties:
    sasl.mechanism: '{{ .Values.kafka.sasl_plaintext.properties.sasl_mechanism }}'
    sasl.jaas.config: '{{ .Values.kafka.sasl_plaintext.jaas.config }}'
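The '{{ .Values.kafka.sasl_plaintext.* }}' references are resolved from a helm values (override) file. A sketch of the corresponding values structure; the truststore path and credentials below are placeholders, not real deployment values.

```yaml
# Illustrative helm values override - keys mirror the .Values references used above
kafka:
  sasl_plaintext:
    security:
      protocol: SASL_PLAINTEXT
      trust-store-type: JKS
      trust-store-location: file:///path/to/truststore.jks
      trust-store-password: secret
    properties:
      sasl_mechanism: PLAIN
    jaas:
      config: org.apache.kafka.common.security.plain.PlainLoginModule required username=admin password=admin_secret;
```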
...
3. SASL_SSL using SCRAM-SHA-256 and SCRAM-SHA-512:
Implements authentication using the Salted Challenge Response Authentication Mechanism (SCRAM). SCRAM credentials are stored centrally in ZooKeeper. SCRAM can be used in situations where ZooKeeper cluster nodes are running isolated in a private network.
Spring.kafka.ssl related configuration is required. In order to use TLS encryption and server authentication, a keystore containing private and public keys has to be provided. This is usually done using a file in the Java KeyStore (JKS) format. A few additional SSL-related properties also need to be configured, as shown below:
Code Block:

kafka:
  security:
    protocol: SASL_SSL
  ssl:
    trust-store-type: JKS
    trust-store-location: file:///C:/Users/adityaputhuparambil/ltec-com-strimzi.jks
    trust-store-password: secret
  properties:
    sasl.mechanism: SCRAM-SHA-512
    sasl.jaas.config: org.apache.kafka.common.security.scram.ScramLoginModule required username=admin password=admin_secret;
    ssl.endpoint.identification.algorithm:
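The JKS truststore referenced in the SSL configuration can be created with the JDK keytool. A sketch, assuming the Kafka cluster's CA certificate has already been saved locally; file names, alias, and password are placeholders.

```shell
# import the Kafka cluster CA certificate into a new JKS truststore
keytool -importcert -alias kafka-ca \
        -file ca.crt \
        -keystore truststore.jks \
        -storetype JKS \
        -storepass secret -noprompt
```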
Application-helm configuration:
The final configuration required in application-helm.yaml:
Code Block:

spring:
  kafka:
    bootstrap-servers: ${KAFKA_BOOTSTRAP_SERVER}
    security:
      protocol: '{{ .Values.kafka.security.protocol }}'
    ssl:
      trust-store-type: '{{ .Values.kafka.ssl.trust-store-type }}'
      trust-store-location: '{{ .Values.kafka.ssl.trust-store-location }}'
      trust-store-password: '{{ .Values.kafka.ssl.trust-store-password }}'
    properties:
      sasl.mechanism: '{{ .Values.kafka.properties.sasl_mechanism }}'
      sasl.jaas.config: '{{ .Values.kafka.properties.sasl.jaas.config }}'
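A matching override file for a k8s deployment would then supply the values referenced from application-helm.yaml. The values below are placeholders for illustration only.

```yaml
# Illustrative override values for the templated application-helm.yaml
kafka:
  security:
    protocol: SASL_SSL
  ssl:
    trust-store-type: JKS
    trust-store-location: file:///opt/app/truststore.jks
    trust-store-password: secret
  properties:
    sasl_mechanism: SCRAM-SHA-512
    sasl:
      jaas:
        config: org.apache.kafka.common.security.scram.ScramLoginModule required username=admin password=admin_secret;
```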
NOTE: Topics are auto-generated in ONAP DMaaP Kafka; hence topic creation is not covered within the scope of CPS.
Proof of Concept:
The POC was performed with ONAP DMaaP Message Router Kafka running on a k8s environment (172.16.1.205) in the Nordix lab. The configuration details for both cps-core and cps-temporal are shared below:
Code Block:

spring:
  kafka:
    bootstrap-servers: 172.16.3.38:30490
    security:
      protocol: SASL_PLAINTEXT
    properties:
      sasl.mechanism: PLAIN
      sasl.jaas.config: org.apache.kafka.common.security.plain.PlainLoginModule required username=admin password=admin_secret;
      ssl.endpoint.identification.algorithm:
    producer:
      client-id: cps-core
      value-serializer: org.springframework.kafka.support.serializer.JsonSerializer
Code Block:

spring:
  kafka:
    bootstrap-servers: 172.16.3.38:30490
    security:
      protocol: SASL_PLAINTEXT
    properties:
      sasl.mechanism: PLAIN
      sasl.jaas.config: org.apache.kafka.common.security.plain.PlainLoginModule required username=admin password=admin_secret;
      ssl.endpoint.identification.algorithm:
    consumer:
      group-id: ${KAFKA_CONSUMER_GROUP_ID:cps-temporal-group}
      client-id: cps-temporal
      # Configures the Spring Kafka ErrorHandlingDeserializer that delegates to the 'real' deserializers
      # See https://docs.spring.io/spring-kafka/docs/2.5.11.RELEASE/reference/html/#error-handling-deserializer
      # and https://www.confluent.io/blog/spring-kafka-can-your-kafka-consumers-handle-a-poison-pill/
      key-deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
      value-deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
      properties:
        spring.deserializer.key.delegate.class: org.apache.kafka.common.serialization.StringDeserializer
        spring.deserializer.value.delegate.class: org.springframework.kafka.support.serializer.JsonDeserializer
        spring.json.value.default.type: org.onap.cps.event.model.CpsDataUpdatedEvent
app:
  kafka:
    consumer:
      topic: ${KAFKA_CONSUMER_TOPIC:cps.cfg-state-events}
Note: AAF integration is not included in this documentation, as there is already a Jira (CPS-281) to handle the integration, which is still under discussion.