...

The HV-VES collector was proposed to meet the need to process high volumes of data generated frequently by a large number of NFs. It uses plain TCP connections; connections are stream-based (as opposed to request-based) and long-running, and the payload is binary-encoded (currently using Google Protocol Buffers).

Pros: 

  • Designed to support high volume of data with minimal latency
  • HV-VES uses a direct connection to DMaaP's Kafka.

Cons:

  • Added dependency on HV-VES DCAE components

Kafka Interfacing using DMaaP Client:

Message Router is an additional layer, provided by DMaaP, on top of ZooKeeper/Kafka; it exposes a message service API for interacting with them. DmaapClient is a deliverable JAR that can be used to interact with the DMaaP Message Router API.
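The REST interaction that Message Router provides can be sketched with only the JDK HTTP client (host, port, and topic here are illustrative assumptions; in practice the DmaapClient JAR wraps these calls):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Hedged sketch of publishing an event through the DMaaP Message Router
// REST API. Host name, port, and topic are assumptions for illustration.
public class MessageRouterPublishSketch {

    // Message Router exposes publishing as POST /events/{topic}
    static URI publishUri(String host, int port, String topic) {
        return URI.create("http://" + host + ":" + port + "/events/" + topic);
    }

    // Sends one JSON event to the given topic and returns the HTTP status code.
    static int publish(String host, int port, String topic, String jsonBody) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(publishUri(host, port, topic))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(jsonBody))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        return response.statusCode();
    }
}
```

Subscribing follows the same pattern against GET /events/{topic}/{consumerGroup}/{consumerId}.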

Pros:

  • Designed to support REST calls to Kafka from both publishers and consumers.
  • Pre-defined APIs in Message Router to create, view, and delete a topic in Kafka, and to publish messages to a topic and subscribe to it.

Cons:

  • Additional overhead, as another layer is added. CPS can instead interface with Kafka directly using spring-kafka, which also provides support for message-driven POJOs for publishing and subscribing to events.
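The direct spring-kafka approach mentioned above could look roughly like this (class, topic, and group names are illustrative assumptions, not the actual CPS implementation):

```java
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

// Hedged sketch of direct Kafka interfacing with spring-kafka: a publisher
// using KafkaTemplate and a message-driven POJO consumer. Topic and group
// names are assumptions for illustration.
@Component
public class DataUpdatedEventSketch {

    static final String TOPIC = "cps.cfg-state-events";

    private final KafkaTemplate<String, Object> kafkaTemplate;

    public DataUpdatedEventSketch(KafkaTemplate<String, Object> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    // Publisher side (cps-core): spring-kafka serializes the event with the
    // configured value serializer and sends it to the topic.
    public void publish(Object cpsDataUpdatedEvent) {
        kafkaTemplate.send(TOPIC, cpsDataUpdatedEvent);
    }

    // Consumer side (cps-temporal): a message-driven POJO; spring-kafka
    // invokes this method for each record received on the topic.
    @KafkaListener(topics = TOPIC, groupId = "cps-temporal-group")
    public void consume(Object cpsDataUpdatedEvent) {
        // persist the event ...
    }
}
```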

Kafka Direct interface without using

...

DMaaP client: To be used in CPS

Kafka configuration details need to be added to the application yaml of both the publisher (cps-core) and the consumer (cps-temporal) of the events published to Kafka. This configuration should preferably be defined in the application-helm.yaml included in the OOM charts, to provide flexibility when deploying the application.

...

Code Block (yml): Configuration in cps-temporal using SASL_PLAINTEXT
spring:
    kafka:
        bootstrap-servers: 172.16.3.38:30490
        security:
            protocol: SASL_PLAINTEXT
        properties:
            sasl.mechanism: PLAIN
            sasl.jaas.config: org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="admin_secret";
            ssl.endpoint.identification.algorithm:
        consumer:
            group-id: ${KAFKA_CONSUMER_GROUP_ID:cps-temporal-group}
            # Configures the Spring Kafka ErrorHandlingDeserializer that delegates to the 'real' deserializers
            # See https://docs.spring.io/spring-kafka/docs/2.5.11.RELEASE/reference/html/#error-handling-deserializer
            # and https://www.confluent.io/blog/spring-kafka-can-your-kafka-consumers-handle-a-poison-pill/
            key-deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
            value-deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
            properties:
                spring.deserializer.key.delegate.class: org.apache.kafka.common.serialization.StringDeserializer
                spring.deserializer.value.delegate.class: org.springframework.kafka.support.serializer.JsonDeserializer
                spring.json.value.default.type: org.onap.cps.event.model.CpsDataUpdatedEvent

app:
    kafka:
        consumer:
            topic: ${KAFKA_CONSUMER_TOPIC:cps.cfg-state-events}
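The publisher side (cps-core) would carry a mirror of this configuration; a hypothetical sketch, assuming the same broker and credentials (the topic property name and environment variable below are illustrative assumptions, and the serializers mirror the deserializers configured for cps-temporal):

```yml
spring:
    kafka:
        bootstrap-servers: 172.16.3.38:30490
        security:
            protocol: SASL_PLAINTEXT
        properties:
            sasl.mechanism: PLAIN
            sasl.jaas.config: org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="admin_secret";
            ssl.endpoint.identification.algorithm:
        producer:
            key-serializer: org.apache.kafka.common.serialization.StringSerializer
            value-serializer: org.springframework.kafka.support.serializer.JsonSerializer

app:
    kafka:
        producer:
            topic: ${KAFKA_PRODUCER_TOPIC:cps.cfg-state-events}
```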


Note: AAF integration is not included in this documentation, as there is already a Jira (CPS-281) to handle the integration, which is still under discussion.