CPS-821
Description/Scope
The scope of this spike is to ascertain:
- How to use messaging (producer, agreed topic, etc.)
- How to use the existing REST endpoint with an additional flag indicating an asynchronous response
- Whether to also consider an asynchronous request option using messaging in the proposal
Associated Jira Created for Implementation
Key | Summary | T | Created | Updated | Assignee | Reporter | P | Status | Resolution | Sub-Tasks
Issues/Decisions
# | Issue | Notes/Jira | Decision
---|---|---|---
1 | What topic should the client use? | The topic is provided by the client as a parameter, which will be injected into our environment and used for asynchronous responses sent back to the client. | To be supplied by client
2 | What topic should be used for private DMI-NCMP communication? | e.g. ncmp-async-private, but the decision should follow current best practices. Fiachra Corcoran was contacted regarding ONAP conventions; the response was that there are no conventions to speak of, but we should use dashes (i.e. my-new-topic) rather than dot notation (i.e. my.new.topic) for topic names. | Proposed: ncmp-async-m2m
3 | Are we adding a new REST endpoint for async, or modifying an existing endpoint? | To facilitate asynchronous requests to DMI we need to either create a new endpoint or modify an existing endpoint to include an /async flag. The second option may not be backwards compatible, while creating a new endpoint solely for a flag is also not ideal. We could add async to the list of options, but this might interfere with the purpose of /options. We also considered adding a new async endpoint that simply re-routes to the original endpoint while adding the logic for an immediate OK response to the client; however, this could change the schema, raising the same backwards-compatibility concern. See CPS-830. | /ncmp/v1/data/ch/123ee5/ds/ncmp-datastore:*?topic=<topic-name>
4 | Agree the URL for async once #2 is clarified | CPS R10 Release Planning#NCMPRequirements #11. Based on this additional path parameter we no longer require an additional /async flag in the URL. | /ncmp/v1/data/ch/123ee5/ds/ncmp-datastore:*?topic=<topic-name>
5 | Passthrough requests need to handle different response types (using the Accept header), but the async option would have a fixed and possibly different response type. | CPS R10 Release Planning#NCMPRequirements #11. | We should, by default, be able to accept multiple content types.
6 | Should we create a standalone app for the demo, or are tests sufficient? | CSIT tests may require more involved effort; perhaps we could add a standalone app to Nexus and use it as part of a CSIT test. | See #13
7 | | We should be stateless. | No
8 | Error reporting: topic correctness/availability | At a minimum we should report to the client if a topic was not found or if the topic name was incorrect. | In scope
9 | Error reporting: Kafka issues | Issues such as a full buffer/queue, dropped messages, or broker failure. | Out of scope
10 | Async request option using messaging | See: https://wiki.onap.org/display/DW/CPS-821+Spike%3A+Support+Async+read-write+operations+on+CPS-NCMP+interface#CPS821Spike:SupportAsyncreadwriteoperationsonCPSNCMPinterface-AsyncRequestOptionusingMessaging(OutofScope) | Out of scope
11 | Do we actually require futures in this implementation proposal? | It could be argued that the need for futures is made redundant by the fact that NCMP calls DMI via REST and the response is consumed via Kafka. What benefit would a future give us in this case? | Not needed
12 | ID generation | Which mechanism should we use? Look at CPS-Temporal and follow it for consistency. |
13 | Can Robot Framework verify that Kafka events have been sent/received? | This would be less work and overhead than creating/maintaining a client app. We need to verify whether 3PP libraries are safe to introduce into the codebase; if so, what is the process? Do they need to be FOSSed? | UNDER INVESTIGATION
14 | Can WebFlux do this work with less code/implementation? | Sourabh Sourabh suggested using this to complement our existing approach. By adding WebFlux we add an event loop to synchronize and access I/O connections to the database. | No; it will complement the design by adding an event loop for I/O synchronization and access
15 | ONAP may be deprecating PLAINTEXT for Kafka; Strimzi Kafka might need to be used. | | UNDER INVESTIGATION
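Issue #2 settles on dash-separated topic names and issue #8 puts reporting of incorrect topic names back to the client in scope. A minimal sketch of such a validation check follows; the class name and regex are illustrative assumptions, not an agreed API:

```java
import java.util.regex.Pattern;

public class TopicNameValidator {

    // Assumed convention from issue #2: lowercase segments separated by dashes, e.g. "ncmp-async-m2m"
    private static final Pattern DASHED_TOPIC_NAME = Pattern.compile("^[a-z0-9]+(-[a-z0-9]+)*$");

    // Returns false for dot-notation or otherwise malformed names, so the error can be reported (issue #8)
    public static boolean isValidTopicName(final String topicName) {
        return topicName != null && DASHED_TOPIC_NAME.matcher(topicName).matches();
    }
}
```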
High-level Steps/Possible Tickets:
- Modify REST endpoint to include param topic (1)
- Add logic to send response and request (2a & 2b)
- Add producer to DMI (implementation and config) (3a & 3b)
- Add consumer to NCMP (implementation and config) (4a)
- Add Producer to NCMP (implementation and config) (4b)
- Demo & Test (5)
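The steps above can be sketched end-to-end with plain JDK primitives. In this sketch a BlockingQueue stands in for the client's Kafka topic and all names are hypothetical; the real implementation would use the producers and consumers described below:

```java
import java.util.UUID;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.LinkedBlockingQueue;

public class AsyncRequestSketch {

    // Stand-in for the client topic of steps (2b)-(4b); a real implementation would publish to Kafka
    private final BlockingQueue<String> clientTopicQueue = new LinkedBlockingQueue<>();

    // Steps (1) and (2a): accept the request and acknowledge immediately with a request id
    public String handleRequest(final String resourceId) {
        final String requestId = UUID.randomUUID().toString();
        CompletableFuture
            .supplyAsync(() -> "data-for-" + resourceId)                           // fetch data asynchronously
            .thenAccept(data -> clientTopicQueue.add(requestId + ":" + data));     // publish the response
        return requestId; // synchronous acknowledgement to the client
    }

    // What the client-side consumer of the agreed topic would eventually see
    public String awaitResponse() {
        try {
            return clientTopicQueue.take();
        } catch (final InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IllegalStateException(e);
        }
    }
}
```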
Kafka config & Implementation
Example Kafka Consumer Implementation from CPS-Temporal
The code snippet below, taken from cps-temporal, can be used in the same way in NCMP to listen to messages from DMI, substituting the topics and errorHandler.
/**
 * Consume the specified event.
 *
 * @param cpsDataUpdatedEvent the data updated event to be consumed and persisted.
 */
@KafkaListener(topics = "${app.listener.data-updated.topic}", errorHandler = "dataUpdatedEventListenerErrorHandler")
public void consume(final CpsDataUpdatedEvent cpsDataUpdatedEvent) {
    log.debug("Receiving {} ...", cpsDataUpdatedEvent);

    // Validate event envelop
    validateEventEnvelop(cpsDataUpdatedEvent);

    // Map event to entity
    final var networkData = this.cpsDataUpdatedEventMapper.eventToEntity(cpsDataUpdatedEvent);
    log.debug("Persisting {} ...", networkData);

    // Persist entity
    final var persistedNetworkData = this.networkDataService.addNetworkData(networkData);
    log.debug("Persisted {}", persistedNetworkData);
}
Example Kafka Consumer Config from CPS-Temporal
# ============LICENSE_START=======================================================
# Copyright (c) 2021 Bell Canada.
# ================================================================================
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============LICENSE_END=========================================================
# Spring profile configuration for SASL_SSL Kafka
spring:
  kafka:
    bootstrap-servers: ${KAFKA_BOOTSTRAP_SERVER}
    security:
      protocol: SASL_SSL
    ssl:
      trust-store-type: JKS
      trust-store-location: ${KAFKA_SSL_TRUST_STORE_LOCATION}
      trust-store-password: ${KAFKA_SSL_TRUST_STORE_PASSWORD}
    properties:
      sasl.mechanism: SCRAM-SHA-512
      sasl.jaas.config: ${KAFKA_SASL_JAAS_CONFIG}
      ssl.endpoint.identification.algorithm:
Example Kafka Producer Implementation from CPS-NCMP
/*
* ============LICENSE_START=======================================================
* Copyright (c) 2021 Bell Canada.
* ================================================================================
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*
* SPDX-License-Identifier: Apache-2.0
* ============LICENSE_END=========================================================
*/
package org.onap.cps.notification;
import lombok.extern.slf4j.Slf4j;
import org.checkerframework.checker.nullness.qual.NonNull;
import org.onap.cps.event.model.CpsDataUpdatedEvent;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;
@Component
@Slf4j
public class NotificationPublisher {

    private KafkaTemplate<String, CpsDataUpdatedEvent> kafkaTemplate;
    private String topicName;

    /**
     * Create an instance of Notification Publisher.
     *
     * @param kafkaTemplate kafkaTemplate used to send events to Kafka
     * @param topicName     topic, to which cpsDataUpdatedEvent is sent, provided by setting
     *                      'notification.data-updated.topic' in the application properties
     */
    @Autowired
    public NotificationPublisher(
        final KafkaTemplate<String, CpsDataUpdatedEvent> kafkaTemplate,
        final @Value("${notification.data-updated.topic}") String topicName) {
        this.kafkaTemplate = kafkaTemplate;
        this.topicName = topicName;
    }

    /**
     * Send event to Kafka with correct message key.
     *
     * @param cpsDataUpdatedEvent event to be sent to Kafka
     */
    public void sendNotification(@NonNull final CpsDataUpdatedEvent cpsDataUpdatedEvent) {
        final var messageKey = cpsDataUpdatedEvent.getContent().getDataspaceName() + ","
            + cpsDataUpdatedEvent.getContent().getAnchorName();
        log.debug("Data Updated event is being sent with messageKey: '{}' & body : {} ",
            messageKey, cpsDataUpdatedEvent);
        kafkaTemplate.send(topicName, messageKey, cpsDataUpdatedEvent);
    }
}
Example Kafka Producer Config from CPS-NCMP
spring:
  kafka:
    properties:
      request.timeout.ms: 5000
      retries: 1
      max.block.ms: 10000
    producer:
      value-serializer: org.springframework.kafka.support.serializer.JsonSerializer
      client-id: cps
    consumer:
      group-id: cps-test
      auto-offset-reset: earliest
      value-deserializer: org.springframework.kafka.support.serializer.JsonDeserializer
      properties:
        spring.json.value.default.type: org.onap.cps.event.model.CpsDataUpdatedEvent
Example Kafka Docker-Compose
kafka:
  image: confluentinc/cp-kafka:6.1.1
  container_name: kafka
  ports:
    - "19092:19092"
  depends_on:
    - zookeeper
  environment:
    KAFKA_BROKER_ID: 1
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,CONNECTIONS_FROM_HOST://localhost:19092
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,CONNECTIONS_FROM_HOST:PLAINTEXT
    KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
Future or alternative (Out of Scope)
What are Futures?
A Java Future, java.util.concurrent.Future, represents the result of an asynchronous computation. When an asynchronous task is created, a Future object is returned. This Future object functions as a handle to the result of the asynchronous task. Once the asynchronous task completes, the result can be accessed via the Future object that was returned when the task was started.
source: http://tutorials.jenkov.com/java-util-concurrent/java-future.html
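A minimal, self-contained illustration of the plain Future API described above (the class and method names are illustrative only):

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class FutureExample {

    public static int computeAsync() {
        final ExecutorService executor = Executors.newSingleThreadExecutor();
        try {
            // submit() returns a Future: a handle to the eventual result of the task
            final Future<Integer> future = executor.submit(() -> 21 * 2);
            // get() blocks the calling thread until the computation completes
            return future.get();
        } catch (final InterruptedException | ExecutionException e) {
            throw new IllegalStateException(e);
        } finally {
            executor.shutdown();
        }
    }
}
```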
CompletableFuture (Java8+)
Java 8 introduced the CompletableFuture class. Along with the Future interface, it also implemented the CompletionStage interface. This interface defines the contract for an asynchronous computation step that we can combine with other steps.
CompletableFuture is at the same time a building block and a framework, with about 50 different methods for composing, combining, and executing asynchronous computation steps and handling errors.
CompletableFuture<Void> future = CompletableFuture.runAsync(() -> {
    // Simulate a long-running job
    try {
        TimeUnit.SECONDS.sleep(1);
    } catch (InterruptedException e) {
        throw new IllegalStateException(e);
    }
    System.out.println("I'll run in a separate thread than the main thread.");
});
source: https://www.callicoder.com/java-8-completablefuture-tutorial/
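Beyond runAsync, the composition methods mentioned above allow steps to be chained without blocking between them. A small supplyAsync/thenApply sketch (names are illustrative only):

```java
import java.util.concurrent.CompletableFuture;

public class ComposeExample {

    public static String fetchAndTransform() {
        // supplyAsync produces a value on another thread; thenApply transforms it when it completes
        return CompletableFuture
            .supplyAsync(() -> "ncmp")
            .thenApply(name -> name.toUpperCase())
            .join(); // block only at the edge, for demonstration
    }
}
```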
Alternatives - Thread
int number = 20;
Thread newThread = new Thread(() -> {
    System.out.println("Factorial of " + number + " is: " + factorial(number));
});
newThread.start();
# | Type | Pros | Cons | Recommend
---|---|---|---|---
1 | Future | Futures return a value | | Y
2 | Thread | | Threads do not return anything, as the run() method returns void. We could implement a mechanism to trigger a response, but this is unnecessary since futures already provide it. | N
Type | Method | Ease of implementation | Decision
---|---|---|---
UUID | String uniqueID = UUID.randomUUID().toString(); | Easy | ~
Custom | We generate our own (an example exists in NCMP (NotificationPublisher - confirm)) | Medium | -
HTTP Request ID | Further investigation required | | ~
Kafka Event ID | Further investigation required | | ~
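The UUID option from the table is trivial to implement. A sketch (the class and method names are illustrative only, not a proposed API):

```java
import java.util.UUID;

public class RequestIdGenerator {

    // Random (type 4) UUIDs are practically collision-free and need no coordination between instances
    public static String nextRequestId() {
        return UUID.randomUUID().toString();
    }
}
```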
/*
* ============LICENSE_START=======================================================
* Copyright (c) 2021 Bell Canada.
* ================================================================================
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*
* SPDX-License-Identifier: Apache-2.0
* ============LICENSE_END=========================================================
*/
package org.onap.cps.notification;
import java.time.OffsetDateTime;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Future;
import java.util.regex.Pattern;
import java.util.stream.Collectors;
import lombok.extern.slf4j.Slf4j;
import org.springframework.scheduling.annotation.Async;
import org.springframework.stereotype.Service;
@Service
@Slf4j
public class NotificationService {

    private NotificationProperties notificationProperties;
    private NotificationPublisher notificationPublisher;
    private CpsDataUpdatedEventFactory cpsDataUpdatedEventFactory;
    private NotificationErrorHandler notificationErrorHandler;
    private List<Pattern> dataspacePatterns;

    /**
     * Create an instance of Notification Subscriber.
     *
     * @param notificationProperties properties for notification
     * @param notificationPublisher notification publisher
     * @param cpsDataUpdatedEventFactory to create CpsDataUpdatedEvent
     * @param notificationErrorHandler error handler
     */
    public NotificationService(
        final NotificationProperties notificationProperties,
        final NotificationPublisher notificationPublisher,
        final CpsDataUpdatedEventFactory cpsDataUpdatedEventFactory,
        final NotificationErrorHandler notificationErrorHandler) {
        log.info("Notification Properties {}", notificationProperties);
        this.notificationProperties = notificationProperties;
        this.notificationPublisher = notificationPublisher;
        this.cpsDataUpdatedEventFactory = cpsDataUpdatedEventFactory;
        this.notificationErrorHandler = notificationErrorHandler;
        this.dataspacePatterns = getDataspaceFilterPatterns(notificationProperties);
    }

    private List<Pattern> getDataspaceFilterPatterns(final NotificationProperties notificationProperties) {
        if (notificationProperties.isEnabled()) {
            return Arrays.stream(notificationProperties.getFilters()
                    .getOrDefault("enabled-dataspaces", "")
                    .split(","))
                .map(filterPattern -> Pattern.compile(filterPattern, Pattern.CASE_INSENSITIVE))
                .collect(Collectors.toList());
        } else {
            return Collections.emptyList();
        }
    }

    /**
     * Process a Data Updated Event and publish the notification.
     *
     * @param dataspaceName dataspace name
     * @param anchorName anchor name
     * @param observedTimestamp observedTimestamp
     * @return future
     */
    @Async("notificationExecutor")
    public Future<Void> processDataUpdatedEvent(final String dataspaceName, final String anchorName,
        final OffsetDateTime observedTimestamp) {
        log.debug("process data updated event for dataspace '{}' & anchor '{}'", dataspaceName, anchorName);
        try {
            if (shouldSendNotification(dataspaceName)) {
                final var cpsDataUpdatedEvent =
                    cpsDataUpdatedEventFactory.createCpsDataUpdatedEvent(dataspaceName, anchorName, observedTimestamp);
                log.debug("data updated event to be published {}", cpsDataUpdatedEvent);
                notificationPublisher.sendNotification(cpsDataUpdatedEvent);
            }
        } catch (final Exception exception) {
            /* All exceptions are handled so they do not propagate to the caller.
               A CPS operation should not fail if sending the event fails for any reason. */
            notificationErrorHandler.onException("Failed to process cps-data-updated-event.",
                exception, dataspaceName, anchorName);
        }
        return CompletableFuture.completedFuture(null);
    }

    /* Add more complex rules based on dataspace and anchor later */
    private boolean shouldSendNotification(final String dataspaceName) {
        return notificationProperties.isEnabled()
            && dataspacePatterns.stream()
                .anyMatch(pattern -> pattern.matcher(dataspaceName).find());
    }
}
Async Request Option using Messaging (Out of Scope)
This was for a future, completely message-driven solution. For now we start with a REST request that will eventually generate an async message; in future we could also send a message that triggers the same flow.
Webflux Investigation (Out of Scope)
What is Webflux?
Spring WebFlux is a web framework that’s built on top of Project Reactor, to give you asynchronous I/O, and allow your application to perform better. The original web framework included in the Spring Framework, Spring Web MVC, was purpose-built for the Servlet API and Servlet containers. The reactive-stack web framework, Spring WebFlux, was added later in version 5.0. It is fully non-blocking, supports Reactive Streams back pressure, and runs on such servers as Netty, Undertow, and Servlet 3.1+ containers.
We suggest that you consider the following specific points:
If you have a Spring MVC application that works fine, there is no need to change. Imperative programming is the easiest way to write, understand, and debug code. You have maximum choice of libraries, since, historically, most are blocking.
In a microservice architecture, you can have a mix of applications with either Spring MVC or Spring WebFlux controllers or with Spring WebFlux functional endpoints. Having support for the same annotation-based programming model in both frameworks makes it easier to re-use knowledge while also selecting the right tool for the right job.
A simple way to evaluate an application is to check its dependencies. If you have blocking persistence APIs (JPA, JDBC) or networking APIs to use, Spring MVC is the best choice for common architectures at least. It is technically feasible with both Reactor and RxJava to perform blocking calls on a separate thread but you would not be making the most of a non-blocking web stack.
If you have a Spring MVC application with calls to remote services, try the reactive WebClient. You can return reactive types (Reactor, RxJava, or other) directly from Spring MVC controller methods. The greater the latency per call or the interdependency among calls, the more dramatic the benefits. Spring MVC controllers can call other reactive components too.
If you have a large team, keep in mind the steep learning curve in the shift to non-blocking, functional, and declarative programming. A practical way to start without a full switch is to use the reactive WebClient. Beyond that, start small and measure the benefits. We expect that, for a wide range of applications, the shift is unnecessary. If you are unsure what benefits to look for, start by learning about how non-blocking I/O works (for example, concurrency on single-threaded Node.js) and its effects.
Webflux supports:
- Annotation-based reactive components
- Functional routing and handling
Source: https://docs.spring.io/spring-framework/docs/current/reference/html/web-reactive.html
Pros & cons
Pros | Cons
---|---
Better scalability due to non-blocking threads; fewer threads used (1 per core); better CPU efficiency | Reactive web programming is great for applications that have streaming data and clients that consume and stream it to their users, but it is not great for developing CRUD apps (for a CRUD API, stick with Spring MVC); steep learning curve in the shift to non-blocking, functional, and declarative programming
Links to materials
https://www.baeldung.com/spring-webflux
https://www.youtube.com/watch?v=1F10gr2pbvQ
Kafka Strimzi Investigation
https://strimzi.io/
Can Robot Framework verify Kafka Events?
It does not appear to be possible to verify Kafka events in Robot Framework natively, but there are third-party libraries that would aid in this.
Groovy tests for Kafka already exist in cps-service/src/test/groovy/org/onap/cps/notification
CPS-834
To facilitate demo and testing of this functionality, a new standalone app will be required to act as a client.
This is necessary as the client will need to connect to Kafka to consume async responses.
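One way such a demo client could match consumed async responses to the requests it issued is a pending-request map keyed by request id. This sketch uses plain JDK types with hypothetical names; the Kafka listener wiring that would invoke onResponse is omitted:

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

public class AsyncResponseCorrelator {

    // Pending requests keyed by request id, completed when the matching async event arrives
    private final Map<String, CompletableFuture<String>> pendingRequests = new ConcurrentHashMap<>();

    // Called when the client issues a request and records the request id returned in the ACK
    public CompletableFuture<String> register(final String requestId) {
        final CompletableFuture<String> future = new CompletableFuture<>();
        pendingRequests.put(requestId, future);
        return future;
    }

    // Would be invoked from the client's Kafka consumer for the agreed topic
    public void onResponse(final String requestId, final String payload) {
        final CompletableFuture<String> future = pendingRequests.remove(requestId);
        if (future != null) {
            future.complete(payload);
        }
    }
}
```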