...

Figure 2. DCAE architecture


DCAE Components

The DCAE subsystem consists of multiple components: Common Collection Framework, Data Movement, Edge and Central Lake, Analytic Framework, and Analytic Applications.

Common Collection Framework

The collection layer provides the various data collectors necessary to collect the instrumentation that is made available in the cloud infrastructure. Included are both physical and virtual elements. For example, collection of the following types of data is supported:

  • events data for monitoring the health of the managed environment

  • data to compute the key performance and capacity indicators necessary for elastic management of the resources

  • granular data needed for detecting network and service conditions (such as flow, session and call records)

The collection layer supports both real-time streaming and batch collection.
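The dual intake modes described above can be sketched as follows. This is a minimal illustration, not the DCAE collection API; the `Collector` class and its method names are assumptions made for the example.

```python
from collections import deque

class Collector:
    """Illustrative collector supporting both real-time streaming
    and batch intake, feeding a common buffer."""

    def __init__(self):
        self.buffer = deque()

    def on_stream_event(self, event):
        # Real-time path: each event is buffered as it arrives.
        self.buffer.append(event)

    def collect_batch(self, events):
        # Batch path: a bulk of records gathered by file or poll.
        self.buffer.extend(events)

    def drain(self):
        # Hand the accumulated records to the data-movement layer.
        out = list(self.buffer)
        self.buffer.clear()
        return out

collector = Collector()
collector.on_stream_event({"type": "fault", "severity": "MAJOR"})
collector.collect_batch([{"type": "kpi", "value": 0.93},
                         {"type": "kpi", "value": 0.95}])
records = collector.drain()
```

Whatever the intake path, downstream consumers see one uniform stream of records, which is the property the collection layer is meant to provide.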

 
Data Movement

This component facilitates the movement of messages and data between various publishers and interested subscribers. While a key component within DCAE, it is also the component that enables data movement between various OpenECOMP components.
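The publisher/subscriber pattern this component implements can be sketched with a minimal in-memory topic broker. This is an illustration of the pattern only; the real data-movement service is a distributed bus, and the `MessageRouter` name and methods here are assumptions for the example.

```python
from collections import defaultdict

class MessageRouter:
    """Minimal topic-based publish/subscribe broker (illustrative only)."""

    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        # Register interest in a topic.
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Deliver the message to every subscriber on the topic;
        # the publisher never needs to know who is listening.
        for callback in self.subscribers[topic]:
            callback(message)

router = MessageRouter()
received = []
router.subscribe("measurements", received.append)
router.publish("measurements", {"vnf": "fw-01", "cpu": 0.72})
```

The decoupling shown here is what lets new analytic applications subscribe to existing data feeds without changes to the collectors that publish them.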

Edge and Central Lake

DCAE needs to support a variety of applications and use cases ranging from real-time applications that have stringent latency requirements to other analytic applications that have a need to process a range of unstructured and structured data. The DCAE storage lake needs to support all of these needs and must do so in a way that allows for incorporating new storage technologies as they become available. This will be done by encapsulating data access via APIs and minimizing application knowledge of the specific technology implementations.

Given the scope of requirements around the volume, velocity and variety of data that DCAE needs to support, the storage will leverage the technologies that Big Data has to offer, such as support for NoSQL technologies, including in-memory repositories, and support for raw, structured, unstructured and semi-structured data. While detailed data may be retained at the DCAE edge layer for detailed analysis and troubleshooting, applications should optimize the use of precious bandwidth and storage resources by ensuring they propagate only the required data (reduced, transformed, aggregated, etc.) to the Core Data Lake for other analyses.
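Encapsulating data access via APIs, as described above, can be sketched as an abstract store interface with swappable backends. The interface and class names below are hypothetical; the point is that application code depends only on the API, so a new storage technology can be adopted by adding a backend.

```python
class DataLakeStore:
    """Hypothetical storage-access API. Applications program against
    this interface, not against a specific storage technology."""

    def put(self, key, record):
        raise NotImplementedError

    def get(self, key):
        raise NotImplementedError

class InMemoryStore(DataLakeStore):
    """Stand-in backend; a NoSQL or in-memory repository
    would plug in behind the same interface."""

    def __init__(self):
        self._data = {}

    def put(self, key, record):
        self._data[key] = record

    def get(self, key):
        return self._data.get(key)

store: DataLakeStore = InMemoryStore()
# Only reduced/aggregated data is propagated to the central lake.
store.put("flow/123", {"bytes": 20480, "reduced": True})
```

Swapping `InMemoryStore` for another subclass changes the storage technology without touching application code, which is the isolation the paragraph calls for.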

Analytic Framework

The Analytic Framework is an environment that allows for development of real-time applications (e.g., analytics, anomaly detection, capacity monitoring, congestion monitoring, alarm correlation, etc.) as well as other non-real-time applications (e.g., analytics, forwarding synthesized, aggregated or transformed data to Big Data stores and applications); the intent is to structure the environment so that it allows for agile introduction of applications from various providers (Labs, IT, vendors, etc.). The framework should support the ability to process both a real-time stream of data and data collected via traditional batch methods. The framework should support methods that allow developers to compose applications that process data from multiple streams and sources. Analytic applications are developed by various organizations; however, they all run in the DCAE framework and are managed by the DCAE controller. These applications are micro-services developed by a broad community and adhere to ECOMP Framework standards.
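The idea of composing applications from stages that process a stream can be sketched in a few lines. The `compose` helper and the stage functions are illustrative assumptions, not the framework's actual API.

```python
def compose(*stages):
    """Chain stream-processing stages into one pipeline (illustrative).
    Each stage takes an iterable of records and yields records."""
    def pipeline(records):
        for stage in stages:
            records = stage(records)
        return list(records)
    return pipeline

def parse(records):
    # Keep only well-formed records carrying a measurement value.
    return (r for r in records if "value" in r)

def threshold(records):
    # Flag records whose value crosses an assumed 0.9 threshold.
    return (r for r in records if r["value"] > 0.9)

# A micro-service-style analytic app built from reusable stages.
app = compose(parse, threshold)
alerts = app([{"value": 0.95}, {"value": 0.5}, {"id": 7}])
```

Because each stage has the same iterable-in/iterable-out shape, stages from different providers can be recombined, which is the agility the framework aims for.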

Analytic Applications

The following list provides examples of the types of applications that can be built on top of DCAE and that depend on the timely collection of detailed data and events by DCAE.

Analytics

These will be the most common applications, processing the collected data and deriving interesting metrics or analytics for use by other applications or Operations. These analytics range from very simple ones (from a single source of data) that compute usage, utilization, latency, etc. to very complex ones that detect specific conditions based on data collected from various sources. The analytics could be capacity indicators used to adjust resources or could be performance indicators pointing to anomalous conditions requiring response.
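A "very simple" single-source analytic of the kind mentioned above might look like the following sketch, which computes average utilization over a window of samples; the function and threshold are assumptions for illustration.

```python
def utilization(samples):
    """Average utilization over a window of single-source samples."""
    return sum(samples) / len(samples)

# Window of CPU-utilization samples from one managed resource.
u = utilization([0.60, 0.80, 0.70])

# An assumed capacity indicator: flag the resource for scaling
# when average utilization exceeds 0.9.
needs_scaling = u > 0.9
```

More complex analytics follow the same shape but combine windows from many sources before deriving their indicator.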

Fault / Event Correlation

This is a key application that processes events and thresholds published by managed resources or other applications that detect specific conditions. Based on defined rules, policies, known signatures and other knowledge about the network or service behavior, this application would determine the root cause of various conditions and notify interested applications and Operations.
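Signature-based root-cause determination, as described above, can be sketched as matching observed events against known rules. The rule format and event shapes below are assumptions made for the example, not DCAE's rule language.

```python
def correlate(events, rules):
    """Match observed events against known signatures; return the
    root cause of every rule whose signature events all occurred."""
    observed = {e["type"] for e in events}
    causes = []
    for rule in rules:
        if rule["signature"] <= observed:  # all signature events seen
            causes.append(rule["root_cause"])
    return causes

# A hypothetical signature: a link-down plus a BGP flap
# together indicate a fiber cut.
rules = [{"signature": {"link_down", "bgp_flap"},
          "root_cause": "fiber cut"}]
events = [{"type": "link_down"},
          {"type": "bgp_flap"},
          {"type": "cpu_high"}]
causes = correlate(events, rules)
```

In a real correlator the rules would come from policies and learned network behavior, and matches would trigger notifications to interested applications and Operations.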

Performance Surveillance & Visualization

This class of application provides a window to Operations, notifying them of network and service conditions. The notifications could include outages and impacted services or customers based on various dimensions of interest to Operations. These applications provide visual aids ranging from geographic dashboards to virtual information model browsers to detailed drill-downs into specific service or customer impacts.

...