...

  • Proposed name for the project: DataLake
  • Proposed name for the repository: datalake

Project Goals:

Build permanent storage to persist the data that flows through ONAP, and build data analytics tools on it.

Project description:

DMaaP data is read and processed by many ONAP components. DMaaP is backed by Kafka, a publish-subscribe system that is not suitable for data query and data analytics. Additionally, Kafka is not meant to be permanent storage, and data is deleted after a certain retention period. Thus it is useful to persist the data that flows through DMaaP to databases, with the following benefits:

  1. Data is stored in permanent storage as a history record. DMaaP is free to set its message retention period without taking history records into consideration.

  2. With a database table schema, it is convenient to query and retrieve data.

  3. For data analytics and reporting, accessing data from a database is easier than from DMaaP/Kafka.

In this project, we provide a systematic way to ingest DMaaP data in real time into Couchbase, a distributed document-oriented NoSQL database with flexible table schema, and Druid, a data store designed for real-time OLAP analytics. We also provide sophisticated and ready-to-use data analytics tools built on the data.

DataLake's goals are:

  1. Provide a systematic way to ingest DMaaP data in real time into Couchbase, a distributed document-oriented database, and Druid, a data store designed for real-time OLAP analytics.
  2. Serve as a common data storage for other ONAP components, with easy access.
  3. Provide APIs and ways for ONAP components and external systems (e.g. BSS/OSS) to consume the data.
  4. Provide sophisticated and ready-to-use data analytics tools built on the data.

Architecture:

(Architecture diagram)

Scope:


Data Sources

  • Monitor all or selected DMaaP topics, read the data in real time, and persist it (see the sketch after this list).

  • Other ONAP components can use DataLake as storage for application-specific data, through DMaaP or DataLake REST APIs.

  • Other data sources will be supported if needed.
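
As a rough illustration of the topic monitoring described above, the minimal sketch below shows a plain Kafka consumer that polls a DMaaP topic and hands each message to a persistence layer. The broker address, topic name, and consumer group are placeholder assumptions; the eventual DataLake feeder may be structured differently.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class TopicMonitor {

    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder broker address; DMaaP Message Router is backed by Kafka.
        props.put("bootstrap.servers", "message-router-kafka:9092");
        props.put("group.id", "datalake-feeder");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Placeholder topic name; in DataLake the monitored topics are configurable.
            consumer.subscribe(Collections.singletonList("unauthenticated.SEC_FAULT_OUTPUT"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    // Hand the raw message off to the configured data stores.
                    persist(record.topic(), record.value());
                }
            }
        }
    }

    private static void persist(String topic, String message) {
        // Stub: Couchbase/Druid insertion is sketched in the sections below.
        System.out.printf("topic=%s size=%d%n", topic, message.length());
    }
}
```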

Dispatcher

  • Provide an admin REST API for configuration and topic management. Each topic can be configured with the data stores it is exported to, with Couchbase and Druid supported initially; we may support more distributed databases in the future. (A sketch of a possible per-topic configuration follows this list.)

  • Provide an SDC/design-time framework UI for management, making use of the above admin REST API.
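
To make the topic-to-store mapping concrete, the following is a hypothetical shape for the per-topic configuration that the admin REST API could manage. The class and field names are illustrative assumptions, not a committed API contract.

```java
import java.util.List;

// Hypothetical shape of the per-topic configuration handled by the admin REST API;
// all field names here are illustrative, not the final API contract.
public class TopicConfig {
    private String topicName;          // DMaaP topic to monitor
    private boolean enabled;           // whether the topic is currently ingested
    private List<String> sinks;        // e.g. ["COUCHBASE", "DRUID"]
    private String dataFormat;         // "JSON", "XML", "YAML" or "TEXT"
    private int ttlDays;               // optional retention in the data store

    // getters/setters omitted for brevity
}
```

An endpoint such as PUT /topics/{topicName} could accept this object; the exact resource paths and fields would be decided during API design.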

Document Store

  • Monitor selected topics, pull the data in real time, and insert it into Couchbase, one table for each topic, with the same table name as the topic name.

  • Data in JSON, XML, or YAML format is auto-detected and converted into the native store schema; data not in these formats is stored as a single string for now. We may support additional formats. (See the sketch after this list.)

  • Provide a REST API for data query, while applications can access the data through the data store's native API as well.

  • Couchbase supports Spark running directly on it, which allows complex analytics tools to be built. We may develop Spark analytics applications if needed.
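
The sketch below illustrates the format auto-detection and insertion described above, assuming Jackson's XML/YAML data-format modules and the Couchbase Java SDK 2.x. The host, bucket, and field names are placeholders, and the real feeder may handle formats differently.

```java
import java.util.UUID;

import com.couchbase.client.java.Bucket;
import com.couchbase.client.java.CouchbaseCluster;
import com.couchbase.client.java.document.JsonDocument;
import com.couchbase.client.java.document.json.JsonObject;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.dataformat.xml.XmlMapper;
import com.fasterxml.jackson.dataformat.yaml.YAMLFactory;

public class DocumentStoreSink {

    private static final ObjectMapper JSON = new ObjectMapper();
    private static final ObjectMapper XML = new XmlMapper();
    private static final ObjectMapper YAML = new ObjectMapper(new YAMLFactory());

    // Detect the message format (JSON, then XML, then YAML) and normalize it
    // to a JSON string; anything unrecognized is wrapped as a single string field.
    static String toJson(String message) {
        for (ObjectMapper mapper : new ObjectMapper[] {JSON, XML, YAML}) {
            try {
                JsonNode node = mapper.readTree(message);
                if (node != null && node.isContainerNode()) {
                    return JSON.writeValueAsString(node);
                }
            } catch (Exception ignored) {
                // not this format; try the next one
            }
        }
        return JsonObject.create().put("rawdata", message).toString();
    }

    public static void main(String[] args) {
        // Placeholder host and bucket names; one bucket ("table") per monitored topic.
        Bucket bucket = CouchbaseCluster.create("couchbase-host")
                                        .openBucket("unauthenticated.SEC_FAULT_OUTPUT");
        String json = toJson("{\"event\":{\"severity\":\"CRITICAL\"}}");
        bucket.upsert(JsonDocument.create(UUID.randomUUID().toString(),
                                          JsonObject.fromJson(json)));
    }
}
```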

OLAP Store

  • Monitor selected topics, pull the data in real time, and insert it into Druid, one datasource for each topic, with the same datasource name as the topic name.

  • Extract the dimensions and metrics from JSON data and pre-configure Druid settings for each datasource, which are customizable through a web interface (see the sketch after this list).

  • Integrate Apache Superset for data exploration and visualization, and provide pre-built interactive dashboards.
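
As a sketch of the dimension/metric extraction mentioned above (an assumed approach, not the final implementation), leaf fields of a JSON event can be classified by type, with the result feeding a Druid ingestion spec that operators can later adjust through the web interface.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.Map;

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class DruidSpecHelper {

    // Walk a (possibly nested) JSON event and classify its fields:
    // numeric leaves become metric candidates, everything else
    // (including arrays, for brevity) dimension candidates.
    static void classify(String prefix, JsonNode node, List<String> dims, List<String> metrics) {
        Iterator<Map.Entry<String, JsonNode>> fields = node.fields();
        while (fields.hasNext()) {
            Map.Entry<String, JsonNode> field = fields.next();
            String path = prefix.isEmpty() ? field.getKey() : prefix + "." + field.getKey();
            JsonNode value = field.getValue();
            if (value.isObject()) {
                classify(path, value, dims, metrics);
            } else if (value.isNumber()) {
                metrics.add(path);
            } else {
                dims.add(path);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        // Illustrative VES-like sample event.
        String sample = "{\"event\":{\"commonEventHeader\":{\"sourceName\":\"pnf-1\","
                + "\"lastEpochMicrosec\":1546300800000000},"
                + "\"measurementsForVfScalingFields\":{\"cpuUsage\":0.75}}}";
        List<String> dims = new ArrayList<>();
        List<String> metrics = new ArrayList<>();
        classify("", new ObjectMapper().readTree(sample), dims, metrics);
        System.out.println("dimensions: " + dims);    // [event.commonEventHeader.sourceName]
        System.out.println("metrics:    " + metrics); // [...lastEpochMicrosec, ...cpuUsage]
        // The lists would feed the dimensionsSpec/metricsSpec of a Druid ingestion
        // spec; a timestamp such as lastEpochMicrosec would be re-assigned as the
        // time column by the operator in the web interface.
    }
}
```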

Architecture Alignment:

  • How does this project fit into the rest of the ONAP Architecture?
    DataLake provides both API and UI interfaces. The UI is for analysts to analyze the data, while the API is for other ONAP (and external) components to query the data. For example, UUI can use the API to retrieve historical events. Some DCAE service applications may also make use of the APIs.
  • What other ONAP projects does this project depend on?
    DataLake depends on DMaaP for data ingestion, and also on some other common services: OOM, SDC, MSB.

  • In Relation to Other Components
    • DCAE focuses on being a part of the automated closed control loop on VNFs; storing collected data for archiving has not been covered by the DCAE scope (see the ONAP wiki forum). We envision that some DCAE analytics applications may use the data in DataLake.
    • PNDA is an infrastructure that bundles a wide variety of big data technologies for data processing; applications are to be developed on the technologies provided by PNDA. The goal of DataLake is to store DMaaP and other data, and to build ready-to-use applications around the data, making use of suitable technologies whether or not they are provided by PNDA. Currently Couchbase, Druid and Superset are not included in PNDA.
  • How does this align with external standards/specifications?
    • APIs/Interfaces  - REST, JSON, XML, YAML
    • Information/data models - Swagger JSON
  • Are there dependencies with other open source projects?
    • Couchbase
    • Apache Druid
    • Apache Superset
    • Apache Spark

Other Information:

  • link to seed code (if applicable)
  • Vendor Neutral
    • Yes
  • Meets Board policy (including IPR)

...

Role           First Name Last Name   Linux Foundation ID   Email Address                Location

PTL            Guobiao Mo              guobiaomo             guobiaomo@chinamobile.com    Milpitas, CA USA. UTC -7
Committers     Guobiao Mo              guobiaomo             guobiaomo@chinamobile.com    Milpitas, CA USA. UTC -7
               Xin Miao                                      xin.miao@hauwei.com          Texas, USA, CST
               Zhaoxing Meng           Zhaoxing              meng.zhaoxing1@zte.com.cn    Chengdu, China. UTC +8
               Tao Shen                shentao999            shentao@chinamobile.com      Beijing, China. UTC +8
Contributors