Purpose:
The main purpose is to ensure that metric/log collection, correlation and analysis, and closed-loop actions are performed closer to the data.
Analytics can be infrastructure analytics, VNF analytics, or application analytics; in Dublin, analytics-as-a-service is demonstrated using infrastructure analytics.
- Big Data analytics to ensure that the analysis is accurate.
- Big Data frameworks to allow the use of machine learning and deep learning.
- Avoid sending large amounts of data to ONAP-Central for training, by letting training happen near the data source (cloud-regions).
- Improve ONAP scale-out performance by distributing some functions, such as analytics, out of ONAP-Central.
- Let inferencing happen closer to the edges/cloud-regions for future closed-loop operations, thereby reducing closed-loop latency.
- Opportunity to standardize infra analytics events/alerts/alarms through output normalization across ONAP-based and 3rd-party analytics applications.
Owner: Dileep Ranganathan, TBD
Participating Companies: Intel, VMware
Operator Support: China Mobile, Vodafone
Parent page: Edge Automation Functional Requirements for Dublin
Link to presentation documents: Distributed Analytics-as-a-Service presentations
Use Case Name
Showcase VNF | Test Environment | Integration Team Liaison |
---|---|---|
vFW, vCPE (TBD), 5G (TBD) | Intel/Windriver Lab, VMware Lab (TBD) | TBD |
Dublin focus
- Creation of Helm charts for the analytics framework, as two packages:
- Standard package (with all SW) and inferencing package (minimal).
- Deployment of Analytics framework in the cloud-regions that are based on K8S. Identify any gaps and work with "K8S based Cloud region support" team to fix them.
- Cloud infra Event/Alert/Alarm/Fault Normalization & Dispatching microservice deployment on K8S.
- Spark application management with the PNDA deployment manager (to dispatch application images to various cloud regions)
- ML/DL Model management & Dispatcher (Stretch goal)
- Analytics Application (consisting of multiple components) configuration profile support using Multi-Cloud/K8S configuration service. Develop config-sync plugin.
- Development of Collection and Distribution Service - CollectD to Kafka (CollectD-kafka/avro)
- Collection and Distribution Service - Node-exporter & cAdvisor to Kafka (Stretch Goal)
- ONAP alarm event dispatcher micro-service (ONAP-event-dispatcher)
- Make the TCA application generic, or create a simple TCA application that runs on any Spark-based framework (input via Kafka, configuration updates directly via Consul, output via Kafka), since it needs to run in cloud regions that do not have ONAP-specific components: TCA-spark application (for testing)
- 3rd Party Infra Analytics application aligning with the output of generic TCA application.
- Creation of a set of Helm charts, 'infra analytics base', consisting of the following:
- Daemon set consisting of 'CollectD & collectD-config-agent'.
- CollectD-Kafka/avro
- Node-exporter-to-Kafka/Avro
- cAdvisor-to-kafka/Avro
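The generic TCA application described above (input via Kafka, configuration updates via Consul, output via Kafka) reduces, at its core, to a per-record threshold check. The following is a minimal sketch of that core logic only, with the Kafka/Consul wiring stubbed out; the record and rule field names are illustrative assumptions, not an agreed schema:

```python
# Minimal sketch of the threshold-crossing core of a generic TCA application.
# In the real application, metric records would arrive via Kafka, rules via
# Consul, and crossing events would be published back to Kafka; all field
# names and shapes here are assumptions for illustration.
from typing import Optional

def check_threshold(record: dict, rules: list) -> Optional[dict]:
    """Return a crossing event if any rule fires for this metric record."""
    for rule in rules:
        if record.get("metric") != rule["metric"]:
            continue
        value = record.get("value", 0.0)
        crossed = (
            (rule["direction"] == "GREATER" and value > rule["threshold"])
            or (rule["direction"] == "LESS" and value < rule["threshold"])
        )
        if crossed:
            return {"metric": rule["metric"], "value": value,
                    "threshold": rule["threshold"], "severity": rule["severity"]}
    return None  # no threshold crossed

# Example: CPU idle dropping below 10% raises a MAJOR event.
rules = [{"metric": "cpu.idle", "direction": "LESS",
          "threshold": 10.0, "severity": "MAJOR"}]
event = check_threshold({"metric": "cpu.idle", "value": 4.2}, rules)
# event -> {"metric": "cpu.idle", "value": 4.2, "threshold": 10.0, "severity": "MAJOR"}
```

Keeping the evaluation logic free of Kafka/Consul dependencies is what lets the same TCA application run in cloud regions without ONAP-specific components.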
Dublin Assumptions:
- Kubernetes support in cloud regions (support for others in the future; what will be supported is TBD)
- PNDA as a base (Alignment with DCAE – DCAE already decided to use PNDA framework)
- Spark framework for both training and inference (future: make inference a microservice for easier deployment, and package inference as a set of executables that can be deployed even within the application/NF workload or on the compute node)
- Full framework instantiation (future: work with partial deployments that already exist; for example, support an existing HDFS deployment by instantiating only the other components)
- Instantiated in a new namespace (not an existing namespace) in remote cloud regions
- Dynamic configuration updates to analytics applications will use Consul in Dublin; other mechanisms are for further study.
- Closed loop actions are performed at the ONAP-Central.
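Per the Consul assumption above, an analytics application would pull its dynamic configuration from Consul's KV store. A minimal sketch of the decoding step (the KV response shape, with a base64-encoded "Value" field, follows Consul's HTTP API; the key path is an assumption):

```python
# Sketch of consuming dynamic analytics-app configuration from Consul's
# KV store, per the Dublin assumption above. Consul's /v1/kv/<key> endpoint
# returns a JSON list of entries whose "Value" field is base64-encoded.
import base64
import json

def decode_kv_entry(entry: dict) -> dict:
    """Decode one Consul KV entry's base64 Value into a config dict."""
    return json.loads(base64.b64decode(entry["Value"]).decode("utf-8"))

# Polling against a live agent would look like this (hypothetical key path,
# requires a reachable Consul agent, so it is left commented out):
# import requests
# resp = requests.get("http://consul:8500/v1/kv/analytics/tca/config")
# config = decode_kv_entry(resp.json()[0])

# Self-contained example using a synthetic KV entry:
raw = {"Value": base64.b64encode(json.dumps({"threshold": 90}).encode()).decode()}
config = decode_kv_entry(raw)
# config -> {"threshold": 90}
```

Consul's blocking queries (the `index`/`wait` query parameters) could extend this into a long-poll watch so applications pick up configuration changes without restarts.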
DCAE/CLAMP integration
DCAE integration was felt to be complicated. Due to resource constraints, the plan is to develop the common items in R4 and integrate with DCAE/CLAMP in future releases. During the Dublin time frame, the goals are to:
- Understand how DCAE/CLAMP can play a role in analytics-as-a-service.
- Identify work items
- Create E2E sequence flows.
Impacted Projects
Project | PTL | JIRA Epic / User Story* | Requirements |
---|---|---|---|
DCAE (Or Demo repository) | Vijay Venkatesh Kumar | | |
Demo repository | | | |
Multi-VIM/Cloud | | | |
Multi-VIM/Cloud | Bin Yang | | Cloud infra Event/Alert/Alarm/Fault Normalization & Dispatching microservice development |
*Each Requirement should be tracked by its own User Story in JIRA
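The Event/Alert/Alarm/Fault Normalization & Dispatching microservice listed above maps heterogeneous infra events onto one common shape so that ONAP-based and 3rd-party analytics emit comparable output. A minimal sketch of that mapping, assuming a collectd-style notification as input; the normalized field names are illustrative assumptions, not an agreed schema:

```python
# Hypothetical sketch of normalizing a raw collectd-style notification into
# a common alarm shape before dispatching. Field names on both sides are
# assumptions for illustration, not a defined ONAP event schema.

def normalize_alarm(raw: dict) -> dict:
    """Map a collectd-style notification to a normalized alarm record."""
    severity_map = {"failure": "CRITICAL", "warning": "WARNING", "okay": "NORMAL"}
    return {
        "eventType": "infraAlarm",
        "source": raw.get("host", "unknown"),
        "metric": raw.get("plugin", ""),
        "severity": severity_map.get(raw.get("severity", ""), "UNKNOWN"),
        "timestamp": raw.get("time", 0),
        "text": raw.get("message", ""),
    }

# Example: a collectd "failure" notification becomes a CRITICAL infra alarm.
alarm = normalize_alarm({
    "host": "edge-node-1", "plugin": "cpu",
    "severity": "failure", "time": 1554000000, "message": "cpu usage high",
})
# alarm["severity"] -> "CRITICAL"
```

In the actual microservice, the dispatching side would publish these normalized records to a Kafka topic for consumption by the analytics applications.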
Testing
Current Status
Testing Blockers
- High visibility bugs
- Other issues for testing that should be seen at a summary level
- Where possible, always include JIRA links
End to End flow to be Tested
Same as vFW (TBD)
Test Cases and Status
# | Test Case | Status |
---|---|---|
1 | There should be a test case for each item in the sequence diagram | NOT YET TESTED |
2 | Create additional requirements as needed for each discrete step | COMPLETE |
3 | Test cases should cover entire Use Case | PARTIALLY COMPLETE |
4 | Test Cases should include enough detail for testing team to implement the test | FAILED |