Lead:
Team:
Ramki Krishnan, Srinivasa Addepalli, Vimal Begwani, Mike Elliott, Vijay Venkatesh Kumar, Avi Chapnick, Borislav Glozman, Fernando Oliveira, Tal Liron, Margaret Chiosi, Ravi Rao, Raghu Ranganathan, Michael O'Brien, Xin Miao, Simone Mangiante, Timo Perala, Davide Cherubini, John Ng
Others – please add yourself if you are interested
Meetings:
Every week as part of Edge Automation WG – Edge Automation through ONAP
References:
- ONAP Dublin Architecture Requirements: https://wiki.lfnetworking.org/display/LN/OPNFV-ONAP+January+2019+Session+Proposals?preview=/8257582/10551784/2019-01%20Dublin%20Architecture%20Requirements-pa1.pptx
- DCAE Platform Requirements: https://wiki.onap.org/download/attachments/28379482/DCAE%20Platform%20Requirements.pptx?api=v2
Activity Description:
Starting with Analytics, describe the options and recommendations for distributing management (ONAP etc.) functions.
Problem Statement:
- Management Workloads
- Currently, Multiple Orchestrators for Management Workloads.
- ONAP Central Management – OOM
- Analytics Central/Distributed Management – DCAE (ONAP, SP internal, Third Party)
- There is an opportunity to align across the multiple orchestrators, which would be greatly beneficial, especially in a distributed edge environment
- Managed Workloads (SDC, SO, OOF etc)
- Full support for containerized network functions (work in progress)
- Support for non-network functions (VM and Container based), e.g. vProbe, Automation Apps
Solution Direction:
- Leverage existing capabilities and select among them, or motivate new approaches
- Management Workload:
- Align on a single orchestrator solution for all management workloads
- Managed Workload:
- Enhance SDC, SO, A&AI, MC etc. to support containerized functions
- Leverage ONAP for deploying and managing non-network functions
- Longer-term:
- Explore feasibility for orchestration alignment between managed workload and management workload
- Cloud-Native foundation:
- Leverage K8S (Operators, Custom Resource Definitions etc.) for Distributed Systems Management
- Image management – at-scale rolling upgrades
- Policy/Configuration change – notify only deltas
- Leverage Istio Service Mesh (Distributed Tracing etc.) for Component Performance Management
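The "notify only deltas" idea above can be sketched as follows. This is a minimal, hypothetical Python sketch (the function name and config shapes are illustrative, not part of any ONAP API): it computes the difference between the currently deployed configuration and the new desired one, so that only changed keys need to be pushed to edge sites over constrained WAN links.

```python
def config_delta(current: dict, desired: dict) -> dict:
    """Compute the minimal set of changes to turn `current` into `desired`.

    Returns the keys to set/update and the keys to remove, so an edge
    site receives only the delta instead of the full configuration.
    """
    changed = {k: v for k, v in desired.items() if current.get(k) != v}
    removed = [k for k in current if k not in desired]
    return {"set": changed, "remove": removed}

# Example: only the log level changed and one key was dropped.
current = {"log_level": "INFO", "kafka_topic": "events", "batch_size": 100}
desired = {"log_level": "DEBUG", "kafka_topic": "events"}
delta = config_delta(current, desired)
# delta == {"set": {"log_level": "DEBUG"}, "remove": ["batch_size"]}
```

In practice a K8S operator watching a Custom Resource would apply such a delta; the sketch only shows the delta computation itself.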
Architectural Deployment Scenarios to consider:
Management Workloads
Deployment Model: Edge using certain ONAP management workload functions as an offload.
Note: In this context, offload is the process of moving certain functions from central to edge locations to address requirements such as WAN bandwidth constraints, higher resiliency, real-time service assurance, etc.

Deployment Model | Description | Architecture | Near-term Priority |
---|---|---|---|
Edge and central provider are the same | | | Priority: ? Rationale: Note: Analytics is currently addressed by a distributed DCAE orchestrator based on Cloudify. Participant operator priority. |
Edge and central providers are different. Note: In this case, the central provider still manages the management functionality running at the edge, but using another operator's infrastructure. | Note: A Virtual Private Cloud (VPC) provides a dedicated pool of compute/network/storage resources using an infrastructure-as-a-service approach. | | Same as above. Vodafone (question): should this model be at a lower priority? For a first phase it is sufficient to enable the case where edge and central providers are the same. |
Managed Workloads
- Managed workload instantiation is always started by ONAP Central components
- If the edge uses certain ONAP management workload functions as an offload, as described in the previous table, the corresponding workload LCM functions will be handled by the offloaded ONAP management components
- No change is envisioned in workload instantiation from an ONAP user perspective.
Distributed Management application Requirements / Considerations
Definitions
Day 0 configuration: Configuration that is applied at the time of VNF instantiation (Example: either config-drive, cloud-init, or ConfigMap)
Day 2 configuration: Ongoing configuration after Day 0 configuration
(In VNFs, Day 1 configuration is treated as Day 2 configuration in the following table)
Management application: Can be ONAP component or equivalent component from third parties
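To make the Day 0 / Day 2 distinction concrete, here is a small illustrative Python sketch (the class and attribute names are hypothetical, not an ONAP API): Day 0 configuration is supplied once at instantiation time, while Day 2 changes are applied later to the running instance, ideally through the same configuration path, as one of the requirements below asks.

```python
class ManagementApp:
    """Toy model of a management application instance."""

    def __init__(self, name: str, day0_config: dict):
        # Day 0: configuration applied at instantiation time.
        self.name = name
        self.config = dict(day0_config)

    def apply_config(self, changes: dict) -> None:
        # Day 2: ongoing configuration changes to a running instance,
        # applied through the same config structure as Day 0.
        self.config.update(changes)

app = ManagementApp("collector", {"kafka_topic": "events", "log_level": "INFO"})
app.apply_config({"log_level": "DEBUG"})  # a Day 2 change
# app.config == {"kafka_topic": "events", "log_level": "DEBUG"}
```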
Solution Options
Management Apps as traditional Apps/VNFs - Option 1 | Extending DCAE Orchestration (OOM-Tosca) - Option 2 | Cloud Native K8S Ecosystem (which includes current OOM helm charts ) - Option 3 |
---|---|---|
Existing infrastructure that is available in ONAP: • Use SDC for onboarding management applications if they are independent of VNFs, or make the management app part of the VNF if it needs to be dynamic • Use SO to bring up the management app like any other VNF • Leverage MC for talking to various cloud regions of different technologies (management app as a VM, container, or public cloud entity) • Leverage OOF to make placement decisions (such as HPA, affinity, anti-affinity) | Use existing DCAE orchestration to deploy and manage the lifecycle of all management applications (central and edge). DCAE orchestration supports deployment of mS (collectors and analytics services); the primary orchestrator is Cloudify. The current orchestration can be extended to support deployment of Helm charts and TOSCA blueprints to deploy a wide range of managed applications/services at multiple sites. | Cloud Native K8S Ecosystem - https://landscape.cncf.io/ ONAP OOM Project - Prescriptive Helm Charts for various ONAP management plane components |
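Option 3's model — the same Helm charts used for ONAP Central are deployed to each cloud region via its kubeconfig — could be scripted roughly as below. This is a hypothetical sketch: the region names, kubeconfig paths, chart name, and override file are illustrative, and the `helm upgrade --install` commands are only assembled, not executed.

```python
def helm_deploy_commands(regions: dict, chart: str, release: str) -> list:
    """Build one `helm upgrade --install` command per cloud region.

    `regions` maps a region name to its kubeconfig path and an optional
    per-region configuration override file.
    """
    commands = []
    for region, info in regions.items():
        cmd = ["helm", "upgrade", "--install", release, chart,
               "--kubeconfig", info["kubeconfig"]]
        if info.get("overrides"):
            cmd += ["-f", info["overrides"]]
        commands.append(cmd)
    return commands

regions = {
    "edge-1": {"kubeconfig": "/etc/onap/edge-1.conf", "overrides": "cloud-2.yaml"},
    "edge-2": {"kubeconfig": "/etc/onap/edge-2.conf"},
}
cmds = helm_deploy_commands(regions, chart="onap/dcae-collector", release="dcae")
```

Each command could then be run with `subprocess.run`; keeping the loop in one place makes central, repeatable rollout to many regions straightforward.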
Quick Analysis of All Options
Option 1 (uses same infrastructure that is available for deploying VNFs)
- Option 1 Pros (with respect to other options)
- Since it uses the same infrastructure as VNFs, any enhancements done in orchestration for VNFs come for free for ONAP management applications.
- ONAP Management applications requiring selective placement (based on criteria such as hardware capabilities, latency, distance, affinity and anti-affinity) can be satisfied using OOF.
- ONAP management applications that have 1:1 correspondence with VNFs can be brought together.
- ONAP management applications of different form factors (such as VMs and containers) can be taken care of.
- ONAP management applications can be placed in cloud regions with different technologies
- Single orchestrator for both managed and management applications.
- Option 1 Cons:
- The majority of ONAP management applications today are described using Helm and hence can be deployed easily using option 1. But many ONAP management applications are described using TOSCA; they can't be deployed using option 1 until TOSCA support is added in SO. Cloudify-TOSCA support is not on the roadmap for Dublin, and the understanding is that Cloudify-TOSCA support in SO may not happen this year.
- Some of the ONAP components are not instantiated using OOM. They get instantiated dynamically by CLAMP framework. Supporting CLAMP initiated ONAP management application deployment with SO may require significant development.
- Option 1 Analysis:
- Any management application that is described in HEAT/Helm, that is independent of CLAMP can leverage this option.
- Since there are applications that are described in Cloudify-TOSCA, it is felt that this option alone can't satisfy the critical requirement.
- Conclusion:
- It was considered pragmatic not to pursue option 1 further, and instead to synergize options 2 and 3 to produce a best-of-breed solution.
Requirements and Narrowed-down Solution Options Mapping
Category | Requirement Item | Priority | Added by | Management Apps as traditional Apps/VNFs - Option 1 (Not considered as it is not satisfying the critical requirement of supporting existing Cloudify-TOSCA based management applications) | Extending DCAE Orchestration (OOM-Tosca) - Option 2 | Cloud Native K8S Ecosystem (which includes current OOM helm charts ) Mapping (Option 3) | ||
---|---|---|---|---|---|---|---|---|
Current status | Future work planned (approx. timeline desired) | Current status | Future work planned (approx. timeline desired) | |||||
Onboarding | Ability to onboard management applications that are to be deployed in cloud regions in ONAP-Central. Shall not expect that all management applications are onboarded as a single bundle. | high | Yes (Using SDC. SDC allows defining it as a VNF with multiple management applications as VNFCs. SDC allows multiple VNFs in a service, and there could be multiple services) | Supported through DCAE, SDC*, Policy, CLAMP | SDC Design tool enhancement (E release) | Existing OOM functionality. The same mechanism used to deploy Helm charts in ONAP Central is used to deploy Helm charts to cloud regions. | N/A |
Onboarding | Ability to compose multiple management applications to be part of one management bundle and to define the dependency graph of applications belonging to a bundle | high | Yes (SDC now supports Helm-based descriptions. It is possible to introduce dependencies via initContainers and Helm hooks) | Supported through DCAE, SDC (DS) | SDC Design tool enhancement (E release) | Existing OOM functionality. Customization of components to deploy is defined in configuration override files (such as cloud-2.yaml). Dependencies are defined in application Helm charts. | N/A |
Onboarding | Shall have a way to specify licensing options for third party management applications (similar to VNF licensing) | high | Srinivasa Addepalli | Yes (SDC has a way to provide licensing information) | TBA | |||
Instantiation | Ability to deploy management applications in selected cloud regions that are owned by the ONAP operator | high | Partially (SO has the ability to select the cloud region while deploying a VNF, and hence the same is applicable for management applications. But there is no bulk deployment by selecting multiple cloud regions; it requires enhancements. We believe this requirement is needed for NFs too) | Supported by design; needs further work to identify the cloud-region part of deployment | k8s plugin enhancement worked as a stretch goal for Dublin | Existing OOM functionality. Cloud regions are defined in ONAP-Central as kubeconfigs. Iterate over each cloud region and deploy common components. Can be scripted if desired. Question: should upgrades of cloud regions be under central control or regional control (i.e. via edge stacks such as Akraino)? |
Instantiation | Ability to deploy management applications that are ephemeral (example: analytics applications) | high | Yes (Complete control at SO. One can terminate the management application bundle at any time) | Yes | Existing OOM functionality. Terminate a deployed application by setting its 'enabled' flag to false | N/A |
Instantiation | Ability to deploy management applications in selected cloud regions that are not owned by the ONAP operator but have a business relationship (examples: public clouds or edge clouds owned by another organization) | low | Yes (Multi-Cloud has the ability to deploy workloads in any cloud region - whether owned by operators or even public clouds - as long as the right credentials are used) | Supported by design; needs further work to identify the target edge cloud (with the right credentials maintained in ONAP central) | TBD | Existing OOM functionality. Cloud regions are defined in ONAP-Central as kubeconfigs. Kubeconfigs already contain credentials for communicating with any cloud region - public or private, regardless of ownership model. | N/A |
Instantiation | Support for deploying management applications independent of each other when there are no dependencies (no expectation that all management applications are brought up together). | high | Yes (SO provides APIs at various granularities) | Yes | Existing OOM functionality. Deploy individual applications by setting their 'enabled' flags to true | N/A |
Instantiation | Ability to deploy a few management applications based on VNF instantiation and bring them down when the VNF is terminated | high | Yes (SDC/SO with their bundling approaches - a management app can be added as a VFC in a VNF or as a VNF in a service) | Can be triggered from CLAMP | Add a new service based on topology interface notification to trigger mS deployment | Add a new mS to register for A&AI Kafka events |
Instantiation | Ability to apply configuration (Day0 configuration) of management applications at the time of deployment | high | Yes (SDC supports adding default Day0 configuration of workloads) | Yes | ||||
Instantiation | Support for various Day 0 configuration profiles (e.g. different profiles for different cloud regions with differing capabilities) | high | Yes (SDC supports multiple Day 0 profiles - either through customization or as artifacts in the case of K8S) | Yes (Supported through Policy/DCAE) |
Instantiation | Support for placement of management applications based on platform features (example: GPU, FPGA etc...) | high | Yes (SO can talk to OOF to get the right flavor for workloads in a VFM) | Not currently | Can be enhanced to interface with OOF to determine required placement | |||
Instantiation | Support for consistent Day 0 configuration mechanisms - should use the same path as Day 2. | high | Vijay Venkatesh Kumar | Yes (Work is ongoing in the K8S plugin to ensure that Day 2 configuration is also supported via Helm charts, like Day 0 configuration. This is made possible by microservices supporting K8S operators for their configuration) | Yes |
Run time | Support for Day 2 configuration of single or multiple instances of management applications in various cloud regions | high | Yes (APPC supports Day 2 configuration. Day 2 configuration support in the K8S plugin is ongoing. One can select the cloud region and instance while applying Day 2 configuration) | Yes |
Run time | Support for management applications depending on other management applications - Support for configuration (Day2 configuration) of provider services when the consuming service is being instantiated and removal of the configuration on provider services when consuming service is terminated (Example: When analytics applications are brought up, analytics/collection framework need to be updated with additional configuration such as DB table, Kafka topic etc..) | high | Yes (In case of K8S world, as long as day2 configuration is also supported via K8S resources, it is possible. K8s Plugin does support this) | Supported under current design. Interface with DMAAP BC through plugin being worked for Dublin | ||||
Run time | Support for Day 2 configuration (add/delete) of appropriate management applications upon VNF instantiation/termination (example: configuration of analytics & collection services when VNFs are brought up, and removing the added configuration upon VNF termination) | high | WIP | Functionality supported under the current design, but it does not currently exist in ONAP. | Add a new service based on topology interface notification to trigger mS reconfiguration |
Networking | Secure connectivity between central ONAP and management applications in cloud regions | high | Yes (Using SSL/TLS) | Supported via securing DMAAP topics | ||||
Networking | Support for various connectivity protocols (Kafka, HTTP 1.1, 2.0, gRPC, Netconf, etc.) between ONAP-Central and management components in cloud regions | high | Yes (No restriction; it depends on the management application) | The interface between central and edge is supported primarily via DMAAP |
Run time | Monitoring and visualization of management applications in cloud regions, along with ONAP components, at ONAP-Central | high | Partial (Same monitoring schemes as available for VNFs, but it is suggested that all management components act as Prometheus targets) | Yes |
Run time | Scale-out of management application components at the cloud regions & traffic (transaction) distribution | high | Yes (but testing is required with a use case to ensure that there are no gaps. This work is slated for Release E - configuration of Istio for L7 workloads and NSM for L2/L3/L4 workloads. Even though K8S can bring up more instances, traffic distribution must be configured properly) | Yes (relies on k8s) |
Run time | Ability to upgrade management application components without loss of functionality | low | Yes (but testing is required with a use case to ensure that there are no gaps. This work is slated for Release E - configuration of Istio for L7 workloads and NSM for L2/L3/L4 workloads. Avoiding loss of functionality requires careful configuration of traffic rules in Istio or NSM) | Yes (relies on k8s) |
Run time | High availability of management applications in the cloud regions | high | Yes (It is part of K8S) | Yes (relies on k8s) | ||||
Miscellaneous | Support for ONAP-compliant third-party management applications that provide similar functionality to ONAP management applications. | high | Yes (As long as third-party management applications are described using Helm) | Yes (Helm chart deployment will be supported via the new Helm plugin contributed for Dublin; the policy reconfiguration will require complying with onboarding req #1) |
Miscellaneous | Support management applications as containers | high | @Srinivasa Addepalli | Yes (Using K8S plugin) | Yes | |||
Miscellaneous | Support management applications as VMs | low | Yes (Using K8S plugin) | Yes | ||||
Security | Security and privacy aspects of management applications (To be expanded) | high | It is generic requirement and to be taken care outside of this work item | TBA | ||||
Instantiation | Support for mS deployment not bound to any VNF/service; these are service-agnostic applications that can be managed by dynamic configuration rules to support different use cases | Yes (If a management application is not bound to any network function, it can be deployed as a separate VSP) | Yes |
Miscellaneous | Backward compatibility with existing applications based on TOSCA in ONAP | Critical | No | Yes |
Miscellaneous | Single orchestrator for both managed (VNFs/Apps) and management applications that are to be deployed in cloud-regions | low (but highly preferred) | Srinivasa Addepalli | Yes | Can be supported if vnf/app onboarding can be aligned with current management application flow. |
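The onboarding requirement above about "defining the dependency graph of applications belonging to a bundle" implies that whichever orchestrator wins, it must bring applications up in dependency order. A minimal, hypothetical Python sketch of that ordering (a depth-first topological sort; the application names are illustrative, not real ONAP components):

```python
def deploy_order(deps: dict) -> list:
    """Return a bring-up order where every app follows its dependencies.

    `deps` maps each application to the list of applications it depends on.
    Raises ValueError on a dependency cycle.
    """
    order, done, visiting = [], set(), set()

    def visit(app):
        if app in done:
            return
        if app in visiting:
            raise ValueError(f"dependency cycle involving {app}")
        visiting.add(app)
        for dep in deps.get(app, []):
            visit(dep)          # deploy dependencies first
        visiting.discard(app)
        done.add(app)
        order.append(app)

    for app in deps:
        visit(app)
    return order

# Example bundle: an analytics app needs its collector, which needs Kafka.
bundle = {"analytics": ["collector"], "collector": ["kafka"], "kafka": []}
print(deploy_order(bundle))  # ['kafka', 'collector', 'analytics']
```

In the Helm-based option this ordering is expressed via initContainers and hooks, and in the DCAE/TOSCA option via blueprint relationships; the sketch just makes the underlying ordering problem explicit.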
Assumptions
Item | Added by | Modified by | |
---|---|---|
ONAP Management components can only be brought up in cloud-regions that are based on Kubernetes | |||
Architectural Options:
Discussion Kick off:
Various Architectural Options: https://wiki.onap.org/download/attachments/28379482/ONAP-DDF-Distributed-Analytics-framework-v1.pptx?api=v2
Definition of done:
- This activity is closed when there is a:
- Description of alternative concepts for distributing the ONAP functionality.
- A recommendation for which alternatives to pursue (and when).
Expected Timeframe:
This activity is expected to conclude at/before the start of April, 2019 by the ONAP Architecture meeting at ONS.
Definitions:
Conclusion:
Other Deliverables:
LF blog and Architecture white paper during ONS time frame.