Team:

Lead: 

ramki krishnan

Team:

ramki krishnan, Srinivasa Addepalli, Vimal Begwani, Mike Elliott, Vijay Venkatesh Kumar, Avi Chapnick, Borislav Glozman, Fernando Oliveira, Tal Liron, Margaret Chiosi, ravi rao, Raghu Ranganathan, Michael O'Brien, Xin Miao, Simone Mangiante, Timo Perala, Davide Cherubini, John NG, Seshu Kumar Mudiganti


Others – please add yourself if you are interested

Meetings:

Every week as part of Edge Automation WG – Edge Automation through ONAP

References:

  1. ONAP Dublin Architecture Requirements: https://wiki.lfnetworking.org/display/LN/OPNFV-ONAP+January+2019+Session+Proposals?preview=/8257582/10551784/2019-01%20Dublin%20Architecture%20Requirements-pa1.pptx
  2. DCAE Platform Requirements: https://wiki.onap.org/download/attachments/28379482/DCAE%20Platform%20Requirements.pptx?api=v2

Activity Description:

Starting with Analytics, describe the options and recommendations for distributing management (ONAP etc.) functions.

Problem Statement:

  • Management Workloads
    • Currently, there are multiple orchestrators for management workloads:
      • ONAP Central Management – OOM
      • Analytics Central/Distributed Management – DCAE (ONAP, SP internal, Third Party)
    • There is an opportunity to align these orchestrators, which would be greatly beneficial, especially in a distributed edge environment
  • Managed Workloads (SDC, SO, OOF etc.)
    • Full support for containerized network functions (work in progress)
    • Support for non-network functions (VM and container based), e.g. vProbe, Automation Apps

Solution Direction:

  • Leverage existing capabilities and select among them, or motivate new approaches
  • Management Workload:
    • Align on a single orchestrator solution for all management workloads
  • Managed Workload:
    • Enhance SDC, SO, A&AI, MC etc. to support containerized functions
    • Leverage ONAP for deploying and managing non-network functions
  • Longer-term: 
    • Explore feasibility for orchestration alignment between managed workload and management workload
  • Cloud-Native-foundation: 
    • Leverage K8S (Operators, Custom Resource Definitions etc.) for Distributed Systems Management (see the sketch after this list)
      • Image management – at-scale rolling upgrades
      • Policy/Configuration change – notify only deltas

    • Leverage Istio Service Mesh (Distributed Tracing etc.) for Component Performance Management 
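As a rough illustration of the Operator/CRD bullet above, the sketch below assumes a hypothetical ConfigDelta custom resource (the mgmt.example.org group, ConfigDelta kind and all field names are invented for illustration, not existing ONAP or Kubernetes APIs): a central controller publishes only the changed configuration keys, and an operator in each edge cluster merges them into the target application's configuration.

```yaml
# Hypothetical CRD for delta-only configuration distribution (illustrative names).
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: configdeltas.mgmt.example.org
spec:
  group: mgmt.example.org
  names:
    kind: ConfigDelta
    plural: configdeltas
  scope: Namespaced
  versions:
    - name: v1alpha1
      served: true
      storage: true
---
# One delta notification: only the keys that changed are shipped to the edge,
# where an operator/controller merges them into the running application's config.
apiVersion: mgmt.example.org/v1alpha1
kind: ConfigDelta
metadata:
  name: ves-collector-policy-update
spec:
  target: dcae-ves-collector        # illustrative target application
  changes:
    kafkaTopic: measurements-edge-1
    reportingIntervalSec: "30"
```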

Architectural Deployment Scenarios to consider:

Management Workloads

Deployment Model

Edge using certain ONAP management workload functions as an Offload

Note: In this context, Offload is the process of moving certain functions from central to edge locations to address various requirements such as WAN bandwidth constraints, higher resiliency, real-time service assurance etc.


Description | Architecture | Near-term Priority

Edge and Central Provider are same

  • Allows ONAP Central Controller function to install ONAP SW components (purely ONAP mgmt. based or 3rd party integrated with ONAP mgmt.).
  • This also supports ONAP specific K8S cluster installation.

Priority - High

Rationale:

  • Analytics and closed loop offloads are key edge use cases.

Note: Analytics is currently addressed by a Distributed DCAE Orchestrator based on Cloudify.

Participant Operator Priority

  • AT&T - High; To distribute DCAE services (Analytics, collectors etc.) and other ONAP components for resiliency.
  • Verizon - Medium; Primarily distributed DCAE collectors and data mediation
  • Vodafone - High; Distribute both DCAE and ONAP components

Edge and Central Providers are different

Note: In this case, the Central provider still manages the management functionality running at the edge, but using another operator's infrastructure.

  • Use Existing VPCs (VPC creation out of scope for ONAP)
  • Rest - Same as above.

Note: A Virtual Private Cloud (VPC) provides a dedicated pool of compute/network/storage resources using an infrastructure-as-a-service approach.

Priority - Medium

Vodafone (question): should this model be at a lower priority? For a first phase it's sufficient to enable the case where edge and central providers are the same.

  • Valid point - lowered priority to medium

Managed Workloads

  • Managed workload instantiation is always started by ONAP Central components 
    • If "Edge using certain ONAP management workload functions as an Offload" as described in the previous table, the corresponding workload LCM functions will be taken care of by offloaded ONAP management components 

No change is envisioned in the workload instantiation from a ONAP user perspective. 


Architectural Options:

Discussion Kick off:  https://wiki.onap.org/download/attachments/28379482/ONAP-DDF-Distributed-Analytics-framework-v1.pptx?api=v2

Management Apps as traditional Apps/VNFs - Option 1

Uses the existing infrastructure that is available in ONAP:

  • Use SDC for onboarding management applications if they are independent of a VNF, or make the management app part of the VNF if it needs to be dynamic
  • Use SO to bring up the management app like any other VNF
  • Leverage MC for talking to cloud regions of different technologies (management app as a VM, container or public cloud entity)
  • Leverage OOF to make placement decisions (such as HPA, affinity, anti-affinity)

Extending DCAE Orchestration (OOM-Tosca + OOM helm charts) - Option 2

Uses the existing DCAE orchestration to deploy and manage the lifecycle of all management applications (central and edge).

DCAE orchestration supports deployment of microservices (collectors and analytics services); the primary orchestrator is Cloudify. The current orchestration can be extended to support deployment of Helm charts and TOSCA blueprints, covering a wide range of managed applications/services at multiple sites.

Cloud Native K8S Ecosystem (which includes current OOM helm charts) - Option 3

Cloud Native K8S Ecosystem - https://landscape.cncf.io/

ONAP OOM Project - prescriptive Helm charts for the various ONAP management plane components

Quick Analysis of All Options 

  • Option 1 (uses same infrastructure that is available for deploying VNFs)

    • Option 1 Pros (with respect to other options)
      • Since it uses the same infrastructure as VNFs, any orchestration enhancements made for VNFs come for free for ONAP management applications.
      • ONAP management applications requiring selective placement (based on criteria such as hardware capabilities, latency, distance, affinity and anti-affinity) can be satisfied using OOF.
      • ONAP management applications that have 1:1 correspondence with VNFs can be brought up together.
      • ONAP management applications of different form factors (such as VMs and containers) can be handled.
      • ONAP management applications can be placed in cloud regions with different technologies.
      • Single orchestrator for both managed and management applications.
    • Option 1 Cons:
      • The majority of ONAP management applications today are described using Helm and hence can be deployed easily using Option 1. However, many ONAP management applications are described using TOSCA; they cannot be deployed using Option 1 until TOSCA support is added in SO. Cloudify-TOSCA support is not in the roadmap for Dublin, and the understanding is that Cloudify-TOSCA support in SO may not happen this year.
      • Some ONAP components are not instantiated using OOM; they are instantiated dynamically by the CLAMP framework. Supporting CLAMP-initiated ONAP management application deployment with SO may require significant development.
    • Option 1 Analysis:
      • Any management application that is described in HEAT/Helm and is independent of CLAMP can leverage this option.
      • Since there are applications that are described in Cloudify-TOSCA, it is felt that this option alone cannot satisfy the critical requirement.
  • Conclusion:
    • It was considered pragmatic not to pursue Option 1 further and to synergize Options 2 and 3 to produce a best-of-breed solution.


Requirements and Narrowed-down Solution Options Mapping

Definitions
  • Day 0 configuration: configuration that is applied at the time of VNF instantiation (example: either config-drive, config-init or config-map)
  • Day 2 configuration: ongoing configuration after Day 0 configuration (in the following table, Day 1 configuration for VNFs is treated as Day 2 configuration)
  • Management application:  Can be ONAP component or equivalent component from third parties
The following table maps each requirement to the options. Columns: No. | Category | Requirement Item | Priority | Added by, followed by, for each option, the current status and planned future work (approx. timeline desired):

Management Apps as traditional Apps/VNFs - Option 1

(Not considered further, as it does not satisfy the critical requirement of supporting existing Cloudify-TOSCA based management applications)

Extending DCAE Orchestration (OOM-Tosca + OOM helm charts) - Option 2

Cloud Native K8S Ecosystem (based on OOM) Mapping - Option 3
1. Onboarding (priority: high): Ability to onboard management applications, that are to be deployed in cloud regions, in ONAP-Central. Shall not have any expectation that all management applications are onboarded as a single bundle.

Yes

(Using SDC. SDC allows defining it as a VNF with multiple management applications as VNFCs. SDC allows multiple VNFs in a service, and there can be multiple services)

Supported through DCAE, SDC*, Policy, CLAMP.

Onboarded application artifact stored in DCAE inventory (ready for instantiation)

SDC Design tool enhancement (E release)

Existing OOM functionality.

Same mechanism used to deploy helm charts in ONAP Central is used to deploy helm charts to cloud regions.




N/A
2. Onboarding (priority: high): Ability to compose multiple management applications into one management bundle and to define the dependency graph of applications belonging to a bundle.

Yes

(SDC now supports Helm-based descriptions. It is possible to introduce dependencies via initContainers and Helm hooks)

Supported through DCAE, SDC (DS)


Through Cloudify/Tosca models - different components can be linked and deployed as single entity in dynamic fashion.

SDC Design tool enhancement (E release)

Existing OOM functionality.

Customization of components to deploy is defined in configuration override files (such as cloud-2.yaml).

Dependencies defined in application Helm Charts.
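A minimal sketch of the override-file mechanism described above, assuming illustrative component keys and a hypothetical file name in the style of cloud-2.yaml (the exact keys depend on the OOM chart version in use):

```yaml
# cloud-edge-1.yaml -- hypothetical per-region override file.
# Only the components needed at this edge region are enabled.
global:
  nodePortPrefix: 302            # illustrative global setting
so:
  enabled: false                 # central-only components stay disabled at the edge
sdc:
  enabled: false
dcaegen2:
  enabled: true                  # analytics/collection components offloaded to the edge
msb:
  enabled: true
# Deployed with something like (OOM's helm 'deploy' plugin):
#   helm deploy edge1 local/onap -f cloud-edge-1.yaml
```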

N/A
3. Onboarding (priority: high): Shall have a way to specify licensing options for third party management applications (similar to VNF licensing).

Yes

(SDC has a way to provide licensing information)

TBA
No mechanism currently exists to manage 3rd party management application licenses.
4. Instantiation (priority: high): Ability to deploy management applications in selected cloud regions that are owned by the ONAP operator.

Partially

(SO has the ability to select the cloud region while deploying a VNF, so the same applies to management applications. However, there is no bulk deployment across multiple selected cloud regions; that requires enhancements, and we believe this capability is needed for NFs too)

Supported by design. Work done in Dublin to identify the cloud region as part of the deployment input; k8s plugin enhancement worked as a stretch goal for Dublin

Existing OOM functionality.

Cloud regions are defined in onap-central as kube configs. Iterate over each cloud-region and deploy common components. Can be scripted if desired.
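A sketch of the kube-config-per-region idea above; the cluster names, endpoints and override file names are assumptions for illustration only:

```yaml
# Hypothetical kubeconfig held at ONAP-Central: one context per cloud region.
apiVersion: v1
kind: Config
clusters:
  - name: central
    cluster:
      server: https://central.example.org:6443
  - name: edge-cluster-1
    cluster:
      server: https://edge1.example.org:6443
contexts:
  - name: central
    context:
      cluster: central
      user: onap-admin
  - name: edge-cluster-1
    context:
      cluster: edge-cluster-1
      user: onap-admin
users:
  - name: onap-admin
    user:
      token: REDACTED
current-context: central
# A scripted rollout could then iterate over contexts, e.g.:
#   for ctx in central edge-cluster-1; do
#     helm upgrade --install onap local/onap --kube-context "$ctx" -f cloud-$ctx.yaml
#   done
```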


Question: Should upgrades of cloud regions be under central control or regional control (i.e. via edge stacks such as Akraino)?


5. Instantiation (priority: high): Ability to deploy management applications that are ephemeral (example: Analytics applications).

Yes

(Complete control at the SO. One can terminate the management application bundle at any time)

Existing capability. Supported on both Tosca and Helm.

Existing OOM functionality.

Terminate deployed application by setting 'enabled' flags to false

  • in override file (e.g. cloud.yaml)
  • using --set option
  • using helm undeploy


N/A
6. Instantiation (priority: low): Ability to deploy management applications in selected cloud regions that are not owned by the ONAP operator but with which there is a business relationship (examples: public clouds or edge clouds owned by another organization).

Yes

(Multi-Cloud has ability to deploy workloads in any cloud-region - whether owned by operators or even public clouds as long as right credentials are used)

Supported by design.

Work in Dublin - Target edge cloud (with right credentials maintained in ONAP central) will be provided as input to deployment


Existing OOM functionality.

Cloud regions are defined in onap-central as kube configs. Kube configs already contain credentials for communicating with any cloud region - public, private and regardless of ownership model.

N/A
7. Instantiation (priority: high): Support for deploying management applications independent of each other when there are no dependencies (no expectation that all management applications are brought up together).

Yes

(SO provides API at various granularity)

Existing capability.

Supported under both Tosca and Helm.


Existing OOM functionality.

Deploy individual applications by setting 'enabled' flags to true

  • in override file (e.g. cloud.yaml)
  • using --set option


N/A
8. Instantiation (priority: high): Ability to deploy certain management applications based on VNF instantiation and bring them down when the VNF is terminated.

Yes

(SDC/SO with their bundling approaches - a management app can be added as a VFC in a VNF or as a VNF in a service)

Can currently be triggered from CLAMP.

Functionality can be built as a future enhancement to support triggering based on A&AI events.

Add a new service based on topology interface notifications to trigger MS deployment

There are 2 options.

  1. The management applications that are directly related to instantiation and termination of a Service (VNFs) are an extension of the Service. Therefore it makes sense that the management applications (Helm Charts) under this scenario should remain as part of the Service Orchestration flow.

  2. Alternatively, the Cloud Native approach would be to deploy all needed management applications to the cloud region but with replicaCount=0. Only the specifications for the management applications are stored (in etcd); no other resources are consumed until the replicaCount becomes > 0. This would happen in response to a monitored metric that is set (or increases) when a VNF is deployed. Likewise, the management application would free all consumed resources and return to its dormant state (replicaCount=0) once the monitored metric clears (or decreases) after the VNF is terminated. Horizontal scalability and the Custom Metrics APIs are existing Kubernetes capabilities.
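A rough sketch of the metric-driven scaling idea in point 2, using the Kubernetes HorizontalPodAutoscaler with an external/custom metric. The metric name, target names and metrics adapter are assumptions for illustration; note also that the stock HPA does not scale below one replica, so the replicaCount=0 dormant state described above would need additional support (e.g. a custom controller or a scale-to-zero feature) on top of this:

```yaml
# Hypothetical HPA driven by a custom metric (served e.g. by a Prometheus adapter).
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: edge-analytics-app            # illustrative management application
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: edge-analytics-app
  minReplicas: 1                      # stock HPA minimum; scale-to-zero needs extra machinery
  maxReplicas: 5
  metrics:
    - type: External
      external:
        metric:
          name: active_vnf_instances  # hypothetical metric incremented on VNF deployment
        target:
          type: Value
          value: "1"
```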
N/A
9. Instantiation (priority: high): Ability to apply configuration (Day 0 configuration) of management applications at the time of deployment.

Yes

(SDC supports adding default Day 0 configuration of workloads)

Existing capability

Day 0 configuration can be provided as input by the designer or the operator.



Existing OOM functionality.

Global and application-specific hierarchical configuration may be applied at any time using override files and/or --set commands. An illustrative example of configuration for the Service Orchestrator (so) is sketched below.
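A hypothetical illustration of such an override for the so component; the actual keys depend on the OOM so chart version, so treat the names below as placeholders:

```yaml
# so-overrides.yaml -- illustrative hierarchical override for the so component.
so:
  enabled: true
  replicaCount: 1
  liveness:
    initialDelaySeconds: 120       # example Day 0 value
# Applied at install time (Day 0) or later (Day 2) through the same path, e.g.:
#   helm upgrade onap local/onap -f so-overrides.yaml
#   helm upgrade onap local/onap --set so.liveness.initialDelaySeconds=180
```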

N/A
10. Instantiation (priority: high): Support for various Day 0 configuration profiles (e.g. different profiles for different cloud regions with differing capabilities).

Yes

(SDC supports multiple Day 0 profiles - either through customization or as artifacts in the case of K8S)

Yes

Supported through Policy/DCAE for Tosca work flow.

Helm chart configuration update supported via custom helm plugin


Existing OOM functionality.

Configuration override files are 'profiles' for customizing which applications to deploy and how they will be configured. Any number of override files may be applied to allow for easier management of the configuration.


N/A
11. Instantiation (priority: high): Support for placement of management applications based on platform features (example: GPU, FPGA etc.).

Yes

(SO can talk to OOF to get the right flavor for workloads in a VFM)

Do we need full blown OOF functionality - Note 1?

Does not currently exist for the Tosca workflow.

Helm charts are supported.


Can be enhanced to interface  with OOF to determine required placement

Do we need full blown OOF functionality - Note 1?

Existing OOM functionality.

Node labels are assigned to kubernetes nodes that have special capabilities.


Node Selector support is built into every OOM Helm Chart and allows one to configure deployment onto nodes with a matching label. Kubernetes orchestration ensures that your MS is only deployed on supporting nodes.
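A minimal sketch of the node-label/node-selector mechanism above; the label key and component key are illustrative assumptions, and the exact override structure depends on the chart:

```yaml
# Label the capable node first, e.g.:
#   kubectl label node edge-worker-3 feature.example.org/fpga=true
# Then steer the management application onto matching nodes via its chart values.
dcae-ves-collector:                  # illustrative component key
  nodeSelector:
    feature.example.org/fpga: "true"
```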

N/A

Do we need full blown OOF functionality - Note 1?


12. Instantiation (priority: high): Support for consistent Day 0 configuration mechanisms - should use the same path as Day 2.

Yes

(Work is ongoing in the K8S plugin to ensure that Day 2 configuration is also supported via Helm charts, just like Day 0 configuration. This is made possible by microservices supporting K8S operators for their configurations)

Yes

Supported through Policy/DCAE for Tosca work flow.

Helm chart configuration update supported via custom helm plugin


Existing OOM functionality.

Global and application-specific hierarchical configuration may be applied anytime using override files and/or via --set commands. Applying Day 2 configuration to management applications is no different than applying their Day 0 configuration.


REST APIs or ConfigMaps could be used.

N/A
13. Run time (priority: high): Support for Day 2 configuration of single or multiple instances of management applications in various cloud regions.

Yes

(APPC support for Day2 configuration. Also Day2 configuration support in K8S plugin - Ongoing. One can select cloud-region, instance while applying Day2 configuration)

Yes

Supported through Policy/DCAE for Tosca work flow.

Helm chart configuration update supported via custom helm plugin


Existing OOM functionality.

Global and application-specific hierarchical configuration may be applied anytime using override files and/or via --set commands. Applying Day 2 configuration to management applications is no different than applying their Day 0 configuration.

N/A
14. Run time (priority: high): Support for management applications that depend on other management applications - configuration (Day 2 configuration) of provider services when the consuming service is instantiated, and removal of that configuration from the provider services when the consuming service is terminated (example: when analytics applications are brought up, the analytics/collection framework needs to be updated with additional configuration such as DB tables, Kafka topics etc.).

Yes

(In case of K8S world, as long as day2 configuration is also supported via K8S resources, it is possible. K8s Plugin does support this)

Supported under current design. Interface with DMAAP BC through plugin being worked for Dublin

Existing OOM functionality.

Global and application-specific hierarchical configuration may be applied anytime using override files and/or via --set commands. Applying Day 2 configuration to management applications is no different than applying their Day 0 configuration.


Could make use of Kubernetes Operators to manage incrementally changing configuration that may be applied to edge clouds from the central cloud.

N/A
15. Run time (priority: high): Support for Day 2 configuration (add/delete) of appropriate management applications upon VNF instantiation/termination (example: configuration of analytics & collection services when VNFs are brought up, and removal of the added configuration upon VNF termination). (Option 1: WIP)

Functionality is supported under the current design for the Tosca workflow, but the component does not currently exist in ONAP.


Add new service based on topology interface notification to trigger MS reconfiguration

The Cloud Native approach would be to deploy all needed management applications to the cloud region but with replicaCount=0. Only the specifications for the management applications are stored (in etcd); no other resources are consumed until the replicaCount becomes > 0. This would happen in response to a monitored metric that is set (or increases) when a VNF is deployed. Likewise, the management application would free all consumed resources and return to its dormant state (replicaCount=0) once the monitored metric clears (or decreases) after the VNF is terminated. Horizontal scalability and the Custom Metrics APIs are existing Kubernetes capabilities (see the sketch under requirement 8).


N/A
16. Networking (priority: high): Secure connectivity between central ONAP and management applications in cloud regions.

Yes

(Using SSL/TLS)

Supported via securing DMAAP topics.

The platform supports dynamic topic creation and AAF role assignment using DMAAP DBCL


N/A

This is an application requirement and not a requirement of the Platform Orchestrator.

Application needs to support SSL/TLS.

Or use of Istio or like technologies.
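An illustrative sketch of the Istio approach mentioned above, enforcing mutual TLS for a management namespace (API kinds are from Istio 1.5+; earlier releases used MeshPolicy/Policy, and the namespace name is an assumption):

```yaml
# Require mTLS for all workloads in the management namespace.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: onap-edge              # hypothetical namespace for edge management apps
spec:
  mtls:
    mode: STRICT
---
# Have clients in the mesh use Istio-issued certificates when calling these services.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: onap-edge-mtls
  namespace: onap-edge
spec:
  host: "*.onap-edge.svc.cluster.local"
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
```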


N/A
17. Networking (priority: high): Support for various connectivity protocols (Kafka, HTTP 1.1, 2.0, GRPC, Netconf etc.) between ONAP-Central and management components in cloud regions.

Yes

(No restriction; it is based on the management application)

Interface between Central and edge supported primarily via DMAAP

N/A

This is an application requirement and not a requirement of the Platform Orchestrator.

N/A
18. Run time (priority: high): Monitoring and visualization of management applications in cloud regions, along with ONAP components, at ONAP-Central.

Partial

(Same monitoring schemes as available for VNFs, but it is suggested that all management components act as Prometheus targets)

Cloudify UI provides the relation of the multiple MS (based on node types).

Consul integration supported

Dashboard - Provides unified console for Operation to check/manage deployment of application


Limited to Consul, ELK and Grafana Dashboards. Kubernetes Platform Management UI POC underway - deliverable in El Alto
19. Run time (priority: high): Scale-out of management application components at the cloud regions & traffic (transaction) distribution.

Yes

(but testing is required with a use case to ensure that there are no gaps)

(This work is slated for Release E - configuration of ISTIO for L7 workloads and NSM for L2/L3/L4 workloads)

(Even though K8S can bring up more instances, traffic distribution is expected to be configured properly)

Existing capability

(relies on k8s)


Existing OOM functionality.

Existing Kubernetes horizontal scalability capabilities. Traffic management is achieved using additional open source technologies (e.g. ISTIO).

N/A
20. Run time (priority: low): Ability to upgrade management application components without loss of functionality.

Yes

(but testing is required with a use case to ensure that there are no gaps)

(This work is slated for Release E - configuration of ISTIO for L7 workloads and NSM for L2/L3/L4 workloads)

(Avoiding loss of functionality requires careful configuration of ISTIO or NSM traffic rules)

Existing capability; supported for both Tosca and Helm

(relies on k8s)


Existing OOM functionality.

Rolling upgrade and rollback are existing capabilities of Helm + Kubernetes. Traffic management is achieved using additional open source technologies (e.g. ISTIO).

N/A
21. Run time (priority: high): High availability of management applications in the cloud regions.

Yes

(It is part of K8S)

Existing capability; supported for both Tosca and Helm

(relies on k8s)


Native OOM functionality.
22. Miscellaneous (priority: high): Support for ONAP-compliant third party management applications that provide similar functionality as ONAP management applications.

  • Some of the key aspects of ONAP-compliance include but are not limited to the following - API compatibility, Cloud Native Packaging in ONAP Helm chart format etc.

Yes

(As long as third party management applications are described using Helm)

Yes

(Helm chart deployment will be supported via the new Helm plugin contributed for Dublin; the policy reconfiguration will require complying with onboarding requirement #1)


Native OOM functionality.
23. Miscellaneous (priority: high; added by Srinivasa Addepalli): Support management applications as containers.

Yes

(Using K8S plugin)

Existing capability; supported for both Tosca and Helm
Native OOM functionality.
24. Miscellaneous: Support management applications as VMs.

Yes

(Using K8S plugin)

Capability under the Tosca workflow; allows support for multi-cloud, hybrid containerized and non-containerized environments.
Legacy VM management is achieved using additional open source technologies (e.g. KubeVirt, Virtlet).
25. Security (priority: high): Security and privacy aspects of management applications (to be expanded).
It is a generic requirement and is to be taken care of outside of this work item. TBA
Various levels of security can be achieved for the kubernetes cluster (RBAC, Ingress Controllers) and for TLS encryption using open source technologies such as Istio.
26. Instantiation: Support for MS deployment not bound to any VNF/service; these are service-agnostic applications that can be managed by dynamic configuration rules to support different use cases.

Yes

(If management application is not bound to any network function, this can be deployed as a separate VSP)

Existing functionality under Tosca workflow
Existing OOM functionality.
27. Miscellaneous (priority: critical): Backward compatibility with existing TOSCA-based applications in ONAP. Option 1: No. Option 2: Yes - supports both the Tosca workflow and Helm; both are active among different applications in ONAP.

Yes.

Backwards compatibility must be a top priority as we evolve the platform orchestrator.

The current Policy implementation should be evaluated to understand the use case, so we can potentially evolve to a single orchestrator.
28. Miscellaneous (priority: low, but highly preferred): Single orchestrator for both managed (VNFs/Apps) and management applications that are to be deployed in cloud regions. Option 1: Yes. Option 2: Can be supported if VNF/app onboarding can be aligned with the current management application flow.

A decision was made to maintain separation of concerns:

  • SO manages Service (VNF) Orchestration
  • A single Platform Orchestrator will manage the life-cycle of ONAP Platform Components

Notes
  1. Do we need full blown OOF functionality for management component placement?
    1. Evaluate whether the Minizinc used by CMSO is sufficient
    2. Key Considerations:
      1. Typically, management plane components don't need dynamic placement like workloads do
      2. Minizinc models leverage the optimizer of your choice, with the flexibility of multiple solution options – https://github.com/onap/optf-osdf/tree/master/examples/placement-models-minizinc

Requirements Needing Operator Feedback

Background:

  • The scope here is only for Management Components
  • The same principles apply for Central and Edge Management Components




Requirement | Operator | Near-term (2019) vs Long-term (Beyond 2019) | Operator feedback consolidation | Comments

1. Consolidated view of deployed management component micro-services (in Central and other locations such as Edge) and their relationship for central Orchestrator

AT&T - near-term

Bell Canada - long-term

Orange - long-term

Swisscom - near-term

TIM - near-term

Verizon- near-term

Vodafone - near-term

Near-term

2. For a given application, the configuration management mechanism is the same independent of location (Central and Edge), including dynamic configuration support

AT&T - near-term

Bell Canada - long-term

Orange - long-term

Swisscom - near-term

TIM - near-term

Verizon- long-term

Vodafone - long-term

Long-term

Clarification (added 4/11): The requirement indicates that configuration management at both central and edge should be consistent. Having different mechanisms to handle this at different sites would be an operational issue.

3. In relation to No. 2, retain the capability to integrate with Central ONAP Policy and DMaap BusController (for secure topic/feed)

AT&T - near-term

Bell Canada - long-term

Orange - long-term

Swisscom - near-term

TIM - near-term

Verizon - long-term

Vodafone - long-term

Long-term

Clarification (added 4/11): This is relevant to #6 and should be treated with the same priority.

This is also an existing capability in ONAP and will be required at Day 0 in the target architecture for backward compatibility with existing components (DCAE, Policy)

4. Retain the ability to support TOSCA artifacts distributed from SDC and CLAMP.

AT&T - near-term

Bell Canada - long-term

Orange - long-term

Swisscom - near-term

TIM - near-term

Verizon - long-term

Vodafone - long-term

Long-term

Clarification (added 4/11): This is relevant to #5 and should be treated with the same priority.

This is also an existing capability in ONAP and will be required at Day 0 in the target architecture for backward compatibility with existing components (DCAE, SDC, Policy, CLAMP). It is not mandatory to follow the existing workflow.

With regard to TOSCA support, the relevant standard is ETSI SOL 001.

5. Design flow integration (SDC, DCAE-DS) and standardized configuration modelling support by Central Orchestrator

AT&T - near-term

Bell Canada - near-term

Orange - near-term

Swisscom - near-term

TIM - near-term

Verizon - near-term

Vodafone - near-term

Near-term. Applicable only to management applications (Analytics, 3rd party etc.) which are onboarded after the basic ONAP components are up and running.

6. Dynamic Control Loop flow deployment and support through CLAMP/Policy Integration by Central Orchestrator

AT&T - near-term

Bell Canada - long-term

Orange - near-term

Swisscom - long-term

TIM - near-term

Verizon - long-term

Vodafone - near-term

Near-term

Applicable only to management applications (Analytics, 3rd party etc.) which are onboarded after the basic ONAP components are up and running.

Additional Clarification – If the application is using the ONAP Control Loop flow, then it shall follow the approach suggested by the CLAMP framework.

7. Multi-tenant infrastructure management - Install relevant K8S clusters if they are not already present.

AT&T - near-term (for both a and b)

Bell Canada - N/A - not an ONAP accountability.

Orange - No. Not for ONAP purposes, according to us.

Swisscom - not in ONAP scope

TIM - near-term

Verizon - near-term

Vodafone - near-term

Tie - 4 near-term, 3 No

Clarification – This also includes installation of other relevant VIMs (OpenStack etc.) besides K8S if needed.

Can use open source K8S cluster API effort (https://github.com/kubernetes-sigs/cluster-api) as applicable to bring up K8S clusters on various Clouds (Amazon, Azure, OpenStack, VMware etc.)



8. Support for non-K8S based workload deployment


AT&T - near-term

Bell Canada - long-term

Orange - long-term

Swisscom - long-term

TIM - near-term

Verizon - near-term

Vodafone - long-term

Long-term

Clarification – This is the ability for ONAP to deploy applications across heterogeneous environments (K8S, OpenStack etc.)



Consensus on Requirements (Arch. sub committee presentation/discussion - 04/16/2019)


Definition of done:

  • This activity is closed when there is a:
    • Description of alternative concepts for distributing the ONAP functionality.
    • A recommendation for which alternatives to pursue (and when). 

Expected Timeframe:

 This activity is expected to conclude at/before the start of April, 2019 by the ONAP Architecture meeting at ONS. 

Definitions: 


Conclusion: 


Other Deliverables:

LF blog and Architecture white paper during ONS time frame.



16 Comments

  1. I would be willing to participate as this gets fleshed out

  2. I think the discussion by some is that the non-ONAP managed edge is higher priority than the ONAP managed edge because that is what is out there today - AWS, Microsoft, OTT players and maybe even Akraino


  3. Thanks Margaret. Breaking this into two areas

    • NFV (includes relevant management applications) - Edge using ONAP NFV Orchestration- High priority?
    • App –  Edge using ONAP Orchestration - Medium priority?

    BTW, Akraino is an integration project which leverages ONAP, MobilEdgex etc.


  4. Ramki: if we have the information on this topic in this wiki but we have the calls under edge automation - won't it be confusing on how people find this meeting or find the information? I rather change the title of the edge automation call to this title and then everything is in one place and names are consistent.

  5. Hi Margaret,

    Understand the potential confusion. Edge automation (Edge Automation through ONAP) is handling functional requirements for Dublin besides this task force. One way to manage this is to change the task force title as follows: "Edge Automation Through ONAP - Distributed Management (ONAP etc.) components" so that wiki searches are not messed up. Thoughts?

    Thanks,

    Ramki


  6. Dear All,

    Added more details in the table and also separated out NFV orchestration and App orchestration -- https://wiki.onap.org/display/DW/Distributed+Management+%28ONAP+etc.%29+components.

    Wondering if it would be worthwhile getting feedback from key operator participants especially on the prioritization of various deployment scenarios. To reduce company bias, we could have one entry per company. Thoughts?

    Thanks,

    Ramki

  7. Hi Ramki,

    For Edge using ONAP Orchestration ,  it says that ONAP central is responsible to install ONAP orchestrator and also K8s cluster SW on the edge:

    1. This somewhat overlaps with edge platforms such as Akraino, which define the specific edge platform installed components, including the K8s cluster, ONAP SW etc.
    2. The ONAP edge orchestrator should be pre-registered in ONAP central, which will then allow ONAP central to communicate with the edge orchestrator during service instantiation where the service requires specific capabilities (resource, operational etc.) that are available via this registered edge orchestrator.

    Thanks

    Avi, 


    1. Hi Avi,

      Thanks.

      I left the description a little broad – "Can install ONAP SW components via other mechanisms".

      I like your description which brings more clarity – "The ONAP edge orchestrator should be pre-registered in ONAP central, which will then allow ONAP central to communicate with the edge orchestrator during service instantiation where the service requires specific capabilities (resource, operational etc.) that are available via this registered edge orchestrator."

      Will modify the table accordingly.

      Thanks,

      Ramki

  8. Hi Ramki - Under requirements captured for distributed management application, pls consider the requirement to provide consistent configuration management (api) for deployed MS/applications.

  9. Hi Vijay,

    Thanks for bringing this up. This is covered in the latest table.

    Thanks,

    Ramki


  10. "Support for ONAP-compliant third party management applications that provide similar functionality as ONAP management applications (Modularity)"

    This looks like a very broad definition that may be hard to fully address. ONAP compatibility can be at the top layer API, some ONAP module APIs (like ETSI SOL003/SOL005 that are supported by SO, VF-C), or some internal APIs that are not even fully documented. Perhaps it is best to limit the scope of this functionality and choose 1-2 types of compatible management applications.

    1. Hi Ranny,

      I agree, Internal/external API compatibility (I assume also the NB APIs are included - such as TMF 6xx, etc) is critical to enable Modularity. I think this should be added to that requirement. I would also agree that "third party applications that provide similar functionality as ONAP applications" is equally important. I think both requirements should be included.

  11. Hi Ranny, Davide,

    Thanks for your comments.

    Please let me know if the following is a good summary for the change in the current text.

    • Support for ONAP-compliant third party management applications that provide similar functionality as ONAP management applications.
    • Some of the key aspects of ONAP-compliance include but are not limited to the following - API compatibility, Cloud Native Packaging in ONAP Helm chart format etc.

    Thanks,

    Ramki



    1. I believe this is a good start (wink)

  12. The following was stated in the summary notes:  "It is also an understanding that Cloudify-TOSCA support in SO may not happen this year".  Pending the development of detailed requirements, the El Alto release will include formal integration of Cloudify into the SO via the VNF adapter mechanism originally contributed by AT&T.