BUSINESS DRIVER

Executive Summary - Increasingly, edge locations are becoming K8S based, as K8S can support multiple deployment types (VNFs, CNFs, VMs and containers). This work enables ONAP to deploy workloads in K8S based sites.

Business Impact - This will enable operators to use both OpenStack based sites and K8S based sites to deploy workloads. It also enables the use of common compute resources for both network functions and applications, thereby utilizing compute infrastructure efficiently. Since K8S sites are supported with the same API, superior workload mobility can be achieved.

Business Markets - Applicable across the compute continuum: on-prem edges, network edges, edge clouds and public clouds.

Funding/Financial Impacts - Potential to avoid multiple service orchestrators and multiple infrastructure managers, thereby saving on CAPEX.

Organization Mgmt, Sales Strategies - There are no additional organizational management or sales strategy requirements for this use case beyond a service provider's "normal" ONAP deployment and its attendant organizational resources.

Technical Debt

R4 has many features planned, and a few items may spill over to R5.

...

  • Dynamic route and provider network operator
  • OVN operator
  • ISTIO security
  • Modularity items (a logging/metrics sketch follows this list)
    • Logging (each micro-service is expected to log messages in the format expected by fluentd)
    • Monitoring (each micro-service is expected to expose metrics in the format expected by Prometheus)
    • Tracing (ensure that all HTTP based applications use tracing libraries to enable distributed tracing)
  • Visualization of resource bundles
  • Use case that showcases Day 2 configuration (Kafka or the Collection package of Distributed Analytics as a Service)
  • CLI commands
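
The logging and monitoring expectations above can be illustrated with a minimal sketch in Go. This is not the plugin's actual code; the metric name, port and log format are illustrative assumptions. It only shows a micro-service exposing a /metrics endpoint for Prometheus to scrape while writing JSON-structured log lines that fluentd can parse.

    package main

    import (
        "log"
        "net/http"

        "github.com/prometheus/client_golang/prometheus"
        "github.com/prometheus/client_golang/prometheus/promauto"
        "github.com/prometheus/client_golang/prometheus/promhttp"
    )

    // requestCount is an illustrative counter; the actual metric names
    // exposed by each micro-service are not defined on this page.
    var requestCount = promauto.NewCounter(prometheus.CounterOpts{
        Name: "k8splugin_requests_total",
        Help: "Total number of API requests handled.",
    })

    func main() {
        // /metrics is scraped by Prometheus.
        http.Handle("/metrics", promhttp.Handler())
        http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
            requestCount.Inc()
            // JSON-structured log line so fluentd can parse it.
            log.Println(`{"level":"info","msg":"health check"}`)
            w.WriteHeader(http.StatusOK)
        })
        // Port 9015 is an assumption for illustration only.
        log.Fatal(http.ListenAndServe(":9015", nil))
    }

Tracing would be layered on in the same way, by wrapping the HTTP handlers with the chosen tracing library.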


New requirements coming from various use cases

...

  • SRIOV-NIC support
  • Multi-cluster scheduler support
  • Edge labeling & DaemonSet implementation across edges
  • User manager
  • Cluster labeling
  • Distributed cloud support
  • Multi-tenancy
  • Meta-configuration scheduler
  • Placement support with HPA (if there are multiple edge candidates)
  • HPA support (being taken care of as part of the HPA work)
  • Service coupling using ISTIO, WGRDNSM and OVN SFC for function chaining (PoC item)
  • CLI support for all relevant APIs (applications, resource-bundle definitions, profiles, configuration templates, configs, meta-configs, etc.)
  • Continuous monitoring (using Kubernetes APIs) and updating the DB with the latest status and allocated resources (see the sketch after this list)
  • CLI/GUI support for the status of resources (at the app level, at the resource-bundle level and at each resource level)
  • Integration with CDS
  • Study: Security orchestration (possibly in R7)
  • Study: ETSI-defined container definition
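
For the continuous-monitoring item above, a minimal sketch using client-go shows what watching resource status through the Kubernetes API could look like. The kubeconfig path and namespace are hypothetical; in the real service each event would update the status record kept in the Multi-Cloud database rather than being logged.

    package main

    import (
        "context"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Hypothetical kubeconfig path for one registered cloud-region.
        config, err := clientcmd.BuildConfigFromFlags("", "/etc/k8sconfig/edge01")
        if err != nil {
            log.Fatal(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            log.Fatal(err)
        }
        // Watch pods in the namespace used for the instantiated resource bundle
        // (namespace name is an assumption for illustration).
        watcher, err := clientset.CoreV1().Pods("onap-rb-instance").Watch(context.TODO(), metav1.ListOptions{})
        if err != nil {
            log.Fatal(err)
        }
        for event := range watcher.ResultChan() {
            // Real code would persist the latest status/resources to the DB;
            // here we only log the event type.
            log.Printf("event: %s", event.Type)
        }
    }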


ONAP Architecture impact

None

All the changes are expected to be done in the Multi-Cloud project. We don't expect any changes to the API exposed by Multi-Cloud to SO. Also, no changes are expected in the SDC to Multi-Cloud interface. All the work that was done in SDC and SO will be sufficient for R6.

There are some suggestions from the community to make "K8S workload support" a first-class citizen, which may require changes to SDC and SO, but that is not planned for R6.

A few conceptual differences:

  • In R4/R5, each K8S cluster was expected to be registered as a cloud-region in A&AI by the ONAP admin user. Now, it is expected that each 'distributed cloud' is registered as the cloud-region.
  • In R4/R5, each RB had only one Helm chart. In R6, this is enhanced so that one RB can have multiple Helm charts, with one meta file in the RB describing them (an illustrative layout follows this list). Since the entire RB is represented as a tar file, no code changes are expected in SDC or the SDC client in Multi-Cloud.
  • In R4/R5, there was no concept of a 'deployment intent'. In R6, deployment intents are to be created by the user before instantiating the service/RB.
  • In R4/R5, each profile had only one values.yaml file; now each profile can have multiple values.yaml files, since an RB can have multiple Helm charts (sub-apps).
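
As an illustration of the multi-chart RB and multi-values.yaml profile points, the packages could be laid out roughly as below. The file names (metadata.yaml, manifest.yaml) and the layout are assumptions for illustration only; the actual meta file format is defined by the Multi-Cloud project, not by this page.

    resource-bundle.tar.gz
      metadata.yaml        <- meta file listing the sub-app Helm charts below (name assumed)
      app1/                <- Helm chart for sub-app 1
      app2/                <- Helm chart for sub-app 2

    profile.tar.gz
      manifest.yaml        <- maps each values override to its sub-app (name assumed)
      app1/values.yaml
      app2/values.yaml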

All these conceptual differences are localized to the Multi-Cloud project, and no change is expected in any other project.

R4 Page Link: K8S based Cloud Region Support

