BUSINESS DRIVER

Executive Summary - Increasingly, edge locations are becoming K8S-based, since K8S can support multiple deployment types (VNFs, CNFs, VMs, and containers). This work enables ONAP to deploy workloads in K8S-based sites.

Business Impact - This will enable operators to deploy workloads in both OpenStack-based and K8S-based sites. It also enables the use of common compute resources for both network functions and applications, thereby utilizing compute infrastructure efficiently. Since all K8S sites are supported with the same API, superior workload mobility can be achieved.

Business Markets - Applicable across the compute continuum: on-prem edges, network edges, edge clouds, and public clouds.

Funding/Financial Impacts - Potential to avoid multiple service orchestrators and multiple infrastructure managers, thereby saving significantly on CAPEX.

Organization Mgmt, Sales Strategies - There are no additional organizational management or sales strategies for this use case outside of a service provider's "normal" ONAP deployment and its attendant organizational resources.

Technical Debt

R4 has many features planned, and a few items may spill over to R5.

Some of the features being delivered in R4 (for recollection):

  • K8S-based cloud region support
  • Deployment of VMs and container-based workloads
  • VM and container description in Helm
  • Support for multiple resource types, including Deployment, Pod, Service, ConfigMap, StatefulSet, CRDs, etc.
  • Support for multiple profiles, where a given resource bundle definition can be deployed multiple times
  • Support for Day 2 configuration of each individual profile/instance
  • Networking:
    • Support for dynamic and multiple networks
    • Ability to place a Pod in multiple networks (see the sketch after this list)
    • Initial support for OVN for data networks
    • Provider network support (using OVN)
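
As a rough illustration of placing a Pod in multiple networks, the snippet below uses a Multus-style annotation. This is a minimal sketch and an assumption: it presumes Multus (or a compatible meta-plugin) is installed and that NetworkAttachmentDefinitions named ovn-net1 and ovn-net2 already exist; the exact annotation and CRDs used by the ONAP OVN integration may differ.

    # Minimal sketch, assuming Multus is installed and the two named
    # NetworkAttachmentDefinitions (ovn-net1, ovn-net2) already exist.
    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: multi-net-pod
      annotations:
        # Multus-style request for two additional network attachments
        k8s.v1.cni.cncf.io/networks: ovn-net1,ovn-net2
    spec:
      containers:
      - name: app
        image: busybox
        command: ["sleep", "3600"]
    EOF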

Some features that are postponed to R5 are:

  • Dynamic route and provider network operator
  • OVN operator 
  • ISTIO security
  • Modularity work:
    • Logging (each micro-service is expected to log messages as expected by Fluentd)
    • Monitoring (each micro-service is expected to expose metrics as expected by Prometheus)
    • Tracing (ensure that all HTTP-based applications use tracing libraries to enable distributed tracing)
  • Visualization of resource bundles
  • A use case that showcases Day 2 configuration (Kafka, or the Collection package of Distributed Analytics as a Service)


New requirements coming from various use cases

(Most of these requirements come from the big data AI platform use case.)

  • A way to deploy apps/services that span across multiple clusters.
  • Day2 configuration control of workloads at the app/service level as a transaction
  • Dependency graph (DAG) for deploying workloads across multiple clusters
  • Bulk deployment of apps/services in multiple clusters.
  • Function chaining
  • Multi-tenant management (namespaces, users, etc.)
  • Edge DaemonSet via labeling (so the scheduler knows which kinds of apps/services to deploy without user intervention)

Functional requirements

  • SRIOV-NIC Support
  • Multi-Cluster support
  • Cluster-Labeling 
  • Distributed Cloud support
  • Multi-tenancy
  • Placement support (if there are multiple edge candidates) with HPA
  • Service Coupling using ISTIO and WGRD
  • CLI support for all relevant APIs (applications, resource-bundle definitions, profiles, configuration templates, configs, meta-configs, etc.)
  • Continuous monitoring (using Kubernetes APIs) and updating the DB with the latest status and allocated resources (see the sketch after this list)
  • CLI/GUI support for the status of resources (at the app level, at the resource-bundle level, and at each individual resource level)
  • Integration with CDS
  • Study: security orchestration (possibly in R7)
  • Study: ETSI-defined container definition
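
As a rough illustration of the continuous-monitoring requirement above: the MC-K8S service would track resource status through the Kubernetes watch API, which is the same mechanism the CLI exposes. A minimal sketch, assuming the deployed RB instance labels its resources with an illustrative release label:

    # Watch the Pods of one deployed RB instance as their status changes.
    # The label selector is an assumption; actual labels are set at instantiation.
    kubectl get pods -l release=my-rb-instance --watch -o wide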


ONAP Architecture impact

None

All the changes are expected to be confined to the Multi-Cloud project. We do not expect any changes to the API that Multi-Cloud exposes to SO, and no changes are expected to the SDC-to-Multi-Cloud interface. All the work that was done in SDC and SO will be sufficient for R6.

There are some suggestions from the community to make "K8S workload support" a first-class citizen, which may require changes to SDC and SO, but that is not planned for R6.

A few conceptual differences:

  • In R4/R5, each K8S cluster is expected to be registered as a cloud-region in A&AI by the ONAP admin user. In R6, each 'distributed cloud' is expected to be registered as the cloud-region.
  • In R4/R5, each RB has only one Helm chart. In R6, this is enhanced so that one RB can contain multiple Helm charts, described by a single meta file in the RB (a sketch follows after this list). Since the entire RB is represented as a tar file, no code changes are expected in SDC or in the SDC client in Multi-Cloud.
  • In R4/R5,  there is no concept of 'Deployment intent'.  In R6, deployment intents are to be created by the user before instantiating the service/RB.
  • In R4/R5, each profile has only one values.yaml file. In R6, each profile can have multiple values.yaml files, since an RB can contain multiple Helm charts (sub-apps).
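
For illustration, a minimal sketch of a multi-chart RB layout and its meta file. The directory layout and the meta-file field names below are invented for this sketch; the authoritative schema is whatever the Multi-Cloud K8S plugin defines.

    # Hypothetical layout of a multi-chart RB tar:
    #   my-rb/
    #     meta.yaml
    #     firewall/   <- Helm chart (sub-app)
    #     sink/       <- Helm chart (sub-app)
    mkdir -p my-rb
    cat > my-rb/meta.yaml <<'EOF'
    # Illustrative meta file listing the sub-apps in this RB
    apps:
      - name: firewall
        chart: firewall
      - name: sink
        chart: sink
    EOF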

All these conceptual differences are localized to Multi-Cloud project and no change is expected in any other project.

R4 Page Link: K8S based Cloud Region Support


Attachments:
  • CloudNativeNFVONAP_R6_LFN_DDF.pptx (Jun 17, 2019, Srinivasa Addepalli)
  • ONAP-Security-Management.pptx (Aug 23, 2019, Srinivasa Addepalli)
  • Multi Cluster Orchestration_v2.pptx (Sep 05, 2019, Srinivasa Addepalli)
  • EdgeSecureOverlayNetworks.pptx (Nov 05, 2019, Srinivasa Addepalli)



5 Comments

  1. Srinivasa Addepalli - I have a few questions on the R4 continuation. AFAIK, in R4 a user can import Helm charts as artifacts in SDC, and the same get passed to K8S via the MC plugin. Do you have plans to include concepts like CNF design/editing in SDC, service design using VNFs and CNFs in SDC, and the corresponding runtime enhancements, in R6 and beyond?

  2. Adding Kiran Kamineni, Ritu Sood, Eric Multanen, and Lukasz Rajewski to the thread.

    Integration with CDS is being led by Lukasz.  That is for Day 0 configuration for now. You may want to touch base with Lukasz.

    When you say CNF design/edit in SDC, what do you have in mind?

    We understand that Day0 configuration can help in modifying some parameter values, but it may not be sufficient. In R6, we plan to come out with extensible RB scheduler functionality in the MC-K8S service, which allows the introduction of new MC-K8S controllers. We intend to add one such controller, a "Resource Update controller". This controller allows per-deployment updates of K8S resources: adding new resources, deleting existing resources from the RB, and modifying existing resources. For example, a vendor firewall Helm chart may by default contain only two networks. If the firewall is to work on multiple networks for a given deployment use case, then it is necessary to include a network resource and update the deployment/pod specification with a new interface on that network. The "Resource Update controller" is expected to do those kinds of deployment-specific updates. Currently, this controller is of lower priority, but if engineering resources permit, we would like to do it. I will let Kiran Kamineni talk more about this.

    Is that what you are looking for? Let us know.

    Srini



  3. Thanks Srinivasa Addepalli. Here is what I was looking for.

    • Ability to do CNF design / POD design - rather than importing a Helm chart, just as there is a path to design a VF from scratch, I was looking for the ability to design a POD and its associated aggregation in SDC.
    • Ability to edit an incoming Helm chart - this is needed because, during service design, we will change the internal structure of the POD to match the overall design of the service.
    • Ability to choose an individual Helm chart and reuse it as and when needed for a larger service design.
    • Ability to override values.yaml per chart with default design-time values, and also to override per-environment values/config maps at runtime (maybe via a VID/UUI-style portal).

    While the above are some of the requirements I see at design time, I still have questions about how we are going to deal with the inventory, policy, and service assurance aspects when it comes to containers.

    From what I understood, current development on containers in ONAP has focused only on importing a Helm chart, having SO deploy it via the MC-K8S service, dynamically injecting values.yaml overrides, and dictating network plugins. Beyond that, what are the resulting interfaces/calls to AAI, POLICY, DCAE, etc.?

    I would appreciate it if you could shed some light on the above aspects.

    BR,

    Viswa

    1. CC: Kiran Kamineni, Ritu Sood, Eric Multanen

      Hi Viswanath Kumar Skand Priya

      On "Ability to do CNF Design / POD Design" and "Ability to choose individual helm chart and reuse it as & when needed for a larger service design":

      We have mainly been working on the lower-level backend and APIs to onboard an application (RB) consisting of multiple Helm charts (note that in R5 only one Helm chart is possible, but in R6 one RB can consist of multiple Helm charts). Even though it would be nice to have a designer tool to create the RB, we feel that creating an RB is very simple and possible with basic Linux tools. One can select the Helm charts, put them in a directory, create a meta file, and create a tar file out of it. The same can then be onboarded using the APIs provided by ONAP4K8S, or one can create a CSAR file with the tar file and onboard it using SDC. For example, say one wants to build an application (RB) using a MariaDB Helm chart, a Vault Helm chart, a MongoDB Helm chart, and a business application Helm chart: they simply copy the Helm charts into a directory on their PC, tar them up, and use that tar file to onboard. It is as simple as that (a rough sketch follows below).
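
      A rough sketch of that flow in plain shell. The API paths are modeled on the R4/R5 k8splugin v1 API and should be treated as assumptions; verify them against the current Multi-Cloud API documentation. ${MCK8S} stands for the plugin's service endpoint (host:port).

        # Assemble the RB: charts plus a meta file (sketched earlier) in one
        # directory, then tar it up.
        mkdir -p my-rb
        cp -r mariadb vault mongodb business-app my-rb/
        tar -czf my-rb.tar.gz -C my-rb .

        # Onboard: create the RB definition, then upload the tar as its content.
        # Paths below are assumptions based on the R4/R5 k8splugin v1 API.
        curl -X POST "http://${MCK8S}/v1/rb/definition" \
          -H "Content-Type: application/json" \
          -d '{"rb-name": "my-rb", "rb-version": "v1", "description": "demo RB"}'
        curl -X POST "http://${MCK8S}/v1/rb/definition/my-rb/v1/content" \
          --data-binary @my-rb.tar.gz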

      We would be happy to get VZ contributions in this area, simplifying RB creation via SDC or via some GUI/CLI.


      On "Ability to edit incoming helm chart" and "Override values.yaml per chart":

      We expect that there will be a need to change the Helm charts for some specific deployments. As I mentioned in my earlier post, we intend to provide a 'deployment intent' API to add/modify/delete Helm chart resources.

      If there is no change to the Helm chart itself, but a deployment requires different values for various chart parameters, we provide APIs to upload a values file (see the example below).
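
      For example, uploading a profile that carries override values might look roughly like this. The endpoint paths are again modeled on the R4/R5 k8splugin v1 API and are assumptions, as are the profile, release, and directory names.

        # Create a profile for the RB, then upload its content: a tar holding
        # the override values.yaml files (one per sub-chart in R6).
        curl -X POST "http://${MCK8S}/v1/rb/definition/my-rb/v1/profile" \
          -H "Content-Type: application/json" \
          -d '{"rb-name": "my-rb", "rb-version": "v1", "profile-name": "edge-a", "release-name": "demo", "namespace": "testns"}'
        tar -czf profile.tar.gz -C edge-a-profile .
        curl -X POST "http://${MCK8S}/v1/rb/definition/my-rb/v1/profile/edge-a/content" \
          --data-binary @profile.tar.gz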


      On "Beyond that, what is the resulting interface / calls to AAI, POLICY, DCAE etc?":

      As I mentioned before, Lukasz Rajewski and others are planning to integrate CDS with the K8S support.

      TM (Milind Jalwadi and his team) is working on uploading the status of K8S-based RB deployments to A&AI.

      Eric Multanen has already made vFWCL (vFirewall Closed Loop, which involves Policy and DCAE) work with K8S, though with some hardcoded pieces.

      Once what TM is planning is completed, the closed loop will work just as it does in an OpenStack environment.


  4. I look forward to seeing how this evolves. Please try to use "K8s" instead of "K8S". https://github.com/cncf/foundation/blob/master/style-guide.md