...

  • Sharing the networks across VMs and containers.
  • Sharing the volumes across VMs and containers.


Proposal 1 & Proposal 2 feedback from the community

Please see the slides attached in the Slides/Links section for detailed information on Proposal 1 and Proposal 2.

In summary:

Proposal 1:  All the orchestration information is represented as TOSCA-based service templates.  There are no cloud-technology-specific artifacts.  All information about VNFs, VDUs, VLs and Volumes is represented as per the ETSI SOL specifications.  In this proposal, ONAP, with the help of Multi-Cloud, translates the TOSCA representation of the VNF/VDU/VL/Volume into cloud-technology-specific API information before issuing cloud-technology-specific API calls.

Proposal 2:  In this proposal, the service orchestration information is represented in TOSCA grammar, but the majority of the VNFD/VDU/VL/Volume information is represented in cloud-technology-specific artifacts.  These artifacts are part of the VNF portion of the CSAR.  Since the cloud region to be selected is unknown during VNF onboarding, multiple cloud-technology-specific artifacts would need to be part of the CSAR; the right artifact is chosen at run time by ONAP.  Cloud-technology-specific artifacts are: HOT in the case of OpenStack, ARM in the case of Azure, K8S deployment information in the case of K8S, etc.
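As a concrete illustration of the kind of cloud-technology-specific artifact Proposal 2 carries in the CSAR, a minimal K8S Deployment manifest for a hypothetical VDU might look like the sketch below. All names and the image reference are illustrative, not part of any specification; ONAP treats the file as opaque.

```yaml
# Hypothetical K8S deployment artifact carried in the VNF portion of the CSAR.
# Only the K8S-capable cloud region interprets it; ONAP passes it through.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-vdu              # illustrative VDU name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-vdu
  template:
    metadata:
      labels:
        app: example-vdu
    spec:
      containers:
      - name: vnf-container
        image: example.org/vnf-image:1.0   # vendor-supplied VNF image (placeholder)
        resources:
          requests:
            cpu: "2"
            memory: 4Gi
```

An equivalent HOT template would play the same role for an OpenStack region, and an ARM template for Azure; ONAP selects the matching artifact once the cloud region is known.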

Though Proposal 1 is the ideal, Proposal 2 is chosen in this POC for pragmatic reasons.  A few of those reasons are given here:

  • The TOSCA way of representing a Kubernetes deployment is not popular today.  Many container vendors are more comfortable providing K8S deployment YAML files or Helm charts.
  • Current TOSCA standards don't expose all the capabilities of K8S.  TOSCA-based container standards could take a long time to be standardized and agreed upon by the community, but there is a need now to support K8S-based cloud regions.
  • The K8S community is very active, and the features supported by various K8S providers can differ from one another.  VNF vendors are aware of this and create VNF images that take advantage of those features.  It is felt that ONAP should not hinder such deployments.

Project Description:


The effort will investigate/drive a way to allow ONAP to deploy and manage Network Services/VNFs over cloud regions that support the K8S orchestrator.

...

Though it may identify changes required in the VNF package, SDC, OOF, SO and Modeling, it will not try to enhance those projects as part of this effort.

This project is described using its API capabilities and functionality.

  • Create a Multi-Cloud plugin service that interacts with Cloud regions supporting K8S
    • VNF Bring up:
      • API: Exposes an API to upper layers in ONAP to bring up a VNF.
        • Currently Proposal 2 (please see the attached presentation referenced in the Slides/Links section) seems to be the choice.
        • Information expected by this plugin:
          • K8S deployment information (in a form understood by K8S), which is opaque to the rest of ONAP.  This information is normally expected to be provided as part of VNF onboarding in the CSAR.
            • TBD - Is this artifact passed to Multi-Cloud as a reference, or is it passed as immediate data from the upper layers of ONAP?
          • Metadata information collected by upper layers of ONAP
            • Cloud region ID
            • Set of compute profiles (One for each VDU within the VNF).
            • TBD - Is there anything else to be passed?
      • Functionality:
        • Instantiate VNFs that consist only of VMs.
        • Instantiate VNFs that consist only of containers.
        • Instantiate multiple VNFs (some realized as VMs, some as containers) that communicate with each other over the same networks (the external connection points of various VNFs could be on the same network).
        • A reference to the newly brought-up VNF is stored in A&AI (needed when the VNF has to be brought down or its deployment modified).
        • TBD - Should it populate A&AI with a reference to each VM and container instance of the VNF, or is one reference to the entire VNF instance good enough? Assuming there is a need to store a reference to each VM/container instance in A&AI, some exploration is required to see whether this information is made available by the K8S API or whether the plugin should watch for events from K8S.
        • TBD - Is there any other information this plugin is expected to populate in the A&AI DB (e.g., the IP address of each VM/container instance)?
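The bring-up request described above could be sketched as the payload below. The endpoint and field names are purely hypothetical placeholders, since the actual API (and whether the artifact is passed by reference or by value) is still an open TBD.

```yaml
# Hypothetical bring-up request to the Multi-Cloud K8S plugin.
# All field names are illustrative; the real API is TBD.
vnf_instance_name: example-vnf-01
cloud_region_id: k8s-region-1          # selected by the upper layers of ONAP
deployment_artifact: csar/artifacts/k8s/example-vnf.yaml  # reference vs. immediate data is an open TBD
compute_profiles:                      # one profile per VDU within the VNF
  - vdu_id: vdu-1
    cpu: "4"
    memory: 8Gi
  - vdu_id: vdu-2
    cpu: "2"
    memory: 4Gi
```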
    • VNF Bring down:
      • API: Exposes an API to upper layers in ONAP to terminate a VNF.
      • Functionality: Based on the request coming from the upper ONAP layers, it terminates the VNF that was created earlier.
    • Scaling within VNF:
      • It leaves the decision of scaling out and scaling in the services of the VNF to the K8S controller at the cloud region.
      • TBD - How will configuration life-cycle management be taken care of?
        • Should the plugin watch for new replicas being created by K8S and inform APPC, which in turn sends the configuration?
        • Or should the new instance that is brought up talk to APPC (or something else) to get the latest configuration?
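Leaving scale-out/scale-in to the cloud region can be expressed, for example, with a standard K8S HorizontalPodAutoscaler attached to the VDU's Deployment (names are illustrative); ONAP then never issues per-replica commands.

```yaml
# Standard K8S HorizontalPodAutoscaler; the K8S controller at the cloud
# region adds and removes replicas without ONAP involvement.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: example-vdu-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-vdu          # illustrative Deployment name
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70
```

Each replica created this way is exactly the case the configuration life-cycle TBD above refers to: something must still configure the new instance.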
    • Healing & Workload Movement (Not part of Casablanca)
      • No API is expected as it is assumed that K8S master at the cloud region will take care of this.
      • TBD - Is there any information to be populated in A&AI when healing or workload movement occurs at the cloud region?
    • VNF scaling: (Not part of Casablanca)
      • API :  Scaling of entire VNF 
        • Similar to VNF bring up.
    • Create Virtual Link:
      • API: Exposes an API to create a virtual link.
        • Metadata
        • Opaque information (since OVN + SRIOV are chosen, the opaque information passed to it is amenable to creating networks and subnets as per the OVN/SRIOV controller capabilities)
        • A reference to the newly created network is added to A&AI.
        • If the network already exists, it is expected that its use count is incremented.
      • Functionality:
        • Creates the network if it does not exist.
        • Using the OVN/SRIOV CNI API, it will populate remote DHCP/DNS servers.
        • TBD - Need to understand the OVN controller and SRIOV controller capabilities and figure out the functionality of this API in this plugin.
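One plausible realization of a virtual link on a K8S cloud region is a Multus NetworkAttachmentDefinition whose CNI config delegates to the OVN (or SRIOV) CNI. The sketch below is an assumption about how the opaque network information might look, not a settled design; in particular the CNI `type` and the config keys are placeholders.

```yaml
# Hypothetical virtual-link artifact: a Multus NetworkAttachmentDefinition
# delegating to an OVN-based CNI. The CNI type and config keys are assumptions.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: example-virtual-link
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "ovn-cni",
      "subnet": "10.10.0.0/24",
      "gateway": "10.10.0.1"
    }
```

Pods attaching to this network would reference it by name in their annotations, which is how external connection points of different VNFs could land on the same network.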
    • Delete Virtual Link:
      • API: Exposes an API to delete a virtual network.
      • Functionality:
        • If there are no references to this network (use count is 0), it deletes the virtual network using the OVN/SRIOV controllers.
    • Create persistent volume
      • Create a volume that needs to persist across the VNF life cycle.
    • Delete persistent volume
      • Delete volume
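On a K8S region, a persistent volume that outlives the VNF life cycle maps naturally onto a standard PersistentVolumeClaim (with a Retain reclaim policy on the backing volume). This is a sketch under that assumption, not a committed design; the storage class name is a placeholder.

```yaml
# Standard K8S PersistentVolumeClaim; storageClassName is illustrative.
# The backing PersistentVolume would use a Retain reclaim policy so the
# data survives VNF termination.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-vnf-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: example-storage-class
```

Deletion of the persistent volume would then correspond to removing this claim (and, when the use count reaches zero, the backing volume).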

...