
...

As a starting point, this effort started as a small subgroup of Multi-Cloud, operating as a task force. As the effort evolves, the logistics will be revised; the task force may be promoted to an independent group or an independent project.


Meetings

...


Version 2 of slides are here:

K8S_for_VNFs_And_ONAP_Support_v2.pptx

...

K8s Plugin progress slide: 

Kubernetes.pptx

Project (This is a subproject of Multi-Cloud, as decided by the Architecture subcommittee & the Multi-Cloud team)

...

  • Create a Multi-Cloud plugin service that interacts with Cloud regions supporting K8S
    • VNF Bring up:
      • API: Exposes API to upper layers in ONAP to bring up VNF.
        • Currently Proposal 2 (see the attached presentation referenced in the Slides/Links section) seems to be the choice.
        • Information expected by this plugin:
          • K8S deployment information (in the form understood by K8S), which is opaque to the rest of ONAP. This information is normally expected to be provided as part of VNF onboarding (in the CSAR), and some information (variable values) is created as part of every instantiation.
            • TBD - Is this artifact passed to Multi-Cloud as a reference, or is it passed as immediate data from the upper layers of ONAP?
            • Note that there could be multiple artifacts for a given deployment. For example, EdgeXFoundry requires multiple deployment yaml files (one for each type of container) and multiple service yaml files. Since there can be many of them, the order in which they are executed is important; hence the priority order must be given in a separate yaml artifact, which is interpreted only by the K8S plugin.
            • Supported K8S yaml artifacts - Deployment (POD, Daemonset, Stateful set), Service, Persistent volume. Others are for future releases.
            • Kubernetes templates (artifacts) and variables:
              • Since many instances of deployments can be instantiated using the same CSAR, they must be brought up in different namespaces. The namespace name is expected to be part of the variables (and is different for different instances).
              • Any template value starting with $... is replaced from the variables (see the sketch after this list).
          • Metadata information collected by upper layers of ONAP
            • Cloud region ID
            • Set of compute profiles (One for each VDU within the VNF).
            • TBD - Is there anything else to be passed?
      • Functionality:
        • Instantiate VNFs that only consist of VMs.
        • Instantiate VNFs that only consist of containers
        • Instantiate multiple VNFs (some VNFs realized as VMs and some VNFs realized as containers) that communicate with each other on same networks (External Connection Points of various VNFs could be on the same network)
        • Reference to the newly brought up VNF is stored in A&AI (needed when the VNF is brought down or the deployment is modified).
        • TBD - Should it populate A&AI with a reference to each VM and container instance of the VNF, or is one reference to the entire VNF instance good enough? Assuming references to each VM/container instance are needed in A&AI, some exploration is required to see whether this information is available via the K8S API or whether the plugin should watch for events from K8S.
        • TBD - Is there any other information that this plugin is expected to populate in the A&AI DB (IP address of each VM/container instance, anything else)?
    • VNF Bring down:
      • API: Exposes API function to upper layers in ONAP to terminate VNF
      • Functionality: Based on the request coming from upper ONAP layer, it will terminate the VNF that was created earlier.
    • Scaling within VNF:
      • It leaves the decision of scaling-out and scaling-in of services of the VNF to the K8S controller at the cloud-region.
      • (TBD) - How will configuration life cycle management be taken care of?
        • Should the plugin watch for new replicas being created by K8S and inform APPC, which in turn sends the configuration?
        • Or should we let the new instance that is being brought up talk to APP-C or anything else and let it get the latest configuration? 
    • Healing & Workload Movement (Not part of Casablanca)
      • No API is expected as it is assumed that K8S master at the cloud region will take care of this.
      • TBD - Is there any information to be populated in A&AI when healing or workload movement occurs at the cloud-region?
    • VNF scaling: (Not part of Casablanca)
      • API :  Scaling of entire VNF 
        • Similar to VNF bring up.
    • Create Virtual Link:
      • API :  Exposes API to create virtual link
        • Meta data
        • Opaque information (since OVN + SRIOV are chosen, the opaque information passed must be amenable to creating networks and subnets as per the OVN/SRIOV controller capabilities)
        • Reference to the newly created network is added to the A&AI.
        • If the network already exists, it is expected that its use count is incremented (see the use-count sketch after this list).
      • Functionality:
        • Creates network if it does not exist.
        • Using OVN/SRIOV CNI API, it will populate remote DHCP/DNS Servers.
        • TBD :  Need to understand OVN controller and SRIOV controller capabilities and figure out the functionality of this API in this plugin.
    • Delete Virtual Link:
      • API : Exposes API to delete virtual network
      • Functionality:
        • If there is no reference to this network (if use count is 0), then using OVN/SRIOV controllers, it deletes the virtual network.
    • Create persistent volume
      • Create volume that needs to exist across VNF life cycle.
    • Delete persistent volume
      • Delete volume
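The Kubernetes templates/variables bullet above refers to the following sketch. It is a minimal illustration in Go (the language of the plugin repository) of how $-prefixed placeholders, including the per-instance namespace, could be substituted into a yaml artifact before it is handed to the K8S master; the function and variable names are illustrative assumptions, not the plugin's actual code.

// Minimal sketch: substitute per-instance variables into a Kubernetes
// yaml artifact before sending it to the K8S master. Only the $-prefix
// convention comes from the description above; everything else is illustrative.
package main

import (
    "fmt"
    "strings"
)

// renderArtifact replaces every "$name" placeholder in the template with
// the value supplied for this instantiation, including the per-instance
// namespace that keeps deployments created from the same CSAR isolated.
func renderArtifact(template string, vars map[string]string) string {
    out := template
    for name, value := range vars {
        out = strings.ReplaceAll(out, "$"+name, value)
    }
    return out
}

func main() {
    tmpl := `apiVersion: apps/v1
kind: Deployment
metadata:
  name: $vnf_name
  namespace: $namespace
spec:
  replicas: $replicas`

    rendered := renderArtifact(tmpl, map[string]string{
        "vnf_name":  "edgex-core",
        "namespace": "vnf-instance-01", // different for every instance
        "replicas":  "2",
    })
    fmt.Println(rendered)
}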
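The Create/Delete Virtual Link bullets above also mention a use count. The following is a hedged sketch of that behaviour, again in Go, where acquireNetwork/releaseNetwork and the controller callbacks are assumptions made purely for illustration.

// Hedged sketch of the use-count behaviour described for Create/Delete
// Virtual Link: the network is only created on first use and only deleted
// when the last user releases it.
package plugin

// networkUseCount tracks how many VNF instances reference each virtual network.
var networkUseCount = map[string]int{}

// acquireNetwork creates the network via the OVN/SRIOV controller on first
// use and increments the use count otherwise.
func acquireNetwork(name string, create func(string) error) error {
    if networkUseCount[name] == 0 {
        if err := create(name); err != nil {
            return err
        }
    }
    networkUseCount[name]++
    return nil
}

// releaseNetwork decrements the use count and deletes the network through
// the OVN/SRIOV controller only when no VNF references it any more.
func releaseNetwork(name string, remove func(string) error) error {
    if networkUseCount[name] > 0 {
        networkUseCount[name]--
    }
    if networkUseCount[name] == 0 {
        delete(networkUseCount, name)
        return remove(name)
    }
    return nil
}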

...

Offers Ansible playbooks for installing a Kubernetes deployment with the additional components required for the ONAP MultiCloud plugin. Its temporary repository is https://github.com/electrocucaracha/krd


Activities:

Activity (Non ONAP related, but necessary to prove K8S plugin) | Owner | Status
Add K8S installation scripts | Victor Morales | Done
Add flannel networking support | Victor Morales | Done
Add OVN ansible playbook | Victor Morales | Done
Create functional test to validate OVN operability | | In progress
Add Virtlet ansible playbook | Victor Morales | Done
Create functional test to validate Virtlet operability | | In progress
Prove deployment with EdgeXFoundry containers with flannel network | ramamani yeleswarapu |
Prove deployment with one VM and container sharing flannel network | |
Prove deployment with one VM and container sharing CNI network | |
Add Multus CNI ansible playbook | ramamani yeleswarapu | In progress
Create functional test to validate Multus CNI operability | ramamani yeleswarapu |
Prove deployment with one VM (firewall VM) and container (simple router container) sharing two networks (both from OVN) | |
Prove deployment with one VM and container sharing two networks (one from OVN and another from Flannel) | |
Document the usage of the project | Victor Morales | In progress
Add Node Feature Discovery for Kubernetes | Victor Morales |
Create functional test for NFD | Victor Morales |

MultiCloud/Kubernetes Plugin

Translates the ONAP runtime instructions into Kubernetes RESTful API calls. Its temporary repository is https://github.com/shank7485/k8-plugin-multicloud
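A minimal sketch of that translation step, assuming client-go and a Deployment artifact already rendered with the instance variables; it is not the plugin's actual code (that lives in the repository above), and createDeployment is an illustrative name.

// Sketch only: decode a rendered Deployment artifact and issue the
// corresponding Kubernetes API call through client-go. Obtaining the
// rest.Config (e.g. from a kubeconfig stored per cloud region) is shown
// in a later sketch.
package plugin

import (
    "context"

    appsv1 "k8s.io/api/apps/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
    "sigs.k8s.io/yaml"
)

// createDeployment decodes the yaml artifact and creates it in the
// namespace chosen for this VNF instance.
func createDeployment(cfg *rest.Config, namespace string, artifact []byte) error {
    var d appsv1.Deployment
    if err := yaml.Unmarshal(artifact, &d); err != nil {
        return err
    }
    clientset, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        return err
    }
    _, err = clientset.AppsV1().Deployments(namespace).Create(context.TODO(), &d, metav1.CreateOptions{})
    return err
}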

Activities:

Activity | Owner | Status
K8S Plugin API definition towards rest of ONAP for compute
K8S Plugin API definition towards rest of ONAP for networking
K8S plugin API definition towards rest of ONAP for storage (may not be needed)
SO Simulator for compute
K8S plugin for compute:
  Instantiation time:
    • Loading artifact
    • Updating loaded artifact based on API information
    • Making calls to K8S (getting the endpoint to talk to from the ESR-registered repo)
    • Return values to be put in A&AI
Testing with K8S reference deployment with hardcoded flannel configuration at the site (using EdgeXFoundry) - deployment yaml files to be part of K8S plugin (uploaded manually)
K8S Plugin implementation for OVN
SO simulator for network
Testing with K8S reference deployment with OVN networking (using EdgeXFoundry)
Testing with K8S reference deployment with OVN with VM and containers having multiple interfaces
K8S plugin - Artifact distribution client to receive artifacts from SDC
Above test scenario without hardcoding yaml files in K8S plugin

Note: Once above list is decided,  appropriate JIRA stories will be created.

FOLLOWING SECTIONS are YET TO BE UPDATED

Goal and scope

The first target container/COE is k8s, but other container/COE technologies, e.g. Docker Swarm, are not precluded. If volunteers step up for them, they will also be addressed.

  • Have ONAP take advantage of container/COE technology for cloud native era
  • Utilizing of industry momentum/direction for container/COE
  • Influence/feedback the related technologies (e.g. TOSCA, container/COE)

  • Teach ONAP container/COE in addition to openstack, so that VNFs can be deployed/run over container/COE in a cloud-native way

At the same time it's important to keep ONAP working and not break it.

  • Don’t change the existing components/work flows ((mostly) zero impact)
  • Leverage the existing interfaces and the integration points where possible

Functionality

...

API/Interfaces

Swagger API:


View file: swagger.yaml

The following table summarizes the impact on other projects:

Component | Comment
modelling | New names of data model to describe k8s node/COE instead of compute/openstack. Modeling for k8s is already being discussed.
OOF | New policy to use COE, to run VNF in a container.
A&AI/ESR | Schema extensions to represent k8s data (key value pairs).
Multicloud | New plugin for COE/k8s. (Depending on the community discussion, ARIA and Helm support needs to be considered, but this is contained within the Multicloud project.)

First target for first release

The scope for Beijing is:

    1. First baby step to support containers in a Kubernetes cluster via a Multicloud SBI / Plugin

    2. Minimal implementation with zero impact on MVP of Multicloud Beijing work

Use Cases

    1. Sample VNFs(vFW and vDNS)

Integration scenario

    1. Register/unregister k8s cluster instance which is already deployed. (dynamic deployment of k8s is out of scope)

    2. onboard VNFD/NSD to use container

    3. Instantiate / de-instantiate containerized VNFs through K8S Plugin in K8S cluster

    4. VNF configuration with sample VNFs (vFW, vDNS)

Target for later release

  • Installer/test/integration
  • More container orchestration technology
  • More than sample VNFs
  • Delegating functionalities to CoE/K8S

Non-Goal/out of scope

The following are non-goals and out of scope for this proposal.

Architecture Alignment.

...

How does this project fit into the rest of the ONAP Architecture?

  • The architecture is designed as an enhancement to existing projects.

  • It doesn’t introduce new dependencies.

...

How does this align with external standards/specifications?

  • Convert the TOSCA model to each container's northbound API in some ONAP component. To be discussed.

...

Are there dependencies with other open source projects?

Activity | Owner | Status
Create a layout for the project | Shashank Kumar Shankar | Done
Create a README file with the basic installation instructions | Shashank Kumar Shankar | Done
Define the initial swagger API | Shashank Kumar Shankar | Done
Implement /vnf_instances POST endpoint | Victor Morales | Done
Implement the Create method for VNFInstanceClient struct | Victor Morales | Done
Implement /vnf_instances GET endpoint | | Done
Implement the List method for VNFInstanceClient struct | Victor Morales | Done
Implement /vnf_instances/{name} GET endpoint | Victor Morales | In progress
Implement the Get method for VNFInstanceClient struct | | In progress
Implement /vnf_instances/{name} PATCH endpoint | | In progress
Implement the Get method for VNFInstanceClient struct | Shashank Kumar Shankar | In progress
Implement /vnf_instances/{name} DELETE endpoint | Shashank Kumar Shankar | Done
Implement the Delete method for VNFInstanceClient struct | | Done
Create the struct for the Creation response | |
Create the struct for the List response | Victor Morales |
Create the struct for the Get response | |
K8S Plugin API definition towards rest of ONAP for compute | |
K8S Plugin API definition towards rest of ONAP for networking | Shashank Kumar Shankar |
K8S plugin API definition towards rest of ONAP for storage (may not be needed) | Shashank Kumar Shankar |
Merge KRD and plugin repo and upload into the ONAP official repo | Victor Morales |
SO Simulator for compute | Shashank Kumar Shankar |

K8S plugin for compute

Instantiation time:

  • Loading artifacts based on the order
  • For each artifact
    • Updating loaded artifact based on API information.
    • Updating loaded artifact based on variables
    • Making calls to K8S (Getting endpoint to talk to from ESR registered repo)




Testing with K8S reference deployment with hardcoded flannel configuration at the site (using EdgeXFoundry) - deployment yaml files to be part of K8S plugin (uploaded manually) | ramamani yeleswarapu
K8S Plugin implementation for OVN | Ritu Sood
SO simulator for network
Testing with K8S reference deployment with OVN networking (using EdgeXFoundry)
Testing with K8S reference deployment with OVN with VM and containers having multiple interfaces
K8S plugin - Artifact distribution client to receive artifacts from SDC (mandatory - on-demand artifact download; pro-active storage is a stretch goal)
Above test scenario without hardcoding yaml files in K8S plugin
K8s plugin - Download Kube Config file from A&AI and use it to authenticate/operate with a Kubernetes cluster (see the kubeconfig sketch below) | Shashank Kumar Shankar
K8s plugin - Add an endpoint to render the Swagger file | Shashank Kumar Shankar

Note: Once above list is decided,  appropriate JIRA stories will be created.
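For the kubeconfig activity above, a rough sketch of how the downloaded kubeconfig bytes could be turned into a client; the A&AI/ESR query that actually retrieves the bytes is not shown, and clientForCloudRegion is an illustrative name.

// Sketch only: build a Kubernetes clientset from kubeconfig bytes stored
// for a cloud region (e.g. downloaded from A&AI/ESR; retrieval not shown).
package plugin

import (
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func clientForCloudRegion(kubeconfig []byte) (*kubernetes.Clientset, error) {
    cfg, err := clientcmd.RESTConfigFromKubeConfig(kubeconfig)
    if err != nil {
        return nil, err
    }
    return kubernetes.NewForConfig(cfg)
}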


Projects that may be impacted 


Project | Possible impact | Workaround | Owner | Status
SO | Ability to call generic VNF API

Until SO is enhanced to support

  • TOSCA orchestration
  • VNF level Abstract API

SO will be simulated to test the K8S plugin and reference deployment

SO simulation owner: ???



SDC

May not be any impact, but need to check whether there is any impact:

  • for adding new artifacts.
  • supporting download requests on specific artifacts

Owner : Libo
A&AI AND ESR

May not be any impact, but need to see whether any schema changes are required

  • Add Kubeconfig related data on per cloud-region basis.

Check whether any existing fields in cloud-region can be used to store this information or introduce new attributes in the schema (under ESR)


Owner : Shashank and Dileep
MSB/ISTIO

No impact on MSB, but fixes are required for the following:

Integration with ISTIO CA to have the certificate enrolled for communicating with other ONAP services

Also to communicate with remote K8S master.





Activities that are in scope for phase1 (Stretch goals)

Activity | Owner | Status
K8S node-feature discovery and population of A&AI DB with the features

Support for Cloud based CaaS (IBM, GCP to start with)


Scope

  • Support for K8S based sites (others such as Dockerswarm,  Mesos are not in the scope of Casablanca)
  • Support for OVN and flannel based networks in sites
  • Support for virtlet to bring up VM based workloads (others such as KubeVirt are for the future)
  • Support for bare-metal containers using docker run time (Kata containers support will be taken care later)
  • Multiple virtual network support
  • Support for multiple interfaces to VMs and containers.
  • Proving using the vFW VM, simple router container and EdgeXFoundry containers.
  • Support for K8S deployment and other yaml files as artifacts (Helm charts and pure TOSCA based container deployment representation is beyond Casablanca)
  • Integration with ISTIO CA (for certificate enrolment)


API/Interfaces

Swagger API:


View file: swagger.yaml
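To make the API surface easier to read alongside the activity list, here is a hedged Go sketch of how the /vnf_instances endpoints could map onto the VNFInstanceClient methods mentioned earlier; the request/response fields are illustrative guesses, and the attached swagger.yaml remains the authoritative contract.

// Illustrative mapping of the /vnf_instances endpoints onto client methods.
// Field names are assumptions, not the plugin's actual schema.
package plugin

// VNFInstanceRequest is a hypothetical payload for POST /vnf_instances.
type VNFInstanceRequest struct {
    CloudRegionID string            `json:"cloud_region_id"`
    CsarID        string            `json:"csar_id"`
    Namespace     string            `json:"namespace"`
    Parameters    map[string]string `json:"parameters,omitempty"`
}

// VNFInstanceResource is a hypothetical view of one created instance.
type VNFInstanceResource struct {
    Name      string   `json:"name"`
    Resources []string `json:"resources"` // references also written to A&AI
}

// VNFInstanceClient mirrors the REST surface: one method per endpoint.
type VNFInstanceClient interface {
    Create(req VNFInstanceRequest) (VNFInstanceResource, error) // POST   /vnf_instances
    List() ([]VNFInstanceResource, error)                       // GET    /vnf_instances
    Get(name string) (VNFInstanceResource, error)               // GET    /vnf_instances/{name}
    Update(name string, req VNFInstanceRequest) error           // PATCH  /vnf_instances/{name}
    Delete(name string) error                                   // DELETE /vnf_instances/{name}
}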


Key Project Facts:

This project will be a subproject of the Multicloud project.

...

Kubernetes pod API or other container northbound API

...


Use Cases

  • Sample VNFs (vFW and vDNS): in Beijing, only deploying those VNFs over CoE
  • Other potential use cases (vCPE) will be addressed after the Beijing release.

Work flows have been defined for registering a k8s cluster instance and for deploying a VNF into a pod.

...

  • link to seed code (if applicable) N/A

  • Vendor Neutral

    • if the proposal is coming from an existing proprietary codebase, have you ensured that all proprietary trademarks, logos, product names, etc., have been removed?

  • Meets Board policy (including IPR)


Key Project Facts:

This project will be a subproject of the Multicloud project. Isaku will lead this effort under the umbrella of the Multicloud project.

NOTE: if this effort is a sub-project of Multicloud, as the ARC committee recommended, these facts will be the same as Multicloud's.

...

Role | First Name Last Name | Linux Foundation ID | Email Address | Location
committer | Victor Morales | electrocucaracha | victor.morales@intel.com | PT (pacific time zone)
committer | Isaku Yamahata | yamahata | isaku.yamahata@gmail.com | PST
contributor | Munish Agarwal | | Munish.Agarwal@ericsson.com |
 | Ritu Sood | ritusood | ritu.sood@intel.com | PT (pacific time zone)
 | Shashank Kumar Shankar | | shashank.kumar.shankar@intel.com | PT (pacific time zone)
 | ramamani yeleswarapu | | ramamani.yeleswarapu@intel.com | PT (pacific time zone)
 | Kiran Kamineni | | kiran.k.kamineni@intel.com | PT (pacific time zone)
 | Bin Hu | bh526r | bh526r@att.com |
 | libo zhu | | |
 | Manjeet Singh Bhatia | manjeets | manjeet.s.bhatia@intel.com | PT (pacific time zone)
 | Phuoc Hoang | hoangphuocbk | phuoc.hc@dcn.ssu.ac.kr |
 | Mohamed ElSerngawy | melserngawy | mohamed.elserngawy@kontron.com | EST
 | Komer Poodari | kpoodari | kpoodari@berkeley.edu | PST
 | ramki krishnan | ramkri123 | | PST
Interested (will attend my first on 20180206; part of OOM and logging projects) | Michael O'Brien | michaelobrien | frank.obrien@amdocs.com | EST (GMT-5)






View file: K8S_R3_Update_R4_Items.pptx