ONAP Container/Container Orchestration Engine(COE) support
Code Name: COE
This page tracks the materials, discussions, and other items related to the container-based network service/function deployment effort (ONAP container / Container Orchestration Engine (COE)).
As a starting point, this effort began as a small task force within the MultiCloud project. As the effort evolves, the logistics will be revised; this task force may eventually be promoted to an independent group or an independent project.
Meetings
- [coe] weekly meeting: #coe Team, Zoom bridge ONAP5. China CST (Wed) 2:00am / UTC (Tue) 18:00 / ET (Tue) 13:00 / PT (Tue) 10:00, from Jan 16/17, 2018
- Draft slides/discussion for the second presentation to the architecture subcommittee will be posted here.
- Meeting Minutes
Items
determine timeslot for weekly zoom meeting: doodle poll. New time slot: China CST (Wed) 2:00am / UTC (Tue) 18:00 / ET (Tue) 13:00 / PT (Tue) 10:00
- start from Jan 16/17
materials for use case subcommittee
- update materials for architecture subcommittee
- settle down logistics
  - subproject under the umbrella of the MultiCloud project
  - sub-PTL
  - set up repo
  - committers
JIRA items
Beijing planning
- use case
- architecture/API design: WIP
- discussion is ongoing at https://gerrit.onap.org/r/#/c/30027/
- collect k8s usage options
  - Munish proposal
  - Isaku proposal
  - Bin Hu proposal
  - Frank proposal
- impact on modeling
- impact on policy
- impact on TOSCA
- use case proposal
- impacted projects
- architecture proposal
  - discussion points
    - how to register a k8s cluster
    - how to deploy a VNF
    - lifecycle management and others (e.g. auto scaling): generic VNF controllers
timeline
- Create repository: requested from Gildas (infra manager).
- start implementation
timeline
- Feb 5, 2018: discussion at multicloud meeting
- Feb 6, 2018: weekly meeting
- Feb 18, 2018: M2 deadline
Design Documents
Slides/links
- Jan
- determine timeslot for online meeting
- the week of Nov 30/Dec 1, 2017: first online meeting
- the week of Dec 4: kubecon: will skip
- the week of Dec 11: ONAP developer event. Let's have f2f session there.
- the week of Dec 18:
- zoom meeting? or skip?
- after that, online meeting
- Maybe people will be on vacation around Christmas/New Year
- The week of Dec 25: skip
- The week of Jan 1: skip?
- The week of Jan 8: Time slot will be determined by doodle poll
- Jan 9, 2018: ARC subcommittee 2nd review: slide
- Jan 15, 2018: attending multicloud weekly meeting
- Jan 16, 2018: weekly meeting
- the week of TBD: use case subcommittee meeting
- the week of TBD: architecture subcommittee meeting
- the week of TBD: present to the TSC for new projects (or task force): slide
Project template for project proposal
Project Name:
Project name: Container based network service/function deployment (ONAP container support)
Code name: COE
Repository name: multicloud/container
Project Description:
The effort will investigate/drive a way to allow ONAP to deploy/manage Network Services/VNFs over container/COE.
...
- Support for multiple container orchestration technologies as VIM (Virtualized Infrastructure Manager) technologies: let ONAP treat a container orchestration technology as a VIM/cloud infrastructure, so that VNFs can run within containers with the same closed feedback loop as a VM-based VIM (e.g. OpenStack).
- Support for co-existence of VNF VMs and VNF containers: add container orchestration technology alongside the traditional VM-based VIMs/environments managed by ONAP, so that ONAP can deploy VNFs in containers.
- Support in various ONAP projects for bringing up VNFs as containers, through a standardized definition of the workload as VM or container (e.g. the TOSCA model).
- Enhance VNF SDK to support container based VNFs.
- Support for uniform network connectivity among VMs and containers.
Goal and scope
The first container/COE target is Kubernetes (k8s), but other container/COE technologies (e.g. Docker Swarm) are not precluded; if volunteers step up, they will also be addressed. The use of COE/K8S is optional.
- Have ONAP take advantage of container/COE technology for cloud native era
- Utilizing of industry momentum/direction for container/COE
- Influence and feed back into the related technologies (e.g. TOSCA, container/COE)
...
- Keep the existing components/workflow unchanged, with (mostly) zero impact.
- Leverage the existing interfaces and the integration points where possible
...
- Sample VNFs (vFW and vDNS): in Beijing, only deploying those VNFs over COE
- Other potential use cases will be addressed after the Beijing release.
Functionality
- Allow VNFs to run within containers, through container orchestration managed by ONAP, in the same way as with VM-based VIMs.
- Allow closed-loop feedback and policy.
- Allow container-based VNFs to be designed.
- Allow container-based VNFs to be configured and monitored.
- Kubernetes cloud infrastructure as the initial container orchestration technology, under the Multi-Cloud project.
API/Interfaces
...
The following table summarizes the impact on other projects:
component | comment |
---|---|
modelling | New data model names to describe k8s nodes/COE instead of compute/OpenStack. Modeling for k8s is already being discussed. |
SO | Multi-cloud adapter to call the multicloud k8s driver. The ARIA adaptor, which has already been merged, will be utilized. The difference between VM and container will be hidden behind the model-driven API. |
OOF | New policy to use COE, i.e. to run VNFs in containers |
A&AI/ESR | Schema extensions to represent k8s data (key-value pairs) |
Multicloud | New driver plugin for COE/k8s. (Depending on the community discussion, ARIA and Helm support needs to be considered, but this is contained within the MultiCloud project.) |
controllers/APP-C | No impact, or a new adapter |
First target for first release
The scope of Beijing is:
- Pure COE/K8S deployment
- Assume that COE/K8S is already deployed
- Deployment unit is the Pod
Future work
Hybrid deployment: VM + container (+ PF)
Dynamic deployment of COE/K8S instances on demand
Multicloud: basic API
- Lifecycle management: keep compatibility with APP-C
- No (major) change, by using bare Pods
- Future work: delegation to APP-C or VNF controllers
...
Scope for Beijing
First baby step to support containers in a Kubernetes cluster via a Multicloud SBI / Plugin
Minimal implementation with zero impact on MVP of Multicloud Beijing work
Use Cases
- Sample VNFs (vFW and vDNS)
Integration scenario:
- Register/unregister a k8s cluster instance that is already deployed (dynamic deployment of k8s is out of scope)
- Onboard VNFD/NSD to use containers
- Instantiate / de-instantiate containerized VNFs through the K8S plugin in the K8S cluster
- VNF configuration with the sample VNFs (vFW, vDNS)
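The instantiate step above amounts to the K8S plugin creating a bare Pod (the Beijing deployment unit) in the target cluster. A minimal sketch, assuming illustrative names: the function, image, and labels below are not the actual plugin code, which would derive everything from the onboarded VNFD/NSD.

```python
def vnf_pod_manifest(vnf_name, image, namespace="default"):
    """Build a bare-Pod manifest for a sample VNF (vFW/vDNS).

    All values here are illustrative assumptions; the real K8S plugin
    derives them from the onboarded VNFD/NSD.
    """
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {
            "name": vnf_name,
            "namespace": namespace,
            "labels": {"app": vnf_name, "managed-by": "onap-multicloud"},
        },
        "spec": {
            "containers": [
                {"name": vnf_name, "image": image},
            ],
        },
    }

# The plugin would then POST this manifest to the k8s API server, e.g. via
# the official Python client's CoreV1Api().create_namespaced_pod(...).
manifest = vnf_pod_manifest("vfw", "onap/sample-vfw:latest")
```

De-instantiation would be the symmetric delete of the same Pod, keeping the APP-C-facing lifecycle API unchanged.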
...
Target for later release
- Installer/test/integration
- More container orchestration technology
- More than sample VNFs
- delegating functionalities to CoE/K8S
Non-Goal/out of scope
The following are non-goals / out of scope for this proposal.
- Installer/deployment: ONAP itself running over containers
  - OOM project: ONAP on Kubernetes
  - https://wiki.onap.org/pages/viewpage.action?pageId=3247305
  - https://wiki.onap.org/display/DW/ONAP+Operations+Manager+Project
  - Self-hosting/management might be possible, but that would be a further phase.
- container/COE deployment
  - On-demand installation of container/COE on public cloud/VMs/bare metal as a cloud deployment
  - This is also out of scope for now.
  - For ease of use/deployment, this will be a further phase.
Architecture Alignment.
How does this project fit into the rest of the ONAP Architecture?
The architecture is designed as an enhancement to existing projects.
It doesn't introduce new dependencies.
How does this align with external standards/specifications?
The TOSCA model will be converted to each container northbound API in some ONAP component; the details are to be discussed.
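As a rough illustration of such a conversion, the sketch below maps a highly simplified, hypothetical TOSCA-like node template onto a Kubernetes Pod spec. The property names (`properties`, `capabilities`, etc.) are stand-ins, not the actual TOSCA grammar or any ONAP component's mapping; the real mapping is exactly the open discussion point.

```python
def tosca_node_to_pod(node):
    """Map a simplified TOSCA-like node template (a plain dict) to a
    k8s Pod manifest. Field names are assumptions for illustration only.
    """
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": node["name"]},
        "spec": {
            "containers": [{
                "name": node["name"],
                "image": node["properties"]["image"],
                # TOSCA compute capabilities become k8s resource requests.
                "resources": {
                    "requests": {
                        "cpu": str(node["capabilities"]["num_cpus"]),
                        "memory": node["capabilities"]["mem_size"],
                    }
                },
            }],
        },
    }

pod = tosca_node_to_pod({
    "name": "vdns",
    "properties": {"image": "onap/sample-vdns:latest"},
    "capabilities": {"num_cpus": 2, "mem_size": "512Mi"},
})
```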
Are there dependencies with other open source projects?
Kubernetes Pod API or other container northbound APIs
CNCF(Cloud Native Computing Foundation), OCI(Open Container Initiative), CNI(Container Networking interface)
...
Use Cases
- Sample VNFs (vFW and vDNS): in Beijing, only deploying those VNFs over COE
- Other potential use cases (vCPE) will be addressed after the Beijing release.
The workflow to register a k8s instance is depicted as follows:
The workflow to deploy a VNF into a Pod is as follows:
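In place of the workflow diagrams, a minimal sketch of the registration step: the already-deployed k8s cluster is represented as a set of key-value pairs handed to A&AI/ESR (per the schema-extension note above). Every field name below is a hypothetical placeholder for illustration, not the actual ESR schema extension.

```python
def k8s_cluster_registration(cluster_name, api_endpoint, ca_cert, token):
    """Build a key-value registration record for an already-deployed k8s
    cluster (dynamic deployment of k8s itself is out of scope).

    Field names are hypothetical placeholders, not the real A&AI/ESR schema.
    """
    return {
        "cloud-type": "k8s",           # distinguishes COE from e.g. openstack
        "cluster-name": cluster_name,
        "api-endpoint": api_endpoint,  # k8s API server URL
        "ca-cert": ca_cert,            # credentials to reach the cluster
        "service-token": token,
    }

record = k8s_cluster_registration(
    "lab-cluster-1", "https://10.0.0.1:6443", "<base64 CA>", "<token>")
```

The deploy workflow then consists of the MultiCloud k8s plugin looking up this record and creating the VNF Pod in the referenced cluster.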
Other Information:
link to seed code (if applicable) N/A
Vendor Neutral
if the proposal is coming from an existing proprietary codebase, have you ensured that all proprietary trademarks, logos, product names, etc., have been removed?
Meets Board policy (including IPR)
Use the above information to create a key project facts section on your project page
Key Project Facts:
This project will be a subproject of the MultiCloud project. Isaku will lead this effort under the MultiCloud umbrella.
NOTE: since this effort is a subproject of MultiCloud, as the ARC committee recommended, these facts are the same as MultiCloud's.
Facts | Info |
---|---|
PTL (first and last name) | same as the MultiCloud project |
Jira Project Name | same as the MultiCloud project |
Jira Key | same as the MultiCloud project |
Project ID | same as the MultiCloud project |
Link to Wiki Space | Container based network service/function deployment (DEPRECATED) |
Release Components Name:
Note: refer to existing project for details on how to fill out this table
Components Name | Components Repository name | Maven Group ID | Components Description |
---|---|---|---|
container | multicloud/container | org.onap.multicloud.container | container orchestration engine (COE) cloud infrastructure support |
Resources committed to the Release:
Note 1: No more than 5 committers per project. Balance the committers list and avoid members representing only one company.
...
Role | First Name Last Name | Linux Foundation ID | Email Address | Location |
---|---|---|---|---|
Committer | Isaku Yamahata | yamahata | isaku.yamahata@gmail.com | PT (Pacific time zone) |
Committer | Munish Agarwal | | Munish.Agarwal@ericsson.com | |
Committer | Bin Hu | bh526r | bh526r@att.com | |
Contributor | Manjeet S. Bhatia | manjeets | | |
Contributor | Phuoc Hoang | hoangphuocbk | phuoc.hc@dcn.ssu.ac.kr | |
Contributor | Mohamed ElSerngawy | melserngawy | mohamed.elserngawy@kontron.com | EST |
Interested (part of OOM and logging projects; will attend first meeting on 20180206) | | michaelobrien | frank.obrien@amdocs.com | EST (GMT-5) |