Date

Attendees

Goals

Discussion items

Time | Item | Who | Notes




Action items

  • Isaku Yamahata to propose the k8s discussion to the architecture subcommittee
  • Isaku Yamahata to start the discussion on the mailing list
  • Isaku Yamahata to prepare a first draft of the proposal for the architecture subcommittee
  • Isaku Yamahata to file a JIRA item so the architecture subcommittee can schedule the topic at its meeting

8 Comments

  1. First pass at meeting notes:


    General agreement that ONAP architecture should support containerized VNFs.  The challenge with K8S, especially when described as a “VIM”, is that K8S provides App management capabilities at the VNFM layer in addition to managing infrastructure.  Recommendation is to bring the topic to the ONAP architectural committee. 

    One requirement would be to ensure that we can model VNFs the same way regardless of whether they are deployed in VMs or Containers. 

    Specific comments:

    “One VIM API to rule them all is a flawed approach. Accept that there are a limited number of VIMs.” [and that there are APIs for each]

    “How to reinforce tiered model (infrastructure, app, operator-facing service)”

    “Infrastructure abstraction would have to manage PNFs as well.”

    Multi-VIM API is not helpful here as: 1) it does not resolve the App management aspect of k8s, and 2) it is not intended as an infrastructure API.

    1. It is true that K8S is not only used to orchestrate (bring containers up/down), but is also used to monitor the health of the containers it brought up. Yes, it is much more than a VIM for containers. That is actually a good thing.

      Regarding the statement "One requirement would be to ensure that we can model VNFs the same way regardless of whether they are deployed in VMs or Containers": with recent additions such as virtlet/kubevirt, K8S can also be used to bring up VMs. One K8S master instance can be used to bring up VFs as both VMs and containers, as sketched below.
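      A hedged sketch of that point, using KubeVirt as the example add-on: the VM is just another object created through the same K8S API server as pods. The CRD group/version (kubevirt.io/v1), the manifest fields, and the demo disk image follow KubeVirt's published examples but have changed across releases, so treat them as assumptions.

```python
# Hedged sketch: with KubeVirt installed, a VM is created through the
# same K8S API server as a pod, as a custom resource. Group/version and
# manifest fields are assumptions based on KubeVirt's published examples.
from kubernetes import client, config

config.load_kube_config()  # assumes a local kubeconfig

vm = {
    "apiVersion": "kubevirt.io/v1",
    "kind": "VirtualMachine",
    "metadata": {"name": "vnf-vm-0"},
    "spec": {
        "running": True,
        "template": {
            "spec": {
                "domain": {
                    "devices": {"disks": [
                        {"name": "rootdisk", "disk": {"bus": "virtio"}},
                    ]},
                    "resources": {"requests": {"memory": "128Mi"}},
                },
                "volumes": [{
                    "name": "rootdisk",
                    "containerDisk": {"image": "quay.io/kubevirt/cirros-container-disk-demo"},
                }],
            },
        },
    },
}
client.CustomObjectsApi().create_namespaced_custom_object(
    group="kubevirt.io", version="v1", namespace="default",
    plural="virtualmachines", body=vm,
)
```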

      The Multi-Cloud architecture workshop addressed some of the comments.

      1. The Multi-Cloud component's northbound API will be model-driven and is expected to be a common API across multiple VIM technologies (including K8S).
      2. A plugin under this API will translate northbound API data into VIM-technology-specific information. In the case of K8S, the K8S plugin will convert northbound API data into K8S-specific API calls (see the sketch after this list).
      3. The Multi-Cloud architecture is expected to provide an FCAPS interface, and the K8S plugin can satisfy this using K8S's monitoring capabilities.
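      As a rough illustration of points 1 and 2 (not actual ONAP code; the northbound field names vnf_name, image, and replicas are invented for this sketch), a K8S plugin might translate a model-driven workload request into a K8S Deployment like this:

```python
# Hypothetical sketch of a Multi-Cloud K8S plugin: translate a
# model-driven northbound workload request into a K8S Deployment.
# The northbound field names (vnf_name, image, replicas) are invented
# for this illustration; they are not an ONAP-defined schema.
from kubernetes import client, config

def handle_northbound_request(req: dict) -> None:
    """Convert a generic workload description into a K8S-specific call."""
    config.load_kube_config()
    labels = {"vnf": req["vnf_name"]}
    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name=req["vnf_name"]),
        spec=client.V1DeploymentSpec(
            replicas=req.get("replicas", 1),
            selector=client.V1LabelSelector(match_labels=labels),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels=labels),
                spec=client.V1PodSpec(
                    containers=[client.V1Container(name=req["vnf_name"],
                                                   image=req["image"])],
                ),
            ),
        ),
    )
    client.AppsV1Api().create_namespaced_deployment(namespace="default",
                                                    body=deployment)

# e.g. handle_northbound_request({"vnf_name": "firewall", "image": "example/fw:1.0"})
```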

      It would be good to coordinate this activity alongside the Multi-Cloud architecture work.

      Srini

  2. There's a deeper issue to be discussed, namely what we want to use Kubernetes (or any other container orchestration platform for that matter) for:

    It seems some people think of Kube's main value as being "create a container with image X, schedule it on a node meeting the container's requirements". And there is a Kube API to do this low-level operation, of course. If one limits Kube to this, then on the surface this is very similar to what a VIM would do. I say on the surface, because when you dig deeper, there are differences between containers and VMs(*) that are difficult to "hide" below a common API in a meaningful way. Just ask the virtlet/kubevirt folks who are attempting to do this. Or ask the OpenStack Nova folks why they deprecated support for scheduling containers.
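    For concreteness, the "VIM-like" slice of the Kube API is roughly this (a minimal sketch using the official Python client; the image name and node selector are illustrative):

```python
# Minimal sketch of the "VIM-like" slice of the Kubernetes API:
# create one container (wrapped in a pod) from a given image and let
# the scheduler place it. Image name and node selector are illustrative.
from kubernetes import client, config

config.load_kube_config()  # assumes a local kubeconfig

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="vnf-instance-0"),
    spec=client.V1PodSpec(
        containers=[client.V1Container(name="vnf", image="example/vnf:1.0")],
        node_selector={"disktype": "ssd"},  # crude placement constraint
    ),
)
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```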

    BUT: The reason for Kube's huge momentum in the industry is not that it schedules containers. It comes from encoding years of best practices in designing and operating complex distributed applications into an opinionated deployment model (the deployment unit is a pod, i.e. a collection of containers with a common service endpoint and namespace) and application model (e.g. replicasets for stateless applications, statefulsets for stateful, clustered applications).
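    To make the contrast concrete, here is a minimal sketch of that application model (names illustrative): a StatefulSet gives each replica a stable identity (db-0, db-1, ...), per-pod DNS via a headless service, and ordered rollout, none of which a bare "create container" call expresses.

```python
# Sketch of the *application* model: a StatefulSet gives each replica a
# stable identity (db-0, db-1, ...) and ordered rollout; a plain
# "create container" call expresses none of this. Names are illustrative.
from kubernetes import client, config

config.load_kube_config()

sts = client.V1StatefulSet(
    metadata=client.V1ObjectMeta(name="db"),
    spec=client.V1StatefulSetSpec(
        service_name="db",  # headless service providing per-pod DNS names
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "db"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "db"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="db", image="example/db:1.0")],
            ),
        ),
    ),
)
client.AppsV1Api().create_namespaced_stateful_set(namespace="default", body=sts)
```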

    So, conceptually, Kube is not a VIM, it is a VNFM / AppC! Plugging it below a VIM abstraction and using the AppC to compose distributed applications and lifecycle manage them would mean ignoring most of Kube's functionality and the tooling ecosystem around it (e.g. CI/CD pipelines, analytics, service routing, policy engines, ...) that are the reason for Kube's adoption across so many industries.


    (*) e.g. containers being backed by file storage, VMs by block storage, hosting single app processes (Docker) vs multiple app processes, etc.

  3. As a follow-up to the above:

    If we want to leverage the application (and app lifecycle) model of Kubernetes (or any other modern container platform, for that matter), I think we also need to accept that there may not be a single app model across container- and VM-based VNFs. Of course, the basic lifecycle operations (provision, upgrade/rollback, scale, deprovision, ...) would be the same. The difference would be in the details, like what is the "contract" / the "guarantees" of the app controller regarding ordering, identity, addressing, persistence of backing volumes, etc.

    I don't think this would be a big issue actually, given a clean separation between application and service layers. This is what I was referring to when during the session I said that the service (orchestration) layer should be oblivious to how an app is provisioned and lifecycle-managed: a PNF, a traditional VM-based VNF or a cloud-native / microservice-style container-based VNF should mostly look the same.
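    A purely hypothetical sketch of that separation (none of these classes exist in ONAP; they only illustrate the shape of the contract): the service layer programs against one lifecycle interface, while per-technology controllers hide whether the network function is container- or VM-based.

```python
# Purely hypothetical illustration (none of these classes exist in ONAP):
# the service layer sees one lifecycle contract; per-technology
# controllers hide how the network function is actually realized.
from abc import ABC, abstractmethod

class NetworkFunctionLcm(ABC):
    """Lifecycle contract the service layer programs against."""

    @abstractmethod
    def provision(self, descriptor: dict) -> str:
        """Instantiate the function and return an instance id."""

    @abstractmethod
    def scale(self, instance_id: str, replicas: int) -> None: ...

    @abstractmethod
    def deprovision(self, instance_id: str) -> None: ...

class ContainerVnfController(NetworkFunctionLcm):
    """Would delegate to K8S Deployments / StatefulSets."""
    def provision(self, descriptor: dict) -> str:
        return "k8s-" + descriptor["name"]
    def scale(self, instance_id: str, replicas: int) -> None:
        pass  # e.g. patch the Deployment's replica count
    def deprovision(self, instance_id: str) -> None:
        pass

class VmVnfController(NetworkFunctionLcm):
    """Would delegate to a VM-oriented AppC / VIM (e.g. OpenStack)."""
    def provision(self, descriptor: dict) -> str:
        return "vm-" + descriptor["name"]
    def scale(self, instance_id: str, replicas: int) -> None:
        pass
    def deprovision(self, instance_id: str) -> None:
        pass
```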

  4. Hi Frank,

    Thanks for the feedback.

    K8S is not just a container scheduler; it is much more. Agreed on that. In the context of ONAP, a few ONAP components are already doing the VNFM function (VF-C, APP-C). The ONAP architecture is expected to support multiple cloud technologies running at the same time. That is, one site may be running an OpenStack VIM, a second site may be running VMware vCenter, and a third site may be running K8S. So we felt that anything we do should work within this ONAP architecture, and hence we thought the Multi-Cloud/VIM layer is the right place.

    In that scope, you are right that only part of K8S functionality may be leveraged. Existing ONAP components would be leveraged for the other functions: APP-C for LCM, DCAE for analytics, CLAMP & Policy for the closed-loop feedback system, etc. That is what we thought we could crawl with and develop more understanding as we move forward (smile).

    Does that make sense?

    What kind of issues do you see (other than not leveraging the full power of K8S :-))?

    Thanks

    Srini



    1. Thanks Srini for that background. That is good to know.

      My concern with degrading K8s to a VIM is that there's no real path to evolve it towards AppC later; we'd basically have to start from scratch and that means a high barrier to evolution.

      A second concern is that - as I wrote earlier - containers and VMs are only similar on the surface. We'd need to add container-specifics to the API that would not be leveraged by VMs and vice versa. Ugly. Ask the OpenStack Nova folks why they dropped that idea. Or the KubeVirt folks why they are separating the Pod and VirtualMachine objects in K8s.

      My third concern is that it would mean making a pre-decision that the current AppC app model for LCM, which is geared towards traditional, VM-based VNF designs, is also a good model for modern, cloud-native, containerized VNFs. And it would separate us more from the rest of the industry, which is looking to the K8s model for managing apps.

      I don't see the concern that we'd not be leveraging existing ONAP components enough: K8s still needs a VIM below to create & manage its infrastructure, i.e. the Multi-VIM stuff would stay the same. Likewise, DCAE, CLAMP & Policy could likely be reused. Also, talking to the TOSCA experts, there's a good way to integrate K8s. The only change would be using AppC for VM-based VNFs and a new AppC for container-based VNFs, which means there is a clean migration path.

  5. Were there slides presented? Can someone please post them here? Thanks, Amar

  6. Hello,

    Sounds like a good discussion; I'd like to add my few cents. First, using the term VIM for K8S infrastructure used to house VNFs sounds confusing. It is not yet clear where the Kubernetes cluster for VNFs will be hosted: will it be a bare-metal cluster? If the containers run on bare-metal servers, it is technically not a virtual infrastructure manager. It could be called (just a suggestion) a Network Functions Infra Manager, since it would cover all VNF and PNF infrastructure, be it VMs or bare metal.

    Another advantage I see of using K8S is that HA for a single VNF can be achieved more easily than it would be with OpenStack: if a VNF is heavily used and has multiple instances, the K8S service IP can be used to provide HA for that VNF. Whether there is a use case for that is entirely a matter of choice.

    I see you're starting with Kubernetes as the COE, and Docker Swarm is also mentioned in the discussion. I think the choice of COE should be extensible; the three most common COEs I see are Kubernetes, Docker Swarm, and Mesos.