Below are thoughts to continue the SO-APPC-VFC discussion. So far, we have discussed proposals from Vimal, Lingli, and Jamil.  This page attempts to synthesize and analyze the proposals to help drive further discussion.  Please feel free to update.

Language

One of the first challenges we have is around language.  We use the term “controller” for the entities that bring up SDN services and NFV applications.  However, I can’t find a good industry definition for “NFV Controller”.  “Controller” does have widespread use and a common definition in the SDN space, where it operates in the control plane to direct traffic flows. In ONAP, many of the controllers are built on ODL, so this is understandable. On the other hand, NFV generally works in the management plane (providing management, monitoring, and configuration functions).  Indeed, Vimal’s description of a controller covers many of those management functions: “Maintain Topology, Configuration State, Perform Configuration Management and Provide Life Cycle Management Functions (i.e. common verbs – Restart, Suspend, Drain, etc.)”.  ETSI NFV language, too, focuses on managers (Management and Orchestration, Virtual Network Function Manager, Virtualised Infrastructure Manager, etc.). Reality is, of course, messier: for example, ODL is primarily an SDN controller but can also be used for management functions, while OpenStack is primarily a manager but also supports controller functions through Neutron.

...

Note that for clarity, let's still use the terms “APPC” and “VFC” to refer to the modules in question, regardless of whether they are providing controller or manager functions.

 

Goals and questions

Through the various proposals we’ve heard on the topic, there have been several requirements and assumptions – some explicit and some implicit.  


Goals/requirements where we have consensus


  • Flexibility for operator deployments: AT&T, China Mobile, Orange, and others may all want to deploy ONAP differently in their networks, and the system should be able to accommodate their needs.
  • Align with ONAP architecture principles
    • Minimize the impact on external components (e.g., DCAE, A&AI): while some updates may be necessary, we want to fit into the existing framework as much as possible
    • Minimize the impact on VNF vendors: we want to make it easy for VNF vendors to work with ONAP
    • NF/Services/Product agnostic
    • Microservices-based
    • Modular
  • Align with the ONAP charter
  • Capable of delivering in R1

Open questions that have been raised, but where I don’t think we’ve reached consensus

  • Avoid replication of functionality? 
  • ETSI MANO alignment?
  • Feature parity between APPC and VFC?  Do we require synchronization, or do we allow variation within each module as long as the interfaces and interactions with other modules are consistent?


Functional Definitions


Let’s combine ETSI MANO definitions with service orchestration (which ETSI doesn’t define) to come up with a set of roles/functions for various components, which we can then map into deployment scenarios. I’ll work down the stack, starting at the service orchestrator. Note that below, I will use “service orchestrator” for the generic functions and “SO” for the ONAP module, whose functions may or may not correspond to a single component in the ETSI MANO reference architecture that I’ll be using for comparison.

...

    1. Decompose service template into connectivity and application components

    2. Call controllers/managers to configure the network and instantiate VNFs
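
As a rough illustration of the two steps above, here is a minimal sketch. The template structure and the configure_network/instantiate_vnf interfaces are assumptions for illustration only, not actual SO, APPC, or VFC APIs:

    # Hypothetical sketch of the two service-orchestrator steps above. The
    # ServiceTemplate structure and the controller/manager interfaces are
    # assumptions, not actual ONAP APIs.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ServiceTemplate:
        name: str
        connectivity: List[dict] = field(default_factory=list)   # network/connection components
        applications: List[dict] = field(default_factory=list)   # VNF/application components

    def orchestrate(template: ServiceTemplate, sdn_controller, nfv_manager):
        # Step 1: decompose the service template into connectivity and application components.
        connectivity_parts = template.connectivity
        application_parts = template.applications

        # Step 2: call the controllers/managers to configure the network and instantiate VNFs.
        for part in connectivity_parts:
            sdn_controller.configure_network(part)    # hypothetical controller interface
        for vnf in application_parts:
            nfv_manager.instantiate_vnf(vnf)          # hypothetical manager interface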

NFV orchestrator (including Resource Orchestrator) (ETSI MANO: http://www.etsi.org/deliver/etsi_gs/NFV-MAN/001_099/001/01.01.01_60/gs_NFV-MAN001v010101p.pdf)


    1. Management of Network Services deployment templates and VNF Packages (e.g. on-boarding new Network Services and VNF Packages). During on-boarding of NS and VNF, a validation step is required. To support subsequent instantiation of a NS, respectively a VNF, the validation procedure needs to verify the integrity and authenticity of the provided deployment template, and that all mandatory information is present and consistent. In addition, during the on-boarding of VNFs, software images provided in the VNF Package for the different VNF components are catalogued in one or more NFVI-PoPs, using the support of VIM.
    2. Network Service instantiation and Network Service instance lifecycle management, e.g. update, query, scaling, collecting performance measurement results, event collection and correlation, termination.
    3. Management of the instantiation of VNF Managers where applicable.
    4. Management of the instantiation of VNFs, in coordination with VNF Managers.
    5. Validation and authorization of NFVI resource requests from VNF Managers, as those may impact Network Services (granting of the requested operation needs to be governed by policies).
    6. Management of the integrity and visibility of the Network Service instances through their lifecycle, and the relationship between the Network Service instances and the VNF instances, using the NFV Instances repository.
    7. Management of the Network Service instances topology (e.g. create, update, query, delete VNF Forwarding Graphs).
    8. Network Service instances automation management (e.g. trigger automatic operational management of NS instances and VNF instances, according to triggers and actions captured in the on-boarded NS and VNF deployment templates and governed by policies applicable to those NS and VNF instances).
    9. Policy management and evaluation for the Network Service instances and VNF instances (e.g. policies related with affinity/anti-affinity, scaling, fault and performance, geography, regulatory rules, NS topology, etc.).
    10. Validation and authorization of NFVI resource requests from VNF Manager(s), as those may impact the way the requested resources are allocated within one NFVI-PoP or across multiple NFVI-PoPs (granting of the requested resources is governed by policies, and may require prior reservation).
    11. NFVI resource management across operator's Infrastructure Domains including the distribution, reservation and allocation of NFVI resources to Network Service instances and VNF instances by using an NFVI resources repository, as well as locating and/or accessing one or more VIMs as needed and providing the location of the appropriate VIM to the VNFM, when required.
    12. Supporting the management of the relationship between the VNF instances and the NFVI resources allocated to those VNF instances by using NFVI Resources repository and information received from the VIMs.
    13. Policy management and enforcement for the Network Service instances and VNF instances (e.g. NFVI resources access control, reservation and/or allocation policies, placement optimization based on affinity and/or anti-affinity rules as well as geography and/or regulatory rules, resource usage, etc.).
    14. Collect usage information of NFVI resources by VNF instances or groups of VNF instances, for example, by collecting information about the quantity of NFVI resources consumed via NFVI interfaces and then correlating NFVI usage records to VNF instances.

...

Based on Vimal’s and Lingli’s presentations, let’s map these functions into the various modules (including DCAE & Policy). In the lists below, “1”, “2”, and “3” refer to the service orchestrator, NFV orchestrator, and VNF manager function lists, with letters standing for the individual items (e.g. 2 a-n are the 14 NFV orchestrator functions above).

SO:

  • 1a&b
  • 2a, c, d, j, k, l

APPC:

  • 2 b, g, h
  • 3a-k

VFC:

  • 2 a-n
  • 3 a-k (optional – using OPEN-O G-VNFM; other gVNFM or sVNFM possible)

DCAE:

  • 2 b, n
  • 3 f

Policy:

  • 2 i, m


So, SO+VFC would provide all of 1, 2, and 3, with some overlap in 2 a, c, d, j, k, and l. SO+APPC provides 1; 2 a-d, g, h, j-l; and 3 a-k. SO+VFC+APPC again provides all of 1, 2, and 3, but with overlap in 2 a-d, g, h, j-l (and in 3 a-k, if VFC’s optional generic VNFM is used). 
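
The coverage and overlap above can be checked mechanically. Below is a minimal sketch that encodes the mapping as Python sets (treating 3 a-k as part of VFC, per its optional generic VNFM) and computes, for any combination of modules, what is covered, where modules overlap, and what is missing. The mapping itself comes from the lists above; everything else is illustrative:

    # Encode the function-to-module mapping above as sets. The letter labels are
    # assumed to correspond to the function lists quoted earlier (2 a-n for the
    # 14 NFV orchestrator functions, 3 a-k for the VNF manager functions).
    ALL_1 = {f"1{c}" for c in "ab"}
    ALL_2 = {f"2{c}" for c in "abcdefghijklmn"}
    ALL_3 = {f"3{c}" for c in "abcdefghijk"}

    MODULES = {
        "SO":     {"1a", "1b", "2a", "2c", "2d", "2j", "2k", "2l"},
        "APPC":   {"2b", "2g", "2h"} | ALL_3,
        "VFC":    ALL_2 | ALL_3,          # 3 a-k optional (generic VNFM)
        "DCAE":   {"2b", "2n", "3f"},
        "Policy": {"2i", "2m"},
    }

    def coverage(*names):
        # Union of all functions provided, pairwise overlaps, and anything left uncovered.
        covered = set().union(*(MODULES[n] for n in names))
        overlap = {f for n1 in names for n2 in names if n1 < n2
                   for f in MODULES[n1] & MODULES[n2]}
        missing = (ALL_1 | ALL_2 | ALL_3) - covered
        return covered, overlap, missing

    covered, overlap, missing = coverage("SO", "VFC")
    print(sorted(overlap))   # ['2a', '2c', '2d', '2j', '2k', '2l']
    print(sorted(missing))   # []  (all of 1, 2, and 3 covered)

    _, _, missing = coverage("SO", "APPC")
    print(sorted(missing))   # ['2e', '2f', '2i', '2m', '2n']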

...

Also, regardless of approach, VFC could work with DCAE and Policy on 2 b, i, m, n, and 3 f.
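
For concreteness, one way such an interaction could look is a simple closed loop in which DCAE surfaces an event, Policy evaluates it, and the controller/manager carries out the resulting lifecycle action. The event fields, policy rules, and action names below are illustrative assumptions, not defined ONAP interfaces:

    # Illustrative closed-loop sketch: DCAE surfaces an event, Policy decides on
    # an action, and the controller/manager (APPC or VFC) executes it. The event
    # shape, policy rules, and action names are assumptions for illustration only.
    from typing import Optional

    def policy_decision(event: dict) -> Optional[str]:
        # Toy policy evaluation: map an event type to a lifecycle action (or none).
        if event.get("type") == "vnf-heartbeat-missed":
            return "Restart"
        if event.get("type") == "high-load":
            return "ScaleOut"
        return None

    def closed_loop(event: dict, controller) -> None:
        # One pass of the loop: evaluate the event, then act on the affected VNF.
        action = policy_decision(event)
        if action is not None:
            controller.execute(action, vnf_id=event["vnf_id"])  # hypothetical interface

    # Example: a DCAE-style event triggers a restart through the controller/manager.
    # closed_loop({"type": "vnf-heartbeat-missed", "vnf_id": "vnf-001"}, controller)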


Deployment scenarios


We have seen three different deployment scenarios:

...