Introduction

The ONAP Operations Manager (OOM) is responsible for life-cycle management of the ONAP platform itself; components such as MSO, SDNC, etc. It is not responsible for the management of services, VNFs or infrastructure instantiated by ONAP or used by ONAP to host such services or VNFs. OOM uses the open-source Kubernetes container management system as a means to manage the Docker containers that compose ONAP where the containers are hosted either directly on bare-metal servers or on VMs hosted by a 3rd party management system. OOM ensures that ONAP is easily deployable and maintainable throughout its life cycle while using hardware resources efficiently. There are two deployment options for OOM:

  • A minimal deployment where single instances of the ONAP components are instantiated with no resource reservations, and
  • A production deployment where ONAP components are deployed with redundancy and anti-affinity rules such that single faults do not interrupt ONAP operation.
    When deployed as containers directly on bare metal, the minimal deployment option requires a single host (32GB memory with 12 vCPUs); however, further optimization should allow this deployment to target a laptop computer. Production deployments will require more resources as determined by anti-affinity and geo-redundancy requirements.

OOM deployments of ONAP provide many benefits:

  • Life-cycle Management Kubernetes is a comprehensive system for managing the life-cycle of containerized applications. Its use as a platform manager will ease the deployment of ONAP, provide fault tolerance and horizontal scalability, and enable seamless upgrades.
  • Hardware Efficiency ONAP can be deployed on a single host using less than 32GB of memory. As opposed to VMs that require a guest operating system be deployed along with the application, containers provide similar application encapsulation with neither the computing, memory and storage overhead nor the associated long term support costs of those guest operating systems. An informal goal of the project is to be able to create a development deployment of ONAP that can be hosted on a laptop.
  • Rapid Deployment With locally cached images ONAP can be deployed from scratch in 7 minutes. Eliminating the guest operating system results in containers coming into service much faster than a VM equivalent. This advantage can be particularly useful for ONAP where rapid reaction to inevitable failures will be critical in production environments.
  • Portability OOM takes advantage of Kubernetes' ability to be hosted on multiple hosted cloud solutions like Google Compute Engine, AWS EC2, Microsoft Azure, CenturyLink Cloud, IBM Bluemix and more.
  • Minimal Impact As ONAP is already deployed with Docker containers minimal changes are required to the components themselves when deployed with OOM.

Features of OOM:

  • Platform Deployment Automated deployment/un-deployment of ONAP instance(s), and automated deployment/un-deployment of individual platform components, using Docker containers and Kubernetes
  • Platform Monitoring & Healing Monitoring of platform state, platform health checks, fault tolerance and self-healing using Docker containers and Kubernetes
  • Platform Scaling Platform horizontal scalability through Docker containers and Kubernetes
  • Platform Upgrades Platform upgrades using Docker containers and Kubernetes
  • Platform Configurations Management of overall platform component configurations using Docker containers and Kubernetes
  • Platform Migrations Management of the migration of platform components using Docker containers and Kubernetes
    Please note that the ONAP Operations Manager does not provide support for containerization of services or VNFs that are managed by ONAP; OOM orchestrates the life-cycle of the ONAP platform components themselves.
Warning: Draft Content

This wiki is under construction


Container Background

Linux containers allow for an application and all of its operating system dependencies to be packaged and deployed as a single unit without including a guest operating system as done with virtual machines. The most popular container solution is Docker, which provides tools for container management like the Docker Host (dockerd), which can create, run, stop, move, or delete a container. Docker has a very popular registry of container images that can be used by any Docker system; however, in the ONAP context, Docker images are built by the standard CI/CD flow and stored in Nexus repositories. OOM uses the "standard" ONAP Docker containers and three new ones specifically created for OOM.

Containers are isolated from each other primarily via namespaces within the Linux kernel without the need for multiple guest operating systems. As such, multiple containers can be deployed with little overhead, such that all of ONAP can be deployed on a single host. With some optimization of the ONAP components (e.g. elimination of redundant database instances) it may be possible to deploy ONAP on a single laptop computer.

Life Cycle Management via Kubernetes

As with the VNFs deployed by ONAP, the components of ONAP have their own life-cycle where the components are created, run, healed, scaled, stopped and deleted. These life-cycle operations are managed by the Kubernetes container management system which maintains the desired state of the container system as described by one or more deployment descriptors - similar in concept to OpenStack HEAT Orchestration Templates. The following sections describe the fundamental objects managed by Kubernetes, the network these components use to communicate with each other and other entities outside of ONAP and the templates that describe the configuration and desired state of the ONAP components.

ONAP Components to Kubernetes Object Relationships

Kubernetes deployments consist of multiple objects:

  • nodes - a worker machine, either physical or virtual, that hosts multiple containers managed by kubernetes.
  • services - an abstraction of a logical set of pods that provide a micro-service.
  • pods - one or more (but typically one) container(s) that provide specific application functionality.
  • persistent volumes - one or more persistent volumes need to be established to hold non-ephemeral configuration and state data.

OOM uses these kubernetes objects as described in the following sections.
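
To illustrate how these objects relate, the following is a hypothetical service definition that would expose a MariaDB pod to the rest of its namespace; the names and port are examples only, not taken from the OOM repo:

```yaml
# Hypothetical example: a kubernetes Service fronting pods labelled app=mariadb.
apiVersion: v1
kind: Service
metadata:
  name: mariadb
spec:
  selector:
    app: mariadb      # the logical set of pods behind this service
  ports:
  - port: 3306        # port the service listens on
    targetPort: 3306  # port opened by the container
```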

Nodes

OOM works with both physical and virtual worker machines.  

  • Virtual Machine Deployments - If ONAP is to be deployed onto a set of virtual machines, the creation of the VMs is outside of the scope of OOM and could be done in many ways, such as:
    • manually, for example by a user using the OpenStack Horizon dashboard, or
    • automatically, for example with the use of an OpenStack Heat Orchestration Template which builds an ONAP stack, or
    • orchestrated, for example with Cloudify creating the VMs from a TOSCA template and controlling their life cycle for the life of the ONAP deployment.
  • Physical Machine Deployments - If ONAP is to be deployed onto physical machines there are several options but the recommendation is to use Rancher to associate hosts with a kubernetes cluster.

Services

...

Pods

...

OOM Networking with Kubernetes

  • DNS
  • Ports - Flattening the containers also exposes port conflicts between the containers, which need to be resolved.
  • Name Spaces
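
As a sketch of how such port conflicts can be resolved, a service of type NodePort maps a container port onto a port that is unique across the whole cluster; the numbers below are illustrative only:

```yaml
# Illustrative only: expose a namespace-local port on a unique cluster-wide nodePort.
apiVersion: v1
kind: Service
metadata:
  name: mso
spec:
  type: NodePort
  selector:
    app: mso
  ports:
  - port: 8080        # port as seen within the namespace
    nodePort: 30223   # example node-wide port; must not collide with other namespaces
```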


Kubernetes Deployment Specifications for ONAP

Each of the ONAP components is deployed as described in a deployment specification.  This specification documents key parameters and dependencies between the pods of an ONAP component such that kubernetes is able to repeatably start up the component.  The component artifacts are stored in the oom/kubernetes repo in ONAP gerrit. The mso project is a relatively simple example, so let's start there.

MSO Example

Within the oom/kubernetes/mso directory of the repo, one will find three files in YAML format:

The db-deployment.yaml file describes the deployment of the database component of mso.  Here are its contents:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mariadb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mariadb
  template:
    metadata:
      labels:
        app: mariadb
      name: mariadb
    spec:
      hostname: mariadb
      containers:
      - image: nexus3.onap.org:10001/mariadb:10.1.11
        name: "mariadb"
        env:
          - name: MYSQL_ROOT_PASSWORD
            value: password
          - name: MARIADB_MAJOR
            value: "10.1"
          - name: MARIADB_VERSION
            value: "10.1.11+maria-1~jessie"
        volumeMounts:
        - mountPath: /etc/mysql/conf.d
          name: mso-mariadb-conf
        - mountPath: /docker-entrypoint-initdb.d
          name: mso-mariadb-docker-entrypoint-initdb
        ports:
        - containerPort: 3306
          name: mariadb
        readinessProbe:
          tcpSocket:
            port: 3306
          initialDelaySeconds: 5
          periodSeconds: 10
      volumes:
        - name: mso-mariadb-conf
          hostPath:
            path: /dockerdata-nfs/onapdemo/mso/mariadb/conf.d
        - name: mso-mariadb-docker-entrypoint-initdb
          hostPath:
            path: /dockerdata-nfs/onapdemo/mso/mariadb/docker-entrypoint-initdb.d
      imagePullSecrets:
      - name: onap-docker-registry-key

The first part of the yaml file simply states that this is a deployment specification for a mariadb pod.

The spec section starts off with 'replicas: 1', which states that only 1 'replica' will be used here.  If one were to change the number of replicas to 3, for example, kubernetes would attempt to ensure that three replicas of this pod are operational at all times.  One can see that in a clustered environment the number of replicas should probably be more than 1, but for simple deployments 1 is sufficient.
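
For example, a clustered variant of the specification above might simply raise the replica count (a sketch, not a tested OOM configuration):

```yaml
spec:
  replicas: 3   # kubernetes keeps three mariadb pods running at all times
```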

The selector label is a grouping primitive of kubernetes, but this simple example doesn't exercise its full capabilities.

The template/spec section is where the key information required to start this pod is found.

  • image: is a reference to the location of the docker image in nexus3
  • name: is the name of the docker image
  • env: is a section that supports the creation of operating system environment variables within the container; these are specified as a set of key/value pairs.  For example, MYSQL_ROOT_PASSWORD is set to "password".
  • volumeMounts: allow for the creation of custom mount points
  • ports: define the networking ports that will be opened on the container.  Note that in the all-services.yaml file, ports defined here can be exposed outside of the ONAP component's namespace by creating a 'nodePort' - a mechanism used to resolve port duplication.
  • readinessProbe: is the mechanism kubernetes uses to determine the state of the container.
  • volumes: a location to define volumes required by the container, in this case configuration and initialization information.
  • imagePullSecrets: a key to access the nexus3 repo when pulling docker containers.
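
As an aside on readinessProbe, kubernetes supports probe types other than a TCP socket check; for a component with an HTTP interface, a probe such as the following could be used instead (the path and port here are hypothetical):

```yaml
readinessProbe:
  httpGet:
    path: /healthcheck   # hypothetical health endpoint
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
```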

Initialization Containers - ONAP has built-in temporal dependencies between containers on startup. Supporting these dependencies will likely result in multiple Kubernetes deployment specifications.
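
One way such a startup dependency could be expressed is with an init container that blocks until its prerequisite is reachable; this is a sketch only, and the helper image and command are illustrative:

```yaml
spec:
  initContainers:
  - name: wait-for-mariadb
    image: busybox       # illustrative helper image
    # Block pod startup until the mariadb service answers on port 3306.
    command: ['sh', '-c', 'until nc -z mariadb 3306; do sleep 2; done']
```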

Container Dependencies

Deployment Specs

Development Deployments

Production Deployments

  • Load Balancers
  • Horizontal Scaling
  • Stateless Pods

Kubernetes Deployments

The automated ONAP deployment depends on a fully functional kubernetes environment being available prior to ONAP installation. Fortunately, kubernetes is supported on a wide variety of systems such as Google Compute Engine, AWS EC2, Microsoft Azure, CenturyLink Cloud, IBM Bluemix and more.  If you're setting up your own kubernetes environment, please refer to ONAP on Kubernetes for a walk-through of how to set this environment up on several platforms.

ONAP Deployment Customization

...

ONAP 'OneClick' Deployment Walk-through

Once a kubernetes environment is available and the deployment artifacts have been customized for your location, ONAP is ready to be installed. 

The bash script createAll.bash is used to create an ONAP deployment with kubernetes. It has two primary functions:

  • Creating the namespaces used to encapsulate the ONAP components, and
  • Creating the services, pods and containers within each of these namespaces that provide the core functionality of ONAP.

Namespaces provide isolation between ONAP components, as ONAP release 1.0 contains duplicate applications (e.g. MariaDB) and port usage. As such, createAll.bash requires the user to enter a namespace prefix string (e.g. createAll.bash -n onap) that can be used to separate multiple deployments of onap. The result will be a set of 10 namespaces (e.g. onap-sdc, onap-aai, onap-mso, onap-message-router, onap-robot, onap-vid, onap-sdnc, onap-portal, onap-policy, onap-appc) being created within the kubernetes environment.  A prerequisite pod config-init (pod-config-init.yaml) requires editing against your OpenStack environment and deployment into the default namespace before running createAll.bash.
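
Each of these is an ordinary kubernetes Namespace object; for example, with the 'onap' prefix the mso namespace is equivalent to:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: onap-mso
```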

Within the namespaces are kubernetes services that provide external connectivity to pods that host Docker containers. The following is a list of the namespaces and the services within:

  • onap-aai
    • aai-service
    • hbase
    • model-loader-service
  • onap-appc
    • dbhost
    • dgbuilder
    • sdnctldb01
    • sdnctldb02
    • sdnhost
  • onap-message-router
    • dmaap
    • global-kafka
    • zookeeper
  • onap-mso
    • mariadb
    • mso
  • onap-policy
    • brmsgw
    • drools
    • mariadb
    • nexus
    • pap
    • pdp
    • pypdp
  • onap-portal
    • portalapps
    • portaldb
    • vnc-portal
  • onap-robot
    • robot
  • onap-sdc
    • sdc-be
    • sdc-cs
    • sdc-es
    • sdc-fe
    • sdc-kb
  • onap-sdnc
    • dbhost
    • sdnc-dgbuilder
    • sdnc-portal
    • sdnctldb01
    • sdnctldb02
    • sdnhost
  • onap-vid
    • vid-mariadb
    • vid-server

Note that services listed in italics are local to the namespace itself and not accessible from outside of the namespace.

Integration with MSB

The Microservices Bus Project provides facilities to integrate micro-services into ONAP and therefore needs to integrate into OOM - primarily through Consul which is the backend of MSB service discovery. The following is a brief description of how this integration will be done (thanks Huabing):

A registrator pushes the service endpoint info to MSB service discovery:

  • The needed service endpoint info is put into the kubernetes yaml file as environment variables, including service name, protocol, version, visual range, LB method, IP, port, etc.
  • OOM deploys/starts/restarts/scales in/scales out/upgrades ONAP components
  • The registrator watches kubernetes events
  • When an ONAP component instance has been started/destroyed by OOM, the registrator gets a notification from kubernetes
  • The registrator parses the service endpoint info from the environment variables and registers/updates/unregisters it with MSB service discovery
  • The MSB API Gateway uses the service endpoint info for service routing and load balancing.
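
In a deployment specification this endpoint metadata would appear as ordinary environment variables; the variable names below are hypothetical placeholders, as the actual keys are defined by the MSB registrator:

```yaml
env:
- name: SERVICE_NAME        # hypothetical key names; see the MSB documentation
  value: aai
- name: SERVICE_VERSION
  value: v8
- name: SERVICE_PROTOCOL
  value: REST
- name: SERVICE_PORT
  value: "8443"
```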

Details of the registration service API can be found at Microservice Bus API Documentation.

A preliminary view of the OOM-MSB integration is as follows:

A message sequence chart of the registration process:

MSB Usage Instructions

Pull and run the MSB docker containers:
sudo docker run -d --net=host --name msb_consul consul agent -dev
sudo docker run -d --net=host --name msb_discovery zhaohuabing/msb_discovery
sudo docker run -d --net=host -e "ROUTE_LABELS=visualRange:1" --name internal_msb_apigateway zhaohuabing/msb_apigateway 
Register a REST service to MSB via curl:
curl -X POST \
  -H "Content-Type: application/json" \
  -d '{"serviceName": "aai", "version": "v8", "url": "/aai/v8/","protocol": "REST", "path": "/aai/v8", "nodes": [ {"ip": "10.74.215.65","port": "8443"}]}' \
  "http://127.0.0.1:10081/api/microservices/v1/services"
Test the REST service via the internal API gateway:
curl http://127.0.0.1/aai/v8/cloud-infrastructure/cloud-regions


FAQ (Frequently Asked Questions)

Does OOM enable the deployment of VNFs on containers?

  • No. OOM provides a mechanism to instantiate and manage the ONAP components themselves with containers but does not provide a Multi-VIM capability such that VNFs can be deployed into containers.  The Multi VIM/Cloud Project may provide this functionality at some point.

DCAE has its own controller - how is this managed with OOM?

  • The DCAE controller will merge with OOM during the Amsterdam release as described in the Data Collection Analytics & Events Project.  In the short term the DCAE controller is problematic in a container environment as it directly interfaces to OpenStack and requests multiple VMs (e.g. CDAP, etc.). The short term proposal is to containerize the DCAE components and statically create them as part of the larger ONAP deployment. Advanced DCAE controller features like hierarchical and geographically diverse deployments need further investigation.
Related Tools


Current Limitations and Feature Requests

  • DCAE - The DCAE component is not only not containerized but also includes its own VM orchestration system. A possible solution is to not use the DCAE Controller but to port this controller's policies to Kubernetes directly, such as scaling CDAP nodes to match offered capacity.
  • Single Name Space
  • Deployment Parameter Optimization
  • Configuration Parameters

Currently ONAP configuration parameters are stored in multiple files; a solution to coordinate these configuration parameters is required. Kubernetes Config Maps may provide a solution or at least partial solution to this problem.
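
A minimal sketch of such a ConfigMap, with an illustrative parameter name and value:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: onap-config            # example name
data:
  openstack.endpoint: "http://example.com:5000/v2.0"   # illustrative parameter
```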

  • Centralized Parameter Control

  • Component Rationalization
    Duplicate containers - The VM structure of ONAP hides the internal container structure of each of the components, including the existence of duplicate containers such as MariaDB.
Jira Stories

