...

Code Block: (M)SO deployment specification excerpt
..
kind: Deployment
metadata:
  name: mso
..
spec:
..

      initContainers:
      - command:
        - /root/ready.py
        args:
        - --container-name
        - mariadb
..        
      containers:
      - command:
        - /tmp/start-jboss-server.sh
        image: {{ .Values.image.mso }}
        imagePullPolicy: {{ .Values.pullPolicy }}
        name: mso
..

Pod Placement Rules

OOM will use the rich set of Kubernetes node and pod affinity / anti-affinity rules to minimize the chance of a single failure resulting in a loss of ONAP service. Node affinity / anti-affinity is used to guide the Kubernetes orchestrator in the placement of pods on nodes (physical or virtual machines).  For example:

  • if a container uses Intel DPDK technology, the pod may declare an affinity to an Intel processor based node, or
  • geography-based node labels (such as the standard Kubernetes zone or region labels) may be used to ensure placement of a DCAE complex close to the VNFs generating high volumes of traffic, thus minimizing networking cost. Specifically, if nodes were pre-assigned the labels East and West, the pod deployment spec to distribute pods across these nodes would be:
Code Block
nodeSelector:
    failure-domain.beta.kubernetes.io/region: {{ .Values.location }}

Where "location: West" is specified in the values.yaml file used to deploy one DCAE cluster and "location: East" is specified in a second values.yaml file (see OOM Configuration Management for more information about configuration files like the values.yaml file).
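As a minimal sketch (assuming the top-level location key referenced by the template above), the two values.yaml files would differ only in this value:

Code Block
# values.yaml excerpt for the West DCAE cluster; the East cluster's file
# would carry "location: East" instead
location: West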

Node affinity can also be used to achieve geographic redundancy if pods are assigned to multiple failure domains. For more information refer to Assigning Pods to Nodes.  Kubernetes has a comprehensive system called Taints and Tolerations that can be used to force the container orchestrator to repel pods from nodes based on static events (an administrator assigning a taint to a node) or dynamic events (such as a node becoming unreachable or running out of disk space). There are no plans to use taints or tolerations in the ONAP Beijing release.
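For illustration only, since OOM has no plans to use this mechanism, an administrator could taint a node and a pod could declare a matching toleration as follows (the node name and taint key are hypothetical):

Code Block
# Taint a node so that only pods tolerating dedicated=dcae are scheduled on it:
#   kubectl taint nodes worker-1 dedicated=dcae:NoSchedule
# A pod tolerates the taint by declaring, in its spec:
tolerations:
- key: "dedicated"
  operator: "Equal"
  value: "dcae"
  effect: "NoSchedule"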

Pod affinity / anti-affinity is the concept of creating a spatial relationship between pods when the Kubernetes orchestrator assigns them to nodes (both initially and in operation), as explained in Inter-pod affinity and anti-affinity. For example, one might choose to co-locate all of the ONAP SDC containers on a single node as they are not critical runtime components and co-location minimizes overhead. On the other hand, one might choose to ensure that all of the containers in an ODL cluster (SDNC and APPC) are placed on separate nodes so that a node failure has minimal impact on the operation of the cluster. An example of pod affinity / anti-affinity is shown below:

Code Block: Pod Affinity / Anti-Affinity
apiVersion: v1
kind: Pod
metadata:
  name: with-pod-affinity
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: security
            operator: In
            values:
            - S1
        topologyKey: failure-domain.beta.kubernetes.io/zone
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: security
              operator: In
              values:
              - S2
          topologyKey: kubernetes.io/hostname
  containers:
  - name: with-pod-affinity
    image: gcr.io/google_containers/pause:2.0

This example contains both podAffinity and podAntiAffinity rules: the first rule is mandatory (requiredDuringSchedulingIgnoredDuringExecution) while the second is honored only when other scheduling considerations allow (preferredDuringSchedulingIgnoredDuringExecution).

Preemption

Another feature that may assist in achieving a repeatable deployment, even when faults have reduced the capacity of the cloud, is assigning priority to containers so that mission critical components can evict less critical ones. Kubernetes provides this capability with Pod Priority and Preemption.
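As a minimal sketch of how this could be applied (the class name and priority value are hypothetical, and the stable scheduling.k8s.io/v1 API group is assumed; earlier Kubernetes releases exposed this feature under alpha/beta versions):

Code Block
# A hypothetical PriorityClass for mission critical components:
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: onap-critical
value: 1000000
globalDefault: false
description: "Mission critical components that may preempt less critical pods."
---
# A pod opts in by naming the class in its spec:
apiVersion: v1
kind: Pod
metadata:
  name: critical-component
spec:
  priorityClassName: onap-critical
  containers:
  - name: critical-component
    image: gcr.io/google_containers/pause:2.0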

Prior to having more advanced production grade features available, the ability to at least re-deploy ONAP (or a subset of it) reliably provides a level of confidence that, should an outage occur, the system can be brought back on-line predictably.

The overall Epic for OOM based deployment of ONAP is ONAP JIRA OOM-6, with the following status:

(Jira pie chart: status of issues under Epic OOM-6.)

During the Beijing release the 'initContainers' constructs were updated from the previous beta implementation to the current released syntax under JIRA story OOM-406.

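For reference, the change moved init containers from the beta annotation-based form to a first-class pod spec field; a minimal sketch of the two forms, reusing the readiness command from the deployment excerpt above (the container name is hypothetical and other fields are elided):

Code Block
# Beta syntax: init containers declared as a JSON annotation
metadata:
  annotations:
    pod.beta.kubernetes.io/init-containers: '[
      {"name": "mso-readiness",
       "command": ["/root/ready.py", "--container-name", "mariadb"]}
    ]'
# Released syntax: a first-class field in the pod spec
spec:
  initContainers:
  - name: mso-readiness
    command: ["/root/ready.py", "--container-name", "mariadb"]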

...

Configure

Each project within ONAP has its own configuration data, generally consisting of: environment variables, configuration files, and database initial values.  Many technologies are used across the projects, resulting in significant operational complexity and an inability to apply global parameters across the entire ONAP deployment. OOM solves this problem by introducing a common configuration technology, Helm charts, which provide a hierarchical configuration with the ability to override values with higher level charts or command line options.  For example, if one wishes to change the OpenStack instance oam_network_cidr and ensure that all ONAP components reflect this change, one could change the vnfDeployment/openstack/oam_network_cidr value in the global configuration file as shown below:
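A minimal sketch of what such an entry might look like in the global configuration file (the CIDR value is a placeholder):

Code Block
vnfDeployment:
  openstack:
    oam_network_cidr: "10.0.0.0/16"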

...