NOTE: This is the discussion version of the Application Service Descriptor (ASD) onboarding IM. Please refer to the released version 1.0 in the "Clean" wiki subfolder.

Introduction:

Today, many complex applications consist of a mixed, complex workload described by many Kubernetes resources, e.g. intended to run on a certain cluster. In order to deploy such an application, the orchestration task requires dealing with different abstraction layers of resources, different template system mappings, and application packaging. Another challenge is keeping the mapping of cloud infrastructure feature enhancements into abstract resource templates up to date.

Application metadata:

In order to describe the containerized application to an orchestrator, there is a need for some metadata to accompany the bundled cloud-native deployment artifacts (e.g. Helm files) and images.

A basic decision that will affect this metadata is what exactly the cloud-native deployment artifacts are, and what implications that has for the way the orchestrator communicates with the cluster.

The primary decision is to require that applications are deployed using Helm v3 or later (ref: https://helm.sh/docs/helm/helm/), and therefore the deployment artifacts are one or more Helm Charts. Furthermore, in order not to limit future choices of tooling, or to let tooling choices create dependencies between the orchestrator and the underlying Kubernetes cluster, it is assumed that the interface between the orchestrator and the cluster is the Kubernetes API (ref: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/).


This implies that the orchestrator

  • Has an embedded Helm v3 client and is able to use it to deploy the artifacts embedded in the package.
  • If it requires any information present in the K8s resource descriptions, it is able to pre-render the Helm Charts and extract such information, as sketched below.
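
As a rough sketch (not part of the ASD model itself), an orchestrator could pre-render a chart with the Helm client and parse the resulting manifests; the chart, release, and resource names below are invented for illustration:

```yaml
# Hypothetical illustration: pre-rendering a chart without installing it,
# e.g. "helm template my-release ./example-nf-chart --values overrides.yaml",
# produces plain Kubernetes manifests that the orchestrator can parse, such as:
apiVersion: v1
kind: Service
metadata:
  name: example-nf-oam              # assumed name; not defined by the ASD model
  labels:
    app.kubernetes.io/name: example-nf
spec:
  type: LoadBalancer
  ports:
    - port: 443
      targetPort: 8443
```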

The most widely used format for describing telco virtualized applications is the ETSI MANO Virtualized Network Function Descriptor (VNFD), specified in ETSI NFV SOL001. This descriptor was created as a vendor-neutral way to fully describe a virtualized application, with highly detailed requirements ranging from placement rules and virtual components such as virtual NICs, to CPU and memory requirements, scaling policies, input parameters, and monitoring parameters.

ETSI sought to extend the existing VNFD to cover containerized workloads. It has been pointed out that such efforts have noticeable drawbacks:

  • The overall proposed model essentially duplicates the workload description that K8s (and Helm) provide, but so far with fewer features.
  • The ETSI NFV VNFD (VM-based and containerized) definitions could conflict with the Helm Chart definitions, which can cause orchestration confusion and/or failure.
  • Non-ETSI-based CNF models and orchestration call for a simplified CNF descriptor.

Therefore, this proposal introduces a new, simple descriptor, the Application Service Descriptor (ASD), containing the minimum information needed by the orchestrator, plus pointers to the cloud-native artifacts and code (including configuration) required for the LCM implementation. An ASD can describe a complete application / NF, or parts of an application / NF.

The ASD allows a clean separation between high-level orchestration, focused on service and resource models, and cloud-native application deployment, implemented via Helm Charts.

Application Service Descriptor model

The tables below summarize the Application Service Descriptor contents.

The overall objective is to keep the items in the descriptor to the bare minimum information, and not duplicate any attributes that might be instead extracted from the Helm Charts. This helps maintain the principle that Helm Charts are the primary deployment artifact for a containerized application and avoids any possible source of error or confusion that such duplication would cause.

Application Service Descriptor (ASD) Information Element (top level)

| Attribute | Qualifier | Cardinality | Type | Description |
|---|---|---|---|---|
| asdId | M | 1 | Identifier | Identifier of this ASD information element. This attribute shall be globally unique. The format will be defined in the data model specification phase. |
| asdVersion | M | 1 | String | Identifies the version of the ASD. |
| asdSchemaVersion | M | 1 | Version | Specifies the version of the ASD's schema (if we modify an ASD field definition, add/remove field definitions, etc.). |
| asdProvider | M | 1 | String | Provider of the AS and of the ASD. |
| asdApplicationName | M | 1 | String | Name to identify the Application Service. Invariant for the AS lifetime. |
| asdApplicationVersion | M | 1 | Version | Specifies the version of the Application (so this changes if software, DeploymentArtifacts, ASD values, ... change). |
| asdApplicationInfoName | M | 0..1 | String | Human-readable name for the Application Service. Can change during the AS lifetime. |
| asdInfoDescription | M | 0..1 | String | Human-readable description of the AS. Can change during the AS lifetime. |
| asdExtCpd | M | 0..N | datatype.ExtCpd | Describes the externally exposed connection points of the application. |
| enhancedClusterCapabilities | M | 0..1 | datatype.enhancedClusterCapabilities | A list of expected capabilities of the target Kubernetes cluster to aid placement of the application service on a suitable cluster. |
| deploymentItems | M | 1..N | DeploymentItem | Deployment artifacts. |

...

The attribute “enhancedClusterCapabilities” provides information that is used to aid placement of the application service on a suitable cluster.

Finally, “deploymentItems” is a list of deployment items, i.e. Helm Charts, that together can deploy an application. The table below shows the information element of these deployment item descriptors.

Deployment Item Information Element

| Attribute | Qualifier | Cardinality | Content | Description |
|---|---|---|---|---|
| deploymentItemId | M | 1 | Identifier | The identifier of this deployment item. |
| artifactType | M | 1 | String (enum) | Specifies the artifact type, e.g. Helm chart, helmfile, CRD. One of the following values can be chosen: "helm_chart", "helmfile", "crd", "terraform". |
| artifactId | M | 1 | String | Reference to a DeploymentArtifact. It can refer to a URI or file path. |
| deploymentOrder | M | 0..1 | Integer | Specifies the deployment stage that the DeploymentArtifact belongs to. A lower value specifies that the DeploymentArtifact belongs to an earlier deployment stage, i.e. needs to be installed prior to DeploymentArtifacts with higher deploymentOrder values. If not specified, the deployment of the DeploymentArtifact can be done in arbitrary order, as decided by the orchestrator. |
| lifecycleParameters | M | 0..N | String | The list of parameters in the values.yaml of the referenced chart that can be overridden at deployment time. |

Note 1: The cloud-native CNF resource information model is specified by the respective cloud-native specifications; the Kubernetes documentation is one example.
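
To make the model concrete, the following is a minimal, hypothetical YAML sketch of an ASD with its top-level attributes and one deployment item; all identifiers, names, and file paths are invented, and the actual serialization format is defined in the data model specification phase:

```yaml
# Hypothetical example values only; the serialization format is defined separately.
asdId: 123e4567-e89b-12d3-a456-426614174000
asdVersion: "1.0"
asdSchemaVersion: "1.0"
asdProvider: ExampleVendor
asdApplicationName: example-nf
asdApplicationVersion: "2.3.1"
asdApplicationInfoName: Example Network Function
asdInfoDescription: Example application service packaged as Helm charts
deploymentItems:
  - deploymentItemId: deployment-item-1
    artifactType: helm_chart
    artifactId: Artifacts/Deployment/HELM/example-nf-2.3.1.tgz   # assumed package path
    deploymentOrder: 1
    lifecycleParameters:
      - global.imageRegistry
      - replicaCount
```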

asdExtCpd Information Element

| Attribute | Qualifier | Cardinality | Content | Description |
|---|---|---|---|---|
| id | M | 1 | String | The identifier of this extCpdData. |
| description | M | 1 | String | Describes the service exposed. |
| virtualLinkRequirement | M | 1..N | String | Refers in an abstract way to the network or multiple networks that the ExtCpd shall be exposed on (e.g. OAM, EndUser, backhaul, LI). The intent is to enable a network operator to decide which actual VPN to connect the extCpd to. See NOTE 1. |
| interfaceOrder | M | 0..1 | Integer, greater than or equal to zero | Mandatory attribute for a secondary network interface (not applicable for a primary network interface). Defines the order in which the additional/secondary network interface declaration appears in the pod manifest. Note that an SR-IOV mated vNIC pair shall be modelled by a single vduCp, with its order value set to the lower of the two vNICs' order numbers; the two vNICs are expected to appear in consecutive order on the compute instance, and to be attached to the same network(s). See NOTE 2. |
| networkInterfaceRealizationRequirements | M | 0..1 | datatype.networkInterfaceRealizationRequirements | Details container implementation specific requirements on the NetworkAttachmentDefinition. See NOTE 2 and NOTE 3. |
| inputParamMappings | M | 0..1 | datatype.extCpd.ParamMappings | Information on what parameters are required to be provided to the deployment tools for the asdExtCpd instance. |
| resourceMapping | M | 0..1 | String | Kubernetes API resource name for the resource manifest (as specified e.g. in the Helm chart) for the service, ingress or pod resource declaring the network interface. Together with knowledge of the namespace, this enables the orchestrator to look up the runtime data related to the extCpd. |

NOTE 1: Corresponds more or less to a virtual_link requirement in ETSI NFV SOL001.
NOTE 2: Applies only for ExtCpds representing secondary network interfaces in a pod.
NOTE 3: Several ExtCpds may refer to the same additional network interface requirements.
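
The following hypothetical YAML sketch shows how two asdExtCpd entries might look, one for a primary (service-based) interface and one for a secondary (Multus) interface; all names and parameter strings are invented for illustration:

```yaml
# Hypothetical asdExtCpd entries; names and values are illustrative only.
asdExtCpd:
  - id: oam-service
    description: OAM interface exposed via a Kubernetes LoadBalancer service
    virtualLinkRequirement:
      - OAM
    resourceMapping: example-nf-oam          # assumed K8s Service name from the chart
    inputParamMappings:
      loadbalancerIP: "example-nf:oam.service.loadBalancerIP"
  - id: user-plane
    description: Secondary (Multus) interface for user-plane traffic
    virtualLinkRequirement:
      - EndUser
    interfaceOrder: 1
    networkInterfaceRealizationRequirements:
      trunkMode: "false"
      interfaceType: direct.userdriver
    inputParamMappings:
      nadNames: ["example-nf:userplane.networks.nadName"]
      nadNamespace: "example-nf:userplane.networks.namespace"
```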

...

networkInterfaceRealizationRequirements Information Element

| Attribute | Qualifier | Cardinality | Content | Description |
|---|---|---|---|---|
| trunkMode | M | 0..1 | "false", "true" | If not present or set to "false", this interface shall connect to a single network. If set to "true", the network interface shall be a trunk interface (connects to multiple VLANs). |
| ipam | M | 0..1 | "infraProvided", "orchestrated", "userManaged" | The default value ("infraProvided") means that the CNI specifies how IPAM is done and assigns the IP address to the pod interface. "userManaged" indicates that IPAM is done by the application inside the pod. |
| interfaceType | M | 0..1 | "kernel.netdev", "direct.userdriver", "direct.kerneldriver", "direct.bond", "userspace" | The default value is "kernel.netdev". |
| interfaceOptions | M | 0..N | "virtio", "memif" | Alternative vNIC configurations the network interface is verified to work with. |
| networkRedundancy | M | 0..1 | "infraProvided", "activePassiveBond", "activeActiveBond", "activePassiveL3", "activeActiveL3", "bondCni", "Left", "Right" | "infraProvided" means that the application sees one vNIC, but the infrastructure provides redundant access to the network via both switch planes. "Left" and "Right" indicate a vNIC connected non-redundantly to the network via one specific (left or right) switch plane. All other values indicate a mated vNIC pair in the pod, one connecting to the network via the left switch plane and the other via the right switch plane, with the application using them together as a redundant network interface via a particular redundancy method that needs to be accommodated in the node infrastructure. "activeActiveBond": the bonded left/right links must be part of a multi-chassis LAG in active-active mode. "activePassiveBond": interfaces bonded in active-passive mode in the application with move of the bond MAC address; no specific requirements on the DC fabric. "activePassiveL3": move of the application IP address. "activeActiveL3": anycast/ECMP. "bondCni": the mated pair of network interfaces is used via a third bond-CNI-based network interface. |
| nicOptions | M | 0..N | e.g. "i710", "mlx-cx5v" | NICs with a direct user-space driver that the application is verified to work with. Allowed values from the ETSI registry. |
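
As an illustration, a networkInterfaceRealizationRequirements fragment for a user-plane interface realized as a mated pair of SR-IOV attachments might look as follows; the chosen values are assumptions, not recommendations:

```yaml
# Illustrative only; attribute names follow the table above, values are assumed.
networkInterfaceRealizationRequirements:
  trunkMode: "false"
  ipam: infraProvided
  interfaceType: direct.userdriver
  networkRedundancy: activeActiveBond   # mated pair bonded by the application
  nicOptions:
    - i710
    - mlx-cx5v
```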


datatype.ExtCpd.ParamMappings Information Element

| Attribute | Qualifier | Cardinality | Content | Description |
|---|---|---|---|---|
| loadbalancerIP | M | 0..1 | String | When present, this attribute specifies the name of the deployment artifact input parameter through which the orchestrator can configure the loadbalancerIP parameter of the K8s service or ingress controller that the ExtCpd represents. See Note 2. |
| externalIPs | M | 0..N | String | When present, this attribute specifies the name of the deployment artifact input parameter through which the orchestrator can configure the externalIPs parameter of the K8s service or ingress controller, or the pod network interface annotation, that the ExtCpd represents. The parameter name and the provided IP address(es) value will be passed to the deployment tool when deploying the DeploymentArtifacts. See Note 2. |
| nadNames | M | 0..N | String | Specifies, for an ExtCpd representing a secondary network interface, the name(s) of the deployment artifact input parameter(s) through which the orchestrator can provide the names of the network attachment definitions (NADs) the orchestrator has created as the base for the network interface the ExtCpd represents. It is expected that the NADs themselves have been created prior to the deployment of the deployment artifacts. See Notes 1, 2 and 3. |
| nadNamespace | M | 0..1 | String | Specifies, for an ExtCpd representing a secondary network interface, the name of the deployment artifact input parameter through which the orchestrator can provide the namespace where the NADs are located. The attribute may be omitted if the namespace is the same as the application namespace. See Note 2. |

Note 1: When the ExtCpd represents a network-redundant/mated pair of SR-IOV interfaces, references to 2 or 3 related NADs need to be passed, while for other interface types only one NAD reference needs to be passed.

Note 2: The format of the Content strings is specific to each orchestration templating technology used (Helm, Terraform, etc.). Currently only a format for use with Helm charts is suggested: "<helmchartname>:[<subchartname>.]0..N[<parentparamname>.]0..N<paramname>". Whether the optional parts of the format are present depends on how the parameter is declared in the Helm chart. An example is: "chartName:subChart1.subChart2.subChart3.Parent1.Parent2.Parent3.parameter".

Note 3: A direct attached (passthrough) network interface, such as an SR-IOV interface, attaches to a network via only one of the two switch planes in the infrastructure.
When using a direct attached network interface, a pod therefore commonly uses a mated pair of SR-IOV network attachments, where each interface attaches to the same network but via a different switch plane.
The application uses the mated pair of network interfaces as a single logical "switch-path-redundant" network interface, which is represented by a single ExtCpd.
There is also a case where a third "bond" attachment interface is used in the pod, bonding the two direct interfaces so that the application does not need to handle the redundancy itself; the application just uses the bond interface.
In this case all three attachments together make up a logical "switch-path-redundant" network interface represented by a single ExtCpd. When three NADs are used in the ExtCpd, the NAD implementing the bond attachment interface is provided through the parameter indicated in the third position of the nadNames attribute.
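
The sketch below illustrates (hypothetically) how the parameter-name strings in inputParamMappings, written in the Helm format suggested in Note 2, relate to the values the orchestrator passes at deployment time; the chart name and parameter paths are invented:

```yaml
# Hypothetical ASD fragment: parameter *names* in the suggested Helm format (Note 2).
inputParamMappings:
  loadbalancerIP: "example-nf:oam.service.loadBalancerIP"
  nadNames: ["example-nf:userplane.networks.nadName"]
  nadNamespace: "example-nf:userplane.networks.namespace"
---
# Hypothetical values the orchestrator would supply for those parameters at deploy
# time, e.g. as overrides to the chart's values.yaml:
oam:
  service:
    loadBalancerIP: 10.0.0.10
userplane:
  networks:
    nadName: userplane-nad        # NAD assumed to be created by the orchestrator beforehand
    namespace: example-namespace
```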


enhancedClusterCapabilities Information Element

| Attribute | Qualifier | Cardinality | Content | Description |
|---|---|---|---|---|
| id | M | 1 | String | ASD-local unique name for the enhancedClusterCapabilities instance. |
| minKernelVersion | M | 1 | String | Describes the minimal required kernel version, e.g. 4.15.0. Coded as displayed by the Linux command uname -r. |
| requiredKernelModules | M | 0..N | String | Required kernel modules, coded as listed by the Linux lsmod command, e.g. ip6_tables, cryptd, nf_nat. |
| conflictingKernelModules | M | 0..N | String | Kernel modules which must not be present in the target environment, coded as listed by the Linux lsmod command, e.g. ip6_tables, cryptd, nf_nat. Example: the Linux kernel SCTP module, which may conflict with the use of a proprietary user-space SCTP stack provided by the application. |
| requiredCustomResources | M | 0..N | Structure (inlined) | Lists the required custom resource types in the target environment, identifying each by the "kind" and "apiVersion" fields in the K8s resource manifests and in the application. The list shall include those custom resource types which are not delivered with the application. Example: requiredCustomResources: - {kind: "Redis", apiVersion: "kubedb.com/v1alpha1"} |
| >kind | M | 0..1 | String | Kind of the custom resource. |
| >apiVersion | M | 0..1 | String | apiVersion of the custom resource. |
| clusterLabels | M | 0..N | String | Allows associating arbitrary labels with clusters. These can indicate special infrastructure capabilities (e.g. NW acceleration, GPU compute). The intent of these labels is to serve as a set of values that can help in application placement decisions. Whether a label is a hard requirement or a preference can be indicated with -m (Mandatory: deployment is not attempted if such support is not available in the target system) or -p (Preference: the orchestrator will try to select a system with the specific capability, but if none is found it will attempt deployment on a system without it). clusterLabels follow the Kubernetes label key/value nomenclature (https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/). It is recommended that labels follow a standardised meaning, e.g. for node features (https://kubernetes-sigs.github.io/node-feature-discovery/v0.9/get-started/features.html#table-of-contents). Example: clusterLabels: - feature.node.kubernetes.io/cpu-cpuid.AESNI: true |
| requiredPlugin | M | 0..N | Structure (inlined) | A list of the names and versions of the required K8s plugins (e.g. multus v3.8). |
| >requiredPluginName | M | 0..1 | String | The name of the required K8s plugin (e.g. multus). |
| >requiredPluginVersion | M | 0..1 | String | The version of the required plugin (e.g. 3.8). |
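
Finally, a hypothetical enhancedClusterCapabilities fragment combining the attributes above; all values are illustrative only:

```yaml
# Illustrative values only; attribute names follow the table above.
enhancedClusterCapabilities:
  id: basic-capabilities
  minKernelVersion: 4.15.0
  requiredKernelModules:
    - ip6_tables
    - nf_nat
  conflictingKernelModules:
    - sctp                       # assumed conflict with a user-space SCTP stack
  requiredCustomResources:
    - kind: Redis
      apiVersion: kubedb.com/v1alpha1
  clusterLabels:
    - "feature.node.kubernetes.io/cpu-cpuid.AESNI: true"
  requiredPlugin:
    - requiredPluginName: multus
      requiredPluginVersion: "3.8"
```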


References: