Note that this wiki is no longer maintained; please refer to the wiki in the IM subcommittee space: Application Service Descriptor (ASD) onboarding IM

Introduction:

Today, many complex applications consist of a mixed, complex workload that is described by many Kubernetes resources, e.g. to be run on a certain cluster. In order to deploy such an application, the orchestration task requires dealing with different abstraction layers of resources, different template-system mappings, and application packaging. Another challenge is keeping up with enhancements to cloud infrastructure features and mapping them into abstract resource templates.

The proposal focuses on two main parts:

  1. Packaging cloud applications in a single, well-defined bundle to enable distribution, provisioning and installation.
  2. Application metadata and cloud-native tooling.



An overview presentation is available below; it was presented at the CNF Task Force in May 2021.

https://wiki.onap.org/download/attachments/93004634/ONAP_Nokia_Ericsson_K8S_application_descriptor_and_packaging_r2.pdf?version=1&modificationDate=1621622424000&api=v2

Application Packaging:

In order to facilitate compatibility with ETSI, ONAP and other telco standards, the CSAR (NFV SOL004 ed351 or ed421) packaging format is used. The only modification is to allow the proposed Application Service Descriptor (ASD) to be the top-level descriptor of the package, instead of an NSD or VNFD as defined in SOL001.

Additionally, the following directories will exist inside the CSAR:

  • deployment_artifacts: where all deployment files go, like Helm charts.
  • images: holds container images referenced from the main application and dependencies, in OCI format (ref: https://github.com/opencontainers/image-spec).

Note that the “images” directory may be empty, or only contain a part of the images required for the whole application. This might happen for example in application update packages, where the container images may have already been onboarded onto the associated registry. In case images are present, they must be referenced in the CSAR manifest.
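For illustration, a package following this format might be laid out as sketched below. File names are hypothetical; only the deployment_artifacts and images directories and the roles of the ASD and the CSAR manifest are prescribed by this proposal.

  my-application.csar
    asd.yaml                        -- the ASD, top-level descriptor of the package
    my-application.mf               -- CSAR manifest; any bundled images must be referenced here
    deployment_artifacts/
      my-application-1.2.3.tgz      -- Helm chart(s)
    images/
      my-container/                 -- container image in OCI image layout format
        oci-layout
        index.json
        blobs/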

This application packaging format is designed to support a single, container-based deployment “flavor” or “type”. If an application has multiple such deployment types, there should be multiple packages, with their own appropriate descriptors.

An orchestrator is expected to load any container images present in the package onto the correct registry for the target cluster(s) before attempting to deploy the application. Since the images are in OCI format, they should follow the OCI image layout specification, and MUST contain a “name” annotation in their index.json layout descriptor. This name should be used as the “tag” value when the orchestrator provisions the image in the corresponding registry.
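As an illustrative sketch of the image layout requirement, the index.json of a bundled OCI image could carry the name as an annotation. It is assumed here that the standard OCI annotation org.opencontainers.image.ref.name is the "name" annotation referred to above; the digest and size values are placeholders.

  {
    "schemaVersion": 2,
    "manifests": [
      {
        "mediaType": "application/vnd.oci.image.manifest.v1+json",
        "digest": "sha256:<manifest-digest>",
        "size": 1234,
        "annotations": {
          "org.opencontainers.image.ref.name": "my-application/my-container:1.2.3"
        }
      }
    ]
  }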


For the Application Service Descriptor (ASD) onboarding packaging format, see Application Service Descriptor (ASD) Onboarding Packaging Format.


Application metadata and cloud-native tooling:

In order to describe the containerized application to an orchestrator, there is a need for some metadata to accompany the bundled cloud-native deployment artifacts (e.g. Helm files) and images.

A basic decision that will affect this metadata is what exactly the cloud-native deployment artifacts are, and what implications that has for the way the orchestrator communicates with the cluster.

The primary decision is to require that applications are deployed using Helm v3 or later (ref: https://helm.sh/docs/helm/helm/), and therefore the deployment artifacts are one or more Helm Charts. Furthermore, in order not to limit future tooling choices, and to avoid tooling choices creating dependencies between the orchestrator and the underlying Kubernetes cluster, it is assumed that the interface between the orchestrator and the cluster is the Kubernetes API (ref: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/).



Figure xx: Interaction between orchestrator and cluster



This implies that the orchestrator:

  • Has an embedded Helm v3 client and is able to use it to deploy the artifacts embedded in the package.
  • If it requires any information present in the K8s resource descriptions, is able to pre-render the Helm Charts and extract such information.

The most widely used format for describing telco virtualized applications is the ETSI MANO Virtualized Network Function Descriptor (VNFD), specified in ETSI NFV SOL001. This descriptor was created as a vendor-neutral way to fully describe a virtualized application, with highly detailed requirements covering placement rules, the various virtual components (such as virtual NICs, CPU and memory requirements), scaling policies, input parameters, monitoring parameters, etc.

ETSI sought to extend the existing VNFD to cover containerized workloads. It has been pointed out that such efforts have noticeable drawbacks:

  • The overall proposed model essentially duplicates the workload description that K8s (and Helm) provide, but so far with fewer features.
  • The ETSI NFV VNFD (VM-based and containerized) definitions could conflict with the Helm Chart definitions, which can cause orchestration confusion and/or failure.
  • Non-ETSI-based CNF models and orchestration call for a simplified CNF descriptor.

Therefore, this proposal introduces a new, simple descriptor, the Application Service Descriptor (ASD), with the minimum information needed by the orchestrator and pointers to the cloud-native artifacts and code (including configuration) required for the LCM implementation. An ASD can describe a complete application / NF, or parts of an application / NF.

The ASD allows a clean separation between high-level orchestration, focused on service and resource models, and cloud-native application deployment, implemented via Helm Charts.

Application Service Descriptor model

The tables below summarize the Application Service Descriptor contents.

The overall objective is to keep the items in the descriptor to the bare minimum of information, and not to duplicate any attributes that could instead be extracted from the Helm Charts. This maintains the principle that Helm Charts are the primary deployment artifact for a containerized application and avoids any possible source of error or confusion that such duplication would cause.

Application Service Descriptor (ASD) Information Element (top level)

Attribute | Qualifier | Cardinality | Type | Description
asdId | M | 1 | Identifier | Identifier of this ASD information element. This attribute shall be globally unique. The format will be defined in the data model specification phase.
asdVersion | M | 1 | String | Identifies the version of the ASD.
asdSchemaVersion | M | 1 | Version | Specifies the version of the ASD's schema (if we modify an ASD field definition, add/remove field definitions, etc.).
asdProvider | M | 1 | String | Provider of the AS and of the ASD.
asdApplicationName | M | 1 | String | Name to identify the Application Service. Invariant for the AS lifetime.
asdApplicationVersion | M | 1 | Version | Specifies the version of the Application (so, if software, DeploymentArtifacts, ASD values, ... change, this changes).
asdApplicationInfoName | M | 0..1 | String | Human-readable name for the Application Service. Can change during the AS lifetime.
asdInfoDescription | M | 0..1 | String | Human-readable description of the AS. Can change during the AS lifetime.
asdExtCpd | M | 0..N | datatype.ExtCpd | Describes the externally exposed connection points of the application.
enhancedClusterCapabilities | M | 0..1 | datatype.enhancedClusterCapabilities | A list of expected capabilities of the target Kubernetes cluster to aid placement of the application service on a suitable cluster.
deploymentItems | M | 1..N | DeploymentItem | Deployment artifacts.


The initial attributes essentially describe the application – a unique identifier, a schema version (that enables versioning the data model of the descriptor itself), and basic metadata, like application name and version and human-readable descriptive fields.
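To make this concrete, a minimal sketch of these top-level identification and metadata attributes in a YAML rendering of the ASD could look as follows. The concrete serialization, key casing and identifier format are not defined by this proposal, and all values below are hypothetical; the mandatory deploymentItems list, asdExtCpd and enhancedClusterCapabilities are illustrated further below.

  asdId: 550e8400-e29b-41d4-a716-446655440000      # globally unique identifier (format to be defined)
  asdVersion: "1.0"
  asdSchemaVersion: "1.0"
  asdProvider: MyCompany
  asdApplicationName: my-application
  asdApplicationVersion: "1.2.3"
  asdApplicationInfoName: "My Application Service"
  asdInfoDescription: "Example containerized network function"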

The attribute "asdExtCpd" will be used for exposing endpoints (to enable orchestrators to string together or optimally place linked applications).

The attribute "enhancedClusterCapabilities" provides information used to aid placement of the application service on a suitable cluster.

Finally, “deploymentItems” is a list of deployment items, i.e. Helm Charts, that together can deploy an application. The table below shows the information element of these deployment item descriptors.

Deployment Item Information Element

Attribute | Qualifier | Cardinality | Content | Description
deploymentItemId | M | 1 | Identifier | The identifier of this deployment item.
artifactType | M | 1 | String | Specifies the artifact type. One of the following values can be chosen: "helm_chart", "helmfile", "crd", "terraform".
artifactId | M | 1 | String | Reference to a DeploymentArtifact. It can refer to a URI or a file path.
deploymentOrder | M | 0..1 | Integer | Specifies the deployment stage that the DeploymentArtifact belongs to. A lower value specifies that the DeploymentArtifact belongs to an earlier deployment stage, i.e. it needs to be installed prior to DeploymentArtifacts with higher deploymentOrder values. If not specified, the deployment of the DeploymentArtifact can be done in arbitrary order, as decided by the orchestrator.
lifecycleParameters | M | 0..N | String | The list of parameters that can be overridden at deployment time (e.g., the list of parameters in values.yaml which can be overridden at deployment time).
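For illustration, an application whose CRDs must be installed before its main Helm chart might declare two deployment items and use deploymentOrder to sequence them. Chart names and paths are hypothetical, and the YAML rendering is only indicative.

  deploymentItems:
    - deploymentItemId: my-application-crds
      artifactType: crd
      artifactId: deployment_artifacts/my-application-crds
      deploymentOrder: 1
    - deploymentItemId: my-application
      artifactType: helm_chart
      artifactId: deployment_artifacts/my-application-1.2.3.tgz
      deploymentOrder: 2
      lifecycleParameters:
        - "my-application:global.registry"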


asdExtCpd Information Element

Attribute | Qualifier | Cardinality | Content | Description
id | M | 1 | String | The identifier of this extCpdData.
description | M | 1 | String | Describes the service exposed.
virtualLinkRequirement | M | 1..N | String | Refers in an abstract way to the network or multiple networks that the ExtCpd shall be exposed on (e.g. OAM, EndUser, backhaul, LI, etc.). The intent is to enable a network operator to decide which actual VPN to connect the ExtCpd to. See NOTE 1.
networkInterfaceRealizationRequirements | M | 0..1 | datatype.networkInterfaceRealizationRequirements | Details container-implementation-specific requirements on the NetworkAttachmentDefinition. See NOTE 2 and NOTE 3.
inputParamMappings | M | 0..1 | datatype.ExtCpd.ParamMappings | Information on what parameters are required to be provided to the deployment tools for the asdExtCpd instance.
resourceMapping | M | 0..1 | String | Kubernetes API resource name for the resource manifest (as specified e.g. in the Helm chart) for the service, ingress or pod resource declaring the network interface. Together with knowledge of the namespace, this enables the orchestrator to look up the runtime data related to the ExtCpd.

NOTE 1: Corresponds more or less to a virtual_link requirement in ETSI NFV SOL001.
NOTE 2: Applies only for ExtCpds representing secondary network interfaces in a pod.
NOTE 3: Several ExtCpds may refer to the same additional network interface requirements.
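As an example, an ExtCpd for an O&M service exposed via a Kubernetes LoadBalancer Service might be rendered as follows. All names and values are hypothetical, and the YAML form is only indicative.

  asdExtCpd:
    - id: oam-service
      description: "O&M interface of the application"
      virtualLinkRequirement:
        - OAM
      resourceMapping: my-application-oam                      # K8s Service resource name rendered by the Helm chart
      inputParamMappings:
        loadbalancerIP: "my-application:oam.loadBalancerIP"    # Helm parameter path, see datatype.ExtCpd.ParamMappings below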

networkInterfaceRealizationRequirements Information Element

Attribute | Qualifier | Cardinality | Content | Description
trunkMode | M | 0..1 | "false", "true" | If not present or set to "false", the interface shall connect to a single network. If set to "true", the network interface shall be a trunk interface (connects to multiple VLANs).
ipam | M | 0..1 | "infraProvided", "orchestrated", "userManaged" | The default value ("infraProvided") means that the CNI specifies how IPAM is done and assigns the IP address to the pod interface.
interfaceType | M | 0..1 | "kernel.netdev", "direct.userdriver", "direct.kerneldriver", "direct.bond", "userspace" | This attribute is applicable for passthrough and memif interfaces. The default value is "kernel.netdev".
interfaceOptions | M | 0..N | "virtio", "memif" | Alternative vNIC configurations the network interface is verified to work with.
interfaceRedundancy | M | 0..1 | "infraProvided", "activePassiveBond", "activeActiveBond", "activePassiveL3", "activeActiveL3", "bondCni", "Left", "Right" | "infraProvided" means that the application sees one vNIC but the infrastructure provides redundant access to the network via both switch planes. "Left" and "Right" indicate a vNIC connected non-redundantly to the network via one specific (left or right) switch plane. All other values indicate a mated vNIC pair in the pod, one connecting to the network via the left switch plane and the other via the right switch plane, with the application using them together as a redundant network interface using a particular redundancy method that needs to be accommodated in the node infrastructure. "activeActiveBond": the bonded left/right links must be part of a multi-chassis LAG in active-active mode. "activePassiveBond": interfaces bonded in active-passive mode in the application, with move of the bond MAC address; no specific requirements on the DC fabric. "activePassiveL3": move of the application IP address. "activeActiveL3": anycast/ECMP. "bondCni": the mated pair of network interfaces is used via a third, bond-CNI-based network interface.
nicOptions | M | 0..N | e.g. "i710", "mlx-cx5v" | NICs with a direct user-space driver that the application is verified to work with. Allowed values from the ETSI registry.
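As an illustration, the requirements for an ExtCpd representing a redundant pair of SR-IOV user-space interfaces might be expressed as follows; this is a hypothetical sketch using the values listed above.

  networkInterfaceRealizationRequirements:
    trunkMode: "false"
    ipam: infraProvided
    interfaceType: direct.userdriver
    interfaceOptions:
      - virtio
    interfaceRedundancy: activeActiveBond
    nicOptions:
      - i710
      - mlx-cx5v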

datatype.ExtCpd.ParamMappings Information Element

Attribute | Qualifier | Cardinality | Content | Description
loadbalancerIP | M | 0..1 | String | When present, this attribute specifies the name of the deployment artifact input parameter through which the orchestrator can configure the loadBalancerIP parameter of the K8s Service or ingress controller that the ExtCpd represents. See Note 2.
externalIPs | M | 0..N | String | When present, this attribute specifies the name of the deployment artifact input parameter through which the orchestrator can configure the externalIPs parameter of the K8s Service or ingress controller, or the pod network interface annotation, that the ExtCpd represents. The parameter name and the provided IP address(es) value will be passed to the deployment tool when deploying the DeploymentArtifacts. See Note 2.
nadNames | M | 0..N | String | Specifies, for an ExtCpd representing a secondary network interface, the name(s) of the deployment artifact input parameter(s) through which the orchestrator can provide the names of the network attachment definitions (NADs) the orchestrator has created as a base for the network interface the ExtCpd represents. It is expected that the NADs themselves have been created prior to the deployment of the deployment artifacts. See Notes 1, 2 and 3.
nadNamespace | M | 0..1 | String | Specifies, for an ExtCpd representing a secondary network interface, the name of the deployment artifact input parameter through which the orchestrator can provide the namespace where the NADs are located. The attribute may be omitted if the namespace is the same as the application namespace. See Note 2.

Note 1: When the ExtCpd represents a network-redundant/mated pair of SR-IOV interfaces, references to 2 or 3 related NADs need to be passed, while for other interface types only one NAD reference needs to be passed.

Note 2: The format of the Content strings is specific to the orchestration templating technology used (Helm, Terraform, etc.). Currently only a format for use with Helm charts is suggested: "<helmchartname>:[<subchartname>.]0..N[<parentparamname>.]0..N<paramname>". Whether the optional parts of the format are present depends on how the parameter is declared in the Helm chart. An example is: "chartName:subChart1.subChart2.subChart3.Parent1.Parent2.Parent3.parameter".

Note 3: A direct attached (passthrough) network interface, such as an SR-IOV interface, attaches to a network via only one of the two switch planes in the infrastructure.
When using a direct attached network interface, a pod therefore commonly uses a mated pair of SR-IOV network attachments, where each interface attaches to the same network but via a different switch plane.
The application uses the mated pair of network interfaces as a single logical "switch-path-redundant" network interface, and this is represented by a single ExtCpd.
There is also a case where a third "bond" attachment interface is used in the pod, bonding the two direct interfaces so that the application does not need to handle the redundancy issues; the application just uses the bond interface.
In this case all three attachments together make up a logical "switch-path-redundant" network interface represented by a single ExtCpd. When three NADs are used in the ExtCpd, the NAD implementing the bond attachment interface is provided through the parameter indicated in the third place in the nadNames attribute.
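Tying the notes together, the inputParamMappings for an ExtCpd representing a bonded pair of SR-IOV interfaces might be sketched as follows. Chart and parameter names are hypothetical; the strings follow the Helm-oriented format described in Note 2, and the third nadNames entry carries the bond NAD as described in Note 3.

  inputParamMappings:
    nadNames:
      - "my-application:sriovLeftNadName"
      - "my-application:sriovRightNadName"
      - "my-application:bondNadName"
    nadNamespace: "my-application:nadNamespace"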


enhancedClusterCapabilities Information Element

Attribute | Qualifier | Cardinality | Content | Description
minKernelVersion | M | 1 | String | Describes the minimal required kernel version, e.g. 4.15.0. Coded as displayed by the Linux command uname -r.
requiredKernelModules | M | 0..N | String | Required kernel modules, coded as listed by the Linux lsmod command, e.g. ip6_tables, cryptd, nf_nat etc.
conflictingKernelModules | M | 0..N | String | Kernel modules which must not be present in the target environment, coded as listed by the Linux lsmod command, e.g. ip6_tables, cryptd, nf_nat etc. Example: the Linux kernel SCTP module, which may conflict with the use of a proprietary user-space SCTP stack provided by the application.
requiredCustomResources | M | 0..N | Structure (inlined) | Lists the custom resource types required in the target environment, identifying each by the "kind" and "apiVersion" fields used in the K8s resource manifests and in the application. The list shall include those custom resource types which are not delivered with the application. Example: requiredCustomResources: - {kind: "Redis", apiVersion: "kubedb.com/v1alpha1"}
>kind | M | 0..1 | String | Kind of the custom resource.
>apiVersion | M | 0..1 | String | apiVersion of the custom resource.
clusterLabels | M | 0..N | String | This attribute allows arbitrary labels to be associated with clusters. These can indicate special infrastructure capabilities (e.g., NW acceleration, GPU compute, etc.). The intent of these labels is to serve as a set of values that can help in application placement decisions. clusterLabels follow the Kubernetes label key/value nomenclature (https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/). It is recommended that labels follow a standardised meaning, e.g. for node features (https://kubernetes-sigs.github.io/node-feature-discovery/v0.9/get-started/features.html#table-of-contents). Example: clusterLabels: - feature.node.kubernetes.io/cpu-cpuid.AESNI: true
requiredPlugin | M | 0..N | Structure (inlined) | A list of the names and versions of the required K8s plugins (e.g. multus v3.8).
>requiredPluginName | M | 0..1 | String | The name of the required K8s plugin (e.g. multus).
>requiredPluginVersion | M | 0..1 | String | The version of the required plugin (e.g. 3.8).
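Putting these attributes together, a hypothetical enhancedClusterCapabilities entry could look like this (all values are illustrative only):

  enhancedClusterCapabilities:
    minKernelVersion: 4.15.0
    requiredKernelModules:
      - ip6_tables
      - nf_nat
    conflictingKernelModules:
      - sctp
    requiredCustomResources:
      - {kind: "Redis", apiVersion: "kubedb.com/v1alpha1"}
    clusterLabels:
      - feature.node.kubernetes.io/cpu-cpuid.AESNI: true
    requiredPlugin:
      - requiredPluginName: multus
        requiredPluginVersion: "3.8"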



3 Comments

  1. Thinh wrote the following CNF Descriptor Proposal for ASD. The above wiki page is based on the proposal. 

  2. Thinh Nguyenphu, about the following statement, we need to rephrase it. Could you please take a look at the comments?


    The next two attributes “extraServiceRequirement” and “enhancedClusterCapabilities” will be used for multi-app or multi-cloud orchestration – a description of exposed endpoints (to enable orchestrators to string together or optimally place linked applications), and two fields that list extra capabilities required from clusters.

    → I think we decided the "extraServiceRequirement" attribute is no longer needed (i.e., it is removed). 

    → I think a description of exposed endpoints should be represented by the "asdExtCpd". 


    Thanks. 

    Byung


    1. I updated the text. Thanks.  Thinh