Active and Available Inventory (AAI) needs to be updated when a Resource Bundle is instantiated in one of the cloud regions managed by the k8splugin.

This requires that:

  1. The k8splugin provide an API to get the status of a running instance.
  2. The status fields be mapped to the AAI data model.
  3. A REST API call be made to AAI when the status of resources changes (a sketch follows this list).
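For illustration, requirement 3 amounts to a plain REST PUT against the AAI vserver resource. The Go sketch below shows the shape of such a call; the host, API version (v14), credentials, and X-FromAppId value are placeholders, not values mandated by this design.

AAI vserver PUT (sketch)
package aai

import (
	"bytes"
	"crypto/tls"
	"fmt"
	"net/http"
)

// updateVserver PUTs a vserver object into AAI using the standard
// cloud-infrastructure path. body is the JSON payload for the vserver.
func updateVserver(cloudOwner, region, tenant, vserverID string, body []byte) error {
	url := fmt.Sprintf(
		"https://aai.onap:8443/aai/v14/cloud-infrastructure/cloud-regions/cloud-region/%s/%s/tenants/tenant/%s/vservers/vserver/%s",
		cloudOwner, region, tenant, vserverID)

	req, err := http.NewRequest(http.MethodPut, url, bytes.NewReader(body))
	if err != nil {
		return err
	}
	// AAI expects these headers on every request.
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Accept", "application/json")
	req.Header.Set("X-FromAppId", "k8splugin")     // illustrative app id
	req.Header.Set("X-TransactionId", "k8s-aai-1") // illustrative transaction id
	req.SetBasicAuth("AAI", "AAI") // ONAP demo credentials; replace in practice

	client := &http.Client{Transport: &http.Transport{
		// Demo only: AAI typically runs with self-signed certificates.
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 300 {
		return fmt.Errorf("AAI update failed: %s", resp.Status)
	}
	return nil
}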


Heatbridge code (non-K8S)

The Robot test suite uses a heatbridge Python script to generate the AAI updates. The SO OpenStack adapter also has built-in Java code to perform the AAI update.  See:  Code which performs AAI Update (HeatBridge)

Example of AAI data for vFW (non-K8S) use case

Robot test suite examples

Here is an example from a vFW instantiation test (different from the one above) showing the AAI bulk PUT request that the Robot heatbridge code uses to update AAI.  See: Example of vFW AAI Update PUT request via Robot

This shows example AAI contents when the vFWCL test is executed; part of the test runs Heatbridge, which performs an AAI update to ensure 'vserver' objects are present.  See:  Example of AAI data for vFWCL with Robot

SO OpenStack adapter examples

Here is an example of the PUT request to AAI performed by SO:  Example AAI update PUT command - using SO heatbridge code path

Here is an example of the AAI contents after the SO OpenStack adapter has performed the heatbridge update:  Example of AAI update results - using SO based heatbridge method


Structure of Resources

Here is how an instance created by the k8splugin is tracked:

Instance Body
{
  "id": "fnKPvVAL",
  "request": {
    "rb-name": "edgex",
    "rb-version": "v1",
    "profile-name": "profile2",
    "cloud-region": "k8sregionone",
    "labels": null
  },
  "namespace": "testns2",
  "resources": [
    {
      "GVK": {
        "Group": "",
        "Version": "v1",
        "Kind": "Service"
      },
      "Name": "profile2-edgex-ui"
    },
    {
      "GVK": {
        "Group": "apps",
        "Version": "v1beta2",
        "Kind": "Deployment"
      },
      "Name": "profile2-edgex-vault"
    }
  ]
}
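
For illustration, this instance body maps naturally onto Go structs like the following. The type names and layout are hypothetical (the actual k8splugin types may differ), but the JSON tags mirror the example above.

Instance body as Go structs (sketch)
package instance

// Hypothetical types mirroring the JSON above; not the actual k8splugin API.
type GVK struct {
	Group   string `json:"Group"`
	Version string `json:"Version"`
	Kind    string `json:"Kind"`
}

type Resource struct {
	GVK  GVK    `json:"GVK"`
	Name string `json:"Name"`
}

type InstanceRequest struct {
	RBName      string            `json:"rb-name"`
	RBVersion   string            `json:"rb-version"`
	ProfileName string            `json:"profile-name"`
	CloudRegion string            `json:"cloud-region"`
	Labels      map[string]string `json:"labels"`
}

type Instance struct {
	ID        string          `json:"id"`
	Request   InstanceRequest `json:"request"`
	Namespace string          `json:"namespace"`
	Resources []Resource      `json:"resources"`
}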


Each Resource is tracked as two parts that uniquely identify it in a given namespace:

Name and GVK

GVK
{
  "Group": "apps",
  "Version": "v1",
  "Kind": "Deployment"
}
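
This GVK shape corresponds to Kubernetes' own GroupVersionKind type from apimachinery, so namespace, name, and GVK together can serve as a unique key. A minimal sketch (the key format is illustrative):

Resource key (sketch)
package tracking

import "k8s.io/apimachinery/pkg/runtime/schema"

// resourceKey uniquely identifies a tracked resource within a namespace.
func resourceKey(namespace, name string, gvk schema.GroupVersionKind) string {
	return namespace + "/" + name + "/" + gvk.String()
}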

Structure of Status

The structure of status fields is not finalized yet. However, we would like to include the entire status as returned by Kubernetes for the object.

This means that the status can vary based on the type of object being queried.
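
Because the shape of the status depends on the Kind, one way to capture the entire status as returned by Kubernetes is to read the status subtree of an unstructured object. A minimal sketch, assuming the object was fetched via the dynamic client:

Raw status extraction (sketch)
package status

import "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"

// rawStatus returns the object's status block verbatim, whatever its Kind.
// The bool reports whether a status field was present at all.
func rawStatus(obj *unstructured.Unstructured) (map[string]interface{}, bool, error) {
	return unstructured.NestedMap(obj.Object, "status")
}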

Deployment Status

status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2019-07-19T21:48:55Z"
    lastUpdateTime: "2019-07-19T21:48:55Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2019-07-19T21:47:50Z"
    lastUpdateTime: "2019-07-19T21:48:55Z"
    message: ReplicaSet "profile2-edgex-mongo-d64f6c7c8" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1

Pod Status

status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2019-07-19T21:47:51Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2019-07-19T21:48:55Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2019-07-19T21:48:55Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2019-07-19T21:47:50Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://eb3f5ae847a5a3aa38c95cba3aea0bf6a15fce12bcd220206de8b513c481154f
    image: edgexfoundry/docker-edgex-mongo:0.8.0
    imageID: docker-pullable://edgexfoundry/docker-edgex-mongo@sha256:e2cf9380555867ed747be392284117be3b50cfdff04e13e6f57cb51d86f331b3
    lastState: {}
    name: edgex-mongo
    ready: true
    restartCount: 0
    state:
      running:
        startedAt: "2019-07-19T21:48:27Z"
  hostIP: 172.25.55.139
  phase: Running
  podIP: 10.42.0.13
  qosClass: BestEffort
  startTime: "2019-07-19T21:47:51Z"


Given the disparate nature of these statuses, we need to filter the list of resources whose status we update and find a way to summarize each status into a simple Ready|Pending|Failed value.

For example (a sketch follows this list):

  1. A Deployment status would be READY if replicas == readyReplicas.
  2. A Pod status would be READY if all of its containerStatuses have ready: true.
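
A possible summarization, sketched in Go against the client-go API types; the Summary type, constants, and function names are illustrative, not a finalized design.

Status summarization (sketch)
package status

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
)

type Summary string

const (
	Ready   Summary = "Ready"
	Pending Summary = "Pending"
	Failed  Summary = "Failed"
)

// SummarizeDeployment reports Ready once every desired replica is ready.
func SummarizeDeployment(d *appsv1.Deployment) Summary {
	if d.Status.Replicas == d.Status.ReadyReplicas {
		return Ready
	}
	return Pending
}

// SummarizePod reports Ready only when every container reports ready.
func SummarizePod(p *corev1.Pod) Summary {
	if p.Status.Phase == corev1.PodFailed {
		return Failed
	}
	for _, cs := range p.Status.ContainerStatuses {
		if !cs.Ready {
			return Pending
		}
	}
	return Ready
}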



Comments

  1. FYI Jacqueline Beaulac, another piece of external data that is mapped into AAI.

    The "status update" part seems like it might be opaque in AAI, but it has internal structure that could be interesting for the information model.


  2. I have modified the information about the objects that we should fill in for K8s as well.

    Example of AAI data for vFWCL with Robot

    Based on what we have in K8s, it looks like we should complete the following objects from A&AI:

    generic-vnf and vf-module are created automatically when we instantiate a service instance and its objects through VID. We need to create vnfc, vserver and l3-network with the corresponding sub-objects, which would enable the use of other features existing in ONAP. The most important question is how to map vf-module and vserver to existing resources in K8s in order to keep the same or similar operations that we can do with ONAP on OpenStack and K8s resources.

  3. For OpenStack resources, the deployment unit is a Heat stack that creates VMs. The Heat stack is mapped to vf-module, while a VM is mapped to v-server and vnfc. When we scale OpenStack resources we scale the vf-module, so we create a new Heat stack.

    Today for K8s our deployment unit is an artificial Instance object, which in fact creates different K8s resources such as deployments. The equivalent of a Heat stack is a K8s deployment. If we limited each Instance object to a single deployment, we could have a similar situation. As a consequence, we would then map a pod to v-server, so scaling a deployment/vf-module would result in new v-servers/pods being created. The only difference is that today, scaling OpenStack resources results in a new Heat stack and a new vf-module each time. Since in K8s scaling happens per deployment and we want to use the native scaling mechanisms of K8s, we would not want to create a new Instance (and a new vf-module) each time we scale a K8s deployment; thus scaling K8s resources will not result in a new vf-module, which could be a problem.

    generic-vnf → Container of Instance

    vf-module → Instance with max 1 deployment

    v-server/vnfc → pod


    Alternatively, a K8sPlugin Instance could be mapped to generic-vnf with one deployment. Then each new pod created for the deployment would be represented as a new vf-module, so scaling the deployment would result in new pods (new vf-modules) being created - which is similar to how scaling works for OpenStack today. The consequence is that v-server and vnfc would be mapped to containers in pods (which is much closer to the modelling of resources in NFV nomenclature), but it may bring problems with the assignment of IP addresses (which are defined at the pod level, not the container level), and we do not report the container level in the resources list given by the k8s plugin API.

    generic-vnf → Instance with max 1 deployment

    vf-module → pod

    v-server/vnfc → container


    The second approach seems much more natural, but there is a problem with the representation of container-level resources.

    Eric Multanen, Kiran Kamineni, Srinivasa Addepalli: let me know how you plan to map A&AI objects to K8s resources, since this does not look obvious...

    1. Hi all,

      I am not a modeling person, so take what I write with a grain of salt. As a prerequisite, let me paste the descriptions of the above-mentioned entities:

      generic-vnf = the VNF

      vnfc = a VNF is composed of VNF components (vnfcs)

      vf-module = a deployment unit of VNFC

      v-server = a virtual machine/VM

      Only based on the descriptions above (and not looking into the needed properties) it seems like:

      A helm VNF deployment → generic-vnf

      Kubernetes deployment/replicaset/.. → vnfc

      Pod → vf-module

      Not sure if individual containers running inside a pod should be mapped explicitly (maybe pod mapping is enough).

      I don't know much about the K8sPlugin or the subject matter discussed, so I might be wrong here. Maybe seeking advice from modeling sub-committee members would be beneficial.

      I paste a work-in-progress diagram of the AAI model here (the portion relevant to the discussion).

  4. Deployment Status
    Deployment status shall include the port that is visible to the outside world. I would assume that this should have the external IP address and destination port that are used to program Istio.
  5. Pod Status
    I guess it should have not only one IP address, but all IP addresses that are reachable from other pods. If the pod has three Ethernet interfaces, it should list all of them, each consisting of an interface name, an IPv4 address, and an IPv6 address.