After EMCO has been installed in a central cluster and some 'edge' clusters have been prepared, the basic operational sequences described in this section are used to onboard clusters and to create and deploy composite applications.

EMCO API

The EMCO REST API is the primary interface to EMCO.

View the EMCO API documentation with the swagger editor:  https://editor.swagger.io/?url=https://raw.githubusercontent.com/onap/multicloud-k8s/master/docs/emco_apis.yaml

A Postman collection can be found here:  https://github.com/onap/multicloud-k8s/blob/master/docs/EMCO.postman_collection.json

The EMCO REST API is the foundation for the other interaction facilities like the EMCO CLI and EMCO GUI.
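
Most EMCO resources follow a common request body shape: a metadata section (name, description, and user data fields) plus, for many resources, a spec section.  As a minimal sketch, the body for POST /v2/projects might look like the following (the wire format is JSON; YAML is used here for consistency with the emcoctl examples later on this page, so verify field names against the API documentation above):

    # hypothetical body for POST /v2/projects
    metadata:
      name: proj1                     # resource name, used in subsequent URL paths
      description: example project    # free-form description
      userData1: ""                   # optional user-defined fields
      userData2: ""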

EMCO CLI

EMCO has a CLI tool called emcoctl.  More information can be found here: https://github.com/onap/multicloud-k8s/tree/master/src/tools/emcoctl
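
emcoctl reads the EMCO service endpoints from a config file and applies resource files against the REST API.  Each document in a resource file names an anchor (mirroring the REST URL path) plus the request body.  A minimal sketch, assuming a project named proj1 (check the anchor paths against the release in use):

    # creates the project proj1 via POST to the 'projects' anchor
    version: emco/v2
    resourceContext:
      anchor: projects
    metadata:
      name: proj1
      description: example project

A file like this would typically be applied with emcoctl --config emco-cfg.yaml apply -f project.yaml; see the emcoctl README linked above for the exact invocation and config file format.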

EMCO GUI

EMCO has a GUI - details:  TBD

EMCO Setup

The EMCO architecture is extensible via controllers that handle specific placement and configuration (i.e., action) operations.  The EMCO orchestrator communicates with these controllers via a gRPC interface.  EMCO provides a controller API that allows the administrator to register these controllers with EMCO, supplying the necessary connection information (name, host, port) as well as the controller type and relative priority.

The EMCO rsync microservice also exposes its API via gRPC to the EMCO microservices.  So, while rsync is not a placement or action controller, it is also registered with the controller API so that EMCO microservices that interact with rsync as gRPC clients can obtain the gRPC connection details in the same manner as with other controllers.

The sequence diagram illustrates the process of registering rsync with the orchestrator via the Controller API.  The diagram also shows two scenarios of how the rsync registration information is used.  In the first case, the ncm component obtains the rsync controller record to set up its own gRPC connection table when it needs to communicate with rsync to deploy network intents.  In the second case, the orchestrator obtains the rsync client connection - which will already be in its internal client table - during the sequence of installing a composite application.

[Sequence diagram: Register rsync via Controller API.  Participants: Admin, Controller API, Distributed Application Scheduler (orchestrator), Network Configuration Manager (ncm), EMCO DB (mongo), AppContext (etcd), Resource Synchronizer (rsync).]
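
In emcoctl form, registering rsync might look like the following sketch.  Since rsync is not a placement or action controller, no type or priority is supplied; the host and port values are illustrative and depend on the deployment:

    # register rsync so EMCO microservices can look up its gRPC endpoint
    version: emco/v2
    resourceContext:
      anchor: controllers
    metadata:
      name: rsync        # the name other microservices use to look up the record
    spec:
      host: rsync        # service name reachable from the EMCO microservices
      port: 9031         # rsync gRPC port (deployment specific)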

Controller Registration

As mentioned above, EMCO provides for the addition of controllers to perform specific placement and action operations on a Composite Application prior to it being deployed to a set of clusters.

This sequence diagram illustrates how action and placement controllers are registered with the orchestrator.  Also shown is the part of a sequence where the orchestrator is preparing the AppContext for a composite application and the controllers are invoked in priority order to update the AppContext per their specific function.

[Sequence diagram: Register placement and action controllers via Controller API.  Participants: Admin, Controller API, Distributed Application Scheduler (orchestrator), EMCO DB (mongo), AppContext (etcd), Placement Controller(s), Action Controller(s).]
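
A hedged sketch of registering one placement and one action controller follows; the names, hosts, ports, and priority values are illustrative, and the exact priority semantics should be confirmed against the API documentation:

    # a placement controller - participates in selecting clusters
    version: emco/v2
    resourceContext:
      anchor: controllers
    metadata:
      name: sample-placement-controller    # illustrative name
    spec:
      host: sample-placement
      port: 9040
      type: placement    # invoked during placement decisions
      priority: 1        # relative invocation order among controllers

    ---
    # an action controller - updates AppContext resources before deployment
    version: emco/v2
    resourceContext:
      anchor: controllers
    metadata:
      name: sample-action-controller       # illustrative name
    spec:
      host: sample-action
      port: 9041
      type: action
      priority: 1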

Onboard Clusters

Clusters are onboarded to EMCO by first creating a Cluster Provider and then adding Clusters to the Cluster Provider.

When a cluster is created, the KubeConfig file for that cluster is provided as part of the multi-part POST call to the Cluster API.
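
A sketch of these two steps in emcoctl form, assuming a provider named provider1, a cluster named edge01, and a kubeconfig at ./edge01-kubeconfig (emcoctl uploads the referenced file as the multi-part content):

    # create the cluster provider
    version: emco/v2
    resourceContext:
      anchor: cluster-providers
    metadata:
      name: provider1

    ---
    # create a cluster under the provider; the kubeconfig is uploaded as a file
    version: emco/v2
    resourceContext:
      anchor: cluster-providers/provider1/clusters
    metadata:
      name: edge01
    file: ./edge01-kubeconfig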

Additionally, once a Cluster is created, labels and key value pairs may be added to the Cluster via the API.  Clusters can be specified by label when preparing placement intents.
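
Continuing the sketch above, a label that placement intents can later select on might be added like this (the label name is illustrative):

    # add a label to the cluster; placement intents can select clusters by label
    version: emco/v2
    resourceContext:
      anchor: cluster-providers/provider1/clusters/edge01/labels
    label-name: edge-cluster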

The sequence diagram illustrates the process of onboarding a cluster and a couple of examples of how other sequences in the EMCO system obtain cluster information during operation.

[Sequence diagram: Onboard clusters.  Participants: Admin, Cluster API, Cluster Manager (clm), Distributed Application Scheduler (orchestrator), EMCO DB (mongo), AppContext (etcd), Resource Synchronizer (rsync), Edge Cluster (K8s API).]

Create Network Intents

(show how network intents are defined and deployed to clusters)
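
Pending that write-up, a heavily hedged sketch of what defining and applying a virtual network intent through the ncm API might look like, reusing the provider1/edge01 cluster from above (anchor paths and spec fields should be verified against the ncm API):

    # define a virtual network on an onboarded cluster (illustrative values)
    version: emco/v2
    resourceContext:
      anchor: cluster-providers/provider1/clusters/edge01/networks
    metadata:
      name: emco-private-net
    spec:
      cniType: ovn4nfv              # CNI that realizes the network
      ipv4Subnets:
      - name: subnet1
        subnet: 10.10.20.0/24
        gateway: 10.10.20.1/24

    ---
    # apply the cluster's network intents (ncm prepares the AppContext
    # and invokes rsync, as in the registration sequence above)
    version: emco/v2
    resourceContext:
      anchor: cluster-providers/provider1/clusters/edge01/apply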

Create Composite Application

(show how composite applications are prepared along with the intents required to deploy them)
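
Pending that write-up, a heavily hedged sketch of the main resources involved, assuming project proj1, one app packaged as a Helm chart, and the provider1/edge01 cluster label from above (all anchors and field names should be checked against the swagger documentation):

    # composite application shell
    version: emco/v2
    resourceContext:
      anchor: projects/proj1/composite-apps
    metadata:
      name: collection-app
    spec:
      version: v1

    ---
    # an app within the composite app; the Helm chart is uploaded as a file
    version: emco/v2
    resourceContext:
      anchor: projects/proj1/composite-apps/collection-app/v1/apps
    metadata:
      name: http-server
    file: ./http-server.tgz

    ---
    # generic placement intent selecting clusters by label
    version: emco/v2
    resourceContext:
      anchor: projects/proj1/composite-apps/collection-app/v1/generic-placement-intents
    metadata:
      name: collection-placement-intent

    ---
    version: emco/v2
    resourceContext:
      anchor: projects/proj1/composite-apps/collection-app/v1/generic-placement-intents/collection-placement-intent/app-intents
    metadata:
      name: http-server-placement
    spec:
      app-name: http-server
      intent:
        allOf:
        - provider-name: provider1
          cluster-label-name: edge-cluster

    ---
    # deployment intent group tying the intents together for deployment
    version: emco/v2
    resourceContext:
      anchor: projects/proj1/composite-apps/collection-app/v1/deployment-intent-groups
    metadata:
      name: collection-dig
    spec:
      profile: collection-profile    # a composite profile; creation not shown here
      version: r1
      override-values: []

    ---
    # associate the generic placement intent with the deployment intent group
    version: emco/v2
    resourceContext:
      anchor: projects/proj1/composite-apps/collection-app/v1/deployment-intent-groups/collection-dig/intents
    metadata:
      name: collection-intents
    spec:
      intent:
        genericPlacementIntent: collection-placement-intent

Profiles (a composite profile plus a per-app profile customizing each Helm chart) are also required before deployment; they follow the same anchor pattern and are omitted here for brevity.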

Basic Composite Application Lifecycle

(show instantiate, terminate, status operations of a composite application)
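
Pending that write-up: the lifecycle operations are POST calls on the deployment intent group, which in emcoctl form would be anchors without bodies, roughly as sketched below (some releases may also require an approve step before instantiation; status is a GET on a similar .../status anchor):

    # instantiate: orchestrator prepares the AppContext, invokes the registered
    # placement and action controllers, and calls rsync to deploy to the clusters
    version: emco/v2
    resourceContext:
      anchor: projects/proj1/composite-apps/collection-app/v1/deployment-intent-groups/collection-dig/instantiate

    ---
    # terminate: rsync removes the deployed resources from the clusters
    version: emco/v2
    resourceContext:
      anchor: projects/proj1/composite-apps/collection-app/v1/deployment-intent-groups/collection-dig/terminate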
