After EMCO has been installed in a central cluster and some 'edge' clusters have been prepared, clusters can be onboarded and composite applications can be created and deployed. This section describes the basic operational sequences involved.
The EMCO REST API is the primary interface to EMCO.
View the EMCO API documentation with the swagger editor: https://editor.swagger.io/?url=https://raw.githubusercontent.com/onap/multicloud-k8s/master/docs/emco_apis.yaml
A Postman collection can be found here: https://github.com/onap/multicloud-k8s/blob/master/docs/EMCO.postman_collection.json
The EMCO REST API is the foundation for the other interaction facilities like the EMCO CLI and EMCO GUI.
EMCO has a CLI tool called emcoctl. More information can be found here: https://github.com/onap/multicloud-k8s/tree/master/src/tools/emcoctl
EMCO has a GUI - details: TBD
The EMCO architecture is extensible via controllers, which handle specific placement and configuration (i.e. action) operations. The EMCO orchestrator communicates with these controllers via a gRPC interface. EMCO supports a controller API that allows the administrator to register these controllers with EMCO, providing the necessary connection information (name, host, port) as well as the controller type and relative priority.
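As a sketch, a controller registration body built from the parameters named above might look like the following. The endpoint path, the metadata/spec layout, and the priority ordering are assumptions here; the swagger document (emco_apis.yaml) is the authoritative schema.

```python
import json

# Hypothetical endpoint -- check emco_apis.yaml for the real path.
CONTROLLER_ENDPOINT = "/v2/controllers"

def make_controller_registration(name, host, port, ctrl_type=None, priority=None):
    """Build a registration body for the controller API.

    Placement and action controllers supply a type and priority;
    rsync registers with connection information only.
    """
    spec = {"host": host, "port": port}
    if ctrl_type is not None:
        spec["type"] = ctrl_type      # e.g. "placement" or "action"
    if priority is not None:
        spec["priority"] = priority   # relative ordering among controllers
    return {"metadata": {"name": name}, "spec": spec}

# Example: register a hypothetical action controller.
body = make_controller_registration("ovnaction", "ovnaction-svc", 9053,
                                    ctrl_type="action", priority=1)
print(json.dumps(body, indent=2))
```

The same helper covers the rsync case by omitting `ctrl_type` and `priority`, since rsync registers with connection information only.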
The EMCO rsync microservice also exposes its API via gRPC to the other EMCO microservices. So, while rsync is neither a placement nor an action controller, it is also registered via the controller API, so that EMCO microservices that interact with rsync as gRPC clients can obtain the gRPC connection details in the same manner as with other controllers.
The following sequence diagram illustrates the process of registering rsync with the orchestrator via the controller API. The diagram also shows two scenarios in which the rsync registration information is used. In the first case, the ncm component obtains the rsync controller record to set up its own gRPC connection table when it needs to communicate with rsync to deploy network intents. In the second case, the orchestrator obtains the rsync client connection - which will already be in its internal client table - during the sequence of installing a composite application.
@startuml
skinparam roundcorner 20
title Register rsync via Controller API
actor Admin
box "Distributed Application Scheduler\n(orchestrator)" #LightBlue
participant Controller_API
participant GRPC_Server_info
participant scheduler
end box
box "Network Configuration Manager\n(ncm)" #LightBlue
participant GRPC_Conns_ncm
participant scheduler_ncm
end box
box "EMCO DB" #LightGreen
database mongo
end box
box "AppContext" #LightGreen
database etcd
end box
box "Resource Synchronizer\n(rsync)" #LightBlue
participant InstallAppAPI
end box
Admin -> Controller_API : POST rsync controller\nregistration information\n(Name:"rsync", Host, Port)
Controller_API -> mongo : Save rsync controller record
Controller_API -> GRPC_Server_info : add a GRPC connection\nto rsync server to\ninternal table
Controller_API -> Admin : Return
== Some time later - ncm\ncalls rsync to instantiate\nNetwork Intents ==
|||
scheduler_ncm -> etcd : Prepares network intent\nresources in AppContext
scheduler_ncm -> GRPC_Conns_ncm : Retrieve gRPC connection\nto rsync from internal table
GRPC_Conns_ncm -> mongo : if rsync connection not present\nretrieve rsync record\nand create connection
GRPC_Conns_ncm -> scheduler_ncm : return rsync connection
scheduler_ncm -> InstallAppAPI : GRPC InstallApp API call (AppContext identifier)
== Some time later - orchestrator\ncalls rsync to instantiate\nan application ==
|||
scheduler -> etcd : Prepares composite\napplication resources\nin AppContext
scheduler -> GRPC_Server_info : Retrieve gRPC connection\nto rsync from internal table
scheduler -> InstallAppAPI : GRPC InstallApp API call (AppContext identifier)
|||
@enduml
As mentioned above, EMCO provides for the addition of controllers to perform specific placement and action operations on a Composite Application prior to it being deployed to a set of clusters.
This sequence diagram illustrates how action and placement controllers are registered with the orchestrator. Also shown is the part of a sequence where the orchestrator is preparing the AppContext for a composite application and the controllers are invoked in priority order to update the AppContext per their specific function.
@startuml
skinparam roundcorner 20
title Register placement and action controllers via Controller API
actor Admin
box "Distributed Application Scheduler\n(orchestrator)" #LightBlue
participant Controller_API
participant GRPC_Server_info
participant scheduler
end box
box "EMCO DB" #LightGreen
database mongo
end box
box "AppContext" #LightGreen
database etcd
end box
box "Placement Controller(s)" #LightBlue
participant UpdateContextAPI_place
end box
box "Action Controller(s)" #LightBlue
participant UpdateContextAPI_action
end box
loop for all placement and action controllers
Admin -> Controller_API : POST controller\nregistration information\n(Name, Host, Port, Type, Priority)
Controller_API -> mongo : Save controller record
Controller_API -> GRPC_Server_info : add a GRPC connection\nto controller to\ninternal table
Controller_API -> Admin : Return
end
== Some time later - orchestrator\nis preparing an AppContext\nfor deployment ==
|||
scheduler -> etcd : Prepares initial AppContext\nfor the composite app\nusing the generic\nplacement intents
scheduler -> mongo : Retrieve all controllers\nrequired for this AppContext\ndeployment
loop for all placement controllers in priority order
scheduler -> GRPC_Server_info : Retrieve gRPC connection\nfor controller
scheduler -> UpdateContextAPI_place : GRPC UpdateContext API call (AppContext identifier)
UpdateContextAPI_place -> etcd : updates AppContext according to\nthis controllers placement function
UpdateContextAPI_place -> scheduler : return
end
|||
loop for all action controllers in priority order
scheduler -> GRPC_Server_info : Retrieve gRPC connection\nfor controller
scheduler -> UpdateContextAPI_action : GRPC UpdateContext API call (AppContext identifier)
UpdateContextAPI_action -> etcd : updates AppContext according to\nthis controllers action function
UpdateContextAPI_action -> scheduler : return
end
|||
@enduml
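The invocation loops in the diagram above can be sketched as follows. The controller records and the `update_context` stub are hypothetical stand-ins for the stored registrations and the gRPC UpdateContext call; the assumption here is that a lower priority value runs earlier within each group.

```python
# Sketch of the orchestrator's controller invocation loop: placement
# controllers run first, then action controllers, each group sorted by
# priority. update_context stands in for the gRPC UpdateContext call.

def invoke_controllers(controllers, app_context_id, update_context):
    for ctrl_type in ("placement", "action"):
        group = [c for c in controllers if c["type"] == ctrl_type]
        for ctrl in sorted(group, key=lambda c: c["priority"]):
            update_context(ctrl["name"], app_context_id)

# Example with stand-in controller records.
calls = []
controllers = [
    {"name": "act-b", "type": "action", "priority": 2},
    {"name": "place-a", "type": "placement", "priority": 1},
    {"name": "act-a", "type": "action", "priority": 1},
]
invoke_controllers(controllers, "ctx-123", lambda name, ctx: calls.append(name))
print(calls)  # placement controller first, then actions in priority order
```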
Clusters are onboarded to EMCO by first creating a Cluster Provider and then adding Clusters to the Cluster Provider.
When a cluster is created, the KubeConfig file for that cluster is provided as part of the multi-part POST call to the Cluster API.
Additionally, once a Cluster is created, labels and key-value pairs may be added to it via the API. Clusters can then be selected by label when preparing placement intents.
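Expressed as a sketch, the onboarding calls above might carry bodies like the following. All field names here are assumptions for illustration; the cluster itself is created with a multipart POST (a JSON metadata part plus the kubeconfig file part), and the exact schemas should be checked against the swagger document.

```python
import json

# Hypothetical payload shapes for cluster onboarding.

def cluster_provider_body(name):
    return {"metadata": {"name": name}}

def cluster_body(name):
    # Sent as the JSON part of the multipart POST; the kubeconfig
    # file is attached as a separate part of the same request.
    return {"metadata": {"name": name}}

def cluster_label_body(label):
    return {"label-name": label}

provider = cluster_provider_body("edge-provider")
cluster = cluster_body("edge-cluster-01")
label = cluster_label_body("east-us")
print(json.dumps([provider, cluster, label], indent=2))
```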
The following sequence diagram illustrates the process of onboarding a cluster, along with a couple of examples of how other sequences in the EMCO system obtain cluster information during operation.
@startuml
skinparam roundcorner 20
title Onboard clusters
actor Admin
box "Cluster Manager\n(clm)" #LightBlue
participant Cluster_API
participant Cluster_Module
end box
box "Distributed Application Scheduler\n(orchestrator)" #LightBlue
participant Controller_API
participant GRPC_Server_info
participant scheduler
end box
box "EMCO DB" #LightGreen
database mongo
end box
box "AppContext" #LightGreen
database etcd
end box
box "Resource Synchronizer\n(rsync)" #LightBlue
participant rsync
end box
box "Edge Cluster"
participant K8S_API
end box
Admin -> Cluster_API ++ : POST cluster_provider (name)
Cluster_API -> Cluster_Module ++ : call create cluster\nprovider handler
Cluster_Module -> mongo : Save cluster provider record
Cluster_Module -> Cluster_API -- : return
Cluster_API -> Admin -- : return
Admin -> Cluster_API ++ : POST cluster (name, kubeconfig file)
Cluster_API -> Cluster_Module ++ : call create cluster\nhandler
Cluster_Module -> mongo : Save cluster record
Cluster_Module -> Cluster_API -- : return
Cluster_API -> Admin -- : return
Admin -> Cluster_API ++ : POST label(s) or kvpair(s) (optionally)
Cluster_API -> Cluster_Module ++ : call create cluster label\nor kv pair handler
Cluster_Module -> mongo : Save cluster label or\nkv pair record
Cluster_Module -> Cluster_API -- : return
Cluster_API -> Admin -- : return
== Some time later - generic placement controller\nis preparing an AppContext\nfor deployment ==
|||
scheduler -> Cluster_Module ++ : GET clusters by label (or name)
Cluster_Module -> mongo : retrieve cluster records by label
Cluster_Module -> scheduler -- : return
loop for all resources in a cluster
scheduler -> etcd : Create cluster key(s)\nin AppContext and add\nappropriate resources
end
== Some time later - rsync is\ndeploying resources to a cluster ==
|||
loop for all clusters in a composite application
rsync -> rsync : get k8s client for cluster
rsync -> Cluster_Module ++ : if no k8s client, GET kubeconfig for cluster
Cluster_Module -> mongo : retrieve cluster kubeconfig
Cluster_Module -> rsync -- : return
rsync -> rsync : create k8s client with kubeconfig
loop for all resources in a cluster
rsync -> K8S_API : apply resource to cluster
end
end
|||
@enduml
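The per-cluster client handling shown in the rsync portion of the diagram - reuse a cached Kubernetes client where one exists, and fetch the kubeconfig from clm only on a cache miss - can be sketched as follows. `get_kubeconfig` and `make_client` are hypothetical stand-ins for the clm GET call and the Kubernetes client construction.

```python
# Sketch of rsync's cluster-client lookup: one client per cluster,
# created lazily from the kubeconfig retrieved via clm.

class ClusterClientCache:
    def __init__(self, get_kubeconfig, make_client):
        self._get_kubeconfig = get_kubeconfig  # stand-in for clm GET kubeconfig
        self._make_client = make_client        # stand-in for k8s client creation
        self._clients = {}

    def client_for(self, cluster):
        if cluster not in self._clients:
            kubeconfig = self._get_kubeconfig(cluster)
            self._clients[cluster] = self._make_client(kubeconfig)
        return self._clients[cluster]

# Example with stand-in fetch/create functions.
fetches = []
cache = ClusterClientCache(
    get_kubeconfig=lambda c: fetches.append(c) or f"kubeconfig-{c}",
    make_client=lambda kc: {"config": kc},
)
cache.client_for("edge-01")
cache.client_for("edge-01")  # second call reuses the cached client
print(fetches)               # the kubeconfig was fetched only once
```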
(show how network intents are defined and deployed to clusters)
(show how composite applications are prepared along with the intents required to deploy them)
(show instantiate, terminate, status operations of a composite application)