...

PlantUML Macro
@startuml
title ASD-CNF Instantiation
participant SO_Client
participant SO
participant SO_BPMN
participant CNFM
participant AAI
participant SDNC
participant OOF
participant ASD_Catalog_Mgr
participant Helm_Repository
participant K8S_Cluster

autonumber 

group ASD-Based CNF Instantiation
    SO_Client -> SO : Create Service
    SO -> SO_BPMN : Process and Decompose Service
    SO_BPMN -> AAI : Create Service Instance
opt Service-Level Homing
    SO_BPMN -> OOF : Homing Information (optional for PoC)
    OOF -> SO_BPMN : Receive Homing Information (optional for PoC)
end
    SO_BPMN --> SO_BPMN : Process Model Info & Decide flows      
    SO_BPMN -> CNFM : Delegate Resource Orchestration,\npass input parameters
    CNFM -> ASD_Catalog_Mgr : Get ASD
    CNFM -> Helm_Repository : Get associated Helm Charts
    CNFM --> CNFM : Process and decompose ASD and DeploymentItems\n(VF & Vf-Modules)
    CNFM --> CNFM : Get DeploymentItem order and create a sequence list
    CNFM --> CNFM : Execute each DeploymentItem following the sequence order
loop for each DeploymentItem
    CNFM -> AAI : Create vf-module
    CNFM -> SDNC : Assign vf-module
    CNFM --> CNFM : Get AsInstance LifecycleParameterMetadata from the request
    CNFM --> CNFM : Get the corresponding Helm Chart
    CNFM --> CNFM : Create a new values file by overriding the Helm Chart's\nvalues file with LifecycleParameterMetadata
    CNFM --> CNFM : Generate K8S resources (e.g., helm template) from the\nHelm Chart plus the new custom values file
    CNFM -> OOF : Get a placement decision (for PoC, returns a predefined K8S cluster name)
    CNFM --> CNFM : Set kubeconfig environment for the target K8S Cluster
    CNFM -> K8S_Cluster : Invoke Helm Install with the custom values file
    CNFM -> AAI : Update vf-module

end
end  



@enduml
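
For illustration, a minimal sketch of the per-DeploymentItem steps in the flow above (override the chart's values file with LifecycleParameterMetadata, render the K8S resources via helm template, then helm install into the target cluster). It assumes the Helm 3 CLI and PyYAML are available; the function and parameter names are hypothetical and not actual CNFM code.

# Hypothetical sketch of the per-DeploymentItem steps shown in the diagram above.
# Names and parameters are illustrative only, not part of the CNFM implementation.
import subprocess
import tempfile

import yaml  # PyYAML, assumed available


def deploy_item(chart_dir: str, release_name: str, namespace: str,
                lifecycle_params: dict, kubeconfig: str) -> None:
    """Render and install one DeploymentItem's Helm chart with overridden values."""
    # Merge LifecycleParameterMetadata over the chart's default values.
    with open(f"{chart_dir}/values.yaml") as f:
        values = yaml.safe_load(f) or {}
    values.update(lifecycle_params)

    # Write the merged values to a new custom values file.
    with tempfile.NamedTemporaryFile("w", suffix=".yaml", delete=False) as tmp:
        yaml.safe_dump(values, tmp)
        custom_values = tmp.name

    # Render the K8S resources locally (helm template) with the custom values.
    subprocess.run(
        ["helm", "template", release_name, chart_dir, "-f", custom_values],
        check=True, capture_output=True)

    # Install into the target cluster selected by the placement decision.
    subprocess.run(
        ["helm", "install", release_name, chart_dir,
         "-f", custom_values, "--namespace", namespace,
         "--kubeconfig", kubeconfig],
        check=True)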


Option B: Use of the CNF Adapter (not part of the initial PoC)

Note: use of CDS is TBD.


PlantUML Macro
@startuml
title ASD-CNF Instantiation
participant SO_Client
participant SO
participant SO_BPMN
participant CNFM
participant AAI
participant SDNC
participant CDS
participant OOF
participant ASD_Catalog_Mgr
participant Helm_Repository
participant CNF_Adapter
participant K8S_Plugin
participant K8S_Cluster

autonumber 

group ASD-Based CNF Instantiation
    SO_Client -> SO : Create Service
    SO -> SO_BPMN : Process and Decompose Service
    SO_BPMN -> AAI : Create Service Instance
opt Service-Level Homing
    SO_BPMN -> OOF : Homing Information (optional for PoC)
    OOF -> SO_BPMN : Receive Homing Information (optional for PoC)
end
    SO_BPMN --> SO_BPMN : Process Model Info & Decide flows      
    SO_BPMN -> CNFM : Delegate Resource Orchestration,\npass input parameters
    CNFM -> ASD_Catalog_Mgr : Get ASD
    CNFM -> Helm_Repository : Get associated Helm Charts
    CNFM --> CNFM : Process and decompose ASD and DeploymentItems\n(VF & Vf-Modules)
    CNFM --> CNFM : Get DeploymentItem order and create a sequence list
loop for each DeploymentItem
    CNFM -> AAI : Create vf-module
    CNFM -> SDNC : Assign vf-module
    SDNC -> CDS : Assign vf-module
    CDS --> CDS : Build RB Profile
    CDS -> SDNC : Assign result
    SDNC -> CNFM : vf-module assigned
    CNFM -> CNF_Adapter : Assign vf-module
    CNF_Adapter -> K8S_Plugin : RB Profile (Helm enrichment)
    CNF_Adapter -> CNFM : vf-module assigned
    CNFM -> AAI : Update vf-module
    CNFM --> CNFM : Dry run to generate K8S resources\n(Vf-Module)
    CNFM -> OOF : Get Homing Information per resource\n(Vf-Module)
    CNFM -> CNF_Adapter : Create vf-module in K8S
    CNF_Adapter -> K8S_Plugin : Create vf-module in K8S\n(RB Instance)
    K8S_Plugin -> K8S_Cluster : Install Helm Chart
    K8S_Plugin -> CNF_Adapter : K8S Resource Instance Status
    CNF_Adapter -> CNFM : vf-module in K8S created
    CNFM -> AAI : Update vf-module

end
end  



@enduml
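
For illustration only, a minimal sketch of the CNFM-to-CNF_Adapter "Create vf-module in K8S" call in the flow above. The service address, endpoint path, and payload fields are hypothetical placeholders and do not represent the actual CNF Adapter API.

# Hypothetical sketch of the CNFM -> CNF_Adapter "Create vf-module in K8S" call.
# The URL, endpoint path, and payload fields are placeholders, NOT the real API.
import requests

CNF_ADAPTER_URL = "http://so-cnf-adapter:8090"  # hypothetical service address


def create_vf_module_in_k8s(vf_module_id: str, rb_name: str, rb_version: str,
                            profile_name: str, cloud_region: str) -> dict:
    """Ask the CNF Adapter to instantiate a vf-module (RB instance) in K8S."""
    payload = {
        "vfModuleId": vf_module_id,   # hypothetical field names
        "rbName": rb_name,
        "rbVersion": rb_version,
        "profileName": profile_name,
        "cloudRegion": cloud_region,
    }
    resp = requests.post(f"{CNF_ADAPTER_URL}/api/cnf-adapter/v1/instance",
                         json=payload, timeout=30)
    resp.raise_for_status()
    # The response is expected to carry the K8S resource instance status
    # reported back by the K8S Plugin (see the diagram above).
    return resp.json()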


Tenant Support

Tenant support will be realized through Kubernetes clusters, namespaces, nodes, and containers. Initially, namespaces can be used to restrict API access, constrain resource usage, and restrict what containers are allowed to do.
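
As a minimal sketch of the namespace-based approach described above, the following example (using the official Kubernetes Python client) creates a per-tenant namespace and attaches a ResourceQuota to constrain resource usage. The tenant name and quota values are illustrative only; RBAC bindings for API access restriction are not shown.

# Minimal sketch of namespace-based tenant isolation with the Kubernetes
# Python client. Tenant name and quota values are examples only.
from kubernetes import client, config


def create_tenant_namespace(tenant: str) -> None:
    """Create a per-tenant namespace with a ResourceQuota to constrain usage."""
    config.load_kube_config()  # or config.load_incluster_config() inside a pod
    core = client.CoreV1Api()

    # The namespace scopes the tenant's API access (via RBAC bindings, not shown).
    core.create_namespace(
        client.V1Namespace(metadata=client.V1ObjectMeta(name=tenant)))

    # The ResourceQuota caps how much CPU/memory the tenant's workloads may request.
    quota = client.V1ResourceQuota(
        metadata=client.V1ObjectMeta(name=f"{tenant}-quota"),
        spec=client.V1ResourceQuotaSpec(
            hard={"requests.cpu": "4", "requests.memory": "8Gi", "pods": "20"}))
    core.create_namespaced_resource_quota(namespace=tenant, body=quota)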

Assumption & Requirements (from cloud.google.com)

source: https://cloud.google.com/kubernetes-engine/docs/best-practices/enterprise-multitenancy 

The best practices in this guide are based on a multi-tenant use case for an enterprise environment, which has the following assumptions and requirements:

  • The organization is a single company that has many tenants (two or more application/service teams) that use Kubernetes and would like to share computing and administrative resources.
  • Each tenant is a single team developing a single workload.
  • Other than the application/service teams, there are other teams that also utilize and manage clusters, including platform team members, cluster administrators, auditors, etc.
  • The platform team owns the clusters and defines the amount of resources each tenant team can use; each tenant can request more.
  • Each tenant team should be able to deploy their application through the Kubernetes API without having to communicate with the platform team.
  • Each tenant should not be able to affect other tenants in the shared cluster, except via explicit design decisions like API calls, shared data sources, etc.

Access Control

TBD

Network Policies

TBD


Resource Quotas

TBD

AAI Data Model

AAI CNF Model - Overview

...