...

  1. Extend the configuration of Jack's proxy to include DMaaP services.  Note: the current capability routes from edge to central. (See Jack's demo.)
    1. Include centrally deployed DMaaP services that already have node ports in the proxy config: dr-prov, message-router, dmaap-bc
    2. Expose the centrally deployed dr-node service on a node port and add it to the proxy configuration
    3. NOTE: the proxy can subsequently route by FQDN (HTTP only)
  2. K8s ExternalName Service.  Deploy services at the edge which map to central services. (See the first sketch after this list.)
    1. REF: https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-mapping-external-services
  3. Add entries for central services to /etc/hosts on edge pods so they can route properly.  (Kubernetes supports this via hostAliases; see the second sketch after this list.)
  4. Provision an external DNS service that can resolve the required IP addresses in the other k8s cluster
    1. Will require establishing a convention for FQDNs, e.g. <Release>-<service>.<namespace> (hypothetically, dev-message-router.onap)
    2. The convention should leverage the assumption that the same value is used for the Release name and the k8s cluster name
  5. Determine how clients can specify the FQDN (service name) while designating the IP address to use
    1. See the --resolve option in curl for an example of how this might work, e.g. (hypothetical values): curl --resolve message-router.onap:3904:10.12.5.2 http://message-router.onap:3904/topics
  6. Apply k8s thinking to DMaaP component design:
    1. Abandon the DR publish redirect protocol and simply use the dr-node service instead
      1. if dr-node is local to the cluster, then the client will route to the local dr-node pod for publishing (which is desired)
      2. if dr-node isn't local to the cluster, then the client will route to the central dr-node via the proxy (fallback)
    2. Change the dr-prov algorithm for distributing provisioning data to dr-node so that dr-prov doesn't need to know how to address every pod
      1. consider simple periodic polling by dr-node
      2. consider using an MR topic to trigger dr-node to poll for provisioning data
    3. Migrate to the ELK design for logging, which removes the need for dr-prov to gather logs from each dr-node (already in progress)
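
A minimal sketch of option 2, following the referenced best-practices article: a Service of type ExternalName deployed in the edge cluster, aliasing a central service. The external FQDN and namespace are assumptions, not existing ONAP values.

  apiVersion: v1
  kind: Service
  metadata:
    name: message-router
    namespace: onap
  spec:
    type: ExternalName
    # DNS name of the centrally deployed service; must be resolvable from edge pods
    externalName: message-router.central.example.com

With this in place, an edge client connecting to message-router.onap is handed a CNAME to the central FQDN by cluster DNS, so client configuration stays the same at edge and central.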
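
For option 3, Kubernetes provides hostAliases as the supported way to inject /etc/hosts entries into a pod, rather than editing the file directly. The IP address, hostnames, and image below are placeholders:

  apiVersion: v1
  kind: Pod
  metadata:
    name: dmaap-client
  spec:
    hostAliases:
      # placeholder address reachable at the central site (e.g. a NodePort host)
      - ip: "10.12.5.2"
        hostnames:
          - "message-router.onap"
          - "dmaap-dr-prov.onap"
    containers:
      - name: client
        image: example/client:latest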

Upon review of this list, some concern was expressed about entertaining options that involve code changes, given where we are in the Dublin release. There is also a desire to remain directionally consistent with future ONAP OOM plans.

Subsequently, Fiachra Corcoran inquired at the OOM meeting about approaches consistent with future directions, and learned:

  • intent is to utilize Ingress Controllers (see the illustrative sketch after these notes)
  • the RKE deployment has Ingress Controller support (although the selection of Ingress Controller technology is not finalized)
  • Some useful notes:
    • From Michael O'Brien(Amdocs, LOG) to Everyone: 10:09 AM
      default rke ingress: https://git.onap.org/oom/tree/kubernetes/contrib/tools/rke/rke_setup.sh#n177
        ingress: rancher/nginx-ingress-controller:0.21.0-rancher3
        ingress_backend: rancher/nginx-ingress-controller-defaultbackend:1.4-rancher1
    • From Michael O'Brien(Amdocs, LOG) to Everyone: 10:20 AM
      Jira OOM-1598: Document a Highly-Available K8s Cluster Deployment (RKE 0.2.1 / K8S 1.13.5 / Helm 2.12.3 - not 2.13.1 / Docker 18.09.5)
  • Much of this is now under discussion in the Edge Automation Working Group (meets Wednesdays @ 11am EST)
  • Also, Fiachra and Mike Elliott agreed to continue the discussion on how the DMaaP POC might proceed. Possible meeting next week.
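
For illustration only, an Ingress resource of the kind an nginx ingress controller would serve might look like the following; the host names are hypothetical and the ports illustrative, intended only to show FQDN-based routing to DMaaP services:

  apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    name: dmaap-ingress
    namespace: onap
  spec:
    rules:
      # one host rule per exposed DMaaP service (hypothetical FQDNs)
      - host: dr-prov.onap.example.com
        http:
          paths:
            - backend:
                serviceName: dmaap-dr-prov
                servicePort: 443
      - host: message-router.onap.example.com
        http:
          paths:
            - backend:
                serviceName: message-router
                servicePort: 3904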

Open Issues

Issue 1 (Open): DNS Update for inter-site routing

We have several examples of an edge component that needs to communicate with a central service. Mike suggested that edge DNS might be updated so that edge clients could resolve central services. This might satisfy a common need across several components; e.g. access to central AAF comes to mind.

05/02:

Another alternative was demoed by DCAE, where an nginx container deployed at the edge site proxies service traffic to the relevant NodePort on the central k8s cluster.
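
As an illustration of that pattern (not DCAE's actual configuration), an edge-side nginx proxy could be configured roughly as follows; the central host name and NodePort value are placeholders:

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: edge-proxy-conf
    namespace: onap
  data:
    nginx.conf: |
      events {}
      http {
        server {
          listen 3904;
          location / {
            # forward MR traffic to the central cluster's NodePort (placeholder address)
            proxy_pass http://central.example.com:30227;
          }
        }
      }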

This may be suitable for some DMaaP components (as a POC) but is not the preferred solution.

Work is ongoing in OOM to provide this (with input from the community)

Jira: OOM-1572

Issue 2 (Open): Location discovery

Bus Controller manages dcaeLocations as the names of different sites. What mechanism can be used to:

a) register dcaeLocations when each k8s cluster is deployed?

b) serve as an attribute when MR and DR clients are provisioned? The current expectation is that there is some k8s info in the A&AI API that might be useful.


05/02:

Agreement from DCAE on the requirement to involve all ONAP components (AAI, OOF, etc.) in finding a suitable solution here.

The use case is defined here.

Jira: OOM-1579

Issue 3 (Closed): Relying on the Helm chart enabled flag

2/12:

"Mike,

Last week we discussed using a helm configuration override file to control which components get deployed at edge.

The idea being we would set enabled: false for a component that shouldn’t be deployed.

But dmaap chart actually consists of several sub-charts, each of these sub-charts correspond to a specific dmaap component which we may want to deploy at edge or not.

So, curious if you know the syntax for this – I haven’t been able to find a reference for how enabled is actually used, and I don’t see that value referenced in our charts so not clear what is reading it.


Wondering if our edge config override would be something like:

  dmaap:
    dmaap-message-router:
      enabled: true
    dmaap-bus-controller:
      enabled: false
    dmaap-dr-prov:
      enabled: false
    dmaap-dr-node:
      enabled: true


or, do charts for our individual components need to be top-level directories under oom/kubernetes in order to use the enabled flag?"

2/13: From Mike Elliott:

"I’ve been trying to allow for the conditional control over the dr-prov and dr-node as well, with no success.

Still investigating options for this. Hope to have a solution on this by EOD."


05/02:

The current chart structure allows deployment of individual components (BC, MR, DR).

One caveat is a dependency on AAF being reachable by BC & MR (DR soon to follow).

See the DMaaP Deployment Guide - Dublin for more details.
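
For the syntax question raised in the thread: in Helm 2 it is the parent chart's requirements.yaml that reads the enabled values, via a condition field on each sub-chart dependency. A sketch, assuming the sub-chart names used above (version and repository values are illustrative):

  dependencies:
    - name: dmaap-message-router
      version: ~4.x-0
      repository: '@local'
      condition: dmaap-message-router.enabled
    - name: dmaap-dr-node
      version: ~4.x-0
      repository: '@local'
      condition: dmaap-dr-node.enabled

With conditions declared this way, the override file quoted above works as written, and the sub-charts do not need to be top-level directories under oom/kubernetes.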

Issue 4 (Open): Helm chart edge deploy

05/02:

  • POC procedure demoed using multiple kubeconfig contexts (e.g. helm's --kube-context flag) to target the edge site/cluster during helm deploy. (Inter-cluster security may come into play here also.)

"edge charts" may require several override params to cater for the following.

  1. dcaeLocation (see issue 2)
  2. pod specs - size, resources, etc
  3. readiness configuration?
  4. potential service endpoint changes/proxies?
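
A hypothetical shape for such an edge override file (the keys below are illustrative, not confirmed chart values):

  dmaap:
    dmaap-dr-node:
      enabled: true
      config:
        dcaeLocation: edge-site-1   # see issue 2
      resources:
        limits:
          cpu: "1"
          memory: 1Gi
      readiness:
        initialDelaySeconds: 60
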
Issue 5 (Open): NodePort availability for inter-site traffic

05/02:

Need to identify whether all of the required services (logstash, AAF, dr-node, mr-kafka, etc.) have exposed NodePorts available for bi-directional traffic between sites. (A reference sketch follows.)
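
For reference, exposing a service on a NodePort has the following shape; the port numbers here are placeholders, and the authoritative DMaaP NodePort assignments live in the OOM charts:

  apiVersion: v1
  kind: Service
  metadata:
    name: dmaap-dr-node
    namespace: onap
  spec:
    type: NodePort
    selector:
      app: dmaap-dr-node
    ports:
      - name: https
        port: 8443
        targetPort: 8443
        nodePort: 30396   # placeholder within the default 30000-32767 range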

...