PDP Registration

  • Currently we support one default group, which contains the drools, apex and xacml PDPs.
  • There is a desire for CLAMP to be able to use our API fully: see CLAMP Deployment of Policies to PDP Groups.
  • Example: when a Drools PDP initializes, it registers with the PAP, reporting its supported policy types. The PAP then evaluates the needs of each group; if a group needs 5 PDPs of that type and currently has 3, the new PDP is assigned to that group (see the sketch after this list).
  • Requirement:
    • PDP types are 1:1 with K8S deployments
      • The deployment defines how many PDP instances are desired and the scaling requirements.
      • The charts can be configured to define the PDP type, the supported policy types for the PDP instances, and the sub-group they belong to.
      • A PDP sub-group is 1:1 with a PDP type - only one sub-group per PDP type is allowed in a group.
    • PAP does the logical partitioning
      • PDPs should not be split across groups.
  • Action Item: Ram and Jim will work on K8S charts to describe an example of multiple deployments.
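
A minimal sketch of the registration and partitioning flow described above, assuming illustrative names (PdpGroup, SubGroup, register_pdp) and a simplistic first-fit assignment; the real PAP logic lives in the Java policy/pap code and may differ:

```python
# Illustrative sketch only - names and the first-fit assignment are assumptions,
# not the real PAP implementation.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class SubGroup:
    pdp_type: str                          # e.g. "drools", "apex", "xacml"; 1:1 with a K8S deployment
    supported_policy_types: List[str]      # policy types this sub-group must serve
    desired_count: int                     # how many PDP instances the sub-group needs
    instances: List[str] = field(default_factory=list)


@dataclass
class PdpGroup:
    name: str
    subgroups: List[SubGroup]


def register_pdp(groups: List[PdpGroup], pdp_id: str, pdp_type: str,
                 policy_types: List[str]) -> Optional[str]:
    """Assign a newly registered PDP to the first sub-group of its type that
    still needs instances (e.g. needs 5, has 3) and whose policy types it supports."""
    for group in groups:
        for sub in group.subgroups:
            if sub.pdp_type != pdp_type:
                continue
            if not set(sub.supported_policy_types) <= set(policy_types):
                continue                   # PDP cannot serve everything this sub-group needs
            if len(sub.instances) < sub.desired_count:
                sub.instances.append(pdp_id)
                return group.name          # a PDP is never split across groups
    return None                            # no group currently needs this PDP
```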

Questions:

How does server pooling get affected by this? Bobby: it would work as is, since the pools have no knowledge of each other; a server pool could govern each sub-group.

How does the URI for policy decisions support load balancing, e.g. how do the clients find the right URI?

Can a sub-group have more than one deployment, or should the deployments be in separate groups?

Can a K8S deployment span clusters, i.e. can PDP instances run in different clusters?

Should we leave supported policy types out of the PDP definition and use the API?


 - discussion during Policy team meeting.


 

Drools Server Pooling needs clarification:

  • Works in local and remote clusters.
  • The server automatically updates a primary and a backup drools PDP; this is built into the PDP code.
  • It provides more than load balancing: it also provides backup and distributed locking.
  • Could the server pool and PDP group logical mapping be 1:1? Yes.
    • As long as the PDPs comply with being in the same sub-group, how the underlying PDPs pool and share state is opaque to the Policy Framework.
    • Requirement: PDPs in the same server pool should not be split across groups.
    • The Drools Server Pooling code may need to be extended to ensure that all the policies deployed to the group are supported by all the PDPs in the pool (a sketch of this check follows the list).
    • Frankfurt: the effort will be to test and understand server pooling. It will be disabled by default.
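
A minimal sketch of the consistency check mentioned in the last bullet, using illustrative names and placeholder policy types; the real server-pooling feature lives in the drools-pdp code and may model this differently:

```python
# Illustrative sketch only - the names and data shapes are assumptions.
from typing import Dict, List, Set


def unsupporting_pool_members(deployed_policy_types: Set[str],
                              pool_members: Dict[str, Set[str]]) -> List[str]:
    """Return the pool members (PDP ids) that cannot serve one or more of the
    policy types already deployed to the group; an empty list means the server
    pool and the PDP group are consistent."""
    return [pdp_id for pdp_id, supported in pool_members.items()
            if not deployed_policy_types <= supported]


# Example with placeholder policy types: one pooled PDP is missing a type.
members = {
    "drools-pdp-0": {"example.policies.TypeA", "example.policies.TypeB"},
    "drools-pdp-1": {"example.policies.TypeA"},
}
print(unsupporting_pool_members({"example.policies.TypeA", "example.policies.TypeB"}, members))
# -> ['drools-pdp-1']
```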


K8S Issues:

TODO: schedule a follow-up for those interested, to help understand how drools server pooling works.

Per Jim:




6 Comments

  1. Something else that occurs to me has to do with diagnosing a problem.  If a k8s deployment only exists within a single group/subgroup pair, then you know which PDPs to check if there's an issue with a particular pair.  However, if a k8s deployment were to span more than one group, then you'd have to check all of the PDPs, just to determine if it's in the subgroup, even before trying to diagnose the issue.

  2. Not sure I said this clearly in the meeting, but my biggest concern with having a k8s deployment span more than one group is the robustness of the solution.  I could easily imagine that PDPs and PAP somehow end up out of sync and no one can explain why things are happening as they are, because PDPs are not in the expected/needed group.  Even with only one group, we've already seen issues where PAP and a PDP don't agree on the state of the PDP, and that's a much simpler issue than dealing with rearranging PDPs across groups.

    In general, I would argue for something that's easier to use/configure even if it meant more complex code.  However, in this case, I feel like the code complexity would leak out into the visible world, causing lots of issues that would be difficult to diagnose.

  3. These are the comments that I brought up during the 10/2 Policy Weekly Meeting. A multiple group/subgroup design that implies direct commands from a PAP container instance to the Kubernetes API server does not seem very natural, at least intuitively in its directionality. Note that the container would have cluster-wide side effects, which could be negative in case of a malfunction. There is also a concern about role separation between a Kubernetes cluster admin and a policy application admin; the design forces a merge of both roles, which are typically separate and have distinct privileges. In summary, there are 2 fundamental questions posed here: 1) architecturally, would it be acceptable to have a container affect the Kubernetes cluster state? and 2) in terms of security, is the blending of Kubernetes admin and policy application admin acceptable? Note that when delving into the details, the complexity could be daunting, as the PAP would need to have authentication credentials to talk to the Kubernetes API server (not an ONAP component), deployment-distinct configuration, ..

    A favorable argument, though, could be made by drawing an analogy with APPC, where that component issues OpenStack commands that also affect "network" state (although the targets are not part of the ONAP control plane, so it is a subtle but important difference). Under this premise, there seems to be a call to create a new generic component, not policy-specific but ONAP-wide (if there is not one out of the box that encapsulates this functionality).

    Based on the above, I tend to think that a solution that manages the available PDP resources within the application layer would be cleaner (since the boundaries between k8s and the application stay the same), faster to implement, and simpler to configure. In such an approach, as PDPs are presented to the PAP, the PAP would assign group/subgroup/behavior (policies) with the resources at hand. In the end, group management operations would typically be manual, and as such it would be expected that the administrators do some planning ahead of time to support them in terms of Kubernetes resources as well. This take does not preclude the use of auto-scaler capabilities if necessary (see for example autoscale), or simply overprovisioning the replica count.


  4. My understanding of the proposed solution:

    • Initially there will be 1 defaultGroup with 1 instance each of PDP-A, PDP-D and PDP-X (as per the OOM chart values).
    • When PAP is asked to deploy a newGroup, say with 2 PDP-A instances and 3 PDP-D instances, PAP will go ahead and use the Kubernetes API to create a deployment for PDP-A (with 2 replicas) and a deployment for PDP-D (with 3 replicas); a rough sketch follows this list.
    • Similarly, PAP can be asked to undeploy as well.
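
    For illustration only, a rough sketch of what "PAP uses the Kubernetes API to create a deployment with N replicas" could look like, using the official Kubernetes Python client; the function name, image, namespace and labels are placeholders, and nothing like this exists in PAP today:

```python
# Illustration only - placeholders throughout; PAP does not do this today.
from kubernetes import client, config


def create_pdp_deployment(name: str, image: str, replicas: int,
                          namespace: str = "onap") -> None:
    """Create a K8S Deployment running `replicas` copies of a PDP image."""
    config.load_incluster_config()       # assumes PAP itself runs in the cluster
    apps = client.AppsV1Api()
    pod = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": name}),
        spec=client.V1PodSpec(containers=[client.V1Container(name=name, image=image)]),
    )
    spec = client.V1DeploymentSpec(
        replicas=replicas,
        selector=client.V1LabelSelector(match_labels={"app": name}),
        template=pod,
    )
    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name=name, namespace=namespace),
        spec=spec,
    )
    apps.create_namespaced_deployment(namespace=namespace, body=deployment)


# e.g. the newGroup example above: 2 PDP-A replicas and 3 PDP-D replicas
# create_pdp_deployment("newgroup-pdp-a", "<pdp-a-image>", 2)
# create_pdp_deployment("newgroup-pdp-d", "<pdp-d-image>", 3)
```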

    Not sure about the directionality problem, but isn't something similar happening in CLAMP?

    When we Deploy a CL in CLAMP, a new Pod is spun up in Kubernetes. When we Undeploy the CL, the pod is terminated.

    PAP also could do something similar, and the proposed solution seems viable to me.

    1. CLAMP does not spin anything up. CLAMP calls the DCAE Controller API, which uses Cloudify to spin things up. That is not exactly how the ONAP community wants things done.


      I do not think we need or should have a Policy Controller that spins anything up. These things should be managed by a separate DevOps team.

  5. Just to be sure folks understand: when it comes to deploying containers across clusters (e.g. for geo-redundancy), the approach should be an overall consistent ONAP approach among all the projects. Individual projects should not come up with their own solutions for this. It seems that putting K8S APIs within the PAP would violate that.