Proposal-1:

We will use the vserver to carry the pod information in AAI (number of pods = number of vservers).

For the IP address we will use the l-interface in the vserver, and we currently propose to populate five parameters in AAI, as shown in the vserver.json below (vserver-name, vserver-name2, prov-status, l3-interface-ipv4-address and l3-interface-ipv4-prefix-length).


Vserver.json
{  
   "vserver-id":"example-vserver-id-val-54206",
   "vserver-name":"POD-NAME",
   "vserver-name2":"profile-name",
   "prov-status":"NAMESPACE",
   "vserver-selflink":"example-vserver-selflink-val-53168",
   "in-maint":true,
   "is-closed-loop-disabled":true,
   "volumes":{  
      "volume":[  
         {  
            "volume-id":"example-volume-id-val-71602",
            "volume-selflink":"example-volume-selflink-val-80626"
         }
      ]
   },
   "l-interfaces":{  
      "l-interface":[  
         {  
            "interface-name":"example-interface-name-val-29637",
            "interface-role":"example-interface-role-val-49763",
            "v6-wan-link-ip":"example-v6-wan-link-ip-val-73419",
            "selflink":"example-selflink-val-83045",
            "interface-id":"example-interface-id-val-17917",
            "macaddr":"example-macaddr-val-71542",
            "network-name":"example-network-name-val-2842",
            "management-option":"example-management-option-val-16536",
            "interface-description":"example-interface-description-val-85643",
            "is-port-mirrored":true,
            "in-maint":true,
            "prov-status":"example-prov-status-val-73622",
            "is-ip-unnumbered":true,
            "allowed-address-pairs":"example-allowed-address-pairs-val-92267",
            "vlans":{  
               "vlan":[  
                  {  
                     "vlan-interface":"example-vlan-interface-val-54497",
                     "vlan-id-inner":70918078,
                     "vlan-id-outer":61012074,
                     "speed-value":"example-speed-value-val-12296",
                     "speed-units":"example-speed-units-val-84452",
                     "vlan-description":"example-vlan-description-val-13828",
                     "backdoor-connection":"example-backdoor-connection-val-63476",
                     "vpn-key":"example-vpn-key-val-54149",
                     "orchestration-status":"example-orchestration-status-val-66103",
                     "in-maint":true,
                     "prov-status":"example-prov-status-val-60754",
                     "is-ip-unnumbered":true,
                     "is-private":true,
                     "l3-interface-ipv4-address-list":[  
                        {  
                           "l3-interface-ipv4-address":"example-l3-interface-ipv4-address-val-25332",
                           "l3-interface-ipv4-prefix-length":44948626,
                           "vlan-id-inner":90390215,
                           "vlan-id-outer":59415181,
                           "is-floating":true,
                           "neutron-network-id":"example-neutron-network-id-val-49439",
                           "neutron-subnet-id":"example-neutron-subnet-id-val-17490"
                        }
                     ],
                     "l3-interface-ipv6-address-list":[  
                        {  
                           "l3-interface-ipv6-address":"example-l3-interface-ipv6-address-val-65463",
                           "l3-interface-ipv6-prefix-length":28285864,
                           "vlan-id-inner":68853321,
                           "vlan-id-outer":36358258,
                           "is-floating":true,
                           "neutron-network-id":"example-neutron-network-id-val-85239",
                           "neutron-subnet-id":"example-neutron-subnet-id-val-11639"
                        }
                     ]
                  }
               ]
            },
            "sriov-vfs":{  
               "sriov-vf":[  
                  {  
                     "pci-id":"example-pci-id-val-68804",
                     "vf-vlan-filter":"example-vf-vlan-filter-val-66686",
                     "vf-mac-filter":"example-vf-mac-filter-val-4844",
                     "vf-vlan-strip":true,
                     "vf-vlan-anti-spoof-check":true,
                     "vf-mac-anti-spoof-check":true,
                     "vf-mirrors":"example-vf-mirrors-val-62994",
                     "vf-broadcast-allow":true,
                     "vf-unknown-multicast-allow":true,
                     "vf-unknown-unicast-allow":true,
                     "vf-insert-stag":true,
                     "vf-link-status":"example-vf-link-status-val-61210",
                     "neutron-network-id":"example-neutron-network-id-val-92364"
                  }
               ]
            },
            "l-interfaces":{  
               "l-interface":[  
                  {  
                     "interface-name":"example-interface-name-val-24756",
                     "interface-role":"example-interface-role-val-99880",
                     "v6-wan-link-ip":"example-v6-wan-link-ip-val-83433",
                     "selflink":"example-selflink-val-37078",
                     "interface-id":"example-interface-id-val-30028",
                     "macaddr":"example-macaddr-val-48581",
                     "network-name":"example-network-name-val-66290",
                     "management-option":"example-management-option-val-75756",
                     "interface-description":"example-interface-description-val-12906",
                     "is-port-mirrored":true,
                     "in-maint":true,
                     "prov-status":"example-prov-status-val-34593",
                     "is-ip-unnumbered":true,
                     "allowed-address-pairs":"example-allowed-address-pairs-val-55176",
                     "admin-status":"example-admin-status-val-64707"
                  }
               ]
            },
            "l3-interface-ipv4-address-list":[  
               {  
                  "l3-interface-ipv4-address":"IP_Address",
                  "l3-interface-ipv4-prefix-length":"PORT",
                  "vlan-id-inner":17112845,
                  "vlan-id-outer":77792568,
                  "is-floating":true,
                  "neutron-network-id":"example-neutron-network-id-val-60469",
                  "neutron-subnet-id":"example-neutron-subnet-id-val-22212"
               }
            ],
            "l3-interface-ipv6-address-list":[  
               {  
                  "l3-interface-ipv6-address":"example-l3-interface-ipv6-address-val-99899",
                  "l3-interface-ipv6-prefix-length":78690445,
                  "vlan-id-inner":66245854,
                  "vlan-id-outer":11258712,
                  "is-floating":true,
                  "neutron-network-id":"example-neutron-network-id-val-57495",
                  "neutron-subnet-id":"example-neutron-subnet-id-val-2203"
               }
            ],
            "admin-status":"example-admin-status-val-39685"
         }
      ]
   }
}
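
For illustration, here is a minimal sketch (Python + requests) of how the k8splugin could push these five parameters into AAI. The URL follows the standard AAI cloud-infrastructure path; the API version, credentials and cloud-region/tenant identifiers are placeholders for deployment-specific values, and updating an already-existing vserver would additionally require echoing back its current resource-version.

aai_vserver_update.py (sketch)
import requests

AAI_BASE = "https://aai.onap:8443/aai/v14"  # assumed endpoint/version
CLOUD_OWNER, CLOUD_REGION, TENANT = "k8scloudowner", "k8sregion", "tenant-1"

def update_vserver(pod_name, profile_name, namespace, ip, port):
    """PUT one vserver per pod, carrying the five proposed parameters."""
    payload = {
        "vserver-id": pod_name,
        "vserver-name": pod_name,        # POD-NAME
        "vserver-name2": profile_name,   # profile-name
        "prov-status": namespace,        # NAMESPACE
        "vserver-selflink": "n/a",
        "l-interfaces": {"l-interface": [{
            "interface-name": "eth0",
            "l3-interface-ipv4-address-list": [{
                "l3-interface-ipv4-address": ip,          # IP_Address
                "l3-interface-ipv4-prefix-length": port,  # PORT (overloaded)
            }],
        }]},
    }
    url = (f"{AAI_BASE}/cloud-infrastructure/cloud-regions/cloud-region/"
           f"{CLOUD_OWNER}/{CLOUD_REGION}/tenants/tenant/{TENANT}/"
           f"vservers/vserver/{pod_name}")
    resp = requests.put(url, json=payload,
                        verify=False,            # AAI demo certs are self-signed
                        auth=("AAI", "AAI"),     # demo credentials
                        headers={"X-FromAppId": "k8splugin",
                                 "X-TransactionId": "cnf-update-1"})
    resp.raise_for_status()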

Proposal-2:

Pod names are not fixed: they change whenever the pod gets restarted. Remember, pods are *NOT* like VMs.

If the goal here is to be able to connect to the pod to perform some actions on it, then we are better off storing the name of the Service that sends traffic to that pod.

Basically, the vserver information should say: "I want to connect to this service within my application and here is the IP and PORT to do so"


The connectivity information will be the Public IP:Port combination for each Kubernetes Service, which will allow you to SSH into the pod itself via the service.

Now, when there are multiple pods behind a service, the pods will need a mechanism that allows them to synchronize that information among themselves, either via a persistent volume that stores state or via some kind of peer-to-peer gossip.


The connection flow will be like this. The only thing that is accessible to the outside is the PUBLIC_IP.

ONAP USER --> PUBLIC_IP:PORT --> INGRESS_GATEWAY --> SERVICE:SERVICE_PORT --> ONE OF THE PODS


Having said that, selection of a particular service can be done via route inspection for HTTP services.

For TCP services such as SSH or NETCONF, there is no route to inspect, so we will have to use port-based TCP routes.

So, a dynamically generated port will be allocated on the PUBLIC_IP, as sketched below.
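
As one concrete example of such an allocation: creating a NodePort Service without specifying a nodePort makes the Kubernetes API server pick a free port from the node-port range, and reading the Service back yields the PUBLIC_IP:PORT pair to record. A TCP route on an Istio-style ingress gateway would be set up analogously; the names and the NETCONF port below are illustrative.

dynamic_port.py (sketch)
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# NodePort Service with no explicit nodePort: the API server allocates one.
svc = client.V1Service(
    metadata=client.V1ObjectMeta(name="vfw-netconf"),
    spec=client.V1ServiceSpec(
        type="NodePort",
        selector={"app": "vfw"},
        ports=[client.V1ServicePort(port=830, target_port=830)],  # NETCONF
    ),
)
created = v1.create_namespaced_service(namespace="default", body=svc)
node_port = created.spec.ports[0].node_port  # the dynamically generated port
print(f"TCP route: PUBLIC_IP:{node_port} -> vfw-netconf:830")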

Observations from vFW Closed-loop:

That example has 2 parts:

  1. The vFW component sends VES events to the DCAE collector. In our lab setup, all that was needed was the IP address and port of the DCAE collector – and of course, the vFW pod (virtlet) needs to be able to reach it. The aspect that needs to line up with AAI is that the VES event contains a ‘sourceName’ field, which DCAE uses to look up the corresponding vserver in AAI. With the current demo vFW code, the hostname of the vFW (virtlet) turns out to be the pod name; a sketch of such an event appears after this list. (See https://wiki.onap.org/display/DW/Setting+up+Closed+Loop+for+K8S+vFW+-+initial+pass#SettingupClosedLoopforK8SvFW-initialpass-VESEventsfromthevFirewall for details.)
    1. If you look at my example at the linked page, you’ll note that I did not add any IP address info – it seems the key info is the vserver name and its relationship to the vnf-id. Obviously, IP info is needed for connections coming in the other direction.


  2. The other part of the vFW closed-loop example is the connection made so that APPC can configure the Packet Generator in response to policy. So far, I have mimicked the steps used by the integration robot testing and set up the APPC netconf mount to the Packet Generator manually. The 3 specific data items needed were: vnf-id, IP address and port number. In my initial test case, the IP address was the IP of the Node and the port was the NodePort of a Service (which will be added to the example). (See https://wiki.onap.org/display/DW/Setting+up+Closed+Loop+for+K8S+vFW+-+initial+pass#SettingupClosedLoopforK8SvFW-initialpass-SettingupaServicethePacketGeneratornetconfmount )
    1. If I’m following the proposal-2 point, then eventually we’d use a Public IP:Port instead of the node values I used.
    2. For the service, the vserver object’s name could be the service name.
    3. Once the netconf mount is done, ONAP just needs to know the vnf-id (I believe – the response to policy worked without AAI having any IP info). So, while I manually performed the netconf mount, if ONAP is to automatically perform the netconf mount, then AAI will need the appropriate information.
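
The sketch promised in part #1, showing the linkage from the event side: commonEventHeader.sourceName must match the vserver-name stored in AAI for the DCAE lookup to succeed, and today the demo vFW effectively sends its hostname, i.e. the pod name. The collector URL, listener version and credentials here are assumptions.

ves_source_name.py (sketch)
import socket
import time

import requests

VES_URL = "http://dcae-ves-collector:8080/eventListener/v7"  # assumed

now = int(time.time() * 1e6)
event = {"event": {"commonEventHeader": {
    "domain": "measurement",
    "eventId": "vfw-demo-0001",
    "eventName": "vFirewallBroadcastPackets",
    "sourceName": socket.gethostname(),  # pod name today -> volatile!
    "reportingEntityName": socket.gethostname(),
    "startEpochMicrosec": now,
    "lastEpochMicrosec": now,
    "priority": "Normal",
    "sequence": 1,
    "version": "4.0.1",
    "vesEventListenerVersion": "7.0.1",
}}}
requests.post(VES_URL, json=event, auth=("sample1", "sample1")).raise_for_status()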


So, what if, for example, the vFW needs to send VES events (as in part #1) and also needs to be configured (as in part #2)?

Perhaps this implies two vserver objects?

One for the pod name of the pod generating VES events, and another for the service and the network interface it exposes.


Also, one other thought: as proposal-2 mentions, the pod name can change – so if the pod name is used as the vserver name, then you’d want some process to update AAI. Maybe we need to make our sample vFW VES events send something a little less volatile than the pod name.


6 Comments

  1. In my opinion, proposal No 2 is not acceptable. In the past we have stated that we should not put many requirements on the Helm package itself, so we cannot assume that the package declares a service for each deployment, and we cannot guarantee there will be at most one service per deployment. We can have 0 or more than 1 service; in either case we cannot assume there is one service per pod, so we cannot use the name of a service as the name of a v-server. The v-server will be the equivalent of a pod, so when the pod name changes, the v-server information should be rebuilt and updated in AAI. In the k8splugin there should be a "listener" that observes the status of resources (pods, deployments, services, etc.); a sketch of such a listener follows below. It would be good to reflect in the attributes of the v-server both the name of the pod and the name of its deployment, daemonset or statefulset. We may use vserver-name and vserver-name2: the first could be just the original name of the pod, and the second could be some identifier generated by us that lets us identify the pod under the specific deployment/daemonset/statefulset, which would simplify further lookups on changes and on the pods → vservers relations.
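
    A minimal sketch of that listener idea, using the kubernetes Python client for brevity (the real k8splugin is Go): watch pod lifecycle events and rebuild the corresponding vserver whenever a pod appears, changes or disappears. update_vserver and delete_vserver are stand-ins for the AAI calls (cf. the Proposal-1 sketch above), and the vserver-name2 scheme in owner_id is hypothetical.

pod_listener.py (sketch)
from kubernetes import client, config, watch

config.load_kube_config()
v1 = client.CoreV1Api()

def owner_id(pod):
    # hypothetical vserver-name2: "<deployment/daemonset/statefulset>/<uid>"
    owners = pod.metadata.owner_references or []
    parent = owners[0].name if owners else "standalone"
    return f"{parent}/{pod.metadata.uid}"

def update_vserver(**fields):
    print("PUT vserver", fields)    # stand-in for the Proposal-1 AAI PUT

def delete_vserver(name):
    print("DELETE vserver", name)   # stand-in for the AAI DELETE call

w = watch.Watch()
for ev in w.stream(v1.list_namespaced_pod, namespace="default"):
    pod, kind = ev["object"], ev["type"]  # ADDED / MODIFIED / DELETED
    if kind == "DELETED":
        delete_vserver(pod.metadata.name)
    else:
        update_vserver(pod_name=pod.metadata.name,
                       profile_name=owner_id(pod),
                       namespace=pod.metadata.namespace,
                       ip=pod.status.pod_ip, port=0)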

    Please also remember that for VNFs in ONAP we do not store any information about their endpoints, so today there is no equivalent of the port details of pods, services and endpoints in K8s. In my opinion, an equivalent of a K8s service or endpoint could be created in AAI, which would be useful for VNFs as well and would give information about the ports exposed by VNFCs/CNFCs. Today we have in AAI only information about networks, interfaces and IP addresses, and we should think about how to represent this information in AAI. If we assumed that networks could only be created by OVN it would be simple, but as I understand it we allow freedom in which CNI is used in K8s. Each CNI presents information about the created interfaces in a different way in the pod description; as a consequence, the AAI updater would have to implement methods to read network information for each CNI, or we will propose some mechanism to write extensions for new CNIs in the AAI updater, or we will propose some other mechanism – maybe metadata based (see the sketch below). Information about networks, interfaces and IP addresses is required for the LCM operations that exist today in ONAP.
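
    As one example of the metadata-based option: Multus publishes each attachment's resulting interface and IPs in a pod annotation, which an AAI updater could consume without knowing every CNI's native format. The annotation key below is the one current Multus versions use (older deployments used "networks-status"); other CNIs would still need their own readers or a similar convention, and the pod name is illustrative.

cni_metadata.py (sketch)
import json

from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

pod = v1.read_namespaced_pod(name="vfw-0", namespace="default")  # example pod
raw = (pod.metadata.annotations or {}).get(
    "k8s.v1.cni.cncf.io/network-status", "[]")
for net in json.loads(raw):
    # each entry carries the attachment name, interface and IP addresses
    print(net.get("name"), net.get("interface"), net.get("ips"))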

    We should store in AAI information about all the network interfaces from the pod description, but also those coming from the service description/endpoints.

  2. Hi all,

    which proposal was developed in the ONAP Dublin release?

    Thanks


  3. Hi Aniello,

    These proposals were made for the Frankfurt release; Proposal-1 was selected to update AAI and it is under development.

  4. Thamlur Raju

    As mentioned on the COE call this morning by Ritu Sood and myself, it would be useful for all AAI calls to go through the new AAI update code written for this page's purpose. Creating the client is likely the most work, and if the client is made generic enough and extendable, we would then be able to update the status of CNFs in AAI (per the proposal above) and other AAI info regarding K8s clusters (in my case, cloud/tenant/flavor creation/update).

    We are happy to write extension code for updating cloud/tenant/flavor if there is an extensible base to work on; a sketch of such a base follows below.

    FYI Huang Haibin Ruoyu Ying
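
    A hedged sketch of what such an extensible base could look like: one class owns authentication, headers and AAI's resource-version handling, and thin subclasses add resource-specific paths (vserver here; cloud-region/tenant/flavor extensions would be analogous). Endpoint, version and credentials are placeholders.

aai_client.py (sketch)
import uuid

import requests

class AAIClient:
    """Generic AAI REST helper; subclasses add resource-specific paths."""
    def __init__(self, base="https://aai.onap:8443/aai/v14",
                 auth=("AAI", "AAI"), app_id="k8splugin"):
        self.base, self.auth, self.app_id = base, auth, app_id

    def _headers(self):
        return {"X-FromAppId": self.app_id,
                "X-TransactionId": str(uuid.uuid4()),
                "Accept": "application/json"}

    def put(self, path, payload):
        url = f"{self.base}/{path}"
        # AAI rejects updates that omit the current resource-version
        current = requests.get(url, auth=self.auth, headers=self._headers(),
                               verify=False)
        if current.ok:
            payload["resource-version"] = current.json()["resource-version"]
        requests.put(url, json=payload, auth=self.auth,
                     headers=self._headers(), verify=False).raise_for_status()

class VserverAPI(AAIClient):
    def update(self, owner, region, tenant, vserver_id, payload):
        self.put(f"cloud-infrastructure/cloud-regions/cloud-region/{owner}/"
                 f"{region}/tenants/tenant/{tenant}/vservers/vserver/"
                 f"{vserver_id}", payload)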



  5. Marcus Williams, yes Marcus. I'll create a separate page right under this one for that work today.