
For the Beijing release of ONAP on OOM, accessing the ONAP Portal from the user's own environment (laptop, etc.) was a frequently requested feature.

To achieve this, we decided to expose the portal application's port 8989 through a K8s LoadBalancer object. This means your Kubernetes deployment needs to support load-balancing capabilities (Rancher's default Kubernetes deployment implements this out of the box); otherwise, this approach will not work.

This has some non-obvious implications in a clustered Kubernetes environment, specifically where the K8s cluster nodes communicate with each other over a private network that is not publicly accessible (e.g. OpenStack VMs on a private internal network).

Typically, to access the K8s nodes publicly, a public address is assigned. In OpenStack, this is a floating IP address (or, if your provider network is public/external and can attach directly to your K8s nodes, your nodes already have direct public access).

When the portal-app chart is deployed, a K8s service is created that instantiates a load balancer (in its own separate container). In a multi-node K8s cluster deployment, the LB binds to the private interface of one of the nodes, as in the example below (i.e. in OpenStack, this is the private IP of that specific K8s cluster node VM only).

Then, to access the portal on port 8989 from outside the OpenStack private network to which the K8s cluster is connected, the user needs to look up the floating IP address that corresponds to the private IP of the specific K8s node VM where the Portal service "portal-app" (shown below) is deployed.

kubectl -n onap get services | grep "portal-app"

NAME         TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)                                        AGE   LABELS
portal-app   LoadBalancer                              8989:30215/TCP,8006:30213/TCP,8010:30214/TCP   1d    app=portal-app,release=dev
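To know which node VM's floating IP to look up, you first need to find the node the portal-app pod landed on. A minimal sketch, assuming the "onap" namespace and the "app=portal-app" label shown in the service output above:

```shell
# List the portal-app pod with its node assignment (the NODE column
# shows which K8s cluster node VM is hosting it).
kubectl -n onap get pods -l app=portal-app -o wide
```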

In this example, the public floating IP associated with that private IP can be obtained through the Horizon GUI or the OpenStack CLI for your tenant (by running "openstack server list" and looking for the specific K8s node where the portal-app service is deployed). This floating IP is then used in your /etc/hosts to map the fixed DNS aliases required by the ONAP Portal.
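The two steps above can be sketched as follows. The server name is illustrative, and the DNS aliases assume the standard *.api.simpledemo.onap.org names used by Beijing-era ONAP deployments; replace them with the aliases your deployment expects:

```shell
# 1. Find the floating IP of the K8s node VM hosting portal-app
#    (server name is illustrative):
openstack server list | grep k8s-node-1

# 2. Map the fixed ONAP Portal DNS aliases to that floating IP in
#    /etc/hosts, substituting the address found above for <floating-ip>:
cat >> /etc/hosts <<'EOF'
<floating-ip>  portal.api.simpledemo.onap.org
<floating-ip>  vid.api.simpledemo.onap.org
<floating-ip>  sdc.api.simpledemo.onap.org
<floating-ip>  policy.api.simpledemo.onap.org
<floating-ip>  aai.api.simpledemo.onap.org
EOF
```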


If you are not using floating IPs in your Kubernetes deployment and are instead attaching a public IP address directly (e.g. via your public provider network) to your K8s node VMs' network interface, then the output of 'kubectl -n onap get services | grep "portal-app"' will show your public IP instead of the private network's IP. In that case you can grab this public IP directly (rather than having to find the floating IP first) and map it in /etc/hosts.

For K8s clusters that use Rancher as their CMP (container management platform) only: if your Kubernetes nodes/VMs use floating IPs and you want the floating IP to appear as the Kubernetes node's "EXTERNAL-IP", you need to make sure that the "CATTLE_AGENT_IP" argument of the "docker run" command that runs the Rancher agent on the Kubernetes node(s) is set to the floating IP and not the VM's private IP.
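A sketch of what that looks like, assuming a Rancher 1.x-style agent registration; the agent version, Rancher server address, and registration token are placeholders you get from your own Rancher UI:

```shell
# Run the Rancher agent with CATTLE_AGENT_IP overridden to the node's
# floating IP, so Kubernetes reports it as the node's EXTERNAL-IP.
sudo docker run \
  -e CATTLE_AGENT_IP="<floating-ip>" \
  --rm --privileged \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /var/lib/rancher:/var/lib/rancher \
  rancher/agent:<version> \
  http://<rancher-server>:8080/v1/scripts/<registration-token>
```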

Ensure you've disabled any proxy settings in the browser you are using to access the portal, and then simply access the familiar URL:
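With the /etc/hosts mapping in place, the Portal login page for Beijing-era deployments is typically reachable at the following address (alias and path assumed from standard ONAP deployments, not confirmed by this page):

```shell
# Open the assumed Portal login URL from the command line:
xdg-open http://portal.api.simpledemo.onap.org:8989/ONAPPORTAL/login.htm
```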

Other things we tried:

We went through using Kubernetes port forwarding, but thought it a little clunky for the end user to have to use a script to open up port-forwarding tunnels to each K8s pod that provides a portal application widget.
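A sketch of why this felt clunky: each portal-related service needs its own tunnel kept open in the background. Service names other than portal-app are illustrative:

```shell
# One port-forward tunnel per service, each as a background job the user
# must start (and keep alive) before the portal becomes usable.
kubectl -n onap port-forward svc/portal-app 8989:8989 &
kubectl -n onap port-forward svc/sdc-fe 8181:8181 &   # service name illustrative
```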

We considered bringing back the VNC chart with a different image, but there were many issues (resolution, lack of volume mounts, dynamic /etc/hosts updates, file upload) that were a tall order to solve in time for the Beijing release.



  1. Hi,

     Should this IP in the list be the Kubernetes master IP, not the load balancer IP?

     Testing on the OOM master, the portal tries to access SDC via

     So the Kubernetes master IP should be assigned to

  2. In Casablanca there is actually a mechanism to modify the, and the data loaded into, the portal application catalog so that a K8s host name with NodePorts can be used to access portal/sdc/vid/aai etc.