...

  1. External attacker analyzes the captured traffic among services to steal secrets such as passwords and certificates
  2. Internal attacker analyzes the captured traffic among services to steal secrets such as passwords and certificates
  3. External attacker bombards the container services with new connections, leading to a large number of forked processes and threads and causing resource issues for other workloads (containers) in the system
  4. Internal attacker bombards the container services with new connections, leading to a large number of forked processes and threads and causing resource issues for other workloads (containers) in the system
  5. External attacker exploits downloads of containers from repositories to tamper with them and inject malicious code
  6. Internal attacker exploits downloads of containers from repositories to tamper with them and inject malicious code
  7. External attacker introduces malicious VM into ONAP environment to steal data and subvert operations
  8. Internal attacker introduces malicious VM into ONAP environment to steal data and subvert operations
  9. External attacker introduces malicious pod into ONAP environment to steal data and subvert operations
  10. Internal attacker introduces malicious pod into ONAP environment to steal data and subvert operations
  11. External attacker introduces malicious container into ONAP environment to steal data and subvert operations
  12. Internal attacker introduces malicious container into ONAP environment to steal data and subvert operations
  13. External attacker introduces malicious process into ONAP environment to steal data and subvert operations
  14. Internal attacker introduces malicious process into ONAP environment to steal data and subvert operations
  15. External attacker introduces malicious external-system into ONAP environment to steal data and subvert operations
  16. Internal attacker introduces malicious external-system into ONAP environment to steal data and subvert operations

Discussion

ONAP Operating Environment

Example from Cloud Native Deployment:

  • ubuntu@a-cd-one:~$ kubectl get pods --all-namespaces
    (shows 210 pods in onap namespace)
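Per-namespace pod counts such as the 210 pods above can be derived with a short pipeline. The sketch below runs against a small hypothetical sample instead of a live cluster; the pod names and the live command in the comment are illustrative assumptions.

```shell
# Hypothetical sample of `kubectl get pods --all-namespaces` output
# (pod names here are made up for illustration).
sample="NAMESPACE     NAME        READY   STATUS    RESTARTS
onap          aaf-cm-0    1/1     Running   0
onap          so-0        1/1     Running   0
kube-system   coredns-0   1/1     Running   0"

# Count pods in the onap namespace; against a live cluster this would be:
#   kubectl get pods --all-namespaces --no-headers | awk '$1 == "onap"' | wc -l
echo "$sample" | awk '$1 == "onap"' | wc -l   # prints 2 for this sample
```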


Type                                | VMs | Containers
Full Cluster (14 + 1) - recommended | 15  | 248 total



Example from Open Wireless Laboratory (OWL) at Wireless Information Network Laboratory (WINLAB):

  • There are currently three Ubuntu 18.04 servers: node1-1, node2-1 and node2-2, which are managed by OpenStack.
    Node1-1 is the controller node, and node2-1 and node2-2 are compute nodes.
    We have installed ONAP using the OOM Rancher/Kubernetes instructions into five VMs.

Development

  • There is a transition from http ports to https ports, so that communications are protected by TLS encryption.
  • However, the transition is piecemeal and spread over multiple ONAP releases, so individual projects still have vulnerabilities due to intra-ONAP dependencies, e.g. ONAP JIRA issue OJSI-97, out of the full set of JIRA issues matching the query text ~ "plain text http".
  • A node-to-node VPN (working at the level of the VM or physical servers that host the Kubernetes pods/docker containers of ONAP) would provide blanket coverage of all communications with encryption.
  • A node-to-node VPN is both
    • an immediate stopgap solution in the short-term to cover the exposed plain text HTTP ports
    • an extra layer of security in the long-term to thwart unforeseen gaps in the use of HTTPS ports

Discussion

  • There has already been discussion and recommendation for using Istio https://istio.io/
    • Istio Envoy is deployed within each pod using sidecar-injection, then stays in the configuration when the pods are restarted
    • Istio Envoy probably appears within each pod as a network bridge, such as the Kubernetes cluster networking bridge cbr0, thereby controlling all network traffic within the pod
    • Istio Envoy provides full mesh routing but can also provide control of routing with traffic management and policies
    • Istio Envoy also provides telemetry in addition to the security of mutual TLS authentication
    • Istio Citadel runs in the environment as the certificate authority / PKI supporting mutual TLS authentication
    • Istio appears to have only a single overall security domain (i.e. the environment that includes Mixer, Pilot, Citadel and Galley), though it contains many options to distinguish different services, users, roles and authorities
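As a concrete illustration of the mutual TLS discussion above: in the Istio 1.x releases of this era (the ones built around Mixer, Pilot, Citadel and Galley), mesh-wide mTLS was switched on with a MeshPolicy resource. This is a hedged sketch based on Istio 1.x documentation, not a tested ONAP configuration:

```yaml
# Sketch: mesh-wide mutual TLS for Istio 1.x (Citadel issues the certificates).
# The MeshPolicy resource must be named "default"; applying it makes the Envoy
# sidecars require mTLS for service-to-service traffic across the mesh.
apiVersion: authentication.istio.io/v1alpha1
kind: MeshPolicy
metadata:
  name: default
spec:
  peers:
  - mtls: {}
```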

...

  • Appearance:
    • Tinc VPN appears as IP level network device
    • ZeroTier appears as Ethernet level network port
    • WireGuard appears as IP level network device
  • Connectivity provided:
    • Tinc VPN automatically gives full mesh routing
    • ZeroTier automatically gives full mesh routing
    • WireGuard gives point-to-point connection like SSH (mesh routing is a todo)
  • Node/Host Configuration:
    • Tinc VPN host is configured with public/private key pair, in a config file
    • ZeroTier node is configured with public/private key pair, then generates a VL1 ZeroTier Address
    • WireGuard host is configured with public/private key pair and ACL, in a config file
  • Network Configuration:
    • Tinc VPN network is configured by hosts exchanging (out-of-band) exported config files for a specified "network name"
      • rest of network is exchanged in-band
    • ZeroTier network is configured with knowledge of "roots" and with VL2 ZeroTier Network ID (VL1 ZeroTier Address of the controller and network number)
      • rest of network is exchanged in-band
    • WireGuard network is configured by hosts sharing public keys (out-of-band), connect via IP Address corresponding to keys
      • IP roaming is exchanged in-band
  • Number of network connections:
    • Tinc VPN hosts can connect to many "network names" concurrently
    • ZeroTier nodes can connect to multiple VL2 ZeroTier Network IDs concurrently
    • WireGuard hosts can connect to many other hosts concurrently
  • Deployment:
    • Tinc VPN is deployed on the VM hosting the pods/containers/processes
      • could be in the container base image
      • no explicit interoperability with kubernetes to manipulate pod/container network namespaces
    • ZeroTier is deployed on the VM hosting the pods/containers/processes
      • could be in the container base image
      • no explicit interoperability with kubernetes to manipulate pod/container network namespaces
    • WireGuard is deployed on the VM hosting the pods/containers/processes
      • could be in the container base image
      • no explicit interoperability with kubernetes to manipulate pod/container network namespaces
  • Single-Points-of-Failure:
    • Tinc VPN runs daemon processes on each host (one per network name), topology is peer-to-peer
    • ZeroTier runs a global "planet" root server called "Earth", apparently as a testing network and for casual communications
      • Unclear about how users can deploy their own "planet" root servers
      • Users can deploy their own "moon" root servers
    • WireGuard runs daemon processes on each host, topology is peer-to-peer
  • Scaling:
    • Tinc VPN can add new hosts to existing network names without altering configurations of existing hosts
      • invitations dynamically create a configuration on the server
    • ZeroTier can add new nodes to existing network IDs without altering configurations of existing nodes (the Network ID is obscure but public information)
      • Unclear whether adding new root servers requires a restart
    • WireGuard can add new hosts, but both ends of the connection must be updated so that each host's public key is present in the ACL of the other's config file
  • Access Control:
    • Tinc VPN controls access by the exchange of exported host config files
      • an invitation refers to the configuration on the server
    • ZeroTier nodes need to be authorised after attempting to connect to the network ID, but authorisation can be turned off to allow "public" networks
    • WireGuard controls access by the exchange of host public keys and the ACL in the host config file
  • Based on example from Cloud Native Deployment:
    • Tinc VPN would be deployed on 15 VMs, compared to 210 pods
    • ZeroTier would be deployed on 15 VMs, compared to 210 pods
    • WireGuard would be deployed on 15 VMs, compared to 210 pods
  • Based on example from Open Wireless Laboratory:
    • Tinc VPN would be deployed on 3 servers or 5 VMs, compared to 210 pods
    • ZeroTier would be deployed on 3 servers or 5 VMs, compared to 210 pods
    • WireGuard would be deployed on 3 servers or 5 VMs, compared to 210 pods
  • tbc
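The host-configuration and access-control model compared above can be made concrete with a minimal WireGuard config sketch. All keys, addresses and hostnames below are placeholders, not values from any ONAP deployment; real keys would be generated with wg genkey.

```
# Sketch of /etc/wireguard/wg0.conf on one ONAP node (placeholder values).
[Interface]
PrivateKey = <this-node-private-key>   # generated with: wg genkey
Address    = 10.10.0.1/24              # VPN-internal address of this node
ListenPort = 51820

# One [Peer] section per other node; the list of peer public keys is
# effectively the ACL mentioned above.
[Peer]
PublicKey  = <peer-node-public-key>
AllowedIPs = 10.10.0.2/32              # only this VPN address is accepted from the peer
Endpoint   = peer-node.example:51820   # out-of-band knowledge of the peer
```

Adding a node therefore means generating a key pair on the new host and adding a matching [Peer] entry on every existing host, which is the "both ends must be updated" property noted above.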


Comparison to Istio

  • Istio Envoy is deployed as a sidecar-per-pod
    • so on a single VM, there could be many such sidecars and resource usage may be higher
    • requires admin access to kubernetes
  • Istio Envoy performs mutual TLS authentication for pod-to-pod network communication
    • but for pods on the same VM, this might be unnecessary as the traffic would not appear on the network
    • appears to work only inside one kubernetes system
  • Istio Envoy provides control of routing with traffic management and policies
    • this might not be needed if full mesh routing is intended everywhere
  • Istio Mixer, Pilot, Citadel and Galley servers may represent Single-Points-of-Failure in the environment, as well as additional setup required
  • Istio provides functionality over and above the VPN encryption of network traffic


Questions

  1. Is it necessary to encrypt pod-to-pod communications if both ends are on the same VM? The traffic would not appear on the network.
  2. What is the actual resource overhead for each VPN/sidecar (e.g. in terms of RAM, CPU, disk, I/O, etc)?
  3. tbc