...

  1. External attacker analyzes the captured traffic among services to steal secrets such as passwords and certificates
  2. Internal attacker analyzes the captured traffic among services to steal secrets such as passwords and certificates
  3. External attacker bombards the container services with new connections, leading to a large number of forked processes and threads and causing resource issues on other workloads (containers) in the system
  4. Internal attacker bombards the container services with new connections, leading to a large number of forked processes and threads and causing resource issues on other workloads (containers) in the system
  5. External attacker exploits downloads of containers from repositories to tamper with them and inject malicious code
  6. Internal attacker exploits downloads of containers from repositories to tamper with them and inject malicious code
  7. External attacker introduces malicious VM into ONAP environment to steal data and subvert operations
  8. Internal attacker introduces malicious VM into ONAP environment to steal data and subvert operations
  9. External attacker introduces malicious pod into ONAP environment to steal data and subvert operations
  10. Internal attacker introduces malicious pod into ONAP environment to steal data and subvert operations
  11. External attacker introduces malicious container into ONAP environment to steal data and subvert operations
  12. Internal attacker introduces malicious container into ONAP environment to steal data and subvert operations
  13. External attacker introduces malicious process into ONAP environment to steal data and subvert operations
  14. Internal attacker introduces malicious process into ONAP environment to steal data and subvert operations
  15. External attacker introduces malicious external-system into ONAP environment to steal data and subvert operations
  16. Internal attacker introduces malicious external-system into ONAP environment to steal data and subvert operations

ONAP Operating Environment

Example from Cloud Native Deployment:



Type                                  VMs   Pods   Containers
Full Cluster (14 + 1) - recommended   15    28     248 total



Example from Open Wireless Laboratory (OWL) at Wireless Information Network Laboratory (WINLAB):

  • There are currently three Ubuntu 18.04 servers: node1-1, node2-1 and node2-2, which are managed by OpenStack.
    Node1-1 is the controller node, and node2-1 and node2-2 are compute nodes.
    We have installed ONAP into five VMs using the OOM Rancher/Kubernetes instructions.

Discussion

  • There has already been discussion and recommendation for using Istio https://istio.io/
    • Istio Envoy is deployed within each pod using sidecar-injection, and remains in the configuration when the pods are restarted (see the sketch after this list)
    • Istio Envoy probably appears within each pod as a network bridge, similar to the Kubernetes cluster networking bridge cbr0, thereby controlling all network traffic within the pod
    • Istio Envoy provides full mesh routing but can also provide control of routing with traffic management and policies
    • Istio Envoy also provides telemetry in addition to the security of mutual TLS authentication
    • Istio Citadel is run in the environment as the certificate authority / PKI supporting the mutual TLS authentication
    • Istio appears to have only a single overall security domain (i.e. the environment that includes Mixer, Pilot, Citadel and Galley), though it does contain many options to distinguish different services, users, roles and authorities
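
As an illustration of the sidecar-injection model, the following is a minimal sketch of enabling automatic injection, assuming Istio is already installed and that the ONAP components run in a namespace named onap (the namespace name is a placeholder):

    # label the namespace so Istio's injection webhook adds an Envoy sidecar to new pods
    kubectl label namespace onap istio-injection=enabled
    # recreate existing pods so they are re-admitted through the webhook
    # (rollout restart requires kubectl 1.15+; older clusters can delete the pods instead)
    kubectl -n onap rollout restart deployment

Because the sidecar is injected at pod-admission time, it reappears automatically whenever the pods are restarted, which matches the behaviour noted above.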


The sections below are for gathering thoughts on alternative solutions

Discussion of Tinc VPN

  • VPN appears to the IP level network code as a normal network device
  • Automatic full mesh routing. Regardless of how you set up the tinc daemons to connect to each other, VPN traffic is always (if possible) sent directly to the destination, without going through intermediate hops.
  • Easily expand your VPN. When you want to add nodes to your VPN, all you have to do is add an extra configuration file, there is no need to start new daemons or create and configure new devices or network interfaces
  • Ability to bridge ethernet segments. You can link multiple ethernet segments together to work like a single segment, allowing you to run applications and games that normally only work on a LAN over the Internet.
  • Runs on many operating systems and supports IPv6. Currently Linux, FreeBSD, OpenBSD, NetBSD, OS X, Solaris, Windows 2000, XP, Vista, 7 and 8 platforms are supported. tinc also has full support for IPv6.
  • First, create the initial configuration files and public/private keypairs using the following command:

    tinc -n NETNAME init NAME

    Second, use ‘tinc -n NETNAME add ...’ to further configure tinc. Finally, export your host configuration file using ‘tinc -n NETNAME export’ and send it to those people or computers you want tinc to connect to. They should send you their host configuration file back, which you can import using ‘tinc -n NETNAME import’

  • Connections specified using ‘ConnectTo’ are so-called meta-connections. Tinc daemons exchange information about all the other daemons they know about via these meta-connections. After learning about all the daemons in the VPN, tinc will create other connections as necessary in order to communicate with them.
  • Invitations are an easy way to add new nodes to an existing VPN. Invitations can be created on an existing node using the tinc invite command, which generates a relatively short URL that can be given to someone else, who uses the tinc join command to automatically set up tinc so it can connect to the inviting node (see the sketch after this list)
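
To make this workflow concrete, here is a minimal sketch of joining two hosts into a network; the network name onap, node names nodeA/nodeB, and the subnet are placeholders, and exact option names may differ between tinc 1.1 releases:

    # on nodeA: create the configuration and keypair, and declare the subnet it offers
    tinc -n onap init nodeA
    tinc -n onap add subnet 10.0.1.0/24
    # export nodeA's host config file and exchange it out-of-band with nodeB
    tinc -n onap export > nodeA.host
    tinc -n onap import < nodeB.host
    tinc -n onap start

    # alternatively, invite nodeB from nodeA (prints a short invitation URL) ...
    tinc -n onap invite nodeB
    # ... and on nodeB, join using that URL
    tinc -n onap join <INVITATION-URL>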

...

  • WireGuard aims to be as easy to configure and deploy as SSH. A VPN connection is made simply by exchanging very simple public keys – exactly like exchanging SSH keys – and all the rest is transparently handled by WireGuard. It is even capable of roaming between IP addresses, just like Mosh. There is no need to manage connections, be concerned about state, manage daemons, or worry about what's under the hood
  • WireGuard securely encapsulates IP packets over UDP. You add a WireGuard interface, configure it with your private key and your peers' public keys, and then you send packets across it. All issues of key distribution and pushed configurations are out of scope of WireGuard. Instead, it mimics the model of SSH and Mosh; both parties have each other's public keys, and then they're simply able to begin exchanging packets through the interface
  • WireGuard works by adding a network interface (or multiple), like eth0 or wlan0, called wg0 (or wg1, wg2, wg3, etc). This network interface can then be configured normally using ifconfig(8) or ip-address(8), with routes for it added and removed using route(8) or ip-route(8), and so on with all the ordinary networking utilities. The specific WireGuard aspects of the interface are configured using the wg(8) tool. This interface acts as a tunnel interface (see the sketch after this list)
  • At the heart of WireGuard is a concept called Cryptokey Routing, which works by associating public keys with a list of tunnel IP addresses that are allowed inside the tunnel. Each network interface has a private key and a list of peers. Each peer has a public key. Public keys are short and simple, and are used by peers to authenticate each other. They can be passed around for use in configuration files by any out-of-band method, similar to how one might send their SSH public key to a friend for access to a shell server
  • The client configuration contains an initial endpoint of its single peer (the server), so that it knows where to send encrypted data before it has received encrypted data. The server configuration doesn't have any initial endpoints of its peers (the clients). This is because the server discovers the endpoint of its peers by examining from where correctly authenticated data originates. If the server itself changes its own endpoint, and sends data to the clients, the clients will discover the new server endpoint and update the configuration just the same. Both client and server send encrypted data to the most recent IP endpoint for which they authentically decrypted data. Thus, there is full IP roaming on both ends
  • WireGuard sends and receives encrypted packets using the network namespace in which the WireGuard interface was originally created. This means that you can create the WireGuard interface in your main network namespace, which has access to the Internet, and then move it into a network namespace belonging to a Docker container as that container's only interface. This ensures that the only possible way that container is able to access the network is through a secure encrypted WireGuard tunnel
  • The most obvious usage of this is to give containers (like Docker containers, for example) a WireGuard interface as its sole interface.

  • A less obvious usage, but extremely powerful nonetheless, is to use this characteristic of WireGuard for redirecting all of your ordinary Internet traffic over WireGuard.

  • It turns out that we can route all Internet traffic via WireGuard using network namespaces, rather than the classic routing table hacks.
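
A minimal sketch of both usages, based on the standard ip(8)/wg(8) commands; the interface names, addresses, key file, peer key, endpoint and namespace name are placeholders:

    # standard setup: create the interface, assign a tunnel address, set keys and a peer
    ip link add dev wg0 type wireguard
    ip address add dev wg0 10.0.0.1/24
    wg set wg0 private-key ./privatekey
    # allowed-ips is the Cryptokey Routing list for this peer
    wg set wg0 peer <PEER_PUBLIC_KEY> allowed-ips 10.0.0.2/32 endpoint peer.example.com:51820
    ip link set wg0 up

    # container variant: configure the interface, then move it into the container's
    # network namespace; WireGuard keeps sending/receiving via the namespace where
    # the interface was created, so the tunnel becomes the container's only network path
    ip link add dev wg1 type wireguard
    wg set wg1 private-key ./privatekey peer <PEER_PUBLIC_KEY> allowed-ips 10.0.0.1/32 endpoint peer.example.com:51820
    ip link set wg1 netns <CONTAINER_NETNS>
    ip -n <CONTAINER_NETNS> address add 10.0.0.2/24 dev wg1
    ip -n <CONTAINER_NETNS> link set wg1 up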

WireGuard Todo

Mesh Networking Tools. It is possible to build a mesh network out of WireGuard using WireGuard as the building block. Write a tool that builds meshes and has peers discover each other, taking into account important authentication and trust properties. Build on top of wg-dynamic.

Comparisons

  • Appearance:
    • Tinc VPN appears as IP level network device
    • ZeroTier appears as Ethernet level network port
    • WireGuard appears as IP level network device
  • Connectivity provided:
    • Tinc VPN automatically gives full mesh routing
    • ZeroTier automatically gives full mesh routing
    • WireGuard gives point-to-point connections, like SSH (mesh routing is a todo)
  • Node/Host Configuration:
    • Tinc VPN host is configured with public/private key pair, in a config file
    • ZeroTier node is configured with public/private key pair, then generates a VL1 ZeroTier Address
    • WireGuard host is configured with public/private key pair and ACL, in a config file
  • Network Configuration:
    • Tinc VPN network is configured by hosts exchanging (out-of-band) exported config files for a specified "network name"
      • rest of network is exchanged in-band
    • ZeroTier network is configured with knowledge of "roots" and with VL2 ZeroTier Network ID (VL1 ZeroTier Address of the controller and network number)
      • rest of network is exchanged in-band
    • WireGuard network is configured by hosts sharing public keys (out-of-band), connect via IP Address corresponding to keys
      • IP roaming is exchanged in-band
  • Number of network connections:
    • Tinc VPN hosts can connect to many "network names" concurrently
    • ZeroTier nodes can connect to multiple VL2 ZeroTier Network IDs concurrently
    • WireGuard hosts can connect to many other hosts concurrently
  • Deployment:
    • Tinc VPN is deployed on the VM hosting the pods/containers/processes
      • could be in the container base image
      • no explicit interoperability with kubernetes to manipulate pod/container network namespaces
    • ZeroTier is deployed on the VM hosting the pods/containers/processes
      • could be in the container base image
      • no explicit interoperability with kubernetes to manipulate pod/container network namespaces
    • WireGuard is deployed on the VM hosting the pods/containers/processes
      • could be in the container base image
      • no explicit interoperability with kubernetes to manipulate pod/container network namespaces
  • Single-Points-of-Failure:
    • Tinc VPN runs daemon processes on each host (one per network name), topology is peer-to-peer
    • ZeroTier runs a global "planet" root server called "Earth", apparently as a testing network for casual communications
      • Unclear how users can deploy their own "planet" root servers
      • Users can deploy their own "moon" root servers
    • WireGuard runs daemon processes on each host, topology is peer-to-peer
  • Scaling:
    • Tinc VPN can add new hosts to existing network names without altering configurations of existing hosts
      • invitations dynamically create a configuration on the server
    • ZeroTier can add new nodes to existing network IDs without altering configurations of existing nodes (Network ID is obscure but public information)
      • Unclear whether adding new root servers requires a restart
    • WireGuard can add new hosts, but both ends of the connection must be updated so that each peer is present in the ACL of the other's host config file
  • Access Control:
    • Tinc VPN controls access via the exchange of exported host config files
      • an invitation refers to the configuration on the server
    • ZeroTier nodes need to be authorised after attempting to connect to the network ID, though authorisation can be turned off to allow "public" networks
    • WireGuard controls access via the exchange of host public keys and the ACL in the host config file (see the hypothetical config after this list)
  • Based on example from Cloud Native Deployment:
    • Tinc VPN would be deployed on 15 VMs, compared to 28 pods
    • ZeroTier would be deployed on 15 VMs, compared to 28 pods
    • WireGuard would be deployed on 15 VMs, compared to 28 pods
  • Based on example from Open Wireless Laboratory:
    • Tinc VPN would be deployed on 3 servers or 5 VMs, compared to 28 pods
    • ZeroTier would be deployed on 3 servers or 5 VMs, compared to 28 pods
    • WireGuard would be deployed on 3 servers or 5 VMs, compared to 28 pods
  • tbc
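
To illustrate the "ACL in host config file" point under Access Control, a hypothetical WireGuard host config (keys, addresses and endpoint are placeholders) might look like:

    # /etc/wireguard/wg0.conf
    [Interface]
    PrivateKey = <HOST_PRIVATE_KEY>
    ListenPort = 51820

    [Peer]
    PublicKey = <PEER_PUBLIC_KEY>
    # AllowedIPs is the per-peer ACL: only these tunnel addresses are accepted for this peer
    AllowedIPs = 10.0.0.2/32
    Endpoint = peer.example.com:51820

Such a file can be loaded with ‘wg setconf wg0 /etc/wireguard/wg0.conf’. The ZeroTier counterpart is joining by network ID (‘zerotier-cli join <NETWORK_ID>’) and then authorising the node at the controller, unless the network is "public".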


Comparison to Istio

  • Istio Envoy is deployed as a sidecar-per-pod
    • so on a single VM, there could be many such sidecars and resource usage may be higher
    • requires admin access to kubernetes
  • Istio Envoy performs mutual TLS authentication for pod-to-pod network communication
    • but for pods on the same VM, this might be unnecessary as the traffic would not appear on the network
    • appears to work only inside one kubernetes system
  • Istio Envoy provides control of routing with traffic management and policies
    • this might not be needed if full mesh routing is intended everywhere
  • Istio Mixer, Pilot, Citadel and Galley servers may represent Single-Points-of-Failure in the environment, as well as additional setup required
  • Istio provides functionality over and above the VPN encryption of network traffic


Questions

  1. Is it necessary to encrypt pod-to-pod communications if both ends are on the same VM? The traffic would not appear on the network.
  2. What is the actual resource overhead for each VPN/sidecar (e.g. in terms of RAM, CPU, disk, I/O, etc)?
  3. tbc