...

Survey of Options

...

  • Appearance:
    • Tinc VPN appears as an IP-level network device
    • ZeroTier appears as an Ethernet-level network port
    • WireGuard appears as an IP-level network device
  • Connectivity provided:
    • Tinc VPN automatically gives full mesh routing
    • ZeroTier automatically gives full mesh routing
    • WireGuard gives a point-to-point connection, like SSH (mesh routing is a todo)
  • Node/Host Configuration:
    • A Tinc VPN host is configured with a public/private key pair, in a config file
    • A ZeroTier node is configured with a public/private key pair, from which its VL1 ZeroTier Address is derived
    • A WireGuard host is configured with a public/private key pair and an ACL, in a config file
  • Network Configuration:
    • A Tinc VPN network is configured by hosts exchanging (out-of-band) exported config files for a specified "network name" (see the tinc sketch after this list)
      • rest of network is exchanged in-band
    • A ZeroTier network is configured with knowledge of the "roots" and a VL2 ZeroTier Network ID (the VL1 ZeroTier Address of the controller plus a network number), as in the ZeroTier sketch after this list
      • rest of network is exchanged in-band
    • A WireGuard network is configured by hosts sharing public keys (out-of-band); hosts connect via the IP addresses associated with those keys (see the WireGuard sketch after this list)
      • IP roaming is exchanged in-band
  • Number of network connections:
    • Tinc VPN hosts can connect to many "network names" concurrently
    • ZeroTier nodes can connect to multiple VL2 ZeroTier Network IDs concurrently
    • WireGuard hosts can connect to many other hosts concurrently
  • Deployment:
    • Tinc VPN is deployed on the VM hosting the pods/containers/processes (could be in the container base image)
      • no explicit interoperability with kubernetes to manipulate pod/container network namespaces
    • ZeroTier is deployed on the VM hosting the pods/containers/processes (could be in the container base image)
      • no explicit interoperability with kubernetes to manipulate pod/container network namespaces
    • WireGuard is deployed on the VM hosting the pods/containers/processes (could be in the container base image)
      • no explicit interoperability with kubernetes to manipulate pod/container network namespaces
  • Single-Points-of-Failure:
    • Tinc VPN runs daemon processes on each host (one per network name), topology is peer-to-peer
    • ZeroTier runs a global "planet" root server called "Earth", apparently intended as a testing network and for casual communications
      • Unclear about how users can deploy their own "planet" root servers
      • Users can deploy their own "moon" root servers
    • WireGuard runs daemon processes on each host, topology is peer-to-peer
  • Scaling:
    • Tinc VPN can add new hosts to existing network names without altering configurations of existing hosts (invitations dynamically create a configuration on the server)
    • ZeroTier can add new nodes to existing network IDs without altering configurations of existing nodes (Network ID is obscure but public information)
      • Unclear whether adding new root servers requires a restart
    • WireGuard can add new hosts, but both ends of the connection must be updated so that each host appears in the ACL of the other's config file
  • Access Control:
    • Tinc VPN controls access via the exchange of exported host config files (an invitation is effectively an embedded host config file)
      • an invitation refers to the configuration on the server
    • ZeroTier nodes need to be authorised after attempting to connect to the network ID, but authorisation can be turned off to allow "public" networks
    • WireGuard controls access via the exchange of host public keys and the ACL in the host config file
  • tbc
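
A rough sketch of the tinc flow described above, assuming tinc 1.1's invitation mechanism; the network name "mynet", the host names and the address are placeholders:

  /etc/tinc/mynet/tinc.conf on hostA:
      Name = hostA
      ConnectTo = hostB

  /etc/tinc/mynet/hosts/hostB (exported by hostB and exchanged out-of-band):
      Address = hostb.example.org
      <hostB's public key block>

  Alternatively, using invitations (the invitation dynamically creates the new host's configuration on the inviting server):
      hostA$ tinc -n mynet invite hostC
      hostC$ tinc -n mynet join <invitation URL>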
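
Similarly, a minimal ZeroTier joining sequence might look like the following (the 16-hex-digit Network ID is a placeholder; its first 10 hex digits are the controller's VL1 ZeroTier Address):

  node$ zerotier-cli info                         (shows this node's own VL1 ZeroTier Address)
  node$ zerotier-cli join <16-hex-digit Network ID>
  (on the controller: authorise the new node's address, unless the network is configured as "public")
  node$ zerotier-cli listnetworks                 (shows the assigned managed IPs once authorised)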
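
For WireGuard, a minimal wg-quick style config file on one host might look like this (keys, addresses and names are placeholders); the mirror-image [Peer] entry is needed on the other host, which is why adding a host means updating both ends:

  /etc/wireguard/wg0.conf on host A:
      [Interface]
      PrivateKey = <host A private key>
      Address = 10.10.0.1/24
      ListenPort = 51820

      [Peer]
      PublicKey = <host B public key>
      AllowedIPs = 10.10.0.2/32            (the per-peer ACL: traffic is only accepted from / routed to these addresses)
      Endpoint = hostb.example.org:51820   (optional; IP roaming updates this in-band)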

...

  • Istio Envoy is deployed as a sidecar-per-pod
    • so on a single VM, there could be many such sidecars and resource usage may be higher
    • requires admin access to kubernetes
  • Istio Envoy performs mutual TLS authentication for pod-to-pod network communication (see the mesh-wide mTLS sketch after this list)
    • but for pods on the same VM, this might be unnecessary as the traffic would not appear on the network
    • appears to work only inside one kubernetes system
  • Istio Envoy provides control of routing with traffic management and policies
    • but this might not be needed if full mesh routing is intended everywhere
  • Istio Mixer, Pilot, Citadel and Galley servers may represent Single-Points-of-Failure in the environment, as well as requiring additional setup
  • Istio provides functionality over and above the VPN encryption of network traffic
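
A sketch of enabling mesh-wide mutual TLS, assuming the Mixer/Pilot/Citadel generation of Istio discussed above (newer releases replace this resource with PeerAuthentication):

  apiVersion: authentication.istio.io/v1alpha1
  kind: MeshPolicy
  metadata:
    name: default
  spec:
    peers:
    - mtls: {}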

...