The vIPsec use case is used to test the available HPA features as well as HPA pluggability (applying QAT). This page briefly introduces the IPsec use case implemented through OpenStack and the future plan with Kubernetes.


I. IPsec and its Architecture Overview

IPsec is also known as Internet Protocol Security or IP Security Protocol. It is a protocol suite used to protect traffic at and above the IP layer. Normally, one can choose the Authentication Header (AH), the Encapsulating Security Payload (ESP), or both to provide data origin authentication, data integrity and confidentiality. IPsec also supports two modes. One is tunnel mode, which passes the secured traffic through a tunnel. The other is transport mode, which passes the traffic directly with the payload encrypted/hashed.
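For reference, the two modes differ in what gets encapsulated. In tunnel mode, which the gateways in this use case rely on, ESP wraps the entire original IP packet inside a new outer IP header:

[ Outer IP header | ESP header | Original IP header | Payload | ESP trailer | ESP ICV ]

In transport mode, only the data above the original IP header is protected.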

Here, the IPsec use case is implemented in tunnel mode, using the ESP protocol to protect data integrity, confidentiality and authentication. The whole workflow consists of a packet source machine, two IPsec gateways and a sink. The detailed architecture looks like this:



The overall IPsec functionality is achieved through VPP based on DPDK. VPP acts as the gateway application that transfers the traffic between the networks.

The detailed network topology looks like this:

II. vIPsec implemented through OpenStack

Our first implementation of the IPsec use case is based on OpenStack, which contains all the resources we need. To bring up vIPsec with better performance, we first make use of SRIOV to efficiently share the PCI resource and optimize performance. The 'IPSec External Network' is then composed through the SRIOV NIC card, and the configuration is conveyed through the heat template. A sample port configuration looks like this:

  vipsec_A_private_2_port:
    type: OS::Neutron::Port
    properties:
      allowed_address_pairs: [{ "ip_address": { get_param: vpg_private_ip_0 }}]
      network: { get_resource: protected_ipsec_network }  # The network here is based on a provider network that supports SRIOV
      binding:vnic_type: { get_param: vipsec_private_2_port_vnic_type }  # The vnic_type is set to 'direct' in order to use SRIOV
      fixed_ips: [{"subnet": { get_resource: ipsec_private_subnet }, "ip_address": { get_param: vipsec_A_private_ip_2 }}]
      security_groups:
      - { get_resource: security_group_ipsec }
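
The vnic_type parameter referenced above is what actually enables SRIOV for the port. As a minimal sketch (the parameter name mirrors the port definition above; the exact template may differ), it could be declared in the parameters section of the heat template like this:

parameters:
  vipsec_private_2_port_vnic_type:
    type: string
    description: VNIC type of the IPsec port; 'direct' requests an SRIOV VF
    default: direct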


In addition to SRIOV, we'd also like to add QAT support to this use case. Since VPP itself supports QAT, extra configuration can be added to invoke the Cryptodev API inside VPP (powered by DPDK) to offload the software-based encryption/decryption to the QAT hardware. A sample configuration looks like this:

unix {
        exec /opt/config/ipsec.conf
        nodaemon
        cli-listen /run/vpp/cli.sock
        log /tmp/vpp.log
}

cpu {
        main-core 0
        corelist-workers 1
}

dpdk {
        socket-mem 1024
        log-level debug
        dev default {
                num-tx-desc 1024
                num-rx-desc 1024
        }
        dev 0000:00:05.0
        {
                workers 0
        }
        dev 0000:00:06.0
        {
                workers 0
        }
        vdev crypto_aesni_mb0

        # Option: QAT hardware acceleration.
        enable-cryptodev
        dev 0000:03:01.1   # Two PCI interfaces from QAT
        dev 0000:03:01.2
}


III. cIPsec implemented through Kubernetes

cIPsec is a new implementation targeted for the Frankfurt release. This time, the VIM containing these resources is no longer OpenStack but Kubernetes. The implementation is planned to be achieved with the help of the functionality provided in Extending HPA for K8S. The K8s plugin residing in MC (MultiCloud) will handle the job of scheduling IPsec. As it is implemented in Kubernetes, IPsec needs to be packaged as a helm chart that will be processed by the K8s plugin.

#Helm chart architecture
----- Chart.yaml
      values.yaml
      templates/
        -------- deployment.yaml
                 xx-network.yaml
                 ...
      charts/
        -------- remote-ipsec-gateway/
                   -------- Chart.yaml
                            values.yaml
                            templates/
                 pktgen/
                   -------- Chart.yaml
                            values.yaml
                            templates/
                 sink/
                   -------- Chart.yaml
                            values.yaml
                            templates/ 
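
As a rough sketch of how the sub-charts could be parameterized (the keys and values below are illustrative, not the final chart contents), each sub-chart's values.yaml would carry its per-component settings:

# values.yaml of a sub-chart (illustrative sketch)
image:
  repository: example-registry/vipsec-vpp   # hypothetical image location
  tag: latest
  pullPolicy: IfNotPresent
replicaCount: 1
networks:
  protected: protected-ipsec-network        # name of the attached network, rendered by the network template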

However, since the helm chart will be transmitted as an artifact inside the CSAR package and SO is currently not able to handle a container-based CSAR package, a heat template and related policies need to be mapped to the helm chart, containing the requirements needed for IPsec.

#Requirements designed in the heat template and the policies
#Requirements on SRIOV
{
            "hpa-feature": "sriovNICNetwork",
            "mandatory": "True",
            "architecture": "generic",
            "hpa-version": "v1",
            "directives" : [] ,// a placeholder for OOF usage
            "hpa-feature-attributes": [
              { "hpa-attribute-key": "pciVendorId", "hpa-attribute-value": "0000", "operator": "=", "unit": "" },
              { "hpa-attribute-key": "pciDeviceId", "hpa-attribute-value": "0000", "operator": "=", "unit": "" },
              { "hpa-attribute-key": "pciCount", "hpa-attribute-value": "1", "operator": ">=", "unit": "" },
              { "hpa-attribute-key": "physicalNetwork", "hpa-attribute-value": "xxx", "operator": "=", "unit": "" }
            ]
          }

#Requirements on CPU and memory size
{
            "hpa-feature": "basicCapabilities",
            "mandatory": "True",
            "architecture": "generic",
            "hpa-version": "v1",
            "directives" : [] ,//a placeholder for OOF usage
            "hpa-feature-attributes": [
              { "hpa-attribute-key": "numVirtualCpu", "hpa-attribute-value": "1", "operator": "=", "unit": "" },
              { "hpa-attribute-key": "virtualMemSize", "hpa-attribute-value": "4", "operator": "=", "unit": "GB" }
            ]
          }

#Requirements designed in the helm chart
#Requirements on SRIOV
nodeSelector:                                                                             
  feature.node.kubernetes.io/sriov-capable: "true"                                        
  feature.node.kubernetes.io/pci-0000_0000_present: "true" 

#Requirement on CPU and Memory size
resources:
  requests:
    cpu: 1
    memory: 4Gi
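
Putting the two requirements together, the deployment.yaml template of a sub-chart would look roughly like the sketch below (the name, labels and image are placeholders; only the nodeSelector and resources sections come from the requirements above):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ipsec-gateway
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ipsec-gateway
  template:
    metadata:
      labels:
        app: ipsec-gateway
    spec:
      nodeSelector:
        feature.node.kubernetes.io/sriov-capable: "true"
        feature.node.kubernetes.io/pci-0000_0000_present: "true"
      containers:
      - name: ipsec-gateway
        image: example-registry/vipsec-vpp:latest   # placeholder image
        resources:
          requests:
            cpu: 1
            memory: 4Gi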

