Working draft: to be finalized once the use cases are settled.

This section provides information on hardware and network configuration. ONAP Open Labs are collections of dedicated hardware, generally partitioned into pods of servers. Pods can be used for different kinds of testing, such as development, CI/CD, ONAP platform testing, or E2E testing. The minimal requirements for each pod are defined below.

Hardware Summary

A lab compliant pod provides:

  • 2-8 controller/compute nodes, depending on the use case (see the Server Pod section below)
  • A configured network topology providing IPMI, Admin (PXE), Public, Private, and Storage networks (an example addressing plan is sketched after this list)
  • Remote access through VPN
  • Security through a firewall
  • Internet access to install and update some software online
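
To make the network list above concrete, here is a minimal sketch of one possible addressing plan in Python. The VLAN IDs and subnets are illustrative assumptions, not part of this specification.

```python
# Illustrative only: one possible addressing plan for the five pod networks.
# VLAN IDs and subnets below are assumptions, not part of the specification.
POD_NETWORKS = {
    "IPMI":    {"vlan": 10, "subnet": "10.0.10.0/24", "purpose": "lights-out management (BMC)"},
    "Admin":   {"vlan": 20, "subnet": "10.0.20.0/24", "purpose": "PXE boot and provisioning"},
    "Public":  {"vlan": 30, "subnet": "10.0.30.0/24", "purpose": "external and API access"},
    "Private": {"vlan": 40, "subnet": "10.0.40.0/24", "purpose": "tenant/overlay traffic"},
    "Storage": {"vlan": 50, "subnet": "10.0.50.0/24", "purpose": "storage access and replication"},
}

for name, net in POD_NETWORKS.items():
    print(f"{name:8s} VLAN {net['vlan']:3d}  {net['subnet']:14s}  {net['purpose']}")
```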

Server Pod

In the following table, we define three types of pod based on assumed resource usage. Note that in lab and real deployment scenarios, resources can be oversubscribed depending on workload. We also assume that the ONAP platform will be deployed in a separate pod from the VNFs.

Type of Pod | Total Memory (GB) | Total vCPU | Total Storage | Number of Compute Nodes | Number of Control Nodes (Vanilla OpenStack / Titanium Cloud)
Large       | 600               | 120        | 4 TB          | >= 2                    | 3 / 2
Medium      | 200               | 80         | 2 TB          | >= 2                    | 3 / 2
Small       | 40                | 24         | 1 TB          | >= 1                    | 1 / can support AIO*

* AIO (all in one): a single node provides controller, compute, and storage functions.

In addition, you may need a provisioning server to help install and access a server pod.

A recommended node (server) configuration is as follows (a sizing sketch against the pod types above follows this list):

  • Memory: 256GB RAM
  • CPU: Intel Xeon E5-2658v3 Series or newer, with 12 cores and 24 hyper-threads
  • Firmware: BIOS/EFI compatible for x86-family servers
  • Local Storage: 2 x 1TB HDD, virtual ISO boot capabilities or a separate PXE boot server (DHCP/tftp or Cobbler)
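
As a sanity check, the sketch below computes how many nodes of the recommended configuration would meet each pod type's aggregate requirements from the table above. It assumes one vCPU per hyper-thread and no oversubscription, so the results are conservative.

```python
import math

# Aggregate requirements per pod type, taken from the Server Pod table.
POD_TYPES = {
    "Large":  {"memory_gb": 600, "vcpu": 120, "storage_tb": 4},
    "Medium": {"memory_gb": 200, "vcpu": 80,  "storage_tb": 2},
    "Small":  {"memory_gb": 40,  "vcpu": 24,  "storage_tb": 1},
}

# Recommended node configuration from the list above. Assumes one vCPU per
# hyper-thread (24) and no oversubscription, which is conservative.
NODE = {"memory_gb": 256, "vcpu": 24, "storage_tb": 2}

for pod, req in POD_TYPES.items():
    nodes = max(math.ceil(req[k] / NODE[k]) for k in NODE)
    print(f"{pod}: at least {nodes} recommended node(s) without oversubscription")
```

Without oversubscription the vCPU budget dominates (e.g. 120 vCPU / 24 threads = 5 nodes for a Large pod); the smaller node counts in the table rely on the oversubscription noted above.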

Networking

Network Hardware

    • 48-Port TOR Switch
    • NICs - Combination of 1GE and 10GE based on network topology options
    • Connectivity for each data/control network is through a separate NIC port or a shared port. A separate port simplifies switch management but requires more NICs on the server and more switch ports
    • BMC (Baseboard Management Controller) for the lights-out management network using IPMI (Intelligent Platform Management Interface); see the power-status sketch after this list
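
For the BMC item above, here is a minimal sketch of a lights-out power-status check driving the standard ipmitool CLI from Python. The host names and credentials are placeholders.

```python
import subprocess

# Illustrative only: query each node's BMC over the IPMI network using
# ipmitool. Host names and credentials below are placeholders.
BMC_HOSTS = ["node1-bmc.lab.example", "node2-bmc.lab.example"]

for host in BMC_HOSTS:
    result = subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", host,
         "-U", "admin", "-P", "changeme", "power", "status"],
        capture_output=True, text=True,
    )
    print(f"{host}: {(result.stdout or result.stderr).strip()}")
```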

Remote Management

    • Developers can access deploy/test environments at an aggregate 100 Mbps upload and download speed

Basic requirements

    • SSH sessions can be established (initially on the jump server)
    • Packages can be installed on a system by pulling from an external repo (a quick check of both requirements is sketched below)
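
A minimal sketch that verifies both basic requirements from a workstation, using only the Python standard library; the jump-server host and repository URL are placeholders.

```python
import socket
import urllib.request

# Placeholders: substitute the real jump server and package repository.
JUMP_SERVER = "jump.lab.example"
REPO_URL = "https://mirror.example.org/ubuntu/"

# 1. An SSH session can be established (TCP port 22 is reachable).
with socket.create_connection((JUMP_SERVER, 22), timeout=5):
    print(f"SSH port open on {JUMP_SERVER}")

# 2. Packages can be pulled from an external repository.
with urllib.request.urlopen(REPO_URL, timeout=5) as resp:
    print(f"Repository reachable (HTTP {resp.status})")
```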

Firewall rules accommodate

    • SSH sessions

Internet access

    • Internet access is available for online installation and update of some software

Requirements for the 3 use cases supported by ONAP Release 1 (to be finalized), not including ONAP itself, are listed below.

ONAP itself will need a medium-sized server pod.


In the table below, the deployment topology for each use case comprises the server pods, network hardware, and software listed.

Use Case | VNFs | Server Pods | Network Hardware | Software
Development or vFW/vDNS demo apps | Open-sourced vFW/vDNS | 1 (Small) | TOR | Cloud OS
vCPE | vCPE | 2 (Medium) | WAN/SPTN Router (2), DC Gateway (2), TOR (n), ThinCPE (1) | Cloud OS (for Edge and Core), WAN/SPTN Controller, DC Controller, Specific VNFM & EMS
VoLTE | vIMS/vEPC | 2 (Large) | WAN/SPTN Router (2), DC Gateway (2), TOR (n), Wireless Access Point (2), VoLTE Terminal Devices (2) | Cloud OS (for Edge and Core), WAN/SPTN Controller, DC Controller, Specific VNFM & EMS


5 Comments

  1. Can we also add a min. HW configuration, i.e.:

    Memory: min. 256GB RAM

    CPU: Intel Xeon E5-2658v3 Series or newer, with 12 cores and 24 hyper-threads

    Firmware: BIOS/EFI compatible for x86-family servers

    Local Storage:

      • Disks: 2 x 1TB HDD
      • The first HDD should be used for OS & additional software/tool installation
      • Virtual ISO boot capabilities or a separate PXE boot server (DHCP/tftp or Cobbler)

    Backup/Restore Strategy

    Jump Server



  2. For the vCPE and VoLTE test cases, operators generally require SPTN or IPRAN routers; it is recommended that this part of the minimum requirements be changed to "WAN/SPTN or IPRAN Router".

  3. This page seems to duplicate information on ONAP Lab Specification (draft); can they be consolidated or better clarified?

  4. For the vCPE use-case, do we need 2 different clouds for R1?

    Why do we need a WAN/SPTN Controller and Specific VNFM & EMS?

  5. Is it a requirement for ONAP to be installed on a separate pod from the server pod hosting the instantiated VNFs?