

Name of Use Case:


VoLTE


Use Case Authors:

AT&T, China Mobile, Huawei, ZTE, Nokia, Jio, VMware, Wind River, BOCO

Description:

A Mobile Service Provider (SP) plans to deploy VoLTE services based on SDN/NFV.  The SP is able to onboard the service via ONAP. Specific sub-use cases are:

  • Service onboarding
  • Service configuration 
  • Service termination
  • Auto-scaling based on fault and/or performance
  • Fault detection & correlation, and auto-healing
  • Data correlation and analytics to support all sub use cases


ONAP will perform these functions reliably, which includes:

  • the reliability, performance, and serviceability of the ONAP platform itself
  • security of the ONAP platform
  • policy-driven configuration management using standard APIs or scripting frameworks like Chef/Ansible (stretch goal)
  • automated configuration audit and change management (stretch goal)

To connect the different data centers, ONAP will also have to interface with legacy systems and physical network functions to establish VPN connectivity in a brownfield deployment.


Users and Benefit:

SPs benefit from the VoLTE use case in the following aspects:

  1. Service agility: easier design of both VNFs and network services, VNF onboarding, and agile service deployment.
  2. Resource efficiency: through the ONAP platform, resources can be utilized more efficiently, as services are deployed and scaled automatically on demand.
  3. Operation automation and intelligence: through the ONAP platform, especially its integration with DCAE and the Policy framework, VoLTE VNFs and the service as a whole are expected to be managed with much less human intervention, and will therefore be more robust and intelligent.

VoLTE users benefit from the network service provided by SPs via ONAP, as their user experience will be improved, especially during peak traffic periods.

VNF:

Utilize vendors' VNFs in the ONAP platform.


TIC Location | VNFs     | Intended VNF Provider | Notes
Edge         | vSBC     | Huawei                | Confirmed
Edge         | vPCSCF   | Huawei                | Confirmed
Edge         | vSPGW    | ZTE/Huawei            | Confirmed
Core         | vPCRF    | Huawei                | Confirmed
Core         | vI/SCSCF | Nokia                 | Confirmed
Core         | vTAS     | Nokia                 | Confirmed
Core         | vHSS     | Huawei                | Confirmed
Core         | vMME     | ZTE/Huawei            | Confirmed

Note: The above captures the currently committed VNF providers; we are open to adding more VNF providers.

Note: The committed VNF providers will be responsible for providing licensing support and technical assistance for VNF interworking issues, while the core ONAP use case testing team will focus on platform validation.

NFVI+VIM:

Utilize vendors' NFVI+VIMs in the ONAP platform.


TIC Location | NFVI+VIMs                        | Intended VIM Provider | Notes
Edge         | Titanium Cloud (OpenStack based) | Wind River            | Confirmed
Edge         | VMware Integrated OpenStack      | VMware                | Confirmed
Core         | Titanium Cloud (OpenStack based) | Wind River            | Confirmed
Core         | VMware Integrated OpenStack      | VMware                | Confirmed

Note: The above captures the currently committed VIM providers; we are open to adding more VIM providers.

Note: The committed VIM providers will be responsible for providing licensing support and technical assistance for VIM integration issues, while the core ONAP use case testing team will focus on platform validation.


Network equipment

Network equipment vendors.


Network equipment      | Intended provider | Notes
Bare Metal Host        | Huawei, ZTE       | Confirmed
WAN/SPTN Router        | Huawei/ZTE        | Confirmed
DC Gateway             | Huawei, ZTE       | Confirmed
TOR                    | Huawei, ZTE       | Confirmed
Wireless Access Point  | Raisecom          | Confirmed
VoLTE Terminal Devices | Raisecom          | Confirmed

Note: The above captures the currently committed HW providers; we are open to adding more HW providers.

Note: The committed HW providers will be responsible for providing licensing support and technical assistance for HW integration issues, while the core ONAP use case testing team will focus on platform validation.

Topology Diagram:


Work Flows:

Customer ordering

  • Design


  • 1.0: The ONAP user uses SDC to import and design the VoLTE E2E model/templates via the Portal. The E2E models should include the vIMS + vEPC network services, SDN-WAN connection services, relevant auto-healing policies, alarm correlation rules, etc., all tied together as an E2E VoLTE model.
    SDC/CLAMP should distribute all of the above models to the related runtime components when the user needs to instantiate the E2E VoLTE service.
  • 1.1 Distribute the E2E model to SO.
  • 1.2 Distribute the vIMS+vEPC NS to VF-C.
  • 1.3 Distribute the SDN-WAN connection service to SDN-C.
  • 1.4 Distribute the auto-healing policy to the Policy engine.
  • 1.5 Distribute the alarm correlation rules to the Holmes engine.
  • 1.6 Design and distribute the DCAE configuration from CLAMP.
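The distribution fan-out in steps 1.1–1.6 can be sketched, with heavy simplification, as routing each designed artifact to the runtime component that consumes it. The artifact names and the routing API below are illustrative assumptions, not SDC's real distribution interface:

```python
# Illustrative routing table for SDC/CLAMP distribution (steps 1.1-1.6).
# Artifact names are hypothetical; only the component targets come from the flow above.
ARTIFACT_ROUTES = {
    "e2e_service_model": "SO",           # 1.1
    "vims_vepc_ns": "VF-C",              # 1.2
    "sdn_wan_connection": "SDN-C",       # 1.3
    "auto_healing_policy": "Policy",     # 1.4
    "alarm_correlation_rules": "Holmes", # 1.5
    "dcae_configuration": "DCAE",        # 1.6 (designed/distributed from CLAMP)
}

def distribution_plan(artifacts):
    """Group artifacts by the runtime component that should receive them."""
    plan = {}
    for name in artifacts:
        target = ARTIFACT_ROUTES.get(name)
        if target is None:
            raise ValueError(f"no distribution target for artifact {name!r}")
        plan.setdefault(target, []).append(name)
    return plan
```

The point of the sketch is that every artifact designed in step 1.0 must have exactly one distribution target before instantiation can proceed.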


  • Instantiation

         

  • 2.0 The user clicks the deploy button on the Portal.
  • 2.1 The Portal sends a request to SO to deploy the VoLTE service.
  • 2.2 SO talks with A&AI to create the new E2E instance in the A&AI inventory.
  • 2.3 SO parses the E2E model via the TOSCA parser provided by the Modeling project.
  • 2.4 SO sends a request to SDN-C to deploy the SDN-WAN connection service, including underlay and overlay.
  • 2.5 SDN-C talks with the 3rd-party SDN controller to provision the MPLS BGP L3VPN for the underlay and set up the EVPN-based VXLAN tunnel for the overlay.
  • 2.6 While deploying the network connection service, SDN-C creates the related instances in the A&AI inventory.
  • 2.7 SO sends a request to VF-C to deploy the vIMS+vEPC network service.
  • 2.8 VF-C talks with A&AI to create the NS instances in the A&AI inventory.
  • 2.9 VF-C parses the NS model via the TOSCA parser to decompose the NS into VNFs and recognize the relationships between VNFs.
  • 2.10 VF-C talks with Multi-VIM to create virtual network connections between VNFs if needed.
  • 2.11 VF-C creates the related virtual link instances in the A&AI inventory.
  • 2.12 VF-C sends a request to the S-VNFM/G-VNFM to deploy each VNF according to the mapping of VNF to VNFM.
  • 2.13 Aligned with the ETSI-specified workflow, the VNFM sends a resource-granting request to VF-C; VF-C responds with the granting result and the related VIM information (such as URL/username/password) to the VNFM.
  • 2.14 The VNFM calls the VIM API to deploy the VNF into the VIM.
  • 2.15 The VNFM notifies VF-C of the changes to virtual resources, including VDU/VL/CP/VNFC, etc.
  • 2.16 VF-C talks with the 3rd-party EMS via the EMS driver to do the service configuration.
  • 2.17 VF-C creates/updates the related records in the A&AI inventory.
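The SO-driven sequence above can be sketched as follows. `Inventory` stands in for A&AI and `decompose()` for the TOSCA parse in step 2.3; all class and function names are illustrative, not real ONAP APIs:

```python
class Inventory:
    """Minimal stand-in for the A&AI inventory."""
    def __init__(self):
        self.instances = {}

    def create(self, kind, name):
        self.instances.setdefault(kind, []).append(name)

def decompose(e2e_model):
    """Step 2.3 (simplified): split the E2E model into per-domain services."""
    return e2e_model["services"]

def instantiate_volte(aai, e2e_model):
    aai.create("e2e-service", e2e_model["name"])       # step 2.2
    for svc in decompose(e2e_model):                   # step 2.3
        if svc["domain"] == "SDN":                     # steps 2.4-2.6 via SDN-C
            aai.create("connection-service", svc["name"])
        else:                                          # steps 2.7-2.17 via VF-C
            aai.create("network-service", svc["name"])

model = {
    "name": "volte-e2e",
    "services": [
        {"name": "sdn-wan", "domain": "SDN"},
        {"name": "vims-vepc", "domain": "NFV"},
    ],
}
aai = Inventory()
instantiate_volte(aai, model)
```

Note the ordering the sketch preserves: the E2E instance is recorded in the inventory before the per-domain services are dispatched, mirroring steps 2.2–2.7.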


  • VNF Auto-Scaling/Auto-healing


  • 3.0 When the user instantiates the service, CLAMP instantiates the related control loop into the runtime environment.
  • 3.1 SDC distributes the auto-healing policy rules to the Policy engine and the alarm correlation rules to the Holmes engine.
  • 3.2 The CLAMP portal talks with DCAE to deploy the related analytic applications/collectors tied to the service.
  • 3.3 CLAMP distributes the alarm correlation rules to Holmes.
  • 3.4 During runtime, Multi-VIM reports FCAPS metrics data from the VIM/NFVI to DCAE, in real time or periodically.
  • 3.5 VF-C integrates with the 3rd-party EMS, which reports/notifies VNF service-level FCAPS data to VF-C in real time or periodically.
  • 3.6 VF-C transfers the VNF service-level FCAPS metrics to DCAE, aligned with DCAE's data structure requirements.
  • 3.7 After data filtering/cleaning inside DCAE, DCAE sends the related events to the data bus.
  • 3.8 Holmes keeps track of the events published to the data bus.
  • 3.9 Holmes performs alarm correlation analysis based on the imported rules.
  • 3.10 Holmes sends the result, i.e. the root cause, to the event bus.
  • 3.11 The Policy engine subscribes to the related topic on the event bus. After receiving auto-healing/scaling triggering events, it matches the events against the existing rules.
  • 3.12 Once an event matches a scaling/healing rule, Policy invokes VF-C APIs to perform the auto-healing/scaling action.
  • 3.13/3.14 VF-C updates/creates the related instance information in the A&AI inventory according to the resource changes.
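The closed-loop path in steps 3.7–3.12 can be sketched as: events reach the bus, Holmes picks a root cause, and Policy maps it onto a healing action for VF-C. The correlation heuristic and the rule table below are illustrative assumptions, not the real Holmes/Policy rule formats:

```python
# Hypothetical healing rule table (step 3.11): root-cause type -> action.
HEALING_RULES = {
    "VNF_DOWN": "restart",  # restart the failed VNF
    "VM_LOST": "rebuild",   # rebuild the lost VM
}

def correlate(alarms):
    """Steps 3.9-3.10 (toy version): treat the earliest alarm as the root cause."""
    return min(alarms, key=lambda alarm: alarm["time"])

def match_policy(root_cause):
    """Steps 3.11-3.12: match the root cause against the healing rules."""
    return HEALING_RULES.get(root_cause["type"])

alarms = [
    {"type": "VNF_DOWN", "time": 10},          # root cause
    {"type": "SERVICE_DEGRADED", "time": 12},  # consequence of the VNF failure
]
root = correlate(alarms)
action = match_policy(root)  # the action Policy would ask VF-C to perform
```

The key design point the sketch captures is that Policy acts only on the correlated root cause from Holmes, not on every raw alarm.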


  • Termination

                

  • 4.0 The ONAP user triggers the termination action via the Portal.
  • 4.1 The Portal talks with SO to delete the VoLTE service.
  • 4.2 SO checks with A&AI whether the instance exists.
  • 4.3 SO talks with VF-C to request deletion of the vIMS/vEPC network services.
  • 4.4 VF-C checks with A&AI whether the vIMS/vEPC instances exist.
  • 4.5 VF-C talks with the S-VNFM/G-VNFM to request deletion of the VNFs and release of resources.
  • 4.6/4.7 Aligned with the ETSI-specified workflow, the VNFM deletes/releases the virtual resources with the granting of VF-C and notifies it of the resource changes (releases).
  • 4.8 VF-C updates/deletes the related resource instances in the A&AI inventory.
  • 4.9 VF-C checks with A&AI whether the VL instances exist.
  • 4.10 VF-C talks with Multi-VIM to request deletion of the virtual networks connected to the VNFs.
  • 4.11 Multi-VIM deletes the related virtual network resources, such as networks, sub-networks, and ports.
  • 4.12 VF-C updates/deletes the related VL resource instances in the A&AI inventory.
  • 4.13 VF-C updates/deletes the related NS instances in the A&AI inventory.
  • 4.14 SO talks with SDN-C to request deletion of the SDN-WAN connection between clouds and release of resources, including overlay and underlay.
  • 4.15 SDN-C talks with the 3rd-party SDN controller to release the connection resources.
  • 4.16 SDN-C updates/deletes the related connection resource instances in the A&AI inventory.
  • 4.17 SO updates/deletes the related E2E service instances in the A&AI inventory.
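The existence checks in steps 4.2/4.4/4.9 are what make termination safe to retry: each layer asks A&AI whether an instance still exists before requesting deletion. A minimal sketch, with a dict standing in for the inventory:

```python
def terminate(inventory, instance_id):
    """Delete instance_id from the inventory; return True if a deletion happened."""
    if instance_id not in inventory:  # check with A&AI first (steps 4.2/4.4/4.9)
        return False                  # already gone: nothing to release
    del inventory[instance_id]        # request deletion and release resources
    return True

inventory = {"volte-e2e-1": {"type": "e2e-service"}}
first = terminate(inventory, "volte-e2e-1")
second = terminate(inventory, "volte-e2e-1")  # a retry is a harmless no-op
```

Checking before deleting keeps the operation idempotent, so a failed or repeated termination request cannot corrupt the inventory.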



Control Automation:

Open Loop

  • Auto ticket creation based on the policy (stretch goal)

Closed Loop

  • Auto-scaling (stretch goal)

When a large-scale event, such as a concert or contest, is approaching, service traffic may increase continuously and the monitored service metrics may grow, making the virtual resources located at the TIC edge resource-constrained. ONAP should automatically trigger VNF scale-out actions to add more virtual resources on the data plane to cope with the traffic. Conversely, when the event is over and traffic goes down, ONAP should trigger VNF scale-in actions to reduce resources.
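A toy threshold policy sketching this scale-out/scale-in behaviour; the thresholds and the utilization metric are illustrative assumptions, not values from the VoLTE policy model:

```python
SCALE_OUT_AT = 0.80  # data-plane utilization that triggers scale out (assumed)
SCALE_IN_AT = 0.30   # utilization below which an instance can be released (assumed)

def scaling_decision(utilization, instances, min_instances=1):
    """Return the desired instance count for the data-plane VNF."""
    if utilization >= SCALE_OUT_AT:
        return instances + 1   # event traffic rising: scale out
    if utilization <= SCALE_IN_AT and instances > min_instances:
        return instances - 1   # event over: scale in
    return instances           # steady state
```

The gap between the two thresholds provides hysteresis, so the service does not oscillate between scaling out and scaling in around a single threshold.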

  • Fault detection & correlation, and auto-healing

During operation of the VoLTE service, fault alarms can be raised at various layers of the system, including the hardware, resource, and service layers. ONAP should detect these fault alarms and correlate them to identify the root cause behind a series of alarms.

Once the fault is detected and its root cause correlated, ONAP should perform the auto-healing action specified by the given policy to bring the system back to normal.

Configuration flows (Stretch goal)

  • Create (or onboard vendor-provided) application configuration Gold Standard files in a Chef/Ansible server
  • Create a Chef cookbook or Ansible playbook (or onboard vendor-provided artifacts) to audit and optionally update the configuration on the VNF VM(s)
  • Install the Chef client on the VM (Ansible does not require one)
  • After every upgrade, or once application misconfiguration is detected, trigger an audit with the update option to update the configuration based on the Gold Standard
  • After the update, re-run the audit and run a health check to verify the application is running as expected
  • Provide configuration change alerts to Operations via the control loop dashboard
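A minimal sketch of the audit-with-update step above: compare a VNF VM's running configuration against the Gold Standard and optionally repair the drift. In practice this would be a Chef cookbook or Ansible playbook; the flat dict-based configuration here is a simplifying assumption:

```python
def audit(running, gold, update=False):
    """Return the keys whose running values deviate from the Gold Standard."""
    drift = {key: running.get(key) for key in gold if running.get(key) != gold[key]}
    if update and drift:
        running.update(gold)  # "update option": restore the Gold Standard values
    return drift

gold = {"sip_port": 5060, "log_level": "info"}
running = {"sip_port": 5060, "log_level": "debug"}  # misconfigured after upgrade
drift = audit(running, gold, update=True)
```

Returning the drift separately from applying the fix matches the flow above: the same audit can run in report-only mode (for the alert to Operations) or with the update option enabled.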

Platform Requirements:

  • Support for commercial VNFs
  • Support for commercial S-VNFM/EMS
  • Support for Multiple Cloud Infrastructure Platforms or VIMs
  • Cross-DC NFV and SDN orchestration
  • Telemetry collection for both resource and service layer
  • Fault correlation application
  • Policy for scaling/healing

Project Impact:


  • Modeling/Models

    Provide a TOSCA parser to support parsing NS/VNF and E2E models.
    Modeling additions will be needed to describe how VNFs are instantiated, removed, healed (restart, rebuild), and scaled, how related metrics are gathered, and how events are received.
    Modeling additions will be needed to describe the connection service (underlay/overlay) between cloud Edge and Core.
  • SDC

    Design the vIMS/vEPC network service, TOSCA based (initial VNF templates are provided by the VNF vendors).
    Design the SDN-WAN network connection service.
    Design the auto-healing policy.
    Design the alarm correlation rules.
    Design the workflows/DGs used by SO/VF-C/SDN-C.
    Design the E2E service, tying all of the above together.

  • SO

    E2E service lifecycle management: decompose the E2E service and talk with VF-C/SDN-C to instantiate services in the NFV and SDN domains respectively.

  • DCAE

    Support metrics data collection for the VoLTE case and receipt of events as per the new model.
    Support NB APIs to notify/report metrics data collected from multiple components in real time.

  • VF-C

    Network service (vIMS+vEPC) lifecycle management.
    Integrate with S-VNFM/G-VNFM for VNF lifecycle management.
    Integrate with EMS to collect VNF-level FCAPS metrics data and do application configuration.
    Integrate with Multi-VIM for virtual resource management.
    Transfer FCAPS data to DCAE aligned with DCAE's requirements.

  • SDN-C

    Network connection service lifecycle management.
    Integrate with 3rd-party SDN controllers to set up the MPLS BGP L3VPN for the underlay.
    Integrate with 3rd-party SDN controllers to set up the EVPN-based VXLAN tunnel for the overlay.

  • A&AI
    Support the new data model
    Support the integration with newly added components

  • Policy

    Support defining and executing auto-healing policy rules.
    Support integration with VF-C for executing auto-healing policy rules.

  • Multi-VIM
    Support aggregating multiple OpenStack versions to provide common NB APIs, including resource management and FCAPS

  • Holmes

    Integration with CLAMP to support alarm correlation rule definition, distribution, and LCM.
    Integration with DCAE to provide FCAPS metrics as input to Holmes.
    Support for executing alarm corrections.

  • CLAMP
    Support auto-healing control loop lifecycle management (design, distribution, deletion, etc.).
    Integration with Policy for auto-healing rules
    Integration with Holmes for alarm correlation rules.

  • VNF-SDK
    Support onboarding of the VNFs included in the VoLTE case, including packaging, validation, and testing.


  • OOM
    Support deploying and operating newly added components, such as VF-C and Holmes, in ONAP

  • MSB
    Support routing requests to the correct components.

  • Use Case UI
    Support VoLTE use case lifecycle management actions, monitor service instance status, and display FCAPS metrics data in real time.

  • ESR
    Support registering 3rd-party VIMs, VNFMs, EMSs, and SDN controllers with ONAP

Priorities:

1 means the highest priority.

Functional Platform Requirement                                   | Priority | Basic/stretch goal (default: basic)
VNF onboarding                                                    | 2        | basic
Service Design                                                    | 1        | basic
Service Composition                                               | 1        | basic
Network Provisioning                                              | 1        | basic
Deployment automation                                             | 1        | basic
Termination automation                                            | 1        | basic
Policy driven/optimal VNF placement                               | 3        | stretch
Performance monitoring and analysis                               | 2        | basic
Resource dedication                                               | 3        | stretch
Control Loops                                                     | 2        | basic
Capacity based scaling                                            | 3        | stretch
Triggered Healthcheck                                             | 2        | basic
Health monitoring and analysis                                    | 2        | basic
Data collection                                                   | 2        | basic
Data analysis                                                     | 2        | basic
Policy driven scaling                                             | 3        | stretch
Policy based healing                                              | 2        | basic
Configuration audit                                               | 3        | stretch
Multi Cloud Support                                               | 2        | basic
Framework for integration with OSS/BSS                            | 3        | stretch
Framework for integration with vendor-provided VNFM (if needed)   | 1        | basic
Framework for integration with external controller                | 1        | basic

Non-functional Platform Requirement                               | Priority | Basic/stretch goal
Provide Tools for Vendor Self-Service VNF Certification (VNF SDK) | NA       | NA
ONAP platform Fault Recovery                                      | NA       | NA
Security                                                          | NA       | NA
Reliability                                                       | NA       | NA
Disaster Recovery                                                 | NA       | NA
ONAP Change Management/Upgrade Control/Automation                 | NA       | NA

Work Commitment:


Work Item         | ONAP Member Committed to Work on VoLTE
Modeling          | CMCC, Huawei, ZTE, BOCO
SDC               | CMCC, ZTE
SO                | CMCC, Huawei, ZTE
SDN-C/SDN-Agent   | CMCC, Huawei, ZTE
DCAE/Holmes/CLAMP | CMCC, ZTE, BOCO, Huawei, Jio
VF-C              | CMCC, Huawei, ZTE, BOCO, Nokia, Jio
A&AI              | Huawei, ZTE, BOCO
Policy            | ZTE
Multi-VIM         | VMware, Wind River
Portal            | CMCC