June 28, 2017 Workflow Review

Chengli Wang and Lingli Deng from China Mobile presented the VoLTE use case. They walked through the workflow for service design, instantiation, auto-healing/auto-scaling and termination. 31 participants joined the meeting and discussion. The meeting recording can be accessed here: https://zoom.us/recording/play/zClGuWa2cFq-g3YEWC-sZ74V9iJaHAx2Iw7ePKuhh1q1ttjSSFXRyoJhYxpmk8bC

Meeting minutes:

At a high level, the use case involves edge and core data centers. The EPC components are split between the edge and core data centers, and the IMS components are similarly split between edge and core. A WAN with underlay and overlay networks connects the two data centers.

At design time, SDC designs the network service and the closed-loop policy. The output will be distributed to SO, VFC, SDNC and Policy.

At runtime, we will simulate an alarm and verify the closed loop with the auto-healing/auto-scaling feature.

Specifically, each related module and its function in the use case is described below:

VF-C: Collect data from the EMS and report it to DCAE

Model: Provide the e2e VoLTE template for IMS, EPS and WAN

SDC: Design the e2e template for the VoLTE use case

SO: Parse the service template; VFC and SDNC then orchestrate the service

SDNC: Orchestrate WAN connectivity service

DCAE: Receive resource-layer and service-layer telemetry data, perform data analysis, and publish events through the event bus

VFC: Provide the northbound interface for network service lifecycle management, and the southbound interface to the EMS and S-VNFM for service configuration and provisioning

A&AI: Create/Update/Delete service related records

Policy: Design and support closed-loop rules for VoLTE. Policy will interact with VFC for auto-healing and auto-scaling operations

Multi-VIM: Create and delete virtual network between VNFs.

Holmes: Correlate alarms at the service and resource layers, and publish events to the event bus

CLAMP: Design closed-loop policies


Discussions in the second half of the meeting covered the following topics:

It is too early to decide on EPA support because it is VNF and hardware dependent, and no specific requirement has been given yet. Further discussion will be held once more requirements are known and the lab is available.

Telemetry data from the VoLTE VNFs is in JSON format, but not in VES format. The DCAE team will provide examples and guidance on how to integrate with VES in Java and C; a separate call will be set up later. DCAE will also support SNMP polling.
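For illustration only, a minimal sketch of that integration is below (shown in Python rather than Java or C for brevity): it wraps a vendor-specific JSON alarm in a VES common event header and posts it to a DCAE VES collector. The collector URL, VES version and field names are assumptions and should be checked against the DCAE team's guidance and the VES specification used in R1.

    # Hypothetical sketch: collector URL, VES version and field names are
    # assumptions, not the DCAE team's official guidance.
    import json
    import time
    import urllib.request

    VES_COLLECTOR = "http://dcae-ves-collector.example.com:8080/eventListener/v5"  # assumed endpoint

    def to_ves_fault(vendor_alarm):
        """Wrap a vendor-specific JSON alarm into a VES fault event."""
        now_us = int(time.time() * 1_000_000)
        return {
            "event": {
                "commonEventHeader": {
                    "domain": "fault",
                    "eventId": vendor_alarm["alarmId"],
                    "eventName": "Fault_" + vendor_alarm["alarmName"],
                    "priority": "High",
                    "reportingEntityName": "VF-C",
                    "sourceName": vendor_alarm["vnfInstanceName"],
                    "sequence": 0,
                    "startEpochMicrosec": now_us,
                    "lastEpochMicrosec": now_us,
                    "version": 3.0,
                },
                "faultFields": {
                    "faultFieldsVersion": 2.0,
                    "alarmCondition": vendor_alarm["alarmName"],
                    "eventSeverity": vendor_alarm.get("severity", "CRITICAL"),
                    "eventSourceType": "virtualNetworkFunction",
                    "specificProblem": vendor_alarm.get("detail", ""),
                    "vfStatus": "Active",
                },
            }
        }

    def publish(event):
        """POST the VES event to the DCAE collector and return the HTTP status."""
        req = urllib.request.Request(
            VES_COLLECTOR,
            data=json.dumps(event).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return resp.status

    if __name__ == "__main__":
        alarm = {"alarmId": "1001", "alarmName": "linkDown",
                 "vnfInstanceName": "vIMS-01", "severity": "CRITICAL"}
        print(publish(to_ves_fault(alarm)))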

More details need to be added for workflow step 1.0 in SDC, such as the policy input from CLAMP.

SDNC provisions the connection between the two data centers: it provisions the hardware routers and sets up L3VPN as the underlay and VXLAN as the overlay. Right now there is no SDN agent to do this job. The SDNC team will discuss the use case requirements internally and get back to the integration team.

We need to talk to the OOM and MSB teams to see whether those modules are needed in the use case.


July 13, 2017 Meeting with CLAMP, DCAE, Policy and Holmes teams

The meeting was to clarify workflow related questions with regard to CLAMP, DCAE, Policy and Holmes projects.

1. The CLAMP team needed some clarification of the relationship between DCAE and Holmes in the VoLTE design workflow. After some discussion, the teams agreed to separate the Holmes deployment scenarios into two options: in the first, Holmes is deployed as a DCAE analytics application; in the second, Holmes is deployed as a standalone ONAP component. Everyone agreed to support the first option in R1; the second option needs more discussion in a separate meeting. The VoLTE workflow will be updated for the first option: DCAE should be labelled as the DCAE collector, and Holmes should not appear at design time. Also, the configuration policy is pushed by CLAMP to the Policy engine first, and the Policy engine then forwards it to DCAE. The closed-loop template blueprint is sent by CLAMP to SDC, and SDC forwards it to DCAE.

2. The StringMatch policy configuration shown in the CLAMP VoLTE use case is not enough to support Holmes alarm correlation rules. The CLAMP team needs to know exactly what must be configured for the VoLTE use case. The teams agreed to start with a concrete event example from a single VNF failure, and the CMCC team will provide an example within two weeks. In R1, auto-healing is required and auto-scaling is an aspirational goal.

3. The team discussed DCAE deployment for the multiple data center scenario. Ideally, some data collection and analytics functions should run close to the data source. However, due to time constraints, only one instance of DCAE is supported in R1.

4. In the auto-healing and auto-scaling diagram of the VoLTE use case, a modification is needed to show Holmes as part of DCAE, per the option-one Holmes deployment scenario.

5. It was explained that in the VoLTE use case, VF-C will collect VNF events from the EMS and forward them to DCAE in VES format. Also, the MultiVIM component will report VIM events to DCAE directly in VES format.

A follow-up meeting will be set up for next Thursday.


July 13, 2017 WAN Overlay Network 

The meeting focused on how to build a VXLAN overlay network between data centers (DCs) with the EVPN protocol.

1. Discussed BGPVPN support in OpenStack. The VMware and Wind River teams will check whether their VIMs support it. When the new Intel lab is available to the community, a cross-data-center network can be set up to test the scenario.

2. Discussed the updated version of the network diagram on the wiki. The underlay network between the two PEs is MPLS L3VPN, and from the DC-GW to the PE it is VLAN. The VXLAN overlay is set up between the two DC-GWs by EVPN. The VTEP is on the DC-GW. The DC-GW is controlled by an SDN local controller provided by the vendor.

3. Inside the DC, the segment from the VM to the DC-GW is VXLAN; it is set up and managed by Neutron.

4. Discussed a solution in which the SDN local controller provides APIs to map the DC-side VXLAN to the WAN-side VXLAN on the DC-GW. Chengli will provide the SDN local controller APIs that CMCC used in its lab to set up the VXLAN tunnel with EVPN. Based on that information, we will provide a diagram on the wiki to illustrate how the VXLAN mapping is done.

5. Depending on the interface and capabilities of the SDN local controller, we will decide in the next meeting how the modules (MultiVIM, Neutron, SDN local controller, SDN WAN controller, DC-GW) work together to provide the VXLAN overlay between data centers via EVPN.


July 20, 2017 DC Gateway Local Controller Interface

During the meeting, we discussed sample APIs of the local DC gateway controller for managing the EVPN VXLAN overlay network between two data centers.

  1. Explained one option for how the DC gateway controller interacts with the DC gateway, Neutron and the compute nodes.
  2. Discussed the DC gateway controller's sample APIs to create an L2&L3 EVPN VXLAN overlay network between DCs. Other APIs to read, update and delete the EVPN VXLAN overlay network are also available. The SDN-C team agreed that the APIs looked good.
  3. The VIM and Multi-VIM teams also described other options for the DC network to support the VXLAN overlay. We will discuss those options further after R1.

The action items:

  1. Create a table defining the source of each parameter (import route target, export route target, networks, etc.) used in the controller APIs; the source can be the vendor, SDC, etc. A hypothetical request using these parameters is sketched after this list.
  2. Decide whether L2, L3 or L2&L3 VXLANs are required for R1.
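As a reference for action item 1, the sketch below shows what a request to such a controller API could look like using the parameters listed above. It is purely hypothetical: the real DC gateway controller endpoint, authentication and field names are vendor specific and were not defined in the meeting.

    # Purely hypothetical sketch: the endpoint and payload fields are
    # placeholders, not the vendor controller's actual API.
    import json
    import urllib.request

    CONTROLLER_URL = "https://dc-gw-controller.example.com/api/evpn-vxlan"  # placeholder

    overlay_request = {
        "name": "volte-dc1-dc2-overlay",
        "type": "L2L3",                       # L2, L3 or L2&L3 (R1 decision pending, action item 2)
        "vni": 5010,                          # VXLAN network identifier
        "importRouteTargets": ["65000:100"],  # source of route targets TBD (vendor or SDC, action item 1)
        "exportRouteTargets": ["65000:100"],
        "networks": ["dc1-evpn-net", "dc2-evpn-net"],
    }

    req = urllib.request.Request(
        CONTROLLER_URL,
        data=json.dumps(overlay_request).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.status, resp.read().decode())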

The slides used in the meeting: volte-overlay-dist.pptx


July 28, 2017 VoLTE Use Case Status Update

We reviewed the following topics during the meeting, and the details are in the attached slides: 

1) The current status of the use case

2) The blocking issues raised during virtual developer event, see July Virtual Developers Event Blockers

3) The next steps

4) Testing strategy for the VoLTE use case. One comment was that the g-vnfm driver from VFC can be used as a base for developing a mock VNFM driver.

The slides used in the meeting: VoLTE Use Case Status Update 07-28-2017.pdf. Recording: Status update and testing strategy 07-28-2017.mp4


August 4, 2017 VoLTE TOSCA Template and Network Service Discussion

We reviewed the following topics and some details are in the attached slides: 

1) Reviewed the latest SDC and SO plans to support VoLTE

2) Looked into the VoLTE TOSCA template example from Open-O, with a focus on the Network Service template

3) Proposed an approach for Virtual Link implementation in WAN scenario

The slides used in the meeting: VoLTE TOSCA Template and Network Service.pdf


August 7, 2017 Discussion of Two Blocking Issues Raised by DCAE Team

Two issues regarding ONAP service URL registration and discovery were raised by DCAE; see B26 and B27 on July Virtual Developers Event Blockers.

The discussion clarified how the parameters are passed to DCAE when ONAP components are installed by HEAT as VMs. When ONAP components are deployed as containers directly by OOM through Kubernetes, further discussion with the OOM team is needed.

Here is the meeting recording: Meeting with DCAE Team on Two Blocking Issues.mp4


August 15, 2017 VoLTE Draft Test Cases Review

We reviewed the draft of VoLTE Test Cases. Here is the recording: VoLTE Draft Test Cases Review.mp4 


September 6, 2017 A&AI Portal App Demo 

Arul Nambi gave a demo of the A&AI portal app. The new portal app visualizes nodes and the connections between them. The meeting recording is here: A&AI Portal App Demo.mp4


September 27, 2017 VoLTE Service Creation

David gave a demo of VoLTE Service Creation at the Paris F2F conference. Recording: VoLTE Service Creation Demo.mp4



2 Comments

  1. Hi, Yang Xu

        I'm very interested in this use case. Could you please email me the calendar for this use case meeting too? Thank you very much!

  2. Sure, we will do. Also, we will discuss the topic at next week's virtual developer event.