

Background

A broad set of transformations are taking place:

  • Business transformation: OTT services, faster TTM, Monetization
  • Technical transformation: QoE, ULL, SDN/NFV/OMEC integration, Edge Analytics, Big data, Virtualization, Automation, C->E, R->E
  • Architectural transformation: 4 views “NORMA-like” Cloud, ECOMP, Flexible architecture (RAN, Core, CDN, Application delivery, Automation, IoT, fog,..)
  • Industrial transformation: ICT&E

To efficiently and effectively deploy a 5G network that supports ultra-low latency and high bandwidth, we need to deploy a variety of applications and workloads at the edge, close to the mobile end-user devices (UEs or IoT devices). These include virtualized RAN and core network elements, content (video), and various applications (AR/VR, industrial automation, connected cars, etc.). We might also deploy near-real-time network optimization and customer experience / UE performance enhancement applications at the edge. The edge cloud must support deployment of third-party applications (e.g. value-added optional services, marketing, advertising, etc.). We must deploy mechanisms to collect real-time radio network information, process it in real time (e.g. geolocation data), summarize and anonymize it, and make it available to third-party applications deployed at the edge, at a central location, or outside the service provider environment. Edge data collection can also be used to train machine learning models, and fully trained models can be deployed at the edge to support network optimization.
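As a minimal illustration of the kind of edge data pipeline described above, the sketch below summarizes and anonymizes hypothetical per-UE radio measurements before they are exposed to third-party applications. The field names and the salted hashing scheme are assumptions for illustration only, not part of any ONAP or 3GPP specification.

```python
import hashlib
import statistics
from typing import Dict, List

def anonymize_ue_id(ue_id: str, salt: str = "edge-site-salt") -> str:
    """Replace a real UE identifier with a salted one-way hash (illustrative scheme)."""
    return hashlib.sha256((salt + ue_id).encode()).hexdigest()[:16]

def summarize_cell_measurements(samples: List[Dict]) -> Dict:
    """Aggregate raw per-UE samples into an anonymized, cell-level summary
    suitable for exposure to third-party edge applications."""
    rsrp_values = [s["rsrp_dbm"] for s in samples]
    return {
        "cell_id": samples[0]["cell_id"],
        "ue_count": len({anonymize_ue_id(s["ue_id"]) for s in samples}),
        "rsrp_mean_dbm": round(statistics.mean(rsrp_values), 1),
        "rsrp_min_dbm": min(rsrp_values),
    }

# Example input with made-up values.
raw = [
    {"ue_id": "imsi-001", "cell_id": "cell-42", "rsrp_dbm": -95.0},
    {"ue_id": "imsi-002", "cell_id": "cell-42", "rsrp_dbm": -101.5},
    {"ue_id": "imsi-003", "cell_id": "cell-42", "rsrp_dbm": -88.2},
]
print(summarize_cell_measurements(raw))
```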

ONAP Managed Environment


Edge Deployment and ONAP Scope:

This diverse workload will require a somewhat heterogeneous cloud environment, including Graphics Processing Units (GPUs), highly programmable network accelerators, etc., in addition to traditional compute and storage.

To support edge deployment, we need:

1) A rich information / data model to discover and capture hardware resources deployed at the edge and to request the right type of resource to meet unique application needs (a minimal sketch of such a model follows this list).

2) Must support workload deployment options such as VMs and containers (e.g. Kubernetes) on VMs or bare metal.

3) Must support a very small footprint at an edge location serving a metropolitan area with a variety of workload deployments.

4) The edge cloud could be on customer premises, e.g. factory automation.

5) Must provide an efficient network infrastructure that supports slicing and QoS configuration options to meet the needs of various mobility services.

6) Must support policy-driven auto-recovery and scale-up / scale-down.
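A minimal sketch of what such an edge resource information model might look like. The attribute names (GPU count, SmartNIC flag, deployment types) are illustrative assumptions, not an actual ONAP / A&AI schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class EdgeSiteCapabilities:
    """Illustrative hardware/resource descriptor for one edge site."""
    site_id: str
    cpu_cores: int
    memory_gb: int
    gpu_count: int = 0
    smart_nic: bool = False          # programmable network accelerator present
    deployment_types: List[str] = field(default_factory=lambda: ["vm"])

def matches_request(site: EdgeSiteCapabilities, needs_gpu: bool,
                    deployment_type: str) -> bool:
    """Check whether a site can satisfy a simple placement request."""
    return ((not needs_gpu or site.gpu_count > 0)
            and deployment_type in site.deployment_types)

site = EdgeSiteCapabilities("edge-nyc-01", cpu_cores=64, memory_gb=256,
                            gpu_count=2, smart_nic=True,
                            deployment_types=["vm", "container-on-baremetal"])
print(matches_request(site, needs_gpu=True, deployment_type="container-on-baremetal"))
```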

Several industry efforts are currently underway around a broad suite of technologies, ranging from compute and intelligence capabilities residing on a mobile user device (e.g. a vehicle or handset), located in a home (e.g. a home automation appliance) or an enterprise (e.g. a local service network), or positioned in the network at a cell tower or a central office. Because it results from the amalgamation of different market segments, this technology suite is currently referred to by different names, as defined by the contributing market segment: the Telco industry has landed on the term Multi-Access Edge, whereas the IoT industry seems fragmented across many, e.g. OpenFog, Industry 4.0 and Industrial Internet, to name a few.

We support current industry efforts for new technology development around the Edge; however, we see these trends impacting a much broader scope of the service provider's systems than simply the edge of the access network. We see an imminent need for logical convergence of traditional "Telco" architecture and design, cloud data center design, and IoT systems, with a hierarchical approach in which disparate control systems are dynamically stitched together with east-west and north-south interfaces in order to distribute 'Resources' (including compute, storage, networking) and 'Intelligence' in a spatio-temporal manner, as depicted below:

 

The need:

End users, devices, and cyber-physical systems will benefit from a broad set of context information that can enhance and enrich the delivery of a broad set of applications.

Applications and time scales:


Each application class below is described by its classification, description, deployment options, potential application providers, and time scale (real-time or near real-time):

1. ONAP Edge Analytics, Optimization & Context Processing (candidate for the Casablanca release)

  • Description: Edge analytics / optimization applications covering a broad scope: slice monitoring, performance analysis, fault analysis, root cause analysis, centralized SON applications, ML methodologies for various apps, Policy, optimization apps (e.g. video optimization, drive test minimization, etc.), and customer context information processing (e.g. geolocation information).
  • Deployment options: Deployed in the edge cloud, running on Edge DCAE / MEC.
  • Potential application providers: These applications could be provided by NF vendors, service providers, and third parties (e.g. LTE SON applications, video optimization). We need to provide guidelines for developing deployable microservices, but the distinction is not important from the ONAP perspective; they still run in DCAE.
  • Time scale: These applications operate on the order of seconds (500 ms and above).

2. Real-Time Network & Service Control (post-Casablanca release)

  • Description: Near-real-time (~50-100 ms) UE / area optimization applications and 3rd-party apps. These are in-service-path optimization applications and run on an open CU-CP platform (also known as the RAN Intelligent Controller, or SD-RAN controller). These applications include load balancing, link set-up, policies for L1-L3 functions, and admission control, and they leverage the standard interface defined by O-RAN / xRAN between the network information base (or context database) and third-party applications. Data collection is through B1 and implemented using x technology.
  • Deployment options: Deployed in the edge cloud, running on the CU-CP (CU Control Plane).
  • Potential application providers: Once we have an open CU-CP platform (e.g. as defined by xRAN / O-RAN), these applications could be provided by NF vendors, service providers, and third parties (e.g. load balancing). We need to provide guidelines for developing deployable microservices, but the distinction is not important from the ONAP perspective; they still run in the CU-CP and can be treated as VNFCs.
  • Time scale: These applications operate on the order of tens of milliseconds (20-100 ms).

3. Value Added Services (post-Casablanca release)

  • Description: These applications are value-added services provided by third parties (e.g. advertising, marketing, etc.). They do not fall into network optimization or operations automation (categories 1 and 2), but rather are value-added services. MEC or Edge DCAE can provide the needed data (e.g. geolocation, anonymized customer data, etc.) via a standard set of APIs.
  • Deployment options (two): 1) deployed at customer premises; 2) deployed on the edge cloud, but with no ONAP dependency.
  • Potential application providers: These are third-party applications, developed by value-added service providers. Application providers can leverage the edge cloud to run their workloads but have no ONAP dependency. They can, however, use the edge cloud to deploy these applications close to the edge to meet latency / performance constraints. These applications consume APIs exposed by MEC / Edge DCAE.
  • Time scale: These applications are usually non-real-time and operate in seconds / minutes or longer.

4. Automation / AR-VR / Content, etc. (post-Casablanca release)

  • Description: Third-party applications that interact directly with UEs, such as AR/VR, factory automation, drone control, etc. In this case messages, requests, or measurements go directly from the UE (via the UPF or gateways) to the applications, and the applications respond back. ONAP can deploy and manage these applications just like any other network element, or they can remain unmanaged applications (like APNs in today's world).
  • Deployment options (three): 1) deployed at customer premises; 2) deployed on the edge cloud, but with no ONAP dependency; 3) deployed and managed by ONAP on the edge cloud.
  • Potential application providers: These are third-party applications, developed by enterprise customers (e.g. factory automation) or content creators (AR/VR applications). Service providers can host them on the edge cloud (to meet latency requirements) as unmanaged applications, or fully manage them.
  • Time scale: These applications are usually real-time and operate on the order of a few milliseconds.

We acknowledge the need for different reference implementations suited to their respective market segments; however, interoperability across disparate implementations is paramount to the ubiquitous provision of services across multiple jurisdictions (e.g. multiple service providers serving different segments of a particular service chain).

One could try to build an all-encompassing standard that unifies the potential domains and jurisdictions involved, but previous attempts to solve similar problems with an umbrella standard have not proved effective. However, we do see the industry (or industries, as there are multiple involved here) benefiting from a common architecture pattern that stitches together disparate reference implementations with open APIs, information models, and abstractions building on common core principles, e.g. the SDN reference architecture originally specified by the ONF.

To this end, we would like to propose new community project(s) that bring together the best and brightest from the Telco, Cloud and IoT industries to specify a reference framework that distributes 'Resources' (including compute, storage, networking) and 'Intelligence' across spatio-temporal hierarchies, catering to disparate performance characteristics (e.g. latency budgets) and the multiple jurisdictions involved in a distributed service chain. This reference framework must include the following (a rough placement sketch follows this list):

  1. Optimal Distribution of Intelligence and Control, including distributed data collection and localized processing of intelligence;
  2. Optimal definition and placement of functional components;
  3. Optimal distribution of traffic and respective topology recommendations;
  4. Mobility implications of distributed service chains;
  5. Security implications of distributed service chains;
  6. Autonomic Control, Management and Operations of distributed service chains. 
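As a rough illustration of distributing functions across a spatio-temporal hierarchy, the sketch below assigns a workload to the most central tier whose latency budget still satisfies the workload's requirement. The tier names and latency budgets are illustrative assumptions, not measured values.

```python
from typing import List, Optional, Tuple

# (tier name, assumed one-way latency budget to the end user in milliseconds),
# ordered from most central to least central.
TIERS: List[Tuple[str, float]] = [
    ("central-dc", 100.0),
    ("regional-dc", 20.0),
    ("edge-site", 5.0),
    ("on-premises", 1.0),
]

def place_by_latency(required_latency_ms: float) -> Optional[str]:
    """Pick the most central tier that still meets the latency requirement;
    more central placement is assumed to be cheaper to operate."""
    for name, budget_ms in TIERS:
        if budget_ms <= required_latency_ms:
            return name
    return None  # requirement cannot be met even on premises

print(place_by_latency(50.0))  # -> regional-dc
print(place_by_latency(2.0))   # -> on-premises
```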

Architecture: 

  1. Non-real-time apps / 3rd-party analytics applications (>~500 ms) that support a broad scope: slice monitoring, performance analysis, fault analysis, root cause analysis, centralized SON applications, ML methodologies for various apps, Policy and optimization apps (e.g. video optimization, drive test minimization, etc.).
    ONAP architectural impact: These applications run on DCAE and/or other ONAP components (edge or centralized) and are packaged as microservices. DMaaP pub-sub is the communication model (a minimal consumer sketch follows).
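A minimal sketch of such a non-real-time analytics microservice consuming events over the DMaaP Message Router HTTP API. The host, port, topic name, and event field names below are assumptions for illustration and would need to match the actual deployment.

```python
import json
import time
import requests  # third-party HTTP client

# Assumed DMaaP Message Router endpoint and topic; adjust for a real deployment.
MR_BASE = "http://message-router:3904"
TOPIC = "unauthenticated.SEC_PM_OUTPUT"          # illustrative topic name
CONSUMER_GROUP, CONSUMER_ID = "edge-analytics", "instance-1"

def poll_events() -> list:
    """Long-poll the Message Router for a batch of PM events."""
    url = f"{MR_BASE}/events/{TOPIC}/{CONSUMER_GROUP}/{CONSUMER_ID}"
    resp = requests.get(url, params={"timeout": 15000, "limit": 100}, timeout=30)
    resp.raise_for_status()
    # Message Router returns a JSON array of message payload strings.
    return [json.loads(m) for m in resp.json()]

if __name__ == "__main__":
    while True:
        for event in poll_events():
            # Placeholder for the actual analytics / SON / optimization logic.
            print("received event:", event.get("event", {}).get("commonEventHeader", {}))
        time.sleep(1)
```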

Warning: the scope statements below need to be consolidated.

ONAP Edge Automation WG Scope (Cagatay Buyukkoc, Vimal Begwani, Kaniz Mahdi):

  1. Optimal Distribution of Intelligence and Control, including distributed data collection and localized processing of intelligence;
  2. Autonomic Control, Management and Operations of distributed service chains.

ONAP Edge Automation WG Scope (Srinivasa Addepalli)

This group will address the needs of an edge automation environment to satisfy the following:

  • Providing contextual information to application services after gathering information from 5G network functions.
  • Traffic steering to the right edge applications (e.g. programming the UE classifier of the UPF) and dynamic SFC within the VNFCs of an edge application.
  • Support for various edge sizes, from 2 nodes to 100 nodes.
  • Scaling needs: hierarchical federation (over and beyond auto scale-out of ONAP services), i.e. distribution of orchestration, fabric control, and stats/faults/log collection, plus distributed processing of the same (regional controllers).
  • Securing confidential information/keys/secrets and detecting any software tampering at the edges.
  • Optimal placement of edge applications, i.e. placing edge applications in the best edge(s) considering various constraints (e.g. proximity to the end user, radio/BW availability, cost, accelerator availability (HPA), geo-affinity regulations, trusted infrastructure of the edge, device characteristics, and resource availability to take up the load, etc.). Auto-creation of constraints is one requirement (a scoring sketch follows this list).
  • Supporting various application types (VMs and containers).
  • Performance determinism and a high-throughput edge.
  • Deploying IoT-specific infrastructure software at edges, such as EdgeX Foundry.
  • Supporting multi-tenancy to place workloads in edges belonging to various organizations.
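As a rough sketch of the constraint-driven placement bullet above, the code below filters candidate edge sites on hard constraints and scores the remainder on soft ones. All constraint names, thresholds, and weights are illustrative assumptions, not an ONAP (e.g. OOF/HPA) policy format.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class EdgeCandidate:
    """Illustrative view of one candidate edge site."""
    site_id: str
    distance_km: float       # proximity to the end user
    has_gpu: bool            # HPA-style accelerator availability
    trusted: bool            # trusted-infrastructure attestation passed
    spare_cpu_cores: int
    cost_per_core: float

def place(app_needs_gpu: bool, candidates: List[EdgeCandidate]) -> Optional[str]:
    """Apply hard constraints, then pick the best-scoring remaining site."""
    feasible = [c for c in candidates
                if c.trusted and c.spare_cpu_cores >= 4
                and (not app_needs_gpu or c.has_gpu)]
    if not feasible:
        return None
    # Soft constraints: prefer nearby and cheap sites (weights are arbitrary).
    best = min(feasible, key=lambda c: 0.7 * c.distance_km + 0.3 * c.cost_per_core)
    return best.site_id

candidates = [
    EdgeCandidate("edge-a", 5.0, True, True, 16, 0.9),
    EdgeCandidate("edge-b", 2.0, False, True, 32, 0.4),
]
print(place(app_needs_gpu=True, candidates=candidates))  # -> edge-a
```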

This group will study the needs of edges, identify ONAP's role, identify gaps in ONAP to satisfy the above needs, and propose solutions to address those gaps.

A few examples: on scaling, OOM-based scaling may not be sufficient, and there may be a need to offload some ONAP functionality to the regional level, as the target number of edge clouds could be in the tens of thousands. Also, to reduce the amount of data sent to central ONAP services for analytics, there is a need to offload DCAE functions to the regional level, which could involve identifying real-time data sources, collecting and analyzing the data, and disseminating the output data to the central ONAP function (a minimal aggregation sketch follows). Controlling the fabric (L2/L3 switches in edge clouds and WAN links) is another function that may require offloading some ONAP SDNC functions to regional sites.
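A minimal sketch of the regional offload idea: a regional collector reduces raw edge measurements to per-cell summaries so that only the summaries travel to the central ONAP analytics function. The field names and aggregation choices are assumptions for illustration.

```python
from collections import defaultdict
from statistics import mean
from typing import Dict, List

def aggregate_for_central(raw_events: List[Dict]) -> List[Dict]:
    """Collapse per-cell raw measurements gathered at a regional site into
    one summary record per cell for forwarding to central ONAP analytics."""
    by_cell: Dict[str, List[float]] = defaultdict(list)
    for ev in raw_events:
        by_cell[ev["cell_id"]].append(ev["prb_utilization_pct"])
    return [
        {"cell_id": cell, "samples": len(vals),
         "prb_utilization_avg_pct": round(mean(vals), 1),
         "prb_utilization_max_pct": max(vals)}
        for cell, vals in by_cell.items()
    ]

raw = [
    {"cell_id": "cell-1", "prb_utilization_pct": 62.0},
    {"cell_id": "cell-1", "prb_utilization_pct": 71.0},
    {"cell_id": "cell-2", "prb_utilization_pct": 18.0},
]
print(aggregate_for_central(raw))
```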


The context information is ….

What is context? (A sketch of a consolidated context record follows this list.)

  • User related / pref / prof / …
  • Location / trajectory
  • Application status
  • Device details
  • Mobility
  • Service characteristics / emergency characterization
  • Radio characteristics (e.g. RSRQ)
  • Load -> throughput guidance
  • Transport
  • Monetization / enhancing user experience aspects
  • Network / topology
  • ……
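A minimal sketch of how the context categories above could be grouped into a single record handed to edge applications. Every field name here is an illustrative assumption, not a standardized ONAP or ETSI MEC information model.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class UEContext:
    """Illustrative context record assembled at the edge for one UE/session."""
    anonymized_ue_id: str
    location: Optional[Dict[str, float]] = None     # e.g. {"lat": ..., "lon": ...}
    trajectory_heading_deg: Optional[float] = None
    device_type: Optional[str] = None
    application_status: Optional[str] = None
    service_class: Optional[str] = None             # incl. emergency characterization
    rsrq_db: Optional[float] = None                 # radio characteristics
    throughput_guidance_mbps: Optional[float] = None
    transport_path: Optional[str] = None
    extra: Dict[str, str] = field(default_factory=dict)  # room for further categories

ctx = UEContext(anonymized_ue_id="a1b2c3", rsrq_db=-11.5,
                throughput_guidance_mbps=25.0,
                location={"lat": 40.71, "lon": -74.00})
print(ctx)
```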

 

The collection, processing, and delivery mechanism:


The group will show how an ONAP-based API exposure / observability mechanism can work:

  • An ONAP-based data collection, processing, and exposure mechanism (API engine) can support all the API and exposure capabilities specified in various industry forums and standards bodies (e.g. ETSI MEC), and similarly support OpenFog applications (a minimal exposure endpoint sketch follows this list).
  • Real-time data from the RAN that identifies congestion and other cell-specific information can be provided by DCAE through various event-streaming data (e.g. PM event streaming). A subset of this is included in the ETSI MEC RNIS.
  • Using Acumos, Akraino, and other facilities provided by the Linux Foundation, a powerful ONAP-based exposure structure can be developed (e.g. predictions based on machine intelligence, better optimization algorithms, etc.).
  • ONAP-based systems can also coordinate information gathering from a variety of sources, to disseminate it and make it available to applications.
  • ONAP-based exposure supports separation of domains and provides LCM of the network (e.g. slicing) and applications.
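A minimal Flask sketch of what such an exposure endpoint could look like. The URL path, query parameters, and returned fields are hypothetical; they are loosely inspired by the ETSI MEC RNIS idea but are not the actual ETSI MEC API.

```python
from flask import Flask, jsonify, request  # third-party web framework

app = Flask(__name__)

# Illustrative in-memory store of per-cell radio information; in a real
# deployment this would be fed by DCAE collectors / PM event streams.
CELL_INFO = {
    "cell-42": {"prb_utilization_pct": 63.0, "connected_ues": 118},
}

@app.route("/edge-exposure/v1/radio-network-info")
def radio_network_info():
    """Hypothetical exposure endpoint returning cell-level radio information
    to authorized third-party edge applications."""
    cell_id = request.args.get("cell_id")
    if cell_id not in CELL_INFO:
        return jsonify({"error": "unknown cell_id"}), 404
    return jsonify({"cell_id": cell_id, **CELL_INFO[cell_id]})

if __name__ == "__main__":
    app.run(port=8080)
```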


Objective:

This group will address the need for an edge automation environment that will provide context information and other autonomic capabilities to 5G services and 3rd-party applications in close proximity to end users (~10s of ms). The tasks of this group are: leveraging ONAP elements (such as DCAE); identifying real-time data sources and fundamental context information, and the ability to collect, disseminate, and store it; and using standardized APIs or creating new ones, in addition to hosting autonomic-capability apps / microservices / unikernels. The objective will be to determine the needs of ONAP for supporting this environment, with a suggested approach to the use-case body for ONAP-ng (R2 or R3).


Additional requirements:

Bring a set of non-functional requirements:

  • Use an information / data model to discover and capture hardware resources deployed at the edge and to request the right type of resource to meet unique application needs. The information model should be vendor-agnostic and rich enough to cover new architectural directions within the 5G industry.
  • Infrastructure management should be technology-independent; must support workload deployment options such as VMs and containers (e.g. Kubernetes) on VMs or bare metal.
  • Since edge clouds are confined to much smaller areas, must support a very small footprint at an edge location serving a metropolitan area with a variety of workload deployments.
  • The edge cloud could be on customer premises, e.g. factory automation.
  • Must provide an efficient network infrastructure that supports slicing and QoS configuration options to meet the needs of various mobility services.
  • Must support policy-driven auto-recovery and scale-up / scale-down.

