Gliffy Diagram: RAN_deploy_step_1 Copy


This page contains work in progress!

Questions and comments are inserted in the text as needed, prefix them with "Question:" or "Comment:".

Text below the line "----temporary text ----" is a placeholder for text that may or may not be used later on.

...

Rev | Author | Comment
9/7/17 | Peter L | Copied text from the v4 document, must check the v5 document for additional parts
9/14/17 | Oskar M | Some restructuring and clarifications. Temporary text either removed or inserted into the various UC steps.
9/21/17 | Oskar M | Added some sequence diagrams and made some minor adjustments to descriptions as well as overall assumptions in order to align with diagrams. Policy-based automation to handle network faults or service degradation has been moved to a separate step.

Goal

A planned list of 5G nodes is on-boarded into ONAP, and ONAP configures the nodes to the level that they are ready to handle traffic. ONAP then begins to actively monitor and manage the nodes.

...

  • The 5G nodes consist of both PNFs (DU) and VNFs (CU). A single CU may consist of several VNFs.
  • Scope is limited to one PLMN without slicing.
  • A single vendor delivers RAN equipment and software.
  • A single service provider:
    • Owns or leases data center equipment, cell sites, physical transport, and any new equipment installed on these sites
    • Owns and operates the resulting RAN
    • Is the single user of the entire ONAP based management system
      • VNF/PNF provider is not visible as actor in this use case and self-service for VNF/PNF onboarding is not supported.
  • Network status including KPIs can be monitored in Portal (dashboard), but exporting data via APIs to external monitoring applications is out of scope.
  • This use case covers only the initial deployment of nodes. Thus, change management such as software upgrade is out of scope.

<Question - Karpura Suryadevara, re: 3rd bullet above> Is there a specific reason why only a single vendor is considered for all the components of RAN equipment and software?
<Peter L> Yes - the intention is to simplify the use case by avoiding (a) interoperability problems between components and (b) the mapping of the same ONAP-level configurations to multiple, vendor-specific models (with this limitation we only need to show one mapping). If we want multiple-vendor equipment in the RAN then the UC should perhaps be relabeled to "5G Multi Vendor RAN Deployment" to clearly reflect that - but then we also need equipment from multiple vendors to run a demo. Or?

Preconditions

To clarify the limits of this use case for this release of ONAP the following preconditions are assumed:

...

The 5G RAN is providing RAN service for the user equipment according to expectations:

  • All planned services are online, providing FCAPS data through the relevant channels
  • RAN NOC personnel have full access to the FCAPS data and ONAP automation framework
    • Dashboard or other NOC tools are configured to display relevant RAN data
    • Calculation and monitoring of key performance indicators is activated and used to verify the capacity requirements
  • A first automation use case reacting on an incident or state change has been implemented

Steps

(Oskar M.) More sequence diagrams to be added to the steps below. The diagram in step 3 needs some more details.

Step 1: Service design

...

...

  • Onboard SW packages, descriptors and any other artefacts provided by the RAN vendor
  • Onboard any planning data that is common across all nodes
  • Design RAN-level templates, recipes and workflows covering common network elements, transport network, data collection and analytics, policies and corrective actions
  • Design node-level templates, recipes and workflows covering network elements (PNFs and VNFs), transport network, placement or QoS constraints, data collection and analytics, policies and corrective actions
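As a rough illustration of the node-level design artefacts listed above, the sketch below models a template with common fields filled in at design time and per-node fields left open for planning data in step 4. All field names are invented for illustration and do not follow any real ONAP or TOSCA schema.

```python
# Hypothetical node-level template: common values set at design time,
# per-node values (site, transport) left as None until planning data
# is inserted when nodes are added in step 4.
node_template = {
    "node_type": "5G-DU",                       # PNF in this use case
    "software_package": "vendor-du-sw-1.0",     # vendor-provided artefact
    "transport": {"backhaul_vlan": None, "qos_profile": "ran-default"},
    "placement": {"site_id": None},
    "data_collection": ["fault", "performance", "log"],
    "policies": ["link-failure-restart", "overload-throttle"],
}

def missing_planning_fields(template):
    """Return the per-node fields that planning data must still provide."""
    missing = []
    if template["transport"]["backhaul_vlan"] is None:
        missing.append("transport.backhaul_vlan")
    if template["placement"]["site_id"] is None:
        missing.append("placement.site_id")
    return missing

print(missing_planning_fields(node_template))
# -> ['transport.backhaul_vlan', 'placement.site_id']
```

In a real deployment these artefacts would be packaged and onboarded through SDC; the dict above only makes the split between common and node-specific data concrete.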

Step 2: Design verification (design-time/run-time)

  • Distribute the completed design as well as vendor-provided artefacts to the various run-time components.

(Oskar M.) General ONAP question: Are WAN resource requirements embedded in service template parsed by SO, or do they use separate descriptors that shall be distributed to SDNC?

Gliffy Diagram: RAN_deploy_step_1

  • Verify templates and recipes from step 1, using a dedicated test environment or limited trial following the steps below. If necessary, make adjustments according to step 1.

Step 3: Deploy shared services

...

This step refers to deployment of any shared RAN services and functions defined by templates, recipes and workflows in step 1. Note that some of the functions below may be partially inactive until nodes are added in step 4.

  • On receiving a service instantiation request via the Portal or an external API, SO and controllers will decompose the request, and allocate and connect the various resources.
  • DCAE will start fault, performance, and log data collection as described during design time.
  • DCAE will perform data analytics as configured in recipes, to monitor the environment and detect anomalous conditions. Output from analytics is forwarded to Policy and the dashboard.
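The analytics behaviour described above can be sketched as a simple threshold check over collected KPI samples, with breaches packaged as events for Policy and the dashboard. This is a minimal illustration, not how DCAE analytics micro-services are actually configured; all names and thresholds are assumptions.

```python
# Minimal sketch of a DCAE-style analytics pass: scan KPI samples,
# flag values breaching a configured threshold, and build events that
# would be forwarded to Policy and the dashboard.
from statistics import mean

def detect_anomalies(samples, threshold):
    """Return (average, indices of samples breaching the threshold)."""
    breaches = [i for i, v in enumerate(samples) if v > threshold]
    return mean(samples), breaches

latency_ms = [12, 14, 13, 95, 12]            # per-interval KPI samples (invented)
avg, breaches = detect_anomalies(latency_ms, threshold=50)
events = [{"kpi": "latency_ms", "index": i, "value": latency_ms[i]}
          for i in breaches]                 # forwarded to subscribers
```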

(Oskar M.) General ONAP question: The diagram below is modeled based on some other ONAP use cases. Why is there a service instantiation request towards SO, but no corresponding requests towards DCAE or Policy to instantiate/activate analytics blueprints or policy rules for this particular service instance?

Gliffy Diagram: RAN_deploy_step_2



Step 4: Add nodes

...

This step refers to deployment of services and functions defined by templates, recipes and workflows in step 1 for a new 5G node.

  • A sub-flow includes the onboarding process for related PNFs.
  • In this step node-specific data from planning is also inserted.
  • On receiving the request, SO and controllers will decompose the request, and allocate and connect the various resources.
  • DCAE will start fault, performance, and log data collection as described during design time.
  • DCAE will perform data analytics as configured in recipes, to monitor the environment and detect anomalous conditions. Output from analytics is forwarded to Policy and the dashboard.
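The node-addition flow above, including the insertion of node-specific planning data, can be sketched as merging common planning data with each planned node's data to produce one instantiation request per node. The field names are invented for illustration and do not correspond to a real SO API.

```python
# Sketch of bulk node addition in this step: common planning data from
# step 1 is merged with node-specific planning data, yielding one
# SO-style instantiation request per planned node.
common = {"plmn": "001-01", "sw": "vendor-du-sw-1.0"}   # shared across nodes

planned_nodes = [
    {"node_id": "du-001", "site_id": "site-17", "backhaul_vlan": 101},
    {"node_id": "du-002", "site_id": "site-18", "backhaul_vlan": 102},
]

def build_requests(common, nodes):
    """Merge common data into each node record; node values win on clash."""
    return [{**common, **node} for node in nodes]

requests = build_requests(common, planned_nodes)
```

This also shows why bulk handling is natural here: the same template-plus-planning-data merge is simply repeated per node.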

Step 5: Verify operation

  • Verify that service is provided and can be monitored through dashboard using basic observability data and calculated KPIs.
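The KPI-based verification above can be illustrated by deriving a success-rate KPI from raw counters and checking it against a planned target. Counter names and the 99% target are assumptions for the sketch, not taken from any 3GPP KPI definition.

```python
# Sketch of KPI verification: compute a success rate from collected
# counters and compare it against the planned capacity requirement.
def success_rate(attempts, successes):
    """Percentage of successful attempts; 0.0 when there were none."""
    return 100.0 * successes / attempts if attempts else 0.0

counters = {"setup_attempts": 2000, "setup_successes": 1984}  # invented values
kpi = success_rate(counters["setup_attempts"], counters["setup_successes"])
meets_target = kpi >= 99.0   # assumed planned requirement
```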

Step 6: React to incident

  • Corrective/remedial actions for network impairments or for violations of service levels, as described by defined policies, are initiated using the SO and/or controllers.
  • This step is repeated for each added node. Bulk deployment should be possible to handle a larger number of nodes in an efficient way.

  • For verification purposes this may require fault injection.
  • Verify that policy definitions and their corrective actions are active and have the intended effect.
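The policy-driven reaction described in this step can be sketched as a lookup from a detected condition to the corrective action that would be handed to SO or the controllers. The conditions and actions below are invented for illustration; real ONAP policies are expressed and evaluated in the Policy framework, not as a Python dict.

```python
# Sketch of policy-driven incident reaction: map a detected condition
# to its defined corrective action, or None if no policy matches.
policies = {
    "link-down":     "reroute-transport",
    "sleeping-cell": "restart-du",
    "kpi-degraded":  "raise-ticket",
}

def corrective_action(event):
    """Return the corrective action for the event's condition, if any."""
    return policies.get(event["condition"])

# Example event as it might arrive from analytics (names are illustrative).
action = corrective_action({"condition": "sleeping-cell", "node": "du-001"})
```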