This page lists important questions and answers related to the use cases. The goal is to explain the principles and, where possible, clarify them further with concrete examples based on a specific use case such as vCPE or VoLTE, so that the same questions are not asked repeatedly. We invite people to contribute to this wiki page by answering existing questions or adding new ones.

  1. Question:
    The output of SDC will include a variety of files (CSAR, XML, Yang, etc.) distributed to multiple modules such as SO, AAI, and Policy. Can we create a document specifying the details, including package structure, format, and usage? There are NSD and VNFD specs on the VF-C page (see https://wiki.onap.org/display/DW/VF-C+R1+Deliverables). Does ONAP use them as the generic spec?

    Answer:
    Onboarding of ONAP VNF artifacts is done through SDC and may in the future align with standards-based approaches such as those described in ETSI NFV (IFA011, IFA014, SOL004). The information on the VF-C page is an excerpt of these. The ONAP community can certainly create a document specifying the details of VNF packages, aligned with ETSI NFV SOL004; this specification would then be consumed by SDC.
    AI: SDC needs to clarify this. A sample would be very helpful.
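    As a purely illustrative aid (not an agreed ONAP format), a SOL004-style VNF package is a CSAR whose contents might look roughly like the listing below; the file names are hypothetical placeholders.

        # Hypothetical SOL004-style VNF package layout (illustrative only;
        # the authoritative structure is whatever SDC and SOL004 finally define).
        unzip -l vgw_vnf_package.csar
        #   TOSCA-Metadata/TOSCA.meta           entry-point metadata
        #   Definitions/vgw_vnfd.yaml           VNFD in TOSCA
        #   Artifacts/Deployment/base_vgw.yaml  HEAT template (if HEAT-based)
        #   Artifacts/Deployment/base_vgw.env   HEAT environment file
        #   vgw_vnf_package.mf                  manifest with artifact checksums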


  2. Question:
    For VNF onboarding, do we use HEAT as the input for the VPP-based VNFs? Does SDC still convert them to TOSCA internally and output TOSCA-based VNF packages?

    Answer: 
    For vCPE, the input is HEAT. Internally, SDC converts the HEAT to TOSCA, and the onboarding output is TOSCA-based: the service description is TOSCA, but the attachments include the HEAT artifacts, which will be distributed to SO. SO will pass the HEAT to Multi-VIM.
    For VoLTE, the input is TOSCA.
    According to Zahi Kapeluto at the 7/25 virtual event, SDC can currently only import HEAT templates, convert them to TOSCA, and distribute TOSCA.
    According to Eden Rozin at the 7/26 virtual event, SDC can import TOSCA VNFs; the limitation is that not all TOSCA syntax is supported.
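    To make this concrete, a minimal sketch of inspecting an SDC-distributed service CSAR is shown below; the file and directory names are assumptions for illustration, not the actual SDC output.

        # Hypothetical inspection of a service CSAR distributed by SDC for vCPE.
        # Directory names are assumptions; the real layout comes from SDC itself.
        unzip -o service-vcpe.csar -d /tmp/vcpe_csar
        ls /tmp/vcpe_csar/Definitions   # TOSCA service/resource templates
        ls /tmp/vcpe_csar/Artifacts     # attached HEAT templates/env files handed to SO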

  3. Question:
    What is the complete list of artifacts needed to complete a service design? Workflow recipes for SO, data models for AAI, Yang and DGs for SDNC and APPC, policies, data analytics programs. Anything else?

    Answer: 

    - Workflows for SO.
    - Yang models and DGs
    - Policies
    - AAI data models: generic VNF model (existing) and service-specific models (to be created).
    - Data analytics: to reuse the existing TCA
    - Robot framework to emulate BSS
    - Blueprint template and policies created using CLAMP

  4. Question:
    What specific workflow recipes are needed for vCPE? What are the tools to create such recipes? How are the workflow recipes associated with the service, packaged, and distributed?

    Answer: 
    There will be two sets of workflows. 
         - The first set instantiates the general infrastructure, including vBNG, vG_Mux, vDHCP, and vAAA. These workflows are explained on the use case wiki page.
         - The second set instantiates the per-customer service, including service-level operations and resource-level operations (tunnel cross-connect, vG, and vBRG). These workflows are explained in these slides.


  5. Question:
    Are we supposed to create a specific set of workflow recipes for each use case? E.g., one set for vCPE and one set for VoLTE.

    Answer: 
    There are generic workflows that can be reused. There is also a need for use-case-specific workflows.

  6. Question:
    Do we manually create AAI data models or use tools? How are the models packaged with the service and distributed? What data models are needed for vCPE?

    Answer: 
    Based on meeting discussions:
    --------------------------------------------
    We need a consumer broadband service model. The existing generic VNF model will be reused. Additional parameters may be needed on top of the generic VNF model to create specific use case VNF models. There are tools to do this.

    Comments from James Forsyth:  
    -------------------------------------------
    A&AI watches for new models in SDC, and then the model loader puts those model definitions into the A&AI backend. The models needed for the vCPE use case would be defined in SDC.

    https://wiki.onap.org/download/attachments/1015849/aai_schema_v11.xsd?api=v2 contains the entire schema for A&AI, including generic-vnf.

    Comments from Brian Freeman:
    ---------------------------------------------

    How would A&AI model the service path from the premises to the Internet in the vCPE use case?

    I think the generic VNF model is part of it.

    I think the vlan/tunnel information might be missing, or perhaps you could explain how the linked list of Layer 2 data could be used for the path?

    It's the physical data path piece that I am worried about, not the VNFs themselves.

    Comments from James Forsyth:
    -----------------------------------------------

    This looks like a model-based query which would allow us to define the service path and then we could pull the whole thing out of the graph with a single call, starting at a given vertex like service-subscription.  We’ve done similar use cases with A&AI in the past, and I think we have the appropriate vnf structures, networks, and interfaces defined in the schema to be able to define the service path, possibly using the model from SDC and maybe building some custom network models to support it as well.  A&AI architecture will look closely at the use case and will perhaps provide a suggested model of how this service might look in A&AI.
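    For illustration only, a query against A&AI for a generic-vnf instance (once the model loader has populated the inventory) might look like the sketch below; the host, credentials, and vnf-id are placeholders.

        # Hedged sketch: read a generic-vnf object from A&AI (schema v11).
        # <aai-host> and <vnf-id> are placeholders; credentials depend on the install.
        curl -sk -u AAI:AAI \
          -H "X-FromAppId: vcpe-demo" \
          -H "X-TransactionId: $(uuidgen)" \
          -H "Accept: application/json" \
          "https://<aai-host>:8443/aai/v11/network/generic-vnfs/generic-vnf/<vnf-id>"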



  7. Question:
    Who creates the Yang files to define the SDNC/APPC NBI APIs and data models? Does SDC use the Yang files?

    Answer: 
    For vCPE, Yang files are needed for SDNC to define the NBI API and the service data model for configuration. APPC will just do stop/start. APPC will also configure the VNFs to enable VES data reporting to DCAE.
    The SDNC team will define the Yang files for the SDNC NBI APIs. These Yang models will be based on the Yang models provided by the individual VNFs.
    In R1, DG Builder works separately, outside of SDC, so SDC does not use the Yang files. The long-term goal is to integrate DG Builder into SDC.
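    Once the SDNC NBI Yang model exists, the resulting RPC would typically be exposed over RESTCONF. The sketch below is hypothetical (the module and RPC names are placeholders, not the real vCPE API) and only illustrates the calling pattern.

        # Hypothetical RESTCONF call to an SDNC NBI RPC defined by a Yang model.
        # <module> and <rpc-name> are placeholders until the SDNC team publishes the model.
        curl -s -u <user>:<password> \
          -H "Content-Type: application/json" \
          -X POST "http://<sdnc-host>:8282/restconf/operations/<module>:<rpc-name>" \
          -d '{"input": {"service-instance-id": "example-instance"}}'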

  8. Question:
    In the general case, ONAP may need to create a new SDNC/APPC dedicated to a new service when the service is instantiated. Is it designed by the ONAP operational team outside of SDC, or included as an SDC function? What is the process for creating a new SDNC/APPC? Who does it, and in what way, manual or automatic?

    Answer: 
    In R1, the DGs are packaged with SDNC/APPC and are loaded into their DBs during instantiation. SDC does not use such DGs. The SDNC and APPC instances are created in advance with the required DGs loaded. The creation of SDNC and APPC is not part of SDC. It belongs to OOM.

  9. Question:
    What policies are needed for vCPE? Are they created manually or using tools (Drools)? How are they integrated into SDC or CLAMP?

    Answer:
    The policies needed for the vCPE use case are being determined and defined. For R1, the expectation is that they will be created manually, without integration with SDC or other tools.

  10. Question:
    Message bus topics need to be defined and used on DMaaP to enable communication among different modules. When are such topics defined and how are they configured? For example, Policy will send an event on TOPIC_VM_RESTART to invoke a VM restart, and APPC will subscribe to TOPIC_VM_RESTART and execute the restart DG. When and where is this topic defined? Who configures Policy and APPC to publish/subscribe to the topic, and when?

    Answer:
    For R1 these will be configured statically.
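    As an illustrative sketch (the topic name and hosts are placeholders; 3904 is the usual Message Router port), statically configured publish/subscribe against DMaaP could look like this:

        MR=http://<dmaap-host>:3904
        TOPIC=TOPIC_VM_RESTART

        # Publisher side (e.g. Policy) posts a restart event to the topic.
        curl -s -X POST "$MR/events/$TOPIC" \
          -H "Content-Type: application/json" \
          -d '{"action": "restart", "vnf-id": "example-vgmux"}'

        # Subscriber side (e.g. APPC) long-polls the topic with a consumer group/id.
        curl -s "$MR/events/$TOPIC/appc-group/appc-1?timeout=15000"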

  11. Question:
    Are we going to let VNFs actively post VES data to the DCAE REST API for data collection? If yes, we will need to configure the VNFs with the DCAE collector's URI. Is this configuration performed by APPC? Does APPC get the URI from AAI?

    Answer:
    Yes, VNFs will post VES data to the DCAE collector, so each VNF will need the IP and port of the DCAE collector. There are two options for this:
    -  Use APPC: the VNF will provide a NETCONF API, and APPC will configure the VNF after it is instantiated.
    -  Use HEAT: the IP and port will be put in the HEAT template.
    Based on discussions, the preference is to use APPC for this configuration, because in practice the collector may change (e.g., due to failover), and APPC can modify the collector address at runtime.
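    For reference, a minimal sketch of the HEAT option is shown below, mirroring the vFW/vLB pattern quoted in the comments at the bottom of this page: cloud-init writes the collector address into config files that the VES agent reads at startup. The values are placeholders.

        # Sketch of the HEAT option: user_data drops the collector address into
        # /opt/config, where the VES agent picks it up when it starts.
        mkdir -p /opt/config
        echo "__dcae_collector_ip__"   > /opt/config/dcae_collector_ip.txt
        echo "__dcae_collector_port__" > /opt/config/dcae_collector_port.txt
        # With the APPC option, the same values would instead be pushed over the
        # VNF's NETCONF interface after instantiation, so they can change at runtime.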

  12. Question:
    Who will develop the data analytics program? Is it required to re-build the DCAE containers to include the analytics program?

    Answer:
    Kang: to be confirmed with DCAE; the plan is to modify TCA, which will require a rebuild.

  13. Question:
    Are we going to build the analytics program as a CDAP application or as a Docker container?

    Answer:
    TCA will be used for the vCPE use case.

  14. Question:
    Are there any KPI scripts that need to be created in SDC?

    Answer:
    The control loop is designed by CLAMP, which includes KPIs as parameters and policies.
    Action Item: Discussions are needed to confirm the interface between SDC and CLAMP. 


  15. Question:
    What Robot tests need to be created at design time? How is this process integrated into SDC?

    Answer: 
    We need the Robot framework to emulate the BSS sending in customer orders. We also need the Robot framework to load data into DHCP and AAA.

  16. Question:
    Is there a standard format for Robot testing reports? How are they presented in ONAP?

    Answer: 
    AI: Discuss with Daniel Rose, Jerry Flood.

  17. Question:
    Are we going to use the generic VID or create a vCPE-flavored VID to instantiate vCPE?

    Answer:
    The expectation is that the Robot framework will be used to emulate requests to instantiate the subscriber-specific vCPE VNFs. VID may be used to trigger orchestration/instantiation of the supporting (DHCP, DNS, AAA) and edge/metro (BNG, MUX) VNFs.

  18. Question:
    What kind of monitoring dashboard is required for vCPE?

    Answer:
    AI: Discuss with the Usecase UI project.


  19. Question:
    For vCPE, what is the data collection/reporting mechanism between VNF and DCAE?

    Answer:
    We would prefer to use the connect approach to report generic statistics (e.g., per-port packets in/out, packet drop rate) and the VES agent approach for VNF-specific statistics (e.g., per-flow and per-subscriber data).
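    For illustration, a trimmed VES event post from a VNF-side agent to the DCAE collector might look like the sketch below; the listener version, port, and payload fields are assumptions and omit most of the mandatory VES header fields.

        # Hedged sketch of a VES measurement event post to the DCAE VES collector.
        curl -s -X POST "http://<dcae-collector-ip>:8080/eventListener/v5" \
          -H "Content-Type: application/json" \
          -d '{
                "event": {
                  "commonEventHeader": {
                    "domain": "measurementsForVfScaling",
                    "eventName": "vGMUX packet loss measurement",
                    "sourceName": "vgmux-0"
                  }
                }
              }'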

  20. Question:
    For vCPE, list all the required SDNC control/configuration actions.

    Answer:
    -  

  21. Question:
    For vCPE, list all the required APPC control/configuration actions.

    Answer: 


  22. Question:
    What is the complete list of tools/UIs used to perform design and operation? For each tool, is it available for testing purposes? If not, what main functions need to be developed for R1?

    Answer:

    The table columns are: Function | Tools | Input | Output | Sample input/output link | Available for testing? | Main functions to be developed for R1 | Notes. The known entries are:

    - VNF packaging: Tools: VNF SDK.
    - VNF certification: Tools: ICE.
    - VNF onboarding: Tools: SDC. Input: VNF template, environment, reference to images. Output: VNF package. Notes: every HEAT template received by SDC should be previously certified by ICE; ICE is not integrated with SDC so far. Images are not stored in SDC; they should be pulled from Multi-VIM, since different VIMs have different image formats.
    - Service template creation: Tools: SDC. Input: VNF packages, .. Output: service template in TOSCA.
    - Closed loop design: Tools: CLAMP, SDC. Input: closed loop TOSCA template, VES/TCA templates, VES onboarding yaml file. Output: policies and a blueprint template.
    - Policy (out of closed loop): Tools: Policy GUI. Available for testing: yes, via the Portal dashboard. Notes: the Policy GUI has a tab which allows deploying a policy into a PDP; integration of the Policy GUI into SDC is not planned for R1.
    - Workflow: Output: bpmn files. Notes: Camunda Modeler for the BPMN files.
    - A&AI data model: Notes: check with AAI.
    - Yang model for SDNC/APPC: Tools: text editor. Output: yang files. Available for testing: yes.
    - DG: Tools: DG Builder. Input: Yang model. Output: json/xml/compiled DG. Available for testing: yes.
    - Data analytics application: Tools: java.
    - Data collector: VES docker container.
    - Service instantiation: Tools: VID.
    - Monitoring dashboard: Tools: Use Case UI.
    - Robot to emulate BSS.
    - Robot to invoke packet loss.

  23. Question:
    How does the vG_MUX in vCPE restore to a working state after being restarted?

    Answer:
    Approach proposed by Danny Zhou and Johnson Li:

    Step 1: SDN-C invokes an agent in the vG MUX to configure the VES collector's IP and port; this information is saved to a VES agent configuration file. Note: if VPP has already started when this call is invoked, the agent updates the in-memory variables as well.

    Step 2: When VPP starts, the VES agent library loads the VES collector's IP and port into memory and uses them to construct the URI for interfacing with the VES collector embedded in DCAE.

    Step 3: Robot configures the packet loss rate statistics on VPP to emulate a high packet loss scenario.

    Step 4: The VES agent periodically reads the packet loss rate statistics from VPP.

    Step 5: The VES agent reports the statistics to DCAE.

    Step 6: The policy engine matches the vG MUX restart policy against the packet loss rate statistics and triggers APP-C to restart the vG MUX VNF.

    Step 7: APP-C restarts the vG MUX via Multi-VIM.

    Step 8: VPP is signaled by the OS to save the current configuration to a configuration DB; this configuration is consumed when VPP restarts. Note: in R1, all statistics are cleared to zero when VPP restarts. We could save them to the DB as well, but that requires a more complicated HA framework, so it might be an R2 feature.

    Oliver Spatscheck supports the above approach for R1 and points out the following shortcoming that should be fixed in R2:
    If we use VPP this way we can't use authorization in the VES collection (this would require a call into DCAE to configure VES collector access per VNF). What you describe above is similar to the setup we have for vDNS/vFW right now (except that we "hardcode" the VES collector IP), but it is not the way we would run it in a production setting.
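    As an editorial illustration of steps 1 and 2 (not part of the proposal itself), the agent-side handling could look roughly like the sketch below; the config file path and format are hypothetical.

        # Step 1 (sketch): SDN-C writes the collector address to an agent config file.
        printf 'collector_ip=10.0.0.20\ncollector_port=8080\n' > /etc/ves_agent.conf

        # Step 2 (sketch): on VPP start, the agent (or a wrapper) loads the values.
        COLLECTOR_IP=$(awk -F= '/^collector_ip/ {print $2}' /etc/ves_agent.conf)
        COLLECTOR_PORT=$(awk -F= '/^collector_port/ {print $2}' /etc/ves_agent.conf)
        echo "Reporting VES events to http://$COLLECTOR_IP:$COLLECTOR_PORT"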






8 Comments

  1. For Question No. 11, we checked that the vLB/vFW use case hard-codes the IP and port of the DCAE collector in the VES agent, and the vCPE use case might take the same approach by hard-coding the IP/port in either a standalone VES agent application or the VPP application (linked with the VES agent library). A better approach, however, is for APPC to define either a REST or a NETCONF spec; this would not only benefit all VPP-based VNFs in those two use cases but also benefit the VoLTE use case, and would help integrate commercial VNFs that need to report statistics to ONAP. Once the spec is defined, we can certainly support it in the open source VNFs.

  2. I thought that the .env file had a default DCAE_Collector_IP, but that the preload data could overwrite that as a parameter to the heat stack create. The init scripts should be copying the IP from a text file in config to the needed spot in the VNF? I agree that APPC or SDNC could do that configuration, and it would be a more flexible solution for handling things like lifecycle management where the DCAE collector changes. I didn't think it was hard-coded, though.


       user_data:
            str_replace:
              params:
                __dcae_collector_ip__: { get_param: dcae_collector_ip }
                __dcae_collector_port__: { get_param: dcae_collector_port }
                __local_private_ipaddr__: { get_param: vlb_private_ip_0 }
                __repo_url_blob__ : { get_param: repo_url_blob }
                __repo_url_artifacts__ : { get_param: repo_url_artifacts }
                __demo_artifacts_version__ : { get_param: demo_artifacts_version }
                __install_script_version__ : { get_param: install_script_version }
                __cloud_env__ : { get_param: cloud_env }
              template: |
                #!/bin/bash
    
                # Create configuration files
                mkdir /opt/config
                echo "__dcae_collector_ip__" > /opt/config/dcae_collector_ip.txt
                echo "__dcae_collector_port__" > /opt/config/dcae_collector_port.txt
                echo "__local_private_ipaddr__" > /opt/config/local_private_ipaddr.txt
                echo "__repo_url_blob__" > /opt/config/repo_url_blob.txt
                echo "__repo_url_artifacts__" > /opt/config/repo_url_artifacts.txt
                echo "__demo_artifacts_version__" > /opt/config/demo_artifacts_version.txt
                echo "__install_script_version__" > /opt/config/install_script_version.txt
                echo "__cloud_env__" > /opt/config/cloud_env.txt
    
                # Download and run install script
                curl -k __repo_url_blob__/org.openecomp.demo/vnfs/vlb/__install_script_version__/v_lb_install.sh -o /opt/v_lb_install.sh
                cd /opt
                chmod +x v_lb_install.sh
                ./v_lb_install.sh
    
  3. Danny, Brian,

       Also ran into the hard-coding in the vFW demo when attempting to create 2 instances (the .env parameters are ignored); the IPs/networks come from /opt/eteshare/config/integration_preload_parameters.py

    UCA-17

        /michael

  4. I think the issue is that the preload parameters are merged with the parameters in the .env file, and the preload parameters overwrite the .env parameters. The Robot framework sets the preload data, and, as indicated in the JIRA ticket, you need to change the Robot properties in the right spot (I think Jerry replied in the JIRA ticket). So updating the .env does not work if you set the variable for a specific instance in the preload data, and you need to make sure that the Robot updates for your environment apply. Note that you don't need to use Robot, but it's a lot easier to use Robot for the vFW/vLB case.

  5. The content below, captured from go-client.sh at https://gerrit.onap.org/r/gitweb?p=demo.git;a=tree;f=vnfs/VESreporting_vFW5.0;hb=HEAD, clearly shows how the VNF takes the IP and port parameters from values hard-coded (at least I think it counts as hard-coded by definition (smile)) in text files. We need either Robot or APP-C to configure them via REST or NETCONF prior to starting the workload, and to also allow APP-C to change them at runtime for more complicated scenarios (e.g. HA).

       #!/bin/bash

       export LD_LIBRARY_PATH="/opt/VES/libs/x86_64/"
       DCAE_COLLECTOR_IP=$(cat /opt/config/dcae_collector_ip.txt)
       DCAE_COLLECTOR_PORT=$(cat /opt/config/dcae_collector_port.txt)
       ./vpp_measurement_reporter $DCAE_COLLECTOR_IP $DCAE_COLLECTOR_PORT eth1

  6. When trying to summarize the VNF operations (i.e. generic operations that all VNFs must support) from the R1 use cases, so that we can sync up with the VNF Requirements work, I came up with the following table. Is this correct from the Use Case team's view?

    The table columns are: TSC Use Case | VNFs identified in the TSC use case | VNF operations in the R1 use cases.

    - Use Case: Residential Broadband vCPE (approved). VNFs: vBNG, vG_MUX, vG, vAAA, vDHCP, vDNS. Operations: VNF Onboarding, VNF Instantiation, VNF Activation, VNF Configuration (initial).

    - Use Case: vFW/vDNS (approved). VNFs: vFW, vPacketGenerator, vDataSink, vDNS, vLoadBalancer (all VPP based). Operations: VNF Onboarding, VNF Instantiation, VNF Configuration (initial), VNF Auto Healing (Auto Reboot).

    - Use Case: VoLTE (approved). VNFs: vSBC, vPCSCF, vSPGW, vPCRF, vI/SCSCF, vTAS, vHSS, vMME. Operations: VNF Onboarding, VNF Instantiation, VNF Autoscaling, VNF Termination.

    1. For the VoLTE use case, vEPDG is missing. VNFs in this use case need to support auto healing and event reporting as well.

  7. In orchestration I need to select a template for the requested service. If video streaming is requested as a service, what details am I supposed to have in the template model (which is defined in TOSCA), such as data, nodes, policies, artifacts, capabilities, etc.?