
Problem Statement:

In support of the ONAP Beijing release, ONAP components are required to maintain stability both short term and long term.  The goal of the APPC stability test is to apply a steady load to the APPC environment while maintaining application usability.

Stability Requirements:

  • Level 0:  none beyond release requirements
  • Level 1:  72 hour component-level soak test (random test transactions with 80% code coverage; steady load)
  • Level 2:  72 hour platform-level soak test (random test transactions with 80% code coverage; steady load)
  • Level 3:  track record over 6 months of reduced defect rate

Test Environment:

The test environment will be located in the Wind River lab and will likely consist of the following:


  • 1 VM to manage and execute stability tests.
  • 1 VM for APPC (optionally, this could be set up as 3 VMs in a 3-node ODL cluster)
  • 1 VM for DMaaP
  • 1 VM for A&AI
  • 1 VM to set up and use for VNF configuration
  • 4 small VMs to support random LCM actions such as Restart, Rebuild, Migrate, etc.


Aaron Hay has experience with setting up the components (DMaaP, A&AI, and APPC) as well as working with OpenStack.

Proposed Solution:

The solution will be to create a test client that sends commands to APPC via DMaaP and receives the asynchronous responses.  The results will be logged to a delimited file, which will provide details on each request and its response.  Additionally, the APPC Metrics logs can be used to determine whether transaction responsiveness is degrading over time.
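As a sketch of the request-building and logging side of such a client, the helpers below construct an LCM request envelope and append one pipe-delimited record per request/response pair.  The common-header field names follow APPC LCM API conventions, but the exact values (api-ver, flags, originator-id) and the column layout of the log file are assumptions to be finalized in the test design.

```python
import csv
import io
import uuid
from datetime import datetime, timezone

def build_lcm_request(action, vnf_id):
    """Build an APPC LCM request envelope.

    Field names follow LCM API conventions; verify the exact schema
    (api-ver, flags, identifiers) against the target APPC release.
    """
    request_id = str(uuid.uuid4())
    body = {
        "input": {
            "common-header": {
                "timestamp": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.%fZ"),
                "api-ver": "2.00",                      # assumed version
                "originator-id": "stability-test-client",
                "request-id": request_id,
                "sub-request-id": "1",
                "flags": {"mode": "NORMAL", "force": "FALSE", "ttl": 3600},
            },
            "action": action,
            "action-identifiers": {"vnf-id": vnf_id},
        }
    }
    return request_id, body

def log_result(writer, request_id, action, vnf_id, sent_at, received_at, status):
    """Append one pipe-delimited record per request/response pair,
    including elapsed time so slowdowns over the 72-hour run are visible."""
    writer.writerow([request_id, action, vnf_id,
                     f"{sent_at:.3f}", f"{received_at:.3f}",
                     f"{received_at - sent_at:.3f}", status])

# Example: one synthetic request/response logged to an in-memory buffer.
buf = io.StringIO()
writer = csv.writer(buf, delimiter="|")
rid, body = build_lcm_request("Restart", "vnf-example-001")
log_result(writer, rid, "Restart", "vnf-example-001", 100.0, 101.25, "SUCCESS")
```

In a real run the writer would wrap a file opened in append mode, and the timestamps would come from the send/receive points around the DMaaP publish and poll calls.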


APPC Proposed Actions to be Tested (Randomly):

  • Configure
  • ConfigModify
  • Restart
  • Rebuild
  • Stop
  • Start
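To pick actions randomly while keeping the load steady, the driver loop below is one minimal sketch.  The `dispatch` callable is hypothetical (it would send the request over DMaaP), and the interval and duration values are placeholders; a 72-hour soak would use `duration_s=72 * 3600`.

```python
import random
import time

# The proposed actions from the list above.
ACTIONS = ["Configure", "ConfigModify", "Restart", "Rebuild", "Stop", "Start"]

def next_action(rng=random):
    """Pick one of the proposed LCM actions uniformly at random."""
    return rng.choice(ACTIONS)

def steady_load(dispatch, interval_s=30.0, duration_s=60.0,
                sleep=time.sleep, clock=time.monotonic):
    """Dispatch one random action every `interval_s` seconds until
    `duration_s` has elapsed; returns the number of requests sent.

    `sleep` and `clock` are injectable so the loop can be tested
    without real waiting.
    """
    deadline = clock() + duration_s
    count = 0
    while clock() < deadline:
        dispatch(next_action())
        count += 1
        sleep(interval_s)
    return count
```

A fixed inter-request gap is the simplest way to produce the "steady load" the requirements call for; if a constant average rate with random spacing is preferred, the sleep could instead draw from an exponential distribution.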


The proposed solution will use Python due to the availability of numerous Python modules, which will make coding the test client more efficient and maintain the spirit of open source.


Additionally, the group will research any Open Source test clients that currently exist.

Action Items:

  • Aaron Hay
    • Follow up with Wind River (Stephen Gooch) on the availability of adding the VM necessary to conduct the test.
      • (Due 1/16/2018)
    • Determine what collaboration is needed to ensure the lab isn't touched during the testing period.
      • (Due 1/16/2018)
  • Scott Seabolt
    • Scott Seabolt & Patrick Brady:
      • Follow up with APPC team on possible VNF options for the Configure and ConfigModify.
        • One possibility is to setup APPC to support vFirewall/PacketGen to work with LCM Provider instead of the legacy API Provider.
        • (Due 1/16/2018)
    • Lay out a plan for establishing data in A&AI for the test VNFs/VMs.
      • Due TBD - Establish timeline on next week's call.
    • Set up a follow-up call for next Monday or Tuesday (1/15 or 1/16) to establish a clear environment approach, test client design, and VNF requirements, as well as a timeline with the appropriate owners.  The call will be on 1/16/2018 @ 2:00pm ET.
    • Document the test approach, including the design, environment, and expected results.
      • Due TBD - Establish timeline on next week's call.
  • Ryan Young
    • Ryan will be researching Test Clients that might support this effort.
      • (Due 1/12/2018)
    • Ryan will research the available Python modules that might aid in creating the test client. 
      • (Due 1/12/2018)
    • Ryan to lay out the overall approach (programmatically) to creating the client.
      • I.e., what is the overall methodology, what technology might be required, and how to divide the programming work between Ryan, Patrick, and Scott.
      • Due TBD - Establish timeline on next week's call.

1 Comment

  1. Great progress team! Just to clarify, at the component level, we are committing to Level 1. The assumption is that the Integration team is addressing Level 2. Also, for Level 1, it's not clear how we would measure 80% coverage, so that is not something we can commit to. That particular language is not included in the M1 release planning template, so we are just committing to this:

    • 1 – 72 hours component level soak w/random transactions

    If we have lab capacity, I would vote for a 3 node ODL cluster and 2 DBs with replication enabled.

    Also, we need to measure for memory leaks, so ensure we have the right tooling for that. JMeter has been mentioned in some meetings.