

ONAP CSIT (Continuous System and Integration Testing) tests are defined as what we call "test plans".  A test plan is meant to test a specific feature or functionality comprehensively, and each test plan runs one or more test suites that are written using the Robot Framework.  A particular test suite (e.g. to test a core module) can be included by multiple test plans.

Test plans and test suites are source controlled in the integration repo, under its own Git repository located at integration/csit.git.  To get started and follow along, please clone the integration repo to your local environment.  If you want to run the CSIT test suites locally, you will also need to have Docker and Python installed.

The key contents of integration/test/csit/ are:

  • the top-level run script, which executes a particular CSIT test plan
  • plans/: contains the definitions of what is invoked by each test plan
  • tests/: contains the test suites written using the Robot Framework 
  • scripts/: contains various shared shell scripts that support the test plans

Video Tutorial

System Prerequisites

Make sure that your environment has the following packages installed:

sudo apt install python-pip virtualenv unzip sshuttle netcat libffi-dev libssl-dev docker-compose
sudo pip install robotframework
sudo pip install -U requests
sudo pip install -U robotframework-requests
sudo pip install -U robotframework-httplibrary
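Before running a plan locally, it can help to confirm that the prerequisite tools are actually on the PATH.  A minimal sketch (the check_tools helper and the tool list are illustrative, not part of the integration repo):

```shell
# check_tools: print the names of any requested tools missing from the PATH.
# Hypothetical helper for local sanity checking, not part of the repo.
check_tools() {
  missing=""
  for tool in "$@"; do
    command -v "$tool" >/dev/null 2>&1 || missing="${missing} ${tool}"
  done
  echo "${missing}"
}

# Example: warn if the CSIT prerequisites are not installed yet.
missing=$(check_tools docker docker-compose virtualenv robot)
[ -z "$missing" ] || echo "Please install:${missing}"
```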

Setting Up the Test

Test plans are defined under the plans/ directory.

The directory layout under the plans/ directory is <project>/<functionality>/.  A basic sample CSIT test plan is provided in plans/integration/functionality1/.  To execute it, run the command: 

./ plans/integration/functionality1/

The run script will automatically set up a Robot environment for you and execute the test plan defined in the plans/integration/functionality1/ directory.

If you look at the contents of plans/integration/functionality1/, you will see the following:

  • the setup shell script, which starts all the Docker containers required by this test plan and passes the necessary environment variables to Robot.
  • testplan.txt: the text file listing, in order, the test suites that should be executed by Robot for this particular test plan.  This allows you to refactor test suites to be reused by multiple test plans as necessary.
  • the teardown shell script, which kills all the Docker containers that were started by this test plan.
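As a hedged sketch of what such a pair of scripts might contain (the image name, container name, and Robot variable below are made-up assumptions, not the actual plan's contents):

```shell
#!/bin/bash
# Hypothetical setup/teardown pair for a test plan; names are illustrative.

setup() {
  # Start the containers the suites need.
  docker run -d --name plan-sim my-sim-image
  # Capture the container's address and hand it to Robot; the run script
  # forwards ROBOT_VARIABLES to the robot command line.
  SIM_IP=$(docker inspect -f '{{.NetworkSettings.IPAddress}}' plan-sim)
  ROBOT_VARIABLES="-v SIM_IP:${SIM_IP}"
}

teardown() {
  # Kill everything the plan started so reruns begin from a clean slate.
  docker rm -f plan-sim
}
```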

As an example, the sample plans/integration/functionality1/testplan.txt contains the following:

# Test suites are relative paths under [integration.git]/test/csit/tests/.
# Place the suites in run order.
integration/suite1
integration/suite2

This means that this test plan will run these two Robot test suites, in the order listed above.

When the run script runs a test plan, it will first execute the plan's setup script to start any Docker containers and set up environment variables.  Then, it will run the test suites listed in testplan.txt, in sequence.  Finally, it will run the plan's teardown script to terminate all the Docker containers.
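The suite-selection step can be sketched in a few lines of shell.  This only illustrates the parsing behavior (the plan contents here are a toy example written to a temp directory), not the actual run script:

```shell
#!/bin/sh
# Build a toy plan directory with a testplan.txt in the documented format.
PLAN_DIR=$(mktemp -d)
cat > "${PLAN_DIR}/testplan.txt" <<'EOF'
# Test suites are relative paths under tests/.
# Place the suites in run order.
integration/suite1
integration/suite2
EOF

# Read the plan: skip comments and blank lines, keep the suites in order.
SUITES=""
while read -r line; do
  case "$line" in
    "#"*|"") continue ;;
  esac
  SUITES="${SUITES} tests/${line}"
done < "${PLAN_DIR}/testplan.txt"

# The real run script would invoke Robot here, between setup and teardown.
echo "would run: robot${SUITES}"
rm -rf "${PLAN_DIR}"
```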

Defining Test Suites

The Robot test suites are placed under the tests/ directory.

The directory layout under the tests/ directory is <project>/<suite>/.  So, the sample test suites can be found in tests/integration/suite1/ and tests/integration/suite2/, respectively.  The test suites are written using the Robot Framework.

We won't go into detail here about how to write Robot test suites.  You can take a look at the sample test suites in tests/integration/suite1/ and tests/integration/suite2/ to see some sample test cases; for example, they demonstrate retrieving a URL and checking its return code.

Refactor Shared Shell Scripts

The scripts/ directory is where you place any shell scripts that are useful to multiple test plans.  For example, the shared helper scripts used by all test plans are placed here, and you are free to add any commonly useful shell scripts of your own.

Setting Up a Jenkins CSIT Job

Once you have your test plan created, ensure that it runs successfully in your local environment, and then check it into the integration repo.

After this, it is time to set up a Jenkins CSIT Job so that your test plan will be automatically run by Jenkins daily.

A Jenkins CSIT job is already defined for the sample test plan above.

Please refer to Jenkins -> Configuring Jenkins Jobs for background information on setting up Jenkins jobs.  Roughly speaking, you need to check in a JJB YAML definition for your Jenkins job into the ci-management repo under ci-management/jjb/.

The easiest thing to do is to copy and modify the sample JJB definition of the integration-csit-functionality1 job, found at ci-management/jjb/integration/integration-csit.yaml, which contains the following:

- project:
    name: integration-csit
    jobs:
      - '{project-name}-{stream}-verify-csit-{functionality}'
      - '{project-name}-{stream}-csit-{functionality}'
    project-name: 'integration'
    stream: 'master'
    functionality:
      - 'functionality1':
          trigger_jobs:
    robot-options: ''
    branch: 'master'

Update the project-name and the functionality entry as appropriate.

The entries under trigger_jobs list the jobs whose completion will automatically trigger the execution of this particular CSIT job.  You should change this list to include the merge jobs that may potentially affect the CSIT test results of this particular test plan.


  1. Gary Wu I'm still a little confused as to what my team needs to do. Has any other project done this work yet that I can take a look at? It seems that I am modifying the integration repository with our tests. Or maybe not? But then does the Jenkins job get put in the integration repo, or within my Policy Jenkins job? I think the latter and just wanted to confirm.

  2. Pamela Dragosh Yes, the CSIT tests and scripts will go in the integration repo.  The Jenkins jobs would be defined with the project and functionality parameters set to match the directory name under integration/test/csit/plans/, i.e. integration/test/csit/plans/{project}/{functionality}/.  Typically you would use your project name in that directory path, e.g. integration/test/csit/plans/policy/<functionality X>/.

    1. Is there a way to test this before committing the test scripts to the integration repo?

      1. Yes, you should be able to run the test plan on your local environment using the run script as described above.

    2. Just one more clarification on the Jenkins job: does that change go in a certain YAML located in ci-management/jjb/policy, or get appended to ci-management/jjb/integration-csit.yaml? I think the latter...

      1. I recommend that you put it in ci-management/jjb/policy/policy-csit.yaml.

  3. Is there a way to trigger the CSIT manually? So you can test without having to do a commit to trigger it.

    1. I think if you're a committer for integration, you now have the ability to manually trigger/abort Jenkins jobs via the UI.  For example, go to the job's page, and on the left there should be a link to "Build with Parameters".

  4. In OPEN-O, there is a docker directory in csit that we can use to configure our docker images, but in ONAP I cannot find that directory. Can you tell me where to define those scripts?

  5. Thanks Gary for the video.

    Some important points from the video (but not as clear on this page) that helped my understanding:

    • intention is to test that a (new) docker image does all the things you are expecting it to do
    • Jenkins jobs are run in their own VM, which is where the Robot Framework tests will run.  That same single VM, which may be large, will also be a docker host, and any containers needed for the test should be started on that host.
    • Robot Framework terminology is "test cases" and "test suites".  A "test plan" is ONAP CSIT terminology for specifying which suites you want to run. (because some test suites may be re-usable)
    • shell scripts are used to a) spin up docker containers that your test plan needs, b) control which suites get executed, c) teardown any containers no longer needed (important for rerunning tests)
    • a handy container to know about is the Mock server, which can return predefined http end points
    • a common pattern is for scripts to capture the IP address of started containers, and pass that info back to Robot Framework
    • the output from the run script provides summary status and logs to help you distinguish application vs Robot Framework vs Docker problems.  The output also includes a reference to a (local) HTML file where test plan details are captured.
    1. Thanks for the comment.  These clarifications are exactly right.

  6. Many tests now include Library HttpLibrary.HTTP.

    Installation of this library is accomplished by:

    sudo pip install --upgrade robotframework-httplibrary

  7. As a newcomer to CSIT, after reading this, and watching the video (I agree with Dom that the video is very helpful), I find myself very unclear on what kinds of problems might be discovered by CSIT tests that would not be caught in unit testing.  The video talks about how a CSIT test can run a docker container (or more than one), how it can run mocking containers to stub out downstream API calls, how it can initialize state, how it can make API calls and check the responses, and it mentions checking the "packaging" of an image, but I don't see, either here or in the video, a clear description of when a CSIT test is called for and what it should test for that shouldn't already be checked by unit tests.  This page handles the "how," but saying, as the overview does, that "a test plan is meant to test a specific feature or functionality comprehensively" seems a lot like what unit testing does.  It is unclear about the "when" and "why" of CSIT.  I think this needs to explain, probably in the overview at the top, what kinds of defects can be found by CSIT but not in a project's unit tests (and hence what kinds of test cases should be covered by CSIT).

    Would something like, "The purpose of ONAP CSIT is not to duplicate unit testing, but to catch inter-project defects not caught in unit testing, such as where problems in a system's use of an API were masked by problems in that project's tests' mocking of the API responses," be accurate and complete, or is there some other purpose(s) for CSIT?