Description

Motivation

The most obvious solution for taking full advantage of CSIT is to add the test cases to the same repository as the functionalities they are testing (instead of keeping them in a separate, centralized CSIT repository as we currently do). This would have the following benefits:

  • CSIT could be triggered by any commit to the project repo
  • CSIT tests the code (or specifically, docker images that have been built) from the committed branch
  • CSIT could have a vote on the commit based on the result of the test run
  • If the implementation changes require changes in CSIT tests (to cover new functionality or to pass in the first place), that could be handled within the same commit
  • Ideally, local verification would become less complex (no need to work between the CSIT repo and project repo)
  • No need for the integration team to merge any changes related to your project

...

  • CSIT suites that test components from multiple project repositories at the same time 
    • such CSIT tests may have to be separated using additional simulators, or
    • project repository structures themselves may have to be reconsidered, or
    • the possibility of combining branches from multiple repositories under the same commit needs to be provided (if gerrit allows?)
    • In any case, the division between the images under test and images that are just necessary dependencies should be clearly made and documented
      • the images under test should be coming from the commit branch
      • in the case of necessary dependencies it must be decided whether they should be provided as simulators or real components
        • if provided as real components, they should be referred to with a fixed, unchanging version number and should be stable and mature enough to develop on
          • ONAP's (unintentional?) practice of allowing the same versioned image to be changed might prove problematic
      • project-specific simulators are a trivial case from the dependency-handling point of view, but if common simulators are in use, dependency considerations for them are the same as with real components
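The division described above between images under test and pinned dependencies could be sketched in a CSIT setup script roughly as follows. All registries, image names and versions here are invented for illustration, and the actual docker run commands are shown only as comments:

```shell
#!/bin/sh
# Sketch (invented names): compose image references so that the image under
# test follows the commit branch while dependencies stay pinned.
resolve_image() {
    # resolve_image <registry> <name> <tag> -> full image reference
    printf '%s/onap/%s:%s\n' "$1" "$2" "$3"
}

# Image under test: tag comes from the triggering verification job
IMAGE_UNDER_TEST=$(resolve_image nexus3.onap.org:10003 example-component \
    "${UNIQUE_DOCKER_TAG:-latest}")

# Real-component dependency: fixed, unchanging version number
DEPENDENCY_IMAGE=$(resolve_image nexus3.onap.org:10001 example-dependency 1.2.3)

echo "under test: ${IMAGE_UNDER_TEST}"
echo "dependency: ${DEPENDENCY_IMAGE}"
# docker run -d --name dependency "${DEPENDENCY_IMAGE}"
# docker run -d --name under-test "${IMAGE_UNDER_TEST}"
```

The point of the sketch is only the separation: the dependency reference never changes between runs, while the image under test is selected by whatever tag the verification job passes in.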
  • Jenkins templates may have to be redesigned to support a unified approach for triggering review-branch-specific artifact and docker builds, CSIT execution and the voting chain as part of review verification
    • What is the most elegant and effortless solution for ensuring that the docker images built from commit branch are taken in use in the CSIT (both in Jenkins and in local development environment)?
    • The redesign should also allow testing locally built docker images in local environment with CSIT as effortlessly as possible
  • CSITs would become blockers for merging code
    • local pre-commit verification should be supported better by common CSIT tools
    • are all projects and suites mature enough to deal with that?

  • Docker image production practices will have to be unified (see Docker Image Build Guidelines, Independent Versioning and Release Process and Release Versioning Strategy)
  • To which extent are the current guidelines actually followed?
  • Are the guidelines good enough for CSIT development as they are now if they are followed?

Technical details to be decided

Technical decisions

  • New templates have been introduced for all verification steps (see Project-specific CSIT structure) while all the existing ones have been left untouched
    • review-verification template
      • triggered by review commit
      • has verify vote on the review
      • should trigger artifact build (Sonar analysis, code coverage etc.)
        • this is not introduced yet in the verification template,
        • there are also no suitable templates for the artifact builds themselves (separate templates per "artifact-type" will probably be needed)
      • triggers docker image build - different types of docker builds each require their own templates (identified and separated by "artifact-type"), and so far we have them for
        • maven docker build
        • golang docker build
      • triggers CSIT execution that tests the produced docker images
        • common project-csit template for all CSIT jobs
    • merge-verification template
      • triggered by merge
      • should trigger artifact build in the same way as review-verification
      • triggers docker image build in the same way as review-verification
      • triggers CSIT execution that tests the produced docker images in the same way as review-verification
      • triggers job for tagging the docker images verified by CSIT with STAGING
        • should push them to ONAP staging Nexus3 (this is implemented per project, and all the existing ones are still pushing to ONAP snapshot Nexus3)
    • Specifying exact docker image versions to be tested by CSIT has been solved by using a new unique docker tag
      • this is provided by UNIQUE_DOCKER_TAG Jenkins environment parameter, which is, in practice, BUILD_ID of the triggering verification job
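The tag flow decided above could be sketched as follows; the image name is invented, and only the BUILD_ID-based derivation of UNIQUE_DOCKER_TAG comes from the decision itself:

```shell
#!/bin/sh
# Sketch: derive the unique tag from the triggering job's BUILD_ID.
# The fallbacks exist only so the snippet also runs outside Jenkins.
BUILD_ID="${BUILD_ID:-local}"
UNIQUE_DOCKER_TAG="${UNIQUE_DOCKER_TAG:-${BUILD_ID}}"

# Docker build job side: tag the freshly built image uniquely
#   docker tag onap/example-component:latest \
#       onap/example-component:"${UNIQUE_DOCKER_TAG}"

# CSIT job side: the same parameter selects exactly that image
echo "testing onap/example-component:${UNIQUE_DOCKER_TAG}"
```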
  • Should we keep separate docker build and CSIT jobs and just chain them into review verification, or should we try to incorporate docker building and CSIT execution into existing review jobs?
    • Reusing existing jobs and chaining them would require some docker image tag tuning to make sure CSIT tests pick up the exact image that was produced by preceding docker build job 
    • Either way, JJB templates will have to be touched
    • How should the following issue be solved?
      • give unique tag to the docker image to be tested in review and pass it to CSIT job
        • what should the tag be?
          • timestamp (from maven.build.timestamp) already seems to be used, but how to extract it?
          • Jenkins build identifier? That can be determined from the triggered CSIT job, and it would also give a human reader a direct way to find out afterwards where the image came from (I'm intending to use this in my initial PoC with CCSDK)
          • gerrit commit id that triggered the job?
          • sha-256 of the docker image? 
        • how to pass the parameters? 
          • triggered jenkins build identifier can be found from ${BUILD_URL}/api/json (if triggered at all)
          • reverse trigger mechanism used as the basis of current trigger_jobs doesn't seem to support parameter passing?
          • file-based or some other custom mechanism?
          • replace reverse trigger mechanism with normal trigger with parameters (i.e. define the trigger in the triggering job instead in the triggered job)?
      • Do we need new docker image job templates for in-review docker builds or can the existing ones be reused somehow?
      • Is it possible to chain triggered jobs and let them all vote on the original review or does it need an umbrella job?
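As a sketch of the ${BUILD_URL}/api/json option mentioned above, the triggered build number could be extracted from the upstream build's JSON API. The sample response below is hand-written (job name invented); the triggeredBuilds field is, as far as I know, exposed by the parameterized-trigger plugin, so treat that as an assumption to verify:

```shell
#!/bin/sh
# Sketch: parse the triggered downstream build number out of the upstream
# build's JSON API. In Jenkins the response would come from:
#   curl -s "${BUILD_URL}/api/json"
api_response='{"actions":[{"triggeredBuilds":[{"number":17,"url":"https://jenkins.example.org/job/example-csit/17/"}]}]}'

# Pull the first "number" field out of the triggeredBuilds entry
triggered_build=$(printf '%s' "$api_response" \
    | sed -n 's/.*"number":\([0-9]*\).*/\1/p')
echo "triggered CSIT build: ${triggered_build}"
```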
  • Common CSIT scripts (run-csit.sh etc.) in the CSIT repo and related procedures (setup, tests, teardown and related result collection) are still maintained as the basis of project-specific test execution
    • the decision to use any of these is left to projects
      • the existing examples so far (music and ccsdk/distribution) clone integration/csit and use the common scripts and corresponding directory structure
      • any new project that uses the same approach should ensure that the cloned integration/csit branch corresponds to the branch of the project whenever possible
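The branch-matching rule above can be sketched as a small helper; the function name and the branch list are invented, and in Jenkins the actually available branches would come from git ls-remote against integration/csit:

```shell
#!/bin/sh
# pick_csit_branch <candidate> <available branches...>: use the candidate
# branch when integration/csit has it, otherwise fall back to master.
pick_csit_branch() {
    candidate="$1"; shift
    for b in "$@"; do
        if [ "$b" = "$candidate" ]; then
            echo "$candidate"
            return 0
        fi
    done
    echo master
}

# Availability would normally be checked with:
#   git ls-remote --heads https://gerrit.onap.org/r/integration/csit
branch=$(pick_csit_branch guilin frankfurt guilin master)
echo "git clone -b ${branch} https://gerrit.onap.org/r/integration/csit"
```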
  • Execution of CSIT tests and incorporating locally built test images should be made as easy as possible following common guidelines
    • Local docker build instructions should be maintained and easily accessible (for example, as part of CSIT README.md in the project repository)
    • Test environment setup should be as automated as possible (project-specific dependencies should be handled by the setup scripts)
    • Need for specific environment variables expected by the test suite (for example, GERRIT_BRANCH) should be minimized and preferably avoided altogether
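One way to minimize required environment variables, sketched under the assumption that a suite's setup script may supply its own defaults (the default values here are invented):

```shell
#!/bin/sh
# Sketch: supply defaults instead of requiring the caller to export variables,
# so that a plain local "./setup.sh" run works out of the box.
: "${GERRIT_BRANCH:=master}"        # used only if the caller did not set it
: "${UNIQUE_DOCKER_TAG:=latest}"

echo "branch=${GERRIT_BRANCH} tag=${UNIQUE_DOCKER_TAG}"
```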
  • What is the significance of Java/Python/etc. SNAPSHOT/STAGING/RELEASE artifacts in Nexus? Do they have any actual role, or are the docker builds always creating those artifacts for themselves on the fly (against current Docker Image Build Guidelines)?
    • Maven repository is empty
    • NuGet repository (whatever that is) has various rather old Mongo nuget archives that don't seem to be produced by anything in ONAP?
    • PyPI repository has a lot of 3rd party Python Wheel packages and two relatively recent tar packages from ONAP (onap_dcae_cbs_docker_client-1.0.1.tar.gz and onap-dcae-dcaepolicy-lib-2.4.1.tar.gz)
    • npm repository has various versions of clamp-ui tar packages 
    • these repositories are in the old ONAP Nexus
  • Artifact builds should take care of code coverage/Sonar analysis
    • apparently there are currently no templates dealing with Sonar (instead, all the projects have their custom Sonar JJB definitions)
    • all the Sonar runs are on a daily schedule instead of being triggered
    • are projects mature enough to define the various violations as review blockers?
      • can we give each project the power to define their own quality gates? 

Project status and readiness at the end of Guilin

...