...

  • Should we keep separate docker build and CSIT jobs and just chain them into review verification, or should we try to incorporate docker building and CSIT execution into existing review jobs?
    • Reusing existing jobs and chaining them would require some docker image tag tuning to make sure the CSIT tests pick up the exact image produced by the preceding docker build job
    • Either way, JJB templates will have to be touched
    • How should the following issue be solved?
      • give unique tag to the docker image to be tested in review and pass it to CSIT job
        • what should the tag be?
          • timestamp (from maven.build.timestamp) already seems to be used, but how to extract it?
          • jenkins build identifier? That can be determined from the triggered CSIT job, and it would also give a human reader a direct way to find out afterwards where the image came from (I'm intending to use this in my initial PoC with CCSDK)
          • gerrit commit id that triggered the job?
          • sha-256 of the docker image? 
        • how to pass the parameters? 
          • triggered jenkins build identifier can be found from ${BUILD_URL}/api/json (if triggered at all)
          • the reverse trigger mechanism used as the basis of the current trigger_jobs doesn't seem to support parameter passing?
          • file-based or some other custom mechanism?
          • replace reverse trigger mechanism with normal trigger with parameters (i.e. define the trigger in the triggering job instead in the triggered job)?
      • Do we need new docker image job templates for in-review docker builds or can the existing ones be reused somehow?
      • Is it possible to chain triggered jobs and let them all vote on the original review or does it need an umbrella job?
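The tagging and parameter-passing options above can be sketched as follows. This is a minimal sketch, not an established convention: the tag is derived from the Jenkins build identifier (one of the candidates listed above), the image name and the DOCKER_IMAGE_TAG parameter name are illustrative, and the Jenkins API response is stubbed with a literal so the parsing step can be shown without a live Jenkins.

```shell
#!/bin/sh
# Sketch (assumption): derive a unique review tag from the Jenkins build
# identifier, with a UTC-timestamp fallback for local runs.
unique_tag() {
  if [ -n "${BUILD_NUMBER}" ]; then
    printf 'review-%s\n' "${BUILD_NUMBER}"
  else
    date -u +'review-%Y%m%dT%H%M%SZ'
  fi
}

BUILD_NUMBER=42   # would be set by Jenkins; hard-coded here for illustration
TAG="$(unique_tag)"
echo "would run: docker build -t onap/example:${TAG} ."
echo "would pass to the CSIT job: DOCKER_IMAGE_TAG=${TAG}"

# Finding the triggered downstream build afterwards: the Parameterized
# Trigger plugin appears to export it as "triggeredBuilds" in
# ${BUILD_URL}/api/json. Stubbed response instead of a real curl call:
api_json='{"actions":[{"triggeredBuilds":[{"number":17,"url":"https://jenkins.example.org/job/example-csit/17/"}]}]}'
triggered=$(printf '%s' "$api_json" | python3 -c 'import json,sys
for a in json.load(sys.stdin).get("actions", []):
    for b in a.get("triggeredBuilds", []):
        print(b["url"])')
echo "triggered CSIT build: ${triggered}"
```

In a real job the stubbed JSON would come from `curl -s "${BUILD_URL}/api/json"`, and the tag would be handed to the downstream job as a predefined build parameter.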
  • Should we still keep the common CSIT scripts (run-csit.sh etc.) in the CSIT repo, and the related procedures (setup, tests, teardown and result collection) as the basis of project-specific test execution?
  • Executing CSIT tests and incorporating locally built test images should be made as easy as possible, following common guidelines
    • Setting up the testing environment (project-specific dependencies should be handled by the setup scripts)
    • Specific environment variables expected by the test suite (like GERRIT_BRANCH)
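A hedged sketch of what that local invocation could look like: GERRIT_BRANCH is named in the text above, but DOCKER_IMAGE_TAG and the run-csit.sh plan path are hypothetical names introduced here for illustration.

```shell
#!/bin/sh
# Sketch of the environment a local CSIT run might expect; only GERRIT_BRANCH
# comes from the text, the other variable names are assumptions.
export GERRIT_BRANCH="master"
export WORKSPACE="${PWD}"
export DOCKER_IMAGE_TAG="review-42"   # point setup scripts at a locally built image

# The actual entry point would be something like:
#   ./run-csit.sh plans/<project>/<testplan>
# Echoed here instead of executed:
echo "run-csit.sh would test image tag ${DOCKER_IMAGE_TAG} on branch ${GERRIT_BRANCH}"
```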
  • What is the significance of Java/Python/etc. SNAPSHOT/STAGING/RELEASE artifacts in Nexus? Do they have any actual role, or do the docker builds always create those artifacts for themselves on the fly (against the current Docker Image Build Guidelines)?
    • The Maven repository is empty
    • The NuGet repository (whatever that is) has various rather old Mongo NuGet packages that don't seem to be produced by anything in ONAP?
    • The PyPI repository has a lot of 3rd-party Python wheel packages and two relatively recent tar packages from ONAP (onap_dcae_cbs_docker_client-1.0.1.tar.gz and onap-dcae-dcaepolicy-lib-2.4.1.tar.gz)
    • The npm repository has various versions of clamp-ui tar packages
  • What about code coverage/Sonar? Apparently there are currently no templates dealing with Sonar (instead, each project has its own custom Sonar JJB definition), and all Sonar scans run on a daily schedule instead of being triggered
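If Sonar were triggered per change instead of running on a daily schedule, the analysis could be folded into the verify step. A sketch for a Maven-based project, with the project key purely illustrative; the command is echoed rather than executed, since it needs a Maven checkout and a Sonar server.

```shell
#!/bin/sh
# Hypothetical per-review Sonar invocation for a Maven project (sketch only;
# the project key is an assumption, not a real ONAP project).
SONAR_CMD="mvn clean verify sonar:sonar -Dsonar.projectKey=org.onap.example"
echo "would run: ${SONAR_CMD}"
```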

Project status and readiness at the end of Guilin

...