...

  • Self-release process had a few hiccups but seems better now
  • Gerrit/Jenkins/Jira stability issues are still occurring sporadically
  • "One size fits all" relman JIRA tickets.  Should we have different tickets for different projects? Is the extra effort worthwhile?
  • Should we track "planned" vs. "actuals" so that we have more data on where we have schedule problems?  This would provide a source of data for retrospectives (see the sketch below).
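
As a rough illustration of what "planned" vs. "actuals" tracking could feed into, here is a minimal Python sketch that computes the schedule slip per milestone so retrospectives have concrete numbers to look at. The milestone names and dates are made up for the example and are not the real release calendar.

```python
from datetime import date

# Hypothetical planned vs. actual milestone dates (illustrative only).
milestones = {
    "M1 planning":      (date(2020, 2, 13), date(2020, 2, 13)),
    "M2/M3 api freeze": (date(2020, 3, 12), date(2020, 3, 19)),
    "M4 code freeze":   (date(2020, 4, 23), date(2020, 5, 7)),
    "RC0":              (date(2020, 5, 14), date(2020, 5, 28)),
}

for name, (planned, actual) in milestones.items():
    slip = (actual - planned).days
    print(f"{name:18s} planned {planned}  actual {actual}  slip {slip:+d} days")
```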

Product Creation

  • Job Deployer for OOM (and now SO) was a great CI improvement.  There was a noticeable impact on merge reviews when the Job Deployer was offline due to lab issues.
  • Addition of Azure public cloud resources is helping with the verify job load.
  • Need to continue adding more projects to the CI pipeline, with tests targeted at the specific project (e.g., instantiateVFWCL, ./vcpe.py infra, rescustservice for SO)

...

  • Adding Orange and Ericsson Labs to installation testing was a good change. More  labs coming on board for this.
    • Vijay - need more clarity on how participants may use the labs
  • Still some issues due to lab capabilities; it seems better, but some slowness still occurs and is hard to troubleshoot (infrastructure still has a significant impact)
  • CSIT refactoring provided more clarity (some tests were running on very old versions and in some cases had not been maintained since Casablanca); moreover, the teams were not notified of errors (changed in Frankfurt)
  • Still room for improvement
    • Robot healthchecks do not have the same level of maturity from one component to another - some still PASS even when the component is clearly not working as expected (they just check that a web server is answering, without really checking the component's features); good examples should be promoted as best practices (see the sketch after this list)
    • CSIT tests are more functional tests (which is good); integration tests in a target deployment (using OOM) should be possible by extending the gating to the different components (but resources are needed)
    • Still lots of manual processing to deal with the use cases - no programmatic way to verify all the release use cases on any ONAP solution
    • Hard to get a good view of the real coverage in terms of APIs/components - the daily chain mainly used VNF-API, for instance; there are no end-to-end automated tests dealing with Policy/DCAE
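
To illustrate the healthcheck point above, here is a minimal Python sketch of a check that goes beyond "is a web server answering": it verifies the HTTP endpoint and then exercises one real feature of the component and validates the response. The URL, endpoint, and payload are hypothetical placeholders, not any specific ONAP component's API.

```python
import sys
import requests

BASE_URL = "http://example-component:8080"   # hypothetical component endpoint

def healthcheck() -> bool:
    # Shallow check: the web server answers at all.
    if requests.get(f"{BASE_URL}/healthcheck", timeout=10).status_code != 200:
        return False

    # Deeper check: exercise one real feature and validate the response content,
    # so a component that is "up" but not functional still fails the check.
    resp = requests.post(
        f"{BASE_URL}/api/v1/parse",              # hypothetical functional endpoint
        json={"input": "sample payload"},
        timeout=30,
    )
    if resp.status_code != 200:
        return False
    body = resp.json()
    return body.get("status") == "OK" and "result" in body

if __name__ == "__main__":
    sys.exit(0 if healthcheck() else 1)
```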

ONAP Project Oversight

  • Jira was an improvement over wiki pages for milestone tracking, but it still seems onerous on the PTLs
    • See the "one size fits all" comment under the release process. We can have different tickets for different projects, but this is much more work to support.
  • Need more consideration of test platforms, e.g., which version of OpenStack, k8s, etc.  EUAG input?  Something for consideration by the TSC?
  • Need automated test result tracking for use cases (partially done in El Alto through the first PoC leveraging xtesting - a test DB was used to collect results from E/// and Orange labs); see the sketch after this list
  • Planning failed to take into consideration summer holidays
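
For the automated test result tracking mentioned above, the sketch below shows what the collection side could look like in practice: pushing one test result to a TestAPI-style result database over HTTP, as an xtesting-based PoC would. The URL and field values are placeholders assumed for illustration, not the actual PoC configuration.

```python
import requests

RESULT_API = "http://testresults.example.org/api/v1/results"   # placeholder result DB endpoint

# A single test result, in the flat JSON shape a TestAPI-style collector typically expects.
result = {
    "project_name": "integration",
    "case_name": "basic_vm",          # hypothetical use-case test name
    "pod_name": "orange-lab-pod1",    # which lab produced the result
    "installer": "oom",
    "version": "frankfurt",
    "criteria": "PASS",
    "start_date": "2020-05-01 10:00",
    "stop_date": "2020-05-01 10:20",
    "details": {"duration_s": 1200},
}

resp = requests.post(RESULT_API, json=result, timeout=30)
resp.raise_for_status()
print("result stored:", resp.json())
```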

Others

  • PTL meetings seem more like tracking calls. Might want to consider a PTL committee that would run the PTL meetings.
    • meeting format changed in October.  Better now?
  • SSL Rocket chat seemed to work for folks - need to consider moving this to a supported solution.
  • Rocket chat private server ACL issue
    • onapci.org access to Jenkins conflicts with Rocket chat since they share the same gateway
    • IP ACLs that blocked some high-volume downloads of logs from Jenkins also blocked access to Rocket chat for some proxies

...