...

  • Self-release process had a few hiccups but seems better now → what would we like to fix/improve based on those hiccups?
    • Dan Timoney suggests that new processes should be better tested before being rolled out to the community. Engage projects to help with testing.
  • gerrit/jenkins/jira/nexus stability issues still occurring sporadically → link to LFN Infrastructure Initiative (next generation of toolchain) and KaaS Update (TSC Call 12/12). Is there anything else?
  • "one size fits all" relman JIRA tickets. Should we have different tickets for different projects? Is the extra effort worthwhile?

...

 → We invite PTLs to review the Frankfurt milestones for additional suggestions - Frankfurt Deliverables by Milestone

Pamela Dragosh says that tasks need to be culled and clarified, with better documentation on how to complete them.

Form a small working group. Meet for 3-4 weeks to review and recommend changes/updates. Andy Mayer says to coordinate with subcommittees to avoid distorting the intent of tasks.

  • Should we track "planned" vs. "actuals" so that we have more data on where we have schedule problems? This would provide a source of data for retrospectives.
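If we do track this, even a minimal planned-vs-actual record per milestone gives retrospectives concrete slip data; a sketch, with illustrative (not actual Frankfurt) milestone names and dates:

```python
from datetime import date

# Hypothetical planned vs. actual milestone dates (illustrative only)
milestones = {
    "M1": {"planned": date(2020, 1, 16), "actual": date(2020, 1, 23)},
    "M2": {"planned": date(2020, 2, 13), "actual": date(2020, 2, 13)},
}

def slip_days(m):
    """Schedule slip in days: positive means the milestone was late."""
    return (m["actual"] - m["planned"]).days

for name, m in sorted(milestones.items()):
    print(f"{name}: slipped {slip_days(m)} day(s)")
```

Aggregating these records across a release would show which milestones consistently slip.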

...

       → Continue to raise your risks as soon as you have identified them so we can explore if we can mitigate them with additional support - Frankfurt Risks

...

  • Job Deployer for OOM (and now SO) was a great CI improvement. Noticeable impact on merge reviews when the Job Deployer was offline due to lab issues.
  • → Any recommendation/improvement suggestions?
  • Addition of Azure public cloud resources is helping with the verify job load.
  • Need to continue adding more projects to the CI pipeline, with tests targeted at the specific project (e.g., instantiateVFWCL, ./vcpe.py infra, rescustservice for SO)
  • → Agreed - current activities:
    • #1 OOM Gating (Integration Team)
    • #2 Introduction of ONAP "Use Case/Requirement" Integration Lead - first experimentation with Frankfurt
    • #3 Continue to meet Test Coverage Target (Project Team)
    • #4 Automate your CSIT and/or pairwise testing (Project team)
    • #5 Anything else?

...

  • Adding Orange and Ericsson Labs to installation testing was a good change. More labs coming on board for this.
    • Vijay - need more clarity on how participants may use the labs
    • Orange OpenLab
    • → Ericsson Lab details (not an open lab - results shared with community)?
  • Still some issues due to lab capabilities; things seem better, but some slowness still occurs and is hard to troubleshoot (infra still has a significant impact)
    • → Re-activate/review the OpenLab Subcommittee? (Morgan Richomme says this is already initiated)
    • → To be revisited based on Dev/Integration WindRiver needs considering the KaaS initiative
    • → Suggestion to review the lab strategy, i.e., Orange for the Integration Team; WindRiver for Dev?
  • CSIT refactoring provided more clarity (some tests were running on very old versions, sometimes not maintained since Casablanca; moreover, teams were not notified in case of errors, which changed in Frankfurt) - no action
  • Still space for improvements
    • Robot healthchecks do not have the same level of maturity from one component to another - some still PASS even when the component is clearly not working as expected (they just check that a web server is answering without really exercising the component's features); good examples should be promoted as best practices
      • Morgan Richomme says this was discussed during integration meeting and will be the subject of a session at DDF in Prague.
    • CSIT tests are more functional tests (which is good); integration tests in the target deployment (using OOM) should be possible by extending the gating to the different components (but resources are needed)
      • Morgan Richomme - project functional testing should be done at project scope and should not rely on integration
    • Still lots of manual processing to deal with the use cases - no programmatic way to verify all the release use cases on any ONAP solution
      • Morgan Richomme - pair-wise testing is primarily manual. Making progress on automation in each release.
    • Hard to get a good view of real coverage in terms of APIs/components - the daily chain mainly used VNF-API, for instance, and there are no end-to-end automated tests dealing with Policy/DCAE
  • Need more consideration of test platforms, e.g., which version of OpenStack, k8s, etc.  EUAG input?  Something for consideration by TSC?
    • Should it be a new process based on a combination of security recommendations being reviewed with the PTLs (including Integration) and TSC approval?
  • Need automated test result tracking for use cases (partially done in El Alto through the first PoC leveraging xtesting - the test DB was used and collected results from E/// and Orange labs)
  • → Any particular plan/target for Frankfurt?
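The healthcheck maturity point above (a PASS that only proves a web server answers) can be made concrete; a minimal sketch, where the `registered_services` field is a hypothetical component-specific indicator, not an actual ONAP API:

```python
def is_meaningful_health(status_code, payload):
    """Shallow checks pass on any HTTP 200; a meaningful healthcheck
    also verifies a component-specific functional field in the body."""
    if status_code != 200:
        return False
    # Hypothetical functional criterion: the component must report
    # at least one registered service, not merely serve HTTP.
    return payload.get("registered_services", 0) > 0

# A shallow "is the web server answering?" check would PASS both cases
# below; the meaningful check only passes the second.
print(is_meaningful_health(200, {}))                          # False
print(is_meaningful_health(200, {"registered_services": 3}))  # True
```

Promoting this pattern as a best practice would make healthcheck results comparable across components.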

...