...

  • Self-release process had a few hiccups but seems better now → what would we like to fix/improve based on the hiccups?
  • Gerrit/Jenkins/Jira/Nexus stability issues are still occurring sporadically → link to the LFN Infrastructure Initiative (next generation of the toolchain) and the KaaS Update (TSC Call 12/12). Is there anything else?
  • "One size fits all" RELMAN JIRA tickets. Should we have different tickets for different projects? Is the extra effort worthwhile?

 → Classify projects by nature and have RELMAN tickets per category: DEV projects (code impact) such as Policy, AAI, etc.; NO-DEV projects such as doc and vnfrqs; and finally testing/deployment projects: Integration, VVP, VNFSDK, OOM

 → We invite PTLs to review the Frankfurt milestones for additional suggestion(s) - see Frankfurt Deliverables by Milestone

  • Should we track "planned" vs. "actuals" so that we have more data on where we have schedule problems?  This would provide a source of data for retrospectives.

        → OK - let's track from M1 since this is the milestone representing the Community Commitments

       → Continue to raise your risks as soon as you have identified them so we can explore whether we can mitigate them with additional support - Frankfurt Risks

  • Anything else?

Product Creation

  • Job Deployer for OOM (and now SO) was a great CI improvement. There was a noticeable impact on reviewing merges when the Job Deployer was offline due to lab issues.
  •  → Any recommendations/improvement suggestions?
  • Addition of Azure public cloud resources is helping with the verify job load.
  • Need to continue adding more projects to the CI pipeline, with tests targeted at the specific project (e.g., instantiateVFWCL, ./vcpe.py infra, rescustservice for SO); a minimal sketch of such a project-targeted check follows this list
  • → Agreed - current activities:
    • #1 OOM Gating (Integration Team)
    • #2 Introduction of ONAP "Use Case/Requirement" Integration Lead - first experimentation with Frankfurt
    • #3 Continue to meet Test Coverage Target (Project Team)
    • #4 Automate your CSIT and/or pairwise testing (Project Team)
    • #5 Anything else?
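
To make the project-targeted test idea above concrete, here is a minimal sketch in Python of a post-deployment smoke check that a project-specific CI job could run. It is not the Integration team's actual scripts: the base URL and both API paths are illustrative assumptions.

    #!/usr/bin/env python3
    """Minimal sketch of a project-targeted post-deployment smoke check.
    The base URL and paths below are illustrative assumptions, not real ONAP endpoints."""
    import sys
    import requests

    BASE_URL = "http://so.onap.example:8080"  # hypothetical component endpoint

    def check(path, expected_status=200):
        """GET BASE_URL + path and report whether it answers with the expected status."""
        try:
            resp = requests.get(BASE_URL + path, timeout=30)
        except requests.RequestException as exc:
            print(f"FAIL {path}: {exc}")
            return False
        ok = resp.status_code == expected_status
        print(f"{'PASS' if ok else 'FAIL'} {path} -> {resp.status_code}")
        return ok

    if __name__ == "__main__":
        results = [
            check("/manage/health"),                  # liveness only
            check("/api/v1/orchestrationRequests"),   # assumed functional API, beyond the web server
        ]
        sys.exit(0 if all(results) else 1)

A job like this can gate a merge on the component's own API surface rather than only on the deployment completing.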

Testing Process

  • Adding Orange and Ericsson Labs to installation testing was a good change. More labs are coming on board for this.
    • Vijay - need more clarity on how participants may use the labs
    • Orange OpenLab
    • → Ericsson Lab details?
  • Still some issues due to lab capabilities; things seem better, but some slowness still occurs and is hard to troubleshoot (infra still has a significant impact)
    •  → Re-activate/Review the OpenLab Subcommittee?
    • → To be revisited based on Dev/Integration WindRiver needs considering the KaaS initiative
    •  → Suggestion to review the lab strategy, i.e., Orange for the Integration Team; WindRiver for Dev?
  • CSIT refactoring provided more clarity (some tests were running on very old versions and had sometimes not been maintained since Casablanca); moreover, the teams were not notified in case of errors (changed in Frankfurt)
  • Still room for improvement
    • Robot healthchecks do not have the same level of maturity from one component to another - some of them still PASS even when the component is clearly not working as expected (they just check that a web server is answering without really exercising the component features); good examples should be promoted as best practices - a sketch contrasting a shallow and a feature-level check follows this list
    • CSIT tests are more functional tests (which is good); integration tests in the target deployment (using OOM) should be possible by extending the gating to the different components (but resources are needed)
    • Still lots of manual processing to deal with the use cases - no programmatic way to verify all the release use cases on any ONAP solution
    • Hard to get a good view of the real coverage in terms of APIs/components - the daily chain mainly used VNF-API, for instance, and there are no end-to-end automated tests dealing with Policy/DCAE
  • Need more consideration of test platforms, e.g., which version of OpenStack, k8s, etc.  EUAG input?  Something for consideration by TSC?
    •  Should it be a new process based on a combination of security recommendations being reviewed with the PTLs (including Integration) and TSC approval?
  • Need automated test result tracking for use cases (partially done in El Alto through the first PoC leveraging xtesting - a test DB was used and collected results from the Ericsson and Orange labs); a result-reporting sketch also follows this list
  • → Any particular plan/target for Frankfurt?
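
As referenced at the Robot healthcheck bullet above, here is a minimal Python sketch contrasting the shallow anti-pattern with a feature-level check. The endpoint paths and the list-shaped response are assumptions for illustration only, not any project's actual API.

    #!/usr/bin/env python3
    """Sketch: shallow vs. feature-level healthcheck (illustrative endpoints only)."""
    import requests

    BASE = "http://component.onap.example:8080"  # hypothetical component URL

    def shallow_healthcheck():
        """Passes as soon as any web server answers - the anti-pattern described above."""
        return requests.get(BASE + "/healthcheck", timeout=10).status_code == 200

    def feature_healthcheck():
        """Exercises an actual feature: the component must return usable domain data."""
        resp = requests.get(BASE + "/api/v1/models", timeout=10)  # assumed feature API
        if resp.status_code != 200:
            return False
        body = resp.json()
        # Pass only if the component actually served non-empty domain data.
        return isinstance(body, list) and len(body) > 0

    if __name__ == "__main__":
        print("shallow check :", shallow_healthcheck())
        print("feature check :", feature_healthcheck())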
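
And, as referenced at the automated test result tracking bullet, a sketch of reporting one test result to a central results database over REST, loosely modelled on the xtesting/test-DB pattern mentioned above. The URL and payload field names are assumptions, not a confirmed ONAP endpoint.

    #!/usr/bin/env python3
    """Sketch: report one test result to a central results DB over REST.
    The URL and payload fields are assumptions modelled on the xtesting/test-DB
    pattern, not a confirmed ONAP API."""
    import datetime
    import requests

    RESULT_API = "http://testresults.onap.example/api/v1/results"  # hypothetical endpoint

    def push_result(project, case, pod, version, criteria, details=None):
        """POST a single test result; 'criteria' is PASS or FAIL."""
        payload = {
            "project_name": project,
            "case_name": case,
            "pod_name": pod,        # which lab/pod produced the result
            "version": version,     # release branch under test
            "criteria": criteria,
            "start_date": datetime.datetime.utcnow().isoformat(),
            "details": details or {},
        }
        resp = requests.post(RESULT_API, json=payload, timeout=30)
        resp.raise_for_status()
        return resp.json()

    if __name__ == "__main__":
        push_result("integration", "basic_vm", "orange-pod1", "frankfurt", "PASS")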

ONAP Project Oversight

  • Jira was an improvement over wiki pages for milestone tracking, but it still seems onerous for the PTLs
    • See "one size fits all" comment under release process. We can have different tickets for different projects, but this is much more work to support.
    •  See feedback provided above
  • Planning failed to take summer holidays into consideration
    • noted 

Others

  • PTL meetings seem more like tracking calls. Might want to consider a PTL committee that would run the PTL meetings.
    • meeting format changed in October.  Better now?
  • SSL rocket chat seemed to work for folks - need to consider moving this to a supported solution.
  • Rocket chat private server ACL issue
    • onapci.org access to Jenkins is conflicting with Rocket chat since they share the same gateway
    • IP ACLs that blocked some high-volume downloads of logs from Jenkins also blocked access to Rocket chat for some proxies
    • → Shall we use Slack for Frankfurt?


...