
This will include, for example, credential usage, performance, resiliency, and testing requirements.

    • Support ONAP platform upgrade

      • How can we update a running ONAP instance in the field? Should this be part of the OOM scope, with help from the Architecture Subcommittee?
    • Support ONAP reliability

      • We have no well-defined approach to fault tolerance at either the component or the system level. Should this be part of the OOM scope, with help from the Architecture Subcommittee?
      • An instance of the complete ONAP platform shall be configurable to be part of an "N+1" group of instances that operate in a specific, coordinated, controlled fail-over manner. This coordinated fail-over enables the operator to select the idle "+1" instance in the group to functionally replace any of the operating, healthy "N" instances of the group on demand, within a specified amount of time. This approach shall be used both as a common method for surviving disasters and as the common approach to software upgrade distribution. (A sketch of this fail-over selection appears after this list.)
    • Support ONAP scalability

      • How do we scale ONAP? Should this be part of the OOM scope, with help from the Architecture Subcommittee?

    • Support ONAP monitoring

      • Common logging formats and approaches need to be supported, and automated cross-component monitoring tools should be developed or provided. (A sketch of a common log format appears after this list.)

    • Support a Common ONAP security framework for authorization and authentication

    • API Versioning

      • What are the detailed rules of API deprecation? We did commit at the NJ F2F that APIs can’t simply change between releases, but we never agreed on the actual process and timeline surrounding API changes. (A sketch of one possible deprecation mechanism appears after this list.)

    • Support ONAP Portal - UI enhancements

    • Software Quality Maturity

      • The ONAP platform is composed of many components, each exhibiting varying degrees of software maturity. A proposed requirement: before software updates for a particular component are accepted as part of the complete ONAP solution, it must be shown that the component's software meets or exceeds specific quantitative measures of software reliability. In other words, "Software Releasability Engineering" (SRE) analysis data should be computed and disclosed for each component. This data is computed by collecting time-series data from the testing process (testing the published use cases) and fitting test failure counts to a family of curves known to track defect density over the software lifetime. Information on this type of quantitative analysis is here: https://drive.google.com/open?id=0By_UqQM0rEuBei10TVdTOU5CalU  A tool that may be used to compute this SRE analysis is open sourced and available entirely in a Docker container here: https://cloud.docker.com/swarm/ehwest/repository/docker/ehwest/sre/general  Numerous papers in the literature explain the use of Markov Chain Monte Carlo methods to fit test data to the target curve. (An illustrative curve-fitting sketch appears after this list.)
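
The following Python sketch illustrates the "N+1" fail-over selection described in the reliability item above: a group tracks N active instances plus one idle standby, and promotes the standby to replace a failed instance within a deadline. The class, instance names, and promote() callback are hypothetical; no existing ONAP or OOM API is implied.

```python
# Hypothetical sketch of the "N+1" fail-over selection; not an ONAP/OOM API.
import time

class NPlusOneGroup:
    """Tracks N active ONAP instances plus one idle standby."""

    def __init__(self, active, standby, failover_deadline_s=300):
        self.active = set(active)            # healthy, operating "N" instances
        self.standby = standby               # the idle "+1" instance
        self.failover_deadline_s = failover_deadline_s

    def fail_over(self, failed_instance, promote):
        """Replace a failed instance with the standby within the deadline.

        `promote` is a caller-supplied callable that makes the standby
        functionally equivalent to the failed instance (config, data, DNS).
        """
        started = time.monotonic()
        self.active.discard(failed_instance)
        promote(self.standby, failed_instance)
        self.active.add(self.standby)
        self.standby = None                  # group runs without a spare until restored
        elapsed = time.monotonic() - started
        if elapsed > self.failover_deadline_s:
            raise RuntimeError(f"fail-over exceeded deadline: {elapsed:.0f}s")
        return elapsed

# Example: promote the idle "onap-spare" in place of a failed "onap-east".
group = NPlusOneGroup(active=["onap-east", "onap-west"], standby="onap-spare")
group.fail_over("onap-east", promote=lambda spare, failed: None)
```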
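For the monitoring item, below is a minimal sketch of a common, machine-parseable log format that cross-component monitoring tools could consume, assuming JSON lines; the field names are illustrative and not the agreed ONAP common logging specification.

```python
# Illustrative JSON line format; field names are assumptions, not the ONAP spec.
import json
import logging

class JsonFormatter(logging.Formatter):
    def format(self, record):
        return json.dumps({
            "timestamp": self.formatTime(record, "%Y-%m-%dT%H:%M:%S"),
            "service": record.name,          # emitting ONAP component
            "severity": record.levelname,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
log = logging.getLogger("aai")               # e.g. the A&AI component
log.addHandler(handler)
log.setLevel(logging.INFO)
log.info("health check passed")
```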
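For the API versioning item, one possible deprecation mechanism is sketched below: the old version keeps serving between releases but is explicitly marked deprecated, pointing clients at its successor so they can migrate before removal in a later release. The endpoint paths, Flask usage, and headers are assumptions, not an agreed ONAP policy.

```python
# Hedged sketch: keep /api/v1 serving between releases but mark it deprecated.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/api/v1/services")
def services_v1():
    resp = jsonify({"services": []})
    # Announce deprecation instead of changing or removing the API within a release.
    resp.headers["Deprecation"] = "true"
    resp.headers["Link"] = '</api/v2/services>; rel="successor-version"'
    return resp

@app.route("/api/v2/services")
def services_v2():
    # The replacement version; clients migrate here before v1 is retired.
    return jsonify({"items": [], "page": 1})

if __name__ == "__main__":
    app.run(port=8080)
```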
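For the software quality maturity item, the sketch below shows the general shape of the curve-fitting step: per-interval test failure counts are fit to a reliability-growth curve with a simple Metropolis (MCMC) sampler. The Goel-Okumoto model and the synthetic counts are assumptions made for illustration; the referenced SRE tool may use a different model and sampler.

```python
# Illustrative only: fit weekly test-failure counts to a Goel-Okumoto
# reliability-growth curve, mu(t) = a * (1 - exp(-b * t)), with a
# random-walk Metropolis sampler. Data and model are assumptions.
import numpy as np

rng = np.random.default_rng(0)

t = np.arange(1.0, 13.0)                                   # end of each test interval (weeks)
failures = np.array([9, 7, 8, 5, 4, 4, 2, 3, 1, 2, 1, 0])  # synthetic failures per interval

def log_likelihood(a, b):
    """Poisson log-likelihood of the interval counts under mu(t) = a*(1 - exp(-b*t))."""
    mu = a * (1.0 - np.exp(-b * t))
    lam = np.diff(np.concatenate(([0.0], mu)))   # expected failures in each interval
    lam = np.clip(lam, 1e-12, None)
    return float(np.sum(failures * np.log(lam) - lam))

# Random-walk Metropolis over (log a, log b) with flat priors on the log scale.
theta = np.log([50.0, 0.1])
ll = log_likelihood(*np.exp(theta))
samples = []
for _ in range(20000):
    proposal = theta + rng.normal(scale=0.05, size=2)
    ll_prop = log_likelihood(*np.exp(proposal))
    if np.log(rng.uniform()) < ll_prop - ll:      # accept/reject step
        theta, ll = proposal, ll_prop
    samples.append(np.exp(theta))

a_hat, b_hat = np.median(samples[5000:], axis=0)  # posterior medians after burn-in
remaining = a_hat - failures.sum()                # estimated defects not yet found
print(f"total defects a={a_hat:.1f}, detection rate b={b_hat:.3f}, remaining={remaining:.1f}")
```

Under this model the fitted parameter a estimates the total latent defects, so comparing a with the failures observed so far gives a rough, quantitative releasability indicator of the kind proposed above.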