This will include, for example, credentials usage, performance, resiliency, and testing requirements.

    • Support ONAP platform upgrade

      • How can we update a running ONAP instance in the field? Should this be part of the OOM scope, with help from the architecture subcommittee? (A sketch of one possible approach follows this item.)
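
        A minimal sketch of what an in-place upgrade of a single ONAP component could look like, assuming the OOM Kubernetes deployment model; the deployment name, namespace, and image tag below are hypothetical placeholders, not an agreed procedure.

            # Hypothetical rolling upgrade of one ONAP component through the Kubernetes API.
            # Assumes an OOM-style Kubernetes deployment; all names and images are placeholders.
            from kubernetes import client, config

            def rolling_upgrade(deployment: str, namespace: str, new_image: str) -> None:
                config.load_kube_config()                  # reuse local kubeconfig credentials
                apps = client.AppsV1Api()
                dep = apps.read_namespaced_deployment(deployment, namespace)
                dep.spec.template.spec.containers[0].image = new_image  # bump the component image
                # Kubernetes rolls pods over gradually, so the instance keeps serving mid-upgrade.
                apps.patch_namespaced_deployment(deployment, namespace, dep)

            rolling_upgrade("so-api-handler", "onap", "onap/so-api-handler:1.2.3")
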
    • Support ONAP reliability

      • We have no well-defined approach to fault tolerance at either the component or the system level. Should this be part of the OOM scope, with help from the architecture subcommittee?
      • An instance of the complete ONAP platform shall be configurable to be part of an "N+1" group of instances that operate in a specific, coordinated, controlled fail-over manner. This coordinated fail-over enables the operator to select the idle "+1" instance in the group to functionally replace any of the operating, healthy "N" instances of the group on demand, within a specified amount of time. This approach shall be used as the common method for surviving disasters and also as the common approach to distributing software upgrades. (A coordination sketch follows this item.)
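
        A minimal, illustrative sketch of the coordinated fail-over selection described above; the group registry, instance names, and promotion logic are hypothetical and only demonstrate the intended "promote the idle +1" behaviour.

            # Illustrative N+1 fail-over coordinator: one idle standby instance can be promoted
            # to replace any unhealthy (or to-be-upgraded) active instance. Names are placeholders.
            from dataclasses import dataclass

            @dataclass
            class Instance:
                name: str
                role: str           # "active" or "standby"
                healthy: bool = True

            class FailoverGroup:
                def __init__(self, instances):
                    self.instances = instances

                def idle_standby(self):
                    return next(i for i in self.instances if i.role == "standby")

                def promote_standby(self, failed_name: str) -> None:
                    """Swap the idle +1 instance in for a failed (or to-be-replaced) active one."""
                    failed = next(i for i in self.instances if i.name == failed_name)
                    spare = self.idle_standby()
                    spare.role, failed.role = "active", "standby"
                    failed.healthy = False     # taken out of service for repair or upgrade

            group = FailoverGroup([Instance("onap-a", "active"),
                                   Instance("onap-b", "active"),
                                   Instance("onap-spare", "standby")])
            group.promote_standby("onap-a")
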
    • Support ONAP scalability

      • How do we scale ONAP? Should this be part of the OOM scope, with help from the architecture subcommittee? (A scale-out sketch follows this item.)
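
        A minimal sketch of horizontal scale-out for one stateless component, again assuming an OOM/Kubernetes deployment; the component name and replica count are placeholders.

            # Hypothetical horizontal scale-out of one stateless ONAP component via the
            # Kubernetes API; Kubernetes schedules the additional pods.
            from kubernetes import client, config

            def scale_component(deployment: str, namespace: str, replicas: int) -> None:
                config.load_kube_config()
                apps = client.AppsV1Api()
                # Patch only the replica count of the deployment's scale subresource.
                apps.patch_namespaced_deployment_scale(
                    deployment, namespace, {"spec": {"replicas": replicas}})

            scale_component("aai-resources", "onap", 3)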

    • Support ONAP monitoring

      • Common logging formats and approaches need to be supported, and automated cross-component monitoring tools should be developed or provided. (An illustrative log format follows this item.)
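
        A minimal sketch of one possible common structured (JSON) log format emitted through the standard Python logger; the field names are illustrative assumptions, not an agreed ONAP logging specification.

            # Illustrative common JSON log format, so cross-component monitoring tools can
            # parse every component's logs the same way. Field names are assumptions only.
            import json, logging, time

            class JsonFormatter(logging.Formatter):
                def format(self, record: logging.LogRecord) -> str:
                    return json.dumps({
                        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime(record.created)),
                        "service": "hypothetical-onap-component",
                        "level": record.levelname,
                        "requestId": getattr(record, "requestId", None),
                        "message": record.getMessage(),
                    })

            handler = logging.StreamHandler()
            handler.setFormatter(JsonFormatter())
            log = logging.getLogger("onap")
            log.addHandler(handler)
            log.setLevel(logging.INFO)
            log.info("service started", extra={"requestId": "1234-abcd"})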

    • Support a common ONAP security framework for authorization and authentication

    • Secure all secrets and keys (identity or otherwise) while they are in persistent memory or while they are in use
      • Secrets such as passwords and keys are stored in the clear in current ONAP infrastructure components. Security breaches are possible if these secrets are not well protected. Many modern platforms support trusted execution environments. The security architecture with respect to secrets needs to be defined and applied across all ONAP components, and perhaps even across ONAP, the VIM, site-specific controllers, and the NFVI. (A secret-retrieval sketch follows this item.)
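
        A minimal sketch of fetching a credential from an external secret store at the moment of use rather than keeping it in clear text in configuration; HashiCorp Vault (via the hvac client) is only one possible store here, and the mount path and environment variables are placeholders.

            # Illustrative retrieval of a credential at use time from an external secret store,
            # instead of embedding it in clear text in a configuration file. Paths are placeholders.
            import os
            import hvac

            def get_db_password() -> str:
                client = hvac.Client(url=os.environ["VAULT_ADDR"],
                                     token=os.environ["VAULT_TOKEN"])
                secret = client.secrets.kv.v2.read_secret_version(path="onap/db")
                return secret["data"]["data"]["password"]   # held in memory only while in use
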
    • Support for ensuring that all ONAP infrastructure VMs/containers are brought up with the intended software
      • The ONAP infrastructure is itself a set of multiple services; at last count, there are more than 30. In addition, some of these services can be run at multiple sites for scalability and availability; for example, some DCAE components may run at various sites. It is good practice to ensure that the ONAP servers and services (containers or VMs) are brought up with the intended firmware, OS, utilities, etc. (A digest-pinning sketch follows this item.)
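
        A minimal sketch of one piece of this: refusing to start a service whose container image digest does not match a pinned, approved digest. The image names and digests below are placeholders, and firmware/OS attestation (e.g. measured boot) would additionally require platform support.

            # Illustrative check that each ONAP service is started from the intended, pinned
            # container image digest. All image names and digests below are placeholders.
            EXPECTED_DIGESTS = {
                "onap/aai-resources": "sha256:" + "0" * 64,
                "onap/so-api-handler": "sha256:" + "1" * 64,
            }

            def verify_image(image: str, actual_digest: str) -> None:
                expected = EXPECTED_DIGESTS.get(image)
                if expected is None or expected != actual_digest:
                    raise RuntimeError(f"refusing to start {image}: digest {actual_digest} is not approved")

            verify_image("onap/aai-resources", "sha256:" + "0" * 64)
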
    • API Versioning

      • What are the detailed rules of API deprecation? We committed at the NJ F2F that APIs can’t simply change between releases, but we never agreed on the actual process/timeline surrounding API changes.

      • APIs for the current release and for the two most recent prior releases shall be supported; where an API has changed relative to those two prior releases, the deprecated, unchanged functions of those releases shall still be supported. (A version-routing sketch follows this item.)
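
        A minimal sketch of keeping the current API version plus the two prior release versions routable, with the older versions flagged as deprecated; the paths, version numbers, and payloads are invented for illustration.

            # Illustrative version routing: the current API (v3 here) plus the two prior
            # releases (v1, v2) stay callable; prior versions answer with a deprecation flag.
            def handle_v1(request): return {"status": "ok", "deprecated": True}
            def handle_v2(request): return {"status": "ok", "deprecated": True}
            def handle_v3(request): return {"status": "ok", "deprecated": False}

            ROUTES = {"/api/v1/widgets": handle_v1,
                      "/api/v2/widgets": handle_v2,
                      "/api/v3/widgets": handle_v3}

            def dispatch(path: str, request: dict) -> dict:
                handler = ROUTES.get(path)
                if handler is None:
                    raise KeyError(f"unsupported API version or path: {path}")
                return handler(request)

            print(dispatch("/api/v1/widgets", {}))   # older clients keep working, flagged deprecated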

    • Support ONAP Portal Platform enhancements

      • What new applications need to be on-boarded onto the Portal platform?
      • What Portal SDK enhancements are needed by use-case developers?
      • Adoption by all applications of the centralized role management now provided by the Portal platform.
    • Software Quality Maturity

      • The ONAP platform is composed of many components, each exhibiting a varying degree of "software maturity." A proposed requirement is that, before software updates for a particular component are accepted as part of the complete ONAP solution, it must be shown that the software for that component meets or exceeds specific quantitative measures of software reliability. In other words, "Software Releasability Engineering" (SRE) analysis data should be computed and disclosed for each component. This data is computed by collecting time-series data from the testing process (testing the published use cases) and fitting test failure counts to a family of curves known to track defect density over the software lifetime. Information on this type of quantitative analysis is here: https://drive.google.com/open?id=0By_UqQM0rEuBei10TVdTOU5CalU  An open-source tool that may be used to compute this SRE analysis is available entirely in a Docker container here: https://cloud.docker.com/swarm/ehwest/repository/docker/ehwest/sre/general  Numerous papers in the literature explain the use of Markov Chain Monte Carlo methods to fit test data to the target curve.
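
        A simplified illustration of the curve-fitting step: the referenced SRE tool fits the data with Markov Chain Monte Carlo, whereas this sketch uses a plain least-squares fit of the Goel-Okumoto reliability growth model to invented cumulative failure counts.

            # Simplified illustration of fitting cumulative test-failure counts to a reliability
            # growth curve, mu(t) = a * (1 - exp(-b * t)) (Goel-Okumoto). The SRE tool referenced
            # above uses MCMC; this sketch uses least squares, and the failure data is invented.
            import numpy as np
            from scipy.optimize import curve_fit

            def goel_okumoto(t, a, b):
                return a * (1.0 - np.exp(-b * t))

            weeks = np.arange(1, 11)                                  # test weeks
            cumulative_failures = np.array([5, 9, 13, 15, 17, 18, 19, 19, 20, 20])

            (a, b), _ = curve_fit(goel_okumoto, weeks, cumulative_failures, p0=[25.0, 0.3])
            remaining = a - cumulative_failures[-1]                   # expected residual defects
            print(f"estimated total defects a={a:.1f}, detection rate b={b:.2f}, "
                  f"about {remaining:.1f} defects remaining")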
