

1 Introduction

1.1 Purpose

This section captures recommendations for handling certain security questions that are studied by the security sub-committee.  These recommendations, when implemented, can lead to new best practices.  The recommendation states are:

  • Draft: The ONAP Security sub-committee is working on the recommendation

  • Recommended: The ONAP security sub-committee agrees that this is a recommendation

  • Approved: The recommendation is approved by the TSC.

1.2 Threat Analysis

Some known threats in microservice architectures are:

  1. Credential theft, then used to gain high-level privileges:

    1. Attacker analyzes the container images to steal secrets such as SSH private keys, X.509v3 certificate private keys, passwords, etc.

    2. Attacker analyzes the captured traffic among services to steal secrets such as passwords and other secrets.

    3. Attacker analyzes environment variables (passed to containers) via orchestrator log files to steal passwords and other secrets.

    4. Attacker getting hold of default credentials or weak passwords

    5. Attacker has credentials because they were at one point authorized to have access to them; after a job change they are no longer authorized, but the credentials have not been changed.

  2. Denial Of Service Attacks:

    1. Attacker bombards the container services with new connections, leading to a large number of forked processes and threads and causing resource issues for other workloads (containers) in the system.

    2. Attacker exploiting the container to get access to Kernel.

    3. Attacker exploits the container runtime and deletes executing containers

  3. Tampering of images (ONAP container images)

    1. Attacker places tampered images with similar-looking names in the registry, leading to containers being run from attacker images.

    2. Attacker has self-commit privileges and introduces malware into the images in the registry.

Typical vulnerabilities are:

  • Secrets/passwords/sensitive-data in images.

  • Unchanged default passwords 

  • Weak passwords

  • Unsecured communication

  • Usage of environment variables to pass sensitive information

  • Poor Security configuration

  • Vulnerable system software and libraries
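Several of these vulnerabilities lend themselves to simple automated checks. As one illustration, the sketch below flags environment variables whose names suggest they carry sensitive information (the name patterns are an assumption for illustration, not an ONAP convention):

```python
import re

# Name patterns that commonly indicate sensitive values (an assumption
# for illustration; real deployments should use an agreed naming policy).
SENSITIVE = re.compile(r"(PASS(WORD)?|SECRET|TOKEN|PRIVATE|CREDENTIAL|KEY)", re.IGNORECASE)

def flag_sensitive_env(environ):
    """Return the names of environment variables that look like secrets."""
    return sorted(name for name in environ if SENSITIVE.search(name))

# Example: audit a container-style environment snapshot.
env = {"DB_PASSWORD": "x", "API_TOKEN": "y", "LOG_LEVEL": "info"}
print(flag_sensitive_env(env))  # ['API_TOKEN', 'DB_PASSWORD']
```

A check like this only catches suggestively-named variables; it does not replace moving secrets out of the environment entirely.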

Mitigation techniques are:

  • Host operating system (Not valid if ONAP is being installed in Hyperscale data centers) - Hardened operating system, Vulnerability scanning, Trusted computing infrastructure

  • Containers images:

    • Only have required software packages.

    • No password, secrets, private key in the image.

    • Vulnerability scanning and ensuring only patched versions of the packages are used.

    • Trusted image repository /  Image signing by VNF vendors.

  • Container image download 

    • Secure communication with repositories

    • Verifying the signature of images before they are launched.

    • Periodic check for patched container images from the repository.

  • Container run time 

    • Secret Management 

    • Mutual TLS for network security 

    • IPSEC for network security

    • Syscall whitelisting, MAC (Mandatory Access Control)

    • Usage of cgroups for resource isolation for all shared resources.

    • Monitoring of system call usage

    • Immutable - No runtime patches to the packages.  Always download the full container image.

Open Source Threat Modeling: https://www.coreinfrastructure.org/news/blogs/2017/11/open-source-threat-modeling

1.3 Main discussed topics

The main captured topics (Main focus areas):

  1. ONAP  Credential Management & Secret Management

  2. Static code scanning

  3. Known vulnerability analysis

  4. Image signing/verification

  5. 3SP support from security perspective (recommendation done)

2 ONAP Credential Management

 Status: Draft

2.1 ONAP level use cases

The following are the high-level ONAP use cases that need to be supported.

2.1.1 Package signing

A package to be onboarded is signed.

When the package is onboarded, it is validated for integrity.

Note:  Need to be clear on whether it is the vendor credential used for signing or the ONAP operator credential. 
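The integrity-validation step can be sketched as a digest comparison against a manifest of artifact hashes. This is a minimal, hypothetical illustration (the manifest here is a plain dict); real package validation also verifies the signing party's signature over the manifest itself:

```python
import hashlib

# Minimal sketch: validate package artifacts against manifest digests.
# The manifest format here is illustrative only, not a real package layout.
def validate_artifacts(artifacts, manifest):
    """artifacts: {name: bytes}; manifest: {name: expected sha256 hex}.
    Returns the names of artifacts that fail the digest check."""
    failures = []
    for name, data in artifacts.items():
        if hashlib.sha256(data).hexdigest() != manifest.get(name):
            failures.append(name)
    return failures  # an empty list means the package content is intact

pkg = {"vnfd.yaml": b"vnfd-content", "image.qcow2": b"image-bytes"}
manifest = {name: hashlib.sha256(data).hexdigest() for name, data in pkg.items()}
assert validate_artifacts(pkg, manifest) == []

pkg["image.qcow2"] = b"tampered"
assert validate_artifacts(pkg, manifest) == ["image.qcow2"]
```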

2.1.2 ONAP operator signing in to manage the ONAP system

The operational staff using ONAP authenticate with the ONAP system and are granted privileges based on the authenticated persona.

2.1.3 Secure communication between ONAP components

The ONAP components can securely communicate between themselves:

The components are authenticated prior to establishing a connection

The connection is encrypted
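Both properties (mutual authentication before connecting, plus encryption) can be sketched with Python's standard ssl module; the certificate paths are placeholders to be supplied by the credential management solution:

```python
import ssl

# Server-side context that *requires* a client certificate (mutual TLS),
# so both ends are authenticated before the encrypted channel is used.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.verify_mode = ssl.CERT_REQUIRED           # reject unauthenticated peers
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocol versions

# Placeholder paths -- provisioned by the credential management solution:
# ctx.load_cert_chain("component.crt", "component.key")
# ctx.load_verify_locations("onap-ca.crt")

assert ctx.verify_mode == ssl.CERT_REQUIRED
```

A client-side context is built the same way with `ssl.PROTOCOL_TLS_CLIENT`, presenting its own certificate via `load_cert_chain`.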

2.1.4 External APIs being used to access ONAPs capabilities

ONAP offers API for external systems to use the ONAP capabilities.  For this, the external system is authenticated and authorized. 

2.1.5 ONAP accessing the services from another system.

ONAP can be a consumer of services offered by other external systems.  These can include, e.g., the virtualization resources, the VNFs, or other external systems.

2.2 Credentials to be managed

Credentials may be certificates, passwords and the like.  These need to be managed through the entire lifecycle.  The types of credentials that need to be managed are:

    • Credentials for ONAP users to access ONAP.  These are referred to as ONAP_User credentials.

    • Credentials for using the APIs exposed by ONAP. These are referred to as ONAP_ExtAPI credentials.

    • Credentials for ONAP to communicate to other ONAP components.  These are referred to as ONAP_Component credentials.

      • Note: This includes credentials for VNF SDK to package the artifacts onboarded into SDC.

      • Note: Other ONAP components include VNFs that need to communicate with ONAP services such as DCAE securely.

      • Note:  ONAP components can spread across geographical locations.  For example, DCAE systems at Edge communicating with Central ONAP services.

    • Credentials for ONAP to communicate with other systems.  These are referred to as ONAP_Foreign credentials.  

      • As an example, if ONAP is to communicate to an external SDN controller or a cloud infrastructure, these credentials need to be managed.

      • Another example is the credentials to access a VNF.

2.3 Credential Management Requirements

The credential management solution considers the following:

General Requirements

  • The credential management solution MUST be able to interact with existing credential creation and validation schemes

  • The following types of certificates SHOULD be supported by ONAP:

    • a, b, c, ... 

  • Securing the private keys - CA private keys shall be secured using PKCS11-based HSMs (e.g. secure generation and storage of private keys)

  • Usage of certificate identity wherever possible (binding an identity to a credential using the X.509v3 certificate)

Requirements for ONAP_USER credentials:

  • ONAP MUST support ONAP_User credentials of type user-ID and Password

  • ONAP SHOULD support ONAP_User credentials as certificates.

Requirements for ONAP_ExtAPI credentials:

  • ONAP MUST support ONAP_ExtAPI credentials of type user-ID and Password

  • ONAP MUST support ONAP_ExtAPI credentials as certificates.

Requirements for ONAP_Component credentials:

  • ONAP MUST support ONAP_Component credentials of type user-ID and Password

  • ONAP MUST support ONAP_Component credentials as certificates. 

  • ONAP components SHOULD use credentials based on certificates for communication with other ONAP components.  The use of user-ID and Password is a fallback in the case of components that do not support certificates.

Requirements for ONAP_Foreign credentials:

  • ONAP MUST support ONAP_Foreign credentials of type user-ID and Password

  • ONAP MUST support ONAP_Foreign credentials as certificates



2.4 Credential Lifecycle

2.4.1 Credential State Diagram


In the implementation, some types of credentials have to be provisioned into ONAP components, e.g. certificate-based credentials or (user-ID, password) pairs have to be added to VM images or containers before deployment.  It is probably better to do this during deployment rather than storing images with embedded credentials.  The Secrets Vault is used to store these credentials securely.  The transition to the Credential_Provisioned state means the credential is stored in the Secrets Vault.


2.4.2 Credential States

Credential_Null: No credential currently exists.  The only valid operation is to create a credential. (The mechanism for creating a credential is out of scope of ONAP.)

Credential_Created: A credential has been created.  The credential is not yet available within ONAP, and cannot be validated.

Credential_Provisioned: The credential is provisioned into ONAP.  The credential can be validated within ONAP.

Credential_Expired: The credential has expired.  Credential validation within ONAP will fail.  The credential can be updated.

Credential_Revoked: The credential has been revoked.  Credential validation within ONAP will fail.  The credential cannot be updated.

Credential_Destroyed: (Not reachable.)  Note: Credentials can be copied, and the copy can be presented for validation.  Credentials can never be destroyed.

2.4.3 Credential Operations

CREATE: Creates a new credential.  Credential creation is external to ONAP.

DELETE: Credentials may not be deleted. (Design Note 1.)

PROVISION: Provisions an existing credential into ONAP.  A credential must go through state Credential_Provisioned before it can be used within ONAP.

UPDATE: Updates an existing credential within ONAP.  UPDATE is used to update a credential in state Credential_Expired and return it to state Credential_Provisioned.  UPDATE may also be used to update internal parts of a credential.

VALIDATE: Validates an existing credential.  VALIDATE is used to test that a presented credential gives permission for access to a resource within ONAP (e.g. to access an ONAP component, perform an ONAP operation, or access data).

EXPIRE: Expires an existing credential.  EXPIRE may be an implicit operation, as some credentials have a defined lifetime, and will expire automatically.  EXPIRE may be an explicit operation, where a specific credential is expired.  Credentials in state Credential_Expired may be updated.

REVOKE: Revokes an existing credential.  Once a credential is in state Credential_Revoked there are no valid operations.  A new credential is required.

Design Notes:

  • Design Note 1 - this is intended to make explicit that digital credentials may always be re-used, even if they are expired or revoked.
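The state and operation tables above can be summarised as a transition map. The sketch below models only the state-changing operations (VALIDATE leaves the state unchanged) and is an illustration of the model, not an ONAP API:

```python
# Credential lifecycle as a transition table keyed by
# (current state, operation) -> next state, mirroring the tables above.
# DELETE is deliberately absent: credentials may not be deleted.
TRANSITIONS = {
    ("Credential_Null", "CREATE"): "Credential_Created",
    ("Credential_Created", "PROVISION"): "Credential_Provisioned",
    ("Credential_Provisioned", "EXPIRE"): "Credential_Expired",
    ("Credential_Expired", "UPDATE"): "Credential_Provisioned",
    ("Credential_Provisioned", "REVOKE"): "Credential_Revoked",
    ("Credential_Expired", "REVOKE"): "Credential_Revoked",
}

def apply(state, operation):
    """Return the next state, or raise if the operation is invalid there."""
    try:
        return TRANSITIONS[(state, operation)]
    except KeyError:
        raise ValueError(f"{operation} is not valid in state {state}")

state = "Credential_Null"
for op in ("CREATE", "PROVISION", "EXPIRE", "UPDATE", "REVOKE"):
    state = apply(state, op)
print(state)  # Credential_Revoked
```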

2.5 ONAP Credential Management Overview

ONAP requires two components to improve the security of credentials used in orchestration.

    1. a secrets vault to store credentials used by ONAP

    2. a process to instantiate credentials

Component 1: Secrets Vault - A service that can be integrated with ONAP that provides secure storage of the credentials used by ONAP to authenticate to VNFs.


2.6 Credential Management Use cases (credential perspective)

Use Cases:

 For ONAP_User Credentials

For ONAP_User Credentials, two uses cases are shown.

  1. Provisioning the credentials

The ONAP_Admin credentials are directly provisioned.  The root administrator can create the ONAP administrator user-identifier and credentials.  Initially a temporary credential is created, and the ONAP operational staff can update their credentials.

The credentials are securely stored (e.g. in a hashed format).

  2. Authenticating the user

When ONAP operational staff attempt to log in for the first time, ONAP challenges the user (with xxxxx) and validates the response by comparing the hash of the entered credentials with the stored hash of the credentials.
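The hash comparison described above can be sketched with Python's standard library, using a salted, iterated hash (PBKDF2-HMAC-SHA256 here; the iteration count and salt size are illustrative choices, not ONAP policy):

```python
import hashlib
import hmac
import os

# Minimal sketch of salted, iterated password hashing (PBKDF2-HMAC-SHA256).
ITERATIONS = 200_000  # illustrative work factor

def hash_password(password, salt=None):
    """Hash a password with a fresh per-user random salt."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, stored):
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored)

salt, stored = hash_password("temp-credential")
assert verify_password("temp-credential", salt, stored)
assert not verify_password("wrong-guess", salt, stored)
```

Only the salt and digest are stored; the plaintext password never needs to be persisted.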

For  ONAP_ExtAPI credentials:

There are two cases here.  The first case is when the user credentials have to be specifically provisioned.  The second case is when an identity management scheme is used.  Which do we want to describe?

For ONAP_ExtAPI credentials, 3 use cases are described

 1. Provisioning the credentials 

<< insert here >>

 2. Distributing the credentials

<< Insert here >>

3. Retrieving the credentials

<< Insert here >>

For ONAP_Component credentials:

For ONAP_Component credentials, a few use cases are described here.

1. Certificate Authority instance creation: Normally only one CA instance is required per ONAP deployment.

Steps are given below:

      • Administrator user creates CA instance by providing details such as following to CA Service

        • Subject name to use on self-signed CA certificate

        • PKCS11 slot ID and Key ID to use (in case PKCS11 based HW protection of CA private key)

        • Public key algorithm

        • In case of RSA, key size

        • In case of ECDSA, curve 

        • Hash algorithm and key size

        • Validity time of CA certificate

        • Whether to create a token backend. If a token backend is needed, the lifetime and usage count of tokens must be supplied.

        • Returns:

          • Token request URL

          • Certificate request URL

          • CA Certificate

      • Administrator user also creates policy rules to apply to user certificate requests, with information such as

        • Subject name prefix the CA instance should accept.

        • Signing algorithm, key sizes or curves that are acceptable.

        • Hashing algorithm and key sizes that are acceptable.

        • MAC addresses it should accept in the subject name

        • Whether to verify MAC address in the subject name of PKCS10 request with the MAC address of the VM/Container.

        • Check for valid token (Yes/No)

        • Validity time of certificate.

2. Certificate request - Creation of credentials required for secure communication: This normally occurs when a service (e.g. a Java application service) is started or when certificate renewal is due.

Steps are given below:

      • Java application gets the CA URL, token, and subject name prefix to be used via environment variables in the case of containers, or via cloud-init user data in the case of VMs.

      • The Certificate Credential Client agent is called by the application during its startup to create the credentials and get the certificate signed by the CA, passing the CA URL and token information.

      • The Certificate Credential agent does the following:

        • If an existing certificate and private key are still valid, it returns to the application immediately. If not, it does the following:

        • Generates an ECDSA key pair.

        • Creates a PKCS10 request with subject name prefix + MAC address as the Common Name of the subject name.

        • Sends the PKCS10 request and token to the CA.

        • Gets the X.509v3 certificate from the CA.

        • Stores the certificate in the file system.

        • Returns the private key handle, slot ID, and path to the certificate.

      • The Certificate Credential agent informs the application once the credentials are acquired.

      • The application then configures its TLS service with the CA certificate and subject prefix, to validate incoming requests.

      • If the application is making a TLS connection to another service, it uses the enrolled certificate and private key handle when creating the TLS endpoint.
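The key-generation and PKCS10 steps above can be sketched with the third-party 'cryptography' library. The subject prefix and MAC address are illustrative values, and a real agent would submit the resulting request to the CA rather than stopping here:

```python
# Sketch of the agent's enrollment steps: generate an ECDSA key pair and
# build a PKCS10 request whose Common Name is <subject prefix + MAC address>.
# All values are illustrative; requires the 'cryptography' package.
from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

subject_prefix = "dcae-collector-"   # from env vars / cloud-init (assumed name)
mac_address = "02:42:ac:11:00:02"    # MAC of the VM/container (assumed value)

key = ec.generate_private_key(ec.SECP256R1())  # ECDSA key pair
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.COMMON_NAME, subject_prefix + mac_address),
    ]))
    .sign(key, hashes.SHA256())  # sign the request with the new private key
)

assert csr.is_signature_valid  # this DER-encoded CSR would be sent to the CA
```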

For ONAP_Foreign credentials:

For ONAP_Foreign credentials, two use cases are described. 

  1. Provisioning the credentials

    <<insert here>>

  2.  Retrieving the credentials

  3. Accessing VNFs during runtime and installation

<< Describe the flow for the credentials to access VNFs.  To be more specific, who owns the credentials for the case when ONAP has to configure the VNFs >>  (Zyg)

4. Onboarding VNFs

<< Describe the case where the VNF image and VNF package is  signed from the vendor (with or without VNF package) >>

Assumption: the vendor signs the image but does not encrypt it.


Use case:

NOTE to seccom: Probably should describe how this works for all lifecycle steps. 

Recommendation: ONAP should provide a reference implementation of a secrets vault service as an ONAP project.

Next Steps:

    • Find a project lead for a reference implementation.

Component 2: A process to provision ONAP instances with credentials. These credentials may be used for interprocess communication (e.g., APPC calling A&AI) or for ONAP configuring VNFs.

Automatic provisioning of certificates and credentials to ONAP components: AAF can provision certificates. ECOMP DCAE is currently using AAF to provision certificates.

Next steps:

    • Work with the AAF team to include this functionality in Release 2. It is important to understand that the AAF solution depends on the CA supporting the SCEP protocol.

    • Enhance AAF to provision userIDs & passwords to ONAP instances and VNFs. Most VNFs only support userID/password authentication today. ETSI NFV SEC may issue a spec in the future on a more comprehensive approach to using PKI for NFV which can be visited by ONAP SEC when released. Steve is working on this right now but doesn’t know when he’ll be done.

2.7 Recommended approach

2.8 Implications for ONAP

Describe what this means to ONAP

QUESTIONS:

3 ONAP Static Code Scans

Status: Recommended, and recommendation approved by TSC on 11/2/2018

3.1 ONAP Static Code Scanning

The purpose of the ONAP static code scanning is to perform static code scans of the code as it is introduced into the ONAP repositories, looking for vulnerabilities.

3.2 Approaches

Tools that have been assessed: Coverity Scan (LF evaluation), HP Fortify (AT&T evaluation), Checkmarx (AT&T evaluation), Bandit (AT&T evaluation)

Preliminary Decision: Coverity Scan https://scan.coverity.com/

Motivation: Coverity Scan is a service by which Synopsys provides the results of analysis on open source coding projects to open source code developers that have registered their products with Coverity Scan. Coverity Scan is powered by Coverity® Quality Advisor. Coverity Quality Advisor surfaces defects identified by the Coverity Static Analysis Verification Engine (Coverity SAVE®). Synopsys offers the results of the analysis completed by Coverity Quality Advisor on registered projects at no charge to registered open source developers. Coverity is integrated into OPNFV and other Open Source projects and operating successfully. The Linux Foundation recommends the use of the tool.

Current Activity: In conversations with Coverity to understand the definition of “project” – does it refer to ONAP or the projects under an ONAP release to ensure that the limitation on free scans does not lead to bottlenecks in submissions and commits.

Open Source use: 4000+ open source projects use Coverity Scan

Frequency of builds:

Up to 28 builds per week, with a maximum of 4 builds per day, for projects with fewer than 100K lines of code

Up to 21 builds per week, with a maximum of 3 builds per day, for projects with 100K to 500K lines of code

Up to 14 builds per week, with a maximum of 2 builds per day, for projects with 500K to 1 million lines of code

Up to 7 builds per week, with a maximum of 1 build per day, for projects with more than 1 million lines of code

Once a project reaches the maximum builds per week, additional build requests will be rejected. You will be able to re-submit the build request the following week.

Languages supported: C/C++, C#, Java, Javascript, Python, Ruby

The scanning process can be triggered from Jenkins. OPNFV is currently using a basic Gerrit plug-in for some basic scans.

Question: What about Go? Which versions of Python are supported?

Comment: Add some motivation of why Coverity is a good idea.

Comment: We need to catch the commitment now. 

Bring a few proposals to the TSC.

3.3 ONAP process for static code scans

Two approaches are identified.

  1. Scan analysis in project

The PTL is informed of the scan analysis results on a regular basis (e.g. weekly).

    • The project has the responsibility to analyze the scans and make the required changes.

Notes:

  • The work scales with the number of projects
  • Security competence may not exist in projects to understand the results
  • Have to work through the false positives
  • Requires that the scan process is incorporated into Jenkins

  2. Create a support team to support the scan analysis with the projects.

    • Under the guidance of the security sub-committee, 1-2 teams are created with project members (rallying around timezones).
    • Perform walkthroughs of the static code scan results before MS-4.

In either case, the proposal is that the MS-4 and Release criteria include static code scan analysis.

3.4 Example of a Coverity Scan report

The following report was generated by running Coverity against code from the Zephyr project.

https://wiki.onap.org/download/attachments/11928162/CoverityScanReportForZephyr.docx?api=v2 

3.5 Recommendation


  • Use Coverity Scan https://scan.coverity.com/ to perform static code scans on all ONAP code.

  • Automate scanning by enabling Jenkins to trigger weekly scans with Coverity Scan.

  • Deliver scan reports to the PTLs for each project. PTLs will be responsible for getting the vulnerabilities resolved (fixed or designated as false positives).

  • All projects in a release must have the high vulnerabilities resolved by MS-3.

  • All projects in a release must have the high and medium vulnerabilities resolved by MS-4.

  • The Security Committee will host sessions to help projects walk through the scanning process and reports.

4/11 Update

  • The LF ONAP helpdesk is creating a Jenkins job to scan each repo with CoverityScan, following the job used by OpenDaylight (ticket #54456)
  • The goal is to run one scan daily and have the results integrated into the Sonar dashboard
  • A few projects use the Go and Closure languages which are not supported by CoverityScan
  • Coverity scanning will not be implemented until after Beijing goes live

4 CII Badging Process Learnings for ONAP

Status: Draft

4.1 CII Badging process intro

This section captures the learnings from using the CII badging program in ONAP.

4.2 Learnings

The CLAMP project has been working on the CII badging certification.  Their feedback is found here: CII Badging Program - Feedback.  It is repeated below for simplicity:

4.2.1 CII Badging program introduction.

• Core Infrastructure Initiative website: https://bestpractices.coreinfrastructure.org/

• Evaluate how projects follow best practices using voluntary self-certification

• Three levels: Passing, Silver and Gold

  • LF target level recommendation is Gold

• ONAP pilot project: CLAMP: https://bestpractices.coreinfrastructure.org/projects/1197

4.2.2 The Questionnaire

• Editing is limited to a subset of users

  • The main editor can nominate other users as editors

• Divided into clear sections

  • For each section, a set of questions is provided, addressing best practices relating to the parent section

• Each question asks if a criterion is

  • Met, unmet, not applicable, or unknown

• Criteria are generally high-level as targeted to best practices, e.g.

  • “The project MUST have one or more mechanisms for discussion”

  • “The project SHOULD provide documentation in English”

4.2.3 The Goals

• Give confidence in the project being delivered

  • By quickly knowing what the project supports

• See what should be improved

  • Self-questioning helps project stakeholders identify strengths and weaknesses, do’s and don'ts

• Align all projects using the same ratings

  • Makes projects connected together to follow the same practices

• Call for continuous improvement

  • Increase self rating and reach better software quality

4.2.4 Raised Questions

  • Introduce test coverage rules: how many tests should be added for each code change

  • Digital signature: use digital signature in delivered packages (already in the plan?)

  • Vulnerability fixing SLA: vulnerabilities should be fixed within 60 days

  • Security mechanisms

    • Which cryptographic algorithms to use to encrypt passwords

    • The security mechanisms within the software produced by the project SHOULD implement perfect forward secrecy for key agreement protocols so a session key derived from a set of long-term keys cannot be compromised if one of the long-term keys is compromised in the future.

    • If the software produced by the project causes the storing of passwords for authentication of external users, the passwords MUST be stored as iterated hashes with a per-user salt by using a key stretching (iterated) algorithm (e.g., PBKDF2, Bcrypt or Scrypt).

    • The security mechanisms within the software produced by the project MUST generate all cryptographic keys and nonces using a cryptographically secure random number generator, and MUST NOT do so using generators that are cryptographically insecure
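The last criterion can be illustrated with Python's standard library: the secrets module draws from the operating system's CSPRNG, unlike the deterministic random module (a sketch of the practice, not project code):

```python
import secrets

# Keys and nonces must come from a CSPRNG: the 'secrets' module wraps the
# OS entropy source, whereas the 'random' module is deterministic and
# MUST NOT be used for cryptographic material.
session_key = secrets.token_bytes(32)   # 256 bits of key material
nonce = secrets.token_hex(12)           # 96-bit nonce, hex-encoded
csrf_token = secrets.token_urlsafe(32)  # URL-safe token for web flows

assert len(session_key) == 32
assert len(nonce) == 24                 # 12 bytes -> 24 hex characters
assert secrets.token_bytes(32) != session_key  # fresh draws are independent
```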

5 ONAP Communication Security

Status: Draft

5.1 ONAP Communication Security

Assuming that credential management is in place, ONAP needs a common means to support secure communication between the ONAP components.

There are two high level use cases to cover.

  1. Real-time communication between ONAP components

  2. Support for authentication and encryption of the models and packages to be onboarded into SDC (from VNF SDK).

5.2 ONAP communication security requirements

To guide the solution development for the ONAP communication security, the following requirements are identified:

For: Real-time communication between ONAP components:

  • The solution MUST support an approach that can be common to all ONAP modules.

  • The solution MUST support the credential management solution and MUST NOT be tied to any particular credential management scheme.

  • The solution MUST support secure communication between the ONAP components in the following sense:

    • A receiving ONAP component understands that the message is authentic

    • Any element in between the ONAP components cannot interpret or change the message.

  • The solution MUST ensure that a sending ONAP component does not depend on what the receiving ONAP component is, and vice versa.  (Such dependencies would put unnecessary restraints on the architecture.)

  • The solution SHOULD be easy for the ONAP components to adopt.

  • The solution MUST be independent of the underlying communication technology (i.e. communication bus technologies).

For models and packages to be onboarded:

  • The solution MUST support the credential management solution and MUST NOT be tied to any particular credential management scheme.

  • The solution MUST allow Service Design and Creation to validate the package from a security perspective.

6 ONAP Known Vulnerability Management

Status: Draft

 Background:

Sonatype Nexus can provide a number of reports.  One report it can provide is identification of components with known vulnerabilities.

Policies can be provisioned for different types of vulnerabilities to identify them as critical, severe, moderate, etc.

A process is required to support this.  A project with a component that has a known vulnerability can do one of two things: it can upgrade the component to a version that does not have the vulnerability, or it can investigate the vulnerability and conclude that it does not affect the project, due to the way the project uses the component or the part of the component it uses.


Next Steps

 Decide approach with projects:

It is recommended that the MS-4 criteria exclude modules with known vulnerabilities more than 60 days old.  This applies to both MS-4 and the Release criteria.
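The 60-day criterion can be checked mechanically. The sketch below assumes a simple record format of (component, disclosure date), which is an illustration rather than an actual Nexus report format:

```python
from datetime import date

MAX_AGE_DAYS = 60  # proposed MS-4 criterion

def overdue_vulnerabilities(known_vulns, today):
    """known_vulns: [(component, disclosed_date)].
    Returns components whose known vulnerability exceeds the allowed age."""
    return [name for name, disclosed in known_vulns
            if (today - disclosed).days > MAX_AGE_DAYS]

# Hypothetical component names, purely for illustration.
vulns = [("libfoo-1.2", date(2018, 1, 2)), ("libbar-0.9", date(2018, 3, 1))]
print(overdue_vulnerabilities(vulns, date(2018, 3, 15)))  # ['libfoo-1.2']
```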


7 Pluggable Security

Status: In Review

Comment: Moved to its own page for manageability


8 VNF Package Security

User-level Authentication and Authorisation

Status: Draft

Background and Goals:

ONAP must be deployed in different service provider environments in order to be successful. We know that different service providers have requirements for different user-level security infrastructures. To meet these requirements we need a security framework (specifically for user-level Authentication and Authorisation) that is pluggable. ONAP is a microservices architecture. We must address the standard requirements to localize the burdens of configuration and patching, and to provide a solution that is development-language neutral – i.e. we can’t assume all microservices are Java based.

  • Goal 1: Alternative Authentication and Authorisation security providers can be integrated without requiring customisation of the underlying ONAP code.

  • Goal 2: Minimise the operational effort required to configure and patch microservices.

  • Goal 3: Provide a language-independent solution (that supports microservices written in Clojure, Python etc as well as Java).

Since CADI/AAF is our open-sourced Authorisation provider, we propose to build a reference implementation of pluggable user–level authentication and authorisation based upon it.

Context

 This proposal relies upon the ONAP Credential Management and ONAP Communication Security initiatives to provide:

 

  1. Secure certificate generation and distribution.

  2. Component->component trust through mutual end point authentication.

  3. TLS communications resistant to:

    1. Spoofing

    2. Replay attacks

    3. Man-in-the-middle attacks

    4. Token theft.

The security infrastructure (Authorisation and Authentication) provides:

  1. User administration.

  2. Permissions and role management support

Basic Interaction Pattern

We shall illustrate the proposed approach with a simple A&AI example, securing the Sparky (A&AI UI) microservice:


  1. The ONAP portal authenticates the A&AI user on launch (this may use the Auth* service as an abstraction to perform the authentication).
  2. The portal invokes the Sparky microservice to launch the A&AI UI, including the provided token.
  3. The Auth* filter intercepts the request and invokes the authorize method on the Auth* microservice.
  4. The Auth* microservice invokes the AuthZ provider's authorisations method (the CADI library in this example) and returns the principal’s claims (ideally in a JWT).
  5. The filter compares the claims with the filter requirements for the invoked method/URI pattern.
  6. If the authorisations are satisfied, Sparky processes the request.

Components

Auth* Microservice

A standardised interface for:

    • User authentication
    • Request authorisation
    • Other operations –  e.g. token revocation check.

Characteristics

Operates on Security Provider tokens and JWTs.

Abstracts ONAP components from the specifics of authorisation and authentication service:

    • Composition and Implementation
    • Configuration
    • Token representation

Specific implementations, such as the CADI-based example below, provide an adaptation layer between the standardised interface and the required Authentication and Authorisation providers.

Auth* example implementation based on CADI/AAF


Auth* Servlet filter

    • Intercepts Service Requests
    • Logs the request and its transactionId for traceability.
    • Invokes the Auth* service to authenticate/validate token and retrieve authorisations (in JWT)
    • Compares the user authorisations with those required for the requested operation
    • Can perform local JWT authorisation comparisons 
    • Can call Auth* to check for revoked tokens.
    • Passes authorisation context to the service.

Transparently propagates valid tokens to calls made by the microservice.

Characteristics

    • Isolates application code from security concerns
    • Insensitive to the token representation
    • Largely insensitive to the method of propagating the token
    • Driven by deploy time configuration to minimise updates to the service distribution.
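The authorisation comparison the filter performs can be sketched as a small decorator; the claim layout and permission names below are hypothetical:

```python
import functools

def requires(*needed_permissions):
    """Decorator sketch: reject the call unless the caller's claims
    (as might be carried in a JWT payload) include every permission
    required by the invoked operation."""
    def decorator(handler):
        @functools.wraps(handler)
        def wrapper(claims, *args, **kwargs):
            granted = set(claims.get("permissions", []))
            missing = set(needed_permissions) - granted
            if missing:
                raise PermissionError(f"missing permissions: {sorted(missing)}")
            return handler(claims, *args, **kwargs)
        return wrapper
    return decorator

# Hypothetical permission name and handler, purely for illustration.
@requires("aai.ui.view")
def launch_sparky_ui(claims):
    return "A&AI UI launched for " + claims["sub"]

claims = {"sub": "operator1", "permissions": ["aai.ui.view"]}
print(launch_sparky_ui(claims))  # A&AI UI launched for operator1
```

A real filter would additionally validate the token's signature and expiry before trusting any claims it carries.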


Auth* filter


Extensions

Multiple microservices can be invoked for a single user request. Having to invoke the Auth* microservice at every leg in the call-chain inflates the latency budget (even if the Auth* service caches results). To offset this cost we propose:

1. The Sparky Auth* filter is configured to append the returned JSON Web Token (JWT - https://tools.ietf.org/html/rfc7519) to the downstream request.
2. The Resource Microservice Auth* filter is configured to accept either a JWT or the original credential.
3. The filter interrogates the JWT locally to determine whether the request is authorised for the requested URI pattern, avoiding a call to the Auth* service:

If the necessary authorisations are present the request is admitted.

If not, a call is made to the Auth* service with the original credential/JWT and the current scope, as per the standard flow.

If the necessary authorisations are still absent, the request is rejected.
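The local-check-then-fallback logic above can be sketched as follows. This is a hedged illustration, not the actual filter code; the "scope" claim name, the space-separated scope format, and the class name are assumptions for the example.

```java
import java.util.Arrays;
import java.util.Base64;
import java.util.List;

// Sketch of the local JWT authorisation check with Auth* fallback.
public class LocalJwtCheck {

    // Extract the payload JSON of a JWT (header.payload.signature).
    // Signature verification is assumed to have happened when the JWT
    // was issued by the Auth* service.
    static String payloadJson(String jwt) {
        String[] parts = jwt.split("\\.");
        return new String(Base64.getUrlDecoder().decode(parts[1]));
    }

    // Naive lookup of a space-separated "scope" claim; real code would
    // use a JSON parser and a proper JWT library.
    static boolean hasScope(String payloadJson, String required) {
        int i = payloadJson.indexOf("\"scope\":\"");
        if (i < 0) return false;
        int start = i + 9;
        int end = payloadJson.indexOf('"', start);
        List<String> scopes =
                Arrays.asList(payloadJson.substring(start, end).split(" "));
        return scopes.contains(required);
    }

    public static void main(String[] args) {
        String payload = "{\"sub\":\"demo\",\"scope\":\"aai.read aai.write\"}";
        String jwt = "x." + Base64.getUrlEncoder().withoutPadding()
                .encodeToString(payload.getBytes()) + ".y";
        // Claim present: request admitted locally, no Auth* call needed.
        System.out.println(hasScope(payloadJson(jwt), "aai.read"));
        // Claim absent: the filter would fall back to the Auth* service,
        // and reject the request if the service also denies it.
        System.out.println(hasScope(payloadJson(jwt), "aai.admin"));
    }
}
```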

Impacts of Pluggable Security:

To A&AI clients:

  1. (development) Need to integrate client with Authentication Provider to retrieve credential. Note - this could be abstracted via a call to Auth* configured with a re-direct if required?
  2. (development) Client needs to provide tokenised credential in REST request to A&AI.

To A&AI:

  1. (deployment) Need to configure authorisation requirements per URI, per service. Note that this can be as fine or coarse as the project requires.
  2. (development) All new or modified microservices need to use the secured REST client to invoke other microservices for seamless and secure propagation of JWT.

Other impacts.

It is the intention of this proposal that all ONAP components can adopt pluggable security, providing a consistent security implementation across the entire platform. In order to do so, the following changes would be required:

  1. (development) All REST interfaces are secured with an Auth* filter.
  2. (development) All REST interfaces specify their authorisation requirements in an Auth* configuration file.
  3. (development) Microservices that initiate REST calls to other ONAP components should use the A&AI REST client (should move to a general ONAP package) for seamless and secure propagation of JWT
  4. (test/deploy) All users of the application should be provisioned with the appropriate Authorisations.
Auth* / CADI analysis

Note: there seems to have been a confusion of Terms between Providers and Protocols.  Here is a mini-glossary defining these terms.

Protocol - the definition of how security is handled at runtime: what kind of security information is on a transaction, how that security information is passed with the transaction, and what the appropriate response is when the security information is not there.

Examples:

Basic Auth puts an encoded user/password pair in an HTTP header called "Authorization", prefixed with "Basic ".

OAuth2 puts a reference token in the "Authorization" header, prefixed with "Bearer ".

X509 Authentication is entirely different: the client certificate used to establish the TLS connection is made available for inspection via a function call on the Request object.
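For illustration, the first two header shapes can be constructed as below (all values are dummies chosen for the example):

```java
import java.util.Base64;

// Builds the two "Authorization" header shapes described above.
public class AuthHeaders {
    // Basic Auth: base64("user:password") prefixed with "Basic ".
    static String basic(String user, String password) {
        String encoded = Base64.getEncoder()
                .encodeToString((user + ":" + password).getBytes());
        return "Basic " + encoded;
    }

    // OAuth2: the reference token prefixed with "Bearer ".
    static String bearer(String token) {
        return "Bearer " + token;
    }

    public static void main(String[] args) {
        System.out.println(basic("demo", "pw"));       // Basic ZGVtbzpwdw==
        System.out.println(bearer("mF_9.B5f-4.1JqM")); // Bearer mF_9.B5f-4.1JqM
    }
}
```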

Provider - an external source of real-time security information.  

Examples:

For Basic Auth, the Provider is an external entity that will validate User/Password.  

For OAuth2, the Provider is an external entity that provides Tokens and Introspection information.

The following comparison is organised per topic, with the Auth* and CADI responses for each.

Do you support more than one Authentication Protocol?

(By Protocol, we mean, for example, Basic Auth, Certificate, OAuth2, etc.)

Expected Protocols, like those in the question are built in.

For each new kind of Authentication Protocol, you build an implementation of a TAF, making that available as part of the program.

This would be created, for instance, by a company having its own kind of SSO Protocol.

It should be noted that not every Protocol has an external Provider. Certificate Authentication, provided by CADI, does not require any external calls. Some have only an initialization setup, and do not make "provider" calls real time. The TAF allows all these differences to be accounted for, but plugged into CADI the same way.

How do you make new Protocols?

To create a whole new Protocol Adapter, you extend the TAF Interface, recognizing, for instance, the security information on the incoming call and the appropriate response when it is not present.

Do you support these Authentication Protocols simultaneously?

CADI allows for Multiple Protocols to be enabled for the same call. This is critical, for instance, when Apps want to migrate from User/Password to Certificate based Authentication, or even support some Apps with User/Password and others Certificate.

Within the same Container, different Microservices can serve GUI (Browsers) or Applications with the same configuration, because CADI recognizes at real-time what these protocols are.
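A hedged sketch of what such pluggability can look like in principle follows. The interface and class names below are illustrative only, not the actual CADI TAF API; the point is that one adapter per protocol can be tried in turn against the same request.

```java
import java.util.Optional;

// Hypothetical shape of a pluggable protocol adapter, loosely inspired by
// the TAF concept described above.
interface ProtocolAdapter {
    // Returns the authenticated principal if this adapter recognises the
    // security information on the request, empty otherwise.
    Optional<String> authenticate(String authorizationHeader);
}

public class AdapterChain {
    // The framework tries each configured adapter until one succeeds,
    // which is how multiple protocols can be enabled for the same call.
    static Optional<String> tryAll(String header, ProtocolAdapter... adapters) {
        for (ProtocolAdapter a : adapters) {
            Optional<String> principal = a.authenticate(header);
            if (principal.isPresent()) return principal;
        }
        return Optional.empty();
    }

    public static void main(String[] args) {
        ProtocolAdapter bearer = h -> h != null && h.startsWith("Bearer ")
                ? Optional.of("token-user") : Optional.empty();
        ProtocolAdapter basic = h -> h != null && h.startsWith("Basic ")
                ? Optional.of("basic-user") : Optional.empty();
        System.out.println(tryAll("Bearer abc", bearer, basic));
        System.out.println(tryAll("Digest xyz", bearer, basic));
    }
}
```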

Could you support Multi-factor Authentication in the future?

The current CADI behavior is to accept one valid Authentication out of all the configured Plugins before proceeding. Multi-factor Authentication only requires a small modification to require two Authentication protocols before proceeding. This would be an extremely minimal change.

How do you swap authentication provider?

Update/replace central Auth* implementation with adapter for new provider. No impact to microservices.

Adding Configuration Properties will both enable the Protocol indicated and change Providers (if it makes sense for the Protocol) by directing Endpoints in configurations.

Example: You can enable the OAuth2 TAF, and point it to ISAM Endpoints, AAF Endpoints or another OAuth2 provider altogether by modifying the endpoint property.

There is no impact to micro services to change out Providers of the same Protocol.

What is the response to the Client when there is no Authentication?

The CADI TAF accommodates different responses based on the Protocol. For instance, a BasicAuth response is "401" in the HTTP Header. However, the appropriate response to a Browser call with OAuth or Single Sign-on may be a redirect to an external login.

Further, CADI recognizes real-time whether the caller is a Machine or a Person (Browser), and sends the appropriate response.

Do you support more than one Authorization Protocol?

Yes, Configured Authorization Protocols are in existence in CADI today. AAF supports and integrates into OAuth Authorization as well as AAF's Fine-Grained Authorization at the same time.

Do you support more than one Authorization Protocol at the same time?

Yes. CADI is able to work with OAuth2 Authorization information when it is available. If both OAuth2 and AAF information is requested, CADI extracts the AAF Fine-Grained permissions from the Introspection without a network call. If there is no tokenization, CADI queries and caches a call to AAF Fine-Grained Permissions.

How do you swap authorisation provider?

Update/replace the central Auth* implementation with an adapter for the new provider. No impact to microservices.

Swapping providers of each Authorization Protocol is a matter of configuring the Endpoint. There is no impact to Microservices, for instance, in trading out one OAuth2 provider endpoint for another OAuth2 endpoint.

Do you support JWT Tokens for carrying Authorization?

CADI was asked first to support OAuth2 Tokens. This was done by creating an OAuth2 LUR, which is provided in the Beijing Release.

If a JWT Token method is proposed, then the details of the JWT Protocol need to be laid out. However, a new CADI LUR would extract the AAF permissions from the JWT token, like it does OAuth2 Tokens today, and make those available to the App.

Further, if this LUR is created, CADI would then accept 3 different LURs at runtime, choosing which one is appropriate on behalf of the App.

Can Security be externalized as a K8 MicroService?

There already exists today a K8 Microservice which encapsulates CADI behavior locally, and provides this to other Microservices within the Container when required for various reasons like language, or convenience.

Given this Microservice is already being used by a major contributor to ONAP, it would not likely take much to have this moved to ONAP.

It should be noted that a Separate Microservice, either the above, or the Auth* proposal requires additional overhead because

  • Security calls, when required, are passed to another entity to make on its behalf, not made locally. This reduces speed and resiliency
  • A Separate MicroService solution would have to Translate any Authentications or Authorizations into a common format.

How does an app include the solution in K8?

At the moment, Apps are pulling in CADI Code via Maven, and configuring, as you would any Java library.

However, the current proposal is to create a Docker Image with all the CADI elements (including Configurations) built in, making it easier for Microservices to include.

The point is that CADI is flexible enough to work in many scenarios, so there are unexplored options to make CADI even easier in a Container world.

Is your solution available outside of K8?

CADI (for Java) works as a standalone process, within AppServers, and as Adapters in Tools such as Cassandra, Shiro, etc. Essentially, anything that can utilize Java can use CADI.

As other Languages are built around CADI, those are available in all kinds of deployment scenarios as well.

K8 Microservices, of course, only run in K8. If the choice is made in the future to choose another Container system, CADI still works, only jettisoning the K8 specific portion.

What is the impact of new security capabilities on microservices? E.g. adding SAML support for interfaces?

Update the Auth* credential config with the location of the new credential type. Deploy the updated credential config. No rebuild/redeploy of the MS.

SAML would be a new Protocol (see above). The decision to pull the protocol into the service itself, or into a separate Microservice is a policy one.

If you want to use the new Protocol directly, then you include it on the next build. Otherwise, you could implement it only in the external Microservice, as described above.

What is the impact on a microservice of patching security?

Rebuild the MS image with the updated filter/REST client jar.

Security patching comes in many flavors.

1) Java (or other) Patching - rebuild MS Image

2) CADI Jar patching - rebuild users of Jars

a) If this is the separate Microservice referenced above, then rebuild that image.

b) If Apps include CADI Directly, then rebuild, using extensive ONAP CI/CD system, and allow it to build the image. Update images.

Per micro-service, what configuration is required?

TLS cert management (to talk to the Auth* service), a vanity URI for the Auth* service, and filter configuration – a declaration of the authorisations required per supported URI pattern.

Configuration depends on the Protocol, but each Protocol is loaded by property in a property file chain depending on what is desired.

* TLS 2-way Authentication requires access to Keystore, and an alias

* Basic Auth requires endpoint to BasicAuth provider

* OAuth requires Endpoints to Token Server, Introspect Servers

Being Pluggable, if you don't want any particular Protocol or its endpoint, you simply don't configure it.
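As an illustration of the property-driven configuration described above, a property file along these lines could enable each protocol by its presence. All property names and values below are hypothetical examples, not actual CADI property keys.

```properties
# Hypothetical per-protocol configuration -- a protocol is enabled simply
# by configuring it, and disabled by leaving it out.
cadi_keystore=/opt/app/certs/keystore.p12
cadi_keystore_alias=myservice
basic_auth_endpoint=https://auth.example.org/validate
oauth2_token_endpoint=https://auth.example.org/token
oauth2_introspect_endpoint=https://auth.example.org/introspect
```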

How are calls to the authentication/authorisation services minimised?

Auth* doesn't cache, but it does propagate a JWT that can be locally interrogated for claims.

CADI optimizes the calls based on Protocol.

Examples of optimization:

2-way x509 Certificate Authentication

CADI developed a way to get the Identity from the Certificate with no external calls whatsoever, using the certificate already prepared by TLS. There are NO external calls, and process time is in the hundredths-of-a-millisecond range.
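To illustrate the idea under simplified assumptions: a servlet container exposes the client certificate chain via the "javax.servlet.request.X509Certificate" request attribute, and an identity can then be derived from the certificate's subject DN with no network calls. The sketch below starts from the subject DN string directly to stay self-contained; it is not the CADI implementation.

```java
import javax.security.auth.x500.X500Principal;

// Derive an identity from certificate subject data locally.
public class CertIdentity {
    // Extract the CN attribute from an RFC 2253 distinguished name.
    static String commonName(String dn) {
        String canonical = new X500Principal(dn).getName(X500Principal.RFC2253);
        for (String part : canonical.split(",")) {
            if (part.trim().startsWith("CN=")) {
                return part.trim().substring(3);
            }
        }
        return null; // no CN attribute present
    }

    public static void main(String[] args) {
        // In a container this DN would come from the TLS client certificate.
        String subject = "CN=myservice@example.org, OU=ONAP, O=Example";
        System.out.println(commonName(subject)); // myservice@example.org
    }
}
```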

OAuth2

CADI creates a memory cache backed by an encrypted, persisted reference to the Introspection data, which is a one-time call until the token expires. This is utilized over the life of the Token. Further, CADI manages the Refresh of the Token, if required, without the App needing to interact.

Regular AAF Call

With Beijing, regular AAF calls utilize the OAuth2 Persisted Memory solution, rather than previous Caching method in Amsterdam.

Basic Auth

Since Basic Auth requires an external endpoint, these results are cached.

Proposed JWT Protocol

If a new JWT Protocol is proposed and implemented by ONAP, the JWT Token would be on each transaction, and would not require caching.

How is blacklisting handled?

The first microservice that encounters a transaction authenticates the supplied token with Auth* (and thus the authentication provider). If the principal or the token is invalid, the request is rejected; otherwise a JWT is returned. Subsequent requests in the transaction use the JWT (which has a configurable TTL).

Blacklisting involves one or both of Authentication and Authorization. For success, any given client must be Authenticated and Authorized, and since this varies by Protocol/Implementor, how this is done is based on CADI's pluggable TAF and LUR adapters.

2-way x509 Certificate

Certificate Authority choice - probably a CRL (certificate revocation list)

OAuth2

OAuth2 requires refreshing of Tokens after expiration. If user is blacklisted before Refreshing, standard errors apply

Regular AAF Call

Removal of User from Roles he is part of

Basic Auth

When Cache Expires, it is rechecked. If Password no longer valid, Client cannot proceed.

How are non-JVM languages supported?

Requires an equivalent implementation of the Auth* filter and REST client in the target language for full support.

The CADI methodology and interfaces can be applied to other languages. Currently, there is already a JavaScript version built, which we are looking to bring into ONAP. There are requests for other direct Language builds, which can be built according to specs by interested parties.

However, if other languages are not initially supported, then the additional Microservice recommended above can be utilized in a manner similar to the "Auth*" proposal.

Comments on the other solution/"final words"

The Auth* proposal requires first defining, then creating a new PROTOCOL (see definition at top of page), based on JWT. All ONAP microservices would then be required to adopt this new Protocol with a CADI like (but not CADI) Library for their own language.

CADI does not oppose a new PROTOCOL based on JWT, but asserts that as a pluggable library, this PROTOCOL can already be accommodated by normal CADI methodology.

Imposition of only one specific PROTOCOL between Microservices is a Policy decision, by Security Committee, easily implemented by CADI as it exists today.

In one such policy, if the ONAP decision makers decide to accept only x509 Certificates, then ONAP Microservices can be configured with only CADI x509 Certificates. Or, if they decide that Microservices should have both x509 and BasicAuth, CADI can be configured for both, and applied everywhere.

In such a way, the community asked for OAuth2 to be built for Beijing. AAF/CADI provided this. If the Committee reverses its decision, and decides it doesn't like OAuth Tokens but prefers to go with a new JWT token methodology, then this can be built and configured as normal. It does not change CADI or how it works. There is no problem if the Committee changes its mind back, or if individual companies decide they want to do OAuth while others choose JWT.

This is also true of Company specific Protocols, like specialized SSO Strategies.

With CADI, the idea of pluggable Security Protocols, both Authentication and Authorization already exists. Both the Protocols and the Providers are already pluggable.

The Auth* idea of creating a separate MicroService to accommodate non-Java languages is laudable, but such a solution already exists and works with AAF. It may be better to obtain this working model rather than build new.

Open Issues

      1. What considerations are needed when propagating a (JWT) token between microservices? i.e. how do we stop tokens being hijacked/re-purposed?
        <Andy> - my initial view was that:
        (i) Microservices are code-scanned/reviewed to ensure that their code is trustworthy.
        (ii) Interactions between microservices are via TLS/2-way cert, establishing non-repudiation between parties - we can trust that a client (application) is who they say they are, and the comms between them is secured.
        If (i) and (ii) are both true, then where does the risk of hijack originate?

There is a practical/technical question regarding how the token is made available without being invasive to the microservices developer.

The current proposal is to transparently inject the JWT/credentials (referred to as tokens in this subject) into the relevant headers for propagation:

        • Mediate all REST requests via org.onap.aai.restclient.client (may need package name generalisation)
        • The filter injects the thread-local tokens into the REST client, setting the relevant headers and request context with the token information.
        • The microservice application code invokes the rest client to call out to onward microservices.
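The thread-local injection described in the bullets above can be sketched as follows. The class and header names are illustrative, not the actual org.onap.aai.restclient.client API; the point is that the application code never handles the token itself.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of transparent token propagation between the Auth* filter and
// the shared REST client.
public class TokenContext {
    private static final ThreadLocal<String> TOKEN = new ThreadLocal<>();

    // Called by the Auth* filter once the incoming request is authorised.
    static void set(String jwt) { TOKEN.set(jwt); }

    // Called by the filter when the request completes, to avoid leaking
    // the token to the next request handled by this thread.
    static void clear() { TOKEN.remove(); }

    // Called by the REST client when the microservice makes an onward
    // call; the token is attached without application involvement.
    static Map<String, String> outgoingHeaders() {
        Map<String, String> headers = new HashMap<>();
        String jwt = TOKEN.get();
        if (jwt != null) headers.put("Authorization", "Bearer " + jwt);
        return headers;
    }

    public static void main(String[] args) {
        set("example.jwt.token");
        System.out.println(outgoingHeaders()); // token propagated downstream
        clear();
        System.out.println(outgoingHeaders()); // nothing to propagate
    }
}
```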

Positives

        • Aside from the requirement to use the REST Client for communication, the approach is transparent to the application developer.
        • Single piece of code to secure.
        • Common place to log outgoing requests with transactionId and target for traceability.

Issues

        1. Needs an implementation per language/technology - e.g. would need a Python equivalent
        2. Creates a coupling with the Auth* filter

All guidance on the subject appreciated.

2. Revocation of tokens:

        1. Requests to the Auth* service are checked for token revocation.
        2. JWTs should be short lived and can be configured with short TTLs.

Next Steps

  • Sub-committee/contributor review of proposal, clarifications and refinement.
  • Determine delivery vehicle for the capability in ONAP.

    – add library for implementation + configuration

    8 VNF Package Security

    Status: Draft

    8.1 Introduction

    The scope of this item is verification of the integrity and authenticity of the

    • VNF package
    • The artifacts in the VNF package

    In general, the intention is to align with ETSI NFV specifications in this area; see the references in 8.5. The published version of [ETSI NFV SOL004] is the main specification to follow. Going forward, [ETSI NFV SEC021] shall also be considered (as of Feb 2018 the work has started).

    8.2 Use Cases

    8.2.1 Priority 1: VNF Package Verification

    Integrity of the VNF package needs to be verified prior to, or at the time of onboarding. The purpose is to ensure that the VNF package originates from the vendor, and that the content has not been tampered with. The verification is done against the signature provided by the vendor. Reference [ETSI NFV SOL004] contains the detailed specifications.

    8.2.2 Priority 2: Integrity Verification at Instantiation

    At instantiation, the integrity of VNF image and related files shall be verified. The options are:

    A) Verify against the signature provided by the vendor. [ETSI NFV SOL004] specifies “The VNF provider may optionally digitally sign some artifacts individually”.
    B) Verify against the signature created by the service provider. [ETSI NFV SOL004] specifies “If software images or other artifacts are not signed by the VNF provider, the service provider has the option, after having validated the VNF Package, to sign them before distributing the different package components to different function blocks or the NFVI”.
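As a hedged illustration of the check underlying both options A and B (not a SOL004 implementation), artifact bytes are verified against a signature using the signer's public key: the vendor's in option A, the service provider's in option B. A freshly generated key pair stands in for the real certificate handling.

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.PrivateKey;
import java.security.PublicKey;
import java.security.Signature;

// Minimal detached-signature sign/verify round trip.
public class ArtifactVerify {
    static byte[] sign(PrivateKey priv, byte[] data) throws Exception {
        Signature s = Signature.getInstance("SHA256withRSA");
        s.initSign(priv);
        s.update(data);
        return s.sign();
    }

    static boolean verify(PublicKey pub, byte[] data, byte[] sig) throws Exception {
        Signature v = Signature.getInstance("SHA256withRSA");
        v.initVerify(pub);
        v.update(data);
        return v.verify(sig);
    }

    public static void main(String[] args) throws Exception {
        KeyPair kp = KeyPairGenerator.getInstance("RSA").generateKeyPair();
        byte[] artifact = "vnf-image-contents".getBytes();
        byte[] sig = sign(kp.getPrivate(), artifact);

        // An untampered artifact verifies; a tampered one does not.
        System.out.println(verify(kp.getPublic(), artifact, sig));
        System.out.println(verify(kp.getPublic(), "tampered".getBytes(), sig));
    }
}
```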

    8.2.3 Priority 3: Service Provider Ability to Sign the Artifacts

    If the vendor did not sign artifacts (inside the VNF package) individually, the service provider may want to sign them. Also, if the service provider needs to modify or add any artifacts, the service provider may want to sign those.

    8.3 ONAP Impacts

    Tentatively, the following projects are impacted:

    • VNF SDK: need to interpret the VNF Package according to [ETSI NFV SOL004]
    • SDC, APP-C, VF-C: need to interpret the manifest file according to [ETSI NFV SOL004]
    • VNF Requirements

    AAF should not be impacted, because it already supports installation of trusted certificates.

    8.4 Certificate Assumptions

    On the CA issuing the VNF package signing certificate, [ETSI NFV SOL004] specifies: “This solution, either option 1 or option 2, relies on the existence in the NFVO of a root certificate of a trusted CA that shall have been delivered via a trusted channel that preserves its integrity (separate from the VNF package) to the NFVO and be pre-installed in the NFVO before the on-boarding of the VNF package.
    NOTE: The present document makes no assumption on who this trusted CA is. Furthermore, it does not exclude that the root certificate be issued by the VNF vendor or by the NFVI provider.

    If the signing certificate has been issued by the vendor's own CA, the related root CA certificate has to be installed in ONAP AAF as a trusted certificate.

    8.5 References

    [ETSI NFV SOL004]
    ETSI GS NFV-SOL 004 V2.3.1 (2017-07): 
    http://www.etsi.org/deliver/etsi_gs/NFV-SOL/001_099/004/02.03.01_60/gs_nfv-sol004v020301p.pdf

    [ETSI NFV SEC021]
    ETSI NFV SEC021 work item description: https://portal.etsi.org/webapp/WorkProgram/Report_WorkItem.asp?WKI_ID=53601
     

    9 Nexus IQ Known vulnerability process.

    Status: Draft

    9.1 Purpose

    Clarity is required on the following aspects:

    • The process that the projects will follow for analyzing known vulnerabilities
      • To address how a project can mark a known vulnerability as not impacting ONAP
      • What oversight is required
      • To address the case where a component in use itself uses other components that have vulnerabilities.
    • The policies in Nexus IQ to make the vulnerability status more visible.

    9.2 Known Vulnerability scanning

    9.3 Nexus IQ policies


    10 (tmp) input to the S3P (carrier grade) discussions from a security perspective

    Status: Draft

    Note: This will be removed when the feedback is sent back.

    The full list of the needs can be found at:  https://wiki.onap.org/plugins/servlet/mobile?contentId=1015829#content/view/15998867 

    Security:

    Per project:

    • Level 0: None
    • Level 1: CII Passing badge
    • Level 2: CII Silver badge, plus:
      • All internal/external system communications shall be able to be encrypted.
      • All internal/external service calls shall have common role-based access control and authorization.
    • Level 3: CII Gold badge 


    Note: When creating the CII project entry, it is recommended to use ONAP in the title to facilitate searching the ONAP projects.

    Per Release:

    • Level 1: 70% of the projects included in the release at passing badge level
      • with non-passing projects reaching 80% towards passing level.
      • Non-passing projects MUST pass these specific criteria:
        • The software produced by the project MUST use, by default, only cryptographic protocols and algorithms that are publicly published and reviewed by experts (if cryptographic protocols and algorithms are used).
        • If the software produced by the project is an application or library, and its primary purpose is not to implement cryptography, then it SHOULD only call on software specifically designed to implement cryptographic functions; it SHOULD NOT re-implement its own.
        • The security mechanisms within the software produced by the project MUST use default keylengths that at least meet the NIST minimum requirements through the year 2030 (as stated in 2012). It MUST be possible to configure the software so that smaller keylengths are completely disabled.
        • The default security mechanisms within the software produced by the project MUST NOT depend on broken cryptographic algorithms (e.g., MD4, MD5, single DES, RC4, Dual_EC_DRBG) or use cipher modes that are inappropriate to the context (e.g., ECB mode is almost never appropriate because it reveals identical blocks within the ciphertext as demonstrated by the ECB penguin, and CTR mode is often inappropriate because it does not perform authentication and causes duplicates if the input state is repeated).
        • The default security mechanisms within the software produced by the project SHOULD NOT depend on cryptographic algorithms or modes with known serious weaknesses (e.g., the SHA-1 cryptographic hash algorithm or the CBC mode in SSH).
        • If the software produced by the project causes the storing of passwords for authentication of external users, the passwords MUST be stored as iterated hashes with a per-user salt by using a key stretching (iterated) algorithm (e.g., PBKDF2, Bcrypt or Scrypt).
    • Level 2: 70% of the projects in the release passing silver
      • with non-silver projects completing passing level and 80% towards silver level
    • Level 3: 70% of the projects included in the release passing gold
      • with non-gold projects achieving silver level and 80% towards gold level
    • Level 4: 100% of the projects in the release passing gold level.
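The password-storage criterion above (iterated hash with a per-user salt) can be illustrated with the JDK's standard PBKDF2 support. This is a sketch only; the iteration count and output length here are illustrative, and deployments should follow current guidance.

```java
import java.security.SecureRandom;
import java.util.Arrays;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

// PBKDF2 password hashing with a per-user random salt.
public class PasswordStore {
    // A fresh random salt is generated per user and stored beside the hash.
    static byte[] newSalt() {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);
        return salt;
    }

    // Iterated (key-stretching) hash: 100,000 rounds, 256-bit output.
    static byte[] hash(char[] password, byte[] salt) throws Exception {
        PBEKeySpec spec = new PBEKeySpec(password, salt, 100_000, 256);
        return SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                .generateSecret(spec).getEncoded();
    }

    public static void main(String[] args) throws Exception {
        byte[] salt = newSalt();
        byte[] stored = hash("s3cret".toCharArray(), salt);
        // Same password + same salt reproduces the hash; a wrong password
        // does not, so only salt and hash need to be stored.
        System.out.println(Arrays.equals(stored, hash("s3cret".toCharArray(), salt)));
        System.out.println(Arrays.equals(stored, hash("wrong".toCharArray(), salt)));
    }
}
```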


    Examples of use cases that people may want to see solved:

    5. Examples of secure communication between ONAP components

    6. Examples of secure communication between ONAP and other components.

    7. User provisioning, and its relation to access to other systems.