ONAP Security Logging
Goal: ONAP containers built from Java-based Docker images emit certain security-related logging fields that are not present in the non-Java (Python-based) images. The task is to implement these security logging fields in the Python-based ONAP containers so that they match their Java counterparts.
 
Steps Taken:

  1. First tried building and running one of the Python-based containers, onap_dcaegen2-collectors-snmptrap, to see what the logs looked like
    1. The container produced the following logs containing data:
      1. debug.log
      2. metrics.log
      3. error.log
      4. snmptrapd_arriving_traps.log
    2. The debug and error logs contained the following security log fields: timestamp, logLevel, message. The following fields were MISSING: logTypeName (optional), traceId, statusCode, principalID, serviceName (see the first sketch after this list)
  2. Investigated how security logging is implemented in the Java-based Docker images
    1. It appears to use a Logstash encoder (from the company Elastic) - the logback configuration for cps-service references this: '<encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">' (https://gerrit.onap.org/r/c/cps/+/128801/13/cps-service/src/main/resources/logback-spring.xml)
  3. There is a Python package called ecs-logging (https://www.elastic.co/guide/en/apm/agent/python/current/log-correlation.html#logging-integrations) which may be relevant (see the second sketch after this list)
    1. ecs-logging-python supports automatically collecting ECS tracing fields from the Elastic APM Python agent in order to correlate logs to spans, transactions and traces in Elastic APM
    2. It could provide a trace id and a transaction id (not fully certain which one would map to the traceId specified in the wiki: https://wiki.onap.org/display/DW/Jakarta+Best+Practice+Proposal+for+Standardized+Logging+Fields+-+v2)
    3. The Elastic APM agent has built-in support for certain web frameworks (Flask being one of them)
  4. Looked for a Python-based ONAP container running Flask (the PM Subscription Handler, pmsh, container)
    1. Since pmsh has several other ONAP dependencies, tried using the docker-compose.yml found here (https://github.com/onap/integration-csit/tree/master/plans/dcaegen2-services-pmsh/testsuite) but encountered many problems (see the Problems section below)
    2. The PMSH code also references some of the fields of interest: ServiceName, RequestID (https://github.com/onap/dcaegen2-services/blob/master/components/pm-subscription-handler/pmsh_service/mod/__init__.py). The goal was also to see what data populated these fields, but the container never built successfully
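For orientation, the following is a minimal sketch (not taken from any ONAP repo) of how the missing fields could be attached to a standard Python logger; the field names follow the wiki proposal above, and all values shown are placeholders:

    import logging

    # Placeholder values for the fields that were missing from debug.log / error.log.
    # In a real collector these would come from the request context or the platform.
    SECURITY_FIELDS = {
        "logTypeName": "security",   # optional per the wiki proposal
        "traceId": "",               # would come from the incoming request / APM agent
        "statusCode": "",            # e.g. COMPLETE or ERROR
        "principalID": "",           # authenticated principal, if any
        "serviceName": "snmptrap",   # hypothetical service name
    }

    # The pipe-delimited layout is only an example; the formatter just needs to
    # reference the extra fields so they appear in every line.
    formatter = logging.Formatter(
        "%(asctime)s|%(levelname)s|%(logTypeName)s|%(traceId)s|"
        "%(statusCode)s|%(principalID)s|%(serviceName)s|%(message)s"
    )

    handler = logging.StreamHandler()
    handler.setFormatter(formatter)

    logger = logging.getLogger("security")
    logger.addHandler(handler)
    logger.setLevel(logging.DEBUG)

    # The extra dict supplies the custom fields referenced by the format string.
    logger.info("collector started", extra=SECURITY_FIELDS)

A logging.Filter could be used to supply default values so that extra does not have to be passed on every call.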

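And a minimal sketch of how ecs-logging-python and the Elastic APM Flask integration could be wired together, assuming the ecs-logging and elastic-apm packages are installed; the service name, APM server URL and route below are made-up placeholders, not values from pmsh:

    import logging

    import ecs_logging
    from elasticapm.contrib.flask import ElasticAPM
    from flask import Flask

    app = Flask(__name__)

    # The APM agent instruments Flask requests; these settings are placeholders.
    apm = ElasticAPM(app, service_name="pmsh", server_url="http://localhost:8200")

    # ECS JSON output: @timestamp, log.level, message, plus trace.id /
    # transaction.id when an APM transaction (i.e. a request) is active.
    handler = logging.StreamHandler()
    handler.setFormatter(ecs_logging.StdlibFormatter())
    app.logger.addHandler(handler)
    app.logger.setLevel(logging.DEBUG)

    @app.route("/healthcheck")
    def healthcheck():
        # This record should carry the trace/transaction ids for the request.
        app.logger.info("healthcheck called")
        return "OK"

Whether trace.id or transaction.id is the right value for the wiki's traceId is the open question noted in step 3 above.
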
Problems
Issues with using the docker-compose file:

  1.  Since the pmsh container has a few dependencies, opted to use the docker-compose.yml from: https://github.com/onap/integration-csit/blob/master/plans/dcaegen2-services-pmsh/testsuite/docker-compose.yml (it does not build successfully)
  2. Issues encountered with the pmsh container:
    1. Issue: could not connect to db
      Resolution: PostgreSQL needs to be installed on the host machine (this was not documented anywhere) and configured with the username and password expected by the Dockerfile
    2. Issue: AttributeError: module 'sqlalchemy' has no attribute '__all__'
      Resolution: This appears to be caused by the __all__ attribute being removed in the recently released SQLAlchemy 2.0 - ended up reverting to the tagged version 1.0.1 of pmsh instead of latest
    3. Issue:  "Environment variable CONFIG_BINDING_SERVICE_SERVICE_PORT must be set. 'CONFIG_BINDING_SERVICE_SERVICE_PORT'"; this environment variable declaration was nowhere to be found in the git repo
      Resolution: Searched for CONFIG_BINDING_SERVICE_SERVICE_PORT and found a value in the hv-ves collector repo; set it to 10000 (https://github.com/onap/dcaegen2-collectors-hv-ves/blob/master/tools/development/docker-compose.yml)
    4. Issue: "Environment variable CONFIG_BINDING_SERVICE_SERVICE_HOST must be set. 'CONFIG_BINDING_SERVICE_SERVICE_HOST'"; similar to port above, this environment variable declaration was nowhere to be found in the git repo
      Resolution: set it to localhost (this may need to be changed)
    5. Issue: RetryError[<Future at 0x7f6f0ad16210 state=finished raised Exception>]
      Resolution: This appears to be an error from an async retry loop. Stopped debugging at this point, as there seemed to be no end to the errors for a container published from the master branch

Issues trying to build containers individually:

  1. Tried building individual ONAP components according to the documentation (https://docs.onap.org/projects/onap-dcaegen2/en/kohn/sections/services/pm-subscription-handler/installation.html) instead of using the docker-compose.yml
  2. With dcaegen2-config-binding, got this error while building the container:
    Step 10/13 : RUN apk add build-base libffi-dev &&     pip install --upgrade pip &&     pip install .
     ---> Running in 875df5da10a7
    fetch http://dl-cdn.alpinelinux.org/alpine/v3.12/main/x86_64/APKINDEX.tar.gz
    fetch http://dl-cdn.alpinelinux.org/alpine/v3.12/community/x86_64/APKINDEX.tar.gz
    ERROR: http://dl-cdn.alpinelinux.org/alpine/v3.12/main: temporary error (try again later)
    WARNING: Ignoring APKINDEX.2c4ac24e.tar.gz: No such file or directory
    ERROR: http://dl-cdn.alpinelinux.org/alpine/v3.12/community: temporary error (try again later)
    WARNING: Ignoring APKINDEX.40a3604f.tar.gz: No such file or directory
    ERROR: unsatisfiable constraints:
      build-base (missing):
        required by: world[build-base]
      libffi-dev (missing):
        required by: world[libffi-dev]
    The command '/bin/sh -c apk add build-base libffi-dev &&     pip install --upgrade pip &&     pip install .' returned a non-zero code: 2

Future things to look into once a stable container is built:
Background: Java logging has a feature called Mapped Diagnostic Context (MDC), defined as "a way to enrich log messages with information that could be unavailable in the scope where the logging actually occurs but that can be indeed useful to better track the execution of the program". Log4j allows for the implementation of MDC. The PM Subscription Handler has a function that attempts to implement a form of MDC logging: https://github.com/onap/dcaegen2-services/blob/master/components/pm-subscription-handler/pmsh_service/mod/__init__.py (see the sketch below).
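As a rough illustration of what an MDC equivalent might look like in Python (a hypothetical sketch, not the pmsh implementation), context variables plus a logging.Filter can carry request-scoped values onto every log record; the field names here loosely mirror the ServiceName / RequestID references in the pmsh module:

    import contextvars
    import logging

    # MDC-like storage: values set here are visible to any log call made in the
    # same context (e.g. while handling one request).
    request_id = contextvars.ContextVar("request_id", default="")
    service_name = contextvars.ContextVar("service_name", default="pmsh")

    class MdcFilter(logging.Filter):
        """Copy the context variables onto every log record (the MDC role)."""
        def filter(self, record):
            record.requestId = request_id.get()
            record.serviceName = service_name.get()
            return True

    handler = logging.StreamHandler()
    handler.setFormatter(logging.Formatter(
        "%(asctime)s|%(levelname)s|%(serviceName)s|%(requestId)s|%(message)s"))
    handler.addFilter(MdcFilter())

    logger = logging.getLogger("mdc-demo")
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)

    # e.g. set at the start of handling a request, reset afterwards
    request_id.set("1234-abcd")
    logger.info("subscription processed")  # record now carries requestId / serviceName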
 
Things to explore:
1.    A Python library said to be similar to Log4j2: https://pypi.org/project/log4python/
2.    Logging in Python 3 (Like Java Log4J/Logback): https://coding-stream-of-consciousness.com/2018/11/26/logging-in-python-3-like-java-log4j-logback/
