
Issues and Decisions

Each numbered item below lists the issue under discussion, notes, and the decision.

1. Are the logging guidelines set by the Logging Enhancement Project suitable for a cloud environment?

2. Do we need a second appender for errors?

  • One file only.

  • Later we will have a file to track who is accessing it and who is registering for access, but we are not at that point yet.

  • Within the log folder we could have multiple files:

    • debug.log
    • audit.log
    • metric.log
    • error.log

    Logs will be archived each day.

Meeting Notes 11/12/20 

https://docs.onap.org/projects/onap-logging-analytics/en/latest/Logging_Enhancements_Project/logging_enhancements_project.html#id33

The logging enhancement team has proposed splitting the log into multiple files:

  • debug.log
  • audit.log
  • metric.log
  • error.log


Toine has suggested we follow this approach but exclude error logging. 


Before making any more decisions we will investigate this project, as we have some concerns about the logging standards in the Logging Enhancement Project.

3. Should the application only log to stdout and not to files?

  • By doing so we can leverage the standard Kubernetes logging design.

  • If logging to files is provided, it would be good to also have a way to disable it, to avoid file space management questions on the worker nodes or PVCs, depending on where the log files are kept.

  • For monitoring, it is more complex to operate applications that each use their own specific log files.

  • By having all applications log to stdout, all logs can be collected in a common, standard way and published and re-used wherever is convenient for the operations team (using Filebeat or the ELK stack, for example).

    See https://kubernetes.io/docs/concepts/cluster-administration/logging/

  • Whether the user is running on Kubernetes/Azure or a single manual deployment, the feature capability should be the same. We should not enforce the use of anything specific to achieve the same service solution.

  • Many companies enforce having these separate types of log files by security policy. Usually they store the audit logs.
    With log properties, you can always change the log level, and also alter or disable the log appenders at runtime.

Meeting Notes 11/12/20 

  • Everything a containerized application writes to stdout and stderr is handled and redirected somewhere by the container engine. For example, the Docker container engine redirects those two streams to a logging driver, which in Kubernetes is configured to write to a file in JSON format. Use a log rotation script.

  • We want the logging to be easily configurable and to allow for different implementations of how logs are collected. We will expose this configuration in the Helm charts.

  • Our default logging will be stdout as it is easily configurable.



4. Is the file location OK?

../log/${logName}.log

I think this is OK. Logs will be placed in the pods once deployed.

I think we will need to set a property in our SpringBootApplication class for the log directory (see the sketch below).
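
A minimal sketch of one way to do that, assuming the logback configuration resolves a ${logDirectory} placeholder; the property name and the path are illustrative assumptions, not agreed values:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class CpsApplicationSketch {

    public static void main(final String[] args) {
        // Set the log directory before the application context (and logback) starts,
        // so that a ${logDirectory} placeholder in logback-spring.xml resolves to it.
        // "logDirectory" and "../log" are assumptions for illustration only.
        System.setProperty("logDirectory", "../log");
        SpringApplication.run(CpsApplicationSketch.class, args);
    }
}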


5. Disk space

<property name="maxFileSize" value="20MB" />

Once a log file reaches this size it is zipped.


6. What kind of message should go with which level for logging?

See 'Summary of each logging level' below.

We could consider a property on CPSException that differentiates between error, warning and even fatal if we want.

Business-logic exceptions will be logged in an audit log (future).

For now we will add them at a lower logging level if you feel the log is helpful; it depends on the case.

e.g. If the user tries to add an anchor with no dataspace we will log "Dataspace does not exist". This will be logged in the place it is handled.

Sonar advises to either log the exception or throw it.

You should log the original exception if it provides more information.

It is up to the consumer of the Java API whether they want to log the exception or not.

In the REST API, when we catch an exception we should log it, as we lose a lot of information when we convert it into an HTTP response code (see the sketch below).

For business exceptions we will use error level, but we will add configuration so that they are not written to the console by default.
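
A minimal sketch of logging in the REST layer at the point where the exception is handled, before the detail is lost in the HTTP response; the class and exception names are illustrative assumptions, not the actual CPS code:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.ExceptionHandler;
import org.springframework.web.bind.annotation.RestControllerAdvice;

@RestControllerAdvice
public class RestExceptionHandlerSketch {

    private static final Logger log = LoggerFactory.getLogger(RestExceptionHandlerSketch.class);

    // RuntimeException is used as a placeholder for the (hypothetical) CPS exception types.
    @ExceptionHandler(RuntimeException.class)
    public ResponseEntity<String> handleException(final RuntimeException exception) {
        // Log the original exception here: once it is mapped to a status code,
        // most of the diagnostic information is gone.
        log.error("Request failed: {}", exception.getMessage(), exception);
        return new ResponseEntity<>(exception.getMessage(), HttpStatus.INTERNAL_SERVER_ERROR);
    }
}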

7. Where exactly should we log an error: at the source or in a common central place?

    1. Prevent the same error being logged many times (some duplication might be unavoidable though).
    2. Consider that the REST layer is optional (which is why the current commit is not a good solution for centralized logging).

It should be logged in the place the error is generated.
8. How should we format our logs?

String concatenation:
logger.debug("No of Orders " + noOfOrder + " for client : " + client);

Parameterized message:
logger.debug("No of Executions {} for clients:{}", noOfOrder, client);

Decision: use the parameterized form:
logger.debug("No of Executions {} for clients:{}", noOfOrder, client);
9. Should we use log.isDebugEnabled()?

I think we should if the cost of performing the log is expensive, for example if we need to build a parameter for the log (see the sketch below, after item 10).

Yes, we will check the log level.

10. What logging framework should we use? Should we configure SLF4J to use Log4j, Log4j 2, Logback, etc.?
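
Relating to item 9 above, a minimal sketch of guarding an expensive debug message with log.isDebugEnabled(), assuming an SLF4J logger; the class and helper method are hypothetical, the helper standing in for a parameter that is expensive to build:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class DebugGuardSketch {

    private static final Logger log = LoggerFactory.getLogger(DebugGuardSketch.class);

    public void logExecutions(final int noOfExecutions, final String client) {
        // The guard is only worth it when building the log argument is expensive;
        // a plain parameterized message does not need it.
        if (log.isDebugEnabled()) {
            log.debug("No of Executions {} for clients: {}", buildExecutionSummary(noOfExecutions), client);
        }
    }

    // Hypothetical expensive helper, used only to illustrate the guard.
    private String buildExecutionSummary(final int noOfExecutions) {
        return "total=" + noOfExecutions;
    }
}
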
11. What kind of information should we log?

  1. Never log sensitive information as plain text.
  2. Log all important information that is necessary to debug or troubleshoot a problem if it happens.
  3. Always log decision-making statements, e.g. the application loads some settings from a preference file and is unable to find the file (see the sketch below).

See CPS Logging Guidelines.
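
A minimal sketch illustrating both points, assuming an SLF4J logger; the class name, file handling and message wording are illustrative assumptions, not actual CPS code:

import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class PreferenceLoaderSketch {

    private static final Logger log = LoggerFactory.getLogger(PreferenceLoaderSketch.class);

    public Properties loadPreferences(final Path preferenceFile) {
        final Properties properties = new Properties();
        if (!Files.exists(preferenceFile)) {
            // Decision-making statement: record that the application falls back to defaults and why.
            log.warn("Preference file {} not found, using default settings", preferenceFile);
            return properties;
        }
        try (InputStream inputStream = Files.newInputStream(preferenceFile)) {
            properties.load(inputStream);
            // Never log sensitive values as plain text: log counts or key names, not the values.
            log.debug("Loaded {} preference entries from {}", properties.size(), preferenceFile);
        } catch (final IOException exception) {
            log.error("Could not read preference file {}", preferenceFile, exception);
        }
        return properties;
    }
}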


Summary of each logging level

...