Jira ticket: LOG-395 (ONAP JIRA)

1. Upgrade of ELK

...

ELK Upgrade

    Note: The Bath team (in charge of search-data-service, @Colin Burns) is planning to upgrade Elasticsearch to 6.1.2 (based on AT&T-approved versions) by the end of June.

...

  • Automatic deployment of a separate Kibana (version 6.1.2) for POMBA (currently it is installed manually), with all required configuration (kibana.yml, index pattern creation) and pre-installation of the POMBA dashboards (see the sketch after this list)
  • (Q) Would it be a good idea to use the Kibana provided in the onap-log pod?
    • pros: no redundant Kibana install; a single integrated place for all views
    • cons: dependency on onap-log (e.g., its version); the instance gets complex with many different types of dashboards
    • to-do: only the configuration and import of the POMBA dashboards
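
A minimal sketch of what the automated Kibana configuration could look like, assuming the saved objects API available in Kibana 6.x and Python with the requests library. The Kibana URL, the index-pattern name ("pomba-audit-*"), and the time field are placeholders, not the actual POMBA naming:

    import requests

    KIBANA = "http://localhost:5601"  # assumption: address of the POMBA Kibana
    HEADERS = {"kbn-xsrf": "true", "Content-Type": "application/json"}

    # Create the index pattern the POMBA dashboards will be built on.
    # "pomba-audit-*" and "@timestamp" are placeholders; use whatever index
    # naming and time field the data-router actually writes.
    resp = requests.post(
        KIBANA + "/api/saved_objects/index-pattern/pomba-audit",
        headers=HEADERS,
        json={"attributes": {"title": "pomba-audit-*",
                             "timeFieldName": "@timestamp"}},
    )
    resp.raise_for_status()
    print("created index pattern:", resp.json()["id"])

The dashboard pre-installation could be scripted the same way, by POSTing the exported dashboard and visualization saved objects through the same API.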

2. Data Enrichment

Common Model Data Fields

The data fields that are currently available, or will be available, from the POMBA context builders are listed on the page Context Builders Mapping to Common Model (see the Java field names).

Specifically, we need to add the highlighted fields for network discovery while developing the Network Discovery Context Builder.


Data Enrichment

(with questions) The following discusses enrichment opportunities for the audit validation/violation data being pushed to Elasticsearch. Most of this work could be done in the data-router micro-service code instead of in Logstash.

...

  • The violation event needs additional fields taken from the "violations" field available in the validation info, including: modelName, violationDetails, and an indicator for manual vs. automatic triggering
  • The field violationDetails (which would tell the exact discrepancies; see the sample event below inside the '?violations') needs to be parsed and stored in separate fields, since such nested data cannot be used directly in Kibana visualizations (the '?' mark indicates this); a flattening sketch follows this list
  • We could further parse out the ONAP components involved in each violation (from violationDetails) to see violation stats broken down by component
  • Would "audit duration" stats be useful? That is, the time taken for the audit itself (from trigger to result)
  • Would any other metadata be useful? E.g., who invoked the validation (user, department)
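
As a rough illustration of the violationDetails parsing mentioned above, here is a Python sketch of the kind of flattening the data-router could do before indexing. The event shape and field names are assumptions based on the discussion, not the actual schema:

    def flatten(obj, prefix=""):
        """Recursively flatten nested dicts/lists into dot-notation keys, e.g.
        {"violationDetails": {"aai": {"vnf-name": "x"}}}
        -> {"violationDetails.aai.vnf-name": "x"}."""
        flat = {}
        if isinstance(obj, dict):
            for key, value in obj.items():
                flat.update(flatten(value, prefix + key + "."))
        elif isinstance(obj, list):
            for i, value in enumerate(obj):
                flat.update(flatten(value, prefix + str(i) + "."))
        else:
            flat[prefix[:-1]] = obj
        return flat

    def enrich_violation(event):
        """Copy the nested violationDetails out into flat, top-level fields so
        Kibana can aggregate on them directly (hypothetical field names)."""
        details = event.get("violationDetails") or {}
        event.update(flatten(details, "violationDetails."))
        return event

With fields flattened this way, Kibana can build the per-component and per-discrepancy aggregations discussed above without hitting the nested-data limitation.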



3. Data Preparation in Local Lab

To create proper dashboards, we need to populate our local lab environment with a reasonable amount of audit-results data consisting of various types of validation and violation cases. We want the data to reflect production reality as closely as possible, which will help create more useful dashboards.

Approach 1: Copy the audit results from IST to the dev lab

  • Script A runs to collect a list of info that will be used as input parameters for the audit requests: serviceInstanceId, modelInvariantId, modelVersionId, customerId, serviceType (Scripts A and B are sketched after this list)
  • Script B runs to send audit requests using the data collected above; the requests need to be properly distributed over time to make the data more realistic
  • Manually collect the Elasticsearch dump (which will contain all the audit validation/violation events) and import it into the Elasticsearch instance in the dev lab
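
A rough Python sketch of what Scripts A and B could look like; the audit-trigger endpoint, payload shape, and CSV input are assumptions for illustration. The Elasticsearch dump/import step can be done with standard tooling (e.g., elasticdump) rather than custom code.

    import csv
    import random
    import time

    import requests

    POMBA = "http://pomba-context-aggregator:9529"  # assumption: audit trigger host/port

    # Script A equivalent: load the collected input parameters
    # (serviceInstanceId, modelInvariantId, modelVersionId, customerId,
    # serviceType), here assumed to have been exported to a CSV file.
    def load_audit_params(path):
        with open(path, newline="") as f:
            return list(csv.DictReader(f))

    # Script B equivalent: send one audit request per row, with a random gap
    # between requests so the events spread over time like production traffic.
    def send_audits(rows, max_gap_s=60):
        for row in rows:
            resp = requests.post(POMBA + "/poa-audit",  # hypothetical path
                                 json=row, timeout=30)
            print(row["serviceInstanceId"], resp.status_code)
            time.sleep(random.uniform(1, max_gap_s))

    if __name__ == "__main__":
        send_audits(load_audit_params("audit_params.csv"))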

Approach 2: Component-level copy from IST to the dev lab

  • Script X runs to GET all necessary info from each component of interest in IST or production (sketched, together with Script Y, after this list)
  • Script Y uses the components' APIs to PUT that info into the corresponding components in the dev lab
  • Run Script A
  • Run Script B
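
A minimal sketch of the Script X/Y idea, assuming each component exposes its resources over REST and the same resource path works on both sides; the hosts, credentials, and the example A&AI path are placeholders:

    import requests

    SRC = "https://ist-aai:8443"   # assumption: source component (IST/production)
    DST = "https://lab-aai:8443"   # assumption: same component in the dev lab
    AUTH = ("user", "password")    # placeholder credentials

    # Script X + Y equivalent: GET a resource from the source component and
    # PUT it into the dev-lab instance under the same path.
    def copy_resource(path):
        obj = requests.get(SRC + path, auth=AUTH, verify=False).json()
        requests.put(DST + path, json=obj, auth=AUTH,
                     verify=False).raise_for_status()

    # Example path; the real list would come from the Script A collection step.
    copy_resource("/aai/v13/business/customers/customer/cust-1")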

After that, as necessary, we can manipulate the data to generate many different types of violations:

  • Manually update the data in some components to generate special violation cases

4. Dashboard Ideas

The visualizations and dashboards will need to be designed and created according to the current and potential use cases of the POMBA services: what the users want or need to check, and how the system could help improve overall platform integrity. We want the POMBA reporting to be informative, insightful, and intuitive from the user's perspective.

...

  • Where necessary, provide links to switch between dashboards: e.g., from the violation page to the page displaying its validation info (a sketch follows this list)
  • Color coding for critical violations
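
One way to implement the back-and-forth links, assuming we stay on Kibana 6.x, is a small markdown visualization placed on each dashboard. The sketch below creates one through the saved objects API; the dashboard ids are placeholders (copy the real ids from each dashboard's URL):

    import json

    import requests

    KIBANA = "http://localhost:5601"  # assumption: POMBA Kibana address
    HEADERS = {"kbn-xsrf": "true", "Content-Type": "application/json"}

    # Markdown body with links between the dashboards; the dashboard ids
    # are placeholders.
    markdown = ("[Validations](/app/kibana#/dashboard/pomba-validations) | "
                "[Violations](/app/kibana#/dashboard/pomba-violations)")

    vis_state = {"title": "POMBA navigation",
                 "type": "markdown",
                 "params": {"markdown": markdown}}

    requests.post(
        KIBANA + "/api/saved_objects/visualization/pomba-navigation",
        headers=HEADERS,
        json={"attributes": {"title": "POMBA navigation",
                             "visState": json.dumps(vis_state)}},
    ).raise_for_status()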
