...
- We have created a few sample validation rules, but we do not know all the rules that users will create in production. That means we can most likely provide some high-level dashboards; for specific rules and details, we can only provide sample dashboards to give end users an idea of how to create their own dashboards customized for their use cases.
- For Network Discovery, what specific audits will be executed, and what kinds of audit results are expected?
Dashboard List
(Note) One dashboard type could need multiple dashboard pages depending on the number of visualizations.
# | Dashboard Type | Description (What the User Wants to See) | Required Information to Show (Visualizations)
---|---|---|---
1 | Overall Audit Monitor | As a general admin, I want to see the whole platform's integrity - health status in terms of all configured validation rules |
2 | Overall Audit Analysis | Which kinds of validations are mostly executed against which models; which kinds of violations mostly occur in which components |
3 | Individual Audit Analysis | Given a validation job, the user wants to see and quickly recognize all relevant violations detected by POMBA |
4 | Violation Analysis for Network Discovery | For the specific use cases of Network Discovery, the user wants to see the audit stats |
5 | Violation Summary Report | Provide a summary list of the validation and violation cases for any potential fixes |
6 | Cure History (stretch) | For the same validation category (e.g., with the same rule and model ID, component set?), the user wants assurance that the violation has been fixed and is now gone, and wants insight into how much POMBA helps improve the overall system integrity |
Supportable Features
- Where necessary, provide links to switch back and forth between dashboards: e.g., from a violation page to the page displaying its validation info
- Color coding for critical violations
3. Data Generation based on Audit Use Cases
Generally, the user could take a few possible approaches to execute the audits and generate audit results:
- Event-Driven Individual Auditing: e.g., a post-orchestration audit triggered by the system or a user for a single service instance
- Combined Audit Now: audit the selected service types and rules with a one-click command. This requires collecting and keeping a list of service instance info for the entire platform.
- Continuous/Scheduled Auditing: automated "Combined Audit" runs targeting a pre-configured set of service instances (existing and/or new). It could run against all deployed artifacts. For services that need special care or interest, a different schedule could be customized (e.g., more frequent validation).
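The scheduled-auditing idea above can be sketched as a simple per-target scheduler, where each service instance carries its own audit interval so that services needing special care are validated more frequently. This is a minimal illustrative sketch, not actual POMBA code; `AuditTarget`, `run_audit`, and `tick` are hypothetical names.

```python
import time
from dataclasses import dataclass, field

@dataclass
class AuditTarget:
    """One service instance in the pre-configured audit set."""
    service_instance_id: str
    rules: list
    interval_sec: int      # per-service schedule; smaller = more frequent validation
    last_run: float = 0.0  # timestamp of the last audit (0.0 = never audited)

def run_audit(target: AuditTarget, now: float) -> dict:
    # Placeholder for a real audit trigger; records the run time and
    # returns a fake result record.
    target.last_run = now
    return {"instance": target.service_instance_id, "rules": target.rules, "ts": now}

def tick(targets: list, now: float) -> list:
    """Run audits for every target whose interval has elapsed."""
    return [run_audit(t, now) for t in targets if now - t.last_run >= t.interval_sec]

targets = [
    AuditTarget("svc-a", ["rule-1", "rule-2"], interval_sec=60),
    AuditTarget("svc-b", ["rule-1"], interval_sec=300),  # less critical, audited less often
]
results = tick(targets, now=time.time())
print([r["instance"] for r in results])  # both are due on the first tick
```

A real implementation would replace `run_audit` with a call into the audit engine and persist `last_run` somewhere durable, but the per-target interval is the key design point.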
Configurations
- Audit targets selection: which microservices should be included and cross-checked
- Audit rules selection: which rules should be validated for the target services
- Scheduling parameters: when, and with which rules, the audits will be applied
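The three configuration knobs above could be grouped into one structure along these lines. This is a hypothetical sketch; the field names and service names are illustrative, not actual POMBA configuration keys.

```python
# Illustrative audit configuration: target selection, rule selection,
# and scheduling parameters, mirroring the three bullets above.
audit_config = {
    # Audit targets: which microservices should be included and cross-checked
    "audit_targets": ["service-decomposition", "network-discovery"],
    # Audit rules: which rules should be validated for the target services
    "audit_rules": {
        "attribute-comparison": ["vf-module-size", "network-id"],
        "model-consistency": ["model-invariant-id"],
    },
    # Scheduling: when, and with which rules, audits run
    "schedule": {
        "default_interval": "24h",
        "overrides": {"network-discovery": "1h"},  # more frequent validation
    },
}

def selected_rules(config: dict) -> list:
    """Flatten the configured rule groups into one selection list."""
    return [rule for rules in config["audit_rules"].values() for rule in rules]

print(selected_rules(audit_config))
```

Keeping per-service schedule overrides separate from the default interval makes it easy to tighten validation for services of special interest without touching the rest of the configuration.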
...