...
| Time (EDT) | Categories | Sub-Categories (Component in Error Mode) | Time to Detect Failure and Repair | Pass? | Notes |
|---|---|---|---|---|---|
| | VNF Onboarding and Distribution | SDC | < 5 minutes | Pass | Timing: ~30 minutes. Used a script that kills these components at random while VNF onboarding continues (`ete-k8s.sh onap healthdist`). After kicking off the command, waited 1 minute, then killed SDC. The first distribution failed; a redistribute succeeded. |
| | | SO | < 5 minutes | Pass | After kicking off the command, waited 1 minute, then killed SO. The first distribution failed; a redistribute succeeded. |
| | | A&AI | < 5 minutes | Pass | |
| | | SDNC | < 8 minutes | Pass | Deleted the SDNC pod; it took a very long time to come back, possibly because of network issues, and the system ended up in a "weird" state in which SDC returned an error (error text not captured). |
| | | SDNC | < 5 minutes | Pass | Re-ran health check and preload. |
| | VNF Instantiation | SDC | < 2 seconds | Pass | Tested by manually killing the Docker container. |
| | | VID | < 1 minute | Pass | |
| | | SO | 5 minutes | Pass | SO pod restarted as part of hard-rebooting 2 of the 9 k8s VMs. |
| | | A&AI | 20 minutes | Pass | Restarted aai-model-loader, aai-hbase, and aai-sparky-be after hard-rebooting 2 more k8s VMs; recovery probably took extra time because many other pods were restarting and converging at the same time. |
| | | SDNC | 5 minutes | Pass | SDNC pods restarted as part of hard-rebooting 2 of the 9 k8s VMs. |
| | | MultiVIM | < 5 minutes | Pass | Deleted the multicloud pods and verified that the replacement pods can orchestrate VNFs as usual. |
| | Closed Loop (pre-installed manually) | DCAE | 2 minutes (ves-collector, tca-analytics); never (dcae-db, observed for > 30 minutes) | Pass (ves-collector, tca-analytics); Fail (dcae-db) | Deleted dep-dcae-ves-collector-767d745fd4-wk4ht: no discernible interruption to the closed loop; pod restarted in 1 minute. Deleted dep-dcae-tca-analytics-d7fb6cffb-6ccpm: no discernible interruption; pod restarted in 2 minutes. Deleted dev-dcae-db-0: the closed loop failed after about 1 minute; the pod restarted in 2 minutes, but the loop then suffered intermittent failures and never fully recovered, possibly because the two dcae-db instances are out of sync. |
| | | DMaaP | 10 seconds | Pass | Deleted dev-dmaap-bus-controller-657845b569-q7fr2: no discernible interruption to the closed loop; pod restarted in 10 seconds. |
| | | Policy (Policy documentation: Policy on OOM) | 15 minutes | Pass | Deleted dev-pdp-0: no discernible interruption to the closed loop; pod restarted in 2 minutes. Deleted dev-drools-0: the closed loop failed immediately; the pod restarted in 2 minutes and the loop recovered in 15 minutes. Deleted dev-pap-5c7995667f-wvrgr: no discernible interruption; pod restarted in 2 minutes. Deleted dev-policydb-5cddbc96cf-hr4jr: no discernible interruption; pod restarted in 2 minutes. Deleted dev-nexus-7cb59bcfb7-prb5v: no discernible interruption; pod restarted in 2 minutes. |
| | | A&AI | Never (observed for > 1 hour) | Fail | Deleted aai-modelloader: the closed loop failed immediately. The aai-modelloader container restarted within a couple of minutes (when scheduled on a VM that already has the image) and the pod restarted in < 5 minutes, but the closed loop never recovered. |
| | | APPC (3-node cluster) | 20 minutes | Pass | Deleted dev-appc-0: the closed loop failed immediately; the dev-appc-0 pod restarted in 15 minutes and the loop recovered in 20 minutes. |
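The onboarding rows above describe kicking off `ete-k8s.sh onap healthdist`, waiting a minute, and killing a component mid-distribution. A minimal sketch of that procedure, assuming `kubectl` access to the `onap` namespace (the pod-selection logic and script structure are illustrative assumptions, not the team's actual random-kill script):

```shell
#!/bin/sh
# Sketch of the random-kill onboarding test: start the healthdist
# robot suite, wait one minute, then delete one pod of the target
# component and check whether distribution recovers.

COMPONENT="${1:-sdc}"   # pod-name fragment to target: sdc, so, sdnc, ...
NAMESPACE="onap"

# Kick off onboarding/distribution in the background.
./ete-k8s.sh "$NAMESPACE" healthdist &
ETE_PID=$!

sleep 60   # "waiting for 1 minute" before injecting the failure

# Pick one pod of the target component at random and delete it.
POD=$(kubectl -n "$NAMESPACE" get pods --no-headers \
      | awk -v c="$COMPONENT" '$1 ~ c {print $1}' \
      | shuf -n 1)
[ -n "$POD" ] && kubectl -n "$NAMESPACE" delete pod "$POD"

# The first distribution may fail; re-run (redistribute) to verify recovery.
wait "$ETE_PID"
```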
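The "Time to Detect Failure and Repair" figures in the closed-loop rows amount to deleting a pod and timing how long until its replacement is Ready. A hedged sketch of that measurement using `kubectl wait` (the pod name dev-pdp-0 is taken from the notes above; this works as written only for StatefulSet pods, whose names are stable across restarts):

```shell
#!/bin/sh
# Delete a pod and measure wall-clock time until the replacement
# reports the Ready condition.
NAMESPACE="onap"
POD="dev-pdp-0"   # StatefulSet pod, so the restarted pod keeps this name

START=$(date +%s)
kubectl -n "$NAMESPACE" delete pod "$POD"
kubectl -n "$NAMESPACE" wait --for=condition=Ready "pod/$POD" --timeout=30m
END=$(date +%s)

echo "Pod restart time: $(( (END - START) / 60 )) minutes"
```

Note that pod readiness is only a lower bound on the repair time reported in the table: several rows show the closed loop taking longer to recover than the pod itself (e.g. dev-drools-0, dev-appc-0).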
...