...

Code Block (bash): db_migrator_policy_init.sh
#!/bin/sh
# Prepare the target schema, then run the migration.
/opt/app/policy/bin/prepare_upgrade.sh ${SQL_DB}
/opt/app/policy/bin/db-migrator -s ${SQL_DB} -o upgrade
# Capture the upgrade's return code so the job reports success or failure
# correctly; the report below is printed either way.
rc=$?
/opt/app/policy/bin/db-migrator -s ${SQL_DB} -o report
exit $rc
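
For reference, the Kubernetes init job runs this script with SQL_DB set to the Policy schema name (policyadmin, as seen in the log below). A manual invocation would look roughly like the sketch here; the install path of the script is an assumption, since the chart controls where it is mounted:

Code Block (bash): manual run (sketch)
# Hypothetical manual run; SQL_DB is normally injected by the job spec.
SQL_DB=policyadmin sh /opt/app/policy/bin/db_migrator_policy_init.sh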



Update kubernetes/policy/templates/configmap.yaml

...

policyadmin: OK: upgrade (0900)
+ rc=0
+ /opt/app/policy/bin/db-migrator -s policyadmin -o report
name version
policyadmin 0900
ID script operation from_version to_version tag success atTime
1 0100-jpapdpgroup_properties.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:12
2 0110-jpapdpstatistics_enginestats.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:12
3 0120-jpapdpsubgroup_policies.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:12
4 0130-jpapdpsubgroup_properties.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:13
5 0140-jpapdpsubgroup_supportedpolicytypes.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:13
6 0150-jpatoscacapabilityassignment_attributes.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:13
7 0160-jpatoscacapabilityassignment_metadata.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:13
8 0170-jpatoscacapabilityassignment_occurrences.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:13
9 0180-jpatoscacapabilityassignment_properties.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:13
10 0190-jpatoscacapabilitytype_metadata.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:14
11 0200-jpatoscacapabilitytype_properties.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:14
12 0210-jpatoscadatatype_constraints.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:14
13 0220-jpatoscadatatype_metadata.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:14
14 0230-jpatoscadatatype_properties.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:14
15 0240-jpatoscanodetemplate_metadata.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:15
16 0250-jpatoscanodetemplate_properties.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:15
17 0260-jpatoscanodetype_metadata.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:15
18 0270-jpatoscanodetype_properties.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:15
19 0280-jpatoscapolicy_metadata.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:15
20 0290-jpatoscapolicy_properties.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:15
21 0300-jpatoscapolicy_targets.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:15
22 0310-jpatoscapolicytype_metadata.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:16
23 0320-jpatoscapolicytype_properties.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:16
24 0330-jpatoscapolicytype_targets.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:16
25 0340-jpatoscapolicytype_triggers.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:16
26 0350-jpatoscaproperty_constraints.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:16
27 0360-jpatoscaproperty_metadata.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:17
28 0370-jpatoscarelationshiptype_metadata.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:17
29 0380-jpatoscarelationshiptype_properties.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:17
30 0390-jpatoscarequirement_metadata.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:17
31 0400-jpatoscarequirement_occurrences.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:18
32 0410-jpatoscarequirement_properties.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:18
33 0420-jpatoscaservicetemplate_metadata.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:18
34 0430-jpatoscatopologytemplate_inputs.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:18
35 0440-pdpgroup_pdpsubgroup.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:18
36 0450-pdpgroup.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:18
37 0460-pdppolicystatus.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:19
38 0470-pdp.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:19
39 0480-pdpstatistics.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:19
40 0490-pdpsubgroup_pdp.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:19
41 0500-pdpsubgroup.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:19
42 0510-toscacapabilityassignment.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:19
43 0520-toscacapabilityassignments.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:19
44 0530-toscacapabilityassignments_toscacapabilityassignment.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:20
45 0540-toscacapabilitytype.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:20
46 0550-toscacapabilitytypes.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:20
47 0560-toscacapabilitytypes_toscacapabilitytype.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:20
48 0570-toscadatatype.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:20
49 0580-toscadatatypes.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:20
50 0590-toscadatatypes_toscadatatype.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:20
51 0600-toscanodetemplate.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:20
52 0610-toscanodetemplates.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:20
53 0620-toscanodetemplates_toscanodetemplate.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:21
54 0630-toscanodetype.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:21
55 0640-toscanodetypes.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:21
56 0650-toscanodetypes_toscanodetype.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:21
57 0660-toscaparameter.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:21
58 0670-toscapolicies.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:21
59 0680-toscapolicies_toscapolicy.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:21
60 0690-toscapolicy.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:21
61 0700-toscapolicytype.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:21
62 0710-toscapolicytypes.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:22
63 0720-toscapolicytypes_toscapolicytype.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:22
64 0730-toscaproperty.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:22
65 0740-toscarelationshiptype.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:22
66 0750-toscarelationshiptypes.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:22
67 0760-toscarelationshiptypes_toscarelationshiptype.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:22
68 0770-toscarequirement.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:22
69 0780-toscarequirements.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:22
70 0790-toscarequirements_toscarequirement.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:23
71 0800-toscaservicetemplate.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:23
72 0810-toscatopologytemplate.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:23
73 0820-toscatrigger.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:23
74 0830-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:23
75 0840-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:23
76 0850-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:23
77 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:23
78 0870-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:24
79 0880-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:24
80 0890-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:24
81 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:24
82 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:24
83 0920-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:24
84 0940-PdpPolicyStatus_PdpGroup.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:24
85 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:24
86 0960-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:25
87 0970-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:25
88 0980-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:25
89 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:25
90 1000-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:25
91 1010-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:25
92 1020-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:25
93 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:26
94 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:26
95 1050-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:26
96 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 2608211247120800u 1 2021-08-26 12:47:26
97 0100-pdp.sql upgrade 0800 0900 2608211247120900u 1 2021-08-26 12:47:26
98 0110-idx_tsidx1.sql upgrade 0800 0900 2608211247120900u 1 2021-08-26 12:47:26
99 0120-pk_pdpstatistics.sql upgrade 0800 0900 2608211247120900u 1 2021-08-26 12:47:26
100 0130-pdpstatistics.sql upgrade 0800 0900 2608211247120900u 1 2021-08-26 12:47:27
101 0140-pk_pdpstatistics.sql upgrade 0800 0900 2608211247120900u 1 2021-08-26 12:47:27
102 0150-pdpstatistics.sql upgrade 0800 0900 2608211247120900u 1 2021-08-26 12:47:27
103 0160-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2608211247120900u 1 2021-08-26 12:47:27
104 0170-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2608211247120900u 1 2021-08-26 12:47:27
105 0180-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2608211247120900u 1 2021-08-26 12:47:27
106 0190-jpapolicyaudit.sql upgrade 0800 0900 2608211247120900u 1 2021-08-26 12:47:27
107 0200-JpaPolicyAuditIndex_timestamp.sql upgrade 0800 0900 2608211247120900u 1 2021-08-26 12:47:27
108 0210-sequence.sql upgrade 0800 0900 2608211247120900u 1 2021-08-26 12:47:27
109 0220-sequence.sql upgrade 0800 0900 2608211247120900u 1 2021-08-26 12:47:27
policyadmin: OK @ 0900
+ exit 0
root@esy-master-policy-002-nfs:~/oom/kubernetes#


Other policy pods are not starting:

root@esy-master-policy-002-nfs:~/oom/kubernetes# kubectl get pods | grep policy
dev-policy-apex-pdp-0 0/1 Init:CrashLoopBackOff 6 11m
dev-policy-api-78976fbd5d-pbk4h 0/1 Init:CrashLoopBackOff 5 11m
dev-policy-clamp-be-5c6cb758d8-hgf8w 0/1 Init:CrashLoopBackOff 5 11m
dev-policy-clamp-fe-569494fb5d-5qfkr 0/1 Init:0/3 1 11m
dev-policy-clamp-galera-config-5g2kb 0/1 Completed 0 11m
dev-policy-distribution-7dfdfbd64c-lzj46 0/1 Init:CrashLoopBackOff 6 11m
dev-policy-drools-pdp-0 0/1 Init:CrashLoopBackOff 5 11m
dev-policy-galera-config-xg8nl 0/1 Completed 0 11m
dev-policy-mariadb-0 2/2 Running 0 11m
dev-policy-pap-7899b7f9fb-x6pf8 0/1 Init:CrashLoopBackOff 5 11m
dev-policy-xacml-pdp-68f589ccc4-xmv6h 0/1 Init:CrashLoopBackOff 5 11m
root@esy-master-policy-002-nfs:~/oom/kubernetes# kubectl logs dev-policy-apex-pdp-0
Error from server (BadRequest): container "policy-apex-pdp" in pod "dev-policy-apex-pdp-0" is waiting to start: PodInitializing
root@esy-master-policy-002-nfs:~/oom/kubernetes# kubectl logs dev-policy-api-78976fbd5d-pbk4h
Error from server (BadRequest): container "policy-api" in pod "dev-policy-api-78976fbd5d-pbk4h" is waiting to start: PodInitializing
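
Since the main containers have not started yet, the init containers' logs are the ones worth reading; a quick sketch (the -c container name is taken from the describe output below):

Code Block (bash): init container logs (sketch)
# Fetch logs from a specific init container while the pod is still initializing
kubectl logs dev-policy-api-78976fbd5d-pbk4h -n onap -c policy-api-cert-initializer-readiness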



root@esy-master-policy-002-nfs:~/oom/kubernetes# kubectl describe pod dev-policy-api-78976fbd5d-pbk4h
Name: dev-policy-api-78976fbd5d-pbk4h
Namespace: onap
Priority: 0
Node: esy-master-policy-002-k8s-02/172.16.1.104
Start Time: Thu, 26 Aug 2021 12:45:28 +0000
Labels: app=policy-api
pod-template-hash=78976fbd5d
release=dev
Annotations: cni.projectcalico.org/podIP: 10.42.2.102/32
cni.projectcalico.org/podIPs: 10.42.2.102/32
Status: Pending
IP: 10.42.2.102
IPs:
IP: 10.42.2.102
Controlled By: ReplicaSet/dev-policy-api-78976fbd5d
Init Containers:
policy-api-readiness:
Container ID: docker://39636ea6356d2efffb5a30035650102baeaa23a2cff820e9e089b20f1b158cd6
Image: nexus3.onap.org:10001/onap/oom/readiness:3.0.1
Image ID: docker-pullable://nexus3.onap.org:10001/onap/oom/readiness@sha256:317c8a361ae73750f4d4a1b682c42b73de39083f73228dede31fd68b16c089db
Port: <none>
Host Port: <none>
Command:
/app/ready.py
Args:
--job-name
dev-policy-galera-config
State: Terminated
Reason: Completed
Exit Code: 0
Started: Thu, 26 Aug 2021 12:45:35 +0000
Finished: Thu, 26 Aug 2021 12:47:31 +0000
Ready: True
Restart Count: 0
Environment:
NAMESPACE: onap (v1:metadata.namespace)
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from dev-policy-api-read-token-jxvtg (ro)
policy-api-update-config:
Container ID: docker://5788bd3c5edddf32274b0cba56910788f6682ac057c84619a5f1b40d43805bcf
Image: docker.io/dibi/envsubst:1
Image ID: docker-pullable://dibi/envsubst@sha256:6f1938ec2114e98406b9dcc638929fcbe68add3d94eb8629e0cbed03b72f09f8
Port: <none>
Host Port: <none>
Command:
sh
Args:
-c
cd /config-input && for PFILE in `ls -1`; do envsubst <${PFILE} >/config/${PFILE}; done
State: Terminated
Reason: Completed
Exit Code: 0
Started: Thu, 26 Aug 2021 12:47:34 +0000
Finished: Thu, 26 Aug 2021 12:47:34 +0000
Ready: True
Restart Count: 0
Environment:
SQL_USER: <set to the key 'login' in secret 'dev-policy-db-secret'> Optional: false
SQL_PASSWORD: <set to the key 'password' in secret 'dev-policy-db-secret'> Optional: false
RESTSERVER_USER: <set to the key 'login' in secret 'dev-policy-api-restserver-creds'> Optional: false
RESTSERVER_PASSWORD: <set to the key 'password' in secret 'dev-policy-api-restserver-creds'> Optional: false
Mounts:
/config from apiconfig-processed (rw)
/config-input from apiconfig (rw)
/var/run/secrets/kubernetes.io/serviceaccount from dev-policy-api-read-token-jxvtg (ro)
policy-api-cert-initializer-readiness:
Container ID: docker://a608745a981839876de3e3a1afc6917b5c958c6584a854f214f20356f1ee3346
Image: nexus3.onap.org:10001/onap/oom/readiness:3.0.1
Image ID: docker-pullable://nexus3.onap.org:10001/onap/oom/readiness@sha256:317c8a361ae73750f4d4a1b682c42b73de39083f73228dede31fd68b16c089db
Port: <none>
Host Port: <none>
Command:
/app/ready.py
Args:
--container-name
aaf-locate
--container-name
aaf-cm
--container-name
aaf-service
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: OOMKilled
Exit Code: 137
Started: Thu, 26 Aug 2021 13:09:26 +0000
Finished: Thu, 26 Aug 2021 13:10:09 +0000
Ready: False
Restart Count: 8
Limits:
cpu: 100m
memory: 100Mi
Requests:
cpu: 3m
memory: 20Mi
Environment:
NAMESPACE: onap (v1:metadata.namespace)
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from dev-policy-api-read-token-jxvtg (ro)
policy-api-aaf-config:
Container ID:
Image: nexus3.onap.org:10001/onap/aaf/aaf_agent:2.1.20
Image ID:
Port: <none>
Host Port: <none>
Command:
sh
-c
/opt/app/aaf_config/bin/agent.sh
. /opt/app/aaf_config/bin/retrieval_check.sh
/opt/app/aaf_config/bin/aaf-add-config.sh

State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Environment:
APP_FQI: policy@policy.onap.org
aaf_locate_url: https://aaf-locate.onap:8095
aaf_locator_container: oom
aaf_locator_container_ns: onap
aaf_locator_fqdn: policy
aaf_locator_app_ns: org.osaaf.aaf
DEPLOY_FQI: <set to the key 'login' in secret 'dev-policy-api-cert-initializer-deployer-creds'> Optional: false
DEPLOY_PASSWORD: <set to the key 'password' in secret 'dev-policy-api-cert-initializer-deployer-creds'> Optional: false
cadi_longitude: 0.0
cadi_latitude: 0.0
aaf_locator_public_fqdn: policy.onap.org
Mounts:
/opt/app/aaf_config/bin/aaf-add-config.sh from aaf-add-config (rw,path="aaf-add-config.sh")
/opt/app/aaf_config/bin/retrieval_check.sh from aaf-add-config (rw,path="retrieval_check.sh")
/opt/app/aaf_config/cert/truststoreONAP.p12.b64 from aaf-agent-certs (rw,path="truststoreONAP.p12.b64")
/opt/app/aaf_config/cert/truststoreONAPall.jks.b64 from aaf-agent-certs (rw,path="truststoreONAPall.jks.b64")
/opt/app/osaaf from dev-policy-api-aaf-config (rw)
/var/run/secrets/kubernetes.io/serviceaccount from dev-policy-api-read-token-jxvtg (ro)
Containers:
policy-api:
Container ID:
Image: nexus3.onap.org:10001/onap/policy-api:2.5-SNAPSHOT-latest
Image ID:
Port: 6969/TCP
Host Port: 0/TCP
Command:
sh
-c
Args:
source /opt/app/osaaf/local/.ci;/opt/app/policy/api/bin/policy-api.sh /opt/app/policy/api/etc/mounted/config.json
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Liveness: tcp-socket :6969 delay=20s timeout=1s period=10s #success=1 #failure=3
Readiness: tcp-socket :6969 delay=20s timeout=1s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/etc/localtime from localtime (ro)
/opt/app/osaaf from dev-policy-api-aaf-config (rw)
/opt/app/policy/api/etc/mounted from apiconfig-processed (rw)
/var/run/secrets/kubernetes.io/serviceaccount from dev-policy-api-read-token-jxvtg (ro)
Conditions:
Type Status
Initialized False
Ready False
ContainersReady False
PodScheduled True
Volumes:
dev-policy-api-aaf-config:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium: Memory
SizeLimit: <unset>
aaf-agent-certs:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: dev-cert-wrapper-certs
Optional: false
aaf-add-config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: dev-policy-api-cert-initializer-add-config
Optional: false
localtime:
Type: HostPath (bare host directory volume)
Path: /etc/localtime
HostPathType:
apiconfig:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: dev-policy-api-configmap
Optional: false
apiconfig-processed:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium: Memory
SizeLimit: <unset>
dev-policy-api-read-token-jxvtg:
Type: Secret (a volume populated by a Secret)
SecretName: dev-policy-api-read-token-jxvtg
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 26m default-scheduler Successfully assigned onap/dev-policy-api-78976fbd5d-pbk4h to esy-master-policy-002-k8s-02
Normal Pulled 26m kubelet Container image "nexus3.onap.org:10001/onap/oom/readiness:3.0.1" already present on machine
Normal Created 26m kubelet Created container policy-api-readiness
Normal Started 26m kubelet Started container policy-api-readiness
Normal Pulled 24m kubelet Container image "docker.io/dibi/envsubst:1" already present on machine
Normal Created 24m kubelet Created container policy-api-update-config
Normal Started 24m kubelet Started container policy-api-update-config
Normal Created 21m (x4 over 24m) kubelet Created container policy-api-cert-initializer-readiness
Normal Started 21m (x4 over 24m) kubelet Started container policy-api-cert-initializer-readiness
Normal Pulled 19m (x5 over 24m) kubelet Container image "nexus3.onap.org:10001/onap/oom/readiness:3.0.1" already present on machine
Warning BackOff 65s (x77 over 22m) kubelet Back-off restarting failed container


root@esy-master-policy-002-nfs:~/oom/kubernetes# kubectl describe pod dev-policy-apex-pdp-0
Name: dev-policy-apex-pdp-0
Namespace: onap
Priority: 0
Node: esy-master-policy-002-k8s-02/172.16.1.104
Start Time: Thu, 26 Aug 2021 12:45:31 +0000
Labels: app=policy-apex-pdp
controller-revision-hash=dev-policy-apex-pdp-566cf5665
release=dev
statefulset.kubernetes.io/pod-name=dev-policy-apex-pdp-0
Annotations: cni.projectcalico.org/podIP: 10.42.2.109/32
cni.projectcalico.org/podIPs: 10.42.2.109/32
Status: Pending
IP: 10.42.2.109
IPs:
IP: 10.42.2.109
Controlled By: StatefulSet/dev-policy-apex-pdp
Init Containers:
policy-apex-pdp-update-config:
Container ID: docker://408ca808d6166e320989015311c8de82b6418828b1b05e1bced0c3a05cd7bdd5
Image: docker.io/dibi/envsubst:1
Image ID: docker-pullable://dibi/envsubst@sha256:6f1938ec2114e98406b9dcc638929fcbe68add3d94eb8629e0cbed03b72f09f8
Port: <none>
Host Port: <none>
Command:
sh
Args:
-c
cd /config-input && for PFILE in `ls -1`; do envsubst <${PFILE} >/config/${PFILE}; done
State: Terminated
Reason: Completed
Exit Code: 0
Started: Thu, 26 Aug 2021 12:45:41 +0000
Finished: Thu, 26 Aug 2021 12:45:41 +0000
Ready: True
Restart Count: 0
Environment:
TRUSTSTORE_PASSWORD: <set to the key 'password' in secret 'dev-policy-apex-pdp-truststore-pass'> Optional: false
KEYSTORE_PASSWORD: <set to the key 'password' in secret 'dev-policy-apex-pdp-keystore-pass'> Optional: false
RESTSERVER_USER: <set to the key 'login' in secret 'dev-policy-apex-pdp-restserver-creds'> Optional: false
RESTSERVER_PASSWORD: <set to the key 'password' in secret 'dev-policy-apex-pdp-restserver-creds'> Optional: false
Mounts:
/config from apexconfig (rw)
/config-input from apexconfig-input (rw)
/var/run/secrets/kubernetes.io/serviceaccount from dev-policy-apex-pdp-read-token-x829k (ro)
policy-apex-pdp-cert-initializer-readiness:
Container ID: docker://f8c75fa194e74bb6dc615d0df4cd88bb1b6ddbc488105048ec0e681d8857257e
Image: nexus3.onap.org:10001/onap/oom/readiness:3.0.1
Image ID: docker-pullable://nexus3.onap.org:10001/onap/oom/readiness@sha256:317c8a361ae73750f4d4a1b682c42b73de39083f73228dede31fd68b16c089db
Port: <none>
Host Port: <none>
Command:
/app/ready.py
Args:
--container-name
aaf-locate
--container-name
aaf-cm
--container-name
aaf-service
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: OOMKilled
Exit Code: 137
Started: Thu, 26 Aug 2021 13:19:09 +0000
Finished: Thu, 26 Aug 2021 13:19:50 +0000
Ready: False
Restart Count: 10
Limits:
cpu: 100m
memory: 100Mi
Requests:
cpu: 3m
memory: 20Mi
Environment:
NAMESPACE: onap (v1:metadata.namespace)
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from dev-policy-apex-pdp-read-token-x829k (ro)
policy-apex-pdp-aaf-config:
Container ID:
Image: nexus3.onap.org:10001/onap/aaf/aaf_agent:2.1.20
Image ID:
Port: <none>
Host Port: <none>
Command:
sh
-c
/opt/app/aaf_config/bin/agent.sh
. /opt/app/aaf_config/bin/retrieval_check.sh
/opt/app/aaf_config/bin/aaf-add-config.sh

State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Environment:
APP_FQI: policy@policy.onap.org
aaf_locate_url: https://aaf-locate.onap:8095
aaf_locator_container: oom
aaf_locator_container_ns: onap
aaf_locator_fqdn: policy
aaf_locator_app_ns: org.osaaf.aaf
DEPLOY_FQI: <set to the key 'login' in secret 'dev-policy-apex-pdp-cert-initializer-deployer-creds'> Optional: false
DEPLOY_PASSWORD: <set to the key 'password' in secret 'dev-policy-apex-pdp-cert-initializer-deployer-creds'> Optional: false
cadi_longitude: 0.0
cadi_latitude: 0.0
aaf_locator_public_fqdn: policy.onap.org
Mounts:
/opt/app/aaf_config/bin/aaf-add-config.sh from aaf-add-config (rw,path="aaf-add-config.sh")
/opt/app/aaf_config/bin/retrieval_check.sh from aaf-add-config (rw,path="retrieval_check.sh")
/opt/app/aaf_config/cert/truststoreONAP.p12.b64 from aaf-agent-certs (rw,path="truststoreONAP.p12.b64")
/opt/app/aaf_config/cert/truststoreONAPall.jks.b64 from aaf-agent-certs (rw,path="truststoreONAPall.jks.b64")
/opt/app/osaaf from dev-policy-apex-pdp-aaf-config (rw)
/var/run/secrets/kubernetes.io/serviceaccount from dev-policy-apex-pdp-read-token-x829k (ro)
Containers:
policy-apex-pdp:
Container ID:
Image: nexus3.onap.org:10001/onap/policy-apex-pdp:2.6-SNAPSHOT-latest
Image ID:
Port: 6969/TCP
Host Port: 0/TCP
Command:
sh
-c
Args:
if [ -f /opt/app/osaaf/local/.ci ]; then . /opt/app/osaaf/local/.ci; fi;/opt/app/policy/apex-pdp/bin/apexOnapPf.sh -c /home/apexuser/config/OnapPfConfig.json
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Liveness: tcp-socket :6969 delay=20s timeout=1s period=10s #success=1 #failure=3
Readiness: tcp-socket :6969 delay=20s timeout=1s period=10s #success=1 #failure=3
Environment:
REPLICAS: 1
Mounts:
/etc/localtime from localtime (ro)
/home/apexuser/config from apexconfig (rw)
/opt/app/osaaf from dev-policy-apex-pdp-aaf-config (rw)
/var/log/onap from policy-logs (rw)
/var/run/secrets/kubernetes.io/serviceaccount from dev-policy-apex-pdp-read-token-x829k (ro)
Conditions:
Type Status
Initialized False
Ready False
ContainersReady False
PodScheduled True
Volumes:
dev-policy-apex-pdp-aaf-config:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium: Memory
SizeLimit: <unset>
aaf-agent-certs:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: dev-cert-wrapper-certs
Optional: false
aaf-add-config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: dev-policy-apex-pdp-cert-initializer-add-config
Optional: false
localtime:
Type: HostPath (bare host directory volume)
Path: /etc/localtime
HostPathType:
policy-logs:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
apexconfig-input:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: dev-policy-apex-pdp-configmap
Optional: false
apexconfig:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium: Memory
SizeLimit: <unset>
dev-policy-apex-pdp-read-token-x829k:
Type: Secret (a volume populated by a Secret)
SecretName: dev-policy-apex-pdp-read-token-x829k
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 35m default-scheduler Successfully assigned onap/dev-policy-apex-pdp-0 to esy-master-policy-002-k8s-02
Warning FailedMount 35m (x2 over 35m) kubelet MountVolume.SetUp failed for volume "apexconfig-input" : failed to sync configmap cache: timed out waiting for the condition
Warning FailedMount 35m (x3 over 35m) kubelet MountVolume.SetUp failed for volume "aaf-add-config" : failed to sync configmap cache: timed out waiting for the condition
Warning FailedMount 35m (x3 over 35m) kubelet MountVolume.SetUp failed for volume "dev-policy-apex-pdp-read-token-x829k" : failed to sync secret cache: timed out waiting for the condition
Normal Pulled 35m kubelet Container image "docker.io/dibi/envsubst:1" already present on machine
Normal Created 35m kubelet Created container policy-apex-pdp-update-config
Normal Started 35m kubelet Started container policy-apex-pdp-update-config
Normal Started 33m (x3 over 35m) kubelet Started container policy-apex-pdp-cert-initializer-readiness
Normal Pulled 32m (x4 over 35m) kubelet Container image "nexus3.onap.org:10001/onap/oom/readiness:3.0.1" already present on machine
Normal Created 32m (x4 over 35m) kubelet Created container policy-apex-pdp-cert-initializer-readiness
Warning BackOff 37s (x123 over 33m) kubelet Back-off restarting failed container


Exit Code

State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: OOMKilled
Exit Code: 137


 kubectl describe pods dev-policy-api-78976fbd5d-4xfx2 | grep -i exit
Exit Code: 0
Exit Code: 0
Exit Code: 137


kubectl describe pods dev-policy-pap-7899b7f9fb-dzwgh | grep -i exit
Exit Code: 0
Exit Code: 0
Exit Code: 137


kubectl describe pods dev-policy-apex-pdp-0 | grep -i exit
Exit Code: 0
Exit Code: 137


Exit code 137 is the key signal here: it means the container was killed by the kernel's OOM killer (137 = 128 + SIGKILL) because it tried to use more memory than its configured limit. To track this down, compare the container's actual memory usage against the limit set in its resources.
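
One way to make that comparison (a sketch, assuming metrics-server is installed so that kubectl top works; the pod and container names are taken from the describe output above):

Code Block (bash): memory usage vs. limit (sketch)
# Per-container memory usage for the failing pod
kubectl top pod dev-policy-api-78976fbd5d-pbk4h -n onap --containers

# Resource limits configured on the init container being OOM-killed
kubectl get pod dev-policy-api-78976fbd5d-pbk4h -n onap \
  -o jsonpath='{.spec.initContainers[?(@.name=="policy-api-cert-initializer-readiness")].resources}'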


State: Running
Started: Thu, 26 Aug 2021 15:10:16 +0000
Last State: Terminated
Reason: OOMKilled
Exit Code: 137
Started: Thu, 26 Aug 2021 15:08:03 +0000
Finished: Thu, 26 Aug 2021 15:08:50 +0000
Ready: False
Restart Count: 5
Limits:
cpu: 100m
memory: 100Mi
Requests:
cpu: 3m
memory: 20Mi
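
Given the 100Mi limit shown above, the likely remediation is to raise the memory limit on the cert-initializer readiness init container. As a hypothetical one-off illustration only (in practice the change belongs in the OOM chart values; the actual fix is in the patch attached below), the limit could be bumped in place like this:

Code Block (bash): raising the init container limit (sketch)
# Hypothetical live patch; index 2 corresponds to
# policy-api-cert-initializer-readiness in this chart's init container order.
# Prefer changing the chart values over patching a running deployment.
kubectl -n onap patch deployment dev-policy-api --type=json -p='[
  {"op": "replace",
   "path": "/spec/template/spec/initContainers/2/resources/limits/memory",
   "value": "256Mi"}
]'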


Attached file: db-migrator-patch.tar.gz