
Hi Helen Chen,


I tried installing ONAP R3 Maintenance with the following setup:

OpenStack Pike

K8s cluster of 13 VMs + 1 Rancher node

VM flavor: 16 GB RAM, 8 vCPUs and 70 GB disk


Current status of the pods is as follows:

Total pods: 250

Total Running/Completed pods: 217

Failed pods: 33


See the listing of failed pods below:


dev-dmaap-dmaap-dr-prov-5c766b8d69-qrrrn                      0/1       CrashLoopBackOff   88         7h        

dev-contrib-netbox-app-provisioning-7gln5                     0/1       Error              0          7h        10.42.223.193   k8-node12   <none>

dev-portal-portal-db-config-7wgnd                             0/2       Error              0          7h        10.42.47.172    k8-node11   <none>

dev-portal-portal-db-config-8zhwq                             0/2       Error              0          7h        10.42.33.206    k8-node11   <none>

dev-portal-portal-db-config-hdttm                             0/2       Error              0          7h        10.42.65.42     k8-node5    <none>

dev-portal-portal-db-config-thts9                             0/2       Error              0          7h        10.42.65.112    k8-node11   <none>

dev-portal-portal-db-config-w5spq                             0/2       Error              0          7h        10.42.224.233   k8-node9    <none>

dev-sdc-sdc-dcae-be-tools-65h9w                               0/1       Init:Error         0          6h        10.42.171.18    k8-node1    <none>

dev-sdc-sdc-dcae-be-tools-ffv6t                               0/1       Init:Error         0          7h        10.42.44.49     k8-node8    <none>

dev-sdc-sdc-dcae-be-tools-fssjw                               0/1       Init:Error         0          7h        


I tried to debug the system but could not determine the exact reason.


Please help me with this.

Thanks

Abhay


    1 answer

    1.  

      user-764e9  Init:Error for sdc-dcae-be-tools is expected, since the dependency job will not have completed before the init timer expires. So you can simply delete the sdc-dcae-be-tools pods.
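      One way to delete those failed pods in bulk (a sketch, assuming the pods live in the onap namespace as shown above):

```shell
# Find all failed sdc-dcae-be-tools pods and delete them.
# The controller recreates them, and the init container should
# succeed once the dependency job has finished.
kubectl -n onap get pods --no-headers | grep sdc-dcae-be-tools \
  | awk '{print $1}' \
  | xargs -r kubectl -n onap delete pod
```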

      For portal, check the following. There must be a pod that is in the Completed state:

      kubectl -n onap get pods -o wide | grep portal
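      If one of the portal-db-config pods shows Completed, the DB config job has actually succeeded and the Error pods are just failed retries of the same job, which can be cleaned up. A hedged sketch:

```shell
# Check whether any portal-db-config pod completed successfully.
kubectl -n onap get pods -o wide | grep portal-db-config

# If at least one shows Completed, remove the leftover Error pods.
kubectl -n onap get pods --no-headers | grep portal-db-config \
  | grep Error | awk '{print $1}' \
  | xargs -r kubectl -n onap delete pod
```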


      For the dmaap-prov issue, you need the following tweaks. This applies to data-router only.

      Apply the changes specified here manually: https://gerrit.onap.org/r/#/c/79210/

      For more information, see DMAAP-1065 ([DR] Casablanca - AAF certs expired on dmaap-dr-prov and dmaap-dr-node)

      and redeploy DMAAP.
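      Redeploying a single component is typically done with the OOM helm plugins; a sketch, assuming the release is named dev and the OOM charts are published as local/onap (adjust to your environment):

```shell
# Remove only the DMaaP component of the dev release, then
# deploy it again after applying the Gerrit change above.
helm undeploy dev-dmaap --purge
helm deploy dev local/onap --namespace onap
```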

      1. user-764e9

        Thanks a lot kranthi guttikonda. Let me try this; I will get back to you with any further doubts.

      2. user-764e9

        kranthi guttikonda we tried the workaround as suggested, and it is still in the same state as above.


      3. kranthi guttikonda

        user-764e9 Strange, it worked for me. However, the latest comments mention image 1.0.9, and the changes have been merged to the casablanca branch. So I would recommend cloning the casablanca version with git and redeploying DMAAP. Make sure you remove all the PVCs, PVs and secrets for DMAAP before redeploying.
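        Cleaning up the DMaaP Kubernetes resources before redeploying could look like this (a sketch; the grep pattern may need adjusting to your release name):

```shell
# Remove DMaaP PVCs, secrets, and leftover PVs so the
# redeploy starts from a clean state.
kubectl -n onap get pvc --no-headers | grep dmaap | awk '{print $1}' \
  | xargs -r kubectl -n onap delete pvc
kubectl -n onap get secrets --no-headers | grep dmaap | awk '{print $1}' \
  | xargs -r kubectl -n onap delete secret
kubectl get pv --no-headers | grep dmaap | awk '{print $1}' \
  | xargs -r kubectl delete pv
```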

      4. user-764e9

        kranthi guttikonda Thanks for your response. Are you talking about cloning from R3 only, rather than R3 Maintenance? As per my understanding, R3 Maintenance is the latest stable release. Have the changes been cherry-picked from the maintenance branch to R3?

      5. kranthi guttikonda

        user-764e9 Clone the R3 maintenance release. Make sure image: onap/dmaap/datarouter-node:1.0.9 is set in the values.yaml for the dr-prov and dr-node charts.
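        To verify the image tags after cloning, a quick grep over the charts works (a sketch; the path assumes a standard OOM checkout, and the chart layout may differ by branch):

```shell
# List every datarouter image reference in the DMaaP charts;
# the tags should read 1.0.9 before redeploying.
grep -rn "datarouter" oom/kubernetes/dmaap --include=values.yaml
```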
