
Hi,

After deploying ONAP (Casablanca), I ran the ./ete-k8s.sh onap health script to test component health.

APPC and DCAE are failing the test. I tried redeploying APPC, but nothing changed.


All of the APPC pods are running:

onap dev-appc-appc-0 2/2 Running 0 45m 10.42.7.98 sb4-k8s-10 <none>
onap dev-appc-appc-ansible-server-589cb48497-v4hmc 1/1 Running 0 45m 10.42.115.161 sb4-k8s-13 <none>
onap dev-appc-appc-cdt-57d685d65c-jfq94 1/1 Running 0 45m 10.42.194.152 sb4-k8s-6 <none>
onap dev-appc-appc-db-0 1/1 Running 0 45m 10.42.61.11 sb4-k8s-10 <none>
onap dev-appc-appc-db-1 1/1 Running 1 43m 10.42.41.47 sb4-k8s-9 <none>
onap dev-appc-appc-db-2 1/1 Running 0 41m 10.42.27.95 sb4-k8s-14 <none>
onap dev-appc-appc-dgbuilder-768d9f4fb8-w6zpc 1/1 Running 0 45m 10.42.38.8 sb4-k8s-12 <none>


Result of the health check:

==============================================================================
Testsuites
==============================================================================
Testsuites.Health-Check :: Testing ecomp components are available via calls.
==============================================================================
Basic A&AI Health Check | PASS |
------------------------------------------------------------------------------
Basic AAF Health Check | PASS |
------------------------------------------------------------------------------
Basic AAF SMS Health Check | PASS |
------------------------------------------------------------------------------
Basic APPC Health Check | FAIL |
401 != 200
------------------------------------------------------------------------------
Basic CLI Health Check | PASS |
------------------------------------------------------------------------------
Basic CLAMP Health Check | PASS |
------------------------------------------------------------------------------
Basic DCAE Health Check | FAIL |
500 != 200
------------------------------------------------------------------------------
Basic DMAAP Data Router Health Check | PASS |
------------------------------------------------------------------------------
Basic DMAAP Message Router Health Check | PASS |
------------------------------------------------------------------------------
Basic External API NBI Health Check | PASS |
------------------------------------------------------------------------------
Basic Log Elasticsearch Health Check | PASS |
------------------------------------------------------------------------------
Basic Log Kibana Health Check | PASS |
------------------------------------------------------------------------------
Basic Log Logstash Health Check | PASS |
------------------------------------------------------------------------------
Basic Microservice Bus Health Check | PASS |
------------------------------------------------------------------------------
Basic Multicloud API Health Check | PASS |
------------------------------------------------------------------------------
Basic Multicloud-ocata API Health Check | PASS |
------------------------------------------------------------------------------
Basic Multicloud-pike API Health Check | PASS |
------------------------------------------------------------------------------
Basic Multicloud-titanium_cloud API Health Check | PASS |
------------------------------------------------------------------------------
Basic Multicloud-vio API Health Check | PASS |
------------------------------------------------------------------------------
Basic OOF-Homing Health Check | PASS |
------------------------------------------------------------------------------
Basic OOF-SNIRO Health Check | PASS |
------------------------------------------------------------------------------
Basic OOF-CMSO Health Check | PASS |
------------------------------------------------------------------------------
Basic Policy Health Check | PASS |
------------------------------------------------------------------------------
Basic Pomba AAI-context-builder Health Check | PASS |
------------------------------------------------------------------------------
Basic Pomba SDC-context-builder Health Check | PASS |
------------------------------------------------------------------------------
Basic Pomba Network-discovery-context-builder Health Check | PASS |
------------------------------------------------------------------------------
Basic Portal Health Check | PASS |
------------------------------------------------------------------------------
Basic SDC Health Check (DMaaP:UP)| PASS |
------------------------------------------------------------------------------
Basic SDNC Health Check | PASS |
------------------------------------------------------------------------------
Basic SO Health Check | PASS |
------------------------------------------------------------------------------
Basic UseCaseUI API Health Check | PASS |
------------------------------------------------------------------------------
Basic VFC catalog API Health Check | PASS |
------------------------------------------------------------------------------
Basic VFC emsdriver API Health Check | PASS |
------------------------------------------------------------------------------
Basic VFC gvnfmdriver API Health Check | PASS |
------------------------------------------------------------------------------
Basic VFC huaweivnfmdriver API Health Check | PASS |
------------------------------------------------------------------------------
Basic VFC jujuvnfmdriver API Health Check | PASS |
------------------------------------------------------------------------------
Basic VFC multivimproxy API Health Check | PASS |
------------------------------------------------------------------------------
Basic VFC nokiavnfmdriver API Health Check | PASS |
------------------------------------------------------------------------------
Basic VFC nokiav2driver API Health Check | PASS |
------------------------------------------------------------------------------
Basic VFC nslcm API Health Check | PASS |
------------------------------------------------------------------------------
Basic VFC resmgr API Health Check | PASS |
------------------------------------------------------------------------------
Basic VFC vnflcm API Health Check | PASS |
------------------------------------------------------------------------------
Basic VFC vnfmgr API Health Check | PASS |
------------------------------------------------------------------------------
Basic VFC vnfres API Health Check | PASS |
------------------------------------------------------------------------------
Basic VFC workflow API Health Check | PASS |
------------------------------------------------------------------------------
Basic VFC ztesdncdriver API Health Check | PASS |
------------------------------------------------------------------------------
Basic VFC ztevnfmdriver API Health Check | PASS |
------------------------------------------------------------------------------
Basic VID Health Check | PASS |
------------------------------------------------------------------------------
Basic VNFSDK Health Check | PASS |
------------------------------------------------------------------------------
Basic Holmes Rule Management API Health Check | FAIL |
502 != 200
------------------------------------------------------------------------------
Basic Holmes Engine Management API Health Check | FAIL |
502 != 200
------------------------------------------------------------------------------
Testsuites.Health-Check :: Testing ecomp components are available ... | FAIL |
51 critical tests, 47 passed, 4 failed
51 tests total, 47 passed, 4 failed
==============================================================================
Testsuites | FAIL |
51 critical tests, 47 passed, 4 failed
51 tests total, 47 passed, 4 failed
==============================================================================


Maybe there is something wrong in the APPC configuration in values.yaml (in /onap). Here is the config:


appc:
  enabled: false
  config:
    openStackType: OpenStackProvider
    openStackName: OpenStack
    openStackKeyStoneUrl: http://172.24.52.27/identity/v3
    openStackServiceTenantName: default
    openStackDomain: default
    openStackUserName: admin
    openStackEncryptedPassword: de7538d795a3604a29d3dd1e8689f131


I don't know how to fix this problem. Do you have any ideas?

Thanks!



    4 answers

    Answer 1

      Make sure you clean up the PVs and PVCs and remove the NFS mount contents before re-deploying.
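
      A rough sketch of that cleanup for APPC (the release name dev-appc and the /dockerdata-nfs mount path are assumptions based on the pod names and the standard OOM setup seen in this thread):

      # after helm delete/undeploy of the component, remove leftover claims and volumes
      kubectl -n onap get pvc | grep appc          # list leftover APPC claims
      kubectl -n onap delete pvc <appc-pvc-name>   # repeat for each claim
      kubectl get pv | grep appc                   # list released APPC volumes
      kubectl delete pv <appc-pv-name>             # repeat for each volume
      # clear the component's data on the NFS share before re-deploying
      sudo rm -rf /dockerdata-nfs/dev-appc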

      1. Frédéric Larocque

        Thanks,

        Yes I always make sure there is nothing left before redeploying but it's was not relate to PV or PVC. When I copied the file you uploaded this morning on the wiki, APPC test PASS (smile) . So maybe I made mistakes when I edited the secret.yaml and the statefulset.yaml


        For DCAE, I still get the same error (500). Do you suggest I open a separate question specifically about it?

      2. Vijay Venkatesh Kumar

        Frederic - For DCAE, can you run the following from the DCAE bootstrap pod (any other k8s pod with curl installed is also fine) and post the output, please?

        curl dcae-healthcheck

        The output returns the status of each individual DCAE component, which will help with further troubleshooting. More info on this API can be found here - https://docs.onap.org/en/latest/submodules/dcaegen2.git/docs/sections/healthcheck.html
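
        For reference, a minimal way to run that from outside the pod, assuming kubectl access to the cluster (the pod name pattern is taken from this deployment and may differ in yours):

        # find the bootstrap pod and call the dcae-healthcheck service from inside it
        BOOTSTRAP=$(kubectl -n onap get pods | grep dcae-bootstrap | awk '{print $1}')
        kubectl -n onap exec -it "$BOOTSTRAP" -- curl -s dcae-healthcheck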

      3. Frédéric Larocque

        Hi Vijay Venkatesh Kumar

        Here is the result of the "curl dcae-healthcheck" command:

        {"type":"summary","count":11,"ready":10,"items":[{"name":"dev-dcaegen2-dcae-cloudify-manager","ready":1,"unavailable":0},{"name":"dep-config-binding-service","ready":1,"unavailable":0},{"name":"dep-deployment-handler","ready":1,"unavailable":0},{"name":"dep-inventory","ready":1,"unavailable":0},{"name":"dep-service-change-handler","ready":0,"unavailable":1},{"name":"dep-policy-handler","ready":1,"unavailable":0},{"name":"dep-dcae-ves-collector","ready":1,"unavailable":0},{"name":"dep-dcae-tca-analytics","ready":1,"unavailable":0},{"name":"dep-dcae-prh","ready":1,"unavailable":0},{"name":"dep-dcae-hv-ves-collector","ready":1,"unavailable":0},{"name":"dep-dcae-datafile-collector","ready":1,"unavailable":0}]}

        There are only 11 items, and 10 are ready. On the page you linked there are 14. Do you suggest redeploying DCAE?

        Thanks for your help!



      4. Frédéric Larocque

        This seems related to dep-service-change-handler, which is unavailable.

      5. Vijay Venkatesh Kumar

        Frédéric Larocque - yes, you are correct. It looks like the ServiceChangeHandler (SCH) pod didn't come up clean. Can you share the logs and the kubectl describe output for this pod?

        SCH uses the SDC library, which has a dependency on DMaaP-MR. If MR gets restarted, we have seen the library connection fail, requiring SCH to be restarted as well.

        You could also try to delete the pod manually (using kubectl delete) and check if that clears the problem.
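
        For example (a sketch; the exact pod name differs per deployment, so look it up first):

        kubectl -n onap get pods | grep service-change-handler
        kubectl -n onap delete pod <dep-dcae-service-change-handler-pod-name>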

      6. Frédéric Larocque

        The problem is that the dep-service-change-handler is not there.

        Here is the output of "kubectl get pod --all-namespaces -o=wide"


        running_pod.txt

        health_test_result.txt


        SDC seems to be OK. There are some pods in Init:Error, but they are recreated and the new ones are marked as Completed or Running.
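
        A quick way to list any pods that are not Running or Completed (just a grep over the same output):

        kubectl -n onap get pods | grep -Ev 'Running|Completed'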

      7. Vijay Venkatesh Kumar

        Okay, that is strange. Both inventory and SCH get deployed under a single blueprint. Can you share the logs of the DCAE bootstrap pod?

        and also, from the DCAE bootstrap pod, the output of the following command:

        cfy deployments list
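
        If you are not already inside the pod, something like this should work (use the pod name returned by "kubectl -n onap get pods | grep dcae-bootstrap"):

        kubectl -n onap exec -it <dev-dcaegen2-dcae-bootstrap-pod> -- cfy deployments list
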
      8. Frédéric Larocque

        OK, I tried to redeploy DCAE, but nothing changed. So the logs below are from after the redeployment attempt:


        1) kubectl -n onap log -f dev-dcaegen2-dcae-bootstrap-7bc68b6b6-bgtnt dcae-bootstrap


        Result:

        dcae_bootstrap_log.txt

        There is an SSL connection error


        2) cfy deployments list

        For this command, I got an error.


        So I decided to run it in the DCAE Cloudify Manager pod (I don't know if that is correct).


        Here is the output


        Thanks

      9. Vijay Venkatesh Kumar

        Did you remove the existing DCAE deployments prior to doing a redeploy? Based on the error, it appears CM did not come up clean on the redeploy, and the bootstrap/other deployments could be stuck in a conflicting state with the previous deployment.

        Recovery from multiple failures will be more involved at this point. Basically, you need the following: after helm delete/undeploy of DCAE, remove any remaining DCAE pods manually (not all services are installed via helm, so helm delete should not be assumed to remove all DCAE components). Clean up /dockerdata-nfs and the PVs associated with DCAE before retriggering a new deploy.
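
        Sketched out, that DCAE-specific cleanup looks roughly like this (the release name dev-dcaegen2 and the /dockerdata-nfs path are assumptions based on the pod names in this thread):

        helm delete dev-dcaegen2 --purge                    # or "helm undeploy dev-dcaegen2" if the OOM deploy plugin is installed
        kubectl -n onap get deployments | grep '^dep-'      # components deployed by Cloudify, not helm
        kubectl -n onap delete deployment <dep-...-name>    # repeat for each leftover DCAE deployment
        kubectl -n onap get pvc | grep dcae                 # then remove the associated claims and volumes
        kubectl get pv | grep dcae
        sudo rm -rf /dockerdata-nfs/dev-dcaegen2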


        Or, if you can do a complete ONAP reinstall, that would be the cleaner approach. Deleting the onap namespace also ensures that all associated k8s resources are removed. Not sure if this is an option for you.



      10. Frédéric Larocque

        Okay, I understand

        https://onap.readthedocs.io/en/latest/submodules/dcaegen2.git/docs/sections/installation_oom.html

        Would using this approach be okay?

        • Find the Cloudify Manager pod identifier, using a command like:

          kubectl -n onap get pods | grep dcae-cloudify-manager

        • Execute the DCAE cleanup script on the Cloudify Manager pod, using a command like:

          kubectl -n onap exec cloudify-manager-pod-id -- /scripts/dcae-cleanup.sh

        • Finally, run helm undeploy against the DCAE Helm subrelease

        Then delete the PVs and PVCs, and remove /dockerdata-nfs/dev-dcaegen2.

      11. Vijay Venkatesh Kumar

        The procedure/script was introduced for Dublin. You can find the script here, https://git.onap.org/dcaegen2/deployments/tree/cm-container/scripts/dcae-cleanup.sh, if you want to try it, but as CM was redeployed, it might not identify all of the residual old deployments.

      12. Frédéric Larocque

        Hi Vijay Venkatesh Kumar,

        I decided to go with a fresh deployment. I rebuilt the system on my OpenStack, cleaned the NFS, and redeployed ONAP. The same thing happened: the SCH pod is still not present, but now the log of the DCAE bootstrap may give you a hint as to why this happened.


        kubectl -n onap log -f dev-dcaegen2-dcae-bootstrap-7bc68b6b6-9fnwm dcae-bootstrap

        dcaea_bootstrap_log.txt


        Thanks

      13. Frédéric Larocque

        And here is the result of the deployment list command in the bootstrap pod


      14. Vijay Venkatesh Kumar

        cc Jack Lucas

        Frédéric Larocque - From the logs, I see the following k8s error after the Inventory deployment was submitted; this seems to have impacted the inventory deployment (which is a prerequisite that must complete before ServiceChangeHandler gets deployed):

        Reason: Internal Server Error
        HTTP response headers: HTTPHeaderDict({'Date': 'Sat, 08 Jun 2019 01:22:51 GMT', 'Content-Length': '238', 'Content-Type': 'application/json'})
        HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"client: etcd cluster is unavailable or misconfigured; error #0: client: etcd member https://etcd.kubernetes.rancher.internal:2379 has no leader\n","code":500}
        ..

        2019-06-08 01:23:52.419  CFY <inventory> 'install' workflow execution failed: RuntimeError: Workflow failed: Task failed 'k8splugin.create_and_start_container_for_platforms' -> (500)
        Reason: Internal Server Error
        HTTP response headers: HTTPHeaderDict({'Date': 'Sat, 08 Jun 2019 01:23:44 GMT', 'Content-Length': '238', 'Content-Type': 'application/json'})
        HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"client: etcd cluster is unavailable or misconfigured; error #0: client: etcd member https://etcd.kubernetes.rancher.internal:2379 has no leader\n","code":500}

        The above error is something localized to your environment - it is not tied to the DCAE component deployment itself. From the logs it appears this may have impacted more than just inventory and SCH (possibly deployment_handler and policy_handler as well).
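
        As a quick sanity check of the cluster side (a sketch; kubectl get componentstatuses was still available on the Kubernetes versions used with Rancher at that time and includes etcd health):

        kubectl get componentstatuses
        kubectl get nodes -o wide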

        Did the dcae-healthcheck report all of the other DCAE pods as having been instantiated?


        If the issue is with just inventory (and SCH), you could try to exec into the dcaegen2-dcae-bootstrap pod and redeploy the inventory (and SCH) module; the steps are as follows.

        # Remove the current deployment and blueprint id from Cloudify

        • cfy uninstall inventory

        • cfy blueprints delete inventory

        # Upload the blueprint, create a new deployment, and execute the install

        • cfy blueprints upload -b inventory /blueprints/k8s-inventory.yaml
        • cfy deployments create -b inventory -i /inputs/k8s-inventory-inputs.yaml inventory
        • cfy executions start -d inventory install


        Note: If any other DCAE blueprint-deployed component also has to be redeployed, the same steps can be followed; just replace "inventory" with the required component name (as shown in the cfy deployments list output).
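
        For example, for the missing ServiceChangeHandler the sequence would look roughly like this (the deployment id and the blueprint/inputs file names are assumptions - confirm them against "cfy deployments list" and the contents of /blueprints and /inputs in the bootstrap pod):

        cfy uninstall service-change-handler
        cfy blueprints delete service-change-handler
        cfy blueprints upload -b service-change-handler /blueprints/k8s-service-change-handler.yaml
        cfy deployments create -b service-change-handler -i /inputs/k8s-service-change-handler-inputs.yaml service-change-handler
        cfy executions start -d service-change-handler install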


      15. Frédéric Larocque

        Thanks Vijay Venkatesh Kumar

        I think the error related to my environment, "client: etcd member https://etcd.kubernetes.rancher.internal:2379 has no leader",

        caused a lot of problems during the deployment. I keep getting this error from time to time.

        I have no idea how to get rid of this error. I followed the hardware recommendations (112 vCPUs and 224 GB of RAM) and the official tutorial for OOM with Rancher and OpenStack.

        Has anyone on your team encountered this error before?

        Thanks again for your help!




      16. Vijay Venkatesh Kumar

        Sorry, I've not encountered this error, and I'm not sure whether anyone from the DCAE team has faced it either. Did you check with Mike/the OOM team? You may also post this on the onap-discuss list and check whether anyone else has faced a similar issue and has a workaround.

    Answer 2

      Here is the statefulset.yaml:

apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: {{ include "common.fullname" . }}
  namespace: {{ include "common.namespace" . }}
  labels:
    app: {{ include "common.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  serviceName: "{{ .Values.service.name }}-cluster"
  replicas: {{ .Values.replicaCount }}
  podManagementPolicy: Parallel
  template:
    metadata:
      labels:
        app: {{ include "common.name" . }}
        release: {{ .Release.Name }}
    spec:
      imagePullSecrets:
      - name: "{{ include "common.namespace" . }}-docker-registry-key"
      initContainers:
      - command:
        - /root/ready.py
        args:
        - --container-name
        - {{.Values.config.mariadbGaleraContName}}
        env:
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        image: "{{ .Values.global.readinessRepository }}/{{ .Values.global.readinessImage }}"
        imagePullPolicy: {{ .Values.global.pullPolicy | default .Values.pullPolicy }}
        name: {{ include "common.name" . }}-readiness
      containers:
        - name: {{ include "common.name" . }}
          image: "{{ include "common.repository" . }}/{{ .Values.image }}"
          imagePullPolicy: {{ .Values.global.pullPolicy | default .Values.pullPolicy }}
          command:
          - /opt/appc/bin/startODL.sh
          ports:
          - containerPort: {{ .Values.service.internalPort }}
          - containerPort: {{ .Values.service.externalPort2 }}
          readinessProbe:
            exec:
              command:
              - /opt/appc/bin/health_check.sh
            initialDelaySeconds: {{ .Values.readiness.initialDelaySeconds }}
            periodSeconds: {{ .Values.readiness.periodSeconds }}
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: {{ template "common.fullname" . }}
                  key: db-root-password
            - name: SDNC_CONFIG_DIR
              value: "{{ .Values.config.configDir }}"
            - name: APPC_CONFIG_DIR
              value: "{{ .Values.config.configDir }}"
            - name: DMAAP_TOPIC_ENV
              value: "{{ .Values.config.dmaapTopic }}"
            - name: ENABLE_AAF
              value: "{{ .Values.config.enableAAF }}"
            - name: ENABLE_ODL_CLUSTER
              value: "{{ .Values.config.enableClustering }}"
            - name: APPC_REPLICAS
              value: "{{ .Values.replicaCount }}"
          volumeMounts:
          - mountPath: /etc/localtime
            name: localtime
            readOnly: true
          - mountPath: /opt/onap/appc/data/properties/dblib.properties
            name: onap-appc-data-properties
            subPath: dblib.properties
          - mountPath: /opt/onap/appc/data/properties/svclogic.properties
            name: onap-appc-data-properties
            subPath: svclogic.properties
          - mountPath: /opt/onap/appc/data/properties/appc.properties
            name: onap-appc-data-properties
            subPath: appc.properties
          - mountPath: /opt/onap/appc/data/properties/aaiclient.properties
            name: onap-appc-data-properties
            subPath: aaiclient.properties
          - mountPath: /opt/onap/appc/data/properties/cadi.properties
            name: onap-appc-data-properties
            subPath: cadi.properties
          - mountPath: /opt/onap/appc/data/properties/aaa-app-config.xml
            name: onap-appc-data-properties
            subPath: aaa-app-config.xml
          - mountPath: /opt/onap/appc/svclogic/config/svclogic.properties
            name: onap-appc-svclogic-config
            subPath: svclogic.properties
          - mountPath: /opt/onap/appc/svclogic/bin/showActiveGraphs.sh
            name: onap-appc-svclogic-bin
            subPath: showActiveGraphs.sh
          - mountPath: /opt/onap/appc/bin/startODL.sh
            name: onap-appc-bin
            subPath: startODL.sh
          - mountPath: /opt/onap/appc/bin/installAppcDb.sh
            name: onap-appc-bin
            subPath: installAppcDb.sh
          - mountPath: /opt/onap/appc/bin/installFeatures.sh
            name: onap-appc-bin
            subPath: installFeatures.sh
          - mountPath: /opt/onap/appc/bin/health_check.sh
            name: onap-appc-bin
            subPath: health_check.sh
          - mountPath: /opt/onap/ccsdk/data/properties/dblib.properties
            name: onap-sdnc-data-properties
            subPath: dblib.properties
          - mountPath: /opt/onap/ccsdk/data/properties/svclogic.properties
            name: onap-sdnc-data-properties
            subPath: svclogic.properties
          - mountPath: /opt/onap/ccsdk/data/properties/aaiclient.properties
            name: onap-sdnc-data-properties
            subPath: aaiclient.properties
          - mountPath: /opt/onap/ccsdk/svclogic/config/svclogic.properties
            name: onap-sdnc-svclogic-config
            subPath: svclogic.properties
          - mountPath: /opt/onap/ccsdk/svclogic/bin/showActiveGraphs.sh
            name: onap-sdnc-svclogic-bin
            subPath: showActiveGraphs.sh
          - mountPath: /opt/onap/ccsdk/bin/startODL.sh
            name: onap-sdnc-bin
            subPath: startODL.sh
          - mountPath: /opt/onap/ccsdk/bin/installSdncDb.sh
            name: onap-sdnc-bin
            subPath: installSdncDb.sh
          - mountPath: {{ .Values.persistence.mdsalPath }}
            name: {{ include "common.fullname" . }}-data
          - mountPath: /var/log/onap
            name: logs
          - mountPath: /opt/opendaylight/current/etc/org.ops4j.pax.logging.cfg
            name: log-config
            subPath: org.ops4j.pax.logging.cfg
          - mountPath: /opt/onap/appc/data/stores/org.onap.appc.p12
            name: certs
            subPath: org.onap.appc.p12
          resources:
{{ include "common.resources" . | indent 12 }}
        {{- if .Values.nodeSelector }}
        nodeSelector:
{{ toYaml .Values.nodeSelector | indent 10 }}
        {{- end -}}
        {{- if .Values.affinity }}
        affinity:
{{ toYaml .Values.affinity | indent 10 }}
        {{- end }}

        # side car containers
        - name: filebeat-onap
          image: "{{ .Values.global.loggingRepository }}/{{ .Values.global.loggingImage }}"
          imagePullPolicy: {{ .Values.global.pullPolicy | default .Values.pullPolicy }}
          volumeMounts:
          - mountPath: /usr/share/filebeat/filebeat.yml
            name: filebeat-conf
            subPath: filebeat.yml
          - mountPath: /var/log/onap
            name: logs
          - mountPath: /usr/share/filebeat/data
            name: data-filebeat
      volumes:
        - name: certs
          secret:
            secretName: {{ include "common.fullname" . }}-certs
        - name: localtime
          hostPath:
            path: /etc/localtime
        - name: filebeat-conf
          configMap:
            name: {{ include "common.fullname" . }}-filebeat
        - name: log-config
          configMap:
            name: {{ include "common.fullname" . }}-logging-cfg
        - name: logs
          emptyDir: {}
        - name: data-filebeat
          emptyDir: {}
        - name: onap-appc-data-properties
          configMap:
            name: {{ include "common.fullname" . }}-onap-appc-data-properties
        - name: onap-appc-svclogic-config
          configMap:
            name: {{ include "common.fullname" . }}-onap-appc-svclogic-config
        - name: onap-appc-svclogic-bin
          configMap:
            name: {{ include "common.fullname" . }}-onap-appc-svclogic-bin
            defaultMode: 0755
        - name: onap-appc-bin
          configMap:
            name: {{ include "common.fullname" . }}-onap-appc-bin
            defaultMode: 0755
        - name: onap-sdnc-data-properties
          configMap:
            name: {{ include "common.fullname" . }}-onap-sdnc-data-properties
        - name: onap-sdnc-svclogic-config
          configMap:
            name: {{ include "common.fullname" . }}-onap-sdnc-svclogic-config
        - name: onap-sdnc-svclogic-bin
          configMap:
            name: {{ include "common.fullname" . }}-onap-sdnc-svclogic-bin
            defaultMode: 0755
        - name: onap-sdnc-bin
          configMap:
            name: {{ include "common.fullname" . }}-onap-sdnc-bin
            defaultMode: 0755
  {{ if not .Values.persistence.enabled }}
        - name: {{ include "common.fullname" . }}-data
          emptyDir: {}
  {{ else }}
  volumeClaimTemplates:
  - metadata:
      name: {{ include "common.fullname" . }}-data
      labels:
        name: {{ include "common.fullname" . }}
    spec:
      accessModes: [ {{ .Values.persistence.accessMode }} ]
      storageClassName: {{ include "common.fullname" . }}-data
      resources:
        requests:
          storage: {{ .Values.persistence.size }}
  {{ end }}
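
      Since these templates are whitespace-sensitive, it can help to lint and render the chart locally before redeploying. A sketch, assuming an OOM checkout under ~/oom whose charts have already been built with OOM's make targets, and Helm 2 as used with Casablanca:

      helm lint ~/oom/kubernetes/appc
      helm template ~/oom/kubernetes/appc | less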

      Answer 3

        Here is the secrets.yaml:

apiVersion: v1
kind: Secret
metadata:
  name: {{ include "common.fullname" . }}
  namespace: {{ include "common.namespace" . }}
  labels:
    app: {{ include "common.fullname" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
type: Opaque
data:
  db-root-password: {{ .Values.config.mariadbRootPassword | b64enc | quote }}
---
apiVersion: v1
kind: Secret
metadata:
  name: {{ include "common.fullname" . }}-certs
  namespace: {{ include "common.namespace" . }}
  labels:
    app: {{ include "common.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
type: Opaque
data:
{{ tpl (.Files.Glob "resources/config/certs/*").AsSecrets . | indent 2 }}
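
        After a redeploy, a quick way to confirm the certificate files actually made it into the rendered -certs secret (the secret name is inferred from the pod names earlier in this thread, so adjust if yours differs):

        kubectl -n onap get secrets | grep appc
        kubectl -n onap describe secret dev-appc-appc-certs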

        Answer 4

          Those openStack... properties are not used for the health check.

          I am not sure about that "enabled: false" - is it from the values.yaml file?

          Follow this wiki page for the p12 file update; that might work for APPC:

          Modify APPC Helm Chart to override the pk12 certificate

          1. Frédéric Larocque

            Thanks, I'll try the link you provided and give you feedback.

            The "enabled: false", it's because I copied the text from a template I made where everything is set to false. Sorry for the confusion, in reality in the values.yaml, the enabled is set to "true".

          2. Frédéric Larocque

            Hi,

            So I tried to override the pk12 certificate, but it does not seem to work. APPC can't start with the new mount directory:


            Maybe it's because I didn't put it in the right place in statefulset.yaml.

            Can you take a look? (statefulset.yaml in attachment)

            Add the following lines in templates/statefulset.yaml under volumeMounts:

            ** I put this at the beginning of the volumeMounts **

            Add the following lines in templates/statefulset.yaml under volumes:

            ** I put this at the end of volumes **

            Statefulset.yaml


            Thanks

          3. Frédéric Larocque

            subPath is missing; I'll correct it and retry.

          4. Frédéric Larocque

            OK, so the pod dev-appc-appc-0 did not even appear after the deployment with this modification...
