Frankfurt Notes

In addition to the deploy/undeploy mechanisms from El Alto, the standard helm commands also work. For example, to upgrade a specific component like AAI without having to do a helm delete/helm deploy, you can use the following:


Example with aai charts

Precondition: 

helm deploy has previously been run with override files on the command line so that the .helm/plugins/deploy/cache has been populated

git clone the aai charts into an aai_oom directory (they are a recursive submodule of oom) and upgrade using these newly cloned charts
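
For example, a hedged sketch of that clone (repository path and branch are assumptions - use whatever your oom submodule actually points at):

git clone http://gerrit.onap.org/r/aai/oom aai_oom
cd aai_oom
git checkout frankfurt    # assumption: check out the branch/commit matching your deployed charts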

Helm command:

helm upgrade -i onap-aai  ./aai_oom --namespace onap --timeout 900 -f ${HOME}/.helm/plugins/deploy/cache/onap/global-overrides.yaml -f ${HOME}/.helm/plugins/deploy/cache/onap-subcharts/aai/subchart-overrides.yaml
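
To confirm the precondition above (the plugin cache is populated), you can simply list the override files referenced in the command:

ls ${HOME}/.helm/plugins/deploy/cache/onap/
ls ${HOME}/.helm/plugins/deploy/cache/onap-subcharts/aai/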

El Alto Notes with new Deploy/Undeploy plugin from OOM Team


helm deploy dev local/onap -f /root/integration-override.yaml --namespace onap

For slower cloud environments, add the public-cloud overrides to allow longer readiness intervals:

helm deploy dev local/onap -f /root/oom/kubernetes/onap/resources/environments/public-cloud.yaml -f /root/integration-override.yaml --namespace onap

Example per project, with SO:

helm deploy dev-so local/onap -f /root/oom/kubernetes/onap/resources/environments/public-cloud.yaml -f /root/integration-override.yaml --namespace onap  --verbose


If you are using the SNAPSHOT image override file:

helm deploy dev-sdnc local/onap -f /root/oom/kubernetes/onap/resources/environments/public-cloud.yaml -f /root/integration-override.yaml -f /root/integration/deployment/heat/onap-rke/staging-image-override.yaml --namespace onap --verbose


  1. After editing a chart 
    1. cd /root/oom/kubernetes
    2. make project
      1. note that for cds/sdnc you need to do make cds; make sdnc
    3. make onap
  2. helm del project --purge
    1. helm list -a to confirm it's gone
    2. also check PVCs for applications like sdnc/appc and use kubectl -n onap delete pvc to remove any remaining ones
      1. kubectl -n onap get pv  | grep project
      2. kubectl -n onap get pvc | grep  project
      3. ...
      4. "delete /dockerdata-nfs/dev-project"
    3. Cleanup the shared cassandra (aai, sdc) and the shared mariadb (sdnc, so)
    4. /root/integration/deployment/heat/onap-rke/cleanup.sh project (without the dev- prefix)
      1. example: ./cleanup.sh sdc
      2. this script cleans up the shared cassandra and mariadb as well as pvc, pv, jobs etc.
      3. if you get an error when doing aai or sdc, check that cassandra cleaned up correctly. There is a known problem where the cluster does not let schemas be replicated and you get a Timeout back in cleanup.sh
  3. Rebuild helm charts as necessary
    1. cd /root/oom/kubernetes
    2. make project
    3. make onap
  4. helm deploy dev local/onap -f /root/oom/kubernetes/onap/resources/environments/public-cloud.yaml -f /root/integration-override.yaml --namespace onap  --verbose
  5. list pods and ports (with k8 host)
    1. kubectl -n onap get pods -o=wide 
    2. kubectl -n onap get services
  6. Find out why a pod is stuck in Init or CrashLoopBackOff
    1. kubectl -n onap describe pod dev-blah-blah-blah
    2. kubectl -n onap logs dev-blah-blah-blah
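
A condensed sketch of steps 1-4 above for a single component, using sdc as the example project (release names assume the dev prefix used by helm deploy dev; paths are the integration lab defaults):

cd /root/oom/kubernetes
make sdc
make onap
helm del dev-sdc --purge
helm list -a                                  # confirm the release is gone
kubectl -n onap get pvc | grep sdc            # check for leftover pvc/pv
/root/integration/deployment/heat/onap-rke/cleanup.sh sdc
helm deploy dev local/onap -f /root/oom/kubernetes/onap/resources/environments/public-cloud.yaml -f /root/integration-override.yaml --namespace onap --verbose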



complete removal steps (same as Beijing) 

### Faster method to do a delete for reinstall


kubectl delete namespace onap

kubectl delete pods -n onap --all

kubectl delete secrets -n onap --all

kubectl delete persistentvolumes -n onap --all

kubectl -n onap delete clusterrolebindings --all

helm del --purge dev

helm list -a

helm del --purge dev-[project] ← use this if helm list -a shows lingering releases in DELETED state


if you have pods stuck terminating for a long time


kubectl delete pod --grace-period=0 --force --namespace onap --all


CDS Specific Notes (Dublin) - In the Dublin release, the CDS charts are added as a subchart in OOM. However, the deployment of the CDS charts is done as part of the SDN-C deployment in Dublin. Thus, if any changes are required in the CDS chart, take the following steps:  "make cds; make sdnc; make onap"



SDNC values.yaml in OOM (cds sub-chart configuration):

# dependency / sub-chart configuration
cds:
  enabled: true
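
A hedged example of toggling that flag at deploy time from the umbrella chart (assuming the sub-chart value is addressed as sdnc.cds.enabled; verify against your charts):

helm upgrade -i dev local/onap --namespace onap -f /root/integration-override.yaml --set sdnc.cds.enabled=false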



Beijing Notes 

kubectl config get-contexts

helm list

root@k8s:~# helm list
NAME  REVISION  UPDATED                   STATUS    CHART       NAMESPACE
dev   2         Mon Apr 16 23:01:06 2018  FAILED    onap-2.0.0  onap
dev   9         Tue Apr 17 12:59:25 2018  DEPLOYED  onap-2.0.0  onap

helm repo list
NAME    URL
stable  https://kubernetes-charts.storage.googleapis.com
local   http://127.0.0.1:8879

#helm upgrade -i dev local/onap --namespace onap -f onap/resources/environments/integration.yaml

helm upgrade -i dev local/onap --namespace onap -f integration-override.yaml


# to upgrade robot

# a config upgrade should use the local/onap syntax to let K8 decide based on the parent chart (local/onap)

helm upgrade -i dev local/onap --namespace onap -f integration-override.yaml

# if the docker container changes use the enabled:false/true toggle

helm upgrade -i dev local/onap --namespace onap -f integration-override.yaml --set robot.enabled=false
helm upgrade -i dev local/onap --namespace onap -f integration-override.yaml --set robot.enabled=true

# if both the config and the docker container change, use enabled:false, do the make component, make onap, then enabled:true

helm upgrade -i dev local/onap --namespace onap -f /root/integration-override.yaml --set robot.enabled=false

Confirm the assets are removed with get pods, get pv, get pvc, get secret, get configmap for those pieces you don't want to preserve

cd  /root/oom/kubernetes

make robot

make onap

helm upgrade -i dev local/onap --namespace onap -f /root/integration-override.yaml --set robot.enabled=true
kubectl get pods --all-namespaces -o=wide


# to check the status of a pod like the robot pod

kubectl -n onap describe pod dev-robot-5cfddf87fb-65zvv
pullPolicy: Always vs. IfNotPresent option to allow us to control whether the image is re-pulled when the pod is recreated
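
For reference, a hedged sketch of that setting as it typically appears in a component's values.yaml in OOM:

pullPolicy: Always          # always re-pull the image when the pod is recreated
# pullPolicy: IfNotPresent  # reuse the image already cached on the node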


### Faster method to do a delete for reinstall


kubectl delete namespace onap

kubectl delete pods -n onap --all

kubectl delete secrets -n onap --all

kubectl delete persistentvolumes -n onap --all

kubectl -n onap delete clusterrolebindings --all

helm del --purge dev

helm list -a

helm del --purge dev-[project] ← use this if helm list -a shows lingering releases in DELETED state


if you have pods stuck terminating for a long time


kubectl delete pod --grace-period=0 --force --namespace onap --all


# reinstall of the NAME=dev release

helm upgrade -i dev local/onap --namespace onap -f integration-override.yaml


To test with a smaller ConfigMap footprint, try disabling some components, for example:


helm upgrade -i dev local/onap --namespace onap -f /root/integration-override.yaml --set log.enabled=false --set clamp.enabled=false --set pomba.enabled=false --set vnfsdk.enabled=false

(aaf is needed by a lot of modules in Casablanca but this is a near equivalent)

helm upgrade  -i dev local/onap --namespace onap -f /root/integration-override.yaml --set log.enabled=false --set aaf.enabled=false --set pomba.enabled=false --set vnfsdk.enabled=false

Note: setting log.enabled=false means that you will need to hunt down the /var/log/onap logs on each docker container, instead of using the Kibana search on the ELK stack (deployed on port 30253) that consolidates all ONAP logs
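
If log.enabled=false, a hedged sketch of pulling those logs straight from a container (pod/container names are examples from this page; substitute your own):

kubectl -n onap exec -it dev-sdnc-0 -c sdnc -- ls /var/log/onap
kubectl cp onap/dev-sdnc-0:/var/log/onap ./sdnc-logs -c sdnc   # requires tar in the container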


## Slower method to delete full deploy

helm del dev --purge 

kubectl get pods --all-namespaces -o=wide

# look for all Terminating to be gone and wait till they are

kubectl -n onap get pvc

# look for persistent volume claims that have not been removed.

kubectl -n onap delete pvc  dev-sdnc-db-data-dev-sdnc-db-0

# dev-sdnc-db-data-dev-sdnc-db-0 is the NAME from the left column of the get pvc output


# same for pv (persistent volumes)

kubectl -n onap get pv
kubectl -n onap delete  pv  pvc-c0180abd-4251-11e8-b07c-02ee3a27e357

# same for pv, pvc, secret, configmap, services

kubectl get pods --all-namespaces -o=wide 
kubectl delete pod dev-sms-857f6dbd87-6lh9k -n onap   (stuck terminating pod)


# full install

# of the NAME=dev instance

helm upgrade -i dev local/onap --namespace onap -f integration-override.yaml

# update vm_properties.py
# robot/resources/config/eteshare/vm_properties.py
# cd to oom/kubernetes

Remember: Do the enabled=false BEFORE doing the make onap so that the kubectl processing will use the old chart to delete the POD
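
For example, the full sequence for robot (the same commands shown above, in the required order):

helm upgrade -i dev local/onap --namespace onap -f /root/integration-override.yaml --set robot.enabled=false
cd /root/oom/kubernetes
make robot
make onap
helm upgrade -i dev local/onap --namespace onap -f /root/integration-override.yaml --set robot.enabled=true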

#
# helm upgrade -i dev local/onap --namespace onap -f integration-override.yaml - this would just redeploy robot because it's configMap only


Container debugging commands


kubectl -n onap logs pod/dev-sdnc-0 -c sdnc
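
A few more commands in the same vein (pod/container names are examples; substitute your own):

kubectl -n onap describe pod dev-sdnc-0
kubectl -n onap logs dev-sdnc-0 -c sdnc --previous    # logs from the previously crashed container
kubectl -n onap exec -it dev-sdnc-0 -c sdnc -- /bin/sh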

21 Comments

  1. Brian, hi, can we post the git path to integration-override.yaml so we can keep in sync with the minimal (for vFW at least) onap deployment you guys are using?

    (non-closed loop - so removed appc, clamp - testing whether dcae is needed, even though Prudence Au found that the SDC-originated healthcheck requires DCAE even if we don't go all the way to vnf deployment and closed loop ops)

    aai, dmaap, log, msb, policy, pomba, portal, robot, sdc, sdnc, so = enabled


  2. Ports are OK after clamp fix

    1 hour (both sets of DCAEGEN2 secondary orchestration went through) - on a 256G single AWS VM (with override - Cloud Native Deployment#Changemax-podsfromdefault110podlimit)

    at 1 hour
    ubuntu@ip-172-31-20-218:~$ free
                  total        used        free      shared  buff/cache   available
    Mem:      251754696   111586672    45000724      193628    95167300   137158588
    ubuntu@ip-172-31-20-218:~$ kubectl get pods --all-namespaces | grep onap | wc -l
    164
    ubuntu@ip-172-31-20-218:~$ kubectl get pods --all-namespaces | grep onap | grep -E '1/1|2/2' | wc -l
    155
    ubuntu@ip-172-31-20-218:~$ kubectl get pods --all-namespaces | grep -E '0/|1/2' | wc -l
    8
    ubuntu@ip-172-31-20-218:~$ kubectl get pods --all-namespaces | grep -E '0/|1/2'
    onap          dep-dcae-ves-collector-59d4ff58f7-94rpq                 1/2       Running                 0          4m
    onap          onap-aai-champ-68ff644d85-rv7tr                         0/1       Running                 0          59m
    onap          onap-aai-gizmo-856f86d664-q5pvg                         1/2       CrashLoopBackOff        10         59m
    onap          onap-oof-85864d6586-zcsz5                               0/1       ImagePullBackOff        0          59m
    onap          onap-pomba-kibana-d76b6dd4c-sfbl6                       0/1       Init:CrashLoopBackOff   8          59m
    onap          onap-pomba-networkdiscovery-85d76975b7-mfk92            1/2       CrashLoopBackOff        11         59m
    onap          onap-pomba-networkdiscoveryctxbuilder-c89786dfc-qnlx9   1/2       CrashLoopBackOff        10         59m
    onap          onap-vid-84c88db589-8cpgr                               1/2       CrashLoopBackOff        9          59m
  3. Getting an error on make onap in one of my environments where it complains about a portal chart that I haven't touched, so I think it's something outside the chart.

    ...

    Downloading vnfsdk from repo http://127.0.0.1:8879
    Deleting outdated charts
    ==> Linting onap
    [ERROR] templates/: render error in "onap/charts/portal/charts/portal-zookeeper/templates/service.yaml": template: onap/charts/portal/charts/portal-zookeeper/templates/service.yaml:19:11: executing "onap/charts/portal/charts/portal-zookeeper/templates/service.yaml" at <include "common.serv...>: error calling include: template: onap/charts/common/templates/_service.tpl:30:27: executing "common.servicename" at <.Values.service.name>: can't evaluate field name in type interface {}



  4. The uninstallation procedure doesn't work for us; we need some way to clean the kubernetes stack reliably.

    What we are getting after cleaning all stuff (inc also deployments.extensions)

    e.g.

    helm deploy dev local/onap --namespace onap --debug --verbose

    Error: release dev-appc failed: deployments.extensions "dev-appc-appc-dgbuilder" already exists

    some (random) components end up in a FAILED state due to a collision on something that was not cleaned ....

    e.g.
    [tiller] 2019/01/23 14:35:58 warning: Release "dev-sdnc" failed: deployments.extensions "dev-sdnc-sdnc-dgbuilder" already exists
    [storage] 2019/01/23 14:35:58 updating release "dev-sdnc.v1"
    [tiller] 2019/01/23 14:35:58 failed install perform step: release dev-sdnc failed: deployments.extensions "dev-sdnc-sdnc-dgbuilder" already exists


    we even reinstalled all k8s nodes, keeping just the node with rancher-server, but we still can't find a reliable way to restore the env ...

    has anyone else had such troubles? any hint on how to clean k8s properly is welcome ....


  5. Was asked about issues with rogue pods during undeploys in the last meet - OOM Meeting Notes - 2019-01-23

    team note: for cycling an entire onap deployment - in the TSC-25 meet (Thursdays at 1230 EST) we, with the LF, will shortly be running a full or partial redeploy every 2 to 7 hours on the same cluster - details in https://git.onap.org/logging-analytics/tree/deploy/cd.sh#n129, described in https://wiki.onap.org/display/DW/Cloud+Native+Deployment for TSC-25


    for reference - jenkins deploy logs on http://jenkins.onap.info/

    Note that the workaround for out-of-band hanging pvs between the undeploy/deploy, or for a pod bounce via --set appc.enabled=false|true, is to delete all or specific pv/pvc in between

    Note the managed deployment order - or just put some delay between each deploy

    Note the very infrequently needed force purge of hanging Terminating pods in the cd.sh script as well.
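
    A hedged sketch of that pv/pvc cleanup between the undeploy and redeploy (appc as the example component):

    kubectl -n onap get pvc | grep appc | awk '{print $1}' | xargs -r kubectl -n onap delete pvc
    kubectl get pv | grep appc | awk '{print $1}' | xargs -r kubectl delete pv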



  6. Hi All,


    I was trying to install the ONAP Casablanca release using the links below:

    https://gerrit.onap.org/r/gitweb?p=oom.git;a=blob;f=kubernetes/README.md

    https://docs.onap.org/en/casablanca/submodules/oom.git/docs/oom_setup_kubernetes_rancher.html#onap-on-kubernetes-with-rancher


    But I am not able to install ONAP using the nexus repo. I am trying to use the minimal-deployment.yaml file for VNF spawning. Below are the errors:

    root@kubernetes:/home/ubuntu# kubectl describe po dev-aai-aai-schema-service-7d545fd565-tzq4t -n onap
    Name: dev-aai-aai-schema-service-7d545fd565-tzq4t
    Namespace: onap
    Node: localhost/172.19.51.202
    Start Time: Wed, 01 May 2019 15:43:58 +0000
    Labels: app=aai-schema-service
    pod-template-hash=3810198121
    release=dev-aai
    Annotations: checksum/config=de556424c5e051ed5b7ffb86e02d72473279ccecc54fab78c5954f67f83f8bcf
    kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"onap","name":"dev-aai-aai-schema-service-7d545fd565","uid":"eecdfd1c-6c27-11e9-ba...
    Status: Pending
    IP: 10.42.119.74
    Created By: ReplicaSet/dev-aai-aai-schema-service-7d545fd565
    Controlled By: ReplicaSet/dev-aai-aai-schema-service-7d545fd565
    Containers:
    aai-schema-service:
    Container ID:
    Image: nexus3.onap.org:10001/onap/aai-schema-service:1.0-STAGING-latest
    Image ID:
    Ports: 8452/TCP, 5005/TCP
    State: Waiting
    Reason: ErrImagePull
    Ready: False
    Restart Count: 0
    Readiness: tcp-socket :8452 delay=60s timeout=1s period=10s #success=1 #failure=3
    Environment:
    LOCAL_USER_ID: 1000
    LOCAL_GROUP_ID: 1000
    Mounts:
    /etc/localtime from localtime (ro)
    /opt/aai/logroot/AAI-SS from dev-aai-aai-schema-service-logs (rw)
    /opt/app/aai-schema-service/resources/application.properties from springapp-conf (rw)
    /opt/app/aai-schema-service/resources/etc/appprops/aaiconfig.properties from aaiconfig-conf (rw)
    /opt/app/aai-schema-service/resources/etc/auth/aai_keystore from auth-truststore-sec (rw)
    /opt/app/aai-schema-service/resources/etc/auth/realm.properties from realm-conf (rw)
    /opt/app/aai-schema-service/resources/localhost-access-logback.xml from localhost-access-log-conf (rw)
    /opt/app/aai-schema-service/resources/logback.xml from dev-aai-aai-schema-service-log-conf (rw)
    /var/run/secrets/kubernetes.io/serviceaccount from default-token-tcdfh (ro)
    filebeat-onap:
    Container ID: docker://f017f17a263f55165fdc56622488824929e4dd5ba8c11f34fd30c57c24188721
    Image: docker.elastic.co/beats/filebeat:5.5.0
    Image ID: docker-pullable://docker.elastic.co/beats/filebeat@sha256:fe7602b641ed8ee288f067f7b31ebde14644c4722d9f7960f176d621097a5942
    Port: <none>
    State: Running
    Started: Wed, 01 May 2019 15:50:40 +0000
    Ready: True
    Restart Count: 0
    Environment: <none>
    Mounts:
    /usr/share/filebeat/data from dev-aai-aai-schema-service-filebeat (rw)
    /usr/share/filebeat/filebeat.yml from filebeat-conf (rw)
    /var/log/onap from dev-aai-aai-schema-service-logs (rw)
    /var/run/secrets/kubernetes.io/serviceaccount from default-token-tcdfh (ro)
    Conditions:
    Type Status
    Initialized True
    Ready False
    PodScheduled True
    Volumes:
    aai-common-aai-auth-mount:
    Type: Secret (a volume populated by a Secret)
    SecretName: aai-common-aai-auth
    Optional: false
    localtime:
    Type: HostPath (bare host directory volume)
    Path: /etc/localtime
    filebeat-conf:
    Type: ConfigMap (a volume populated by a ConfigMap)
    Name: aai-filebeat
    Optional: false
    dev-aai-aai-schema-service-logs:
    Type: EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    dev-aai-aai-schema-service-filebeat:
    Type: EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    dev-aai-aai-schema-service-log-conf:
    Type: ConfigMap (a volume populated by a ConfigMap)
    Name: dev-aai-aai-schema-service-log
    Optional: false
    localhost-access-log-conf:
    Type: ConfigMap (a volume populated by a ConfigMap)
    Name: dev-aai-aai-schema-service-localhost-access-log-configmap
    Optional: false
    springapp-conf:
    Type: ConfigMap (a volume populated by a ConfigMap)
    Name: dev-aai-aai-schema-service-springapp-configmap
    Optional: false
    aaiconfig-conf:
    Type: ConfigMap (a volume populated by a ConfigMap)
    Name: dev-aai-aai-schema-service-aaiconfig-configmap
    Optional: false
    realm-conf:
    Type: ConfigMap (a volume populated by a ConfigMap)
    Name: dev-aai-aai-schema-service-realm-configmap
    Optional: false
    auth-truststore-sec:
    Type: Secret (a volume populated by a Secret)
    SecretName: aai-common-truststore
    Optional: false
    default-token-tcdfh:
    Type: Secret (a volume populated by a Secret)
    SecretName: default-token-tcdfh
    Optional: false
    QoS Class: BestEffort
    Node-Selectors: <none>
    Tolerations: node.alpha.kubernetes.io/notReady:NoExecute for 300s
    node.alpha.kubernetes.io/unreachable:NoExecute for 300s
    Events:
    Type Reason Age From Message
    ---- ------ ---- ---- -------
    Normal Scheduled 9m default-scheduler Successfully assigned dev-aai-aai-schema-service-7d545fd565-tzq4t to localhost
    Normal SuccessfulMountVolume 9m kubelet, localhost MountVolume.SetUp succeeded for volume "dev-aai-aai-schema-service-logs"
    Normal SuccessfulMountVolume 9m kubelet, localhost MountVolume.SetUp succeeded for volume "dev-aai-aai-schema-service-filebeat"
    Normal SuccessfulMountVolume 9m kubelet, localhost MountVolume.SetUp succeeded for volume "localtime"
    Normal SuccessfulMountVolume 9m kubelet, localhost MountVolume.SetUp succeeded for volume "filebeat-conf"
    Normal SuccessfulMountVolume 9m kubelet, localhost MountVolume.SetUp succeeded for volume "realm-conf"
    Normal SuccessfulMountVolume 9m kubelet, localhost MountVolume.SetUp succeeded for volume "dev-aai-aai-schema-service-log-conf"
    Normal SuccessfulMountVolume 9m kubelet, localhost MountVolume.SetUp succeeded for volume "localhost-access-log-conf"
    Normal SuccessfulMountVolume 9m kubelet, localhost MountVolume.SetUp succeeded for volume "aaiconfig-conf"
    Normal SuccessfulMountVolume 9m kubelet, localhost MountVolume.SetUp succeeded for volume "springapp-conf"
    Normal SuccessfulMountVolume 8m (x3 over 8m) kubelet, localhost (combined from similar events): MountVolume.SetUp succeeded for volume "default-token-tcdfh"
    Warning Failed 6m kubelet, localhost Failed to pull image "nexus3.onap.org:10001/onap/aai-schema-service:1.0-STAGING-latest": rpc error: code = Unknown desc = Error while pulling image: Get http://nexus3.onap.org:10001/v1/repositories/onap/aai-schema-service/images: dial tcp 199.204.45.137:10001: getsockopt: no route to host
    Normal Pulling 6m kubelet, localhost pulling image "docker.elastic.co/beats/filebeat:5.5.0"
    Normal Pulled 2m kubelet, localhost Successfully pulled image "docker.elastic.co/beats/filebeat:5.5.0"
    Warning FailedSync 2m kubelet, localhost Error syncing pod
    Normal Created 2m kubelet, localhost Created container
    Normal Started 2m kubelet, localhost Started container
    Normal Pulling 2m (x2 over 8m) kubelet, localhost pulling image "nexus3.onap.org:10001/onap/aai-schema-service:1.0-STAGING-latest"
    root@kubernetes:/home/ubuntu# docker pull nexus3.onap.org/onap/aai-schema-service:1.0-STAGING-latest
    Pulling repository nexus3.onap.org/onap/aai-schema-service
    Error: image onap/aai-schema-service:1.0-STAGING-latest not found

    Do we have the required images in the nexus repo? If not, how are we deploying ONAP components? All the pods are throwing an ErrImagePull issue

    NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
    kube-system heapster-76b8cd7b5-7zbzh 1/1 Running 11 5d 10.42.225.253 localhost
    kube-system kube-dns-5d7b4487c9-r8v5k 3/3 Running 30 5d 10.42.94.154 localhost
    kube-system kubernetes-dashboard-f9577fffd-vlmnw 1/1 Running 10 5d 10.42.26.12 localhost
    kube-system monitoring-grafana-997796fcf-x8ldh 1/1 Running 10 5d 10.42.169.42 localhost
    kube-system monitoring-influxdb-56fdcd96b-9675t 1/1 Running 11 5d 10.42.99.139 localhost
    kube-system tiller-deploy-dccdb6fd9-8zwz6 1/1 Running 4 1d 10.42.202.241 localhost
    onap dev-aai-aai-67dc87974b-t4s8j 0/1 Init:ErrImagePull 0 3h 10.42.94.161 localhost
    onap dev-aai-aai-babel-7bf9b8cd55-z7ss7 0/2 ErrImagePull 0 3h 10.42.11.69 localhost
    onap dev-aai-aai-champ-7df94789d5-f4xm9 0/2 Init:ErrImagePull 0 3h 10.42.127.130 localhost
    onap dev-aai-aai-data-router-79dc5c98ff-b44ll 0/2 Init:ErrImagePull 0 3h 10.42.58.117 localhost
    onap dev-aai-aai-elasticsearch-55bb9dbb6c-bv5hl 0/1 Init:ErrImagePull 0 3h 10.42.211.63 localhost
    onap dev-aai-aai-gizmo-5bb859b7db-65v2v 0/2 ErrImagePull 0 3h 10.42.133.140 localhost
    onap dev-aai-aai-graphadmin-76dc9c4574-826t4 0/2 Init:ErrImagePull 0 3h 10.42.58.205 localhost
    onap dev-aai-aai-graphadmin-create-db-schema-mm2xh 0/1 Init:ErrImagePull 0 3h 10.42.65.67 localhost
    onap dev-aai-aai-modelloader-6f79dbd958-jkjfv 0/2 ErrImagePull 0 3h 10.42.62.17 localhost
    onap dev-aai-aai-resources-5cfc9c854b-g8hlz 0/2 Init:ErrImagePull 0 3h 10.42.47.31 localhost
    onap dev-aai-aai-schema-service-7d545fd565-qknrc 0/2 ErrImagePull 0 3h 10.42.32.28 localhost
    onap dev-aai-aai-search-data-6fb67f5b8c-kqpb6 0/2 ErrImagePull 0 3h 10.42.238.99 localhost
    onap dev-aai-aai-sparky-be-6b6c764dbf-qrvlt 0/2 Init:ErrImagePull 0 3h 10.42.11.139 localhost
    onap dev-aai-aai-spike-7fbd58df9c-d4cnj 0/2 Init:ErrImagePull 0 3h 10.42.82.137 localhost
    onap dev-aai-aai-traversal-574d9495c4-9fm74 0/2 Init:ErrImagePull 0 3h 10.42.44.10 localhost
    onap dev-aai-aai-traversal-update-query-data-lgpg8 0/1 Init:ErrImagePull 0 3h 10.42.42.141 localhost
    onap dev-dmaap-dbc-pg-0 0/1 Init:ErrImagePull 0 3h 10.42.60.58 localhost
    onap dev-dmaap-dbc-pgpool-64ffd87f58-8rbsx 0/1 ErrImagePull 0 3h 10.42.225.133 localhost
    onap dev-dmaap-dbc-pgpool-64ffd87f58-hmg9q 0/1 ErrImagePull 0 3h 10.42.132.89 localhost
    onap dev-dmaap-dmaap-bc-86f795c7d7-rzd66 0/1 Init:ErrImagePull 0 3h 10.42.6.134 localhost
    onap dev-dmaap-dmaap-bc-post-install-cklt6 0/1 ErrImagePull 0 3h 10.42.6.109 localhost
    onap dev-dmaap-dmaap-dr-db-0 0/1 Init:ErrImagePull 0 3h 10.42.148.194 localhost
    onap dev-dmaap-dmaap-dr-node-0 0/2 Init:ErrImagePull 0 3h 10.42.215.234 localhost
    onap dev-dmaap-dmaap-dr-prov-6cb4fdf5f5-247lk 0/2 Init:ErrImagePull 0 3h 10.42.33.228 localhost
    onap dev-dmaap-message-router-0 0/1 Init:ErrImagePull 0 3h 10.42.189.4 localhost
    onap dev-dmaap-message-router-kafka-0 0/1 Init:ErrImagePull 0 3h 10.42.197.123 localhost
    onap dev-dmaap-message-router-kafka-1 0/1 Init:ErrImagePull 0 3h 10.42.160.21 localhost
    onap dev-dmaap-message-router-kafka-2 0/1 Init:ErrImagePull 0 3h 10.42.65.237 localhost
    onap dev-dmaap-message-router-mirrormaker-5879bcc59c-kk87r 0/1 Init:ErrImagePull 0 3h 10.42.88.225 localhost
    onap dev-dmaap-message-router-zookeeper-0 0/1 Init:ErrImagePull 0 3h 10.42.99.148 localhost
    onap dev-dmaap-message-router-zookeeper-1 0/1 Init:ErrImagePull 0 3h 10.42.20.211 localhost
    onap dev-dmaap-message-router-zookeeper-2 0/1 Init:ErrImagePull 0 3h 10.42.196.212 localhost
    onap dev-nfs-provisioner-nfs-provisioner-c55796c8-8rz78 0/1 ErrImagePull 0 3h 10.42.97.104 localhost
    onap dev-portal-portal-app-6bb6f9fc84-7ts28 0/2 Init:ErrImagePull 0 3h 10.42.242.31 localhost
    onap dev-portal-portal-cassandra-56b589b85d-bkzn7 0/1 ErrImagePull 0 3h 10.42.197.68 localhost
    onap dev-portal-portal-db-64d77c6965-cdt8z 0/1 ErrImagePull 0 3h 10.42.119.237 localhost
    onap dev-portal-portal-db-config-xdn45 0/2 Init:ErrImagePull 0 3h 10.42.23.217 localhost
    onap dev-portal-portal-sdk-74ddb7b88b-hzw47 0/2 Init:ErrImagePull 0 3h 10.42.186.32 localhost
    onap dev-portal-portal-widget-7c88cc5644-5jt7p 0/1 Init:ErrImagePull 0 3h 10.42.40.35 localhost
    onap dev-portal-portal-zookeeper-dc97c5cb8-rrcmp 0/1 ErrImagePull 0 3h 10.42.179.159 localhost
    onap dev-robot-robot-5f6f964796-7lfjs 0/1 ErrImagePull 0 3h 10.42.101.201 localhost
    onap dev-sdc-sdc-be-7bd86986c-92ns4 0/2 Init:ErrImagePull 0 3h 10.42.217.170 localhost
    onap dev-sdc-sdc-be-config-backend-2wbr7 0/1 Init:ErrImagePull 0 3h 10.42.95.216 localhost
    onap dev-sdc-sdc-cs-6b6df5b7bf-gqd6v 0/1 ErrImagePull 0 3h 10.42.216.158 localhost
    onap dev-sdc-sdc-cs-config-cassandra-n95rr 0/1 Init:ErrImagePull 0 3h 10.42.101.76 localhost
    onap dev-sdc-sdc-dcae-be-5dbf4958d5-mngdg 0/2 Init:ErrImagePull 0 3h 10.42.45.18 localhost
    onap dev-sdc-sdc-dcae-be-tools-wcx5p 0/1 Init:ErrImagePull 0 3h 10.42.170.29 localhost
    onap dev-sdc-sdc-dcae-dt-5486658bd4-q4jgh 0/2 Init:ErrImagePull 0 3h 10.42.236.131 localhost
    onap dev-sdc-sdc-dcae-fe-7bf7bb6868-w942h 0/2 Init:ErrImagePull 0 3h 10.42.106.10 localhost
    onap dev-sdc-sdc-dcae-tosca-lab-7f464d664-59s5r 0/2 Init:ErrImagePull 0 3h 10.42.211.93 localhost
    onap dev-sdc-sdc-es-794fbfdc-cfrk2 0/1 ErrImagePull 0 3h 10.42.171.32 localhost
    onap dev-sdc-sdc-es-config-elasticsearch-jfzdk 0/1 Init:ErrImagePull 0 3h 10.42.133.249 localhost
    onap dev-sdc-sdc-fe-6dbf9b9499-vddzz 0/2 Init:ErrImagePull 0 3h 10.42.56.11 localhost
    onap dev-sdc-sdc-kb-857697d4b9-j4xrq 0/1 Init:ErrImagePull 0 3h 10.42.176.202 localhost
    onap dev-sdc-sdc-onboarding-be-78b9b774d7-hntlc 0/2 Init:ErrImagePull 0 3h 10.42.59.151 localhost
    onap dev-sdc-sdc-onboarding-be-cassandra-init-4zg7x 0/1 Init:ErrImagePull 0 3h 10.42.146.197 localhost
    onap dev-sdc-sdc-wfd-be-84875d7cbc-hp8lq 0/1 Init:ErrImagePull 0 3h 10.42.74.131 localhost
    onap dev-sdc-sdc-wfd-be-workflow-init-b8hb4 0/1 Init:ErrImagePull 0 3h 10.42.201.111 localhost
    onap dev-sdc-sdc-wfd-fe-75b667c9d4-b5mvp 0/2 Init:ErrImagePull 0 3h 10.42.60.77 localhost
    onap dev-sdnc-cds-blueprints-processor-5d8b7df7c9-zxcvd 0/1 Init:ErrImagePull 0 3h 10.42.224.132 localhost
    onap dev-sdnc-cds-command-executor-64b8df54b6-gg6hq 0/1 Init:ErrImagePull 0 3h 10.42.31.166 localhost
    onap dev-sdnc-cds-controller-blueprints-599bf864f8-ss6z6 0/1 Init:ErrImagePull 0 3h 10.42.181.199 localhost
    onap dev-sdnc-cds-db-0 0/1 Init:ErrImagePull 0 3h 10.42.208.121 localhost
    onap dev-sdnc-cds-ui-69b899bc56-9r69r 0/1 ErrImagePull 0 3h 10.42.215.100 localhost
    onap dev-sdnc-nengdb-0 0/1 Init:ErrImagePull 0 3h 10.42.24.96 localhost
    onap dev-sdnc-network-name-gen-5b54568465-lfbkh 0/1 Init:ErrImagePull 0 3h 10.42.177.232 localhost
    onap dev-sdnc-sdnc-0 0/2 Init:ErrImagePull 0 3h 10.42.39.93 localhost
    onap dev-sdnc-sdnc-ansible-server-56bc6fcd6-h6m4z 0/1 Init:ErrImagePull 0 3h 10.42.233.133 localhost
    onap dev-sdnc-sdnc-dgbuilder-55cf7b4d7d-v82l6 0/1 Init:ErrImagePull 0 3h 10.42.8.213 localhost
    onap dev-sdnc-sdnc-dmaap-listener-66585656b5-v7z4w 0/1 Init:ErrImagePull 0 3h 10.42.24.3 localhost
    onap dev-sdnc-sdnc-ueb-listener-789664f965-jz66b 0/1 Init:ErrImagePull 0 3h 10.42.26.58 localhost
    onap dev-so-so-56bdcf95fc-w596f 0/1 Init:ErrImagePull 0 3h 10.42.27.80 localhost
    onap dev-so-so-bpmn-infra-cff9cb58f-mb7qb 0/1 Init:ErrImagePull 0 3h 10.42.250.85 localhost
    onap dev-so-so-catalog-db-adapter-6c99c756fd-4tz8z 0/1 Init:ErrImagePull 0 3h 10.42.199.93 localhost
    onap dev-so-so-mariadb-config-job-6j9cl 0/1 Init:ErrImagePull 0 3h 10.42.140.183 localhost
    onap dev-so-so-monitoring-5f779f77d9-rlml2 0/1 Init:ErrImagePull 0 3h 10.42.140.167 localhost
    onap dev-so-so-openstack-adapter-7c4b76f694-jk7hf 0/1 Init:ErrImagePull 0 3h 10.42.229.180 localhost
    onap dev-so-so-request-db-adapter-74d8bbd8bd-mwktt 0/1 Init:ErrImagePull 0 3h 10.42.169.84 localhost
    onap dev-so-so-sdc-controller-bcccff948-rbbrw 0/1 Init:ErrImagePull 0 3h 10.42.61.156 localhost
    onap dev-so-so-sdnc-adapter-86c457f8b9-hhqst 0/1 ErrImagePull 0 3h 10.42.0.192 localhost
    onap dev-so-so-vfc-adapter-78d6cbff45-kv9b2 0/1 Init:ErrImagePull 0 3h 10.42.218.251 localhost
    onap dev-so-so-vnfm-adapter-79467b8754-cvcnl 0/1 ErrImagePull 0 3h 10.42.155.68 localhost
    onap dev-vid-vid-844f85cdf5-4c6jb 0/2 Init:ErrImagePull 0 3h 10.42.71.181 localhost
    onap dev-vid-vid-galera-config-n46pk 0/1 Init:ErrImagePull 0 3h 10.42.68.120 localhost
    onap dev-vid-vid-mariadb-galera-0 0/1 ErrImagePull 0 3h 10.42.170.206 localhost


    1. Did you check that the image being requested is available in nexus3 ?

      https://nexus3.onap.org/#browse/browse:docker.release

      for release images shows tags 1.0.1 to 1.0.5

      snapshot shows a 1.0-STAGING-latest but that seems like a very old image and not sure it would be served up on port 10001

      (nexus ports confuse me)

    2. FYI aai-schema-service was introduced in Dublin release. It should not be in Casablanca Maintenance Release.

      If you are seeing it at this stage, then it means you have pulled from the master branch instead of Casablanca branch.


  7. Hi Brian

    Thanks for replying. I am not sure why the image being queried from Nexus is this old, when ideally the command below should not pull an image this old

    git clone -b casablanca http://gerrit.onap.org/r/oom

    To add, I am also not sure why these are mostly just manifests and tags of 2 or 3 KB in size

    v2/onap/aai-schema-service/manifests/1.0.1 : Size 2.1 KB

    Can you please point me to the page that you used for installation? 1.0-STAGING-latest is not even available in Nexus.
    1. https://docs.onap.org/en/casablanca/submodules/integration.git/docs/index.html?highlight=docker-manifest

      Make sure the image references in docker-manifest.csv are used.


      cd /root/integration/version-manifest/src/main/resources
      cp docker-manifest.csv docker-manifest-custom.csv
      
      # customize docker-manifest-custom.csv per your requirements
      
      ../scripts/update-oom-image-versions.sh ./docker-manifest-custom.csv /root/oom/
      
      cd /root/oom/kubernetes/
      git diff # verify that the desired docker image changes are applied successfully
      make all # recompile the helm charts
  8. Hi Brian,


    Thanks for your reply. I tried the default csv and am getting the same "ErrImagePull" error for minimal-deployment.yaml. The next step that I can think of is to identify the versions that are available in the Nexus repo, but the challenge that I foresee is compatibility issues. Can you please guide me towards the compatible image tags for the ONAP components (Casablanca release) so as to avoid compatibility errors. 

  9. look in nexus3.onap.org in the Release repo and you should see the available versions. Try to pull them manually via docker and see if it's a connectivity problem
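
    For example (image and tag taken from the Casablanca aai charts listed below; substitute the one you are debugging):

    docker pull nexus3.onap.org:10001/onap/aai-resources:1.3.5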

  10. if you 'git clone -b 3.0.2-ONAP http://gerrit.onap.org/r/oom' you should get the casablanca MR release helm charts and they should have a consistent
    set of image tag references. If you get an ErrImagePull problem please post to onap-discuss

    BTW I am not seeing aai-schema-service in the list of images to pull ?

    root@njcdtl01bf1936:/tmp/oom/kubernetes/aai# grep -R image: * | grep values
    charts/aai-spike/values.yaml:image: onap/spike:1.3.1
    charts/aai-champ/values.yaml:image: onap/champ:1.3.1
    charts/aai-data-router/values.yaml:image: onap/data-router:1.3.3
    charts/aai-resources/values.yaml:image: onap/aai-resources:1.3.5
    charts/aai-elasticsearch/values.yaml:image: elasticsearch/elasticsearch:6.1.2
    charts/aai-traversal/values.yaml:image: onap/aai-traversal:1.3.4
    charts/aai-cassandra/values.yaml:image: cassandra:2.1
    charts/aai-graphadmin/values.yaml:image: onap/aai-graphadmin:1.0.4
    charts/aai-gizmo/values.yaml:image: onap/gizmo:1.3.2
    charts/aai-search-data/values.yaml:image: onap/search-data-service:1.3.2
    charts/aai-sparky-be/values.yaml:image: onap/sparky-be:1.3.2
    charts/aai-modelloader/values.yaml:image: onap/model-loader:1.3.2
    charts/aai-babel/values.yaml:image: onap/babel:1.3.3
    values.yaml: image: onap/fproxy:2.1-STAGING-latest
    values.yaml: image: onap/rproxy:2.1-STAGING-latest
    values.yaml: image: onap/tproxy-config:2.1-STAGING-latest
    values.yaml:image: aaionap/haproxy:1.2.4
    1. FYI aai-schema-service was introduced in Dublin release. It should not be in Casablanca Maintenance Release.


  11. Thanks Brian for the inputs, will post back with answers on the same.

  12. Hi Brian,


    We tried using the option you suggested. We used the Maintenance release 3.0.2 and are trying to install only sdc for now (so only kept sdc as true in values.yaml.)

    But after helm deploy, we have been getting below error:

    Failed to pull image "nexus3.onap.org:10001/onap/sdc-elasticsearch:1.4.0": rpc error: code = Unknown desc = Error while pulling image: Get http://nexus3.onap.org:10001/v1/repositories/onap/sdc-elasticsearch/images: dial tcp 199.204.45.137:10001: getsockopt: no route to host

    Image exists in Nexus repo


    We tried a lookup to see if DNS is resolving properly; can you please confirm whether the host resolution is fine.


    We tried to manually build the sdc project from git: 

    https://github.com/onap/sdc.git from Casablanca branch but we are not even able to pull the base image

    Unable to pull 'onap/base_sdc-jetty:1.4.1' from registry 'nexus3.onap.org:10001' : Error while pulling image: Get http://nexus3.onap.org:10001/v1/repositories/onap/base_sdc-jetty/images: dial tcp 199.204.45.137:10001: getsockopt: no route to host  


    Kindly advise what to look at next for resolution.

    1. can you ping nexus3.onap.org directly from your rancher VM? It seems like your networking is not allowing you to get to the internet. Try "curl -v https://nexus3.onap.org/" from the rancher node and you should get back a full page of html from onap.org.


  13. Hi Bryan,


    Please find the outcome:



    <!DOCTYPE html>
    <html lang="en">
    <head>
    <title>Nexus Repository Manager</title>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8"/>
    <meta name="description" content="Nexus Repository Manager"/>
    <meta http-equiv="X-UA-Compatible" content="IE=edge"/>


    <!--[if lt IE 9]>
    <script>(new Image).src="https://nexus3.onap.org/static/rapture/resources/favicon.ico?_v=3.15.2-01"</script>
    <![endif]-->
    <link rel="icon" type="image/png" href="https://nexus3.onap.org/static/rapture/resources/favicon-32x32.png?_v=3.15.2-01" sizes="32x32">
    <link rel="mask-icon" href="https://nexus3.onap.org/static/rapture/resources/safari-pinned-tab.svg?_v=3.15.2-01" color="#5bbad5">
    <link rel="icon" type="image/png" href="https://nexus3.onap.org/static/rapture/resources/favicon-16x16.png?_v=3.15.2-01" sizes="16x16">
    <link rel="shortcut icon" href="https://nexus3.onap.org/static/rapture/resources/favicon.ico?_v=3.15.2-01">
    <meta name="msapplication-TileImage" content="https://nexus3.onap.org/static/rapture/resources/mstile-144x144.png?_v=3.15.2-01">
    <meta name="msapplication-TileColor" content="#00a300">



    <link rel="stylesheet" type="text/css" href="https://nexus3.onap.org/static/rapture/resources/loading-prod.css?_v=3.15.2-01">
    <link rel="stylesheet" type="text/css" href="https://nexus3.onap.org/static/rapture/resources/baseapp-prod.css?_v=3.15.2-01">
    <link rel="stylesheet" type="text/css" href="https://nexus3.onap.org/static/rapture/resources/nexus-rapture-prod.css?_v=3.15.2-01">
    <link rel="stylesheet" type="text/css" href="https://nexus3.onap.org/static/rapture/resources/nexus-proximanova-plugin-prod.css?_v=3.15.2-01">
    <link rel="stylesheet" type="text/css" href="https://nexus3.onap.org/static/rapture/resources/nexus-coreui-plugin-prod.css?_v=3.15.2-01">
    <link rel="stylesheet" type="text/css" href="https://nexus3.onap.org/static/rapture/resources/nexus-proui-plugin-prod.css?_v=3.15.2-01">

    <script type="text/javascript">
    function progressMessage(msg) {
    if (console && console.log) {
    console.log(msg);
    }
    document.getElementById('loading-msg').innerHTML=msg;
    }
    </script>
    </head>
    <body class="x-border-box">

    <div id="loading-mask"></div>
    <div id="loading">
    <div id="loading-background">
    <img id="loading-logo" src="https://nexus3.onap.org/static/rapture/resources/images/loading-logo.png?_v=3.15.2-01" alt="Product Logo"/>
    <img id="loading-product" src="https://nexus3.onap.org/static/rapture/resources/images/loading-product.png?_v=3.15.2-01" alt="Nexus Repository Manager"/>
    <div class="loading-indicator">
    <img id="loading-spinner" src="https://nexus3.onap.org/static/rapture/resources/images/loading-spinner.gif?_v=3.15.2-01" alt="Loading Spinner"/>
    <span id="loading-msg">Loading ...</span>
    </div>
    </div>

    <div id="code-load" class="x-hide-display">

    <script type="text/javascript">progressMessage('Loading baseapp-prod.js');</script>
    <script type="text/javascript" src="https://nexus3.onap.org/static/rapture/baseapp-prod.js?_v=3.15.2-01"></script>
    <script type="text/javascript">progressMessage('Loading extdirect-prod.js');</script>
    <script type="text/javascript" src="https://nexus3.onap.org/static/rapture/extdirect-prod.js?_v=3.15.2-01"></script>
    <script type="text/javascript">progressMessage('Loading bootstrap.js');</script>
    <script type="text/javascript" src="https://nexus3.onap.org/static/rapture/bootstrap.js?_v=3.15.2-01"></script>
    <script type="text/javascript">progressMessage('Loading d3.v4.min.js');</script>
    <script type="text/javascript" src="https://nexus3.onap.org/static/rapture/d3.v4.min.js?_v=3.15.2-01"></script>
    <script type="text/javascript">progressMessage('Loading nexus-rapture-prod.js');</script>
    <script type="text/javascript" src="https://nexus3.onap.org/static/rapture/nexus-rapture-prod.js?_v=3.15.2-01"></script>
    <script type="text/javascript">progressMessage('Loading nexus-rutauth-plugin-prod.js');</script>
    <script type="text/javascript" src="https://nexus3.onap.org/static/rapture/nexus-rutauth-plugin-prod.js?_v=3.15.2-01"></script>
    <script type="text/javascript">progressMessage('Loading nexus-coreui-plugin-prod.js');</script>
    <script type="text/javascript" src="https://nexus3.onap.org/static/rapture/nexus-coreui-plugin-prod.js?_v=3.15.2-01"></script>
    <script type="text/javascript">progressMessage('Loading nexus-proui-plugin-prod.js');</script>
    <script type="text/javascript" src="https://nexus3.onap.org/static/rapture/nexus-proui-plugin-prod.js?_v=3.15.2-01"></script>
    <script type="text/javascript">progressMessage('Loading nexus-repository-pypi-prod.js');</script>
    <script type="text/javascript" src="https://nexus3.onap.org/static/rapture/nexus-repository-pypi-prod.js?_v=3.15.2-01"></script>
    <script type="text/javascript">progressMessage('Loading nexus-repository-maven-prod.js');</script>
    <script type="text/javascript" src="https://nexus3.onap.org/static/rapture/nexus-repository-maven-prod.js?_v=3.15.2-01"></script>
    <script type="text/javascript">progressMessage('Loading nexus-repository-nuget-prod.js');</script>
    <script type="text/javascript" src="https://nexus3.onap.org/static/rapture/nexus-repository-nuget-prod.js?_v=3.15.2-01"></script>
    <script type="text/javascript">progressMessage('Loading nexus-repository-rubygems-prod.js');</script>
    <script type="text/javascript" src="https://nexus3.onap.org/static/rapture/nexus-repository-rubygems-prod.js?_v=3.15.2-01"></script>
    <script type="text/javascript">progressMessage('Loading app.js');</script>
    <script type="text/javascript" src="https://nexus3.onap.org/static/rapture/app.js?_v=3.15.2-01"></script>

    <script type="text/javascript">progressMessage('Initializing ...');</script>
    </div>
    </div>

    <form id="history-form" class="x-hide-display" tabindex="-1">
    <input type="hidden" id="x-history-field"/>
    <iframe id="x-history-frame" title="Browse history form"></iframe>
    </form>

    </body>
    </html>

    1. Unable to pull 'onap/base_sdc-jetty:1.4.1' from registry 'nexus3.onap.org:10001' : Error while pulling image: Get http://nexus3.onap.org:10001/v1/repositories/onap/base_sdc-jetty/images: dial tcp 199.204.45.137:10001: getsockopt: no route to host  


      No idea what the problem is. It's not ONAP. It's kubernetes talking to nexus3.onap.org - "no route to host" means the docker engine isn't reaching that network, which usually means your docker engine is behind a proxy/firewall or something.


      Can you pull a generic alpine image from dockerhub as a test ?

      You need to find what is wrong with your networking between your k8 host and nexus3.onap.org.
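
      For example, from the k8s/docker host:

      docker pull alpine:latest
      curl -v https://nexus3.onap.org/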


  14. I am getting an error while deploying 3.0.2, as below:
    git clone -b 3.0.2-ONAP http://gerrit.onap.org/r/oom

    After following the deployment steps, we found that portal-app fails in the Helm deployment with the error below:

    Error: release onap-portal failed: Deployment.apps "onap-portal-portal-app" is invalid:
    [spec.template.spec.containers[0].env[1].name: Invalid value: "javax.net.ssl.keyStore": a valid C identifier must start with alphabetic character or '_', followed by a string of alphanumeric characters or '_' (e.g. 'my_name', or 'MY_NAME', or 'MyName', regex used for validation is '[A-Za-z_][A-Za-z0-9_]*'),
    spec.template.spec.containers[0].env[2].name: Invalid value: "javax.net.ssl.keyStorePassword": a valid C identifier must start with alphabetic character or '_', followed by a string of alphanumeric characters or '_' (e.g. 'my_name', or 'MY_NAME', or 'MyName', regex used for validation is '[A-Za-z_][A-Za-z0-9_]*'),
    spec.template.spec.containers[0].env[3].name: Invalid value: "javax.net.ssl.trustStore": a valid C identifier must start with alphabetic character or '_', followed by a string of alphanumeric characters or '_' (e.g. 'my_name', or 'MY_NAME', or 'MyName', regex used for validation is '[A-Za-z_][A-Za-z0-9_]*'),
    spec.template.spec.containers[0].env[4].name: Invalid value: "javax.net.ssl.trustStorePassword": a valid C identifier must start with alphabetic character or '_', followed by a string of alphanumeric characters or '_' (e.g. 'my_name', or 'MY_NAME', or 'MyName', regex used for validation is '[A-Za-z_][A-Za-z0-9_]*')]


    I believe this is because a . (dot) is not allowed in the env name variable up to Kubernetes version 1.7.
    Do we have any workaround or fix for this?

  15. Note that in El Alto the default install for integration is on the "nfs" server. This is the old "rancher" server, just renamed, since we don't actually run a rancher container in RKE anymore.