Overview

In the Casablanca release, ONAP has become very large, with many Helm charts (~6x that of Amsterdam). Every Helm chart contains some amount of external configuration and, unfortunately, there is a limit (1 MB) to the amount of configuration that can be stored for a Helm release. As of Casablanca, ONAP has exceeded this limit.


ONAP is installed as an umbrella (i.e. parent) chart containing many subcharts, each with its own configuration. Because the total amount of configuration that resides in configmaps within K8s exceeds 1 MB, an installation of ONAP as a single release fails. To work around this issue, a Helm plugin has been introduced that installs/upgrades ONAP by deploying the parent chart and each subchart within its own Helm "release". It is important to note that all releases must be deployed within the same Kubernetes namespace in order for communication between the components to succeed.
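
Conceptually, the deploy plugin behaves like the sketch below: one release for the parent chart plus one release per enabled subchart, all in the same namespace. This is an illustration only, not the actual plugin script (the real plugin also handles chart fetching, overrides, and per-chart logs):

# Illustrative sketch of the deploy plugin's core idea (not the real script).
# Each subchart becomes its own release named "<parent>-<subchart>".
PARENT=demo
NAMESPACE=onap
for SUBCHART in aaf aai robot vid; do    # hypothetical subset of subcharts
  helm upgrade -i "${PARENT}-${SUBCHART}" "./onap/charts/${SUBCHART}" --namespace "${NAMESPACE}"
done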


Disclaimer:

The plugins described here were introduced to address config map limitations in the Casablanca release. There are alternative projects, such as https://github.com/roboll/helmfile, that can also solve this problem.

The decision not to use an existing project was due to:

  • not wanting to introduce a new project-specific deployment specification this late in the release cycle
  • the desire to stay as close to an existing Helm solution as possible (we anticipate significant improvements in Helm 3)

That said, the use of the deploy and undeploy plugins can be viewed as a temporary solution. In their current state, they have not been hardened (e.g. made resilient to networking errors) but are made available to unblock installation of ONAP. To avoid networking errors that can cause some of the subcharts to fail to deploy, it is recommended that the Helm deploy and undeploy commands be executed from within the same network (e.g. a Rancher node or jump node) as the K8s cluster you are deploying to.

Install Helm Plugins (deploy & undeploy)

Clone the oom repository.

Copy the oom/kubernetes/helm/plugins directory into your local ~/.helm/ folder.

Manual Installation
# ubuntu
sudo cp -R ~/oom/kubernetes/helm/plugins/ ~/.helm
# mac
sudo cp -R ~/oom/kubernetes/helm/plugins/* ~/.helm/plugins


Verify plugins installed correctly

Executing 'helm' with no arguments should show both 'deploy' and 'undeploy' in the list of available commands.

>sudo helm

.......

Usage:
helm [command]

Available Commands:
...

dependency  manage a chart's dependencies
deploy      install (upgrade if release exists) parent chart and subcharts as separate but related releases
fetch       download a chart from a repository and (optionally) unpack it in local directory
...

template    locally render templates
test        test a release
undeploy    delete parent chart and subcharts that were deployed as separate releases
upgrade     upgrade a release
version     print the client/server version information
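
Alternatively, 'helm plugin list' should also show the deploy and undeploy plugins, assuming each plugin directory was copied intact (including its plugin.yaml):

helm plugin list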


Deploying ONAP

Deploy from public Helm Chart Repository

This example uses one of the public Helm chart repositories under https://nexus.onap.org/content/sites/ (i.e. staging).

Prerequisite:

  1. helm repo add staging https://nexus.onap.org/content/sites/oom-helm-staging/
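
To verify that the repository was added and the onap chart is visible (Helm 2 syntax, matching the plugin era):

helm repo list
helm search onap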


Deploy ONAP from the staging repository (using the default configuration values defined in onap/values.yaml)
helm deploy demo staging/onap --namespace onap


Deploy from cloned OOM codebase

Prerequisites (see the consolidated command sequence below):

  1. clone the oom repository
  2. cd oom/kubernetes
  3. make repo
  4. make; make onap
  5. use sudo for helm commands except when running as root
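
As a single runnable sequence (the Gerrit clone URL is an assumption; adjust paths to your environment):

git clone https://gerrit.onap.org/r/oom    # step 1 (clone URL assumed)
cd oom/kubernetes                          # step 2
sudo make repo                             # step 3
sudo make; sudo make onap                  # step 4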


Deploy ONAP from the OOM codebase using the local Helm chart repository (default configuration values defined in onap/values.yaml)
sudo helm deploy demo local/onap --namespace onap

or

Deploy ONAP from the OOM codebase using local file changes (default configuration values defined in onap/values.yaml)
sudo helm deploy demo ./onap --namespace onap


Results

ONAP pods are deployed into the 'onap' Kubernetes namespace. Each application project is installed as a Helm release whose name is prefixed by the parent release name (e.g. "demo-").

> helm list

NAME           	REVISION	UPDATED                 	STATUS  	CHART           	NAMESPACE
demo           	1       	Wed Sep 19 12:04:52 2018	DEPLOYED	onap-2.0.0      	onap
demo-aaf       	1       	Wed Sep 19 12:04:57 2018	DEPLOYED	aaf-2.0.0       	onap
demo-aai       	1       	Wed Sep 19 12:05:09 2018	DEPLOYED	aai-2.0.0       	onap
demo-appc      	1       	Wed Sep 19 12:05:22 2018	DEPLOYED	appc-2.0.0      	onap
demo-clamp     	1       	Wed Sep 19 12:05:28 2018	DEPLOYED	clamp-2.0.0     	onap
demo-cli       	1       	Wed Sep 19 12:05:33 2018	DEPLOYED	cli-2.0.0       	onap
demo-consul    	1       	Wed Sep 19 12:05:38 2018	DEPLOYED	consul-2.0.0    	onap
demo-dcaegen2  	1       	Wed Sep 19 12:05:48 2018	DEPLOYED	dcaegen2-2.0.0  	onap
demo-dmaap     	1       	Wed Sep 19 12:05:54 2018	DEPLOYED  	dmaap-2.0.0     	onap
demo-esr       	1       	Wed Sep 19 12:05:58 2018	DEPLOYED	esr-2.0.0       	onap
demo-log       	1       	Wed Sep 19 12:06:03 2018	DEPLOYED	log-3.0.0       	onap
demo-msb       	1       	Wed Sep 19 12:06:08 2018	DEPLOYED	msb-2.0.0       	onap
demo-multicloud	1       	Wed Sep 19 12:06:13 2018	DEPLOYED	multicloud-2.0.0	onap
demo-nbi       	1       	Wed Sep 19 12:06:18 2018	DEPLOYED	nbi-2.0.0       	onap
demo-oof       	1       	Wed Sep 19 12:06:23 2018	DEPLOYED	oof-2.0.0       	onap
demo-policy    	1       	Wed Sep 19 12:06:30 2018	DEPLOYED	policy-2.0.0    	onap
demo-pomba     	1       	Wed Sep 19 12:06:41 2018	DEPLOYED	pomba-2.0.0     	onap
demo-portal    	1       	Wed Sep 19 12:06:49 2018	DEPLOYED	portal-2.0.0    	onap
demo-robot     	1       	Wed Sep 19 12:06:55 2018	DEPLOYED	robot-2.0.0     	onap
demo-sdc       	1       	Wed Sep 19 12:07:00 2018	DEPLOYED	sdc-2.0.0       	onap
demo-sdnc      	1       	Wed Sep 19 12:07:08 2018	DEPLOYED	sdnc-2.0.0      	onap
demo-so        	1       	Wed Sep 19 12:07:16 2018	DEPLOYED	so-2.0.0        	onap
demo-uui       	1       	Wed Sep 19 12:07:23 2018	DEPLOYED	uui-2.0.0       	onap
demo-vfc       	1       	Wed Sep 19 12:07:29 2018	DEPLOYED	vfc-2.0.0       	onap
demo-vid       	1       	Wed Sep 19 12:07:36 2018	DEPLOYED	vid-2.0.0       	onap
demo-vnfsdk    	1       	Wed Sep 19 12:07:41 2018	DEPLOYED	vnfsdk-2.0.0    	onap


Advanced Options

Note that any overrides that can be used with Helm install or upgrade can also be applied here.

Customizing ONAP deployment using overrides *

Deploy using configuration overrides in the form of an override file and --set flags
helm deploy demo ./onap --namespace onap -f ~/overrides.yaml --set vid.enabled=false
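
For example, a minimal override file could be created like this (illustrative keys only; they mirror the structure of onap/values.yaml):

cat > ~/overrides.yaml <<EOF
# illustrative overrides -- keys mirror onap/values.yaml
robot:
  enabled: true
vid:
  enabled: false
EOF
helm deploy demo ./onap --namespace onap -f ~/overrides.yaml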


Update (or install) a specific ONAP component *

* Update (or install) a specific ONAP component (e.g. robot)
helm deploy demo-robot ./onap --namespace onap -f ~/overrides.yaml --set vid.enabled=false


* Note that in order for any changes to a Helm chart to take effect, a make is required (e.g. "make; make onap" or "make robot; make onap")
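
For example, after editing the robot chart, a rebuild-and-redeploy sequence might look like this (a sketch; paths assume the oom clone from above):

cd ~/oom/kubernetes
make robot; make onap
sudo helm deploy demo-robot ./onap --namespace onap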

Helm Deploy plugin logs

ubuntu@a-ld0:~$ sudo ls ~/.helm/plugins/deploy/cache/onap/logs/onap-
onap-aaf.log             onap-cli.log             onap-dmaap.log           onap-multicloud.log      onap-portal.log          onap-sniro-emulator.log  onap-vid.log
onap-aai.log             onap-consul.log          onap-esr.log             onap-oof.log             onap-robot.log           onap-so.log              onap-vnfsdk.log
onap-appc.log            onap-contrib.log         onap-log.log             onap-policy.log          onap-sdc.log             onap-uui.log             onap-vvp.log
onap-clamp.log           onap-dcaegen2.log        onap-msb.log             onap-pomba.log           onap-sdnc.log            onap-vfc.log             
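
These per-subchart logs are the first place to look when a component fails to deploy, e.g. (component chosen for illustration):

sudo tail -n 50 ~/.helm/plugins/deploy/cache/onap/logs/onap-sdc.log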


Undeploying ONAP

Undeploy entire ONAP deployment

Delete entire ONAP deployment
helm undeploy demo --purge


Undeploy specific ONAP component

Undeploy a specific ONAP component (e.g. robot)
helm undeploy demo-robot --purge
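
To confirm that only the targeted release was removed, list the remaining releases:

helm list | grep demo-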

19 Comments

  1. Can I do

    helm deploy demo staging/onap --namespace onap -f ~/integration-override.yaml  --set aaf.enabled=false


    helm deploy demo staging/onap --namespace onap -f ~/integration-override.yaml --set aaf.enabled=true


    to upgrade an existing deployment with a new aaf chart/container?

  2. Hi,

    I didn't find that this worked, i.e. it didn't delete the deployment and recreate it.

    But it's easy to work around.

    You can do a 

    helm ls 

    and find the part of onap you are interested in and then

    helm delete <required deployment> --purge

    and then

    helm deploy demo staging/onap --namespace onap -f ~/integration-override.yaml

    will put it back

    /Andrew



  3. How would cross project initContainer dependencies work in this scenario where a pod might be waiting on a job from a different project?

    Would we have to name such jobs with hardcoded values instead of using the Release.Name prefix?

  4. mike notes for helping adjust the RTDs

    OOM-1543

    Need to triage the logs for failed deployments

    2 issues

    sdc does not deploy - how to get the logs

    aai deploys but is not listed - so we cannot use the plugin - and the base install/upgrade commands are blocked by the configmap issue


    onap          onap-aai-aai-resources-f75d854cd-l4spw                         2/2       Running            0          22h

    ubuntu@a-cd-cas:~$ helm list
    NAME           	REVISION	UPDATED                 	STATUS  	CHART           	NAMESPACE
    onap           	1       	Mon Dec 10 13:10:39 2018	DEPLOYED	onap-3.0.0      	onap     
    onap-aaf       	3       	Mon Dec 10 13:10:40 2018	DEPLOYED	aaf-3.0.0       	onap     
    onap-appc      	3       	Mon Dec 10 13:10:53 2018	DEPLOYED	appc-3.0.0      	onap     
    onap-clamp     	3       	Mon Dec 10 13:11:00 2018	DEPLOYED	clamp-3.0.0     	onap     
    onap-cli       	3       	Mon Dec 10 13:11:03 2018	DEPLOYED	cli-3.0.0       	onap     
    onap-consul    	3       	Mon Dec 10 13:11:04 2018	DEPLOYED	consul-3.0.0    	onap     
    onap-dcaegen2  	3       	Mon Dec 10 13:11:07 2018	DEPLOYED	dcaegen2-3.0.0  	onap     
    onap-dmaap     	3       	Mon Dec 10 13:11:10 2018	DEPLOYED	dmaap-3.0.0     	onap     
    onap-esr       	3       	Mon Dec 10 13:11:15 2018	DEPLOYED	esr-3.0.0       	onap     
    onap-log       	3       	Mon Dec 10 13:11:16 2018	DEPLOYED	log-3.0.0       	onap     
    onap-msb       	3       	Mon Dec 10 13:11:18 2018	DEPLOYED	msb-3.0.0       	onap     
    onap-multicloud	3       	Mon Dec 10 13:11:21 2018	DEPLOYED	multicloud-3.0.0	onap     
    onap-nbi       	3       	Mon Dec 10 13:11:25 2018	DEPLOYED	nbi-3.0.0       	onap     
    onap-oof       	3       	Mon Dec 10 13:11:27 2018	DEPLOYED	oof-3.0.0       	onap     
    onap-policy    	3       	Mon Dec 10 13:11:33 2018	DEPLOYED	policy-3.0.0    	onap     
    onap-pomba     	3       	Mon Dec 10 13:11:40 2018	DEPLOYED	pomba-3.0.0     	onap     
    onap-portal    	3       	Mon Dec 10 13:11:53 2018	DEPLOYED	portal-3.0.0    	onap     
    onap-robot     	3       	Mon Dec 10 13:12:02 2018	DEPLOYED	robot-3.0.0     	onap     
    onap-sdc       	1       	Mon Dec 10 00:35:00 2018	FAILED  	sdc-3.0.0       	onap     
    onap-sdnc      	3       	Mon Dec 10 13:12:04 2018	DEPLOYED	sdnc-3.0.0      	onap     
    onap-so        	3       	Mon Dec 10 13:12:16 2018	DEPLOYED	so-3.0.0        	onap     
    onap-uui       	3       	Mon Dec 10 13:12:24 2018	DEPLOYED	uui-3.0.0       	onap     
    onap-vid       	3       	Mon Dec 10 13:12:27 2018	DEPLOYED	vid-3.0.0       	onap  
    
    cannot use direct helm commands because of the known resource limit issue this plugin fixes
    ubuntu@a-cd-master:~/oom/kubernetes$ sudo helm upgrade -i onap local/onap --namespace onap -f ../../dev.yaml --set aai.enabled=true
    Error: UPGRADE FAILED: ConfigMap "onap.v2" is invalid: []: Too long: must have at most 1048576 characters
    
    
    redeploying lists aai but does not bounce it
    ubuntu@a-cd-master:~/oom/kubernetes$ helm undeploy onap-aai --purge
    ubuntu@a-cd-master:~/oom/kubernetes$ helm undeploy onap-sdc --purge
    release "onap-sdc" deleted
    ubuntu@a-cd-master:~/oom/kubernetes$ sudo helm deploy onap local/onap --namespace onap -f ../../dev.yaml 
    fetching local/onap
    release "onap" deployed
    release "onap-aaf" deployed
    release "onap-aai" deployed
    
    
    
    
    aai still missing
    ubuntu@a-cd-master:~$ helm list
    NAME           	REVISION	UPDATED                 	STATUS  	CHART           	NAMESPACE
    onap           	2       	Tue Dec 11 11:09:48 2018	DEPLOYED	onap-3.0.0      	onap     
    onap-aaf       	4       	Tue Dec 11 11:09:49 2018	DEPLOYED	aaf-3.0.0       	onap     
    onap-appc      	4       	Tue Dec 11 11:09:57 2018	DEPLOYED	appc-3.0.0      	onap  
    
    
    ubuntu@a-cd-master:~/oom/kubernetes$ sudo helm deploy onap local/onap --namespace onap -f ../../dev.yaml --set aai.enabled=false
    fetching local/onap
    release "onap" deployed
    release "onap-aaf" deployed
    release "onap-appc" deployed
    release "onap-clamp" deployed

    this works as expected
    ubuntu@a-cd-master:~/oom/kubernetes$ helm undeploy onap-esr --purge
    release "onap-esr" deleted
    onap          onap-esr-esr-gui-7946c6b68d-2lbhs                              1/1       Terminating        0          22h
    onap          onap-esr-esr-server-d96fcc7d7-7fnrd                            2/2       Terminating        0          22h

    and the bounce
    ubuntu@a-cd-master:~/oom/kubernetes$ sudo helm deploy onap local/onap --namespace onap -f ../../dev.yaml
    onap          onap-esr-esr-gui-7946c6b68d-nstwc                              0/1       Running             0          2m
    onap          onap-esr-esr-server-d96fcc7d7-tzwmv                            0/2       ContainerCreating   0          2m
  5. Hello, I have an openstack environment, I followed the instructions to setup rancher and kubernetes cluster,  now I am ready to run helm to install ONAP, do I run helm on the kubernetes node or the rancher VM?

      thanks

  6. Surely you can choose and they can be installed on any machine?

    kubectl and helm have been installed (as a convenience) onto the rancher and kubernetes hosts. Typically you would install them both on your PC and remotely connect to the cluster.

    https://onap.readthedocs.io/en/latest/submodules/oom.git/docs/oom_setup_kubernetes_rancher.html#configure-kubectl-and-helm

    1. Thank you so much.

      Now I have helm installed on my local linux box, following the instructions from this page "ONAP on kubernetes with rancher" (https://docs.onap.org/en/casablanca/submodules/oom.git/docs/oom_setup_kubernetes_rancher.html#onap-on-kubernetes-with-rancher).

      Next I ran helm to deploy ONAP following this page "OOM quick start guide" (https://docs.onap.org/en/casablanca/submodules/oom.git/docs/oom_quickstart_guide.html#quick-start-label),

      but it failed. I think I have to modify the values.yaml file for my environment. I am trying to find more instructions about the OOM configuration, but without much success. Is there any detailed configuration documentation available?


  7. What failed and what was the failure message?

    1. I could only get about 20 pods running, most of them are not coming up, and a lot of failures are similar to this:

      Failed to pull image "nexus3.onap.org:10001/onap/so/sdnc-adapter:1.3.2": rpc error: code = Canceled desc = context canceled

      Failed to pull image "nexus3.onap.org:10001/mariadb:10.1.11": rpc error: code = Canceled desc = context canceled


  8. For my last manual installation, I think these are the only values I changed in the OOM repository before I ran make all:


    kubernetes/onap/values.yaml
    - openStackUserName: "vnf_user"
    - openStackKeyStoneUrl: "http://1.2.3.4:5000"
    - openStackServiceTenantName: "service"
    - openStackEncryptedPasswordHere: "c124921a3a0efbe579782cde8227681e"


    kubernetes/robot/values.yaml
    -openStackKeyStoneUrl: "http://1.2.3.4:5000"
    -openStackPassword: "tenantPassword"
    -openStackTenantId: "47899782ed714295b1151681fdfd51f5"
    -openStackUserName: "tenantUsername"

    1. Brendan:

      In the onap/values.yaml file, the appc section has several config parameters:

      appc:
        enabled: true
        config:
          openStackType: OpenStackProvider
          openStackName: OpenStack
          openStackKeyStoneUrl: http://localhost:8181/apidoc/explorer/index.html


      should I change the openStackKeyStoneUrl to "http://1.2.3.4:5000"?


        thanks

      1. These values are probably needed and, if they are, they have to be real values for your environment. 1.2.3.4 is an example and used to be in the OOM repository's onap/values.yaml. You need to find these values for your OpenStack installation; for example, in Mirantis OpenStack they are under Identity, Projects.

  9. Also it may be useful for you to clone the integration git repository and run the following script:

    https://gerrit.onap.org/r/gitweb?p=integration.git;a=blob;f=version-manifest/src/main/scripts/update-oom-image-versions.sh;hb=HEAD

    This updates the Docker image versions in the values.yaml files in OOM to the values in the file supplied. You might want to run this script with the release version manifest:

    https://gerrit.onap.org/r/gitweb?p=integration.git;a=blob;f=version-manifest/src/main/resources/docker-manifest.csv;hb=HEAD

    The first argument is the manifest file and the second argument is the OOM repository directory, e.g.:

    ./update-oom-image-versions.sh ../resources/docker-manifest.csv /home/ubuntu/oom

    1. As I understand it - the oom repository is the final word - not the other way around - as in the docker manifest is periodically updated to reflect the image tags in oom.  I would recommend not changing the tags in the oom repo before deployment.  The manifest is good for 90-100% prepull only.

      /michael

      1. still a bit in the air about the source of truth - manifest or oom values.yaml - the integration team may be overriding the image tags - everyone else is not - perhaps we should

        put the decision in TSC-86

        https://lists.onap.org/g/onap-discuss/topic/oom_onap_deployment/28883609?p=,,,20,0,0,0::recentpostdate%2Fsticky,,,20,2,0,28883609


        quote:

        Brian is right – anything over 2 hours will not restart – bounce your particular pod --set ?.enabled=false/true – with a dockerdata-nfs clean in the middle – and a pv/pvc deletion for some rogue pods that have pvs outside the loop – only if you get a pv exists exception on restart.

        https://wiki.onap.org/display/DW/ONAP+Development#ONAPDevelopment-Bounce/Fixafailedcontainer

        change --all to a particular pod – or use a --force delete like

        kubectl delete pod $ENVIRON-aaf-sms-vault-0 -n $ENVIRON --grace-period=0 --force

        https://wiki.onap.org/display/DW/Cloud+Native+Deployment#CloudNativeDeployment-RemoveaDeployment

        https://git.onap.org/logging-analytics/tree/deploy/cd.sh#n79

        The above script with a 3 min timeout between the 30 pod deploys brings up the whole system except some minor issues with a couple pods

        For your situation – make sure healthcheck is 50/51 before attempting to use the cluster

        3.2.2 ete-k8s.sh health test: 51 critical tests, 7 passed, 44 failed

        Should be 50 passed (verified 20181229 for 3.0.0-ONAP (Casablanca manifest)



        Guys,

           A secondary question – are the values.yaml files in OOM the truth or is the manifest file the truth.

           I also need a bit of clarification on the source of truth for the image tags in the values.yamls for the 30 components in OOM


        • “Did you update the image versions in the OOM clone using the script in the integration project ? No and According to Michael O'Brien, he recommends not to change it...”

           My understanding is that the oom repo is the source of truth – and that the manifest file in the integration repo is 99-100% kept up to date with what is deployed – but the manifest is not the truth - if it is the reverse then we would need to either make sure every deployment (all CD systems including mine) and all developers use a generated values.yaml override – or at least force a change in OOM to a values.yaml every time the manifest is updated – from reviewing patches between oom and integration it looks like image names in oom are the source.


           I would like to nail this down because no one I know of adjusts any of the merged docker image tag names that OOM uses to deploy – only the prepull script makes use of the manifest – as far as I know – if not we need to adjust the documentation so that we are running the same way integration does.

        The following assumes the manifest is the same as the values.yamls

        https://wiki.onap.org/display/DW/Cloud+Native+Deployment#CloudNativeDeployment-Nexus3proxyusageperclusternode


           Currently we use the manifest as the docker_prepull.sh target (as it is easier to parse and pull from there instead of mining OOM like the previous iteration did) – however oom when we deploy – will still use what is hardcoded into each values.yaml.

        sudo nohup ./docker_prepull.sh -b casablanca -s nexus4.onap.cloud:5000 &

        pulls from

        https://git.onap.org/logging-analytics/tree/deploy/docker_prepull.sh#n35

        https://git.onap.org/integration/plain/version-manifest/src/main/resources/docker-manifest.csv?h=$BRANCH



            If there is a script that we run somewhere that takes this manifest and overrides all the image tags as a values.yaml overlay before deployment – let us know. The current wiki and readthedocs do not mention this.


        Current Process for deployment

        https://onap.readthedocs.io/en/latest/submodules/oom.git/docs/oom_quickstart_guide.html

        https://wiki.onap.org/display/DW/Cloud+Native+Deployment#CloudNativeDeployment-Scriptedundercloud(Helm/Kubernetes/Docker)andONAPinstall-clustered


        thanks guys

        /michael


        From: Dominique Deschênes <dominique.deschenes@gcgenicom.com>
        Sent: Thursday, January 3, 2019 2:37 PM
        To: Borislav Glozman <Borislav.Glozman@amdocs.com>; bf1936@att.com; onap-discuss@lists.onap.org
        Cc: Jacques Faucher <jacques.faucher@gcgenicom.com>; Jasmin Audet <jasmin.audet@gcgenicom.com>; Michael O'Brien <Frank.Obrien@amdocs.com>
        Subject: Re[2]: [onap-discuss] OOM ONAP Deployment


        Hi,


        • Are you using the integration-override.yaml file ? No
        • Did you update the image versions in the OOM clone using the script in the integration project ? No and According to Michael O'Brien, he recommends not to change it...


  10. Tracking issue with the deploy plugin not differentiating between an install and an upgrade as of this week (no script changes, so this is an env issue for now):

    ubuntu@ip-172-31-29-61:~/oom/kubernetes$ sudo helm deploy onap local/onap --namespace onap -f ../../dev.yaml --verbose
    fetching local/onap
    Error: UPGRADE FAILED: "onap-dmaap" has no deployed releases
    Error: UPGRADE FAILED: "onap-log" has no deployed releases
    Error: UPGRADE FAILED: "onap-pomba" has no deployed releases
    Error: UPGRADE FAILED: "onap-robot" has no deployed releases
    onap-dmaap	1       	Thu Dec 27 00:46:47 2018	FAILED  	dmaap-3.0.0	onap     
    onap-log  	1       	Thu Dec 27 00:46:49 2018	FAILED  	log-3.0.0  	onap     
    onap-pomba	1       	Thu Dec 27 00:46:51 2018	FAILED  	pomba-3.0.0	onap     
    onap-robot	1       	Thu Dec 27 00:46:54 2018	FAILED  	robot-3.0.0	onap     
    
    
    fixed by an empty deploy first
    sudo helm deploy onap local/onap --namespace onap -f onap/resources/environments/disable-allcharts.yaml --verbose
    sudo helm deploy onap local/onap --namespace onap -f ../../dev.yaml --verbose
    ubuntu@ip-172-31-29-61:~/oom/kubernetes$ sudo helm list
    NAME      	REVISION	UPDATED                 	STATUS  	CHART      	NAMESPACE
    onap      	2       	Thu Dec 27 01:06:12 2018	DEPLOYED	onap-3.0.0 	onap     
    onap-dmaap	1       	Thu Dec 27 01:06:14 2018	DEPLOYED	dmaap-3.0.0	onap     
    onap-log  	1       	Thu Dec 27 01:06:18 2018	DEPLOYED	log-3.0.0  	onap     
    onap-pomba	1       	Thu Dec 27 01:06:26 2018	DEPLOYED	pomba-3.0.0	onap     
    onap-robot	1       	Thu Dec 27 01:06:38 2018	DEPLOYED	robot-3.0.0	onap  
  11. we have some challenges with reinstalling deployments:

    [root@tomas-infra ~]# helm ls -a

    [root@tomas-infra ~]# kubectl get pods -n onap
    No resources found.

    [root@tomas-infra ~]# kubectl get deployments.extensions --all-namespaces
    NAMESPACE     NAME                   DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
    kube-system   heapster               1         1         1            1           1h
    kube-system   kube-dns               1         1         1            1           1h
    kube-system   kubernetes-dashboard   1         1         1            1           1h
    kube-system   monitoring-grafana     1         1         1            1           1h
    kube-system   monitoring-influxdb    1         1         1            1           1h
    kube-system   tiller-deploy          1         1         1            1           1h


    [root@tomas-infra ~]# helm deploy dev local/onap --namespace onap
    fetching local/onap
    release "dev" deployed
    release "dev-sdnc" deployed
    dev-sdnc 1 Wed Jan 23 09:53:21 2019 FAILED sdnc-3.0.0 onap

    #from kubectl logs tiller-deploy-78db58d887-gthv8 -n kube-system

    [tiller] 2019/01/23 14:53:22 warning: Release "dev-sdnc" failed: deployments.extensions "dev-sdnc-sdnc-dgbuilder" already exists
    [storage] 2019/01/23 14:53:22 updating release "dev-sdnc.v1"
    [tiller] 2019/01/23 14:53:22 failed install perform step: release dev-sdnc failed: deployments.extensions "dev-sdnc-sdnc-dgbuilder" already exists


    Does anyone know how to clean the stack properly?

    Anyway, it doesn't look like a cleanup problem but rather a race condition during creation (not reproducible on a fresh vanilla system).
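
    A typical manual cleanup sequence, assembled from the bounce notes quoted earlier on this page (a sketch only; release name, namespace, and NFS path depend on your environment):

    helm undeploy dev --purge            # remove the parent and all "dev-*" releases
    kubectl delete namespace onap        # remove leftover namespaced resources
    kubectl get pv                       # check for rogue persistent volumes before deleting them
    sudo rm -rf /dockerdata-nfs/*        # clear the shared NFS data used by the charts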

  12. Hello, I am a student and I have a problem.

    My release is deployed, but there are no pods.

    I ran "helm ls" but the list is blank; however, there is no error.

    I don't know why. Who can help me?

    Thank you very much.