
(From Brian Freeman's notes)

Helm.notes.txt (this is also on SB04 /root)

root@k8s:~# cat helm.notes.txt

kubectl config get-contexts

helm list

root@k8s:~# helm list

NAME  REVISION  UPDATED                   STATUS    CHART       NAMESPACE

dev   2         Mon Apr 16 23:01:06 2018  FAILED    onap-2.0.0  onap

dev   9         Tue Apr 17 12:59:25 2018  DEPLOYED  onap-2.0.0  onap

helm repo list

NAME    URL

stable  https://kubernetes-charts.storage.googleapis.com

local   http://127.0.0.1:8879

#helm upgrade -i dev local/onap --namespace onap -f onap/resources/environments/integration.yaml

helm upgrade -i dev local/onap --namespace onap -f integration-override.yaml


# to upgrade robot

# a config-only upgrade should use the local/onap syntax to let Kubernetes decide what to redeploy based on the parent chart (local/onap)

helm upgrade -i dev local/onap --namespace onap -f integration-override.yaml

# if the docker container changes, toggle enabled=false then enabled=true

helm upgrade -i dev local/onap --namespace onap -f integration-override.yaml --set robot.enabled=false
helm upgrade -i dev local/onap --namespace onap -f integration-override.yaml --set robot.enabled=true
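The two-step toggle above can be collected into one small script. This is a sketch, not part of the original notes: DRY_RUN and COMPONENT are illustrative variable names, and with DRY_RUN left at its default of 1 the script only echoes the helm commands instead of running them.

```shell
# Bounce one component by flipping its enabled flag.
# DRY_RUN=1 (the default here, for safety) echoes commands instead of executing.
DRY_RUN="${DRY_RUN:-1}"
COMPONENT="${COMPONENT:-robot}"

run() {
  if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi
}

run helm upgrade -i dev local/onap --namespace onap \
  -f integration-override.yaml --set "${COMPONENT}.enabled=false"
# (wait here for the old pod to terminate before re-enabling)
run helm upgrade -i dev local/onap --namespace onap \
  -f integration-override.yaml --set "${COMPONENT}.enabled=true"
```

Run it as-is to preview the commands, then rerun with DRY_RUN=0 against the cluster.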
 

# if both the config and the docker container change: set enabled=false, do the make of the component, then make onap, then set enabled=true

helm upgrade -i dev local/onap --namespace onap -f /root/integration-override.yaml --set robot.enabled=false

Confirm the assets are removed with get pods, get pv, get pvc, get secret, get configmap for those pieces you don't want to preserve
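The confirmation step can be looped over the resource kinds. A sketch, with "robot" as an example component filter (substitute the component you disabled):

```shell
# List leftovers of one component across each resource kind; prints "(none)"
# when nothing matches (or when kubectl cannot reach the cluster).
COMPONENT="${COMPONENT:-robot}"
for kind in pods pv pvc secret configmap; do
  echo "== leftover $kind matching $COMPONENT =="
  kubectl -n onap get "$kind" 2>/dev/null | grep "$COMPONENT" || echo "(none)"
done
```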

cd  /root/oom/kubernetes

make robot

make onap

helm upgrade -i dev local/onap --namespace onap -f /root/integration-override.yaml --set robot.enabled=true
kubectl get pods --all-namespaces -o=wide


# to check the status of a pod, e.g. the robot pod

kubectl -n onap describe pod dev-robot-5cfddf87fb-65zvv
# pullPolicy: Always (instead of IfNotPresent) makes Kubernetes re-pull the image on every pod start, so an updated image with an unchanged tag is picked up


### Faster method to do a delete for reinstall


kubectl delete namespace onap

kubectl delete pods -n onap --all

kubectl delete secrets -n onap --all

kubectl delete persistentvolumes -n onap --all

kubectl -n onap delete clusterrolebindings --all

helm del --purge dev

helm list -a

helm del --purge dev-[project] ← use this if helm list -a shows lingering releases in DELETED state
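Purging every lingering DELETED release can be scripted. A sketch assuming helm v2's default table output, where STATUS is the 8th whitespace-separated field (the UPDATED timestamp spans five fields); FORCE is an illustrative guard, off by default so the script only prints what it would purge.

```shell
# Print (or, with FORCE=1, actually purge) every release still in DELETED state.
helm list -a 2>/dev/null | awk 'NR > 1 && $8 == "DELETED" { print $1 }' |
while read -r rel; do
  if [ "$FORCE" = 1 ]; then
    helm del --purge "$rel"
  else
    echo "would purge: $rel"
  fi
done
```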


if you have pods stuck terminating for a long time


kubectl delete pod --grace-period=0 --force --namespace onap --all
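A more surgical variant force-deletes only the pods actually stuck in Terminating, rather than every pod in the namespace. A sketch: STATUS is the 3rd column of kubectl get pods, and FORCE is an illustrative guard, off by default.

```shell
# Force-delete only the Terminating pods in the onap namespace.
kubectl -n onap get pods 2>/dev/null | awk '$3 == "Terminating" { print $1 }' |
while read -r pod; do
  if [ "$FORCE" = 1 ]; then
    kubectl -n onap delete pod "$pod" --grace-period=0 --force
  else
    echo "would force-delete: $pod"
  fi
done
```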


# reinstall of the NAME=dev release

helm upgrade -i dev local/onap --namespace onap -f integration-override.yaml


To test with a smaller ConfigMap, try disabling some components, e.g.:


helm upgrade -i dev local/onap --namespace onap -f /root/integration-override.yaml --set log.enabled=false --set clamp.enabled=false --set pomba.enabled=false --set vnfsdk.enabled=false

(aaf is needed by a lot of modules in Casablanca, but this is a near equivalent)

helm upgrade  -i dev local/onap --namespace onap -f /root/integration-override.yaml --set log.enabled=false --set aaf.enabled=false --set pomba.enabled=false --set vnfsdk.enabled=false

Note: setting log.enabled=false means that you will need to hunt down the /var/log/onap logs on each docker container, instead of using the Kibana search on the ELK stack (deployed on port 30253) that consolidates all ONAP logs
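When several components get toggled at once, it helps to build the --set flags from a single list instead of repeating them on the command line. A sketch (the component names are the ones used in the commands above):

```shell
# Compose the helm upgrade command from one list of disabled components.
disabled="log clamp pomba vnfsdk"
flags=""
for c in $disabled; do
  flags="$flags --set ${c}.enabled=false"
done
echo "helm upgrade -i dev local/onap --namespace onap -f /root/integration-override.yaml$flags"
```

Drop the echo (or pipe the line to sh) once the composed command looks right.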


## Slower method to delete a full deployment

helm del dev --purge 

kubectl get pods --all-namespaces -o=wide

# wait until all Terminating pods are gone
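That wait can be automated with a small poll loop (a sketch; checks every 10 seconds and exits as soon as no pod anywhere reports Terminating):

```shell
# Block until no pod in any namespace is still Terminating.
while kubectl get pods --all-namespaces 2>/dev/null | grep -q Terminating; do
  echo "pods still terminating, waiting..."
  sleep 10
done
echo "no Terminating pods left"
```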

kubectl -n onap get pvc

# look for persistent volume claims that have not been removed.

kubectl -n onap delete pvc  dev-sdnc-db-data-dev-sdnc-db-0

# dev-sdnc-db-data-dev-sdnc-db-0 is the name from the NAME column of the get pvc output


# same for pv (persistent volumes)

kubectl -n onap get pv
kubectl -n onap delete  pv  pvc-c0180abd-4251-11e8-b07c-02ee3a27e357

#same for pv, pvc, secret, configmap, services
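The per-kind cleanup can be collapsed into one loop. A sketch: FORCE is an illustrative guard, off by default, so the loop only prints what it would delete; kubectl get -o name yields kind/name pairs that feed straight into kubectl delete.

```shell
# Walk each leftover resource kind in the onap namespace and delete
# (or, by default, just list) every instance.
for kind in pv pvc secret configmap services; do
  kubectl -n onap get "$kind" -o name 2>/dev/null |
  while read -r res; do
    if [ "$FORCE" = 1 ]; then
      kubectl -n onap delete "$res"
    else
      echo "would delete: $res"
    fi
  done
done
```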

kubectl get pods --all-namespaces -o=wide 
kubectl delete pod dev-sms-857f6dbd87-6lh9k -n onap   # a pod stuck in Terminating


# full install

# of the NAME=dev instance

helm upgrade -i dev local/onap --namespace onap -f integration-override.yaml
 
# update vm_properties.py
# robot/resources/config/eteshare/vm_properties.py
# cd to oom/kubernetes

Remember: do the enabled=false BEFORE doing the make onap, so that the kubectl processing uses the old chart to delete the pod

# make robot; make onap
# helm upgrade -i dev local/onap --namespace onap -f integration-override.yaml   (this would just redeploy robot, because the change is configMap-only)


Container debugging commands


kubectl -n onap logs pod/dev-sdnc-0 -c sdnc
