http://10.12.6.190:8080/admin/access/
The IP changes according to the environment where the setup is created. Use the UI to identify the node where Rancher is installed and access its external IP.
http://10.12.6.190:8080/r/projects/1a7/kubernetes-dashboard:9090/#!/service?namespace=default
The IP changes according to the environment where the setup is created. Use the UI to identify the node where Rancher is installed and access its external IP.
root@rancher:~# kubectl get pods -n onap
NAME                                     READY     STATUS    RESTARTS   AGE
dev-sdc-be-7dfd76f8b7-hc26c              2/2       Running   0          2d
dev-sdc-cs-7d7787b7f5-6ch55              1/1       Running   0          2d
dev-sdc-es-9477ccd7c-8jt5p               1/1       Running   0          2d
dev-sdc-fe-59dbb59656-v7kwn              2/2       Running   0          2d
dev-sdc-kb-7cfbd85c7b-pgpnj              1/1       Running   0          2d
dev-sdc-onboarding-be-745c794884-6c9tf   2/2       Running   0          2d
dev-sdc-wfd-6f7c9d778b-hbzlf             1/1       Running   0          2d
root@rancher:~# kubectl get pods -n onap -o wide
NAME                                     READY     STATUS    RESTARTS   AGE       IP              NODE
dev-sdc-be-7dfd76f8b7-hc26c              2/2       Running   0          2d        10.42.202.150   k8s-3
dev-sdc-cs-7d7787b7f5-6ch55              1/1       Running   0          2d        10.42.100.30    k8s-3
dev-sdc-es-9477ccd7c-8jt5p               1/1       Running   0          2d        10.42.74.202    k8s-4
dev-sdc-fe-59dbb59656-v7kwn              2/2       Running   0          2d        10.42.73.184    k8s-2
dev-sdc-kb-7cfbd85c7b-pgpnj              1/1       Running   0          2d        10.42.44.238    k8s-9
dev-sdc-onboarding-be-745c794884-6c9tf   2/2       Running   0          2d        10.42.7.132     k8s-4
dev-sdc-wfd-6f7c9d778b-hbzlf             1/1       Running   0          2d        10.42.113.182   k8s-8
root@rancher:~# kubectl get services -n onap
NAME                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                         AGE
sdc-be              NodePort    10.43.1.23      <none>        8443:30204/TCP,8080:30205/TCP   2d
sdc-cs              ClusterIP   10.43.204.60    <none>        9160/TCP,9042/TCP               2d
sdc-es              ClusterIP   10.43.48.202    <none>        9200/TCP,9300/TCP               2d
sdc-fe              NodePort    10.43.43.115    <none>        8181:30206/TCP,9443:30207/TCP   2d
sdc-kb              ClusterIP   10.43.160.51    <none>        5601/TCP                        2d
sdc-onboarding-be   ClusterIP   10.43.191.47    <none>        8445/TCP,8081/TCP               2d
sdc-wfd             NodePort    10.43.220.184   <none>        8080:30256/TCP                  2d
root@rancher:~# kubectl config get-contexts
CURRENT   NAME   CLUSTER   AUTHINFO   NAMESPACE
*         oom    oom       oom
root@k8s:~# helm list
NAME   REVISION   UPDATED                    STATUS     CHART        NAMESPACE
dev    2          Mon Apr 16 23:01:06 2018   FAILED     onap-2.0.0   onap
dev    9          Tue Apr 17 12:59:25 2018   DEPLOYED   onap-2.0.0   onap
root@rancher:~# helm repo list
NAME     URL
stable   https://kubernetes-charts.storage.googleapis.com
local    http://127.0.0.1:8879
Use the local/onap syntax so that Helm resolves the charts based on the parent chart (local/onap).
helm upgrade -i dev local/onap --namespace onap -f integration-override.yaml
Disable the pods, wait for them to stop, and then re-enable them:
cd ~
helm upgrade -i dev local/onap --namespace onap -f integration-override.yaml --set sdc.enabled=false
helm upgrade -i dev local/onap --namespace onap -f integration-override.yaml --set sdc.enabled=true
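Between the disable and the re-enable, the SDC pods take a while to terminate. A minimal polling sketch; the `wait_for_sdc_gone` helper name and the 5-second interval are illustrative assumptions, not part of OOM:

```shell
#!/bin/bash
# Poll until no sdc pods remain in the onap namespace.
# Helper name and sleep interval are illustrative assumptions.
wait_for_sdc_gone() {
  while kubectl get pods -n onap 2>/dev/null | grep -q sdc; do
    echo "sdc pods still present, waiting..."
    sleep 5
  done
  echo "all sdc pods are gone"
}
```

Run `wait_for_sdc_gone` after the `--set sdc.enabled=false` upgrade and before re-enabling.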
helm list -a
..
dev-sdc   2   Wed Oct 17 18:06:08 2018   DEPLOYED   sdc-3.0.0   onap
..
helm del dev-sdc --purge
helm list -a                          # confirm it is gone
kubectl get pods -n onap | grep sdc   # check the pods are done
helm deploy dev local/onap -f /root/integration-override.yaml --namespace onap
In case you want to update the environment JSON for SDC:
Update the file under oom/kubernetes/sdc/resources/config/environments/AUTO.json.
Stop the pods, rebuild the charts, and start the pods:
helm upgrade -i dev local/onap --namespace onap -f /root/integration-override.yaml --set sdc.enabled=false
make -C /root/oom/kubernetes onap
helm upgrade -i dev local/onap --namespace onap -f /root/integration-override.yaml --set sdc.enabled=true
kubectl -n onap describe pod dev-robot-5cfddf87fb-65zvv
kubectl -n onap exec -it dev-sdc-onboarding-be-745c794884-ss8w4 bash
In case there are multiple containers in the same pod, use -c:
kubectl -n onap exec -it dev-sdc-onboarding-be-745c794884-ss8w4 -c <container name as defined in the deployment yaml for the pod> bash
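To find out which container names you can pass to -c, you can read them out of the pod's spec. A sketch; the `pod_containers` helper name is a convenience assumption, not an OOM tool:

```shell
#!/bin/bash
# Print the container names defined in a pod's spec, one per line.
# The helper name is an illustrative assumption.
pod_containers() {
  kubectl -n onap get pod "$1" -o jsonpath='{.spec.containers[*].name}' | tr ' ' '\n'
}
```

Example: `pod_containers dev-sdc-onboarding-be-745c794884-ss8w4`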
In case there are multiple containers in the same pod, use -c:
kubectl -n onap logs -f dev-sdc-onboarding-be-745c794884-ss8w4 -c <container name as defined in the deployment yaml for the pod>
kubectl get configmaps -n onap
kubectl get configmaps -n onap dev-sdc-environments-configmap -o yaml
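If you only want the environment JSON itself rather than the whole configmap, jsonpath can pull out a single data key. A sketch; the assumption that the data key is named `AUTO.json` (matching the file it was loaded from) may not hold in your deployment, and the helper name is illustrative:

```shell
#!/bin/bash
# Print one key from the SDC environments configmap.
# Assumes the data key is named AUTO.json; adjust if it differs.
get_sdc_env_json() {
  kubectl -n onap get configmap dev-sdc-environments-configmap \
    -o jsonpath='{.data.AUTO\.json}'
}
```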
Then check that all pods are stopped. Wait until every pod in the Terminating state is gone; if any remain, keep waiting.
helm del dev --purge
kubectl get pods --all-namespaces -o=wide
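Rather than re-running the listing by hand until the Terminating pods disappear, the wait can be scripted. A sketch; the `wait_for_terminating` helper name and the 10-second interval are illustrative assumptions:

```shell
#!/bin/bash
# Poll until no pod in any namespace is in the Terminating state.
# Helper name and sleep interval are illustrative assumptions.
wait_for_terminating() {
  while kubectl get pods --all-namespaces 2>/dev/null | grep -q Terminating; do
    sleep 10
  done
  echo "no Terminating pods left"
}
```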
Delete any persistent volume claims and persistent volumes that have not been removed:
kubectl -n onap get pvc
kubectl -n onap delete pvc dev-sdnc-db-data-dev-sdnc-db-0
kubectl -n onap get pv
kubectl -n onap delete pv pvc-c0180abd-4251-11e8-b07c-02ee3a27e357
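Deleting leftover claims one by one is tedious; a hedged sketch that removes every remaining PVC in the onap namespace. Use with care after a full teardown only; the `delete_onap_pvcs` helper name is an illustrative assumption:

```shell
#!/bin/bash
# Delete every persistent volume claim left in the onap namespace.
# Helper name is an illustrative assumption; destructive, use with care.
delete_onap_pvcs() {
  for pvc in $(kubectl -n onap get pvc -o name); do
    kubectl -n onap delete "$pvc"
  done
}
```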
kubectl delete pod dev-sms-857f6dbd87-6lh9k -n onap
helm upgrade -i dev local/onap --namespace onap -f integration-override.yaml
The Portal application is the only one with an exposed external port.
Look for the external IP of the node that hosts the Portal (internal IP 10.0.0.4) in the UI listing the instances.
kubectl get service -o wide -n onap
Define the IP mapping on your local machine (e.g. in the hosts file) so that the browser can resolve the hostnames to the correct IP:
10.12.6.37 portal.api.simpledemo.onap.org
10.12.6.37 sdc.api.fe.simpledemo.onap.org
To access the Portal, use the links below after the name resolution has been updated:
http://portal.api.simpledemo.onap.org:30215/ONAPPORTAL/login.htm
https://portal.api.simpledemo.onap.org:30225/ONAPPORTAL/login.htm