
...

Tracking 

Jira: OOM-1598 (ONAP JIRA)
WIP - as of 20190225

NON-HA version in https://git.onap.org/oom/tree/kubernetes/contrib/tools/rke/rke_setup.sh

https://gerrit.onap.org/r/#/c/79067/

Move to https://onap.readthedocs.io/en/beijing/submodules/oom.git/docs/oom_cloud_setup_guide.html or similar when this documentation is released

https://lists.onap.org/g/onap-discuss/topic/rke_ha_collaboration_for/31313969?p=,,,20,0,0,0::recentpostdate%2Fsticky,,,20,2,0,31313969

Automated Installation Video of RKE install

20190227 - VMware

Attachment: 20190228_rke_install_preliminary.mp4

20190330 - AWS

Attachment: 20190330_rke_install_nso_blue_zoom_0_5_min.mp4

Quickstart

Get your public and private keys on the Ubuntu 16.04 VM.

...

Get the rke_setup.sh script from JIRA, Gerrit, or by cloning OOM once the review is merged.

Code Block
sudo wget https://jira.onap.org/secure/attachment/13589/rke_setup.sh
sudo chmod 777 rke_setup.sh

# on your laptop/where your cert is
# chmod 777 your cert before you scp it over
obrienbiometrics:full michaelobrien$ scp ~/wse_onap/onap_rsa ubuntu@rke0.onap.info:~/

# on the host
sudo cp onap_rsa ~/.ssh
sudo chmod 400 ~/.ssh/onap_rsa
sudo chown ubuntu:ubuntu ~/.ssh/onap_rsa 
# just verify
sudo vi ~/.ssh/authorized_keys

git clone --recurse-submodules https://gerrit.onap.org/r/oom
sudo cp oom/kubernetes/contrib/tools/rke/rke_setup.sh .
# -s takes the server IP (or localhost for a single-node install)
sudo nohup ./rke_setup.sh -b master -s 104.209.161.210 -e onap -k onap_rsa -l ubuntu &

Versions

Currently Docker 18.06, RKE 0.1.16, Kubernetes 1.11.6, Kubectl 1.11.6, Helm 2.12.3

TODO: verify later versions of helm and a way to get RKE to install Kubernetes 1.13

Prerequisites

Ubuntu 16.04 VM

Determine RKE and Docker versions

Don't just use the latest docker version - check the RKE release page to get the version pair - 0.1.15/17.03 and 0.1.16/18.06 - see https://github.com/docker/docker-ce/releases - currently https://github.com/docker/docker-ce/releases/tag/v18.06.3-ce

Code Block
ubuntu@a-rke:~$ sudo curl https://releases.rancher.com/install-docker/18.06.sh | sh
ubuntu@a-rke:~$ sudo usermod -aG docker ubuntu
ubuntu@a-rke:~$ sudo docker version
Client:
 Version:           18.06.3-ce
 API version:       1.38
 Go version:        go1.10.3
 Git commit:        d7080c1
 Built:             Wed Feb 20 02:27:18 2019

# install RKE
sudo wget https://github.com/rancher/rke/releases/download/v0.1.16/rke_linux-amd64
mv rke_linux-amd64 rke
sudo mv ./rke /usr/local/bin/rke

ubuntu@a-rke:~$ rke --version
rke version v0.1.16
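The RKE/Docker pairing above can be captured in a small lookup so install scripts fail loudly on an unvalidated combination. A minimal sketch; `docker_version_for_rke` is a hypothetical helper name, and the 0.2.1/18.09 pair is taken from the HA section later on this page:

```shell
# map an RKE release to the Docker release it was validated against
docker_version_for_rke() {
  case "$1" in
    0.1.15) echo "17.03" ;;
    0.1.16) echo "18.06" ;;
    0.2.1)  echo "18.09" ;;   # pairing used in the HA install below
    *)      echo "unknown RKE version $1" >&2; return 1 ;;
  esac
}
```

The result can feed the Rancher install script directly, e.g. `https://releases.rancher.com/install-docker/$(docker_version_for_rke 0.1.16).sh`.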

...



ubuntu@a-rke0-master:~$ kubectl get pods --all-namespaces
NAMESPACE       NAME                                      READY     STATUS      RESTARTS   AGE
ingress-nginx   default-http-backend-797c5bc547-55fpn     1/1       Running     0          4m
ingress-nginx   nginx-ingress-controller-znhgz            1/1       Running     0          4m
kube-system     canal-dqt2m                               3/3       Running     0          5m
kube-system     kube-dns-7588d5b5f5-pzdfh                 3/3       Running     0          5m
kube-system     kube-dns-autoscaler-5db9bbb766-b7vvg      1/1       Running     0          5m
kube-system     metrics-server-97bc649d5-fmqjd            1/1       Running     0          4m
kube-system     rke-ingress-controller-deploy-job-dxmbd   0/1       Completed   0          4m
kube-system     rke-kubedns-addon-deploy-job-wqccp        0/1       Completed   0          5m
kube-system     rke-metrics-addon-deploy-job-ssrgp        0/1       Completed   0          4m
kube-system     rke-network-plugin-deploy-job-jkffq       0/1       Completed   0          5m
kube-system     tiller-deploy-759cb9df9-rlt7v             1/1       Running     0          2m
ubuntu@a-rke0-master:~$ helm list



Private SSH key

scp your private key to the box - ideally into ~/.ssh - and chmod 400 it; make sure the matching public key is in authorized_keys
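A sketch of the key placement, assuming the private key is named onap_rsa and has already been scp'd to the VM (`install_key` is a hypothetical helper):

```shell
# copy the RKE ssh private key into ~/.ssh with owner-only permissions
install_key() {
  keyfile="$1"
  mkdir -p "$HOME/.ssh"
  chmod 700 "$HOME/.ssh"
  cp "$keyfile" "$HOME/.ssh/"
  chmod 400 "$HOME/.ssh/$(basename "$keyfile")"
}

# usage after scp'ing the key to the VM:
# install_key ~/onap_rsa
```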


Elastic Reserved IP

Get a VIP or EIP and assign it to your VM.

generate cluster.yml - optional

cluster.yml will be generated by the rke_setup.sh script.

Code Block
# azure config - no need to hand-build the yml
# watch the path of your 2 keys
# also don't add an "addon" until you have one, or the config job will fail
ubuntu@a-rke:~$ rke config --name cluster.yml
[+] Cluster Level SSH Private Key Path [~/.ssh/id_rsa]: ~/.ssh/onap_rsa
[+] Number of Hosts [1]: 
[+] SSH Address of host (1) [none]: rke.onap.cloud
[+] SSH Port of host (1) [22]: 
[+] SSH Private Key Path of host (rke.onap.cloud) [none]: ~/.ssh/onap_rsa
[+] SSH User of host (rke.onap.cloud) [ubuntu]: 
[+] Is host (rke.onap.cloud) a Control Plane host (y/n)? [y]: y
[+] Is host (rke.onap.cloud) a Worker host (y/n)? [n]: y
[+] Is host (rke.onap.cloud) an etcd host (y/n)? [n]: y
[+] Override Hostname of host (rke.onap.cloud) [none]: 
[+] Internal IP of host (rke.onap.cloud) [none]: 
[+] Docker socket path on host (rke.onap.cloud) [/var/run/docker.sock]: 
[+] Network Plugin Type (flannel, calico, weave, canal) [canal]: 
[+] Authentication Strategy [x509]: 
[+] Authorization Mode (rbac, none) [rbac]: 
[+] Kubernetes Docker image [rancher/hyperkube:v1.11.6-rancher1]: 
[+] Cluster domain [cluster.local]: 
[+] Service Cluster IP Range [10.43.0.0/16]: 
[+] Enable PodSecurityPolicy [n]: 
[+] Cluster Network CIDR [10.42.0.0/16]: 
[+] Cluster DNS Service IP [10.43.0.10]: 
[+] Add addon manifest URLs or YAML files [no]: no
ubuntu@a-rke:~$ sudo cat cluster.yml 
# If you intened to deploy Kubernetes in an air-gapped environment,
# please consult the documentation on how to configure custom RKE images.
nodes:
- address: rke.onap.cloud
  port: "22"
  internal_address: ""
  role:
  - controlplane
  - worker
  - etcd
  hostname_override: ""
  user: ubuntu
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/onap_rsa
  labels: {}
services:
  etcd:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    external_urls: []
    ca_cert: ""
    cert: ""
    key: ""
    path: ""
    snapshot: null
    retention: ""
    creation: ""
  kube-api:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    service_cluster_ip_range: 10.43.0.0/16
    service_node_port_range: ""
    pod_security_policy: false
  kube-controller:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    cluster_cidr: 10.42.0.0/16
    service_cluster_ip_range: 10.43.0.0/16
  scheduler:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
  kubelet:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    cluster_domain: cluster.local
    infra_container_image: ""
    cluster_dns_server: 10.43.0.10
    fail_swap_on: false
  kubeproxy:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
network:
  plugin: canal
  options: {}
authentication:
  strategy: x509
  options: {}
  sans: []
system_images:
  etcd: rancher/coreos-etcd:v3.2.18
  alpine: rancher/rke-tools:v0.1.15
  nginx_proxy: rancher/rke-tools:v0.1.15
  cert_downloader: rancher/rke-tools:v0.1.15
  kubernetes_services_sidecar: rancher/rke-tools:v0.1.15
  kubedns: rancher/k8s-dns-kube-dns-amd64:1.14.10
  dnsmasq: rancher/k8s-dns-dnsmasq-nanny-amd64:1.14.10
  kubedns_sidecar: rancher/k8s-dns-sidecar-amd64:1.14.10
  kubedns_autoscaler: rancher/cluster-proportional-autoscaler-amd64:1.0.0
  kubernetes: rancher/hyperkube:v1.11.6-rancher1
  flannel: rancher/coreos-flannel:v0.10.0
  flannel_cni: rancher/coreos-flannel-cni:v0.3.0
  calico_node: rancher/calico-node:v3.1.3
  calico_cni: rancher/calico-cni:v3.1.3
  calico_controllers: ""
  calico_ctl: rancher/calico-ctl:v2.0.0
  canal_node: rancher/calico-node:v3.1.3
  canal_cni: rancher/calico-cni:v3.1.3
  canal_flannel: rancher/coreos-flannel:v0.10.0
  wave_node: weaveworks/weave-kube:2.1.2
  weave_cni: weaveworks/weave-npc:2.1.2
  pod_infra_container: rancher/pause-amd64:3.1
  ingress: rancher/nginx-ingress-controller:0.16.2-rancher1
  ingress_backend: rancher/nginx-ingress-controller-defaultbackend:1.4
  metrics_server: rancher/metrics-server-amd64:v0.2.1
ssh_key_path: ~/.ssh/onap_rsa
ssh_agent_auth: false
authorization:
  mode: rbac
  options: {}
ignore_docker_version: false
kubernetes_version: ""
private_registries: []
ingress:
  provider: ""
  options: {}
  node_selector: {}
  extra_args: {}
cluster_name: ""
cloud_provider:
  name: ""
prefix_path: ""
addon_job_timeout: 0
bastion_host:
  address: ""
  port: ""
  user: ""
  ssh_key: ""
  ssh_key_path: ""
monitoring:
  provider: ""
  options: {}



Kubernetes Single Node Developer Installation

Code Block
sudo ./rke_setup.sh -b master -s localhost -e onap -l ubuntu


Kubernetes HA Cluster Production Installation

Design Issues

DI 20190225-1: RKE/Docker version pair

As of 20190215, RKE 0.1.16 supports Docker 18.06-ce (and 18.09 non-ce), up from 0.1.15, which supported 17.03.

https://github.com/docker/docker-ce/releases/tag/v18.06.3-ce

https://github.com/rancher/rke/releases/tag/v0.1.16

Code Block
ubuntu@a-rke:~$ sudo rke up
INFO[0000] Building Kubernetes cluster                  
INFO[0000] [dialer] Setup tunnel for host [rke.onap.cloud] 
FATA[0000] Unsupported Docker version found [18.06.3-ce], supported versions are [1.11.x 1.12.x 1.13.x 17.03.x] 

DI 20190225-2: RKE upgrade from 0.1.15 to 0.1.16 - not working

Run rke remove, regenerate the yaml (or hand-upgrade the versions in it), then rke up

Code Block
ubuntu@a-rke:~$ sudo rke remove
Are you sure you want to remove Kubernetes cluster [y/n]: y
INFO[0002] Tearing down Kubernetes cluster              
INFO[0002] [dialer] Setup tunnel for host [rke.onap.cloud] 
INFO[0002] [worker] Tearing down Worker Plane..         
INFO[0002] [remove/kubelet] Successfully removed container on host [rke.onap.cloud] 
INFO[0003] [remove/kube-proxy] Successfully removed container on host [rke.onap.cloud] 
INFO[0003] [remove/service-sidekick] Successfully removed container on host [rke.onap.cloud] 
INFO[0003] [worker] Successfully tore down Worker Plane.. 
INFO[0003] [controlplane] Tearing down the Controller Plane.. 
INFO[0003] [remove/kube-apiserver] Successfully removed container on host [rke.onap.cloud] 
INFO[0003] [remove/kube-controller-manager] Successfully removed container on host [rke.onap.cloud] 
INFO[0004] [remove/kube-scheduler] Successfully removed container on host [rke.onap.cloud] 
INFO[0004] [controlplane] Host [rke.onap.cloud] is already a worker host, skipping delete kubelet and kubeproxy. 
INFO[0004] [controlplane] Successfully tore down Controller Plane.. 
INFO[0004] [etcd] Tearing down etcd plane..             
INFO[0004] [remove/etcd] Successfully removed container on host [rke.onap.cloud] 
INFO[0004] [etcd] Successfully tore down etcd plane..   
INFO[0004] [hosts] Cleaning up host [rke.onap.cloud]    
INFO[0004] [hosts] Cleaning up host [rke.onap.cloud]    
INFO[0004] [hosts] Running cleaner container on host [rke.onap.cloud] 
INFO[0005] [kube-cleaner] Successfully started [kube-cleaner] container on host [rke.onap.cloud] 
INFO[0005] [hosts] Removing cleaner container on host [rke.onap.cloud] 
INFO[0005] [hosts] Removing dead container logs on host [rke.onap.cloud] 
INFO[0006] [cleanup] Successfully started [rke-log-cleaner] container on host [rke.onap.cloud] 
INFO[0006] [remove/rke-log-cleaner] Successfully removed container on host [rke.onap.cloud] 
INFO[0006] [hosts] Successfully cleaned up host [rke.onap.cloud] 
INFO[0006] [hosts] Cleaning up host [rke.onap.cloud]    
INFO[0006] [hosts] Cleaning up host [rke.onap.cloud]    
INFO[0006] [hosts] Running cleaner container on host [rke.onap.cloud] 
INFO[0007] [kube-cleaner] Successfully started [kube-cleaner] container on host [rke.onap.cloud] 
INFO[0008] [hosts] Removing cleaner container on host [rke.onap.cloud] 
INFO[0008] [hosts] Removing dead container logs on host [rke.onap.cloud] 
INFO[0008] [cleanup] Successfully started [rke-log-cleaner] container on host [rke.onap.cloud] 
INFO[0009] [remove/rke-log-cleaner] Successfully removed container on host [rke.onap.cloud] 
INFO[0009] [hosts] Successfully cleaned up host [rke.onap.cloud] 
INFO[0009] [hosts] Cleaning up host [rke.onap.cloud]    
INFO[0009] [hosts] Cleaning up host [rke.onap.cloud]    
INFO[0009] [hosts] Running cleaner container on host [rke.onap.cloud] 
INFO[0010] [kube-cleaner] Successfully started [kube-cleaner] container on host [rke.onap.cloud] 
INFO[0010] [hosts] Removing cleaner container on host [rke.onap.cloud] 
INFO[0010] [hosts] Removing dead container logs on host [rke.onap.cloud] 
INFO[0011] [cleanup] Successfully started [rke-log-cleaner] container on host [rke.onap.cloud] 
INFO[0011] [remove/rke-log-cleaner] Successfully removed container on host [rke.onap.cloud] 
INFO[0011] [hosts] Successfully cleaned up host [rke.onap.cloud] 
INFO[0011] Removing local admin Kubeconfig: ./kube_config_cluster.yml 
INFO[0011] Local admin Kubeconfig removed successfully  
INFO[0011] Cluster removed successfully  

ubuntu@a-rke:~$ rke config --name cluster.yml
ubuntu@a-rke:~$ sudo rke up
INFO[0000] Building Kubernetes cluster                  
INFO[0000] [dialer] Setup tunnel for host [rke.onap.cloud] 
INFO[0000] [network] Deploying port listener containers 
INFO[0001] [network] Successfully started [rke-etcd-port-listener] container on host [rke.onap.cloud] 
INFO[0001] [network] Successfully started [rke-cp-port-listener] container on host [rke.onap.cloud] 
INFO[0002] [network] Successfully started [rke-worker-port-listener] container on host [rke.onap.cloud] 
INFO[0002] [network] Port listener containers deployed successfully 
INFO[0002] [network] Running control plane -> etcd port checks 
INFO[0003] [network] Successfully started [rke-port-checker] container on host [rke.onap.cloud] 
INFO[0003] [network] Running control plane -> worker port checks 
INFO[0004] [network] Successfully started [rke-port-checker] container on host [rke.onap.cloud] 
INFO[0004] [network] Running workers -> control plane port checks 
INFO[0005] [network] Successfully started [rke-port-checker] container on host [rke.onap.cloud] 
INFO[0005] [network] Checking KubeAPI port Control Plane hosts 
INFO[0005] [network] Removing port listener containers  
INFO[0005] [remove/rke-etcd-port-listener] Successfully removed container on host [rke.onap.cloud] 
INFO[0006] [remove/rke-cp-port-listener] Successfully removed container on host [rke.onap.cloud] 
INFO[0006] [remove/rke-worker-port-listener] Successfully removed container on host [rke.onap.cloud] 
INFO[0006] [network] Port listener containers removed successfully 
INFO[0006] [certificates] Attempting to recover certificates from backup on [etcd,controlPlane] hosts 
INFO[0007] [certificates] No Certificate backup found on [etcd,controlPlane] hosts 
INFO[0007] [certificates] Generating CA kubernetes certificates 
INFO[0007] [certificates] Generating Kubernetes API server certficates 
INFO[0008] [certificates] Generating Kube Controller certificates 
INFO[0008] [certificates] Generating Kube Scheduler certificates 
INFO[0008] [certificates] Generating Kube Proxy certificates 
INFO[0009] [certificates] Generating Node certificate   
INFO[0009] [certificates] Generating admin certificates and kubeconfig 
INFO[0009] [certificates] Generating etcd-rke.onap.cloud certificate and key 
INFO[0009] [certificates] Generating Kubernetes API server aggregation layer requestheader client CA certificates 
INFO[0009] [certificates] Generating Kubernetes API server proxy client certificates 
INFO[0010] [certificates] Temporarily saving certs to [etcd,controlPlane] hosts 
INFO[0016] [certificates] Saved certs to [etcd,controlPlane] hosts 
INFO[0016] [reconcile] Reconciling cluster state        
INFO[0016] [reconcile] This is newly generated cluster  
INFO[0016] [certificates] Deploying kubernetes certificates to Cluster nodes 
INFO[0022] Successfully Deployed local admin kubeconfig at [./kube_config_cluster.yml] 
INFO[0022] [certificates] Successfully deployed kubernetes certificates to Cluster nodes 
INFO[0022] Pre-pulling kubernetes images                
INFO[0022] Kubernetes images pulled successfully        
INFO[0022] [etcd] Building up etcd plane..              
INFO[0023] [etcd] Successfully started [etcd] container on host [rke.onap.cloud] 
INFO[0023] [etcd] Saving snapshot [etcd-rolling-snapshots] on host [rke.onap.cloud] 
INFO[0028] [certificates] Successfully started [rke-bundle-cert] container on host [rke.onap.cloud] 
INFO[0029] [certificates] successfully saved certificate bundle [/opt/rke/etcd-snapshots//pki.bundle.tar.gz] on host [rke.onap.cloud] 
INFO[0029] [etcd] Successfully started [rke-log-linker] container on host [rke.onap.cloud] 
INFO[0030] [remove/rke-log-linker] Successfully removed container on host [rke.onap.cloud] 
INFO[0030] [etcd] Successfully started etcd plane..     
INFO[0030] [controlplane] Building up Controller Plane.. 
INFO[0031] [controlplane] Successfully started [kube-apiserver] container on host [rke.onap.cloud] 
INFO[0031] [healthcheck] Start Healthcheck on service [kube-apiserver] on host [rke.onap.cloud] 
INFO[0045] [healthcheck] service [kube-apiserver] on host [rke.onap.cloud] is healthy 
INFO[0046] [controlplane] Successfully started [rke-log-linker] container on host [rke.onap.cloud] 
INFO[0046] [remove/rke-log-linker] Successfully removed container on host [rke.onap.cloud] 
INFO[0047] [controlplane] Successfully started [kube-controller-manager] container on host [rke.onap.cloud] 
INFO[0047] [healthcheck] Start Healthcheck on service [kube-controller-manager] on host [rke.onap.cloud] 
INFO[0052] [healthcheck] service [kube-controller-manager] on host [rke.onap.cloud] is healthy 
INFO[0053] [controlplane] Successfully started [rke-log-linker] container on host [rke.onap.cloud] 
INFO[0053] [remove/rke-log-linker] Successfully removed container on host [rke.onap.cloud] 
INFO[0054] [controlplane] Successfully started [kube-scheduler] container on host [rke.onap.cloud] 
INFO[0054] [healthcheck] Start Healthcheck on service [kube-scheduler] on host [rke.onap.cloud] 
INFO[0059] [healthcheck] service [kube-scheduler] on host [rke.onap.cloud] is healthy 
INFO[0060] [controlplane] Successfully started [rke-log-linker] container on host [rke.onap.cloud] 
INFO[0060] [remove/rke-log-linker] Successfully removed container on host [rke.onap.cloud] 
INFO[0060] [controlplane] Successfully started Controller Plane.. 
INFO[0060] [authz] Creating rke-job-deployer ServiceAccount 
INFO[0060] [authz] rke-job-deployer ServiceAccount created successfully 
INFO[0060] [authz] Creating system:node ClusterRoleBinding 
INFO[0060] [authz] system:node ClusterRoleBinding created successfully 
INFO[0060] [certificates] Save kubernetes certificates as secrets 
INFO[0060] [certificates] Successfully saved certificates as kubernetes secret [k8s-certs] 
INFO[0060] [state] Saving cluster state to Kubernetes   
INFO[0061] [state] Successfully Saved cluster state to Kubernetes ConfigMap: cluster-state 
INFO[0061] [state] Saving cluster state to cluster nodes 
INFO[0061] [state] Successfully started [cluster-state-deployer] container on host [rke.onap.cloud] 
INFO[0062] [remove/cluster-state-deployer] Successfully removed container on host [rke.onap.cloud] 
INFO[0062] [worker] Building up Worker Plane..          
INFO[0062] [remove/service-sidekick] Successfully removed container on host [rke.onap.cloud] 
INFO[0063] [worker] Successfully started [kubelet] container on host [rke.onap.cloud] 
INFO[0063] [healthcheck] Start Healthcheck on service [kubelet] on host [rke.onap.cloud] 
INFO[0068] [healthcheck] service [kubelet] on host [rke.onap.cloud] is healthy 
INFO[0069] [worker] Successfully started [rke-log-linker] container on host [rke.onap.cloud] 
INFO[0070] [remove/rke-log-linker] Successfully removed container on host [rke.onap.cloud] 
INFO[0070] [worker] Successfully started [kube-proxy] container on host [rke.onap.cloud] 
INFO[0070] [healthcheck] Start Healthcheck on service [kube-proxy] on host [rke.onap.cloud] 
INFO[0076] [healthcheck] service [kube-proxy] on host [rke.onap.cloud] is healthy 
INFO[0076] [worker] Successfully started [rke-log-linker] container on host [rke.onap.cloud] 
INFO[0077] [remove/rke-log-linker] Successfully removed container on host [rke.onap.cloud] 
INFO[0077] [worker] Successfully started Worker Plane.. 
INFO[0077] [sync] Syncing nodes Labels and Taints       
INFO[0077] [sync] Successfully synced nodes Labels and Taints 
INFO[0077] [network] Setting up network plugin: canal   
INFO[0077] [addons] Saving addon ConfigMap to Kubernetes 
INFO[0077] [addons] Successfully Saved addon to Kubernetes ConfigMap: rke-network-plugin 
INFO[0077] [addons] Executing deploy job..              
INFO[0082] [addons] Setting up KubeDNS                  
INFO[0082] [addons] Saving addon ConfigMap to Kubernetes 
INFO[0082] [addons] Successfully Saved addon to Kubernetes ConfigMap: rke-kubedns-addon 
INFO[0082] [addons] Executing deploy job..              
INFO[0087] [addons] KubeDNS deployed successfully..     
INFO[0087] [addons] Setting up Metrics Server           
INFO[0087] [addons] Saving addon ConfigMap to Kubernetes 
INFO[0087] [addons] Successfully Saved addon to Kubernetes ConfigMap: rke-metrics-addon 
INFO[0087] [addons] Executing deploy job..              
INFO[0092] [addons] KubeDNS deployed successfully..     
INFO[0092] [ingress] Setting up nginx ingress controller 
INFO[0092] [addons] Saving addon ConfigMap to Kubernetes 
INFO[0092] [addons] Successfully Saved addon to Kubernetes ConfigMap: rke-ingress-controller 
INFO[0092] [addons] Executing deploy job..              
INFO[0097] [ingress] ingress controller nginx is successfully deployed 
INFO[0097] [addons] Setting up user addons              
INFO[0097] [addons] Checking for included user addons   
WARN[0097] [addons] Unable to determine if  is a file path or url, skipping 
INFO[0097] [addons] Deploying rke-user-includes-addons  
INFO[0097] [addons] Saving addon ConfigMap to Kubernetes 
INFO[0097] [addons] Successfully Saved addon to Kubernetes ConfigMap: rke-user-includes-addons 
INFO[0097] [addons] Executing deploy job..              
WARN[0128] Failed to deploy addon execute job [rke-user-includes-addons]: Failed to get job complete status: <nil> 
INFO[0128] Finished building Kubernetes cluster successfully 

ubuntu@a-rke:~$ sudo docker ps
CONTAINER ID        IMAGE                                COMMAND                  CREATED             STATUS              PORTS               NAMES
ec26c4bd24b5        846921f0fe0e                         "/server"                10 minutes ago      Up 10 minutes                           k8s_default-http-backend_default-http-backend-797c5bc547-45msr_ingress-nginx_0eddfe19-394e-11e9-b708-000d3a0e23f3_0
f8d5db205e14        8a7739f672b4                         "/sidecar --v=2 --lo…"   10 minutes ago      Up 10 minutes                           k8s_sidecar_kube-dns-7588d5b5f5-6k286_kube-system_08c13783-394e-11e9-b708-000d3a0e23f3_0
490461545ae4        rancher/metrics-server-amd64         "/metrics-server --s…"   10 minutes ago      Up 10 minutes                           k8s_metrics-server_metrics-server-97bc649d5-q84tz_kube-system_0c566ec8-394e-11e9-b708-000d3a0e23f3_0
aaf03b62bd41        6816817d9dce                         "/dnsmasq-nanny -v=2…"   10 minutes ago      Up 10 minutes                           k8s_dnsmasq_kube-dns-7588d5b5f5-6k286_kube-system_08c13783-394e-11e9-b708-000d3a0e23f3_0
58ec007db72f        55ffe31ac578                         "/kube-dns --domain=…"   10 minutes ago      Up 10 minutes                           k8s_kubedns_kube-dns-7588d5b5f5-6k286_kube-system_08c13783-394e-11e9-b708-000d3a0e23f3_0
0a95c06f6aa6        e183460c484d                         "/cluster-proportion…"   10 minutes ago      Up 10 minutes                           k8s_autoscaler_kube-dns-autoscaler-5db9bbb766-6slz7_kube-system_08b5495c-394e-11e9-b708-000d3a0e23f3_0
968a7c99b210        rancher/pause-amd64:3.1              "/pause"                 10 minutes ago      Up 10 minutes                           k8s_POD_default-http-backend-797c5bc547-45msr_ingress-nginx_0eddfe19-394e-11e9-b708-000d3a0e23f3_0
69969b331e49        rancher/pause-amd64:3.1              "/pause"                 10 minutes ago      Up 10 minutes                           k8s_POD_metrics-server-97bc649d5-q84tz_kube-system_0c566ec8-394e-11e9-b708-000d3a0e23f3_0
baa5f03c16ff        rancher/pause-amd64:3.1              "/pause"                 10 minutes ago      Up 10 minutes                           k8s_POD_kube-dns-7588d5b5f5-6k286_kube-system_08c13783-394e-11e9-b708-000d3a0e23f3_0
82b2a9f640cb        rancher/pause-amd64:3.1              "/pause"                 10 minutes ago      Up 10 minutes                           k8s_POD_kube-dns-autoscaler-5db9bbb766-6slz7_kube-system_08b5495c-394e-11e9-b708-000d3a0e23f3_0
953a4d4be0c1        df4469c42185                         "/usr/bin/dumb-init …"   10 minutes ago      Up 10 minutes                           k8s_nginx-ingress-controller_nginx-ingress-controller-dfhp8_ingress-nginx_0ed3bdbf-394e-11e9-b708-000d3a0e23f3_0
cce552840749        rancher/pause-amd64:3.1              "/pause"                 10 minutes ago      Up 10 minutes                           k8s_POD_nginx-ingress-controller-dfhp8_ingress-nginx_0ed3bdbf-394e-11e9-b708-000d3a0e23f3_0
baa65f9c6f97        f0fad859c909                         "/opt/bin/flanneld -…"   10 minutes ago      Up 10 minutes                           k8s_kube-flannel_canal-lc6g6_kube-system_05904de9-394e-11e9-b708-000d3a0e23f3_0
1736ce68f41a        9f355e076ea7                         "/install-cni.sh"        10 minutes ago      Up 10 minutes                           k8s_install-cni_canal-lc6g6_kube-system_05904de9-394e-11e9-b708-000d3a0e23f3_0
615d3f702ee7        7eca10056c8e                         "start_runit"            10 minutes ago      Up 10 minutes                           k8s_calico-node_canal-lc6g6_kube-system_05904de9-394e-11e9-b708-000d3a0e23f3_0
1c4a702f0f18        rancher/pause-amd64:3.1              "/pause"                 10 minutes ago      Up 10 minutes                           k8s_POD_canal-lc6g6_kube-system_05904de9-394e-11e9-b708-000d3a0e23f3_0
0da1cada08e1        rancher/hyperkube:v1.11.6-rancher1   "/opt/rke-tools/entr…"   10 minutes ago      Up 10 minutes                           kube-proxy
57f44998f34a        rancher/hyperkube:v1.11.6-rancher1   "/opt/rke-tools/entr…"   11 minutes ago      Up 11 minutes                           kubelet
50f424c4daec        rancher/hyperkube:v1.11.6-rancher1   "/opt/rke-tools/entr…"   11 minutes ago      Up 11 minutes                           kube-scheduler
502d327912d9        rancher/hyperkube:v1.11.6-rancher1   "/opt/rke-tools/entr…"   11 minutes ago      Up 11 minutes                           kube-controller-manager
9fc706bbf3a5        rancher/hyperkube:v1.11.6-rancher1   "/opt/rke-tools/entr…"   11 minutes ago      Up 11 minutes                           kube-apiserver
2e7630c2047c        rancher/coreos-etcd:v3.2.18          "/usr/local/bin/etcd…"   11 minutes ago      Up 11 minutes                           etcd
fef566337eb6        rancher/rke-tools:v0.1.15            "/opt/rke-tools/rke-…"   26 minutes ago      Up 26 minutes                           etcd-rolling-snapshots


amdocs@obriensystemsu0:~$ kubectl get pods --all-namespaces
NAMESPACE       NAME                                      READY     STATUS      RESTARTS   AGE
ingress-nginx   default-http-backend-797c5bc547-m8hbx     1/1       Running     0          1h
ingress-nginx   nginx-ingress-controller-2v7w7            1/1       Running     0          1h
kube-system     canal-thmfg                               3/3       Running     0          1h
kube-system     kube-dns-7588d5b5f5-j66s8                 3/3       Running     0          1h
kube-system     kube-dns-autoscaler-5db9bbb766-rg5n8      1/1       Running     0          1h
kube-system     metrics-server-97bc649d5-jd2rr            1/1       Running     0          1h
kube-system     rke-ingress-controller-deploy-job-znp9n   0/1       Completed   0          1h
kube-system     rke-kubedns-addon-deploy-job-dzxsj        0/1       Completed   0          1h
kube-system     rke-metrics-addon-deploy-job-gpm4j        0/1       Completed   0          1h
kube-system     rke-network-plugin-deploy-job-kqdds       0/1       Completed   0          1h
kube-system     tiller-deploy-69458576b-khgr5             1/1       Running     0          1h

DI 20190226-1: RKE up segmentation fault on 0.1.16 - use correct user

Code Block
amdocs@obriensystemsu0:~$ sudo rke up
Segmentation fault (core dumped)

# issue: cluster.yml listed ubuntu as the ssh user instead of amdocs for this particular VM
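A cheap guard against this class of failure is to confirm that the ssh user recorded in cluster.yml actually exists on the host before running rke up. A sketch only; `check_yml_user` is a hypothetical helper assuming the single-host yml layout shown earlier:

```shell
# read the ssh user out of cluster.yml and confirm it is a real local account
check_yml_user() {
  cfg_user=$(awk '$1 == "user:" {print $2; exit}' "$1")
  echo "cluster.yml ssh user: ${cfg_user}"
  id "$cfg_user" >/dev/null 2>&1 || echo "WARNING: ${cfg_user} does not exist on this host"
}

# usage: check_yml_user cluster.yml
```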

DI 20190227-1: Verify no 110 pod limit per VM

https://forums.rancher.com/t/solved-setting-max-pods/11866

Code Block
kubelet:
    image: ""
    extra_args:
      max-pods: 900
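Before running rke up, it is worth verifying that the override actually landed in cluster.yml (a trivial sketch; `check_max_pods` is a hypothetical helper):

```shell
# fail fast if cluster.yml is missing the kubelet max-pods override
check_max_pods() {
  if grep -q 'max-pods' "$1"; then
    echo "max-pods override present"
  else
    echo "max-pods override missing"
    return 1
  fi
}

# usage: check_max_pods cluster.yml
```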


DI 20190228-1: deploy casablanca MR to RKE under K8S 1.11.6, Docker 18.06, Helm 2.12.3

Code Block
sudo git clone https://gerrit.onap.org/r/logging-analytics
sudo wget https://git.onap.org/oom/plain/kubernetes/onap/resources/environments/dev.yaml
sudo cp dev.yaml dev0.yaml
sudo vi dev0.yaml
sudo cp dev0.yaml dev1.yaml
sudo cp logging-analytics/deploy/cd.sh .
sudo ./cd.sh -b casablanca -e onap -p false nexus3.onap.org:10001 -f true -s 300 -c true -d false -w false -r false


Helm 2.12.3 deployment fails against the casablanca charts - use Helm 2.9.1 for now:
Error: Chart incompatible with Tiller v2.12.3

Alternatively, in the casablanca branch only, flip the tillerVersion constraint at
https://git.onap.org/oom/tree/kubernetes/onap/Chart.yaml?h=casablanca#n24
tillerVersion: "~2.9.1"
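One way to apply that flip locally (a sketch; the replacement range ">=2.9.1" is an assumption that happens to accept Tiller 2.12.3, not the upstream fix):

```shell
# relax the casablanca tillerVersion constraint so Helm 2.12.3 is accepted
flip_tiller_constraint() {
  sed -i 's/tillerVersion: "~2.9.1"/tillerVersion: ">=2.9.1"/' "$1"
}

# usage: flip_tiller_constraint oom/kubernetes/onap/Chart.yaml
```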

DI 20190305-1: Azure 256G VM full ONAP Testing

Code Block
obrienbiometrics:oom michaelobrien$ ssh ubuntu@onap-dmz.onap.cloud

./oom_deployment.sh -b master -s rke.onap.cloud -e onap -r a_rke0_master -t _arm_deploy_onap_cd.json -p _arm_deploy_onap_rke_z_parameters.json 

DI 20190425: HA RKE install Testing

Manual first for RC0, later retrofit the script in https://git.onap.org/oom/tree/kubernetes/contrib/tools/rke/rke_setup.sh and move/adjust the heat template in https://git.onap.org/logging-analytics/tree/deploy/heat/logging_openstack_13_16g.yaml

Installing on 6 nodes on AWS (Windriver is having an issue right now).

We are good on RKE 0.2.1, Ubuntu 18.04 / Kubernetes/kubectl 1.13.5 / Helm 2.13.1 / Docker 18.09.5

https://github.com/rancher/rke/releases RKE 0.2.2 has experimental k8s 1.14 support - running with 0.2.1 for now

Just need to test onap deployments

I'll do the NFS/EFS later before deployment

Code Block
# on all VMs (control, etcd, worker)
# move the key to all vms
scp ~/wse_onap/onap_rsa ubuntu@rke0.onap.info:~/
sudo curl https://releases.rancher.com/install-docker/18.09.sh | sh
sudo usermod -aG docker ubuntu

# nfs server

# on control/etcd nodes only
# from script
sudo wget https://github.com/rancher/rke/releases/download/v0.2.1/rke_linux-amd64
mv rke_linux-amd64 rke
sudo chmod +x rke
sudo mv ./rke /usr/local/bin/rke
# one-time setup of the cluster yaml (or reuse the generated one)
ubuntu@ip-172-31-38-182:~$ sudo rke config
[+] Cluster Level SSH Private Key Path [~/.ssh/id_rsa]: ~/.ssh/onap_rsa
[+] Number of Hosts [1]: 6
[+] SSH Address of host (1) [none]: 3.14.102.175
[+] SSH Port of host (1) [22]: 
[+] SSH Private Key Path of host (3.14.102.175) [none]: ~/.ssh/onap_rsa
[+] SSH User of host (3.14.102.175) [ubuntu]: ubuntu
[+] Is host (3.14.102.175) a Control Plane host (y/n)? [y]: y
[+] Is host (3.14.102.175) a Worker host (y/n)? [n]: n
[+] Is host (3.14.102.175) an etcd host (y/n)? [n]: y
[+] Override Hostname of host (3.14.102.175) [none]: 
[+] Internal IP of host (3.14.102.175) [none]: 
[+] Docker socket path on host (3.14.102.175) [/var/run/docker.sock]: 
[+] SSH Address of host (2) [none]: 18.220.62.6
[+] SSH Port of host (2) [22]: 
[+] SSH Private Key Path of host (18.220.62.6) [none]: ~/.ssh/onap_rsa
[+] SSH User of host (18.220.62.6) [ubuntu]: 
[+] Is host (18.220.62.6) a Control Plane host (y/n)? [y]: y
[+] Is host (18.220.62.6) a Worker host (y/n)? [n]: n
[+] Is host (18.220.62.6) an etcd host (y/n)? [n]: y
[+] Override Hostname of host (18.220.62.6) [none]: 
[+] Internal IP of host (18.220.62.6) [none]: 
[+] Docker socket path on host (18.220.62.6) [/var/run/docker.sock]: 
[+] SSH Address of host (3) [none]: 18.217.96.12
[+] SSH Port of host (3) [22]: 
[+] SSH Private Key Path of host (18.217.96.12) [none]: ~/.ssh/onap_rsa
[+] SSH User of host (18.217.96.12) [ubuntu]: 
[+] Is host (18.217.96.12) a Control Plane host (y/n)? [y]: y
[+] Is host (18.217.96.12) a Worker host (y/n)? [n]: n
[+] Is host (18.217.96.12) an etcd host (y/n)? [n]: y
[+] Override Hostname of host (18.217.96.12) [none]: 
[+] Internal IP of host (18.217.96.12) [none]: 
[+] Docker socket path on host (18.217.96.12) [/var/run/docker.sock]: 
[+] SSH Address of host (4) [none]: 18.188.214.137
[+] SSH Port of host (4) [22]: 
[+] SSH Private Key Path of host (18.188.214.137) [none]: ~/.ssh/onap_rsa
[+] SSH User of host (18.188.214.137) [ubuntu]: 
[+] Is host (18.188.214.137) a Control Plane host (y/n)? [y]: n
[+] Is host (18.188.214.137) a Worker host (y/n)? [n]: y
[+] Is host (18.188.214.137) an etcd host (y/n)? [n]: n
[+] Override Hostname of host (18.188.214.137) [none]: 
[+] Internal IP of host (18.188.214.137) [none]: 
[+] Docker socket path on host (18.188.214.137) [/var/run/docker.sock]: 
[+] SSH Address of host (5) [none]: 18.220.70.253
[+] SSH Port of host (5) [22]: 
[+] SSH Private Key Path of host (18.220.70.253) [none]: ~/.ssh/onap_rsa
[+] SSH User of host (18.220.70.253) [ubuntu]: 
[+] Is host (18.220.70.253) a Control Plane host (y/n)? [y]: n
[+] Is host (18.220.70.253) a Worker host (y/n)? [n]: y
[+] Is host (18.220.70.253) an etcd host (y/n)? [n]: n
[+] Override Hostname of host (18.220.70.253) [none]: 
[+] Internal IP of host (18.220.70.253) [none]: 
[+] Docker socket path on host (18.220.70.253) [/var/run/docker.sock]: 
[+] SSH Address of host (6) [none]: 3.17.76.33
[+] SSH Port of host (6) [22]: 
[+] SSH Private Key Path of host (3.17.76.33) [none]: ~/.ssh/onap_rsa
[+] SSH User of host (3.17.76.33) [ubuntu]: 
[+] Is host (3.17.76.33) a Control Plane host (y/n)? [y]: n
[+] Is host (3.17.76.33) a Worker host (y/n)? [n]: y
[+] Is host (3.17.76.33) an etcd host (y/n)? [n]: n
[+] Override Hostname of host (3.17.76.33) [none]: 
[+] Internal IP of host (3.17.76.33) [none]: 
[+] Docker socket path on host (3.17.76.33) [/var/run/docker.sock]: 
[+] Network Plugin Type (flannel, calico, weave, canal) [canal]: 
[+] Authentication Strategy [x509]: 
[+] Authorization Mode (rbac, none) [rbac]: 
[+] Kubernetes Docker image [rancher/hyperkube:v1.13.5-rancher1]: 
[+] Cluster domain [cluster.local]: 
[+] Service Cluster IP Range [10.43.0.0/16]: 
[+] Enable PodSecurityPolicy [n]: 
[+] Cluster Network CIDR [10.42.0.0/16]: 
[+] Cluster DNS Service IP [10.43.0.10]: 
[+] Add addon manifest URLs or YAML files [no]:

# new
[+] Cluster domain [cluster.local]: 

ubuntu@ip-172-31-38-182:~$ sudo rke up
INFO[0000] Initiating Kubernetes cluster                
INFO[0000] [certificates] Generating CA kubernetes certificates 
INFO[0000] [certificates] Generating Kubernetes API server aggregation layer requestheader client CA certificates 
INFO[0000] [certificates] Generating Kubernetes API server certificates 
INFO[0000] [certificates] Generating Service account token key 
INFO[0000] [certificates] Generating etcd-3.14.102.175 certificate and key 
INFO[0000] [certificates] Generating etcd-18.220.62.6 certificate and key 
INFO[0001] [certificates] Generating etcd-18.217.96.12 certificate and key 
INFO[0001] [certificates] Generating Kube Controller certificates 
INFO[0001] [certificates] Generating Kube Scheduler certificates 
INFO[0001] [certificates] Generating Kube Proxy certificates 
INFO[0001] [certificates] Generating Node certificate   
INFO[0001] [certificates] Generating admin certificates and kubeconfig 
INFO[0001] [certificates] Generating Kubernetes API server proxy client certificates 
INFO[0002] Successfully Deployed state file at [./cluster.rkestate] 
INFO[0002] Building Kubernetes cluster                  
INFO[0002] [dialer] Setup tunnel for host [3.14.102.175] 
INFO[0002] [dialer] Setup tunnel for host [18.188.214.137] 
INFO[0002] [dialer] Setup tunnel for host [18.220.70.253] 
INFO[0002] [dialer] Setup tunnel for host [18.220.62.6] 
INFO[0002] [dialer] Setup tunnel for host [18.217.96.12] 
INFO[0002] [dialer] Setup tunnel for host [3.17.76.33]  
INFO[0002] [network] Deploying port listener containers 
INFO[0002] [network] Pulling image [rancher/rke-tools:v0.1.27] on host [18.220.62.6] 
INFO[0002] [network] Pulling image [rancher/rke-tools:v0.1.27] on host [3.14.102.175] 
INFO[0002] [network] Pulling image [rancher/rke-tools:v0.1.27] on host [18.217.96.12] 
INFO[0006] [network] Successfully pulled image [rancher/rke-tools:v0.1.27] on host [18.220.62.6] 
INFO[0006] [network] Successfully pulled image [rancher/rke-tools:v0.1.27] on host [18.217.96.12] 
INFO[0006] [network] Successfully pulled image [rancher/rke-tools:v0.1.27] on host [3.14.102.175] 
INFO[0007] [network] Successfully started [rke-etcd-port-listener] container on host [18.220.62.6] 
INFO[0007] [network] Successfully started [rke-etcd-port-listener] container on host [18.217.96.12] 
INFO[0007] [network] Successfully started [rke-etcd-port-listener] container on host [3.14.102.175] 
INFO[0008] [network] Successfully started [rke-cp-port-listener] container on host [18.217.96.12] 
INFO[0008] [network] Successfully started [rke-cp-port-listener] container on host [18.220.62.6] 
INFO[0008] [network] Successfully started [rke-cp-port-listener] container on host [3.14.102.175] 
INFO[0008] [network] Pulling image [rancher/rke-tools:v0.1.27] on host [18.188.214.137] 
INFO[0008] [network] Pulling image [rancher/rke-tools:v0.1.27] on host [18.220.70.253] 
INFO[0008] [network] Pulling image [rancher/rke-tools:v0.1.27] on host [3.17.76.33] 
INFO[0012] [network] Successfully pulled image [rancher/rke-tools:v0.1.27] on host [3.17.76.33] 
INFO[0012] [network] Successfully pulled image [rancher/rke-tools:v0.1.27] on host [18.220.70.253] 
INFO[0012] [network] Successfully pulled image [rancher/rke-tools:v0.1.27] on host [18.188.214.137] 
INFO[0013] [network] Successfully started [rke-worker-port-listener] container on host [3.17.76.33] 
INFO[0013] [network] Successfully started [rke-worker-port-listener] container on host [18.220.70.253] 
INFO[0013] [network] Successfully started [rke-worker-port-listener] container on host [18.188.214.137] 
INFO[0013] [network] Port listener containers deployed successfully 
INFO[0013] [network] Running etcd <-> etcd port checks  
INFO[0013] [network] Successfully started [rke-port-checker] container on host [18.220.62.6] 
INFO[0013] [network] Successfully started [rke-port-checker] container on host [3.14.102.175] 
INFO[0013] [network] Successfully started [rke-port-checker] container on host [18.217.96.12] 
INFO[0014] [network] Running control plane -> etcd port checks 
INFO[0014] [network] Successfully started [rke-port-checker] container on host [18.220.62.6] 
INFO[0014] [network] Successfully started [rke-port-checker] container on host [3.14.102.175] 
INFO[0014] [network] Successfully started [rke-port-checker] container on host [18.217.96.12] 
INFO[0014] [network] Running control plane -> worker port checks 
INFO[0015] [network] Successfully started [rke-port-checker] container on host [18.220.62.6] 
INFO[0015] [network] Successfully started [rke-port-checker] container on host [18.217.96.12] 
INFO[0015] [network] Successfully started [rke-port-checker] container on host [3.14.102.175] 
INFO[0015] [network] Running workers -> control plane port checks 
INFO[0015] [network] Successfully started [rke-port-checker] container on host [3.17.76.33] 
INFO[0015] [network] Successfully started [rke-port-checker] container on host [18.220.70.253] 
INFO[0015] [network] Successfully started [rke-port-checker] container on host [18.188.214.137] 
INFO[0016] [network] Checking KubeAPI port Control Plane hosts 
INFO[0016] [network] Removing port listener containers  
INFO[0016] [remove/rke-etcd-port-listener] Successfully removed container on host [18.220.62.6] 
INFO[0016] [remove/rke-etcd-port-listener] Successfully removed container on host [3.14.102.175] 
INFO[0016] [remove/rke-etcd-port-listener] Successfully removed container on host [18.217.96.12] 
INFO[0016] [remove/rke-cp-port-listener] Successfully removed container on host [18.217.96.12] 
INFO[0016] [remove/rke-cp-port-listener] Successfully removed container on host [3.14.102.175] 
INFO[0016] [remove/rke-cp-port-listener] Successfully removed container on host [18.220.62.6] 
INFO[0017] [remove/rke-worker-port-listener] Successfully removed container on host [18.220.70.253] 
INFO[0017] [remove/rke-worker-port-listener] Successfully removed container on host [3.17.76.33] 
INFO[0017] [remove/rke-worker-port-listener] Successfully removed container on host [18.188.214.137] 
INFO[0017] [network] Port listener containers removed successfully 
INFO[0017] [certificates] Deploying kubernetes certificates to Cluster nodes 
INFO[0022] [reconcile] Rebuilding and updating local kube config 
INFO[0022] Successfully Deployed local admin kubeconfig at [./kube_config_cluster.yml] 
INFO[0022] Successfully Deployed local admin kubeconfig at [./kube_config_cluster.yml] 
INFO[0022] Successfully Deployed local admin kubeconfig at [./kube_config_cluster.yml] 
INFO[0022] [certificates] Successfully deployed kubernetes certificates to Cluster nodes 
INFO[0022] [reconcile] Reconciling cluster state        
INFO[0022] [reconcile] This is newly generated cluster  
INFO[0022] Pre-pulling kubernetes images                
INFO[0022] [pre-deploy] Pulling image [rancher/hyperkube:v1.13.5-rancher1] on host [3.14.102.175] 
INFO[0022] [pre-deploy] Pulling image [rancher/hyperkube:v1.13.5-rancher1] on host [18.188.214.137] 
INFO[0022] [pre-deploy] Pulling image [rancher/hyperkube:v1.13.5-rancher1] on host [18.220.70.253] 
INFO[0022] [pre-deploy] Pulling image [rancher/hyperkube:v1.13.5-rancher1] on host [18.217.96.12] 
INFO[0022] [pre-deploy] Pulling image [rancher/hyperkube:v1.13.5-rancher1] on host [3.17.76.33] 
INFO[0022] [pre-deploy] Pulling image [rancher/hyperkube:v1.13.5-rancher1] on host [18.220.62.6] 
INFO[0038] [pre-deploy] Successfully pulled image [rancher/hyperkube:v1.13.5-rancher1] on host [18.220.62.6] 
INFO[0038] [pre-deploy] Successfully pulled image [rancher/hyperkube:v1.13.5-rancher1] on host [18.220.70.253] 
INFO[0039] [pre-deploy] Successfully pulled image [rancher/hyperkube:v1.13.5-rancher1] on host [3.17.76.33] 
INFO[0039] [pre-deploy] Successfully pulled image [rancher/hyperkube:v1.13.5-rancher1] on host [18.217.96.12] 
INFO[0039] [pre-deploy] Successfully pulled image [rancher/hyperkube:v1.13.5-rancher1] on host [18.188.214.137] 
INFO[0039] [pre-deploy] Successfully pulled image [rancher/hyperkube:v1.13.5-rancher1] on host [3.14.102.175] 
INFO[0039] Kubernetes images pulled successfully        
INFO[0039] [etcd] Building up etcd plane..              
INFO[0039] [etcd] Pulling image [rancher/coreos-etcd:v3.2.24-rancher1] on host [3.14.102.175] 
INFO[0041] [etcd] Successfully pulled image [rancher/coreos-etcd:v3.2.24-rancher1] on host [3.14.102.175] 
INFO[0051] [etcd] Successfully started [etcd] container on host [3.14.102.175] 
INFO[0051] [etcd] Saving snapshot [etcd-rolling-snapshots] on host [3.14.102.175] 
INFO[0052] [etcd] Successfully started [etcd-rolling-snapshots] container on host [3.14.102.175] 
INFO[0057] [certificates] Successfully started [rke-bundle-cert] container on host [3.14.102.175] 
INFO[0058] [certificates] successfully saved certificate bundle [/opt/rke/etcd-snapshots//pki.bundle.tar.gz] on host [3.14.102.175] 
INFO[0058] [etcd] Successfully started [rke-log-linker] container on host [3.14.102.175] 
INFO[0058] [remove/rke-log-linker] Successfully removed container on host [3.14.102.175] 
INFO[0058] [etcd] Pulling image [rancher/coreos-etcd:v3.2.24-rancher1] on host [18.220.62.6] 
INFO[0063] [etcd] Successfully pulled image [rancher/coreos-etcd:v3.2.24-rancher1] on host [18.220.62.6] 
INFO[0069] [etcd] Successfully started [etcd] container on host [18.220.62.6] 
INFO[0069] [etcd] Saving snapshot [etcd-rolling-snapshots] on host [18.220.62.6] 
INFO[0069] [etcd] Successfully started [etcd-rolling-snapshots] container on host [18.220.62.6] 
INFO[0075] [certificates] Successfully started [rke-bundle-cert] container on host [18.220.62.6] 
INFO[0075] [certificates] successfully saved certificate bundle [/opt/rke/etcd-snapshots//pki.bundle.tar.gz] on host [18.220.62.6] 
INFO[0076] [etcd] Successfully started [rke-log-linker] container on host [18.220.62.6] 
INFO[0076] [remove/rke-log-linker] Successfully removed container on host [18.220.62.6] 
INFO[0076] [etcd] Pulling image [rancher/coreos-etcd:v3.2.24-rancher1] on host [18.217.96.12] 
INFO[0078] [etcd] Successfully pulled image [rancher/coreos-etcd:v3.2.24-rancher1] on host [18.217.96.12] 
INFO[0078] [etcd] Successfully started [etcd] container on host [18.217.96.12] 
INFO[0078] [etcd] Saving snapshot [etcd-rolling-snapshots] on host [18.217.96.12] 
INFO[0078] [etcd] Successfully started [etcd-rolling-snapshots] container on host [18.217.96.12] 
INFO[0084] [certificates] Successfully started [rke-bundle-cert] container on host [18.217.96.12] 
INFO[0084] [certificates] successfully saved certificate bundle [/opt/rke/etcd-snapshots//pki.bundle.tar.gz] on host [18.217.96.12] 
INFO[0085] [etcd] Successfully started [rke-log-linker] container on host [18.217.96.12] 
INFO[0085] [remove/rke-log-linker] Successfully removed container on host [18.217.96.12] 
INFO[0085] [etcd] Successfully started etcd plane.. Checking etcd cluster health 
INFO[0086] [controlplane] Building up Controller Plane.. 
INFO[0086] [controlplane] Successfully started [kube-apiserver] container on host [18.220.62.6] 
INFO[0086] [healthcheck] Start Healthcheck on service [kube-apiserver] on host [18.220.62.6] 
INFO[0086] [controlplane] Successfully started [kube-apiserver] container on host [3.14.102.175] 
INFO[0086] [healthcheck] Start Healthcheck on service [kube-apiserver] on host [3.14.102.175] 
INFO[0086] [controlplane] Successfully started [kube-apiserver] container on host [18.217.96.12] 
INFO[0086] [healthcheck] Start Healthcheck on service [kube-apiserver] on host [18.217.96.12] 
INFO[0098] [healthcheck] service [kube-apiserver] on host [18.220.62.6] is healthy 
INFO[0099] [healthcheck] service [kube-apiserver] on host [18.217.96.12] is healthy 
INFO[0099] [controlplane] Successfully started [rke-log-linker] container on host [18.220.62.6] 
INFO[0099] [controlplane] Successfully started [rke-log-linker] container on host [18.217.96.12] 
INFO[0099] [remove/rke-log-linker] Successfully removed container on host [18.220.62.6] 
INFO[0099] [healthcheck] service [kube-apiserver] on host [3.14.102.175] is healthy 
INFO[0099] [remove/rke-log-linker] Successfully removed container on host [18.217.96.12] 
INFO[0099] [controlplane] Successfully started [kube-controller-manager] container on host [18.220.62.6] 
INFO[0099] [healthcheck] Start Healthcheck on service [kube-controller-manager] on host [18.220.62.6] 
INFO[0100] [controlplane] Successfully started [kube-controller-manager] container on host [18.217.96.12] 
INFO[0100] [healthcheck] Start Healthcheck on service [kube-controller-manager] on host [18.217.96.12] 
INFO[0100] [controlplane] Successfully started [rke-log-linker] container on host [3.14.102.175] 
INFO[0100] [remove/rke-log-linker] Successfully removed container on host [3.14.102.175] 
INFO[0100] [healthcheck] service [kube-controller-manager] on host [18.220.62.6] is healthy 
INFO[0100] [healthcheck] service [kube-controller-manager] on host [18.217.96.12] is healthy 
INFO[0100] [controlplane] Successfully started [kube-controller-manager] container on host [3.14.102.175] 
INFO[0100] [healthcheck] Start Healthcheck on service [kube-controller-manager] on host [3.14.102.175] 
INFO[0101] [controlplane] Successfully started [rke-log-linker] container on host [18.220.62.6] 
INFO[0101] [controlplane] Successfully started [rke-log-linker] container on host [18.217.96.12] 
INFO[0101] [remove/rke-log-linker] Successfully removed container on host [18.220.62.6] 
INFO[0101] [remove/rke-log-linker] Successfully removed container on host [18.217.96.12] 
INFO[0101] [controlplane] Successfully started [kube-scheduler] container on host [18.220.62.6] 
INFO[0101] [healthcheck] Start Healthcheck on service [kube-scheduler] on host [18.220.62.6] 
INFO[0101] [healthcheck] service [kube-controller-manager] on host [3.14.102.175] is healthy 
INFO[0101] [controlplane] Successfully started [kube-scheduler] container on host [18.217.96.12] 
INFO[0101] [healthcheck] Start Healthcheck on service [kube-scheduler] on host [18.217.96.12] 
INFO[0102] [controlplane] Successfully started [rke-log-linker] container on host [3.14.102.175] 
INFO[0102] [healthcheck] service [kube-scheduler] on host [18.220.62.6] is healthy 
INFO[0102] [remove/rke-log-linker] Successfully removed container on host [3.14.102.175] 
INFO[0102] [healthcheck] service [kube-scheduler] on host [18.217.96.12] is healthy 
INFO[0103] [controlplane] Successfully started [kube-scheduler] container on host [3.14.102.175] 
INFO[0103] [healthcheck] Start Healthcheck on service [kube-scheduler] on host [3.14.102.175] 
INFO[0103] [controlplane] Successfully started [rke-log-linker] container on host [18.220.62.6] 
INFO[0103] [remove/rke-log-linker] Successfully removed container on host [18.220.62.6] 
INFO[0103] [controlplane] Successfully started [rke-log-linker] container on host [18.217.96.12] 
INFO[0103] [remove/rke-log-linker] Successfully removed container on host [18.217.96.12] 
INFO[0103] [healthcheck] service [kube-scheduler] on host [3.14.102.175] is healthy 
INFO[0104] [controlplane] Successfully started [rke-log-linker] container on host [3.14.102.175] 
INFO[0104] [remove/rke-log-linker] Successfully removed container on host [3.14.102.175] 
INFO[0104] [controlplane] Successfully started Controller Plane.. 
INFO[0104] [authz] Creating rke-job-deployer ServiceAccount 
INFO[0104] [authz] rke-job-deployer ServiceAccount created successfully 
INFO[0104] [authz] Creating system:node ClusterRoleBinding 
INFO[0104] [authz] system:node ClusterRoleBinding created successfully 
INFO[0104] Successfully Deployed state file at [./cluster.rkestate] 
INFO[0104] [state] Saving full cluster state to Kubernetes 
INFO[0104] [state] Successfully Saved full cluster state to Kubernetes ConfigMap: cluster-state 
INFO[0104] [worker] Building up Worker Plane..          
INFO[0104] [sidekick] Sidekick container already created on host [18.220.62.6] 
INFO[0104] [sidekick] Sidekick container already created on host [3.14.102.175] 
INFO[0104] [sidekick] Sidekick container already created on host [18.217.96.12] 
INFO[0105] [worker] Successfully started [kubelet] container on host [3.14.102.175] 
INFO[0105] [healthcheck] Start Healthcheck on service [kubelet] on host [3.14.102.175] 
INFO[0105] [worker] Successfully started [kubelet] container on host [18.220.62.6] 
INFO[0105] [healthcheck] Start Healthcheck on service [kubelet] on host [18.220.62.6] 
INFO[0105] [worker] Successfully started [kubelet] container on host [18.217.96.12] 
INFO[0105] [healthcheck] Start Healthcheck on service [kubelet] on host [18.217.96.12] 
INFO[0105] [worker] Successfully started [nginx-proxy] container on host [18.220.70.253] 
INFO[0105] [worker] Successfully started [nginx-proxy] container on host [3.17.76.33] 
INFO[0105] [worker] Successfully started [nginx-proxy] container on host [18.188.214.137] 
INFO[0106] [worker] Successfully started [rke-log-linker] container on host [18.220.70.253] 
INFO[0106] [worker] Successfully started [rke-log-linker] container on host [3.17.76.33] 
INFO[0106] [worker] Successfully started [rke-log-linker] container on host [18.188.214.137] 
INFO[0106] [remove/rke-log-linker] Successfully removed container on host [3.17.76.33] 
INFO[0106] [remove/rke-log-linker] Successfully removed container on host [18.220.70.253] 
INFO[0106] [remove/rke-log-linker] Successfully removed container on host [18.188.214.137] 
INFO[0106] [worker] Successfully started [kubelet] container on host [3.17.76.33] 
INFO[0106] [healthcheck] Start Healthcheck on service [kubelet] on host [3.17.76.33] 
INFO[0107] [worker] Successfully started [kubelet] container on host [18.220.70.253] 
INFO[0107] [healthcheck] Start Healthcheck on service [kubelet] on host [18.220.70.253] 
INFO[0107] [worker] Successfully started [kubelet] container on host [18.188.214.137] 
INFO[0107] [healthcheck] Start Healthcheck on service [kubelet] on host [18.188.214.137] 
INFO[0111] [healthcheck] service [kubelet] on host [18.220.62.6] is healthy 
INFO[0111] [healthcheck] service [kubelet] on host [18.217.96.12] is healthy 
INFO[0111] [healthcheck] service [kubelet] on host [3.14.102.175] is healthy 
INFO[0111] [worker] Successfully started [rke-log-linker] container on host [18.220.62.6] 
INFO[0111] [worker] Successfully started [rke-log-linker] container on host [3.14.102.175] 
INFO[0111] [worker] Successfully started [rke-log-linker] container on host [18.217.96.12] 
INFO[0112] [remove/rke-log-linker] Successfully removed container on host [18.220.62.6] 
INFO[0112] [remove/rke-log-linker] Successfully removed container on host [18.217.96.12] 
INFO[0112] [remove/rke-log-linker] Successfully removed container on host [3.14.102.175] 
INFO[0112] [worker] Successfully started [kube-proxy] container on host [18.220.62.6] 
INFO[0112] [healthcheck] Start Healthcheck on service [kube-proxy] on host [18.220.62.6] 
INFO[0112] [worker] Successfully started [kube-proxy] container on host [18.217.96.12] 
INFO[0112] [healthcheck] Start Healthcheck on service [kube-proxy] on host [18.217.96.12] 
INFO[0112] [worker] Successfully started [kube-proxy] container on host [3.14.102.175] 
INFO[0112] [healthcheck] Start Healthcheck on service [kube-proxy] on host [3.14.102.175] 
INFO[0113] [healthcheck] service [kube-proxy] on host [18.220.62.6] is healthy 
INFO[0113] [healthcheck] service [kube-proxy] on host [18.217.96.12] is healthy 
INFO[0113] [healthcheck] service [kube-proxy] on host [3.14.102.175] is healthy 
INFO[0113] [healthcheck] service [kubelet] on host [18.220.70.253] is healthy 
INFO[0113] [healthcheck] service [kubelet] on host [3.17.76.33] is healthy 
INFO[0113] [healthcheck] service [kubelet] on host [18.188.214.137] is healthy 
INFO[0113] [worker] Successfully started [rke-log-linker] container on host [18.220.62.6] 
INFO[0113] [worker] Successfully started [rke-log-linker] container on host [18.217.96.12] 
INFO[0113] [worker] Successfully started [rke-log-linker] container on host [3.14.102.175] 
INFO[0113] [worker] Successfully started [rke-log-linker] container on host [3.17.76.33] 
INFO[0113] [worker] Successfully started [rke-log-linker] container on host [18.220.70.253] 
INFO[0113] [remove/rke-log-linker] Successfully removed container on host [18.220.62.6] 
INFO[0113] [worker] Successfully started [rke-log-linker] container on host [18.188.214.137] 
INFO[0113] [remove/rke-log-linker] Successfully removed container on host [18.217.96.12] 
INFO[0113] [remove/rke-log-linker] Successfully removed container on host [3.14.102.175] 
INFO[0114] [remove/rke-log-linker] Successfully removed container on host [3.17.76.33] 
INFO[0114] [remove/rke-log-linker] Successfully removed container on host [18.220.70.253] 
INFO[0114] [remove/rke-log-linker] Successfully removed container on host [18.188.214.137] 
INFO[0114] [worker] Successfully started [kube-proxy] container on host [3.17.76.33] 
INFO[0114] [healthcheck] Start Healthcheck on service [kube-proxy] on host [3.17.76.33] 
INFO[0114] [worker] Successfully started [kube-proxy] container on host [18.220.70.253] 
INFO[0114] [healthcheck] Start Healthcheck on service [kube-proxy] on host [18.220.70.253] 
INFO[0114] [worker] Successfully started [kube-proxy] container on host [18.188.214.137] 
INFO[0114] [healthcheck] Start Healthcheck on service [kube-proxy] on host [18.188.214.137] 
INFO[0114] [healthcheck] service [kube-proxy] on host [18.220.70.253] is healthy 
INFO[0114] [healthcheck] service [kube-proxy] on host [3.17.76.33] is healthy 
INFO[0115] [healthcheck] service [kube-proxy] on host [18.188.214.137] is healthy 
INFO[0115] [worker] Successfully started [rke-log-linker] container on host [18.220.70.253] 
INFO[0115] [worker] Successfully started [rke-log-linker] container on host [3.17.76.33] 
INFO[0115] [worker] Successfully started [rke-log-linker] container on host [18.188.214.137] 
INFO[0115] [remove/rke-log-linker] Successfully removed container on host [18.220.70.253] 
INFO[0115] [remove/rke-log-linker] Successfully removed container on host [3.17.76.33] 
INFO[0115] [remove/rke-log-linker] Successfully removed container on host [18.188.214.137] 
INFO[0115] [worker] Successfully started Worker Plane.. 
INFO[0116] [cleanup] Successfully started [rke-log-cleaner] container on host [18.188.214.137] 
INFO[0116] [cleanup] Successfully started [rke-log-cleaner] container on host [3.17.76.33] 
INFO[0116] [cleanup] Successfully started [rke-log-cleaner] container on host [18.220.70.253] 
INFO[0116] [cleanup] Successfully started [rke-log-cleaner] container on host [18.220.62.6] 
INFO[0116] [cleanup] Successfully started [rke-log-cleaner] container on host [18.217.96.12] 
INFO[0116] [cleanup] Successfully started [rke-log-cleaner] container on host [3.14.102.175] 
INFO[0116] [remove/rke-log-cleaner] Successfully removed container on host [3.17.76.33] 
INFO[0116] [remove/rke-log-cleaner] Successfully removed container on host [18.188.214.137] 
INFO[0116] [remove/rke-log-cleaner] Successfully removed container on host [18.220.62.6] 
INFO[0116] [remove/rke-log-cleaner] Successfully removed container on host [18.220.70.253] 
INFO[0116] [remove/rke-log-cleaner] Successfully removed container on host [18.217.96.12] 
INFO[0116] [remove/rke-log-cleaner] Successfully removed container on host [3.14.102.175] 
INFO[0116] [sync] Syncing nodes Labels and Taints       
INFO[0117] [sync] Successfully synced nodes Labels and Taints 
INFO[0117] [network] Setting up network plugin: canal   
INFO[0117] [addons] Saving ConfigMap for addon rke-network-plugin to Kubernetes 
INFO[0117] [addons] Successfully saved ConfigMap for addon rke-network-plugin to Kubernetes 
INFO[0117] [addons] Executing deploy job rke-network-plugin 
INFO[0122] [addons] Setting up kube-dns                 
INFO[0122] [addons] Saving ConfigMap for addon rke-kube-dns-addon to Kubernetes 
INFO[0122] [addons] Successfully saved ConfigMap for addon rke-kube-dns-addon to Kubernetes 
INFO[0122] [addons] Executing deploy job rke-kube-dns-addon 
INFO[0127] [addons] kube-dns deployed successfully      
INFO[0127] [dns] DNS provider kube-dns deployed successfully 
INFO[0127] [addons] Setting up Metrics Server           
INFO[0127] [addons] Saving ConfigMap for addon rke-metrics-addon to Kubernetes 
INFO[0127] [addons] Successfully saved ConfigMap for addon rke-metrics-addon to Kubernetes 
INFO[0127] [addons] Executing deploy job rke-metrics-addon 
INFO[0132] [addons] Metrics Server deployed successfully 
INFO[0132] [ingress] Setting up nginx ingress controller 
INFO[0132] [addons] Saving ConfigMap for addon rke-ingress-controller to Kubernetes 
INFO[0132] [addons] Successfully saved ConfigMap for addon rke-ingress-controller to Kubernetes 
INFO[0132] [addons] Executing deploy job rke-ingress-controller 
INFO[0137] [ingress] ingress controller nginx deployed successfully 
INFO[0137] [addons] Setting up user addons              
INFO[0137] [addons] no user addons defined              
INFO[0137] Finished building Kubernetes cluster successfully

# finish kubectl install
sudo curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.13.5/bin/linux/amd64/kubectl
sudo chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
sudo mkdir ~/.kube

# finish helm
#https://github.com/helm/helm/releases
# there is no helm 2.12.5 - last is 2.12.3 - trying 2.13.1
wget http://storage.googleapis.com/kubernetes-helm/helm-v2.13.1-linux-amd64.tar.gz
sudo tar -zxvf helm-v2.13.1-linux-amd64.tar.gz
sudo cp kube_config_cluster.yml ~/.kube/config
sudo chmod 777 ~/.kube/config

# test
ubuntu@ip-172-31-38-182:~$ kubectl get pods --all-namespaces -o wide
NAMESPACE       NAME                                    READY   STATUS    RESTARTS   AGE   IP               NODE             NOMINATED NODE   READINESS GATES
ingress-nginx   default-http-backend-78fccfc5d9-f6z25   1/1     Running   0          17m   10.42.5.2        18.220.70.253    <none>           <none>
ingress-nginx   nginx-ingress-controller-2zxs7          1/1     Running   0          17m   18.188.214.137   18.188.214.137   <none>           <none>
ingress-nginx   nginx-ingress-controller-6b7gs          1/1     Running   0          17m   3.17.76.33       3.17.76.33       <none>           <none>
ingress-nginx   nginx-ingress-controller-nv4qg          1/1     Running   0          17m   18.220.70.253    18.220.70.253    <none>           <none>
kube-system     canal-48579                             2/2     Running   0          17m   18.220.62.6      18.220.62.6      <none>           <none>
kube-system     canal-6skkm                             2/2     Running   0          17m   18.188.214.137   18.188.214.137   <none>           <none>
kube-system     canal-9xmxv                             2/2     Running   0          17m   18.217.96.12     18.217.96.12     <none>           <none>
kube-system     canal-c582x                             2/2     Running   0          17m   18.220.70.253    18.220.70.253    <none>           <none>
kube-system     canal-whbck                             2/2     Running   0          17m   3.14.102.175     3.14.102.175     <none>           <none>
kube-system     canal-xbbnh                             2/2     Running   0          17m   3.17.76.33       3.17.76.33       <none>           <none>
kube-system     kube-dns-58bd5b8dd7-6mcm7               3/3     Running   0          17m   10.42.3.3        3.17.76.33       <none>           <none>
kube-system     kube-dns-58bd5b8dd7-cd5dg               3/3     Running   0          17m   10.42.4.2        18.188.214.137   <none>           <none>
kube-system     kube-dns-autoscaler-77bc5fd84-p4zfd     1/1     Running   0          17m   10.42.3.2        3.17.76.33       <none>           <none>
kube-system     metrics-server-58bd5dd8d7-kftjn         1/1     Running   0          17m   10.42.3.4        3.17.76.33       <none>           <none>


# install tiller

ubuntu@ip-172-31-38-182:~$ kubectl -n kube-system create serviceaccount tiller
serviceaccount/tiller created
ubuntu@ip-172-31-38-182:~$ kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
clusterrolebinding.rbac.authorization.k8s.io/tiller created
ubuntu@ip-172-31-38-182:~$ helm init --service-account tiller
Creating /home/ubuntu/.helm 
Creating /home/ubuntu/.helm/repository 
Creating /home/ubuntu/.helm/repository/cache 
Creating /home/ubuntu/.helm/repository/local 
Creating /home/ubuntu/.helm/plugins 
Creating /home/ubuntu/.helm/starters 
Creating /home/ubuntu/.helm/cache/archive 
Creating /home/ubuntu/.helm/repository/repositories.yaml 
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com 
Adding local repo with URL: http://127.0.0.1:8879/charts 
$HELM_HOME has been configured at /home/ubuntu/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!
ubuntu@ip-172-31-38-182:~$ kubectl -n kube-system  rollout status deploy/tiller-deploy
deployment "tiller-deploy" successfully rolled out
ubuntu@ip-172-31-38-182:~$ sudo helm init --upgrade
$HELM_HOME has been configured at /home/ubuntu/.helm.

Tiller (the Helm server-side component) has been upgraded to the current version.
Happy Helming!


ubuntu@ip-172-31-38-182:~$ sudo helm version
Client: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
ubuntu@ip-172-31-38-182:~$ sudo helm serve &
[1] 706
ubuntu@ip-172-31-38-182:~$ Regenerating index. This may take a moment.
Now serving you on 127.0.0.1:8879

ubuntu@ip-172-31-38-182:~$ sudo helm list
ubuntu@ip-172-31-38-182:~$ sudo helm repo add local http://127.0.0.1:8879
"local" has been added to your repositories
ubuntu@ip-172-31-38-182:~$  sudo helm repo list
NAME  	URL                                             
stable	https://kubernetes-charts.storage.googleapis.com
local 	http://127.0.0.1:8879                           

ubuntu@ip-172-31-38-182:~$ kubectl get nodes -o wide
NAME             STATUS   ROLES               AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION    CONTAINER-RUNTIME
18.188.214.137   Ready    worker              22m   v1.13.5   18.188.214.137   <none>        Ubuntu 18.04.1 LTS   4.15.0-1021-aws   docker://18.9.5
18.217.96.12     Ready    controlplane,etcd   22m   v1.13.5   18.217.96.12     <none>        Ubuntu 18.04.1 LTS   4.15.0-1021-aws   docker://18.9.5
18.220.62.6      Ready    controlplane,etcd   22m   v1.13.5   18.220.62.6      <none>        Ubuntu 18.04.1 LTS   4.15.0-1021-aws   docker://18.9.5
18.220.70.253    Ready    worker              22m   v1.13.5   18.220.70.253    <none>        Ubuntu 18.04.1 LTS   4.15.0-1021-aws   docker://18.9.5
3.14.102.175     Ready    controlplane,etcd   22m   v1.13.5   3.14.102.175     <none>        Ubuntu 18.04.1 LTS   4.15.0-1021-aws   docker://18.9.5
3.17.76.33       Ready    worker              22m   v1.13.5   3.17.76.33       <none>        Ubuntu 18.04.1 LTS   4.15.0-1021-aws   docker://18.9.5
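Once the nodes report Ready, a quick sanity pass over the cluster is worth doing before installing anything on top. The commands below are echoed rather than executed so the sketch is side-effect free; run them by hand on the admin host.

```shell
#!/bin/bash
# Post-install sanity checks (sketch) - run these on the rke/admin host.
CHECKS=(
  "kubectl get nodes -o wide"          # every node should report Ready
  "kubectl get pods --all-namespaces"  # kube-dns, canal, ingress, tiller all Running
  "helm version"                       # client and tiller (server) versions should match
)
for check in "${CHECKS[@]}"; do
  echo "$check"
done
```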


# install make
sudo apt-get install make -y
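With make installed and the local helm repo serving on :8879, the OOM charts can be packaged into it. A sketch, assuming oom was cloned into $HOME as in the quickstart; the cd.sh script used below performs this step automatically, so this is only needed for a manual build.

```shell
#!/bin/bash
# Build the OOM helm charts into the local 'helm serve' repo (sketch).
OOM_KUBERNETES_DIR="$HOME/oom/kubernetes"   # assumption: oom cloned into $HOME
BUILD_CMD="cd $OOM_KUBERNETES_DIR && sudo make all"
echo "$BUILD_CMD"   # echo only - drop the echo to build for real (takes several minutes)
```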

# install nfs/efs
ubuntu@ip-172-31-38-182:~$ sudo apt-get install nfs-common -y
ubuntu@ip-172-31-38-182:~$ sudo mkdir /dockerdata-nfs
ubuntu@ip-172-31-38-182:~$ sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport fs-5fd6ab26.efs.us-east-2.amazonaws.com:/ /dockerdata-nfs
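The mount above does not survive a reboot. One way to persist it is an /etc/fstab entry built from the same options; a sketch, where the fs-5fd6ab26 filesystem ID and us-east-2 region are from this example deployment — substitute your own.

```shell
#!/bin/bash
# Persist the EFS mount across reboots via /etc/fstab (sketch).
EFS_DNS="fs-5fd6ab26.efs.us-east-2.amazonaws.com"   # example EFS from this install
MOUNT_POINT="/dockerdata-nfs"
NFS_OPTS="nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport,_netdev"
FSTAB_LINE="$EFS_DNS:/ $MOUNT_POINT nfs4 $NFS_OPTS 0 0"
echo "$FSTAB_LINE"
# to apply (needs root): echo "$FSTAB_LINE" | sudo tee -a /etc/fstab && sudo mount -a
```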



# deploy onap via cd.sh
sudo nohup ./cd.sh -b master -e onap -p false -n nexus3.onap.org:10001 -f false -s 600 -c false -d false -w false -r false &

# check the pods
ubuntu@ip-172-31-38-182:~$ kubectl get pods --all-namespaces -o wide
NAMESPACE       NAME                                                     READY   STATUS             RESTARTS   AGE     IP               NODE             NOMINATED NODE   READINESS GATES
ingress-nginx   default-http-backend-78fccfc5d9-f6z25                    1/1     Running            0          103m    10.42.5.2        18.220.70.253    <none>           <none>
ingress-nginx   nginx-ingress-controller-2zxs7                           1/1     Running            0          103m    18.188.214.137   18.188.214.137   <none>           <none>
ingress-nginx   nginx-ingress-controller-6b7gs                           1/1     Running            0          103m    3.17.76.33       3.17.76.33       <none>           <none>
ingress-nginx   nginx-ingress-controller-nv4qg                           1/1     Running            0          103m    18.220.70.253    18.220.70.253    <none>           <none>
kube-system     canal-48579                                              2/2     Running            0          103m    18.220.62.6      18.220.62.6      <none>           <none>
kube-system     canal-6skkm                                              2/2     Running            0          103m    18.188.214.137   18.188.214.137   <none>           <none>
kube-system     canal-9xmxv                                              2/2     Running            0          103m    18.217.96.12     18.217.96.12     <none>           <none>
kube-system     canal-c582x                                              2/2     Running            0          103m    18.220.70.253    18.220.70.253    <none>           <none>
kube-system     canal-whbck                                              2/2     Running            0          103m    3.14.102.175     3.14.102.175     <none>           <none>
kube-system     canal-xbbnh                                              2/2     Running            0          103m    3.17.76.33       3.17.76.33       <none>           <none>
kube-system     kube-dns-58bd5b8dd7-6mcm7                                3/3     Running            0          103m    10.42.3.3        3.17.76.33       <none>           <none>
kube-system     kube-dns-58bd5b8dd7-cd5dg                                3/3     Running            0          103m    10.42.4.2        18.188.214.137   <none>           <none>
kube-system     kube-dns-autoscaler-77bc5fd84-p4zfd                      1/1     Running            0          103m    10.42.3.2        3.17.76.33       <none>           <none>
kube-system     metrics-server-58bd5dd8d7-kftjn                          1/1     Running            0          103m    10.42.3.4        3.17.76.33       <none>           <none>
kube-system     tiller-deploy-5f4fc5bcc6-gc4tc                           1/1     Running            0          84m     10.42.5.3        18.220.70.253    <none>           <none>
onap            onap-aai-aai-587cb79c6d-mzpbs                            0/1     Init:0/1           1          13m     10.42.5.17       18.220.70.253    <none>           <none>
onap            onap-aai-aai-babel-8c755bcfc-kmzdm                       2/2     Running            0          29m     10.42.5.8        18.220.70.253    <none>           <none>
onap            onap-aai-aai-champ-78b9d7f68b-98tm9                      0/2     Init:0/1           2          29m     10.42.3.6        3.17.76.33       <none>           <none>
onap            onap-aai-aai-data-router-64fcfbc5bb-wkkvz                1/2     CrashLoopBackOff   9          29m     10.42.5.7        18.220.70.253    <none>           <none>
onap            onap-aai-aai-elasticsearch-6dcf5d9966-j7z67              1/1     Running            0          29m     10.42.4.4        18.188.214.137   <none>           <none>
onap            onap-aai-aai-gizmo-5bddb87589-zn8pl                      2/2     Running            0          29m     10.42.4.3        18.188.214.137   <none>           <none>
onap            onap-aai-aai-graphadmin-774f9d698f-f8lwv                 0/2     Init:0/1           2          29m     10.42.5.4        18.220.70.253    <none>           <none>
onap            onap-aai-aai-graphadmin-create-db-schema-94q4l           0/1     Init:Error         0          18m     10.42.4.16       18.188.214.137   <none>           <none>
onap            onap-aai-aai-graphadmin-create-db-schema-s42pq           0/1     Init:0/1           0          7m54s   10.42.5.20       18.220.70.253    <none>           <none>
onap            onap-aai-aai-graphadmin-create-db-schema-tsvcw           0/1     Init:Error         0          29m     10.42.4.5        18.188.214.137   <none>           <none>
onap            onap-aai-aai-modelloader-845fc684bd-r7mdw                2/2     Running            0          29m     10.42.4.6        18.188.214.137   <none>           <none>
onap            onap-aai-aai-resources-67f8dfcbdb-kz6cp                  0/2     Init:0/1           2          29m     10.42.5.11       18.220.70.253    <none>           <none>
onap            onap-aai-aai-schema-service-6c56b45b7c-7zlfz             2/2     Running            0          29m     10.42.3.7        3.17.76.33       <none>           <none>
onap            onap-aai-aai-search-data-5d8d7759b8-flxwj                2/2     Running            0          29m     10.42.3.9        3.17.76.33       <none>           <none>
onap            onap-aai-aai-sparky-be-8444df749c-mzc2n                  0/2     Init:0/1           0          29m     10.42.4.11       18.188.214.137   <none>           <none>

scp your private key to the box - ideally to ~/.ssh - and chmod 400 it; make sure the matching public key is in authorized_keys

Elastic Reserved IP

Get a VIP or Elastic IP (EIP) and assign it to your VM so the host keeps a static public address.
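On AWS this is a two-step CLI call; a sketch, where both IDs below are placeholders, not values from this install.

```shell
#!/bin/bash
# Associate an Elastic IP with the RKE VM (sketch - placeholder IDs).
ALLOCATION_ID="eipalloc-EXAMPLE"   # from 'aws ec2 allocate-address'
INSTANCE_ID="i-EXAMPLE"            # the RKE host instance
EIP_CMD="aws ec2 associate-address --allocation-id $ALLOCATION_ID --instance-id $INSTANCE_ID"
echo "$EIP_CMD"   # echo only - drop the echo to run against your account
```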

generate cluster.yml - optional

cluster.yml will be generated by the rke_setup.sh script

Code Block
themeMidnight
Azure config - no need to hand-build the yml.
Watch the paths of your two keys.
Also, don't add an "addon" unless you actually have one - otherwise the addon config job will fail.

{noformat}
ubuntu@a-rke:~$ rke config --name cluster.yml
[+] Cluster Level SSH Private Key Path [~/.ssh/id_rsa]: ~/.ssh/onap_rsa
[+] Number of Hosts [1]: 
[+] SSH Address of host (1) [none]: rke.onap.cloud
[+] SSH Port of host (1) [22]: 
[+] SSH Private Key Path of host (rke.onap.cloud) [none]: ~/.ssh/onap_rsa
[+] SSH User of host (rke.onap.cloud) [ubuntu]: 
[+] Is host (rke.onap.cloud) a Control Plane host (y/n)? [y]: y
[+] Is host (rke.onap.cloud) a Worker host (y/n)? [n]: y
[+] Is host (rke.onap.cloud) an etcd host (y/n)? [n]: y
[+] Override Hostname of host (rke.onap.cloud) [none]: 
[+] Internal IP of host (rke.onap.cloud) [none]: 
[+] Docker socket path on host (rke.onap.cloud) [/var/run/docker.sock]: 
[+] Network Plugin Type (flannel, calico, weave, canal) [canal]: 
[+] Authentication Strategy [x509]: 
[+] Authorization Mode (rbac, none) [rbac]: 
[+] Kubernetes Docker image [rancher/hyperkube:v1.11.6-rancher1]: 
[+] Cluster domain [cluster.local]: 
[+] Service Cluster IP Range [10.43.0.0/16]: 
[+] Enable PodSecurityPolicy [n]: 
[+] Cluster Network CIDR [10.42.0.0/16]: 
[+] Cluster DNS Service IP [10.43.0.10]: 
[+] Add addon manifest URLs or YAML files [no]: no
ubuntu@a-rke:~$ sudo cat cluster.yml 
# If you intened to deploy Kubernetes in an air-gapped environment,
# please consult the documentation on how to configure custom RKE images.
nodes:
- address: rke.onap.cloud
  port: "22"
  internal_address: ""
  role:
  - controlplane
  - worker
  - etcd
  hostname_override: ""
  user: ubuntu
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/onap_rsa
  labels: {}
services:
  etcd:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    external_urls: []
    ca_cert: ""
    cert: ""
    key: ""
    path: ""
    snapshot: null
    retention: ""
    creation: ""
  kube-api:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    service_cluster_ip_range: 10.43.0.0/16
    service_node_port_range: ""
    pod_security_policy: false
  kube-controller:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    cluster_cidr: 10.42.0.0/16
    service_cluster_ip_range: 10.43.0.0/16
  scheduler:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
  kubelet:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    cluster_domain: cluster.local
    infra_container_image: ""
    cluster_dns_server: 10.43.0.10
    fail_swap_on: false
  kubeproxy:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
network:
  plugin: canal
  options: {}
authentication:
  strategy: x509
  options: {}
  sans: []
system_images:
  etcd: rancher/coreos-etcd:v3.2.18
  alpine: rancher/rke-tools:v0.1.15
  nginx_proxy: rancher/rke-tools:v0.1.15
  cert_downloader: rancher/rke-tools:v0.1.15
  kubernetes_services_sidecar: rancher/rke-tools:v0.1.15
  kubedns: rancher/k8s-dns-kube-dns-amd64:1.14.10
  dnsmasq: rancher/k8s-dns-dnsmasq-nanny-amd64:1.14.10
  kubedns_sidecar: rancher/k8s-dns-sidecar-amd64:1.14.10
  kubedns_autoscaler: rancher/cluster-proportional-autoscaler-amd64:1.0.0
  kubernetes: rancher/hyperkube:v1.11.6-rancher1
  flannel: rancher/coreos-flannel:v0.10.0
  flannel_cni: rancher/coreos-flannel-cni:v0.3.0
  calico_node: rancher/calico-node:v3.1.3
  calico_cni: rancher/calico-cni:v3.1.3
  calico_controllers: ""
  calico_ctl: rancher/calico-ctl:v2.0.0
  canal_node: rancher/calico-node:v3.1.3
  canal_cni: rancher/calico-cni:v3.1.3
  canal_flannel: rancher/coreos-flannel:v0.10.0
  wave_node: weaveworks/weave-kube:2.1.2
  weave_cni: weaveworks/weave-npc:2.1.2
  pod_infra_container: rancher/pause-amd64:3.1
  ingress: rancher/nginx-ingress-controller:0.16.2-rancher1
  ingress_backend: rancher/nginx-ingress-controller-defaultbackend:1.4
  metrics_server: rancher/metrics-server-amd64:v0.2.1
ssh_key_path: ~/.ssh/onap_rsa
ssh_agent_auth: false
authorization:
  mode: rbac
  options: {}
ignore_docker_version: false
kubernetes_version: ""
private_registries: []
ingress:
  provider: ""
  options: {}
  node_selector: {}
  extra_args: {}
cluster_name: ""
cloud_provider:
  name: ""
prefix_path: ""
addon_job_timeout: 0
bastion_host:
  address: ""
  port: ""
  user: ""
  ssh_key: ""
  ssh_key_path: ""
monitoring:
  provider: ""
  options: {}
{noformat}
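With cluster.yml in hand, bringing the cluster up and pointing kubectl at it is two commands; a sketch — rke_setup.sh automates the same steps.

```shell
#!/bin/bash
# Bring the cluster up from the generated cluster.yml (sketch).
RKE_CONFIG="cluster.yml"
KUBECONFIG_OUT="kube_config_cluster.yml"   # written by 'rke up' beside cluster.yml
echo "sudo rke up --config $RKE_CONFIG"
echo "export KUBECONFIG=\$PWD/$KUBECONFIG_OUT"
```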

Kubernetes Single Node Developer Installation

Code Block
themeMidnight
sudo ./rke_setup.sh -b master -s localhost -e onap -k onap_rsa -l ubuntu

Kubernetes HA Cluster Production Installation

Design Issues

DI 20190225-1: RKE/Docker version pair

As of 20190215, RKE 0.1.16 supports Docker 18.06-ce (and 18.09 non-ce), up from 0.1.15, which supported 17.03.

https://github.com/docker/docker-ce/releases/tag/v18.06.3-ce

https://github.com/rancher/rke/releases/tag/v0.1.16

Code Block
themeMidnight
ubuntu@a-rke:~$ sudo rke up
INFO[0000] Building Kubernetes cluster                  
INFO[0000] [dialer] Setup tunnel for host [rke.onap.cloud] 
FATA[0000] Unsupported Docker version found [18.06.3-ce], supported versions are [1.11.x 1.12.x 1.13.x 17.03.x] 
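One way to avoid the version-mismatch fatal above is to install the Docker release RKE 0.1.16 expects (18.06) instead of whatever is latest. A sketch; verify the pinned URL/package version against the RKE release notes for your version.

```shell
#!/bin/bash
# Install the Docker release matching RKE 0.1.16 (sketch - verify against RKE release notes).
DOCKER_INSTALL_URL="https://releases.rancher.com/install-docker/18.06.sh"
echo "curl $DOCKER_INSTALL_URL | sudo sh"
# or pin via apt on Ubuntu:
#   sudo apt-get install -y docker-ce=18.06.3~ce~3-0~ubuntu
```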

DI 20190225-2: RKE upgrade from 0.1.15 to 0.1.16 - not working

Workaround: do an rke remove, regenerate the yaml (or hand-upgrade the image versions in it), then rke up

Code Block
themeMidnight
ubuntu@a-rke:~$ sudo rke remove
Are you sure you want to remove Kubernetes cluster [y/n]: y
INFO[0002] Tearing down Kubernetes cluster              
INFO[0002] [dialer] Setup tunnel for host [rke.onap.cloud] 
INFO[0002] [worker] Tearing down Worker Plane..         
INFO[0002] [remove/kubelet] Successfully removed container on host [rke.onap.cloud] 
INFO[0003] [remove/kube-proxy] Successfully removed container on host [rke.onap.cloud] 
INFO[0003] [remove/service-sidekick] Successfully removed container on host [rke.onap.cloud] 
INFO[0003] [worker] Successfully tore down Worker Plane.. 
INFO[0003] [controlplane] Tearing down the Controller Plane.. 
INFO[0003] [remove/kube-apiserver] Successfully removed container on host [rke.onap.cloud] 
INFO[0003] [remove/kube-controller-manager] Successfully removed container on host [rke.onap.cloud] 
INFO[0004] [remove/kube-scheduler] Successfully removed container on host [rke.onap.cloud] 
INFO[0004] [controlplane] Host [rke.onap.cloud] is already a worker host, skipping delete kubelet and kubeproxy. 
INFO[0004] [controlplane] Successfully tore down Controller Plane.. 
INFO[0004] [etcd] Tearing down etcd plane..             
INFO[0004] [remove/etcd] Successfully removed container on host [rke.onap.cloud] 
INFO[0004] [etcd] Successfully tore down etcd plane..   
INFO[0004] [hosts] Cleaning up host [rke.onap.cloud]    
INFO[0004] [hosts] Cleaning up host [rke.onap.cloud]    
INFO[0004] [hosts] Running cleaner container on host [rke.onap.cloud] 
INFO[0005] [kube-cleaner] Successfully started [kube-cleaner] container on host [rke.onap.cloud] 
INFO[0005] [hosts] Removing cleaner container on host [rke.onap.cloud] 
INFO[0005] [hosts] Removing dead container logs on host [rke.onap.cloud] 
INFO[0006] [cleanup] Successfully started [rke-log-cleaner] container on host [rke.onap.cloud] 
INFO[0006] [remove/rke-log-cleaner] Successfully removed container on host [rke.onap.cloud] 
INFO[0006] [hosts] Successfully cleaned up host [rke.onap.cloud] 
INFO[0006] [hosts] Cleaning up host [rke.onap.cloud]    
INFO[0006] [hosts] Cleaning up host [rke.onap.cloud]    
INFO[0006] [hosts] Running cleaner container on host [rke.onap.cloud] 
INFO[0007] [kube-cleaner] Successfully started [kube-cleaner] container on host [rke.onap.cloud] 
INFO[0008] [hosts] Removing cleaner container on host [rke.onap.cloud] 
INFO[0008] [hosts] Removing dead container logs on host [rke.onap.cloud] 
INFO[0008] [cleanup] Successfully started [rke-log-cleaner] container on host [rke.onap.cloud] 
INFO[0009] [remove/rke-log-cleaner] Successfully removed container on host [rke.onap.cloud] 
INFO[0009] [hosts] Successfully cleaned up host [rke.onap.cloud] 
INFO[0009] [hosts] Cleaning up host [rke.onap.cloud]    
INFO[0009] [hosts] Cleaning up host [rke.onap.cloud]    
INFO[0009] [hosts] Running cleaner container on host [rke.onap.cloud] 
INFO[0010] [kube-cleaner] Successfully started [kube-cleaner] container on host [rke.onap.cloud] 
INFO[0010] [hosts] Removing cleaner container on host [rke.onap.cloud] 
INFO[0010] [hosts] Removing dead container logs on host [rke.onap.cloud] 
INFO[0011] [cleanup] Successfully started [rke-log-cleaner] container on host [rke.onap.cloud] 
INFO[0011] [remove/rke-log-cleaner] Successfully removed container on host [rke.onap.cloud] 
INFO[0011] [hosts] Successfully cleaned up host [rke.onap.cloud] 
INFO[0011] Removing local admin Kubeconfig: ./kube_config_cluster.yml 
INFO[0011] Local admin Kubeconfig removed successfully  
INFO[0011] Cluster removed successfully  

ubuntu@a-rke:~$ rke config --name cluster.yml
ubuntu@a-rke:~$ sudo rke up
INFO[0000] Building Kubernetes cluster                  
INFO[0000] [dialer] Setup tunnel for host [rke.onap.cloud] 
INFO[0000] [network] Deploying port listener containers 
INFO[0001] [network] Successfully started [rke-etcd-port-listener] container on host [rke.onap.cloud] 
INFO[0001] [network] Successfully started [rke-cp-port-listener] container on host [rke.onap.cloud] 
INFO[0002] [network] Successfully started [rke-worker-port-listener] container on host [rke.onap.cloud] 
INFO[0002] [network] Port listener containers deployed successfully 
INFO[0002] [network] Running control plane -> etcd port checks 
INFO[0003] [network] Successfully started [rke-port-checker] container on host [rke.onap.cloud] 
INFO[0003] [network] Running control plane -> worker port checks 
INFO[0004] [network] Successfully started [rke-port-checker] container on host [rke.onap.cloud] 
INFO[0004] [network] Running workers -> control plane port checks 
INFO[0005] [network] Successfully started [rke-port-checker] container on host [rke.onap.cloud] 
INFO[0005] [network] Checking KubeAPI port Control Plane hosts 
INFO[0005] [network] Removing port listener containers  
INFO[0005] [remove/rke-etcd-port-listener] Successfully removed container on host [rke.onap.cloud] 
INFO[0006] [remove/rke-cp-port-listener] Successfully removed container on host [rke.onap.cloud] 
INFO[0006] [remove/rke-worker-port-listener] Successfully removed container on host [rke.onap.cloud] 
INFO[0006] [network] Port listener containers removed successfully 
INFO[0006] [certificates] Attempting to recover certificates from backup on [etcd,controlPlane] hosts 
INFO[0007] [certificates] No Certificate backup found on [etcd,controlPlane] hosts 
INFO[0007] [certificates] Generating CA kubernetes certificates 
INFO[0007] [certificates] Generating Kubernetes API server certficates 
INFO[0008] [certificates] Generating Kube Controller certificates 
INFO[0008] [certificates] Generating Kube Scheduler certificates 
INFO[0008] [certificates] Generating Kube Proxy certificates 
INFO[0009] [certificates] Generating Node certificate   
INFO[0009] [certificates] Generating admin certificates and kubeconfig 
INFO[0009] [certificates] Generating etcd-rke.onap.cloud certificate and key 
INFO[0009] [certificates] Generating Kubernetes API server aggregation layer requestheader client CA certificates 
INFO[0009] [certificates] Generating Kubernetes API server proxy client certificates 
INFO[0010] [certificates] Temporarily saving certs to [etcd,controlPlane] hosts 
INFO[0016] [certificates] Saved certs to [etcd,controlPlane] hosts 
INFO[0016] [reconcile] Reconciling cluster state        
INFO[0016] [reconcile] This is newly generated cluster  
INFO[0016] [certificates] Deploying kubernetes certificates to Cluster nodes 
INFO[0022] Successfully Deployed local admin kubeconfig at [./kube_config_cluster.yml] 
INFO[0022] [certificates] Successfully deployed kubernetes certificates to Cluster nodes 
INFO[0022] Pre-pulling kubernetes images                
INFO[0022] Kubernetes images pulled successfully        
INFO[0022] [etcd] Building up etcd plane..              
INFO[0023] [etcd] Successfully started [etcd] container on host [rke.onap.cloud] 
INFO[0023] [etcd] Saving snapshot [etcd-rolling-snapshots] on host [rke.onap.cloud] 
INFO[0028] [certificates] Successfully started [rke-bundle-cert] container on host [rke.onap.cloud] 
INFO[0029] [certificates] successfully saved certificate bundle [/opt/rke/etcd-snapshots//pki.bundle.tar.gz] on host [rke.onap.cloud] 
INFO[0029] [etcd] Successfully started [rke-log-linker] container on host [rke.onap.cloud] 
INFO[0030] [remove/rke-log-linker] Successfully removed container on host [rke.onap.cloud] 
INFO[0030] [etcd] Successfully started etcd plane..     
INFO[0030] [controlplane] Building up Controller Plane.. 
INFO[0031] [controlplane] Successfully started [kube-apiserver] container on host [rke.onap.cloud] 
INFO[0031] [healthcheck] Start Healthcheck on service [kube-apiserver] on host [rke.onap.cloud] 
INFO[0045] [healthcheck] service [kube-apiserver] on host [rke.onap.cloud] is healthy 
INFO[0046] [controlplane] Successfully started [rke-log-linker] container on host [rke.onap.cloud] 
INFO[0046] [remove/rke-log-linker] Successfully removed container on host [rke.onap.cloud] 
INFO[0047] [controlplane] Successfully started [kube-controller-manager] container on host [rke.onap.cloud] 
INFO[0047] [healthcheck] Start Healthcheck on service [kube-controller-manager] on host [rke.onap.cloud] 
INFO[0052] [healthcheck] service [kube-controller-manager] on host [rke.onap.cloud] is healthy 
INFO[0053] [controlplane] Successfully started [rke-log-linker] container on host [rke.onap.cloud] 
INFO[0053] [remove/rke-log-linker] Successfully removed container on host [rke.onap.cloud] 
INFO[0054] [controlplane] Successfully started [kube-scheduler] container on host [rke.onap.cloud] 
INFO[0054] [healthcheck] Start Healthcheck on service [kube-scheduler] on host [rke.onap.cloud] 
INFO[0059] [healthcheck] service [kube-scheduler] on host [rke.onap.cloud] is healthy 
INFO[0060] [controlplane] Successfully started [rke-log-linker] container on host [rke.onap.cloud] 
INFO[0060] [remove/rke-log-linker] Successfully removed container on host [rke.onap.cloud] 
INFO[0060] [controlplane] Successfully started Controller Plane.. 
INFO[0060] [authz] Creating rke-job-deployer ServiceAccount 
INFO[0060] [authz] rke-job-deployer ServiceAccount created successfully 
INFO[0060] [authz] Creating system:node ClusterRoleBinding 
INFO[0060] [authz] system:node ClusterRoleBinding created successfully 
INFO[0060] [certificates] Save kubernetes certificates as secrets 
INFO[0060] [certificates] Successfully saved certificates as kubernetes secret [k8s-certs] 
INFO[0060] [state] Saving cluster state to Kubernetes   
INFO[0061] [state] Successfully Saved cluster state to Kubernetes ConfigMap: cluster-state 
INFO[0061] [state] Saving cluster state to cluster nodes 
INFO[0061] [state] Successfully started [cluster-state-deployer] container on host [rke.onap.cloud] 
INFO[0062] [remove/cluster-state-deployer] Successfully removed container on host [rke.onap.cloud] 
INFO[0062] [worker] Building up Worker Plane..          
INFO[0062] [remove/service-sidekick] Successfully removed container on host [rke.onap.cloud] 
INFO[0063] [worker] Successfully started [kubelet] container on host [rke.onap.cloud] 
INFO[0063] [healthcheck] Start Healthcheck on service [kubelet] on host [rke.onap.cloud] 
INFO[0068] [healthcheck] service [kubelet] on host [rke.onap.cloud] is healthy 
INFO[0069] [worker] Successfully started [rke-log-linker] container on host [rke.onap.cloud] 
INFO[0070] [remove/rke-log-linker] Successfully removed container on host [rke.onap.cloud] 
INFO[0070] [worker] Successfully started [kube-proxy] container on host [rke.onap.cloud] 
INFO[0070] [healthcheck] Start Healthcheck on service [kube-proxy] on host [rke.onap.cloud] 
INFO[0076] [healthcheck] service [kube-proxy] on host [rke.onap.cloud] is healthy 
INFO[0076] [worker] Successfully started [rke-log-linker] container on host [rke.onap.cloud] 
INFO[0077] [remove/rke-log-linker] Successfully removed container on host [rke.onap.cloud] 
INFO[0077] [worker] Successfully started Worker Plane.. 
INFO[0077] [sync] Syncing nodes Labels and Taints       
INFO[0077] [sync] Successfully synced nodes Labels and Taints 
INFO[0077] [network] Setting up network plugin: canal   
INFO[0077] [addons] Saving addon ConfigMap to Kubernetes 
INFO[0077] [addons] Successfully Saved addon to Kubernetes ConfigMap: rke-network-plugin 
INFO[0077] [addons] Executing deploy job..              
INFO[0082] [addons] Setting up KubeDNS                  
INFO[0082] [addons] Saving addon ConfigMap to Kubernetes 
INFO[0082] [addons] Successfully Saved addon to Kubernetes ConfigMap: rke-kubedns-addon 
INFO[0082] [addons] Executing deploy job..              
INFO[0087] [addons] KubeDNS deployed successfully..     
INFO[0087] [addons] Setting up Metrics Server           
INFO[0087] [addons] Saving addon ConfigMap to Kubernetes 
INFO[0087] [addons] Successfully Saved addon to Kubernetes ConfigMap: rke-metrics-addon 
INFO[0087] [addons] Executing deploy job..              
INFO[0092] [addons] KubeDNS deployed successfully..     
INFO[0092] [ingress] Setting up nginx ingress controller 
INFO[0092] [addons] Saving addon ConfigMap to Kubernetes 
INFO[0092] [addons] Successfully Saved addon to Kubernetes ConfigMap: rke-ingress-controller 
INFO[0092] [addons] Executing deploy job..              
INFO[0097] [ingress] ingress controller nginx is successfully deployed 
INFO[0097] [addons] Setting up user addons              
INFO[0097] [addons] Checking for included user addons   
WARN[0097] [addons] Unable to determine if  is a file path or url, skipping 
INFO[0097] [addons] Deploying rke-user-includes-addons  
INFO[0097] [addons] Saving addon ConfigMap to Kubernetes 
INFO[0097] [addons] Successfully Saved addon to Kubernetes ConfigMap: rke-user-includes-addons 
INFO[0097] [addons] Executing deploy job..              
WARN[0128] Failed to deploy addon execute job [rke-user-includes-addons]: Failed to get job complete status: <nil> 
INFO[0128] Finished building Kubernetes cluster successfully 

ubuntu@a-rke:~$ sudo docker ps
CONTAINER ID        IMAGE                                COMMAND                  CREATED             STATUS              PORTS               NAMES
ec26c4bd24b5        846921f0fe0e                         "/server"                10 minutes ago      Up 10 minutes                           k8s_default-http-backend_default-http-backend-797c5bc547-45msr_ingress-nginx_0eddfe19-394e-11e9-b708-000d3a0e23f3_0
f8d5db205e14        8a7739f672b4                         "/sidecar --v=2 --lo…"   10 minutes ago      Up 10 minutes                           k8s_sidecar_kube-dns-7588d5b5f5-6k286_kube-system_08c13783-394e-11e9-b708-000d3a0e23f3_0
490461545ae4        rancher/metrics-server-amd64         "/metrics-server --s…"   10 minutes ago      Up 10 minutes                           k8s_metrics-server_metrics-server-97bc649d5-q84tz_kube-system_0c566ec8-394e-11e9-b708-000d3a0e23f3_0
aaf03b62bd41        6816817d9dce                         "/dnsmasq-nanny -v=2…"   10 minutes ago      Up 10 minutes                           k8s_dnsmasq_kube-dns-7588d5b5f5-6k286_kube-system_08c13783-394e-11e9-b708-000d3a0e23f3_0
58ec007db72f        55ffe31ac578                         "/kube-dns --domain=…"   10 minutes ago      Up 10 minutes                           k8s_kubedns_kube-dns-7588d5b5f5-6k286_kube-system_08c13783-394e-11e9-b708-000d3a0e23f3_0
0a95c06f6aa6        e183460c484d                         "/cluster-proportion…"   10 minutes ago      Up 10 minutes                           k8s_autoscaler_kube-dns-autoscaler-5db9bbb766-6slz7_kube-system_08b5495c-394e-11e9-b708-000d3a0e23f3_0
968a7c99b210        rancher/pause-amd64:3.1              "/pause"                 10 minutes ago      Up 10 minutes                           k8s_POD_default-http-backend-797c5bc547-45msr_ingress-nginx_0eddfe19-394e-11e9-b708-000d3a0e23f3_0
69969b331e49        rancher/pause-amd64:3.1              "/pause"                 10 minutes ago      Up 10 minutes                           k8s_POD_metrics-server-97bc649d5-q84tz_kube-system_0c566ec8-394e-11e9-b708-000d3a0e23f3_0
baa5f03c16ff        rancher/pause-amd64:3.1              "/pause"                 10 minutes ago      Up 10 minutes                           k8s_POD_kube-dns-7588d5b5f5-6k286_kube-system_08c13783-394e-11e9-b708-000d3a0e23f3_0
82b2a9f640cb        rancher/pause-amd64:3.1              "/pause"                 10 minutes ago      Up 10 minutes                           k8s_POD_kube-dns-autoscaler-5db9bbb766-6slz7_kube-system_08b5495c-394e-11e9-b708-000d3a0e23f3_0
953a4d4be0c1        df4469c42185                         "/usr/bin/dumb-init …"   10 minutes ago      Up 10 minutes                           k8s_nginx-ingress-controller_nginx-ingress-controller-dfhp8_ingress-nginx_0ed3bdbf-394e-11e9-b708-000d3a0e23f3_0
cce552840749        rancher/pause-amd64:3.1              "/pause"                 10 minutes ago      Up 10 minutes                           k8s_POD_nginx-ingress-controller-dfhp8_ingress-nginx_0ed3bdbf-394e-11e9-b708-000d3a0e23f3_0
baa65f9c6f97        f0fad859c909                         "/opt/bin/flanneld -…"   10 minutes ago      Up 10 minutes                           k8s_kube-flannel_canal-lc6g6_kube-system_05904de9-394e-11e9-b708-000d3a0e23f3_0
1736ce68f41a        9f355e076ea7                         "/install-cni.sh"        10 minutes ago      Up 10 minutes

# continuation of the earlier AWS "kubectl get pods --all-namespaces -o wide" listing
# (onap-dmaap pods) - untangled from the rke log lines it was interleaved with
onap            onap-aai-aai-spike-54ff77787f-d678x                      2/2     Running            0          29m     10.42.5.6        18.220.70.253    <none>           <none>
onap            onap-aai-aai-traversal-6ff868f477-lzv2f                  0/2     Init:0/1           2          29m     10.42.3.8        3.17.76.33       <none>           <none>
onap            onap-aai-aai-traversal-update-query-data-9g2b8           0/1     Init:0/1           2          29m     10.42.5.12       18.220.70.253    <none>           <none>
onap            onap-dmaap-dbc-pg-0                                      1/1     Running            0          29m     10.42.5.9        18.220.70.253    <none>           <none>
onap            onap-dmaap-dbc-pg-1                                      1/1     Running            0          26m     10.42.3.14       3.17.76.33       <none>           <none>
onap            onap-dmaap-dbc-pgpool-8666b57857-97zjc                   1/1     Running            0          29m     10.42.5.5        18.220.70.253    <none>           <none>
onap            onap-dmaap-dbc-pgpool-8666b57857-vr8gk                   1/1     Running            0          29m     10.42.4.8        18.188.214.137   <none>           <none>
onap            onap-dmaap-dmaap-bc-745995bf74-m6hhq                     0/1     Init:0/2           2          29m     10.42.4.12       18.188.214.137   <none>           <none>
onap            onap-dmaap-dmaap-bc-post-install-6ff4j                   1/1     Running            0          29m     10.42.4.9        18.188.214.137   <none>           <none>
onap            onap-dmaap-dmaap-dr-db-0                                 1/1     Running            0          29m     10.42.4.10       18.188.214.137   <none>           <none>
onap            onap-dmaap-dmaap-dr-db-1                                 1/1     Running            1          24m     10.42.5.15       18.220.70.253    <none>           <none>
onap            onap-dmaap-dmaap-dr-node-0                               2/2     Running            0          29m     10.42.3.11       3.17.76.33       <none>           <none>
onap            onap-dmaap-dmaap-dr-prov-fbf6c94f5-v9bmq                 2/2     Running            0          29m     10.42.5.10       18.220.70.253    <none>           <none>
onap            onap-dmaap-message-router-0                              1/1     Running            0          29m     10.42.4.14       18.188.214.137   <none>           <none>
onap            onap-dmaap-message-router-kafka-0                        1/1     Running            1          29m     10.42.5.13       18.220.70.253    <none>           <none>
onap            onap-dmaap-message-router-kafka-1                        1/1     Running            1          29m     10.42.3.13       3.17.76.33       <none>           <none>
onap            onap-dmaap-message-router-kafka-2                        1/1     Running            0          29m     10.42.4.15       18.188.214.137   <none>           <none>
onap            onap-dmaap-message-router-mirrormaker-8587c4c9cf-lfnd8   0/1     CrashLoopBackOff   9          29m     10.42.4.7        18.188.214.137   <none>           <none>
onap            onap-dmaap-message-router-zookeeper-0                    1/1     Running            0          29m     10.42.5.14       18.220.70.253    <none>           <none>
onap            onap-dmaap-message-router-zookeeper-1                    1/1     Running            0          29m     10.42.4.13       18.188.214.137   <none>           <none>
onap            onap-dmaap-message-router-zookeeper-2                    1/1     Running            0          29m     10.42.3.12       3.17.76.33       <none>           <none>
onap            onap-nfs-provisioner-nfs-provisioner-57c999dc57-mdcw5    1/1     Running      k8s_install-cni_canal-lc6g6_kube-system_05904de9-394e-11e9-b708-000d3a0e23f3_0
615d3f702ee7      0  7eca10056c8e        24m      10.42.3.15       3.17.76.33    "start_runit"   <none>         10 minutes ago<none>
onap      Up 10 minutes    onap-robot-robot-677bdbb454-zj9jk                       k8s_calico-node_canal-lc6g6_kube-system_05904de9-394e-11e9-b708-000d3a0e23f3_0
1c4a702f0f18  1/1     Running  rancher/pause-amd64:3.1          0    "/pause"      24m     10.42.5.16      10 minutes ago 18.220.70.253    <none>   Up 10 minutes      <none>
onap            onap-so-so-8569947cbd-jn5x4         k8s_POD_canal-lc6g6_kube-system_05904de9-394e-11e9-b708-000d3a0e23f3_0
0da1cada08e1        rancher/hyperkube:v1.11.6-rancher1   "/opt/rke-tools/entr…"    10 minutes ago     0/1 Up 10 minutes  Init:0/1           1          13m    kube-proxy
57f44998f34a  10.42.4.19       rancher/hyperkube:v1.11.6-rancher1   "/opt/rke-tools/entr…"18.188.214.137   <none>   11 minutes ago      Up<none>
onap 11 minutes          onap-so-so-bpmn-infra-78c8fd665d-b47qn                  kubelet
50f424c4daec 0/1     Init:0/1        rancher/hyperkube:v1.11.6-rancher1   "/opt/rke-tools/entr…"1   11 minutes ago     13m Up 11 minutes  10.42.3.16       3.17.76.33       <none>           kube-scheduler
502d327912d9<none>
onap         rancher/hyperkube:v1.11.6-rancher1   "/opt/rke-tools/entr…"onap-so-so-catalog-db-adapter-565f9767ff-lvbgx    11 minutes ago     0/1 Up 11 minutes  Init:0/1           1          13m    kube-controller-manager
9fc706bbf3a5  10.42.3.17       rancher/hyperkube:v1.11.6-rancher1   "/opt/rke-tools/entr…"3.17.76.33       11<none> minutes ago      Up 11 minutes <none>
onap            onap-so-so-mariadb-config-job-d9sdb              kube-apiserver
2e7630c2047c        rancher/coreos-etcd:v3.2.180/1     Init:0/2     "/usr/local/bin/etcd…"   11 minutes ago 0     Up 11 minutes   3m37s   10.42.3.20         3.17.76.33       <none>     etcd
fef566337eb6      <none>
onap  rancher/rke-tools:v0.1.15            "/opt/rke-tools/rke-…"onap-so-so-mariadb-config-job-rkqdl   26 minutes ago      Up 26 minutes         0/1     Init:Error         0    etcd-rolling-snapshots


amdocs@obriensystemsu0:~$ kubectl get pods --all-namespaces
NAMESPACE  13m     NAME10.42.5.19       18.220.70.253    <none>           <none>
onap            onap-so-so-monitoring-69b9fdd94c-dks4v    READY     STATUS      RESTARTS   AGE
ingress-nginx   default-http-backend-797c5bc547-m8hbx 1/1       1/1Running       Running     0          1h
ingress-nginx13m   nginx-ingress-controller-2v7w7       10.42.4.18       1/118.188.214.137   <none>    Running     0  <none>
onap        1h
kube-system     canal-thmfgonap-so-so-openstack-adapter-5f9cf896d7-mgbdd            0/1     Init:0/1           1   3/3       Running13m     010.42.4.17       18.188.214.137   1h
kube-system<none>     kube-dns-7588d5b5f5-j66s8      <none>
onap           3/3  onap-so-so-request-db-adapter-5c9bfd7b57-2krnp      Running     0/1     Init:0/1     1h
kube-system     kube-dns-autoscaler-5db9bbb766-rg5n8   1   1/1       Running13m     010.42.3.18       3.17.76.33   1h
kube-system    <none> metrics-server-97bc649d5-jd2rr          <none>
onap  1/1       Running   onap-so-so-sdc-controller-6fb5cf5775-bsxhm  0          1h
kube-system   0/1  rke-ingress-controller-deploy-job-znp9n   Init:0/1       Completed    01          1h
kube-system13m     rke-kubedns-addon-deploy-job-dzxsj10.42.4.20        0/118.188.214.137   <none>     Completed   0   <none>
onap       1h
kube-system     rkeonap-so-metricsso-addonsdnc-deployadapter-job-gpm4j8555689c75-r6vkb                 01/1     Running     Completed    0   0       1h
kube-system   13m  rke-network-plugin-deploy-job-kqdds   10.42.5.18    0/1   18.220.70.253    Completed<none>   0          1h
kube-system<none>
onap     tiller-deploy-69458576b-khgr5       onap-so-so-vfc-adapter-68fccc8bb8-c56t2      1/1       Running     0/1          1h

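A long pod listing like the one above is easier to read as a per-STATUS summary while a deploy is in flight. This is a small sketch (not part of rke_setup.sh) that counts pods by the STATUS column with awk; here it runs against a few canned rows taken from the listing above, but on a live cluster you would pipe `kubectl get pods --all-namespaces` into it instead.

```shell
# count pods by STATUS ($4 in "NAMESPACE NAME READY STATUS RESTARTS AGE" output)
count_by_status() {
  awk 'NR > 1 { count[$4]++ } END { for (s in count) print s, count[s] }' | sort
}

# canned sample rows from the listing above; replace the heredoc with
#   kubectl get pods --all-namespaces | count_by_status
# on a real cluster
count_by_status <<'EOF'
NAMESPACE   NAME                                                     READY   STATUS             RESTARTS   AGE
onap        onap-dmaap-dmaap-dr-node-0                               2/2     Running            0          29m
onap        onap-dmaap-message-router-mirrormaker-8587c4c9cf-lfnd8   0/1     CrashLoopBackOff   9          29m
onap        onap-so-so-bpmn-infra-78c8fd665d-b47qn                   0/1     Init:0/1           1          13m
onap        onap-so-so-mariadb-config-job-rkqdl                      0/1     Init:Error         0          13m
EOF
```

This prints one line per distinct STATUS with its count (e.g. a lone CrashLoopBackOff stands out immediately).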
DI 20190226-1: RKE up segmentation fault on 0.1.16 - use correct user

Code Block
themeMidnight
amdocs@obriensystemsu0:~$ sudo rke up
Segmentation fault (core dumped)

# root cause: the cluster.yml ssh user was set to ubuntu instead of amdocs on this particular VM

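For reference, the user rke reads is set per node in cluster.yml. A minimal sketch (the address below is a placeholder, not from this install):

```yaml
# cluster.yml fragment - "user" must be the account that owns the ssh key,
# otherwise rke up can fail in confusing ways (here: a segfault on 0.1.16)
nodes:
- address: 10.0.0.10              # placeholder address
  user: amdocs                    # must match the actual ssh login, not e.g. ubuntu
  role: [controlplane, worker, etcd]
  ssh_key_path: ~/.ssh/onap_rsa
```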
DI 20190227-1: Override the default 110-pod-per-VM limit

https://forums.rancher.com/t/solved-setting-max-pods/11866

Code Block
themeMidnight
kubelet:
    image: ""
    extra_args:
      max-pods: 900

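To confirm the override took effect, check the pod capacity the kubelet reports in the node's Capacity block. A sketch that parses `kubectl describe node` output - a canned sample stands in here; pipe the live command in on a real cluster:

```shell
# extract the "pods:" value from the Capacity block of kubectl describe node
pod_capacity() {
  awk '/^Capacity:/ { in_cap = 1; next }
       in_cap && /pods:/ { print $2; exit }'
}

# canned sample standing in for: kubectl describe node <node-name> | pod_capacity
pod_capacity <<'EOF'
Capacity:
 cpu:     16
 memory:  257886172Ki
 pods:    900
EOF
```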
DI 20190228-1: deploy casablanca MR to RKE under K8S 1.11.6, Docker 18.06, Helm 2.12.3

Code Block
themeMidnight
sudo git clone https://gerrit.onap.org/r/logging-analytics
sudo wget https://git.onap.org/oom/plain/kubernetes/onap/resources/environments/dev.yaml
sudo cp dev.yaml dev0.yaml
sudo vi dev0.yaml
sudo cp dev0.yaml dev1.yaml
sudo cp logging-analytics/deploy/cd.sh .
sudo ./cd.sh -b casablanca -e onap -p false nexus3.onap.org:10001 -f true -s 300 -c true -d false -w false -r false


This is no good for a Helm 2.12.3 deployment - staying on Helm 2.9.1 for now:
Error: Chart incompatible with Tiller v2.12.3


In the casablanca branch only, flip the Tiller version constraint at
https://git.onap.org/oom/tree/kubernetes/onap/Chart.yaml?h=casablanca#n24
tillerVersion: "~2.9.1"

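For context on the flip: Helm's tillerVersion field is a semver range, and "~2.9.1" means >=2.9.1 and <2.10.0, which is why Tiller 2.12.3 is rejected. A widened constraint would look like this sketch (an illustration only, not the merged OOM fix):

```yaml
# oom/kubernetes/onap/Chart.yaml - widened constraint (illustration only)
# casablanca value: tillerVersion: "~2.9.1"    # accepts >=2.9.1, <2.10.0
tillerVersion: ">=2.9.1"                       # would also accept 2.12.3
```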
DI 20190305-1: Azure 256G VM full ONAP Testing

...

themeMidnight

...



# on worker nodes only
# install the nfs client (nfs-common on Ubuntu)


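The NFS client step above, sketched for Ubuntu 16.04 workers (assumptions: nfs-common is the stock Ubuntu client package, and the share is the usual OOM /dockerdata-nfs export - adjust the server IP for your cluster):

```shell
# on each worker node only - install the NFS client so the /dockerdata-nfs
# share exported by the NFS server node can be mounted
sudo apt-get update
sudo apt-get install -y nfs-common
# example mount afterwards (assumes the server already exports the share)
# sudo mount <nfs-server-ip>:/dockerdata-nfs /dockerdata-nfs
```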
DI 20190507: ARM support via the ARM-friendly RKE 0.2.1 install

Jira
serverONAP JIRA
serverId425b2b0a-557c-3c0c-b515-579789cceedb
keyLOG-331

AWS a1.4xlarge (arm64) - $0.408/hr on-demand

ami-0b9bd0b532ebcf4c9

Notes

Pre-RKE installation details in Cloud Native Deployment

...