This document provides instructions on how to set up an HA Kubernetes cluster on AWS instances using Rancher Kubernetes Engine (RKE).

It shows how to create and configure 3 control plane VMs (each with 4 vCPUs, 16 GB RAM, 80 GB disk storage and Ubuntu 18.04.4) and 12 worker VMs (each with 8 vCPUs, 32 GB RAM, 160 GB disk storage and Ubuntu 18.04.4) on AWS, and how to deploy an HA Kubernetes cluster on them through RKE.

Prerequisite: You must have AWS account credentials to log in and follow the steps below in the AWS EC2/VPC dashboards.

1. Create Key Pair

A Key Pair is required to access the created AWS instances and will be used by RKE to configure the VMs for Kubernetes.

If a key pair already exists, you can import it through Import Key Pair:

  • Go to the AWS EC2 dashboard, click Key Pairs in the left panel, then click Actions > Import Key Pair


  • To create a new key pair:

           Go to the AWS EC2 dashboard, click Key Pairs in the left panel, then click Create Key Pair


       Note: Keep the downloaded key safe and copy it into ~/.ssh/, from where it can be referenced later by RKE.

Code Block
Example:
       mv onap_key ~/.ssh
       chmod 600 ~/.ssh/onap_key
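
For users who prefer the AWS CLI over the console, the key pair step can also be scripted. This is a minimal sketch, assuming the AWS CLI is configured with valid credentials; the key pair name onap-key is an example:

Code Block
# create a new key pair and save the private key locally
aws ec2 create-key-pair --key-name onap-key \
    --query 'KeyMaterial' --output text > ~/.ssh/onap_key
chmod 600 ~/.ssh/onap_key

# or import an existing public key instead
aws ec2 import-key-pair --key-name onap-key \
    --public-key-material fileb://~/.ssh/onap_key.pub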


2. Create VPC

Go to the AWS VPC dashboard, click Your VPCs in the left panel, then click Create VPC

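The same step as a hedged AWS CLI sketch; the CIDR block 172.31.0.0/16 is an example chosen to match the addresses used later in this guide:

Code Block
# create a VPC and capture its ID for later steps
VPC_ID=$(aws ec2 create-vpc --cidr-block 172.31.0.0/16 \
    --query 'Vpc.VpcId' --output text)
echo $VPC_ID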

3. Create Subnet

Go to the AWS VPC dashboard, click Subnets in the left panel, then click Create Subnet

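A CLI equivalent, assuming the VPC_ID variable from the previous step; the subnet CIDR and availability zone are examples:

Code Block
SUBNET_ID=$(aws ec2 create-subnet --vpc-id $VPC_ID \
    --cidr-block 172.31.32.0/20 --availability-zone us-east-1a \
    --query 'Subnet.SubnetId' --output text)
echo $SUBNET_ID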

4. Create Internet Gateway

Go to the AWS VPC dashboard, click Internet Gateways in the left panel, then click Create Internet Gateway


Note: Once the IGW is created, you will see Attach to VPC in the top right corner. Click that button to attach the IGW to your VPC.

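Both the creation and the attachment can be scripted; a sketch assuming the VPC_ID variable from step 2:

Code Block
IGW_ID=$(aws ec2 create-internet-gateway \
    --query 'InternetGateway.InternetGatewayId' --output text)
aws ec2 attach-internet-gateway --internet-gateway-id $IGW_ID --vpc-id $VPC_ID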


5. Add Routes with IGW

Go to the AWS VPC dashboard, click Route Tables in the left panel, select your routing table, then click Routes and Edit Routes to add a route through the IGW

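The equivalent CLI sketch adds a default route (0.0.0.0/0) through the IGW; it assumes the VPC's main route table is the first result and reuses the variables from the previous steps:

Code Block
RT_ID=$(aws ec2 describe-route-tables \
    --filters Name=vpc-id,Values=$VPC_ID \
    --query 'RouteTables[0].RouteTableId' --output text)
aws ec2 create-route --route-table-id $RT_ID \
    --destination-cidr-block 0.0.0.0/0 --gateway-id $IGW_ID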

6. Create Security Group

Click Create Security Group under EC2 > Security Groups, fill in the details, then click Create Security Group


Select the created security group and click Edit inbound rules / Edit outbound rules.


Add rules for Inbound:

Click Add rule, fill in the details, then click Save rules


Add Rules for Outbound:

Click Edit outbound rules, fill in the details, then click Save rules

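A CLI sketch for the same step; the group name onap-sg and the rules shown (SSH from anywhere, all traffic between cluster nodes) are assumptions — tighten them to your own security policy:

Code Block
SG_ID=$(aws ec2 create-security-group --group-name onap-sg \
    --description "ONAP HA cluster" --vpc-id $VPC_ID \
    --query 'GroupId' --output text)
# allow SSH in from anywhere (restrict the CIDR in production)
aws ec2 authorize-security-group-ingress --group-id $SG_ID \
    --protocol tcp --port 22 --cidr 0.0.0.0/0
# allow all traffic between members of this security group
aws ec2 authorize-security-group-ingress --group-id $SG_ID \
    --protocol all --source-group $SG_ID
# all outbound traffic is allowed by default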

7. Create Kubernetes control plane VMs

Step-1: Launch a new instance from EC2, select the image, then click Next


Step-2: Choose Instance Type, click Next


Step-3: Configure Instance

Set the number of instances to 3, select the network (your created VPC) and subnet ID, enable Auto-assign Public IP, then click Next


Step-4: Add Storage

Add disk storage as required then click Next


Step-5: Add Tags

Add Tags if needed, click Next


Step-6: Configure Security Group

Create a new security group, or select an existing one if it already exists, then click Review and Launch


Step-7: Review and Launch


Note: While launching, select the key pair, check the acknowledgement box, and click Launch Instances

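For completeness, a hedged CLI sketch that launches the three control plane instances in one call. The AMI ID is a placeholder — substitute an Ubuntu 18.04 AMI for your region — and t2.xlarge (4 vCPUs, 16 GB RAM) is an example matching the sizing given above:

Code Block
aws ec2 run-instances --image-id ami-xxxxxxxx --count 3 \
    --instance-type t2.xlarge --key-name onap-key \
    --security-group-ids $SG_ID --subnet-id $SUBNET_ID \
    --associate-public-ip-address \
    --block-device-mappings '[{"DeviceName":"/dev/sda1","Ebs":{"VolumeSize":80}}]' \
    --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=onap-control}]'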

8. Apply Customization script for control plane VMs

Below is the customization script. Apply it on all control plane VMs by saving it to a file (e.g., script.sh) and running it with sudo (e.g., sudo ./script.sh):

Code Block
#!/bin/bash
# Customization script for control plane VMs. Run as root (e.g., sudo ./script.sh).

DOCKER_VERSION=18.09.5

# Install Docker via Rancher's install script
sudo apt-get update
curl https://releases.rancher.com/install-docker/$DOCKER_VERSION.sh | sh

# Allow Docker to pull from the insecure ONAP Nexus registry
sudo mkdir -p /etc/systemd/system/docker.service.d/
cat > /etc/systemd/system/docker.service.d/docker.conf << EOF
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// --insecure-registry=nexus3.onap.org:10001
EOF
sudo usermod -aG docker ubuntu
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo apt-mark hold docker-ce

# Map this host's private IP to its hostname (the interface may be named ethX or ensX)
IP_ADDR=$(ip address | grep -E 'eth|ens' | grep inet | awk '{print $2}' | awk -F / '{print $1}')
HOSTNAME=$(hostname)
echo "$IP_ADDR $HOSTNAME" >> /etc/hosts

sudo docker login -u docker -p docker nexus3.onap.org:10001
sudo apt-get install make -y

# NFS server providing shared storage for the cluster
sudo apt-get install nfs-kernel-server -y
sudo mkdir -p /nfs_share
sudo chown nobody:nogroup /nfs_share/
exit 0
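
The script creates /nfs_share but does not export it. A minimal sketch of the export step, assuming the example VPC CIDR 172.31.0.0/16 used elsewhere in this guide (adjust the options to your policy):

Code Block
# export /nfs_share to the VPC network and reload the export table
echo "/nfs_share 172.31.0.0/16(rw,sync,no_subtree_check,no_root_squash)" | sudo tee -a /etc/exports
sudo exportfs -ra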

9. Create Kubernetes Worker VMs

The number and size of Worker VMs is dependent on the size of the ONAP deployment. By default, all ONAP applications are deployed. It’s possible to customize the deployment and enable a subset of the ONAP applications. For the purpose of this guide, however, we will deploy 12 Kubernetes Workers that have been sized to handle the entire ONAP application workload.

Step-1: Launch a new instance and select the required image


Step-2: Choose an Instance Type

Select required configuration and click Next


Step-3: Configure Instances

Select the number of instances and the network and subnet details, then click Next


Step-4: Add Storage

Add required disk storage then click Next


Step-5: Add Tags

Add Tags if needed, then click Next


Step-6: Configure Security Group

Create a new security group or select an existing one, then click Review and Launch


Step-7: Review Instance Launch

Review the configuration details, then click Launch


Note: While launching, select an existing key pair or create a new one, check the acknowledgement box, then click Launch Instances

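The corresponding CLI sketch for the workers mirrors the control plane launch; t2.2xlarge (8 vCPUs, 32 GB RAM) is an example matching the sizing given above, and the AMI ID is again a placeholder:

Code Block
aws ec2 run-instances --image-id ami-xxxxxxxx --count 12 \
    --instance-type t2.2xlarge --key-name onap-key \
    --security-group-ids $SG_ID --subnet-id $SUBNET_ID \
    --associate-public-ip-address \
    --block-device-mappings '[{"DeviceName":"/dev/sda1","Ebs":{"VolumeSize":160}}]' \
    --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=onap-worker}]'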

10. Apply Customization script for Kubernetes worker VMs

Below is the customization script. Apply it on all worker VMs by saving it to a file (e.g., script.sh) and running it with sudo (e.g., sudo ./script.sh):

Code Block
#!/bin/bash
# Customization script for worker VMs. Run as root (e.g., sudo ./script.sh).

DOCKER_VERSION=18.09.5

# Install Docker via Rancher's install script
sudo apt-get update
curl https://releases.rancher.com/install-docker/$DOCKER_VERSION.sh | sh

# Allow Docker to pull from the insecure ONAP Nexus registry
sudo mkdir -p /etc/systemd/system/docker.service.d/
cat > /etc/systemd/system/docker.service.d/docker.conf << EOF
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// --insecure-registry=nexus3.onap.org:10001
EOF
sudo usermod -aG docker ubuntu
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo apt-mark hold docker-ce

# Map this host's private IP to its hostname (the interface may be named ethX or ensX)
IP_ADDR=$(ip address | grep -E 'eth|ens' | grep inet | awk '{print $2}' | awk -F / '{print $1}')
HOSTNAME=$(hostname)
echo "$IP_ADDR $HOSTNAME" >> /etc/hosts

sudo docker login -u docker -p docker nexus3.onap.org:10001
sudo apt-get install make -y

# NFS client, to mount the share exported by the control plane
sudo apt-get install nfs-common -y
exit 0
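
The workers install only the NFS client; the actual mount is typically configured later by the deployment tooling. For reference, a hypothetical manual mount of the share exported by onap-control-1 (the address and mount point are examples):

Code Block
# mount the control plane's NFS share (address taken from the cluster.yml below)
sudo mkdir -p /dockerdata-nfs
sudo mount -t nfs 172.31.47.230:/nfs_share /dockerdata-nfs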

11. Configure Rancher Kubernetes Engine (RKE)


Download and install RKE on a VM, desktop or laptop. Binaries can be found here for Linux and Mac: 

https://github.com/rancher/rke/releases/tag/v1.0.6

Execute the following once RKE is downloaded:

Code Block
mv rke_linux-amd64 rke
chmod +x rke
sudo mv ./rke /usr/local/bin/rke
rke --version

RKE requires a cluster.yml as input. An example file is shown below that describes a Kubernetes cluster that will be mapped onto the AWS instances created earlier in this guide.

Below is an example of an HA Kubernetes cluster for ONAP

Code Block
nodes:
- address: 172.31.47.230
  port: "22"
  internal_address: 172.31.47.230
  role:
  - controlplane
  - etcd
  hostname_override: "onap-control-1"
  user: ubuntu
  ssh_key_path: "~/.ssh/onap_key"
- address: 172.31.37.48
  port: "22"
  internal_address: 172.31.37.48
  role:
  - controlplane
  - etcd
  hostname_override: "onap-control-2"
  user: ubuntu
  ssh_key_path: "~/.ssh/onap_key"
- address: 172.31.35.44
  port: "22"
  internal_address: 172.31.35.44
  role:
  - controlplane
  - etcd
  hostname_override: "onap-control-3"
  user: ubuntu
  ssh_key_path: "~/.ssh/onap_key"
- address: 172.31.44.213
  port: "22"
  internal_address: 172.31.44.213
  role:
  - worker
  hostname_override: "onap-wk-1"
  user: ubuntu
  ssh_key_path: "~/.ssh/onap_key"
- address: 172.31.38.97
  port: "22"
  internal_address: 172.31.38.97
  role:
  - worker
  hostname_override: "onap-wk-2"
  user: ubuntu
  ssh_key_path: "~/.ssh/onap_key"
- address: 172.31.42.252
  port: "22"
  internal_address: 172.31.42.252
  role:
  - worker
  hostname_override: "onap-wk-3"
  user: ubuntu
  ssh_key_path: "~/.ssh/onap_key"
- address: 172.31.39.113
  port: "22"
  internal_address: 172.31.39.113
  role:
  - worker
  hostname_override: "onap-wk-4"
  user: ubuntu
  ssh_key_path: "~/.ssh/onap_key"
- address: 172.31.43.67
  port: "22"
  internal_address: 172.31.43.67
  role:
  - worker
  hostname_override: "onap-wk-5"
  user: ubuntu
  ssh_key_path: "~/.ssh/onap_key"
- address: 172.31.33.217
  port: "22"
  internal_address: 172.31.33.217
  role:
  - worker
  hostname_override: "onap-wk-6"
  user: ubuntu
  ssh_key_path: "~/.ssh/onap_key"
- address: 172.31.35.89
  port: "22"
  internal_address: 172.31.35.89
  role:
  - worker
  hostname_override: "onap-wk-7"
  user: ubuntu
  ssh_key_path: "~/.ssh/onap_key"
- address: 172.31.36.52
  port: "22"
  internal_address: 172.31.36.52
  role:
  - worker
  hostname_override: "onap-wk-8"
  user: ubuntu
  ssh_key_path: "~/.ssh/onap_key"
- address: 172.31.34.169
  port: "22"
  internal_address: 172.31.34.169
  role:
  - worker
  hostname_override: "onap-wk-9"
  user: ubuntu
  ssh_key_path: "~/.ssh/onap_key"
- address: 172.31.32.49
  port: "22"
  internal_address: 172.31.32.49
  role:
  - worker
  hostname_override: "onap-wk-10"
  user: ubuntu
  ssh_key_path: "~/.ssh/onap_key"
- address: 172.31.46.44
  port: "22"
  internal_address: 172.31.46.44
  role:
  - worker
  hostname_override: "onap-wk-11"
  user: ubuntu
  ssh_key_path: "~/.ssh/onap_key"
- address: 172.31.47.93
  port: "22"
  internal_address: 172.31.47.93
  role:
  - worker
  hostname_override: "onap-wk-12"
  user: ubuntu
  ssh_key_path: "~/.ssh/onap_key"
services:
  kube-api:
    service_cluster_ip_range: 10.43.0.0/16
    pod_security_policy: false
    always_pull_images: false
  kube-controller:
    cluster_cidr: 10.42.0.0/16
    service_cluster_ip_range: 10.43.0.0/16
  kubelet:
    cluster_domain: cluster.local
    cluster_dns_server: 10.43.0.10
    fail_swap_on: false
network:
  plugin: canal
authentication:
  strategy: x509
ssh_key_path: "~/.ssh/onap_key"
ssh_agent_auth: false
authorization:
  mode: rbac
ignore_docker_version: false
kubernetes_version: "v1.15.11-rancher1-2"
private_registries:
- url: nexus3.onap.org:10001
  user: docker
  password: docker
  is_default: true
cluster_name: "onap"
restore:
  restore: false
  snapshot_name: ""

Prepare cluster.yml

Before this configuration file can be used, the external address and the internal_address must be set for each control plane and worker node in the file, as shown in the sketch below.
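
One way to collect these addresses is with the AWS CLI; a hedged sketch that lists each running instance's Name tag, public IP, and private IP:

Code Block
aws ec2 describe-instances \
    --filters Name=instance-state-name,Values=running \
    --query 'Reservations[].Instances[].[Tags[?Key==`Name`]|[0].Value,PublicIpAddress,PrivateIpAddress]' \
    --output table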

Run RKE:

From within the same directory as the cluster.yml file, simply execute:

Code Block
rke up

The output will look something like this:

Code Block
INFO[0210] [cleanup] Successfully started [rke-log-cleaner] container on host [172.31.33.217]
INFO[0210] [cleanup] Successfully started [rke-log-cleaner] container on host [172.31.47.230]
INFO[0210] Removing container [rke-log-cleaner] on host [172.31.33.217], try #1
INFO[0210] Removing container [rke-log-cleaner] on host [172.31.47.230], try #1
INFO[0210] [cleanup] Successfully started [rke-log-cleaner] container on host [172.31.39.113]
INFO[0210] Removing container [rke-log-cleaner] on host [172.31.39.113], try #1
INFO[0210] [cleanup] Successfully started [rke-log-cleaner] container on host [172.31.43.67]
INFO[0210] Removing container [rke-log-cleaner] on host [172.31.43.67], try #1
INFO[0210] [cleanup] Successfully started [rke-log-cleaner] container on host [172.31.35.89]
INFO[0210] [cleanup] Successfully started [rke-log-cleaner] container on host [172.31.32.49]
INFO[0210] Removing container [rke-log-cleaner] on host [172.31.35.89], try #1
INFO[0210] Removing container [rke-log-cleaner] on host [172.31.32.49], try #1
INFO[0210] [cleanup] Successfully started [rke-log-cleaner] container on host [172.31.37.48]
INFO[0210] Removing container [rke-log-cleaner] on host [172.31.37.48], try #1
INFO[0210] [cleanup] Successfully started [rke-log-cleaner] container on host [172.31.44.213]
INFO[0210] Removing container [rke-log-cleaner] on host [172.31.44.213], try #1
INFO[0210] [cleanup] Successfully started [rke-log-cleaner] container on host [172.31.38.97]
INFO[0210] Removing container [rke-log-cleaner] on host [172.31.38.97], try #1
INFO[0210] [cleanup] Successfully started [rke-log-cleaner] container on host [172.31.42.252]
INFO[0210] Removing container [rke-log-cleaner] on host [172.31.42.252], try #1
INFO[0210] [cleanup] Successfully started [rke-log-cleaner] container on host [172.31.35.44]
INFO[0210] Removing container [rke-log-cleaner] on host [172.31.35.44], try #1
INFO[0210] [cleanup] Successfully started [rke-log-cleaner] container on host [172.31.46.44]
INFO[0210] Removing container [rke-log-cleaner] on host [172.31.46.44], try #1
INFO[0210] [cleanup] Successfully started [rke-log-cleaner] container on host [172.31.36.52]
INFO[0210] Removing container [rke-log-cleaner] on host [172.31.36.52], try #1
INFO[0210] [remove/rke-log-cleaner] Successfully removed container on host [172.31.47.230]
INFO[0210] [remove/rke-log-cleaner] Successfully removed container on host [172.31.35.89]
INFO[0210] [remove/rke-log-cleaner] Successfully removed container on host [172.31.34.169]
INFO[0211] [remove/rke-log-cleaner] Successfully removed container on host [172.31.32.49]
INFO[0211] [remove/rke-log-cleaner] Successfully removed container on host [172.31.35.44]
INFO[0211] [remove/rke-log-cleaner] Successfully removed container on host [172.31.39.113]
INFO[0211] [remove/rke-log-cleaner] Successfully removed container on host [172.31.33.217]
INFO[0211] [remove/rke-log-cleaner] Successfully removed container on host [172.31.43.67]
INFO[0211] [remove/rke-log-cleaner] Successfully removed container on host [172.31.37.48]
INFO[0211] [remove/rke-log-cleaner] Successfully removed container on host [172.31.42.252]
INFO[0211] [remove/rke-log-cleaner] Successfully removed container on host [172.31.38.97]
INFO[0211] [remove/rke-log-cleaner] Successfully removed container on host [172.31.46.44]
INFO[0211] [remove/rke-log-cleaner] Successfully removed container on host [172.31.36.52]
INFO[0211] [remove/rke-log-cleaner] Successfully removed container on host [172.31.44.213]
INFO[0211] [remove/rke-log-cleaner] Successfully removed container on host [172.31.47.93]
INFO[0211] [sync] Syncing nodes Labels and Taints
INFO[0215] [sync] Successfully synced nodes Labels and Taints
INFO[0215] [network] Setting up network plugin: canal
INFO[0215] [addons] Saving ConfigMap for addon rke-network-plugin to Kubernetes
INFO[0215] [addons] Successfully saved ConfigMap for addon rke-network-plugin to Kubernetes
INFO[0215] [addons] Executing deploy job rke-network-plugin
INFO[0220] [addons] Setting up coredns
INFO[0220] [addons] Saving ConfigMap for addon rke-coredns-addon to Kubernetes
INFO[0220] [addons] Successfully saved ConfigMap for addon rke-coredns-addon to Kubernetes
INFO[0220] [addons] Executing deploy job rke-coredns-addon
INFO[0225] [addons] CoreDNS deployed successfully..
INFO[0225] [dns] DNS provider coredns deployed successfully
INFO[0225] [addons] Setting up Metrics Server
INFO[0225] [addons] Saving ConfigMap for addon rke-metrics-addon to Kubernetes
INFO[0225] [addons] Successfully saved ConfigMap for addon rke-metrics-addon to Kubernetes
INFO[0225] [addons] Executing deploy job rke-metrics-addon
INFO[0230] [addons] Metrics Server deployed successfully
INFO[0230] [ingress] Setting up nginx ingress controller
INFO[0230] [addons] Saving ConfigMap for addon rke-ingress-controller to Kubernetes
INFO[0230] [addons] Successfully saved ConfigMap for addon rke-ingress-controller to Kubernetes
INFO[0230] [addons] Executing deploy job rke-ingress-controller
INFO[0235] [ingress] ingress controller nginx deployed successfully
INFO[0235] [addons] Setting up user addons
INFO[0235] [addons] no user addons defined
INFO[0235] Finished building Kubernetes cluster successfully

12. Install Kubectl and Validate K8S cluster Deployment

Download and install kubectl. Binaries can be found here for Linux and Mac:

https://storage.googleapis.com/kubernetes-release/release/v1.15.11/bin/linux/amd64/kubectl
https://storage.googleapis.com/kubernetes-release/release/v1.15.11/bin/darwin/amd64/kubectl

Execute the following after downloading kubectl:

Code Block
chmod +x kubectl
sudo mv kubectl /usr/local/bin/
kubectl version --short --client

To validate the K8s cluster deployment, execute the following from the directory containing kube_config_cluster.yml (generated by rke up):

Code Block
mkdir -p ~/.kube
sudo chown ubuntu:ubuntu kube_config_cluster.yml
cp kube_config_cluster.yml ~/.kube/config.onap
export KUBECONFIG=~/.kube/config.onap
kubectl config use-context onap
kubectl get nodes -o=wide

The output will look something like this after a successful deployment:

Code Block
NAME             STATUS   ROLES               AGE   VERSION    INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION   CONTAINER-RUNTIME
onap-control-1   Ready    controlplane,etcd   26d   v1.15.11   172.31.47.230   <none>        Ubuntu 18.04.4 LTS   5.3.0-1032-aws   docker://18.9.5
onap-control-2   Ready    controlplane,etcd   26d   v1.15.11   172.31.37.48    <none>        Ubuntu 18.04.4 LTS   5.3.0-1032-aws   docker://18.9.5
onap-control-3   Ready    controlplane,etcd   26d   v1.15.11   172.31.35.44    <none>        Ubuntu 18.04.4 LTS   5.3.0-1032-aws   docker://18.9.5
onap-wk-1        Ready    worker              26d   v1.15.11   172.31.44.213   <none>        Ubuntu 18.04.4 LTS   5.3.0-1032-aws   docker://18.9.5
onap-wk-10       Ready    worker              26d   v1.15.11   172.31.32.49    <none>        Ubuntu 18.04.4 LTS   5.3.0-1032-aws   docker://18.9.5
onap-wk-11       Ready    worker              26d   v1.15.11   172.31.46.44    <none>        Ubuntu 18.04.4 LTS   5.3.0-1032-aws   docker://18.9.5
onap-wk-12       Ready    worker              26d   v1.15.11   172.31.47.93    <none>        Ubuntu 18.04.4 LTS   5.3.0-1032-aws   docker://18.9.5
onap-wk-2        Ready    worker              26d   v1.15.11   172.31.38.97    <none>        Ubuntu 18.04.4 LTS   5.3.0-1032-aws   docker://18.9.5
onap-wk-3        Ready    worker              26d   v1.15.11   172.31.42.252   <none>        Ubuntu 18.04.4 LTS   5.3.0-1032-aws   docker://18.9.5
onap-wk-4        Ready    worker              26d   v1.15.11   172.31.39.113   <none>        Ubuntu 18.04.4 LTS   5.3.0-1032-aws   docker://18.9.5
onap-wk-5        Ready    worker              26d   v1.15.11   172.31.43.67    <none>        Ubuntu 18.04.4 LTS   5.3.0-1032-aws   docker://18.9.5
onap-wk-6        Ready    worker              26d   v1.15.11   172.31.33.217   <none>        Ubuntu 18.04.4 LTS   5.3.0-1032-aws   docker://18.9.5
onap-wk-7        Ready    worker              26d   v1.15.11   172.31.35.89    <none>        Ubuntu 18.04.4 LTS   5.3.0-1032-aws   docker://18.9.5
onap-wk-8        Ready    worker              26d   v1.15.11   172.31.36.52    <none>        Ubuntu 18.04.4 LTS   5.3.0-1032-aws   docker://18.9.5
onap-wk-9        Ready    worker              26d   v1.15.11   172.31.34.169   <none>        Ubuntu 18.04.4 LTS   5.3.0-1032-aws   docker://18.9.5