You can skip this step if your Kubernetes cluster deployment is on a single VM.


When setting up a Kubernetes cluster, the folder /dockerdata-nfs must be shared between all of the Kubernetes worker nodes. This folder is used as a volume by the ONAP pods to share data, so there can only be one copy.


On this page we do this by setting up an NFS server on the Kubernetes Master node and then mounting the shared directory on all Kubernetes worker nodes.

These instructions were written using VMs created from an ubuntu-16.04-server-cloudimg-amd64-disk1 image.

Any user can run the steps on this page, as all of the commands use "sudo".



On the NFS Server VM (Kubernetes Master Node)

The actual /dockerdata-nfs folder lives on the Kubernetes Master node, which also runs the NFS server that exports this folder.

Set up the /dockerdata-nfs Folder

Choose one of the following to create the /dockerdata-nfs folder on this VM:

Use local directory

Run the following commands as the ubuntu user (any sudo-capable user works):

sudo mkdir -p /dockerdata-nfs
sudo chmod 777 /dockerdata-nfs
Use separate volume

Follow the instructions in Create an OpenStack Volume to create a volume and attach it to the VM instance that you have chosen.
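If you go the separate-volume route, the volume must be formatted and mounted at /dockerdata-nfs before it is exported. A minimal sketch, assuming the attached volume shows up as /dev/vdb (the device name depends on your OpenStack setup; verify it with lsblk before formatting):

```shell
# Identify the attached volume first (device name below is an assumption)
lsblk

# Format the new volume (this destroys any existing data on it)
sudo mkfs.ext4 /dev/vdb

# Mount it at /dockerdata-nfs and open up permissions
sudo mkdir -p /dockerdata-nfs
sudo mount /dev/vdb /dockerdata-nfs
sudo chmod 777 /dockerdata-nfs

# Persist the mount across reboots
echo '/dev/vdb /dockerdata-nfs ext4 defaults 0 0' | sudo tee -a /etc/fstab
```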

Set up the NFS Server and Export the /dockerdata-nfs Folder

Execute the following commands as the ubuntu user.

nfs server
sudo apt update
sudo apt install nfs-kernel-server

sudo vi /etc/exports
# append the following
/dockerdata-nfs *(rw,no_root_squash,no_subtree_check)

sudo service nfs-kernel-server restart

$ ps -ef|grep nfs
root 2205 2 0 15:59 ? 00:00:00 [nfsiod]
root 2215 2 0 15:59 ? 00:00:00 [nfsv4.0-svc]
root 13756 2 0 18:19 ? 00:00:00 [nfsd4_callbacks]
root 13758 2 0 18:19 ? 00:00:00 [nfsd]
root 13759 2 0 18:19 ? 00:00:00 [nfsd]
root 13760 2 0 18:19 ? 00:00:00 [nfsd]
root 13761 2 0 18:19 ? 00:00:00 [nfsd]
root 13762 2 0 18:19 ? 00:00:00 [nfsd]
root 13763 2 0 18:19 ? 00:00:00 [nfsd]
root 13764 2 0 18:19 ? 00:00:00 [nfsd]
root 13765 2 0 18:19 ? 00:00:00 [nfsd]
ubuntu 13820 23326 0 18:19 pts/0 00:00:00 grep --color=auto nfs
$
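Beyond checking for nfsd processes, you can confirm that the export itself is visible. A quick check on the server (showmount is provided by the NFS packages installed above):

```shell
# Show what the kernel is currently exporting, with options
sudo exportfs -v

# List the exports as an NFS client would see them
showmount -e localhost
```

/dockerdata-nfs should appear in both listings; if it does not, re-check /etc/exports and run `sudo exportfs -ra`.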


On the other VMs (Kubernetes Worker Nodes)

Mount the /dockerdata-nfs Folder

On each of the Kubernetes worker nodes, mount the /dockerdata-nfs folder. Run the following commands as the ubuntu user.

nfs mount
sudo apt update
sudo apt install nfs-common -y
sudo mkdir /dockerdata-nfs
sudo chmod 777 /dockerdata-nfs


# Option 1:
sudo mount -t nfs -o proto=tcp,port=2049 <hostname or IP address of NFS server>:/dockerdata-nfs /dockerdata-nfs
sudo vi /etc/fstab
# append the following
<hostname or IP address of NFS server>:/dockerdata-nfs /dockerdata-nfs   nfs    auto  0  0


# Option 2:
sudo vi /etc/fstab
# append the following line.
<hostname or IP address of NFS server>:/dockerdata-nfs /dockerdata-nfs   nfs    auto  0  0
# run the following line
sudo mount -a

Verify it:

Touch a file inside the /dockerdata-nfs directory on the Kubernetes Master and check that the same file appears under /dockerdata-nfs on all Kubernetes worker nodes.
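For example (the file name is arbitrary):

```shell
# On the Kubernetes Master (NFS server): create a test file in the shared folder
touch /dockerdata-nfs/nfs-share-test

# On each Kubernetes worker node: the same file should be listed
ls -l /dockerdata-nfs/nfs-share-test

# Clean up afterwards (on the master)
rm /dockerdata-nfs/nfs-share-test
```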

Unmount the shared directory

Use the lazy (-l) option on Kubernetes worker nodes to force unmount the mount point.

For example,

sudo umount -l /dockerdata-nfs



4 Comments

  1. sudo chmod 777 /export
    sudo chmod 777 /export/dockerdata-nfs

    needs a mkdir for both dirs

    verifying before editing the page...

    ubuntu@ip-172-31-82-43:~$ sudo mount -t nfs -o proto=tcp,port=2049 cdrancher.onap.info:/dockerdata-nfs /dockerdata-nfs   nfs    auto  0  0:/dockerdata-nfs /dockerdata-nfs

    does not work in my ubuntu 16.04 VM on AWS

    the following reverse sequence on the client works

    sudo vi /etc/fstab

    "cdrancher.onap.info:/dockerdata-nfs /dockerdata-nfs   nfs    auto  0  0"

    sudo mount -a


    The above is specific to EC2 EBS volumes


    1. I have validated; these commands are not needed.

  2. very nice and easy to setup. Thanks for documenting!

  3. Isn't this bad practice? Everything should be code. Kubernetes helm charts should use `nfs` instead of `hostPath` to mount the shared NFS store directly on the container.