...
Once your infrastructure is running and your raw volumes are mounted on your Kubernetes nodes, deploy your Heketi / GlusterFS infrastructure. You can use the scripts included in OOM to automate this in your lab (not recommended for a production install).
Downloading the scripts
There are two scripts contained within the OOM resources. The first, deploy_glusterfs.bash, sets up your initial GlusterFS infrastructure; it also deploys a Heketi pod, which provides the REST API used to manage your GlusterFS cluster.
The second is a cleanup script that tears down your GlusterFS infrastructure when you are done, or when you would like to re-deploy a clean GlusterFS infrastructure.
Grab the OOM artifacts from Gerrit (we did this on the Rancher master node in our deployment):
...
<<TODO: video on running script>>
Validation
Once the script is finished, check to make sure you have a valid StorageClass defined, and GlusterFS/Heketi Pods running on each Kubernetes node:
(Pod names and IP addresses will vary)
```
kubectl get pods --namespace onap
kubectl describe sc --namespace onap
kubectl get service --namespace onap
```
e.g.:

```
NAME                      READY   STATUS    RESTARTS   AGE
glusterfs-cxqc2           1/1     Running   0          4d
glusterfs-djq4x           1/1     Running   0          4d
glusterfs-t7cj5           1/1     Running   0          4d
glusterfs-z4vk6           1/1     Running   0          4d
heketi-5876bd4875-hzw2d   1/1     Running   0          4d

Name:            glusterfs-sc
IsDefaultClass:  No
Annotations:     <none>
Provisioner:     kubernetes.io/glusterfs
Parameters:      resturl=http://10.43.185.167:8080,restuser=,restuserkey=
Events:          <none>

NAME                       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
heketi                     ClusterIP   10.43.185.167   <none>        8080/TCP   4d
heketi-storage-endpoints   ClusterIP   10.43.227.203   <none>        1/TCP      4d
```
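The pod check above can be scripted. The sketch below is self-contained for illustration: it parses sample output inlined as a variable, whereas on a live cluster you would feed it the output of `kubectl get pods --namespace onap` instead. The pod names are taken from the example output above.

```shell
# Sample output standing in for: kubectl get pods --namespace onap --no-headers
pods="glusterfs-cxqc2 1/1 Running 0 4d
glusterfs-djq4x 1/1 Running 0 4d
heketi-5876bd4875-hzw2d 1/1 Running 0 4d"

# Collect the names of any pods whose STATUS column is not "Running"
not_running=$(echo "$pods" | awk '$3 != "Running" {print $1}')

if [ -z "$not_running" ]; then
  echo "all GlusterFS/Heketi pods Running"
else
  echo "pods not ready: $not_running"
fi
```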
Deploy ONAP with OOM
You can choose any of the documented methods on this site or on onap.readthedocs.io, but here is a brief example of how you can deploy ONAP with GlusterFS.
Note: any persistent storage technology can be used in the example going forward; just make sure you have a StorageClass already defined.
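Before deploying ONAP, you can verify that dynamic provisioning works by creating a small test PVC against the StorageClass. This is a hedged sketch: the claim name, namespace, and size are illustrative, and the `kubectl` steps are shown as comments since they require a live cluster.

```shell
# Write a minimal test PVC manifest referencing the glusterfs-sc
# StorageClass created by the deploy script (names are illustrative).
cat > /tmp/test-pvc.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-test-claim
  namespace: onap
spec:
  storageClassName: glusterfs-sc
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
EOF
# On a live cluster you would then run:
#   kubectl apply -f /tmp/test-pvc.yaml
#   kubectl get pvc -n onap gluster-test-claim   # expect STATUS: Bound
grep 'storageClassName:' /tmp/test-pvc.yaml
```

If the claim binds, Heketi is provisioning volumes correctly and the ONAP deployment can proceed.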
Edit / validate your values.yaml file
There is a custom values file in ~oom/kubernetes/onap/resources/environments called values_global_gluster.yaml that can be used, or you can edit the master values.yaml file at ~oom/kubernetes/onap/values.yaml.
We will assume that you are doing the former.
Ensure that you have your storageClass defined in the global section of your values file:
```
vim ~oom/kubernetes/onap/resources/environments/values_global_gluster.yaml
```
Ensure your storage class is defined within the global persistence section:
```
global:
  # Change to an unused port prefix range to prevent port conflicts
  ...snip...

  # default mount path root directory referenced
  # by persistent volumes and log files
  persistence:
    storageClass: glusterfs-sc
    mountPath: /dockerdata-nfs
```
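A quick way to confirm the stanza is in place before running the deployment. This sketch writes the minimal persistence block from above to a scratch file so it is self-contained; in practice you would grep your edited values file instead.

```shell
# Scratch copy of the persistence stanza shown above (illustrative path)
cat > /tmp/values_check.yaml <<'EOF'
global:
  persistence:
    storageClass: glusterfs-sc
    mountPath: /dockerdata-nfs
EOF

# Confirm the StorageClass name matches the one created earlier
grep 'storageClass:' /tmp/values_check.yaml
```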