The steps described on this page are run as "ubuntu", a non-root user.
Clone the OOM project from ONAP gerrit
Run the following command on the master node to clone the OOM project from ONAP gerrit into any directory you prefer; that directory is referred to as "{$OOM}" throughout this page.
git clone http://gerrit.onap.org/r/oom
SDN-C Cluster Deployment
Configure SDN-C Cluster Deployment
We use Kubernetes replicas to achieve SDN-C cluster deployment (see About SDN-C Clustering for the desired goal).
This only needs to be done once and, at the moment, all modifications are done manually (they can be automated via scripting in the future if the need arises).
Get New startODL.sh Script From Gerrit Topic SDNC-163
The source of the new startODL.sh script, gerrit change 25475, was merged into the sdnc/oam project on December 15th, 2017.
Do the following to get the new startODL.sh script, which configures ODL clustering for the SDN-C cluster.
# | Purpose | Command and Examples |
---|---|---|
1 | Get the new startODL.sh script content | Go to gerrit change 25475 and click installation/sdnc/src/main/scripts/startODL.sh under the Files section to view the text of the script, or click the Download button to download the startODL_new.sh.zip file, extract the .sh file inside it, and rename it to "startODL.sh". |
2 | Create the new startODL.sh on the Kubernetes node VM | mkdir -p /dockerdata-nfs/cluster/script then vi /dockerdata-nfs/cluster/script/startODL.sh and paste the content from step 1 into this file. |
3 | Give execution permission to the new startODL.sh script | chmod 777 /dockerdata-nfs/cluster/script/startODL.sh |
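Steps 2 and 3 above can be condensed into a short shell sketch. The BASE variable is an addition for dry runs outside the cluster; on the real Kubernetes node leave BASE empty so the files land under /dockerdata-nfs/cluster/script:

```shell
# Condensed sketch of steps 2 and 3. BASE lets this be tried anywhere;
# on the Kubernetes node, leave it empty to use /dockerdata-nfs directly.
BASE="${BASE:-$(mktemp -d)}"
SCRIPT_DIR="$BASE/dockerdata-nfs/cluster/script"

mkdir -p "$SCRIPT_DIR"                # step 2: create the script directory
: > "$SCRIPT_DIR/startODL.sh"         # step 2: create the file (paste the step-1 content into it)
chmod 777 "$SCRIPT_DIR/startODL.sh"   # step 3: give execution permission
```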
Get SDN-C Cluster Templates From Gerrit Topic SDNC-163
The source of the templates, gerrit change 25467, was merged into the sdnc/oam project on December 20th, 2017.
Skip steps 1 and 2 if your cloned OOM project already includes this change.
Skip step 3 if you skipped the previous section (adding the startODL.sh script).
Skip step 4 if you don't want to add/deploy extra features/bundles/packages.
Step 5 is important: it determines the number of sdnc and db pods.
# | Purpose | Command and Examples |
---|---|---|
1 | Get the git fetch command for the shared templates | Go to gerrit change 25467, click the Download downward arrow, select "anonymous http" from the drop-down list in the bottom-right corner, then click the clipboard icon on the Checkout line to copy the git commands (which include the git fetch and checkout commands). |
2 | Fetch the shared templates into the oom directory on the Kubernetes node VM | cd {$OOM} then execute the git commands from step 1. |
3 | Link the new startODL.sh | Skip this step if you skipped the "Get New startODL.sh Script" section. Be careful when editing YAML files: they are sensitive to the number of spaces, and copy/paste from a browser can alter whitespace. vi kubernetes/sdnc/templates/sdnc-statefulset.yaml and make the changes to mount the new script. |
4 | Link the ODL deploy directory | Skip this step if you are not going to use the test bundle to test the SDN-C cluster and load balancing. ODL automatically installs bundles/packages placed under its deploy directory; this mount point lets you drop a bundle/package into /dockerdata-nfs/cluster/deploy on the Kubernetes node so that it is automatically installed in the sdnc pods (under /opt/opendaylight/current/deploy). vi kubernetes/sdnc/templates/sdnc-statefulset.yaml and make the changes to mount the deploy directory. |
5 | Enable cluster configuration | vi kubernetes/sdnc/values.yaml and change the cluster-related fields to the new values. |
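As a reference for step 5, the cluster-related fields in kubernetes/sdnc/values.yaml look roughly like the following. The field names are taken from the SDNC-163 change; verify them against your checked-out copy before editing:

```yaml
# kubernetes/sdnc/values.yaml (fragment) -- cluster sizing
enableODLCluster: true     # turn on ODL clustering
numberOfODLReplicas: 3     # number of sdnc (ODL) pods
numberOfDbReplicas: 2      # number of db pods
```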
Make the nfs-provisioner Pod Run on the Node Where the NFS Server Runs
Skip this section if any of the following conditions applies:
- you have skipped "2. Share the /dockerdata-nfs Folder between Kubernetes Nodes"
- the Kubernetes master and workers are located on the same VM
- you are using Kubernetes Federation (verified on multiple VMs)
Verify (from Master node)
# | Purpose | Command and Example |
---|---|---|
1 | Find the node name | kubectl get node |
2 | Set label on the node | kubectl label nodes <NODE_NAME_FROM_LAST_STEP> disktype=ssd |
3 | Check the label has been set on the node | kubectl get node --show-labels |
4 | Update the nfs-provisioner pod template to force it to run on the NFS server node | In the nfs-provisoner-deployment.yaml file, add "spec.template.spec.nodeSelector" for the "nfs-provisioner" pod |
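The nodeSelector addition in step 4 could look like the fragment below. This is an illustrative sketch: the surrounding fields come from your existing nfs-provisoner-deployment.yaml, and disktype=ssd matches the label set in step 2:

```yaml
# nfs-provisoner-deployment.yaml (fragment) -- pin the pod to the NFS server node
spec:
  template:
    spec:
      nodeSelector:
        disktype: ssd   # matches the label applied in step 2
      containers:
        - name: nfs-provisioner
          # ... existing container definition unchanged ...
```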
Create the ONAP Config
Set up the onap-parameters.yaml file
The following commands must be run on the master node before creating the ONAP configuration.
cd {$OOM}/kubernetes/config
cp onap-parameters-sample.yaml onap-parameters.yaml
Run createConfig
To simplify the steps in this section
You can skip the steps in this section by following the instructions in autoCreateOnapConfig of the Scripts section to
- create {$OOM}/kubernetes/oneclick/tools/autoCreateOnapConfig.bash file
- run it and wait until the script completes
# | Purpose | Command and Examples |
---|---|---|
0 | Set the OOM Kubernetes config environment | cd {$OOM}/kubernetes/oneclick then source setenv.bash |
1 | Run the createConfig script to create the ONAP config | cd {$OOM}/kubernetes/config then ./createConfig.sh -n onap |
2 | Wait for the config-init container to finish | Monitor the onap config pod until it reaches Completed STATUS. |
 | Additional checks for config-init | |
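The wait in step 2 can be automated with a small polling loop. This is a sketch: `wait_for_status` runs whatever command you pass it until the output matches the wanted value, and the commented kubectl pipeline is one assumed way to extract the config pod's STATUS column:

```shell
# Poll a status-reporting command until it prints the wanted value,
# retrying up to a given number of times with a 5-second pause.
wait_for_status() {
  wanted="$1"; tries="$2"; shift 2
  i=0
  while [ "$i" -lt "$tries" ]; do
    if [ "$("$@")" = "$wanted" ]; then
      return 0   # status reached
    fi
    i=$((i + 1))
    sleep 5
  done
  return 1       # gave up
}

# On the master node this could be used as (illustrative pipeline):
#   wait_for_status Completed 120 sh -c \
#     "kubectl get pods --all-namespaces -a | awk '/config/ {print \$4}'"
```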
Deploy the SDN-C Application
To simplify the steps in this section
You can skip the steps in this section by following the instructions in autoDeploySdnc of the Scripts section to
- create {$OOM}/kubernetes/oneclick/tools/autoDeploySdnc.bash file
- run it and wait until the script completes
Execute the following on the master node.
# | Purpose | Command and Examples |
---|---|---|
0 | Set the OOM Kubernetes config environment (skip this step if you have already set it in the same terminal) | cd {$OOM}/kubernetes/oneclick then source setenv.bash |
1 | Run the createAll script to deploy the SDN-C application | cd {$OOM}/kubernetes/oneclick then ./createAll.bash -n onap -a sdnc |
 | Ensure that the SDN-C application has started | Use the kubectl get pods command to monitor the SDN-C startup. |
2 | Validate that all SDN-C pods and services are created properly | helm ls --all, kubectl get namespace, kubectl get deployment --all-namespaces, kubectl get clusterrolebinding --all-namespaces, kubectl get serviceaccounts --all-namespaces, kubectl get service -n onap-sdnc, kubectl get pods --all-namespaces -a, docker ps |grep sdnc |
3 | Validate that the SDN-C bundles are up | |
4 | Validate that the SDN-C APIs are shown on the ODL RestConf page | Access the ODL RestConf page from the following URL: |
5 | Validate the SDN-C ODL cluster | Goal: verify that the SDN-C ODL cluster is running properly. Prerequisites: use the ODL integration tool to monitor the ODL cluster. Use the testCluster RPC to test SDN-C load sharing: the testCluster-bundle.zip provides a test bundle which offers a testCluster API to help validate SDN-C RPC load sharing in the deployed SDN-C cluster. It is as easy as doing the following: |
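As an illustration of how an RPC such as testCluster is typically invoked over RESTCONF (the URL layout follows standard ODL conventions; the exact RPC resource name, port, and credentials are assumptions to check against the test bundle and your deployment):

```shell
# Build the RESTCONF operations URL for an RPC on a given host and port.
# ODL's default RESTCONF port is 8181; the NodePort exposed by the sdnc
# service in your deployment may differ.
restconf_rpc_url() {
  host="$1"; port="$2"; rpc="$3"
  echo "http://${host}:${port}/restconf/operations/${rpc}"
}

# Hypothetical invocation against one cluster member (placeholders, not
# verified values):
#   curl -u admin:admin -X POST "$(restconf_rpc_url <NODE_IP> <NODE_PORT> <TESTCLUSTER_RPC>)"
```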
Undeploy the SDN-C Application
To simplify the steps in this section
You can skip the steps in this section by following the instructions in autoCleanSdnc of the Scripts section to
- create {$OOM}/kubernetes/oneclick/tools/autoCleanSdnc.bash file
- run it and wait until the script completes
# | Purpose | Command and Examples |
---|---|---|
0 | Set the OOM Kubernetes config environment (skip this step if you have already set it in the same terminal) | cd {$OOM}/kubernetes/oneclick then source setenv.bash |
1 | Run the deleteAll script to undeploy the SDN-C application | ./deleteAll.bash -n onap -a sdnc |
2 | Validate that all SDN-C pods and services are removed | docker ps |grep sdnc, kubectl get pods --all-namespaces -a, kubectl get service --all-namespaces, kubectl get serviceaccounts --all-namespaces, kubectl get clusterrolebinding --all-namespaces, kubectl get deployment --all-namespaces, kubectl get namespaces, helm ls --all |
Remove the ONAP Config
To simplify the steps in this section
You can skip the steps in this section by following the instructions in autoCleanOnapConfig of the Scripts section to
- create {$OOM}/kubernetes/oneclick/tools/autoCleanOnapConfig.bash file
- run it and wait until the script completes
# | Purpose | Command and Examples |
---|---|---|
0 | Set the OOM Kubernetes config environment (skip this step if you have already set it in the same terminal) | cd {$OOM}/kubernetes/oneclick then source setenv.bash |
1 | Run the deleteAll script to remove the ONAP config | ./deleteAll.bash -n onap |
2 | Clean up the leftover items which were created by the config/createConfig script but not cleaned up by the oneclick/deleteAll script | |
3 | Remove the shared ONAP config directory | sudo rm -rf /dockerdata-nfs/onap |
Scripts
The following scripts are intended to be placed under the {$OOM}/kubernetes/oneclick/tools directory; they help simplify various procedures by automating them.
7 Comments
Brian Freeman
I think the version of SDNC that you are using is out of date.
There should be containers for ueb-listener and dmaap-listener from the Amsterdam release.
Will this work for the non-voting cluster members in the geo-redundant site ?
Brian
Beili Zhou
Yes, Brian Freeman, we are aware that the SDNC image is out of date; we will update the image reference in the OOM kubernetes/sdnc/values.yaml file a bit later. (smile)
Good point about the `ueb-listener` and `dmaap-listener` containers, we will take that into account.
Hao Kuang, would you please answer the question of
Hao Kuang
Good point. For ueb-listener and dmaap-listener, we discussed this internally before. A possible solution could be a message-bus container behind its own services, but this needs more investigation.
For the second question: theoretically it should work, but we haven't tried that yet. For a geo-redundant site, a possible solution is to utilize Kubernetes clusters; for example, we can have one node (VM) X in place A and one node (VM) Y in place B. Starting from version 1.2, Kubernetes supports nodeAffinity, which provides the ability to use matchExpressions to deploy your pods on specific nodes. Based on this, we can deploy 3 voting nodes on X and 3 non-voting nodes on Y. The shortcoming here is that places A and B should be very close; otherwise we have to consider federation.
Reference: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
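(For reference, the matchExpressions-based nodeAffinity described above could be sketched as follows. This is an illustrative pod-spec fragment, not part of the OOM templates; the sdnc-role label and its values are hypothetical names that would need to be applied to the nodes beforehand:)

```yaml
# Illustrative pod-spec fragment: pin voting members to node X via a
# hypothetical sdnc-role=voting label set on that node in advance.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: sdnc-role
              operator: In
              values:
                - voting
```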
Brian Freeman
Opendaylight implements the Raft protocol for replicating data and picking the shard leader, so I doubt that K8s alone can do the stateful data replication and consistency needed. My question was about the configuration of Opendaylight clustering in startODL.sh etc., so that the mate site has non-voting members in the ODL cluster (not to be confused with the K8s cluster).
Hao Kuang
I guess you are talking about turning on persistence of data replication for the ODL data cluster. The K8S cluster can do the data replication among nodes as you said, but on the application level the ODL cluster, which uses Akka's Raft implementation, still has to be used. Assume we have three SDN-C pods and two K8S nodes now. My concern is that if the MD-SAL data of each pod is persisted on each K8S node, then since the MD-SAL data replicas are identical, we waste extra storage saving the same data. Maybe we can use an external storage cluster system.
---
Currently, this solution doesn't include non-voting members. We thought it would be better to test this with a geo-redundant site using K8S.
Abdelmuhaimen Seaudi
I pulled the oom latest source, but it does not contain the oom/kubernetes/oneclick directory
I am now stuck at the create onap config step.
How can I proceed ?
cd {$OOM}/kubernetes/oneclick
source setenv.bash
./createAll.bash -n onap -a sdnc
Michael O'Brien
Beijing switched to a helm-based install about 6-8 weeks ago; those instructions are for an earlier version of master.
/michael