...
Get New startODL.sh Script From Gerrit Topic SDNC-163
| # | Purpose | Command Examples |
|---|---|---|
| 1 | Get the new startODL.sh script content | Go to gerrit change 25475, click `installation/sdnc/src/main/scripts/startODL.sh` under the Files section to view the details of the changes, click the Download button to download the startODL_new.sh.zip file, open the .sh file inside the zip, and copy its content (to be used in step 2) |
| 2 | Create the new startODL.sh on the Kubernetes node VM | `vi /home/ubuntu/cluster/script/startODL.sh` and paste the content copied in step 1 into this file |
| 3 | Give execution permission to the new startODL.sh script | `chmod 777 /home/ubuntu/cluster/script/startODL.sh` |
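Steps 2 and 3 above can be sketched as a small script. The placeholder script body and the neutral default directory are assumptions made so the sketch runs anywhere; on the Kubernetes node VM the path is /home/ubuntu/cluster/script, and the real content is pasted from gerrit change 25475 (step 1):

```shell
#!/bin/sh
# Sketch of steps 2-3: install the new startODL.sh and make it executable.
# On the Kubernetes node VM, SCRIPT_DIR would be /home/ubuntu/cluster/script;
# the default below is a neutral path so the sketch can run anywhere.
SCRIPT_DIR="${SCRIPT_DIR:-/tmp/cluster/script}"
mkdir -p "$SCRIPT_DIR"

# The real content is pasted from gerrit change 25475 (step 1);
# this placeholder stands in for it.
cat > "$SCRIPT_DIR/startODL.sh" <<'EOF'
#!/bin/bash
# ... content copied from gerrit change 25475 ...
EOF

chmod 777 "$SCRIPT_DIR/startODL.sh"  # mode 777 as used in the guide
ls -l "$SCRIPT_DIR/startODL.sh"
```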
Get SDN-C Cluster Templates From Gerrit Topic SDNC-163
Only do this before the SDN-C cluster code is merged into the gerrit OOM project.
| # | Purpose | Command and Examples |
|---|---|---|
| 1 | Get the git fetch command for the shared templates | Go to gerrit change 25467, click the Download downward arrow, and click the clipboard icon on the same line as Checkout to copy the git fetch command. |
| 2 | Fetch the shared templates into the oom directory on the Kubernetes node VM | `cd oom`, then run the git fetch command copied in step 1, e.g. `git fetch https://gerrit.onap.org/r/oom refs/changes/67/25467/19 && git checkout FETCH_HEAD` |
| 3 | Link the new startODL.sh | `vi kubernetes/sdnc/templates/sdnc-statefulset.yaml` and add the fields listed below |
| 4 | Enable cluster configuration | `vi kubernetes/sdnc/values.yaml` and change the fields listed below |

Fields to add in sdnc-statefulset.yaml (step 3):

| Field | Value |
|---|---|
| .spec.containers.volumeMounts | `- mountPath: /opt/onap/sdnc/bin/startODL.sh` with `name: sdnc-startodl`, and `- mountPath: /opt/opendaylight/current/deploy` with `name: sdnc-deploy` |
| .spec.volumes | `- name: sdnc-deploy` with `hostPath.path: /home/ubuntu/cluster/deploy`, and `- name: sdnc-startodl` with `hostPath.path: /home/ubuntu/cluster/script/startODL.sh` |

Fields to change in values.yaml (step 4):

| Field | New value | Old value |
|---|---|---|
| enableODLCluster | true | false |
| numberOfODLReplicas | 3 | 1 |
| numberOfDbReplicas | 2 | 1 |
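The step 3 additions might look like the fragment below inside the statefulset's pod spec. Only the mount paths, volume names, and host paths come from the table above; the surrounding structure and the container name are illustrative assumptions:

```yaml
# Sketch of the sdnc-statefulset.yaml additions from step 3.
# Only the volumeMounts/volumes entries are from the guide; the
# surrounding structure (container name etc.) is illustrative.
spec:
  template:
    spec:
      containers:
      - name: sdnc-controller-container   # assumed container name
        volumeMounts:
        - mountPath: /opt/onap/sdnc/bin/startODL.sh
          name: sdnc-startodl
        - mountPath: /opt/opendaylight/current/deploy
          name: sdnc-deploy
      volumes:
      - name: sdnc-deploy
        hostPath:
          path: /home/ubuntu/cluster/deploy
      - name: sdnc-startodl
        hostPath:
          path: /home/ubuntu/cluster/script/startODL.sh
```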
Make the nfs-provisioner Pod Run on the Node Where the NFS Server Runs
On the node where you have configured the NFS server (from step 3, "Share the /dockerdata-nfs Folder between Kubernetes Nodes"), run the following:
| # | Purpose | Command and Example |
|---|---|---|
| 1 | Find the node name | Run `ps -ef \| grep nfs` on each node to identify the NFS server node (see the example outputs below), then run `kubectl get node` to find its node name |
| 2 | Set a label on the node | `kubectl label nodes <NODE_NAME_FROM_LAST_STEP> disktype=ssd` |
| 3 | Check that the label has been set on the node | `kubectl get node --show-labels` |
| 4 | Update the nfs-provisioner pod template to force it to run on the NFS server node | In the nfs-provisoner-deployment.yaml file, add `spec.template.spec.nodeSelector` for the `nfs-provisioner` pod |

Step 1 example: the node with the NFS server runs `nfsd` processes:

```
ubuntu@sdnc-k8s:~$ ps -ef|grep nfs
root      3473      2  0 Dec07 ?        00:00:00 [nfsiod]
root     11072      2  0 Dec06 ?        00:00:00 [nfsd4_callbacks]
root     11074      2  0 Dec06 ?        00:00:00 [nfsd]
root     11075      2  0 Dec06 ?        00:00:00 [nfsd]
root     11076      2  0 Dec06 ?        00:00:00 [nfsd]
root     11077      2  0 Dec06 ?        00:00:00 [nfsd]
root     11078      2  0 Dec06 ?        00:00:00 [nfsd]
root     11079      2  0 Dec06 ?        00:00:03 [nfsd]
root     11080      2  0 Dec06 ?        00:00:13 [nfsd]
root     11081      2  0 Dec06 ?        00:00:42 [nfsd]
ubuntu@sdnc-k8s:~$
```

A node with only the NFS client runs the nfs svc process instead:

```
ubuntu@sdnc-k8s-2:~$ ps -ef|grep nfs
ubuntu    5911   5890  0 20:10 pts/0    00:00:00 grep --color=auto nfs
root     18739      2  0 Dec06 ?        00:00:00 [nfsiod]
root     18749      2  0 Dec06 ?        00:00:00 [nfsv4.0-svc]
ubuntu@sdnc-k8s-2:~$
```

Then find the node name:

```
ubuntu@sdnc-k8s:~$ kubectl get node
NAME         STATUS    ROLES     AGE       VERSION
sdnc-k8s     Ready     master    6d        v1.8.4
sdnc-k8s-2   Ready     <none>    6d        v1.8.4
ubuntu@sdnc-k8s:~$
```

Step 2 example:

```
ubuntu@sdnc-k8s:~$ kubectl label nodes sdnc-k8s disktype=ssd
node "sdnc-k8s" labeled
```

(The original page includes a screenshot here titled "An example of the nfs-provisioner pod with nodeSelector".)
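A sketch of what the step 4 addition might look like in the nfs-provisoner-deployment.yaml file. Only the `nodeSelector` lines and the `disktype: ssd` label come from this guide; the surrounding structure is an illustrative assumption:

```yaml
# Illustrative fragment: forcing the nfs-provisioner pod onto the
# node labeled in step 2. Only nodeSelector/disktype come from the guide.
spec:
  template:
    spec:
      nodeSelector:
        disktype: ssd        # label set in step 2
      containers:
      - name: nfs-provisioner
        # ... existing container spec unchanged ...
```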
Create the ONAP Config
| # | Purpose | Command and Examples |
|---|---|---|
| 0.1 | (Only once) Create the ONAP config from the sample YAML file | `cd {$OOM}/kubernetes/config` then `cp onap-parameters-sample.yaml onap-parameters.yaml` |
| 0 | Set the OOM Kubernetes config environment | `cd {$OOM}/kubernetes/oneclick` then `source setenv.bash` |
| 1 | Run the createConfig script to create the ONAP config | `cd {$OOM}/kubernetes/config` then `./createConfig.sh -n onap` |
| 2 | Wait for the config-init container to finish | Monitor the ONAP config init until it reaches the Completed STATUS: `kubectl get pod --all-namespaces -a` |

Example of createConfig output:

```
**** Creating configuration for ONAP instance: onap
namespace "onap" created
NAME:   onap-config
LAST DEPLOYED: Wed Nov  8 20:47:35 2017
NAMESPACE: onap
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME                   DATA  AGE
global-onap-configmap  15    0s

==> v1/Pod
NAME    READY  STATUS             RESTARTS  AGE
config  0/1    ContainerCreating  0         0s

**** Done ****
```

For step 2, the final output should show the onap config pod in Completed STATUS (the original page includes a screenshot of this output here).
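The manual polling in step 2 can be automated with a small wait loop. On the live cluster the command would be `kubectl get pod --all-namespaces -a` and the pattern `Completed`; both are parameters here, and the final call uses a local stand-in for the kubectl output, so the sketch is self-contained:

```shell
#!/bin/sh
# Generic wait loop for step 2: repeatedly run a command until its
# output contains the expected status, or give up after N retries.
wait_for_status() {
  cmd="$1"; pattern="$2"; retries="${3:-60}"
  i=0
  while [ "$i" -lt "$retries" ]; do
    if sh -c "$cmd" | grep -q "$pattern"; then
      echo "matched: $pattern"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "timed out waiting for: $pattern" >&2
  return 1
}

# Local stand-in for "kubectl get pod --all-namespaces -a" (illustration only):
wait_for_status "echo 'config   0/1   Completed   0   2d'" "Completed"
```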
Additional checks for config-init:

helm — `helm ls --all`:

```
NAME         REVISION  UPDATED                   STATUS    CHART         NAMESPACE
onap-config  1         Tue Nov 21 17:07:13 2017  DEPLOYED  config-1.1.0  onap
```

`helm status onap-config`:

```
LAST DEPLOYED: Tue Nov 21 17:07:13 2017
NAMESPACE: onap
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME                   DATA  AGE
global-onap-configmap  15    2d

==> v1/Pod
NAME    READY  STATUS     RESTARTS  AGE
config  0/1    Completed  0         2d
```

kubernetes namespaces — `kubectl get namespaces`:

```
NAME          STATUS    AGE
default       Active    15d
kube-public   Active    15d
kube-system   Active    15d
onap          Active    2d
```
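The namespace check above can be scripted. The captured output below mirrors the example in this section so the sketch runs anywhere; on a live cluster the `output` variable would come from `kubectl get namespaces` directly (an assumption of this sketch):

```shell
#!/bin/sh
# Sketch of the namespace check: confirm the "onap" namespace is Active.
# The captured text mirrors the example output above; on a live cluster
# use: output="$(kubectl get namespaces)"
output="$(cat <<'EOF'
NAME          STATUS    AGE
default       Active    15d
kube-public   Active    15d
kube-system   Active    15d
onap          Active    2d
EOF
)"
if echo "$output" | awk '$1 == "onap" && $2 == "Active"' | grep -q onap; then
  echo "onap namespace is Active"
else
  echo "onap namespace missing" >&2
  exit 1
fi
```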
...