OOM ONAP offline installer
Installation Guide
(Community Edition)


1. Introduction for CE1 delivery
2. Environment
3. Preparation (before installation)
4. Installation
4.1 Deploy infrastructure
4.2 ONAP deployment
Appendix 1: Troubleshooting
Appendix 2: Release Manifest
Appendix 3: ONAP values.yaml configuration
Robot values.yaml configuration



Version Control

| Version | Date       | Modified by   | Comment     |
| 0.1     | 13.10.2018 | Michal Ptacek | First draft |

Contributors

| Name           | Mail                          |
| Michal Ptáček  | m.ptacek@partner.samsung.com  |
| Samuli Slivius | s.silvius@partner.samsung.com |
| Timo Puha      | t.puha@partner.samsung.com    |
| Petr Ospaly    | p.ospaly@partner.samsung.com  |
| Pawel Mentel   | p.mentel@samsung.com          |
| Witold Kopeld  | w.kopel@samsung.com           |

1. Introduction for CE1 delivery


This installation guide covers instructions on how to deploy ONAP using the Samsung offline installer. The precondition is a successfully built SI (Self-Installer) package, which is addressed in the previous guide. All artifacts needed for this deployment were collected from an online OOM ONAP Beijing deployment from the Beijing branch. The release was verified on RHEL 7.4 deployments (RHEL cloud image). If different RHEL 7.4 images are used, there might be problems related to package clashes. The image was downloaded from the official Red Hat site:
https://access.redhat.com/downloads/content/69/ver=/rhel---7/7.4/x86_64/product-software
Red Hat Enterprise Linux 7.4 KVM Guest Image
Last modified: 2018-03-23
SHA-256 Checksum: b9fd65e22e8d3eb82eecf78b36471109fa42ee46fc12fd2ba2fa02e663fd21ef
Later on it might be possible to use an image other than the cloud RHEL 7.4 one, but this decision must be made on PO level and aligned with business.



Current limitations are:

  • Tested on the RHEL 7.4 cloud image (on OpenStack VMs only)
  • Verified by the vFWCL demo (in an OpenStack environment only, inside the same tenant where ONAP is deployed)














2. Environment
The install_server is in some contexts also referred to as the infrastructure node; it is also the node hosting the rancher server container.




HW footprint:

    • install_server: (nexus, nginx, dns, rancher_server)
      • Red Hat Enterprise Linux 7.4 KVM Guest Image
      • 16G+ RAM
      • 200G+ disk space (minimum 160GB)
      • 10G+ swap
      • 8+ vCPU


    • kubernetes_node(s): (rancher_agent, ONAP OOM node)
      • Red Hat Enterprise Linux 7.4 KVM Guest Image
      • 64G+ RAM
      • 120G+ disk space
      • 10G+ swap
      • 16+ vCPU

3. Preparation (before installation)


  • (Step 1) Ensure passwordless root login

From install_server to kubernetes_nodes.

As we are using the cloud RHEL 7.4 image, root access is disabled by default. It is possible to log in to all VMs as cloud-user, using the known key inserted during spawning by cloud-init.

Installation scripts must be executed as the root user (a non-root user with sudo rights is not sufficient; supporting that will be done later as a hardening topic) and they require passwordless login to the other k8s nodes.

In general, to achieve a passwordless connection, one can create a key using ssh-keygen and distribute the public key (e.g. /root/.ssh/id_rsa.pub) to the /root/.ssh/authorized_keys file on all k8s nodes. On OpenStack-created instances, the cloud-init text prohibiting root login should be removed as well (described in the steps below).

If the VMs were spawned in OpenStack, the following procedure can be used to allow ssh as the root user (with a private key):

[root@rc3l-install-server ~]# ssh rc3-install-compute2
The authenticity of host 'rc3-install-compute2 (10.6.6.13)' can't be established.
ECDSA key fingerprint is SHA256:645LdswQyZtoxHBv3+6hvC62liAdwEkbr8w6sN392YI.
ECDSA key fingerprint is MD5:32:a6:70:26:0a:ae:56:c1:e3:2a:b6:fa:b7:40:5a:d6.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'rc3-install-compute2,10.6.6.13' (ECDSA) to the list of known hosts.
Please login as the user "cloud-user" rather than the user "root".

Log in to those servers as cloud-user (using the correct OpenStack key):

[root@rc3l-install-server ~]# ssh -i ~/correct_key cloud-user@rc3-install-compute2

Switch to the root user:

[cloud-user@rc3-install-compute2 ~]$ sudo su -

And adapt /root/.ssh/authorized_keys by removing the cloud-init text prohibiting root login (everything before the ssh-rsa key itself):

[root@rc3-install-compute2 ~]# vi /root/.ssh/authorized_keys

# The following ssh key was injected by Nova
no-port-forwarding,no-agent-forwarding,no-X11-forwarding,command="echo 'Please login as the user \"cloud-user\" rather than the user \"root\".';echo;sleep 10" ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDPwF2bYm2QuqZpjuAcZDJTcFdUkKv4Hbd/3qqbxf6g5ZgfQarCi+mYnKe9G9Px3CgFLPdgkBBnMSYaAzMjdIYOEdPKFTMQ9lIF0+i5KsrXvszWraGKwHjAflECfpTAWkPq2UJUvwkV/g7NS5lJN3fKa9LaqlXdtdQyeSBZAUJ6QeCE5vFUplk3X6QFbMXOHbZh2ziqu8mMtP+cWjHNBB47zHQ3RmNl81Rjv+QemD5zpdbK/h6AahDncOY3cfN88/HPWrENiSSxLC020sgZNYgERqfw+1YhHrclhf3jrSwCpZikjl7rqKroua2LBI/yeWEta3amTVvUnR2Y7gM8kHyh Generated-by-Nova

In some environments root login might be prohibited completely; it can be enabled by setting PermitRootLogin yes in /etc/ssh/sshd_config and running service sshd reload.

After having done this on all kubernetes nodes, check access as root to verify that passwordless root login works; it should be possible to access all k8s nodes without any password prompt, e.g.:

root@oom-beijing-rc3-install:~# ssh oom-beijing-RC3-node1
root@oom-beijing-rc3-install:~# ssh oom-beijing-RC3-node2
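Editing every authorized_keys file by hand can be scripted as well; a minimal sketch, assuming the Nova-injected restriction has the exact shape shown above (the helper name strip_nova_restriction is illustrative):

```shell
# Remove the Nova-injected restriction prefix (everything up to and
# including the closing 'sleep 10" ') from an authorized_keys file,
# leaving only the bare ssh-rsa key.
# Usage: strip_nova_restriction /root/.ssh/authorized_keys
strip_nova_restriction() {
  sed -i 's/^.*sleep 10" //' "$1"
}
```

Run it as root on each kubernetes node after switching from cloud-user; review the file afterwards to confirm only the restriction prefix was removed.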

  • (Step 2) Create an installation directory on install_server (e.g. /root/installer) and move the self-contained archive (installation script) into it.

Make sure there is enough space (more than 160 GB).

    • mkdir /root/installer


Note: this is the place where the archive will be extracted. The original file can be removed after deployment.


  • (Step 3) Create a new file local_repo.conf in the installation directory, with the following content:

LOCAL_IP=<install_server_ip>
NODES_IPS='<node_ip1> <node_ip2> … <node_ipn>'

E.g.:

LOCAL_IP=10.8.8.7
NODES_IPS='10.8.8.10 10.8.8.11'

This will ensure that the infrastructure deployment, together with setting up kubernetes, will be done non-interactively.
We should now be ready to proceed with the installation part.
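Creating the config file can also be scripted; a minimal sketch (the helper name write_local_repo_conf is illustrative, and the sample addresses match the example above):

```shell
# Write local_repo.conf into the given installation directory so the
# infrastructure deployment runs non-interactively.
# Usage: write_local_repo_conf <dir> <install_server_ip> '<node_ips>'
write_local_repo_conf() {
  dir=$1; local_ip=$2; node_ips=$3
  mkdir -p "$dir"
  printf "LOCAL_IP=%s\nNODES_IPS='%s'\n" "$local_ip" "$node_ips" \
    > "$dir/local_repo.conf"
}

# Example with the sample addresses from above:
# write_local_repo_conf /root/installer 10.8.8.7 '10.8.8.10 10.8.8.11'
```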

4. Installation

4.1 Deploy infrastructure

In this part the infrastructure will be deployed. More specifically, a local nexus, dns, rancher & docker will be deployed on the install server. The kubernetes nodes will get the rancher agent running and will form a kubernetes cluster.

  • (Step 1) To execute the script simply run (from the installation directory):
    • cd /root/installer
    • /root/installer/selfinstall_onap_beijing_RC3.sh


  • (Step 2) Answer the questions asked by the script (if needed)


Note: questions will be asked only when the script cannot find the config file (local_repo.conf) in the current folder; otherwise the script will use the existing config file.
And wait until the script finishes execution.

  • (Step 3) Verify that the k8s cluster is ready and operational


One can verify that the infrastructure deployment was successful in the following way:

  1. the following should display a healthy etcd-0 component:
    kubectl get cs
  2. the following should display 2 kubernetes nodes in the "Ready" state:
    kubectl get nodes

4.2 ONAP deployment

Before ONAP is deployed, ./oom/kubernetes/onap/values.yaml in OOM should be configured to contain the correct VIM (OpenStack) credentials. The set of deployed ONAP components can also be modified there. If ONAP is also going to be tested by reproducing the vFWCL demo, ./oom/kubernetes/robot/values.yaml should be configured before ONAP is deployed in OOM.
Configuration of the onap & robot values.yaml files is described in Appendix 3.

  • (Step 1) Trigger the deployment of ONAP


To execute the ONAP installation, run (from the installation directory):

    • ./deploy_onap.sh


This script finishes quite quickly; it just launches the ONAP deployment.

  • (Step 2) Check the progress of the deployment


Progress of the actual deployment can be followed by monitoring the number of "not running" pods; it usually takes around 1 hour.
Deployment is done when all components are up!
E.g. the following command can be used to track the progress of the deployment:
$ while true; do date; kubectl get pods -n onap -o=wide | grep -vE 'Runn|NAME' | wc -l; sleep 30; done
The only not-running pod using sign-off Beijing images (2.0.0 branch) is (it stays 0/1, i.e. never becomes ready):
dev-aai-champ-9889557bb-5fzvl 0/1 Running 0 2h
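The counting in the loop above can be made slightly more precise; a sketch of a small helper (the name count_not_up is illustrative) that counts pods which are either not in the Running state or not fully ready, given `kubectl get pods` output on stdin:

```shell
# Count pods that are not fully up from `kubectl get pods -n onap` output:
# skip the header line, then count rows whose STATUS is not Running or
# whose READY column (e.g. 0/1) shows fewer ready containers than total.
count_not_up() {
  awk 'NR > 1 { split($2, r, "/"); if ($3 != "Running" || r[1] != r[2]) n++ }
       END { print n + 0 }'
}

# Usage: kubectl get pods -n onap | count_not_up
```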
  • (Step 3) Verify its functionality

All ONAP health-checks should pass. Launch the robot health checks from inside the oom/kubernetes/robot folder:

root@oom-beijing-rc3-master:oom/kubernetes/robot# ./ete-k8s.sh onap health
Starting Xvfb on display :88 with res 1280x1024x24
Executing robot tests at log level TRACE
==============================================================================
OpenECOMP ETE
==============================================================================
OpenECOMP ETE.Robot
==============================================================================
OpenECOMP ETE.Robot.Testsuites
==============================================================================
OpenECOMP ETE.Robot.Testsuites.Health-Check :: Testing ecomp components are...
==============================================================================
Basic A&AI Health Check | PASS |
------------------------------------------------------------------------------
Basic AAF Health Check | PASS |
------------------------------------------------------------------------------
Basic AAF SMS Health Check | PASS |
------------------------------------------------------------------------------
Basic APPC Health Check | PASS |
------------------------------------------------------------------------------
Basic CLI Health Check | PASS |
------------------------------------------------------------------------------
Basic CLAMP Health Check | PASS |
------------------------------------------------------------------------------
Basic DCAE Health Check | PASS |
------------------------------------------------------------------------------
Basic DMAAP Message Router Health Check | PASS |
------------------------------------------------------------------------------
Basic External API NBI Health Check | PASS |
------------------------------------------------------------------------------
Basic Log Elasticsearch Health Check | PASS |
------------------------------------------------------------------------------
Basic Log Kibana Health Check | PASS |
------------------------------------------------------------------------------
Basic Log Logstash Health Check | PASS |
------------------------------------------------------------------------------
Basic Microservice Bus Health Check | PASS |
------------------------------------------------------------------------------
Basic Multicloud API Health Check | PASS |
------------------------------------------------------------------------------
Basic Multicloud-ocata API Health Check | PASS |
------------------------------------------------------------------------------
Basic Multicloud-titanium_cloud API Health Check | PASS |
------------------------------------------------------------------------------
Basic Multicloud-vio API Health Check | PASS |
------------------------------------------------------------------------------
Basic OOF-Homing Health Check | PASS |
------------------------------------------------------------------------------
Basic OOF-SNIRO Health Check | PASS |
------------------------------------------------------------------------------
Basic Policy Health Check | PASS |
------------------------------------------------------------------------------
Basic Portal Health Check | PASS |
------------------------------------------------------------------------------
Basic SDC Health Check | PASS |
------------------------------------------------------------------------------
Basic SDNC Health Check | PASS |
------------------------------------------------------------------------------
Basic SO Health Check | PASS |
------------------------------------------------------------------------------
Basic UseCaseUI API Health Check | PASS |
------------------------------------------------------------------------------
Basic VFC catalog API Health Check | PASS |
------------------------------------------------------------------------------
Basic VFC emsdriver API Health Check | PASS |
------------------------------------------------------------------------------
Basic VFC gvnfmdriver API Health Check | PASS |
------------------------------------------------------------------------------
Basic VFC huaweivnfmdriver API Health Check | PASS |
------------------------------------------------------------------------------
Basic VFC jujuvnfmdriver API Health Check | PASS |
------------------------------------------------------------------------------
Basic VFC multivimproxy API Health Check | PASS |
------------------------------------------------------------------------------
Basic VFC nokiavnfmdriver API Health Check | PASS |
------------------------------------------------------------------------------
Basic VFC nokiav2driver API Health Check | PASS |
------------------------------------------------------------------------------
Basic VFC nslcm API Health Check | PASS |
------------------------------------------------------------------------------
Basic VFC resmgr API Health Check | PASS |
------------------------------------------------------------------------------
Basic VFC vnflcm API Health Check | PASS |
------------------------------------------------------------------------------
Basic VFC vnfmgr API Health Check | PASS |
------------------------------------------------------------------------------
Basic VFC vnfres API Health Check | PASS |
------------------------------------------------------------------------------
Basic VFC workflow API Health Check | PASS |
------------------------------------------------------------------------------
Basic VFC ztesdncdriver API Health Check | PASS |
------------------------------------------------------------------------------
Basic VFC ztevnfmdriver API Health Check | PASS |
------------------------------------------------------------------------------
Basic VID Health Check | PASS |
------------------------------------------------------------------------------
Basic VNFSDK Health Check | PASS |
------------------------------------------------------------------------------
OpenECOMP ETE.Robot.Testsuites.Health-Check :: Testing ecomp compo... | PASS |
43 critical tests, 43 passed, 0 failed
43 tests total, 43 passed, 0 failed
==============================================================================
OpenECOMP ETE.Robot.Testsuites | PASS |
43 critical tests, 43 passed, 0 failed
43 tests total, 43 passed, 0 failed
==============================================================================
OpenECOMP ETE.Robot | PASS |
43 critical tests, 43 passed, 0 failed
43 tests total, 43 passed, 0 failed
==============================================================================
OpenECOMP ETE | PASS |
43 critical tests, 43 passed, 0 failed
43 tests total, 43 passed, 0 failed
==============================================================================
Output: /share/logs/ETE_0001_health/output.xml
Log: /share/logs/ETE_0001_health/log.html
Report: /share/logs/ETE_0001_health/report.html

Appendix 1: Troubleshooting


During our deployments, issues occasionally popped up. For example, the sdc-be pod failed to initialize and its readiness probe kept reporting problems, so dependent pods were not coming up; we believe this is not an offline-deployment-specific problem.
The solution was to delete that pod; a new one was started automatically once the old one was terminated. Deleting hanging pods seems to be a quite safe way to get unblocked, e.g.:
kubectl delete pod dev-sdc-be-6447776995-psn8f -n onap
For some environments with limited computational resources, ONAP's container liveness/readiness time configuration is too small (10 sec). This means that a container will restart all the time, because it is not able to start within the expected time interval. This is usually visible in the following containers: uui-server, clamp-dash-es, clamp-dash-kibana.
It can be fixed by increasing the liveness/readiness times in the container's values.yaml and applying the change to the container.
Container configuration file examples:

[root@rc3-install uui-server]# pwd
/root/installer/oom/kubernetes/uui/charts/uui-server
[root@rc3-install uui-server]# cat values.yaml
...
# probe configuration parameters
liveness:
  initialDelaySeconds: 50
  periodSeconds: 15
  # necessary to disable liveness probe when setting breakpoints
  # in debugger so K8s doesn't restart unresponsive container
  enabled: true

readiness:
  initialDelaySeconds: 30
  periodSeconds: 12
...

[root@rc3-install clamp-dash-es]# pwd
/root/installer/oom/kubernetes/clamp/charts/clamp-dash-es
[root@rc3-install clamp-dash-es]# cat values.yaml
...
# probe configuration parameters
liveness:
  initialDelaySeconds: 50
  periodSeconds: 15
  # necessary to disable liveness probe when setting breakpoints
  # in debugger so K8s doesn't restart unresponsive container
  enabled: true

readiness:
  initialDelaySeconds: 45
  periodSeconds: 15
...

[root@rc3-install clamp-dash-kibana]# pwd
/root/installer/oom/kubernetes/clamp/charts/clamp-dash-kibana
[root@rc3-install clamp-dash-kibana]# cat values.yaml
...
# probe configuration parameters
liveness:
  initialDelaySeconds: 360
  periodSeconds: 10
  # necessary to disable liveness probe when setting breakpoints
  # in debugger so K8s doesn't restart unresponsive container
  enabled: true

readiness:
  initialDelaySeconds: 45
  periodSeconds: 12
...


Apply these changes using the following commands from /root/installer/oom/kubernetes:

[root@rc3-install kubernetes]# pwd
/root/installer/oom/kubernetes
[root@rc3-install kubernetes]# make uui
...
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /root/installer/oom/kubernetes/dist/packages/uui-2.0.0.tgz
make[1]: Leaving directory `/root/installer/oom/kubernetes'
[root@rc3-install kubernetes]# make clamp
...
1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /root/installer/oom/kubernetes/dist/packages/clamp-2.0.0.tgz
make[1]: Leaving directory `/root/installer/oom/kubernetes'
[root@rc3-install kubernetes]# make onap
...
Update Complete. Happy Helming!
Saving 26 charts
Downloading aaf from repo http://127.0.0.1:8879
Downloading aai from repo http://127.0.0.1:8879
Downloading appc from repo http://127.0.0.1:8879
Downloading clamp from repo http://127.0.0.1:8879
Downloading cli from repo http://127.0.0.1:8879
Downloading common from repo http://127.0.0.1:8879
Downloading consul from repo http://127.0.0.1:8879
Downloading dcaegen2 from repo http://127.0.0.1:8879
Downloading dmaap from repo http://127.0.0.1:8879
Downloading esr from repo http://127.0.0.1:8879
Downloading log from repo http://127.0.0.1:8879
Downloading sniro-emulator from repo http://127.0.0.1:8879
Downloading msb from repo http://127.0.0.1:8879
Downloading multicloud from repo http://127.0.0.1:8879
Downloading nbi from repo http://127.0.0.1:8879
Downloading policy from repo http://127.0.0.1:8879
Downloading portal from repo http://127.0.0.1:8879
Downloading oof from repo http://127.0.0.1:8879
Downloading robot from repo http://127.0.0.1:8879
Downloading sdc from repo http://127.0.0.1:8879
Downloading sdnc from repo http://127.0.0.1:8879
Downloading so from repo http://127.0.0.1:8879
Downloading uui from repo http://127.0.0.1:8879
Downloading vfc from repo http://127.0.0.1:8879
Downloading vid from repo http://127.0.0.1:8879
Downloading vnfsdk from repo http://127.0.0.1:8879
Deleting outdated charts
==> Linting onap
Lint OK

1 chart(s) linted, no failures
Successfully packaged chart and saved it to: /root/installer/

[root@rc3-install kubernetes]# helm upgrade -i dev local/onap --namespace onap
...


Containers that were updated should be recreated automatically and start correctly with the new configuration values.

[root@rc3-install kubernetes]# kubectl get pods --all-namespaces | grep -E 'uui|clamp'
onap dev-clamp-87d65c5d6-t9xjs 2/2 Running 0 1d
onap dev-clamp-dash-es-5f876f97ff-9fpxt 1/1 Running 0 2m
onap dev-clamp-dash-kibana-56cdbcd7f6-2m489 1/1 Running 0 2m
onap dev-clamp-dash-logstash-67b59f9cb4-nxv86 1/1 Running 0 1d
onap dev-clampdb-5c5c6f5594-cjlmf 1/1 Running 0 1d
onap dev-uui-66f5d746f4-86nhw 1/1 Running 0 1d
onap dev-uui-server-f774979c4-knxjc 1/1 Running 0 2m

Otherwise, recreate them manually:

[root@rc3-install kubernetes]# kubectl delete pod <pod_name_1> <pod_name_n> -n onap

Appendix 2: Release Manifest

All used docker, npm and other artifact names, including versions, should be stored in the git repo under the following directory:
/root/installer/bash/tools/data_list

Appendix 3: ONAP values.yaml configuration

Before ONAP is deployed, ./oom/kubernetes/onap/values.yaml in OOM should be configured to contain the correct VIM (OpenStack) credentials. The set of deployed ONAP components can also be modified there.

  • openStackKeyStoneUrl: OpenStack keystone URL (OS_AUTH_URL; note: don't append the API version).
    • Example: http://<OpenStack_ip>:5000
  • openStackServiceTenantName: OpenStack services tenant name.
  • openStackDomain: OpenStack domain name (OS_USER_DOMAIN_NAME)
  • openStackUserName: OpenStack username (OS_USERNAME)
  • openStackEncryptedPassword: OpenStack password, encrypted with the following command:
    • echo -n <OS_PASSWORD> | openssl aes-128-ecb -e -K aa3871669d893c7fb8abbcda31b88b4f -nosalt | xxd -c 256 -p
  • openStackRegion: OpenStack region.
  • Example (some values need to be filled in):
# Copyright © 2017 Amdocs, Bell Canada
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Global configuration overrides.
# These overrides will affect all helm charts (ie. applications)
# that are listed below and are 'enabled'.
global:
  # Change to an unused port prefix range to prevent port conflicts
  # with other instances running within the same k8s cluster
  nodePortPrefix: 302

  # ONAP Repository
  # Uncomment the following to enable the use of a single docker
  # repository but ONLY if your repository mirrors all ONAP
  # docker images. This includes all images from dockerhub and
  # any other repository that hosts images for ONAP components.
  #repository: nexus3.onap.org:10001
  repositoryCred:
    user: docker
    password: docker

  # readiness check - temporary repo until images migrated to nexus3
  readinessRepository: oomk8s
  # logging agent - temporary repo until images migrated to nexus3
  loggingRepository: docker.elastic.co

  # image pull policy
  #pullPolicy: Always
  pullPolicy: IfNotPresent

  # default mount path root directory referenced
  # by persistent volumes and log files
  persistence:
    mountPath: /dockerdata-nfs

  # flag to enable debugging - application support required
  debugEnabled: false

  # Repository for creation of nexus3.onap.org secret
  repository: nexus3.onap.org:10001

# Enable/disable and configure helm charts (ie. applications)
# to customize the ONAP deployment.
aaf:
  enabled: true
aai:
  enabled: true
appc:
  enabled: true
  config:
    openStackType: OpenStackProvider
    openStackName: OpenStack
    openStackKeyStoneUrl: http://<OpenStack_ip>:5000
    openStackServiceTenantName: services
    openStackDomain: Default
    openStackUserName: onap
    openStackEncryptedPassword: f7920677e15e2678b0f33736189e8965
clamp:
  enabled: true
cli:
  enabled: true
consul:
  enabled: true
dcaegen2:
  enabled: true
dmaap:
  enabled: true
esr:
  enabled: true
log:
  enabled: true
sniro-emulator:
  enabled: true
oof:
  enabled: true
msb:
  enabled: true
multicloud:
  enabled: true
nbi:
  enabled: true
  config:
    # openstack configuration
    openStackUserName: "onap"
    openStackRegion: "RegionOne"
    openStackKeyStoneUrl: "http://<OpenStack_ip>:5000"
    openStackServiceTenantName: "services"
    openStackEncryptedPasswordHere: "f7920677e15e2678b0f33736189e8965"
policy:
  enabled: true
    portal:
    enabled: true
    robot:
    enabled: true
    sdc:
    enabled: true
    sdnc:
    enabled: true

    replicaCount: 1

    config:
    enableClustering: false

    mysql:
    disableNfsProvisioner: true
    replicaCount: 1
    so:
    enabled: true

    replicaCount: 1

    liveness:
  30. necessary to disable liveness probe when setting breakpoints
  31. in debugger so K8s doesn't restart unresponsive container
    enabled: true

  32. so server configuration
    config:
  33. message router configuration
    dmaapTopic: "AUTO"
  34. openstack configuration
    openStackUserName: "onap"
    openStackRegion: "RegionOne"
    openStackKeyStoneUrl: "http://<OpenStack_ip>:5000"
    openStackServiceTenantName: "services"
    openStackEncryptedPasswordHere: "f7920677e15e2678b0f33736189e8965"

  35. configure embedded mariadb
    mariadb:
    config:
    mariadbRootPassword: password
    uui:
    enabled: true
    vfc:
    enabled: true
    vid:
    enabled: true
    vnfsdk:
    enabled: true|
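The openStackEncryptedPassword / openStackEncryptedPasswordHere values above are not free text: SO expects the OpenStack tenant password encrypted with AES-128-ECB. A minimal sketch of how such a value can be generated with openssl; note that the AES key shown is the default SO encryption key circulated in Beijing-era OOM documentation and is an assumption here — verify it against your release before use:

```shell
#!/bin/sh
# Encrypt the plain OpenStack password for use as openStackEncryptedPassword.
# ASSUMPTION: aa3871669d893c7fb8abbcda31b88b4f is the default SO AES key
# from the Beijing-era OOM docs -- confirm it matches your deployment.
SO_ENCRYPTION_KEY=aa3871669d893c7fb8abbcda31b88b4f
OS_PASSWORD="OpenStackOpenPassword"   # replace with the real tenant password

# openssl emits raw ciphertext; xxd turns it into the hex string
# that goes into values.yaml
printf '%s' "$OS_PASSWORD" \
  | openssl aes-128-ecb -e -K "$SO_ENCRYPTION_KEY" -nosalt \
  | xxd -c 256 -p
```

If xxd is not available on the build host, `od -An -tx1 | tr -d ' \n'` produces the same hex encoding.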

    Anchor
    _j3qqre62nl1
    _j3qqre62nl1
    Robot values.yaml configuration

    If ONAP is also going to be tested, e.g. by the vFWCL demo, ./oom/kubernetes/robot/values.yaml should be configured before ONAP is deployed with OOM.
  • lightHttpdUsername/lightHttpdPassword: credentials to access the robot portal (to be able to watch the logs in a browser).
  • openStackFlavourMedium: Openstack flavour name, corresponding to m1.medium size.
  • openStackKeyStoneUrl: Openstack keystone URL (OS_AUTH_URL; note: do not append the API version, see the example).
    • Example: http://<OpenStack_ip>:5000
  • openStackPublicNetId (Tenant network): Openstack network id from which instances will be able to access ONAP (not necessarily a public network [e.g. if ONAP is running in the same VIM as vFWCL]; should have DHCP enabled).
  • openStackPassword: Openstack password in plain text (not encrypted).
  • openStackRegion: Openstack region.
  • openStackTenantId: Openstack tenant (in which VNFs will be created)
  • openStackUserName: Openstack username (OS_USERNAME)
  • ubuntu14Image: Openstack image name of ubuntu 14.04-trusty
  • ubuntu16Image: Openstack image name of ubuntu 16.04-xenial
  • openStackPrivateNetId (ONAP network): Openstack private network to which instances would be connected, to be able to access each other (should start with 10.0, should have DHCP enabled).
  • openStackPrivateSubnetId: Openstack subnet id, for private network.
  • openStackPrivateNetCidr: CIDR notation for the Openstack private network where VNFs will be spawned.
  • vnfPubKey: Public Key to access inside VNFs (instances).
  • dcaeCollectorIp: At this step the parameter is unknown; leave it empty, it won't be used during ONAP deployment. (We will set it during the Close Loop demo "Preload" step.)


  • Example (some values must be filled in):
# Copyright © 2017 Amdocs, Bell Canada
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Global configuration defaults.
global: # global defaults
  nodePortPrefix: 302
  ubuntuInitRepository: registry.hub.docker.com
  persistence: {}

# application image
repository: nexus3.onap.org:10001
image: onap/testsuite:1.2.1
pullPolicy: Always

ubuntuInitImage: oomk8s/ubuntu-init:2.0.0

# flag to enable debugging - application support required
debugEnabled: false

# Application configuration defaults.
config:
  # Username of the lighthttpd server. Used for HTML auth for webpage access
  lightHttpdUsername: robot
  # Password of the lighthttpd server. Used for HTML auth for webpage access
  lightHttpdPassword: robot
  # gerrit branch where the latest heat code is checked in
  gerritBranch: master
  # gerrit project where the latest heat code is checked in
  gerritProject: http://gerrit.onap.org/r/demo.git

  # Demo configuration
  # Nexus demo artifact version. Maps to GLOBAL_INJECTED_ARTIFACTS_VERSION
  demoArtifactsVersion: "1.3.0"
  # Openstack medium sized flavour name. Maps to GLOBAL_INJECTED_VM_FLAVOR
  openStackFlavourMedium: "m1.medium"
  # Openstack keystone URL. Maps to GLOBAL_INJECTED_KEYSTONE
  openStackKeyStoneUrl: "http://<OpenStack_ip>:5000"
  # UUID of the Openstack network that can assign floating ips. Maps to GLOBAL_INJECTED_PUBLIC_NET_ID
  openStackPublicNetId: "57948215-0ca0-496f-bc7d-9fab66bc91aa"
  # password for Openstack tenant where VNFs will be spawned. Maps to GLOBAL_INJECTED_OPENSTACK_PASSWORD
  openStackPassword: "OpenStackOpenPassword"
  # Openstack region. Maps to GLOBAL_INJECTED_REGION
  openStackRegion: "RegionOne"
  # Openstack tenant UUID where VNFs will be spawned. Maps to GLOBAL_INJECTED_OPENSTACK_TENANT_ID
  openStackTenantId: "b1ce7742d956463999923ceaed71786e"
  # username for Openstack tenant where VNFs will be spawned. Maps to GLOBAL_INJECTED_OPENSTACK_USERNAME
  openStackUserName: "onap"
  # Openstack glance image name for Ubuntu 14. Maps to GLOBAL_INJECTED_UBUNTU_1404_IMAGE
  ubuntu14Image: "ubuntu-14.04-server-cloudimg-amd64"
  # Openstack glance image name for Ubuntu 16. Maps to GLOBAL_INJECTED_UBUNTU_1604_IMAGE
  ubuntu16Image: "ubuntu-16.04-server-cloudimg-amd64"
  # Demo script version. Maps to GLOBAL_INJECTED_SCRIPT_VERSION
  scriptVersion: "1.2.1"
  # Openstack network to which VNFs will bind their primary (first) interface. Maps to GLOBAL_INJECTED_NETWORK
  openStackPrivateNetId: "b5f175c4-733c-4734-a878-290a35fb495d"

  # SDNC Preload configuration
  # Openstack subnet UUID for the network defined by openStackPrivateNetId. Maps to onap_private_subnet_id
  openStackPrivateSubnetId: "cfe28d43-cc80-4b9a-8aac-d0fe29327c52"
  # CIDR notation for the Openstack private network where VNFs will be spawned. Maps to onap_private_net_cidr
  openStackPrivateNetCidr: "10.0.50.0/24"
  # The first 2 octets of the private Openstack subnet where VNFs will be spawned.
  # Needed because sdnc preload templates hardcode things like 10.0.${ecompnet}.X
  openStackOamNetworkCidrPrefix: "10.0"
  # Override with Pub Key for access to VNF
  vnfPubKey: "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDPwF2bYm2QuqZpjuAcZDJTcFdUkKv4Hbd/3qqbxf6g5ZgfQarCi+mYnKe9G9Px3CgFLPdgkBBnMSYaAzMjdIYOEdPKFTMQ9lIF0+i5KsrXvszWraGKwHjAflECfpTAWkPq2UJUvwkV/g7NS5lJN3fKa9LaqlXdtdQyeSBZAUJ6QeCE5vFUplk3X6QFbMXOHbZh2ziqu8mMtP+cWjHNBB47zHQ3RmNl81Rjv+QemD5zpdbK/h6AahDncOY3cfN88/HPWrENiSSxLC020sgZNYgERqfw+1YhHrclhf3jrSwCpZikjl7rqKroua2LBI/yeWEta3amTVvUnR2Y7gM8kHyh Generated-by-Nova"
  # Override with DCAE VES Collector external IP
  dcaeCollectorIp: ""

# default number of instances
replicaCount: 1

nodeSelector: {}

affinity: {}

# probe configuration parameters
liveness:
  initialDelaySeconds: 10
  periodSeconds: 10
  # necessary to disable liveness probe when setting breakpoints
  # in debugger so K8s doesn't restart unresponsive container
  enabled: true

readiness:
  initialDelaySeconds: 10
  periodSeconds: 10

service:
  name: robot
  type: NodePort
  portName: httpd
  externalPort: 88
  internalPort: 88
  nodePort: "09"

ingress:
  enabled: false

resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
#
# Example:
# Configure resource requests and limits
# ref: http://kubernetes.io/docs/user-guide/compute-resources/
# Minimum memory for development is 2 CPU cores and 4GB memory
# Minimum memory for production is 4 CPU cores and 8GB memory
#resources:
#  limits:
#    cpu: 2
#    memory: 4Gi
#  requests:
#    cpu: 2
#    memory: 4Gi

# Persist data to a persistent volume
persistence:
  enabled: true

  # A manually managed Persistent Volume and Claim
  # Requires persistence.enabled: true
  # If defined, PVC must be created manually before volume will be bound
  # existingClaim:
  volumeReclaimPolicy: Retain

  # database data Persistent Volume Storage Class
  # If defined, storageClassName: <storageClass>
  # If set to "-", storageClassName: "", which disables dynamic provisioning
  # If undefined (the default) or set to null, no storageClassName spec is
  #   set, choosing the default provisioner. (gp2 on AWS, standard on
  #   GKE, AWS & OpenStack)
  # storageClass: "-"
  accessMode: ReadWriteMany
  size: 2Gi
  mountPath: /dockerdata-nfs
  mountSubPath: robot/logs
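The relationship between openStackPrivateNetCidr and openStackOamNetworkCidrPrefix above is mechanical: the prefix is simply the first two octets of the private subnet. A minimal sketch of deriving it (the variable names mirror the values.yaml keys and are illustrative only):

```shell
#!/bin/sh
# Derive openStackOamNetworkCidrPrefix (first two octets) from
# openStackPrivateNetCidr, e.g. "10.0.50.0/24" -> "10.0".
openStackPrivateNetCidr="10.0.50.0/24"
openStackOamNetworkCidrPrefix=$(echo "$openStackPrivateNetCidr" | cut -d. -f1-2)
echo "$openStackOamNetworkCidrPrefix"   # prints 10.0
```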
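A forgotten placeholder in robot's values.yaml only surfaces much later as a failing robot test, so a quick grep-level sanity check before deploying can save time. A minimal sketch; the file path assumes a standard OOM checkout and the `<...>` placeholder convention follows the example above:

```shell
#!/bin/sh
# Flag values.yaml lines that still carry a <placeholder> or an empty ""
# openStack* value. dcaeCollectorIp is expected to stay empty at this stage.
VALUES=oom/kubernetes/robot/values.yaml   # assumed path in the OOM checkout
grep -nE '<[A-Za-z_]+>|^[[:space:]]*openStack[A-Za-z]+:[[:space:]]*""[[:space:]]*$' "$VALUES" \
  && echo "WARNING: fill in the values above before deploying" \
  || echo "OK: no obvious placeholders left"
```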