 | Microsoft Azure | Google Compute | OpenStack
---|---|---|---
Sponsor | Microsoft (201801-) | | Intel/Windriver (2017-)

This is a private page under daily continuous modification to keep it relevant as a live reference (don't edit it). For general support consult the official documentation at http://onap.readthedocs.io/en/latest/submodules/oom.git/docs/oom_quickstart_guide.html and https://onap.readthedocs.io/en/beijing/submodules/oom.git/docs/oom_cloud_setup_guide.html and raise DOC JIRAs for any modifications required to them.
This page details deployment of ONAP on any environment that supports Kubernetes based containers.
Chat: http://onap-integration.eastus.cloudapp.azure.com:3000/group/onap-integration
Use separate namespaces - to avoid the 1MB configmap limit - or just helm install/delete everything (no helm upgrade); see the sketch below.
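A minimal sketch of the delete/reinstall flow (a hedged example - it assumes the local/onap chart from the OOM repo has already been built with make all, as shown later on this page):

# tear down and reinstall instead of helm upgrade - sidesteps the 1MB configmap limit
sudo helm delete onap --purge
sudo helm install local/onap -n onap --namespace onap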
Type | VMs | VM Flavor | Total RAM | ONAP RAM | Max vCores | Idle vCores | HD/VM | IOPS | Date
---|---|---|---|---|---|---|---|---|---
Full Cluster (14 + 1) - recommended | 15 | 16G, 8 vCores | 120+G (master) | | | | | Max: 550/sec, Idle: 220/sec | 20181105
Single VM (possible - not recommended) | 1 | | | | 55 | 22 | | | 20181105
Developer 1-n pods | 1 | 16/32G, 4-16 vCores | 120+G | | | | | |
In review
Run the following script on a clean Ubuntu 16.04 VM anywhere - it will provision and register your Kubernetes system as a collocated master/host.
Ideally you install a clustered set of hosts separate from the master VM - you can do this by deleting the host from the cluster after it is installed below, then running the docker, NFS, and Rancher agent installs on each host.
The cd.sh script will fix your VM for this limitation, first found in OOM-431. If you don't run the cd.sh script, run the following command manually on each VM so that any elasticsearch container comes up properly - this is a base OS issue.
https://git.onap.org/logging-analytics/tree/deploy/cd.sh#n49
# fix virtual memory for onap-log:elasticsearch under Rancher 1.6.11 - OOM-431
sudo sysctl -w vm.max_map_count=262144
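To make this survive reboots, a hedged addition using the standard Ubuntu 16.04 sysctl mechanism (not part of the original scripts):

# persist the elasticsearch vm.max_map_count fix across reboots
echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p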
Create a single VM - 128G+
See recommended cluster configurations on ONAP Deployment Specification for Finance and Operations#AmazonAWS
Create an open security group for 0.0.0.0/0 and ::/0
Use GitHub OAuth to authenticate your cluster just after installing it.
Last test 20180905
# 0 - verify the security group has all protocols (TCP/UDP) for 0.0.0.0/0 and ::/0
# 1 - configure combined master/host VM - 26 min
sudo git clone https://gerrit.onap.org/r/logging-analytics
sudo logging-analytics/deploy/rancher/oom_rancher_setup.sh -b master -s <your domain/ip> -e onap
# before the environment (1a7) is created from the kubernetes template (1pt2) - edit it via
# https://wiki.onap.org/display/DW/Cloud+Native+Deployment#CloudNativeDeployment-Changemax-podsfromdefault110podlimit
# https://lists.onap.org/g/onap-discuss/topic/oom_110_kubernetes_pod/25213556?p=,,,20,0,0,0::recentpostdate%2Fsticky,,,20,2,0,25213556
# on a 244G R4.8xlarge vm - 26 min later k8s cluster is up
NAMESPACE     NAME                                    READY  STATUS   RESTARTS  AGE
kube-system   heapster-6cfb49f776-5pq45               1/1    Running  0         10m
kube-system   kube-dns-75c8cb4ccb-7dlsh               3/3    Running  0         10m
kube-system   kubernetes-dashboard-6f4c8b9cd5-v625c   1/1    Running  0         10m
kube-system   monitoring-grafana-76f5b489d5-zhrjc     1/1    Running  0         10m
kube-system   monitoring-influxdb-6fc88bd58d-9494h    1/1    Running  0         10m
kube-system   tiller-deploy-8b6c5d4fb-52zmt           1/1    Running  0         2m
# 3 - secure the master via github oauth - immediately, to lock out crypto miners
http://cd.onap.info:8880
# check the master cluster
ubuntu@ip-172-31-14-89:~$ kubectl top nodes
NAME                                         CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
ip-172-31-8-245.us-east-2.compute.internal   179m        2%    2494Mi         4%
ubuntu@ip-172-31-14-89:~$ kubectl get nodes -o wide
NAME                                         STATUS  ROLES   AGE  VERSION           EXTERNAL-IP  OS-IMAGE            KERNEL-VERSION  CONTAINER-RUNTIME
ip-172-31-8-245.us-east-2.compute.internal   Ready   <none>  13d  v1.10.3-rancher1  172.17.0.1   Ubuntu 16.04.1 LTS  4.4.0-1049-aws  docker://17.3.2
# 7 - after cluster is up - run the cd.sh script to get onap up - customize your values.yaml - the 2nd time you run the script (a clean install) it will clone a new oom repo
sudo logging-analytics/deploy/cd.sh -b master -e onap -c true -d true -w true
# check around 55 min (on a 256G single node with 32 vCores) pods total/failed/up: 161/13/153 @ 50 min, 107g @ 55 min
ubuntu@ip-172-31-20-218:~$ kubectl get pods --all-namespaces | grep onap | grep -E '1/1|2/2' | wc -l
152
ubuntu@ip-172-31-20-218:~$ kubectl get pods --all-namespaces | grep -E '0/|1/2'
onap  dep-deployment-handler-5789b89d4b-s6fzw                1/2  Running                0  8m
onap  dep-service-change-handler-76dcd99f84-fchxd            0/1  ContainerCreating      0  3m
onap  onap-aai-champ-68ff644d85-rv7tr                        0/1  Running                0  53m
onap  onap-aai-gizmo-856f86d664-q5pvg                        1/2  CrashLoopBackOff       9  53m
onap  onap-oof-85864d6586-zcsz5                              0/1  ImagePullBackOff       0  53m
onap  onap-pomba-kibana-d76b6dd4c-sfbl6                      0/1  Init:CrashLoopBackOff  7  53m
onap  onap-pomba-networkdiscovery-85d76975b7-mfk92           1/2  CrashLoopBackOff       9  53m
onap  onap-pomba-networkdiscoveryctxbuilder-c89786dfc-qnlx9  1/2  CrashLoopBackOff       9  53m
onap  onap-vid-84c88db589-8cpgr                              1/2  CrashLoopBackOff       7  52m
# Note: DCAE has 2 sets of orchestration after the initial k8s orchestration - another at 57 min
ubuntu@ip-172-31-20-218:~$ kubectl get pods --all-namespaces | grep -E '0/|1/2'
onap  dep-dcae-prh-6b5c6ff445-pr547                          0/2  ContainerCreating      0   2m
onap  dep-dcae-tca-analytics-7dbd46d5b5-bgrn9                0/2  ContainerCreating      0   1m
onap  dep-dcae-ves-collector-59d4ff58f7-94rpq                0/2  ContainerCreating      0   1m
onap  onap-aai-champ-68ff644d85-rv7tr                        0/1  Running                0   57m
onap  onap-aai-gizmo-856f86d664-q5pvg                        1/2  CrashLoopBackOff       10  57m
onap  onap-oof-85864d6586-zcsz5                              0/1  ImagePullBackOff       0   57m
onap  onap-pomba-kibana-d76b6dd4c-sfbl6                      0/1  Init:CrashLoopBackOff  8   57m
onap  onap-pomba-networkdiscovery-85d76975b7-mfk92           1/2  CrashLoopBackOff       11  57m
onap  onap-pomba-networkdiscoveryctxbuilder-c89786dfc-qnlx9  1/2  Error                  10  57m
onap  onap-vid-84c88db589-8cpgr                              1/2  CrashLoopBackOff       9   57m
# at 1 hour
ubuntu@ip-172-31-20-218:~$ free
              total       used       free     shared  buff/cache  available
Mem:      251754696  111586672   45000724     193628    95167300  137158588
ubuntu@ip-172-31-20-218:~$ kubectl get pods --all-namespaces | grep onap | wc -l
164
ubuntu@ip-172-31-20-218:~$ kubectl get pods --all-namespaces | grep onap | grep -E '1/1|2/2' | wc -l
155
ubuntu@ip-172-31-20-218:~$ kubectl get pods --all-namespaces | grep -E '0/|1/2' | wc -l
8
ubuntu@ip-172-31-20-218:~$ kubectl get pods --all-namespaces | grep -E '0/|1/2'
onap  dep-dcae-ves-collector-59d4ff58f7-94rpq                1/2  Running                0   4m
onap  onap-aai-champ-68ff644d85-rv7tr                        0/1  Running                0   59m
onap  onap-aai-gizmo-856f86d664-q5pvg                        1/2  CrashLoopBackOff       10  59m
onap  onap-oof-85864d6586-zcsz5                              0/1  ImagePullBackOff       0   59m
onap  onap-pomba-kibana-d76b6dd4c-sfbl6                      0/1  Init:CrashLoopBackOff  8   59m
onap  onap-pomba-networkdiscovery-85d76975b7-mfk92           1/2  CrashLoopBackOff       11  59m
onap  onap-pomba-networkdiscoveryctxbuilder-c89786dfc-qnlx9  1/2  CrashLoopBackOff       10  59m
onap  onap-vid-84c88db589-8cpgr                              1/2  CrashLoopBackOff       9   59m
ubuntu@ip-172-31-20-218:~$ df
Filesystem     1K-blocks      Used Available Use% Mounted on
udev           125869392         0 125869392   0% /dev
tmpfs           25175472     54680  25120792   1% /run
/dev/xvda1     121914320  91698036  30199900  76% /
tmpfs          125877348     30312 125847036   1% /dev/shm
tmpfs               5120         0      5120   0% /run/lock
tmpfs          125877348         0 125877348   0% /sys/fs/cgroup
tmpfs           25175472         0  25175472   0% /run/user/1000
ubuntu@ip-172-31-20-218:~$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                    READY  STATUS   RESTARTS  AGE
kube-system   heapster-6cfb49f776-ntdj6               1/1    Running  0         2h
kube-system   kube-dns-75c8cb4ccb-hrz94               3/3    Running  0         2h
kube-system   kubernetes-dashboard-6f4c8b9cd5-kx8hp   1/1    Running  0         2h
kube-system   monitoring-grafana-76f5b489d5-shnd2     1/1    Running  0         2h
kube-system   monitoring-influxdb-6fc88bd58d-gskvv    1/1    Running  0         2h
kube-system   tiller-deploy-8b6c5d4fb-7hwvq           1/1    Running  0         1h
onap  dep-config-binding-service-64699f4c44-5qkbn            2/2  Running  0  55m
onap  dep-dcae-prh-6b5c6ff445-pr547                          2/2  Running  0  38m
onap  dep-dcae-tca-analytics-7dbd46d5b5-bgrn9                2/2  Running  0  38m
onap  dep-dcae-ves-collector-59d4ff58f7-94rpq                2/2  Running  0  38m
onap  dep-deployment-handler-5789b89d4b-s6fzw                2/2  Running  0  49m
onap  dep-inventory-847d9468bc-zpkpg                         1/1  Running  0  49m
onap  dep-policy-handler-55557f9cd-gnstz                     2/2  Running  0  49m
onap  dep-pstg-write-77c89cb8c4-h4rn5                        1/1  Running  0  49m
onap  dep-service-change-handler-76dcd99f84-fchxd            1/1  Running  0  44m
onap  onap-aaf-cm-65b685449-ntmcz                            1/1  Running  0  1h
onap  onap-aaf-cs-6ddbb9d674-mhj7w                           1/1  Running  0  1h
onap  onap-aaf-fs-7b96687d64-m6275                           1/1  Running  0  1h
onap  onap-aaf-gui-77d9f8f54-jqqph                           1/1  Running  0  1h
onap  onap-aaf-hello-76cb4d748c-kfmmj                        1/1  Running  0  1h
onap  onap-aaf-locate-7866799fd9-jbmpn                       1/1  Running  0  1h
onap  onap-aaf-oauth-564bbbf568-8j7f9                        1/1  Running  0  1h
onap  onap-aaf-service-657df4578d-mlvjf                      1/1  Running  0  1h
onap  onap-aaf-sms-844c974c6-zvzq8                           1/1  Running  0  1h
onap  onap-aaf-sms-quorumclient-0                            1/1  Running  0  1h
onap  onap-aaf-sms-quorumclient-1                            1/1  Running  0  1h
onap  onap-aaf-sms-quorumclient-2                            1/1  Running  0  1h
onap  onap-aaf-sms-vault-0                                   2/2  Running  1  1h
onap  onap-aai-58d48f4f94-mxzcx                              1/1  Running  0  1h
onap  onap-aai-babel-7dcc9f857c-lv6gp                        2/2  Running  0  1h
onap  onap-aai-cassandra-0                                   1/1  Running  0  1h
onap  onap-aai-cassandra-1                                   1/1  Running  1  1h
onap  onap-aai-cassandra-2                                   1/1  Running  0  1h
onap  onap-aai-champ-68ff644d85-rv7tr                        0/1  Running  0  1h
onap  onap-aai-data-router-666d76b58-flcr8                   1/1  Running  0  1h
onap  onap-aai-elasticsearch-6fd5bb7756-qrh5h                1/1  Running  0  1h
onap  onap-aai-gizmo-856f86d664-q5pvg                        1/2  Running  19 1h
onap  onap-aai-modelloader-688b8f8cf-tvvkr                   2/2  Running  0  1h
onap  onap-aai-resources-65c9779cdc-h5dl5                    2/2  Running  0  1h
onap  onap-aai-search-data-64495685dd-vgld2                  2/2  Running  0  1h
onap  onap-aai-sparky-be-7964567995-7pq2r                    2/2  Running  0  1h
onap  onap-aai-traversal-697c9bc645-stlfm                    2/2  Running  0  1h
onap  onap-appc-0                                            2/2  Running  0  1h
onap  onap-appc-ansible-server-5cc8b5d65b-jdlvz              1/1  Running  0  1h
onap  onap-appc-cdt-6b747775d5-ww2n9                         1/1  Running  0  1h
onap  onap-appc-db-0                                         2/2  Running  0  1h
onap  onap-appc-dgbuilder-6bcd76fc85-5txd8                   1/1  Running  0  1h
onap  onap-brmsgw-74d69dcbd-5x2mc                            1/1  Running  0  1h
onap  onap-clamp-6644d787dc-4w8cn                            2/2  Running  0  1h
onap  onap-clamp-dash-es-589bb7f658-lldz8                    1/1  Running  0  1h
onap  onap-clamp-dash-kibana-64ffb8c479-w8ftv                1/1  Running  0  1h
onap  onap-clamp-dash-logstash-c56cdb596-td6h9               1/1  Running  0  1h
onap  onap-clampdb-59cc6b8cc8-4wkts                          1/1  Running  0  1h
onap  onap-cli-65558c66cf-jb59c                              1/1  Running  0  1h
onap  onap-consul-68dfcd6d85-5sf49                           1/1  Running  0  1h
onap  onap-consul-server-0                                   1/1  Running  0  1h
onap  onap-consul-server-1                                   1/1  Running  0  1h
onap  onap-consul-server-2                                   1/1  Running  0  1h
onap  onap-dbc-pg-0                                          1/1  Running  0  1h
onap  onap-dbc-pg-1                                          1/1  Running  0  1h
onap  onap-dcae-bootstrap-7fbf9d6846-b2qdf                   1/1  Running  0  1h
onap  onap-dcae-cloudify-manager-759dc977fc-nrs9h            1/1  Running  0  1h
onap  onap-dcae-db-0                                         1/1  Running  0  1h
onap  onap-dcae-db-1                                         1/1  Running  0  1h
onap  onap-dcae-healthcheck-7bc4c7ddfb-7pt2d                 1/1  Running  0  1h
onap  onap-dcae-redis-0                                      1/1  Running  0  1h
onap  onap-dcae-redis-1                                      1/1  Running  0  1h
onap  onap-dcae-redis-2                                      1/1  Running  0  1h
onap  onap-dcae-redis-3                                      1/1  Running  0  59m
onap  onap-dcae-redis-4                                      1/1  Running  0  55m
onap  onap-dcae-redis-5                                      1/1  Running  0  49m
onap  onap-dmaap-bus-controller-79b79dbfdf-vmmv9             1/1  Running  0  1h
onap  onap-dmaap-dr-db-5b9898d7d9-6ljqb                      1/1  Running  0  1h
onap  onap-dmaap-dr-node-66cb7b978f-2cwhb                    1/1  Running  0  1h
onap  onap-dmaap-dr-prov-65f6979c56-kg2m2                    1/1  Running  0  1h
onap  onap-drools-0                                          1/1  Running  0  1h
onap  onap-esr-8698b9645-6c47s                               2/2  Running  0  1h
onap  onap-esr-gui-859fd85568-lhl68                          1/1  Running  0  1h
onap  onap-kube2msb-fbbdf4499-gnd6j                          1/1  Running  0  1h
onap  onap-log-elasticsearch-7557486bc4-m7tfl                1/1  Running  0  1h
onap  onap-log-kibana-fc88b6b79-64p9p                        1/1  Running  0  1h
onap  onap-log-logstash-5rdt2                                1/1  Running  0  1h
onap  onap-message-router-546c549f8d-tf4ph                   1/1  Running  0  1h
onap  onap-message-router-kafka-747885ffc9-9k9t8             1/1  Running  0  1h
onap  onap-message-router-zookeeper-54bf7fc969-mftl5         1/1  Running  0  1h
onap  onap-msb-consul-df5cdcdbb-f4fpd                        1/1  Running  0  1h
onap  onap-msb-discovery-748d88fb76-5lvx7                    2/2  Running  0  1h
onap  onap-msb-eag-c89d486c4-hx6w5                           2/2  Running  0  1h
onap  onap-msb-iag-77f69996d9-7fqfc                          2/2  Running  0  1h
onap  onap-multicloud-6dc6c6b7c7-ntc9v                       2/2  Running  0  1h
onap  onap-multicloud-ocata-59f57458c9-6zgt6                 2/2  Running  0  1h
onap  onap-multicloud-vio-85b5bfc64d-2pkrm                   2/2  Running  0  1h
onap  onap-multicloud-windriver-7f54bd5849-2tm2g             2/2  Running  0  1h
onap  onap-nbi-66746c558-htwwp                               1/1  Running  1  1h
onap  onap-nbi-mariadb-67db6865f4-6dkjc                      1/1  Running  0  1h
onap  onap-nbi-mongo-0                                       1/1  Running  0  1h
onap  onap-nexus-d8dd55b95-lpc2q                             1/1  Running  0  1h
onap  onap-oof-85864d6586-zcsz5                              0/1  ImagePullBackOff  0  1h
onap  onap-oof-has-api-8594d77774-4n5hx                      1/1  Running  0  1h
onap  onap-oof-has-cassandra-56dd9c466c-j9ld5                1/1  Running  0  1h
onap  onap-oof-has-controller-5977b5cc7f-zkl4j               1/1  Running  0  1h
onap  onap-oof-has-data-ccc79dbbc-8xjqs                      1/1  Running  0  1h
onap  onap-oof-has-music-6f78f5565c-jjrdf                    1/1  Running  0  1h
onap  onap-oof-has-reservation-6d584d75f7-pgpdn              1/1  Running  0  1h
onap  onap-oof-has-solver-556775f99c-nj9s6                   1/1  Running  0  1h
onap  onap-oof-has-zookeeper-5bd9cdc875-gw8m8                1/1  Running  0  1h
onap  onap-pap-76486b4b54-4kjpm                              2/2  Running  0  1h
onap  onap-pdp-0                                             2/2  Running  0  1h
onap  onap-policy-apex-pdp-0                                 1/1  Running  0  1h
onap  onap-policydb-5f876cf8bf-zk7pb                         1/1  Running  0  1h
onap  onap-pomba-aaictxbuilder-56d98d7649-ndcsh              2/2  Running  0  1h
onap  onap-pomba-contextaggregator-765844bbc9-wsvn6          1/1  Running  0  1h
onap  onap-pomba-data-router-97fcf9597-ksr2c                 1/1  Running  1  1h
onap  onap-pomba-elasticsearch-6c8d7bb4f9-22hct              1/1  Running  0  1h
onap  onap-pomba-kibana-d76b6dd4c-sfbl6                      0/1  Init:Error        14  1h
onap  onap-pomba-networkdiscovery-85d76975b7-mfk92           1/2  CrashLoopBackOff  19  1h
onap  onap-pomba-networkdiscoveryctxbuilder-c89786dfc-qnlx9  1/2  CrashLoopBackOff  19  1h
onap  onap-pomba-sdcctxbuilder-5bc9cdcd77-rjvmw              1/1  Running  0  1h
onap  onap-pomba-search-data-65c86fbcf9-fnb2j                2/2  Running  0  1h
onap  onap-pomba-servicedecomposition-557dd9d669-9l2vl       2/2  Running  0  1h
onap  onap-pomba-validation-service-946ff6dfd-9vcz7          1/1  Running  0  1h
onap  onap-portal-app-8486dc7ff8-cb9q8                       2/2  Running  0  1h
onap  onap-portal-cassandra-8588fbd698-x565f                 1/1  Running  0  1h
onap  onap-portal-db-7d6b95cd94-54krd                        1/1  Running  0  1h
onap  onap-portal-sdk-77cd558c98-hnsfq                       2/2  Running  0  1h
onap  onap-portal-widget-6469f4bc56-wx249                    1/1  Running  0  1h
onap  onap-portal-zookeeper-5d8c598c4c-4xssk                 1/1  Running  0  1h
onap  onap-robot-67fdd89766-8zr2t                            1/1  Running  0  1h
onap  onap-sdc-be-8447b4d544-582b6                           2/2  Running  0  1h
onap  onap-sdc-cs-6cb64768b8-sr4bn                           1/1  Running  0  1h
onap  onap-sdc-es-dd9fd5967-htsgv                            1/1  Running  0  1h
onap  onap-sdc-fe-6967b675bf-xxbs8                           2/2  Running  0  1h
onap  onap-sdc-kb-6dc5df7864-27q84                           1/1  Running  0  1h
onap  onap-sdc-onboarding-be-6cb8cbcb95-mhfsp                2/2  Running  0  1h
onap  onap-sdc-wfd-be-869dfc785c-h4nmb                       1/1  Running  0  1h
onap  onap-sdc-wfd-fe-579c5b5ffb-2x9r7                       2/2  Running  0  1h
onap  onap-sdnc-0                                            2/2  Running  0  1h
onap  onap-sdnc-ansible-server-9c6fd76-qgc4c                 1/1  Running  0  1h
onap  onap-sdnc-db-0                                         2/2  Running  0  1h
onap  onap-sdnc-dgbuilder-75cdf97945-9f9bc                   1/1  Running  0  1h
onap  onap-sdnc-dmaap-listener-7cb56d7bcb-4w954              1/1  Running  0  1h
onap  onap-sdnc-portal-7b779f87f5-8c7mb                      1/1  Running  0  1h
onap  onap-sdnc-ueb-listener-5bcdbb8677-98wq9                1/1  Running  0  1h
onap  onap-sniro-emulator-f668fdb9-427t4                     1/1  Running  0  1h
onap  onap-so-57d4b4f65c-92pv6                               2/2  Running  0  1h
onap  onap-so-db-765df45b64-sdwp4                            1/1  Running  0  1h
onap  onap-uui-cf7f9c4c4-vmq78                               1/1  Running  0  1h
onap  onap-uui-server-6c8ff6544-89crz                        1/1  Running  0  1h
onap  onap-vfc-catalog-5bd7d6bddf-fn6sz                      2/2  Running  0  1h
onap  onap-vfc-db-678f484cdd-4nqhj                           1/1  Running  0  1h
onap  onap-vfc-ems-driver-b58764c48-ng7j9                    1/1  Running  3  1h
onap  onap-vfc-generic-vnfm-driver-84bf45c5df-2xbvl          2/2  Running  0  1h
onap  onap-vfc-huawei-vnfm-driver-64c9ddcffd-vwxfp           2/2  Running  0  1h
onap  onap-vfc-juju-vnfm-driver-557758b87f-fhhk6             2/2  Running  0  1h
onap  onap-vfc-multivim-proxy-5986c8695-xnn94                1/1  Running  0  1h
onap  onap-vfc-nokia-v2vnfm-driver-56955fcd5d-8ck74          1/1  Running  0  1h
onap  onap-vfc-nokia-vnfm-driver-65fdfd478-42k4d             2/2  Running  0  1h
onap  onap-vfc-nslcm-74cfb48956-z8hcg                        2/2  Running  0  1h
onap  onap-vfc-resmgr-5b55888bb4-5pkdk                       2/2  Running  0  1h
onap  onap-vfc-vnflcm-646584d587-w4m4k                       2/2  Running  0  1h
onap  onap-vfc-vnfmgr-bcbc9c877-jrwt5                        2/2  Running  0  1h
onap  onap-vfc-vnfres-6848d65656-x9r6t                       2/2  Running  0  1h
onap  onap-vfc-workflow-5c788dd6d5-vsflc                     1/1  Running  0  1h
onap  onap-vfc-workflow-engine-59d9f97858-c9hjn              1/1  Running  0  1h
onap  onap-vfc-zte-sdnc-driver-856fc764f5-xrt88              1/1  Running  0  1h
onap  onap-vfc-zte-vnfm-driver-75d9b75b4b-zzg87              2/2  Running  0  1h
onap  onap-vid-84c88db589-8cpgr                              1/2  Running  18 1h
onap  onap-vid-mariadb-galera-0                              1/1  Running  0  1h
onap  onap-vnfsdk-58d8dfbbc-drgxw                            1/1  Running  0  1h
onap  onap-vnfsdk-postgres-c4685d6c4-nlwx9                   1/1  Running  0  1h
# todo: verify the release is there after a helm install - as the configMap size issue is breaking the release for now
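A small watch loop, as a hedged sketch (the 60-sec interval and the onap namespace are assumptions, not from the scripts), to track the pending count the same way the cd.sh output above does:

# poll until no onap pods are stuck at 0/ or 1/2
while [ "$(kubectl get pods -n onap | grep -cE '0/|1/2')" -gt 0 ]; do
  echo "$(date): $(kubectl get pods -n onap | grep -cE '0/|1/2') pods pending"
  sleep 60
done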
Create a single VM - 256G+
ubuntu@a-onap-dmz-nodelete:~$ ./oom_deployment.sh -b master -s att.onap.cloud -e onap -r a_ONAP_CD_master -t _arm_deploy_onap_cd.json -p _arm_deploy_onap_cd_z_parameters.json
# register the IP to DNS with route53 for att.onap.info - using this for the ONAP academic summit on the 22nd
# 13.68.113.104 = att.onap.cloud
Add an NFS (EFS on AWS) share
Create a 1 + N cluster
See recommended cluster configurations on ONAP Deployment Specification for Finance and Operations#AmazonAWS
Create an open security group for 0.0.0.0/0 and ::/0
Use GitHub OAuth to authenticate your cluster just after installing it.
Last tested on ld.onap.info 20181029
# 0 - verify the security group has all protocols (TCP/UDP) for 0.0.0.0/0 and ::/0
# 1 - configure master - 15 min
sudo git clone https://gerrit.onap.org/r/logging-analytics
sudo logging-analytics/deploy/rancher/oom_rancher_setup.sh -b master -s <your domain/ip> -e onap
# on a 64G R4.2xlarge vm - 23 min later k8s cluster is up
kubectl get pods --all-namespaces
kube-system   heapster-76b8cd7b5-g7p6n               1/1  Running  0  8m
kube-system   kube-dns-5d7b4487c9-jjgvg              3/3  Running  0  8m
kube-system   kubernetes-dashboard-f9577fffd-qldrw   1/1  Running  0  8m
kube-system   monitoring-grafana-997796fcf-g6tr7     1/1  Running  0  8m
kube-system   monitoring-influxdb-56fdcd96b-x2kvd    1/1  Running  0  8m
kube-system   tiller-deploy-54bcc55dd5-756gn         1/1  Running  0  2m
# 2 - secure the master via github oauth - immediately, to lock out crypto miners
http://ld.onap.info:8880
# 3 - delete the master from the hosts in rancher
http://ld.onap.info:8880
# 4 - create NFS share on master
https://us-east-2.console.aws.amazon.com/efs/home?region=us-east-2#/filesystems/fs-92xxxxx
# add -h 1.2.10 (if upgrading from 1.6.14 to 1.6.18 of rancher)
sudo logging-analytics/deploy/aws/oom_cluster_host_install.sh -n false -s <your domain/ip> -e fs-nnnnnn1b -r us-west-1 -t 371AEDC88zYAZdBXPM -c true -v true
# 5 - create NFS share and register each node - do this for all nodes
git clone https://gerrit.onap.org/r/logging-analytics
# add -h 1.2.10 (if upgrading from 1.6.14 to 1.6.18 of rancher)
sudo logging-analytics/deploy/aws/oom_cluster_host_install.sh -n true -s <your domain/ip> -e fs-nnnnnn1b -r us-west-1 -t 371AEDC88zYAZdBXPM -c true -v true
# it takes about 1 min to run the script and 1 minute for the etcd and healthcheck containers to go green on each host
# check the master cluster
kubectl top nodes
NAME                                          CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
ip-172-31-19-9.us-east-2.compute.internal     9036m       56%   53266Mi        43%
ip-172-31-21-129.us-east-2.compute.internal   6840m       42%   47654Mi        38%
ip-172-31-18-85.us-east-2.compute.internal    6334m       39%   49545Mi        40%
ip-172-31-26-114.us-east-2.compute.internal   3605m       22%   25816Mi        21%
# fix helm on the master after adding nodes - only if the server helm version is less than the client helm version (rancher 1.6.18 does not have this issue)
ubuntu@ip-172-31-14-89:~$ sudo helm version
Client: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.8.2", GitCommit:"a80231648a1473929271764b920a8e346f6de844", GitTreeState:"clean"}
ubuntu@ip-172-31-14-89:~$ sudo helm init --upgrade
$HELM_HOME has been configured at /home/ubuntu/.helm.
Tiller (the Helm server-side component) has been upgraded to the current version.
ubuntu@ip-172-31-14-89:~$ sudo helm version
Client: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
# 7a - manual: follow the helm plugin page
# https://wiki.onap.org/display/DW/OOM+Helm+%28un%29Deploy+plugins
sudo git clone https://gerrit.onap.org/r/oom
sudo cp -R ~/oom/kubernetes/helm/plugins/ ~/.helm
cd oom/kubernetes
sudo helm serve &
sudo make all
sudo make onap
sudo helm deploy onap local/onap --namespace onap
fetching local/onap
release "onap" deployed
release "onap-aaf" deployed
release "onap-aai" deployed
release "onap-appc" deployed
release "onap-clamp" deployed
release "onap-cli" deployed
release "onap-consul" deployed
release "onap-contrib" deployed
release "onap-dcaegen2" deployed
release "onap-dmaap" deployed
release "onap-esr" deployed
release "onap-log" deployed
release "onap-msb" deployed
release "onap-multicloud" deployed
release "onap-nbi" deployed
release "onap-oof" deployed
release "onap-policy" deployed
release "onap-pomba" deployed
release "onap-portal" deployed
release "onap-robot" deployed
release "onap-sdc" deployed
release "onap-sdnc" deployed
release "onap-sniro-emulator" deployed
release "onap-so" deployed
release "onap-uui" deployed
release "onap-vfc" deployed
release "onap-vid" deployed
release "onap-vnfsdk" deployed
# 7b - automated: after cluster is up - run cd.sh script to get onap up - customize your values.yaml - the 2nd time you run the script
# clean install - will clone new oom repo
sudo logging-analytics/deploy/cd.sh -b master -e onap -c true -d true -w true
# rerun install - no delete of oom repo
sudo logging-analytics/deploy/cd.sh -b master -e onap -c false -d true -w true
Two choices: run the single oom_deployment.sh via your ARM/CloudFormation/Heat template wrapper as a one-click, or use the 2-step procedure above.
entrypoint aws/azure/openstack | Ubuntu 16 rancher install | oom deployment CD script
---|---|---
Access the ONAP portal via the 8989 LoadBalancer that Mandeep Khinda merged in, documented at http://onap.readthedocs.io/en/latest/submodules/oom.git/docs/oom_user_guide.html#accessing-the-onap-portal-using-oom-and-a-kubernetes-cluster
ubuntu@a-onap-devopscd:~$ kubectl -n onap get services | grep "portal-app"
portal-app   LoadBalancer   10.43.145.94   13.68.113.105   8989:30215/TCP,8006:30213/TCP,8010:30214/TCP,8443:30225/TCP   20h
Add the following entries, prefixed with the IP above, to your client's /etc/hosts.
# in this case I am using the public 13... ip (elastic or generated public ip) - AWS in this example
13.68.113.105 portal.api.simpledemo.onap.org
13.68.113.105 vid.api.simpledemo.onap.org
13.68.113.105 sdc.api.fe.simpledemo.onap.org
13.68.113.105 portal-sdk.simpledemo.onap.org
13.68.113.105 policy.api.simpledemo.onap.org
13.68.113.105 aai.api.sparky.simpledemo.onap.org
13.68.113.105 cli.api.simpledemo.onap.org
13.68.113.105 msb.api.discovery.simpledemo.onap.org
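Instead of hardcoding the elastic IP, a hedged one-liner to derive it from the portal-app service (the awk column position is an assumption based on the kubectl output shown above):

PORTAL_IP=$(kubectl -n onap get services | grep portal-app | awk '{print $4}')
for h in portal.api vid.api sdc.api.fe portal-sdk policy.api aai.api.sparky cli.api msb.api.discovery; do
  echo "$PORTAL_IP $h.simpledemo.onap.org" | sudo tee -a /etc/hosts
done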
Launch http://portal.api.simpledemo.onap.org:8989/ONAPPORTAL/login.htm and log in with the demo user.
A single 122g R4.4xlarge VM in progress
see also
# helm install will bring up everything without the configmap failure - but the release is busted - pods come up though
ubuntu@ip-172-31-27-63:~$ sudo helm install local/onap -n onap --namespace onap -f onap/resources/environments/disable-allcharts.yaml --set aai.enabled=true --set dmaap.enabled=true --set log.enabled=true --set policy.enabled=true --set portal.enabled=true --set robot.enabled=true --set sdc.enabled=true --set sdnc.enabled=true --set so.enabled=true --set vid.enabled=true
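To check whether the release record actually survived despite the configmap issue - a hedged check, relying on the fact that Helm 2 stores release data as configmaps in kube-system:

# an empty list here while pods are up confirms the busted release
sudo helm list
kubectl get configmaps -n kube-system -l OWNER=TILLER | grep onap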
# | Team | container | Required y/n/fw/fwCL | Ram | Cpu | nodeport | Type | logback | Dependencies
---|---|---|---|---|---|---|---|---|---
 | | aaf | | | | | | |
 | | aai | y | | | | | |
 | | appc | fwCL | | | | | |
 | | clamp | fwCL | | | | | |
 | | cli | | | | | | |
 | | consul | | | | | | |
 | | dcaegen2 | fwCL | | | | | |
 | | dmaap | y | | | | | |
 | | esr | | | | | | |
 | | log | n | | | | | |
 | | sniro-emulator | n | | | | | |
 | | oof | n | | | | | |
 | | msb | n | | | | | |
 | | multicloud | n | | | | | |
 | | nbi | | | | | | |
 | | policy | y | | | | | |
 | | pomba | | | | | | |
 | | portal | y | | | | | |
 | | robot | y | | | | | |
 | | sdc | y | | | | | |
 | | sdnc | y | | | | | |
 | | so | y | | | | | |
 | | uui | n | | | | | |
 | | vfc | n | | | | | |
 | | vid | y | | | | | |
 | | vnfsdk | n | | | | | |
deployment | containers
---|---
minimum (no vfwCL) |
medium (vfwCL) |
full |
amdocs@ubuntu:~/_dev/oom/kubernetes$ kubectl get pods --all-namespaces | grep 0/1
onap  onap-aai-champ-68ff644d85-mpkb9     0/1  Running                0    1d
onap  onap-pomba-kibana-d76b6dd4c-j4q9m   0/1  Init:CrashLoopBackOff  472  1d
amdocs@ubuntu:~/_dev/oom/kubernetes$ kubectl get pods --all-namespaces | grep 1/2
onap  onap-aai-gizmo-856f86d664-mf587                         1/2  CrashLoopBackOff  568  1d
onap  onap-pomba-networkdiscovery-85d76975b7-w9sjl            1/2  CrashLoopBackOff  573  1d
onap  onap-pomba-networkdiscoveryctxbuilder-c89786dfc-rtdqc   1/2  CrashLoopBackOff  569  1d
onap  onap-vid-84c88db589-vbfht                               1/2  CrashLoopBackOff  616  1d
# with clamp and pomba enabled (ran clamp first)
amdocs@ubuntu:~/_dev/oom/kubernetes$ sudo helm upgrade -i onap local/onap --namespace onap -f dev.yaml
Error: UPGRADE FAILED: failed to create resource: Service "pomba-kibana" is invalid: spec.ports[0].nodePort: Invalid value: 30234: provided port is already allocated
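To hunt for nodePort collisions like the 30234 conflict above, a hedged one-liner (assumes your kubectl supports go-template output; not from the OOM scripts):

# print any nodePort allocated more than once across all services
kubectl get svc --all-namespaces -o go-template='{{range .items}}{{range .spec.ports}}{{if .nodePort}}{{.nodePort}}{{"\n"}}{{end}}{{end}}{{end}}' | sort -n | uniq -d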
see the AWS cluster install below
VMs | RAM | HD | vCores | Ports | Network
---|---|---|---|---|---
1 | 55-70G at startup | 40G per host min (30G for dockers), 100G after a week, 5G min per NFS, 4GBPS peak | 8 min, 60 peak at startup; recommended 16-64 vCores (need to reduce 152 pods to 110) | see list on PortProfile; recommend 0.0.0.0/0 (all open) inside VPC; block 10249-10255 outside; secure 8888 with oauth | 170 MB/sec peak, 1200
3+ | 85G; recommend min 3 x 64G class VMs, try for 4 | master: 40G; hosts: 80G (30G of dockers); NFS: 5G | 24 to 64 | |
This is a snapshot of the CD system running on Amazon AWS at http://jenkins.onap.info/job/oom-cd-master/ - it is a 1 + 4 node cluster composed of four 64G/8 vCore R4.2xlarge VMs.
Account Provider: (2) Robin of Amazon and Michael O'Brien of Amdocs
Amazon has donated an allocation of 512G of VM space (a large 4 x 122G/16 vCore cluster and a secondary 9 x 16G cluster) to run CD systems since Dec 2017 - at a cost savings of at least $500/month - thank you very much Amazon for supporting ONAP. See example max/med allocations for IT/Finance in ONAP Deployment Specification for Finance and Operations#AmazonAWS
Amazon AWS is currently hosting our RI for ONAP Continuous Deployment - this is a joint Proof Of Concept between Amazon and ONAP.
Auto Continuous Deployment via Jenkins and Kibana
https://docs.aws.amazon.com/cli/latest/userguide/cli-install-macos.html
obrien:obrienlabs amdocs$ pip --version
pip 9.0.1 from /Library/Python/2.7/site-packages/pip-9.0.1-py2.7.egg (python 2.7)
obrien:obrienlabs amdocs$ curl -O https://bootstrap.pypa.io/get-pip.py
obrien:obrienlabs amdocs$ python3 get-pip.py --user
Requirement already up-to-date: pip in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages
obrien:obrienlabs amdocs$ pip3 install awscli --upgrade --user
Successfully installed awscli-1.14.41 botocore-1.8.45 pyasn1-0.4.2 s3transfer-0.1.13
obrien:obrienlabs amdocs$ ssh ubuntu@<your domain/ip>
$ sudo apt install python-pip
$ pip install awscli --upgrade --user
$ aws --version
aws-cli/1.14.41 Python/2.7.12 Linux/4.4.0-1041-aws botocore/1.8.45
$ aws configure
AWS Access Key ID [None]: AK....Q
AWS Secret Access Key [None]: Dl....l
Default region name [None]: us-east-1
Default output format [None]: json
$ aws ec2 describe-regions --output table
|| ec2.ca-central-1.amazonaws.com | ca-central-1 ||
....
https://docs.aws.amazon.com/cli/latest/reference/ec2/allocate-address.html
$ aws ec2 allocate-address
{
  "PublicIp": "35.172..",
  "Domain": "vpc",
  "AllocationId": "eipalloc-2f743..."
}
$ cat route53-a-record-change-set.json
{"Comment": "comment",
 "Changes": [{
   "Action": "CREATE",
   "ResourceRecordSet": {
     "Name": "amazon.onap.cloud",
     "Type": "A",
     "TTL": 300,
     "ResourceRecords": [{ "Value": "35.172.36.." }]}}]}
$ aws route53 change-resource-record-sets --hosted-zone-id Z...7 --change-batch file://route53-a-record-change-set.json
{ "ChangeInfo": {
    "Status": "PENDING",
    "Comment": "comment",
    "SubmittedAt": "2018-02-17T15:02:46.512Z",
    "Id": "/change/C2QUNYTDVF453x" }}
$ dig amazon.onap.cloud
; <<>> DiG 9.9.7-P3 <<>> amazon.onap.cloud
amazon.onap.cloud.  300     IN  A   35.172.36..
onap.cloud.         172800  IN  NS  ns-1392.awsdns-46.org.
# request the usually cheapest $0.13 spot 64G EBS instance at AWS
aws ec2 request-spot-instances --spot-price "0.25" --instance-count 1 --type "one-time" --launch-specification file://aws_ec2_spot_cli.json
# don't pass in the following - it will be generated for the EBS volume
# "SnapshotId": "snap-0cfc17b071e696816"
# launch specification json
{
  "ImageId": "ami-c0ddd64ba",
  "InstanceType": "r4.2xlarge",
  "KeyName": "obrien_systems_aws_2015",
  "BlockDeviceMappings": [
    {"DeviceName": "/dev/sda1",
     "Ebs": {
       "DeleteOnTermination": true,
       "VolumeType": "gp2",
       "VolumeSize": 120 }}],
  "SecurityGroupIds": [ "sg-322c4nnn42" ]
}
# results
{ "SpotInstanceRequests": [{
    "Status": {
      "Message": "Your Spot request has been submitted for review, and is pending evaluation.",
      "Code": "pending-evaluation",
aws ec2 describe-spot-instance-requests --spot-instance-request-ids sir-1tyr5etg
"InstanceId": "i-02a653592cb748e2x",
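A hedged polling variant to watch the request until it is fulfilled (the --query expression is an addition; the request id is the one returned above):

# prints pending-evaluation / pending-fulfillment / fulfilled
aws ec2 describe-spot-instance-requests --spot-instance-request-ids sir-1tyr5etg --query 'SpotInstanceRequests[0].Status.Code' --output text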
This can be done separately, as long as it happens in the first 30 sec of initialization, before rancher starts on the instance.
$ aws ec2 associate-address --instance-id i-02a653592cb748e2x --allocation-id eipalloc-375c1d0x
{ "AssociationId": "eipassoc-a4b5a29x" }
$ aws ec2 reboot-instances --instance-ids i-02a653592cb748e2x
look at https://github.com/kubernetes-incubator/external-storage
"From the NFS wizard"
Setting up your EC2 instance
Mounting your file system
If you are unable to connect, see our troubleshooting documentation.
https://docs.aws.amazon.com/efs/latest/ug/mounting-fs.html
ubuntu@ip-172-31-19-239:~$ git clone https://gerrit.onap.org/r/logging-analytics
Cloning into 'logging-analytics'...
ubuntu@ip-172-31-19-239:~$ sudo cp logging-analytics/deploy/aws/oom_cluster_host_install.sh .
ubuntu@ip-172-31-19-239:~$ sudo ./oom_cluster_host_install.sh -n true -s <your domain/ip> -e fs-0000001b -r us-west-1 -t 5EA8A:15000:MWcEyoKw -c true -v
# fix helm after adding nodes to the master
ubuntu@ip-172-31-31-219:~$ sudo helm init --upgrade
$HELM_HOME has been configured at /home/ubuntu/.helm.
Tiller (the Helm server-side component) has been upgraded to the current version.
ubuntu@ip-172-31-31-219:~$ sudo helm repo add local http://127.0.0.1:8879
"local" has been added to your repositories
ubuntu@ip-172-31-31-219:~$ sudo helm repo list
NAME    URL
stable  https://kubernetes-charts.storage.googleapis.com
local   http://127.0.0.1:8879
Notice that we are vCore bound. Ideally we need 64 vCores for a minimal production system.
# setup the master
git clone https://gerrit.onap.org/r/logging-analytics
sudo logging-analytics/deploy/rancher/oom_rancher_setup.sh -b master -s <your domain/ip> -e onap
# manually delete the host that was installed on the master - in the rancher gui for now
# run without a client on the master
sudo logging-analytics/deploy/aws/oom_cluster_host_install.sh -n false -s <your domain/ip> -e fs-nnnnnn1b -r us-west-1 -t 371AEDC88zYAZdBXPM -c true -v true
ls /dockerdata-nfs/
onap  test.sh
# run the script from git on each cluster node
git clone https://gerrit.onap.org/r/logging-analytics
sudo logging-analytics/deploy/aws/oom_cluster_host_install.sh -n true -s <your domain/ip> -e fs-nnnnnn1b -r us-west-1 -t 371AEDC88zYAZdBXPM -c true -v true
# check a node
ls /dockerdata-nfs/
onap  test.sh
sudo docker ps
CONTAINER ID  IMAGE                            COMMAND                CREATED         STATUS                 NAMES
6e4a57e19c39  rancher/healthcheck:v0.3.3       "/.r/r /rancher-en..." 1 second ago    Up Less than a second  r-healthcheck-healthcheck-5-f0a8f5e8
f9bffc6d9b3e  rancher/network-manager:v0.7.19  "/rancher-entrypoi..." 1 second ago    Up 1 second            r-network-services-network-manager-5-103f6104
460f31281e98  rancher/net:holder               "/.r/r /rancher-en..." 4 seconds ago   Up 4 seconds           r-ipsec-ipsec-5-2e22f370
3e30b0cf91bb  rancher/agent:v1.2.9             "/run.sh run"          17 seconds ago  Up 16 seconds          rancher-agent
# On the master - fix helm after adding nodes to the master
sudo helm init --upgrade
$HELM_HOME has been configured at /home/ubuntu/.helm.
Tiller (the Helm server-side component) has been upgraded to the current version.
sudo helm repo add local http://127.0.0.1:8879
# check the cluster on the master
kubectl top nodes
NAME                                          CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
ip-172-31-16-85.us-west-1.compute.internal    129m        3%    1805Mi         5%
ip-172-31-25-15.us-west-1.compute.internal    43m         1%    1065Mi         3%
ip-172-31-28-145.us-west-1.compute.internal   40m         1%    1049Mi         3%
ip-172-31-21-240.us-west-1.compute.internal   30m         0%    965Mi          3%
# important: secure your rancher cluster by adding an oauth github account - to keep out crypto miners
http://cluster.onap.info:8880/admin/access/github
# now back to master to install onap
sudo cp logging-analytics/deploy/cd.sh .
sudo ./cd.sh -b master -e onap -c true -d true -w false -r false
136 pending > 0 at the 1st 15 sec interval
ubuntu@ip-172-31-28-152:~$ kubectl get pods -n onap | grep -E '1/1|2/2' | wc -l
20
120 pending > 0 at the 39th 15 sec interval
ubuntu@ip-172-31-28-152:~$ kubectl get pods -n onap | grep -E '1/1|2/2' | wc -l
47
99 pending > 0 at the 93rd 15 sec interval
# after an hour most of the 136 containers should be up
kubectl get pods --all-namespaces | grep -E '0/|1/2'
onap  onap-aaf-cs-59954bd86f-vdvhx     0/1  CrashLoopBackOff  7   37m
onap  onap-aaf-oauth-57474c586c-f9tzc  0/1  Init:1/2          2   37m
onap  onap-aai-champ-7d55cbb956-j5zvn  0/1  Running           0   37m
onap  onap-drools-0                    0/1  Init:0/1          0   1h
onap  onap-nexus-54ddfc9497-h74m2      0/1  CrashLoopBackOff  17  1h
onap  onap-sdc-be-777759bcb9-ng7zw     1/2  Running           0   1h
onap  onap-sdc-es-66ffbcd8fd-v8j7g     0/1  Running           0   1h
onap  onap-sdc-fe-75fb4965bd-bfb4l     0/2  Init:1/2          6   1h
# cpu bound - a small cluster has 4x4 cores - try to run with 4x16 cores
ubuntu@ip-172-31-28-152:~$ kubectl top nodes
NAME                                          CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
ip-172-31-28-145.us-west-1.compute.internal   3699m       92%   26034Mi        85%
ip-172-31-21-240.us-west-1.compute.internal   3741m       93%   3872Mi         12%
ip-172-31-16-85.us-west-1.compute.internal    3997m       99%   23160Mi        75%
ip-172-31-25-15.us-west-1.compute.internal    3998m       99%   27076Mi        88%
Notice that we are vCore bound. Ideally we need 64 vCores for a minimal production system - this runs with 12 x 4 vCores = 48.
30 min after the helm install starts - DCAE containers come up at 55 min
ssh ubuntu@ld.onap.info
# setup the master
git clone https://gerrit.onap.org/r/logging-analytics
sudo logging-analytics/deploy/rancher/oom_rancher_setup.sh -b master -s <your domain/ip> -e onap
# manually delete the host that was installed on the master - in the rancher gui for now
# get the token for use with the EFS/NFS share
ubuntu@ip-172-31-8-245:~$ cat ~/.kube/config | grep token
token: "QmFzaWMgTVVORk4wRkdNalF3UXpNNE9E.........RtNWxlbXBCU0hGTE1reEJVamxWTjJ0Tk5sWlVjZz09"
# run without a client on the master
ubuntu@ip-172-31-8-245:~$ sudo logging-analytics/deploy/aws/oom_cluster_host_install.sh -n false -s ld.onap.info -e fs-....eb -r us-east-2 -t QmFzaWMgTVVORk4wRkdNalF3UX..........aU1dGSllUVkozU0RSTmRtNWxlbXBCU0hGTE1reEJVamxWTjJ0Tk5sWlVjZz09 -c true -v true
ls /dockerdata-nfs/
onap  test.sh
# run the script from git on each cluster node
git clone https://gerrit.onap.org/r/logging-analytics
sudo logging-analytics/deploy/aws/oom_cluster_host_install.sh -n true -s <your domain/ip> -e fs-nnnnnn1b -r us-west-1 -t 371AEDC88zYAZdBXPM -c true -v true
ubuntu@ip-172-31-8-245:~$ kubectl top nodes
NAME                                          CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
ip-172-31-14-254.us-east-2.compute.internal   45m         1%    1160Mi         7%
ip-172-31-3-195.us-east-2.compute.internal    29m         0%    1023Mi         6%
ip-172-31-2-105.us-east-2.compute.internal    31m         0%    1004Mi         6%
ip-172-31-0-159.us-east-2.compute.internal    30m         0%    1018Mi         6%
ip-172-31-12-122.us-east-2.compute.internal   34m         0%    1002Mi         6%
ip-172-31-0-197.us-east-2.compute.internal    30m         0%    1015Mi         6%
ip-172-31-2-244.us-east-2.compute.internal    123m        3%    2032Mi         13%
ip-172-31-11-30.us-east-2.compute.internal    38m         0%    1142Mi         7%
ip-172-31-9-203.us-east-2.compute.internal    33m         0%    998Mi          6%
ip-172-31-1-101.us-east-2.compute.internal    32m         0%    996Mi          6%
ip-172-31-9-128.us-east-2.compute.internal    31m         0%    1037Mi         6%
ip-172-31-3-141.us-east-2.compute.internal    30m         0%    1011Mi         6%
# now back to master to install onap
sudo cp logging-analytics/deploy/cd.sh .
sudo ./cd.sh -b master -e onap -c true -d true -w false -r false
# after an hour most of the 136 containers should be up
kubectl get pods --all-namespaces | grep -E '0/|1/2'
oom_rancher_install.sh is in review under https://gerrit.onap.org/r/#/c/32019/
see
cd.sh is in review under https://gerrit.onap.org/r/#/c/32653/
Scenario: installing Rancher on clean Ubuntu 16.04 64g VM (single collocated server/host) and the master branch of onap via OOM deployment (2 scripts)
1 hour video of automated installation on an AWS EC2 spot instance
$ aws ec2 terminate-instances --instance-ids i-0040425ac8c0d8f6x
{ "TerminatingInstances": [{
    "InstanceId": "i-0040425ac8c0d8f63",
    "CurrentState": { "Code": 32, "Name": "shutting-down" },
    "PreviousState": { "Code": 16, "Name": "running" } }]}
Video on Installing and Running the ONAP Demos#ONAPDeploymentVideos
We can run ONAP on an AWS EC2 instance for $0.17/hour, as opposed to Rackspace at $1.12/hour, for a 64G Ubuntu host VM.
I have created an AMI on Amazon AWS under the following ID that has a reference 20170825 tag of ONAP 1.0 running on top of Rancher
ami-b8f3f3c3 : onap-oom-k8s-10
EIP 34.233.240.214 maps to http://dev.onap.info:8880/env/1a7/infra/hosts
A D2.2xlarge with 61G ram on the spot market https://console.aws.amazon.com/ec2sp/v1/spot/launch-wizard?region=us-east-1 at $0.16/hour for all of ONAP
It may take up to 3-8 min for kubernetes pods to initialize as long as you preload the docker images
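A hedged pre-pull sketch (assumes the cluster already knows the image list from a prior deployment; this loop is not part of the OOM scripts):

# pre-pull every image the onap namespace references so pod init stays in the 3-8 min range
for i in $(kubectl get pods -n onap -o jsonpath='{..image}' | tr ' ' '\n' | sort -u); do
  sudo docker pull "$i"
done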
Workaround for the disk space error - even though we are running with a 1.9 TB NVMe SSD
https://github.com/kubernetes/kubernetes/issues/48703
Use a flavor that uses EBS like M4.4xLarge which is OK - except for AAI right now
r4.2xlarge is the smallest and most cost effective 64g min instance to use for full ONAP deployment - it requires EBS stores. This is assuming 1 instance up at all times and a couple ad-hoc instances up a couple hours for testing/experimentation.
Resource Correspondence
ID | Type | Parent | AWS | Openstack
---|---|---|---|---
https://console.aws.amazon.com/cloudformation/designer/home?region=us-east-1#
Part of getting another infrastructure provider like AWS to work with ONAP will be in identifying and decoupling southbound logic from any particular cloud provider using an extensible plugin architecture on the SBI interface.
see Multi VIM/Cloud (5/11/17), VID project (5/17/17), Service Orchestrator (5/14/17), ONAP Operations Manager (5/10/17), ONAP Operations Manager / ONAP on Containers
Replace the DCAE Controller
Cloudify is Tosca based - https://github.com/cloudify-cosmo/cloudify-aws-plugin
https://istio.io/docs/setup/kubernetes/quick-start/
Waiting for the EC2 C5 instance types under the C620 chipset to arrive at AWS so we can experiment under EC2 Spot - http://technewshunter.com/cpus/intel-launches-xeon-w-cpus-for-workstations-skylake-sp-ecc-for-lga2066-41771/ https://aws.amazon.com/about-aws/whats-new/2016/11/coming-soon-amazon-ec2-c5-instances-the-next-generation-of-compute-optimized-instances/
http://docs.aws.amazon.com/cli/latest/userguide/cli-install-macos.html
use
curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip" unzip awscli-bundle.zip sudo ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws aws --version aws-cli/1.11.170 Python/2.7.13 Darwin/16.7.0 botocore/1.7.28 |
You need an NFS share between the VMs in your Kubernetes cluster - an Elastic File System share will wrap NFS.
"From the NFS wizard"
Setting up your EC2 instance
Mounting your file system
If you are unable to connect, see our troubleshooting documentation.
https://docs.aws.amazon.com/efs/latest/ug/mounting-fs.html
Automated
Manual
ubuntu@ip-10-0-0-66:~$ sudo apt-get install nfs-common
ubuntu@ip-10-0-0-66:~$ cd /
ubuntu@ip-10-0-0-66:~$ sudo mkdir /dockerdata-nfs
root@ip-10-0-0-19:/# sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 fs-43b2763a.efs.us-east-2.amazonaws.com:/ /dockerdata-nfs
# write something on one vm - and verify it shows on another
ubuntu@ip-10-0-0-8:~$ ls /dockerdata-nfs/
test.sh
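To make the mount survive reboots, a hedged fstab addition (same mount options as above; the file system ID is the example one from this section):

# persist the EFS mount across reboots, then verify
echo "fs-43b2763a.efs.us-east-2.amazonaws.com:/ /dockerdata-nfs nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,_netdev 0 0" | sudo tee -a /etc/fstab
sudo mount -a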
Subscription Sponsor: (1) Microsoft
Deliverables are deployment scripts, arm/cli templates for various deployment scenarios (single, multiple, federated servers)
In review
Automation is currently only written for a single VM that hosts both the rancher server and the deployed onap pods. Use the ARM template below to deploy and provision your VM (adjust your config parameters).
Two choices: run the single oom_deployment.sh ARM wrapper, or use it to bring up an empty VM and run oom_entrypoint.sh manually. Once the VM comes up, the oom_entrypoint.sh script will run - it downloads the oom_rancher_setup.sh script to set up docker, rancher, kubernetes, and helm - the entrypoint script then runs the cd.sh script to bring up onap based on your values.yaml config by running helm install on it.
# login to az cli, wget the deployment script, arm template and parameters file - edit the parameters file (dns, ssh key ...) and run the arm template
wget https://git.onap.org/logging-analytics/plain/deploy/azure/oom_deployment.sh
wget https://git.onap.org/logging-analytics/plain/deploy/azure/_arm_deploy_onap_cd.json
wget https://git.onap.org/logging-analytics/plain/deploy/azure/_arm_deploy_onap_cd_z_parameters.json
# either run the entrypoint which creates a resource template and runs the stack - or do those two commands manually
./oom_deployment.sh -b master -s azure.onap.cloud -e onap -r a_auto-youruserid_20180421 -t arm_deploy_onap_cd.json -p arm_deploy_onap_cd_z_parameters.json
# wait for the VM to finish in about 75 min or watch progress by ssh'ing into the vm and doing
root@ons-auto-201803181110z: sudo tail -f /var/lib/waagent/custom-script/download/0/stdout
# if you wish to run the oom_entrypoint script yourself - edit/break the cloud init section at the end of the arm template and do it yourself below
# download and edit values.yaml with your onap preferences and openstack tenant config
wget https://jira.onap.org/secure/attachment/11414/values.yaml
# download and run the bootstrap and onap install script, the -s server name can be an IP, FQDN or hostname
wget https://git.onap.org/logging-analytics/plain/deploy/rancher/oom_entrypoint.sh
chmod 777 oom_entrypoint.sh
sudo ./oom_entrypoint.sh -b master -s devops.onap.info -e onap
# wait 15 min for rancher to finish, then 30-90 min for onap to come up
# 20181015 - delete the deployment, recreate the onap environment in rancher with the template adjusted for more than the default 110 container limit - by adding --max-pods=500
# then redo the helm install
see https://jira.onap.org/secure/attachment/11455/oom_openstack.yaml and https://jira.onap.org/secure/attachment/11454/oom_openstack_oom.env
see https://git.onap.org/logging-analytics/tree/deploy/rancher/oom_entrypoint.sh
customize your template (true/false for any components, docker overrides etc...)
https://jira.onap.org/secure/attachment/11414/values.yaml
Run oom_entrypoint.sh after you have verified values.yaml - it will run both scripts below for you - a single-node kubernetes setup running what you configured in values.yaml will be up in 50-90 min. If you want to just configure your VM without bringing up ONAP - comment out the cd.sh line and run that separately.
see wget https://git.onap.org/logging-analytics/plain/deploy/rancher/oom_rancher_setup.sh
see wget https://git.onap.org/logging-analytics/plain/deploy/cd.sh
Verify your system is up by running kubectl get pods --all-namespaces and checking port 8880 to bring up the rancher or kubernetes gui.
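A hedged verification snippet (the domain placeholder follows the convention used elsewhere on this page):

# anything not Running/Completed still needs attention
kubectl get pods --all-namespaces | grep -vE 'Running|Completed'
# the rancher gui should answer on 8880
curl -sI http://<your domain/ip>:8880 | head -1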
https://portal.azure.com/#blade/HubsExtension/Resources/resourceType/Microsoft.Resources%2Fresources
see
az group create --name onap_eastus --location eastus
az group deployment create --resource-group onap_eastus --template-file oom_azure_arm_deploy.json --parameters @oom_azure_arm_deploy_parameters.json
The oom_entrypoint.sh script will be run as a cloud-init script on the VM - see
which runs
see
kubectl get pods --all-namespaces
# raise/lower onap components from the installed directory if using the oneclick arm template
# amsterdam only
root@ons-auto-master-201803191429z:/var/lib/waagent/custom-script/download/0/oom/kubernetes/oneclick# ./createAll.bash -n onap
Azure subscription
https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest
Install homebrew first (reinstall if you are on the latest OSX 10.13.2 https://github.com/Homebrew/install because of 3718)
Will install Python 3.6
$ brew update
$ brew install azure-cli
https://docs.microsoft.com/en-us/cli/azure/get-started-with-azure-cli?view=azure-cli-latest
$ az login
To sign in, use a web browser to open the page https://aka.ms/devicelogin and enter the code E..D to authenticate.
[ { "cloudName": "AzureCloud",
    "id": "f4...b",
    "isDefault": true,
    "name": "Pay-As-You-Go",
    "state": "Enabled",
    "tenantId": "bcb.....f",
    "user": { "name": "michael@....org", "type": "user" }}]
https://docs.microsoft.com/en-us/cli/azure/install-azure-cli-apt?view=azure-cli-latest
# in root
AZ_REPO=$(lsb_release -cs)
echo "deb [arch=amd64] https://packages.microsoft.com/repos/azure-cli/ $AZ_REPO main" | sudo tee /etc/apt/sources.list.d/azure-cli.list
apt-key adv --keyserver packages.microsoft.com --recv-keys 52E16F86FEE04B979B07E28DB02C46DF417A0893
apt-get install apt-transport-https
apt-get update && sudo apt-get install azure-cli
az login
# verify
root@ons-dmz:~# ps -ef | grep az
root 1427 1 0 Mar17 ? 00:00:00 /usr/lib/linux-tools/4.13.0-1011-azure/hv_vss_daemon -n
https://docs.microsoft.com/en-us/cli/azure/install-azure-cli-windows?view=azure-cli-latest
Follow https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-manager-create-first-template
$ az login
To sign in, use a web browser to open the page https://aka.ms/devicelogin and enter the code E...Z to authenticate.
$ az group create --name examplegroup --location "South Central US"
{ "id": "/subscriptions/f4b...e8b/resourceGroups/examplegroup",
  "location": "southcentralus",
  "managedBy": null,
  "name": "examplegroup",
  "properties": { "provisioningState": "Succeeded" },
  "tags": null }
obrien:obrienlabs amdocs$ vi azuredeploy_storageaccount.json
obrien:obrienlabs amdocs$ az group deployment create --resource-group examplegroup --template-file azuredeploy_storageaccount.json
{ "id": "/subscriptions/f4...e8b/resourceGroups/examplegroup/providers/Microsoft.Resources/deployments/azuredeploy_storageaccount",
  "name": "azuredeploy_storageaccount",
  "properties": {
    "additionalProperties": {
      "duration": "PT32.9822642S",
      "outputResources": [{
        "id": "/subscriptions/f4..e8b/resourceGroups/examplegroup/providers/Microsoft.Storage/storageAccounts/storagekj6....kk2w",
        "resourceGroup": "examplegroup" }],
      "templateHash": "11440483235727994285"},
    "correlationId": "41a0f79..90c291",
    "debugSetting": null,
    "dependencies": [],
    "mode": "Incremental",
    "outputs": {},
    "parameters": {},
    "parametersLink": null,
    "providers": [{
      "id": null,
      "namespace": "Microsoft.Storage",
      "registrationState": null,
      "resourceTypes": [{
        "aliases": null,
        "apiVersions": null,
        "locations": [ "southcentralus" ],
        "properties": null,
        "resourceType": "storageAccounts" }]}],
    "provisioningState": "Succeeded",
    "template": null,
    "templateLink": null,
    "timestamp": "2018-02-17T16:15:11.562170+00:00" },
  "resourceGroup": "examplegroup"}
az account list-locations
# northcentralus for example
# create a resource group if not already there
az group create --name obrien_jenkins_b_westus2 --location westus2
We need a 128G VM with at least 8vCores (peak is 60) and a 100+GB drive. The sizes are detailed on https://docs.microsoft.com/en-ca/azure/virtual-machines/windows/sizes-memory - we will use the Standard_D32s_v3 type
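To confirm a flavor is actually offered in your region before editing the template - a hedged az cli check (region and flavor names taken from this section):

az vm list-sizes --location eastus --output table | grep -E 'D32s_v3|E8s_v3'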
We need an "all open 0.0.0.0/0" security group and a reassociated data drive as boot drive - see the arm template in LOG-321
see open review in
"ubuntuOSVersion": "16.04.0-LTS" "imagePublisher": "Canonical", "imageOffer": "UbuntuServer", "vmSize": "Standard_E8s_v3" "osDisk": {"createOption": "FromImage"},"dataDisks": [{"diskSizeGB": 511,"lun": 0, "createOption": "Empty" }] |
Follow
https://github.com/Azure/azure-quickstart-templates/tree/master/101-acs-kubernetes
https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-template-deploy
https://github.com/Azure/azure-quickstart-templates/tree/master/101-vm-simple-linux
It needs a security group https://docs.microsoft.com/en-us/azure/virtual-network/virtual-networks-create-nsg-arm-template
{ "apiVersion": "2017-03-01", "type": "Microsoft.Network/networkSecurityGroups", "name": "[variables('networkSecurityGroupName')]", "location": "[resourceGroup().location]", "tags": { "displayName": "NSG - Front End" }, "properties": { "securityRules": [ { "name": "in-rule", "properties": { "description": "All in", "protocol": "Tcp", "sourcePortRange": "*", "destinationPortRange": "*", "sourceAddressPrefix": "Internet", "destinationAddressPrefix": "*", "access": "Allow", "priority": 100, "direction": "Inbound" } }, { "name": "out-rule", "properties": { "description": "All out", "protocol": "Tcp", "sourcePortRange": "*", "destinationPortRange": "*", "sourceAddressPrefix": "Internet", "destinationAddressPrefix": "*", "access": "Allow", "priority": 101, "direction": "Outbound" } } ] } } , { "apiVersion": "2017-04-01", "type": "Microsoft.Network/virtualNetworks", "name": "[variables('virtualNetworkName')]", "location": "[resourceGroup().location]", "dependson": [ "[concat('Microsoft.Network/networkSecurityGroups/', variables('networkSecurityGroupName'))]" ], "properties": { "addressSpace": { "addressPrefixes": [ "[variables('addressPrefix')]" ] }, "subnets": [ { "name": "[variables('subnetName')]", "properties": { "addressPrefix": "[variables('subnetPrefix')]", "networkSecurityGroup": { "id": "[resourceId('Microsoft.Network/networkSecurityGroups', variables('networkSecurityGroupName'))]" } } } ] } }, |
# validate first (validate instead of create)
az group deployment create --resource-group obrien_jenkins_b_westus2 --template-file oom_azure_arm_deploy.json --parameters @oom_azure_arm_cd_amsterdam_deploy_parameters.json
Use the entrypoint script in
# clone the oom repo to get the install directory
git clone https://gerrit.onap.org/r/oom
# run the Rancher RI installation (to install kubernetes)
oom/install/rancher/oom_rancher_install.sh -b master -s 192.168.240.32 -e onap
# run the oom deployment script
# get a copy of onap-parameters.yaml and place in this folder
oom/install/deployment/cd.sh -b master -s 192.168.240.32 -e onap
oom_rancher_install.sh is in review under https://gerrit.onap.org/r/#/c/32019/
cd.sh is in review under https://gerrit.onap.org/r/#/c/32653/
# delete the vm and resources
az group deployment delete --resource-group ONAPAMDOCS --name oom_azure_arm_deploy
# the above deletion will not delete the actual resources - only a delete of the group or each individual resource works
# optionally delete the resource group
az group delete --name ONAPAMDOCS -y
az network public-ip create --name onap-argon --resource-group a_ONAP_argon_prod_donotdelete --location eastus --allocation-method Static
Follow https://docs.microsoft.com/en-us/azure/aks/tutorial-kubernetes-deploy-cluster
obrienbiometrics:obrienlabs michaelobrien$ az provider register -n Microsoft.ContainerService
Registering is still on-going. You can monitor using 'az provider show -n Microsoft.ContainerService'
http://aka.ms/corequotaincrease
https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest
Deployment failed. Correlation ID: 4b4707a7-2244-4557-855e-11bcced556de. Provisioning of resource(s) for container service onapAKSCluster in resource group onapAKS failed. Message: Operation results in exceeding quota limits of Core. Maximum allowed: 10, Current in use: 10, Additional requested: 1. Please read more about quota increase at http://aka.ms/corequotaincrease.. Details: |
obrienbiometrics:obrienlabs michaelobrien$ az aks create --resource-group onapAKS --name onapAKSCluster --node-count 1 --generate-ssh-keys
- Running ..
"fqdn": "onapaksclu-onapaks-f4....3.hcp.eastus.azmk8s.io",
The cluster will start with a 3.5G VM before scaling
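Once the cluster exists, a hedged sketch for scaling past that initial node and wiring up kubectl (resource names taken from the az aks create above; the node count of 3 is an assumption):

az aks scale --resource-group onapAKS --name onapAKSCluster --node-count 3
az aks get-credentials --resource-group onapAKS --name onapAKSCluster
kubectl get nodes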
Resources for your AKS cluster
A resource group makes it easier to package and remove everything for a deployment - essentially making the deployment stateless
Global or local to the resource group?
Register a CNAME for an existing domain and use the same IP address everytime the deployment comes up
How to attach the cloud init script to provision the VM
ARM template chaining
passing derived variables into the next ARM template - for example when bringing up an entire federated set in one or more DCs
see script attached to
It takes about 2 min for DNS entries to propagate out from A record DNS changes. For example the following IP/DNS association took 2 min to appear in dig.
obrienbiometrics:onap_oom_711_azure michaelobrien$ dig azure.onap.info
; <<>> DiG 9.9.7-P3 <<>> azure.onap.info
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 10599
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;azure.onap.info. IN A
;; ANSWER SECTION:
azure.onap.info. 251 IN A 52.224.233.230
;; Query time: 68 msec
;; SERVER: 8.8.8.8#53(8.8.8.8)
;; WHEN: Tue Feb 20 10:26:59 EST 2018
;; MSG SIZE rcvd: 60

obrienbiometrics:onap_oom_711_azure michaelobrien$ dig azure.onap.info
; <<>> DiG 9.9.7-P3 <<>> azure.onap.info
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 30447
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;azure.onap.info. IN A
;; ANSWER SECTION:
azure.onap.info. 299 IN A 13.92.225.167
;; Query time: 84 msec
;; SERVER: 8.8.8.8#53(8.8.8.8)
;; WHEN: Tue Feb 20 10:27:04 EST 2018
# Inside the corporate firewall - avoid it
PS C:\> az login
Please ensure you have network connection. Error detail: HTTPSConnectionPool(host='login.microsoftonline.com', port=443): Max retries exceeded with url: /common/oauth2/devicecode?api-version=1.0 (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x04D18730>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed',))
# at home or cell hotspot
PS C:\> az login
To sign in, use a web browser to open the page https://aka.ms/devicelogin and enter the code E...2W to authenticate.
[ { "cloudName": "AzureCloud",
    "id": "4...da1",
    "isDefault": true,
    "name": "Microsoft Azure Internal Consumption",
    "state": "Enabled",
    "tenantId": "72f98....47",
    "user": { "name": "fran...ocs.com", "type": "user" }}]
# On a corporate account (need a permissions bump to be able to create a resource group prior to running an arm template)
# https://wiki.onap.org/display/DW/ONAP+on+Kubernetes+on+Microsoft+Azure#ONAPonKubernetesonMicrosoftAzure-ARMTemplate
PS C:\> az group create --name onapKubernetes --location eastus
The client 'fra...s.com' with object id '08f98c7e-...ed' does not have authorization to perform action 'Microsoft.Resources/subscriptions/resourcegroups/write' over scope '/subscriptions/42e...87da1/resourcegroups/onapKubernetes'.
# try my personal = OK
PS C:\> az login
To sign in, use a web browser to open the page https://aka.ms/devicelogin and enter the code EE...ULR to authenticate.
Terminate batch job (Y/N)? y
# hangs on first-time login on a new pc
PS C:\> az login
To sign in, use a web browser to open the page https://aka.ms/devicelogin and enter the code E.PBKS to authenticate.
[ { "cloudName": "AzureCloud",
    "id": "f4b...b",
    "isDefault": true,
    "name": "Pay-As-You-Go",
    "state": "Enabled",
    "tenantId": "bcb...f4f",
    "user": { "name": "michael@obrien...org", "type": "user" }}]
PS C:\> az group create --name onapKubernetes2 --location eastus
{ "id": "/subscriptions/f4b....b/resourceGroups/onapKubernetes2",
  "location": "eastus",
  "managedBy": null,
  "name": "onapKubernetes2",
  "properties": { "provisioningState": "Succeeded" },
  "tags": null}
I find that deleting a deployment removes the deployment object but not the underlying resources. The workaround is to delete the resource group - but in some constrained subscriptions the CLI user may not be able to create a resource group, and hence cannot delete one.
see
https://github.com/Azure/azure-sdk-for-java/issues/1167
deleting the resources manually is a workaround, for now, if you cannot create/delete resource groups
# delete the vm and resources
az group deployment delete --resource-group ONAPAMDOCS --name oom_azure_arm_deploy
# the above deletion will not delete the actual resources - only a delete of the group or each individual resource works
# optionally delete the resource group
az group delete --name ONAPAMDOCS -y
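If you cannot delete the resource group itself, a hedged sketch that enumerates and deletes the individual resources the deployment left behind (the group name is the example one above):

#!/bin/bash
# list every resource left in the group, then delete each by id
# note: resources with dependencies (e.g. a NIC attached to a VM) may need more than one pass
RG=ONAPAMDOCS
for ID in $(az resource list --resource-group ${RG} --query "[].id" -o tsv); do
  echo "deleting ${ID}"
  az resource delete --ids ${ID}
done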
However, modifying the template to add resources works well - for example, adding a reference to a network security group.
20180228: Resize the OS disk
ONAP requires at least 75G of disk - the issue is that in most Azure VM templates the OS disk is only 30G - we need to either switch to the data disk or resize the OS disk.
# add diskSizeGB to the template
"osDisk": {
  "diskSizeGB": 255,
  "createOption": "FromImage"
},

ubuntu@oom-auto-deploy:~$ df
Filesystem     1K-blocks    Used Available Use% Mounted on
udev            65989400       0  65989400   0% /dev
tmpfs           13201856    8848  13193008   1% /run
/dev/sda1      259142960 1339056 257787520   1% /
tmpfs           66009280       0  66009280   0% /dev/shm
tmpfs               5120       0      5120   0% /run/lock
tmpfs           66009280       0  66009280   0% /sys/fs/cgroup
none                  64       0        64   0% /etc/network/interfaces.dynamic.d
/dev/sdb1      264091588   60508 250592980   1% /mnt
tmpfs           13201856       0  13201856   0% /run/user/1000
ubuntu@oom-auto-deploy:~$ free
          total      used       free  shared buff/cache  available
Mem:  132018560    392336  131242164    8876     384060  131012328
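If the root partition does not automatically pick up the larger osDisk, a minimal sketch to grow it in place - assuming the OS disk is /dev/sda with the root partition at /dev/sda1 and an ext4 filesystem (growpart comes from cloud-guest-utils):

# grow the root partition to fill the resized disk, then grow the filesystem
sudo apt-get install -y cloud-guest-utils
sudo growpart /dev/sda 1
sudo resize2fs /dev/sda1
df -h /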
in review under OOM-715
https://jira.onap.org/secure/attachment/11206/oom_entrypoint.sh
If using Amsterdam, swap out the onap-parameters.yaml (the curl is hardcoded to a master branch version).
Use this method instead of installing the az CLI directly - for certain corporate OAuth configurations.
https://azure.microsoft.com/en-us/features/storage-explorer/
Install Storage Explorer using the name and access key of a storage account created manually or via the browser-based az CLI.
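If you do create the storage account from the CLI, a hedged sketch to create one and pull the access key that Storage Explorer asks for (account and group names are illustrative):

# create a storage account and list its keys - plug the name/key into Storage Explorer
az storage account create --name onapstorage123 --resource-group onapKubernetes2 --location eastus --sku Standard_LRS
az storage account keys list --account-name onapstorage123 --resource-group onapKubernetes2 -o table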
See https://docs.microsoft.com/en-us/azure/templates/microsoft.compute/virtualmachines/extensions - it looks like Azure has a setup similar to AWS .ebextensions.
Targeting
Property | Type | Required | Description
---|---|---|---
type | string | No | Specifies the type of the extension; an example is "CustomScriptExtension".
https://docs.microsoft.com/en-us/azure/virtual-machines/linux/extensions-customscript
deprecated

{
  "apiVersion": "2015-06-15",
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "name": "[concat(parameters('vmName'),'/onap')]",
  "location": "[resourceGroup().location]",
  "dependsOn": ["[concat('Microsoft.Compute/virtualMachines/', parameters('vmName'))]"],
  "properties": {
    "publisher": "Microsoft.Azure.Extensions",
    "type": "CustomScript",
    "typeHandlerVersion": "1.9",
    "autoUpgradeMinorVersion": true,
    "settings": {
      "fileUris": ["https://jira.onap.org/secure/attachment/11263/oom_entrypoint.sh"],
      "commandToExecute": "[concat('./' , parameters('scriptName'), ' -b master -s dns/pub/pri-ip -e onap' )]"
    }
  }
}

use

{
  "apiVersion": "2017-12-01",
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "name": "[concat(parameters('vmName'),'/onap')]",
  "location": "[resourceGroup().location]",
  "dependsOn": ["[concat('Microsoft.Compute/virtualMachines/', parameters('vmName'))]"],
  "properties": {
    "publisher": "Microsoft.Azure.Extensions",
    "type": "CustomScript",
    "typeHandlerVersion": "2.0",
    "autoUpgradeMinorVersion": true,
    "settings": {
      "fileUris": ["https://jira.onap.org/secure/attachment/11281/oom_entrypoint.sh"],
      "commandToExecute": "[concat('./' , parameters('scriptName'), ' -b master ', ' -s ', 'ons-auto-201803181110z', ' -e onap' )]"
    }
  }
}
ubuntu@ons-dmz:~$ ./oom_deployment.sh
Deployment template validation failed: 'The template resource 'entrypoint' for type 'Microsoft.Compute/virtualMachines/extensions' at line '1' and column '6182' has incorrect segment lengths. A nested resource type must have identical number of segments as its resource name. A root resource type must have segment length one greater than its resource name. Please see https://aka.ms/arm-template/#resources for usage details.'.
ubuntu@ons-dmz:~$ ./oom_deployment.sh
Deployment failed. Correlation ID: 532b9a9b-e0e8-4184-9e46-6c2e7c15e7c7. {
"error": {
"code": "ParentResourceNotFound",
"message": "Can not perform requested operation on nested resource. Parent resource '[concat(parameters('vmName'),'' not found."
}
}
fixed 20180318:1600
Install runs - but I need visibility - checking /var/lib/waagent/custom-script/download/0/
progress
./oom_deployment.sh
# 7 min to delete old deployment
ubuntu@ons-dmz:~$ az vm extension list -g a_ONAP_auto_201803181110z --vm-name ons-auto-201803181110z
..
"provisioningState": "Creating",
"settings": {
  "commandToExecute": "./oom_entrypoint.sh -b master -s ons-auto-201803181110zons-auto-201803181110z.eastus.cloudapp.azure.com -e onap",
  "fileUris": [
    "https://jira.onap.org/secure/attachment/11263/oom_entrypoint.sh"

ubuntu@ons-auto-201803181110z:~$ sudo su -
root@ons-auto-201803181110z:~# docker ps
CONTAINER ID  IMAGE                   COMMAND                 CREATED        STATUS        PORTS                             NAMES
83458596d7a6  rancher/server:v1.6.14  "/usr/bin/entry /u..."  3 minutes ago  Up 3 minutes  3306/tcp, 0.0.0.0:8880->8080/tcp  rancher_server
root@ons-auto-201803181110z:~# tail -f /var/log/azure/custom-script/handler.log
time=2018-03-18T22:51:59Z version=v2.0.6/git@1008306-clean operation=enable seq=0 file=0 event="download complete" output=/var/lib/waagent/custom-script/download/0
time=2018-03-18T22:51:59Z version=v2.0.6/git@1008306-clean operation=enable seq=0 event="executing command" output=/var/lib/waagent/custom-script/download/0
time=2018-03-18T22:51:59Z version=v2.0.6/git@1008306-clean operation=enable seq=0 event="executing public commandToExecute" output=/var/lib/waagent/custom-script/download/0
root@ons-auto-201803181110z:~# docker ps
CONTAINER ID  IMAGE                   COMMAND                 CREATED         STATUS         PORTS                             NAMES
539733f24c01  rancher/agent:v1.2.9    "/run.sh run"           13 seconds ago  Up 13 seconds                                    rancher-agent
83458596d7a6  rancher/server:v1.6.14  "/usr/bin/entry /u..."  5 minutes ago   Up 5 minutes   3306/tcp, 0.0.0.0:8880->8080/tcp  rancher_server
root@ons-auto-201803181110z:~# ls -la /var/lib/waagent/custom-script/download/0/
total 31616
-rw-r--r-- 1 root   root   16325186 Aug 31  2017 helm-v2.6.1-linux-amd64.tar.gz
-rw-r--r-- 1 root   root          4 Mar 18 22:55 kube_env_id.json
drwxrwxr-x 2 ubuntu ubuntu     4096 Mar 18 22:53 linux-amd64
-r-x------ 1 root   root       2822 Mar 18 22:51 oom_entrypoint.sh
-rwxrwxrwx 1 root   root       7288 Mar 18 22:52 oom_rancher_setup.sh
-rwxr-xr-x 1 root   root   12213376 Mar 18 22:53 rancher
-rw-r--r-- 1 root   root    3736787 Dec 20 19:41 rancher-linux-amd64-v0.6.7.tar.gz
drwxr-xr-x 2 root   root       4096 Dec 20 19:39 rancher-v0.6.7
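As an alternative to rerunning the whole ARM deployment, the same CustomScript extension can be pushed to an existing VM from the CLI - a hedged sketch reusing the extension parameters from the template above:

# apply the entrypoint script to a running VM via the CustomScript v2 extension
az vm extension set \
  --resource-group a_ONAP_auto_201803181110z \
  --vm-name ons-auto-201803181110z \
  --publisher Microsoft.Azure.Extensions \
  --name CustomScript \
  --version 2.0 \
  --settings '{"fileUris": ["https://jira.onap.org/secure/attachment/11281/oom_entrypoint.sh"], "commandToExecute": "./oom_entrypoint.sh -b master -s ons-auto-201803181110z -e onap"}'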
testing via http://jenkins.onap.cloud/job/oom_azure_deployment/
We need the IP address and not the domain name - via a linked template
or
https://docs.microsoft.com/en-us/azure/templates/microsoft.network/publicipaddresses
https://github.com/Azure/azure-quickstart-templates/issues/583
ARM templates cannot specify a static IP without a private subnet.
Substitute reference(variables('publicIPAddressName')).ipAddress for reference(variables('nicName')).ipConfigurations[0].properties.privateIPAddress
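Outside of the template, the resolved addresses can also be pulled from the CLI - a minimal sketch (resource names are illustrative):

# resolved public IP of the deployment
az network public-ip show --resource-group ONAPAMDOCS --name publicIPAddressName --query ipAddress -o tsv
# private IP on the NIC
az network nic show --resource-group ONAPAMDOCS --name nicName --query "ipConfigurations[0].privateIpAddress" -o tsv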
Using the hostname instead of the private/public ip works (verify /etc/hosts though)
obrienbiometrics:oom michaelobrien$ ssh ubuntu@13.99.207.60
ubuntu@ons-auto-201803181110z:~$ sudo su -
root@ons-auto-201803181110z:/var/lib/waagent/custom-script/download/0# cat stdout
INFO: Running Agent Registration Process, CATTLE_URL=http://ons-auto-201803181110z:8880/v1
INFO: Attempting to connect to: http://ons-auto-201803181110z:8880/v1
INFO: http://ons-auto-201803181110z:8880/v1 is accessible
INFO: Inspecting host capabilities
INFO: Boot2Docker: false
INFO: Host writable: true
INFO: Token: xxxxxxxx
INFO: Running registration
INFO: Printing Environment
INFO: ENV: CATTLE_ACCESS_KEY=9B0FA1695A3E3CFD07DB
INFO: ENV: CATTLE_HOME=/var/lib/cattle
INFO: ENV: CATTLE_REGISTRATION_ACCESS_KEY=registrationToken
INFO: ENV: CATTLE_REGISTRATION_SECRET_KEY=xxxxxxx
INFO: ENV: CATTLE_SECRET_KEY=xxxxxxx
INFO: ENV: CATTLE_URL=http://ons-auto-201803181110z:8880/v1
INFO: ENV: DETECTED_CATTLE_AGENT_IP=172.17.0.1
INFO: ENV: RANCHER_AGENT_IMAGE=rancher/agent:v1.2.9
INFO: Launched Rancher Agent: b44bd62fd21c961f32f642f7c3b24438fc4129eabbd1f91e1cf58b0ed30b5876
waiting 7 min for host registration to finish
1 more min
KUBECTL_TOKEN base64 encoded: QmFzaWMgUWpBNE5EWkdRlRNN.....Ukc1d2MwWTJRZz09
run the following if you installed a higher kubectl version than the server
helm init --upgrade
Verify all pods up on the kubernetes system - will return localhost:8080 until a host is added
kubectl get pods --all-namespaces
NAMESPACE    NAME                                   READY  STATUS   RESTARTS  AGE
kube-system  heapster-76b8cd7b5-v5jrd               1/1    Running  0         5m
kube-system  kube-dns-5d7b4487c9-9bwk5              3/3    Running  0         5m
kube-system  kubernetes-dashboard-f9577fffd-cpwv7   1/1    Running  0         5m
kube-system  monitoring-grafana-997796fcf-s4sjm     1/1    Running  0         5m
kube-system  monitoring-influxdb-56fdcd96b-2mn6r    1/1    Running  0         5m
kube-system  tiller-deploy-cc96d4f6b-fll4t          1/1    Running  0         5m
In AWS we can select the "no reboot" option and create an image from a running VM as-is with no effect on the running system.
Having issues with the Azure image creator - it asks for the ubuntu password even though I only use key-based access.
aka: traveller's guide
If you run into issues doing a make all, your helm server is likely not running:
# rerun
helm serve &
helm repo add local http://127.0.0.1:8879
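To confirm the local repo server is actually up before rerunning make all - a one-line check (helm serve answers on 127.0.0.1:8879 and serves the chart index at /index.yaml):

# the index should return chart entries if helm serve is running
curl -s http://127.0.0.1:8879/index.yaml | head -5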
We need a cloud-native NFS wrapper like EFS (AWS) - looking at Azure Files.
(Links below from Microsoft - thank you)
General Azure Documentation
Azure Site http://azure.microsoft.com
Azure Documentation Site https://docs.microsoft.com/en-us/azure/
Azure Training Courses https://azure.microsoft.com/en-us/training/free-online-courses/
Azure Portal http://portal.azure.com
Developer Documentation
Azure AD Authentication Libraries https://docs.microsoft.com/en-us/azure/active-directory/develop/active-directory-authentication-libraries
Java Overview on Azure https://azure.microsoft.com/en-us/develop/java/
Java Docs for Azure https://docs.microsoft.com/en-us/java/azure/
Java SDK on GitHub https://github.com/Azure/azure-sdk-for-java
Python Overview on Azure https://azure.microsoft.com/en-us/develop/python/
Python Docs for Azure https://docs.microsoft.com/en-us/python/azure/
Python SDK on GitHub https://github.com/Azure/azure-sdk-for-python
REST Api and CLI Documentation
REST API Documentation https://docs.microsoft.com/en-us/rest/api/
CLI Documentation https://docs.microsoft.com/en-us/cli/azure/index
Other Documentation
Using Automation for VM shutdown & startup https://docs.microsoft.com/en-us/azure/automation/automation-solution-vm-management
Azure Resource Manager (ARM) QuickStart Templates https://github.com/Azure/azure-quickstart-templates
The code in this github repo has 2-month-old copies of cd.sh and oom_rancher_install.sh
https://github.com/taranki/onap-azure
Use the official ONAP code in
https://gerrit.onap.org/r/logging-analytics
The original seed source from 2017 below is deprecated - use the ONAP links above
https://github.com/obrienlabs/onap-root
https://azure.microsoft.com/en-us/services/container-service/
https://docs.microsoft.com/en-us/azure/templates/microsoft.compute/virtualmachines
https://kubernetes.io/docs/concepts/containers/images/#using-azure-container-registry-acr
https://azure.microsoft.com/en-us/features/storage-explorer/
https://docs.microsoft.com/en-ca/azure/virtual-machines/linux/capture-image
Account Provider: Michael O'Brien of Amdocs
The purpose of this page is to detail getting ONAP on Kubernetes (OOM) setup on a GCE VM.
I recommend using ONAP on Kubernetes on Amazon EC2 with the Amazon EC2 Spot API - it runs around $0.12-0.25/hr at 75% off, instead of the $0.60 below (33% off for reserved instances). This page is here so we can support GCE and also work with the Kubernetes open source project in the space it was originally designed in at Google.
Log in to your Google account and start creating a 128G Ubuntu 16.04 VM.
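The VM can also be created from the gcloud CLI instead of the console - a hedged sketch, assuming the ubuntu-1604-lts image family; the instance name, zone and the n1-highmem-32 flavor (208G, the closest standard flavor at or above the 128G class) are illustrative:

# create a 128G-boot-disk Ubuntu 16.04 VM - name, zone and flavor are illustrative
gcloud compute instances create onap-k8s \
  --zone us-east1-b \
  --machine-type n1-highmem-32 \
  --image-family ubuntu-1604-lts \
  --image-project ubuntu-os-cloud \
  --boot-disk-size 128GB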
Components:
Status        | Name                                                 | ID                       | Size
Not Installed | App Engine Go Extensions                             | app-engine-go            | 97.7 MiB
Not Installed | Cloud Bigtable Command Line Tool                     | cbt                      | 4.0 MiB
Not Installed | Cloud Bigtable Emulator                              | bigtable                 | 3.5 MiB
Not Installed | Cloud Datalab Command Line Tool                      | datalab                  | < 1 MiB
Not Installed | Cloud Datastore Emulator                             | cloud-datastore-emulator | 17.7 MiB
Not Installed | Cloud Datastore Emulator (Legacy)                    | gcd-emulator             | 38.1 MiB
Not Installed | Cloud Pub/Sub Emulator                               | pubsub-emulator          | 33.2 MiB
Not Installed | Emulator Reverse Proxy                               | emulator-reverse-proxy   | 14.5 MiB
Not Installed | Google Container Local Builder                       | container-builder-local  | 3.7 MiB
Not Installed | Google Container Registry's Docker credential helper | docker-credential-gcr    | 2.2 MiB
Not Installed | gcloud Alpha Commands                                | alpha                    | < 1 MiB
Not Installed | gcloud Beta Commands                                 | beta                     | < 1 MiB
Not Installed | gcloud app Java Extensions                           | app-engine-java          | 116.0 MiB
Not Installed | gcloud app PHP Extensions                            | app-engine-php           | 21.9 MiB
Not Installed | gcloud app Python Extensions                         | app-engine-python        | 6.2 MiB
Not Installed | kubectl                                              | kubectl                  | 15.9 MiB
Installed     | BigQuery Command Line Tool                           | bq                       | < 1 MiB
Installed     | Cloud SDK Core Libraries                             | core                     | 5.9 MiB
Installed     | Cloud Storage Command Line Tool                      | gsutil                   | 3.3 MiB

==> Source [/Users/michaelobrien/gce/google-cloud-sdk/completion.bash.inc] in your profile to enable shell command completion for gcloud.
==> Source [/Users/michaelobrien/gce/google-cloud-sdk/path.bash.inc] in your profile to add the Google Cloud SDK command line tools to your $PATH.

gcloud init
obrienbiometrics:google-cloud-sdk michaelobrien$ source ~/.bash_profile
obrienbiometrics:google-cloud-sdk michaelobrien$ gcloud components update
All components are up to date.
obrienbiometrics:google-cloud-sdk michaelobrien$ gcloud compute ssh instance-1
WARNING: The public SSH key file for gcloud does not exist.
WARNING: The private SSH key file for gcloud does not exist.
WARNING: You do not have an SSH key for gcloud.
WARNING: SSH keygen will be executed to generate a key.
Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /Users/michaelobrien/.ssh/google_compute_engine.
Your public key has been saved in /Users/michaelobrien/.ssh/google_compute_engine.pub.
The key fingerprint is:
SHA256:kvS8ZIE1egbY+bEpY1RGN45ruICBo1WH8fLWqO435+Y michaelobrien@obrienbiometrics.local
The key's randomart image is:
+---[RSA 2048]----+
|    o=o+* o      |
|   . .oo+*.= .   |
|o o ..=.=+.      |
|.o o ++X+o       |
|. . ..BoS        |
|     + * .       |
|    . . .        |
|     . o o       |
|      .o. *E     |
+----[SHA256]-----+
Updating project ssh metadata.../Updated [https://www.googleapis.com/compute/v1/projects/onap-184300].
Updating project ssh metadata...done.
Waiting for SSH key to propagate.
Warning: Permanently added 'compute.2865548946042680113' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.10.0-37-generic x86_64)
 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage
  Get cloud support with Ubuntu Advantage Cloud Guest:
    http://www.ubuntu.com/business/services/cloud
0 packages can be updated.
0 updates are security updates.
michaelobrien@instance-1:~$
We need at least port 8880 for rancher
obrienbiometrics:20171027_log_doc michaelobrien$ gcloud compute firewall-rules create open8880 --allow tcp:8880 --source-tags=instance-1 --source-ranges=0.0.0.0/0 --description="8880"
Creating firewall...|Created [https://www.googleapis.com/compute/v1/projects/onap-184300/global/firewalls/open8880].
Creating firewall...done.
NAME      NETWORK  DIRECTION  PRIORITY  ALLOW     DENY
open8880  default  INGRESS    1000      tcp:8880
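The kubernetes NodePort range can be opened the same way for REST/UI access - a hedged example (tighten the source range in anything other than a short-lived lab):

# open the default kubernetes NodePort range - tighten --source-ranges in production
gcloud compute firewall-rules create open-nodeports \
  --allow tcp:30000-32767 \
  --source-ranges=0.0.0.0/0 \
  --description="kubernetes nodeports"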
ONAP on Kubernetes#QuickstartInstallation
in progress - needs values.yaml global override
ubuntu@a-onap-devopscd:~$ docker run -d -p 5000:5000 --restart=unless-stopped --name registry -e REGISTRY_PROXY_REMOTEURL=https://nexus3.onap.org:10001 registry:2
Unable to find image 'registry:2' locally
2: Pulling from library/registry
Status: Downloaded newer image for registry:2
bd216e444f133b30681dab8b144a212d84e1c231cc12353586b7010b3ae9d24b
ubuntu@a-onap-devopscd:~$ sudo docker ps | grep registry
bd216e444f13  registry:2  "/entrypoint.sh /e..."  2 minutes ago  Up About a minute  0.0.0.0:5000->5000/tcp  registry
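To actually point the charts at the local proxy, the docker repository override goes into the global values - a hedged sketch, assuming the oom kubernetes/onap/values.yaml global.repository key (verify the key name against your oom branch):

# point ONAP charts at the local registry proxy instead of nexus3 directly
cat <<EOF > ~/local-registry-override.yaml
global:
  repository: localhost:5000
EOF
# pass the override at deploy time (helm 2 syntax, -n = release name)
helm install local/onap -n onap --namespace onap -f ~/local-registry-override.yaml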
http://dev.onap.info:8880/r/projects/1a7/kubernetes-dashboard:9090/#!/pod?namespace=_all
Check that the tiller container is in state Running - not just that the tiller-deploy pod exists.
ubuntu@a-onap-devops:~$ kubectl get pods --all-namespaces
NAMESPACE    NAME                                      READY  STATUS   RESTARTS  AGE
kube-system  heapster-6cfb49f776-9lqt2                 1/1    Running  0         20d
kube-system  kube-dns-75c8cb4ccb-tw992                 3/3    Running  0         20d
kube-system  kubernetes-dashboard-6f4c8b9cd5-rcbp2     1/1    Running  0         20d
kube-system  monitoring-grafana-76f5b489d5-r99rh       1/1    Running  0         20d
kube-system  monitoring-influxdb-6fc88bd58d-h875w      1/1    Running  0         20d
kube-system  tiller-deploy-645bd55c5d-bmxs7            1/1    Running  0         20d
onap         logdemonode-logdemonode-5c8bffb468-phbzd  2/2    Running  0         20d
onap         onap-log-elasticsearch-7557486bc4-72vpw   1/1    Running  0         20d
onap         onap-log-kibana-fc88b6b79-d88r7           1/1    Running  0         20d
onap         onap-log-logstash-9jlf2                   1/1    Running  0         20d
onap         onap-portal-app-8486dc7ff8-tssd2          2/2    Running  0         5d
onap         onap-portal-cassandra-8588fbd698-dksq5    1/1    Running  0         5d
onap         onap-portal-db-7d6b95cd94-66474           1/1    Running  0         5d
onap         onap-portal-sdk-77cd558c98-6rsvq          2/2    Running  0         5d
onap         onap-portal-widget-6469f4bc56-hms24       1/1    Running  0         5d
onap         onap-portal-zookeeper-5d8c598c4c-hck2d    1/1    Running  0         5d
onap         onap-robot-6f99cb989f-kpwdr               1/1    Running  0         20d
ubuntu@a-onap-devops:~$ kubectl describe pod tiller-deploy-645bd55c5d-bmxs7 -n kube-system
Name:          tiller-deploy-645bd55c5d-bmxs7
Namespace:     kube-system
Node:          a-onap-devops/172.17.0.1
Start Time:    Mon, 30 Jul 2018 22:20:09 +0000
Labels:        app=helm
               name=tiller
               pod-template-hash=2016811718
Annotations:   <none>
Status:        Running
IP:            10.42.0.5
Controlled By: ReplicaSet/tiller-deploy-645bd55c5d
Containers:
  tiller:
    Container ID:  docker://a26420061a01a5791401c2519974c3190bf9f53fce5a9157abe7890f1f08146a
    Image:         gcr.io/kubernetes-helm/tiller:v2.8.2
    Image ID:      docker-pullable://gcr.io/kubernetes-helm/tiller@sha256:9b373c71ea2dfdb7d42a6c6dada769cf93be682df7cfabb717748bdaef27d10a
    Port:          44134/TCP
    Command:       /tiller --v=2
    State:         Running
      Started:     Mon, 30 Jul 2018 22:20:14 +0000
    Ready:         True
There is a built-in grafana dashboard (thanks Mandeep Khinda and James MacNider) that, once enabled, shows more detail about the cluster you are running - you need to expose the nodeport and target the VM the pod is on.
The CD system one is running below http://master3.onap.info:32628/dashboard/db/cluster?orgId=1&from=now-12h&to=now
# expose the nodeport
kubectl expose -n kube-system deployment monitoring-grafana --type=LoadBalancer --name monitoring-grafana-client
service "monitoring-grafana-client" exposed
# get the nodeport the service is exposed on
kubectl get services --all-namespaces -o wide | grep graf
kube-system  monitoring-grafana         ClusterIP     10.43.44.197   <none>        80/TCP          7d   k8s-app=grafana
kube-system  monitoring-grafana-client  LoadBalancer  10.43.251.214  18.222.4.161  3000:32628/TCP  15s  k8s-app=grafana,task=monitoring
# get the cluster vm DNS name
ubuntu@ip-10-0-0-169:~$ kubectl get pods --all-namespaces -o wide | grep graf
kube-system  monitoring-grafana-997796fcf-7kkl4  1/1  Running  0  5d  10.42.84.138  ip-10-0-0-80.us-east-2.compute.internal
see also
ONAP Development#KubernetesDevOps
see
The following is missing some sections and is a bit out of date (v2 is deprecated in favor of v3).
Step | Notes
---|---
Get an openlab account - Integration / Developer Lab Access | Stephen Gooch provides excellent/fast service - raise a JIRA like the following
Install openVPN - Using Lab POD-ONAP-01 Environment | For OSX both Viscosity and TunnelBlick work fine
Login to Openstack |
Install openstack command line tools | Tutorial: Configuring and Starting Up the Base ONAP Stack#InstallPythonvirtualenvTools(optional,butrecommended)
Get your v3 rc file |
Verify your openstack cli access (or just use the jumpbox) |
Get some elastic IPs | You may need to release unused IPs from other tenants - as we have 4 pools of 50
Fill in your stack env parameters | To fill in your config (mso) settings in values.yaml follow https://onap.readthedocs.io/en/beijing/submodules/oom.git/docs/oom_quickstart_guide.html section "To generate openStackEncryptedPasswordHere" - example: cat encryption.key returns aa3871669d893c7fb8abbcda31b88b4f, then echo -n "55" \| openssl aes-128-ecb -e -K aa3871669d893c7fb8abbcda31b88b4f -nosalt \| xxd -c 256 -p returns a355b08d52c73762ad9915d98736b23b
Run the HEAT stack to create the kubernetes undercloud VMs |
ssh in |
Install the Kubernetes stack (rancher, k8s, helm) | Get the latest oom_entrypoint.sh until OOM-710 is merged - attached directly on the JIRA
Create the NFS share | see the sketch after this table
Deploy onap |
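A minimal sketch of the NFS share step - assuming the standard OOM /dockerdata-nfs mount, with the master exporting to the worker VMs (adjust the export options and substitute the master's private IP):

# on the NFS server (master) - export /dockerdata-nfs
sudo apt-get install -y nfs-kernel-server
sudo mkdir -p /dockerdata-nfs
echo "/dockerdata-nfs *(rw,no_root_squash,no_subtree_check)" | sudo tee -a /etc/exports
sudo systemctl restart nfs-kernel-server

# on each worker node - mount the share
sudo apt-get install -y nfs-common
sudo mkdir -p /dockerdata-nfs
sudo mount <master-private-ip>:/dockerdata-nfs /dockerdata-nfs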
Accessing an external Node Port
# get pod names and the actual VM that any pod is on
ubuntu@ip-10-0-0-169:~$ kubectl get pods --all-namespaces -o wide | grep log-
onap  onap-log-elasticsearch-756cfb559b-wk8c6  1/1  Running  0   2h  10.42.207.254  ip-10-0-0-227.us-east-2.compute.internal
onap  onap-log-kibana-6bb55fc66b-kxtg6         0/1  Running  16  1h  10.42.54.76    ip-10-0-0-111.us-east-2.compute.internal
onap  onap-log-logstash-689ccb995c-7zmcq       1/1  Running  0   2h  10.42.166.241  ip-10-0-0-111.us-east-2.compute.internal
onap  onap-vfc-catalog-5fbdfc7b6c-xc84b        2/2  Running  0   2h  10.42.206.141  ip-10-0-0-227.us-east-2.compute.internal
# get nodeport
ubuntu@ip-10-0-0-169:~$ kubectl get services --all-namespaces -o wide | grep log-
onap  log-es       NodePort   10.43.82.53    <none>  9200:30254/TCP  2h  app=log-elasticsearch,release=onap
onap  log-es-tcp   ClusterIP  10.43.90.198   <none>  9300/TCP        2h  app=log-elasticsearch,release=onap
onap  log-kibana   NodePort   10.43.167.146  <none>  5601:30253/TCP  2h  app=log-kibana,release=onap
onap  log-ls       NodePort   10.43.250.182  <none>  5044:30255/TCP  2h  app=log-logstash,release=onap
onap  log-ls-http  ClusterIP  10.43.81.173   <none>  9600/TCP        2h  app=log-logstash,release=onap
# check nodeport outside container
ubuntu@ip-10-0-0-169:~$ curl ip-10-0-0-111.us-east-2.compute.internal:30254
{
  "name" : "-pEf9q9",
  "cluster_name" : "onap-log",
  "cluster_uuid" : "ferqW-rdR_-Ys9EkWw82rw",
  "version" : {
    "number" : "5.5.0",
    "build_hash" : "260387d",
    "build_date" : "2017-06-30T23:16:05.735Z",
    "build_snapshot" : false,
    "lucene_version" : "6.6.0"
  },
  "tagline" : "You Know, for Search"
}
# check inside docker container - for reference
ubuntu@ip-10-0-0-169:~$ kubectl exec -it -n onap onap-log-elasticsearch-756cfb559b-wk8c6 bash
[elasticsearch@onap-log-elasticsearch-756cfb559b-wk8c6 ~]$ curl http://127.0.0.1:9200
{
  "name" : "-pEf9q9",
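To build the external URL for any service programmatically, the nodeport and the hosting node can be pulled with jsonpath - a small sketch for the log-es service from the listing above (the NODE column position may vary by kubectl version):

# nodeport of the log-es service
PORT=$(kubectl get svc log-es -n onap -o jsonpath='{.spec.ports[0].nodePort}')
# node hosting the elasticsearch pod (column 7 of kubectl get pods -o wide here)
NODE=$(kubectl get pods -n onap -o wide | grep log-elasticsearch | awk '{print $7}')
curl ${NODE}:${PORT}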
Longest lived deployment so far
NAMESPACE    NAME                                   READY  STATUS   RESTARTS  AGE
kube-system  heapster-6cfb49f776-479mx              1/1    Running  7         59d
kube-system  kube-dns-75c8cb4ccb-sqxbr              3/3    Running  45        59d
kube-system  kubernetes-dashboard-6f4c8b9cd5-w5xr2  1/1    Running  8         59d
kube-system  monitoring-grafana-76f5b489d5-sj9tl    1/1    Running  6         59d
kube-system  monitoring-influxdb-6fc88bd58d-22vg2   1/1    Running  6         59d
kube-system  tiller-deploy-8b6c5d4fb-4rbb4          1/1    Running  7         19d
ONAP runs best on a large cluster. As of 20180508 there are 152 pods (above the 110 limit per VM). ONAP is also vCPU bound - therefore try to run with a minimum of 24 vCores, ideally 32 to 64.
Even though most replicaSets are set at 3, try to have at least 4 nodes so we can survive a node failure and still be able to run all the pods. The memory profile is around 85G right now.
ONAP requires certain ports, defined in a security group, open by CIDR to several static domain names in order to deploy. At runtime the list is reduced.
Ideally these are all inside a private network.
It looks like we will need a standard public/private network locked down behind a combined ACL/SG for AWS VPC or a NSG for Azure where we only expose what we need outside the private network.
Still working on a list of ports, but we should not need any of these exposed if we use a bastion/jumpbox + NAT combo inside the network.
https://medium.com/handy-tech/analysis-of-a-kubernetes-hack-backdooring-through-kubelet-823be5c3d67c
https://github.com/kubernetes/kubernetes/pull/59666 fixed in Kubernetes 1.10
ONAP on deployment will require the following incoming and outgoing ports. Note: within ONAP rest calls between components will be handled inside the Kubernetes namespace by the DNS server running as part of K8S.
port | protocol | incoming/outgoing | application | source | destination | Notes
---|---|---|---|---|---|---
22 | ssh | incoming | ssh | developer vm | host |
443 | https | incoming | tiller | client | host |
8880 | http | incoming | rancher | client | host |
9090 | http | incoming | kubernetes | client | host |
10001 | https | outgoing | nexus3 | host | nexus3.onap.org |
10003 | https | outgoing | nexus3 | host | nexus3.onap.org |
 | https | outgoing | nexus | host | nexus.onap.org |
 | https/ssh | outgoing | git | host | git.onap.org |
30200-30399 | http/https | incoming | REST api | developer vm | host |
32628 | http | incoming | grafana | client | host | dashboard for the kubernetes cluster - must be enabled
5005 | tcp | incoming | java debug port | developer vm | host |
Lockdown ports | | | | | |
8080 | | outgoing | | | |
10250-10255 | | in/out | | | | Lock these down via VPC or a source CIDR that equals only the server/client IP list https://medium.com/handy-tech/analysis-of-a-kubernetes-hack-backdooring-through-kubelet-823be5c3d67c
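A quick connectivity probe for the outgoing dependencies in the table, useful before a deploy behind a restrictive firewall - a minimal sketch using netcat:

# verify the cluster can reach the ONAP artifact/git endpoints before deploying
for target in nexus3.onap.org:10001 nexus3.onap.org:10003 nexus.onap.org:443 git.onap.org:443; do
  HOST=${target%%:*}; PORT=${target##*:}
  nc -zvw5 ${HOST} ${PORT} && echo "${target} reachable" || echo "${target} BLOCKED"
done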
The generated host registration docker call is the same as the one generated by the wiki - minus the server IP (currently a single node cluster).
https://kubernetes.io/docs/user-guide/kubectl-cheatsheet/
ONAP on Kubernetes#QuickstartInstallation
https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/
https://kubernetes.io/docs/tasks/job/fine-parallel-processing-work-queue/