Developer Setup
Gerrit/Git
Quickstarts
Committing Code
# stage your changes
git add .
git commit -am "your commit message"
# commit your staged changes with sign-off
git commit -s --amend
# add Issue-ID after Change-ID
# submit your commit to ONAP Gerrit for review
git review
# goto https://gerrit.onap.org/r/#/dashboard/self
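For reference, the final commit message after the sign-off/amend steps looks roughly like this (the Issue-ID value and Change-Id hash here are illustrative - the Change-Id is generated by the commit-msg hook and the Signed-off-by line by -s):

update rancher version to 1.6.18

Issue-ID: LOG-326
Change-Id: I0123456789abcdef0123456789abcdef01234567
Signed-off-by: Michael O'Brien <frank.obrien@amdocs.com>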
Amending existing gerrit changes in review
# add new files/changes
git add .
# don't use -m - keep the same Issue-ID: line from the original commit
git commit --amend
git review -R
# see the change set number increase - https://gerrit.onap.org/r/#/c/17203/2
If you get a 404 on the commit hooks, reconfigure them as follows - see https://lists.onap.org/pipermail/onap-discuss/2018-May/009737.html
curl -kLo `git rev-parse --git-dir`/hooks/commit-msg http://gerrit.onap.org/r/tools/hooks/commit-msg
chmod +x `git rev-parse --git-dir`/hooks/commit-msg
git commit --amend
git review -R
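A quick sanity check that the hook is back in place (same paths as above):

# verify the hook is present and executable
ls -l `git rev-parse --git-dir`/hooks/commit-msg
# after the amend, the commit should carry a Change-Id footer
git log -1 | grep Change-Id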
Filter gerrit reviews - Thanks Mandeep Khinda
https://gerrit.onap.org/r/#/q/is:reviewer+AND+status:open+AND+label:Code-Review%253D0
Workstation configuration
Ubuntu 16.04 on VMware Workstation 14 or Fusion 8
# start with a clean VM; I use root, you can use the recommended non-root account
apt install openjdk-8-jdk
apt-get install ubuntu-desktop
apt-get install git
apt-get install maven
# scp the onap gerrit key into the VM from the host macbook
obrien:obrienlabs amdocs$ scp ~/.ssh/onap_rsa amdocs@192.168.211.129:~/
root@obriensystemsu0:~# cp /home/amdocs/onap_rsa .
ls /home/amdocs/.m2
cp onap_rsa ~/.ssh/id_rsa
chmod 400 ~/.ssh/id_rsa
# test your gerrit access
sudo git config --global --add gitreview.username michaelobrien
sudo git config --global user.email frank.obrien@amdocs.com
sudo mkdir log-326-rancher-ver
cd log-326-rancher-ver/
sudo git clone ssh://michaelobrien@gerrit.onap.org:29418/logging-analytics
cd logging-analytics/
sudo vi deploy/rancher/oom_rancher_setup.sh
sudo git add deploy/rancher/oom_rancher_setup.sh
# setup git-review
sudo apt-get install git-review
sudo git config --global gitreview.remote origin
# upload a patch
sudo git commit -am "update rancher version to 1.6.18"
# 2nd line should be "Issue-ID: LOG-326"
sudo git commit -s --amend
sudo git review
Your change was committed before the commit hook was installed.
Amending the commit to add a gerrit change id.
remote: Processing changes: new: 1, refs: 1, done
remote: New Changes:
remote:   https://gerrit.onap.org/r/55299 update rancher version to 1.6.18
remote: To ssh://michaelobrien@gerrit.onap.org:29418/logging-analytics
 * [new branch]      HEAD -> refs/publish/master
# see https://gerrit.onap.org/r/#/c/55299/
OSX 10.13
# turn off host checking
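A minimal ~/.ssh/config sketch that turns off host checking for the ONAP Gerrit endpoint (the username and key path are the ones used in the Ubuntu example above - adjust to yours):

Host gerrit.onap.org
    Port 29418
    User michaelobrien
    IdentityFile ~/.ssh/onap_rsa
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null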
Maven Configuration
Add ~/.m2/settings.xml from https://jira.onap.org/secure/attachment/10829/settings.xml
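The linked file is the authoritative version; structurally it is a standard Maven settings.xml with the ONAP nexus repositories wired in. An illustrative sketch of the shape only - ids and URLs here are examples, use the download above:

<settings>
  <profiles>
    <profile>
      <id>onap</id>
      <repositories>
        <repository>
          <id>onap-releases</id>
          <url>https://nexus.onap.org/content/repositories/releases/</url>
        </repository>
      </repositories>
    </profile>
  </profiles>
  <activeProfiles>
    <activeProfile>onap</activeProfile>
  </activeProfiles>
</settings>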
Test your environment
Verify maven builds work
This will also test connectivity to nexus.onap.org.
# get clone string from https://gerrit.onap.org/r/#/admin/projects/logging-analytics
# anon for now
git clone https://gerrit.onap.org/r/logging-analytics
amdocs@obriensystemsu0:~$ cd logging-analytics/
amdocs@obriensystemsu0:~/logging-analytics$ mvn clean install -U
[INFO] Finished at: 2018-06-22T16:11:47-05:00
Helm/Rancher/Kubernetes/Docker stack installation
Either install all the current versions manually or use the oom_rancher_setup.sh script shown below.
# fully automated (override 1.6.14 to 1.6.18)
amdocs@obriensystemsu0:~$ sudo logging-analytics/deploy/rancher/oom_rancher_setup.sh -b master -s 192.168.211.129 -e onap
# or docker only, if your kubernetes cluster is in a separate vm
curl https://releases.rancher.com/install-docker/17.03.sh | sh
Verify Docker can pull from nexus3
ubuntu@ip-10-0-0-144:~$ sudo docker login -u docker -p docker nexus3.onap.org:10001
Login Succeeded
ubuntu@ip-10-0-0-144:~$ sudo docker pull docker.elastic.co/beats/filebeat:5.5.0
5.5.0: Pulling from beats/filebeat
e6e5bfbc38e5: Pull complete
ubuntu@ip-10-0-0-144:~$ sudo docker pull nexus3.onap.org:10001/aaionap/haproxy:1.1.0
1.1.0: Pulling from aaionap/haproxy
10a267c67f42: Downloading [==============================================>  ] 49.07 MB/52.58 MB
Install Eclipse or STS
Download and run the installer from https://www.eclipse.org/downloads/download.php?file=/oomph/epp/oxygen/R2/eclipse-inst-linux64.tar.gz
Increase the memory allocation to -Xmx4096m in eclipse.ini.
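For reference, an illustrative eclipse.ini excerpt - only the -Xmx value needs to change; the surrounding -vmargs lines are whatever the installer generated:

-vmargs
-Dosgi.requiredJavaVersion=1.8
-Xms512m
-Xmx4096m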
Start Eclipse with: sudo /root/eclipse/jee-oxygen/eclipse/eclipse &
Developer Testing
Sonar
Having trouble getting the "run-sonar" command to run Sonar - it skips the modules in the pom.
Looking at verifying Sonar coverage locally using EclEmma instead.
Kubernetes DevOps
Restarting a container
The container will automatically restart.
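A minimal sketch of exercising this, assuming the container is removed out-of-band on its host VM (the container/pod names are illustrative, borrowed from the listings elsewhere on this page) - the kubelet recreates containers of a running pod because the pod restart policy is Always:

# on the node hosting the container
sudo docker ps | grep log-kibana
sudo docker rm -f <container-id>
# the pod's RESTARTS counter increments as the container comes back
kubectl get pods -n onap | grep onap-log-kibana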
Restarting a pod
If you change configuration, such as the logback.xml in a pod, or would like to restart an entire pod, such as the log and portal pods:
cd oom/kubernetes
ubuntu@ip-172-31-19-23:~/oom/kubernetes$ sudo helm upgrade -i onap local/onap --namespace onap --set log.enabled=false
# wait and check in another terminal for all containers to terminate
ubuntu@ip-172-31-19-23:~$ kubectl get pods --all-namespaces | grep onap-log
onap          onap-log-elasticsearch-7557486bc4-5mng9   0/1       CrashLoopBackOff   9          29m
onap          onap-log-kibana-fc88b6b79-nt7sd           1/1       Running            0          35m
onap          onap-log-logstash-c5z4d                   1/1       Terminating        0          4h
onap          onap-log-logstash-ftxfz                   1/1       Terminating        0          4h
onap          onap-log-logstash-gl59m                   1/1       Terminating        0          4h
onap          onap-log-logstash-nxsf8                   1/1       Terminating        0          4h
onap          onap-log-logstash-w8q8m                   1/1       Terminating        0          4h
sudo helm upgrade -i onap local/onap --namespace onap --set portal.enabled=false
sudo vi portal/charts/portal-sdk/resources/config/deliveries/properties/ONAPPORTALSDK/logback.xml
sudo make portal
sudo make onap
ubuntu@ip-172-31-19-23:~$ kubectl get pods --all-namespaces | grep onap-log
sudo helm upgrade -i onap local/onap --namespace onap --set log.enabled=true
sudo helm upgrade -i onap local/onap --namespace onap --set portal.enabled=true
ubuntu@ip-172-31-19-23:~$ kubectl get pods --all-namespaces | grep onap-log
onap          onap-log-elasticsearch-7557486bc4-2jd65   0/1       Init:0/1   0          31s
onap          onap-log-kibana-fc88b6b79-5xqg4           0/1       Init:0/1   0          31s
onap          onap-log-logstash-5vq82                   0/1       Init:0/1   0          31s
onap          onap-log-logstash-gvr9z                   0/1       Init:0/1   0          31s
onap          onap-log-logstash-qqzq5                   0/1       Init:0/1   0          31s
onap          onap-log-logstash-vbp2x                   0/1       Init:0/1   0          31s
onap          onap-log-logstash-wr9rd                   0/1       Init:0/1   0          31s
ubuntu@ip-172-31-19-23:~$ kubectl get pods --all-namespaces | grep onap-portal
onap          onap-portal-app-8486dc7ff8-nbps7          0/2       Init:0/1   0          9m
onap          onap-portal-cassandra-8588fbd698-4wthv    1/1       Running    0          9m
onap          onap-portal-db-7d6b95cd94-9x4kf           0/1       Running    0          9m
onap          onap-portal-db-config-dpqkq               0/2       Init:0/1   0          9m
onap          onap-portal-sdk-77cd558c98-5255r          0/2       Init:0/1   0          9m
onap          onap-portal-widget-6469f4bc56-g8s62       0/1       Init:0/1   0          9m
onap          onap-portal-zookeeper-5d8c598c4c-czpnz    1/1       Running    0          9m
Developer Deployment
Deployment Integrity
ELK containers
Logstash port
Elasticsearch port
# get pod names and the actual VM that any pod is on
ubuntu@ip-10-0-0-169:~$ kubectl get pods --all-namespaces -o wide | grep log-
onap          onap-log-elasticsearch-756cfb559b-wk8c6   1/1       Running   0          2h        10.42.207.254   ip-10-0-0-227.us-east-2.compute.internal
onap          onap-log-kibana-6bb55fc66b-kxtg6          0/1       Running   16         1h        10.42.54.76     ip-10-0-0-111.us-east-2.compute.internal
onap          onap-log-logstash-689ccb995c-7zmcq        1/1       Running   0          2h        10.42.166.241   ip-10-0-0-111.us-east-2.compute.internal
onap          onap-vfc-catalog-5fbdfc7b6c-xc84b         2/2       Running   0          2h        10.42.206.141   ip-10-0-0-227.us-east-2.compute.internal
# get nodeport
ubuntu@ip-10-0-0-169:~$ kubectl get services --all-namespaces -o wide | grep log-
onap          log-es        NodePort    10.43.82.53     <none>    9200:30254/TCP   2h        app=log-elasticsearch,release=onap
onap          log-es-tcp    ClusterIP   10.43.90.198    <none>    9300/TCP         2h        app=log-elasticsearch,release=onap
onap          log-kibana    NodePort    10.43.167.146   <none>    5601:30253/TCP   2h        app=log-kibana,release=onap
onap          log-ls        NodePort    10.43.250.182   <none>    5044:30255/TCP   2h        app=log-logstash,release=onap
onap          log-ls-http   ClusterIP   10.43.81.173    <none>    9600/TCP         2h        app=log-logstash,release=onap
# check nodeport outside container
ubuntu@ip-10-0-0-169:~$ curl ip-10-0-0-111.us-east-2.compute.internal:30254
{
  "name" : "-pEf9q9",
  "cluster_name" : "onap-log",
  "cluster_uuid" : "ferqW-rdR_-Ys9EkWw82rw",
  "version" : {
    "number" : "5.5.0",
    "build_hash" : "260387d",
    "build_date" : "2017-06-30T23:16:05.735Z",
    "build_snapshot" : false,
    "lucene_version" : "6.6.0"
  },
  "tagline" : "You Know, for Search"
}
# check inside docker container - for reference
ubuntu@ip-10-0-0-169:~$ kubectl exec -it -n onap onap-log-elasticsearch-756cfb559b-wk8c6 bash
[elasticsearch@onap-log-elasticsearch-756cfb559b-wk8c6 ~]$ curl http://127.0.0.1:9200
{
  "name" : "-pEf9q9",
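The logstash monitoring API (log-ls-http on 9600/TCP above) is ClusterIP-only, so check it from inside the pod - the pod name is the one from the listing above:

ubuntu@ip-10-0-0-169:~$ kubectl exec -it -n onap onap-log-logstash-689ccb995c-7zmcq -- curl http://127.0.0.1:9600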
Kibana port
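No capture was taken for Kibana, but a minimal check against its NodePort (5601:30253/TCP in the service listing above; any cluster node hostname works):

curl -I ip-10-0-0-111.us-east-2.compute.internal:30253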
Pairwise Testing
AAI and Log Deployment
AAI, Log and Robot will fit on a 16G VM
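A sketch of the corresponding minimal deployment, assuming the same per-component helm override flags used in the pod-restart section above (the release name "dev" matches the pod listing below; the remaining components are left disabled in the values file):

sudo helm upgrade -i dev local/onap --namespace onap \
  --set aai.enabled=true --set log.enabled=true --set robot.enabled=true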
Deployment Issues
We ran into an issue running champ on a 16G VM (AAI/Log/Robot only) with the master 20180509 build, but it runs fine on a normal cluster with the rest of ONAP.
19:56:05 onap onap-aai-champ-85f97f5d7c-zfkdp 1/1 Running 0 2h 10.42.234.99 ip-10-0-0-227.us-east-2.compute.internal
http://jenkins.onap.info/job/oom-cd-master/2915/consoleFull
See JIRA issue OOM-1015.
Every 2.0s: kubectl get pods --all-namespaces    Thu May 10 13:52:47 2018

NAMESPACE     NAME                                      READY     STATUS        RESTARTS   AGE
kube-system   heapster-76b8cd7b5-9dg8j                  1/1       Running       0          10h
kube-system   kube-dns-5d7b4487c9-fj2wv                 3/3       Running       2          10h
kube-system   kubernetes-dashboard-f9577fffd-c9nwp      1/1       Running       0          10h
kube-system   monitoring-grafana-997796fcf-jdx8q        1/1       Running       0          10h
kube-system   monitoring-influxdb-56fdcd96b-zpjmz       1/1       Running       0          10h
kube-system   tiller-deploy-54bcc55dd5-mvbb4            1/1       Running       2          10h
onap          dev-aai-babel-6b79c6bc5b-7srxz            2/2       Running       0          10h
onap          dev-aai-cassandra-0                       1/1       Running       0          10h
onap          dev-aai-cassandra-1                       1/1       Running       0          10h
onap          dev-aai-cassandra-2                       1/1       Running       0          10h
onap          dev-aai-cdc9cdb76-mmc4r                   1/1       Running       0          10h
onap          dev-aai-champ-845ff6b947-l8jqt            0/1       Terminating   0          10h
onap          dev-aai-champ-845ff6b947-r69bj            0/1       Init:0/1      0          25s
onap          dev-aai-data-router-8c77ff9dd-7dkmg       1/1       Running       3          10h
onap          dev-aai-elasticsearch-548b68c46f-djmtd    1/1       Running       0          10h
onap          dev-aai-gizmo-657cb8556c-z7c2q            2/2       Running       0          10h
onap          dev-aai-hbase-868f949597-xp2b9            1/1       Running       0          10h
onap          dev-aai-modelloader-6687fcc84-2pz8n       2/2       Running       0          10h
onap          dev-aai-resources-67c58fbdc-g22t6         2/2       Running       0          10h
onap          dev-aai-search-data-8686bbd58c-ft7h2      2/2       Running       0          10h
onap          dev-aai-sparky-be-54889bbbd6-rgrr5        2/2       Running       1          10h
onap          dev-aai-traversal-7bb98d854d-2fhjc        2/2       Running       0          10h
onap          dev-log-elasticsearch-5656984bc4-n2n46    1/1       Running       0          10h
onap          dev-log-kibana-567557fb9d-7ksdn           1/1       Running       50         10h
onap          dev-log-logstash-fcc7d68bd-49rv8          1/1       Running       0          10h
onap          dev-robot-6cc48c696b-875p5                1/1       Running       0          10h

ubuntu@obrien-cluster:~$ kubectl describe pod dev-aai-champ-845ff6b947-l8jqt -n onap
Name:           dev-aai-champ-845ff6b947-l8jqt
Namespace:      onap
Node:           obrien-cluster/10.69.25.12
Start Time:     Thu, 10 May 2018 03:32:21 +0000
Labels:         app=aai-champ
                pod-template-hash=4019926503
                release=dev
Annotations:    kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"onap","name":"dev-aai-champ-845ff6b947","uid":"bf48c0cd-5402-11e8-91b1-020cc142d4...
Status:         Pending
IP:             10.42.23.228
Created By:     ReplicaSet/dev-aai-champ-845ff6b947
Controlled By:  ReplicaSet/dev-aai-champ-845ff6b947
Init Containers:
  aai-champ-readiness:
    Container ID:  docker://46197a2e7383437ed7d8319dec052367fd78f8feb826d66c42312b035921eb7a
    Image:         oomk8s/readiness-check:2.0.0
    Image ID:      docker-pullable://oomk8s/readiness-check@sha256:7daa08b81954360a1111d03364febcb3dcfeb723bcc12ce3eb3ed3e53f2323ed
    Port:          <none>
    Command:
      /root/ready.py
    Args:
      --container-name
      aai-resources
      --container-name
      message-router-kafka
    State:          Running
      Started:      Thu, 10 May 2018 03:46:14 +0000
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Thu, 10 May 2018 03:34:58 +0000
      Finished:     Thu, 10 May 2018 03:45:04 +0000
    Ready:          False
    Restart Count:  1
    Environment:
      NAMESPACE:  onap (v1:metadata.namespace)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-2jccm (ro)
Containers:
  aai-champ:
    Container ID:
    Image:          nexus3.onap.org:10001/onap/champ:1.2-STAGING-latest
    Image ID:
    Port:           9522/TCP
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Readiness:      tcp-socket :9522 delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:
      CONFIG_HOME:           /opt/app/champ-service/appconfig
      GRAPHIMPL:             janus-deps
      KEY_STORE_PASSWORD:    <set to the key 'KEY_STORE_PASSWORD' in secret 'dev-aai-champ-pass'>    Optional: false
      KEY_MANAGER_PASSWORD:  <set to the key 'KEY_MANAGER_PASSWORD' in secret 'dev-aai-champ-pass'>  Optional: false
      SERVICE_BEANS:         /opt/app/champ-service/dynamic/conf
    Mounts:
      /etc/localtime from localtime (ro)
      /logs from dev-aai-champ-logs (rw)
      /opt/app/champ-service/appconfig/auth from dev-aai-champ-secrets (rw)
      /opt/app/champ-service/appconfig/champ-api.properties from dev-aai-champ-config (rw)
      /opt/app/champ-service/dynamic/conf/champ-beans.xml from dev-aai-champ-dynamic-config (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-2jccm (ro)
Conditions:
  Type           Status
  Initialized    False
  Ready          False
  PodScheduled   True
Volumes:
  localtime:
    Type:  HostPath (bare host directory volume)
    Path:  /etc/localtime
  dev-aai-champ-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      dev-aai-champ
    Optional:  false
  dev-aai-champ-secrets:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  dev-aai-champ-champ
    Optional:    false
  dev-aai-champ-dynamic-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      dev-aai-champ-dynamic
    Optional:  false
  dev-aai-champ-logs:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
  default-token-2jccm:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-2jccm
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.alpha.kubernetes.io/notReady:NoExecute for 300s
                 node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:          <none>

ubuntu@obrien-cluster:~$ kubectl delete pod dev-aai-champ-845ff6b947-l8jqt -n onap
pod "dev-aai-champ-845ff6b947-l8jqt" deleted