...

Quickstart - getting your ELK Dashboard up

Logging Analytics Dashboards (Kibana) run out of the box on port 30253 - set up an index pattern on "onap*" with the time filter field "@timestamp", hit save, and navigate to the Discover tab to see logs.

Details:

Log in to the Kibana dashboard on the ONAP host machine (http://staging.onap.org:30253)

$elasticsearch_username: "elastic"
$elasticsearch_password: "changeme"

# verify elasticsearch
root@k8s:~# curl -u elastic:changeme http://127.0.0.1:30254
{
  "name" : "DAKPX9c",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "GRQR74EjT6K8f3Eq93Ms6A",
  "version" : {
    "number" : "5.5.0",
    "build_hash" : "260387d",
    "build_date" : "2017-06-30T23:16:05.735Z",
    "build_snapshot" : false,
    "lucene_version" : "6.6.0"
  },
  "tagline" : "You Know, for Search"
}

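You can also confirm that Kibana itself is answering before logging in - a quick check against its status API (a sketch; /api/status is the standard Kibana 5.x status endpoint, adjust host/port to your cluster):

# check that kibana answers on its NodePort (5601 -> 30253)
root@k8s:~# curl -s http://127.0.0.1:30253/api/status | head -c 300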

Enabling the ELK stack

Target the VM in your cluster that is running log-kibana - in the live CD system, for example, it is usually at http://master3.onap.info:30253/
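If you are not sure which VM that is, Kubernetes can tell you directly (a quick sketch using the wide pod listing):

# show which node each onap-log pod landed on
kubectl get pods -n onap-log -o wide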

The index needs to be enabled - this is currently being automated in ONAP JIRA LOG-152; for now you can set it manually via the index pattern dropdown on the Kibana management page.


You will then be able to search the logs directly, in addition to using the Kibana dashboards.

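If you prefer to script this step instead of clicking through the UI, Kibana 5.x stores its index patterns as documents in the .kibana index, so the same configuration can be pushed through Elasticsearch (a sketch, assuming the default .kibana index and the ports/credentials shown above):

# create the onap* index pattern and set it as the default (adjust host/port for your cluster)
curl -u elastic:changeme -XPUT 'http://127.0.0.1:30254/.kibana/index-pattern/onap*' \
  -H 'Content-Type: application/json' \
  -d '{"title":"onap*","timeFieldName":"@timestamp"}'
curl -u elastic:changeme -XPOST 'http://127.0.0.1:30254/.kibana/config/5.5.0/_update' \
  -H 'Content-Type: application/json' \
  -d '{"doc":{"defaultIndex":"onap*"}}'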


Troubleshooting:

# check indices to verify data exists in Elasticsearch
root@k8s:~# curl -u elastic:changeme http://127.0.0.1:30254/_cat/indices
red open .monitoring-es-6-2017.10.04 X9nA9PHsR92u9VA1EjvrjA 1 1
yellow open .monitoring-es-6-2017.10.05 GEWfbYV8Qu632ILcOaGufg 1 1 70825 324 49mb 49mb
yellow open onaplogs-2017.10.05 qOErjm_zR1yJbES2A94GuA 5 1 417450 0 146.9mb 146.9mb
yellow open .triggered_watches sJWEmGgYRmuOsOa0ouWgaw 1 1 0 0 365.7kb 365.7kb
yellow open onaplogs-2017.10.04 NoEhHdK3ToeUC2KGqo3XlA 5 1 270744 0 103.1mb 103.1mb
yellow open .monitoring-alerts-6 84pDDo6HQOGIqgJQ5nMriA 1 1 1 0 6.3kb 6.3kb
yellow open .watcher-history-3-2017.10.04 lZ_1PBP-RiuevffPV3a1-g 1 1 2612 0 2.6mb 2.6mb
yellow open .watcher-history-3-2017.10.05 IDLSTtl9Thq0pTuTyzngNQ 1 1 4544 0 3.2mb 3.2mb
yellow open .kibana WWyQNR5HTzCRsqEQkPR3YA 1 1 1 0 3.2kb 3.2kb
yellow open .watches vS6TCNdiTwSL9JJOx_N8QQ 1 1 4 0 63.4kb 63.4kb
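If the onaplogs-* indices are present but nothing shows up in Kibana, pull a raw document straight from Elasticsearch to confirm the data itself looks sane (a sketch using the same credentials and NodePort as above):

# fetch one recent log document from the onap log indices
root@k8s:~# curl -u elastic:changeme 'http://127.0.0.1:30254/onaplogs-*/_search?size=1&pretty'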


Configuration
Search on onap*, not log*: change "index name or pattern" to onap* and set the "time filter field name" to @timestamp.


Example visualization: search on onap, split rows with aggregation=terms, field=source.keyword, size=100, then hit play.
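The breakdown this visualization produces can also be reproduced with a raw terms aggregation, which is useful when the dashboard itself is misbehaving (a sketch against the onaplogs-* indices):

# count log documents per source file, top 100
curl -u elastic:changeme -H 'Content-Type: application/json' \
  'http://127.0.0.1:30254/onaplogs-*/_search?size=0&pretty' -d '
{
  "aggs": {
    "per_source": { "terms": { "field": "source.keyword", "size": 100 } }
  }
}'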

...

Discuss shared library approach to logging (Java only for now)

Logging DevOps

ONAP Development#KubernetesDevOps

Logging Framework Deployment

Triage Log Capture

The ELK stack containers are under the onap-log namespace in the OOM deployment of ONAP. They should be started by default - if not, you can start them manually.

Kibana is on port 30253, Elasticsearch is on port 30254

Code Block
# look for log containers
root@ip-172-31-82-46:~# kubectl get pods --all-namespaces -a | grep onap-log
onap-log              elasticsearch-2934588800-st9j3           1/1       Running            0          3h
onap-log              kibana-3372627750-ff8rv                  1/1       Running            0          3h
onap-log              logstash-1708188010-703pk                1/1       Running            0          3h

# start the ELK stack if required
root@ip-172-31-82-46:~/oom/kubernetes/oneclick# ./createAll.bash -n onap -a log

# check access ports (30254 and 30253)
root@ip-172-31-82-46:~/oom/kubernetes/oneclick# kubectl get services --all-namespaces -a | grep onap-log
onap-log              elasticsearch           10.43.86.120    <nodes>       9200:30254/TCP                                                               
onap-log              kibana                  10.43.165.215   <nodes>       5601:30253/TCP                                                               
onap-log              logstash                10.43.72.107    <none>        5044/TCP  

# check for pods with filebeat containers (will be 2 per pod)
root@kos1001:/dockerdata-nfs/onap/robot# kubectl get pods --all-namespaces -a | grep 2/2
onap-aai              aai-resources-338473047-8k6vr           2/2       Running            0          7h
onap-aai              aai-traversal-2033243133-6cr9v          2/2       Running            0          7h
onap-aai              model-loader-service-3356570452-25fjp   2/2       Running            0          7h
onap-aai              search-data-service-2366687049-jt0nb    2/2       Running            0          7h
onap-aai              sparky-be-3141964573-f2mhr              2/2       Running            0          7h
onap-appc             appc-1335254431-v1pcs                   2/2       Running            0          7h
onap-mso              mso-3911927766-bmww7                    2/2       Running            0          7h
onap-policy           drools-2302173499-t0zmt                 2/2       Running            0          7h
onap-policy           pap-1954142582-vsrld                    2/2       Running            0          7h
onap-policy           pdp-4137191120-qgqnj                    2/2       Running            0          7h
onap-portal           portalapps-4168271938-4kp32             2/2       Running            0          7h
onap-portal           portaldb-2821262885-0t32z               2/2       Running            0          7h
onap-sdc              sdc-be-2986438255-sdqj6                 2/2       Running            0          7h
onap-sdc              sdc-fe-1573125197-7j3gp                 2/2       Running            0          7h
onap-sdnc             sdnc-3858151307-w9h7j                   2/2       Running            0          7h
onap-vid              vid-server-1837290631-x4ttc             2/2       Running            0          7h
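# To confirm one of those 2/2 pods is actually shipping logs, list its containers and tail the
# filebeat sidecar (the sidecar container name below is an assumption - use the name your pod reports)
kubectl get pod aai-resources-338473047-8k6vr -n onap-aai -o jsonpath='{.spec.containers[*].name}'
kubectl logs aai-resources-338473047-8k6vr -n onap-aai -c filebeat-onap --tail=20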
                                                                   


...

The following are procedures to determine the state of logs as they traverse the pipeline from the originating container through Filebeat, Logstash, and Elasticsearch.

I would start with the deployment.yaml and verify that the filebeat section is the same as in the other working pods listed in the pairwise-testing check above. I would expect logs from, for example, portal in portal-app and portal-sdk - at least error logs.

ONAP JIRA: LOG-230
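One quick way to make that comparison without opening the chart sources is to dump the filebeat-related sections of a suspect and a known-good live deployment side by side (the deployment names below are inferred from the pod names above - verify them first with kubectl get deployments):

# dump the filebeat-related parts of a suspect and a known-good deployment for comparison
kubectl get deployment portalapps -n onap-portal -o yaml | grep -i -B2 -A8 filebeat
kubectl get deployment aai-resources -n onap-aai -o yaml | grep -i -B2 -A8 filebeat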

After this I would check the logstash service to see if it is receiving logs on its port - I'll add instructions to the wiki.
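One way to check whether Logstash is receiving anything on its beats port is to confirm the service exposes 5044 and then read the pod logs for pipeline and connection activity (a sketch; the pod name comes from the kubectl get pods listing above):

# confirm the logstash service and tail its pod for pipeline/connection activity
kubectl get svc logstash -n onap-log
kubectl logs logstash-1708188010-703pk -n onap-log --tail=50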

We can also check the Docker persistent volume for the two portal containers - the emptyDir should be under /var/lib/docker - this is a secondary place where we can pick up and verify the logs besides Filebeat.
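To read those shared log files from the host directly, the emptyDir that the application container and the Filebeat sidecar both mount can also be located under the kubelet's pod directory (a sketch; the exact volume name depends on the chart):

# find the pod UID, then list its emptyDir volumes on the node running it
kubectl get pod portalapps-4168271938-4kp32 -n onap-portal -o jsonpath='{.metadata.uid}'
ls /var/lib/kubelet/pods/<pod-uid>/volumes/kubernetes.io~empty-dir/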

Originating Container

Filebeat Sidecar Container

Filebeat sidecar container setup and configuration in OOM

Logstash DaemonSet

Elasticsearch container

Kibana container

Enabling Debug Logs

Via URL


Via Logback.xml


Logging Access

Robot logs


After running:

/dockerdata-nfs/onap/robot# ./demo-k8s.sh init_customer

the report is available at http://host:30209/logs/demo/InitCustomer/report.html

user: robot, pass: robot

Host VM logs

/dockerdata-nfs/onap/aai/aai-traversal/logs/

/dockerdata-nfs/onap/sdc/logs/ASDC/ASDC-BE/

Code Block
ubuntu@ip-172-31-15-18:/dockerdata-nfs/onap$ ls -la aai/data-router/logs/AAI-DR/
-rw-r--r-- 1 root root       0 Aug 16 01:11 audit.log
-rw-r--r-- 1 root root       0 Aug 16 01:11 debug.log
-rw-r--r-- 1 root root  106459 Aug 17 00:01 error.2018-08-16.log.zip
-rw-r--r-- 1 root root 5562166 Aug 17 19:50 error.log
-rw-r--r-- 1 root root     536 Aug 16 01:12 metrics.log


ELK Logs




Training Videos

Video: 20171005_enable_kibana.mov
Details: Configuring a new ONAP install's ELK stack

...