
see: LOG-66, LOG-49

Quickstart - getting your ELK Dashboard up

Logging Analytics Dashboards (Kibana)

Enabling the ELK stack

Target the VM in your cluster that is running log-kibana - in the live CD system, for example, it is usually at http://master3.onap.info:30253/

The index needs to be enabled - this is currently being automated in LOG-152; for now you can flip the following dropdown.

You will then be able to search the logs directly, separate from using the Kibana dashboards.


Troubleshooting:

# check indices to verify data exists in Elasticsearch
root@k8s:~# curl -u elastic:changeme http://127.0.0.1:30254/_cat/indices
red open .monitoring-es-6-2017.10.04 X9nA9PHsR92u9VA1EjvrjA 1 1
yellow open .monitoring-es-6-2017.10.05 GEWfbYV8Qu632ILcOaGufg 1 1 70825 324 49mb 49mb
yellow open onaplogs-2017.10.05 qOErjm_zR1yJbES2A94GuA 5 1 417450 0 146.9mb 146.9mb
yellow open .triggered_watches sJWEmGgYRmuOsOa0ouWgaw 1 1 0 0 365.7kb 365.7kb
yellow open onaplogs-2017.10.04 NoEhHdK3ToeUC2KGqo3XlA 5 1 270744 0 103.1mb 103.1mb
yellow open .monitoring-alerts-6 84pDDo6HQOGIqgJQ5nMriA 1 1 1 0 6.3kb 6.3kb
yellow open .watcher-history-3-2017.10.04 lZ_1PBP-RiuevffPV3a1-g 1 1 2612 0 2.6mb 2.6mb
yellow open .watcher-history-3-2017.10.05 IDLSTtl9Thq0pTuTyzngNQ 1 1 4544 0 3.2mb 3.2mb
yellow open .kibana WWyQNR5HTzCRsqEQkPR3YA 1 1 1 0 3.2kb 3.2kb
yellow open .watches vS6TCNdiTwSL9JJOx_N8QQ 1 1 4 0 63.4kb 63.4kb
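
If the onaplogs indices above are missing or stay empty, a quick document count or sample fetch helps narrow things down. A minimal sketch, assuming the same NodePort (30254) and default elastic:changeme credentials as above; the index date is only an example:

# count documents in one day's log index
curl -u elastic:changeme "http://127.0.0.1:30254/onaplogs-2017.10.05/_count?pretty"

# pull a single sample document to confirm fields such as source and message are populated
curl -u elastic:changeme "http://127.0.0.1:30254/onaplogs-*/_search?size=1&pretty"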


Configuration
Create an index pattern on log* and set the "time filter field name" to @timestamp.


To break logs down by source: select the onap index | split rows | aggregation=terms | field=source.keyword | size=100, then press play.
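
The same breakdown can be pulled straight out of Elasticsearch with a terms aggregation - a sketch, assuming the port and credentials from the troubleshooting section above and the onaplogs-* index name:

# equivalent terms aggregation on source.keyword, run directly against Elasticsearch
curl -u elastic:changeme -H 'Content-Type: application/json' \
  "http://127.0.0.1:30254/onaplogs-*/_search?size=0&pretty" -d '
{
  "aggs": {
    "log_sources": { "terms": { "field": "source.keyword", "size": 100 } }
  }
}'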


http://k8s:30253/app/kibana#/discover?_g=()&_a=(columns:!(_source),index:'onap*',interval:auto,query:(query_string:(analyze_wildcard:!t,query:'*')),sort:!('@timestamp',desc))

# filter by aai
October 5th 2017, 11:10:39.087 source:/var/log/onap/aai/aai-ml/error.log offset:605,688 DistributionClientResultImpl .responseStatus:ASDC_SERVER_PROBLEM, responseMessage=ASDC server problem] 2017-10-05T15:10:37.393Z input_type:log message: MDLSVC2001E|MDLSVC2001E Unable to register with ASDC: Failed to initialize distribution client: ASDC server problem| type:log Logger:org.onap.aai.modelloader.service.SdcConnectionJob Timestamp:October 5th 2017, 11:10:37.347 Thread:[Timer-0] INFO org.openecomp.sdc.impl.DistributionClientImpl DistributionClient - init 2017-10-05T15:10:37.393Z [Timer-0] ERROR org.openecomp.sdc.http.AsdcConnectorClient status from ASDC is org.openecomp.sdc.http.HttpAsdcResponse@2c523e67 2017-10-05T15:10:37.393Z [Timer-0] ERROR org.open
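
The same aai filter can be reproduced outside Kibana with a Lucene query string against the REST API (port and credentials assumed as above):

# match documents whose source path contains "aai"
curl -u elastic:changeme "http://127.0.0.1:30254/onaplogs-*/_search?q=source:*aai*&size=1&pretty"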

ONAP Kibana Dashboard

Logging Design

Repositories

Repo                  Directory / Details
oom                   Deployment yamls
oom                   configs
logging-analytics     docker image configs
logging-analytics     reference logback.xml configs
(various) aai....     runtime logback.xml/log4j configs


Logging API

Discuss shared library approach to logging (Java only for now)

Logging DevOps

Logging Framework Deployment

The ELK stack containers are under the onap-log namespace in the OOM deployment of ONAP.  They should be started by default - if not you can start them manually.

Kibana is on port 30253, Elasticsearch is on port 30254

# look for log containers
root@ip-172-31-82-46:~# kubectl get pods --all-namespaces -a | grep onap-log
onap-log              elasticsearch-2934588800-st9j3           1/1       Running            0          3h
onap-log              kibana-3372627750-ff8rv                  1/1       Running            0          3h
onap-log              logstash-1708188010-703pk                1/1       Running            0          3h

# start the ELK stack if required
root@ip-172-31-82-46:~/oom/kubernetes/oneclick# ./createAll.bash -n onap -a log

# check access ports (30254 and 30253)
root@ip-172-31-82-46:~/oom/kubernetes/oneclick# kubectl get services --all-namespaces -a | grep onap-log
onap-log              elasticsearch           10.43.86.120    <nodes>       9200:30254/TCP                                                               
onap-log              kibana                  10.43.165.215   <nodes>       5601:30253/TCP                                                               
onap-log              logstash                10.43.72.107    <none>        5044/TCP  

# check for pods with filebeat containers (will be 2 per pod)
root@kos1001:/dockerdata-nfs/onap/robot# kubectl get pods --all-namespaces -a | grep 2/2
onap-aai              aai-resources-338473047-8k6vr           2/2       Running            0          7h
onap-aai              aai-traversal-2033243133-6cr9v          2/2       Running            0          7h
onap-aai              model-loader-service-3356570452-25fjp   2/2       Running            0          7h
onap-aai              search-data-service-2366687049-jt0nb    2/2       Running            0          7h
onap-aai              sparky-be-3141964573-f2mhr              2/2       Running            0          7h
onap-appc             appc-1335254431-v1pcs                   2/2       Running            0          7h
onap-mso              mso-3911927766-bmww7                    2/2       Running            0          7h
onap-policy           drools-2302173499-t0zmt                 2/2       Running            0          7h
onap-policy           pap-1954142582-vsrld                    2/2       Running            0          7h
onap-policy           pdp-4137191120-qgqnj                    2/2       Running            0          7h
onap-portal           portalapps-4168271938-4kp32             2/2       Running            0          7h
onap-portal           portaldb-2821262885-0t32z               2/2       Running            0          7h
onap-sdc              sdc-be-2986438255-sdqj6                 2/2       Running            0          7h
onap-sdc              sdc-fe-1573125197-7j3gp                 2/2       Running            0          7h
onap-sdnc             sdnc-3858151307-w9h7j                   2/2       Running            0          7h
onap-vid              vid-server-1837290631-x4ttc             2/2       Running            0          7h
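
To confirm that the second (filebeat) container in one of these 2/2 pods can actually see the application logs, exec into it - a sketch only; the sidecar container name (filebeat-onap) and the log path are assumptions, check the pod's deployment.yaml for the real values:

# list the shared log directory from inside the filebeat sidecar
kubectl exec -n onap-aai aai-resources-338473047-8k6vr -c filebeat-onap -- ls -R /var/log/onap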
                                                                   

Triage Log Capture

The following procedures determine the state of logs as they traverse from the originating container through the filebeat, logstash, elasticsearch pipeline.

I would start with the deployment.yaml and verify that the filebeat section matches the working pods listed in the 2/2 pod check above (done for pairwise testing) - see the grep sketch below. I would expect logs from, for example, the portal pods (portal-app and portal-sdk) - at least error logs.

LOG-230
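
A quick way to compare the filebeat sections is to dump both deployments and grep around the sidecar definition - a sketch; the deployment names are examples, substitute the failing one and a known-working one:

# dump the filebeat-related parts of a suspect and a known-working deployment for comparison
kubectl get deployment portalapps -n onap-portal -o yaml | grep -B2 -A20 filebeat
kubectl get deployment aai-resources -n onap-aai -o yaml | grep -B2 -A20 filebeat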

After this I would check the logstash service to see if it is receiving logs on its port - I'll add instructions to the wiki.
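
Until those instructions are written up, tailing the logstash pod is a reasonable first check - a sketch, using the pod name from the listing earlier on this page:

# watch logstash for incoming beats events and pipeline errors
kubectl logs -f -n onap-log logstash-1708188010-703pk --tail=100

# confirm the beats input service is exposed on 5044
kubectl get svc logstash -n onap-log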

We can also check the docker pv (emptyDir) for the two portal containers - it should be under /var/lib/docker - this is a secondary place, besides filebeat, where we can pick up and verify the logs.
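
A rough way to locate those emptyDir log files on the host VM - a sketch only, since the exact path depends on the docker/kubelet layout of the install:

# search the docker/kubelet storage areas on the host for portal log files
find /var/lib/docker /var/lib/kubelet -name "*.log" -path "*portal*" 2>/dev/null | head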

Originating Container

Filebeat Sidecar Container

Logstash DaemonSet

Elasticsearch container

Kibana container

Enabling Debug Logs

Via URL


Via Logback.xml


Logging Access

Robot logs


After running, for example:

/dockerdata-nfs/onap/robot# ./demo-k8s.sh init_customer

http://host:30209/logs/demo/InitCustomer/report.html

user:robot pass:robot

Host VM logs

/dockerdata-nfs/onap/aai/aai-traversal/logs/

/dockerdata-nfs/onap/sdc/logs/ASDC/ASDC-BE/
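
These directories sit on the shared /dockerdata-nfs mount, so the logs can be listed and tailed straight from the host - a sketch; the exact file names under each directory vary by component:

# browse and tail component logs directly from the NFS mount
ls /dockerdata-nfs/onap/aai/aai-traversal/logs/
tail -f /dockerdata-nfs/onap/sdc/logs/ASDC/ASDC-BE/*.log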

ELK Logs




Training Videos

Video                                         Details
Configuring a new ONAP install's ELK stack


References


