This work is in collaboration with the ONAP Operations Manager (OOM) project at ONAP - our current reference implementation (RI).
The goal of this page is to provide end-to-end (E2E) infrastructure for testing hourly or commit-triggered master/tagged builds, so that a build can be declared ready in terms of health check and use-case functionality. CD functionality includes real-time and historical analytics of build health, via logs from the deployment jobs that are stored and indexed in our ELK stack, which sits outside of ONAP.
Amazon AWS currently hosts our RI for ONAP Continuous Deployment on a private account for now - a grant specific to the Jenkins, Kibana and CD instances has been requested.
ONAP Live AWS CD Servers
Live Casablanca/master server
Log in to Rancher/Kubernetes only in the last 45 minutes of the hour
Use the system only in the last 10 minutes of the hour
Currently off until the account resets at the next billing cycle on 2 Jan
view deployment status (pod up status)
Paused until 2 Jan 2018
http://kibana.onap.info:5601 - query "message" logs or view the dashboard
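The pod-up status mentioned above can be checked by parsing `kubectl get pods` output. A minimal sketch - the namespace, pod names and sample output below are illustrative assumptions, not the actual cluster state:

```python
# Sketch: report pods that are not yet up from `kubectl get pods` output.
# In practice the input would come from e.g.:
#   kubectl get pods -n onap --no-headers
def pods_not_ready(kubectl_output: str) -> list[str]:
    """Return names of pods whose STATUS column is not Running/Completed."""
    bad = []
    for line in kubectl_output.strip().splitlines():
        fields = line.split()
        # columns: NAME READY STATUS RESTARTS AGE
        if len(fields) >= 3 and fields[2] not in ("Running", "Completed"):
            bad.append(fields[0])
    return bad

# Hypothetical sample output for illustration
sample = (
    "dev-sdc-be-0   1/1   Running           0   12m\n"
    "dev-aai-0      0/1   CrashLoopBackOff  4   12m\n"
)
print(pods_not_ready(sample))
```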
CD Demo Videos
20171210: showing a full CD job on the Jenkins server
Kibana Dashboard of CD system diagnosing health check issues in an Hourly ONAP OOM Deploy
In the combined ELK and Kibana CD system below we can see that SDC is failing healthcheck on average about 35% of the time. This may be due to a gap between the healthcheck expecting an HTTP 200 return and the SDC REST call timing out while Spring is still coming up on the servlet container, or to a dependency check in SDC itself on another component, where a particular startup order or timing of calls exposes an issue. In any case, the ELK system that consumes logs from the hourly build can identify issues like this, or the transient one-hour healthcheck failure across 14 components in MSB shown below.
Shane Daniel has created a dashboard on our AWS POC that can be used to diagnose the health of the current hourly build, based on logs generated by the health check running in Robot against an hourly deploy of ONAP OOM (CI triggers are pending).
For example, a hard-coded token in kube2msb was causing some healthcheck failures - notice the drop in failures 3 hours ago, within an hour of the submit to the OOM framework (the fix took effect immediately because the config is not currently part of the daily-only Docker builds).
Automated POC ONAP CD Infrastructure
A custom Jenkins job that runs a full deployment of ONAP OOM on a separate 64G VM is currently in progress. There are pending design issues around vFirewall automation, results reporting and general resiliency.
OOM Jenkins Automated Continuous Deployment Results
Several methods of communicating the current deployment status are being worked out; currently these are raw Jenkins build pages and POC dashboards in Kibana.
Hourly Deployment Results
Live Kibana Dashboards
24 hour Dashboard - filtered on "message: PASS"
The following is a split screen on an hourly build of OOM and the logs generated by the deployment process.
The following shows a rudimentary Kibana dashboard of the current PASS numbers for the ONAP OOM health check.
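The PASS number on that dashboard is essentially a ratio over the Robot healthcheck result lines. A minimal sketch of the computation - the log line format below is an assumption for illustration, not the exact Robot output:

```python
# Sketch: derive a PASS percentage from raw robot healthcheck log lines.
# The "| PASS |" / "| FAIL |" line format is an illustrative assumption.
def pass_rate(lines: list[str]) -> float:
    """Fraction of healthcheck result lines that contain PASS."""
    results = [l for l in lines if "PASS" in l or "FAIL" in l]
    if not results:
        return 0.0
    return sum("PASS" in l for l in results) / len(results)

# Hypothetical sample of one healthcheck run
sample = [
    "Basic SDC Health Check | FAIL |",
    "Basic A&AI Health Check | PASS |",
    "Basic SO Health Check | PASS |",
    "Basic MSB Health Check | PASS |",
]
print(f"{pass_rate(sample):.0%}")
```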
DI 1: 20171112: dockerdata-nfs mounted as root conflicts with Ubuntu or Jenkins user
DI 2: 20171112: Reference ELK stack outside of ONAP for CD infrastructure
DI 3: 20171112: DevOps Jenkins and CD Docker Infrastructure
On AWS as EC2 instances running Docker versions of Jenkins, Nexus and GitLab
The CD instance is currently static until Rancher 2.0 finishes acceptance testing at ONAP
DI 4: 20171112: OOM Docker Image preload - to speed up pods to 8 min
DI 5: 20171112: Strategy for Manual Config of Rancher 1.6 for Auto Create/Delete of CD VM
DI 6: 20171112: Migrate Jenkins job to ONAP sandbox
current ssh config
Automated ONAP CD Infrastructure
We need sufficient resources to run two (amsterdam and beijing/master) deployments either hourly or on commit-trigger demand.
We also need DevOps infrastructure to provision the servers (an ARM DMZ jumpbox) and to run the Jenkins container and ELK containers (a single Kubernetes cluster).
chaos monkey b* | Azure | chaos.onap.cloud | k8s | Microsoft | hammer the system up/down