
Summary

OpenECOMP 1.0.0 represents a complete demo platform with two service examples as contributed by AT&T to the Linux Foundation ONAP project.


Installation Instructions

Basic OpenECOMP installation instructions are available as a README.md file. Step-by-step tutorials for setting up a Rackspace account, using the portal, designing services, and instantiating services are provided here.

Interacting with VMs and Containers

All VMs created have two network interfaces. One is public and visible to the Internet (eth0); the other is on a private 10.0.0.0/8 network (eth1). The public IPs are used for user login and remote management, whereas the 10.0.0.0/8 addresses are used for all internal ONAP communication. This allows pre-configuration of DNS names, certificates and IP addresses within the individual components. It also allows multiple ONAP instances to be spun up within one Rackspace tenant without conflict.

For example, if the public IP address of a VM is 1.2.3.4 and the private key matching the public key from the Heat template is stored in a file named openecomp_key, you can log in to the VM with:

ssh -i path/to/openecomp_key root@1.2.3.4

Please note that we do not provide any public/private key pair to access ONAP VMs. That key pair will be created and uploaded by users, as explained in the README.md file and tutorial.
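As a reference, such a key pair can be generated locally with standard OpenSSH tooling and the public key uploaded to the tenant (via the Rackspace/OpenStack dashboard or, if available, the OpenStack CLI); the key name below is only an example.

# generate a key pair locally; "openecomp_key" is an example name
ssh-keygen -t rsa -b 4096 -N "" -f openecomp_key
# upload the public half so the Heat template can reference it by name
# (the same can be done from the Rackspace/OpenStack dashboard)
openstack keypair create --public-key openecomp_key.pub openecomp_key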

Inside the VM you can list the Docker containers by typing:

docker ps

and can get a shell prompt by executing the bash command.

For example:

   root@vm1-robot:~# docker ps
CONTAINER ID        IMAGE                            COMMAND                  CREATED             STATUS              PORTS                NAMES
129725d30b7f        ecomp-nexus:51215/openecompete   "lighttpd -D -f /etc/"   3 minutes ago       Up 3 minutes        0.0.0.0:88->88/tcp   openecompete_container
root@vm1-robot:~# docker exec -it openecompete_container bash
root@e37dde66316e:/#


Please note that this configuration is for demo purposes only. For a real production deployment, a more secure network, certificate and key design would be used.

VMs and Containers

Additional information about the VMs and containers is provided below.


 AAI

Delivery

  • 1 VM – the compute1-15 flavor, which has 8 vCPUs, 15 GB memory and a 50 GB hard disk with an additional 50 GB volume
  • 1 Docker image (Ubuntu 14.04 LTS OS layer including AJSC, Open JDK) for AAI Service
  • 1 Docker image (Ubuntu 14.04 LTS OS layer including Jetty, Open JDK) for Model Loader
  • 1 Docker image for HBase/Hadoop

Docker Diagram

 

AAI has a core AAI service, a REST service that maintains and provides a real-time view of the inventory. Amdocs has contributed the Model Loader microservice, which loads models defined in SDC.

Offered APIs

Container/VM name | API name | API purpose | protocol used | port number or range used | TCP/UDP
AAI Service/aai | /aai/v8/* | REST Web Service for AAI | https | 8443 | TCP
AAI hbase/aai | zookeeper | interface to hbase | | 2181 | TCP


Consumed APIs

Container/VM name | API name | API purpose | protocol used | port number or range used | TCP/UDP
AAI service/aai | zookeeper | interface to hbase | | 2181 | TCP
AAI Model Loader/aai | <namespace>.<topic name retrieved at initial handshake with SDC> | DMaaP topic used to process model updates from SDC | https | <port retrieved at initial handshake with SDC> | TCP
AAI Model Loader/aai | /aai/v8/* | REST Web Service for AAI | https | 8443 | TCP
AAI Model Loader/aai | /asdc/v1 | REST Web Service for SDC | https | 8443 | TCP

Logging/Diagnostic Information

AAI Service

The logs for AAI REST Service can be found at /opt/app/aai/logs/rest:

-rw-r--r-- 1 aaiadmin aaiadmin 154978 Mar  1 19:50 error.log
-rw-r--r-- 1 aaiadmin aaiadmin 350245 Mar  1 19:50 audit.log
-rw-r--r-- 1 aaiadmin aaiadmin 628194 Mar  1 21:20 metrics.log

The AJSC logs are at /opt/app/aai/logs/ajsc-jetty

-rw-r--r-- 1 aaiadmin aaiadmin 83546 Mar  1 19:50 localhost_access.log

Other components' logs are in folders under /opt/app/aai/logs, such as createDBSchema and putTool.

Model Loader

The Model Loader has three log types that are useful in troubleshooting:

  • /opt/jetty/jetty-distribution-9.3.9.v20160517/logs/AAI-ML/error.log: general info and error logs
  • /opt/jetty/jetty-distribution-9.3.9.v20160517/logs/AAI-ML/audit.log: logs the result of incoming transactions
  • /opt/jetty/jetty-distribution-9.3.9.v20160517/logs/AAI-ML/metrics.log: logs the result of outgoing transactions

On startup, the Model Loader will shut itself down in the event that the initial handshake with SDC is unsuccessful.  To debug such a scenario, it is useful to be able to view the logs from the stopped container.  This can be done by executing the following docker command from the host VM:

docker cp <container-id>:/opt/jetty/jetty-distribution-9.3.9.v20160517/logs/AAI-ML/error.log <destination-filename>

This will copy the log file from the stopped container to a destination on the host VM.
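If the ID of the stopped container is not known, it can be listed first; for example (the destination file name is just an example):

# list all containers, including stopped ones, to find the Model Loader container ID
docker ps -a
# copy the error log out of the stopped container to the host VM
docker cp <container-id>:/opt/jetty/jetty-distribution-9.3.9.v20160517/logs/AAI-ML/error.log ./model-loader-error.log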

 APPC

Delivery

The APPC package is composed of three Docker images hosted on a single Ubuntu 14.04 LTS VM instance (medium flavor: 4 GB memory, 4 vCPU, 80 GB disk).

Docker Diagram

Offered APIs

Container/VM name | API name | API purpose | protocol used | port number or range used | TCP/UDP
APPC Container/appc | <http-protocol>://<appc-ip>:<appc-api-port>/restconf/operations/appc-provider-lcm:<command-name> (e.g. https://<appc-ip>:8443/restconf/operations/appc-provider:modify-config) | Offers the APP-C APIs | https | 8181 | TCP
APPC Container/appc | DMaaP Adapter | interface to DMaaP | https | 3904 | TCP
DB Container/appc | MySQL | access to the MySQL DB | | 3306 | TCP


Consumed APIs

Container/VM name | API name | API purpose | protocol used | port number or range used | TCP/UDP
APPC Container/appc | <http-protocol>://<appc-ip>:<appc-api-port>/restconf/operations/appc-provider-lcm:<command-name> (e.g. https://<appc-ip>:8443/restconf/operations/appc-provider:modify-config) | Offers the APP-C APIs | https | 8181 | TCP
APPC Container/appc | DMaaP Adapter | interface to DMaaP | https | 3904 | TCP

Logging/Diagnostic Information

There are three places where we can look for diagnostics/logging information based on the situation at hand:

  • Cloud-Init Console Log: This log shows the output results of the cloud-init script that is deploying the APP-C VM. The log can be found on the Rackspace/OpenStack Dashboard where you are deploying your VM from, in the deployed VM itself (usually found in /var/log/cloud-init.log), or by calling the OpenStack API to output the cloud-init log (by obtaining information from the metadata server of your cloud platform).


  • Karaf Log: This log shows the output of the APP-C/SDN-C Karaf features and bundles installed in the OpenDaylight framework. Look here to troubleshoot any Karaf features and/or bundles that failed to install properly. The log can be found inside the APP-C Docker container in the /opt/opendaylight/current/log directory.
    • NOTE: Some errors will not show in detail unless the verbose option is enabled for Karaf. To do this, log in to the Karaf client (explained in the APP-C documentation) and type "log:set DEBUG" to enable verbose logging, or "log:set INFO" to disable it.

  • Docker-Compose Logs: This log is the output from when you trigger the docker-compose process (which starts the Docker containers). It can be found in two ways (see the sketch after this list):
    • If running docker-compose directly in the shell session ("docker-compose up"), the output is displayed there.
    • If running docker-compose as a daemon/background process (RECOMMENDED - "docker-compose up -d"), you can open any other shell session and run "docker-compose logs -f" to obtain live logs from the docker-compose containers.
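As a rough sketch of the last two items (the APP-C container name and the Karaf client path are assumptions based on the layout described above, not verified values):

# follow the docker-compose output when the containers were started as daemons
docker-compose logs -f
# open the Karaf client inside the APP-C container to change log verbosity
docker exec -it <appc-container> /opt/opendaylight/current/bin/client
# then, at the Karaf prompt:
log:set DEBUG    # enable verbose logging
log:set INFO     # restore normal logging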

 DCAE

Delivery

DCAE infrastructure resources:

  • The CDAP/Hadoop cluster: This is a 3-VM cluster for running CDAP and Hadoop. 
  • The DCAE Docker Host: DCAE has its own Docker host to run collectors and other DCAE Docker containers. There are special processes (i.e. a Manager) running on the DCAE Docker host VM to support command and control interactions with the DCAE Controller. In addition, the DCAE Controller has its own mechanism to manage collector resource usage, which requires control of its own Docker host. There are also special mechanisms required for container configuration, such as host naming and certificates.
  • The DMaaP: For 1701, DMaaP is represented in its minimum configuration, with only a DMaaP Bus Controller and one Message Router "node". The Message Router node consists of a Kafka container, a ZooKeeper container, and an AuthZ container.
  • The Postgres storage: This is stood up by the DCAE Controller in the Rackspace tenant as an Ubuntu 14.04 VM.

DCAE application resources

  • VES collector: This collector receives performance metrics from VNFs in a format compliant with AT&T's Virtual function Event Streaming (VES) protocol. The collector is available as a Docker container image; it is pulled from the OpenECOMP Docker registry, then started, stopped, re-deployed and configured by the DCAE Controller. The only configuration currently required is the Message Router topic for DMaaP publishing to downstream DCAE components such as the CDAP analytics.
  • DCAE CDAP analytics: Threshold Crossing Analytics (TCA), and Message Router publishers and subscribers: the analytics are available as jar files. The DCAE Controller deploys these jar files into the CDAP cluster and starts them using CDAP mechanisms.

Deployment Strategy for 1701 Demos

The DCAE Controller VM and Docker container are launched by the ONAP-level mechanism, i.e. the Heat template. From the Heat template, a VM for running the DCAE Controller is launched and the necessary Docker software is installed; the DCAE Controller Docker container is then started on this VM. From here on, the DCAE Controller initializes with the initial configurations and launches/deploys the rest of the DCAE resources, including the 3-VM CDAP cluster, the Postgres VM, and the DCAE Docker host VM. In the next phase, it deploys the VES Collector Docker container, the DMaaP Bus Controller Docker container, and the Message Router node (3 Docker containers).

Docker Diagram

Offered APIs

Container/VM name | API name | API purpose | protocol used | port number or range used | TCP/UDP
DCAE Controller | Controller API | Lots of stuff | HTTP/HTTPS | 9998 | TCP
DCAE GUI | GUI | Operational and deployment support (may not be part of the release but needed for testing etc.) | HTTP/HTTPS | 80/8080 | TCP
DCAE Docker Host | DCAE Docker Host Manager API | | HTTP/HTTPS | 9999 | TCP
DCAE CDAP Cluster | DCAE CDAP Cluster Manager API | | HTTP/HTTPS | 1999 | TCP
DCAE CDAP Cluster | CDAP GUI API | | HTTP/HTTPS | 9999 | TCP
DCAE CDAP Cluster | CDAP API | | HTTP/HTTPS | 10000 | TCP
VES Collector | SEC/VES Collector API | Collector API to receive SE events from VNF/VM | HTTP/HTTPS | 8080 (HTTP) / 8443 (HTTPS) | TCP
DMaaP Bus Controller | DMaaP Bus Controller API | Single entry point for provisioning the DMaaP environment (esp. feeds and topics) | https/http | https (8443), http (8080) | TCP
DMaaP MR | MR provisioning API | Provisioning API for MR topics | https | 3905 | TCP
DMaaP MR | MR client API | Client API for publishers and subscribers | https/http | https (3905), http (3904) | TCP
PostgreSQL | PostgreSQL connection | connect to the PostgreSQL database | SQL | 5432 | TCP


Consumed APIs

Container/VM name consuming the API | Container/VM/component name offering the API | internal to ECOMP or external (e.g. OpenStack) | protocol used | port number or range used | TCP/UDP | Notes
DCAE CDAP Cluster Manager | CDAP APIs | internal | HTTP/HTTPS | 10000 | TCP | Only on localhost
DCAE Controller | OpenStack | external | HTTP/HTTPS | ?? | TCP |
DCAE Controller | DCAE Policy configuration API | internal | HTTP/HTTPS | ?? | TCP |
DCAE Controller | DCAE Docker Host Manager | internal | HTTP/HTTPS | 9999 | TCP |
DCAE Controller | DCAE CDAP Cluster Manager | internal | HTTP/HTTPS | 1999 | TCP |
DCAE Controller | DCAE Postgres Manager | internal | HTTP/HTTPS | 9999 | TCP |
DCAE Controller | DCAE DataBus Controller | internal | HTTP/HTTPS | | |
DCAE Docker Host Manager | Docker API | internal | HTTP/HTTPS | ??? | TCP | Only on localhost
DCAE Controller | DCAE VES Collector Manager | internal | HTTP/HTTPS | 9999 | TCP |
DMaaP Bus Controller | PostgreSQL VM (write) | internal | | 5432 | TCP |
| DMaaP MR (all) | internal | https | 3905 | TCP | topic provisioning
| Policy Engine PAP | external | https | 9091 | TCP |
| Policy Engine PDP | external | https | 8081, 8082 | TCP |
DMaaP MR | AAF | external | https | 8095 | TCP |
PostgreSQL | iDNS /rw | external | http | 8000 | TCP | test for iDNS to determine if the VM is a master and alive
PostgreSQL | iDNS /ro | external | http | 8000 | TCP | test for iDNS to determine if the VM is a secondary and alive

Logging/Diagnostic Information

For each VM launched by the DCAE Controller, the post-launch operations are performed through the cloud-init process. A script, /tmp/dcae_install.sh, is installed and run; its output is kept in the /tmp/dcae_install.log file.

The DCAE Controller also keeps extensive logging. The log files are viewable under the /opt/app/dcae-controller-platform-server/logs directory within the DCAE Controller Docker container.
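For example (the DCAE Controller container name is illustrative; use docker ps on the controller VM to find the actual name):

# on any DCAE-launched VM, check the cloud-init installation output
tail -n 100 /tmp/dcae_install.log
# on the DCAE Controller VM, list the controller logs inside its container
docker exec -it <dcae-controller-container> ls /opt/app/dcae-controller-platform-server/logs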

 MSO

Delivery

  • A standard VM size (4 GB RAM) should fit MSO
  • Deliver MSO for open source in a common, easy and user-friendly way
  • Containers
    • 1 container for the JBOSS process (JBOSS + MSO core + BPMN); this includes the API handlers and the JRA
    • 1 container for MariaDB

Docker Diagram


    

Offered APIs

Container/VM name | API name | API purpose | protocol used | port number or range | TCP/UDP
MSO Core and BPMN / MSO VM | VID API handler | Requests coming from the portal | REST over http(s) | 8080 (http) or 8443 (https) | TCP
MSO Database / MSO VM | MSO MariaDB connection | Internal MSO database API | proprietary | 3306 | TCP
MSO Core and BPMN / MSO VM | JBOSS Management | Management console for BPMNs | http(s) | 9990 | TCP



Consumed APIs

Container/VM name | API name | API purpose | protocol used | port number or range | TCP/UDP
MSO Core and BPMN / MSO VM | UEB/DMaaP | publish/receive events and artifacts from ASDC | http/https | 3904/3905 | TCP


Logging/Diagnostic Information

MSO log files are located under a specific folder in the JBOSS container.

The JBOSS main log file (server.log) is accessible through the JBOSS console UI or by running a shell inside the container:

/opt/jboss/standalone/log

The debug level for server.log can be changed in the JBOSS console UI.

The EELF framework is used for specific logs (audit, metric and error logs); these track inter-component requests and responses and allow a complete flow to be followed through the MSO subsystem.

The EELF logs are located at the following location in the MSO JBOSS container:

/var/log/ecomp/MSO (each module has its own folder)
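A minimal way to reach these logs from the MSO VM (the JBOSS container name is illustrative; use docker ps to find it):

# open a shell in the MSO JBOSS container
docker exec -it <mso-jboss-container> bash
# JBOSS main log
tail -f /opt/jboss/standalone/log/server.log
# EELF audit/metric/error logs, one folder per module
ls /var/log/ecomp/MSO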
 Policy

Delivery

  • Packaging Structure - seven Docker containers in one VM:
    • PAP + Console (ECOMP Portal app) + LogParser (OS, Java, Tomcat, application)
    • PDP + LogParser (OS, Java, Tomcat, application)
    • PyPDP Server (OS, Java, application)
    • BRMS Gateway (OS, Java, application)
    • Drools PDP: (OS, Java, Drools runtime, Maven, application)
    • Nexus Repository: (OS, Java, Sonatype Nexus OSS repository manager)
    • Database (OS, MariaDB)
  • All images are based on the Ubuntu 14.04 LTS release, using OpenJDK 8 (replacing Oracle Java SE 8)
  • Rackspace VM will use the "15 GB Compute" (8 vCPU, 15 GB memory, 50 GB disk) configuration

Docker Diagram

Offered APIs

Container/VM name | API name | API purpose | protocol used | port number or range used | TCP/UDP
Console (Portal) | | UI, and interface from ECOMP Portal | http | 8443 | TCP
PAP | | manages the PDP Groups and Nodes | http | 9091 | TCP
PDP | | policy publishing and PIP configuration changes | http | 8081 | TCP
PyPDP Server | | queries against the Policy Engine | http | 8480 | TCP
Nexus Repository | | Nexus OSS repository for Drools model & rule artifacts | http | 8081 | TCP
Database | | MariaDB | http | 3306 | TCP


Consumed APIs

Container/VM name | Container/VM offering the API | API name | API purpose | protocol used | port number or range used | TCP/UDP
Drools PDP | DMaaP | | publish/receive events | http/https | 3904/3905 | TCP
BRMS Gateway | DMaaP | | publish configuration change events to the Drools PDP | http/https | 3904/3905 | TCP
Console (Portal) | ECOMP Portal | /ecompui | Interface to ECOMP Portal from the Portal app | https | 8443? | TCP
Drools PDP | AAI Service/aai | /aai/v8/* | REST Web Service for AAI | https | 8443 | TCP
Drools PDP | MSO Core and BPMN / MSO VM | VID API handler | Requests coming from the portal | http/https | 8080/8443 | TCP

Logging/Diagnostic Information

In the Drools PDP container, query the system health check using the appropriate credentials from $POLICY_HOME/config/policy-healthcheck.properties:

curl --silent --user '<user>:<password>' -X GET http://localhost:6969/healthcheck | python -m json.tool

Note the healthy status of the different component groups:   PDP-D (Drools PDP server), PAP (PAP and Console server), PDP (PDP and PyPDP servers).
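For example, assuming the Drools PDP container is named "drools" as in the table below (adjust the name if your deployment differs):

# open a shell in the Drools PDP container
docker exec -it drools bash
# read the health check credentials referenced above
cat $POLICY_HOME/config/policy-healthcheck.properties
# then run the curl health check shown above with those credentials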

The following log directories contain log information for each component:

Container name | Component | Filesystem Location | Description
pap | Console UI | $POLICY_HOME/servers/console/logs | Tomcat and Console UI logs
pap | PAP REST | $POLICY_HOME/servers/pap/logs | Tomcat and PAP REST logs
pap | PAP LP | $POLICY_HOME/servers/paplp/logs | PAP Log Parser logs
pdp | PDP | $POLICY_HOME/servers/pdp/logs | XACML-based PDP logs
pdp | PDP LP | $POLICY_HOME/servers/pdplp/logs | PDP Log Parser logs
pypdp | PyPDP | $POLICY_HOME/servers/pypdp/logs | PyPDP REST server logs
brmsgw | BRMS GW | $POLICY_HOME/servers/brmsgw/logs/Policy/PolicyEngineAPI | BRMS Gateway Framework logs
brmsgw | BRMS GW | $POLICY_HOME/logs | BRMS Gateway log
drools | Drools PDP | $POLICY_HOME/logs | Drools PDP logs



 Portal

Delivery

  • Portal, SDK and DBC should be fine with the medium-size VM flavor.
  • Flavor used on current Rackspace (VM name: vm-ecomp-portal-os-01)
  • CPU: 2 vCPUs
  • RAM: 15 GB
  • System Disk: 50 GB
  • Network: 625 Mb/s
  • Disk I/O: Good
  • Container 1 - Ubuntu 14.04, openjdk8 (GPL License), apache tomcat 8.0.37 (Apache License)
  • Container 2 - MariaDB (GPL License)

Connectivity Matrix


Docker Diagram

Logging/Diagnostic Information

Application logs: The Portal application logs are mapped from the Docker container to the folder below on the Portal VM.

/PROJECT/OpenSource/UbuntuEP/log/ecompportal

  • audit.log for transaction details
  • application.log for detailed application logs and debugging
  • debug.log for debugging output
  • error.log for exception debugging
  • metrics.log for metrics on transactions

Application server logs are one level up:

/PROJECT/OpenSource/UbuntuEP/log/

  • localhost_access_log.YYYY-mm-dd.txt contains access log info
  • catalina.2017-02-28.log for Tomcat information
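For example, from the Portal VM:

# follow the Portal application log mapped out of the container
tail -f /PROJECT/OpenSource/UbuntuEP/log/ecompportal/application.log
# list the application server logs (access log file names include the date)
ls /PROJECT/OpenSource/UbuntuEP/log/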
 Robot Framework

Delivery

The Robot Framework should be fine with a small VM flavor. It installs only one container. The machine OS is Ubuntu 16.04.

 SDC

Delivery

The machine OS is Ubuntu 16.04.

The machine used by SDC has:

  • 8 vCPUs
  • 15 GB RAM
  • 50 GB system disk


Docker Diagram

Docker name | Description
sdc-cassandra | Contains the Cassandra server and the logic for creating the schemas needed by SDC. On Docker startup, the schemas are created and the Cassandra server is started.
sdc-elasticsearch | Contains the Elastic Search server and the logic for creating the mapping needed by SDC. On Docker startup, the mapping is created and the Elastic Search server is started.
sdc-kibana | Contains the Kibana server and the logic needed for creating the SDC views. On Docker startup, the views are configured and the Kibana server is started.
sdc-backend | Contains the SDC back-end Jetty server. On Docker startup, the Jetty server is started with the application.
sdc-frontend | Contains the SDC front-end Jetty server. On Docker startup, the Jetty server is started with the application.


Connectivity Matrix


Consumed APIs

  • SDC consumes APIs offered by other Docker containers or VMs, either within the ECOMP component or outside it.
  • DMaaP port: 3904


Offered APIs

Container/VM name | API name | API purpose | protocol used | port number or range used | TCP/UDP
sdc-fe | /sdc1/feproxy/* | Proxy for all REST calls from the SDC UI | HTTP/HTTPS | 8181/8443 | TCP
sdc-be | /sdc2/* | Internal APIs used by the UI; requests are passed through the front-end proxy server | HTTP/HTTPS | 8080/8443 | TCP
sdc-be | /asdc/* | External APIs offered to the different components for retrieving information from the SDC catalog; protected by basic authentication | HTTP/HTTPS | 8080/8443 | TCP

  

Logging/Diagnostic Information

Diagnostic:

We provide a health check script that shows the state of the application.
The script is located at /data/scripts/docker_health.sh.
The script is taken from our repository in LF at VM spin-up.
The script calls a REST API on the FE and BE servers.
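For example, running the script from the SDC VM prints the FE and BE health check results:

# run the SDC health check script on the SDC VM
/data/scripts/docker_health.sh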

The back-end health check provides the following info; if one of the components is down, the server will fail requests:

type | section | description
general info | "sdcVersion": "1.0.0-SNAPSHOT", "siteMode": "unknown" | Shows the current version of the application installed. The site mode is not used in the current version.
general info | { "healthCheckComponent": "BE", "healthCheckStatus": "UP", "version": "1.0.0-SNAPSHOT", "description": "OK" } | Shows the current version of the application installed.
Elastic Search | { "healthCheckComponent": "ES", "healthCheckStatus": "UP", "description": "OK" } | Describes connectivity to Elastic Search.
TITAN | { "healthCheckComponent": "TITAN", "healthCheckStatus": "UP", "description": "OK" } | Describes connectivity to and from the Titan client and the Cassandra server.
DMaaP | { "healthCheckComponent": "DE", "healthCheckStatus": "UP", "description": "OK" } | Describes connectivity to DMaaP.


The Front end server health check places a REST call to the Back end server to check the connectivity status between the servers.

Logging:

server | location | type | description | rolling
BE | /data/logs/BE/2017_03_10.stderrout.log | Jetty server log | Describes info regarding Jetty startup and execution | rolls daily
BE | /data/logs/BE/ASDC/ASDC-BE/audit.log | application audit | An audit record is created for each operation in SDC | rolls at 20 MB
BE | /data/logs/BE/ASDC/ASDC-BE/debug.log | application logging | Higher logging can be enabled on demand by editing logback.xml inside the server Docker (config/catalog-be/logback.xml). Holds the debug- and trace-level output of the application. | rolls at 20 MB
BE | /data/logs/BE/ASDC/ASDC-BE/error.log | application logging | Holds the info- and error-level output of the application. | rolls at 20 MB
BE | /data/logs/BE/ASDC/ASDC-BE/transaction.log | application logging | Not currently in use; will be used in future releases. | rolls at 20 MB
BE | /data/logs/BE/ASDC/ASDC-BE/all.log | application logging | On demand, log aggregation into one file can be enabled for easier debugging by editing logback.xml inside the server Docker (config/catalog-be/logback.xml) and setting <property scope="context" name="enable-all-log" value="false" /> to true. Holds all logging output of the application. | rolls at 20 MB
FE | /data/logs/FE/2017_03_10.stderrout.log | Jetty server log | Describes info regarding Jetty startup and execution | rolls daily
FE | /data/logs/FE/ASDC/ASDC-FE/debug.log | application logging | Higher logging can be enabled on demand by editing logback.xml inside the server Docker (config/catalog-fe/logback.xml). Holds the debug- and trace-level output of the application. | rolls at 20 MB
FE | /data/logs/FE/ASDC/ASDC-FE/error.log | application logging | Holds the info- and error-level output of the application. | rolls at 20 MB
FE | /data/logs/FE/ASDC/ASDC-FE/all.log | application logging | On demand, log aggregation into one file can be enabled for easier debugging by editing logback.xml inside the server Docker (config/catalog-fe/logback.xml) and setting <property scope="context" name="enable-all-log" value="false" /> to true. Holds all logging output of the application. | rolls at 20 MB

The logs are mapped from the Docker containers to an outside path so that they remain available if a container fails.
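For example, from the SDC VM (paths per the table above):

# follow the back-end error log from the host, outside the container
tail -f /data/logs/BE/ASDC/ASDC-BE/error.log
# inspect the most recent front-end debug output
tail -n 50 /data/logs/FE/ASDC/ASDC-FE/debug.log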



 SDNC

Delivery

  • The SDN-C VM is assumed to use the "medium" flavor (4 GB memory, 4 vCPU, 80 GB disk)
  • SDN-C Packaging Structure

The SDN-C package should be composed of four Docker images hosted on a single Ubuntu 14.04 LTS VM instance

  • Directed graph builder container, used by developers to create directed graphs (contains NodeRed + SDN-C extensions)
  • "Controller tier" container  consisting of the OpenDaylight container + SDN-C extensions such as SLI
  • "Admin tier" container, consisting of admin portal
  • "Database tier" container consisting of MySQL community edition database server

Docker Diagram


Offered APIs


Consumed APIs


Logging/Diagnostic Information


 VID

Delivery

  • MariaDB Image -

    Create a container using the MariaDB Docker image.

  • VID Image -
    Create a Docker image which extends the Tomcat Docker image and links to the MariaDB container created earlier. Configuration of the Docker container is customized by providing environment variables to the "docker run" command. The environment variables provided are the same set of variables that VID currently supports with the SWM deployment method (which includes the ASDC server host:port, MSO host:port, A&AI host:port, etc.); see the sketch after the flavor table below.
  • Recommended Rackspace VM Flavor
ID | Flavor name | Memory | Disk | Ephemeral | VCPUs | RXTX factor
general1-2 | 2GB General Purpose v1 | 2048 | 40 | 0 | 2 | 400.0
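A hypothetical sketch of the container start-up described above; the image names, link alias and environment variable names are placeholders for illustration only, not the actual VID variable set:

# start the MariaDB container first (image tag and credentials are placeholders)
docker run --name vid-mariadb -e MYSQL_ROOT_PASSWORD=<password> -d mariadb
# start the VID container linked to MariaDB, passing the SWM-style settings as environment variables
docker run --name vid-server --link vid-mariadb:vid-mariadb \
  -e ASDC_HOST=<sdc-host> -e ASDC_PORT=8443 \
  -e MSO_HOST=<mso-host> -e MSO_PORT=8080 \
  -e AAI_HOST=<aai-host> -e AAI_PORT=8443 \
  -p 8080:8080 -d <vid-image>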


Docker Diagram


Offered APIs


Consumed APIs


Logging/Diagnostic Information

Application logs:

/opt/app/vid/logs/vid

  • audit.log for transaction details
  • application.log for detailed application logs and debugging
  • debug.log for debugging output
  • error.log for exception debugging
  • metrics.log for metrics on transactions

Application server logs:

/usr/local/tomcat/logs/

  • localhost_access_log.YYYY-mm-dd.txt contains access log info
  • catalina.2017-02-28.log for Tomcat information


 Message Router

Delivery

There are two instances of the Message Router present in the demos. One is used within the DCAE component for its communication needs and is deployed onto DCAE's own Docker host. The other is deployed on its own Docker host VM for non-DCAE OpenECOMP communications. Both instances are launched from the same start-up scripts and configurations.

Each Message Router instance consists of three Docker containers: an attos/dmaap image container, a wurstmeister/zookeeper image container, and a Kafka container built from a Dockerfile. The containers are started from a Docker Compose file. Because there are API dependencies between the containers, it takes a little time (e.g. ~10 seconds) for all the containers to complete booting and become ready to serve.

The Message Router instances used in the demos are pre-programmed with a number of topics and API keys for demo use. These are stored in state data files that are also cloned from the start-up scripts and configuration files repo, and mounted into the Docker containers at start-up time.

If a Message Router is ever suspected of being in an incoherent state and needs a "reboot", it is important to do so in a fashion that properly loads this initial state. There are two approaches. The first is to terminate the VM running the Message Router instance, then relaunch the VM, load the Message Router files and start the containers. The other approach is to first bring down the running Docker containers (e.g. "docker-compose down"), then remove the state data directories, namely the data-kafka and data-zookeeper directories, and finally unarchive the cached state tarball state-20170301.tar.gz to restore the state data directories before running "docker-compose up" again, as sketched below.
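A minimal sketch of the second approach, run from the directory holding the Message Router docker-compose file and state data:

# stop and remove the running Message Router containers
docker-compose down
# remove the (possibly corrupted) state data directories
rm -rf data-kafka data-zookeeper
# restore the cached initial state
tar xzf state-20170301.tar.gz
# start the containers again
docker-compose up -d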

The Message Router provides a listing API for querying what topics are currently provisioned on the Message Router. It is mapped and available on the Docker host VM as: http://${VM_HOST_IP}:3904/topics

Moreover, when supplied with an individual topic name, this API returns more details of the topic, such as the owner's API key and the API keys of the authorized readers and writers: http://${VM_HOST_IP}:3904/topics/${TOPIC_NAME}
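For example, from the Message Router's Docker host VM:

# list all provisioned topics
curl http://${VM_HOST_IP}:3904/topics
# show details (owner API key, authorized readers and writers) for one topic
curl http://${VM_HOST_IP}:3904/topics/${TOPIC_NAME}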

More ways of using the Message Router API can be found in the test scripts which can be found under the /opt/dcae.demo.startup.message-router/docker_files/tests directory of the VM.


Testing

To run the health check manually:

-   Log into the vm1-robot VM (get <publicNetIP> from cloud provider)

-   In the shell run: docker exec openecompete_container /var/opt/OpenECOMP_ETE/runTags.sh -i health

Build and Test Environments

  • Ubuntu 14.04 LTS – 64-bit (VID, SDNC, AAI, Portal, Policy, APPC, DCAE)
  • Ubuntu 16.04 LTS – 64-bit (MSO, SDC)
  • Docker 1.12
  • Supported Web Browsers : Mozilla Firefox/Chrome

Features

Initial Open Source version of ECOMP including

  • AAI, Portal, MSO, SDNC, VID, SDC
  • Closed Loop applications (APPC, Policy and DCAE)

Known Restrictions

General:

  • This release is not for production use out of the box, since it is not configured for auto-scaling (healing is min=1, max=1), resiliency, scalability, clustering or disaster recovery, nor for any persistency after a VM and/or container is rebooted
  • Users can install without Docker, but no documentation for this is provided in this release
  • Heat templates apply to Rackspace and OpenStack environments only. Consult the Rackspace documentation for the memory and vCPUs of other OpenStack image sizes
  • Startup sequence is critical since the DMaaP Message Router (MR) and its topics must be in place before flows will work
  • The messaging topics used by the OpenECOMP components have already been pre-configured into DMaaP Message Router initialization state.  However, it is advised to leave sufficient time in the system startup sequence for the Message Router to complete its booting (e.g. ~20 seconds) before proceeding with launching of OpenECOMP components that depend on Message Router for communications.
  • Healthcheck details can provide information on sub-component issues in many cases
  • Rackspace monitoring of VMs is the first level of VM health checking, followed by the Robot Framework health check
  • Healthcheck can be used for monitoring key components like DMaaP MR (including the list topics command)
  • Additional application monitoring (e.g., key log files and alarms) will be documented in future.
  • Application monitoring tools are not included in this release.

AAI:

  • The HBase configuration is for a demo use case only – it should be configured per application needs.

APPC:

  • The APPC LCM APIs (appc-provider-lcm) only work when there is an AAI instance available and set up to interface with APPC
  • Currently, the "ModifyConfig" API and the implementation in the Master Directed Graph is only designed to work with the vFW Closed-Loop Demo.
  • The “appc-iaas-adapter” does not start automatically during Docker instantiation.

  • A Maven settings file (which lets APPC know where to download/upload Maven artifacts to/from) needs to be provided if building the project on a local machine. An example settings file setup can be found here: https://maven.apache.org/settings.html
  • see example root pom

 Portal:

  • Portal depends on DNS names to access the landing page login URL; this requires manual setup of DNS host details in the system.properties file, as detailed in the instructions

 MSO:

  • Although MSO should run on any cloud integrated with OpenStack (i.e. Icehouse), it has only been validated on Rackspace

Robot Framework:

  • The Robot container does not contain any persistent volumes; results should be stored and viewed elsewhere
