  1. A model project, which contains the data model as an XCORE file. These projects always have a name (and artifactId) that ends in -model. The model file is located in the src/main/xcore directory, and the project is set up so that Eclipse automatically updates the generated Java code in the src/main/xcore-gen directory. This is a straight EMF-type project and contains very little that is SOMF specific.
  2. A SOMF project, which contains the SOMF-generated code and the implementation of the APIs that the model defines:
    1. The SOMF generator class Generator.java, under the src/main/java folder. It contains the customization for creating the SOMF implementation. Running this class regenerates the contents of the two generated folders.
    2. The src/main/sirius-gen folder, which contains generated Java code implementing the client- and server-side functions.
    3. The src/main/server-gen folder, which contains generated Bash and Groovy scripts implementing the client- and server-side functions.
    4. The SOMF provider classes (*Provider.java) under the src/main/java folder. These implement the operations/methods that are defined in the model.
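
As an illustration, the two projects line up roughly as sketched below. The project names are invented for this example; the folder names are the ones listed above.

    dcae-controller-example-model/        (hypothetical name; ends in -model)
      src/main/xcore/                     (hand-written XCORE data model)
      src/main/xcore-gen/                 (Java code generated by EMF/Eclipse)

    dcae-controller-example/              (hypothetical SOMF project)
      src/main/java/
        .../Generator.java                (SOMF generator with customizations)
        .../*Provider.java                (hand-written provider implementations)
      src/main/sirius-gen/                (generated Java client/server code)
      src/main/server-gen/                (generated Bash and Groovy scripts)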

DCAE Configuration Setup

The DCAE configuration setup aims to cleanly separate environmental and non-environmental configuration.

The ONAP demo setup adds another level of complexity, since the overall ONAP HEAT templates deliver environmental information that needs to be included in the environmental configuration.

These are the various configuration entities:

  1. DCAE non-environmental ONAP demo configuration (https://gerrit.onap.org/r/gitweb?p=dcae/demo.git;a=tree;f=OPENECOMP-DEMO;h=38b2f51a1be5c12f357e228b7a655c0ee97606b8;hb=HEAD), with the main files:
    1. location-types.yaml defines the various DCAE entities (e.g., docker-host VMs, CDAP cluster VMs, the VES collector, etc.) that are part of the demo deployment.
    2. vm-templates/vm-docker-host.yaml, vm-templates/vm-cdap-cluster.yaml, and vm-templates/vm-postgresql.yaml define the deployment configuration for the various VM deployments.
    3. docker-templates/docker-XX.yaml does the same for Docker deployments.
    4. cdap-templates/cdap-YY.yaml does the same for CDAP deployments.
    5. streams.yaml defines the DCAE DMaaP setup.
  2. HEAT-provided information. When the ONAP demo is deployed, the vm1-dcae-controller VM comes up running the DCAE controller Docker container. This container uses the configuration attributes set in the file /opt/app/dcae-controller/config.yaml and applies them to the DCAE controller environment file, replacing expressions like @{XXX} with the value of XXX from the config.yaml file (a minimal sketch of this substitution follows this list).
  3. DCAE environmental ONAP demo configuration. Currently three deployment scenarios are supported (see DCAE-7 for details on the current status); a sketch for checking the required values also follows this list.
    1. RACKSPACE. This scenario only works when the cloud provider is Rackspace. In this setup each DCAE VM gets a predefined fixed IP on a private network and a random public IP on the public network. This scenario depends on the following values from the HEAT-provided environment file: DCAE-VERSION, DOCKER-REGISTRY, DOCKER-VERSION, GIT-MR-REPO, HORIZON-URL, KEYSTONE-URL, NEXUS-PASSWORD, NEXUS-RAWURL, NEXUS-USER, OPENSTACK-KEYNAME, OPENSTACK-PASSWORD, OPENSTACK-PRIVATE-NETWORK, OPENSTACK-PUBKEY, OPENSTACK-REGION, OPENSTACK-TENANT-ID, OPENSTACK-TENANT-NAME, OPENSTACK-USER, POLICY-IP, STATE, ZONE.
    2. 2-NIC. This scenario works in most OpenStack environments and allows a higher level of flexibility in assigning IPs, etc. As in the RACKSPACE setup, each DCAE VM gets a predefined fixed IP on a private network and a random public IP on the public network. This scenario depends on the following values from the HEAT-provided environment file: DCAE-VERSION, DNS-IP-ADDR, DOCKER-REGISTRY, DOCKER-VERSION, FLAVOR-LARGE, GIT-MR-REPO, HORIZON-URL, KEYSTONE-URL, NEXUS-PASSWORD, NEXUS-RAWURL, NEXUS-USER, OPENSTACK-AUTH-METHOD, OPENSTACK-KEYNAME, OPENSTACK-PASSWORD, OPENSTACK-PRIVATE-NETWORK, OPENSTACK-PUBKEY, OPENSTACK-REGION, OPENSTACK-TENANT-ID, OPENSTACK-TENANT-NAME, OPENSTACK-USER, POLICY-IP, STATE, UBUNTU-1404-IMAGE, UBUNTU-1604-IMAGE, ZONE, dcae_cdap00_ip_addr, dcae_cdap01_ip_addr, dcae_cdap02_ip_addr, dcae_coll00_ip_addr, dcae_ip_addr, dcae_pstg00_ip_addr, public_net_id. The HEAT template demo/heat/OpenECOMP/onap_openstack_nofloat.yaml provides these variables.
    3. 1-NIC-FLOATING-IPS. This scenario works in most OpenStack environments and allows a higher level of flexibility in assigning IPs, etc. In this setup each DCAE VM has only one NIC, with an IP on the private network, plus a floating IP from the public network. Here it is the floating IPs that can be given predefined values, while the private-network IPs assigned to the NICs are random. This causes a few issues with the rest of the demo setup, but it should work in the 1.1 release. This scenario depends on the following values from the HEAT-provided environment file: DCAE-VERSION, DNS-IP-ADDR, DOCKER-REGISTRY, DOCKER-VERSION, FLAVOR-LARGE, GIT-MR-REPO, HORIZON-URL, KEYSTONE-URL, NEXUS-PASSWORD, NEXUS-RAWURL, NEXUS-USER, OPENSTACK-KEYNAME, OPENSTACK-PASSWORD, OPENSTACK-PRIVATE-NETWORK, OPENSTACK-PUBKEY, OPENSTACK-REGION, OPENSTACK-TENANT-ID, OPENSTACK-TENANT-NAME, OPENSTACK-USER, POLICY-IP, STATE, UBUNTU-1404-IMAGE, UBUNTU-1604-IMAGE, ZONE, dcae_cdap00_float_ip_addr, dcae_cdap01_float_ip_addr, dcae_cdap02_float_ip_addr, dcae_coll00_float_ip_addr, dcae_float_ip_addr, dcae_pstg00_float_ip_addr. The HEAT template demo/heat/OpenECOMP/onap_openstack_float.yaml provides these variables.
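
To illustrate the @{XXX} substitution described in item 2, here is a minimal Bash sketch. It is not the DCAE tooling itself (the controller uses bin/dcae-controller.sh rackspace-substitute, shown in the deployment flow below); the file arguments and the flat "key: value" assumption are illustrative only.

    #!/bin/bash
    # Minimal illustrative sketch of @{XXX} substitution: replace @{KEY} tokens in a
    # template with the corresponding "KEY: value" entries from a flat YAML file.
    # Paths are examples; the real tooling (rackspace-substitute) handles more cases.
    CONFIG=/opt/app/dcae-controller/config.yaml   # HEAT-provided attributes
    TEMPLATE=$1                                   # e.g. a file containing @{ZONE}, @{POLICY-IP}, ...
    OUTPUT=$2

    cp "$TEMPLATE" "$OUTPUT"
    # Read simple "key: value" lines and substitute each @{key} occurrence.
    while IFS=': ' read -r key value; do
      [ -z "$key" ] && continue
      sed -i "s|@{$key}|$value|g" "$OUTPUT"
    done < "$CONFIG"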
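
Similarly, a hypothetical helper script (not part of the DCAE tooling) can check that the HEAT-provided config.yaml contains every value a scenario needs; the list below is the 2-NIC scenario's variables from above.

    #!/bin/bash
    # Hypothetical check: verify that config.yaml has every value the 2-NIC scenario
    # requires. Assumes flat "key: value" entries in config.yaml.
    CONFIG=/opt/app/dcae-controller/config.yaml
    REQUIRED="DCAE-VERSION DNS-IP-ADDR DOCKER-REGISTRY DOCKER-VERSION FLAVOR-LARGE \
    GIT-MR-REPO HORIZON-URL KEYSTONE-URL NEXUS-PASSWORD NEXUS-RAWURL NEXUS-USER \
    OPENSTACK-AUTH-METHOD OPENSTACK-KEYNAME OPENSTACK-PASSWORD OPENSTACK-PRIVATE-NETWORK \
    OPENSTACK-PUBKEY OPENSTACK-REGION OPENSTACK-TENANT-ID OPENSTACK-TENANT-NAME \
    OPENSTACK-USER POLICY-IP STATE UBUNTU-1404-IMAGE UBUNTU-1604-IMAGE ZONE \
    dcae_cdap00_ip_addr dcae_cdap01_ip_addr dcae_cdap02_ip_addr dcae_coll00_ip_addr \
    dcae_ip_addr dcae_pstg00_ip_addr public_net_id"

    missing=0
    for key in $REQUIRED; do
      grep -q "^$key:" "$CONFIG" || { echo "missing: $key"; missing=1; }
    done
    exit $missing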

The overall deployment flow is as follows.

  1. HEAT template
    1. Create the vm1-dcae-controller VM.
    2. Set up /opt/app/dcae-controller/config.yaml.
    3. Start the Docker container for the DCAE controller.
  2. The DCAE controller (running /opt/app/dcae-controller-platform-server/bin/controller-startup.sh) will:
    1. Determine the BASE based on the BASE attribute in config.yaml
    2. Determine the ZONE based on the ZONE attribute in config.yaml
    3. Substitute the HEAT-delivered attributes from config.yaml into /opt/app/dcae-controller-platform-server/OPENECOMP-DEMO-$BASE and produce /opt/app/dcae-controller-platform-server/OPENECOMP-DEMO-$ZONE:
      1. bin/dcae-controller.sh rackspace-substitute --from OPENECOMP-DEMO-$BASE --to OPENECOMP-DEMO-$ZONE --file /opt/app/dcae-controller/config.yaml
    4. Merge the non-environmental configuration OPENECOMP-DEMO with the environmental configuration OPENECOMP-DEMO-$ZONE to create the complete configuration for this specific environment, which is placed in GITLINK/OPENECOMP-DEMO-$ZONE:
      1. java -cp 'lib/*' org.openecomp.dcae.controller.operation.utils.GenControllerConfiguration $ZONE . GITLINK OPENECOMP-DEMO
    5. The DCAE Controller is then started and synced with the configuration:
      1. bin/dcae-controller.sh start
      2. bin/dcae-controller.sh sync-configuration --environment OPENECOMP-DEMO-$ZONE

    6. The DCAE Controller can then deploy the various DCAE components defined in the demo environment (a consolidated sketch of these steps follows this list):

      1. bin/dcae-controller.sh deploy-service-instance -i $ZONE -s vm-docker-host-1
      2. bin/dcae-controller.sh deploy-service-instance -i $ZONE -s vm-postgresql  
      3. bin/dcae-controller.sh deploy-service-instance -i $ZONE -s vm-cdap-cluster
      4. bin/dcae-controller.sh deploy-service-instance -i $ZONE -s docker-databus-controller
      5. bin/dcae-controller.sh deploy-service-instance -i $ZONE -s cdap-helloworld
      6. bin/dcae-controller.sh deploy-service-instance -i $ZONE -s cdap-tca-hi-lo
      7. bin/dcae-controller.sh deploy-service-instance -i $ZONE -s docker-common-event
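
Putting the controller-side steps together, the sequence can be sketched as one Bash script. This is only an illustrative consolidation of the commands listed above, not a script shipped with DCAE; it assumes it runs from /opt/app/dcae-controller-platform-server and that BASE and ZONE have already been read from /opt/app/dcae-controller/config.yaml.

    #!/bin/bash
    # Illustrative consolidation of steps 3-6 above (assumptions noted in the text).
    set -e

    # Step 3: substitute the HEAT-delivered attributes into the environmental configuration.
    bin/dcae-controller.sh rackspace-substitute --from OPENECOMP-DEMO-$BASE \
      --to OPENECOMP-DEMO-$ZONE --file /opt/app/dcae-controller/config.yaml

    # Step 4: merge the non-environmental and environmental configuration.
    java -cp 'lib/*' org.openecomp.dcae.controller.operation.utils.GenControllerConfiguration \
      $ZONE . GITLINK OPENECOMP-DEMO

    # Step 5: start the controller and sync the configuration.
    bin/dcae-controller.sh start
    bin/dcae-controller.sh sync-configuration --environment OPENECOMP-DEMO-$ZONE

    # Step 6: deploy the demo DCAE components.
    for svc in vm-docker-host-1 vm-postgresql vm-cdap-cluster docker-databus-controller \
               cdap-helloworld cdap-tca-hi-lo docker-common-event; do
      bin/dcae-controller.sh deploy-service-instance -i $ZONE -s "$svc"
    done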