Prerequisites

The supported versions are as follows:

ONAP Release   Rancher   Kubernetes   Helm    Kubectl   Docker
Amsterdam      1.6.10    1.7.7        2.3.0   1.7.7     1.12.x
Beijing        1.6.14    1.8.10       2.8.2   1.8.10    17.03-ce
Casablanca     1.6.18    1.8.10       2.9.1   1.8.10    17.03-ce

This guide uses the amsterdam branch with DCAEGEN2 support, which has different Rancher/Helm/Kubernetes/Docker version levels than later releases.

see ONAP on Kubernetes#HardwareRequirements
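
If any of this tooling is already installed on your hosts, you can quickly check whether it matches the supported versions above. A minimal check using the standard version flags of each client:

    # verify installed client versions against the table above
    docker --version
    kubectl version --client
    helm version --client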

Installing Docker/Rancher/Helm/Kubectl

Run the following script as root to install the appropriate versions of Docker and Rancher. Rancher will install Kubernetes and Helm; the script installs the helm and kubectl clients.

Adding hosts to the Kubernetes cluster is done through the Rancher UI.

OOM-715

https://gerrit.onap.org/r/#/c/32019/17/install/rancher/oom_rancher_setup.sh
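
A typical way to run it once the script has been downloaded from the review above (a sketch only; the script may take arguments such as the Rancher server address, so check its header or usage output before running):

    # on the Rancher VM, as root
    chmod +x oom_rancher_setup.sh
    ./oom_rancher_setup.sh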


Overall required resources:


               Nbr VM   vCPUs   RAM (GB)   Disk (GB)   Swap (GB)   Floating IPs
Rancher        1        2       4          40          -           -
Kubernetes     1        8       80-128     100         16          -
DCAE           15       44      88         880         -           15
Total          17       54      156-220    1020        16          15

Number                           Flavor             size (vCPU/RAM/HD)
1                                m1.small           1/2/20
7 (pg, doks, dokp, cnsl, orcl)   m1.medium          2/4/40
7 (cdap)                         m1.large           8/8/80
0                                m1.xlarge          8/16/160
1 (oom)                          m1.xxlarge (oom)   12/64/160
1 (oom)                          m1.medium          2/4/40

Below is the HEAT portion of the setup in OOM, minus the 64G OOM and 4G OOM-Rancher VMs.

Setup infrastructure

Rancher

  1. Create a plain Ubuntu VM in your cloud infrastructure.

    The following specs are enough for Rancher:

    VCPUs: 2
    RAM: 4 GB
    Disk: 40 GB
  2. Set up Rancher 1.6.10 (amsterdam branch only) by running this command:

    docker run -d -p 8880:8080 rancher/server:v1.6.10
  3. Navigate to Rancher UI

    http://<rancher-vm-ip>:8880
  4. Set up basic access control: Admin → Access Control
  5. Install OpenStack as a machine driver: Admin → Machine Drivers

We're now all set to create our Kubernetes host.
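
Before moving on, a quick sanity check that the Rancher server container started in step 2 is up and that the UI answers (assuming the image and port mapping used above):

    # the rancher/server container should show as Up
    docker ps --filter ancestor=rancher/server:v1.6.10
    # the UI should answer on port 8880
    curl -sI http://<rancher-vm-ip>:8880 | head -1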

Kubernetes on Rancher

see related ONAP on Kubernetes#QuickstartInstallation

Video describing all the steps

zoom_0.mp4

  1. Create an environment
    1. Default → Managed Environments
    2. Click Add Environment
    3. Fill in the Name and the Description
    4. Select Kubernetes as Environment Template
    5. Click Create
  2. Create an API key: API → Keys
    1. Click Add Account API Key
    2. Fill in the Name and the Description
    3. Click Create
    4. Backup the Access Key and the Secret Key
  3. Retrieve your environment ID
    1. Navigate to the previously created environment
    2. In the browser URL, you should see the following, containing your <env-id>

      http://<rancher-vm-ip>:8080/env/<env-id>/kubernetes/dashboard
  4. Create the Kubernetes host on OpenStack.


    1. Using Rancher API
      Make sure to fill in the placeholders as follows:

      {API_ACCESS_KEY}: The API KEY created in the previous step
      {API_SECRET_KEY}: The API Secret created in the previous step
      {OPENSTACK_INSTANCE}: The OpenStack Instance Name to give to your K8S VM
      {OPENSTACK_IP}: The IP of your OpenStack deployment
      {RANCHER_IP}: The IP of the Rancher VM created previously
      {K8S_FLAVOR}: The Flavor to use for the kubernetes VM. Recommended specs:

      VCPUs: 8
      RAM: 64 GB
      Disk: 100 GB
      Swap: 16 GB

      I added some swap because most ONAP applications are idle most of the time, so it is fine to let the host push their dirty pages out to swap.
      {UBUNTU_1604}: The Ubuntu 16.04 image
      {PRIVATE_NETWORK_NAME}: a private network
      {OPENSTACK_TENANT_NAME}: Openstack tenant
      {OPENSTACK_USERNAME}: Openstack username
      {OPENSTACK_PASSWORD}: OpenStack password

      curl -u "{API_ACCESS_KEY}:{API_SECRET_KEY}" \
      -X POST \
      -H 'Accept: application/json' \
      -H 'Content-Type: application/json' \
      -d '{
      "hostname":"{OPENSTACK_INSTANCE}",
      "engineInstallUrl":"wget https://raw.githubusercontent.com/rancher/install-docker/master/1.12.6.sh",
      "openstackConfig":{
          "authUrl":"http://{OPENSTACK_IP}:5000/v3",
          "domainName":"Default",
          "endpointType":"adminURL",
          "flavorName":"{K8S_FLAVOR}",
          "imageName":"{UBUNTU_1604}",
          "netName":"{PRIVATE_NETWORK_NAME}",
          "password":"{OPENSTACK_PASSWORD}",
          "sshUser":"ubuntu",
          "tenantName":"{OPENSTACK_TENANT_NAME}",
          "username":"{OPENSTACK_USERNAME}"}
      }' \
      'http://{RANCHER_IP}:8080/v2-beta/projects/{ENVIRONMENT_ID}/hosts/'
    2. Doing it manually
      1. Create a VM in your VIM with the appropriate specs (see point above)
      2. Provision it with Docker. The proper Docker version to install for Rancher 1.6 is listed here: http://rancher.com/docs/rancher/v1.6/en/hosts/#supported-docker-versions

      3. In Rancher, go to Infrastructure → Hosts, click the Add Host button, then "Copy, paste, and run the command below to register the host with Rancher".
  5. Let's wait a few minutes until it's ready.
  6. Get your kubectl config
    1. Click Kubernetes → CLI
    2. Click Generate Config
    3. Copy/paste it on your host, under

      ~/.kube/config

      If you have multiple Kubernetes environments, you can give the file a different name instead of config. Then reference all your kubectl configs in your bash_profile as follows:

      KUBECONFIG=\
      /Users/adetalhouet/.kube/k8s.adetalhouet1.env:\
      /Users/adetalhouet/.kube/k8s.adetalhouet2.env:\
      /Users/adetalhouet/.kube/k8s.adetalhouet3.env
      export KUBECONFIG
  7. Install kubectl

    curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.8.0/bin/linux/amd64/kubectl
    chmod +x ./kubectl
    sudo mv ./kubectl /usr/local/bin/kubectl
  8. Make your kubectl use this new environment

    kubectl config use-context <rancher-environment-name>
  9. After a little bit, your environment should be ready. To verify, use the following command (two extra sanity checks follow right after this list):

    $ kubectl get pods --all-namespaces
    NAMESPACE     NAME                                   READY     STATUS    RESTARTS   AGE
    kube-system   heapster-4285517626-4dst0              1/1       Running   0          4m
    kube-system   kube-dns-638003847-lx9f2               3/3       Running   0          4m
    kube-system   kubernetes-dashboard-716739405-f0kgq   1/1       Running   0          4m
    kube-system   monitoring-grafana-2360823841-0hm22    1/1       Running   0          4m
    kube-system   monitoring-influxdb-2323019309-4mh1k   1/1       Running   0          4m
    kube-system   tiller-deploy-737598192-8nb31          1/1       Running   0          4m
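
As an extra sanity check, you can also confirm that the host registered as a Ready node and that the helm client can reach the tiller-deploy pod listed above (the reported versions should line up with the table in the Prerequisites section):

    # the Rancher-managed host should appear as a Ready node
    kubectl get nodes
    # prints both the client and the server (tiller) versions
    helm version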

Deploy OOM

Video describing all the steps

zoom_1.mp4

We will basically follow this guide: http://onap.readthedocs.io/en/latest/submodules/oom.git/docs/oom_user_guide.html?highlight=oom

  1. Clone OOM amsterdam branch

    git clone -b amsterdam https://gerrit.onap.org/r/p/oom.git
  2. Prepare configuration
    1. Edit the onap-parameters.yaml under

      oom/kubernetes/config


      Update (04/01/2018):
      Since c/26645 is merged, you now have the ability to deploy DCAE from OOM.
      You can disable DCAE deployment through the following parameters:

      1. set DEPLOY_DCAE=false

      2. set disableDcae: true in the dcaegen2 values.yaml located under oom/kubernetes/dcaegen2/values.yaml.

    2. To have endpoints register with MSB, add your kubectl config token. The token can be found in the text you pasted from Rancher into ~/.kube/config. Paste this token string under kubeMasterAuthToken located at

      oom/kubernetes/kube2msb/values.yaml
  3. Create the config

    In Amsterdam, the namespace, passed with the -n parameter, has to be onap.
    Work is in progress to make this configurable in Beijing.

    cd oom/kubernetes/config
    ./createConfig.sh -n onap


    This step creates the directory tree used to persist data on the VM hosting ONAP and populates it with the initial configuration data. In Amsterdam, this is hardcoded to the path /dockerdata-nfs/onap/
    Work is in progress to make this configurable in Beijing; see OOM-145.


  4. Deploy ONAP

    In Amsterdam, the namespace, passed with the -n parameter, has to be onap.
    Work is in progress to make this configurable in Beijing.

    cd oom/kubernetes/oneclick
    ./createAll.bash -n onap



    Update (04/01/2018):
    Since c/26645 is merged, two new containers are being deployed under the onap-dcaegen2 namespace, as shown in the following diagram.

    Deploying the dcaegen2 pod


    The dcae-bootstrap VM created in OpenStack will run the dcae-bootstrap container, responsible for bringing up the whole DCAE stack. Its progress can be followed by tailing the logs of the docker container named boot running in the VM. Below is the full log output of a successful run.

    DCAE Bootstrap container logs
    root@dcae-dcae-bootstrap:~# docker logs -f boot
    + PVTKEY=./key600
    + cp ./config/key ./key600
    + chmod 600 ./key600
    + virtualenv dcaeinstall
    New python executable in /opt/app/installer/dcaeinstall/bin/python2
    Also creating executable in /opt/app/installer/dcaeinstall/bin/python
    Installing setuptools, pkg_resources, pip, wheel...done.
    Running virtualenv with interpreter /usr/bin/python2
    + source dcaeinstall/bin/activate
    ++ deactivate nondestructive
    ++ unset -f pydoc
    ++ '[' -z '' ']'
    ++ '[' -z '' ']'
    ++ '[' -n /bin/bash ']'
    ++ hash -r
    ++ '[' -z '' ']'
    ++ unset VIRTUAL_ENV
    ++ '[' '!' nondestructive = nondestructive ']'
    ++ VIRTUAL_ENV=/opt/app/installer/dcaeinstall
    ++ export VIRTUAL_ENV
    ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
    ++ PATH=/opt/app/installer/dcaeinstall/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
    ++ export PATH
    ++ '[' -z '' ']'
    ++ '[' -z '' ']'
    ++ _OLD_VIRTUAL_PS1=
    ++ '[' x '!=' x ']'
    +++ basename /opt/app/installer/dcaeinstall
    Collecting cloudify==3.4.0
    ++ PS1='(dcaeinstall) '
    ++ export PS1
    ++ alias pydoc
    ++ '[' -n /bin/bash ']'
    ++ hash -r
    + pip install cloudify==3.4.0
      Downloading cloudify-3.4.tar.gz (60kB)
    Collecting cloudify-plugins-common==3.4 (from cloudify==3.4.0)
      Downloading cloudify-plugins-common-3.4.tar.gz (72kB)
    Collecting cloudify-rest-client==3.4 (from cloudify==3.4.0)
      Downloading cloudify-rest-client-3.4.tar.gz
    Collecting cloudify-dsl-parser==3.4 (from cloudify==3.4.0)
      Downloading cloudify-dsl-parser-3.4.tar.gz (48kB)
    Collecting cloudify-script-plugin==1.4 (from cloudify==3.4.0)
      Downloading cloudify-script-plugin-1.4.tar.gz
    Collecting pyyaml==3.10 (from cloudify==3.4.0)
      Downloading PyYAML-3.10.tar.gz (241kB)
    Collecting argcomplete==1.1.0 (from cloudify==3.4.0)
      Downloading argcomplete-1.1.0-py2.py3-none-any.whl
    Collecting fabric==1.8.3 (from cloudify==3.4.0)
      Downloading Fabric-1.8.3-py2-none-any.whl (91kB)
    Collecting PrettyTable<0.8,>=0.7 (from cloudify==3.4.0)
      Downloading prettytable-0.7.2.zip
    Collecting colorama==0.3.3 (from cloudify==3.4.0)
      Downloading colorama-0.3.3.tar.gz
    Collecting jinja2==2.7.2 (from cloudify==3.4.0)
      Downloading Jinja2-2.7.2.tar.gz (378kB)
    Collecting itsdangerous==0.24 (from cloudify==3.4.0)
      Downloading itsdangerous-0.24.tar.gz (46kB)
    Collecting retrying==1.3.3 (from cloudify==3.4.0)
      Downloading retrying-1.3.3.tar.gz
    Collecting wagon==0.3.2 (from cloudify==3.4.0)
      Downloading wagon-0.3.2.tar.gz
    Collecting pika==0.9.14 (from cloudify-plugins-common==3.4->cloudify==3.4.0)
      Downloading pika-0.9.14.tar.gz (72kB)
    Collecting networkx==1.8.1 (from cloudify-plugins-common==3.4->cloudify==3.4.0)
      Downloading networkx-1.8.1.tar.gz (806kB)
    Collecting proxy_tools==0.1.0 (from cloudify-plugins-common==3.4->cloudify==3.4.0)
      Downloading proxy_tools-0.1.0.tar.gz
    Collecting bottle==0.12.7 (from cloudify-plugins-common==3.4->cloudify==3.4.0)
      Downloading bottle-0.12.7.tar.gz (69kB)
    Collecting requests==2.7.0 (from cloudify-rest-client==3.4->cloudify==3.4.0)
      Downloading requests-2.7.0-py2.py3-none-any.whl (470kB)
    Collecting requests_toolbelt (from cloudify-rest-client==3.4->cloudify==3.4.0)
      Downloading requests_toolbelt-0.8.0-py2.py3-none-any.whl (54kB)
    Collecting paramiko<1.13,>=1.10 (from fabric==1.8.3->cloudify==3.4.0)
      Downloading paramiko-1.12.4.tar.gz (1.1MB)
    Collecting markupsafe (from jinja2==2.7.2->cloudify==3.4.0)
      Downloading MarkupSafe-1.0.tar.gz
    Collecting six>=1.7.0 (from retrying==1.3.3->cloudify==3.4.0)
      Downloading six-1.11.0-py2.py3-none-any.whl
    Requirement already satisfied: wheel>=0.24.0 in ./dcaeinstall/lib/python2.7/site-packages (from wagon==0.3.2->cloudify==3.4.0)
    Collecting virtualenv>=12.1 (from wagon==0.3.2->cloudify==3.4.0)
      Downloading virtualenv-15.1.0-py2.py3-none-any.whl (1.8MB)
    Collecting click==4.0 (from wagon==0.3.2->cloudify==3.4.0)
      Downloading click-4.0-py2.py3-none-any.whl (62kB)
    Collecting pycrypto!=2.4,>=2.1 (from paramiko<1.13,>=1.10->fabric==1.8.3->cloudify==3.4.0)
      Downloading pycrypto-2.6.1.tar.gz (446kB)
    Collecting ecdsa (from paramiko<1.13,>=1.10->fabric==1.8.3->cloudify==3.4.0)
      Downloading ecdsa-0.13-py2.py3-none-any.whl (86kB)
    Building wheels for collected packages: cloudify, cloudify-plugins-common, cloudify-rest-client, cloudify-dsl-parser, cloudify-script-plugin, pyyaml, PrettyTable, colorama, jinja2, itsdangerous, retrying, wagon, pika, networkx, proxy-tools, bottle, paramiko, markupsafe, pycrypto
      Running setup.py bdist_wheel for cloudify: started
      Running setup.py bdist_wheel for cloudify: finished with status 'done'
      Stored in directory: /opt/app/installer/.cache/pip/wheels/f2/c3/82/67178b6763f55a90e44ab2275208275a5a17a67bc79f9db0b7
      Running setup.py bdist_wheel for cloudify-plugins-common: started
      Running setup.py bdist_wheel for cloudify-plugins-common: finished with status 'done'
      Stored in directory: /opt/app/installer/.cache/pip/wheels/00/28/99/38e5cd3877708a00e49a462159693320f11a16336a523c363c
      Running setup.py bdist_wheel for cloudify-rest-client: started
      Running setup.py bdist_wheel for cloudify-rest-client: finished with status 'done'
      Stored in directory: /opt/app/installer/.cache/pip/wheels/86/21/96/7090ccf2eb840d5b59f8d87eeb15c5177f6fc4efaaf3376cfb
      Running setup.py bdist_wheel for cloudify-dsl-parser: started
      Running setup.py bdist_wheel for cloudify-dsl-parser: finished with status 'done'
      Stored in directory: /opt/app/installer/.cache/pip/wheels/60/98/2c/6ddda245951daf800173aa74c2ed0f579515eedf88c4b81f10
      Running setup.py bdist_wheel for cloudify-script-plugin: started
      Running setup.py bdist_wheel for cloudify-script-plugin: finished with status 'done'
      Stored in directory: /opt/app/installer/.cache/pip/wheels/84/0d/cf/561f77378a6491dd737e0b21e3661f5b978b58282cae1c83df
      Running setup.py bdist_wheel for pyyaml: started
      Running setup.py bdist_wheel for pyyaml: finished with status 'done'
      Stored in directory: /opt/app/installer/.cache/pip/wheels/cc/2a/d6/5a7108e2281e4c783740d79c40eac3ebc2d4157b1c7e4f17ef
      Running setup.py bdist_wheel for PrettyTable: started
      Running setup.py bdist_wheel for PrettyTable: finished with status 'done'
      Stored in directory: /opt/app/installer/.cache/pip/wheels/b6/90/7b/1c22b89217d0eba6d5f406e562365ebee804f0d4595b2bdbcd
      Running setup.py bdist_wheel for colorama: started
      Running setup.py bdist_wheel for colorama: finished with status 'done'
      Stored in directory: /opt/app/installer/.cache/pip/wheels/21/c5/cf/63fb92293f3ad402644ccaf882903cacdb8fe87c80b62c84df
      Running setup.py bdist_wheel for jinja2: started
      Running setup.py bdist_wheel for jinja2: finished with status 'done'
      Stored in directory: /opt/app/installer/.cache/pip/wheels/1f/e8/83/446db446804a75b7ac97bcece9a72325ee13e11f89478ead03
      Running setup.py bdist_wheel for itsdangerous: started
      Running setup.py bdist_wheel for itsdangerous: finished with status 'done'
      Stored in directory: /opt/app/installer/.cache/pip/wheels/fc/a8/66/24d655233c757e178d45dea2de22a04c6d92766abfb741129a
      Running setup.py bdist_wheel for retrying: started
      Running setup.py bdist_wheel for retrying: finished with status 'done'
      Stored in directory: /opt/app/installer/.cache/pip/wheels/d9/08/aa/49f7c109140006ea08a7657640aee3feafb65005bcd5280679
      Running setup.py bdist_wheel for wagon: started
      Running setup.py bdist_wheel for wagon: finished with status 'done'
      Stored in directory: /opt/app/installer/.cache/pip/wheels/60/c9/56/5bb85a3cc242525888a4a77165a6c1a99a0fb50b13ece972d6
      Running setup.py bdist_wheel for pika: started
      Running setup.py bdist_wheel for pika: finished with status 'done'
      Stored in directory: /opt/app/installer/.cache/pip/wheels/1f/30/61/abd15514f79d65426bfb7df4912228bed212304cc040bd33da
      Running setup.py bdist_wheel for networkx: started
      Running setup.py bdist_wheel for networkx: finished with status 'done'
      Stored in directory: /opt/app/installer/.cache/pip/wheels/66/dc/f0/3d55dcd87c83d2826aa023eaa47f442195ddf3a08275129907
      Running setup.py bdist_wheel for proxy-tools: started
      Running setup.py bdist_wheel for proxy-tools: finished with status 'done'
      Stored in directory: /opt/app/installer/.cache/pip/wheels/05/35/e8/91e7c22ff1016128555386c1bff3c9234e45ca298859946920
      Running setup.py bdist_wheel for bottle: started
      Running setup.py bdist_wheel for bottle: finished with status 'done'
      Stored in directory: /opt/app/installer/.cache/pip/wheels/a8/0e/e5/14a22cb2840d2cf25da1d5ab58d5c4e803dd7cf68f53fde1f6
      Running setup.py bdist_wheel for paramiko: started
      Running setup.py bdist_wheel for paramiko: finished with status 'done'
      Stored in directory: /opt/app/installer/.cache/pip/wheels/04/cc/a8/9855f412b03e47f02b0a9ac3a7ada561cbd69e925467ad1857
      Running setup.py bdist_wheel for markupsafe: started
      Running setup.py bdist_wheel for markupsafe: finished with status 'done'
      Stored in directory: /opt/app/installer/.cache/pip/wheels/88/a7/30/e39a54a87bcbe25308fa3ca64e8ddc75d9b3e5afa21ee32d57
      Running setup.py bdist_wheel for pycrypto: started
      Running setup.py bdist_wheel for pycrypto: finished with status 'done'
      Stored in directory: /opt/app/installer/.cache/pip/wheels/80/1f/94/f76e9746864f198eb0e304aeec319159fa41b082f61281ffce
    Successfully built cloudify cloudify-plugins-common cloudify-rest-client cloudify-dsl-parser cloudify-script-plugin pyyaml PrettyTable colorama jinja2 itsdangerous retrying wagon pika networkx proxy-tools bottle paramiko markupsafe pycrypto
    Installing collected packages: requests, requests-toolbelt, cloudify-rest-client, pika, networkx, proxy-tools, bottle, markupsafe, jinja2, cloudify-plugins-common, pyyaml, six, retrying, cloudify-dsl-parser, cloudify-script-plugin, argcomplete, pycrypto, ecdsa, paramiko, fabric, PrettyTable, colorama, itsdangerous, virtualenv, click, wagon, cloudify
    Successfully installed PrettyTable-0.7.2 argcomplete-1.1.0 bottle-0.12.7 click-4.0 cloudify-3.4 cloudify-dsl-parser-3.4 cloudify-plugins-common-3.4 cloudify-rest-client-3.4 cloudify-script-plugin-1.4 colorama-0.3.3 ecdsa-0.13 fabric-1.8.3 itsdangerous-0.24 jinja2-2.7.2 markupsafe-1.0 networkx-1.8.1 paramiko-1.12.4 pika-0.9.14 proxy-tools-0.1.0 pycrypto-2.6.1 pyyaml-3.10 requests-2.7.0 requests-toolbelt-0.8.0 retrying-1.3.3 six-1.11.0 virtualenv-15.1.0 wagon-0.3.2
    + wget -qO- https://github.com/cloudify-cosmo/cloudify-openstack-plugin/archive/1.4.zip
    + pip install openstack.zip
    Processing ./openstack.zip
    Requirement already satisfied: cloudify-plugins-common>=3.3.1 in ./dcaeinstall/lib/python2.7/site-packages (from cloudify-openstack-plugin==1.4)
    Collecting python-novaclient==2.26.0 (from cloudify-openstack-plugin==1.4)
      Downloading python_novaclient-2.26.0-py2.py3-none-any.whl (295kB)
    Collecting python-keystoneclient==1.6.0 (from cloudify-openstack-plugin==1.4)
      Downloading python_keystoneclient-1.6.0-py2.py3-none-any.whl (418kB)
    Collecting python-neutronclient==2.6.0 (from cloudify-openstack-plugin==1.4)
      Downloading python_neutronclient-2.6.0-py2.py3-none-any.whl (217kB)
    Collecting python-cinderclient==1.2.2 (from cloudify-openstack-plugin==1.4)
      Downloading python_cinderclient-1.2.2-py2.py3-none-any.whl (225kB)
    Collecting IPy==0.81 (from cloudify-openstack-plugin==1.4)
      Downloading IPy-0.81.tar.gz
    Requirement already satisfied: proxy-tools==0.1.0 in ./dcaeinstall/lib/python2.7/site-packages (from cloudify-plugins-common>=3.3.1->cloudify-openstack-plugin==1.4)
    Requirement already satisfied: cloudify-rest-client==3.4 in ./dcaeinstall/lib/python2.7/site-packages (from cloudify-plugins-common>=3.3.1->cloudify-openstack-plugin==1.4)
    Requirement already satisfied: bottle==0.12.7 in ./dcaeinstall/lib/python2.7/site-packages (from cloudify-plugins-common>=3.3.1->cloudify-openstack-plugin==1.4)
    Requirement already satisfied: pika==0.9.14 in ./dcaeinstall/lib/python2.7/site-packages (from cloudify-plugins-common>=3.3.1->cloudify-openstack-plugin==1.4)
    Requirement already satisfied: networkx==1.8.1 in ./dcaeinstall/lib/python2.7/site-packages (from cloudify-plugins-common>=3.3.1->cloudify-openstack-plugin==1.4)
    Requirement already satisfied: jinja2==2.7.2 in ./dcaeinstall/lib/python2.7/site-packages (from cloudify-plugins-common>=3.3.1->cloudify-openstack-plugin==1.4)
    Collecting pbr<2.0,>=0.11 (from python-novaclient==2.26.0->cloudify-openstack-plugin==1.4)
      Downloading pbr-1.10.0-py2.py3-none-any.whl (96kB)
    Collecting oslo.serialization>=1.4.0 (from python-novaclient==2.26.0->cloudify-openstack-plugin==1.4)
      Downloading oslo.serialization-2.22.0-py2.py3-none-any.whl
    Collecting oslo.utils>=1.4.0 (from python-novaclient==2.26.0->cloudify-openstack-plugin==1.4)
      Downloading oslo.utils-3.33.0-py2.py3-none-any.whl (90kB)
    Collecting Babel>=1.3 (from python-novaclient==2.26.0->cloudify-openstack-plugin==1.4)
      Downloading Babel-2.5.1-py2.py3-none-any.whl (6.8MB)
    Collecting oslo.i18n>=1.5.0 (from python-novaclient==2.26.0->cloudify-openstack-plugin==1.4)
      Downloading oslo.i18n-3.19.0-py2.py3-none-any.whl (42kB)
    Collecting iso8601>=0.1.9 (from python-novaclient==2.26.0->cloudify-openstack-plugin==1.4)
      Downloading iso8601-0.1.12-py2.py3-none-any.whl
    Requirement already satisfied: PrettyTable<0.8,>=0.7 in ./dcaeinstall/lib/python2.7/site-packages (from python-novaclient==2.26.0->cloudify-openstack-plugin==1.4)
    Requirement already satisfied: requests>=2.5.2 in ./dcaeinstall/lib/python2.7/site-packages (from python-novaclient==2.26.0->cloudify-openstack-plugin==1.4)
    Requirement already satisfied: argparse in /usr/lib/python2.7 (from python-novaclient==2.26.0->cloudify-openstack-plugin==1.4)
    Requirement already satisfied: six>=1.9.0 in ./dcaeinstall/lib/python2.7/site-packages (from python-novaclient==2.26.0->cloudify-openstack-plugin==1.4)
    Collecting simplejson>=2.2.0 (from python-novaclient==2.26.0->cloudify-openstack-plugin==1.4)
      Downloading simplejson-3.13.2.tar.gz (79kB)
    Collecting netaddr>=0.7.12 (from python-keystoneclient==1.6.0->cloudify-openstack-plugin==1.4)
      Downloading netaddr-0.7.19-py2.py3-none-any.whl (1.6MB)
    Collecting stevedore>=1.3.0 (from python-keystoneclient==1.6.0->cloudify-openstack-plugin==1.4)
      Downloading stevedore-1.28.0-py2.py3-none-any.whl
    Collecting oslo.config>=1.11.0 (from python-keystoneclient==1.6.0->cloudify-openstack-plugin==1.4)
      Downloading oslo.config-5.1.0-py2.py3-none-any.whl (109kB)
    Collecting cliff>=1.10.0 (from python-neutronclient==2.6.0->cloudify-openstack-plugin==1.4)
      Downloading cliff-2.9.1-py2-none-any.whl (69kB)
    Requirement already satisfied: requests-toolbelt in ./dcaeinstall/lib/python2.7/site-packages (from cloudify-rest-client==3.4->cloudify-plugins-common>=3.3.1->cloudify-openstack-plugin==1.4)
    Requirement already satisfied: markupsafe in ./dcaeinstall/lib/python2.7/site-packages (from jinja2==2.7.2->cloudify-plugins-common>=3.3.1->cloudify-openstack-plugin==1.4)
    Collecting msgpack-python>=0.4.0 (from oslo.serialization>=1.4.0->python-novaclient==2.26.0->cloudify-openstack-plugin==1.4)
      Downloading msgpack-python-0.4.8.tar.gz (113kB)
    Collecting pytz>=2013.6 (from oslo.serialization>=1.4.0->python-novaclient==2.26.0->cloudify-openstack-plugin==1.4)
      Downloading pytz-2017.3-py2.py3-none-any.whl (511kB)
    Collecting monotonic>=0.6 (from oslo.utils>=1.4.0->python-novaclient==2.26.0->cloudify-openstack-plugin==1.4)
      Downloading monotonic-1.4-py2.py3-none-any.whl
    Collecting funcsigs>=1.0.0; python_version == "2.7" or python_version == "2.6" (from oslo.utils>=1.4.0->python-novaclient==2.26.0->cloudify-openstack-plugin==1.4)
      Downloading funcsigs-1.0.2-py2.py3-none-any.whl
    Collecting netifaces>=0.10.4 (from oslo.utils>=1.4.0->python-novaclient==2.26.0->cloudify-openstack-plugin==1.4)
      Downloading netifaces-0.10.6.tar.gz
    Collecting pyparsing>=2.1.0 (from oslo.utils>=1.4.0->python-novaclient==2.26.0->cloudify-openstack-plugin==1.4)
      Downloading pyparsing-2.2.0-py2.py3-none-any.whl (56kB)
    Collecting debtcollector>=1.2.0 (from oslo.utils>=1.4.0->python-novaclient==2.26.0->cloudify-openstack-plugin==1.4)
      Downloading debtcollector-1.19.0-py2.py3-none-any.whl
    Collecting rfc3986>=0.3.1 (from oslo.config>=1.11.0->python-keystoneclient==1.6.0->cloudify-openstack-plugin==1.4)
      Downloading rfc3986-1.1.0-py2.py3-none-any.whl
    Requirement already satisfied: PyYAML>=3.10 in ./dcaeinstall/lib/python2.7/site-packages (from oslo.config>=1.11.0->python-keystoneclient==1.6.0->cloudify-openstack-plugin==1.4)
    Collecting unicodecsv>=0.8.0; python_version < "3.0" (from cliff>=1.10.0->python-neutronclient==2.6.0->cloudify-openstack-plugin==1.4)
      Downloading unicodecsv-0.14.1.tar.gz
    Collecting cmd2>=0.6.7 (from cliff>=1.10.0->python-neutronclient==2.6.0->cloudify-openstack-plugin==1.4)
      Downloading cmd2-0.7.8.tar.gz (71kB)
    Collecting wrapt>=1.7.0 (from debtcollector>=1.2.0->oslo.utils>=1.4.0->python-novaclient==2.26.0->cloudify-openstack-plugin==1.4)
      Downloading wrapt-1.10.11.tar.gz
    Collecting pyperclip (from cmd2>=0.6.7->cliff>=1.10.0->python-neutronclient==2.6.0->cloudify-openstack-plugin==1.4)
      Downloading pyperclip-1.6.0.tar.gz
    Building wheels for collected packages: IPy, simplejson, msgpack-python, netifaces, unicodecsv, cmd2, wrapt, pyperclip
      Running setup.py bdist_wheel for IPy: started
      Running setup.py bdist_wheel for IPy: finished with status 'done'
      Stored in directory: /opt/app/installer/.cache/pip/wheels/40/aa/69/f958b958aa39158a9f76fc8f770614c34ab854b9684f638134
      Running setup.py bdist_wheel for simplejson: started
      Running setup.py bdist_wheel for simplejson: finished with status 'done'
      Stored in directory: /opt/app/installer/.cache/pip/wheels/c2/d0/42/5d1d1290c19d999277582c585f80426c61987aff01eb104ed6
      Running setup.py bdist_wheel for msgpack-python: started
      Running setup.py bdist_wheel for msgpack-python: finished with status 'done'
      Stored in directory: /opt/app/installer/.cache/pip/wheels/2c/e7/e7/9031652a69d594665c5ca25e41d0fb3faa024e730b590e4402
      Running setup.py bdist_wheel for netifaces: started
      Running setup.py bdist_wheel for netifaces: finished with status 'done'
      Stored in directory: /opt/app/installer/.cache/pip/wheels/28/e1/08/e66a4f207479500a27eae682a4773fa00605f2c5d953257824
      Running setup.py bdist_wheel for unicodecsv: started
      Running setup.py bdist_wheel for unicodecsv: finished with status 'done'
      Stored in directory: /opt/app/installer/.cache/pip/wheels/97/e2/16/219fa93b83edaff912b6805cfa19d0597e21f8d353f3e2d22f
      Running setup.py bdist_wheel for cmd2: started
      Running setup.py bdist_wheel for cmd2: finished with status 'done'
      Stored in directory: /opt/app/installer/.cache/pip/wheels/ec/23/e4/0b231234e6c498de81347e4d2044498263fa544d1253b9fb43
      Running setup.py bdist_wheel for wrapt: started
      Running setup.py bdist_wheel for wrapt: finished with status 'done'
      Stored in directory: /opt/app/installer/.cache/pip/wheels/56/e1/0f/f7ccf1ed8ceaabccc2a93ce0481f73e589814cbbc439291345
      Running setup.py bdist_wheel for pyperclip: started
      Running setup.py bdist_wheel for pyperclip: finished with status 'done'
      Stored in directory: /opt/app/installer/.cache/pip/wheels/a9/22/c3/8116911c3273f6aa0a90ce69c44fb8a6a0e139d79aeda5a73e
    Successfully built IPy simplejson msgpack-python netifaces unicodecsv cmd2 wrapt pyperclip
    Installing collected packages: pbr, monotonic, netaddr, funcsigs, netifaces, pytz, Babel, oslo.i18n, pyparsing, iso8601, wrapt, debtcollector, oslo.utils, msgpack-python, oslo.serialization, stevedore, rfc3986, oslo.config, python-keystoneclient, simplejson, python-novaclient, unicodecsv, pyperclip, cmd2, cliff, python-neutronclient, python-cinderclient, IPy, cloudify-openstack-plugin
      Running setup.py install for cloudify-openstack-plugin: started
        Running setup.py install for cloudify-openstack-plugin: finished with status 'done'
    Successfully installed Babel-2.5.1 IPy-0.81 cliff-2.9.1 cloudify-openstack-plugin-1.4 cmd2-0.7.8 debtcollector-1.19.0 funcsigs-1.0.2 iso8601-0.1.12 monotonic-1.4 msgpack-python-0.4.8 netaddr-0.7.19 netifaces-0.10.6 oslo.config-5.1.0 oslo.i18n-3.19.0 oslo.serialization-2.22.0 oslo.utils-3.33.0 pbr-1.10.0 pyparsing-2.2.0 pyperclip-1.6.0 python-cinderclient-1.2.2 python-keystoneclient-1.6.0 python-neutronclient-2.6.0 python-novaclient-2.26.0 pytz-2017.3 rfc3986-1.1.0 simplejson-3.13.2 stevedore-1.28.0 unicodecsv-0.14.1 wrapt-1.10.11
    + mkdir types
    + wget -qO- https://nexus.onap.org/service/local/repositories/raw/content/org.onap.ccsdk.platform.plugins/type_files/dnsdesig/dns_types.yaml
    + wget -qO- https://nexus.onap.org/service/local/repositories/raw/content/org.onap.ccsdk.platform.plugins/type_files/sshkeyshare/sshkey_types.yaml
    + wget -O dnsdesig.wgn https://nexus.onap.org/service/local/repositories/raw/content/org.onap.ccsdk.platform.plugins/plugins/dnsdesig-1.0.0-py27-none-any.wgn
    --2018-01-04 19:11:01--  https://nexus.onap.org/service/local/repositories/raw/content/org.onap.ccsdk.platform.plugins/plugins/dnsdesig-1.0.0-py27-none-any.wgn
    Resolving nexus.onap.org (nexus.onap.org)... 199.204.45.137, 2604:e100:1:0:f816:3eff:fefb:56ed
    Connecting to nexus.onap.org (nexus.onap.org)|199.204.45.137|:443... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 4269 (4.2K) [application/octet-stream]
    Saving to: 'dnsdesig.wgn'
    
         0K ....                                                  100%  507M=0s
    
    2018-01-04 19:11:01 (507 MB/s) - 'dnsdesig.wgn' saved [4269/4269]
    
    + wget -O sshkeyshare.wgn https://nexus.onap.org/service/local/repositories/raw/content/org.onap.ccsdk.platform.plugins/plugins/sshkeyshare-1.0.0-py27-none-any.wgn
    --2018-01-04 19:11:01--  https://nexus.onap.org/service/local/repositories/raw/content/org.onap.ccsdk.platform.plugins/plugins/sshkeyshare-1.0.0-py27-none-any.wgn
    Resolving nexus.onap.org (nexus.onap.org)... 199.204.45.137, 2604:e100:1:0:f816:3eff:fefb:56ed
    Connecting to nexus.onap.org (nexus.onap.org)|199.204.45.137|:443... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 3287 (3.2K) [application/octet-stream]
    Saving to: 'sshkeyshare.wgn'
    
         0K ...                                                   100%  438M=0s
    
    2018-01-04 19:11:01 (438 MB/s) - 'sshkeyshare.wgn' saved [3287/3287]
    
    + wagon install -s dnsdesig.wgn
    INFO - Installing dnsdesig.wgn
    INFO - Installing dnsdesig...
    INFO - Installing within current virtualenv: True...
    + wagon install -s sshkeyshare.wgn
    INFO - Installing sshkeyshare.wgn
    INFO - Installing sshkeyshare...
    INFO - Installing within current virtualenv: True...
    + sed -e 's#key_filename:.*#key_filename: ./key600#'
    + set +e
    + wget -O /tmp/centos_vm.yaml https://nexus.onap.org/service/local/repositories/raw/content/org.onap.dcaegen2.platform.blueprints/releases/blueprints/centos_vm.yaml
    --2018-01-04 19:11:02--  https://nexus.onap.org/service/local/repositories/raw/content/org.onap.dcaegen2.platform.blueprints/releases/blueprints/centos_vm.yaml
    Resolving nexus.onap.org (nexus.onap.org)... 199.204.45.137, 2604:e100:1:0:f816:3eff:fefb:56ed
    Connecting to nexus.onap.org (nexus.onap.org)|199.204.45.137|:443... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 7653 (7.5K) [text/x-yaml]
    Saving to: '/tmp/centos_vm.yaml'
    
         0K .......                                               100%  197M=0s
    
    2018-01-04 19:11:03 (197 MB/s) - '/tmp/centos_vm.yaml' saved [7653/7653]
    
    + mv -f /tmp/centos_vm.yaml ./blueprints/
    Succeeded in getting the newest centos_vm.yaml
    + echo 'Succeeded in getting the newest centos_vm.yaml'
    + set -e
    + cfy local init --install-plugins -p ./blueprints/centos_vm.yaml -i /tmp/local_inputs -i datacenter=MbOr
    Collecting https://github.com/cloudify-cosmo/cloudify-openstack-plugin/archive/1.4.zip (from -r /tmp/requirements_2Dnd1m.txt (line 1))
      Downloading https://github.com/cloudify-cosmo/cloudify-openstack-plugin/archive/1.4.zip (85kB)
      Requirement already satisfied (use --upgrade to upgrade): cloudify-openstack-plugin==1.4 from https://github.com/cloudify-cosmo/cloudify-openstack-plugin/archive/1.4.zip in ./dcaeinstall/lib/python2.7/site-packages (from -r /tmp/requirements_2Dnd1m.txt (line 1))
    Requirement already satisfied: cloudify-plugins-common>=3.3.1 in ./dcaeinstall/lib/python2.7/site-packages (from cloudify-openstack-plugin==1.4->-r /tmp/requirements_2Dnd1m.txt (line 1))
    Requirement already satisfied: python-novaclient==2.26.0 in ./dcaeinstall/lib/python2.7/site-packages (from cloudify-openstack-plugin==1.4->-r /tmp/requirements_2Dnd1m.txt (line 1))
    Requirement already satisfied: python-keystoneclient==1.6.0 in ./dcaeinstall/lib/python2.7/site-packages (from cloudify-openstack-plugin==1.4->-r /tmp/requirements_2Dnd1m.txt (line 1))
    Requirement already satisfied: python-neutronclient==2.6.0 in ./dcaeinstall/lib/python2.7/site-packages (from cloudify-openstack-plugin==1.4->-r /tmp/requirements_2Dnd1m.txt (line 1))
    Requirement already satisfied: python-cinderclient==1.2.2 in ./dcaeinstall/lib/python2.7/site-packages (from cloudify-openstack-plugin==1.4->-r /tmp/requirements_2Dnd1m.txt (line 1))
    Requirement already satisfied: IPy==0.81 in ./dcaeinstall/lib/python2.7/site-packages (from cloudify-openstack-plugin==1.4->-r /tmp/requirements_2Dnd1m.txt (line 1))
    Requirement already satisfied: proxy-tools==0.1.0 in ./dcaeinstall/lib/python2.7/site-packages (from cloudify-plugins-common>=3.3.1->cloudify-openstack-plugin==1.4->-r /tmp/requirements_2Dnd1m.txt (line 1))
    Requirement already satisfied: cloudify-rest-client==3.4 in ./dcaeinstall/lib/python2.7/site-packages (from cloudify-plugins-common>=3.3.1->cloudify-openstack-plugin==1.4->-r /tmp/requirements_2Dnd1m.txt (line 1))
    Requirement already satisfied: bottle==0.12.7 in ./dcaeinstall/lib/python2.7/site-packages (from cloudify-plugins-common>=3.3.1->cloudify-openstack-plugin==1.4->-r /tmp/requirements_2Dnd1m.txt (line 1))
    Requirement already satisfied: pika==0.9.14 in ./dcaeinstall/lib/python2.7/site-packages (from cloudify-plugins-common>=3.3.1->cloudify-openstack-plugin==1.4->-r /tmp/requirements_2Dnd1m.txt (line 1))
    Requirement already satisfied: networkx==1.8.1 in ./dcaeinstall/lib/python2.7/site-packages (from cloudify-plugins-common>=3.3.1->cloudify-openstack-plugin==1.4->-r /tmp/requirements_2Dnd1m.txt (line 1))
    Requirement already satisfied: jinja2==2.7.2 in ./dcaeinstall/lib/python2.7/site-packages (from cloudify-plugins-common>=3.3.1->cloudify-openstack-plugin==1.4->-r /tmp/requirements_2Dnd1m.txt (line 1))
    Requirement already satisfied: pbr<2.0,>=0.11 in ./dcaeinstall/lib/python2.7/site-packages (from python-novaclient==2.26.0->cloudify-openstack-plugin==1.4->-r /tmp/requirements_2Dnd1m.txt (line 1))
    Requirement already satisfied: oslo.serialization>=1.4.0 in ./dcaeinstall/lib/python2.7/site-packages (from python-novaclient==2.26.0->cloudify-openstack-plugin==1.4->-r /tmp/requirements_2Dnd1m.txt (line 1))
    Requirement already satisfied: oslo.utils>=1.4.0 in ./dcaeinstall/lib/python2.7/site-packages (from python-novaclient==2.26.0->cloudify-openstack-plugin==1.4->-r /tmp/requirements_2Dnd1m.txt (line 1))
    Requirement already satisfied: Babel>=1.3 in ./dcaeinstall/lib/python2.7/site-packages (from python-novaclient==2.26.0->cloudify-openstack-plugin==1.4->-r /tmp/requirements_2Dnd1m.txt (line 1))
    Requirement already satisfied: oslo.i18n>=1.5.0 in ./dcaeinstall/lib/python2.7/site-packages (from python-novaclient==2.26.0->cloudify-openstack-plugin==1.4->-r /tmp/requirements_2Dnd1m.txt (line 1))
    Requirement already satisfied: iso8601>=0.1.9 in ./dcaeinstall/lib/python2.7/site-packages (from python-novaclient==2.26.0->cloudify-openstack-plugin==1.4->-r /tmp/requirements_2Dnd1m.txt (line 1))
    Requirement already satisfied: PrettyTable<0.8,>=0.7 in ./dcaeinstall/lib/python2.7/site-packages (from python-novaclient==2.26.0->cloudify-openstack-plugin==1.4->-r /tmp/requirements_2Dnd1m.txt (line 1))
    Requirement already satisfied: requests>=2.5.2 in ./dcaeinstall/lib/python2.7/site-packages (from python-novaclient==2.26.0->cloudify-openstack-plugin==1.4->-r /tmp/requirements_2Dnd1m.txt (line 1))
    Requirement already satisfied: argparse in /usr/lib/python2.7 (from python-novaclient==2.26.0->cloudify-openstack-plugin==1.4->-r /tmp/requirements_2Dnd1m.txt (line 1))
    Requirement already satisfied: six>=1.9.0 in ./dcaeinstall/lib/python2.7/site-packages (from python-novaclient==2.26.0->cloudify-openstack-plugin==1.4->-r /tmp/requirements_2Dnd1m.txt (line 1))
    Requirement already satisfied: simplejson>=2.2.0 in ./dcaeinstall/lib/python2.7/site-packages (from python-novaclient==2.26.0->cloudify-openstack-plugin==1.4->-r /tmp/requirements_2Dnd1m.txt (line 1))
    Requirement already satisfied: netaddr>=0.7.12 in ./dcaeinstall/lib/python2.7/site-packages (from python-keystoneclient==1.6.0->cloudify-openstack-plugin==1.4->-r /tmp/requirements_2Dnd1m.txt (line 1))
    Requirement already satisfied: stevedore>=1.3.0 in ./dcaeinstall/lib/python2.7/site-packages (from python-keystoneclient==1.6.0->cloudify-openstack-plugin==1.4->-r /tmp/requirements_2Dnd1m.txt (line 1))
    Requirement already satisfied: oslo.config>=1.11.0 in ./dcaeinstall/lib/python2.7/site-packages (from python-keystoneclient==1.6.0->cloudify-openstack-plugin==1.4->-r /tmp/requirements_2Dnd1m.txt (line 1))
    Requirement already satisfied: cliff>=1.10.0 in ./dcaeinstall/lib/python2.7/site-packages (from python-neutronclient==2.6.0->cloudify-openstack-plugin==1.4->-r /tmp/requirements_2Dnd1m.txt (line 1))
    Requirement already satisfied: requests-toolbelt in ./dcaeinstall/lib/python2.7/site-packages (from cloudify-rest-client==3.4->cloudify-plugins-common>=3.3.1->cloudify-openstack-plugin==1.4->-r /tmp/requirements_2Dnd1m.txt (line 1))
    Requirement already satisfied: markupsafe in ./dcaeinstall/lib/python2.7/site-packages (from jinja2==2.7.2->cloudify-plugins-common>=3.3.1->cloudify-openstack-plugin==1.4->-r /tmp/requirements_2Dnd1m.txt (line 1))
    Requirement already satisfied: msgpack-python>=0.4.0 in ./dcaeinstall/lib/python2.7/site-packages (from oslo.serialization>=1.4.0->python-novaclient==2.26.0->cloudify-openstack-plugin==1.4->-r /tmp/requirements_2Dnd1m.txt (line 1))
    Requirement already satisfied: pytz>=2013.6 in ./dcaeinstall/lib/python2.7/site-packages (from oslo.serialization>=1.4.0->python-novaclient==2.26.0->cloudify-openstack-plugin==1.4->-r /tmp/requirements_2Dnd1m.txt (line 1))
    Requirement already satisfied: monotonic>=0.6 in ./dcaeinstall/lib/python2.7/site-packages (from oslo.utils>=1.4.0->python-novaclient==2.26.0->cloudify-openstack-plugin==1.4->-r /tmp/requirements_2Dnd1m.txt (line 1))
    Requirement already satisfied: funcsigs>=1.0.0; python_version == "2.7" or python_version == "2.6" in ./dcaeinstall/lib/python2.7/site-packages (from oslo.utils>=1.4.0->python-novaclient==2.26.0->cloudify-openstack-plugin==1.4->-r /tmp/requirements_2Dnd1m.txt (line 1))
    Requirement already satisfied: netifaces>=0.10.4 in ./dcaeinstall/lib/python2.7/site-packages (from oslo.utils>=1.4.0->python-novaclient==2.26.0->cloudify-openstack-plugin==1.4->-r /tmp/requirements_2Dnd1m.txt (line 1))
    Requirement already satisfied: pyparsing>=2.1.0 in ./dcaeinstall/lib/python2.7/site-packages (from oslo.utils>=1.4.0->python-novaclient==2.26.0->cloudify-openstack-plugin==1.4->-r /tmp/requirements_2Dnd1m.txt (line 1))
    Requirement already satisfied: debtcollector>=1.2.0 in ./dcaeinstall/lib/python2.7/site-packages (from oslo.utils>=1.4.0->python-novaclient==2.26.0->cloudify-openstack-plugin==1.4->-r /tmp/requirements_2Dnd1m.txt (line 1))
    Requirement already satisfied: rfc3986>=0.3.1 in ./dcaeinstall/lib/python2.7/site-packages (from oslo.config>=1.11.0->python-keystoneclient==1.6.0->cloudify-openstack-plugin==1.4->-r /tmp/requirements_2Dnd1m.txt (line 1))
    Requirement already satisfied: PyYAML>=3.10 in ./dcaeinstall/lib/python2.7/site-packages (from oslo.config>=1.11.0->python-keystoneclient==1.6.0->cloudify-openstack-plugin==1.4->-r /tmp/requirements_2Dnd1m.txt (line 1))
    Requirement already satisfied: unicodecsv>=0.8.0; python_version < "3.0" in ./dcaeinstall/lib/python2.7/site-packages (from cliff>=1.10.0->python-neutronclient==2.6.0->cloudify-openstack-plugin==1.4->-r /tmp/requirements_2Dnd1m.txt (line 1))
    Requirement already satisfied: cmd2>=0.6.7 in ./dcaeinstall/lib/python2.7/site-packages (from cliff>=1.10.0->python-neutronclient==2.6.0->cloudify-openstack-plugin==1.4->-r /tmp/requirements_2Dnd1m.txt (line 1))
    Requirement already satisfied: wrapt>=1.7.0 in ./dcaeinstall/lib/python2.7/site-packages (from debtcollector>=1.2.0->oslo.utils>=1.4.0->python-novaclient==2.26.0->cloudify-openstack-plugin==1.4->-r /tmp/requirements_2Dnd1m.txt (line 1))
    Requirement already satisfied: pyperclip in ./dcaeinstall/lib/python2.7/site-packages (from cmd2>=0.6.7->cliff>=1.10.0->python-neutronclient==2.6.0->cloudify-openstack-plugin==1.4->-r /tmp/requirements_2Dnd1m.txt (line 1))
    Processing inputs source: /tmp/local_inputs
    Processing inputs source: datacenter=MbOr
    Initiated ./blueprints/centos_vm.yaml
    If you make changes to the blueprint, run `cfy local init -p ./blueprints/centos_vm.yaml` again to apply them
    + cfy local execute -w install --task-retries=10
    2018-01-04 19:11:12 CFY <local> Starting 'install' workflow execution
    2018-01-04 19:11:12 CFY <local> [floatingip_vm00_6c172] Creating node
    2018-01-04 19:11:12 CFY <local> [security_group_5e486] Creating node
    2018-01-04 19:11:12 CFY <local> [private_net_dfbf6] Creating node
    2018-01-04 19:11:12 CFY <local> [key_pair_5e616] Creating node
    2018-01-04 19:11:13 CFY <local> [private_net_dfbf6.create] Sending task 'neutron_plugin.network.create'
    2018-01-04 19:11:13 CFY <local> [floatingip_vm00_6c172.create] Sending task 'neutron_plugin.floatingip.create'
    2018-01-04 19:11:13 CFY <local> [key_pair_5e616.create] Sending task 'nova_plugin.keypair.create'
    2018-01-04 19:11:13 CFY <local> [security_group_5e486.create] Sending task 'neutron_plugin.security_group.create'
    2018-01-04 19:11:13 CFY <local> [private_net_dfbf6.create] Task started 'neutron_plugin.network.create'
    2018-01-04 19:11:13 LOG <local> [private_net_dfbf6.create] INFO: Using external resource network: oam_onap_MbOr
    2018-01-04 19:11:13 CFY <local> [private_net_dfbf6.create] Task succeeded 'neutron_plugin.network.create'
    2018-01-04 19:11:13 CFY <local> [floatingip_vm00_6c172.create] Task started 'neutron_plugin.floatingip.create'
    2018-01-04 19:11:15 LOG <local> [floatingip_vm00_6c172.create] INFO: Floating IP creation response: {u'router_id': None, u'status': u'DOWN', u'description': u'', u'tags': [], u'dns_name': u'', u'created_at': u'2018-01-04T19:11:14Z', u'updated_at': u'2018-01-04T19:11:14Z', u'dns_domain': u'', u'floating_network_id': u'af6880a2-3173-430a-aaa2-6229df57ee15', u'fixed_ip_address': None, u'floating_ip_address': u'10.195.200.42', u'revision_number': 0, u'tenant_id': u'5c59f02201d54aa89af1f2207f7be2c1', u'project_id': u'5c59f02201d54aa89af1f2207f7be2c1', u'port_id': None, u'id': u'1a6cfd1b-8b1e-41c0-8000-77c267cf59ca'}
    2018-01-04 19:11:15 CFY <local> [floatingip_vm00_6c172.create] Task succeeded 'neutron_plugin.floatingip.create'
    2018-01-04 19:11:15 CFY <local> [key_pair_5e616.create] Task started 'nova_plugin.keypair.create'
    2018-01-04 19:11:15 LOG <local> [key_pair_5e616.create] INFO: Using external resource keypair: onap_key_MbOr
    2018-01-04 19:11:15 CFY <local> [key_pair_5e616.create] Task succeeded 'nova_plugin.keypair.create'
    2018-01-04 19:11:15 CFY <local> [security_group_5e486.create] Task started 'neutron_plugin.security_group.create'
    2018-01-04 19:11:15 LOG <local> [security_group_5e486.create] INFO: Using external resource security_group: onap_sg_MbOr
    2018-01-04 19:11:15 CFY <local> [security_group_5e486.create] Task succeeded 'neutron_plugin.security_group.create'
    2018-01-04 19:11:16 CFY <local> [private_net_dfbf6] Configuring node
    2018-01-04 19:11:16 CFY <local> [key_pair_5e616] Configuring node
    2018-01-04 19:11:16 CFY <local> [floatingip_vm00_6c172] Configuring node
    2018-01-04 19:11:16 CFY <local> [security_group_5e486] Configuring node
    2018-01-04 19:11:16 CFY <local> [private_net_dfbf6] Starting node
    2018-01-04 19:11:16 CFY <local> [key_pair_5e616] Starting node
    2018-01-04 19:11:16 CFY <local> [floatingip_vm00_6c172] Starting node
    2018-01-04 19:11:16 CFY <local> [security_group_5e486] Starting node
    2018-01-04 19:11:17 CFY <local> [fixedip_vm00_bb782] Creating node
    2018-01-04 19:11:17 CFY <local> [dns_cm_108d8] Creating node
    2018-01-04 19:11:17 CFY <local> [dns_vm00_7ecd2] Creating node
    2018-01-04 19:11:17 CFY <local> [dns_cm_108d8.create] Sending task 'dnsdesig.dns_plugin.aneeded'
    2018-01-04 19:11:17 CFY <local> [fixedip_vm00_bb782.create] Sending task 'neutron_plugin.port.create'
    2018-01-04 19:11:17 CFY <local> [dns_vm00_7ecd2.create] Sending task 'dnsdesig.dns_plugin.aneeded'
    2018-01-04 19:11:17 CFY <local> [dns_cm_108d8.create] Task started 'dnsdesig.dns_plugin.aneeded'
    2018-01-04 19:11:17 CFY <local> [dns_cm_108d8.create] Task succeeded 'dnsdesig.dns_plugin.aneeded'
    2018-01-04 19:11:17 CFY <local> [fixedip_vm00_bb782.create] Task started 'neutron_plugin.port.create'
    2018-01-04 19:11:18 CFY <local> [fixedip_vm00_bb782.create] Task succeeded 'neutron_plugin.port.create'
    2018-01-04 19:11:18 CFY <local> [dns_vm00_7ecd2.create] Task started 'dnsdesig.dns_plugin.aneeded'
    2018-01-04 19:11:18 CFY <local> [dns_vm00_7ecd2.create] Task succeeded 'dnsdesig.dns_plugin.aneeded'
    2018-01-04 19:11:18 CFY <local> [fixedip_vm00_bb782] Configuring node
    2018-01-04 19:11:19 CFY <local> [dns_vm00_7ecd2] Configuring node
    2018-01-04 19:11:19 CFY <local> [dns_cm_108d8] Configuring node
    2018-01-04 19:11:19 CFY <local> [fixedip_vm00_bb782] Starting node
    2018-01-04 19:11:19 CFY <local> [dns_cm_108d8] Starting node
    2018-01-04 19:11:19 CFY <local> [dns_vm00_7ecd2] Starting node
    2018-01-04 19:11:20 CFY <local> [dns_cname_be7b3] Creating node
    2018-01-04 19:11:20 CFY <local> [dns_cname_be7b3.create] Sending task 'dnsdesig.dns_plugin.cnameneeded'
    2018-01-04 19:11:20 CFY <local> [host_vm00_c3a93] Creating node
    2018-01-04 19:11:20 CFY <local> [dns_cname_be7b3.create] Task started 'dnsdesig.dns_plugin.cnameneeded'
    2018-01-04 19:11:20 CFY <local> [host_vm00_c3a93.create] Sending task 'nova_plugin.server.create'
    2018-01-04 19:11:20 CFY <local> [dns_cname_be7b3.create] Task succeeded 'dnsdesig.dns_plugin.cnameneeded'
    2018-01-04 19:11:20 CFY <local> [host_vm00_c3a93.create] Task started 'nova_plugin.server.create'
    2018-01-04 19:11:21 LOG <local> [host_vm00_c3a93.create] INFO: Creating VM with parameters: {u'userdata': u'#!/bin/sh\nset -x\nDATACENTER=MbOr\nCONSULVER=0.8.3\nCONSULNAME=consul_${CONSULVER}_linux_amd64\nMYIP=`curl -Ss http://169.254.169.254/2009-04-04/meta-data/local-ipv4`\nMYNAME=`hostname`\nif [ ! -z "$(echo $MYNAME |grep \'.\')" ]; then MYNAME="$(echo $MYNAME | cut -f1 -d \'.\')"; fi\necho >>/etc/hosts\necho $MYIP $MYNAME >>/etc/hosts\nmkdir -p /opt/consul/config /opt/consul/data /opt/consul/bin\nyum install -y unzip\n# Download Consul\ncurl -Ss   https://releases.hashicorp.com/consul/${CONSULVER}/${CONSULNAME}.zip > ${CONSULNAME}.zip\nunzip -d /opt/consul/bin  ${CONSULNAME}.zip\nrm ${CONSULNAME}.zip\nchmod +x  /opt/consul/bin/consul\ncat <<EOF > /opt/consul/config/consul.json\n{\n  "bind_addr" : "0.0.0.0",\n  "client_addr" : "0.0.0.0",\n  "data_dir" : "/opt/consul/data",\n  "datacenter": "$DATACENTER",\n  "rejoin_after_leave": true,\n  "http_api_response_headers": {\n     "Access-Control-Allow-Origin" : "*"\n  },\n  "server": false,\n  "ui": false,\n  "enable_syslog": true,\n  "log_level": "info"\n}\nEOF\ncat <<EOF > /lib/systemd/system/consul.service\n[Unit]\nDescription=Consul\nRequires=network-online.target\nAfter=network.target\n[Service]\nType=simple\nExecStart=/opt/consul/bin/consul agent -config-dir=/opt/consul/config\nExecReload=/bin/kill -HUP \\$MAINPID\n[Install]\nWantedBy=multi-user.target\nEOF\nsystemctl enable consul\nsystemctl start consul\nyum install -y python-psycopg2\n', 'name': u'dcaeorcl00', 'key_name': u'onap_key_MbOr', 'image': u'fa4b9999-b287-455e-8a7c-30aa3578894a', 'meta': {'cloudify_management_network_name': u'oam_onap_MbOr', 'cloudify_management_network_id': u'b4049b01-0581-4d4e-b7c5-6c3e0c956051'}, 'nics': [{'port-id': u'fdaa7318-1607-42a5-89a0-6fc95f9a6b2f'}], 'flavor': u'5'}
    2018-01-04 19:11:22 CFY <local> [host_vm00_c3a93.create] Task succeeded 'nova_plugin.server.create'
    2018-01-04 19:11:22 CFY <local> [dns_cname_be7b3] Configuring node
    2018-01-04 19:11:22 CFY <local> [dns_cname_be7b3] Starting node
    2018-01-04 19:11:23 CFY <local> [host_vm00_c3a93] Configuring node
    2018-01-04 19:11:24 CFY <local> [host_vm00_c3a93] Starting node
    2018-01-04 19:11:24 CFY <local> [host_vm00_c3a93.start] Sending task 'nova_plugin.server.start'
    2018-01-04 19:11:24 CFY <local> [host_vm00_c3a93.start] Task started 'nova_plugin.server.start'
    2018-01-04 19:11:24 CFY <local> [host_vm00_c3a93.start] Task rescheduled 'nova_plugin.server.start' -> Waiting for server to be in ACTIVE state but is in BUILD:spawning state. Retrying... [retry_after=30]
    2018-01-04 19:11:54 CFY <local> [host_vm00_c3a93.start] Sending task 'nova_plugin.server.start' [retry 1/10]
    2018-01-04 19:11:54 CFY <local> [host_vm00_c3a93.start] Task started 'nova_plugin.server.start' [retry 1/10]
    2018-01-04 19:11:55 LOG <local> [host_vm00_c3a93.start] INFO: Server is ACTIVE
    2018-01-04 19:11:55 CFY <local> [host_vm00_c3a93.start] Task succeeded 'nova_plugin.server.start' [retry 1/10]
    2018-01-04 19:11:55 CFY <local> [host_vm00_c3a93->security_group_5e486|establish] Sending task 'nova_plugin.server.connect_security_group'
    2018-01-04 19:11:55 CFY <local> [host_vm00_c3a93->security_group_5e486|establish] Task started 'nova_plugin.server.connect_security_group'
    2018-01-04 19:11:57 CFY <local> [host_vm00_c3a93->security_group_5e486|establish] Task succeeded 'nova_plugin.server.connect_security_group'
    2018-01-04 19:11:57 CFY <local> [host_vm00_c3a93->floatingip_vm00_6c172|establish] Sending task 'nova_plugin.server.connect_floatingip'
    2018-01-04 19:11:57 CFY <local> [host_vm00_c3a93->floatingip_vm00_6c172|establish] Task started 'nova_plugin.server.connect_floatingip'
    2018-01-04 19:11:59 CFY <local> [host_vm00_c3a93->floatingip_vm00_6c172|establish] Task succeeded 'nova_plugin.server.connect_floatingip'
    2018-01-04 19:11:59 CFY <local> 'install' workflow execution succeeded
    ++ grep -Po '"public_ip": "\K.*?(?=")'
    ++ cfy local outputs
    + PUBIP=10.195.200.42
    ++ wc -l
    ++ grep 'icmp*'
    ++ ping -c 1 10.195.200.42
    + '[' 1 -eq 0 ']'
    + sleep 10
    Installing Cloudify Manager on 10.195.200.42.
    + echo 'Installing Cloudify Manager on 10.195.200.42.'
    ++ sed s/PVTIP=//
    ++ grep PVTIP
    ++ ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i ./key600 centos@10.195.200.42 'echo PVTIP=`curl --silent http://169.254.169.254/2009-04-04/meta-data/local-ipv4`'
    Warning: Permanently added '10.195.200.42' (ECDSA) to the list of known hosts.
    + PVTIP=10.0.0.11
    + '[' 10.0.0.11 = '' ']'
    ++ cut -d \' -f2
    ++ grep key_filename
    ++ cat ./config/inputs.yaml
    + PVTKEYPATH=/opt/dcae/key
    ++ basename /opt/dcae/key
    + PVTKEYNAME=key
    ++ dirname /opt/dcae/key
    + PVTKEYDIR=/opt/dcae
    + scp -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i ./key600 ./key600 centos@10.195.200.42:/tmp/key
    Warning: Permanently added '10.195.200.42' (ECDSA) to the list of known hosts.
    + ssh -t -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i ./key600 centos@10.195.200.42 sudo mkdir -p /opt/dcae
    Pseudo-terminal will not be allocated because stdin is not a terminal.
    Warning: Permanently added '10.195.200.42' (ECDSA) to the list of known hosts.
    + ssh -t -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i ./key600 centos@10.195.200.42 sudo mv /tmp/key /opt/dcae/key
    Pseudo-terminal will not be allocated because stdin is not a terminal.
    Warning: Permanently added '10.195.200.42' (ECDSA) to the list of known hosts.
    ++ uuidgen -r
    + ESMAGIC=4198d02e-7ea7-44fe-a91d-ecb8454f625a
    + WORKDIR=/opt/app/installer/cmtmp
    + BSDIR=/opt/app/installer/cmtmp/cmbootstrap
    + PVTKEY2=/opt/app/installer/cmtmp/cmbootstrap/id_rsa.cfybootstrap
    + TMPBASE=/opt/app/installer/cmtmp/tmp
    + TMPDIR=/opt/app/installer/cmtmp/tmp/lib
    + SRCS=/opt/app/installer/cmtmp/srcs.tar
    + TOOL=/opt/app/installer/cmtmp/tool.py
    + rm -rf /opt/app/installer/cmtmp
    + mkdir -p /opt/app/installer/cmtmp/cmbootstrap /opt/app/installer/cmtmp/tmp/lib/cloudify/wheels /opt/app/installer/cmtmp/tmp/lib/cloudify/sources /opt/app/installer/cmtmp/tmp/lib/manager
    + chmod 700 /opt/app/installer/cmtmp
    + cp ./key600 /opt/app/installer/cmtmp/cmbootstrap/id_rsa.cfybootstrap
    + cat
    + ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -t -i /opt/app/installer/cmtmp/cmbootstrap/id_rsa.cfybootstrap centos@10.195.200.42 'sudo bash -xc "echo y; mkdir -p /root/.virtualenv; echo '\''[virtualenv]'\'' >/root/.virtualenv/virtualenv.ini; echo no-download=true >>/root/.virtualenv/virtualenv.ini"'
    Pseudo-terminal will not be allocated because stdin is not a terminal.
    Warning: Permanently added '10.195.200.42' (ECDSA) to the list of known hosts.
    + echo y
    + mkdir -p /root/.virtualenv
    y
    + echo '[virtualenv]'
    + echo no-download=true
    + BSURL=https://github.com/cloudify-cosmo/cloudify-manager-blueprints/archive/3.4.tar.gz
    ++ basename https://github.com/cloudify-cosmo/cloudify-manager-blueprints/archive/3.4.tar.gz
    + BSFILE=3.4.tar.gz
    + umask 022
    + wget -qO- https://github.com/cloudify-cosmo/cloudify-manager-blueprints/archive/3.4.tar.gz
    + cd /opt/app/installer/cmtmp/cmbootstrap
    + tar xzvf 3.4.tar.gz
    cloudify-manager-blueprints-3.4/
    cloudify-manager-blueprints-3.4/.gitignore
    cloudify-manager-blueprints-3.4/.travis.yml
    cloudify-manager-blueprints-3.4/LICENSE
    cloudify-manager-blueprints-3.4/README.md
    cloudify-manager-blueprints-3.4/aws-ec2-manager-blueprint-inputs.yaml
    cloudify-manager-blueprints-3.4/aws-ec2-manager-blueprint.yaml
    cloudify-manager-blueprints-3.4/azure-manager-blueprint-inputs.yaml
    cloudify-manager-blueprints-3.4/azure-manager-blueprint.yaml
    cloudify-manager-blueprints-3.4/circle.yml
    cloudify-manager-blueprints-3.4/components/
    cloudify-manager-blueprints-3.4/components/amqpinflux/
    cloudify-manager-blueprints-3.4/components/amqpinflux/NOTICE.txt
    cloudify-manager-blueprints-3.4/components/amqpinflux/config/
    cloudify-manager-blueprints-3.4/components/amqpinflux/config/cloudify-amqpinflux
    cloudify-manager-blueprints-3.4/components/amqpinflux/config/cloudify-amqpinflux.service
    cloudify-manager-blueprints-3.4/components/amqpinflux/scripts/
    cloudify-manager-blueprints-3.4/components/amqpinflux/scripts/create.py
    cloudify-manager-blueprints-3.4/components/amqpinflux/scripts/creation_validation.py
    cloudify-manager-blueprints-3.4/components/amqpinflux/scripts/start.py
    cloudify-manager-blueprints-3.4/components/amqpinflux/scripts/stop.py
    cloudify-manager-blueprints-3.4/components/cli/
    cloudify-manager-blueprints-3.4/components/cli/scripts/
    cloudify-manager-blueprints-3.4/components/cli/scripts/create.sh
    cloudify-manager-blueprints-3.4/components/cli/scripts/start.sh
    cloudify-manager-blueprints-3.4/components/elasticsearch/
    cloudify-manager-blueprints-3.4/components/elasticsearch/NOTICE.txt
    cloudify-manager-blueprints-3.4/components/elasticsearch/config/
    cloudify-manager-blueprints-3.4/components/elasticsearch/config/elasticsearch.yml
    cloudify-manager-blueprints-3.4/components/elasticsearch/config/logging.yml
    cloudify-manager-blueprints-3.4/components/elasticsearch/config/logrotate
    cloudify-manager-blueprints-3.4/components/elasticsearch/config/restart.conf
    cloudify-manager-blueprints-3.4/components/elasticsearch/config/scripts/
    cloudify-manager-blueprints-3.4/components/elasticsearch/config/scripts/append.groovy
    cloudify-manager-blueprints-3.4/components/elasticsearch/scripts/
    cloudify-manager-blueprints-3.4/components/elasticsearch/scripts/create.py
    cloudify-manager-blueprints-3.4/components/elasticsearch/scripts/creation_validation.py
    cloudify-manager-blueprints-3.4/components/elasticsearch/scripts/rotate_es_indices
    cloudify-manager-blueprints-3.4/components/elasticsearch/scripts/start.py
    cloudify-manager-blueprints-3.4/components/elasticsearch/scripts/stop.py
    cloudify-manager-blueprints-3.4/components/influxdb/
    cloudify-manager-blueprints-3.4/components/influxdb/NOTICE.txt
    cloudify-manager-blueprints-3.4/components/influxdb/config/
    cloudify-manager-blueprints-3.4/components/influxdb/config/cloudify-influxdb
    cloudify-manager-blueprints-3.4/components/influxdb/config/cloudify-influxdb.service
    cloudify-manager-blueprints-3.4/components/influxdb/config/config.toml
    cloudify-manager-blueprints-3.4/components/influxdb/config/logrotate
    cloudify-manager-blueprints-3.4/components/influxdb/scripts/
    cloudify-manager-blueprints-3.4/components/influxdb/scripts/create.py
    cloudify-manager-blueprints-3.4/components/influxdb/scripts/creation_validation.py
    cloudify-manager-blueprints-3.4/components/influxdb/scripts/start.py
    cloudify-manager-blueprints-3.4/components/influxdb/scripts/stop.py
    cloudify-manager-blueprints-3.4/components/java/
    cloudify-manager-blueprints-3.4/components/java/NOTICE.txt
    cloudify-manager-blueprints-3.4/components/java/scripts/
    cloudify-manager-blueprints-3.4/components/java/scripts/create.py
    cloudify-manager-blueprints-3.4/components/java/scripts/validate.py
    cloudify-manager-blueprints-3.4/components/logstash/
    cloudify-manager-blueprints-3.4/components/logstash/NOTICE.txt
    cloudify-manager-blueprints-3.4/components/logstash/config/
    cloudify-manager-blueprints-3.4/components/logstash/config/cloudify-logstash
    cloudify-manager-blueprints-3.4/components/logstash/config/logrotate
    cloudify-manager-blueprints-3.4/components/logstash/config/logstash.conf
    cloudify-manager-blueprints-3.4/components/logstash/config/restart.conf
    cloudify-manager-blueprints-3.4/components/logstash/scripts/
    cloudify-manager-blueprints-3.4/components/logstash/scripts/create.py
    cloudify-manager-blueprints-3.4/components/logstash/scripts/creation_validation.py
    cloudify-manager-blueprints-3.4/components/logstash/scripts/start.py
    cloudify-manager-blueprints-3.4/components/logstash/scripts/stop.py
    cloudify-manager-blueprints-3.4/components/manager/
    cloudify-manager-blueprints-3.4/components/manager/scripts/
    cloudify-manager-blueprints-3.4/components/manager/scripts/aws-ec2/
    cloudify-manager-blueprints-3.4/components/manager/scripts/aws-ec2/configure.py
    cloudify-manager-blueprints-3.4/components/manager/scripts/azure/
    cloudify-manager-blueprints-3.4/components/manager/scripts/azure/configure.py
    cloudify-manager-blueprints-3.4/components/manager/scripts/configure_manager.sh
    cloudify-manager-blueprints-3.4/components/manager/scripts/create.py
    cloudify-manager-blueprints-3.4/components/manager/scripts/create_conf.py
    cloudify-manager-blueprints-3.4/components/manager/scripts/creation_validation.py
    cloudify-manager-blueprints-3.4/components/manager/scripts/openstack/
    cloudify-manager-blueprints-3.4/components/manager/scripts/openstack/configure.py
    cloudify-manager-blueprints-3.4/components/manager/scripts/sanity/
    cloudify-manager-blueprints-3.4/components/manager/scripts/sanity/create_sanity.py
    cloudify-manager-blueprints-3.4/components/manager/scripts/sanity/get_rest_protocol.py
    cloudify-manager-blueprints-3.4/components/manager/scripts/sanity/sanity.py
    cloudify-manager-blueprints-3.4/components/manager/scripts/set_manager_private_ip.py
    cloudify-manager-blueprints-3.4/components/manager/scripts/set_manager_public_ip.py
    cloudify-manager-blueprints-3.4/components/manager/scripts/validate.py
    cloudify-manager-blueprints-3.4/components/manager/scripts/vcloud/
    cloudify-manager-blueprints-3.4/components/manager/scripts/vcloud/configure.py
    cloudify-manager-blueprints-3.4/components/manager/scripts/vcloud/update-vcloud-vm.sh
    cloudify-manager-blueprints-3.4/components/manager/scripts/vsphere/
    cloudify-manager-blueprints-3.4/components/manager/scripts/vsphere/configure.py
    cloudify-manager-blueprints-3.4/components/manager/scripts/vsphere/update-firewall.sh
    cloudify-manager-blueprints-3.4/components/mgmtworker/
    cloudify-manager-blueprints-3.4/components/mgmtworker/NOTICE.txt
    cloudify-manager-blueprints-3.4/components/mgmtworker/config/
    cloudify-manager-blueprints-3.4/components/mgmtworker/config/broker_config.json
    cloudify-manager-blueprints-3.4/components/mgmtworker/config/cloudify-mgmtworker
    cloudify-manager-blueprints-3.4/components/mgmtworker/config/cloudify-mgmtworker.service
    cloudify-manager-blueprints-3.4/components/mgmtworker/config/logrotate
    cloudify-manager-blueprints-3.4/components/mgmtworker/config/startup.sh
    cloudify-manager-blueprints-3.4/components/mgmtworker/scripts/
    cloudify-manager-blueprints-3.4/components/mgmtworker/scripts/create.py
    cloudify-manager-blueprints-3.4/components/mgmtworker/scripts/creation_validation.py
    cloudify-manager-blueprints-3.4/components/mgmtworker/scripts/start.py
    cloudify-manager-blueprints-3.4/components/mgmtworker/scripts/stop.py
    cloudify-manager-blueprints-3.4/components/nginx/
    cloudify-manager-blueprints-3.4/components/nginx/NOTICE.txt
    cloudify-manager-blueprints-3.4/components/nginx/config/
    cloudify-manager-blueprints-3.4/components/nginx/config/default.conf
    cloudify-manager-blueprints-3.4/components/nginx/config/fileserver-location.cloudify
    cloudify-manager-blueprints-3.4/components/nginx/config/http-rest-server.cloudify
    cloudify-manager-blueprints-3.4/components/nginx/config/https-rest-server.cloudify
    cloudify-manager-blueprints-3.4/components/nginx/config/logrotate
    cloudify-manager-blueprints-3.4/components/nginx/config/logs-conf.cloudify
    cloudify-manager-blueprints-3.4/components/nginx/config/nginx.conf
    cloudify-manager-blueprints-3.4/components/nginx/config/rest-location.cloudify
    cloudify-manager-blueprints-3.4/components/nginx/config/restart.conf
    cloudify-manager-blueprints-3.4/components/nginx/config/ui-locations.cloudify
    cloudify-manager-blueprints-3.4/components/nginx/scripts/
    cloudify-manager-blueprints-3.4/components/nginx/scripts/create.py
    cloudify-manager-blueprints-3.4/components/nginx/scripts/creation_validation.py
    cloudify-manager-blueprints-3.4/components/nginx/scripts/preconfigure.py
    cloudify-manager-blueprints-3.4/components/nginx/scripts/retrieve_agents.py
    cloudify-manager-blueprints-3.4/components/nginx/scripts/start.py
    cloudify-manager-blueprints-3.4/components/nginx/scripts/stop.py
    cloudify-manager-blueprints-3.4/components/python/
    cloudify-manager-blueprints-3.4/components/python/NOTICE.txt
    cloudify-manager-blueprints-3.4/components/python/scripts/
    cloudify-manager-blueprints-3.4/components/python/scripts/create.py
    cloudify-manager-blueprints-3.4/components/python/scripts/validate.py
    cloudify-manager-blueprints-3.4/components/rabbitmq/
    cloudify-manager-blueprints-3.4/components/rabbitmq/NOTICE.txt
    cloudify-manager-blueprints-3.4/components/rabbitmq/config/
    cloudify-manager-blueprints-3.4/components/rabbitmq/config/cloudify-rabbitmq
    cloudify-manager-blueprints-3.4/components/rabbitmq/config/cloudify-rabbitmq.service
    cloudify-manager-blueprints-3.4/components/rabbitmq/config/logrotate
    cloudify-manager-blueprints-3.4/components/rabbitmq/config/rabbitmq.config-nossl
    cloudify-manager-blueprints-3.4/components/rabbitmq/config/rabbitmq.config-ssl
    cloudify-manager-blueprints-3.4/components/rabbitmq/config/rabbitmq_ulimit.conf
    cloudify-manager-blueprints-3.4/components/rabbitmq/scripts/
    cloudify-manager-blueprints-3.4/components/rabbitmq/scripts/create.py
    cloudify-manager-blueprints-3.4/components/rabbitmq/scripts/creation_validation.py
    cloudify-manager-blueprints-3.4/components/rabbitmq/scripts/start.py
    cloudify-manager-blueprints-3.4/components/rabbitmq/scripts/stop.py
    cloudify-manager-blueprints-3.4/components/restservice/
    cloudify-manager-blueprints-3.4/components/restservice/NOTICE.txt
    cloudify-manager-blueprints-3.4/components/restservice/config/
    cloudify-manager-blueprints-3.4/components/restservice/config/cloudify-rest.conf
    cloudify-manager-blueprints-3.4/components/restservice/config/cloudify-restservice
    cloudify-manager-blueprints-3.4/components/restservice/config/cloudify-restservice.service
    cloudify-manager-blueprints-3.4/components/restservice/config/logrotate
    cloudify-manager-blueprints-3.4/components/restservice/scripts/
    cloudify-manager-blueprints-3.4/components/restservice/scripts/create.py
    cloudify-manager-blueprints-3.4/components/restservice/scripts/creation_validation.py
    cloudify-manager-blueprints-3.4/components/restservice/scripts/install_plugins.py
    cloudify-manager-blueprints-3.4/components/restservice/scripts/install_plugins.sh
    cloudify-manager-blueprints-3.4/components/restservice/scripts/preconfigure.py
    cloudify-manager-blueprints-3.4/components/restservice/scripts/start.py
    cloudify-manager-blueprints-3.4/components/restservice/scripts/stop.py
    cloudify-manager-blueprints-3.4/components/riemann/
    cloudify-manager-blueprints-3.4/components/riemann/NOTICE.txt
    cloudify-manager-blueprints-3.4/components/riemann/config/
    cloudify-manager-blueprints-3.4/components/riemann/config/cloudify-riemann
    cloudify-manager-blueprints-3.4/components/riemann/config/cloudify-riemann.service
    cloudify-manager-blueprints-3.4/components/riemann/config/logrotate
    cloudify-manager-blueprints-3.4/components/riemann/config/main.clj
    cloudify-manager-blueprints-3.4/components/riemann/scripts/
    cloudify-manager-blueprints-3.4/components/riemann/scripts/create.py
    cloudify-manager-blueprints-3.4/components/riemann/scripts/creation_validation.py
    cloudify-manager-blueprints-3.4/components/riemann/scripts/start.py
    cloudify-manager-blueprints-3.4/components/riemann/scripts/stop.py
    cloudify-manager-blueprints-3.4/components/utils.py
    cloudify-manager-blueprints-3.4/components/webui/
    cloudify-manager-blueprints-3.4/components/webui/LICENSE
    cloudify-manager-blueprints-3.4/components/webui/NOTICE.txt
    cloudify-manager-blueprints-3.4/components/webui/config/
    cloudify-manager-blueprints-3.4/components/webui/config/cloudify-webui
    cloudify-manager-blueprints-3.4/components/webui/config/cloudify-webui.service
    cloudify-manager-blueprints-3.4/components/webui/config/grafana_config.js
    cloudify-manager-blueprints-3.4/components/webui/config/gsPresets.json
    cloudify-manager-blueprints-3.4/components/webui/config/logrotate
    cloudify-manager-blueprints-3.4/components/webui/scripts/
    cloudify-manager-blueprints-3.4/components/webui/scripts/create.py
    cloudify-manager-blueprints-3.4/components/webui/scripts/start.py
    cloudify-manager-blueprints-3.4/components/webui/scripts/stop.py
    cloudify-manager-blueprints-3.4/openstack-manager-blueprint-inputs.yaml
    cloudify-manager-blueprints-3.4/openstack-manager-blueprint.yaml
    cloudify-manager-blueprints-3.4/resources/
    cloudify-manager-blueprints-3.4/resources/rest/
    cloudify-manager-blueprints-3.4/resources/rest/roles_config.yaml
    cloudify-manager-blueprints-3.4/resources/rest/userstore.yaml
    cloudify-manager-blueprints-3.4/resources/ssl/
    cloudify-manager-blueprints-3.4/resources/ssl/server.crt
    cloudify-manager-blueprints-3.4/resources/ssl/server.key
    cloudify-manager-blueprints-3.4/run_test.sh
    cloudify-manager-blueprints-3.4/simple-manager-blueprint-inputs.yaml
    cloudify-manager-blueprints-3.4/simple-manager-blueprint.yaml
    cloudify-manager-blueprints-3.4/tests/
    cloudify-manager-blueprints-3.4/tests/bootstrap-sanity-requirements.txt
    cloudify-manager-blueprints-3.4/tests/sanity.py
    cloudify-manager-blueprints-3.4/tests/unit-tests/
    cloudify-manager-blueprints-3.4/tests/unit-tests/test_upgrade.py
    cloudify-manager-blueprints-3.4/tests/unit-tests/test_validations.py
    cloudify-manager-blueprints-3.4/types/
    cloudify-manager-blueprints-3.4/types/manager-types.yaml
    cloudify-manager-blueprints-3.4/vcloud-manager-blueprint-inputs.yaml
    cloudify-manager-blueprints-3.4/vcloud-manager-blueprint.yaml
    cloudify-manager-blueprints-3.4/vsphere-manager-blueprint-inputs.yaml
    cloudify-manager-blueprints-3.4/vsphere-manager-blueprint.yaml
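    # [annotation] The cloudify-manager-blueprints 3.4 archive is now unpacked. The next commands
    # resolve the Cloudify manager resources package URL (cloudify-manager-resources_3.4.0-ga-b400),
    # download it, and pre-stage its contents on the target VM (10.195.200.42) under /opt via
    # scp/ssh before the bootstrap is run.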
    ++ python /opt/app/installer/cmtmp/tool.py /opt/app/installer/cmtmp/cmbootstrap/cloudify-manager-blueprints-3.4
    + MRPURL=http://repository.cloudifysource.org/org/cloudify3/3.4.0/ga-RELEASE/cloudify-manager-resources_3.4.0-ga-b400.tar.gz
    ++ basename http://repository.cloudifysource.org/org/cloudify3/3.4.0/ga-RELEASE/cloudify-manager-resources_3.4.0-ga-b400.tar.gz
    + MRPFILE=cloudify-manager-resources_3.4.0-ga-b400.tar.gz
    + wget -qO- http://repository.cloudifysource.org/org/cloudify3/3.4.0/ga-RELEASE/cloudify-manager-resources_3.4.0-ga-b400.tar.gz
    + tar cf /opt/app/installer/cmtmp/srcs.tar -C /opt/app/installer/cmtmp/tmp/lib cloudify
    + rm -rf /opt/app/installer/cmtmp/tmp
    + scp -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i /opt/app/installer/cmtmp/cmbootstrap/id_rsa.cfybootstrap /opt/app/installer/cmtmp/srcs.tar centos@10.195.200.42:/tmp/.
    Warning: Permanently added '10.195.200.42' (ECDSA) to the list of known hosts.
    + ssh -t -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i /opt/app/installer/cmtmp/cmbootstrap/id_rsa.cfybootstrap centos@10.195.200.42 'sudo bash -xc "cd /opt; tar xf /tmp/srcs.tar; chown -R root:root /opt/cloudify /opt/manager; rm -rf /tmp/srcs.tar"'
    Pseudo-terminal will not be allocated because stdin is not a terminal.
    Warning: Permanently added '10.195.200.42' (ECDSA) to the list of known hosts.
    + cd /opt
    + tar xf /tmp/srcs.tar
    + chown -R root:root /opt/cloudify /opt/manager
    chown: cannot access ‘/opt/manager’: No such file or directory
    + rm -rf /tmp/srcs.tar
    + ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -t -i /opt/app/installer/cmtmp/cmbootstrap/id_rsa.cfybootstrap centos@10.195.200.42 'sudo bash -xc '\''mkdir -p /opt/dcae; if [ -f /tmp/cfy-config.txt ]; then cp /tmp/cfy-config.txt /opt/dcae/config.txt && chmod 644 /opt/dcae/config.txt; fi'\'''
    Pseudo-terminal will not be allocated because stdin is not a terminal.
    Warning: Permanently added '10.195.200.42' (ECDSA) to the list of known hosts.
    + mkdir -p /opt/dcae
    + '[' -f /tmp/cfy-config.txt ']'
    + cd /opt/app/installer/cmtmp
    + rm -f /opt/app/installer/cmtmp/cmbootstrap/cloudify-manager-blueprints-3.4/resources/ssl/server.key /opt/app/installer/cmtmp/cmbootstrap/cloudify-manager-blueprints-3.4/resources/ssl/server.crt
    + awk 'BEGIN{x="/dev/null";}/-----BEGIN CERTIFICATE-----/{x="/opt/app/installer/cmtmp/cmbootstrap/cloudify-manager-blueprints-3.4/resources/ssl/server.crt";}/-----BEGIN PRIVATE KEY-----/{x="/opt/app/installer/cmtmp/cmbootstrap/cloudify-manager-blueprints-3.4/resources/ssl/server.key";}{print >x;}/-----END /{x="/dev/null";}'
    + ssh -t -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i /opt/app/installer/cmtmp/cmbootstrap/id_rsa.cfybootstrap centos@10.195.200.42 'sudo bash -xc "openssl pkcs12 -in /opt/app/dcae-certificate/certificate.pkcs12 -passin file:/opt/app/dcae-certificate/.password -nodes -chain"'
    Pseudo-terminal will not be allocated because stdin is not a terminal.
    Warning: Permanently added '10.195.200.42' (ECDSA) to the list of known hosts.
    + openssl pkcs12 -in /opt/app/dcae-certificate/certificate.pkcs12 -passin file:/opt/app/dcae-certificate/.password -nodes -chain
    Can't open file /opt/app/dcae-certificate/.password
    Error getting passwords
    + USESSL=false
    + '[' -f /opt/app/installer/cmtmp/cmbootstrap/cloudify-manager-blueprints-3.4/resources/ssl/server.key -a -f /opt/app/installer/cmtmp/cmbootstrap/cloudify-manager-blueprints-3.4/resources/ssl/server.crt ']'
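    # [annotation] The openssl call above failed because /opt/app/dcae-certificate/.password is not
    # present on the VM, so no server.key/server.crt is produced and USESSL stays false; the Cloudify
    # Manager is therefore bootstrapped with SSL disabled (see ssl_enabled: false in the sed below).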
    + export CLOUDIFY_USERNAME=admin CLOUDIFY_PASSWORD=encc0fba9f6d618a1a51935b42342b17658
    + CLOUDIFY_USERNAME=admin
    + CLOUDIFY_PASSWORD=encc0fba9f6d618a1a51935b42342b17658
    + cd /opt/app/installer/cmtmp/cmbootstrap/cloudify-manager-blueprints-3.4
    + cp simple-manager-blueprint.yaml bootstrap-blueprint.yaml
    + ed bootstrap-blueprint.yaml
    28170
    28446
    + sed -e 's;.*public_ip: .*;public_ip: '\''10.195.200.42'\'';' -e 's;.*private_ip: .*;private_ip: '\''10.0.0.11'\'';' -e 's;.*ssh_user: .*;ssh_user: '\''centos'\'';' -e 's;.*ssh_key_filename: .*;ssh_key_filename: '\''/opt/app/installer/cmtmp/cmbootstrap/id_rsa.cfybootstrap'\'';' -e 's;.*elasticsearch_java_opts: .*;elasticsearch_java_opts: '\''-Des.cluster.name=4198d02e-7ea7-44fe-a91d-ecb8454f625a'\'';' -e '/ssl_enabled: /s/.*/ssl_enabled: false/' -e '/security_enabled: /s/.*/security_enabled: false/' -e '/admin_password: /s/.*/admin_password: '\''encc0fba9f6d618a1a51935b42342b17658'\''/' -e '/admin_username: /s/.*/admin_username: '\''admin'\''/' -e 's;.*manager_resources_package: .*;manager_resources_package: '\''http://169.254.169.254/nosuchthing/cloudify-manager-resources_3.4.0-ga-b400.tar.gz'\'';' -e 's;.*ignore_bootstrap_validations: .*;ignore_bootstrap_validations: true;'
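    # [annotation] The sed above customizes the copied simple-manager-blueprint.yaml: it fills in
    # public_ip 10.195.200.42, private_ip 10.0.0.11, ssh_user centos and the bootstrap key path,
    # disables ssl_enabled and security_enabled, sets the admin credentials, points
    # manager_resources_package at a placeholder URL (the resources were already staged under /opt
    # on the VM), and sets ignore_bootstrap_validations to true.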
    + cat
    + cfy init -r
    Initialization completed successfully
    + cfy bootstrap --install-plugins -p bootstrap-blueprint.yaml -i bootstrap-inputs.yaml
    Executing bootstrap validation...
    Collecting https://github.com/cloudify-cosmo/cloudify-fabric-plugin/archive/1.4.1.zip (from -r /tmp/requirements_LPfZjI.txt (line 1))
      Downloading https://github.com/cloudify-cosmo/cloudify-fabric-plugin/archive/1.4.1.zip
    Requirement already satisfied: cloudify-plugins-common>=3.3.1 in /opt/app/installer/dcaeinstall/lib/python2.7/site-packages (from cloudify-fabric-plugin==1.4.1->-r /tmp/requirements_LPfZjI.txt (line 1))
    Requirement already satisfied: fabric==1.8.3 in /opt/app/installer/dcaeinstall/lib/python2.7/site-packages (from cloudify-fabric-plugin==1.4.1->-r /tmp/requirements_LPfZjI.txt (line 1))
    Requirement already satisfied: six>=1.8.0 in /opt/app/installer/dcaeinstall/lib/python2.7/site-packages (from cloudify-fabric-plugin==1.4.1->-r /tmp/requirements_LPfZjI.txt (line 1))
    Requirement already satisfied: proxy-tools==0.1.0 in /opt/app/installer/dcaeinstall/lib/python2.7/site-packages (from cloudify-plugins-common>=3.3.1->cloudify-fabric-plugin==1.4.1->-r /tmp/requirements_LPfZjI.txt (line 1))
    Requirement already satisfied: cloudify-rest-client==3.4 in /opt/app/installer/dcaeinstall/lib/python2.7/site-packages (from cloudify-plugins-common>=3.3.1->cloudify-fabric-plugin==1.4.1->-r /tmp/requirements_LPfZjI.txt (line 1))
    Requirement already satisfied: bottle==0.12.7 in /opt/app/installer/dcaeinstall/lib/python2.7/site-packages (from cloudify-plugins-common>=3.3.1->cloudify-fabric-plugin==1.4.1->-r /tmp/requirements_LPfZjI.txt (line 1))
    Requirement already satisfied: pika==0.9.14 in /opt/app/installer/dcaeinstall/lib/python2.7/site-packages (from cloudify-plugins-common>=3.3.1->cloudify-fabric-plugin==1.4.1->-r /tmp/requirements_LPfZjI.txt (line 1))
    Requirement already satisfied: networkx==1.8.1 in /opt/app/installer/dcaeinstall/lib/python2.7/site-packages (from cloudify-plugins-common>=3.3.1->cloudify-fabric-plugin==1.4.1->-r /tmp/requirements_LPfZjI.txt (line 1))
    Requirement already satisfied: jinja2==2.7.2 in /opt/app/installer/dcaeinstall/lib/python2.7/site-packages (from cloudify-plugins-common>=3.3.1->cloudify-fabric-plugin==1.4.1->-r /tmp/requirements_LPfZjI.txt (line 1))
    Requirement already satisfied: paramiko<1.13,>=1.10 in /opt/app/installer/dcaeinstall/lib/python2.7/site-packages (from fabric==1.8.3->cloudify-fabric-plugin==1.4.1->-r /tmp/requirements_LPfZjI.txt (line 1))
    Requirement already satisfied: requests==2.7.0 in /opt/app/installer/dcaeinstall/lib/python2.7/site-packages (from cloudify-rest-client==3.4->cloudify-plugins-common>=3.3.1->cloudify-fabric-plugin==1.4.1->-r /tmp/requirements_LPfZjI.txt (line 1))
    Requirement already satisfied: requests-toolbelt in /opt/app/installer/dcaeinstall/lib/python2.7/site-packages (from cloudify-rest-client==3.4->cloudify-plugins-common>=3.3.1->cloudify-fabric-plugin==1.4.1->-r /tmp/requirements_LPfZjI.txt (line 1))
    Requirement already satisfied: markupsafe in /opt/app/installer/dcaeinstall/lib/python2.7/site-packages (from jinja2==2.7.2->cloudify-plugins-common>=3.3.1->cloudify-fabric-plugin==1.4.1->-r /tmp/requirements_LPfZjI.txt (line 1))
    Requirement already satisfied: pycrypto!=2.4,>=2.1 in /opt/app/installer/dcaeinstall/lib/python2.7/site-packages (from paramiko<1.13,>=1.10->fabric==1.8.3->cloudify-fabric-plugin==1.4.1->-r /tmp/requirements_LPfZjI.txt (line 1))
    Requirement already satisfied: ecdsa in /opt/app/installer/dcaeinstall/lib/python2.7/site-packages (from paramiko<1.13,>=1.10->fabric==1.8.3->cloudify-fabric-plugin==1.4.1->-r /tmp/requirements_LPfZjI.txt (line 1))
    Installing collected packages: cloudify-fabric-plugin
      Running setup.py install for cloudify-fabric-plugin: started
        Running setup.py install for cloudify-fabric-plugin: finished with status 'done'
    Successfully installed cloudify-fabric-plugin-1.4.1
    Processing inputs source: bootstrap-inputs.yaml
    2018-01-04 19:13:09 CFY <manager> Starting 'execute_operation' workflow execution
    2018-01-04 19:13:09 CFY <manager> [amqp_influx_63aa3] Starting operation cloudify.interfaces.validation.creation
    2018-01-04 19:13:09 CFY <manager> [manager_host_7aff9] Starting operation cloudify.interfaces.validation.creation
    2018-01-04 19:13:09 CFY <manager> [logstash_e7d8f] Starting operation cloudify.interfaces.validation.creation
    2018-01-04 19:13:09 CFY <manager> [mgmt_worker_400da] Starting operation cloudify.interfaces.validation.creation
    2018-01-04 19:13:09 CFY <manager> [sanity_db880] Starting operation cloudify.interfaces.validation.creation
    2018-01-04 19:13:09 CFY <manager> [python_runtime_00bd6] Starting operation cloudify.interfaces.validation.creation
    2018-01-04 19:13:09 CFY <manager> [rest_service_cb328] Starting operation cloudify.interfaces.validation.creation
    2018-01-04 19:13:09 CFY <manager> [rabbitmq_eaac4] Starting operation cloudify.interfaces.validation.creation
    2018-01-04 19:13:09 CFY <manager> [nginx_f9d95] Starting operation cloudify.interfaces.validation.creation
    2018-01-04 19:13:09 CFY <manager> [webui_f7a96] Starting operation cloudify.interfaces.validation.creation
    2018-01-04 19:13:09 CFY <manager> [manager_resources_a868d] Starting operation cloudify.interfaces.validation.creation
    2018-01-04 19:13:09 CFY <manager> [manager_configuration_49446] Starting operation cloudify.interfaces.validation.creation
    2018-01-04 19:13:09 CFY <manager> [elasticsearch_412aa] Starting operation cloudify.interfaces.validation.creation
    2018-01-04 19:13:09 CFY <manager> [riemann_19b6e] Starting operation cloudify.interfaces.validation.creation
    2018-01-04 19:13:09 CFY <manager> [java_runtime_255e1] Starting operation cloudify.interfaces.validation.creation
    2018-01-04 19:13:09 CFY <manager> [influxdb_583cd] Starting operation cloudify.interfaces.validation.creation
    2018-01-04 19:13:10 CFY <manager> [logstash_e7d8f.creation] Sending task 'fabric_plugin.tasks.run_script'
    2018-01-04 19:13:10 CFY <manager> [influxdb_583cd.creation] Sending task 'fabric_plugin.tasks.run_script'
    2018-01-04 19:13:10 CFY <manager> [rabbitmq_eaac4.creation] Sending task 'fabric_plugin.tasks.run_script'
    2018-01-04 19:13:10 CFY <manager> [manager_configuration_49446.creation] Sending task 'fabric_plugin.tasks.run_script'
    2018-01-04 19:13:10 CFY <manager> [mgmt_worker_400da.creation] Sending task 'fabric_plugin.tasks.run_script'
    2018-01-04 19:13:10 CFY <manager> [amqp_influx_63aa3.creation] Sending task 'fabric_plugin.tasks.run_script'
    2018-01-04 19:13:10 CFY <manager> [rest_service_cb328.creation] Sending task 'fabric_plugin.tasks.run_script'
    2018-01-04 19:13:10 CFY <manager> [elasticsearch_412aa.creation] Sending task 'fabric_plugin.tasks.run_script'
    2018-01-04 19:13:10 CFY <manager> [riemann_19b6e.creation] Sending task 'fabric_plugin.tasks.run_script'
    2018-01-04 19:13:10 CFY <manager> [nginx_f9d95.creation] Sending task 'fabric_plugin.tasks.run_script'
    2018-01-04 19:13:10 CFY <manager> [logstash_e7d8f.creation] Task started 'fabric_plugin.tasks.run_script'
    2018-01-04 19:13:10 LOG <manager> [logstash_e7d8f.creation] INFO: Preparing fabric environment...
    2018-01-04 19:13:10 LOG <manager> [logstash_e7d8f.creation] INFO: Environment prepared successfully
    2018-01-04 19:13:11 CFY <manager> [logstash_e7d8f.creation] Task succeeded 'fabric_plugin.tasks.run_script'
    2018-01-04 19:13:11 CFY <manager> [influxdb_583cd.creation] Task started 'fabric_plugin.tasks.run_script'
    2018-01-04 19:13:11 LOG <manager> [influxdb_583cd.creation] INFO: Preparing fabric environment...
    2018-01-04 19:13:11 LOG <manager> [influxdb_583cd.creation] INFO: Environment prepared successfully
    2018-01-04 19:13:12 CFY <manager> [influxdb_583cd.creation] Task succeeded 'fabric_plugin.tasks.run_script'
    2018-01-04 19:13:12 CFY <manager> [rabbitmq_eaac4.creation] Task started 'fabric_plugin.tasks.run_script'
    2018-01-04 19:13:12 LOG <manager> [rabbitmq_eaac4.creation] INFO: Preparing fabric environment...
    2018-01-04 19:13:12 LOG <manager> [rabbitmq_eaac4.creation] INFO: Environment prepared successfully
    2018-01-04 19:13:12 CFY <manager> [rabbitmq_eaac4.creation] Task succeeded 'fabric_plugin.tasks.run_script'
    2018-01-04 19:13:12 CFY <manager> [manager_configuration_49446.creation] Task started 'fabric_plugin.tasks.run_script'
    2018-01-04 19:13:12 LOG <manager> [manager_configuration_49446.creation] INFO: Preparing fabric environment...
    2018-01-04 19:13:12 LOG <manager> [manager_configuration_49446.creation] INFO: Environment prepared successfully
    2018-01-04 19:13:13 CFY <manager> [manager_configuration_49446.creation] Task succeeded 'fabric_plugin.tasks.run_script'
    2018-01-04 19:13:13 CFY <manager> [mgmt_worker_400da.creation] Task started 'fabric_plugin.tasks.run_script'
    2018-01-04 19:13:13 LOG <manager> [mgmt_worker_400da.creation] INFO: Preparing fabric environment...
    2018-01-04 19:13:13 LOG <manager> [mgmt_worker_400da.creation] INFO: Environment prepared successfully
    2018-01-04 19:13:13 CFY <manager> [mgmt_worker_400da.creation] Task succeeded 'fabric_plugin.tasks.run_script'
    2018-01-04 19:13:13 CFY <manager> [amqp_influx_63aa3.creation] Task started 'fabric_plugin.tasks.run_script'
    2018-01-04 19:13:13 LOG <manager> [amqp_influx_63aa3.creation] INFO: Preparing fabric environment...
    2018-01-04 19:13:13 LOG <manager> [amqp_influx_63aa3.creation] INFO: Environment prepared successfully
    2018-01-04 19:13:14 CFY <manager> [amqp_influx_63aa3.creation] Task succeeded 'fabric_plugin.tasks.run_script'
    2018-01-04 19:13:14 CFY <manager> [rest_service_cb328.creation] Task started 'fabric_plugin.tasks.run_script'
    2018-01-04 19:13:14 LOG <manager> [rest_service_cb328.creation] INFO: Preparing fabric environment...
    2018-01-04 19:13:14 LOG <manager> [rest_service_cb328.creation] INFO: Environment prepared successfully
    2018-01-04 19:13:15 CFY <manager> [rest_service_cb328.creation] Task succeeded 'fabric_plugin.tasks.run_script'
    2018-01-04 19:13:15 CFY <manager> [elasticsearch_412aa.creation] Task started 'fabric_plugin.tasks.run_script'
    2018-01-04 19:13:15 LOG <manager> [elasticsearch_412aa.creation] INFO: Preparing fabric environment...
    2018-01-04 19:13:15 LOG <manager> [elasticsearch_412aa.creation] INFO: Environment prepared successfully
    2018-01-04 19:13:15 CFY <manager> [elasticsearch_412aa.creation] Task succeeded 'fabric_plugin.tasks.run_script'
    2018-01-04 19:13:15 CFY <manager> [riemann_19b6e.creation] Task started 'fabric_plugin.tasks.run_script'
    2018-01-04 19:13:15 LOG <manager> [riemann_19b6e.creation] INFO: Preparing fabric environment...
    2018-01-04 19:13:15 LOG <manager> [riemann_19b6e.creation] INFO: Environment prepared successfully
    2018-01-04 19:13:16 CFY <manager> [riemann_19b6e.creation] Task succeeded 'fabric_plugin.tasks.run_script'
    2018-01-04 19:13:16 CFY <manager> [nginx_f9d95.creation] Task started 'fabric_plugin.tasks.run_script'
    2018-01-04 19:13:16 LOG <manager> [nginx_f9d95.creation] INFO: Preparing fabric environment...
    2018-01-04 19:13:16 LOG <manager> [nginx_f9d95.creation] INFO: Environment prepared successfully
    2018-01-04 19:13:17 CFY <manager> [nginx_f9d95.creation] Task succeeded 'fabric_plugin.tasks.run_script'
    2018-01-04 19:13:17 CFY <manager> [python_runtime_00bd6] Finished operation cloudify.interfaces.validation.creation
    2018-01-04 19:13:17 CFY <manager> [java_runtime_255e1] Finished operation cloudify.interfaces.validation.creation
    2018-01-04 19:13:17 CFY <manager> [manager_host_7aff9] Finished operation cloudify.interfaces.validation.creation
    2018-01-04 19:13:17 CFY <manager> [webui_f7a96] Finished operation cloudify.interfaces.validation.creation
    2018-01-04 19:13:17 CFY <manager> [sanity_db880] Finished operation cloudify.interfaces.validation.creation
    2018-01-04 19:13:17 CFY <manager> [manager_resources_a868d] Finished operation cloudify.interfaces.validation.creation
    2018-01-04 19:13:17 CFY <manager> [logstash_e7d8f] Finished operation cloudify.interfaces.validation.creation
    2018-01-04 19:13:17 CFY <manager> [influxdb_583cd] Finished operation cloudify.interfaces.validation.creation
    2018-01-04 19:13:17 CFY <manager> [rabbitmq_eaac4] Finished operation cloudify.interfaces.validation.creation
    2018-01-04 19:13:17 CFY <manager> [manager_configuration_49446] Finished operation cloudify.interfaces.validation.creation
    2018-01-04 19:13:17 CFY <manager> [mgmt_worker_400da] Finished operation cloudify.interfaces.validation.creation
    2018-01-04 19:13:17 CFY <manager> [amqp_influx_63aa3] Finished operation cloudify.interfaces.validation.creation
    2018-01-04 19:13:17 CFY <manager> [rest_service_cb328] Finished operation cloudify.interfaces.validation.creation
    2018-01-04 19:13:17 CFY <manager> [elasticsearch_412aa] Finished operation cloudify.interfaces.validation.creation
    2018-01-04 19:13:17 CFY <manager> [riemann_19b6e] Finished operation cloudify.interfaces.validation.creation
    2018-01-04 19:13:17 CFY <manager> [nginx_f9d95] Finished operation cloudify.interfaces.validation.creation
    2018-01-04 19:13:17 CFY <manager> 'execute_operation' workflow execution succeeded
    Bootstrap validation completed successfully
    Executing manager bootstrap...
    Collecting https://github.com/cloudify-cosmo/cloudify-fabric-plugin/archive/1.4.1.zip (from -r /tmp/requirements_zP8m_9.txt (line 1))
      Downloading https://github.com/cloudify-cosmo/cloudify-fabric-plugin/archive/1.4.1.zip
      Requirement already satisfied (use --upgrade to upgrade): cloudify-fabric-plugin==1.4.1 from https://github.com/cloudify-cosmo/cloudify-fabric-plugin/archive/1.4.1.zip in /opt/app/installer/dcaeinstall/lib/python2.7/site-packages (from -r /tmp/requirements_zP8m_9.txt (line 1))
    Requirement already satisfied: cloudify-plugins-common>=3.3.1 in /opt/app/installer/dcaeinstall/lib/python2.7/site-packages (from cloudify-fabric-plugin==1.4.1->-r /tmp/requirements_zP8m_9.txt (line 1))
    Requirement already satisfied: fabric==1.8.3 in /opt/app/installer/dcaeinstall/lib/python2.7/site-packages (from cloudify-fabric-plugin==1.4.1->-r /tmp/requirements_zP8m_9.txt (line 1))
    Requirement already satisfied: six>=1.8.0 in /opt/app/installer/dcaeinstall/lib/python2.7/site-packages (from cloudify-fabric-plugin==1.4.1->-r /tmp/requirements_zP8m_9.txt (line 1))
    Requirement already satisfied: proxy-tools==0.1.0 in /opt/app/installer/dcaeinstall/lib/python2.7/site-packages (from cloudify-plugins-common>=3.3.1->cloudify-fabric-plugin==1.4.1->-r /tmp/requirements_zP8m_9.txt (line 1))
    Requirement already satisfied: cloudify-rest-client==3.4 in /opt/app/installer/dcaeinstall/lib/python2.7/site-packages (from cloudify-plugins-common>=3.3.1->cloudify-fabric-plugin==1.4.1->-r /tmp/requirements_zP8m_9.txt (line 1))
    Requirement already satisfied: bottle==0.12.7 in /opt/app/installer/dcaeinstall/lib/python2.7/site-packages (from cloudify-plugins-common>=3.3.1->cloudify-fabric-plugin==1.4.1->-r /tmp/requirements_zP8m_9.txt (line 1))
    Requirement already satisfied: pika==0.9.14 in /opt/app/installer/dcaeinstall/lib/python2.7/site-packages (from cloudify-plugins-common>=3.3.1->cloudify-fabric-plugin==1.4.1->-r /tmp/requirements_zP8m_9.txt (line 1))
    Requirement already satisfied: networkx==1.8.1 in /opt/app/installer/dcaeinstall/lib/python2.7/site-packages (from cloudify-plugins-common>=3.3.1->cloudify-fabric-plugin==1.4.1->-r /tmp/requirements_zP8m_9.txt (line 1))
    Requirement already satisfied: jinja2==2.7.2 in /opt/app/installer/dcaeinstall/lib/python2.7/site-packages (from cloudify-plugins-common>=3.3.1->cloudify-fabric-plugin==1.4.1->-r /tmp/requirements_zP8m_9.txt (line 1))
    Requirement already satisfied: paramiko<1.13,>=1.10 in /opt/app/installer/dcaeinstall/lib/python2.7/site-packages (from fabric==1.8.3->cloudify-fabric-plugin==1.4.1->-r /tmp/requirements_zP8m_9.txt (line 1))
    Requirement already satisfied: requests==2.7.0 in /opt/app/installer/dcaeinstall/lib/python2.7/site-packages (from cloudify-rest-client==3.4->cloudify-plugins-common>=3.3.1->cloudify-fabric-plugin==1.4.1->-r /tmp/requirements_zP8m_9.txt (line 1))
    Requirement already satisfied: requests-toolbelt in /opt/app/installer/dcaeinstall/lib/python2.7/site-packages (from cloudify-rest-client==3.4->cloudify-plugins-common>=3.3.1->cloudify-fabric-plugin==1.4.1->-r /tmp/requirements_zP8m_9.txt (line 1))
    Requirement already satisfied: markupsafe in /opt/app/installer/dcaeinstall/lib/python2.7/site-packages (from jinja2==2.7.2->cloudify-plugins-common>=3.3.1->cloudify-fabric-plugin==1.4.1->-r /tmp/requirements_zP8m_9.txt (line 1))
    Requirement already satisfied: pycrypto!=2.4,>=2.1 in /opt/app/installer/dcaeinstall/lib/python2.7/site-packages (from paramiko<1.13,>=1.10->fabric==1.8.3->cloudify-fabric-plugin==1.4.1->-r /tmp/requirements_zP8m_9.txt (line 1))
    Requirement already satisfied: ecdsa in /opt/app/installer/dcaeinstall/lib/python2.7/site-packages (from paramiko<1.13,>=1.10->fabric==1.8.3->cloudify-fabric-plugin==1.4.1->-r /tmp/requirements_zP8m_9.txt (line 1))
    Processing inputs source: bootstrap-inputs.yaml
    2018-01-04 19:13:28 CFY <manager> Starting 'install' workflow execution
    2018-01-04 19:13:28 CFY <manager> [manager_host_4f071] Creating node
    2018-01-04 19:13:29 CFY <manager> [manager_host_4f071] Configuring node
    2018-01-04 19:13:29 CFY <manager> [manager_host_4f071] Starting node
    2018-01-04 19:13:30 CFY <manager> [manager_resources_c994e] Creating node
    2018-01-04 19:13:30 CFY <manager> [manager_resources_c994e.create] Sending task 'fabric_plugin.tasks.run_script'
    2018-01-04 19:13:30 CFY <manager> [manager_resources_c994e.create] Task started 'fabric_plugin.tasks.run_script'
    2018-01-04 19:13:30 LOG <manager> [manager_resources_c994e.create] INFO: Preparing fabric environment...
    2018-01-04 19:13:30 LOG <manager> [manager_resources_c994e.create] INFO: Environment prepared successfully
    2018-01-04 19:13:31 LOG <manager> [manager_resources_c994e.create] INFO: Validating Python version...
    2018-01-04 19:13:31 LOG <manager> [manager_resources_c994e.create] INFO: Validating supported distributions...
    2018-01-04 19:13:31 LOG <manager> [manager_resources_c994e.create] INFO: Validating memory requirement...
    2018-01-04 19:13:32 LOG <manager> [manager_resources_c994e.create] INFO: Validating disk space requirement...
    2018-01-04 19:13:32 LOG <manager> [manager_resources_c994e.create] INFO: Validating Elasticsearch heap size requirement...
    2018-01-04 19:13:32 LOG <manager> [manager_resources_c994e.create] WARNING: Ignoring validation errors.
    Validation Error: The Manager's Resources Package http://169.254.169.254/nosuchthing/cloudify-manager-resources_3.4.0-ga-b400.tar.gz is not accessible (HTTP Error: 404)
    2018-01-04 19:13:32 CFY <manager> [manager_resources_c994e.create] Task succeeded 'fabric_plugin.tasks.run_script'
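    # [annotation] The 404 on the manager resources package above is expected: ignore_bootstrap_validations
    # was set to true and the required RPMs were pre-staged under /opt/cloudify on the VM, so the
    # bootstrap proceeds and installs the components from those local resources in the steps that follow.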
    2018-01-04 19:13:33 CFY <manager> [manager_resources_c994e] Configuring node
    2018-01-04 19:13:33 CFY <manager> [manager_resources_c994e.configure] Sending task 'fabric_plugin.tasks.run_script'
    2018-01-04 19:13:33 CFY <manager> [manager_resources_c994e.configure] Task started 'fabric_plugin.tasks.run_script'
    2018-01-04 19:13:33 LOG <manager> [manager_resources_c994e.configure] INFO: Preparing fabric environment...
    2018-01-04 19:13:33 LOG <manager> [manager_resources_c994e.configure] INFO: Environment prepared successfully
    2018-01-04 19:13:34 LOG <manager> [manager_resources_c994e.configure] INFO: Saving manager-resources input configuration to /opt/cloudify/manager-resources/node_properties/properties.json
    2018-01-04 19:13:37 LOG <manager> [manager_resources_c994e.configure] INFO: Skipping resources package checksum validation...
    2018-01-04 19:13:43 CFY <manager> [manager_resources_c994e.configure] Task succeeded 'fabric_plugin.tasks.run_script'
    2018-01-04 19:13:44 CFY <manager> [manager_resources_c994e] Starting node
    2018-01-04 19:13:45 CFY <manager> [manager_configuration_bb27d] Creating node
    2018-01-04 19:13:45 CFY <manager> [manager_configuration_bb27d] Configuring node
    2018-01-04 19:13:45 CFY <manager> [manager_configuration_bb27d.configure] Sending task 'fabric_plugin.tasks.run_script'
    2018-01-04 19:13:45 CFY <manager> [manager_configuration_bb27d.configure] Task started 'fabric_plugin.tasks.run_script'
    2018-01-04 19:13:45 LOG <manager> [manager_configuration_bb27d.configure] INFO: Preparing fabric environment...
    2018-01-04 19:13:45 LOG <manager> [manager_configuration_bb27d.configure] INFO: Environment prepared successfully
    2018-01-04 19:13:46 LOG <manager> [manager_configuration_bb27d.configure] INFO: Saving manager-config input configuration to /opt/cloudify/manager-config/node_properties/properties.json
    2018-01-04 19:13:47 LOG <manager> [manager_configuration_bb27d.configure] INFO: Deploying blueprint resource components/manager/scripts/configure_manager.sh to /tmp/configure_manager.sh
    2018-01-04 19:13:47 LOG <manager> [manager_configuration_bb27d.configure] INFO: Downloading resource configure_manager.sh to /opt/cloudify/manager-config/resources/configure_manager.sh
    2018-01-04 19:13:49 CFY <manager> [manager_configuration_bb27d.configure] Task succeeded 'fabric_plugin.tasks.run_script'
    2018-01-04 19:13:49 CFY <manager> [manager_configuration_bb27d->manager_host_4f071|postconfigure] Sending task 'script_runner.tasks.run'
    2018-01-04 19:13:49 CFY <manager> [manager_configuration_bb27d->manager_host_4f071|postconfigure] Task started 'script_runner.tasks.run'
    2018-01-04 19:13:49 LOG <manager> [manager_configuration_bb27d->manager_host_4f071|postconfigure] INFO: Setting Public Manager IP Runtime Property.
    2018-01-04 19:13:49 LOG <manager> [manager_configuration_bb27d->manager_host_4f071|postconfigure] INFO: Manager Public IP is: 10.195.200.42
    2018-01-04 19:13:49 CFY <manager> [manager_configuration_bb27d->manager_host_4f071|postconfigure] Task succeeded 'script_runner.tasks.run'
    2018-01-04 19:13:49 CFY <manager> [manager_configuration_bb27d] Starting node
    2018-01-04 19:13:50 CFY <manager> [influxdb_73514] Creating node
    2018-01-04 19:13:50 CFY <manager> [rabbitmq_0c517] Creating node
    2018-01-04 19:13:50 CFY <manager> [python_runtime_5c4af] Creating node
    2018-01-04 19:13:50 CFY <manager> [java_runtime_5c9ad] Creating node
    2018-01-04 19:13:50 CFY <manager> [rabbitmq_0c517.create] Sending task 'fabric_plugin.tasks.run_script'
    2018-01-04 19:13:50 CFY <manager> [influxdb_73514.create] Sending task 'fabric_plugin.tasks.run_script'
    2018-01-04 19:13:50 CFY <manager> [python_runtime_5c4af.create] Sending task 'fabric_plugin.tasks.run_script'
    2018-01-04 19:13:50 CFY <manager> [java_runtime_5c9ad.create] Sending task 'fabric_plugin.tasks.run_script'
    2018-01-04 19:13:50 CFY <manager> [rabbitmq_0c517.create] Task started 'fabric_plugin.tasks.run_script'
    2018-01-04 19:13:50 LOG <manager> [rabbitmq_0c517.create] INFO: Preparing fabric environment...
    2018-01-04 19:13:50 LOG <manager> [rabbitmq_0c517.create] INFO: Environment prepared successfully
    2018-01-04 19:13:51 LOG <manager> [rabbitmq_0c517.create] INFO: Saving rabbitmq input configuration to /opt/cloudify/rabbitmq/node_properties/properties.json
    2018-01-04 19:13:52 LOG <manager> [rabbitmq_0c517.create] INFO: Installing RabbitMQ...
    2018-01-04 19:13:52 LOG <manager> [rabbitmq_0c517.create] INFO: Checking whether SELinux in enforced...
    2018-01-04 19:13:52 LOG <manager> [rabbitmq_0c517.create] INFO: SELinux is enforcing, setting permissive state...
    2018-01-04 19:13:53 LOG <manager> [rabbitmq_0c517.create] INFO: Replacing SELINUX=enforcing with SELINUX=permissive in /etc/selinux/config...
    2018-01-04 19:13:53 LOG <manager> [rabbitmq_0c517.create] INFO: Downloading resource rabbitmq_NOTICE.txt to /opt/cloudify/rabbitmq/resources/rabbitmq_NOTICE.txt
    2018-01-04 19:13:56 LOG <manager> [rabbitmq_0c517.create] INFO: Checking whether /opt/cloudify/rabbitmq/resources/erlang-17.4-1.el6.x86_64.rpm is already installed...
    2018-01-04 19:13:56 LOG <manager> [rabbitmq_0c517.create] INFO: yum installing /opt/cloudify/rabbitmq/resources/erlang-17.4-1.el6.x86_64.rpm...
    2018-01-04 19:13:59 LOG <manager> [rabbitmq_0c517.create] INFO: Checking whether /opt/cloudify/rabbitmq/resources/rabbitmq-server-3.5.3-1.noarch.rpm is already installed...
    2018-01-04 19:14:00 LOG <manager> [rabbitmq_0c517.create] INFO: yum installing /opt/cloudify/rabbitmq/resources/rabbitmq-server-3.5.3-1.noarch.rpm...
    2018-01-04 19:14:01 LOG <manager> [rabbitmq_0c517.create] INFO: Deploying logrotate hourly cron job...
    2018-01-04 19:14:02 LOG <manager> [rabbitmq_0c517.create] INFO: Deploying logrotate config...
    2018-01-04 19:14:02 LOG <manager> [rabbitmq_0c517.create] INFO: Deploying blueprint resource components/rabbitmq/config/logrotate to /etc/logrotate.d/rabbitmq
    2018-01-04 19:14:02 LOG <manager> [rabbitmq_0c517.create] INFO: Downloading resource rabbitmq to /opt/cloudify/rabbitmq/resources/rabbitmq
    2018-01-04 19:14:03 LOG <manager> [rabbitmq_0c517.create] INFO: chmoding /etc/logrotate.d/rabbitmq: 644
    2018-01-04 19:14:04 LOG <manager> [rabbitmq_0c517.create] INFO: chowning /etc/logrotate.d/rabbitmq by root:root...
    2018-01-04 19:14:04 LOG <manager> [rabbitmq_0c517.create] INFO: Deploying systemd EnvironmentFile...
    2018-01-04 19:14:04 LOG <manager> [rabbitmq_0c517.create] INFO: Deploying blueprint resource components/rabbitmq/config/cloudify-rabbitmq to /etc/sysconfig/cloudify-rabbitmq
    2018-01-04 19:14:04 LOG <manager> [rabbitmq_0c517.create] INFO: Downloading resource cloudify-rabbitmq to /opt/cloudify/rabbitmq/resources/cloudify-rabbitmq
    2018-01-04 19:14:05 LOG <manager> [rabbitmq_0c517.create] INFO: Deploying systemd .service file...
    2018-01-04 19:14:05 LOG <manager> [rabbitmq_0c517.create] INFO: Deploying blueprint resource components/rabbitmq/config/cloudify-rabbitmq.service to /usr/lib/systemd/system/cloudify-rabbitmq.service
    2018-01-04 19:14:05 LOG <manager> [rabbitmq_0c517.create] INFO: Downloading resource cloudify-rabbitmq.service to /opt/cloudify/rabbitmq/resources/cloudify-rabbitmq.service
    2018-01-04 19:14:07 LOG <manager> [rabbitmq_0c517.create] INFO: Enabling systemd .service...
    2018-01-04 19:14:07 LOG <manager> [rabbitmq_0c517.create] INFO: Configuring File Descriptors Limit...
    2018-01-04 19:14:07 LOG <manager> [rabbitmq_0c517.create] INFO: Deploying blueprint resource components/rabbitmq/config/rabbitmq_ulimit.conf to /etc/security/limits.d/rabbitmq.conf
    2018-01-04 19:14:07 LOG <manager> [rabbitmq_0c517.create] INFO: Downloading resource rabbitmq.conf to /opt/cloudify/rabbitmq/resources/rabbitmq.conf
    2018-01-04 19:14:09 LOG <manager> [rabbitmq_0c517.create] INFO: chowning /var/log/cloudify/rabbitmq by rabbitmq:rabbitmq...
    2018-01-04 19:14:21 LOG <manager> [rabbitmq_0c517.create] INFO: Waiting for localhost:5672 to become available...
    2018-01-04 19:14:22 LOG <manager> [rabbitmq_0c517.create] INFO: localhost:5672 is open!
    2018-01-04 19:14:22 LOG <manager> [rabbitmq_0c517.create] INFO: Enabling RabbitMQ Plugins...
    2018-01-04 19:14:25 LOG <manager> [rabbitmq_0c517.create] INFO: Disabling RabbitMQ guest user...
    2018-01-04 19:14:27 LOG <manager> [rabbitmq_0c517.create] INFO: Creating new user cloudify:c10udify and setting permissions...
    2018-01-04 19:14:28 LOG <manager> [rabbitmq_0c517.create] INFO: Deploying blueprint resource components/rabbitmq/config/rabbitmq.config-nossl to /etc/rabbitmq/rabbitmq.config
    2018-01-04 19:14:28 LOG <manager> [rabbitmq_0c517.create] INFO: Downloading resource rabbitmq.config to /opt/cloudify/rabbitmq/resources/rabbitmq.config
    2018-01-04 19:14:29 LOG <manager> [rabbitmq_0c517.create] INFO: Stopping systemd service cloudify-rabbitmq...
    2018-01-04 19:14:33 CFY <manager> [rabbitmq_0c517.create] Task succeeded 'fabric_plugin.tasks.run_script'
    2018-01-04 19:14:33 CFY <manager> [influxdb_73514.create] Task started 'fabric_plugin.tasks.run_script'
    2018-01-04 19:14:33 LOG <manager> [influxdb_73514.create] INFO: Preparing fabric environment...
    2018-01-04 19:14:33 LOG <manager> [influxdb_73514.create] INFO: Environment prepared successfully
    2018-01-04 19:14:33 LOG <manager> [influxdb_73514.create] INFO: Saving influxdb input configuration to /opt/cloudify/influxdb/node_properties/properties.json
    2018-01-04 19:14:34 LOG <manager> [influxdb_73514.create] INFO: Installing InfluxDB...
    2018-01-04 19:14:34 LOG <manager> [influxdb_73514.create] INFO: Checking whether SELinux in enforced...
    2018-01-04 19:14:34 LOG <manager> [influxdb_73514.create] INFO: SELinux is not enforced.
    2018-01-04 19:14:35 LOG <manager> [influxdb_73514.create] INFO: Downloading resource influxdb_NOTICE.txt to /opt/cloudify/influxdb/resources/influxdb_NOTICE.txt
    2018-01-04 19:14:37 LOG <manager> [influxdb_73514.create] INFO: Checking whether /opt/cloudify/influxdb/resources/influxdb-0.8.8-1.x86_64.rpm is already installed...
    2018-01-04 19:14:38 LOG <manager> [influxdb_73514.create] INFO: yum installing /opt/cloudify/influxdb/resources/influxdb-0.8.8-1.x86_64.rpm...
    2018-01-04 19:14:40 LOG <manager> [influxdb_73514.create] INFO: Deploying InfluxDB config.toml...
    2018-01-04 19:14:40 LOG <manager> [influxdb_73514.create] INFO: Deploying blueprint resource components/influxdb/config/config.toml to /opt/influxdb/shared/config.toml
    2018-01-04 19:14:40 LOG <manager> [influxdb_73514.create] INFO: Downloading resource config.toml to /opt/cloudify/influxdb/resources/config.toml
    2018-01-04 19:14:41 LOG <manager> [influxdb_73514.create] INFO: Fixing user permissions...
    2018-01-04 19:14:42 LOG <manager> [influxdb_73514.create] INFO: chowning /opt/influxdb by influxdb:influxdb...
    2018-01-04 19:14:42 LOG <manager> [influxdb_73514.create] INFO: chowning /var/log/cloudify/influxdb by influxdb:influxdb...
    2018-01-04 19:14:42 LOG <manager> [influxdb_73514.create] INFO: Deploying systemd EnvironmentFile...
    2018-01-04 19:14:42 LOG <manager> [influxdb_73514.create] INFO: Deploying blueprint resource components/influxdb/config/cloudify-influxdb to /etc/sysconfig/cloudify-influxdb
    2018-01-04 19:14:42 LOG <manager> [influxdb_73514.create] INFO: Downloading resource cloudify-influxdb to /opt/cloudify/influxdb/resources/cloudify-influxdb
    2018-01-04 19:14:43 LOG <manager> [influxdb_73514.create] INFO: Deploying systemd .service file...
    2018-01-04 19:14:44 LOG <manager> [influxdb_73514.create] INFO: Deploying blueprint resource components/influxdb/config/cloudify-influxdb.service to /usr/lib/systemd/system/cloudify-influxdb.service
    2018-01-04 19:14:44 LOG <manager> [influxdb_73514.create] INFO: Downloading resource cloudify-influxdb.service to /opt/cloudify/influxdb/resources/cloudify-influxdb.service
    2018-01-04 19:14:45 LOG <manager> [influxdb_73514.create] INFO: Enabling systemd .service...
    2018-01-04 19:14:45 LOG <manager> [influxdb_73514.create] INFO: Path does not exist: /etc/init.d/influxdb. Skipping...
    2018-01-04 19:14:46 LOG <manager> [influxdb_73514.create] INFO: Deploying logrotate config...
    2018-01-04 19:14:46 LOG <manager> [influxdb_73514.create] INFO: Deploying blueprint resource components/influxdb/config/logrotate to /etc/logrotate.d/influxdb
    2018-01-04 19:14:46 LOG <manager> [influxdb_73514.create] INFO: Downloading resource influxdb to /opt/cloudify/influxdb/resources/influxdb
    2018-01-04 19:14:47 LOG <manager> [influxdb_73514.create] INFO: chmoding /etc/logrotate.d/influxdb: 644
    2018-01-04 19:14:47 LOG <manager> [influxdb_73514.create] INFO: chowning /etc/logrotate.d/influxdb by root:root...
    2018-01-04 19:14:48 LOG <manager> [influxdb_73514.create] INFO: Waiting for 10.0.0.11:8086 to become available...
    2018-01-04 19:14:48 LOG <manager> [influxdb_73514.create] INFO: 10.0.0.11:8086 is not available yet, retrying... (1/24)
    2018-01-04 19:14:50 LOG <manager> [influxdb_73514.create] INFO: 10.0.0.11:8086 is not available yet, retrying... (2/24)
    2018-01-04 19:14:52 LOG <manager> [influxdb_73514.create] INFO: 10.0.0.11:8086 is not available yet, retrying... (3/24)
    2018-01-04 19:14:54 LOG <manager> [influxdb_73514.create] INFO: 10.0.0.11:8086 is open!
    2018-01-04 19:14:54 LOG <manager> [influxdb_73514.create] INFO: Creating InfluxDB Database...
    2018-01-04 19:14:54 LOG <manager> [influxdb_73514.create] INFO: Request is: http://10.0.0.11:8086/db?p=root&u=root '{'name': 'cloudify'}'
    2018-01-04 19:14:55 LOG <manager> [influxdb_73514.create] INFO: Verifying database create successfully...
    2018-01-04 19:14:55 LOG <manager> [influxdb_73514.create] INFO: Databased cloudify created successfully.
    2018-01-04 19:14:55 LOG <manager> [influxdb_73514.create] INFO: Stopping systemd service cloudify-influxdb...
    2018-01-04 19:14:55 CFY <manager> [influxdb_73514.create] Task succeeded 'fabric_plugin.tasks.run_script'
    2018-01-04 19:14:55 CFY <manager> [python_runtime_5c4af.create] Task started 'fabric_plugin.tasks.run_script'
    2018-01-04 19:14:55 LOG <manager> [python_runtime_5c4af.create] INFO: Preparing fabric environment...
    2018-01-04 19:14:55 LOG <manager> [python_runtime_5c4af.create] INFO: Environment prepared successfully
    2018-01-04 19:14:56 LOG <manager> [python_runtime_5c4af.create] INFO: Saving python input configuration to /opt/cloudify/python/node_properties/properties.json
    2018-01-04 19:14:57 LOG <manager> [python_runtime_5c4af.create] INFO: Installing Python Requirements...
    2018-01-04 19:14:57 LOG <manager> [python_runtime_5c4af.create] INFO: Checking whether SELinux in enforced...
    2018-01-04 19:14:57 LOG <manager> [python_runtime_5c4af.create] INFO: SELinux is not enforced.
    2018-01-04 19:14:58 LOG <manager> [python_runtime_5c4af.create] INFO: Downloading resource python_NOTICE.txt to /opt/cloudify/python/resources/python_NOTICE.txt
    2018-01-04 19:15:00 LOG <manager> [python_runtime_5c4af.create] INFO: Checking whether /opt/cloudify/python/resources/python-pip-7.1.0-1.el7.noarch.rpm is already installed...
    2018-01-04 19:15:00 LOG <manager> [python_runtime_5c4af.create] INFO: yum installing /opt/cloudify/python/resources/python-pip-7.1.0-1.el7.noarch.rpm...
    2018-01-04 19:15:02 CFY <manager> [python_runtime_5c4af.create] Task succeeded 'fabric_plugin.tasks.run_script'
    2018-01-04 19:15:02 CFY <manager> [java_runtime_5c9ad.create] Task started 'fabric_plugin.tasks.run_script'
    2018-01-04 19:15:02 LOG <manager> [java_runtime_5c9ad.create] INFO: Preparing fabric environment...
    2018-01-04 19:15:02 LOG <manager> [java_runtime_5c9ad.create] INFO: Environment prepared successfully
    2018-01-04 19:15:03 LOG <manager> [java_runtime_5c9ad.create] INFO: Saving java input configuration to /opt/cloudify/java/node_properties/properties.json
    2018-01-04 19:15:04 LOG <manager> [java_runtime_5c9ad.create] INFO: Installing Java...
    2018-01-04 19:15:04 LOG <manager> [java_runtime_5c9ad.create] INFO: Checking whether SELinux in enforced...
    2018-01-04 19:15:04 LOG <manager> [java_runtime_5c9ad.create] INFO: SELinux is not enforced.
    2018-01-04 19:15:04 LOG <manager> [java_runtime_5c9ad.create] INFO: Downloading resource java_NOTICE.txt to /opt/cloudify/java/resources/java_NOTICE.txt
    2018-01-04 19:15:06 LOG <manager> [java_runtime_5c9ad.create] INFO: Checking whether /opt/cloudify/java/resources/jre1.8.0_45-1.8.0_45-fcs.x86_64.rpm is already installed...
    2018-01-04 19:15:07 LOG <manager> [java_runtime_5c9ad.create] INFO: yum installing /opt/cloudify/java/resources/jre1.8.0_45-1.8.0_45-fcs.x86_64.rpm...
    2018-01-04 19:15:12 CFY <manager> [java_runtime_5c9ad.create] Task succeeded 'fabric_plugin.tasks.run_script'
    2018-01-04 19:15:12 CFY <manager> [rabbitmq_0c517] Configuring node
    2018-01-04 19:15:12 CFY <manager> [python_runtime_5c4af] Configuring node
    2018-01-04 19:15:12 CFY <manager> [influxdb_73514] Configuring node
    2018-01-04 19:15:12 CFY <manager> [java_runtime_5c9ad] Configuring node
    2018-01-04 19:15:13 CFY <manager> [rabbitmq_0c517] Starting node
    2018-01-04 19:15:13 CFY <manager> [rabbitmq_0c517.start] Sending task 'fabric_plugin.tasks.run_script'
    2018-01-04 19:15:13 CFY <manager> [rabbitmq_0c517.start] Task started 'fabric_plugin.tasks.run_script'
    2018-01-04 19:15:13 LOG <manager> [rabbitmq_0c517.start] INFO: Preparing fabric environment...
    2018-01-04 19:15:13 LOG <manager> [rabbitmq_0c517.start] INFO: Environment prepared successfully
    2018-01-04 19:15:13 LOG <manager> [rabbitmq_0c517.start] INFO: Starting RabbitMQ Service...
    2018-01-04 19:15:16 LOG <manager> [rabbitmq_0c517.start] INFO: Waiting for localhost:5672 to become available...
    2018-01-04 19:15:16 LOG <manager> [rabbitmq_0c517.start] INFO: localhost:5672 is open!
    2018-01-04 19:15:26 LOG <manager> [rabbitmq_0c517.start] INFO: Setting RabbitMQ Policies...
    2018-01-04 19:15:27 LOG <manager> [rabbitmq_0c517.start] INFO: Setting policy logs_queue_message_policy on queues ^cloudify-logs$ to {"message-ttl": 60000, "max-length": 1000000}
    2018-01-04 19:15:27 LOG <manager> [rabbitmq_0c517.start] INFO: Setting policy events_queue_message_policy on queues ^cloudify-events$ to {"message-ttl": 60000, "max-length": 1000000}
    2018-01-04 19:15:28 LOG <manager> [rabbitmq_0c517.start] INFO: Setting policy metrics_queue_message_policy on queues ^amq\.gen.*$ to {"message-ttl": 60000, "max-length": 1000000}
    2018-01-04 19:15:29 LOG <manager> [rabbitmq_0c517.start] INFO: Setting policy riemann_deployment_queues_message_ttl on queues ^.*-riemann$ to {"message-ttl": 60000, "max-length": 1000000}
    2018-01-04 19:15:30 LOG <manager> [rabbitmq_0c517.start] INFO: Starting systemd service cloudify-rabbitmq...
    2018-01-04 19:15:30 LOG <manager> [rabbitmq_0c517.start] INFO: rabbitmq is running
    2018-01-04 19:15:31 CFY <manager> [rabbitmq_0c517.start] Task succeeded 'fabric_plugin.tasks.run_script'
    2018-01-04 19:15:31 CFY <manager> [python_runtime_5c4af] Starting node
    2018-01-04 19:15:31 CFY <manager> [influxdb_73514] Starting node
    2018-01-04 19:15:31 CFY <manager> [java_runtime_5c9ad] Starting node
    2018-01-04 19:15:31 CFY <manager> [python_runtime_5c4af.start] Sending task 'fabric_plugin.tasks.run_script'
    2018-01-04 19:15:31 CFY <manager> [influxdb_73514.start] Sending task 'fabric_plugin.tasks.run_script'
    2018-01-04 19:15:31 CFY <manager> [java_runtime_5c9ad.start] Sending task 'fabric_plugin.tasks.run_script'
    2018-01-04 19:15:31 CFY <manager> [python_runtime_5c4af.start] Task started 'fabric_plugin.tasks.run_script'
    2018-01-04 19:15:31 LOG <manager> [python_runtime_5c4af.start] INFO: Preparing fabric environment...
    2018-01-04 19:15:31 LOG <manager> [python_runtime_5c4af.start] INFO: Environment prepared successfully
    2018-01-04 19:15:32 CFY <manager> [python_runtime_5c4af.start] Task succeeded 'fabric_plugin.tasks.run_script'
    2018-01-04 19:15:32 CFY <manager> [influxdb_73514.start] Task started 'fabric_plugin.tasks.run_script'
    2018-01-04 19:15:32 LOG <manager> [influxdb_73514.start] INFO: Preparing fabric environment...
    2018-01-04 19:15:32 LOG <manager> [influxdb_73514.start] INFO: Environment prepared successfully
    2018-01-04 19:15:33 LOG <manager> [influxdb_73514.start] INFO: Starting InfluxDB Service...
    2018-01-04 19:15:33 LOG <manager> [influxdb_73514.start] INFO: Starting systemd service cloudify-influxdb...
    2018-01-04 19:15:33 LOG <manager> [influxdb_73514.start] INFO: influxdb is running
    2018-01-04 19:15:33 LOG <manager> [influxdb_73514.start] WARNING: <urlopen error [Errno 111] Connection refused>, Retrying in 3 seconds...
    2018-01-04 19:15:36 LOG <manager> [influxdb_73514.start] WARNING: <urlopen error [Errno 111] Connection refused>, Retrying in 6 seconds...
    2018-01-04 19:15:43 CFY <manager> [influxdb_73514.start] Task succeeded 'fabric_plugin.tasks.run_script'
    2018-01-04 19:15:43 CFY <manager> [java_runtime_5c9ad.start] Task started 'fabric_plugin.tasks.run_script'
    2018-01-04 19:15:43 LOG <manager> [java_runtime_5c9ad.start] INFO: Preparing fabric environment...
    2018-01-04 19:15:43 LOG <manager> [java_runtime_5c9ad.start] INFO: Environment prepared successfully
    2018-01-04 19:15:43 CFY <manager> [java_runtime_5c9ad.start] Task succeeded 'fabric_plugin.tasks.run_script'
    2018-01-04 19:15:44 CFY <manager> [amqp_influx_ea8bf] Creating node
    2018-01-04 19:15:44 CFY <manager> [amqp_influx_ea8bf.create] Sending task 'fabric_plugin.tasks.run_script'
    2018-01-04 19:15:44 CFY <manager> [amqp_influx_ea8bf.create] Task started 'fabric_plugin.tasks.run_script'
    2018-01-04 19:15:44 LOG <manager> [amqp_influx_ea8bf.create] INFO: Preparing fabric environment...
    2018-01-04 19:15:44 LOG <manager> [amqp_influx_ea8bf.create] INFO: Environment prepared successfully
    2018-01-04 19:15:44 LOG <manager> [amqp_influx_ea8bf.create] INFO: Saving amqpinflux input configuration to /opt/cloudify/amqpinflux/node_properties/properties.json
    2018-01-04 19:15:46 LOG <manager> [amqp_influx_ea8bf.create] INFO: Installing AQMPInflux...
    2018-01-04 19:15:46 LOG <manager> [amqp_influx_ea8bf.create] INFO: Checking whether SELinux in enforced...
    2018-01-04 19:15:46 LOG <manager> [amqp_influx_ea8bf.create] INFO: SELinux is not enforced.
    2018-01-04 19:15:47 LOG <manager> [amqp_influx_ea8bf.create] INFO: Downloading resource amqpinflux_NOTICE.txt to /opt/cloudify/amqpinflux/resources/amqpinflux_NOTICE.txt
    2018-01-04 19:15:49 LOG <manager> [amqp_influx_ea8bf.create] INFO: Checking whether /opt/cloudify/amqpinflux/resources/cloudify-amqp-influx-3.4.0-ga_b400.x86_64.rpm is already installed...
    2018-01-04 19:15:50 LOG <manager> [amqp_influx_ea8bf.create] INFO: yum installing /opt/cloudify/amqpinflux/resources/cloudify-amqp-influx-3.4.0-ga_b400.x86_64.rpm...
    2018-01-04 19:15:53 LOG <manager> [amqp_influx_ea8bf.create] INFO: Checking whether user amqpinflux exists...
    2018-01-04 19:15:53 LOG <manager> [amqp_influx_ea8bf.create] INFO: Creating user amqpinflux, home: /opt/amqpinflux...
    2018-01-04 19:15:54 LOG <manager> [amqp_influx_ea8bf.create] WARNING: Broker SSL cert supplied but SSL not enabled (broker_ssl_enabled is False).
    2018-01-04 19:15:54 LOG <manager> [amqp_influx_ea8bf.create] INFO: Fixing permissions...
    2018-01-04 19:15:54 LOG <manager> [amqp_influx_ea8bf.create] INFO: chowning /opt/amqpinflux by amqpinflux:amqpinflux...
    2018-01-04 19:15:54 LOG <manager> [amqp_influx_ea8bf.create] INFO: Deploying systemd EnvironmentFile...
    2018-01-04 19:15:54 LOG <manager> [amqp_influx_ea8bf.create] INFO: Deploying blueprint resource components/amqpinflux/config/cloudify-amqpinflux to /etc/sysconfig/cloudify-amqpinflux
    2018-01-04 19:15:54 LOG <manager> [amqp_influx_ea8bf.create] INFO: Downloading resource cloudify-amqpinflux to /opt/cloudify/amqpinflux/resources/cloudify-amqpinflux
    2018-01-04 19:15:56 LOG <manager> [amqp_influx_ea8bf.create] INFO: Deploying systemd .service file...
    2018-01-04 19:15:56 LOG <manager> [amqp_influx_ea8bf.create] INFO: Deploying blueprint resource components/amqpinflux/config/cloudify-amqpinflux.service to /usr/lib/systemd/system/cloudify-amqpinflux.service
    2018-01-04 19:15:56 LOG <manager> [amqp_influx_ea8bf.create] INFO: Downloading resource cloudify-amqpinflux.service to /opt/cloudify/amqpinflux/resources/cloudify-amqpinflux.service
    2018-01-04 19:15:57 LOG <manager> [amqp_influx_ea8bf.create] INFO: Enabling systemd .service...
    2018-01-04 19:15:58 CFY <manager> [amqp_influx_ea8bf.create] Task succeeded 'fabric_plugin.tasks.run_script'
    2018-01-04 19:15:58 CFY <manager> [elasticsearch_75b27] Creating node
    2018-01-04 19:15:58 CFY <manager> [elasticsearch_75b27.create] Sending task 'fabric_plugin.tasks.run_script'
    2018-01-04 19:15:58 CFY <manager> [elasticsearch_75b27.create] Task started 'fabric_plugin.tasks.run_script'
    2018-01-04 19:15:58 LOG <manager> [elasticsearch_75b27.create] INFO: Preparing fabric environment...
    2018-01-04 19:15:58 LOG <manager> [elasticsearch_75b27.create] INFO: Environment prepared successfully
    2018-01-04 19:15:59 LOG <manager> [elasticsearch_75b27.create] INFO: Saving elasticsearch input configuration to /opt/cloudify/elasticsearch/node_properties/properties.json
    2018-01-04 19:16:00 LOG <manager> [elasticsearch_75b27.create] INFO: Installing Elasticsearch...
    2018-01-04 19:16:00 LOG <manager> [elasticsearch_75b27.create] INFO: Checking whether SELinux in enforced...
    2018-01-04 19:16:00 LOG <manager> [elasticsearch_75b27.create] INFO: SELinux is not enforced.
    2018-01-04 19:16:00 LOG <manager> [elasticsearch_75b27.create] INFO: Downloading resource elasticsearch_NOTICE.txt to /opt/cloudify/elasticsearch/resources/elasticsearch_NOTICE.txt
    2018-01-04 19:16:03 LOG <manager> [elasticsearch_75b27.create] INFO: Checking whether /opt/cloudify/elasticsearch/resources/elasticsearch-1.6.0.noarch.rpm is already installed...
    2018-01-04 19:16:04 LOG <manager> [elasticsearch_75b27.create] INFO: yum installing /opt/cloudify/elasticsearch/resources/elasticsearch-1.6.0.noarch.rpm...
    2018-01-04 19:16:05 LOG <manager> [elasticsearch_75b27.create] INFO: Chowning /var/log/cloudify/elasticsearch by elasticsearch user...
    2018-01-04 19:16:05 LOG <manager> [elasticsearch_75b27.create] INFO: chowning /var/log/cloudify/elasticsearch by elasticsearch:elasticsearch...
    2018-01-04 19:16:06 LOG <manager> [elasticsearch_75b27.create] INFO: Creating systemd unit override...
    2018-01-04 19:16:06 LOG <manager> [elasticsearch_75b27.create] INFO: Deploying blueprint resource components/elasticsearch/config/restart.conf to /etc/systemd/system/elasticsearch.service.d/restart.conf
    2018-01-04 19:16:06 LOG <manager> [elasticsearch_75b27.create] INFO: Downloading resource restart.conf to /opt/cloudify/elasticsearch/resources/restart.conf
    2018-01-04 19:16:07 LOG <manager> [elasticsearch_75b27.create] INFO: Deploying Elasticsearch Configuration...
    2018-01-04 19:16:07 LOG <manager> [elasticsearch_75b27.create] INFO: Deploying blueprint resource components/elasticsearch/config/elasticsearch.yml to /etc/elasticsearch/elasticsearch.yml
    2018-01-04 19:16:08 LOG <manager> [elasticsearch_75b27.create] INFO: Downloading resource elasticsearch.yml to /opt/cloudify/elasticsearch/resources/elasticsearch.yml
    2018-01-04 19:16:09 LOG <manager> [elasticsearch_75b27.create] INFO: chowning /etc/elasticsearch/elasticsearch.yml by elasticsearch:elasticsearch...
    2018-01-04 19:16:09 LOG <manager> [elasticsearch_75b27.create] INFO: Deploying elasticsearch logging configuration file...
    2018-01-04 19:16:09 LOG <manager> [elasticsearch_75b27.create] INFO: Deploying blueprint resource components/elasticsearch/config/logging.yml to /etc/elasticsearch/logging.yml
    2018-01-04 19:16:09 LOG <manager> [elasticsearch_75b27.create] INFO: Downloading resource logging.yml to /opt/cloudify/elasticsearch/resources/logging.yml
    2018-01-04 19:16:10 LOG <manager> [elasticsearch_75b27.create] INFO: chowning /etc/elasticsearch/logging.yml by elasticsearch:elasticsearch...
    2018-01-04 19:16:11 LOG <manager> [elasticsearch_75b27.create] INFO: Creating Elasticsearch scripts folder and additional external Elasticsearch scripts...
    2018-01-04 19:16:11 LOG <manager> [elasticsearch_75b27.create] INFO: Deploying blueprint resource components/elasticsearch/config/scripts/append.groovy to /etc/elasticsearch/scripts/append.groovy
    2018-01-04 19:16:11 LOG <manager> [elasticsearch_75b27.create] INFO: Downloading resource append.groovy to /opt/cloudify/elasticsearch/resources/append.groovy
    2018-01-04 19:16:12 LOG <manager> [elasticsearch_75b27.create] INFO: Setting Elasticsearch Heap Size...
    2018-01-04 19:16:13 LOG <manager> [elasticsearch_75b27.create] INFO: Replacing (?:#|)ES_HEAP_SIZE=(.*) with ES_HEAP_SIZE=2g in /etc/sysconfig/elasticsearch...
    2018-01-04 19:16:13 LOG <manager> [elasticsearch_75b27.create] INFO: Setting additional JAVA_OPTS...
    2018-01-04 19:16:13 LOG <manager> [elasticsearch_75b27.create] INFO: Replacing (?:#|)ES_JAVA_OPTS=(.*) with ES_JAVA_OPTS=-Des.cluster.name=4198d02e-7ea7-44fe-a91d-ecb8454f625a in /etc/sysconfig/elasticsearch...
    2018-01-04 19:16:14 LOG <manager> [elasticsearch_75b27.create] INFO: Setting Elasticsearch logs path...
    2018-01-04 19:16:14 LOG <manager> [elasticsearch_75b27.create] INFO: Replacing (?:#|)LOG_DIR=(.*) with LOG_DIR=/var/log/cloudify/elasticsearch in /etc/sysconfig/elasticsearch...
    2018-01-04 19:16:14 LOG <manager> [elasticsearch_75b27.create] INFO: Replacing (?:#|)ES_GC_LOG_FILE=(.*) with ES_GC_LOG_FILE=/var/log/cloudify/elasticsearch/gc.log in /etc/sysconfig/elasticsearch...
    2018-01-04 19:16:15 LOG <manager> [elasticsearch_75b27.create] INFO: Deploying logrotate config...
    2018-01-04 19:16:15 LOG <manager> [elasticsearch_75b27.create] INFO: Deploying blueprint resource components/elasticsearch/config/logrotate to /etc/logrotate.d/elasticsearch
    2018-01-04 19:16:15 LOG <manager> [elasticsearch_75b27.create] INFO: Downloading resource elasticsearch to /opt/cloudify/elasticsearch/resources/elasticsearch
    2018-01-04 19:16:16 LOG <manager> [elasticsearch_75b27.create] INFO: chmoding /etc/logrotate.d/elasticsearch: 644
    2018-01-04 19:16:17 LOG <manager> [elasticsearch_75b27.create] INFO: chowning /etc/logrotate.d/elasticsearch by root:root...
    2018-01-04 19:16:17 LOG <manager> [elasticsearch_75b27.create] INFO: Installing Elasticsearch Curator...
    2018-01-04 19:16:18 LOG <manager> [elasticsearch_75b27.create] INFO: Checking whether /opt/cloudify/elasticsearch/resources/elasticsearch-curator-3.2.3-1.x86_64.rpm is already installed...
    2018-01-04 19:16:18 LOG <manager> [elasticsearch_75b27.create] INFO: yum installing /opt/cloudify/elasticsearch/resources/elasticsearch-curator-3.2.3-1.x86_64.rpm...
    2018-01-04 19:16:20 LOG <manager> [elasticsearch_75b27.create] INFO: Configuring Elasticsearch Index Rotation cronjob for logstash-YYYY.mm.dd index patterns...
    2018-01-04 19:16:20 LOG <manager> [elasticsearch_75b27.create] INFO: Deploying blueprint resource components/elasticsearch/scripts/rotate_es_indices to /etc/cron.daily/rotate_es_indices
    2018-01-04 19:16:20 LOG <manager> [elasticsearch_75b27.create] INFO: Downloading resource rotate_es_indices to /opt/cloudify/elasticsearch/resources/rotate_es_indices
    2018-01-04 19:16:21 LOG <manager> [elasticsearch_75b27.create] INFO: chowning /etc/cron.daily/rotate_es_indices by root:root...
    2018-01-04 19:16:22 LOG <manager> [elasticsearch_75b27.create] INFO: Enabling systemd service elasticsearch...
    2018-01-04 19:16:22 LOG <manager> [elasticsearch_75b27.create] INFO: Waiting for 10.0.0.11:9200 to become available...
    2018-01-04 19:16:22 LOG <manager> [elasticsearch_75b27.create] INFO: 10.0.0.11:9200 is not available yet, retrying... (1/24)
    2018-01-04 19:16:24 LOG <manager> [elasticsearch_75b27.create] INFO: 10.0.0.11:9200 is not available yet, retrying... (2/24)
    2018-01-04 19:16:27 LOG <manager> [elasticsearch_75b27.create] INFO: 10.0.0.11:9200 is not available yet, retrying... (3/24)
    2018-01-04 19:16:29 LOG <manager> [elasticsearch_75b27.create] INFO: 10.0.0.11:9200 is open!
    2018-01-04 19:16:29 LOG <manager> [elasticsearch_75b27.create] INFO: Deleting `cloudify_storage` index if exists...
    2018-01-04 19:16:29 LOG <manager> [elasticsearch_75b27.create] INFO: Failed to DELETE http://10.0.0.11:9200/cloudify_storage/ (reason: Not Found)
    2018-01-04 19:16:29 LOG <manager> [elasticsearch_75b27.create] INFO: Creating `cloudify_storage` index...
    2018-01-04 19:16:30 LOG <manager> [elasticsearch_75b27.create] INFO: Declaring blueprint mapping...
    2018-01-04 19:16:30 LOG <manager> [elasticsearch_75b27.create] INFO: Declaring deployment mapping...
    2018-01-04 19:16:30 LOG <manager> [elasticsearch_75b27.create] INFO: Declaring execution mapping...
    2018-01-04 19:16:30 LOG <manager> [elasticsearch_75b27.create] INFO: Declaring node mapping...
    2018-01-04 19:16:30 LOG <manager> [elasticsearch_75b27.create] INFO: Declaring node instance mapping...
    2018-01-04 19:16:31 LOG <manager> [elasticsearch_75b27.create] INFO: Declaring deployment modification mapping...
    2018-01-04 19:16:31 LOG <manager> [elasticsearch_75b27.create] INFO: Declaring deployment update mapping...
    2018-01-04 19:16:31 LOG <manager> [elasticsearch_75b27.create] INFO: Waiting for shards to be active...
    2018-01-04 19:16:31 CFY <manager> [elasticsearch_75b27.create] Task succeeded 'fabric_plugin.tasks.run_script'
    2018-01-04 19:16:31 CFY <manager> [amqp_influx_ea8bf] Configuring node
    2018-01-04 19:16:32 CFY <manager> [elasticsearch_75b27] Configuring node
    2018-01-04 19:16:32 CFY <manager> [amqp_influx_ea8bf] Starting node
    2018-01-04 19:16:32 CFY <manager> [amqp_influx_ea8bf.start] Sending task 'fabric_plugin.tasks.run_script'
    2018-01-04 19:16:32 CFY <manager> [amqp_influx_ea8bf.start] Task started 'fabric_plugin.tasks.run_script'
    2018-01-04 19:16:32 LOG <manager> [amqp_influx_ea8bf.start] INFO: Preparing fabric environment...
    2018-01-04 19:16:32 LOG <manager> [amqp_influx_ea8bf.start] INFO: Environment prepared successfully
    2018-01-04 19:16:33 LOG <manager> [amqp_influx_ea8bf.start] INFO: Starting AMQP-Influx Broker Service...
    2018-01-04 19:16:33 LOG <manager> [amqp_influx_ea8bf.start] INFO: Starting systemd service cloudify-amqpinflux...
    2018-01-04 19:16:33 CFY <manager> [amqp_influx_ea8bf.start] Task succeeded 'fabric_plugin.tasks.run_script'
    2018-01-04 19:16:33 CFY <manager> [elasticsearch_75b27] Starting node
    2018-01-04 19:16:33 CFY <manager> [elasticsearch_75b27.start] Sending task 'fabric_plugin.tasks.run_script'
    2018-01-04 19:16:33 CFY <manager> [elasticsearch_75b27.start] Task started 'fabric_plugin.tasks.run_script'
    2018-01-04 19:16:33 LOG <manager> [elasticsearch_75b27.start] INFO: Preparing fabric environment...
    2018-01-04 19:16:33 LOG <manager> [elasticsearch_75b27.start] INFO: Environment prepared successfully
    2018-01-04 19:16:34 LOG <manager> [elasticsearch_75b27.start] INFO: Starting Elasticsearch Service...
    2018-01-04 19:16:34 LOG <manager> [elasticsearch_75b27.start] INFO: Starting systemd service elasticsearch...
    2018-01-04 19:16:34 LOG <manager> [elasticsearch_75b27.start] INFO: elasticsearch is running
    2018-01-04 19:16:35 CFY <manager> [elasticsearch_75b27.start] Task succeeded 'fabric_plugin.tasks.run_script'
    2018-01-04 19:16:35 CFY <manager> [rest_service_ec35b] Creating node
    2018-01-04 19:16:35 CFY <manager> [logstash_2a99b] Creating node
    2018-01-04 19:16:36 CFY <manager> [logstash_2a99b.create] Sending task 'fabric_plugin.tasks.run_script'
    2018-01-04 19:16:36 CFY <manager> [rest_service_ec35b.create] Sending task 'fabric_plugin.tasks.run_script'
    2018-01-04 19:16:36 CFY <manager> [logstash_2a99b.create] Task started 'fabric_plugin.tasks.run_script'
    2018-01-04 19:16:36 LOG <manager> [logstash_2a99b.create] INFO: Preparing fabric environment...
    2018-01-04 19:16:36 LOG <manager> [logstash_2a99b.create] INFO: Environment prepared successfully
    2018-01-04 19:16:36 LOG <manager> [logstash_2a99b.create] INFO: Saving logstash input configuration to /opt/cloudify/logstash/node_properties/properties.json
    2018-01-04 19:16:38 LOG <manager> [logstash_2a99b.create] INFO: Installing Logstash...
    2018-01-04 19:16:38 LOG <manager> [logstash_2a99b.create] INFO: Checking whether SELinux in enforced...
    2018-01-04 19:16:38 LOG <manager> [logstash_2a99b.create] INFO: SELinux is not enforced.
    2018-01-04 19:16:38 LOG <manager> [logstash_2a99b.create] INFO: Downloading resource logstash_NOTICE.txt to /opt/cloudify/logstash/resources/logstash_NOTICE.txt
    2018-01-04 19:16:41 LOG <manager> [logstash_2a99b.create] INFO: Checking whether /opt/cloudify/logstash/resources/logstash-1.5.0-1.noarch.rpm is already installed...
    2018-01-04 19:16:41 LOG <manager> [logstash_2a99b.create] INFO: yum installing /opt/cloudify/logstash/resources/logstash-1.5.0-1.noarch.rpm...
    2018-01-04 19:16:48 LOG <manager> [logstash_2a99b.create] INFO: chowning /var/log/cloudify/logstash by logstash:logstash...
    2018-01-04 19:16:48 LOG <manager> [logstash_2a99b.create] INFO: Creating systemd unit override...
    2018-01-04 19:16:48 LOG <manager> [logstash_2a99b.create] INFO: Deploying blueprint resource components/logstash/config/restart.conf to /etc/systemd/system/logstash.service.d/restart.conf
    2018-01-04 19:16:49 LOG <manager> [logstash_2a99b.create] INFO: Downloading resource restart.conf to /opt/cloudify/logstash/resources/restart.conf
    2018-01-04 19:16:50 LOG <manager> [logstash_2a99b.create] INFO: Deploying Logstash conf...
    2018-01-04 19:16:50 LOG <manager> [logstash_2a99b.create] INFO: Deploying blueprint resource components/logstash/config/logstash.conf to /etc/logstash/conf.d/logstash.conf
    2018-01-04 19:16:50 LOG <manager> [logstash_2a99b.create] INFO: Downloading resource logstash.conf to /opt/cloudify/logstash/resources/logstash.conf
    2018-01-04 19:16:51 LOG <manager> [logstash_2a99b.create] INFO: Replacing sysconfig/\$name with sysconfig/cloudify-$name in /etc/init.d/logstash...
    2018-01-04 19:16:52 LOG <manager> [logstash_2a99b.create] INFO: chmoding /etc/init.d/logstash: 755
    2018-01-04 19:16:52 LOG <manager> [logstash_2a99b.create] INFO: chowning /etc/init.d/logstash by root:root...
    2018-01-04 19:16:52 LOG <manager> [logstash_2a99b.create] INFO: Deploying Logstash sysconfig...
    2018-01-04 19:16:52 LOG <manager> [logstash_2a99b.create] INFO: Deploying blueprint resource components/logstash/config/cloudify-logstash to /etc/sysconfig/cloudify-logstash
    2018-01-04 19:16:52 LOG <manager> [logstash_2a99b.create] INFO: Downloading resource cloudify-logstash to /opt/cloudify/logstash/resources/cloudify-logstash
    2018-01-04 19:16:54 LOG <manager> [logstash_2a99b.create] INFO: Deploying logrotate config...
    2018-01-04 19:16:54 LOG <manager> [logstash_2a99b.create] INFO: Deploying blueprint resource components/logstash/config/logrotate to /etc/logrotate.d/logstash
    2018-01-04 19:16:54 LOG <manager> [logstash_2a99b.create] INFO: Downloading resource logstash to /opt/cloudify/logstash/resources/logstash
    2018-01-04 19:16:55 LOG <manager> [logstash_2a99b.create] INFO: chmoding /etc/logrotate.d/logstash: 644
    2018-01-04 19:16:55 LOG <manager> [logstash_2a99b.create] INFO: chowning /etc/logrotate.d/logstash by root:root...
    2018-01-04 19:16:56 CFY <manager> [logstash_2a99b.create] Task succeeded 'fabric_plugin.tasks.run_script'
    2018-01-04 19:16:56 CFY <manager> [rest_service_ec35b.create] Task started 'fabric_plugin.tasks.run_script'
    2018-01-04 19:16:56 LOG <manager> [rest_service_ec35b.create] INFO: Preparing fabric environment...
    2018-01-04 19:16:56 LOG <manager> [rest_service_ec35b.create] INFO: Environment prepared successfully
    2018-01-04 19:16:57 LOG <manager> [rest_service_ec35b.create] INFO: Saving restservice input configuration to /opt/cloudify/restservice/node_properties/properties.json
    2018-01-04 19:16:57 LOG <manager> [rest_service_ec35b.create] INFO: Installing REST Service...
    2018-01-04 19:16:57 LOG <manager> [rest_service_ec35b.create] INFO: Checking whether SELinux in enforced...
    2018-01-04 19:16:58 LOG <manager> [rest_service_ec35b.create] INFO: SELinux is not enforced.
    2018-01-04 19:16:58 LOG <manager> [rest_service_ec35b.create] INFO: Downloading resource restservice_NOTICE.txt to /opt/cloudify/restservice/resources/restservice_NOTICE.txt
    2018-01-04 19:17:00 LOG <manager> [rest_service_ec35b.create] WARNING: Broker SSL cert supplied but SSL not enabled (broker_ssl_enabled is False).
    2018-01-04 19:17:01 LOG <manager> [rest_service_ec35b.create] INFO: Checking whether /opt/cloudify/restservice/resources/cloudify-rest-service-3.4.0-ga_b400.x86_64.rpm is already installed...
    2018-01-04 19:17:02 LOG <manager> [rest_service_ec35b.create] INFO: yum installing /opt/cloudify/restservice/resources/cloudify-rest-service-3.4.0-ga_b400.x86_64.rpm...
    2018-01-04 19:17:28 LOG <manager> [rest_service_ec35b.create] INFO: Installing Optional Packages if supplied...
    2018-01-04 19:17:28 LOG <manager> [rest_service_ec35b.create] INFO: Deploying logrotate config...
    2018-01-04 19:17:28 LOG <manager> [rest_service_ec35b.create] INFO: Deploying blueprint resource components/restservice/config/logrotate to /etc/logrotate.d/restservice
    2018-01-04 19:17:28 LOG <manager> [rest_service_ec35b.create] INFO: Downloading resource restservice to /opt/cloudify/restservice/resources/restservice
    2018-01-04 19:17:29 LOG <manager> [rest_service_ec35b.create] INFO: chmoding /etc/logrotate.d/restservice: 644
    2018-01-04 19:17:30 LOG <manager> [rest_service_ec35b.create] INFO: chowning /etc/logrotate.d/restservice by root:root...
    2018-01-04 19:17:30 LOG <manager> [rest_service_ec35b.create] INFO: Copying role configuration files...
    2018-01-04 19:17:30 LOG <manager> [rest_service_ec35b.create] INFO: Deploying blueprint resource resources/rest/roles_config.yaml to /opt/manager/roles_config.yaml
    2018-01-04 19:17:30 LOG <manager> [rest_service_ec35b.create] INFO: Downloading resource roles_config.yaml to /opt/cloudify/restservice/resources/roles_config.yaml
    2018-01-04 19:17:31 LOG <manager> [rest_service_ec35b.create] INFO: Deploying blueprint resource resources/rest/userstore.yaml to /opt/manager/userstore.yaml
    2018-01-04 19:17:31 LOG <manager> [rest_service_ec35b.create] INFO: Downloading resource userstore.yaml to /opt/cloudify/restservice/resources/userstore.yaml
    2018-01-04 19:17:33 LOG <manager> [rest_service_ec35b.create] INFO: Deploying REST Service Configuration file...
    2018-01-04 19:17:33 LOG <manager> [rest_service_ec35b.create] INFO: Deploying blueprint resource components/restservice/config/cloudify-rest.conf to /opt/manager/cloudify-rest.conf
    2018-01-04 19:17:33 LOG <manager> [rest_service_ec35b.create] INFO: Downloading resource cloudify-rest.conf to /opt/cloudify/restservice/resources/cloudify-rest.conf
    2018-01-04 19:17:34 CFY <manager> [rest_service_ec35b.create] Task succeeded 'fabric_plugin.tasks.run_script'
    2018-01-04 19:17:35 CFY <manager> [rest_service_ec35b->manager_configuration_bb27d|preconfigure] Sending task 'fabric_plugin.tasks.run_script'
    2018-01-04 19:17:35 CFY <manager> [rest_service_ec35b->manager_configuration_bb27d|preconfigure] Task started 'fabric_plugin.tasks.run_script'
    2018-01-04 19:17:35 LOG <manager> [rest_service_ec35b->manager_configuration_bb27d|preconfigure] INFO: Preparing fabric environment...
    2018-01-04 19:17:35 LOG <manager> [rest_service_ec35b->manager_configuration_bb27d|preconfigure] INFO: Environment prepared successfully
    2018-01-04 19:17:35 LOG <manager> [rest_service_ec35b->manager_configuration_bb27d|preconfigure] INFO: Deploying REST Security configuration file...
    2018-01-04 19:17:35 LOG <manager> [rest_service_ec35b->manager_configuration_bb27d|preconfigure] INFO: Loading security configuration
    2018-01-04 19:17:36 LOG <manager> [rest_service_ec35b->manager_configuration_bb27d|preconfigure] INFO: Deploying systemd EnvironmentFile...
    2018-01-04 19:17:36 LOG <manager> [rest_service_ec35b->manager_configuration_bb27d|preconfigure] INFO: Deploying blueprint resource components/restservice/config/cloudify-restservice to /etc/sysconfig/cloudify-restservice
    2018-01-04 19:17:36 LOG <manager> [rest_service_ec35b->manager_configuration_bb27d|preconfigure] INFO: Downloading resource cloudify-restservice to /opt/cloudify/restservice/resources/cloudify-restservice
    2018-01-04 19:17:37 LOG <manager> [rest_service_ec35b->manager_configuration_bb27d|preconfigure] INFO: Deploying systemd .service file...
    2018-01-04 19:17:37 LOG <manager> [rest_service_ec35b->manager_configuration_bb27d|preconfigure] INFO: Deploying blueprint resource components/restservice/config/cloudify-restservice.service to /usr/lib/systemd/system/cloudify-restservice.service
    2018-01-04 19:17:38 LOG <manager> [rest_service_ec35b->manager_configuration_bb27d|preconfigure] INFO: Downloading resource cloudify-restservice.service to /opt/cloudify/restservice/resources/cloudify-restservice.service
    2018-01-04 19:17:39 LOG <manager> [rest_service_ec35b->manager_configuration_bb27d|preconfigure] INFO: Enabling systemd .service...
    2018-01-04 19:17:39 CFY <manager> [rest_service_ec35b->manager_configuration_bb27d|preconfigure] Task succeeded 'fabric_plugin.tasks.run_script'
    2018-01-04 19:17:39 CFY <manager> [logstash_2a99b] Configuring node
    2018-01-04 19:17:40 CFY <manager> [rest_service_ec35b] Configuring node
    2018-01-04 19:17:40 CFY <manager> [rest_service_ec35b.configure] Sending task 'fabric_plugin.tasks.run_task'
    2018-01-04 19:17:40 CFY <manager> [rest_service_ec35b.configure] Task started 'fabric_plugin.tasks.run_task'
    2018-01-04 19:17:40 LOG <manager> [rest_service_ec35b.configure] INFO: Running task: install_plugins from components/restservice/scripts/install_plugins.py
    2018-01-04 19:17:40 LOG <manager> [rest_service_ec35b.configure] INFO: Preparing fabric environment...
    2018-01-04 19:17:40 LOG <manager> [rest_service_ec35b.configure] INFO: Environment prepared successfully
    2018-01-04 19:17:40 LOG <manager> [rest_service_ec35b.configure] INFO: Installing plugins
    2018-01-04 19:17:40 CFY <manager> [rest_service_ec35b.configure] Task succeeded 'fabric_plugin.tasks.run_task'
    2018-01-04 19:17:40 CFY <manager> [logstash_2a99b] Starting node
    2018-01-04 19:17:40 CFY <manager> [logstash_2a99b.start] Sending task 'fabric_plugin.tasks.run_script'
    2018-01-04 19:17:40 CFY <manager> [logstash_2a99b.start] Task started 'fabric_plugin.tasks.run_script'
    2018-01-04 19:17:40 LOG <manager> [logstash_2a99b.start] INFO: Preparing fabric environment...
    2018-01-04 19:17:40 LOG <manager> [logstash_2a99b.start] INFO: Environment prepared successfully
    2018-01-04 19:17:41 LOG <manager> [logstash_2a99b.start] INFO: Starting Logstash Service...
    2018-01-04 19:17:41 LOG <manager> [logstash_2a99b.start] INFO: Starting systemd service logstash...
    2018-01-04 19:17:42 LOG <manager> [logstash_2a99b.start] INFO: logstash is running
    2018-01-04 19:17:42 CFY <manager> [logstash_2a99b.start] Task succeeded 'fabric_plugin.tasks.run_script'
    2018-01-04 19:17:42 CFY <manager> [rest_service_ec35b] Starting node
    2018-01-04 19:17:42 CFY <manager> [rest_service_ec35b.start] Sending task 'fabric_plugin.tasks.run_script'
    2018-01-04 19:17:42 CFY <manager> [rest_service_ec35b.start] Task started 'fabric_plugin.tasks.run_script'
    2018-01-04 19:17:42 LOG <manager> [rest_service_ec35b.start] INFO: Preparing fabric environment...
    2018-01-04 19:17:42 LOG <manager> [rest_service_ec35b.start] INFO: Environment prepared successfully
    2018-01-04 19:17:42 LOG <manager> [rest_service_ec35b.start] INFO: Starting Cloudify REST Service...
    2018-01-04 19:17:43 LOG <manager> [rest_service_ec35b.start] INFO: Starting systemd service cloudify-restservice...
    2018-01-04 19:17:43 LOG <manager> [rest_service_ec35b.start] INFO: restservice is running
    2018-01-04 19:17:44 CFY <manager> [rest_service_ec35b.start] Task succeeded 'fabric_plugin.tasks.run_script'
    2018-01-04 19:17:45 CFY <manager> [nginx_f39ff] Creating node
    2018-01-04 19:17:45 CFY <manager> [webui_1fb18] Creating node
    2018-01-04 19:17:45 CFY <manager> [webui_1fb18.create] Sending task 'fabric_plugin.tasks.run_script'
    2018-01-04 19:17:45 CFY <manager> [nginx_f39ff.create] Sending task 'fabric_plugin.tasks.run_script'
    2018-01-04 19:17:45 CFY <manager> [webui_1fb18.create] Task started 'fabric_plugin.tasks.run_script'
    2018-01-04 19:17:45 LOG <manager> [webui_1fb18.create] INFO: Preparing fabric environment...
    2018-01-04 19:17:45 LOG <manager> [webui_1fb18.create] INFO: Environment prepared successfully
    2018-01-04 19:17:45 LOG <manager> [webui_1fb18.create] INFO: Saving webui input configuration to /opt/cloudify/webui/node_properties/properties.json
    2018-01-04 19:17:46 LOG <manager> [webui_1fb18.create] INFO: Installing Cloudify's WebUI...
    2018-01-04 19:17:46 LOG <manager> [webui_1fb18.create] INFO: Checking whether SELinux in enforced...
    2018-01-04 19:17:46 LOG <manager> [webui_1fb18.create] INFO: SELinux is not enforced.
    2018-01-04 19:17:47 LOG <manager> [webui_1fb18.create] INFO: Downloading resource webui_NOTICE.txt to /opt/cloudify/webui/resources/webui_NOTICE.txt
    2018-01-04 19:17:49 LOG <manager> [webui_1fb18.create] INFO: Checking whether user webui exists...
    2018-01-04 19:17:49 LOG <manager> [webui_1fb18.create] INFO: Creating user webui, home: /opt/cloudify-ui...
    2018-01-04 19:17:50 LOG <manager> [webui_1fb18.create] INFO: Installing NodeJS...
    2018-01-04 19:17:51 LOG <manager> [webui_1fb18.create] INFO: Installing Cloudify's WebUI...
    2018-01-04 19:17:53 LOG <manager> [webui_1fb18.create] INFO: Installing Grafana...
    2018-01-04 19:17:55 LOG <manager> [webui_1fb18.create] INFO: Deploying WebUI Configuration...
    2018-01-04 19:17:55 LOG <manager> [webui_1fb18.create] INFO: Deploying blueprint resource components/webui/config/gsPresets.json to /opt/cloudify-ui/backend/gsPresets.json
    2018-01-04 19:17:55 LOG <manager> [webui_1fb18.create] INFO: Downloading resource gsPresets.json to /opt/cloudify/webui/resources/gsPresets.json
    2018-01-04 19:17:56 LOG <manager> [webui_1fb18.create] INFO: Deploying Grafana Configuration...
    2018-01-04 19:17:56 LOG <manager> [webui_1fb18.create] INFO: Deploying blueprint resource components/webui/config/grafana_config.js to /opt/cloudify-ui/grafana/config.js
    2018-01-04 19:17:56 LOG <manager> [webui_1fb18.create] INFO: Downloading resource config.js to /opt/cloudify/webui/resources/config.js
    2018-01-04 19:17:58 LOG <manager> [webui_1fb18.create] INFO: Fixing permissions...
    2018-01-04 19:17:58 LOG <manager> [webui_1fb18.create] INFO: chowning /opt/cloudify-ui by webui:webui...
    2018-01-04 19:17:58 LOG <manager> [webui_1fb18.create] INFO: chowning /opt/nodejs by webui:webui...
    2018-01-04 19:17:58 LOG <manager> [webui_1fb18.create] INFO: chowning /var/log/cloudify/webui by webui:webui...
    2018-01-04 19:17:59 LOG <manager> [webui_1fb18.create] INFO: Deploying logrotate config...
    2018-01-04 19:17:59 LOG <manager> [webui_1fb18.create] INFO: Deploying blueprint resource components/webui/config/logrotate to /etc/logrotate.d/webui
    2018-01-04 19:17:59 LOG <manager> [webui_1fb18.create] INFO: Downloading resource webui to /opt/cloudify/webui/resources/webui
    2018-01-04 19:18:00 LOG <manager> [webui_1fb18.create] INFO: chmoding /etc/logrotate.d/webui: 644
    2018-01-04 19:18:00 LOG <manager> [webui_1fb18.create] INFO: chowning /etc/logrotate.d/webui by root:root...
    2018-01-04 19:18:00 LOG <manager> [webui_1fb18.create] INFO: Deploying systemd EnvironmentFile...
    2018-01-04 19:18:01 LOG <manager> [webui_1fb18.create] INFO: Deploying blueprint resource components/webui/config/cloudify-webui to /etc/sysconfig/cloudify-webui
    2018-01-04 19:18:01 LOG <manager> [webui_1fb18.create] INFO: Downloading resource cloudify-webui to /opt/cloudify/webui/resources/cloudify-webui
    2018-01-04 19:18:02 LOG <manager> [webui_1fb18.create] INFO: Deploying systemd .service file...
    2018-01-04 19:18:02 LOG <manager> [webui_1fb18.create] INFO: Deploying blueprint resource components/webui/config/cloudify-webui.service to /usr/lib/systemd/system/cloudify-webui.service
    2018-01-04 19:18:02 LOG <manager> [webui_1fb18.create] INFO: Downloading resource cloudify-webui.service to /opt/cloudify/webui/resources/cloudify-webui.service
    2018-01-04 19:18:03 LOG <manager> [webui_1fb18.create] INFO: Enabling systemd .service...
    2018-01-04 19:18:04 CFY <manager> [webui_1fb18.create] Task succeeded 'fabric_plugin.tasks.run_script'
    2018-01-04 19:18:04 CFY <manager> [nginx_f39ff.create] Task started 'fabric_plugin.tasks.run_script'
    2018-01-04 19:18:04 LOG <manager> [nginx_f39ff.create] INFO: Preparing fabric environment...
    2018-01-04 19:18:04 LOG <manager> [nginx_f39ff.create] INFO: Environment prepared successfully
    2018-01-04 19:18:05 LOG <manager> [nginx_f39ff.create] INFO: Saving nginx input configuration to /opt/cloudify/nginx/node_properties/properties.json
    2018-01-04 19:18:06 LOG <manager> [nginx_f39ff.create] INFO: Installing Nginx...
    2018-01-04 19:18:06 LOG <manager> [nginx_f39ff.create] INFO: Checking whether SELinux in enforced...
    2018-01-04 19:18:06 LOG <manager> [nginx_f39ff.create] INFO: SELinux is not enforced.
    2018-01-04 19:18:06 LOG <manager> [nginx_f39ff.create] INFO: Downloading resource nginx_NOTICE.txt to /opt/cloudify/nginx/resources/nginx_NOTICE.txt
    2018-01-04 19:18:09 LOG <manager> [nginx_f39ff.create] INFO: Checking whether /opt/cloudify/nginx/resources/nginx-1.8.0-1.el7.ngx.x86_64.rpm is already installed...
    2018-01-04 19:18:10 LOG <manager> [nginx_f39ff.create] INFO: yum installing /opt/cloudify/nginx/resources/nginx-1.8.0-1.el7.ngx.x86_64.rpm...
    2018-01-04 19:18:11 LOG <manager> [nginx_f39ff.create] INFO: Creating systemd unit override...
    2018-01-04 19:18:11 LOG <manager> [nginx_f39ff.create] INFO: Deploying blueprint resource components/nginx/config/restart.conf to /etc/systemd/system/nginx.service.d/restart.conf
    2018-01-04 19:18:11 LOG <manager> [nginx_f39ff.create] INFO: Downloading resource restart.conf to /opt/cloudify/nginx/resources/restart.conf
    2018-01-04 19:18:13 LOG <manager> [nginx_f39ff.create] INFO: Deploying logrotate config...
    2018-01-04 19:18:13 LOG <manager> [nginx_f39ff.create] INFO: Deploying blueprint resource components/nginx/config/logrotate to /etc/logrotate.d/nginx
    2018-01-04 19:18:13 LOG <manager> [nginx_f39ff.create] INFO: Downloading resource nginx to /opt/cloudify/nginx/resources/nginx
    2018-01-04 19:18:14 LOG <manager> [nginx_f39ff.create] INFO: chmoding /etc/logrotate.d/nginx: 644
    2018-01-04 19:18:14 LOG <manager> [nginx_f39ff.create] INFO: chowning /etc/logrotate.d/nginx by root:root...
    2018-01-04 19:18:15 CFY <manager> [nginx_f39ff.create] Task succeeded 'fabric_plugin.tasks.run_script'
    2018-01-04 19:18:15 CFY <manager> [nginx_f39ff->manager_configuration_bb27d|preconfigure] Sending task 'fabric_plugin.tasks.run_script'
    2018-01-04 19:18:15 CFY <manager> [nginx_f39ff->manager_configuration_bb27d|preconfigure] Task started 'fabric_plugin.tasks.run_script'
    2018-01-04 19:18:15 LOG <manager> [nginx_f39ff->manager_configuration_bb27d|preconfigure] INFO: Preparing fabric environment...
    2018-01-04 19:18:15 LOG <manager> [nginx_f39ff->manager_configuration_bb27d|preconfigure] INFO: Environment prepared successfully
    2018-01-04 19:18:16 LOG <manager> [nginx_f39ff->manager_configuration_bb27d|preconfigure] INFO: Deploying Nginx configuration files...
    2018-01-04 19:18:16 LOG <manager> [nginx_f39ff->manager_configuration_bb27d|preconfigure] INFO: Deploying blueprint resource components/nginx/config/http-rest-server.cloudify to /etc/nginx/conf.d/http-rest-server.cloudify
    2018-01-04 19:18:16 LOG <manager> [nginx_f39ff->manager_configuration_bb27d|preconfigure] INFO: Downloading resource http-rest-server.cloudify to /opt/cloudify/nginx/resources/http-rest-server.cloudify
    2018-01-04 19:18:17 LOG <manager> [nginx_f39ff->manager_configuration_bb27d|preconfigure] INFO: Deploying blueprint resource components/nginx/config/nginx.conf to /etc/nginx/nginx.conf
    2018-01-04 19:18:17 LOG <manager> [nginx_f39ff->manager_configuration_bb27d|preconfigure] INFO: Downloading resource nginx.conf to /opt/cloudify/nginx/resources/nginx.conf
    2018-01-04 19:18:19 LOG <manager> [nginx_f39ff->manager_configuration_bb27d|preconfigure] INFO: Deploying blueprint resource components/nginx/config/default.conf to /etc/nginx/conf.d/default.conf
    2018-01-04 19:18:19 LOG <manager> [nginx_f39ff->manager_configuration_bb27d|preconfigure] INFO: Downloading resource default.conf to /opt/cloudify/nginx/resources/default.conf
    2018-01-04 19:18:20 LOG <manager> [nginx_f39ff->manager_configuration_bb27d|preconfigure] INFO: Deploying blueprint resource components/nginx/config/rest-location.cloudify to /etc/nginx/conf.d/rest-location.cloudify
    2018-01-04 19:18:20 LOG <manager> [nginx_f39ff->manager_configuration_bb27d|preconfigure] INFO: Downloading resource rest-location.cloudify to /opt/cloudify/nginx/resources/rest-location.cloudify
    2018-01-04 19:18:21 LOG <manager> [nginx_f39ff->manager_configuration_bb27d|preconfigure] INFO: Deploying blueprint resource components/nginx/config/fileserver-location.cloudify to /etc/nginx/conf.d/fileserver-location.cloudify
    2018-01-04 19:18:21 LOG <manager> [nginx_f39ff->manager_configuration_bb27d|preconfigure] INFO: Downloading resource fileserver-location.cloudify to /opt/cloudify/nginx/resources/fileserver-location.cloudify
    2018-01-04 19:18:23 LOG <manager> [nginx_f39ff->manager_configuration_bb27d|preconfigure] INFO: Deploying blueprint resource components/nginx/config/ui-locations.cloudify to /etc/nginx/conf.d/ui-locations.cloudify
    2018-01-04 19:18:23 LOG <manager> [nginx_f39ff->manager_configuration_bb27d|preconfigure] INFO: Downloading resource ui-locations.cloudify to /opt/cloudify/nginx/resources/ui-locations.cloudify
    2018-01-04 19:18:24 LOG <manager> [nginx_f39ff->manager_configuration_bb27d|preconfigure] INFO: Deploying blueprint resource components/nginx/config/logs-conf.cloudify to /etc/nginx/conf.d/logs-conf.cloudify
    2018-01-04 19:18:24 LOG <manager> [nginx_f39ff->manager_configuration_bb27d|preconfigure] INFO: Downloading resource logs-conf.cloudify to /opt/cloudify/nginx/resources/logs-conf.cloudify
    2018-01-04 19:18:25 LOG <manager> [nginx_f39ff->manager_configuration_bb27d|preconfigure] INFO: Enabling systemd service nginx...
    2018-01-04 19:18:26 CFY <manager> [nginx_f39ff->manager_configuration_bb27d|preconfigure] Task succeeded 'fabric_plugin.tasks.run_script'
    2018-01-04 19:18:26 CFY <manager> [webui_1fb18] Configuring node
    2018-01-04 19:18:26 CFY <manager> [nginx_f39ff] Configuring node
    2018-01-04 19:18:26 CFY <manager> [nginx_f39ff.configure] Sending task 'fabric_plugin.tasks.run_task'
    2018-01-04 19:18:26 CFY <manager> [nginx_f39ff.configure] Task started 'fabric_plugin.tasks.run_task'
    2018-01-04 19:18:26 LOG <manager> [nginx_f39ff.configure] INFO: Running task: retrieve from components/nginx/scripts/retrieve_agents.py
    2018-01-04 19:18:26 LOG <manager> [nginx_f39ff.configure] INFO: Preparing fabric environment...
    2018-01-04 19:18:26 LOG <manager> [nginx_f39ff.configure] INFO: Environment prepared successfully
    2018-01-04 19:18:26 LOG <manager> [nginx_f39ff.configure] INFO: Downloading Cloudify Agents...
    2018-01-04 19:18:26 CFY <manager> [nginx_f39ff.configure] Task succeeded 'fabric_plugin.tasks.run_task'
    2018-01-04 19:18:26 CFY <manager> [nginx_f39ff->manager_configuration_bb27d|postconfigure] Sending task 'script_runner.tasks.run'
    2018-01-04 19:18:26 CFY <manager> [nginx_f39ff->manager_configuration_bb27d|postconfigure] Task started 'script_runner.tasks.run'
    2018-01-04 19:18:26 LOG <manager> [nginx_f39ff->manager_configuration_bb27d|postconfigure] INFO: Setting Private Manager IP Runtime Property.
    2018-01-04 19:18:26 LOG <manager> [nginx_f39ff->manager_configuration_bb27d|postconfigure] INFO: Manager Private IP is: 10.0.0.11
    2018-01-04 19:18:26 CFY <manager> [nginx_f39ff->manager_configuration_bb27d|postconfigure] Task succeeded 'script_runner.tasks.run'
    2018-01-04 19:18:26 CFY <manager> [webui_1fb18] Starting node
    2018-01-04 19:18:26 CFY <manager> [webui_1fb18.start] Sending task 'fabric_plugin.tasks.run_script'
    2018-01-04 19:18:26 CFY <manager> [webui_1fb18.start] Task started 'fabric_plugin.tasks.run_script'
    2018-01-04 19:18:26 LOG <manager> [webui_1fb18.start] INFO: Preparing fabric environment...
    2018-01-04 19:18:26 LOG <manager> [webui_1fb18.start] INFO: Environment prepared successfully
    2018-01-04 19:18:27 LOG <manager> [webui_1fb18.start] INFO: Starting WebUI Service...
    2018-01-04 19:18:27 LOG <manager> [webui_1fb18.start] INFO: Starting systemd service cloudify-webui...
    2018-01-04 19:18:28 CFY <manager> [webui_1fb18.start] Task succeeded 'fabric_plugin.tasks.run_script'
    2018-01-04 19:18:28 CFY <manager> [nginx_f39ff] Starting node
    2018-01-04 19:18:28 CFY <manager> [nginx_f39ff.start] Sending task 'fabric_plugin.tasks.run_script'
    2018-01-04 19:18:28 CFY <manager> [nginx_f39ff.start] Task started 'fabric_plugin.tasks.run_script'
    2018-01-04 19:18:28 LOG <manager> [nginx_f39ff.start] INFO: Preparing fabric environment...
    2018-01-04 19:18:28 LOG <manager> [nginx_f39ff.start] INFO: Environment prepared successfully
    2018-01-04 19:18:28 LOG <manager> [nginx_f39ff.start] INFO: Starting systemd service nginx...
    2018-01-04 19:18:29 LOG <manager> [nginx_f39ff.start] INFO: nginx is running
    2018-01-04 19:18:29 CFY <manager> [nginx_f39ff.start] Task succeeded 'fabric_plugin.tasks.run_script'
    2018-01-04 19:18:30 CFY <manager> [riemann_bc532] Creating node
    2018-01-04 19:18:30 CFY <manager> [mgmt_worker_71792] Creating node
    2018-01-04 19:18:30 CFY <manager> [mgmt_worker_71792.create] Sending task 'fabric_plugin.tasks.run_script'
    2018-01-04 19:18:30 CFY <manager> [riemann_bc532.create] Sending task 'fabric_plugin.tasks.run_script'
    2018-01-04 19:18:30 CFY <manager> [mgmt_worker_71792.create] Task started 'fabric_plugin.tasks.run_script'
    2018-01-04 19:18:30 LOG <manager> [mgmt_worker_71792.create] INFO: Preparing fabric environment...
    2018-01-04 19:18:30 LOG <manager> [mgmt_worker_71792.create] INFO: Environment prepared successfully
    2018-01-04 19:18:31 LOG <manager> [mgmt_worker_71792.create] INFO: Saving mgmtworker input configuration to /opt/cloudify/mgmtworker/node_properties/properties.json
    2018-01-04 19:18:32 LOG <manager> [mgmt_worker_71792.create] INFO: rabbitmq_ssl_enabled: False
    2018-01-04 19:18:32 LOG <manager> [mgmt_worker_71792.create] INFO: Installing Management Worker...
    2018-01-04 19:18:32 LOG <manager> [mgmt_worker_71792.create] INFO: Checking whether SELinux in enforced...
    2018-01-04 19:18:32 LOG <manager> [mgmt_worker_71792.create] INFO: SELinux is not enforced.
    2018-01-04 19:18:33 LOG <manager> [mgmt_worker_71792.create] INFO: Downloading resource mgmtworker_NOTICE.txt to /opt/cloudify/mgmtworker/resources/mgmtworker_NOTICE.txt
    2018-01-04 19:18:36 LOG <manager> [mgmt_worker_71792.create] INFO: Checking whether /opt/cloudify/mgmtworker/resources/cloudify-management-worker-3.4.0-ga_b400.x86_64.rpm is already installed...
    2018-01-04 19:18:37 LOG <manager> [mgmt_worker_71792.create] INFO: yum installing /opt/cloudify/mgmtworker/resources/cloudify-management-worker-3.4.0-ga_b400.x86_64.rpm...
    2018-01-04 19:19:02 LOG <manager> [mgmt_worker_71792.create] INFO: Installing Optional Packages if supplied...
    2018-01-04 19:19:03 LOG <manager> [mgmt_worker_71792.create] WARNING: Broker SSL cert supplied but SSL not enabled (broker_ssl_enabled is False).
    2018-01-04 19:19:03 LOG <manager> [mgmt_worker_71792.create] INFO: broker_port: 5672
    2018-01-04 19:19:03 LOG <manager> [mgmt_worker_71792.create] INFO: Configuring Management worker...
    2018-01-04 19:19:03 LOG <manager> [mgmt_worker_71792.create] INFO: Deploying blueprint resource components/mgmtworker/config/broker_config.json to /opt/mgmtworker/work/broker_config.json
    2018-01-04 19:19:03 LOG <manager> [mgmt_worker_71792.create] INFO: Downloading resource broker_config.json to /opt/cloudify/mgmtworker/resources/broker_config.json
    2018-01-04 19:19:05 LOG <manager> [mgmt_worker_71792.create] INFO: Deploying systemd EnvironmentFile...
    2018-01-04 19:19:05 LOG <manager> [mgmt_worker_71792.create] INFO: Deploying blueprint resource components/mgmtworker/config/cloudify-mgmtworker to /etc/sysconfig/cloudify-mgmtworker
    2018-01-04 19:19:05 LOG <manager> [mgmt_worker_71792.create] INFO: Downloading resource cloudify-mgmtworker to /opt/cloudify/mgmtworker/resources/cloudify-mgmtworker
    2018-01-04 19:19:06 LOG <manager> [mgmt_worker_71792.create] INFO: Deploying systemd .service file...
    2018-01-04 19:19:06 LOG <manager> [mgmt_worker_71792.create] INFO: Deploying blueprint resource components/mgmtworker/config/cloudify-mgmtworker.service to /usr/lib/systemd/system/cloudify-mgmtworker.service
    2018-01-04 19:19:06 LOG <manager> [mgmt_worker_71792.create] INFO: Downloading resource cloudify-mgmtworker.service to /opt/cloudify/mgmtworker/resources/cloudify-mgmtworker.service
    2018-01-04 19:19:07 LOG <manager> [mgmt_worker_71792.create] INFO: Enabling systemd .service...
    2018-01-04 19:19:08 LOG <manager> [mgmt_worker_71792.create] INFO: Deploying logrotate config...
    2018-01-04 19:19:08 LOG <manager> [mgmt_worker_71792.create] INFO: Deploying blueprint resource components/mgmtworker/config/logrotate to /etc/logrotate.d/mgmtworker
    2018-01-04 19:19:08 LOG <manager> [mgmt_worker_71792.create] INFO: Downloading resource mgmtworker to /opt/cloudify/mgmtworker/resources/mgmtworker
    2018-01-04 19:19:09 LOG <manager> [mgmt_worker_71792.create] INFO: chmoding /etc/logrotate.d/mgmtworker: 644
    2018-01-04 19:19:10 LOG <manager> [mgmt_worker_71792.create] INFO: chowning /etc/logrotate.d/mgmtworker by root:root...
    2018-01-04 19:19:10 CFY <manager> [mgmt_worker_71792.create] Task succeeded 'fabric_plugin.tasks.run_script'
    2018-01-04 19:19:10 CFY <manager> [riemann_bc532.create] Task started 'fabric_plugin.tasks.run_script'
    2018-01-04 19:19:10 LOG <manager> [riemann_bc532.create] INFO: Preparing fabric environment...
    2018-01-04 19:19:10 LOG <manager> [riemann_bc532.create] INFO: Environment prepared successfully
    2018-01-04 19:19:11 LOG <manager> [riemann_bc532.create] INFO: Saving riemann input configuration to /opt/cloudify/riemann/node_properties/properties.json
    2018-01-04 19:19:12 LOG <manager> [riemann_bc532.create] INFO: Installing Riemann...
    2018-01-04 19:19:12 LOG <manager> [riemann_bc532.create] INFO: Checking whether SELinux in enforced...
    2018-01-04 19:19:12 LOG <manager> [riemann_bc532.create] INFO: SELinux is not enforced.
    2018-01-04 19:19:13 LOG <manager> [riemann_bc532.create] INFO: Downloading resource riemann_NOTICE.txt to /opt/cloudify/riemann/resources/riemann_NOTICE.txt
    2018-01-04 19:19:16 LOG <manager> [riemann_bc532.create] INFO: Applying Langohr permissions...
    2018-01-04 19:19:17 LOG <manager> [riemann_bc532.create] INFO: Checking whether /opt/cloudify/riemann/resources/daemonize-1.7.3-7.el7.x86_64.rpm is already installed...
    2018-01-04 19:19:18 LOG <manager> [riemann_bc532.create] INFO: yum installing /opt/cloudify/riemann/resources/daemonize-1.7.3-7.el7.x86_64.rpm...
    2018-01-04 19:19:19 LOG <manager> [riemann_bc532.create] INFO: Checking whether /opt/cloudify/riemann/resources/riemann-0.2.6-1.noarch.rpm is already installed...
    2018-01-04 19:19:20 LOG <manager> [riemann_bc532.create] INFO: yum installing /opt/cloudify/riemann/resources/riemann-0.2.6-1.noarch.rpm...
    2018-01-04 19:19:22 LOG <manager> [riemann_bc532.create] INFO: Deploying logrotate config...
    2018-01-04 19:19:22 LOG <manager> [riemann_bc532.create] INFO: Deploying blueprint resource components/riemann/config/logrotate to /etc/logrotate.d/riemann
    2018-01-04 19:19:22 LOG <manager> [riemann_bc532.create] INFO: Downloading resource riemann to /opt/cloudify/riemann/resources/riemann
    2018-01-04 19:19:23 LOG <manager> [riemann_bc532.create] INFO: chmoding /etc/logrotate.d/riemann: 644
    2018-01-04 19:19:23 LOG <manager> [riemann_bc532.create] INFO: chowning /etc/logrotate.d/riemann by root:root...
    2018-01-04 19:19:24 LOG <manager> [riemann_bc532.create] INFO: Downloading cloudify-manager Repository...
    2018-01-04 19:19:25 LOG <manager> [riemann_bc532.create] INFO: Extracting Manager Repository...
    2018-01-04 19:19:25 LOG <manager> [riemann_bc532.create] INFO: Deploying Riemann manager.config...
    2018-01-04 19:19:26 LOG <manager> [riemann_bc532.create] INFO: Deploying Riemann conf...
    2018-01-04 19:19:26 LOG <manager> [riemann_bc532.create] INFO: Deploying blueprint resource components/riemann/config/main.clj to /etc/riemann/main.clj
    2018-01-04 19:19:26 LOG <manager> [riemann_bc532.create] INFO: Downloading resource main.clj to /opt/cloudify/riemann/resources/main.clj
    2018-01-04 19:19:27 LOG <manager> [riemann_bc532.create] INFO: Deploying systemd EnvironmentFile...
    2018-01-04 19:19:27 LOG <manager> [riemann_bc532.create] INFO: Deploying blueprint resource components/riemann/config/cloudify-riemann to /etc/sysconfig/cloudify-riemann
    2018-01-04 19:19:27 LOG <manager> [riemann_bc532.create] INFO: Downloading resource cloudify-riemann to /opt/cloudify/riemann/resources/cloudify-riemann
    2018-01-04 19:19:29 LOG <manager> [riemann_bc532.create] INFO: Deploying systemd .service file...
    2018-01-04 19:19:29 LOG <manager> [riemann_bc532.create] INFO: Deploying blueprint resource components/riemann/config/cloudify-riemann.service to /usr/lib/systemd/system/cloudify-riemann.service
    2018-01-04 19:19:29 LOG <manager> [riemann_bc532.create] INFO: Downloading resource cloudify-riemann.service to /opt/cloudify/riemann/resources/cloudify-riemann.service
    2018-01-04 19:19:30 LOG <manager> [riemann_bc532.create] INFO: Enabling systemd .service...
    2018-01-04 19:19:31 CFY <manager> [riemann_bc532.create] Task succeeded 'fabric_plugin.tasks.run_script'
    2018-01-04 19:19:31 CFY <manager> [mgmt_worker_71792] Configuring node
    2018-01-04 19:19:31 CFY <manager> [riemann_bc532] Configuring node
    2018-01-04 19:19:32 CFY <manager> [mgmt_worker_71792->nginx_f39ff|postconfigure] Sending task 'script_runner.tasks.run'
    2018-01-04 19:19:32 CFY <manager> [mgmt_worker_71792->nginx_f39ff|postconfigure] Task started 'script_runner.tasks.run'
    2018-01-04 19:19:32 LOG <manager> [mgmt_worker_71792->nginx_f39ff|postconfigure] INFO: Setting Private Manager IP Runtime Property.
    2018-01-04 19:19:32 LOG <manager> [mgmt_worker_71792->nginx_f39ff|postconfigure] INFO: Manager Private IP is: 10.0.0.11
    2018-01-04 19:19:32 CFY <manager> [mgmt_worker_71792->nginx_f39ff|postconfigure] Task succeeded 'script_runner.tasks.run'
    2018-01-04 19:19:32 CFY <manager> [mgmt_worker_71792] Starting node
    2018-01-04 19:19:32 CFY <manager> [mgmt_worker_71792.start] Sending task 'fabric_plugin.tasks.run_script'
    2018-01-04 19:19:32 CFY <manager> [mgmt_worker_71792.start] Task started 'fabric_plugin.tasks.run_script'
    2018-01-04 19:19:32 LOG <manager> [mgmt_worker_71792.start] INFO: Preparing fabric environment...
    2018-01-04 19:19:32 LOG <manager> [mgmt_worker_71792.start] INFO: Environment prepared successfully
    2018-01-04 19:19:33 LOG <manager> [mgmt_worker_71792.start] INFO: Starting Management Worker Service...
    2018-01-04 19:19:33 LOG <manager> [mgmt_worker_71792.start] INFO: Starting systemd service cloudify-mgmtworker...
    2018-01-04 19:19:34 LOG <manager> [mgmt_worker_71792.start] INFO: mgmtworker is running
    2018-01-04 19:19:34 LOG <manager> [mgmt_worker_71792.start] WARNING: celery status: worker not running, Retrying in 3 seconds...
    2018-01-04 19:19:38 CFY <manager> [mgmt_worker_71792.start] Task succeeded 'fabric_plugin.tasks.run_script'
    2018-01-04 19:19:38 CFY <manager> [riemann_bc532] Starting node
    2018-01-04 19:19:38 CFY <manager> [riemann_bc532.start] Sending task 'fabric_plugin.tasks.run_script'
    2018-01-04 19:19:38 CFY <manager> [riemann_bc532.start] Task started 'fabric_plugin.tasks.run_script'
    2018-01-04 19:19:38 LOG <manager> [riemann_bc532.start] INFO: Preparing fabric environment...
    2018-01-04 19:19:38 LOG <manager> [riemann_bc532.start] INFO: Environment prepared successfully
    2018-01-04 19:19:39 LOG <manager> [riemann_bc532.start] INFO: Starting Riemann Service...
    2018-01-04 19:19:39 LOG <manager> [riemann_bc532.start] INFO: Starting systemd service cloudify-riemann...
    2018-01-04 19:19:39 LOG <manager> [riemann_bc532.start] INFO: riemann is running
    2018-01-04 19:19:40 CFY <manager> [riemann_bc532.start] Task succeeded 'fabric_plugin.tasks.run_script'
    2018-01-04 19:19:41 CFY <manager> [sanity_a7795] Creating node
    2018-01-04 19:19:41 CFY <manager> [sanity_a7795.create] Sending task 'fabric_plugin.tasks.run_task'
    2018-01-04 19:19:41 CFY <manager> [sanity_a7795.create] Task started 'fabric_plugin.tasks.run_task'
    2018-01-04 19:19:41 LOG <manager> [sanity_a7795.create] INFO: Running task: upload_keypair from components/manager/scripts/sanity/create_sanity.py
    2018-01-04 19:19:41 LOG <manager> [sanity_a7795.create] INFO: Preparing fabric environment...
    2018-01-04 19:19:41 LOG <manager> [sanity_a7795.create] INFO: Environment prepared successfully
    2018-01-04 19:19:41 LOG <manager> [sanity_a7795.create] INFO: Uploading key /opt/app/installer/cmtmp/cmbootstrap/id_rsa.cfybootstrap...
    2018-01-04 19:19:41 CFY <manager> [sanity_a7795.create] Task succeeded 'fabric_plugin.tasks.run_task'
    2018-01-04 19:19:42 CFY <manager> [sanity_a7795] Configuring node
    2018-01-04 19:19:43 CFY <manager> [sanity_a7795->manager_configuration_bb27d|postconfigure] Sending task 'script_runner.tasks.run'
    2018-01-04 19:19:43 CFY <manager> [sanity_a7795->manager_configuration_bb27d|postconfigure] Task started 'script_runner.tasks.run'
    2018-01-04 19:19:43 CFY <manager> [sanity_a7795->manager_configuration_bb27d|postconfigure] Task succeeded 'script_runner.tasks.run'
    2018-01-04 19:19:44 CFY <manager> [sanity_a7795] Starting node
    2018-01-04 19:19:44 CFY <manager> [sanity_a7795.start] Sending task 'fabric_plugin.tasks.run_script'
    2018-01-04 19:19:44 CFY <manager> [sanity_a7795.start] Task started 'fabric_plugin.tasks.run_script'
    2018-01-04 19:19:44 LOG <manager> [sanity_a7795.start] INFO: Preparing fabric environment...
    2018-01-04 19:19:44 LOG <manager> [sanity_a7795.start] INFO: Environment prepared successfully
    2018-01-04 19:19:45 LOG <manager> [sanity_a7795.start] INFO: Saving sanity input configuration to /opt/cloudify/sanity/node_properties/properties.json
    2018-01-04 19:19:46 CFY <manager> [sanity_a7795.start] Task succeeded 'fabric_plugin.tasks.run_script'
    2018-01-04 19:19:48 CFY <manager> 'install' workflow execution succeeded
    Downloading from http://repository.cloudifysource.org/org/cloudify3/wagons/cloudify-openstack-plugin/1.4/cloudify_openstack_plugin-1.4-py27-none-linux_x86_64-centos-Core.wgn to /tmp/tmpct6F7l/cloudify_openstack_plugin-1.4-py27-none-linux_x86_64-centos-Core.wgn
    Download complete
    Validating plugin /tmp/tmpct6F7l/cloudify_openstack_plugin-1.4-py27-none-linux_x86_64-centos-Core.wgn...
    Plugin validated successfully
    Uploading plugin /tmp/tmpct6F7l/cloudify_openstack_plugin-1.4-py27-none-linux_x86_64-centos-Core.wgn
    Plugin uploaded. The plugin's id is f4df5b79-c85c-4379-89d9-23afad9635d0
    Downloading from http://repository.cloudifysource.org/org/cloudify3/wagons/cloudify-fabric-plugin/1.4.1/cloudify_fabric_plugin-1.4.1-py27-none-linux_x86_64-centos-Core.wgn to /tmp/tmpct6F7l/cloudify_fabric_plugin-1.4.1-py27-none-linux_x86_64-centos-Core.wgn
    Download complete
    Validating plugin /tmp/tmpct6F7l/cloudify_fabric_plugin-1.4.1-py27-none-linux_x86_64-centos-Core.wgn...
    Plugin validated successfully
    Uploading plugin /tmp/tmpct6F7l/cloudify_fabric_plugin-1.4.1-py27-none-linux_x86_64-centos-Core.wgn
    Plugin uploaded. The plugin's id is f1eb9c9d-2402-4bf1-bb22-f00c71c217d5
    Downloading from https://nexus.onap.org/service/local/repositories/raw/content/org.onap.ccsdk.platform.plugins/plugins/dnsdesig-1.0.0-py27-none-any.wgn to /tmp/tmpct6F7l/dnsdesig-1.0.0-py27-none-any.wgn
    Download complete
    Validating plugin /tmp/tmpct6F7l/dnsdesig-1.0.0-py27-none-any.wgn...
    Plugin validated successfully
    Uploading plugin /tmp/tmpct6F7l/dnsdesig-1.0.0-py27-none-any.wgn
    Plugin uploaded. The plugin's id is 6bd1e681-ea68-4f40-a453-1b51cc9fea9d
    Downloading from https://nexus.onap.org/service/local/repositories/raw/content/org.onap.ccsdk.platform.plugins/plugins/sshkeyshare-1.0.0-py27-none-any.wgn to /tmp/tmpct6F7l/sshkeyshare-1.0.0-py27-none-any.wgn
    Download complete
    Validating plugin /tmp/tmpct6F7l/sshkeyshare-1.0.0-py27-none-any.wgn...
    Plugin validated successfully
    Uploading plugin /tmp/tmpct6F7l/sshkeyshare-1.0.0-py27-none-any.wgn
    Plugin uploaded. The plugin's id is 75ba10ed-1310-415b-a0b7-b351263cbd25
    Downloading from https://nexus.onap.org/service/local/repositories/raw/content/org.onap.ccsdk.platform.plugins/plugins/pgaas-1.0.0-py27-none-any.wgn to /tmp/tmpct6F7l/pgaas-1.0.0-py27-none-any.wgn
    Download complete
    Validating plugin /tmp/tmpct6F7l/pgaas-1.0.0-py27-none-any.wgn...
    Plugin validated successfully
    Uploading plugin /tmp/tmpct6F7l/pgaas-1.0.0-py27-none-any.wgn
    Plugin uploaded. The plugin's id is c80a7e20-e57b-45c9-9b0b-5cad1ca1423b
    Downloading from https://nexus.onap.org/service/local/repositories/raw/content/org.onap.dcaegen2.platform.plugins/releases/plugins/cdapcloudify/cdapcloudify-14.2.5-py27-none-any.wgn to /tmp/tmpct6F7l/cdapcloudify-14.2.5-py27-none-any.wgn
    Download complete
    Validating plugin /tmp/tmpct6F7l/cdapcloudify-14.2.5-py27-none-any.wgn...
    Plugin validated successfully
    Uploading plugin /tmp/tmpct6F7l/cdapcloudify-14.2.5-py27-none-any.wgn
    Plugin uploaded. The plugin's id is cfd440c9-105e-415a-9f93-5992b990995a
    Downloading from https://nexus.onap.org/service/local/repositories/raw/content/org.onap.dcaegen2.platform.plugins/releases/plugins/dcaepolicyplugin/dcaepolicyplugin-1.0.0-py27-none-any.wgn to /tmp/tmpct6F7l/dcaepolicyplugin-1.0.0-py27-none-any.wgn
    Download complete
    Validating plugin /tmp/tmpct6F7l/dcaepolicyplugin-1.0.0-py27-none-any.wgn...
    Plugin validated successfully
    Uploading plugin /tmp/tmpct6F7l/dcaepolicyplugin-1.0.0-py27-none-any.wgn
    Plugin uploaded. The plugin's id is d14978a0-a8ad-4e3b-9cc1-8e5c919f5c5b
    Downloading from https://nexus.onap.org/service/local/repositories/raw/content/org.onap.dcaegen2.platform.plugins/releases/plugins/dockerplugin/dockerplugin-2.4.0-py27-none-any.wgn to /tmp/tmpct6F7l/dockerplugin-2.4.0-py27-none-any.wgn
    Download complete
    Validating plugin /tmp/tmpct6F7l/dockerplugin-2.4.0-py27-none-any.wgn...
    Plugin validated successfully
    Uploading plugin /tmp/tmpct6F7l/dockerplugin-2.4.0-py27-none-any.wgn
    Plugin uploaded. The plugin's id is 51759cb0-a58b-493f-b233-198593e1a5b1
    Downloading from https://nexus.onap.org/service/local/repositories/raw/content/org.onap.dcaegen2.platform.plugins/releases/plugins/relationshipplugin/relationshipplugin-1.0.0-py27-none-any.wgn to /tmp/tmpct6F7l/relationshipplugin-1.0.0-py27-none-any.wgn
    Download complete
    Validating plugin /tmp/tmpct6F7l/relationshipplugin-1.0.0-py27-none-any.wgn...
    Plugin validated successfully
    Uploading plugin /tmp/tmpct6F7l/relationshipplugin-1.0.0-py27-none-any.wgn
    Plugin uploaded. The plugin's id is 618f4de6-68c0-4ebf-8550-4505da61d338
    Downloading from http://www.getcloudify.org/spec/openstack-plugin/1.4/plugin.yaml to /tmp/tmpct6F7l/plugin.yaml
    Download complete
    Uploading resources from /tmp/tmpct6F7l/plugin.yaml to /opt/manager/resources/spec/openstack-plugin/1.4
    [10.195.200.42] run: sudo mkdir -p /opt/manager/resources/spec/openstack-plugin/1.4
    [10.195.200.42] put: /tmp/tmpct6F7l/plugin.yaml -> /opt/manager/resources/spec/openstack-plugin/1.4/plugin.yaml
    Downloading from http://www.getcloudify.org/spec/aws-plugin/1.4.1/plugin.yaml to /tmp/tmpct6F7l/plugin.yaml
    Download complete
    Uploading resources from /tmp/tmpct6F7l/plugin.yaml to /opt/manager/resources/spec/aws-plugin/1.4.1
    [10.195.200.42] run: sudo mkdir -p /opt/manager/resources/spec/aws-plugin/1.4.1
    [10.195.200.42] put: /tmp/tmpct6F7l/plugin.yaml -> /opt/manager/resources/spec/aws-plugin/1.4.1/plugin.yaml
    Downloading from http://www.getcloudify.org/spec/tosca-vcloud-plugin/1.3.1/plugin.yaml to /tmp/tmpct6F7l/plugin.yaml
    Download complete
    Uploading resources from /tmp/tmpct6F7l/plugin.yaml to /opt/manager/resources/spec/tosca-vcloud-plugin/1.3.1
    [10.195.200.42] run: sudo mkdir -p /opt/manager/resources/spec/tosca-vcloud-plugin/1.3.1
    [10.195.200.42] put: /tmp/tmpct6F7l/plugin.yaml -> /opt/manager/resources/spec/tosca-vcloud-plugin/1.3.1/plugin.yaml
    Downloading from http://www.getcloudify.org/spec/vsphere-plugin/2.0/plugin.yaml to /tmp/tmpct6F7l/plugin.yaml
    Download complete
    Uploading resources from /tmp/tmpct6F7l/plugin.yaml to /opt/manager/resources/spec/vsphere-plugin/2.0
    [10.195.200.42] run: sudo mkdir -p /opt/manager/resources/spec/vsphere-plugin/2.0
    [10.195.200.42] put: /tmp/tmpct6F7l/plugin.yaml -> /opt/manager/resources/spec/vsphere-plugin/2.0/plugin.yaml
    Downloading from http://www.getcloudify.org/spec/fabric-plugin/1.4.1/plugin.yaml to /tmp/tmpct6F7l/plugin.yaml
    Download complete
    Uploading resources from /tmp/tmpct6F7l/plugin.yaml to /opt/manager/resources/spec/fabric-plugin/1.4.1
    [10.195.200.42] run: sudo mkdir -p /opt/manager/resources/spec/fabric-plugin/1.4.1
    [10.195.200.42] put: /tmp/tmpct6F7l/plugin.yaml -> /opt/manager/resources/spec/fabric-plugin/1.4.1/plugin.yaml
    Downloading from http://www.getcloudify.org/spec/diamond-plugin/1.3.3/plugin.yaml to /tmp/tmpct6F7l/plugin.yaml
    Download complete
    Uploading resources from /tmp/tmpct6F7l/plugin.yaml to /opt/manager/resources/spec/diamond-plugin/1.3.3
    [10.195.200.42] run: sudo mkdir -p /opt/manager/resources/spec/diamond-plugin/1.3.3
    [10.195.200.42] put: /tmp/tmpct6F7l/plugin.yaml -> /opt/manager/resources/spec/diamond-plugin/1.3.3/plugin.yaml
    Downloading from http://www.getcloudify.org/spec/cloudify/3.4/types.yaml to /tmp/tmpct6F7l/types.yaml
    Download complete
    Uploading resources from /tmp/tmpct6F7l/types.yaml to /opt/manager/resources/spec/cloudify/3.4
    [10.195.200.42] run: sudo mkdir -p /opt/manager/resources/spec/cloudify/3.4
    [10.195.200.42] put: /tmp/tmpct6F7l/types.yaml -> /opt/manager/resources/spec/cloudify/3.4/types.yaml
    2018-01-04 19:20:33 CFY <manager> Starting 'execute_operation' workflow execution
    2018-01-04 19:20:33 CFY <manager> [sanity_a7795] Starting operation cloudify.interfaces.lifecycle.start (Operation parameters: {'manager_ip': u'10.195.200.42', 'run_sanity': 'true', 'fabric_env': {'key_filename': u'/opt/app/installer/cmtmp/cmbootstrap/id_rsa.cfybootstrap', 'host_string': u'10.195.200.42', 'user': u'centos'}})
    2018-01-04 19:20:33 CFY <manager> [sanity_a7795.start] Sending task 'fabric_plugin.tasks.run_script'
    2018-01-04 19:20:33 CFY <manager> [sanity_a7795.start] Task started 'fabric_plugin.tasks.run_script'
    2018-01-04 19:20:33 LOG <manager> [sanity_a7795.start] INFO: Preparing fabric environment...
    2018-01-04 19:20:33 LOG <manager> [sanity_a7795.start] INFO: Environment prepared successfully
    2018-01-04 19:20:34 LOG <manager> [sanity_a7795.start] INFO: Saving sanity input configuration to /opt/cloudify/sanity/node_properties/properties.json
    2018-01-04 19:20:35 LOG <manager> [sanity_a7795.start] INFO: Starting Manager sanity check...
    2018-01-04 19:20:40 LOG <manager> [sanity_a7795.start] INFO: Installing sanity app...
    2018-01-04 19:21:10 LOG <manager> [sanity_a7795.start] INFO: Sanity app installed. Performing sanity test...
    2018-01-04 19:21:10 LOG <manager> [sanity_a7795.start] INFO: Manager sanity check successful, cleaning up sanity resources.
    2018-01-04 19:21:44 CFY <manager> [sanity_a7795.start] Task succeeded 'fabric_plugin.tasks.run_script'
    2018-01-04 19:21:44 CFY <manager> [sanity_a7795] Finished operation cloudify.interfaces.lifecycle.start
    2018-01-04 19:21:44 CFY <manager> 'execute_operation' workflow execution succeeded
    Bootstrap complete
    Manager is up at 10.195.200.42
    + rm -f resources/ssl/server.key
    + cd /opt/app/installer
    + mkdir consul
    + cd consul
    + cfy init -r
    Initialization completed successfully
    + cfy use -t 10.195.200.42
    Using manager 10.195.200.42 with port 80
    Deploying Consul VM
    + echo 'Deploying Consul VM'
    + set +e
    + wget -O /tmp/consul_cluster.yaml https://nexus.onap.org/service/local/repositories/raw/content/org.onap.dcaegen2.platform.blueprints/releases/blueprints/consul_cluster.yaml
    --2018-01-04 19:21:46--  https://nexus.onap.org/service/local/repositories/raw/content/org.onap.dcaegen2.platform.blueprints/releases/blueprints/consul_cluster.yaml
    Resolving nexus.onap.org (nexus.onap.org)... 199.204.45.137, 2604:e100:1:0:f816:3eff:fefb:56ed
    Connecting to nexus.onap.org (nexus.onap.org)|199.204.45.137|:443... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 12727 (12K) [text/x-yaml]
    Saving to: '/tmp/consul_cluster.yaml'
    
         0K .......... ..                                         100% 16.9M=0.001s
    
    2018-01-04 19:21:46 (16.9 MB/s) - '/tmp/consul_cluster.yaml' saved [12727/12727]
    
    + mv -f /tmp/consul_cluster.yaml ../blueprints/
    Succeeded in getting the newest consul_cluster.yaml
    + echo 'Succeeded in getting the newest consul_cluster.yaml'
    + set -e
    + cfy install -p ../blueprints/consul_cluster.yaml -d consul -i .././config/inputs.yaml -i datacenter=MbOr
    Uploading blueprint ../blueprints/consul_cluster.yaml...
    Blueprint uploaded. The blueprint's id is blueprints
    Processing inputs source: .././config/inputs.yaml
    Processing inputs source: datacenter=MbOr
    Creating new deployment from blueprint blueprints...
    Deployment created. The deployment's id is consul
    Executing workflow install on deployment consul [timeout=900 seconds]
    Deployment environment creation is in progress...
    2018-01-04T19:21:52 CFY <consul> Starting 'create_deployment_environment' workflow execution
    2018-01-04T19:21:52 CFY <consul> Installing deployment plugins
    2018-01-04T19:21:52 CFY <consul> Sending task 'cloudify_agent.operations.install_plugins'
    2018-01-04T19:21:52 CFY <consul> Task started 'cloudify_agent.operations.install_plugins'
    2018-01-04T19:21:53 CFY <consul> Task succeeded 'cloudify_agent.operations.install_plugins'
    2018-01-04T19:21:53 CFY <consul> Skipping starting deployment policy engine core - no policies defined
    2018-01-04T19:21:54 CFY <consul> Creating deployment work directory
    2018-01-04T19:21:54 CFY <consul> 'create_deployment_environment' workflow execution succeeded
    2018-01-04T19:21:59 CFY <consul> Starting 'install' workflow execution
    2018-01-04T19:22:00 CFY <consul> [floatingip_cnsl00_1fcf2] Creating node
    2018-01-04T19:22:00 CFY <consul> [floatingip_cnsl02_6658a] Creating node
    2018-01-04T19:22:00 CFY <consul> [floatingip_cnsl02_6658a.create] Sending task 'neutron_plugin.floatingip.create'
    2018-01-04T19:22:00 CFY <consul> [floatingip_cnsl02_6658a.create] Task started 'neutron_plugin.floatingip.create'
    2018-01-04T19:22:00 CFY <consul> [private_net_c5d7d] Creating node
    2018-01-04T19:22:00 CFY <consul> [floatingip_cnsl01_5a8ae] Creating node
    2018-01-04T19:22:00 CFY <consul> [security_group_c6c1c] Creating node
    2018-01-04T19:22:00 CFY <consul> [floatingip_cnsl01_5a8ae.create] Sending task 'neutron_plugin.floatingip.create'
    2018-01-04T19:22:00 CFY <consul> [private_net_c5d7d.create] Sending task 'neutron_plugin.network.create'
    2018-01-04T19:22:00 CFY <consul> [floatingip_cnsl00_1fcf2.create] Sending task 'neutron_plugin.floatingip.create'
    2018-01-04T19:22:00 CFY <consul> [key_pair_a721c] Creating node
    2018-01-04T19:22:00 CFY <consul> [floatingip_cnsl01_5a8ae.create] Task started 'neutron_plugin.floatingip.create'
    2018-01-04T19:22:00 CFY <consul> [private_net_c5d7d.create] Task started 'neutron_plugin.network.create'
    2018-01-04T19:22:00 CFY <consul> [floatingip_cnsl00_1fcf2.create] Task started 'neutron_plugin.floatingip.create'
    2018-01-04T19:22:00 CFY <consul> [key_pair_a721c.create] Sending task 'nova_plugin.keypair.create'
    2018-01-04T19:22:00 CFY <consul> [security_group_c6c1c.create] Sending task 'neutron_plugin.security_group.create'
    2018-01-04T19:22:01 CFY <consul> [key_pair_a721c.create] Task started 'nova_plugin.keypair.create'
    2018-01-04T19:22:02 CFY <consul> [private_net_c5d7d.create] Task succeeded 'neutron_plugin.network.create'
    2018-01-04T19:22:02 CFY <consul> [security_group_c6c1c.create] Task started 'neutron_plugin.security_group.create'
    2018-01-04T19:22:02 CFY <consul> [key_pair_a721c.create] Task succeeded 'nova_plugin.keypair.create'
    2018-01-04T19:22:02 CFY <consul> [private_net_c5d7d] Configuring node
    2018-01-04T19:22:02 CFY <consul> [key_pair_a721c] Configuring node
    2018-01-04T19:22:02 CFY <consul> [key_pair_a721c] Starting node
    2018-01-04T19:22:02 CFY <consul> [private_net_c5d7d] Starting node
    2018-01-04T19:22:03 CFY <consul> [security_group_c6c1c.create] Task succeeded 'neutron_plugin.security_group.create'
    2018-01-04T19:22:03 CFY <consul> [security_group_c6c1c] Configuring node
    2018-01-04T19:22:03 CFY <consul> [floatingip_cnsl02_6658a.create] Task succeeded 'neutron_plugin.floatingip.create'
    2018-01-04T19:22:03 CFY <consul> [floatingip_cnsl00_1fcf2.create] Task succeeded 'neutron_plugin.floatingip.create'
    2018-01-04T19:22:03 CFY <consul> [floatingip_cnsl01_5a8ae.create] Task succeeded 'neutron_plugin.floatingip.create'
    2018-01-04T19:22:03 CFY <consul> [floatingip_cnsl02_6658a] Configuring node
    2018-01-04T19:22:03 CFY <consul> [floatingip_cnsl00_1fcf2] Configuring node
    2018-01-04T19:22:03 CFY <consul> [fixedip_cnsl00_e8dab] Creating node
    2018-01-04T19:22:03 CFY <consul> [fixedip_cnsl00_e8dab.create] Sending task 'neutron_plugin.port.create'
    2018-01-04T19:22:03 CFY <consul> [fixedip_cnsl00_e8dab.create] Task started 'neutron_plugin.port.create'
    2018-01-04T19:22:03 CFY <consul> [security_group_c6c1c] Starting node
    2018-01-04T19:22:03 CFY <consul> [fixedip_cnsl01_eb2f0] Creating node
    2018-01-04T19:22:03 CFY <consul> [fixedip_cnsl02_9ec54] Creating node
    2018-01-04T19:22:04 CFY <consul> [floatingip_cnsl01_5a8ae] Configuring node
    2018-01-04T19:22:04 CFY <consul> [fixedip_cnsl02_9ec54.create] Sending task 'neutron_plugin.port.create'
    2018-01-04T19:22:04 CFY <consul> [fixedip_cnsl01_eb2f0.create] Sending task 'neutron_plugin.port.create'
    2018-01-04T19:22:04 CFY <consul> [fixedip_cnsl02_9ec54.create] Task started 'neutron_plugin.port.create'
    2018-01-04T19:22:04 CFY <consul> [fixedip_cnsl01_eb2f0.create] Task started 'neutron_plugin.port.create'
    2018-01-04T19:22:04 CFY <consul> [floatingip_cnsl02_6658a] Starting node
    2018-01-04T19:22:04 CFY <consul> [floatingip_cnsl00_1fcf2] Starting node
    2018-01-04T19:22:04 CFY <consul> [floatingip_cnsl01_5a8ae] Starting node
    2018-01-04T19:22:04 CFY <consul> [dns_cnsl02_b57d0] Creating node
    2018-01-04T19:22:05 CFY <consul> [dns_cnsl02_b57d0.create] Sending task 'dnsdesig.dns_plugin.aneeded'
    2018-01-04T19:22:05 CFY <consul> [dns_cnsl02_b57d0.create] Task started 'dnsdesig.dns_plugin.aneeded'
    2018-01-04T19:22:05 CFY <consul> [dns_cnsl00_4e838] Creating node
    2018-01-04T19:22:05 CFY <consul> [dns_cnsl00_4e838.create] Sending task 'dnsdesig.dns_plugin.aneeded'
    2018-01-04T19:22:05 CFY <consul> [dns_cnsl00_4e838.create] Task started 'dnsdesig.dns_plugin.aneeded'
    2018-01-04T19:22:05 CFY <consul> [dns_cluster_b8642] Creating node
    2018-01-04T19:22:05 CFY <consul> [dns_cnsl01_16e6f] Creating node
    2018-01-04T19:22:05 CFY <consul> [dns_cnsl01_16e6f.create] Sending task 'dnsdesig.dns_plugin.aneeded'
    2018-01-04T19:22:05 CFY <consul> [dns_cluster_b8642.create] Sending task 'dnsdesig.dns_plugin.aneeded'
    2018-01-04T19:22:05 CFY <consul> [fixedip_cnsl02_9ec54.create] Task succeeded 'neutron_plugin.port.create'
    2018-01-04T19:22:05 CFY <consul> [dns_cnsl01_16e6f.create] Task started 'dnsdesig.dns_plugin.aneeded'
    2018-01-04T19:22:05 CFY <consul> [dns_cnsl02_b57d0.create] Task succeeded 'dnsdesig.dns_plugin.aneeded'
    2018-01-04T19:22:05 CFY <consul> [dns_cluster_b8642.create] Task started 'dnsdesig.dns_plugin.aneeded'
    2018-01-04T19:22:06 CFY <consul> [dns_cnsl00_4e838.create] Task succeeded 'dnsdesig.dns_plugin.aneeded'
    2018-01-04T19:22:06 CFY <consul> [fixedip_cnsl00_e8dab.create] Task succeeded 'neutron_plugin.port.create'
    2018-01-04T19:22:06 CFY <consul> [fixedip_cnsl01_eb2f0.create] Task succeeded 'neutron_plugin.port.create'
    2018-01-04T19:22:06 CFY <consul> [fixedip_cnsl02_9ec54] Configuring node
    2018-01-04T19:22:06 CFY <consul> [dns_cnsl00_4e838] Configuring node
    2018-01-04T19:22:06 CFY <consul> [dns_cnsl02_b57d0] Configuring node
    2018-01-04T19:22:06 CFY <consul> [fixedip_cnsl00_e8dab] Configuring node
    2018-01-04T19:22:06 CFY <consul> [fixedip_cnsl01_eb2f0] Configuring node
    2018-01-04T19:22:06 CFY <consul> [dns_cnsl01_16e6f.create] Task succeeded 'dnsdesig.dns_plugin.aneeded'
    2018-01-04T19:22:06 CFY <consul> [fixedip_cnsl02_9ec54] Starting node
    2018-01-04T19:22:06 CFY <consul> [dns_cluster_b8642.create] Task succeeded 'dnsdesig.dns_plugin.aneeded'
    2018-01-04T19:22:06 CFY <consul> [dns_cnsl02_b57d0] Starting node
    2018-01-04T19:22:06 CFY <consul> [fixedip_cnsl00_e8dab] Starting node
    2018-01-04T19:22:07 CFY <consul> [dns_cnsl01_16e6f] Configuring node
    2018-01-04T19:22:07 CFY <consul> [dns_cnsl00_4e838] Starting node
    2018-01-04T19:22:07 CFY <consul> [fixedip_cnsl01_eb2f0] Starting node
    2018-01-04T19:22:07 CFY <consul> [dns_cluster_b8642] Configuring node
    2018-01-04T19:22:07 CFY <consul> [host_cnsl02_63ce0] Creating node
    2018-01-04T19:22:07 CFY <consul> [dns_cnsl01_16e6f] Starting node
    2018-01-04T19:22:07 CFY <consul> [host_cnsl02_63ce0.create] Sending task 'nova_plugin.server.create'
    2018-01-04T19:22:07 CFY <consul> [host_cnsl02_63ce0.create] Task started 'nova_plugin.server.create'
    2018-01-04T19:22:07 CFY <consul> [dns_cluster_b8642] Starting node
    2018-01-04T19:22:08 CFY <consul> [host_cnsl01_d1821] Creating node
    2018-01-04T19:22:08 CFY <consul> [host_cnsl01_d1821.create] Sending task 'nova_plugin.server.create'
    2018-01-04T19:22:08 CFY <consul> [host_cnsl01_d1821.create] Task started 'nova_plugin.server.create'
    2018-01-04T19:22:09 CFY <consul> [host_cnsl02_63ce0.create] Task succeeded 'nova_plugin.server.create'
    2018-01-04T19:22:10 CFY <consul> [host_cnsl01_d1821.create] Task succeeded 'nova_plugin.server.create'
    2018-01-04T19:22:10 CFY <consul> [host_cnsl02_63ce0] Configuring node
    2018-01-04T19:22:11 CFY <consul> [host_cnsl01_d1821] Configuring node
    2018-01-04T19:22:11 CFY <consul> [host_cnsl02_63ce0] Starting node
    2018-01-04T19:22:11 CFY <consul> [host_cnsl02_63ce0.start] Sending task 'nova_plugin.server.start'
    2018-01-04T19:22:11 CFY <consul> [host_cnsl02_63ce0.start] Task started 'nova_plugin.server.start'
    2018-01-04T19:22:12 CFY <consul> [host_cnsl01_d1821] Starting node
    2018-01-04T19:22:12 CFY <consul> [host_cnsl01_d1821.start] Sending task 'nova_plugin.server.start'
    2018-01-04T19:22:12 CFY <consul> [host_cnsl01_d1821.start] Task started 'nova_plugin.server.start'
    2018-01-04T19:22:12 CFY <consul> [host_cnsl02_63ce0.start] Task rescheduled 'nova_plugin.server.start' -> Waiting for server to be in ACTIVE state but is in BUILD:spawning state. Retrying... [retry_after=30]
    2018-01-04T19:22:13 CFY <consul> [host_cnsl01_d1821.start] Task rescheduled 'nova_plugin.server.start' -> Waiting for server to be in ACTIVE state but is in BUILD:spawning state. Retrying... [retry_after=30]
    2018-01-04T19:22:42 CFY <consul> [host_cnsl02_63ce0.start] Sending task 'nova_plugin.server.start' [retry 1]
    2018-01-04T19:22:42 CFY <consul> [host_cnsl02_63ce0.start] Task started 'nova_plugin.server.start' [retry 1]
    2018-01-04T19:22:43 CFY <consul> [host_cnsl01_d1821.start] Sending task 'nova_plugin.server.start' [retry 1]
    2018-01-04T19:22:43 CFY <consul> [host_cnsl01_d1821.start] Task started 'nova_plugin.server.start' [retry 1]
    2018-01-04T19:22:44 CFY <consul> [host_cnsl02_63ce0.start] Task succeeded 'nova_plugin.server.start' [retry 1]
    2018-01-04T19:22:44 CFY <consul> [host_cnsl02_63ce0->security_group_c6c1c|establish] Sending task 'nova_plugin.server.connect_security_group'
    2018-01-04T19:22:44 CFY <consul> [host_cnsl02_63ce0->security_group_c6c1c|establish] Task started 'nova_plugin.server.connect_security_group'
    2018-01-04T19:22:44 CFY <consul> [host_cnsl01_d1821.start] Task succeeded 'nova_plugin.server.start' [retry 1]
    2018-01-04T19:22:45 CFY <consul> [host_cnsl01_d1821->security_group_c6c1c|establish] Sending task 'nova_plugin.server.connect_security_group'
    2018-01-04T19:22:45 CFY <consul> [host_cnsl01_d1821->security_group_c6c1c|establish] Task started 'nova_plugin.server.connect_security_group'
    2018-01-04T19:22:47 CFY <consul> [host_cnsl02_63ce0->security_group_c6c1c|establish] Task succeeded 'nova_plugin.server.connect_security_group'
    2018-01-04T19:22:47 CFY <consul> [host_cnsl02_63ce0->floatingip_cnsl02_6658a|establish] Sending task 'nova_plugin.server.connect_floatingip'
    2018-01-04T19:22:47 CFY <consul> [host_cnsl02_63ce0->floatingip_cnsl02_6658a|establish] Task started 'nova_plugin.server.connect_floatingip'
    2018-01-04T19:22:47 CFY <consul> [host_cnsl01_d1821->security_group_c6c1c|establish] Task succeeded 'nova_plugin.server.connect_security_group'
    2018-01-04T19:22:47 CFY <consul> [host_cnsl01_d1821->floatingip_cnsl01_5a8ae|establish] Sending task 'nova_plugin.server.connect_floatingip'
    2018-01-04T19:22:48 CFY <consul> [host_cnsl01_d1821->floatingip_cnsl01_5a8ae|establish] Task started 'nova_plugin.server.connect_floatingip'
    2018-01-04T19:22:50 CFY <consul> [host_cnsl02_63ce0->floatingip_cnsl02_6658a|establish] Task succeeded 'nova_plugin.server.connect_floatingip'
    2018-01-04T19:22:50 CFY <consul> [host_cnsl01_d1821->floatingip_cnsl01_5a8ae|establish] Task succeeded 'nova_plugin.server.connect_floatingip'
    2018-01-04T19:22:51 CFY <consul> [host_cnsl00_3ca84] Creating node
    2018-01-04T19:22:51 CFY <consul> [host_cnsl00_3ca84.create] Sending task 'nova_plugin.server.create'
    2018-01-04T19:22:51 CFY <consul> [host_cnsl00_3ca84.create] Task started 'nova_plugin.server.create'
    2018-01-04T19:22:53 CFY <consul> [host_cnsl00_3ca84.create] Task succeeded 'nova_plugin.server.create'
    2018-01-04T19:22:54 CFY <consul> [host_cnsl00_3ca84] Configuring node
    2018-01-04T19:22:55 CFY <consul> [host_cnsl00_3ca84] Starting node
    2018-01-04T19:22:55 CFY <consul> [host_cnsl00_3ca84.start] Sending task 'nova_plugin.server.start'
    2018-01-04T19:22:55 CFY <consul> [host_cnsl00_3ca84.start] Task started 'nova_plugin.server.start'
    2018-01-04T19:22:56 CFY <consul> [host_cnsl00_3ca84.start] Task rescheduled 'nova_plugin.server.start' -> Waiting for server to be in ACTIVE state but is in BUILD:spawning state. Retrying... [retry_after=30]
    2018-01-04T19:23:26 CFY <consul> [host_cnsl00_3ca84.start] Sending task 'nova_plugin.server.start' [retry 1]
    2018-01-04T19:23:26 CFY <consul> [host_cnsl00_3ca84.start] Task started 'nova_plugin.server.start' [retry 1]
    2018-01-04T19:23:27 CFY <consul> [host_cnsl00_3ca84.start] Task succeeded 'nova_plugin.server.start' [retry 1]
    2018-01-04T19:23:28 CFY <consul> [host_cnsl00_3ca84->security_group_c6c1c|establish] Sending task 'nova_plugin.server.connect_security_group'
    2018-01-04T19:23:28 CFY <consul> [host_cnsl00_3ca84->security_group_c6c1c|establish] Task started 'nova_plugin.server.connect_security_group'
    2018-01-04T19:23:30 CFY <consul> [host_cnsl00_3ca84->security_group_c6c1c|establish] Task succeeded 'nova_plugin.server.connect_security_group'
    2018-01-04T19:23:30 CFY <consul> [host_cnsl00_3ca84->floatingip_cnsl00_1fcf2|establish] Sending task 'nova_plugin.server.connect_floatingip'
    2018-01-04T19:23:30 CFY <consul> [host_cnsl00_3ca84->floatingip_cnsl00_1fcf2|establish] Task started 'nova_plugin.server.connect_floatingip'
    2018-01-04T19:23:34 CFY <consul> [host_cnsl00_3ca84->floatingip_cnsl00_1fcf2|establish] Task succeeded 'nova_plugin.server.connect_floatingip'
    2018-01-04T19:23:34 CFY <consul> 'install' workflow execution succeeded
    Finished executing workflow install on deployment consul
    * Run 'cfy events list --include-logs --execution-id 082ca307-9c72-435d-9040-897a77d8a9d6' to retrieve the execution's events/logs
    ++ grep -Po 'Value: \K.*'
    ++ cfy deployments outputs -d consul
    + CONSULIP=10.195.200.32
    Consul deployed at 10.195.200.32
    + echo Consul deployed at 10.195.200.32
    + curl http://10.195.200.32:8500/v1/agent/services
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
      0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0curl: (7) Failed to connect to 10.195.200.32 port 8500: Connection refused
    Waiting for Consul API
    + echo Waiting for Consul API
    + sleep 60
    + curl http://10.195.200.32:8500/v1/agent/services
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100   138  100   138    0     0  30673      0 --:--:-- --:--:-- --:--:-- 34500
    ++ curl -Ss http://10.195.200.32:8500/v1/status/leader
    + [[ "10.0.0.9:8300" != \"\" ]]
    + curl http://10.195.200.42:8500/v1/agent/join/10.195.200.32
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
      0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
    + REGREQ='
    {
      "Name" : "cloudify_manager",
      "ID" : "cloudify_manager",
      "Tags" : ["http://10.195.200.42/api/v2.1"],
      "Address": "10.195.200.42",
      "Port": 80,
      "Check" : {
        "Name" : "cloudify_manager_health",
        "Interval" : "300s",
        "HTTP" : "http://10.195.200.42/api/v2.1/status",
        "Status" : "passing",
        "DeregisterCriticalServiceAfter" : "30m"
      }
    }
    '
    + curl -X PUT -H 'Content-Type: application/json' --data-binary '
    {
      "Name" : "cloudify_manager",
      "ID" : "cloudify_manager",
      "Tags" : ["http://10.195.200.42/api/v2.1"],
      "Address": "10.195.200.42",
      "Port": 80,
      "Check" : {
        "Name" : "cloudify_manager_health",
        "Interval" : "300s",
        "HTTP" : "http://10.195.200.42/api/v2.1/status",
        "Status" : "passing",
        "DeregisterCriticalServiceAfter" : "30m"
      }
    }
    ' http://10.195.200.42:8500/v1/agent/service/register
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100   362    0     0  100   362      0  15041 --:--:-- --:--:-- --:--:-- 15739
    ++ mktemp
    + ENVINI=/tmp/tmp.vW3hr2dYeH
    + cat
    + scp -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i .././key600 /tmp/tmp.vW3hr2dYeH centos@10.195.200.42:/tmp/env.ini
    Warning: Permanently added '10.195.200.42' (ECDSA) to the list of known hosts.
    + ssh -t -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i .././key600 centos@10.195.200.42 sudo mv /tmp/env.ini /opt/env.ini
    Pseudo-terminal will not be allocated because stdin is not a terminal.
    Warning: Permanently added '10.195.200.42' (ECDSA) to the list of known hosts.
    + rm /tmp/tmp.vW3hr2dYeH
    + wget -P ./blueprints/docker/ https://nexus.onap.org/service/local/repositories/raw/content/org.onap.dcaegen2.platform.blueprints/releases/blueprints/DockerBP.yaml
    --2018-01-04 19:24:41--  https://nexus.onap.org/service/local/repositories/raw/content/org.onap.dcaegen2.platform.blueprints/releases/blueprints/DockerBP.yaml
    Resolving nexus.onap.org (nexus.onap.org)... 199.204.45.137, 2604:e100:1:0:f816:3eff:fefb:56ed
    Connecting to nexus.onap.org (nexus.onap.org)|199.204.45.137|:443... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 12527 (12K) [text/x-yaml]
    Saving to: './blueprints/docker/DockerBP.yaml'
    
         0K .......... ..                                         100% 15.5M=0.001s
    
    2018-01-04 19:24:41 (15.5 MB/s) - './blueprints/docker/DockerBP.yaml' saved [12527/12527]
    
    + wget -P ./blueprints/cbs/ https://nexus.onap.org/service/local/repositories/raw/content/org.onap.dcaegen2.platform.blueprints/releases/blueprints/config_binding_service.yaml
    --2018-01-04 19:24:41--  https://nexus.onap.org/service/local/repositories/raw/content/org.onap.dcaegen2.platform.blueprints/releases/blueprints/config_binding_service.yaml
    Resolving nexus.onap.org (nexus.onap.org)... 199.204.45.137, 2604:e100:1:0:f816:3eff:fefb:56ed
    Connecting to nexus.onap.org (nexus.onap.org)|199.204.45.137|:443... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 2456 (2.4K) [text/x-yaml]
    Saving to: './blueprints/cbs/config_binding_service.yaml'
    
         0K ..                                                    100%  262M=0s
    
    2018-01-04 19:24:41 (262 MB/s) - './blueprints/cbs/config_binding_service.yaml' saved [2456/2456]
    
    + wget -P ./blueprints/pg/ https://nexus.onap.org/service/local/repositories/raw/content/org.onap.dcaegen2.platform.blueprints/releases/blueprints/pgaas-onevm.yaml
    --2018-01-04 19:24:41--  https://nexus.onap.org/service/local/repositories/raw/content/org.onap.dcaegen2.platform.blueprints/releases/blueprints/pgaas-onevm.yaml
    Resolving nexus.onap.org (nexus.onap.org)... 199.204.45.137, 2604:e100:1:0:f816:3eff:fefb:56ed
    Connecting to nexus.onap.org (nexus.onap.org)|199.204.45.137|:443... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 11759 (11K) [text/x-yaml]
    Saving to: './blueprints/pg/pgaas-onevm.yaml'
    
         0K .......... .                                          100% 17.6M=0.001s
    
    2018-01-04 19:24:41 (17.6 MB/s) - './blueprints/pg/pgaas-onevm.yaml' saved [11759/11759]
    
    + wget -P ./blueprints/cdap/ https://nexus.onap.org/service/local/repositories/raw/content/org.onap.dcaegen2.platform.blueprints/releases/blueprints/cdapbp7.yaml
    --2018-01-04 19:24:41--  https://nexus.onap.org/service/local/repositories/raw/content/org.onap.dcaegen2.platform.blueprints/releases/blueprints/cdapbp7.yaml
    Resolving nexus.onap.org (nexus.onap.org)... 199.204.45.137, 2604:e100:1:0:f816:3eff:fefb:56ed
    Connecting to nexus.onap.org (nexus.onap.org)|199.204.45.137|:443... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 41241 (40K) [text/x-yaml]
    Saving to: './blueprints/cdap/cdapbp7.yaml'
    
         0K .......... .......... .......... ..........           100% 8.00M=0.005s
    
    2018-01-04 19:24:42 (8.00 MB/s) - './blueprints/cdap/cdapbp7.yaml' saved [41241/41241]
    
    + wget -P ./blueprints/cdapbroker/ https://nexus.onap.org/service/local/repositories/raw/content/org.onap.dcaegen2.platform.blueprints/releases/blueprints/cdap_broker.yaml
    --2018-01-04 19:24:42--  https://nexus.onap.org/service/local/repositories/raw/content/org.onap.dcaegen2.platform.blueprints/releases/blueprints/cdap_broker.yaml
    Resolving nexus.onap.org (nexus.onap.org)... 199.204.45.137, 2604:e100:1:0:f816:3eff:fefb:56ed
    Connecting to nexus.onap.org (nexus.onap.org)|199.204.45.137|:443... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 4155 (4.1K) [text/x-yaml]
    Saving to: './blueprints/cdapbroker/cdap_broker.yaml'
    
         0K ....                                                  100%  446M=0s
    
    2018-01-04 19:24:42 (446 MB/s) - './blueprints/cdapbroker/cdap_broker.yaml' saved [4155/4155]
    
    + wget -P ./blueprints/inv/ https://nexus.onap.org/service/local/repositories/raw/content/org.onap.dcaegen2.platform.blueprints/releases/blueprints/inventory.yaml
    --2018-01-04 19:24:42--  https://nexus.onap.org/service/local/repositories/raw/content/org.onap.dcaegen2.platform.blueprints/releases/blueprints/inventory.yaml
    Resolving nexus.onap.org (nexus.onap.org)... 199.204.45.137, 2604:e100:1:0:f816:3eff:fefb:56ed
    Connecting to nexus.onap.org (nexus.onap.org)|199.204.45.137|:443... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 6818 (6.7K) [text/x-yaml]
    Saving to: './blueprints/inv/inventory.yaml'
    
         0K ......                                                100%  170M=0s
    
    2018-01-04 19:24:42 (170 MB/s) - './blueprints/inv/inventory.yaml' saved [6818/6818]
    
    + wget -P ./blueprints/dh/ https://nexus.onap.org/service/local/repositories/raw/content/org.onap.dcaegen2.platform.blueprints/releases/blueprints/DeploymentHandler.yaml
    --2018-01-04 19:24:42--  https://nexus.onap.org/service/local/repositories/raw/content/org.onap.dcaegen2.platform.blueprints/releases/blueprints/DeploymentHandler.yaml
    Resolving nexus.onap.org (nexus.onap.org)... 199.204.45.137, 2604:e100:1:0:f816:3eff:fefb:56ed
    Connecting to nexus.onap.org (nexus.onap.org)|199.204.45.137|:443... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 3449 (3.4K) [text/x-yaml]
    Saving to: './blueprints/dh/DeploymentHandler.yaml'
    
         0K ...                                                   100%  523M=0s
    
    2018-01-04 19:24:42 (523 MB/s) - './blueprints/dh/DeploymentHandler.yaml' saved [3449/3449]
    
    + wget -P ./blueprints/ph/ https://nexus.onap.org/service/local/repositories/raw/content/org.onap.dcaegen2.platform.blueprints/releases/blueprints/policy_handler.yaml
    --2018-01-04 19:24:42--  https://nexus.onap.org/service/local/repositories/raw/content/org.onap.dcaegen2.platform.blueprints/releases/blueprints/policy_handler.yaml
    Resolving nexus.onap.org (nexus.onap.org)... 199.204.45.137, 2604:e100:1:0:f816:3eff:fefb:56ed
    Connecting to nexus.onap.org (nexus.onap.org)|199.204.45.137|:443... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 2635 (2.6K) [text/x-yaml]
    Saving to: './blueprints/ph/policy_handler.yaml'
    
         0K ..                                                    100%  325M=0s
    
    2018-01-04 19:24:42 (325 MB/s) - './blueprints/ph/policy_handler.yaml' saved [2635/2635]
    
    + wget -P ./blueprints/ves/ https://nexus.onap.org/service/local/repositories/raw/content/org.onap.dcaegen2.platform.blueprints/releases/blueprints/ves.yaml
    --2018-01-04 19:24:42--  https://nexus.onap.org/service/local/repositories/raw/content/org.onap.dcaegen2.platform.blueprints/releases/blueprints/ves.yaml
    Resolving nexus.onap.org (nexus.onap.org)... 199.204.45.137, 2604:e100:1:0:f816:3eff:fefb:56ed
    Connecting to nexus.onap.org (nexus.onap.org)|199.204.45.137|:443... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 4285 (4.2K) [text/x-yaml]
    Saving to: './blueprints/ves/ves.yaml'
    
         0K ....                                                  100%  377M=0s
    
    2018-01-04 19:24:42 (377 MB/s) - './blueprints/ves/ves.yaml' saved [4285/4285]
    
    + wget -P ./blueprints/tca/ https://nexus.onap.org/service/local/repositories/raw/content/org.onap.dcaegen2.platform.blueprints/releases/blueprints/tca.yaml
    --2018-01-04 19:24:42--  https://nexus.onap.org/service/local/repositories/raw/content/org.onap.dcaegen2.platform.blueprints/releases/blueprints/tca.yaml
    Resolving nexus.onap.org (nexus.onap.org)... 199.204.45.137, 2604:e100:1:0:f816:3eff:fefb:56ed
    Connecting to nexus.onap.org (nexus.onap.org)|199.204.45.137|:443... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 5903 (5.8K) [text/x-yaml]
    Saving to: './blueprints/tca/tca.yaml'
    
         0K .....                                                 100%  521M=0s
    
    2018-01-04 19:24:42 (521 MB/s) - './blueprints/tca/tca.yaml' saved [5903/5903]
    
    + wget -P ./blueprints/hrules/ https://nexus.onap.org/service/local/repositories/raw/content/org.onap.dcaegen2.platform.blueprints/releases/blueprints/holmes-rules.yaml
    --2018-01-04 19:24:42--  https://nexus.onap.org/service/local/repositories/raw/content/org.onap.dcaegen2.platform.blueprints/releases/blueprints/holmes-rules.yaml
    Resolving nexus.onap.org (nexus.onap.org)... 199.204.45.137, 2604:e100:1:0:f816:3eff:fefb:56ed
    Connecting to nexus.onap.org (nexus.onap.org)|199.204.45.137|:443... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 7034 (6.9K) [text/x-yaml]
    Saving to: './blueprints/hrules/holmes-rules.yaml'
    
         0K ......                                                100%  863M=0s
    
    2018-01-04 19:24:42 (863 MB/s) - './blueprints/hrules/holmes-rules.yaml' saved [7034/7034]
    
    + wget -P ./blueprints/hengine/ https://nexus.onap.org/service/local/repositories/raw/content/org.onap.dcaegen2.platform.blueprints/releases/blueprints/holmes-engine.yaml
    --2018-01-04 19:24:42--  https://nexus.onap.org/service/local/repositories/raw/content/org.onap.dcaegen2.platform.blueprints/releases/blueprints/holmes-engine.yaml
    Resolving nexus.onap.org (nexus.onap.org)... 199.204.45.137, 2604:e100:1:0:f816:3eff:fefb:56ed
    Connecting to nexus.onap.org (nexus.onap.org)|199.204.45.137|:443... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 3458 (3.4K) [text/x-yaml]
    Saving to: './blueprints/hengine/holmes-engine.yaml'
    
         0K ...                                                   100% 78.1M=0s
    
    2018-01-04 19:24:42 (78.1 MB/s) - './blueprints/hengine/holmes-engine.yaml' saved [3458/3458]
    
    + curl -X PUT -H 'Content-Type: application/json' --data-binary '[{"username":"docker", "password":"docker", "registry": "nexus3.onap.org:10001"}]' http://10.195.200.32:8500/v1/kv/docker_plugin/docker_logins
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100    85  100     4  100    81    416   8440 --:--:-- --:--:-- --:--:--  9000
    + set +e
    + cfy install -v -p ./blueprints/docker/DockerBP.yaml -b DockerBP -d DockerPlatform -i .././config/inputs.yaml -i registered_dockerhost_name=platform_dockerhost -i registrator_image=onapdcae/registrator:v7 -i location_id=MbOr -i node_name=dokp00 -i target_datacenter=MbOr
    {"consul":{"ID":"consul","Service":"consul","Tags":[],"Address":"","Port":8300,"EnableTagOverride":false,"CreateIndex":0,"ModifyIndex":0}}trueUploading blueprint ./blueprints/docker/DockerBP.yaml...
    Blueprint uploaded. The blueprint's id is DockerBP
    Processing inputs source: .././config/inputs.yaml
    Processing inputs source: registered_dockerhost_name=platform_dockerhost
    Processing inputs source: registrator_image=onapdcae/registrator:v7
    Processing inputs source: location_id=MbOr
    Processing inputs source: node_name=dokp00
    Processing inputs source: target_datacenter=MbOr
    Creating new deployment from blueprint DockerBP...
    Deployment created. The deployment's id is DockerPlatform
    Executing workflow install on deployment DockerPlatform [timeout=900 seconds]
    Deployment environment creation is in progress...
    2018-01-04T19:24:48 CFY <DockerPlatform> Starting 'create_deployment_environment' workflow execution
    2018-01-04T19:24:48 CFY <DockerPlatform> Installing deployment plugins
    2018-01-04T19:24:48 CFY <DockerPlatform> Sending task 'cloudify_agent.operations.install_plugins'
    2018-01-04T19:24:48 CFY <DockerPlatform> Task started 'cloudify_agent.operations.install_plugins'
    2018-01-04T19:24:50 CFY <DockerPlatform> Task succeeded 'cloudify_agent.operations.install_plugins'
    2018-01-04T19:24:50 CFY <DockerPlatform> Skipping starting deployment policy engine core - no policies defined
    2018-01-04T19:24:51 CFY <DockerPlatform> Creating deployment work directory
    2018-01-04T19:24:51 CFY <DockerPlatform> 'create_deployment_environment' workflow execution succeeded
    2018-01-04T19:24:55 CFY <DockerPlatform> Starting 'install' workflow execution
    2018-01-04T19:24:56 CFY <DockerPlatform> [private_net_90a71] Creating node
    2018-01-04T19:24:56 CFY <DockerPlatform> [security_group_85b13] Creating node
    2018-01-04T19:24:56 CFY <DockerPlatform> [key_pair_ae478] Creating node
    2018-01-04T19:24:56 CFY <DockerPlatform> [floatingip_dokr00_8e046] Creating node
    2018-01-04T19:24:56 CFY <DockerPlatform> [security_group_85b13.create] Sending task 'neutron_plugin.security_group.create'
    2018-01-04T19:24:56 CFY <DockerPlatform> [key_pair_ae478.create] Sending task 'nova_plugin.keypair.create'
    2018-01-04T19:24:56 CFY <DockerPlatform> [private_net_90a71.create] Sending task 'neutron_plugin.network.create'
    2018-01-04T19:24:56 CFY <DockerPlatform> [security_group_85b13.create] Task started 'neutron_plugin.security_group.create'
    2018-01-04T19:24:56 CFY <DockerPlatform> [key_pair_ae478.create] Task started 'nova_plugin.keypair.create'
    2018-01-04T19:24:56 CFY <DockerPlatform> [private_net_90a71.create] Task started 'neutron_plugin.network.create'
    2018-01-04T19:24:56 CFY <DockerPlatform> [floatingip_dokr00_8e046.create] Sending task 'neutron_plugin.floatingip.create'
    2018-01-04T19:24:56 CFY <DockerPlatform> [floatingip_dokr00_8e046.create] Task started 'neutron_plugin.floatingip.create'
    2018-01-04T19:24:57 CFY <DockerPlatform> [key_pair_ae478.create] Task succeeded 'nova_plugin.keypair.create'
    2018-01-04T19:24:57 CFY <DockerPlatform> [security_group_85b13.create] Task succeeded 'neutron_plugin.security_group.create'
    2018-01-04T19:24:57 CFY <DockerPlatform> [private_net_90a71.create] Task succeeded 'neutron_plugin.network.create'
    2018-01-04T19:24:57 CFY <DockerPlatform> [security_group_85b13] Configuring node
    2018-01-04T19:24:57 CFY <DockerPlatform> [key_pair_ae478] Configuring node
    2018-01-04T19:24:57 CFY <DockerPlatform> [private_net_90a71] Configuring node
    2018-01-04T19:24:58 CFY <DockerPlatform> [security_group_85b13] Starting node
    2018-01-04T19:24:58 CFY <DockerPlatform> [private_net_90a71] Starting node
    2018-01-04T19:24:58 CFY <DockerPlatform> [key_pair_ae478] Starting node
    2018-01-04T19:24:58 CFY <DockerPlatform> [fixedip_dokr00_885b8] Creating node
    2018-01-04T19:24:58 CFY <DockerPlatform> [fixedip_dokr00_885b8.create] Sending task 'neutron_plugin.port.create'
    2018-01-04T19:24:58 CFY <DockerPlatform> [fixedip_dokr00_885b8.create] Task started 'neutron_plugin.port.create'
    2018-01-04T19:24:59 CFY <DockerPlatform> [floatingip_dokr00_8e046.create] Task succeeded 'neutron_plugin.floatingip.create'
    2018-01-04T19:24:59 CFY <DockerPlatform> [floatingip_dokr00_8e046] Configuring node
    2018-01-04T19:24:59 CFY <DockerPlatform> [floatingip_dokr00_8e046] Starting node
    2018-01-04T19:25:00 CFY <DockerPlatform> [dns_dokr00_11b93] Creating node
    2018-01-04T19:25:00 CFY <DockerPlatform> [dns_dokr00_11b93.create] Sending task 'dnsdesig.dns_plugin.aneeded'
    2018-01-04T19:25:00 CFY <DockerPlatform> [dns_dokr00_11b93.create] Task started 'dnsdesig.dns_plugin.aneeded'
    2018-01-04T19:25:00 CFY <DockerPlatform> [fixedip_dokr00_885b8.create] Task succeeded 'neutron_plugin.port.create'
    2018-01-04T19:25:01 CFY <DockerPlatform> [fixedip_dokr00_885b8] Configuring node
    2018-01-04T19:25:01 CFY <DockerPlatform> [dns_dokr00_11b93.create] Task succeeded 'dnsdesig.dns_plugin.aneeded'
    2018-01-04T19:25:01 CFY <DockerPlatform> [fixedip_dokr00_885b8] Starting node
    2018-01-04T19:25:01 CFY <DockerPlatform> [dns_dokr00_11b93] Configuring node
    2018-01-04T19:25:02 CFY <DockerPlatform> [dns_dokr00_11b93] Starting node
    2018-01-04T19:25:02 CFY <DockerPlatform> [host_dokr00_e4365] Creating node
    2018-01-04T19:25:02 CFY <DockerPlatform> [host_dokr00_e4365.create] Sending task 'nova_plugin.server.create'
    2018-01-04T19:25:03 CFY <DockerPlatform> [host_dokr00_e4365.create] Task started 'nova_plugin.server.create'
    2018-01-04T19:25:05 CFY <DockerPlatform> [host_dokr00_e4365.create] Task succeeded 'nova_plugin.server.create'
    2018-01-04T19:25:05 CFY <DockerPlatform> [host_dokr00_e4365] Configuring node
    2018-01-04T19:25:06 CFY <DockerPlatform> [host_dokr00_e4365] Starting node
    2018-01-04T19:25:06 CFY <DockerPlatform> [host_dokr00_e4365.start] Sending task 'nova_plugin.server.start'
    2018-01-04T19:25:06 CFY <DockerPlatform> [host_dokr00_e4365.start] Task started 'nova_plugin.server.start'
    2018-01-04T19:25:08 CFY <DockerPlatform> [host_dokr00_e4365.start] Task rescheduled 'nova_plugin.server.start' -> Waiting for server to be in ACTIVE state but is in BUILD:spawning state. Retrying... [retry_after=30]
    Traceback (most recent call last):
      File "/tmp/pip-build-c5GA7o/cloudify-plugins-common/cloudify/dispatch.py", line 596, in main
      File "/tmp/pip-build-c5GA7o/cloudify-plugins-common/cloudify/dispatch.py", line 376, in handle
    OperationRetry: Waiting for server to be in ACTIVE state but is in BUILD:spawning state. Retrying... [retry_after=30]
    
    2018-01-04T19:25:38 CFY <DockerPlatform> [host_dokr00_e4365.start] Sending task 'nova_plugin.server.start' [retry 1]
    2018-01-04T19:25:38 CFY <DockerPlatform> [host_dokr00_e4365.start] Task started 'nova_plugin.server.start' [retry 1]
    2018-01-04T19:25:39 CFY <DockerPlatform> [host_dokr00_e4365.start] Task succeeded 'nova_plugin.server.start' [retry 1]
    2018-01-04T19:25:40 CFY <DockerPlatform> [host_dokr00_e4365->security_group_85b13|establish] Sending task 'nova_plugin.server.connect_security_group'
    2018-01-04T19:25:40 CFY <DockerPlatform> [host_dokr00_e4365->security_group_85b13|establish] Task started 'nova_plugin.server.connect_security_group'
    2018-01-04T19:25:42 CFY <DockerPlatform> [host_dokr00_e4365->security_group_85b13|establish] Task succeeded 'nova_plugin.server.connect_security_group'
    2018-01-04T19:25:43 CFY <DockerPlatform> [host_dokr00_e4365->floatingip_dokr00_8e046|establish] Sending task 'nova_plugin.server.connect_floatingip'
    2018-01-04T19:25:43 CFY <DockerPlatform> [host_dokr00_e4365->floatingip_dokr00_8e046|establish] Task started 'nova_plugin.server.connect_floatingip'
    2018-01-04T19:25:45 CFY <DockerPlatform> [host_dokr00_e4365->floatingip_dokr00_8e046|establish] Task succeeded 'nova_plugin.server.connect_floatingip'
    2018-01-04T19:25:46 CFY <DockerPlatform> [docker_host_93428] Creating node
    2018-01-04T19:25:46 CFY <DockerPlatform> [docker_host_93428.create] Sending task 'dockerplugin.select_docker_host'
    2018-01-04T19:25:46 CFY <DockerPlatform> [docker_host_93428.create] Task started 'dockerplugin.select_docker_host'
    2018-01-04T19:25:47 CFY <DockerPlatform> [docker_host_93428.create] Task succeeded 'dockerplugin.select_docker_host'
    2018-01-04T19:25:47 CFY <DockerPlatform> [docker_host_93428] Configuring node
    2018-01-04T19:25:48 CFY <DockerPlatform> [docker_host_93428] Starting node
    2018-01-04T19:25:48 CFY <DockerPlatform> [registrator_19c9a] Creating node
    2018-01-04T19:25:49 CFY <DockerPlatform> [registrator_19c9a->docker_host_93428|preconfigure] Sending task 'relationshipplugin.forward_destination_info'
    2018-01-04T19:25:49 CFY <DockerPlatform> [registrator_19c9a->docker_host_93428|preconfigure] Task started 'relationshipplugin.forward_destination_info'
    2018-01-04T19:25:49 CFY <DockerPlatform> [registrator_19c9a->docker_host_93428|preconfigure] Task succeeded 'relationshipplugin.forward_destination_info'
    2018-01-04T19:25:49 CFY <DockerPlatform> [registrator_19c9a] Configuring node
    2018-01-04T19:25:50 CFY <DockerPlatform> [registrator_19c9a] Starting node
    2018-01-04T19:25:50 CFY <DockerPlatform> [registrator_19c9a.start] Sending task 'dockerplugin.create_and_start_container'
    2018-01-04T19:25:50 CFY <DockerPlatform> [registrator_19c9a.start] Task started 'dockerplugin.create_and_start_container'
    2018-01-04T19:25:51 CFY <DockerPlatform> [registrator_19c9a.start] Task failed 'dockerplugin.create_and_start_container' -> Failed to find: platform_dockerhost
    Traceback (most recent call last):
      File "/tmp/pip-build-c5GA7o/cloudify-plugins-common/cloudify/dispatch.py", line 596, in main
      File "/tmp/pip-build-c5GA7o/cloudify-plugins-common/cloudify/dispatch.py", line 366, in handle
      File "/opt/mgmtworker/env/plugins/dockerplugin-2.4.0/lib/python2.7/site-packages/dockerplugin/decorators.py", line 53, in wrapper
        raise RecoverableError(e)
    RecoverableError: Failed to find: platform_dockerhost
    
    2018-01-04T19:26:21 CFY <DockerPlatform> [registrator_19c9a.start] Sending task 'dockerplugin.create_and_start_container' [retry 1]
    2018-01-04T19:26:21 CFY <DockerPlatform> [registrator_19c9a.start] Task started 'dockerplugin.create_and_start_container' [retry 1]
    2018-01-04T19:26:22 CFY <DockerPlatform> [registrator_19c9a.start] Task failed 'dockerplugin.create_and_start_container' -> ('Connection aborted.', error(111, 'Connection refused')) [retry 1]
    Traceback (most recent call last):
      File "/tmp/pip-build-c5GA7o/cloudify-plugins-common/cloudify/dispatch.py", line 596, in main
      File "/tmp/pip-build-c5GA7o/cloudify-plugins-common/cloudify/dispatch.py", line 366, in handle
      File "/opt/mgmtworker/env/plugins/dockerplugin-2.4.0/lib/python2.7/site-packages/dockerplugin/decorators.py", line 53, in wrapper
        raise RecoverableError(e)
    RecoverableError: ('Connection aborted.', error(111, 'Connection refused'))
    
    2018-01-04T19:26:52 CFY <DockerPlatform> [registrator_19c9a.start] Sending task 'dockerplugin.create_and_start_container' [retry 2]
    2018-01-04T19:26:52 CFY <DockerPlatform> [registrator_19c9a.start] Task started 'dockerplugin.create_and_start_container' [retry 2]
    2018-01-04T19:26:52 CFY <DockerPlatform> [registrator_19c9a.start] Task failed 'dockerplugin.create_and_start_container' -> ('Connection aborted.', error(111, 'Connection refused')) [retry 2]
    Traceback (most recent call last):
      File "/tmp/pip-build-c5GA7o/cloudify-plugins-common/cloudify/dispatch.py", line 596, in main
      File "/tmp/pip-build-c5GA7o/cloudify-plugins-common/cloudify/dispatch.py", line 366, in handle
      File "/opt/mgmtworker/env/plugins/dockerplugin-2.4.0/lib/python2.7/site-packages/dockerplugin/decorators.py", line 53, in wrapper
        raise RecoverableError(e)
    RecoverableError: ('Connection aborted.', error(111, 'Connection refused'))
    
    2018-01-04T19:27:22 CFY <DockerPlatform> [registrator_19c9a.start] Sending task 'dockerplugin.create_and_start_container' [retry 3]
    2018-01-04T19:27:23 CFY <DockerPlatform> [registrator_19c9a.start] Task started 'dockerplugin.create_and_start_container' [retry 3]
    2018-01-04T19:27:23 CFY <DockerPlatform> [registrator_19c9a.start] Task failed 'dockerplugin.create_and_start_container' -> ('Connection aborted.', error(111, 'Connection refused')) [retry 3]
    Traceback (most recent call last):
      File "/tmp/pip-build-c5GA7o/cloudify-plugins-common/cloudify/dispatch.py", line 596, in main
      File "/tmp/pip-build-c5GA7o/cloudify-plugins-common/cloudify/dispatch.py", line 366, in handle
      File "/opt/mgmtworker/env/plugins/dockerplugin-2.4.0/lib/python2.7/site-packages/dockerplugin/decorators.py", line 53, in wrapper
        raise RecoverableError(e)
    RecoverableError: ('Connection aborted.', error(111, 'Connection refused'))
    
    2018-01-04T19:27:53 CFY <DockerPlatform> [registrator_19c9a.start] Sending task 'dockerplugin.create_and_start_container' [retry 4]
    2018-01-04T19:27:53 CFY <DockerPlatform> [registrator_19c9a.start] Task started 'dockerplugin.create_and_start_container' [retry 4]
    2018-01-04T19:28:07 CFY <DockerPlatform> [registrator_19c9a.start] Task succeeded 'dockerplugin.create_and_start_container' [retry 4]
    2018-01-04T19:28:07 CFY <DockerPlatform> 'install' workflow execution succeeded
    Finished executing workflow install on deployment DockerPlatform
    * Run 'cfy events list --include-logs --execution-id 70b1fab8-9c95-43bd-8ccb-f9fc0fb3a7ad' to retrieve the execution's events/logs
    + cfy deployments create -b DockerBP -d DockerComponent -i .././config/inputs.yaml -i registered_dockerhost_name=component_dockerhost -i location_id=MbOr -i registrator_image=onapdcae/registrator:v7 -i node_name=doks00 -i target_datacenter=MbOr
    Processing inputs source: .././config/inputs.yaml
    Processing inputs source: registered_dockerhost_name=component_dockerhost
    Processing inputs source: location_id=MbOr
    Processing inputs source: registrator_image=onapdcae/registrator:v7
    Processing inputs source: node_name=doks00
    Processing inputs source: target_datacenter=MbOr
    Creating new deployment from blueprint DockerBP...
    Deployment created. The deployment's id is DockerComponent
    + cfy executions start -d DockerComponent -w install
    Executing workflow install on deployment DockerComponent [timeout=900 seconds]
    Deployment environment creation is in progress...
    2018-01-04T19:28:14 CFY <DockerComponent> Starting 'create_deployment_environment' workflow execution
    2018-01-04T19:28:14 CFY <DockerComponent> Installing deployment plugins
    2018-01-04T19:28:14 CFY <DockerComponent> Sending task 'cloudify_agent.operations.install_plugins'
    2018-01-04T19:28:14 CFY <DockerComponent> Task started 'cloudify_agent.operations.install_plugins'
    2018-01-04T19:28:16 CFY <DockerComponent> Task succeeded 'cloudify_agent.operations.install_plugins'
    2018-01-04T19:28:16 CFY <DockerComponent> Skipping starting deployment policy engine core - no policies defined
    2018-01-04T19:28:16 CFY <DockerComponent> Creating deployment work directory
    2018-01-04T19:28:16 CFY <DockerComponent> 'create_deployment_environment' workflow execution succeeded
    2018-01-04T19:28:21 CFY <DockerComponent> Starting 'install' workflow execution
    2018-01-04T19:28:22 CFY <DockerComponent> [private_net_71c2c] Creating node
    2018-01-04T19:28:22 CFY <DockerComponent> [security_group_a8752] Creating node
    2018-01-04T19:28:22 CFY <DockerComponent> [security_group_a8752.create] Sending task 'neutron_plugin.security_group.create'
    2018-01-04T19:28:22 CFY <DockerComponent> [floatingip_dokr00_7d781] Creating node
    2018-01-04T19:28:22 CFY <DockerComponent> [key_pair_b3f96] Creating node
    2018-01-04T19:28:22 CFY <DockerComponent> [security_group_a8752.create] Task started 'neutron_plugin.security_group.create'
    2018-01-04T19:28:22 CFY <DockerComponent> [floatingip_dokr00_7d781.create] Sending task 'neutron_plugin.floatingip.create'
    2018-01-04T19:28:22 CFY <DockerComponent> [key_pair_b3f96.create] Sending task 'nova_plugin.keypair.create'
    2018-01-04T19:28:22 CFY <DockerComponent> [private_net_71c2c.create] Sending task 'neutron_plugin.network.create'
    2018-01-04T19:28:22 CFY <DockerComponent> [floatingip_dokr00_7d781.create] Task started 'neutron_plugin.floatingip.create'
    2018-01-04T19:28:22 CFY <DockerComponent> [key_pair_b3f96.create] Task started 'nova_plugin.keypair.create'
    2018-01-04T19:28:22 CFY <DockerComponent> [private_net_71c2c.create] Task started 'neutron_plugin.network.create'
    2018-01-04T19:28:23 CFY <DockerComponent> [key_pair_b3f96.create] Task succeeded 'nova_plugin.keypair.create'
    2018-01-04T19:28:23 CFY <DockerComponent> [security_group_a8752.create] Task succeeded 'neutron_plugin.security_group.create'
    2018-01-04T19:28:23 CFY <DockerComponent> [private_net_71c2c.create] Task succeeded 'neutron_plugin.network.create'
    2018-01-04T19:28:23 CFY <DockerComponent> [security_group_a8752] Configuring node
    2018-01-04T19:28:23 CFY <DockerComponent> [key_pair_b3f96] Configuring node
    2018-01-04T19:28:23 CFY <DockerComponent> [private_net_71c2c] Configuring node
    2018-01-04T19:28:24 CFY <DockerComponent> [security_group_a8752] Starting node
    2018-01-04T19:28:24 CFY <DockerComponent> [key_pair_b3f96] Starting node
    2018-01-04T19:28:24 CFY <DockerComponent> [private_net_71c2c] Starting node
    2018-01-04T19:28:24 CFY <DockerComponent> [floatingip_dokr00_7d781.create] Task succeeded 'neutron_plugin.floatingip.create'
    2018-01-04T19:28:25 CFY <DockerComponent> [fixedip_dokr00_75e79] Creating node
    2018-01-04T19:28:25 CFY <DockerComponent> [fixedip_dokr00_75e79.create] Sending task 'neutron_plugin.port.create'
    2018-01-04T19:28:25 CFY <DockerComponent> [fixedip_dokr00_75e79.create] Task started 'neutron_plugin.port.create'
    2018-01-04T19:28:25 CFY <DockerComponent> [floatingip_dokr00_7d781] Configuring node
    2018-01-04T19:28:25 CFY <DockerComponent> [floatingip_dokr00_7d781] Starting node
    2018-01-04T19:28:26 CFY <DockerComponent> [dns_dokr00_3c596] Creating node
    2018-01-04T19:28:26 CFY <DockerComponent> [dns_dokr00_3c596.create] Sending task 'dnsdesig.dns_plugin.aneeded'
    2018-01-04T19:28:26 CFY <DockerComponent> [dns_dokr00_3c596.create] Task started 'dnsdesig.dns_plugin.aneeded'
    2018-01-04T19:28:27 CFY <DockerComponent> [fixedip_dokr00_75e79.create] Task succeeded 'neutron_plugin.port.create'
    2018-01-04T19:28:27 CFY <DockerComponent> [dns_dokr00_3c596.create] Task succeeded 'dnsdesig.dns_plugin.aneeded'
    2018-01-04T19:28:27 CFY <DockerComponent> [fixedip_dokr00_75e79] Configuring node
    2018-01-04T19:28:27 CFY <DockerComponent> [dns_dokr00_3c596] Configuring node
    2018-01-04T19:28:27 CFY <DockerComponent> [fixedip_dokr00_75e79] Starting node
    2018-01-04T19:28:28 CFY <DockerComponent> [dns_dokr00_3c596] Starting node
    2018-01-04T19:28:29 CFY <DockerComponent> [host_dokr00_1928a] Creating node
    2018-01-04T19:28:29 CFY <DockerComponent> [host_dokr00_1928a.create] Sending task 'nova_plugin.server.create'
    2018-01-04T19:28:29 CFY <DockerComponent> [host_dokr00_1928a.create] Task started 'nova_plugin.server.create'
    2018-01-04T19:28:31 CFY <DockerComponent> [host_dokr00_1928a.create] Task succeeded 'nova_plugin.server.create'
    2018-01-04T19:28:32 CFY <DockerComponent> [host_dokr00_1928a] Configuring node
    2018-01-04T19:28:33 CFY <DockerComponent> [host_dokr00_1928a] Starting node
    2018-01-04T19:28:33 CFY <DockerComponent> [host_dokr00_1928a.start] Sending task 'nova_plugin.server.start'
    2018-01-04T19:28:33 CFY <DockerComponent> [host_dokr00_1928a.start] Task started 'nova_plugin.server.start'
    2018-01-04T19:28:34 CFY <DockerComponent> [host_dokr00_1928a.start] Task rescheduled 'nova_plugin.server.start' -> Waiting for server to be in ACTIVE state but is in BUILD:spawning state. Retrying... [retry_after=30]
    2018-01-04T19:29:04 CFY <DockerComponent> [host_dokr00_1928a.start] Sending task 'nova_plugin.server.start' [retry 1]
    2018-01-04T19:29:04 CFY <DockerComponent> [host_dokr00_1928a.start] Task started 'nova_plugin.server.start' [retry 1]
    2018-01-04T19:29:06 CFY <DockerComponent> [host_dokr00_1928a.start] Task succeeded 'nova_plugin.server.start' [retry 1]
    2018-01-04T19:29:06 CFY <DockerComponent> [host_dokr00_1928a->security_group_a8752|establish] Sending task 'nova_plugin.server.connect_security_group'
    2018-01-04T19:29:06 CFY <DockerComponent> [host_dokr00_1928a->security_group_a8752|establish] Task started 'nova_plugin.server.connect_security_group'
    2018-01-04T19:29:09 CFY <DockerComponent> [host_dokr00_1928a->security_group_a8752|establish] Task succeeded 'nova_plugin.server.connect_security_group'
    2018-01-04T19:29:09 CFY <DockerComponent> [host_dokr00_1928a->floatingip_dokr00_7d781|establish] Sending task 'nova_plugin.server.connect_floatingip'
    2018-01-04T19:29:09 CFY <DockerComponent> [host_dokr00_1928a->floatingip_dokr00_7d781|establish] Task started 'nova_plugin.server.connect_floatingip'
    2018-01-04T19:29:12 CFY <DockerComponent> [host_dokr00_1928a->floatingip_dokr00_7d781|establish] Task succeeded 'nova_plugin.server.connect_floatingip'
    2018-01-04T19:29:13 CFY <DockerComponent> [docker_host_bc033] Creating node
    2018-01-04T19:29:13 CFY <DockerComponent> [docker_host_bc033.create] Sending task 'dockerplugin.select_docker_host'
    2018-01-04T19:29:13 CFY <DockerComponent> [docker_host_bc033.create] Task started 'dockerplugin.select_docker_host'
    2018-01-04T19:29:13 CFY <DockerComponent> [docker_host_bc033.create] Task succeeded 'dockerplugin.select_docker_host'
    2018-01-04T19:29:14 CFY <DockerComponent> [docker_host_bc033] Configuring node
    2018-01-04T19:29:14 CFY <DockerComponent> [docker_host_bc033] Starting node
    2018-01-04T19:29:15 CFY <DockerComponent> [registrator_25df6] Creating node
    2018-01-04T19:29:15 CFY <DockerComponent> [registrator_25df6->docker_host_bc033|preconfigure] Sending task 'relationshipplugin.forward_destination_info'
    2018-01-04T19:29:15 CFY <DockerComponent> [registrator_25df6->docker_host_bc033|preconfigure] Task started 'relationshipplugin.forward_destination_info'
    2018-01-04T19:29:16 CFY <DockerComponent> [registrator_25df6->docker_host_bc033|preconfigure] Task succeeded 'relationshipplugin.forward_destination_info'
    2018-01-04T19:29:16 CFY <DockerComponent> [registrator_25df6] Configuring node
    2018-01-04T19:29:16 CFY <DockerComponent> [registrator_25df6] Starting node
    2018-01-04T19:29:16 CFY <DockerComponent> [registrator_25df6.start] Sending task 'dockerplugin.create_and_start_container'
    2018-01-04T19:29:16 CFY <DockerComponent> [registrator_25df6.start] Task started 'dockerplugin.create_and_start_container'
    2018-01-04T19:29:17 CFY <DockerComponent> [registrator_25df6.start] Task failed 'dockerplugin.create_and_start_container' -> Failed to find: component_dockerhost
    2018-01-04T19:29:47 CFY <DockerComponent> [registrator_25df6.start] Sending task 'dockerplugin.create_and_start_container' [retry 1]
    2018-01-04T19:29:47 CFY <DockerComponent> [registrator_25df6.start] Task started 'dockerplugin.create_and_start_container' [retry 1]
    2018-01-04T19:29:48 CFY <DockerComponent> [registrator_25df6.start] Task failed 'dockerplugin.create_and_start_container' -> ('Connection aborted.', error(111, 'Connection refused')) [retry 1]
    2018-01-04T19:30:18 CFY <DockerComponent> [registrator_25df6.start] Sending task 'dockerplugin.create_and_start_container' [retry 2]
    2018-01-04T19:30:18 CFY <DockerComponent> [registrator_25df6.start] Task started 'dockerplugin.create_and_start_container' [retry 2]
    2018-01-04T19:30:19 CFY <DockerComponent> [registrator_25df6.start] Task failed 'dockerplugin.create_and_start_container' -> ('Connection aborted.', error(111, 'Connection refused')) [retry 2]
    2018-01-04T19:30:49 CFY <DockerComponent> [registrator_25df6.start] Sending task 'dockerplugin.create_and_start_container' [retry 3]
    2018-01-04T19:30:49 CFY <DockerComponent> [registrator_25df6.start] Task started 'dockerplugin.create_and_start_container' [retry 3]
    2018-01-04T19:30:49 CFY <DockerComponent> [registrator_25df6.start] Task failed 'dockerplugin.create_and_start_container' -> ('Connection aborted.', error(111, 'Connection refused')) [retry 3]
    2018-01-04T19:31:20 CFY <DockerComponent> [registrator_25df6.start] Sending task 'dockerplugin.create_and_start_container' [retry 4]
    2018-01-04T19:31:20 CFY <DockerComponent> [registrator_25df6.start] Task started 'dockerplugin.create_and_start_container' [retry 4]
    2018-01-04T19:31:20 CFY <DockerComponent> [registrator_25df6.start] Task failed 'dockerplugin.create_and_start_container' -> ('Connection aborted.', error(111, 'Connection refused')) [retry 4]
    2018-01-04T19:31:51 CFY <DockerComponent> [registrator_25df6.start] Sending task 'dockerplugin.create_and_start_container' [retry 5]
    2018-01-04T19:31:51 CFY <DockerComponent> [registrator_25df6.start] Task started 'dockerplugin.create_and_start_container' [retry 5]
    2018-01-04T19:32:04 CFY <DockerComponent> [registrator_25df6.start] Task succeeded 'dockerplugin.create_and_start_container' [retry 5]
    2018-01-04T19:32:04 CFY <DockerComponent> 'install' workflow execution succeeded
    Finished executing workflow install on deployment DockerComponent
    * Run 'cfy events list --include-logs --execution-id 5a98b171-a2c6-4bf1-a872-169eb6b6e09b' to retrieve the execution's events/logs
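    # DockerComponent reuses the already-uploaded DockerBP blueprint, so it is brought up in two
    # steps ("cfy deployments create" followed by "cfy executions start -w install") rather than
    # the single "cfy install" used for the blueprints below, which bundles blueprint upload,
    # deployment creation and the install workflow. The initial "Failed to find: component_dockerhost"
    # error simply means the new Docker host has not yet registered under the name passed in
    # registered_dockerhost_name, and the later "Connection refused" errors are again the Docker
    # daemon still starting; both clear on retry. A rough single-command equivalent would be
    # (blueprint path illustrative only):
    # cfy install -p <DockerBP blueprint path> -b DockerBP -d DockerComponent \
    #     -i .././config/inputs.yaml -i location_id=MbOr ...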
    + cfy install -p ./blueprints/cdap/cdapbp7.yaml -b cdapbp7 -d cdap7 -i ../config/cdapinputs.yaml -i location_id=MbOr
    Uploading blueprint ./blueprints/cdap/cdapbp7.yaml...
    Blueprint uploaded. The blueprint's id is cdapbp7
    Processing inputs source: ../config/cdapinputs.yaml
    Processing inputs source: location_id=MbOr
    Creating new deployment from blueprint cdapbp7...
    Deployment created. The deployment's id is cdap7
    Executing workflow install on deployment cdap7 [timeout=900 seconds]
    Deployment environment creation is in progress...
    2018-01-04T19:32:19 CFY <cdap7> Starting 'create_deployment_environment' workflow execution
    2018-01-04T19:32:19 CFY <cdap7> Installing deployment plugins
    2018-01-04T19:32:19 CFY <cdap7> Sending task 'cloudify_agent.operations.install_plugins'
    2018-01-04T19:32:19 CFY <cdap7> Task started 'cloudify_agent.operations.install_plugins'
    2018-01-04T19:32:20 CFY <cdap7> Task succeeded 'cloudify_agent.operations.install_plugins'
    2018-01-04T19:32:21 CFY <cdap7> Skipping starting deployment policy engine core - no policies defined
    2018-01-04T19:32:21 CFY <cdap7> Creating deployment work directory
    2018-01-04T19:32:21 CFY <cdap7> 'create_deployment_environment' workflow execution succeeded
    2018-01-04T19:32:26 CFY <cdap7> Starting 'install' workflow execution
    2018-01-04T19:32:27 CFY <cdap7> [floatingip_cdap04_46ee7] Creating node
    2018-01-04T19:32:27 CFY <cdap7> [floatingip_cdap05_2dabb] Creating node
    2018-01-04T19:32:27 CFY <cdap7> [private_net_40f29] Creating node
    2018-01-04T19:32:27 CFY <cdap7> [floatingip_cdap00_8bf09] Creating node
    2018-01-04T19:32:27 CFY <cdap7> [security_group_edd3e] Creating node
    2018-01-04T19:32:27 CFY <cdap7> [floatingip_cdap01_17032] Creating node
    2018-01-04T19:32:27 CFY <cdap7> [floatingip_cdap04_46ee7.create] Sending task 'neutron_plugin.floatingip.create'
    2018-01-04T19:32:27 CFY <cdap7> [floatingip_cdap05_2dabb.create] Sending task 'neutron_plugin.floatingip.create'
    2018-01-04T19:32:27 CFY <cdap7> [private_net_40f29.create] Sending task 'neutron_plugin.network.create'
    2018-01-04T19:32:27 CFY <cdap7> [floatingip_cdap04_46ee7.create] Task started 'neutron_plugin.floatingip.create'
    2018-01-04T19:32:27 CFY <cdap7> [floatingip_cdap05_2dabb.create] Task started 'neutron_plugin.floatingip.create'
    2018-01-04T19:32:27 CFY <cdap7> [key_pair_ab53f] Creating node
    2018-01-04T19:32:27 CFY <cdap7> [private_net_40f29.create] Task started 'neutron_plugin.network.create'
    2018-01-04T19:32:27 CFY <cdap7> [floatingip_cdap00_8bf09.create] Sending task 'neutron_plugin.floatingip.create'
    2018-01-04T19:32:27 CFY <cdap7> [floatingip_cdap01_17032.create] Sending task 'neutron_plugin.floatingip.create'
    2018-01-04T19:32:27 CFY <cdap7> [key_pair_ab53f.create] Sending task 'nova_plugin.keypair.create'
    2018-01-04T19:32:27 CFY <cdap7> [floatingip_cdap06_14419] Creating node
    2018-01-04T19:32:27 CFY <cdap7> [floatingip_cdap00_8bf09.create] Task started 'neutron_plugin.floatingip.create'
    2018-01-04T19:32:27 CFY <cdap7> [floatingip_cdap01_17032.create] Task started 'neutron_plugin.floatingip.create'
    2018-01-04T19:32:27 CFY <cdap7> [floatingip_cdap03_2ecea] Creating node
    2018-01-04T19:32:27 CFY <cdap7> [sharedsshkey_cdap_7e819] Creating node
    2018-01-04T19:32:27 CFY <cdap7> [security_group_edd3e.create] Sending task 'neutron_plugin.security_group.create'
    2018-01-04T19:32:27 CFY <cdap7> [floatingip_cdap02_86177] Creating node
    2018-01-04T19:32:27 CFY <cdap7> [floatingip_cdap06_14419.create] Sending task 'neutron_plugin.floatingip.create'
    2018-01-04T19:32:27 CFY <cdap7> [floatingip_cdap03_2ecea.create] Sending task 'neutron_plugin.floatingip.create'
    2018-01-04T19:32:27 CFY <cdap7> [floatingip_cdap02_86177.create] Sending task 'neutron_plugin.floatingip.create'
    2018-01-04T19:32:27 CFY <cdap7> [sharedsshkey_cdap_7e819.create] Sending task 'sshkeyshare.keyshare_plugin.generate'
    2018-01-04T19:32:28 CFY <cdap7> [private_net_40f29.create] Task succeeded 'neutron_plugin.network.create'
    2018-01-04T19:32:28 CFY <cdap7> [key_pair_ab53f.create] Task started 'nova_plugin.keypair.create'
    2018-01-04T19:32:28 CFY <cdap7> [private_net_40f29] Configuring node
    2018-01-04T19:32:29 CFY <cdap7> [private_net_40f29] Starting node
    2018-01-04T19:32:29 CFY <cdap7> [key_pair_ab53f.create] Task succeeded 'nova_plugin.keypair.create'
    2018-01-04T19:32:29 CFY <cdap7> [security_group_edd3e.create] Task started 'neutron_plugin.security_group.create'
    2018-01-04T19:32:29 CFY <cdap7> [floatingip_cdap04_46ee7.create] Task succeeded 'neutron_plugin.floatingip.create'
    2018-01-04T19:32:29 CFY <cdap7> [floatingip_cdap06_14419.create] Task started 'neutron_plugin.floatingip.create'
    2018-01-04T19:32:29 CFY <cdap7> [fixedip_cdap00_ad41d] Creating node
    2018-01-04T19:32:29 CFY <cdap7> [fixedip_cdap02_3037c] Creating node
    2018-01-04T19:32:29 CFY <cdap7> [key_pair_ab53f] Configuring node
    2018-01-04T19:32:30 CFY <cdap7> [fixedip_cdap02_3037c.create] Sending task 'neutron_plugin.port.create'
    2018-01-04T19:32:30 CFY <cdap7> [fixedip_cdap06_0cf14] Creating node
    2018-01-04T19:32:30 CFY <cdap7> [fixedip_cdap03_11749] Creating node
    2018-01-04T19:32:30 CFY <cdap7> [fixedip_cdap05_72159] Creating node
    2018-01-04T19:32:30 CFY <cdap7> [fixedip_cdap01_9a945] Creating node
    2018-01-04T19:32:30 CFY <cdap7> [fixedip_cdap03_11749.create] Sending task 'neutron_plugin.port.create'
    2018-01-04T19:32:30 CFY <cdap7> [fixedip_cdap00_ad41d.create] Sending task 'neutron_plugin.port.create'
    2018-01-04T19:32:30 CFY <cdap7> [fixedip_cdap05_72159.create] Sending task 'neutron_plugin.port.create'
    2018-01-04T19:32:30 CFY <cdap7> [fixedip_cdap06_0cf14.create] Sending task 'neutron_plugin.port.create'
    2018-01-04T19:32:30 CFY <cdap7> [floatingip_cdap01_17032.create] Task succeeded 'neutron_plugin.floatingip.create'
    2018-01-04T19:32:30 CFY <cdap7> [fixedip_cdap04_8020f] Creating node
    2018-01-04T19:32:30 CFY <cdap7> [floatingip_cdap03_2ecea.create] Task started 'neutron_plugin.floatingip.create'
    2018-01-04T19:32:30 CFY <cdap7> [floatingip_cdap05_2dabb.create] Task succeeded 'neutron_plugin.floatingip.create'
    2018-01-04T19:32:30 CFY <cdap7> [floatingip_cdap02_86177.create] Task started 'neutron_plugin.floatingip.create'
    2018-01-04T19:32:30 CFY <cdap7> [fixedip_cdap01_9a945.create] Sending task 'neutron_plugin.port.create'
    2018-01-04T19:32:30 CFY <cdap7> [fixedip_cdap04_8020f.create] Sending task 'neutron_plugin.port.create'
    2018-01-04T19:32:30 CFY <cdap7> [floatingip_cdap00_8bf09.create] Task succeeded 'neutron_plugin.floatingip.create'
    2018-01-04T19:32:30 CFY <cdap7> [sharedsshkey_cdap_7e819.create] Task started 'sshkeyshare.keyshare_plugin.generate'
    2018-01-04T19:32:30 CFY <cdap7> [floatingip_cdap04_46ee7] Configuring node
    2018-01-04T19:32:30 CFY <cdap7> [security_group_edd3e.create] Task succeeded 'neutron_plugin.security_group.create'
    2018-01-04T19:32:30 CFY <cdap7> [fixedip_cdap02_3037c.create] Task started 'neutron_plugin.port.create'
    2018-01-04T19:32:30 CFY <cdap7> [key_pair_ab53f] Starting node
    2018-01-04T19:32:30 CFY <cdap7> [floatingip_cdap01_17032] Configuring node
    2018-01-04T19:32:30 CFY <cdap7> [floatingip_cdap05_2dabb] Configuring node
    2018-01-04T19:32:30 CFY <cdap7> [floatingip_cdap00_8bf09] Configuring node
    2018-01-04T19:32:30 CFY <cdap7> [security_group_edd3e] Configuring node
    2018-01-04T19:32:31 CFY <cdap7> [floatingip_cdap04_46ee7] Starting node
    2018-01-04T19:32:31 CFY <cdap7> [floatingip_cdap01_17032] Starting node
    2018-01-04T19:32:31 CFY <cdap7> [sharedsshkey_cdap_7e819.create] Task succeeded 'sshkeyshare.keyshare_plugin.generate'
    2018-01-04T19:32:31 CFY <cdap7> [fixedip_cdap03_11749.create] Task started 'neutron_plugin.port.create'
    2018-01-04T19:32:31 CFY <cdap7> [floatingip_cdap05_2dabb] Starting node
    2018-01-04T19:32:31 CFY <cdap7> [floatingip_cdap00_8bf09] Starting node
    2018-01-04T19:32:31 CFY <cdap7> [security_group_edd3e] Starting node
    2018-01-04T19:32:31 CFY <cdap7> [sharedsshkey_cdap_7e819] Configuring node
    2018-01-04T19:32:31 CFY <cdap7> [dns_cdap04_b26dc] Creating node
    2018-01-04T19:32:31 CFY <cdap7> [sharedsshkey_cdap_7e819] Starting node
    2018-01-04T19:32:31 CFY <cdap7> [dns_cdap05_a911e] Creating node
    2018-01-04T19:32:31 CFY <cdap7> [dns_cdap04_b26dc.create] Sending task 'dnsdesig.dns_plugin.aneeded'
    2018-01-04T19:32:31 CFY <cdap7> [dns_cdap05_a911e.create] Sending task 'dnsdesig.dns_plugin.aneeded'
    2018-01-04T19:32:31 CFY <cdap7> [dns_cdap01_fe11a] Creating node
    2018-01-04T19:32:31 CFY <cdap7> [dns_cdap00_29740] Creating node
    2018-01-04T19:32:31 CFY <cdap7> [dns_cdap00_29740.create] Sending task 'dnsdesig.dns_plugin.aneeded'
    2018-01-04T19:32:31 CFY <cdap7> [dns_cdap01_fe11a.create] Sending task 'dnsdesig.dns_plugin.aneeded'
    2018-01-04T19:32:32 CFY <cdap7> [fixedip_cdap02_3037c.create] Task succeeded 'neutron_plugin.port.create'
    2018-01-04T19:32:32 CFY <cdap7> [fixedip_cdap00_ad41d.create] Task started 'neutron_plugin.port.create'
    2018-01-04T19:32:32 CFY <cdap7> [floatingip_cdap06_14419.create] Task succeeded 'neutron_plugin.floatingip.create'
    2018-01-04T19:32:32 CFY <cdap7> [fixedip_cdap05_72159.create] Task started 'neutron_plugin.port.create'
    2018-01-04T19:32:32 CFY <cdap7> [floatingip_cdap06_14419] Configuring node
    2018-01-04T19:32:32 CFY <cdap7> [floatingip_cdap03_2ecea.create] Task succeeded 'neutron_plugin.floatingip.create'
    2018-01-04T19:32:32 CFY <cdap7> [fixedip_cdap06_0cf14.create] Task started 'neutron_plugin.port.create'
    2018-01-04T19:32:32 CFY <cdap7> [fixedip_cdap02_3037c] Configuring node
    2018-01-04T19:32:32 CFY <cdap7> [floatingip_cdap02_86177.create] Task succeeded 'neutron_plugin.floatingip.create'
    2018-01-04T19:32:32 CFY <cdap7> [fixedip_cdap01_9a945.create] Task started 'neutron_plugin.port.create'
    2018-01-04T19:32:32 CFY <cdap7> [fixedip_cdap03_11749.create] Task succeeded 'neutron_plugin.port.create'
    2018-01-04T19:32:32 CFY <cdap7> [fixedip_cdap04_8020f.create] Task started 'neutron_plugin.port.create'
    2018-01-04T19:32:32 CFY <cdap7> [floatingip_cdap02_86177] Configuring node
    2018-01-04T19:32:33 CFY <cdap7> [floatingip_cdap03_2ecea] Configuring node
    2018-01-04T19:32:33 CFY <cdap7> [fixedip_cdap03_11749] Configuring node
    2018-01-04T19:32:33 CFY <cdap7> [floatingip_cdap06_14419] Starting node
    2018-01-04T19:32:33 CFY <cdap7> [fixedip_cdap02_3037c] Starting node
    2018-01-04T19:32:33 CFY <cdap7> [floatingip_cdap02_86177] Starting node
    2018-01-04T19:32:33 CFY <cdap7> [floatingip_cdap03_2ecea] Starting node
    2018-01-04T19:32:33 CFY <cdap7> [fixedip_cdap03_11749] Starting node
    2018-01-04T19:32:33 CFY <cdap7> [dns_cdap06_6c3e7] Creating node
    2018-01-04T19:32:33 CFY <cdap7> [fixedip_cdap00_ad41d.create] Task succeeded 'neutron_plugin.port.create'
    2018-01-04T19:32:33 CFY <cdap7> [dns_cdap04_b26dc.create] Task started 'dnsdesig.dns_plugin.aneeded'
    2018-01-04T19:32:33 CFY <cdap7> [dns_cdap06_6c3e7.create] Sending task 'dnsdesig.dns_plugin.aneeded'
    2018-01-04T19:32:34 CFY <cdap7> [dns_cdap02_dbb7e] Creating node
    2018-01-04T19:32:34 CFY <cdap7> [dns_cdap02_dbb7e.create] Sending task 'dnsdesig.dns_plugin.aneeded'
    2018-01-04T19:32:34 CFY <cdap7> [dns_cdap03_76896] Creating node
    2018-01-04T19:32:34 CFY <cdap7> [dns_cdap03_76896.create] Sending task 'dnsdesig.dns_plugin.aneeded'
    2018-01-04T19:32:34 CFY <cdap7> [fixedip_cdap00_ad41d] Configuring node
    2018-01-04T19:32:34 CFY <cdap7> [fixedip_cdap05_72159.create] Task succeeded 'neutron_plugin.port.create'
    2018-01-04T19:32:34 CFY <cdap7> [dns_cdap05_a911e.create] Task started 'dnsdesig.dns_plugin.aneeded'
    2018-01-04T19:32:34 CFY <cdap7> [fixedip_cdap05_72159] Configuring node
    2018-01-04T19:32:34 CFY <cdap7> [fixedip_cdap00_ad41d] Starting node
    2018-01-04T19:32:34 CFY <cdap7> [dns_cdap04_b26dc.create] Task succeeded 'dnsdesig.dns_plugin.aneeded'
    2018-01-04T19:32:34 CFY <cdap7> [dns_cdap00_29740.create] Task started 'dnsdesig.dns_plugin.aneeded'
    2018-01-04T19:32:34 CFY <cdap7> [fixedip_cdap06_0cf14.create] Task succeeded 'neutron_plugin.port.create'
    2018-01-04T19:32:34 CFY <cdap7> [dns_cdap01_fe11a.create] Task started 'dnsdesig.dns_plugin.aneeded'
    2018-01-04T19:32:34 CFY <cdap7> [fixedip_cdap01_9a945.create] Task succeeded 'neutron_plugin.port.create'
    2018-01-04T19:32:34 CFY <cdap7> [dns_cdap06_6c3e7.create] Task started 'dnsdesig.dns_plugin.aneeded'
    2018-01-04T19:32:34 CFY <cdap7> [fixedip_cdap04_8020f.create] Task succeeded 'neutron_plugin.port.create'
    2018-01-04T19:32:34 CFY <cdap7> [dns_cdap02_dbb7e.create] Task started 'dnsdesig.dns_plugin.aneeded'
    2018-01-04T19:32:35 CFY <cdap7> [dns_cdap05_a911e.create] Task succeeded 'dnsdesig.dns_plugin.aneeded'
    2018-01-04T19:32:35 CFY <cdap7> [fixedip_cdap05_72159] Starting node
    2018-01-04T19:32:35 CFY <cdap7> [dns_cdap03_76896.create] Task started 'dnsdesig.dns_plugin.aneeded'
    2018-01-04T19:32:35 CFY <cdap7> [dns_cdap04_b26dc] Configuring node
    2018-01-04T19:32:35 CFY <cdap7> [fixedip_cdap04_8020f] Configuring node
    2018-01-04T19:32:35 CFY <cdap7> [fixedip_cdap01_9a945] Configuring node
    2018-01-04T19:32:35 CFY <cdap7> [fixedip_cdap06_0cf14] Configuring node
    2018-01-04T19:32:35 CFY <cdap7> [dns_cdap00_29740.create] Task succeeded 'dnsdesig.dns_plugin.aneeded'
    2018-01-04T19:32:35 CFY <cdap7> [dns_cdap04_b26dc] Starting node
    2018-01-04T19:32:35 CFY <cdap7> [dns_cdap01_fe11a.create] Task succeeded 'dnsdesig.dns_plugin.aneeded'
    2018-01-04T19:32:35 CFY <cdap7> [dns_cdap05_a911e] Configuring node
    2018-01-04T19:32:35 CFY <cdap7> [dns_cdap02_dbb7e.create] Task succeeded 'dnsdesig.dns_plugin.aneeded'
    2018-01-04T19:32:35 CFY <cdap7> [fixedip_cdap01_9a945] Starting node
    2018-01-04T19:32:35 CFY <cdap7> [dns_cdap06_6c3e7.create] Task succeeded 'dnsdesig.dns_plugin.aneeded'
    2018-01-04T19:32:35 CFY <cdap7> [fixedip_cdap04_8020f] Starting node
    2018-01-04T19:32:35 CFY <cdap7> [fixedip_cdap06_0cf14] Starting node
    2018-01-04T19:32:36 CFY <cdap7> [dns_cdap03_76896.create] Task succeeded 'dnsdesig.dns_plugin.aneeded'
    2018-01-04T19:32:36 CFY <cdap7> [dns_cdap01_fe11a] Configuring node
    2018-01-04T19:32:36 CFY <cdap7> [dns_cdap00_29740] Configuring node
    2018-01-04T19:32:36 CFY <cdap7> [dns_cdap05_a911e] Starting node
    2018-01-04T19:32:36 CFY <cdap7> [dns_cdap02_dbb7e] Configuring node
    2018-01-04T19:32:36 CFY <cdap7> [dns_cdap06_6c3e7] Configuring node
    2018-01-04T19:32:36 CFY <cdap7> [dns_cdap03_76896] Configuring node
    2018-01-04T19:32:36 CFY <cdap7> [dns_cdap00_29740] Starting node
    2018-01-04T19:32:36 CFY <cdap7> [dns_cdap06_6c3e7] Starting node
    2018-01-04T19:32:36 CFY <cdap7> [dns_cdap01_fe11a] Starting node
    2018-01-04T19:32:36 CFY <cdap7> [dns_cdap02_dbb7e] Starting node
    2018-01-04T19:32:37 CFY <cdap7> [dns_cdap03_76896] Starting node
    2018-01-04T19:32:37 CFY <cdap7> [hostdeps_cdap_2000e] Creating node
    2018-01-04T19:32:39 CFY <cdap7> [hostdeps_cdap_2000e] Configuring node
    2018-01-04T19:32:41 CFY <cdap7> [hostdeps_cdap_2000e] Starting node
    2018-01-04T19:32:43 CFY <cdap7> [host_cdap00_3a75e] Creating node
    2018-01-04T19:32:43 CFY <cdap7> [host_cdap02_a1c90] Creating node
    2018-01-04T19:32:43 CFY <cdap7> [host_cdap00_3a75e.create] Sending task 'nova_plugin.server.create'
    2018-01-04T19:32:43 CFY <cdap7> [host_cdap02_a1c90.create] Sending task 'nova_plugin.server.create'
    2018-01-04T19:32:43 CFY <cdap7> [host_cdap00_3a75e.create] Task started 'nova_plugin.server.create'
    2018-01-04T19:32:43 CFY <cdap7> [host_cdap02_a1c90.create] Task started 'nova_plugin.server.create'
    2018-01-04T19:32:43 CFY <cdap7> [host_cdap04_144be] Creating node
    2018-01-04T19:32:43 CFY <cdap7> [host_cdap01_56310] Creating node
    2018-01-04T19:32:43 CFY <cdap7> [host_cdap06_662d8] Creating node
    2018-01-04T19:32:43 CFY <cdap7> [host_cdap03_831d4] Creating node
    2018-01-04T19:32:43 CFY <cdap7> [host_cdap03_831d4.create] Sending task 'nova_plugin.server.create'
    2018-01-04T19:32:43 CFY <cdap7> [host_cdap01_56310.create] Sending task 'nova_plugin.server.create'
    2018-01-04T19:32:43 CFY <cdap7> [host_cdap03_831d4.create] Task started 'nova_plugin.server.create'
    2018-01-04T19:32:43 CFY <cdap7> [host_cdap06_662d8.create] Sending task 'nova_plugin.server.create'
    2018-01-04T19:32:43 CFY <cdap7> [host_cdap01_56310.create] Task started 'nova_plugin.server.create'
    2018-01-04T19:32:43 CFY <cdap7> [host_cdap05_6675e] Creating node
    2018-01-04T19:32:43 CFY <cdap7> [host_cdap06_662d8.create] Task started 'nova_plugin.server.create'
    2018-01-04T19:32:43 CFY <cdap7> [host_cdap04_144be.create] Sending task 'nova_plugin.server.create'
    2018-01-04T19:32:43 CFY <cdap7> [host_cdap05_6675e.create] Sending task 'nova_plugin.server.create'
    2018-01-04T19:32:45 CFY <cdap7> [host_cdap01_56310.create] Task succeeded 'nova_plugin.server.create'
    2018-01-04T19:32:45 CFY <cdap7> [host_cdap04_144be.create] Task started 'nova_plugin.server.create'
    2018-01-04T19:32:45 CFY <cdap7> [host_cdap06_662d8.create] Task succeeded 'nova_plugin.server.create'
    2018-01-04T19:32:45 CFY <cdap7> [host_cdap05_6675e.create] Task started 'nova_plugin.server.create'
    2018-01-04T19:32:45 CFY <cdap7> [host_cdap03_831d4.create] Task succeeded 'nova_plugin.server.create'
    2018-01-04T19:32:45 CFY <cdap7> [host_cdap00_3a75e.create] Task succeeded 'nova_plugin.server.create'
    2018-01-04T19:32:45 CFY <cdap7> [host_cdap02_a1c90.create] Task succeeded 'nova_plugin.server.create'
    2018-01-04T19:32:46 CFY <cdap7> [host_cdap01_56310] Configuring node
    2018-01-04T19:32:46 CFY <cdap7> [host_cdap03_831d4] Configuring node
    2018-01-04T19:32:46 CFY <cdap7> [host_cdap06_662d8] Configuring node
    2018-01-04T19:32:46 CFY <cdap7> [host_cdap00_3a75e] Configuring node
    2018-01-04T19:32:46 CFY <cdap7> [host_cdap02_a1c90] Configuring node
    2018-01-04T19:32:47 CFY <cdap7> [host_cdap01_56310] Starting node
    2018-01-04T19:32:47 CFY <cdap7> [host_cdap01_56310.start] Sending task 'nova_plugin.server.start'
    2018-01-04T19:32:47 CFY <cdap7> [host_cdap00_3a75e] Starting node
    2018-01-04T19:32:47 CFY <cdap7> [host_cdap01_56310.start] Task started 'nova_plugin.server.start'
    2018-01-04T19:32:47 CFY <cdap7> [host_cdap06_662d8] Starting node
    2018-01-04T19:32:47 CFY <cdap7> [host_cdap02_a1c90] Starting node
    2018-01-04T19:32:47 CFY <cdap7> [host_cdap03_831d4] Starting node
    2018-01-04T19:32:47 CFY <cdap7> [host_cdap03_831d4.start] Sending task 'nova_plugin.server.start'
    2018-01-04T19:32:47 CFY <cdap7> [host_cdap00_3a75e.start] Sending task 'nova_plugin.server.start'
    2018-01-04T19:32:47 CFY <cdap7> [host_cdap03_831d4.start] Task started 'nova_plugin.server.start'
    2018-01-04T19:32:47 CFY <cdap7> [host_cdap00_3a75e.start] Task started 'nova_plugin.server.start'
    2018-01-04T19:32:47 CFY <cdap7> [host_cdap04_144be.create] Task succeeded 'nova_plugin.server.create'
    2018-01-04T19:32:47 CFY <cdap7> [host_cdap05_6675e.create] Task succeeded 'nova_plugin.server.create'
    2018-01-04T19:32:47 CFY <cdap7> [host_cdap02_a1c90.start] Sending task 'nova_plugin.server.start'
    2018-01-04T19:32:47 CFY <cdap7> [host_cdap06_662d8.start] Sending task 'nova_plugin.server.start'
    2018-01-04T19:32:47 CFY <cdap7> [host_cdap02_a1c90.start] Task started 'nova_plugin.server.start'
    2018-01-04T19:32:47 CFY <cdap7> [host_cdap06_662d8.start] Task started 'nova_plugin.server.start'
    2018-01-04T19:32:48 CFY <cdap7> [host_cdap01_56310.start] Task rescheduled 'nova_plugin.server.start' -> Waiting for server to be in ACTIVE state but is in BUILD:spawning state. Retrying... [retry_after=30]
    2018-01-04T19:32:48 CFY <cdap7> [host_cdap05_6675e] Configuring node
    2018-01-04T19:32:48 CFY <cdap7> [host_cdap03_831d4.start] Task rescheduled 'nova_plugin.server.start' -> Waiting for server to be in ACTIVE state but is in BUILD:spawning state. Retrying... [retry_after=30]
    2018-01-04T19:32:48 CFY <cdap7> [host_cdap00_3a75e.start] Task rescheduled 'nova_plugin.server.start' -> Waiting for server to be in ACTIVE state but is in BUILD:networking state. Retrying... [retry_after=30]
    2018-01-04T19:32:48 CFY <cdap7> [host_cdap04_144be] Configuring node
    2018-01-04T19:32:49 CFY <cdap7> [host_cdap02_a1c90.start] Task rescheduled 'nova_plugin.server.start' -> Waiting for server to be in ACTIVE state but is in BUILD:networking state. Retrying... [retry_after=30]
    2018-01-04T19:32:49 CFY <cdap7> [host_cdap06_662d8.start] Task rescheduled 'nova_plugin.server.start' -> Waiting for server to be in ACTIVE state but is in BUILD:spawning state. Retrying... [retry_after=30]
    2018-01-04T19:32:49 CFY <cdap7> [host_cdap05_6675e] Starting node
    2018-01-04T19:32:49 CFY <cdap7> [host_cdap05_6675e.start] Sending task 'nova_plugin.server.start'
    2018-01-04T19:32:49 CFY <cdap7> [host_cdap04_144be] Starting node
    2018-01-04T19:32:49 CFY <cdap7> [host_cdap05_6675e.start] Task started 'nova_plugin.server.start'
    2018-01-04T19:32:50 CFY <cdap7> [host_cdap04_144be.start] Sending task 'nova_plugin.server.start'
    2018-01-04T19:32:50 CFY <cdap7> [host_cdap04_144be.start] Task started 'nova_plugin.server.start'
    2018-01-04T19:32:50 CFY <cdap7> [host_cdap05_6675e.start] Task rescheduled 'nova_plugin.server.start' -> Waiting for server to be in ACTIVE state but is in BUILD:None state. Retrying... [retry_after=30]
    2018-01-04T19:32:51 CFY <cdap7> [host_cdap04_144be.start] Task rescheduled 'nova_plugin.server.start' -> Waiting for server to be in ACTIVE state but is in BUILD:None state. Retrying... [retry_after=30]
    2018-01-04T19:33:18 CFY <cdap7> [host_cdap01_56310.start] Sending task 'nova_plugin.server.start' [retry 1]
    2018-01-04T19:33:18 CFY <cdap7> [host_cdap01_56310.start] Task started 'nova_plugin.server.start' [retry 1]
    2018-01-04T19:33:18 CFY <cdap7> [host_cdap03_831d4.start] Sending task 'nova_plugin.server.start' [retry 1]
    2018-01-04T19:33:18 CFY <cdap7> [host_cdap00_3a75e.start] Sending task 'nova_plugin.server.start' [retry 1]
    2018-01-04T19:33:18 CFY <cdap7> [host_cdap03_831d4.start] Task started 'nova_plugin.server.start' [retry 1]
    2018-01-04T19:33:19 CFY <cdap7> [host_cdap00_3a75e.start] Task started 'nova_plugin.server.start' [retry 1]
    2018-01-04T19:33:19 CFY <cdap7> [host_cdap02_a1c90.start] Sending task 'nova_plugin.server.start' [retry 1]
    2018-01-04T19:33:19 CFY <cdap7> [host_cdap06_662d8.start] Sending task 'nova_plugin.server.start' [retry 1]
    2018-01-04T19:33:19 CFY <cdap7> [host_cdap02_a1c90.start] Task started 'nova_plugin.server.start' [retry 1]
    2018-01-04T19:33:19 CFY <cdap7> [host_cdap06_662d8.start] Task started 'nova_plugin.server.start' [retry 1]
    2018-01-04T19:33:19 CFY <cdap7> [host_cdap01_56310.start] Task succeeded 'nova_plugin.server.start' [retry 1]
    2018-01-04T19:33:20 CFY <cdap7> [host_cdap03_831d4.start] Task succeeded 'nova_plugin.server.start' [retry 1]
    2018-01-04T19:33:20 CFY <cdap7> [host_cdap06_662d8.start] Task succeeded 'nova_plugin.server.start' [retry 1]
    2018-01-04T19:33:20 CFY <cdap7> [host_cdap02_a1c90.start] Task succeeded 'nova_plugin.server.start' [retry 1]
    2018-01-04T19:33:20 CFY <cdap7> [host_cdap00_3a75e.start] Task succeeded 'nova_plugin.server.start' [retry 1]
    2018-01-04T19:33:20 CFY <cdap7> [host_cdap01_56310->security_group_edd3e|establish] Sending task 'nova_plugin.server.connect_security_group'
    2018-01-04T19:33:20 CFY <cdap7> [host_cdap01_56310->security_group_edd3e|establish] Task started 'nova_plugin.server.connect_security_group'
    2018-01-04T19:33:20 CFY <cdap7> [host_cdap03_831d4->security_group_edd3e|establish] Sending task 'nova_plugin.server.connect_security_group'
    2018-01-04T19:33:20 CFY <cdap7> [host_cdap03_831d4->security_group_edd3e|establish] Task started 'nova_plugin.server.connect_security_group'
    2018-01-04T19:33:20 CFY <cdap7> [host_cdap06_662d8->security_group_edd3e|establish] Sending task 'nova_plugin.server.connect_security_group'
    2018-01-04T19:33:20 CFY <cdap7> [host_cdap06_662d8->security_group_edd3e|establish] Task started 'nova_plugin.server.connect_security_group'
    2018-01-04T19:33:20 CFY <cdap7> [host_cdap02_a1c90->security_group_edd3e|establish] Sending task 'nova_plugin.server.connect_security_group'
    2018-01-04T19:33:20 CFY <cdap7> [host_cdap00_3a75e->security_group_edd3e|establish] Sending task 'nova_plugin.server.connect_security_group'
    2018-01-04T19:33:20 CFY <cdap7> [host_cdap02_a1c90->security_group_edd3e|establish] Task started 'nova_plugin.server.connect_security_group'
    2018-01-04T19:33:20 CFY <cdap7> [host_cdap00_3a75e->security_group_edd3e|establish] Task started 'nova_plugin.server.connect_security_group'
    2018-01-04T19:33:21 CFY <cdap7> [host_cdap05_6675e.start] Sending task 'nova_plugin.server.start' [retry 1]
    2018-01-04T19:33:21 CFY <cdap7> [host_cdap04_144be.start] Sending task 'nova_plugin.server.start' [retry 1]
    2018-01-04T19:33:23 CFY <cdap7> [host_cdap06_662d8->security_group_edd3e|establish] Task succeeded 'nova_plugin.server.connect_security_group'
    2018-01-04T19:33:23 CFY <cdap7> [host_cdap05_6675e.start] Task started 'nova_plugin.server.start' [retry 1]
    2018-01-04T19:33:23 CFY <cdap7> [host_cdap01_56310->security_group_edd3e|establish] Task succeeded 'nova_plugin.server.connect_security_group'
    2018-01-04T19:33:23 CFY <cdap7> [host_cdap01_56310->floatingip_cdap01_17032|establish] Sending task 'nova_plugin.server.connect_floatingip'
    2018-01-04T19:33:23 CFY <cdap7> [host_cdap04_144be.start] Task started 'nova_plugin.server.start' [retry 1]
    2018-01-04T19:33:23 CFY <cdap7> [host_cdap06_662d8->floatingip_cdap06_14419|establish] Sending task 'nova_plugin.server.connect_floatingip'
    2018-01-04T19:33:23 CFY <cdap7> [host_cdap00_3a75e->security_group_edd3e|establish] Task succeeded 'nova_plugin.server.connect_security_group'
    2018-01-04T19:33:23 CFY <cdap7> [host_cdap01_56310->floatingip_cdap01_17032|establish] Task started 'nova_plugin.server.connect_floatingip'
    2018-01-04T19:33:23 CFY <cdap7> [host_cdap03_831d4->security_group_edd3e|establish] Task succeeded 'nova_plugin.server.connect_security_group'
    2018-01-04T19:33:23 CFY <cdap7> [host_cdap03_831d4->floatingip_cdap03_2ecea|establish] Sending task 'nova_plugin.server.connect_floatingip'
    2018-01-04T19:33:23 CFY <cdap7> [host_cdap06_662d8->floatingip_cdap06_14419|establish] Task started 'nova_plugin.server.connect_floatingip'
    2018-01-04T19:33:23 CFY <cdap7> [host_cdap00_3a75e->floatingip_cdap00_8bf09|establish] Sending task 'nova_plugin.server.connect_floatingip'
    2018-01-04T19:33:23 CFY <cdap7> [host_cdap02_a1c90->security_group_edd3e|establish] Task succeeded 'nova_plugin.server.connect_security_group'
    2018-01-04T19:33:23 CFY <cdap7> [host_cdap03_831d4->floatingip_cdap03_2ecea|establish] Task started 'nova_plugin.server.connect_floatingip'
    2018-01-04T19:33:23 CFY <cdap7> [host_cdap02_a1c90->floatingip_cdap02_86177|establish] Sending task 'nova_plugin.server.connect_floatingip'
    2018-01-04T19:33:24 CFY <cdap7> [host_cdap04_144be.start] Task succeeded 'nova_plugin.server.start' [retry 1]
    2018-01-04T19:33:24 CFY <cdap7> [host_cdap00_3a75e->floatingip_cdap00_8bf09|establish] Task started 'nova_plugin.server.connect_floatingip'
    2018-01-04T19:33:24 CFY <cdap7> [host_cdap05_6675e.start] Task succeeded 'nova_plugin.server.start' [retry 1]
    2018-01-04T19:33:24 CFY <cdap7> [host_cdap02_a1c90->floatingip_cdap02_86177|establish] Task started 'nova_plugin.server.connect_floatingip'
    2018-01-04T19:33:24 CFY <cdap7> [host_cdap05_6675e->security_group_edd3e|establish] Sending task 'nova_plugin.server.connect_security_group'
    2018-01-04T19:33:24 CFY <cdap7> [host_cdap04_144be->security_group_edd3e|establish] Sending task 'nova_plugin.server.connect_security_group'
    2018-01-04T19:33:25 CFY <cdap7> [host_cdap06_662d8->floatingip_cdap06_14419|establish] Task succeeded 'nova_plugin.server.connect_floatingip'
    2018-01-04T19:33:25 CFY <cdap7> [host_cdap05_6675e->security_group_edd3e|establish] Task started 'nova_plugin.server.connect_security_group'
    2018-01-04T19:33:25 CFY <cdap7> [host_cdap01_56310->floatingip_cdap01_17032|establish] Task succeeded 'nova_plugin.server.connect_floatingip'
    2018-01-04T19:33:25 CFY <cdap7> [host_cdap04_144be->security_group_edd3e|establish] Task started 'nova_plugin.server.connect_security_group'
    2018-01-04T19:33:26 CFY <cdap7> [host_cdap03_831d4->floatingip_cdap03_2ecea|establish] Task succeeded 'nova_plugin.server.connect_floatingip'
    2018-01-04T19:33:26 CFY <cdap7> [host_cdap00_3a75e->floatingip_cdap00_8bf09|establish] Task succeeded 'nova_plugin.server.connect_floatingip'
    2018-01-04T19:33:27 CFY <cdap7> [host_cdap02_a1c90->floatingip_cdap02_86177|establish] Task succeeded 'nova_plugin.server.connect_floatingip'
    2018-01-04T19:33:27 CFY <cdap7> [host_cdap04_144be->security_group_edd3e|establish] Task succeeded 'nova_plugin.server.connect_security_group'
    2018-01-04T19:33:27 CFY <cdap7> [host_cdap05_6675e->security_group_edd3e|establish] Task succeeded 'nova_plugin.server.connect_security_group'
    2018-01-04T19:33:27 CFY <cdap7> [host_cdap05_6675e->floatingip_cdap05_2dabb|establish] Sending task 'nova_plugin.server.connect_floatingip'
    2018-01-04T19:33:27 CFY <cdap7> [host_cdap04_144be->floatingip_cdap04_46ee7|establish] Sending task 'nova_plugin.server.connect_floatingip'
    2018-01-04T19:33:27 CFY <cdap7> [host_cdap05_6675e->floatingip_cdap05_2dabb|establish] Task started 'nova_plugin.server.connect_floatingip'
    2018-01-04T19:33:27 CFY <cdap7> [host_cdap04_144be->floatingip_cdap04_46ee7|establish] Task started 'nova_plugin.server.connect_floatingip'
    2018-01-04T19:33:30 CFY <cdap7> [host_cdap04_144be->floatingip_cdap04_46ee7|establish] Task succeeded 'nova_plugin.server.connect_floatingip'
    2018-01-04T19:33:30 CFY <cdap7> [host_cdap05_6675e->floatingip_cdap05_2dabb|establish] Task succeeded 'nova_plugin.server.connect_floatingip'
    2018-01-04T19:33:30 CFY <cdap7> 'install' workflow execution succeeded
    Finished executing workflow install on deployment cdap7
    * Run 'cfy events list --include-logs --execution-id 3da33b6b-6d4b-4013-8a29-ed550c634553' to retrieve the execution's events/logs
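    # The cdap7 install is the heavyweight step of this run: it creates seven hosts
    # (cdap00..cdap06), each with its own port, floating IP and DNS A record, plus a shared SSH
    # key for the cluster, and then waits for Nova to report ACTIVE. The "Task rescheduled ...
    # BUILD:spawning/networking" messages are the normal 30-second polling loop, not errors.
    # If the OpenStack client is available, the underlying VMs can also be watched with:
    # openstack server list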
    + cfy install -p ./blueprints/cbs/config_binding_service.yaml -b config_binding_service -d config_binding_service -i location_id=MbOr
    Uploading blueprint ./blueprints/cbs/config_binding_service.yaml...
    Blueprint uploaded. The blueprint's id is config_binding_service
    Processing inputs source: location_id=MbOr
    Creating new deployment from blueprint config_binding_service...
    Deployment created. The deployment's id is config_binding_service
    Executing workflow install on deployment config_binding_service [timeout=900 seconds]
    Deployment environment creation is in progress...
    2018-01-04T19:33:38 CFY <config_binding_service> Starting 'create_deployment_environment' workflow execution
    2018-01-04T19:33:39 CFY <config_binding_service> Installing deployment plugins
    2018-01-04T19:33:39 CFY <config_binding_service> Sending task 'cloudify_agent.operations.install_plugins'
    2018-01-04T19:33:39 CFY <config_binding_service> Task started 'cloudify_agent.operations.install_plugins'
    2018-01-04T19:33:40 CFY <config_binding_service> Task succeeded 'cloudify_agent.operations.install_plugins'
    2018-01-04T19:33:40 CFY <config_binding_service> Skipping starting deployment policy engine core - no policies defined
    2018-01-04T19:33:40 CFY <config_binding_service> Creating deployment work directory
    2018-01-04T19:33:40 CFY <config_binding_service> 'create_deployment_environment' workflow execution succeeded
    2018-01-04T19:33:45 CFY <config_binding_service> Starting 'install' workflow execution
    2018-01-04T19:33:46 CFY <config_binding_service> [docker_host_5feb1] Creating node
    2018-01-04T19:33:46 CFY <config_binding_service> [docker_host_5feb1.create] Sending task 'dockerplugin.select_docker_host'
    2018-01-04T19:33:46 CFY <config_binding_service> [docker_host_5feb1.create] Task started 'dockerplugin.select_docker_host'
    2018-01-04T19:33:47 CFY <config_binding_service> [docker_host_5feb1.create] Task succeeded 'dockerplugin.select_docker_host'
    2018-01-04T19:33:47 CFY <config_binding_service> [docker_host_5feb1] Configuring node
    2018-01-04T19:33:47 CFY <config_binding_service> [docker_host_5feb1] Starting node
    2018-01-04T19:33:48 CFY <config_binding_service> [service-config-binding_ff57f] Creating node
    2018-01-04T19:33:48 CFY <config_binding_service> [service-config-binding_ff57f.create] Sending task 'dockerplugin.create_for_platforms'
    2018-01-04T19:33:48 CFY <config_binding_service> [service-config-binding_ff57f.create] Task started 'dockerplugin.create_for_platforms'
    2018-01-04T19:33:49 CFY <config_binding_service> [service-config-binding_ff57f.create] Task succeeded 'dockerplugin.create_for_platforms'
    2018-01-04T19:33:49 CFY <config_binding_service> [service-config-binding_ff57f->docker_host_5feb1|preconfigure] Sending task 'relationshipplugin.forward_destination_info'
    2018-01-04T19:33:49 CFY <config_binding_service> [service-config-binding_ff57f->docker_host_5feb1|preconfigure] Task started 'relationshipplugin.forward_destination_info'
    2018-01-04T19:33:49 CFY <config_binding_service> [service-config-binding_ff57f->docker_host_5feb1|preconfigure] Task succeeded 'relationshipplugin.forward_destination_info'
    2018-01-04T19:33:50 CFY <config_binding_service> [service-config-binding_ff57f] Configuring node
    2018-01-04T19:33:50 CFY <config_binding_service> [service-config-binding_ff57f] Starting node
    2018-01-04T19:33:50 CFY <config_binding_service> [service-config-binding_ff57f.start] Sending task 'dockerplugin.create_and_start_container_for_platforms'
    2018-01-04T19:33:50 CFY <config_binding_service> [service-config-binding_ff57f.start] Task started 'dockerplugin.create_and_start_container_for_platforms'
    2018-01-04T19:34:32 CFY <config_binding_service> [service-config-binding_ff57f.start] Task succeeded 'dockerplugin.create_and_start_container_for_platforms'
    2018-01-04T19:34:33 CFY <config_binding_service> 'install' workflow execution succeeded
    Finished executing workflow install on deployment config_binding_service
    * Run 'cfy events list --include-logs --execution-id 8bdc0946-9572-4a21-90fa-af6e6c39e119' to retrieve the execution's events/logs
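    # config_binding_service is a platform container only: no new VM is created here; the plugin
    # selects an existing Docker host (dockerplugin.select_docker_host) and starts the container
    # on it. If needed, the container can be checked directly on that Docker host with the usual
    # Docker tooling (exact container name depends on the deployment), e.g.:
    # docker ps --filter name=config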
    + cfy install -p ./blueprints/pg/pgaas-onevm.yaml -b pgaas -d pgaas -i .././config/inputs.yaml
    Uploading blueprint ./blueprints/pg/pgaas-onevm.yaml...
    Blueprint uploaded. The blueprint's id is pgaas
    Processing inputs source: .././config/inputs.yaml
    Creating new deployment from blueprint pgaas...
    Deployment created. The deployment's id is pgaas
    Executing workflow install on deployment pgaas [timeout=900 seconds]
    Deployment environment creation is in progress...
    2018-01-04T19:34:42 CFY <pgaas> Starting 'create_deployment_environment' workflow execution
    2018-01-04T19:34:43 CFY <pgaas> Installing deployment plugins
    2018-01-04T19:34:43 CFY <pgaas> Sending task 'cloudify_agent.operations.install_plugins'
    2018-01-04T19:34:43 CFY <pgaas> Task started 'cloudify_agent.operations.install_plugins'
    2018-01-04T19:34:45 CFY <pgaas> Task succeeded 'cloudify_agent.operations.install_plugins'
    2018-01-04T19:34:45 CFY <pgaas> Skipping starting deployment policy engine core - no policies defined
    2018-01-04T19:34:45 CFY <pgaas> Creating deployment work directory
    2018-01-04T19:34:45 CFY <pgaas> 'create_deployment_environment' workflow execution succeeded
    2018-01-04T19:34:50 CFY <pgaas> Starting 'install' workflow execution
    2018-01-04T19:34:50 CFY <pgaas> [sharedsshkey_pgrs_afc6d] Creating node
    2018-01-04T19:34:50 CFY <pgaas> [dns_pgrs_rw_673bc] Creating node
    2018-01-04T19:34:50 CFY <pgaas> [dns_pgrs_ro_104a8] Creating node
    2018-01-04T19:34:50 CFY <pgaas> [key_pair_3051b] Creating node
    2018-01-04T19:34:50 CFY <pgaas> [sharedsshkey_pgrs_afc6d.create] Sending task 'sshkeyshare.keyshare_plugin.generate'
    2018-01-04T19:34:50 CFY <pgaas> [dns_pgrs_rw_673bc.create] Sending task 'dnsdesig.dns_plugin.cnameneeded'
    2018-01-04T19:34:50 CFY <pgaas> [sharedsshkey_pgrs_afc6d.create] Task started 'sshkeyshare.keyshare_plugin.generate'
    2018-01-04T19:34:50 CFY <pgaas> [dns_pgrs_rw_673bc.create] Task started 'dnsdesig.dns_plugin.cnameneeded'
    2018-01-04T19:34:50 CFY <pgaas> [security_group_ee9c0] Creating node
    2018-01-04T19:34:50 CFY <pgaas> [private_net_ffb53] Creating node
    2018-01-04T19:34:50 CFY <pgaas> [key_pair_3051b.create] Sending task 'nova_plugin.keypair.create'
    2018-01-04T19:34:51 CFY <pgaas> [dns_pgrs_ro_104a8.create] Sending task 'dnsdesig.dns_plugin.cnameneeded'
    2018-01-04T19:34:51 CFY <pgaas> [security_group_ee9c0.create] Sending task 'neutron_plugin.security_group.create'
    2018-01-04T19:34:51 CFY <pgaas> [floatingip_pgrs00_99dfd] Creating node
    2018-01-04T19:34:51 CFY <pgaas> [key_pair_3051b.create] Task started 'nova_plugin.keypair.create'
    2018-01-04T19:34:51 CFY <pgaas> [dns_pgrs_ro_104a8.create] Task started 'dnsdesig.dns_plugin.cnameneeded'
    2018-01-04T19:34:51 CFY <pgaas> [security_group_ee9c0.create] Task started 'neutron_plugin.security_group.create'
    2018-01-04T19:34:51 CFY <pgaas> [floatingip_pgrs00_99dfd.create] Sending task 'neutron_plugin.floatingip.create'
    2018-01-04T19:34:51 CFY <pgaas> [private_net_ffb53.create] Sending task 'neutron_plugin.network.create'
    2018-01-04T19:34:51 CFY <pgaas> [sharedsshkey_pgrs_afc6d.create] Task succeeded 'sshkeyshare.keyshare_plugin.generate'
    2018-01-04T19:34:51 CFY <pgaas> [floatingip_pgrs00_99dfd.create] Task started 'neutron_plugin.floatingip.create'
    2018-01-04T19:34:51 CFY <pgaas> [dns_pgrs_rw_673bc.create] Task succeeded 'dnsdesig.dns_plugin.cnameneeded'
    2018-01-04T19:34:51 CFY <pgaas> [private_net_ffb53.create] Task started 'neutron_plugin.network.create'
    2018-01-04T19:34:51 CFY <pgaas> [sharedsshkey_pgrs_afc6d] Configuring node
    2018-01-04T19:34:51 CFY <pgaas> [key_pair_3051b.create] Task succeeded 'nova_plugin.keypair.create'
    2018-01-04T19:34:52 CFY <pgaas> [dns_pgrs_ro_104a8.create] Task succeeded 'dnsdesig.dns_plugin.cnameneeded'
    2018-01-04T19:34:52 CFY <pgaas> [dns_pgrs_rw_673bc] Configuring node
    2018-01-04T19:34:52 CFY <pgaas> [security_group_ee9c0.create] Task succeeded 'neutron_plugin.security_group.create'
    2018-01-04T19:34:52 CFY <pgaas> [dns_pgrs_ro_104a8] Configuring node
    2018-01-04T19:34:52 CFY <pgaas> [sharedsshkey_pgrs_afc6d] Starting node
    2018-01-04T19:34:52 CFY <pgaas> [key_pair_3051b] Configuring node
    2018-01-04T19:34:52 CFY <pgaas> [dns_pgrs_rw_673bc] Starting node
    2018-01-04T19:34:52 CFY <pgaas> [security_group_ee9c0] Configuring node
    2018-01-04T19:34:52 CFY <pgaas> [dns_pgrs_ro_104a8] Starting node
    2018-01-04T19:34:52 CFY <pgaas> [private_net_ffb53.create] Task succeeded 'neutron_plugin.network.create'
    2018-01-04T19:34:53 CFY <pgaas> [key_pair_3051b] Starting node
    2018-01-04T19:34:53 CFY <pgaas> [security_group_ee9c0] Starting node
    2018-01-04T19:34:53 CFY <pgaas> [private_net_ffb53] Configuring node
    2018-01-04T19:34:53 CFY <pgaas> [pgaas_cluster_4111b] Creating node
    2018-01-04T19:34:53 CFY <pgaas> [private_net_ffb53] Starting node
    2018-01-04T19:34:53 CFY <pgaas> [pgaas_cluster_4111b.create] Sending task 'pgaas.pgaas_plugin.add_pgaas_cluster'
    2018-01-04T19:34:53 CFY <pgaas> [pgaas_cluster_4111b.create] Task started 'pgaas.pgaas_plugin.add_pgaas_cluster'
    2018-01-04T19:34:54 CFY <pgaas> [floatingip_pgrs00_99dfd.create] Task succeeded 'neutron_plugin.floatingip.create'
    2018-01-04T19:34:54 CFY <pgaas> [pgaas_cluster_4111b.create] Task succeeded 'pgaas.pgaas_plugin.add_pgaas_cluster'
    2018-01-04T19:34:54 CFY <pgaas> [fixedip_pgrs00_85733] Creating node
    2018-01-04T19:34:54 CFY <pgaas> [fixedip_pgrs00_85733.create] Sending task 'neutron_plugin.port.create'
    2018-01-04T19:34:54 CFY <pgaas> [fixedip_pgrs00_85733.create] Task started 'neutron_plugin.port.create'
    2018-01-04T19:34:54 CFY <pgaas> [floatingip_pgrs00_99dfd] Configuring node
    2018-01-04T19:34:54 CFY <pgaas> [floatingip_pgrs00_99dfd] Starting node
    2018-01-04T19:34:54 CFY <pgaas> [pgaas_cluster_4111b] Configuring node
    2018-01-04T19:34:55 CFY <pgaas> [pgaas_cluster_4111b] Starting node
    2018-01-04T19:34:55 CFY <pgaas> [dns_pgrs00_ec537] Creating node
    2018-01-04T19:34:55 CFY <pgaas> [dns_pgrs00_ec537.create] Sending task 'dnsdesig.dns_plugin.aneeded'
    2018-01-04T19:34:55 CFY <pgaas> [dns_pgrs00_ec537.create] Task started 'dnsdesig.dns_plugin.aneeded'
    2018-01-04T19:34:56 CFY <pgaas> [fixedip_pgrs00_85733.create] Task succeeded 'neutron_plugin.port.create'
    2018-01-04T19:34:56 CFY <pgaas> [fixedip_pgrs00_85733] Configuring node
    2018-01-04T19:34:56 CFY <pgaas> [dns_pgrs00_ec537.create] Task succeeded 'dnsdesig.dns_plugin.aneeded'
    2018-01-04T19:34:57 CFY <pgaas> [dns_pgrs00_ec537] Configuring node
    2018-01-04T19:34:57 CFY <pgaas> [fixedip_pgrs00_85733] Starting node
    2018-01-04T19:34:57 CFY <pgaas> [dns_pgrs00_ec537] Starting node
    2018-01-04T19:34:58 CFY <pgaas> [host_pgrs00_66678] Creating node
    2018-01-04T19:34:58 CFY <pgaas> [host_pgrs00_66678.create] Sending task 'nova_plugin.server.create'
    2018-01-04T19:34:58 CFY <pgaas> [host_pgrs00_66678.create] Task started 'nova_plugin.server.create'
    2018-01-04T19:35:00 CFY <pgaas> [host_pgrs00_66678.create] Task succeeded 'nova_plugin.server.create'
    2018-01-04T19:35:01 CFY <pgaas> [host_pgrs00_66678] Configuring node
    2018-01-04T19:35:02 CFY <pgaas> [host_pgrs00_66678] Starting node
    2018-01-04T19:35:02 CFY <pgaas> [host_pgrs00_66678.start] Sending task 'nova_plugin.server.start'
    2018-01-04T19:35:02 CFY <pgaas> [host_pgrs00_66678.start] Task started 'nova_plugin.server.start'
    2018-01-04T19:35:03 CFY <pgaas> [host_pgrs00_66678.start] Task rescheduled 'nova_plugin.server.start' -> Waiting for server to be in ACTIVE state but is in BUILD:spawning state. Retrying... [retry_after=30]
    2018-01-04T19:35:33 CFY <pgaas> [host_pgrs00_66678.start] Sending task 'nova_plugin.server.start' [retry 1]
    2018-01-04T19:35:34 CFY <pgaas> [host_pgrs00_66678.start] Task started 'nova_plugin.server.start' [retry 1]
    2018-01-04T19:35:35 CFY <pgaas> [host_pgrs00_66678.start] Task succeeded 'nova_plugin.server.start' [retry 1]
    2018-01-04T19:35:35 CFY <pgaas> [host_pgrs00_66678->security_group_ee9c0|establish] Sending task 'nova_plugin.server.connect_security_group'
    2018-01-04T19:35:35 CFY <pgaas> [host_pgrs00_66678->security_group_ee9c0|establish] Task started 'nova_plugin.server.connect_security_group'
    2018-01-04T19:35:38 CFY <pgaas> [host_pgrs00_66678->security_group_ee9c0|establish] Task succeeded 'nova_plugin.server.connect_security_group'
    2018-01-04T19:35:38 CFY <pgaas> [host_pgrs00_66678->floatingip_pgrs00_99dfd|establish] Sending task 'nova_plugin.server.connect_floatingip'
    2018-01-04T19:35:38 CFY <pgaas> [host_pgrs00_66678->floatingip_pgrs00_99dfd|establish] Task started 'nova_plugin.server.connect_floatingip'
    2018-01-04T19:35:41 CFY <pgaas> [host_pgrs00_66678->floatingip_pgrs00_99dfd|establish] Task succeeded 'nova_plugin.server.connect_floatingip'
    2018-01-04T19:35:42 CFY <pgaas> 'install' workflow execution succeeded
    Finished executing workflow install on deployment pgaas
    * Run 'cfy events list --include-logs --execution-id 71b9cfe3-cddb-44ca-ab3d-e818007f0765' to retrieve the execution's events/logs
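    # pgaas brings up a single PostgreSQL host (pgrs00) with read-write and read-only DNS CNAMEs
    # and registers the cluster via pgaas.pgaas_plugin.add_pgaas_cluster. At this point five
    # deployments exist (DockerPlatform, DockerComponent, cdap7, config_binding_service, pgaas);
    # they can be listed from the bootstrap environment with:
    # cfy deployments list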
    + cfy install -p ./blueprints/inv/inventory.yaml -b PlatformServicesInventory -d PlatformServicesInventory -i location_id=MbOr -i ../config/invinputs.yaml
    Uploading blueprint ./blueprints/inv/inventory.yaml...
    Blueprint uploaded. The blueprint's id is PlatformServicesInventory
    Processing inputs source: location_id=MbOr
    Processing inputs source: ../config/invinputs.yaml
    Creating new deployment from blueprint PlatformServicesInventory...
    Deployment created. The deployment's id is PlatformServicesInventory
    Executing workflow install on deployment PlatformServicesInventory [timeout=900 seconds]
    Deployment environment creation is in progress...
    2018-01-04T19:35:53 CFY <PlatformServicesInventory> Starting 'create_deployment_environment' workflow execution
    2018-01-04T19:35:53 CFY <PlatformServicesInventory> Installing deployment plugins
    2018-01-04T19:35:53 CFY <PlatformServicesInventory> Sending task 'cloudify_agent.operations.install_plugins'
    2018-01-04T19:35:53 CFY <PlatformServicesInventory> Task started 'cloudify_agent.operations.install_plugins'
    2018-01-04T19:35:54 CFY <PlatformServicesInventory> Task succeeded 'cloudify_agent.operations.install_plugins'
    2018-01-04T19:35:54 CFY <PlatformServicesInventory> Skipping starting deployment policy engine core - no policies defined
    2018-01-04T19:35:54 CFY <PlatformServicesInventory> Creating deployment work directory
    2018-01-04T19:35:54 CFY <PlatformServicesInventory> 'create_deployment_environment' workflow execution succeeded
    2018-01-04T19:36:00 CFY <PlatformServicesInventory> Starting 'install' workflow execution
    2018-01-04T19:36:00 CFY <PlatformServicesInventory> [docker_host_88070] Creating node
    2018-01-04T19:36:00 CFY <PlatformServicesInventory> [docker_host_88070.create] Sending task 'dockerplugin.select_docker_host'
    2018-01-04T19:36:00 CFY <PlatformServicesInventory> [docker_host_88070.create] Task started 'dockerplugin.select_docker_host'
    2018-01-04T19:36:01 CFY <PlatformServicesInventory> [docker_host_88070.create] Task succeeded 'dockerplugin.select_docker_host'
    2018-01-04T19:36:01 CFY <PlatformServicesInventory> [docker_host_88070] Configuring node
    2018-01-04T19:36:02 CFY <PlatformServicesInventory> [docker_host_88070] Starting node
    2018-01-04T19:36:02 CFY <PlatformServicesInventory> [postgres_e227d] Creating node
    2018-01-04T19:36:03 CFY <PlatformServicesInventory> [postgres_e227d->docker_host_88070|preconfigure] Sending task 'relationshipplugin.forward_destination_info'
    2018-01-04T19:36:03 CFY <PlatformServicesInventory> [postgres_e227d->docker_host_88070|preconfigure] Task started 'relationshipplugin.forward_destination_info'
    2018-01-04T19:36:03 CFY <PlatformServicesInventory> [postgres_e227d->docker_host_88070|preconfigure] Task succeeded 'relationshipplugin.forward_destination_info'
    2018-01-04T19:36:03 CFY <PlatformServicesInventory> [postgres_e227d] Configuring node
    2018-01-04T19:36:04 CFY <PlatformServicesInventory> [postgres_e227d] Starting node
    2018-01-04T19:36:04 CFY <PlatformServicesInventory> [postgres_e227d.start] Sending task 'dockerplugin.create_and_start_container'
    2018-01-04T19:36:04 CFY <PlatformServicesInventory> [postgres_e227d.start] Task started 'dockerplugin.create_and_start_container'
    2018-01-04T19:36:23 CFY <PlatformServicesInventory> [postgres_e227d.start] Task succeeded 'dockerplugin.create_and_start_container'
    2018-01-04T19:36:23 CFY <PlatformServicesInventory> [inventory_ba158] Creating node
    2018-01-04T19:36:23 CFY <PlatformServicesInventory> [inventory_ba158.create] Sending task 'dockerplugin.create_for_platforms'
    2018-01-04T19:36:23 CFY <PlatformServicesInventory> [inventory_ba158.create] Task started 'dockerplugin.create_for_platforms'
    2018-01-04T19:36:24 CFY <PlatformServicesInventory> [inventory_ba158.create] Task succeeded 'dockerplugin.create_for_platforms'
    2018-01-04T19:36:24 CFY <PlatformServicesInventory> [inventory_ba158->docker_host_88070|preconfigure] Sending task 'relationshipplugin.forward_destination_info'
    2018-01-04T19:36:24 CFY <PlatformServicesInventory> [inventory_ba158->docker_host_88070|preconfigure] Task started 'relationshipplugin.forward_destination_info'
    2018-01-04T19:36:25 CFY <PlatformServicesInventory> [inventory_ba158->docker_host_88070|preconfigure] Task succeeded 'relationshipplugin.forward_destination_info'
    2018-01-04T19:36:25 CFY <PlatformServicesInventory> [inventory_ba158] Configuring node
    2018-01-04T19:36:26 CFY <PlatformServicesInventory> [inventory_ba158] Starting node
    2018-01-04T19:36:26 CFY <PlatformServicesInventory> [inventory_ba158.start] Sending task 'dockerplugin.create_and_start_container_for_platforms'
    2018-01-04T19:36:26 CFY <PlatformServicesInventory> [inventory_ba158.start] Task started 'dockerplugin.create_and_start_container_for_platforms'
    2018-01-04T19:37:22 CFY <PlatformServicesInventory> [service-change-handler_5c9bd] Starting node
    2018-01-04T19:37:20 CFY <PlatformServicesInventory> [service-change-handler_5c9bd] Creating node
    2018-01-04T19:37:20 CFY <PlatformServicesInventory> [service-change-handler_5c9bd.create] Sending task 'dockerplugin.create_for_platforms'
    2018-01-04T19:37:20 CFY <PlatformServicesInventory> [service-change-handler_5c9bd.create] Task started 'dockerplugin.create_for_platforms'
    2018-01-04T19:37:20 CFY <PlatformServicesInventory> [service-change-handler_5c9bd.create] Task succeeded 'dockerplugin.create_for_platforms'
    2018-01-04T19:37:21 CFY <PlatformServicesInventory> [service-change-handler_5c9bd->docker_host_88070|preconfigure] Sending task 'relationshipplugin.forward_destination_info'
    2018-01-04T19:37:21 CFY <PlatformServicesInventory> [service-change-handler_5c9bd->docker_host_88070|preconfigure] Task started 'relationshipplugin.forward_destination_info'
    2018-01-04T19:37:21 CFY <PlatformServicesInventory> [service-change-handler_5c9bd->docker_host_88070|preconfigure] Task succeeded 'relationshipplugin.forward_destination_info'
    2018-01-04T19:37:21 CFY <PlatformServicesInventory> [service-change-handler_5c9bd] Configuring node
    2018-01-04T19:37:22 CFY <PlatformServicesInventory> [service-change-handler_5c9bd] Starting node
    2018-01-04T19:37:22 CFY <PlatformServicesInventory> [service-change-handler_5c9bd.start] Sending task 'dockerplugin.create_and_start_container_for_platforms'
    2018-01-04T19:37:22 CFY <PlatformServicesInventory> [service-change-handler_5c9bd.start] Task started 'dockerplugin.create_and_start_container_for_platforms'
    2018-01-04T19:37:44 CFY <PlatformServicesInventory> [service-change-handler_5c9bd.start] Task succeeded 'dockerplugin.create_and_start_container_for_platforms'
    2018-01-04T19:37:45 CFY <PlatformServicesInventory> 'install' workflow execution succeeded
    Finished executing workflow install on deployment PlatformServicesInventory
    * Run 'cfy events list --include-logs --execution-id 3fb2ec52-c86a-4b3e-9208-ecc3cca7dc58' to retrieve the execution's events/logs
    + cat
    + cfy install -p ./blueprints/dh/DeploymentHandler.yaml -b DeploymentHandlerBP -d DeploymentHandler -i location_id=MbOr -i ../dhinputs
    Uploading blueprint ./blueprints/dh/DeploymentHandler.yaml...
    Blueprint uploaded. The blueprint's id is DeploymentHandlerBP
    Processing inputs source: location_id=MbOr
    Processing inputs source: ../dhinputs
    Creating new deployment from blueprint DeploymentHandlerBP...
    Deployment created. The deployment's id is DeploymentHandler
    Executing workflow install on deployment DeploymentHandler [timeout=900 seconds]
    Deployment environment creation is in progress...
    2018-01-04T19:37:55 CFY <DeploymentHandler> Starting 'create_deployment_environment' workflow execution
    2018-01-04T19:37:55 CFY <DeploymentHandler> Installing deployment plugins
    2018-01-04T19:37:55 CFY <DeploymentHandler> Sending task 'cloudify_agent.operations.install_plugins'
    2018-01-04T19:37:55 CFY <DeploymentHandler> Task started 'cloudify_agent.operations.install_plugins'
    2018-01-04T19:37:56 CFY <DeploymentHandler> Task succeeded 'cloudify_agent.operations.install_plugins'
    2018-01-04T19:37:56 CFY <DeploymentHandler> Skipping starting deployment policy engine core - no policies defined
    2018-01-04T19:37:56 CFY <DeploymentHandler> Creating deployment work directory
    2018-01-04T19:37:57 CFY <DeploymentHandler> 'create_deployment_environment' workflow execution succeeded
    2018-01-04T19:38:02 CFY <DeploymentHandler> Starting 'install' workflow execution
    2018-01-04T19:38:02 CFY <DeploymentHandler> [docker_host_61886] Creating node
    2018-01-04T19:38:02 CFY <DeploymentHandler> [docker_host_61886.create] Sending task 'dockerplugin.select_docker_host'
    2018-01-04T19:38:02 CFY <DeploymentHandler> [docker_host_61886.create] Task started 'dockerplugin.select_docker_host'
    2018-01-04T19:38:03 CFY <DeploymentHandler> [docker_host_61886.create] Task succeeded 'dockerplugin.select_docker_host'
    2018-01-04T19:38:03 CFY <DeploymentHandler> [docker_host_61886] Configuring node
    2018-01-04T19:38:04 CFY <DeploymentHandler> [docker_host_61886] Starting node
    2018-01-04T19:38:04 CFY <DeploymentHandler> [deployment-handler_76671] Creating node
    2018-01-04T19:38:04 CFY <DeploymentHandler> [deployment-handler_76671.create] Sending task 'dockerplugin.create_for_platforms'
    2018-01-04T19:38:04 CFY <DeploymentHandler> [deployment-handler_76671.create] Task started 'dockerplugin.create_for_platforms'
    2018-01-04T19:38:05 CFY <DeploymentHandler> [deployment-handler_76671.create] Task succeeded 'dockerplugin.create_for_platforms'
    2018-01-04T19:38:05 CFY <DeploymentHandler> [deployment-handler_76671->docker_host_61886|preconfigure] Sending task 'relationshipplugin.forward_destination_info'
    2018-01-04T19:38:05 CFY <DeploymentHandler> [deployment-handler_76671->docker_host_61886|preconfigure] Task started 'relationshipplugin.forward_destination_info'
    2018-01-04T19:38:06 CFY <DeploymentHandler> [deployment-handler_76671->docker_host_61886|preconfigure] Task succeeded 'relationshipplugin.forward_destination_info'
    2018-01-04T19:38:06 CFY <DeploymentHandler> [deployment-handler_76671] Configuring node
    2018-01-04T19:38:06 CFY <DeploymentHandler> [deployment-handler_76671] Starting node
    2018-01-04T19:38:06 CFY <DeploymentHandler> [deployment-handler_76671.start] Sending task 'dockerplugin.create_and_start_container_for_platforms'
    2018-01-04T19:38:06 CFY <DeploymentHandler> [deployment-handler_76671.start] Task started 'dockerplugin.create_and_start_container_for_platforms'
    2018-01-04T19:41:52 CFY <DeploymentHandler> [deployment-handler_76671.start] Task succeeded 'dockerplugin.create_and_start_container_for_platforms'
    2018-01-04T19:41:53 CFY <DeploymentHandler> 'install' workflow execution succeeded
    Finished executing workflow install on deployment DeploymentHandler
    * Run 'cfy events list --include-logs --execution-id 006c73dd-7a87-4441-a128-a26fbbf9e1da' to retrieve the execution's events/logs
    + cfy install -p ./blueprints/ph/policy_handler.yaml -b policy_handler_BP -d policy_handler -i policy_handler_image=nexus3.onap.org:10001/onap/org.onap.dcaegen2.platform.policy-handler:1.1-latest -i location_id=MbOr -i ../config/phinputs.yaml
    Uploading blueprint ./blueprints/ph/policy_handler.yaml...
    Blueprint uploaded. The blueprint's id is policy_handler_BP
    Processing inputs source: policy_handler_image=nexus3.onap.org:10001/onap/org.onap.dcaegen2.platform.policy-handler:1.1-latest
    Processing inputs source: location_id=MbOr
    Processing inputs source: ../config/phinputs.yaml
    Creating new deployment from blueprint policy_handler_BP...
    Deployment created. The deployment's id is policy_handler
    Executing workflow install on deployment policy_handler [timeout=900 seconds]
    Deployment environment creation is in progress...
    2018-01-04T19:41:58 CFY <policy_handler> Starting 'create_deployment_environment' workflow execution
    2018-01-04T19:41:58 CFY <policy_handler> Installing deployment plugins
    2018-01-04T19:41:58 CFY <policy_handler> Sending task 'cloudify_agent.operations.install_plugins'
    2018-01-04T19:41:58 CFY <policy_handler> Task started 'cloudify_agent.operations.install_plugins'
    2018-01-04T19:41:59 CFY <policy_handler> Task succeeded 'cloudify_agent.operations.install_plugins'
    2018-01-04T19:42:00 CFY <policy_handler> Skipping starting deployment policy engine core - no policies defined
    2018-01-04T19:42:00 CFY <policy_handler> Creating deployment work directory
    2018-01-04T19:42:00 CFY <policy_handler> 'create_deployment_environment' workflow execution succeeded
    2018-01-04T19:42:05 CFY <policy_handler> Starting 'install' workflow execution
    2018-01-04T19:42:05 CFY <policy_handler> [docker_host_15bee] Creating node
    2018-01-04T19:42:06 CFY <policy_handler> [docker_host_15bee.create] Sending task 'dockerplugin.select_docker_host'
    2018-01-04T19:42:06 CFY <policy_handler> [docker_host_15bee.create] Task started 'dockerplugin.select_docker_host'
    2018-01-04T19:42:06 CFY <policy_handler> [docker_host_15bee.create] Task succeeded 'dockerplugin.select_docker_host'
    2018-01-04T19:42:07 CFY <policy_handler> [docker_host_15bee] Configuring node
    2018-01-04T19:42:07 CFY <policy_handler> [docker_host_15bee] Starting node
    2018-01-04T19:42:08 CFY <policy_handler> [policy_handler_eee42] Creating node
    2018-01-04T19:42:08 CFY <policy_handler> [policy_handler_eee42.create] Sending task 'dockerplugin.create_for_platforms'
    2018-01-04T19:42:08 CFY <policy_handler> [policy_handler_eee42.create] Task started 'dockerplugin.create_for_platforms'
    2018-01-04T19:42:08 CFY <policy_handler> [policy_handler_eee42.create] Task succeeded 'dockerplugin.create_for_platforms'
    2018-01-04T19:42:08 CFY <policy_handler> [policy_handler_eee42->docker_host_15bee|preconfigure] Sending task 'relationshipplugin.forward_destination_info'
    2018-01-04T19:42:08 CFY <policy_handler> [policy_handler_eee42->docker_host_15bee|preconfigure] Task started 'relationshipplugin.forward_destination_info'
    2018-01-04T19:42:09 CFY <policy_handler> [policy_handler_eee42->docker_host_15bee|preconfigure] Task succeeded 'relationshipplugin.forward_destination_info'
    2018-01-04T19:42:09 CFY <policy_handler> [policy_handler_eee42] Configuring node
    2018-01-04T19:42:10 CFY <policy_handler> [policy_handler_eee42] Starting node
    2018-01-04T19:42:10 CFY <policy_handler> [policy_handler_eee42.start] Sending task 'dockerplugin.create_and_start_container_for_platforms'
    2018-01-04T19:42:10 CFY <policy_handler> [policy_handler_eee42.start] Task started 'dockerplugin.create_and_start_container_for_platforms'
    2018-01-04T19:42:38 CFY <policy_handler> [policy_handler_eee42.start] Task succeeded 'dockerplugin.create_and_start_container_for_platforms'
    2018-01-04T19:42:38 CFY <policy_handler> 'install' workflow execution succeeded
    Finished executing workflow install on deployment policy_handler
    * Run 'cfy events list --include-logs --execution-id 2d4dafa6-b187-45a4-8312-9bca2f85b314' to retrieve the execution's events/logs
    Waiting for CDAP cluster to register
    + echo 'Waiting for CDAP cluster to register'
    + grep cdap
    + curl -Ss http://10.195.200.32:8500/v1/catalog/service/cdap
    [{"ID":"4fc3f38c-259f-1172-1313-8e8404eb27fe","Node":"dcaecdap02","Address":"10.0.0.15","Datacenter":"mbor","TaggedAddresses":{"lan":"10.0.0.15","wan":"10.0.0.15"},"NodeMeta":{},"ServiceID":"cdap","ServiceName":"cdap","ServiceTags":[],"ServiceAddress":"10.195.200.50","ServicePort":11015,"ServiceEnableTagOverride":false,"CreateIndex":207,"ModifyIndex":207}]
    CDAP cluster registered
    + echo 'CDAP cluster registered'
    + cfy install -p ./blueprints/cdapbroker/cdap_broker.yaml -b cdapbroker -d cdapbroker -i location_id=MbOr
    Uploading blueprint ./blueprints/cdapbroker/cdap_broker.yaml...
    Blueprint uploaded. The blueprint's id is cdapbroker
    Processing inputs source: location_id=MbOr
    Creating new deployment from blueprint cdapbroker...
    Deployment created. The deployment's id is cdapbroker
    Executing workflow install on deployment cdapbroker [timeout=900 seconds]
    Deployment environment creation is in progress...
    2018-01-04T19:42:45 CFY <cdapbroker> Starting 'create_deployment_environment' workflow execution
    2018-01-04T19:42:45 CFY <cdapbroker> Installing deployment plugins
    2018-01-04T19:42:45 CFY <cdapbroker> Sending task 'cloudify_agent.operations.install_plugins'
    2018-01-04T19:42:45 CFY <cdapbroker> Task started 'cloudify_agent.operations.install_plugins'
    2018-01-04T19:42:47 CFY <cdapbroker> Task succeeded 'cloudify_agent.operations.install_plugins'
    2018-01-04T19:42:47 CFY <cdapbroker> Skipping starting deployment policy engine core - no policies defined
    2018-01-04T19:42:47 CFY <cdapbroker> Creating deployment work directory
    2018-01-04T19:42:47 CFY <cdapbroker> 'create_deployment_environment' workflow execution succeeded
    2018-01-04T19:42:52 CFY <cdapbroker> Starting 'install' workflow execution
    2018-01-04T19:42:53 CFY <cdapbroker> [docker_host_eb0fc] Creating node
    2018-01-04T19:42:53 CFY <cdapbroker> [docker_host_eb0fc.create] Sending task 'dockerplugin.select_docker_host'
    2018-01-04T19:42:53 CFY <cdapbroker> [docker_host_eb0fc.create] Task started 'dockerplugin.select_docker_host'
    2018-01-04T19:42:53 CFY <cdapbroker> [docker_host_eb0fc.create] Task succeeded 'dockerplugin.select_docker_host'
    2018-01-04T19:42:54 CFY <cdapbroker> [docker_host_eb0fc] Configuring node
    2018-01-04T19:42:54 CFY <cdapbroker> [docker_host_eb0fc] Starting node
    2018-01-04T19:42:55 CFY <cdapbroker> [cdap_broker_9f679] Creating node
    2018-01-04T19:42:55 CFY <cdapbroker> [cdap_broker_9f679.create] Sending task 'dockerplugin.create_for_platforms'
    2018-01-04T19:42:55 CFY <cdapbroker> [cdap_broker_9f679.create] Task started 'dockerplugin.create_for_platforms'
    2018-01-04T19:42:55 CFY <cdapbroker> [cdap_broker_9f679.create] Task succeeded 'dockerplugin.create_for_platforms'
    2018-01-04T19:42:55 CFY <cdapbroker> [cdap_broker_9f679->docker_host_eb0fc|preconfigure] Sending task 'relationshipplugin.forward_destination_info'
    2018-01-04T19:42:55 CFY <cdapbroker> [cdap_broker_9f679->docker_host_eb0fc|preconfigure] Task started 'relationshipplugin.forward_destination_info'
    2018-01-04T19:42:56 CFY <cdapbroker> [cdap_broker_9f679->docker_host_eb0fc|preconfigure] Task succeeded 'relationshipplugin.forward_destination_info'
    2018-01-04T19:42:56 CFY <cdapbroker> [cdap_broker_9f679] Configuring node
    2018-01-04T19:42:57 CFY <cdapbroker> [cdap_broker_9f679] Starting node
    2018-01-04T19:42:57 CFY <cdapbroker> [cdap_broker_9f679.start] Sending task 'dockerplugin.create_and_start_container_for_platforms'
    2018-01-04T19:42:57 CFY <cdapbroker> [cdap_broker_9f679.start] Task started 'dockerplugin.create_and_start_container_for_platforms'
    2018-01-04T19:45:10 CFY <cdapbroker> [cdap_broker_9f679.start] Task succeeded 'dockerplugin.create_and_start_container_for_platforms'
    2018-01-04T19:45:11 CFY <cdapbroker> [broker_deleter_f7729] Creating node
    2018-01-04T19:45:11 CFY <cdapbroker> [broker_deleter_f7729] Configuring node
    2018-01-04T19:45:12 CFY <cdapbroker> [broker_deleter_f7729] Starting node
    2018-01-04T19:45:12 CFY <cdapbroker> 'install' workflow execution succeeded
    Finished executing workflow install on deployment cdapbroker
    * Run 'cfy events list --include-logs --execution-id f40d8da7-0fa7-4c1e-b8d1-127fdec05778' to retrieve the execution's events/logs
    + cfy install -p ./blueprints/ves/ves.yaml -b ves -d ves -i ../config/vesinput.yaml
    Uploading blueprint ./blueprints/ves/ves.yaml...
    Blueprint uploaded. The blueprint's id is ves
    Processing inputs source: ../config/vesinput.yaml
    Creating new deployment from blueprint ves...
    Deployment created. The deployment's id is ves
    Executing workflow install on deployment ves [timeout=900 seconds]
    Deployment environment creation is in progress...
    2018-01-04T19:45:21 CFY <ves> Starting 'create_deployment_environment' workflow execution
    2018-01-04T19:45:21 CFY <ves> Installing deployment plugins
    2018-01-04T19:45:21 CFY <ves> Sending task 'cloudify_agent.operations.install_plugins'
    2018-01-04T19:45:21 CFY <ves> Task started 'cloudify_agent.operations.install_plugins'
    2018-01-04T19:45:22 CFY <ves> Task succeeded 'cloudify_agent.operations.install_plugins'
    2018-01-04T19:45:22 CFY <ves> Skipping starting deployment policy engine core - no policies defined
    2018-01-04T19:45:22 CFY <ves> Creating deployment work directory
    2018-01-04T19:45:23 CFY <ves> 'create_deployment_environment' workflow execution succeeded
    2018-01-04T19:45:25 CFY <ves> Starting 'install' workflow execution
    2018-01-04T19:45:25 CFY <ves> [docker_collector_host_eb9ee] Creating node
    2018-01-04T19:45:26 CFY <ves> [docker_collector_host_eb9ee.create] Sending task 'dockerplugin.select_docker_host'
    2018-01-04T19:45:26 CFY <ves> [docker_collector_host_eb9ee.create] Task started 'dockerplugin.select_docker_host'
    2018-01-04T19:45:26 CFY <ves> [docker_collector_host_eb9ee.create] Task succeeded 'dockerplugin.select_docker_host'
    2018-01-04T19:45:27 CFY <ves> [docker_collector_host_eb9ee] Configuring node
    2018-01-04T19:45:27 CFY <ves> [docker_collector_host_eb9ee] Starting node
    2018-01-04T19:45:28 CFY <ves> [ves_58b13] Creating node
    2018-01-04T19:45:28 CFY <ves> [ves_58b13.create] Sending task 'dockerplugin.create_for_components_with_streams'
    2018-01-04T19:45:28 CFY <ves> [ves_58b13.create] Task started 'dockerplugin.create_for_components_with_streams'
    2018-01-04T19:45:29 CFY <ves> [ves_58b13.create] Task succeeded 'dockerplugin.create_for_components_with_streams'
    2018-01-04T19:45:29 CFY <ves> [ves_58b13->docker_collector_host_eb9ee|preconfigure] Sending task 'relationshipplugin.forward_destination_info'
    2018-01-04T19:45:29 CFY <ves> [ves_58b13->docker_collector_host_eb9ee|preconfigure] Task started 'relationshipplugin.forward_destination_info'
    2018-01-04T19:45:29 CFY <ves> [ves_58b13->docker_collector_host_eb9ee|preconfigure] Task succeeded 'relationshipplugin.forward_destination_info'
    2018-01-04T19:45:29 CFY <ves> [ves_58b13] Configuring node
    2018-01-04T19:45:30 CFY <ves> [ves_58b13] Starting node
    2018-01-04T19:45:30 CFY <ves> [ves_58b13.start] Sending task 'dockerplugin.create_and_start_container_for_components_with_streams'
    2018-01-04T19:45:30 CFY <ves> [ves_58b13.start] Task started 'dockerplugin.create_and_start_container_for_components_with_streams'
    Timed out waiting for workflow 'install' of deployment 'ves' to end. The execution may still be running properly; however, the command-line utility was instructed to wait up to 900 seconds for its completion.
    
    * Run 'cfy executions list' to determine the execution's status.
    * Run 'cfy executions cancel --execution-id 3b9a6058-3b35-4406-a077-8430fff5a518' to cancel the running workflow.
    * Run 'cfy events list --tail --include-logs --execution-id 3b9a6058-3b35-4406-a077-8430fff5a518' to retrieve the execution's events/logs
    + cfy install -p ./blueprints/tca/tca.yaml -b tca -d tca -i ../config/tcainputs.yaml
    Uploading blueprint ./blueprints/tca/tca.yaml...
    Blueprint uploaded. The blueprint's id is tca
    Processing inputs source: ../config/tcainputs.yaml
    Creating new deployment from blueprint tca...
    Deployment created. The deployment's id is tca
    Executing workflow install on deployment tca [timeout=900 seconds]
    Deployment environment creation is in progress...
    2018-01-04T20:00:27 CFY <tca> Starting 'create_deployment_environment' workflow execution
    2018-01-04T20:00:27 CFY <tca> Installing deployment plugins
    2018-01-04T20:00:27 CFY <tca> Sending task 'cloudify_agent.operations.install_plugins'
    2018-01-04T20:00:27 CFY <tca> Task started 'cloudify_agent.operations.install_plugins'
    2018-01-04T20:00:28 CFY <tca> Task succeeded 'cloudify_agent.operations.install_plugins'
    2018-01-04T20:00:28 CFY <tca> Skipping starting deployment policy engine core - no policies defined
    2018-01-04T20:00:28 CFY <tca> Creating deployment work directory
    2018-01-04T20:00:28 CFY <tca> 'create_deployment_environment' workflow execution succeeded
    2018-01-04T20:00:37 CFY <tca> Starting 'install' workflow execution
    2018-01-04T20:00:37 CFY <tca> [tca_tca_ca556] Creating node
    2018-01-04T20:00:37 CFY <tca> [tca_tca_ca556.create] Sending task 'cdapcloudify.cdap_plugin.create'
    2018-01-04T20:00:38 CFY <tca> [tca_tca_ca556.create] Task started 'cdapcloudify.cdap_plugin.create'
    2018-01-04T20:00:38 CFY <tca> [tca_tca_ca556.create] Task succeeded 'cdapcloudify.cdap_plugin.create'
    2018-01-04T20:00:38 CFY <tca> [tca_tca_ca556] Configuring node
    2018-01-04T20:00:39 CFY <tca> [tca_tca_ca556] Starting node
    2018-01-04T20:00:39 CFY <tca> [tca_tca_ca556.start] Sending task 'cdapcloudify.cdap_plugin.deploy_and_start_application'
    2018-01-04T20:00:39 CFY <tca> [tca_tca_ca556.start] Task started 'cdapcloudify.cdap_plugin.deploy_and_start_application'
    2018-01-04T20:01:04 CFY <tca> [tca_tca_ca556.start] Task succeeded 'cdapcloudify.cdap_plugin.deploy_and_start_application'
    2018-01-04T20:01:04 CFY <tca> 'install' workflow execution succeeded
    Finished executing workflow install on deployment tca
    * Run 'cfy events list --include-logs --execution-id 381ec53d-5709-4479-9933-d367d219f471' to retrieve the execution's events/logs
    + cfy install -p ./blueprints/hrules/holmes-rules.yaml -b hrules -d hrules -i ../config/hr-ip.yaml
    Uploading blueprint ./blueprints/hrules/holmes-rules.yaml...
    Blueprint uploaded. The blueprint's id is hrules
    Processing inputs source: ../config/hr-ip.yaml
    Creating new deployment from blueprint hrules...
    Deployment created. The deployment's id is hrules
    Executing workflow install on deployment hrules [timeout=900 seconds]
    Deployment environment creation is in progress...
    2018-01-04T20:01:14 CFY <hrules> Starting 'create_deployment_environment' workflow execution
    2018-01-04T20:01:14 CFY <hrules> Installing deployment plugins
    2018-01-04T20:01:14 CFY <hrules> Sending task 'cloudify_agent.operations.install_plugins'
    2018-01-04T20:01:14 CFY <hrules> Task started 'cloudify_agent.operations.install_plugins'
    2018-01-04T20:01:16 CFY <hrules> Task succeeded 'cloudify_agent.operations.install_plugins'
    2018-01-04T20:01:16 CFY <hrules> Skipping starting deployment policy engine core - no policies defined
    2018-01-04T20:01:16 CFY <hrules> Creating deployment work directory
    2018-01-04T20:01:16 CFY <hrules> 'create_deployment_environment' workflow execution succeeded
    2018-01-04T20:01:21 CFY <hrules> Starting 'install' workflow execution
    2018-01-04T20:01:21 CFY <hrules> [docker_holmes_host_877ca] Creating node
    2018-01-04T20:01:21 CFY <hrules> [pgaasvm_20bce] Creating node
    2018-01-04T20:01:21 CFY <hrules> [docker_holmes_host_877ca.create] Sending task 'dockerplugin.select_docker_host'
    2018-01-04T20:01:21 CFY <hrules> [pgaasvm_20bce.create] Sending task 'pgaas.pgaas_plugin.create_database'
    2018-01-04T20:01:22 CFY <hrules> [docker_holmes_host_877ca.create] Task started 'dockerplugin.select_docker_host'
    2018-01-04T20:01:22 CFY <hrules> [pgaasvm_20bce.create] Task started 'pgaas.pgaas_plugin.create_database'
    2018-01-04T20:01:22 CFY <hrules> [docker_holmes_host_877ca.create] Task succeeded 'dockerplugin.select_docker_host'
    2018-01-04T20:01:22 CFY <hrules> [docker_holmes_host_877ca] Configuring node
    2018-01-04T20:01:23 CFY <hrules> [pgaasvm_20bce.create] Task succeeded 'pgaas.pgaas_plugin.create_database'
    2018-01-04T20:01:23 CFY <hrules> [docker_holmes_host_877ca] Starting node
    2018-01-04T20:01:23 CFY <hrules> [pgaasvm_20bce] Configuring node
    2018-01-04T20:01:23 CFY <hrules> [pgaasvm_20bce] Starting node
    2018-01-04T20:01:24 CFY <hrules> [holmesrules_5137d] Creating node
    2018-01-04T20:01:24 CFY <hrules> [holmesrules_5137d.create] Sending task 'dockerplugin.create_for_components_with_streams'
    2018-01-04T20:01:24 CFY <hrules> [holmesrules_5137d.create] Task started 'dockerplugin.create_for_components_with_streams'
    2018-01-04T20:01:25 CFY <hrules> [holmesrules_5137d.create] Task succeeded 'dockerplugin.create_for_components_with_streams'
    2018-01-04T20:01:25 CFY <hrules> [holmesrules_5137d->docker_holmes_host_877ca|preconfigure] Sending task 'relationshipplugin.forward_destination_info'
    2018-01-04T20:01:25 CFY <hrules> [holmesrules_5137d->docker_holmes_host_877ca|preconfigure] Task started 'relationshipplugin.forward_destination_info'
    2018-01-04T20:01:26 CFY <hrules> [holmesrules_5137d->docker_holmes_host_877ca|preconfigure] Task succeeded 'relationshipplugin.forward_destination_info'
    2018-01-04T20:01:26 CFY <hrules> [holmesrules_5137d] Configuring node
    2018-01-04T20:01:27 CFY <hrules> [holmesrules_5137d] Starting node
    2018-01-04T20:01:27 CFY <hrules> [holmesrules_5137d.start] Sending task 'dockerplugin.create_and_start_container_for_components_with_streams'
    2018-01-04T20:01:27 CFY <hrules> [holmesrules_5137d.start] Task started 'dockerplugin.create_and_start_container_for_components_with_streams'
    2018-01-04T20:01:54 CFY <hrules> [holmesrules_5137d.start] Task succeeded 'dockerplugin.create_and_start_container_for_components_with_streams'
    2018-01-04T20:01:54 CFY <hrules> 'install' workflow execution succeeded
    Finished executing workflow install on deployment hrules
    * Run 'cfy events list --include-logs --execution-id ba698197-330b-45db-a47b-31fbf1e52a0c' to retrieve the execution's events/logs
    + cfy install -p ./blueprints/hengine/holmes-engine.yaml -b hengine -d hengine -i ../config/he-ip.yaml
    Uploading blueprint ./blueprints/hengine/holmes-engine.yaml...
    Blueprint uploaded. The blueprint's id is hengine
    Processing inputs source: ../config/he-ip.yaml
    Creating new deployment from blueprint hengine...
    Deployment created. The deployment's id is hengine
    Executing workflow install on deployment hengine [timeout=900 seconds]
    Deployment environment creation is in progress...
    2018-01-04T20:02:04 CFY <hengine> Starting 'create_deployment_environment' workflow execution
    2018-01-04T20:02:04 CFY <hengine> Installing deployment plugins
    2018-01-04T20:02:04 CFY <hengine> Sending task 'cloudify_agent.operations.install_plugins'
    2018-01-04T20:02:04 CFY <hengine> Task started 'cloudify_agent.operations.install_plugins'
    2018-01-04T20:02:06 CFY <hengine> Task succeeded 'cloudify_agent.operations.install_plugins'
    2018-01-04T20:02:06 CFY <hengine> Skipping starting deployment policy engine core - no policies defined
    2018-01-04T20:02:06 CFY <hengine> Creating deployment work directory
    2018-01-04T20:02:06 CFY <hengine> 'create_deployment_environment' workflow execution succeeded
    2018-01-04T20:02:11 CFY <hengine> Starting 'install' workflow execution
    2018-01-04T20:02:12 CFY <hengine> [docker_holmes_host_7fa6d] Creating node
    2018-01-04T20:02:12 CFY <hengine> [pgaasvm_cd3bb] Creating node
    2018-01-04T20:02:12 CFY <hengine> [pgaasvm_cd3bb.create] Sending task 'pgaas.pgaas_plugin.create_database'
    2018-01-04T20:02:12 CFY <hengine> [docker_holmes_host_7fa6d.create] Sending task 'dockerplugin.select_docker_host'
    2018-01-04T20:02:12 CFY <hengine> [pgaasvm_cd3bb.create] Task started 'pgaas.pgaas_plugin.create_database'
    2018-01-04T20:02:12 CFY <hengine> [docker_holmes_host_7fa6d.create] Task started 'dockerplugin.select_docker_host'
    2018-01-04T20:02:12 CFY <hengine> [docker_holmes_host_7fa6d.create] Task succeeded 'dockerplugin.select_docker_host'
    2018-01-04T20:02:12 CFY <hengine> [pgaasvm_cd3bb.create] Task succeeded 'pgaas.pgaas_plugin.create_database'
    2018-01-04T20:02:13 CFY <hengine> [docker_holmes_host_7fa6d] Configuring node
    2018-01-04T20:02:13 CFY <hengine> [pgaasvm_cd3bb] Configuring node
    2018-01-04T20:02:13 CFY <hengine> [docker_holmes_host_7fa6d] Starting node
    2018-01-04T20:02:13 CFY <hengine> [pgaasvm_cd3bb] Starting node
    2018-01-04T20:02:14 CFY <hengine> [holmesengine_643a5] Creating node
    2018-01-04T20:02:14 CFY <hengine> [holmesengine_643a5.create] Sending task 'dockerplugin.create_for_components_with_streams'
    2018-01-04T20:02:14 CFY <hengine> [holmesengine_643a5.create] Task started 'dockerplugin.create_for_components_with_streams'
    2018-01-04T20:02:15 CFY <hengine> [holmesengine_643a5.create] Task succeeded 'dockerplugin.create_for_components_with_streams'
    2018-01-04T20:02:15 CFY <hengine> [holmesengine_643a5->docker_holmes_host_7fa6d|preconfigure] Sending task 'relationshipplugin.forward_destination_info'
    2018-01-04T20:02:15 CFY <hengine> [holmesengine_643a5->docker_holmes_host_7fa6d|preconfigure] Task started 'relationshipplugin.forward_destination_info'
    2018-01-04T20:02:16 CFY <hengine> [holmesengine_643a5->docker_holmes_host_7fa6d|preconfigure] Task succeeded 'relationshipplugin.forward_destination_info'
    2018-01-04T20:02:16 CFY <hengine> [holmesengine_643a5] Configuring node
    2018-01-04T20:02:16 CFY <hengine> [holmesengine_643a5] Starting node
    2018-01-04T20:02:16 CFY <hengine> [holmesengine_643a5.start] Sending task 'dockerplugin.create_and_start_container_for_components_with_streams'
    2018-01-04T20:02:17 CFY <hengine> [holmesengine_643a5.start] Task started 'dockerplugin.create_and_start_container_for_components_with_streams'
    2018-01-04T20:02:55 CFY <hengine> [holmesengine_643a5.start] Task succeeded 'dockerplugin.create_and_start_container_for_components_with_streams'
    2018-01-04T20:02:56 CFY <hengine> 'install' workflow execution succeeded
    Finished executing workflow install on deployment hengine
    * Run 'cfy events list --include-logs --execution-id 568d7e03-a0e7-40cb-ad49-c5a6a7a027f7' to retrieve the execution's events/logs
    + echo 10.195.200.32
    + echo 10.195.200.42
    + rm -f /tmp/ready_to_exit
    + '[' '!' -e /tmp/ready_to_exit ']'
    + sleep 30
    + '[' '!' -e /tmp/ready_to_exit ']'
    + sleep 30
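
    Note the trailing rm / test / sleep lines at the end of the trace: once every blueprint is deployed, the bootstrap script removes a marker file and then polls for it every 30 seconds, which in effect keeps the bootstrap process (and hence the boot container used in the debugging steps below) running. Reconstructed from the set -x trace above, the loop is roughly the following sketch (my reconstruction, not the actual installer source):

    rm -f /tmp/ready_to_exit
    while [ ! -e /tmp/ready_to_exit ]; do
        sleep 30
    done

    Creating /tmp/ready_to_exit inside the container would presumably let the script finish, but there is normally no need to do so.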

    Unfortunately, in the run above I hit an issue with the VES collector, which failed to deploy properly:

    + cfy install -p ./blueprints/ves/ves.yaml -b ves -d ves -i ../config/vesinput.yaml
    Uploading blueprint ./blueprints/ves/ves.yaml...
    Blueprint uploaded. The blueprint's id is ves
    Processing inputs source: ../config/vesinput.yaml
    Creating new deployment from blueprint ves...
    Deployment created. The deployment's id is ves
    Executing workflow install on deployment ves [timeout=900 seconds]
    Deployment environment creation is in progress...
    2018-01-04T19:45:21 CFY <ves> Starting 'create_deployment_environment' workflow execution
    2018-01-04T19:45:21 CFY <ves> Installing deployment plugins
    2018-01-04T19:45:21 CFY <ves> Sending task 'cloudify_agent.operations.install_plugins'
    2018-01-04T19:45:21 CFY <ves> Task started 'cloudify_agent.operations.install_plugins'
    2018-01-04T19:45:22 CFY <ves> Task succeeded 'cloudify_agent.operations.install_plugins'
    2018-01-04T19:45:22 CFY <ves> Skipping starting deployment policy engine core - no policies defined
    2018-01-04T19:45:22 CFY <ves> Creating deployment work directory
    2018-01-04T19:45:23 CFY <ves> 'create_deployment_environment' workflow execution succeeded
    2018-01-04T19:45:25 CFY <ves> Starting 'install' workflow execution
    2018-01-04T19:45:25 CFY <ves> [docker_collector_host_eb9ee] Creating node
    2018-01-04T19:45:26 CFY <ves> [docker_collector_host_eb9ee.create] Sending task 'dockerplugin.select_docker_host'
    2018-01-04T19:45:26 CFY <ves> [docker_collector_host_eb9ee.create] Task started 'dockerplugin.select_docker_host'
    2018-01-04T19:45:26 CFY <ves> [docker_collector_host_eb9ee.create] Task succeeded 'dockerplugin.select_docker_host'
    2018-01-04T19:45:27 CFY <ves> [docker_collector_host_eb9ee] Configuring node
    2018-01-04T19:45:27 CFY <ves> [docker_collector_host_eb9ee] Starting node
    2018-01-04T19:45:28 CFY <ves> [ves_58b13] Creating node
    2018-01-04T19:45:28 CFY <ves> [ves_58b13.create] Sending task 'dockerplugin.create_for_components_with_streams'
    2018-01-04T19:45:28 CFY <ves> [ves_58b13.create] Task started 'dockerplugin.create_for_components_with_streams'
    2018-01-04T19:45:29 CFY <ves> [ves_58b13.create] Task succeeded 'dockerplugin.create_for_components_with_streams'
    2018-01-04T19:45:29 CFY <ves> [ves_58b13->docker_collector_host_eb9ee|preconfigure] Sending task 'relationshipplugin.forward_destination_info'
    2018-01-04T19:45:29 CFY <ves> [ves_58b13->docker_collector_host_eb9ee|preconfigure] Task started 'relationshipplugin.forward_destination_info'
    2018-01-04T19:45:29 CFY <ves> [ves_58b13->docker_collector_host_eb9ee|preconfigure] Task succeeded 'relationshipplugin.forward_destination_info'
    2018-01-04T19:45:29 CFY <ves> [ves_58b13] Configuring node
    2018-01-04T19:45:30 CFY <ves> [ves_58b13] Starting node
    2018-01-04T19:45:30 CFY <ves> [ves_58b13.start] Sending task 'dockerplugin.create_and_start_container_for_components_with_streams'
    2018-01-04T19:45:30 CFY <ves> [ves_58b13.start] Task started 'dockerplugin.create_and_start_container_for_components_with_streams'
    Timed out waiting for workflow 'install' of deployment 'ves' to end. The execution may still be running properly; however, the command-line utility was instructed to wait up to 900 seconds for its completion.
    
    * Run 'cfy executions list' to determine the execution's status.
    * Run 'cfy executions cancel --execution-id 3b9a6058-3b35-4406-a077-8430fff5a518' to cancel the running workflow.
    * Run 'cfy events list --tail --include-logs --execution-id 3b9a6058-3b35-4406-a077-8430fff5a518' to retrieve the execution's events/logs

    To debug, I did the following:

    1. Go into the boot container

      docker exec -it boot bash
    2. Activate the virtual environment created by the installer

      source dcaeinstall/bin/activate
    3. Run the command provided in the failed execution output to see the logs, which might point to the failure

      (dcaeinstall) installer@e6120e566d15:~/consul$ cfy events list --tail --include-logs --execution-id 3b9a6058-3b35-4406-a077-8430fff5a518
      Listing events for execution id 3b9a6058-3b35-4406-a077-8430fff5a518 [include_logs=True]
      Execution of workflow install for deployment ves failed. [error=Traceback (most recent call last):
        File "/tmp/pip-build-c5GA7o/cloudify-plugins-common/cloudify/dispatch.py", line 472, in _remote_workflow_child_thread
        File "/tmp/pip-build-c5GA7o/cloudify-plugins-common/cloudify/dispatch.py", line 504, in _execute_workflow_function
        File "/opt/mgmtworker/env/lib/python2.7/site-packages/cloudify/plugins/workflows.py", line 27, in install
          node_instances=set(ctx.node_instances))
        File "/opt/mgmtworker/env/lib/python2.7/site-packages/cloudify/plugins/lifecycle.py", line 28, in install_node_instances
          processor.install()
        File "/opt/mgmtworker/env/lib/python2.7/site-packages/cloudify/plugins/lifecycle.py", line 83, in install
          graph_finisher_func=self._finish_install)
        File "/opt/mgmtworker/env/lib/python2.7/site-packages/cloudify/plugins/lifecycle.py", line 103, in _process_node_instances
          self.graph.execute()
        File "/opt/mgmtworker/env/lib/python2.7/site-packages/cloudify/workflows/tasks_graph.py", line 133, in execute
          self._handle_terminated_task(task)
        File "/opt/mgmtworker/env/lib/python2.7/site-packages/cloudify/workflows/tasks_graph.py", line 207, in _handle_terminated_task
          raise RuntimeError(message)
      RuntimeError: Workflow failed: Task failed 'dockerplugin.create_and_start_container_for_components_with_streams' -> 500 Server Error: Internal Server Error ("{"message":"Get https://nexus3.onap.org:10001/v2/onap/org.onap.dcaegen2.collectors.ves.vescollector/manifests/v1.1.0: read tcp 10.0.0.13:46574-\u003e199.204.45.137:10001: read: no route to host"}")
      ]

      Unfortunately for me, Nexus happened to be unreachable at the exact moment the installer queried it (note the "no route to host" error above)...
      In that situation, try to understand the issue from the traceback and/or send the output to the mailing list. In my case, I simply uninstalled and re-installed the ves deployment and it worked (see the verification sketch after this list).

    4. Uninstall the failed deployment

      (dcaeinstall) installer@e6120e566d15:~/consul$ cfy uninstall  -d ves
      Executing workflow uninstall on deployment ves [timeout=900 seconds]
      2018-01-04T20:23:39 CFY <ves> Starting 'uninstall' workflow execution
      2018-01-04T20:23:39 CFY <ves> [ves_58b13] Stopping node
      2018-01-04T20:23:39 CFY <ves> [ves_58b13.stop] Sending task 'dockerplugin.stop_and_remove_container'
      2018-01-04T20:23:39 CFY <ves> [ves_58b13.stop] Task started 'dockerplugin.stop_and_remove_container'
      2018-01-04T20:23:40 CFY <ves> [ves_58b13.stop] Task failed 'dockerplugin.stop_and_remove_container' -> 'container_id'
      2018-01-04T20:23:41 CFY <ves> [ves_58b13] Deleting node
      2018-01-04T20:23:41 CFY <ves> [ves_58b13.delete] Sending task 'dockerplugin.cleanup_discovery'
      2018-01-04T20:23:41 CFY <ves> [ves_58b13.delete] Task started 'dockerplugin.cleanup_discovery'
      2018-01-04T20:23:41 CFY <ves> [ves_58b13.delete] Task succeeded 'dockerplugin.cleanup_discovery'
      2018-01-04T20:23:42 CFY <ves> [docker_collector_host_eb9ee] Stopping node
      2018-01-04T20:23:42 CFY <ves> [docker_collector_host_eb9ee] Deleting node
      2018-01-04T20:23:42 CFY <ves> [docker_collector_host_eb9ee.delete] Sending task 'dockerplugin.unselect_docker_host'
      2018-01-04T20:23:42 CFY <ves> [docker_collector_host_eb9ee.delete] Task started 'dockerplugin.unselect_docker_host'
      2018-01-04T20:23:43 CFY <ves> [docker_collector_host_eb9ee.delete] Task succeeded 'dockerplugin.unselect_docker_host'
      2018-01-04T20:23:43 CFY <ves> 'uninstall' workflow execution succeeded
      Finished executing workflow uninstall on deployment ves
      * Run 'cfy events list --include-logs --execution-id cecdd4f0-8fd4-48b0-b9f7-2eb888b478e8' to retrieve the execution's events/logs
      Deleting deployment ves...
      Deployment deleted
      Deleting blueprint ves...
      Blueprint deleted
    5. Re-install the deployment

      (dcaeinstall) installer@e6120e566d15:~/consul$ cfy install -p ./blueprints/ves/ves.yaml -b ves -d ves -i ../config/vesinput.yaml
      Uploading blueprint ./blueprints/ves/ves.yaml...
      Blueprint uploaded. The blueprint's id is ves
      Processing inputs source: ../config/vesinput.yaml
      Creating new deployment from blueprint ves...
      Deployment created. The deployment's id is ves
      Executing workflow install on deployment ves [timeout=900 seconds]
      Deployment environment creation is in progress...
      2018-01-04T20:24:01 CFY <ves> Starting 'create_deployment_environment' workflow execution
      2018-01-04T20:24:02 CFY <ves> Installing deployment plugins
      2018-01-04T20:24:02 CFY <ves> Sending task 'cloudify_agent.operations.install_plugins'
      2018-01-04T20:24:02 CFY <ves> Task started 'cloudify_agent.operations.install_plugins'
      2018-01-04T20:24:03 CFY <ves> Task succeeded 'cloudify_agent.operations.install_plugins'
      2018-01-04T20:24:03 CFY <ves> Skipping starting deployment policy engine core - no policies defined
      2018-01-04T20:24:03 CFY <ves> Creating deployment work directory
      2018-01-04T20:24:03 CFY <ves> 'create_deployment_environment' workflow execution succeeded
      2018-01-04T20:24:06 CFY <ves> Starting 'install' workflow execution
      2018-01-04T20:24:06 CFY <ves> [docker_collector_host_fec48] Creating node
      2018-01-04T20:24:06 CFY <ves> [docker_collector_host_fec48.create] Sending task 'dockerplugin.select_docker_host'
      2018-01-04T20:24:06 CFY <ves> [docker_collector_host_fec48.create] Task started 'dockerplugin.select_docker_host'
      2018-01-04T20:24:07 CFY <ves> [docker_collector_host_fec48.create] Task succeeded 'dockerplugin.select_docker_host'
      2018-01-04T20:24:07 CFY <ves> [docker_collector_host_fec48] Configuring node
      2018-01-04T20:24:07 CFY <ves> [docker_collector_host_fec48] Starting node
      2018-01-04T20:24:08 CFY <ves> [ves_45c17] Creating node
      2018-01-04T20:24:08 CFY <ves> [ves_45c17.create] Sending task 'dockerplugin.create_for_components_with_streams'
      2018-01-04T20:24:08 CFY <ves> [ves_45c17.create] Task started 'dockerplugin.create_for_components_with_streams'
      2018-01-04T20:24:09 CFY <ves> [ves_45c17.create] Task succeeded 'dockerplugin.create_for_components_with_streams'
      2018-01-04T20:24:09 CFY <ves> [ves_45c17->docker_collector_host_fec48|preconfigure] Sending task 'relationshipplugin.forward_destination_info'
      2018-01-04T20:24:09 CFY <ves> [ves_45c17->docker_collector_host_fec48|preconfigure] Task started 'relationshipplugin.forward_destination_info'
      2018-01-04T20:24:10 CFY <ves> [ves_45c17->docker_collector_host_fec48|preconfigure] Task succeeded 'relationshipplugin.forward_destination_info'
      2018-01-04T20:24:10 CFY <ves> [ves_45c17] Configuring node
      2018-01-04T20:24:10 CFY <ves> [ves_45c17] Starting node
      2018-01-04T20:24:10 CFY <ves> [ves_45c17.start] Sending task 'dockerplugin.create_and_start_container_for_components_with_streams'
      2018-01-04T20:24:10 CFY <ves> [ves_45c17.start] Task started 'dockerplugin.create_and_start_container_for_components_with_streams'
      2018-01-04T20:24:48 CFY <ves> [ves_45c17.start] Task succeeded 'dockerplugin.create_and_start_container_for_components_with_streams'
      2018-01-04T20:24:49 CFY <ves> 'install' workflow execution succeeded
      Finished executing workflow install on deployment ves
      * Run 'cfy events list --include-logs --execution-id 5029d6f6-000d-49e7-be9e-c6f627b0d73e' to retrieve the execution's events/logs
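
    As mentioned above, after an uninstall/re-install cycle you may want to double-check the overall state. A minimal verification sketch, run from the same virtual environment inside the boot container (the Consul address 10.195.200.32 is the one from this particular deployment and will differ in yours):

      # Status of all Cloudify executions, including the one that previously timed out
      cfy executions list
      # Services currently registered in Consul; components deployed by the docker plugin
      # (such as the VES collector) are expected to show up here
      curl -Ss http://10.195.200.32:8500/v1/catalog/services
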
  5. Now, time for a break. This will take around 30 to 40 minutes for the Kubernetes part, and up to an hour for DCAE in OpenStack.
  6. After about 45 minutes, everything is ready:

    $ kubectl get pods --all-namespaces

    Result:

    $ kubectl get pods --all-namespaces
    NAMESPACE             NAME                                          READY     STATUS    RESTARTS   AGE
    kube-system           heapster-4285517626-n5b57                     1/1       Running   0          55m
    kube-system           kube-dns-638003847-px0s1                      3/3       Running   0          55m
    kube-system           kubernetes-dashboard-716739405-llh0w          1/1       Running   0          55m
    kube-system           monitoring-grafana-2360823841-tn80f           1/1       Running   0          55m
    kube-system           monitoring-influxdb-2323019309-34ml1          1/1       Running   0          55m
    kube-system           tiller-deploy-737598192-k2ttl                 1/1       Running   0          55m
    onap-aaf              aaf-1993711932-0xcdt                          0/1       Running   0          46m
    onap-aaf              aaf-cs-1310404376-6zjjh                       1/1       Running   0          46m
    onap-aai              aai-resources-1412762642-kh8r0                2/2       Running   0          47m
    onap-aai              aai-service-749944520-t87vn                   1/1       Running   0          47m
    onap-aai              aai-traversal-3084029645-x29p6                2/2       Running   0          47m
    onap-aai              data-router-3434587794-hj9b3                  1/1       Running   0          47m
    onap-aai              elasticsearch-622738319-m85sn                 1/1       Running   0          47m
    onap-aai              hbase-1949550546-lncls                        1/1       Running   0          47m
    onap-aai              model-loader-service-4144225433-0m8sp         2/2       Running   0          47m
    onap-aai              search-data-service-378072033-sfrnd           2/2       Running   0          47m
    onap-aai              sparky-be-3094577325-902jg                    2/2       Running   0          47m
    onap-appc             appc-1828810488-xg5k3                         2/2       Running   0          47m
    onap-appc             appc-dbhost-2793739621-ckxrf                  1/1       Running   0          47m
    onap-appc             appc-dgbuilder-2298093128-qd4b4               1/1       Running   0          47m
    onap-clamp            clamp-2211988013-qwkvl                        1/1       Running   0          46m
    onap-clamp            clamp-mariadb-1812977665-mp89r                1/1       Running   0          46m
    onap-cli              cli-595710742-wj4mg                           1/1       Running   0          47m
    onap-consul           consul-agent-3312409084-kv21c                 1/1       Running   1          47m
    onap-consul           consul-server-1173049560-966zr                1/1       Running   0          47m
    onap-consul           consul-server-1173049560-d656s                1/1       Running   1          47m
    onap-consul           consul-server-1173049560-k41w3                1/1       Running   0          47m
    onap-dcaegen2         dcaegen2                                      1/1       Running   0          47m
    onap-kube2msb         kube2msb-registrator-1359309322-p60lx         1/1       Running   0          46m
    onap-log              elasticsearch-1942187295-mtw6l                1/1       Running   0          47m
    onap-log              kibana-3372627750-k8q6p                       1/1       Running   0          47m
    onap-log              logstash-1708188010-2vpd1                     1/1       Running   0          47m
    onap-message-router   dmaap-3126594942-vnj5w                        1/1       Running   0          47m
    onap-message-router   global-kafka-666408702-1z9c5                  1/1       Running   0          47m
    onap-message-router   zookeeper-624700062-kvk1m                     1/1       Running   0          47m
    onap-msb              msb-consul-3334785600-nz1zt                   1/1       Running   0          47m
    onap-msb              msb-discovery-196547432-pqs3g                 1/1       Running   0          47m
    onap-msb              msb-eag-1649257109-nl11h                      1/1       Running   0          47m
    onap-msb              msb-iag-1033096170-6cx7t                      1/1       Running   0          47m
    onap-mso              mariadb-829081257-q90fd                       1/1       Running   0          47m
    onap-mso              mso-3784963895-brdxx                          2/2       Running   0          47m
    onap-multicloud       framework-2273343137-nnvr5                    1/1       Running   0          47m
    onap-multicloud       multicloud-ocata-1517639325-gwkjr             1/1       Running   0          47m
    onap-multicloud       multicloud-vio-4239509896-zxmvx               1/1       Running   0          47m
    onap-multicloud       multicloud-windriver-3629763724-993qk         1/1       Running   0          47m
    onap-policy           brmsgw-1909438199-k2ppk                       1/1       Running   0          47m
    onap-policy           drools-2600956298-p9t68                       2/2       Running   0          47m
    onap-policy           mariadb-2660273324-lj0ts                      1/1       Running   0          47m
    onap-policy           nexus-3663640793-pgf51                        1/1       Running   0          47m
    onap-policy           pap-466625067-2hcxb                           2/2       Running   0          47m
    onap-policy           pdp-2354817903-65rnb                          2/2       Running   0          47m
    onap-portal           portalapps-1783099045-prvmp                   2/2       Running   0          47m
    onap-portal           portaldb-3181004999-0t228                     2/2       Running   0          47m
    onap-portal           portalwidgets-2060058548-w6hr9                1/1       Running   0          47m
    onap-portal           vnc-portal-3680188324-b22zq                   1/1       Running   0          47m
    onap-robot            robot-2551980890-cw3vj                        1/1       Running   0          47m
    onap-sdc              sdc-be-2336519847-hcs6h                       2/2       Running   0          47m
    onap-sdc              sdc-cs-1151560586-sfkf0                       1/1       Running   0          47m
    onap-sdc              sdc-es-2438522492-cw6rj                       1/1       Running   0          47m
    onap-sdc              sdc-fe-2862673798-lplcx                       2/2       Running   0          47m
    onap-sdc              sdc-kb-1258596734-43lf7                       1/1       Running   0          47m
    onap-sdnc             sdnc-1395102659-rd27h                         2/2       Running   0          47m
    onap-sdnc             sdnc-dbhost-3029711096-vl2jg                  1/1       Running   0          47m
    onap-sdnc             sdnc-dgbuilder-4267203648-bb828               1/1       Running   0          47m
    onap-sdnc             sdnc-portal-2558294154-3nh31                  1/1       Running   0          47m
    onap-uui              uui-4267149477-bqt0r                          1/1       Running   0          46m
    onap-uui              uui-server-3441797946-dx683                   1/1       Running   0          46m
    onap-vfc              vfc-catalog-840807183-lx4d0                   1/1       Running   0          46m
    onap-vfc              vfc-emsdriver-2936953408-fb2pf                1/1       Running   0          46m
    onap-vfc              vfc-gvnfmdriver-2866216209-k5t1t              1/1       Running   0          46m
    onap-vfc              vfc-hwvnfmdriver-2588350680-bpglx             1/1       Running   0          46m
    onap-vfc              vfc-jujudriver-406795794-ttp9p                1/1       Running   0          46m
    onap-vfc              vfc-nokiavnfmdriver-1760240499-xm0qk          1/1       Running   0          46m
    onap-vfc              vfc-nslcm-3756650867-1dnr0                    1/1       Running   0          46m
    onap-vfc              vfc-resmgr-1409642779-0603z                   1/1       Running   0          46m
    onap-vfc              vfc-vnflcm-3340104471-xsk72                   1/1       Running   0          46m
    onap-vfc              vfc-vnfmgr-2823857741-r04xj                   1/1       Running   0          46m
    onap-vfc              vfc-vnfres-1792029715-ls480                   1/1       Running   0          46m
    onap-vfc              vfc-workflow-3450325534-flwtw                 1/1       Running   0          46m
    onap-vfc              vfc-workflowengineactiviti-4110617986-mvlgl   1/1       Running   0          46m
    onap-vfc              vfc-ztesdncdriver-1452986549-c59jb            1/1       Running   0          46m
    onap-vfc              vfc-ztevmanagerdriver-2080553526-wdxwq        1/1       Running   0          46m
    onap-vid              vid-mariadb-3318685446-hmf2q                  1/1       Running   0          47m
    onap-vid              vid-server-2994633010-x3t74                   2/2       Running   0          47m
    onap-vnfsdk           postgres-436836560-cl2dz                      1/1       Running   0          46m
    onap-vnfsdk           refrepo-1924147637-wft62                      1/1       Running   0          46m
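
    If some pods are not yet Running, or not fully ready (like aaf above, which shows 0/1), a quick way to keep an eye on the stragglers is to filter the listing:

    # Show only pods that are not in the Running state (Pending, CrashLoopBackOff, ...)
    kubectl get pods --all-namespaces | grep -v Running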


    Update (04/01/2018): since change c/26645 was merged, note the two new pods under the onap-dcaegen2 namespace (heat-bootstrap and nginx). This run was faster than the one above because the Docker images were already present locally.

    $ kubectl get pods --all-namespaces
    NAMESPACE             NAME                                          READY     STATUS    RESTARTS   AGE
    kube-system           heapster-4285517626-t1400                     1/1       Running   0          23d
    kube-system           kube-dns-638003847-3lsc9                      3/3       Running   0          23d
    kube-system           kubernetes-dashboard-716739405-dl0mx          1/1       Running   1          23d
    kube-system           monitoring-grafana-2360823841-mz943           1/1       Running   0          23d
    kube-system           monitoring-influxdb-2323019309-3hg7j          1/1       Running   1          23d
    kube-system           tiller-deploy-737598192-0s32x                 1/1       Running   0          23d
    onap-aaf              aaf-1993711932-kk25r                          0/1       Running   0          16m
    onap-aaf              aaf-cs-1310404376-s4ftc                       1/1       Running   0          16m
    onap-aai              aai-resources-2269342809-rkd30                2/2       Running   0          17m
    onap-aai              aai-service-749944520-76sl1                   1/1       Running   0          17m
    onap-aai              aai-traversal-8423740-mw90r                   2/2       Running   0          17m
    onap-aai              data-router-3434587794-n2x9c                  1/1       Running   0          17m
    onap-aai              elasticsearch-622738319-swzv2                 1/1       Running   0          17m
    onap-aai              hbase-1949550546-c22z3                        1/1       Running   0          17m
    onap-aai              model-loader-service-4144225433-s9pqp         2/2       Running   0          17m
    onap-aai              search-data-service-3842430948-grqvc          2/2       Running   0          17m
    onap-aai              sparky-be-4222608366-ggm5d                    2/2       Running   0          17m
    onap-appc             appc-1828810488-f0zg3                         2/2       Running   0          17m
    onap-appc             appc-dbhost-2793739621-6rnjz                  1/1       Running   0          17m
    onap-appc             appc-dgbuilder-2298093128-nnz9c               1/1       Running   0          17m
    onap-clamp            clamp-2211988013-rngzh                        1/1       Running   0          16m
    onap-clamp            clamp-mariadb-1812977665-qhrpp                1/1       Running   0          16m
    onap-cli              cli-595710742-2cqw7                           1/1       Running   0          16m
    onap-consul           consul-agent-3312409084-261ks                 1/1       Running   2          17m
    onap-consul           consul-server-1173049560-20448                1/1       Running   2          17m
    onap-consul           consul-server-1173049560-9pn84                1/1       Running   1          17m
    onap-consul           consul-server-1173049560-md1th                1/1       Running   1          17m
    onap-dcaegen2         heat-bootstrap-4010086101-0fdss               1/1       Running   0          17m
    onap-dcaegen2         nginx-1230103904-2qm41                        1/1       Running   0          17m
    onap-kube2msb         kube2msb-registrator-1227291125-lb6vw         1/1       Running   0          16m
    onap-log              elasticsearch-1942187295-xqrxr                1/1       Running   0          16m
    onap-log              kibana-3372627750-q93zv                       1/1       Running   0          16m
    onap-log              logstash-1708188010-18gms                     1/1       Running   0          16m
    onap-message-router   dmaap-3126594942-w1s5p                        1/1       Running   0          17m
    onap-message-router   global-kafka-3848542622-snjvq                 1/1       Running   0          17m
    onap-message-router   zookeeper-624700062-m08rx                     1/1       Running   0          17m
    onap-msb              msb-consul-3334785600-j7x42                   1/1       Running   0          17m
    onap-msb              msb-discovery-196547432-hpjlk                 1/1       Running   0          17m
    onap-msb              msb-eag-1649257109-m5d4h                      1/1       Running   0          17m
    onap-msb              msb-iag-1033096170-slnn5                      1/1       Running   0          17m
    onap-mso              mariadb-829081257-5nbgm                       1/1       Running   0          17m
    onap-mso              mso-3784963895-l7q3d                          2/2       Running   0          17m
    onap-multicloud       framework-2273343137-v3kfd                    1/1       Running   0          16m
    onap-multicloud       multicloud-ocata-1517639325-2pxbw             1/1       Running   0          16m
    onap-multicloud       multicloud-vio-4239509896-3fh96               1/1       Running   0          16m
    onap-multicloud       multicloud-windriver-3629763724-0mmpt         1/1       Running   0          16m
    onap-policy           brmsgw-4149605335-283vs                       1/1       Running   0          17m
    onap-policy           drools-870120400-k491c                        2/2       Running   0          17m
    onap-policy           mariadb-2660273324-mpjkq                      1/1       Running   0          17m
    onap-policy           nexus-1730114603-9nvdg                        1/1       Running   0          17m
    onap-policy           pap-1693910617-v1f4p                          2/2       Running   0          17m
    onap-policy           pdp-3450409118-0dtq9                          2/2       Running   0          17m
    onap-portal           portalapps-1783099045-3v42h                   2/2       Running   0          17m
    onap-portal           portaldb-1451233177-6qtqq                     1/1       Running   0          17m
    onap-portal           portalwidgets-2060058548-k8pq1                1/1       Running   0          17m
    onap-portal           vnc-portal-1319334380-5sjxz                   1/1       Running   0          17m
    onap-robot            robot-2551980890-xjngn                        1/1       Running   0          17m
    onap-sdc              sdc-be-2336519847-lb6f4                       2/2       Running   0          17m
    onap-sdc              sdc-cs-1151560586-06t27                       1/1       Running   0          17m
    onap-sdc              sdc-es-3319302712-d6c0z                       1/1       Running   0          17m
    onap-sdc              sdc-fe-2862673798-ffk7d                       2/2       Running   0          17m
    onap-sdc              sdc-kb-1258596734-rkvff                       1/1       Running   0          17m
    onap-sdnc             sdnc-1395102659-nc13w                         2/2       Running   0          17m
    onap-sdnc             sdnc-dbhost-3029711096-dxjsq                  1/1       Running   0          17m
    onap-sdnc             sdnc-dgbuilder-4267203648-v6s5w               1/1       Running   0          17m
    onap-sdnc             sdnc-portal-2558294154-k6psg                  1/1       Running   0          17m
    onap-uui              uui-4267149477-q672x                          1/1       Running   0          16m
    onap-uui              uui-server-3441797946-qs1hq                   1/1       Running   0          16m
    onap-vfc              vfc-catalog-840807183-xqh9c                   1/1       Running   0          16m
    onap-vfc              vfc-emsdriver-2936953408-jwskm                1/1       Running   0          16m
    onap-vfc              vfc-gvnfmdriver-2866216209-fkt8n              1/1       Running   0          16m
    onap-vfc              vfc-hwvnfmdriver-2588350680-qg4xq             1/1       Running   0          16m
    onap-vfc              vfc-jujudriver-406795794-lm91b                1/1       Running   0          16m
    onap-vfc              vfc-nokiavnfmdriver-1760240499-3bswj          1/1       Running   0          16m
    onap-vfc              vfc-nslcm-3756650867-k5wtq                    1/1       Running   0          16m
    onap-vfc              vfc-resmgr-1409642779-0xwxp                   1/1       Running   0          16m
    onap-vfc              vfc-vnflcm-3340104471-3s2th                   1/1       Running   0          16m
    onap-vfc              vfc-vnfmgr-2823857741-96h0q                   1/1       Running   0          16m
    onap-vfc              vfc-vnfres-1792029715-v0fm0                   1/1       Running   0          16m
    onap-vfc              vfc-workflow-3450325534-zfj11                 1/1       Running   0          16m
    onap-vfc              vfc-workflowengineactiviti-4110617986-vc677   1/1       Running   0          16m
    onap-vfc              vfc-ztesdncdriver-1452986549-vv31k            1/1       Running   0          16m
    onap-vfc              vfc-ztevmanagerdriver-2080553526-lzk66        1/1       Running   0          16m
    onap-vid              vid-mariadb-3318685446-zdnww                  1/1       Running   0          17m
    onap-vid              vid-server-3026751708-7hglt                   2/2       Running   0          17m
    onap-vnfsdk           postgres-436836560-rkc4k                      1/1       Running   0          16m
    onap-vnfsdk           refrepo-1924147637-xhrbj                      1/1       Running   0          16m



  7. Let's run the health check to see the current status; expect a failure for DCAE, as it is not yet deployed.

    cd oom/kubernetes/robot
    
    
    ./ete-k8s.sh [kubernetes-namespace] health
    
    
    Example
    $ ./ete-k8s.sh onap health

    Result:

    Starting Xvfb on display :88 with res 1280x1024x24
    Executing robot tests at log level TRACE
    ==============================================================================
    OpenECOMP ETE
    ==============================================================================
    OpenECOMP ETE.Robot
    ==============================================================================
    OpenECOMP ETE.Robot.Testsuites
    ==============================================================================
    OpenECOMP ETE.Robot.Testsuites.Health-Check :: Testing ecomp components are...
    ==============================================================================
    Basic DCAE Health Check                                               [ WARN ] Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7ffa61dbfa50>: Failed to establish a new connection: [Errno -2] Name or service not known',)': /healthcheck
    [ WARN ] Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7ffa61dbf650>: Failed to establish a new connection: [Errno -2] Name or service not known',)': /healthcheck
    [ WARN ] Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7ffa5fe40510>: Failed to establish a new connection: [Errno -2] Name or service not known',)': /healthcheck
    | FAIL |
    ConnectionError: HTTPConnectionPool(host='dcae-controller.onap-dcae', port=8080): Max retries exceeded with url: /healthcheck (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7ffa619bf7d0>: Failed to establish a new connection: [Errno -2] Name or service not known',))
    ------------------------------------------------------------------------------
    Basic SDNGC Health Check                                              | PASS |
    ------------------------------------------------------------------------------
    Basic A&AI Health Check                                               | PASS |
    ------------------------------------------------------------------------------
    Basic Policy Health Check                                             | PASS |
    ------------------------------------------------------------------------------
    Basic MSO Health Check                                                | PASS |
    ------------------------------------------------------------------------------
    Basic ASDC Health Check                                               | PASS |
    ------------------------------------------------------------------------------
    Basic APPC Health Check                                               | PASS |
    ------------------------------------------------------------------------------
    Basic Portal Health Check                                             | PASS |
    ------------------------------------------------------------------------------
    Basic Message Router Health Check                                     | PASS |
    ------------------------------------------------------------------------------
    Basic VID Health Check                                                | PASS |
    ------------------------------------------------------------------------------
    Basic Microservice Bus Health Check                                   | PASS |
    ------------------------------------------------------------------------------
    Basic CLAMP Health Check                                              | PASS |
    ------------------------------------------------------------------------------
    catalog API Health Check                                              | PASS |
    ------------------------------------------------------------------------------
    emsdriver API Health Check                                            | PASS |
    ------------------------------------------------------------------------------
    gvnfmdriver API Health Check                                          | PASS |
    ------------------------------------------------------------------------------
    huaweivnfmdriver API Health Check                                     | PASS |
    ------------------------------------------------------------------------------
    multicloud API Health Check                                           | PASS |
    ------------------------------------------------------------------------------
    multicloud-ocata API Health Check                                     | PASS |
    ------------------------------------------------------------------------------
    multicloud-titanium_cloud API Health Check                            | PASS |
    ------------------------------------------------------------------------------
    multicloud-vio API Health Check                                       | PASS |
    ------------------------------------------------------------------------------
    nokiavnfmdriver API Health Check                                      | PASS |
    ------------------------------------------------------------------------------
    nslcm API Health Check                                                | PASS |
    ------------------------------------------------------------------------------
    resmgr API Health Check                                               | PASS |
    ------------------------------------------------------------------------------
    usecaseui-gui API Health Check                                        | PASS |
    ------------------------------------------------------------------------------
    vnflcm API Health Check                                               | PASS |
    ------------------------------------------------------------------------------
    vnfmgr API Health Check                                               | PASS |
    ------------------------------------------------------------------------------
    vnfres API Health Check                                               | PASS |
    ------------------------------------------------------------------------------
    workflow API Health Check                                             | PASS |
    ------------------------------------------------------------------------------
    ztesdncdriver API Health Check                                        | PASS |
    ------------------------------------------------------------------------------
    ztevmanagerdriver API Health Check                                    | PASS |
    ------------------------------------------------------------------------------
    OpenECOMP ETE.Robot.Testsuites.Health-Check :: Testing ecomp compo... | FAIL |
    30 critical tests, 29 passed, 1 failed
    30 tests total, 29 passed, 1 failed
    ==============================================================================
    OpenECOMP ETE.Robot.Testsuites                                        | FAIL |
    30 critical tests, 29 passed, 1 failed
    30 tests total, 29 passed, 1 failed
    ==============================================================================
    OpenECOMP ETE.Robot                                                   | FAIL |
    30 critical tests, 29 passed, 1 failed
    30 tests total, 29 passed, 1 failed
    ==============================================================================
    OpenECOMP ETE                                                         | FAIL |
    30 critical tests, 29 passed, 1 failed
    30 tests total, 29 passed, 1 failed
    ==============================================================================
    Output:  /share/logs/ETE_46070/output.xml
    Log:     /share/logs/ETE_46070/log.html
    Report:  /share/logs/ETE_46070/report.html
    command terminated with exit code 1


    Update result (04/01/2018): Since c/26645 got merged, DCAE health check is passing

    $ ./ete-k8s.sh health
    Starting Xvfb on display :88 with res 1280x1024x24
    Executing robot tests at log level TRACE
    ==============================================================================
    OpenECOMP ETE
    ==============================================================================
    OpenECOMP ETE.Robot
    ==============================================================================
    OpenECOMP ETE.Robot.Testsuites
    ==============================================================================
    OpenECOMP ETE.Robot.Testsuites.Health-Check :: Testing ecomp components are...
    ==============================================================================
    Basic DCAE Health Check                                               | PASS |
    ------------------------------------------------------------------------------
    Basic SDNGC Health Check                                              | PASS |
    ------------------------------------------------------------------------------
    Basic A&AI Health Check                                               | PASS |
    ------------------------------------------------------------------------------
    Basic Policy Health Check                                             | PASS |
    ------------------------------------------------------------------------------
    Basic MSO Health Check                                                | PASS |
    ------------------------------------------------------------------------------
    Basic ASDC Health Check                                               | PASS |
    ------------------------------------------------------------------------------
    Basic APPC Health Check                                               | PASS |
    ------------------------------------------------------------------------------
    Basic Portal Health Check                                             | PASS |
    ------------------------------------------------------------------------------
    Basic Message Router Health Check                                     | PASS |
    ------------------------------------------------------------------------------
    Basic VID Health Check                                                | PASS |
    ------------------------------------------------------------------------------
    Basic Microservice Bus Health Check                                   | PASS |
    ------------------------------------------------------------------------------
    Basic CLAMP Health Check                                              | PASS |
    ------------------------------------------------------------------------------
    catalog API Health Check                                              | PASS |
    ------------------------------------------------------------------------------
    emsdriver API Health Check                                            | PASS |
    ------------------------------------------------------------------------------
    gvnfmdriver API Health Check                                          | PASS |
    ------------------------------------------------------------------------------
    huaweivnfmdriver API Health Check                                     | PASS |
    ------------------------------------------------------------------------------
    multicloud API Health Check                                           | PASS |
    ------------------------------------------------------------------------------
    multicloud-ocata API Health Check                                     | PASS |
    ------------------------------------------------------------------------------
    multicloud-titanium_cloud API Health Check                            | PASS |
    ------------------------------------------------------------------------------
    multicloud-vio API Health Check                                       | PASS |
    ------------------------------------------------------------------------------
    nokiavnfmdriver API Health Check                                      | PASS |
    ------------------------------------------------------------------------------
    nslcm API Health Check                                                | PASS |
    ------------------------------------------------------------------------------
    resmgr API Health Check                                               | PASS |
    ------------------------------------------------------------------------------
    usecaseui-gui API Health Check                                        | PASS |
    ------------------------------------------------------------------------------
    vnflcm API Health Check                                               | PASS |
    ------------------------------------------------------------------------------
    vnfmgr API Health Check                                               | PASS |
    ------------------------------------------------------------------------------
    vnfres API Health Check                                               | PASS |
    ------------------------------------------------------------------------------
    workflow API Health Check                                             | PASS |
    ------------------------------------------------------------------------------
    ztesdncdriver API Health Check                                        | PASS |
    ------------------------------------------------------------------------------
    ztevmanagerdriver API Health Check                                    | PASS |
    ------------------------------------------------------------------------------
    OpenECOMP ETE.Robot.Testsuites.Health-Check :: Testing ecomp compo... | PASS |
    30 critical tests, 30 passed, 0 failed
    30 tests total, 30 passed, 0 failed
    ==============================================================================
    OpenECOMP ETE.Robot.Testsuites                                        | PASS |
    30 critical tests, 30 passed, 0 failed
    30 tests total, 30 passed, 0 failed
    ==============================================================================
    OpenECOMP ETE.Robot                                                   | PASS |
    30 critical tests, 30 passed, 0 failed
    30 tests total, 30 passed, 0 failed
    ==============================================================================
    OpenECOMP ETE                                                         | PASS |
    30 critical tests, 30 passed, 0 failed
    30 tests total, 30 passed, 0 failed
    ==============================================================================
    Output:  /share/logs/ETE_21228/output.xml
    Log:     /share/logs/ETE_21228/log.html
    Report:  /share/logs/ETE_21228/report.html
  8. Let's run the init_robot script, which will enable us to view the robot logs

    cd oom/kubernetes/robot
    $ ./demo-k8s.sh init_robot

    Result:

    WEB Site Password for user 'test': Starting Xvfb on display :89 with res 1280x1024x24
    Executing robot tests at log level TRACE
    ==============================================================================
    OpenECOMP ETE
    ==============================================================================
    OpenECOMP ETE.Robot
    ==============================================================================
    OpenECOMP ETE.Robot.Testsuites
    ==============================================================================
    OpenECOMP ETE.Robot.Testsuites.Update Onap Page :: Initializes ONAP Test We...
    ==============================================================================
    Update ONAP Page                                                      | PASS |
    ------------------------------------------------------------------------------
    OpenECOMP ETE.Robot.Testsuites.Update Onap Page :: Initializes ONA... | PASS |
    1 critical test, 1 passed, 0 failed
    1 test total, 1 passed, 0 failed
    ==============================================================================
    OpenECOMP ETE.Robot.Testsuites                                        | PASS |
    1 critical test, 1 passed, 0 failed
    1 test total, 1 passed, 0 failed
    ==============================================================================
    OpenECOMP ETE.Robot                                                   | PASS |
    1 critical test, 1 passed, 0 failed
    1 test total, 1 passed, 0 failed
    ==============================================================================
    OpenECOMP ETE                                                         | PASS |
    1 critical test, 1 passed, 0 failed
    1 test total, 1 passed, 0 failed
    ==============================================================================
    Output:  /share/logs/demo/UpdateWebPage/output.xml
    Log:     /share/logs/demo/UpdateWebPage/log.html
    Report:  /share/logs/demo/UpdateWebPage/report.html
  9.  Navigate to

    <kubernetes-vm-ip>:30209

    and to see the robot logs, go to

    <kubernetes-vm-ip>:30209/logs
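
    For a quick check from the command line, something like the following should work (a minimal sketch, assuming the NodePort 30209 shown above and the password you chose for the 'test' user during init_robot):

    # hypothetical command-line check of the robot web server exposed on NodePort 30209;
    # replace the IP and password with your own values
    curl -u test:<password> http://<kubernetes-vm-ip>:30209/
    curl -u test:<password> http://<kubernetes-vm-ip>:30209/logs/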

188 Comments

  1. Just need some clarification:

    I presume the helm client and server that are indicated in the prerequisites need to be installed on the Rancher VM. Is my understanding correct?

    Ravi

    1. Hi Ravi,

      This is correct.

      Alexis.

      1. If that's the case, can I swap the order of the instructions so that the helm installation step happens after we spin up the Rancher VM?

        1. Helm comes with a client and a server binary. The host needs the client binary; the K8S VMs need the server binary. When creating the K8S VM using Rancher, Rancher will install Helm for you. So whether you install the Helm client binary before or after doesn't matter much.

          1. Sorry, I'm still confused. What is the 'host'? Is that the Rancher VM?

            I'm confused because Alexis Chiarello says Helm should be installed on the Rancher machine, yet the instructions for installing Helm (on the Rancher machine) appear prior to the steps where we create the Rancher VM. How can you install something on a machine that does not yet exist?

            1. The host is the machine from which you will run the deploy commands, like ./createAll ./deleteAll ./createConfig provided by OOM to deploy ONAP.

              So if you use the Rancher VM to do that, the host is the Rancher VM; if you use your laptop, then the host is your laptop, etc...

              The helm client needs to be installed on the host before you try to use the OOM scripts to deploy ONAP.
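
              For reference, a minimal sketch of installing the Helm client on such a host (assuming Linux amd64 and the Helm version from the versions table; adjust to your release):

              # hypothetical install of the Helm 2.x client binary on the host (adjust the version as needed)
              curl -LO https://storage.googleapis.com/kubernetes-helm/helm-v2.3.0-linux-amd64.tar.gz
              tar -zxvf helm-v2.3.0-linux-amd64.tar.gz
              sudo mv linux-amd64/helm /usr/local/bin/helm
              # 'helm version' prints the client version, and the Tiller (server) version once the cluster is reachable
              helm version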


                I am doing the following steps on the Kubernetes VM:

                1. Click Kubernetes → CLI
                2. Click Generate Config
                3. Copy/Paste in your host

                Then the following steps on Rancher VM:

                1. Install kubectl

                  curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.8.0/bin/linux/amd64/kubectl
                  chmod +x ./kubectl
                  sudo mv ./kubectl /usr/local/bin/kubectl
                2. Make your kubectl use this new environment

                  kubectl config use-context <rancher-environment-name>

                I am also deploying OOM on Rancher, running all the steps after "Clone OOM Beijing branch" (on Rancher only).

                Is this correct, or am I doing something wrong?

  2. Hi All,

    I am having trouble getting Rancher 1.6.10 running. What Docker version should I be using?

    I have tried 1.13.1 and also 17.03.0-ce, and in both cases the Rancher server never comes up. I get the error below:

    time="2018-01-23T01:26:18Z" level=info msg="Starting rancher-compose-executor" version=v0.14.18

    time="2018-01-23T01:26:18Z" level=fatal msg="Unable to create event router" error="Get http://localhost:8080/v2-beta: dial tcp [::1]:8080: getsockopt: connection refused"

    time="2018-01-23T01:26:18Z" level=fatal msg="Failed to configure cattle client: Get http://localhost:8080/v2-beta: dial tcp [::1]:8080: getsockopt: connection refused"

    time="2018-01-23T01:26:18Z" level=info msg="Downloading key from http://localhost:8081/v1/scripts/api.crt"

    time="2018-01-23T01:26:18Z" level=fatal msg="Error getting config." error="Invalid key content"

    time="2018-01-23T01:26:18Z" level=warning msg="Couldn't load install uuid: Get http://localhost:8080/v2-beta: dial tcp [::1]:8080: getsockopt: connection refused"

    I do not see the “.... Startup Succeeded, Listening on port..” message in the logs.

    Also obviously I cannot get to Rancher UI.

    Any help to get past this is greatly appreciated..

    Regards,

    Ravi

    1. I got Rancher working with this version of docker:


      Client:

       Version:       17.12.0-ce

       API version:   1.35

       Go version:    go1.9.2

       Git commit:    c97c6d6

       Built: Wed Dec 27 20:11:19 2017

       OS/Arch:       linux/amd64



        1. Actually I think that list is wrong. I tried 1.13 and it didn't work.

          1. The link is for rancher v1.6.x. Out of the box, it came for me with K8S 1.7.x. Docker 1.13 is for K8S 1.8.

            Rancher, Helm, Kubernetes and Docker versions are very specific and driven mostly by Rancher, if you use Rancher.

    2. Guys, the last vetted versions are as follows - only master is updated to ONAP on Kubernetes#Requirements

      master/beijing

      rancher 1.6.11 to 1.6.14, Helm 2.8 (do a helm init --upgrade after registering a host and setting .kube/config), kubernetes 1.8.6, docker 17.03.2-ce

      Amsterdam (need to check - but the previous Rancher 1.6.10, helm 2.3.0, kubernetes 1.8.x, docker 1.12.x)
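
      A rough sketch of the "helm init --upgrade" sequence mentioned above, assuming the kubeconfig generated by Rancher has already been saved to ~/.kube/config on the host:

      # point kubectl at the Rancher-created environment, then upgrade Tiller to match the local client
      kubectl config use-context <rancher-environment-name>
      helm init --upgrade
      helm version   # client and server versions should now agree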


      Also, I am back on this page, as I am trying to stand up the dcae VMs like cdap and consul in order to get the dcae_controller working in HEAT - so I can use Alexis de Talhouët's work bridging OOM and DCAE.

      also following

      http://onap.readthedocs.io/en/latest/submodules/dcaegen2.git/docs/sections/installation_heat.html

      will post here as well.

    3. Hi Ravi,

      Are you able to fix this error?

      I am getting the same one now.

      I believe it is due to some dependency on OpenStack services. When I was trying on the host server machine (with KVM installed on top of it), where the nova compute service is running, Rancher was working fine.

      But when I create a VM using OpenStack Nova via the dashboard and try installing Rancher, the UI does not come up and I see the same error.


      Regards,

      Pranjal

  3. I think that most of the instructions of this guide have been collected and automated with this script[1], and can be deployed to OpenStack using the corresponding plugin[2].

    [1] https://git.onap.org/integration/tree/bootstrap/vagrant-onap/lib/oom

    [2] http://onap.readthedocs.io/en/latest/submodules/integration.git/bootstrap/vagrant-onap/doc/source/features/openstack.html

    1. How do you run that script?

      It seems like a list of function definitions, but nothing actually gets called.

  4. Alexis de Talhouët, the curl command with POST in step 4 is failing with error 405, Method Not Allowed. But it works when I try GET.

    Error

    {"id":"5b2f8bec-b83c-41f9-8b67-76710b175409","type":"error","links":{},"actions":{},"status":405,"code":"Method not allowed","message":"Method not allowed","detail":null,"baseType":"error"}


    I used below command

    root@onap:~# curl -u "958DE820E963E2862BC5:fmcMomN9xss1iU1dd2qbqq4VqpbzW8Rjf9Thrgb2" \

    > -X POST \
    > -H 'Accept:application/json' \
    > -H 'Content-Type:application/json' \
    > -d '{
    > "hostname":"onap12",
    > "engineInstallUrl":"wget https://raw.githubusercontent.com/rancher/install-docker/master/1.12.6.sh",
    > "openstackConfig":{
    > "authUrl":"http://XXXXX:5000/v3",
    > "domainName":"Default",
    > "endpointType":"adminURL",
    > "flavorName":"rancher.xlarge",
    > "imageName":"Ubuntu_16_01_iso",
    > "netName":"onap_nw1",
    > "password":"redhat",
    > "sshUser":"ubuntu",
    > "tenantName":"admin",
    > "username":"admin"}
    > }' \
    > 'http://xxxxx:8080/v2-beta/projects/1a7/hosts/'

    1. Alexis de Talhouët, the curl command worked later, but host creation on OpenStack failed with the error below:

      Error detecting OS: Too many retries waiting for SSH to be available. Last error: Maximum number of retries (60) exceeded

  5. Which VM should we run the "Deploy OOM" steps in?

    Since the previous steps were in the Kubernetes VM, I'm assuming that's where we should run them.


    However, the pre-requisites list 3 VMs. So should I do these steps in the remaining VM? If so, where are the instructions on how to link that VM up with Rancher and Kubernetes?


    Edit: In the video you can see what this page used to be like. The "Deploy OOM" instructions used to be prefaced with a link to "OOM Infrastructure Setup". That link is no longer there. Where did it go? I think that's where all the missing steps are.

    1. Here, you created two VMs: one for Rancher, one for K8S. I'm not sure which third VM you're mentioning.

      The K8S one will be where you will deploy ONAP.

      Regarding the video, right, the page might have been reworked since then, but the content is still there. "OOM Infrastructure Setup" is the equivalent of "Setup infrastructure" in this page, and "Deploy OOM", well, is "Deploy OOM" in this page.


      1. Woops, there aren't 3 VMs listed, there are 17 listed in the 'prerequisite' section. The remaining 15 are the ones I'm referring to.

        Are they spun up by some scripts, or do I manually need to spin them up?

        1. The remaining 15 are for DCAE. See Deployingthedcaegen2pod to understand how they get spun up. They are spun up by the dcae bootstrap container that lives in the dcae bootstrap VM created by the dcaegen2 pod, provided you configured the OOM Amsterdam branch to deploy DCAE and supplied the correct parameters.

    2. Matthew, there are instructions on bringing up a colocated VM (server and host) - I will be switching to using Alexis's approach of splitting the rancher server and the multiple hosts - but for now you can get a feel by checking out

      ONAP on Kubernetes#ONAPInstallation

      specifically

      ONAP on Kubernetes#Registeryourhost

      I am here on Alexis' page because, now that I have ONAP (except DCAE) running, I would like to connect DCAE running in HEAT to Kubernetes using the proxy Alexis developed.

      /michael



  6. Checking out why DCAE is failing to orchestrate consul in HEAT.

    I previously rebooted AAI-vm1 to get the 7 containers up that were blocking the initial AAI rest calls from DCAE

    Goal is to be able to shut down all HEAT vms except the DCAE subset (cdap, consul...)

    getting a neutron network error - checking id's

    DCAEGEN2-300

    root@onap-dcae-bootstrap:/opt# docker logs -f boot
    Initiated ./blueprints/centos_vm.yaml
    If you make changes to the blueprint, run `cfy local init -p ./blueprints/centos_vm.yaml` again to apply them
    + cfy local execute -w install --task-retries=10
    2018-02-03 19:16:19 CFY <local> Starting 'install' workflow execution
    2018-02-03 19:16:19 CFY <local> [security_group_15499] Creating node
    2018-02-03 19:16:19 CFY <local> [key_pair_a883e] Creating node
    2018-02-03 19:16:19 CFY <local> [private_net_22910] Creating node
    2018-02-03 19:16:19 CFY <local> [floatingip_vm00_3413c] Creating node
    2018-02-03 19:16:19 CFY <local> [private_net_22910.create] Sending task 'neutron_plugin.network.create'
    2018-02-03 19:16:19 CFY <local> [floatingip_vm00_3413c.create] Sending task 'neutron_plugin.floatingip.create'
    2018-02-03 19:16:19 CFY <local> [key_pair_a883e.create] Sending task 'nova_plugin.keypair.create'
    2018-02-03 19:16:19 CFY <local> [security_group_15499.create] Sending task 'neutron_plugin.security_group.create'
    2018-02-03 19:16:19 CFY <local> [private_net_22910.create] Task started 'neutron_plugin.network.create'
    2018-02-03 19:16:19 CFY <local> [floatingip_vm00_3413c.create] Task started 'neutron_plugin.floatingip.create'
    2018-02-03 19:16:19 CFY <local> [key_pair_a883e.create] Task started 'nova_plugin.keypair.create'
    2018-02-03 19:16:19 CFY <local> [security_group_15499.create] Task started 'neutron_plugin.security_group.create'
    2018-02-03 19:16:19 CFY <local> [private_net_22910.create] Task failed 'neutron_plugin.network.create' -> {"error": {"message": "The resource could not be found.", "code": 404, "title": "Not Found"}} [status_code=401]
    2018-02-03 19:16:19 CFY <local> 'install' workflow execution failed: Workflow failed: Task failed 'neutron_plugin.network.create' -> {"error": {"message": "The resource could not be found.", "code": 404, "title": "Not Found"}} [status_code=401]
    Workflow failed: Task failed 'neutron_plugin.network.create' -> {"error": {"message": "The resource could not be found.", "code": 404, "title": "Not Found"}} [status_code=401]
    root@onap-dcae-bootstrap:/opt# docker ps


    In /opt/config/keystone_url.txt, 5000/v2/v2.0 should be 5000/v2.0.
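
    A possible way to apply that correction on the dcae bootstrap VM (an untested sketch; verify the file contents first):

    # show the current value, then drop the doubled API-version segment
    cat /opt/config/keystone_url.txt
    sudo sed -i 's|5000/v2/v2.0|5000/v2.0|' /opt/config/keystone_url.txt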

     

    fixed neutron - next error

    2018-02-04 02:05:56 CFY <local> [security_group_ba9b4.create] Task started 'neutron_plugin.security_group.create'

    2018-02-04 02:05:56 CFY <local> [key_pair_82fb7.create] Task failed 'nova_plugin.keypair.create' -> The resource could not be found. [status_code=404]

    2018-02-04 02:05:56 CFY <local> [floatingip_vm00_479c2.create] Task failed 'neutron_plugin.floatingip.create' -> {"error": {"message": "The resource could not be found.", "code": 404, "title": "Not Found"}} [status_code=401]

    2018-02-04 02:05:56 CFY <local> 'install' workflow execution failed: Workflow failed: Task failed 'neutron_plugin.floatingip.create' -> {"error": {"message": "The resource could not be found.", "code": 404, "title": "Not Found"}} [status_code=401]

    Workflow failed: Task failed 'neutron_plugin.floatingip.create' -> {"error": {"message": "The resource could not be found.", "code": 404, "title": "Not Found"}} [status_code=401]

    1. Switching sides from HEAT to OOM - running with the config in dcae-onap-parameters.yaml for OOM-508.

      Running the dcae controller from the OOM side now

      Filled in onap-parameters.yaml from the amsterdam template.

      Triaging a failure bringing up DCAE 

      ONAP on Kubernetes#AmsterdamOOM+DCAEHeatinstallation

      OOM-657

      Getting the following in amsterdam:20180204

      running same helm as master 2.8.0 server/client - should be ok up from 2.3.0

      Creating deployments and services **********

      Error: release onap-dcaegen2 failed: Service "dcaegen2" is invalid: spec.ports[2]: Duplicate value: api.ServicePort{Name:"", Protocol:"TCP", Port:8443, TargetPort:intstr.IntOrString{Type:0, IntVal:0, StrVal:""}, NodePort:0}

      The command helm returned with error code 1


      1. I think this is a sporadic issue.

        1. Hi Alexis,

             After looking further I see that oom/kubernetes/dcaegen2/templates/nginx-service.yaml has both aai-service and sdc-be referring to port 8443.

          Just to see if it makes any difference, I took the sdc-be entry out:

          - name: sdc-be
            port: 8443
            targetPort: 8443
            nodePort: 30602

          After doing that I cleaned up and reran the dcaegen2 deployment, and I do not see the error now. I just want to know what the correct way to fix this is.

          Regards,

          Ravi..

          1. Yes, this is on purpose: two applications use the same port. TBH, the more I think about it, the more I think we can define only one entry in the service for both apps, since nginx will then determine which application the 8443 traffic is meant for by doing a domain name lookup. Note: this is a suggestion, not yet a tested fix.

          2. Hi Alexis,

               Thanks for the update. Just to be clear, what you are saying is that there will be only one entry in nginx-service.yaml that would serve both aai-service and sdc-be. Will the nodePort have the ability to specify multiple comma-separated ports, like 30600 for aai and 30602 for sdc-be?

            The method I have chosen for now is completely taking out the sdc-be entry, and I am not sure whether the 30602 port is even being used in my installation. Just curious, how do others have this working?

            Regards,

            Ravi..

      2. Hi Michael,

           Can you please let me know what you did to get past the above issue. I am seeing the same problem..

        Regards,

        Ravi

    2. Hi Michael,

      I have encountered the same error while deploying DCAE. Please let me know how you resolved the issue.

      I am using Amsterdam release.

      ERROR LOGS:
      ------------
      If you make changes to the blueprint, run `cfy local init -p ./blueprints/centos_vm.yaml` again to apply them
      
      + cfy local execute -w install --task-retries=10
      
      2018-02-22 00:51:43 CFY <local> Starting 'install' workflow execution
      
      2018-02-22 00:51:43 CFY <local> [security_group_325c5] Creating node
      
      2018-02-22 00:51:43 CFY <local> [key_pair_8efae] Creating node
      
      2018-02-22 00:51:43 CFY <local> [floatingip_vm00_b6fff] Creating node
      
      2018-02-22 00:51:43 CFY <local> [private_net_a10e3] Creating node
      
      2018-02-22 00:51:43 CFY <local> [key_pair_8efae.create] Sending task 'nova_plugin.keypair.create'
      
      2018-02-22 00:51:43 CFY <local> [floatingip_vm00_b6fff.create] Sending task 'neutron_plugin.floatingip.create'
      
      2018-02-22 00:51:43 CFY <local> [security_group_325c5.create] Sending task 'neutron_plugin.security_group.create'
      
      2018-02-22 00:51:43 CFY <local> [private_net_a10e3.create] Sending task 'neutron_plugin.network.create'
      
      2018-02-22 00:51:43 CFY <local> [key_pair_8efae.create] Task started 'nova_plugin.keypair.create'
      
      2018-02-22 00:51:43 CFY <local> [floatingip_vm00_b6fff.create] Task started 'neutron_plugin.floatingip.create'
      
      2018-02-22 00:51:43 CFY <local> [security_group_325c5.create] Task started 'neutron_plugin.security_group.create'
      
      2018-02-22 00:51:43 CFY <local> [private_net_a10e3.create] Task started 'neutron_plugin.network.create'
      
      2018-02-22 00:51:43 CFY <local> [key_pair_8efae.create] Task failed 'nova_plugin.keypair.create' -> The resource could not be found. [status_code=404]
      
      2018-02-22 00:51:43 CFY <local> [floatingip_vm00_b6fff.create] Task failed 'neutron_plugin.floatingip.create' -> {"error": {"message": "The resource could not be found.", "code": 404, "title": "Not Found"}} [status_code=401]
      
      2018-02-22 00:51:43 CFY <local> 'install' workflow execution failed: Workflow failed: Task failed 'neutron_plugin.floatingip.create' -> {"error": {"message": "The resource could not be found.", "code": 404, "title": "Not Found"}} [status_code=401]
      
      Workflow failed: Task failed 'neutron_plugin.floatingip.create' -> {"error": {"message": "The resource could not be found.", "code": 404, "title": "Not Found"}} [status_code=401]
      1. Hi yogesh sharma, check the installer file; this is the script that runs inside the boot container. It will help you trace out the bug. Or you can log into the container and manually run the script.

        Also, I suggest taking a look at the OpenStack credentials it is using. You can check that on the Ubuntu boot VM in the /opt/app/config folder; check each file.

        1. Hi Bharath Thiruveedula,

          I see this problem in the dcae docker container. This was fine last week. Not sure if it has downloaded a new image for dcae?

          + wagon install -s dnsdesig.wgn
          INFO - Installing dnsdesig.wgn
          INFO - Installing dnsdesig...
          INFO - Installing within current virtualenv: True...
          ERROR -
          ERROR - Usage:
          ERROR - pip install [options] <requirement specifier> [package-index-options] ...
          ERROR - pip install [options] -r <requirements file> [package-index-options] ...
          ERROR - pip install [options] [-e] <vcs project url> ...
          ERROR - pip install [options] [-e] <local project path> ...
          ERROR - pip install [options] <archive url/path> ...
          ERROR -
          ERROR - no such option: --use-wheel
          ERROR -
          ERROR - Could not install package: dnsdesig.

           How to fix this?

          I logged into the container and made some changes to the installed script, but in the container I am not able to run wagon commands as it does not have sudo permissions. There is another person in the community facing the same problem; he has replied on the same wiki.

          Any suggestions/comments will be helpful.

          Thanks, Vijayalakshmi


            1. Thanks Arindam. Looks like the problem was not resolved even with "pip install pip==9.0.3".

      2. yogesh sharma

        When I encountered the exact error that you faced, I noticed that the output of "cat /opt/config/keystone_url.txt" on the Ubuntu VM was "http://$HOST_IP:5000/v3/v2.0". I fixed that file to set the correct value, killed the dead boot container using the command "sudo docker system prune", and kicked off the /opt/dcae2_install.sh script again, but encountered the error below:


        Installing Cloudify Manager on 172.24.4.12.
        + echo ‘Installing Cloudify Manager on 172.24.4.12.’
        ++ sed s/PVTIP=//
        ++ grep PVTIP
        ++ ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i ./key600 centos@172.24.4.12 ‘echo PVTIP=`curl --silent http://$HOST_IP_OF_OPENSTACK/2009-04-04/meta-data/local-ipv4`’
        Warning: Permanently added ‘172.24.4.12’ (ECDSA) to the list of known hosts.
        Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
        + PVTIP=
        Cannot access specified machine at 172.24.4.12 using supplied credentials
        + ‘[’ ‘’ = ‘’ ‘]’
        + echo Cannot access specified machine at 172.24.4.12 using supplied credentials
        + exit
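
        For reference, the recovery sequence described above was roughly the following (a hypothetical reconstruction; the Keystone value is a placeholder to adapt to your deployment):

        # set the corrected Keystone URL, clean up the dead boot container, then re-run the install script
        echo "http://<keystone-ip>:5000" | sudo tee /opt/config/keystone_url.txt
        sudo docker system prune
        /opt/dcae2_install.sh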

        1. Additionally, I notice that the /opt/app/config/inputs.yaml and /opt/app/config/cdapinputs.yaml files have their value for "auth_url" set to 'http://$OPENSTACK_HOST_IP:5000/v3/v2.0'.

          Sounds like a bug to me.

           

            1. Please re-check the onap-parameters.yaml you provided initially. The Keystone URL should not include the API version; the version is always provided separately.

            Within the dcae bootstrap VM, values from /opt/config/ are used to populate the templates under /opt/app/config/*, so if you change a value there, it is expected that you also have to update the rendered template.
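
            For example (an illustrative sketch only, assuming the bootstrap VM layout described above), after correcting a value under /opt/config/ the already-rendered files would need the same fix:

            # find where the stale value still appears, then patch the rendered config files too
            grep -rl 'v3/v2.0' /opt/config /opt/app/config
            sudo sed -i 's|/v3/v2.0|/v2.0|g' /opt/app/config/inputs.yaml /opt/app/config/cdapinputs.yaml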

              1. In my onap-parameters.yaml (see my version at http://paste.openstack.org/show/682213/) the API version is not part of the Keystone URL. It seemed to me that the scripts that run after the Ubuntu DCAE Boot VM comes up make edits to /opt/config/keystone_url.txt, /opt/app/config/cdapinputs.yaml and /opt/app/config/inputs.yaml. I tried going over dcae2_install.sh and dcae2_vm_init.sh to see if they make those edits, but was not able to zero in on where the changes to those files are made.

  7. I'm having trouble adding a host to Rancher using the script.

    In the list of hosts in rancher, I see a host with status 'Error', description 'exit status 1'.

    How can I debug this error? Rancher doesn't seem to have any error message more detailed than "exit status 1". There is no VM in openstack, so I can't check there for logs. I don't know what has gone wrong, and I don't know where to find logs.

    1. Matthew,

         I just started bringing up OOM in openlab; in the past I just used AWS or Azure without DCAE. Now that I need DCAE, it's back to OpenStack.

         For my own simplicity I run co-located for now: my Rancher server (3G) and my pods (~55G) run on a single 64G VM. When I register a host I use the public IP (the 10.12.6.x one); running the generated client curl works with this IP. Since both the server and host are Docker containers, you can tail the logs:

      docker ps -a
      2d810bfebb7f        rancher/agent:v1.2.9                                                                                                                "/run.sh run"             5 hours ago         Up 5 hours                                                        rancher-agent
      b3bd27b591e3        rancher/server:v1.6.14                                                                                                              "/usr/bin/entry /u..."    5 hours ago         Up 5 hours                     3306/tcp, 0.0.0.0:8880->8080/tcp   rancher_server
      
      
      docker logs -f rancher-agent
      docker logs -f rancher_server

      There are also REST audit logs (I am new to these) at

      http://10.12.6.120:8880/admin/audit-logs

      if this helps

      /michael

      1. Hi Michael,

        I ran the command docker run -d -p 8080:8080 rancher/server:v1.6.10 to install Rancher.

        But I can see only one Docker container inside the host machine (bare metal/KVM). I followed the above steps for the installation.

        Thanks 

        Pranjal

    2. Hi Matthew,

      Are you able to figure out the solution?

      Thanks,

      Pranjal

      1. No, I can't figure it out. The logs didn't provide an answer.

        I think my installation of openstack was a bit broken. The error was not deterministic. Sometimes I would get a timeout error, sometimes this one.

        I'm trying again on a fresh version of Openstack. Hopefully that fixes the problem.

  8. Alexis,

      Hi, you may have noticed some traffic on your work and thanks for the mail.

      Just starting my deep dive into your work after being out of the loop since it started late Nov.

      Passing this high level by you for your diagram

    https://wiki.onap.org/display/DW/ONAP+on+Kubernetes+on+Rancher+in+OpenStack?preview=/19202062/22250073/DCAE_K8s_Designate.jpg#ONAPonKubernetesonRancherinOpenStack-Deployingthedcaegen2pod

       OOM orchestrates the DCAE vms (15) through the cloudify manager via the dcae controller on the OOM side

       Is the HEAT DCAE controller required? I am trying to see if the two work together, as shown in your diagram.

       Also, about the 64G cloudify VM on the HEAT side – I don't know enough yet about how the cloudify blueprint tasks are applied, but I thought the controller sends tasks to the cloudify VM. On your OOM tenant in openlab this xxlarge VM is not there – so do we not need it anymore? I was under the impression that once we containerize the cloudify manager, we could move everything left over in HEAT to Kubernetes, where we could make a cdap replicaset (and use the consul already in OOM).

       Another question – do we still need to run a partial HEAT deployment, in case we need the HEAT dcae bootstrap, as per your diagram?

      Thanks Alexis, no rush; just trying out the amsterdam branch and my WIP onap-parameters.yaml filled in from the template you expanded.

    ONAP on Kubernetes#AmsterdamOOM+DCAEHeatinstallation

       /michael

  9. Michael,

    The Cloudify VM (dcaeorcl00) is using an m1.medium flavor, not an xxlarge one, as you noticed. OOM only creates the dcae-bootstrap VM that will run the dcae-bootstrap container. The dcae-bootstrap container is then responsible for bringing up everything else: first it brings up cloudify-manager, then it uses Cloudify to create the other VMs.

    Yes, we still run a partial HEAT deployment until DCAE is fully containerized. My work was to get DCAEGEN2 deployed and working, not to enhance and containerize it, which is a different story.

    Let's meet, to clarify some points.

    1. Alexis,

         Clearer now - thank you. I was wondering what the orcl VM was. So, if I understand it, here is what we do differently now:

      HEAT-only: dcae-controller and cloudify manager from static heat template (along with non-dcae vms), then dcae-controller orchestrates cdap and other nodes

      OOM-only: the OOM-side dcae-controller orchestrates the HEAT-side dcae-bootstrap (which does a two-step bring-up: the orcl/cloudify VM, then the cdap VMs)


         I am pulling in recent changes to amsterdam for the ueb port fix - thanks for that

      https://gerrit.onap.org/r/#/c/30147/

         For the phase 2 containerization of DCAE - yes I understand that cloudify will be creating a docker image for their cloudify manager - following that work separately

      OOM-569

      OOM-565

      We also will have a 2nd call tomorrow on helping AAI with their cinder issues specific to native kubernetes PV support in openstack. 

      I will also look into using kubeadm on openlab

      OOM-591

      You, Alexis de Talhouët, were requested for assistance along with Borislav Glozman, Yury Novitsky, Gary Wu and Jerome Doucerain.

      I'll send out the call for 11:00 EST (GMT-5) on 20180206 shortly.

      /michael



         

  10. Alexis,

      Retesting after the OOM-654 merge - no longer getting the port conflict - the 2 dcaegen2 containers coming up - will know soon about the cloudify side

      Using your latest onap-parameters.yaml after the 20180123 refactor.


    ubuntu@onap-oom-obrien:~$ kubectl get pods --all-namespaces | grep dcae
    onap-dcaegen2         heat-bootstrap-4010086101-7352l               1/1       Running             0          18m
    onap-dcaegen2         nginx-1230103904-3rwfn                        1/1       Running             0          18m
    
    
    # stack starting
    ubuntu@onap-oom-obrien:~$ kubectl -n onap-dcaegen2 logs -f heat-bootstrap-4010086101-7352l
    + '[' 1 -ne 1 ']'
    + NAMESPACE=onap
    + MR_ZONE=onap-message-router
    + STACK_NAME=dcae
    + SIMPLEDEMO_ONAP_ORG_ZONE_NAME=simpledemo.onap.org.
    + SIMPLEDEMO_ONAP_ORG_ZONE_ID=
    + RANDOM_STRING=
    
    
    Checking my dns configs - getting an auth error but my urls look OK.
    checking my rc settings on the tenant
    ++ kubectl get services dcaegen2 -o 'jsonpath={.status.loadBalancer.ingress[0].ip}'
    + NODE_IP=10.12.6.124
    ++ openstack recordset list -c records --type=A -f yaml
    ++ head -n 1
    ++ awk ' { print $3 } '
    The request you have made requires authentication. (HTTP 401) (Request-ID: req-66e15c13-a9df-49bd-beaa-3c7265ee9d51)
    DCAE nginx host ip has changed, update DNS records...
    + CURRENT_NODE_IP=
    + '[' 10.12.6.124 '!=' '' ']'
    + refresh_dns_records
    + echo 'DCAE nginx host ip has changed, update DNS records...'
    ++ kubectl get services dcaegen2 -o 'jsonpath={.status.loadBalancer.ingress[0].ip}'
    + NODE_IP=10.12.6.124
    ++ openstack recordset list --type=A -c=id -f=yaml
    ++ awk ' { print $3 } '
    The request you have made requires authentication. (HTTP 401) (Request-ID: req-6bf061d0-03f2-4db2-bd06-ce9817a526c3)
    + SIMPLEDEMO_ONAP_ORG_RECORD_TYPE_A_IDS=
    + sleep 10
    
    
    verifying that everything is v3 or v2.0
    onap-parameters.yaml
    # don't use v3 here as v2.0 is a hardcoded append in the init script and will result in
    Failed to discover available identity versions when contacting http://10.12.25.5:5000/v3/v2.0. Attempting to parse version from URL.
    DNSAAS_KEYSTONE_URL: "http://10.12.25.5:5000"
    
    ubuntu@onap-oom-obrien:~$ kubectl -n onap-dcaegen2 logs -f heat-bootstrap-4010086101-60n9p
    Processing triggers for libc-bin (2.23-0ubuntu10) ...
    ++ curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt
    + curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.9.2/bin/linux/amd64/kubectl
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100 64.2M  100 64.2M    0     0  7988k      0  0:00:08  0:00:08 --:--:-- 8430k
    + chmod +x ./kubectl
    + mv ./kubectl /usr/local/bin/kubectl
    ++ kubectl get services dcaegen2 -o 'jsonpath={.status.loadBalancer.ingress[0].ip}'
    + NODE_IP=10.12.6.124
    + '[' v2.0 = v2.0 ']'
    + source /opt/heat/DCAE-openrc-v2.sh
    ++ export OS_AUTH_URL=http://10.12.25.5:5000/v2.0
    ++ OS_AUTH_URL=http://10.12.25.5:5000/v2.0
    ++ export OS_TENANT_ID=a85a07a....d67d802c9fc50a7
    ++ OS_TENANT_ID=a85a07a5f34d.....2c9fc50a7
    ++ export OS_TENANT_NAME=Logging
    ++ OS_TENANT_NAME=Logging
    ++ unset OS_PROJECT_ID
    ++ unset OS_PROJECT_NAME
    ++ unset OS_USER_DOMAIN_NAME
    ++ unset OS_INTERFACE
    ++ export OS_USERNAME=m...en
    ++ OS_USERNAME=mi...en
    ++ export OS_PASSWORD=W...j
    ++ OS_PASSWORD=WhqyYJTRjCLj
    ++ export OS_REGION_NAME=RegionOne
    ++ OS_REGION_NAME=RegionOne
    ++ '[' -z RegionOne ']'
    ++ export OS_ENDPOINT_TYPE=publicURL
    ++ OS_ENDPOINT_TYPE=publicURL
    ++ export OS_IDENTITY_API_VERSION=2
    ++ OS_IDENTITY_API_VERSION=2
    ++ openstack stack list -c 'Stack Name' -f yaml
    ++ awk '{ print $4}'
    The request you have made requires authentication. (HTTP 401) (Request-ID: req-3adb84b9-f9eb-48a3-941a-b65816459b87)
    + EXISTING_STACKS=
    + [[ '' =~ (^|[[:space:]])dcae($|[[:space:]]) ]]
    + openstack stack create -t /opt/heat/onap_dcae.yaml -e /opt/heat/onap_dcae.env dcae
    The request you have made requires authentication. (HTTP 401) (Request-ID: req-946ebe74-3bfd-4022-b805-fd27d28626dc)
    + sleep 10
    ++ openstack stack output show dcae dcae_floating_ip -c output_value -f yaml
    ++ awk '{ print $2}'
    The request you have made requires authentication. (HTTP 401) (Request-ID: req-77829ce6-f318-4e9d-968b-50e1164c5026)
    + DCAE_CONTROLLER_IP=
    + sed -i -e s/DCAE_CONTROLLER_IP_HERE//g /opt/robot/vm_properties.py
    ++ openstack stack output show dcae random_string -c output_value -f yaml
    ++ awk '{ print $2}'
    The request you have made requires authentication. (HTTP 401) (Request-ID: req-ff36bed5-132c-426f-9cbc-94487a1c0af1)
    + RANDOM_STRING=
    + SIMPLEDEMO_ONAP_ORG_ZONE_NAME=.simpledemo.onap.org.
    + '[' v2.0 = v2.0 ']'
    + source /opt/heat/DNS-openrc-v2.sh
    ++ export OS_AUTH_URL=http://10.12.25.5:5000/v2.0
    ++ OS_AUTH_URL=http://10.12.25.5:5000/v2.0
    ++ export OS_TENANT_ID=a85a07....78d67d802c9fc50a7
    ++ OS_TENANT_ID=a85a07a5f34d.....2c9fc50a7
    ++ export OS_TENANT_NAME=Logging
    ++ OS_TENANT_NAME=Logging
    ++ unset OS_PROJECT_ID
    ++ unset OS_PROJECT_NAME
    ++ unset OS_USER_DOMAIN_NAME
    ++ unset OS_INTERFACE
    ++ export OS_USERNAME=demo
    ++ OS_USERNAME=demo
    ++ export OS_PASSWORD=onapdemo
    ++ OS_PASSWORD=onapdemo
    ++ export OS_REGION_NAME=RegionOne
    ++ OS_REGION_NAME=RegionOne
    ++ '[' -z RegionOne ']'
    ++ export OS_ENDPOINT_TYPE=publicURL
    ++ OS_ENDPOINT_TYPE=publicURL
    ++ export OS_IDENTITY_API_VERSION=2
    ++ OS_IDENTITY_API_VERSION=2
    + configure_dns_designate
    ++ openstack zone list -f=yaml -c=name
    ++ awk ' { print$3 } '
    The request you have made requires authentication. (HTTP 401) (Request-ID: req-a8caf549-1224-43b8-992b-5d64241268e2)
    Zone .simpledemo.onap.org. doens't exist, creating ...
    + EXISTING_ZONES=
    + [[ '' =~ (^|[[:space:]]).simpledemo.onap.org.($|[[:space:]]) ]]
    + echo 'Zone .simpledemo.onap.org. doens'\''t exist, creating ...'
    ++ openstack zone create --email=oom@onap.org '--description=DNS zone bridging DCAE and OOM' --type=PRIMARY simpledemo.onap.org. -f=yaml -c id
    ++ awk '{ print $2} '
    The request you have made requires authentication. (HTTP 401) (Request-ID: req-ae0101dc-71e4-4963-afa1-6c1824745243)
    + SIMPLEDEMO_ONAP_ORG_ZONE_ID=
    + echo 'Create recordSet for .simpledemo.onap.org.'
    + openstack recordset create --type=A --ttl=10 --records=10.12.6.124 vm1.aai
    Create recordSet for .simpledemo.onap.org.
    The request you have made requires authentication. (HTTP 401) (Request-ID: req-0fc65552-168d-4411-9211-230ef74c57d4)
    + openstack recordset create --type=A --ttl=10 --records=10.12.6.124 vm1.sdc
    The request you have made requires authentication. (HTTP 401) (Request-ID: req-8828d2ee-a522-4183-b08c-5cc097b74840)
    + openstack recordset create --type=A --ttl=10 --records=10.12.6.124 vm1.mr
    The request you have made requires authentication. (HTTP 401) (Request-ID: req-096a5bc8-8969-4531-a05a-e77bbda22194)
    + openstack recordset create --type=A --ttl=10 --records=10.12.6.124 vm1.policy
    The request you have made requires authentication. (HTTP 401) (Request-ID: req-c60f4bb2-7bb7-4562-b20d-052055560aac)
    + openstack recordset create --type=A --ttl=10 --records=10.12.6.124 vm1.openo
    The request you have made requires authentication. (HTTP 401) (Request-ID: req-05ff2459-c3b8-400f-8cf8-a792bbcce998)
    Create CNAMEs for .simpledemo.onap.org.
    + echo 'Create CNAMEs for .simpledemo.onap.org.'
    + openstack recordset create --type=CNAME --ttl=86400 --records=vm1.aai..simpledemo.onap.org. c1.vm1.aai..simpledemo.onap.org.
    The request you have made requires authentication. (HTTP 401) (Request-ID: req-02073fe7-6b89-4dc0-9d34-78323c2adca4)
    + openstack recordset create --type=CNAME --ttl=86400 --records=vm1.aai..simpledemo.onap.org. c2.vm1.aai..simpledemo.onap.org.
    The request you have made requires authentication. (HTTP 401) (Request-ID: req-0c446d5a-b4c1-40ec-a503-ed602dabf6a1)
    + openstack recordset create --type=CNAME --ttl=86400 --records=vm1.aai..simpledemo.onap.org. c3.vm1.aai..simpledemo.onap.org.
    ....
    The request you have made requires authentication. (HTTP 401) (Request-ID: req-3da1b940-d35b-4fb5-960e-2b9e0dac658f)
    + openstack recordset create --type=CNAME --ttl=86400 --records=vm1.openo..simpledemo.onap.org. esr.api..simpledemo.onap.org.
    The request you have made requires authentication. (HTTP 401) (Request-ID: req-7b9f892c-8b62-44e7-b9fb-6b54a0c5575a)
    Monitor DCAE nginx host ip...
    + monitor_nginx_node_ip
    + echo 'Monitor DCAE nginx host ip...'
    + true
    ++ kubectl get services dcaegen2 -o 'jsonpath={.status.loadBalancer.ingress[0].ip}'
    + NODE_IP=10.12.6.124
    ++ openstack recordset list -c records --type=A -f yaml
    ++ head -n 1
    ++ awk ' { print $3 } '
    The request you have made requires authentication. (HTTP 401) (Request-ID: req-0870eddf-e0dd-4e02-b086-63eb95f51828)
    + CURRENT_NODE_IP=
    + '[' 10.12.6.124 '!=' '' ']'
    + refresh_dns_records
    + echo 'DCAE nginx host ip has changed, update DNS records...'
    DCAE nginx host ip has changed, update DNS records...
    ++ kubectl get services dcaegen2 -o 'jsonpath={.status.loadBalancer.ingress[0].ip}'
    + NODE_IP=10.12.6.124
    ++ openstack recordset list --type=A -c=id -f=yaml
    ++ awk ' { print $3 } '
    The request you have made requires authentication. (HTTP 401) (Request-ID: req-e777a804-de9f-48a0-b329-8fcbd90fd613)
    + SIMPLEDEMO_ONAP_ORG_RECORD_TYPE_A_IDS=
    + sleep 10
    + true
    ++ kubectl get services dcaegen2 -o 'jsonpath={.status.loadBalancer.ingress[0].ip}'
    + NODE_IP=10.12.6.124
    ++ openstack recordset list -c records --type=A -f yaml
    ++ head -n 1
    ++ awk ' { print $3 } '
    The request you have made requires authentication. (HTTP 401) (Request-ID: req-434588dd-d414-43f4-af24-84652ae3e2ce)
    + CURRENT_NODE_IP=
    + '[' 10.12.6.124 '!=' '' ']'
    + refresh_dns_records
    + echo 'DCAE nginx host ip has changed, update DNS records...'
    DCAE nginx host ip has changed, update DNS records...
    ++ kubectl get services dcaegen2 -o 'jsonpath={.status.loadBalancer.ingress[0].ip}'
    + NODE_IP=10.12.6.124
    ++ openstack recordset list --type=A -c=id -f=yaml
    ++ awk ' { print $3 } '
    The request you have made requires authentication. (HTTP 401) (Request-ID: req-bbfbafaa-0639-4e37-b133-6726c09f70bf)
    + SIMPLEDEMO_ONAP_ORG_RECORD_TYPE_A_IDS=
    + sleep 10
    
    ubuntu@onap-oom-obrien:/dockerdata-nfs/onap/dcaegen2/heat$ cat onap_dcae.env | grep keyst
      keystone_url: http://10.12.25.5:5000
      dnsaas_keystone_url: http://10.12.25.5:5000
      dcae_keystone_url: http://10.0.14.1/api/multicloud-titanium_cloud/v0/pod25_RegionOne/identity/v2.0
    
    
    Experimenting in the container
    ubuntu@onap-oom-obrien:/dockerdata-nfs/onap/dcaegen2/heat$ sudo vi DNS-openrc-v2.sh
    
    export OS_AUTH_URL=http://10.12.25.5:5000/v2.0
    #export OS_AUTH_URL=http://10.12.25.2:5000/v2.0
    export OS_TENANT_ID=a85a0.......802c9fc50a7
    export OS_TENANT_NAME=Logging
    export OS_USERNAME=demo
    export OS_PASSWORD=onapdemo
    export OS_REGION_NAME=RegionOne
    
    
    root@heat-bootstrap:/opt/heat# source DNS-openrc-v2.sh 
    root@heat-bootstrap:/opt/heat# openstack recordset list
    The request you have made requires authentication. (HTTP 401) (Request-ID: req-8d3619cb-d3e4-46d2-b923-6c0cd3df6598)
    ubuntu@onap-oom-obrien:~$ kubectl -n onap-dcaegen2 exec -it heat-bootstrap-4010086101-8cdwz bash
    root@heat-bootstrap:/# cd /opt/heat                                                                                                                                          
    root@heat-bootstrap:/opt/heat# source DCAE-openrc-v2.sh 
    root@heat-bootstrap:/opt/heat# openstack server list
    | 87569b68-cd4c-4a1f-9c6c-96ea7ce3d9b9 | onap-oom-obrien | ACTIVE | oam_onap_w37L=10.0.16.1, 10.12.6.124               | ubuntu-16-04-cloud-amd64 | m1.xxlarge |
    | d80f35ac-1257-47fc-828e-dddc3604d3c1 | oom-jenkins     | ACTIVE | appc-multicloud-integration=10.10.5.14, 10.12.6.49 |                          | v1.xlarge  |
    
    
    root@heat-bootstrap:/opt/heat# source DNS-openrc-v2.sh 
    root@heat-bootstrap:/opt/heat# openstack server list   
    The request you have made requires authentication. (HTTP 401) (Request-ID: req-82cfa5be-e351-49d0-bf87-18834c8affa0)
    
    
    The username/password for the pod25 Designate DNS-as-a-Service should be demo/onapdemo:
    ubuntu@onap-oom-obrien:/dockerdata-nfs/onap/dcaegen2/heat$ cat DNS-openrc-v2.sh 
    export OS_USERNAME="demo"
    export OS_PASSWORD="onapdemo"
    
    I am not using multicloud proxying, so the following URL would not resolve for me anyway (no instance); I am using the regular keystone URL, which likely won't recognize the demo/onapdemo credentials:
    http://10.0.14.1/api/multicloud-titanium_cloud/v0/pod25_RegionOne/identity/v2.0
    
    If I set the user/pass to my own tenant, the DNS rc file works for openstack commands; testing now to see whether this also lets the DNS record creation commands pass.
    
    
    Experiment 2
    If I use my own credentials for DNS, I get a missing-arguments error, as expected:
    ubuntu@onap-oom-obrien:~$ kubectl -n onap-dcaegen2 logs -f heat-bootstrap-4010086101-9vd97
    Zone simpledemo.onap.org. doens't exist, creating ...
    ++ openstack zone create --email=oom@onap.org '--description=DNS zone bridging DCAE and OOM' --type=PRIMARY simpledemo.onap.org. -f=yaml -c id
    ++ awk '{ print $2} '
    public endpoint for dns service in RegionOne region not found
    Create recordSet for simpledemo.onap.org.
    + SIMPLEDEMO_ONAP_ORG_ZONE_ID=
    + echo 'Create recordSet for simpledemo.onap.org.'
    + openstack recordset create --type=A --ttl=10 --records=10.12.6.124 vm1.aai
    usage: openstack recordset create [-h] [-f {json,shell,table,value,yaml}]
                                      [-c COLUMN] [--max-width <integer>]
                                      [--fit-width] [--print-empty] [--noindent]
                                      [--prefix PREFIX] --record RECORD --type
                                      TYPE [--ttl TTL] [--description DESCRIPTION]
                                      [--all-projects] [--edit-managed]
                                      [--sudo-project-id SUDO_PROJECT_ID]
                                      zone_id name
    openstack recordset create: error: too few arguments
    
    
    Q: could anyone send me the DNS-openrc-v2.sh file from the /dockerdata-nfs dir of a working Intel openlab environment so I can compare? I would specifically like to see the DNS keystone URL.
    thank you
    /michael

    DNSaaS references

    http://onap.readthedocs.io/en/latest/submodules/dcaegen2.git/docs/sections/installation_heat.html#heat-template-parameters

    Alexis, original fix to parameterize the hardcoded user/pass to designate

    https://lists.onap.org/pipermail/onap-discuss/2018-January/007549.html

    https://gerrit.onap.org/r/gitweb?p=demo.git;a=blob;f=boot/dcae2_vm_init.sh;h=b071dffd53f0a431bbdff1c1228edce8ecddef2d;hb=refs/heads/amsterdam

    163     local DNSAAS_USERNAME='demo'
    164     local DNSAAS_PASSWORD='onapdemo'


    Alexis,

     Also verifying: we don't need this anymore since the file has been merged into onap-parameters.yaml, right?

    in setenv.bash

    # dcaegen2 bootstrap configuration input yaml file.  Start from the sample, and set your environments real values:

    # example: export DCAEGEN2_CONFIG_INPUT_FILE_PATH=/tmp/dcae-parameters.yaml

    DCAEGEN2_CONFIG_INPUT_FILE_PATH=${DCAEGEN2_CONFIG_INPUT_FILE_PATH:-../dcaegen2/dcae-parameters-sample.yaml}


    20180207 update: session with Alexis - I was not using a separate tenant id for the 25.5 Designate OpenStack. A temporary workaround for the hardcoded heat stack name was also required.

    DCAE stack coming up now

    https://lists.onap.org/pipermail/onap-discuss/2018-February/008011.html

    OOM-673 - Getting issue details... STATUS

    status

    https://lists.onap.org/pipermail/onap-discuss/2018-February/008047.html

    1. Issue resolved - it was a Designate zone collision - see OOM-673

      delete the zones or use a distinct heat stack name

      thanks Alexis

  11. Rahul Sharma: Hi, I tried the alternate steps on "ONAP on Kubernetes on Rancher in OpenStack" but I am getting an issue in step 4, 'Create the Kubernetes host on OpenStack'.

    When I execute the curl command, the host appears in the Kubernetes environment but it says 'waiting for ssh to be available' and fails after 60 retries.

    I have opened all ports and I am able to ssh to the openstack VM manually.

    K8S_FLAVOR and PRIVATE_NETWORK_NAME are available on my OpenStack instance. I have also enabled ssh in the OpenStack security group, besides opening all ports on the OpenStack host VM.


    My command with output

    curl -u "6AA23C57847D4CEC21A4:ptoixTZKipyCxZb713eTpMcsk75BCXh2DD4tiiQP" \
    > -X POST \
    > -H 'Accept: application/json' \
    > -H 'Content-Type: application/json' \
    > -d '{
    > "hostname":"onap12",
    > "openstackConfig":{
    > "authUrl":"http://40.71.3.251:5000/v3",
    > "domainName":"Default",
    > "endpointType":"adminURL",
    > "flavorName":"m1.tiny",
    > "imageName":"Ubuntu_16_01_iso",
    > "netName":"onap_int_nw",
    > "password":"redhat",
    > "sshUser":"root",
    > "tenantName":"admin",
    > "username":"admin"}
    > }' \
    > 'http://13.92.196.37:8080/v2-beta/projects/1a7/hosts/'
    {"id":"1h1","type":"host","links":{"self":"http:\/\/13.92.196.37:8080\/v2-beta\/projects\/1a7\/hosts\/1h1","account":"http:\/\/13.92.196.37:8080\/v2-beta\/projects\/1a7\/hosts\/1h1\/account","clusters":"http:\/\/13.92.196.37:8080\/v2-beta\/projects\/1a7\/hosts\/1h1\/clusters","containerEvents":"http:\/\/13.92.196.37:8080\/v2-beta\/projects\/1a7\/hosts\/1h1\/containerevents","healthcheckInstanceHostMaps":"http:\/\/13.92.196.37:8080\/v2-beta\/projects\/1a7\/hosts\/1h1\/healthcheckinstancehostmaps","hostLabels":"http:\/\/13.92.196.37:8080\/v2-beta\/projects\/1a7\/hosts\/1h1\/hostlabels","hosts":"http:\/\/13.92.196.37:8080\/v2-beta\/projects\/1a7\/hosts\/1h1\/hosts","instances":"http:\/\/13.92.196.37:8080\/v2-beta\/projects\/1a7\/hosts\/1h1\/instances","ipAddresses":"http:\/\/13.92.196.37:8080\/v2-beta\/projects\/1a7\/hosts\/1h1\/ipaddresses","serviceEvents":"http:\/\/13.92.196.37:8080\/v2-beta\/projects\/1a7\/hosts\/1h1\/serviceevents","storagePools":"http:\/\/13.92.196.37:8080\/v2-beta\/projects\/1a7\/hosts\/1h1\/storagepools","volumes":"http:\/\/13.92.196.37:8080\/v2-beta\/projects\/1a7\/hosts\/1h1\/volumes"},"actions":{},"baseType":"host","name":null,"state":"registering","accountId":"1a7","agentIpAddress":null,"agentState":null,"amazonec2Config":null,"authCertificateAuthority":null,"authKey":null,"azureConfig":null,"computeTotal":1000000,"created":"2018-02-07T17:50:54Z","createdTS":1518025854000,"description":null,"digitaloceanConfig":null,"dockerVersion":null,"driver":null,"engineEnv":null,"engineInsecureRegistry":null,"engineInstallUrl":null,"engineLabel":null,"engineOpt":null,"engineRegistryMirror":null,"engineStorageDriver":null,"hostTemplateId":null,"hostname":"onap12","info":null,"instanceIds":null,"kind":"host","labels":null,"localStorageMb":null,"memory":null,"milliCpu":null,"openstackConfig":{"type":"openstackConfig","activeTimeout":"200","authUrl":"http:\/\/40.71.3.251:5000\/v3","availabilityZone":"","cacert":"","domainId":"","domainName":"Default","endpointType":"adminURL","flavorId":"","flavorName":"m1.tiny","floatingipPool":"","imageId":"","imageName":"Ubuntu_16_01_iso","ipVersion":"4","keypairName":"","netId":"","netName":"onap_int_nw","password":"redhat","privateKeyFile":"","region":"","secGroups":"","sshPort":"22","sshUser":"root","tenantId":"","tenantName":"admin","userDataFile":"","username":"admin"},"packetConfig":null,"physicalHostId":null,"publicEndpoints":null,"removed":null,"stackId":null,"transitioning":"yes","transitioningMessage":"In Progress","transitioningProgress":null,"uuid":"4c30d241-8d15-4dc1-aab8-cee3eb603a21"}root@onap:~


    1. Try "sshUser": "ubuntu"

      (or whatever non-root user you have)

      Most Linux distros prevent you from SSHing in as root directly.

      1. Matthew Davis I am able to ssh directly with root and with another user openstack

        ssh root@<ipaddress>  directly logs in

        ssh openstack@<ipaddress> -i ~/.ssh/id_rsa logs in with passphrase for id_rsa

        but when I try these users in my curl command it hangs at 'waiting for ssh'

        My openstack is Ocata on RHEL 7.4 VM, it is installed using packstack installer

        1. Hmm, unfortunately I'm not very familiar with Openstack.

          Try adding the host through the Rancher GUI instead of the command line.

          If that doesn't work, try again but instead of the password field, link to the sshKey. (from the GUI you can see which fields are for passwords and which are for keys)

  12. For all the people struggling to add a host using the REST API, there is an alternative:

    You can create the host manually, then provision it with docker, and finally run the docker command that Rancher will provide you when clicking on “Add host” under the infrastructure tab / hosts. See "Copy, paste, and run the command below to register the host with Rancher:” (example: https://www.google.ca/search?q=Copy,+paste,+and+run+the+command+below+to+register+the+host+with+Rancher:&client=firefox-b-ab&dcr=0&source=lnms&tbm=isch&sa=X&ved=0ahUKEwjhzrGs9ZjZAhVr_IMKHUV7B5sQ_AUIDCgD&biw=1417&bih=921#imgrc=gTc1VIl6VGdofM:)

    You can find the proper version of Docker to install here (for Rancher 1.6): http://rancher.com/docs/rancher/v1.6/en/hosts/#supported-docker-versions
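
    For reference, the registration command Rancher generates looks roughly like the sketch below; the agent version and the registration token are environment-specific, so copy the exact command from your own "Add Host" page rather than this placeholder:

        sudo docker run --rm --privileged \
          -v /var/run/docker.sock:/var/run/docker.sock \
          -v /var/lib/rancher:/var/lib/rancher \
          rancher/agent:<version-shown-in-your-UI> http://<rancher-vm-ip>:8880/v1/scripts/<REGISTRATION_TOKEN>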

    1. That's what I did eventually.

      Is the end result exactly the same, or is there some functionality you lose? (e.g. deleting the host from Rancher)

      1. Functionality is exactly the same. It's just that you're responsible for installing docker, so you need to pick the right version.

      2. Are you able to install ONAP ?

        The difference I feel while installing is as follows:

        case 1: manually create a VM (hosting rancher, k8s and oom) on top of KVM - custom adding the host

        case 2: create a VM using the openstack cloud with flavour 8/64/100 - tried custom adding the host

        case 1 is working fine with custom adding of the host.

        case 2 is not working because of 2 problems:

        • the Rancher UI is not coming up; it shows a connection reset error.
        • if I host one VM (Rancher) through KVM and create a host on an openstack cloud VM, not all of the containers come up - specifically the kubernetes server containers.

        If I go with case 1 I can see all onap pods running on openstack finally, so there is no need to further integrate with the openstack env.

        Does anyone have any idea about this?


        Thanks,

        Pranjal


    2. Hi Alex,

      I have created a server host machine. I am able to install the Rancher docker container on it and I can see the Rancher UI.

      But when I install Rancher on an OpenStack VM hosted on that server machine, I am not able to open the UI.


      The installed versions of docker and rancher are the same in both cases.

      Error:
      root@onap-rancher:~# curl -k http://127.0.0.1:8080
      curl: (56) Recv failure: Connection reset by peer

      The Rancher docker container is up and running:
      CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
      49be83a3c673 rancher/server:v1.6.10 "/usr/bin/entry /usr…" 56 seconds ago Up 20 seconds 3306/tcp, 0.0.0.0:8080->8080/tcp rancher-server

      Please let me know any other info you want from my end.

      Thanks
      Pranjal

      1. pranjal sharma: What setup are you trying to achieve, Pranjal?

        I have created two VMs, one with onap oom and one with openstack.

        k8s, rancher and docker are on the vm where onap oom is installed. rancher is not needed on the VM with openstack.

        1. hi syed,
          we have 2 server machines: one server VM is hosting the openstack controller node, and on the other server I have created an instance VM in the openstack cloud environment (since the 2 VMs have 2 compute node configurations), but on the 2nd VM (cloud VM) I am not able to install rancher properly.

          1. I have installed rancher only on the VM that has onap, to manage the k8s pods. I don't need rancher on my second vm that has openstack.

            If you need rancher on the openstack VM, please verify that the port on which you are trying to open the rancher UI is open. I had to open port 8880 on my VM; until then the rancher UI was not opening even though the install was successful.

            1. My observation so far is as follows:

              The difference I feel while installing is as follows:

              case 1: manually create a VM (hosting rancher, k8s and oom) on top of KVM - custom adding the host

              case 2: create a VM using the openstack cloud with flavour 8/64/100 - tried custom adding the host

              case 1 is working fine with custom adding of the host.

              case 2 is not working because of 2 problems:

              • the Rancher UI is not coming up; it shows a connection reset error.
              • if I host one VM (Rancher) through KVM and create a host on an openstack cloud VM, not all of the containers come up - specifically the kubernetes server containers.

              If I go with case 1 I can see all onap pods running on openstack finally, so there is no need to further integrate with the openstack env.

              Does anyone have any idea about this?


              Thanks,

              Pranjal

    3. Hi Alexis de Talhouët

      I am installing the onap-portal pods on the kubernetes host machine.

      I am confronted with an error for the vnc-portal pod of the onap-portal component, and I believe this pod (vnc-portal) needs to be in a running state to open the portal web page; please correct me if I am wrong.


      The output of the describe command is as follows:

      root@ONAP-OOM:~/oom/kubernetes/oneclick# kubectl describe pod vnc-portal-1252894321-dts2j -n onap-portal


      Normal Pulling 9m (x3 over 30m) kubelet, onap-oom pulling image "oomk8s/readiness-check:1.0.0"
      Normal Started 9m (x3 over 29m) kubelet, onap-oom Started container
      Warning FailedSync 9m (x3 over 32m) kubelet, onap-oom Error syncing pod
      Normal Pulled 9m (x3 over 29m) kubelet, onap-oom Successfully pulled image "oomk8s/readiness-check:1.0.0"
      Normal Created 9m (x3 over 29m) kubelet, onap-oom Created container


      root@ONAP-OOM:~/oom/kubernetes/oneclick# kubectl get pods --all-namespaces -o wide

      onap-portal   vnc-portal-1252894321-dts2j            0/1       Init:1/5   4          56m       10.42.49.34     onap-oom

      It had restarted 4 times and failed while doing the sync of the pod.

      Does anyone have any idea about this?


      Thank you

      Pranjal 


  13. Alexis thank you for your support.

    All of DCAE is up via OOM (including the 7 CDAP nodes) 

    Issue was: each tenant hits its floating IP allocation limit after about 2.5 DCAE installs - we run out of IPs because they are not deleted.
    Fix: delete all unassociated IPs before bringing up OOM/DCAE. We cannot mix cloudify blueprint orchestration with manual openstack deletion - once in a blueprint, we need to remove everything orchestrated on top of HEAT using the cloudify manager - or do as the integration team does and clean the tenant before a deployment.
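
    A minimal cleanup sketch, assuming the openstack CLI is sourced against the tenant (entries with an empty "Fixed IP Address" are the unassociated ones):

        openstack floating ip list                      # note the IDs with no Fixed IP Address
        openstack floating ip delete <floating-ip-id>   # repeat for each unassociated IP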


    after deleting all floating IPs and rerunning the OOM deployment
    Time: 35 min from heat side dcae-boot install - 55 min total from one-click OOM install

    obrienbiometrics:lab_logging michaelobrien$ ssh ubuntu@10.12.6.124
    Last login: Fri Feb 9 16:50:48 2018 from 10.12.25.197
    ubuntu@onap-oom-obrien:~$ kubectl -n onap-dcaegen2 exec -it heat-bootstrap-4010086101-fd5p2 bash
    root@heat-bootstrap:/# cd /opt/heat
    root@heat-bootstrap:/opt/heat# source DCAE-openrc-v3.sh
    root@heat-bootstrap:/opt/heat# openstack server list
    +--------------------------------------+---------------------+--------+----------------------------------------------------+--------------------------+------------+
    | ID | Name | Status | Networks | Image | Flavor |
    +--------------------------------------+---------------------+--------+----------------------------------------------------+--------------------------+------------+
    | 29990fcb-881f-457c-a386-aa32691d3beb | dcaepgvm00 | ACTIVE | oam_onap_3QKg=10.99.0.13, 10.12.6.144 | ubuntu-16-04-cloud-amd64 | m1.medium |
    | 7b4b63f3-c436-41a8-96dd-665baa94a698 | dcaecdap01 | ACTIVE | oam_onap_3QKg=10.99.0.19, 10.12.5.219 | ubuntu-16-04-cloud-amd64 | m1.large |
    | f4e6c499-8938-4e04-ab78-f0e753fe3cbb | dcaecdap00 | ACTIVE | oam_onap_3QKg=10.99.0.9, 10.12.6.69 | ubuntu-16-04-cloud-amd64 | m1.large |
    | 60ccff1f-e7c3-4ab4-b749-96aef7ee0b8c | dcaecdap04 | ACTIVE | oam_onap_3QKg=10.99.0.16, 10.12.5.106 | ubuntu-16-04-cloud-amd64 | m1.large |
    | df56d059-dc91-4122-a8de-d59ea14c5062 | dcaecdap05 | ACTIVE | oam_onap_3QKg=10.99.0.15, 10.12.6.131 | ubuntu-16-04-cloud-amd64 | m1.large |
    | 648ea7d3-c92f-4cd8-870f-31cb80eb7057 | dcaecdap02 | ACTIVE | oam_onap_3QKg=10.99.0.20, 10.12.6.128 | ubuntu-16-04-cloud-amd64 | m1.large |
    | c13fb83f-1011-44bb-bc6c-36627845a468 | dcaecdap06 | ACTIVE | oam_onap_3QKg=10.99.0.18, 10.12.6.134 | ubuntu-16-04-cloud-amd64 | m1.large |
    | 5ed7b172-1203-45a3-91e1-c97447ef201e | dcaecdap03 | ACTIVE | oam_onap_3QKg=10.99.0.6, 10.12.6.123 | ubuntu-16-04-cloud-amd64 | m1.large |
    | 80ada3ca-745e-42db-b67c-cdd83140e68e | dcaedoks00 | ACTIVE | oam_onap_3QKg=10.99.0.12, 10.12.6.173 | ubuntu-16-04-cloud-amd64 | m1.medium |
    | 5e9ef7af-abb3-4311-ae96-a2d27713f4c5 | dcaedokp00 | ACTIVE | oam_onap_3QKg=10.99.0.17, 10.12.6.168 | ubuntu-16-04-cloud-amd64 | m1.medium |
    | d84bbb08-f496-4762-8399-0aef2bb773c2 | dcaecnsl00 | ACTIVE | oam_onap_3QKg=10.99.0.7, 10.12.6.184 | ubuntu-16-04-cloud-amd64 | m1.medium |
    | 53f41bfc-9512-4a0f-b431-4461cd42839e | dcaecnsl01 | ACTIVE | oam_onap_3QKg=10.99.0.11, 10.12.6.188 | ubuntu-16-04-cloud-amd64 | m1.medium |
    | b6177cb2-5920-40b8-8f14-0c41b73b9f1b | dcaecnsl02 | ACTIVE | oam_onap_3QKg=10.99.0.4, 10.12.6.178 | ubuntu-16-04-cloud-amd64 | m1.medium |
    | 5e6fd14b-e75b-41f2-ad61-b690834df458 | dcaeorcl00 | ACTIVE | oam_onap_3QKg=10.99.0.8, 10.12.6.185 | CentOS-7 | m1.medium |
    | 5217dabb-abd7-4e57-972a-86efdd5252f5 | dcae-dcae-bootstrap | ACTIVE | oam_onap_3QKg=10.99.0.3, 10.12.6.183 | ubuntu-16-04-cloud-amd64 | m1.small |
    | 87569b68-cd4c-4a1f-9c6c-96ea7ce3d9b9 | onap-oom-obrien | ACTIVE | oam_onap_w37L=10.0.16.1, 10.12.6.124 | ubuntu-16-04-cloud-amd64 | m1.xxlarge |
    | d80f35ac-1257-47fc-828e-dddc3604d3c1 | oom-jenkins | ACTIVE | appc-multicloud-integration=10.10.5.14, 10.12.6.49 | | v1.xlarge |
    +--------------------------------------+---------------------+--------+----------------------------------------------------+--------------------------+------------+
    root@heat-bootstrap:/opt/heat#


  14. Hi Michael and Alexis,


      I have deployed the dcae pod. The heat-bootstrap pod throws some errors:

    Could not find requested endpoint in Service Catalog.
    Zone .simpledemo.onap.org. doens't exist, creating ...
    + EXISTING_ZONES=
    + [[ '' =~ (^|[[:space:]]).simpledemo.onap.org.($|[[:space:]]) ]]
    + echo 'Zone .simpledemo.onap.org. doens'\''t exist, creating ...'
    ++ openstack zone create --email=oom@onap.org '--description=DNS zone bridging DCAE and OOM' --type=PRIMARY .simpledemo.onap.org. -f=yaml -c id
    ++ awk '{ print $2} '
    Could not find requested endpoint in Service Catalog.


    Could you please share the values you used for the DCAE section in onap-parameters.yaml?

    Following are the ones I have used:

    Thanks

    Vijaya

    IS_SAME_OPENSTACK_AS_VNF: "true"


    DCAE_OS_PUBLIC_NET_ID: "1cb65443-e72f-4eab-8bbb-f979b8259c92"
    # The name of the public network.
    DCAE_OS_PUBLIC_NET_NAME: "external_network"
    # This is the private network that will be used by DCAE VMs. The network will be created during the DCAE bootstrap process,
    # and the subnet created will use this CIDR.
    #DCAE_OS_OAM_NETWORK_CIDR: "10.99.0.0/27"
    DCAE_OS_OAM_NETWORK_CIDR: "192.168.10.0/24"
    # This will be the private ip of the DCAE bootstrap VM. This VM is responsible for spinning up the whole DCAE stack (14 VMs total)
    DCAE_IP_ADDR: "192.168.10.40"

    # The flavors' name to be used by DCAE VMs
    DCAE_OS_FLAVOR_SMALL: "m1.small"
    DCAE_OS_FLAVOR_MEDIUM: "m1.medium"
    DCAE_OS_FLAVOR_LARGE: "m1.large"
    # The images' name to be used by DCAE VMs
    DCAE_OS_UBUNTU_14_IMAGE: "ubuntu-14.04-server-cloudimg"
    DCAE_OS_UBUNTU_16_IMAGE: "ubuntu-16.04-server-cloudimg"
    DCAE_OS_CENTOS_7_IMAGE: "centos7-cloudimg"

    DNS_IP : "8.8.8.8"
    DNS_FORWARDER: "8.8.8.8"

    # Public DNS - not used but required by the DCAE bootstrap container
    EXTERNAL_DNS: "8.8.8.8"

    # DNS domain for the DCAE VMs
    DCAE_DOMAIN: "dcaeg2.onap.org"

    DNSAAS_PROXY_ENABLE: "false"

    DCAE_PROXIED_KEYSTONE_URL: ""

    DNSAAS_API_VERSION_HERE: "v2.0"
    DNSAAS_API_VERSION: "v2.0"
    DNSAAS_REGION: "RegionOne"
    DNSAAS_KEYSTONE_URL: "http://172.16.20.10:5000"
    DNSAAS_TENANT_ID: "b522e7abc1784e938314b978db96433e"
    DNSAAS_TENANT_NAME: "onap"
    DNSAAS_USERNAME: "onap"
    DNSAAS_PASSWORD: "onap123"



    1. "Could not find requested endpoint in Service Catalog" → This is something with your OpenStack.

      1. Thanks Alexis.

        I have an all-in-one openstack setup and no DNS designate/forwarder. What should be the values of the following parameters in my onap-parameters.yaml:

        DNS_IP :

        DNS_FORWARDER:

        DNSAAS_PROXY_ENABLE: "false" 

         DCAE_PROXIED_KEYSTONE_URL: ""

        Could you please share the onap-parameters.yaml you have used to deploy dcaegen2?

        Thanks

        Vijaya

        1. Hi Vijayalakshmi,

          You need to install Designate (DNSaaS) on OpenStack (https://docs.openstack.org/designate/latest/install/index.html), and configure Designate's backend (e.g. bind9) to forward external DNS queries to an external DNS (e.g. 8.8.8.8).

          and then, check your config params:

          DNS_IP : "8.8.8.8" ← Here the IP of Openstack's Designate endpoint
          DNS_FORWARDER: "8.8.8.8" ← Here the IP of Openstack's Designate endpoint


          Cheers

          David

          1. Thanks David. Will install Designate and check the DCAE module.

            -Vijaya

        2. If you want to deploy DCAE, you must have DNS Designate support. For information about the params, please see http://onap.readthedocs.io/en/latest/submodules/dcaegen2.git/docs/sections/installation_heat.html#heat-template-parameters

          DNS_IP = IP address of the DNS Designate backend server

          DNS_FORWARDER = put the IP of the DNS Designate backend if you don't know what to put, assuming your DNS backend is set to forward DNS requests to an external DNS.

          DNSAAS_PROXY_ENABLE = whether or not DNS Designate support lives in the same instance as the one used to deploy DCAE. If not, say true.

           DCAE_PROXIED_KEYSTONE_URL = to be provided only if DNSAAS_PROXY_ENABLE is set to true.
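
          Putting those together, a sketch of the DNSAAS block in onap-parameters.yaml for the simple, non-proxied case (all values are placeholders for your own environment):

              DNSAAS_PROXY_ENABLE: "false"
              DCAE_PROXIED_KEYSTONE_URL: ""            # only needed when DNSAAS_PROXY_ENABLE is "true"
              DNSAAS_API_VERSION: "v2.0"
              DNSAAS_REGION: "RegionOne"
              DNSAAS_KEYSTONE_URL: "http://<designate-openstack-keystone-ip>:5000"
              DNSAAS_TENANT_NAME: "<tenant>"
              DNSAAS_USERNAME: "<user>"
              DNSAAS_PASSWORD: "<password>"
              DNS_IP: "<designate-backend-ip>"
              DNS_FORWARDER: "<designate-backend-ip>"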

          1. Thanks Alexis. Will install the designate in the openstack.

            -Vijaya

    2. Vijayalakshmi,

       I have also started attempting to install ONAP.

      What version of openstack have you installed? Ocata or Pike? Are you using devstack? if Yes, do you mind sharing your local.conf file. I would like to understand what settings you are using in local.conf file. 

      Thanks

      -Vish

      1. Hi Vishwanath,

        I have Mitaka and I have a OpenStack setup(not devstack).
        It is all-in-one configuration.
        Thanks
        Vijaya

      2. Hi Vishwanth,

            I usually use the steps below to install OpenStack on my baremetal server using Packstack. If you follow them, there is a high possibility that you will get a reliable OpenStack env.

        1. ############ OS install on Baremetal server
        Install centos-7 on a baremetal server and make sure you do the following during the installation process:
        - Setup a Hostname
        - Configure the Static IP/Gateway/Netmask/DNS server for the external facing interface eno1
        - Configure Static IP/Netmask for internal facing interface eno2 for private vXLAN network between compute nodes
        - Setup the Installation Disk so that you allocate only 50-100GB for /root and all the remaining to / partition
        - Setup root password and an additional user

        2. ######## Download and install the updates and the latest Openstack and related utilities
        sudo yum -y update
        sudo yum -y upgrade
        sudo yum install -y centos-release-openstack-pike
        sudo yum install -y openstack-packstack
        sudo yum install -y openstack-utils dnsmasq-utils

        3. ####### Generate a default answer file
        sudo packstack --gen-answer-file=~/pike-default-answers.cfg

        4. ###### Disable all the services that interfere with Openstack
        sudo systemctl disable firewalld
        sudo systemctl stop firewalld
        sudo systemctl disable NetworkManager
        sudo systemctl stop NetworkManager
        sudo systemctl enable network
        sudo systemctl start network

        5. ####### Update the default answer file with your env specific settings
        [root@ONAP-Kub-Rancher-Openstack ~]# diff pike-default-answers.cfg pike-os4onap-answers.cfg
        11c11
        < CONFIG_HEAT_INSTALL=n
        > CONFIG_HEAT_INSTALL=y
        64c64
        < CONFIG_MAGNUM_INSTALL=n
        > CONFIG_MAGNUM_INSTALL=y
        79c79
        < CONFIG_NTP_SERVERS=
        > CONFIG_NTP_SERVERS=168.127.133.13
        90c90
        < CONFIG_DEBUG_MODE=n
        > CONFIG_DEBUG_MODE=y
        559c559
        < CONFIG_CINDER_VOLUMES_SIZE=20G
        > CONFIG_CINDER_VOLUMES_SIZE=350G
        705c705
        < CONFIG_NEUTRON_OVS_BRIDGE_IFACES=
        > CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-ex:eno1
        889c889
        < CONFIG_NEUTRON_OVS_BRIDGES_COMPUTE=
        > CONFIG_NEUTRON_OVS_BRIDGES_COMPUTE=br-ex
        902c902
        < CONFIG_NEUTRON_OVS_TUNNEL_IF=
        > CONFIG_NEUTRON_OVS_TUNNEL_IF=eno2
        1194c1194
        < CONFIG_PROVISION_DEMO=y
        > CONFIG_PROVISION_DEMO=n

        6. ####### Increase the User Level FD Limits
        vi /etc/security/limits.conf
        #Set root user soft and hard limits as follows:
        root soft nofile 10240
        root hard nofile 20480

        7. ######## To increase mariadb openfile limit
        If you run any openstack command and if it does not respond OR if the install is hanging then
        check in /var/log/mariadb/mariadb.log and if you see any errors like
        [ERROR] Error in accept: Too many open files
        [ERROR] Error in accept: Bad file descriptor
        Then you need to increase the open files limits by editing:
        - vi /usr/lib/systemd/system/mariadb.service
        # Add this line to the “[Service]” section.
        LimitNOFILE=infinity
        - Reload systemctl daemon and restart MariaDB
        systemctl daemon-reload
        /sbin/service mariadb restart

        8. ####### run the packstack with updated answer file
        sudo packstack --answer-file=pike-os4onap-answers.cfg --debug
        NOTE: To look at the progress
        Do ls -ltr on /var/tmp/packstack and find the latest dir that got created and change into manifests dir.
        cd /var/tmp/packstack/dab5008b9df14d2391228fd141e3f30c/manifests
        tail the running file. tail -f 167.254.211.15_controller.pp.running

        1. Ravi,

          Appreciate your response.

          I have 2 physical machines, one on which I install ONAP following the instructions at ONAP on Kubernetes on Rancher

          On the second physical machine, I install Ubuntu 16.04, followed by Ocata Devstack installation. The heat-bootstrap container on the onap-dcaegen2 pod in the first physical machine launches heat stack on the devstack in the second physical machine. A single Ubuntu 16.04 VM named "dcae-dcae-bootstrap" VM gets created. I was hoping that more DCAE VMs would get created as part of the stack run, not sure why the other VMs are not getting created. Do you have any idea on how to troubleshoot this further?

          Thanks

          1. The dcae-bootstrap VM will be provisioned with a dcae-bootstrap docker container. This container is responsible for spinning up the 14 other DCAE VMs.

            If none are being created, I suggest you go in the VM and look at the container logs.

  15. FYI: One thing to keep in mind when DCAE VMs are getting installed on OpenStack: you may want to ensure that the quota for the project (admin, demo) allows for enough instances to be created. The vCPU and RAM quotas may need to be updated as well. See the screen shot that displays the default quotas for the "admin" project. The number of instances is set to 10 by default, whereas at least 15 DCAE VMs get spawned. The way to check quotas from the GUI is to log in as the admin user and navigate to Identity → Projects → Manage Members for the project whose quota you are interested in checking.
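
    A quick way to check and raise those limits from the CLI, as a sketch (run as the cloud admin; the numbers are only illustrative):

        openstack quota show <project-name>
        openstack quota set --instances 20 --cores 64 --ram 204800 --floating-ips 20 <project-name>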

    1. One additional note to keep in mind when deploying DCAE on OpenStack: If the DCAE Host machine has less than 900 GB hard disk space, you may want to set the disk_allocation_ratio to a value greater than 1.0 in /etc/nova/nova.conf and restart all the nova services.

      I ran into an issue with all the DCAE VMs not coming up(only 8 came up) since the host machine had only 500 GB hard disk. I set the disk_allocation_ratio  = 32.0 in the /etc/nova/nova.conf,  restarted all the nova services and restarted the dcae2_install.sh from the Ubuntu VM and all the DCAE VMs came up and were running.
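
      A sketch of that change (the config section and nova service names can vary by distro and installer):

          # /etc/nova/nova.conf
          [DEFAULT]
          disk_allocation_ratio = 32.0

          # then restart the nova services, e.g. on a packstack/CentOS host:
          sudo systemctl restart openstack-nova-compute openstack-nova-scheduler openstack-nova-conductor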

      1. Vishwanath, these are good points - going through them now - thank you

        I had everything up - no issues just before the router deletion on the intel lab - but do run into periodic issues.


  16. Michael or Other DCAE experts,

     As part of installing Amsterdam ONAP+DCAE, I noticed that only one DCAE VM gets created on the openstack instance. I looked at the heat template (refer http://paste.openstack.org/show/680128/) generated via OpenStack and I see a reference to only one "type: OS::Nova::Server", which explains why only one VM is created. How do the rest of the DCAE VMs get created?

    Thanks

    -Vish

    1. The dcae-bootstrap VM will be provisioned with a dcae-bootstrap docker container. This container is responsible for spinning up the 14 other DCAE VMs.

      If none are being created, I suggest you go in the VM and look at the container logs.

      1. Alexis,

        Appreciate the response.

        When I log into the single Ubuntu 16.04 VM created by the heat stack and type the command "docker", I get the below message which indicates that docker itself has not been installed.

        "ubuntu@dcae-dcae-bootstrap:~$ docker

        The program 'docker' is currently not installed. You can install it by typing:
        sudo apt install docker.io

        "

        1. ah, this means the cloud-init failed at some point. Please look at the cloud-init logs, under /tmp/dcae...

          1. There is no /tmp/dcae* in the dcae-boot VM.


            1. vishwanath jayaraman, can you check the cloud-init-output.log? If the /tmp/dcae folder was not created, then probably dcae2_install.sh didn't run properly.

              1. Thanks for your response and helping me out with this in the onap IRC channel. Find the contents of the /var/log/cloud-init-output.log at http://paste.openstack.org/show/680408/.

                I am able to ping 8.8.8.8 from the VM, but it looks like DNS is not configured on the boot VM like you mentioned on the onap IRC channel, hence nexus is not getting resolved.

                1. vishwanath jayaraman, as I can see "curl: (6) Could not resolve host: nexus.onap.org", the VM couldn't resolve DNS. You can follow the below steps:

                  1)cd oom/kubernetes/oneclick; 

                  2)./deleteAll.bash -n onap -a dcaegen2

                  3) vim /dockerdata-nfs/onap/dcaegen2/heat/onap_dcae.yaml and then add the following line before "curl -k __nexus_repo__/org.onap.demo/boot/__artifacts_version__/dcae2_install.sh -o /opt/dcae2_install.sh"

                  echo "nameserver 8.8.8.8" >> /etc/resolv.conf

                  4)cd oom/kubernetes/oneclick; ./createAll.bash -n onap -a dcaegen2


                  1. Thanks, will try the above steps

                  2. If you are doing it for the first time, then you have to edit the file oom/kubernetes/config/docker/init/src/config/dcaegen2/heat/onap_dcae.yaml . 


                    I think it is better to add a generic solution for this by taking an external_dns parameter and providing it in the yaml file. I will talk to the OOM team and send a patch for this. 

                  3. TBH, this is a hack, as /etc/resolv.conf is getting configured by the cloud-init.

                    It should point to your DNS Designate backend, and you should have your DNS Designate backend configured to forward DNS requests to an external DNS.

                2. what's the content of your resolv.conf?

                  It should point to your DNS Designate backend, and you should have your DNS Designate backend configured to forward DNS requests to an external DNS.

                  1. Alexis, 

                    Below is the content of resolv.conf after implementing Bharath's suggestion.

                    What might I have not set properly in the onap-parameters.yaml, refer my onap-parameters.yaml at http://paste.openstack.org/show/682213/

                    ubuntu@dcae-dcae-bootstrap:~$ cat /etc/resolv.conf 

                    # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)

                    #     DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN

                    nameserver 8.8.8.8 (This got added later per Bharath's instructions and was not there)

                    nameserver $HOST_IP (IP of the OpenStack Host)

                    search openstacklocal

                    1. Assuming you have

                      nameserver $HOST_IP (IP of the OpenStack Host)

                      in your /etc/resolv.conf, then your VM should send DNS requests to OpenStack DNS Designate. Either Designate is able to resolve based on the zones it has, or it cannot and should forward the request to an external DNS. My guess is your DNS Designate backend is not forwarding the request, which is why you had to add 8.8.8.8 in your resolv.conf.
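
                      A quick way to confirm what the backend actually does, as a sketch run from the DCAE VM ($HOST_IP being the Designate/OpenStack host as above; the zone prefix is deployment-specific):

                          dig @$HOST_IP www.google.com +short                                 # only answers if the backend forwards to an external DNS
                          dig @$HOST_IP vm1.sdc.<random-prefix>.simpledemo.onap.org +short   # should be served from the Designate zone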

                      1. Alexis,

                        Appreciate the response.

                        What should I verify on the OpenStack host that has Designate installed to confirm that its configured properly?


                        1. Let's say your DNS Designate backend is bind9; then you'll look at /var/named/named.conf.

                          Basically, DNS Designate is backed by DNS software. Make sure that it's configured to forward. For example: https://gerrit.onap.org/r/#/c/28347/5/kubernetes/config/docker/init/src/config/openstack/designate-backend-bind9/named.conf

                          1. Alexis,

                            Appreciate the prompt response.

                            I have pdns4 as the backend driver, I will look the equivalent file for that.

                          2. Alexis, 

                            I have uploaded the contents of /etc/bind/named.conf && named.conf.options && named.conf.default-zones at http://paste.openstack.org/show/682710/. Would be great if you can review and see if there is anything obviously wrong.

                            On the second DCAE VM which is CentOS based, I see the below as the contents, Does it look right?

                            [centos@dcaeorcl00 ~]$ cat /etc/resolv.conf 

                            ; Created by cloud-init on instance boot automatically, do not edit.

                            ;

                            ; generated by /usr/sbin/dhclient-script

                            search apjF.dcaeg2.onap.org

                            nameserver $HOST_IP


                            However, the CentOS VM is not able to ping www.google.com but can ping 8.8.8.8.

                            Additional Info:

                            ============

                            the output of cat /etc/resolv.conf in the host where OpenStack/Devstack is installed is 

                            nameserver 10.0.80.11

                            nameserver 10.0.80.12


                            1. Issue is now resolved. I set the forwarders and also set the value of recursion to yes in the /etc/bind/named.conf.options file (refer http://paste.openstack.org/show/682921/)

                              1. Good. I hadn't seen that before my comment below.

                            2. it does not look right; you're missing the forwarding config, which is why the google domain can't be resolved. Add the following in your named.conf.options:

                                      forwarders {
                                          8.8.8.8;
                                      };
                                      allow-query { 0.0.0.0/0; };
                                      recursion yes;

                              as shown here: https://gerrit.onap.org/r/#/c/28347/5/kubernetes/config/docker/init/src/config/openstack/designate-backend-bind9/named.conf

                              1. Alexis,

                                 What does allow-query { 0.0.0.0/0; } do?

    2. Can you make sure you have Designate installed as part of the openstack that is configured to host the DCAE VMs?

      1. Ravi Rao,

         Yes designate services are running as part of the devstack services.

        However, when I execute the command "openstack zone list", I see the below output where the status shows up as ERROR; I'm not sure what that means:

        +--------------------------------------+---------------------------+---------+------------+--------+--------+
        | id | name | type | serial | status | action |
        +--------------------------------------+---------------------------+---------+------------+--------+--------+
        | ffdd13e2-b2e8-4488-8dcd-f218bd468f79 | lzOH.simpledemo.onap.org. | PRIMARY | 1519165170 | ERROR | UPDATE |
        | 44da630a-bc92-48d6-a7ba-10efd761df4a | p55J.simpledemo.onap.org. | PRIMARY | 1519173881 | ERROR | UPDATE |
        +--------------------------------------+---------------------------+---------+------------+--------+--------+

        1. check that the /var/named folder has the right permissions
          # chmod g+w /var/named

          Check /var/log/designate/worker.log

          1. I do not see a folder "/var/named" or "/var/log/designate/worker.log" on the machine where I have installed ocata devstack. Where should  I be verifying these permissions?

            1. It depends on your DNS backend, I was assuming CentOS+Bind9.
              /var/named is the folder where zone files are stored whenever a zone is created in Designate.
              Regarding devstack, I am not sure in which folder the Designate logs are stored. /var/log/stack ?

              1. I have installed devstack on Ubuntu 16.04. I see the below logs related to designate in the /opt/stack/logs

                designate-agent.log

                designate-agent.log.2018-02-20-001729

                designate-api.log

                designate-api.log.2018-02-20-001729

                designate-central.log

                designate-central.log.2018-02-20-001729

                designate-mdns.log

                designate-mdns.log.2018-02-20-001729

                designate-pool-manager.log

                designate-pool-manager.log.2018-02-20-001729

                designate-sink.log

                designate-sink.log.2018-02-20-001729

                designate-zone-manager.log

                designate-zone-manager.log.2018-02-20-001729

                1. Can you check and see if you can create a zone manually in your openstack env

                  rndc -s 127.0.0.1 -p 953 -k /etc/designate/rndc.key addzone "Kgkl.dcaeg2.onap.org" '{ type slave; masters { 127.0.0.1 port 5354;}; file "slave.Kgkl.dcaeg2.onap.org.b7362b36-cf12-4cee-bb39-3fd941793ab7"; };'



                  1. Appreciate your response. I had a situation where the OpenStack Designate was set to use pdns3 as the driver but pdns4 was configured (got this info from the openstack IRC). I have moved past this particular issue where the status of the zones showed up as ERROR. The fix was to specify

                    DESIGNATE_BACKEND_DRIVER=pdns4 in the local.conf file.

  17. Hi there,

    is the robot healthcheck expected to work with DCAE? I deployed all 17 VMs (rancher+k8s+DCAE) using the integration team script (https://github.com/onap/integration/tree/master/deployment/heat/onap-oom). All seems ok, no errors in the logs or cloudify manager UI.

    Running ./ete-k8s.sh health, results in:

    Basic DCAE Health Check                                               [ WARN ] Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f79b5f5e750>: Failed to establish a new connection: [Errno -2] Name or service not known',)': /healthcheck
    [ WARN ] Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f79b5f5e5d0>: Failed to establish a new connection: [Errno -2] Name or service not known',)': /healthcheck
    [ WARN ] Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f79b9dffe50>: Failed to establish a new connection: [Errno -2] Name or service not known',)': /healthcheck
    | FAIL |
    ConnectionError: HTTPConnectionPool(host='null', port=8080): Max retries exceeded with url: /healthcheck (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f79b60bf550>: Failed to establish a new connection: [Errno -2] Name or service not known',)

    Running the healthcheck after adding the dcae-bootstrap VM IP to /dockerdata-nfs/onap/robot/eteshare/config/vm_properties.py

    "GLOBAL_INJECTED_DCAE_IP_ADDR" : "10.30.30.104",

    results in:

    Basic DCAE Health Check                                               | FAIL |
    [  | cdap |  |  |  |  | 0ab9d97e465348a8856f43f6b254b501_cdap_app_cdap_app_tca | cdap_broker | 
    config_binding_service | deployment_handler | inventory | platform_dockerhost | 
    | 03cc692710b047f89258e860bb5d8f3a_dcae-analytics-holmes-engine-management
    | 2608310fce364334ac233a33501b7ba9_dcaegen2-collectors-ves |
    67bb47d5ae8941a48c8283a722d3442c_dcae-analytics-holmes-rule-management | component_dockerhost | 
    | cloudify_manager ] does not contain match for pattern 'service-change-handler'.

    Any hint on what step could be missing?


    Cheers!
    David

    1. Hi, the healthcheck is failing because the DCAE service-change-handler service is not running, hence not registered in the DCAE Consul. To debug, I suggest you log into the dcaedokp00 VM and look at the docker log for that container. Usually, when it's failing it's because it can't connect to dmaap. If that's the case, please make sure port 3904 is open and accessible from any of your K8S hosts. Are you running the latest Amsterdam? A few fixes came in to address issues around this.

      1. Thanks a lot, Alexis. The policy-handler and servicechange-handler docker containers were down in the dcaedokp00 VM. I just did a docker start on those containers and the robot healthcheck reports PASS on DCAE now. I am running the latest Amsterdam.
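
        For anyone hitting the same symptom, the recovery amounts to something like the sketch below on the dcaedokp00 VM (container names are as reported in this thread and may differ slightly in your deployment):

            docker ps -a | grep -i handler             # confirm which handler containers have exited
            docker logs service-change-handler        # look for dmaap/SDC connection errors
            docker start policy-handler service-change-handler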

      2. Hi Alexis,

            I am seeing that  service-change-handler container exited in dcaedokp00 VM. Logs indicate the below error.

        18-03-19 18:45:33 55c5c0f1bee0 INFO [sch.core:177] - Setup logging: Rolling appender DEFAULT
        18-03-19 18:45:34 55c5c0f1bee0 INFO [org.openecomp.sdc.impl.DistributionClientImpl:229] - DistributionClient - init
        18-03-19 18:45:34 55c5c0f1bee0 DEBUG [org.openecomp.sdc.impl.DistributionClientImpl:322] - get cluster server list from ASDC
        18-03-19 18:45:34 55c5c0f1bee0 DEBUG [org.openecomp.sdc.http.AsdcConnectorClient:142] - about to perform getServerList. requestId= c2c20e68-92b0-4ef1-8fde-898fea3e6cee url= /sdc/v1/distributionUebCluster
        18-03-19 18:45:34 55c5c0f1bee0 DEBUG [org.openecomp.sdc.http.HttpAsdcClient:267] - url to send https://vm1.sdc.chuz.simpledemo.onap.org:8443/sdc/v1/distributionUebCluster
        18-03-19 18:45:34 55c5c0f1bee0 ERROR [org.openecomp.sdc.http.HttpAsdcClient:289] - failed to connect to url: /sdc/v1/distributionUebCluster
        sch.core.main
        ...
        sch.core/-main core.clj: 181
        sch.core/-main core.clj: 196
        sch.core/run-distribution-client! core.clj: 163
        org.openecomp.sdc.impl.DistributionClientImpl.init DistributionClientImpl.java: 241
        org.openecomp.sdc.impl.DistributionClientImpl.initUebServerList DistributionClientImpl.java: 325
        org.openecomp.sdc.http.AsdcConnectorClient.getServerList AsdcConnectorClient.java: 94
        org.openecomp.sdc.http.AsdcConnectorClient.performAsdcServerRequest AsdcConnectorClient.java: 143
        org.openecomp.sdc.http.HttpAsdcClient.getRequest HttpAsdcClient.java: 278
        org.apache.http.impl.client.CloseableHttpClient.execute CloseableHttpClient.java: 107
        org.apache.http.impl.client.CloseableHttpClient.execute CloseableHttpClient.java: 82
        org.apache.http.impl.client.InternalHttpClient.doExecute InternalHttpClient.java: 184
        org.apache.http.impl.execchain.RedirectExec.execute RedirectExec.java: 110
        org.apache.http.impl.execchain.RetryExec.execute RetryExec.java: 88
        org.apache.http.impl.execchain.ProtocolExec.execute ProtocolExec.java: 184
        org.apache.http.impl.execchain.MainClientExec.execute MainClientExec.java: 236
        org.apache.http.impl.execchain.MainClientExec.establishRoute MainClientExec.java: 380
        org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect PoolingHttpClientConnectionManager.java: 353
        org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect DefaultHttpClientConnectionOperator.java: 111
        org.apache.http.impl.conn.SystemDefaultDnsResolver.resolve SystemDefaultDnsResolver.java: 45
        java.net.InetAddress.getAllByName InetAddress.java: 1126
        java.net.InetAddress.getAllByName InetAddress.java: 1192
        java.net.InetAddress.getAllByName0 InetAddress.java: 1276
        java.net.InetAddress.getAddressesFromNameService InetAddress.java: 1323
        java.net.InetAddress$2.lookupAllHostAddr InetAddress.java: 928
        java.net.Inet6AddressImpl.lookupAllHostAddr Inet6AddressImpl.java
        java.net.UnknownHostException: vm1.sdc.chuz.simpledemo.onap.org: Name or service not known

        18-03-19 18:45:34 55c5c0f1bee0 ERROR [org.openecomp.sdc.http.AsdcConnectorClient:332] - status from ASDC is org.openecomp.sdc.http.HttpAsdcResponse@7b4fbedb
        18-03-19 18:45:34 55c5c0f1bee0 ERROR [org.openecomp.sdc.http.AsdcConnectorClient:333] - DistributionClientResultImpl [responseStatus=ASDC_CONNECTION_FAILED, responseMessage=ASDC server problem]
        18-03-19 18:45:34 55c5c0f1bee0 DEBUG [org.openecomp.sdc.http.AsdcConnectorClient:336] - error from ASDC is: failed to connect
        18-03-19 18:45:34 55c5c0f1bee0 ERROR [sch.core:167] - DistributionClientResultImpl [responseStatus=ASDC_CONNECTION_FAILED, responseMessage=ASDC server problem]
        18-03-19 18:45:34 55c5c0f1bee0 INFO [sch.core:201] - Done

        Can you please let me know how to resolve this issue?

        Also, can you please let me know how I can make sure that port 3904 is open and accessible from any of the K8S hosts?

        I see the OpenStack security group that is used by all VMs has all the ports opened up. Is there anything else I need to do for 3904?

        Regards,

        Ravi

  18. If we have to add another openstack instance in our existing setup, do we need to redeploy onap?

    I have one onap and one openstack instance running. I will modify onap-parameters.yaml to add the second region running on the second openstack; after this, do I need to redeploy onap? 

    1. Hi, to register a new VIM, please follow this page: How-To: Register a VIM/Cloud Instance to ONAP

  19. Alexis/Michael, Bharath and other ONAP experts,

    I finally got 9 of the 15 VMs running with 7 of them being assigned floating IPs.

    I also see the following errors when the cloudify manager is running, which I suspect is the issue; any pointers on how to debug this further:

    2018-02-23T09:37:07 CFY <DockerPlatform> [docker_host_48d4c] Creating node
    2018-02-23T09:37:07 CFY <DockerPlatform> [docker_host_48d4c.create] Sending task 'dockerplugin.select_docker_host'
    2018-02-23T09:37:07 CFY <DockerPlatform> [docker_host_48d4c.create] Task started 'dockerplugin.select_docker_host'
    2018-02-23T09:37:07 CFY <DockerPlatform> [docker_host_48d4c.create] Task succeeded 'dockerplugin.select_docker_host'
    2018-02-23T09:37:08 CFY <DockerPlatform> [docker_host_48d4c] Configuring node
    2018-02-23T09:37:08 CFY <DockerPlatform> [docker_host_48d4c] Starting node
    2018-02-23T09:37:09 CFY <DockerPlatform> [registrator_d0365] Creating node
    2018-02-23T09:37:09 CFY <DockerPlatform> [registrator_d0365->docker_host_48d4c|preconfigure] Sending task 'relationshipplugin.forward_destination_info'
    2018-02-23T09:37:09 CFY <DockerPlatform> [registrator_d0365->docker_host_48d4c|preconfigure] Task started 'relationshipplugin.forward_destination_info'
    2018-02-23T09:37:10 CFY <DockerPlatform> [registrator_d0365->docker_host_48d4c|preconfigure] Task succeeded 'relationshipplugin.forward_destination_info'
    2018-02-23T09:37:10 CFY <DockerPlatform> [registrator_d0365] Configuring node
    2018-02-23T09:37:10 CFY <DockerPlatform> [registrator_d0365] Starting node
    2018-02-23T09:37:10 CFY <DockerPlatform> [registrator_d0365.start] Sending task 'dockerplugin.create_and_start_container'
    2018-02-23T09:37:10 CFY <DockerPlatform> [registrator_d0365.start] Task started 'dockerplugin.create_and_start_container'
    2018-02-23T09:37:11 CFY <DockerPlatform> [registrator_d0365.start] Task failed 'dockerplugin.create_and_start_container' -> Failed to find: platform_dockerhost
    Traceback (most recent call last):
      File "/tmp/pip-build-LCZo8_/cloudify-plugins-common/cloudify/dispatch.py", line 596, in main
      File "/tmp/pip-build-LCZo8_/cloudify-plugins-common/cloudify/dispatch.py", line 366, in handle
      File "/opt/mgmtworker/env/plugins/dockerplugin-2.4.0/lib/python2.7/site-packages/dockerplugin/decorators.py", line 53, in wrapper
        raise RecoverableError(e)
    RecoverableError: Failed to find: platform_dockerhost


    1. Additional note: the script from https://gerrit.onap.org/r/#/c/32019/2/install/rancher/oom_rancher_setup.sh with the "-b amsterdam" option was used to set up the ONAP components.

  20. When installing ONAP using OOM on Ubuntu 16.04, is it a hard requirement that it be run while logged in as the "root" user, or would any user added to the sudoers group work as well?

    1. You can be the ubuntu user - the only difference is that you need to sudo several parts of the install, and make sure you run the ubuntu user addition below after installing Docker.

      FYI, the fully automated Rancher install for amsterdam or master in the script under review assumes root, so that I don't have to log out and back in after running:

      sudo usermod -aG docker ubuntu

      https://gerrit.onap.org/r/#/c/32019

      OOM-715 - Getting issue details... STATUS


  21. Was wondering if there is a GUI/portal for DCAE similar to the ONAP Portal GUI? If yes, how do I access the DCAE GUI?

    1. Not sure about a single DCAE portal, but here are some GUIs you can access:

      Consul: http://<dcaecnsl00_IP>:8500/ui/

      Cloudify: http://<dcaeorcl00_IP>:8600/

      CDAP: http://<dcaecdap02_IP>:8700/cdap/ns/cdap_tca_hi_lo


  22. Load balancer IP missing when launching the DCAE stack. Has anyone been successful in deploying DCAEGEN2 with OOM on multiple Kubernetes instances (1 master and 5 slave nodes)?

    Entrypoint.sh attempts to get the node IP using the command below, but it fails because the load balancer ingress IP is null (a possible workaround is sketched after the service JSON below):

     NODE_IP=`kubectl get services dcaegen2 -o jsonpath='{.status.loadBalancer.ingress[0].ip}'`



    [root@kubernetes-master-ecosystem-20348-03-08-18 ~]# kubectl get service dcaegen2 -n onap-dcaegen2 -o json
    {
      "apiVersion": "v1",
      "kind": "Service",
      "metadata": {
        "creationTimestamp": "2018-03-13T06:34:15Z",
        "labels": {
          "app": "nginx"
        },
        "name": "dcaegen2",
        "namespace": "onap-dcaegen2",
        "resourceVersion": "935981",
        "selfLink": "/api/v1/namespaces/onap-dcaegen2/services/dcaegen2",
        "uid": "8ca36348-2688-11e8-9a35-fa163ee2f307"
      },
      "spec": {
        "clusterIP": "10.104.231.121",
        "externalIPs": [
          "10.247.204.12",
          "10.247.204.5",
          "10.247.204.10",
          "10.247.204.8",
          "10.247.204.3",
          "10.247.204.11"
        ],
        "externalTrafficPolicy": "Local",
        "healthCheckNodePort": 32548,
        "ports": [
          {
            "name": "aai-service",
            "nodePort": 30600,
            "port": 8443,
            "protocol": "TCP",
            "targetPort": 8443
          },
          {
            "name": "dmaap",
            "nodePort": 30601,
            "port": 3904,
            "protocol": "TCP",
            "targetPort": 3904
          },
          {
            "name": "sdc-be",
            "nodePort": 30602,
            "port": 8443,
            "protocol": "TCP",
            "targetPort": 8443
          },
          {
            "name": "pdp",
            "nodePort": 30603,
            "port": 8081,
            "protocol": "TCP",
            "targetPort": 8081
          },
          {
            "name": "msbapigw",
            "nodePort": 30604,
            "port": 80,
            "protocol": "TCP",
            "targetPort": 80
          },
          {
            "name": "multicloud-tinanium",
            "nodePort": 30605,
            "port": 9005,
            "protocol": "TCP",
            "targetPort": 9005
          }
        ],
        "selector": {
          "app": "nginx"
        },
        "sessionAffinity": "None",
        "type": "LoadBalancer"
      },
      "status": {
        "loadBalancer": {}
      }
    }
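    Since status.loadBalancer stays empty on a plain OpenStack/bare-metal cluster, one possible workaround (a sketch only, not the official entrypoint.sh logic) is to read one of the externalIPs instead:

        NODE_IP=`kubectl get services dcaegen2 -n onap-dcaegen2 -o jsonpath='{.spec.externalIPs[0]}'`
        echo ${NODE_IP}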

  23. Hello,

    I am a complete beginner struggling to understand the entire installation process. I need to set up ONAP on a server. The server is running CentOS 6.8. Do I need to install OpenStack on it to set up a cloud environment? Is there anyone who can help me with a complete start-from-scratch installation?

  24. Hi Alexis,

    I wanted to know the recommended way of pulling the Docker images for the Amsterdam version. On this wiki page I did not find a reference to the prepull_docker.sh script which was used in the earlier 1.1.0 version. Is it advisable to run this script before creating the pods with createAll.sh, or is it not required? Launching the pods is taking a lot of time; I could only see 75 of them in the Running state and the rest did not proceed further. How many images in total need to be pulled? In my setup, "docker images" shows some 113 images pulled. Also, in the values.yaml files in each oom/kubernetes component directory, pullPolicy: Always is set. Can we change this setting to stop pulling the images each time I create the pods (see the sketch after this comment)? Does it need to be changed in all the components?

    Kindly advise me.

    Thanks,

    Vidhu
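    Regarding the pullPolicy question above: a sketch of what such a change could look like, assuming the component charts keep the pullPolicy key directly in their values.yaml as in Amsterdam (key placement may differ per component, so check each file first):

        # switch from Always to IfNotPresent so already-pulled images are reused
        sed -i 's/pullPolicy: Always/pullPolicy: IfNotPresent/' oom/kubernetes/<component>/values.yaml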

  25. Hi Brian and Alexis,


      Once my dcae-bootstrap VM is created, as part of init, there is the following docker pull.

    docker pull nexus3.onap.org:10001/onap/org.onap.dcaegen2.deployments.bootstrap:v1.1.1

    However, the pull always ends in an "unexpected EOF" error. I have tried many times and hit the same error.

    Is there any problem with the repository?

    $ sudo docker pull nexus3.onap.org:10001/onap/org.onap.dcaegen2.deployments.bootstrap:v1.1.1 v1.1.1: Pulling from onap/org.onap.dcaegen2.deployments.bootstrap

    50aff78429b1: Downloading [=============>                                     ]  11.29MB/42.74MB
    f6d82e297bce: Download complete 
    275abb2c8a6f: Download complete 
    9f15a39356d6: Download complete 
    fc0342a94c89: Download complete 
    fd97135e26f3: Downloading [==================================================>]  163.3MB/163.3MB 65cb4d0362f2: Download complete 
    ca9b192fa64b: Download complete 
    cde868d37e2f: Download complete 
    e0c7601b44dd: Download complete 
    a7efac1be55b: Download complete 
    unexpected EOF
    :~$ 

    Thanks

    Vijaya 

  26. Hello Guys,

    I am getting an error during spin-up of the boot VM.


    Currently I'm getting an error at this step (inside the install script in the boot docker container):

     cfy bootstrap --install-plugins -p bootstrap-blueprint.yaml -i bootstrap-inputs.yaml

    Here is the error log. I'd appreciate it if anyone has a solution for this.


    2018-03-28 07:02:31 LOG <manager> [elasticsearch_45a04.create] INFO: Deploying blueprint resource components/elasticsearch/scripts/rotate_es_indices to /etc/cron.daily/rotate_es_indices
    2018-03-28 07:02:32 LOG <manager> [elasticsearch_45a04.create] INFO: chowning /etc/cron.daily/rotate_es_indices by root:root...
    2018-03-28 07:02:32 LOG <manager> [elasticsearch_45a04.create] INFO: Enabling systemd service elasticsearch...
    2018-03-28 07:02:32 LOG <manager> [elasticsearch_45a04.create] INFO: Waiting for 192.168.0.5:9200 to become available...
    2018-03-28 07:02:33 LOG <manager> [elasticsearch_45a04.create] INFO: 192.168.0.5:9200 is not available yet, retrying... (1/24)
    2018-03-28 07:02:35 LOG <manager> [elasticsearch_45a04.create] INFO: 192.168.0.5:9200 is not available yet, retrying... (2/24)
    2018-03-28 07:02:37 LOG <manager> [elasticsearch_45a04.create] INFO: 192.168.0.5:9200 is not available yet, retrying... (3/24)
    2018-03-28 07:02:39 LOG <manager> [elasticsearch_45a04.create] INFO: 192.168.0.5:9200 is not available yet, retrying... (4/24)
    2018-03-28 07:02:41 LOG <manager> [elasticsearch_45a04.create] INFO: 192.168.0.5:9200 is not available yet, retrying... (5/24)
    2018-03-28 07:02:43 LOG <manager> [elasticsearch_45a04.create] INFO: 192.168.0.5:9200 is not available yet, retrying... (6/24)
    2018-03-28 07:02:45 LOG <manager> [elasticsearch_45a04.create] INFO: 192.168.0.5:9200 is not available yet, retrying... (7/24)
    2018-03-28 07:02:47 LOG <manager> [elasticsearch_45a04.create] INFO: 192.168.0.5:9200 is not available yet, retrying... (8/24)
    2018-03-28 07:02:50 LOG <manager> [elasticsearch_45a04.create] INFO: 192.168.0.5:9200 is open!
    2018-03-28 07:02:50 LOG <manager> [elasticsearch_45a04.create] INFO: Deleting `cloudify_storage` index if exists...
    2018-03-28 07:02:50 LOG <manager> [elasticsearch_45a04.create] INFO: Failed to DELETE http://192.168.0.5:9200/cloudify_storage/ (reason: Not Found)
    2018-03-28 07:02:50 LOG <manager> [elasticsearch_45a04.create] INFO: Creating `cloudify_storage` index...
    2018-03-28 07:02:50 LOG <manager> [elasticsearch_45a04.create] INFO: Declaring blueprint mapping...
    2018-03-28 07:02:51 LOG <manager> [elasticsearch_45a04.create] INFO: Declaring deployment mapping...
    2018-03-28 07:02:51 LOG <manager> [elasticsearch_45a04.create] INFO: Declaring execution mapping...
    2018-03-28 07:02:51 LOG <manager> [elasticsearch_45a04.create] INFO: Declaring node mapping...
    2018-03-28 07:02:51 LOG <manager> [elasticsearch_45a04.create] INFO: Declaring node instance mapping...
    2018-03-28 07:02:51 LOG <manager> [elasticsearch_45a04.create] INFO: Declaring deployment modification mapping...
    2018-03-28 07:02:51 LOG <manager> [elasticsearch_45a04.create] INFO: Declaring deployment update mapping...
    2018-03-28 07:02:51 LOG <manager> [elasticsearch_45a04.create] INFO: Waiting for shards to be active...
    2018-03-28 07:02:52 CFY <manager> [elasticsearch_45a04.create] Task succeeded 'fabric_plugin.tasks.run_script'
    2018-03-28 07:02:52 CFY <manager> [amqp_influx_010b6] Configuring node
    2018-03-28 07:02:52 CFY <manager> [elasticsearch_45a04] Configuring node
    2018-03-28 07:02:52 CFY <manager> [amqp_influx_010b6] Starting node
    2018-03-28 07:02:53 CFY <manager> [amqp_influx_010b6.start] Sending task 'fabric_plugin.tasks.run_script'
    2018-03-28 07:02:53 CFY <manager> [amqp_influx_010b6.start] Task started 'fabric_plugin.tasks.run_script'
    2018-03-28 07:02:53 LOG <manager> [amqp_influx_010b6.start] INFO: Preparing fabric environment...
    2018-03-28 07:02:53 LOG <manager> [amqp_influx_010b6.start] INFO: Environment prepared successfully
    2018-03-28 07:02:53 LOG <manager> [amqp_influx_010b6.start] INFO: Starting AMQP-Influx Broker Service...
    2018-03-28 07:03:23 LOG <manager> [amqp_influx_010b6.start] INFO: Starting systemd service cloudify-amqpinflux...
    Traceback (most recent call last):
    File "/usr/lib/python2.7/wsgiref/handlers.py", line 86, in run
    self.finish_response()
    File "/usr/lib/python2.7/wsgiref/handlers.py", line 128, in finish_response
    self.write(data)
    File "/usr/lib/python2.7/wsgiref/handlers.py", line 212, in write
    self.send_headers()
    File "/usr/lib/python2.7/wsgiref/handlers.py", line 270, in send_headers
    self.send_preamble()
    File "/usr/lib/python2.7/wsgiref/handlers.py", line 194, in send_preamble
    'Date: %s\r\n' % format_date_time(time.time())
    File "/usr/lib/python2.7/socket.py", line 328, in write
    self.flush()
    File "/usr/lib/python2.7/socket.py", line 307, in flush
    self._sock.sendall(view[write_offset:write_offset+buffer_size])
    error: [Errno 32] Broken pipe
    [172.16.1.62] out: Traceback (most recent call last):
    [172.16.1.62] out: File "/tmp/cloudify-ctx/ctx", line 139, in <module>
    [172.16.1.62] out: main()
    [172.16.1.62] out: File "/tmp/cloudify-ctx/ctx", line 128, in main
    [172.16.1.62] out: args.timeout)
    [172.16.1.62] out: File "/tmp/cloudify-ctx/ctx", line 84, in client_req
    [172.16.1.62] out: response = request_method(socket_url, request, timeout)
    [172.16.1.62] out: File "/tmp/cloudify-ctx/ctx", line 65, in http_client_req
    [172.16.1.62] out: timeout=timeout)
    [172.16.1.62] out: File "/usr/lib64/python2.7/urllib2.py", line 154, in urlopen
    [172.16.1.62] out: return opener.open(url, data, timeout)
    [172.16.1.62] out: File "/usr/lib64/python2.7/urllib2.py", line 431, in open
    [172.16.1.62] out: response = self._open(req, data)
    [172.16.1.62] out: File "/usr/lib64/python2.7/urllib2.py", line 449, in _open
    [172.16.1.62] out: '_open', req)
    [172.16.1.62] out: File "/usr/lib64/python2.7/urllib2.py", line 409, in _call_chain
    [172.16.1.62] out: result = func(*args)
    [172.16.1.62] out: File "/usr/lib64/python2.7/urllib2.py", line 1244, in http_open
    [172.16.1.62] out: return self.do_open(httplib.HTTPConnection, req)
    [172.16.1.62] out: File "/usr/lib64/python2.7/urllib2.py", line 1217, in do_open
    [172.16.1.62] out: r = h.getresponse(buffering=True)
    [172.16.1.62] out: File "/usr/lib64/python2.7/httplib.py", line 1089, in getresponse
    [172.16.1.62] out: response.begin()
    [172.16.1.62] out: File "/usr/lib64/python2.7/httplib.py", line 444, in begin
    [172.16.1.62] out: version, status, reason = self._read_status()
    [172.16.1.62] out: File "/usr/lib64/python2.7/httplib.py", line 400, in _read_status
    [172.16.1.62] out: line = self.fp.readline(_MAXLINE + 1)
    [172.16.1.62] out: File "/usr/lib64/python2.7/socket.py", line 476, in readline
    [172.16.1.62] out: data = self._sock.recv(self._rbufsize)
    [172.16.1.62] out: socket.timeout: timed out
    [172.16.1.62] out: Traceback (most recent call last):
    [172.16.1.62] out: File "/tmp/cloudify-ctx/scripts/tmpTwdgO3-start.py-76RC18XP", line 16, in <module>
    [172.16.1.62] out: utils.start_service(AMQPINFLUX_SERVICE_NAME)
    [172.16.1.62] out: File "/tmp/cloudify-ctx/scripts/utils.py", line 1099, in start_service
    [172.16.1.62] out: systemd.start(service_name, append_prefix=append_prefix)
    [172.16.1.62] out: File "/tmp/cloudify-ctx/scripts/utils.py", line 498, in start
    [172.16.1.62] out: .format(full_service_name))
    [172.16.1.62] out: File "/tmp/cloudify-ctx/cloudify.py", line 56, in info
    [172.16.1.62] out: return self._logger(level='info', message=message)
    [172.16.1.62] out: File "/tmp/cloudify-ctx/cloudify.py", line 50, in _logger
    [172.16.1.62] out: return check_output(cmd)
    [172.16.1.62] out: File "/tmp/cloudify-ctx/cloudify.py", line 32, in check_output
    [172.16.1.62] out: raise error
    [172.16.1.62] out: subprocess.CalledProcessError: Command '['ctx', 'logger', 'info', 'Starting systemd service cloudify-amqpinflux...']' returned non-zero exit status 1
    [172.16.1.62] out:

    Fatal error: run() received nonzero return code 1 while executing!

    Requested: source /tmp/cloudify-ctx/scripts/env-tmpTwdgO3-start.py-76RC18XP && /tmp/cloudify-ctx/scripts/tmpTwdgO3-start.py-76RC18XP
    Executed: /bin/bash -l -c "cd /tmp/cloudify-ctx/work && source /tmp/cloudify-ctx/scripts/env-tmpTwdgO3-start.py-76RC18XP && /tmp/cloudify-ctx/scripts/tmpTwdgO3-start.py-76RC18XP"

    Aborting.
    2018-03-28 07:03:23 CFY <manager> [elasticsearch_45a04] Starting node
    2018-03-28 07:03:23 CFY <manager> [amqp_influx_010b6.start] Task failed 'fabric_plugin.tasks.run_script' -> run() received nonzero return code 1 while executing!

    Requested: source /tmp/cloudify-ctx/scripts/env-tmpTwdgO3-start.py-76RC18XP && /tmp/cloudify-ctx/scripts/tmpTwdgO3-start.py-76RC18XP
    Executed: /bin/bash -l -c "cd /tmp/cloudify-ctx/work && source /tmp/cloudify-ctx/scripts/env-tmpTwdgO3-start.py-76RC18XP && /tmp/cloudify-ctx/scripts/tmpTwdgO3-start.py-76RC18XP"
    2018-03-28 07:03:23 CFY <manager> [elasticsearch_45a04.start] Sending task 'fabric_plugin.tasks.run_script'
    2018-03-28 07:03:23 CFY <manager> [elasticsearch_45a04.start] Task started 'fabric_plugin.tasks.run_script'
    2018-03-28 07:03:23 LOG <manager> [elasticsearch_45a04.start] INFO: Preparing fabric environment...
    2018-03-28 07:03:23 LOG <manager> [elasticsearch_45a04.start] INFO: Environment prepared successfully
    2018-03-28 07:03:24 LOG <manager> [elasticsearch_45a04.start] INFO: Starting Elasticsearch Service...
    2018-03-28 07:03:24 LOG <manager> [elasticsearch_45a04.start] INFO: Starting systemd service elasticsearch...
    2018-03-28 07:03:24 LOG <manager> [elasticsearch_45a04.start] INFO: elasticsearch is running

    ..............

    ..............

    ..............

    Bootstrap failed! (400: Failed during plugin installation. (ExecutionFailure: Error occurred while executing the install_plugin system workflow : ProcessExecutionError - RuntimeError: RuntimeError: Workflow failed: Task failed 'cloudify_agent.operations.install_plugins' -> Managed plugin installation found but its ID does not match the ID of the plugin currently on the manager. [existing: ff8642cd-5c74-4435-a6f8-c3c1c09ac713, new: 

  27. Hi Alexis, Brian,

      For dcae, in the docker container I get the following error:

    INFO - Installing dnsdesig.wgn
    INFO - Installing dnsdesig...

    INFO - Installing within current virtualenv: True...

    ERROR -
    ERROR - Usage:

    ERROR - pip install [options] <requirement specifier> [package-index-options] ...

    ERROR - pip install [options] -r <requirements file> [package-index-options] ...
    ERROR - pip install [options] [-e] <vcs project url> ...
    ERROR - pip install [options] [-e] <local project path> ...
    ERROR - pip install [options] <archive url/path> ...
    ERROR -
    ERROR - no such option: --use-wheel
    ERROR -
    ERROR - Could not install package: dnsdesig.

     I am not able to log in to the container as it exits immediately.

    Kindly advise how to proceed.

    Thanks

    Vijaya

    1. Hi Alexis de Talhouët,

         Any idea on this issue? Any workaround?

      regards, Vijayalakshmi

      1. hi, on the dcae_vm_install.sh script try to use 

        at the docker run command

  28. Hi Vijayalakshmi H,

    We are facing the same issue. Were you able to fix it?

    1. Arindam, unfortunately no solution.

      Am waiting to hear from Brian/Alexis on this.

      -Vijayalakshmi

    1. Abraham, where did you get the link from?  Search at readthedocs seems to work for me now. 

  29. This seems to be a bug in how the search function is implemented in readthedocs.  The correct link is  http://onap.readthedocs.io/en/latest/submodules/oom.git/docs/oom_user_guide.html?highlight=oom

    I'm happy to hear your feedback on the document.

    Cheers, Roger

    1. Roger, thank you for providing the correct link. I will be happy to provide feedback on the document. What would be the best way to provide feedback on the document?

      1. readthedocs doesn't seem to have a good way to provide feedback (that I know of).  If nothing else email me: Roger.Maitland@amdocs.com

        Thanks, Roger

      2. Link updated here 

  30. Hi,

    I have ONAP on K8S up and running, but I am still trying to get DCAE running. So far I have only got the dcae-dcae-bootstrap VM running, but the MultiCloud registration fails and the docker container in the bootstrap VM fails.

    Can you explain why the documentation says that the tenant name used for the proxy OpenStack with Designate must match the tenant name used for my OOM deployment on the first OpenStack?

    Also, my tenant name in OOM is "0750179787_ITNSchool"; if I use the same tenant name for Designate, will the underscore character cause an issue?

    Also, I noticed that the cloud-owner is hardcoded to pod25 and pod25dns; does this make any difference, given that my main cloud-owner for OOM is "CloudOwner" and my region is "fr1"?

    Thanks.



    Now I am receiving the following error when the DCAE bootstrap VM tries to register with MultiCloud; I can see that the dcae_vm_init.sh script could not get a correct token from MultiCloud.

     

    ===> Waiting for MultiCloud to get ready for getting 200 from http://vm1.openo.VyK5.simpledemo.onap.org:9005/api/multicloud-titanium_cloud/v0/swagger.json @ Mon Apr 23 07:03:22 UTC 2018

    RESP CODE 200, matches with expected RESP CODE 200.

    ===> MultiCloud ready @ Mon Apr 23 07:03:22 UTC 2018

    ===> Register DNS zone VyK5.dcaeg2.onap.org. under admin

    =====> Getting token from http://vm1.openo.VyK5.simpledemo.onap.org/api/multicloud-titanium_cloud/v0/pod25_fr1/identity/v3/auth/tokens

    Received Keystone token tmp_auth_token},</pre></li> from http://vm1.openo.VyK5.simpledemo.onap.org/api/multicloud-titanium_cloud/v0/pod25_fr1/identity/v3/auth/tokens

    *   Trying 84.39.51.47...

    * Connected to vm1.openo.VyK5.simpledemo.onap.org (84.39.51.47) port 80 (#0)

    > GET /api/multicloud-titanium_cloud/v0/pod25_fr1/dns-delegate/v2/zones?name=VyK5.dcaeg2.onap.org. HTTP/1.1

    > Host: vm1.openo.VyK5.simpledemo.onap.org

    > User-Agent: curl/7.47.0

    > Accept: */*

    > Content-Type: application/json

    > X-Auth-Token: tmp_auth_token},</pre></li>

    >

    < HTTP/1.1 403 Forbidden

    < Server: nginx/1.12.2

    < Date: Mon, 23 Apr 2018 07:03:22 GMT

    < Content-Type: application/json

    < Transfer-Encoding: chunked

    < Connection: keep-alive

    < Vary: Cookie

    < X-Frame-Options: SAMEORIGIN

    < Allow: GET, POST, PUT, PATCH, DELETE, HEAD, OPTIONS

    <

    { [69 bytes data]

    * Connection #0 to host vm1.openo.VyK5.simpledemo.onap.org left intact

    =====> No zone of same name VyK5.dcaeg2.onap.org. found, creating new zone

    *   Trying 84.39.51.47...

    * Connected to vm1.openo.VyK5.simpledemo.onap.org (84.39.51.47) port 80 (#0)

    > POST /api/multicloud-titanium_cloud/v0/pod25_fr1/dns-delegate/v2/zones HTTP/1.1

    > Host: vm1.openo.VyK5.simpledemo.onap.org

    > User-Agent: curl/7.47.0

    > Accept: */*

    > Content-Type: application/json

    > X-Auth-Token: tmp_auth_token},</pre></li>

    > Content-Length: 67

    >

    * upload completely sent off: 67 out of 67 bytes

    < HTTP/1.1 403 Forbidden

    < Server: nginx/1.12.2

    < Date: Mon, 23 Apr 2018 07:03:22 GMT

    < Content-Type: application/json

    < Transfer-Encoding: chunked

    < Connection: keep-alive

    < Vary: Cookie

    < X-Frame-Options: SAMEORIGIN

    < Allow: GET, POST, PUT, PATCH, DELETE, HEAD, OPTIONS

    <

    * Connection #0 to host vm1.openo.VyK5.simpledemo.onap.org left intact

    {"detail":"Authentication credentials were not provided."}=====> Zone listing

    *   Trying 84.39.51.47...

    * Connected to vm1.openo.VyK5.simpledemo.onap.org (84.39.51.47) port 80 (#0)

    > GET /api/multicloud-titanium_cloud/v0/pod25_fr1/dns-delegate/v2/zones HTTP/1.1

    > Host: vm1.openo.VyK5.simpledemo.onap.org

    > User-Agent: curl/7.47.0

    > Accept: */*

    > Content-Type: application/json

    > X-Auth-Token: tmp_auth_token},</pre></li>

    >

    < HTTP/1.1 403 Forbidden

    < Server: nginx/1.12.2

    < Date: Mon, 23 Apr 2018 07:03:22 GMT

    < Content-Type: application/json

    < Transfer-Encoding: chunked

    < Connection: keep-alive

    < Vary: Cookie

    < X-Frame-Options: SAMEORIGIN

    < Allow: GET, POST, PUT, PATCH, DELETE, HEAD, OPTIONS

    <

    { [69 bytes data]

    * Connection #0 to host vm1.openo.VyK5.simpledemo.onap.org left intact

    {

        "detail": "Authentication credentials were not provided."

    }

    * Could not resolve host: Content-Type

    * Closing connection 0

    *   Trying 84.39.51.47...

    * Connected to vm1.openo.VyK5.simpledemo.onap.org (84.39.51.47) port 80 (#1)

    > GET /api/multicloud-titanium_cloud/v0/pod25_fr1/dns-delegate/v2/zones?name=VyK5.dcaeg2.onap.org. HTTP/1.1

    > Host: vm1.openo.VyK5.simpledemo.onap.org

    > User-Agent: curl/7.47.0

    > Accept: */*

    > X-Auth-Token: tmp_auth_token},</pre></li>

    >

    < HTTP/1.1 403 Forbidden

    < Server: nginx/1.12.2

    < Date: Mon, 23 Apr 2018 07:03:23 GMT

    < Content-Type: application/json

    < Transfer-Encoding: chunked

    < Connection: keep-alive

    < Vary: Cookie

    < X-Frame-Options: SAMEORIGIN

    < Allow: GET, POST, PUT, PATCH, DELETE, HEAD, OPTIONS

    <

    { [69 bytes data]

    * Connection #1 to host vm1.openo.VyK5.simpledemo.onap.org left intact

    =====> After creation, zone VyK5.dcaeg2.onap.org. ID is {"detail":"Authentication credentials were not provided."}

    Registration and configuration for proxying DNSaaS completed.

    1. Hi, where did you get these logs?

      I think I have similar problems.

      1. Hi, this is the output of /opt/dcae_vm_init.sh on the dcae-bootstrap VM.

        1. Hi, my dcae2_install.log has this:

          + ./dcae2_vm_init.sh

          Login Succeeded

          Using Designate DNSaaS service, performing additional registration and configuration

          http://192.168.89.220:5000/v2.0

          ===> Getting token from http://192.168.89.220:5000/v2.0/tokens

          *   Trying 192.168.89.220...

          * Connected to 192.168.89.220 (192.168.89.220) port 5000 (#0)

          > POST /v2.0/tokens HTTP/1.1

          > Host: 192.168.89.220:5000

          > User-Agent: curl/7.47.0

          > Accept: */*

          > Content-Type: application/json

          > Content-Length: 146

          } [146 bytes data]

          * upload completely sent off: 146 out of 146 bytes

          < HTTP/1.1 200 OK

          < Date: Thu, 26 Apr 2018 13:38:41 GMT

          < Server: Apache/2.4.18 (Ubuntu)

          < Vary: X-Auth-Token

          < X-Distribution: Ubuntu

          < x-openstack-request-id: req-fc035805-5977-4d46-bd80-eeb080c6679a

          < Content-Length: 4073

          < Content-Type: application/json

          { [4073 bytes data]

          * Connection #0 to host 192.168.89.220 left intact

          ===> Register DNS zone BN3T.dcaeg2.onap.org. at Designate API endpoint http://192.168.89.220:9001//v2/zones

          * Rebuilt URL to: gAAAAABa4dZh1UdPGlqLASUcFNVOCXFaD5TBA5HXX1qIoQXHCxZhTMHL6d_N1gLo0LCaTbMmR5kTh-mkNMKCgev3iAxANWIaxXYrXh1jkCzDQQKC7xERXRvuZ1A2itngesGKhpTK7Yg0qH37EiTHcL8e34eVwhJrfFOshDIJVOCHgtrfwuTDuzE/

          * Could not resolve host: gAAAAABa4dZh1UdPGlqLASUcFNVOCXFaD5TBA5HXX1qIoQXHCxZhTMHL6d_N1gLo0LCaTbMmR5kTh-mkNMKCgev3iAxANWIaxXYrXh1jkCzDQQKC7xERXRvuZ1A2itngesGKhpTK7Yg0qH37EiTHcL8e34eVwhJrfFOshDIJVOCHgtrfwuTDuzE

          * Closing connection 0

          *   Trying 192.168.89.220...

          * Connected to 192.168.89.220 (192.168.89.220) port 9001 (#1)

          > GET //v2/zones HTTP/1.1

          > Host: 192.168.89.220:9001

          > User-Agent: curl/7.47.0

          > Accept: */*

          < HTTP/1.1 401 Unauthorized

          < Content-Type: application/json

          < Content-Length: 114

          < Www-Authenticate: Keystone uri='http://controller:35357'

          < X-Openstack-Request-Id: req-d983760b-9d62-4ebe-8ae5-e698641da02c

          < Date: Thu, 26 Apr 2018 13:38:42 GMT

          { [114 bytes data]

          * Connection #1 to host 192.168.89.220 left intact

          jq: error (at <stdin>:1): Cannot iterate over null (null)

          ======> Zone BN3T.dcaeg2.onap.org. does not exist.  Create

          *   Trying 192.168.89.220...

          * Connected to 192.168.89.220 (192.168.89.220) port 9001 (#0)

          > POST //v2/zones HTTP/1.1

          > Host: 192.168.89.220:9001

          > User-Agent: curl/7.47.0

          > Accept: application/json

          > Content-Type: application/json

          > X-Auth-Token: gAAAAABa4dZh1UdPGlqLASUcFNVOCXFaD5TBA5HXX1qIoQXHCxZhTMHL6d_N1gLo0LCaTbMmR5kTh-mkNMKCgev3iAxANWIaxXYrXh1jkCzDQQKC7xERXRvuZ1A2itngesGKhpTK7Yg0qH37EiTHcL8e34eVwhJrfFOshDIJVOCHgtrfwuTDuzE

          > Content-Length: 156

          } [156 bytes data]

          * upload completely sent off: 156 out of 156 bytes

          < HTTP/1.1 202 Accepted

          < Location: http://controller:9001/v2/zones/c48efbe6-cb3b-479a-815d-65530fd3fac1

          < Content-Length: 592

          < Content-Type: application/json

          < X-Openstack-Request-Id: req-ec87b3ed-8e7f-4d55-8a9c-4096f8355e5a

          < Date: Thu, 26 Apr 2018 13:38:42 GMT

          { [592 bytes data]

          * Connection #0 to host 192.168.89.220 left intact

          Unable to find image 'nexus3.onap.org:10001/onap/org.onap.dcaegen2.deployments.bootstrap:v1.1.1' locally

          v1.1.1: Pulling from onap/org.onap.dcaegen2.deployments.bootstrap

          50aff78429b1: Pulling fs layer

          f6d82e297bce: Pulling fs layer

          275abb2c8a6f: Pulling fs layer

          9f15a39356d6: Pulling fs layer

          ...

          ...

          and it keeps waiting for Consul.

          Can you help me?

          thanks

          1. Solved: change the docker image from v1.1.1 to 1.1.2:

            • 'nexus3.onap.org:10001/onap/org.onap.dcaegen2.deployments.bootstrap:1.1.2'
            • The Unauthorized token is due to missing quotes ("") around the token in the respective curl command (see the example below).
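            A hedged illustration of the quoting fix (the endpoint, zone name and token below are placeholders, not values from this deployment):

                # quote the whole header so the shell does not split or mangle the token value
                TOKEN="<token returned by the identity API>"
                curl -v -H "Content-Type: application/json" \
                     -H "X-Auth-Token: ${TOKEN}" \
                     "http://<designate-endpoint>:9001/v2/zones?name=<zone-name>."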
  31. Hi,

    I have ONAP on K8S up and running, and the dcae-bootstrap VM is up.

    However, the dcae_vm_init.sh script cannot get a token from my OpenStack v2.0 API, and the script exits.

    If I try to get the token manually from my OpenStack v2.0, I get the token OK, as you can see below.

    I also show below the registration done by the script in AAI. I noticed that the script added "v3" to my identity URL, which is wrong, so I changed this value to the correct "v2.0", but the script still cannot get the token.

    I tried looking into the MSB-IAG container logs, but they only show the GET request and I don't see the response from my OpenStack; since it is SSL, I cannot use Wireshark to see the response from OpenStack either.

    How can I troubleshoot this issue and see the HTTP token request and response between MSB and OpenStack v2.0?

    Also, why do we need to get the token via MultiCloud if my DCAE VM and OOM are on the same tenant, using the same Keystone? Do we need the MultiCloud proxy here?

    This is a manual GET TOKEN request: 

    root@dcae-dcae-bootstrap:/opt# curl -v -H 'Content-Type: application/json' -X POST -d '{"auth":{"passwordCredentials":{"username":"abdelmuhaimen.seaudi@orange.com","password":"xxxx"},"tenantName": "0750179787_ITNSchool"}}}' https://identity.fr1.cloudwatt.com/v2.0/tokens
    Note: Unnecessary use of -X or --request, POST is already inferred.
    *   Trying 185.23.94.20...
    * Connected to identity.fr1.cloudwatt.com (185.23.94.20) port 443 (#0)
    * found 148 certificates in /etc/ssl/certs/ca-certificates.crt
    * found 592 certificates in /etc/ssl/certs
    * ALPN, offering http/1.1
    * SSL connection using TLS1.2 / ECDHE_RSA_AES_256_GCM_SHA384
    *        server certificate verification OK
    *        server certificate status verification SKIPPED
    *        common name: *.fr1.cloudwatt.com (matched)
    *        server certificate expiration date OK
    *        server certificate activation date OK
    *        certificate public key: RSA
    *        certificate version: #3
    *        subject: C=FR,L=Paris,O=Orange,OU=Orange Cloud for Business,CN=*.fr1.cloudwatt.com
    *        start date: Mon, 12 Feb 2018 00:00:00 GMT
    *        expire date: Fri, 16 Nov 2018 12:00:00 GMT
    *        issuer: C=US,O=DigiCert Inc,CN=DigiCert Global CA G2
    *        compression: NULL
    * ALPN, server did not agree to a protocol
    > POST /v2.0/tokens HTTP/1.1
    > Host: identity.fr1.cloudwatt.com
    > User-Agent: curl/7.47.0
    > Accept: */*
    > Content-Type: application/json
    > Content-Length: 143

    * upload completely sent off: 143 out of 143 bytes
    < HTTP/1.1 200 OK
    < Content-Type: application/json; charset=UTF-8
    < Cache-Control: no-cache
    < Server: Jetty (horse API)
    < Content-Length: 7319
    < Strict-Transport-Security: max-age=55779513; includeSubDomains

    {"access":{"token":{"id":"gAAAAABa4D83is432vt2OSzjl_ej6WVVKiCFASVNfTl-M-
    ...

    This is the GET TOKEN request from dcae_vm_init.sh using multicloud proxy:

    root@dcae-dcae-bootstrap:/opt# curl -v -H 'Content-Type: application/json' -X POST -d '{"auth":{"tenantName": "0750179787_ITNSchool"}}' http://vm1.openo.7Ho5.simpledemo.onap.org/api/multicloud-titanium_cloud/v0/pod25_fr1/identity/v3/auth/tokens
    Note: Unnecessary use of -X or --request, POST is already inferred.
    *   Trying 84.39.51.47...
    * Connected to vm1.openo.7Ho5.simpledemo.onap.org (84.39.51.47) port 80 (#0)
    > POST /api/multicloud-titanium_cloud/v0/pod25_fr1/identity/v3/auth/tokens HTTP/1.1
    > Host: vm1.openo.7Ho5.simpledemo.onap.org
    > User-Agent: curl/7.47.0
    > Accept: */*
    > Content-Type: application/json
    > Content-Length: 47

    * upload completely sent off: 47 out of 47 bytes
    < HTTP/1.1 500 Internal Server Error
    < Server: nginx/1.12.2
    < Date: Wed, 25 Apr 2018 08:35:06 GMT
    < Content-Type: application/json
    < Transfer-Encoding: chunked
    < Connection: keep-alive
    < Vary: Cookie
    < X-Frame-Options: SAMEORIGIN
    < Allow: GET, POST, HEAD, OPTIONS

    * Connection #0 to host vm1.openo.7Ho5.simpledemo.onap.org left intact
    {"error":"'token'"}

    This is the cloud region registration in AAI:

    root@dcae-dcae-bootstrap:/opt# curl -k -X GET -H "X-FromAppId: AAI-Temp-Tool" -H "X-TransactionId: AAI-Temp-Tool"" -H "Content-Type: application/json" -H "Accept: application/json" -u AAI:AAI https://vm1.aai.7Ho5.simpledemo.onap.org:8443/aai/v11/cloud-infrastructure/cloud-regions/cloud-region/pod25/fr1?depth=all | json_pp
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100  1350  100  1350    0     0   5702      0 --:--:-- --:--:-- --:--:--  5720
    {
       "owner-defined-type" : "owner-defined-type",
       "esr-system-info-list" : {
          "esr-system-info" : [
             {
                "password" : "xxxx",
                "cloud-domain" : "default",
                "system-type" : "VIM",
                "ssl-insecure" : true,
                "type" : "example-type-val-85254",
                "service-url" : "https://identity.fr1.cloudwatt.com/v2.0",
                "ip-address" : "example-ip-address-val-44431",
                "port" : "example-port-val-93234",
                "system-name" : "example-system-name-val-29070",
                "resource-version" : "1524600009342",
                "vendor" : "example-vendor-val-94515",
                "esr-system-info-id" : "432ac032-e996-41f2-84ed-9c7a1766eb29",
                "version" : "example-version-val-71880",
                "default-tenant" : "0750179787_ITNSchool",
                "ssl-cacert" : "example-ssl-cacert-val-75021",
                "user-name" : "abdelmuhaimen.seaudi@orange.com"
             }
          ]
       },
       "resource-version" : "1524569196405",
       "cloud-extra-info" : "{\"epa-caps\":{\"huge_page\":\"true\",\"cpu_pinning\":\"true\",\"cpu_thread_policy\":\"true\",\"numa_aware\":\"true\",\"sriov\":\"true\",\"dpdk_vswitch\":\"true\",\"rdt\":\"false\",\"numa_locality_pci\":\"true\"},\"dns-delegate\":{\"cloud-owner\":\"pod25dns\",\"cloud-region-id\":\"RegionOne\"}}",
       "cloud-type" : "openstack",
       "sriov-automation" : false,
       "cloud-region-version" : "titanium_cloud",
       "complex-name" : "complex name",
       "cloud-zone" : "cloud zone",
       "cloud-region-id" : "fr1",
       "identity-url" : "http://vm1.openo.7Ho5.simpledemo.onap.org/api/multicloud-titanium_cloud/v0/pod25_fr1/identity/v2.0",
       "cloud-owner" : "pod25"
    }


  32. Hi Alexis de Talhouët and Michael O'Brien,

      I have the vDNS stack up with the load balancer, DNS and packet generator VMs.

    I verified the command dig @vLoadBalancer_IP host1.dnsdemo.onap.org, and it is working with the output shown in

    https://gerrit.onap.org/r/gitweb?p=demo.git;a=blob;f=README.md;h=38491e0437fc0b7a16003d9cd3bfe2fbf3d90ac8;hb=refs/heads/master.

    I see there is a script run_streams_dns.sh running. Where do I see the output of this script?

    Any other way to check/verify the vDNS functionality?

    Thanks

    Vijayalakshmi

  33. Hi All,

    After doing ./createConfig.sh -n onap, I see this error on the Kubernetes dashboard for the config pod, and the container is stuck in the creating state:

    "Search Line limits were exceeded, some dns names have been omitted, the applied search line is: onap.svc.cluster.local svc.cluster.local cluster.local kubelet.kubernetes.rancher.internal kubernetes.rancher.internal rancher.internal"


    1. Did you try running the cd.sh script?

      1. Hi Pedro,

        Nope. Where is that script and from where should it be invoked?

        1. Hi, which version do you want to install?

          1. I am trying ONAP setup - oom - Amsterdam release.

            1. hi,

              try to use the oom_rancher_setup_1 and cd scripts from Michael O'Brien's git: https://github.com/obrienlabs/onap-root

              1. Sangeeth, those search line limit errors are a limitation of Rancher (5 DNS search entries) - you can ignore them, they are only a red herring.

                /michael

                Also don't use my old github account - the scripts are moved into onap under LOG-320

                LOG-320 - Getting issue details... STATUS

                under

                https://git.onap.org/logging-analytics/tree/deploy


                and optimized for master/beijing and the helm install  - try not to use amsterdam - it is no longer supported

                /michael

                1. Hi michael,

                  Is it possible to run the vFW demo with closed loop in Amsterdam, or should I move to Beijing?

                  Thanks 

  34. SOLVED: [ERROR: Failed to find: platform_dockerhost] → bad configs at openstack designate

    Hi all,

    Now I'm getting an error on the dcaecdap06 VM (attached screenshot: "Captura de ecrã 2018-04-30, às 15.13.46.png").

    the boot log is:

    ...

    ...

    ...

    2018-04-30T12:46:16 CFY <policy_handler> [policy_handler_c1eb8.start] Sending task 'dockerplugin.create_and_start_container_for_platforms' [retry 27]

    2018-04-30T12:46:16 CFY <policy_handler> [policy_handler_c1eb8.start] Task started 'dockerplugin.create_and_start_container_for_platforms' [retry 27]

    2018-04-30T12:46:17 CFY <policy_handler> [policy_handler_c1eb8.start] Task failed 'dockerplugin.create_and_start_container_for_platforms' -> Failed to find: platform_dockerhost [retry 27]

    2018-04-30T12:46:47 CFY <policy_handler> [policy_handler_c1eb8.start] Sending task 'dockerplugin.create_and_start_container_for_platforms' [retry 28]

    2018-04-30T12:46:47 CFY <policy_handler> [policy_handler_c1eb8.start] Task started 'dockerplugin.create_and_start_container_for_platforms' [retry 28]

    2018-04-30T12:46:48 CFY <policy_handler> [policy_handler_c1eb8.start] Task failed 'dockerplugin.create_and_start_container_for_platforms' -> Failed to find: platform_dockerhost [retry 28]

    Timed out waiting for workflow 'install' of deployment 'policy_handler' to end. The execution may still be running properly; however, the command-line utility was instructed to wait up to 900 seconds for its completion.


    * Run 'cfy executions list' to determine the execution's status.

    * Run 'cfy executions cancel --execution-id b173bb7b-8047-431e-9b3a-2e3a291c2a02' to cancel the running workflow.

    * Run 'cfy events list --tail --include-logs --execution-id b173bb7b-8047-431e-9b3a-2e3a291c2a02' to retrieve the execution's events/logs

    Waiting for CDAP cluster to register

    + echo 'Waiting for CDAP cluster to register'

    + grep cdap

    + curl -Ss http://192.168.89.23:8500/v1/catalog/service/cdap

    + echo -n .

    + sleep 30

    + grep cdap

    + curl -Ss http://192.168.89.23:8500/v1/catalog/service/cdap

    + echo -n .

    + sleep 30

    ...

    ...

    ...

    Can someone help me? [Michael O'Brien, Alexis Chiarello]

    thanks

  35. ######[SOLVED]########

    Hi,

    Is anyone here deploying DCAE?

    I'm currently having errors on the Cloudify deployments PlatformServicesInventory, hengine and hrules. Basically it says that the container never became healthy, so it seems to be a problem with the container/image.

    here are the logs:

    PlatformServicesInventory.log 

    hrules.log

    hengine.log


    Can someone confirm that?


    UPDATE: PlatformServicesInventory is now OK; hrules and hengine still fail with the same errors.


    SOLUTION: After looking at the logs of the hrules and hengine containers, I found a problem with some connections to DNS zones. I then looked at the dcaegen2-bootstrap pod logs and saw that it had some command errors. I restarted the pod to reinitialize the whole process (a couple of times, until everything was OK - I realized that the pod has some issues, it's not stable) and it's done.


    Pedro

  36. Hi Michael,

    I have set up ONAP Amsterdam on OpenStack. I am using 2 VMs, one for Rancher and the other as the Kubernetes host VM with OOM. I am following this page for the setup and referring to the steps in your script https://gerrit.onap.org/r/#/c/32019/17/install/rancher/oom_rancher_setup.sh for setting up the VMs manually. I have been able to register the Kubernetes host in Rancher and launch all the pods except the following:

    onap-appc             appc-dgbuilder-2298093128-hqqkg               0/1       CrashLoopBackOff   128        13h       10.42.136.15    kubernetes1

    onap-consul           consul-agent-3312409084-d15vl                 0/1       CrashLoopBackOff   159        13h       10.42.243.215   kubernetes1

    onap-sdnc             sdnc-dgbuilder-4011443503-kcc6j               0/1       CrashLoopBackOff   127        13h       10.42.46.144    kubernetes1

    onap-sdnc             sdnc-portal-516977107-c1wr3                   0/1       CrashLoopBackOff   128        13h       10.42.45.41     kubernetes1

    On looking into the logs of consul-agent-3312409084-d15vl I saw the following error:

    Error parsing /consul/config/aai-data-router-health.json

    To fix this I changed the mount point to some other location in consul-agent-deployment.yaml and recreated the consul pod. It worked. Is this some known bug or some version problem? The other consul pods are running fine and I see the mount point for them is /consul/config in consul-server-deployment.yaml and consul-server-service.yaml as well. 

    Regards,

    Vidhu


    1. Amsterdam is not being adjusted anymore - I would switch to beijing/master - DCAE is fully containerized there.

      /michael

      1. Hi Michael,

        Do you mean to say that Amsterdam is no longer maintained and might have issues? I wanted to set up Amsterdam with DCAE enabled. I am now able to run all the pods except consul-agent, which gets stuck in the CrashLoopBackOff state. To fix this I had to change the mount point to some other location in consul-agent-deployment.yaml and re-create consul.

        Are you suggesting it is better to go for Beijing? How stable is Beijing, and is DCAE fully functional there?

        Thanks,

        Vidhu

          


      2. Hi Michael,

        I want to try the Beijing release of OOM on OpenStack. I just wanted to confirm whether the script "https://gerrit.onap.org/r/#/c/32019/17/install/rancher/oom_rancher_setup.sh" is meant to be run on a single VM that will have all the components (Rancher, Kubernetes and OOM) running? What disk space and memory size (flavor) are recommended for this VM?

        Thanks,

        Vidhu 

  37. hi Michael, Brian,

     I have instantiated the vDNS/vLoadBalancer stack in OpenStack. I have observed that most of the time the VMs are not reachable upon instantiation.

    About once in ~10 attempts the VMs are pingable, but when the VMs get rebooted as part of the init/install scripts, they become unreachable.

    Please guide me on how to get this resolved, or let me know if there is a specific wiki for vDNS where I can post this query.

    Thanks

    Vijaya


  38. [Solved] - bad nfs-share configs.


    hi all,

    I deployed Beijing with 2 Rancher/Kubernetes clusters.

    When I run the healthcheck I get this for all tests:

    Maybe a DNS pod problem, or did I miss some cluster configuration?

    I'm able to open the Portal, VID, etc.

    1. Hi Pedro,

      Can you let me know what steps and scripts are you following for installing Beijing release?

      Thanks,

      Vidhu

      1. hi,

        I'm following this: https://git.onap.org/logging-analytics/plain/deploy/rancher/oom_entrypoint.sh, but for a full deployment I think you need to build a Kubernetes cluster due to the pod limit. You can find the instructions at readthedocs.

        1. Hi Pedro,

          Are you running this script in a OpenStack instance or on a physical host machine?

          Regards,

          Vidhu

          1. Currently, what I have is a 3-node cluster: 1 for Rancher (master) and 2 slaves for Beijing.

            you can follow: http://onap.readthedocs.io/en/latest/submodules/oom.git/docs/oom_setup_kubernetes_rancher.html

            1. Thanks Pedro, the links helped a lot.

              I am using the following cd.sh script to install onap :

              https://git.onap.org/logging-analytics/plain/deploy/cd.sh

              Do I have to override the parameters in values.yaml to suit my OpenStack environment, or shall I leave it as the default one in the oom/kubernetes/onap directory? In the cd.sh script I see the values.yaml overriding is commented out.


              The following values.yaml did not work for me
              https://jira.onap.org/secure/attachment/11414/values.yaml

              Not sure if the following "so" configuration needs to be changed as per my OpenStack env. 

                # so server configuration
                config:
                  # message router configuration
                  dmaapTopic: "AUTO"
                  # openstack configuration
                  openStackUserName: "vnf_user"
                  openStackRegion: "RegionOne"
                  openStackKeyStoneUrl: "http://1.2.3.4:5000"
                  openStackServiceTenantName: "service"
                  openStackEncryptedPasswordHere: "c124921a3a0efbe579782cde8227681e"

              I feel it can be modified later too - is that correct?

              Please share your views.

              Regards,

              Vidhu

              1. The OpenStack configuration will need to be customized for your deployment(s). As described in the OOM User Guide (https://onap.readthedocs.io/en/beijing/submodules/oom.git/docs/oom_user_guide.html) you can use the -f <my_custom_values.yaml> syntax to override the example values provided, for example as sketched below.

                Cheers,

                Roger
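                A sketch only (the release name "dev" and the local/onap repo assume the local Helm repo has been built as per the OOM guide):

                    cp oom/kubernetes/onap/values.yaml my_custom_values.yaml    # edit the openstack section for your cloud
                    helm upgrade -i dev local/onap --namespace onap -f my_custom_values.yaml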

                1. Hi Roger,

                  I am using the same values.yaml provided in the oom/kubernetes/onap folder unmodified, considering I can override these values later, as you mentioned. I was able to run about 130 pods, while a few got stuck in the CrashLoopBackOff and Init states. I hope these dummy values are not affecting the pods?

                  How many pods are there in total? I created a 2-node Kubernetes cluster (1 Rancher and 2 Kubernetes host VMs).

                  Thanks,

                  Vidhu 

                  1. Hi Vidhu,

                    1. I tried with 3 Kubernetes hosts (8 vCPU, 32 GB RAM per host) and 120+ pods came into the Running state; then one of the K8s hosts went into disconnected mode and all the pods on that host went into an unknown state. The whole setup was messed up. As per my understanding, some of the Java processes ran out of memory, which also put the K8s host into the disconnected state.

                    I tried with 6 hosts now and didn't face this issue. My recommendation would be to add more hosts

                    -Sunny

                    1. I agree that more nodes are better than fewer.  I counted the # of pods in one of the CD systems and got 169 so that's a lot of load on any cluster.

                      1. The 2 K8s hosts I created have 64GB RAM, 15 vCPUs and 160GB disk space each, which is much more than the minimum recommended for a single node. I did not face any performance issues and about 142 pods came up. Maybe I can add one more.


                      2. Hi Roger,

                        I am now trying with a 4-node Kubernetes cluster, but most of the pods are stuck in the ContainerCreating state. I see the following logs in the event description of most of the pods:

                        Normal SandboxChanged 14m (x84 over 2h) kubelet, k8s-beijing-2 Pod sandbox changed, it will be killed and re-created.
                        Warning FailedSync 8m (x89 over 2h) kubelet, k8s-beijing-2 Error syncing pod
                        Warning FailedCreatePodSandBox 3m (x84 over 2h) kubelet, k8s-beijing-2 Failed create pod sandbox.

                        I added 2 new nodes to the 2 older nodes on which I had tried installing OOM earlier. I terminated all the pods in the old setup and then freshly registered all 4 nodes in Rancher. The older nodes also have the images already pulled. Can this be the reason? Should I create all 4 nodes fresh?

                        Thanks,

                        Vidhu



  39. Hi all,

    Is there any .yaml template for custom configuration for Beijing (similar to onap-parameters.yaml for Amsterdam)?

    thanks

  40. Hi All,

        This may be unrelated to ONAP, but it is more about the OpenStack environment that I am trying to get running using the latest Queens and Packstack.

    I have a 3 Node openstack setup. 1 Controller/Compute/Storage and 2 other compute nodes.

    I was able to get OpenStack installed and verified that I can see the 3 hypervisors. When I spin up the Rancher VM, it gets instantiated on the controller/compute node. When I try to instantiate the VM that needs to host the K8s cluster, there are not enough resources on the controller/compute node since it is an xlarge flavor, so it needs to be spun up on one of the compute nodes, but it gets stuck in the BUILDING state forever. I have tried spinning up test VMs: as long as they can be spun up on the controller/compute node they get instantiated, but once the resources on this controller/compute node are exhausted I see the same issue with the test VMs - they also get stuck in the BUILDING state forever. Unfortunately, I do not see any errors in any of the nova logs. This worked fine with Ocata. I have posted this on the OpenStack mailing list too, but I am posting it here as well in case someone else has faced a similar issue and has some pointers for me to get this resolved.

    Regards,

    Ravi 

    1. You will not be able to host all of ONAP on a single VM, as there is a limit of 110 pods on a single node; however, if you're interested in only a fraction of ONAP you may find one node is enough. K8s will balance across many nodes in a cluster, so you might want to try 4 x 32GB VMs with 8 vCores each instead of one large node, which might help avoid this problem.


      Cheers,

      Roger

  41. Hi,

    Is anyone facing problems while bringing up the SDNC pods in the Beijing release? The sdnc-db pod does not come up due to the following liveness and readiness errors:

    Warning Unhealthy 1m kubelet, k8s-beijing-2 Liveness probe failed: mysqladmin: connect to server at 'localhost' failed

    error: 'Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)'

    Check that mysqld is running and that the socket: '/var/run/mysqld/mysqld.sock' exists!

    Warning Unhealthy 18s (x13 over 2m) kubelet, k8s-beijing-2 Readiness probe failed: dial tcp 10.42.167.245:3306: getsockopt: connection refused

    I am using the helm upgrade -f option to enable SDNC individually (roughly as in the sketch after this comment). I saw a similar bug, https://gerrit.onap.org/r/#/c/54647/, reported where initialDelaySeconds is set to 180 for sdnc-ansible-server and sdnc-portal. I find that the liveness and readiness time interval in sdnc/values.yaml is 10.

    Also, does SDNC need to be started in a certain order due to any dependency on other pods?

    Regards,

    Vidhu
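    For reference, the kind of command meant above, as a sketch only (the release name "dev", the local/onap chart and the sdnc.enabled flag are assumptions based on the Beijing umbrella chart layout):

        helm upgrade -i dev local/onap --namespace onap \
          -f my_custom_values.yaml \
          --set sdnc.enabled=true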



  42. Dear all, 

    I am new to OpenStack and ONAP and I am trying to install ONAP.

    How can I get the information for the OpenStack configuration parameters below?

    {OPENSTACK_INSTANCE}: The OpenStack Instance Name to give to your K8S VM

    {OPENSTACK_IP}: The IP of your OpenStack deployment
    {RANCHER_IP}: The IP of the Rancher VM created previously
    {K8S_FLAVOR}: The Flavor to use for the kubernetes VM. Recommanded specs:

    {UBUNTU_1604}: The Ubuntu 16.04 image
    {PRIVATE_NETWORK_NAME}: a private network
    {OPENSTACK_TENANT_NAME}: Openstack tenant
    {OPENSTACK_USERNAME}: Openstack username
    {OPENSTACK_PASSWORD}: OpenStack password


    Thank you very much.

    Best regards,

    Nguyen Van Hung

    1. Hi, for Openstack parameters, you should be able to get them by using the Openstack Horizon Dashboard or using the Openstack RC script through API calls, assuming you have access to the OpenStack environment where you are installing ONAP.

    I am doing the following steps on the Kubernetes VM:

    1. Click Kubernetes → CLI
    2. Click Generate Config
    3. Copy/paste the config onto your host

    Then the following steps on the Rancher VM:

    1. Install kubectl

      curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.8.0/bin/linux/amd64/kubectl
      chmod +x ./kubectl
      sudo mv ./kubectl /usr/local/bin/kubectl
    2. Make your kubectl use this new environment

      kubectl config use-context <rancher-environment-name>

    I am also deploying OOM on Rancher, doing all the steps after "Clone OOM Beijing branch" (on the Rancher VM only).

    Is this correct, or am I doing something wrong?

    1. Hi, what error are you seeing specifically? I never had to run that config use-context command. As long as I grab the copied config from the Rancher GUI and place it in ~/.kube/config (as sketched below), it should work right away, assuming your Rancher VM has connectivity to the Rancher host URL.
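      A minimal sketch of that step (the pasted content is whatever the Rancher Kubernetes → CLI → Generate Config dialog produced):

          mkdir -p ~/.kube
          vi ~/.kube/config                     # paste the generated config here
          kubectl get pods --all-namespaces     # quick sanity check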

      1. OK, I also skipped that command. I have set dcaegen2 to false in the values.yaml file.

        Currently, there are a total of 145 pods on the single node (i.e. the Kubernetes host).

        All the portal pods are running on Kubernetes with IP 192.168.1.138.

        Rancher is running on another IP.

        I am not able to access the ONAP GUI with the URL http://192.168.1.138:8989/ONAPPORTAL/applicationsHome

        It gives a blank page without any error. (One way to check the exposed portal NodePort is sketched after the host entries below.)

        In /etc/hosts on the Kubernetes node, I have provided the following entries:

        192.168.1.138 portal.api.simpledemo.onap.org
        192.168.1.138 vid.api.simpledemo.onap.org
        192.168.1.138 sdc.api.fe.simpledemo.onap.org
        192.168.1.138 portal-sdk.simpledemo.onap.org
        192.168.1.138 policy.api.simpledemo.onap.org
        192.168.1.138 aai.api.sparky.simpledemo.onap.org
        192.168.1.138 cli.api.simpledemo.onap.org
        192.168.1.138 msb.api.discovery.simpledemo.onap.org


        1. Also, after disabling dcaegen2, I can see the following pods running:

          onap dev-dcae-cloudify-manager-d8748c658-2zclr 1/1 Running 0 17h 10.42.47.244 sb4-k8s

          onap dev-dcae-db-0 1/1 Running 0 17h 10.42.252.70 sb4-k8s
          onap dev-dcae-db-1 1/1 Running 0 16h 10.42.135.225 sb4-k8s
          onap dev-dcae-healthcheck-559675764f-nhlg4 1/1 Running 0 17h 10.42.16.252 sb4-k8s
          onap dev-dcae-redis-0 1/1 Running 0 17h 10.42.247.206 sb4-k8s
          onap dev-dcae-redis-1 1/1 Running 0 17h 10.42.178.143 sb4-k8s
          onap dev-dcae-redis-2

          1. I got the solution.

            Thanks.

  43. Has anyone here installed Beijing release successfully?

    I am facing some issues in deploying ONAP Beijing release.

  44. Hello,

    I am trying to install the ONAP Casablanca release on OpenStack using the Heat template. I have gone through the link below as well:

     https://onap.readthedocs.io/en/casablanca/submodules/integration.git/docs/onap-oom-heat.html#onap-oom-heat

    But it looks like DCAE is incorporated in the installation. So if I want to install ONAP (Casablanca) without the DCAE component, what should my approach be?

    Is it possible to do it with the automated script and heat template itself?

    Please help me with these queries.

    Thanks in Advance !

  45. Hi,

    I currently have an ONAP cluster on Kubernetes on OpenStack running Casablanca.

    I need to upgrade to dublin/master or install dublin/master from scratch. Can somebody please let me know what the prerequisites are for Dublin or the latest master?

    The page below has the software requirements but doesn't list the Rancher version:

    https://onap.readthedocs.io/en/latest/submodules/oom.git/docs/oom_cloud_setup_guide.html

    Any pointer is much appreciated.

    Thanks in advance,

    Thiriloshini

  46. I want to install ONAP on OpenStack; please let me know what hardware configuration is required for this.