
I am trying to set up DCAEGEN2 using the manual installation approach. My environment is behind a proxy, so I had to apply a lot of workarounds to make it work (though I still haven't reached that state). Here are the steps I took just to bring the CentOS VM up.


1. As the Designate service is not running in my ONAP-installed OpenStack, we are supposed to create a surrogate OpenStack and add its details in the dnsaas_*.txt files in the dcae_bootstrap VM.


2. We have to make sure the "tenant name" and "region name" of both OpenStacks are the same.


3. If you are using Keystone v3 for either OpenStack, make sure the domain name matches the domain name in the A&AI payload used when registering pod25 and pod25dns. If you are using the "Default" domain name, then all is well :)


4. If you are behind a proxy, pass the proxy variables when running the boot container. Please find my docker run command below:


docker run -d --name boot \
  -v /opt/app/config:/opt/app/installer/config \
  -v /etc/ssl/certs/ca-certificates.crt:/etc/ssl/certs/ca-certificates.crt \
  -e "CLOUDIFY_SSL_TRUST_ALL=true" \
  -e "no_proxy=127.0.0.1,<some_local_ips>" \
  -e "http_proxy=<proxy>:80" \
  -e "https_proxy=<proxy>:80" \
  -e REQUESTS_CA_BUNDLE="/etc/ssl/certs/ca-certificates.crt" \
  -e "PYTHONHTTPSVERIFY=0" \
  -e "LOCATION=MYGL" \
  "nexus3.onap.org:10001/onap/org.onap.dcaegen2.deployments.bootstrap:v1.1.1.3" \
  /opt/app/installer/installer


5. For all SSL-based errors, refer to https://jira.onap.org/browse/DCAEGEN2-218. Thanks, Elena.


6. If you are using Keystone v3 for the main OpenStack and it is SSL-based, you have to tweak the MultiCloud code. The tweak is below:

  • Log in to the multi-service VM and then into the multicloud-windriver container
  • Open /opt/windriver/lib/newton/newton/requests/views/util.py
  • Change "return session.Session(auth=auth)" to "return session.Session(auth=auth, verify=False)"

7. If you make it through to launching "centos_vm", that's the first relief. But make sure your centos_vm has a "centos" user; if it is not present, the installation won't proceed. The approach I used was to change "SSHUSER" in the installer file in the boot container, commit the image with a new version, and update the dcae2_vm_init.sh file with the updated Docker image name.
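A sketch of that workaround, assuming the installer keeps the login user in an SSHUSER= assignment and that "ubuntu" is the user actually present on your image (both are assumptions; check your installer file and your image):

```shell
# Stand-in for the installer file pulled out of the boot container,
# e.g. with: docker cp boot:/opt/app/installer/installer .
printf 'SSHUSER=centos\n' > installer
# Swap in whichever user actually exists on your CentOS image;
# "ubuntu" here is only a placeholder.
sed -i 's/^SSHUSER=.*/SSHUSER=ubuntu/' installer
```

After the edit, copy the file back with docker cp, commit the container under a new tag with docker commit, and point dcae2_vm_init.sh at that tag.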


8. The next blocker is the following line:


PVTIP=$(ssh $SSHOPTS -i "$PVTKEY" "$SSHUSER"@"$PUBIP" 'echo PVTIP=`curl --silent http://169.254.169.254/2009-04-04/meta-data/local-ipv4`' | grep PVTIP | sed 's/PVTIP=//')


If you are using a proxy, set no_proxy for "169.254.169.254", for example:


PVTIP=$(ssh $SSHOPTS -i "$PVTKEY" "$SSHUSER"@"$PUBIP" 'export no_proxy=$no_proxy,169.254.169.254; echo PVTIP=`curl --silent http://169.254.169.254/2009-04-04/meta-data/local-ipv4`' | grep PVTIP | sed 's/PVTIP=//')


9. The boot container launches centos_vm with a cloud-init script. To get the cloud-init scripts to install successfully, I need to add proxy details, which is turning into a rabbit hole for me. I am currently blocked here with the following logs:


2018-02-01 12:42:27 CFY <manager> [elasticsearch_1eb63.creation] Sending task 'fabric_plugin.tasks.run_script'

2018-02-01 12:42:27 CFY <manager> [amqp_influx_895ee.creation] Task started 'fabric_plugin.tasks.run_script'

2018-02-01 12:42:27 LOG <manager> [amqp_influx_895ee.creation] INFO: Preparing fabric environment...

2018-02-01 12:42:27 LOG <manager> [amqp_influx_895ee.creation] INFO: Environment prepared successfully

[floating_ip] out: Traceback (most recent call last):

[floating_ip] out:   File "/tmp/cloudify-ctx/ctx", line 139, in <module>

[floating_ip] out:     main()

[floating_ip] out:   File "/tmp/cloudify-ctx/ctx", line 128, in main

[floating_ip] out:     args.timeout)

[floating_ip] out:   File "/tmp/cloudify-ctx/ctx", line 84, in client_req

[floating_ip] out:     response = request_method(socket_url, request, timeout)

[floating_ip] out:   File "/tmp/cloudify-ctx/ctx", line 65, in http_client_req

[floating_ip] out:     timeout=timeout)

[floating_ip] out:   File "/usr/lib64/python2.7/urllib2.py", line 127, in urlopen

[floating_ip] out:     return _opener.open(url, data, timeout)

[floating_ip] out:   File "/usr/lib64/python2.7/urllib2.py", line 410, in open

[floating_ip] out:     response = meth(req, response)

[floating_ip] out:   File "/usr/lib64/python2.7/urllib2.py", line 523, in http_response

[floating_ip] out:     'http', request, response, code, msg, hdrs)

[floating_ip] out:   File "/usr/lib64/python2.7/urllib2.py", line 448, in error

[floating_ip] out:     return self._call_chain(*args)

[floating_ip] out:   File "/usr/lib64/python2.7/urllib2.py", line 382, in _call_chain

[floating_ip] out:     result = func(*args)

[floating_ip] out:   File "/usr/lib64/python2.7/urllib2.py", line 531, in http_error_default

[floating_ip] out:     raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)

[floating_ip] out: urllib2.HTTPError: HTTP Error 503: Service Unavailable

[floating_ip] out: Traceback (most recent call last):

[floating_ip] out:   File "/tmp/cloudify-ctx/scripts/tmp1kbocD-creation_validation.py-SXMU13V0", line 9, in <module>

[floating_ip] out:     join(dirname(__file__), 'utils.py'))

[floating_ip] out:   File "/tmp/cloudify-ctx/cloudify.py", line 247, in download_resource

[floating_ip] out:     return check_output(cmd)

[floating_ip] out:   File "/tmp/cloudify-ctx/cloudify.py", line 32, in check_output

[floating_ip] out:     raise error

[floating_ip] out: subprocess.CalledProcessError: Command '['ctx', 'download-resource', 'components/utils.py', '/tmp/cloudify-ctx/scripts/utils.py']' returned non-zero exit status 1



Please find the full logs in the attachment.


Apologies for the long mail. But I wrote up all the issues I faced because:

1) it may help others who are in the same kind of environment as me, and

2) I would like suggestions from the community in case anything in my workarounds is wrong.


I also tried to set up DCAE in standalone mode, but it seems that in standalone mode we can see only the data flow, not policy enforcement, the triggering of SO for certain events, etc.


Any help in this area is much appreciated.


2 answers

  1.  

    Bharath Thiruveedula, I installed DCAE using the ONAP Heat template, but it performs technically the same installation. This may take a lot of time, so please be aware of that. Hopefully my changes will be useful for you (just beware that there are a lot of changes, which require patience). I tried almost all scenarios, and nothing worked for me except this one.

    Do not pass the environment variables directly to the container; this creates problems for Cloudify while fetching packages.

    Take a look at the installer file (which runs inside the Docker container named boot) and observe the changes.

    You need to run an Apache server locally and download all the files (wherever $apache_ip is mentioned). You can refer to the external URL just above each change.

    Once you download the files, make sure you review each one of them and update the import settings.

    For example, in cdap_broker.yaml:

    imports:

    #- http://www.getcloudify.org/spec/cloudify/3.4/types.yaml
    - http://<local_ip>/types.yaml
    #- https://nexus.onap.org/service/local/repositories/raw/content/org.onap.dcaegen2.platform.plugins/releases/type_files/cdapcloudify/14/cdapcloudify_types.yaml
    - http://<local_ip>/type_files/cdapcloudify/14/cdapcloudify_types.yaml
    #- https://nexus.onap.org/service/local/repositories/raw/content/org.onap.dcaegen2.platform.plugins/releases/type_files/dockerplugin/2/dockerplugin_types.yaml
    - http://<local_ip>/type_files/dockerplugin/2/dockerplugin_types.yaml
    #- https://nexus.onap.org/service/local/repositories/raw/content/org.onap.dcaegen2.platform.plugins/releases/type_files/relationshipplugin/1/relationshipplugin_types.yaml
    - http://<local_ip>/type_files/relationshipplugin/1/relationshipplugin_types.yaml
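    A self-contained sketch of that rewrite on a trimmed cdap_broker.yaml (LOCAL_IP is a placeholder for your Apache host; back up the real blueprints before running sed -i over them):

```shell
# Self-contained sketch: retarget blueprint imports at a local mirror.
# LOCAL_IP is a placeholder for your Apache host.
LOCAL_IP=192.0.2.10

# Trimmed stand-in for a blueprint's imports section:
cat > cdap_broker.yaml <<'EOF'
imports:
- http://www.getcloudify.org/spec/cloudify/3.4/types.yaml
- https://nexus.onap.org/service/local/repositories/raw/content/org.onap.dcaegen2.platform.plugins/releases/type_files/cdapcloudify/14/cdapcloudify_types.yaml
EOF

# Rewrite both upstream hosts to the mirror:
sed -i \
  -e "s|https://nexus.onap.org/service/local/repositories/raw/content/org.onap.dcaegen2.platform.plugins/releases|http://${LOCAL_IP}|g" \
  -e "s|http://www.getcloudify.org/spec/cloudify/3.4|http://${LOCAL_IP}|g" \
  cdap_broker.yaml
```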

    Also, you have to update the Cloudify cloud-init scripts (Cloudify uses the OpenStack plugin to deploy DCAE components in the OpenStack cloud).

    Here is the list of files you have to download; please follow the script to fetch them from the external URLs:

    cloudify-manager-resources_3.4.0-ga-b400.tar.gz

    consul_0.8.3_linux_amd64.zip

    3.4.tar.gz

    All other files are attached here (hopefully nothing has changed). Note that these files are for the Amsterdam release; you may have to investigate if you see any errors.

    After extracting the files onto the hosting server (/var/www/html on a plain Linux Apache2 install), please search for xxx.xxx.xxx.xxx:

    xxx.xxx.xxx.xxx represents both the hosting server and the proxy, so make sure you substitute the proxy address only where a proxy is actually meant. Before that, review the changes and files:

    grep -nris "xxx.xxx.xxx.xxx" 
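    The Nexus path layout implied by the rewritten imports can be recreated under the docroot with a sketch like this (DOCROOT here is a stand-in for /var/www/html; the version directories are taken from the imports in cdap_broker.yaml, and the downloaded type files must then be copied into them):

```shell
# Sketch: recreate the Nexus path layout under the Apache docroot so the
# rewritten imports resolve. DOCROOT is a stand-in for /var/www/html.
DOCROOT=./mirror
mkdir -p "$DOCROOT/type_files/cdapcloudify/14" \
         "$DOCROOT/type_files/dockerplugin/2" \
         "$DOCROOT/type_files/relationshipplugin/1"
```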

    1. Bharath Thiruveedula

      Thanks a lot, kranthi guttikonda.


      Is the archive file you uploaded the one to be served using Apache after changing the contents?

    2. kranthi guttikonda

      Yes, Bharath Thiruveedula, along with:

      cloudify-manager-resources_3.4.0-ga-b400.tar.gz

      consul_0.8.3_linux_amd64.zip

      3.4.tar.gz

      To download these, look into the installer script (for the URLs).

    3. Arindam Mondal

      Hi,

      kranthi guttikonda, you mentioned that "Also you have to update the cloudify cloud-init scripts". Could you please tell me which scripts you are talking about?

    4. Arindam Mondal

      Currently I'm getting an error at this step:

       cfy bootstrap --install-plugins -p bootstrap-blueprint.yaml -i bootstrap-inputs.yaml

      Here is the error log. I would appreciate it if anyone has a solution for this.


      2018-03-28 07:02:31 LOG <manager> [elasticsearch_45a04.create] INFO: Deploying blueprint resource components/elasticsearch/scripts/rotate_es_indices to /etc/cron.daily/rotate_es_indices
      2018-03-28 07:02:32 LOG <manager> [elasticsearch_45a04.create] INFO: chowning /etc/cron.daily/rotate_es_indices by root:root...
      2018-03-28 07:02:32 LOG <manager> [elasticsearch_45a04.create] INFO: Enabling systemd service elasticsearch...
      2018-03-28 07:02:32 LOG <manager> [elasticsearch_45a04.create] INFO: Waiting for 192.168.0.5:9200 to become available...
      2018-03-28 07:02:33 LOG <manager> [elasticsearch_45a04.create] INFO: 192.168.0.5:9200 is not available yet, retrying... (1/24)
      2018-03-28 07:02:35 LOG <manager> [elasticsearch_45a04.create] INFO: 192.168.0.5:9200 is not available yet, retrying... (2/24)
      2018-03-28 07:02:37 LOG <manager> [elasticsearch_45a04.create] INFO: 192.168.0.5:9200 is not available yet, retrying... (3/24)
      2018-03-28 07:02:39 LOG <manager> [elasticsearch_45a04.create] INFO: 192.168.0.5:9200 is not available yet, retrying... (4/24)
      2018-03-28 07:02:41 LOG <manager> [elasticsearch_45a04.create] INFO: 192.168.0.5:9200 is not available yet, retrying... (5/24)
      2018-03-28 07:02:43 LOG <manager> [elasticsearch_45a04.create] INFO: 192.168.0.5:9200 is not available yet, retrying... (6/24)
      2018-03-28 07:02:45 LOG <manager> [elasticsearch_45a04.create] INFO: 192.168.0.5:9200 is not available yet, retrying... (7/24)
      2018-03-28 07:02:47 LOG <manager> [elasticsearch_45a04.create] INFO: 192.168.0.5:9200 is not available yet, retrying... (8/24)
      2018-03-28 07:02:50 LOG <manager> [elasticsearch_45a04.create] INFO: 192.168.0.5:9200 is open!
      2018-03-28 07:02:50 LOG <manager> [elasticsearch_45a04.create] INFO: Deleting `cloudify_storage` index if exists...
      2018-03-28 07:02:50 LOG <manager> [elasticsearch_45a04.create] INFO: Failed to DELETE http://192.168.0.5:9200/cloudify_storage/ (reason: Not Found)
      2018-03-28 07:02:50 LOG <manager> [elasticsearch_45a04.create] INFO: Creating `cloudify_storage` index...
      2018-03-28 07:02:50 LOG <manager> [elasticsearch_45a04.create] INFO: Declaring blueprint mapping...
      2018-03-28 07:02:51 LOG <manager> [elasticsearch_45a04.create] INFO: Declaring deployment mapping...
      2018-03-28 07:02:51 LOG <manager> [elasticsearch_45a04.create] INFO: Declaring execution mapping...
      2018-03-28 07:02:51 LOG <manager> [elasticsearch_45a04.create] INFO: Declaring node mapping...
      2018-03-28 07:02:51 LOG <manager> [elasticsearch_45a04.create] INFO: Declaring node instance mapping...
      2018-03-28 07:02:51 LOG <manager> [elasticsearch_45a04.create] INFO: Declaring deployment modification mapping...
      2018-03-28 07:02:51 LOG <manager> [elasticsearch_45a04.create] INFO: Declaring deployment update mapping...
      2018-03-28 07:02:51 LOG <manager> [elasticsearch_45a04.create] INFO: Waiting for shards to be active...
      2018-03-28 07:02:52 CFY <manager> [elasticsearch_45a04.create] Task succeeded 'fabric_plugin.tasks.run_script'
      2018-03-28 07:02:52 CFY <manager> [amqp_influx_010b6] Configuring node
      2018-03-28 07:02:52 CFY <manager> [elasticsearch_45a04] Configuring node
      2018-03-28 07:02:52 CFY <manager> [amqp_influx_010b6] Starting node
      2018-03-28 07:02:53 CFY <manager> [amqp_influx_010b6.start] Sending task 'fabric_plugin.tasks.run_script'
      2018-03-28 07:02:53 CFY <manager> [amqp_influx_010b6.start] Task started 'fabric_plugin.tasks.run_script'
      2018-03-28 07:02:53 LOG <manager> [amqp_influx_010b6.start] INFO: Preparing fabric environment...
      2018-03-28 07:02:53 LOG <manager> [amqp_influx_010b6.start] INFO: Environment prepared successfully
      2018-03-28 07:02:53 LOG <manager> [amqp_influx_010b6.start] INFO: Starting AMQP-Influx Broker Service...
      2018-03-28 07:03:23 LOG <manager> [amqp_influx_010b6.start] INFO: Starting systemd service cloudify-amqpinflux...
      Traceback (most recent call last):
      File "/usr/lib/python2.7/wsgiref/handlers.py", line 86, in run
      self.finish_response()
      File "/usr/lib/python2.7/wsgiref/handlers.py", line 128, in finish_response
      self.write(data)
      File "/usr/lib/python2.7/wsgiref/handlers.py", line 212, in write
      self.send_headers()
      File "/usr/lib/python2.7/wsgiref/handlers.py", line 270, in send_headers
      self.send_preamble()
      File "/usr/lib/python2.7/wsgiref/handlers.py", line 194, in send_preamble
      'Date: %s\r\n' % format_date_time(time.time())
      File "/usr/lib/python2.7/socket.py", line 328, in write
      self.flush()
      File "/usr/lib/python2.7/socket.py", line 307, in flush
      self._sock.sendall(view[write_offset:write_offset+buffer_size])
      error: [Errno 32] Broken pipe
      [172.16.1.62] out: Traceback (most recent call last):
      [172.16.1.62] out: File "/tmp/cloudify-ctx/ctx", line 139, in <module>
      [172.16.1.62] out: main()
      [172.16.1.62] out: File "/tmp/cloudify-ctx/ctx", line 128, in main
      [172.16.1.62] out: args.timeout)
      [172.16.1.62] out: File "/tmp/cloudify-ctx/ctx", line 84, in client_req
      [172.16.1.62] out: response = request_method(socket_url, request, timeout)
      [172.16.1.62] out: File "/tmp/cloudify-ctx/ctx", line 65, in http_client_req
      [172.16.1.62] out: timeout=timeout)
      [172.16.1.62] out: File "/usr/lib64/python2.7/urllib2.py", line 154, in urlopen
      [172.16.1.62] out: return opener.open(url, data, timeout)
      [172.16.1.62] out: File "/usr/lib64/python2.7/urllib2.py", line 431, in open
      [172.16.1.62] out: response = self._open(req, data)
      [172.16.1.62] out: File "/usr/lib64/python2.7/urllib2.py", line 449, in _open
      [172.16.1.62] out: '_open', req)
      [172.16.1.62] out: File "/usr/lib64/python2.7/urllib2.py", line 409, in _call_chain
      [172.16.1.62] out: result = func(*args)
      [172.16.1.62] out: File "/usr/lib64/python2.7/urllib2.py", line 1244, in http_open
      [172.16.1.62] out: return self.do_open(httplib.HTTPConnection, req)
      [172.16.1.62] out: File "/usr/lib64/python2.7/urllib2.py", line 1217, in do_open
      [172.16.1.62] out: r = h.getresponse(buffering=True)
      [172.16.1.62] out: File "/usr/lib64/python2.7/httplib.py", line 1089, in getresponse
      [172.16.1.62] out: response.begin()
      [172.16.1.62] out: File "/usr/lib64/python2.7/httplib.py", line 444, in begin
      [172.16.1.62] out: version, status, reason = self._read_status()
      [172.16.1.62] out: File "/usr/lib64/python2.7/httplib.py", line 400, in _read_status
      [172.16.1.62] out: line = self.fp.readline(_MAXLINE + 1)
      [172.16.1.62] out: File "/usr/lib64/python2.7/socket.py", line 476, in readline
      [172.16.1.62] out: data = self._sock.recv(self._rbufsize)
      [172.16.1.62] out: socket.timeout: timed out
      [172.16.1.62] out: Traceback (most recent call last):
      [172.16.1.62] out: File "/tmp/cloudify-ctx/scripts/tmpTwdgO3-start.py-76RC18XP", line 16, in <module>
      [172.16.1.62] out: utils.start_service(AMQPINFLUX_SERVICE_NAME)
      [172.16.1.62] out: File "/tmp/cloudify-ctx/scripts/utils.py", line 1099, in start_service
      [172.16.1.62] out: systemd.start(service_name, append_prefix=append_prefix)
      [172.16.1.62] out: File "/tmp/cloudify-ctx/scripts/utils.py", line 498, in start
      [172.16.1.62] out: .format(full_service_name))
      [172.16.1.62] out: File "/tmp/cloudify-ctx/cloudify.py", line 56, in info
      [172.16.1.62] out: return self._logger(level='info', message=message)
      [172.16.1.62] out: File "/tmp/cloudify-ctx/cloudify.py", line 50, in _logger
      [172.16.1.62] out: return check_output(cmd)
      [172.16.1.62] out: File "/tmp/cloudify-ctx/cloudify.py", line 32, in check_output
      [172.16.1.62] out: raise error
      [172.16.1.62] out: subprocess.CalledProcessError: Command '['ctx', 'logger', 'info', 'Starting systemd service cloudify-amqpinflux...']' returned non-zero exit status 1
      [172.16.1.62] out:

      Fatal error: run() received nonzero return code 1 while executing!

      Requested: source /tmp/cloudify-ctx/scripts/env-tmpTwdgO3-start.py-76RC18XP && /tmp/cloudify-ctx/scripts/tmpTwdgO3-start.py-76RC18XP
      Executed: /bin/bash -l -c "cd /tmp/cloudify-ctx/work && source /tmp/cloudify-ctx/scripts/env-tmpTwdgO3-start.py-76RC18XP && /tmp/cloudify-ctx/scripts/tmpTwdgO3-start.py-76RC18XP"

      Aborting.
      2018-03-28 07:03:23 CFY <manager> [elasticsearch_45a04] Starting node
      2018-03-28 07:03:23 CFY <manager> [amqp_influx_010b6.start] Task failed 'fabric_plugin.tasks.run_script' -> run() received nonzero return code 1 while executing!

      Requested: source /tmp/cloudify-ctx/scripts/env-tmpTwdgO3-start.py-76RC18XP && /tmp/cloudify-ctx/scripts/tmpTwdgO3-start.py-76RC18XP
      Executed: /bin/bash -l -c "cd /tmp/cloudify-ctx/work && source /tmp/cloudify-ctx/scripts/env-tmpTwdgO3-start.py-76RC18XP && /tmp/cloudify-ctx/scripts/tmpTwdgO3-start.py-76RC18XP"
      2018-03-28 07:03:23 CFY <manager> [elasticsearch_45a04.start] Sending task 'fabric_plugin.tasks.run_script'
      2018-03-28 07:03:23 CFY <manager> [elasticsearch_45a04.start] Task started 'fabric_plugin.tasks.run_script'
      2018-03-28 07:03:23 LOG <manager> [elasticsearch_45a04.start] INFO: Preparing fabric environment...
      2018-03-28 07:03:23 LOG <manager> [elasticsearch_45a04.start] INFO: Environment prepared successfully
      2018-03-28 07:03:24 LOG <manager> [elasticsearch_45a04.start] INFO: Starting Elasticsearch Service...
      2018-03-28 07:03:24 LOG <manager> [elasticsearch_45a04.start] INFO: Starting systemd service elasticsearch...
      2018-03-28 07:03:24 LOG <manager> [elasticsearch_45a04.start] INFO: elasticsearch is running

      ..............

      ..............

      ..............

      Bootstrap failed! (400: Failed during plugin installation. (ExecutionFailure: Error occurred while executing the install_plugin system workflow : ProcessExecutionError - RuntimeError: RuntimeError: Workflow failed: Task failed 'cloudify_agent.operations.install_plugins' -> Managed plugin installation found but its ID does not match the ID of the plugin currently on the manager. [existing: ff8642cd-5c74-4435-a6f8-c3c1c09ac713, new: 

    5. kranthi guttikonda

      Hi Arindam Mondal, if you are behind a proxy, did you make sure yum_proxy.sh contains no_proxy information for your lab hosts? It should contain the CentOS VM's private address, its floating address, 127.0.0.1, localhost, etc. Have a look at the files provided.
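      A sketch of the entries yum_proxy.sh should carry (every address below is a placeholder for your proxy host, the CentOS VM's private IP, and its floating IP):

```shell
# Sketch of a yum_proxy.sh with the needed no_proxy entries; all addresses
# are placeholders and must be replaced with your lab's values.
cat > yum_proxy.sh <<'EOF'
export http_proxy="http://proxy.example.com:80"
export https_proxy="http://proxy.example.com:80"
export no_proxy="127.0.0.1,localhost,169.254.169.254,10.0.0.5,203.0.113.7"
EOF
```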

      The error is certainly not caused by "error: [Errno 32] Broken pipe".

      It seems the failure happened while uploading the plugin to nginx:

      Bootstrap failed! (400: Failed during plugin installation. (ExecutionFailure: Error occurred while executing the install_plugin system workflow : ProcessExecutionError - RuntimeError: RuntimeError: Workflow failed: Task failed 'cloudify_agent.operations.install_plugins' -> Managed plugin installation found but its ID does not match the ID of the plugin currently on the manager. [existing: ff8642cd-5c74-4435-a6f8-c3c1c09ac713, new: d6d56d8b-8f6b-4f4c-b048-ea96e565cfad]))
      Executing teardown due to failed bootstrap...


      So, just make sure the no_proxy environment variable is set properly, then delete the boot container and the CentOS VM (clear all OpenStack network elements) and rerun ./dcae_vm_init.sh.

    6. Arindam Mondal

      Hi kranthi guttikonda,

      Thanks a lot for your help.

      Could you please provide the "simple-manager-blueprint.yaml" file as well, because it is not in the attachment?

      Also, I found what looks like a typo in the installer script. Is it okay to keep it as is?


      wget http://$apache_ip/cloudify-manager-blueprints-3.4/simple-manager-blueprint.yaml
      cp simple-manager-blueprint.yaml bootstrap-blueprint.yaml
      ed bootstrap-blueprint.yaml <<'!EOF'
      /^node_types:/-1a
        plugin_resources:
          description: >
            Holds any archives that should be uploaded to the manager.
          default: []
        dsl_resources:
          description: >
            Holds a set of dsl required resources
          default: []
      .
      /^        upload_resources:/a
                plugin_resources: { get_input: plugin_resources }
      .
      w
      q
      !EOF
      
      
      

      You can see it says "ed" instead of "sed", and there are also some unwanted characters at the end.
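      For what it's worth, the here-document above is valid ed(1) append syntax (a adds lines, . ends input, w/q write and quit), so "ed" may well be intentional rather than a typo. A self-contained sketch of the same edit using awk, in case ed is unavailable:

```shell
# Stand-in blueprint with the anchor line the installer's ed script edits:
cat > bootstrap-blueprint.yaml <<'EOF'
inputs:
node_types:
EOF

# Insert the two extra inputs just before node_types:, mirroring the
# /^node_types:/-1a append in the ed here-document:
awk '/^node_types:/ {
       print "  plugin_resources:"
       print "    default: []"
       print "  dsl_resources:"
       print "    default: []"
     }
     { print }' bootstrap-blueprint.yaml > tmp.yaml && mv tmp.yaml bootstrap-blueprint.yaml
```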

    7. kranthi guttikonda

      Hi Arindam Mondal, attaching cloudify-manager-blueprints-3.4.zip, which contains all the files.

      cloudify-manager-blueprints-3.4.zip

      Actually, I don't remember changing the installer script near the mentioned snippet. You can compare the original that comes with the DCAE image against the one provided. Perhaps I am wrong and the unwanted characters were inserted.

    8. Arindam Mondal

      Dear kranthi guttikonda,

      Thanks again for your help. 

  2.  

    Here is the installer file
