
I am deploying ONAP Beijing behind a proxy environment. As part of the ONAP deployment, I am trying to bring up DCAE.

I have followed the steps below to bring up some of the DCAE services.

1) Locate k8s-config_binding_service.yaml

2) Change to the blueprints folder based on the above result and modify the imports to point to a local server where I am hosting all the files needed for DCAE, such as types.yaml, k8splugin_types.yaml, and so on (a sketch of such an edit is shown below).
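For illustration, one way to make that edit looks roughly like the following; the address 10.0.0.5 and the choice of k8s-inventory.yaml are placeholders for your own local server and blueprint file.

# hypothetical sketch: point a blueprint's import at the locally hosted copy of k8splugin_types.yaml
cd <blueprints-folder-found-in-step-1>
sed -i -E 's|- https?://.*k8splugin_types\.yaml|- http://10.0.0.5/k8splugin_types.yaml|' k8s-inventory.yaml
grep -n -A 3 imports k8s-inventory.yaml   # confirm the import now points at the local server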

The problem I am currently facing is that inventory, policy-handler, and service-change-handler are not coming up completely. The only log I got from service-change-handler is "unknown exception: consul".


Do I need to add anything else to solve these issues?


3 answers

  1.  

    Hi,

    I am installing ONAP using Heat templates behind a proxy and I was stuck with DCAE too. I have no idea about the k8s installation, but I hope this helps: the problem I had is that DCAE-VM-Init.sh sometimes installs things locally, so you need to read it and figure out where the proxy is needed and where it is not. It also calls 'register.sh', which creates the key/value entries, and that does not work behind a proxy either.
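    In case it helps, here is a minimal sketch of the kind of proxy environment those scripts need; the proxy host, port, and no_proxy list are placeholders to replace with your own values.

    # hypothetical proxy settings; export them in the shell that runs the install scripts
    export http_proxy=http://proxy.example.com:8080
    export https_proxy=http://proxy.example.com:8080
    export no_proxy=localhost,127.0.0.1    # keep local/internal traffic off the proxy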

    Hope this would help.

    Abir

    1. kranthi guttikonda

      Abir ELATTAR

      DCAE deployment with Heat and with OOM are different; they use different approaches. Even for Heat-based deployments you may need to run a local HTTP server in order to provide access to the Cloudify blueprints (see the sketch at the end of this comment).

      Just for reference, take a look at DCAEGEN2 fails at centos_vm stage.
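      For instance, a throwaway HTTP server is enough to host the blueprint and type files; the directory and port here are arbitrary placeholders.

      # serve the staged blueprint/type files over plain HTTP
      cd /var/www/html
      python3 -m http.server 8000
      # the blueprint imports would then reference http://<server-ip>:8000/<path>/types.yaml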

    2. Abir ELATTAR

      I am installing the Beijing release, and so far I haven't had any problems with CentOS or the Cloudify blueprints, but I am still installing ONAP, so I haven't used DCAE yet. Could you enlighten me about the Cloudify blueprints, please?

  2.  

    I am currently seeing a problem with the DCAE service-change-handler, as described here:

    dcae service_change_handler gets ASDC POL5000

  3.

      Bharath Thiruveedula

      Below are the steps I have used to bring up DCAE behind a proxy with the Beijing OOM release. However, I am currently seeing an issue with service-change-handler while talking to SDC. Give it a try and let me know.

      Run an Apache HTTP server internally and place beijing.tar.gz in the /var/www/html directory.

      Untar the file; it should create a directory called beijing along with its subdirectories.

      Replace xx.xx.xx.xx in the files with your Apache IP address (grep -nris "xx.xx.xx.xx" beijing, run from the /var/www/html directory). The output should look like the listing below; a sketch of doing the replacement with sed follows it.

      beijing/dcaepolicyplugin_types.yaml:23: - http://xx.xx.xx.xx/beijing/types.yaml
      beijing/k8splugin_types.yaml:23: - http://xx.xx.xx.xx/beijing/types.yaml
      beijing/sshkey_types.yaml:5: - http://xx.xx.xx.xx/beijing/types.yaml
      beijing/relationshipplugin_types.yaml:22: - http://xx.xx.xx.xx/beijing/types.yaml
      beijing/types.yaml:251: default: http://xx.xx.xx.xx/beijing/fs/mkfs.sh
      beijing/types.yaml:373: default: http://xx.xx.xx.xx/beijing/fs/fdisk.sh
      beijing/types.yaml:386: default: http://xx.xx.xx.xx/beijing/fs/mount.sh
      beijing/types.yaml:392: default: http://xx.xx.xx.xx/beijing/fs/unmount.sh
      beijing/types.yaml:615: source: http://xx.xx.xx.xx/beijing/policies/host_failure.clj
      beijing/types.yaml:647: source: http://xx.xx.xx.xx/beijing/policies/threshold.clj
      beijing/types.yaml:683: source: http://xx.xx.xx.xx/beijing/policies/ewma_stabilized.clj
      beijing/types.yaml:714: source: http://xx.xx.xx.xx/beijing/triggers/execute_workflow.clj
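      One way to do that replacement in bulk is sketched below; 10.0.0.5 is a placeholder for the Apache host's actual address.

      cd /var/www/html
      # substitute the xx.xx.xx.xx placeholder with the Apache server's IP in every file under beijing/
      grep -rls "xx.xx.xx.xx" beijing | xargs sed -i 's/xx\.xx\.xx\.xx/10.0.0.5/g'
      grep -nris "10.0.0.5" beijing   # re-run the grep to confirm every occurrence was rewritten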


      Assuming a single-host deployment with k8s (if not, the appropriate steps must be performed on the other hosts as well, based on the containers they host), download the DCAE-related images:

      docker pull registry.hub.docker.com/library/busybox:latest
      docker pull crunchydata/crunchy-postgres:centos7-10.3-1.8.2
      docker pull nexus3.onap.org:10001/onap/org.onap.dcaegen2.platform.configbinding:2.1.5
      docker pull nexus3.onap.org:10001/onap/org.onap.dcaegen2.platform.inventory-api:3.0.1
      docker pull postgres:9.5.2
      docker pull nexus3.onap.org:10001/onap/org.onap.dcaegen2.platform.servicechange-handler:1.1.4
      docker pull nexus3.onap.org:10001/onap/org.onap.dcaegen2.platform.deployment-handler:2.1.5
      docker pull nexus3.onap.org:10001/onap/org.onap.dcaegen2.platform.policy-handler:2.4.5
      docker pull nexus3.onap.org:10001/onap/org.onap.dcaegen2.deployments.tca-cdap-container:1.1.0
      docker pull nexus3.onap.org:10001/onap/org.onap.dcaegen2.collectors.ves.vescollector:1.2.0
      docker pull nexus3.onap.org:10001/onap/holmes/rule-management:1.1.0

      docker pull nexus3.onap.org:10001/onap/org.onap.dcaegen2.deployments.k8s-bootstrap-container:1.1.11

      docker pull nexus3.onap.org:10001/onap/org.onap.dcaegen2.deployments.cm-container:1.3.0
      docker pull nexus3.onap.org:10001/onap/org.onap.dcaegen2.deployments.healthcheck-container:1.1.0
      docker pull nexus3.onap.org:10001/onap/org.onap.dcaegen2.deployments.redis-cluster-container:1.0.0
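      If the pulls themselves fail behind the proxy, one common fix is to hand the proxy settings to the Docker daemon itself; this assumes a systemd-managed Docker, and the proxy host/port are placeholders.

      # give the Docker daemon the (hypothetical) proxy settings, then restart it
      mkdir -p /etc/systemd/system/docker.service.d
      printf '%s\n' '[Service]' \
        'Environment="HTTP_PROXY=http://proxy.example.com:8080"' \
        'Environment="HTTPS_PROXY=http://proxy.example.com:8080"' \
        'Environment="NO_PROXY=localhost,127.0.0.1"' \
        > /etc/systemd/system/docker.service.d/http-proxy.conf
      systemctl daemon-reload && systemctl restart docker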

      updatedb

      locate k8s-config_binding_service.yaml
      locate k8s-deployment_handler.yaml
      locate k8s-holmes-engine.yaml
      locate k8s-holmes-rules.yaml
      locate k8s-inventory.yaml
      locate k8s-pgaas-initdb.yaml
      locate k8s-policy_handler.yaml
      locate k8s-tca.yaml
      locate k8s-ves.yaml

      In each file, update the imports section with the appropriate hosted URL (this is an important step). For multiple hosts you may need to do the same (e.g., if the images were already downloaded or DCAE was deployed earlier). A quick sanity check is sketched below.
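      As a quick sanity check after editing (10.0.0.5 again stands in for the local server), something like this lists every remaining http import in the located blueprints:

      # print each import line so any blueprint still pointing at an external host stands out
      for f in $(locate k8s- | grep '\.yaml$'); do
          grep -Hn "^ *- http" "$f"
      done
      # every line printed should now reference http://10.0.0.5/... rather than an external URL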

      Now deploy DCAE using OOM (helm upgrade -i onap-dcae local/dcae --namespace onap); the exact command depends on your install.

      Watch the dcae-bootstrap container logs. It will bring up all the pods and containers.
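      For reference, progress can be followed roughly like this; the pod name is a placeholder for whatever kubectl reports in your deployment.

      kubectl -n onap get pods | grep dcae               # find the dcae-bootstrap pod name
      kubectl -n onap logs -f <dcae-bootstrap-pod-name>  # follow it while it deploys the DCAE components
      kubectl -n onap get pods | grep dep-               # the dep-* pods should appear and go Running one by one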

      1. Bharath Thiruveedula

        I followed similar steps. Have you faced any errors in the policy-handler pod? In my case, I am seeing UnknownHostException: consul.
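        One quick check, just as a sketch: confirm the Consul service exists in the namespace and that the name consul resolves via cluster DNS (busybox is already in the image list above).

        kubectl -n onap get svc | grep -i consul    # the consul service should be listed
        kubectl -n onap run dns-test --rm -it --restart=Never --image=busybox -- nslookup consul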

      2. kranthi guttikonda

        Bharath Thiruveedula, I do not see any problem with policy-handler; here are the logs. Make sure you modify all the files mentioned here. Also, the problem with service-change-handler is because DMaaP is not working properly behind the proxy.

        root@oom:~# kubectl -n onap logs -f dep-policy-handler-6b7dfbdd5c-9vs98 policy-handler
        APP_VER=2.4.5
        /etc/hosts
        # Kubernetes-managed hosts file.
        127.0.0.1 localhost
        ::1 localhost ip6-localhost ip6-loopback
        fe00::0 ip6-localnet
        fe00::0 ip6-mcastprefix
        fe00::1 ip6-allnodes
        fe00::2 ip6-allrouters
        10.42.189.38 policy-handler
        running policy_handler as 17 log logs/policy_handler.log
        Linux policy-handler 4.4.0-116-generic #140-Ubuntu SMP Mon Feb 12 21:23:04 UTC 2018 x86_64 GNU/Linux
        total used free shared buff/cache available
        Mem: 251G 78G 147G 146M 24G 171G
        Swap: 976M 0B 976M
        Filesystem Size Used Avail Use% Mounted on
        none 824G 71G 712G 10% /
        tmpfs 126G 0 126G 0% /dev
        tmpfs 126G 0 126G 0% /sys/fs/cgroup
        /dev/sda1 824G 71G 712G 10% /etc/hosts
        shm 64M 0 64M 0% /dev/shm
        tmpfs 126G 12K 126G 1% /run/secrets/kubernetes.io/serviceaccount
        tmpfs 126G 0 126G 0% /sys/firmware
        PID TTY STAT TIME MAJFL TRS DRS RSS %MEM COMMAND
        1 ? Ss 0:00 0 1023 19048 3556 0.0 /bin/bash ./run_policy.sh
        17 ? R 0:00 0 2 23025 6548 0.0 python -m policyhandler/policy_handler
        26 ? R 0:00 0 106 29865 1624 0.0 ps afxvw
        27 ? S 0:00 0 30 5949 692 0.0 tee -a logs/policy_handler.log
        Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port
        nl UNCONN 0 0 rtnl:1621 * sk=0 cb=0 groups=0x00000000
        nl UNCONN 0 0 rtnl:kernel * sk=0 cb=0 groups=0x00000000
        nl UNCONN 768 0 tcpdiag:kernel * sk=0 cb=0 groups=0x00000000
        nl UNCONN 4352 0 tcpdiag:ss/28 * sk=0 cb=0 groups=0x00000000
        nl UNCONN 0 0 xfrm:kernel * sk=0 cb=0 groups=0x00000000
        nl UNCONN 0 0 audit:kernel * sk=0 cb=0 groups=0x00000000
        nl UNCONN 0 0 fiblookup:kernel * sk=0 cb=0 groups=0x00000000
        nl UNCONN 0 0 nft:kernel * sk=0 cb=0 groups=0x00000000
        nl UNCONN 0 0 uevent:kernel * sk=0 cb=0 groups=0x00000000
        nl UNCONN 0 0 genl:kernel * sk=0 cb=0 groups=0x00000000



      3. Bharath Thiruveedula

        kranthi guttikonda, yup, it worked for me; I had missed one of the import statements in the blueprints. Thanks Kranthi!
