...

  1. Set up host names on both VMs and your local PC
    On both VMs and your local PC, edit /etc/hosts (sudo vi /etc/hosts) and add these lines (on Windows, the file is C:\Windows\System32\drivers\etc\hosts):

    172.30.1.74 message-router-zookeeper message-router-kafka dl-couchbase dl-mariadb dl-mongodb dl-es dl-hdfs dl-feeder dl-adminui
    172.30.1.75 dl-druid dl-superset
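
    To verify the new entries, you can resolve one of the names on Linux (a quick sanity check; any of the names above works):
    getent hosts dl-couchbase    # should print 172.30.1.74
    ping -c 1 dl-druid           # should reach 172.30.1.75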

  2. Install JDK 8 and Docker on both VMs and your local PC
    sudo apt install openjdk-8-jdk-headless
    Docker install document: https://docs.docker.com/install/linux/docker-ce/ubuntu/
    I installed Docker on a Linux VM running on my local Windows machine.

    Install Docker Compose: https://docs.docker.com/compose/install/ 
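
    To confirm the tools are available:
    java -version
    docker --version
    docker-compose --version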

  3. Set up the ONAP development environment
    (Ref Setting Up Your Development Environment)
    On your local PC,

    cd ~/.m2 (On Windows, it is C:\Users\your_name\.m2)
    mv settings.xml settings.xml-old
    wget https://raw.githubusercontent.com/onap/oparent/master/settings.xml

  4. Check out source code
    On both VMs and your local PC, check out the DataLake source code from https://gerrit.onap.org/r/#/admin/projects/dcaegen2/services to C:\git\onap\dcaegen2\services2 or ~/git/onap/dcaegen2/services2. DataLake Feeder is currently hosted in the ONAP repo as a DCAE component handler.
    If you already checked out the source code before, you may want to sync to the latest.
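
    For example (a sketch; the exact clone URL is shown on the Gerrit project page above):
    mkdir -p ~/git/onap/dcaegen2
    git clone https://gerrit.onap.org/r/dcaegen2/services ~/git/onap/dcaegen2/services2
    # to sync an existing checkout to the latest
    cd ~/git/onap/dcaegen2/services2 && git pull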

  5. Set up MariaDB
    (Ref https://mariadb.com/kb/en/library/installing-and-using-mariadb-via-docker/)
    On VM1,
    sudo docker run -p 3306:3306 --name mariadb -e MYSQL_ROOT_PASSWORD=mypass -d mariadb/server:10.3


    Connect to the database as root with the password above, then run

    GRANT ALL PRIVILEGES ON *.* TO dl@"%" IDENTIFIED BY 'dl1234' WITH GRANT OPTION;

    and scripts in these files:
    C:\git\onap\dcaegen2\services2\components\datalake-handler\feeder\src\assembly\scripts\init_db.sql
    C:\git\onap\dcaegen2\services2\components\datalake-handler\feeder\src\assembly\scripts\init_db_data.sql
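
    One way to run these (a sketch, assuming the container name and root password used above, and that the scripts create/select their own database):
    # interactive client, to paste the GRANT statement
    sudo docker exec -it mariadb mysql -uroot -pmypass
    # run the init scripts from the Linux checkout
    sudo docker exec -i mariadb mysql -uroot -pmypass < ~/git/onap/dcaegen2/services2/components/datalake-handler/feeder/src/assembly/scripts/init_db.sql
    sudo docker exec -i mariadb mysql -uroot -pmypass < ~/git/onap/dcaegen2/services2/components/datalake-handler/feeder/src/assembly/scripts/init_db_data.sql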

  6. Set up Kafka
    (Ref https://kafka.apache.org/quickstart)
    This and the following 2 steps describe setting up and using your own Kafka for development and testing. For using ONAP DMaaP, see step "Use DMaaP as data source".
    On VM1,

    mkdir ~/kafka
    cd ~/kafka
    wget http://archive.apache.org/dist/kafka/2.0.0/kafka_2.11-2.0.0.tgz
    tar -xzf kafka_2.11-2.0.0.tgz
    cd ~/kafka/kafka_2.11-2.0.0

    vi config/server.properties 
    change
    #listeners=PLAINTEXT://:9092
    to

    listeners=PLAINTEXT://172.30.1.74:9092

    To start Zookeeper and Kafka:
    cd ~/kafka/kafka_2.11-2.0.0
    nohup bin/zookeeper-server-start.sh config/zookeeper.properties > zk.log &
    nohup bin/kafka-server-start.sh config/server.properties > kf.log &

    For reference, here are the commands to stop them:
    bin/zookeeper-server-stop.sh
    bin/kafka-server-stop.sh
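
    To check that both processes are up (jps comes with the JDK installed earlier):
    jps    # should list QuorumPeerMain (ZooKeeper) and Kafka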


  7. Create test Kafka topics 
    On VM1,

    cd ~/kafka/kafka_2.11-2.0.0

    bin/kafka-topics.sh --create --zookeeper message-router-zookeeper:2181 --replication-factor 1 --partitions 1 --topic AAI-EVENT
    bin/kafka-topics.sh --create --zookeeper message-router-zookeeper:2181 --replication-factor 1 --partitions 1 --topic unauthenticated.DCAE_CL_OUTPUT
    bin/kafka-topics.sh --create --zookeeper message-router-zookeeper:2181 --replication-factor 1 --partitions 1 --topic unauthenticated.SEC_FAULT_OUTPUT
    bin/kafka-topics.sh --create --zookeeper message-router-zookeeper:2181 --replication-factor 1 --partitions 1 --topic msgrtr.apinode.metrics.dmaap

    In case you want to reset the topics, here are the commands to delete them:

    bin/kafka-topics.sh --zookeeper message-router-zookeeper:2181 --delete --topic AAI-EVENT
    bin/kafka-topics.sh --zookeeper message-router-zookeeper:2181 --delete --topic unauthenticated.DCAE_CL_OUTPUT
    bin/kafka-topics.sh --zookeeper message-router-zookeeper:2181 --delete --topic unauthenticated.SEC_FAULT_OUTPUT
    bin/kafka-topics.sh --zookeeper message-router-zookeeper:2181 --delete --topic msgrtr.apinode.metrics.dmaap
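
    To verify which topics exist, you can list them:
    bin/kafka-topics.sh --list --zookeeper message-router-zookeeper:2181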

  8. Load test data to Kafka
    The test data files were checked out from the source repository in the previous step, "Check out source code".
    On VM1,

    cd ~/kafka/kafka_2.11-2.0.0

    bin/kafka-console-producer.sh --broker-list message-router-kafka:9092 --topic AAI-EVENT < ~/git/onap/dcaegen2/services2/components/datalake-handler/feeder/src/main/resources/druid/AAI-EVENT-100.json
    bin/kafka-console-producer.sh --broker-list message-router-kafka:9092 --topic unauthenticated.DCAE_CL_OUTPUT < ~/git/onap/dcaegen2/services2/components/datalake-handler/feeder/src/main/resources/druid/DCAE_CL_OUTPUT-100.json
    bin/kafka-console-producer.sh --broker-list message-router-kafka:9092 --topic unauthenticated.SEC_FAULT_OUTPUT < ~/git/onap/dcaegen2/services2/components/datalake-handler/feeder/src/main/resources/druid/SEC_FAULT_OUTPUT-100.json
    bin/kafka-console-producer.sh --broker-list message-router-kafka:9092 --topic msgrtr.apinode.metrics.dmaap < ~/git/onap/dcaegen2/services2/components/datalake-handler/feeder/src/main/resources/druid/msgrtr.apinode.metrics.dmaap-100.json


    To check whether the data was loaded successfully, read it back:

    bin/kafka-console-consumer.sh --bootstrap-server message-router-kafka:9092 --topic AAI-EVENT --from-beginning
    bin/kafka-console-consumer.sh --bootstrap-server message-router-kafka:9092 --topic unauthenticated.DCAE_CL_OUTPUT --from-beginning
    bin/kafka-console-consumer.sh --bootstrap-server message-router-kafka:9092 --topic unauthenticated.SEC_FAULT_OUTPUT  --from-beginning 
    bin/kafka-console-consumer.sh --bootstrap-server message-router-kafka:9092 --topic msgrtr.apinode.metrics.dmaap --from-beginning

  9. Set up MongoDB
    On VM1,
    sudo docker run -d -p 27017:27017 --name mongodb mongo
    or to start a stopped one
    sudo docker start mongodb
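
    To verify MongoDB is responding (a sketch; the official mongo image ships the mongo shell):
    sudo docker exec mongodb mongo --quiet --eval 'db.runCommand({ping: 1})'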


  10. Set up Couchbase
    On VM1,
    • Start the Docker container
      sudo docker run -d --name couchbase -p 8091-8094:8091-8094 -p 11210:11210 couchbase/server-sandbox:6.0.0
      or to start a stopped one 
      sudo docker start couchbase
    • Create user and bucket

      Access http://dl-couchbase:8091/ , use login: "Administrator/password". 

      Create bucket "datalake", with memory quota 200MB.
      Create user dl/dl1234, with "Application Access" and "Views Admin" roles on bucket "datalake".
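
      To verify the bucket via the Couchbase REST API (a sketch, using the Administrator login above):
      curl -u Administrator:password http://dl-couchbase:8091/pools/default/buckets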

  11. Set up ElasticSearch & Kibana
    (Ref https://docs.swiftybeaver.com/article/33-install-elasticsearch-kibana-via-docker)
    On VM1,

    sudo docker run -d -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" --name elastic docker.elastic.co/elasticsearch/elasticsearch:7.1.1
    sudo docker run -d --link elastic:dl-es -e "ELASTICSEARCH_HOSTS=http://dl-es:9200" -p 5601:5601 --name kibana docker.elastic.co/kibana/kibana:7.1.1

    or to start the stopped ones
    sudo docker start elastic
    sudo docker start kibana
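
    To check that Elasticsearch is up:
    curl "http://dl-es:9200/_cluster/health?pretty"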

  12. Create test indices in ElasticSearch
    Indices should be auto-created by the DataLake Feeder.
    To access Kibana: http://dl-es:5601/ .
    In case you want to reset the indices, here are the commands to delete them:

    curl -X DELETE "dl-es:9200/aai-event?pretty"

    curl -X DELETE "dl-es:9200/unauthenticated.dcae_cl_output?pretty"

    curl -X DELETE "dl-es:9200/unauthenticated.sec_fault_output?pretty"

    curl -X DELETE "dl-es:9200/msgrtr.apinode.metrics.dmaap?pretty"


  13. Set up Druid
    (Ref http://druid.io/docs/latest/tutorials/index.html)

    We install Druid and Superset on VM2 because: (1) Druid uses port 8091, which is already taken by Couchbase on VM1; (2) Druid runs its own ZooKeeper, and we already run one on VM1. (The second conflict could be resolved by modifying the Druid configs, though.)
    mkdir ~/druid

    cd ~/druid
    wget http://apache.stu.edu.tw/incubator/druid/0.14.2-incubating/apache-druid-0.14.2-incubating-bin.tar.gz
    tar -xzf apache-druid-0.14.2-incubating-bin.tar.gz
    cd ~/druid/apache-druid-0.14.2-incubating
     
    vi ~/druid/apache-druid-0.14.2-incubating/quickstart/tutorial/conf/druid/middleManager/runtime.properties, update:
    druid.host=dl-druid
    druid.worker.capacity=30

    vi ~/druid/apache-druid-0.14.2-incubating/quickstart/tutorial/conf/druid/middleManager/jvm.config, update:
    -Xmx640m


    Install Zookeeper:
    curl https://archive.apache.org/dist/zookeeper/zookeeper-3.4.11/zookeeper-3.4.11.tar.gz -o zookeeper-3.4.11.tar.gz
    tar -xzf zookeeper-3.4.11.tar.gz
    mv zookeeper-3.4.11 zk

  14. Run Druid
    cd ~/druid/apache-druid-0.14.2-incubating
    nohup bin/supervise -c quickstart/tutorial/conf/tutorial-cluster.conf > log.txt &
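
    To check that the cluster came up (a sketch; with the tutorial config the Coordinator listens on 8081):
    curl http://dl-druid:8081/status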

  15. Submit Druid Kafka indexing service supervisors
    (Ref http://druid.io/docs/latest/tutorials/tutorial-kafka.html)
    We use the Druid Kafka indexing service to load data from Kafka. For each topic, we will need to submit a supervisor spec to Druid: 

    cd ~/

    curl -XPOST -H'Content-Type: application/json' -d @git/onap/dcaegen2/services2/components/datalake-handler/feeder/src/main/resources/druid/AAI-EVENT-kafka-supervisor.json http://dl-druid:8090/druid/indexer/v1/supervisor
    curl -XPOST -H'Content-Type: application/json' -d @git/onap/dcaegen2/services2/components/datalake-handler/feeder/src/main/resources/druid/DCAE_CL_OUTPUT-kafka-supervisor.json http://dl-druid:8090/druid/indexer/v1/supervisor
    curl -XPOST -H'Content-Type: application/json' -d @git/onap/dcaegen2/services2/components/datalake-handler/feeder/src/main/resources/druid/SEC_FAULT_OUTPUT-kafka-supervisor.json http://dl-druid:8090/druid/indexer/v1/supervisor
    curl -XPOST -H'Content-Type: application/json' -d @git/onap/dcaegen2/services2/components/datalake-handler/feeder/src/main/resources/druid/msgrtr.apinode.metrics.dmaap-kafka-supervisor.json http://dl-druid:8090/druid/indexer/v1/supervisor

    Windows version:
    curl -XPOST -H"Content-Type: application/json" -d @C:\git\onap\dcaegen2\services2\components\datalake-handler\feeder\src\main\resources\druid\AAI-EVENT-kafka-supervisor.json http://dl-druid:8090/druid/indexer/v1/supervisor
    curl -XPOST -H"Content-Type: application/json" -d @C:\git\onap\dcaegen2\services2\components\datalake-handler\feeder\src\main\resources\druid\DCAE_CL_OUTPUT-kafka-supervisor.json http://dl-druid:8090/druid/indexer/v1/supervisor
    curl -XPOST -H"Content-Type: application/json" -d @C:\git\onap\dcaegen2\services2\components\datalake-handler\feeder\src\main\resources\druid\SEC_FAULT_OUTPUT-kafka-supervisor.json http://dl-druid:8090/druid/indexer/v1/supervisor
    curl -XPOST -H"Content-Type: application/json" -d @C:\git\onap\dcaegen2\services2\components\datalake-handler\feeder\src\main\resources\druid\msgrtr.apinode.metrics.dmaap-kafka-supervisor.json http://dl-druid:8090/druid/indexer/v1/supervisor

    Druid tasks: http://dl-druid:8090 
    Druid datasource: http://dl-druid:8081/#/datasources 
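
    To confirm the supervisors were accepted, you can list them:
    curl http://dl-druid:8090/druid/indexer/v1/supervisor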

  16. Set up Superset
    (Ref https://superset.incubator.apache.org/installation.html#start-with-docker)
    On VM2,
    mkdir ~/superset

    cd ~/superset
    git clone https://github.com/apache/incubator-superset/
    cd ~/superset/incubator-superset/contrib/docker

    vi docker-compose.yml, add the external host dl-druid to service 'superset':

    extra_hosts:
    - "dl-druid:172.30.1.75"

    vi docker-init.sh, change:
    flask fab create-admin --app superset → fabmanager create-admin --app superset

    superset load_examples →  superset load-examples


    Then
    sudo docker-compose run superset ./docker-init.sh
    (This will take a while. You will be asked to provide a new username and password.)

  17.  Run Superset

    cd ~/superset/incubator-superset/contrib/docker
    sudo docker-compose up -d
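
    To check that the Superset containers are running:
    sudo docker-compose ps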

    Set up Druid as a data source
    Open http://dl-superset:8088/ , log in with the account created in step 'Set up Superset', go to Sources → Druid Clusters → Add a new record (the '+' sign), and set:
    Verbose Name=DataLake druid
    Broker Host=172.30.1.75
    Cluster=dl_druid

  18. Set up Hadoop/HDFS
    If you already have a Hadoop cluster, set 'dl-hdfs' to its NameNode IP in /etc/hosts. Otherwise, install a Cloudera QuickStart VM in Docker or other VM formats on VM1.
    Download image from http://www.cloudera.com/content/support/en/downloads/quickstart_vms.html.

    For Docker (Ref https://www.cloudera.com/documentation/enterprise/5-13-x/topics/quickstart_docker_container.html):
    gunzip cloudera-quickstart-vm-5.13.0-0-beta-docker.tar.gz
    tar -xvf cloudera-quickstart-vm-5.13.0-0-beta-docker.tar
    cd cloudera-quickstart-vm-5.13.0-0-beta-docker
    sudo docker import cloudera-quickstart-vm-5.13.0-0-beta-docker.tar
    sudo docker images
    sudo docker run --name=hadoop --hostname=quickstart.cloudera --privileged=true -t -i -p 7180:7180 -p 8020:8020 -p 50075:50075 -p 50010:50010 5d3a901291ef_replace_with_yours /usr/bin/docker-quickstart
    /home/cloudera/cloudera-manager --express

    Access Cloudera Manager via http://dl-hdfs:7180 , using login 'cloudera/cloudera', and start the cluster.

    On the QuickStart VM, create HDFS folder '/datalake', where the data will be stored, and assign it to user 'dl':
    sudo -u hdfs hadoop fs -mkdir /datalake
    sudo -u hdfs hadoop fs -chown dl /datalake
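
    To verify the folder and its owner (run on the QuickStart VM):
    sudo -u hdfs hadoop fs -ls /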

  19. Run DataLake Feeder in IDE

    The Feeder is a Spring Boot application. The entry point is org.onap.datalake.feeder.Application. Run the project in Eclipse as "Spring Boot App". Once started, the app reads the topic list from ZooKeeper, pulls data from these Kafka topics, and inserts the data into MongoDB, Couchbase, Elasticsearch and HDFS. The data loaded to Kafka in step 'Load test data to Kafka' should appear in all the databases/stores, and you should be able to view it with the UI tools installed above.

    The REST APIs provided by controllers are documented on Swagger page: http://localhost:1680/datalake/v1/swagger-ui.html .
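
    The Feeder status can also be checked from the command line (a sketch, assuming the default port 1680):
    curl http://localhost:1680/datalake/v1/feeder/status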

  20. Create Docker image for deployment
    To create the Docker image in your local development environment, Docker must be installed locally.
    cd ~/git/onap/dcaegen2/services2/components/datalake-handler/feeder
    mvn clean package -DskipTests
    sudo docker build -t moguobiao/datalake-feeder -f src/assembly/Dockerfile . (replace 'moguobiao' with your Docker Hub user name)

    Push the Docker image to Docker Hub:

    sudo docker login -u moguobiao -p password
    sudo docker push moguobiao/datalake-feeder

  21. Deploy Docker image 
    On VM1,
    sudo docker pull moguobiao/datalake-feeder
    sudo docker run -d -p 1680:1680 --name dl-feeder --add-host=message-router-kafka:172.30.1.74 --add-host=message-router-zookeeper:172.30.1.74 --add-host=dl-couchbase:172.30.1.74 --add-host=dl-mariadb:172.30.1.74 --add-host=dl-mongodb:172.30.1.74 --add-host=dl-hdfs:172.30.1.74 --add-host=dl-es:172.30.1.74 moguobiao/datalake-feeder
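
    To confirm the Feeder container started correctly (a sketch):
    sudo docker logs dl-feeder
    curl http://dl-feeder:1680/datalake/v1/feeder/status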

  22. Deploy AdminUI
    On VM1,
    1. Development mode
      1. Environment setup
        Install Node.js >= 10.9.0 and Angular CLI >= 7.
        Follow https://angular.io/guide/quickstart to set up the development environment.
        # cd ~/git/onap/dcaegen2/services2/components/datalake-handler/admin/src
        # npm install
      2. Mockup API server setup (optional)
        # npm run mockup
        # curl http://dl-adminui:1680/datalake/v1/feeder/status
        A 200 response means the mockup server is working.
      3. Run application
        # vim proxy.conf.json, modify the feeder IP address on line 3: "target": "http://dl-adminui:1680"
        If you don't enable the mockup server, use "target": "http://dl-feeder:1680" instead.
        # npm start (In Windows, ng serve --proxy-config proxy.conf.json)
        Access Admin UI page http://dl-adminui:4200
    2. Production mode with Docker
      # cd ~/git/onap/dcaegen2/services2/components/datalake-handler/admin
      # docker build -t datalake-adminui . --no-cache
      # docker run -d -p 80:80 --name dl-adminui --add-host=dl-feeder:172.30.1.74  datalake-adminui
      Access Admin UI page http://dl-adminui


...