  1. Setup host names on both VMs and your local PC
    On both the VMs and your local PC, sudo vi /etc/hosts and add these lines (on Windows, the file is C:\Windows\System32\drivers\etc\hosts):

    172.30.1.74 message-router-zookeeper message-router-kafka dl_couchbase dl_mariadb dl_mongodb dl_es dlhdfs dl_feeder dl_adminui
    172.30.1.75 dl_druid dl_superset

  2. Install JDK 8 and Docker on both VMs and your local PC
    sudo apt install openjdk-8-jdk-headless
    Docker install document: https://docs.docker.com/install/linux/docker-ce/ubuntu/
    I installed Docker on a Linux VM running on my local Windows machine.

    Install Docker Compose: https://docs.docker.com/compose/install/ 

  3. Setup ONAP development environment
    (Ref Setting Up Your Development Environment)
    On your local PC,

    cd ~/.m2 (On Windows, it is C:\Users\your_name\.m2)
    mv settings.xml settings.xml-old
    wget https://raw.githubusercontent.com/onap/oparent/master/settings.xml

  4. Check out source code
    On both VMs and your local PC, check out the DataLake source code from https://gerrit.onap.org/r/#/admin/projects/dcaegen2/services to C:\git\onap\dcaegen2\services2 or ~/git/onap/dcaegen2/services2. Currently the DataLake Feeder is hosted in the ONAP repo as a DCAE component handler.
    If you have already checked out the source code, you may want to sync to the latest revision.

  5. Setup MariaDB
    (Ref https://mariadb.com/kb/en/library/installing-and-using-mariadb-via-docker/)
    On VM1,
    sudo docker run -p 3306:3306 --name mariadb -e MYSQL_ROOT_PASSWORD=mypass -d mariadb/server:10.3


    Connect to the database as root with the password set above (see the example at the end of this step), then run

    GRANT ALL PRIVILEGES ON *.* TO dl@"%" IDENTIFIED BY 'dl1234' WITH GRANT OPTION;

    and the scripts in C:\git\onap\dcaegen2\services2\components\datalake-handler\feeder\src\assembly\scripts\init_db.sql
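
    If you do not have a MySQL client installed locally, one option (a sketch, assuming the mariadb/server image ships the mysql client and that init_db.sql has been copied to the VM's home directory) is to use the client inside the container:

    sudo docker exec -it mariadb mysql -uroot -pmypass
    sudo docker exec -i mariadb mysql -uroot -pmypass < ~/init_db.sql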

  6. Setup Kafka
    (Ref https://kafka.apache.org/quickstart)
    This and the following two steps describe setting up and using your own Kafka for development and testing. To use ONAP DMaaP instead, see step "Use DMaaP as data source".
    On VM1,

    mkdir ~/kafka
    cd ~/kafka
    wget http://archive.apache.org/dist/kafka/2.0.0/kafka_2.11-2.0.0.tgz
    tar -xzf kafka_2.11-2.0.0.tgz
    cd ~/kafka/kafka_2.11-2.0.0

    vi config/server.properties 
    change
    #listeners=PLAINTEXT://:9092
    to

    listeners=PLAINTEXT://172.30.1.74:9092

    To start Zookeeper and Kafka:
    cd ~/kafka/kafka_2.11-2.0.0
    nohup bin/zookeeper-server-start.sh config/zookeeper.properties > zk.log &
    nohup bin/kafka-server-start.sh config/server.properties > kf.log &

    For reference, here are the commands to stop them:
    bin/zookeeper-server-stop.sh
    bin/kafka-server-stop.sh
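
    A quick sanity check that the broker is up (kafka-broker-api-versions.sh ships with the Kafka distribution):
    bin/kafka-broker-api-versions.sh --bootstrap-server message-router-kafka:9092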


  7. Create test Kafka topics 
    On VM1,

    cd ~/kafka/kafka_2.11-2.0.0

    bin/kafka-topics.sh --create --zookeeper message-router-zookeeper:2181 --replication-factor 1 --partitions 1 --topic AAI-EVENT
    bin/kafka-topics.sh --create --zookeeper message-router-zookeeper:2181 --replication-factor 1 --partitions 1 --topic unauthenticated.DCAE_CL_OUTPUT
    bin/kafka-topics.sh --create --zookeeper message-router-zookeeper:2181 --replication-factor 1 --partitions 1 --topic unauthenticated.SEC_FAULT_OUTPUT
    bin/kafka-topics.sh --create --zookeeper message-router-zookeeper:2181 --replication-factor 1 --partitions 1 --topic msgrtr.apinode.metrics.dmaap

    In case you want to reset the topics, here are the commands to delete them:

    bin/kafka-topics.sh --zookeeper message-router-zookeeper:2181 --delete --topic AAI-EVENT
    bin/kafka-topics.sh --zookeeper message-router-zookeeper:2181 --delete --topic unauthenticated.DCAE_CL_OUTPUT
    bin/kafka-topics.sh --zookeeper message-router-zookeeper:2181 --delete --topic unauthenticated.SEC_FAULT_OUTPUT
    bin/kafka-topics.sh --zookeeper message-router-zookeeper:2181 --delete --topic msgrtr.apinode.metrics.dmaap

  8. Load test data to Kafka
    The test data files were checked out from the source repository in the previous step, "Check out source code".
    On VM1,

    cd ~/kafka/kafka_2.11-2.0.0

    bin/kafka-console-producer.sh --broker-list message-router-kafka:9092 --topic AAI-EVENT < ~/git/onap/dcaegen2/services2/components/datalake-handler/feeder/src/main/resources/druid/AAI-EVENT-100.json
    bin/kafka-console-producer.sh --broker-list message-router-kafka:9092 --topic unauthenticated.DCAE_CL_OUTPUT < ~/git/onap/dcaegen2/services2/components/datalake-handler/feeder/src/main/resources/druid/DCAE_CL_OUTPUT-100.json
    bin/kafka-console-producer.sh --broker-list message-router-kafka:9092 --topic unauthenticated.SEC_FAULT_OUTPUT < ~/git/onap/dcaegen2/services2/components/datalake-handler/feeder/src/main/resources/druid/SEC_FAULT_OUTPUT-100.json
    bin/kafka-console-producer.sh --broker-list message-router-kafka:9092 --topic msgrtr.apinode.metrics.dmaap < ~/git/onap/dcaegen2/services2/components/datalake-handler/feeder/src/main/resources/druid/msgrtr.apinode.metrics.dmaap-100.json


    To check whether the data was loaded successfully, read it back:

    bin/kafka-console-consumer.sh --bootstrap-server message-router-kafka:9092 --topic AAI-EVENT --from-beginning
    bin/kafka-console-consumer.sh --bootstrap-server message-router-kafka:9092 --topic unauthenticated.DCAE_CL_OUTPUT --from-beginning
    bin/kafka-console-consumer.sh --bootstrap-server message-router-kafka:9092 --topic unauthenticated.SEC_FAULT_OUTPUT  --from-beginning 
    bin/kafka-console-consumer.sh --bootstrap-server message-router-kafka:9092 --topic msgrtr.apinode.metrics.dmaap --from-beginning

  9. Setup MongoDB
    On VM1,
    sudo docker run -d -p 27017:27017 --name mongodb mongo
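
    To verify the container is up, a quick ping (assuming the image bundles the mongo shell, which it does for the versions current at the time of writing):
    sudo docker exec mongodb mongo --eval 'db.runCommand({ ping: 1 })'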

  10. Setup Couchbase 
    On VM1,
    • Start the Docker container
      sudo docker run -d --name couchbase -p 8091-8094:8091-8094 -p 11210:11210 couchbase/server-sandbox:6.0.0
    • Create user and bucket

      Access http://dl_couchbase:8091/ and log in as "Administrator/password".

      Create bucket "datalake", with memory quota 200MB.
      Create user dl/dl1234 , with “Application Access” and "Views Admin" roles to bucket "datalake".
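
      To verify the bucket was created, the standard Couchbase REST API can be queried with the administrator login:
      curl -u Administrator:password http://dl_couchbase:8091/pools/default/buckets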

  11. Setup ElasticSearch & Kibana 
    (Ref https://docs.swiftybeaver.com/article/33-install-elasticsearch-kibana-via-docker)
    On VM1,

    sudo docker run -d -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" --name elastic docker.elastic.co/elasticsearch/elasticsearch:7.1.1
    sudo docker run -d --link elastic:dl_es -e "ELASTICSEARCH_HOSTS=http://dl_es:9200" -p 5601:5601 --name kibana docker.elastic.co/kibana/kibana:7.1.1
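
    To verify that Elasticsearch is up and the cluster is healthy:
    curl http://dl_es:9200/_cluster/health?pretty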

  12. Create test Indices in ElasticSearch 
    Indices should be created automatically by the Feeder.
    To access Kibana: http://dl_es:5601/ .
    In case you want to reset the indices, here are the commands to delete them:

    curl -X DELETE "dl_es:9200/aai-event?pretty"

    curl -X DELETE "dl_es:9200/unauthenticated.dcae_cl_output?pretty"

    curl -X DELETE "dl_es:9200/unauthenticated.sec_fault_output?pretty"

    curl -X DELETE "dl_es:9200/msgrtr.apinode.metrics.dmaap?pretty"


  13. Setup Druid
    (Ref http://druid.io/docs/latest/tutorials/index.html)

    We install Druid and Superset on VM2 because: 1. Druid uses port 8091, which is also used by Couchbase; 2. Druid uses its own Zookeeper, and we already installed one on VM1. (The second conflict could be resolved by modifying the Druid configs.)
    On VM2,
    mkdir ~/druid

    cd ~/druid
    wget http://apache.stu.edu.tw/incubator/druid/0.14.2-incubating/apache-druid-0.14.2-incubating-bin.tar.gz
    tar -xzf apache-druid-0.14.2-incubating-bin.tar.gz
    cd ~/druid/apache-druid-0.14.2-incubating
     
    vi ~/druid/apache-druid-0.14.2-incubating/quickstart/tutorial/conf/druid/middleManager/runtime.properties, update:
    druid.worker.capacity=30

    vi ~/druid/apache-druid-0.14.2-incubating/quickstart/tutorial/conf/druid/middleManager/jvm.config, update:
    -Xmx640m


    Install Zookeeper:
    curl https://archive.apache.org/dist/zookeeper/zookeeper-3.4.11/zookeeper-3.4.11.tar.gz -o zookeeper-3.4.11.tar.gz
    tar -xzf zookeeper-3.4.11.tar.gz
    mv zookeeper-3.4.11 zk

  14. Run Druid
    cd ~/druid/apache-druid-0.14.2-incubating
    nohup  bin/supervise -c quickstart/tutorial/conf/tutorial-cluster.conf > log.txt &
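
    To check that the Druid processes are up, each Druid service exposes a /status endpoint, e.g.:
    curl http://dl_druid:8090/status
    curl http://dl_druid:8081/status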

  15. Submit Druid Kafka indexing service supervisors
    (Ref http://druid.io/docs/latest/tutorials/tutorial-kafka.html)
    We use the Druid Kafka indexing service to load data from Kafka. For each topic, we will need to submit a supervisor spec to Druid: 

    cd ~/

    curl -XPOST -H'Content-Type: application/json' -d @git/onap/dcaegen2/services2/components/datalake-handler/feeder/src/main/resources/druid/AAI-EVENT-kafka-supervisor.json http://dl_druid:8090/druid/indexer/v1/supervisor
    curl -XPOST -H'Content-Type: application/json' -d @git/onap/dcaegen2/services2/components/datalake-handler/feeder/src/main/resources/druid/DCAE_CL_OUTPUT-kafka-supervisor.json http://dl_druid:8090/druid/indexer/v1/supervisor
    curl -XPOST -H'Content-Type: application/json' -d @git/onap/dcaegen2/services2/components/datalake-handler/feeder/src/main/resources/druid/SEC_FAULT_OUTPUT-kafka-supervisor.json http://dl_druid:8090/druid/indexer/v1/supervisor
    curl -XPOST -H'Content-Type: application/json' -d @git/onap/dcaegen2/services2/components/datalake-handler/feeder/src/main/resources/druid/msgrtr.apinode.metrics.dmaap-kafka-supervisor.json http://dl_druid:8090/druid/indexer/v1/supervisor

    Windows version:
    curl -XPOST -H"Content-Type: application/json" -d @C:\git\onap\dcaegen2\services2\components\datalake-handler\feeder\src\main\resources\druid\AAI-EVENT-kafka-supervisor.json http://dl_druid:8090/druid/indexer/v1/supervisor
    curl -XPOST -H"Content-Type: application/json" -d @C:\git\onap\dcaegen2\services2\components\datalake-handler\feeder\src\main\resources\druid\DCAE_CL_OUTPUT-kafka-supervisor.json http://dl_druid:8090/druid/indexer/v1/supervisor
    curl -XPOST -H"Content-Type: application/json" -d @C:\git\onap\dcaegen2\services2\components\datalake-handler\feeder\src\main\resources\druid\SEC_FAULT_OUTPUT-kafka-supervisor.json http://dl_druid:8090/druid/indexer/v1/supervisor
    curl -XPOST -H"Content-Type: application/json" -d @C:\git\onap\dcaegen2\services2\components\datalake-handler\feeder\src\main\resources\druid\msgrtr.apinode.metrics.dmaap-kafka-supervisor.json http://dl_druid:8090/druid/indexer/v1/supervisor

    Druid tasks: http://dl_druid:8090 
    Druid datasource: http://dl_druid:8081/#/datasources 
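
    To confirm that the supervisors were accepted, list them via the overlord API:
    curl http://dl_druid:8090/druid/indexer/v1/supervisor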

  16. Setup Superset
    (Ref https://superset.incubator.apache.org/installation.html#start-with-docker)
    On VM2,
    mkdir ~/superset

    cd ~/superset
    git clone https://github.com/apache/incubator-superset/
    cd ~/superset/incubator-superset/contrib/docker

    vi docker-compose.yml, add the external host dl_druid to service 'superset':

    extra_hosts:
    - "dl_druid:172.30.1.75"

    vi docker-init.sh, change:
    flask fab create-admin --app superset → fabmanager create-admin --app superset

    superset load_examples →  superset load-examples


    Then
    sudo docker-compose run superset ./docker-init.sh
    (This will take a while. You will be asked to provide a new username and password.)

  17. Run Superset

    cd ~/superset/incubator-superset/contrib/docker
    sudo docker-compose up -d

    Setup Druid as a data source
    Open http://dl_superset:8088/ and log in with the credentials created in step 'Setup Superset'. Go to Sources → Druid Clusters → Add a new record (the '+' sign), and set:
    Verbose Name=dl_druid
    Broker Host=dl_druid
    Cluster=dl_druid

  18. Setup Hadoop/HDFS
    If you already have a Hadoop cluster, set 'dlhdfs' to its NameNode IP in /etc/hosts. Otherwise, install a Cloudera QuickStart VM (Docker image or another VM format) on VM1.
    Download image from http://www.cloudera.com/content/support/en/downloads/quickstart_vms.html.

    For Docker, (Ref. https://www.cloudera.com/documentation/enterprise/5-13-x/topics/quickstart_docker_container.html)
    gunzip cloudera-quickstart-vm-5.13.0-0-beta-docker.tar.gz
    tar -xvf cloudera-quickstart-vm-5.13.0-0-beta-docker.tar
    cd cloudera-quickstart-vm-5.13.0-0-beta-docker
    sudo docker import cloudera-quickstart-vm-5.13.0-0-beta-docker.tar
    sudo docker images
    sudo docker run --name=hadoop --hostname=quickstart.cloudera --privileged=true -t -i -p 7180:7180 -p 8020:8020 -p 50075:50075 -p 50010:50010 5d3a901291ef_replace_with_yours /usr/bin/docker-quickstart
    Then, inside the container shell, start Cloudera Manager:
    /home/cloudera/cloudera-manager --express

    Access Cloudera Manager via http://dlhdfs:7180 , using login 'cloudera/cloudera', and start the cluster.

    On the QuickStart VM, create HDFS folder '/datalake', where the data will be stored, and assign it to user 'dl':
    sudo -u hdfs hadoop fs -mkdir /datalake
    sudo -u hdfs hadoop fs -chown dl /datalake
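
    To confirm the folder and its ownership:
    sudo -u hdfs hadoop fs -ls /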

  19. Run DataLake Feeder in IDE

    The Feeder is a Spring Boot application. The entry point is org.onap.datalake.feeder.Application. Run the project in Eclipse as a "Spring Boot App". Once started, the app reads the topic list from Zookeeper, pulls data from these Kafka topics, and inserts the data into MongoDB, Couchbase, Elasticsearch and HDFS. The data loaded into Kafka in step 'Load test data to Kafka' should appear in all of the databases/stores, and you should be able to view it with the UI tools installed above.

    The REST APIs provided by the controllers are documented on the Swagger page: http://localhost:1680/datalake/v1/swagger-ui.html .
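
    Alternatively, if the feeder POM includes the spring-boot-maven-plugin (an assumption; check the POM), the app can also be started from the command line:

    cd ~/git/onap/dcaegen2/services2/components/datalake-handler/feeder
    mvn spring-boot:run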

  20. Create Docker image for deployment
    To create the Docker image in your local development environment, Docker must be installed locally.
    cd ~/git/onap/dcaegen2/services2/components/datalake-handler/feeder
    mvn clean package -DskipTests
    sudo docker build -t moguobiao/datalake-feeder -f src/assembly/Dockerfile . (Replace 'moguobiao' with your own Docker Hub account.)

    Push the Docker image to Docker Hub:

    sudo docker login -u moguobiao -p password
    sudo docker push moguobiao/datalake-feeder

  21. Deploy Docker image 
    On VM1,
    sudo docker pull moguobiao/datalake-feeder
    sudo docker run -d -p 1680:1680 --name dl_feeder --add-host=message-router-kafka:172.30.1.74 --add-host=message-router-zookeeper:172.30.1.74 --add-host=dl_couchbase:172.30.1.74 --add-host=dl_mariadb:172.30.1.74 --add-host=dl_mongodb:172.30.1.74 --add-host=dlhdfs:172.30.1.74 --add-host=dl_es:172.30.1.74 moguobiao/datalake-feeder
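
    To confirm the Feeder container started correctly, check its status and watch the logs:

    sudo docker ps
    sudo docker logs -f dl_feeder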

  22. Deploy AdminUI
    On VM1,
    1. Development mode
      Install Node.js >= 10.9.0 and Angular CLI >= 7. Please follow https://angular.io/guide/quickstart to set up the development environment.
      # cd ~/git/onap/dcaegen2/services2/components/datalake-handler/admin/src
      # vim proxy.conf.json, and on line 3 set "target" to your Feeder address: "target": "http://to_your_feeder_address"
      # npm install

      # npm start (In Windows, ng serve --proxy-config proxy.conf.json)
      Access Admin UI page http://dl_adminui:4200
    2. Production mode with Docker
      # cd ~/git/onap/dcaegen2/services2/components/datalake-handler/admin
      # docker build -t datalake-adminui . --no-cache
      # docker run -d -p 80:80 --name dl_adminui --add-host=dl_feeder:172.30.1.74  datalake-adminui
      Access Admin UI page http://dl_adminui

  23. Use DMaaP (Release C) as data source

    Add VM1 to Kubernetes cluster
    ONAP at the China Mobile Lab is deployed as a Kubernetes cluster. For the DL Feeder to connect to DMaaP's Kafka and Zookeeper, we need to add VM1 to the cluster. This is done by installing Rancher containers on the VM.

    Find Zookeeper and Kafka hosts
    kubectl -n onap get pod -o wide | grep dmaap-message-router
    In our instance, it returns
    dev-dmaap-message-router-58cb7f9644-v5qvq 1/1 Running 0 53d 10.42.97.241 mr01-node3 <none>
    dev-dmaap-message-router-kafka-6685877dc4-xkvrk 1/1 Running 0 53d 10.42.243.183 mr01-node2 <none>
    dev-dmaap-message-router-zookeeper-bc76c44f4-6sfbx 1/1 Running 0 53d 10.42.13.227 mr01-node1 <none>

    So we update /etc/hosts on VM1 with
    10.42.13.227 message-router-zookeeper 
    10.42.243.183 message-router-kafka
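
    With these entries in place, connectivity to DMaaP's Kafka/Zookeeper can be verified with the Kafka tools installed in step "Setup Kafka", e.g. by listing the DMaaP topics:

    ~/kafka/kafka_2.11-2.0.0/bin/kafka-topics.sh --list --zookeeper message-router-zookeeper:2181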

    Run Feeder
    We cannot run the Docker container as in step "Deploy Docker image", because even though VM1 is within the Kubernetes cluster, the Feeder container is not. One option is to deploy the image into the Kubernetes cluster, as illustrated in step "Deploy Docker image to Kubernetes cluster". A simpler way to run the Feeder for development and testing is:

    • Copy the jar file C:\git\onap\dcaegen2\services2\components\datalake-handler\feeder\target\feeder-1.0.0-SNAPSHOT.jar to VM1. This jar file was created by the Maven command in step "Create Docker image for deployment".

    • Then run
      nohup java -jar feeder-1.0.0-SNAPSHOT.jar > feeder.log &

  24. Deploy Docker image to Kubernetes cluster
    TODO
  25.  ...