The APPC OOM deployment consists of a 3-node ODL cluster plus a DB node.  The details for the environment can be found at APPC WindRiver Lab.  From a testing perspective, a directory called "testing" has been created on the Master APPC node (10.12.5.171).


Within the testing directory there are sample test JSON requests along with JMeter and A&AI data load scripts.  The breakdown of these directories is below:

apache-jmeter-4.0

This directory contains Apache JMeter along with APPC-Tests.  The APPC-Tests subfolder contains jmx files that can be utilized to run specific APPC LCM Actions.  The tests can be executed either from the command line or via the UI.

  • Command Line (the example below is executed from the testing->apache-jmeter-4.0->APPC-Tests folder):
    • ../bin/jmeter.sh -n -L DEBUG -t APPC-LCM-Action-Rebuild.jmx -l APPC-LCM-Action-Rebuild.jtl
      • The jmx file contains the test "script" and the jtl file is the formatted test results file.
  • Executing JMeter from the UI:
    • /home/ubuntu/testing/apache-jmeter-4.0/jmeter
    • This will pull up the UI.  However, you will need to have X11 set up on your desktop (see the sketch after this list).
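
Either mode can be driven from a desktop session.  The sketch below is only illustrative: it assumes the lab login ubuntu@10.12.5.171 and the paths from the notes above, a local X server for the UI case, and simply loops the non-GUI invocation over every jmx file in APPC-Tests.

Code Block
title: Example JMeter invocations (sketch; adjust host and file names as needed)
# UI mode: forward X11 from the Master node (requires a local X server)
ssh -X ubuntu@10.12.5.171
/home/ubuntu/testing/apache-jmeter-4.0/jmeter

# Non-GUI mode: run every LCM test in APPC-Tests, writing one jtl per jmx
cd /home/ubuntu/testing/apache-jmeter-4.0/APPC-Tests
for jmx in *.jmx; do
  ../bin/jmeter.sh -n -L DEBUG -t "$jmx" -l "${jmx%.jmx}.jtl"
done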

ONAP-Testing

The ONAP-Testing directory contains the data and scripts that allow the user to update a particular A&AI instance.

  • put_closed_loop.sh is the script that actually "puts" new data into A&AI.

    • ./put_closed_loop.sh <aai-ip-or-host> | python -m json.tool

    • The below files need to be configured for the specific VNF being loaded:

      • cloud-region.json

      • model.json

      • generic-vnf-.json

      • vserver-generic-vnf-relationship.json

    • Additionally, 3 VMs were created for the Stability Testing.  Each VM has a separate directory named accordingly.  The files for those specific VNFs do not require modification if there is a need to load the data for that VNF into a new or cleaned-out A&AI.

  • verify.sh is the script that can be utilized to verify that the VNF is loaded into A&AI.  In order to query for a specific VNF, simply modify the named-query.json file to reflect that VNF.

  • Pre-defined data has been created for the following (a load-and-verify sketch follows this list):

    • Stability-Test-VM1

    • Stability-Test-VM2

    • Stability-Test-VM3

    • vDNS
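
Putting the pieces together, a typical load-and-verify pass for one of the pre-defined VNFs might look like the sketch below.  The layout of the per-VM directories under ONAP-Testing and the arguments accepted by verify.sh are assumptions; confirm both in the lab before relying on this.

Code Block
title: Example A&AI load and verify (sketch; confirm paths and script arguments)
# Assumes the per-VM data directories sit under ONAP-Testing and that
# put_closed_loop.sh reads the json files from its working directory.
cd /home/ubuntu/testing/ONAP-Testing
cp Stability-Test-VM1/*.json .                      # stage the pre-defined VNF data
./put_closed_loop.sh <aai-ip-or-host> | python -m json.tool

# Verify the VNF was loaded (named-query.json must already reference the VNF).
# Argument form assumed to mirror put_closed_loop.sh.
./verify.sh <aai-ip-or-host> | python -m json.tool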

Common commands utilized in the OOM environment:

  • kubectl get pods --all-namespaces -o wide -w
    • Displays all the running pods and related information.

      Code Block
      title: POD Information
      ubuntu@k8s-master:~$ kubectl get pods --all-namespaces -o wide -w
      NAMESPACE     NAME                                         READY     STATUS    RESTARTS   AGE       IP            NODE
      kube-system   etcd-k8s-master                              1/1       Running   1          6d        10.12.5.171   k8s-master
      kube-system   kube-apiserver-k8s-master                    1/1       Running   1          6d        10.12.5.171   k8s-master
      kube-system   kube-controller-manager-k8s-master           1/1       Running   1          6d        10.12.5.171   k8s-master
      kube-system   kube-dns-86f4d74b45-px44s                    3/3       Running   12         19d       10.32.0.3     k8s-master
      kube-system   kube-proxy-25tm5                             1/1       Running   4          19d       10.12.5.171   k8s-master
      kube-system   kube-proxy-6dt4z                             1/1       Running   3          19d       10.12.5.174   k8s-appc1
      kube-system   kube-proxy-jmv67                             1/1       Running   3          19d       10.12.5.193   k8s-appc2
      kube-system   kube-proxy-l8fks                             1/1       Running   4          19d       10.12.5.194   k8s-appc3
      kube-system   kube-scheduler-k8s-master                    1/1       Running   1          6d        10.12.5.171   k8s-master
      kube-system   tiller-deploy-84f4c8bb78-p8b6j               1/1       Running   0          6d        10.36.0.4     k8s-appc3
      kube-system   weave-net-bz7wr                              2/2       Running   13         19d       10.12.5.194   k8s-appc3
      kube-system   weave-net-c2pxd                              2/2       Running   10         19d       10.12.5.174   k8s-appc1
      kube-system   weave-net-jw29c                              2/2       Running   10         19d       10.12.5.171   k8s-master
      kube-system   weave-net-kxxpl                              2/2       Running   10         19d       10.12.5.193   k8s-appc2
      onap          onap-appc-0                                  2/2       Running   0          3d        10.44.0.5     k8s-appc1
      onap          onap-appc-1                                  2/2       Running   0          1h        10.47.0.1     k8s-appc2
      onap          onap-appc-2                                  2/2       Running   0          1h        10.36.0.5     k8s-appc3
      onap          onap-appc-cdt-5474c7cb88-8sfqp               1/1       Running   0          3d        10.36.0.10    k8s-appc3
      onap          onap-appc-db-0                               2/2       Running   0          3d        10.47.0.2     k8s-appc2
      onap          onap-appc-dgbuilder-5dccdf57b5-qntzg         1/1       Running   0          3d        10.47.0.6     k8s-appc2
      onap          onap-dbcl-db-0                               1/1       Running   0          3d        10.47.0.4     k8s-appc2
      onap          onap-dbcl-db-1                               1/1       Running   0          3d        10.44.0.8     k8s-appc1
      onap          onap-dmaap-7dfc8f84d6-qmqmx                  1/1       Running   1          3d        10.36.0.8     k8s-appc3
      onap          onap-dmaap-bus-controller-6c4d4d55cf-jrf4w   1/1       Running   0          3d        10.47.0.8     k8s-appc2
      onap          onap-global-kafka-6bb8cf75c8-dr5rm           1/1       Running   2          3d        10.36.0.11    k8s-appc3
      onap          onap-log-elasticsearch-855f94ccc4-jsc8t      1/1       Running   0          3d        10.47.0.7     k8s-appc2
      onap          onap-log-kibana-764bd47f-wxhhh               1/1       Running   0          3d        10.47.0.9     k8s-appc2
      onap          onap-log-logstash-776fd9fbd6-7hzhg           1/1       Running   0          3d        10.44.0.2     k8s-appc1
      onap          onap-robot-d7c758c5f-fz68f                   1/1       Running   0          3d        10.44.0.4     k8s-appc1
      onap          onap-zookeeper-7774c66fb9-5nwmd              1/1       Running   0          3d        10.44.0.9     k8s-appc1
  • kubectl exec -ti onap-appc-0 -c appc -n onap bash
    • This command is executed from the Master node and opens a bash shell into the particular pod (a few more kubectl examples follow this list).
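
A few related kubectl commands are often useful alongside the two above.  The pod, container, and namespace names below come from the listing; the file name in the copy example is purely illustrative.

Code Block
title: Additional kubectl examples (sketch)
# Show only the APPC-related pods in the onap namespace
kubectl get pods -n onap -o wide | grep appc

# Tail the APPC container log on the first cluster member
kubectl logs -f onap-appc-0 -c appc -n onap

# Copy a test payload into the APPC pod (illustrative file name)
kubectl cp ./request.json onap/onap-appc-0:/tmp/request.json -c appc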

New or Clean APPC DB

Data for the different actions must be loaded into the PROTOCOL_REFERENCE table whenever a new instance of the APPC DB is instantiated or cleaned.  In order to load the Stability Test VNFs, use the following (a verification sketch follows the code block):


Code Block
1.  kubectl exec -ti onap-appc-db-0 -c appc-db -n onap bash
2.  mysql -u sdnctl -p
3.  Enter the following SQL commands to insert the data (the first column is the unique key for each record; the exact number matters less than it being unique within the table):

INSERT INTO `PROTOCOL_REFERENCE` VALUES (1,'Stop','vTESTVM','OS',now(),'NO','vm');
INSERT INTO `PROTOCOL_REFERENCE` VALUES (2,'Start','vTESTVM','OS',now(),'NO','vm');
INSERT INTO `PROTOCOL_REFERENCE` VALUES (4,'Rebuild','vTESTVM','OS',now(),'NO','vm');
INSERT INTO `PROTOCOL_REFERENCE` VALUES (3,'Restart','vTESTVM','OS',now(),'NO','vnf');
INSERT INTO `PROTOCOL_REFERENCE` VALUES (5,'Restart','vTESTVM','OS',now(),'NO','vm');
INSERT INTO `PROTOCOL_REFERENCE` VALUES (18,'Snapshot','vTESTVM','OS',now(),'NO','vm');

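Once the rows are inserted, they can be checked non-interactively from the Master node as sketched below.  The pod, container, and sdnctl user come from the steps above; the sdnctl database name is an assumption, and the password is prompted for.

Code Block
title: Verify PROTOCOL_REFERENCE contents (sketch; database name assumed)
kubectl exec -ti onap-appc-db-0 -c appc-db -n onap -- \
  mysql -u sdnctl -p sdnctl -e "SELECT * FROM PROTOCOL_REFERENCE;"
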

Kibana Queries

  • Querying for a specific string in the message field:
    • message:"REBUILD_STATUS -- SUCCESS"
  • Querying for all transactions for a RequestId:
    • RequestId:"RequestId you are searching for"
  • A view for Restarts and Rebuilds can be found here.  (A curl-based query sketch follows this list.)
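
The same searches can also be scripted against the logging Elasticsearch instance with curl.  The sketch below assumes Elasticsearch's default port 9200 and a logstash-* index pattern, neither of which is confirmed above; substitute the address of the onap-log-elasticsearch service in the lab.

Code Block
title: Query Elasticsearch directly (sketch; host, port, and index pattern assumed)
# Entries containing the rebuild success marker
curl -s -G 'http://<elasticsearch-host>:9200/logstash-*/_search' \
  --data-urlencode 'q=message:"REBUILD_STATUS -- SUCCESS"' \
  --data-urlencode 'size=10'

# All entries for a given RequestId
curl -s -G 'http://<elasticsearch-host>:9200/logstash-*/_search' \
  --data-urlencode 'q=RequestId:"<request-id>"' \
  --data-urlencode 'size=50'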