
Follow the steps below to set up the CPS environment.

Check out the project

Check out the project from https://gerrit.onap.org/r/admin/repos/cps (the Gerrit repository page lists the exact clone commands).
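For example, an anonymous clone over HTTPS (a minimal sketch; the short clone URL below follows the usual Gerrit convention, so check the repository page for the authoritative command):

Clone the project
git clone "https://gerrit.onap.org/r/cps"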

Building the project

When building the project, run the following from the root cps folder:

mvn clean install

From the docker-compose folder, run the following after building the images locally:

VERSION=latest DB_USERNAME=cps DB_PASSWORD=cps docker-compose up -d

This starts both cps and postgres containers.
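To confirm both containers came up, a quick check (not part of the original steps) can be run from the same docker-compose folder:

Verify the containers
docker-compose ps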

Note: Check out the README.md in the docker-compose folder for detailed steps.

Set up the DB schema

Liquibase auto creates the schema on startup.

Set environment variables with the relevant connection details, which can be found in application.yml in the cps-application/resources folder.
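For example, assuming the same variable names used by the java command below (DB_HOST, DB_USERNAME, DB_PASSWORD; the values here are illustrative only):

Example connection environment variables
export DB_HOST=localhost
export DB_USERNAME=cps
export DB_PASSWORD=cps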

Running the project

Use this option if you have a local PostgreSQL instance running.

From the cps folder, run the following command:

java -DDB_HOST=localhost -DDB_USERNAME=cps -DDB_PASSWORD=cps -jar cps-application/target/cps-application-x.y.z-SNAPSHOT.jar

NB: On Linux, use the IP address of the database container instead of localhost.
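The container IP address can be looked up with docker inspect, following the same pattern used for SDNC later on this page (replace <POSTGRES_CONTAINER_ID> with the ID shown by 'docker ps'):

Inspect postgres ip
docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <POSTGRES_CONTAINER_ID>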

OR 

From the cps/cps-application folder run the following command:

mvn spring-boot:run
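If the database connection details also need to be supplied on the command line, the spring-boot-maven-plugin can pass them as JVM arguments (a sketch assuming plugin version 2.x, where the property is spring-boot.run.jvmArguments):

Run with explicit DB connection details
mvn spring-boot:run -Dspring-boot.run.jvmArguments="-DDB_HOST=localhost -DDB_USERNAME=cps -DDB_PASSWORD=cps"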

Running CPS via Helm charts on Minikube:

WSL Checks (when using WSL2 on MS Windows)

Check that your WSL 2 environment is running both your Linux distribution and Docker, using a Windows command prompt/shell.
You may need to ensure that Windows is configured for WSL 2 and that WSL is set to use your Linux distribution as the default.

WSL Check
$ wsl -l -v
  NAME                   STATE           VERSION
* Ubuntu-20.04           Running         2
  docker-desktop         Running         2
  docker-desktop-data    Running         2

When using WSL 2, ensure you open a WSL shell window, i.e. from a Command Prompt run wsl.

Install MiniKube

Install and start MiniKube

Install and Start MiniKube
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
minikube start
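Before continuing, it is worth confirming the cluster came up (a quick check, not part of the original steps):

Verify MiniKube is running
minikube status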

Install kubectl, Helm and the Helm Repo

To set up kubectl and helm for ONAP locally, follow the steps outlined in the deploy section of https://docs.onap.org/projects/onap-oom/en/latest/oom_user_guide.html#deploy

Please note the following amendments to the above instructions:

Install helm push plugin (before building the Helm repository)

Install Helm Push Plugin
helm plugin install https://github.com/chartmuseum/helm-push.git

After following the steps above, ensure your local repo has the charts loaded onto it:

helm search repo local
NAME                      CHART VERSION  APP VERSION  DESCRIPTION
local/a1policymanagement  8.0.0          1.0.0        A Helm chart for A1 Policy Management Service
local/aaf                 8.0.0                       ONAP Application Authorization Framework
local/aai                 8.0.0                       ONAP Active and Available Inventory
local/appc                8.0.0                       Application Controller
...
local/cps                 8.0.0                       Configuration Persistence Service (CPS)
local/contrib             8.0.0                       ONAP optional tools

Deploy CPS

To install CPS only, run the following command from within the oom/kubernetes/cps folder:

Install CPS using Helm
cd <your git repo>/oom/kubernetes/cps
helm upgrade dev1 local/cps -i -f values.yaml --set global.masterPassword=mysecr
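The deployment can then be watched until the CPS pods are ready (a quick verification sketch; the exact pod names depend on the release name used above):

Check CPS pods
kubectl get pods | grep cps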

Once your chart is deployed, you can test it by hitting the Spring actuator endpoint from a pod:

Test CPS is alive
kubectl run -it network-multitool-$USER --image=praqma/network-multitool --restart=Never --rm -- bash

curl -X GET "http://cps:8080/manage/health" -H "accept: application/json" -H "Content-Type: application/json"
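If CPS is healthy, the endpoint should return a status of UP (this is the standard Spring Boot actuator payload; the exact response may include additional detail):

Expected health response
{"status":"UP"}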

Note: This was tested on Windows using WSL 2 with Ubuntu 20.04, but any similar environment should suffice.

Setting up SDNC, RAN-sim controller and Honeycomb simulator locally:

SDNC setup

To set up SDNC, first download these 2 files:

  1. certs.tar
  2. docker-compose.yml

Extract certs.tar into the same folder as the downloaded docker-compose.yml file.

From the same folder as above, run the following command to set up SDNC.

Docker command
docker-compose up -d

SDNC should be up once this command has run successfully.
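A quick way to confirm this (assuming the SDNC container names contain 'sdnc'):

Verify SDNC containers
docker container ls --filter "name=sdnc"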

RAN-sim controller setup

To set up the RAN-sim controller, follow the steps provided on the RAN-Sim setup page or use the steps below.

  1. Clone and check out the Ran-Sim Controller

    Clone Ran-Sim Controller
    git clone "https://gerrit.onap.org/r/integration/simulators/ran-simulator"
  2. Pull the pre-built docker image using the command 

    Docker pull command for ransim controller
    docker pull docker.io/shsubedi/ransimcontroller:v1
  3. Use the following command to tag the image

    Docker tag command
    docker tag shsubedi/ransimcontroller:v1 onap/ransim:1.0.0-SNAPSHOT
  4. Navigate to '<YOUR_DIRECTORY>/ran-simulator/ransim/docker' directory
  5. Modify the docker-compose.yml file: update SDNR_IP and SDNR_PORT.
    To get the SDNR_IP, run the following command

    Inspect SDNC ip
    docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <SDNC_CONTAINER_ID>


    SDNR_IP=<SDNC_IP>
    SDNR_PORT=8282

  6. Run the 'docker-compose up -d' command from the '<YOUR_DIRECTORY>/ran-simulator/ransim/docker' directory
    ransim and mariadb should come up once this command has run successfully.

Honeycomb simulator setup

To set up the Honeycomb simulator, follow the steps below or the steps in this page Core & RAN Simulators.

  1. Pull the custom honeycomb docker image using the command

    Docker pull command
    docker pull docker.io/tragait/gnbsim:v1
  2. Clone/download https://github.com/onap-oof-pci-poc/ran-sim
  3. Update the ransim and honeycomb IP addresses in '<YOUR_DIRECTORY>/ran-sim/hcsim-content/gnbsim/hc/config/gnbsim.json'
    Make sure the following values are updated (listed after the command below).
    To get the ransimIp and hcIp, do the following:

    Inspect ransim ip
    docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <ransim_CONTAINER_ID>
    • "ransimIp": <ransimIP>
    • "ransimPort": 8081
    • "hcIp": <ransimIP>
    • "hcPort": 2831
  4. Update the image name in the '<YOUR_DIRECTORY>/ran-sim/hcsim-content/gnbsim/hc/docker-compose.yml' to:
    • image: tragait/gnbsim:v1
  5. Run the below command from '<YOUR_DIRECTORY>/ran-sim/hcsim-content/gnbsim/hc' directory

    Docker compose up command
    docker-compose up -d

    When the docker-compose up -d command runs, these servers should be mounted in SDNC automatically.
    In case they are not mounted in SDNC, you can use the following curl command to mount the HC sim.
    To get the IP of the HC sim, do the following:

    Inspect hc ip
    docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <hc_CONTAINER_ID>

    Make sure to modify the below curl command: replace HC_SIM_IP with the IP retrieved from the previous command.
    Note: If using WSL 2, HC_SIM_IP in the below curl command can be replaced with the IP address obtained by running 'wsl hostname -I' in Windows PowerShell.

    HC sim mount command
    curl -i -X PUT http://localhost:8282/restconf/config/network-topology:network-topology/topology/topology-netconf/node/hc -k -H 'Accept: application/xml' -H 'Content-Type: text/xml' --user "admin":"Kp8bJ4SXszM0WXlhak3eHlcse2gAw84vaoGGmJvUy2U"  -d '<node xmlns="urn:TBD:params:xml:ns:yang:network-topology"> <node-id>hc</node-id> <host xmlns="urn:opendaylight:netconf-node-topology">HC_SIM_IP</host>  <port xmlns="urn:opendaylight:netconf-node-topology">2831</port>  <username xmlns="urn:opendaylight:netconf-node-topology">admin</username>  <password xmlns="urn:opendaylight:netconf-node-topology">admin</password>  <tcp-only xmlns="urn:opendaylight:netconf-node-topology">false</tcp-only>  <!-- non-mandatory fields with default values, you can safely remove these if you do not wish to override any of these values-->  <reconnect-on-changed-schema xmlns="urn:opendaylight:netconf-node-topology">false</reconnect-on-changed-schema>  <connection-timeout-millis xmlns="urn:opendaylight:netconf-node-topology">20000</connection-timeout-millis>  <max-connection-attempts xmlns="urn:opendaylight:netconf-node-topology">0</max-connection-attempts>  <between-attempts-timeout-millis xmlns="urn:opendaylight:netconf-node-topology">2000</between-attempts-timeout-millis>  <sleep-factor xmlns="urn:opendaylight:netconf-node-topology">1.5</sleep-factor>  <!-- keepalive-delay set to 0 turns off keepalives-->  <keepalive-delay xmlns="urn:opendaylight:netconf-node-topology">120</keepalive-delay></node>'
  6. Check using 'docker container ls' that the honeycomb simulator is up and running.

Once the above steps have been completed, check whether the honeycomb simulator has been mounted in SDNC by going to the following link and clicking on the Mounted Resources section:
 http://localhost:8282/apidoc/explorer/index.html
Note: If using WSL 2, localhost can be replaced with the IP address obtained by running 'wsl hostname -I' in Windows PowerShell.

  • Credentials: admin / Kp8bJ4SXszM0WXlhak3eHlcse2gAw84vaoGGmJvUy2U
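Alternatively, the mount can be checked from the command line with a RESTCONF GET against the same topology path used in the PUT above (a sketch reusing the credentials listed here; replace localhost as per the WSL 2 note):

Check the HC sim mount via RESTCONF
curl -u admin:Kp8bJ4SXszM0WXlhak3eHlcse2gAw84vaoGGmJvUy2U http://localhost:8282/restconf/config/network-topology:network-topology/topology/topology-netconf/node/hc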

Setting up SDNC, CPS & NCMP, DMI-Plugin and netconf-pnp-simulator locally:

Download the following zip file and extract it. 

Navigate to the folder where the files were extracted and run the below command from the 'sim' directory

Docker compose command from sim folder
docker-compose up -d

Then navigate to the folder where the files were extracted and run the below command.

Create a docker network
docker network create test_network

Then run the following command.

docker compose command
docker-compose up -d

Check using 'docker container ls' that SDNC, CPS & NCMP, DMI-Plugin and netconf-pnp-simulator are up and running.
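For a more compact listing showing just names and status, docker's standard format flag can be used (an optional convenience):

Compact container listing
docker container ls --format 'table {{.Names}}\t{{.Status}}'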

FAQ

How to fix "Error: could not open `{argLine}'" when running unit tests from the IntelliJ IDE?

If you are not able to run unit tests from the IntelliJ unit test tool because of this error:

Error: could not open `{argLine}'

Process finished with exit code 1

Then review the maven-surefire-plugin integration with IntelliJ:

  • Go to Settings -> Build, Execution, Deployment -> Build Tools -> Maven -> Running Tests
  • Uncheck argLine

