...

Note: in Casablanca you can simply Certify the VSP and continue on with Service Design and Creation (see image below)

(Image: certifying the VSP in SDC)

Service Design and Creation

...

Install python-pip and other python modules (see the comments section)


apt install python-pip
pip install ipaddress
pip install pyyaml
pip install mysql-connector-python
pip install progressbar2
pip install python-novaclient
pip install python-openstackclient
pip install kubernetes


Run automation program to deploy services

Sign into SDC as designer and download the five csar files for infra, vbng, vgmux, vbrg, and rescust. Copy all the csar files to the csar directory.

If robot has done the model onboarding for you, the CSARs may also be inside the robot container in the /tmp/csar directory (a copy sketch is shown below).
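
If the CSARs are in the robot container, a minimal sketch of copying them out with kubectl (the pod name below is an example; look up yours with the first command):

Code Block
titleCopy CSARs out of the robot container (example)
collapsetrue
kubectl -n onap get pod | grep robot
# the pod name below is an example -- use the one returned by the grep above
kubectl cp onap/dev-robot-robot-66c9dbc759-8j7lr:/tmp/csar ./csar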

Now you can simply run 'vcpe.py' to see the instructions.
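
For quick reference, these are the vcpe.py subcommands used later in this guide (a sketch; run vcpe.py with no arguments for the authoritative usage text):

Code Block
titlevcpe.py subcommands used in this guide
collapsetrue
vcpe.py init      # run once after editing vcpecommon.py (step 9)
vcpe.py infra     # create the infrastructure services (step 10)
vcpe.py customer  # create the customer service (step 14)
vcpe.py loop      # trigger the closed loop packet drop test (step 22)
vcpe.py noloss    # stop the closed loop test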

...

service_instance_id: take it from the var/svc_instance_uuid file. Copy the value for gmux without the letter 'V'


Code Block
#demo-k8s.sh <namespace> heatbridge <stack_name> <service_instance_id> <service> <oam-ip-address>
root@oom-rancher:~/integration/test/vcpe# ~/oom/kubernetes/robot/demo-k8s.sh onap heatbridge vcpe_vfmodule_e2744f48729e4072b20b_201811262136 d8914ef3-3fdb-4401-adfe-823ee75dc604 vCPEvGMUX 10.0.101.21

...

Code Block
titleneutron.sh
collapsetrue
mysql -uroot -ppassword -e 'update catalogdb.heat_template set body="
heat_template_version: 2013-05-23
description: A simple Neutron network
parameters:
  network_name:
    type: string
    description: Name of the Neutron Network
    default: ONAP-NW1
  shared:
    type: boolean
    description: Shared amongst tenants
    default: False
outputs:
  network_id:
    description: Openstack network identifier
    value: { get_resource: network }
resources:
  network:
    type: OS::Neutron::Net
    properties:
      name: { get_param: network_name }
      shared: { get_param: shared }" where name="Generic NeutronNet"'
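
To confirm the template update took effect, a quick check against the same table (same credentials as the command above):

Code Block
titleVerify Generic NeutronNet template
collapsetrue
mysql -uroot -ppassword -e 'select name, left(body, 60) from catalogdb.heat_template where name="Generic NeutronNet"'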

...

7.1 Add a route on the sdnc cluster node: `ip route add 10.3.0.0/24 via 10.0.101.10 dev ens3`. You can find the sdnc cluster node name by running `kubectl -n onap get pod -o wide | grep sdnc-0` (see the 'Set SDNC cluster node route' code block below).

7.2 Run from Rancher node `kubectl -n onap exec -it dev-sdnc-sdnc-0 -- /opt/sdnc/bin/addIpAddresses.sh VGW 10.5.0 22 250`
8. Install python-pip and other python libraries. See tutorial comments section

If you have the onap_dev key locally, you can run the following commands; otherwise find the cluster node ip from Openstack Horizon and log in with the key.

Code Block
titleSet SDNC cluster node route
collapsetrue
root@release-rancher:~# kubectl -n onap get pod -o wide | grep sdnc-0
dev-sdnc-sdnc-0 2/2 Running 0 5h38m 10.42.3.22 release-k8s-11 <none> <none>
root@release-rancher:~# source ~/integration/deployment/heat/onap-rke/env/windriver/Integration-SB-04-openrc (source your openstack env file)
root@release-rancher:~# openstack server show -f json release-k8s-11 | jq .addresses
"oam_network_nzbD=10.0.0.10, 10.12.6.36"
root@release-rancher:~# ssh -i ~/.ssh/onap_dev ubuntu@10.12.6.36 -- sudo ip route add 10.3.0.0/24 via 10.0.101.10 dev ens3

Code Block
titleInstall python lib
collapsetrue
apt-get install -y python-pip
pip install ipaddress
pip install pyyaml
pip install mysql-connector-python
pip install progressbar2
pip install python-novaclient
pip install python-openstackclient
pip install netaddr
pip install kubernetes

9. Change the following env and service related parameters in vcpecommon.py, then run `vcpe.py init`. You may see some sql command failures; they are safe to ignore.

Code Block
titlevcpecommon.py change
collapsetrue
--os-tenant-id
--os-project-domain-name
oam_onap_net
oam_onap_subnet
self.vgw_VfModuleModelInvariantUuid
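
To locate these parameters quickly, a minimal sketch (assumes you are in the integration/test/vcpe directory):

Code Block
titleLocate parameters in vcpecommon.py
collapsetrue
grep -n -e "os-tenant-id" -e "os-project" -e "oam_onap_net" -e "oam_onap_subnet" -e "vgw_VfModuleModelInvariantUuid" vcpecommon.py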

10. Run `vcpe.py infra`
11. Make sure sniro configuration is run as part of the above step.
12. Install the curl command inside the sdnc-sdnc-0 container
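A hypothetical sketch of step 12 (assumes the main container is named sdnc and its image is Debian/Ubuntu based; adjust the container name and package manager to your deployment):

Code Block
titleInstall curl inside sdnc-sdnc-0 (sketch)
collapsetrue
# container name and package manager are assumptions -- check your image
kubectl -n onap exec -it dev-sdnc-sdnc-0 -c sdnc -- bash -c "apt-get update && apt-get install -y curl"
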
13. Run `healthcheck-k8s.py onap` to check connectivity from sdnc to brg and gmux. If the healthcheck fails, check /opt/config/sdnc_ip.txt to see that it has the correct SDNC host ip. If you need to change the SDNC host ip, you need to clean up and rerun `vcpe.py infra`. Also verify that tap interfaces tap-0 and tap-1 are up by running `vppctl show int`. If the tap interfaces are not up, delete them with `vppctl tap delete tap-0` and `vppctl tap delete tap-1`, then run `/opt/bind_nic.sh` followed by `/opt/set_nat.sh` (see the vppctl tap Code Block below).

If you have changed the SDNC_IP after instantiation of the vBNG and vBRGEMU:

  1. you also need to update the /opt/sdnc_ip in the vBNG and run v_bng_install.sh to get the vBNG route tables updated.
  2. you need to change sdnc_ip.txt and ip.txt on the vBRGEMU (a sketch follows below)
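
For item 2, a minimal sketch of updating the stored SDNC IP on the vBRGEMU (the IP value is an example; /opt/config/sdnc_ip.txt is the path referenced in step 13, ip.txt is edited the same way and its path may differ on your image):

Code Block
titleUpdate sdnc_ip.txt on vBRGEMU (sketch)
collapsetrue
# on the vBRGEMU -- the IP below is an example, use your new SDNC host IP
echo "10.12.5.2" > /opt/config/sdnc_ip.txt
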
Code Block
titlevppctl tap command
collapsetrue
root@zdcpe1cpe01brgemu01-201812261515:~# vppctl tap  delete tap-0
Deleted.
root@zdcpe1cpe01brgemu01-201812261515:~# vppctl tap  delete tap-1
Deleted.
[WAIT A FEW SECONDS BEFORE DOING NEXT STEPS or you may get an error since vppctl lstack returns error.]
root@zdcpe1cpe01brgemu01-201812261515:~# /opt/bind_nic.sh
root@zdcpe1cpe01brgemu01-201812261515:~# /opt/set_nat.sh
root@zdcpe1cpe01brgemu01-201812261515:~# vppctl show int
              Name               Idx       State          Counter          Count
GigabitEthernet0/4/0              1         up       tx packets                    12
                                                     tx bytes                    3912
local0                            0        down
tap-0                             2         up       rx packets                     5
                                                     rx bytes                     410
                                                     drops                          7
                                                     ip6                            1
tap-1                             3         up       rx packets                     1
                                                     rx bytes                      70
                                                     drops                          7
                                                     ip6                            1


14. Run `vcpe.py customer`
15. Verify tunnelxconn and brg vxlan tunnels are set up correctly
16. Set up vgw and brg dhcp and route, and ping from brg to vgw. Note that the vgw public ip shown on Openstack Horizon may be wrong; use the vgw OAM ip to log in.

Code Block
titleTest data plane
collapsetrue
 1. ssh to vGW
 2. Restart DHCP: systemctl restart isc-dhcp-server
 3. ssh to vBRG
 4. Get IP from vGW: dhclient lstack
 5. Add route to Internet: ip route add 10.2.0.0/24 via 192.168.1.254 dev lstack
 6. ping the web server: ping 10.2.0.10
 7. wget http://10.2.0.10

17. Add the identity-url property to RegionOne with Postman (a curl sketch of the same AAI update is shown below)
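If you prefer curl over Postman, a hypothetical sketch of the usual AAI flow (GET the cloud-region, then PUT it back with identity-url added; host, port, API version, credentials, cloud owner and the identity URL are all assumptions for your environment):

Code Block
titleAdd identity-url to RegionOne (curl sketch)
collapsetrue
# GET the cloud-region and note its resource-version (all placeholder values are environment-specific assumptions)
curl -k -u AAI:AAI -H "X-FromAppId: vcpe" -H "X-TransactionId: 1" -H "Accept: application/json" \
  "https://<aai-host>:<aai-port>/aai/v16/cloud-infrastructure/cloud-regions/cloud-region/<cloud-owner>/RegionOne"
# PUT the same payload back after adding "identity-url" and keeping the returned "resource-version"
curl -k -u AAI:AAI -H "X-FromAppId: vcpe" -H "X-TransactionId: 2" -H "Content-Type: application/json" -X PUT \
  "https://<aai-host>:<aai-port>/aai/v16/cloud-infrastructure/cloud-regions/cloud-region/<cloud-owner>/RegionOne" \
  -d @cloud-region-with-identity-url.json
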
18. Add new DG in APPC for closed loop. See APPC release note for steps. CCSDK-741
19. Update gmux libevel.so. See Eric comments on vcpe test status wiki

20. Run heatbridge Robot script

21. Push the closed loop policy on PAP.
22. Run `vcpe.py loop` and verify vgmux is restarted

Code Block
titleClosed loop event messages
collapsetrue
VES_MEASUREMENT_OUTPUT event from VES collector to DCAE:
{
	"event": {
		"commonEventHeader": {
			"startEpochMicrosec": 1548802103113302,
			"sourceId": "3dcbc028-45f0-4899-82a5-bb9cc7f14b32",
			"eventId": "Generic_traffic",
			"reportingEntityId": "No UUID available",
			"internalHeaderFields": {
				"collectorTimeStamp": "Tue, 01 29 2019 10:48:33 UTC"
			},
			"eventType": "HTTP request rate",
			"priority": "Normal",
			"version": 1.2,
			"reportingEntityName": "zdcpe1cpe01mux01-201901291531",
			"sequence": 17,
			"domain": "measurementsForVfScaling",
			"lastEpochMicrosec": 1548802113113302,
			"eventName": "Measurement_vGMUX",
			"sourceName": "vcpe_vnf_9ab915ef-f44f-4fe5-a6ce_201901291531"
		},
		"measurementsForVfScalingFields": {
			"cpuUsageArray": [
				{
					"percentUsage": 0,
					"cpuIdentifier": "cpu1",
					"cpuIdle": 47.1,
					"cpuUsageSystem": 0,
					"cpuUsageUser": 5.9
				}
			],
			"measurementInterval": 10,
			"requestRate": 540,
			"vNicUsageArray": [
				{
					"transmittedOctetsDelta": 0,
					"receivedTotalPacketsDelta": 0,
					"vNicIdentifier": "eth0",
					"valuesAreSuspect": "true",
					"transmittedTotalPacketsDelta": 0,
					"receivedOctetsDelta": 0
				}
			],
			"measurementsForVfScalingVersion": 2.1,
			"additionalMeasurements": [
				{
					"name": "ONAP-DCAE",
					"arrayOfFields": [
						{
							"name": "Packet-Loss-Rate",
							"value": "0.0"
						}
					]
				}
			]
		}
	}
}



DCAE_CL_OUTPUT event from DCAE to Policy:
{
	"closedLoopEventClient": "DCAE_INSTANCE_ID.dcae-tca",
	"policyVersion": "v0.0.1",
	"policyName": "DCAE.Config_tca-hi-lo",
	"policyScope": "DCAE",
	"target_type": "VNF",
	"AAI": {
		"generic-vnf.resource-version": "1548788326279",
		"generic-vnf.nf-role": "",
		"generic-vnf.prov-status": "ACTIVE",
		"generic-vnf.orchestration-status": "Active",
		"generic-vnf.is-closed-loop-disabled": false,
		"generic-vnf.service-id": "f9457e8c-4afd-45da-9389-46acd9bf5116",
		"generic-vnf.in-maint": false,
		"generic-vnf.nf-type": "",
		"generic-vnf.nf-naming-code": "",
		"generic-vnf.vnf-name": "vcpe_vnf_9ab915ef-f44f-4fe5-a6ce_201901291531",
		"generic-vnf.model-version-id": "7dc4c0d8-e536-4b4e-92e6-492ae6b8d79a",
		"generic-vnf.model-customization-id": "a1ca6c01-8c6c-4743-9039-e34038d74a4d",
		"generic-vnf.nf-function": "",
		"generic-vnf.vnf-type": "demoVCPEvGMUX/9ab915ef-f44f-4fe5-a6ce 0",
		"generic-vnf.model-invariant-id": "637a6f52-6955-414d-a50f-0bfdbd76dac8",
		"generic-vnf.vnf-id": "3dcbc028-45f0-4899-82a5-bb9cc7f14b32"
	},
	"closedLoopAlarmStart": 1548803088140708,
	"closedLoopEventStatus": "ONSET",
	"closedLoopControlName": "ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e",
	"version": "1.0.2",
	"target": "generic-vnf.vnf-name",
	"requestID": "0e74d6df-627d-4a97-a679-be85ddad6758",
	"from": "DCAE"
}


APPC-LCM-READ event from Policy to APPC:
{
  "body": {
    "input": {
      "common-header": {
        "timestamp": "2019-01-29T23:05:42.121Z",
        "api-ver": "2.00",
        "originator-id": "923ac972-6ec1-4e34-b6e1-76dc7481d5af",
        "request-id": "923ac972-6ec1-4e34-b6e1-76dc7481d5af",
        "sub-request-id": "1",
        "flags": {}
      },
      "action": "Restart",
      "action-identifiers": {
        "vnf-id": "3dcbc028-45f0-4899-82a5-bb9cc7f14b32"
      }
    }
  },
  "version": "2.0",
  "rpc-name": "restart",
  "correlation-id": "923ac972-6ec1-4e34-b6e1-76dc7481d5af-1",
  "type": "request"
}

23. To repeat the create infra step, delete the infra vf-module stacks first and then the network stacks from the Openstack Horizon Orchestration->Stack page, then clean up the record in the sdnc DHCP_MAC table before rerunning `vcpe.py infra` (a CLI alternative for the stack cleanup is sketched below)
24. To repeat the create customer step, delete the customer stack, then clean up the tunnels by running `cleanGMUX.py gmux_public_ip` and `cleanGMUX.py brg_public_ip`. After that you can rerun the create customer command
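
If you prefer the OpenStack CLI over Horizon for the stack cleanup in step 23, a minimal sketch (stack names are examples; confirm with `openstack stack list` first):

Code Block
titleDelete vCPE stacks from the CLI (example)
collapsetrue
source <your openstack openrc file>
openstack stack list
# delete the infra vf-module stacks first, then the network stacks (names below are examples)
openstack stack delete --yes vcpe_vfmodule_e2744f48729e4072b20b_201811262136
openstack stack delete --yes <network stack name>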

25. If SDNC needs to be redeployed, you need to again distribute the service model from the SDC UI, create the ip pool, install curl, and set the SDNC VM cluster node routing table. Then you should reinstantiate the infra VNFs; otherwise you would need to change the sdnc ip address in the VNFs for the snat config.

Checklist for Dublin and El Alto Releases

  1. Model distribution by `demo-k8s.sh onap init`. This will onboard the VNFs and 4 services, i.e. infrastructure, brg, bng and gmux
  2. Run Robot `ete-k8s.sh onap distributevCPEResCust`. This step assumes step 1 successfully distributed the 4 models
  3. Add customer SDN-ETHERNET-INTERNET (need to put into vcpe init)
  4. Add identity-url to RegionOne
  5. Add route on sdnc cluster node `ip route add 10.3.0.0/24 via 10.0.101.10 dev ens3`
  6. Initialize SDNC ip pool by running from Rancher node `kubectl -n onap exec -it dev-sdnc-sdnc-0 -- /opt/sdnc/bin/addIpAddresses.sh VGW 10.5.0 22 250`
  7. Install python and other python libraries
    1. In El Alto this can be done via ~integration/test/vcpe/bin/setup.sh
  8. Change the openstack env parameters and the customer service related parameter in vcpecommon.py
    1. Make sure to change vgw_VfModuleModelInvariantUuid in vcpecommon.py based on the CSAR - it changes for every CSAR
  9. Run `vcpe.py init`
  10. Insert the custom service workflow entry in SO catalogdb
Code Block
titleInsert customer workflow into SO service table
collapsetrue
root@sb04-rancher:~# kubectl exec dev-mariadb-galera-mariadb-galera-0 -- mysql -uroot -psecretpassword -e "INSERT INTO catalogdb.service_recipe (ACTION, VERSION_STR, DESCRIPTION, ORCHESTRATION_URI, SERVICE_PARAM_XSD, RECIPE_TIMEOUT, SERVICE_TIMEOUT_INTERIM, CREATION_TIMESTAMP, SERVICE_MODEL_UUID) VALUES ('createInstance','1','vCPEResCust 2019-06-03 _04ba','/mso/async/services/CreateVcpeResCustService',NULL,181,NULL, NOW(),'6c4a469d-ca2c-4b02-8cf1-bd02e9c5a7ce')"
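
To verify the row landed, a quick check against the same table (same pod and credentials as the insert above):

Code Block
titleVerify service_recipe entry
collapsetrue
root@sb04-rancher:~# kubectl exec dev-mariadb-galera-mariadb-galera-0 -- mysql -uroot -psecretpassword -e "SELECT ACTION, ORCHESTRATION_URI, SERVICE_MODEL_UUID FROM catalogdb.service_recipe WHERE SERVICE_MODEL_UUID='6c4a469d-ca2c-4b02-8cf1-bd02e9c5a7ce'"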

11. Run `vcpe.py infra`

12. Install curl command inside sdnc-sdnc-0 container

13. From Rancher node run `healthcheck-k8s.py onap` to check connectivity from sdnc to brg and gmux

14. Update libevel.so in vGMUX

15. Run heatbridge

16. Push the new Policy. Follow Jorge's steps in 

Jira
serverONAP JIRA
serverId425b2b0a-557c-3c0c-b515-579789cceedb
keyINT-1089

Code Block
titlePush policy
collapsetrue
root@dev-robot-robot-66c9dbc759-8j7lr:/# curl -k --silent --user 'healthcheck:zb!XztG34' -X POST "https://policy-api:6969/policy/api/v1/policytypes/onap.policies.controlloop.Operational/versions/1.0.0/policies" -H "Accept: application/json" -H "Content-Type: application/json" -d @operational.vcpe.json.txt
{"policy-id":"operational.vcpe","policy-version":"1","content":"controlLoop%3A%0D%0A++version%3A+2.0.0%0D%0A++controlLoopName%3A+ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e%0D%0A++trigger_policy%3A+unique-policy-id-1-restart%0D%0A++timeout%3A+3600%0D%0A++abatement%3A+true%0D%0A+%0D%0Apolicies%3A%0D%0A++-+id%3A+unique-policy-id-1-restart%0D%0A++++name%3A+Restart+the+VM%0D%0A++++description%3A%0D%0A++++actor%3A+APPC%0D%0A++++recipe%3A+Restart%0D%0A++++target%3A%0D%0A++++++type%3A+VM%0D%0A++++retry%3A+3%0D%0A++++timeout%3A+1200%0D%0A++++success%3A+final_success%0D%0A++++failure%3A+final_failure%0D%0A++++failure_timeout%3A+final_failure_timeout%0D%0A++++failure_retries%3A+final_failure_retries%0D%0A++++failure_exception%3A+final_failure_exception%0D%0A++++failure_guard%3A+final_failure_guard"}
root@dev-robot-robot-66c9dbc759-8j7lr:/# curl --silent -k --user 'healthcheck:zb!XztG34' -X POST "https://policy-pap:6969/policy/pap/v1/pdps/policies" -H "Accept: application/json" -H "Content-Type: application/json" -d @operational.vcpe.pap.json.txt 

{
"policies": [
{
"policy-id": "operational.vcpe",
"policy-version": 1
}
]
}

17. Start the closed loop by running `./vcpe.py loop` to trigger a packet drop VES event. You may need to run the command twice if the first run fails

[Note: you may need to comment out the set_closed_loop call in vcpe.py (line 165) if

Jira
serverONAP JIRA
serverId425b2b0a-557c-3c0c-b515-579789cceedb
keyINT-1323
is not closed.]

#vcpecommon.set_closed_loop_policy(policy_template_file)

Code Block
titlePacket measurement event received on VES collector
collapsetrue
[2019-06-04 11:03:49,822][INFO ][pool-5-thread-20][org.onap.dcae.common.EventProcessor] - QueueSize:0 EventProcessor Removing element: {"VESversion":"v5","VESuniqueId":"88f3548c-1a93-4f1d-8a2a-001f8d4a2aea","event":{"commonEventHeader":{"startEpochMicrosec":1559646219672586,"sourceId":"d92444f5-1985-4e15-807e-b8de2d96e489","eventId":"Generic_traffic","reportingEntityId":"No UUID available","eventType":"HTTP request rate","priority":"Normal","version":1.2,"reportingEntityName":"zdcpe1cpe01mux01-201906032354","sequence":9,"domain":"measurementsForVfScaling","lastEpochMicrosec":1559646229672586,"eventName":"Measurement_vGMUX","sourceName":"vcpe_vnf_vcpe_vgmux_201906032354"},"measurementsForVfScalingFields":{"cpuUsageArray":[{"percentUsage":0,"cpuIdentifier":"cpu1","cpuIdle":100,"cpuUsageSystem":0,"cpuUsageUser":0}],"measurementInterval":10,"requestRate":492,"vNicUsageArray":[{"transmittedOctetsDelta":0,"receivedTotalPacketsDelta":0,"vNicIdentifier":"eth0","valuesAreSuspect":"true","transmittedTotalPacketsDelta":0,"receivedOctetsDelta":0}],"measurementsForVfScalingVersion":2.1,"additionalMeasurements":[{"name":"ONAP-DCAE","arrayOfFields":[{"name":"Packet-Loss-Rate","value":"22.0"}]}]}}}

18. Stop the closed loop testing with `./vcpe.py noloss`

Frankfurt vCPE.py Log for creating networks:


View file
namevcpe.20200625.log
height250

Typical Errors and Solutions

...

When running "vcpe.py infra" command, if you see error message about subnet can't be found. It may be because your python-openstackclient is not the latest version and don't support "openstack subnet set --name" command option. Upgrade the module with "pip install --upgrade python-openstackclient".subnet can't be found. It may be because your python-openstackclient is not the latest version and don't support "openstack subnet set --name" command option. Upgrade the module with "pip install --upgrade python-openstackclient".


Unable to generate VM name error from SDNC

Received error from SDN-C: Unable to generate VM name: naming-policy-generate-name: input.policy-instance-name is not set and input.policy is ASSIGN.

To resolve this: check whether the vgw_VfModuleModelInvariantUuid parameter in the vcpecommon.py script is updated with the VfModuleModelInvariantUuid from your ResCust_svc CSAR. Don't forget to update this for every new customer.

Add this to CheckList:

# CHANGEME: vgw_VfModuleModelInvariantUuid is in rescust service csar, look in service-VcpesvcRescust1118-template.yml for groups vgw module metadata. TODO: read this value automatically
self.vgw_VfModuleModelInvariantUuid = '26d6a718-17b2-4ba8-8691-c44343b2ecd2'
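
A hypothetical sketch for pulling this value out of the CSAR without opening it by hand (the CSAR name and the template path inside the CSAR vary per distribution, so both are assumptions; the template file name below is the one mentioned in the comment above):

Code Block
titleFind the vgw module invariant UUID in the CSAR (sketch)
collapsetrue
# list the service templates packaged in the CSAR (names vary per distribution)
unzip -l rescust.csar | grep -i template
# print the metadata and look for the vgw module's invariant UUID
unzip -p rescust.csar Definitions/service-VcpesvcRescust1118-template.yml | grep -i -B 2 -A 2 invariant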