Dublin

Roll back to a previous SDC image

Update SDC image version in OOM
root@sb00-rancher:~# find oom/kubernetes/sdc -name 'values.yaml' -exec grep -Hn "image:" {} \;
oom/kubernetes/sdc/charts/sdc-onboarding-be/values.yaml:31:image: onap/sdc-onboard-backend:1.4-STAGING-latest
oom/kubernetes/sdc/charts/sdc-wfd-fe/values.yaml:31:image: onap/workflow-frontend:latest
oom/kubernetes/sdc/charts/sdc-dcae-fe/values.yaml:30:image: onap/dcae-fe:1.3-STAGING-latest
oom/kubernetes/sdc/charts/sdc-dcae-tosca-lab/values.yaml:30:image: onap/dcae-tosca-app:1.3-STAGING-latest
oom/kubernetes/sdc/charts/sdc-es/values.yaml:34:image: onap/sdc-elasticsearch:1.4-STAGING-latest
oom/kubernetes/sdc/charts/sdc-wfd-be/values.yaml:31:image: onap/workflow-backend:latest
oom/kubernetes/sdc/charts/sdc-dcae-be/values.yaml:30:image: onap/dcae-be:1.3-STAGING-latest
oom/kubernetes/sdc/charts/sdc-fe/values.yaml:31:image: onap/sdc-frontend:1.4-STAGING-latest
oom/kubernetes/sdc/charts/sdc-dcae-dt/values.yaml:30:image: onap/dcae-dt:1.2-STAGING-latest
oom/kubernetes/sdc/charts/sdc-be/values.yaml:31:image: onap/sdc-backend:1.4-STAGING-latest
oom/kubernetes/sdc/charts/sdc-kb/values.yaml:31:image: onap/sdc-kibana:1.4-STAGING-latest
oom/kubernetes/sdc/charts/sdc-cs/values.yaml:31:image: onap/sdc-cassandra:1.4-STAGING-latest
root@sb00-rancher:~# find . -name 'values.yaml' -exec sed -i 's/1\.4-STAGING-latest/1\.4\.1-20190516T202520Z/g' {} \;
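Before redeploying, it is worth re-running the same grep as above to confirm every SDC chart now references the rolled-back tag (a sanity check only):
root@sb00-rancher:~# find oom/kubernetes/sdc -name 'values.yaml' -exec grep -Hn "image:" {} \;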


Drop SDC keyspaces from shared Cassandra
root@sb02-rancher:~# kubectl -n onap exec -it dev-cassandra-cassandra-0 -- cqlsh -u cassandra -p cassandra -e "describe keyspaces"

sdccomponent  system_auth  sdcaudit     system_distributed  sdctitan     
workflow      system       dox          system_traces       sdcrepository
zusammen_dox  aaigraph     sdcartifact  zusammen_workflow 

root@sb02-rancher:~# kubectl -n onap exec -it dev-cassandra-cassandra-0 -- cqlsh -u cassandra -p cassandra -e "drop keyspace sdctitan"
<stdin>:1:OperationTimedOut: errors={'127.0.0.1': 'Request timed out while waiting for schema agreement. See Session.execute[_async](timeout) and Cluster.max_schema_agreement_wait.'}, last_host=127.0.0.1
command terminated with exit code 2
root@sb02-rancher:~# kubectl -n onap exec -it dev-cassandra-cassandra-0 -- cqlsh -u cassandra -p cassandra -e "drop keyspace sdcrepository"
root@sb02-rancher:~# kubectl -n onap exec -it dev-cassandra-cassandra-0 -- cqlsh -u cassandra -p cassandra -e "drop keyspace sdcartifact"
root@sb02-rancher:~# kubectl -n onap exec -it dev-cassandra-cassandra-0 -- cqlsh -u cassandra -p cassandra -e "drop keyspace sdccomponent"
root@sb02-rancher:~# kubectl -n onap exec -it dev-cassandra-cassandra-0 -- cqlsh -u cassandra -p cassandra -e "drop keyspace sdcaudit"
<stdin>:1:OperationTimedOut: errors={'127.0.0.1': 'Client request timeout. See Session.execute[_async](timeout)'}, last_host=127.0.0.1
command terminated with exit code 2
root@sb02-rancher:~# kubectl -n onap exec -it dev-cassandra-cassandra-0 -- cqlsh -u cassandra -p cassandra -e "describe keyspaces"

workflow      system_auth  zusammen_workflow  system_distributed  aaigraph
zusammen_dox  system       dox                system_traces     
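Note that the two drops above timed out on the client side only; the final describe shows both keyspaces were in fact removed. If a keyspace does survive a timeout, one option is to re-run the drop with a longer client timeout (a sketch; IF EXISTS makes the retry safe to repeat):
root@sb02-rancher:~# kubectl -n onap exec -it dev-cassandra-cassandra-0 -- cqlsh -u cassandra -p cassandra --request-timeout=120 -e "drop keyspace if exists sdctitan"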


Redeploy SDC
root@sb00-rancher:~# ./integration/deployment/heat/onap-rke/scripts/redeploy-module.sh sdc
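After the redeploy, it can help to watch the SDC pods until they are Running and Ready (a sketch; the dev- release prefix matches the Cassandra pod name used elsewhere on this page):
root@sb00-rancher:~# watch "kubectl -n onap get pods | grep dev-sdc"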


Access Shared Cassandra DB

Common Cassandra DB access
root@staging-rancher:~/oom/kubernetes/sdc/charts/sdc-cs# kubectl -n onap exec -it dev-cassandra-cassandra-0 bash
root@dev-cassandra-cassandra-0:/# cqlsh -u cassandra -p cassandra 
Connected to cassandra at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 2.2.14 | CQL spec 3.3.1 | Native protocol v4]
Use HELP for help.
cassandra@cqlsh> describe keyspaces;

sdccomponent system_auth sdcaudit system_distributed sdctitan 
workflow system dox system_traces sdcrepository
zusammen_dox aaigraph sdcartifact zusammen_workflow 
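From the same cqlsh session, individual SDC keyspaces can be inspected further, for example:
cassandra@cqlsh> describe keyspace sdcaudit;
cassandra@cqlsh> use sdcaudit;
cassandra@cqlsh:sdcaudit> describe tables;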

Casablanca



Credentials for basic authentication when accessing with REST

Username    Password
aai         Kp8bJ4SXszM0WXlhak3eHlcse2gAw84vaoGGmJvUy2U
vid         Kp8bJ4SXszM0WXlhak3eHlcse2gAw84vaoGGmJvUy2U
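A minimal curl sketch for using these credentials; the host, port, and API path below are placeholders that depend on the deployment, not values taken from this page:
curl -sk -u aai:Kp8bJ4SXszM0WXlhak3eHlcse2gAw84vaoGGmJvUy2U \
     -H "X-FromAppId: test" -H "X-TransactionId: test-001" -H "Accept: application/json" \
     "https://<aai-host>:<aai-port>/aai/<version>/service-design-and-creation/services"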



vIMS service design and creation video (voice is in Chinese)


CSARs used by CMCC for R2

Huawei: vSBC_update_v03.csar

Nokia: cscf.v2.csar / etsi_config.json


CSARs used by CMCC for R1

ZTE: resource-ZteEpcMmeVf-csar_fix.csar | ZteEpcSpgwVf-csar.csar

Huawei: Huawei_vMME.csar | Huawei_vSPGW_fixed.csar | Huawei_vHSS.csar | Huawei_vPCRF_aligned_fixed.csar | vSBC_aligned.csar

Nokia: vCSCF_v3.0.csar


If there is an update to an SDC TOSCA node type, you need to delete the Cassandra database files from the SDC VM and re-download the docker images:

SDC Normative TOSCA Node Type Update
ubuntu@vm03-sdc:~$ sudo rm -rf /data/CS/
ubuntu@vm03-sdc:~$ sudo bash -x /opt/sdc_vm_init.sh
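Optionally, confirm that the init script brought the SDC containers back up with the expected images (assuming the SDC VM runs them under docker, as the init script implies):
ubuntu@vm03-sdc:~$ sudo docker ps --format 'table {{.Names}}\t{{.Image}}'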
SDC Enable Debug Log
Inside the sdc-BE container, change the log configuration:
root@405b964017de:/var/lib/jetty# vi /var/lib/jetty/config/catalog-be/logback.xml 
Set enable-all-log to true:
	<property scope="context" name="enable-all-log" value="true" />

Change the log level from INFO to DEBUG in all places, as sketched below.
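A one-pass way to do this (a sketch only; back up the file first, since the exact attribute formatting in logback.xml can vary):
root@405b964017de:/var/lib/jetty# cp config/catalog-be/logback.xml config/catalog-be/logback.xml.bak
root@405b964017de:/var/lib/jetty# sed -i 's/"INFO"/"DEBUG"/g' config/catalog-be/logback.xml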
SDC-BE Log Files
On the SDC VM, run:
ubuntu@vm03-sdc:~$ tail -f /data/logs/BE/SDC/SDC-BE/debug.log
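If the full debug stream is too noisy, it can be filtered on the fly, for example:
ubuntu@vm03-sdc:~$ tail -f /data/logs/BE/SDC/SDC-BE/debug.log | grep -i --line-buffered error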


The SDC image includes VoLTE CSARs that are used for a self-test when SDC starts. The test result is written to an HTML report:

CSAR Onboarding SDC Self Test Report
root@vm03-sdc:~# ls -l /data/logs/sdc-sanity/ExtentReport/SDC*.html
-rw-r--r-- 1 root root 31929 Oct 29 13:51 /data/logs/sdc-sanity/ExtentReport/SDC_CI_Extent_Report.html
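The report is easiest to read in a browser; one way is to copy it off the VM first (the host below is a placeholder):
$ scp ubuntu@<sdc-vm-ip>:/data/logs/sdc-sanity/ExtentReport/SDC_CI_Extent_Report.html .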


The SDC healthcheck URL is http://10.12.5.154:30205/sdc2/rest/healthCheck; sample output:

SDC healthcheck sample output
{
  "sdcVersion": "1.3.6",
  "siteMode": "unknown",
  "componentsInfo": [
    {
      "healthCheckComponent": "BE",
      "healthCheckStatus": "UP",
      "version": "1.3.6",
      "description": "OK"
    },
    {
      "healthCheckComponent": "TITAN",
      "healthCheckStatus": "UP",
      "description": "OK"
    },
    {
      "healthCheckComponent": "ES",
      "healthCheckStatus": "UP",
      "version": "1.3.6",
      "description": "OK"
    },
    {
      "healthCheckComponent": "DE",
      "healthCheckStatus": "DOWN",
      "description": "U-EB cluster is not available"
    },
    {
      "healthCheckComponent": "CASSANDRA",
      "healthCheckStatus": "UP",
      "description": "OK"
    },
    {
      "healthCheckComponent": "ON_BOARDING",
      "healthCheckStatus": "UP",
      "version": "1.3.6",
      "description": "OK",
      "componentsInfo": [
        {
          "healthCheckComponent": "ZU",
          "healthCheckStatus": "UP",
          "description": "OK"
        },
        {
          "healthCheckComponent": "BE",
          "healthCheckStatus": "UP",
          "version": "1.3.6",
          "description": "OK"
        },
        {
          "healthCheckComponent": "CAS",
          "healthCheckStatus": "UP",
          "version": "2.1.17",
          "description": "OK"
        }
      ]
    },
    {
      "healthCheckComponent": "DCAE",
      "healthCheckStatus": "UP",
      "version": "1.3.0",
      "description": "OK",
      "componentsInfo": [
        {
          "healthCheckComponent": "BE",
          "healthCheckStatus": "UP",
          "version": "1.3.0",
          "description": "OK"
        },
        {
          "healthCheckComponent": "TOSCA_LAB",
          "healthCheckStatus": "UP",
          "version": "1.3.0",
          "description": "OK"
        }
      ]
    }
  ]
}
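For a quick status-only view of the same endpoint from the command line (assumes jq is installed):
curl -s http://10.12.5.154:30205/sdc2/rest/healthCheck | jq -r '.componentsInfo[] | "\(.healthCheckComponent)\t\(.healthCheckStatus)"'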



SDC ignores zip files in the vendor CSAR Artifacts/ directory. See SDC-483.

The workaround: after VSP onboarding and import, go to Deployment Artifacts in the VNF design GUI, click Add Artifact, and choose Type "Other". From there you should be able to upload your zip file. The zip file will be stored under the Artifacts/Other directory of the SDC output TOSCA CSAR file.
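To confirm the workaround took effect, the downloaded service CSAR can be inspected (the file name below is an example):
$ unzip -l <service>.csar | grep "Artifacts/Other"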


VoLTE Service Design

Step 1. Create a new VSP by uploading the vendor VNF CSAR.




Step 2. Create the VNF by importing the VSP, and submit it for testing.




11 Comments

  1. Creating a VSP from a network package requires a zip file with YAML files for base (and optionally volume, env), with no directories,
    and the CSARs attached don't match this. So how did you create the VSPs?

  2. These VSPs are provided by vendors as TOSCA CSAR files, not HEAT templates.

  3. Hi Mr. Yang,

      I ran into a problem when importing the VSP created from the package Huawei_vSPGW_fixed.csar. When I import it on the home page and click Create, the backend returns an error. I debugged the code and found that toscaOperationFacade doesn't support the node type "tosca.nodes.nfv.VduCpd", which is defined in tosca-simple-nfv-1.1.yaml.

    Could you help me with this?

    Thanks a lot


  4. Hi Yang, I have the same question as Arik. I don't have a Heat template. In the SDC UI, when creating a VSP, I need to upload a ZIP file. How does the vendor CSAR work, since the zip file cannot have folders? Does the vendor CSAR have a specific format?

    Thanks

    Jennie

  5. Jennie,

    a ZIP can preserve folders inside it. The ONAP CSAR spec is here: Csar Structure

  6. Mr. Yang,

         Could you update the CSAR files for Casablanca? The CSARs on this page give an error when importing a VSP.

    1. Could you comment again in English?

      1. Hi,

              Can you update the CSARs for the Casablanca version?

        1. Sorry, the CSARs were provided by vendors. VoLTE did not participate in Casablanca use case testing, so we did not get any CSAR updates from the vendors.

  7. Yang,

    We are getting a parsing exception when we upload the vendor CSARs for VoLTE provided in ONAP (Huawei_vSPGW_fixed.csar and the other vendor CSARs). As you said, the vendors have not provided updated CSARs. So how can we create the VoLTE template in SDC?

    1. Hi Goutam,

      Sorry, I no longer work on the ONAP project. You can contact the ONAP Integration team lead.

      Yang