NFV Approach:

  1. A capability type for infrastructure requirements (nfv.capabilities.Compute)
  2. A node type (nfv.nodes.Compute) with a capability of the nfv.capabilities.Compute type
  3. A VDU node "connects" to a Compute node through a requirement that targets this Compute capability.

node_templates:
  my_vnf:
    requirements:
      - host:
          node: my_compute
          capability: host
  
  my_compute:
    type: nfv.nodes.Compute
    capabilities:
      host:
        properties:
          num_cpus: 4
          memory_size: 100 MB
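
The capability and node types referenced above might be declared roughly as follows (a sketch only; the actual NFV definitions may differ in naming and detail, and the property declarations simply mirror the template above):

capability_types:
  nfv.capabilities.Compute:        # name from step 1 above; properties assumed to mirror the template
    derived_from: tosca.capabilities.Root
    properties:
      num_cpus:
        type: integer
      memory_size:
        type: scalar-unit.size

node_types:
  nfv.nodes.Compute:               # name from step 2 above
    derived_from: tosca.nodes.Root
    capabilities:
      host:
        type: nfv.capabilities.Compute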



ONAP Approach:

  1. A capability type for infrastructure requirements (onap.capabilities.Compute)
  2. A VDU node expresses its infrastructure requirements as a TOSCA requirement with the node_filter construct


node_templates:
  my_vdu:
    requirements:
      - host:
          node_filter:
            capabilities: onap.capabilities.Compute
            properties:
              num_cpus:
                - equal: 4
              memory_size:
                - greater_or_equal: 100 MB                                        
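
For illustration, the type definitions behind this example might look roughly like the sketch below (the VDU node type name and the property declarations are assumptions, chosen to mirror the template above):

capability_types:
  onap.capabilities.Compute:       # name from step 1 above; properties assumed
    derived_from: tosca.capabilities.Root
    properties:
      num_cpus:
        type: integer
      memory_size:
        type: scalar-unit.size

node_types:
  onap.nodes.VDU:                  # hypothetical VDU node type, for illustration only
    derived_from: tosca.nodes.Root
    requirements:
      - host:
          capability: onap.capabilities.Compute
          occurrences: [ 1, 1 ]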




The approach can be extended into the "decomposed VDU variant":


node_templates:
  my_vnf:
    requirements:
      - host:
          node: my_compute
          capability: host

  my_compute:
    type: onap.nodes.Compute
    capabilities:
      host: #...
    requirements:
      - host:
          node_filter:
            capabilities: onap.capabilities.Compute
            properties:
              num_cpus:
                - equal: 4
              memory_size:
                - greater_or_equal: 100 MB                                        
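
In this variant the onap.nodes.Compute type both offers a host capability (toward the VNF) and declares a host requirement of its own (toward the infrastructure). A rough sketch, with the details assumed:

node_types:
  onap.nodes.Compute:
    derived_from: tosca.nodes.Root
    capabilities:
      host:
        type: onap.capabilities.Compute          # offered upward to the VNF
    requirements:
      - host:
          capability: onap.capabilities.Compute  # needed downward from the infrastructure
          occurrences: [ 1, 1 ]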
                        
 



  • This solution filters on property values (e.g. greater_or_equal), while the NFV one looks for an exact match on property values
  • A requirement can be exposed through the substitution mapping, while the NFV node is bound to stay inside the topology. That is, in my approach we always have a summary of all infra requirements at the topmost level of the model nesting; with the NFV solution, the orchestrator may need to scan the whole model in order to collect all requirements (see the substitution-mapping sketch after this list)
  • In the ONAP solution, the designer can always add a deployment artifact to the “requiring” node, and the requirements will stay unfulfilled. In the NFV proposal, adding a deployment artifact to the Compute node (to launch it with the OS only) means the Compute node immediately stops being abstract, and the orchestrator will think that the infra requirements have been fulfilled within the model
  • Modeling a requirement as a capability is quite counter-intuitive and less readable...
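
To make the second point concrete, a substitution-mapping sketch (the VNF node type name is hypothetical) could expose the unfulfilled host requirement at the top level:

topology_template:
  substitution_mappings:
    node_type: onap.nodes.VNF        # hypothetical abstract VNF node type
    requirements:
      host: [ my_compute, host ]     # the unfulfilled infra requirement surfaces at the topmost level
  node_templates:
    my_compute:
      type: onap.nodes.Compute
      # the host requirement is deliberately left unfulfilled here and mapped upward above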





1 Comment

  1. My biggest pet peeve about the examples in the TOSCA spec is that they confuse capabilities and requirements. Invariably, capabilities are used where requirements should be used instead. This is based on the (incorrect) assumption that TOSCA orchestrators should treat Compute nodes as "built-in" nodes that have no further dependencies on anything else. This is obviously not correct, since Compute nodes (at least virtual ones) depend on Cloud infrastructure on top of which the Compute node needs to be instantiated, so a Compute node itself should have a HostedOn requirement, which should include CPU, Memory, Disk Size, and other parameters.
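
To illustrate the commenter's point, a virtual Compute node type with an explicit dependency on cloud infrastructure might be sketched along these lines (the example.* type and capability names are invented for illustration and are not part of the TOSCA spec):

capability_types:
  example.capabilities.CloudHost:                # hypothetical capability of the cloud infrastructure
    derived_from: tosca.capabilities.Root
    properties:
      num_cpus:
        type: integer
      mem_size:
        type: scalar-unit.size
      disk_size:
        type: scalar-unit.size

node_types:
  example.nodes.VirtualCompute:                  # hypothetical type, for illustration only
    derived_from: tosca.nodes.Compute
    requirements:
      - host:                                    # the Compute node itself depends on something to host it
          capability: example.capabilities.CloudHost
          relationship: tosca.relationships.HostedOn
          occurrences: [ 1, 1 ]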