• Project Name:

  • Proposed name for the project: Modeling
  • Proposed name for the repository: org.onap.modeling modeling

Project description:

 The unified model-driven approach uses models, rather than code development, as the source of data for generating processes/code and the workflows that follow. This makes the system more flexible and future-proof, easier to update, and usable for cross-platform solutions, since the "only" thing needed is model update and manipulation through the engine.

  • This project will produce the tools related to modeling.

Scope:

  • Modeling tools and converters
  • Code - parsers (unified API interfaces) and tools (translators)
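To make "unified API interfaces" concrete, here is a minimal sketch of a common parser interface that TOSCA and YANG parsers could both implement. All class and method names are invented for illustration; they are not the project's actual API.

```python
# Hypothetical sketch of a unified parser API; the class/method names are
# illustrative and not part of any actual ONAP Modeling deliverable.
from abc import ABC, abstractmethod
from typing import Any, Dict


class TemplateParser(ABC):
    """Common interface callers could code against, regardless of format."""

    @abstractmethod
    def parse(self, document: Dict[str, Any]) -> Dict[str, Any]:
        """Normalize an already-loaded template into a common structure."""


class ToscaParser(TemplateParser):
    def parse(self, document):
        # TOSCA service templates keep nodes under topology_template.
        topo = document.get("topology_template", {})
        return {"format": "tosca", "nodes": topo.get("node_templates", {})}


class YangParser(TemplateParser):
    def parse(self, document):
        # Sketch only: assume the YANG module was pre-parsed into a dict.
        return {"format": "yang", "nodes": document.get("container", {})}
```

A consumer project (SDC, VF-C, etc.) would then depend only on `TemplateParser`, not on the format-specific classes.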

Architecture Alignment:

  • How does this project fit into the rest of the ONAP Architecture?
  • The following projects are going to use this project:
    • SDC
    • VNF SDK
    • VF-C
    • A&AI
    • SO
    • ICE
    • Policy Framework
  • Are there dependencies with other open source projects?
    • None

Deliverables for release 1:

Parsers and translators:

  • TOSCA parsers
  • YANG parsers
  • TOSCA to Heat
  • YANG to TOSCA
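As a rough sketch of what the "TOSCA to Heat" deliverable involves, the fragment below maps one TOSCA node type to a Heat resource type. The single-entry type map and function name are illustrative only; a real translator (compare OpenStack's heat-translator project) must handle the full TOSCA grammar, inputs, and intrinsic functions.

```python
# Illustrative TOSCA -> Heat translation step; the type map is a toy.
TYPE_MAP = {"tosca.nodes.Compute": "OS::Nova::Server"}


def translate_nodes(tosca_nodes):
    """Translate TOSCA node_templates into a Heat 'resources' section."""
    resources = {}
    for name, node in tosca_nodes.items():
        heat_type = TYPE_MAP.get(node["type"])
        if heat_type is None:
            continue  # unmapped types need dedicated handling in practice
        resources[name] = {
            "type": heat_type,
            "properties": node.get("properties", {}),
        }
    return {"heat_template_version": "2016-10-14", "resources": resources}
```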


Collaboration:

Resources:

Use the above information to create a key project facts section on your project page

Key Project Facts

Project Name:

  • JIRA project name: modeling
  • JIRA project prefix: modeling

Repo name: org.onap.modeling
Lifecycle State:
Primary Contact:
Project Lead:
Mailing list tag: [modeling]
Committers:


Committer       | Email                        | Linux Foundation ID
Alex Vul        | alex.vul@intel.com           | avul
Amir Levy       | amir@gigaspaces.com          | amirlevy
Arthur Berezin  | Arthur@gigaspaces.com        | ArthurBerezin
Bruce Thompson  | brucet@cisco.com             |
Chengli Wang    | wangchengli@chinamobile.com  | wangchengli
Ethan Lynn      | ethanlynnl@vmware.com        | ethanlynnl
Hu Dong         | donghu@raisecom.com          | donghu1102
Hui Deng        | denghui02@hotmail.com        | denghui02
Ling Li         | lenon_lee@hotmial.com        | lenon_lee
Lingli Deng     | denglingli@chinamobile.com   | lingli
Liuhe Zhong     | zhongheliu@boco.com.cn       | zhongheliu
Maopeng Zhang   | zhang.maopeng1@zte.com.cn    | Maopengzhang
Rittwik Jana    | rjana@research.att.com       | Rjana
Sandeep Shah    | ss00473517@techmahindra.com  | SandeepLinux
Shitao Li       | lishitao@huawei.com          | Lishitao
Thinh Nguyenphu | thinh.nguyenphu@nokia.com    |
Xinhui Li       | lxinhui@vmware.com           | xinhuili
Yanbin Shi      | shiyb.gd@chinatelecom.cn     | YanbinShi
Yannan Han      | hanyanan@raisecom.com        | hanyanan
Yuanwei Yang    | yangyuanwei@boco.com.cn      | yuanweiyang
Zhaoxing Meng   | meng.zhaoxing1@zte.com.cn    | Zhaoxing



Link to TSC approval: 

Link to approval of additional submitters: 



17 Comments

  1. I am wondering how the modeling project will model NFVI resources, since I found only "Resource models (that contain VF and VFC)".

    My concern is: will the modeling project cover the information model of NFVI resources? This is critical to enabling VF/VFC vendors and service providers to leverage NFVI resources such as EPA features in an automated way.

    One possible scenario: when a VF/VFC is implemented assuming certain EPA features exist, the vendor should specify this in a specification via VNF-SDK, and Multi-VIM will report those EPA resources to the Catalog and A&AI, so the SDC user can design a service based on these VF/VFCs and the operator can deploy such a service later.

    Without a resource model to describe the catalog and quantity of EPA capabilities/resources, both VF/VFC vendors and service providers will have to align with each other offline, which is a deviation from automation.

    1. That's an excellent point. And I think that the modeling project is there to make sure we build, jointly and with code (not just UML / text), a definition that spans all ONAP projects to deliver on the promise of an automated platform. 

      I think that our first goal is to figure out the right balance between the autonomy that various ONAP projects get over their architecture (which includes the models they will need) and the "centralized" approach, where the modeling project dictates a uniform model for the projects. 

      During the ONAP developers' summit, we agreed that the modeling project will provide a repository to hold all models, and guidelines for creating models. 

      I hope that for the actual project proposal we will be able to prescribe a working model for ONAP to be successful. 

  2. I don't think Modeling could dictate to other code projects; the code we have is only about parsers and translators. For the VNF/NS spec, the modeling project needs to work together with VNF SDK/SO/VF-C et al.; for the information model, we need to work with the A&AI project as well. Please help by joining our contributor list if you would. Thanks.

  3. Chat from 16-MAY-2017 call:

    From Bryan Sullivan to Everyone: 06:38 AM
    I disagree in principle that declarative methods can't handle complex use cases.
    From DENG Hui to Everyone: 06:39 AM
    Bryan, do you have example, how vendor's volte could be deployed based on declarative methods?
    From Bryan Sullivan to Everyone: 06:39 AM
    Dependencies as an example is clearly supported by a declarative model, and can be used to derive VNFC sequencing.
    From Alex Vul to Everyone: 06:40 AM
    I agree with Bryan...
    From Bryan Sullivan to Everyone: 06:40 AM
    You need to list the particular aspects of the model (actions in a deployment process for example) that need to be assessed per support in declarative means. Do we have an IRC channel for the meeting? Will be more effective than this chat tool.
    From Alex Vul to Everyone: 06:42 AM
    just look at the VMware proprietary implementation or the BMC one, or examples of complex SAP landscapes that have been modelled in TOSCA; the complexity of SAP landscapes rivals what we need to do in terms of network services, VNFs and SFCs
    From Bryan Sullivan to Everyone: 06:42 AM
    Also, the Cloudify DSL has a lot of complex features that they have extended TOSCA with. These have not been worked into the standard but are a clear example of how to declaratively model complex features.
    From Alex Vul to Everyone: 06:43 AM
    I have been working with declarative orchestrators for more than 10 years - they can do the job...
    From maopeng to Everyone: 06:44 AM
    could TOSCA deal with both deploy and upgrade?
    From Bryan Sullivan to Everyone: 06:44 AM
    Same for JuJu, which uses a largely declarative approach, supplemented by scripts where needed for any detailed actions not yet supported by the model DSL. This is a common technique, and is a design decision in the end (how much to script based upon lifecycle hook calls).
    From DENG Hui to Everyone: 06:45 AM
    one more clarification again: is there any vendor's VOLTE could be deployed based on declarative model? please show the example,
    From Bryan Sullivan to Everyone: 06:45 AM
    I see no reason that upgrade cannot be represented by a reference to a dependent artifact that has changed version (upgrade or downgrade), based upon the current running state of a VNF.
    From DENG Hui to Everyone: 06:45 AM
    today's svnfm is under JUJU,
    From Alex Vul to Everyone: 06:45 AM
    TOSCA *is* for deployment
    From maopeng to Everyone: 06:46 AM
    @Alex, just for deployment?
    From Bryan Sullivan to Everyone: 06:46 AM
    Deng, the "vendor's VOLTE" example needs to be defined re requirements by the vendor. We can't in a generic sense respond to a particular example without understanding the assumptions/requirements of that example.
    From Alex Vul to Everyone: 06:46 AM
    you can also orchestrate upgrades/updates
    From Amir Levy to Everyone: 06:47 AM
    @DENG - we have run VoLTE over a declarative model using Cloudify
    From Alex Vul to Everyone: 06:47 AM
    the containment graph is helpful for in-service upgrades
    From Amir Levy to Everyone: 06:47 AM
    https://www.youtube.com/watch?v=NfkCkj3Hd8U
    From DENG Hui to Everyone: 06:47 AM
    @amir, which operator, and which telecom vendor? Don't tell me it is open source
    From maopeng to Everyone: 06:48 AM
    one TOSCA file or two files to do deploy and update?
    From Amir Levy to Everyone: 06:48 AM
    @DENG check out - OPNFV / ClearWater
    From Bryan Sullivan to Everyone: 06:48 AM
    Deng, where possible we should always use open source examples to develop ONAP functionality. (if I understood your point/question)
    From DENG Hui to Everyone: 06:49 AM
    Bryan, we are talking about commercial value and commercial deployment, not just academic work based on open source VNFs
    From Bryan Sullivan to Everyone: 06:51 AM
    If we can't derive clear deployable platform value from developing a platform using open source reference VNFs, then we might as well go home now. That does not impact your ability to deploy proprietary "real" VNFs, or diminish the potential quality of the platform to support them. We just have to do a thorough job using open source tools including VNFs.
    From maopeng to Everyone: 06:51 AM
    if the deploy and the update have different relations or other dependencies on running state, how to do that?
    From Alex Vul to Everyone: 06:52 AM
    can you elaborate maopeng...
    From Bryan Sullivan to Everyone: 06:52 AM
    We are not in a science experiment here; we are working in the open on a real deployable platform. We just need to keep all the components of the project discussion open and free.
    @alex, we can discuss offline
    From Me to Everyone: 06:54 AM
    Nokia successfully deployed a commercial IMS using TOSCA-driven templates described here: https://tools.ext.nokia.com/asset/200827
    From Alex Vul to Everyone: 06:55 AM
    sure... i spent 15 years working on app modeling... what Huabing is proposing is actually one implementation that I have done...
    From Bryan Sullivan to Everyone: 06:55 AM
    Do these chat notes get published in the minutes? If not we need to start using IRC so the notes are minuted.

  4. Some review comments on the project proposal:

    1. For the Architectural Alignment, we should consider embedding an image of the ONAP Architecture diagram with the Modeling highlighted, per New Project Proposals (bottom of the page).
    2. We should create a Committer list of resources and a Contributor list of resources.
    3. The Primary Contact and Project Lead should be identified.
    4. The Lifecycle State should be Incubation
    5. There is currently one Repo name identified; should we consider multiple Repos, e.g., Information Models (with dimensions of Resource, Service, Workflow, etc.), Tooling, Processes/Guidelines, Data Models, Workflows?
    6. We might want to consider adding a "What Problem is this project trying to Solve" section.
    7. Regarding these statements:

    Code - parsers (unified API interfaces) and tools

    What does this mean? Wouldn't the individual ONAP projects be responsible for contributing their project-specific Information Models, Data Models, Process Workflows and API definitions per the Modeling framework, Modeling guidelines and Common Information Model?

  5. One review comment on the modeling project proposal description. There is a published document from ETSI NFV that describes the touchpoints/relations between the NFV Information Model and information models from other organisations, such as ONF, 3GPP, and TMF. So I am proposing adding a new bullet point.


    • How does this align with external standards/specifications?
      • OASIS TOSCA NFV profile
      • ETSI NFV SOL spec
      • ETSI IFA024: Report on External Touchpoints related to NFV Information Model
  6. The scope of this project overlaps with all other ONAP projects, since individual projects, based on their functional requirements, shall define their own data models and information models. However, because the Modeling project intends to unify the data and information models across ONAP projects, harmonize them, and create a "Common Data Model" and a consistent Information Model, a handshake will be required with all projects. For that we need to devise a process (process document) capturing:

    • Acceptable formats for publishing data models by individual projects. These should include individual data items, relationships, cardinality, usage, description, reference to the ETSI NFV standard if any, etc.
    • How the Modeling team will collect, comprehend, collaborate on, and consolidate the models received from individual projects and propose an updated data model back to them, with justification for each change. While making changes, the Modeling team will ensure that the proposed model does not break any intended function of a project.
    • The format for the details of public interfaces/APIs exposed towards each ONAP module.
    • The format for the details of expected/required interfaces/APIs from each ONAP module.

    The proposed process document shall help in:

    • Providing clarity and visibility to individual project teams on modeling items.
    • Faster collaboration to arrive at a common unified model.
    • Creation of detailed workflows.
    • Integration of all ONAP components.
  7. NOTE: I am NOT trying to say this project won't work. I AM trying to say I have timeline concerns.

    "Time plan:

    Info Model specification finalize at 1.5 month before functional freeze(for R1, 20th June, 2017)
    Data Model specification finalize at 1 month before functional freeze (for R1, 3rd, July, 2017)"

    I'm hoping that the above are typos, or that they mean yes, here are the **requirements** to be finalized for our models. To me, "model specification" means that the technical specification for the model is done.

    I do not see any info model right now (please correct me if there is one), and do not think that it is possible to build one in < 3 weeks. (I've been building info and data models for well over 30 years).

    If there IS an info model, then I would like to see it. I am the Orchestration Area Director for the MEF and a TMF Distinguished Fellow. I would like to see what is being reused vs. modified vs. created, for these and other critical modeling sources (e.g., ONF, NFV).

    If there IS an info model, then I do not think that you could generate a generic set of reusable and extensible data models in 2 weeks. I am sure that one or more data models currently exist, but they likely exist as bottom-up implementations that may or may not be related to each other. Specifically, I doubt that two different data models were developed from a common information model, because if they were, this is a huge selling point and should be able to be seen somewhere!

    If I am correct, then you would have to "backwards-engineer" the data models to come up with an info model, and that is exactly the wrong way to build an info model! For example, suppose I had an RDBMS, and implemented an object using tables related to each other by primary and foreign keys. This would prompt me to use FKs/PKs as object identifiers. Now suppose I want a directory data model. It doesn't know what a PK or FK is. So I now have to translate relational calculus concepts to constructs appropriate for the new data model. Furthermore, I likely have to rework my queries, since LDAP (or X.500) is optimized for text, while SQL is much more expressive and powerful.

    This becomes increasingly difficult from a bottom-up approach because each data model relies on making decisions for a particular platform, language, and protocol; info models should be independent of all of these.
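The relational-vs-directory argument above can be made concrete with a small editorial sketch (all table, attribute, and DN names are invented for illustration): the same object rendered under two data models ends up with incompatible identity mechanisms, which is why reverse-engineering an info model from either one bakes in the wrong assumptions.

```python
# Illustration only: one object, two data-model renderings with
# incompatible identity schemes (PK/FK vs. distinguished names).
customer = {"id": 42, "name": "acme", "parent_id": 7}


def to_sql_row(obj):
    # Relational data model: identity and hierarchy via primary/foreign keys.
    return ("INSERT INTO customer (pk, name, parent_fk) "
            f"VALUES ({obj['id']}, '{obj['name']}', {obj['parent_id']});")


def to_ldif(obj):
    # Directory data model: no PK/FK concept; the hierarchy is encoded
    # directly in the distinguished name.
    return (f"dn: cn={obj['name']},ou=customers,dc=example,dc=org\n"
            "objectClass: organizationalRole\n"
            f"cn: {obj['name']}")
```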

    1. Hi John Strassner,

      Great feedback. I've asked the same question about the existence of Information models (both back in Open-O and in ONAP), but haven't received any direct feedback or found any myself. See my earlier comment above. I'm not clear on whether the community at large understands what an Information Model typically contains. In my opinion, ONAP should also be developing all the Use Case Sequence Diagrams in an ONAP Common Information Model. Would be great to see both the static and behavioral diagrams captured in a CIM.

      1. Hi Brian Hedstrom,

        thank you. For me, model-driven engineering requires a set of artifacts that include structure and behavior diagrams (to use UML 2.x lingo). I've never been wild about UML Use Case diagrams - they don't tell me anything! I much prefer use case templates (I've developed some based on Alistair Cockburn's seminal work that we use in the MEF, and would be happy to share). However, class, component, state, sequence, and activity diagrams are very important.

        <rant>

        Not sure where to put these thoughts. These are observations based on what many SDOs are doing.

        I would also like to see methods put on class diagrams. Otherwise, you can't link high-level API calls to the set of class methods that should be called to implement the APIs. I hope ONAP does better.

        Finally, I hope that ONAP builds real UML models. Not "information elements", but real taxonomies that obey classification principles (e.g., Single Responsibility, Liskov Substitution, and other important design paradigms). And please, no mixing of classes, metadata describing classes, object instances, and metadata describing object instances. This ruins reusability, and makes the generated code very brittle.

        </rant>


          1. Brian Hedstrom, this is the use case template I built for the MEF. It is perhaps too detailed, but it is a modification of what I have used in real production scenarios. It is based on Cockburn's seminal work.

  8. Looking at the MEF wiki, there seem to be some good pages that explain the terms Information Model and Data Model:

    https://wiki.mef.net/display/CESG/MEF+Common+Information+Model

    https://wiki.mef.net/display/CESG/Data+Model

    Does this team agree that these pages are relevant for how ONAP understands the terms?


    1. Hi Carsten,
      I'm the Orchestration Area Director for the MEF.

      The first URL is talking about a MEF project. This URL needs revision, as it does not make clear what the term is, and what the differences are between the MEF Core Model (MCM) and the MEF Common Information Model. Sorry about that, I'll get this fixed.

      In the meantime, here are the working definitions of the terms "information model" and "data model" that are being used in the current MEF modeling documents; they are identical to documents being worked on in several IETF WGs as well as ETSI ISG ENI.


         An information model is a representation of concepts of interest
         to an environment in a form that is independent of data repository,
         data definition language, query language, implementation language,
         and protocol.


         A data model is a representation of concepts of interest to an
         environment in a form that is dependent on data repository, data
         definition language, query language, implementation language, and/or
         protocol (typically, but not necessarily, all five).


      HTH, and kind regards,
      John
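As an editorial illustration of the two definitions above (the `Port` class and its fields are invented, not taken from any ONAP model): a single protocol-neutral concept plays the information-model role, while each concrete encoding of it is a data model.

```python
# One information-model concept, two data-model encodings (illustrative).
from dataclasses import dataclass
import json
import xml.etree.ElementTree as ET


@dataclass
class Port:  # information-model level: no repository or protocol implied
    name: str
    admin_up: bool


def to_json(port: Port) -> str:  # data model 1: JSON encoding
    return json.dumps({"name": port.name, "admin-up": port.admin_up})


def to_xml(port: Port) -> str:   # data model 2: XML encoding
    el = ET.Element("port")
    ET.SubElement(el, "name").text = port.name
    ET.SubElement(el, "admin-up").text = "true" if port.admin_up else "false"
    return ET.tostring(el, encoding="unicode")
```

Both encodings can be regenerated from the one neutral definition, which is the data-coherency benefit discussed later in this thread.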

      1. I was not going into the details about MEF, but more the high level.

        1. Information modeling using UML.
        2. Data models: more protocol/implementation specific.

        Is this group creating an Information Model for ONAP? And is UML being used?

        Looking at the deliverables, it seems there is one EPIC (MODELING-1), which covers TOSCA and YANG validators at the current time. But there is no deliverable for a model defining ONAP.

        1. I assumed we would start with UML, and further assumed that we would build an information model first. This enables deriving multiple data models for different purposes while maintaining data coherency.

          However, on reading the description, I agree with you; it is unclear at best. One doesn't need a UML model to build a parser. Perhaps one of the leads can answer:

          1. Is this project using UML at all?
          2. If so, are we building an information model, or just data models?
  9. Where can I find the ZOOM recordings of the Modeling meetings?