Modeling Approach Comparison Criteria as Modified at the MODELS 2011 CMA Workshop

Geri Georg (1), Gunter Mussbacher (2), Betty H.C. Cheng (3), Ana Moreira (4), Robert France (1)

(1) Computer Science Department, Colorado State University, Fort Collins, Colorado, USA
(2) Dept. of Systems and Computer Engineering, Carleton University, Ottawa, Canada
(3) Dept. of Computer Science and Engineering, Michigan State University, East Lansing, Michigan, USA
(4) Universidade Nova de Lisboa, Lisbon, Portugal

December 2011

The criteria described in this document represent a work in progress, begun at the Bellairs AOM workshop (http://www.cs.mcgill.ca/~joerg/SEL/AOM_Bellairs_2011.html) in April 2011 and continued at the CMA workshop (http://cserg0.site.uottawa.ca/cma2011/) in October 2011. The descriptions in this document refine the concepts originally discussed there, in order to provide a basis for understanding, analyzing, and comparing various modeling approaches.

The document is organized into a section for modeling approach dimensions and a section for key concepts. Key concepts related to a modeling technique (note that we use the terms approach and technique interchangeably) are those that are targeted for optimization or improvement by the technique. They thus become the criteria by which a given technique can be evaluated, or criteria that should be considered when creating a new technique, and their description/definition is critical to a common understanding among modelers. Please note that these concepts and their use as criteria can be applied either to a technique or to the model(s) that result from using a technique. The focus of this document, however, is the assessment of techniques, not their resulting models.

Key concepts of a modeling approach can be described along several different dimensions. Thus, some or all of the key concepts may be applicable, and in fact have different interpretations, depending on the dimension along which they are applied. While we recognize this fact, we present the dimensions and key concepts in a more orthogonal view. The description of a key concept or a dimension contains detailed definitions that either were discussed in the previous workshops or were synthesized by the authors to reflect more general ideas that have not yet been widely discussed and refined. One or more questions about the key concept or dimension are posed at the end of each discussion. Our goal is to continue to refine this document in order to address all concepts with a similar level of detail.

1. Modeling Approach Dimensions

1.1 Phase/activity

Modeling approaches may be most applicable during certain phases of software development, or for specific activities of software development. They may also be useful during multiple phases or for multiple activities. For example, the early requirements phase focuses on stakeholder goals and the description and assessment of alternatives. During the architecture or high-level design phase, an architecture is created that describes the overall system structure as well as general rules and constraints imposed on that structure. Examples of architectures are 3-tier, event-based, distributed client-server, distributed peer-to-peer, etc. Activities such as analysis, validation, verification, and evolution tend to crosscut development phases, although their purpose and targets may differ from one phase to another.
For example, model checking can be used as a high-level design analysis technique, whereas performance analysis may be targeted towards low-level design or even implementation phases. Validation ensures that the system is what the stakeholder wants, while verification ensures that the system is built as specified.

Questions:
A. During which software development phase is the modeling technique applicable?
   - Early requirements
   - Late requirements
   - Architecture/High-level design
   - Low-level design
   - Implementation
   - Integration
   - Deployment
   - Other:
B. During which activity is the modeling technique applicable?
   - Validation – describe what is validated:
   - Verification – describe what is verified:
   - Evolution – describe what is evolved:
   - Analysis – describe what is analyzed:
   - Other:

1.2 Notation/language

A model must be defined in a well-defined language. In some cases, models may be expressed in a single language, but there are certainly situations that can benefit from the use of languages that are specific to the model viewpoint (e.g., using a language such as POLICY [ref] to define security policies). A language may have formal semantics, i.e., the language is based upon a domain that is well understood and that allows proofs to be performed (e.g., math or Petri nets), or it is possible to map the language to math, Petri nets, etc. A language with rigorous semantics is potentially based on a metamodel and can be machine analyzed. Finally, a language that is not formal is not machine analyzable.

Quality attributes are non-functional requirements that are used as criteria to judge the quality of a system, rather than its specific behavior. An example of a technique to specify and measure such requirements is performance specification using the UML MARTE profile, followed by performance analysis of the system design.

Questions:
A. Please characterize the language/notation:
   - Language/notation is standard.
   - Language/notation is built on standard(s) – list standard(s):
   - Language/notation is specific to application domain(s) – list domain(s):
     Which application domains should be avoided?
   - Language/notation is general purpose.
B. Is the language/notation:
   - Formal
   - Rigorous
   - Not formal
C. If the language/notation is formal or rigorous, it is assumed to be machine analyzable. What can be analyzed (e.g., which quality attributes)?

1.3 Units of encapsulation/types of concerns

A modeling approach encapsulates some concepts as units, in order to easily reason about them, manipulate them, analyze them, etc. Examples of units of encapsulation are activities, services, views, aspects, features, themes, communication, concerns, etc. For example, an aspect is a crosscutting unit of encapsulation. A feature is (usually) a functional unit of encapsulation, as in a feature model. A concern is a functional or non-functional unit of encapsulation. A theme [9] is an encapsulation of some functionality or crosscutting concern in a system and is considered more general than an aspect. Furthermore, the units may differ depending on the phase of software development. For example, and not limited to the ones mentioned, units for a requirements phase could be use cases, qualities, and stakeholders; units for design could be classes, aspects, data definitions, and states; and units for implementation could be classes or modules.
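To make the notion of a crosscutting unit of encapsulation more concrete, the following minimal sketch (in Python, with invented names such as log_calls and Account; it is not taken from any particular modeling approach, and a decorator is only a rough stand-in for an aspect mechanism) separates a crosscutting logging concern from a base functional unit:

```python
# Illustrative sketch only: a crosscutting "logging" concern captured as its
# own unit of encapsulation, instead of being scattered across base units.
# All names are hypothetical.
import functools

def log_calls(func):
    """Encapsulates the crosscutting logging concern in one place."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print(f"call: {func.__name__}{args[1:]}")
        return func(*args, **kwargs)
    return wrapper

class Account:
    """A base (functional) unit of encapsulation."""
    def __init__(self, balance=0):
        self.balance = balance

    @log_calls                      # composition point with the base unit
    def deposit(self, amount):
        self.balance += amount
        return self.balance

if __name__ == "__main__":
    print(Account().deposit(10))    # logs the call, then prints 10
```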
Questions:
A. What does the technique encapsulate as a unit?
B. If the units differ over development phases or development activities, please add these details:

1.4 Views and concrete syntax supported

A view is a projection of a model for a particular purpose. For many models, while both structural and behavioral specifications are part of the model, there can be separate structural and behavioral views of the model, and these perspectives may be described using different languages/notations (e.g., the structural perspective may be described using class diagrams, while the behavioral perspective may be described using state machines or activity diagrams). A modeling technique may also use specific views for qualities such as performance, reliability, and safety to support focused modeling and analysis of these qualities.

Questions:
A. What views are supported by the modeling technique?
   - Structural
   - Behavioral
   - Intentional
   - Quality-based – describe which qualities are supported:
   - Other:
B. What types of concrete syntax are used by the modeling technique?
   - Textual
   - Visual
   - Other:

2. Key Concepts

The key concepts listed below can be applied to different modeling approaches, but they take on a different interpretation depending on the context in which they are applied.

2.1 Paradigm

Modeling techniques are based on a paradigm, which is defined as the theories, rules, concepts, abstractions, and practices that constrain how activities and development artifacts are handled or represented. A paradigm imposes a decomposition criterion, which is then used to decompose a problem into smaller problems and to analyze it. Examples are the object-oriented, aspect-oriented, functional, and service-oriented paradigms. The choice of paradigm affects other concepts and dimensions, including tools. For example, in the object-oriented paradigm, modules are classes/abstract data types, while in the procedural paradigm modules are functional abstractions. Tools must be able to perform activities on these module elements.

Questions:
A. What is the name of the technique?
B. What are the paradigms supported by the technique?
   - Object-oriented
   - Aspect-oriented
   - Functional
   - Service-oriented
   - Component-based
   - Feature-oriented
   - Other:

2.2 Modularity

The Separation of Concerns principle states that a given problem involves different kinds of concerns, which should be identified and separated to cope with complexity and to achieve required engineering quality factors such as robustness, adaptability, maintainability, and reusability. The ideal decision regarding which concerns must be separated from each other requires finding the right compromise for satisfying multiple (quality) factors. A multidimensional decomposition, also called multi-dimensional separation of concerns [1], permits the clean separation of multiple, potentially overlapping and interacting concerns simultaneously, with support for on-demand re-modularization to encapsulate new concerns at any time. A modeling technique, however, may not support (or indeed need to support) the encapsulation of crosscutting concerns through advanced decomposition and composition mechanisms, but may rather be restricted to basic forms of composition such as aggregation.

Modularization [2] is decomposition into modules, and the decomposition criteria are imposed by the paradigm. A module is a software unit with well-defined interfaces which express the services that are provided by the module for use by other modules. A module promotes information hiding by separating its interfaces from its implementation.
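As a minimal illustration of these definitions (the Stack interface and its hidden implementation are invented for this sketch and do not come from the workshop discussions), a module can be expressed as a well-defined interface whose implementation is hidden from other modules:

```python
# Minimal sketch of a module with a well-defined interface (information
# hiding in the sense of Parnas [2]). Names are illustrative only.
from abc import ABC, abstractmethod

class Stack(ABC):
    """The module's interface: the services offered to other modules."""
    @abstractmethod
    def push(self, item): ...
    @abstractmethod
    def pop(self): ...

class _ListStack(Stack):
    """Hidden implementation; the leading underscore marks it as private
    to the module, so clients depend only on the Stack interface."""
    def __init__(self):
        self._items = []
    def push(self, item):
        self._items.append(item)
    def pop(self):
        return self._items.pop()

def make_stack() -> Stack:
    """Factory: the only way clients obtain an implementation."""
    return _ListStack()
```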
Some techniques allow modules to be composed with the help of a composition specification. A composition specification may be an intrinsic part of a module or may be encapsulated in its own unit. Hence, the modularity of the composition specification is also of interest, i.e., whether or not it is separated from the modules that are being composed.

Questions:
A. How does the language support modularity? (This question deals with how the modeling technique supports modularizing the units of encapsulation identified above.)
   - Special operators
   - First-class language elements
   - Domain-specific language elements
   - Service level agreements
   - Interface definitions
   - Other:
B. If the modeling technique allows composition of modules, can the composition specification itself be separated from the specification of the modules?
   - Yes
   - No
   - Other details:

2.3 Composability

Composition is the act of creating new software units by reusing existing ones (e.g., by putting together several units of encapsulation through manipulating their associated modules). Basic composition enables the structuring of modules through traditional means (e.g., aggregation). In addition to basic composition mechanisms, three special kinds of compositions are distinguished: composition of crosscutting concerns, probabilistic compositions, and fuzzy compositions [3]. Furthermore, implicit composition does not require the specification of a specific composition operator, e.g., a weaver makes assumptions about how composition is supposed to be realized. On the other hand, explicit composition requires a composition operator, e.g., add, replace, wrap, merge, etc.

If a language provides mechanisms for the modularization of crosscutting concerns, it must also provide operators for the composition of these concerns with base concerns. Aspect weaving and superimposition are two examples of such operators. The composition of crosscutting concerns can be syntax-based or semantics-based [4]. In syntax-based composition, the composition is based on syntactic references to base concerns. This may lead to the well-known fragile pointcut problem, where structural changes in the base concerns may invalidate the compositions. This problem is tackled in semantics-based composition by relying on the meaning of the models and the relationships to be captured by the composition, rather than on the structure of the base concerns or specific naming conventions. Semantics-based composition may be applied to the identification of locations where composition is supposed to occur (e.g., identification of semantically equivalent patterns) or to the composition itself (e.g., simplifying complex results by recognizing redundant model elements; an example is composing classes in an inheritance hierarchy that also have a direct relation – simple composition will result in multiple direct relations, all except the first of which are redundant). The rules that govern semantics-based composition cannot be inferred by a machine unless the formalization of a technique goes beyond its abstract grammar, e.g., if an execution semantics is formally described, then behavioral equivalence may be determined.
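For illustration only, the following sketch shows an explicit, syntax-based composition operator over toy models (dictionaries mapping class names to operation lists; the operator, models, and names are assumptions of this sketch, not part of any specific technique) and the fragile pointcut problem that purely name-based matching introduces:

```python
# Toy, explicit, syntax-based composition: an "add" operator that attaches
# aspect operations to base classes matched purely by name. All model
# structures and names are invented for illustration.

base = {"Customer": ["create", "delete"], "Order": ["place", "cancel"]}
aspect = {"Customer": ["audit"], "Order": ["audit"]}   # crosscutting concern

def add(base_model, aspect_model):
    """Explicit composition operator: extends matching base elements."""
    composed = {name: ops[:] for name, ops in base_model.items()}
    for name, ops in aspect_model.items():
        if name in composed:          # purely syntactic (name-based) match
            composed[name].extend(ops)
    return composed

print(add(base, aspect))
# Fragile pointcut problem: renaming "Customer" to "Client" in the base model
# silently breaks the match above; a semantics-based composition would instead
# rely on the meaning of the elements rather than their names.
```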
Probabilistic compositions are required if there are uncertainties in the specification of problems. Here, it may be preferable to define and optimize the solutions and the composition of solutions with respect to the probabilistic problem definitions [3].

Fuzzy compositions are required if a problem can be solved in a number of alternative ways and it is not possible (or desired) to commit to a single alternative. Here, each alternative may be assigned to a fuzzy set that expresses the degree of relevance of a solution. Special composition operators that can reason about fuzzy sets are required.

Techniques may also be classified by whether they support symmetric or asymmetric composition. A composition operator is typically applied to two input models to create another model. If both inputs are of the same "type", then symmetric composition is probably supported, and in this case commutativity holds, as the order of the inputs does not matter. If the input models are of different "types" (e.g., class and aspect), then asymmetric composition is probably supported by the technique, i.e., composition works if A is applied to B but does not work if B is applied to A. Regardless of whether basic or more advanced composition mechanisms are supported by a modeling approach, assessing the algebraic properties of composition operators, such as commutativity, associativity, and transitivity, makes it possible to predict what the result of a composition will be.

A language treats models uniformly if it facilitates decomposing a model into a set of uniform and cooperating concerns and provides composition operators that manipulate such concerns in a uniform manner. The uniform treatment of the concerns helps to fulfil the closure property in the models. If a composition operator applied to any member of a set produces a model that is also in the set, then the set of models is closed under that operation [3]. The closure property is a way to standardize the decomposition mechanism and composition operators that are provided by a language. In addition, it helps to increase the abstraction level of concerns by forming a hierarchy of concerns in which the concerns at the higher levels of the hierarchy abstract from their lower-level concerns.
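The algebraic and closure properties can be checked mechanically for simple operators. The following sketch (the model representation and the merge operator are illustrative assumptions, not taken from any particular technique) defines a symmetric merge of two toy models of the same "type" and checks commutativity, associativity, and closure:

```python
# Toy symmetric composition: both inputs are models of the same "type"
# (class name -> set of attributes). Representation and operator are
# illustrative assumptions only.

def merge(m1, m2):
    """Symmetric merge: union of classes, union of attributes per class."""
    return {name: m1.get(name, set()) | m2.get(name, set())
            for name in set(m1) | set(m2)}

a = {"Customer": {"name"}, "Order": {"id"}}
b = {"Customer": {"address"}, "Invoice": {"total"}}
c = {"Order": {"date"}}

# Commutativity and associativity: order and grouping of inputs do not matter.
assert merge(a, b) == merge(b, a)
assert merge(merge(a, b), c) == merge(a, merge(b, c))

# Closure: the result is again a model of the same kind, so it can be
# composed further with other models of that kind.
print(merge(merge(a, b), c))
```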
Questions:
A. If the technique does not use composition to create new units, how does it create new units from existing units?
B. What kinds of composition operators are supported by the technique?
   - Explicit
   - Implicit
   - Basic
   - Probabilistic
   - Fuzzy
   - Syntax-based
   - Semantics-based
   - None
   - Other:
C. If the technique uses composition, how are composition operators defined by the technique?
D. If the technique uses composition, is it:
   - Symmetric
   - Asymmetric
   - Other:
E. What are the composition operators of the technique?
F. What algebraic properties does each composition operator provide?
   - Commutativity – list each composition operator, if applicable:
   - Associativity – list each composition operator, if applicable:
   - Transitivity – list each composition operator, if applicable:
   - Other:
G. Which composition operators produce models that are closed under the operator?
H. How does the technique support incremental composition?
I. How does the technique measure the quality of a composition?
   - Resulting model modularity
   - Resulting model coupling/cohesion
   - Other:
J. How is a composed model presented to the modeler?
   - Using a new model
   - Using an annotated original model
   - Using a set of composition rules
   - Using a tool
   - Other:

2.4 Traceability

Traceability is (i) the ability to chronologically interrelate uniquely identifiable entities in a way that is verifiable, or (ii) the ability to verify the history, location, or application of an entity (e.g., a concern, a process, or an artifact such as a requirements model element or a module) by means of documented recorded identification [5].

Traceability can be vertical or horizontal. Using requirements as an example, vertical traceability is the ability to trace requirements back and forth through the various layers of development, e.g., through the associated life-cycle work products of architecture specifications, detailed designs, code, unit test plans, integration test plans, system test plans, etc. It is possible to generalize this definition of vertical traceability to refer to abstraction levels above the requirements (e.g., system engineering or business process engineering) and below the requirements (e.g., architecture) [6]. Again using requirements as an example, horizontal traceability refers to the ability to trace requirements back and forth to associated plans such as the project plan, quality assurance plan, configuration management plan, risk management plan, etc. This definition can also be generalized. Horizontal traceability may be required during any phase of software development, e.g., at the design level (tracing across different design documents) and at levels above requirements (system engineering or product family domain analysis).

From a composition point of view, traceability includes being able to identify elements in the composed model with their sources in the original models. It may be argued that traceability is the concern of tools; however, some entities that are, or could be, targeted for traceability may be explicit in the modeling approach.

Questions:
A. What types of traceability are supported? (For each type, describe the entities that are traced, either in the technique (i.e., as part of the process of applying the technique) or in models produced by the technique.)
   - Horizontal
   - Vertical
   - Other:
B. What capabilities are provided by such tracing?
   - Refinement identification
   - Composition source identification
   - Other:

2.5 Mapping from the previous and to the next stage of software development

Simply speaking, this is vertical mapping: how do we go from one development phase to another? However, this is not a traceability issue but rather a compatibility issue. It is about whether the concepts of one approach are compatible with another approach that precedes or follows it. It may be necessary to coerce the concepts of one approach into those of another, or to extend one approach to make it interface properly with another approach.

Questions:
A. How does the modeling technique address moving from the "previous" stage of software development, whatever that might be?
B. How does the modeling technique address moving to the "next" stage of software development, whatever that might be?

2.6 Trade-off analysis

Trade-off analysis is defined as "Determining the effect of decreasing one or more key factors and simultaneously increasing one or more other key factors in a decision, design, or project" [7]. Trade-off analysis offers the ability to choose between a set of alternatives, based on a set of potentially conflicting criteria.
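As one possible, deliberately simple illustration of choosing between alternatives under conflicting criteria, the following weighted-scoring sketch is provided; the criteria, weights, and alternatives are invented for the example, and actual techniques may rely on richer mechanisms (e.g., goal models):

```python
# Illustrative sketch of a simple trade-off analysis: alternatives are scored
# against potentially conflicting criteria with weights. Criteria, weights,
# and scores are invented for illustration only.

criteria_weights = {"performance": 0.5, "maintainability": 0.3, "cost": 0.2}

alternatives = {
    "3-tier":      {"performance": 6, "maintainability": 8, "cost": 7},
    "event-based": {"performance": 8, "maintainability": 6, "cost": 5},
}

def weighted_score(scores):
    """Combine the per-criterion scores using the criteria weights."""
    return sum(criteria_weights[c] * scores[c] for c in criteria_weights)

best = max(alternatives, key=lambda alt: weighted_score(alternatives[alt]))
for alt, scores in alternatives.items():
    print(alt, round(weighted_score(scores), 2))
print("preferred alternative:", best)
```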
Questions:
A. Does the modeling technique:
   - Integrate trade-off analysis as part of the technique and/or tools
   - Support trade-off analysis by providing easy access to relevant data
     If yes, describe the data:
   - Have no concept of trade-off analysis
   - Other:

2.7 Tool support

Tools automate formalized parts of one or more activities within and across software development phases (e.g., requirements, design, code, validation, deployment, runtime management, evolution).

Questions:
A. What activities of the approach are supported by tools?
   - Specification
   - Composition
   - Analysis
   - Validation
   - Verification
   - Model consistency
   - Other:
B. If a tool is used to visualize a composed model, how is it visualized?
C. How do tools help users understand the relationships of units of encapsulation? Which relationships are covered (e.g., unwanted interactions or conflicts, other areas where resolutions are required, or dependencies between units of encapsulation)?
   - Unwanted interactions or conflicts – describe how the tool helps:
   - Dependencies – describe how the tool helps:
   - Other:

2.8 Empirical studies/evaluation

Empirical studies are classified as controlled experiments, case studies, and surveys [8]. A case study is an in-depth observational study whose purpose is to examine projects or evaluate a system under study. Case studies are mostly used to demonstrate the application of an approach. A survey is an exploration of a body of knowledge based on existing literature. It is a descriptive research method and provides no control over the measurements. A survey considers the precedent work in the broader sense [8]. A controlled experiment [8] is performed in a controlled environment with the aim of manipulating one or more variables while controlling other variables at fixed values, in order to measure the effect of the change.

Questions:
A. Briefly describe empirical studies or evaluations that have been performed for the technique (provide references wherever possible).

3. Additional Key Concepts ("parking lot")

There are several additional concepts that the original workgroup at the Bellairs AOM workshop identified but was unable to discuss adequately at the workshop, and they are therefore not covered in the previous sections. Likewise, we were unable to discuss them at the CMA workshop, although we did reorder them based on the participants' perception of their importance.

3.1 Reusability

E.g., parameterization, libraries, templates…

3.2 Scalability

Scalability evaluation is the process of assessing how the cost-effectiveness (which needs to be defined) of an approach evolves as a function of the complexity of the case studies it is applied to. For instance, complexity can be measured in terms of the size of models, which, for example, can be measured in terms of the number of modeling elements.

3.3 Inter-module dependency and interaction

3.4 Abstraction

3.5 Usability

Readability, understandability.

3.6 Evolvability

Support for requirements changes.

3.7 Reduction of modeling effort

Modeling effort can be measured in different ways. One way is to conduct a controlled experiment that compares the modeling effort of applying an AO approach with that of an OO approach. An alternate, much less expensive way is to estimate modeling effort through a surrogate measure, such as the number of modeling elements required for the different approaches.

3.8 Completeness

3.9 Expressiveness

A modeling technique can be expressive enough to model domain-specific concepts, specific non-functional requirement concepts (e.g., resource allocation), more general concepts such as objects, etc.
It can also be expressive in modeling relations among these concepts and in its manipulations of the concepts/relations (for example, in expressing how they should be composed or transformed).

4. References

1. Harold Ossher and Peri Tarr: "Multi-Dimensional Separation of Concerns and the Hyperspace Approach". In Software Architectures and Component Technology, M. Aksit (Ed.), Kluwer Academic Publishers, pp. 293–323, 2002.
2. David Parnas: "On the Criteria To Be Used in Decomposing Systems into Modules". Communications of the ACM, vol. 15, pp. 1053–1058, 1972.
3. Mehmet Aksit: "The 7 C's for Creating Living Software: A Research Perspective for Quality-Oriented Software Engineering". Turkish Journal of Electrical Engineering and Computer Sciences, vol. 12, no. 2, pp. 61–95, 2004.
4. Ruzanna Chitchyan, Phil Greenwood, Americo Sampaio, Awais Rashid, Alessandro Garcia, and Lyrene Fernandes da Silva: "Semantic vs. syntactic compositions in aspect-oriented requirements engineering: an empirical study". In Proceedings of the 8th ACM International Conference on Aspect-Oriented Software Development, pp. 149–160, 2009.
5. ISO 9001:2008, Quality management systems – Requirements, http://www.iso.org/iso/catalogue_detail?csnumber=46486
6. Tim Kasse: "Practical Insight into CMMI", Artech House Publishers, ISBN 1580536255, pp. 153–154, 2008.
7. Business Dictionary, http://www.businessdictionary.com/definition/tradeoff-analysis.html, 2011.
8. Claes Wohlin, Per Runeson, and Martin Höst: "Experimentation in Software Engineering: An Introduction", Springer, 1999.
9. Siobhán Clarke and Elisa Baniassad: "Aspect-Oriented Analysis and Design: The Theme Approach", Addison-Wesley Professional, 2005.
10. A. Terry Bahill, Mack Alford, K. Bharathan, John R. Clymer, Douglas L. Dean, Jeremy Duke, Greg Hill, Edward V. LaBudde, Eric J. Taipale, and A. Wayne Wymore: "The Design-Methods Comparison Project". IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, vol. 28, no. 1, February 1998.