Critical Systems Development Using Modeling Languages – CSDUML 2006 Workshop Report

Geri Georg (1), Siv Hilde Houmb (2), Robert France (1), Steffen Zschaler (3), Dorina C. Petriu (4), and Jan Jürjens (5)

(1) Colorado State University, Computer Science Department, {georg, france}@cs.colostate.edu
(2) Norwegian University of Science and Technology, Computer Science Department, sivhoumb@idi.ntnu.no
(3) Technische Universität Dresden, Department of Computer Science, Steffen.Zschaler@tu-dresden.de
(4) Carleton University, Systems & Computer Eng. Dept., petriu@sce.carleton.ca
(5) The Open University, Computing Department, j.jurjens@open.ac.uk

Abstract. The CSDUML 2006 workshop continues the workshop series on the development of critical systems using modeling languages. This report summarizes the papers presented and the discussions held at the workshop.

1 Introduction

CSDUML 2006 was held in conjunction with the MoDELS 2006 conference in Genoa, Italy. The workshop took place on Sunday, October 1, 2006. Twenty-five people from both academia and industry attended, and six papers were presented during the day; however, the major part of the time was spent in discussion. The paper presentations were organized into four sessions, with discussion periods after the first, second, and fourth sessions: 1) specification and analysis, 2) verification, 3) automatic system generation, and 4) case studies. The case studies contained results from some, but not all, of the other categories. This report is structured according to the presentation sessions, summarizing the papers, discussion points, and outcomes.

Session 1: Specification and Analysis (session chair: Alexander Knapp)

The papers presented in this session covered modeling specification approaches and tool support for their analysis.
Two papers were presented in this session:

1) Quality-of-Service Modeling and Analysis of Dependable Application Models, by András Balogh and András Pataricza, presented by András Balogh. The paper presents modeling and analysis of non-functional component and system properties to improve software quality and allow early recognition of potential problems.

2) Modeling an Electronic Throttle Controller using the Timed Abstract State Machine Language and Toolset, by Martin Ouimet, Guillaume Berteau, and Kristina Lundqvist, presented by Martin Ouimet. The paper presents the Timed Abstract State Machine (TASM) language and toolset for specifying and analyzing reactive embedded real-time systems. Non-functional properties, including timing behavior and resource consumption, can be specified and their behaviors simulated for analysis purposes.

The session chair proposed a set of open research questions that remain in light of the research presented in these papers. These questions stimulated a general discussion of critical systems specification and analysis, which reached conclusions or raised further points in the following three areas.

Specification. Critical systems specifications are often cross-cutting, so aspect-oriented or other feature-oriented techniques may be applicable. These techniques must be able to specify the interference between QoS attributes, and must allow simple checks, refinement, and systematic overviews of particular properties. There is a difference between closed, predictable, embedded-style systems and open, unpredictable business-critical systems. Interference between critical properties can be defined away in many closed systems, which greatly simplifies both specification and analysis. Open systems, by contrast, are subject to unpredictable interactions among critical properties.

Analysis. Critical system property analysis varies between properties, so different tools are needed for each type of analysis.
Additionally, many tools require model transformations prior to use. It is not yet clear how analysis techniques will scale to large system specifications. Traditional analysis tools may not be useful under all conditions; for example, trade-off analysis and prioritization must be done in light of the business domain. A typical strategy is to minimize hardware cost, as determined by the total cost of ownership (via supplier maintenance contracts). Analysis techniques also still need to be integrated into newer specification techniques, such as aspect-specification techniques.

Information dissemination. The need for a common platform for disseminating and exchanging state-of-the-art research on languages and on specification and analysis tools became clear in the discussions. The discussion mentioned the ReMoDD project and websites providing information on UML CASE tools (http://www.jeckle.de/umltools.htm). As researchers, we should be using such media to enhance knowledge and discussion.

Session 2: Verification (session chair: Kevin Lano)

One paper was presented in this session, concerning consistency checking of UML-based behavioral models:

1) Model Checking of UML 2.0 Interactions, by Alexander Knapp and Jochen Wuttke. The paper describes a translation of UML 2.0 interactions into automata for model checking, to determine whether an interaction can be satisfied by a given set of message-exchanging UML state machines.

The second discussion highlighted four areas.

Consistency checking needs to occur across models and across critical system properties. In particular, different types of models, e.g. deployment diagrams, static structure models, and behavioral models, all need to be checked for consistency with respect to the critical system properties. Workflow support could be useful for such checks.
For example, when decisions are made regarding physical deployment, consistency checks need to occur to ensure that the desired behavior still exists and that critical properties are still present. The EU project MODELPLEX may have applicable work in this area, with examples and techniques for tracing model changes and verifying run-time models.

Techniques such as injecting faults into state machines to check interactions could be used to verify critical system properties. Correctness by construction may also be a viable option during model transformations.

An outstanding issue in verification is modularity. It is not clear whether UML model structure is sufficient, nor how to deal with hierarchical or incomplete models.

Domain-specific languages (DSLs) and profiles are two techniques leading to similar modeling results, and their use should be determined by domain experts. In general, the group considered profiles less work for the person creating them, but often more work for those trying to use them. This is particularly true as the number of profiles, which may interact, increases. An example is chip design, where 4-5 profiles are needed, and these may not be well aligned in terms of their use, interactions, and analysis tools. DSLs require more work to create, but may give developers a language they understand better, along with better-aligned methods, techniques, and tools. An issue in this space is again the lack of disseminated knowledge and experience across the community.

Session 3: System Generation (session chair: Robert France)

The paper in this session covered the use of model-driven engineering technology to generate software for critical systems:

1) Automated Synthesis of High-Integrity Systems using Model-Driven Development, by K. Lano and K. Androutsopoulos. The paper describes the application of MDD to two areas of high-integrity systems: reactive control systems and web applications.
Semantic consistency analysis using two methods is also described.

Session 4: Case Studies (session chair: Geri Georg)

The two papers in this session presented case studies in the critical-systems domain:

1) Experiences with Precise State Modeling in an Industrial Safety Critical System, by Nina Holt, Bente Anda, Knut Asskildt, Lionel C. Briand, Jan Endresen, and Sverre Frøystein. This paper reports on experiences using statechart-driven UML modeling in the development of a safety-critical system at ABB.

2) Specification of the Control Logic of an eVoting System in UML: the ProVotE experience, by Roberto Tiella, Adolfo Villafiorita, and Silvia Tomasi. This paper presents some of the issues and challenges faced during the development of an electronic voting machine, and describes how UML models were integrated into the development process. Some existing tools were extended to support formal verification of the UML specifications.

The third discussion covered the third and fourth session papers, as well as a general recap of topics discussed throughout the day. Conclusions and outstanding issues were drawn in five areas.

Verification. Most verification work seems to target state machines, using model checking. In part this is because verification of functional properties is more mature than verification of non-functional properties; it is harder to verify properties such as performance, timing, and security. There is some ongoing work using activity diagrams to perform quality-constraint consistency checking. It may also be possible to check semantic consistency by using OCL for verification and by reasoning over traces and pre- and post-conditions. Deployment diagrams should also be used, since the information they carry influences which properties are preserved in dynamic behavior models. Verification needs to be formal, yet there are things we want to describe for which no language, and hence no formalism, exists.
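The trace-and-contract style of checking mentioned above can be sketched very simply. The following is a hypothetical Python illustration (all names invented; this is not a tool presented at the workshop): each operation carries a pre-condition over the state before it runs and a post-condition over the before/after states, and a recorded execution trace is replayed against these contracts.

```python
# Hypothetical sketch: checking pre-/post-conditions over an execution trace.
# All names are illustrative; no workshop tool is being reproduced.

class Contract:
    def __init__(self, pre, post):
        self.pre = pre    # predicate over the state before the operation
        self.post = post  # predicate over (state_before, state_after)

def check_trace(contracts, trace, state):
    """Replay a trace of (operation, effect) pairs, recording violations."""
    violations = []
    for op, effect in trace:
        c = contracts[op]
        if not c.pre(state):
            violations.append((op, "precondition"))
        before = dict(state)
        effect(state)  # apply the operation's state change
        if not c.post(before, state):
            violations.append((op, "postcondition"))
    return violations

# Toy example: a throttle controller must be enabled before accepting a setpoint.
contracts = {
    "enable": Contract(pre=lambda s: not s["enabled"],
                       post=lambda b, s: s["enabled"]),
    "set":    Contract(pre=lambda s: s["enabled"],
                       post=lambda b, s: 0 <= s["setpoint"] <= 100),
}
trace = [
    ("set",    lambda s: s.update(setpoint=40)),   # violates "enable first"
    ("enable", lambda s: s.update(enabled=True)),
]
print(check_trace(contracts, trace, {"enabled": False, "setpoint": 0}))
# → [('set', 'precondition')]
```

A real tool would of course express the contracts in OCL against the UML model rather than as Python lambdas, but the replay-and-check structure is the same.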
Fault injection, based on statecharts and fault states, may be a usable technique. We need to note, however, that there is a difference between fault analysis and security analysis: faults can be simple, whereas attacks are usually quite complex, and any realistic problem has an infinite state space. Research should continue to explore combinations of model checkers and theorem provers to get the best of both worlds.

Accidental Complexity. Using UML for critical system development is very complex. We discussed whether this complexity is inherent in the nature of UML, or whether it is accidental, stemming from the way we use UML to develop these kinds of systems. If the complexity is introduced by our techniques and tools, it should be avoided whenever possible; complexity definitely hinders acceptance by developers. Inherent complexity can perhaps be addressed through the use of domain-specific languages; profiles seem to make the problem worse.

DSLs and representations. Domain experts have to restrict the use of UML notations and have to present the chosen subset in a way that is useful; the language must allow engineers to be more effective without fundamentally changing what they do. In some sense DSLs are like shortcuts; in fact it may be possible to derive DSLs by determining what shortcuts developers actually use. DSLs are necessary because, while UML models capture what we mean, the diagrams are hard to create and sometimes hard to understand, leading to (perhaps accidental) complexity. For example, in many cases text would be easier to create and understand than activity diagrams. An idea worth exploring is to generate graphical representations from text, recognizing that in some cases the graphical representation might not be more intuitive than the text, and discarding it in that case. Some tools can already synchronize text and graphics (e.g. SecureUML, which has an access-policy DSL and uses a graphical interface to generate this text).

Development Traceability.
There must be a link between requirements, verification, and the changes made to models based on analysis results; the issue is that the traces have to be well defined and well understood.

Case studies. The results of case studies are problematic: they often do not produce genuinely new insights, or lessons that can easily be applied in other situations. A good research topic would be to define how to perform a "perfect" case study: the planning, what you want to find out, how to find it out, common problems, and so on. The end result would be a template for running a case study that could be used both by people setting up case studies and by people evaluating papers written about case studies or evaluating their results.
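The traceability links called for above can be represented as simple records connecting a requirement to the model elements that realize it and to the verification evidence gathered for it. The following is a hypothetical Python sketch (all identifiers are invented for illustration; this is not a scheme proposed at the workshop):

```python
# Hypothetical sketch of development traceability records: each link ties a
# requirement to realizing model elements and to verification evidence.
from dataclasses import dataclass, field

@dataclass
class TraceLink:
    requirement: str        # e.g. "REQ-7: brake overrides throttle"
    model_elements: list    # names of model elements realizing the requirement
    verifications: list = field(default_factory=list)  # (check, passed) pairs

    def is_verified(self):
        # A requirement counts as verified only if at least one check was
        # recorded and every recorded check passed.
        return bool(self.verifications) and all(ok for _, ok in self.verifications)

link = TraceLink("REQ-7: brake overrides throttle",
                 ["ThrottleController.sm", "BrakeInterface"])
link.verifications.append(("model-check: interaction satisfied", True))
link.verifications.append(("fault-injection: stuck sensor", False))

# The failed fault-injection check flags REQ-7 as needing rework.
print(link.is_verified())  # → False
```

Even a record this simple makes the two discussion points concrete: the trace is well defined (requirement, elements, evidence), and a model change that invalidates evidence is immediately visible as an unverified requirement.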