TUT1 – How to Evaluate Software Architectures [Monday, full-day]
Organizers: Jens Knodel and Mathias Naab
Thorough and continuous architecting is the key to overall success in software engineering, and architecture evaluation is a crucial part of it. This tutorial presents a pragmatic architecture evaluation approach and insights gained from its application in more than 75 projects with industrial customers in the past decade. It presents context factors, empirical data, and example cases, as well as lessons learned on mitigating the risk of change through architecture evaluation.
By providing comprehensive answers to many typical questions and discussing lessons learned, the tutorial allows the audience not only to learn how to conduct architecture evaluations and interpret their results, but also to become aware of risks such as false conclusions, manipulated data, and unsound lines of argument.
The target audience includes both practitioners and researchers. It encourages practitioners to conduct architecture evaluations. At the same time, it offers researchers insights into industrial architecture evaluations, which can inspire future research directions.
TUT2 – ThingML: A Generative Approach to Engineer Heterogeneous and Distributed Systems [Monday morning]
Organizers: Franck Fleurey and Brice Morin
Cyber-Physical Systems (CPS) typically rely on a highly heterogeneous interconnection of platforms and devices offering a diversity of complementary capabilities: from cloud servers with their virtually unlimited resources to tiny microcontrollers connecting to the physical world. This tutorial presents ThingML, a tool-supported Model-Driven Software Engineering (MDSE) approach targeting the heterogeneity and distribution challenges associated with the development of CPS. ThingML is based on a domain-specific modelling language integrating state-of-the-art concepts for modelling distributed systems, and comes with a set of compilers targeting a wide range of platforms and communication protocols. ThingML has been iteratively elaborated over the past years through experiences and projects applying the state of the art in MDSE in practical contexts with different industry partners.
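As a rough illustration of the modelling style the abstract describes, the sketch below hand-codes in Python a tiny "thing" with statechart-like behaviour and asynchronous message passing. It is a hypothetical example, not ThingML syntax or ThingML-generated code; ThingML's compilers produce platform-specific equivalents of this pattern for each target.

```python
from collections import deque

class Blinker:
    """A hypothetical 'thing': two states, reacting to a 'toggle' message."""

    def __init__(self):
        self.state = "OFF"
        self.inbox = deque()  # asynchronous message queue

    def send(self, message):
        self.inbox.append(message)  # messages are queued, not handled inline

    def step(self):
        # Process one message, firing a statechart-style transition.
        if not self.inbox:
            return
        if self.inbox.popleft() == "toggle":
            self.state = "ON" if self.state == "OFF" else "OFF"

b = Blinker()
b.send("toggle")
b.step()
print(b.state)  # ON
```

The point of a generative approach is that the model above would be written once and compiled to C for a microcontroller or to Java/JavaScript for a server, rather than re-implemented per platform by hand.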
TUT3 – Language Engineering with The GEMOC Studio [Monday afternoon]
Organizers: Olivier Barais, Benoît Combemale and Andreas Wortmann
This tutorial provides a practical approach for developing and integrating the various Domain-Specific (modeling) Languages (DSLs) used in the development of modern complex software-intensive systems, with the main objective of supporting abstraction and separation of concerns. The tutorial leverages the tooling provided by the GEMOC Studio to present the various facilities offered by the Eclipse platform (including EMF/Ecore, Xtext, and Sirius). It then introduces the advanced features of the GEMOC Studio for extending a DSL with a well-defined execution semantics, possibly including formal concurrency constraints and coordination patterns. From such a specification, we demonstrate the studio's ability to automatically support model execution, graphical animation, omniscient debugging, concurrency analysis, and concurrent execution of heterogeneous models. The tutorial combines lectures and hands-on sessions; the hands-on sessions allow participants to experiment on a concrete use case of an architecture description language used to coordinate heterogeneous behavioral and structural components.
TUT4 – Software Quality Analysis with Observation-Enhanced Quantitative Verification [Monday afternoon]
Organizer: Radu Calinescu
Quantitative verification (QV) is a powerful tool for the analysis of performance, dependability and other quality properties of software systems. Supported by today's fast probabilistic model checkers, QV can analyse these properties for alternative designs and existing software, in domains ranging from service-based systems and cloud computing to embedded systems. Recent advances have dramatically improved the usefulness and accuracy of this analysis by exploiting observations of the software or its components, available for instance from logs, unit testing or monitoring. This tutorial will provide an introduction to the quantitative verification of software quality with the probabilistic model checker PRISM, followed by a presentation of two advanced techniques for observation-enhanced quantitative verification. The first technique computes confidence intervals for the analysed quality properties using parametric Markov models of the software system. The second technique refines the Markov models used to assess quality properties of component-based software by exploiting observations of the execution times of its components. Both techniques can significantly reduce the risk of invalid software engineering decisions, and are fully supported by new QV tools. The tutorial will include short exercises and practical demonstrations of PRISM and of the new QV tools. Attending it will benefit researchers and practitioners from the area of software performance and dependability engineering, as well as those interested in formal approaches to the modelling, analysis and verification of quality aspects of software.
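To give a flavour of the computation a probabilistic model checker such as PRISM automates, the sketch below solves a reachability query on a tiny discrete-time Markov chain by fixed-point iteration. The service model and its transition probabilities are invented for illustration, and this is plain Python, not the PRISM modelling language; PRISM answers such queries (e.g. "probability of eventually reaching success") on far larger models with many additional property types.

```python
# Hypothetical DTMC of a service: 'success' and 'fail' are absorbing states.
TRANSITIONS = {
    "init":       [("processing", 0.9), ("fail", 0.1)],
    "processing": [("success", 0.95), ("init", 0.05)],
    "success":    [("success", 1.0)],
    "fail":       [("fail", 1.0)],
}

def reach_probability(transitions, target, absorbing, tol=1e-12):
    """Per-state probability of eventually reaching `target`.

    The probabilities satisfy a linear equation system; here it is
    solved by Gauss-Seidel-style fixed-point iteration.
    """
    x = {s: (1.0 if s == target else 0.0) for s in transitions}
    while True:
        delta = 0.0
        for state, succs in transitions.items():
            if state in absorbing:
                continue  # absorbing states keep their fixed value
            new = sum(p * x[nxt] for nxt, p in succs)
            delta = max(delta, abs(new - x[state]))
            x[state] = new
        if delta < tol:
            return x

probs = reach_probability(TRANSITIONS, "success", absorbing={"success", "fail"})
print(round(probs["init"], 6))  # 0.895288, i.e. 0.855 / 0.955 analytically
```

Observation-enhanced QV, as covered in the tutorial, would replace point estimates like 0.9 and 0.95 with values (or intervals) derived from logs, tests or monitoring data.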
TUT5 – Discover Quality Requirements with the Mini-QAW [Tuesday morning]
Organizer: Thijmen de Gooijer
Good quality requirements help you make the right architectural decisions, but collecting those requirements is not always easy. The Quality Attribute Workshop (QAW) helps teams gather requirements effectively, but can be costly and cumbersome to organize. The mini-QAW is a short workshop (a few hours to a full day) designed for inexperienced facilitators and a great fit for teams practicing Agile methods. Variants of the mini-QAW exist for both face-to-face and remote collaboration. The method has been used successfully by several groups throughout the world and is finding its place as a standard tool among software architects. During this session we will walk participants through a mini-QAW simulation. By the end of the session, participants will have learned about and applied some of the core mini-QAW activities, including scenario brainstorming using a “system properties web”, creating stakeholder empathy maps, and visual voting. The mini-QAW combines these activities with a tuned agenda (compared to the traditional QAW) to create a fast, effective, and fun workshop that many teams can easily adopt and succeed with. Participants will gain first-hand experience facilitating and participating in the workshop, which they can then use with their teams back home.
TUT6 – Architectural Runtime Modeling and Visualization for Quality-Aware DevOps in Cloud Applications [Tuesday, full-day]
Organizers: Robert Heinrich, Christian Zirkelbach and Reiner Jung
Cloud-based software applications are designed to change during operations to provide constant quality of service, which increasingly blurs the boundary between development and operations. In this tutorial, we present approaches that bridge the gaps between architectural modeling in development and in operations, and thus allow for phase-spanning usage of architectural models. The foundation is maintaining the semantic relationships between monitoring outcomes and architectural models. We discuss the integration of development models, code generation, monitoring, runtime model updates, as well as adaptation candidate generation and execution. We describe the combination of descriptive and prescriptive architectural models to improve communication and collaboration between operators and developers. The consideration of static and dynamic content in architectural models supports operation-level analysis and adaptation. Furthermore, we present different visualizations of architectural runtime models, which allow detecting the above-mentioned gaps for development on the one hand and operations on the other.
TUT7 – Strategic Management of Technical Debt [Tuesday afternoon]
Organizer: Philippe Kruchten
The technical debt metaphor acknowledges that software development teams sometimes accept compromises in a system in one dimension (for example, modularity) to meet an urgent demand in some other dimension (for example, a deadline), and that such compromises incur a “debt”. If not properly managed, the interest on this debt may continue to accrue, severely hampering system stability and quality and impairing the team’s ability to deliver enhancements at a pace that satisfies business needs. Although unmanaged debt can have disastrous results, strategically managed debt can help businesses and organizations take advantage of time-sensitive opportunities, fulfill market needs, and acquire stakeholder feedback. Because architecture has such leverage within the overall development life cycle, strategic management of architectural debt is of primary importance. Some aspects of technical debt – but not all of it – affect product quality. This tutorial introduces the technical debt metaphor and techniques for measuring and communicating technical debt, integrating it fully with the software development lifecycle.
Proposal Submission: 12 February 2017
Proposal Notification: 13 February 2017
Possible cancellation notice: 3 March 2017
Camera-Ready Tutorial Summary: 10 March 2017
Tutorial handouts: 21 March 2017
Tutorial days: 3 and 4 April 2017 (Monday/Tuesday)
For more information please contact the Tutorials Chairs:
Remco de Boer – rdeboer(at)archixl.nl
Noël Plouzeau – noel.plouzeau(at)inria.fr