Keynote: The (Un)Expected Impact of Tools in Software Evolution


Speaker: Gail C. Murphy, University of British Columbia
Abstract: A plethora of tools are used to help develop most software systems, including compilers, interpreters, test automation, configuration managers, issue trackers and more. Tools are also often the means proposed to solve software development and evolution problems investigated by software engineering researchers. Despite the importance of tools to software developers and software engineering researchers, all too often the impact of tools on the software produced and how the software evolves is not studied. This talk explores why it is time for software engineering researchers to move beyond the study of software artifacts to also consider the tools being used to produce those artifacts. The talk will raise such questions as whether and how the architecture of tools used might impact the architecture of the software built.

Bio: Gail C. Murphy is a Professor of Computer Science and Vice-President Research and Innovation at the University of British Columbia. She is a Fellow of the Royal Society of Canada and a Fellow of the Association for Computing Machinery (ACM), as well as co-founder of Tasktop Technologies Incorporated.
After completing her B.Sc. at the University of Alberta in 1987, she worked for five years as a software engineer in the Lower Mainland. She later pursued graduate studies in computer science at the University of Washington, earning first a M.Sc. (1994) and then a Ph.D. (1996) before joining UBC.
Dr. Murphy’s research focuses on improving the productivity of software developers and knowledge workers by providing the necessary tools to identify, manage and coordinate the information that matters most for their work. She also maintains an active research group with post-doctoral and graduate students.

Keynote: Software Architecture Challenges for Machine Learning Systems


Speaker: Grace Lewis, Carnegie Mellon University
Abstract: Developing software systems that contain machine learning (ML) components requires an end-to-end perspective that considers the unique life cycle of these components — from data acquisition, to model training, to model deployment and evolution. While ML components are, in the end, software components, some of their characteristics bring challenges to software architecture and design activities, such as data-dependent behavior, model drift over time, and the timely collection of labeled data to inform retraining. In this talk I will highlight some of these challenges, along with thoughts on practices and remaining gaps for successfully architecting ML-enabled systems.

Bio: Grace Lewis is a principal researcher and lead of the Tactical and AI-enabled Systems (TAS) initiative at the Software Engineering Institute at Carnegie Mellon University. Lewis is the principal investigator for the “Automating Mismatch Detection and Testing in ML Systems” and “Predicting Inference Degradation in Production ML Systems” projects. Her current areas of expertise and interest include software engineering for AI/ML systems, edge computing, software architecture (in particular the development of software architecture practices for systems that integrate emerging technologies), and software engineering in society. Lewis received a Ph.D. in Computer Science from Vrije Universiteit Amsterdam. She is also very active in the IEEE Computer Society, currently serving as VP for Technical and Conference Activities (T&C) and as a member of the Board of Governors.

Keynote: Software Architecture as Leverage in Large-Scale Systems


Speaker: Steven Jeromy Carriere, Datadog
Abstract: “Scale” is a complex notion that encompasses some easy-ish-to-measure factors such as the resource footprint or transaction rate of a system, but also substantially more subtle considerations such as service dependencies that influence the cost of making changes and team behaviors that affect how long it takes to resolve a production issue. This talk will offer some experiences – successes and failures – in applying the principles and practices of software architecture as leverage over a few challenges typical of large-scale systems. These challenges will include operability, observability, and product delivery velocity.

Bio: Jeromy is Senior Vice President, Engineering at Datadog, where his teams build all of Datadog’s observability products, including cloud monitoring, log management, network monitoring, real user monitoring, application performance management, and continuous profiling. Prior to Datadog, Jeromy was an engineering director at Facebook, leading a team building a service management platform for Facebook’s private cloud; before that, he was at Google, where his teams built the Google Stackdriver product suite for monitoring, logging, and tracing as part of the Google Cloud Platform. A common theme in these roles is the application of the principles and practice of software architecture to better inform system design and operational quality.
Jeromy previously held architecture and management roles at eBay, Yahoo!, Vistaprint, Fidelity Investments, Microsoft, AOL, and the Software Engineering Institute, and co-founded Kinitos and Quack.com.