1st Edition
Explainable Agency in Artificial Intelligence Research and Practice
This book focuses on a subtopic of explainable AI (XAI) called explainable agency (EA), which involves producing records of the decisions made during an agent's reasoning, summarizing its behavior in human-accessible terms, and answering questions about specific choices and the reasons for them. We distinguish explainable agency from interpretable machine learning (IML), another branch of XAI that focuses on providing insight (typically, for an ML expert) concerning a learned model and its decisions. In contrast, explainable agency typically involves a broader set of AI-enabled techniques, systems, and stakeholders (e.g., end users), and the explanations provided by EA agents are best evaluated in the context of human-subject studies.
The chapters of this book explore the concept of endowing intelligent agents with explainable agency, which is crucial if such agents are to be trusted by humans in high-stakes domains such as finance, self-driving vehicles, and military operations. This book presents the work of researchers from a variety of perspectives and describes challenges, recent research results, lessons learned from applications, and recommendations for future research directions in EA. Historical perspectives on explainable agency and the importance of interactivity in explainable systems are also discussed. Ultimately, this book aims to contribute to the successful partnership between humans and AI systems.
Features:
- Contributes to the topic of explainable artificial intelligence (XAI)
- Focuses on the XAI subtopic of explainable agency
- Includes an introductory chapter, a survey, and five other original contributions
Preface
Editor Biographies
Contributors
1. From Explainable to Justified Agency
PAT LANGLEY
2. A Survey of Global Explanations in Reinforcement Learning
YOTAM AMITAI AND OFRA AMIR
3. Integrated Knowledge-Based Reasoning and Data-Driven Learning for Explainable Agency in Robotics
MOHAN SRIDHARAN
4. Explanation as Question Answering Based on User Guides
ASHOK GOEL, VRINDA NANDAN, ERIC GREGORI, SUNGEUN AN, AND SPENCER RUGABER
5. Interpretable Multi-Agent Reinforcement Learning with Decision-Tree Policies
STEPHANIE MILANI, ZHICHENG ZHANG, NICHOLAY TOPIN, ZHEYUAN RYAN SHI, CHARLES KAMHOUA, EVANGELOS E. PAPALEXAKIS, AND FEI FANG
6. Towards the Automatic Synthesis of Interpretable Chess Tactics
ABHIJEET KRISHNAN AND CHRIS MARTENS
7. The Need for Empirical Evaluation of Explanation Quality
NICHOLAS HALLIWELL, FABIEN GANDON, FREDDY LECUE, AND SERENA VILLATA
Index
Biography
Dr. Silvia Tulli is an Assistant Professor at Sorbonne University. She received a Marie Curie ITN research fellowship and completed her Ph.D. at Instituto Superior Técnico. Her research interests lie at the intersection of explainable AI, interactive machine learning, and reinforcement learning.
Dr. David W. Aha (UC Irvine, 1990) serves as the Director of the AI Center at the Naval Research Laboratory in Washington, DC. His research interests include goal reasoning agents, deliberative autonomy, case-based reasoning, explainable AI, machine learning (ML), reproducible studies, and related topics.