Human-Centered Explainable AI: Towards a Reflective Sociotechnical Approach

Social Sustainability, Explainable AI, Rationale generation, User perception, Interpretability, Artificial intelligence, Machine learning, Critical technical practice, Sociotechnical, Human-centered computing

Authors: Upol Ehsan, Mark O. Riedl

Year: 2020

Published in: International Conference on Human-Computer Interaction.

Read me: Preprint. DOI: https://doi.org/10.1007/978-3-030-60117-1_33. Website. 🎥Video. 👩‍💻Replication package.

Abstract: Explanations—a form of post-hoc interpretability—play an instrumental role in making systems accessible as AI continues to proliferate complex and sensitive sociotechnical systems. In this paper, we introduce Human-centered Explainable AI (HCXAI) as an approach that puts the human at the center of technology design. It develops a holistic understanding of “who” the human is by considering the interplay of values, interpersonal dynamics, and the socially situated nature of AI systems. In particular, we advocate for a reflective sociotechnical approach. We illustrate HCXAI through a case study of an explanation system for non-technical end-users that shows how technical advancements and the understanding of human factors co-evolve. Building on the case study, we lay out open research questions pertaining to further refining our understanding of “who” the human is and extending beyond 1-to-1 human-computer interactions. Finally, we propose that a reflective HCXAI paradigm—mediated through the perspective of Critical Technical Practice and supplemented with strategies from HCI, such as value-sensitive design and participatory design—not only helps us understand our intellectual blind spots, but it can also open up new design and research spaces.

Bibtex (copy):
@InProceedings{10.1007/978-3-030-60117-1_33,
author="Ehsan, Upol
and Riedl, Mark O.",
editor="Stephanidis, Constantine
and Kurosu, Masaaki
and Degen, Helmut
and Reinerman-Jones, Lauren",
title="Human-Centered Explainable AI: Towards a Reflective Sociotechnical Approach",
booktitle="HCI International 2020 - Late Breaking Papers: Multimodality and Intelligence",
year="2020",
publisher="Springer International Publishing",
address="Cham",
pages="449--466",
abstract="Explanations---a form of post-hoc interpretability---play an instrumental role in making systems accessible as AI continues to proliferate complex and sensitive sociotechnical systems. In this paper, we introduce Human-centered Explainable AI (HCXAI) as an approach that puts the human at the center of technology design. It develops a holistic understanding of ``who'' the human is by considering the interplay of values, interpersonal dynamics, and the socially situated nature of AI systems. In particular, we advocate for a reflective sociotechnical approach. We illustrate HCXAI through a case study of an explanation system for non-technical end-users that shows how technical advancements and the understanding of human factors co-evolve. Building on the case study, we lay out open research questions pertaining to further refining our understanding of ``who'' the human is and extending beyond 1-to-1 human-computer interactions. Finally, we propose that a reflective HCXAI paradigm---mediated through the perspective of Critical Technical Practice and supplemented with strategies from HCI, such as value-sensitive design and participatory design---not only helps us understand our intellectual blind spots, but it can also open up new design and research spaces.",
isbn="978-3-030-60117-1"}

Annotation

By Valentijn van de Beek, Merlijn Mac Gillavry, Joey de Water, Leon de Klerk. 🪧Slides.

This paper proposes a perspective on explainable AI (XAI) that puts the human, rather than the computer, at the centre. Decisions are explained such that users can understand the process behind them. Central to this approach are the differences between people of varying backgrounds, who may have differing needs, levels of understanding, or biases.

XAI provides the reasoning behind the decisions an AI system makes. Most research into XAI has focused on interpretability: how well a human can reason about a model’s behaviour from its inputs and outputs. Although often forgotten, the human perspective is crucial to XAI systems: it governs the how, what, and why of data collection. A human-centered approach allows for reflection on implicit values, finding epistemological blind spots, and making them actionable.

The case study is based on rationale generation, i.e. the process of producing a natural-language rationale for agent behaviour as if a human had performed the behaviour and verbalised their inner monologue. A deep neural network is trained on human explanations to explain the decisions of an AI agent playing Frogger. Frogger is an objective-based game in which a player-controlled frog avoids traffic and crosses a river to reach the top of the screen.
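To make the setup concrete, the sketch below shows one way such a rationale generator could be wired up: an encoder-decoder network that maps a game state and chosen action to a natural-language rationale, trained on human think-aloud explanations. All names, dimensions, and the framework choice are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a rationale-generation model, assuming an
# encoder-decoder over (game state, action) -> rationale text.
# Names and dimensions are illustrative, not the paper's code.
import torch
import torch.nn as nn

class RationaleGenerator(nn.Module):
    def __init__(self, state_dim, n_actions, vocab_size, hidden=256):
        super().__init__()
        # Encode the observed game state and the chosen action jointly.
        self.state_enc = nn.Linear(state_dim, hidden)
        self.action_emb = nn.Embedding(n_actions, hidden)
        # Decode a natural-language rationale token by token.
        self.word_emb = nn.Embedding(vocab_size, hidden)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, state, action, rationale_tokens):
        # Context vector summarising "what the agent saw and did".
        ctx = torch.tanh(self.state_enc(state) + self.action_emb(action))
        # Teacher-forced decoding of the human-written rationale.
        emb = self.word_emb(rationale_tokens)          # (batch, seq, hidden)
        hidden0 = ctx.unsqueeze(0)                     # (1, batch, hidden)
        dec_out, _ = self.decoder(emb, hidden0)
        return self.out(dec_out)                       # logits over vocabulary

# Training pairs would come from humans playing Frogger while thinking aloud:
# (state, action) -> verbalised rationale, optimised with cross-entropy loss.
model = RationaleGenerator(state_dim=64, n_actions=5, vocab_size=2000)
loss_fn = nn.CrossEntropyLoss()
```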

The case study has two phases: a proof of feasibility of generating rationales, and a technological evolution that makes the XAI system more human-centered. The first phase finds that the neural network produces accurate rationales that humans find satisfactory. The second phase finds an alignment between the intended and perceived differences in the features of the rationales, and that users prefer detailed rationales when building a mental model of agent behaviour. Finally, the case study shows how technology development and the understanding of human factors co-evolve.

This yields two interesting research directions: perception differences and social signals. The former concerns how confidence and understandability differ with user background. The latter concerns the social context in which an XAI system may find itself, for example in collaborative settings.

These questions require a new perspective on XAI that incorporates all parties into system design, grounded in Critical Technical Practice (CTP). Core to CTP is identifying the dominant metaphors and assumptions of a field, finding the marginalised ones, bringing them to the forefront, and developing new technology and practices from them. In XAI the dominant narrative is that interpretability and explainability are model-centred problems; CTP invites us to ask whether the human or the computer should be central to what counts as an interpretation. Benefits of CTP include the exploration of new ideas and the empowerment of users.

One example is the assumption that humans should find an explanation satisfying; the authors argue that in some problem domains (e.g. fake news detection), provoking scepticism and critical reflection would yield better results and make users more sensitive to the limitations of AI.

Two HCI strategies that supplement CTP are participatory design and value-sensitive design. The former challenges the power dynamics between users and designers; the latter explores the values, tensions, and political realities embedded in the system. Future work requires understanding and cooperating with communities, and with researchers who have knowledge of both domains.

– 📖 –