Publication:
Explaining by Conversing: The Argument for Conversational XAI Systems

Date

2021-06-23

The Harvard community has made this article openly available.

Citation

Marrakchi, Wassim. 2021. Explaining by Conversing: The Argument for Conversational XAI Systems. Bachelor's thesis, Harvard College.

Abstract

The interest in chatbots and conversational agents is as old as artificial intelligence (AI) itself. Recently, multiple members of the HCI community, including Weld and Bansal (2018), have suggested that conversational explanation systems are the best path forward for explainable human-agent interaction. This recommendation is often presented without its supporting arguments, so this thesis sets out to articulate the case for conversational explainable AI (XAI) systems. First, we survey the research on the need for explanations from AI systems and on models' ability to provide them. Second, we identify a set of obstacles to interpreting and making meaning of these explanations, and we explain these obstacles by drawing on results from several studies in human-computer interaction, machine learning, cognitive science, and education theory. Finally, we take these obstacles into account to argue for conversational explanation systems and propose a Wizard-of-Oz (WoZ) experiment to test some of our hypotheses.

Keywords

artificial intelligence, conversational, explainability, explainable artificial intelligence, interpretability, XAI, Computer science

Terms of Use

This article is made available under the terms and conditions applicable to Other Posted Material (LAA), as set forth in the Terms of Service.
