Explaining by Conversing: The Argument for Conversational XAI Systems
Access Status: Full text of the requested work is not available in DASH at this time ("dark deposit").
Citation: Marrakchi, Wassim. 2021. Explaining by Conversing: The Argument for Conversational XAI Systems. Bachelor's thesis, Harvard College.
Abstract: The interest in chatbots and conversational agents is as old as artificial intelligence (AI) itself. Recently, multiple members of the HCI community, including Weld and Bansal (2018), have suggested that conversational explanation systems are the best path forward for explainable human-agent interaction. This recommendation is often presented without its supporting arguments, so we embarked on this thesis to shed light on the case for conversational explainable AI (XAI) systems. First, we survey the research on the need for explanations from AI systems and on models' ability to provide them. Second, we identify a set of obstacles to interpreting and making meaning of these explanations, and we account for these obstacles by drawing on results from human-computer interaction, machine learning, cognitive science, and education theory. Finally, we take these obstacles into account to argue for conversational explanation systems and propose a Wizard-of-Oz (WoZ) experiment to test some of our hypotheses.
Citable link: https://nrs.harvard.edu/URN-3:HUL.INSTREPOS:37368579
Collection: FAS Theses and Dissertations