Greta Warren
Artificial Intelligence (AI) systems designed to support decision making currently face two major problems: i) how best to explain their predictions or recommendations to users in a way that fosters trust in and acceptance of the system, and ii) how to promote optimal decision making through interaction with users at the interface. Users appear to engage naturally in contrastive or counterfactual thinking in scenarios involving a degree of uncertainty, e.g., “why was this prediction made rather than that one?”. This research aims to apply findings on counterfactual thinking from the psychological and cognitive science literature to the explainable artificial intelligence (XAI) problem, exploring different design options and conducting end-user studies to determine the most effective solutions for such interactions.