Stephen Pilli

Thesis: 
Nudging humans to have ethically aligned conversation using AI

Conversational AI (or chatbots) exists not only in obviously conversation-enabled devices such as Alexa, Google Assistant, or Siri, but also in smartphones, smartwatches, fitness trackers such as Fitbit, etc. Given that we spend almost all of our waking hours in close or constant contact with a smart device, it is trivial for the device to nudge our attention towards news, views, or decision options that it considers important. Nudges are behavioural interventions that arise primarily from human decision-making frailties (e.g., loss aversion, inertia, conformity) and opportunity seeking. However, not all humans are affected by the same biases: a nudge that works on one person may not work on another. This project investigates the possibility of a conversational agent nudging the human to use ethically grounded language in conversations and texts. The conversational agent will attempt to deliver nudges in an adaptive manner, with the objective of making the human more ethically aware.
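One way to picture the adaptive delivery of nudges is as a per-user bandit problem: since different people respond to different biases, the agent can learn which nudge style works for a given user from feedback. The sketch below is purely illustrative (the nudge styles, the epsilon-greedy strategy, and the `AdaptiveNudger` class are assumptions for this example, not part of the project's actual design):

```python
import random

class AdaptiveNudger:
    """Illustrative epsilon-greedy selector over nudge styles for one user."""

    def __init__(self, nudge_styles, epsilon=0.1, seed=42):
        self.styles = list(nudge_styles)
        self.epsilon = epsilon                       # exploration rate
        self.counts = {s: 0 for s in self.styles}    # times each style was tried
        self.successes = {s: 0 for s in self.styles} # times each style worked
        self.rng = random.Random(seed)

    def choose(self):
        # Occasionally explore a random style; otherwise exploit the
        # style with the best observed success rate for this user.
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.styles)
        return max(
            self.styles,
            key=lambda s: self.successes[s] / self.counts[s] if self.counts[s] else 0.0,
        )

    def record(self, style, worked):
        # Feedback signal: did the user's language become more
        # ethically grounded after this nudge?
        self.counts[style] += 1
        if worked:
            self.successes[style] += 1

# Hypothetical nudge styles, named after the biases they lean on.
nudger = AdaptiveNudger(["loss-aversion", "social-norm", "default-option"])
style = nudger.choose()
nudger.record(style, worked=True)
```

Over repeated conversations, a selector like this would converge on the nudge style a particular user actually responds to, which is the intuition behind "adaptive" delivery.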

Supervisor: 
Vivek Nallur
Email: 
stephen.pilli@ucdconnect.ie
Research Group: 
SFI D-REAL