IxDA London – The Uncanny Valley & Subconscious Biases of Conversational UI

The theme of IxDA London’s June event was Algorithms, Machine Learning, AI and us designers – an evening of great discussions that prompted me to dig up reading material on the uncanny valley and subconscious biases. Both topics were strongly present, the former directly and the latter indirectly, in Ed and John’s presentation on designing for IBM Watson. They discussed an ‘Uncanny Valley of Emotion’ as a third curve alongside the ‘still’ and ‘moving’ curves of the traditional uncanny valley model. While I understand their intent in creating a third category – accounting for systems, agents, and interactions that are not visible or physically accessible – in retrospect I disagree with the characterization. Emotion, or the lack thereof, can be explicitly betrayed by movement. From my understanding, subtle asynchronous or unnatural movements that conflict with the emotional responses humans expect are a key ingredient of the uncanny valley. Therefore, I would rename the ‘emotion’ curve suggested by the Watson team to ‘implicit,’ thereby retaining emotion as a criterion for both explicit (still and moving) and implicit interactions.

Uncanny valley of emotion at IxDA London, photo by Karey Helms
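
To make the shape of the argument concrete, here is a minimal, hypothetical sketch of the model in Python. The curve function, its parameters, and the dashed ‘implicit’ line are illustrative assumptions of mine, not the Watson team’s actual curve – Mori’s original hypothesis only specifies that affinity dips sharply near (but short of) full human likeness, and that movement amplifies the effect.

```python
# Illustrative sketch of the uncanny valley curves discussed above.
# All curve shapes and parameters are hypothetical approximations,
# not measured data or the Watson team's model.

import numpy as np
import matplotlib.pyplot as plt

def affinity(likeness, gain=1.0, valley_depth=0.8, valley_center=0.8):
    """Toy affinity curve: a steady rise with a Gaussian dip (the 'valley').

    likeness      -- human likeness in [0, 1]
    gain          -- amplification factor (Mori: movement amplifies the curve)
    valley_depth  -- how deep the dip is
    valley_center -- where on the likeness axis the dip sits
    """
    rise = likeness
    dip = valley_depth * np.exp(-((likeness - valley_center) ** 2) / (2 * 0.06 ** 2))
    return gain * (rise - dip)

x = np.linspace(0, 1, 500)
plt.plot(x, affinity(x, gain=1.0), label="still")
plt.plot(x, affinity(x, gain=1.6), label="moving")
# Hypothetical third curve for implicit (invisible/agent) interactions,
# drawn with a deeper dip as a stand-in for the proposed 'emotion' curve.
plt.plot(x, affinity(x, gain=1.6, valley_depth=1.1), "--", label="implicit (proposed)")
plt.xlabel("human likeness")
plt.ylabel("affinity")
plt.axhline(0, color="grey", linewidth=0.5)
plt.legend()
plt.show()
```

Drawing the third curve on the same likeness–affinity axes, rather than as a separate ‘emotion’ dimension, reflects my point above: emotional mismatch shows up through the same explicit cues (stillness and movement), so the implicit case is a variant of the existing curves rather than a new criterion.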

The second subtopic, subconscious biases, greatly concerns me. A recent article in the New York Times – Artificial Intelligence’s White Guy Problem – sums it up perfectly. As designers, how do we build accountability for subconscious (and conscious) biases into our processes when working with algorithms, machine learning, and conversational interfaces? I don’t have an answer, but I would like to find one!

Relevant links and resources:
The Uncanny Valley
Uncanny valley: why we find human-like robots and dolls so creepy
Navigating a social world with robot partners: A quantitative cartography of the Uncanny Valley
The Uncanny Wall
Artificial Intelligence’s White Guy Problem