“Adding Human-Like Eyes And Facial Expressions To Robots Conveys Emotion Where Viewers Do Not Expect Emotion”

From The Economist, an explanation of the uncanny valley effect, the theory that tries to account for human unease with artificial beings that seem real and unreal at once:

“ARTIFICIALLY created beings, whether they be drawn or sculpted, are warmly accepted by viewers when they are distinctively inhuman. As their appearances are made more real, however, acceptance turns to discomfort until the point where the similarity is almost perfect, when comfort returns. This effect, called ‘the uncanny valley’ because of the dip in acceptance between clearly inhuman and clearly human forms, is well known, particularly to animators, but why it happens is a mystery. Some suggest it is all about outward appearance, but a study just published in Cognition by Kurt Gray at the University of North Carolina and Daniel Wegner at Harvard argues that there can be something else involved as well: the apparent presence of a mind where it ought not to be.

According to some philosophers the mind is made up of two parts, agency (the capacity to plan and do things) and experience (the capacity to feel and sense things). Both set people apart from robots, but Dr Gray and Dr Wegner speculated that experience in particular was playing a crucial role in generating the uncanny-valley effect. They theorised that adding human-like eyes and facial expressions to robots conveys emotion where viewers do not expect emotion to be present. The resulting clash of expectations, they thought, might be where the unease was coming from.”

••••••••••

Facial motion test of AI baby:
