American researchers want to understand how humans decide to trust or distrust strangers, and whether those decisions are accurate. They plan to use robots to find out.
Northeastern University psychology professor David DeSteno is collaborating on the interdisciplinary research with Cynthia Breazeal, director of the MIT Media Lab's Personal Robots Group, and with Robert Frank, an economist, and David Pizarro, a psychologist, both from Cornell.
The team is examining whether nonverbal cues and gestures could affect our trustworthiness judgments.
This project tests their theories by having humans interact with the social robot, Nexi, in an attempt to judge her trustworthiness. Unbeknownst to participants, Nexi has been programmed to make gestures while speaking with selected participants, gestures that the team hypothesizes could determine whether or not she is deemed trustworthy.
DeSteno said: "Using a humanoid robot whose every expression and gesture we can control will allow us to better identify the exact cues and psychological processes that underlie humans' ability to accurately predict if a stranger is trustworthy."
During the first part of the experiment, Nexi makes small talk with her human counterpart for 10 minutes, asking and answering questions about topics such as traveling, where they are from, and what they like most about living in Boston.
DeSteno said: "The goal was to simulate a normal conversation with accompanying movements to see what the mind would intuitively glean about the trustworthiness of another,"
The participants then play an economic game called "Give Some," which asks them to determine how much money Nexi might give them at the expense of her individual profit. Simultaneously, they decide how much, if any, they'll give to Nexi. The rules of the game allow for two distinct outcomes: higher individual profit for one and loss for the other, or relatively smaller and equal profits for both partners.
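The incentive structure of the game can be sketched as a simple payoff function. The token counts and exchange rates below are illustrative assumptions, since the article does not state the actual stakes; the sketch only shows how the two outcomes described (one player profiting at the other's expense, or both earning smaller equal profits) can arise:

```python
def give_some_payoffs(give_a, give_b, tokens=4, keep_value=1, give_value=2):
    """Payoffs for one round of a hypothetical 'Give Some' game.

    Each player starts with `tokens` tokens. A token kept is worth
    `keep_value` to its owner; a token given away is worth `give_value`
    to the partner. All parameter values are illustrative, not the
    study's actual figures.
    """
    payoff_a = (tokens - give_a) * keep_value + give_b * give_value
    payoff_b = (tokens - give_b) * keep_value + give_a * give_value
    return payoff_a, payoff_b

# Mutual giving yields smaller but equal profits for both partners:
print(give_some_payoffs(4, 4))  # (8, 8)

# Keeping everything while the partner gives yields a higher
# individual profit for one and a loss for the other:
print(give_some_payoffs(0, 4))  # (12, 0)
```

With these assumed values, each player is always better off keeping tokens regardless of what the partner does, which is why judging the partner's trustworthiness matters before deciding how much to give.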
DeSteno said: "Trust might not be determined by one isolated gesture, but rather a 'dance' that happens between the strangers, which leads them to trust or not trust the other."
The team will continue testing their theories by seeing if Nexi can be taught to predict the trustworthiness of human partners.