Study Suggests Robots Are More Persuasive When They Pretend To Be Human

Image for illustration purposes only. Source: Pablo Buffer

Summary: Advances in artificial intelligence have created bots and machines that can potentially pass as humans if they interact with people exclusively through a digital medium.

Original author and publication date: Daniel Nelson – November 18, 2019

Futurizonte Editor’s Note: If a robot acts like a human, we trust the robot. If a human acts like a robot (or even if a human acts like a human), we will probably decide not to trust them.

From the article:

Recently, a team of computer science researchers has studied how machines and humans interact when the humans believe that the machines are also human. As reported by ScienceDaily, the study found that people consider bots and chatbots more persuasive when they believe the bots are human.

Talal Rahwan, associate professor of computer science at NYU Abu Dhabi, recently led a study that examined how robots and humans interact with each other. The results of the experiment were published in Nature Machine Intelligence in a paper titled “Transparency-Efficiency Tradeoff in Human-Machine Cooperation.” During the study, test subjects were instructed to play a cooperative game with a partner, who could be either a human or a bot.

The game was a twist on the classic Prisoner’s Dilemma, in which participants must decide whether to cooperate with or betray the other player on each round. In a prisoner’s dilemma, one side may choose to defect, betraying its partner to gain a benefit at the other player’s cost; only by cooperating can both sides assure themselves of a gain.
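To make that incentive structure concrete, here is a minimal Python sketch of a standard Prisoner’s Dilemma payoff table. The specific point values are illustrative assumptions common in textbook treatments, not the payoffs used in the study itself.

```python
COOPERATE, DEFECT = "C", "D"

# payoffs[(my_move, partner_move)] -> (my_points, partner_points)
# Standard ordering: temptation > mutual reward > mutual punishment > sucker's payoff.
# These values are illustrative assumptions, not taken from the study.
PAYOFFS = {
    (COOPERATE, COOPERATE): (3, 3),  # mutual cooperation: both sides gain
    (COOPERATE, DEFECT):    (0, 5),  # I am betrayed: partner gains at my cost
    (DEFECT,    COOPERATE): (5, 0),  # I betray: I gain at my partner's cost
    (DEFECT,    DEFECT):    (1, 1),  # mutual defection: both sides do poorly
}

def play_round(my_move, partner_move):
    """Return (my_points, partner_points) for one round."""
    return PAYOFFS[(my_move, partner_move)]

if __name__ == "__main__":
    print(play_round(COOPERATE, DEFECT))      # (0, 5): defection exploits cooperation
    print(play_round(COOPERATE, COOPERATE))   # (3, 3): cooperation assures mutual gain
```

The table shows why the dilemma is a useful probe of trust: each player is individually tempted to defect, yet both end up better off if they cooperate.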

The researchers manipulated what the test subjects knew by giving them either correct or incorrect information about their partner’s identity. Some participants were told they were playing with a bot even though their partner was actually human; others were in the inverse situation. Over the course of the experiment, the research team was able to quantify whether people treated partners differently when told those partners were bots, tracking the degree of any prejudice against bots and how those attitudes affected interactions with bots that identified themselves.
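For clarity, the following hypothetical Python sketch enumerates the four conditions implied by that design: the partner’s actual identity crossed with the identity participants were told. The condition labels are assumptions for illustration, not the study’s own terminology.

```python
from itertools import product

# 2x2 design: actual partner identity x identity the participant is told.
# Labels are illustrative assumptions, not the study's own condition names.
ACTUAL = ["human", "bot"]
TOLD = ["human", "bot"]

for actual, told in product(ACTUAL, TOLD):
    label = "truthful" if actual == told else "mislabeled"
    print(f"partner is {actual}, participant told {told} -> {label} condition")
```

Comparing cooperation rates across these four cells is what lets the researchers separate the effect of a partner’s actual behavior from the effect of its perceived identity.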

The results of the experiment showed that bots were more effective at eliciting cooperation from their partners when the human believed the bot was also human.

However, when it was revealed that the bot was a bot, cooperation levels dropped. Rahwan explained that while many scientists and ethicists agree that AI should be transparent about how its decisions are made, it is less clear whether AI systems should also be transparent about their nature when communicating with others.

READ the complete original article here.