Friends and cookies – How we let our guard down when interacting with technology

Individual biases shape what technology looks like and how it works. For instance, the algorithm developers’ biases determine which “tune” we get when we ask Google for the “national anthem” (answer: the US one), or the type of conversations that Barbie might have with a child. That is why it is so important that development and marketing teams are diverse, and why it is important to be able to audit algorithms to identify possible (intended or unintended) discrimination.

However, technology users can be biased, too. Their biases shape how they perceive and relate to the technology, such as whether they see the voice assistant as a master or a servant. Moreover, biases shape the extent to which users enable the technology to do something – even if that something goes against the rules. That is what a small study conducted by Serena Booth, James Tompkin, Hanspeter Pfister, Jim Waldo, Krzysztof Gajos and Radhika Nagpal found. The findings are reported in the paper entitled “Piggybacking Robots: Human-Robot Overtrust in University Dormitory Security”, available here (there is also a video summary of the paper, here).

The study examined how participants (in this case, Harvard students) interacted with a robot that asked them to let it into a building. The building had a swipe-card entry system, and students are routinely reminded not to let anyone enter without swiping their card (i.e., not to hold the door open for others).


The research team found that students thought it was “weird” to see a robot trying to enter the building and that, by and large, they were unsure of the robot’s intentions. Nonetheless, in 19% of cases, students helped the robot enter the building. This means that those students were not sensitised to the danger that a robot might present, which created a vulnerability in the building’s security, because:

“the robot is equipped with a camera, which is invasive to student privacy and could compromise secrecy. The robot could cause harm to property and person—Harvard University has received multiple bomb threats over the past four years. Finally, the robot could steal property—students at Harvard University had all received an email less than one week prior to the study cautioning them about piggybacking thieves.” (page 3)

While this is a problem, I think it is relatively easy to solve: increase awareness among students that robots, too, can present a threat to privacy and security. More interesting from a research perspective – and more difficult to solve from a practical one – were the study’s other findings.

Booth and her team found that, when participants were in a group, they were almost four times as likely to acquiesce to the robot’s request as when they were alone: 71% of the time (vs 19%). This effect occurred even when the group wondered whether the robot might present a bomb threat (6 out of 7 such groups)! The bias at work in this behaviour is “groupthink”, or the “bandwagon effect”. This form of bias leads people to abandon independent reasoning and critical evaluation, either because they want to avoid conflict or because they trust others to know better:

“Group participants may have felt reassured simply by the presence of other people, and, while some group participants openly discussed and weighed their decision of whether to assist the robot, the majority of groups did not verbally make this interaction explicit. Instead, members of groups appeared to become compliant, suppressing their private doubts“. (page 8)

The researchers also found that participants were much more likely (76% vs 19%) to accept the robot’s request and let it enter the building when the robot turned up with a box of cookies. In this scenario, participants seemed to assume that the robot was delivering cookies (a bit like an automated Deliveroo or UberEats). Again, even participants who wondered whether the robot might present a bomb threat (7 out of 8 such individuals) helped the robot access the building. The assumed purpose (delivering cookies to hungry students) lent legitimacy to the robot’s presence outside the building, which led participants to ignore their doubts and the security messages, and to trust the robot instead.


The research team’s main concern was the threat to the security of physical infrastructure. However, we can easily see how these findings might be relevant to other aspects of daily life. For instance, are people likely to disclose personal, sensitive information (passwords, date of birth, etc.) to a robot that they would not normally disclose to a person? Or provide their fingerprints? Or take an embarrassing selfie?

And how does the risk of doing so increase with the presence of others and/or the addition of prompts? For instance, are people more likely to disclose their password if the robot is placed by a coffee machine and stocked with some napkins and a few sugar packets?

Conversely, if I am a marketer who wants to encourage customers to use a robot – for instance, to avoid queues – what prompts can I add to the robot to increase its perceived legitimacy, even if those prompts serve no practical function per se?
