HRI 2017
 
Piggybacking Robots: Human-Robot Overtrust in University Dormitory Security

Serena Booth¹ James Tompkin¹,² Krzysztof Gajos¹ Jim Waldo¹ Hanspeter Pfister¹ Radhika Nagpal¹
¹Harvard University ²Brown University


Figure: A TurtleBot tries to gain access to a secure facility with an ingenious plan.


Abstract
Can overtrust in robots compromise physical security? We conducted a series of experiments in which a robot positioned outside a secure-access student dormitory asked passersby to help it gain access. We found individual participants were more likely to assist the robot in exiting the dormitory (40% assistance rate) than in entering it (19%). When the robot was disguised as a food delivery agent for the fictional start-up Robot Grub, individuals were more likely to assist the robot in entering (76%). Groups of people were more likely than individuals to assist the robot in entering (71%). Lastly, we found participants who identified the robot as a bomb threat were just as likely to open the door (87%) as those who did not. Thus, we demonstrate that overtrust—the unfounded belief that the robot does not intend to deceive or carry risk—can represent a significant threat to physical security.

Paper
PDF (7 MB)
Supplemental Video
MP4 (70 MB)

 
@inproceedings{booth17:piggybacking,
  title     = {Piggybacking Robots: Human-Robot Overtrust in University Dormitory Security},
  author    = {Serena Booth and James Tompkin and Krzysztof Z. Gajos and Jim Waldo and Hanspeter Pfister and Radhika Nagpal},
  booktitle = {Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction (HRI '17)},
  year      = {2017},
}

Press

  Motherboard | Vice: Would you trust this robot if it gave you a cookie?
  Vocativ: Do we trust robots too much?
  BostInno: Study Shows People Don't Trust Robots - Unless They're Carrying Cookies
  IEEE Spectrum: Video Friday
  Harvard SEAS: In automation we trust
  [Spanish] The psychology of trust and distrust toward robots
  [Italian] Do we trust robots too much?
  [Japanese] FujiTV FNN Minnano News