Presentation
Watch for Falling Objects: What Inappropriate Compliance Reveals about Shared Mental Models in Autonomous Cars
Session
CEDM/HART1: Decision Making
Event Type
Lecture
In-Person
Cognitive Engineering & Decision Making
Human AI Robot Teaming (HART)
Time
Wednesday, October 6th, 2:06pm - 2:24pm EDT
Location
Grand Salon V
Description
This paper evaluates Banks et al.’s Human-AI Shared Mental Model theory by examining how a self-driving vehicle’s hazard assessments facilitate shared mental models. Participants were asked to affirm the vehicle’s real-time assessments of road objects as either hazards or mistakes while behavioral and subjective measures were collected. The AI’s baseline performance was purposefully low (<50%) to examine how the human’s shared mental model might lead to inappropriate compliance. Results indicated that while participants’ true positive rate was high, their overall performance was reduced by a large false positive rate, showing that participants were indeed influenced by the AI’s faulty assessments despite full transparency about the ground truth. Both performance and compliance were directly affected by frustration, mental demand, and even physical demand. Dispositional factors, such as faith in other people’s cooperativeness and in technology companies, were also significant. Thus, our findings strongly supported the theory that shared mental models play a measurable role in performance and compliance, in a complex interplay with trust.