Affiliation:
1. Department of Information Sciences, Naval Postgraduate School, Monterey, CA 93943, USA
Abstract
Human decision-making is increasingly supported by artificial intelligence (AI) systems. From medical imaging analysis to self-driving vehicles, AI systems are becoming organically embedded in a host of different technologies. However, incorporating such advice into decision-making requires humans to rationalize AI outputs in order to reach beneficial outcomes. Recent research suggests that intermediate judgments made in the first stage of a decision process can interfere with decisions in subsequent stages. We therefore extend this research to AI-supported decision-making to investigate how intermediate judgments of AI-provided advice may influence subsequent decisions. In an online experiment (N = 192), we found a consistent bolstering effect in trust for those who made intermediate judgments over those who did not. Furthermore, violations of the law of total probability were observed at all timing intervals throughout the study. We further analyzed the results by demonstrating how quantum probability theory can model these behaviors in human–AI decision-making and improve understanding of the interaction dynamics at the confluence of human factors and information features.
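To illustrate the kind of violation the abstract refers to, the following is a minimal two-state sketch (not the paper's fitted model; the state vector, rotation angle, and outcome labels are illustrative assumptions). In a classical model, the probability of the final decision must equal the total-probability sum over the intermediate judgment; in a quantum model, skipping the intermediate measurement leaves the two judgment paths free to interfere, so the direct probability can differ from that sum.

```python
import numpy as np

# Illustrative two-state quantum decision model (assumed parameters, not the
# paper's fitted model). The belief state is a unit vector; the intermediate
# judgment and the final decision are measurements in rotated bases.

psi = np.array([0.8, 0.6])                # initial belief state (unit norm)

# Final-decision basis is rotated by theta relative to the judgment basis.
theta = np.pi / 5
b = np.array([np.cos(theta), np.sin(theta)])   # "accept advice" outcome vector

# Path 1: make the intermediate judgment first (measurement collapses the
# state), then decide. This reproduces the law of total probability.
p_a = psi[0] ** 2                         # P(judge advice correct)
p_not_a = psi[1] ** 2                     # P(judge advice incorrect)
p_b_given_a = b[0] ** 2                   # |<b|a>|^2
p_b_given_not_a = b[1] ** 2               # |<b|not a>|^2
p_b_with_judgment = p_a * p_b_given_a + p_not_a * p_b_given_not_a

# Path 2: decide directly, with no intermediate measurement. The amplitudes
# of the two judgment paths add before squaring, producing interference.
p_b_direct = (b @ psi) ** 2

print(f"judged first: {p_b_with_judgment:.3f}")
print(f"direct:       {p_b_direct:.3f}")
```

The gap between the two printed probabilities is the interference term; a classical (Kolmogorov) model forces it to zero, which is why observed violations of total probability motivate the quantum-probability analysis described above.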