Affiliation:
1. Center for Humans and Machines, Max Planck Institute for Human Development
2. Department of Economics, University of Würzburg
Abstract
There is growing interest in the field of cooperative artificial intelligence (AI), that is, settings in which humans and machines cooperate. By now, more than 160 studies from various disciplines have reported on how people cooperate with machines in behavioral experiments. Our systematic review of the experimental instructions reveals that the implementation of machine payoffs, and the information participants receive about them, differs drastically across these studies. In an online experiment (N = 1,198), we compare how these different payoff implementations shape people's revealed social preferences toward machines. When matched with machine partners, people reveal substantially stronger social preferences and reciprocity when they know that a human beneficiary receives the machine payoffs than when they know that no such "human behind the machine" exists. When participants are not informed about machine payoffs, we find weak social preferences toward machines. Comparing survey answers with those from a follow-up study (N = 150), we conclude that people form their beliefs about machine payoffs in a self-serving way. Thus, our results suggest that the extent to which humans cooperate with machines depends on the implementation of, and information about, the machine's earnings.
Cited by
3 articles.