Authors:
Alexander H. Bower, Mark Steyvers
Abstract
Though humans should defer to the superior judgement of AI in an increasing number of domains, certain biases prevent us from doing so. Understanding when and why these biases occur is a central challenge for human-computer interaction. One proposed source of such bias is task subjectivity. We test this hypothesis by having both real and purported AI engage in one of the most subjective expressions possible: humor. Across two experiments, we address the following question: will people rate jokes as less funny if they believe an AI created them? When asked to rate jokes and guess their likeliest source, participants evaluate jokes that they attribute to humans as the funniest and those they attribute to AI as the least funny. However, when these same jokes are explicitly framed as either human- or AI-created, there is no such difference in ratings. Our findings demonstrate that user attitudes toward AI are more malleable than once thought—even when AI (seemingly) attempts the most fundamental of human expressions.
Publisher
Springer Science and Business Media LLC
Cited by: 7 articles.