Affiliation:
1. School of Philosophy, Psychology, and Language Sciences, University of Edinburgh
2. Sentience Institute, New York, New York
Abstract
Artificial intelligences (AIs), although often perceived as mere tools, have increasingly advanced cognitive and social capacities. In response, psychologists are studying people's perceptions of AIs as moral agents (entities that can do right and wrong) and moral patients (entities that can be targets of right and wrong actions). This article reviews the extent to which people see AIs as moral agents and patients and how they feel about such AIs. We also examine how characteristics of perceivers and of the AIs themselves affect attributions of moral agency and patiency. We find multiple factors that contribute to attributions of moral agency and patiency in AIs, some of which overlap with attributions of morality to humans (e.g., mind perception) and some that are unique (e.g., sci-fi fan identity). We identify several future directions, including studying agency and patiency attributions to the latest generation of chatbots and to the likely more advanced future AIs now being rapidly developed.
Cited by 1 article.