Author:
Simon, Judith; Rieder, Gernot; Branford, Jason
Abstract
Advances in artificial intelligence have recently stirred public and academic debates about both the opportunities and the risks posed by these developments. It is evident that the disruptive impact of AI on many societal domains can no longer be ignored. This topical collection emerged from a full week of high-quality paper presentations at the CEPE/IACAP Joint Conference 2021: The Philosophy and Ethics of Artificial Intelligence and comprises 13 articles that were chosen purely on the merit and originality of their respective arguments as well as their ability to advance the existing ethical and philosophical discourse on AI. This introduction provides a concise overview of the individual contributions, grouping them into four thematic strands: (a) On Democracy, Regulation, and (Public) Legitimation in an AI-powered World, (b) On the Challenge of Protecting Privacy in Today’s Data Economy, (c) On Solidarity, Inclusivity, and Responsibility in AI Design, and (d) Reconsidering AI Ethics. As such, the introduction serves as a gateway and guide to the topical collection, contributing to a debate that has recently emerged as a ‘hot topic’ within philosophy and beyond but has also long been at the heart of research within the CEPE and IACAP communities. The paper concludes with some hopeful remarks on the current landscape of the field and its possible trajectory.
Publisher
Springer Science and Business Media LLC
References
26 articles.
Cited by
2 articles.