Authors
Chloe Gros, Peter Werkhoven, Leon Kester, Marieke Martens
Abstract
Despite significant advancements in AI and automated driving, a robust ethical framework for automated vehicle (AV) decision-making remains undeveloped. Such a framework requires clearly defined moral attributes to guide AVs in evaluating complex and ethically sensitive scenarios. Existing frameworks often rely on a single normative ethical theory, limiting their ability to address the nuanced nature of human decision-making and leading to conflicting outcomes. Augmented Utilitarianism (AU) offers a promising alternative by integrating elements of virtue ethics, deontology, and consequentialism into a non-normative framework. Grounded in moral psychology and neuroscience, AU employs mathematical ethical goal functions to capture societally aligned attributes. In this study, we propose and evaluate a method to elicit these attributes for AV decision-making. One hundred participants were presented with traffic scenarios, including critical and non-critical situations, and tasked with evaluating the relevance of an initial set of 11 attributes (e.g., physical harm, psychological harm, and moral responsibility) while suggesting additional relevant attributes. Results identified two new attributes, environmental harm and energy efficiency, and revealed that four attributes (physical harm, psychological harm, legality of the AV, and self-preservation) varied significantly between critical and non-critical scenarios. These findings suggest that the weight of attributes in ethical goal functions may need to adapt to situational criticality. The method was validated based on key evaluation criteria: it demonstrated sensitivity by producing varying relevance scores for attributes, was deemed relevant by participants for eliciting AV decision-making attributes, and allowed for the identification of additional attributes, enhancing the robustness of the framework. This work contributes to the development of a dynamic and context-sensitive ethical framework for AV decision-making.
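The abstract does not specify the mathematical form of an ethical goal function. As a purely illustrative sketch of the idea that attribute weights may adapt to situational criticality, one simple reading is a weighted sum of attribute scores whose weight profile switches between critical and non-critical scenarios. The attribute names below come from the abstract; the linear aggregation, the weight values, the function names, and the criticality switch are all assumptions, not the authors' method.

```python
# Illustrative sketch only: a context-sensitive ethical goal function as a
# weighted sum of attribute scores. Attribute names are taken from the
# abstract; the weights, the linear form, and the criticality switch are
# assumptions made for illustration.

from typing import Dict

# Subset of the elicited attributes mentioned in the abstract.
ATTRIBUTES = [
    "physical_harm",
    "psychological_harm",
    "legality_of_av",
    "self_preservation",
    "environmental_harm",
    "energy_efficiency",
]

# Hypothetical weight profiles: the abstract reports that some attributes
# varied significantly between critical and non-critical scenarios, so the
# weights here differ by criticality. All numbers are placeholders.
WEIGHTS = {
    "critical":     {"physical_harm": 0.35, "psychological_harm": 0.20,
                     "legality_of_av": 0.10, "self_preservation": 0.20,
                     "environmental_harm": 0.05, "energy_efficiency": 0.10},
    "non_critical": {"physical_harm": 0.15, "psychological_harm": 0.10,
                     "legality_of_av": 0.25, "self_preservation": 0.10,
                     "environmental_harm": 0.20, "energy_efficiency": 0.20},
}

def ethical_goal_function(attribute_scores: Dict[str, float],
                          critical: bool) -> float:
    """Score a candidate AV action; higher is ethically preferable.

    attribute_scores maps each attribute to a score in [0, 1], where 1 is
    the best outcome on that attribute (e.g., no physical harm).
    """
    weights = WEIGHTS["critical" if critical else "non_critical"]
    return sum(weights[name] * attribute_scores.get(name, 0.0)
               for name in ATTRIBUTES)

# Example: compare two hypothetical maneuvers in a critical scenario.
brake = {"physical_harm": 0.9, "psychological_harm": 0.7,
         "legality_of_av": 1.0, "self_preservation": 0.8,
         "environmental_harm": 1.0, "energy_efficiency": 0.6}
swerve = {"physical_harm": 0.6, "psychological_harm": 0.5,
          "legality_of_av": 0.4, "self_preservation": 0.9,
          "environmental_harm": 0.9, "energy_efficiency": 0.8}

print(ethical_goal_function(brake, critical=True))   # ~0.825
print(ethical_goal_function(swerve, critical=True))  # ~0.655
```

Under these placeholder weights, braking outscores swerving in the critical case; with the non-critical profile (where legality and environmental attributes weigh more), the ranking could differ, which is the kind of context sensitivity the abstract argues for.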
Funder
Nederlandse Organisatie voor Toegepast Natuurwetenschappelijk Onderzoek
Publisher
Springer Science and Business Media LLC