Abstract
Artificial intelligence (AI) is rapidly becoming part of our daily lives, offering myriad benefits for society. At the same time, there is concern about the unpredictability and uncontrollability of AI. In response, legislators and scholars have called for more transparency and explainability of AI. This article considers what it would mean to require transparency of AI. It advocates looking beyond the opaque concept of AI and focusing on the concrete risks and biases of its underlying technology: machine-learning algorithms. The article discusses the biases that algorithms may produce through the input data, the testing of the algorithm and the decision model. Any transparency requirement for algorithms should result in explanations of these biases that are both understandable for the prospective recipients and technically feasible for producers. Before asking how much transparency the law should require from algorithms, we should therefore consider whether the explanation that programmers could offer is useful in specific legal contexts.
Publisher
Cambridge University Press (CUP)
Cited by 84 articles.