Ethical guidelines for the use of artificial intelligence and the challenges from value conflicts
DOI: https://doi.org/10.5324/eip.v15i1.3756

Abstract
The aim of this article is to articulate and critically discuss different answers to the following question: How should decision-makers deal with conflicts that arise when the values usually entailed in ethical guidelines for the use of Artificial Intelligence (e.g. algorithm-based sentencing) – such as accuracy, privacy, non-discrimination and transparency – clash with one another? First, I focus on clarifying some of the general advantages of using such guidelines in an ethical analysis of the use of AI. Some disadvantages will also be presented and critically discussed. Second, I will show that we need to distinguish between three kinds of conflict that can arise for ethical guidelines used in the moral assessment of AI. This section will be followed by a critical discussion of different answers to the question of how to handle what we shall call internal and external value conflicts. Finally, I will wrap up with a critical discussion of three different strategies for resolving what is called a ‘genuine value conflict’. These strategies are: the ‘accepting the existence of irresolvable conflict’ view, the ranking view, and value monism. This article defends the ‘accepting the existence of irresolvable conflict’ view. It also argues that even though the ranking view and value monism, from a merely theoretical (or philosophical) point of view, are better equipped to solve genuine value conflicts among values in ethical guidelines for artificial intelligence, this is not the case in real-life decision-making.
Keywords: AI; ethical guidelines; algorithm-based sentencing; value conflicts
License
Copyright (c) 2021 Thomas Søbirk Petersen
This work is licensed under a Creative Commons Attribution 4.0 International License.