The legally mandated approximate language about AI

Authors

  • Truls Pedersen
  • Sjur Kristoffer Dyrkolbotn

Abstract

In light of the current explosion in the application of machine learning to data analysis and inference, we examine a particular challenge raised by the new EU General Data Protection Regulation (GDPR). The challenge we address concerns, in particular, the requirement that analyses of a person's data must be comprehensible to that person.

While there is a long tradition of viewing the world in intuitive terms of objects and properties, recent decades have seen a tension between more rule-based theories of mind (e.g., the Representational Theory of Mind) and more holistic approaches (e.g., Connectionism). While both approaches have merit, one departs too far from the classical understanding of "knowing" to satisfy the imminent legal reality, and the other seems, as yet, incapable of adequately capturing modern data analysis.

As a solution to this predicament, we propose a pragmatic compromise based on argumentation theory, which appears to offer a solid foundation in classical concepts while at the same time permitting enthymematic presuppositions. We argue that a framework for explaining machine behavior in terms of abstract argumentation theory can resolve this dilemma: it provides sufficient expressivity while remaining true to established definitions in epistemology, and thereby satisfies the requirements of the GDPR and the concerns motivating it.
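
To make the reference to abstract argumentation concrete, the following is a minimal sketch of a Dung-style abstract argumentation framework in Python: a set of arguments, an attack relation, and the grounded extension computed by iterating the characteristic function. The specific arguments and attacks are invented for illustration only and are not taken from the paper's proposed framework.

    # Minimal sketch of a Dung-style abstract argumentation framework (AF).
    # The arguments and attacks below are invented for illustration only.

    def grounded_extension(arguments, attacks):
        """Compute the grounded extension of an AF by iterating the
        characteristic function until a fixed point is reached."""
        attackers = {a: {b for (b, c) in attacks if c == a} for a in arguments}

        def defended(candidate, accepted):
            # An argument is defended if every one of its attackers is
            # itself attacked by some already-accepted argument.
            return all(any((d, b) in attacks for d in accepted)
                       for b in attackers[candidate])

        accepted = set()
        while True:
            new = {a for a in arguments if defended(a, accepted)}
            if new == accepted:
                return accepted
            accepted = new

    # Example: c attacks b, and b attacks a; the grounded extension is {a, c}.
    args = {"a", "b", "c"}
    atts = {("b", "a"), ("c", "b")}
    print(sorted(grounded_extension(args, atts)))  # ['a', 'c']

In such a framework, an explanation of a decision can be presented as the set of arguments that survive all attacks, which is one way to keep explanations anchored in classical, object-and-property terms.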

Published

2018-08-08

How to Cite

[1] T. Pedersen and S. K. Dyrkolbotn, “The legally mandated approximate language about AI”, NIKT, Aug. 2018.

Issue

Section

Articles