Explainable AI (XAI) in a societal citizen perspective – power, conflicts, and ambiguity
Keywords:
Explainable AI, XAI, sociotechnical theory, human-centered AI

Abstract
Explainable AI (XAI) as a research field has grown exponentially over the last decade, driven by a curiosity to investigate "the inside" of AI models that appear as black boxes through the development of techniques and methods. Recent definitions frame XAI as a process encompassing both data and application, explicitly underscoring that XAI should be regarded as more than just techniques and methods. The human stakeholder perspective is clearly emphasized in recent definitions of XAI, but what a human stakeholder focus means in organizational and societal settings is currently unexplored. This paper therefore aims to explore how the concept of XAI can be applied in societal settings by first presenting a layered theoretical understanding of XAI, suggesting that societal explainability dimensions might be grasped through an analysis of the current discourse surrounding AI in the public press. We perform a discourse analysis of a sample of news articles published in the Norwegian public press in early summer 2024 concerning Meta's approach to using personal data to train AI models. The aim of the analysis is to provide insights into how the articles create a perception of reality relating to the future AI system Meta aims to develop. Our analysis reveals that the public is presented with oppositions constructing a reality surrounding this future AI system, with shifting power dynamics, conflicting interests, and ambiguity of responsibility as major themes in the current discourse. We conclude by discussing the implications of our analytical approach and findings.
License
Copyright (c) 2024 Dorthea Mathilde Kristin Vatn, Patrick Mikalef

This work is licensed under a Creative Commons Attribution 4.0 International License.