Artificial intelligence (AI) is increasingly used in many public and private spheres of human life: fields such as law enforcement, financial markets, social media, and autonomous machines and robotics have made great strides in recent years through a new paradigm of machine learning, deep learning. However, when discussing these newer forms of artificial intelligence, the black box metaphor often comes up: the manner in which machines obtain their models appears opaque and difficult to retrace for the average observer. This raises concerns regarding human autonomy, agency, fairness, and justice. Chief among them are very practical fears, for instance that protected demographics are unconsciously discriminated against, as well as more philosophical ones, such as that deep learning systems are notoriously hard to debug well.

AI developers, media, and civil society need to engage and work together to overcome the poor transparency and accountability of new AI technologies.

The good, the bad or the ugly?
Like other innovations, such as nuclear power or bioengineering, AI can produce both benefits and unforeseen, harmful consequences. On the one hand, AI can deliver improved organisational performance and decisions. On the other hand, like humans, AI systems can fail to achieve their intended goals. This can be because the training data they use is biased or because their recommendations and decisions yield unintended and negative consequences.

Inclusive discussions and careful considerations
How can we foster artificial intelligence that is not harmful but beneficial to human life?

Communication plays a crucial role in addressing this question. Research in both business and technology ethics has emphasized the value of public engagement and deliberation (a thoughtful and inclusive form of discussion that seeks attainable consensus) for shaping responsible innovation. Ideally, ethical businesses engage with local actors, governments, and civil society to foster better understanding and responsible processes of innovation through deliberation.

However, for reasons of self-interest, power imbalances, and information advantages, businesses are seemingly unlikely to solve AI challenges deliberatively.

Is it possible to shine light into black boxes?
Even if private sector innovators were committed to communicating with stakeholders to ensure fair and responsible innovation, AI as a technology seemingly complicates such efforts because of its opacity, that is, its poor transparency, explainability, and accountability.

This is because machine-learning algorithms may, for practical purposes, be inaccessible and incomprehensible, not only to laypersons but often also, at least in everyday practice, to the organisations that own and employ them, and even to system programmers and specialists.

However, the opacity of AI must not serve as an excuse to resist public scrutiny of AI. To show pathways forward, we first relate ideal requirements for deliberation to the specific conditions of AI opacity. These requirements include, for instance, the principle that every voice should, within reasonable limits, be given the opportunity to be heard, and the maxim that each voice should be helped to comprehend the fundamental mechanisms of the system under discussion. To propose how this process could work in practice, we then discuss the roles and responsibilities of key actors (such as corporate AI developers, media as translators of complex mechanisms, and civil society actors such as NGOs and activists that advocate for more accountable AI) in the deliberative exploration and evaluation of responsible implementations of AI.

Open democratic discussions needed
There is a need to supplement and extend the current 'micro (expert) focus' in AI ethics on methods and principles for explainable and accountable AI. Our argument focuses instead on deliberation as an important 'macro (societal) layer' that relates such methods and principles to broader processes of democratic engagement and legitimation. Practically speaking, we advocate further bridging the gap between the experts who possess deep technical knowledge to inspect AI and the potentially impacted public at large. We describe the intermediary role that journalism and activism, for instance, can play in this translation if it is performed well and with a deliberative stance. Communication from corporations (AI developers) and 'fluid observation' (holding developers accountable by publicizing instances of the technology's shortcomings), facilitated through quality media and an engaged citizenry, may thus help offset common deficits in AI innovation. Together, this should strengthen the bottom-up identification, problematization, and interpretation of AI in practice and make progress in this domain more responsible in the long run.

Reference:
Buhmann, A., & Fieseler, C. (2021). Towards a deliberative framework for responsible innovation in artificial intelligence. Technology in Society, 64. https://doi.org/10.1016/j.techsoc.2020.101475 (open access)

Featured image: Pixabay

Alexander Buhmann
Associate professor at BI Norwegian Business School

Alexander Buhmann, Ph.D., is associate professor of corporate communication at BI Norwegian Business School. His research is situated at the intersection of communication, digital technology, and management. Alexander holds an M.A. in media studies from the University of Siegen (Germany) and a Ph.D. in communication studies from the University of Fribourg (Switzerland).

Christian Fieseler
Professor in Media and Communication Management at BI Norwegian Business School

Christian Fieseler is project leader of the research project Algorithmic Accountability. Read more about Christian on AFINO's webpage.