The Challenge of Artificial Intelligence with Morals: A Glimpse into the Future of Accountability in AI
From social media newsfeeds to law enforcement, financial markets, medical procedures, and countless other applications, algorithms and artificial intelligence (AI) are already transforming our lives.
Increasingly, AI systems and algorithms are used in educational systems, workplaces, cities, and governments, transforming our relationship with these structures and altering crucial services. And yet, AI is built on models that lack transparency and are difficult for users to understand.
These models rely on large data sets that are often incomplete or that may perpetuate patterns of discrimination and marginalization of certain communities. AI is an essential part of digital transformation processes and has the potential to improve our lives and futures.
However, the use of AI raises concerns about human agency, autonomy, privacy, fairness, and discrimination.
Where some would take a laissez-faire approach in the name of innovation, others have emphasized the value of public engagement and deliberation in shaping responsible innovation.
We believe these issues require broad societal conversation and reflection on the future of AI. Moreover, the notion that technologies are neutral in their development and implementation only exacerbates and amplifies the inequalities that already exist in society.
If we do not recognize that our technologies carry the biases embedded in existing structures of power, we cannot use technology to correct those imbalances. As Ben Green writes in The Smart Enough City, both public discourse and policy approaches need to consider algorithms “not as unassailable oracles but as socially constructed and fallible inputs to political decisions.”
In other words, only when we directly face the challenges of algorithmic governance can we begin to chart a better trajectory for its development.
How, then, do we chart these better trajectories?
We believe that research and innovation policy instruments such as National AI Strategies are important vehicles for gathering the concerns that arise when we engage in a societal conversation about how we expect AI and algorithms to be governed. Such AI strategies could serve as blueprints, charting the kind of governance systems that will ensure we reap the benefits of AI and hold relevant stakeholders accountable when harm is done.
Here, some questions arise: How should we regulate AI and algorithmic systems? Are traditional legal, regulatory, and policy systems adequate? How do we ensure accountability in AI? What is the role of governments and civil society in AI governance?
To get a better understanding of how these issues are playing out in practice, we recently conducted research on national AI strategies from European countries.
In a recent article co-authored with Itziar Castello and Christian Fieseler, presented in July 2021 at the Academy of Management Annual Conference, we argued that through their national AI strategies, governments are trying to square the aspiration of exploiting the potential of machine learning with safeguarding their communities against the perceived ills of unchecked artificial systems. Our research showed that these policy instruments are an interesting showcase for a recent turn in policy formulation, which increasingly tries to intertwine moral sentiment with strategic policy dimensions.
This process of moralizing is interesting and unprecedented coming from governmental actors, as these are guidance documents, not law. Given the significant leeway in the development trajectories of technologies such as artificial intelligence, we argue that these moralizing elements within policy documents are illustrative of a new class of policy writing, meant to catalyse and shape public opinion and thus, by proxy, development trajectories.
Not only are these policy instruments novel in embedding a moralizing dimension into the technologies they foster; from a responsible innovation perspective, they also serve as a blueprint for other areas of research and innovation where private actors resist scrutiny or accountability.
References
Castello, I., Fieseler, C., & Uribe, S. (2021). Moral legitimisation in science, technology and innovation policies. Academy of Management Proceedings.
Green, B. (2019). The Smart Enough City: Putting Technology in Its Place to Reclaim Our Urban Future. Cambridge, MA: MIT Press.
Featured image: Colourbox.
Santiago Uribe
Santiago holds an M.Phil. in Development, Environment and Cultural Change from the University of Oslo and an LLB from Universidad de los Andes. He is currently working at BI Norwegian Business School’s Nordic Centre for Internet and Society as a research assistant on the NRC-funded project on Algorithmic Accountability. Santiago’s current interests focus on the ethics of digitization, surveillance, and responsible digital transformations.