Responsible Research and Innovation for Preventing Online Violence
In an era of political, economic, and social instability, online violence can spread hatred and incite sections of the population to pursue political, economic, or social objectives in unacceptable or unlawful ways. It is therefore crucial to detect and analyse online violence at an early stage and to develop appropriate preventive measures.
To address this, an international consortium of researchers from the Western Norway Research Institute (WNRI), the Norwegian University of Science and Technology (NTNU), NLA University College, and George Mason University has begun an ambitious transdisciplinary research project intended to help communities prevent the spread of hate speech online. This is the SOCYTI project, a multi-year effort combining social science, computer science, ethics, legal analysis, and local expertise with the goal of developing a cloud-based, real-time detection system capable of evaluating text and images from social media posts for hateful content on a larger scale than ever previously possible. This detection system is intended to support community resilience by enabling proactive interventions that counter detected hate speech and prevent its further spread.
To achieve this, a major focus is improving the ability of Artificial Intelligence (AI) to recognize hateful content, whether expressed through explicit language, implicit language, images, or symbols. We are currently working on two aspects of this. The first is the development of an ontology of hate speech, which functions as the autonomous system's mental model of hate speech. Building an informed model has required the team to develop a deep understanding of the concept: how it is defined, how it can be identified, what causes it, and what it does to people and communities. The second aspect is our method of classifying data. We are refining our current method to better distinguish between hateful and offensive content. The distinctions are multi-faceted and depend on factors such as speaker intent, speaker and target identity, and the specific language used. In addition to automatically collecting and sorting as much of this data as possible, our monitoring system will involve human experts working in concert with automated tools. This is critical for maintaining context sensitivity and navigating the complexities and changing landscape of hate speech and online discourse.
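To make the human-in-the-loop idea concrete, here is a minimal sketch of how automated classification and human review might fit together. Everything in it is illustrative: the labels, the confidence threshold, and the keyword-based `toy_model` are stand-ins, not the project's actual classifier or categories.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Verdict:
    label: str          # e.g. "hateful", "offensive", or "benign" (illustrative labels)
    confidence: float   # model confidence in [0.0, 1.0]
    needs_review: bool  # True when a human annotator should check the item

def triage(posts: List[str],
           model: Callable[[str], Tuple[str, float]],
           threshold: float = 0.85) -> List[Verdict]:
    """Run each post through the model; escalate low-confidence calls
    and borderline ("offensive") calls to human reviewers."""
    results = []
    for text in posts:
        label, conf = model(text)
        needs_review = conf < threshold or label == "offensive"
        results.append(Verdict(label, conf, needs_review))
    return results

def toy_model(text: str) -> Tuple[str, float]:
    """Stand-in for a trained classifier; keyword matching is NOT how a
    real hate-speech model works, it only makes the sketch runnable."""
    lowered = text.lower()
    if "hate" in lowered:
        return ("hateful", 0.90)
    if "stupid" in lowered:
        return ("offensive", 0.70)
    return ("benign", 0.95)

verdicts = triage(["You people are stupid", "Nice weather today"], toy_model)
```

The design point is the routing rule, not the model: anything the classifier is unsure about, or that sits in the ambiguous hateful-versus-offensive zone, is flagged for human judgment rather than decided automatically.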
A novel aspect of this project is to develop an understanding of the relationship between community resilience and violence-inducing behaviors, such as online hate speech. Research on community resilience is explicitly concerned with what enables communities to cope with, adapt to, and overcome pressing threats to their functioning. It is well understood that hate speech is an urgent threat to the cohesion of a community and can have very real violent consequences for its vulnerable members; community resilience research is therefore a natural fit for the prevention of online violence. We hope this knowledge will contribute to both our and others' understanding of what hate speech is and where it comes from. For this purpose, our team has compiled multiple models of community resilience and has drawn from them a set of indicators of community resilience for further use in the project. Our goal is to establish theoretically backed connections between indicators of community resilience and different kinds of hate speech. This is done under the hypothesis that the indicators captured by community resilience models can provide useful proxy measurements for some of the drivers of hate speech, that is, the factors that contribute to its spread in a community.
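One simple way such indicators could be combined, purely as an illustration of the proxy-measurement idea, is a weighted composite score. The indicator names, values, and weights below are hypothetical; the project's actual indicator set is drawn from published resilience models.

```python
# Hypothetical, normalized indicator values in [0, 1] for one community.
indicators = {
    "social_trust": 0.62,
    "civic_participation": 0.48,
    "economic_stability": 0.71,
}

# Hypothetical weights; a real weighting would be theoretically justified.
weights = {
    "social_trust": 0.40,
    "civic_participation": 0.35,
    "economic_stability": 0.25,
}

def composite_resilience(ind: dict, w: dict) -> float:
    """Weighted average of normalized indicator values."""
    return sum(ind[k] * w[k] for k in ind) / sum(w.values())

score = composite_resilience(indicators, weights)
```

Under the hypothesis described above, a community scoring low on such a composite might be more exposed to the drivers of hate speech and so warrant closer monitoring; the sketch only shows the arithmetic, not any validated relationship.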
Along with our extensive research, we have involved non-academics in the development of our knowledge and systems. Namely, local citizens have been interviewed by the team for their perspectives on and understanding of hate speech. This year we will host a workshop with representatives of some of the different stakeholders in a community, including law enforcement, media representatives, and citizens, where we hope to test our data annotation framework and hate speech ontology. We will then finalize the ontology, incorporating our identification and classification framework as well as the related community resilience indicators. Ultimately, we will begin development of the technological system itself. The project has embedded RRI practices in the field of computational social science, a sector at the crossroads between computer science and society, in Norway and the USA. Thanks to the AFINO project for collaborating with us and for sharing thoughts on responsible research and innovation in transdisciplinary projects.
The SOCYTI project has received funding from the Research Council of Norway as a Researcher Project for Technological Convergence related to Enabling Technologies under grant agreement no. 331736.
Find more information about the SOCYTI project on its webpage.
Featured photo: Colourbox.
Rajendra Akerkar
Rajendra Akerkar is a professor and coordinator of the Big Data and Emerging Technologies research group at Western Norway Research Institute (Vestlandsforsking). He is the SOCYTI project manager. Read more about Rajendra on the Big Data research group's website at Western Norway Research Institute.
Dante Della Vella
Dante Della Vella is a researcher at Western Norway Research Institute (Vestlandsforsking). He has a Master's degree in Cognitive Systems Engineering and a Bachelor's degree in Industrial and Systems Engineering from The Ohio State University. He is pursuing research into the resilience of communities themselves as layered socio-technical systems.