
Toxic speeches

Artificial intelligence system that analyses toxicity in political discourse developed

[ 24/03/2025 ]

A team from the VRAIN Institute of the Universitat Politècnica de València (UPV) has developed a methodology with a modular artificial intelligence system capable of identifying irony, biting criticism or sarcasm with greater precision, in order to detect when political language crosses the threshold of toxicity.

This methodology has already been tested in the analysis of 10 sessions of the Valencian Parliament. From these sessions, 875 samples were selected: 435 had a toxicity score above 50%, indicating a higher probability of toxicity, while the rest scored below 50%.

Conclusions at the international conference ICAART 2025

The methodology was developed by a VRAIN research team at the UPV formed by members of the PROS group (Antoni Mestre, Joan Fons, Manoli Albert and Vicente Pelechano), together with Miriam Gil, researcher at the Department of Computer Science of the University of Valencia, and Francesco Malafarina, researcher at the Università degli Studi del Sannio in Benevento (Italy). Their conclusions have recently been presented.

Entitled "Sentiment-enriched AI for the detection of toxic speeches; A case study of political speeches in Les Corts Valencianes", the paper has been presented at the International Conference on Artificial Intelligence and Agents (ICAART) 2025 held in the city of Porto.

New possibilities for automated moderation

"The study, applied to interventions in Les Corts Valencianes, proposes an innovative approach that improves traditional toxic speech detection systems by introducing a "confusion zone". Instead of labelling a message as simply toxic or non-toxic, the AI detects the most ambiguous cases and subjects them to a sentiment analysis that allows a better assessment of the emotional tone of the speech," explains UPV's VRAIN researcher Antoni Mestre.

Without the confusion zone, the system reaches an accuracy of 80.35%. When the sentiment analysis layer is incorporated into the detection of toxic speech, especially in politically charged and linguistically complex contexts such as Les Corts, accuracy improves to 87.89%.

"Our work offers a more nuanced view of political language. It also opens up new possibilities for automated content moderation and analysis of public debate with more sophisticated and fairer tools. Its application in parliaments would allow their presiding officers to moderate the debate more fairly and accurately, ensuring a more respectful and constructive discourse," concludes Antoni Mestre.

Reference

Mestre, A., Malafarina, F., Fons, J., Albert, M., Gil, M. and Pelechano, V. (2025). Sentiment-Enriched AI for Toxic Speech Detection: A Case Study of Political Discourses in the Valencian Parliament. In Proceedings of the 17th International Conference on Agents and Artificial Intelligence - Volume 3: ICAART, pages 555-561. SciTePress. ISBN 978-989-758-737-5; ISSN 2184-433X. DOI: 10.5220/0013159600003890
