Social networks appeal to emotions, especially in crises and amid large doses of disinformation. These reactions differ depending on the platform: the emotional paradigm is not the same, as each one activates different cognitive and sensory processes.
A study led by researchers from the Universitat Politècnica de València (UPV) has identified a little-researched phenomenon: different social networks elicit different emotions. After analysing hundreds of messages on X and TikTok, the researchers concluded that disinformation about the DANA on X is mainly associated with greater sadness and fear. On TikTok, in contrast, it correlates with greater anger and disgust.
'On X, as the content is mostly textual, users must interpret the information, favouring a more introspective emotional response, where narratives highlight tragedies and negative events at a slower pace, triggering feelings of sadness and fear. TikTok, on the other hand, by integrating visual and auditory elements, offers a multisensory experience that produces immediate and more intense responses,' explained UPV researchers Paolo Rosso and Iván Arcos, who concluded that the reactive and visceral emotions generated by TikTok centre on anger and disgust.
These conclusions were reached in the study 'Divergent emotional patterns in disinformation on social networks. An analysis of tweets and tiktoks about DANA in Valencia', led by Iván Arcos and Paolo Rosso, researchers at the Universitat Politècnica de València, together with Ramón Salaverría, from the School of Communication at the University of Navarra.
Dramatic music, tonal variations and visual effects on the Chinese social network act as catalysts for these less considered emotions. Furthermore, TikTok's audience, 'accustomed to dynamic and fast-moving content, tends to process information more immediately, which contributes to emotional polarisation in the face of disinformation,' conclude Rosso and Arcos, who work at the PRHLT research centre.
The study, developed within the framework of the Iberian Digital Media Observatory and published at the ICAART-2025 Artificial Intelligence Conference, was conducted by analysing the spread of disinformation on these two social networks during the DANA in Valencia on 29 October. To this end, the emotional and linguistic patterns of 650 posts from X and TikTok were examined.
The report reinforces the conclusion that appealing to emotions is a deliberate and recurring strategy in misleading messages. Likewise, the linguistic analysis shows that reliable content uses more articulate language, whereas fake news messages rely on negations, personal anecdotes or references to family members, legitimising their claims through direct testimony to appear more credible.
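As a minimal illustration of this kind of linguistic cue (this is an assumed sketch, not the study's actual pipeline), one can measure the rate of negation words in a post, one of the features the researchers associate with misleading messages:

```python
# Hypothetical example: rate of negation words as a simple linguistic
# feature. The word list and threshold are illustrative assumptions,
# not values reported by the study.
NEGATIONS = {"no", "not", "never", "nobody", "nothing", "without"}

def negation_rate(text: str) -> float:
    """Share of tokens in the text that are negation words."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    hits = sum(t.strip(".,!?") in NEGATIONS for t in tokens)
    return hits / len(tokens)
```

A feature like this would feed into a larger classifier alongside emotional and stylistic signals, rather than serve as a verdict on its own.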
In the process of analysing the messages posted on social networks, keywords related to the Dana were used. Some of the most frequently used keywords were ‘conspiracy’, ‘deceased concealment’, ‘deception’, ‘manipulation’, ‘lies’, ‘Bonaire cemetery’ or ‘help rejected’.
The audio of the TikTok posts was also analysed, revealing distinct patterns: reliable posts featured brighter tones and robotic or monotonous narration, conveying clarity and credibility, whereas disinformation messages used tonal variation, emotional depth and musical elements to alter perception.
In the face of the increase in disinformation on social networks, the researchers point to the use of artificial intelligence on the platforms to verify content, since 'it could automatically analyse thousands of publications, detect patterns characteristic of disinformation and flag them. It could also alert users to the possibly dubious veracity of certain posts, which would help mitigate the spread of misleading information'.
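A very simple version of such automated flagging could score posts against keyword patterns like those the study reports. The sketch below is a hypothetical illustration (the keyword list is adapted from the terms quoted above; the scoring and threshold are assumptions, not the researchers' method):

```python
# Illustrative sketch: flag posts for human review when they contain
# several keywords associated with disinformation about the DANA.
# Keyword list adapted from the study's reported terms; the threshold
# is an arbitrary assumption for demonstration.
DISINFO_KEYWORDS = ["conspiracy", "concealment", "deception",
                    "manipulation", "lies", "help rejected"]

def disinfo_score(text: str) -> float:
    """Fraction of keyword patterns present in the post (0.0 to 1.0)."""
    lowered = text.lower()
    hits = sum(1 for kw in DISINFO_KEYWORDS if kw in lowered)
    return hits / len(DISINFO_KEYWORDS)

def flag_posts(posts, threshold=0.15):
    """Return the posts whose score meets the threshold."""
    return [p for p in posts if disinfo_score(p) >= threshold]
```

In practice, a real system would combine many such signals (emotional, linguistic, and audio features) in a trained model, and would surface warnings to users rather than removing content automatically.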