
Stronger Together: Unleashing the Social Impact of Hate Speech Research

poster, posted on 2025-01-12, 04:06, authored by Sidney Wong

The advent of the internet has been both a blessing and a curse for once-marginalised communities. When used well, the internet can connect and establish communities across different intersections; however, it can also be used as a tool to alienate people and communities and to perpetuate hate, misinformation, and disinformation, especially on social media platforms. In the last three decades, we have seen exponential growth in hate speech research, with rapid developments in the last decade alone as a result of methodological advances in computational linguistics and NLP [1]. These advances have been promoted as a valuable resource for policing anti-social behaviour online [2]. However, community-minded researchers are beginning to question the benefits of computational solutions in combating hate speech [3], while others have argued that the datafication of hate speech research has become an unnecessary distraction for computational linguists in combating this social issue [4]. We propose steering hate speech research and researchers away from pre-existing computational solutions and towards social methods that can inform social solutions to this social problem. Just as linguistics research can inform language planning policy, linguists should apply what we know about language and society to mitigate some of the emergent risks and dangers of anti-social behaviour in digital spaces. For example, discursive strategies such as ‘Voldemorting’ [5] and ‘Algospeak’ [6] may support communities in developing strategies to combat hate speech, misinformation, and disinformation. We argue that linguists and NLP researchers can play a principal role in unleashing the social impact potential of linguistics research, working alongside communities, advocates, activists, and policymakers to enable equitable digital inclusion and to close the digital divide [7].

Reference list

[1] Tontodimamma, A., Nissi, E., Sarra, A., & Fontanella, L. (2021). Thirty years of research into hate speech: Topics of interest and their evolution. Scientometrics, 126(1), 157–179. https://doi.org/10.1007/s11192-020-03737-6

[2] Rawat, A., Kumar, S., & Samant, S. S. (2024). Hate speech detection in social media: Techniques, recent trends, and future challenges. WIREs Computational Statistics, 16(2), e1648. https://doi.org/10.1002/wics.1648

[3] Parker, S., & Ruths, D. (2023). Is hate speech detection the solution the world wants? Proceedings of the National Academy of Sciences, 120(10), e2209384120. https://doi.org/10.1073/pnas.2209384120

[4] Laaksonen, S.-M., Haapoja, J., Kinnunen, T., Nelimarkka, M., & Pöyhtäri, R. (2020). The Datafication of Hate: Expectations and Challenges in Automated Hate Speech Monitoring. Frontiers in Big Data, 3. https://doi.org/10.3389/fdata.2020.00003

[5] van der Nagel, E. (2018). ‘Networks that work too well’: Intervening in algorithmic connections. Media International Australia, 168(1), 81–92. https://doi.org/10.1177/1329878X18783002

[6] Steen, E., Yurechko, K., & Klug, D. (2023). You Can (Not) Say What You Want: Using Algospeak to Contest and Evade Algorithmic Content Moderation on TikTok. Social Media + Society, 9(3), 20563051231194586. https://doi.org/10.1177/20563051231194586

[7] Norris, P. (2001). The Digital Divide. In Digital Divide: Civic Engagement, Information Poverty, and the Internet Worldwide. Cambridge University Press. https://doi.org/10.1017/CBO9781139164887
