Comparing Pre-trained Language Models for Arabic Hate Speech Detection

Authors

  • Kheir Eddine Daouadi, Echahid Cheikh Larbi Tebessi University
  • Yaakoub Boualleg, Mohamed Khider University of Biskra
  • Oussama Guehairia, Mohamed Khider University of Biskra

DOI:

https://doi.org/10.13053/cys-28-2-4130

Keywords:

Arabic hate speech detection, fine-tuning, transfer learning, AraBERT

Abstract

Today, hate speech detection in Arabic tweets attracts the attention of many researchers around the world, and various classification approaches have been proposed as a result. However, two main challenges remain: the reliance on handcrafted features and the still-limited performance of existing approaches. In this paper, we address the task of Arabic hate speech identification on Twitter and provide a deeper understanding of the capabilities of recent machine-learning techniques. In particular, we compare the performance of traditional machine learning methods with deep learning models and with recently pre-trained language models based on transfer learning. We conducted experiments on a benchmark dataset under a standard evaluation scenario. The experiments show that multidialectal pre-trained language models outperform monolingual and multilingual ones, and that fine-tuning pre-trained language models improves the accuracy of hate speech detection in Arabic tweets. Our main contribution is achieving promising results for Arabic by applying multidialectal pre-trained language models trained on Twitter data.
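To make the comparison in the abstract concrete, the sketch below illustrates the "traditional machine learning with handcrafted features" side of it: a binary bag-of-words representation fed to a plain logistic-regression classifier. This is a toy illustration only; the tokenizer, feature choice, and training data here are assumptions for demonstration and are not the paper's actual pipeline or dataset.

```python
import math
import re


def bag_of_words(texts):
    """Build a vocabulary and binary bag-of-words vectors (a handcrafted feature)."""
    vocab = {}
    for t in texts:
        for tok in re.findall(r"\w+", t, re.UNICODE):
            vocab.setdefault(tok, len(vocab))
    vectors = []
    for t in texts:
        v = [0.0] * len(vocab)
        for tok in re.findall(r"\w+", t, re.UNICODE):
            v[vocab[tok]] = 1.0
        vectors.append(v)
    return vocab, vectors


def train_logreg(X, y, epochs=200, lr=0.5):
    """Logistic regression trained with per-example gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))  # sigmoid
            g = p - yi                       # gradient of the log loss
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b


def predict(w, b, x):
    """Label 1 (hateful) if the decision function is positive, else 0."""
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1 if z > 0 else 0
```

In contrast, the fine-tuned pre-trained language models the paper favors learn their features from raw text during pre-training, which is precisely what removes the handcrafted-feature bottleneck this baseline depends on.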

Author Biographies

Kheir Eddine Daouadi, Echahid Cheikh Larbi Tebessi University

Faculty of Exact Sciences and Nature and Life Sciences

Yaakoub Boualleg, Mohamed Khider University of Biskra

Faculty of Exact Sciences and Nature and Life Sciences

Oussama Guehairia, Mohamed Khider University of Biskra

Faculty of Sciences and Technology

Published

2024-06-12

Section

Articles