Comparing Pre-trained Language Models for Arabic Hate Speech Detection
Abstract
Today, hate speech detection in Arabic tweets attracts the attention of researchers around the world, and a variety of classification approaches have been proposed as a result of these research efforts. However, two main challenges remain: the reliance on handcrafted features and the still-limited performance of existing approaches. In this paper, we address the task of Arabic hate speech identification on Twitter and provide a deeper understanding of the capabilities of recent machine learning techniques. In particular, we compare the performance of traditional machine learning methods with recent pre-trained language models based on transfer learning, as well as deep learning models. We conducted experiments on a benchmark dataset under a standard evaluation scenario. The experiments show that (i) multidialectal pre-trained language models outperform monolingual and multilingual ones, and (ii) fine-tuning pre-trained language models improves the accuracy of hate speech detection in Arabic tweets. Our main contribution is achieving promising results for Arabic by applying multidialectal pre-trained language models trained on Twitter data.
Keywords
Arabic Hate Speech Detection; Fine-tuning; Transfer Learning; AraBERT
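The following is a minimal sketch of the fine-tuning setup the abstract describes, assuming a standard HuggingFace Transformers workflow; the model checkpoint (UBC-NLP/MARBERT, a multidialectal Arabic model pre-trained on tweets), the hyperparameters, and the toy data are illustrative assumptions, not the authors' released code.

```python
# Hedged sketch: fine-tuning a multidialectal pre-trained model for binary
# hate speech classification. Model name, labels, and hyperparameters are
# assumptions for illustration only.
import torch
from torch.utils.data import DataLoader, Dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "UBC-NLP/MARBERT"  # assumed multidialectal model trained on tweets

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME,
                                                           num_labels=2)

class TweetDataset(Dataset):
    """Wraps (text, label) pairs; labels: 0 = not hate, 1 = hate (assumed)."""
    def __init__(self, texts, labels):
        self.enc = tokenizer(texts, truncation=True, padding=True,
                             max_length=128, return_tensors="pt")
        self.labels = torch.tensor(labels)

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, i):
        item = {k: v[i] for k, v in self.enc.items()}
        item["labels"] = self.labels[i]
        return item

# Hypothetical toy data; a real run would load the benchmark dataset.
train_ds = TweetDataset(["...tweet text..."], [1])
loader = DataLoader(train_ds, batch_size=16, shuffle=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for epoch in range(3):                  # a few epochs, typical for fine-tuning
    for batch in loader:
        optimizer.zero_grad()
        loss = model(**batch).loss      # cross-entropy over the two labels
        loss.backward()
        optimizer.step()
```

Swapping MODEL_NAME for a monolingual (e.g., AraBERT) or multilingual checkpoint reproduces the kind of comparison the abstract reports, with the rest of the pipeline unchanged.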