MultiLate Classifier: A Novel Ensemble of CNN-BiLSTM with ResNet-based Multimodal Classifier for AI-generated Hate Speech Detection

Advaitha Vetagiri, Prateek Mogha, Partha Pakray

Abstract


The rise of multimodal hate speech, which combines text and visual elements, poses significant challenges for online content moderation. Traditional detection models often focus on a single modality and struggle with AI-generated content that is contextually nuanced and semantically complex. These limitations lead to suboptimal performance, as existing frameworks are not robust enough to handle the evolving nature of hate speech across diverse contexts and datasets; an integrated approach that captures the interplay between text and images is needed for more accurate identification. To address these challenges, this paper introduces MultiLate, a novel classifier that integrates text and image modalities for robust hate speech detection. The textual component employs a CNN-BiLSTM architecture, augmented by a feature fusion pipeline incorporating Three W's Question Answering and sentiment analysis. For the image modality, the classifier combines a pre-trained ResNet50 architecture with Diffusion Attention Attribution Maps to generate pixel-level heatmaps that highlight salient regions corresponding to contextually significant words. These heatmaps are selectively processed to improve both classification accuracy and computational efficiency. Features extracted from both modalities are then fused for comprehensive multimodal classification. Extensive evaluations on the MULTILATE and MultiOFF datasets demonstrate the efficacy of the proposed approach, and comparative analysis against state-of-the-art models underscores the robustness and generalization capability of the MultiLate classifier. The proposed framework enhances detection accuracy and optimizes computational resource utilization, significantly advancing multimodal hate speech classification.
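The late-fusion scheme described above can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation: the layer sizes, the small stand-in image encoder (substituting for the pre-trained ResNet50), and the omission of the Three W's QA, sentiment, and heatmap features are all simplifying assumptions; only the overall pattern — encode each modality separately, concatenate the feature vectors, classify — follows the abstract.

```python
import torch
import torch.nn as nn

class TextBranch(nn.Module):
    """CNN-BiLSTM text encoder (illustrative sizes, not the paper's configuration)."""
    def __init__(self, vocab_size=10000, embed_dim=128, conv_channels=64, lstm_hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.conv = nn.Conv1d(embed_dim, conv_channels, kernel_size=3, padding=1)
        self.bilstm = nn.LSTM(conv_channels, lstm_hidden,
                              batch_first=True, bidirectional=True)

    def forward(self, token_ids):                      # (batch, seq_len)
        x = self.embed(token_ids).transpose(1, 2)      # (batch, embed_dim, seq_len)
        x = torch.relu(self.conv(x)).transpose(1, 2)   # (batch, seq_len, conv_channels)
        _, (h, _) = self.bilstm(x)                     # h: (2, batch, lstm_hidden)
        return torch.cat([h[0], h[1]], dim=1)          # (batch, 2 * lstm_hidden)

class ImageBranch(nn.Module):
    """Stand-in for the pre-trained ResNet50 backbone: any encoder mapping an
    image to a fixed-length feature vector slots into the same fusion scheme."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, feat_dim),
        )

    def forward(self, images):                         # (batch, 3, H, W)
        return self.net(images)                        # (batch, feat_dim)

class LateFusionClassifier(nn.Module):
    """Concatenate text and image features, then classify (late fusion)."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.text = TextBranch()
        self.image = ImageBranch()
        self.head = nn.Linear(2 * 64 + 128, num_classes)

    def forward(self, token_ids, images):
        fused = torch.cat([self.text(token_ids), self.image(images)], dim=1)
        return self.head(fused)

model = LateFusionClassifier()
logits = model(torch.randint(0, 10000, (4, 20)), torch.randn(4, 3, 64, 64))
print(logits.shape)  # torch.Size([4, 2])
```

In practice the extra text-side features (QA and sentiment vectors) and the selectively processed heatmaps would be concatenated into `fused` alongside the two branch outputs before the classification head.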

Keywords


Hate Speech Detection; Multimodal Classification; CNN-BiLSTM; ResNet50; Diffusion Attention Attribution Maps
