MultiLate Classifier: A Novel Ensemble of CNN-BiLSTM with ResNet-based Multimodal Classifier for AI-generated Hate Speech Detection
DOI:
https://doi.org/10.13053/cys-29-3-5397
Keywords:
Hate Speech Detection, Multimodal Classification, CNN-BiLSTM, ResNet50, Diffusion Attention Attribution Maps
Abstract
The rise of multimodal hate speech, which combines text and visual elements, poses significant challenges for online content moderation. Traditional detection models often focus on a single modality and struggle with AI-generated content that is contextually nuanced and semantically complex. These limitations lead to suboptimal performance, as existing frameworks are not robust enough to handle the evolving nature of hate speech across diverse contexts and datasets. An integrated approach that captures the interplay between text and images is needed for more accurate identification. To address these challenges, this paper introduces a novel MultiLate classifier designed to synergistically integrate text and image modalities for robust hate speech detection. The textual component employs a CNN-BiLSTM architecture, augmented by a feature fusion pipeline incorporating Three W's Question Answering and sentiment analysis. For the image modality, the classifier utilizes a pre-trained ResNet50 architecture alongside Diffusion Attention Attribution Maps to generate pixel-level heatmaps, highlighting salient regions corresponding to contextually significant words. These heatmaps are selectively processed to enhance both classification accuracy and computational efficiency. The extracted features from both modalities are then fused to perform comprehensive multimodal classification. Extensive evaluations on the MULTILATE and MultiOFF datasets demonstrate the efficacy of the proposed approach. Comparative analysis against state-of-the-art models underscores the robustness and generalization capability of the MultiLate classifier. The proposed framework enhances detection accuracy and optimizes computational resource utilization, significantly advancing multimodal hate speech classification.
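The late-fusion architecture the abstract describes can be sketched in PyTorch. This is only an illustrative skeleton, not the authors' implementation: all layer sizes, the vocabulary size, and the use of a small convolutional stand-in for the pre-trained ResNet50 backbone are assumptions, and the Three W's QA, sentiment, and heatmap features are omitted for brevity.

```python
import torch
import torch.nn as nn

class TextBranch(nn.Module):
    """CNN over token embeddings followed by a BiLSTM (illustrative sizes)."""
    def __init__(self, vocab_size=1000, embed_dim=64, conv_ch=32, hidden=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.conv = nn.Conv1d(embed_dim, conv_ch, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(conv_ch, hidden, batch_first=True,
                            bidirectional=True)

    def forward(self, tokens):                   # tokens: (B, T) int ids
        x = self.embed(tokens).transpose(1, 2)   # (B, E, T) for Conv1d
        x = torch.relu(self.conv(x)).transpose(1, 2)  # (B, T, C)
        _, (h, _) = self.lstm(x)                 # h: (2, B, H)
        return torch.cat([h[0], h[1]], dim=1)    # (B, 2H) final states

class MultiLateSketch(nn.Module):
    """Late fusion: concatenate text and image features, then classify.

    `image_backbone` stands in for the pre-trained ResNet50 feature
    extractor; here any module mapping images to (B, image_dim) works.
    """
    def __init__(self, image_backbone, image_dim, n_classes=2):
        super().__init__()
        self.text = TextBranch()
        self.image = image_backbone
        self.head = nn.Linear(64 + image_dim, n_classes)  # 2H = 64

    def forward(self, tokens, images):
        fused = torch.cat([self.text(tokens), self.image(images)], dim=1)
        return self.head(fused)                  # (B, n_classes) logits

# Tiny convolutional stand-in for ResNet50, for a shape check only.
backbone = nn.Sequential(nn.Conv2d(3, 8, 3, stride=2),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten())
model = MultiLateSketch(backbone, image_dim=8)
logits = model(torch.randint(0, 1000, (4, 12)), torch.randn(4, 3, 32, 32))
```

In a faithful reproduction, `image_backbone` would be `torchvision.models.resnet50` with its classification head removed (2048-dim features), and the text features would additionally carry the QA and sentiment signals before fusion.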
Published
2025-09-25
Issue
Section
Articles
License
Hereby I transfer exclusively to the Journal "Computación y Sistemas", published by the Computing Research Center (CIC-IPN), the copyright of the aforementioned paper. I also accept that these rights will not be transferred to any other publication, in any other format, language, or other existing medium. I certify that the paper has not been previously disclosed or simultaneously submitted to any other publication, and that it does not contain material whose publication would violate the copyright or other proprietary rights of any person, company, or institution. I certify that I have permission from the institution or company where I work or study to publish this work. The representative author accepts responsibility for the publication of this paper on behalf of each and every one of the authors.
This transfer is subject to the following conditions:
- The authors retain all ownership rights (such as patent rights) of this work, except for the publishing rights transferred to the CIC through this document.
- Authors retain the right to publish the work, in whole or in part, in any book of which they are the authors or publishers. They may also make use of this work in conferences, courses, personal web pages, and so on.
- Authors may include the work as part of their thesis, for non-profit distribution only.