Author Profiling in Social Media with Multimodal Information

Miguel Á. Álvarez Carmona, Esaú Villatoro Tello, Manuel Montes y Gómez, Luis Villaseñor Pineda

Abstract


This paper summarizes the thesis "Author Profiling in Social Media with Multimodal Information." Our solution follows a multimodal approach that extracts information from the written messages and the images shared by users. Previous work has shown that both modalities contain useful information for this task; our proposal goes further by demonstrating their complementarity when the two sources of information are merged. To do this, we propose transforming images into texts so that both kinds of information share the same representation framework, which makes their fusion possible. Our work explores different methods for extracting information from both the texts and the images. To represent the extracted information, different distributional term representation approaches were explored in order to identify the topics addressed by each user, and an evaluation framework was proposed to determine the most appropriate method for this task. The results show that the textual descriptions of the images contain useful information for author profiling, and that fusing the textual information with the information extracted from the images increases the accuracy of this task.
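As a rough illustration of the fusion idea described above, the following sketch merges a user's messages with textual descriptions of their shared images into a single document and trains a simple term-based classifier. It is not the thesis implementation: the toy data, the profiling label, the use of TF-IDF in place of the distributional term representations studied in the thesis, and the external image-to-text (captioning) step assumed to produce the image descriptions are all illustrative assumptions.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical data: one entry per user, with all their posts joined into a
# single string, and all captions of their shared images joined likewise.
# The captions are assumed to come from a separate image-to-text step.
user_texts = [
    "loving the new stadium, great match with friends",
    "just finished grading exams, coffee time at last",
]
image_descriptions = [
    "a group of people watching a soccer game",
    "a desk with a laptop and a stack of papers",
]
labels = ["sports_fan", "teacher"]  # placeholder profiling target

# Early fusion: merge both modalities into one textual document per user, so
# that a single term-based representation covers texts and images alike.
fused_documents = [
    f"{text} {caption}" for text, caption in zip(user_texts, image_descriptions)
]

# TF-IDF here is a simplification standing in for the distributional term
# representations explored in the thesis; the classifier is likewise generic.
pipeline = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
pipeline.fit(fused_documents, labels)

# With a realistic corpus, one would compare text-only, image-description-only,
# and fused representations to measure the contribution of each modality.
print(pipeline.predict(["great goal today a crowd cheering in a stadium"]))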

Keywords


Author profiling, multimodal information, natural language processing, text classification
