Dialectones: Finding Statistically Significant Dialectal Boundaries Using Twitter Data

Carlos A. Rodriguez-Diaz, Sergio Jimenez, George Dueñas, Johnatan Estiven Bonilla, Alexander Gelbukh

Abstract


Most NLP applications assume that a particular language is homogeneous across the regions where it is spoken. However, each language varies considerably throughout its geographical distribution. To make NLP sensitive to dialects, a reliable, representative, and up-to-date source of information that quantitatively represents such geographical variation is needed. However, current approaches have drawbacks such as the need for parameters, the disregard of geographical coordinates in the analysis, and the use of linguistic alternations that presuppose the existence of specific dialectal varieties. Detection of ``ecotones'' is an analogous problem in ecology that focuses on identifying boundaries, rather than regions, in ecosystems, which facilitates the construction of statistical tests. We adapted the concept of ``ecotone'' to ``dialectone'' for the detection of dialectal boundaries using two non-parametric statistical tests: the Hilbert-Schmidt independence criterion (HSIC) and the Wilcoxon signed-rank test. The proposed method was applied to a large corpus of Spanish tweets produced in 160 locations in Colombia through the analysis of unigram features. The resulting dialectones proved to be meaningful, although difficult to compare against the regions identified by other authors using classical dialectometry. We conclude that the automatic detection of dialectones is a convenient alternative to classical methods in dialectometry and a potential source of information for automatic language applications.
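The sketch below is not the authors' implementation; it is a minimal illustration of the two non-parametric tests named in the abstract, applied to toy unigram frequency data for two hypothetical neighbouring locations. The variable names (`freqs_a`, `freqs_b`, `loc`) and the RBF kernel bandwidth are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of the two non-parametric tests mentioned in the abstract,
# applied to synthetic per-location unigram relative frequencies.
import numpy as np
from scipy.stats import wilcoxon
from scipy.spatial.distance import pdist, squareform

def hsic(X, Y, sigma=1.0):
    """Biased empirical estimate of the Hilbert-Schmidt independence
    criterion with RBF kernels: HSIC = trace(K H L H) / (n - 1)^2."""
    n = X.shape[0]
    K = np.exp(-squareform(pdist(X, 'sqeuclidean')) / (2 * sigma ** 2))
    L = np.exp(-squareform(pdist(Y, 'sqeuclidean')) / (2 * sigma ** 2))
    H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

# Toy data: unigram relative frequencies (columns = vocabulary items)
# sampled at two adjacent locations along a hypothetical transect.
rng = np.random.default_rng(0)
freqs_a = rng.dirichlet(np.ones(20), size=30)          # location A
freqs_b = rng.dirichlet(np.ones(20) * 1.5, size=30)    # location B

# Wilcoxon signed-rank test on the paired mean unigram frequencies of the
# two locations: a low p-value suggests a lexical shift, i.e. a candidate
# dialectal boundary between them.
stat, p_value = wilcoxon(freqs_a.mean(axis=0), freqs_b.mean(axis=0))
print(f"Wilcoxon signed-rank: statistic={stat:.3f}, p={p_value:.4f}")

# HSIC between linguistic features and a location indicator: a larger value
# indicates stronger statistical dependence between usage and geography.
X = np.vstack([freqs_a, freqs_b])
loc = np.concatenate([np.zeros(30), np.ones(30)]).reshape(-1, 1)
print(f"HSIC(features, location) = {hsic(X, loc):.5f}")
```

In practice the bandwidth `sigma` and the choice of feature representation would need tuning; the snippet only shows how the two statistics could be computed for a single pair of locations.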

Keywords


Dialectometry, nonparametric method, corpus-based dialectometry, Hilbert-Schmidt independence criterion, Wilcoxon signed-rank test, ecotone, dialectone
