Word Embeddings: A Comprehensive Survey

Authors

  • Alexandr Pak, Institute of Informational and Computational Technologies
  • Atabay Ziyaden
  • Timur Saparov, Institute of Informational and Computational Technologies
  • Iskander Akhmetov, Kazakh-British Technical University
  • Alexander Gelbukh, Instituto Politécnico Nacional

DOI:

https://doi.org/10.13053/cys-28-4-5225

Keywords:

Language Models, Distributional Semantics, Word Embeddings, Natural Language Processing, Deep Learning

Abstract

This article is a systematic review of studies in the area of word embeddings, with an emphasis on classical matrix-factorization techniques and contemporary neural word-embedding algorithms such as Word2Vec, GloVe, and BERT. The efficiency and effectiveness of these methods in capturing semantic and lexical relationships are evaluated in detail, together with an analysis of the topology of these techniques. In addition, the surveyed approach demonstrates a model accuracy of 77%, which is 3% below the best human performance. The study also highlights weaknesses of some models, such as BERT, that lead to unrealistically high accuracy due to spurious correlations in the datasets. We identify three bottlenecks for the further development of NLP algorithms: the assimilation of inductive bias, the embedding of common sense, and the generalization problem. The outcomes of this research help to enhance the robustness and applicability of word embeddings in natural language processing tasks.
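
The following sketch is not taken from the paper; it is a minimal, self-contained illustration of the classical matrix-factorization family that the abstract contrasts with Word2Vec, GloVe, and BERT. It builds a toy PPMI-weighted word-word co-occurrence matrix and factorizes it with truncated SVD to obtain dense word vectors; the vocabulary, the counts, and the `cosine` helper are invented here purely for illustration.

```python
# Minimal sketch (not from the paper): count-based word embeddings via
# PPMI weighting + truncated SVD on a toy co-occurrence matrix.
import numpy as np

vocab = ["king", "queen", "man", "woman", "apple"]
# Symmetric toy co-occurrence counts; rows/columns follow `vocab`.
C = np.array([
    [0, 8, 6, 2, 0],
    [8, 0, 2, 6, 0],
    [6, 2, 0, 5, 1],
    [2, 6, 5, 0, 1],
    [0, 0, 1, 1, 0],
], dtype=float)

# PPMI weighting, a common preprocessing step before factorization.
total = C.sum()
row = C.sum(axis=1, keepdims=True)
col = C.sum(axis=0, keepdims=True)
with np.errstate(divide="ignore", invalid="ignore"):
    pmi = np.log((C * total) / (row @ col))
ppmi = np.maximum(pmi, 0.0)
ppmi[~np.isfinite(ppmi)] = 0.0

# Truncated SVD: keep the top-k singular directions as dense word vectors.
U, S, Vt = np.linalg.svd(ppmi)
k = 2
embeddings = U[:, :k] * S[:k]

def cosine(a, b):
    # Cosine similarity between two word vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Print similarities of a couple of word pairs in the learned toy space.
w = dict(zip(vocab, embeddings))
print("king~queen:", cosine(w["king"], w["queen"]))
print("king~apple:", cosine(w["king"], w["apple"]))
```

Predictive models such as Word2Vec or GloVe replace the explicit factorization above with vectors learned by optimizing a training objective over a corpus, but the resulting representations live in a comparable dense vector space.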

Published

2024-12-03

Section

Articles