Evaluating the Performance of Large Language Models for Spanish Language in Undergraduate Admissions Exams

Authors

  • Sabino Miranda
  • Obdulia Pichardo-Lagunas
  • Bella Martinez-Seis
  • Pierre Baldi

DOI:

https://doi.org/10.13053/cys-27-4-4790

Keywords:

Large language models, ChatGPT, BARD, undergraduate admissions exams

Abstract

This study evaluates the performance of large language models, specifically GPT-3.5 and BARD (supported by the Gemini Pro model), on the undergraduate admissions exams administered by the National Polytechnic Institute (IPN) in Mexico. The exams cover Engineering/Mathematical and Physical Sciences, Biological and Medical Sciences, and Social and Administrative Sciences. Both models demonstrated proficiency, exceeding the minimum acceptance scores of the respective academic programs, by up to 75% for some programs. GPT-3.5 outperformed BARD in Mathematics and Physics, while BARD performed better in History and on questions involving factual information. Overall, GPT-3.5 marginally surpassed BARD, with scores of 60.94% and 60.42%, respectively.
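
To illustrate how an exam-based evaluation like this could be scored, the following minimal Python sketch computes overall and per-subject accuracy from a model's multiple-choice answers. It is not the authors' pipeline: the ExamItem type, its field names, the accuracy helpers, and the demo data are all assumptions introduced here for illustration.

    # Hypothetical sketch (not the authors' code): scoring a multiple-choice
    # admissions exam answered by a language model.
    from __future__ import annotations
    from dataclasses import dataclass

    @dataclass
    class ExamItem:
        subject: str        # e.g. "Mathematics", "Physics", "History"
        model_choice: str   # option letter returned by the model, e.g. "C"
        answer_key: str     # correct option letter

    def accuracy(items: list[ExamItem]) -> float:
        """Percentage of items where the model's choice matches the answer key."""
        if not items:
            return 0.0
        correct = sum(1 for it in items if it.model_choice == it.answer_key)
        return 100.0 * correct / len(items)

    def accuracy_by_subject(items: list[ExamItem]) -> dict[str, float]:
        """Per-subject accuracy, useful for comparing strengths (e.g. Math vs. History)."""
        subjects = {it.subject for it in items}
        return {s: accuracy([it for it in items if it.subject == s]) for s in subjects}

    if __name__ == "__main__":
        demo = [
            ExamItem("Mathematics", "A", "A"),
            ExamItem("Physics", "B", "C"),
            ExamItem("History", "D", "D"),
        ]
        print(f"Overall: {accuracy(demo):.2f}%")   # e.g. Overall: 66.67%
        print(accuracy_by_subject(demo))

A per-subject breakdown of this kind is what would surface contrasts such as GPT-3.5 doing better in Mathematics and Physics while BARD does better in History.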

Published

2023-12-27

Issue

Vol. 27 No. 4 (2023)

Section

Articles