The Impact of Allowing ChatGPT on Responses to Different Question Types in a Mid-Term Math Exam

  • László Bognár, University of Dunaújváros, Institute of Computer Engineering, Department of Mathematics
  • Antal Joós, University of Dunaújváros, Institute of Computer Engineering, Department of Mathematics
Keywords: ChatGPT, mathematics education, chatbot

Abstract

This study investigates the impact of ChatGPT on student performance in mid-term math exams, focusing on differences in scores across various types of test questions. The findings reveal that students using ChatGPT exhibited significantly lower average scores compared to their non-GPT counterparts, with more erratic performance patterns. In particular, ChatGPT users struggled with complex mathematical operations, such as matrix inverses and vector multiplications. Both ChatGPT and Copilot displayed similar levels of consistency, occasionally providing incorrect or mixed answers, which may have contributed to the lower performance of GPT users.

The study suggests that inadequate preparation and unfamiliarity with using GPT during exams could also have played a role in these results. These findings raise important questions about the integration of AI tools in education, particularly in subjects like mathematics, where precision is essential. Future research should explore optimal ways to integrate AI tools like ChatGPT into learning environments to enhance, rather than hinder, academic performance.

Published
2025-03-01
How to Cite
Bognár, L., & Joós, A. (2025). The Impact of Allowing ChatGPT on Responses to Different Question Types in a Mid-Term Math Exam. Dunakavics, 13(3), 5-14. https://doi.org/10.63684/dk.2025.3.01
Section
Articles