Assessment Study of ChatGPT-3.5’s Performance on the Final Polish Medical Examination: Accuracy in Answering 980 Questions
Published: 2024-08-16
Volume: 12
Issue: 16
Page: 1637
ISSN: 2227-9032
Container-title: Healthcare
Short-container-title: Healthcare
Language: en
Author:
Siebielec Julia 1, Ordak Michal 1 (ORCID), Oskroba Agata 1, Dworakowska Anna 1 (ORCID), Bujalska-Zadrozny Magdalena 1
Affiliation:
1. Department of Pharmacotherapy and Pharmaceutical Care, Faculty of Pharmacy, Medical University of Warsaw, 02-091 Warsaw, Poland
Abstract
Background/Objectives: The use of artificial intelligence (AI) in education is growing rapidly, and models such as ChatGPT show potential for enhancing medical education. In Poland, to obtain a medical diploma, candidates must pass the Medical Final Examination, which consists of 200 single-best-answer questions, is administered in Polish, and assesses students’ comprehensive medical knowledge and readiness for clinical practice. The aim of this study was to determine how ChatGPT-3.5 handles the questions included in this exam. Methods: This study analyzed 980 questions from five examination sessions of the Medical Final Examination conducted by the Medical Examination Center in the years 2022–2024. The analysis took into account the field of medicine, the difficulty index of each question, and the question type, namely theoretical versus case-study questions. Results: The average correct answer rate achieved by ChatGPT across the five examination sessions was approximately 60% and was lower (p < 0.001) than the average score achieved by the examinees. The lowest percentage of correct answers was in hematology (42.1%), and the highest was in endocrinology (78.6%). The difficulty index of the questions showed a statistically significant correlation with the correctness of the answers (p = 0.04): questions that ChatGPT-3.5 answered incorrectly also had a lower (p < 0.001) percentage of correct responses among examinees. The type of question did not significantly affect the correctness of the answers (p = 0.46). Conclusions: This study indicates that ChatGPT-3.5 can be an effective aid in preparing for the final medical exam, but its results should be interpreted cautiously. Further verification of answer correctness using various AI tools is recommended.
Cited by: 1 article.