Large language models for generating medical examinations: systematic review

Authors:

Artsi Yaara1, Sorin Vera2, Konen Eli2, Glicksberg Benjamin S.3, Nadkarni Girish3, Klang Eyal4

Affiliation:

1. Azrieli Faculty of Medicine, Bar-Ilan University

2. Department of Diagnostic Imaging, Chaim Sheba Medical Center

3. Division of Data-Driven and Digital Medicine (D3M), Icahn School of Medicine at Mount Sinai

4. The Charles Bronfman Institute of Personalized Medicine, Icahn School of Medicine at Mount Sinai

Abstract

Background

Writing multiple choice questions (MCQs) for medical exams is challenging: it requires extensive medical knowledge, time, and effort from medical educators. This systematic review focuses on the application of large language models (LLMs) to generating medical MCQs.

Methods

The authors searched MEDLINE for studies published up to November 2023. Search terms focused on LLM-generated MCQs for medical examinations.

Results

Overall, eight studies published between April 2023 and October 2023 were included. Six studies used ChatGPT 3.5, while two employed GPT-4. Five studies showed that LLMs can produce competent questions valid for medical exams. Three studies used LLMs to write medical questions but did not evaluate the validity of the questions. One study conducted a comparative analysis of different models, and another compared LLM-generated questions with those written by humans. All studies presented some faulty questions that were deemed inappropriate for medical exams, and some questions required additional modification in order to qualify.

Conclusions

LLMs can be used to write MCQs for medical examinations, but their limitations cannot be ignored. Further study in this field is essential, and more conclusive evidence is needed. Until then, LLMs may serve as a supplementary tool for writing medical examinations.
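The review does not publish the exact prompts the included studies used. As a purely illustrative sketch, the workflow those studies describe (prompt an LLM for an exam-style question, then have an educator screen the output) might begin with a prompt builder like the hypothetical function below; the function name, wording, and parameters are assumptions, not taken from any reviewed study.

```python
def build_mcq_prompt(topic: str, n_options: int = 5) -> str:
    """Compose a plain-text instruction asking a chat-style LLM to
    draft one exam-style multiple choice question on a given topic.
    The prompt wording here is illustrative only."""
    return (
        f"Write one multiple choice question for a medical licensing exam "
        f"on the topic of {topic}. Provide {n_options} lettered answer "
        f"options, exactly one of which is correct, and state the correct "
        f"answer with a one-sentence explanation."
    )


# Build a prompt; in the reviewed studies the resulting text would be
# sent to a model such as ChatGPT 3.5 or GPT-4, and the returned
# question would be reviewed by a medical educator before exam use.
prompt = build_mcq_prompt("community-acquired pneumonia")
print(prompt)
```

The educator-in-the-loop step matters: the review found that every included study produced at least some faulty questions, so raw model output should not reach an exam unscreened.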

Publisher

Research Square Platform LLC

