Evaluating GPT-4-based ChatGPT's clinical potential on the NEJM quiz

Authors:

Ueda Daiju, Walston Shannon L., Matsumoto Toshimasa, Deguchi Ryo, Tatekawa Hiroyuki, Miki Yukio

Abstract

Background: GPT-4-based ChatGPT demonstrates significant potential across many industries; however, its clinical applications remain largely unexplored.

Methods: We used the New England Journal of Medicine (NEJM) quiz "Image Challenge" from October 2021 to March 2023 to assess ChatGPT's clinical capabilities. The quiz, designed for healthcare professionals, tests the ability to analyze clinical scenarios and make appropriate decisions. After excluding quizzes that could not be answered without images, we evaluated ChatGPT's accuracy by question type and medical specialty. ChatGPT was first asked to answer each question without the five multiple-choice options, and then again after being given the options.

Results: After excluding 16 image-based quizzes, ChatGPT achieved 87% (54/62) accuracy without the choices and 97% (60/62) accuracy with the choices. By question type, ChatGPT performed best in the Diagnosis category, reaching 89% (49/55) accuracy without choices and 98% (54/55) with choices. Although the other categories contained fewer cases, performance remained consistent. ChatGPT performed strongly across most medical specialties; Genetics had the lowest accuracy, at 67% (2/3).

Conclusion: ChatGPT shows potential for diagnostic applications, suggesting it could support healthcare professionals in making differential diagnoses and enhance AI-driven healthcare.
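The Methods section describes a two-stage querying protocol: each quiz question is posed to ChatGPT first without the five multiple-choice options and then with them, and accuracy is tallied separately for each stage. The sketch below illustrates that protocol using the OpenAI Python client; the model name, prompt wording, quiz data structure, and substring-based scoring are assumptions added for illustration, not the authors' exact setup.

# Minimal sketch of the two-stage evaluation protocol, assuming the OpenAI
# Python client and a hypothetical list of quiz records (question text,
# multiple-choice options, correct answer). NEJM quiz content is not reproduced.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical quiz data; replace with the actual curated question set.
quizzes = [
    {
        "question": "A 45-year-old presents with ... What is the most likely diagnosis?",
        "options": ["A. ...", "B. ...", "C. ...", "D. ...", "E. ..."],
        "answer": "B",
    },
]

def ask(prompt: str) -> str:
    """Send a single prompt to the chat model and return its text reply."""
    response = client.chat.completions.create(
        model="gpt-4",  # assumption: GPT-4-based ChatGPT as in the study
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

correct_open, correct_mc = 0, 0
for quiz in quizzes:
    # Stage 1: free-text answer without the five options.
    open_reply = ask(quiz["question"])
    # Stage 2: the same question with the multiple-choice options appended.
    mc_prompt = quiz["question"] + "\nChoose one:\n" + "\n".join(quiz["options"])
    mc_reply = ask(mc_prompt)

    # Naive substring check for illustration only; the study's actual
    # answer-judging procedure is not reproduced here.
    if quiz["answer"] in open_reply:
        correct_open += 1
    if quiz["answer"] in mc_reply:
        correct_mc += 1

n = len(quizzes)
print(f"Accuracy without choices: {correct_open}/{n}")
print(f"Accuracy with choices:    {correct_mc}/{n}")

Run over the full question set, the two tallies correspond to the "without choices" and "with choices" accuracy rates reported in the Results; per-category figures follow from grouping the same tallies by question type or specialty.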

Funder

Iida Group Holdings Co.,Ltd.

Publisher

Springer Science and Business Media LLC

