Enhancing Health Literacy: Evaluating the Readability of Patient Handouts Revised by ChatGPT's Large Language Model

Authors:

Austin R. Swisher¹, Arthur W. Wu², Gene C. Liu², Matthew K. Lee², Taylor R. Carle², Dennis M. Tang²

Affiliations:

1. Department of Otolaryngology–Head and Neck Surgery, Mayo Clinic, Phoenix, Arizona, USA

2. Division of Otolaryngology–Head and Neck Surgery, Cedars‐Sinai, Los Angeles, California, USA

Abstract

Objective
To use an artificial intelligence (AI)‐powered large language model (LLM) to improve the readability of patient handouts.

Study Design
Review of online material modified by AI.

Setting
Academic center.

Methods
Five handouts obtained from the American Rhinologic Society (ARS) and the American Academy of Facial Plastic and Reconstructive Surgery (AAFPRS) websites were assessed using validated readability metrics. The handouts were input into OpenAI's ChatGPT‐4 with the prompt: "Rewrite the following at a 6th‐grade reading level." The understandability and actionability of both the original and LLM‐revised versions were evaluated using the Patient Education Materials Assessment Tool (PEMAT). Results were compared using Wilcoxon rank‐sum tests.

Results
The mean readability scores of the standard (ARS, AAFPRS) materials corresponded to "difficult," with reading categories ranging from high school to university grade level. Conversely, the LLM‐revised handouts had an average seventh‐grade reading level. The LLM‐revised handouts showed better readability on nearly all metrics tested: Flesch‐Kincaid Reading Ease (70.8 vs 43.9; P < .05), Gunning Fog Score (10.2 vs 14.42; P < .05), Simple Measure of Gobbledygook (9.9 vs 13.1; P < .05), Coleman‐Liau Index (8.8 vs 12.6; P < .05), and Automated Readability Index (8.2 vs 10.7; P = .06). PEMAT scores were significantly higher for the LLM‐revised handouts in understandability (91% vs 74%; P < .05), with similar actionability (42% vs 34%; P = .15), compared with the standard materials.

Conclusion
Patient‐facing handouts can be augmented by ChatGPT with simple prompting to tailor information with improved readability. This study demonstrates the utility of LLMs in rewriting patient handouts; they may serve as a tool to help optimize education materials.

Level of Evidence
Level VI.
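The readability metrics compared above are formula-based scores computed from sentence, word, and syllable counts. As a minimal illustration (not the authors' code or tooling), the Flesch Reading Ease formula — 206.835 − 1.015 × (words/sentences) − 84.6 × (syllables/words) — can be sketched as follows; the syllable counter is a naive vowel-group heuristic, so scores are approximate:

```python
import re

def count_syllables(word: str) -> int:
    # Naive heuristic: drop a silent trailing 'e', then count
    # groups of consecutive vowels (including 'y').
    word = word.lower()
    if word.endswith("e") and len(word) > 2:
        word = word[:-1]
    vowel_groups = re.findall(r"[aeiouy]+", word)
    return max(1, len(vowel_groups))

def flesch_reading_ease(text: str) -> float:
    # Flesch Reading Ease: higher scores mean easier text
    # (roughly 70-80 corresponds to a 7th-grade level).
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))
```

A dedicated library would give more accurate syllable counts, but the formula itself is the one underlying the reported scores: short sentences of short words score high (easy), long polysyllabic prose scores low (difficult).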

Publisher

Wiley

