Abstract
Background
Large Language Models (LLMs) show promise in medical diagnosis, but their performance varies with prompting. Recent studies suggest that modifying prompts may enhance diagnostic capabilities.

Objective
This study aimed to test whether a prompting approach that aligns with general clinical reasoning methodology, specifically, separating the process of summarizing clinical information from that of making diagnoses based on the summary rather than one-step processing, can enhance an LLM's medical diagnostic capabilities.

Methods
A total of 322 quiz questions from Radiology's Diagnosis Please cases (1998-2023) were used. We employed Claude 3.5 Sonnet, a state-of-the-art LLM, to compare three approaches: 1) a conventional zero-shot chain-of-thought prompt, as a baseline; 2) a two-step approach, in which the LLM first organizes the patient history and imaging findings and then provides diagnoses; and 3) a summary-only approach, using only the LLM-generated summary for diagnosis.

Results
The two-step approach significantly outperformed both the baseline and summary-only methods in diagnostic accuracy, as determined by McNemar tests. Primary diagnosis accuracy was 60.6% for the two-step approach, compared with 56.5% for the baseline (p=0.042) and 56.3% for summary-only (p=0.035). For the top three diagnoses, accuracy was 70.5%, 66.5%, and 65.5%, respectively (p=0.005 vs. baseline; p=0.008 vs. summary-only). No significant differences were observed between the baseline and summary-only approaches.

Conclusion
Our results indicate that a structured clinical reasoning approach enhances an LLM's diagnostic accuracy. This method shows potential as a valuable tool for deriving diagnoses from free-text clinical information. The approach aligns well with established clinical reasoning processes, suggesting its potential applicability in real-world clinical settings.
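The two-step approach described above can be sketched as a simple prompting pipeline. This is a minimal illustration, not the authors' code: `call_llm` is a hypothetical stand-in for a call to Claude 3.5 Sonnet (stubbed here with canned responses so the control flow runs offline), and the exact prompt wording is assumed.

```python
# Sketch of the two-step diagnostic prompting approach (illustrative only).
# `call_llm` is a hypothetical placeholder for a real LLM API client;
# the canned returns below exist only so the pipeline is runnable offline.

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real API client in practice."""
    if prompt.startswith("Summarize"):
        return "Summary: middle-aged patient with headache; sellar mass on MRI."
    return "1. Pituitary adenoma\n2. Craniopharyngioma\n3. Rathke cleft cyst"

def two_step_diagnosis(case_text: str) -> dict:
    # Step 1: organize the patient history and imaging findings into a summary.
    summary = call_llm(
        "Summarize the patient history and imaging findings:\n" + case_text
    )
    # Step 2: derive the top three diagnoses from the structured summary
    # (whether the original case text is also passed here is an assumption).
    diagnoses = call_llm(
        "Given the following summary, list the top three diagnoses.\n"
        f"Summary:\n{summary}"
    )
    return {"summary": summary, "diagnoses": diagnoses}
```

The summary-only comparison arm would differ only in that a fresh call receives the generated summary without the original case text, while the baseline collapses both steps into a single zero-shot chain-of-thought prompt.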
Publisher
Cold Spring Harbor Laboratory