Authors:
Omar Mahmud, Soffer Shelly, Charney Alexander W., Landi Isotta, Nadkarni Girish N., Klang Eyal
Abstract
Background: With their unmatched ability to interpret and engage with human language and context, large language models (LLMs) hint at the potential to bridge AI and human cognitive processes. This review explores the current application of LLMs, such as ChatGPT, in the field of psychiatry.
Methods: We followed PRISMA guidelines and searched PubMed, Embase, Web of Science, and Scopus up to March 2024.
Results: From 771 retrieved articles, we included 16 that directly examine LLMs' use in psychiatry. LLMs, particularly ChatGPT and GPT-4, showed diverse applications in clinical reasoning, social media, and education within psychiatry. They can assist in diagnosing mental health issues, managing depression, evaluating suicide risk, and supporting education in the field. However, our review also points out their limitations, such as difficulties with complex cases and potential underestimation of suicide risks.
Conclusion: Early research in psychiatry reveals LLMs' versatile applications, from diagnostic support to educational roles. Given the rapid pace of advancement, future investigations are poised to explore the extent to which these models might redefine traditional roles in mental health care.