Abstract
Objectives
To define requirements that condition trust in artificial intelligence (AI) as clinical decision support in radiology from the perspective of various stakeholders and to explore ways to fulfil these requirements.
Methods
Semi-structured interviews were conducted with twenty-five respondents—nineteen directly involved in the development, implementation, or use of AI applications in radiology and six working with AI in other areas of healthcare. We designed the questions to explore three themes: development and use of AI, professional decision-making, and management and organizational procedures connected to AI. The transcribed interviews were analysed in an iterative coding process from open coding to theoretically informed thematic coding.
Results
We identified four aspects of trust that relate to reliability, transparency, quality verification, and inter-organizational compatibility. These aspects fall under the categories of substantial and procedural requirements.
Conclusions
Development of appropriate levels of trust in AI in healthcare is complex and encompasses multiple dimensions of requirements. Various stakeholders will have to be involved in developing AI solutions for healthcare and radiology to fulfil these requirements.
Clinical relevance statement
For AI to achieve advances in radiology, it must be given the opportunity to support, rather than replace, human expertise. Support requires trust. Identifying the aspects and conditions of trust enables the development of AI implementation strategies that advance the field.
Key Points
• Fostering appropriate levels of trust in AI in healthcare requires fulfilling both procedural and substantial requirements, which are conditioned on aspects related to reliability, transparency, quality verification, and inter-organizational compatibility.
• Creating the conditions for trust to emerge requires the involvement of various stakeholders, who must compensate for the problem’s inherent complexity by finding and promoting well-defined solutions.