Developing a deep learning model to predict the breast implant texture types with ultrasonography image: a feasibility study (Preprint)

Authors:

Kim Ho Heon, Jung Won Chan, Pi Kyungran, Lee Angela Soeun, Kim Min Soo, Kim Hye Jin, Kim Jae Hong

Abstract

BACKGROUND

Breast implants, including textured variants, are widely used in aesthetic and reconstructive mammoplasty. The textured shell type, however, has been identified as a possible carcinogenic factor for lymphoma, specifically breast implant-associated anaplastic large cell lymphoma (BIA-ALCL). Identifying the texture type of an implant is therefore critical to the diagnosis of BIA-ALCL, yet distinguishing the shell type can be difficult because of unreliable patient recall or lost medical records. Ultrasonography offers an alternative approach, but it is also limited in terms of quantitative assessment.

OBJECTIVE

The objective of this study is to determine the feasibility of using a deep learning model to classify the textured shell type of breast implants and to make robust predictions on ultrasonography images drawn from heterogeneous sources.

METHODS

A total of 19,502 breast implant images were retrospectively collected from heterogeneous sources: images acquired with Canon (D1) and GE (D2) ultrasound systems, images of ruptured implants (D3), images without implants (D4), and publicly available images (D5). A ResNet-50 model was trained on the Canon images (D1), and its performance on D1 was evaluated with stratified 5-fold cross-validation. External validation was additionally conducted on D2 and D5 using the area under the receiver operating characteristic curve (AUROC) and the area under the precision-recall curve (PRAUC). To identify the pixels significant for classification, Grad-CAM was used to estimate each pixel's contribution; the least contributing pixels were then masked in 10% increments, up to a maximum of 100%, and the AUROC and PRAUC were recalculated on the masked images. To assess the robustness of the model under uncertainty, Shannon entropy was calculated for four image groups: Canon (D1), GE (D2), ruptured implants (D3), and images without implants (D4).
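As an illustration of the masking procedure described above, the following is a minimal sketch in PyTorch rather than the authors' implementation. It assumes a trained ResNet-50 binary classifier (`model`), per-image Grad-CAM relevance maps precomputed with any Grad-CAM implementation (`heatmaps`), and scikit-learn for the metrics; the helper name `masked_metrics` and the data layout are hypothetical.

```python
import numpy as np
import torch
from sklearn.metrics import average_precision_score, roc_auc_score


@torch.no_grad()
def masked_metrics(model, images, labels, heatmaps, mask_percent):
    """Mask the lowest-contributing pixels and re-evaluate AUROC / PRAUC."""
    masked = []
    for img, cam in zip(images, heatmaps):              # img: (C, H, W), cam: (H, W) in [0, 1]
        cutoff = np.percentile(cam, mask_percent)       # contribution threshold
        keep = torch.from_numpy(cam >= cutoff).float()  # 1 = keep, 0 = mask
        masked.append(img * keep)                       # zero out the weakly contributing pixels
    probs = torch.softmax(model(torch.stack(masked)), dim=1)[:, 1].numpy()  # P(textured)
    return roc_auc_score(labels, probs), average_precision_score(labels, probs)


# Sweep the masking ratio from 10% to 100% in 10% steps, as in the study design:
# for p in range(10, 101, 10):
#     auroc, prauc = masked_metrics(model, images, labels, heatmaps, p)
```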

RESULTS

The deep learning model achieved an average AUROC of 0.98 and a PRAUC of 0.88 on the Canon dataset (D1). For images captured with GE devices (D2), the model achieved an AUROC of 0.985 and a PRAUC of 0.748, and for the publicly available dataset (D5) it achieved an AUROC of 0.909 and a PRAUC of 0.958. In the quantitative validation, the model maintained its PRAUC even when up to 90% of the least contributing pixels were masked, and the remaining pixels were located in the implant shell layers. Furthermore, prediction uncertainty increased in the order Canon (D1), GE (D2), ruptured implants (D3), and no implant (D4), with Shannon entropies of 0.066, 0.072, 0.371, and 0.777, respectively.
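For reference, the Shannon entropy values reported above can be computed from the model's softmax output as sketched below. This is not the authors' code, and the logarithm base (base 2 here, giving a maximum of 1 bit for a two-class problem) is an assumption, since the abstract does not specify it.

```python
import numpy as np


def shannon_entropy(probs: np.ndarray) -> np.ndarray:
    """Shannon entropy H = -sum(p * log2 p) over each row of class probabilities."""
    p = np.clip(probs, 1e-12, 1.0)          # guard against log(0)
    return -(p * np.log2(p)).sum(axis=1)


# A confident prediction yields low entropy; an ambiguous one approaches 1 bit.
print(shannon_entropy(np.array([[0.99, 0.01],     # ~0.08
                                [0.50, 0.50]])))  # 1.0
```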

CONCLUSIONS

We have demonstrated the feasibility of using deep learning to predict the shell type of breast implants. With this approach, the textured shell type can be identified quantitatively, supporting the first step in the diagnosis of BIA-ALCL.

Publisher

JMIR Publications Inc.
