Dual uncertainty-guided multi-model pseudo-label learning for semi-supervised medical image segmentation
Published: 2024
Volume: 21
Issue: 2
Pages: 2212-2232
ISSN: 1551-0018
Container-title: Mathematical Biosciences and Engineering
Short-container-title: MBE
Author: Qiu Zhanhong, Gan Weiyan, Yang Zhi, Zhou Ran, Gan Haitao
Abstract
Semi-supervised medical image segmentation is currently a highly active research area. Pseudo-label learning is a traditional semi-supervised learning method that acquires additional knowledge by generating pseudo-labels for unlabeled data. However, this method depends on the quality of the pseudo-labels and can lead to an unstable training process due to differences between samples. Moreover, generating pseudo-labels directly from the model itself accelerates noise accumulation, resulting in low-confidence pseudo-labels. To address these issues, we propose a dual uncertainty-guided multi-model pseudo-label learning framework (DUMM) for semi-supervised medical image segmentation. The framework consists of two main parts: the first is a sample selection module based on sample-level uncertainty (SUS), designed to achieve a more stable and smoother training process; the second is a multi-model pseudo-label generation module based on pixel-level uncertainty (PUM), designed to obtain high-quality pseudo-labels. We conducted a series of experiments on two public medical datasets, ACDC2017 and ISIC2018. Compared with the baseline, DUMM improved the Dice score by 6.5% and 4.0% on the two datasets, respectively, and showed a clear advantage over the comparison methods, validating the feasibility and applicability of our approach.
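The abstract's pixel-level uncertainty idea can be sketched in a minimal, illustrative form: average the softmax outputs of several models, score each pixel by predictive entropy, and mask out high-uncertainty pixels from the pseudo-label. This is an assumption-based sketch, not the authors' DUMM implementation; the entropy measure, the threshold, and all function names here are hypothetical.

```python
import numpy as np

def pixel_uncertainty(prob_maps):
    """Per-pixel predictive entropy of the mean softmax over models.

    prob_maps: array of shape (n_models, n_classes, H, W) with softmax outputs.
    Returns an (H, W) map; higher values mean more uncertain pixels.
    """
    mean_prob = prob_maps.mean(axis=0)                              # (C, H, W)
    return -(mean_prob * np.log(mean_prob + 1e-8)).sum(axis=0)      # (H, W)

def make_pseudo_label(prob_maps, pixel_thresh):
    """Ensemble pseudo-label; pixels above the entropy threshold get -1
    (a conventional ignore index for segmentation losses)."""
    mean_prob = prob_maps.mean(axis=0)
    label = mean_prob.argmax(axis=0)                                # (H, W)
    label[pixel_uncertainty(prob_maps) > pixel_thresh] = -1
    return label

def sample_uncertainty(prob_maps):
    """Sample-level score: mean pixel entropy over the whole image.
    Low-scoring ('easy') samples could be selected first for training."""
    return float(pixel_uncertainty(prob_maps).mean())
```

Under this sketch, a sample-selection step in the spirit of SUS would rank unlabeled images by `sample_uncertainty` and feed the most confident ones into pseudo-label training first, while `make_pseudo_label` plays the role of a PUM-style filter.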
Publisher
American Institute of Mathematical Sciences (AIMS)
Subject
Applied Mathematics, Computational Mathematics, General Agricultural and Biological Sciences, Modeling and Simulation, General Medicine
References (44 articles)
1. O. Ronneberger, P. Fischer, T. Brox, U-net: Convolutional networks for biomedical image segmentation, in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015: 18th International Conference, Munich, Germany, October 5–9, 2015, Proceedings, Part III 18, (2015), 234–241. https://doi.org/10.1007/978-3-319-24574-4_28
2. F. Milletari, N. Navab, S. Ahmadi, V-net: Fully convolutional neural networks for volumetric medical image segmentation, in 2016 Fourth International Conference on 3D Vision (3DV), (2016), 565–571. https://doi.org/10.1109/3DV.2016.79
3. L. Qiu, H. Ren, RSegNet: A joint learning framework for deformable registration and segmentation, IEEE Trans. Autom. Sci. Eng., 19 (2021), 2499–2513. https://doi.org/10.1109/TASE.2021.3087868
4. W. Kim, A. Kanezaki, M. Tanaka, Unsupervised learning of image segmentation based on differentiable feature clustering, IEEE Trans. Image Process., 29 (2020), 8055–8068. https://doi.org/10.1109/TIP.2020.3011269
5. W. Lei, Q. Su, T. Jiang, R. Gu, N. Wang, X. Liu, et al., One-shot weakly-supervised segmentation in 3D medical images, IEEE Trans. Med. Imaging, 43 (2024), 175–189. https://doi.org/10.1109/TMI.2023.3294975