Multi-Class Wound Classification via High and Low-Frequency Guidance Network
Published: 2023-12-01
Volume: 10
Issue: 12
Page: 1385
ISSN: 2306-5354
Container-title: Bioengineering
Language: en
Author:
Guo Xiuwen 1,2, Yi Weichao 1,2, Dong Liquan 1,2,3, Kong Lingqin 1,2,3, Liu Ming 1,2,3, Zhao Yuejin 1,2,3, Hui Mei 1,2, Chu Xuhong 1,2,3
Affiliation:
1. School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
2. Beijing Key Laboratory for Precision Optoelectronic Measurement Instrument and Technology, Beijing 100081, China
3. Yangtze Delta Region Academy of Beijing Institute of Technology, Jiaxing 314019, China
Abstract
Wound image classification is a crucial preprocessing step in many intelligent medical systems, e.g., online diagnosis and smart healthcare. Recently, Convolutional Neural Networks (CNNs) have been widely applied to the classification of wound images and have obtained promising performance. Unfortunately, classifying multiple wound types remains challenging due to the complexity and variety of wound images. Existing CNNs usually extract high- and low-frequency features at the same convolutional layer, which inevitably causes information loss and degrades classification accuracy. To this end, we propose a novel High and Low-frequency Guidance Network (HLG-Net) for multi-class wound classification. Specifically, HLG-Net contains two branches: a High-Frequency Network (HF-Net) and a Low-Frequency Network (LF-Net). We employ the pre-trained models ResNet and Res2Net as the feature backbone of HF-Net, which enables the network to capture the high-frequency details and texture information of wound images. To extract more low-frequency information, we utilize a Multi-Stream Dilation Convolution Residual Block (MSDCRB) as the backbone of LF-Net. Moreover, a fusion module is proposed to fully exploit the informative features at the end of these two separate feature-extraction branches and obtain the final classification result. Extensive experiments demonstrate that HLG-Net achieves maximum accuracies of 98.00%, 92.11%, and 82.61% in two-class, three-class, and four-class wound image classification, respectively, outperforming previous state-of-the-art methods.
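The abstract only sketches the two-branch layout, so the PyTorch snippet below is a minimal, hypothetical reading of it rather than the authors' implementation: the exact MSDCRB design, channel widths, dilation rates, and the concatenation-plus-linear fusion head are all assumptions, and a plain pre-trained ResNet-50 stands in for the paper's ResNet/Res2Net HF-Net backbone (Res2Net is not shipped with torchvision).

```python
# Hypothetical sketch of the two-branch HLG-Net layout described in the abstract.
# Module internals, channel widths, and the fusion strategy are assumptions,
# not the authors' released code.
import torch
import torch.nn as nn
import torchvision.models as models


class MSDCRB(nn.Module):
    """Assumed form of the Multi-Stream Dilation Convolution Residual Block:
    parallel 3x3 convolutions with different dilation rates enlarge the
    receptive field toward low-frequency (smooth, global) content, and a
    residual connection preserves the input."""

    def __init__(self, channels, dilations=(1, 2, 4)):
        super().__init__()
        self.streams = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        ])
        self.fuse = nn.Conv2d(channels * len(dilations), channels, 1)

    def forward(self, x):
        y = torch.cat([s(x) for s in self.streams], dim=1)
        return x + self.fuse(y)  # residual connection


class HLGNetSketch(nn.Module):
    """Two-branch classifier: a pre-trained ResNet supplies high-frequency
    texture features (HF-Net), a stack of MSDCRBs supplies low-frequency
    context (LF-Net), and a simple concatenation + linear head stands in
    for the paper's fusion module."""

    def __init__(self, num_classes=4, lf_channels=64, num_blocks=3):
        super().__init__()
        hf = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        self.hf_net = nn.Sequential(*list(hf.children())[:-1])  # drop fc; outputs pooled 2048-d features
        self.lf_stem = nn.Sequential(
            nn.Conv2d(3, lf_channels, 7, stride=2, padding=3, bias=False),
            nn.BatchNorm2d(lf_channels),
            nn.ReLU(inplace=True),
        )
        self.lf_net = nn.Sequential(*[MSDCRB(lf_channels) for _ in range(num_blocks)])
        self.lf_pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(2048 + lf_channels, num_classes)

    def forward(self, x):
        hf = self.hf_net(x).flatten(1)                               # high-frequency branch
        lf = self.lf_pool(self.lf_net(self.lf_stem(x))).flatten(1)   # low-frequency branch
        return self.classifier(torch.cat([hf, lf], dim=1))          # stand-in fusion


if __name__ == "__main__":
    logits = HLGNetSketch(num_classes=4)(torch.randn(2, 3, 224, 224))
    print(logits.shape)  # torch.Size([2, 4])
```

The intuition behind the split, as the abstract describes it, is that the pre-trained CNN backbone retains fine texture (high-frequency) cues, while the parallel dilated convolutions presumably enlarge the receptive field to capture smoother, global (low-frequency) structure before the two feature vectors are fused for classification.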
Funder
JCJQ Plan; National Natural Science Foundation of China; BIT Research and Innovation Promoting Project; National Key Research and Development Program of China
Cited by: 1 article