Annotate and retrieve in vivo images using hybrid self-organizing map
Published: 2023-10-31
ISSN: 0178-2789
Container-title: The Visual Computer
Language: en
Short-container-title: Vis Comput
Author:
Kaur Parminder, Malhi Avleen, Pannu Husanbir
Abstract
Multimodal retrieval has gained much attention lately owing to its effectiveness over uni-modal retrieval. For instance, visual features often under-constrain the description of an image in content-based retrieval; introducing another modality, such as collateral text, can bridge the semantic gap and make retrieval more effective. This article applies cross-modal fusion and retrieval to real in vivo gastrointestinal images and linguistic cues, since visual features alone are insufficient to describe the images and to assist gastroenterologists. A cross-modal information retrieval approach is therefore proposed that retrieves related images given text, and vice versa, while handling the heterogeneity gap between the modalities. The technique comprises two stages: (1) individual modality feature learning and (2) fusion of the two trained networks. In the first stage, two self-organizing maps (SOMs) are trained separately on images and texts, which are clustered in their respective SOMs by similarity. In the second (fusion) stage, the trained SOMs are integrated through an associative network to enable cross-modal retrieval. The underlying learning techniques of the associative network are Hebbian learning and Oja's rule (improved Hebbian learning). The proposed framework can annotate images with keywords and illustrate keywords with images, and it can be extended to incorporate more diverse modalities. Extensive experiments have been performed on real gastrointestinal images obtained from a gastroenterologist, with collateral keywords accompanying each image. The results demonstrate the efficacy of the algorithm and its value in supporting gastroenterologists in quick and pertinent decision making.
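The two-stage pipeline described in the abstract (per-modality SOM training, then an associative layer trained with Hebbian or Oja updates) can be sketched briefly in NumPy. The sketch below is illustrative only: the map sizes, feature dimensions, learning rates, and the soft activation used to couple the two maps are assumptions rather than the authors' exact formulation.

```python
import numpy as np

# Minimal sketch of the two-stage idea: (1) train one SOM per modality,
# (2) link their unit activations with a Hebbian/Oja-learned associative
# matrix for cross-modal retrieval. All hyper-parameters are illustrative.

class SOM:
    def __init__(self, rows, cols, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.weights = rng.normal(size=(rows * cols, dim))
        self.grid = np.array([(r, c) for r in range(rows) for c in range(cols)])

    def bmu(self, x):
        # Best-matching unit: index of the closest weight vector.
        return int(np.argmin(np.linalg.norm(self.weights - x, axis=1)))

    def train(self, data, epochs=20, lr=0.5, sigma=2.0):
        for t in range(epochs):
            lr_t = lr * (1 - t / epochs)
            sigma_t = sigma * (1 - t / epochs) + 1e-3
            for x in data:
                b = self.bmu(x)
                d = np.linalg.norm(self.grid - self.grid[b], axis=1)
                h = np.exp(-(d ** 2) / (2 * sigma_t ** 2))[:, None]
                self.weights += lr_t * h * (x - self.weights)

    def activation(self, x):
        # Soft activation over the map, used as input to the associative layer.
        a = np.exp(-np.linalg.norm(self.weights - x, axis=1))
        return a / a.sum()


def train_associative(img_som, txt_som, img_data, txt_data, lr=0.1, oja=True):
    """Learn a matrix W linking image-map activations to text-map activations
    for paired samples, using plain Hebbian or Oja's normalized Hebbian rule."""
    W = np.zeros((txt_som.weights.shape[0], img_som.weights.shape[0]))
    for xi, xt in zip(img_data, txt_data):
        a_img = img_som.activation(xi)          # presynaptic activity
        a_txt = txt_som.activation(xt)          # postsynaptic activity
        if oja:
            # Oja's rule keeps the weights bounded (improved Hebbian learning).
            W += lr * (np.outer(a_txt, a_img) - (a_txt ** 2)[:, None] * W)
        else:
            W += lr * np.outer(a_txt, a_img)    # classic Hebbian update
    return W


# Hypothetical usage with random stand-ins for image and text feature vectors.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    img_feats = rng.normal(size=(100, 64))      # e.g. visual descriptors
    txt_feats = rng.normal(size=(100, 32))      # e.g. keyword embeddings

    som_img, som_txt = SOM(5, 5, 64), SOM(5, 5, 32, seed=1)
    som_img.train(img_feats)
    som_txt.train(txt_feats)
    W = train_associative(som_img, som_txt, img_feats, txt_feats)

    # Image-to-text retrieval: project an image activation through W and
    # pick the most strongly associated text-map unit.
    query = img_feats[0]
    text_unit = int(np.argmax(W @ som_img.activation(query)))
    print("most associated text-map unit:", text_unit)
```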
Publisher
Springer Science and Business Media LLC
Subject
Computer Graphics and Computer-Aided Design, Computer Vision and Pattern Recognition, Software