Learning Video-Text Aligned Representations for Video Captioning

Author:

Shi Yaya1, Xu Haiyang2, Yuan Chunfeng3, Li Bing3, Hu Weiming4, Zha Zheng-Jun1

Affiliation:

1. School of Information Science and Technology, University of Science and Technology of China, Hefei, Anhui, China

2. Alibaba Group, Hangzhou, Zhejiang, China

3. NLPR, Institute of Automation, Chinese Academy of Sciences, Beijing, China

4. NLPR, Institute of Automation, Chinese Academy of Sciences, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, China; and CAS Center for Excellence in Brain Science and Intelligence Technology, Beijing, China

Abstract

Video captioning requires a model to understand video, align video with text, and generate text. Because of the semantic gap between vision and language, video-text alignment, which maps representations from the visual domain to the language domain, is a crucial step. However, existing methods often overlook this step, so the decoder must take visual representations directly as input, which increases the decoder's workload and limits its ability to generate semantically correct captions. In this paper, we propose a video-text alignment module with a retrieval unit and an alignment unit that learns video-text aligned representations for video captioning. Specifically, we first propose a retrieval unit that retrieves sentences as additional input; these sentences serve as semantic anchors between the visual scene and the language description. We then employ an alignment unit that takes the video and the retrieved sentences as input and aligns the representations of the two modalities in a shared semantic space. The resulting video-text aligned representations are used to generate semantically correct captions. Moreover, the retrieved sentences provide rich semantic concepts that help generate distinctive captions. Experiments on two public benchmarks, i.e., VATEX and MSR-VTT, demonstrate that our method outperforms state-of-the-art methods by a large margin. Qualitative analysis shows that our method generates correct and distinctive captions.
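The abstract gives no implementation details, so the following is only a minimal, hypothetical PyTorch sketch of the retrieval-then-alignment idea it describes. The class name, dimensions, the mean-pooled retrieval query, and the cross-attention alignment are all assumptions for illustration, not the paper's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VideoTextAlignment(nn.Module):
    """Hypothetical sketch of a retrieval unit + alignment unit (not the paper's code)."""

    def __init__(self, video_dim=1024, text_dim=768, shared_dim=512, top_k=5):
        super().__init__()
        self.top_k = top_k
        # Projections into an assumed shared semantic space.
        self.video_proj = nn.Linear(video_dim, shared_dim)
        self.text_proj = nn.Linear(text_dim, shared_dim)

    def retrieve(self, video_emb, corpus_emb):
        # Retrieval unit: rank a sentence bank by cosine similarity to the
        # pooled video embedding and keep the top-k sentences as anchors.
        v = F.normalize(self.video_proj(video_emb), dim=-1)   # (B, d)
        c = F.normalize(self.text_proj(corpus_emb), dim=-1)   # (N, d)
        topk = (v @ c.T).topk(self.top_k, dim=-1).indices     # (B, k)
        return corpus_emb[topk]                               # (B, k, text_dim)

    def forward(self, video_feats, corpus_emb):
        # video_feats: (B, T, video_dim) frame features;
        # corpus_emb:  (N, text_dim) precomputed sentence embeddings.
        anchors = self.retrieve(video_feats.mean(dim=1), corpus_emb)
        v = F.normalize(self.video_proj(video_feats), dim=-1)  # (B, T, d)
        t = F.normalize(self.text_proj(anchors), dim=-1)       # (B, k, d)
        # Alignment unit: cross-modal attention pulls each video token toward
        # the retrieved-sentence anchors in the shared semantic space.
        attn = torch.softmax(v @ t.transpose(1, 2), dim=-1)    # (B, T, k)
        return v + attn @ t   # aligned representations for the caption decoder
```

In the paper the aligned representations would feed a caption decoder; here they are simply returned, and the residual connection around the attention is one plausible design choice among several.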

Funder

National Key R&D program of China

Beijing Natural Science Foundation

Natural Science Foundation of China

Key Research Program of Frontier Sciences, CAS

University Synergy Innovation Program of Anhui Province

Science and Technology Service Network Initiative, CAS

Fundamental Research Funds for the Central Universities

Publisher

Association for Computing Machinery (ACM)

Subject

Computer Networks and Communications, Hardware and Architecture


Cited by 8 articles.

1. RFL-LSU: A Robust Federated Learning Approach with Localized Stepwise Updates;ACM Transactions on Internet Technology;2024-08-30

2. Video captioning based on dual learning via multiple reconstruction blocks;Image and Vision Computing;2024-08

3. Action-aware Linguistic Skeleton Optimization Network for Non-autoregressive Video Captioning;ACM Transactions on Multimedia Computing, Communications, and Applications;2024-07-20

4. Multimodal AI-Based Summarization and Storytelling for Soccer on Social Media;Proceedings of the ACM Multimedia Systems Conference 2024;2024-04-15

5. Sentiment-Oriented Transformer-Based Variational Autoencoder Network for Live Video Commenting;ACM Transactions on Multimedia Computing, Communications, and Applications;2024-01-11
