A ResNet-Based Audio-Visual Fusion Model for Piano Skill Evaluation

Authors:

Zhao Xujian 1, Wang Yixin 1, Cai Xuebo 2

Affiliations:

1. School of Computer Science and Technology, Southwest University of Science and Technology, Mianyang 621010, China

2. School of Music and Dance, Sichuan University of Culture and Arts, Mianyang 621000, China

Abstract

With the growing popularity of piano education in recent years, many people have taken up the instrument. However, the high cost of traditional instruction and its exclusively one-on-one teaching model make learning the piano an expensive endeavor. Most existing approaches evaluate a pianist's skill from the audio modality alone. Unfortunately, these methods overlook the information contained in video, yielding a one-sided and simplistic assessment of the player's skill. More recently, multimodal methods have been proposed that assess skill level using both video and audio. However, existing multimodal approaches use shallow networks to extract video and audio features, which limits their ability to capture the complex spatio-temporal and time-frequency characteristics of piano performances; the fingering information of a performance is embedded in the spatio-temporal features, and the pitch and rhythm information in the time-frequency features. We therefore propose a ResNet-based audio-visual fusion model that extracts both visual features of the player's finger movement trajectories and auditory features covering pitch and rhythm. Joint features are then obtained through feature fusion, which captures the correlation and complementary information between video and audio, enabling a comprehensive and accurate evaluation of the player's skill level. Moreover, the proposed model can extract complex temporal and frequency features from piano performances. First, ResNet18-3D serves as the backbone of the visual branch, extracting features from the video data. ResNet18-2D then serves as the backbone of the aural branch, extracting features from the audio data. The extracted video features are fused with the audio features to produce the multimodal features used for the final skill evaluation. Experimental results on the PISA dataset show that our audio-visual fusion model, with a validation accuracy of 70.80% and an average training time of 74.02 s, outperforms the baseline model in both accuracy and efficiency. We also examine how the depth of each ResNet branch affects performance: in general, the model performs best when video and audio features are balanced, whereas the best accuracy achieved drops to 68.70% when their ratio differs significantly.
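
The abstract names the two backbones but does not specify the fusion operator or the classification head. The following PyTorch sketch is therefore only an illustration of the described pipeline, assuming concatenation of the two 512-dimensional pooled backbone features followed by a linear classifier; the class name AudioVisualFusion, the input shapes, and num_classes are hypothetical rather than taken from the authors' code.

import torch
import torch.nn as nn
from torchvision.models import resnet18
from torchvision.models.video import r3d_18


class AudioVisualFusion(nn.Module):
    """Sketch of the described two-branch model: ResNet18-3D for video,
    ResNet18-2D for audio, fused by concatenation (an assumption)."""

    def __init__(self, num_classes: int):
        super().__init__()
        # Visual branch: ResNet18-3D over video clips shaped (B, 3, T, H, W).
        self.visual = r3d_18(weights=None)
        self.visual.fc = nn.Identity()  # expose the 512-d pooled feature

        # Aural branch: ResNet18-2D over spectrograms shaped (B, 1, F, T);
        # the first conv is rebuilt for single-channel input.
        self.aural = resnet18(weights=None)
        self.aural.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2,
                                     padding=3, bias=False)
        self.aural.fc = nn.Identity()  # expose the 512-d pooled feature

        # Fusion by concatenation, then a linear skill-level classifier.
        self.classifier = nn.Linear(512 + 512, num_classes)

    def forward(self, video: torch.Tensor, spec: torch.Tensor) -> torch.Tensor:
        v = self.visual(video)  # (B, 512) spatio-temporal features
        a = self.aural(spec)    # (B, 512) time-frequency features
        return self.classifier(torch.cat([v, a], dim=1))


# Smoke test with assumed input sizes: a 16-frame RGB clip and a 128-bin
# spectrogram; num_classes is an assumption and should be set to the
# dataset's number of skill levels.
model = AudioVisualFusion(num_classes=10)
logits = model(torch.randn(2, 3, 16, 112, 112), torch.randn(2, 1, 128, 256))
print(logits.shape)  # torch.Size([2, 10])

Replacing each backbone's fc layer with nn.Identity exposes the globally pooled feature vector, so the fusion step operates on fixed-size embeddings regardless of clip length or spectrogram size.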

Funders

Ministry of Education

Sichuan Provincial Department of Science and Technology

Publisher

MDPI AG

Subject

Fluid Flow and Transfer Processes, Computer Science Applications, Process Chemistry and Technology, General Engineering, Instrumentation, General Materials Science
