Author:
You Jie, Wu Wenqin, Lee Joonwhoan
Abstract
Sound is one of the primary forms of sensory information that we use to perceive our surroundings. Usually, a sound event is a sequence of an audio clip obtained from an action; the action can be a rhythm pattern, a music genre, a person speaking for a few seconds, etc. Sound event classification addresses distinguishing what kind of audio clip a given audio sequence contains. Nowadays, this is commonly solved with the following pipeline: audio pre-processing → perceptual feature extraction → classification algorithm. In this paper, we improve the traditional sound event classification algorithm to identify unknown sound events using a deep learning method. A compact cluster structure in the feature space for the known classes helps recognize unknown classes by leaving ample room in the embedded feature space to locate unknown samples. Based on this concept, we applied center loss and supervised contrastive loss to optimize the model. The center loss minimizes the intra-class distance by pulling each embedded feature toward its cluster center, while the contrastive loss disperses the inter-class features from one another. In addition, we explored the performance of self-supervised learning in detecting unknown sound events. The experimental results demonstrate that our proposed open-set sound event classification algorithm and self-supervised learning approach achieve sustained performance improvements on various datasets.
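The center-loss term described above corresponds to the standard formulation: half the mean squared distance between each embedded feature and the center of its class. A minimal numerical sketch of that term, using NumPy with toy embeddings and class centers (the function name, dimensions, and values are illustrative, not taken from the paper):

```python
import numpy as np

def center_loss(features, labels, centers):
    """Center loss: half the mean squared distance between each
    embedded feature and the center of its assigned class.
    Minimizing it pulls embeddings toward their cluster centers,
    tightening intra-class clusters."""
    diffs = features - centers[labels]               # (N, D) per-sample offsets
    return 0.5 * np.mean(np.sum(diffs ** 2, axis=1))

# Toy example: four 2-D embeddings from two known classes.
feats = np.array([[ 1.0,  0.0],
                  [ 1.2,  0.1],
                  [-1.0,  0.0],
                  [-0.9, -0.2]])
labels = np.array([0, 0, 1, 1])
centers = np.array([[ 1.1,  0.05],    # class-0 center
                    [-0.95, -0.1]])   # class-1 center

print(center_loss(feats, labels, centers))
```

Embeddings that sit exactly on their class centers yield zero loss, so gradient descent on this term drives each class into a compact cluster; the supervised contrastive term then pushes the clusters apart, leaving the remaining feature space free for unknown samples.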
Funder
National Research Foundation of Korea
Publisher
Springer Science and Business Media LLC