Authors:
Komeiji Shuji, Shigemi Kai, Mitsuhashi Takumi, Iimura Yasushi, Suzuki Hiroharu, Sugano Hidenori, Shinoda Koichi, Yatabe Kohei, Tanaka Toshihisa
Abstract
This study describes speech synthesis from an electrocorticogram (ECoG) during imagined speech. We aim to generate high-quality audio despite the limited available training data by employing a Transformer-based decoder and a pretrained vocoder. Specifically, we used a pretrained neural vocoder, Parallel WaveGAN, to convert the log-mel spectrograms output by the Transformer decoder, which was trained on ECoG signals, into high-quality audio signals. In our experiments, using ECoG signals recorded from 13 participants, the speech synthesized from imagined speech achieved a dynamic time warping (DTW) Pearson correlation ranging from 0.85 to 0.95. This high-quality speech synthesis can be attributed to the Transformer decoder's ability to accurately reconstruct high-fidelity log-mel spectrograms, demonstrating its effectiveness with limited training data.
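The evaluation metric named in the abstract, a DTW Pearson correlation, aligns two sequences of unequal length with dynamic time warping and then computes the Pearson correlation over the aligned pairs. The following is a minimal sketch of that idea on 1-D toy sequences; the function names, the squared-difference DTW cost, and the example data are illustrative assumptions, not taken from the paper (which applies the metric to log-mel spectrograms).

```python
# Hedged sketch: DTW-aligned Pearson correlation between two 1-D sequences.
# The cost function (squared difference) and all names here are illustrative
# assumptions; the paper applies this metric to log-mel spectrogram frames.
import math

def dtw_path(a, b):
    """Classic dynamic-programming DTW; returns the alignment path."""
    n, m = len(a), len(b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = (a[i - 1] - b[j - 1]) ** 2
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    # Backtrack from the end to recover the index pairs of the path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        _, i, j = min((cost[i - 1][j - 1], i - 1, j - 1),
                      (cost[i - 1][j], i - 1, j),
                      (cost[i][j - 1], i, j - 1))
    return list(reversed(path))

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = math.sqrt(sum((xi - mx) ** 2 for xi in x)
                    * sum((yi - my) ** 2 for yi in y))
    return num / den

def dtw_pearson(a, b):
    """Pearson correlation computed over the DTW-aligned samples."""
    path = dtw_path(a, b)
    return pearson([a[i] for i, _ in path], [b[j] for _, j in path])

# Toy usage: same overall shape, different lengths and timing.
ref = [0.0, 0.5, 1.0, 0.5, 0.0]
syn = [0.0, 0.4, 0.9, 0.9, 0.4, 0.1]
print(f"DTW Pearson: {dtw_pearson(ref, syn):.3f}")
```

For spectrograms, the same alignment is typically computed frame-by-frame with a frame-level distance, and the correlation is then averaged across mel channels; this sketch collapses that to a single channel for clarity.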
Publisher
Cold Spring Harbor Laboratory