Speech Audio Synthesis from Tagged MRI and Non-Negative Matrix Factorization via Plastic Transformer

Bibliographic Details
Published in: Lecture Notes in Computer Science, Vol. 14226; p. 435
Main Authors: Liu, Xiaofeng; Xing, Fangxu; Stone, Maureen; Zhuo, Jiachen; Fels, Sidney; Prince, Jerry L.; El Fakhri, Georges; Woo, Jonghye
Format: Journal Article; Book Chapter
Language: English
Published: Germany, 01.01.2023
ISSN: 0302-9743 (print); 1611-3349 (electronic)
DOI: 10.1007/978-3-031-43990-2_41

Summary: The tongue's intricate 3D structure, comprising localized functional units, plays a crucial role in the production of speech. When measured using tagged MRI, these functional units exhibit cohesive displacements and derived quantities that facilitate the complex process of speech production. Non-negative matrix factorization (NMF)-based approaches have been shown to estimate the functional units from motion features, yielding a set of building blocks and a corresponding weighting map. Investigating the link between weighting maps and speech acoustics can offer significant insights into the intricate process of speech production. To this end, in this work, we use two-dimensional spectrograms as a proxy representation and develop an end-to-end deep learning framework for translating weighting maps into their corresponding audio waveforms. Our proposed plastic light transformer (PLT) framework is based on directional product relative position bias and single-level spatial pyramid pooling, enabling flexible mapping of variable-size weighting maps to fixed-size spectrograms without input information loss or dimension expansion, while efficiently modeling the global correlation of wide matrix inputs. To improve the realism of the generated spectrograms with relatively limited training samples, we apply pair-wise utterance consistency under a Maximum Mean Discrepancy constraint, together with adversarial training. Experimental results on a dataset of 29 subjects, each speaking two utterances, demonstrate that our framework synthesizes speech audio waveforms from weighting maps, outperforming conventional convolutional and transformer models.
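As a rough illustration of the NMF step described in the summary, the sketch below factors a matrix of motion features into building blocks and a weighting map. All names and sizes are hypothetical, and scikit-learn's generic NMF stands in for the paper's specific formulation.

# Rough sketch of the NMF step: stack non-negative motion features from
# tagged MRI into a matrix X (voxels x features), then factor it into
# building blocks and a per-voxel weighting map.  scikit-learn's generic
# NMF stands in for the paper's formulation; all sizes are hypothetical.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
n_voxels, n_features, n_units = 5000, 60, 4   # hypothetical dimensions

X = rng.random((n_voxels, n_features))        # non-negative motion features

model = NMF(n_components=n_units, init="nndsvda", max_iter=500, random_state=0)
weighting_map = model.fit_transform(X)        # (n_voxels, n_units)
building_blocks = model.components_           # (n_units, n_features)

# Each voxel's motion is approximated as a non-negative mixture of units:
# X is approximately weighting_map @ building_blocks
print(weighting_map.shape, building_blocks.shape)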
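The single-level spatial pyramid pooling idea can be approximated with adaptive pooling: whatever the spatial size of the weighting map, the pooled output has a fixed grid, so downstream layers always see a fixed-size input. The following is a minimal PyTorch sketch under that assumption, not the paper's implementation; the class name and output size are illustrative.

# Hypothetical single-level spatial pyramid pooling: adaptive average
# pooling maps a variable-size 2D weighting map to a fixed grid.
import torch
import torch.nn as nn

class SingleLevelSPP(nn.Module):
    """Pool any HxW input to a fixed grid (a single pyramid level)."""
    def __init__(self, out_hw=(16, 16)):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(out_hw)  # fixed output grid

    def forward(self, x):       # x: (batch, channels, H, W); H and W may vary
        return self.pool(x)     # (batch, channels, 16, 16)

spp = SingleLevelSPP()
for h, w in [(40, 120), (64, 200)]:        # variable-size weighting maps
    x = torch.rand(1, 4, h, w)             # 4 channels: one per functional unit
    print(spp(x).shape)                    # always torch.Size([1, 4, 16, 16])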
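The Maximum Mean Discrepancy constraint penalizes distribution mismatch between paired utterances. Below is a generic Gaussian-kernel MMD in PyTorch; the kernel choice, bandwidth, and the idea of comparing learned embeddings are assumptions for illustration, not details confirmed by the paper.

# Generic Gaussian-kernel MMD; kernel and bandwidth are assumptions.
import torch

def gaussian_mmd(x, y, sigma=1.0):
    """Biased MMD^2 estimate between feature batches x, y of shape (n, d)."""
    def k(a, b):
        d2 = torch.cdist(a, b).pow(2)      # squared pairwise distances
        return torch.exp(-d2 / (2.0 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()

# Usage: discourage mismatch between embeddings of a paired utterance
emb_a, emb_b = torch.randn(8, 128), torch.randn(8, 128)
loss_consistency = gaussian_mmd(emb_a, emb_b)
print(float(loss_consistency))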