Audio-visual speech recognition using deep bottleneck features and high-performance lipreading

Bibliographic Details
Published in: 2015 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA), pp. 575 - 582
Main Authors: Tamura, Satoshi; Ninomiya, Hiroshi; Kitaoka, Norihide; Osuga, Shin; Iribe, Yurie; Takeda, Kazuya; Hayamizu, Satoru
Format: Conference Proceeding
Language: English, Japanese
Published: Asia-Pacific Signal and Information Processing Association, 01.12.2015
DOI: 10.1109/APSIPA.2015.7415335

More Information
Summary: This paper develops an Audio-Visual Speech Recognition (AVSR) method by (1) exploring high-performance visual features, (2) applying audio and visual deep bottleneck features to improve AVSR performance, and (3) investigating the effectiveness of voice activity detection (VAD) in the visual modality. In our approach, several kinds of visual features are extracted and subsequently converted into deep bottleneck features using deep learning. Using the proposed features, we achieved 73.66% lipreading accuracy in a speaker-independent open condition, and about 90% AVSR accuracy on average in noisy environments. In addition, we extracted speech segments from the visual features, resulting in 77.80% lipreading accuracy. These results show that VAD is useful in both the audio and visual modalities for better lipreading and AVSR.
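As a rough illustration of the deep bottleneck features mentioned in the summary, the sketch below shows a bottleneck-shaped DNN whose narrow hidden layer supplies the features after supervised training. It is a minimal example assuming PyTorch and hypothetical layer sizes, input dimensions, and training targets; the paper's actual network topology, visual front-end, and recognizer back-end are not reproduced here.

# Minimal sketch of a deep bottleneck feature (DBNF) extractor.
# Assumes PyTorch; layer sizes and targets below are illustrative only.
import torch
import torch.nn as nn

class BottleneckDNN(nn.Module):
    """MLP with a narrow bottleneck layer; after supervised training,
    the bottleneck activations are used as features for the recognizer."""
    def __init__(self, in_dim=75, hidden_dim=1024, bottleneck_dim=40, n_targets=40):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.Sigmoid(),
            nn.Linear(hidden_dim, hidden_dim), nn.Sigmoid(),
            nn.Linear(hidden_dim, bottleneck_dim),   # narrow bottleneck layer
        )
        self.classifier = nn.Sequential(
            nn.Sigmoid(),
            nn.Linear(bottleneck_dim, hidden_dim), nn.Sigmoid(),
            nn.Linear(hidden_dim, n_targets),        # e.g. phoneme/viseme posteriors
        )

    def forward(self, x):
        # Full network used only during training of the classification task.
        return self.classifier(self.encoder(x))

    def extract_dbnf(self, x):
        # After training, discard the classifier head and keep the
        # bottleneck activations as compact frame-level features.
        with torch.no_grad():
            return self.encoder(x)

# Usage: spliced per-frame visual (or audio) features in, DBNFs out.
model = BottleneckDNN()
frames = torch.randn(100, 75)        # 100 frames of 75-dim spliced input (hypothetical)
dbnf = model.extract_dbnf(frames)    # -> (100, 40) deep bottleneck features
print(dbnf.shape)

In this kind of setup, the audio and visual bottleneck features can then be concatenated or decoded in a multi-stream recognizer, with VAD applied to drop non-speech frames before decoding.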