Audio-visual speech recognition using deep bottleneck features and high-performance lipreading
| Published in | 2015 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA), pp. 575 - 582 |
|---|---|
| Format | Conference Proceeding |
| Language | English; Japanese |
| Published | Asia-Pacific Signal and Information Processing Association, 01.12.2015 |
| DOI | 10.1109/APSIPA.2015.7415335 |
| Summary: | This paper develops an Audio-Visual Speech Recognition (AVSR) method by (1) exploring high-performance visual features, (2) applying audio and visual deep bottleneck features to improve AVSR performance, and (3) investigating the effectiveness of voice activity detection (VAD) in the visual modality. In our approach, many kinds of visual features are incorporated and subsequently converted into bottleneck features using deep learning. Using the proposed features, we achieved 73.66% lipreading accuracy in a speaker-independent open condition and about 90% AVSR accuracy on average in noisy environments. In addition, we extracted speech segments from visual features, resulting in 77.80% lipreading accuracy. We find that VAD is useful in both the audio and visual modalities for better lipreading and AVSR. |
|---|---|
| DOI: | 10.1109/APSIPA.2015.7415335 |
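
For readers unfamiliar with deep bottleneck features, the sketch below illustrates the general idea mentioned in the abstract: a deep network is trained as a frame-level classifier with one deliberately narrow hidden layer, and the activations of that layer are then used as compact features for the downstream recognizer. This is an illustrative sketch only; the layer sizes, activation functions, input dimensionality, and target labels are assumptions, not the configuration used in the paper.

```python
# Illustrative sketch of deep bottleneck feature (DBNF) extraction.
# All dimensions and labels below are assumptions for demonstration.
import torch
import torch.nn as nn

class BottleneckDNN(nn.Module):
    def __init__(self, input_dim=120, hidden_dim=1024,
                 bottleneck_dim=40, num_classes=40):
        super().__init__()
        # Layers up to and including the narrow bottleneck.
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, hidden_dim), nn.Sigmoid(),
            nn.Linear(hidden_dim, hidden_dim), nn.Sigmoid(),
            nn.Linear(hidden_dim, bottleneck_dim),  # narrow bottleneck layer
        )
        # Remaining layers are only needed while training the classifier.
        self.classifier = nn.Sequential(
            nn.Sigmoid(),
            nn.Linear(bottleneck_dim, hidden_dim), nn.Sigmoid(),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.encoder(x))

    def extract_bottleneck(self, x):
        # After training, the bottleneck activations serve as compact
        # features for the recognizer (e.g., an AVSR back end).
        with torch.no_grad():
            return self.encoder(x)

# Usage sketch: train `model` as a frame-level classifier (e.g., on
# phoneme or viseme targets), then extract bottleneck features from
# concatenated audio or visual feature frames.
model = BottleneckDNN()
frames = torch.randn(8, 120)            # 8 dummy frames of 120-dim input
bnf = model.extract_bottleneck(frames)  # -> tensor of shape (8, 40)
```

In this setup the bottleneck layer forces the network to compress the discriminative information needed for classification into a low-dimensional representation, which is why such features are commonly fed to a conventional recognizer in place of, or alongside, the raw input features.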