Domain invariant speech features using a new divergence measure
| Published in | 2014 IEEE Spoken Language Technology Workshop (SLT), pp. 77–82 |
|---|---|
| Format | Conference Proceeding |
| Language | English |
| Published | IEEE, 01.12.2014 |
| DOI | 10.1109/SLT.2014.7078553 |
| Summary: | Existing speech classification algorithms often perform well when evaluated on training and test data drawn from the same distribution. In practice, however, these distributions are not always the same. In these circumstances, the performance of trained models will likely decrease. In this paper, we discuss an underutilized divergence measure and derive an estimable upper bound on the test error rate that depends on the error rate on the training data and the distance between training and test distributions. Using this bound as motivation, we develop a feature learning algorithm that aims to identify invariant speech features that generalize well to data similar to, but different from, the training set. Comparative results confirm the efficacy of the algorithm on a set of cross-domain speech classification tasks. |
|---|---|
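The summary invokes a standard idea from domain adaptation: a classifier's test-set error can be bounded by its training error plus a divergence between the training and test distributions. The paper's own divergence measure is not given in this record; as a purely generic illustration, the sketch below uses total variation distance, under which the bound holds for any classifier with 0/1 loss. The distributions, loss vector, and symbol alphabet are all hypothetical.

```python
import numpy as np

# Generic illustration (not the paper's specific divergence measure):
# for any 0/1-loss classifier, the gap between its expected error under
# distributions P and Q is at most the total variation distance
#   d_TV(P, Q) = 0.5 * sum_x |P(x) - Q(x)|,
# so  err_Q <= err_P + d_TV(P, Q).

rng = np.random.default_rng(0)

# Two hypothetical discrete "feature" distributions over 10 symbols,
# standing in for the training and test domains.
P = rng.dirichlet(np.ones(10))
Q = rng.dirichlet(np.ones(10))

d_tv = 0.5 * np.abs(P - Q).sum()

# An arbitrary fixed classifier, represented only by its 0/1 loss on
# each symbol; its expected error is a dot product with each distribution.
loss = rng.integers(0, 2, size=10).astype(float)
err_P = float(P @ loss)   # "training" error
err_Q = float(Q @ loss)   # "test" error

# The bound: test error never exceeds training error plus the divergence.
assert err_Q <= err_P + d_tv + 1e-12
print(f"err_P={err_P:.3f}  err_Q={err_Q:.3f}  d_TV={d_tv:.3f}")
```

The inequality follows because the error gap `err_Q - err_P` is a sum of `(Q - P)` terms weighted by losses in {0, 1}, which is maximized by taking exactly the symbols where `Q > P`, yielding `d_TV`. A feature learning algorithm like the one the summary describes would aim to make this divergence small in the learned feature space.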