Shennong: A Python toolbox for audio speech features extraction

Bibliographic Details
Published in: Behavior Research Methods, Vol. 55, No. 8, pp. 4489-4501
Main Authors: Bernard, Mathieu; Poli, Maxime; Karadayi, Julien; Dupoux, Emmanuel
Format: Journal Article
Language: English
Published: New York: Springer US, 01.12.2023
ISSN: 1554-351X (print), 1554-3528 (electronic)
DOI: 10.3758/s13428-022-02029-6

Summary: We introduce Shennong, a Python toolbox and command-line utility for audio speech features extraction. It implements a wide range of well-established state-of-the-art algorithms: spectro-temporal filters such as Mel-Frequency Cepstral Filterbank or Predictive Linear Filters, pre-trained neural networks, pitch estimators, speaker normalization methods, and post-processing algorithms. Shennong is an open-source, reliable, and extensible framework built on top of the popular Kaldi speech processing library. The Python implementation makes it easy for non-technical users to use and integrates with third-party speech modeling and machine learning tools from the Python ecosystem. This paper describes the Shennong software architecture, its core components, and the implemented algorithms. Three applications then illustrate its use. We first present a benchmark of the speech features extraction algorithms available in Shennong on a phone discrimination task. We then analyze the performance of a speaker normalization model as a function of the speech duration used for training. We finally compare pitch estimation algorithms on speech under various noise conditions.
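
As an illustration of the extraction workflow the summary describes, here is a minimal Python sketch assuming the Audio/MfccProcessor interface described in the Shennong documentation; the module paths, parameter names, and the example file path are illustrative and may differ across versions:

    # Minimal MFCC extraction sketch (API names assumed from the Shennong
    # documentation; verify against your installed version).
    from shennong.audio import Audio
    from shennong.processor.mfcc import MfccProcessor

    # Load a speech recording (hypothetical mono wav file).
    audio = Audio.load('speech.wav')

    # Configure the MFCC extractor to match the file's sample rate.
    processor = MfccProcessor(sample_rate=audio.sample_rate)

    # Run extraction: the result wraps a numpy array of frame-level features.
    features = processor.process(audio)
    print(features.data.shape)  # one row of coefficients per analysis frame

Other processors (pitch estimators, speaker normalization, post-processing) follow the same process() pattern, which is what lets the toolbox compose pipelines and integrate with numpy-based machine learning tools.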