- Research Article
- Open Access
Sound Classification in Hearing Aids Inspired by Auditory Scene Analysis
EURASIP Journal on Advances in Signal Processing volume 2005, Article number: 387845 (2005)
Abstract
A sound classification system for the automatic recognition of the acoustic environment in a hearing aid is discussed. The system distinguishes the four sound classes "clean speech," "speech in noise," "noise," and "music." Several features inspired by auditory scene analysis are extracted from the sound signal, describing amplitude modulations, spectral profile, harmonicity, amplitude onsets, and rhythm. These features are evaluated in combination with different pattern classifiers. Simple classifiers, such as rule-based and minimum-distance classifiers, are compared with more complex approaches, such as a Bayes classifier, a neural network, and a hidden Markov model. Sounds from a large database are employed for both training and testing of the system. The achieved recognition rates are very high except for the class "speech in noise." Problems arise in the classification of compressed pop music, strongly reverberated speech, and tonal or fluctuating noises.
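To illustrate the general approach described in the abstract, the following is a minimal sketch of one of the simple classifiers mentioned, a minimum-distance classifier operating on a feature vector extracted from the sound signal. The two features used here (envelope modulation depth and spectral centroid) are simplified stand-ins for the auditory-scene-analysis features named in the article; the exact features, window lengths, and class definitions are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def extract_features(signal, fs):
    """Toy feature vector: envelope modulation depth and spectral centroid.
    Simplified stand-ins for the article's auditory-scene-analysis features
    (amplitude modulations, spectral profile, harmonicity, onsets, rhythm)."""
    # Amplitude envelope via rectification and a ~20 ms moving average
    env = np.abs(signal)
    win = max(1, int(0.02 * fs))
    env = np.convolve(env, np.ones(win) / win, mode="same")
    mod_depth = env.std() / (env.mean() + 1e-12)

    # Spectral centroid as a crude descriptor of the spectral profile
    spec = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    centroid = (freqs * spec).sum() / (spec.sum() + 1e-12)
    return np.array([mod_depth, centroid])

class MinimumDistanceClassifier:
    """Assigns the class whose mean training feature vector is nearest
    in Euclidean distance to the input feature vector."""

    def fit(self, X, y):
        self.classes_ = sorted(set(y))
        self.means_ = {
            c: np.mean([x for x, label in zip(X, y) if label == c], axis=0)
            for c in self.classes_
        }
        return self

    def predict(self, x):
        return min(self.classes_,
                   key=lambda c: np.linalg.norm(x - self.means_[c]))
```

In use, one would extract feature vectors from labeled training sounds for each of the four classes, fit the classifier on them, and then classify new signals by nearest class mean; the more complex classifiers in the article (Bayes, neural network, hidden Markov model) replace this distance rule with richer decision functions over the same kind of feature vectors.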
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Büchler, M., Allegro, S., Launer, S. et al. Sound Classification in Hearing Aids Inspired by Auditory Scene Analysis. EURASIP J. Adv. Signal Process. 2005, 387845 (2005). https://doi.org/10.1155/ASP.2005.2991