  • Research Article
  • Open access

Sound Classification in Hearing Aids Inspired by Auditory Scene Analysis

Abstract

A sound classification system for the automatic recognition of the acoustic environment in a hearing aid is discussed. The system distinguishes the four sound classes "clean speech," "speech in noise," "noise," and "music." A number of features inspired by auditory scene analysis are extracted from the sound signal. These features describe amplitude modulations, spectral profile, harmonicity, amplitude onsets, and rhythm. They are evaluated together with different pattern classifiers. Simple classifiers, such as rule-based and minimum-distance classifiers, are compared with more complex approaches, such as a Bayes classifier, a neural network, and a hidden Markov model. Sounds from a large database are employed for both training and testing of the system. The achieved recognition rates are very high except for the class "speech in noise." Problems arise in the classification of compressed pop music, strongly reverberated speech, and tonal or fluctuating noises.
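To make the approach concrete, the following is a minimal sketch, not the paper's actual implementation: two illustrative features (amplitude-modulation depth from the signal envelope, and spectral centroid) are extracted and fed to a minimum-distance classifier of the kind mentioned above. All function names and the toy signals are this sketch's own assumptions.

```python
import numpy as np

def extract_features(signal, sr=16000):
    """Return a small feature vector: [modulation depth, normalised spectral centroid].

    Modulation depth is approximated as the coefficient of variation of a
    crude amplitude envelope; the spectral centroid is the magnitude-weighted
    mean frequency, normalised by the Nyquist frequency.
    """
    env = np.abs(signal)                                   # crude amplitude envelope
    mod_depth = env.std() / (env.mean() + 1e-12)
    spec = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    centroid = (freqs * spec).sum() / (spec.sum() + 1e-12)
    return np.array([mod_depth, centroid / (sr / 2)])

def minimum_distance_classify(x, class_means):
    """Assign x to the class whose mean feature vector is nearest (Euclidean)."""
    return min(class_means, key=lambda c: np.linalg.norm(x - class_means[c]))

# Toy demonstration: a steady tone (low modulation, concentrated spectrum)
# versus white noise (high modulation depth, spread spectrum).
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
noise = np.random.default_rng(0).standard_normal(sr)

class_means = {"music": extract_features(tone), "noise": extract_features(noise)}
print(minimum_distance_classify(extract_features(tone), class_means))   # -> music
print(minimum_distance_classify(extract_features(noise), class_means))  # -> noise
```

In the paper the class representatives would be learned from the training database rather than from single example signals, and the richer feature set (harmonicity, onsets, rhythm) would replace these two toy features.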

Author information

Corresponding author

Correspondence to Michael Büchler.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 Generic License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.



Cite this article

Büchler, M., Allegro, S., Launer, S. et al. Sound Classification in Hearing Aids Inspired by Auditory Scene Analysis. EURASIP J. Adv. Signal Process. 2005, 387845 (2005). https://doi.org/10.1155/ASP.2005.2991


Keywords and phrases