Auditory and Vestibular Research 2017. 26(4):223-230.

Effect of hearing aid amplitude compression on emotional speech recognition
Hossein Namvar Arefi, Seyed Jalal Sameni, Hamid Jalilvand, Mohammad Kamali


Background and Aim: Understanding emotion is crucial for human social interaction. Amplitude compression in hearing aids alters the acoustical characteristics of incoming sound that are necessary for emotion recognition. The present study investigated this effect.
Methods: Hearing aid amplitude compression was simulated on the Persian emotional speech database (Persian ESD) using MATLAB software. Three types of hearing loss, namely high tone loss (HTL), low tone loss (LTL), and flat loss, were simulated with three amplification methods: fast-acting compression (FAC), slow-acting compression (SAC), and linear amplification. Forty normal-hearing young adults (aged 20-35 years; mean ± SD: 26.98 ± 4.50) with no depression participated in this study. Emotion recognition before and after hearing aid compression simulation was compared statistically using the independent t-test, with p<0.05 as the significance level.
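The compression simulation described above can be illustrated with a minimal single-channel dynamic range compressor: an envelope follower with separate attack and release time constants feeds a static gain curve that reduces level above a threshold. This sketch is in Python rather than MATLAB, and all parameter values (threshold, ratio, attack/release times) are illustrative assumptions, not the settings used in the study; short time constants correspond to FAC and long ones to SAC.

```python
import numpy as np

def simulate_compression(signal, fs, threshold_db=-40.0, ratio=3.0,
                         attack_ms=5.0, release_ms=50.0):
    """Apply a single-channel dynamic range compressor to `signal`.

    Fast-acting compression (FAC) would use short attack/release times
    (e.g. 5/50 ms); slow-acting compression (SAC) long ones
    (e.g. 100/1000 ms). Parameter values here are illustrative only.
    """
    eps = 1e-12
    # Envelope follower: one-pole smoothing with separate attack/release
    a_att = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    a_rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env = np.zeros_like(signal)
    level = 0.0
    for i, x in enumerate(np.abs(signal)):
        coeff = a_att if x > level else a_rel  # attack when rising
        level = coeff * level + (1.0 - coeff) * x
        env[i] = level
    # Static curve: above threshold, reduce gain according to `ratio`
    env_db = 20.0 * np.log10(env + eps)
    gain_db = np.where(env_db > threshold_db,
                       (threshold_db - env_db) * (1.0 - 1.0 / ratio),
                       0.0)
    return signal * 10.0 ** (gain_db / 20.0)
```

A compressor like this leaves quiet passages essentially untouched while attenuating louder ones, which is the mechanism by which amplitude compression can flatten the intensity cues that carry vocal emotion.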
Results: Recognition of the fear, sad, angry, and happy emotions differed significantly in all three types of simulated hearing loss, whereas recognition of disgust was affected only in LTL. Neutral emotion recognition showed no significant difference in any of the three types of simulated hearing loss. With FAC, there were significant differences in sad, angry, and happy emotion recognition, whereas SAC produced no significant difference for any emotion except happy utterances. With linear amplification, fear, sad, and angry emotion recognition differed significantly.
Conclusion: Emotion recognition is reduced after hearing aid amplitude compression simulation. Whether the difference in emotion recognition reaches statistical significance depends on the emotion (e.g. happy, fear, angry), the type of simulated hearing loss (HTL, LTL, or flat), and the amplification method (FAC, SAC, or linear).


Keywords: Emotional speech; emotion perception; hearing aid; amplitude compression





This work is licensed under a Creative Commons Attribution-NonCommercial 3.0 Unported License which allows users to read, copy, distribute and make derivative works for non-commercial purposes from the material, as long as the author of the original work is cited properly.