Speech recognition in ‘adverse conditions’ has been a familiar area of research in computer science, engineering, and hearing sciences for several decades. In contrast, most psycholinguistic theories of speech recognition are built upon evidence gathered from tasks performed by healthy listeners on carefully recorded speech, in a quiet environment, and under conditions of undivided attention.

Building upon the momentum initiated by the Psycholinguistic Approaches to Speech Recognition in Adverse Conditions workshop held in Bristol, UK, in 2010, this volume aims to promote a multi-disciplinary yet unified approach to the perceptual, cognitive, and neuro-physiological mechanisms underpinning the recognition of degraded speech, variable speech, speech experienced under cognitive load, and speech experienced by theoretically relevant populations. The collection opens with a review of the literature and a formal classification of adverse conditions. The research articles then highlight those adverse conditions with the greatest potential for constraining theory, showing that some speech phenomena often believed to be immutable can be affected by noise, surface variations, or attentional set in ways that will force researchers to rethink their theories. This volume is essential reading for those interested in speech recognition outside laboratory constraints.
Table of Contents
1. Speech recognition in adverse conditions: A review (Sven L. Mattys, Ann R. Bradlow, Matthew H. Davis and Sophie K. Scott)
2. Talker-specific perceptual adaptation during online speech perception (Alison M. Trude and Sarah Brown-Schmidt)
3. Effects of dialect variation on the semantic predictability benefit (Cynthia G. Clopper)
4. Word learning under adverse listening conditions: Context-specific recognition (Sarah C. Creel, Richard N. Aslin and Michael K. Tanenhaus)
5. Familiarisation conditions and the mechanisms that underlie improved recognition of dysarthric speech (Stephanie A. Borrie, Megan J. McAuliffe, Julie M. Liss, Cecilia Kirk, Gregory A. O'Beirne and Tim Anderson)
6. The effect of energetic and informational masking on the time-course of stream segregation: Evidence that streaming depends on vocal fine structure cues (Payam Ezzatian, Liang Li, M. Kathleen Pichora-Fuller and Bruce A. Schneider)
7. Speech-in-speech recognition: A training study (Kristin J. Van Engen)
8. Sentence comprehension in competing speech: Dichotic sentence-word priming reveals hemispheric differences in auditory semantic processing (Jennifer Aydelott, Dinah Baer-Henney, Maciej Trzaskowski, Robert Leech and Frederic Dick)
9. Brain regions recruited for the effortful comprehension of noise-vocoded words (Alexis Hervais-Adelman, Robert P. Carlyon, Ingrid S. Johnsrude and Matthew H. Davis)
10. Audiovisual benefit for recognition of speech presented with single-talker noise in older listeners (Alexandra Jesse and Esther Janse)
11. Sentence comprehension in proficient adult cochlear implant users: On the vulnerability of syntax (A. Hahne, A. Wolf, J. Müller, D. Mürbe and A. D. Friederici)
12. Increased lexical activation and reduced competition in second-language listening (Mirjam Broersma)
13. A lexically-biased attentional set compensates for variable speech quality caused by pronunciation variation (Mark A. Pitt and Christine M. Szostak)
14. Adverse conditions improve distinguishability of auditory, motor and perceptuo-motor theories of speech perception: An exploratory Bayesian modelling study (C. Moulin-Frier, R. Laurent, P. Bessière, J. L. Schwartz and J. Diard)
Sven L. Mattys is Professor in the School of Psychology at the University of York, UK.
Ann R. Bradlow is Professor in the Department of Linguistics at Northwestern University, USA.
Matthew H. Davis is a Programme Leader Track Scientist in the MRC Cognition and Brain Sciences Unit at the University of Cambridge, UK.
Sophie K. Scott is Professor in the Institute of Cognitive Neuroscience at University College London, UK.