Reichenbach J, Keshavarzi M, Kegler M, et al., Transcranial alternating current stimulation in the theta band but not in the delta band modulates the comprehension of naturalistic speech in noise, NeuroImage, ISSN: 1053-8119
Weissbart H, Reichenbach J, Kandylaki K, 2020, Cortical tracking of surprisal during continuous speech comprehension, Journal of Cognitive Neuroscience, Vol: 32, Pages: 155-166, ISSN: 0898-929X
Speech comprehension requires rapid online processing of a continuous acoustic signal to extract structure and meaning. Previous studies on sentence comprehension have found neural correlates of the predictability of a word given its context, as well as of the precision of such a prediction. However, they have focussed on single sentences and on particular words in those sentences. Moreover, they compared neural responses to words with low and high predictability, as well as with low and high precision. In speech comprehension, in contrast, a listener hears many successive words whose predictability and precision vary over a large range. Here we show that cortical activity in different frequency bands tracks word surprisal in continuous natural speech, and that this tracking is modulated by precision. We obtain these results through quantifying surprisal and precision from naturalistic speech using a deep neural network, and through relating these speech features to electroencephalographic (EEG) responses of human volunteers acquired during auditory story comprehension. We find significant cortical tracking of surprisal at low frequencies including the delta band as well as in the higher-frequency beta and gamma bands, and observe that the tracking is modulated by the precision. Our results pave the way to further investigate the neurobiology of natural speech comprehension.
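The surprisal quantity at the heart of this study is the negative log-probability of a word given its preceding context. The paper estimates it with a deep neural network; purely for illustration, the sketch below stands in a simple add-alpha-smoothed bigram model (the corpus and function names are hypothetical, not from the paper):

```python
import math
from collections import Counter

def bigram_surprisal(corpus, alpha=1.0):
    """Train an add-alpha-smoothed bigram model on a whitespace-tokenized
    corpus and return a surprisal function: -log2 P(word | previous word)."""
    words = corpus.split()
    vocab_size = len(set(words))
    bigrams = Counter(zip(words[:-1], words[1:]))
    unigrams = Counter(words[:-1])

    def surprisal(prev, word):
        p = (bigrams[(prev, word)] + alpha) / (unigrams[prev] + alpha * vocab_size)
        return -math.log2(p)

    return surprisal

corpus = "the cat sat on the mat the cat ran"
s = bigram_surprisal(corpus)
# A frequent continuation carries less surprisal than an unseen one.
assert s("the", "cat") < s("the", "dog")
```

A deep language model replaces the bigram counts with learned context representations, but the resulting per-word surprisal time series is used in the same way: regressed against the EEG to assess cortical tracking.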
Etard O, Kegler M, Braiman C, et al., 2019, Decoding of selective attention to continuous speech from the human auditory brainstem response, NeuroImage, Vol: 200, Pages: 1-11, ISSN: 1053-8119
Humans are highly skilled at analysing complex acoustic scenes. The segregation of different acoustic streams and the formation of corresponding neural representations is mostly attributed to the auditory cortex. Decoding of selective attention from neuroimaging has therefore focussed on cortical responses to sound. However, the auditory brainstem response to speech is modulated by selective attention as well, as recently shown through measuring the brainstem's response to running speech. Although the response of the auditory brainstem has a smaller magnitude than that of the auditory cortex, it occurs at much higher frequencies and therefore has a higher information rate. Here we develop statistical models for extracting the brainstem response from multi-channel scalp recordings and for analysing the attentional modulation according to the focus of attention. We demonstrate that the attentional modulation of the brainstem response to speech can be employed to decode the attentional focus of a listener from short measurements of 10 s or less in duration. The decoding remains accurate when obtained from three EEG channels only. We further show how out-of-the-box decoding that employs subject-independent models, as well as decoding that is independent of the specific attended speaker is capable of achieving similar accuracy. These results open up new avenues for investigating the neural mechanisms for selective attention in the brainstem and for developing efficient auditory brain-computer interfaces.
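The decoding step described above can be reduced to a simple comparison: given model predictions of the brainstem response to each of the two competing speakers, the attended speaker is the one whose prediction correlates better with the measured response over a short segment. A minimal sketch (the function and variable names are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def decode_attention(response, pred_speaker_a, pred_speaker_b):
    """Decode the attentional focus from a short segment of the measured
    brainstem response by comparing its correlation with the model
    prediction for each of the two competing speakers."""
    r_a = np.corrcoef(response, pred_speaker_a)[0, 1]
    r_b = np.corrcoef(response, pred_speaker_b)[0, 1]
    return 'A' if r_a > r_b else 'B'

# Toy check: a response that matches speaker A's prediction plus noise.
rng = np.random.default_rng(1)
pred_a = rng.standard_normal(1000)
pred_b = rng.standard_normal(1000)
response = pred_a + 0.5 * rng.standard_normal(1000)
assert decode_attention(response, pred_a, pred_b) == 'A'
```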
Saiz Alia M, Forte A, Reichenbach J, 2019, Individual differences in the attentional modulation of the human auditory brainstem response to speech inform on speech-in-noise deficits, Scientific Reports, Vol: 9, ISSN: 2045-2322
People with normal hearing thresholds can nonetheless have difficulty with understanding speech in noisy backgrounds. The origins of such supra-threshold hearing deficits remain largely unclear. Previously we showed that the auditory brainstem response to running speech is modulated by selective attention, evidencing a subcortical mechanism that contributes to speech-in-noise comprehension. We observed, however, significant variation in the magnitude of the brainstem’s attentional modulation between the different volunteers. Here we show that this variability relates to the ability of the subjects to understand speech in background noise. In particular, we assessed 43 young human volunteers with normal hearing thresholds for their speech-in-noise comprehension. We also recorded their auditory brainstem responses to running speech when selectively attending to one of two competing voices. To control for potential peripheral hearing deficits, and in particular for cochlear synaptopathy, we further assessed noise exposure, the temporal sensitivity threshold, the middle-ear muscle reflex, and the auditory-brainstem response to clicks in various levels of background noise. These tests did not show evidence for cochlear synaptopathy amongst the volunteers. Furthermore, we found that only the attentional modulation of the brainstem response to speech was significantly related to speech-in-noise comprehension. Our results therefore evidence an impact of top-down modulation of brainstem activity on the variability in speech-in-noise comprehension amongst the subjects.
Reichenbach J, Kadir S, Kaza C, et al., Modulation of speech-in-noise comprehension through transcranial current stimulation with the phase-shifted speech envelope, IEEE Transactions on Neural Systems and Rehabilitation Engineering, ISSN: 1534-4320
Neural activity tracks the envelope of a speech signal at latencies from 50 ms to 300 ms. Modulating this neural tracking through transcranial alternating current stimulation influences speech comprehension. Two important variables that can affect this modulation are the latency and the phase of the stimulation with respect to the sound. While previous studies have found an influence of both variables on speech comprehension, the interaction between both has not yet been measured. We presented 17 subjects with speech in noise coupled with simultaneous transcranial alternating current stimulation. The currents were based on the envelope of the target speech but shifted by different phases, as well as by two temporal delays of 100 ms and 250 ms. We also employed various control stimulations, and assessed the signal-to-noise ratio at which the subject understood half of the speech. We found that, at both latencies, speech comprehension is modulated by the phase of the current stimulation. However, the form of the modulation differed between the two latencies. Phase and latency of neurostimulation have accordingly distinct influences on speech comprehension. The different effects at the latencies of 100 ms and 250 ms hint at distinct neural processes for speech processing.
Etard O, Reichenbach J, 2019, Neural speech tracking in the theta and in the delta frequency band differentially encode clarity and comprehension of speech in noise, Journal of Neuroscience, Vol: 39, Pages: 5750-5759, ISSN: 0270-6474
Humans excel at understanding speech even in adverse conditions such as background noise. Speech processing may be aided by cortical activity in the delta and theta frequency bands that has been found to track the speech envelope. However, the rhythm of non-speech sounds is tracked by cortical activity as well. It therefore remains unclear which aspects of neural speech tracking represent the processing of acoustic features, related to the clarity of speech, and which aspects reflect higher-level linguistic processing related to speech comprehension. Here we disambiguate the roles of cortical tracking for speech clarity and comprehension through recording EEG responses to native and foreign language in different levels of background noise, for which clarity and comprehension vary independently. We then use both a decoding and an encoding approach to relate clarity and comprehension to the neural responses. We find that cortical tracking in the theta frequency band is mainly correlated to clarity, while the delta band contributes most to speech comprehension. Moreover, we uncover an early neural component in the delta band that informs on comprehension and that may reflect a predictive mechanism for language processing. Our results disentangle the functional contributions of cortical speech tracking in the delta and theta bands to speech processing. They also show that both speech clarity and comprehension can be accurately decoded from relatively short segments of EEG recordings, which may have applications in future mind-controlled auditory prostheses.
BinKhamis G, Forte AE, Reichenbach J, et al., 2019, Speech auditory brainstem responses in adult hearing aid users: Effects of aiding and background noise, and prediction of behavioral measures, Trends in Hearing, Vol: 23, Pages: 1-20, ISSN: 2331-2165
Evaluation of patients who are unable to provide behavioral responses on standard clinical measures is challenging due to the lack of standard objective (non-behavioral) clinical audiological measures that assess the outcome of an intervention (e.g. hearing aids). Brainstem responses to short consonant-vowel stimuli (speech-ABRs) have been proposed as a measure of subcortical encoding of speech, speech detection, and speech-in-noise performance in individuals with normal hearing. Here, we investigated the potential of speech-ABRs as an objective clinical outcome measure of speech detection, speech-in-noise detection and recognition, and self-reported speech understanding in adults with sensorineural hearing loss. We compared aided and unaided speech-ABRs, and speech-ABRs in quiet and in noise. Additionally, we evaluated whether speech-ABR F0 encoding (obtained from the complex cross-correlation with the 40 ms [da] fundamental waveform) predicted aided behavioral speech recognition in noise and/or aided self-reported speech understanding. Results showed: (i) aided speech-ABRs had earlier peak latencies, larger peak amplitudes, and larger F0 encoding amplitudes compared to unaided speech-ABRs; (ii) the addition of background noise resulted in later F0 encoding latencies, but did not have an effect on peak latencies and amplitudes, or on F0 encoding amplitudes; and (iii) speech-ABRs were not a significant predictor of any of the behavioral or self-report measures. These results show that speech-ABR F0 encoding is not a good predictor of speech-in-noise recognition or self-reported speech understanding with hearing aids. However, our results suggest that speech-ABRs may have potential for clinical application as an objective measure of speech detection with hearing aids.
Braiman C, Fridman A, Conte MM, et al., 2018, Cortical response to the natural speech envelope correlates with neuroimaging evidence of cognition in severe brain injury, Current Biology, Vol: 28, Pages: 3833-3839, ISSN: 1879-0445
Recent studies identify severely brain-injured patients with limited or no behavioral responses who successfully perform functional magnetic resonance imaging (fMRI) or electroencephalogram (EEG) mental imagery tasks [1, 2, 3, 4, 5]. Such tasks are cognitively demanding; accordingly, recent studies support that fMRI command following in brain-injured patients associates with preserved cerebral metabolism and preserved sleep-wake EEG [5, 6]. We investigated the use of an EEG response that tracks the natural speech envelope (NSE) of spoken language [7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22] in healthy controls and brain-injured patients (vegetative state to emergence from minimally conscious state). As audition is typically preserved after brain injury, auditory paradigms may be preferred in searching for covert cognitive function [23, 24, 25]. NSE measures are obtained by cross-correlating EEG with the NSE. We compared NSE latencies and amplitudes with and without consideration of fMRI assessments. NSE latencies showed significant and progressive delay across diagnostic categories. Patients who could carry out fMRI-based mental imagery tasks showed no statistically significant difference in NSE latencies relative to healthy controls; this subgroup included patients without behavioral command following. The NSE may stratify patients with severe brain injuries and identify those patients demonstrating “cognitive motor dissociation” (CMD) who show only covert evidence of command following utilizing neuroimaging or electrophysiological methods that demand high levels of cognitive function. Thus, the NSE is a passive measure that may provide a useful screening tool to improve detection of covert cognition with fMRI or other methods and improve stratification of patients with disorders of consciousness in research studies.
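The NSE measure described above rests on a simple computation: cross-correlate the EEG with the natural speech envelope and read off the latency and amplitude of the correlation peak. A minimal sketch, assuming a single EEG channel and an illustrative sampling rate and latency window (not the study's exact parameters):

```python
import numpy as np

def nse_cross_correlation(eeg, envelope, fs, max_lag_s=0.5):
    """Cross-correlate one EEG channel with the natural speech envelope
    (NSE) and return (peak latency in s, peak correlation), scanning
    positive lags only, i.e. EEG lagging the envelope."""
    eeg = (eeg - eeg.mean()) / eeg.std()
    envelope = (envelope - envelope.mean()) / envelope.std()
    lags = np.arange(int(max_lag_s * fs) + 1)
    corr = np.array([np.mean(envelope[:len(envelope) - lag] * eeg[lag:])
                     for lag in lags])
    peak = np.argmax(np.abs(corr))
    return lags[peak] / fs, corr[peak]

# Toy check: an "EEG" that is the envelope delayed by 150 ms.
fs = 100
rng = np.random.default_rng(0)
env = rng.standard_normal(fs * 10)
delay = int(0.15 * fs)
eeg = np.concatenate([np.zeros(delay), env[:-delay]])
latency, r = nse_cross_correlation(eeg, env, fs)
assert abs(latency - 0.15) < 0.02 and r > 0.9
```

The study's finding of progressively delayed NSE latencies across diagnostic categories corresponds, in this picture, to a rightward shift of the correlation peak.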
Kegler M, Etard OE, Forte AE, et al., 2018, Complex Statistical Model for Detecting the Auditory Brainstem Response to Natural Speech and for Decoding Attention from High-Density EEG Recordings, ARO 2018
Forte AE, Etard OE, Reichenbach JDT, 2018, Selective Auditory Attention At The Brainstem Level, ARO 2018
Saiz Alia M, Askari A, Forte AE, et al., 2018, A model of the human auditory brainstem response to running speech, ARO 2018
Etard OE, Kegler M, Braiman C, et al., 2018, Real-time decoding of selective attention from the human auditory brainstem response to continuous speech, BioRxiv
Reichenbach JDT, Ciganovic N, Warren R, et al., 2018, Static length changes of cochlear outer hair cells can tune low-frequency hearing, PLoS Computational Biology, Vol: 14, ISSN: 1553-734X
The cochlea not only transduces sound-induced vibration into neural spikes, it also amplifies weak sound to boost its detection. Actuators of this active process are sensory outer hair cells in the organ of Corti, whereas the inner hair cells transduce the resulting motion into electric signals that propagate via the auditory nerve to the brain. However, how the outer hair cells modulate the stimulus to the inner hair cells remains unclear. Here, we combine theoretical modeling and experimental measurements near the cochlear apex to study the way in which length changes of the outer hair cells deform the organ of Corti. We develop a geometry-based kinematic model of the apical organ of Corti that reproduces salient, yet counter-intuitive features of the organ’s motion. Our analysis further uncovers a mechanism by which a static length change of the outer hair cells can sensitively tune the signal transmitted to the sensory inner hair cells. When the outer hair cells are in an elongated state, stimulation of inner hair cells is largely inhibited, whereas outer hair cell contraction leads to a substantial enhancement of sound-evoked motion near the hair bundles. This novel mechanism for regulating the sensitivity of the hearing organ applies to the low frequencies that are most important for the perception of speech and music. We suggest that the proposed mechanism might underlie frequency discrimination at low auditory frequencies, as well as our ability to selectively attend auditory signals in noisy surroundings.
Forte AE, Etard O, Reichenbach J, 2017, The human auditory brainstem response to running speech reveals a subcortical mechanism for selective attention, eLife, Vol: 6, ISSN: 2050-084X
Humans excel at selectively listening to a target speaker in background noise such as competing voices. While the encoding of speech in the auditory cortex is modulated by selective attention, it remains debated whether such modulation occurs already in subcortical auditory structures. Investigating the contribution of the human brainstem to attention has, in particular, been hindered by the tiny amplitude of the brainstem response. Its measurement normally requires a large number of repetitions of the same short sound stimuli, which may lead to a loss of attention and to neural adaptation. Here we develop a mathematical method to measure the auditory brainstem response to running speech, an acoustic stimulus that does not repeat and that has a high ecological validity. We employ this method to assess the brainstem's activity when a subject listens to one of two competing speakers, and show that the brainstem response is consistently modulated by attention.
Kegler M, Etard O, Forte AE, et al., 2017, Complex statistical model for detecting the auditory brainstem response to natural speech and for decoding attention, Basic Auditory Science 2017
Forte AE, Etard O, Reichenbach J, 2017, Selective auditory attention modulates the human brainstem's response to running speech, Basic Auditory Science 2017
Etard O, Reichenbach J, 2017, EEG-measured correlates of comprehension in speech-in-noise listening, Basic Auditory Science 2017
Sidiras C, Iliadou V, Nimatoudis I, et al., 2017, Spoken word recognition enhancement due to preceding synchronized beats compared to unsynchronized or unrhythmic beats, Frontiers in Neuroscience, Vol: 11, ISSN: 1662-4548
The relation between rhythm and language has been investigated over the last decades, with evidence that these share overlapping perceptual mechanisms emerging from several different strands of research. The Dynamic Attending Theory posits that neural entrainment to musical rhythm results in synchronized oscillations in attention, enhancing perception of other events occurring at the same rate. In this study, this prediction was tested in 10-year-old children by means of a psychoacoustic speech recognition in babble paradigm. It was hypothesized that rhythm effects evoked via a short isochronous sequence of beats would provide optimal word recognition in babble when beats and word are in sync. We compared speech recognition in babble performance in the presence of isochronous and in sync vs. non-isochronous or out of sync sequence of beats. Results showed that (a) word recognition was the best when rhythm and word were in sync, and (b) the effect was not uniform across syllables and gender of subjects. Our results suggest that pure tone beats affect speech recognition at early levels of sensory or phonemic processing.
Ciganovic N, Wolde-Kidan A, Reichenbach JDT, 2017, Hair bundles of cochlear outer hair cells are shaped to minimize their fluid-dynamic resistance, Scientific Reports, Vol: 7, ISSN: 2045-2322
The mammalian sense of hearing relies on two types of sensory cells: inner hair cells transmit the auditory stimulus to the brain, while outer hair cells mechanically modulate the stimulus through active feedback. Stimulation of a hair cell is mediated by displacements of its mechanosensitive hair bundle which protrudes from the apical surface of the cell into a narrow fluid-filled space between reticular lamina and tectorial membrane. While hair bundles of inner hair cells are of linear shape, those of outer hair cells exhibit a distinctive V-shape. The biophysical rationale behind this morphology, however, remains unknown. Here we use analytical and computational methods to study the fluid flow across rows of differently shaped hair bundles. We find that rows of V-shaped hair bundles have a considerably reduced resistance to crossflow, and that the biologically observed shapes of hair bundles of outer hair cells are near-optimal in this regard. This observation accords with the function of outer hair cells and lends support to the recent hypothesis that inner hair cells are stimulated by a net flow, in addition to the well-established shear flow that arises from shearing between the reticular lamina and the tectorial membrane.
Forte AE, Etard O, Reichenbach J, 2017, Complex Auditory-brainstem Response to the Fundamental Frequency of Continuous Natural Speech, ARO 2017
Warren RL, Ramamoorthy S, Ciganovic N, et al., 2016, Minimal basilar membrane motion in low-frequency hearing, Proceedings of the National Academy of Sciences of the United States of America, Vol: 113, Pages: E4304-E4310, ISSN: 1091-6490
Low-frequency hearing is critically important for speech and music perception, but no mechanical measurements have previously been available from inner ears with intact low-frequency parts. These regions of the cochlea may function in ways different from the extensively studied high-frequency regions, where the sensory outer hair cells produce force that greatly increases the sound-evoked vibrations of the basilar membrane. We used laser interferometry in vitro and optical coherence tomography in vivo to study the low-frequency part of the guinea pig cochlea, and found that sound stimulation caused motion of a minimal portion of the basilar membrane. Outside the region of peak movement, an exponential decline in motion amplitude occurred across the basilar membrane. The moving region had different dependence on stimulus frequency than the vibrations measured near the mechanosensitive stereocilia. This behavior differs substantially from the behavior found in the extensively studied high-frequency regions of the cochlea.
Reichenbach CS, Braiman C, Schiff ND, et al., 2016, The auditory-brainstem response to continuous, non-repetitive speech is modulated by the speech envelope and reflects speech processing, Frontiers in Computational Neuroscience, Vol: 10, ISSN: 1662-5188
The auditory-brainstem response (ABR) to short and simple acoustical signals is an important clinical tool used to diagnose the integrity of the brainstem. The ABR is also employed to investigate the auditory brainstem in a multitude of tasks related to hearing, such as processing speech or selectively focusing on one speaker in a noisy environment. Such research measures the response of the brainstem to short speech signals such as vowels or words. Because the voltage signal of the ABR has a tiny amplitude, several hundred to a thousand repetitions of the acoustic signal are needed to obtain a reliable response. The large number of repetitions poses a challenge to assessing cognitive functions due to neural adaptation. Here we show that continuous, non-repetitive speech, lasting several minutes, may be employed to measure the ABR. Because the speech is not repeated during the experiment, the precise temporal form of the ABR cannot be determined. We show, however, that important structural features of the ABR can nevertheless be inferred. In particular, the brainstem responds at the fundamental frequency of the speech signal, and this response is modulated by the envelope of the voiced parts of speech. We accordingly introduce a novel measure that assesses the ABR as modulated by the speech envelope, at the fundamental frequency of speech and at the characteristic latency of the response. This measure has a high signal-to-noise ratio and can hence be employed effectively to measure the ABR to continuous speech. We use this novel measure to show that the auditory brainstem response is weaker to intelligible speech than to unintelligible, time-reversed speech. The methods presented here can be employed for further research on speech processing in the auditory brainstem and can lead to the development of future clinical diagnosis of brainstem function.
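The measure introduced above can be sketched in a few lines: band-pass the EEG around the speech's fundamental frequency, take its analytic amplitude, and correlate that amplitude with the speech envelope shifted by a candidate response latency. The filter order, f0 band, and latency below are illustrative assumptions, not the paper's exact values:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def envelope_modulated_f0_response(eeg, envelope, fs, f0_band=(90, 110),
                                   latency_s=0.01):
    """Correlate the analytic amplitude of the EEG, band-passed around the
    fundamental frequency, with the speech envelope shifted by a candidate
    response latency; returns the Pearson correlation."""
    nyq = fs / 2
    b, a = butter(4, [f0_band[0] / nyq, f0_band[1] / nyq], btype='band')
    f0_amplitude = np.abs(hilbert(filtfilt(b, a, eeg)))
    lag = int(latency_s * fs)
    return np.corrcoef(envelope[:len(envelope) - lag], f0_amplitude[lag:])[0, 1]

# Toy check: a 100 Hz carrier, amplitude-modulated by a slow envelope
# and delayed by 10 ms, should yield a high correlation.
fs = 1000
t = np.arange(0, 10, 1 / fs)
envelope = 1.0 + 0.5 * np.sin(2 * np.pi * 3 * t)
carrier = envelope * np.sin(2 * np.pi * 100 * t)
lag = int(0.01 * fs)
eeg = np.concatenate([np.zeros(lag), carrier[:-lag]])
assert envelope_modulated_f0_response(eeg, envelope, fs) > 0.8
```

Scanning the correlation over candidate latencies and using the envelope of the voiced speech parts, as the paper does, would then pick out the characteristic brainstem response latency.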
Reichenbach T, 2016, Hearing Damage Through Blast, Blast Injury Science and Engineering, Publisher: Springer International Publishing, Pages: 307-314, ISBN: 9783319218663
Reichenbach JDT, Meltzer B, Reichenbach CS, et al., 2015, The steady-state response of the cerebral cortex to the beat of music reflects both the comprehension of music and attention, Frontiers in Human Neuroscience, Vol: 9, ISSN: 1662-5161
The brain's analyses of speech and music share a range of neural resources and mechanisms. Music displays a temporal structure of complexity similar to that of speech, unfolds over comparable timescales, and elicits cognitive demands in tasks involving comprehension and attention. During speech processing, synchronized neural activity of the cerebral cortex in the delta and theta frequency bands tracks the envelope of a speech signal, and this neural activity is modulated by high-level cortical functions such as speech comprehension and attention. It remains unclear, however, whether the cortex also responds to the natural rhythmic structure of music and how the response, if present, is influenced by higher cognitive processes. Here we employ electroencephalography (EEG) to show that the cortex responds to the beat of music and that this steady-state response reflects musical comprehension and attention. We show that the cortical response to the beat is weaker when subjects listen to a familiar tune than when they listen to an unfamiliar, nonsensical musical piece. Furthermore, we show that in a task of intermodal attention there is a larger neural response at the beat frequency when subjects attend to a musical stimulus than when they ignore the auditory signal and instead focus on a visual one. Our findings may be applied in clinical assessments of auditory processing and music cognition as well as in the construction of auditory brain-machine interfaces.
Reichenbach T, Stefanovic A, Nin F, et al., 2015, Otoacoustic Emission Through Waves on Reissner's Membrane, 12th International Workshop on the Mechanics of Hearing, Publisher: AMER INST PHYSICS, ISSN: 0094-243X
Reichenbach T, Hudspeth AJ, 2014, The physics of hearing: fluid mechanics and the active process of the inner ear, REPORTS ON PROGRESS IN PHYSICS, Vol: 77, ISSN: 0034-4885
Tchumatchenko T, Reichenbach T, 2014, A wave of cochlear bone deformation can underlie bone conduction and otoacoustic emissions, 12th International Workshop on the Mechanics of Hearing, Publisher: AIP Publishing LLC, ISSN: 0094-243X
A sound signal is transmitted to the cochlea through vibration of the middle ear that induces a pressure difference across the cochlea’s elastic basilar membrane. In an alternative pathway for transmission, the basilar membrane can also be deflected by vibration of the cochlear bone, without participation of the middle ear. This second pathway, termed bone conduction, is increasingly used in commercial applications, namely in bone-conduction headphones that deliver sound through vibration of the skull. The mechanism of this transmission, however, remains unclear. Here, we study a cochlear model in which the cochlear bone is deformable. We show that deformation of the cochlear bone, such as resulting from bone stimulation, elicits a wave on the basilar membrane and can hence explain bone conduction. Interestingly, stimulation of the basilar membrane can in turn elicit a wave of deformation of the cochlear bone. We show that this has implications for the propagation of otoacoustic emissions: these can emerge from the cochlea through waves of bone deformation.
Tchumatchenko T, Reichenbach T, 2014, A cochlear-bone wave can yield a hearing sensation as well as otoacoustic emission, NATURE COMMUNICATIONS, Vol: 5, ISSN: 2041-1723
Dobrinevski A, Alava M, Reichenbach T, et al., 2014, Mobility-dependent selection of competing strategy associations, Physical Review E - Statistical, Nonlinear, and Soft Matter Physics, Vol: 89, ISSN: 1539-3755
Standard models of population dynamics focus on the interaction, survival, and extinction of the competing species individually. Real ecological systems, however, are characterized by an abundance of species (or strategies, in the terminology of evolutionary-game theory) that form intricate, complex interaction networks. The description of the ensuing dynamics may be aided by studying associations of certain strategies rather than individual ones. Here we show how such a higher-level description can bear fruitful insight. Motivated from different strains of colicinogenic Escherichia coli bacteria, we investigate a four-strategy system which contains a three-strategy cycle and a neutral alliance of two strategies. We find that the stochastic, spatial model exhibits a mobility-dependent selection of either the three-strategy cycle or of the neutral pair. We analyze this intriguing phenomenon numerically and analytically. © 2014 American Physical Society.
Reichenbach T, 2014, Otoacoustic emission through waves on Reissner's membrane and bone deformation, ISSN: 2221-3767
The inner ear acts not only as a detector of sound, but can produce sound itself. These otoacoustic emissions are generated by an active process in the inner ear. The active process leads to a nonlinearity that produces distortion that is emitted as sound from the ear. How such a distortion propagates from its generation site within the inner ear back to the middle ear remains, however, unclear. Here we describe two novel modes of wave propagation in the cochlea, namely a wave on the elastic Reissner's membrane as well as a wave of deformation of the cochlear bone. Each mode can explain a distinct component of otoacoustic emissions. The cochlear-bone deformation can also underlie bone conduction, the phenomenon by which we can hear a vibration of the skull as sound.
This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.