Kulkarni A, Kegler M, Reichenbach T, 2021, Effect of visual input on syllable parsing in a computational model of a neural microcircuit for speech processing, Journal of Neural Engineering, Vol: 5, Pages: 1-14, ISSN: 1741-2552
Seeing a person talking can help to understand them, in particular in a noisy environment. However, how the brain integrates the visual information with the auditory signal to enhance speech comprehension remains poorly understood. Here we address this question in a computational model of a cortical microcircuit for speech processing. The model consists of an excitatory and an inhibitory neural population that together create oscillations in the theta frequency range. When stimulated with speech, the theta rhythm becomes entrained to the onsets of syllables, such that the onsets can be inferred from the network activity. We investigate how well the obtained syllable parsing performs when different types of visual stimuli are added. In particular, we consider currents related to the rate of syllables as well as currents related to the mouth-opening area of the talking faces. We find that currents that target the excitatory neuronal population can influence speech comprehension, either boosting or impeding it, depending on the temporal delay and on whether the currents are excitatory or inhibitory. In contrast, currents that act on the inhibitory neurons do not impact speech comprehension significantly. Our results suggest neural mechanisms for the integration of visual information with the acoustic information in speech and make experimentally testable predictions.
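The excitatory-inhibitory loop described in this abstract can be illustrated with a minimal Wilson-Cowan-style rate model. This is a generic sketch only: the coupling strengths, time constants, and external drive below are illustrative assumptions, not the parameters of the published microcircuit.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def simulate_ei(t_max=2.0, dt=1e-3, drive=1.5):
    """Euler-integrate a Wilson-Cowan-style excitatory (E) /
    inhibitory (I) population pair.  With suitable coupling such a
    pair can oscillate at a few Hz, loosely analogous to a theta
    rhythm; all constants here are illustrative, not fitted."""
    n = int(t_max / dt)
    E = np.zeros(n)
    I = np.zeros(n)
    tau_e, tau_i = 0.02, 0.04                   # time constants (s)
    w_ee, w_ei, w_ie, w_ii = 16.0, 12.0, 15.0, 3.0  # coupling strengths
    for k in range(n - 1):
        E[k+1] = E[k] + dt / tau_e * (-E[k] + sigmoid(w_ee*E[k] - w_ei*I[k] + drive))
        I[k+1] = I[k] + dt / tau_i * (-I[k] + sigmoid(w_ie*E[k] - w_ii*I[k]))
    return E, I

E, I = simulate_ei()   # firing rates stay bounded in [0, 1]
```

In such a model, a visual input would enter as an extra current added to the `drive` term of one population, which is how the abstract's excitatory- versus inhibitory-targeting currents could be compared.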
Keshavarzi M, Varano E, Reichenbach J, 2021, Cortical tracking of a background speaker modulates the comprehension of a foreground speech signal, The Journal of Neuroscience, Vol: 41, Pages: 5093-5101, ISSN: 0270-6474
Understanding speech in background noise is a difficult task. The tracking of speech rhythms such as the rate of syllables and words by cortical activity has emerged as a key neural mechanism for speech-in-noise comprehension. In particular, recent investigations have used transcranial alternating current stimulation (tACS) with the envelope of a speech signal to influence the cortical speech tracking, demonstrating that this type of stimulation modulates comprehension and therefore evidencing a functional role of the cortical tracking in speech processing. Cortical activity has been found to track the rhythms of a background speaker as well, but the functional significance of this neural response remains unclear. Here we employ a speech-comprehension task with a target speaker in the presence of a distractor voice to show that tACS with the speech envelope of the target voice as well as tACS with the envelope of the distractor speaker both modulate the comprehension of the target speech. Because the envelope of the distractor speech does not carry information about the target speech stream, the modulation of speech comprehension through tACS with this envelope evidences that the cortical tracking of the background speaker affects the comprehension of the foreground speech signal. The phase dependency of the resulting modulation of speech comprehension is, however, opposite to that obtained from tACS with the envelope of the target speech signal. This suggests that the cortical tracking of the ignored speech stream and that of the attended speech stream may compete for neural resources.
Saiz Alia M, Miller P, Reichenbach J, 2021, Otoacoustic emissions evoked by the time-varying harmonic structure of speech, eNeuro, Vol: 8, Pages: 1-12, ISSN: 2373-2822
The human auditory system is exceptional at comprehending an individual speaker even in complex acoustic environments. Because the inner ear, or cochlea, possesses an active mechanism that can be controlled by subsequent neural processing centers through descending nerve fibers, it may already contribute to speech processing. The cochlear activity can be assessed by recording otoacoustic emissions (OAEs), but employing these emissions to assess speech processing in the cochlea is obstructed by the complexity of natural speech. Here, we develop a novel methodology to measure OAEs that are related to the time-varying harmonic structure of speech [speech-distortion-product OAEs (DPOAEs)]. We then employ the method to investigate the effect of selective attention on the speech-DPOAEs. We provide tentative evidence that the speech-DPOAEs are larger when the corresponding speech signal is attended than when it is ignored. Our development of speech-DPOAEs opens up a path to further investigations of the contribution of the cochlea to the processing of complex real-world signals.
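The speech-DPOAE idea rests on simple frequency arithmetic: two primary tones f1 and f2 evoke a cubic distortion product at 2f1 − f2. The sketch below shows how that product tracks a time-varying fundamental frequency, under the illustrative assumption (not taken from the paper) that the primaries are adjacent harmonics of the fundamental.

```python
import numpy as np

def dpoae_track(f0, n=4):
    """For a time-varying fundamental-frequency contour f0 (Hz),
    return the cubic distortion product 2*f1 - f2 generated by the
    adjacent harmonics f1 = n*f0 and f2 = (n+1)*f0.  For harmonics
    this simplifies to (n-1)*f0, so the distortion product itself
    follows the harmonic structure of the speech.  The choice of
    harmonic pair n is a hypothetical example."""
    f0 = np.asarray(f0, dtype=float)
    f1, f2 = n * f0, (n + 1) * f0
    return 2.0 * f1 - f2

freqs = dpoae_track([100.0, 120.0], n=4)   # -> 300 Hz, 360 Hz
```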
Sumner L, Mestel A, Reichenbach J, 2021, Steady streaming as a method for drug delivery to the inner ear, Scientific Reports, Vol: 11, Pages: 1-12, ISSN: 2045-2322
The inner ear, or cochlea, is a fluid-filled organ housing the mechanosensitive hair cells. Sound stimulation is relayed to the hair cells through waves that propagate on the elastic basilar membrane. Sensorineural hearing loss occurs from damage to the hair cells and cannot currently be cured. Although drugs have been proposed to prevent damage or restore functionality to hair cells, a difficulty with such treatments is ensuring adequate drug delivery to the cells. Because the cochlea is encased in the temporal bone, it can only be accessed from its basal end. However, the hair cells that are responsible for detecting speech-frequency sounds reside at the opposite, apical end. In this paper we show that steady streaming can be used to transport drugs along the cochlea. Steady streaming is a nonlinear process that accompanies many fluctuating fluid motions, including the sound-evoked waves in the inner ear. We combine an analytical approximation for the waves in the cochlea with computational fluid dynamic simulations to demonstrate that the combined steady streaming effects of several different frequencies can transport drugs from the base of the cochlea further towards the apex. Our results therefore show that multi-frequency sound stimulation can serve as a non-invasive method to transport drugs efficiently along the cochlea.
Kegler M, Reichenbach J, 2021, Modelling the effects of transcranial alternating current stimulation on the neural encoding of speech in noise, NeuroImage, Vol: 224, ISSN: 1053-8119
Transcranial alternating current stimulation (tACS) can non-invasively modulate neuronal activity in the cerebral cortex, in particular at the frequency of the applied stimulation. Such modulation can matter for speech processing, since the latter involves the tracking of slow amplitude fluctuations in speech by cortical activity. tACS with a current signal that follows the envelope of a speech stimulus has indeed been found to influence the cortical tracking and to modulate the comprehension of the speech in background noise. However, how exactly tACS influences the speech-related cortical activity, and how it causes the observed effects on speech comprehension, remains poorly understood. A computational model for cortical speech processing in a biophysically plausible spiking neural network has recently been proposed. Here we extended the model to investigate the effects of different types of stimulation waveforms, similar to those previously applied in experimental studies, on the processing of speech in noise. We assessed in particular how well speech could be decoded from the neural network activity when paired with the exogenous stimulation. We found that, in the absence of current stimulation, the speech-in-noise decoding accuracy was comparable to the comprehension of speech in background noise of human listeners. We further found that current stimulation could alter the speech decoding accuracy by a few percent, comparable to the effects of tACS on speech-in-noise comprehension. Our simulations further allowed us to identify the parameters for the stimulation waveforms that yielded the largest enhancement of speech-in-noise encoding. Our model thereby provides insight into the potential neural mechanisms by which weak alternating current stimulation may influence speech comprehension and allows a large range of stimulation waveforms to be screened for their effect on speech processing.
Saiz-Alia M, Reichenbach T, 2020, Computational modeling of the auditory brainstem response to continuous speech, Journal of Neural Engineering, Vol: 17, Pages: 1-12, ISSN: 1741-2552
OBJECTIVE: The auditory brainstem response can be recorded non-invasively from scalp electrodes and serves as an important clinical measure of hearing function. We have recently shown how the brainstem response at the fundamental frequency of continuous, non-repetitive speech can be measured, and have used this measure to demonstrate that the response is modulated by selective attention. However, different parts of the speech signal as well as several parts of the brainstem contribute to this response. Here we employ a computational model of the brainstem to elucidate the influence of these different factors. APPROACH: We developed a computational model of the auditory brainstem by combining a model of the middle and inner ear with a model of globular bushy cells in the cochlear nuclei and with a phenomenological model of the inferior colliculus. We then employed the model to investigate the neural response to continuous speech at different stages in the brainstem, following the methodology developed recently by ourselves for detecting the brainstem response to running speech from scalp recordings. We compared the simulations with recordings from healthy volunteers. MAIN RESULTS: We found that the auditory-nerve fibers, the cochlear nuclei and the inferior colliculus all contributed to the speech-evoked brainstem response, although the dominant contribution came from the inferior colliculus. The delay of the response corresponded to that observed in experiments. We further found that a broad range of harmonics of the fundamental frequency, up to about 8 kHz, contributed to the brainstem response. The response declined with increasing fundamental frequency, although the signal-to-noise ratio was largely unaffected. SIGNIFICANCE: Our results suggest that the scalp-recorded brainstem response at the fundamental frequency of speech originates predominantly in the inferior colliculus. They further show that the response is shaped by a large number of higher harmonics of the fundamental frequency.
Reichenbach J, Keshavarzi M, 2020, Transcranial alternating current stimulation with the theta-band portion of the temporally-aligned speech envelope improves speech-in-noise comprehension, Frontiers in Human Neuroscience, Vol: 14, Pages: 1-8, ISSN: 1662-5161
Transcranial alternating current stimulation with the speech envelope can modulate the comprehension of speech in noise. The modulation stems from the theta- but not the delta-band portion of the speech envelope, and likely reflects the entrainment of neural activity in the theta frequency band, which may aid the parsing of the speech stream. The influence of the current stimulation on speech comprehension can vary with the time delay between the current waveform and the audio signal. While this effect has been investigated for current stimulation based on the entire speech envelope, it has not yet been measured when the current waveform follows the theta-band portion of the speech envelope. Here, we show that transcranial current stimulation with the speech envelope filtered in the theta frequency band improves speech comprehension as compared to a sham stimulus. The improvement occurs when there is no time delay between the current and the speech stimulus, as well as when the temporal delay is comparatively short, 90 ms. In contrast, longer delays, as well as negative delays, do not impact speech-in-noise comprehension. Moreover, we find that the improvement of speech comprehension at no or small delays of the current stimulation is consistent across participants. Our findings suggest that cortical entrainment to speech is most influenced through current stimulation that follows the speech envelope with at most a small delay. They also open a path to enhancing the perception of speech in noise, an issue that is particularly important for people with hearing impairment.
Ota T, Nin F, Choi S, et al., 2020, Characterisation of the static offset in the travelling wave in the cochlear basal turn, Pflügers Archiv European Journal of Physiology, Vol: 472, Pages: 625-635, ISSN: 0031-6768
In mammals, audition is triggered by travelling waves that are evoked by acoustic stimuli in the cochlear partition, a structure containing sensory hair cells and a basilar membrane. When the cochlea is stimulated by a pure tone of low frequency, a static offset occurs in the vibration in the apical turn. In the high-frequency region at the cochlear base, multi-tone stimuli induce a quadratic distortion product in the vibrations that suggests the presence of an offset. However, vibrations below 100 Hz, including a static offset, have not been directly measured there. We therefore constructed an interferometer for detecting motion at low frequencies including 0 Hz. We applied the interferometer to record vibrations from the cochlear base of guinea pigs in response to pure tones. When the animals were exposed to sound at an intensity of 70 dB or higher, we recorded a static offset of the sinusoidally vibrating cochlear partition by more than 1 nm towards the scala vestibuli. The offset’s magnitude grew monotonically as the stimuli intensified. When stimulus frequency was varied, the response peaked around the best frequency, the frequency that maximised the vibration amplitude at threshold sound pressure. These characteristics are consistent with those found in the low-frequency region and are therefore likely common across the cochlea. The offset diminished markedly when the somatic motility of mechanosensitive outer hair cells, the force-generating machinery that amplifies the sinusoidal vibrations, was pharmacologically blocked. Therefore, the partition offset appears to be linked to the electromotile contraction of outer hair cells.
Keshavarzi M, Kegler M, Kadir S, et al., 2020, Transcranial alternating current stimulation in the theta band but not in the delta band modulates the comprehension of naturalistic speech in noise, NeuroImage, Vol: 210, ISSN: 1053-8119
Auditory cortical activity entrains to speech rhythms and has been proposed as a mechanism for online speech processing. In particular, neural activity in the theta frequency band (4–8 Hz) tracks the onset of syllables which may aid the parsing of a speech stream. Similarly, cortical activity in the delta band (1–4 Hz) entrains to the onset of words in natural speech and has been found to encode both syntactic as well as semantic information. Such neural entrainment to speech rhythms is not merely an epiphenomenon of other neural processes, but plays a functional role in speech processing: modulating the neural entrainment through transcranial alternating current stimulation influences the speech-related neural activity and modulates the comprehension of degraded speech. However, the distinct functional contributions of the delta- and of the theta-band entrainment to the modulation of speech comprehension have not yet been investigated. Here we use transcranial alternating current stimulation with waveforms derived from the speech envelope and filtered in the delta and theta frequency bands to alter cortical entrainment in both bands separately. We find that transcranial alternating current stimulation in the theta band but not in the delta band impacts speech comprehension. Moreover, we find that transcranial alternating current stimulation with the theta-band portion of the speech envelope can improve speech-in-noise comprehension beyond sham stimulation. Our results show a distinct contribution of the theta- but not of the delta-band stimulation to the modulation of speech comprehension. In addition, our findings open up a potential avenue of enhancing the comprehension of speech in noise.
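The band split described above (delta 1-4 Hz, theta 4-8 Hz) amounts to band-limiting the speech envelope before it is used as a stimulation waveform. A minimal sketch follows, using a zero-phase FFT-mask filter on a synthetic envelope; a real study would use a proper FIR/IIR filter design, so treat this as illustrative only.

```python
import numpy as np

def band_limit(envelope, fs, lo, hi):
    """Keep only the lo-hi Hz portion of a signal by zeroing FFT
    bins outside the band (a zero-phase 'brick-wall' filter)."""
    spec = np.fft.rfft(envelope)
    freqs = np.fft.rfftfreq(len(envelope), d=1.0 / fs)
    spec[(freqs < lo) | (freqs > hi)] = 0.0
    return np.fft.irfft(spec, n=len(envelope))

# toy envelope containing one delta-band and one theta-band rhythm
fs = 100.0
t = np.arange(0, 10, 1 / fs)
env = np.sin(2 * np.pi * 2 * t) + np.sin(2 * np.pi * 6 * t)
delta = band_limit(env, fs, 1.0, 4.0)   # recovers the 2 Hz component
theta = band_limit(env, fs, 4.0, 8.0)   # recovers the 6 Hz component
```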
Vanheusden F, Kegler M, Ireland K, et al., 2020, Hearing aids do not alter cortical entrainment to speech at audible levels in mild-to-moderately hearing-impaired subjects, Frontiers in Human Neuroscience, Vol: 14, Pages: 1-13, ISSN: 1662-5161
Background: Cortical entrainment to speech correlates with speech intelligibility and attention to a speech stream in noisy environments. However, there is a lack of data on whether cortical entrainment can help in evaluating hearing aid fittings for subjects with mild to moderate hearing loss. One particular problem that may arise is that hearing aids may alter the speech stimulus during (pre-)processing steps, which might alter cortical entrainment to the speech. Here, the effect of hearing aid processing on cortical entrainment to running speech in hearing impaired subjects was investigated. Methodology: Seventeen native English-speaking subjects with mild-to-moderate hearing loss participated in the study. Hearing function and hearing aid fitting were evaluated using standard clinical procedures. Participants then listened to a 25-min audiobook under aided and unaided conditions at 70 dBA sound pressure level (SPL) in quiet conditions. EEG data were collected using a 32-channel system. Cortical entrainment to speech was evaluated using decoders reconstructing the speech envelope from the EEG data. Null decoders, obtained from EEG and the time-reversed speech envelope, were used to assess the chance level reconstructions. Entrainment in the delta- (1–4 Hz) and theta- (4–8 Hz) band, as well as wideband (1–20 Hz) EEG data was investigated. Results: Significant cortical responses could be detected for all but one subject in all three frequency bands under both aided and unaided conditions. However, no significant differences could be found between the two conditions in the number of responses detected, nor in the strength of cortical entrainment. The results show that the relatively small change in speech input provided by the hearing aid was not sufficient to elicit a detectable change in cortical entrainment. Conclusion: For subjects with mild to moderate hearing loss, cortical entrainment to speech in quiet at an audible level is not affected by hearing aid processing.
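The envelope-reconstruction decoders mentioned in the abstract are typically linear regularized regressions ("backward models"). The sketch below fits a minimal instantaneous ridge decoder on synthetic data; published decoders use a matrix of time-lagged EEG features, so this is a simplified stand-in, not the study's actual pipeline.

```python
import numpy as np

def fit_backward_model(eeg, envelope, lam=1.0):
    """Ridge-regression decoder reconstructing the speech envelope
    from multichannel EEG: w = (X'X + lam*I)^-1 X'y.  Instantaneous
    (no time-lags) for brevity."""
    X, y = eeg, envelope
    n_feat = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_feat), X.T @ y)

# toy data: the envelope is a noisy linear mixture of 8 'channels'
rng = np.random.default_rng(0)
eeg = rng.standard_normal((2000, 8))
true_w = rng.standard_normal(8)
env = eeg @ true_w + 0.1 * rng.standard_normal(2000)

w = fit_backward_model(eeg, env, lam=1.0)
recon = eeg @ w
r = np.corrcoef(recon, env)[0, 1]   # reconstruction accuracy
```

The "null decoder" control from the abstract would correspond to refitting `w` against a time-reversed `env`, which should yield a reconstruction correlation near zero.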
Reichenbach J, Kadir S, Kaza C, et al., 2020, Modulation of speech-in-noise comprehension through transcranial current stimulation with the phase-shifted speech envelope, IEEE Transactions on Neural Systems and Rehabilitation Engineering, Vol: 28, Pages: 23-31, ISSN: 1534-4320
Neural activity tracks the envelope of a speech signal at latencies from 50 ms to 300 ms. Modulating this neural tracking through transcranial alternating current stimulation influences speech comprehension. Two important variables that can affect this modulation are the latency and the phase of the stimulation with respect to the sound. While previous studies have found an influence of both variables on speech comprehension, the interaction between both has not yet been measured. We presented 17 subjects with speech in noise coupled with simultaneous transcranial alternating current stimulation. The currents were based on the envelope of the target speech but shifted by different phases, as well as by two temporal delays of 100 ms and 250 ms. We also employed various control stimulations, and assessed the signal-to-noise ratio at which the subject understood half of the speech. We found that, at both latencies, speech comprehension is modulated by the phase of the current stimulation. However, the form of the modulation differed between the two latencies. Phase and latency of neurostimulation have accordingly distinct influences on speech comprehension. The different effects at the latencies of 100 ms and 250 ms hint at distinct neural processes for speech processing.
Weissbart H, Reichenbach J, Kandylaki K, 2020, Cortical tracking of surprisal during continuous speech comprehension, Journal of Cognitive Neuroscience, Vol: 32, Pages: 155-166, ISSN: 0898-929X
Speech comprehension requires rapid online processing of a continuous acoustic signal to extract structure and meaning. Previous studies on sentence comprehension have found neural correlates of the predictability of a word given its context, as well as of the precision of such a prediction. However, they have focussed on single sentences and on particular words in those sentences. Moreover, they compared neural responses to words with low and high predictability, as well as with low and high precision. However, in speech comprehension a listener hears many successive words whose predictability and precision vary over a large range. Here we show that cortical activity in different frequency bands tracks word surprisal in continuous natural speech, and that this tracking is modulated by precision. We obtain these results through quantifying surprisal and precision from naturalistic speech using a deep neural network, and through relating these speech features to electroencephalographic (EEG) responses of human volunteers acquired during auditory story comprehension. We find significant cortical tracking of surprisal at low frequencies including the delta band as well as in the higher-frequency beta and gamma bands, and observe that the tracking is modulated by the precision. Our results pave the way to further investigate the neurobiology of natural speech comprehension.
Etard O, Kegler M, Braiman C, et al., 2019, Decoding of selective attention to continuous speech from the human auditory brainstem response, NeuroImage, Vol: 200, Pages: 1-11, ISSN: 1053-8119
Humans are highly skilled at analysing complex acoustic scenes. The segregation of different acoustic streams and the formation of corresponding neural representations is mostly attributed to the auditory cortex. Decoding of selective attention from neuroimaging has therefore focussed on cortical responses to sound. However, the auditory brainstem response to speech is modulated by selective attention as well, as recently shown through measuring the brainstem's response to running speech. Although the response of the auditory brainstem has a smaller magnitude than that of the auditory cortex, it occurs at much higher frequencies and therefore has a higher information rate. Here we develop statistical models for extracting the brainstem response from multi-channel scalp recordings and for analysing the attentional modulation according to the focus of attention. We demonstrate that the attentional modulation of the brainstem response to speech can be employed to decode the attentional focus of a listener from short measurements of 10 s or less in duration. The decoding remains accurate when obtained from three EEG channels only. We further show how out-of-the-box decoding that employs subject-independent models, as well as decoding that is independent of the specific attended speaker is capable of achieving similar accuracy. These results open up new avenues for investigating the neural mechanisms for selective attention in the brainstem and for developing efficient auditory brain-computer interfaces.
Saiz Alia M, Forte A, Reichenbach J, 2019, Individual differences in the attentional modulation of the human auditory brainstem response to speech inform on speech-in-noise deficits, Scientific Reports, Vol: 9, ISSN: 2045-2322
People with normal hearing thresholds can nonetheless have difficulty with understanding speech in noisy backgrounds. The origins of such supra-threshold hearing deficits remain largely unclear. Previously we showed that the auditory brainstem response to running speech is modulated by selective attention, evidencing a subcortical mechanism that contributes to speech-in-noise comprehension. We observed, however, significant variation in the magnitude of the brainstem’s attentional modulation between the different volunteers. Here we show that this variability relates to the ability of the subjects to understand speech in background noise. In particular, we assessed 43 young human volunteers with normal hearing thresholds for their speech-in-noise comprehension. We also recorded their auditory brainstem responses to running speech when selectively attending to one of two competing voices. To control for potential peripheral hearing deficits, and in particular for cochlear synaptopathy, we further assessed noise exposure, the temporal sensitivity threshold, the middle-ear muscle reflex, and the auditory-brainstem response to clicks in various levels of background noise. These tests did not show evidence for cochlear synaptopathy amongst the volunteers. Furthermore, we found that only the attentional modulation of the brainstem response to speech was significantly related to speech-in-noise comprehension. Our results therefore evidence an impact of top-down modulation of brainstem activity on the variability in speech-in-noise comprehension amongst the subjects.
Etard O, Reichenbach J, 2019, Neural speech tracking in the theta and in the delta frequency band differentially encode clarity and comprehension of speech in noise, Journal of Neuroscience, Vol: 39, Pages: 5750-5759, ISSN: 0270-6474
Humans excel at understanding speech even in adverse conditions such as background noise. Speech processing may be aided by cortical activity in the delta and theta frequency bands that has been found to track the speech envelope. However, the rhythm of non-speech sounds is tracked by cortical activity as well. It therefore remains unclear which aspects of neural speech tracking represent the processing of acoustic features, related to the clarity of speech, and which aspects reflect higher-level linguistic processing related to speech comprehension. Here we disambiguate the roles of cortical tracking for speech clarity and comprehension through recording EEG responses to native and foreign language in different levels of background noise, for which clarity and comprehension vary independently. We then use both a decoding and an encoding approach to relate clarity and comprehension to the neural responses. We find that cortical tracking in the theta frequency band is mainly correlated to clarity, while the delta band contributes most to speech comprehension. Moreover, we uncover an early neural component in the delta band that informs on comprehension and that may reflect a predictive mechanism for language processing. Our results disentangle the functional contributions of cortical speech tracking in the delta and theta bands to speech processing. They also show that both speech clarity and comprehension can be accurately decoded from relatively short segments of EEG recordings, which may have applications in future mind-controlled auditory prostheses.
BinKhamis G, Forte AE, Reichenbach J, et al., 2019, Speech auditory brainstem responses in adult hearing aid users: Effects of aiding and background noise, and prediction of behavioral measures, Trends in Hearing, Vol: 23, Pages: 1-20, ISSN: 2331-2165
Evaluation of patients who are unable to provide behavioral responses on standard clinical measures is challenging due to the lack of standard objective (non-behavioral) clinical audiological measures that assess the outcome of an intervention (e.g. hearing aids). Brainstem responses to short consonant-vowel stimuli (speech-ABRs) have been proposed as a measure of subcortical encoding of speech, speech detection, and speech-in-noise performance in individuals with normal hearing. Here, we investigated the potential of speech-ABRs as an objective clinical outcome measure of speech detection, speech-in-noise detection and recognition, and self-reported speech understanding in adults with sensorineural hearing loss. We compared aided and unaided speech-ABRs, and speech-ABRs in quiet and in noise. Additionally, we evaluated whether speech-ABR F0 encoding (obtained from the complex cross-correlation with the 40 ms [da] fundamental waveform) predicted aided behavioral speech recognition in noise and/or aided self-reported speech understanding. Results showed: (i) aided speech-ABRs had earlier peak latencies, larger peak amplitudes, and larger F0 encoding amplitudes compared to unaided speech-ABRs; (ii) the addition of background noise resulted in later F0 encoding latencies, but did not have an effect on peak latencies and amplitudes, or on F0 encoding amplitudes; and (iii) speech-ABRs were not a significant predictor of any of the behavioral or self-report measures. These results show that speech-ABR F0 encoding is not a good predictor of speech-in-noise recognition or self-reported speech understanding with hearing aids. However, our results suggest that speech-ABRs may have potential for clinical application as an objective measure of speech detection with hearing aids.
Braiman C, Fridman A, Conte MM, et al., 2018, Cortical response to the natural speech envelope correlates with neuroimaging evidence of cognition in severe brain injury, Current Biology, Vol: 28, Pages: 3833-3839.E3, ISSN: 0960-9822
Recent studies identify severely brain-injured patients with limited or no behavioral responses who successfully perform functional magnetic resonance imaging (fMRI) or electroencephalogram (EEG) mental imagery tasks [1, 2, 3, 4, 5]. Such tasks are cognitively demanding; accordingly, recent studies support that fMRI command following in brain-injured patients associates with preserved cerebral metabolism and preserved sleep-wake EEG [5, 6]. We investigated the use of an EEG response that tracks the natural speech envelope (NSE) of spoken language [7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22] in healthy controls and brain-injured patients (vegetative state to emergence from minimally conscious state). As audition is typically preserved after brain injury, auditory paradigms may be preferred in searching for covert cognitive function [23, 24, 25]. NSE measures are obtained by cross-correlating EEG with the NSE. We compared NSE latencies and amplitudes with and without consideration of fMRI assessments. NSE latencies showed significant and progressive delay across diagnostic categories. Patients who could carry out fMRI-based mental imagery tasks showed no statistically significant difference in NSE latencies relative to healthy controls; this subgroup included patients without behavioral command following. The NSE may stratify patients with severe brain injuries and identify those patients demonstrating “cognitive motor dissociation” (CMD), who show only covert evidence of command following utilizing neuroimaging or electrophysiological methods that demand high levels of cognitive function. Thus, the NSE is a passive measure that may provide a useful screening tool to improve detection of covert cognition with fMRI or other methods and improve stratification of patients with disorders of consciousness in research studies.
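The NSE measure above, cross-correlating EEG with the natural speech envelope and reading off the latency of the correlation peak, can be sketched on synthetic data as follows. The sampling rate, delay, and noise level are illustrative assumptions, not the study's values.

```python
import numpy as np

def nse_latency(eeg, envelope, fs, max_lag_s=0.5):
    """Cross-correlate a single EEG channel with the natural speech
    envelope (NSE) and return the positive lag (in seconds) at which
    the correlation peaks -- a simple stand-in for the latency
    measure described above."""
    lags = np.arange(1, int(max_lag_s * fs))
    corrs = [np.corrcoef(eeg[lag:], envelope[:-lag])[0, 1] for lag in lags]
    return lags[int(np.argmax(corrs))] / fs

# toy EEG: the envelope delayed by 100 ms, plus noise
fs = 100
rng = np.random.default_rng(1)
env = rng.standard_normal(3000)
delay = int(0.1 * fs)                                    # 100 ms
eeg = np.r_[np.zeros(delay), env[:-delay]] + 0.5 * rng.standard_normal(3000)

latency = nse_latency(eeg, env, fs)   # recovers the 100 ms delay
```

A progressive latency delay across diagnostic categories, as reported in the abstract, would correspond to this peak lag shifting to larger values.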
Forte AE, Etard OE, Reichenbach JDT, 2018, Selective Auditory Attention At The Brainstem Level, ARO 2018
Saiz Alia M, Askari A, Forte AE, et al., 2018, A model of the human auditory brainstem response to running speech, ARO 2018
Kegler M, Etard OE, Forte AE, et al., 2018, Complex Statistical Model for Detecting the Auditory Brainstem Response to Natural Speech and for Decoding Attention from High-Density EEG Recordings, ARO 2018
Etard OE, Kegler M, Braiman C, et al., 2018, Real-time decoding of selective attention from the human auditory brainstem response to continuous speech, bioRxiv
Reichenbach JDT, Ciganovic N, Warren R, et al., 2018, Static length changes of cochlear outer hair cells can tune low-frequency hearing, PLoS Computational Biology, Vol: 14, ISSN: 1553-734X
The cochlea not only transduces sound-induced vibration into neural spikes, it also amplifies weak sound to boost its detection. Actuators of this active process are sensory outer hair cells in the organ of Corti, whereas the inner hair cells transduce the resulting motion into electric signals that propagate via the auditory nerve to the brain. However, how the outer hair cells modulate the stimulus to the inner hair cells remains unclear. Here, we combine theoretical modeling and experimental measurements near the cochlear apex to study the way in which length changes of the outer hair cells deform the organ of Corti. We develop a geometry-based kinematic model of the apical organ of Corti that reproduces salient, yet counter-intuitive features of the organ’s motion. Our analysis further uncovers a mechanism by which a static length change of the outer hair cells can sensitively tune the signal transmitted to the sensory inner hair cells. When the outer hair cells are in an elongated state, stimulation of inner hair cells is largely inhibited, whereas outer hair cell contraction leads to a substantial enhancement of sound-evoked motion near the hair bundles. This novel mechanism for regulating the sensitivity of the hearing organ applies to the low frequencies that are most important for the perception of speech and music. We suggest that the proposed mechanism might underlie frequency discrimination at low auditory frequencies, as well as our ability to selectively attend auditory signals in noisy surroundings.
Forte AE, Etard O, Reichenbach J, 2017, The human auditory brainstem response to running speech reveals a subcortical mechanism for selective attention, eLife, Vol: 6, ISSN: 2050-084X
Humans excel at selectively listening to a target speaker in background noise such as competing voices. While the encoding of speech in the auditory cortex is modulated by selective attention, it remains debated whether such modulation already occurs in subcortical auditory structures. Investigating the contribution of the human brainstem to attention has, in particular, been hindered by the tiny amplitude of the brainstem response. Its measurement normally requires a large number of repetitions of the same short sound stimulus, which may lead to a loss of attention and to neural adaptation. Here we develop a mathematical method to measure the auditory brainstem response to running speech, an acoustic stimulus that does not repeat and that has high ecological validity. We employ this method to assess the brainstem's activity when a subject listens to one of two competing speakers, and show that the brainstem response is consistently modulated by attention.
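The core idea behind measuring a brainstem response to non-repeating speech is to relate the ongoing EEG to a waveform derived from the speech signal at short, brainstem-like latencies. The sketch below is a minimal, hypothetical illustration of that kind of analysis: it cross-correlates an EEG channel with a speech-derived fundamental waveform over a range of short lags and reports where the correlation peaks. The function name, sampling rate, and lag range are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def latency_scan(eeg, fundamental, fs, max_lag_ms=20):
    """Correlate an EEG channel with a speech-derived fundamental
    waveform at each lag from 0 to max_lag_ms (illustrative sketch,
    not the published method). Returns (lags_in_ms, correlations)."""
    # z-score both signals so the correlation is scale-invariant
    eeg = (eeg - eeg.mean()) / eeg.std()
    fundamental = (fundamental - fundamental.mean()) / fundamental.std()
    max_lag = int(fs * max_lag_ms / 1000)
    lags = np.arange(max_lag)
    # Pearson correlation between lagged EEG and the stimulus waveform
    corr = np.array([
        np.corrcoef(eeg[lag:], fundamental[:len(fundamental) - lag])[0, 1]
        for lag in lags
    ])
    return lags / fs * 1000.0, corr
```

In this sketch, a neural response at a latency of, say, 10 ms would appear as a correlation peak at a 10 ms lag; averaging such correlations over long stretches of running speech is what makes the tiny response detectable without stimulus repetition.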
Kegler M, Etard O, Forte AE, et al., 2017, Complex statistical model for detecting the auditory brainstem response to natural speech and for decoding attention, Basic Auditory Science 2017
Forte AE, Etard O, Reichenbach J, 2017, Selective auditory attention modulates the human brainstem's response to running speech, Basic Auditory Science 2017
Etard O, Reichenbach J, 2017, EEG-measured correlates of comprehension in speech-in-noise listening, Basic Auditory Science 2017
Sidiras C, Iliadou V, Nimatoudis I, et al., 2017, Spoken word recognition enhancement due to preceding synchronized beats compared to unsynchronized or unrhythmic beats, Frontiers in Neuroscience, Vol: 11, ISSN: 1662-4548
The relation between rhythm and language has been investigated over the last decades, with evidence that the two share overlapping perceptual mechanisms emerging from several different strands of research. The Dynamic Attending Theory posits that neural entrainment to musical rhythm results in synchronized oscillations in attention, enhancing the perception of other events occurring at the same rate. In this study, this prediction was tested in 10-year-old children by means of a psychoacoustic speech-recognition-in-babble paradigm. It was hypothesized that rhythm effects evoked via a short isochronous sequence of beats would yield optimal word recognition in babble when the beats and the word are in sync. We compared speech-recognition-in-babble performance in the presence of an isochronous and in-sync versus a non-isochronous or out-of-sync sequence of beats. Results showed that (a) word recognition was best when rhythm and word were in sync, and (b) the effect was not uniform across syllables and the gender of subjects. Our results suggest that pure-tone beats affect speech recognition at early levels of sensory or phonemic processing.
Ciganovic N, Wolde-Kidan A, Reichenbach JDT, 2017, Hair bundles of cochlear outer hair cells are shaped to minimize their fluid-dynamic resistance, Scientific Reports, Vol: 7, ISSN: 2045-2322
The mammalian sense of hearing relies on two types of sensory cells: inner hair cells transmit the auditory stimulus to the brain, while outer hair cells mechanically modulate the stimulus through active feedback. Stimulation of a hair cell is mediated by displacements of its mechanosensitive hair bundle which protrudes from the apical surface of the cell into a narrow fluid-filled space between reticular lamina and tectorial membrane. While hair bundles of inner hair cells are of linear shape, those of outer hair cells exhibit a distinctive V-shape. The biophysical rationale behind this morphology, however, remains unknown. Here we use analytical and computational methods to study the fluid flow across rows of differently shaped hair bundles. We find that rows of V-shaped hair bundles have a considerably reduced resistance to crossflow, and that the biologically observed shapes of hair bundles of outer hair cells are near-optimal in this regard. This observation accords with the function of outer hair cells and lends support to the recent hypothesis that inner hair cells are stimulated by a net flow, in addition to the well-established shear flow that arises from shearing between the reticular lamina and the tectorial membrane.
Forte AE, Etard O, Reichenbach J, 2017, Complex Auditory-brainstem Response to the Fundamental Frequency of Continuous Natural Speech, ARO 2017
Warren RL, Ramamoorthy S, Ciganovic N, et al., 2016, Minimal basilar membrane motion in low-frequency hearing, Proceedings of the National Academy of Sciences of the United States of America, Vol: 113, Pages: E4304-E4310, ISSN: 1091-6490
Low-frequency hearing is critically important for speech and music perception, but no mechanical measurements have previously been available from inner ears with intact low-frequency parts. These regions of the cochlea may function in ways different from the extensively studied high-frequency regions, where the sensory outer hair cells produce force that greatly increases the sound-evoked vibrations of the basilar membrane. We used laser interferometry in vitro and optical coherence tomography in vivo to study the low-frequency part of the guinea pig cochlea, and found that sound stimulation caused motion of a minimal portion of the basilar membrane. Outside the region of peak movement, an exponential decline in motion amplitude occurred across the basilar membrane. The moving region had different dependence on stimulus frequency than the vibrations measured near the mechanosensitive stereocilia. This behavior differs substantially from the behavior found in the extensively studied high-frequency regions of the cochlea.
This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.