Reichenbach J, Keshavarzi M, 2020, Transcranial alternating current stimulation with the theta-band portion of the temporally-aligned speech envelope improves speech-in-noise comprehension, Frontiers in Human Neuroscience, Vol: 14, Pages: 1-8, ISSN: 1662-5161
Transcranial alternating current stimulation with the speech envelope can modulate the comprehension of speech in noise. The modulation stems from the theta- but not the delta-band portion of the speech envelope, and likely reflects the entrainment of neural activity in the theta frequency band, which may aid the parsing of the speech stream. The influence of the current stimulation on speech comprehension can vary with the time delay between the current waveform and the audio signal. While this effect has been investigated for current stimulation based on the entire speech envelope, it has not yet been measured when the current waveform follows the theta-band portion of the speech envelope. Here, we show that transcranial current stimulation with the speech envelope filtered in the theta frequency band improves speech comprehension as compared to a sham stimulus. The improvement occurs when there is no time delay between the current and the speech stimulus, as well as when the temporal delay is comparatively short, 90 ms. In contrast, longer delays, as well as negative delays, do not impact speech-in-noise comprehension. Moreover, we find that the improvement of speech comprehension at no or small delays of the current stimulation is consistent across participants. Our findings suggest that cortical entrainment to speech is most influenced through current stimulation that follows the speech envelope with at most a small delay. They also open a path to enhancing the perception of speech in noise, an issue that is particularly important for people with hearing impairment.
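A minimal sketch of how a theta-band stimulation waveform of this kind could be derived from speech: take the broadband envelope (Hilbert magnitude) and band-pass it to 4–8 Hz. The third-order Butterworth filter and the synthetic amplitude-modulated "speech" below are illustrative assumptions, not the authors' exact pipeline:

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def theta_band_envelope(audio, fs):
    """Broadband envelope via the Hilbert transform, band-passed to 4-8 Hz."""
    env = np.abs(hilbert(audio))
    b, a = butter(3, [4 / (fs / 2), 8 / (fs / 2)], btype="bandpass")
    return filtfilt(b, a, env)  # zero-phase filtering keeps envelope timing

fs = 1000
t = np.arange(0, 5, 1 / fs)
# Toy "speech": a 200 Hz carrier amplitude-modulated at a syllabic 5 Hz rate.
audio = (1 + np.sin(2 * np.pi * 5 * t)) * np.sin(2 * np.pi * 200 * t)
theta_env = theta_band_envelope(audio, fs)
```

The resulting waveform oscillates at the 5 Hz modulation rate, i.e. within the theta band, and could then be scaled and (optionally) delayed before driving the current stimulation.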
Saiz-Alia M, Reichenbach T, 2020, Computational modeling of the auditory brainstem response to continuous speech., Journal of Neural Engineering, ISSN: 1741-2552
OBJECTIVE: The auditory brainstem response can be recorded non-invasively from scalp electrodes and serves as an important clinical measure of hearing function. We have recently shown how the brainstem response at the fundamental frequency of continuous, non-repetitive speech can be measured, and have used this measure to demonstrate that the response is modulated by selective attention. However, different parts of the speech signal as well as several parts of the brainstem contribute to this response. Here we employ a computational model of the brainstem to elucidate the influence of these different factors. APPROACH: We developed a computational model of the auditory brainstem by combining a model of the middle and inner ear with a model of globular bushy cells in the cochlear nuclei and with a phenomenological model of the inferior colliculus. We then employed the model to investigate the neural response to continuous speech at different stages in the brainstem, following the methodology that we recently developed for detecting the brainstem response to running speech from scalp recordings. We compared the simulations with recordings from healthy volunteers. MAIN RESULTS: We found that the auditory-nerve fibers, the cochlear nuclei and the inferior colliculus all contributed to the speech-evoked brainstem response, although the dominant contribution came from the inferior colliculus. The delay of the response corresponded to that observed in experiments. We further found that a broad range of harmonics of the fundamental frequency, up to about 8 kHz, contributed to the brainstem response. The response declined with increasing fundamental frequency, although the signal-to-noise ratio was largely unaffected. SIGNIFICANCE: Our results suggest that the scalp-recorded brainstem response at the fundamental frequency of speech originates predominantly in the inferior colliculus. They further show that the response is shaped by a large number of higher harmonics of
Ota T, Nin F, Choi S, et al., 2020, Characterisation of the static offset in the travelling wave in the cochlear basal turn, Pflügers Archiv European Journal of Physiology, Vol: 472, Pages: 625-635, ISSN: 0031-6768
In mammals, audition is triggered by travelling waves that are evoked by acoustic stimuli in the cochlear partition, a structure containing sensory hair cells and a basilar membrane. When the cochlea is stimulated by a pure tone of low frequency, a static offset occurs in the vibration in the apical turn. In the high-frequency region at the cochlear base, multi-tone stimuli induce a quadratic distortion product in the vibrations that suggests the presence of an offset. However, vibrations below 100 Hz, including a static offset, have not been directly measured there. We therefore constructed an interferometer for detecting motion at low frequencies including 0 Hz. We applied the interferometer to record vibrations from the cochlear base of guinea pigs in response to pure tones. When the animals were exposed to sound at an intensity of 70 dB or higher, we recorded a static offset of the sinusoidally vibrating cochlear partition by more than 1 nm towards the scala vestibuli. The offset’s magnitude grew monotonically as the stimuli intensified. When stimulus frequency was varied, the response peaked around the best frequency, the frequency that maximised the vibration amplitude at threshold sound pressure. These characteristics are consistent with those found in the low-frequency region and are therefore likely common across the cochlea. The offset diminished markedly when the somatic motility of mechanosensitive outer hair cells, the force-generating machinery that amplifies the sinusoidal vibrations, was pharmacologically blocked. Therefore, the partition offset appears to be linked to the electromotile contraction of outer hair cells.
Keshavarzi M, Kegler M, Kadir S, et al., 2020, Transcranial alternating current stimulation in the theta band but not in the delta band modulates the comprehension of naturalistic speech in noise, NeuroImage, Vol: 210, ISSN: 1053-8119
Auditory cortical activity entrains to speech rhythms and has been proposed as a mechanism for online speech processing. In particular, neural activity in the theta frequency band (4–8 Hz) tracks the onset of syllables which may aid the parsing of a speech stream. Similarly, cortical activity in the delta band (1–4 Hz) entrains to the onset of words in natural speech and has been found to encode both syntactic as well as semantic information. Such neural entrainment to speech rhythms is not merely an epiphenomenon of other neural processes, but plays a functional role in speech processing: modulating the neural entrainment through transcranial alternating current stimulation influences the speech-related neural activity and modulates the comprehension of degraded speech. However, the distinct functional contributions of the delta- and of the theta-band entrainment to the modulation of speech comprehension have not yet been investigated. Here we use transcranial alternating current stimulation with waveforms derived from the speech envelope and filtered in the delta and theta frequency bands to alter cortical entrainment in both bands separately. We find that transcranial alternating current stimulation in the theta band but not in the delta band impacts speech comprehension. Moreover, we find that transcranial alternating current stimulation with the theta-band portion of the speech envelope can improve speech-in-noise comprehension beyond sham stimulation. Our results show a distinct contribution of the theta- but not of the delta-band stimulation to the modulation of speech comprehension. In addition, our findings open up a potential avenue of enhancing the comprehension of speech in noise.
Vanheusden F, Kegler M, Ireland K, et al., 2020, Hearing aids do not alter cortical entrainment to speech at audible levels in mild-to-moderately hearing-impaired subjects, Frontiers in Human Neuroscience, Vol: 14, Pages: 1-13, ISSN: 1662-5161
Background: Cortical entrainment to speech correlates with speech intelligibility and attention to a speech stream in noisy environments. However, there is a lack of data on whether cortical entrainment can help in evaluating hearing aid fittings for subjects with mild to moderate hearing loss. One particular problem that may arise is that hearing aids may alter the speech stimulus during (pre-)processing steps, which might alter cortical entrainment to the speech. Here, the effect of hearing aid processing on cortical entrainment to running speech in hearing-impaired subjects was investigated. Methodology: Seventeen native English-speaking subjects with mild-to-moderate hearing loss participated in the study. Hearing function and hearing aid fitting were evaluated using standard clinical procedures. Participants then listened to a 25-min audiobook under aided and unaided conditions at 70 dBA sound pressure level (SPL) in quiet conditions. EEG data were collected using a 32-channel system. Cortical entrainment to speech was evaluated using decoders reconstructing the speech envelope from the EEG data. Null decoders, obtained from EEG and the time-reversed speech envelope, were used to assess the chance-level reconstructions. Entrainment in the delta- (1–4 Hz) and theta- (4–8 Hz) band, as well as wideband (1–20 Hz) EEG data was investigated. Results: Significant cortical responses could be detected for all but one subject in all three frequency bands under both aided and unaided conditions. However, no significant differences could be found between the two conditions in the number of responses detected, nor in the strength of cortical entrainment. The results show that the relatively small change in speech input provided by the hearing aid was not sufficient to elicit a detectable change in cortical entrainment. Conclusion: For subjects with mild to moderate hearing loss, cortical entrainment to speech in quiet at an audible level is not affected by hearing aid processing.
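Envelope-reconstruction decoders of the kind mentioned above are commonly linear "backward models" fit by ridge regression over time-lagged EEG. A self-contained sketch under that assumption (the lag range, regularization strength, and toy data are illustrative, not the study's exact parameters):

```python
import numpy as np

def lagged_features(eeg, max_lag):
    """Stack time-lagged copies of each EEG channel (samples x channels)."""
    n_samples, n_channels = eeg.shape
    feats = np.zeros((n_samples, n_channels * max_lag))
    for lag in range(max_lag):
        feats[lag:, lag * n_channels:(lag + 1) * n_channels] = eeg[:n_samples - lag]
    return feats

def train_decoder(eeg, envelope, max_lag=16, alpha=1.0):
    """Fit ridge-regression weights mapping lagged EEG to the speech envelope."""
    X = lagged_features(eeg, max_lag)
    return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ envelope)

def reconstruct(eeg, w, max_lag=16):
    return lagged_features(eeg, max_lag) @ w

# Toy check: EEG that linearly encodes a random envelope is well reconstructed.
rng = np.random.default_rng(0)
env = rng.standard_normal(2000)
eeg = np.column_stack([np.roll(env, k) for k in range(4)])  # 4 synthetic "channels"
w = train_decoder(eeg, env)
r = np.corrcoef(reconstruct(eeg, w), env)[0, 1]
```

A "null decoder" in the sense used above would be obtained by calling `train_decoder` with the time-reversed envelope (`env[::-1]`), giving a chance-level baseline for the reconstruction correlation.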
Reichenbach J, Kadir S, Kaza C, et al., 2020, Modulation of speech-in-noise comprehension through transcranial current stimulation with the phase-shifted speech envelope, IEEE Transactions on Neural Systems and Rehabilitation Engineering, Vol: 28, Pages: 23-31, ISSN: 1534-4320
Neural activity tracks the envelope of a speech signal at latencies from 50 ms to 300 ms. Modulating this neural tracking through transcranial alternating current stimulation influences speech comprehension. Two important variables that can affect this modulation are the latency and the phase of the stimulation with respect to the sound. While previous studies have found an influence of both variables on speech comprehension, the interaction between both has not yet been measured. We presented 17 subjects with speech in noise coupled with simultaneous transcranial alternating current stimulation. The currents were based on the envelope of the target speech but shifted by different phases, as well as by two temporal delays of 100 ms and 250 ms. We also employed various control stimulations, and assessed the signal-to-noise ratio at which the subject understood half of the speech. We found that, at both latencies, speech comprehension is modulated by the phase of the current stimulation. However, the form of the modulation differed between the two latencies. Phase and latency of neurostimulation have accordingly distinct influences on speech comprehension. The different effects at the latencies of 100 ms and 250 ms hint at distinct neural processes for speech processing.
Weissbart H, Reichenbach J, Kandylaki K, 2020, Cortical tracking of surprisal during continuous speech comprehension, Journal of Cognitive Neuroscience, Vol: 32, Pages: 155-166, ISSN: 0898-929X
Speech comprehension requires rapid online processing of a continuous acoustic signal to extract structure and meaning. Previous studies on sentence comprehension have found neural correlates of the predictability of a word given its context, as well as of the precision of such a prediction. However, they have focussed on single sentences and on particular words in those sentences. Moreover, they compared neural responses to words with low and high predictability, as well as with low and high precision. However, in speech comprehension a listener hears many successive words whose predictability and precision vary over a large range. Here we show that cortical activity in different frequency bands tracks word surprisal in continuous natural speech, and that this tracking is modulated by precision. We obtain these results through quantifying surprisal and precision from naturalistic speech using a deep neural network, and through relating these speech features to electroencephalographic (EEG) responses of human volunteers acquired during auditory story comprehension. We find significant cortical tracking of surprisal at low frequencies including the delta band as well as in the higher-frequency beta and gamma bands, and observe that the tracking is modulated by the precision. Our results pave the way to further investigate the neurobiology of natural speech comprehension.
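Word surprisal, as used above, is simply the negative log-probability of a word given its context; the study obtains these probabilities from a deep neural network. A toy sketch with a hypothetical bigram table standing in for the network (the words and probabilities are invented for illustration):

```python
import math

# Hypothetical conditional probabilities P(word | previous word).
bigram_probs = {("the", "dog"): 0.2, ("the", "theorem"): 0.001}

def surprisal(context, word, probs):
    """Surprisal in bits: -log2 P(word | context)."""
    return -math.log2(probs[(context, word)])

s_common = surprisal("the", "dog", bigram_probs)    # predictable word, low surprisal
s_rare = surprisal("the", "theorem", bigram_probs)  # unexpected word, high surprisal
```

The per-word surprisal values computed this way form the regressor that is then related to the EEG (e.g. through a temporal response function).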
Etard O, Kegler M, Braiman C, et al., 2019, Decoding of selective attention to continuous speech from the human auditory brainstem response, NeuroImage, Vol: 200, Pages: 1-11, ISSN: 1053-8119
Humans are highly skilled at analysing complex acoustic scenes. The segregation of different acoustic streams and the formation of corresponding neural representations is mostly attributed to the auditory cortex. Decoding of selective attention from neuroimaging has therefore focussed on cortical responses to sound. However, the auditory brainstem response to speech is modulated by selective attention as well, as recently shown through measuring the brainstem's response to running speech. Although the response of the auditory brainstem has a smaller magnitude than that of the auditory cortex, it occurs at much higher frequencies and therefore has a higher information rate. Here we develop statistical models for extracting the brainstem response from multi-channel scalp recordings and for analysing the attentional modulation according to the focus of attention. We demonstrate that the attentional modulation of the brainstem response to speech can be employed to decode the attentional focus of a listener from short measurements of 10 s or less in duration. The decoding remains accurate when obtained from three EEG channels only. We further show how out-of-the-box decoding that employs subject-independent models, as well as decoding that is independent of the specific attended speaker is capable of achieving similar accuracy. These results open up new avenues for investigating the neural mechanisms for selective attention in the brainstem and for developing efficient auditory brain-computer interfaces.
Saiz Alia M, Forte A, Reichenbach J, 2019, Individual differences in the attentional modulation of the human auditory brainstem response to speech inform on speech-in-noise deficits, Scientific Reports, Vol: 9, ISSN: 2045-2322
People with normal hearing thresholds can nonetheless have difficulty with understanding speech in noisy backgrounds. The origins of such supra-threshold hearing deficits remain largely unclear. Previously we showed that the auditory brainstem response to running speech is modulated by selective attention, evidencing a subcortical mechanism that contributes to speech-in-noise comprehension. We observed, however, significant variation in the magnitude of the brainstem’s attentional modulation between the different volunteers. Here we show that this variability relates to the ability of the subjects to understand speech in background noise. In particular, we assessed 43 young human volunteers with normal hearing thresholds for their speech-in-noise comprehension. We also recorded their auditory brainstem responses to running speech when selectively attending to one of two competing voices. To control for potential peripheral hearing deficits, and in particular for cochlear synaptopathy, we further assessed noise exposure, the temporal sensitivity threshold, the middle-ear muscle reflex, and the auditory-brainstem response to clicks in various levels of background noise. These tests did not show evidence for cochlear synaptopathy amongst the volunteers. Furthermore, we found that only the attentional modulation of the brainstem response to speech was significantly related to speech-in-noise comprehension. Our results therefore evidence an impact of top-down modulation of brainstem activity on the variability in speech-in-noise comprehension amongst the subjects.
Etard O, Reichenbach J, 2019, Neural speech tracking in the theta and in the delta frequency band differentially encode clarity and comprehension of speech in noise, Journal of Neuroscience, Vol: 39, Pages: 5750-5759, ISSN: 0270-6474
Humans excel at understanding speech even in adverse conditions such as background noise. Speech processing may be aided by cortical activity in the delta and theta frequency bands that has been found to track the speech envelope. However, the rhythm of non-speech sounds is tracked by cortical activity as well. It therefore remains unclear which aspects of neural speech tracking represent the processing of acoustic features, related to the clarity of speech, and which aspects reflect higher-level linguistic processing related to speech comprehension. Here we disambiguate the roles of cortical tracking for speech clarity and comprehension through recording EEG responses to native and foreign language in different levels of background noise, for which clarity and comprehension vary independently. We then use both a decoding and an encoding approach to relate clarity and comprehension to the neural responses. We find that cortical tracking in the theta frequency band is mainly correlated with clarity, while the delta band contributes most to speech comprehension. Moreover, we uncover an early neural component in the delta band that informs on comprehension and that may reflect a predictive mechanism for language processing. Our results disentangle the functional contributions of cortical speech tracking in the delta and theta bands to speech processing. They also show that both speech clarity and comprehension can be accurately decoded from relatively short segments of EEG recordings, which may have applications in future mind-controlled auditory prostheses.
BinKhamis G, Forte AE, Reichenbach J, et al., 2019, Speech auditory brainstem responses in adult hearing aid users: Effects of aiding and background noise, and prediction of behavioral measures, Trends in Hearing, Vol: 23, Pages: 1-20, ISSN: 2331-2165
Evaluation of patients who are unable to provide behavioral responses on standard clinical measures is challenging due to the lack of standard objective (non-behavioral) clinical audiological measures that assess the outcome of an intervention (e.g. hearing aids). Brainstem responses to short consonant-vowel stimuli (speech-ABRs) have been proposed as a measure of subcortical encoding of speech, speech detection, and speech-in-noise performance in individuals with normal hearing. Here, we investigated the potential of speech-ABRs as an objective clinical outcome measure of speech detection, speech-in-noise detection and recognition, and self-reported speech understanding in adults with sensorineural hearing loss. We compared aided and unaided speech-ABRs, and speech-ABRs in quiet and in noise. Additionally, we evaluated whether speech-ABR F0 encoding (obtained from the complex cross-correlation with the 40 ms [da] fundamental waveform) predicted aided behavioral speech recognition in noise and/or aided self-reported speech understanding. Results showed: (i) aided speech-ABRs had earlier peak latencies, larger peak amplitudes, and larger F0 encoding amplitudes compared to unaided speech-ABRs; (ii) the addition of background noise resulted in later F0 encoding latencies, but did not have an effect on peak latencies and amplitudes, or on F0 encoding amplitudes; and (iii) speech-ABRs were not a significant predictor of any of the behavioral or self-report measures. These results show that speech-ABR F0 encoding is not a good predictor of speech-in-noise recognition or self-reported speech understanding with hearing aids. However, our results suggest that speech-ABRs may have potential for clinical application as an objective measure of speech detection with hearing aids.
Braiman C, Fridman A, Conte MM, et al., 2018, Cortical response to the natural speech envelope correlates with neuroimaging evidence of cognition in severe brain injury, Current Biology, Vol: 28, Pages: 3833-3839, ISSN: 1879-0445
Recent studies identify severely brain-injured patients with limited or no behavioral responses who successfully perform functional magnetic resonance imaging (fMRI) or electroencephalogram (EEG) mental imagery tasks [1, 2, 3, 4, 5]. Such tasks are cognitively demanding; accordingly, recent studies support that fMRI command following in brain-injured patients associates with preserved cerebral metabolism and preserved sleep-wake EEG [5, 6]. We investigated the use of an EEG response that tracks the natural speech envelope (NSE) of spoken language [7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22] in healthy controls and brain-injured patients (vegetative state to emergence from minimally conscious state). As audition is typically preserved after brain injury, auditory paradigms may be preferred in searching for covert cognitive function [23, 24, 25]. NSE measures are obtained by cross-correlating EEG with the NSE. We compared NSE latencies and amplitudes with and without consideration of fMRI assessments. NSE latencies showed significant and progressive delay across diagnostic categories. Patients who could carry out fMRI-based mental imagery tasks showed no statistically significant difference in NSE latencies relative to healthy controls; this subgroup included patients without behavioral command following. The NSE may stratify patients with severe brain injuries and identify those patients demonstrating “cognitive motor dissociation” (CMD) who show only covert evidence of command following utilizing neuroimaging or electrophysiological methods that demand high levels of cognitive function. Thus, the NSE is a passive measure that may provide a useful screening tool to improve detection of covert cognition with fMRI or other methods and improve stratification of patients with disorders of consciousness in research studies.
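The NSE measure described above, cross-correlating EEG with the natural speech envelope and reading off the peak's latency and amplitude, can be sketched as follows. The sampling rate, lag range, and synthetic signals are illustrative assumptions, not the study's recording parameters:

```python
import numpy as np

def nse_response(eeg, envelope, fs, max_lag_s=0.5):
    """Cross-correlate EEG with the speech envelope over lags 0..max_lag_s.

    Returns the latency (s) and amplitude of the largest-magnitude peak."""
    max_lag = int(max_lag_s * fs)
    sig = (eeg - eeg.mean()) / eeg.std()
    env = (envelope - envelope.mean()) / envelope.std()
    lags = np.arange(max_lag)
    xcorr = np.array([np.mean(sig[lag:] * env[:len(env) - lag]) for lag in lags])
    peak = np.argmax(np.abs(xcorr))
    return lags[peak] / fs, xcorr[peak]

# Toy check: EEG that is the envelope delayed by 150 ms peaks at 150 ms.
fs = 100
rng = np.random.default_rng(1)
env = rng.standard_normal(fs * 60)
eeg = np.roll(env, int(0.15 * fs))
latency, amplitude = nse_response(eeg, env, fs)
```

In the study, the latency of this peak is the quantity compared across diagnostic categories.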
Kegler M, Etard OE, Forte AE, et al., 2018, Complex Statistical Model for Detecting the Auditory Brainstem Response to Natural Speech and for Decoding Attention from High-Density EEG Recordings, ARO 2018
Saiz Alia M, Askari A, Forte AE, et al., 2018, A model of the human auditory brainstem response to running speech, ARO 2018
Forte AE, Etard OE, Reichenbach JDT, 2018, Selective Auditory Attention At The Brainstem Level, ARO 2018
Etard OE, Kegler M, Braiman C, et al., 2018, Real-time decoding of selective attention from the human auditory brainstem response to continuous speech, BioRxiv
Reichenbach JDT, Ciganovic N, Warren R, et al., 2018, Static length changes of cochlear outer hair cells can tune low-frequency hearing, PLoS Computational Biology, Vol: 14, ISSN: 1553-734X
The cochlea not only transduces sound-induced vibration into neural spikes, it also amplifies weak sound to boost its detection. Actuators of this active process are sensory outer hair cells in the organ of Corti, whereas the inner hair cells transduce the resulting motion into electric signals that propagate via the auditory nerve to the brain. However, how the outer hair cells modulate the stimulus to the inner hair cells remains unclear. Here, we combine theoretical modeling and experimental measurements near the cochlear apex to study the way in which length changes of the outer hair cells deform the organ of Corti. We develop a geometry-based kinematic model of the apical organ of Corti that reproduces salient, yet counter-intuitive features of the organ’s motion. Our analysis further uncovers a mechanism by which a static length change of the outer hair cells can sensitively tune the signal transmitted to the sensory inner hair cells. When the outer hair cells are in an elongated state, stimulation of inner hair cells is largely inhibited, whereas outer hair cell contraction leads to a substantial enhancement of sound-evoked motion near the hair bundles. This novel mechanism for regulating the sensitivity of the hearing organ applies to the low frequencies that are most important for the perception of speech and music. We suggest that the proposed mechanism might underlie frequency discrimination at low auditory frequencies, as well as our ability to selectively attend auditory signals in noisy surroundings.
Forte AE, Etard O, Reichenbach J, 2017, The human auditory brainstem response to running speech reveals a subcortical mechanism for selective attention, eLife, Vol: 6, ISSN: 2050-084X
Humans excel at selectively listening to a target speaker in background noise such as competing voices. While the encoding of speech in the auditory cortex is modulated by selective attention, it remains debated whether such modulation occurs already in subcortical auditory structures. Investigating the contribution of the human brainstem to attention has, in particular, been hindered by the tiny amplitude of the brainstem response. Its measurement normally requires a large number of repetitions of the same short sound stimuli, which may lead to a loss of attention and to neural adaptation. Here we develop a mathematical method to measure the auditory brainstem response to running speech, an acoustic stimulus that does not repeat and that has a high ecological validity. We employ this method to assess the brainstem's activity when a subject listens to one of two competing speakers, and show that the brainstem response is consistently modulated by attention.
Forte AE, Etard O, Reichenbach J, 2017, Selective auditory attention modulates the human brainstem's response to running speech, Basic Auditory Science 2017
Kegler M, Etard O, Forte AE, et al., 2017, Complex statistical model for detecting the auditory brainstem response to natural speech and for decoding attention, Basic Auditory Science 2017
Etard O, Reichenbach J, 2017, EEG-measured correlates of comprehension in speech-in-noise listening, Basic Auditory Science 2017
Sidiras C, Iliadou V, Nimatoudis I, et al., 2017, Spoken word recognition enhancement due to preceding synchronized beats compared to unsynchronized or unrhythmic beats, Frontiers in Neuroscience, Vol: 11, ISSN: 1662-4548
The relation between rhythm and language has been investigated over the last decades, with evidence that these share overlapping perceptual mechanisms emerging from several different strands of research. The Dynamic Attending Theory posits that neural entrainment to musical rhythm results in synchronized oscillations in attention, enhancing perception of other events occurring at the same rate. In this study, this prediction was tested in 10-year-old children by means of a psychoacoustic speech-recognition-in-babble paradigm. It was hypothesized that rhythm effects evoked via a short isochronous sequence of beats would provide optimal word recognition in babble when beats and word are in sync. We compared speech recognition in babble performance in the presence of an isochronous and in-sync vs. a non-isochronous or out-of-sync sequence of beats. Results showed that (a) word recognition was best when rhythm and word were in sync, and (b) the effect was not uniform across syllables and gender of subjects. Our results suggest that pure tone beats affect speech recognition at early levels of sensory or phonemic processing.
Ciganovic N, Wolde-Kidan A, Reichenbach JDT, 2017, Hair bundles of cochlear outer hair cells are shaped to minimize their fluid-dynamic resistance, Scientific Reports, Vol: 7, ISSN: 2045-2322
The mammalian sense of hearing relies on two types of sensory cells: inner hair cells transmit the auditory stimulus to the brain, while outer hair cells mechanically modulate the stimulus through active feedback. Stimulation of a hair cell is mediated by displacements of its mechanosensitive hair bundle which protrudes from the apical surface of the cell into a narrow fluid-filled space between reticular lamina and tectorial membrane. While hair bundles of inner hair cells are of linear shape, those of outer hair cells exhibit a distinctive V-shape. The biophysical rationale behind this morphology, however, remains unknown. Here we use analytical and computational methods to study the fluid flow across rows of differently shaped hair bundles. We find that rows of V-shaped hair bundles have a considerably reduced resistance to crossflow, and that the biologically observed shapes of hair bundles of outer hair cells are near-optimal in this regard. This observation accords with the function of outer hair cells and lends support to the recent hypothesis that inner hair cells are stimulated by a net flow, in addition to the well-established shear flow that arises from shearing between the reticular lamina and the tectorial membrane.
Forte AE, Etard O, Reichenbach J, 2017, Complex Auditory-brainstem Response to the Fundamental Frequency of Continuous Natural Speech, ARO 2017
Warren RL, Ramamoorthy S, Ciganovic N, et al., 2016, Minimal basilar membrane motion in low-frequency hearing, Proceedings of the National Academy of Sciences of the United States of America, Vol: 113, Pages: E4304-E4310, ISSN: 1091-6490
Low-frequency hearing is critically important for speech and music perception, but no mechanical measurements have previously been available from inner ears with intact low-frequency parts. These regions of the cochlea may function in ways different from the extensively studied high-frequency regions, where the sensory outer hair cells produce force that greatly increases the sound-evoked vibrations of the basilar membrane. We used laser interferometry in vitro and optical coherence tomography in vivo to study the low-frequency part of the guinea pig cochlea, and found that sound stimulation caused motion of a minimal portion of the basilar membrane. Outside the region of peak movement, an exponential decline in motion amplitude occurred across the basilar membrane. The moving region had different dependence on stimulus frequency than the vibrations measured near the mechanosensitive stereocilia. This behavior differs substantially from the behavior found in the extensively studied high-frequency regions of the cochlea.
Reichenbach CS, Braiman C, Schiff ND, et al., 2016, The auditory-brainstem response to continuous, non-repetitive speech is modulated by the speech envelope and reflects speech processing, Frontiers in Computational Neuroscience, Vol: 10, ISSN: 1662-5188
The auditory-brainstem response (ABR) to short and simple acoustical signals is an important clinical tool used to diagnose the integrity of the brainstem. The ABR is also employed to investigate the auditory brainstem in a multitude of tasks related to hearing, such as processing speech or selectively focusing on one speaker in a noisy environment. Such research measures the response of the brainstem to short speech signals such as vowels or words. Because the voltage signal of the ABR has a tiny amplitude, several hundred to a thousand repetitions of the acoustic signal are needed to obtain a reliable response. The large number of repetitions poses a challenge to assessing cognitive functions due to neural adaptation. Here we show that continuous, non-repetitive speech, lasting several minutes, may be employed to measure the ABR. Because the speech is not repeated during the experiment, the precise temporal form of the ABR cannot be determined. We show, however, that important structural features of the ABR can nevertheless be inferred. In particular, the brainstem responds at the fundamental frequency of the speech signal, and this response is modulated by the envelope of the voiced parts of speech. We accordingly introduce a novel measure that assesses the ABR as modulated by the speech envelope, at the fundamental frequency of speech and at the characteristic latency of the response. This measure has a high signal-to-noise ratio and can hence be employed effectively to measure the ABR to continuous speech. We use this novel measure to show that the auditory brainstem response is weaker to intelligible speech than to unintelligible, time-reversed speech. The methods presented here can be employed for further research on speech processing in the auditory brainstem and can lead to the development of future clinical diagnosis of brainstem function.
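The measure introduced above, the response at the fundamental frequency as modulated by the speech envelope and evaluated at the characteristic latency, can be sketched in simplified form as a correlation between the EEG and an envelope-modulated fundamental waveform. All signals and parameters below are synthetic stand-ins, not the study's recordings:

```python
import numpy as np

def abr_measure(eeg, f0_wave, envelope, fs, latency_s):
    """Correlate EEG with the envelope-modulated f0 waveform at a given latency."""
    reg = f0_wave * envelope               # envelope-modulated fundamental regressor
    shift = int(latency_s * fs)
    a = eeg[shift:]                        # EEG, advanced by the assumed latency
    b = reg[:len(reg) - shift]
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))           # Pearson correlation

fs = 1000
t = np.arange(0, 10, 1 / fs)
f0_wave = np.sin(2 * np.pi * 100 * t)              # 100 Hz fundamental
envelope = 0.5 + 0.5 * np.sin(2 * np.pi * 3 * t)   # slow (3 Hz) envelope
eeg = np.roll(f0_wave * envelope, 9)               # brainstem-like 9 ms delay
score_at_latency = abr_measure(eeg, f0_wave, envelope, fs, 0.009)
score_off_latency = abr_measure(eeg, f0_wave, envelope, fs, 0.004)
```

Evaluating the correlation at the response's characteristic latency, rather than at an arbitrary delay, is what gives the measure its high signal-to-noise ratio in this toy setting.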
Reichenbach T, 2016, Hearing Damage Through Blast, Blast Injury Science and Engineering, Publisher: Springer International Publishing, Pages: 307-314, ISBN: 9783319218663
Reichenbach JDT, Meltzer B, Reichenbach CS, et al., 2015, The steady-state response of the cerebral cortex to the beat of music reflects both the comprehension of music and attention, Frontiers in Human Neuroscience, Vol: 9, ISSN: 1662-5161
The brain's analyses of speech and music share a range of neural resources and mechanisms. Music displays a temporal structure of complexity similar to that of speech, unfolds over comparable timescales, and elicits cognitive demands in tasks involving comprehension and attention. During speech processing, synchronized neural activity of the cerebral cortex in the delta and theta frequency bands tracks the envelope of a speech signal, and this neural activity is modulated by high-level cortical functions such as speech comprehension and attention. It remains unclear, however, whether the cortex also responds to the natural rhythmic structure of music and how the response, if present, is influenced by higher cognitive processes. Here we employ electroencephalography (EEG) to show that the cortex responds to the beat of music and that this steady-state response reflects musical comprehension and attention. We show that the cortical response to the beat is weaker when subjects listen to a familiar tune than when they listen to an unfamiliar, nonsensical musical piece. Furthermore, we show that in a task of intermodal attention there is a larger neural response at the beat frequency when subjects attend to a musical stimulus than when they ignore the auditory signal and instead focus on a visual one. Our findings may be applied in clinical assessments of auditory processing and music cognition as well as in the construction of auditory brain-machine interfaces.
Reichenbach T, Stefanovic A, Nin F, et al., 2015, Otoacoustic Emission Through Waves on Reissner's Membrane, 12th International Workshop on the Mechanics of Hearing, Publisher: AMER INST PHYSICS, ISSN: 0094-243X
Reichenbach T, Hudspeth AJ, 2014, The physics of hearing: fluid mechanics and the active process of the inner ear, REPORTS ON PROGRESS IN PHYSICS, Vol: 77, ISSN: 0034-4885