Considerable work suggests the dominant syllable rhythm of the acoustic envelope is remarkably similar across languages (∼4–5 Hz) and that oscillatory brain activity tracks these quasiperiodic rhythms to facilitate speech processing. However, whether this fundamental periodicity represents a common organizing principle in both auditory and motor systems involved in speech has not been explicitly tested. To evaluate relations between entrainment in the perceptual and production domains, we measured individuals' (i) neuroacoustic tracking of the EEG to speech trains and (ii) simultaneous and non-simultaneous productions synchronized to syllable rates between 2.5 and 8.5 Hz. Productions made without concurrent auditory presentation more purely isolated motor speech functions. We show that neural synchronization flexibly adapts to the heard stimuli in a rate-dependent manner, but that phase locking is boosted near ∼4.5 Hz, the purported dominant rate of speech. Cued speech productions (recruiting sensorimotor interaction) were optimal between 2.5 and 4.5 Hz, suggesting a low-frequency constraint on motor output and/or sensorimotor integration. In contrast, "pure" motor productions (without concurrent sound cues) were most precisely generated at rates of 4.5 and 5.5 Hz, paralleling the neuroacoustic data. Correlations further revealed strong links between receptive (EEG) and production synchronization abilities: individuals with stronger auditory-perceptual entrainment better matched speech rhythms motorically. Together, our findings support an intimate link between exogenous and endogenous rhythmic processing that is optimized at 4–5 Hz in both auditory and motor systems. Parallels across modalities could result from dynamics of the speech motor system coupled with experience-dependent tuning of the perceptual system via the sensorimotor interface.

The auditory cortex faithfully tracks amplitude modulations in continuous sounds, regardless of whether those acoustic events are speech (Ahissar et al., 2001; Casas et al., 2021; Luo & Poeppel, 2007), modulated white noise (Henry & Obleser, 2012), or clicks (Will & Berg, 2007). This phenomenon, whereby a listener's rhythmic brain activity (i.e., oscillations) entrains to the physical signal, is described as neural synchronization or cortical tracking. Neurocognitive models suggest that the phase of ongoing brain oscillations, especially within the low theta band (4–8 Hz), locks to the slowly varying amplitude envelope to parse continuous sounds into discrete segments necessary for speech comprehension (Doelling et al., 2014; Ghitza, 2011, 2012; Giraud & Poeppel, 2012; Luo & Poeppel, 2007). In particular, speech syllable rhythms, which exhibit a quasiregularity in their envelope modulation (Ding et al., 2017; Tilsen & Johnson, 2008), have been used to study how the brain parses the continuous speech stream (Ghitza, 2012; Hyafil et al., 2015). However, such brain entrainment is not solely low-level neural activity that simply mirrors the acoustic attributes of speech. Rather, entrained responses also serve to facilitate speech comprehension (Doelling et al., 2014; Luo & Poeppel, 2007; Peelle et al., 2013). These studies demonstrate that the degree to which auditory cortical activity tracks acoustic speech (and non-speech) signals provides an important mechanism for perception.

Syllable rhythms in speech range in speed from 2–8 Hz (Ding et al., 2017). With this variability in mind, it is natural to ask whether the brain's speech systems are equally efficient across syllable rates, or instead are tuned to a specific natural speech rhythm. Indeed, the majority of the world's languages unfold at rates centered near 4–5 Hz, and neuroacoustic entrainment is enhanced at these ecological syllable speeds (Ding et al., 2017; Poeppel & Assaneo, 2020). In their neuroimaging study, Assaneo and Poeppel (2018) demonstrated that auditory entrainment (i.e., sound-to-brain synchronization) is modulated by speech rates from 2.5 to 6.5 Hz but declines at faster rates. In contrast, a more restricted 2.5–4.5 Hz frequency coupling was found in phase-locked responses to speech between auditory and motor cortices (i.e., brain-to-brain synchronization; Assaneo & Poeppel, 2018). This suggests that while neural oscillations can entrain to a wider band of external rhythms (e.g., 2.5–6.5 Hz), motor cortex resonates at select frequencies to emphasize syllable coding at 4.5 Hz. A neural model was proposed accordingly: speech-motor cortical function is modeled as a neural oscillator, an element capable of generating rhythmic activity, with maximal coupling to the auditory system at 4.5 Hz.
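The phase-locking measure underlying cortical tracking can be illustrated with a minimal sketch: band-pass both signals around the syllable rate, extract instantaneous phase via the analytic signal, and average the phase differences on the unit circle. The signals, the 3.5–5.5 Hz band, and all function names below are illustrative assumptions, not the authors' analysis pipeline; the "neural" trace is a simulated, noisy, phase-lagged copy of a 4.5 Hz envelope.

```python
import numpy as np


def band_phase(x, fs, f_lo, f_hi):
    """Instantaneous phase of x restricted to [f_lo, f_hi] Hz.

    Zeroing negative frequencies and doubling positive ones in the
    spectrum yields the analytic signal of the band-passed input.
    """
    n = x.size
    spectrum = np.fft.fft(x)
    freqs = np.fft.fftfreq(n, d=1.0 / fs)
    mask = np.where((freqs >= f_lo) & (freqs <= f_hi), 2.0, 0.0)
    analytic = np.fft.ifft(spectrum * mask)
    return np.angle(analytic)


def phase_locking_value(x, y, fs, band=(3.5, 5.5)):
    """PLV in [0, 1]: 1 = perfectly consistent phase lag, 0 = none."""
    dphi = band_phase(x, fs, *band) - band_phase(y, fs, *band)
    return float(np.abs(np.mean(np.exp(1j * dphi))))


# Simulated example (hypothetical parameters): a 4.5 Hz "speech
# envelope" and a noisy neural response tracking it at a fixed lag.
fs = 500                                  # sampling rate (Hz)
t = np.arange(0, 10, 1.0 / fs)            # 10 s of signal
envelope = np.sin(2 * np.pi * 4.5 * t)
rng = np.random.default_rng(0)
neural = np.sin(2 * np.pi * 4.5 * t - 0.6) + 0.5 * rng.standard_normal(t.size)

plv = phase_locking_value(envelope, neural, fs)
```

Because the PLV discards phase *lag* and keeps only phase *consistency*, a fixed sensory conduction delay (the 0.6 rad offset above) does not reduce it; only trial-to-trial phase jitter does, which is why it is a common operationalization of entrainment strength.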