Cortical encoding of acoustic and linguistic rhythms in spoken narratives

Abstract
Speech contains rich acoustic and linguistic information. Using highly controlled speech materials, previous studies have demonstrated that cortical activity synchronizes to the rhythms of perceived linguistic units, for example, words and phrases, on top of basic acoustic features, for example, the speech envelope. It remains unclear, however, how cortical activity jointly encodes acoustic and linguistic information when listeners hear natural speech. Here, we investigate the neural encoding of words using electroencephalography (EEG) and observe neural activity synchronized to multi-syllabic words when participants naturally listen to narratives. An amplitude modulation (AM) cue for word rhythm enhances the word-level response, but the effect is observed only during passive listening. Furthermore, words and the AM cue are encoded by spatially separable neural responses that are differentially modulated by attention. These results suggest that bottom-up acoustic cues and top-down linguistic knowledge separately contribute to the cortical encoding of linguistic units in spoken narratives.
Funding Information
  • National Natural Science Foundation of China (31771248)
  • Major Scientific Research Project of Zhejiang Lab (2019KB0AC02)
  • Zhejiang Provincial Natural Science Foundation (LGF19H090020)
  • Fundamental Research Funds for the Central Universities (2020FZZX001-05)
  • National Key R&D Program of China (2019YFC0118200)