In face-to-face conversation, when a speaker talks, the outcome of their speech can be both heard (audio) and seen (visual). We employed a novel visual phonemic restoration paradigm to assess neural signatures (event-related potentials [ERPs]) of audiovisual (AV) processing in typically developing children and in children with autism spectrum disorder (ASD). During EEG recording, two types of auditory stimuli were alternately presented with video of a speaker saying the consonant-vowel syllable /ba/: 1) a synthesized consonant-vowel syllable /ba/, or 2) a synthesized syllable derived from /ba/ in which the auditory cues for the consonant are substantially weakened, such that it sounds more like /a/. The auditory stimuli are easily discriminable; however, in the context of a visual /ba/, the auditory /a/ is typically perceived as /ba/, producing a visual phonemic restoration. In an ERP context, we have shown that this restoration leads to an attenuated phoneme discrimination response in an active task in typical adults and children. To test the hypothesis that children with ASD have atypical AV speech integration under pre-attentive processing conditions, we examined whether children with ASD would show a reduction in this restoration effect under passive listening conditions. Indeed, in this task, children with ASD showed a large /ba/-/a/ discrimination response even in the presence of a speaker producing /ba/, suggesting a reduced influence of visual speech. © 2019 Proceedings of the International Congress on Acoustics. All rights reserved.

