  • Audiovisual speech perception involves the simultaneous processing of auditory and visual speech. Deficits in audiovisual speech perception have been reported in autistic individuals; however, less is known about audiovisual speech perception within the broader autism phenotype (BAP), which includes individuals with elevated yet subclinical levels of autistic traits. We investigated the neural indices of audiovisual speech perception in adults exhibiting a range of autism-like traits using event-related potentials (ERPs) in a phonemic restoration paradigm. In this paradigm, we considered conditions in which the speech articulators (mouth and jaw) were either visible (AV condition) or obscured by a pixelated mask (PX condition). These two face conditions were included in both passive (simply viewing a speaking face) and active (pressing a button for a specific consonant–vowel stimulus) experiments. The results revealed an N100 ERP component in all listening contexts and conditions; however, it was attenuated in the active AV condition, where participants could view the speaker’s face, including the mouth and jaw. The P300 ERP component was present in the active experiment only and was significantly greater in the AV condition than in the PX condition, suggesting increased neural effort in detecting deviant stimuli when visible articulation was present, as well as a visual influence on perception. Finally, the P300 response was negatively correlated with autism-like traits: higher levels of autistic traits were associated with smaller P300 responses in the active AV and PX conditions. These findings support the conclusion that atypical audiovisual processing may be characteristic of the BAP in adults.

  • Face-to-face communication typically involves both auditory and visual components of the speech signal. To examine the effect of task demands on gaze patterns in response to a speaking face, adults participated in two eye-tracking experiments with an audiovisual condition (articulatory information from the mouth was visible) and a pixelated condition (articulatory information was not visible). Task demands were further manipulated by having listeners respond in a passive (no response) or an active (button press) context. The active experiment required participants to discriminate between speech stimuli and was designed to mimic real-world listening situations that require the use of visual information to disambiguate the speaker’s message. Stimuli included a clear exemplar of the syllable /ba/ and a second exemplar in which the formant transitions of the initial consonant were reduced, creating an /a/-like consonant. Consistent with our hypothesis, fixations to the mouth were greatest in the active audiovisual experiment, and visual articulatory information led to a phonemic restoration effect for the /a/ token. In the pixelated condition, participants fixated on the eyes, and discrimination of the deviant token in the active experiment was significantly greater than in the audiovisual condition. These results suggest that, when required to disambiguate changes in speech, adults may look to the mouth for additional cues to support processing when such information is available.

  • Purpose: Reduced use of visible articulatory information on a speaker's face has been implicated as a possible contributor to language deficits in autism spectrum disorders (ASD). We employed an audiovisual (AV) phonemic restoration paradigm to measure behavioral performance (button press) and event-related potentials (ERPs) of visual speech perception in children with ASD and their neurotypical peers, in order to assess potential neural substrates that contribute to group differences. Method: Two sets of speech stimuli, /ba/–“/a/” (“/a/” was created from the /ba/ token by reducing the initial consonant) and /ba/–/pa/, were presented within an auditory oddball paradigm to children aged 6–13 years with ASD (n = 17) and typical development (TD; n = 33) in two conditions. The AV condition contained a fully visible speaking face; the pixelated (PX) condition included a face, but the mouth and jaw were pixelated, removing all articulatory information. When articulatory features were present for the /ba/–“/a/” contrast, it was expected that the influence of the visual articulators would facilitate a phonemic restoration effect in which “/a/” would be perceived as /ba/. ERPs were recorded while children were required to press a button for the deviant sound for both speech contrasts in both conditions. Results: Button press data revealed that TD children were more accurate than the ASD group in discriminating the /ba/–“/a/” and /ba/–/pa/ contrasts in the PX condition. ERPs in response to the /ba/–/pa/ contrast in both AV and PX conditions differed between children with ASD and TD children (earlier P300 responses for children with ASD). Conclusion: Children with ASD differ from TD peers in the underlying neural mechanisms responsible for speech processing in an AV context.

  • Objectives: Listening2Faces (L2F) is a therapeutic, application-based training program designed to improve audiovisual speech perception for persons with communication disorders. The purpose of this research was to investigate the feasibility of using the L2F application with young adults with autism and complex communication needs. Methods: Three young adults with autism and complex communication needs completed baseline assessments and participated in training sessions within the L2F application. Behavioral supports, including cognitive picture rehearsal, were used to support engagement with the application. Descriptive statistics were used to provide (1) an overview of the level of participation in the L2F application with the use of behavioral supports and (2) each participant's general performance in the L2F application. Results: All three participants completed the initial auditory noise assessment (ANA) as well as 8 or more levels of the L2F application, with varying levels of accuracy. One participant completed the entire L2F program successfully. Several behavioral supports were used to facilitate participation; however, each individual demonstrated a different level of engagement with the application. Conclusions: The L2F application may be a viable intervention tool to support audiovisual speech perception in persons with complex communication needs within a school-based setting. Behavioral supports and possible beneficial modifications to the L2F application for persons with complex communication needs are discussed.
