  • Audiovisual speech perception includes the simultaneous processing of auditory and visual speech. Deficits in audiovisual speech perception are reported in autistic individuals; however, less is known regarding audiovisual speech perception within the broader autism phenotype (BAP), which includes individuals with elevated, yet subclinical, levels of autistic traits. We investigate the neural indices of audiovisual speech perception in adults exhibiting a range of autism-like traits using event-related potentials (ERPs) in a phonemic restoration paradigm. In this paradigm, we consider conditions where speech articulators (mouth and jaw) are present (AV condition) or obscured by a pixelated mask (PX condition). These two face conditions were included in both passive (simply viewing a speaking face) and active (participants were required to press a button for a specific consonant–vowel stimulus) experiments. The results revealed an N100 ERP component that was present for all listening contexts and conditions; however, it was attenuated in the active AV condition, where participants were able to view the speaker’s face, including the mouth and jaw. The P300 ERP component was present within the active experiment only and was significantly greater within the AV condition than within the PX condition, suggesting increased neural effort for detecting deviant stimuli when visible articulation was present, as well as a visual influence on perception. Finally, the P300 response was negatively correlated with autism-like traits: higher autistic traits were associated with generally smaller P300 responses in the active AV and PX conditions. These findings support the view that atypical audiovisual processing may be characteristic of the BAP in adults.

  • This study examined fMRI activation when perceivers either passively observed or observed and imitated matched or mismatched audiovisual (McGurk) speech stimuli. Greater activation was observed in the inferior frontal gyrus (IFG) overall for imitation than for perception of audiovisual speech and for imitation of the McGurk-type mismatched stimuli than matched audiovisual stimuli. This unique activation in the IFG during imitation of incongruent audiovisual speech may reflect activation associated with direct matching of incongruent auditory and visual stimuli or conflict between category responses. This study provides novel data about the underlying neurobiology of imitation and integration of AV speech.

  • The lexical decision (LD) and naming (NAM) tasks are ubiquitous paradigms that employ printed word identification. They are major tools for investigating how factors like morphology, semantic information, lexical neighborhood, and others affect identification. Although use of the tasks is widespread, there has been little research into how performance in LD or NAM relates to reading ability, a deficiency that limits the translation of research with these tasks to the understanding of individual differences in reading. The present research was designed to provide a link from LD and NAM to the specific variables that characterize reading ability (e.g., decoding, sight word recognition, fluency, vocabulary, and comprehension) as well as to important reading-related abilities (phonological awareness and rapid naming). We studied 99 adults with a wide range of reading abilities. LD and NAM strongly predicted individual differences in word identification, less strongly predicted vocabulary size, and did not predict comprehension. Fluency was predicted, but with differences that depended on the way fluency was defined. Finally, although the tasks did not predict individual differences in rapid naming or phonological awareness, the failures nevertheless assisted in understanding the cognitive mechanisms behind these reading-related abilities. The results demonstrate that LD and NAM are important tools for the study of individual differences in reading.

  • By 12 months, children grasp that a phonetic change to a word can change its identity (phonological distinctiveness). However, they must also grasp that some phonetic changes do not (phonological constancy). To test the development of phonological constancy, sixteen 15-month-olds and sixteen 19-month-olds completed an eye-tracking task that tracked their gaze to named versus unnamed images for familiar words spoken in their native (Australian) and an unfamiliar non-native (Jamaican) regional accent of English. Both groups looked longer at named than unnamed images for Australian pronunciations, but only the 19-month-olds did so for Jamaican pronunciations, indicating that phonological constancy emerges by 19 months. Vocabulary size predicted 15-month-olds' identifications for the Jamaican pronunciations, suggesting that vocabulary growth is a viable predictor of phonological constancy development.

  • Face-to-face communication typically involves audio and visual components of the speech signal. To examine the effect of task demands on gaze patterns in response to a speaking face, adults participated in two eye-tracking experiments with an audiovisual condition (articulatory information from the mouth was visible) and a pixelated condition (articulatory information was not visible). Further, task demands were manipulated by having listeners respond in a passive (no response) or an active (button press response) context. The active experiment required participants to discriminate between speech stimuli and was designed to mimic environmental situations that require one to use visual information to disambiguate the speaker’s message, simulating different listening conditions in real-world settings. Stimuli included a clear exemplar of the syllable /ba/ and a second exemplar in which the formants of the initial consonant were reduced, creating an /a/-like consonant. Consistent with our hypothesis, results revealed that the greatest fixations to the mouth were present in the audiovisual active experiment, and visual articulatory information led to a phonemic restoration effect for the /a/ speech token. In the pixelated condition, participants fixated on the eyes, and discrimination of the deviant token within the active experiment was significantly greater than in the audiovisual condition. These results suggest that when required to disambiguate changes in speech, adults may look to the mouth for additional cues to support processing when it is available.

  • Purpose: Reduced use of visible articulatory information on a speaker's face has been implicated as a possible contributor to language deficits in autism spectrum disorders (ASD). We employ an audiovisual (AV) phonemic restoration paradigm to measure behavioral performance (button press) and event-related potentials (ERPs) of visual speech perception in children with ASD and their neurotypical peers to assess potential neural substrates that contribute to group differences. Method: Two sets of speech stimuli, /ba/–“/a/” (“/a/” was created from the /ba/ token by reducing the initial consonant) and /ba/–/pa/, were presented within an auditory oddball paradigm to children aged 6–13 years with ASD (n = 17) and typical development (TD; n = 33) within two conditions. The AV condition contained a fully visible speaking face; the pixelated (PX) condition included a face, but the mouth and jaw were pixelated, removing all articulatory information. When articulatory features were present for the /ba/–“/a/” contrast, it was expected that the influence of the visual articulators would facilitate a phonemic restoration effect in which “/a/” would be perceived as /ba/. ERPs were recorded during the experiment while children were required to press a button for the deviant sound for both sets of speech contrasts within both conditions. Results: Button press data revealed that TD children were more accurate than the ASD group in discriminating between the /ba/–“/a/” and /ba/–/pa/ contrasts in the PX condition. ERPs in response to the /ba/–/pa/ contrast within both AV and PX conditions differed between children with ASD and TD children (earlier P300 responses for children with ASD). Conclusion: Children with ASD differ from TD peers in the underlying neural mechanisms responsible for speech processing within an AV context.

  • Objectives: Listening2Faces (L2F) is a therapeutic, application-based training program designed to improve audiovisual speech perception for persons with communication disorders. The purpose of this research was to investigate the feasibility of using the L2F application with young adults with autism and complex communication needs. Methods: Three young adults with autism and complex communication needs completed baseline assessments and participated in training sessions within the L2F application. Behavioral supports, including the use of cognitive picture rehearsal, were used to support engagement with the L2F application. Descriptive statistics were used to provide (1) an overview of the level of participation in the L2F application with the use of behavioral supports and (2) general performance on the L2F application for each participant. Results: All three participants completed the initial auditory noise assessment (ANA) as well as 8 or more levels of the L2F application with varying accuracy levels. One participant completed the entire L2F program successfully. Several behavioral supports were used to facilitate participation; however, each individual demonstrated varied levels of engagement with the application. Conclusions: The L2F application may be a viable intervention tool to support audiovisual speech perception in persons with complex communication needs within a school-based setting. A review of behavioral supports and possible beneficial modifications to the L2F application for persons with complex communication needs are discussed.

  • Event-related potentials (ERPs) were recorded during a picture naming task of simple and complex words in children with typical speech and with childhood apraxia of speech (CAS). Results reveal reduced amplitude prior to speaking complex (multisyllabic) words relative to simple (monosyllabic) words for the CAS group over the right hemisphere during a time window thought to reflect phonological encoding of word forms. Group differences were also observed prior to production of spoken tokens, regardless of word complexity, during a time window just prior to speech onset (thought to reflect motor planning/programming). Results suggest differences in pre-speech neurolinguistic processes.

Last update from database: 3/13/26, 4:15 PM (UTC)
