Results: 7 resources
-
Using eye-tracking methodology, gaze to a speaking face was compared in a group of children with autism spectrum disorders (ASD) and a group with typical development (TD). Patterns of gaze were observed under three conditions: audiovisual (AV) speech in auditory noise, visual-only speech, and an AV non-face, non-speech control. Children with ASD looked less at the face of the speaker and fixated less on the speaker's mouth than TD controls. No differences in gaze were reported for the non-face, non-speech control task. Since the mouth holds much of the articulatory information available on the face, these findings suggest that children with ASD may have reduced access to critical linguistic information. This reduced access to visible articulatory information could be a contributor to the communication and language problems exhibited by children with ASD.
-
Children with speech sound disorders may perceive speech differently than children with typical speech development. The nature of these speech perception differences is reviewed with an emphasis on assessing phoneme-specific perception for speech sounds that are produced in error. Category goodness judgment, or the ability to judge accurate and inaccurate tokens of speech sounds, plays an important role in phonological development. The software Speech Assessment and Interactive Learning System, which has been effectively used to assess preschoolers' ability to perform goodness judgments, is explored for school-aged children with residual speech errors (RSEs). However, data suggest that this particular task may not be sensitive to perceptual differences in school-aged children. The need for the development of clinical tools for assessment of speech perception in school-aged children with RSEs is highlighted, and clinical suggestions are provided.
-
This study analyzed distributions of Euclidean displacements in gaze (i.e. “gaze steps”) to evaluate the degree of componential cognitive constraints on audio-visual speech perception tasks. Children performing these tasks exhibited distributions of gaze steps that were closest to power-law or lognormal distributions, suggesting a multiplicatively interactive, flexible, self-organizing cognitive system rather than a component-dominant stipulated cognitive structure. Younger children and children diagnosed with an autism spectrum disorder (ASD) exhibited distributions that were closer to power-law than lognormal, indicating a reduced degree of self-organized structure. The relative goodness of lognormal fit was also a significant predictor of ASD, suggesting that this type of analysis may point towards a promising diagnostic tool. These results lend further support to an interaction-dominant framework that casts cognitive processing and development in terms of self-organization instead of fixed components and show that these analytical methods are sensitive to important developmental and neuropsychological differences.
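The core analysis in the abstract above — deciding whether gaze-step magnitudes are better described by a power-law or a lognormal distribution — can be sketched as a maximum-likelihood model comparison. This is a minimal illustration on synthetic data, not the study's actual pipeline: the sample, the fixed x_min convention, and the equal-parameter-count comparison are all simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder "gaze step" magnitudes; the study used Euclidean
# displacements between successive eye-tracking gaze samples.
steps = rng.lognormal(mean=0.0, sigma=1.0, size=5000)

logx = np.log(steps)
n = steps.size

# Lognormal MLE: mu and sigma are the mean and SD of log(x)
mu, sigma = logx.mean(), logx.std()
ll_lognorm = np.sum(-np.log(steps * sigma * np.sqrt(2 * np.pi))
                    - (logx - mu) ** 2 / (2 * sigma ** 2))

# Power-law (Pareto type I) MLE with x_min fixed at min(x):
# alpha_hat = n / sum(log(x / x_min))
x_min = steps.min()
alpha = n / np.sum(logx - np.log(x_min))
ll_pareto = n * np.log(alpha) + n * alpha * np.log(x_min) - (alpha + 1) * logx.sum()

# With comparable parameter counts, the raw log-likelihood
# difference indicates the better-fitting family.
print("lognormal" if ll_lognorm > ll_pareto else "power-law", "fits better")
# → lognormal fits better (the synthetic data are lognormal by construction)
```

In practice the relative goodness of fit (e.g., a likelihood-ratio or AIC difference per participant) would then serve as the predictor variable the abstract describes.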
-
PURPOSE: Autistic adults consistently report difficulties understanding speech in adverse listening environments, which may be related to differences in social communication and participation. Research examining masked-speech recognition in autistic adults is limited, particularly in competing speech backgrounds with high degrees of informational masking. This work characterizes speech-in-speech and speech-in-noise recognition in young adults on the autism spectrum and evaluates self-reported functional listening abilities and listening-related fatigue. METHOD: Masked-speech recognition was evaluated in both autistic (n = 20) and non-autistic (n = 20) young adults with normal hearing. Speech reception thresholds were adaptively measured in two-talker speech and speech-shaped noise using target sentences that were either semantically meaningful or anomalous. Functional listening abilities and listening-related fatigue were assessed using the Speech, Spatial, and Qualities of Hearing Scale and the Vanderbilt Fatigue Scale for Adults. Autism characteristics and social communication experiences were quantified using the Social Responsiveness Scale-Second Edition. RESULTS: Autistic adults displayed significantly poorer speech-in-speech recognition than their non-autistic peers, while speech-in-noise recognition did not differ between groups. Functional listening difficulties in daily life and listening-related fatigue were significantly higher for autistic participants. Autism characteristics strongly predicted functional listening abilities and listening-related fatigue in both groups. CONCLUSIONS: Autistic young adults experience objective speech-in-speech recognition difficulties that correspond with listening challenges in daily life. Autism characteristics and social communication experiences predict functional listening abilities reported by both autistic and non-autistic young adults with normal hearing. Speech-in-speech recognition difficulties observed here may amplify social communication challenges for adults on the autism spectrum. Future work must prioritize improved awareness of autistic listening differences.
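The abstract above mentions that speech reception thresholds (SRTs) were "adaptively measured". A common way to do this is an adaptive staircase that lowers the signal-to-noise ratio after a correct response and raises it after an error. The sketch below shows the general 1-down/1-up class of procedure; the step size, reversal rule, and starting SNR are illustrative assumptions, not the study's actual protocol.

```python
# Minimal 1-down/1-up adaptive staircase for estimating an SRT.
# Step sizes, trial count, and the reversal-averaging rule are
# illustrative choices, not taken from the study described above.
def run_staircase(respond, start_snr=0.0, step=2.0, max_trials=40):
    """respond(snr_db) -> True if the listener repeats the sentence correctly."""
    snr = start_snr
    reversals = []
    last_direction = None
    for _ in range(max_trials):
        correct = respond(snr)
        direction = -1 if correct else +1   # make it harder after a correct trial
        if last_direction is not None and direction != last_direction:
            reversals.append(snr)           # track direction changes (reversals)
        last_direction = direction
        snr += direction * step
    # SRT estimate: mean SNR at the last few reversals
    tail = reversals[-6:] if len(reversals) >= 6 else reversals
    return sum(tail) / len(tail) if tail else snr

# Simulated deterministic listener: correct whenever SNR exceeds -4 dB
srt = run_staircase(lambda snr: snr > -4.0)
print(round(srt, 1))  # converges near the simulated listener's threshold
```

A 1-down/1-up rule converges on the 50%-correct point of the psychometric function; variants such as 2-down/1-up target higher performance levels.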
-
In face-to-face conversation, when a speaker talks, the outcome of their speech can be both heard (audio) and seen (visual). We employed a novel visual phonemic restoration paradigm to assess neural signatures (event-related potentials [ERPs]) of audiovisual processing in typically developing children and in children with autism spectrum disorder (ASD). During EEG recording, two types of auditory stimuli were alternately presented with video of a speaker saying the consonant-vowel syllable /ba/: 1) a synthesized consonant-vowel syllable /ba/ or 2) a synthesized syllable derived from /ba/ in which auditory cues for the consonant are substantially weakened, such that it sounds more like /a/. The auditory stimuli are easily discriminable; however, in the context of a visual /ba/, the auditory /a/ is typically perceived as /ba/, producing a visual phonemic restoration. In an ERP context, we have shown that this restoration leads to an attenuated phoneme discrimination response in an active task in typical adults and children. To explore the hypothesis that children with ASD have atypical AV speech integration under pre-attentive processing conditions, we tested whether children with ASD would show a reduction in this restoration effect under passive listening conditions. Indeed, in this task, children with ASD showed a large /ba/-/a/ discrimination response, even in the presence of a speaker producing /ba/, suggesting reduced influence of visual speech. © 2019 Proceedings of the International Congress on Acoustics. All rights reserved.
-
This study examined fMRI activation when perceivers either passively observed, or observed and imitated, matched or mismatched audiovisual ("McGurk") speech stimuli. Greater activation was observed in the inferior frontal gyrus (IFG) overall for imitation than for perception of audiovisual speech, and for imitation of the McGurk-type mismatched stimuli than for matched audiovisual stimuli. This unique activation in the IFG during imitation of incongruent audiovisual speech may reflect activation associated with direct matching of incongruent auditory and visual stimuli, or conflict between category responses. This study provides novel data about the underlying neurobiology of imitation and integration of AV speech. (C) 2011 Elsevier Ltd. All rights reserved.
-
By 12 months, children grasp that a phonetic change to a word can change its identity (phonological distinctiveness). However, they must also grasp that some phonetic changes do not (phonological constancy). To test the development of phonological constancy, sixteen 15-month-olds and sixteen 19-month-olds completed an eye-tracking task that tracked their gaze to named versus unnamed images for familiar words spoken in their native (Australian) accent and in an unfamiliar non-native (Jamaican) regional accent of English. Both groups looked longer at named than unnamed images for Australian pronunciations, but only 19-month-olds did so for Jamaican pronunciations, indicating that phonological constancy emerges by 19 months. Vocabulary size predicted 15-month-olds' identifications for the Jamaican pronunciations, suggesting that vocabulary growth is a viable predictor of phonological constancy development.
Resource type
- Conference Paper (1)
- Journal Article (6)
Resource language
- English (7)