  • Purpose: The purpose of this study was to evaluate the efficacy of a treatment program that includes ultrasound biofeedback for children with persisting speech sound errors associated with childhood apraxia of speech (CAS). Method: Six children ages 9-15 years participated in a multiple baseline experiment for 18 treatment sessions during which treatment focused on producing sequences involving lingual sounds. Children were cued to modify their tongue movements using visual feedback from real-time ultrasound images. Probe data were collected before, during, and after treatment to assess word-level accuracy for treated and untreated sound sequences. As participants reached preestablished performance criteria, new sequences were introduced into treatment. Results: All participants met the performance criterion (80% accuracy for 2 consecutive sessions) on at least 2 treated sound sequences. Across the 6 participants, performance criterion was met for 23 of 31 treated sequences in an average of 5 sessions. Some participants showed no improvement in untreated sequences, whereas others showed generalization to untreated sequences that were phonetically similar to the treated sequences. Most gains were maintained 2 months after the end of treatment. The percentage of phonemes correct increased significantly from pretreatment to the 2-month follow-up. Conclusion: A treatment program including ultrasound biofeedback is a viable option for improving speech sound accuracy in children with persisting speech sound errors associated with CAS.

  • Children with autism spectrum disorders have been reported to be less influenced by a speaker’s face during speech perception than those with typical development. To more closely examine these reported differences, a novel visual phonemic restoration paradigm was used to assess neural signatures (event-related potentials [ERPs]) of audiovisual processing in typically developing children and in children with autism spectrum disorder. Video of a speaker saying the syllable /ba/ was paired with (1) a synthesized /ba/ or (2) a synthesized syllable derived from /ba/ in which auditory cues for the consonant were substantially weakened, thereby sounding more like /a/. The auditory stimuli are easily discriminable; however, in the context of a visual /ba/, the auditory /a/ is typically perceived as /ba/, producing a visual phonemic restoration. Only children with ASD showed a large /ba/-/a/ discrimination response in the presence of a speaker producing /ba/, suggesting reduced influence of visual speech. © 2021, The Author(s), under exclusive licence to Springer Science+Business Media, LLC part of Springer Nature.

  • This study examines how across-trial (average) and trial-by-trial (variability in) amplitude and latency of the N400 event-related potential (ERP) reflect temporal integration of pitch accent and beat gesture. Thirty native English speakers viewed videos of a talker producing sentences with beat gesture co-occurring with a pitch accented focus word (synchronous), beat gesture co-occurring with the onset of a subsequent non-focused word (asynchronous), or the absence of beat gesture (no beat). Across trials, increased amplitude and earlier latency were observed when beat gesture was temporally asynchronous with pitch accenting than when it was temporally synchronous with pitch accenting or absent. Moreover, temporal asynchrony of beat gesture relative to pitch accent increased trial-by-trial variability of N400 amplitude and latency and influenced the relationship between across-trial and trial-by-trial N400 latency. These results indicate that across-trial and trial-by-trial amplitude and latency of the N400 ERP reflect temporal integration of beat gesture and pitch accent during language comprehension, supporting extension of the integrated systems hypothesis of gesture-speech processing and neural noise theories to focus processing in typical adult populations. Copyright © 2020 Elsevier B.V. All rights reserved.

  • This paper includes a detailed description of a familiarization protocol, which is used as an integral component of a larger research protocol to collect electroencephalography (EEG) data and Event-Related Potentials (ERPs). At present, the systems available for the collection of high-quality EEG/ERP data make significant demands on children with developmental disabilities, such as those with an Autism Spectrum Disorder (ASD). Children with ASD may have difficulty adapting to novel situations, tolerating uncomfortable sensory stimuli, and sitting quietly. This familiarization protocol uses Evidence-Based Practices (EBPs) to increase research participants' knowledge and understanding of the specific activities and steps of the research protocol. The tools in this familiarization protocol are a social narrative, a visual schedule, the Premack principle, role-playing, and modeling. The goal of this familiarization protocol is to increase understanding and agency and to potentially reduce anxiety for child participants, resulting in a greater likelihood of the successful completion of the research protocol for the collection of EEG/ERP data.

  • When a speaker talks, the consequences of this can both be heard (audio) and seen (visual). A novel visual phonemic restoration task was used to assess behavioral discrimination and neural signatures (event-related potentials, or ERP) of audiovisual processing in typically developing children with a range of social and communicative skills assessed using the social responsiveness scale, a measure of traits associated with autism. An auditory oddball design presented two types of stimuli to the listener, a clear exemplar of an auditory consonant-vowel syllable /ba/ (the more frequently occurring standard stimulus), and a syllable in which the auditory cues for the consonant were substantially weakened, creating a stimulus which is more like /a/ (the infrequently presented deviant stimulus). All speech tokens were paired with a face producing /ba/ or a face with a pixelated mouth containing motion but no visual speech. In this paradigm, the visual /ba/ should cause the auditory /a/ to be perceived as /ba/, creating an attenuated oddball response; in contrast, a pixelated video (without articulatory information) should not have this effect. Behaviorally, participants showed visual phonemic restoration (reduced accuracy in detecting deviant /a/) in the presence of a speaking face. In addition, ERPs were observed in both an early time window (N100) and a later time window (P300) that were sensitive to speech context (/ba/ or /a/) and modulated by face context (speaking face with visible articulation or with pixelated mouth). Specifically, the oddball responses for the N100 and P300 were attenuated in the presence of a face producing /ba/ relative to a pixelated face, representing a possible neural correlate of the phonemic restoration effect. Notably, those individuals with more traits associated with autism (yet still in the non-clinical range) had smaller P300 responses overall, regardless of face context, suggesting generally reduced phonemic discrimination.

  • Visual information on a talker's face can influence what a listener hears. Commonly used approaches to study this include mismatched audiovisual stimuli (e.g., McGurk type stimuli) or visual speech in auditory noise. In this paper we discuss potential limitations of these approaches and introduce a novel visual phonemic restoration method. This method always presents the same visual stimulus (e.g., /ba/) dubbed with either a matched auditory stimulus (/ba/) or one that has weakened consonantal information and sounds more /a/-like. When this reduced auditory stimulus (or /a/) is dubbed with the visual /ba/, a visual influence will result in effectively 'restoring' the weakened auditory cues so that the stimulus is perceived as a /ba/. We used an oddball design in which participants were asked to detect the /a/ among a stream of more frequently occurring /ba/s while viewing either a speaking face or a face with no visual speech. In addition, the same paradigm was presented for a second contrast in which participants detected /pa/ among /ba/s, a contrast which should be unaltered by the presence of visual speech. Behavioral and some ERP findings reflect the expected phonemic restoration for the /ba/ vs. /a/ contrast; specifically, we observed reduced accuracy and P300 response in the presence of visual speech. Further, we report an unexpected finding of reduced accuracy and P300 response for both speech contrasts in the presence of visual speech, suggesting overall modulation of the auditory signal in the presence of visual speech. Consistent with this, we observed a mismatch negativity (MMN) effect for the /ba/ vs. /pa/ contrast only that was larger in the absence of visual speech. We discuss the potential utility of this paradigm for listeners who cannot respond actively, such as infants and individuals with developmental disabilities.

  • Audiovisual speech perception includes the simultaneous processing of auditory and visual speech. Deficits in audiovisual speech perception are reported in autistic individuals; however, less is known regarding audiovisual speech perception within the broader autism phenotype (BAP), which includes individuals with elevated, yet subclinical, levels of autistic traits. We investigate the neural indices of audiovisual speech perception in adults exhibiting a range of autism-like traits using event-related potentials (ERPs) in a phonemic restoration paradigm. In this paradigm, we consider conditions where speech articulators (mouth and jaw) are present (AV condition) and obscured by a pixelated mask (PX condition). These two face conditions were included in both passive (simply viewing a speaking face) and active (participants were required to press a button for a specific consonant–vowel stimulus) experiments. The results revealed an N100 ERP component which was present for all listening contexts and conditions; however, it was attenuated in the active AV condition, where participants were able to view the speaker’s face, including the mouth and jaw. The P300 ERP component was present within the active experiment only, and was significantly greater within the AV condition compared to the PX condition. This suggests increased neural effort for detecting deviant stimuli when visible articulation was present, as well as a visual influence on perception. Finally, the P300 response was negatively correlated with autism-like traits, suggesting that higher autistic traits were associated with generally smaller P300 responses in the active AV and PX conditions. The conclusions support the finding that atypical audiovisual processing may be characteristic of the BAP in adults.

  • Face-to-face communication typically involves audio and visual components to the speech signal. To examine the effect of task demands on gaze patterns in response to a speaking face, adults participated in two eye-tracking experiments with an audiovisual condition (articulatory information from the mouth was visible) and a pixelated condition (articulatory information was not visible). Further, task demands were manipulated by having listeners respond in a passive (no response) or an active (button press response) context. The active experiment required participants to discriminate between speech stimuli and was designed to mimic environmental situations which require one to use visual information to disambiguate the speaker’s message, simulating different listening conditions in real-world settings. Stimuli included a clear exemplar of the syllable /ba/ and a second exemplar in which the formant transitions of the initial consonant were reduced, creating an /a/-like token. Consistent with our hypothesis, results revealed that the greatest fixations to the mouth were present in the audiovisual active experiment, and visual articulatory information led to a phonemic restoration effect for the /a/ speech token. In the pixelated condition, participants fixated on the eyes, and discrimination of the deviant token within the active experiment was significantly greater than in the audiovisual condition. These results suggest that when required to disambiguate changes in speech, adults may look to the mouth for additional cues to support processing when it is available.

  • Purpose: Reduced use of visible articulatory information on a speaker's face has been implicated as a possible contributor to language deficits in autism spectrum disorders (ASD). We employ an audiovisual (AV) phonemic restoration paradigm to measure behavioral performance (button press) and event-related potentials (ERPs) of visual speech perception in children with ASD and their neurotypical peers to assess potential neural substrates that contribute to group differences. Method: Two sets of speech stimuli, /ba/–“/a/” (“/a/” was created from the /ba/ token by reducing the initial consonant) and /ba/–/pa/, were presented within an auditory oddball paradigm to children aged 6–13 years with ASD (n = 17) and typical development (TD; n = 33) within two conditions. The AV condition contained a fully visible speaking face; the pixelated (PX) condition included a face, but the mouth and jaw were pixelated, removing all articulatory information. When articulatory features were present for the /ba/–“/a/” contrast, it was expected that the influence of the visual articulators would facilitate a phonemic restoration effect in which “/a/” would be perceived as /ba/. ERPs were recorded during the experiment while children were required to press a button for the deviant sound for both sets of speech contrasts within both conditions. Results: Button press data revealed that TD children were more accurate in discriminating between /ba/–“/a/” and /ba/–/pa/ contrasts in the PX condition relative to the ASD group. ERPs in response to the /ba/–/pa/ contrast within both AV and PX conditions differed between children with ASD and TD children (earlier P300 responses for children with ASD). Conclusion: Children with ASD differ in the underlying neural mechanisms responsible for speech processing compared with TD peers within an AV context.

  • Event-related potentials (ERPs) were recorded during a picture naming task of simple and complex words in children with typical speech and with childhood apraxia of speech (CAS). Results reveal reduced amplitude prior to speaking complex (multisyllabic) words relative to simple (monosyllabic) words for the CAS group over the right hemisphere during a time window thought to reflect phonological encoding of word forms. Group differences were also observed prior to production of spoken tokens regardless of word complexity during a time window just prior to speech onset (thought to reflect motor planning/programming). Results suggest differences in pre-speech neurolinguistic processes.

  • Purpose: The toddler years are a critical period for language development and growth. We investigated how event-related potentials (ERPs) to repeated and novel nonwords are associated with clinical assessments of language in young children. In addition, nonword repetition (NWR) was used to measure phonological working memory to determine the unique and collective contribution of ERP measures of phonemic discrimination and NWR as predictors of language ability. Method: Forty children between the ages of 24 and 48 months participated in an ERP experiment to determine phonemic discrimination to repeated and novel nonwords in an old/new design. Participants also completed an NWR task to explore the contribution of phonological working memory in predicting language. Results: ERP analyses revealed that faster responses to novel stimuli correlated with higher language performance on clinical assessments of language. Regression analyses revealed that an earlier component was associated with lower level phonemic sensitivity, and a later component indexed phonological working memory skills similar to NWR. Conclusion: Our findings suggest that passive ERP responses indexing phonological discrimination and phonological working memory are strongly related to behavioral measures of language.

  • Purpose: To examine neural response to spoken and printed language in children with speech sound errors (SSE). Method: Functional magnetic resonance imaging was used to compare processing of auditorily and visually presented words and pseudowords in 17 children with SSE, ages 8;6 [years;months] through 10;10, with 17 matched controls. Results: When processing spoken words and pseudowords, the SSE group showed less activation than typically speaking controls in left middle temporal gyrus. They also showed greater activation than controls in several cortical and subcortical regions (e.g., left superior temporal gyrus, globus pallidus, insula, fusiform, and bilateral parietal regions). In response to printed words and pseudowords, children with SSE had greater activation than controls in regions including bilateral fusiform and anterior cingulate. Some differences were found in both speech and print processing that may be associated with children with SSE failing to show common patterns of task-induced deactivation and/or attentional resource allocation. Conclusion: Compared with controls, children with SSE appear to rely more on several dorsal speech perception regions and less on ventral speech perception regions. When processing print, numerous regions were observed to be activated more for the SSE group than for controls.

  • Potocki-Lupski syndrome (PTLS; OMIM 610883) is a genomic syndrome that arises as a result of a duplication of 17p11.2. Although numerous cases of individuals with PTLS have been presented in the literature, its behavioral characterization is still ambiguous. We present a male child with a de novo dup(17)(p11.2p11.2) who does not possess any autistic features but is characterized by severe speech and language impairment. In the context of the analyses of this patient and other cases of PTLS, we argue that the central feature of the syndrome appears to be related to diminished speech and language capacity, rather than the specific social deficits central to autism. © 2011.

  • The purpose of the study was to identify structural brain differences in school-age children with residual speech sound errors. Voxel based morphometry was used to compare gray and white matter volumes for 23 children with speech sound errors, ages 8;6-11;11, and 54 typically speaking children matched on age, oral language, and IQ. We hypothesized that regions associated with production and perception of speech sounds would differ between groups. Results indicated greater gray matter volumes for the speech sound error group relative to typically speaking controls in bilateral superior temporal gyrus. There was greater white matter volume in the corpus callosum for the speech sound error group, but less white matter volume in right lateral occipital gyrus. Results may indicate delays in neuronal pruning in critical speech regions or differences in the development of networks for speech perception and production. Copyright © 2013 Elsevier Inc. All rights reserved.

  • We employed brain-behavior analyses to explore the relationship between performance on tasks measuring phonological awareness, pseudoword decoding, and rapid auditory processing (all predictors of reading (dis)ability) and brain organization for print and speech in beginning readers. For print-related activation, we observed a shared set of skill-correlated regions, including left hemisphere temporoparietal and occipitotemporal sites, as well as inferior frontal, visual, visual attention, and subcortical components. For speech-related activation, shared variance among reading skill measures was most prominently correlated with activation in left hemisphere inferior frontal gyrus and precuneus. Implications for brain-based models of literacy acquisition are discussed. © 2012 Elsevier Inc. All rights reserved.

  • Reading disability is a brain-based difficulty in acquiring fluent reading skills that affects significant numbers of children. Although neuroanatomical and neurofunctional networks involved in typical and atypical reading are increasingly well characterized, the underlying neurochemical bases of individual differences in reading development are virtually unknown. The current study is the first to examine neurochemistry in children during the critical period in which the neurocircuits that support skilled reading are still developing. In a longitudinal pediatric sample of emergent readers whose reading indicators range on a continuum from impaired to superior, we examined the relationship between individual differences in reading and reading-related skills and concentrations of neurometabolites measured using magnetic resonance spectroscopy. Both continuous and group analyses revealed that choline and glutamate concentrations were negatively correlated with reading and related linguistic measures in phonology and vocabulary (such that higher concentrations were associated with poorer performance). Correlations with behavioral scores obtained 24 months later reveal stability for the relationship between glutamate and reading performance. Implications for neurodevelopmental models of reading and reading disability are discussed, including possible links of choline and glutamate to white matter anomalies and hyperexcitability. These findings point to new directions for research on gene-brain-behavior pathways in human studies of reading disability. © 2014 the authors.

Last update from database: 3/13/26, 4:15 PM (UTC)
