  • PURPOSE: The Index of Phonological Complexity and the Word Complexity Measure are two measures of the phonological complexity of a word. Other phonological measures such as phonological neighborhood density have been used to compare stuttered versus fluent words. It appears that in preschoolers who stutter, the length and complexity of the utterance is more influential than the phonetic features of the stuttered word. The present hypothesis was that in school-age children who stutter, stuttered words would be more phonologically complex than fluent words, when the length and complexity of the utterance containing them is comparable. School-age speakers who stutter were hypothesized to differ from those with a concomitant language disorder. METHODS: Sixteen speakers, six females and ten males (M age=12;3; Range=7;7 to 19;5) available from an online database, were divided into eight who had a concomitant language disorder (S+LD) and eight age- and sex-matched speakers who did not (S-Only). RESULTS: When all stuttered content words were identified, S+LD speakers produced more repetitions, and S-Only speakers produced more inaudible sound prolongations. When stuttered content words were matched to fluent content words and when talker groups were combined, stuttered words were significantly (p<=0.01) higher in both the Index of Phonological Complexity and the Word Complexity Measure and lower in density ("sparser") than fluent words. CONCLUSIONS: Results corroborate those of previous researchers.
Future research directions are suggested, such as cross-sectional designs to evaluate developmental patterns of phonological complexity and connections between stuttering and language disorder. EDUCATIONAL OBJECTIVES: The reader will be able to: (a) Define and describe phonological complexity; (b) Define phonological neighborhood density and summarize the literature on the topic; (c) Describe the Index of Phonological Complexity (IPC) for a given word; (d) Describe the Word Complexity Measure (WCM) for a given word; (e) Summarize two findings from the current study and describe how each relates to studies of phonological complexity and fluency disorders. Copyright © 2014 Elsevier Inc. All rights reserved.
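
The phonological neighborhood density referred to above is conventionally the count of lexicon entries that differ from a target word by one phoneme (substitution, addition, or deletion). A minimal sketch, assuming a toy lexicon in an invented tuple-of-phonemes notation (not the study's data or transcription system):

```python
def neighbors(word, lexicon):
    """Return lexicon entries one phoneme edit (substitution,
    addition, or deletion) away from `word`.

    Words are tuples of phoneme symbols; `lexicon` is a set of
    such tuples."""
    found = set()
    for cand in lexicon:
        if cand == word:
            continue
        la, lb = len(word), len(cand)
        if abs(la - lb) > 1:
            continue
        if la == lb:
            # substitution: differ in exactly one position
            if sum(a != b for a, b in zip(word, cand)) == 1:
                found.add(cand)
        else:
            # addition/deletion: the shorter word equals the longer
            # word with one phoneme removed
            short, long_ = (word, cand) if la < lb else (cand, word)
            if any(long_[:i] + long_[i + 1:] == short
                   for i in range(len(long_))):
                found.add(cand)
    return found

# Hypothetical mini-lexicon: "cat" has neighbors bat, cap, cats, at
lexicon = {("k", "ae", "t"), ("b", "ae", "t"), ("k", "ae", "p"),
           ("k", "ae", "t", "s"), ("ae", "t"), ("d", "ao", "g")}
density = len(neighbors(("k", "ae", "t"), lexicon))
```

A "sparse" word, in the abstract's sense, is simply one with a low value of `density` relative to comparison words.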

  • Perception of spoken language requires attention to acoustic as well as visible phonetic information. This article reviews the known differences in audiovisual speech perception in children with autism spectrum disorders (ASD) and specifies the need for interventions that address this construct. Elements of an audiovisual training program are described. This researcher-developed program delivered via an iPad app presents natural speech in the context of increasing noise, but supported with a speaking face. Children are cued to attend to visible articulatory information to assist in perception of the spoken words. Data from four children with ASD ages 8-10 are presented showing that the children improved their performance on an untrained auditory speech-in-noise task.

  • Purpose: The goals were to (a) test the efficacy of a motor-learning-based treatment that includes ultrasound visual feedback for individuals with residual speech sound errors and (b) explore whether the addition of prosodic cueing facilitates speech sound learning. Method: A multiple-baseline, single-subject design was used, replicated across 8 participants. For each participant, 1 sound context was treated with ultrasound plus prosodic cueing for 7 sessions, and another sound context was treated with ultrasound but without prosodic cueing for 7 sessions. Sessions included ultrasound visual feedback as well as non-ultrasound treatment. Word-level probes assessing untreated words were used to evaluate retention and generalization. Results: For most participants, increases in accuracy of target sound contexts at the word level were observed with the treatment program regardless of whether prosodic cueing was included. Generalization between onset singletons and clusters was observed, as was generalization to sentence-level accuracy. There was evidence of retention during posttreatment probes, including at a 2-month follow-up. Conclusion: A motor-based treatment program that includes ultrasound visual feedback can facilitate learning of speech sounds in individuals with residual speech sound errors.

  • Reading disability is a brain-based difficulty in acquiring fluent reading skills that affects significant numbers of children. Although neuroanatomical and neurofunctional networks involved in typical and atypical reading are increasingly well characterized, the underlying neurochemical bases of individual differences in reading development are virtually unknown. The current study is the first to examine neurochemistry in children during the critical period in which the neurocircuits that support skilled reading are still developing. In a longitudinal pediatric sample of emergent readers whose reading indicators range on a continuum from impaired to superior, we examined the relationship between individual differences in reading and reading-related skills and concentrations of neurometabolites measured using magnetic resonance spectroscopy. Both continuous and group analyses revealed that choline and glutamate concentrations were negatively correlated with reading and related linguistic measures in phonology and vocabulary (such that higher concentrations were associated with poorer performance). Correlations with behavioral scores obtained 24 months later reveal stability for the relationship between glutamate and reading performance. Implications for neurodevelopmental models of reading and reading disability are discussed, including possible links of choline and glutamate to white matter anomalies and hyperexcitability. These findings point to new directions for research on gene-brain-behavior pathways in human studies of reading disability. © 2014 the authors.

  • The purpose of the study was to identify structural brain differences in school-age children with residual speech sound errors. Voxel based morphometry was used to compare gray and white matter volumes for 23 children with speech sound errors, ages 8;6-11;11, and 54 typically speaking children matched on age, oral language, and IQ. We hypothesized that regions associated with production and perception of speech sounds would differ between groups. Results indicated greater gray matter volumes for the speech sound error group relative to typically speaking controls in bilateral superior temporal gyrus. There was greater white matter volume in the corpus callosum for the speech sound error group, but less white matter volume in right lateral occipital gyrus. Results may indicate delays in neuronal pruning in critical speech regions or differences in the development of networks for speech perception and production. Copyright © 2013 Elsevier Inc. All rights reserved.

  • Background: Individuals with acquired apraxia of speech (AOS) can lose precision of articulatory movements, including the ability to achieve correct production of specific sounds or sound sequences. Novel treatment approaches should be explored to enhance treatment outcomes. Aims: To evaluate the clinical feasibility of ultrasound visual feedback of the tongue for addressing errors on rhotics in a patient with AOS. Ultrasound visual feedback was used to provide knowledge of performance to the participant. Methods & Procedures: A multiple baseline single case report is presented to evaluate a treatment programme that uses visual feedback of the participant's tongue from real-time ultrasound images. A blocked practice schedule was implemented during 12 one-hour therapy sessions; 30 minutes involved ultrasound visual feedback (10 minutes of pre-practice and 20 minutes of practice) and 20 minutes involved non-ultrasound practice. Cues were provided to modify tongue shape to achieve perceptually accurate production of rhotics, along with practice trials with increasing levels of phonetic complexity. The feedback type (verbal knowledge of performance and knowledge of results) and feedback frequency (number of trials with feedback) were structured to adhere to principles of motor learning. Outcomes & Results: The participant demonstrated moderate evidence of acquisition of prevocalic rhotics and strong evidence of acquisition of postvocalic rhotics during treatment. There was evidence of retention and generalisation only for postvocalic rhotics. An untreated context was probed regularly and showed no evidence of improvement. Conclusion: The results provide preliminary support for the feasibility of this treatment approach for improving speech accuracy in adults with acquired AOS. The improvements in stimulability for the treated sound sequences could be used to foster further motor learning. © 2014 Taylor & Francis Group.

  • Event-related potentials (ERPs) were recorded during a picture naming task of simple and complex words in children with typical speech and with childhood apraxia of speech (CAS). Results reveal reduced amplitude prior to speaking complex (multisyllabic) words relative to simple (monosyllabic) words for the CAS group over the right hemisphere during a time window thought to reflect phonological encoding of word forms. Group differences were also observed prior to production of spoken tokens regardless of word complexity during a time window just prior to speech onset (thought to reflect motor planning/programming). Results suggest differences in pre-speech neurolinguistic processes.

  • BACKGROUND: Communication is essential for successful rehabilitation, yet few aphasia treatments have been investigated during the acute stroke phase. Alternative modality use including gesturing, writing, or drawing has been shown to increase communicative effectiveness in people with chronic aphasia. Instruction in alternative modality use during acute stroke may increase patient communication and participation, therefore resulting in fewer adverse situations and improved rehabilitation outcomes. OBJECTIVE: The study purpose was to explore a multimodal communication program for aphasia (MCPA) implemented during acute stroke rehabilitation. MCPA aims to improve communication modality production and to facilitate switching among modalities to resolve communication breakdowns. METHODS: Two adults with severe aphasia completed MCPA beginning at 2 and 3 weeks post onset of a single left-hemisphere stroke. Probes completed during each session allowed for evaluation of modality production and modality switching accuracy. RESULTS: Participants completed MCPA (10 and 14 treatment sessions, respectively), and their performance on probes suggested increased accuracy in the production of various alternate communication modalities. However, increased switching to an alternate modality was noted for only one participant. CONCLUSIONS: Further investigation of multimodal treatment during inpatient rehabilitation is warranted. In particular, comparisons between multimodal and standard treatments would help determine appropriate interventions for this setting.

  • Purpose: The purpose of this study was to evaluate the efficacy of a treatment program that includes ultrasound biofeedback for children with persisting speech sound errors associated with childhood apraxia of speech (CAS). Method: Six children ages 9-15 years participated in a multiple baseline experiment for 18 treatment sessions during which treatment focused on producing sequences involving lingual sounds. Children were cued to modify their tongue movements using visual feedback from real-time ultrasound images. Probe data were collected before, during, and after treatment to assess word-level accuracy for treated and untreated sound sequences. As participants reached preestablished performance criteria, new sequences were introduced into treatment. Results: All participants met the performance criterion (80% accuracy for 2 consecutive sessions) on at least 2 treated sound sequences. Across the 6 participants, performance criterion was met for 23 of 31 treated sequences in an average of 5 sessions. Some participants showed no improvement in untreated sequences, whereas others showed generalization to untreated sequences that were phonetically similar to the treated sequences. Most gains were maintained 2 months after the end of treatment. The percentage of phonemes correct increased significantly from pretreatment to the 2-month follow-up. Conclusion: A treatment program including ultrasound biofeedback is a viable option for improving speech sound accuracy in children with persisting speech sound errors associated with CAS.

  • Purpose: This article introduces theoretically driven acoustic measures of /s/ that reflect aerodynamic and articulatory conditions. The measures were evaluated by assessing whether they revealed expected changes over time and labiality effects, along with possible gender differences suggested by past work. Method: Productions of /s/ were extracted from various speaking tasks from typically speaking adolescents (6 boys, 6 girls). Measures were made of relative spectral energies in low- (550-3000 Hz), mid- (3000-7000 Hz), and high-frequency (7000-11025 Hz) regions; the mid-frequency amplitude peak; and temporal changes in these parameters. Spectral moments were also obtained to permit comparison with existing work. Results: Spectral balance measures in low-mid and mid-high frequency bands varied over the time course of /s/, capturing the development of sibilance at mid-fricative along with showing some effects of gender and labiality. The mid-frequency spectral peak was significantly higher in nonlabial contexts and in girls. Temporal variation in the mid-frequency peak differentiated +/- labial contexts while normalizing over gender. Conclusions: The measures showed expected patterns, supporting their validity. Comparison of these data with studies of adults suggests some developmental patterns that call for further study. The measures may also serve to differentiate some cases of typical and misarticulated /s/.
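
The band-balance idea in the abstract above reduces to comparing summed spectral energy across fixed frequency regions. A minimal sketch, using the abstract's band edges but an invented magnitude spectrum (the study's actual computation and windowing are not specified here):

```python
import math

def band_energy(freqs, mags, lo, hi):
    """Sum squared magnitude over spectral bins with lo <= f < hi."""
    return sum(m * m for f, m in zip(freqs, mags) if lo <= f < hi)

def spectral_balance(freqs, mags):
    """Return (low-mid, mid-high) energy ratios in dB, using the
    band edges from the abstract: low 550-3000 Hz, mid 3000-7000 Hz,
    high 7000-11025 Hz."""
    low = band_energy(freqs, mags, 550, 3000)
    mid = band_energy(freqs, mags, 3000, 7000)
    high = band_energy(freqs, mags, 7000, 11025)
    return (10 * math.log10(mid / low), 10 * math.log10(high / mid))

# Invented spectrum with energy concentrated in the mid band,
# as expected for a sibilant /s/
freqs = [1000, 2000, 4000, 6000, 8000]
mags = [1.0, 1.0, 4.0, 4.0, 2.0]
lm, mh = spectral_balance(freqs, mags)  # lm positive, mh negative
```

Tracking these two ratios across the fricative's time course is one way to operationalize the "development of sibilance at mid-fricative" the abstract describes.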

  • Purpose: To determine if speech error patterns in preschoolers with speech sound disorders (SSDs) predict articulation and phonological awareness (PA) outcomes almost 4 years later. Method: Twenty-five children with histories of preschool SSDs (and normal receptive language) were tested at an average age of 4;6 (years;months) and were followed up at age 8;3. The frequency of occurrence of preschool distortion errors, typical substitution and syllable structure errors, and atypical substitution and syllable structure errors was used to predict later speech sound production, PA, and literacy outcomes. Results: Group averages revealed below-average school-age articulation scores and low-average PA but age-appropriate reading and spelling. Preschool speech error patterns were related to school-age outcomes. Children for whom >10% of their speech sound errors were atypical had lower PA and literacy scores at school age than children who produced <10% atypical errors. Preschoolers who produced more distortion errors were likely to have lower school-age articulation scores than preschoolers who produced fewer distortion errors. Conclusion: Different preschool speech error patterns predict different school-age clinical outcomes. Many atypical speech sound errors in preschoolers may be indicative of weak phonological representations, leading to long-term PA weaknesses. Preschoolers' distortions may be resistant to change over time, leading to persisting speech sound production problems. © American Speech-Language-Hearing Association.
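
The 10% cut-off above is a simple proportion threshold over a child's error inventory. A minimal sketch, with hypothetical category labels (the study's actual coding scheme is not reproduced here):

```python
def atypical_proportion(errors):
    """Fraction of a child's speech sound errors coded as atypical.

    `errors` is a list of category labels; the labels used here
    ('distortion', 'typical_sub', 'atypical_sub', ...) are
    illustrative, not the study's coding system."""
    atypical = sum(1 for e in errors if e.startswith("atypical"))
    return atypical / len(errors)

# Hypothetical error inventory for one child: 4 of 20 errors atypical
errors = ["distortion"] * 6 + ["typical_sub"] * 10 + ["atypical_sub"] * 4
flag_at_risk = atypical_proportion(errors) > 0.10  # the study's 10% cut-off
```

Under the abstract's finding, a child flagged this way would be predicted to show lower PA and literacy scores at school age.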

  • We employed brain-behavior analyses to explore the relationship between performance on tasks measuring phonological awareness, pseudoword decoding, and rapid auditory processing (all predictors of reading (dis)ability) and brain organization for print and speech in beginning readers. For print-related activation, we observed a shared set of skill-correlated regions, including left hemisphere temporoparietal and occipitotemporal sites, as well as inferior frontal, visual, visual attention, and subcortical components. For speech-related activation, shared variance among reading skill measures was most prominently correlated with activation in left hemisphere inferior frontal gyrus and precuneus. Implications for brain-based models of literacy acquisition are discussed. © 2012 Elsevier Inc. All rights reserved.

  • Objective: A common clinical complaint among older adults is difficulty hearing in noise, even in those with normal or near-normal peripheral hearing sensitivity. Researchers have demonstrated behavioral hearing-in-noise deficits in older adults, but to date limited evidence, particularly objective evidence, exists elucidating the effects of age on auditory cortical processing in noise. The purpose of this investigation was to explore age-related differences in auditory cortical processing at multiple signal-to-noise ratios (SNRs). Study design: Twenty normal-hearing young adults and 15 normal-hearing older adults participated in the study. Late auditory evoked potential (N1 and P2) latencies and amplitudes were measured in quiet and at three signal-to-noise ratios (+20, +10, and 0 SNR). Repeated measures analyses of variance (ANOVA) were utilized to determine if statistically significant differences existed. Results: Significant group by listening condition interactions existed for N1 and P2 amplitudes. P2 latencies were significantly longer for the older adult group compared to the younger adult group. In addition, N1 and P2 amplitudes were significantly smaller for the younger adult group compared to the older adult group. Conclusion: Results suggest a possibly greater reduction in the synchronous neuronal response from quiet to noisy conditions in older adults than in younger adults. © 2013 Informa Healthcare.

  • Purpose: To examine neural response to spoken and printed language in children with speech sound errors (SSE). Method: Functional magnetic resonance imaging was used to compare processing of auditorily and visually presented words and pseudowords in 17 children with SSE, ages 8;6 [years;months] through 10;10, with 17 matched controls. Results: When processing spoken words and pseudowords, the SSE group showed less activation than typically speaking controls in left middle temporal gyrus. They also showed greater activation than controls in several cortical and subcortical regions (e.g., left superior temporal gyrus, globus pallidus, insula, fusiform, and bilateral parietal regions). In response to printed words and pseudowords, children with SSE had greater activation than controls in regions including bilateral fusiform and anterior cingulate. Some differences were found in both speech and print processing that may be associated with children with SSE failing to show common patterns of task-induced deactivation and/or attentional resource allocation. Conclusion: Compared with controls, children with SSE appear to rely more on several dorsal speech perception regions and less on ventral speech perception regions. When processing print, numerous regions were observed to be activated more for the SSE group than for controls.

  • Objective: The ability to hear in background noise is related to the processing of the incoming acoustic signal in the peripheral auditory system as well as the central auditory nervous system (CANS). Electrophysiological tests have the ability to demonstrate the underlying neural integrity of the CANS, but to date little literature exists demonstrating the effects of background noise on auditory cortical potentials. Therefore, the purpose of this investigation was to systematically investigate the effects of white noise on tone burst-evoked late auditory evoked potentials (N1, P2, and P3) in normal hearing young adults. Study Design: Twenty young-adult normal-hearing individuals served as subjects. A comparison of the late auditory evoked potentials (N1, P2, and P3) was made at multiple signal-to-noise ratios (SNRs) (quiet, +20, +10, 0). N1, P2, and P3 were elicited, and both amplitude and latency were measured for each of the potentials. A standard oddball paradigm with binaural stimulation was used to evoke the potentials. Repeated measures analyses of variance (ANOVA) were conducted for the experimental factors of amplitude and latency, with a within-subjects factor of condition (quiet, +20, +10, 0). Results: Results indicated no significant differences in N1, P2, or P3 amplitude or latency between the quiet and +20 SNR conditions; however, at poorer SNRs, significant N1, P2, and P3 amplitude and/or latency differences were observed. Conclusion: The results indicate a change in higher-order neural function related to the presence of increased noise in the environment. © 2012 Informa Healthcare.
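
The SNR conditions in the two noise studies above (+20, +10, 0 dB) correspond to fixed amplitude relationships between signal and masker, since SNR in dB is 20*log10(signal_rms / noise_rms). A minimal sketch of how a masker level might be set for each condition (illustrative only; the studies' actual calibration procedures are not described in the abstracts):

```python
import math

def noise_rms_for_snr(signal_rms, snr_db):
    """RMS level a masking noise must have so that the signal sits
    snr_db above it, from SNR_dB = 20*log10(signal_rms / noise_rms)."""
    return signal_rms / (10 ** (snr_db / 20))

# For a tone burst normalized to RMS 1.0, the three noise conditions
# used in the studies become:
levels = {snr: noise_rms_for_snr(1.0, snr) for snr in (20, 10, 0)}
```

At 0 dB SNR the masker matches the signal's RMS, which is where both studies report the clearest cortical-potential changes.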

  • Potocki-Lupski syndrome (PTLS; OMIM 610883) is a genomic syndrome that arises as a result of a duplication of 17p11.2. Although numerous cases of individuals with PTLS have been presented in the literature, its behavioral characterization is still ambiguous. We present a male child with a de novo dup(17)(p11.2p11.2) who does not possess any autistic features but is characterized by severe speech and language impairment. In the context of the analyses of this patient and other cases of PTLS, we argue that the central feature of the syndrome appears to be related to diminished speech and language capacity, rather than the specific social deficits central to autism. © 2011.

  • Purpose: To explore whether subgroups of children with residual speech sound disorders (R-SSDs) can be identified through multiple measures of token-to-token phonetic variability (changes from one spoken production to the next). Method: Children with R-SSDs were recorded during a rapid multisyllabic picture naming task and an oral diadochokinetic task. Transcription-based and acoustic measures of token-to-token variability were derived. Articulation accuracy and general indices of language skills were measured as well. Results: Low correlations were observed between transcription-based and acoustic measures of phonetic variability, and among the acoustic measures themselves. Children who were the most variable on one measure were not necessarily highly variable on other measures. Transcription-based measures of variability were associated with language skills. Conclusions: Measures of phonetic variability did not identify children in the sample as consistently high or low. Data do not support the notion that clear subgroups based on phonetic variability can be reliably identified in children with R-SSDs. The link between highly variable phonetic output (quantified by transcription-based measures) and lower language skills requires further exploration.
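
One common transcription-based index of the token-to-token variability discussed above is the proportion of repeated productions that deviate from the child's modal (most frequent) transcription. This is a generic sketch, not the specific measures used in the study, with invented transcriptions:

```python
from collections import Counter

def transcription_variability(tokens):
    """Proportion of repeated productions of a target word that
    differ from the child's most frequent (modal) transcription."""
    counts = Counter(tokens)
    modal_count = counts.most_common(1)[0][1]
    return 1 - modal_count / len(tokens)

# Five hypothetical transcriptions of the same multisyllabic target
tokens = ["spagetti", "spagetti", "pagetti", "spagetti", "sgetti"]
variability = transcription_variability(tokens)  # 2 of 5 differ from the mode
```

A score of 0 means every repetition matched the modal form; the study's point is that children ranked high on one such index were not necessarily high on acoustic indices of the same construct.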

  • Purpose: To describe (a) the assessment of residual speech sound disorders (SSDs) in bilinguals by distinguishing speech patterns associated with second language acquisition from patterns associated with misarticulations and (b) how assessment of domains such as speech motor control and phonological awareness can provide a more complete understanding of SSDs in bilinguals. Method: A review of Japanese phonology is provided to offer a context for understanding the transfer of Japanese to English productions. A case study of an 11-year-old is presented, demonstrating parallel speech assessments in English and Japanese. Speech motor and phonological awareness tasks were conducted in both languages. Results: Several patterns were observed in the participant's English that could be plausibly explained by the influence of Japanese phonology. However, errors indicating a residual SSD were observed in both Japanese and English. A speech motor assessment suggested possible speech motor control problems, and phonological awareness was judged to be within the typical range of performance in both languages. Conclusion: Understanding the phonological characteristics of the native language can help clinicians recognize speech patterns in the second language associated with transfer. Once these differences are understood, patterns associated with a residual SSD can be identified. Supplementing a relational speech analysis with measures of speech motor control and phonological awareness can provide a more comprehensive understanding of a client's strengths and needs.

Last update from database: 3/13/26, 4:15 PM (UTC)