Perceptuomotor compatibility between phonemically identical spoken and perceived syllables has been found to speed up response times (RTs) in speech production tasks. However, research on compatibility effects between perceived and produced stimuli at the subphonemic level is limited. Using a cue-distractor task, we investigated the effects of phonemic and subphonemic congruency in pairs of vowels. On each trial, a visual cue prompted participants to produce a response vowel; shortly after the cue appeared, a distractor vowel was presented auditorily while speakers were planning to produce the response vowel. The results revealed effects on RTs of phonemic congruency (same vs. different vowels) between the response and distractor vowels, resembling effects previously reported for consonants. Beyond phonemic congruency, we assessed how RTs are modulated as a function of the degree of subphonemic similarity between the response and distractor vowels. Higher similarity between the response and distractor in terms of phonological distance, defined as the number of mismatching phonological features, resulted in faster RTs. However, the exact patterns of RTs varied across response-distractor vowel pairs. We discuss how different assumptions about phonological feature representations may account for the different RT patterns observed across response-distractor pairs. Our findings, showing effects of perceived stimuli on produced speech at a more detailed level of representation than phonemic identity, call for a more direct and specific formulation of the perception-production link. Additionally, these results extend previously reported perceptuomotor interactions, which have mainly involved consonants, to vowels.
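The phonological-distance metric described above, counting mismatching phonological features between two vowels, can be sketched as follows. This is a minimal illustration: the feature inventory and the binary specifications below are assumptions for demonstration, not the feature set actually used in the study.

```python
# Hypothetical binary feature specifications for three vowels.
# The study's actual feature system is not given in the abstract.
FEATURES = {
    "i": {"high": 1, "low": 0, "back": 0, "round": 0},
    "u": {"high": 1, "low": 0, "back": 1, "round": 1},
    "a": {"high": 0, "low": 1, "back": 1, "round": 0},
}

def phonological_distance(v1: str, v2: str) -> int:
    """Count the features on which two vowels mismatch."""
    f1, f2 = FEATURES[v1], FEATURES[v2]
    return sum(1 for feat in f1 if f1[feat] != f2[feat])

print(phonological_distance("i", "u"))  # 2 (mismatch on back, round)
print(phonological_distance("i", "a"))  # 3 (mismatch on high, low, back)
```

Under this toy system, /i/–/u/ are more similar (distance 2) than /i/–/a/ (distance 3), so the abstract's finding would predict faster RTs for /i/–/u/ response-distractor pairs.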
The development of phonological awareness, the knowledge of the structural combinatoriality of a language, has been widely investigated in relation to reading (dis)ability across languages. However, the extent to which knowledge of phonemic units may interact with spoken language organization in (transparent) alphabetic languages has hardly been investigated. The present study examined whether phonemic awareness correlates with coarticulation degree, commonly used as a metric for estimating the size of children's production units. A speech production task was designed to test for developmental differences in intra-syllabic coarticulation degree in 41 German children from 4 to 7 years of age. The technique of ultrasound imaging allowed for comparing the articulatory foundations of children's coarticulatory patterns. Four behavioral tasks assessing various levels of phonological awareness, from large to small units, and expressive vocabulary were also administered. Generalized additive modeling revealed strong interactions of children's vocabulary and phonological awareness with coarticulatory patterns. Greater knowledge of sub-lexical units was associated with lower intra-syllabic coarticulation degree and greater differentiation of articulatory gestures for individual segments. This interaction was mostly nonlinear: an increase in children's phonological proficiency was not systematically associated with an equivalent change in coarticulation degree. Similar relationships were observed between vocabulary and coarticulatory patterns. Overall, the results suggest that the process of developing spoken language fluency involves dynamic interactions between cognitive and speech motor domains. Arguments for an integrated-interactive approach to skill development are discussed.
Electrophysiological research using verbal response paradigms faces the problem of muscle artifacts that occur during speech production or in the period preceding articulation. In this context, this paper has two related aims. The first is to show how the nature of the first phoneme influences the alignment of the ERPs. The second is to further characterize the EEG signal around the onset of articulation, in both the temporal and frequency domains. Participants were asked to name aloud pictures of common objects. We applied microstate analyses and time-frequency transformations of ERPs locked to vocal onset to compare the EEG signal between voiced and unvoiced labial plosive word onset consonants. We found a delay of about 40 ms in the set of stable topographic patterns for /b/ relative to /p/ onset words. A similar shift was observed in the power increase of gamma oscillations (30-50 Hz), which had an earlier onset for /p/ trials (approximately 150 ms before vocal onset). This 40-ms shift is consistent with the length of the voiced portion of the acoustic signal prior to the release of the closure in the vocal responses. These results demonstrate that phonetic features are an important parameter affecting response-locked ERPs, and hence that the onset of the acoustic energy may not be an optimal trigger for synchronizing the EEG activity to the response in vocal paradigms. The indexes explored in this study provide a step forward in the characterization of muscle-related artifacts in electrophysiological studies of speech and language production.
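The kind of gamma-band (30-50 Hz) power estimate referred to above can be illustrated with a minimal sketch. This is not the study's analysis pipeline (which used time-frequency transformations of response-locked ERPs); it simply shows, on a synthetic signal with an assumed sampling rate, how the fraction of spectral power falling in the gamma band can be computed with a plain FFT.

```python
import numpy as np

fs = 500  # assumed sampling rate in Hz (illustrative, not from the study)
t = np.arange(0, 1.0, 1 / fs)

# Synthetic "EEG" segment: a 40 Hz gamma component plus broadband noise.
rng = np.random.default_rng(0)
sig = np.sin(2 * np.pi * 40 * t) + 0.5 * rng.standard_normal(t.size)

# Power spectrum via the real-input FFT.
spectrum = np.abs(np.fft.rfft(sig)) ** 2 / sig.size
freqs = np.fft.rfftfreq(sig.size, 1 / fs)

# Fraction of (non-DC) power in the 30-50 Hz gamma band.
gamma_band = (freqs >= 30) & (freqs <= 50)
gamma_power = spectrum[gamma_band].sum()
total_power = spectrum[freqs > 0].sum()
gamma_fraction = gamma_power / total_power
print(gamma_fraction)
```

In practice, tracking the *onset* of a gamma power increase relative to vocal onset, as in the study, requires a time-resolved version of this computation (e.g., a short sliding window or wavelet transform) rather than a single whole-segment spectrum.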