Previous studies suggest that there are special timing relations in syllable onsets. The consonants are assumed to be timed, on the one hand, with the vocalic nucleus and, on the other hand, with each other. These competing timing relations result in the C-center effect. However, the C-center effect has not consistently been found in languages with complex onsets. Moreover, it has occasionally been found in languages disallowing complex onsets. The present study investigates onset timing in German while discussing alternative explanations (not related to bonding) for the timing patterns observed. Six German speakers were recorded via Electromagnetic Articulography. The corpus contained items with four clusters (/sk/, /kv/, /gl/, and /pl/). The clusters occur in word-initial position, word-medial position, and across a word boundary preceding different vowels. The results suggest that segmental properties (i.e., oral-laryngeal coordination, coarticulatory resistance) determine the observed timing patterns, and specifically the absence or presence of the C-center effect.
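The c-center measure at issue here is conventionally computed as the mean of the temporal midpoints of the onset consonants' gestures, and the C-center effect is diagnosed by comparing the stability of that point (versus the rightmost consonant) relative to a vowel-related anchor. A minimal sketch with invented timestamps (the gesture values and anchor below are hypothetical, not from this study):

```python
# Hypothetical gesture timestamps (ms) for a CCV item such as /gla/;
# each onset consonant gesture is an (onset, offset) pair, and the
# anchor is a fixed vowel-related landmark. Values are invented.
def c_center(consonant_gestures):
    """Mean of the temporal midpoints of the onset consonants."""
    midpoints = [(on + off) / 2 for on, off in consonant_gestures]
    return sum(midpoints) / len(midpoints)

cluster = [(100, 160), (150, 210)]   # C1, C2 of the cluster
anchor = 320                          # vowel-related anchor point (ms)

# Under a complex-onset (c-center) organization, the c-center-to-anchor
# interval stays stable as consonants are added to the onset; under a
# simplex organization, the rightmost-C-to-anchor interval does.
cc_to_anchor = anchor - c_center(cluster)
```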
This study focuses on the ability of the adult sound system to reorganise as a result of experience. Participants were exposed to existing and novel syllables in either a listening task or a production task over the course of two days. On the third day, they named disyllabic pseudowords while their electroencephalogram was recorded. The first syllable of these pseudowords had either been trained in the auditory modality, trained in production or had not been trained. The EEG response differed between existing and novel syllables for untrained but not for trained syllables, indicating that training novel sound sequences modifies the processes involved in the production of these sequences to make them more similar to those underlying the production of existing sound sequences. Effects of training on the EEG response were observed both after production training and mere auditory exposure.
We pursue an analysis of the relation between qualitative syllable parses and their quantitative phonetic consequences. To do this, we express the statistics of a symbolic organization corresponding to a syllable parse in terms of continuous phonetic parameters which quantify the timing of the consonants and vowels that make up syllables: consonantal plateau durations, vowel durations, and their variances. These parameters can be estimated from continuous phonetic data. This enables analysis of the link between symbolic phonological form and the continuous phonetics in which this form is manifest. Pursuing such an analysis, we illustrate the predictions of the syllabic organization corresponding to simplex onsets and derive a number of previously reported experimental and simulation results. Specifically, we derive not only the canonical phonetic manifestations of simplex onsets but also the result that, under certain conditions we make precise, the phonetic indices of the simplex onset organization change to a range of values characteristic of the complex onset organization. Finally, we explore the behavior of phonetic indices for syllabic organization over progressively increasing sizes of lexical samples, thereby concomitantly diversifying the phonetic context over which these indices are taken.
Perceptuomotor compatibility between phonemically identical spoken and perceived syllables has been found to speed up response times (RTs) in speech production tasks. However, research on compatibility effects between perceived and produced stimuli at the subphonemic level is limited. Using a cue-distractor task, we investigated the effects of phonemic and subphonemic congruency in pairs of vowels. On each trial, a visual cue prompted individuals to produce a response vowel and, after the visual cue appeared, a distractor vowel was presented auditorily while speakers were planning to produce the response vowel. The results revealed effects on RTs due to phonemic congruency (same vs. different vowels) between the response and distractor vowels, which resemble effects previously seen for consonants. Beyond phonemic congruency, we assessed how RTs are modulated as a function of the degree of subphonemic similarity between the response and distractor vowels. Higher similarity between the response and distractor in terms of phonological distance (defined by the number of mismatching phonological features) resulted in faster RTs. However, the exact patterns of RTs varied across response-distractor vowel pairs. We discuss how different assumptions about phonological feature representations may account for the different patterns observed in RTs across response-distractor pairs. Our findings on the effects of perceived stimuli on produced speech at a more detailed level of representation than phonemic identity necessitate a more direct and specific formulation of the perception-production link. Additionally, these results extend previously reported perceptuomotor interactions, mainly involving consonants, to vowels.
We examined gestural coordination in C1C2 (C1 stop, C2 lateral or tap) word initial clusters using articulatory (electromagnetic articulometry) and acoustic data from six speakers of Standard Peninsular Spanish. We report on patterns of voice onset time (VOT), gestural plateau duration of C1, C2, and their overlap. For VOT, as expected, place of articulation is a major factor, with velars exhibiting longer VOTs than labials. Regarding C1 plateau duration, voice and place effects were found such that voiced consonants are significantly shorter than voiceless consonants, and velars show longer duration than labials. For C2 plateau duration, lateral duration was found to vary as a function of onset complexity (C vs. CC). As for overlap, unlike in French, where articulatory data for clusters have also been examined, clusters where both C1 and C2 are voiced show more overlap than where voicing differs. Further, overlap was affected by the C2 such that clusters where C2 is a tap show less overlap than clusters where C2 is a lateral. We discuss these results in the context of work aiming to uncover phonetic (e.g., articulatory or perceptual) and phonological forces (e.g., syllabic organization) on timing.
Voice onset time (VOT), a primary cue for voicing in many languages including English and German, is known to vary greatly between speakers, but it also displays robust within-speaker consistencies, at least in English. The current analysis extends these findings to German. VOT measures were investigated from voiceless alveolar and velar stops in CV syllables cued by a visual prompt in a cue-distractor task. As in English, a considerable portion of German VOT variability can be attributed to the syllable's vowel length and the stop's place of articulation. Individual differences in VOT remain irrespective of speech rate. However, significant correlations across places of articulation and between speaker-specific mean VOTs and standard deviations indicate that talkers employ a relatively unified VOT profile across places of articulation. This could allow listeners to adapt more efficiently to speaker-specific realisations.
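The "unified VOT profile" claim rests on correlations between per-speaker summaries across places of articulation. A minimal sketch of that kind of analysis, with invented per-speaker means (the numbers below are illustrative, not the study's data):

```python
import statistics

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented per-speaker mean VOTs (ms) for six speakers.
alveolar = [55, 62, 48, 70, 58, 65]
velar = [68, 75, 60, 84, 70, 79]

# A strong positive correlation across places of articulation is what
# a unified per-speaker "VOT profile" predicts: a speaker with long
# alveolar VOTs also has long velar VOTs.
r = pearson_r(alveolar, velar)
```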
Respect the surroundings
(2021)
Fourteen-month-olds' ability to distinguish a just-learned word, /bu?k/, from its minimally different word, /du?k/, was assessed under two pre-exposure conditions: one where /b, d/-initial forms occurred in a varying vowel context and another where the vowel was fixed but the final consonant varied. Infants in the experiments benefited from the variable vowel context but not from the variable final-consonant context, suggesting that vowel variability, but not variability of any kind, is beneficial. These results are discussed in the context of time-honored observations on the vowel-dependent nature of place-of-articulation cues for consonants.
Only the right noise?
(2020)
Seminal work by Werker and colleagues (Stager & Werker, 1997, Nature, 388, 381-382) found that 14-month-old infants do not show evidence of learning minimal pairs in the habituation-switch paradigm. However, when multiple speakers produce the minimal pair in acoustically variable ways, infants' performance improves in comparison to a single-speaker condition (Rost & McMurray, 2009, Developmental Science, 12, 339-349). The current study further extends these results and assesses how different kinds of input variability affect 14-month-olds' minimal pair learning in the habituation-switch paradigm, testing German-learning infants. The first two experiments investigated word learning when the labels were spoken by a single speaker versus by multiple speakers. In the third experiment, we studied whether non-acoustic variability, implemented as visual variability of the objects presented together with the labels, would also affect minimal pair learning. We found enhanced learning in the multiple-speaker condition compared to the single-speaker condition, confirming previous findings with English-learning infants. In contrast, visual variability of the presented objects did not support learning. These findings both confirm and better delimit the beneficial role of speech-specific variability in minimal pair learning. Finally, we review different proposals on the mechanisms by which variability confers benefits to learning and outline the likely principles that underlie this benefit. We highlight among these the multiplicity of acoustic cues signalling phonemic contrasts and the presence of relations among these cues. It is in these relations that we trace part of the source of the apparent paradoxical benefit of variability in learning.
Fitts' law, perhaps the most celebrated law of human motor control, expresses a relation between the kinematic property of speed and the non-kinematic, task-specific property of accuracy. We aimed to assess whether speech movements obey this law using a metronome-driven speech elicitation paradigm with a systematic speech rate control. Specifically, using the paradigm of repetitive speech, we recorded via electromagnetic articulometry speech movement data in sequences of the form /CV.../ from 6 adult speakers. These sequences were spoken at 8 distinct rates ranging from extremely slow to extremely fast. Our results demonstrate, first, that the present paradigm of extensive metronome-driven manipulations satisfies the crucial prerequisites for evaluating Fitts' law in a subset of our elicited rates. Second, we uncover for the first time in speech evidence for Fitts' law at the faster rates and specifically beyond a participant-specific critical rate. We find no evidence for Fitts' law at the slowest metronome rates. Finally, we discuss implications of these results for models of speech.
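Fitts' law relates movement time MT to an index of difficulty ID determined by movement amplitude D and target width W, commonly MT = a + b * log2(D/W + 1) in the Shannon formulation. A minimal sketch of testing the law by fitting that line (the D, W, and MT values below are invented for illustration, not the study's measurements):

```python
import math

def fitts_id(distance, width):
    """Index of difficulty in the Shannon formulation: log2(D/W + 1)."""
    return math.log2(distance / width + 1)

def fit_line(xs, ys):
    """Ordinary least squares fit for MT = a + b * ID."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# Illustrative (not measured) amplitudes D, target widths W (mm), and
# movement times MT (ms) at four fast metronome rates:
D = [12.0, 10.0, 8.0, 6.0]
W = [2.0, 2.5, 3.0, 4.0]
MT = [180, 160, 145, 120]

ids = [fitts_id(d, w) for d, w in zip(D, W)]
a, b = fit_line(ids, MT)
# Fitts' law predicts a positive slope b: harder targets (higher ID)
# take longer to reach.
```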
The speed-curvature power law is a celebrated law of motor control expressing a relation between the kinematic property of speed and the geometric property of curvature. We aimed to assess whether speech movements obey this law just as movements from other domains do. We describe a metronome-driven speech elicitation paradigm designed to cover a wide range of speeds. We recorded via electromagnetic articulometry speech movements in sequences of the form /CV…/ from nine speakers (five German, four English) speaking at eight distinct rates. First, we demonstrate that the paradigm of metronome-driven manipulations results in speech movement data consistent with earlier reports on the kinematics of speech production. Second, analysis of our data in their full three-dimensions and using advanced numerical differentiation methods offers stronger evidence for the law than that reported in previous studies devoted to its assessment. Finally, we demonstrate the presence of a clear rate dependency of the power law’s parameters. The robustness of the speed-curvature relation in our datasets lends further support to the hypothesis that the power law is a general feature of human movement. We place our results in the context of other work in movement control and consider implications for models of speech production.
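The speed-curvature power law states v = k * kappa^(-1/3): speed drops where the path curves sharply. On an ellipse traversed at constant angular rate the relation holds exactly, which makes it a convenient sanity check for the log-log regression used to estimate the exponent. A self-contained sketch (ellipse axes and sample count are arbitrary choices):

```python
import math

# On the ellipse x = A cos t, y = B sin t, v = (AB)^(1/3) * kappa^(-1/3)
# holds exactly, so regressing log speed on log curvature should
# recover an exponent of -1/3.
A, B = 3.0, 1.0
ts = [i * 2 * math.pi / 200 for i in range(200)]

def speed(t):
    return math.hypot(-A * math.sin(t), B * math.cos(t))

def curvature(t):
    return A * B / (A * A * math.sin(t) ** 2
                    + B * B * math.cos(t) ** 2) ** 1.5

xs = [math.log(curvature(t)) for t in ts]
ys = [math.log(speed(t)) for t in ts]
n = len(ts)
mx, my = sum(xs) / n, sum(ys) / n
beta = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        / sum((x - mx) ** 2 for x in xs))
# beta is the fitted speed-curvature exponent; expected: -1/3.
```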
Spatiotemporal coordination in word-medial stop-lateral and s-stop clusters of American English
(2021)
This paper is concerned with the relation between syllabic organization and intersegmental spatiotemporal coordination using Electromagnetic Articulometry recordings from seven speakers of American English (henceforth, English). Whereas previous work on English has focused on word-initial clusters (preceding a vowel whose identity was not systematically varied), the present work examined word-medial clusters /pl, kl, sp, sk/ in the context of three different vowel heights (high, mid, low). Our results provide evidence for a global organization for the segments involved in these cluster-vowel combinations. This is reflected in a number of ways: compression of the prevocalic consonant and reduction of CV timing in the word-medial cluster case compared to its singleton paired word in both stop-lateral and s-stop clusters, early vowel initiation (as permitted by the clusters' phonetic properties), and presence of compensatory relations between phonetic properties of different segments or intersegmental transitions within each cluster. In other words, we find that the global organization presiding over the segments partaking in these word-medial tautosyllabic CCVs is pleiotropic, that is, simultaneously expressed in multiple phonetic exponents rather than via a privileged metric such as c-center stability or any other such given single measure employed in previous works.
In a cue-distractor task, speakers' response times (RTs) were found to speed up when they perceived a distractor syllable whose vowel was identical to the vowel in the syllable they were preparing to utter. At a more fine-grained level, subphonemic congruency between response and distractor (defined by a higher number of shared phonological features or higher acoustic proximity) was also found to be predictive of RT modulations. Furthermore, the findings indicate that perception of vowel stimuli embedded in syllables gives rise to robust and more consistent perceptuomotor compatibility effects (compared to isolated vowels) across different response-distractor vowel pairs.
Of Trees and Birds
(2019)
Gisbert Fanselow’s work has been invaluable and inspiring to many researchers working on syntax, morphology, and information structure, both from a theoretical and from an experimental perspective. This volume comprises a collection of articles dedicated to Gisbert on the occasion of his 60th birthday, covering a range of topics from these areas and beyond. The contributions have in common that in a broad sense they have to do with language structures (and thus trees), and that in a more specific sense they have to do with birds. They thus cover two of Gisbert’s major interests in- and outside of the linguistic world (and perhaps even at the interface).
We offer a dynamical model of phonological planning that provides a formal instantiation of how the speech production and perception systems interact during online processing. The model is developed on the basis of evidence from an experimental task that requires concurrent use of both systems, the so-called response-distractor task in which speakers hear distractor syllables while they are preparing to produce required responses. The model formalizes how ongoing response planning is affected by perception and accounts for a range of results reported across previous studies. It does so by explicitly addressing the setting of parameter values in representations. The key unit of the model is that of the dynamic field, a distribution of activation over the range of values associated with each representational parameter. The setting of parameter values takes place by the attainment of a stable distribution of activation over the entire field, stable in the sense that it persists even after the response cue in the above experiments has been removed. This and other properties of representations that have been taken as axiomatic in previous work are derived by the dynamics of the proposed model.
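The core mechanism described here, a field of activation over a parameter dimension that settles into a stable peak, can be illustrated with a minimal one-dimensional dynamic field. The parameters below (field size, resting level, input shape, self-excitation strength) are invented for the sketch and are not the paper's actual model:

```python
import math

# Minimal 1-D dynamic field sketch: activation u(x) over a planning
# parameter dimension relaxes toward a self-stabilized peak under
# localized input. All parameter values are illustrative.
N, dt, tau, h = 101, 0.1, 1.0, -2.0   # size, step, time constant, resting level

def gaussian_input(center, width, amp):
    return [amp * math.exp(-((i - center) ** 2) / (2 * width ** 2))
            for i in range(N)]

def sigmoid(u):
    """Output nonlinearity gating self-excitation."""
    return 1.0 / (1.0 + math.exp(-4.0 * u))

u = [h] * N
inp = gaussian_input(center=50, width=5.0, amp=6.0)
self_exc = 3.0   # local self-excitation (interaction kernel simplified to a point)

for _ in range(300):
    u = [ui + dt / tau * (-ui + h + s + self_exc * sigmoid(ui))
         for ui, s in zip(u, inp)]

peak = max(range(N), key=lambda i: u[i])
# With sufficient self-excitation, the peak can persist after the input
# is removed: the "stable distribution of activation" in the text.
```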
Perceptuo-motor effects of response-distractor compatibility in speech: beyond phonemic identity
(2015)
Previous studies have found faster response times in a production task when a speaker perceives a distractor syllable that is identical to the syllable they are required to produce. No study has found such effects when a response and a distractor are not identical but share parameters below the level of the phoneme. Results from Experiment 1 show some evidence of a response-time effect of response-distractor voicing congruency. Experiment 2 showed a robust effect of articulator congruency: perceiving a distractor that has the same articulatory organ as that implicated in the planned motor response speeds up response times. These results necessitate a more direct and specific formulation of the perception-production link than warranted by previous experimental evidence. Implications for theories of speech production are also discussed.
Drawing on phonology research within the generative linguistics tradition, stochastic methods, and notions from complex systems, we develop a modelling paradigm linking phonological structure, expressed in terms of syllables, to speech movement data acquired with 3D electromagnetic articulography and X-ray microbeam methods. The essential variable in the models is syllable structure. When mapped to discrete coordination topologies, syllabic organization imposes systematic patterns of variability on the temporal dynamics of speech articulation. We simulated these dynamics under different syllabic parses and evaluated simulations against experimental data from Arabic and English, two languages claimed to parse similar strings of segments into different syllabic structures. Model simulations replicated several key experimental results, including the fallibility of past phonetic heuristics for syllable structure, and exposed the range of conditions under which such heuristics remain valid. More importantly, the modelling approach consistently diagnosed syllable structure proving resilient to multiple sources of variability in experimental data including measurement variability, speaker variability, and contextual variability. Prospects for extensions of our modelling paradigm to acoustic data are also discussed.
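The diagnostic logic, that different syllabic parses impose different patterns of timing variability, can be sketched with a toy simulation: under a complex-onset parse the c-center-to-anchor interval is the more stable one, under a simplex parse the right-edge interval is. The generative model below (interval means, noise magnitudes) is invented for illustration and is not the paper's actual model:

```python
import random
import statistics

random.seed(1)

def simulate(parse, n=500):
    """Toy timing model (illustrative only). 'complex': the anchor is
    timed to the c-center of both consonants; 'simplex': to the
    immediately prevocalic consonant only. Returns the relative
    standard deviations (RSD) of the right-edge- and
    c-center-to-anchor intervals."""
    right, center = [], []
    for _ in range(n):
        c2 = 100 + random.gauss(0, 5)          # rightmost C midpoint (ms)
        c1 = c2 - 60 + random.gauss(0, 8)      # leftmost C midpoint (ms)
        if parse == "complex":
            anchor = (c1 + c2) / 2 + 150
        else:
            anchor = c2 + 150
        anchor += random.gauss(0, 3)
        right.append(anchor - c2)
        center.append(anchor - (c1 + c2) / 2)
    def rsd(xs):
        return statistics.stdev(xs) / statistics.mean(xs)
    return rsd(right), rsd(center)

r_simplex = simulate("simplex")
r_complex = simulate("complex")
# Diagnostic: the lower-RSD (more stable) interval flips with the
# parse, the kind of signature the modelling paradigm exploits.
```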