Previous research has shown that high phonotactic frequencies facilitate the production of regularly inflected verbs in English-learning children with specific language impairment (SLI) but not with typical development (TD). We asked whether this finding can be replicated for German, a language with a much more complex inflectional verb paradigm than English. Using an elicitation task, the production of inflected nonce verb forms (3rd person singular with -t suffix) with either high- or low-frequency subsyllables was tested in sixteen German-learning children with SLI (ages 4;1–5;1), sixteen TD children matched for chronological age (CA) and fourteen TD children matched for verbal age (VA) (ages 3;0–3;11). The findings revealed that children with SLI, but not CA- or VA-children, showed differential performance between the two types of verbs, producing more inflectional errors when the verb forms resulted in low-frequency subsyllables than when they resulted in high-frequency subsyllables, replicating the results from English-learning children.
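The notion of subsyllable (phonotactic) frequency used above can be sketched with a toy counter. Everything here is an illustrative assumption, not the study's materials: the word forms are invented, and "subsyllable" is simplified to the word-final biphone created by adding the -t suffix.

```python
# Toy sketch of phonotactic-frequency counting (hypothetical corpus and
# segmentation; the real study used German frequency counts).
from collections import Counter

def final_biphone_counts(corpus):
    """Count word-final two-segment sequences across a token corpus."""
    return Counter(word[-2:] for word in corpus if len(word) >= 2)

# Hypothetical corpus of already-inflected word tokens.
corpus = ["malt", "malt", "holt", "tanzt", "lacht"]
counts = final_biphone_counts(corpus)

# A nonce form whose inflected ending yields a frequent final biphone
# counts as high-frequency; a rare ending counts as low-frequency.
print(counts["lt"])  # -> 3
```

On this logic, a nonce verb whose -t form ends in a frequent sequence would be predicted to be easier for children with SLI than one ending in a rare sequence.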
We asked whether invariant phonetic indices for syllable structure can be identified in a language where word-initial consonant clusters, regardless of their sonority profile, are claimed to be parsed heterosyllabically. Four speakers of Moroccan Arabic were recorded, using Electromagnetic Articulography. Building on previous work, we employed temporal diagnostics for syllable structure, consisting of static correspondences between any given phonological organisation and its presumed phonetic indices. We show that such correspondences offer only a partial understanding of the relation between syllabic organisation and continuous indices of that organisation. We analyse the failure of the diagnostics and put forth a new approach in which different phonological organisations prescribe different ways in which phonetic indices change as phonetic parameters are scaled. The main finding is that invariance is found in these patterns of change, rather than in static correspondences between phonological constructs and fixed values for their phonetic indices.
The present study examines the effect of language experience on vocal emotion perception in a second language. Native speakers of French with varying levels of self-reported English ability were asked to identify emotions from vocal expressions produced by American actors in a forced-choice task, and to rate their pleasantness, power, alertness and intensity on continuous scales. Stimuli included emotionally expressive English speech (emotional prosody) and non-linguistic vocalizations (affect bursts), and a baseline condition with Swiss-French pseudo-speech. Results revealed effects of English ability on the recognition of emotions in English speech but not in non-linguistic vocalizations. Specifically, higher English ability was associated with less accurate identification of positive emotions, but not with the interpretation of negative emotions. Moreover, higher English ability was associated with lower ratings of pleasantness and power, again only for emotional prosody. This suggests that second language skills may sometimes interfere with emotion recognition from speech prosody, particularly for positive emotions.
Language and music share many rhythmic properties, such as variations in intensity and duration leading to repeating patterns. Perception of rhythmic properties may rely on cognitive networks that are shared between the two domains. If so, then variability in speech rhythm perception may relate to individual differences in musicality. To examine this possibility, the present study focuses on rhythmic grouping, which is assumed to be guided by a domain-general principle, the Iambic/Trochaic law, stating that sounds alternating in intensity are grouped as strong-weak, and sounds alternating in duration are grouped as weak-strong. German listeners completed a grouping task: They heard streams of syllables alternating in intensity, duration, or neither, and had to indicate whether they perceived a strong-weak or weak-strong pattern. Moreover, their music perception abilities were measured, and they filled out a questionnaire reporting their productive musical experience. Results showed that better musical rhythm perception ability was associated with more consistent rhythmic grouping of speech, while melody perception ability and productive musical experience were not. This suggests shared cognitive procedures in the perception of rhythm in music and speech. Also, the results highlight the relevance of considering individual differences in musicality when aiming to explain variability in prosody perception.
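The Iambic/Trochaic law as stated above can be expressed as a small decision rule. This is a minimal sketch, not the study's stimuli or analysis: the (intensity, duration) tuple representation and the alternation thresholds are illustrative assumptions.

```python
# Sketch of the Iambic/Trochaic law: intensity alternation -> strong-weak
# (trochaic) grouping; duration alternation -> weak-strong (iambic)
# grouping. Thresholds are arbitrary illustrative values.

def predicted_grouping(stream, db_threshold=3.0, ms_threshold=50.0):
    """stream: list of (intensity_db, duration_ms) tuples for a syllable
    stream; returns the grouping the law predicts a listener will hear."""
    odd, even = stream[0::2], stream[1::2]
    mean = lambda xs, i: sum(x[i] for x in xs) / len(xs)
    varies_intensity = abs(mean(odd, 0) - mean(even, 0)) >= db_threshold
    varies_duration = abs(mean(odd, 1) - mean(even, 1)) >= ms_threshold
    if varies_intensity and not varies_duration:
        return "strong-weak"  # trochaic grouping
    if varies_duration and not varies_intensity:
        return "weak-strong"  # iambic grouping
    return "ambiguous"        # no alternation (the study's neither-condition)

print(predicted_grouping([(70, 200), (60, 200)] * 3))  # -> strong-weak
print(predicted_grouping([(65, 150), (65, 300)] * 3))  # -> weak-strong
```

The study's finding can then be read as: listeners with better musical rhythm perception behaved more consistently like this rule across trials.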
Is there an ideal time window for language acquisition after which nativelike representation and processing are unattainable? Although this question has been heavily debated, no consensus has been reached. Here, we present evidence for a sensitive period in language development and show that it is specific to grammar. We conducted a masked priming task with a group of Turkish-German bilinguals and examined age of acquisition (AoA) effects on the processing of complex words. We compared a subtle but meaningful linguistic contrast, that between grammatical inflection and lexical-based derivation. The results showed a highly selective AoA effect on inflectional (but not derivational) priming. In addition, the effect displayed a discontinuity indicative of a sensitive period: Priming from inflected forms was nativelike when acquisition started before the age of 5 but declined with increasing AoA. We conclude that the acquisition of morphological rules expressing morphosyntactic properties is constrained by maturational factors.
Alcohol intoxication is known to affect many aspects of human behavior and cognition; one such affected system is articulation during speech production. Although much research has revealed that alcohol negatively impacts pronunciation in a first language (L1), there is only initial evidence suggesting a potential beneficial effect of inebriation on articulation in a non-native language (L2). The aim of this study was thus to compare the effect of alcohol consumption on pronunciation in an L1 and an L2. Participants who had ingested different amounts of alcohol provided speech samples in their L1 (Dutch) and L2 (English), and native speakers of each language subsequently rated the pronunciation of these samples on their intelligibility (for the L1) and accent nativelikeness (for the L2). These data were analyzed with generalized additive mixed modeling. Participants' blood alcohol concentration indeed negatively affected pronunciation in the L1, but it produced no significant effect on the L2 accent ratings. The expected negative impact of alcohol on L1 articulation can be explained by a reduction in fine motor control. We present two hypotheses to account for the absence of any effects of intoxication on L2 pronunciation: (1) there may be a reduction in L1 interference on L2 speech due to decreased motor control, or (2) alcohol may produce a differential effect on each of the two linguistic subsystems.
In a cue-distractor task, speakers' response times (RTs) were found to speed up when they perceived a distractor syllable whose vowel was identical to the vowel in the syllable they were preparing to utter. At a more fine-grained level, subphonemic congruency between response and distractor (defined as a higher number of shared phonological features or greater acoustic proximity) was also found to be predictive of RT modulations. Furthermore, the findings indicate that perception of vowel stimuli embedded in syllables gives rise to robust and more consistent perceptuomotor compatibility effects (compared to isolated vowels) across different response-distractor vowel pairs.
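The shared-feature congruency measure mentioned above can be sketched as set intersection over vowel feature specifications. The feature inventory below is a standard textbook-style description used for illustration only, not the study's actual feature coding.

```python
# Hypothetical sketch of feature-based response-distractor congruency:
# each vowel is a set of feature values; congruency is the number of
# values the two vowels share. Feature labels are illustrative.

VOWEL_FEATURES = {
    "i": {"high", "front", "unrounded"},
    "e": {"mid", "front", "unrounded"},
    "u": {"high", "back", "rounded"},
    "a": {"low", "central", "unrounded"},
}

def shared_features(v1, v2):
    """Number of feature values two vowels have in common."""
    return len(VOWEL_FEATURES[v1] & VOWEL_FEATURES[v2])

print(shared_features("i", "e"))  # -> 2 (front, unrounded)
print(shared_features("i", "u"))  # -> 1 (high)
```

On this measure, a response-distractor pair like /i/-/e/ would count as more congruent than /i/-/u/, and would accordingly be predicted to produce a larger RT modulation.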