Department Linguistik
Year of publication
- 2020 (72)
Document Type
- Article (58)
- Postprint (7)
- Doctoral Thesis (2)
- Part of Periodical (2)
- Bachelor Thesis (1)
- Master's Thesis (1)
- Review (1)
Keywords
- German (4)
- clefts (4)
- psycholinguistics (4)
- Akan (3)
- bilingualism (3)
- definite pseudoclefts (3)
- morphology (3)
- speech (3)
- English (2)
- Hungarian focus (2)
Institute
- Department Linguistik (72)
- Verband für Patholinguistik e. V. (vpl) (18)
- Extern (2)
- Department Erziehungswissenschaft (1)
- Department Psychologie (1)
- Humanwissenschaftliche Fakultät (1)
- Institut für Informatik und Computational Science (1)
- Institut für Physik und Astronomie (1)
- Interdisziplinäres Zentrum für Kognitive Studien (1)
Alcohol intoxication is known to affect many aspects of human behavior and cognition; one such affected system is articulation during speech production. Although much research has revealed that alcohol negatively impacts pronunciation in a first language (L1), there is only initial evidence suggesting a potential beneficial effect of inebriation on articulation in a non-native language (L2). The aim of this study was thus to compare the effect of alcohol consumption on pronunciation in an L1 and an L2. Participants who had ingested different amounts of alcohol provided speech samples in their L1 (Dutch) and L2 (English), and native speakers of each language subsequently rated the pronunciation of these samples on their intelligibility (for the L1) and accent nativelikeness (for the L2). These data were analyzed with generalized additive mixed modeling. Participants' blood alcohol concentration indeed negatively affected pronunciation in the L1, but it produced no significant effect on the L2 accent ratings. The expected negative impact of alcohol on L1 articulation can be explained by a reduction in fine motor control. We present two hypotheses to account for the absence of any effects of intoxication on L2 pronunciation: (1) there may be a reduction in L1 interference on L2 speech due to decreased motor control, or (2) alcohol may produce a differential effect on each of the two linguistic subsystems.
Gender stereotypes influence subjective beliefs about the world, and this is reflected in our use of language. But do gender biases in language transparently reflect subjective beliefs? Or is the process of translating thought to language itself biased? During the 2016 United States (N = 24,863) and 2017 United Kingdom (N = 2,609) electoral campaigns, we compared participants' beliefs about the gender of the next head of government with their use and interpretation of pronouns referring to the next head of government. In the United States, even when the female candidate was expected to win, she pronouns were rarely produced and induced substantial comprehension disruption. In the United Kingdom, where the incumbent female candidate was heavily favored, she pronouns were preferred in production but yielded no comprehension advantage. These and other findings suggest that the language system itself is a source of implicit biases above and beyond previously known biases, such as those measured by the Implicit Association Test.
Only the right noise?
(2020)
Seminal work by Werker and colleagues (Stager & Werker [1997] Nature, 388, 381-382) has found that 14-month-old infants do not show evidence of learning minimal pairs in the habituation-switch paradigm. However, when multiple speakers produce the minimal pair in acoustically variable ways, infants' performance improves in comparison to a single-speaker condition (Rost & McMurray [2009] Developmental Science, 12, 339-349). The current study further extends these results and assesses how different kinds of input variability affect 14-month-olds' minimal pair learning in the habituation-switch paradigm, testing German-learning infants. The first two experiments investigated word learning when the labels were spoken by a single speaker versus by multiple speakers. In the third experiment we studied whether non-acoustic variability, implemented as visual variability of the objects presented together with the labels, would also affect minimal pair learning. We found enhanced learning in the multiple-speaker compared to the single-speaker condition, confirming previous findings with English-learning infants. In contrast, visual variability of the presented objects did not support learning. These findings both confirm and better delimit the beneficial role of speech-specific variability in minimal pair learning. Finally, we review different proposals on the mechanisms by which variability confers benefits to learning and outline likely principles underlying this benefit. Among these, we highlight the multiplicity of acoustic cues signalling phonemic contrasts and the presence of relations among these cues. It is in these relations that we trace part of the source of the apparently paradoxical benefit of variability in learning.
In two experiments, we compared the dynamics of corticospinal excitability when processing visually or linguistically presented tool-oriented hand actions in native speakers and sequential bilinguals. In a third experiment we used the same procedure to test non-motor, low-level stimuli, i.e. scrambled images and pseudo-words.
Stimuli were presented in sequence: pictures (tool + tool-oriented hand action, or their scrambled counterparts) and words (tool noun + tool-action verb, or pseudo-words). Experiment 1 presented German linguistic stimuli to native speakers, while Experiment 2 presented English stimuli to non-native speakers. Experiment 3 tested Italian native speakers. Single-pulse transcranial magnetic stimulation (spTMS) was applied to the left motor cortex at five different timings: baseline, 200 ms after tool/noun onset, and 150, 350 and 500 ms after hand/verb onset, with motor-evoked potentials (MEPs) recorded from the first dorsal interosseous (FDI) and abductor digiti minimi (ADM) muscles.
We report strong similarities in the dynamics of corticospinal excitability across the visual and linguistic modalities. MEPs' suppression started as early as 150 ms and lasted for the duration of stimulus presentation (500 ms). Moreover, we show that this modulation is absent for stimuli with no motor content. Overall, our study supports the notion of a core, overarching system of action semantics shared by different modalities.
This paper addresses the relation between syllable structure and inter-segmental temporal coordination. The data examined are Electromagnetic Articulometry recordings from six speakers of Central Peninsular Spanish (henceforth, Spanish), producing words beginning with the clusters /pl, bl, kl, gl, pɾ, kɾ, tɾ/ as well as corresponding unclustered sonorant-initial words in three vowel contexts /a, e, o/. In our results, we find evidence for a global organization of the segments involved in these combinations. This is reflected in a number of ways: shortening of the prevocalic sonorant in the cluster-initial case compared to the unclustered case, reorganization of the relative timing of the internal CV subsequence (in a CCV) in the obstruent-lateral context, early vowel initiation, and a strong compensatory relation between the duration of the obstruent-to-lateral transition and the duration of the lateral. In other words, we find that the global organization presiding over the segments partaking in these tautosyllabic CCVs is pleiotropic, that is, simultaneously expressed over a set of different phonetic parameters rather than via a privileged metric such as c-center stability or any other such given single measure (employed in prior works).
Several studies (e.g., Wicha et al., 2003b; DeLong et al., 2005) have shown that readers use information from the sentential context to predict nouns (or some of their features), and that predictability effects can be inferred from the EEG signal in determiners or adjectives appearing before the predicted noun. While these findings provide evidence for the pre-activation proposal, recent replication attempts together with inconsistencies in the results from the literature cast doubt on the robustness of this phenomenon. Our study presents the first attempt to use the effect of gender on predictability in German to study the pre-activation hypothesis, capitalizing on the fact that all German nouns have a gender and that their preceding determiners can show an unambiguous gender marking when the noun phrase has accusative case. Despite having a relatively large sample size (of 120 subjects), both our preregistered and exploratory analyses failed to yield conclusive evidence for or against an effect of pre-activation. The sign of the effect is, however, in the expected direction: the more unexpected the gender of the determiner, the larger the negativity. The recent, inconclusive replication attempts by Nieuwland et al. (2018) and others also show effects with signs in the expected direction. We conducted a Bayesian random-effects meta-analysis using our data and the publicly available data from these recent replication attempts. Our meta-analysis shows a relatively clear but very small effect that is consistent with the pre-activation account and demonstrates a very important advantage of the Bayesian data analysis methodology: we can incrementally accumulate evidence to obtain increasingly precise estimates of the effect of interest.
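The meta-analysis reported here is Bayesian; as a rough, standard-library-only illustration of the random-effects pooling idea, the sketch below implements the classical (frequentist) DerSimonian-Laird estimator instead. The effect sizes and variances are invented for illustration, not the replication data.

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects meta-analysis via the DerSimonian-Laird estimator.

    effects:   per-study effect estimates (e.g., ERP amplitude differences)
    variances: per-study sampling variances
    Returns (pooled_effect, pooled_se, tau2), where tau2 is the estimated
    between-study heterogeneity variance.
    """
    k = len(effects)
    w = [1.0 / v for v in variances]                  # fixed-effect weights
    sw = sum(w)
    mean_fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    q = sum(wi * (yi - mean_fixed) ** 2 for wi, yi in zip(w, effects))
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)                # heterogeneity estimate
    w_star = [1.0 / (v + tau2) for v in variances]    # random-effects weights
    sw_star = sum(w_star)
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sw_star
    se = math.sqrt(1.0 / sw_star)
    return pooled, se, tau2

# Hypothetical effects from five studies (illustrative numbers only)
effects = [-0.30, -0.10, -0.25, 0.05, -0.15]
variances = [0.04, 0.02, 0.05, 0.03, 0.02]
pooled, se, tau2 = dersimonian_laird(effects, variances)
```

The random-effects weights shrink toward equality as tau2 grows, so heterogeneous studies are down-weighted less aggressively than under a fixed-effect model.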
While much attention has been devoted to the cognition of aging multilingual individuals, little is known about how age affects their grammatical processing. We assessed subject-verb number-agreement processing in sixty native (L1) and sixty non-native (L2) speakers of German (age: 18-84) using a binary-choice sentence-completion task, along with various individual-differences tests. Our results revealed differential effects of age on L1 and L2 speakers' accuracy and reaction times (RTs). L1 speakers' RTs increased with age, and they became more susceptible to attraction errors. In contrast, L2 speakers' RTs decreased, once age-related slowing was controlled for, and their overall accuracy increased. We interpret this as resulting from increased L2 exposure. Moreover, L2 speakers' accuracy/RT patterns were more strongly affected by cognitive variables (working memory, interference control) than L1 speakers'. Our findings show that as regards bilinguals' grammatical processing ability, aging is associated with both gains (in experience) and losses (in cognitive abilities).
This study compares the development of prosodic processing in French- and German-learning infants. The emergence of language-specific perception of phrase boundaries was directly tested using the same stimuli across these two languages. French-learning 6- and 8-month-olds (Experiments 1 and 2) and German-learning 6- and 8-month-olds (Experiment 3) listened to the same French noun sequences with or without major prosodic boundaries ([Loulou et Manou] [et Nina]; [Loulou et Manou et Nina], respectively). The boundaries were either naturally cued (Experiment 1), or cued exclusively by pitch and duration (Experiments 2 and 3). French-learning 6- and 8-month-olds both perceived the natural boundary, but neither perceived the boundary when only two cues were present. In contrast, German-learning infants develop from not perceiving the two-cue boundary at 6 months to perceiving it at 8 months, just like German-learning 8-month-olds listening to German (Wellmann, Holzgrefe, Truckenbrodt, Wartenburger, & Höhle, 2012). In a control experiment (Experiment 4), we found little difference between German and French adult listeners, suggesting that French listeners later catch up with German listeners. Taken together, these cross-linguistic differences in the perception of identical stimuli provide direct evidence for language-specific development of prosodic boundary perception.
Child characteristics, family factors, and preschool factors are all found to affect the rate of bilingual children's vocabulary development in a heritage language (HL). However, what remains unknown is the relative importance of these three sets of factors in HL vocabulary growth. The current study explored this issue with 457 Singaporean preschool children who speak either Mandarin, Malay, or Tamil as their HL. A series of internal factors (e.g., non-verbal intelligence) and external factors (e.g., maternal educational level) were used to predict children's HL vocabulary growth over a year at preschool with linear mixed effects models. The results demonstrated that external factors (i.e., family and preschool factors) are relatively more important than child characteristics in enhancing bilingual children's HL vocabulary growth. Specifically, children's language input quantity (i.e., home language dominance), input quality (e.g., number of books in the HL), and HL input quantity at school (i.e., the time between two waves of tests at preschool) predict the participants' HL vocabulary growth, with initial vocabulary controlled for. The relative importance of external factors in bilingual children's HL vocabulary development is attributed to the general bilingual setting in Singapore, where the HL is taken as a subject to learn at preschool and children have fairly limited exposure to the HL in general. The limited amount of input might not suffice to trigger the full expression of internal resources. Our findings suggest the crucial roles that caregivers and preschools play in early HL education, and in particular the necessity of more parental involvement in early HL learning.
This study provides a novel approach for testing the universality of perceptual biases by looking at speech processing in simultaneous bilingual adults learning two languages that support the maintenance of this bias to different degrees. Specifically, we investigated the Iambic/Trochaic Law, an assumed universal grouping bias, in simultaneous French-German bilinguals, presenting them with streams of syllables varying in intensity, duration or neither and asking them whether they perceived them as strong-weak or weak-strong groupings. Results showed robust, consistent grouping preferences. A comparison to monolinguals from previous studies revealed that they pattern with German-speaking monolinguals, and differ from French-speaking monolinguals. The distribution of simultaneous bilinguals' individual performance was best explained by a model fitting a unimodal (not bimodal) distribution, failing to support two subgroups of language dominance. Moreover, neither language experience nor language context predicted their performance. These findings suggest a special role for universal biases in simultaneous bilinguals.
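The unimodal-versus-bimodal model comparison described above can be illustrated with a small, self-contained sketch: fit a single Gaussian and a two-component Gaussian mixture (via EM) to a set of preference scores, then compare BIC values. This is a generic illustration of the model-comparison logic on invented data, not the study's actual mixture-modelling procedure.

```python
import math
import random

def gauss_logpdf(x, mu, var):
    return -0.5 * (math.log(2 * math.pi * var) + (x - mu) ** 2 / var)

def fit_unimodal(xs):
    """MLE of a single Gaussian; returns (log-likelihood, n_params)."""
    n = len(xs)
    mu = sum(xs) / n
    var = max(sum((x - mu) ** 2 for x in xs) / n, 1e-6)
    return sum(gauss_logpdf(x, mu, var) for x in xs), 2

def fit_bimodal(xs, iters=200):
    """EM for a two-component 1-D Gaussian mixture; returns (log-likelihood, n_params)."""
    n = len(xs)
    xs_sorted = sorted(xs)
    grand_mu = sum(xs) / n
    v0 = max(sum((x - grand_mu) ** 2 for x in xs) / n, 1e-6)
    mu = [xs_sorted[n // 4], xs_sorted[(3 * n) // 4]]  # init means at quartiles
    var = [v0, v0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        resp = []
        for x in xs:  # E-step: responsibilities via log-sum-exp
            lw = [math.log(pi[k]) + gauss_logpdf(x, mu[k], var[k]) for k in range(2)]
            m = max(lw)
            p = [math.exp(l - m) for l in lw]
            s = sum(p)
            resp.append([pk / s for pk in p])
        for k in range(2):  # M-step: update weights, means, variances
            nk = max(sum(r[k] for r in resp), 1e-9)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, xs)) / nk, 1e-6)
            pi[k] = nk / n
    ll = 0.0
    for x in xs:
        lw = [math.log(pi[k]) + gauss_logpdf(x, mu[k], var[k]) for k in range(2)]
        m = max(lw)
        ll += m + math.log(sum(math.exp(l - m) for l in lw))
    return ll, 5

def bic(loglik, n_params, n):
    return -2 * loglik + n_params * math.log(n)

# Hypothetical grouping-preference scores forming a single cluster
# (i.e., no separate dominance subgroups); illustrative data only
random.seed(1)
scores = [random.gauss(0.6, 0.1) for _ in range(200)]
bic_uni = bic(*fit_unimodal(scores), len(scores))
bic_bim = bic(*fit_bimodal(scores), len(scores))
# A lower BIC for the unimodal fit favours a single population
```

Because the one-component model is nested in the mixture, the mixture's log-likelihood is never meaningfully lower; BIC's extra-parameter penalty is what lets the unimodal model win when the data form one cluster.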
A growing body of experimental syntactic research has revealed substantial variation in the magnitude of island effects, not only across languages but also across different grammatical constructions. Adopting a well-established experimental design, the present study examines island effects in Spanish using a speeded acceptability judgment task. To quantify variation across grammatical constructions, we tested extraction from four different types of structure (subjects, complex noun phrases, adjuncts and interrogative clauses). The results of Bayesian mixed effects modelling showed that the size of island effects varied between constructions, such that there was clear evidence of subject, adjunct and interrogative island effects, but not of complex noun phrase island effects. We also failed to find evidence that island effects were modulated by participants' working memory capacity as measured by an operation span task. To account for our results, we suggest that variability in island effects across constructions may be due to the interaction of syntactic, semantic-pragmatic and processing factors, which may affect island types differentially due to their idiosyncratic properties.
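The "well-established experimental design" referred to above is a 2x2 factorial design in which an island effect surfaces as a superadditive interaction. A minimal sketch of that differences-in-differences logic, with invented ratings and a sign convention chosen purely for illustration:

```python
def island_effect(ratings):
    """Differences-in-differences score from a 2x2 factorial design
    (structure: island vs. non-island x extraction: short vs. long).

    ratings: dict mapping (structure, extraction) pairs to mean
    acceptability. Under the sign convention used here, a positive score
    means the acceptability penalty for long extraction is superadditively
    larger inside the island structure, i.e., an island effect.
    """
    cost_nonisland = ratings[("non-island", "short")] - ratings[("non-island", "long")]
    cost_island = ratings[("island", "short")] - ratings[("island", "long")]
    return cost_island - cost_nonisland

# Illustrative condition means (invented, not the study's data)
adjunct = {
    ("non-island", "short"): 6.0,
    ("non-island", "long"): 5.5,
    ("island", "short"): 5.8,
    ("island", "long"): 3.0,
}
score = island_effect(adjunct)
```

A construction like the complex noun phrase case above would show a score near zero: long extraction is penalized about equally with or without the island configuration.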
The attentional bias to negative information enables humans to quickly identify and to respond appropriately to potentially threatening situations. Because of its adaptive function, the enhanced sensitivity to negative information is expected to represent a universal trait, shared by all humans regardless of their cultural background. However, existing research focuses almost exclusively on humans from Western industrialized societies, who are not representative of the human species. Therefore, we compare humans from two distinct cultural contexts: adolescents and children from Germany, a Western industrialized society, and from the ǂAkhoe Haiǁom, semi-nomadic hunter-gatherers in Namibia. We predicted that both groups show an attentional bias toward negative facial expressions as compared to neutral or positive faces. We used eye-tracking to measure their fixation duration on facial expressions depicting different emotions, including negative (fear, anger), positive (happy), and neutral faces. Both the Germans and the ǂAkhoe Haiǁom gazed longer at fearful faces but more briefly at angry faces, challenging the notion of a general bias toward negative emotions. For happy faces, fixation durations varied between the two groups, suggesting more flexibility in the response to positive emotions. Our findings emphasize the need to place research on emotion perception into an evolutionary, cross-cultural comparative framework that considers the adaptive significance of specific emotions, rather than differentiating between positive and negative information, and enables systematic comparisons across participants from diverse cultural backgrounds.
Perceptuomotor compatibility between phonemically identical spoken and perceived syllables has been found to speed up response times (RTs) in speech production tasks. However, research on compatibility effects between perceived and produced stimuli at the subphonemic level is limited. Using a cue-distractor task, we investigated the effects of phonemic and subphonemic congruency in pairs of vowels. On each trial, a visual cue prompted individuals to produce a response vowel, and after the visual cue appeared a distractor vowel was auditorily presented while speakers were planning to produce the response vowel. The results revealed effects on RTs due to phonemic congruency (same vs. different vowels) between the response and distractor vowels, which resemble effects previously seen for consonants. Beyond phonemic congruency, we assessed how RTs are modulated as a function of the degree of subphonemic similarity between the response and distractor vowels. Higher similarity between the response and distractor in terms of phonological distance-defined by number of mismatching phonological features-resulted in faster RTs. However, the exact patterns of RTs varied across response-distractor vowel pairs. We discuss how different assumptions about phonological feature representations may account for the different patterns observed in RTs across response-distractor pairs. Our findings on the effects of perceived stimuli on produced speech at a more detailed level of representation than phonemic identity necessitate a more direct and specific formulation of the perception-production link. Additionally, these results extend previously reported perceptuomotor interactions mainly involving consonants to vowels.
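The notion of phonological distance used above (number of mismatching phonological features) amounts to a Hamming distance over feature specifications. A minimal sketch, assuming a toy four-feature vowel inventory that is not the study's actual feature set:

```python
# Hypothetical binary feature specifications for a few vowels
# (illustrative only; the study's feature system may differ)
VOWELS = {
    "i": {"high": 1, "low": 0, "back": 0, "round": 0},
    "u": {"high": 1, "low": 0, "back": 1, "round": 1},
    "a": {"high": 0, "low": 1, "back": 1, "round": 0},
    "e": {"high": 0, "low": 0, "back": 0, "round": 0},
    "o": {"high": 0, "low": 0, "back": 1, "round": 1},
}

def feature_distance(v1, v2):
    """Number of mismatching features between two vowels (Hamming distance)."""
    f1, f2 = VOWELS[v1], VOWELS[v2]
    return sum(1 for feat in f1 if f1[feat] != f2[feat])
```

On the abstract's finding, a response-distractor pair like /i/-/e/ (distance 1 here) would be predicted to yield faster RTs than /i/-/a/ (distance 3), with the exact pattern depending on the feature representation assumed.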
In the present study, we investigated younger and older Persian preschoolers' response tendency and accuracy toward yes/no questions about a coloring activity. Overall, 107 three- to four-year-old and five- to six-year-old children were asked positive and negative yes/no questions about a picture-coloring activity. The questions focused on three contents, namely actions, environment, and person. As for response tendency, children showed a compliance tendency: they provided yes responses to positively formed questions and no responses to negatively formed ones. Children, especially younger ones, were more compliant toward positive questions, and this tendency decreased with age. In addition, the results revealed that children's compliance tendency was highest for environment inquiries. Concerning response accuracy, the effects of age and question content were significant: older children provided more accurate responses than their younger counterparts, especially to yes/no questions about the actions performed during the activity. The findings suggest that younger and older children's response accuracy and tendency differ depending on the format and the content of yes/no questions.
Cue-based retrieval theories in sentence processing predict two classes of interference effect: (i) Inhibitory interference is predicted when multiple items match a retrieval cue: cue-overloading leads to an overall slowdown in reading time; and (ii) Facilitatory interference arises when a retrieval target as well as a distractor only partially match the retrieval cues; this partial matching leads to an overall speedup in retrieval time. Inhibitory interference effects are widely observed, but facilitatory interference apparently has an exception: reflexives have been claimed to show no facilitatory interference effects. Because the claim is based on underpowered studies, we conducted a large-sample experiment that investigated both facilitatory and inhibitory interference. In contrast to previous studies, we find facilitatory interference effects in reflexives. We also present a quantitative evaluation of the cue-based retrieval model of Engelmann, Jäger, and Vasishth (2019).
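The two interference classes described above can be sketched with a toy activation-based retrieval model loosely in the spirit of cue-based retrieval accounts: associative strength is divided by cue fan, retrieval probability follows a softmax over activations, and latency falls exponentially with activation. All parameter values here are arbitrary; this is not the Engelmann, Jäger, and Vasishth (2019) implementation.

```python
import math

S_MAX = 1.5   # maximum associative strength (arbitrary value)
F = 1.0       # latency scaling factor (arbitrary value)

def expected_latency(items, cues):
    """items: list of feature sets held in memory; cues: retrieval cue features.

    Activation of an item = sum over its matching cues of (S_MAX - ln(fan)),
    where fan = number of items matching that cue. Retrieval probability is a
    softmax over activations; expected latency averages F * exp(-activation)
    over that retrieval distribution.
    """
    fan = {c: sum(1 for it in items if c in it) for c in cues}
    acts = [sum(S_MAX - math.log(fan[c]) for c in cues if c in it)
            for it in items]
    z = sum(math.exp(a) for a in acts)
    probs = [math.exp(a) / z for a in acts]
    return sum(p * F * math.exp(-a) for p, a in zip(probs, acts))

CUES = {"+refl", "+local"}

# Inhibitory interference: distractor also fully matches both cues
base_full = expected_latency([{"+refl", "+local"}, set()], CUES)
inhib     = expected_latency([{"+refl", "+local"}, {"+refl", "+local"}], CUES)

# Facilitatory interference: target and distractor each match one cue
base_part = expected_latency([{"+refl"}, set()], CUES)
facil     = expected_latency([{"+refl"}, {"+local"}], CUES)
```

With these settings, the fully matching distractor raises the cue fan and slows expected retrieval (inhibitory interference), while under partial matching the second partially matching item raises the chance of some fast retrieval and speeds it up (facilitatory interference).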
Previous studies have suggested that distinctive case marking on noun phrases reduces attraction effects in production, i.e., the tendency to produce a verb that agrees with a nonsubject noun. An important open question is whether attraction effects are modulated by case information in sentence comprehension. To address this question, we conducted three attraction experiments in Armenian, a language with a rich and productive case system. The experiments showed clear attraction effects, and they also revealed an overall role of case marking such that participants showed faster response and reading times when the nouns in the sentence had different case. However, we found little indication that distinctive case marking modulated attraction effects. We present a theoretical proposal of how case and number information may be used differentially during agreement licensing in comprehension. More generally, this work sheds light on the nature of the retrieval cues deployed when completing morphosyntactic dependencies.
This study focuses on the ability of the adult sound system to reorganise as a result of experience. Participants were exposed to existing and novel syllables in either a listening task or a production task over the course of two days. On the third day, they named disyllabic pseudowords while their electroencephalogram was recorded. The first syllable of these pseudowords had either been trained in the auditory modality, trained in production or had not been trained. The EEG response differed between existing and novel syllables for untrained but not for trained syllables, indicating that training novel sound sequences modifies the processes involved in the production of these sequences to make them more similar to those underlying the production of existing sound sequences. Effects of training on the EEG response were observed both after production training and mere auditory exposure.