Gender stereotypes influence subjective beliefs about the world, and this is reflected in our use of language. But do gender biases in language transparently reflect subjective beliefs? Or is the process of translating thought to language itself biased? During the 2016 United States (N = 24,863) and 2017 United Kingdom (N = 2,609) electoral campaigns, we compared participants' beliefs about the gender of the next head of government with their use and interpretation of pronouns referring to the next head of government. In the United States, even when the female candidate was expected to win, "she" pronouns were rarely produced and induced substantial comprehension disruption. In the United Kingdom, where the incumbent female candidate was heavily favored, "she" pronouns were preferred in production but yielded no comprehension advantage. These and other findings suggest that the language system itself is a source of implicit biases above and beyond previously known biases, such as those measured by the Implicit Association Test.
This study examines the discourse basis for referent accessibility and its relation to the choice of referring expressions by children with Autism Spectrum Disorder (ASD) and typically developing children. The aim is to delineate how the linguistic and extra-linguistic context affects referent accessibility to the speaker. The study also examines the degree to which accessibility effects are modulated by cognitive factors such as working memory capacity. In the study, the contrast levels between the referent and a competitor (one contrast/two contrasts) and the syntactic prominence of the referent (subject/object position in the preceding question) were manipulated in an elicited production task. The results provide evidence that the referring expressions of children with ASD correlate with the discourse status of referents to a similar extent as in typically developing controls. All children were more likely to refer with lexical NPs to referents that contrasted on two levels with a highly prominent competitor, compared to referents that contrasted on one level. They were also more likely to produce pronouns for referents previously mentioned in the subject than the object position. The effect of both discourse factors was modulated by the age and working memory capacity of the children with and without ASD. Accordingly, the study suggests that children with ASD do not generally differ from children with typical development in their referential choices when the discourse status of a referent allows them to model the referent's accessibility from their own discourse perspective in a way that is modulated by working memory capacity.
We argue that natural language can be usefully described as quasi-compositional and we suggest that deep learning-based neural language models bear long-term promise to capture how language conveys meaning. We also note that a successful account of human language processing should explain both the outcome of the comprehension process and the continuous internal processes underlying this performance. These points motivate our discussion of a neural network model of sentence comprehension, the Sentence Gestalt model, which we have used to account for the N400 component of the event-related brain potential (ERP), which tracks meaning processing as it happens in real time. The model, which shares features with recent deep learning-based language models, simulates N400 amplitude as the automatic update of a probabilistic representation of the situation or event described by the sentence, corresponding to a temporal difference learning signal at the level of meaning. We suggest that this process happens relatively automatically, and that sometimes a more-controlled attention-dependent process is necessary for successful comprehension, which may be reflected in the subsequent P600 ERP component. We relate this account to current deep learning models as well as classic linguistic theory, and use it to illustrate a domain general perspective on some specific linguistic operations postulated based on compositional analyses of natural language. This article is part of the theme issue 'Towards mechanistic models of meaning composition'.