Agreement attraction is a cross-linguistic phenomenon where a verb occasionally agrees not with its subject, as required by grammar, but instead with an unrelated noun ("The key to the cabinets were …").
Despite the clear violation of grammatical rules, comprehenders often rate these sentences as acceptable. Contenders for explaining agreement attraction fall into two broad classes: Morphosyntactic accounts specifically designed to explain agreement attraction, and more general sentence processing models, such as the Lewis and Vasishth model, which explain attraction as a consequence of how linguistic structure is stored and accessed in content-addressable memory.
In the present research, we disambiguate between these two classes by testing a surprising prediction made by the Lewis and Vasishth model but not by the morphosyntactic accounts, namely, that attraction should not be limited to morphosyntax, but that semantic features of unrelated nouns should equally induce attraction.
A recent study by Cunnings and Sturt provided initial evidence that this may be the case. Here, we report three single-trial experiments in English that compared semantic and agreement attraction and tested whether and how the two interact.
All three experiments showed strong semantically induced attraction effects closely mirroring agreement attraction effects. We complement these results with computational simulations which confirmed that the Lewis and Vasishth model can faithfully reproduce the observed results.
In sum, our findings suggest that attraction is a more general phenomenon than is commonly believed, and therefore favor more general sentence processing models, such as the Lewis and Vasishth model.
In 2019, the Journal of Memory and Language instituted an open data and code policy; this policy requires that, as a rule, code and data be released at the latest upon publication. How effective is this policy? We compared 59 papers published before and 59 papers published after the policy took effect. After the policy was in place, the rate of data sharing increased by more than 50%. We further looked at whether papers published under the open data policy were reproducible, in the sense that the published results should be possible to regenerate given the data and, when it was provided, the code. For 8 of the 59 papers, data sets were inaccessible. The reproducibility rate ranged from 34% to 56%, depending on the reproducibility criteria. The strongest predictor of whether an attempt to reproduce would be successful was the presence of the analysis code: it increased the probability of reproducing the reported results by almost 40%. We propose two simple steps that can increase the reproducibility of published papers: share the analysis code, and attempt to reproduce one's own analysis using only the shared materials.