Much work has shown that differences in the timecourse of language processing are central to comparing native (L1) and non-native (L2) speakers. However, estimating the onset of experimental effects in timecourse data presents several statistical problems, including multiple comparisons and autocorrelation. We compare several approaches to tackling these problems and illustrate them using an L1-L2 visual world eye-tracking dataset. We then present a bootstrapping procedure that allows estimation not only of an effect onset, but also of a temporal confidence interval around this divergence point. We describe how divergence points can be used to demonstrate timecourse differences between speaker groups or between experimental manipulations, two issues that are important for evaluating accounts of L2 processing. We discuss possible extensions of the bootstrapping procedure, including estimating divergence points for individual speakers and correlating them with individual-level factors such as L2 exposure and proficiency. Data and an analysis tutorial are available at https://osf.io/exbmk/.
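The divergence-point logic described above can be sketched in a few lines. This is a minimal illustration, not the authors' actual procedure (their tutorial at the OSF link uses eye-tracking proportions and a specific onset criterion): here we assume simulated timecourse data, define the onset as the first bin where a by-participant t statistic stays above a threshold for several consecutive bins, and bootstrap over participants to get a temporal confidence interval. The threshold, run length, and simulated effect are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated timecourse data: 40 participants x 100 time bins.
# The condition difference ramps up linearly from bin 50 onward.
n_subj, n_bins = 40, 100
effect = np.clip(np.arange(n_bins) - 50, 0, None) * 0.05
data = rng.normal(0.0, 1.0, (n_subj, n_bins)) + effect

def onset(d, t_crit=2.0, run=5):
    """First bin where the one-sample t statistic across participants
    exceeds t_crit for `run` consecutive bins; None if no such run."""
    t = d.mean(0) / (d.std(0, ddof=1) / np.sqrt(d.shape[0]))
    sig = t > t_crit
    for i in range(len(sig) - run + 1):
        if sig[i:i + run].all():
            return i
    return None

# Bootstrap: resample participants with replacement and re-estimate
# the onset, yielding a distribution of divergence points.
boots = []
for _ in range(1000):
    idx = rng.integers(0, n_subj, n_subj)
    o = onset(data[idx])
    if o is not None:
        boots.append(o)

est = onset(data)
ci = np.percentile(boots, [2.5, 97.5])
print(est, ci)  # point estimate and 95% temporal confidence interval
```

The same resampling distribution could in principle be compared across speaker groups or manipulations, which is the use case the abstract highlights.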
Language production ultimately aims to convey meaning. Yet words differ widely in the richness and density of their semantic representations, and these differences impact conceptual and lexical processes during speech planning. Here, we replicated the recent finding that semantic richness, measured as the number of associated semantic features according to semantic feature production norms, facilitates object naming. In contrast, intercorrelational semantic feature density, measured as the degree of intercorrelation of a concept's features, presumably resulting in the coactivation of closely related concepts, has an inhibitory influence. We replicated the behavioral effects and investigated their relative time course and electrophysiological correlates. Both the facilitatory effect of high semantic richness and the inhibitory influence of high feature density were reflected in an increased posterior positivity starting at about 250 ms, in line with previous reports of posterior positivities in paradigms employing contextual manipulations to induce semantic interference during language production. Furthermore, amplitudes at the same posterior electrode sites were positively correlated with object naming times between about 230 and 380 ms. The observed effects follow naturally from the assumption of conceptual facilitation and simultaneous lexical competition, and are difficult to explain under language production theories that dismiss lexical competition.
Experiments in research on memory, language, and other areas of cognitive science are increasingly being analyzed using Bayesian methods. This has been facilitated by the development of probabilistic programming languages such as Stan and easily accessible front-end packages such as brms. The utility of Bayesian methods, however, ultimately depends on the relevance of the Bayesian model, in particular whether it accurately captures the structure of the data and the data analyst's domain expertise. Even with powerful software, the analyst is responsible for verifying the utility of their model. To demonstrate this point, we introduce a principled Bayesian workflow (Betancourt, 2018) to cognitive science. Using a concrete working example, we describe the basic questions one should ask of a model: prior predictive checks, computational faithfulness, model sensitivity, and posterior predictive checks. The running example for demonstrating the workflow is reading-time data with a linguistic manipulation of object versus subject relative clause sentences. This principled Bayesian workflow also demonstrates how to use domain knowledge to inform prior distributions. It provides guidelines and checks for valid data analysis: avoiding overfitting complex models to noise, and capturing the relevant structure of the data in a probabilistic model. Given the increasing use of Bayesian methods, we aim to discuss how these methods can be properly employed to obtain robust answers to scientific questions.
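The first step of the workflow above, the prior predictive check, can be illustrated without Stan or brms: draw parameters from the priors, simulate data from the model, and ask whether the simulated data are scientifically plausible before seeing any real data. The sketch below assumes a lognormal model of reading times with a sum-coded object- vs. subject-relative-clause effect; the specific priors are illustrative placeholders, not those used in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

# Prior predictive check sketch (assumed priors, for illustration only):
# reading times ~ LogNormal(mu + beta * cond, sigma), with cond sum-coded
# as -0.5 (subject relative) vs +0.5 (object relative).
n_sims, n_obs = 1000, 100
mu = rng.normal(6.0, 0.5, n_sims)             # prior on log mean RT (~400 ms scale)
beta = rng.normal(0.0, 0.1, n_sims)           # prior on the OR-vs-SR effect (log scale)
sigma = np.abs(rng.normal(0.0, 0.5, n_sims))  # half-normal prior on residual sd

cond = np.tile([-0.5, 0.5], n_obs // 2)

# One simulated dataset per prior draw (parameter arrays broadcast
# against the condition vector).
sim = rng.lognormal(mu[:, None] + beta[:, None] * cond, sigma[:, None])

# Summarize the prior predictive distribution: if the bulk of simulated
# median reading times were, say, 2 ms or 2 minutes, the priors would
# need revising before fitting the model.
med = np.median(sim, axis=1)
print(np.percentile(med, [5, 50, 95]))
```

Posterior predictive checks follow the same pattern with parameters drawn from the posterior instead of the prior; in practice one would generate both with brms/Stan rather than by hand.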