TY - JOUR A1 - Stone, Kate A1 - von der Malsburg, Titus Raban A1 - Vasishth, Shravan T1 - The effect of decay and lexical uncertainty on processing long-distance dependencies in reading JF - PeerJ N2 - To make sense of a sentence, a reader must keep track of dependent relationships between words, such as between a verb and its particle (e.g. turn the music down). In languages such as German, verb-particle dependencies often span long distances, with the particle only appearing at the end of the clause. This means that it may be necessary to process a large amount of intervening sentence material before the full verb of the sentence is known. To facilitate processing, previous studies have shown that readers can preactivate the lexical information of neighbouring upcoming words, but less is known about whether such preactivation can be sustained over longer distances. We asked the question, do readers preactivate lexical information about long-distance verb particles? In one self-paced reading and one eye tracking experiment, we delayed the appearance of an obligatory verb particle that varied only in the predictability of its lexical identity. We additionally manipulated the length of the delay in order to test two contrasting accounts of dependency processing: that increased distance between dependent elements may sharpen expectation of the distant word and facilitate its processing (an antilocality effect), or that it may slow processing via temporal activation decay (a locality effect). We isolated decay by delaying the particle with a neutral noun modifier containing no information about the identity of the upcoming particle, and no known sources of interference or working memory load. Under the assumption that readers would preactivate the lexical representations of plausible verb particles, we hypothesised that a smaller number of plausible particles would lead to stronger preactivation of each particle, and thus higher predictability of the target. This in turn should have made predictable target particles more resistant to the effects of decay than less predictable target particles. The eye tracking experiment provided evidence that higher predictability did facilitate reading times, but found evidence against any effect of decay or its interaction with predictability. The self-paced reading study provided evidence against any effect of predictability or temporal decay, or their interaction. In sum, we provide evidence from eye movements that readers preactivate long-distance lexical content and that adding neutral sentence information does not induce detectable decay of this activation. The findings are consistent with accounts suggesting that delaying dependency resolution may only affect processing if the intervening information either confirms expectations or adds to working memory load, and that temporal activation decay alone may not be a major predictor of processing time. KW - reading KW - comprehension KW - temporal decay KW - preactivation KW - long distance KW - dependencies KW - entropy KW - psycholinguistics KW - locality KW - antilocality Y1 - 2020 U6 - https://doi.org/10.7717/peerj.10438 SN - 2167-8359 VL - 8 PB - PeerJ Inc. 
CY - London ER - TY - JOUR A1 - Smith, Garrett A1 - Vasishth, Shravan T1 - A principled approach to feature selection in models of sentence processing JF - Cognitive science : a multidisciplinary journal of anthropology, artificial intelligence, education, linguistics, neuroscience, philosophy, psychology ; journal of the Cognitive Science Society N2 - Among theories of human language comprehension, cue-based memory retrieval has proven to be a useful framework for understanding when and how processing difficulty arises in the resolution of long-distance dependencies. Most previous work in this area has assumed that very general retrieval cues like [+subject] or [+singular] do the work of identifying (and sometimes misidentifying) a retrieval target in order to establish a dependency between words. However, recent work suggests that general, handpicked retrieval cues like these may not be enough to explain illusions of plausibility (Cunnings & Sturt, 2018), which can arise in sentences like The letter next to the porcelain plate shattered. Capturing such retrieval interference effects requires lexically specific features and retrieval cues, but handpicking the features is hard to do in a principled way and greatly increases modeler degrees of freedom. To remedy this, we use well-established word embedding methods for creating distributed lexical feature representations that encode information relevant for retrieval using distributed retrieval cue vectors. We show that the similarity between the feature and cue vectors (a measure of plausibility) predicts total reading times in Cunnings and Sturt's eye-tracking data. The features can easily be plugged into existing parsing models (including cue-based retrieval and self-organized parsing), putting very different models on more equal footing and facilitating future quantitative comparisons. KW - Cue-based retrieval KW - plausibility KW - word embeddings KW - linguistic features Y1 - 2020 U6 - https://doi.org/10.1111/cogs.12918 SN - 0364-0213 SN - 1551-6709 VL - 44 IS - 12 PB - Wiley CY - Hoboken ER - TY - JOUR A1 - Bürki-Foschini, Audrey Damaris A1 - Elbuy, Shereen A1 - Madec, Sylvain A1 - Vasishth, Shravan T1 - What did we learn from forty years of research on semantic interference? BT - a Bayesian meta-analysis JF - Journal of memory and language N2 - When participants in an experiment have to name pictures while ignoring distractor words superimposed on the picture or presented auditorily (i.e., picture-word interference paradigm), they take more time when the word to be named (or target) and distractor words are from the same semantic category (e.g., cat-dog). This experimental effect is known as the semantic interference effect, and is probably one of the most studied in the language production literature. The functional origin of the effect and the exact conditions in which it occurs are, however, still debated. Since Lupker (1979) reported the effect in the first response time experiment about 40 years ago, more than 300 similar experiments have been conducted. The semantic interference effect was replicated in many experiments, but several studies also reported the absence of an effect in a subset of experimental conditions. The aim of the present study is to provide a comprehensive theoretical review of the existing evidence to date and several Bayesian meta-analyses and meta-regressions to determine the size of the effect and explore the experimental conditions in which the effect surfaces.
The results are discussed in the light of current debates about the functional origin of the semantic interference effect and its implications for our understanding of the language production system. KW - Bayesian random effects meta-analysis KW - picture-word interference KW - semantic interference KW - language production Y1 - 2020 U6 - https://doi.org/10.1016/j.jml.2020.104125 SN - 0749-596X SN - 1096-0821 VL - 114 PB - Elsevier CY - San Diego ER - TY - JOUR A1 - Vasishth, Shravan T1 - Using approximate Bayesian computation for estimating parameters in the cue-based retrieval model of sentence processing JF - MethodsX N2 - A commonly used approach to parameter estimation in computational models is the so-called grid search procedure: the entire parameter space is searched in small steps to determine the parameter value that provides the best fit to the observed data. This approach has several disadvantages: first, it can be computationally very expensive; second, only one optimal point value of the parameter is reported as the best-fit value, which means that we cannot quantify our uncertainty about the parameter estimate. In the main journal article that this methods article accompanies (Jager et al., 2020, Interference patterns in subject-verb agreement and reflexives revisited: A large-sample study, Journal of Memory and Language), we carried out parameter estimation using Approximate Bayesian Computation (ABC), which is a Bayesian approach that allows us to quantify our uncertainty about the parameter's value given the data. This approach has the further advantage that it allows us to generate both prior and posterior predictive distributions of reading times from the cue-based retrieval model of Lewis and Vasishth (2005).
Instead of the conventional grid search method, we use Approximate Bayesian Computation (ABC) for parameter estimation in the Lewis and Vasishth (2005) cue-based retrieval model.
The ABC method of parameter estimation has the advantage that the uncertainty about the parameter can be quantified. KW - Bayesian parameter estimation KW - Prior and posterior predictive distributions KW - Psycholinguistics Y1 - 2020 U6 - https://doi.org/10.1016/j.mex.2020.100850 SN - 2215-0161 VL - 7 PB - Elsevier CY - Amsterdam ER - TY - JOUR A1 - Paape, Dario A1 - Vasishth, Shravan A1 - von der Malsburg, Titus Raban T1 - Quadruplex negatio invertit? BT - the on-line processing of depth charge sentences JF - Journal of semantics N2 - So-called "depth charge" sentences (No head injury is too trivial to be ignored) are interpreted by the vast majority of speakers to mean the opposite of what their compositional semantics would dictate. The semantic inversion that is observed for sentences of this type is the strongest and most persistent linguistic illusion known to the field (Wason & Reich, 1979). However, it has recently been argued that the preferred interpretation arises not because of a prevailing failure of the processing system, but rather because the non-compositional meaning is grammaticalized in the form of a stored construction (Cook & Stevenson, 2010; Fortuin, 2014). In a series of five experiments, we investigate whether the depth charge effect is better explained by processing failure due to memory overload (the overloading hypothesis) or by the existence of an underlying grammaticalized construction with two available meanings (the ambiguity hypothesis). To our knowledge, our experiments are the first to explore the on-line processing profile of depth charge sentences. Overall, the data are consistent with specific variants of the ambiguity and overloading hypotheses while providing evidence against other variants. As an extension of the overloading hypothesis, we suggest two heuristic processes that may ultimately yield the incorrect reading when compositional processing is suspended for strategic reasons. Y1 - 2020 U6 - https://doi.org/10.1093/jos/ffaa009 SN - 0167-5133 SN - 1477-4593 VL - 37 IS - 4 SP - 509 EP - 555 PB - Oxford Univ. Press CY - Oxford ER - TY - JOUR A1 - Engbert, Ralf A1 - Rabe, Maximilian Michael A1 - Schwetlick, Lisa A1 - Seelig, Stefan A. A1 - Reich, Sebastian A1 - Vasishth, Shravan T1 - Data assimilation in dynamical cognitive science JF - Trends in cognitive sciences N2 - Dynamical models make specific assumptions about cognitive processes that generate human behavior. In data assimilation, these models are tested against time-ordered data. Recent progress on Bayesian data assimilation demonstrates that this approach combines the strengths of statistical modeling of individual differences with those of dynamical cognitive models. Y1 - 2022 U6 - https://doi.org/10.1016/j.tics.2021.11.006 SN - 1364-6613 SN - 1879-307X VL - 26 IS - 2 SP - 99 EP - 102 PB - Elsevier CY - Amsterdam ER - TY - GEN A1 - Stone, Kate A1 - Vasishth, Shravan A1 - von der Malsburg, Titus Raban T1 - Does entropy modulate the prediction of German long-distance verb particles? T2 - Zweitveröffentlichungen der Universität Potsdam : Humanwissenschaftliche Reihe N2 - In this paper we examine the effect of uncertainty on readers’ predictions about meaning. In particular, we were interested in how uncertainty might influence the likelihood of committing to a specific sentence meaning.
We conducted two event-related potential (ERP) experiments using particle verbs such as turn down and manipulated uncertainty by constraining the context such that readers could be either highly certain about the identity of a distant verb particle, such as turn the bed […] down, or less certain due to competing particles, such as turn the music […] up/down. The study was conducted in German, where verb particles appear clause-finally and may be separated from the verb by a large amount of material. We hypothesised that this separation would encourage readers to predict the particle, and that high certainty would make prediction of a specific particle more likely than lower certainty. If a specific particle was predicted, this would reflect a strong commitment to sentence meaning that should incur a higher processing cost if the prediction is wrong. If a specific particle was less likely to be predicted, commitment should be weaker and the processing cost of a wrong prediction lower. If true, this could suggest that uncertainty discourages predictions via an unacceptable cost-benefit ratio. However, given the clear predictions made by the literature, it was surprisingly unclear whether the uncertainty manipulation affected the two ERP components studied, the N400 and the PNP. Bayes factor analyses showed that evidence for our a priori hypothesised effect sizes was inconclusive, although there was decisive evidence against a priori hypothesised effect sizes larger than 1μV for the N400 and larger than 3μV for the PNP. We attribute the inconclusive finding to the properties of verb-particle dependencies that differ from the verb-noun dependencies in which the N400 and PNP are often studied. T3 - Zweitveröffentlichungen der Universität Potsdam : Humanwissenschaftliche Reihe - 785 Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-562312 SN - 1866-8364 SP - 1 EP - 25 PB - Universitätsverlag Potsdam CY - Potsdam ER - TY - JOUR A1 - Jaeger, Lena A. A1 - Engelmann, Felix A1 - Vasishth, Shravan T1 - Similarity-based interference in sentence comprehension: Literature review and Bayesian meta-analysis JF - Journal of memory and language N2 - We report a comprehensive review of the published reading studies on retrieval interference in reflexive-/reciprocal-antecedent and subject-verb dependencies. We also provide a quantitative random-effects meta-analysis of eyetracking and self-paced reading studies. We show that the empirical evidence is only partly consistent with cue-based retrieval as implemented in the ACT-R-based model of sentence processing by Lewis and Vasishth (2005) (LV05) and that there are important differences between the reviewed dependency types. In non-agreement subject-verb dependencies, there is evidence for inhibitory interference in configurations where the correct dependent fully matches the retrieval cues. This is consistent with the LV05 cue-based retrieval account. By contrast, in subject-verb agreement as well as in reflexive-/reciprocal-antecedent dependencies, no evidence for inhibitory interference is found in configurations with a fully cue-matching subject/antecedent. In configurations with only a partially cue-matching subject or antecedent, the meta-analysis reveals facilitatory interference in subject-verb agreement and inhibitory interference in reflexives/reciprocals. The former is consistent with the LV05 account, but the latter is not. 
Moreover, the meta-analysis reveals that (i) interference type (proactive versus retroactive) leads to different effects in the reviewed dependency types and (ii) the prominence of the distractor strongly influences the interference effect. In sum, the meta-analysis suggests that the LV05 model needs important modifications to account for the unexplained interference patterns and the differences between the dependency types. More generally, the meta-analysis provides a quantitative empirical basis for comparing the predictions of competing accounts of retrieval processes in sentence comprehension. KW - Cue-based retrieval KW - Syntactic dependency processing KW - Interference KW - Bayesian meta-analysis KW - Agreement KW - Reflexives Y1 - 2017 U6 - https://doi.org/10.1016/j.jml.2017.01.004 SN - 0749-596X SN - 1096-0821 VL - 94 SP - 316 EP - 339 PB - Elsevier CY - San Diego ER - TY - JOUR A1 - Frank, Stefan L. A1 - Trompenaars, Thijs A1 - Vasishth, Shravan T1 - Cross-Linguistic Differences in Processing Double-Embedded Relative Clauses: Working-Memory Constraints or Language Statistics? JF - Cognitive science : a multidisciplinary journal of anthropology, artificial intelligence, education, linguistics, neuroscience, philosophy, psychology ; journal of the Cognitive Science Society N2 - An English double-embedded relative clause from which the middle verb is omitted can often be processed more easily than its grammatical counterpart, a phenomenon known as the grammaticality illusion. This effect has been found to be reversed in German, suggesting that the illusion is language specific rather than a consequence of universal working memory constraints. We present results from three self-paced reading experiments which show that Dutch native speakers also do not show the grammaticality illusion in Dutch, whereas both German and Dutch native speakers do show the illusion when reading English sentences. These findings provide evidence against working memory constraints as an explanation for the observed effect in English. We propose an alternative account based on the statistical patterns of the languages involved. In support of this alternative, a single recurrent neural network model that is trained on both Dutch and English sentences is shown to predict the cross-linguistic difference in the grammaticality effect. KW - Bilingualism KW - Cross-linguistic differences KW - Sentence comprehension KW - Relative clauses KW - Centre embedding KW - Grammaticality illusion KW - Self-paced reading KW - Recurrent neural network model Y1 - 2016 U6 - https://doi.org/10.1111/cogs.12247 SN - 0364-0213 SN - 1551-6709 VL - 40 SP - 554 EP - 578 PB - Wiley-Blackwell CY - Hoboken ER - TY - JOUR A1 - Vasishth, Shravan A1 - Nicenboim, Bruno T1 - Statistical Methods for Linguistic Research: Foundational Ideas - Part I JF - Language and linguistics compass N2 - We present the fundamental ideas underlying statistical hypothesis testing using the frequentist framework. We start with a simple example that builds up the one-sample t-test from the beginning, explaining important concepts such as the sampling distribution of the sample mean and the iid assumption. Then, we examine the meaning of the p-value in detail and discuss several important misconceptions about what a p-value does and does not tell us. This leads to a discussion of Type I and Type II error, power, and Type S and M error.
An important conclusion from this discussion is that one should aim to carry out appropriately powered studies. Next, we discuss two common issues that we have encountered in psycholinguistics and linguistics: running experiments until significance is reached and the ‘garden-of-forking-paths’ problem discussed by Gelman and others. The best way to use frequentist methods is to run appropriately powered studies, check model assumptions, clearly separate exploratory data analysis from planned comparisons decided upon before the study was run, and always attempt to replicate results. Y1 - 2016 U6 - https://doi.org/10.1111/lnc3.12201 SN - 1749-818X VL - 10 SP - 349 EP - 369 PB - Wiley-Blackwell CY - Hoboken ER - TY - JOUR A1 - Vasishth, Shravan A1 - Lewis, Richard L. T1 - Argument-head distance and processing complexity: Explaining both locality and antilocality effects JF - Language : journal of the Linguistic Society of America N2 - Although proximity between arguments and verbs (locality) is a relatively robust determinant of sentence-processing difficulty (Hawkins 1998, 2001, Gibson 2000), increasing argument-verb distance can also facilitate processing (Konieczny 2000). We present two self-paced reading (SPR) experiments involving Hindi that provide further evidence of antilocality, and a third SPR experiment which suggests that similarity-based interference can attenuate this distance-based facilitation. A unified explanation of interference, locality, and antilocality effects is proposed via an independently motivated theory of activation decay and retrieval interference (Anderson et al. 2004). Y1 - 2006 U6 - https://doi.org/10.1353/lan.2006.0236 SN - 0097-8507 VL - 82 IS - 4 SP - 767 EP - 794 PB - Linguistic Society of America CY - Washington ER - TY - JOUR A1 - Sorensen, Tanner A1 - Hohenstein, Sven A1 - Vasishth, Shravan T1 - Bayesian linear mixed models using Stan: A tutorial for psychologists, linguists, and cognitive scientists JF - Tutorials in Quantitative Methods for Psychology N2 - With the arrival of the R packages nlme and lme4, linear mixed models (LMMs) have come to be widely used in experimentally-driven areas like psychology, linguistics, and cognitive science. This tutorial provides a practical introduction to fitting LMMs in a Bayesian framework using the probabilistic programming language Stan. We choose Stan (rather than WinBUGS or JAGS) because it provides an elegant and scalable framework for fitting models in most of the standard applications of LMMs. We ease the reader into fitting increasingly complex LMMs, using a two-condition repeated measures self-paced reading study. KW - Bayesian data analysis KW - linear mixed models Y1 - 2016 U6 - https://doi.org/10.20982/tqmp.12.3.p175 SN - 2292-1354 VL - 12 SP - 175 EP - 200 PB - University of Montreal, Department of Psychology CY - Montreal ER - TY - JOUR A1 - Vasishth, Shravan A1 - Nicenboim, Bruno A1 - Engelmann, Felix A1 - Burchert, Frank T1 - Computational Models of Retrieval Processes in Sentence Processing JF - Trends in Cognitive Sciences N2 - Sentence comprehension requires that the comprehender work out who did what to whom. This process has been characterized as retrieval from memory. This review summarizes the quantitative predictions and empirical coverage of the two existing computational models of retrieval and shows how the predictive performance of these two competing models can be tested against a benchmark dataset.
We also show how computational modeling can help us better understand sources of variability in both unimpaired and impaired sentence comprehension. Y1 - 2019 U6 - https://doi.org/10.1016/j.tics.2019.09.003 SN - 1364-6613 SN - 1879-307X VL - 23 IS - 11 SP - 968 EP - 982 PB - Elsevier CY - London ER - TY - JOUR A1 - Metzner, Paul A1 - von der Malsburg, Titus Raban A1 - Vasishth, Shravan A1 - Roesler, Frank T1 - The Importance of Reading Naturally: Evidence From Combined Recordings of Eye Movements and Electric Brain Potentials JF - Cognitive science : a multidisciplinary journal of anthropology, artificial intelligence, education, linguistics, neuroscience, philosophy, psychology ; journal of the Cognitive Science Society KW - Reading KW - Sentence comprehension KW - ERP KW - Eye movements KW - Regressions Y1 - 2017 U6 - https://doi.org/10.1111/cogs.12384 SN - 0364-0213 SN - 1551-6709 VL - 41 SP - 1232 EP - 1263 PB - Wiley CY - Hoboken ER - TY - JOUR A1 - Vasishth, Shravan A1 - Nicenboim, Bruno A1 - Beckman, Mary E. A1 - Li, Fangfang A1 - Kong, Eun Jong T1 - Bayesian data analysis in the phonetic sciences BT - a tutorial introduction JF - Journal of phonetics N2 - This tutorial analyzes voice onset time (VOT) data from Dongbei (Northeastern) Mandarin Chinese and North American English to demonstrate how Bayesian linear mixed models can be fit using the programming language Stan via the R package brms. Through this case study, we demonstrate some of the advantages of the Bayesian framework: researchers can (i) flexibly define the underlying process that they believe to have generated the data; (ii) obtain direct information regarding the uncertainty about the parameter that relates the data to the theoretical question being studied; and (iii) incorporate prior knowledge into the analysis. Getting started with Bayesian modeling can be challenging, especially when one is trying to model one’s own (often unique) data. It is difficult to see how one can apply general principles described in textbooks to one’s own specific research problem. We address this barrier to using Bayesian methods by providing three detailed examples, with source code to allow easy reproducibility. The examples presented are intended to give the reader a flavor of the process of model-fitting; suggestions for further study are also provided. All data and code are available from: https://osf.io/g4zpv. KW - Bayesian data analysis KW - Linear mixed models KW - Voice onset time KW - Gender effects KW - Vowel duration Y1 - 2018 U6 - https://doi.org/10.1016/j.wocn.2018.07.008 SN - 0095-4470 VL - 71 SP - 147 EP - 161 PB - Elsevier CY - London ER - TY - JOUR A1 - Nicenboim, Bruno A1 - Roettger, Timo B. A1 - Vasishth, Shravan T1 - Using meta-analysis for evidence synthesis BT - the case of incomplete neutralization in German JF - Journal of phonetics N2 - Within quantitative phonetics, it is common practice to draw conclusions based on statistical significance alone. Using incomplete neutralization of final devoicing in German as a case study, we illustrate the problems with this approach. If researchers find a significant acoustic difference between voiceless and devoiced obstruents, they conclude that neutralization is incomplete, and if they find no significant difference, they conclude that neutralization is complete. However, such strong claims regarding the existence or absence of an effect based on significant results alone can be misleading.
Instead, the totality of available evidence should be brought to bear on the question. Towards this end, we synthesize the evidence from 14 studies on incomplete neutralization in German using a Bayesian random-effects meta-analysis. Our meta-analysis provides evidence in favor of incomplete neutralization. We conclude with some suggestions for improving the quality of future research on phonetic phenomena: ensure that sample sizes allow for high-precision estimates of the effect; avoid the temptation to deploy researcher degrees of freedom when analyzing data; focus on estimates of the parameter of interest and the uncertainty about that parameter; attempt to replicate effects found; and, whenever possible, make both the data and analysis available publicly. KW - Meta-analysis KW - Incomplete neutralization KW - Final devoicing KW - German KW - Bayesian data analysis Y1 - 2018 U6 - https://doi.org/10.1016/j.wocn.2018.06.001 SN - 0095-4470 VL - 70 SP - 39 EP - 55 PB - Elsevier CY - London ER - TY - JOUR A1 - Vasishth, Shravan A1 - Mertzen, Daniela A1 - Jaeger, Lena A. A1 - Gelman, Andrew T1 - The statistical significance filter leads to overoptimistic expectations of replicability JF - Journal of memory and language N2 - It is well-known in statistics (e.g., Gelman & Carlin, 2014) that treating a result as publishable just because the p-value is less than 0.05 leads to overoptimistic expectations of replicability. Such significant but noisy effects get published, leading to an overconfident belief in replicability. We demonstrate the adverse consequences of this statistical significance filter by conducting seven direct replication attempts (268 participants in total) of a recent paper (Levy & Keller, 2013). We show that the published claims are so noisy that even non-significant results are fully compatible with them. We also demonstrate the contrast between such small-sample studies and a larger-sample study; the latter generally yields a less noisy estimate but also a smaller effect magnitude, which looks less compelling but is more realistic. We reiterate several suggestions from the methodology literature for improving current practices. KW - Type M error KW - Replicability KW - Surprisal KW - Expectation KW - Locality KW - Bayesian data analysis KW - Parameter estimation Y1 - 2018 U6 - https://doi.org/10.1016/j.jml.2018.07.004 SN - 0749-596X SN - 1096-0821 VL - 103 SP - 151 EP - 175 PB - Elsevier CY - San Diego ER - TY - JOUR A1 - Nicenboim, Bruno A1 - Vasishth, Shravan T1 - Models of retrieval in sentence comprehension BT - a computational evaluation using Bayesian hierarchical modeling JF - Journal of memory and language N2 - Research on similarity-based interference has provided extensive evidence that the formation of dependencies between non-adjacent words relies on a cue-based retrieval mechanism. There are two different models that can account for one of the main predictions of interference, i.e., a slowdown at a retrieval site, when several items share a feature associated with a retrieval cue: Lewis and Vasishth’s (2005) activation-based model and McElree’s (2000) direct-access model. Even though these two models have been used almost interchangeably, they are based on different assumptions and predict differences in the relationship between reading times and response accuracy.
The activation-based model follows the assumptions of the ACT-R framework, and its retrieval process behaves as a lognormal race between accumulators of evidence with a single variance. Under this model, accuracy of the retrieval is determined by the winner of the race and retrieval time by its rate of accumulation. In contrast, the direct-access model assumes a model of memory where only the probability of retrieval can be affected, while the retrieval time is drawn from the same distribution; in this model, differences in latencies are a by-product of the possibility of backtracking and repairing incorrect retrievals. We implemented both models in a Bayesian hierarchical framework in order to evaluate and compare them. The data show that correct retrievals take longer than incorrect ones, and this pattern is better fit under the direct-access model than under the activation-based model. This finding does not rule out the possibility that retrieval may behave as a race model whose assumptions follow the ACT-R framework less closely. By introducing a modification of the activation model, i.e., by assuming that the accumulation of evidence for retrieval of incorrect items is not only slower but noisier (i.e., different variances for the correct and incorrect items), the model can provide a fit as good as that of the direct-access model. This first-ever computational evaluation of alternative accounts of retrieval processes in sentence processing opens the way for a broader investigation of theories of dependency completion. KW - Cognitive modeling KW - Sentence processing KW - Working memory KW - Cue-based retrieval KW - Similarity-based interference KW - Bayesian hierarchical modeling Y1 - 2018 U6 - https://doi.org/10.1016/j.jml.2017.08.004 SN - 0749-596X SN - 1096-0821 VL - 99 SP - 1 EP - 34 PB - Elsevier CY - San Diego ER - TY - JOUR A1 - Mätzig, Paul A1 - Vasishth, Shravan A1 - Engelmann, Felix A1 - Caplan, David A1 - Burchert, Frank T1 - A computational investigation of sources of variability in sentence comprehension difficulty in aphasia JF - Topics in cognitive science N2 - We present a computational evaluation of three hypotheses about sources of deficit in sentence comprehension in aphasia: slowed processing, intermittent deficiency, and resource reduction. The ACT-R-based Lewis and Vasishth (2005) model is used to implement these three proposals. Slowed processing is implemented as slowed execution time of parse steps; intermittent deficiency as increased random noise in activation of elements in memory; and resource reduction as reduced spreading activation. As data, we considered subject vs. object relative sentences, presented in a self-paced listening modality to 56 individuals with aphasia (IWA) and 46 matched controls. The participants heard the sentences and carried out a picture verification task to decide on an interpretation of the sentence. The resulting response accuracies are used to identify the best parameters (for each participant) that correspond to the three hypotheses mentioned above. We show that controls have more tightly clustered (less variable) parameter values than IWA; specifically, compared to controls, among IWA there are more individuals with slow parsing times, high noise, and low spreading activation.
We find that (a) individual IWA show differential amounts of deficit along the three dimensions of slowed processing, intermittent deficiency, and resource reduction, (b) overall, there is evidence for all three sources of deficit playing a role, and (c) IWA have a more variable range of parameter values than controls. An important implication is that it may be meaningless to talk about sources of deficit with respect to an abstract average IWA; the focus should be on the individual's differential degrees of deficit along different dimensions, and on understanding the causes of variability in deficit between participants. KW - Sentence comprehension KW - Aphasia KW - Computational modeling KW - Cue-based retrieval Y1 - 2018 U6 - https://doi.org/10.1111/tops.12323 SN - 1756-8757 SN - 1756-8765 VL - 10 IS - 1 SP - 161 EP - 174 PB - Wiley CY - Hoboken ER - TY - JOUR A1 - Nicenboim, Bruno A1 - Vasishth, Shravan A1 - Engelmann, Felix A1 - Suckow, Katja T1 - Exploratory and confirmatory analyses in sentence processing BT - a case study of number interference in German JF - Cognitive science : a multidisciplinary journal of anthropology, artificial intelligence, education, linguistics, neuroscience, philosophy, psychology ; journal of the Cognitive Science Society N2 - Given the replication crisis in cognitive science, it is important to consider what researchers need to do in order to report results that are reliable. We consider three changes in current practice that have the potential to deliver more realistic and robust claims. First, the planned experiment should be divided into two stages, an exploratory stage and a confirmatory stage. This clear separation allows the researcher to check whether any results found in the exploratory stage are robust. The second change is to carry out adequately powered studies. We show that this is imperative if we want to obtain realistic estimates of effects in psycholinguistics. The third change is to use Bayesian data-analytic methods rather than frequentist ones; the Bayesian framework allows us to focus on the best estimates we can obtain of the effect, rather than rejecting a strawman null. As a case study, we investigate number interference effects in German. Number feature interference is predicted by cue-based retrieval models of sentence processing (Van Dyke & Lewis, 2003; Vasishth & Lewis, 2006), but the evidence for it has been inconsistent. We show that by implementing the three changes mentioned, suggestive evidence emerges that is consistent with the predicted number interference effects. KW - Exploratory and confirmatory analyses KW - Sentence processing KW - Bayesian hierarchical modeling KW - Cue-based retrieval KW - Working memory KW - Similarity-based interference KW - Number interference KW - German Y1 - 2018 U6 - https://doi.org/10.1111/cogs.12589 SN - 0364-0213 SN - 1551-6709 VL - 42 SP - 1075 EP - 1100 PB - Wiley CY - Hoboken ER -