Refine
Year of publication
Document Type
- Article (60)
Is part of the Bibliography
- yes (60)
Keywords
- German (6)
- Eye movements (5)
- Reading (5)
- Sentence processing (5)
- interference (5)
- Bayesian data analysis (4)
- Cue-based retrieval (4)
- eye-tracking (4)
- Aphasia (3)
- ERP (3)
- Individual differences (3)
- Parsing (3)
- Scanpaths (3)
- Similarity-based interference (3)
- Working memory (3)
- antilocality (3)
- computational modeling (3)
- individual differences (3)
- locality (3)
- sentence processing (3)
- ACT-R (2)
- Bayesian hierarchical modeling (2)
- Bayesian inference (2)
- Bayesian meta-analysis (2)
- Chance performance (2)
- Chinese (2)
- Chinese reflexives (2)
- Cognitive modeling (2)
- Computational modeling (2)
- DLT (2)
- Eastern Armenian (2)
- Relative clause (2)
- Sentence comprehension (2)
- Spanish (2)
- Surprisal (2)
- Swedish (2)
- Underspecification (2)
- activation (2)
- anaphors (2)
- comprehension (2)
- content-addressable memory (2)
- cue-based retrieval (2)
- entropy (2)
- expectation (2)
- possessives (2)
- psycholinguistics (2)
- reflexives (2)
- statistical (2)
- working memory capacity (2)
- working-memory (2)
- ziji (2)
- Adaptation (1)
- Aging (1)
- Agrammatic aphasia (1)
- Agreement (1)
- Agreement attraction (1)
- Ambiguity (1)
- Autocorrelation (1)
- BEI (1)
- Bayes factor (1)
- Bayes factors (1)
- Bayesian model comparison (1)
- Bayesian parameter estimation (1)
- Bayesian random effects meta-analysis (1)
- Broca's aphasia (1)
- Canonicity and interference effects (1)
- Case (1)
- Classifiers (1)
- Computer model (1)
- Confirmatory versus exploratory data analysis (1)
- Corpus (1)
- Cue‐based retrieval (1)
- Development (1)
- Expectation (1)
- Experience (1)
- Experimental time series (1)
- Exploratory and confirmatory analyses (1)
- Eye tracking (1)
- Eye-tracking (1)
- Final devoicing (1)
- Gender effects (1)
- Generalized additive mixed models (1)
- German syntax (1)
- Good-enough processing (1)
- Hypothesis testing (1)
- Incomplete neutralization (1)
- Information structure (1)
- It- clefts (1)
- Journal policy (1)
- Language understanding (1)
- Linear mixed effect model (1)
- Linear mixed models (1)
- Locality (1)
- Meta-analysis (1)
- Meta-research (1)
- Model selection (1)
- Morphological cues (1)
- N400 (1)
- Non-canonical sentences (1)
- Null hypothesis significance testing (1)
- Number interference (1)
- Object manipulation (1)
- Oculo-motor control (1)
- Online and offline processing (1)
- Online morpho-syntactic processing (1)
- Online sentence processing (1)
- Only-foci (1)
- Open (1)
- Open data (1)
- Parallel processing (1)
- Parameter estimation (1)
- Patholinguistik (1)
- Picture-word interference (1)
- Power (1)
- Prediction (1)
- Prior and posterior predictive (1)
- Psycholinguistics (1)
- Reanalysis (1)
- Reflexives (1)
- Regressions (1)
- Replicability (1)
- Replication (1)
- Reproducibility (1)
- Reproducible statistical analyses (1)
- Self-paced listening (1)
- Self-paced reading (1)
- Sentence Comprehension (1)
- Sentence comprehension deficits (1)
- Sentence comprehension disorders (1)
- Sentence comprehension in aphasia (1)
- Sentence-picture matching (1)
- Sprachtherapie (1)
- Storage cost (1)
- Structural expectation (1)
- Syntactic reanalysis (1)
- Task demands (1)
- Test-retest reliability (1)
- Type M error (1)
- URM (1)
- Unrestricted race model (1)
- Variability (1)
- Visual world paradigm (1)
- Voice onset time (1)
- Vowel duration (1)
- Within-experiment adaptation (1)
- Working-memory (1)
- a priori (1)
- agreement (1)
- anaphor resolution (1)
- antecedent complexity (1)
- anterior PNP (1)
- attraction (1)
- bilingualism (1)
- competition-integration model (1)
- confirmatory analysis (1)
- constraint (1)
- context (1)
- contrasts (1)
- control (1)
- cue-based (1)
- dependencies (1)
- distractor frequency (1)
- distributions (1)
- dynamical models (1)
- ellipsis processing (1)
- experimental linguistics (1)
- exploratory analysis (1)
- eye tracking (1)
- features (1)
- garden-path effect (1)
- gardenpath model (1)
- geistige Behinderung (1)
- gender (1)
- grammatical gender (1)
- hypotheses (1)
- implicit meter (1)
- inference (1)
- language production (1)
- latent processes (1)
- linear models (1)
- linguistic (1)
- locality effects (1)
- long distance (1)
- memory pointer (1)
- memory retrieval (1)
- mental deficiency (1)
- mixture modeling (1)
- multinomial processing tree (1)
- null hypothesis significance testing (1)
- oculomotor (1)
- open science (1)
- patholinguistics (1)
- picture-word interference (1)
- plausibility (1)
- posterior (1)
- posterior P600 (1)
- power (1)
- pre-activation (1)
- preactivation (1)
- predictability (1)
- prediction (1)
- predictions (1)
- preregistration (1)
- primary progessive aphasia (1)
- primär progessive Aphasie (1)
- prior (1)
- probabilistic processing (1)
- re-reading probability (1)
- reading (1)
- reading eye movements (1)
- reanalysis (1)
- retrieval (1)
- science (1)
- self-paced reading (1)
- semantic interference (1)
- silent prosody (1)
- simulation-based calibration (1)
- skipping rate (1)
- speech therapy (1)
- statistical data analysis (1)
- stress-clash (1)
- subject-verb agreement (1)
- syntactic reanalysis (1)
- temporal decay (1)
- uncertainty quantification (1)
- unrestricted race model (1)
- visual world eye-tracking (1)
- word embeddings (1)
- working memory (1)
Institute
- Department Linguistik (60)
In eye-movement control during reading, advanced process-oriented models have been developed to reproduce behavioral data. So far, model complexity and large numbers of model parameters have prevented rigorous statistical inference and modeling of interindividual differences. Here we propose a Bayesian approach to both problems for one representative computational model of sentence reading (SWIFT; Engbert et al., Psychological Review, 112, 2005, pp. 777-813). We used experimental data from 36 subjects who read text in a normal layout and in one of four manipulated layouts (e.g., mirrored and scrambled letters). The SWIFT model was fitted to subjects and experimental conditions individually to investigate between-subject variability. Based on posterior distributions of model parameters, fixation probabilities and durations are reliably recovered from simulated data and reproduced for withheld empirical data, at both the experimental-condition and subject levels. A subsequent statistical analysis of model parameters across reading conditions generates model-driven explanations for observable effects between conditions.
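The parameter-recovery logic described in this abstract can be illustrated in miniature with a grid-approximated posterior: simulate data from a known parameter, score a grid of candidate values by their likelihood, and check that the posterior concentrates near the true value. The sketch below is a hypothetical one-parameter stand-in (exponentially distributed fixation durations), not the SWIFT model itself:

```python
import math
import random

random.seed(1)

# Hypothetical stand-in for a single SWIFT-like parameter:
# mean fixation duration (ms), with exponentially distributed durations.
TRUE_MEAN = 200.0
data = [random.expovariate(1.0 / TRUE_MEAN) for _ in range(500)]

# Grid-approximated posterior under a flat prior on the mean.
grid = [100 + i for i in range(301)]  # candidate means: 100..400 ms

def loglik(mu):
    # Exponential log-likelihood: sum of -log(mu) - x/mu
    return sum(-math.log(mu) - x / mu for x in data)

logs = [loglik(mu) for mu in grid]
m = max(logs)                          # stabilize exponentiation
weights = [math.exp(l - m) for l in logs]
z = sum(weights)
posterior = [w / z for w in weights]

post_mean = sum(mu * p for mu, p in zip(grid, posterior))
print(round(post_mean, 1))  # posterior mean lands near the true 200 ms
```

The same recover-from-simulated-data check, scaled up to many parameters and hierarchical structure, is what validates the Bayesian fit before it is applied to held-out empirical data.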
We present a computational evaluation of three hypotheses about sources of deficit in sentence comprehension in aphasia: slowed processing, intermittent deficiency, and resource reduction. The ACT-R based Lewis and Vasishth (2005) model is used to implement these three proposals. Slowed processing is implemented as slowed execution time of parse steps; intermittent deficiency as increased random noise in activation of elements in memory; and resource reduction as reduced spreading activation. As data, we considered subject vs. object relative sentences, presented in a self-paced listening modality to 56 individuals with aphasia (IWA) and 46 matched controls. The participants heard the sentences and carried out a picture verification task to decide on an interpretation of the sentence. These response accuracies are used to identify the best parameters (for each participant) that correspond to the three hypotheses mentioned above. We show that controls have more tightly clustered (less variable) parameter values than IWA; specifically, compared to controls, among IWA there are more individuals with slow parsing times, high noise, and low spreading activation. We find that (a) individual IWA show differential amounts of deficit along the three dimensions of slowed processing, intermittent deficiency, and resource reduction, (b) overall, there is evidence for all three sources of deficit playing a role, and (c) IWA have a more variable range of parameter values than controls. An important implication is that it may be meaningless to talk about sources of deficit with respect to an abstract average IWA; the focus should be on the individual's differential degrees of deficit along different dimensions, and on understanding the causes of variability in deficit between participants.
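The mapping from the three hypotheses to model parameters can be sketched with the standard ACT-R activation and latency equations. The function names and parameter values below are hypothetical illustrations, not the values estimated in the paper:

```python
import math
import random

random.seed(0)

def activation(base, cue_strengths, spreading_weight, noise_sd):
    """ACT-R-style activation: base level + spreading activation + noise.

    resource reduction      -> lower spreading_weight (weaker cue-driven boost)
    intermittent deficiency -> higher noise_sd (more trial-to-trial variability)
    """
    return base + spreading_weight * sum(cue_strengths) + random.gauss(0, noise_sd)

def retrieval_latency(act, factor=200.0, slowdown=1.0):
    """ACT-R retrieval latency F * exp(-A); 'slowed processing' scales
    every parse step by `slowdown`."""
    return slowdown * factor * math.exp(-act)

# Hypothetical parameter profiles for a control participant and an IWA.
profiles = {
    "control": dict(spreading_weight=1.0, noise_sd=0.2, slowdown=1.0),
    "IWA":     dict(spreading_weight=0.5, noise_sd=0.8, slowdown=2.0),
}

for label, p in profiles.items():
    a = activation(0.5, [0.6, 0.4], p["spreading_weight"], p["noise_sd"])
    print(label, round(retrieval_latency(a, slowdown=p["slowdown"]), 1), "ms")
```

Fitting per-participant values for these three parameters, rather than one set for an "average" participant, is what lets the model express differential degrees of deficit.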
Traxler, Pickering, and Clifton (1998) found that ambiguous sentences are read faster than their unambiguous counterparts. This so-called ambiguity advantage has presented a major challenge to classical theories of human sentence comprehension (parsing) because its most prominent explanation, in the form of the unrestricted race model (URM), assumes that parsing is non-deterministic. Recently, Swets, Desmet, Clifton, and Ferreira (2008) have challenged the URM. They argue that readers strategically underspecify the representation of ambiguous sentences to save time, unless disambiguation is required by task demands. When disambiguation is required, however, readers assign sentences full structure—and Swets et al. provide experimental evidence to this end. On the basis of their findings, they argue against the URM and in favor of a model of task-dependent sentence comprehension. We show through simulations that the Swets et al. data do not constitute evidence for task-dependent parsing because they can be explained by the URM. However, we provide decisive evidence from a German self-paced reading study consistent with Swets et al.'s general claim about task-dependent parsing. Specifically, we show that under certain conditions, ambiguous sentences can be read more slowly than their unambiguous counterparts, suggesting that the parser may create several parses, when required. Finally, we present the first quantitative model of task-driven disambiguation that subsumes the URM, and we show that it can explain both Swets et al.'s results and our findings.
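The URM's core statistical claim, that an ambiguous region is read faster because two analyses race and the faster one wins, reduces to a min-of-two-samples argument. A minimal simulation with hypothetical timing parameters:

```python
import random
import statistics

random.seed(42)

def parse_time():
    # Hypothetical completion time (ms) for building one syntactic analysis.
    return random.gauss(300, 50)

N = 10_000
# Unambiguous: only one analysis is possible, so its time is the reading time.
unambiguous = [parse_time() for _ in range(N)]
# Ambiguous (URM): two analyses race; whichever finishes first determines the time.
ambiguous = [min(parse_time(), parse_time()) for _ in range(N)]

print(round(statistics.mean(unambiguous)))  # ~300 ms
print(round(statistics.mean(ambiguous)))    # reliably faster: min of two samples
```

The minimum of two independent samples has a lower expected value than either sample alone, which is why the ambiguity advantage falls out of the race architecture without any appeal to underspecification.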
Among theories of human language comprehension, cue-based memory retrieval has proven to be a useful framework for understanding when and how processing difficulty arises in the resolution of long-distance dependencies. Most previous work in this area has assumed that very general retrieval cues like [+subject] or [+singular] do the work of identifying (and sometimes misidentifying) a retrieval target in order to establish a dependency between words. However, recent work suggests that general, handpicked retrieval cues like these may not be enough to explain illusions of plausibility (Cunnings & Sturt, 2018), which can arise in sentences like The letter next to the porcelain plate shattered. Capturing such retrieval interference effects requires lexically specific features and retrieval cues, but handpicking the features is hard to do in a principled way and greatly increases modeler degrees of freedom. To remedy this, we use well-established word embedding methods for creating distributed lexical feature representations that encode information relevant for retrieval using distributed retrieval cue vectors. We show that the similarity between the feature and cue vectors (a measure of plausibility) predicts total reading times in Cunnings and Sturt's eye-tracking data. The features can easily be plugged into existing parsing models (including cue-based retrieval and self-organized parsing), putting very different models on more equal footing and facilitating future quantitative comparisons.
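The similarity measure at the heart of this approach is a vector comparison between retrieval cues and lexical features. A toy sketch with made-up four-dimensional vectors (real word embeddings would have hundreds of dimensions, and these particular values are invented for illustration):

```python
import math

def cosine(u, v):
    """Cosine similarity between a retrieval-cue vector and a feature vector."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Toy "embeddings" for the example sentence from Cunnings & Sturt (2018):
cue_shattered = [0.9, 0.1, 0.8, 0.2]   # retrieval cues set by the verb "shattered"
feat_plate    = [0.8, 0.2, 0.7, 0.1]   # plausible retrieval target
feat_letter   = [0.1, 0.9, 0.2, 0.8]   # implausible (but structurally licensed) target

print(round(cosine(cue_shattered, feat_plate), 2))   # high: plausible match
print(round(cosine(cue_shattered, feat_letter), 2))  # low: implausible match
```

A high cue-feature similarity for the structurally wrong candidate ("plate") is what produces the illusion of plausibility in this framework, without any handpicked features.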
Several studies (e.g., Wicha et al., 2003b; DeLong et al., 2005) have shown that readers use information from the sentential context to predict nouns (or some of their features), and that predictability effects can be inferred from the EEG signal in determiners or adjectives appearing before the predicted noun. While these findings provide evidence for the pre-activation proposal, recent replication attempts together with inconsistencies in the results from the literature cast doubt on the robustness of this phenomenon. Our study presents the first attempt to use the effect of gender on predictability in German to study the pre-activation hypothesis, capitalizing on the fact that all German nouns have a gender and that their preceding determiners can show an unambiguous gender marking when the noun phrase has accusative case. Despite having a relatively large sample size (of 120 subjects), both our preregistered and exploratory analyses failed to yield conclusive evidence for or against an effect of pre-activation. The sign of the effect is, however, in the expected direction: the more unexpected the gender of the determiner, the larger the negativity. The recent, inconclusive replication attempts by Nieuwland et al. (2018) and others also show effects with signs in the expected direction. We conducted a Bayesian random-effects meta-analysis using our data and the publicly available data from these recent replication attempts. Our meta-analysis shows a relatively clear but very small effect that is consistent with the pre-activation account and demonstrates a very important advantage of the Bayesian data analysis methodology: we can incrementally accumulate evidence to obtain increasingly precise estimates of the effect of interest.
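The pooling step of a random-effects meta-analysis can be sketched as follows. Note that this is a frequentist DerSimonian-Laird estimate, not the Bayesian hierarchical model used in the paper, and the study-level numbers are invented for illustration:

```python
import math

# Hypothetical per-study effect estimates (e.g., ERP amplitude differences in µV)
# and their standard errors; values are illustrative only.
effects = [-0.30, -0.10, -0.25, 0.05, -0.15]
ses     = [ 0.20,  0.15,  0.25, 0.18,  0.12]

# Inverse-variance weights and the fixed-effect pooled estimate.
w = [1 / se**2 for se in ses]
fixed = sum(wi * ei for wi, ei in zip(w, effects)) / sum(w)

# DerSimonian-Laird estimate of between-study variance tau^2.
q = sum(wi * (ei - fixed)**2 for wi, ei in zip(w, effects))
df = len(effects) - 1
c = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)

# Random-effects pooled estimate: weights shrink toward equality as tau^2 grows.
w_re = [1 / (se**2 + tau2) for se in ses]
pooled = sum(wi * ei for wi, ei in zip(w_re, effects)) / sum(w_re)
se_pooled = math.sqrt(1 / sum(w_re))
print(round(pooled, 3), round(se_pooled, 3))
```

The Bayesian version replaces the point estimate of tau² with a full posterior, which is what allows evidence to be accumulated incrementally as new studies arrive.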
Argument-head distance and processing complexity: Explaining both locality and antilocality effects
(2006)
Although proximity between arguments and verbs (locality) is a relatively robust determinant of sentence-processing difficulty (Hawkins 1998, 2001, Gibson 2000), increasing argument-verb distance can also facilitate processing (Konieczny 2000). We present two self-paced reading (SPR) experiments involving Hindi that provide further evidence of antilocality, and a third SPR experiment which suggests that similarity-based interference can attenuate this distance-based facilitation. A unified explanation of interference, locality, and antilocality effects is proposed via an independently motivated theory of activation decay and retrieval interference (Anderson et al. 2004).
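The activation-decay account can be sketched with ACT-R's base-level learning equation, B = ln(Σ t_j^(−d)): decay over distance yields locality, while reactivation of an argument by intervening material yields antilocality. The times and reactivation schedules below are hypothetical:

```python
import math

def base_activation(retrieval_times, d=0.5):
    """ACT-R base-level learning: B = ln(sum over past uses of t^-d),
    where each t is the time elapsed since one (re)activation of the item."""
    return math.log(sum(t ** -d for t in retrieval_times))

# Locality: a single encoding decays, so long argument-verb distance is costly.
short_distance = base_activation([1.0])   # argument encoded 1 s before the verb
long_distance  = base_activation([4.0])   # same argument, 4 s before: weaker

# Antilocality: intervening material that re-touches the argument
# (e.g., modifiers of it) adds reactivations, raising activation at the verb.
long_with_reactivation = base_activation([4.0, 2.0, 0.5])

print(round(short_distance, 2), round(long_distance, 2),
      round(long_with_reactivation, 2))
```

On this view locality and antilocality are not competing forces but two outcomes of the same decay-plus-reactivation dynamics; interference then enters through the retrieval step, degrading the benefit of reactivation when similar items compete.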
Linear mixed-effects models (LMMs) have increasingly replaced mixed-model analyses of variance for statistical inference in factorial psycholinguistic experiments. Although LMMs have many advantages over ANOVA, setting them up for data analysis, like setting up an ANOVA, requires some care. One simple option, when numerically possible, is to fit the full variance-covariance structure of random effects (the maximal model; Barr, Levy, Scheepers & Tily, 2013), presumably to keep Type I error down to the nominal α in the presence of random effects. Although it is true that fitting a model with only random intercepts may lead to higher Type I error, fitting a maximal model also has a cost: it can lead to a significant loss of power. We demonstrate this with simulations and suggest that for typical psychological and psycholinguistic data, higher power is achieved without inflating the Type I error rate if a model selection criterion is used to select a random-effect structure that is supported by the data. (C) 2017 The Authors. Published by Elsevier Inc.
This tutorial analyzes voice onset time (VOT) data from Dongbei (Northeastern) Mandarin Chinese and North American English to demonstrate how Bayesian linear mixed models can be fit using the programming language Stan via the R package brms. Through this case study, we demonstrate some of the advantages of the Bayesian framework: researchers can (i) flexibly define the underlying process that they believe to have generated the data; (ii) obtain direct information regarding the uncertainty about the parameter that relates the data to the theoretical question being studied; and (iii) incorporate prior knowledge into the analysis. Getting started with Bayesian modeling can be challenging, especially when one is trying to model one’s own (often unique) data. It is difficult to see how one can apply general principles described in textbooks to one’s own specific research problem. We address this barrier to using Bayesian methods by providing three detailed examples, with source code to allow easy reproducibility. The examples presented are intended to give the reader a flavor of the process of model-fitting; suggestions for further study are also provided. All data and code are available from: https://osf.io/g4zpv.
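Point (iii) above, incorporating prior knowledge, can be illustrated with a conjugate normal-normal update: a closed-form special case of what brms/Stan compute by sampling. The VOT values and the known-σ assumption below are purely illustrative:

```python
import statistics

# Hypothetical VOT measurements (ms) for one stop category.
data = [85.0, 92.0, 78.0, 88.0, 95.0, 81.0]
sigma = 10.0                        # assumed known observation SD (a simplification)

prior_mean, prior_sd = 70.0, 15.0   # prior belief about mean VOT, e.g. from literature

n = len(data)
xbar = statistics.mean(data)

# Conjugate update: posterior precision = prior precision + data precision.
post_prec = 1 / prior_sd**2 + n / sigma**2
post_mean = (prior_mean / prior_sd**2 + n * xbar / sigma**2) / post_prec
post_sd = post_prec ** -0.5

print(round(post_mean, 1), round(post_sd, 1))
```

The posterior mean sits between the prior mean and the sample mean, weighted by their precisions, and the posterior SD is smaller than the prior SD: exactly the "direct information about uncertainty" the abstract refers to, here in closed form rather than via MCMC.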
Background: In addition to the canonical subject-verb-object (SVO) word order, German also allows for non-canonical order (OVS), and the case-marking system supports thematic role interpretation. Previous eye-tracking studies (Kamide et al., 2003; Knoeferle, 2007) have shown that unambiguous case information in non-canonical sentences is processed incrementally. For individuals with agrammatic aphasia, comprehension of non-canonical sentences is at chance level (Burchert et al., 2003). The trace deletion hypothesis (TDH; Grodzinsky 1995, 2000) claims that this is due to structural impairments in syntactic representations, which force the individual with aphasia (IWA) to apply a guessing strategy. However, recent studies investigating online sentence processing in aphasia (Caplan et al., 2007; Dickey et al., 2007) found that divergences exist in IWAs' sentence-processing routines depending on whether they comprehended non-canonical sentences correctly or not, pointing rather to a processing deficit explanation. Aims: The aim of the current study was to investigate agrammatic IWAs' online and offline sentence comprehension simultaneously in order to reveal what online sentence-processing strategies they rely on and how these differ from controls' processing routines. We further asked whether IWAs' offline chance performance for non-canonical sentences does indeed result from guessing. Methods & Procedures: We used the visual-world paradigm and measured eye movements (as an index of online sentence processing) of controls (N = 8) and individuals with aphasia (N = 7) during a sentence-picture matching task. Additional offline measures were accuracy and reaction times. Outcomes & Results: While the offline accuracy results corresponded to the pattern predicted by the TDH, IWAs' eye movements revealed systematic differences depending on the response accuracy.
Conclusions: These findings constitute evidence against attributing IWAs' chance performance for non-canonical structures to mere guessing. Instead, our results support processing deficit explanations and characterise the agrammatic parser as deterministic and inefficient: it is slowed down, affected by intermittent deficiencies in performing syntactic operations, and fails to compute reanalysis even when one is detected.
Dynamical models make specific assumptions about the cognitive processes that generate human behavior. In data assimilation, these models are tested against time-ordered data. Recent progress on Bayesian data assimilation demonstrates that this approach combines the strengths of statistical modeling of individual differences with those of dynamical cognitive models.