Keywords
- Eye movements (7)
- German (7)
- Bayesian data analysis (6)
- Reading (6)
- eye-tracking (6)
- Sentence processing (5)
- interference (5)
- locality (5)
- Aphasia (4)
- Computational modeling (4)
- Cue-based retrieval (4)
- Individual differences (4)
- Sentence comprehension (4)
- Working memory (4)
- individual differences (4)
- sentence processing (4)
- ACT-R (3)
- ERP (3)
- Parsing (3)
- Scanpaths (3)
- Similarity-based interference (3)
- Spanish (3)
- Surprisal (3)
- Underspecification (3)
- activation (3)
- antilocality (3)
- computational modeling (3)
- entropy (3)
- expectation (3)
- self-paced reading (3)
- sentence comprehension (3)
- working memory capacity (3)
- Bayesian hierarchical modeling (2)
- Bayesian inference (2)
- Chance performance (2)
- Chinese (2)
- Chinese reflexives (2)
- Cognitive modeling (2)
- DLT (2)
- Eastern Armenian (2)
- Non-canonical sentences (2)
- Relative clause (2)
- Self-paced reading (2)
- Sentence-picture matching (2)
- Swedish (2)
- anaphors (2)
- comprehension (2)
- content-addressable memory (2)
- cue-based retrieval (2)
- eye tracking (2)
- possessives (2)
- psycholinguistics (2)
- reading (2)
- reflexives (2)
- retrieval (2)
- statistical (2)
- working-memory (2)
- ziji (2)
- Adaptation (1)
- Aging (1)
- Agrammatic aphasia (1)
- Agreement (1)
- Agreement attraction (1)
- Ambiguity (1)
- Autocorrelation (1)
- BEI (1)
- Bayes factor (1)
- Bayesian meta-analysis (1)
- Bayesian parameter estimation (1)
- Bayesian random effects meta-analysis (1)
- Bilingualism (1)
- Broca's aphasia (1)
- Canonicity and interference effects (1)
- Case (1)
- Centre embedding (1)
- Classifiers (1)
- Cognitive architecture (1)
- Computational modelling (1)
- Computer model (1)
- Confirmatory versus exploratory data analysis (1)
- Corpus (1)
- Cross-linguistic differences (1)
- Cue-based retrieval (1)
- Development (1)
- Expectation (1)
- Experience (1)
- Experimental time series (1)
- Exploratory and confirmatory analyses (1)
- Eye tracking (1)
- Eye-tracking (1)
- Final devoicing (1)
- Gender effects (1)
- Generalized additive mixed models (1)
- German syntax (1)
- Good-enough processing (1)
- Grammaticality illusion (1)
- Hindi (1)
- Hypothesis testing (1)
- Incomplete neutralization (1)
- Information structure (1)
- It-clefts (1)
- Journal policy (1)
- Language understanding (1)
- Linear mixed effect model (1)
- Linear mixed models (1)
- Local coherence (1)
- Locality (1)
- Meta-analysis (1)
- Meta-research (1)
- Model selection (1)
- Morphological cues (1)
- N400 (1)
- Null hypothesis significance testing (1)
- Number interference (1)
- Object manipulation (1)
- Oculo-motor control (1)
- Online and offline processing (1)
- Online morpho-syntactic processing (1)
- Online sentence processing (1)
- Only-foci (1)
- Open (1)
- Open data (1)
- Parallel processing (1)
- Parameter estimation (1)
- Parsing difficulty (1)
- Persian (1)
- Power (1)
- Prediction (1)
- Prior and posterior predictive (1)
- Psycholinguistics (1)
- Reanalysis (1)
- Recurrent neural network model (1)
- Reflexives (1)
- Regressions (1)
- Relative clauses (1)
- Replicability (1)
- Replication (1)
- Reproducibility (1)
- Reproducible statistical analyses (1)
- SOPARSE (1)
- Self-paced listening (1)
- Sentence Comprehension (1)
- Sentence comprehension deficits (1)
- Sentence comprehension disorders (1)
- Sentence comprehension in aphasia (1)
- Shallow processing (1)
- Storage cost (1)
- Structural expectation (1)
- Syntactic reanalysis (1)
- Task demands (1)
- Test-retest reliability (1)
- Type M error (1)
- URM (1)
- Unrestricted race model (1)
- Variability (1)
- Visual world paradigm (1)
- Voice onset time (1)
- Vowel duration (1)
- Within-experiment adaptation (1)
- Working-memory (1)
- a priori (1)
- ambiguities (1)
- anaphor resolution (1)
- antecedent complexity (1)
- anterior PNP (1)
- anticipatory eye movements (1)
- attraction (1)
- bilingualism (1)
- building (1)
- clauses (1)
- competition-integration model (1)
- complex predicates (1)
- confirmatory analysis (1)
- constraint (1)
- context (1)
- contrasts (1)
- control (1)
- cue-based (1)
- dependencies (1)
- digging-in effects (1)
- discourse (1)
- distinctiveness (1)
- distributions (1)
- dynamical models (1)
- ellipsis processing (1)
- encoding (1)
- experimental linguistics (1)
- exploratory analysis (1)
- features (1)
- garden-path effect (1)
- gardenpath model (1)
- grammatical gender (1)
- hypotheses (1)
- hypothesis (1)
- implicit meter (1)
- implicit prosody (1)
- inference (1)
- integration cost (1)
- language production (1)
- latent processes (1)
- linear mixed models (1)
- linear models (1)
- lingering misinterpretation (1)
- linguistic (1)
- linguistic rhythm (1)
- locality effects (1)
- long distance (1)
- memory (1)
- memory pointer (1)
- memory retrieval (1)
- mixture modeling (1)
- model (1)
- multinomial processing tree (1)
- null hypothesis significance testing (1)
- oculomotor (1)
- online sentence processing (1)
- open science (1)
- picture-word interference (1)
- plausibility (1)
- posterior P600 (1)
- posterior predictive checks (1)
- power (1)
- pre-activation (1)
- preactivation (1)
- predictability (1)
- predictions (1)
- preregistration (1)
- prior predictive checks (1)
- probabilistic processing (1)
- re-reading probability (1)
- reading eye movements (1)
- reanalysis (1)
- science (1)
- semantic interference (1)
- sentence comprehension deficits (1)
- silent prosody (1)
- similarity (1)
- skipping rate (1)
- statistical data analysis (1)
- storage cost (1)
- stress-clash (1)
- subject-object asymmetry (1)
- subject-verb agreement (1)
- surprisal (1)
- syntactic parsing (1)
- syntactic reanalysis (1)
- temporal decay (1)
- uncertainty quantification (1)
- unrestricted race model (1)
- verb-phrase ellipsis (1)
- wh-questions (1)
- word embeddings (1)
- workflow (1)
- working memory (1)
There is a wealth of evidence showing that increasing the distance between an argument and its head leads to more processing effort, namely, locality effects; these are usually associated with constraints in working memory (DLT: Gibson, 2000; activation-based model: Lewis and Vasishth, 2005). In SOV languages, however, the opposite effect has been found: antilocality (see discussion in Levy et al., 2013). Antilocality effects can be explained by the expectation-based approach as proposed by Levy (2008) or by the activation-based model of sentence processing as proposed by Lewis and Vasishth (2005). We report an eye-tracking and a self-paced reading study with sentences in Spanish together with measures of individual differences to examine the distinction between expectation- and memory-based accounts, and within memory-based accounts the further distinction between DLT and the activation-based model. The experiments show that (i) antilocality effects as predicted by the expectation account appear only for high-capacity readers; (ii) increasing dependency length by interposing material that modifies the head of the dependency (the verb) produces stronger facilitation than increasing dependency length with material that does not modify the head; this is in agreement with the activation-based model but not with the expectation account; and (iii) a possible outcome of memory load on low-capacity readers is the increase in regressive saccades (locality effects as predicted by memory-based accounts) or, surprisingly, a speedup in the self-paced reading task; the latter consistent with good-enough parsing (Ferreira et al., 2002). In sum, the study suggests that individual differences in working memory capacity play a role in dependency resolution, and that some of the aspects of dependency resolution can be best explained with the activation-based model together with a prediction component.
We examined the effects of argument-head distance in SVO and SOV languages (Spanish and German), while taking into account readers' working memory capacity and controlling for expectation (Levy, 2008) and other factors. We predicted only locality effects, that is, a slowdown produced by increased dependency distance (Gibson, 2000; Lewis and Vasishth, 2005). Furthermore, we expected stronger locality effects for readers with low working memory capacity. Contrary to our predictions, low-capacity readers showed faster reading with increased distance, while high-capacity readers showed locality effects. We suggest that while the locality effects are compatible with memory-based explanations, the speedup of low-capacity readers can be explained by an increased probability of retrieval failure. We present a computational model based on ACT-R that incorporates these assumptions, gives a qualitative account of the present data, and can be tested in future research. Our results suggest that in some cases, interpreting longer RTs as indexing increased processing difficulty and shorter RTs as facilitation may be too simplistic: the same increase in processing difficulty may lead to slowdowns in high-capacity readers and speedups in low-capacity ones. Ignoring individual-level capacity differences when investigating locality effects may lead to misleading conclusions.
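The key idea, that a higher probability of retrieval failure can surface as faster average reading times, can be illustrated with a toy mixture simulation. All quantities below (failure probabilities, mean reading times, standard deviations) are invented for illustration; this is not the ACT-R model reported in the study.

```python
import numpy as np

rng = np.random.default_rng(7)

def reading_times(n, p_fail, mu_success=450.0, mu_fail=300.0, sd=50.0):
    # Toy mixture: successful retrievals complete the dependency and are slow;
    # failed retrievals terminate early and register as fast reading times.
    fail = rng.random(n) < p_fail
    return np.where(fail,
                    rng.normal(mu_fail, sd, n),
                    rng.normal(mu_success, sd, n))

# Hypothetical failure probabilities: under memory load, the retrieval
# failure rate rises much more for low-capacity readers.
high_cap = reading_times(5000, p_fail=0.05)
low_cap = reading_times(5000, p_fail=0.40)

print(high_cap.mean())  # mostly successful retrievals: slow on average
print(low_cap.mean())   # frequent failures: an apparent speedup
```

The same underlying difficulty thus produces opposite-looking averages in the two groups, which is why equating shorter RTs with facilitation can mislead.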
Which repair strategy does the language system deploy when it gets garden-pathed, and what can regressive eye movements in reading tell us about reanalysis strategies? Several influential eye-tracking studies on syntactic reanalysis (Frazier & Rayner, 1982; Meseguer, Carreiras, & Clifton, 2002; Mitchell, Shen, Green, & Hodgson, 2008) have addressed this question by examining scanpaths, i.e., sequential patterns of eye fixations. However, in the absence of a suitable method for analyzing scanpaths, these studies relied on simplified dependent measures that are arguably ambiguous and hard to interpret. We address the theoretical question of repair strategy by developing a new method that quantifies scanpath similarity. Our method reveals several distinct fixation strategies associated with reanalysis that went undetected in a previously published data set (Meseguer et al., 2002). One prevalent pattern suggests re-parsing of the sentence, a strategy that has been discussed in the literature (Frazier & Rayner, 1982); however, readers differed tremendously in how they orchestrated the various fixation strategies. Our results suggest that the human parsing system non-deterministically adopts different strategies when confronted with the disambiguating material in garden-path sentences.
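The general idea of quantifying scanpath similarity can be sketched with an alignment-based dissimilarity over fixation sequences. The sketch below is a simplified stand-in for the kind of measure described above, not the authors' actual method; the fixation coordinates, durations, and cost weights are invented for illustration.

```python
def fixation_cost(f, g):
    # Cost of aligning two fixations: spatial distance (arbitrary units)
    # plus the difference in fixation duration (ms, scaled down).
    (x1, y1, d1), (x2, y2, d2) = f, g
    return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5 + abs(d1 - d2) / 100.0

def scanpath_dissimilarity(a, b, gap=1.0):
    # Dynamic-programming sequence alignment (as in edit distance):
    # dp[i][j] = minimal cost of aligning a[:i] with b[:j].
    n, m = len(a), len(b)
    dp = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = dp[i - 1][0] + gap
    for j in range(1, m + 1):
        dp[0][j] = dp[0][j - 1] + gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dp[i][j] = min(
                dp[i - 1][j - 1] + fixation_cost(a[i - 1], b[j - 1]),  # match
                dp[i - 1][j] + gap,                                    # skip in a
                dp[i][j - 1] + gap,                                    # skip in b
            )
    return dp[n][m]

# Fixations as (x, y, duration): two similar forward paths and one path
# that revisits earlier regions (a regressive pattern).
p1 = [(1, 0, 200), (2, 0, 180), (3, 0, 220)]
p2 = [(1, 0, 210), (2, 0, 190), (3, 0, 230)]
p3 = [(3, 0, 200), (1, 0, 250), (2, 0, 300)]

print(scanpath_dissimilarity(p1, p2))  # small: similar paths
print(scanpath_dissimilarity(p1, p3))  # larger: different fixation order
```

Clustering readers' trials by such pairwise dissimilarities is one way distinct fixation strategies, invisible in aggregate measures, can be made detectable.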
When participants in an experiment have to name pictures while ignoring distractor words superimposed on the picture or presented auditorily (i.e., the picture-word interference paradigm), they take more time when the word to be named (or target) and the distractor word are from the same semantic category (e.g., cat-dog). This experimental effect is known as the semantic interference effect, and is probably one of the most studied effects in the language production literature. The functional origin of the effect and the exact conditions in which it occurs are, however, still debated. Since Lupker (1979) reported the effect in the first response time experiment about 40 years ago, more than 300 similar experiments have been conducted. The semantic interference effect was replicated in many experiments, but several studies also reported the absence of an effect in a subset of experimental conditions. The aim of the present study is to provide a comprehensive theoretical review of the existing evidence to date, together with several Bayesian meta-analyses and meta-regressions, to determine the size of the effect and explore the experimental conditions in which it surfaces. The results are discussed in the light of current debates about the functional origin of the semantic interference effect and its implications for our understanding of the language production system.
An important aspect of aphasia is the observation of behavioral variability between and within individual participants. Our study addresses variability in sentence comprehension in German, by testing 21 individuals with aphasia and a control group and involving (a) several constructions (declarative sentences, relative clauses and control structures with an overt pronoun or PRO), (b) three response tasks (object manipulation, sentence-picture matching with/without self-paced listening), and (c) two test phases (to investigate test-retest performance). With this systematic, large-scale study we gained insights into variability in sentence comprehension. We found that the size of syntactic effects varied both in aphasia and in control participants. Whereas variability in control participants led to systematic changes, variability in individuals with aphasia was unsystematic across test phases or response tasks. The persistent occurrence of canonicity and interference effects across response tasks and test phases, however, shows that the performance is systematically influenced by syntactic complexity.
Within quantitative phonetics, it is common practice to draw conclusions based on statistical significance alone. Using incomplete neutralization of final devoicing in German as a case study, we illustrate the problems with this approach. If researchers find a significant acoustic difference between voiceless and devoiced obstruents, they conclude that neutralization is incomplete, and if they find no significant difference, they conclude that neutralization is complete. However, such strong claims regarding the existence or absence of an effect based on significant results alone can be misleading. Instead, the totality of available evidence should be brought to bear on the question. Towards this end, we synthesize the evidence from 14 studies on incomplete neutralization in German using a Bayesian random-effects meta-analysis. Our meta-analysis provides evidence in favor of incomplete neutralization. We conclude with some suggestions for improving the quality of future research on phonetic phenomena: ensure that sample sizes allow for high-precision estimates of the effect; avoid the temptation to deploy researcher degrees of freedom when analyzing data; focus on estimates of the parameter of interest and the uncertainty about that parameter; attempt to replicate effects found; and, whenever possible, make both the data and analysis publicly available.
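As an illustration of what a Bayesian random-effects meta-analysis computes, the sketch below pools per-study effect estimates under the standard model y_k ~ Normal(mu, se_k^2 + tau^2), where mu is the population mean effect and tau the between-study standard deviation, using a simple grid approximation to the posterior with flat priors. The study estimates and standard errors are invented for illustration; they are not the 14 German studies synthesized here.

```python
import numpy as np

# Hypothetical effect estimates (e.g., vowel-duration differences in ms)
# and their standard errors from seven studies (invented numbers).
y = np.array([10.0, 5.0, 15.0, 8.0, -2.0, 12.0, 7.0])
se = np.array([6.0, 4.0, 7.0, 5.0, 6.0, 8.0, 5.0])

# Grids over the population mean effect mu and between-study SD tau.
mu_grid = np.linspace(-20.0, 30.0, 201)
tau_grid = np.linspace(0.01, 20.0, 100)

def loglik(mu, tau):
    # Marginal likelihood of each study: y_k ~ Normal(mu, se_k^2 + tau^2).
    s2 = se**2 + tau**2
    return (-0.5 * np.log(2 * np.pi * s2) - (y - mu) ** 2 / (2 * s2)).sum()

# Grid approximation to the joint posterior under flat priors.
log_post = np.array([[loglik(mu, tau) for tau in tau_grid] for mu in mu_grid])
post = np.exp(log_post - log_post.max())
post /= post.sum()

p_mu = post.sum(axis=1)               # marginal posterior over mu
mu_mean = (mu_grid * p_mu).sum()      # posterior mean of the pooled effect
p_positive = p_mu[mu_grid > 0].sum()  # P(mu > 0 | data)
print(mu_mean, p_positive)
```

The output illustrates the paper's recommended focus: an estimate of the pooled effect together with the uncertainty about it, rather than a binary significance verdict from any single study.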
A commonly used approach to parameter estimation in computational models is the so-called grid search procedure: the entire parameter space is searched in small steps to determine the parameter value that provides the best fit to the observed data. This approach has several disadvantages: first, it can be computationally very expensive; second, one optimal point value of the parameter is reported as the best-fit value, so we cannot quantify our uncertainty about the parameter estimate. In the main journal article that this methods article accompanies (Jäger et al., 2020, Interference patterns in subject-verb agreement and reflexives revisited: A large-sample study, Journal of Memory and Language), we carried out parameter estimation using Approximate Bayesian Computation (ABC), a Bayesian approach that allows us to quantify our uncertainty about the parameter's value given the data. This approach has the further advantage that it allows us to generate both prior and posterior predictive distributions of reading times from the cue-based retrieval model of Lewis and Vasishth (2005). In sum: instead of the conventional grid search method, we use ABC for parameter estimation in the Lewis and Vasishth (2005) model; ABC has the advantage that the uncertainty about the parameter can be quantified.
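A minimal sketch of ABC rejection sampling as described here, applied to a toy one-parameter simulator rather than the actual cue-based retrieval model; the simulator, prior, summary statistics, and tolerance are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "observed" reading times (ms); a stand-in for real data.
observed = rng.lognormal(mean=6.0, sigma=0.3, size=200)

def simulate(theta, n):
    # Toy one-parameter simulator standing in for a run of the model.
    return rng.lognormal(mean=5.0 + theta, sigma=0.3, size=n)

def summary(x):
    # Summary statistics used to compare simulated and observed data.
    return np.array([x.mean(), x.std()])

obs_stats = summary(observed)

# ABC rejection sampling: draw theta from the prior, simulate data, and
# keep draws whose simulated summary statistics fall within a tolerance
# of the observed ones. Accepted draws approximate the posterior of theta.
accepted = []
for _ in range(20000):
    theta = rng.uniform(0.0, 2.0)  # uniform prior on theta
    if np.linalg.norm(summary(simulate(theta, observed.size)) - obs_stats) < 30.0:
        accepted.append(theta)

posterior = np.array(accepted)
print(posterior.mean(), posterior.std(), posterior.size)
```

Unlike grid search, which returns a single best-fitting point, the spread of the accepted draws directly quantifies the uncertainty about the parameter, and simulating from prior or accepted draws yields the prior and posterior predictive distributions mentioned above.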
Swets et al. (2008. Underspecification of syntactic ambiguities: Evidence from self-paced reading. Memory and Cognition, 36(1), 201–216) presented evidence that the so-called ambiguity advantage (Traxler, Pickering, & Clifton, 1998. Adjunct attachment is not a form of lexical ambiguity resolution. Journal of Memory and Language, 39(4), 558–592), which has been explained in terms of the Unrestricted Race Model, can equally well be explained by assuming underspecification in ambiguous conditions driven by task demands. Specifically, if comprehension questions require that ambiguities be resolved, the parser tends to make an attachment; when questions are about superficial aspects of the target sentence, readers tend to pursue an underspecification strategy. It is reasonable to assume that individual differences in strategy will play a significant role in the application of such strategies, so that studying average behaviour may not be informative. In order to study the predictions of the good-enough processing theory, we implemented two versions of underspecification: the partial specification model (PSM), which is an implementation of the Swets et al. proposal, and a more parsimonious version, the non-specification model (NSM). We evaluate the relative fit of these two kinds of underspecification to Swets et al.'s data; as a baseline, we also fitted three models that assume no underspecification. We find that a model without underspecification provides a somewhat better fit than both underspecification models, while the NSM provides a better fit than the PSM.
We interpret the results as lack of unambiguous evidence in favour of underspecification; however, given that there is considerable existing evidence for good-enough processing in the literature, it is reasonable to assume that some underspecification might occur. Under this assumption, the results can be interpreted as tentative evidence for NSM over PSM. More generally, our work provides a method for choosing between models of real-time processes in sentence comprehension that make qualitative predictions about the relationship between several dependent variables. We believe that sentence processing research will greatly benefit from a wider use of such methods.
Intuitively, strongly constraining contexts should lead to stronger probabilistic representations of sentences in memory. Encountering unexpected words could therefore be expected to trigger costlier shifts in these representations than expected words. However, psycholinguistic measures commonly used to study probabilistic processing, such as the N400 event-related potential (ERP) component, are sensitive to word predictability but not to contextual constraint. Some research suggests that constraint-related processing cost may be measurable via an ERP positivity following the N400, known as the anterior post-N400 positivity (PNP). The PNP is argued to reflect update of a sentence representation and to be distinct from the posterior P600, which reflects conflict detection and reanalysis. However, constraint-related PNP findings are inconsistent. We sought to conceptually replicate Federmeier et al. (2007) and Kuperberg et al. (2020), who observed that the PNP, but not the N400 or the P600, was affected by constraint at unexpected but plausible words. Using a pre-registered design and statistical approach maximising power, we demonstrated a dissociated effect of predictability and constraint: strong evidence for predictability but not constraint in the N400 window, and strong evidence for constraint but not predictability in the later window. However, the constraint effect was consistent with a P600 and not a PNP, suggesting increased conflict between a strong representation and unexpected input rather than greater update of the representation. We conclude that either a simple strong/weak constraint design is not always sufficient to elicit the PNP, or that previous PNP constraint findings could be an artifact of smaller sample size.