Intuitively, strongly constraining contexts should lead to stronger probabilistic representations of sentences in memory. Encountering unexpected words could therefore be expected to trigger costlier shifts in these representations than expected words. However, psycholinguistic measures commonly used to study probabilistic processing, such as the N400 event-related potential (ERP) component, are sensitive to word predictability but not to contextual constraint. Some research suggests that constraint-related processing cost may be measurable via an ERP positivity following the N400, known as the anterior post-N400 positivity (PNP). The PNP is argued to reflect update of a sentence representation and to be distinct from the posterior P600, which reflects conflict detection and reanalysis. However, constraint-related PNP findings are inconsistent. We sought to conceptually replicate Federmeier et al. (2007) and Kuperberg et al. (2020), who observed that the PNP, but not the N400 or the P600, was affected by constraint at unexpected but plausible words. Using a pre-registered design and statistical approach maximising power, we demonstrated a dissociated effect of predictability and constraint: strong evidence for predictability but not constraint in the N400 window, and strong evidence for constraint but not predictability in the later window. However, the constraint effect was consistent with a P600 and not a PNP, suggesting increased conflict between a strong representation and unexpected input rather than greater update of the representation. We conclude that either a simple strong/weak constraint design is not always sufficient to elicit the PNP, or that previous PNP constraint findings could be an artifact of smaller sample size.
Dynamical models make specific assumptions about the cognitive processes that generate human behavior. In data assimilation, these models are tested against time-ordered data. Recent progress on Bayesian data assimilation demonstrates that this approach combines the strengths of statistical modeling of individual differences with those of dynamical cognitive models.
In this paper we examine the effect of uncertainty on readers’ predictions about meaning. In particular, we were interested in how uncertainty might influence the likelihood of committing to a specific sentence meaning. We conducted two event-related potential (ERP) experiments using particle verbs such as turn down and manipulated uncertainty by constraining the context such that readers could be either highly certain about the identity of a distant verb particle, such as turn the bed […] down, or less certain due to competing particles, such as turn the music […] up/down. The study was conducted in German, where verb particles appear clause-finally and may be separated from the verb by a large amount of material. We hypothesised that this separation would encourage readers to predict the particle, and that high certainty would make prediction of a specific particle more likely than lower certainty. If a specific particle was predicted, this would reflect a strong commitment to sentence meaning that should incur a higher processing cost if the prediction is wrong. If a specific particle was less likely to be predicted, commitment should be weaker and the processing cost of a wrong prediction lower. If true, this could suggest that uncertainty discourages predictions via an unacceptable cost-benefit ratio. However, given the clear predictions made by the literature, it was surprisingly unclear whether the uncertainty manipulation affected the two ERP components studied, the N400 and the PNP. Bayes factor analyses showed that evidence for our a priori hypothesised effect sizes was inconclusive, although there was decisive evidence against a priori hypothesised effect sizes larger than 1μV for the N400 and larger than 3μV for the PNP. We attribute the inconclusive finding to the properties of verb-particle dependencies that differ from the verb-noun dependencies in which the N400 and PNP are often studied.
In 2019 the Journal of Memory and Language instituted an open data and code policy, which requires that, as a rule, code and data be released at the latest upon publication. How effective is this policy? We compared 59 papers published before, and 59 papers published after, the policy took effect. After the policy was in place, the rate of data sharing increased by more than 50%. We further looked at whether papers published under the open data policy were reproducible, in the sense that the published results could be regenerated from the data and, when it was provided, the code. For 8 of the 59 papers, the data sets were inaccessible. The reproducibility rate ranged from 34% to 56%, depending on the reproducibility criteria. The strongest predictor of whether an attempt to reproduce would be successful was the presence of the analysis code: it increases the probability of reproducing reported results by almost 40%. We propose two simple steps that can increase the reproducibility of published papers: share the analysis code, and attempt to reproduce one's own analysis using only the shared materials.
What is the processing cost of being garden-pathed by a temporary syntactic ambiguity? We argue that comparing average reading times in garden-path versus non-garden-path sentences is not enough to answer this question. Trial-level contaminants such as inattention, the fact that garden pathing may occur non-deterministically in the ambiguous condition, and "triage" (rejecting the sentence without reanalysis; Fodor & Inoue, 2000) lead to systematic underestimates of the true cost of garden pathing. Furthermore, the "pure" garden-path effect due to encountering an unexpected word needs to be separated from the additional cost of syntactic reanalysis. To get more realistic estimates for the individual processing costs of garden pathing and syntactic reanalysis, we implement a novel computational model that includes trial-level contaminants as probabilistically occurring latent cognitive processes. The model shows a good predictive fit to existing reading time and judgment data. Furthermore, the latent-process approach captures differences between noun phrase/zero complement (NP/Z) garden-path sentences and semantically biased reduced relative clause (RRC) garden-path sentences: The NP/Z garden path occurs nearly deterministically but can be mostly eliminated by adding a comma. By contrast, the RRC garden path occurs with a lower probability, but disambiguation via semantic plausibility is not always effective.
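The latent-process idea described in the abstract above can be illustrated with a toy mixture likelihood. This is a minimal sketch in the spirit of that approach, not the authors' implemented model: the parameter values, distributional choices, and function name are invented for illustration. On each trial, with some probability the reader is inattentive (a contaminant distribution over reading times); otherwise the garden path occurs with probability `p_gp` and adds a cost to the location of a lognormal reading-time distribution.

```python
import math


def mixture_density(rt, p_inattentive=0.1, p_gp=0.8,
                    mu=6.0, sigma=0.4, gp_cost=0.5, max_rt=5000.0):
    """Toy density for one reading time (ms) under a latent-process mixture:
    - with probability p_inattentive: contaminant, uniform on (0, max_rt]
    - otherwise: lognormal, with gp_cost added to the log-scale location mu
      on a proportion p_gp of trials (the latent garden-path process).
    All parameter values are illustrative, not estimates from the paper."""
    def lognorm_pdf(x, m, s):
        return (math.exp(-(math.log(x) - m) ** 2 / (2 * s ** 2))
                / (x * s * math.sqrt(2 * math.pi)))

    attentive = (p_gp * lognorm_pdf(rt, mu + gp_cost, sigma)
                 + (1 - p_gp) * lognorm_pdf(rt, mu, sigma))
    return p_inattentive / max_rt + (1 - p_inattentive) * attentive
```

Because the garden path occurs only on a proportion of trials and contaminant trials dilute the signal further, a simple difference in condition means underestimates the true garden-path cost, which is the abstract's central methodological point.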
When researchers carry out a null hypothesis significance test, it is tempting to assume that a statistically significant result lowers Prob(H0), the probability of the null hypothesis being true. Technically, such a statement is meaningless for various reasons: for example, the null hypothesis does not have a probability associated with it. However, it is possible to relax certain assumptions to compute the posterior probability Prob(H0) under repeated sampling. We show in a step-by-step guide that the intuitively appealing belief that Prob(H0) is low when significant results have been obtained under repeated sampling is in general incorrect and depends greatly on (a) the prior probability of the null being true, (b) the Type I error rate, (c) the Type II error rate, and (d) replication of a result. Through step-by-step simulations using open-source code in the R System for Statistical Computing, we show that uncertainty about the null hypothesis being true often remains high despite a significant result. To help the reader develop intuitions about this common misconception, we provide a Shiny app (https://danielschad.shinyapps.io/probnull/). We expect that this tutorial will help researchers better understand and judge results from null hypothesis significance tests.
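The core computation behind the tutorial described above can be sketched in a few lines, under its stated simplifying assumptions (a point prior on the null, fixed Type I error rate and power; the function name is mine, not from the paper). By Bayes' rule, the posterior probability of the null after a significant result depends on the prior probability of the null, the Type I error rate α, and power (1 − β):

```python
def prob_h0_given_significant(prior_h0, alpha=0.05, power=0.8):
    """Posterior probability of H0 after a significant result, by Bayes' rule:
    P(H0 | sig) = alpha * P(H0) / (alpha * P(H0) + power * (1 - P(H0)))."""
    p_sig = alpha * prior_h0 + power * (1 - prior_h0)
    return alpha * prior_h0 / p_sig


# With an even prior P(H0) = 0.5, high power, and alpha = 0.05, a single
# significant result drives P(H0 | sig) down to about 0.06; replication
# updates the prior sequentially and lowers it further.
p = 0.5
for study in range(2):
    p = prob_h0_given_significant(p)

# But with a plausible null (prior 0.9) and low power (0.3), a significant
# result still leaves P(H0 | sig) = 0.6 -- uncertainty remains high.
skeptical = prob_h0_given_significant(0.9, alpha=0.05, power=0.3)
```

The second case illustrates the abstract's claim that uncertainty about the null often remains high despite a significant result.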
Inferences about hypotheses are ubiquitous in the cognitive sciences. Bayes factors provide one general way to compare different hypotheses by their compatibility with the observed data. Those quantifications can then also be used to choose between hypotheses. While Bayes factors provide an immediate approach to hypothesis testing, they are highly sensitive to details of the data/model assumptions, and it is unclear whether the details of the computational implementation (such as bridge sampling) are unbiased for complex analyses. Here, we study how Bayes factors misbehave under different conditions. This includes a study of errors in the estimation of Bayes factors; the first-ever use of simulation-based calibration to test the accuracy and bias of Bayes factor estimates using bridge sampling; a study of the stability of Bayes factors against different MCMC draws and sampling variation in the data; and a look at the variability of decisions based on Bayes factors using a utility function. We outline a Bayes factor workflow that researchers can use to study whether Bayes factors are robust for their individual analysis. Reproducible code is available from https://osf.io/y354c/.

Translational Abstract: In psychology and related areas, scientific hypotheses are commonly tested by asking questions like "is [some] effect present or absent?" Such hypothesis testing is most often carried out using frequentist null hypothesis significance testing (NHST). The NHST procedure is very simple: it usually returns a p-value, which is then used to make binary decisions like "the effect is present/absent." For example, it is common to see studies in the media that draw simplistic conclusions like "coffee causes cancer" or "coffee reduces the chances of getting cancer." However, a powerful and more nuanced alternative approach exists: Bayes factors. Bayes factors have many advantages over NHST.
However, for the complex statistical models that are commonly used for data analysis today, computing Bayes factors is not at all a simple matter. In this article, we discuss the main complexities associated with computing Bayes factors. This is the first article to provide a detailed workflow for understanding and computing Bayes factors in complex statistical models. The article provides a statistically more nuanced way to think about hypothesis testing than the overly simplistic tendency to declare effects as being "present" or "absent".
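The prior sensitivity that the abstract above highlights can be illustrated with the simplest analytically tractable case — a sketch under textbook assumptions, not the paper's bridge-sampling workflow. For a sample mean ybar ~ Normal(μ, σ²/n) with H0: μ = 0 versus H1: μ ~ Normal(0, τ²), the Bayes factor is a ratio of two normal densities evaluated at ybar, and it changes substantially with the prior width τ:

```python
import math


def normal_pdf(x, var):
    """Density of Normal(0, var) at x."""
    return math.exp(-x * x / (2 * var)) / math.sqrt(2 * math.pi * var)


def bf01(ybar, n, sigma=1.0, tau=1.0):
    """Bayes factor BF01 for H0: mu = 0 vs H1: mu ~ Normal(0, tau^2),
    given ybar ~ Normal(mu, sigma^2 / n). Marginalising over mu under H1
    gives ybar ~ Normal(0, tau^2 + sigma^2 / n), so BF01 is a density ratio."""
    se2 = sigma ** 2 / n
    return normal_pdf(ybar, se2) / normal_pdf(ybar, tau ** 2 + se2)


# Identical data, different prior widths tau: the wider the prior under H1,
# the more the Bayes factor favours the null (the Jeffreys-Lindley effect).
for tau in (0.1, 1.0, 10.0):
    print(f"tau = {tau:5.1f}  BF01 = {bf01(ybar=0.2, n=50, tau=tau):.2f}")
```

In realistic hierarchical models no such closed form exists, which is why numerical methods like bridge sampling — and the calibration checks the article proposes — become necessary.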
An important aspect of aphasia is the observation of behavioral variability between and within individual participants. Our study addresses variability in sentence comprehension in German by testing 21 individuals with aphasia and a control group, involving (a) several constructions (declarative sentences, relative clauses, and control structures with an overt pronoun or PRO), (b) three response tasks (object manipulation, sentence-picture matching with/without self-paced listening), and (c) two test phases (to investigate test-retest performance). With this systematic, large-scale study we gained insights into variability in sentence comprehension. We found that the size of syntactic effects varied both in individuals with aphasia and in control participants. Whereas variability in control participants led to systematic changes, variability in individuals with aphasia was unsystematic across test phases and response tasks. The persistent occurrence of canonicity and interference effects across response tasks and test phases, however, shows that performance is systematically influenced by syntactic complexity.
Preregistration is an open science practice that requires the specification of research hypotheses and analysis plans before the data are inspected. Here, we discuss the benefits of preregistration for hypothesis-driven, confirmatory bilingualism research. Using examples from psycholinguistics and bilingualism, we illustrate how non-peer reviewed preregistrations can serve to implement a clean distinction between hypothesis testing and data exploration. This distinction helps researchers avoid casting post-hoc hypotheses and analyses as confirmatory ones. We argue that, in keeping with current best practices in the experimental sciences, preregistration, along with sharing data and code, should be an integral part of hypothesis-driven bilingualism research.