When the mind wanders, attention turns away from the external environment and cognitive processing is decoupled from perceptual information. Mind wandering is usually treated as a dichotomy (dichotomy hypothesis) and is often measured using self-reports. Here, we propose the levels of inattention hypothesis, which postulates attentional decoupling to graded degrees at different hierarchical levels of cognitive processing. To measure graded levels of attentional decoupling during reading, we introduce the sustained attention to stimulus task (SAST), which is based on the psychophysics of error detection. Under experimental conditions likely to induce mind wandering, we found that subjects were less likely to notice errors that required high-level processing for their detection than errors that required only low-level processing. Eye tracking revealed that, before errors were overlooked, the influences of high- and low-level linguistic variables on eye fixations were reduced in a graded fashion, indicating episodes of mindless reading at weak and deep levels. Individual fixation durations predicted the overlooking of lexical errors 5 s before they occurred. Our findings support the levels of inattention hypothesis and suggest that different levels of mindless reading can be measured behaviorally in the SAST. Using eye tracking to detect mind wandering online represents a promising approach for the development of new techniques to study mind wandering and to ameliorate its negative consequences.
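As an illustration of how the overlooking of errors might be predicted from preceding fixation durations, here is a minimal sketch using logistic regression on simulated data; the variable names and numbers are hypothetical, and this is not the authors' analysis code.

# Illustrative sketch with invented data: predict whether an error is
# overlooked from the mean fixation duration in the preceding interval.
set.seed(1)
d <- data.frame(
  mean_fixdur = rnorm(200, mean = 220, sd = 30),  # mean fixation duration (ms) before the error (hypothetical)
  overlooked  = rbinom(200, 1, 0.4)               # 1 = error overlooked, 0 = error detected (hypothetical)
)
m <- glm(overlooked ~ mean_fixdur, family = binomial, data = d)
summary(m)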
Inferences about hypotheses are ubiquitous in the cognitive sciences. Bayes factors provide one general way to compare different hypotheses by their compatibility with the observed data. Those quantifications can then also be used to choose between hypotheses. While Bayes factors provide an immediate approach to hypothesis testing, they are highly sensitive to details of the data and model assumptions, and it is unclear whether the details of the computational implementation (such as bridge sampling) are unbiased for complex analyses. Here, we study how Bayes factors misbehave under different conditions. This includes a study of errors in the estimation of Bayes factors; the first-ever use of simulation-based calibration to test the accuracy and bias of Bayes factor estimates obtained via bridge sampling; a study of the stability of Bayes factors against different MCMC draws and sampling variation in the data; and a look at the variability of decisions based on Bayes factors using a utility function. We outline a Bayes factor workflow that researchers can use to study whether Bayes factors are robust for their individual analysis. Reproducible code is available from https://osf.io/y354c/.

Translational Abstract

In psychology and related areas, scientific hypotheses are commonly tested by asking questions like "is [some] effect present or absent." Such hypothesis testing is most often carried out using frequentist null hypothesis significance testing (NHST). The NHST procedure is very simple: It usually returns a p-value, which is then used to make binary decisions like "the effect is present/absent." For example, it is common to see studies in the media that draw simplistic conclusions like "coffee causes cancer" or "coffee reduces the chances of getting cancer." However, a powerful and more nuanced alternative approach exists: Bayes factors. Bayes factors have many advantages over NHST. However, for the complex statistical models that are commonly used for data analysis today, computing Bayes factors is not at all a simple matter. In this article, we discuss the main complexities associated with computing Bayes factors. This is the first article to provide a detailed workflow for understanding and computing Bayes factors in complex statistical models. The article provides a statistically more nuanced way to think about hypothesis testing than the overly simplistic tendency to declare effects as being "present" or "absent".
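Because the abstract turns on how Bayes factor estimates behave under bridge sampling, a minimal sketch may help make the setup concrete. This is not the article's code: the data frame df, the outcome rt, and the predictor cond are hypothetical, and the prior on the slope is an arbitrary illustration (the article's point is precisely that such choices matter).

library(brms)

# Null and alternative model; save_pars(all = TRUE) keeps all parameter draws
# so that bridge sampling can be applied afterwards. The normal(0, 1) prior on
# the slope is an arbitrary choice made only for illustration.
m0 <- brm(rt ~ 1, data = df, save_pars = save_pars(all = TRUE))
m1 <- brm(rt ~ 1 + cond, data = df,
          prior = set_prior("normal(0, 1)", class = "b"),
          save_pars = save_pars(all = TRUE))

# Bayes factor BF01 estimated via bridge sampling; rerunning this with fresh
# MCMC draws gives a feel for the estimator's variability discussed above.
bayes_factor(m0, m1)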
Numerous studies have demonstrated effects of word frequency on eye movements during reading, but the precise timing of this influence has remained unclear. The fast priming paradigm was previously used to study influences of related versus unrelated primes on the target word. Here, we use this procedure to investigate whether the frequency of the prime word has a direct influence on eye movements during reading when the prime-target relation is not manipulated. We found that, with average prime intervals of 32 ms, readers made longer single fixation durations on the target word in the low-frequency than in the high-frequency prime condition. Distributional analyses demonstrated that the effect of prime frequency on single fixation durations occurred very early, supporting theories of immediate cognitive control of eye movements. Finding prime frequency effects only 207 ms after the prime became visible, and for prime durations of 32 ms, yields new time constraints for the cognitive processes controlling eye movements during reading. Our variant of the fast priming paradigm provides a new approach to test early influences of word processing on eye-movement control during reading.
How is reading development reflected in eye-movement measures? How does the perceptual span change during the initial years of reading instruction? Does parafoveal processing require competence in basic word-decoding processes? We report data from the first cross-sectional measurement of the perceptual span of German beginning readers (n = 139), collected in the context of the large longitudinal PIER study of intrapersonal developmental risk factors (Potsdamer Intrapersonale Entwicklungsrisiken). Using the moving-window paradigm, eye movements of three groups of students (Grades 1-3) were measured with gaze-contingent presentation of a variable amount of text around fixation. Reading rate increased from Grades 1-3, with smaller increases for higher grades. Perceptual-span results showed the expected main effects of grade and window size: fixation durations and refixation probability decreased with grade and window size, whereas reading rate and saccade length increased. Critically, for reading rate, first-fixation duration, saccade length, and refixation probability, there were significant interactions of grade and window size that were mainly based on the contrast between Grades 2 and 3 rather than between Grades 1 and 2. Taken together, development of the perceptual span only really takes off between Grades 2 and 3, suggesting that efficient parafoveal processing presupposes that basic processes of reading have been mastered.
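To illustrate the logic of the gaze-contingent moving-window paradigm mentioned above, here is a minimal sketch that masks letters outside a symmetric window around the fixated character; the masking conventions are simplified assumptions for illustration, not the experiment software.

# Illustrative sketch: letters outside a window of +/- "window" characters
# around the fixated character are replaced by a mask; spaces are preserved.
moving_window <- function(text, fixated_char, window) {
  chars <- strsplit(text, "")[[1]]
  visible <- seq_along(chars) >= fixated_char - window &
             seq_along(chars) <= fixated_char + window
  masked <- ifelse(visible, chars, ifelse(chars == " ", " ", "x"))
  paste(masked, collapse = "")
}

moving_window("Die Kinder lesen ein Buch", fixated_char = 8, window = 4)
# returns "xxx Kinder lxxxx xxx xxxx": only text near fixation is visible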
BACKGROUND: Addiction is supposedly characterized by a shift from goal-directed to habitual decision making, thus facilitating automatic drug intake. The two-step task allows distinguishing between these mechanisms by computationally modeling goal-directed and habitual behavior as model-based and model-free control. In addicted patients, decision making may also strongly depend upon drug-associated expectations. Therefore, we investigated model-based versus model-free decision making and its neural correlates as well as alcohol expectancies in alcohol-dependent patients and healthy controls and assessed treatment outcome in patients. METHODS: Ninety detoxified, medication-free, alcohol-dependent patients and 96 age- and gender-matched control subjects underwent functional magnetic resonance imaging during the two-step task. Alcohol expectancies were measured with the Alcohol Expectancy Questionnaire. Over a follow-up period of 48 weeks, 37 patients remained abstinent and 53 patients relapsed as indicated by the Alcohol Timeline Followback method. RESULTS: Patients who relapsed displayed reduced medial prefrontal cortex activation during model-based decision making. Furthermore, high alcohol expectancies were associated with low model-based control in relapsers, while the opposite was observed in abstainers and healthy control subjects. However, reduced model-based control per se was not associated with subsequent relapse. CONCLUSIONS: These findings suggest that poor treatment outcome in alcohol dependence does not simply result from a shift from model-based to model-free control but is instead dependent on the interaction between high drug expectancies and low model-based decision making. Reduced model-based medial prefrontal cortex signatures in those who relapse point to a neural correlate of relapse risk. These observations suggest that therapeutic interventions should target subjective alcohol expectancies.
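To make the notion of model-based versus model-free control concrete, here is a minimal illustrative sketch of the hybrid valuation commonly used to analyze the two-step task: first-stage action values are a weighted mixture of model-based and model-free values, and choices follow a softmax rule. This is not the study's model code; all variable names and numbers are invented.

w    <- 0.6                      # weighting parameter: 1 = fully model-based, 0 = fully model-free
beta <- 3.0                      # softmax inverse temperature
q_mb <- c(a1 = 0.70, a2 = 0.40)  # model-based values of the two first-stage actions (hypothetical)
q_mf <- c(a1 = 0.30, a2 = 0.55)  # model-free (habitual) values (hypothetical)

q_net    <- w * q_mb + (1 - w) * q_mf                    # hybrid action values
p_choice <- exp(beta * q_net) / sum(exp(beta * q_net))   # softmax choice probabilities
p_choice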
Experiments in research on memory, language, and in other areas of cognitive science are increasingly being analyzed using Bayesian methods. This has been facilitated by the development of probabilistic programming languages such as Stan, and easily accessible front-end packages such as brms. The utility of Bayesian methods, however, ultimately depends on the relevance of the Bayesian model, in particular whether or not it accurately captures the structure of the data and the data analyst's domain expertise. Even with powerful software, the analyst is responsible for verifying the utility of their model. To demonstrate this point, we introduce a principled Bayesian workflow (Betancourt, 2018) to cognitive science. Using a concrete working example, we describe basic questions one should ask about the model: prior predictive checks, computational faithfulness, model sensitivity, and posterior predictive checks. The running example for demonstrating the workflow is data on reading times with a linguistic manipulation of object versus subject relative clause sentences. This principled Bayesian workflow also demonstrates how to use domain knowledge to inform prior distributions. It provides guidelines and checks for valid data analysis, avoiding overfitting complex models to noise, and capturing relevant data structure in a probabilistic model. Given the increasing use of Bayesian methods, we aim to discuss how these methods can be properly employed to obtain robust answers to scientific questions.
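As a concrete illustration of two of the workflow steps named above (prior predictive and posterior predictive checks), the following sketch uses brms. It is not the article's code: the data frame df with reading times rt and the relative-clause predictor rctype, as well as the specific priors, are assumptions made only for illustration.

library(brms)

# Weakly informative priors on the log-millisecond scale (illustrative choices).
priors <- c(set_prior("normal(6, 0.6)", class = "Intercept"),
            set_prior("normal(0, 0.1)", class = "b"),
            set_prior("normal(0, 0.5)", class = "sigma"))

# Prior predictive check: sample from the priors only and inspect whether the
# simulated reading times are plausible on the millisecond scale.
m_prior <- brm(rt ~ rctype, data = df, family = lognormal(),
               prior = priors, sample_prior = "only")
pp_check(m_prior, ndraws = 50)

# Fit the model to the data and run a posterior predictive check.
m_post <- brm(rt ~ rctype, data = df, family = lognormal(), prior = priors)
pp_check(m_post, ndraws = 50)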
The zoom lens of attention: Simulating shuffled versus normal text reading using the SWIFT model
(2012)
Assumptions on the allocation of attention during reading are crucial for theoretical models of eye guidance. The zoom lens model of attention postulates that attentional deployment can vary from a sharp focus to a broad window. The model is closely related to the foveal load hypothesis, i.e., the assumption that the perceptual span is modulated by the difficulty of the fixated word. However, these important theoretical concepts for cognitive research have not been tested quantitatively in eye movement models. Here we show that the zoom lens model, implemented in the SWIFT model of saccade generation, captures many important patterns of eye movements. We compared the model's performance to experimental data from normal and shuffled text reading. Our results demonstrate that the zoom lens of attention might be an important concept for eye movement control in reading.
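A minimal sketch of the zoom-lens idea may help here, assuming a Gaussian processing-rate profile whose width varies with foveal load; this is an illustration only, not the actual SWIFT implementation, and all numbers are invented.

# Illustrative zoom-lens profile: relative processing rate falls off as a
# Gaussian around the fixated letter; the window narrows under high foveal
# load (foveal load hypothesis) and broadens under low load.
zoom_lens_rate <- function(eccentricity, span) {
  exp(-eccentricity^2 / (2 * span^2))
}

letters_ecc <- -10:10                                    # letter positions relative to fixation
narrow <- zoom_lens_rate(letters_ecc, span = 2)          # sharp focus (high foveal load)
broad  <- zoom_lens_rate(letters_ecc, span = 5)          # broad window (low foveal load)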
Eye movements during the reading of multi-line pages of text were analyzed to determine the trajectory of reading saccades. The results of two experiments showed that the trajectory of the majority of forward-directed saccades was negatively biased, i.e., the trajectory fell below the start and end location of the saccadic movement. This is attributed to a global top-to-bottom orienting of attention. The curvature size and the proportion of negative trajectories were diminished when linguistic processing demands were high and when the beginning lines of a page were read. Longer pre-saccadic fixations also yielded smaller saccadic curvatures, and they resulted in fewer negatively curved forward-directed saccades in Experiment 1, although not in Experiment 2. These findings indicate that the top-to-bottom pull of saccadic trajectories is modulated by processing demands and processing opportunities. The results are in general agreement with a time-locked attraction-inhibition hypothesis, according to which the horizontal movement component of a saccade is initially subject to an automatic top-to-bottom orienting of attention that is subsequently inhibited.
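For readers unfamiliar with how saccadic curvature can be quantified, the following sketch shows one common measure, the signed maximal perpendicular deviation of the sampled trajectory from the straight line between saccade start and end; it is not necessarily the exact measure used in these experiments.

# Illustrative sketch: signed perpendicular deviation of eye-position samples
# from the start-end line. Assuming y increases upward and a rightward
# saccade, negative values indicate samples below that line.
saccade_curvature <- function(x, y) {
  n  <- length(x)
  dx <- x[n] - x[1]
  dy <- y[n] - y[1]
  dev <- ((y - y[1]) * dx - (x - x[1]) * dy) / sqrt(dx^2 + dy^2)
  dev[which.max(abs(dev))]   # deviation of maximal magnitude, keeping its sign
}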
When researchers carry out a null hypothesis significance test, it is tempting to assume that a statistically significant result lowers Prob(H0), the probability of the null hypothesis being true. Technically, such a statement is meaningless for various reasons: for example, the null hypothesis does not have a probability associated with it. However, it is possible to relax certain assumptions to compute the posterior probability Prob(H0) under repeated sampling. We show in a step-by-step guide that the intuitively appealing belief that Prob(H0) is low when significant results have been obtained under repeated sampling is in general incorrect and depends greatly on (a) the prior probability of the null being true, (b) the type-I error rate, (c) the type-II error rate, and (d) whether a result has been replicated. Through step-by-step simulations using open-source code in the R system for statistical computing, we show that uncertainty about the null hypothesis being true often remains high despite a significant result. To help the reader develop intuitions about this common misconception, we provide a Shiny app (https://danielschad.shinyapps.io/probnull/). We expect that this tutorial will help researchers better understand and judge results from null hypothesis significance tests.
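The core computation described here can be written in a few lines. The following sketch applies Bayes' rule with illustrative numbers (not results from the article) to show how Prob(H0 | significant) depends on the prior probability of the null, the type-I error rate, and power.

# P(H0 | significant) = P(sig | H0) P(H0) / P(sig), under repeated sampling.
prob_null_given_sig <- function(prior_h0, alpha, power) {
  (alpha * prior_h0) / (alpha * prior_h0 + power * (1 - prior_h0))
}

# Nulls true half the time, alpha = .05, power = .80: low posterior probability.
prob_null_given_sig(prior_h0 = 0.5, alpha = 0.05, power = 0.80)  # ~0.06

# Nulls true 90% of the time, low power: uncertainty remains high.
prob_null_given_sig(prior_h0 = 0.9, alpha = 0.05, power = 0.30)  # 0.60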