Dynamical models make specific assumptions about cognitive processes that generate human behavior. In data assimilation, these models are tested against time-ordered data. Recent progress on Bayesian data assimilation demonstrates that this approach combines the strengths of statistical modeling of individual differences with those of dynamical cognitive models.
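Sequential data assimilation of the kind described here can be sketched as a particle filter: an ensemble of model states is propagated by the dynamical model and reweighted by each new observation. The following is a minimal illustrative sketch, not code from any of the papers below; the toy drift model, noise levels, and observation sequence are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def assimilate(particles, step, likelihood, observations):
    """Sequential importance resampling over time-ordered data."""
    for y in observations:
        particles = step(particles)                # forecast with the dynamical model
        w = likelihood(y, particles)               # weight by fit to the new observation
        w = w / w.sum()
        idx = rng.choice(len(particles), size=len(particles), p=w)
        particles = particles[idx]                 # resample: keep well-fitting states
    return particles

# Toy example: track a slowly drifting scalar state from noisy measurements.
step = lambda x: x + rng.normal(0.0, 0.1, size=x.shape)          # assumed process noise
likelihood = lambda y, x: np.exp(-0.5 * ((y - x) / 0.5) ** 2)    # assumed Gaussian noise
obs = np.linspace(0.0, 2.0, 20)                                  # hypothetical observations
post = assimilate(rng.normal(0.0, 1.0, 1000), step, likelihood, obs)
```

After assimilating all observations, the particle cloud `post` concentrates near the latest observed state, which is the basis for the short-term predictions mentioned below.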
Sequential data assimilation of the stochastic SEIR epidemic model for regional COVID-19 dynamics
(2021)
Newly emerging pandemics like COVID-19 call for predictive models to implement precisely tuned responses to limit their deep impact on society. Standard epidemic models provide a theoretically well-founded dynamical description of disease incidence. For COVID-19 with infectiousness peaking before and at symptom onset, the SEIR model explains the hidden build-up of exposed individuals which creates challenges for containment strategies. However, spatial heterogeneity raises questions about the adequacy of modeling epidemic outbreaks on the level of a whole country. Here, we show that by applying sequential data assimilation to the stochastic SEIR epidemic model, we can capture the dynamic behavior of outbreaks on a regional level. Regional modeling, with relatively low numbers of infected and demographic noise, accounts for both spatial heterogeneity and stochasticity. Based on adapted models, short-term predictions can be achieved. Thus, with the help of these sequential data assimilation methods, more realistic epidemic models are within reach.
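The stochastic SEIR dynamics with demographic noise described above can be illustrated with a binomial update per time step. This is a generic sketch of a stochastic SEIR compartment model, not the fitted model from the paper; all parameter values and population sizes are assumptions chosen for illustration.

```python
import numpy as np

def seir_step(state, beta, sigma, gamma, dt, rng):
    """One stochastic SEIR update with binomial demographic noise."""
    S, E, I, R = state
    N = S + E + I + R
    new_E = rng.binomial(S, 1 - np.exp(-beta * I / N * dt))  # exposure  S -> E
    new_I = rng.binomial(E, 1 - np.exp(-sigma * dt))         # onset     E -> I
    new_R = rng.binomial(I, 1 - np.exp(-gamma * dt))         # recovery  I -> R
    return (S - new_E, E + new_E - new_I, I + new_I - new_R, R + new_R)

rng = np.random.default_rng(1)
state = (99_000, 500, 500, 0)   # hypothetical regional population of 100,000
for _ in range(100):
    state = seir_step(state, beta=0.4, sigma=0.2, gamma=0.1, dt=0.5, rng=rng)
```

The binomial draws are what make low regional case counts noisy: with small `E` and `I`, each realization of the outbreak differs, which is exactly the stochasticity that regional-level data assimilation has to handle.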
Skilled reading requires information processing of the fixated and the not-yet-fixated words to generate precise control of gaze. Over the last 30 years, experimental research has provided evidence that word processing is distributed across the perceptual span, which permits recognition of the fixated (foveal) word as well as preview of parafoveal words to the right of fixation. However, theoretical models have been unable to differentiate the specific influences of foveal and parafoveal information on saccade control. Here we show how parafoveal word difficulty modulates spatial and temporal control of gaze in a computational model to reproduce experimental results. In a fully Bayesian framework, we estimated model parameters for different models of parafoveal processing and carried out large-scale predictive simulations and model comparisons for a gaze-contingent reading experiment. We conclude that mathematical modeling of data from gaze-contingent experiments permits the precise identification of pathways from parafoveal information processing to gaze control, uncovering potential mechanisms underlying the parafoveal contribution to eye-movement control.
In eye-movement control during reading, advanced process-oriented models have been developed to reproduce behavioral data. So far, model complexity and large numbers of model parameters have prevented rigorous statistical inference and modeling of interindividual differences. Here we propose a Bayesian approach to both problems for one representative computational model of sentence reading (SWIFT; Engbert et al., Psychological Review, 112, 2005, pp. 777-813). We used experimental data from 36 subjects who read text in a normal layout and in one of four manipulated layouts (e.g., mirrored and scrambled letters). The SWIFT model was fitted to subjects and experimental conditions individually to investigate between-subject variability. Based on posterior distributions of model parameters, fixation probabilities and durations are reliably recovered from simulated data and reproduced for withheld empirical data, at both the experimental-condition and subject levels. A subsequent statistical analysis of model parameters across reading conditions generates model-driven explanations for observable effects between conditions.
Lisa Schwetlick et al. present a computational model linking visual scan path generation in scene viewing to physiological and experimental work on perisaccadic covert attention, the act of attending to an object visually without overtly moving the eyes toward it. They find that integrating covert attention into predictive models of visual scan paths greatly improves the model's agreement with experimental data. How we perceive a visual scene depends critically on the selection of gaze positions. In this selection process, visual attention is known to play a key role in two ways. First, image features attract visual attention, a fact that is captured well by time-independent fixation models. Second, millisecond-level attentional dynamics around the time of a saccade drive our gaze from one position to the next. These two related research areas on attention are typically treated as separate, both theoretically and experimentally. Here we link the two by demonstrating that perisaccadic attentional dynamics improve predictions of scan path statistics. In a mathematical model, we integrated perisaccadic covert attention with dynamic scan path generation. Our model reproduces saccade amplitude distributions, angular statistics, intersaccadic turning angles, and their impact on fixation durations, as well as inter-individual differences, using Bayesian inference. Our results therefore lend support to the relevance of perisaccadic attention for gaze statistics.
The interplay between cognitive and oculomotor processes during reading can be explored when the spatial layout of text deviates from the typical display. In this study, we investigate various eye-movement measures during reading of text with experimentally manipulated layouts (word-wise and letter-wise mirror-reversed text as well as inverted and scrambled text). While typical findings (e.g., longer mean fixation times, shorter mean saccade lengths) for reading manipulated texts compared to normal texts were reported in earlier work, little is known about changes in oculomotor targeting, observable in within-word landing positions, under these text layouts. Here we carry out precise analyses of landing positions and find substantial changes in the so-called launch-site effect in addition to the expected overall slow-down of reading performance. Specifically, during reading of the manipulated text conditions with reversed letter order (against the overall reading direction), we find a reduced launch-site effect, while in all other manipulated text conditions we observe an increased launch-site effect. Our results clearly indicate that the oculomotor system is highly adaptive when confronted with unusual reading conditions.
In an influential theoretical model, human sensorimotor control is achieved by a Bayesian decision process, which combines noisy sensory information and learned prior knowledge. A ubiquitous signature of prior knowledge and Bayesian integration in human perception and motor behavior is the frequently observed bias toward an average stimulus magnitude (i.e., a central-tendency bias, range effect, regression-to-the-mean effect). However, in the domain of eye movements, there is a recent controversy about the fundamental existence of a range effect in the saccadic system. Here we argue that the problem of the existence of a range effect is linked to the availability of prior knowledge for saccade control. We present results from two prosaccade experiments that both employ an informative prior structure (i.e., a nonuniform Gaussian distribution of saccade target distances). Our results demonstrate the validity of Bayesian integration in saccade control, which generates a range effect in saccades. According to Bayesian integration principles, the saccadic range effect depends on the availability of prior knowledge and varies in size as a function of the reliability of the prior and the sensory likelihood.
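The Bayesian integration principle behind the range effect can be written down for the Gaussian case: the estimate is a reliability-weighted average of the sensory measurement and the prior mean, so noisier sensory input pulls responses toward the average target distance. This is a generic textbook sketch, not the model code from the study; the target distances used below are made up for illustration.

```python
def bayes_estimate(x_obs, prior_mean, prior_var, sensory_var):
    """Posterior mean of a Gaussian prior combined with a Gaussian likelihood."""
    w = prior_var / (prior_var + sensory_var)   # weight on the sensory evidence
    return w * x_obs + (1 - w) * prior_mean

# Hypothetical far target (10 deg) against an average target distance of 5 deg:
# with equal reliabilities, the estimate lands halfway between the two (7.5 deg),
# i.e. an undershoot toward the mean -- the central-tendency (range) effect.
far_estimate = bayes_estimate(10.0, prior_mean=5.0, prior_var=1.0, sensory_var=1.0)
```

Increasing `sensory_var` (a less reliable likelihood) pulls the estimate further toward the prior mean, which is exactly the dependence of the range effect on prior and likelihood reliability described above.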
During reading, rapid eye movements (saccades) shift the reader's line of sight from one word to another for high-acuity visual information processing. While experimental data and theoretical models show that readers aim at word centers, eye-movement (oculomotor) accuracy is low compared to other tasks. As a consequence, distributions of saccadic landing positions indicate large (i) random errors and (ii) systematic over- and undershoot of word centers, which additionally depend on saccade lengths (McConkie et al., Vision Research, 28(10), 1107-1118, 1988). Here we show that both error components can be simultaneously reduced by reading German text from right to left (N = 32). We used our experimental data to test a Bayesian model of saccade planning. First, the experimental data are consistent with the model. Second, the model makes specific predictions about the effects of the precision of the prior and the (sensory) likelihood. Our results suggest that a more precise sensory likelihood can explain the reduction of both random and systematic error components.
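The two error components map cleanly onto a Gaussian version of such a saccade-planning model: the posterior mean's deviation from the intended word center is the systematic error, the posterior variance is the random error, and sharpening the sensory likelihood shrinks both at once. A minimal sketch with made-up numbers; the actual model and fitted parameters are in the paper.

```python
def landing_posterior(target, prior_mean, prior_var, like_var):
    """Gaussian posterior (mean, variance) for the planned landing position."""
    post_var = 1.0 / (1.0 / prior_var + 1.0 / like_var)
    post_mean = post_var * (prior_mean / prior_var + target / like_var)
    return post_mean, post_var

# Hypothetical word center at 8 (letter positions), prior centered at 5.
m_coarse, v_coarse = landing_posterior(8.0, prior_mean=5.0, prior_var=4.0, like_var=4.0)
m_sharp, v_sharp = landing_posterior(8.0, prior_mean=5.0, prior_var=4.0, like_var=1.0)
```

With the sharper likelihood (`like_var=1.0`), the posterior mean sits closer to the target (smaller systematic under/overshoot) and the posterior variance is smaller (less random scatter), mirroring the simultaneous reduction of both error components reported above.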