In eye-movement control during reading, advanced process-oriented models have been developed to reproduce behavioral data. So far, model complexity and the large number of model parameters have prevented rigorous statistical inference and modeling of interindividual differences. Here we propose a Bayesian approach to both problems for one representative computational model of sentence reading (SWIFT; Engbert et al., Psychological Review, 112, 2005, pp. 777-813). We used experimental data from 36 subjects who read texts in a normal layout and in one of four manipulated layouts (e.g., mirrored or scrambled letters). The SWIFT model was fitted to subjects and experimental conditions individually to investigate between-subject variability. Based on posterior distributions of model parameters, fixation probabilities and durations are reliably recovered from simulated data and reproduced for withheld empirical data, at both the experimental-condition and subject levels. A subsequent statistical analysis of model parameters across reading conditions generates model-driven explanations for observable effects between conditions.
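As an illustration of the kind of Bayesian parameter estimation described above, the following sketch fits a single parameter of a toy fixation-duration model with a random-walk Metropolis sampler. The exponential duration model and all numbers are hypothetical simplifications chosen for brevity; SWIFT itself has many interacting parameters and requires simulation-based likelihoods.

```python
import math
import random

random.seed(1)

# Toy "data": fixation durations (ms) drawn from an exponential
# distribution with mean 250 ms (rate 1/250).
data = [random.expovariate(1 / 250) for _ in range(500)]

def log_likelihood(rate, xs):
    # Exponential log-likelihood; stands in for the far more complex,
    # simulation-based likelihood of a full reading model.
    if rate <= 0:
        return -math.inf
    return sum(math.log(rate) - rate * x for x in xs)

def metropolis(xs, n_steps=5000, step=1e-4):
    """Random-walk Metropolis sampler for the rate parameter (flat prior)."""
    rate = 1 / 200                      # deliberately wrong starting guess
    ll = log_likelihood(rate, xs)
    samples = []
    for _ in range(n_steps):
        prop = rate + random.gauss(0, step)
        ll_prop = log_likelihood(prop, xs)
        if math.log(random.random()) < ll_prop - ll:
            rate, ll = prop, ll_prop    # accept the proposal
        samples.append(rate)
    return samples

samples = metropolis(data)
burned = samples[2000:]                 # discard burn-in
posterior_mean_duration = 1 / (sum(burned) / len(burned))
# posterior_mean_duration lands close to the empirical mean duration
```

With posterior samples in hand, credible intervals and between-subject comparisons (as in the article) reduce to summaries of the sample collection.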
Skilled reading requires information processing of the fixated and the not-yet-fixated words to generate precise control of gaze. Over the last 30 years, experimental research provided evidence that word processing is distributed across the perceptual span, which permits recognition of the fixated (foveal) word as well as preview of parafoveal words to the right of fixation. However, theoretical models have been unable to differentiate the specific influences of foveal and parafoveal information on saccade control. Here we show how parafoveal word difficulty modulates spatial and temporal control of gaze in a computational model to reproduce experimental results. In a fully Bayesian framework, we estimated model parameters for different models of parafoveal processing and carried out large-scale predictive simulations and model comparisons for a gaze-contingent reading experiment. We conclude that mathematical modeling of data from gaze-contingent experiments permits the precise identification of pathways from parafoveal information processing to gaze control, uncovering potential mechanisms underlying the parafoveal contribution to eye-movement control.
Dynamical models of cognition play an increasingly important role in driving theoretical and experimental research in psychology. Therefore, parameter estimation, model analysis, and comparison of dynamical models are of essential importance. In this article, we propose a maximum likelihood approach for model analysis in a fully dynamical framework that includes time-ordered experimental data. Our methods can be applied to dynamical models for the prediction of discrete behavior (e.g., movement onsets); in particular, we use a dynamical model of saccade generation in scene viewing as a case study for our approach. For this model, the likelihood function can be computed directly by numerical simulation, which enables more efficient parameter estimation, including Bayesian inference to obtain reliable estimates and corresponding credible intervals. Using hierarchical models, inference is possible even for individual observers. Furthermore, our likelihood approach can be used to compare different models. In our example, the dynamical framework is shown to outperform nondynamical statistical models. Additionally, the likelihood-based evaluation differentiates model variants that produced indistinguishable predictions on previously used statistics. Our results indicate that the likelihood approach is a promising framework for dynamical cognitive models.
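The idea of computing a likelihood directly by numerical simulation can be sketched in a few lines: simulate the candidate model many times, bin the simulated outcomes, and read off an approximate density for each observed data point. The Erlang-style interval model and all parameter values below are illustrative stand-ins, not the scene-viewing model from the article.

```python
import math
import random
from collections import Counter

random.seed(0)

def simulate_intervals(rate, n, stages=2):
    # Toy saccade-timing model: an interval is the sum of `stages`
    # exponential processing stages (i.e., an Erlang distribution).
    return [sum(random.expovariate(rate) for _ in range(stages))
            for _ in range(n)]

def sim_log_likelihood(rate, observed, n_sim=20000, bin_width=20.0):
    """Approximate the likelihood by numerical simulation: run the
    model n_sim times, bin the outcomes, and use the bin frequencies
    as a density estimate for the observed data."""
    sims = simulate_intervals(rate, n_sim)
    counts = Counter(int(t // bin_width) for t in sims)
    eps = 0.5 / n_sim                       # guard against empty bins
    total = 0.0
    for t in observed:
        p = counts.get(int(t // bin_width), 0) / n_sim + eps
        total += math.log(p / bin_width)
    return total

# "Data" generated from the true rate 1/100; compare two candidates.
observed = simulate_intervals(1 / 100, 200)
good = sim_log_likelihood(1 / 100, observed)
bad = sim_log_likelihood(1 / 150, observed)
# The simulation-based likelihood clearly favors the true parameter
```

The same simulated likelihoods can feed a sampler like the one above, or model comparison via likelihood ratios, at the cost of simulation noise controlled by `n_sim` and `bin_width`.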
When searching for a target in a natural scene, both the target's visual properties and its similarity to the background influence whether and how fast humans are able to find it. So far, it was unclear whether searchers adjust the dynamics of their eye movements (e.g., fixation durations, saccade amplitudes) to the target they search for. In our experiment, participants searched natural scenes for six artificial targets with different spatial frequency content throughout eight consecutive sessions. High-spatial-frequency targets led to smaller saccade amplitudes and shorter fixation durations than low-spatial-frequency targets if target identity was known. If a saccade was programmed in the same direction as the previous saccade, fixation durations and successive saccade amplitudes were not influenced by target type. Visual saliency and empirical fixation density at the endpoints of saccades that maintain direction were comparatively low, indicating that these saccades were less selective. Our results suggest that searchers adjust their eye-movement dynamics to the search target efficiently, since previous research has shown that low spatial frequencies are visible farther into the periphery than high spatial frequencies. We interpret the saccade-direction specificity of our effects as an underlying separation into a default scanning mechanism and a selective, target-dependent mechanism.
Bottom-up and top-down as well as low-level and high-level factors influence where we fixate when viewing natural scenes. However, the importance of each of these factors and how they interact remains a matter of debate. Here, we disentangle these factors by analyzing their influence over time. For this purpose, we develop a saliency model that is based on the internal representation of a recent early spatial vision model to measure the low-level, bottom-up factor. To measure the influence of high-level, bottom-up features, we use a recent deep neural network-based saliency model. To account for top-down influences, we evaluate the models on two large data sets with different tasks: first, a memorization task and, second, a search task. Our results lend support to a separation of visual scene exploration into three phases: the first saccade, an initial guided exploration characterized by a gradual broadening of the fixation density, and a steady state that is reached after roughly 10 fixations. Saccade-target selection during the initial exploration and in the steady state is related to similar areas of interest, which are better predicted when including high-level features. In the search data set, fixation locations are determined predominantly by top-down processes. In contrast, the first fixation follows a different fixation density and contains a strong central fixation bias. Nonetheless, first fixations are guided strongly by image properties, and as early as 200 ms after image onset, fixations are better predicted by high-level information. We conclude that any low-level, bottom-up factors are mainly limited to the generation of the first saccade. All saccades are better explained when high-level features are considered, and later, this high-level, bottom-up control can be overruled by top-down influences.
During reading, rapid eye movements (saccades) shift the reader's line of sight from one word to another for high-acuity visual information processing. While experimental data and theoretical models show that readers aim at word centers, eye-movement (oculomotor) accuracy is low compared to other tasks. As a consequence, distributions of saccadic landing positions indicate large (i) random errors and (ii) systematic over- and undershoot of word centers, which additionally depend on saccade length (McConkie et al., Vision Research, 28(10), 1107-1118, 1988). Here we show that both error components can be simultaneously reduced by reading German text from right to left (N = 32). We used our experimental data to test a Bayesian model of saccade planning. First, the experimental data are consistent with the model. Second, the model makes specific predictions about the effects of the precision of the prior and the (sensory) likelihood. Our results suggest that a more precise sensory likelihood can explain the reduction of both random and systematic error components.
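The precision argument can be made concrete with the standard conjugate Gaussian update, in which the posterior mean is a precision-weighted average of the prior mean and the sensory measurement. The numbers below are hypothetical; they only illustrate that a more precise likelihood reduces both the systematic pull toward the prior (systematic error) and the posterior variance (random error).

```python
def posterior(mu_prior, var_prior, mu_like, var_like):
    """Conjugate Gaussian update: precision-weighted mean and variance."""
    w = var_like / (var_prior + var_like)        # weight on the prior mean
    mu_post = w * mu_prior + (1 - w) * mu_like
    var_post = var_prior * var_like / (var_prior + var_like)
    return mu_post, var_post

# Hypothetical saccade target (word center) at 7 letters; the prior
# pulls toward a default landing position of 5 letters.
precise = posterior(5.0, 4.0, 7.0, 1.0)  # sharp sensory likelihood
noisy = posterior(5.0, 4.0, 7.0, 4.0)    # broad sensory likelihood
# precise -> (6.6, 0.8): closer to the target AND less variable
# noisy   -> (6.0, 2.0): stronger systematic undershoot, more scatter
```

This is the qualitative pattern in the abstract: sharpening only the sensory likelihood simultaneously shrinks the systematic and the random error components.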
The interplay between cognitive and oculomotor processes during reading can be explored when the spatial layout of text deviates from the typical display. In this study, we investigate various eye-movement measures during reading of text with experimentally manipulated layout (word-wise and letter-wise mirror-reversed text as well as inverted and scrambled text). While typical findings in reading manipulated texts compared to normal texts (e.g., longer mean fixation times, shorter mean saccade lengths) were reported in earlier work, little is known about changes in oculomotor targeting, as reflected in within-word landing positions, under these text layouts. Here we carry out precise analyses of landing positions and find substantial changes in the so-called launch-site effect in addition to the expected overall slow-down of reading performance. Specifically, during reading of the manipulated text conditions with reversed letter order (against the overall reading direction), we find a reduced launch-site effect, while in all other manipulated text conditions we observe an increased launch-site effect. Our results clearly indicate that the oculomotor system is highly adaptive when confronted with unusual reading conditions.
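The launch-site effect is conventionally summarized as the slope of a linear regression of mean within-word landing position on launch distance. A minimal ordinary-least-squares sketch, with hypothetical cell means chosen in the spirit of McConkie et al. (1988):

```python
def ols_slope(xs, ys):
    """Least-squares slope and intercept for the launch-site regression."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    return b, my - b * mx

# Hypothetical cell means: launch distance (letters left of word
# beginning) vs. mean within-word landing position (letters).
launch = [-7, -6, -5, -4, -3, -2]
landing = [1.0, 1.5, 2.0, 2.5, 3.0, 3.5]
slope, intercept = ols_slope(launch, landing)
# slope = 0.5 letters per letter of launch distance in this toy data
```

A larger slope means landing positions depend more strongly on the launch site; comparing slopes across text-layout conditions quantifies the increased or reduced launch-site effect reported above.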
Scene viewing is used to study attentional selection in complex but still controlled environments. One of the main observations on eye movements during scene viewing is the inhomogeneous distribution of fixation locations: While some parts of an image are fixated by almost all observers and are inspected repeatedly by the same observer, other image parts remain unfixated even after long exploration intervals. Here, we apply spatial point process methods to investigate the relationship between pairs of fixations. More precisely, we use the pair correlation function, a powerful statistical tool, to evaluate dependencies between fixation locations along individual scanpaths. We demonstrate that aggregation of fixation locations within 4 degrees is stronger than expected by chance. Furthermore, the pair correlation function reveals stronger aggregation of fixations when the same image is presented a second time. We use simulations of a dynamical model to show that a narrower spatial attentional span may explain differences in pair correlations between the first and the second inspection of the same image.
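A naive estimator of the pair correlation function divides the number of point pairs in each distance bin by the number expected under complete spatial randomness. The sketch below ignores edge correction and uses a synthetic clustered pattern rather than fixation data, so it only illustrates the aggregation signature (g(r) > 1 at short distances):

```python
import math
import random

random.seed(2)

def pcf_estimate(points, r_edges, area):
    """Naive pair correlation estimate in a rectangular window:
    ordered-pair counts per distance bin divided by the count expected
    under complete spatial randomness (no edge correction)."""
    n = len(points)
    intensity = n / area
    counts = [0] * (len(r_edges) - 1)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = math.dist(points[i], points[j])
            for k in range(len(r_edges) - 1):
                if r_edges[k] <= d < r_edges[k + 1]:
                    counts[k] += 1
                    break
    g = []
    for k in range(len(r_edges) - 1):
        ring = math.pi * (r_edges[k + 1] ** 2 - r_edges[k] ** 2)
        expected = n * intensity * ring   # CSR expectation, ordered pairs
        g.append(counts[k] / expected)
    return g

# Synthetic clustered pattern: 20 points scattered (sd = 1) around
# each of 5 random centres inside a 30 x 30 window.
centres = [(random.uniform(5, 25), random.uniform(5, 25)) for _ in range(5)]
points = [(cx + random.gauss(0, 1), cy + random.gauss(0, 1))
          for cx, cy in centres for _ in range(20)]
g = pcf_estimate(points, [0.0, 2.0, 4.0, 6.0], 30 * 30)
# g[0] is well above 1, the short-range aggregation reported above
```

Production analyses of fixation data would add edge correction and kernel smoothing, as implemented in dedicated spatial-statistics packages.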