TY - JOUR
A1 - Schwetlick, Lisa
A1 - Backhaus, Daniel
A1 - Engbert, Ralf
T1 - A dynamical scan-path model for task-dependence during scene viewing
JF - Psychological review
N2 - In real-world scene perception, human observers generate sequences of fixations to move image patches into the high-acuity center of the visual field. Models of visual attention developed over the last 25 years aim to predict two-dimensional probabilities of gaze positions for a given image via saliency maps. Recently, progress has been made on models for the generation of scan paths under the constraints of saliency as well as attentional and oculomotor restrictions. Experimental research has demonstrated that task constraints can have a strong impact on viewing behavior. Here, we propose a scan-path model for both fixation positions and fixation durations, which includes influences of task instructions and interindividual differences. Based on an eye-movement experiment with four different task conditions, we estimated model parameters for each individual observer and task condition in a fully Bayesian dynamical modeling framework, using a joint spatial-temporal likelihood approach with sequential estimation. The resulting parameter values demonstrate that model properties such as the attentional span are adjusted to task requirements. Posterior predictive checks indicate that our dynamical model can reproduce task differences in scan-path statistics across individual observers.
KW - scene viewing
KW - eye movements
KW - task dependence
KW - individual differences
KW - Bayesian inference
Y1 - 2023
U6 - https://doi.org/10.1037/rev0000379
SN - 0033-295X
SN - 1939-1471
VL - 130
IS - 3
SP - 807
EP - 840
PB - American Psychological Association
CY - Washington
ER -
TY - JOUR
A1 - Cajar, Anke
A1 - Engbert, Ralf
A1 - Laubrock, Jochen
T1 - How spatial frequencies and color drive object search in real-world scenes
BT - a new eye-movement corpus
JF - Journal of vision
N2 - When studying how people search for objects in scenes, the inhomogeneity of the visual field is often ignored. Due to physiological limitations, peripheral vision is blurred and mainly uses coarse-grained information (i.e., low spatial frequencies) for selecting saccade targets, whereas high-acuity central vision uses fine-grained information (i.e., high spatial frequencies) for analysis of details. Here we investigated how spatial frequencies and color affect object search in real-world scenes. Using gaze-contingent filters, we attenuated high or low frequencies in central or peripheral vision while viewers searched color or grayscale scenes. Results showed that peripheral filters and central high-pass filters hardly affected search accuracy, whereas accuracy dropped drastically with central low-pass filters. Peripheral filtering increased the time to localize the target by decreasing saccade amplitudes and increasing the number and duration of fixations. The use of coarse-grained information in the periphery was limited to color scenes. Central filtering increased the time to verify target identity instead, especially with low-pass filters. We conclude that peripheral vision is critical for object localization and central vision is critical for object identification. Visual guidance during peripheral object localization is dominated by low-frequency color information, whereas high-frequency information, relatively independent of color, is most important for object identification in central vision.
KW - scene viewing
KW - eye movements
KW - object search
KW - central and peripheral vision
KW - spatial frequencies
KW - color
KW - gaze-contingent displays
Y1 - 2020
U6 - https://doi.org/10.1167/jov.20.7.8
SN - 1534-7362
VL - 20
IS - 7
PB - Association for Research in Vision and Ophthalmology
CY - Rockville
ER -
TY - JOUR
A1 - Seelig, Stefan A.
A1 - Rabe, Maximilian Michael
A1 - Malem-Shinitski, Noa
A1 - Risse, Sarah
A1 - Reich, Sebastian
A1 - Engbert, Ralf
T1 - Bayesian parameter estimation for the SWIFT model of eye-movement control during reading
JF - Journal of mathematical psychology
N2 - Process-oriented theories of cognition must be evaluated against time-ordered observations. Here we present a representative example of data assimilation for the SWIFT model, a dynamical model of the control of fixation positions and fixation durations during natural reading of single sentences. First, we develop and test an approximate likelihood function of the model, which is a combination of a spatial, pseudo-marginal likelihood and a temporal likelihood obtained by probability density approximation. Second, we implement a Bayesian approach to parameter inference using an adaptive Markov chain Monte Carlo procedure. Our results indicate that model parameters can be estimated reliably for individual subjects. We conclude that approximate Bayesian inference represents a considerable step forward for computational models of eye-movement control, where modeling of individual data on the basis of process-based dynamic models has not been possible so far.
KW - dynamical models
KW - reading
KW - eye movements
KW - saccades
KW - likelihood function
KW - Bayesian inference
KW - MCMC
KW - interindividual differences
Y1 - 2020
U6 - https://doi.org/10.1016/j.jmp.2019.102313
SN - 0022-2496
SN - 1096-0880
VL - 95
PB - Elsevier
CY - San Diego
ER -
TY - GEN
A1 - Cajar, Anke
A1 - Engbert, Ralf
A1 - Laubrock, Jochen
T1 - Potsdam Eye-Movement Corpus for Scene Memorization and Search With Color and Spatial-Frequency Filtering
T2 - Zweitveröffentlichungen der Universität Potsdam : Humanwissenschaftliche Reihe
T3 - Zweitveröffentlichungen der Universität Potsdam : Humanwissenschaftliche Reihe - 788
KW - eye movements
KW - corpus dataset
KW - scene viewing
KW - object search
KW - scene memorization
KW - spatial frequencies
KW - color
KW - central and peripheral vision
Y1 - 2022
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-563184
SN - 1866-8364
SP - 1
EP - 7
PB - Universitätsverlag Potsdam
CY - Potsdam
ER -
TY - JOUR
A1 - Cajar, Anke
A1 - Engbert, Ralf
A1 - Laubrock, Jochen
T1 - Potsdam Eye-Movement Corpus for Scene Memorization and Search With Color and Spatial-Frequency Filtering
JF - Frontiers in psychology / Frontiers Research Foundation
KW - eye movements
KW - corpus dataset
KW - scene viewing
KW - object search
KW - scene memorization
KW - spatial frequencies
KW - color
KW - central and peripheral vision
Y1 - 2022
U6 - https://doi.org/10.3389/fpsyg.2022.850482
SN - 1664-1078
VL - 13
SP - 1
EP - 7
PB - Frontiers Research Foundation
CY - Lausanne, Switzerland
ER -
TY - JOUR
A1 - Schütt, Heiko Herbert
A1 - Rothkegel, Lars Oliver Martin
A1 - Trukenbrod, Hans Arne
A1 - Reich, Sebastian
A1 - Wichmann, Felix A.
A1 - Engbert, Ralf
T1 - Likelihood-based parameter estimation and comparison of dynamical cognitive models
JF - Psychological Review
N2 - Dynamical models of cognition play an increasingly important role in driving theoretical and experimental research in psychology. Parameter estimation, model analysis, and comparison of dynamical models are therefore of essential importance. In this article, we propose a maximum likelihood approach for model analysis in a fully dynamical framework that includes time-ordered experimental data. Our methods can be applied to dynamical models for the prediction of discrete behavior (e.g., movement onsets); in particular, we use a dynamical model of saccade generation in scene viewing as a case study for our approach. For this model, the likelihood function can be computed directly by numerical simulation, which enables more efficient parameter estimation, including Bayesian inference to obtain reliable estimates and corresponding credible intervals. Using hierarchical models, inference is even possible for individual observers. Furthermore, our likelihood approach can be used to compare different models. In our example, the dynamical framework is shown to outperform nondynamical statistical models. Additionally, the likelihood-based evaluation differentiates between model variants that produced indistinguishable predictions on previously used statistics. Our results indicate that the likelihood approach is a promising framework for dynamical cognitive models.
KW - likelihood
KW - model fitting
KW - dynamical model
KW - eye movements
KW - model comparison
Y1 - 2017
U6 - https://doi.org/10.1037/rev0000068
SN - 0033-295X
SN - 1939-1471
VL - 124
IS - 4
SP - 505
EP - 524
PB - American Psychological Association
CY - Washington
ER -
TY - JOUR
A1 - Schütt, Heiko Herbert
A1 - Rothkegel, Lars Oliver Martin
A1 - Trukenbrod, Hans Arne
A1 - Engbert, Ralf
A1 - Wichmann, Felix A.
T1 - Disentangling bottom-up versus top-down and low-level versus high-level influences on eye movements over time
JF - Journal of vision
N2 - Bottom-up and top-down as well as low-level and high-level factors influence where we fixate when viewing natural scenes. However, the importance of each of these factors and how they interact remains a matter of debate. Here, we disentangle these factors by analyzing their influence over time. For this purpose, we develop a saliency model that is based on the internal representation of a recent early spatial vision model to measure the low-level, bottom-up factor. To measure the influence of high-level, bottom-up features, we use a recent deep neural network-based saliency model. To account for top-down influences, we evaluate the models on two large data sets with different tasks: first, a memorization task and, second, a search task. Our results lend support to a separation of visual scene exploration into three phases: the first saccade, an initial guided exploration characterized by a gradual broadening of the fixation density, and a steady state that is reached after roughly 10 fixations. Saccade-target selection during the initial exploration and in the steady state is related to similar areas of interest, which are better predicted when including high-level features. In the search data set, fixation locations are determined predominantly by top-down processes. In contrast, the first fixation follows a different fixation density and contains a strong central fixation bias. Nonetheless, first fixations are guided strongly by image properties, and as early as 200 ms after image onset, fixations are better predicted by high-level information. We conclude that any low-level, bottom-up factors are mainly limited to the generation of the first saccade. All saccades are better explained when high-level features are considered, and later, this high-level, bottom-up control can be overruled by top-down influences.
KW - saliency
KW - fixations
KW - natural scenes
KW - visual search
KW - eye movements
Y1 - 2019
U6 - https://doi.org/10.1167/19.3.1
SN - 1534-7362
VL - 19
IS - 3
PB - Association for Research in Vision and Ophthalmology
CY - Rockville
ER -
TY - JOUR
A1 - Rothkegel, Lars Oliver Martin
A1 - Trukenbrod, Hans Arne
A1 - Schütt, Heiko Herbert
A1 - Wichmann, Felix A.
A1 - Engbert, Ralf
T1 - Temporal evolution of the central fixation bias in scene viewing
JF - Journal of vision
N2 - When watching the image of a natural scene on a computer screen, observers initially move their eyes toward the center of the image—a reliable experimental finding termed central fixation bias. This systematic tendency in eye guidance likely masks attentional selection driven by image properties and top-down cognitive processes. Here, we show that the central fixation bias can be reduced by delaying the initial saccade relative to image onset. In four scene-viewing experiments we manipulated observers' initial gaze position and delayed their first saccade by a specific time interval relative to the onset of an image. We analyzed the distance to the image center over time and show that the central fixation bias of initial fixations was significantly reduced after delayed saccade onsets. We additionally show that selection of the initial saccade target strongly depended on the first saccade latency. A previously published model of saccade generation was extended with a central activation map on the initial fixation whose influence declined with increasing saccade latency. This extension was sufficient to replicate the central fixation bias from our experiments. Our results suggest that the central fixation bias is generated by default activation as a response to the sudden image onset and that this default activation pattern decreases over time. Thus, it may often be preferable to use a modified version of the scene-viewing paradigm that decouples image onset from the start signal for scene exploration to explicitly reduce the central fixation bias.
KW - eye movements
KW - dynamic models
KW - visual scanpath
KW - visual attention
Y1 - 2017
U6 - https://doi.org/10.1167/17.13.3
SN - 1534-7362
VL - 17
SP - 1626
EP - 1638
PB - Association for Research in Vision and Ophthalmology
CY - Rockville
ER -
TY - JOUR
A1 - Ohl, Sven
A1 - Wohltat, Christian
A1 - Kliegl, Reinhold
A1 - Pollatos, Olga
A1 - Engbert, Ralf
T1 - Microsaccades Are Coupled to Heartbeat
JF - The journal of neuroscience
N2 - During visual fixation, the eye generates microsaccades and slower components of fixational eye movements that are part of the visual processing strategy in humans. Here, we show that the ongoing heartbeat is coupled to temporal rate variations in the generation of microsaccades. Using coregistration of eye recording and ECG in humans, we tested the hypothesis that microsaccade onsets are coupled to the relative phase of the R-R intervals in heartbeats. We observed significantly more microsaccades during the early phase after the R peak in the ECG. This form of coupling between heartbeat and eye movements was substantiated by the additional finding of a coupling between heart phase and motion activity in slow fixational eye movements, i.e., retinal image slip caused by physiological drift. Our findings therefore demonstrate a coupling of the oculomotor system and ongoing heartbeat, which provides further evidence for bodily influences on visuomotor functioning.
KW - eye movements
KW - heartbeat
KW - microsaccades
Y1 - 2016
U6 - https://doi.org/10.1523/JNEUROSCI.2211-15.2016
SN - 0270-6474
VL - 36
SP - 1237
EP - 1241
PB - Society for Neuroscience
CY - Washington
ER -
TY - GEN
A1 - Krügel, André
A1 - Vitu, Françoise
A1 - Engbert, Ralf
T1 - Fixation positions after skipping saccades
BT - a single space makes a large difference
T2 - Postprints der Universität Potsdam : Mathematisch-Naturwissenschaftliche Reihe
N2 - During reading, saccadic eye movements are generated to shift words into the center of the visual field for lexical processing. Recently, Krügel and Engbert (Vision Research 50:1532-1539, 2010) demonstrated that within-word fixation positions are largely shifted to the left after skipped words. However, explanations of the origin of this effect cannot be drawn from normal reading data alone. Here we show that the large effect of skipped words on the distribution of within-word fixation positions is primarily based on rather subtle differences in the low-level visual information acquired before saccades. Using arrangements of "x" letter strings, we reproduced the effect of skipped character strings in a highly controlled single-saccade task. Our results demonstrate that the effect of skipped words in reading is the signature of a general visuomotor phenomenon. Moreover, our findings extend beyond the scope of the widely accepted range-error model, which posits that within-word fixation positions in reading depend solely on the distances of target words. We expect that our results will provide critical boundary conditions for the development of visuomotor models of saccade planning during reading.
T3 - Zweitveröffentlichungen der Universität Potsdam : Mathematisch-Naturwissenschaftliche Reihe - 856
KW - eye movements
KW - reading
KW - motor control
KW - skipping
Y1 - 2020
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-432887
SN - 1866-8372
IS - 856
SP - 1556
EP - 1561
ER -