TY - JOUR
A1 - Rothkegel, Lars Oliver Martin
A1 - Trukenbrod, Hans Arne
A1 - Schütt, Heiko Herbert
A1 - Wichmann, Felix A.
A1 - Engbert, Ralf
T1 - Temporal evolution of the central fixation bias in scene viewing
JF - Journal of vision
N2 - When watching the image of a natural scene on a computer screen, observers initially move their eyes toward the center of the image—a reliable experimental finding termed central fixation bias. This systematic tendency in eye guidance likely masks attentional selection driven by image properties and top-down cognitive processes. Here, we show that the central fixation bias can be reduced by delaying the initial saccade relative to image onset. In four scene-viewing experiments we manipulated observers' initial gaze position and delayed their first saccade by a specific time interval relative to the onset of an image. We analyzed the distance to image center over time and show that the central fixation bias of initial fixations was significantly reduced after delayed saccade onsets. We additionally show that selection of the initial saccade target strongly depended on the first saccade latency. A previously published model of saccade generation was extended with a central activation map on the initial fixation whose influence declined with increasing saccade latency. This extension was sufficient to replicate the central fixation bias from our experiments. Our results suggest that the central fixation bias is generated by default activation as a response to the sudden image onset and that this default activation pattern decreases over time. Thus, it may often be preferable to use a modified version of the scene viewing paradigm that decouples image onset from the start signal for scene exploration to explicitly reduce the central fixation bias.
KW - eye movements
KW - dynamic models
KW - visual scanpath
KW - visual attention
Y1 - 2017
U6 - https://doi.org/10.1167/17.13.3
SN - 1534-7362
VL - 17
SP - 1626
EP - 1638
PB - Association for Research in Vision and Ophthalmology
CY - Rockville
ER -
TY - JOUR
A1 - Backhaus, Daniel
A1 - Engbert, Ralf
A1 - Rothkegel, Lars Oliver Martin
A1 - Trukenbrod, Hans Arne
T1 - Task-dependence in scene perception: Head unrestrained viewing using mobile eye-tracking
JF - Journal of vision
N2 - Real-world scene perception is typically studied in the laboratory using static picture viewing with restrained head position. Consequently, the transfer of results obtained in this paradigm to real-world scenarios has been questioned. The advancement of mobile eye-trackers and the progress in image processing, however, permit a more natural experimental setup that, at the same time, maintains the high experimental control from the standard laboratory setting. We investigated eye movements while participants were standing in front of a projector screen and explored images under four specific task instructions. Eye movements were recorded with a mobile eye-tracking device and raw gaze data were transformed from head-centered into image-centered coordinates. We observed differences between tasks in temporal and spatial eye-movement parameters and found that the bias to fixate images near the center differed between tasks. Our results demonstrate that current mobile eye-tracking technology and a highly controlled design support the study of fine-scaled task dependencies in an experimental setting that permits more natural viewing behavior than the static picture viewing paradigm.
KW - scene viewing
KW - real-world scenarios
KW - mobile eye-tracking
KW - task influence
KW - central fixation bias
Y1 - 2020
U6 - https://doi.org/10.1167/jov.20.5.3
SN - 1534-7362
VL - 20
IS - 5
SP - 1
EP - 21
PB - Association for Research in Vision and Ophthalmology
CY - Rockville
ER -
TY - JOUR
A1 - Rothkegel, Lars Oliver Martin
A1 - Schütt, Heiko Herbert
A1 - Trukenbrod, Hans Arne
A1 - Wichmann, Felix A.
A1 - Engbert, Ralf
T1 - Searchers adjust their eye-movement dynamics to target characteristics in natural scenes
JF - Scientific reports
N2 - When searching a target in a natural scene, it has been shown that both the target’s visual properties and similarity to the background influence whether and how fast humans are able to find it. So far, it was unclear whether searchers adjust the dynamics of their eye movements (e.g., fixation durations, saccade amplitudes) to the target they search for. In our experiment, participants searched natural scenes for six artificial targets with different spatial frequency content throughout eight consecutive sessions. High-spatial frequency targets led to smaller saccade amplitudes and shorter fixation durations than low-spatial frequency targets if target identity was known. If a saccade was programmed in the same direction as the previous saccade, fixation durations and successive saccade amplitudes were not influenced by target type. Visual saliency and empirical fixation density at the endpoints of saccades which maintain direction were comparatively low, indicating that these saccades were less selective. Our results suggest that searchers adjust their eye movement dynamics to the search target efficiently, since previous research has shown that low-spatial frequencies are visible farther into the periphery than high-spatial frequencies. We interpret the saccade direction specificity of our effects as an underlying separation into a default scanning mechanism and a selective, target-dependent mechanism.
Y1 - 2019
U6 - https://doi.org/10.1038/s41598-018-37548-w
SN - 2045-2322
VL - 9
PB - Nature Publ. Group
CY - London
ER -
TY - JOUR
A1 - Schwetlick, Lisa
A1 - Rothkegel, Lars Oliver Martin
A1 - Trukenbrod, Hans Arne
A1 - Engbert, Ralf
T1 - Modeling the effects of perisaccadic attention on gaze statistics during scene viewing
JF - Communications biology
N2 - Lisa Schwetlick et al. present a computational model linking visual scan path generation in scene viewing to physiological and experimental work on perisaccadic covert attention, the act of attending to an object visually without obviously moving the eyes toward it. They find that integrating covert attention into predictive models of visual scan paths greatly improves the model's agreement with experimental data.
How we perceive a visual scene depends critically on the selection of gaze positions. For this selection process, visual attention is known to play a key role in two ways. First, image features attract visual attention, a fact that is captured well by time-independent fixation models. Second, millisecond-level attentional dynamics around the time of saccades drive our gaze from one position to the next. These two related research areas on attention are typically perceived as separate, both theoretically and experimentally. Here we link the two research areas by demonstrating that perisaccadic attentional dynamics improve predictions of scan path statistics. In a mathematical model, we integrated perisaccadic covert attention with dynamic scan path generation. Our model reproduces saccade amplitude distributions, angular statistics, intersaccadic turning angles, and their impact on fixation durations as well as inter-individual differences using Bayesian inference. Therefore, our results lend support to the relevance of perisaccadic attention to gaze statistics.
KW - Computational models
KW - Human behaviour
KW - Visual system
Y1 - 2020
U6 - https://doi.org/10.1038/s42003-020-01429-8
SN - 2399-3642
VL - 3
IS - 1
PB - Springer Nature
CY - London
ER -
TY - JOUR
A1 - Schütt, Heiko Herbert
A1 - Rothkegel, Lars Oliver Martin
A1 - Trukenbrod, Hans Arne
A1 - Reich, Sebastian
A1 - Wichmann, Felix A.
A1 - Engbert, Ralf
T1 - Likelihood-based parameter estimation and comparison of dynamical cognitive models
JF - Psychological Review
N2 - Dynamical models of cognition play an increasingly important role in driving theoretical and experimental research in psychology. Therefore, parameter estimation, model analysis and comparison of dynamical models are of essential importance. In this article, we propose a maximum likelihood approach for model analysis in a fully dynamical framework that includes time-ordered experimental data.
Our methods can be applied to dynamical models for the prediction of discrete behavior (e.g., movement onsets); in particular, we use a dynamical model of saccade generation in scene viewing as a case study for our approach. For this model, the likelihood function can be computed directly by numerical simulation, which enables more efficient parameter estimation, including Bayesian inference to obtain reliable estimates and corresponding credible intervals. Using hierarchical models, inference is even possible for individual observers. Furthermore, our likelihood approach can be used to compare different models. In our example, the dynamical framework is shown to outperform nondynamical statistical models. Additionally, the likelihood-based evaluation differentiates model variants, which produced indistinguishable predictions on hitherto used statistics. Our results indicate that the likelihood approach is a promising framework for dynamical cognitive models.
KW - likelihood
KW - model fitting
KW - dynamical model
KW - eye movements
KW - model comparison
Y1 - 2017
U6 - https://doi.org/10.1037/rev0000068
SN - 0033-295X
SN - 1939-1471
VL - 124
IS - 4
SP - 505
EP - 524
PB - American Psychological Association
CY - Washington
ER -
TY - JOUR
A1 - Rothkegel, Lars Oliver Martin
A1 - Trukenbrod, Hans Arne
A1 - Schütt, Heiko Herbert
A1 - Wichmann, Felix A.
A1 - Engbert, Ralf
T1 - Influence of initial fixation position in scene viewing
JF - Vision research: an international journal for functional aspects of vision
KW - Visual scanpath
KW - Visual attention
KW - Inhibition of return
KW - Eye movements
KW - Saliency
Y1 - 2016
U6 - https://doi.org/10.1016/j.visres.2016.09.012
SN - 0042-6989
SN - 1878-5646
VL - 129
SP - 33
EP - 49
PB - Elsevier
CY - Oxford
ER -
TY - JOUR
A1 - Schütt, Heiko Herbert
A1 - Rothkegel, Lars Oliver Martin
A1 - Trukenbrod, Hans Arne
A1 - Engbert, Ralf
A1 - Wichmann, Felix A.
T1 - Disentangling bottom-up versus top-down and low-level versus high-level influences on eye movements over time
JF - Journal of vision
N2 - Bottom-up and top-down as well as low-level and high-level factors influence where we fixate when viewing natural scenes. However, the importance of each of these factors and how they interact remains a matter of debate. Here, we disentangle these factors by analyzing their influence over time. For this purpose, we develop a saliency model that is based on the internal representation of a recent early spatial vision model to measure the low-level, bottom-up factor. To measure the influence of high-level, bottom-up features, we use a recent deep neural network-based saliency model. To account for top-down influences, we evaluate the models on two large data sets with different tasks: first, a memorization task and, second, a search task. Our results lend support to a separation of visual scene exploration into three phases: the first saccade, an initial guided exploration characterized by a gradual broadening of the fixation density, and a steady state that is reached after roughly 10 fixations. Saccade-target selection during the initial exploration and in the steady state is related to similar areas of interest, which are better predicted when including high-level features. In the search data set, fixation locations are determined predominantly by top-down processes. In contrast, the first fixation follows a different fixation density and contains a strong central fixation bias. Nonetheless, first fixations are guided strongly by image properties, and as early as 200 ms after image onset, fixations are better predicted by high-level information. We conclude that any low-level, bottom-up factors are mainly limited to the generation of the first saccade. All saccades are better explained when high-level features are considered, and later, this high-level, bottom-up control can be overruled by top-down influences.
KW - saliency
KW - fixations
KW - natural scenes
KW - visual search
KW - eye movements
Y1 - 2019
U6 - https://doi.org/10.1167/19.3.1
SN - 1534-7362
VL - 19
IS - 3
PB - Association for Research in Vision and Ophthalmology
CY - Rockville
ER -