TY - THES A1 - Rothkegel, Lars Oliver Martin T1 - Human scanpaths in natural scene viewing and natural scene search T1 - Menschliche Blickspuren beim Betrachten und Durchsuchen natürlicher Szenen BT - the role of systematic eye-movement tendencies N2 - Understanding how humans move their eyes is an important part of understanding the functioning of the visual system. Analyzing eye movements from observations of natural scenes on a computer screen is a step toward understanding human visual behavior in the real world. When analyzing eye-movement data from scene-viewing experiments, the important questions are where (fixation locations), how long (fixation durations) and when (ordering of fixations) participants fixate on an image. By answering these questions, computational models can be developed which predict human scanpaths. Models serve as a tool to understand the underlying cognitive processes while observing an image, especially the allocation of visual attention. The goal of this thesis is to provide new contributions to characterize and model human scanpaths on natural scenes. The results from this thesis will help to understand and describe certain systematic eye-movement tendencies, which are mostly independent of the image. One eye-movement tendency I focus on throughout this thesis is the tendency to fixate more in the center of an image than on the outer parts, called the central fixation bias. Another tendency, which I will investigate thoroughly, is the characteristic distribution of angles between successive eye movements. The results serve to evaluate and improve a previously published model of scanpath generation from our laboratory, the SceneWalk model. Overall, six experiments were conducted for this thesis which led to the following five core results: i) A spatial inhibition of return can be found in scene-viewing data. This means that locations which have already been fixated are afterwards avoided for a certain time interval (Chapter 2). 
ii) The initial fixation position when observing an image has a long-lasting influence of up to five seconds on further scanpath progression (Chapter 2 & 3). iii) The often described central fixation bias on images depends strongly on the duration of the initial fixation. Long-lasting initial fixations lead to a weaker central fixation bias than short fixations (Chapter 2 & 3). iv) Human observers adjust their basic eye-movement parameters, like fixation durations and saccade amplitudes, to the visual properties of a target they look for in visual search (Chapter 4). v) The angle between two adjacent saccades is an indicator for the selectivity of the upcoming saccade target (Chapter 4). All results emphasize the importance of systematic behavioral eye-movement tendencies and dynamic aspects of human scanpaths in scene viewing. N2 - Die Art und Weise, wie wir unsere Augen bewegen, ist ein bedeutender Aspekt des visuellen Systems. Die Analyse von Augenbewegungen beim Betrachten natürlicher Szenen auf einem Bildschirm soll helfen, natürliches Blickverhalten zu verstehen. Durch Beantwortung der Fragen wohin (Fixationsposition), wie lange (Fixationsdauern) und wann (Reihenfolge von Fixationen) Versuchspersonen auf einem Bild fixieren, lassen sich computationale Modelle entwickeln, welche Blickspuren auf natürlichen Bildern vorhersagen. Modelle sind ein Werkzeug, um zugrunde liegende kognitive Prozesse, insbesondere die Zuweisung visueller Aufmerksamkeit, während der Betrachtung von Bildern zu verstehen. Das Ziel der hier vorliegenden Arbeit ist es, neue Beiträge zur Modellierung und Charakterisierung menschlicher Blickspuren auf natürlichen Szenen zu liefern. Speziell systematische Blicksteuerungstendenzen, welche größtenteils unabhängig vom betrachteten Bild sind, sollen durch die vorliegenden Studien besser verstanden und beschrieben werden. 
Eine dieser Tendenzen, welche ich gezielt untersuche, ist die Neigung von Versuchspersonen, die Mitte eines Bildes häufiger als äußere Bildregionen zu fixieren. Außerdem wird die charakteristische Verteilung der Winkel zwischen zwei aufeinanderfolgenden Sakkaden systematisch untersucht. Die Ergebnisse dienen der Evaluation und Verbesserung des SceneWalk Modells für Blicksteuerung aus unserer Arbeitsgruppe. Insgesamt wurden sechs Experimente durchgeführt, welche zu den folgenden fünf Kernbefunden führten: i) Ein örtlicher inhibition of return kann in Blickbewegungsdaten von Szenenbetrachtungsexperimenten gefunden werden. Das bedeutet, fixierte Positionen werden nach der Fixation für einen bestimmten Zeitraum gemieden (Kapitel 2). ii) Die Startposition der Betrachtung eines Bildes hat einen langanhaltenden Einfluss von bis zu fünf Sekunden auf die nachfolgende Blickspur (Kapitel 2 & 3). iii) Die viel beschriebene zentrale Fixationstendenz auf Bildern hängt davon ab, wie lange die erste Fixation dauert. Lange initiale Fixationen führen zu deutlich geringerer zentraler Fixationstendenz als kurze Fixationen (Kapitel 2 & 3). iv) Menschliche Betrachter passen Fixationsdauern und Sakkadenamplituden an die visuellen Eigenschaften eines Zielreizes in visueller Suche an (Kapitel 4). v) Der Winkel zwischen zwei Sakkaden ist ein Indikator dafür, wie selektiv das Ziel der zweiten Sakkade ist (Kapitel 4). Alle Ergebnisse betonen die Wichtigkeit von systematischem Blickbewegungsverhalten und dynamischen Aspekten menschlicher Blickspuren beim Betrachten von natürlichen Szenen. 
KW - eye movements KW - scene viewing KW - computational modeling KW - scanpaths KW - visual attention KW - Augenbewegungen KW - Szenenbetrachtung KW - Computationale Modellierung KW - Blickspuren KW - Visuelle Aufmerksamkeit Y1 - 2018 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-420005 ER - TY - JOUR A1 - Schwetlick, Lisa A1 - Rothkegel, Lars Oliver Martin A1 - Trukenbrod, Hans Arne A1 - Engbert, Ralf T1 - Modeling the effects of perisaccadic attention on gaze statistics during scene viewing JF - Communications biology N2 - Lisa Schwetlick et al. present a computational model linking visual scan path generation in scene viewing to physiological and experimental work on perisaccadic covert attention, the act of attending to an object visually without obviously moving the eyes toward it. They find that integrating covert attention into predictive models of visual scan paths greatly improves the model's agreement with experimental data.
How we perceive a visual scene depends critically on the selection of gaze positions. For this selection process, visual attention is known to play a key role in two ways. First, image features attract visual attention, a fact that is captured well by time-independent fixation models. Second, millisecond-level attentional dynamics around the time of saccades drive our gaze from one position to the next. These two related research areas on attention are typically perceived as separate, both theoretically and experimentally. Here we link the two research areas by demonstrating that perisaccadic attentional dynamics improve predictions on scan path statistics. In a mathematical model, we integrated perisaccadic covert attention with dynamic scan path generation. Our model reproduces saccade amplitude distributions, angular statistics, intersaccadic turning angles, and their impact on fixation durations as well as inter-individual differences using Bayesian inference. Therefore, our results lend support to the relevance of perisaccadic attention to gaze statistics. KW - Computational models KW - Human behaviour KW - Visual system Y1 - 2020 U6 - https://doi.org/10.1038/s42003-020-01429-8 SN - 2399-3642 VL - 3 IS - 1 PB - Springer Nature CY - London ER - TY - JOUR A1 - Rothkegel, Lars Oliver Martin A1 - Trukenbrod, Hans Arne A1 - Schütt, Heiko Herbert A1 - Wichmann, Felix A. A1 - Engbert, Ralf T1 - Temporal evolution of the central fixation bias in scene viewing JF - Journal of vision N2 - When watching the image of a natural scene on a computer screen, observers initially move their eyes toward the center of the image—a reliable experimental finding termed central fixation bias. This systematic tendency in eye guidance likely masks attentional selection driven by image properties and top-down cognitive processes. Here, we show that the central fixation bias can be reduced by delaying the initial saccade relative to image onset. 
In four scene-viewing experiments we manipulated observers' initial gaze position and delayed their first saccade by a specific time interval relative to the onset of an image. We analyzed the distance to image center over time and show that the central fixation bias of initial fixations was significantly reduced after delayed saccade onsets. We additionally show that selection of the initial saccade target strongly depended on the first saccade latency. A previously published model of saccade generation was extended with a central activation map on the initial fixation whose influence declined with increasing saccade latency. This extension was sufficient to replicate the central fixation bias from our experiments. Our results suggest that the central fixation bias is generated by default activation as a response to the sudden image onset and that this default activation pattern decreases over time. Thus, it may often be preferable to use a modified version of the scene viewing paradigm that decouples image onset from the start signal for scene exploration to explicitly reduce the central fixation bias. KW - eye movements KW - dynamic models KW - visual scanpath KW - visual attention Y1 - 2017 U6 - https://doi.org/10.1167/17.13.3 SN - 1534-7362 VL - 17 SP - 1626 EP - 1638 PB - Association for Research in Vision and Ophthalmology CY - Rockville ER - TY - JOUR A1 - Rothkegel, Lars Oliver Martin A1 - Trukenbrod, Hans Arne A1 - Schütt, Heiko Herbert A1 - Wichmann, Felix A. A1 - Engbert, Ralf T1 - Influence of initial fixation position in scene viewing JF - Vision research : an international journal for functional aspects of vision. 
KW - Visual scanpath KW - Visual attention KW - Inhibition of return KW - Eye movements KW - Saliency Y1 - 2016 U6 - https://doi.org/10.1016/j.visres.2016.09.012 SN - 0042-6989 SN - 1878-5646 VL - 129 SP - 33 EP - 49 PB - Elsevier CY - Oxford ER - TY - JOUR A1 - Krügel, André A1 - Rothkegel, Lars A1 - Engbert, Ralf T1 - No exception from Bayes’ rule BT - the presence and absence of the range effect for saccades explained JF - Journal of vision N2 - In an influential theoretical model, human sensorimotor control is achieved by a Bayesian decision process, which combines noisy sensory information and learned prior knowledge. A ubiquitous signature of prior knowledge and Bayesian integration in human perception and motor behavior is the frequently observed bias toward an average stimulus magnitude (i.e., a central-tendency bias, range effect, regression-to-the-mean effect). However, in the domain of eye movements, there is a recent controversy about the fundamental existence of a range effect in the saccadic system. Here we argue that the problem of the existence of a range effect is linked to the availability of prior knowledge for saccade control. We present results from two prosaccade experiments that both employ an informative prior structure (i.e., a nonuniform Gaussian distribution of saccade target distances). Our results demonstrate the validity of Bayesian integration in saccade control, which generates a range effect in saccades. According to Bayesian integration principles, the saccadic range effect depends on the availability of prior knowledge and varies in size as a function of the reliability of the prior and the sensory likelihood. 
KW - saccades KW - saccadic accuracy KW - range effect KW - Bayesian sensorimotor integration KW - central-tendency bias Y1 - 2020 U6 - https://doi.org/10.1167/jov.20.7.15 SN - 1534-7362 VL - 20 IS - 7 PB - ARVO CY - Rockville ER - TY - JOUR A1 - Schütt, Heiko Herbert A1 - Rothkegel, Lars Oliver Martin A1 - Trukenbrod, Hans Arne A1 - Engbert, Ralf A1 - Wichmann, Felix A. T1 - Disentangling bottom-up versus top-down and low-level versus high-level influences on eye movements over time JF - Journal of vision N2 - Bottom-up and top-down as well as low-level and high-level factors influence where we fixate when viewing natural scenes. However, the importance of each of these factors and how they interact remains a matter of debate. Here, we disentangle these factors by analyzing their influence over time. For this purpose, we develop a saliency model that is based on the internal representation of a recent early spatial vision model to measure the low-level, bottom-up factor. To measure the influence of high-level, bottom-up features, we use a recent deep neural network-based saliency model. To account for top-down influences, we evaluate the models on two large data sets with different tasks: first, a memorization task and, second, a search task. Our results lend support to a separation of visual scene exploration into three phases: the first saccade, an initial guided exploration characterized by a gradual broadening of the fixation density, and a steady state that is reached after roughly 10 fixations. Saccade-target selection during the initial exploration and in the steady state is related to similar areas of interest, which are better predicted when including high-level features. In the search data set, fixation locations are determined predominantly by top-down processes. In contrast, the first fixation follows a different fixation density and contains a strong central fixation bias. 
Nonetheless, first fixations are guided strongly by image properties, and as early as 200 ms after image onset, fixations are better predicted by high-level information. We conclude that any low-level, bottom-up factors are mainly limited to the generation of the first saccade. All saccades are better explained when high-level features are considered, and later, this high-level, bottom-up control can be overruled by top-down influences. KW - saliency KW - fixations KW - natural scenes KW - visual search KW - eye movements Y1 - 2019 U6 - https://doi.org/10.1167/19.3.1 SN - 1534-7362 VL - 19 IS - 3 PB - Association for Research in Vision and Ophthalmology CY - Rockville ER - TY - JOUR A1 - Rothkegel, Lars Oliver Martin A1 - Schütt, Heiko Herbert A1 - Trukenbrod, Hans Arne A1 - Wichmann, Felix A. A1 - Engbert, Ralf T1 - Searchers adjust their eye-movement dynamics to target characteristics in natural scenes JF - Scientific reports N2 - When searching for a target in a natural scene, it has been shown that both the target’s visual properties and similarity to the background influence whether and how fast humans are able to find it. So far, it was unclear whether searchers adjust the dynamics of their eye movements (e.g., fixation durations, saccade amplitudes) to the target they search for. In our experiment, participants searched natural scenes for six artificial targets with different spatial frequency content throughout eight consecutive sessions. High-spatial frequency targets led to smaller saccade amplitudes and shorter fixation durations than low-spatial frequency targets if target identity was known. If a saccade was programmed in the same direction as the previous saccade, fixation durations and successive saccade amplitudes were not influenced by target type. Visual saliency and empirical fixation density at the endpoints of saccades which maintain direction were comparatively low, indicating that these saccades were less selective. 
Our results suggest that searchers adjust their eye-movement dynamics to the search target efficiently, since previous research has shown that low spatial frequencies are visible farther into the periphery than high spatial frequencies. We interpret the saccade direction specificity of our effects as an underlying separation into a default scanning mechanism and a selective, target-dependent mechanism. Y1 - 2019 U6 - https://doi.org/10.1038/s41598-018-37548-w SN - 2045-2322 VL - 9 PB - Nature Publ. Group CY - London ER - TY - GEN A1 - Schütt, Heiko Herbert A1 - Rothkegel, Lars Oliver Martin A1 - Trukenbrod, Hans Arne A1 - Engbert, Ralf A1 - Wichmann, Felix A. T1 - Predicting fixation densities over time from early visual processing T2 - Perception N2 - Bottom-up saliency is often cited as a factor driving the choice of fixation locations of human observers, based on the (partial) success of saliency models in predicting fixation densities in free viewing. However, these observations are only weak evidence for a causal role of bottom-up saliency in natural viewing behaviour. To test bottom-up saliency more directly, we analyse the performance of a number of saliency models, including our own saliency model based on our recently published model of early visual processing (Schütt & Wichmann, 2017, JoV), as well as the theoretical limits for predictions over time. On free-viewing data our model performs better than classical bottom-up saliency models, but worse than the current deep-learning-based saliency models incorporating higher-level information like knowledge about objects. However, on search data all saliency models perform worse than the optimal image-independent prediction. We observe that the fixation density in free viewing is not stationary over time, but changes over the course of a trial. It starts with a pronounced central fixation bias on the first chosen fixation, which is nonetheless influenced by image content. 
Starting with the 2nd to 3rd fixation, the fixation density is already well predicted by later densities, but more concentrated. From there the fixation distribution broadens until it reaches a stationary distribution around the 10th fixation. Taken together these observations argue against bottom-up saliency as a mechanistic explanation for eye movement control after the initial orienting reaction in the first one to two saccades, although we confirm the predictive value of early visual representations for fixation locations. The fixation distribution is, first, not well described by any stationary density, second, is predicted better when including object information and, third, is badly predicted by any saliency model in a search task. Y1 - 2019 SN - 0301-0066 SN - 1468-4233 VL - 48 SP - 64 EP - 65 PB - Sage Publ. CY - London ER - TY - JOUR A1 - Schütt, Heiko Herbert A1 - Rothkegel, Lars Oliver Martin A1 - Trukenbrod, Hans Arne A1 - Reich, Sebastian A1 - Wichmann, Felix A. A1 - Engbert, Ralf T1 - Likelihood-based parameter estimation and comparison of dynamical cognitive models JF - Psychological Review N2 - Dynamical models of cognition play an increasingly important role in driving theoretical and experimental research in psychology. Therefore, parameter estimation, model analysis and comparison of dynamical models are of essential importance. In this article, we propose a maximum likelihood approach for model analysis in a fully dynamical framework that includes time-ordered experimental data. Our methods can be applied to dynamical models for the prediction of discrete behavior (e.g., movement onsets); in particular, we use a dynamical model of saccade generation in scene viewing as a case study for our approach. For this model, the likelihood function can be computed directly by numerical simulation, which enables more efficient parameter estimation including Bayesian inference to obtain reliable estimates and corresponding credible intervals. 
Using hierarchical models, inference is even possible for individual observers. Furthermore, our likelihood approach can be used to compare different models. In our example, the dynamical framework is shown to outperform nondynamical statistical models. Additionally, the likelihood-based evaluation differentiates model variants which produced indistinguishable predictions on hitherto used statistics. Our results indicate that the likelihood approach is a promising framework for dynamical cognitive models. KW - likelihood KW - model fitting KW - dynamical model KW - eye movements KW - model comparison Y1 - 2017 U6 - https://doi.org/10.1037/rev0000068 SN - 0033-295X SN - 1939-1471 VL - 124 IS - 4 SP - 505 EP - 524 PB - American Psychological Association CY - Washington ER - TY - GEN A1 - Backhaus, Daniel A1 - Engbert, Ralf A1 - Rothkegel, Lars Oliver Martin A1 - Trukenbrod, Hans Arne T1 - Task-dependence in scene perception: Head unrestrained viewing using mobile eye-tracking T2 - Zweitveröffentlichungen der Universität Potsdam : Humanwissenschaftliche Reihe N2 - Real-world scene perception is typically studied in the laboratory using static picture viewing with restrained head position. Consequently, the transfer of results obtained in this paradigm to real-world scenarios has been questioned. The advancement of mobile eye-trackers and the progress in image processing, however, permit a more natural experimental setup that, at the same time, maintains the high experimental control from the standard laboratory setting. We investigated eye movements while participants were standing in front of a projector screen and explored images under four specific task instructions. Eye movements were recorded with a mobile eye-tracking device and raw gaze data were transformed from head-centered into image-centered coordinates. 
We observed differences between tasks in temporal and spatial eye-movement parameters and found that the bias to fixate images near the center differed between tasks. Our results demonstrate that current mobile eye-tracking technology and a highly controlled design support the study of fine-scaled task dependencies in an experimental setting that permits more natural viewing behavior than the static picture viewing paradigm. T3 - Zweitveröffentlichungen der Universität Potsdam : Humanwissenschaftliche Reihe - 871 KW - scene viewing KW - real-world scenarios KW - mobile eye-tracking KW - task influence KW - central fixation bias Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-519124 SN - 1866-8364 IS - 5 ER - TY - JOUR A1 - Backhaus, Daniel A1 - Engbert, Ralf A1 - Rothkegel, Lars Oliver Martin A1 - Trukenbrod, Hans Arne T1 - Task-dependence in scene perception: Head unrestrained viewing using mobile eye-tracking JF - Journal of vision N2 - Real-world scene perception is typically studied in the laboratory using static picture viewing with restrained head position. Consequently, the transfer of results obtained in this paradigm to real-world scenarios has been questioned. The advancement of mobile eye-trackers and the progress in image processing, however, permit a more natural experimental setup that, at the same time, maintains the high experimental control from the standard laboratory setting. We investigated eye movements while participants were standing in front of a projector screen and explored images under four specific task instructions. Eye movements were recorded with a mobile eye-tracking device and raw gaze data were transformed from head-centered into image-centered coordinates. We observed differences between tasks in temporal and spatial eye-movement parameters and found that the bias to fixate images near the center differed between tasks. 
Our results demonstrate that current mobile eye-tracking technology and a highly controlled design support the study of fine-scaled task dependencies in an experimental setting that permits more natural viewing behavior than the static picture viewing paradigm. KW - scene viewing KW - real-world scenarios KW - mobile eye-tracking KW - task influence KW - central fixation bias Y1 - 2020 U6 - https://doi.org/10.1167/jov.20.5.3 SN - 1534-7362 VL - 20 IS - 5 SP - 1 EP - 21 PB - Association for Research in Vision and Ophthalmology CY - Rockville ER -