TY - JOUR
A1 - Schütt, Heiko Herbert
A1 - Rothkegel, Lars Oliver Martin
A1 - Trukenbrod, Hans Arne
A1 - Reich, Sebastian
A1 - Wichmann, Felix A.
A1 - Engbert, Ralf
T1 - Likelihood-based parameter estimation and comparison of dynamical cognitive models
JF - Psychological Review
N2 - Dynamical models of cognition play an increasingly important role in driving theoretical and experimental research in psychology. Therefore, parameter estimation, model analysis, and comparison of dynamical models are of essential importance. In this article, we propose a maximum likelihood approach for model analysis in a fully dynamical framework that includes time-ordered experimental data. Our methods can be applied to dynamical models for the prediction of discrete behavior (e.g., movement onsets); in particular, we use a dynamical model of saccade generation in scene viewing as a case study for our approach. For this model, the likelihood function can be computed directly by numerical simulation, which enables more efficient parameter estimation, including Bayesian inference, to obtain reliable estimates and corresponding credible intervals. Using hierarchical models, inference is even possible for individual observers. Furthermore, our likelihood approach can be used to compare different models. In our example, the dynamical framework is shown to outperform nondynamical statistical models. Additionally, the likelihood-based evaluation differentiates model variants that produced indistinguishable predictions on hitherto used statistics. Our results indicate that the likelihood approach is a promising framework for dynamical cognitive models.
KW - likelihood
KW - model fitting
KW - dynamical model
KW - eye movements
KW - model comparison
Y1 - 2017
U6 - https://doi.org/10.1037/rev0000068
SN - 0033-295X
SN - 1939-1471
VL - 124
IS - 4
SP - 505
EP - 524
PB - American Psychological Association
CY - Washington
ER -

TY - JOUR
A1 - Schütt, Heiko Herbert
A1 - Rothkegel, Lars Oliver Martin
A1 - Trukenbrod, Hans Arne
A1 - Engbert, Ralf
A1 - Wichmann, Felix A.
T1 - Disentangling bottom-up versus top-down and low-level versus high-level influences on eye movements over time
JF - Journal of Vision
N2 - Bottom-up and top-down as well as low-level and high-level factors influence where we fixate when viewing natural scenes. However, the importance of each of these factors and how they interact remains a matter of debate. Here, we disentangle these factors by analyzing their influence over time. For this purpose, we develop a saliency model based on the internal representation of a recent early spatial vision model to measure the low-level, bottom-up factor. To measure the influence of high-level, bottom-up features, we use a recent deep neural network-based saliency model. To account for top-down influences, we evaluate the models on two large data sets with different tasks: first, a memorization task and, second, a search task. Our results lend support to a separation of visual scene exploration into three phases: the first saccade, an initial guided exploration characterized by a gradual broadening of the fixation density, and a steady state that is reached after roughly 10 fixations. Saccade-target selection during the initial exploration and in the steady state is related to similar areas of interest, which are better predicted when including high-level features. In the search data set, fixation locations are determined predominantly by top-down processes.
In contrast, the first fixation follows a different fixation density and shows a strong central fixation bias. Nonetheless, first fixations are guided strongly by image properties, and as early as 200 ms after image onset, fixations are better predicted by high-level information. We conclude that low-level, bottom-up factors are mainly limited to the generation of the first saccade. All saccades are better explained when high-level features are considered, and later, this high-level, bottom-up control can be overruled by top-down influences.
KW - saliency
KW - fixations
KW - natural scenes
KW - visual search
KW - eye movements
Y1 - 2019
U6 - https://doi.org/10.1167/19.3.1
SN - 1534-7362
VL - 19
IS - 3
PB - Association for Research in Vision and Ophthalmology
CY - Rockville
ER -

TY - JOUR
A1 - Rothkegel, Lars Oliver Martin
A1 - Trukenbrod, Hans Arne
A1 - Schütt, Heiko Herbert
A1 - Wichmann, Felix A.
A1 - Engbert, Ralf
T1 - Temporal evolution of the central fixation bias in scene viewing
JF - Journal of Vision
N2 - When watching the image of a natural scene on a computer screen, observers initially move their eyes toward the center of the image, a reliable experimental finding termed the central fixation bias. This systematic tendency in eye guidance likely masks attentional selection driven by image properties and top-down cognitive processes. Here, we show that the central fixation bias can be reduced by delaying the initial saccade relative to image onset. In four scene-viewing experiments, we manipulated observers' initial gaze position and delayed their first saccade by a specific time interval relative to the onset of an image. We analyzed the distance to image center over time and show that the central fixation bias of initial fixations was significantly reduced after delayed saccade onsets. We additionally show that selection of the initial saccade target strongly depended on the first saccade latency. A previously published model of saccade generation was extended with a central activation map at the initial fixation whose influence declined with increasing saccade latency. This extension was sufficient to replicate the central fixation bias from our experiments. Our results suggest that the central fixation bias is generated by default activation as a response to the sudden image onset and that this default activation pattern decreases over time. Thus, it may often be preferable to use a modified version of the scene-viewing paradigm that decouples image onset from the start signal for scene exploration to explicitly reduce the central fixation bias.
KW - eye movements
KW - dynamic models
KW - visual scanpath
KW - visual attention
Y1 - 2017
U6 - https://doi.org/10.1167/17.13.3
SN - 1534-7362
VL - 17
SP - 1626
EP - 1638
PB - Association for Research in Vision and Ophthalmology
CY - Rockville
ER -

TY - JOUR
A1 - Engbert, Ralf
A1 - Trukenbrod, Hans Arne
A1 - Barthelme, Simon
A1 - Wichmann, Felix A.
T1 - Spatial statistics and attentional dynamics in scene viewing
JF - Journal of Vision
N2 - In humans and in foveated animals, visual acuity is highly concentrated at the center of gaze, so that choosing where to look next is an important example of online, rapid decision-making. Computational neuroscientists have developed biologically inspired models of visual attention, termed saliency maps, which successfully predict where people fixate on average.
Using point process theory for spatial statistics, we show that scanpaths nevertheless contain important statistical structure, such as spatial clustering on top of distributions of gaze positions. Here, we develop a dynamical model of saccadic selection that accurately predicts the distribution of gaze positions as well as spatial clustering along individual scanpaths. Our model relies on, first, activation dynamics via spatially limited (foveated) access to saliency information and, second, a leaky memory process controlling the re-inspection of target regions. This theoretical framework models a form of context-dependent decision-making, linking neural dynamics of attention to behavioral gaze data.
KW - scene perception
KW - eye movements
KW - attention
KW - saccades
KW - modeling
KW - spatial statistics
Y1 - 2015
U6 - https://doi.org/10.1167/15.1.14
SN - 1534-7362
VL - 15
IS - 1
PB - Association for Research in Vision and Ophthalmology
CY - Rockville
ER -

TY - JOUR
A1 - Trukenbrod, Hans Arne
A1 - Engbert, Ralf
T1 - Eye movements in a sequential scanning task - evidence for distributed processing
JF - Journal of Vision
N2 - Current models of eye movement control are derived from theories assuming serial processing of single items or from theories based on parallel processing of multiple items at a time. This issue has persisted because most investigated paradigms generated data compatible with both serial and parallel models. Here, we study eye movements in a sequential scanning task, in which stimulus n indicates the position of the next stimulus n + 1. We investigate whether eye movements are controlled by sequential attention shifts when the task requires serial order of processing. Our measures of distributed processing, in the form of parafoveal-on-foveal effects, long-range modulations of target selection, and skipping saccades, provide evidence against models strictly based on serial attention shifts. We conclude that our results lend support to parallel processing as a strategy for eye movement control.
KW - eye movements
KW - distributed processing
KW - sequential attention shifts
KW - parafoveal-on-foveal effects
KW - skipping costs/benefits
Y1 - 2012
U6 - https://doi.org/10.1167/12.1.5
SN - 1534-7362
VL - 12
IS - 1
PB - Association for Research in Vision and Ophthalmology
CY - Rockville
ER -

TY - JOUR
A1 - Barthelme, Simon
A1 - Trukenbrod, Hans Arne
A1 - Engbert, Ralf
A1 - Wichmann, Felix A.
T1 - Modeling fixation locations using spatial point processes
JF - Journal of Vision
N2 - Whenever eye movements are measured, a central part of the analysis has to do with where subjects fixate and why they fixated where they fixated. To a first approximation, a set of fixations can be viewed as a set of points in space; this implies that fixations are spatial data and that the analysis of fixation locations can be beneficially thought of as a spatial statistics problem. We argue that thinking of fixation locations as arising from point processes is a very fruitful framework for eye-movement data, helping to turn qualitative questions into quantitative ones. We provide a tutorial introduction to some of the main ideas of the field of spatial statistics, focusing especially on spatial Poisson processes. We show how point processes help relate image properties to fixation locations. In particular, we show how point processes naturally express the idea that image features' predictability for fixations may vary from one image to another.
We review other methods of analysis used in the literature, show how they relate to point process theory, and argue that thinking in terms of point processes substantially extends the range of analyses that can be performed and clarifies their interpretation.
KW - eye movements
KW - fixation locations
KW - saliency
KW - modeling
KW - point process
KW - spatial statistics
Y1 - 2013
U6 - https://doi.org/10.1167/13.12.1
SN - 1534-7362
VL - 13
IS - 12
PB - Association for Research in Vision and Ophthalmology
CY - Rockville
ER -