The launch-site effect, a systematic variation of within-word landing position as a function of launch-site distance, is among the most important oculomotor phenomena in reading. Here we show that the launch-site effect is strongly modulated in word skipping, a finding that is inconsistent with the view that the launch-site effect is caused by a saccadic range error. We observe that distributions of landing positions for skipping saccades show an increased leftward shift compared with non-skipping saccades at equal launch-site distances. Using an improved algorithm for the estimation of mislocated fixations, we demonstrate the reliability of these results.
The aim of this work was to examine the processing of pronominal anaphora by children with attention deficit hyperactivity disorder (ADHD) or dyslexia. The sample consisted of 75 German-speaking children, who read two 80-word texts containing pronominal anaphora. The eye movements of all participants were recorded and, to ensure that they were reading attentively, two activities testing reading comprehension were administered. The analysis of eye movements, specifically of fixations, indicates that children with these disorders have difficulty processing pronominal anaphora, dyslexic children in particular.
Background
Body image distortion is highly prevalent among overweight individuals. Whilst there is evidence that body-dissatisfied women and those suffering from disordered eating show a negative attentional bias towards their own unattractive body parts and others’ attractive body parts, little is known about visual attention patterns in the area of obesity and with respect to males. Since eating disorders and obesity share common features in terms of distorted body image and body dissatisfaction, the aim of this study was to examine whether overweight men and women show a similar attentional bias.
Methods/Design
We analyzed eye movements in 30 overweight individuals (18 females) and 28 normal-weight individuals (16 females) with respect to the participants’ own pictures as well as gender- and BMI-matched control pictures (front and back view). Additionally, we assessed body image and disordered eating using validated questionnaires.
Discussion
The overweight sample rated their own body as less attractive and showed a more disturbed body image. Contrary to our assumptions, they focused significantly longer on attractive compared to unattractive regions of both their own and the control body. For one’s own body, this was more pronounced for women. A higher weight status and more frequent body checking predicted attentional bias towards attractive body parts. We found that overweight adults exhibit an unexpected and stable pattern of selective attention, with a distinctive focus on their own attractive body regions despite higher levels of body dissatisfaction. This positive attentional bias may either be an indicator of a more pronounced pattern of attentional avoidance or a self-enhancing strategy. Further research is warranted to clarify these results.
Even during visual fixation of a stationary target, our eyes perform rather erratic miniature movements, which represent a random walk. These "fixational" eye movements counteract perceptual fading, a consequence of fast adaptation of the retinal receptor systems to constant input. The most important contribution to fixational eye movements is produced by microsaccades; a specific function of microsaccades, however, has been identified only recently. Here we show that the occurrence of microsaccades is correlated with low retinal image slip approximately 200 ms before microsaccade onset. This result suggests that microsaccades are triggered dynamically, in contrast to the current view that microsaccades are randomly distributed in time, characterized by a rate of occurrence of 1 to 2 per second. As a consequence of the dynamic triggering mechanism, individual microsaccade rates can be predicted from the fractal dimension of the eyes' trajectories. Finally, we propose a minimal computational model for the dynamic triggering of microsaccades.
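The dynamic-triggering idea described above can be sketched in a few lines: simulate fixational drift as a two-dimensional random walk, measure retinal image slip as the net displacement over a trailing window, and emit a microsaccade whenever the slip falls below a threshold. This is an illustrative toy under stated assumptions, not the published model; the function name and all parameter values (window length, slip threshold, drift noise) are placeholders.

```python
import math
import random


def simulate_fixation(n_steps=2000, window=200, slip_threshold=0.05,
                      drift_sd=0.01, seed=1):
    """Toy slip-triggered microsaccade model (illustrative parameters).

    Fixational drift is a 2D Gaussian random walk; when the net
    displacement over the trailing `window` samples ("retinal image
    slip") drops below `slip_threshold`, a microsaccade is triggered
    and modeled crudely as a jump halfway back toward fixation center.
    """
    random.seed(seed)
    x = y = 0.0
    trace = [(x, y)]
    triggers = []
    for t in range(1, n_steps):
        x += random.gauss(0.0, drift_sd)
        y += random.gauss(0.0, drift_sd)
        trace.append((x, y))
        if t >= window:
            # retinal image slip: net displacement over the trailing window
            x0, y0 = trace[t - window]
            slip = math.hypot(x - x0, y - y0)
            if slip < slip_threshold:
                triggers.append(t)  # low slip -> trigger a microsaccade
                x *= 0.5            # corrective jump toward center
                y *= 0.5
    return trace, triggers
```

After a triggered jump, the displacement over the trailing window is large again, so triggers naturally space themselves out, qualitatively matching the idea that microsaccades respond to episodes of low image slip rather than occurring at random times.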
Microsaccades are miniature eye movements produced involuntarily during visual fixation of stationary objects. Since their first description more than 40 years ago, the role of microsaccades in vision has been controversial. In this issue, Martinez-Conde and colleagues present a solution to this long-standing research problem connecting basic oculomotor function to visual perception, by showing that microsaccades may control peripheral vision during visual fixation by inducing flips in bistable peripheral percepts in head-unrestrained viewing. Their study provides new insight into the functional connectivity between oculomotor function and visual perception.
Using a serial search paradigm, we observed several effects of within-object fixation position on the spatial and temporal control of eye movements: the preferred viewing location, the launch-site effect, the optimal viewing position, and the inverted optimal viewing position effect on fixation duration. While these effects were first identified by eye-movement studies in reading, our approach permits an analysis of the functional relationships between the effects in a different paradigm. Our results demonstrate that fixation position is an important predictor of the subsequent saccade, influencing both fixation duration and the selection of the next saccade target.
Lisa Schwetlick et al. present a computational model linking visual scan path generation in scene viewing to physiological and experimental work on perisaccadic covert attention, the act of attending to an object visually without overtly moving the eyes toward it. They find that integrating covert attention into predictive models of visual scan paths greatly improves the model's agreement with experimental data.

How we perceive a visual scene depends critically on the selection of gaze positions. In this selection process, visual attention is known to play a key role in two ways. First, image features attract visual attention, a fact that is captured well by time-independent fixation models. Second, millisecond-level attentional dynamics around the time of a saccade drive our gaze from one position to the next. These two related research areas on attention are typically treated as separate, both theoretically and experimentally. Here we link the two by demonstrating that perisaccadic attentional dynamics improve predictions of scan path statistics. In a mathematical model, we integrated perisaccadic covert attention with dynamic scan path generation. Our model reproduces saccade amplitude distributions, angular statistics, intersaccadic turning angles, and their impact on fixation durations, as well as inter-individual differences, using Bayesian inference. Our results therefore lend support to the relevance of perisaccadic attention to gaze statistics.
Saccades move objects of interest into the center of the visual field for high-acuity visual analysis. White, Stritzke, and Gegenfurtner (Current Biology, 18, 124–128, 2008) have shown that saccadic latencies in the context of a structured background are much shorter than those with an unstructured background at equal levels of visibility. This effect has been explained by possible preactivation of the saccadic circuitry whenever a structured background acts as a mask for potential saccade targets. Here, we show that background textures modulate rates of microsaccades during visual fixation. First, after a display change, structured backgrounds induce a stronger decrease of microsaccade rates than do uniform backgrounds. Second, we demonstrate that the occurrence of a microsaccade in a critical time window can delay a subsequent saccadic response. Taken together, our findings suggest that microsaccades contribute to the saccadic facilitation effect, due to a modulation of microsaccade rates by properties of the background.
Hulleman & Olivers' (H&O's) model introduces variation of the functional visual field (FVF) for explaining visual search behavior. Our research shows how the FVF can be studied using gaze-contingent displays and how FVF variation can be implemented in models of gaze control. Contrary to H&O, we believe that fixation duration is an important factor when modeling visual search behavior.
Eye-movement control during scene viewing can be represented as a series of individual decisions about where and when to move the eyes. While substantial behavioral and computational research has been devoted to investigating the placement of fixations in scenes, relatively little is known about the mechanisms that control fixation durations. Here, we propose a computational model (CRISP) that accounts for saccade timing and programming and thus for variations in fixation durations in scene viewing. First, timing signals are modeled as continuous-time random walks. Second, difficulties at the level of visual and cognitive processing can inhibit and thus modulate saccade timing. Inhibition generates moment-by-moment changes in the random walk's transition rate and processing-related saccade cancellation. Third, saccade programming is completed in 2 stages: an initial, labile stage that is subject to cancellation and a subsequent, nonlabile stage. Several simulation studies tested the model's adequacy and generality. An initial simulation study explored the role of cognitive factors in scene viewing by examining how fixation durations differed under different viewing task instructions. Additional simulations investigated the degree to which fixation durations were under direct moment-to-moment control of the current visual scene. The present work further supports the conclusion that fixation durations, to a certain degree, reflect perceptual and cognitive activity in scene viewing. Computational model simulations contribute to an understanding of the underlying processes of gaze control.
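The three CRISP components named above (a random-walk saccade timer, inhibition that modulates its transition rate, and two-stage saccade programming) can be made concrete with a minimal sketch. This is not the published implementation; `crisp_sketch` and every parameter value here are placeholders chosen only to illustrate the mechanism.

```python
import random


def crisp_sketch(n_saccades=5, n_states=10, base_rate=0.1,
                 inhibition=1.0, labile_ms=150.0, nonlabile_ms=50.0,
                 seed=0):
    """Illustrative CRISP-style fixation-duration generator.

    The saccade timer is a discrete random walk through `n_states`
    states with exponentially distributed waiting times; processing
    difficulty scales the transition rate (inhibition < 1 slows the
    walk). Reaching the final state starts a two-stage saccade
    program: a labile stage (cancellable in the full model) followed
    by a nonlabile stage. Returns simulated fixation durations in ms.
    """
    random.seed(seed)
    fixation_durations = []
    t = 0.0
    for _ in range(n_saccades):
        start = t
        state = 0
        while state < n_states:
            rate = base_rate * inhibition      # inhibited transition rate
            t += random.expovariate(rate)      # waiting time per step
            state += 1
        # two-stage programming: labile then nonlabile stage
        t += labile_ms + nonlabile_ms
        fixation_durations.append(t - start)
    return fixation_durations
```

Lowering `inhibition` (e.g. for a difficult scene region) stretches the timer's waiting times and thus lengthens simulated fixations, which is the qualitative behavior the model uses to couple processing difficulty to fixation duration; saccade cancellation during the labile stage is omitted here for brevity.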
In this article, we revisit the mindless reading paradigm from the perspective of computational modeling. In the standard version of the paradigm, participants read sentences both in their normal version and in a transformed (or mindless) version in which each letter is replaced with the letter z. z-String scanning shares the oculomotor requirements of reading but none of the higher-level lexical and semantic processes. Here we use the z-string scanning task to validate the SWIFT model of saccade generation [Engbert, R., Nuthmann, A., Richter, E., & Kliegl, R. (2005). SWIFT: A dynamical model of saccade generation during reading. Psychological Review, 112(4), 777-813] as an example of an advanced theory of eye-movement control in reading. We test the central assumption of spatially distributed processing across an attentional gradient proposed by the SWIFT model. Key experimental results, such as prolonged average fixation durations in z-string scanning compared to normal reading and the existence of a string-length effect on fixation durations and probabilities, were reproduced by the model, which lends support to the model's assumptions on visual processing. Moreover, simulation results for patterns of regressive saccades in z-string scanning confirm SWIFT's concept of activation-field dynamics for the selection of saccade targets.
When watching the image of a natural scene on a computer screen, observers initially move their eyes toward the center of the image—a reliable experimental finding termed central fixation bias. This systematic tendency in eye guidance likely masks attentional selection driven by image properties and top-down cognitive processes. Here, we show that the central fixation bias can be reduced by delaying the initial saccade relative to image onset. In four scene-viewing experiments we manipulated observers' initial gaze position and delayed their first saccade by a specific time interval relative to the onset of an image. We analyzed the distance to image center over time and show that the central fixation bias of initial fixations was significantly reduced after delayed saccade onsets. We additionally show that selection of the initial saccade target strongly depended on the first saccade latency. A previously published model of saccade generation was extended with a central activation map on the initial fixation whose influence declined with increasing saccade latency. This extension was sufficient to replicate the central fixation bias from our experiments. Our results suggest that the central fixation bias is generated by default activation as a response to the sudden image onset and that this default activation pattern decreases over time. Thus, it may often be preferable to use a modified version of the scene viewing paradigm that decouples image onset from the start signal for scene exploration to explicitly reduce the central fixation bias.
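The model extension proposed above (a central activation map whose influence declines with first-saccade latency) can be sketched as a latency-weighted blend of two priority maps. The function, the list-of-lists grid representation, and the exponential decay with time constant `decay_tau` are illustrative assumptions, not the authors' published parameterization.

```python
import math


def target_priority(image_salience, latency_ms, center,
                    decay_tau=300.0, sigma=2.0):
    """Blend an image-based salience map with a central activation map.

    The central map is a Gaussian bump at `center` whose mixing weight
    decays exponentially with first-saccade latency, so early saccades
    are dominated by central activation (central fixation bias) while
    late saccades follow image salience. All parameters illustrative.
    """
    w = math.exp(-latency_ms / decay_tau)  # central weight fades with time
    cx, cy = center
    out = []
    for i, row in enumerate(image_salience):
        out_row = []
        for j, s in enumerate(row):
            d2 = (i - cx) ** 2 + (j - cy) ** 2
            central = math.exp(-d2 / (2.0 * sigma ** 2))
            out_row.append((1.0 - w) * s + w * central)
        out.append(out_row)
    return out
```

With zero latency the blended map peaks at the image center regardless of salience, while for long latencies the central component has decayed away and the peak moves to the most salient image location, mirroring the reduction of the central fixation bias after delayed saccade onsets reported above.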