In eye-movement control during reading, advanced process-oriented models have been developed to reproduce behavioral data. So far, model complexity and large numbers of model parameters prevented rigorous statistical inference and modeling of interindividual differences. Here we propose a Bayesian approach to both problems for one representative computational model of sentence reading (SWIFT; Engbert et al., Psychological Review, 112, 2005, pp. 777-813). We used experimental data from 36 subjects who read text in a normal layout and in one of four manipulated layouts (e.g., mirrored or scrambled letters). The SWIFT model was fitted to subjects and experimental conditions individually to investigate between-subject variability. Based on posterior distributions of model parameters, fixation probabilities and durations are reliably recovered from simulated data and reproduced for withheld empirical data, at both the experimental condition and subject levels. A subsequent statistical analysis of model parameters across reading conditions generates model-driven explanations for observable effects between conditions.
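A minimal sketch of the kind of subject-level Bayesian parameter inference described above, in Python. The Gamma model of fixation durations, the prior, and all numerical values are placeholder assumptions for illustration; the paper's actual SWIFT likelihood is far more complex.

```python
# Minimal Metropolis-Hastings sketch for subject-level parameter inference.
# Placeholder model (NOT the SWIFT likelihood): fixation durations are
# Gamma-distributed with an unknown rate parameter `beta`.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
durations = rng.gamma(shape=9.0, scale=25.0, size=200)   # synthetic data (ms)

def log_post(beta):
    """Log-posterior of the Gamma rate parameter (placeholder model)."""
    if beta <= 0:
        return -np.inf
    log_prior = stats.norm.logpdf(np.log(beta), 0.0, 2.0)          # weak prior
    log_lik = stats.gamma.logpdf(durations, a=9.0, scale=1.0 / beta).sum()
    return log_prior + log_lik

beta, lp, samples = 0.1, log_post(0.1), []
for _ in range(5000):
    prop = beta * np.exp(0.1 * rng.standard_normal())    # log-scale random walk
    lp_prop = log_post(prop)
    # Hastings ratio includes the Jacobian of the log-scale proposal
    if np.log(rng.random()) < lp_prop - lp + np.log(prop) - np.log(beta):
        beta, lp = prop, lp_prop
    samples.append(beta)

print("posterior mean of the rate:", np.mean(samples[1000:]))  # near 0.04
```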
A dynamical model of saccade generation in reading based on spatially distributed lexical processing
(2002)
We explore the interaction between oculomotor control and language comprehension on the sentence level using two well-tested computational accounts of parsing difficulty. Previous work (Boston, Hale, Vasishth, & Kliegl, 2011) has shown that surprisal (Hale, 2001; Levy, 2008) and cue-based memory retrieval (Lewis & Vasishth, 2005) are significant and complementary predictors of reading time in an eyetracking corpus. It remains an open question how the sentence processor interacts with oculomotor control. Using a simple linking hypothesis proposed in Reichle, Warren, and McConnell (2009), we integrated both measures with the eye movement model EMMA (Salvucci, 2001) inside the cognitive architecture ACT-R (Anderson et al., 2004). We built a reading model that could initiate short Time Out regressions (Mitchell, Shen, Green, & Hodgson, 2008) that compensate for slow postlexical processing. This simple interaction enabled the model to predict the re-reading of words based on parsing difficulty. The model was evaluated in different configurations on the prediction of frequency effects on the Potsdam Sentence Corpus. The extension of EMMA with postlexical processing improved its predictions and reproduced re-reading rates and durations with a reasonable fit to the data. This demonstration, based on simple and independently motivated assumptions, serves as a foundational step toward a precise investigation of the interaction between high-level language processing and eye movement control.
Author summary

Switching between local and global attention is a general strategy in human information processing. We investigate whether this strategy is a viable approach to model sequences of fixations generated by a human observer in a free viewing task with natural scenes. Variants of the basic model are used to predict the experimental data based on Bayesian inference. Results indicate a high predictive power for both aggregated data and individual differences across observers. The combination of a novel model with state-of-the-art Bayesian methods lends support to our two-state model using local and global internal attention states for controlling eye movements.

Understanding the decision process underlying gaze control is an important question in cognitive neuroscience with applications in diverse fields ranging from psychology to computer vision. The decision for choosing an upcoming saccade target can be framed as a selection process between two states: Should the observer further inspect the information near the current gaze position (local attention) or continue with exploration of other patches of the given scene (global attention)? Here we propose and investigate a mathematical model motivated by switching between these two attentional states during scene viewing. The model is derived from a minimal set of assumptions that generates realistic eye movement behavior. We implemented a Bayesian approach for model parameter inference based on the model's likelihood function. In order to simplify the inference, we applied data augmentation methods that allowed the use of conjugate priors and the construction of an efficient Gibbs sampler. This approach turned out to be numerically efficient and permitted fitting interindividual differences in saccade statistics. Thus, the main contribution of our modeling approach is twofold: First, we propose a new model for saccade generation in scene viewing. Second, we demonstrate the use of novel methods from Bayesian inference in the field of scan path modeling.
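The following is a minimal sketch of the two-state idea: a latent attention state switches between "local" and "global", and saccade amplitudes are drawn from state-dependent distributions. The switching probabilities and amplitude scales are invented for illustration, not the paper's fitted parameters, and the Gibbs sampler is not shown.

```python
# Illustrative two-state (local vs. global) scan-path generator.
import numpy as np

rng = np.random.default_rng(0)
p_stay = {"local": 0.7, "global": 0.4}     # P(remain in current state)
amp_scale = {"local": 0.5, "global": 4.0}  # mean saccade amplitude (deg)

state, pos, path = "local", np.zeros(2), []
for _ in range(50):
    if rng.random() > p_stay[state]:                    # state switch
        state = "global" if state == "local" else "local"
    amp = rng.exponential(amp_scale[state])             # state-dependent amplitude
    angle = rng.uniform(0.0, 2.0 * np.pi)               # isotropic direction
    pos = pos + amp * np.array([np.cos(angle), np.sin(angle)])
    path.append((state, amp, pos.copy()))

n_local = sum(s == "local" for s, _, _ in path)
print(f"{n_local} of {len(path)} fixations were in the local state")
```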
Eye-movement experiments suggest that the perceptual span during reading is larger than the fixated word, asymmetric around the fixation position, and shrinks in size contingent on the foveal processing load. We used the SWIFT model of eye-movement control during reading to test these hypotheses and their implications under the assumption of graded parallel processing of all words inside the perceptual span. Specifically, we simulated reading in the boundary paradigm and analysed the effects of denying the model valid preview of a parafoveal word n + 2, located two words to the right of fixation. Optimizing the model parameters for the valid preview condition only, we obtained span parameters with remarkably realistic estimates conforming to the empirical findings on the size of the perceptual span. More importantly, the SWIFT model generated parafoveal processing up to word n + 2 without fitting the model to such preview effects. Our results suggest that asymmetry and dynamic modulation are plausible properties of the perceptual span in a parallel word-processing model such as SWIFT. Moreover, they seem to guide the flexible distribution of processing resources during reading between foveal and parafoveal words.
When we fixate a stationary target, our eyes generate miniature (or fixational) eye movements involuntarily. These fixational eye movements are classified as slow components (physiological drift, tremor) and microsaccades, which represent rapid, small-amplitude movements. Here we propose an integrated mathematical model for the generation of slow fixational eye movements and microsaccades. The model is based on the concept of self-avoiding random walks in a potential, a process driven by a self-generated activation field. The self-avoiding walk generates persistent movements on a short timescale, whereas, on a longer timescale, the potential produces antipersistent motions that keep the eye close to an intended fixation position. We introduce microsaccades as fast movements triggered by critical activation values. As a consequence, both slow movements and microsaccades follow the same law of motion; i.e., movements are driven by the self-generated activation field. Thus, the model contributes a unified explanation of why it has been a long-standing problem to separate slow movements and microsaccades with respect to their motion-generating principles. We conclude that the concept of a self-avoiding random walk captures fundamental properties of fixational eye movements and provides a coherent theoretical framework for two physiologically distinct movement types.
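A minimal grid-based sketch of the self-avoiding walk concept described above: visited sites accumulate activation, a quadratic potential confines the walk to the fixation center, and a microsaccade is modeled as a jump triggered when activation at the current site exceeds a critical value. Grid size, decay rate, and threshold are illustrative assumptions, not the paper's parameters.

```python
# Self-avoiding random walk in a potential, per the stated concept.
import numpy as np

rng = np.random.default_rng(2)
N, decay, critical = 25, 0.99, 5.0
act = np.zeros((N, N))                      # self-generated activation field
yy, xx = np.mgrid[0:N, 0:N]
potential = ((xx - N // 2) ** 2 + (yy - N // 2) ** 2) / N  # keeps gaze central
pos = np.array([N // 2, N // 2])

for t in range(2000):
    act *= decay                            # activation slowly decays
    act[tuple(pos)] += 1.0                  # activation builds at gaze position
    if act[tuple(pos)] > critical:          # microsaccade: jump to global minimum
        cost = act + potential
        pos = np.array(np.unravel_index(np.argmin(cost), cost.shape))
        continue
    # slow drift: step to the neighbor with the lowest activation + potential
    steps = [pos + d for d in ([0, 1], [0, -1], [1, 0], [-1, 0])]
    steps = [p for p in steps if 0 <= p[0] < N and 0 <= p[1] < N]
    pos = min(steps, key=lambda p: act[tuple(p)] + potential[tuple(p)])

print("final position:", pos, "peak activation:", act.max().round(2))
```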
An iterative algorithm for the estimation of the distribution of mislocated fixations during reading
(2007)
Background
Body image distortion is highly prevalent among overweight individuals. Whilst there is evidence that body-dissatisfied women and those suffering from disordered eating show a negative attentional bias towards their own unattractive body parts and others' attractive body parts, little is known about visual attention patterns in the area of obesity and with respect to males. Since eating disorders and obesity share common features in terms of distorted body image and body dissatisfaction, the aim of this study was to examine whether overweight men and women show a similar attentional bias.
Methods/Design
We analyzed eye movements in 30 overweight individuals (18 females) and 28 normal-weight individuals (16 females) with respect to the participants' own pictures as well as gender- and BMI-matched control pictures (front and back view). Additionally, we assessed body image and disordered eating using validated questionnaires.
Discussion
The overweight sample rated their own body as less attractive and showed a more disturbed body image. Contrary to our assumptions, they focused significantly longer on attractive compared to unattractive regions of both their own and the control body. For one's own body, this was more pronounced for women. A higher weight status and more frequent body checking predicted attentional bias towards attractive body parts. We found that overweight adults exhibit an unexpected and stable pattern of selective attention, with a distinctive focus on their own attractive body regions despite higher levels of body dissatisfaction. This positive attentional bias may either be an indicator of a more pronounced pattern of attentional avoidance or a self-enhancing strategy. Further research is warranted to clarify these results.
Process-oriented theories of cognition must be evaluated against time-ordered observations. Here we present a representative example for data assimilation of the SWIFT model, a dynamical model of the control of fixation positions and fixation durations during natural reading of single sentences. First, we develop and test an approximate likelihood function of the model, which is a combination of a spatial, pseudo-marginal likelihood and a temporal likelihood obtained by probability density approximation. Second, we implement a Bayesian approach to parameter inference using an adaptive Markov chain Monte Carlo procedure. Our results indicate that model parameters can be estimated reliably for individual subjects. We conclude that approximate Bayesian inference represents a considerable step forward for computational models of eye-movement control, where modeling of individual data on the basis of process-based dynamic models has not been possible so far.
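To illustrate the probability density approximation step, here is a hedged Python sketch: the likelihood of observed fixation durations is approximated by a kernel density estimate built from model-simulated durations. The Gamma simulator stands in for SWIFT and is purely a placeholder.

```python
# Probability density approximation (PDA) sketch: simulation-based likelihood.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)

def simulate_durations(rate, n=5000):
    # placeholder generative model of fixation durations (ms), NOT SWIFT
    return rng.gamma(shape=9.0, scale=1.0 / rate, size=n)

def approx_loglik(rate, observed):
    kde = gaussian_kde(simulate_durations(rate))         # density of simulations
    return np.log(np.maximum(kde(observed), 1e-300)).sum()

observed = rng.gamma(shape=9.0, scale=25.0, size=150)    # true rate = 0.04
for rate in (0.02, 0.04, 0.08):
    print(f"rate={rate}: approx log-likelihood = {approx_loglik(rate, observed):.1f}")
```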
Sudden visual changes attract our gaze, and related eye movement control requires attentional resources. Attention is a limited resource that is also involved in working memory, for instance in memory encoding. As a consequence, theory suggests that gaze capture could impair the buildup of memory representations due to an attentional resource bottleneck. Here we developed an experimental design combining a serial memory task (verbal or spatial) and concurrent gaze capture by a distractor (of high or low similarity to the relevant item). The results cannot be explained by a general resource bottleneck. Specifically, we observed that capture by the low-similarity distractor resulted in delayed and reduced saccade rates to relevant items in both memory tasks. However, while spatial memory performance decreased, verbal memory remained unaffected. In contrast, the high-similarity distractor led to capture and memory loss for both tasks. Our results lend support to the view that gaze capture leads to activation of irrelevant representations in working memory that compete for selection at recall. Activation of irrelevant spatial representations distracts spatial recall, whereas activation of irrelevant verbal features impairs verbal memory performance.
During reading, our eyes perform complicated sequences of fixations on words. Stochastic models of eye movement control suggest that this seemingly erratic behaviour can be attributed to noise in the oculomotor system and random fluctuations in lexical processing. Here, we present a qualitative analysis of a recently published dynamical model [Engbert et al., 2002] and propose that deterministic nonlinear control accounts for much of the observed complexity of eye movement patterns during reading. Based on a symbolic coding technique, we analyze robust statistical features of simulated fixation sequences.
During visual fixation on a target object, our eyes are not motionless but generate slow fixational eye movements and microsaccades. Effects of visual attention have been observed in both microsaccade rates and spatial directions. Experimental results, however, range from early (<200 ms) to late (>600 ms) effects combined with cue-congruent as well as cue-incongruent microsaccade directions. On the basis of well-characterized neural circuitry in the superior colliculus, we construct a dynamical model of neural activation that is modulated by perceptual input and visual attention. Our results show that additive integration of low-level perceptual responses and visual attention can explain microsaccade rate and direction effects across a range of visual cueing tasks. These findings suggest that the patterns of microsaccade direction observed in experiments are compatible with a single dynamical mechanism. The basic principles of the model are highly relevant to the general problem of integration of low-level perception and top-down selective attention.
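A toy sketch of the stated principle of additive integration: an activation variable integrates a fast cue-evoked transient and a slower, opposite-signed component (standing in for controlled inhibition of cue-directed saccades); threshold crossings trigger microsaccades whose direction follows the sign of the combined lateral input. All constants are invented, and the sketch omits the superior colliculus map structure of the actual model.

```python
import numpy as np

rng = np.random.default_rng(4)
dt, T, cue_on = 1.0, 1500.0, 300.0                      # ms
t = np.arange(0.0, T, dt)
fast = np.where(t >= cue_on, np.exp(-(t - cue_on) / 60.0), 0.0)   # cue transient
slow = -0.5 * np.where(t >= cue_on, 1.0 - np.exp(-(t - cue_on) / 250.0), 0.0)
lateral = fast + slow                                   # net drive toward the cue

a, events = 0.0, []
for i in range(t.size):
    drive = 0.004 * (1.0 + fast[i])                     # rate modulated by input
    a += dt * (-0.001 * a + drive) + 0.02 * rng.standard_normal()
    if a > 1.0:                                         # threshold: microsaccade
        events.append((t[i], "toward cue" if lateral[i] > 0 else "away"))
        a = 0.0                                         # reset after trigger

for time, direction in events:
    print(f"{time:6.0f} ms  {direction}")
```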
Control of fixation duration during scene viewing by interaction of foveal and peripheral processing
(2013)
Processing in our visual system is functionally segregated, with the fovea specialized in processing fine detail (high spatial frequencies) for object identification, and the periphery in processing coarse information (low frequencies) for spatial orienting and saccade target selection. Here we investigate the consequences of this functional segregation for the control of fixation durations during scene viewing. Using gaze-contingent displays, we applied high-pass or low-pass filters to either the central or the peripheral visual field and compared eye-movement patterns with an unfiltered control condition. In contrast with predictions from functional segregation, fixation durations were unaffected when the critical information for vision was strongly attenuated (foveal low-pass and peripheral high-pass filtering); fixation durations increased, however, when useful information was left mostly intact by the filter (foveal high-pass and peripheral low-pass filtering). These patterns of results are difficult to explain under the assumption that fixation durations are controlled by foveal processing difficulty. As an alternative explanation, we developed the hypothesis that the interaction of foveal and peripheral processing controls fixation duration. To investigate the viability of this explanation, we implemented a computational model with two compartments, approximating spatial aspects of processing by foveal and peripheral activations that change according to a small set of dynamical rules. The model reproduced distributions of fixation durations from all experimental conditions by varying a few parameters that were affected by specific filtering conditions.
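A hedged sketch of the two-compartment idea: foveal and peripheral activations rise at rates set by how much useful information a filter condition leaves intact, and a saccade is released once both compartments reach threshold. The coupling rule and all rates are our own guesses, not the paper's dynamical rules.

```python
import numpy as np

rng = np.random.default_rng(5)

def fixation_duration(r_fov, r_per, dt=1.0):
    """One fixation (ms): terminate when both activations reach threshold."""
    a_fov = a_per = 0.0
    t = 0.0
    while a_fov < 1.0 or a_per < 1.0:
        a_fov += dt * r_fov + 0.01 * rng.standard_normal()
        a_per += dt * r_per + 0.01 * rng.standard_normal()
        t += dt
    return t

# higher rate = more useful information left intact by the filter (our guess)
conditions = {
    "control":             (0.006, 0.006),
    "foveal low-pass":     (0.006, 0.006),  # critical detail removed: no slowing
    "foveal high-pass":    (0.003, 0.006),  # useful info degraded: slowing
    "peripheral low-pass": (0.006, 0.003),
}
for name, (rf, rp) in conditions.items():
    d = [fixation_duration(rf, rp) for _ in range(300)]
    print(f"{name:20s} mean duration = {np.mean(d):5.0f} ms")
```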
Coupling of attention and saccades when viewing scenes with central and peripheral degradation
(2016)
Degrading real-world scenes in the central or the peripheral visual field yields a characteristic pattern: Mean saccade amplitudes increase with central and decrease with peripheral degradation. Does this pattern reflect corresponding modulations of selective attention? If so, the observed saccade amplitude pattern should reflect more focused attention in the central region with peripheral degradation and an attentional bias toward the periphery with central degradation. To investigate this hypothesis, we measured the detectability of peripheral (Experiment 1) or central targets (Experiment 2) during scene viewing when low or high spatial frequencies were gaze-contingently filtered in the central or the peripheral visual field. Relative to an unfiltered control condition, peripheral filtering induced a decrease of the detection probability for peripheral but not for central targets (tunnel vision). Central filtering decreased the detectability of central but not of peripheral targets. Additional post hoc analyses are compatible with the interpretation that saccade amplitudes and direction are computed in partial independence. Our experimental results indicate that task-induced modulations of saccade amplitudes reflect attentional modulations.
Eye-movement control during scene viewing can be represented as a series of individual decisions about where and when to move the eyes. While substantial behavioral and computational research has been devoted to investigating the placement of fixations in scenes, relatively little is known about the mechanisms that control fixation durations. Here, we propose a computational model (CRISP) that accounts for saccade timing and programming and thus for variations in fixation durations in scene viewing. First, timing signals are modeled as continuous-time random walks. Second, difficulties at the level of visual and cognitive processing can inhibit and thus modulate saccade timing. Inhibition generates moment-by-moment changes in the random walk's transition rate and processing-related saccade cancellation. Third, saccade programming is completed in 2 stages: an initial, labile stage that is subject to cancellation and a subsequent, nonlabile stage. Several simulation studies tested the model's adequacy and generality. An initial simulation study explored the role of cognitive factors in scene viewing by examining how fixation durations differed under different viewing task instructions. Additional simulations investigated the degree to which fixation durations were under direct moment-to-moment control of the current visual scene. The present work further supports the conclusion that fixation durations, to a certain degree, reflect perceptual and cognitive activity in scene viewing. Computational model simulations contribute to an understanding of the underlying processes of gaze control.
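A compact sketch of the CRISP-style timing mechanism: saccade-timer completions follow a gamma renewal process (a random walk over discrete states), each completion starts a labile saccade program that a subsequent completion can cancel, and a surviving labile stage is followed by a non-labile stage and saccade execution. Rates and stage durations are illustrative, not fitted CRISP parameters.

```python
import numpy as np

rng = np.random.default_rng(6)

def fixation_duration(rate=0.1, n_states=12, labile=60.0, nonlabile=40.0):
    """One fixation (ms); lower `rate` mimics inhibition by processing difficulty."""
    c = rng.gamma(shape=n_states, scale=1.0 / rate)       # first timer completion
    while True:
        nxt = c + rng.gamma(shape=n_states, scale=1.0 / rate)
        if nxt - c > labile:                  # labile stage survived: execute
            return c + labile + nonlabile
        c = nxt                               # labile program cancelled, restarted

for rate, label in [(0.1, "baseline"), (0.06, "inhibited")]:
    d = [fixation_duration(rate=rate) for _ in range(2000)]
    print(f"{label:10s} mean fixation duration = {np.mean(d):5.0f} ms")
```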
Fixational eye movements occur involuntarily during visual fixation of stationary scenes. The fastest components of these miniature eye movements are microsaccades, which can be observed about once per second. Recent studies demonstrated that microsaccades are linked to covert shifts of visual attention. Here, we generalized this finding in two ways. First, we used peripheral cues, rather than the centrally presented cues of earlier studies. Second, we spatially cued attention in vision and audition to visual and auditory targets. An analysis of microsaccade responses revealed an equivalent impact of visual and auditory cues on the microsaccade-rate signature (i.e., an initial inhibition followed by an overshoot and a final return to the pre-cue baseline rate). With visual cues or visual targets, microsaccades were briefly aligned with cue direction and then opposite to cue direction during the overshoot epoch, probably as a result of an inhibition of an automatic saccade to the peripheral cue. With left auditory cues and auditory targets, microsaccades were oriented in the cue direction. We argue that microsaccades can be used to study crossmodal integration of sensory information and to map the time course of saccade preparation during covert shifts of visual and auditory attention.
Current advances in SWIFT
(2006)
Models of eye movement control are very useful for gaining insights into the intricate connections of different cognitive and oculomotor subsystems involved in reading. The SWIFT model (Engbert, Longtin, & Kliegl, 2002, Vision Research, 42, 621-636) proposed a unified mechanism to account for all types of eye movement patterns that might be observed in reading behavior. The model is based on the notion of spatially distributed, or parallel, processing of words in a sentence. We present a refined version of SWIFT introducing a letter-based approach that proposes a processing gradient in the shape of a smooth function. We show that SWIFT extends its capabilities by accounting for distributions of landing positions.
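To make the letter-based gradient concrete, here is an illustrative sketch in which each letter's processing rate falls off smoothly and asymmetrically with eccentricity from fixation, and a word's rate is the sum over its letters. The quadratic form and span values are stand-in assumptions, not SWIFT's actual gradient.

```python
import numpy as np

def letter_rate(ecc, span_left=5.0, span_right=8.0):
    """Processing rate of a letter at eccentricity `ecc` (letters from fixation)."""
    span = span_left if ecc < 0 else span_right           # asymmetric span
    return max(0.0, 1.0 - (ecc / span) ** 2)              # smooth, zero beyond span

def word_rate(letter_positions, fixation):
    """A word's processing rate: sum of its letters' rates."""
    return sum(letter_rate(p - fixation) for p in letter_positions)

words = {"word1": [0, 1, 2, 3], "word2": [5, 6, 7], "word3": [9, 10, 11, 12]}
for name, letters in words.items():                       # fixating letter 2
    print(name, round(word_rate(letters, fixation=2.0), 2))
```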
Dynamical models make specific assumptions about cognitive processes that generate human behavior. In data assimilation, these models are tested against time-ordered data. Recent progress on Bayesian data assimilation demonstrates that this approach combines the strengths of statistical modeling of individual differences with those of dynamical cognitive models.
Visual information processing is guided by an active mechanism generating saccadic eye movements to salient stimuli. Here we investigate the specific contribution of saccades to memory encoding of verbal and spatial properties in a serial recall task. In the first experiment, participants moved their eyes freely without specific instruction. We demonstrate the existence of qualitative differences in eye-movement strategies during verbal and spatial memory encoding. While verbal memory encoding was characterized by shifting the gaze to the to-be-encoded stimuli, saccadic activity was suppressed during spatial encoding. In the second experiment, participants were required to suppress saccades by fixating centrally during encoding or to make precise saccades onto the memory items. Active suppression of saccades had no effect on memory performance, but tracking the upcoming stimuli decreased memory performance dramatically in both tasks, indicating a resource bottleneck between display-controlled saccadic control and memory encoding. We conclude that optimized encoding strategies for verbal and spatial features underlie memory performance in serial recall, but such strategies work on an involuntary level only and do not support memory encoding when they are explicitly required by the task.
Bottom-up and top-down as well as low-level and high-level factors influence where we fixate when viewing natural scenes. However, the importance of each of these factors and how they interact remains a matter of debate. Here, we disentangle these factors by analyzing their influence over time. For this purpose, we develop a saliency model that is based on the internal representation of a recent early spatial vision model to measure the low-level, bottom-up factor. To measure the influence of high-level, bottom-up features, we use a recent deep neural network-based saliency model. To account for top-down influences, we evaluate the models on two large data sets with different tasks: first, a memorization task and, second, a search task. Our results lend support to a separation of visual scene exploration into three phases: the first saccade, an initial guided exploration characterized by a gradual broadening of the fixation density, and a steady state that is reached after roughly 10 fixations. Saccade-target selection during the initial exploration and in the steady state is related to similar areas of interest, which are better predicted when including high-level features. In the search data set, fixation locations are determined predominantly by top-down processes. In contrast, the first fixation follows a different fixation density and contains a strong central fixation bias. Nonetheless, first fixations are guided strongly by image properties, and as early as 200 ms after image onset, fixations are better predicted by high-level information. We conclude that any low-level, bottom-up factors are mainly limited to the generation of the first saccade. All saccades are better explained when high-level features are considered, and later, this high-level, bottom-up control can be overruled by top-down influences.
During reading, rapid eye movements (saccades) shift the reader's line of sight from one word to another for high-acuity visual information processing. While experimental data and theoretical models show that readers aim at word centers, eye-movement (oculomotor) accuracy is low compared to other tasks. As a consequence, distributions of saccadic landing positions indicate large (i) random errors and (ii) systematic over- and undershoot of word centers, which additionally depend on saccade lengths (McConkie et al., Vision Research, 28(10), 1107-1118, 1988). Here we show that both error components can be simultaneously reduced by reading German texts from right to left (N = 32). We used our experimental data to test a Bayesian model of saccade planning. First, experimental data are consistent with the model. Second, the model makes specific predictions of the effects of the precision of prior and (sensory) likelihood. Our results suggest that it is a more precise sensory likelihood that can explain the reduction of both random and systematic error components.
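A worked Gaussian example of the Bayesian saccade-planning idea: the planned landing position is the precision-weighted combination of a prior over saccade targets and a noisy sensory estimate of the word center; increasing likelihood precision shrinks both the systematic pull toward the prior mean (range effect) and the random scatter. All numbers are illustrative.

```python
import numpy as np

def planned_landing(target, prior_mu=5.0, sigma_prior=3.0, sigma_sens=2.0):
    w = sigma_sens**-2 / (sigma_sens**-2 + sigma_prior**-2)  # likelihood weight
    return w * target + (1 - w) * prior_mu                   # posterior mean

rng = np.random.default_rng(7)
for sigma_sens in (2.0, 1.0):            # smaller = more precise sensory likelihood
    targets = np.array([3.0, 7.0, 11.0]) # word centers (letters from launch site)
    sensed = targets + rng.normal(0.0, sigma_sens, size=(500, 3))
    landings = planned_landing(sensed, sigma_sens=sigma_sens)
    bias = (landings - targets).mean(axis=0)
    print(f"sigma_sens={sigma_sens}: systematic error = {bias.round(2)}, "
          f"scatter = {landings.std(axis=0).round(2)}")
```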
In research on eye-movement control during reading, the importance of cognitive processes related to language comprehension relative to visuomotor aspects of saccade generation is the topic of an ongoing debate. Here we investigate various eye-movement measures during reading of randomly shuffled meaningless text as compared to normal meaningful text. To ensure processing of the material, readers were occasionally probed for words occurring in normal or shuffled text. For reading of shuffled text we observed longer fixation times, fewer word skippings, and more refixations than in normal reading. Shuffled-text reading further differed from normal reading in that low-frequency words were not overall fixated longer than high-frequency words. However, the frequency effect was present on long words, but was reversed for short words. Also, consistent with our prior research we found distinct experimental effects of spatially distributed processing over several words at a time, indicating how lexical word processing affected eye movements. Based on analyses of statistical linear mixed-effect models we argue that the results are compatible with the hypothesis that the perceptual span is more strongly modulated by foveal load in the shuffled reading task than in normal reading. Results are discussed in the context of computational models of reading.
We resolve a controversy about reading fixations before word-skipping saccades which were reported as longer or shorter than control fixations in earlier studies. Our statistics are based on resampling of matched sets of fixations before skipped and nonskipped words, drawn from a database of 121,321 single fixations contributed by 230 readers of the Potsdam sentence corpus. Matched fixations originated from single-fixation forward-reading patterns and were equated for their positions within words. Fixations before skipped words were shorter before short or high-frequency words and longer before long or low-frequency words in comparison with control fixations. Reasons for inconsistencies in past research and implications for computational models are discussed.
During reading, saccadic eye movements are generated to shift words into the center of the visual field for lexical processing. Recently, Krugel and Engbert (Vision Research 50:1532-1539, 2010) demonstrated that within-word fixation positions are largely shifted to the left after skipped words. However, explanations of the origin of this effect cannot be drawn from normal reading data alone. Here we show that the large effect of skipped words on the distribution of within-word fixation positions is primarily based on rather subtle differences in the low-level visual information acquired before saccades. Using arrangements of "x" letter strings, we reproduced the effect of skipped character strings in a highly controlled single-saccade task. Our results demonstrate that the effect of skipped words in reading is the signature of a general visuomotor phenomenon. Moreover, our findings extend beyond the scope of the widely accepted range-error model, which posits that within-word fixation positions in reading depend solely on the distances of target words. We expect that our results will provide critical boundary conditions for the development of visuomotor models of saccade planning during reading.
Neuronal activity in area LIP is correlated with the perceived direction of ambiguous apparent motion (Z. M. Williams, J. C. Elfar, E. N. Eskandar, L. J. Toth, & J. A. Assad, 2003). Here we show that a similar correlation exists for small eye movements made during fixation. A moving dot grid with superimposed fixation point was presented through an aperture. In a motion discrimination task, unambiguous motion was compared with ambiguous motion obtained by shifting the grid by half of the dot distance. In three experiments we show that (a) microsaccadic inhibition, i.e., a drop in microsaccade frequency, precedes reports of perceptual flips, (b) microsaccadic inhibition does not accompany simple response changes, and (c) the direction of microsaccades occurring before motion onset biases the subsequent perception of ambiguous motion. We conclude that microsaccades provide a signal on which perceptual judgments rely in the absence of objective disambiguating stimulus information.
Microsaccades are miniature eye movements produced involuntarily during visual fixation of stationary objects. Since their first description more than 40 years ago, the role of microsaccades in vision has been controversial. In this issue, Martinez-Conde and colleagues present a solution to the long-standing research problem connecting this basic oculomotor function to visual perception, by showing that microsaccades may control peripheral vision during visual fixation by inducing flips in bistable peripheral percepts in head-unrestrained viewing. Their study provides new insight into the functional connectivity between oculomotor function and visual perception.
Hulleman & Olivers' (H&O's) model introduces variation of the functional visual field (FVF) for explaining visual search behavior. Our research shows how the FVF can be studied using gaze-contingent displays and how FVF variation can be implemented in models of gaze control. Contrary to H&O, we believe that fixation duration is an important factor when modeling visual search behavior.
When studying how people search for objects in scenes, the inhomogeneity of the visual field is often ignored. Due to physiological limitations, peripheral vision is blurred and mainly uses coarse-grained information (i.e., low spatial frequencies) for selecting saccade targets, whereas high-acuity central vision uses fine-grained information (i.e., high spatial frequencies) for analysis of details. Here we investigated how spatial frequencies and color affect object search in real-world scenes. Using gaze-contingent filters, we attenuated high or low frequencies in central or peripheral vision while viewers searched color or grayscale scenes. Results showed that peripheral filters and central high-pass filters hardly affected search accuracy, whereas accuracy dropped drastically with central low-pass filters. Peripheral filtering increased the time to localize the target by decreasing saccade amplitudes and increasing number and duration of fixations. The use of coarse-grained information in the periphery was limited to color scenes. Central filtering increased the time to verify target identity instead, especially with low-pass filters. We conclude that peripheral vision is critical for object localization and central vision is critical for object identification. Visual guidance during peripheral object localization is dominated by low-frequency color information, whereas high-frequency information, relatively independent of color, is most important for object identification in central vision.
We question the assumption of serial attention shifts and the assumption that saccade programs are initiated or canceled only after stage one of word identification. Evidence: (1) Fixation durations prior to skipped words are not consistently higher compared to those prior to nonskipped words. (2) Attentional modulation of microsaccade rate might occur after early visual processing. Saccades are probably triggered by attentional selection.
The method of twin surrogates has been introduced to test for phase synchronization of complex systems in the case of passive experiments. In this paper we derive new analytical expressions for the number of twins depending on the size of the neighborhood, as well as on the length of the trajectory. This allows us to determine the optimal parameters for the generation of twin surrogates. Furthermore, we determine the quality of the twin surrogates with respect to several linear and nonlinear statistics depending on the parameters of the method. In the second part of the paper we perform a hypothesis test for phase synchronization in the case of experimental data from fixational eye movements. These miniature eye movements have been shown to play a central role in neural information processing underlying the perception of static visual scenes. The high number of data sets (21 subjects and 30 trials per person) allows us to compare the generated twin surrogates with the "natural" surrogates that correspond to the different trials. We show that the generated twin surrogates reproduce very well all linear and nonlinear characteristics of the underlying experimental system. The synchronization analysis of fixational eye movements by means of twin surrogates reveals that the synchronization between the left and right eye is significant, indicating that either the centers in the brain stem generating fixational eye movements are closely linked, or, alternatively, that there is only one center controlling both eyes.
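A minimal sketch of the twin-surrogate construction: time points whose recurrence-matrix columns are identical ("twins") are dynamically interchangeable, so a surrogate trajectory can follow the original dynamics while occasionally jumping between twins. The toy signal, embedding, neighborhood size, and jump rule are simplified illustrations of the published algorithm, not a faithful reimplementation.

```python
import numpy as np

rng = np.random.default_rng(8)
x = np.sin(np.linspace(0, 20 * np.pi, 400)) + 0.1 * rng.standard_normal(400)
X = np.column_stack([x[:-2], x[1:-1], x[2:]])           # delay embedding, dim 3

dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
R = (dist < 0.2).astype(int)                            # recurrence matrix

# twins: pairs of times with identical recurrence columns
twins = {i: [j for j in range(len(X)) if j != i
             and np.array_equal(R[:, i], R[:, j])]
         for i in range(len(X))}

# generate one surrogate trajectory of the same length
i, surrogate = int(rng.integers(len(X))), []
for _ in range(len(X)):
    surrogate.append(X[i, 0])
    if twins[i] and rng.random() < 0.5:
        i = int(rng.choice(twins[i]))                   # jump to a twin
    i = (i + 1) % len(X)                                # then step forward

print("points with at least one twin:", sum(bool(v) for v in twins.values()))
```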
Dynamical models of cognition play an increasingly important role in driving theoretical and experimental research in psychology. Therefore, parameter estimation, model analysis and comparison of dynamical models are of essential importance. In this article, we propose a maximum likelihood approach for model analysis in a fully dynamical framework that includes time-ordered experimental data. Our methods can be applied to dynamical models for the prediction of discrete behavior (e.g., movement onsets); in particular, we use a dynamical model of saccade generation in scene viewing as a case study for our approach. For this model, the likelihood function can be computed directly by numerical simulation, which enables more efficient parameter estimation, including Bayesian inference, to obtain reliable estimates and corresponding credible intervals. Using hierarchical models, inference is possible even for individual observers. Furthermore, our likelihood approach can be used to compare different models. In our example, the dynamical framework is shown to outperform nondynamical statistical models. Additionally, the likelihood-based evaluation differentiates model variants that produced indistinguishable predictions on hitherto used statistics. Our results indicate that the likelihood approach is a promising framework for dynamical cognitive models.
We compared effects of covert spatial-attention shifts induced with exogenous or endogenous cues on microsaccade rate and direction. Separate and dissociated effects were obtained in rate and direction measures. Display changes caused microsaccade rate inhibition, followed by sustained rate enhancement. Effects on microsaccade direction were differentially tied to cue class (exogenous vs. endogenous) and type (neutral vs. directional). For endogenous cues, direction effects were weak and occurred late. Exogenous cues caused a fast direction bias towards the cue (i.e., early automatic triggering of saccade programs), followed by a shift in the opposite direction (i.e., controlled inhibition of cue-directed saccades, leading to a 'leakage' of microsaccades in the opposite direction).
Eye-fixation durations are among the best and most widely used measures of ongoing cognition in visual tasks, e.g., reading, visual search, or scene perception. However, fixations are characterized by ongoing motor activity (or fixational eye movements) with microsaccades as their most pronounced components. Recent work demonstrated the similarities of microsaccades and inspection saccades. Here, we show that distinct properties of microsaccades and inspection saccades can be found in a scene perception task, based on descriptive measures (e.g., a bimodal amplitude distribution) as well as functional characteristics (e.g., intervals between saccadic events and generating processes). Besides these specific differences, microsaccade rates produced by individual participants in a fixation paradigm are correlated with microsaccade rates extracted from fixations in scene perception, indicating a common neurophysiological basis. Finally, we observed that slow fixational eye movements, called drift, are significantly reduced during long fixations in scene viewing, which is informative about the control of eye movements in scene viewing.