Institut für Psychologie
Microsaccades, i.e., small fixational saccades generated in the superior colliculus (SC), have been linked to spatial attention. While maintaining fixation, voluntary shifts of covert attention toward peripheral targets result in a sequence of attention-aligned and attention-opposing microsaccades. In most previous studies, the direction of the voluntary shift is signaled by a spatial cue (e.g., a leftwards-pointing arrow) that presents the most informative part of the cue (e.g., the arrowhead) in the to-be-attended visual field. Here we directly investigated the influence of cue position and tested the hypothesis that microsaccades align with cue position rather than with the attention shift. In a spatial cueing task, we presented the task-relevant part of a symmetric cue either in the to-be-attended visual field or in the opposite field. As a result, microsaccades were still weakly related to the covert attention shift; however, they were strongly related to the position of the cue, even if that required a movement opposite to the cued attention shift. Moreover, if microsaccades aligned with cue position, we observed stronger cueing effects on manual response times. Our interpretation of the data is supported by numerical simulations of a computational model of microsaccade generation based on SC properties, in which we explain our findings by separate attentional mechanisms for cue localization and the cued attention shift. We conclude that during cueing of voluntary attention, microsaccades are related to both the overt attentional selection of the task-relevant part of the cue stimulus and the subsequent covert attention shift.
Saccades to single targets in peripheral vision are typically characterized by an undershoot bias. Putting this bias to a test, Kapoula [1] used a paradigm in which observers were presented with two different sets of target eccentricities that partially overlapped each other. Her data were suggestive of a saccadic range effect (SRE): There was a tendency for saccades to overshoot close targets and undershoot far targets in a block, suggesting that there was a response bias towards the center of eccentricities in a given block. Our Experiment 1 was a close replication of the original study by Kapoula [1]. In addition, we tested whether the SRE is sensitive to top-down requirements associated with the task, and we also varied the target presentation duration. In Experiments 1 and 2, we expected to replicate the SRE for a visual discrimination task. The simple visual saccade-targeting task in Experiment 3, entailing minimal top-down influence, was expected to elicit a weaker SRE. Voluntary saccades to remembered target locations in Experiment 3 were expected to elicit the strongest SRE. Contrary to these predictions, we did not observe a SRE in any of the tasks. Our findings complement the results reported by Gillen et al. [2] who failed to find the effect in a saccade-targeting task with a very brief target presentation. Together, these results suggest that unlike arm movements, saccadic eye movements are not biased towards making saccades of a constant, optimal amplitude for the task.
Visuospatial attention and gaze control depend on the interaction of foveal and peripheral processing. The foveal and peripheral regions of the visual field are differentially sensitive to parts of the spatial frequency spectrum. In two experiments, we investigated how the selective attenuation of spatial frequencies in the central or the peripheral visual field affects eye-movement behavior during real-world scene viewing. Gaze-contingent low-pass or high-pass filters with varying filter levels (i.e., cutoff frequencies; Experiment 1) or filter sizes (Experiment 2) were applied. Compared to unfiltered control conditions, mean fixation durations increased most with central high-pass and peripheral low-pass filtering. Increasing filter size prolonged fixation durations with peripheral filtering, but not with central filtering. Increasing filter level prolonged fixation durations with low-pass filtering, but not with high-pass filtering. These effects indicate that fixation durations are not always longer under conditions of increased processing difficulty. Saccade amplitudes largely adapted to processing difficulty: amplitudes increased with central filtering and decreased with peripheral filtering; the effects strengthened with increasing filter size and filter level. In addition, we observed a trade-off between saccade timing and saccadic selection, since saccade amplitudes were modulated when fixation durations were unaffected by the experimental manipulations. We conclude that interactions of perception and gaze control are highly sensitive to experimental manipulations of input images as long as the residual information can still be accessed for gaze control.
In humans and other foveated animals, visual acuity is highly concentrated at the center of gaze, so that choosing where to look next is an important example of online, rapid decision-making. Computational neuroscientists have developed biologically inspired models of visual attention, termed saliency maps, which successfully predict where people fixate on average. Using point process theory for spatial statistics, we show that scanpaths contain, however, important statistical structure, such as spatial clustering on top of distributions of gaze positions. Here, we develop a dynamical model of saccadic selection that accurately predicts the distribution of gaze positions as well as spatial clustering along individual scanpaths. Our model relies, first, on activation dynamics with spatially limited (foveated) access to saliency information and, second, on a leaky memory process controlling the re-inspection of target regions. This theoretical framework models a form of context-dependent decision-making, linking neural dynamics of attention to behavioral gaze data.
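A minimal sketch of this kind of selection mechanism, assuming an invented two-bump saliency map, a Gaussian foveation window, and a simple multiplicative leak — illustrative choices, not the published model's equations or parameters:

```python
import numpy as np

# Toy saliency-driven target selection with foveated access and a leaky
# inhibition-of-return memory. Grid size, decay rate, and the foveation
# width are invented for illustration.
N = 32
yy, xx = np.indices((N, N))
saliency = (np.exp(-((xx - 8) ** 2 + (yy - 8) ** 2) / 40.0)
            + np.exp(-((xx - 24) ** 2 + (yy - 20) ** 2) / 40.0))

memory = np.zeros((N, N))              # leaky memory of inspected regions
pos = (N // 2, N // 2)                 # start gaze at the grid center
scanpath = [pos]
for _ in range(40):
    d2 = (yy - pos[0]) ** 2 + (xx - pos[1]) ** 2
    access = saliency * np.exp(-d2 / (2 * 10.0 ** 2))  # foveated saliency access
    util = access - memory             # recently inspected regions are penalized
    pos = np.unravel_index(np.argmax(util), (N, N))
    memory[pos] += 1.0                 # tag the new target in memory ...
    memory *= 0.9                      # ... and let the whole trace leak
    scanpath.append(pos)
```

Because inhibition leaks away, previously visited regions eventually become attractive again, which produces the re-inspections and spatial clustering along scanpaths that the abstract describes.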
Background
Body image distortion is highly prevalent among overweight individuals. Whilst there is evidence that body-dissatisfied women and those suffering from disordered eating show a negative attentional bias towards their own unattractive body parts and others' attractive body parts, little is known about visual attention patterns in the area of obesity and with respect to males. Since eating disorders and obesity share common features in terms of distorted body image and body dissatisfaction, the aim of this study was to examine whether overweight men and women show a similar attentional bias.
Methods/Design
We analyzed eye movements in 30 overweight individuals (18 females) and 28 normal-weight individuals (16 females) with respect to the participants' own pictures as well as gender- and BMI-matched control pictures (front and back view). Additionally, we assessed body image and disordered eating using validated questionnaires.
Discussion
The overweight sample rated their own body as less attractive and showed a more disturbed body image. Contrary to our assumptions, they focused significantly longer on attractive compared to unattractive regions of both their own and the control body. For one's own body, this was more pronounced for women. A higher weight status and more frequent body checking predicted attentional bias towards attractive body parts. We found that overweight adults exhibit an unexpected and stable pattern of selective attention, with a distinctive focus on their own attractive body regions despite higher levels of body dissatisfaction. This positive attentional bias may either be an indicator of a more pronounced pattern of attentional avoidance or a self-enhancing strategy. Further research is warranted to clarify these results.
Eye-movement experiments suggest that the perceptual span during reading is larger than the fixated word, asymmetric around the fixation position, and shrinks in size contingent on the foveal processing load. We used the SWIFT model of eye-movement control during reading to test these hypotheses and their implications under the assumption of graded parallel processing of all words inside the perceptual span. Specifically, we simulated reading in the boundary paradigm and analysed the effects of denying the model to have valid preview of a parafoveal word n + 2 two words to the right of fixation. Optimizing the model parameters for the valid preview condition only, we obtained span parameters with remarkably realistic estimates conforming to the empirical findings on the size of the perceptual span. More importantly, the SWIFT model generated parafoveal processing up to word n + 2 without fitting the model to such preview effects. Our results suggest that asymmetry and dynamic modulation are plausible properties of the perceptual span in a parallel word-processing model such as SWIFT. Moreover, they seem to guide the flexible distribution of processing resources during reading between foveal and parafoveal words.
The mental chronometry of the human brain's processing of sounds to be categorized as targets has intensively been studied in cognitive neuroscience. According to current theories, a series of successive stages consisting of the registration, identification, and categorization of the sound has to be completed before participants are able to report the sound as a target by button press after approximately 300-500 ms. Here we use miniature eye movements as a tool to study the categorization of a sound as a target or nontarget, indicating that an initial categorization is present already after 80-100 ms. During visual fixation, the rate of microsaccades, the fastest components of miniature eye movements, is transiently modulated after auditory stimulation. In two experiments, we measured microsaccade rates in human participants in an auditory three-tone oddball paradigm (including rare nontarget sounds) and observed a difference in the microsaccade rates between targets and nontargets as early as 142 ms after sound onset. This finding was replicated in a third experiment with directed saccades measured in a paradigm in which tones had to be matched to score-like visual symbols. Considering the delays introduced by (motor) signal transmission and data analysis constraints, the brain must have differentiated target from nontarget sounds as fast as 80-100 ms after sound onset in both paradigms. We suggest that predictive information processing for expected input makes higher cognitive attributes, such as a sound's identity and category, available already during early sensory processing. The measurement of eye movements is thus a promising approach to investigate hearing.
Saccades move objects of interest into the center of the visual field for high-acuity visual analysis. White, Stritzke, and Gegenfurtner (Current Biology, 18, 124-128, 2008) have shown that saccadic latencies in the context of a structured background are much shorter than those with an unstructured background at equal levels of visibility. This effect has been explained by possible preactivation of the saccadic circuitry whenever a structured background acts as a mask for potential saccade targets. Here, we show that background textures modulate rates of microsaccades during visual fixation. First, after a display change, structured backgrounds induce a stronger decrease of microsaccade rates than do uniform backgrounds. Second, we demonstrate that the occurrence of a microsaccade in a critical time window can delay a subsequent saccadic response. Taken together, our findings suggest that microsaccades contribute to the saccadic facilitation effect, due to a modulation of microsaccade rates by properties of the background.
When we fixate a stationary target, our eyes generate miniature (or fixational) eye movements involuntarily. These fixational eye movements are classified as slow components (physiological drift, tremor) and microsaccades, which represent rapid, small-amplitude movements. Here we propose an integrated mathematical model for the generation of slow fixational eye movements and microsaccades. The model is based on the concept of self-avoiding random walks in a potential, a process driven by a self-generated activation field. The self-avoiding walk generates persistent movements on a short timescale, whereas, on a longer timescale, the potential produces antipersistent motions that keep the eye close to an intended fixation position. We introduce microsaccades as fast movements triggered by critical activation values. As a consequence, both slow movements and microsaccades follow the same law of motion; i.e., movements are driven by the self-generated activation field. Thus, the model contributes a unified explanation of why it has been a long-standing problem to separate slow movements and microsaccades with respect to their motion-generating principles. We conclude that the concept of a self-avoiding random walk captures fundamental properties of fixational eye movements and provides a coherent theoretical framework for two physiologically distinct movement types.
The zoom lens of attention: simulating shuffled versus normal text reading using the SWIFT model
(2012)
Assumptions on the allocation of attention during reading are crucial for theoretical models of eye guidance. The zoom lens model of attention postulates that attentional deployment can vary from a sharp focus to a broad window. The model is closely related to the foveal load hypothesis, i.e., the assumption that the perceptual span is modulated by the difficulty of the fixated word. However, these important theoretical concepts for cognitive research have not been tested quantitatively in eye movement models. Here we show that the zoom lens model, implemented in the SWIFT model of saccade generation, captures many important patterns of eye movements. We compared the model's performance to experimental data from normal and shuffled text reading. Our results demonstrate that the zoom lens of attention might be an important concept for eye movement control in reading.
During visual fixation on a target object, our eyes are not motionless but generate slow fixational eye movements and microsaccades. Effects of visual attention have been observed in both microsaccade rates and spatial directions. Experimental results, however, range from early (<200 ms) to late (>600 ms) effects combined with cue-congruent as well as cue-incongruent microsaccade directions. On the basis of well characterized neural circuitry in superior colliculus, we construct a dynamical model of neural activation that is modulated by perceptual input and visual attention. Our results show that additive integration of low-level perceptual responses and visual attention can explain microsaccade rate and direction effects across a range of visual cueing tasks. These findings suggest that the patterns of microsaccade direction observed in experiments are compatible with a single dynamical mechanism. The basic principles of the model are highly relevant to the general problem of integration of low-level perception and top-down selective attention.
Sudden visual changes attract our gaze, and related eye movement control requires attentional resources. Attention is a limited resource that is also involved in working memory, for instance in memory encoding. As a consequence, theory suggests that gaze capture could impair the buildup of memory representations due to an attentional resource bottleneck. Here we developed an experimental design combining a serial memory task (verbal or spatial) and concurrent gaze capture by a distractor (of high or low similarity to the relevant item). The results cannot be explained by a general resource bottleneck. Specifically, we observed that capture by the low-similarity distractor resulted in delayed and reduced saccade rates to relevant items in both memory tasks. However, while spatial memory performance decreased, verbal memory remained unaffected. In contrast, the high-similarity distractor led to capture and memory loss in both tasks. Our results lend support to the view that gaze capture leads to activation of irrelevant representations in working memory that compete for selection at recall. Activation of irrelevant spatial representations distracts spatial recall, whereas activation of irrelevant verbal features impairs verbal memory performance.
When the mind wanders, attention turns away from the external environment and cognitive processing is decoupled from perceptual information. Mind wandering is usually treated as a dichotomy (dichotomy hypothesis) and is often measured using self-reports. Here, we propose the levels-of-inattention hypothesis, which postulates attentional decoupling to graded degrees at different hierarchical levels of cognitive processing. To measure graded levels of attentional decoupling during reading, we introduce the sustained attention to stimulus task (SAST), which is based on the psychophysics of error detection. Under experimental conditions likely to induce mind wandering, we found that subjects were less likely to notice errors that required high-level processing for their detection as opposed to errors that only required low-level processing. Eye tracking revealed that before errors were overlooked, influences of high- and low-level linguistic variables on eye fixations were reduced in a graded fashion, indicating episodes of mindless reading at weak and deep levels. Individual fixation durations predicted the overlooking of lexical errors 5 s before they occurred. Our findings support the levels-of-inattention hypothesis and suggest that different levels of mindless reading can be measured behaviorally in the SAST. Using eye tracking to detect mind wandering online represents a promising approach for the development of new techniques to study mind wandering and to ameliorate its negative consequences.
During reading, saccadic eye movements are generated to shift words into the center of the visual field for lexical processing. Recently, Krugel and Engbert (Vision Research 50:1532-1539, 2010) demonstrated that within-word fixation positions are largely shifted to the left after skipped words. However, explanations of the origin of this effect cannot be drawn from normal reading data alone. Here we show that the large effect of skipped words on the distribution of within-word fixation positions is primarily based on rather subtle differences in the low-level visual information acquired before saccades. Using arrangements of "x" letter strings, we reproduced the effect of skipped character strings in a highly controlled single-saccade task. Our results demonstrate that the effect of skipped words in reading is the signature of a general visuomotor phenomenon. Moreover, our findings extend beyond the scope of the widely accepted range-error model, which posits that within-word fixation positions in reading depend solely on the distances of target words. We expect that our results will provide critical boundary conditions for the development of visuomotor models of saccade planning during reading.
Control of fixation duration during scene viewing by interaction of foveal and peripheral processing
(2013)
Processing in our visual system is functionally segregated, with the fovea specialized in processing fine detail (high spatial frequencies) for object identification, and the periphery in processing coarse information (low frequencies) for spatial orienting and saccade target selection. Here we investigate the consequences of this functional segregation for the control of fixation durations during scene viewing. Using gaze-contingent displays, we applied high-pass or low-pass filters to either the central or the peripheral visual field and compared eye-movement patterns with an unfiltered control condition. In contrast with predictions from functional segregation, fixation durations were unaffected when the critical information for vision was strongly attenuated (foveal low-pass and peripheral high-pass filtering); fixation durations increased, however, when useful information was left mostly intact by the filter (foveal high-pass and peripheral low-pass filtering). These patterns of results are difficult to explain under the assumption that fixation durations are controlled by foveal processing difficulty. As an alternative explanation, we developed the hypothesis that the interaction of foveal and peripheral processing controls fixation duration. To investigate the viability of this explanation, we implemented a computational model with two compartments, approximating spatial aspects of processing by foveal and peripheral activations that change according to a small set of dynamical rules. The model reproduced distributions of fixation durations from all experimental conditions by variation of few parameters that were affected by specific filtering conditions.
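The hypothesized foveal-peripheral interaction can be illustrated with a toy race model in which a peripheral accumulator, gated by foveal progress, triggers the saccade that ends the fixation. All rates, the threshold, and the gating rule are assumptions for illustration, not the published two-compartment model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy two-compartment race: foveal and peripheral activations accumulate
# noisily; the fixation ends when the peripheral compartment reaches
# threshold. Foveal progress gates peripheral accumulation (the interaction).
def simulate_fixation(r_fov=1.0, r_per=1.0, threshold=10.0, noise=0.3):
    a_fov, a_per, t = 0.0, 0.0, 0
    while a_per < threshold and t < 10_000:
        a_fov += r_fov + noise * rng.standard_normal()
        gate = min(1.0, max(0.0, a_fov / threshold))   # foveal progress gates periphery
        a_per += r_per * gate + noise * rng.standard_normal()
        t += 1
    return t

# Degrading peripheral input (e.g., peripheral low-pass filtering) is modeled
# as a slower peripheral rate, which prolongs simulated fixation durations.
filtered = np.mean([simulate_fixation(r_per=0.4) for _ in range(500)])
control = np.mean([simulate_fixation(r_per=1.0) for _ in range(500)])
```

In this sketch, manipulations that leave the gating compartment untouched barely change durations, mirroring the finding that only some filter conditions prolonged fixations.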
Whenever eye movements are measured, a central part of the analysis has to do with where subjects fixate and why they fixate where they do. To a first approximation, a set of fixations can be viewed as a set of points in space; this implies that fixations are spatial data and that the analysis of fixation locations can be beneficially thought of as a spatial statistics problem. We argue that thinking of fixation locations as arising from point processes is a very fruitful framework for eye-movement data, helping turn qualitative questions into quantitative ones. We provide a tutorial introduction to some of the main ideas of the field of spatial statistics, focusing especially on spatial Poisson processes. We show how point processes help relate image properties to fixation locations. In particular, we show how point processes naturally express the idea that image features' predictability for fixations may vary from one image to another. We review other methods of analysis used in the literature, show how they relate to point process theory, and argue that thinking in terms of point processes substantially extends the range of analyses that can be performed and clarifies their interpretation.
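As a minimal illustration of the point-process view, fixation locations can be sampled from an inhomogeneous spatial Poisson process by thinning; the Gaussian "saliency" intensity below is a made-up example, not an empirical map:

```python
import numpy as np

rng = np.random.default_rng(0)

# Inhomogeneous spatial Poisson process on the unit square via thinning:
# sample a homogeneous process at the dominating rate, then keep each point
# with probability intensity(x, y) / lam_max.
def intensity(x, y, peak=400.0):
    return peak * np.exp(-((x - 0.5) ** 2 + (y - 0.5) ** 2) / (2 * 0.15 ** 2))

lam_max = intensity(0.5, 0.5)                 # dominating (maximal) intensity
n = rng.poisson(lam_max)                      # homogeneous candidate points
xy = rng.uniform(0.0, 1.0, size=(n, 2))
keep = rng.uniform(0.0, lam_max, n) < intensity(xy[:, 0], xy[:, 1])
fixations = xy[keep]                          # retained points follow intensity()
```

The retained points cluster where the intensity is high, which is exactly the sense in which an image-based saliency map can be read as the intensity function of a point process for fixation locations.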
We explore the interaction between oculomotor control and language comprehension on the sentence level using two well-tested computational accounts of parsing difficulty. Previous work (Boston, Hale, Vasishth, & Kliegl, 2011) has shown that surprisal (Hale, 2001; Levy, 2008) and cue-based memory retrieval (Lewis & Vasishth, 2005) are significant and complementary predictors of reading time in an eyetracking corpus. It remains an open question how the sentence processor interacts with oculomotor control. Using a simple linking hypothesis proposed in Reichle, Warren, and McConnell (2009), we integrated both measures with the eye movement model EMMA (Salvucci, 2001) inside the cognitive architecture ACT-R (Anderson et al., 2004). We built a reading model that could initiate short Time Out regressions (Mitchell, Shen, Green, & Hodgson, 2008) that compensate for slow postlexical processing. This simple interaction enabled the model to predict the re-reading of words based on parsing difficulty. The model was evaluated in different configurations on the prediction of frequency effects on the Potsdam Sentence Corpus. The extension of EMMA with postlexical processing improved its predictions and reproduced re-reading rates and durations with a reasonable fit to the data. This demonstration, based on simple and independently motivated assumptions, serves as a foundational step toward a precise investigation of the interaction between high-level language processing and eye movement control.
Visual information processing is guided by an active mechanism generating saccadic eye movements to salient stimuli. Here we investigate the specific contribution of saccades to memory encoding of verbal and spatial properties in a serial recall task. In the first experiment, participants moved their eyes freely without specific instruction. We demonstrate the existence of qualitative differences in eye-movement strategies during verbal and spatial memory encoding. While verbal memory encoding was characterized by shifting the gaze to the to-be-encoded stimuli, saccadic activity was suppressed during spatial encoding. In the second experiment, participants were required to suppress saccades by fixating centrally during encoding or to make precise saccades onto the memory items. Active suppression of saccades had no effect on memory performance, but tracking the upcoming stimuli decreased memory performance dramatically in both tasks, indicating a resource bottleneck between display-controlled saccade execution and memory encoding. We conclude that optimized encoding strategies for verbal and spatial features underlie memory performance in serial recall, but such strategies work on an involuntary level only and do not support memory encoding when they are explicitly required by the task.
When we fixate our gaze on a stable object, our eyes move continuously with extremely small involuntary and autonomous movements of which we are unaware even as they occur. One of the roles of these fixational eye movements is to prevent the adaptation of the visual system to continuous illumination and to inhibit fading of the image. These random, small movements are restricted at long time scales so as to keep the target at the centre of the field of view. In addition, the synchronisation properties between both eyes are related to binocular coordination in order to provide stereopsis. We investigated the roles of different time-scale behaviours, especially how they are expressed in the different spatial directions (vertical versus horizontal). We also tested the synchronisation between both eyes. Results show different scaling behaviour between horizontal and vertical movements. When the small ballistic movements, i.e., microsaccades, are removed, the scaling behaviour in both axes becomes similar. Our findings suggest that microsaccades enhance the persistence at short time scales mostly in the horizontal component and much less in the vertical component. We also applied the phase-synchronisation decay method to study the synchronisation between six combinations of binocular fixational eye movement components. We found that the vertical-vertical components of the right and left eyes are significantly more synchronised than the horizontal-horizontal components. These differences may be due to the need to continuously move the eyes in the horizontal plane in order to match the stereoscopic image for different viewing distances.
Neuronal activity in area LIP is correlated with the perceived direction of ambiguous apparent motion (Z. M. Williams, J. C. Elfar, E. N. Eskandar, L. J. Toth, & J. A. Assad, 2003). Here we show that a similar correlation exists for small eye movements made during fixation. A moving dot grid with a superimposed fixation point was presented through an aperture. In a motion discrimination task, unambiguous motion was compared with ambiguous motion obtained by shifting the grid by half of the dot distance. In three experiments we show that (a) microsaccadic inhibition, i.e., a drop in microsaccade frequency, precedes reports of perceptual flips, (b) microsaccadic inhibition does not accompany simple response changes, and (c) the direction of microsaccades occurring before motion onset biases the subsequent perception of ambiguous motion. We conclude that microsaccades provide a signal on which perceptual judgments rely in the absence of objective disambiguating stimulus information.
Fixation durations in reading are longer for within-word fixation positions close to the word center than for positions near word boundaries. This counterintuitive result was termed the inverted optimal viewing position (IOVP) effect. We proposed an explanation of the effect based on error correction of mislocated fixations [Nuthmann, A., Engbert, R., & Kliegl, R. (2005). Mislocated fixations during reading and the inverted optimal viewing position effect. Vision Research, 45, 2201-2217], which suggests that the IOVP effect is not related to word processing. Here we demonstrate the existence of an IOVP effect in "mindless reading", a z-string scanning task. We compare the results from experimental data with results obtained from computer simulations of a simple model of the IOVP effect and discuss alternative accounts. We conclude that oculomotor errors, which often induce mislocated fixations, represent the most important source of the IOVP effect.
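The mislocated-fixation account can be illustrated with a toy simulation: saccades aimed at the word center land with Gaussian error, and landings outside the intended word trigger a quick corrective fixation. Word length, the error SD, and both duration distributions below are invented values, not estimates from the study:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy mislocated-fixation account of the IOVP effect. Mislocated fixations
# land near the edges of a neighboring word and end quickly, pulling mean
# durations down at word edges and leaving a peak at the word center.
word_len = 7
n = 200_000
err = rng.normal(0.0, 2.0, n)                # oculomotor landing error (letters)
landing = word_len / 2 + err                 # position relative to the target word
mislocated = (landing < 0) | (landing >= word_len)
within = np.mod(landing, word_len)           # within-word position actually fixated
duration = np.where(mislocated,
                    rng.normal(140.0, 30.0, n),   # quick error-correcting fixation
                    rng.normal(220.0, 40.0, n))   # normal fixation
bins = np.digitize(within, np.linspace(0.0, word_len, 8)) - 1
profile = np.array([duration[bins == b].mean() for b in range(7)])
# profile peaks at the word center and drops toward the edges (inverted U)
```

No word-processing assumption enters the simulation; the inverted-U profile arises purely from oculomotor error plus fast error correction, which is the abstract's central claim.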
Following up on an exchange about the relation between microsaccades and spatial attention (Horowitz, Fencsik, Fine, Yurgenson, & Wolfe, 2007; Horowitz, Fine, Fencsik, Yurgenson, & Wolfe, 2007; Laubrock, Engbert, Rolfs, & Kliegl, 2007), we examine the effects of selection criteria and response modality. We show that for Posner cuing with saccadic responses, microsaccades go with attention in at least 75% of cases (almost 90% if probability matching is assumed) when they are first (or only) microsaccades in the cue target interval and when they occur between 200 and 400 msec after the cue. The relation between spatial attention and the direction of microsaccades drops to chance level for unselected microsaccades collected during manual-response conditions. Analyses of data from four cross-modal cuing experiments demonstrate an above-chance, intermediate link for visual cues, but no systematic relation for auditory cues. Thus, the link between spatial attention and direction of microsaccades depends on the experimental condition and time of occurrence, but it can be very strong.
In this paper we apply symbolic transformations as a visualisation technique for analysing rhythm production. It is shown that qualitative information can be extracted from the experimental data. This approach may provide new insights into the organisation of temporal order by the brain on different levels of description. A simple phenomenological model for the explanation of the observed phenomena is proposed.
We investigate the cognitive control in polyrhythmic hand movements as a model paradigm for bimanual coordination. Using a symbolic coding of the recorded time series, we demonstrate the existence of qualitative transitions induced by experimental manipulation of the tempo. A nonlinear model with delayed feedback control is proposed, which accounts for these dynamical transitions in terms of bifurcations resulting from variation of the external control parameter. Furthermore, it is shown that transitions can also be observed due to fluctuations in the timing control level. We conclude that the complexity of coordinated bimanual movements results from interactions between nonlinear control mechanisms with delayed feedback and stochastic timing components.
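A symbolic coding of inter-tap intervals of the kind described above can be sketched as follows; the three-symbol alphabet and the tolerance band are illustrative choices, not the coding used in the study:

```python
import numpy as np

def symbolize(intervals, tol=0.1):
    """Map each inter-tap interval to a symbol relative to the series median:
    's' (short), 'o' (on time), 'l' (long), using a +/- tol tolerance band."""
    m = np.median(intervals)
    out = []
    for x in intervals:
        if x < m * (1 - tol):
            out.append('s')
        elif x > m * (1 + tol):
            out.append('l')
        else:
            out.append('o')
    return ''.join(out)

# Example: a mostly steady tapping series with one long and one short interval.
code = symbolize([0.5, 0.5, 0.52, 0.7, 0.3])   # -> "oools"
```

Once a time series is reduced to such symbol strings, qualitative transitions (e.g., under tempo manipulation) show up as changes in the string statistics, such as word frequencies of short symbol sequences.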
The fast and the slow of skilled bimanual rhythm production: parallel versus integrated timing
(2000)
A dynamical model of saccade generation in reading based on spatially distributed lexical processing
(2002)
We question the assumption of serial attention shifts and the assumption that saccade programs are initiated or canceled only after stage one of word identification. Evidence: (1) Fixation durations prior to skipped words are not consistently longer than those prior to non-skipped words. (2) Attentional modulation of microsaccade rate might occur after early visual processing. Saccades are probably triggered by attentional selection.
Computational models such as E-Z Reader and SWIFT are ideal theoretical tools to test quantitatively our current understanding of eye-movement control in reading. Here we present a mathematical analysis of word skipping in the E-Z Reader model by semianalytic methods, to highlight the differences in current modeling approaches. In E-Z Reader, the word identification system must outperform the oculomotor system to induce word skipping. In SWIFT, there is competition among words to be selected as a saccade target. We conclude that it is the question of competitors in the "game" of word skipping that must be solved in eye movement research.
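The competition mechanism summarized above can be caricatured in a few lines. The sketch below is illustrative only, not the SWIFT implementation; it simply selects the next saccade target with probability proportional to each word's current activation:

```python
import random

def select_target(activations, rng=random.Random(7)):
    """Toy competition rule: the next saccade target is chosen with
    probability proportional to each word's lexical activation."""
    r = rng.random() * sum(activations)
    acc = 0.0
    for i, a in enumerate(activations):
        acc += a
        if r <= acc:
            return i
    return len(activations) - 1

# The word with the highest activation wins most often:
counts = [0] * 4
for _ in range(10000):
    counts[select_target([0.1, 0.2, 0.6, 0.1])] += 1
```

Under such a rule, skipping a word is not a separate decision; it emerges whenever a farther word wins the competition.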
SWIFT explorations
(2003)
During reading, our eyes perform complicated sequences of fixations on words. Stochastic models of eye movement control suggest that this seemingly erratic behaviour can be attributed to noise in the oculomotor system and random fluctuations in lexical processing. Here, we present a qualitative analysis of a recently published dynamical model [Engbert et al., 2002] and propose that deterministic nonlinear control accounts for much of the observed complexity of eye movement patterns during reading. Based on a symbolic coding technique we analyze robust statistical features of simulated fixation sequences.
During fixation of a stationary target, small involuntary eye movements exhibit an erratic trajectory-a random walk. Two types of these fixational eye movements are drift and microsaccades (small-amplitude saccades). We investigated fixational eye movements and binocular coordination using a statistical analysis that had previously been applied to human posture control. This random-walk analysis uncovered two different time scales in fixational eye movements and identified specific functions for microsaccades. On a short time scale, microsaccades enhanced perception by increasing fixation errors. On a long time scale, microsaccades reduced fixation errors and binocular disparity (relative to pure drift movements). Thus, our findings clarify the role of oculomotor processes during fixation.
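The random-walk analysis mentioned above rests on how displacement scales with time lag. The sketch below is illustrative, not the authors' code: it estimates a scaling exponent from the mean squared displacement of a 2-D trajectory, which is close to 1 for uncorrelated motion, above 1 for persistent and below 1 for antipersistent behavior; separate short- and long-lag ranges would reveal the two time scales.

```python
import numpy as np

def mean_squared_displacement(x, y, lags):
    """MSD of a 2-D trajectory for each time lag m."""
    return np.array([np.mean((x[m:] - x[:-m]) ** 2 + (y[m:] - y[:-m]) ** 2)
                     for m in lags])

def scaling_exponent(x, y, lags=range(1, 50)):
    """Slope of log(MSD) versus log(lag)."""
    lags = np.asarray(lags)
    msd = mean_squared_displacement(x, y, lags)
    slope, _intercept = np.polyfit(np.log(lags), np.log(msd), 1)
    return slope

# For an uncorrelated random walk the exponent is close to 1:
rng = np.random.default_rng(1)
x, y = np.cumsum(rng.standard_normal((2, 20000)), axis=1)
alpha = scaling_exponent(x, y)
```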
Fixational eye movements occur involuntarily during visual fixation of stationary scenes. The fastest components of these miniature eye movements are microsaccades, which can be observed about once per second. Recent studies demonstrated that microsaccades are linked to covert shifts of visual attention. Here, we generalized this finding in two ways. First, we used peripheral cues, rather than the centrally presented cues of earlier studies. Second, we spatially cued attention in vision and audition to visual and auditory targets. An analysis of microsaccade responses revealed an equivalent impact of visual and auditory cues on microsaccade-rate signature (i.e. an initial inhibition followed by an overshoot and a final return to the pre-cue baseline rate). With visual cues or visual targets, microsaccades were briefly aligned with cue direction and then opposite to cue direction during the overshoot epoch, probably as a result of an inhibition of an automatic saccade to the peripheral cue. With left auditory cues and auditory targets, microsaccades were oriented in cue direction. We argue that microsaccades can be used to study crossmodal integration of sensory information and to map the time course of saccade preparation during covert shifts of visual and auditory attention.
Refixation probability during reading is lowest near the word center, suggestive of an optimal viewing position (OVP). Counter-intuitively, fixation durations are largest at the OVP, a result called the inverted optimal viewing position (IOVP) effect [Vitu, McConkie, Kerr, & O'Regan, (2001). Vision Research 41, 3513-3533]. Current models of eye-movement control in reading fail to reproduce the IOVP effect. We propose a simple mechanism for generating this effect based on error-correction of mislocated fixations due to saccadic errors. First, we propose an algorithm for estimating proportions of mislocated fixations from experimental data, yielding a higher probability for mislocated fixations near word boundaries. Second, we assume that mislocated fixations trigger an immediate start of a new saccade program, causing a decrease of associated durations. Thus, the IOVP effect could emerge as a result of a coupling between cognitive and oculomotor processes. (c) 2005 Elsevier Ltd. All rights reserved.
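Why mislocated fixations pile up near word boundaries follows from a little Gaussian arithmetic. In the sketch below (function name, letter units, and the error standard deviation are illustrative, not estimates from the paper), the mislocation probability is the Gaussian landing-position mass that falls outside the intended word:

```python
import math

def mislocation_prob(aim, word_left, word_right, sd=1.5):
    """Probability that a saccade aimed at `aim` (letter units) lands
    outside the word spanning [word_left, word_right], assuming
    Gaussian saccadic error with standard deviation `sd`."""
    cdf = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    inside = cdf((word_right - aim) / sd) - cdf((word_left - aim) / sd)
    return 1.0 - inside

# Aiming near a word boundary produces more mislocated fixations
# than aiming at the word center:
p_edge = mislocation_prob(aim=0.5, word_left=0.0, word_right=5.0)
p_center = mislocation_prob(aim=2.5, word_left=0.0, word_right=5.0)
```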
We compared effects of covert spatial-attention shifts induced with exogenous or endogenous cues on microsaccade rate and direction. Separate and dissociated effects were obtained in rate and direction measures. Display changes caused microsaccade rate inhibition, followed by sustained rate enhancement. Effects on microsaccade direction were differentially tied to cue class (exogenous vs. endogenous) and type (neutral vs. directional). For endogenous cues, direction effects were weak and occurred late. Exogenous cues caused a fast direction bias towards the cue (i.e., early automatic triggering of saccade programs), followed by a shift in the opposite direction (i.e., controlled inhibition of cue-directed saccades, leading to a 'leakage' of microsaccades in the opposite direction). (C) 2004 Elsevier Ltd. All rights reserved.
We resolve a controversy about reading fixations before word-skipping saccades which were reported as longer or shorter than control fixations in earlier studies. Our statistics are based on resampling of matched sets of fixations before skipped and nonskipped words, drawn from a database of 121,321 single fixations contributed by 230 readers of the Potsdam sentence corpus. Matched fixations originated from single-fixation forward-reading patterns and were equated for their positions within words. Fixations before skipped words were shorter before short or high-frequency words and longer before long or low-frequency words in comparison with control fixations. Reasons for inconsistencies in past research and implications for computational models are discussed.
Mathematical models have become an important tool for understanding the control of eye movements during reading. Main goals of the development of the SWIFT model (R. Engbert, A. Longtin, & R. Kliegl, 2002) were to investigate the possibility of spatially distributed processing and to implement a general mechanism for all types of eye movements observed in reading experiments. The authors present an advanced version of SWIFT that integrates properties of the oculomotor system and effects of word recognition to explain many of the experimental phenomena faced in reading research. They propose new procedures for the estimation of model parameters and for the test of the model's performance. They also present a mathematical analysis of the dynamics of the SWIFT model. Finally, within this framework, they present an analysis of the transition from parallel to serial processing.
Current advances in SWIFT
(2006)
Models of eye movement control are very useful for gaining insights into the intricate connections of different cognitive and oculomotor subsystems involved in reading. The SWIFT model (Engbert, Longtin, & Kliegl (2002). Vision Research, 42, 621-636) proposed a unified mechanism to account for all types of eye movement patterns that might be observed in reading behavior. The model is based on the notion of spatially distributed, or parallel, processing of words in a sentence. We present a refined version of SWIFT introducing a letter-based approach that proposes a processing gradient in the shape of a smooth function. We show that SWIFT extends its capabilities by accounting for distributions of landing positions.
Microsaccades are miniature eye movements produced involuntarily during visual fixation of stationary objects. Since their first description more than 40 years ago, the role of microsaccades in vision has been controversial. In this issue, Martinez-Conde and colleagues present a solution to the long-standing research problem connecting this basic oculomotor function to visual perception, by showing that microsaccades may control peripheral vision during visual fixation by inducing flips in bistable peripheral percepts in head-unrestrained viewing. Their study provides new insight into the functional connectivity between oculomotor function and visual perception.
Reading requires the orchestration of visual, attentional, language-related, and oculomotor processing constraints. This study replicates previous effects of frequency, predictability, and length of fixated words on fixation durations in natural reading and demonstrates new effects of these variables related to previous and next words. Results are based on fixation durations recorded from 222 persons, each reading 144 sentences. Such evidence for distributed processing of words across fixation durations challenges psycholinguistic immediacy-of-processing and eye-mind assumptions. Most of the time the mind processes several words in parallel at different perceptual and cognitive levels. Eye movements can help to unravel these processes.
Even during visual fixation of a stationary target, our eyes perform rather erratic miniature movements, which represent a random walk. These "fixational" eye movements counteract perceptual fading, a consequence of fast adaptation of the retinal receptor systems to constant input. The most important contribution to fixational eye movements is produced by microsaccades; however, a specific function of microsaccades only recently has been found. Here we show that the occurrence of microsaccades is correlated with low retinal image slip approximately 200 ms before microsaccade onset. This result suggests that microsaccades are triggered dynamically, in contrast to the current view that microsaccades are randomly distributed in time, characterized by their rate-of-occurrence of 1 to 2 per second. As a result of the dynamic triggering mechanism, individual microsaccade rate can be predicted by the fractal dimension of trajectories. Finally, we propose a minimal computational model for the dynamic triggering of microsaccades.
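The dynamic-triggering idea can be sketched as a simple rule applied to a surrogate slip signal. The window length, threshold, sampling rate, and refractory period below are illustrative choices, not the model's fitted parameters: a microsaccade is emitted whenever mean retinal slip over the trailing ~200 ms falls below a threshold.

```python
import numpy as np

def triggered_onsets(slip, window=40, threshold=0.7, refractory=200):
    """Emit a microsaccade onset whenever mean slip over the trailing
    `window` samples (~200 ms at 200 Hz) drops below `threshold`,
    then suppress further triggers for a refractory period."""
    onsets, last = [], -refractory
    for t in range(window, len(slip)):
        if t - last >= refractory and slip[t - window:t].mean() < threshold:
            onsets.append(t)
            last = t
    return onsets

# Surrogate slip signal; epochs of low slip trigger microsaccades:
rng = np.random.default_rng(0)
slip = np.abs(rng.standard_normal(5000))
onsets = triggered_onsets(slip)
```

Because triggers depend on the recent history of the trajectory, microsaccade timing under such a rule is not a Poisson process, consistent with the paper's argument against purely random occurrence.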
Using a serial search paradigm, we observed several effects of within-object fixation position on spatial and temporal control of eye movements: the preferred viewing location, the launch-site effect, the optimal viewing position, and the inverted optimal viewing position of fixation duration. While these effects were first identified by eye-movement studies in reading, our approach permits an analysis of the functional relationships between the effects in a different paradigm. Our results demonstrate that the fixation position is an important predictor of the subsequent saccade by influencing both fixation duration and the selection of the next saccade target.
Reading requires the orchestration of visual, attentional, language-related, and oculomotor processing constraints. This study replicates previous effects of frequency, predictability, and length of fixated words on fixation durations in natural reading and demonstrates new effects of these variables related to previous and next words. Results are based on fixation durations recorded from 222 persons, each reading 144 sentences. Such evidence for distributed processing of words across fixation durations challenges psycholinguistic immediacy-of-processing and eye-mind assumptions. Most of the time the mind processes several words in parallel at different perceptual and cognitive levels. Eye movements can help to unravel these processes.
Mathematical models have become an important tool for understanding the control of eye movements during reading. Main goals of the development of the SWIFT model (Engbert, Longtin, & Kliegl, 2002) were to investigate the possibility of spatially distributed processing and to implement a general mechanism for all types of eye movements we observe in reading experiments. Here, we present an advanced version of SWIFT which integrates properties of the oculomotor system and effects of word recognition to explain many of the experimental phenomena faced in reading research. We propose new procedures for the estimation of model parameters and for the test of the model’s performance. A mathematical analysis of the dynamics of the SWIFT model is presented. Finally, within this framework, we present an analysis of the transition from parallel to serial processing.
Covert shifts of attention are usually reflected in RT differences between responses to valid and invalid cues in the Posner spatial attention task. Such inferences about covert shifts of attention do not control for microsaccades in the cue-target interval. We analyzed the effects of microsaccade orientation on RTs in four conditions, crossing peripheral visual and auditory cues with peripheral visual and auditory discrimination targets. Reaction time was generally faster on trials without microsaccades in the cue-target interval. If microsaccades occurred, the target-location congruency of the last microsaccade in the cue-target interval interacted in a complex way with cue validity. For valid visual cues, irrespective of whether the discrimination target was visual or auditory, target-congruent microsaccades delayed RT. For invalid cues, target-incongruent microsaccades facilitated RTs for visual target discrimination, but delayed RT for auditory target discrimination. No reliable effects on RT were associated with auditory cues or with the first microsaccade in the cue-target interval. We discuss theoretical implications for the relation between spatial attention and oculomotor processes.
We question the assumption of serial attention shifts and the assumption that saccade programs are initiated or canceled only after stage one of word identification. Evidence: (1) Fixation durations prior to skipped words are not consistently longer than those prior to nonskipped words. (2) Attentional modulation of microsaccade rate might occur after early visual processing. Saccades are probably triggered by attentional selection.
Computational models such as E-Z Reader and SWIFT are ideal theoretical tools to test quantitatively our current understanding of eye-movement control in reading. Here we present a mathematical analysis of word skipping in the E-Z Reader model by semianalytic methods, to highlight the differences in current modeling approaches. In E-Z Reader, the word identification system must outperform the oculomotor system to induce word skipping. In SWIFT, there is competition among words to be selected as a saccade target. We conclude that it is the question of competitors in the “game” of word skipping that must be solved in eye movement research.
Contents:
1 Introduction
2 Experiment
3 Data
4 Symbolic dynamics
4.1 Symbolic dynamics as a tool for data analysis
4.2 2-symbol coding
4.3 3-symbol coding
5 Measures of complexity
5.1 Word statistics
5.2 Shannon entropy
6 Testing for stationarity
6.1 Stationarity
6.2 Time series of cycle durations
6.3 Chi-square test
7 Control parameters in the production of rhythms
8 Analysis of relative phases
9 Discussion
10 Outlook
We investigate the cognitive control in polyrhythmic hand movements as a model paradigm for bimanual coordination. Using a symbolic coding of the recorded time series, we demonstrate the existence of qualitative transitions induced by experimental manipulation of the tempo. A nonlinear model with delayed feedback control is proposed, which accounts for these dynamical transitions in terms of bifurcations resulting from variation of the external control parameter. Furthermore, it is shown that transitions can also be observed due to fluctuations in the timing control level. We conclude that the complexity of coordinated bimanual movements results from interactions between nonlinear control mechanisms with delayed feedback and stochastic timing components.
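The delayed-feedback control idea described above can be illustrated with a toy error-correction map. The gain, delay, and perturbation values below are illustrative, not the model reported in the paper: each produced interval corrects the timing error observed a fixed number of cycles earlier, and with delayed feedback a strong gain destabilizes the timing, in the spirit of the bifurcations the model exhibits.

```python
def delayed_feedback_intervals(target=500.0, gain=0.4, delay=2, n=400):
    """Produced inter-tap intervals (ms) under delayed error
    correction: interval_t = target - gain * error_{t-delay}."""
    errors = [10.0] * delay  # initial timing perturbation (ms)
    intervals = []
    for _ in range(n):
        interval = target - gain * errors[-delay]
        intervals.append(interval)
        errors.append(interval - target)
    return intervals

# A moderate gain damps the perturbation; gain > 1 lets it grow:
stable = delayed_feedback_intervals(gain=0.4)
unstable = delayed_feedback_intervals(gain=1.2)
```

In this linear caricature the stability boundary is |gain| = 1; adding stochastic timing noise to each interval would reproduce the interplay of deterministic control and fluctuations that the abstract emphasizes.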