Eye movements serve as a window into ongoing visual-cognitive processes and can thus be used to investigate how people perceive real-world scenes. A key issue for understanding eye-movement control during scene viewing concerns the roles of central and peripheral vision, which process information differently and are therefore specialized for different tasks (object identification and peripheral target selection, respectively). Yet rather little is known about the contributions of central and peripheral processing to gaze control and how they are coordinated within a fixation during scene viewing. Additionally, the factors determining fixation durations have long been neglected, as scene-perception research has mainly focused on the factors determining fixation locations. The present thesis aimed to advance our understanding of how central and peripheral vision contribute to spatial and, in particular, temporal aspects of eye-movement control during scene viewing. In a series of five experiments, we varied processing difficulty in the central or the peripheral visual field by attenuating selective parts of the spatial-frequency spectrum within these regions. Furthermore, we developed a computational model of how foveal and peripheral processing might be coordinated for the control of fixation durations. The thesis provides three main findings. First, the experiments indicate that increasing processing demands in central or peripheral vision do not necessarily prolong fixation durations; instead, stimulus-independent timing is adapted when processing becomes too difficult. Second, peripheral vision seems to play a prominent role in the control of fixation durations, a notion also implemented in the computational model. The model assumes that foveal and peripheral processing proceed largely in parallel and independently during fixation, but can interact to modulate fixation duration.
Thus, we propose that the variation in fixation durations can in part be accounted for by the interaction between central and peripheral processing. Third, the experiments indicate that saccadic behavior largely adapts to processing demands, with a bias toward avoiding spatial-frequency-filtered scene regions as saccade targets. We demonstrate that the observed saccade-amplitude patterns reflect corresponding modulations of visual attention. The present work highlights the individual contributions and the interplay of central and peripheral vision in gaze control during scene viewing, particularly in the control of fixation duration. Our results have new implications for computational models and for experimental research on scene perception.
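The coordination scheme of such a model can be illustrated with a toy simulation. This is an illustrative sketch only, not the thesis model: the exponential timers, the rate parameters, and the max-based interaction rule are all assumptions made here.

```python
import numpy as np

def simulate_fixation_durations(n, foveal_rate, peripheral_rate,
                                timer_mean=0.25, seed=0):
    """Toy parallel-timer sketch: a stimulus-independent random timer
    proposes a saccade onset while foveal and peripheral processing finish
    independently and in parallel; a stream still busy at timer expiry
    prolongs the fixation until it completes. All parameters hypothetical."""
    rng = np.random.default_rng(seed)
    timer = rng.exponential(timer_mean, n)           # autonomous saccade timer
    foveal = rng.exponential(1 / foveal_rate, n)     # foveal completion times
    peripheral = rng.exponential(1 / peripheral_rate, n)
    return np.maximum(timer, np.maximum(foveal, peripheral))
```

In this sketch, slowing peripheral processing (a lower `peripheral_rate`) lengthens mean fixation durations, mirroring the prominent peripheral role proposed above.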
During reading, saccadic eye movements are generated to shift words into the center of the visual field for lexical processing. Recently, Krugel and Engbert (Vision Research 50:1532-1539, 2010) demonstrated that within-word fixation positions are largely shifted to the left after skipped words. However, explanations of the origin of this effect cannot be drawn from normal reading data alone. Here we show that the large effect of skipped words on the distribution of within-word fixation positions is primarily based on rather subtle differences in the low-level visual information acquired before saccades. Using arrangements of "x" letter strings, we reproduced the effect of skipped character strings in a highly controlled single-saccade task. Our results demonstrate that the effect of skipped words in reading is the signature of a general visuomotor phenomenon. Moreover, our findings extend beyond the scope of the widely accepted range-error model, which posits that within-word fixation positions in reading depend solely on the distances of target words. We expect that our results will provide critical boundary conditions for the development of visuomotor models of saccade planning during reading.
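The range-error account mentioned above can be summarized in a one-line sketch; the helper and its parameter values (`optimal`, `slope`) are hypothetical illustrations, not estimates from the study.

```python
def range_error_landing(launch_distance, optimal=7.0, slope=0.5):
    """Range-error sketch: saccades regress toward a habitual amplitude, so
    the mean landing error is a linear function of launch-to-target distance
    (positive = overshoot of near targets, negative = undershoot of far
    ones). Parameter values are hypothetical."""
    return slope * (optimal - launch_distance)
```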
Recent studies using the gaze-contingent boundary paradigm reported a reversed preview benefit: shorter fixations on a target word when an unrelated preview was easier to process than the fixated target (Schotter & Leinenger, 2016). This is explained via forced fixations: short fixations on words that would ideally be skipped (because lexical processing has progressed enough) but could not be, because saccade planning had reached a point of no return. This contrasts with accounts of preview effects via trans-saccadic integration: shorter fixations on a target word when the preview is more similar to it (see Cutter, Drieghe, & Liversedge, 2015). In addition, if the previewed word, not the fixated target, determines subsequent eye movements, is it also this word that enters the linguistic processing stream? We tested these accounts by having 24 subjects read 150 sentences in the boundary paradigm in which both the preview and target were initially plausible but later one, both, or neither became implausible, providing an opportunity to probe which one was linguistically encoded. In an intervening buffer region, both words were plausible, providing an opportunity to investigate trans-saccadic integration. The frequency of the previewed word affected progressive saccades (i.e., forced fixations) as well as when trans-saccadic integration failure increased regressions, but only the implausibility of the target word affected semantic encoding. These data support a hybrid account of saccadic control (Reingold, Reichle, Glaholt, & Sheridan, 2012) driven by incomplete (often parafoveal) word recognition, which occurs prior to complete (often foveal) word recognition.
When studying how people search for objects in scenes, the inhomogeneity of the visual field is often ignored. Due to physiological limitations, peripheral vision is blurred and mainly uses coarse-grained information (i.e., low spatial frequencies) for selecting saccade targets, whereas high-acuity central vision uses fine-grained information (i.e., high spatial frequencies) for analysis of details. Here we investigated how spatial frequencies and color affect object search in real-world scenes. Using gaze-contingent filters, we attenuated high or low frequencies in central or peripheral vision while viewers searched color or grayscale scenes. Results showed that peripheral filters and central high-pass filters hardly affected search accuracy, whereas accuracy dropped drastically with central low-pass filters. Peripheral filtering increased the time to localize the target by decreasing saccade amplitudes and increasing number and duration of fixations. The use of coarse-grained information in the periphery was limited to color scenes. Central filtering increased the time to verify target identity instead, especially with low-pass filters. We conclude that peripheral vision is critical for object localization and central vision is critical for object identification. Visual guidance during peripheral object localization is dominated by low-frequency color information, whereas high-frequency information, relatively independent of color, is most important for object identification in central vision.
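The gaze-contingent filtering logic can be sketched as follows. This is a simplified stand-in for the experimental stimuli (hard frequency cutoff, circular window, grayscale only), not the filters actually used in these experiments.

```python
import numpy as np

def radial_freq(shape):
    # Radial spatial frequency (cycles per pixel) of each FFT coefficient.
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    return np.sqrt(fx**2 + fy**2)

def filter_scene(img, cutoff=0.05, mode="low"):
    """Low-pass keeps frequencies at or below the cutoff; high-pass keeps
    the rest. The DC component is always kept so mean luminance survives."""
    f = radial_freq(img.shape)
    mask = (f <= cutoff) if mode == "low" else (f > cutoff)
    mask = mask | (f == 0)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * mask))

def gaze_contingent(img, filtered, gaze, radius, central=True):
    """Blend the filtered image inside (central=True) or outside
    (central=False) a circular window around the current gaze position."""
    yy, xx = np.mgrid[:img.shape[0], :img.shape[1]]
    dist = np.sqrt((yy - gaze[0])**2 + (xx - gaze[1])**2)
    window = dist <= radius if central else dist > radius
    return np.where(window, filtered, img)
```

A low-pass `filter_scene` combined with `central=False` approximates blurred peripheral vision; swapping the flags approximates a central low-pass condition.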
Understanding how humans move their eyes is an important part of understanding how the visual system functions. Analyzing eye movements from observations of natural scenes on a computer screen is a step toward understanding human visual behavior in the real world. When analyzing eye-movement data from scene-viewing experiments, the important questions are where (fixation locations), how long (fixation durations), and when (ordering of fixations) participants fixate on an image. By answering these questions, computational models can be developed that predict human scanpaths. Models serve as a tool for understanding the underlying cognitive processes during image viewing, especially the allocation of visual attention.
The goal of this thesis is to provide new contributions to characterize and model human scanpaths on natural scenes. The results from this thesis will help to understand and describe certain systematic eye-movement tendencies, which are mostly independent of the image. One eye-movement tendency I focus on throughout this thesis is the tendency to fixate more in the center of an image than on the outer parts, called the central fixation bias. Another tendency, which I will investigate thoroughly, is the characteristic distribution of angles between successive eye movements.
The results serve to evaluate and improve a previously published model of scanpath generation from our laboratory, the SceneWalk model. Overall, six experiments were conducted for this thesis which led to the following five core results:
i) A spatial inhibition of return can be found in scene-viewing data. This means that locations which have already been fixated are afterwards avoided for a certain time interval (Chapter 2).
ii) The initial fixation position when observing an image has a long-lasting influence of up to five seconds on further scanpath progression (Chapter 2 & 3).
iii) The often described central fixation bias on images depends strongly on the duration of the initial fixation. Long-lasting initial fixations lead to a weaker central fixation bias than short fixations (Chapter 2 & 3).
iv) Human observers adjust their basic eye-movement parameters, like fixation durations and saccade amplitudes, to the visual properties of a target they look for in visual search (Chapter 4).
v) The angle between two adjacent saccades is an indicator for the selectivity of the upcoming saccade target (Chapter 4).
All results emphasize the importance of systematic behavioral eye-movement tendencies and dynamic aspects of human scanpaths in scene viewing.
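Two tendencies discussed in this thesis summary, the central fixation bias and the distribution of angles between successive saccades, can be quantified with simple descriptive measures; the normalization and the angle convention below are choices made here for illustration.

```python
import numpy as np

def central_bias(fix_x, fix_y, width, height):
    """Mean distance of fixations from the image center, normalized by the
    half-diagonal (0 = all fixations at the center, 1 = all in corners)."""
    d = np.hypot(fix_x - width / 2, fix_y - height / 2)
    return d.mean() / np.hypot(width / 2, height / 2)

def turning_angles(fix_x, fix_y):
    """Angle (degrees) between successive saccade vectors; values near 180
    indicate return saccades, values near 0 forward progression."""
    dx, dy = np.diff(fix_x), np.diff(fix_y)
    ang = np.degrees(np.diff(np.arctan2(dy, dx)))
    return (ang + 180) % 360 - 180  # wrap to (-180, 180]
```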
Learning to read in German
(2021)
In the present dissertation, the development of eye-movement behavior and the perceptual span of German beginning readers was investigated in Grades 1 to 3 (Study 1) and longitudinally within a one-year interval (Study 2), as well as in relation to intrinsic and extrinsic reading motivation (Study 3). The presented results are intended to fill two gaps: information on young readers’ eye movements is sparse, and information on German young readers’ perceptual span and its development has been missing entirely. Moreover, reading-motivation data have been scrutinized with respect to reciprocal effects on reading comprehension, but not with respect to more immediate, basic cognitive processing (e.g., word decoding) as indexed by different eye-movement measures. Based on a longitudinal study design, children in Grades 1–3 participated in a moving-window reading experiment with eye-movement recordings in two successive years. All children were participants in a larger longitudinal study on intrapersonal developmental risk factors in childhood and adolescence (PIER study). Motivation data and other psychometric reading data were collected during individual inquiries and tests at school. Data analyses were realized in three separate studies that focused on different but related aspects of reading and perceptual-span development. Study 1 presents the first cross-sectional report on the perceptual span of beginning German readers. The focus was on reading-rate changes in Grades 1 to 3 and on the onset of perceptual-span development and its dependence on basic foveal reading processes. Study 2 is a successor of Study 1, providing the first longitudinal data on the perceptual span in elementary school children. It also includes information on the stability of observed and predicted reading rates and perceptual-span sizes, and introduces a new measure of the perceptual span based on nonlinear mixed-effects models.
Another issue addressed in this study is the longitudinal between-group comparison of slower and faster readers, which serves to detect developmental patterns. Study 3 includes longitudinal reading-motivation data and investigates the relation between different eye-movement measures, including perceptual span, and intrinsic as well as extrinsic reading motivation. In Study 1, a decelerated increase in reading rate was observed between Grades 1 and 3. Grade effects were also found for saccade length, refixation probability, and different fixation-duration measures. With higher grade, mean saccade length increased, whereas refixation probability, first-fixation duration, gaze duration, and total reading time decreased. Perceptual-span development was indicated by an increase in window-size effects with grade level. Grade-level differences with respect to window-size effects were stronger between Grades 2 and 3 than between Grades 1 and 2. These results were replicated longitudinally in Study 2. Again, perceptual-span size changed significantly between Grades 2 and 3, but not between Grades 1 and 2 or Grades 3 and 4. Observed and predicted reading rates were found to be highly stable after first grade, whereas the stability of the perceptual span was only moderate for all grade levels. Group differences between slower and faster readers in Year 1 remained observable in Year 2, showing a pattern of stable achievement differences rather than a compensatory pattern. Between Grades 2 and 3, between-group differences in reading rate even increased, resulting in a Matthew effect. A similar effect was observed for perceptual-span development between Grades 3 and 4. Finally, in Study 3, significant relations between beginning readers’ eye movements and their reading motivation were observed.
In both years of measurement, higher intrinsic reading motivation was related to more skilled eye-movement patterns, as indicated by shorter fixations, longer saccades, and higher reading rates. In Year 2, intrinsic reading motivation was also significantly and negatively correlated with refixation probability. These correlational patterns were confirmed in cross-sectional linear models controlling for grade level and reading amount and including both reading-motivation measures, extrinsic and intrinsic motivation. While there were significant positive relations between intrinsic reading motivation and word decoding as indicated by the eye-movement measures stated above, extrinsic reading motivation only predicted variance in eye movements in Year 2 (significant for fixation durations and reading rate), with a consistently opposite pattern of effects compared to intrinsic reading motivation. Finally, longitudinal effects of Year 1 intrinsic reading motivation on Year 2 word decoding were observed for gaze duration, total reading time, refixation probability, and perceptual span within cross-lagged panel models. These effects were reciprocal, because all eye-movement measures significantly predicted variance in intrinsic reading motivation. Extrinsic reading motivation in Year 1 did not affect any eye-movement measure in Year 2, and vice versa, except for a significant negative relation with perceptual span. In conclusion, the present dissertation demonstrates that the largest gains in reading development, in terms of eye-movement changes, are observable between Grades 1 and 2. Together with the observed pattern of stable differences between slower and faster readers and a widening achievement gap between Grades 2 and 3 for reading rate, these results underline the importance of the first year(s) of formal reading instruction. The development of the perceptual span lags behind, as it is most apparent between Grades 2 and 3.
This suggests that efficient parafoveal processing presupposes a certain degree of foveal reading proficiency (e.g., word decoding). Finally, this dissertation demonstrates that intrinsic reading motivation—but not extrinsic motivation—effectively supports the development of skilled reading.
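The moving-window paradigm used in these studies can be sketched as a masking function; the symmetric window and the 'x' mask below are simplifying assumptions (reading experiments typically use an asymmetric window extending further to the right of fixation).

```python
def moving_window(text, fixation_index, window):
    """Moving-window display: letters within `window` characters of the
    fixated character stay visible; all other letters are masked with 'x'
    (spaces preserved to keep word boundaries)."""
    out = []
    for i, ch in enumerate(text):
        if ch == " " or abs(i - fixation_index) <= window:
            out.append(ch)
        else:
            out.append("x")
    return "".join(out)
```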
Dynamical models of cognition play an increasingly important role in driving theoretical and experimental research in psychology. Therefore, parameter estimation, model analysis, and comparison of dynamical models are of essential importance. In this article, we propose a maximum-likelihood approach for model analysis in a fully dynamical framework that includes time-ordered experimental data. Our methods can be applied to dynamical models for the prediction of discrete behavior (e.g., movement onsets); in particular, we use a dynamical model of saccade generation in scene viewing as a case study for our approach. For this model, the likelihood function can be computed directly by numerical simulation, which enables more efficient parameter estimation, including Bayesian inference, to obtain reliable estimates and corresponding credible intervals. With hierarchical models, inference is possible even for individual observers. Furthermore, our likelihood approach can be used to compare different models. In our example, the dynamical framework is shown to outperform nondynamical statistical models. Additionally, the likelihood-based evaluation differentiates model variants that produced indistinguishable predictions on hitherto-used statistics. Our results indicate that the likelihood approach is a promising framework for dynamical cognitive models.
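The likelihood-by-simulation idea can be sketched generically. The function below is schematic (not the saccade model from the article); `model_step`, the smoothing constant `eps`, and the biased-coin demo are assumptions made here.

```python
import numpy as np

def simulated_likelihood(model_step, data, n_sims=2000, eps=1e-6, seed=0):
    """Approximate the log-likelihood of time-ordered discrete observations
    by forward simulation: at each step, the probability of the observed
    outcome is estimated as its relative frequency across simulated draws,
    conditioning future steps on the observed history."""
    rng = np.random.default_rng(seed)
    log_lik, history = 0.0, []
    for obs in data:
        sims = np.array([model_step(rng, history) for _ in range(n_sims)])
        p = (np.sum(sims == obs) + eps) / (n_sims + eps)  # smoothed frequency
        log_lik += np.log(p)
        history.append(obs)
    return log_lik

def coin_model(p):
    """Memoryless biased-coin 'model' used purely for demonstration."""
    return lambda rng, history: int(rng.random() < p)
```

Because the likelihood is built step by step from simulations, the same routine supports parameter estimation by maximizing `log_lik` or Bayesian inference with it as the log-likelihood term.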
Linked linear mixed models
(2016)
The complexity of eye-movement control during reading allows measurement of many dependent variables, the most prominent ones being fixation durations and their locations in words. In current practice, either variable may serve as dependent variable or covariate for the other in linear mixed models (LMMs) featuring also psycholinguistic covariates of word recognition and sentence comprehension. Rather than analyzing fixation location and duration with separate LMMs, we propose linking the two according to their sequential dependency. Specifically, we include predicted fixation location (estimated in the first LMM from psycholinguistic covariates) and its associated residual fixation location as covariates in the second, fixation-duration LMM. This linked LMM affords a distinction between direct and indirect effects (mediated through fixation location) of psycholinguistic covariates on fixation durations. Results confirm the robustness of distributed processing in the perceptual span. They also offer a resolution of the paradox of the inverted optimal viewing position (IOVP) effect (i.e., longer fixation durations in the center than at the beginning and end of words) although the opposite (i.e., an OVP effect) is predicted from default assumptions of psycholinguistic processing efficiency: The IOVP effect in fixation durations is due to the residual fixation-location covariate, presumably driven primarily by saccadic error, and the OVP effect (at least the left part of it) is uncovered with the predicted fixation-location covariate, capturing the indirect effects of psycholinguistic covariates. We expect that linked LMMs will be useful for the analysis of other dynamically related multiple outcomes, a conundrum of most psychonomic research.
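The linking scheme can be illustrated in miniature with ordinary least squares standing in for the LMMs; this is a deliberate simplification (a real analysis would use mixed models with crossed random effects for subjects and words).

```python
import numpy as np

def ols(X, y):
    # Ordinary least squares via the pseudo-inverse.
    return np.linalg.pinv(X) @ y

def linked_fit(covariates, location, duration):
    """Two-stage 'linked' fit: stage 1 predicts fixation location from the
    covariates; stage 2 enters the predicted location and its residual as
    separate covariates in the duration model, separating indirect
    (mediated) from residual (e.g., saccadic-error) location effects."""
    X1 = np.column_stack([np.ones(len(location)), covariates])
    b1 = ols(X1, location)
    pred = X1 @ b1
    resid = location - pred
    X2 = np.column_stack([np.ones(len(duration)), pred, resid])
    b2 = ols(X2, duration)
    return b1, b2  # b2[1]: predicted-location effect; b2[2]: residual effect
```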
During visual fixation, the eye generates microsaccades and slower components of fixational eye movements that are part of the visual processing strategy in humans. Here, we show that ongoing heartbeat is coupled to temporal rate variations in the generation of microsaccades. Using coregistration of eye recording and ECG in humans, we tested the hypothesis that microsaccade onsets are coupled to the relative phase of the R-R intervals in heartbeats. We observed significantly more microsaccades during the early phase after the R peak in the ECG. This form of coupling between heartbeat and eye movements was substantiated by the additional finding of a coupling between heart phase and motion activity in slow fixational eye movements; i.e., retinal image slip caused by physiological drift. Our findings therefore demonstrate a coupling of the oculomotor system and ongoing heartbeat, which provides further evidence for bodily influences on visuomotor functioning.
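The central quantity in such an analysis, the relative phase of an event within its enclosing R-R interval, can be computed as in the generic sketch below (not the authors' pipeline; events outside the recorded R-peak range are simply dropped).

```python
import numpy as np

def rr_phase(event_times, r_peaks):
    """Relative phase (0..1) of each event within its enclosing R-R
    interval: 0 = at the preceding R peak, so values near 0 correspond to
    the early phase after the R peak."""
    r_peaks = np.sort(np.asarray(r_peaks))
    idx = np.searchsorted(r_peaks, event_times, side="right") - 1
    valid = (idx >= 0) & (idx < len(r_peaks) - 1)
    t, i = np.asarray(event_times)[valid], idx[valid]
    return (t - r_peaks[i]) / (r_peaks[i + 1] - r_peaks[i])
```

Binning these phases (e.g., into deciles of the cardiac cycle) and counting microsaccade onsets per bin gives the rate-modulation measure described above.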
Whenever eye movements are measured, a central part of the analysis concerns where subjects fixate and why they fixate where they do. To a first approximation, a set of fixations can be viewed as a set of points in space; this implies that fixations are spatial data and that the analysis of fixation locations can be beneficially thought of as a spatial-statistics problem. We argue that thinking of fixation locations as arising from point processes is a very fruitful framework for eye-movement data, helping turn qualitative questions into quantitative ones. We provide a tutorial introduction to some of the main ideas of the field of spatial statistics, focusing especially on spatial Poisson processes. We show how point processes help relate image properties to fixation locations; in particular, we show how point processes naturally express the idea that the predictability of image features for fixations may vary from one image to another. We review other methods of analysis used in the literature, show how they relate to point-process theory, and argue that thinking in terms of point processes substantially extends the range of analyses that can be performed and clarifies their interpretation.
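As a concrete instance of the point-process view, an inhomogeneous spatial Poisson process can be sampled by thinning; the center-peaked Gaussian intensity used in the usage check is a made-up stand-in for an empirical fixation density, and `lam_max` must upper-bound the intensity everywhere.

```python
import numpy as np

def sample_inhomogeneous_poisson(intensity, width, height, lam_max, rng):
    """Sample fixation-like points from an inhomogeneous spatial Poisson
    process by thinning: draw from a homogeneous process at rate lam_max,
    then keep each point with probability intensity(x, y) / lam_max."""
    n = rng.poisson(lam_max * width * height)
    xs = rng.uniform(0, width, n)
    ys = rng.uniform(0, height, n)
    keep = rng.uniform(0, lam_max, n) < intensity(xs, ys)
    return xs[keep], ys[keep]
```

Thinning is exact as long as `lam_max` dominates the intensity everywhere; a too-small bound silently distorts the sample.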