Parafoveal preview benefit (PB) is an implicit measure of lexical activation in reading. In English, PB has been demonstrated for orthographically and phonologically related information, but not for semantically related information. In contrast, semantic PB is obtained in German and Chinese. We propose that these language differences reveal differential resource demands and timing of phonological and semantic decoding in different orthographic systems.
The present study explores the perceptual span, that is, the physical extent of the area from which useful visual information is obtained during a single fixation, during oral reading of Chinese sentences. Characters outside a window of legible text were replaced by visually similar characters. Results show that the influence of window size on the perceptual span was consistent across different fixation and oculomotor measures. To maintain normal reading behavior when reading aloud, it was necessary to have information provided from three characters to the right of the fixation. Together with findings from previous research, our findings suggest that the physical size of the perceptual span is smaller when reading aloud than in silent reading. This is in agreement with previous studies in English, suggesting that the mechanisms causing the reduced span in oral reading have a common base that generalizes across languages and writing systems.
The perceptual span describes the size of the visual field from which information is obtained during a fixation in reading. Its size depends on characteristics of writing system and reader, but, according to the foveal load hypothesis, it is also adjusted dynamically as a function of lexical processing difficulty. Using the moving window paradigm to manipulate the amount of preview, here we directly test whether the perceptual span shrinks as foveal word difficulty increases. We computed the momentary size of the span from word-based eye-movement measures as a function of foveal word frequency, allowing us to separately describe the perceptual span for information affecting spatial saccade targeting and temporal saccade execution. First fixation duration and gaze duration on the upcoming (parafoveal) word N + 1 were significantly shorter when the current (foveal) word N was more frequent. We show that the word frequency effect is modulated by window size. Fixation durations on word N + 1 decreased with high-frequency words N, but only for large windows, that is, when sufficient parafoveal preview was available. This provides strong support for the foveal load hypothesis. To investigate the development of the foveal load effect, we analyzed data from three waves of a longitudinal study on the perceptual span with German children in Grades 1 to 6. Perceptual span adjustment emerged early in development at around second grade and remained stable in later grades. We conclude that the local modulation of the perceptual span indicates a general cognitive process, perhaps an attentional gradient with rapid readjustment.
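The moving window paradigm used in the studies above can be illustrated with a minimal sketch. This is a toy illustration only, not any study's actual display code: the function name, the symmetric window, and the mask character are assumptions (published experiments typically use asymmetric windows and stimulus-matched masks).

```python
def moving_window(sentence: str, fixation: int, window: int, mask: str = "x") -> str:
    """Render one fixation of a gaze-contingent moving-window trial:
    characters within `window` positions of the fixated character stay
    legible; all others are replaced by the mask (spaces preserved)."""
    return "".join(
        ch if abs(i - fixation) <= window or ch == " " else mask
        for i, ch in enumerate(sentence)
    )

# Fixation on character 10 with a window of 3 characters to each side:
display = moving_window("the perceptual span limits preview", 10, 3)
```

In a real experiment the display is re-rendered on every eye-tracker sample, so the legible region moves with the reader's gaze.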
The visual number world
(2018)
In the domain of language research, the simultaneous presentation of a visual scene and its auditory description (i.e., the visual world paradigm) has been used to reveal the timing of mental mechanisms. Here we apply this rationale to the domain of numerical cognition in order to explore the differences between fast and slow arithmetic performance, and to further study the role of spatial-numerical associations during mental arithmetic. We presented 30 healthy adults simultaneously with visual displays containing four numbers and with auditory addition and subtraction problems. Analysis of eye movements revealed that participants look spontaneously at the numbers they currently process (operands, solution). Faster performance was characterized by shorter latencies prior to fixating the relevant numbers and fewer revisits to the first operand while computing the solution. These signatures of superior task performance were more pronounced for addition and visual numbers arranged in ascending order, and for subtraction and numbers arranged in descending order (compared to the opposite pairings). Our results show that the visual number world paradigm provides on-line access to the mind during mental arithmetic, is able to capture variability in arithmetic performance, and is sensitive to visual layout manipulations that are otherwise not reflected in response time measurements.
What is the time course of activation of phonological information in logographic writing systems like Chinese, in which meaning is prioritized over sound? We used a manipulation of phonological regularity to examine foveal and parafoveal phonological processing of Chinese phonograms at lexical and sublexical levels during Chinese sentence reading in 2 eye-tracking experiments. In Experiment 1, using an error disruption task during silent reading, we observed foveal lexical phonological activation in second-pass reading. In Experiment 2, using the boundary paradigm, both parafoveal lexical and sublexical phonological preview benefits were found in first-fixation duration in oral reading, whereas only lexical phonological benefits were found in gaze duration during silent reading. Thus, phonological information had earlier and more pronounced parafoveal effects in oral reading, and these extended to sublexical processing. These results are compatible with the view that oral reading prioritizes parafoveal phonological processing in Chinese.
In two eye-tracking experiments, we investigated the processing of information about phonological consistency of Chinese phonograms during sentence reading. In Experiment 1, we adopted the error disruption paradigm in silent reading and found significant effects of phonological consistency and homophony in the foveal vision, but only in a late processing stage. Adding oral reading to Experiment 2, we found both effects shifted to earlier indices of parafoveal processing. Specifically, low-consistency characters led to a better homophonic foveal recovery effect in Experiment 1 and stronger homophonic preview benefits in Experiment 2. These findings suggest that phonological consistency information can be obtained during sentence reading, and that compared to low-consistency previews, high-consistency previews are processed faster, which leads to greater interference with the recognition of target characters.
This article presents results of an exploratory investigation combining multimodal cohesion analysis and eye-tracking studies. Multimodal cohesion, as a tool of multimodal discourse analysis, goes beyond linguistic cohesive mechanisms to enable the construction of cross-modal discourse structures that systematically relate technical details of audio, visual and verbal modalities. Patterns of multimodal cohesion from these discourse structures were used to design eye-tracking experiments and questionnaires in order to empirically investigate how auditory and visual cohesive cues affect attention and comprehension. We argue that the cross-modal structures of cohesion revealed by our method offer a strong methodology for addressing empirical questions concerning viewers' comprehension of narrative settings and the comparative salience of visual, verbal and audio cues. Analyses are presented of the beginning of Hitchcock's The Birds (1963) and a sketch from Monty Python filmed in 1971. Our approach balances the narrative-based issue of how narrative elements in film guide meaning interpretation and the recipient-based question of where a film viewer's attention is directed during viewing and how this affects comprehension.
Hulleman & Olivers' (H&O's) model introduces variation of the functional visual field (FVF) for explaining visual search behavior. Our research shows how the FVF can be studied using gaze-contingent displays and how FVF variation can be implemented in models of gaze control. Contrary to H&O, we believe that fixation duration is an important factor when modeling visual search behavior.
Control of fixation duration during scene viewing by interaction of foveal and peripheral processing
(2013)
Processing in our visual system is functionally segregated, with the fovea specialized in processing fine detail (high spatial frequencies) for object identification, and the periphery in processing coarse information (low frequencies) for spatial orienting and saccade target selection. Here we investigate the consequences of this functional segregation for the control of fixation durations during scene viewing. Using gaze-contingent displays, we applied high-pass or low-pass filters to either the central or the peripheral visual field and compared eye-movement patterns with an unfiltered control condition. In contrast with predictions from functional segregation, fixation durations were unaffected when the critical information for vision was strongly attenuated (foveal low-pass and peripheral high-pass filtering); fixation durations increased, however, when useful information was left mostly intact by the filter (foveal high-pass and peripheral low-pass filtering). These patterns of results are difficult to explain under the assumption that fixation durations are controlled by foveal processing difficulty. As an alternative explanation, we developed the hypothesis that the interaction of foveal and peripheral processing controls fixation duration. To investigate the viability of this explanation, we implemented a computational model with two compartments, approximating spatial aspects of processing by foveal and peripheral activations that change according to a small set of dynamical rules. The model reproduced distributions of fixation durations from all experimental conditions by varying a few parameters that were affected by the specific filtering conditions.
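The two-compartment idea can be caricatured in a few lines of code. This is a deliberately simplified toy sketch, not the authors' published model: the function name, the linear accumulation rule, and all rate values are made-up assumptions. The point it illustrates is only that a fixation ends when pooled foveal and peripheral activation reaches a threshold, so slowing either compartment (as a filter might) prolongs the fixation.

```python
def fixation_duration(foveal_rate: float, peripheral_rate: float,
                      threshold: float = 1.0, dt: float = 1.0) -> float:
    """Toy two-compartment accumulator: foveal and peripheral activations
    grow at fixed rates (threshold units per millisecond, made-up values);
    the fixation ends when their sum crosses the threshold."""
    t = foveal = peripheral = 0.0
    while foveal + peripheral < threshold:
        foveal += foveal_rate * dt
        peripheral += peripheral_rate * dt
        t += dt
    return t

# A filter that slows one compartment lengthens the simulated fixation:
unfiltered = fixation_duration(0.004, 0.002)
peripheral_filtered = fixation_duration(0.004, 0.001)
```

The actual model in the article uses a richer set of dynamical rules and spatially resolved activations; this sketch only captures the qualitative interaction.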
Visuospatial attention and gaze control depend on the interaction of foveal and peripheral processing. The foveal and peripheral regions of the visual field are differentially sensitive to parts of the spatial frequency spectrum. In two experiments, we investigated how the selective attenuation of spatial frequencies in the central or the peripheral visual field affects eye-movement behavior during real-world scene viewing. Gaze-contingent low-pass or high-pass filters with varying filter levels (i.e., cutoff frequencies; Experiment 1) or filter sizes (Experiment 2) were applied. Compared to unfiltered control conditions, mean fixation durations increased most with central high-pass and peripheral low-pass filtering. Increasing filter size prolonged fixation durations with peripheral filtering, but not with central filtering. Increasing filter level prolonged fixation durations with low-pass filtering, but not with high-pass filtering. These effects indicate that fixation durations are not always longer under conditions of increased processing difficulty. Saccade amplitudes largely adapted to processing difficulty: amplitudes increased with central filtering and decreased with peripheral filtering; the effects strengthened with increasing filter size and filter level. In addition, we observed a trade-off between saccade timing and saccadic selection, since saccade amplitudes were modulated when fixation durations were unaffected by the experimental manipulations. We conclude that interactions of perception and gaze control are highly sensitive to experimental manipulations of input images as long as the residual information can still be accessed for gaze control.
Coupling of attention and saccades when viewing scenes with central and peripheral degradation
(2016)
Degrading real-world scenes in the central or the peripheral visual field yields a characteristic pattern: Mean saccade amplitudes increase with central and decrease with peripheral degradation. Does this pattern reflect corresponding modulations of selective attention? If so, the observed saccade amplitude pattern should reflect more focused attention in the central region with peripheral degradation and an attentional bias toward the periphery with central degradation. To investigate this hypothesis, we measured the detectability of peripheral (Experiment 1) or central targets (Experiment 2) during scene viewing when low or high spatial frequencies were gaze-contingently filtered in the central or the peripheral visual field. Relative to an unfiltered control condition, peripheral filtering induced a decrease of the detection probability for peripheral but not for central targets (tunnel vision). Central filtering decreased the detectability of central but not of peripheral targets. Additional post hoc analyses are compatible with the interpretation that saccade amplitudes and direction are computed in partial independence. Our experimental results indicate that task-induced modulations of saccade amplitudes reflect attentional modulations.
When studying how people search for objects in scenes, the inhomogeneity of the visual field is often ignored. Due to physiological limitations, peripheral vision is blurred and mainly uses coarse-grained information (i.e., low spatial frequencies) for selecting saccade targets, whereas high-acuity central vision uses fine-grained information (i.e., high spatial frequencies) for analysis of details. Here we investigated how spatial frequencies and color affect object search in real-world scenes. Using gaze-contingent filters, we attenuated high or low frequencies in central or peripheral vision while viewers searched color or grayscale scenes. Results showed that peripheral filters and central high-pass filters hardly affected search accuracy, whereas accuracy dropped drastically with central low-pass filters. Peripheral filtering increased the time to localize the target by decreasing saccade amplitudes and increasing number and duration of fixations. The use of coarse-grained information in the periphery was limited to color scenes. Central filtering increased the time to verify target identity instead, especially with low-pass filters. We conclude that peripheral vision is critical for object localization and central vision is critical for object identification. Visual guidance during peripheral object localization is dominated by low-frequency color information, whereas high-frequency information, relatively independent of color, is most important for object identification in central vision.
The serial reaction time task (SRTT) is a standard task used to investigate incidental sequence learning. Whereas incidental learning of motor sequences is well-established, few and disputed results support learning of perceptual sequences. Here we adapt a motion coherence discrimination task (Newsome & Pare, 1988) to the sequence learning paradigm. The new task has 2 advantages: (a) the stimulus is presented at fixation, thereby obviating overt eye movements, and (b) by varying coherence a perceptual threshold measure is available in addition to the performance measure of RT. Results from 3 experiments show that action relevance of the sequence is necessary for sequence learning to occur, that the amount of sequence knowledge varies with the ease of encoding the motor sequence, and that sequence knowledge, once acquired, has the ability to modify perceptual thresholds.
The defocused attention hypothesis (von Hecker and Meiser, 2005) assumes that negative mood broadens attention, whereas the analytical rumination hypothesis (Andrews and Thompson, 2009) suggests a narrowing of the attentional focus with depression. We tested these conflicting hypotheses by directly measuring the perceptual span in groups of dysphoric and control subjects, using eye tracking. In the moving window paradigm, information outside of a variable-width gaze-contingent window was masked during reading of sentences. In measures of sentence reading time and mean fixation duration, dysphoric subjects were more pronouncedly affected than controls by a reduced window size. This difference supports the defocused attention hypothesis and seems hard to reconcile with a narrowing of attentional focus.
The perceptual span is a standard measure of parafoveal processing, which is considered highly important for efficient reading. Is the perceptual span a stable indicator of reading performance? What drives its development? Do initially slower and faster readers converge or diverge over development? Here we present the first longitudinal data on the development of the perceptual span in elementary school children. Using the moving window technique, eye movements of 127 German children in three age groups (Grades 1, 2, and 3 in Year 1) were recorded at two time points (T1 and T2) 1 year apart. Introducing a new measure of the perceptual span, nonlinear mixed-effects modeling was used to separate window size effects from asymptotic reading performance. Cross-sectional differences were well replicated longitudinally. Asymptotic reading rate increased monotonically with grade, but in a decelerating fashion. A significant change in the perceptual span was observed only between Grades 2 and 3. Together with results from a cross-lagged panel model, this suggests that the perceptual span increases as a consequence of relatively well established word reading. Stabilities of observed and predicted reading rates were high after Grade 1, whereas the perceptual span was only moderately stable for all grades. Comparing faster and slower readers as assessed at T1, in general, a pattern of stable between-group differences emerged rather than a compensatory pattern; second and third graders even showed a Matthew effect in reading rate and the perceptual span, respectively.
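Separating window-size effects from asymptotic performance presupposes a saturating curve relating reading rate to window size. One plausible functional form for such a curve is an exponential approach to an asymptote; this is an illustrative assumption for exposition, not the paper's actual model, and the parameter names and values are made up.

```python
import math

def reading_rate(window: float, asymptote: float, span: float) -> float:
    """Illustrative saturating curve: reading rate grows with window size
    and levels off at `asymptote`; `span` sets how large a window is needed
    to approach it (a larger perceptual span implies slower saturation)."""
    return asymptote * (1.0 - math.exp(-window / span))

# A hypothetical reader with a 200 wpm asymptote and span parameter 3:
rates = [round(reading_rate(w, 200.0, 3.0)) for w in (1, 3, 6, 12)]
```

In a mixed-effects fit, `asymptote` and `span` would each carry per-child random effects, which is what lets the two constructs (reading skill vs. perceptual span) develop on separate trajectories.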