Refine
Year of publication
Document Type
- Article (37)
- Postprint (9)
- Other (2)
- Conference Proceeding (1)
- Doctoral Thesis (1)
- Preprint (1)
Language
- English (51)
Keywords
- eye movements (9)
- spatial frequencies (8)
- Eye movements (6)
- reading (6)
- scene viewing (6)
- Perceptual span (4)
- attention (4)
- gaze-contingent displays (4)
- color (3)
- embodied cognition (3)
Institute
The perceptual span describes the size of the visual field from which information is obtained during a fixation in reading. Its size depends on characteristics of writing system and reader, but, according to the foveal load hypothesis, it is also adjusted dynamically as a function of lexical processing difficulty. Using the moving window paradigm to manipulate the amount of preview, here we directly test whether the perceptual span shrinks as foveal word difficulty increases. We computed the momentary size of the span from word-based eye-movement measures as a function of foveal word frequency, allowing us to separately describe the perceptual span for information affecting spatial saccade targeting and temporal saccade execution. First fixation duration and gaze duration on the upcoming (parafoveal) word N + 1 were significantly shorter when the current (foveal) word N was more frequent. We show that the word frequency effect is modulated by window size. Fixation durations on word N + 1 decreased with high-frequency words N, but only for large windows, that is, when sufficient parafoveal preview was available. This provides strong support for the foveal load hypothesis. To investigate the development of the foveal load effect, we analyzed data from three waves of a longitudinal study on the perceptual span with German children in Grades 1 to 6. Perceptual span adjustment emerged early in development at around second grade and remained stable in later grades. We conclude that the local modulation of the perceptual span indicates a general cognitive process, perhaps an attentional gradient with rapid readjustment.
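The moving window paradigm mentioned above can be illustrated with a minimal sketch (the function name and masking character are hypothetical; real experiments update the display gaze-contingently, within a saccade):

```python
def moving_window(text: str, fixation: int, half_window: int) -> str:
    """Mask every character farther than half_window characters from the
    fixated index; spaces are preserved so word boundaries stay visible,
    as is typical in moving-window studies."""
    return "".join(
        ch if abs(i - fixation) <= half_window or ch == " " else "x"
        for i, ch in enumerate(text)
    )

# With fixation on the 'i' of "quick" (index 6) and a 2-character window,
# only "quick" remains legible:
print(moving_window("the quick brown fox", fixation=6, half_window=2))
# prints "xxx quick xxxxx xxx"
```

Varying `half_window` across trials and finding the smallest window at which reading is unimpaired is, in essence, how the span is estimated.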
This article presents results of an exploratory investigation combining multimodal cohesion analysis and eye-tracking studies. Multimodal cohesion, as a tool of multimodal discourse analysis, goes beyond linguistic cohesive mechanisms to enable the construction of cross-modal discourse structures that systematically relate technical details of audio, visual and verbal modalities. Patterns of multimodal cohesion from these discourse structures were used to design eye-tracking experiments and questionnaires in order to empirically investigate how auditory and visual cohesive cues affect attention and comprehension. We argue that the cross-modal structures of cohesion revealed by our method offer a strong methodology for addressing empirical questions concerning viewers' comprehension of narrative settings and the comparative salience of visual, verbal and audio cues. Analyses are presented of the beginning of Hitchcock's The Birds (1963) and a sketch from Monty Python filmed in 1971. Our approach balances the narrative-based issue of how narrative elements in film guide meaning interpretation and the recipient-based question of where a film viewer's attention is directed during viewing and how this affects comprehension.
Commentary
(2020)
In two eye-tracking experiments, we investigated the processing of information about phonological consistency of Chinese phonograms during sentence reading. In Experiment 1, we adopted the error disruption paradigm in silent reading and found significant effects of phonological consistency and homophony in the foveal vision, but only in a late processing stage. Adding oral reading to Experiment 2, we found both effects shifted to earlier indices of parafoveal processing. Specifically, low-consistency characters led to a better homophonic foveal recovery effect in Experiment 1 and stronger homophonic preview benefits in Experiment 2. These findings suggest that phonological consistency information can be obtained during sentence reading, and compared to the low-consistency previews the high-consistency previews are processed faster, which leads to greater interference to the recognition of target characters.
"Left" and "right" coordinates control our spatial behavior and even influence abstract thoughts. For number concepts, horizontal spatial-numerical associations (SNAs) have been widely documented: we associate few with left and many with right. Importantly, increments are universally coded on the right side even in preverbal humans and nonhuman animals, thus questioning the fundamental role of directional cultural habits, such as reading or finger counting. Here, we propose a biological, nonnumerical mechanism for the origin of SNAs on the basis of asymmetric tuning of animal brains for different spatial frequencies (SFs). The resulting selective visual processing predicts both universal SNAs and their context-dependence. We support our proposal by analyzing the stimuli used to document SNAs in newborns for their SF content. As predicted, the SFs contained in visual patterns with few versus many elements preferentially engage right versus left brain hemispheres, respectively, thus predicting left- versus rightward behavioral biases. Our "brain's asymmetric frequency tuning" hypothesis explains the perceptual origin of horizontal SNAs for nonsymbolic visual numerosities and might be extensible to the auditory domain.
To construct a coherent multi-modal percept, vertebrate brains extract low-level features (such as spatial and temporal frequencies) from incoming sensory signals. However, because frequency processing is lateralized with the right hemisphere favouring low frequencies while the left favours higher frequencies, this introduces asymmetries between the hemispheres. Here, we describe how this lateralization shapes the development of several cognitive domains, ranging from visuo-spatial and numerical cognition to language, social cognition, and even aesthetic appreciation, and leads to the emergence of asymmetries in behaviour. We discuss the neuropsychological and educational implications of these emergent asymmetries and suggest future research approaches.
When studying how people search for objects in scenes, the inhomogeneity of the visual field is often ignored. Due to physiological limitations, peripheral vision is blurred and mainly uses coarse-grained information (i.e., low spatial frequencies) for selecting saccade targets, whereas high-acuity central vision uses fine-grained information (i.e., high spatial frequencies) for analysis of details. Here we investigated how spatial frequencies and color affect object search in real-world scenes. Using gaze-contingent filters, we attenuated high or low frequencies in central or peripheral vision while viewers searched color or grayscale scenes. Results showed that peripheral filters and central high-pass filters hardly affected search accuracy, whereas accuracy dropped drastically with central low-pass filters. Peripheral filtering increased the time to localize the target by decreasing saccade amplitudes and increasing number and duration of fixations. The use of coarse-grained information in the periphery was limited to color scenes. Central filtering increased the time to verify target identity instead, especially with low-pass filters. We conclude that peripheral vision is critical for object localization and central vision is critical for object identification. Visual guidance during peripheral object localization is dominated by low-frequency color information, whereas high-frequency information, relatively independent of color, is most important for object identification in central vision.
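The gaze-contingent peripheral low-pass filtering described above can be sketched in miniature (a toy pure-Python version on a grayscale grid; the function names are illustrative, and actual experiments apply calibrated Gaussian filters to full-color scenes in real time):

```python
def box_blur(img, x, y):
    """Mean of the 3x3 neighbourhood (clamped at the borders) -- a crude
    stand-in for the low-pass (Gaussian) filtering used in such studies."""
    h, w = len(img), len(img[0])
    vals = [img[j][i]
            for j in range(max(0, y - 1), min(h, y + 2))
            for i in range(max(0, x - 1), min(w, x + 2))]
    return sum(vals) / len(vals)

def peripheral_lowpass(img, gaze_x, gaze_y, radius):
    """Leave pixels within `radius` of gaze untouched (sharp central vision)
    and blur everything outside it (degraded peripheral vision)."""
    return [
        [img[y][x]
         if (x - gaze_x) ** 2 + (y - gaze_y) ** 2 <= radius ** 2
         else box_blur(img, x, y)
         for x in range(len(img[0]))]
        for y in range(len(img))
    ]
```

Swapping the condition (blurring inside the radius instead of outside) yields the central-filter conditions; re-rendering on every gaze sample makes the manipulation gaze-contingent.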
How is semantic information in the mental lexicon accessed and selected during reading? Readers process information of both the foveal and parafoveal words. Recent eye-tracking studies hint at bi-phasic lexical activation dynamics, demonstrating that semantically related parafoveal previews can either facilitate, or interfere with lexical processing of target words in comparison to unrelated previews, with the size and direction of the effect depending on exposure time to parafoveal previews. However, evidence to date is only correlational, because exposure time was determined by participants' pre-target fixation durations. Here we experimentally controlled parafoveal preview exposure duration using a combination of the gaze-contingent fast-priming and boundary paradigms. We manipulated preview duration and examined the time course of parafoveal semantic activation during the oral reading of Chinese sentences in three experiments. Semantic previews led to faster lexical access of target words than unrelated previews only when the previews were presented briefly (80 ms in Experiments 1 and 3). Longer exposure time (100 ms or 150 ms) eliminated semantic preview effects, and full preview without duration limit resulted in preview cost, i.e., a reversal of preview benefit. Our results indicate that high-level semantic information can be obtained from parafoveal words and the size and direction of the parafoveal semantic effect depends on the level of lexical activation.
What is the time course of activation of phonological information in logographic writing systems like Chinese, in which meaning is prioritized over sound? We used a manipulation of phonological regularity to examine foveal and parafoveal phonological processing of Chinese phonograms at lexical and sublexical levels during Chinese sentence reading in 2 eye-tracking experiments. In Experiment 1, using an error disruption task during silent reading, we observed foveal lexical phonological activation in second-pass reading. In Experiment 2, using the boundary paradigm, both parafoveal lexical and sublexical phonological preview benefits were found in first-fixation duration in oral reading, whereas only lexical phonological benefits were found in gaze duration during silent reading. Thus, phonological information had earlier and more pronounced parafoveal effects in oral reading, and these extended to sublexical processing. These results are compatible with the view that oral reading prioritizes parafoveal phonological processing in Chinese.
The visual number world
(2018)
In the domain of language research, the simultaneous presentation of a visual scene and its auditory description (i.e., the visual world paradigm) has been used to reveal the timing of mental mechanisms. Here we apply this rationale to the domain of numerical cognition in order to explore the differences between fast and slow arithmetic performance, and to further study the role of spatial-numerical associations during mental arithmetic. We presented 30 healthy adults simultaneously with visual displays containing four numbers and with auditory addition and subtraction problems. Analysis of eye movements revealed that participants look spontaneously at the numbers they currently process (operands, solution). Faster performance was characterized by shorter latencies prior to fixating the relevant numbers and fewer revisits to the first operand while computing the solution. These signatures of superior task performance were more pronounced for addition and visual numbers arranged in ascending order, and for subtraction and numbers arranged in descending order (compared to the opposite pairings). Our results show that the visual number world-paradigm provides on-line access to the mind during mental arithmetic, is able to capture variability in arithmetic performance, and is sensitive to visual layout manipulations that are otherwise not reflected in response time measurements.
The present study explores the perceptual span, that is, the physical extent of the area from which useful visual information is obtained during a single fixation, during oral reading of Chinese sentences. Characters outside a window of legible text were replaced by visually similar characters. Results show that the influence of window size on the perceptual span was consistent across different fixation and oculomotor measures. To maintain normal reading behavior when reading aloud, it was necessary to have information provided from three characters to the right of the fixation. Together with findings from previous research, our findings suggest that the physical size of the perceptual span is smaller when reading aloud than in silent reading. This is in agreement with previous studies in English, suggesting that the mechanisms causing the reduced span in oral reading have a common base that generalizes across languages and writing systems.
Hulleman & Olivers' (H&O's) model introduces variation of the functional visual field (FVF) for explaining visual search behavior. Our research shows how the FVF can be studied using gaze-contingent displays and how FVF variation can be implemented in models of gaze control. Contrary to H&O, we believe that fixation duration is an important factor when modeling visual search behavior.
The perceptual span is a standard measure of parafoveal processing, which is considered highly important for efficient reading. Is the perceptual span a stable indicator of reading performance? What drives its development? Do initially slower and faster readers converge or diverge over development? Here we present the first longitudinal data on the development of the perceptual span in elementary school children. Using the moving window technique, eye movements of 127 German children in three age groups (Grades 1, 2, and 3 in Year 1) were recorded at two time points (T1 and T2) 1 year apart. Introducing a new measure of the perceptual span, nonlinear mixed-effects modeling was used to separate window size effects from asymptotic reading performance. Cross-sectional differences were well replicated longitudinally. Asymptotic reading rate increased monotonically with grade, but in a decelerating fashion. A significant change in the perceptual span was observed only between Grades 2 and 3. Together with results from a cross-lagged panel model, this suggests that the perceptual span increases as a consequence of relatively well established word reading. Stabilities of observed and predicted reading rates were high after Grade 1, whereas the perceptual span was only moderately stable for all grades. Comparing faster and slower readers as assessed at T1, in general, a pattern of stable between-group differences emerged rather than a compensatory pattern; second and third graders even showed a Matthew effect in reading rate and the perceptual span, respectively.
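The separation of window-size effects from asymptotic reading performance described above relies on a nonlinear saturating curve. A minimal sketch, assuming exponential saturation (the exact model form and parameter names here are illustrative, not the authors' specification):

```python
import math

def reading_rate(window, r_max, span):
    """Asymptotic model: reading rate rises with window size and saturates
    at r_max. A larger `span` parameter means the asymptote is approached
    more slowly, i.e. more parafoveal preview is needed for normal reading."""
    return r_max * (1.0 - math.exp(-window / span))

# Rate approaches the asymptote once the window clearly exceeds the span:
for w in (1, 3, 9, 27):
    print(f"window={w:2d} letters -> {reading_rate(w, r_max=200, span=3):6.1f} wpm")
```

In a mixed-effects setting, `r_max` and `span` would each receive per-child random effects, which is what allows asymptotic rate and span to develop independently across grades.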
Coupling of attention and saccades when viewing scenes with central and peripheral degradation
(2016)
Degrading real-world scenes in the central or the peripheral visual field yields a characteristic pattern: Mean saccade amplitudes increase with central and decrease with peripheral degradation. Does this pattern reflect corresponding modulations of selective attention? If so, the observed saccade amplitude pattern should reflect more focused attention in the central region with peripheral degradation and an attentional bias toward the periphery with central degradation. To investigate this hypothesis, we measured the detectability of peripheral (Experiment 1) or central targets (Experiment 2) during scene viewing when low or high spatial frequencies were gaze-contingently filtered in the central or the peripheral visual field. Relative to an unfiltered control condition, peripheral filtering induced a decrease of the detection probability for peripheral but not for central targets (tunnel vision). Central filtering decreased the detectability of central but not of peripheral targets. Additional post hoc analyses are compatible with the interpretation that saccade amplitudes and direction are computed in partial independence. Our experimental results indicate that task-induced modulations of saccade amplitudes reflect attentional modulations.
Visuospatial attention and gaze control depend on the interaction of foveal and peripheral processing. The foveal and peripheral regions of the visual field are differentially sensitive to parts of the spatial frequency spectrum. In two experiments, we investigated how the selective attenuation of spatial frequencies in the central or the peripheral visual field affects eye-movement behavior during real-world scene viewing. Gaze-contingent low-pass or high-pass filters with varying filter levels (i.e., cutoff frequencies; Experiment 1) or filter sizes (Experiment 2) were applied. Compared to unfiltered control conditions, mean fixation durations increased most with central high-pass and peripheral low-pass filtering. Increasing filter size prolonged fixation durations with peripheral filtering, but not with central filtering. Increasing filter level prolonged fixation durations with low-pass filtering, but not with high-pass filtering. These effects indicate that fixation durations are not always longer under conditions of increased processing difficulty. Saccade amplitudes largely adapted to processing difficulty: amplitudes increased with central filtering and decreased with peripheral filtering; the effects strengthened with increasing filter size and filter level. In addition, we observed a trade-off between saccade timing and saccadic selection, since saccade amplitudes were modulated when fixation durations were unaffected by the experimental manipulations. We conclude that interactions of perception and gaze control are highly sensitive to experimental manipulations of input images as long as the residual information can still be accessed for gaze control.