"Left" and "right" coordinates control our spatial behavior and even influence abstract thoughts. For number concepts, horizontal spatial-numerical associations (SNAs) have been widely documented: we associate few with left and many with right. Importantly, increments are universally coded on the right side even in preverbal humans and nonhuman animals, thus questioning the fundamental role of directional cultural habits, such as reading or finger counting. Here, we propose a biological, nonnumerical mechanism for the origin of SNAs on the basis of asymmetric tuning of animal brains for different spatial frequencies (SFs). The resulting selective visual processing predicts both universal SNAs and their context-dependence. We support our proposal by analyzing the stimuli used to document SNAs in newborns for their SF content. As predicted, the SFs contained in visual patterns with few versus many elements preferentially engage right versus left brain hemispheres, respectively, thus predicting leftward versus rightward behavioral biases. Our "brain's asymmetric frequency tuning" hypothesis explains the perceptual origin of horizontal SNAs for nonsymbolic visual numerosities and might be extensible to the auditory domain.
Commentary
(2020)
Control of fixation duration during scene viewing by interaction of foveal and peripheral processing
(2013)
Processing in our visual system is functionally segregated, with the fovea specialized in processing fine detail (high spatial frequencies) for object identification, and the periphery in processing coarse information (low frequencies) for spatial orienting and saccade target selection. Here we investigate the consequences of this functional segregation for the control of fixation durations during scene viewing. Using gaze-contingent displays, we applied high-pass or low-pass filters to either the central or the peripheral visual field and compared eye-movement patterns with an unfiltered control condition. In contrast with predictions from functional segregation, fixation durations were unaffected when the critical information for vision was strongly attenuated (foveal low-pass and peripheral high-pass filtering); fixation durations increased, however, when useful information was left mostly intact by the filter (foveal high-pass and peripheral low-pass filtering). These patterns of results are difficult to explain under the assumption that fixation durations are controlled by foveal processing difficulty. As an alternative explanation, we developed the hypothesis that the interaction of foveal and peripheral processing controls fixation duration. To investigate the viability of this explanation, we implemented a computational model with two compartments, approximating spatial aspects of processing by foveal and peripheral activations that change according to a small set of dynamical rules. The model reproduced distributions of fixation durations from all experimental conditions by variation of few parameters that were affected by specific filtering conditions.
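The two-compartment idea lends itself to a toy simulation. The sketch below is our own illustration, not the published model: the gating rule, gains, and threshold are invented parameters, chosen only to show how slowed foveal processing can lengthen fixation durations.

```python
def fixation_duration(foveal_gain, peripheral_gain, threshold=1.0, dt=1.0):
    """Toy two-compartment account of fixation duration (illustrative only).

    Foveal and peripheral activations accumulate in parallel; as a stand-in
    for the interaction hypothesis, the peripheral (saccade-target)
    compartment accumulates faster the further foveal analysis has progressed.
    """
    t = fov = per = 0.0
    while per < threshold:
        fov = min(threshold, fov + foveal_gain * dt)  # foveal analysis
        coupling = 0.3 + 0.7 * (fov / threshold)      # foveal progress gates the periphery
        per += peripheral_gain * coupling * dt        # saccade program builds up
        t += dt
    return t

# Slowing foveal processing (e.g., a central filter that leaves useful
# information partly intact) lengthens fixations in this toy model:
slow = fixation_duration(foveal_gain=0.05, peripheral_gain=0.1)
fast = fixation_duration(foveal_gain=0.2, peripheral_gain=0.1)
```

With the gating rule above, the slower foveal gain yields a longer simulated fixation even though the peripheral gain is identical in both runs.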
Coupling of attention and saccades when viewing scenes with central and peripheral degradation
(2016)
Degrading real-world scenes in the central or the peripheral visual field yields a characteristic pattern: Mean saccade amplitudes increase with central and decrease with peripheral degradation. Does this pattern reflect corresponding modulations of selective attention? If so, the observed saccade amplitude pattern should reflect more focused attention in the central region with peripheral degradation and an attentional bias toward the periphery with central degradation. To investigate this hypothesis, we measured the detectability of peripheral (Experiment 1) or central targets (Experiment 2) during scene viewing when low or high spatial frequencies were gaze-contingently filtered in the central or the peripheral visual field. Relative to an unfiltered control condition, peripheral filtering induced a decrease of the detection probability for peripheral but not for central targets (tunnel vision). Central filtering decreased the detectability of central but not of peripheral targets. Additional post hoc analyses are compatible with the interpretation that saccade amplitudes and direction are computed in partial independence. Our experimental results indicate that task-induced modulations of saccade amplitudes reflect attentional modulations.
The perceptual span is a standard measure of parafoveal processing, which is considered highly important for efficient reading. Is the perceptual span a stable indicator of reading performance? What drives its development? Do initially slower and faster readers converge or diverge over development? Here we present the first longitudinal data on the development of the perceptual span in elementary school children. Using the moving window technique, eye movements of 127 German children in three age groups (Grades 1, 2, and 3 in Year 1) were recorded at two time points (T1 and T2) 1 year apart. Introducing a new measure of the perceptual span, nonlinear mixed-effects modeling was used to separate window size effects from asymptotic reading performance. Cross-sectional differences were well replicated longitudinally. Asymptotic reading rate increased monotonically with grade, but in a decelerating fashion. A significant change in the perceptual span was observed only between Grades 2 and 3. Together with results from a cross-lagged panel model, this suggests that the perceptual span increases as a consequence of relatively well established word reading. Stabilities of observed and predicted reading rates were high after Grade 1, whereas the perceptual span was only moderately stable for all grades. Comparing faster and slower readers as assessed at T1, a pattern of stable between-group differences generally emerged rather than a compensatory pattern; second and third graders even showed a Matthew effect in reading rate and the perceptual span, respectively.
The defocused attention hypothesis (von Hecker and Meiser, 2005) assumes that negative mood broadens attention, whereas the analytical rumination hypothesis (Andrews and Thompson, 2009) suggests a narrowing of the attentional focus with depression. We tested these conflicting hypotheses by directly measuring the perceptual span in groups of dysphoric and control subjects, using eye tracking. In the moving window paradigm, information outside of a variable-width gaze-contingent window was masked during reading of sentences. In measures of sentence reading time and mean fixation duration, dysphoric subjects were more strongly affected than controls by a reduced window size. This difference supports the defocused attention hypothesis and seems hard to reconcile with a narrowing of attentional focus.
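The moving window manipulation itself is easy to sketch. The function below is a minimal, character-based illustration of our own (the `x` masking and window size are arbitrary choices); real implementations update the display gaze-contingently in real time.

```python
def moving_window(sentence, fixation_index, window_chars, mask_char="x"):
    """Mask letters outside a window of +/- window_chars characters
    around the currently fixated character; spaces and punctuation
    are left intact so that word boundaries remain visible."""
    return "".join(
        ch if abs(i - fixation_index) <= window_chars or not ch.isalpha()
        else mask_char
        for i, ch in enumerate(sentence)
    )

# Fixation on the 'b' of "brown" (index 10) with a +/- 4-character window:
print(moving_window("The quick brown fox jumps", 10, 4))
# -> xxx xxick brown xxx xxxxx
```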
We measured Chinese dyslexic and control children's eye movements during rapid automatized naming (RAN) with alphanumeric (digits) and symbolic (dice surfaces) stimuli. Both types of stimuli required identical oral responses, controlling for effects associated with speech production. Results showed that naming dice was much slower than naming digits for both groups, but group differences in eye-movement measures and in the eye-voice span (i.e. the distance between the currently fixated item and the voiced item) were generally larger in digit-RAN than in dice-RAN. In addition, dyslexics were less efficient in parafoveal processing in these RAN tasks. Since the two RAN tasks required the same phonological output, and on the assumption that naming dice is less practiced than naming digits in general, the results suggest that the translation of alphanumeric visual symbols into phonological codes is less efficient in dyslexic children. This dissociation of print-to-sound conversion from phonological representation suggests that, beyond phonological processing per se, the degree of automaticity in translating visual symbols into phonological codes is also critical to understanding dyslexia.
Neuronal activity in area LIP is correlated with the perceived direction of ambiguous apparent motion (Z. M. Williams, J. C. Elfar, E. N. Eskandar, L. J. Toth, & J. A. Assad, 2003). Here we show that a similar correlation exists for small eye movements made during fixation. A moving dot grid with superimposed fixation point was presented through an aperture. In a motion discrimination task, unambiguous motion was compared with ambiguous motion obtained by shifting the grid by half of the dot distance. In three experiments we show that (a) microsaccadic inhibition, i.e., a drop in microsaccade frequency precedes reports of perceptual flips, (b) microsaccadic inhibition does not accompany simple response changes, and (c) the direction of microsaccades occurring before motion onset biases the subsequent perception of ambiguous motion. We conclude that microsaccades provide a signal on which perceptual judgments rely in the absence of objective disambiguating stimulus information.
Hulleman & Olivers' (H&O's) model introduces variation of the functional visual field (FVF) for explaining visual search behavior. Our research shows how the FVF can be studied using gaze-contingent displays and how FVF variation can be implemented in models of gaze control. Contrary to H&O, we believe that fixation duration is an important factor when modeling visual search behavior.
When studying how people search for objects in scenes, the inhomogeneity of the visual field is often ignored. Due to physiological limitations, peripheral vision is blurred and mainly uses coarse-grained information (i.e., low spatial frequencies) for selecting saccade targets, whereas high-acuity central vision uses fine-grained information (i.e., high spatial frequencies) for analysis of details. Here we investigated how spatial frequencies and color affect object search in real-world scenes. Using gaze-contingent filters, we attenuated high or low frequencies in central or peripheral vision while viewers searched color or grayscale scenes. Results showed that peripheral filters and central high-pass filters hardly affected search accuracy, whereas accuracy dropped drastically with central low-pass filters. Peripheral filtering increased the time to localize the target by decreasing saccade amplitudes and increasing number and duration of fixations. The use of coarse-grained information in the periphery was limited to color scenes. Central filtering increased the time to verify target identity instead, especially with low-pass filters. We conclude that peripheral vision is critical for object localization and central vision is critical for object identification. Visual guidance during peripheral object localization is dominated by low-frequency color information, whereas high-frequency information, relatively independent of color, is most important for object identification in central vision.
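As a rough illustration of this kind of gaze-contingent filtering, the sketch below builds an FFT-based low-pass/high-pass split and applies the filter only inside or outside a circular central region around gaze. The cutoff, radius, and mode names are our own assumptions for illustration, not the published parameters.

```python
import numpy as np

def lowpass(image, cutoff):
    """Keep only spatial frequencies below `cutoff` (cycles per pixel)."""
    fy = np.fft.fftfreq(image.shape[0])[:, None]
    fx = np.fft.fftfreq(image.shape[1])[None, :]
    keep = np.hypot(fy, fx) <= cutoff
    return np.real(np.fft.ifft2(np.fft.fft2(image) * keep))

def gaze_contingent(image, gaze_xy, radius, cutoff, mode="peripheral_lowpass"):
    """Filter either the central or the peripheral visual field."""
    low = lowpass(image, cutoff)
    high = image - low  # complementary high-pass component
    ys, xs = np.indices(image.shape)
    central = np.hypot(xs - gaze_xy[0], ys - gaze_xy[1]) <= radius
    filtered = {
        "peripheral_lowpass":  np.where(central, image, low),
        "central_lowpass":     np.where(central, low, image),
        "peripheral_highpass": np.where(central, image, high),
        "central_highpass":    np.where(central, high, image),
    }
    return filtered[mode]

# A synthetic grayscale "scene" stands in for a real image:
rng = np.random.default_rng(0)
scene = rng.random((64, 64))
out = gaze_contingent(scene, gaze_xy=(32, 32), radius=10, cutoff=0.1)
```

In the `peripheral_lowpass` mode shown, the region within `radius` pixels of gaze is left untouched while the periphery loses its high spatial frequencies, mimicking the attenuation conditions described above.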
The serial reaction time task (SRTT) is a standard task used to investigate incidental sequence learning. Whereas incidental learning of motor sequences is well-established, few and disputed results support learning of perceptual sequences. Here we adapt a motion coherence discrimination task (Newsome & Pare, 1988) to the sequence learning paradigm. The new task has 2 advantages: (a) the stimulus is presented at fixation, thereby obviating overt eye movements, and (b) by varying coherence a perceptual threshold measure is available in addition to the performance measure of RT. Results from 3 experiments show that action relevance of the sequence is necessary for sequence learning to occur, that the amount of sequence knowledge varies with the ease of encoding the motor sequence, and that sequence knowledge, once acquired, has the ability to modify perceptual thresholds.
The perception of time is a fundamental part of human experience. Recent research suggests that the experience of time emerges from emotional and interoceptive (bodily) states as processed in the insular cortex. Whether there is an interaction between the conscious awareness of interoceptive states and time distortions induced by emotions has rarely been investigated so far. We aimed to address this question by the use of a retrospective time estimation task comparing two groups of participants. One group had a focus on interoceptive states and one had a focus on exteroceptive information while watching film clips depicting fear, amusement and neutral content. Main results were that attention to interoceptive processes significantly affected subjective time experience. Fear was accompanied with subjective time dilation that was more pronounced in the group with interoceptive focus, while amusement led to a quicker passage of time which was also increased by interoceptive focus. We conclude that retrospective temporal distortions are directly influenced by attention to bodily responses. These effects might crucially interact with arousal levels. Sympathetic nervous system activation affecting memory build-up might be the decisive factor influencing retrospective time judgments. Our data substantially extend former research findings underscoring the relevance of interoception for the effects of emotional states on subjective time experience.
What is the time course of activation of phonological information in logographic writing systems like Chinese, in which meaning is prioritized over sound? We used a manipulation of phonological regularity to examine foveal and parafoveal phonological processing of Chinese phonograms at lexical and sublexical levels during Chinese sentence reading in 2 eye-tracking experiments. In Experiment 1, using an error disruption task during silent reading, we observed foveal lexical phonological activation in second-pass reading. In Experiment 2, using the boundary paradigm, both parafoveal lexical and sublexical phonological preview benefits were found in first-fixation duration in oral reading, whereas only lexical phonological benefits were found in gaze duration during silent reading. Thus, phonological information had earlier and more pronounced parafoveal effects in oral reading, and these extended to sublexical processing. These results are compatible with the view that oral reading prioritizes parafoveal phonological processing in Chinese.
We compared effects of covert spatial-attention shifts induced with exogenous or endogenous cues on microsaccade rate and direction. Separate and dissociated effects were obtained in rate and direction measures. Display changes caused microsaccade rate inhibition, followed by sustained rate enhancement. Effects on microsaccade direction were differentially tied to cue class (exogenous vs. endogenous) and type (neutral vs. directional). For endogenous cues, direction effects were weak and occurred late. Exogenous cues caused a fast direction bias towards the cue (i.e., early automatic triggering of saccade programs), followed by a shift in the opposite direction (i.e., controlled inhibition of cue-directed saccades, leading to a 'leakage' of microsaccades in the opposite direction).