Dyslexic children are known to be slower than normal readers in rapid automatized naming (RAN). This suggests that dyslexics encounter local processing difficulties, which presumably induce a narrower perceptual span. Consequently, dyslexics should suffer less than normal readers from removing parafoveal preview. Here we used a gaze-contingent moving window paradigm in a RAN task to experimentally test this prediction. Results indicate that dyslexics extract less parafoveal information than control children. We propose that more attentional resources are recruited for foveal processing because of dyslexics' less automatized translation of visual symbols into phonological output, thereby causing a reduction of the perceptual span. This in turn leads to less efficient preactivation of parafoveal information and, hence, more difficulty in processing the next foveal item.
This article presents results of an exploratory investigation combining multimodal cohesion analysis and eye-tracking studies. Multimodal cohesion, as a tool of multimodal discourse analysis, goes beyond linguistic cohesive mechanisms to enable the construction of cross-modal discourse structures that systematically relate technical details of audio, visual and verbal modalities. Patterns of multimodal cohesion from these discourse structures were used to design eye-tracking experiments and questionnaires in order to empirically investigate how auditory and visual cohesive cues affect attention and comprehension. We argue that the cross-modal structures of cohesion revealed by our method offer a strong methodology for addressing empirical questions concerning viewers' comprehension of narrative settings and the comparative salience of visual, verbal and audio cues. Analyses are presented of the beginning of Hitchcock's The Birds (1963) and a sketch from Monty Python filmed in 1971. Our approach balances the narrative-based issue of how narrative elements in film guide meaning interpretation and the recipient-based question of where a film viewer's attention is directed during viewing and how this affects comprehension.
How is reading development reflected in eye-movement measures? How does the perceptual span change during the initial years of reading instruction? Does parafoveal processing require competence in basic word-decoding processes? We report data from the first cross-sectional measurement of the perceptual span of German beginning readers (n = 139), collected in the context of the large longitudinal PIER study of intrapersonal developmental risk factors (Potsdamer Intrapersonale Entwicklungsrisiken). Using the moving-window paradigm, eye movements of three groups of students (Grades 1-3) were measured with gaze-contingent presentation of a variable amount of text around fixation. Reading rate increased from Grades 1-3, with smaller increases for higher grades. Perceptual-span results showed the expected main effects of grade and window size: fixation durations and refixation probability decreased with grade and window size, whereas reading rate and saccade length increased. Critically, for reading rate, first-fixation duration, saccade length and refixation probability, there were significant interactions of grade and window size that were mainly based on the contrast between Grades 3 and 2 rather than Grades 2 and 1. Taken together, development of the perceptual span only really takes off between Grades 2 and 3, suggesting that efficient parafoveal processing presupposes that basic processes of reading have been mastered.
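The gaze-contingent moving-window manipulation used in these studies can be illustrated with a short sketch. The function below is a hypothetical, simplified illustration (the actual experiments used real-time eye-tracking hardware, and some studies replace masked text with visually similar letters rather than a fixed mask character): it masks all characters outside a symmetric window around the currently fixated character, leaving spaces intact so word boundaries remain visible.

```python
def moving_window(text: str, fixation: int, window: int, mask: str = "x") -> str:
    """Simulate one display frame of the moving-window paradigm.

    Characters whose index lies more than `window` positions from the
    fixated index are replaced by `mask`; spaces are preserved so word
    boundaries stay visible (a common variant of the paradigm).
    """
    out = []
    for i, ch in enumerate(text):
        if abs(i - fixation) <= window or ch == " ":
            out.append(ch)
        else:
            out.append(mask)
    return "".join(out)

# Fixating the 'i' of "quick" (index 6) with a 3-character window:
print(moving_window("the quick brown fox", fixation=6, window=3))
# → xxx quick xxxxx xxx
```

In an actual experiment, this masking would be recomputed on every eye-tracker sample so that the legible region follows the eyes; varying `window` across trials yields the window-size manipulation the abstracts describe.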
The perceptual span is a standard measure of parafoveal processing, which is considered highly important for efficient reading. Is the perceptual span a stable indicator of reading performance? What drives its development? Do initially slower and faster readers converge or diverge over development? Here we present the first longitudinal data on the development of the perceptual span in elementary school children. Using the moving window technique, eye movements of 127 German children in three age groups (Grades 1, 2, and 3 in Year 1) were recorded at two time points (T1 and T2) 1 year apart. Introducing a new measure of the perceptual span, nonlinear mixed-effects modeling was used to separate window size effects from asymptotic reading performance. Cross-sectional differences were well replicated longitudinally. Asymptotic reading rate increased monotonically with grade, but in a decelerating fashion. A significant change in the perceptual span was observed only between Grades 2 and 3. Together with results from a cross-lagged panel model, this suggests that the perceptual span increases as a consequence of relatively well established word reading. Stabilities of observed and predicted reading rates were high after Grade 1, whereas the perceptual span was only moderately stable for all grades. Comparing faster and slower readers as assessed at T1, in general, a pattern of stable between-group differences emerged rather than a compensatory pattern; second and third graders even showed a Matthew effect in reading rate and the perceptual span, respectively.
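The abstract does not state the exact model equation, so the following is only a hypothetical illustration of the general idea of separating window-size effects from asymptotic reading performance with a nonlinear model. It assumes a saturating-exponential curve in which reading rate approaches an asymptote as the window grows; the function names, parameter values, and the grid-search fit are all invented for the sketch (a real analysis would use mixed-effects software).

```python
import math
import random

def reading_rate(w, r_asym, span):
    """Hypothetical saturating model: reading rate rises with window
    size w and levels off at the asymptotic rate r_asym; the 'span'
    parameter controls how quickly the asymptote is approached."""
    return r_asym * (1.0 - math.exp(-w / span))

# Synthetic data: window sizes of 1-15 characters, true asymptote
# 250 (e.g., characters/min in arbitrary units), true span 4, plus noise.
random.seed(1)
true_r, true_s = 250.0, 4.0
data = [(w, reading_rate(w, true_r, true_s) + random.gauss(0, 2))
        for w in range(1, 16)]

def fit(data):
    """Coarse least-squares grid search over both parameters."""
    best = None
    for r in range(200, 301):                 # asymptote 200..300
        for s10 in range(10, 101):            # span 1.0..10.0 in 0.1 steps
            s = s10 / 10.0
            sse = sum((rate - reading_rate(w, float(r), s)) ** 2
                      for w, rate in data)
            if best is None or sse < best[0]:
                best = (sse, float(r), s)
    return best[1], best[2]

r_hat, s_hat = fit(data)
print(r_hat, s_hat)  # estimates should land near 250 and 4.0
```

The point of such a decomposition is that the asymptote and the span parameter can then be compared separately across grades or time points, as in the cross-lagged analysis the abstract mentions.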
When the eyes fixate at a point in a visual scene, small saccades rapidly shift the image on the retina. The effect of these microsaccades on the latency of subsequent large-scale saccades may be twofold. First, microsaccades are associated with an enhancement of visual perception. Their occurrence during saccade target perception should, thus, decrease saccade latencies. On the other hand, microsaccades likely indicate activity in fixation-related oculomotor neurons. These represent competitors to saccade-related cells in the interplay of gaze holding and shifting. Consequently, an increase in saccade latencies after microsaccades would be expected. Here, we present evidence for both aspects of microsaccadic impact on saccade latency. In a delayed response task, participants made saccades to visible or memorized targets. First, microsaccade occurrence up to 50 ms before target disappearance correlated with 18 ms (or 8%) faster saccades to memorized targets. Second, if microsaccades occurred shortly (i.e., < 150 ms) before a saccade was required, saccadic reaction times in visual and memory trials were increased by about 40 ms (or 16%). Hence, microsaccades can have opposite consequences for saccade latencies, pointing at a differential role of these fixational eye movements in preparation of motor programs.
The perception of time is a fundamental part of human experience. Recent research suggests that the experience of time emerges from emotional and interoceptive (bodily) states as processed in the insular cortex. Whether there is an interaction between the conscious awareness of interoceptive states and time distortions induced by emotions has rarely been investigated so far. We aimed to address this question by the use of a retrospective time estimation task comparing two groups of participants. One group had a focus on interoceptive states and one had a focus on exteroceptive information while watching film clips depicting fear, amusement and neutral content. Main results were that attention to interoceptive processes significantly affected subjective time experience. Fear was accompanied by subjective time dilation, which was more pronounced in the group with interoceptive focus, while amusement led to a quicker passage of time, which was also increased by interoceptive focus. We conclude that retrospective temporal distortions are directly influenced by attention to bodily responses. These effects might crucially interact with arousal levels. Sympathetic nervous system activation affecting memory build-up might be the decisive factor influencing retrospective time judgments. Our data substantially extend former research findings underscoring the relevance of interoception for the effects of emotional states on subjective time experience.
This study investigates the eye movements of dyslexic children and their age-matched controls when reading Chinese. Dyslexic children exhibited more and longer fixations than age-matched control children, and an increase of word length resulted in a greater increase in the number of fixations and gaze durations for the dyslexic than for the control readers. The report focuses on the finding that there was a significant difference between the two groups in the fixation landing position as a function of word length in single-fixation cases, while there was no such difference in the initial fixation of multi-fixation cases. We also found that both groups had longer incoming saccade amplitudes while the launch sites were closer to the word in single fixation cases than in multi-fixation cases. Our results suggest that dyslexic children's inefficient lexical processing, in combination with the absence of orthographic word boundaries in Chinese, leads them to select saccade targets at the beginning of words conservatively. These findings provide further evidence for parafoveal word segmentation during reading of Chinese sentences.
We measured Chinese dyslexic and control children's eye movements during rapid automatized naming (RAN) with alphanumeric (digits) and symbolic (dice surfaces) stimuli. Both types of stimuli required identical oral responses, controlling for effects associated with speech production. Results showed that naming dice was much slower than naming digits for both groups, but group differences in eye-movement measures and in the eye-voice span (i.e., the distance between the currently fixated item and the voiced item) were generally larger in digit-RAN than in dice-RAN. In addition, dyslexics were less efficient in parafoveal processing in these RAN tasks. Since the two RAN tasks required the same phonological output and on the assumption that naming dice is less practiced than naming digits in general, the results suggest that the translation of alphanumeric visual symbols into phonological codes is less efficient in dyslexic children. The dissociation of the print-to-sound conversion and phonological representation suggests that the degree of automaticity in the translation from visual symbols to phonological codes, in addition to phonological processing per se, is critical to understanding dyslexia.
What is the time course of activation of phonological information in logographic writing systems like Chinese, in which meaning is prioritized over sound? We used a manipulation of phonological regularity to examine foveal and parafoveal phonological processing of Chinese phonograms at lexical and sublexical levels during Chinese sentence reading in 2 eye-tracking experiments. In Experiment 1, using an error disruption task during silent reading, we observed foveal lexical phonological activation in second-pass reading. In Experiment 2, using the boundary paradigm, both parafoveal lexical and sublexical phonological preview benefits were found in first-fixation duration in oral reading, whereas only lexical phonological benefits were found in gaze duration during silent reading. Thus, phonological information had earlier and more pronounced parafoveal effects in oral reading, and these extended to sublexical processing. These results are compatible with the view that oral reading prioritizes parafoveal phonological processing in Chinese.
The present study explores the perceptual span, that is, the physical extent of the area from which useful visual information is obtained during a single fixation, during oral reading of Chinese sentences. Characters outside a window of legible text were replaced by visually similar characters. Results show that the influence of window size on the perceptual span was consistent across different fixation and oculomotor measures. To maintain normal reading behavior when reading aloud, it was necessary to have information provided from three characters to the right of the fixation. Together with findings from previous research, our findings suggest that the physical size of the perceptual span is smaller when reading aloud than in silent reading. This is in agreement with previous studies in English, suggesting that the mechanisms causing the reduced span in oral reading have a common base that generalizes across languages and writing systems.
How is semantic information in the mental lexicon accessed and selected during reading? Readers process information of both the foveal and parafoveal words. Recent eye-tracking studies hint at bi-phasic lexical activation dynamics, demonstrating that semantically related parafoveal previews can either facilitate or interfere with lexical processing of target words in comparison to unrelated previews, with the size and direction of the effect depending on exposure time to parafoveal previews. However, evidence to date is only correlational, because exposure time was determined by participants' pre-target fixation durations. Here we experimentally controlled parafoveal preview exposure duration using a combination of the gaze-contingent fast-priming and boundary paradigms. We manipulated preview duration and examined the time course of parafoveal semantic activation during the oral reading of Chinese sentences in three experiments. Semantic previews led to faster lexical access of target words than unrelated previews only when the previews were presented briefly (80 ms in Experiments 1 and 3). Longer exposure time (100 ms or 150 ms) eliminated semantic preview effects, and full preview without duration limit resulted in preview cost, i.e., a reversal of preview benefit. Our results indicate that high-level semantic information can be obtained from parafoveal words and that the size and direction of the parafoveal semantic effect depend on the level of lexical activation.
In two eye-tracking experiments, we investigated the processing of information about phonological consistency of Chinese phonograms during sentence reading. In Experiment 1, we adopted the error disruption paradigm in silent reading and found significant effects of phonological consistency and homophony in the foveal vision, but only in a late processing stage. Adding oral reading to Experiment 2, we found both effects shifted to earlier indices of parafoveal processing. Specifically, low-consistency characters led to a better homophonic foveal recovery effect in Experiment 1 and stronger homophonic preview benefits in Experiment 2. These findings suggest that phonological consistency information can be obtained during sentence reading and that, compared to low-consistency previews, high-consistency previews are processed faster, which leads to greater interference with the recognition of target characters.
The perceptual span describes the size of the visual field from which information is obtained during a fixation in reading. Its size depends on characteristics of the writing system and the reader but, according to the foveal load hypothesis, it is also adjusted dynamically as a function of lexical processing difficulty. Using the moving window paradigm to manipulate the amount of preview, here we directly test whether the perceptual span shrinks as foveal word difficulty increases. We computed the momentary size of the span from word-based eye-movement measures as a function of foveal word frequency, allowing us to separately describe the perceptual span for information affecting spatial saccade targeting and temporal saccade execution. First fixation duration and gaze duration on the upcoming (parafoveal) word N + 1 were significantly shorter when the current (foveal) word N was more frequent. We show that the word frequency effect is modulated by window size. Fixation durations on word N + 1 decreased with high-frequency words N, but only for large windows, that is, when sufficient parafoveal preview was available. This provides strong support for the foveal load hypothesis. To investigate the development of the foveal load effect, we analyzed data from three waves of a longitudinal study on the perceptual span with German children in Grades 1 to 6. Perceptual span adjustment emerged early in development at around second grade and remained stable in later grades. We conclude that the local modulation of the perceptual span indicates a general cognitive process, perhaps an attentional gradient with rapid readjustment.
Following up on an exchange about the relation between microsaccades and spatial attention (Horowitz, Fencsik, Fine, Yurgenson, & Wolfe, 2007; Horowitz, Fine, Fencsik, Yurgenson, & Wolfe, 2007; Laubrock, Engbert, Rolfs, & Kliegl, 2007), we examine the effects of selection criteria and response modality. We show that for Posner cuing with saccadic responses, microsaccades go with attention in at least 75% of cases (almost 90% if probability matching is assumed) when they are first (or only) microsaccades in the cue target interval and when they occur between 200 and 400 msec after the cue. The relation between spatial attention and the direction of microsaccades drops to chance level for unselected microsaccades collected during manual-response conditions. Analyses of data from four cross-modal cuing experiments demonstrate an above-chance, intermediate link for visual cues, but no systematic relation for auditory cues. Thus, the link between spatial attention and direction of microsaccades depends on the experimental condition and time of occurrence, but it can be very strong.
Although eye movements during reading are modulated by cognitive processing demands, they also reflect visual sampling of the input, and possibly preparation of output for speech or the inner voice. By simultaneously recording eye movements and the voice during reading aloud, we obtained an output measure that constrains the length of time spent on cognitive processing. Here we investigate the dynamics of the eye-voice span (EVS), the distance between eye and voice. We show that the EVS is regulated immediately during fixation of a word by either increasing fixation duration or programming a regressive eye movement against the reading direction. EVS size at the beginning of a fixation was positively correlated with the likelihood of regressions and refixations. Regression probability was further increased if the EVS was still large at the end of a fixation: if adjustment of fixation duration did not sufficiently reduce the EVS during a fixation, then a regression rather than a refixation followed with high probability. We further show that the EVS can help understand cognitive influences on fixation duration during reading: in mixed model analyses, the EVS was a stronger predictor of fixation durations than either word frequency or word length. The EVS modulated the influence of several other predictors on single fixation durations (SFDs). For example, word-N frequency effects were larger with a large EVS, especially when word N-1 frequency was low. Finally, a comparison of SFDs during oral and silent reading showed that reading is governed by similar principles in both reading modes, although EVS maintenance and articulatory processing also cause some differences. In summary, the EVS is regulated by adjusting fixation duration and/or by programming a regressive eye movement when the EVS gets too large. Overall, the EVS appears to be directly related to updating of the working memory buffer during reading.
The serial reaction time task (SRTT) is a standard task used to investigate incidental sequence learning. Whereas incidental learning of motor sequences is well-established, few and disputed results support learning of perceptual sequences. Here we adapt a motion coherence discrimination task (Newsome & Pare, 1988) to the sequence learning paradigm. The new task has 2 advantages: (a) the stimulus is presented at fixation, thereby obviating overt eye movements, and (b) by varying coherence a perceptual threshold measure is available in addition to the performance measure of RT. Results from 3 experiments show that action relevance of the sequence is necessary for sequence learning to occur, that the amount of sequence knowledge varies with the ease of encoding the motor sequence, and that sequence knowledge, once acquired, has the ability to modify perceptual thresholds.
Parafoveal preview benefit (PB) is an implicit measure of lexical activation in reading. PB has been demonstrated for orthographic and phonological but not for semantically related information in English. In contrast, semantic PB is obtained in German and Chinese. We propose that these language differences reveal differential resource demands and timing of phonological and semantic decoding in different orthographic systems.
We compared effects of covert spatial-attention shifts induced with exogenous or endogenous cues on microsaccade rate and direction. Separate and dissociated effects were obtained in rate and direction measures. Display changes caused microsaccade rate inhibition, followed by sustained rate enhancement. Effects on microsaccade direction were differentially tied to cue class (exogenous vs. endogenous) and type (neutral vs. directional). For endogenous cues, direction effects were weak and occurred late. Exogenous cues caused a fast direction bias towards the cue (i.e., early automatic triggering of saccade programs), followed by a shift in the opposite direction (i.e., controlled inhibition of cue-directed saccades, leading to a 'leakage' of microsaccades in the opposite direction).
Neuronal activity in area LIP is correlated with the perceived direction of ambiguous apparent motion (Z. M. Williams, J. C. Elfar, E. N. Eskandar, L. J. Toth, & J. A. Assad, 2003). Here we show that a similar correlation exists for small eye movements made during fixation. A moving dot grid with superimposed fixation point was presented through an aperture. In a motion discrimination task, unambiguous motion was compared with ambiguous motion obtained by shifting the grid by half of the dot distance. In three experiments we show that (a) microsaccadic inhibition, i.e., a drop in microsaccade frequency precedes reports of perceptual flips, (b) microsaccadic inhibition does not accompany simple response changes, and (c) the direction of microsaccades occurring before motion onset biases the subsequent perception of ambiguous motion. We conclude that microsaccades provide a signal on which perceptual judgments rely in the absence of objective disambiguating stimulus information.
Hulleman & Olivers' (H&O's) model introduces variation of the functional visual field (FVF) for explaining visual search behavior. Our research shows how the FVF can be studied using gaze-contingent displays and how FVF variation can be implemented in models of gaze control. Contrary to H&O, we believe that fixation duration is an important factor when modeling visual search behavior.
Control of fixation duration during scene viewing by interaction of foveal and peripheral processing (2013)
Processing in our visual system is functionally segregated, with the fovea specialized in processing fine detail (high spatial frequencies) for object identification, and the periphery in processing coarse information (low frequencies) for spatial orienting and saccade target selection. Here we investigate the consequences of this functional segregation for the control of fixation durations during scene viewing. Using gaze-contingent displays, we applied high-pass or low-pass filters to either the central or the peripheral visual field and compared eye-movement patterns with an unfiltered control condition. In contrast with predictions from functional segregation, fixation durations were unaffected when the critical information for vision was strongly attenuated (foveal low-pass and peripheral high-pass filtering); fixation durations increased, however, when useful information was left mostly intact by the filter (foveal high-pass and peripheral low-pass filtering). These patterns of results are difficult to explain under the assumption that fixation durations are controlled by foveal processing difficulty. As an alternative explanation, we developed the hypothesis that the interaction of foveal and peripheral processing controls fixation duration. To investigate the viability of this explanation, we implemented a computational model with two compartments, approximating spatial aspects of processing by foveal and peripheral activations that change according to a small set of dynamical rules. The model reproduced distributions of fixation durations from all experimental conditions by variation of a few parameters that were affected by the specific filtering conditions.
The age-by-complexity effect is the dominant empirical pattern in cognitive aging. The current report investigates whether a specific high-level mechanism, an age-related decrease in the reliability of episodic accumulators, can account for the age-by-complexity effect, which is commonly assumed to be caused by an unspecific, low-level deficit. Groups of younger and older adults are compared in six reaction time experiments, using orthogonal manipulations of early cognitive difficulty (e.g., Stroop condition) and episodic demands (e.g., stimulus-response mapping). The predicted three-way interaction of age and the two factors was observed fairly consistently across experiments. A modified Brinley analysis shows that different regression slopes in old-young space are required for conditions with low and high episodic difficulty. As a methodological contribution, a Brinley regression model following from certain simple processing assumptions is developed. It is shown that, in contrast to a standard Brinley meta-analysis, the regression slopes in this model are not influenced by theoretically uninteresting between-experiment variance.
Covert shifts of attention are usually reflected in RT differences between responses to valid and invalid cues in the Posner spatial attention task. Such inferences about covert shifts of attention do not control for microsaccades in the cue-target interval. We analyzed the effects of microsaccade orientation on RTs in four conditions, crossing peripheral visual and auditory cues with peripheral visual and auditory discrimination targets. Reaction time was generally faster on trials without microsaccades in the cue-target interval. If microsaccades occurred, the target-location congruency of the last microsaccade in the cue-target interval interacted in a complex way with cue validity. For valid visual cues, irrespective of whether the discrimination target was visual or auditory, target-congruent microsaccades delayed RTs. For invalid cues, target-incongruent microsaccades facilitated RTs for visual target discrimination, but delayed RTs for auditory target discrimination. No reliable effects on RT were associated with auditory cues or with the first microsaccade in the cue-target interval. We discuss theoretical implications for the relation between spatial attention and oculomotor processes.
Using the gaze-contingent boundary paradigm with the boundary placed after word n, we manipulated preview of word n+2 for fixations on word n. There was no preview benefit for first-pass reading on word n+2, replicating the results of Rayner, Juhasz, and Brown (2007), but there was a preview benefit on the three-letter word n+1, that is, after the boundary, but before word n+2. Additionally, both word n+1 and word n+2 exhibited parafoveal-on-foveal effects on word n. Thus, during a fixation on word n and given a short word n+1, some information is extracted from word n+2, supporting the hypothesis of distributed processing in the perceptual span.
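The display-change logic of the boundary paradigm can be sketched as follows. This is a hypothetical toy routine operating on word strings; real implementations update a rendered display within a single saccade, triggered by the gaze crossing an invisible boundary.

```python
def boundary_display(words, target_idx, preview, gaze_word_idx):
    """Show the preview string at the target position until gaze crosses
    the invisible boundary (i.e., reaches or passes the target word)."""
    shown = list(words)
    if gaze_word_idx < target_idx:  # boundary not yet crossed
        shown[target_idx] = preview
    return " ".join(shown)

sentence = ["the", "old", "captain", "sat", "down"]
# While fixating an earlier word, the target position shows a preview string:
print(boundary_display(sentence, 3, "zet", 1))  # the old captain zet down
# Once the eyes cross the boundary, the correct target word is displayed:
print(boundary_display(sentence, 3, "zet", 3))  # the old captain sat down
```

Comparing reading times on the target after valid versus invalid previews yields the preview benefit measured in such experiments.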
Eye movements in reading are sensitive to foveal and parafoveal word features. Whereas the influence of orthographic or phonological parafoveal information on gaze control is undisputed, there has been no reliable evidence for early parafoveal extraction of semantic information in alphabetic script. Using a novel combination of the gaze-contingent fast-priming and boundary paradigms, we demonstrate semantic preview benefit when a semantically related parafoveal word was available during the initial 125 ms of a fixation on the pre-target word (Experiments 1 and 2). When the target location was made more salient, significant parafoveal semantic priming occurred only at 80 ms (Experiment 3). Finally, with short primes only (20, 40, 60 ms) effects were not significant but numerically in the expected direction for 40 and 60 ms (Experiment 4). In all experiments, fixation durations on the target word increased with prime durations under all conditions. The evidence for extraction of semantic information from the parafoveal word favors an explanation in terms of parallel word processing in reading.
The visual number world
(2018)
In the domain of language research, the simultaneous presentation of a visual scene and its auditory description (i.e., the visual world paradigm) has been used to reveal the timing of mental mechanisms. Here we apply this rationale to the domain of numerical cognition in order to explore the differences between fast and slow arithmetic performance, and to further study the role of spatial-numerical associations during mental arithmetic. We presented 30 healthy adults simultaneously with visual displays containing four numbers and with auditory addition and subtraction problems. Analysis of eye movements revealed that participants look spontaneously at the numbers they currently process (operands, solution). Faster performance was characterized by shorter latencies prior to fixating the relevant numbers and fewer revisits to the first operand while computing the solution. These signatures of superior task performance were more pronounced for addition with visual numbers arranged in ascending order, and for subtraction with numbers arranged in descending order (compared to the opposite pairings). Our results show that the visual number world paradigm provides online access to the mind during mental arithmetic, is able to capture variability in arithmetic performance, and is sensitive to visual layout manipulations that are otherwise not reflected in response time measurements.
"Left" and "right" coordinates control our spatial behavior and even influence abstract thoughts. For number concepts, horizontal spatial-numerical associations (SNAs) have been widely documented: we associate few with left and many with right. Importantly, increments are universally coded on the right side even in preverbal humans and nonhuman animals, thus questioning the fundamental role of directional cultural habits, such as reading or finger counting. Here, we propose a biological, nonnumerical mechanism for the origin of SNAs on the basis of asymmetric tuning of animal brains for different spatial frequencies (SFs). The resulting selective visual processing predicts both universal SNAs and their context-dependence. We support our proposal by analyzing the stimuli used to document SNAs in newborns for their SF content. As predicted, the SFs contained in visual patterns with few versus many elements preferentially engage right versus left brain hemispheres, respectively, thus predicting left-versus rightward behavioral biases. Our "brain's asymmetric frequency tuning" hypothesis explains the perceptual origin of horizontal SNAs for nonsymbolic visual numerosities and might be extensible to the auditory domain.
To construct a coherent multi-modal percept, vertebrate brains extract low-level features (such as spatial and temporal frequencies) from incoming sensory signals. However, because frequency processing is lateralized, with the right hemisphere favouring low frequencies and the left favouring higher frequencies, this extraction introduces asymmetries between the hemispheres. Here, we describe how this lateralization shapes the development of several cognitive domains, ranging from visuo-spatial and numerical cognition to language, social cognition, and even aesthetic appreciation, and leads to the emergence of asymmetries in behaviour. We discuss the neuropsychological and educational implications of these emergent asymmetries and suggest future research approaches.
Coupling of attention and saccades when viewing scenes with central and peripheral degradation
(2016)
Degrading real-world scenes in the central or the peripheral visual field yields a characteristic pattern: Mean saccade amplitudes increase with central and decrease with peripheral degradation. Does this pattern reflect corresponding modulations of selective attention? If so, the observed saccade amplitude pattern should reflect more focused attention in the central region with peripheral degradation and an attentional bias toward the periphery with central degradation. To investigate this hypothesis, we measured the detectability of peripheral (Experiment 1) or central targets (Experiment 2) during scene viewing when low or high spatial frequencies were gaze-contingently filtered in the central or the peripheral visual field. Relative to an unfiltered control condition, peripheral filtering induced a decrease of the detection probability for peripheral but not for central targets (tunnel vision). Central filtering decreased the detectability of central but not of peripheral targets. Additional post hoc analyses are compatible with the interpretation that saccade amplitudes and direction are computed in partial independence. Our experimental results indicate that task-induced modulations of saccade amplitudes reflect attentional modulations.
Visuospatial attention and gaze control depend on the interaction of foveal and peripheral processing. The foveal and peripheral regions of the visual field are differentially sensitive to parts of the spatial frequency spectrum. In two experiments, we investigated how the selective attenuation of spatial frequencies in the central or the peripheral visual field affects eye-movement behavior during real-world scene viewing. Gaze-contingent low-pass or high-pass filters with varying filter levels (i.e., cutoff frequencies; Experiment 1) or filter sizes (Experiment 2) were applied. Compared to unfiltered control conditions, mean fixation durations increased most with central high-pass and peripheral low-pass filtering. Increasing filter size prolonged fixation durations with peripheral filtering, but not with central filtering. Increasing filter level prolonged fixation durations with low-pass filtering, but not with high-pass filtering. These effects indicate that fixation durations are not always longer under conditions of increased processing difficulty. Saccade amplitudes largely adapted to processing difficulty: amplitudes increased with central filtering and decreased with peripheral filtering; the effects strengthened with increasing filter size and filter level. In addition, we observed a trade-off between saccade timing and saccadic selection, since saccade amplitudes were modulated when fixation durations were unaffected by the experimental manipulations. We conclude that interactions of perception and gaze control are highly sensitive to experimental manipulations of input images as long as the residual information can still be accessed for gaze control.
When studying how people search for objects in scenes, the inhomogeneity of the visual field is often ignored. Due to physiological limitations, peripheral vision is blurred and mainly uses coarse-grained information (i.e., low spatial frequencies) for selecting saccade targets, whereas high-acuity central vision uses fine-grained information (i.e., high spatial frequencies) for analysis of details. Here we investigated how spatial frequencies and color affect object search in real-world scenes. Using gaze-contingent filters, we attenuated high or low frequencies in central or peripheral vision while viewers searched color or grayscale scenes. Results showed that peripheral filters and central high-pass filters hardly affected search accuracy, whereas accuracy dropped drastically with central low-pass filters. Peripheral filtering increased the time to localize the target by decreasing saccade amplitudes and increasing number and duration of fixations. The use of coarse-grained information in the periphery was limited to color scenes. Central filtering increased the time to verify target identity instead, especially with low-pass filters. We conclude that peripheral vision is critical for object localization and central vision is critical for object identification. Visual guidance during peripheral object localization is dominated by low-frequency color information, whereas high-frequency information, relatively independent of color, is most important for object identification in central vision.
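The gaze-contingent filtering logic used in these studies can be illustrated in one dimension: samples within a window around the gaze position keep full resolution, while peripheral samples are low-pass filtered. Here the low-pass filter is a simple three-point moving average; the signal, gaze position, and window radius are all hypothetical.

```python
def peripheral_lowpass(signal, gaze_index, radius):
    """Keep samples near gaze unchanged; replace peripheral samples
    with a 3-point moving average (a crude low-pass filter)."""
    n = len(signal)
    out = []
    for i, v in enumerate(signal):
        if abs(i - gaze_index) <= radius:
            out.append(v)  # central region: full resolution
        else:
            lo, hi = max(0, i - 1), min(n, i + 2)
            window = signal[lo:hi]
            out.append(sum(window) / len(window))  # periphery: smoothed
    return out

# A high-frequency alternating signal; gaze at index 3, radius 1.
print(peripheral_lowpass([0, 10, 0, 10, 0, 10, 0], 3, 1))
```

Central filtering is the mirror image: the smoothing (or high-pass attenuation) is applied inside the window instead of outside it.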
The defocused attention hypothesis (von Hecker & Meiser, 2005) assumes that negative mood broadens attention, whereas the analytical rumination hypothesis (Andrews & Thomson, 2009) suggests a narrowing of the attentional focus with depression. We tested these conflicting hypotheses by directly measuring the perceptual span in groups of dysphoric and control subjects, using eye tracking. In the moving window paradigm, information outside of a variable-width gaze-contingent window was masked during sentence reading. In measures of sentence reading time and mean fixation duration, dysphoric subjects were more strongly affected than controls by a reduced window size. This difference supports the defocused attention hypothesis and seems hard to reconcile with a narrowing of attentional focus.
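The moving window manipulation used here can be sketched as a toy string operation: every character beyond a gaze-centered window is replaced with a mask character. This is a hypothetical illustration; real paradigms update a rendered display in real time as the eyes move.

```python
def moving_window(text, gaze_index, radius, mask="x"):
    """Mask every non-space character farther than `radius` characters from gaze."""
    return "".join(
        ch if abs(i - gaze_index) <= radius or ch == " " else mask
        for i, ch in enumerate(text)
    )

# Gaze on the 'b' of "brown" (index 10), window radius of 3 characters:
print(moving_window("the quick brown fox", 10, 3))
# xxx xxxck browx xxx
```

Shrinking the radius below a reader's perceptual span slows reading; the abstract's key measure is how strongly each group suffers from that shrinkage.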