Numerous studies have demonstrated effects of word frequency on eye movements during reading, but the precise timing of this influence has remained unclear. The fast priming paradigm was previously used to study influences of related versus unrelated primes on the target word. Here, we use this procedure to investigate whether the frequency of the prime word has a direct influence on eye movements during reading when the prime-target relation is not manipulated. We found that with average prime intervals of 32 ms readers made longer single fixation durations on the target word in the low- than in the high-frequency prime condition. Distributional analyses demonstrated that the effect of prime frequency on single fixation durations occurred very early, supporting theories of immediate cognitive control of eye movements. Finding prime frequency effects only 207 ms after the prime became visible, and for prime durations of 32 ms, yields new time constraints for cognitive processes controlling eye movements during reading. Our variant of the fast priming paradigm provides a new approach to test early influences of word processing on eye movement control during reading.
Eye-movement experiments suggest that the perceptual span during reading is larger than the fixated word, asymmetric around the fixation position, and shrinks in size contingent on the foveal processing load. We used the SWIFT model of eye-movement control during reading to test these hypotheses and their implications under the assumption of graded parallel processing of all words inside the perceptual span. Specifically, we simulated reading in the boundary paradigm and analysed the effects of denying the model valid preview of a parafoveal word n + 2, two words to the right of fixation. Optimizing the model parameters for the valid preview condition only, we obtained span parameters with remarkably realistic estimates conforming to the empirical findings on the size of the perceptual span. More importantly, the SWIFT model generated parafoveal processing up to word n + 2 without fitting the model to such preview effects. Our results suggest that asymmetry and dynamic modulation are plausible properties of the perceptual span in a parallel word-processing model such as SWIFT. Moreover, they seem to guide the flexible distribution of processing resources during reading between foveal and parafoveal words.
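The asymmetric, load-modulated span described in this abstract can be illustrated with a toy processing-rate profile. This is a minimal sketch, not the published SWIFT equations; the function name and parameters (`span_left`, `span_right`, `foveal_load`) are illustrative assumptions.

```python
import math

def processing_rate(eccentricity, span_left=2.0, span_right=5.0, foveal_load=0.0):
    """Illustrative asymmetric processing-rate profile around fixation.

    eccentricity: letter positions relative to fixation (negative = left).
    foveal_load in [0, 1]: higher load shrinks the span (dynamic modulation).
    NOT the published SWIFT equations; a Gaussian fall-off is assumed here.
    """
    # dynamic modulation: high foveal load narrows the span on both sides
    shrink = 1.0 - 0.5 * foveal_load
    # asymmetry: the span extends further to the right of fixation
    sigma = (span_left if eccentricity < 0 else span_right) * shrink
    return math.exp(-0.5 * (eccentricity / sigma) ** 2)

# Asymmetry: a word 3 letters right of fixation is processed faster
# than one 3 letters to the left
print(processing_rate(3), processing_rate(-3))
```

With this profile, raising `foveal_load` lowers the rate at every eccentricity, mimicking a span that shrinks under difficult foveal processing.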
This study investigates the eye movements of dyslexic children and their age-matched controls when reading Chinese. Dyslexic children exhibited more and longer fixations than age-matched control children, and an increase of word length resulted in a greater increase in the number of fixations and gaze durations for the dyslexic than for the control readers. The report focuses on the finding that there was a significant difference between the two groups in the fixation landing position as a function of word length in single-fixation cases, while there was no such difference in the initial fixation of multi-fixation cases. We also found that both groups had longer incoming saccade amplitudes while the launch sites were closer to the word in single fixation cases than in multi-fixation cases. Our results suggest that dyslexic children's inefficient lexical processing, in combination with the absence of orthographic word boundaries in Chinese, leads them to select saccade targets at the beginning of words conservatively. These findings provide further evidence for parafoveal word segmentation during reading of Chinese sentences.
It is generally accepted that low-level features (e.g., inter-word spaces) are responsible for saccade-target selection in eye-movement control during reading. In two experiments using Uighur script, known for its rich suffixes, we demonstrate that, in addition to word length and launch site, the number of suffixes influences initial landing positions. We also demonstrate an influence of word frequency. These results are difficult to explain purely by low-level guidance of eye movements and indicate that, due to properties specific to Uighur script, low-level visual information and high-level information such as the morphological structure of parafoveal words jointly influence saccade programming.
Eye movements depend on cognitive processes related to visual information processing. Much has been learned about the spatial selection of fixation locations, while the principles governing the temporal control (fixation durations) are less clear. Here, we review current theories for the control of fixation durations in tasks like visual search, scanning, scene perception, and reading and propose a new model for the control of fixation durations. We distinguish two local principles from one global principle of control. First, an autonomous saccade timer initiates saccades after random time intervals (local-I). Second, foveal inhibition permits immediate prolongation of fixation durations by ongoing processing (local-II). Third, saccade timing is adaptive, so that the mean timer value depends on task requirements and fixation history (global). We demonstrate by numerical simulations that our model qualitatively reproduces patterns of mean fixation durations and fixation duration distributions observed in typical experiments. When combined with assumptions of saccade target selection and oculomotor control, the model accounts for both temporal and spatial aspects of eye movement control in two versions of a visual search task. We conclude that the model provides a promising framework for the control of fixation durations in saccadic tasks.
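The two-local/one-global control scheme in this abstract can be sketched as a toy simulation. This is a hedged illustration, not the authors' model: the distributions, parameter values, and the way the timer adapts to fixation history are all assumptions made for the sketch.

```python
import random

def simulate_fixations(n=1000, base_timer=250.0, timer_sd=30.0,
                       inhibition_prob=0.3, inhibition_delay=80.0,
                       adaptation_rate=0.05, seed=1):
    """Toy sketch of fixation-duration control (all parameters illustrative).

    local-I : an autonomous timer draws random inter-saccade intervals.
    local-II: ongoing foveal processing sometimes inhibits the timer,
              immediately prolonging the current fixation.
    global  : the mean timer value slowly adapts to fixation history.
    """
    rng = random.Random(seed)
    timer = base_timer
    durations = []
    for _ in range(n):
        # local-I: random timer interval (truncated Gaussian, in ms)
        interval = max(50.0, rng.gauss(timer, timer_sd))
        # local-II: occasional inhibition by ongoing processing
        duration = interval + (inhibition_delay if rng.random() < inhibition_prob
                               else 0.0)
        # global: mean timer value drifts toward recent intervals
        # (a simplification of history- and task-dependent adaptation)
        timer += adaptation_rate * (interval - timer)
        durations.append(duration)
    return durations

durations = simulate_fixations()
print(round(sum(durations) / len(durations)))  # mean fixation duration (ms)
```

Changing `inhibition_prob` or `base_timer` shifts the simulated distribution in the directions one would expect from harder processing or different task demands.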
Recent research showed that past events are associated with the back and left side, whereas future events are associated with the front and right side of space. These spatial-temporal associations have an impact on our sensorimotor system: thinking about one's past and future leads to subtle body sways in the sagittal dimension of space (Miles, Nind, & Macrae, 2010). In this study we investigated whether mental time travel leads to sensorimotor correlates in the horizontal dimension of space. Participants were asked to mentally displace themselves into the past or future while their spontaneous eye movements on a blank screen were recorded. Eye gaze was directed more rightward and upward when thinking about the future than when thinking about the past. Our results provide further insight into the spatial nature of temporal thoughts, and show that not only body movements but also eye movements follow a (diagonal) "time line" during mental time travel.
Background: In addition to the canonical subject-verb-object (SVO) word order, German also allows for non-canonical order (OVS), and the case-marking system supports thematic role interpretation. Previous eye-tracking studies (Kamide et al., 2003; Knoeferle, 2007) have shown that unambiguous case information in non-canonical sentences is processed incrementally. For individuals with agrammatic aphasia, comprehension of non-canonical sentences is at chance level (Burchert et al., 2003). The trace deletion hypothesis (TDH; Grodzinsky, 1995, 2000) claims that this is due to structural impairments in syntactic representations, which force the individual with aphasia (IWA) to apply a guessing strategy. However, recent studies investigating online sentence processing in aphasia (Caplan et al., 2007; Dickey et al., 2007) found that divergences exist in IWAs' sentence-processing routines depending on whether they comprehended non-canonical sentences correctly or not, pointing rather to a processing deficit explanation. Aims: The aim of the current study was to investigate agrammatic IWAs' online and offline sentence comprehension simultaneously in order to reveal what online sentence-processing strategies they rely on and how these differ from controls' processing routines. We further asked whether IWAs' offline chance performance for non-canonical sentences does indeed result from guessing. Methods & Procedures: We used the visual-world paradigm and measured eye movements (as an index of online sentence processing) of controls (N = 8) and individuals with aphasia (N = 7) during a sentence-picture matching task. Additional offline measures were accuracy and reaction times. Outcomes & Results: While the offline accuracy results corresponded to the pattern predicted by the TDH, IWAs' eye movements revealed systematic differences depending on the response accuracy.
Conclusions: These findings constitute evidence against attributing IWAs' chance performance for non-canonical structures to mere guessing. Instead, our results support processing deficit explanations and characterise the agrammatic parser as deterministic and inefficient: it is slowed down, affected by intermittent deficiencies in performing syntactic operations, and fails to compute reanalysis even when one is detected.
Saccades move objects of interest into the center of the visual field for high-acuity visual analysis. White, Stritzke, and Gegenfurtner (Current Biology, 18, 124-128, 2008) have shown that saccadic latencies in the context of a structured background are much shorter than those with an unstructured background at equal levels of visibility. This effect has been explained by possible preactivation of the saccadic circuitry whenever a structured background acts as a mask for potential saccade targets. Here, we show that background textures modulate rates of microsaccades during visual fixation. First, after a display change, structured backgrounds induce a stronger decrease of microsaccade rates than do uniform backgrounds. Second, we demonstrate that the occurrence of a microsaccade in a critical time window can delay a subsequent saccadic response. Taken together, our findings suggest that microsaccades contribute to the saccadic facilitation effect, due to a modulation of microsaccade rates by properties of the background.
Which repair strategy does the language system deploy when it gets garden-pathed, and what can regressive eye movements in reading tell us about reanalysis strategies? Several influential eye-tracking studies on syntactic reanalysis (Frazier & Rayner, 1982; Meseguer, Carreiras, & Clifton, 2002; Mitchell, Shen, Green, & Hodgson, 2008) have addressed this question by examining scanpaths, i.e., sequential patterns of eye fixations. However, in the absence of a suitable method for analyzing scanpaths, these studies relied on simplified dependent measures that are arguably ambiguous and hard to interpret. We address the theoretical question of repair strategy by developing a new method that quantifies scanpath similarity. Our method reveals several distinct fixation strategies associated with reanalysis that went undetected in a previously published data set (Meseguer et al., 2002). One prevalent pattern suggests re-parsing of the sentence, a strategy that has been discussed in the literature (Frazier & Rayner, 1982); however, readers differed tremendously in how they orchestrated the various fixation strategies. Our results suggest that the human parsing system non-deterministically adopts different strategies when confronted with the disambiguating material in garden-path sentences.
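One simple way to quantify scanpath similarity (a stand-in illustration, not necessarily the measure developed in the study above) is an edit distance over sequences of fixated sentence regions. The region sequences and cost values below are hypothetical examples.

```python
def scanpath_distance(a, b, sub_cost=1.0, indel_cost=1.0):
    """Edit (Levenshtein) distance between two scanpaths, each given as a
    sequence of fixated region indices. Lower values = more similar paths."""
    m, n = len(a), len(b)
    # dp[i][j] = minimal cost of aligning a[:i] with b[:j]
    dp = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        dp[i][0] = i * indel_cost
    for j in range(1, n + 1):
        dp[0][j] = j * indel_cost
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = dp[i - 1][j - 1] + (0.0 if a[i - 1] == b[j - 1] else sub_cost)
            dp[i][j] = min(sub,
                           dp[i - 1][j] + indel_cost,   # delete from a
                           dp[i][j - 1] + indel_cost)   # insert from b
    return dp[m][n]

# Hypothetical readers of a 5-region garden-path sentence:
# one re-parses from the start, one regresses selectively and resumes
reparse = [1, 2, 3, 4, 1, 2, 3, 4, 5]
regress = [1, 2, 3, 4, 2, 4, 5]
print(scanpath_distance(reparse, regress))
```

Clustering trials by such pairwise distances is one way to separate distinct fixation strategies, in the spirit of the scanpath analysis described above; duration-weighted and spatially weighted costs would be natural refinements.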
We investigated the mental rehearsal of complex action instructions by recording spontaneous eye movements of healthy adults as they looked at objects on a monitor. Participants heard consecutive instructions, each of the form "move [object] to [location]". Instructions were only to be executed after a go signal, by manipulating all objects successively with a mouse. Participants re-inspected previously mentioned objects while still listening to further instructions. This rehearsal behavior broke down after 4 instructions, coincident with participants' instruction span, as determined from subsequent execution accuracy. These results suggest that spontaneous eye movements while listening to instructions predict their successful execution.