The present study explores the role of word position-in-text in sentence and paragraph reading. Three eye-movement data sets based on the reading of unrelated Dutch and German sentences reveal a sizeable, replicable increase in reading times over several words at the beginning and end of sentences. Data from the paragraph-based English-language Dundee corpus replicate this pattern and further indicate that the increase in inspection times is driven by the visual boundaries of the text organized in lines, rather than by syntactic sentence boundaries. We argue that this effect is independent of several established lexical, contextual, and oculomotor predictors of eye-movement behavior. We also provide evidence that the effect of word position-in-text has two independent components: a start-up effect, arguably caused by a strategic oculomotor program of saccade planning over the line of text, and a wrap-up effect originating in cognitive processes of comprehension and semantic integration.
We easily recover the causal properties of visual events, enabling us to understand and predict changes in the physical world. We see a tennis racket hitting a ball and sense that it caused the ball to fly over the net; we may also have an eerie but equally compelling experience of causality if the streetlights turn on just as we slam our car's door. Both perceptual [1] and cognitive [2] processes have been proposed to explain these spontaneous inferences, but without decisive evidence one way or the other, the question remains wide open [3-8]. Here, we address this long-standing debate using visual adaptation, a powerful tool to uncover neural populations that specialize in the analysis of specific visual features [9-12]. After prolonged viewing of causal collision events called "launches" [1], subsequently viewed events were judged as noncausal more often. These negative aftereffects of exposure to collisions are spatially localized in retinotopic coordinates, the reference frame shared by the retina and visual cortex. They are not explained by adaptation to other stimulus features and reveal visual routines in retinotopic cortex that detect and adapt to cause and effect in simple collision stimuli.