Due to their ability to capture attention, emotional stimuli tend to benefit from enhanced perceptual processing, which can be helpful when such stimuli are task-relevant but hindering when they are task-irrelevant. Altered emotion-attention interactions have been associated with symptoms of affective disturbances, and emerging research focuses on improving emotion-attention interactions to prevent or treat affective disorders. In line with the Human Affectome Project's emphasis on linguistic components, we also analyzed the language used to describe attention-related aspects of emotion, and highlighted terms related to domains such as conscious awareness, motivational effects of attention, social attention, and emotion regulation. These terms were discussed within a broader review of available evidence regarding the neural correlates of (1) Emotion-Attention Interactions in Perception, (2) Emotion-Attention Interactions in Learning and Memory, (3) Individual Differences in Emotion-Attention Interactions, and (4) Training and Interventions to Optimize Emotion-Attention Interactions. This comprehensive approach enabled an integrative overview of the current knowledge regarding the mechanisms of emotion-attention interactions at multiple levels of analysis, and identification of emerging directions for future investigations.
Visual perception is a complex and dynamic process that plays a crucial role in how we perceive and interact with the world. The eyes move in a sequence of saccades and fixations, actively modulating perception by moving different parts of the visual world into focus. Eye movement behavior can therefore offer rich insights into the underlying cognitive mechanisms and decision processes. Computational models in combination with a rigorous statistical framework are critical for advancing our understanding in this field, facilitating the testing of theory-driven predictions and accounting for observed data. In this thesis, I investigate eye movement behavior through the development of two mechanistic, generative, and theory-driven models. The first model is based on experimental research regarding the distribution of attention, particularly around the time of a saccade, and explains statistical characteristics of scan paths. The second model implements a self-avoiding random walk within a confining potential to represent the microscopic fixational drift, which is present even while the eye is at rest, and investigates the relationship to microsaccades. Both models are implemented in a likelihood-based framework, which supports the use of data assimilation methods to perform Bayesian parameter inference at the level of individual participants, analyses of the marginal posteriors of the interpretable parameters, model comparisons, and posterior predictive checks. The application of these methods enables a thorough investigation of individual variability in the space of parameters. Results show that dynamical modeling and the data assimilation framework are highly suitable for eye movement research and, more generally, for cognitive modeling.
Rodin has it! (2020)
We report a new discovery on the role of hands in guiding attention, using the classic Stroop effect as our assay. We show that the Stroop effect diminishes, and hence selective attention improves, when observers hold their chin, emulating Rodin's famous sculpture, "The Thinker." In two experiments we show that the Rodin posture improves the selectivity of attention as efficiently as holding the hands near the visual stimulus (the near-hands effect). Because spatial proximity to the displayed stimulus is neither present nor intended, the presence of the Rodin effect implies that attentional prioritization by the hands is not limited to the space between the hands.
The perceptual span describes the size of the visual field from which information is obtained during a fixation in reading. Its size depends on characteristics of the writing system and the reader, but, according to the foveal load hypothesis, it is also adjusted dynamically as a function of lexical processing difficulty. Using the moving window paradigm to manipulate the amount of preview, here we directly test whether the perceptual span shrinks as foveal word difficulty increases. We computed the momentary size of the span from word-based eye-movement measures as a function of foveal word frequency, allowing us to separately describe the perceptual span for information affecting spatial saccade targeting and temporal saccade execution. First fixation duration and gaze duration on the upcoming (parafoveal) word N + 1 were significantly shorter when the current (foveal) word N was more frequent. We show that the word frequency effect is modulated by window size. Fixation durations on word N + 1 decreased with high-frequency words N, but only for large windows, that is, when sufficient parafoveal preview was available. This provides strong support for the foveal load hypothesis. To investigate the development of the foveal load effect, we analyzed data from three waves of a longitudinal study on the perceptual span with German children in Grades 1 to 6. Perceptual span adjustment emerged early in development, at around second grade, and remained stable in later grades. We conclude that the local modulation of the perceptual span indicates a general cognitive process, perhaps an attentional gradient with rapid readjustment.
When infants observe a human grasping action, experience-based accounts predict that all infants familiar with grasping actions should be able to predict the goal regardless of additional agency cues such as an action effect. Cue-based accounts, however, suggest that infants use agency cues to identify and predict action goals when the action or the agent is not familiar. From these accounts, we hypothesized that younger infants would need additional agency cues such as a salient action effect to predict the goal of a human grasping action, whereas older infants should be able to predict the goal regardless of agency cues. In three experiments, we presented 6-, 7-, and 11-month-olds with videos of a manual grasping action presented either with or without an additional salient action effect (Exp. 1 and 2), or we presented 7-month-olds with videos of a mechanical claw performing a grasping action presented with a salient action effect (Exp. 3). The 6-month-olds showed tracking gaze behavior, and the 11-month-olds showed predictive gaze behavior, regardless of the action effect. However, the 7-month-olds showed predictive gaze behavior in the action-effect condition, but tracking gaze behavior in the no-action-effect condition and in the action-effect condition with a mechanical claw. The results therefore support the idea that salient action effects are especially important for infants' goal predictions from 7 months on, and that this facilitating influence of action effects is selective for the observation of human hands.
Previous research found that memory is not only better for emotional information but also for neutral information that has been encoded in the context of an emotional event. In the present ERP study, we investigated two factors that may influence memory for neutral and emotional items: temporal proximity between emotional and neutral items during encoding, and retention interval (immediate vs. delayed). Forty-nine female participants incidentally encoded 36 unpleasant and 108 neutral pictures (36 neutral pictures preceded an unpleasant picture, 36 followed an unpleasant picture, and 36 neutral pictures were preceded and followed by neutral pictures) and participated in a recognition memory task either immediately (N = 24) or 1 week (N = 25) after encoding. Results showed better memory for emotional pictures relative to neutral pictures. Accordingly, enhanced centroparietal old/new differences (500-900 ms) during recognition were observed for unpleasant compared to neutral pictures, most pronounced for the 1-week interval. Picture position effects, however, were only subtle. During encoding, late positive potentials were slightly lower for neutral pictures following unpleasant ones, but only at trend level. To summarize, we could replicate and extend previous ERP findings showing that emotionally arousing events are better recollected than neutral events, particularly when memory is tested after longer retention intervals. Picture position during encoding, however, had only small effects on elaborative processing and no effects on memory retrieval.
During reading, rapid eye movements (saccades) shift the reader's line of sight from one word to another for high-acuity visual information processing. While experimental data and theoretical models show that readers aim at word centers, eye-movement (oculomotor) accuracy is low compared to other tasks. As a consequence, distributions of saccadic landing positions indicate large (i) random errors and (ii) systematic over- and undershoot of word centers, which additionally depend on saccade lengths (McConkie et al., Vision Research, 28(10), 1107-1118, 1988). Here we show that both error components can be simultaneously reduced by reading German texts from right to left (N = 32). We used our experimental data to test a Bayesian model of saccade planning. First, the experimental data are consistent with the model. Second, the model makes specific predictions about the effects of the precision of the prior and the (sensory) likelihood. Our results suggest that a more precise sensory likelihood can explain the reduction of both random and systematic error components.
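The Bayesian account sketched in this abstract combines a prior over landing positions with a sensory likelihood, and the posterior is pulled toward whichever source is more precise. A minimal illustration of this precision-weighted combination for two Gaussian sources is given below; all numeric values are hypothetical and are not estimates from the study.

```python
def posterior(mu_prior, sigma_prior, mu_sensory, sigma_sensory):
    """Posterior mean and s.d. from a Gaussian prior and a Gaussian
    sensory likelihood, combined by precision weighting."""
    tau_p = 1.0 / sigma_prior**2    # prior precision
    tau_s = 1.0 / sigma_sensory**2  # sensory precision
    mu = (tau_p * mu_prior + tau_s * mu_sensory) / (tau_p + tau_s)
    sigma = (tau_p + tau_s) ** -0.5
    return mu, sigma

# Hypothetical example: prior centered at 0 (habitual landing site),
# sensed word center at 3 letter positions. A more precise sensory
# likelihood pulls the posterior toward the sensed target (less
# systematic error) and shrinks its spread (less random error).
mu_lo, sd_lo = posterior(0.0, 1.0, 3.0, sigma_sensory=2.0)  # imprecise sensing
mu_hi, sd_hi = posterior(0.0, 1.0, 3.0, sigma_sensory=0.5)  # precise sensing
```

Under this scheme, increasing sensory precision simultaneously reduces the systematic undershoot (posterior mean closer to the target) and the random scatter (smaller posterior standard deviation), matching the pattern the abstract reports.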
The interplay between cognitive and oculomotor processes during reading can be explored when the spatial layout of text deviates from the typical display. In this study, we investigate various eye-movement measures during reading of text with experimentally manipulated layouts (word-wise and letter-wise mirror-reversed text as well as inverted and scrambled text). While typical findings (e.g., longer mean fixation times, shorter mean saccade lengths) in reading manipulated texts compared to normal texts were reported in earlier work, little is known about the changes of oculomotor targeting observed in within-word landing positions under the above text layouts. Here we carry out precise analyses of landing positions and find substantial changes in the so-called launch-site effect in addition to the expected overall slow-down of reading performance. Specifically, during reading of our manipulated text conditions with reversed letter order (against the overall reading direction), we find a reduced launch-site effect, while in all other manipulated text conditions, we observe an increased launch-site effect. Our results clearly indicate that the oculomotor system is highly adaptive when confronted with unusual reading conditions.