Many studies have suggested that emotional stimuli orient and engage attention. There is also evidence that animate stimuli, such as those from humans and animals, cause attentional bias. However, categorical and emotional factors are usually mixed, and it is unclear to what extent human context influences attentional allocation. To address this issue, we tracked participants' eye movements while they viewed pictures with either animals or inanimate objects (i.e., category) as focal objects. These pictures had either negative or neutral emotional valence, and either human body parts or nonhuman parts appeared near the focal objects (i.e., context). The pictures' valence, arousal, position, size, and most low-level visual features were matched across categories. The results showed that nonhuman animals were more likely to be attended to, and were attended to for longer, than inanimate objects. The same pattern held for human contexts (vs. nonhuman contexts). The effects of emotional valence, category, and context interacted. Specifically, in images with negative valence, focal animals and objects with human context received comparable numbers of gaze fixations and comparable gaze durations. These results highlighted the attentional bias toward animate parts of a picture and clarified that the effects of category, valence, and picture context interact to influence attentional allocation.
The diagnosis and treatment of breast cancer are emotionally aversive and stressful life events, which can lead to impaired cognitive functioning and mental health. Breast cancer survivors responding with repressive emotion regulation strategies often show less adaptive coping and adverse outcomes. We investigated cognitive functioning and neural correlates of emotion processing using event-related potentials (ERPs). Self-report measures of depression, anxiety, and fatigue, as well as hair cortisol as an index of chronic stress, were assessed. Twenty breast cancer survivors (BCS) and 31 carefully matched healthy controls participated in the study. After neuropsychological testing and subjective assessments, participants viewed 30 neutral, 30 unpleasant, and 30 pleasant pictures, and ERPs were recorded. Recognition memory was tested 1 week later. BCS reported stronger complaints about cognitive impairments and more symptoms of depression, anxiety, and fatigue. Moreover, they showed elevated hair cortisol levels. Except for verbal memory, cognitive functioning was predominantly in the normative range. Recognition memory performance was decreased in cancer survivors, especially for emotional contents. In ERPs, survivors showed smaller late positive potential amplitudes for unpleasant pictures relative to controls in a later time window, which may indicate less elaborative processing of this material. Taken together, we found cognitive impairments in BCS in verbal memory, impaired emotional picture memory accuracy, and reduced neural activity when breast cancer survivors were confronted with unpleasant materials. Further studies with larger sample sizes, however, are needed to evaluate the relationship between altered emotion processing and reduced memory in BCS in order to develop new treatment strategies.
Previous research found that memory is better not only for emotional information but also for neutral information that has been encoded in the context of an emotional event. In the present ERP study, we investigated two factors that may influence memory for neutral and emotional items: temporal proximity between emotional and neutral items during encoding, and retention interval (immediate vs. delayed). Forty-nine female participants incidentally encoded 36 unpleasant and 108 neutral pictures (36 neutral pictures preceded an unpleasant picture, 36 followed an unpleasant picture, and 36 neutral pictures were preceded and followed by neutral pictures) and participated in a recognition memory task either immediately (N=24) or 1 week (N=25) after encoding. Results showed better memory for emotional pictures relative to neutral pictures. Accordingly, enhanced centroparietal old/new differences (500-900 ms) during recognition were observed for unpleasant compared to neutral pictures, most pronounced for the 1-week interval. Picture position effects, however, were only subtle. During encoding, late positive potentials were slightly lower for neutral pictures following unpleasant ones, but only at trend level. To summarize, we replicated and extended previous ERP findings showing that emotionally arousing events are better recollected than neutral events, particularly when memory is tested after longer retention intervals. Picture position during encoding, however, had only small effects on elaborative processing and no effects on memory retrieval.
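The encoding design described above (each unpleasant picture flanked by one "preceding" and one "following" neutral picture, with the remaining neutral pictures embedded in all-neutral runs) can be sketched as a trial-list construction. This is an illustrative reconstruction, not the study's actual stimulus code; the function name and picture IDs are placeholders.

```python
import random

def build_encoding_list(n_unpleasant=36, n_neutral=108, seed=0):
    """Sketch of the incidental-encoding sequence: 36 N-U-N triplets
    (neutral, unpleasant, neutral) plus all-neutral N-N-N triplets
    for the 36 neutral pictures flanked only by neutrals."""
    rng = random.Random(seed)
    unpleasant = [f"U{i:02d}" for i in range(n_unpleasant)]
    neutral = [f"N{i:03d}" for i in range(n_neutral)]
    rng.shuffle(unpleasant)
    rng.shuffle(neutral)
    triplets = []
    # one neutral before and one after each unpleasant picture
    for u in unpleasant:
        triplets.append([neutral.pop(), u, neutral.pop()])
    # remaining neutrals are preceded and followed by neutrals
    while neutral:
        triplets.append([neutral.pop(), neutral.pop(), neutral.pop()])
    rng.shuffle(triplets)
    return [pic for triplet in triplets for pic in triplet]

trials = build_encoding_list()
```

Because every triplet begins and ends with a neutral picture, no two unpleasant pictures can ever be adjacent, which keeps the three neutral-picture position conditions well defined.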
Previous research on the interplay between static manual postures and visual attention revealed enhanced visual selection near the hands (near-hand effect). During active movements there is also superior visual performance when moving toward compared to away from the stimulus (direction effect). The "modulated visual pathways" hypothesis argues that differential involvement of magno- and parvocellular visual processing streams causes the near-hand effect. The key finding supporting this hypothesis is an increase in temporal and a reduction in spatial processing in near-hand space (Gozli et al., 2012). Since this hypothesis has so far only been tested with static hand postures, we provide a conceptual replication of Gozli et al.'s (2012) result with moving hands, thus also probing the generality of the direction effect. Participants performed temporal or spatial gap discriminations while their right hand was moving below the display. In contrast to Gozli et al. (2012), temporal gap discrimination was superior at intermediate rather than near hand proximity. In spatial gap discrimination, a direction effect without a hand proximity effect suggests that pragmatic attentional maps overshadowed temporal/spatial processing biases for far/near-hand space.
Wick, K, Kriemler, S, and Granacher, U. Effects of a strength-dominated exercise program on physical fitness and cognitive performance in preschool children. J Strength Cond Res 35(4): 983-990, 2021. Childhood is characterized by high neuroplasticity that affords qualitative rather than quantitative components of physical activity to maximize the potential to sufficiently develop motor skills and foster long-term engagement in regular physical activity. This study examined the effects of an integrative strength-dominated exercise program on measures of physical fitness and cognitive performance in preschool children. Children aged 4-6 years from 3 kindergartens were randomized into an intervention group (INT; n = 32) or a control group (CON; n = 22). The 10-week intervention was conducted 3 times per week (each session lasted 30 minutes) and included exercises for the promotion of muscle strength and power, coordination, and balance. Pre- and post-training tests were conducted for the assessment of muscle strength (i.e., handgrip strength), muscle power (i.e., standing long jump), balance (i.e., timed single-leg stand), coordination (i.e., hopping on the right/left leg), and attentional span (i.e., "Konzentrations-Handlungsverfahren für Vorschulkinder" [concentration-action procedure for preschoolers]). Results from 2 x 2 repeated-measures analysis of covariance revealed a significant (p <= 0.05) and a near-significant (p = 0.051) group x time interaction for the standing long jump test and the Konzentrations-Handlungsverfahren, respectively. Post hoc tests showed significant pre-post changes for the INT group (p < 0.001; d = 1.53) but not the CON group (p = 0.72; d = 0.83). Our results indicate that a 10-week strength-dominated exercise program increased jump performance, with a concomitant trend toward improvements in the attentional capacity of preschool children. Thus, we recommend implementing this type of exercise program for preschoolers.
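The pre-post effect sizes (Cohen's d) reported above can be computed from the two time points' means and a pooled standard deviation. The sketch below uses one common convention for pre-post d; the article does not state its exact formula, and the jump scores here are invented placeholders, not the study's data.

```python
import math

def cohens_d(pre, post):
    """Cohen's d for a pre-post change, standardized by the pooled
    standard deviation of the two time points (one common convention)."""
    n_pre, n_post = len(pre), len(post)
    m_pre = sum(pre) / n_pre
    m_post = sum(post) / n_post
    var_pre = sum((x - m_pre) ** 2 for x in pre) / (n_pre - 1)
    var_post = sum((x - m_post) ** 2 for x in post) / (n_post - 1)
    sd_pooled = math.sqrt(((n_pre - 1) * var_pre + (n_post - 1) * var_post)
                          / (n_pre + n_post - 2))
    return (m_post - m_pre) / sd_pooled

# hypothetical standing-long-jump scores in cm (NOT the study's data)
pre = [80.0, 85.0, 90.0, 95.0]
post = [95.0, 100.0, 105.0, 110.0]
d = cohens_d(pre, post)
```

By conventional benchmarks, d around 0.8 or above counts as a large effect, so the reported d = 1.53 for the intervention group is substantial.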
This study examines the discourse basis for referent accessibility and its relation to the choice of referring expressions by children with Autism Spectrum Disorder (ASD) and typically developing children. The aim is to delineate how the linguistic and extra-linguistic context affects referent accessibility to the speaker. The study also examines the degree to which accessibility effects are modulated by cognitive factors such as working memory capacity. In the study, the contrast levels between the referent and a competitor (one contrast/two contrasts) and the syntactic prominence of the referent (subject/object position in the preceding question) were manipulated in an elicited production task. The results provide evidence that the referring expressions of children with ASD correlate with the discourse status of referents to a similar extent as in typically developing controls. All children were more likely to refer with lexical NPs to referents that contrasted on two levels with a highly prominent competitor, compared to referents that contrasted on one level. They were also more likely to produce pronouns for referents previously mentioned in the subject than the object position. The effect of both discourse factors was modulated by the age and working memory capacity of the children with and without ASD. Accordingly, the study suggests that children with ASD do not generally differ from children with typical development in their referential choices when the discourse status of a referent allows them to model the referent's accessibility from their own discourse perspective in a way that is modulated by working memory capacity.
Visual perception is a complex and dynamic process that plays a crucial role in how we perceive and interact with the world. The eyes move in a sequence of saccades and fixations, actively modulating perception by moving different parts of the visual world into focus. Eye movement behavior can therefore offer rich insights into the underlying cognitive mechanisms and decision processes. Computational models in combination with a rigorous statistical framework are critical for advancing our understanding in this field, facilitating the testing of theory-driven predictions and accounting for observed data. In this thesis, I investigate eye movement behavior through the development of two mechanistic, generative, and theory-driven models. The first model is based on experimental research regarding the distribution of attention, particularly around the time of a saccade, and explains statistical characteristics of scan paths. The second model implements a self-avoiding random walk within a confining potential to represent the microscopic fixational drift, which is present even while the eye is at rest, and investigates the relationship to microsaccades. Both models are implemented in a likelihood-based framework, which supports the use of data assimilation methods to perform Bayesian parameter inference at the level of individual participants, analyses of the marginal posteriors of the interpretable parameters, model comparisons, and posterior predictive checks. The application of these methods enables a thorough investigation of individual variability in the space of parameters. Results show that dynamical modeling and the data assimilation framework are highly suitable for eye movement research and, more generally, for cognitive modeling.
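The second model described above, a self-avoiding random walk within a confining potential, can be illustrated with a minimal grid simulation: the walker deposits decaying activation on visited cells and preferentially moves to neighbors with low activation and low quadratic (confining) potential. This is a simplified sketch of the general idea only; all parameter names, values, and the softmax-style move rule are illustrative assumptions, not the thesis's actual implementation.

```python
import math
import random

def simulate_drift(steps=2000, size=21, beta=1.0, decay=0.01, seed=1):
    """Simplified self-avoiding random walk in a confining potential:
    a toy model of fixational drift on a discrete grid."""
    rng = random.Random(seed)
    c = size // 2
    activation = [[0.0] * size for _ in range(size)]
    x, y = c, c
    path = [(x, y)]
    for _ in range(steps):
        moves = [(x + dx, y + dy)
                 for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                 if 0 <= x + dx < size and 0 <= y + dy < size]
        # a neighbor's "energy": self-avoidance term + confining potential
        energies = [activation[i][j] + 0.05 * ((i - c) ** 2 + (j - c) ** 2)
                    for i, j in moves]
        weights = [math.exp(-beta * e) for e in energies]
        # sample the next position with softmax-like probabilities
        r = rng.random() * sum(weights)
        acc = 0.0
        for (i, j), w in zip(moves, weights):
            acc += w
            if r <= acc:
                x, y = i, j
                break
        activation[x][y] += 1.0
        # activation decays, so previously visited cells become
        # accessible again while the walk stays bounded
        for row in activation:
            for k in range(size):
                row[k] *= 1.0 - decay
        path.append((x, y))
    return path

path = simulate_drift()
```

Because the simulation defines an explicit generative process, its step probabilities yield a likelihood, which is what makes the Bayesian parameter inference and posterior predictive checks mentioned above possible.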
Cognitive psychology is traditionally interested in the interaction of perception, cognition, and behavioral control. Investigating eye movements in reading constitutes a field of research in which the processes and interactions of these subsystems can be studied in a well-defined environment. The following questions are pursued: How much information is visually perceived during a fixation? How is processing achieved and temporally coordinated, from visual letter encoding to final sentence comprehension? And how do such processes manifest in behavior, such as the control of eye movements during reading? Various theoretical models have been proposed to account for the specific eye-movement behavior in reading (for a review see Reichle, Rayner, & Pollatsek, 2003). Some models are based on the idea of shifting attention serially from one word to the next within the sentence, whereas others propose distributed attention, allocating processing resources to more than one word at a time. As attention is assumed to drive word recognition, one major difference between these models is whether word processing must occur in strict serial order or can be achieved in parallel. In spite of this crucial difference in the time course of word processing, both model classes explain many of the benchmark effects in reading equally well. In fact, there seems to be little empirical evidence that challenges the models to a point at which their basic assumptions could be falsified. One issue often perceived as decisive in the debate on serial versus parallel word processing is how not-yet-fixated words to the right of fixation affect eye movements. Specifically, evidence is discussed as to what spatial extent such parafoveal words are previewed and how this influences current and subsequent word processing. Four experiments investigated parafoveal processing close to the spatial limits of the perceptual span.
The present work aims to go beyond mere existence proofs of previewing words at such spatial distances. By introducing a manipulation that dissociates the sources of long-range preview effects, benefits and costs of parafoveal processing can be investigated in a single analysis, and their differing impact can be tracked across a three-word target region. In addition, the same manipulation evaluates the role of oculomotor error as the cause of non-local distributed effects. In this respect, the results contribute to a better understanding of the time course of word processing inside the perceptual span and of attention allocation during reading.
Older adults often experience hearing difficulties in multitalker situations. Attentional control of auditory perception is crucial in situations where a plethora of auditory inputs compete for further processing. We combined an intensity-modulated dichotic listening paradigm with attentional manipulations to study adult age differences in the interplay between perceptual saliency and attentional control of auditory processing. When confronted with two competing sources of verbal auditory input, older adults modulated their attention less flexibly and were more driven by perceptual saliency than younger adults. These findings suggest that aging severely impairs the attentional regulation of auditory perception.