There is considerable evidence for an association between obesity and impaired executive function (EF) in adolescents and adults. However, little research has examined EF in overweight or obese children, and data on EF in underweight individuals are lacking. In addition, there is no consensus on the directionality of the relationship between Body Mass Index (BMI) and EF, and longitudinal studies are rare. The present study therefore examined whether children differ in their performance on a battery of EF tasks depending on their weight status (underweight, normal-weight, overweight), and investigated the longitudinal cross-lagged associations between EF and BMI. Hot EF (delay of gratification, affective decision-making), cool EF (attention shifting, inhibition, working memory [WM] updating), and BMI were assessed in 1,657 German elementary-school children at two time points, approximately one year apart. Overweight children exhibited slightly poorer attention shifting, WM updating, and affective decision-making compared to normal-weight children. Unexpectedly, they showed no deficits in inhibition or delay of gratification. EF levels of underweight children did not differ significantly from those of normal-weight children. Furthermore, poor attention shifting and enhanced affective decision-making predicted a slightly higher BMI one year later, and a higher BMI in turn predicted poorer attention shifting and WM updating one year later. The latter association between BMI and subsequent EF scores, however, diminished when controlling for socioeconomic status. The results indicate that hot and cool EF play a role in children's weight development and might be promising factors to address in preventive interventions.
From about 7 months of age onward, infants start to reliably fixate the goal of an observed action, such as a grasp, before the action is complete. The available research has identified a variety of factors that influence such goal-anticipatory gaze shifts, including experience with the shown action events and familiarity with the observed agents. However, the underlying cognitive processes are still heavily debated. We propose that our minds (i) tend to structure sensorimotor dynamics into probabilistic, generative models that predict events and event boundaries, and (ii) choose actions with the objective of minimizing predicted uncertainty. We implement this proposition by means of event-predictive learning and active inference. The implemented learning mechanism induces an inductive, event-predictive bias, thus developing schematic encodings of experienced events and event boundaries. The implemented active inference principle chooses actions that minimize expected future uncertainty. We train our system on multiple object-manipulation events. As a result, goal-anticipatory gaze shifts emerge while the system learns about object manipulations: the model starts fixating the inferred goal at the start of an observed event once it has sampled some experience with possible events and when a familiar agent (i.e., a hand) is involved. Meanwhile, the model keeps reactively tracking an unfamiliar agent (i.e., a mechanical claw) performing the same movement. We qualitatively compare these modeling results to behavioral data of infants and conclude that event-predictive learning combined with active inference may be critical for eliciting goal-anticipatory gaze behavior in infants.
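The action-selection principle summarized in this abstract — choosing the next fixation so as to minimize expected future uncertainty — can be illustrated with a minimal toy sketch. This is not the authors' implementation; the goal belief, the observation models, and the entropy-based objective below are simplified illustrative assumptions:

```python
import math

def entropy(p):
    """Shannon entropy of a discrete probability distribution."""
    return -sum(q * math.log(q) for q in p if q > 0)

def expected_entropy_after_fixation(belief, likelihoods):
    """Expected posterior entropy of the goal belief after a fixation.

    `likelihoods` has one row per possible observation; row[g] is
    P(observation | goal g). Bayes' rule gives the posterior for each
    observation, weighted by that observation's marginal probability.
    """
    exp_h = 0.0
    for lik in likelihoods:
        joint = [b * l for b, l in zip(belief, lik)]
        p_obs = sum(joint)
        if p_obs > 0:
            posterior = [j / p_obs for j in joint]
            exp_h += p_obs * entropy(posterior)
    return exp_h

def choose_fixation(belief, fixation_models):
    """Pick the fixation whose predicted observation most reduces
    uncertainty about the action goal (minimal expected entropy)."""
    return min(
        fixation_models,
        key=lambda f: expected_entropy_after_fixation(belief, fixation_models[f]),
    )

# Two candidate goal objects, uniform prior belief over them.
belief = [0.5, 0.5]
# Hypothetical observation models: looking at the goal region is highly
# diagnostic of which goal will be reached; tracking the hand is not.
fixation_models = {
    "goal_region": [[0.9, 0.1],   # diagnostic cue A
                    [0.1, 0.9]],  # diagnostic cue B
    "hand":        [[0.5, 0.5],
                    [0.5, 0.5]],
}
print(choose_fixation(belief, fixation_models))  # -> goal_region
```

Under these assumptions the model shifts gaze to the goal region rather than passively tracking the moving agent, mirroring the anticipatory behavior the abstract describes; the paper's actual model learns its event-predictive encodings from experience rather than using hand-set likelihoods.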
Around their first year of life, infants are able to anticipate the goal of others' ongoing actions. For instance, 12-month-olds anticipate the goal of everyday feeding actions and manual actions such as reaching and grasping. However, little is known about whether the salience of the goal influences infants' online assessment of others' actions. The aim of the current eye-tracking study was to elucidate infants' ability to anticipate reaching actions depending on the visual salience of the goal object. In Experiment 1, 12-month-old infants' goal-directed gaze shifts were recorded as they observed a hand reaching for and grasping either a large (high-salience condition) or a small (low-salience condition) goal object. Infants exhibited predictive gaze shifts significantly earlier when the observed hand reached for the large goal object than when it reached for the small goal object. In addition, findings revealed rapid learning over the course of trials in the high-salience condition and no learning in the low-salience condition. Experiment 2 demonstrated that the results could not simply be attributed to the different grip aperture of the hand when reaching for small versus large objects. Together, our data indicate that by the end of their first year of life, infants rely on information about goal salience to make inferences about the action goal.
We investigated whether 12-month-old infants rely on information about the certainty of goal selection in order to predict observed reaching actions. Infants' goal-directed gaze shifts were recorded as they observed action sequences in a multiple-goals design. We found that 12-month-old infants exhibited gaze shifts significantly earlier when the observed hand reached for the same goal object in all trials (frequent condition) than when the observed hand reached for different goal objects across trials (nonfrequent condition). Infants in the frequent condition were also significantly more accurate at predicting the action goal than infants in the nonfrequent condition. In addition, findings revealed rapid learning over the course of trials when goal selection was certain and no learning when it was uncertain. Together, our data indicate that by the end of their first year of life, infants rely on information about the certainty of goal selection to make inferences about others' action goals.
Speech and action sequences are continuous streams of information that can be segmented into sub-units. In both domains, this segmentation can be facilitated by perceptual cues contained within the information stream. In speech, prosodic cues (e.g., a pause, pre-boundary lengthening, and pitch rise) mark boundaries between words and phrases, while boundaries between actions of an action sequence can be marked by kinematic cues (e.g., a pause, pre-boundary deceleration). The processing of prosodic boundary cues evokes an event-related potential (ERP) component known as the Closure Positive Shift (CPS), and it is possible that the CPS reflects domain-general cognitive processes involved in segmentation, given that the CPS is also evoked by boundaries between sub-units of non-speech auditory stimuli. This study further probed the domain-generality of the CPS and its underlying processes by investigating electrophysiological correlates of the processing of boundary cues in sequences of spoken verbs (auditory stimuli; Experiment 1; N = 23 adults) and actions (visual stimuli; Experiment 2; N = 23 adults). The EEG data from both experiments revealed a CPS-like, broadly distributed positivity during the 250 ms prior to the onset of the post-boundary word or action. This indicates similar electrophysiological correlates of boundary processing across domains and suggests that the cognitive processes underlying speech and action segmentation might also be shared.
Human infants can segment action sequences into their constituent actions already during the first year of life. However, work to date has almost exclusively examined the role of infants' conceptual knowledge of actions and their outcomes in driving this segmentation. The present study examined electrophysiological correlates of infants' processing of lower-level perceptual cues that signal a boundary between two actions of an action sequence. Specifically, we tested the effect of kinematic boundary cues (pre-boundary lengthening and pause) on 12-month-old infants' (N = 27) processing of a sequence of three arbitrary actions, performed by an animated figure. Using the Event-Related Potential (ERP) approach, evidence of a positivity following the onset of the boundary cues was found, in line with previous work that has found an ERP positivity (Closure Positive Shift, CPS) related to boundary processing in auditory stimuli and action sequences in adults. Moreover, an ERP negativity (Negative Central, Nc) indicated that infants' encoding of the post-boundary action was modulated by the presence or absence of prior boundary cues. We therefore conclude that 12-month-old infants are sensitive to lower-level perceptual kinematic boundary cues, which can support segmentation of a continuous stream of movement into individual action units.
Theory of mind is one of the most important cognitive factors in social information-processing, and deficits in theory of mind have been linked to aggressive behavior in childhood. The present longitudinal study investigated reciprocal links between theory of mind and two forms of aggression – physical and relational – in middle childhood with three data waves over 3 years. Theory of mind was assessed by participants’ responses to cartoons, and physical and relational aggression were assessed through teacher reports in a community sample of 1657 children (mean age at Time 1: 8 years). Structural equation modeling analyses showed that theory of mind was a negative predictor of subsequent physical and relational aggression, both from Time 1 to Time 2 as well as from Time 2 to Time 3. Moreover, relational aggression was a negative predictor of theory of mind from Time 1 to Time 2. There were no significant gender or age differences in the tested pathways. The results suggest that reciprocal and negative longitudinal relations exist between children’s theory of mind and aggressive behavior. Our study extends current knowledge about the development of such relations across middle childhood.
One of the earliest categorical distinctions to be made by preverbal infants is the animate-inanimate distinction. To explore the neural basis for this distinction in 7-8-month-olds, equal numbers of animal and furniture pictures were presented in an ERP paradigm. A total of 118 pictures, all different from one another, were presented in a semi-randomized order for 1000 ms each. Infants' brain responses to exemplars from the two categories differed systematically with respect to the negative central component (Nc: 400-600 ms) at anterior channels. More specifically, the Nc was enhanced for animals in one subgroup of infants, and for furniture items in another subgroup. Explorative analyses related to categorical priming further revealed category-specific differences in brain responses in the late time window (650-1550 ms) at right frontal channels: unprimed stimuli (preceded by a different-category item) elicited a more positive response than primed stimuli (preceded by a same-category item). In sum, these findings suggest that the infant brain discriminates between exemplars of the two global domains. Given the design of our task, we conclude that processes of category identification are more likely to account for our findings than processes of on-line category formation during the experimental session.
Simple geometric shapes that move in a self-propelled manner and violate Newtonian laws of motion by acting against gravitational forces tend to be judged as animate, whereas objects that change their motion only due to external causes are more likely to be judged as inanimate. How the developing brain supports the perception of animacy in early ontogeny is currently unknown. The aim of this study was to use ERP techniques to determine whether the negative central component (Nc), a waveform related to attention allocation, was differentially affected when an infant observed animate or inanimate motion. Short animated movies showing a marble moving along a marble run in either an animate or an inanimate manner were presented to fifteen 9-month-old infants. The ERPs were time-locked to a still frame representing animate or inanimate motion that was displayed after each movie. We found that 9-month-olds are able to discriminate between animate and inanimate motion based on motion cues alone, and that they most likely allocate more attentional resources to the inanimate motion. The present data contribute to our understanding of the animate-inanimate distinction and of the Nc as a correlate of infant cognitive processing.