Action effects have been proposed to be important for infants’ processing of goal-directed actions. In this study, 11-month-olds showed equally fast predictive gaze shifts to a claw’s action goal when the grasping action was presented either with three agency cues (self-propelled movement, equifinality of goal achievement, and a salient action effect) or with a salient action effect alone, but they showed merely tracking gaze when the claw displayed only self-propelled movement and equifinality of goal achievement. The results suggest that action effects, compared with purely kinematic cues, seem to be especially important for infants' online processing of goal-directed actions.
Previous research indicates that infants’ prediction of the goals of observed actions is influenced by their own experience with the type of agent performing the action (i.e., human hand vs. non-human agent) as well as by action-relevant features of goal objects (e.g., object size). The present study investigated the combined effects of these factors on 12-month-olds’ action prediction. Infants’ (N = 49) goal-directed gaze shifts were recorded as they observed 14 trials in which either a human hand or a mechanical claw reached for a small goal area (low-saliency goal) or a large goal area (high-saliency goal). Only infants who had observed the human hand reaching for a high-saliency goal fixated the goal object ahead of time, and they rapidly learned to predict the action goal across trials. By contrast, infants in all other conditions did not track the observed action in a predictive manner, and their gaze shifts to the action goal did not change systematically across trials. Thus, high-saliency goals seem to boost infants’ predictive gaze shifts during the observation of human manual actions, but not of actions performed by a mechanical device. This supports the assumption that infants’ action predictions are based on interactive effects of action-relevant object features (e.g., size) and infants’ own action experience.
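Several of the studies above score gaze shifts as predictive or reactive. A minimal sketch of that common measure (not any of the authors' actual analysis pipelines; function names and thresholds are illustrative assumptions): a gaze shift counts as predictive when the eyes reach the goal area before the agent does, i.e., when its latency relative to the agent's arrival is negative.

```python
# Illustrative sketch of predictive vs. reactive gaze scoring.
# A negative gaze latency (gaze arrives before the agent) is
# conventionally scored as predictive; otherwise reactive.

def score_gaze_shift(gaze_arrival_ms: float, agent_arrival_ms: float) -> str:
    """Return 'predictive' if gaze reaches the goal area before the
    agent arrives there, otherwise 'reactive'."""
    latency = gaze_arrival_ms - agent_arrival_ms
    return "predictive" if latency < 0 else "reactive"

def proportion_predictive(trials) -> float:
    """Proportion of predictive trials, given an iterable of
    (gaze_arrival_ms, agent_arrival_ms) pairs."""
    scores = [score_gaze_shift(g, a) for g, a in trials]
    return scores.count("predictive") / len(scores)
```

Tracking such proportions across trials is one way learning effects like those reported above (rapid prediction learning in some conditions only) can be quantified.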
For the processing of goal-directed actions, some accounts emphasize the importance of experience with the action or the agent. Other accounts stress the importance of agency cues. We investigated the impact of agency cues on 11-month-olds’ and adults’ goal anticipation for a grasping action performed by a mechanical claw. Using an eye tracker, we measured goal anticipations in two conditions in which the claw was displayed either with or without agency cues. In two experiments, 11-month-olds were predictive when agency cues were present, but reactive when no agency cues were presented. Adults were predictive in both conditions. Furthermore, 11-month-olds rapidly learned to predict the goal in the agency condition, but not in the mechanical condition. Adults’ predictions did not change across trials in the agency condition, but slowed in the mechanical condition. Thus, agency cues and one’s own action experience are important for infants’ and adults’ online processing of goal-directed actions by non-human agents.
Research on voluntary action has focused on the question of how we represent our behavior on a motor and cognitive level. However, the question of how we represent voluntary not acting has been completely neglected. The aim of the present study was to investigate the cognitive and motor representation of intentionally not acting. By using an action-effect binding approach, we demonstrate similarities of action and nonaction. In particular, our results reveal that voluntary nonactions can be bound to an effect tone. This finding suggests that effect binding is not restricted to an association between a motor representation and a successive effect (action-effect binding) but can also occur for an intended nonaction and its effect (nonaction-effect binding). Moreover, we demonstrate that nonactions have to be initiated voluntarily in order to elicit nonaction-effect binding.
Communication with young children is often multimodal in nature, involving, for example, language and actions. The simultaneous presentation of information from both domains may boost language learning by highlighting the connection between an object and a word, owing to temporal overlap in the presentation of multimodal input. However, the overlap is not merely temporal but can also covary in the extent to which particular actions co-occur with particular words and objects; e.g., carers typically produce a hopping action when talking about rabbits and a snapping action for crocodiles. The frequency with which actions and words co-occur in the presence of the referents of these words may also impact young children’s word learning. We therefore examined the extent to which consistency in the co-occurrence of particular actions and words impacted children’s learning of novel word–object associations. Children (18 months, 30 months and 36–48 months) and adults were presented with two novel objects and heard their novel labels while different actions were performed on these objects, such that the particular actions and word–object pairings always co-occurred (Consistent group) or varied across trials (Inconsistent group). At test, participants saw both objects and heard one of the labels to examine whether participants recognized the target object upon hearing its label. Growth curve models revealed that 18-month-olds did not learn words for objects in either condition, and 30-month-old and 36- to 48-month-old children learned words for objects only in the Consistent condition, in contrast to adults, who learned words for objects independent of the actions presented. Thus, consistency in the multimodal input influenced word learning in early childhood but not in adulthood. In terms of a dynamic systems account of word learning, our study shows how multimodal learning settings interact with the child’s perceptual abilities to shape the learning experience.
Executive functions (EFs) may help children to regulate their food-intake in an “obesogenic” environment, where energy-dense food is easily available. There is mounting evidence that overweight is associated with diminished hot and cool EFs, and several longitudinal studies found evidence for a predictive effect of hot EFs on children’s bodyweight, but longitudinal research examining the effect of cool EF on weight development in children is still scarce. The current 3-year longitudinal study examined the effect of a latent cool EF factor, which was based on three behavioral EF tasks, on subsequent mean levels and 3-year growth trajectories of body-mass-index z-scores (zBMI). Data from a large sample of children, with zBMI ranging from normal weight to obesity (n = 1474, aged 6–11 years at T1, 52% girls) was analyzed using structural-equation modeling and linear latent growth-curve modeling. Cool EF at the first wave (T1) negatively predicted subsequent zBMI and zBMI development throughout the 3-year period in middle childhood such that children with better EF had a lower zBMI and less steep zBMI growth. These effects were not moderated by the children’s age or gender. In conclusion, as early as in middle childhood, cool EFs seem to support the self-regulation of food-intake and consequently may play a causal role in the multifactorial etiology of overweight.
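The zBMI study above rests on a linear latent growth-curve model: each child has a latent intercept and slope, and cool EF predicts both. A minimal simulated illustration of that idea (all parameter values are hypothetical, chosen only to mirror the reported direction of effects, i.e., better EF predicting lower zBMI and flatter growth):

```python
# Simulated illustration (hypothetical parameters) of a linear latent
# growth curve: zBMI_t = intercept + slope * t + noise, where a child's
# cool-EF score lowers both the latent intercept and the latent slope.

import random

def simulate_child(ef_score: float, waves: int = 4, seed=None):
    """Simulate one child's zBMI across annual measurement waves."""
    rng = random.Random(seed)
    intercept = 0.5 - 0.3 * ef_score + rng.gauss(0, 0.1)   # latent intercept
    slope = 0.10 - 0.05 * ef_score + rng.gauss(0, 0.02)    # latent slope
    return [intercept + slope * t + rng.gauss(0, 0.05) for t in range(waves)]
```

In an actual analysis, the intercept and slope would be estimated as latent factors from the observed waves (e.g., via structural-equation modeling software), rather than simulated.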
Although middle childhood is an important period for the development of hot and cool executive functions (EFs), longitudinal studies investigating trajectories of childhood EF development are still limited and little is known about predictors for individual developmental trajectories. The current study examined the development of two typical facets of cool and hot EFs over a 3-year period during middle childhood, comparing a younger cohort (6- and 7-year-olds at the first wave [T1]; n = 621) and an older cohort (8- and 9-year-olds at T1; n = 975) of children. "Cool" working memory updating (WM) was assessed using a backward digit span task, and "hot" decision making (DM) was assessed using a child variant of the Iowa Gambling Task. Linear latent growth curve analyses revealed evidence for developmental growth as well as interindividual variance in the initial level and rate of change in both EF facets. Initial level of WM was positively associated with age (both between and within cohorts), socioeconomic status, verbal ability, and processing speed, whereas initial levels of DM were, in addition to a (potentially age-related) cohort effect, exclusively predicted by gender, with boys outperforming girls. None of the variables predicted the rate of change, that is, the developmental trajectories. However, younger children, as compared with older children, had slightly steeper WM growth curves over time, hinting at a leveling off in the development of WM during middle childhood. In sum, these data add important evidence to the understanding of hot and cool EF development during middle childhood.
Infants use others' emotional signals to regulate their own object-directed behavior and action reproduction, and they typically produce more actions after having observed positive as compared to negative emotional cues. This study explored infants' understanding of the referential specificity of others' emotional cues when being confronted with two actions that are accompanied by different emotional displays. Selective action reproduction was measured after 18-month-olds (N = 42) had observed two actions directed at the same object, one of which was modeled with a positive emotional expression and the other with a negative emotional expression. Across four trials with different objects, infants' first actions matched the positively-emoted actions more often than the negatively-emoted actions. In comparison with baseline-level, infants' initial performance changed only for the positively-emoted actions, in that it increased during test. Latencies to first object-touch during test did not differ when infants reproduced the positively- or negatively-emoted actions, respectively, indicating that infants related the cues to the respective actions rather than to the object. During demonstration, infants looked relatively longer at the object than at the model's face, with no difference in positive or negative displays. Infants during their second year of life thus capture the action-related referential specificity of others' emotional cues and seem to follow positive signals more readily when actively selecting which of two actions to reproduce preferentially.
Simple geometric shapes that move in a self-propelled manner and violate Newtonian laws of motion by acting against gravitational forces tend to be judged as animate. Objects that change their motion only due to external causes are more likely to be judged inanimate. How the developing brain supports the perception of animacy in early ontogeny is currently unknown. The aim of this study was to use ERP techniques to determine whether the negative central component (Nc), a waveform related to attention allocation, was differentially affected when an infant observed animate or inanimate motion. Short animated movies, each showing a marble moving along a marble run in either an animate or an inanimate manner, were presented to 15 infants aged 9 months. The ERPs were time-locked to a still frame representing animate or inanimate motion that was displayed following each movie. We found that 9-month-olds are able to discriminate between animate and inanimate motion based on motion cues alone and most likely allocate more attentional resources to the inanimate motion. The present data contribute to our understanding of the animate-inanimate distinction and the Nc as a correlate of infant cognitive processing.
One of the earliest categorical distinctions to be made by preverbal infants is the animate-inanimate distinction. To explore the neural basis for this distinction in 7-8-month-olds, an equal number of animal and furniture pictures was presented in an ERP-paradigm. The total of 118 pictures, all looking different from each other, were presented in a semi-randomized order for 1000 ms each. Infants' brain responses to exemplars from both categories differed systematically regarding the negative central component (Nc: 400-600 ms) at anterior channels. More specifically, the Nc was enhanced for animals in one subgroup of infants, and for furniture items in another subgroup of infants. Explorative analyses related to categorical priming further revealed category-specific differences in brain responses in the late time window (650-1550 ms) at right frontal channels: Unprimed stimuli (preceded by a different-category item) elicited a more positive response as compared to primed stimuli (preceded by a same-category item). In sum, these findings suggest that the infant's brain discriminates exemplars from both global domains. Given the design of our task, we conclude that processes of category identification are more likely to account for our findings than processes of on-line category formation during the experimental session.
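The ERP studies above compare component amplitudes within fixed post-stimulus time windows (e.g., the Nc at 400-600 ms). A minimal sketch of that standard measure (illustrative only; the sampling rate, window, and function name are assumptions, not taken from these studies): the mean amplitude of an epoched waveform within a component window.

```python
# Illustrative sketch: mean amplitude of an epoched ERP waveform
# within a component time window (e.g., Nc at 400-600 ms).

def mean_amplitude(epoch_uv, srate_hz, t_start_ms, t_end_ms, baseline_ms=0):
    """Mean amplitude (microvolts) between t_start_ms and t_end_ms,
    where epoch_uv[0] corresponds to baseline_ms after stimulus onset."""
    i0 = int((t_start_ms - baseline_ms) * srate_hz / 1000)
    i1 = int((t_end_ms - baseline_ms) * srate_hz / 1000)
    window = epoch_uv[i0:i1]
    return sum(window) / len(window)
```

Condition effects such as "enhanced Nc for unprimed stimuli" are then tested by comparing these per-condition window means across infants.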
Do as I say - or as I do?! (2019)
Infants use behavioral and verbal cues to infer another person’s action intention. However, it is still unclear how infants integrate these often co-occurring cues depending on the cues’ coherence (i.e., the degree to which the cues provide coherent information about another’s intention). This study investigated how 18- and 24-month-olds’ (N = 88 per age group) action selection was influenced by varying the coherence of a model’s verbal and behavioral cues. Using a between-subjects design, infants received six trials with different stimulus objects. In the conditions Congruent, Incongruent, and Failed-attempt, the model uttered a telic verb particle that was followed by a matching or contradicting goal-directed action demonstration, or by a non-goal-directed slipping motion, respectively. In the condition Pseudo-word, a nonsense word was combined with a goal-directed action demonstration. Infants’ action selection indicated an adherence to the verbal cue in Congruent, Incongruent, and Failed-attempt, and this was stronger in 24- than 18-month-olds. Additionally, in Incongruent and Failed-attempt, patterns of cue integration across the six trials varied in the two age groups. Regarding the behavioral cue, infants in Congruent and Pseudo-word preferentially followed this cue in both age groups, which also suggested a rather unspecific effect of the verbal cue in Congruent. Relatively longer first action-latencies in Incongruent and Failed-attempt implied that these types of coherence elicited higher cognitive demands than in Congruent and Pseudo-word. Results are discussed in light of infants’ flexibility in using social cues, depending on the cues’ coherence and on age-related social-cognitive differences.
Behavioral research has shown that infants use both behavioral cues and verbal cues when processing the goals of others' actions. For instance, 18-month-olds selectively imitate an observed goal-directed action depending on its (in)congruence with a model's previous verbal announcement of a desired action goal. This EEG-study analyzed the electrophysiological underpinnings of these behavioral findings on the two functional levels of conceptual action processing and motor activation. Mid-latency mean negative ERP amplitude and mu-frequency band power were analyzed while 18-month-olds (N = 38) watched videos of an adult who performed one out of two potential actions on a novel object. In a within-subjects design, the action demonstration was preceded by either a congruent or an incongruent verbally announced action goal (e.g., "up" or "down" and upward movement). Overall, ERP negativity did not differ between conditions, but a closer inspection revealed that in two subgroups, about half of the infants showed a broadly distributed increased mid-latency ERP negativity (indicating enhanced conceptual action processing) for either the congruent or the incongruent stimuli, respectively. As expected, mu power at sensorimotor sites was reduced (indicating enhanced motor activation) for congruent relative to incongruent stimuli in the entire sample. Both EEG correlates were related to infants' language skills. Hence, 18-month-olds integrate action-goal-related verbal cues into their processing of others' actions, at the functional levels of both conceptual processing and motor activation. Further, cue integration when inferring others' action goals is related to infants' language proficiency.
From about 7 months of age onward, infants start to reliably fixate the goal of an observed action, such as a grasp, before the action is complete. The available research has identified a variety of factors that influence such goal-anticipatory gaze shifts, including the experience with the shown action events and familiarity with the observed agents. However, the underlying cognitive processes are still heavily debated. We propose that our minds (i) tend to structure sensorimotor dynamics into probabilistic, generative event-predictive, and event boundary predictive models, and, meanwhile, (ii) choose actions with the objective to minimize predicted uncertainty. We implement this proposition by means of event-predictive learning and active inference. The implemented learning mechanism induces an inductive, event-predictive bias, thus developing schematic encodings of experienced events and event boundaries. The implemented active inference principle chooses actions by aiming at minimizing expected future uncertainty. We train our system on multiple object-manipulation events. As a result, the generation of goal-anticipatory gaze shifts emerges while learning about object manipulations: the model starts fixating the inferred goal already at the start of an observed event after having sampled some experience with possible events and when a familiar agent (i.e., a hand) is involved. Meanwhile, the model keeps reactively tracking an unfamiliar agent (i.e., a mechanical claw) that is performing the same movement. We qualitatively compare these modeling results to behavioral data of infants and conclude that event-predictive learning combined with active inference may be critical for eliciting goal-anticipatory gaze behavior in infants.
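The active-inference principle described above (choosing where to look so as to minimize expected future uncertainty) can be sketched very compactly. This is a hedged toy illustration, not the authors' implementation: the function names and candidate fixation targets are assumptions, and the full model additionally involves learned event-predictive schemata.

```python
# Toy sketch of uncertainty-minimizing gaze selection: among candidate
# fixation targets, choose the one whose expected post-fixation belief
# over event outcomes has the lowest Shannon entropy.

import math

def entropy(dist):
    """Shannon entropy (nats) of a discrete probability distribution."""
    return -sum(p * math.log(p) for p in dist if p > 0)

def choose_fixation(expected_posteriors):
    """expected_posteriors maps each candidate fixation target (e.g.,
    'goal', 'agent') to the belief over event outcomes expected after
    fixating it. Returns the target minimizing expected uncertainty."""
    return min(expected_posteriors, key=lambda t: entropy(expected_posteriors[t]))
```

On this logic, once a familiar agent makes one event outcome highly probable, fixating the inferred goal yields a sharper expected posterior than tracking the agent, so goal-anticipatory gaze emerges; for an unfamiliar agent, beliefs stay flat and reactive tracking persists.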
Infants in the second year of life not only detect the visible goals or end-states of other people's actions, but they also seem to be able to infer others’ underlying intentions. The present study used event-related potentials (ERPs) to investigate the biological basis of infants’ processing of others’ goal-directed actions, with special regard to the involvement of bottom-up perceptual and top-down conceptual processes. In an adaptation of the behavioral re-enactment procedure, 14-month-olds were first familiarized with either full demonstrations (FD), failed attempts (FA), or arbitrary (AA) object-directed actions. Next, ERPs were measured while all infants saw the same two pictures of the end-states of the full demonstration (complete end-state) and the failed attempt (incomplete end-state). In the time-windows related to perceptual processing (100–200 ms after stimulus onset) and to conceptual processing (300–700 ms), ERP negativity over frontal and central regions was higher for the complete than for the incomplete end-state in the FD and FA conditions. When comparing the FA and AA conditions, this pattern of results occurred only for the conceptual time domain. Moreover, beginning slow-wave activity (700–1000 ms) differed for the end-state pictures in the three conditions, suggesting differential encoding demands. Together, the electrophysiological data indicate that infants in the second year of life use bottom-up perceptual as well as top-down conceptual processing to give meaning to others' goal-directed actions.
Event-related potentials (ERPs) to single visual stimuli were recorded in 7-month-old infants. In a three-stimulus oddball paradigm, infants watched one frequently occurring standard stimulus (either an animal or a furniture item) and two infrequently occurring oddball stimuli, presenting one exemplar from the same and one from the different super-ordinate category as compared to the standard stimulus. Additionally, visual attributes of the stimuli were controlled to investigate whether infants focus on category membership or on perceptual similarity when processing the stimuli. Infant ERPs indicated encoding of the standard stimulus and discriminating it from the two oddball stimuli by larger Nc peak amplitude and late-slow-wave activity for the infrequent stimuli. Moreover, larger Nc latency and positive-slow-wave activity indicated increased processing for the different-category as compared to the same-category oddball. Thus, 7-month-olds seem to encode single stimuli not only by surface perceptual features, but they also regard information of category membership, leading to facilitated processing of the oddball that belongs to the same domain as the standard stimulus.
Executive function (EF) has long been considered to be a unitary, domain-general cognitive ability. However, recent research suggests differentiating "hot" affective and "cool" cognitive aspects of EF. Yet, findings regarding this two-factor construct are still inconsistent. In particular, the development of this factor structure remains unclear and data on school-aged children is lacking. Furthermore, studies linking EF and overweight or obesity suggest that EF contributes to the regulation of eating behavior. So far, however, the links between EF and eating behavior have rarely been investigated in children and non-clinical populations. First, we examined whether EF can be divided into hot and cool factors or whether they actually correspond to a unitary construct in middle childhood. Second, we examined how hot and cool EF are associated with different eating styles that put children at risk of becoming overweight during development. Hot and cool EF were assessed experimentally in a non-clinical population of 1657 elementary-school children (aged 6-11 years). The "food approach" behavior was rated mainly via parent questionnaires. Findings indicate that hot EF is distinguishable from cool EF. However, only cool EF seems to represent a coherent functional entity, whereas hot EF does not seem to be a homogenous construct. This was true for a younger and an older subgroup of children. Furthermore, different EF components were correlated with eating styles, such as responsiveness to food, desire to drink, and restrained eating in girls but not in boys. This shows that lower levels of EF are not only seen in clinical populations of obese patients but are already associated with food approach styles in a normal population of elementary school-aged girls. Although the direction of effect still has to be clarified, results point to the possibility that EF constitutes a risk factor for eating styles contributing to the development of overweight in the long-term.
There is considerable evidence for an association between obesity and impaired executive function (EF) in adolescents and adults. However, little research has examined EF in overweight or obese children. Furthermore, data on EF in underweight individuals is lacking. In addition, there is no consensus on the directionality of the relationship between Body Mass Index (BMI) and EF, and longitudinal studies are rare. Thus, the present study examined whether children differ in their performance on a battery of EF tasks depending on their weight status (underweight, normal-weight, overweight), and investigated the longitudinal cross-lagged associations between EF and BMI. Hot EF (delay of gratification, affective decision-making), cool EF (attention shifting, inhibition, working memory [WM] updating), and BMI were assessed in 1,657 German elementary-school children at two time points, approximately one year apart. Overweight children exhibited slightly poorer attention shifting, WM updating, and affective decision-making abilities as compared to normal-weight children. Unexpectedly, they did not show any deficits in inhibition or delay of gratification. EF levels of underweight children did not differ significantly from those of normal-weight children. Furthermore, poor attention shifting and enhanced affective decision-making predicted a slightly higher BMI one year later, and a higher BMI also predicted poorer attention shifting and WM updating one year later. The latter association between BMI and subsequent EF scores, however, diminished when controlling for socioeconomic status. Results indicate that hot and cool EF play a role in the weight development of children, and might be a promising factor to address in preventive interventions.
Studies show relations between executive function (EF), Theory of Mind (ToM), and conduct-problem (CP) symptoms. However, many studies have involved cross-sectional data, small clinical samples, pre-school children, and/or did not consider potential mediation effects. The present study examined the longitudinal relations between EF, ToM abilities, and CP symptoms in a population-based sample of 1,657 children between 6 and 11 years (T1: M = 8.3 years, T2: M = 9.1 years; 51.9% girls). We assessed EF skills and ToM abilities via computerized tasks at first measurement (T1), CP symptoms were rated via parent questionnaires at T1 and approximately 1 year later (T2). Structural-equation models showed a negative relation between T1 EF and T2 CP symptoms even when controlling for attention-deficit hyperactivity disorder (ADHD) symptoms and other variables. This relation was fully mediated by T1 ToM abilities. The study shows how children's abilities to control their thoughts and behaviors and to understand others' mental states interact in the development of CP symptoms.
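The mediation result above (the EF-to-CP path fully mediated by ToM) follows the standard product-of-coefficients logic: the indirect effect is a * b, where a is the T1 EF -> T1 ToM path and b is the ToM -> T2 CP path controlling for EF. A hedged sketch with toy, mean-centered data (not the study's structural-equation model, which also included covariates such as ADHD symptoms):

```python
# Toy sketch of the product-of-coefficients mediation estimate.
# Variables are assumed mean-centered, so intercepts can be omitted.

def ols2(y, x1, x2):
    """Coefficients (b1, b2) of y ~ x1 + x2 via the normal equations."""
    n = len(y)
    c = lambda u, v: sum(ui * vi for ui, vi in zip(u, v)) / n
    s11, s22, s12 = c(x1, x1), c(x2, x2), c(x1, x2)
    s1y, s2y = c(x1, y), c(x2, y)
    det = s11 * s22 - s12 ** 2
    return ((s22 * s1y - s12 * s2y) / det, (s11 * s2y - s12 * s1y) / det)

def indirect_effect(ef, tom, cp):
    """Indirect effect of EF on CP through ToM: a * b."""
    n = len(ef)
    cov = lambda u, v: sum(ui * vi for ui, vi in zip(u, v)) / n
    a = cov(ef, tom) / cov(ef, ef)   # path a: EF -> ToM
    _, b = ols2(cp, ef, tom)         # path b: ToM -> CP, controlling EF
    return a * b
```

"Full mediation" corresponds to the direct EF -> CP coefficient (the first value returned by `ols2(cp, ef, tom)`) becoming non-significant once the indirect path is included.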
We investigated whether 12-month-old infants rely on information about the certainty of goal selection in order to predict observed reaching actions. Infants' goal-directed gaze shifts were recorded as they observed action sequences in a multiple-goals design. We found that 12-month-old infants exhibited gaze shifts significantly earlier when the observed hand reached for the same goal object in all trials (frequent condition) compared with when the observed hand reached for different goal objects across trials (nonfrequent condition). Infants in the frequent condition were significantly more accurate at predicting the action goal than infants in the nonfrequent condition. In addition, findings revealed rapid learning in the case of certainty and no learning in the case of uncertainty of goal selection over the course of trials. Together, our data indicate that by the end of their first year of life, infants rely on information about the certainty of goal selection to make inferences about others' action goals.