Action effects have been proposed to be important for infants’ processing of goal-directed actions. In this study, 11-month-olds showed equally fast predictive gaze shifts to a claw’s action goal whether the grasping action was presented with three agency cues (self-propelled movement, equifinality of goal achievement, and a salient action effect) or with only a salient action effect; by contrast, infants showed merely tracking gaze when the claw showed only self-propelled movement and equifinality of goal achievement. The results suggest that action effects, compared with purely kinematic cues, are especially important for infants’ online processing of goal-directed actions.
Previous research indicates that infants’ prediction of the goals of observed actions is influenced by their own experience with the type of agent performing the action (i.e., human hand vs. non-human agent) as well as by action-relevant features of goal objects (e.g., object size). The present study investigated the combined effects of these factors on 12-month-olds’ action prediction. Infants’ (N = 49) goal-directed gaze shifts were recorded as they observed 14 trials in which either a human hand or a mechanical claw reached for a small goal area (low-saliency goal) or a large goal area (high-saliency goal). Only infants who had observed the human hand reaching for a high-saliency goal fixated the goal object ahead of time, and they rapidly learned to predict the action goal across trials. By contrast, infants in all other conditions did not track the observed action in a predictive manner, and their gaze shifts to the action goal did not change systematically across trials. Thus, high-saliency goals seem to boost infants’ predictive gaze shifts during the observation of human manual actions, but not of actions performed by a mechanical device. This supports the assumption that infants’ action predictions are based on interactive effects of action-relevant object features (e.g., size) and infants’ own action experience.
For the processing of goal-directed actions, some accounts emphasize the importance of experience with the action or the agent, whereas other accounts stress the importance of agency cues. We investigated the impact of agency cues on 11-month-olds’ and adults’ goal anticipation for a grasping action performed by a mechanical claw. Using an eye tracker, we measured anticipations in two conditions in which the claw was displayed either with or without agency cues. In two experiments, 11-month-olds were predictive when agency cues were present but reactive when no agency cues were presented; adults were predictive in both conditions. Furthermore, 11-month-olds rapidly learned to predict the goal in the agency condition, but not in the mechanical condition. Adults’ predictions did not change across trials in the agency condition but slowed across trials in the mechanical condition. Thus, agency cues and own action experience are important for infants’ and adults’ online processing of goal-directed actions performed by non-human agents.
Research on voluntary action has focused on the question of how we represent our behavior on a motor and cognitive level. However, the question of how we represent voluntarily not acting has been largely neglected. The aim of the present study was to investigate the cognitive and motor representation of intentionally not acting. Using an action-effect binding approach, we demonstrate similarities between action and nonaction. In particular, our results reveal that voluntary nonactions can be bound to an effect tone. This finding suggests that effect binding is not restricted to an association between a motor representation and a successive effect (action-effect binding) but can also occur for an intended nonaction and its effect (nonaction-effect binding). Moreover, we demonstrate that nonactions have to be initiated voluntarily in order to elicit nonaction-effect binding.
Communication with young children is often multimodal in nature, involving, for example, language and actions. The simultaneous presentation of information from both domains may boost language learning by highlighting the connection between an object and a word, owing to temporal overlap in the presentation of multimodal input. However, the overlap is not merely temporal but can also covary in the extent to which particular actions co-occur with particular words and objects; e.g., carers typically produce a hopping action when talking about rabbits and a snapping action for crocodiles. The frequency with which actions and words co-occur in the presence of the referents of these words may also impact young children’s word learning. We therefore examined the extent to which consistency in the co-occurrence of particular actions and words affected children’s learning of novel word–object associations. Children (18 months, 30 months, and 36–48 months) and adults were presented with two novel objects and heard their novel labels while different actions were performed on these objects, such that the particular actions and word–object pairings always co-occurred (Consistent group) or varied across trials (Inconsistent group). At test, participants saw both objects and heard one of the labels, allowing us to examine whether participants recognized the target object upon hearing its label. Growth curve models revealed that 18-month-olds did not learn words for objects in either condition, and 30-month-old and 36- to 48-month-old children learned words for objects only in the Consistent condition, in contrast to adults, who learned words for objects independent of the actions presented. Thus, consistency in the multimodal input influenced word learning in early childhood but not in adulthood. In terms of a dynamic systems account of word learning, our study shows how multimodal learning settings interact with the child’s perceptual abilities to shape the learning experience.
Executive functions (EFs) may help children to regulate their food intake in an “obesogenic” environment, where energy-dense food is easily available. There is mounting evidence that overweight is associated with diminished hot and cool EFs, and several longitudinal studies have found evidence for a predictive effect of hot EFs on children’s body weight, but longitudinal research examining the effect of cool EFs on weight development in children is still scarce. The current 3-year longitudinal study examined the effect of a latent cool EF factor, based on three behavioral EF tasks, on subsequent mean levels and 3-year growth trajectories of body-mass-index z-scores (zBMI). Data from a large sample of children, with zBMI ranging from normal weight to obesity (n = 1474, aged 6–11 years at T1, 52% girls), were analyzed using structural-equation modeling and linear latent growth-curve modeling. Cool EF at the first wave (T1) negatively predicted subsequent zBMI and zBMI development throughout the 3-year period in middle childhood, such that children with better EF had a lower zBMI and less steep zBMI growth. These effects were not moderated by the children’s age or gender. In conclusion, as early as middle childhood, cool EFs seem to support the self-regulation of food intake and consequently may play a causal role in the multifactorial etiology of overweight.
Although middle childhood is an important period for the development of hot and cool executive functions (EFs), longitudinal studies investigating trajectories of childhood EF development are still limited, and little is known about predictors of individual developmental trajectories. The current study examined the development of two typical facets of cool and hot EFs over a 3-year period during middle childhood, comparing a younger cohort (6- and 7-year-olds at the first wave [T1]; n = 621) and an older cohort (8- and 9-year-olds at T1; n = 975) of children. "Cool" working memory updating (WM) was assessed using a backward digit span task, and "hot" decision making (DM) was assessed using a child variant of the Iowa Gambling Task. Linear latent growth curve analyses revealed evidence for developmental growth as well as interindividual variance in the initial level and rate of change in both EF facets. Initial level of WM was positively associated with age (both between and within cohorts), socioeconomic status, verbal ability, and processing speed, whereas initial levels of DM were, in addition to a (potentially age-related) cohort effect, exclusively predicted by gender, with boys outperforming girls. None of the variables predicted the rate of change, that is, the developmental trajectories. However, younger children, as compared with older children, had slightly steeper WM growth curves over time, hinting at a leveling off in the development of WM during middle childhood. In sum, these data add important evidence to the understanding of hot and cool EF development during middle childhood. (C) 2018 Elsevier Inc. All rights reserved.
Infants use others' emotional signals to regulate their own object-directed behavior and action reproduction, and they typically produce more actions after having observed positive as compared to negative emotional cues. This study explored infants' understanding of the referential specificity of others' emotional cues when infants are confronted with two actions accompanied by different emotional displays. Selective action reproduction was measured after 18-month-olds (N = 42) had observed two actions directed at the same object, one of which was modeled with a positive emotional expression and the other with a negative emotional expression. Across four trials with different objects, infants' first actions matched the positively-emoted actions more often than the negatively-emoted actions. In comparison with baseline level, infants' initial performance changed only for the positively-emoted actions, in that it increased during test. Latencies to first object-touch during test did not differ when infants reproduced the positively- or negatively-emoted actions, respectively, indicating that infants related the cues to the respective actions rather than to the object. During demonstration, infants looked relatively longer at the object than at the model's face, with no difference between positive and negative displays. Infants in their second year of life thus register the action-related referential specificity of others' emotional cues and seem to follow positive signals more readily when actively selecting which of two actions to reproduce preferentially.
Simple geometric shapes that move in a self-propelled manner and violate Newtonian laws of motion by acting against gravitational forces tend to be judged as animate, whereas objects that change their motion only due to external causes are more likely judged as inanimate. How the developing brain is engaged in the perception of animacy in early ontogeny is currently unknown. The aim of this study was to use ERP techniques to determine whether the negative central component (Nc), a waveform related to attention allocation, was differentially affected when an infant observed animate or inanimate motion. Short animated movies showing a marble moving along a marble run either in an animate or an inanimate manner were presented to 15 infants who were 9 months of age. The ERPs were time-locked to a still frame representing animate or inanimate motion that was displayed following each movie. We found that 9-month-olds are able to discriminate between animate and inanimate motion based on motion cues alone and most likely allocate more attentional resources to the inanimate motion. The present data contribute to our understanding of the animate-inanimate distinction and of the Nc as a correlate of infant cognitive processing.