Action effects have been proposed to be important for infants' processing of goal-directed actions. In this study, 11-month-olds showed equally fast predictive gaze shifts to a claw's action goal when the grasping action was presented either with three agency cues (self-propelled movement, equifinality of goal achievement, and a salient action effect) or with a salient action effect alone, but they showed merely tracking gaze when the claw displayed only self-propelled movement and equifinality of goal achievement. The results suggest that action effects, compared with purely kinematic cues, are especially important for infants' online processing of goal-directed actions.
Previous research indicates that infants' prediction of the goals of observed actions is influenced by their own experience with the type of agent performing the action (i.e., human hand vs. non-human agent) as well as by action-relevant features of goal objects (e.g., object size). The present study investigated the combined effects of these factors on 12-month-olds' action prediction. Infants' (N = 49) goal-directed gaze shifts were recorded as they observed 14 trials in which either a human hand or a mechanical claw reached for a small goal area (low-saliency goal) or a large goal area (high-saliency goal). Only infants who had observed the human hand reaching for a high-saliency goal fixated the goal object ahead of time, and they rapidly learned to predict the action goal across trials. By contrast, infants in all other conditions did not track the observed action in a predictive manner, and their gaze shifts to the action goal did not change systematically across trials. Thus, high-saliency goals seem to boost infants' predictive gaze shifts during the observation of human manual actions, but not of actions performed by a mechanical device. This supports the assumption that infants' action predictions are based on interactive effects of action-relevant object features (e.g., size) and their own action experience.
For the processing of goal-directed actions, some accounts emphasize the importance of experience with the action or the agent. Other accounts stress the importance of agency cues. We investigated the impact of agency cues on 11-month-olds' and adults' goal anticipation for a grasping action performed by a mechanical claw. With an eye tracker, we measured anticipations in two conditions, in which the claw was displayed either with or without agency cues. In two experiments, 11-month-olds were predictive when agency cues were present, but reactive when no agency cues were presented. Adults were predictive in both conditions. Furthermore, 11-month-olds rapidly learned to predict the goal in the agency condition, but not in the mechanical condition. Adults' predictions did not change across trials in the agency condition, but decelerated in the mechanical condition. Thus, agency cues and own action experience are important for infants' and adults' online processing of goal-directed actions by non-human agents.
Research on voluntary action has focused on the question of how we represent our behavior on a motor and cognitive level. However, the question of how we represent voluntarily not acting has been almost completely neglected. The aim of the present study was to investigate the cognitive and motor representation of intentionally not acting. Using an action-effect binding approach, we demonstrate similarities between action and nonaction. In particular, our results reveal that voluntary nonactions can be bound to an effect tone. This finding suggests that effect binding is not restricted to an association between a motor representation and a successive effect (action-effect binding) but can also occur for an intended nonaction and its effect (nonaction-effect binding). Moreover, we demonstrate that nonactions have to be initiated voluntarily in order to elicit nonaction-effect binding.
Communication with young children is often multimodal in nature, involving, for example, language and actions. The simultaneous presentation of information from both domains may boost language learning by highlighting the connection between an object and a word, owing to temporal overlap in the presentation of multimodal input. However, the overlap is not merely temporal but can also covary in the extent to which particular actions co-occur with particular words and objects; for example, carers typically produce a hopping action when talking about rabbits and a snapping action for crocodiles. The frequency with which actions and words co-occur in the presence of the referents of these words may also impact young children's word learning. We therefore examined the extent to which consistency in the co-occurrence of particular actions and words impacted children's learning of novel word–object associations. Children (18 months, 30 months, and 36–48 months) and adults were presented with two novel objects and heard their novel labels while different actions were performed on these objects, such that the particular actions and word–object pairings always co-occurred (Consistent group) or varied across trials (Inconsistent group). At test, participants saw both objects and heard one of the labels to examine whether participants recognized the target object upon hearing its label. Growth curve models revealed that 18-month-olds did not learn words for objects in either condition, and 30-month-old and 36- to 48-month-old children learned words for objects only in the Consistent condition, in contrast to adults, who learned words for objects independent of the actions presented. Thus, consistency in the multimodal input influenced word learning in early childhood but not in adulthood. In terms of a dynamic systems account of word learning, our study shows how multimodal learning settings interact with the child's perceptual abilities to shape the learning experience.
Executive functions (EFs) may help children to regulate their food intake in an "obesogenic" environment, where energy-dense food is easily available. There is mounting evidence that overweight is associated with diminished hot and cool EFs, and several longitudinal studies found evidence for a predictive effect of hot EFs on children's body weight, but longitudinal research examining the effect of cool EFs on weight development in children is still scarce. The current 3-year longitudinal study examined the effect of a latent cool EF factor, based on three behavioral EF tasks, on subsequent mean levels and 3-year growth trajectories of body-mass-index z-scores (zBMI). Data from a large sample of children, with zBMI ranging from normal weight to obesity (n = 1474, aged 6–11 years at T1, 52% girls), were analyzed using structural equation modeling and linear latent growth-curve modeling. Cool EF at the first wave (T1) negatively predicted subsequent zBMI and zBMI development throughout the 3-year period in middle childhood, such that children with better EF had a lower zBMI and less steep zBMI growth. These effects were not moderated by the children's age or gender. In conclusion, as early as in middle childhood, cool EFs seem to support the self-regulation of food intake and consequently may play a causal role in the multifactorial etiology of overweight.
Infants use others' emotional signals to regulate their own object-directed behavior and action reproduction, and they typically produce more actions after having observed positive as compared to negative emotional cues. This study explored infants' understanding of the referential specificity of others' emotional cues when confronted with two actions that are accompanied by different emotional displays. Selective action reproduction was measured after 18-month-olds (N = 42) had observed two actions directed at the same object, one of which was modeled with a positive emotional expression and the other with a negative emotional expression. Across four trials with different objects, infants' first actions matched the positively emoted actions more often than the negatively emoted actions. In comparison with baseline level, infants' initial performance changed only for the positively emoted actions, in that it increased during test. Latencies to first object-touch during test did not differ when infants reproduced the positively or negatively emoted actions, respectively, indicating that infants related the cues to the respective actions rather than to the object. During demonstration, infants looked relatively longer at the object than at the model's face, with no difference between positive and negative displays. Infants during their second year of life thus capture the action-related referential specificity of others' emotional cues and seem to follow positive signals more readily when actively selecting which of two actions to reproduce preferentially.
Simple geometric shapes moving in a self-propelled manner and violating Newtonian laws of motion by acting against gravitational forces tend to induce the judgment that an object is animate. Objects that change their motion only due to external causes are more likely judged as inanimate. How the developing brain supports the perception of animacy in early ontogeny is currently unknown. The aim of this study was to use ERP techniques to determine whether the negative central component (Nc), a waveform related to attention allocation, was differentially affected when an infant observed animate or inanimate motion. Short animated movies, comprising a marble moving along a marble run either in an animate or an inanimate manner, were presented to 15 infants who were 9 months of age. The ERPs were time-locked to a still frame representing animate or inanimate motion that was displayed following each movie. We found that 9-month-olds are able to discriminate between animate and inanimate motion based on motion cues alone and most likely allocate more attentional resources to the inanimate motion. The present data contribute to our understanding of the animate-inanimate distinction and of the Nc as a correlate of infant cognitive processing.
One of the earliest categorical distinctions to be made by preverbal infants is the animate-inanimate distinction. To explore the neural basis for this distinction in 7- to 8-month-olds, an equal number of animal and furniture pictures were presented in an ERP paradigm. A total of 118 pictures, all different from one another, were presented in a semi-randomized order for 1000 ms each. Infants' brain responses to exemplars from both categories differed systematically regarding the negative central component (Nc: 400-600 ms) at anterior channels. More specifically, the Nc was enhanced for animals in one subgroup of infants, and for furniture items in another subgroup of infants. Explorative analyses related to categorical priming further revealed category-specific differences in brain responses in the late time window (650-1550 ms) at right frontal channels: unprimed stimuli (preceded by a different-category item) elicited a more positive response than primed stimuli (preceded by a same-category item). In sum, these findings suggest that the infant's brain discriminates exemplars from both global domains. Given the design of our task, we conclude that processes of category identification are more likely to account for our findings than processes of online category formation during the experimental session.