Around their first year of life, infants are able to anticipate the goal of others' ongoing actions. For instance, 12-month-olds anticipate the goal of everyday feeding actions and manual actions such as reaching and grasping. However, little is known about whether the salience of the goal influences infants' online assessment of others' actions. The aim of the current eye-tracking study was to elucidate infants' ability to anticipate reaching actions depending on the visual salience of the goal object. In Experiment 1, 12-month-old infants' goal-directed gaze shifts were recorded as they observed a hand reaching for and grasping either a large (high-salience condition) or a small (low-salience condition) goal object. Infants exhibited predictive gaze shifts significantly earlier when the observed hand reached for the large goal object than when it reached for the small goal object. In addition, findings revealed rapid learning over the course of trials in the high-salience condition and no learning in the low-salience condition. Experiment 2 demonstrated that the results could not simply be attributed to the different grip aperture of the hand when reaching for small and large objects. Together, our data indicate that by the end of their first year of life, infants rely on information about goal salience to make inferences about the action goal.
We investigated whether 12-month-old infants rely on information about the certainty of goal selection in order to predict observed reaching actions. Infants' goal-directed gaze shifts were recorded as they observed action sequences in a multiple-goals design. We found that 12-month-old infants exhibited gaze shifts significantly earlier when the observed hand reached for the same goal object in all trials (frequent condition) compared with when the observed hand reached for different goal objects across trials (nonfrequent condition). Infants in the frequent condition were significantly more accurate at predicting the action goal than infants in the nonfrequent condition. In addition, findings revealed rapid learning in the case of certainty and no learning in the case of uncertainty of goal selection over the course of trials. Together, our data indicate that by the end of their first year of life, infants rely on information about the certainty of goal selection to make inferences about others' action goals.
Previous research indicates that infants’ prediction of the goals of observed actions is influenced by their own experience with the type of agent performing the action (i.e., human hand vs. non-human agent) as well as by action-relevant features of goal objects (e.g., object size). The present study investigated the combined effects of these factors on 12-month-olds’ action prediction. Infants’ (N = 49) goal-directed gaze shifts were recorded as they observed 14 trials in which either a human hand or a mechanical claw reached for a small goal area (low-saliency goal) or a large goal area (high-saliency goal). Only infants who had observed the human hand reaching for a high-saliency goal fixated the goal object ahead of time, and they rapidly learned to predict the action goal across trials. By contrast, infants in all other conditions did not track the observed action in a predictive manner, and their gaze shifts to the action goal did not change systematically across trials. Thus, high-saliency goals seem to boost infants’ predictive gaze shifts during the observation of human manual actions, but not of actions performed by a mechanical device. This supports the assumption that infants’ action predictions are based on interactive effects of action-relevant object features (e.g., size) and their own action experience.
Successful communication often involves comprehension of both spoken language and observed actions with and without objects. Even very young infants can learn associations between actions and objects as well as between words and objects. However, in daily life, children are usually confronted with both kinds of input simultaneously. Choosing the critical information to attend to in such situations might help children structure the input and thereby allow for successful learning. In the current study, we therefore investigated the developmental time course of children’s and adults’ word and action learning when given the opportunity to learn both word-object and action-object associations for the same object. All participants went through a learning phase and a test phase. In the learning phase, they were presented with two novel objects, each of which was associated with a distinct novel name (e.g., “Look, a Tanu”) and a distinct novel action (e.g., moving up and down while tilting sideways). In the test phase, participants saw both objects on screen in a baseline phase, then either heard one of the two labels or saw one of the two actions in a prime phase, and then saw the two objects again on screen in a recognition phase. Throughout the trial, participants’ target looking was recorded to investigate whether they looked at the target object upon hearing its label or seeing its action, and thus showed learning of the word-object and action-object associations. Growth curve analyses revealed that 12-month-olds showed modest learning of action-object associations, 36-month-olds learned word-object associations, and adults learned both word-object and action-object associations. These results highlight how children attend to the different types of information from the two modalities through which communication is addressed to them. Over time, with increased exposure to systematic word-object mappings, children attend less to action-object mappings, with the latter potentially being mediated by word-object learning even in adulthood. Thus, choosing between the different kinds of input available in their rich, multimodal environment might help learning at different points in development.
Infants in the second year of life not only detect the visible goals or end-states of other people's actions, but they also seem to be able to infer others’ underlying intentions. The present study used event-related potentials (ERPs) to investigate the biological basis of infants’ processing of others’ goal-directed actions, with special regard to the involvement of bottom-up perceptual and top-down conceptual processes. In an adaptation of the behavioral re-enactment procedure, 14-month-olds were first familiarized with either full demonstrations (FD), failed attempts (FA), or arbitrary actions (AA) directed at objects. Next, ERPs were measured while all infants saw the same two pictures of the end-states of the full demonstration (complete end-state) and the failed attempt (incomplete end-state). In the time windows related to perceptual processing (100–200 ms after stimulus onset) and to conceptual processing (300–700 ms), ERP negativity over frontal and central regions was higher for the complete than for the incomplete end-state in the FD and FA conditions. When comparing the FA and AA conditions, this pattern of results occurred only in the conceptual time window. Moreover, early slow-wave activity (700–1000 ms) differed for the end-state pictures in the three conditions, suggesting differential encoding demands. Together, the electrophysiological data indicate that infants in the second year of life use bottom-up perceptual as well as top-down conceptual processing to give meaning to others' goal-directed actions.
Communication with young children is often multimodal in nature, involving, for example, language and actions. The simultaneous presentation of information from both domains may boost language learning by highlighting the connection between an object and a word, owing to the temporal overlap in the presentation of multimodal input. However, the overlap is not merely temporal: particular actions can also covary in the extent to which they co-occur with particular words and objects; for example, carers typically produce a hopping action when talking about rabbits and a snapping action for crocodiles. The frequency with which actions and words co-occur in the presence of the referents of these words may also impact young children’s word learning. We therefore examined the extent to which consistency in the co-occurrence of particular actions and words impacted children’s learning of novel word–object associations. Children (18 months, 30 months, and 36–48 months) and adults were presented with two novel objects and heard their novel labels while different actions were performed on these objects, such that the particular actions and word–object pairings always co-occurred (Consistent group) or varied across trials (Inconsistent group). At test, participants saw both objects and heard one of the labels to examine whether they recognized the target object upon hearing its label. Growth curve models revealed that 18-month-olds did not learn words for objects in either condition, and 30-month-old and 36- to 48-month-old children learned words for objects only in the Consistent condition, in contrast to adults, who learned words for objects independent of the actions presented. Thus, consistency in the multimodal input influenced word learning in early childhood but not in adulthood. In terms of a dynamic systems account of word learning, our study shows how multimodal learning settings interact with the child’s perceptual abilities to shape the learning experience.
Do as I say - or as I do?! (2019)
Infants use behavioral and verbal cues to infer another person’s action intention. However, it is still unclear how infants integrate these often co-occurring cues depending on the cues’ coherence (i.e., the degree to which the cues provide coherent information about another’s intention). This study investigated how 18- and 24-month-olds’ (N = 88 per age group) action selection was influenced by varying the coherence of a model’s verbal and behavioral cues. Using a between-subjects design, infants received six trials with different stimulus objects. In the Congruent, Incongruent, and Failed-attempt conditions, the model uttered a telic verb particle that was followed by a matching or contradicting goal-directed action demonstration, or by a non-goal-directed slipping motion, respectively. In the Pseudo-word condition, a nonsense word was combined with a goal-directed action demonstration. Infants’ action selection indicated an adherence to the verbal cue in Congruent, Incongruent, and Failed-attempt, and this was stronger in 24- than in 18-month-olds. Additionally, in Incongruent and Failed-attempt, patterns of cue integration across the six trials differed between the two age groups. Regarding the behavioral cue, infants in Congruent and Pseudo-word preferentially followed this cue in both age groups, which also suggested a rather unspecific effect of the verbal cue in Congruent. Relatively longer first-action latencies in Incongruent and Failed-attempt implied that these types of coherence elicited higher cognitive demands than in Congruent and Pseudo-word. Results are discussed in light of infants’ flexibility in using social cues, depending on the cues’ coherence and on age-related social-cognitive differences.
Research on voluntary action has focused on the question of how we represent our behavior on a motor and cognitive level. However, the question of how we represent voluntarily not acting has been completely neglected. The aim of the present study was to investigate the cognitive and motor representation of intentionally not acting. Using an action-effect binding approach, we demonstrate similarities between action and nonaction. In particular, our results reveal that voluntary nonactions can be bound to an effect tone. This finding suggests that effect binding is not restricted to an association between a motor representation and a subsequent effect (action-effect binding) but can also occur for an intended nonaction and its effect (nonaction-effect binding). Moreover, we demonstrate that nonactions have to be initiated voluntarily in order to elicit nonaction-effect binding.