Refine
Has Fulltext
- no (3)
Year of publication
- 2021 (3)
Document Type
- Article (3)
Language
- English (3)
Is part of the Bibliography
- yes (3)
Keywords
- Action events (1)
- Active inference (1)
- Computational model (1)
- Event cognition (1)
- Eye tracking (1)
- Feedforward processes (1)
- Goal-anticipatory gaze (1)
- Infancy (1)
- Infant action-goal prediction (1)
- Infant gaze (1)
- Perception of agency cues (1)
- agency cues (1)
- behavior (1)
- developing agentive self (1)
- eye tracking (1)
- infancy (1)
- non-human grasping (1)
- predictive gaze behavior (1)
- tool-use actions (1)
Institute
Looking times and gaze behavior indicate that infants can predict the goal state of an observed simple action event (e.g., object-directed grasping) already within the first year of life. The present paper mainly focuses on infants' predictive gaze-shifts toward the goal of an ongoing action. For this, infants need to generate a forward model of the to-be-obtained goal state and to disengage their gaze from the moving agent at a time when information about the action event is still incomplete. By about 6 months of age, infants show goal-predictive gaze-shifts, but mainly for familiar actions that they can perform themselves (e.g., grasping) and for familiar agents (e.g., a human hand). Therefore, some theoretical models have highlighted close relations between infants' ability for action-goal prediction and their motor development and/or emerging action experience. Recent research indicates that infants can also predict action goals of familiar simple actions performed by non-human agents (e.g., object-directed grasping by a mechanical claw) when these agents display agency cues, such as self-propelled movement, equifinality of goal approach, or production of a salient action effect. This paper provides a review of relevant findings and theoretical models, and proposes that the impacts of action experience and of agency cues can be explained from an action-event perspective. In particular, infants' goal-predictive gaze-shifts are seen as resulting from an interplay between bottom-up processing of perceptual information and top-down influences exerted by event schemata that store information about previously executed or observed actions.
During the observation of goal-directed actions, infants usually predict the goal at an earlier age when the agent is familiar (e.g., human hand) rather than unfamiliar (e.g., mechanical claw). These findings suggest a crucial role of the developing agentive self in infants' processing of others' action goals. Recent theoretical accounts suggest that predictive gaze behavior relies on an interplay between infants' agentive experience (top-down processes) and perceptual information about the agent and the action event (bottom-up information; e.g., agency cues). The present study examined 7-, 11-, and 18-month-old infants' predictive gaze behavior for a grasping action performed by an unfamiliar tool, depending on infants' age-related action knowledge about tool use and the display of the agency cue of producing a salient action effect. The results are in line with the notion of a systematic interplay between experience-based top-down processes and cue-based bottom-up information: Regardless of the salient action effect, predictive gaze shifts did not occur in the 7-month-olds (the least experienced age group), but did occur in the 18-month-olds (the most experienced age group). In the 11-month-olds, however, predictive gaze shifts occurred only when a salient action effect was presented. This sheds new light on how the developing agentive self, in interplay with available agency cues, supports infants' action-goal prediction even for observed tool-use actions.
From about 7 months of age onward, infants start to reliably fixate the goal of an observed action, such as a grasp, before the action is complete. The available research has identified a variety of factors that influence such goal-anticipatory gaze shifts, including experience with the shown action events and familiarity with the observed agents. However, the underlying cognitive processes are still heavily debated. We propose that our minds (i) tend to structure sensorimotor dynamics into probabilistic, generative, event-predictive and event-boundary-predictive models, and, meanwhile, (ii) choose actions with the objective of minimizing predicted uncertainty. We implement this proposition by means of event-predictive learning and active inference. The implemented learning mechanism induces an inductive, event-predictive bias, thus developing schematic encodings of experienced events and event boundaries. The implemented active-inference principle chooses actions that minimize expected future uncertainty. We train our system on multiple object-manipulation events. As a result, the generation of goal-anticipatory gaze shifts emerges while learning about object manipulations: the model starts fixating the inferred goal already at the start of an observed event, after having sampled some experience with possible events and when a familiar agent (i.e., a hand) is involved. Meanwhile, the model keeps reactively tracking an unfamiliar agent (i.e., a mechanical claw) that is performing the same movement. We qualitatively compare these modeling results to behavioral data of infants and conclude that event-predictive learning combined with active inference may be critical for eliciting goal-anticipatory gaze behavior in infants.
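The uncertainty-minimization principle described in this last abstract can be illustrated with a toy sketch: an observer holds a belief over candidate action goals and chooses the fixation target whose observation is expected to reduce that belief's entropy the most. This is not the authors' implementation; the goal labels, the "agent" fixation, and all probabilities below are illustrative assumptions.

```python
import math

def entropy(p):
    """Shannon entropy (in nats) of a discrete distribution."""
    return -sum(q * math.log(q) for q in p if q > 0)

def expected_posterior_entropy(belief, obs_model, fixation):
    """Expected entropy over goal hypotheses after one binary
    observation gathered at `fixation`.
    obs_model(goal, fixation) -> P(obs = 1 | goal, fixation)."""
    total = 0.0
    for obs in (1, 0):
        # Joint weights P(goal) * P(obs | goal, fixation).
        joint = [b * (obs_model(g, fixation) if obs == 1
                      else 1.0 - obs_model(g, fixation))
                 for g, b in belief.items()]
        p_obs = sum(joint)
        if p_obs == 0:
            continue
        posterior = [w / p_obs for w in joint]
        total += p_obs * entropy(posterior)
    return total

def obs_model(goal, fixation):
    # Assumed toy model: fixating a candidate goal region is
    # informative about whether it is the true goal; tracking the
    # moving agent yields no goal information.
    if fixation == "agent":
        return 0.5          # uninformative observation
    return 0.9 if fixation == goal else 0.1

belief = {"left-object": 0.5, "right-object": 0.5}
fixations = ["agent", "left-object", "right-object"]
best = min(fixations,
           key=lambda f: expected_posterior_entropy(belief, obs_model, f))
print(best)  # a goal region is chosen over tracking the agent
```

Under these assumed numbers, the expected posterior entropy for a goal fixation (about 0.33 nats) is lower than for tracking the agent (ln 2, about 0.69 nats), so the sketch "looks ahead" to a goal region rather than reactively following the agent, mirroring the qualitative pattern the model in the abstract produces for familiar agents.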