Refine
- Year of publication: 2019 (7)
- Language: English (7)
- Is part of the Bibliography: yes (7)
- Keywords: Closure Positive Shift (CPS) (2); Event-related Potentials (ERP) (2); Infancy (2); action processing (2); action segmentation (2); kinematic boundary cues (2); prosodic boundary cues (2); prosody processing (2); speech segmentation (2); Action (1)
Successful communication often involves comprehension of both spoken language and observed actions with and without objects. Even very young infants can learn associations between actions and objects as well as between words and objects. However, in daily life, children are usually confronted with both kinds of input simultaneously. Choosing the critical information to attend to in such situations might help children structure the input and thereby allow for successful learning. In the current study, we therefore investigated the developmental time course of children’s and adults’ word and action learning when given the opportunity to learn both word-object and action-object associations for the same object. All participants went through a learning phase and a test phase. In the learning phase, they were presented with two novel objects, each associated with a distinct novel name (e.g., “Look, a Tanu”) and a distinct novel action (e.g., moving up and down while tilting sideways). In the test phase, participants were presented with both objects on screen in a baseline phase, then either heard one of the two labels or saw one of the two actions in a prime phase, and then saw the two objects again on screen in a recognition phase. Throughout the trial, participants’ target looking was recorded to investigate whether participants looked at the target object upon hearing its label or seeing its action, and thus showed learning of the word-object and action-object associations. Growth curve analyses revealed that 12-month-olds showed modest learning of action-object associations, 36-month-olds learned word-object associations, and adults learned both word-object and action-object associations. These results highlight how children attend to the different information types from the two modalities through which communication is addressed to them.
Over time, with increased exposure to systematic word-object mappings, children attend less to action-object mappings, with the latter potentially being mediated by word-object learning even in adulthood. Thus, selecting the kind of input that is most relevant in a rich environment encompassing different modalities might support learning at different points in development.
Speech and action sequences are continuous streams of information that can be segmented into subunits. In both domains, this segmentation can be facilitated by perceptual cues contained within the information stream. In speech, prosodic cues (e.g., a pause, pre-boundary lengthening, and pitch rise) mark boundaries between words and phrases, while boundaries between actions of an action sequence can be marked by kinematic cues (e.g., a pause, pre-boundary deceleration). The processing of prosodic boundary cues evokes an event-related potential (ERP) component known as the Closure Positive Shift (CPS), and it is possible that the CPS reflects domain-general cognitive processes involved in segmentation, given that the CPS is also evoked by boundaries between subunits of non-speech auditory stimuli. This study further probed the domain-generality of the CPS and its underlying processes by investigating electrophysiological correlates of the processing of boundary cues in sequences of spoken verbs (auditory stimuli; Experiment 1; N = 23 adults) and actions (visual stimuli; Experiment 2; N = 23 adults). The EEG data from both experiments revealed a CPS-like broadly distributed positivity during the 250 ms prior to the onset of the post-boundary word or action, indicating similar electrophysiological correlates of boundary processing across domains and suggesting that the cognitive processes underlying speech and action segmentation might also be shared.
Infants in the second year of life not only detect the visible goals or end-states of other people's actions, but they also seem to be able to infer others’ underlying intentions. The present study used event-related potentials (ERPs) to investigate the biological basis of infants’ processing of others’ goal-directed actions, with special regard to the involvement of bottom-up perceptual and top-down conceptual processes. In an adaptation of the behavioral re-enactment procedure, 14-month-olds were first familiarized with either full demonstrations (FD), failed attempts (FA), or arbitrary (AA) object-directed actions. Next, ERPs were measured while all infants saw the same two pictures of the end-states of the full demonstration (complete end-state) and the failed attempt (incomplete end-state). In the time windows related to perceptual processing (100–200 ms after stimulus onset) and to conceptual processing (300–700 ms), ERP negativity over frontal and central regions was higher for the complete than for the incomplete end-state in the FD and FA conditions. When comparing the FA and AA conditions, this pattern of results occurred only for the conceptual time window. Moreover, early slow-wave activity (700–1000 ms) differed for the end-state pictures in the three conditions, suggesting differential encoding demands. Together, the electrophysiological data indicate that infants in the second year of life use bottom-up perceptual as well as top-down conceptual processing to give meaning to others' goal-directed actions.
Do as I say - or as I do?!
(2019)
Infants use behavioral and verbal cues to infer another person’s action intention. However, it is still unclear how infants integrate these often co-occurring cues depending on the cues’ coherence (i.e., the degree to which the cues provide coherent information about another’s intention). This study investigated how 18- and 24-month-olds’ (N = 88 per age group) action selection was influenced by varying the coherence of a model’s verbal and behavioral cues. Using a between-subjects design, infants received six trials with different stimulus objects. In the conditions Congruent, Incongruent, and Failed-attempt, the model uttered a telic verb particle that was followed by a matching or contradicting goal-directed action demonstration, or by a non-goal-directed slipping motion, respectively. In the condition Pseudo-word, a nonsense word was combined with a goal-directed action demonstration. Infants’ action selection indicated adherence to the verbal cue in Congruent, Incongruent, and Failed-attempt, and this was stronger in 24- than 18-month-olds. Additionally, in Incongruent and Failed-attempt, patterns of cue integration across the six trials varied in the two age groups. Regarding the behavioral cue, infants in Congruent and Pseudo-word preferentially followed this cue in both age groups, which also suggested a rather unspecific effect of the verbal cue in Congruent. Relatively longer first action-latencies in Incongruent and Failed-attempt implied that these types of coherence elicited higher cognitive demands than in Congruent and Pseudo-word. Results are discussed in light of infants’ flexibility in using social cues, depending on the cues’ coherence and on age-related social-cognitive differences.
Executive functions (EFs) may help children to regulate their food intake in an “obesogenic” environment, where energy-dense food is easily available. There is mounting evidence that overweight is associated with diminished hot and cool EFs, and several longitudinal studies found evidence for a predictive effect of hot EFs on children’s bodyweight, but longitudinal research examining the effect of cool EF on weight development in children is still scarce. The current 3-year longitudinal study examined the effect of a latent cool EF factor, which was based on three behavioral EF tasks, on subsequent mean levels and 3-year growth trajectories of body-mass-index z-scores (zBMI). Data from a large sample of children, with zBMI ranging from normal weight to obesity (n = 1474, aged 6–11 years at T1, 52% girls), were analyzed using structural-equation modeling and linear latent growth-curve modeling. Cool EF at the first wave (T1) negatively predicted subsequent zBMI and zBMI development throughout the 3-year period in middle childhood, such that children with better EF had a lower zBMI and less steep zBMI growth. These effects were not moderated by the children’s age or gender. In conclusion, as early as in middle childhood, cool EFs seem to support the self-regulation of food intake and consequently may play a causal role in the multifactorial etiology of overweight.
Communication with young children is often multimodal in nature, involving, for example, language and actions. The simultaneous presentation of information from both domains may boost language learning by highlighting the connection between an object and a word, owing to temporal overlap in the presentation of multimodal input. However, the overlap is not merely temporal but can also covary in the extent to which particular actions co-occur with particular words and objects, e.g. carers typically produce a hopping action when talking about rabbits and a snapping action for crocodiles. The frequency with which actions and words co-occur in the presence of the referents of these words may also impact young children’s word learning. We therefore examined the extent to which consistency in the co-occurrence of particular actions and words impacted children’s learning of novel word–object associations. Children (18 months, 30 months and 36–48 months) and adults were presented with two novel objects and heard their novel labels while different actions were performed on these objects, such that the particular actions and word–object pairings always co-occurred (Consistent group) or varied across trials (Inconsistent group). At test, participants saw both objects and heard one of the labels to examine whether participants recognized the target object upon hearing its label. Growth curve models revealed that 18-month-olds did not learn words for objects in either condition, and 30-month-old and 36- to 48-month-old children learned words for objects only in the Consistent condition, in contrast to adults who learned words for objects independent of the actions presented. Thus, consistency in the multimodal input influenced word learning in early childhood but not in adulthood. In terms of a dynamic systems account of word learning, our study shows how multimodal learning settings interact with the child’s perceptual abilities to shape the learning experience.
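Several of the abstracts above rely on growth curve models of target looking over time. The following is a minimal, hypothetical sketch of that kind of analysis on simulated data (the studies' actual data, variable names, and model specifications are not available here); it uses a statsmodels linear mixed model with random intercepts per participant as one possible implementation.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulate proportion of target looking across 10 time bins for 20 participants
# in two assumed conditions (Consistent vs Inconsistent); only the Consistent
# group is given growth above the ~0.5 chance level. All effect sizes are invented.
rows = []
for pid in range(20):
    cond = "Consistent" if pid < 10 else "Inconsistent"
    slope = 0.03 if cond == "Consistent" else 0.0   # assumed condition effect
    intercept = 0.5 + rng.normal(0, 0.02)           # chance level ~0.5
    for t in range(10):
        looking = intercept + slope * t + rng.normal(0, 0.03)
        rows.append({"pid": pid, "cond": cond, "time": t, "looking": looking})
df = pd.DataFrame(rows)

# Linear growth curve: fixed effects of time, condition, and their interaction;
# random intercepts grouped by participant.
model = smf.mixedlm("looking ~ time * cond", df, groups=df["pid"]).fit()

# The time-by-condition interaction captures the difference in looking growth
# between the two groups (here negative, since the Inconsistent slope was set to 0).
print(model.params["time:cond[T.Inconsistent]"])
```

In the actual studies, the models would additionally include polynomial time terms and random slopes, but the interaction term shown here is the core quantity of interest: whether the trajectory of target looking differs between conditions.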