Language
- English (21)
Keywords
- musicality (5)
- prosody (5)
- rhythmic grouping (5)
- Iambic (3)
- Trochaic Law (3)
- grouping (3)
- speech perception (3)
- Artificial language learning (2)
- German (2)
- Iambic/Trochaic Law (2)
An Exploration of Rhythmic Grouping of Speech Sequences by French- and German-Learning Infants
(2016)
Rhythm in music and speech can be characterized by a constellation of several acoustic cues. Individually, these cues have different effects on rhythmic perception: sequences of sounds alternating in duration are perceived as short-long pairs (weak-strong/iambic pattern), whereas sequences of sounds alternating in intensity or pitch are perceived as loud-soft or high-low pairs (strong-weak/trochaic pattern). This perceptual bias, called the Iambic-Trochaic Law (ITL), has been claimed to be a universal property of the auditory system applying in both the music and the language domains. Recent studies have shown that language experience can modulate the effects of the ITL on rhythmic perception of both speech and non-speech sequences in adults, and of non-speech sequences in 7.5-month-old infants. The goal of the present study was to explore whether language experience also modulates infants’ grouping of speech. To do so, we presented sequences of syllables to monolingual French- and German-learning 7.5-month-olds. Using the Headturn Preference Procedure (HPP), we examined whether they were able to perceive a rhythmic structure in sequences of syllables that alternated in duration, pitch, or intensity. Our findings show that both French- and German-learning infants perceived a rhythmic structure when it was cued by duration or pitch but not intensity. Our findings also show differences in how these infants use duration and pitch cues to group syllable sequences, suggesting that pitch cues were the easier ones to use. Moreover, performance did not differ across languages, failing to reveal early language effects on rhythmic perception. These results contribute to our understanding of the origin of rhythmic perception and of perceptual mechanisms shared across music and speech, which may bootstrap language acquisition.
Rhythm perception is assumed to be guided by a domain-general auditory principle, the Iambic/Trochaic Law, stating that sounds varying in intensity are grouped as strong-weak, and sounds varying in duration are grouped as weak-strong. Recently, Bhatara et al. (2013) showed that rhythmic grouping is influenced by native language experience, French listeners having weaker grouping preferences than German listeners. This study explores whether L2 knowledge and musical experience also affect rhythmic grouping. In a grouping task, French late learners of German listened to sequences of coarticulated syllables varying in either intensity or duration. Data on their language and musical experience were obtained by a questionnaire. Mixed-effects model comparisons showed influences of musical experience as well as L2 input quality and quantity on grouping preferences. These results imply that adult French listeners' sensitivity to rhythm can be enhanced through L2 and musical experience.
Language and music share many rhythmic properties, such as variations in intensity and duration leading to repeating patterns. Perception of rhythmic properties may rely on cognitive networks that are shared between the two domains. If so, then variability in speech rhythm perception may relate to individual differences in musicality. To examine this possibility, the present study focuses on rhythmic grouping, which is assumed to be guided by a domain-general principle, the Iambic/Trochaic law, stating that sounds alternating in intensity are grouped as strong-weak, and sounds alternating in duration are grouped as weak-strong. German listeners completed a grouping task: They heard streams of syllables alternating in intensity, duration, or neither, and had to indicate whether they perceived a strong-weak or weak-strong pattern. Moreover, their music perception abilities were measured, and they filled out a questionnaire reporting their productive musical experience. Results showed that better musical rhythm perception ability was associated with more consistent rhythmic grouping of speech, while melody perception ability and productive musical experience were not. This suggests shared cognitive procedures in the perception of rhythm in music and speech. Also, the results highlight the relevance of considering individual differences in musicality when aiming to explain variability in prosody perception.
Many languages restrict their lexicons by OCP-Place, a phonotactic constraint against co-occurrences of consonants with shared [place] (e.g., McCarthy, 1986). While many previous studies have suggested that listeners have knowledge of OCP-Place and use this for speech processing, it is less clear whether they make reference to an abstract representation of this constraint. In Dutch, OCP-Place gradiently restricts non-adjacent consonant co-occurrences in the lexicon. Focusing on labial-vowel-labial co-occurrences, we found that there are, however, exceptions from the general effect of OCP-Labial: (A) co-occurrences of identical labials are systematically less restricted than co-occurrences of homorganic labials, and (B) some specific pairs (e.g., /pVp/, /bVv/) occur more often than expected. Setting out to study whether exceptions such as (A) and (B) had an effect on processing, the current study presents an artificial language learning experiment and a reanalysis of Boll-Avetisyan and Kager's (2014) speech segmentation data. Results indicate that Dutch listeners can use both knowledge of phonotactic detail and an abstract constraint OCP-Labial as a cue for speech segmentation. We suggest that whether detailed or abstract representations are drawn on depends on the complexity of processing demands.
This study compares the development of prosodic processing in French- and German-learning infants. The emergence of language-specific perception of phrase boundaries was directly tested using the same stimuli across these two languages. French-learning (Experiments 1, 2) and German-learning 6- and 8-month-olds (Experiment 3) listened to the same French noun sequences with or without major prosodic boundaries ([Loulou et Manou] [et Nina]; [Loulou et Manou et Nina], respectively). The boundaries were either naturally cued (Experiment 1), or cued exclusively by pitch and duration (Experiments 2, 3). French-learning 6- and 8-month-olds both perceived the natural boundary, but neither perceived the boundary when only two cues were present. In contrast, German-learning infants develop from not perceiving the two-cue boundary at 6 months to perceiving it at 8 months, just like German-learning 8-month-olds listening to German (Wellmann, Holzgrefe, Truckenbrodt, Wartenburger, & Höhle, 2012). In a control experiment (Experiment 4), we found little difference between German and French adult listeners, suggesting that later, French listeners catch up with German listeners. Taken together, these cross-linguistic differences in the perception of identical stimuli provide direct evidence for language-specific development of prosodic boundary perception.
The ‘social brain’, consisting of areas sensitive to social information, supposedly gates the mechanisms involved in human language learning. Early preverbal interactions are guided by ostensive signals, such as gaze patterns, which are coordinated across body, brain, and environment. However, little is known about how the infant brain processes social gaze in naturalistic interactions and how this relates to infant language development. During free play of 9-month-olds with their mothers, we recorded hemodynamic cortical activity of ‘social brain’ areas (prefrontal cortex, temporo-parietal junctions) via fNIRS, and micro-coded mother’s and infant’s social gaze. Infants’ speech processing was assessed with a word segmentation task. Using joint recurrence quantification analysis, we examined the connection between infants’ ‘social brain’ activity and the temporal dynamics of social gaze at intrapersonal (i.e., infant’s coordination, maternal coordination) and interpersonal (i.e., dyadic coupling) levels. Regression modeling revealed that intrapersonal dynamics in maternal social gaze (but not infant’s coordination or dyadic coupling) coordinated significantly with infant’s cortical activity. Moreover, recurrence quantification analysis revealed that intrapersonal maternal social gaze dynamics (in terms of entropy) were the best predictor of infants’ word segmentation. The findings support the importance of social interaction in language development, particularly highlighting maternal social gaze dynamics.
Perceptual attunement to one's native language results in language-specific processing of speech sounds. This includes stress cues, instantiated by differences in intensity, pitch, and duration. The present study investigates the effects of linguistic experience on the perception of these cues by studying the Iambic-Trochaic Law (ITL), which states that listeners group sounds trochaically (strong-weak) if the sounds vary in loudness or pitch and iambically (weak-strong) if they vary in duration. Participants were native listeners either of French or German; this comparison was chosen because French adults have been shown to be less sensitive than speakers of German and other languages to word-level stress, which is communicated by variation in cues such as intensity, fundamental frequency (F0), or duration. In experiment 1, participants listened to sequences of co-articulated syllables varying in either intensity or duration. The German participants were more consistent in their grouping than the French for both cues. Experiment 2 was identical to experiment 1 except that intensity variation was replaced by pitch variation. German participants again showed more consistency for both cues, and French participants showed especially inconsistent grouping for the pitch-varied sequences. These experiments show that the perception of linguistic rhythm is strongly influenced by linguistic experience.