This study compares the development of prosodic processing in French- and German-learning infants. The emergence of language-specific perception of phrase boundaries was tested directly using the same stimuli across the two languages. French-learning (Experiments 1 and 2) and German-learning (Experiment 3) 6- and 8-month-olds listened to the same French noun sequences with or without a major prosodic boundary ([Loulou et Manou] [et Nina] vs. [Loulou et Manou et Nina], respectively). The boundaries were either naturally cued (Experiment 1) or cued exclusively by pitch and duration (Experiments 2 and 3). French-learning 6- and 8-month-olds both perceived the natural boundary, but neither group perceived the boundary when only the two cues were present. In contrast, German-learning infants developed from not perceiving the two-cue boundary at 6 months to perceiving it at 8 months, just like German-learning 8-month-olds listening to German (Wellmann, Holzgrefe, Truckenbrodt, Wartenburger, & Höhle, 2012). A control experiment (Experiment 4) found little difference between German and French adult listeners, suggesting that French listeners eventually catch up with German listeners. Taken together, these cross-linguistic differences in the perception of identical stimuli provide direct evidence for the language-specific development of prosodic boundary perception.
Fourteen-month-olds' ability to distinguish a just-learned word, /buːk/, from its minimally different counterpart, /duːk/, was assessed under two pre-exposure conditions: one in which /b, d/-initial forms occurred in a varying vowel context, and another in which the vowel was fixed but the final consonant varied. Infants benefited from the variable vowel context but not from the variable final-consonant context, suggesting that vowel variability, but not variability of all kinds, is beneficial. These results are discussed in the context of time-honored observations on the vowel-dependent nature of place-of-articulation cues for consonants.
We investigated online electrophysiological correlates of distributional learning, specifically of lexical tones by listeners of a non-tonal language. German listeners were presented with a bimodal distribution of syllables bearing lexical tones from a synthesized continuum based on Cantonese level tones. Tones were presented in sets of four standards (within-category tokens) followed by a deviant (across-category token), and the mismatch negativity (MMN) was measured. Earlier behavioral data showed that exposure to this bimodal distribution improved both categorical perception and perceptual acuity for level tones [I]. Here, we present analyses of the electrophysiological response recorded during this exposure, i.e., the development of the MMN response during distributional learning. This development over time was analyzed using Generalized Additive Mixed Models; results showed that the MMN amplitude increased for both within- and across-category tokens, reflecting the higher perceptual acuity that accompanies category formation. This is evidence that learners zooming in on phonological categories undergo neural changes associated with more accurate phonetic perception.
OCP-Place, a cross-linguistically well-attested constraint against pairs of consonants with shared [place], is psychologically real. Studies have shown that the processing of words violating OCP-Place is inhibited. Functionalists assume that OCP arises as a consequence of low-level perception: a consonant following another with the same [place] cannot be faithfully perceived as an independent unit. If functionalist theories were correct, then lexical access would be inhibited if two homorganic consonants conjoin at word boundaries, a problem that can only be solved with lexical feedback.
Here, we experimentally challenge the functionalist account by showing that OCP-Place can be used as a speech segmentation cue during pre-lexical processing, without lexical feedback, and that this use relates to distributional properties of the input.
In Experiment 1, native listeners of Dutch located word boundaries between two labials when segmenting an artificial language, indicating the use of OCP-Labial as a segmentation cue and implying full perception of both labials. Experiment 2 shows that segmentation performance cannot be explained solely by well-formedness intuitions. Experiment 3 shows that knowledge of OCP-Place depends on language-specific input: in Dutch, co-occurrences of labials are under-represented, but co-occurrences of coronals are not. Accordingly, Dutch listeners fail to use OCP-Coronal for segmentation.
Many languages restrict their lexicons by OCP-Place, a phonotactic constraint against co-occurrences of consonants with shared [place] (e.g., McCarthy, 1986). While many previous studies have suggested that listeners have knowledge of OCP-Place and use it in speech processing, it is less clear whether they make reference to an abstract representation of this constraint. In Dutch, OCP-Place gradiently restricts non-adjacent consonant co-occurrences in the lexicon. Focusing on labial-vowel-labial co-occurrences, however, we found exceptions to the general effect of OCP-Labial: (A) co-occurrences of identical labials are systematically less restricted than co-occurrences of merely homorganic labials, and (B) some specific pairs (e.g., /pVp/, /bVv/) occur more often than expected. To examine whether exceptions such as (A) and (B) affect processing, the current study presents an artificial language learning experiment and a reanalysis of Boll-Avetisyan and Kager's (2014) speech segmentation data. Results indicate that Dutch listeners can use both knowledge of phonotactic detail and an abstract constraint OCP-Labial as cues for speech segmentation. We suggest that whether detailed or abstract representations are drawn on depends on the complexity of processing demands.
This study provides a novel approach to testing the universality of perceptual biases by examining speech processing in simultaneous bilingual adults learning two languages that support the maintenance of the bias to different degrees. Specifically, we investigated the Iambic/Trochaic Law, an assumed universal grouping bias, in simultaneous French-German bilinguals, presenting them with streams of syllables varying in intensity, in duration, or in neither, and asking whether they perceived them as strong-weak or weak-strong groupings. Results showed robust, consistent grouping preferences. A comparison with monolinguals from previous studies revealed that the bilinguals pattern with German-speaking monolinguals and differ from French-speaking monolinguals. The distribution of the bilinguals' individual performance was best explained by a model fitting a unimodal (not bimodal) distribution, failing to support two subgroups of language dominance. Moreover, neither language experience nor language context predicted their performance. These findings suggest a special role for universal biases in simultaneous bilinguals.
Rhythm perception is assumed to be guided by a domain-general auditory principle, the Iambic/Trochaic Law, which states that sounds varying in intensity are grouped as strong-weak, and sounds varying in duration are grouped as weak-strong. Recently, Bhatara et al. (2013) showed that rhythmic grouping is influenced by native language experience, with French listeners having weaker grouping preferences than German listeners. This study explores whether L2 knowledge and musical experience also affect rhythmic grouping. In a grouping task, French late learners of German listened to sequences of coarticulated syllables varying in either intensity or duration. Data on their language and musical experience were obtained by questionnaire. Mixed-effects model comparisons showed influences of musical experience, as well as of L2 input quality and quantity, on grouping preferences. These results imply that adult French listeners' sensitivity to rhythm can be enhanced through L2 and musical experience.
Recent studies have suggested that musical rhythm perception ability can affect the phonological system. The most prevalent causal account of developmental dyslexia is the phonological deficit hypothesis. As rhythm is a subpart of phonology, we hypothesized that reading deficits in dyslexia are associated with rhythm processing in speech and in music. In a rhythmic grouping task, adults with diagnosed dyslexia and age-matched controls listened to speech streams with syllables alternating in intensity, in duration, or in neither, and indicated whether they perceived a strong-weak or weak-strong rhythm pattern. Additionally, their reading and musical rhythm abilities were measured. Results showed that adults with dyslexia had lower musical rhythm abilities than adults without dyslexia, and lower musical rhythm ability was associated with lower reading ability in dyslexia. However, speech grouping by adults with dyslexia was not impaired once musical rhythm perception ability was controlled for: like adults without dyslexia, they showed consistent preferences. Rhythmic grouping was instead predicted by musical rhythm perception ability, irrespective of dyslexia. The results suggest associations among musical rhythm perception ability, speech rhythm perception, and reading ability. This highlights the importance of considering individual variability to better understand dyslexia and raises the possibility that musical rhythm perception ability is key to phonological and reading acquisition.