We investigated online electrophysiological components of distributional learning, specifically the learning of lexical tones by listeners of a non-tonal language. German listeners were presented with a bimodal distribution of syllables bearing lexical tones from a synthesized continuum based on Cantonese level tones. Tones were presented in sets of four standards (within-category tokens) followed by a deviant (across-category token), and mismatch negativity (MMN) was measured. Earlier behavioral data showed that exposure to this bimodal distribution improved both categorical perception and perceptual acuity for level tones [1]. Here we analyze the electrophysiological response recorded during this exposure, i.e., the development of the MMN response during distributional learning. Modeling this development over time with Generalized Additive Mixed Models showed that MMN amplitude increased for both within- and across-category tokens, reflecting higher perceptual acuity accompanying category formation. This is evidence that learners zooming in on phonological categories undergo neural changes associated with more accurate phonetic perception.
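To illustrate the shape of such an analysis, below is a minimal sketch, not the authors' pipeline, of modeling single-trial MMN amplitude as a smooth function of exposure in Python with pygam. All column names and the simulated data are hypothetical, and pygam fits plain GAMs without random effects; the full GAMMs described above would typically be fit with R's mgcv.

```python
# Minimal sketch (hypothetical data): MMN amplitude as a nonlinear
# function of exposure, with token type as a factor.
import numpy as np
import pandas as pd
from pygam import LinearGAM, s, f

# Hypothetical single-trial data: one MMN amplitude per deviant trial.
df = pd.DataFrame({
    "trial": np.tile(np.arange(1, 201), 2),
    "condition": np.repeat([0, 1], 200),    # 0 = within-category, 1 = across-category
    "mmn_amplitude": np.random.randn(400),  # placeholder amplitudes (microvolts)
})

X = df[["trial", "condition"]].to_numpy()
y = df["mmn_amplitude"].to_numpy()

# The smooth of trial number captures the development of the MMN over
# exposure; the factor term separates the two token types.
gam = LinearGAM(s(0) + f(1)).fit(X, y)
gam.summary()
```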
OCP-Place, a cross-linguistically well-attested constraint against pairs of consonants with shared [place], is psychologically real. Studies have shown that the processing of words violating OCP-Place is inhibited. Functionalists assume that OCP arises as a consequence of low-level perception: a consonant following another with the same [place] cannot be faithfully perceived as an independent unit. If functionalist theories were correct, then lexical access would be inhibited if two homorganic consonants conjoin at word boundaries, a problem that could only be solved with lexical feedback.
Here, we experimentally challenge the functional account by showing that OCP-Place can be used as a speech segmentation cue during pre-lexical processing without lexical feedback, and that its use relates to distributions in the input.
In Experiment 1, native listeners of Dutch located word boundaries between two labials when segmenting an artificial language. This indicates the use of OCP-Labial as a segmentation cue, implying full perception of both labials. Experiment 2 shows that segmentation performance cannot solely be explained by well-formedness intuitions. Experiment 3 shows that knowledge of OCP-Place depends on language-specific input: in Dutch, co-occurrences of labials are under-represented, but co-occurrences of coronals are not. Accordingly, Dutch listeners fail to use OCP-Coronal for segmentation.
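As an illustration of how such under-representation of homorganic pairs can be quantified, here is a minimal sketch (not the paper's corpus analysis) computing observed/expected (O/E) ratios for consonant place pairs; the place classes and the toy word list are illustrative placeholders.

```python
# Minimal sketch: O/E ratios over C1..C2 pairs in a toy lexicon.
# O/E < 1 for same-place pairs indicates OCP-like under-representation.
from collections import Counter
from itertools import product

LABIALS = set("pbmfvw")   # illustrative place classes
CORONALS = set("tdnszl")

def place(c):
    if c in LABIALS:
        return "labial"
    if c in CORONALS:
        return "coronal"
    return "other"

# Toy lexicon: first two consonants of each (pseudo-)word.
pairs = [("b", "t"), ("p", "k"), ("t", "d"), ("m", "b"), ("s", "t"), ("b", "l")]

obs = Counter((place(c1), place(c2)) for c1, c2 in pairs)
first = Counter(place(c1) for c1, _ in pairs)
second = Counter(place(c2) for _, c2 in pairs)
n = len(pairs)

for a, b in product(first, second):
    expected = first[a] * second[b] / n   # expected count under independence
    oe = obs[(a, b)] / expected if expected else float("nan")
    print(f"{a}-{b}: O/E = {oe:.2f}")
```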
Recent studies have suggested that musical rhythm perception ability can affect the phonological system. The most prevalent causal account for developmental dyslexia is the phonological deficit hypothesis. As rhythm is a subpart of phonology, we hypothesized that reading deficits in dyslexia are associated with rhythm processing in speech and in music. In a rhythmic grouping task, adults with diagnosed dyslexia and age-matched controls listened to speech streams with syllables alternating in intensity, duration, or neither, and indicated whether they perceived a strong-weak or weak-strong rhythm pattern. Additionally, their reading and musical rhythm abilities were measured. Results showed that adults with dyslexia had lower musical rhythm abilities than adults without dyslexia. Moreover, lower musical rhythm ability was associated with lower reading ability in dyslexia. However, speech grouping by adults with dyslexia was not impaired when musical rhythm perception ability was controlled: like adults without dyslexia, they showed consistent preferences. Rather, rhythmic grouping was predicted by musical rhythm perception ability, irrespective of dyslexia. The results suggest associations among musical rhythm perception ability, speech rhythm perception, and reading ability. This highlights the importance of considering individual variability to better understand dyslexia and raises the possibility that musical rhythm perception ability is a key to phonological and reading acquisition.
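A covariate-adjusted model of the kind described here could look like the following sketch, assuming hypothetical column names (`grouping_consistency`, `group`, `rhythm_ability`) and a hypothetical data file; it is not the authors' exact analysis.

```python
# Minimal sketch: does a group difference in speech grouping survive
# once musical rhythm perception ability is controlled for?
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("grouping_data.csv")  # hypothetical file, one row per participant

# If `group` is no longer a reliable predictor with `rhythm_ability` in
# the model, grouping is not impaired beyond what rhythm ability explains.
model = smf.ols("grouping_consistency ~ C(group) + rhythm_ability", data=df).fit()
print(model.summary())
```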
More than 30 years have passed since Mehler et al. (1988) proposed that newborns can discriminate between languages that belong to different rhythm classes: stress-, syllable-, or mora-timed. They further hypothesized that infants are sensitive to differences in vowel and consonant interval durations as acoustic correlates of rhythm classes. It remains unknown exactly which durational computations infants use when perceiving speech to distinguish languages. Here, a meta-analysis of studies on infants' language discrimination skills over the first year of life was conducted, aiming to quantify how language discrimination skills change with age and are modulated by rhythm classes or durational metrics. A systematic literature search identified 42 studies that tested infants' (birth to 12 months) discrimination or preference between two language varieties by presenting them with auditory or audio-visual continuous speech. Quantitative data synthesis used multivariate random-effects meta-analytic models with the factors rhythm class difference, age, stimulus manipulation, method, and metrics operationalising proportions of and variability in vowel and consonant interval durations, to explore which factors best account for language discrimination or preference. Results revealed that smaller differences in vowel interval variability (ΔV) and larger differences in successive consonantal interval variability (rPVI-C) were associated with more successful language discrimination and accounted for discrimination results better than the factor rhythm class. There were no effects of age on discrimination, but preference results were affected by age: the older infants get, the more they prefer non-native languages that are rhythmically similar to their native language, but not non-native languages that are rhythmically distinct. These findings can inform theories of language discrimination that have previously focussed on rhythm class by providing a novel way to operationalise rhythm in language and by quantifying the extent to which it accounts for infants' language discrimination abilities.
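The two durational metrics named above have standard definitions in the rhythm-metrics literature (ΔV: the standard deviation of vocalic interval durations; rPVI-C: the mean absolute difference between successive consonantal intervals). The sketch below computes them for hypothetical interval durations in milliseconds.

```python
# Minimal sketch of the two standard durational rhythm metrics.
import statistics

def delta_v(vowel_intervals):
    """ΔV: standard deviation of vocalic interval durations."""
    return statistics.pstdev(vowel_intervals)

def rpvi_c(consonant_intervals):
    """rPVI-C: raw Pairwise Variability Index over consonantal intervals,
    i.e. the mean absolute difference between successive intervals."""
    d = consonant_intervals
    return sum(abs(d[k] - d[k + 1]) for k in range(len(d) - 1)) / (len(d) - 1)

# Illustrative durations (ms) from a hypothetical speech sample.
print(delta_v([80, 120, 95, 140, 70]))       # ΔV in ms
print(rpvi_c([60, 110, 75, 130, 90, 100]))   # rPVI-C in ms
```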
Respect the surroundings (2021)
Fourteen-month-olds' ability to distinguish a just-learned word, /buːk/, from its minimally different counterpart, /duːk/, was assessed under two pre-exposure conditions: one where /b, d/-initial forms occurred in a varying vowel context and another where the vowel was fixed but the final consonant varied. Infants benefited from the variable vowel context but not from the variable final consonant context, suggesting that vowel variability, but not variability in general, is beneficial. These results are discussed in the context of time-honored observations on the vowel-dependent nature of place of articulation cues for consonants.
This study provides a novel approach for testing the universality of perceptual biases by examining speech processing in simultaneous bilingual adults learning two languages that support the maintenance of this bias to different degrees. Specifically, we investigated the Iambic/Trochaic Law, an assumed universal grouping bias, in simultaneous French-German bilinguals, presenting them with streams of syllables varying in intensity, duration, or neither, and asking whether they perceived strong-weak or weak-strong groupings. Results showed robust, consistent grouping preferences. A comparison with monolinguals from previous studies revealed that the bilinguals pattern with German-speaking monolinguals and differ from French-speaking monolinguals. The distribution of the bilinguals' individual performance was best explained by a model fitting a unimodal (not bimodal) distribution, failing to support two subgroups of language dominance. Moreover, neither language experience nor language context predicted their performance. These findings suggest a special role for universal biases in simultaneous bilinguals.
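The unimodal-versus-bimodal comparison can be illustrated with a sketch like the one below (not the authors' exact model comparison): fit one- and two-component Gaussian mixtures to individual grouping scores and compare their BIC values; the data here are simulated placeholders.

```python
# Minimal sketch: is one latent subgroup or two a better description
# of individual grouping scores? Compare Gaussian mixtures on BIC.
import numpy as np
from sklearn.mixture import GaussianMixture

scores = np.random.randn(60, 1)  # placeholder: one grouping score per bilingual

bic = {}
for k in (1, 2):
    gmm = GaussianMixture(n_components=k, random_state=0).fit(scores)
    bic[k] = gmm.bic(scores)

# Lower BIC wins; a one-component winner supports a unimodal distribution,
# i.e., no evidence for two language-dominance subgroups.
print(bic, "-> best:", min(bic, key=bic.get))
```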
The present study examines the effect of language experience on vocal emotion perception in a second language. Native speakers of French with varying levels of self-reported English ability were asked to identify emotions from vocal expressions produced by American actors in a forced-choice task, and to rate their pleasantness, power, alertness and intensity on continuous scales. Stimuli included emotionally expressive English speech (emotional prosody) and non-linguistic vocalizations (affect bursts), and a baseline condition with Swiss-French pseudo-speech. Results revealed effects of English ability on the recognition of emotions in English speech but not in non-linguistic vocalizations. Specifically, higher English ability was associated with less accurate identification of positive emotions, but not with the interpretation of negative emotions. Moreover, higher English ability was associated with lower ratings of pleasantness and power, again only for emotional prosody. This suggests that second language skills may sometimes interfere with emotion recognition from speech prosody, particularly for positive emotions.
Rhythmicity characterizes both interpersonal synchrony and spoken language. Emotions and language are forms of interpersonal communication, which interact with each other throughout development. We investigated whether and how emotional synchrony between mothers and their 9-month-old infants relates to infants' word segmentation as an early marker of language development. Twenty-six 9-month-old infants and their German-speaking mothers took part in the study. To measure emotional synchrony, we coded positive, neutral and negative emotional expressions of the mothers and their infants during a free play session. We then calculated the degree to which the mothers' and their infants' matching emotional expressions followed a predictable pattern. To measure word segmentation, we familiarized infants with auditory text passages and tested how long they looked at the screen while listening to familiar versus novel words. We found that higher levels of predictability (i.e. low entropy) during mother-infant interaction are associated with infants' word segmentation performance. These findings suggest that individual differences in word segmentation relate to the complexity and predictability of emotional expressions during mother-infant interactions.
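Entropy as a measure of predictability can be illustrated with the following sketch, which computes Shannon entropy over a hypothetical sequence of jointly coded mother-infant emotional states; the coding labels are assumptions, not the study's actual scheme.

```python
# Minimal sketch: Shannon entropy of a coded state sequence.
# Lower entropy = more predictable (more synchronous) interaction patterns.
import math
from collections import Counter

def shannon_entropy(states):
    counts = Counter(states)
    n = len(states)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Toy sequence of jointly coded states during free play
# ("pos" = both positive, "neu" = both neutral, "mis" = mismatch).
dyad = ["pos", "pos", "neu", "pos", "pos", "mis", "pos", "neu"]
print(f"entropy = {shannon_entropy(dyad):.2f} bits")
```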