Recent studies have suggested that musical rhythm perception ability can affect the phonological system. The most prevalent causal account for developmental dyslexia is the phonological deficit hypothesis. As rhythm is a subpart of phonology, we hypothesized that reading deficits in dyslexia are associated with rhythm processing in speech and in music. In a rhythmic grouping task, adults with diagnosed dyslexia and age-matched controls listened to speech streams with syllables alternating in intensity, duration, or neither, and indicated whether they perceived a strong-weak or weak-strong rhythm pattern. Additionally, their reading and musical rhythm abilities were measured. Results showed that adults with dyslexia had lower musical rhythm abilities than adults without dyslexia. Moreover, lower musical rhythm ability was associated with lower reading ability in dyslexia. However, speech grouping by adults with dyslexia was not impaired once musical rhythm perception ability was controlled for: like adults without dyslexia, they showed consistent preferences. Instead, rhythmic grouping was predicted by musical rhythm perception ability, irrespective of dyslexia. The results suggest associations among musical rhythm perception ability, speech rhythm perception, and reading ability. This highlights the importance of considering individual variability to better understand dyslexia and raises the possibility that musical rhythm perception ability is key to phonological and reading acquisition.
The other-race effect (ORE) can be described as difficulty in discriminating between faces of ethnicities other than one's own, and can already be observed at approximately 9 months of age. Recent studies have also shown that infants visually explore same- and other-race faces differently. However, it is still unclear whether infants' looking behavior for same- and other-race faces is related to their face discrimination abilities. To investigate this question, we conducted a habituation–dishabituation experiment to examine Caucasian 9-month-old infants' gaze behavior and their discrimination of same- and other-race faces, using eye-tracking measurements. We found that infants looked longer at the eyes of same-race faces than at those of other-race faces over the course of habituation. After habituation, infants demonstrated a clear other-race effect by successfully discriminating between same-race faces, but not between other-race faces. Importantly, the infants' ability to discriminate between same-race faces correlated significantly with their fixation time on the eyes of same-race faces during habituation. Thus, our findings suggest that for infants old enough to begin exhibiting the ORE, gaze behavior during habituation is related to their ability to differentiate among same-race faces, as compared to other-race faces.
Acquiring Syntactic Variability: The Production of Wh-Questions in Children and Adults Speaking Akan
(2020)
This paper investigates the predictions of the Derivational Complexity Hypothesis by studying the acquisition of wh-questions in 4- and 5-year-old Akan-speaking children in an experimental approach using an elicited production and an elicited imitation task. Akan has two types of wh-question structures (wh-in-situ and wh-ex-situ questions), which allows an investigation of children's acquisition of these two question structures and their preferences for one or the other. Our results show that adults prefer wh-ex-situ questions over wh-in-situ questions. The results from the children show that both age groups have the two question structures in their linguistic repertoire. However, they differ in their preferences in usage in the elicited production task: while the 5-year-olds preferred the wh-in-situ structure over the wh-ex-situ structure, the 4-year-olds showed a selective preference for the wh-in-situ structure only in who-questions. These findings suggest a developmental change in wh-question preferences in Akan-learning children between 4 and 5 years of age, with a so far unobserved U-shaped developmental pattern. In the elicited imitation task, all groups showed a strong tendency to maintain the structure of in-situ and ex-situ questions when repeating grammatical questions. When repairing ungrammatical ex-situ questions, participants hardly ever changed the structure to a grammatical in-situ question; instead, they inserted the missing morphemes while keeping the ex-situ structure. Together, our findings provide only partial support for the Derivational Complexity Hypothesis.
Only the right noise?
(2020)
Seminal work by Werker and colleagues (Stager & Werker [1997] Nature, 388, 381-382) found that 14-month-old infants do not show evidence of learning minimal pairs in the habituation-switch paradigm. However, when multiple speakers produce the minimal pair in acoustically variable ways, infants' performance improves in comparison to a single-speaker condition (Rost & McMurray [2009] Developmental Science, 12, 339-349). The current study further extends these results and assesses how different kinds of input variability affect 14-month-olds' minimal pair learning in the habituation-switch paradigm, testing German-learning infants. The first two experiments investigated word learning when the labels were spoken by a single speaker versus by multiple speakers. In the third experiment we studied whether non-acoustic variability, implemented as visual variability of the objects presented together with the labels, would also affect minimal pair learning. We found enhanced learning in the multiple-speaker compared to the single-speaker condition, confirming previous findings with English-learning infants. In contrast, visual variability of the presented objects did not support learning. These findings both confirm and better delimit the beneficial role of speech-specific variability in minimal pair learning. Finally, we review different proposals on the mechanisms by which variability confers benefits to learning and outline likely principles that underlie this benefit. Among these we highlight the multiplicity of acoustic cues signalling phonemic contrasts and the presence of relations among these cues. It is in these relations that we trace part of the source of the apparently paradoxical benefit of variability in learning.
One of the most important social cognitive skills in humans is the ability to "put oneself in someone else's shoes," that is, to take another person's perspective. In socially situated communication, perspective taking enables the listener to arrive at a meaningful interpretation of what is said (sentence meaning) and what is meant (speaker's meaning) by the speaker. To successfully decode the speaker's meaning, the listener has to take into account which information he/she and the speaker share in their common ground (CG). Here, we further investigated competing accounts of when and how CG information affects language comprehension by means of reaction time (RT) measures, accuracy data, event-related potentials (ERPs), and eye-tracking. Early integration accounts predict that CG information is considered immediately and hence expect no costs of CG integration. Late integration accounts predict a rather late and effortful integration of CG information during the parsing process, which might be reflected in integration or updating costs. Other accounts predict the simultaneous integration of privileged ground (PG) and CG perspectives. We used a computerized version of the referential communication game with object triplets of different sizes presented visually in CG or PG. In critical trials (i.e., conflict trials), CG information had to be integrated while privileged information had to be suppressed. Listeners mastered the integration of CG (response accuracy 99.8%). Yet slower RTs and enhanced late positivities in the ERPs showed that CG integration had its costs. Moreover, eye-tracking data indicated an early anticipation of referents in CG but an inability to suppress looks to the privileged competitor, resulting in later and longer looks to targets in those trials in which CG information had to be considered. Our data therefore support accounts that foresee an early anticipation of referents in CG but a rather late and effortful integration when conflicting information has to be processed. We show that both perspectives, PG and CG, contribute to socially situated language processing, and we discuss the data with reference to theoretical accounts and recent findings on the use of CG information for reference resolution.