Department Linguistik
The ‘social brain’, consisting of areas sensitive to social information, supposedly gates the mechanisms involved in human language learning. Early preverbal interactions are guided by ostensive signals, such as gaze patterns, which are coordinated across body, brain, and environment. However, little is known about how the infant brain processes social gaze in naturalistic interactions and how this relates to infant language development. During free play of 9-month-olds with their mothers, we recorded hemodynamic cortical activity of ‘social brain’ areas (prefrontal cortex, temporo-parietal junctions) via fNIRS, and micro-coded mother’s and infant’s social gaze. Infants’ speech processing was assessed with a word segmentation task. Using joint recurrence quantification analysis, we examined the connection between infants’ ‘social brain’ activity and the temporal dynamics of social gaze at intrapersonal (i.e., infant’s coordination, maternal coordination) and interpersonal (i.e., dyadic coupling) levels. Regression modeling revealed that intrapersonal dynamics in maternal social gaze (but not infant’s coordination or dyadic coupling) coordinated significantly with infant’s cortical activity. Moreover, recurrence quantification analysis revealed that intrapersonal maternal social gaze dynamics (in terms of entropy) were the best predictor of infants’ word segmentation. The findings support the importance of social interaction in language development, particularly highlighting maternal social gaze dynamics.
Rhythmicity characterizes both interpersonal synchrony and spoken language. Emotions and language are forms of interpersonal communication, which interact with each other throughout development. We investigated whether and how emotional synchrony between mothers and their 9-month-old infants relates to infants' word segmentation as an early marker of language development. Twenty-six 9-month-old infants and their German-speaking mothers took part in the study. To measure emotional synchrony, we coded positive, neutral, and negative emotional expressions of the mothers and their infants during a free play session. We then calculated the degree to which the mothers' and their infants' matching emotional expressions followed a predictable pattern. To measure word segmentation, we familiarized infants with auditory text passages and tested how long they looked at the screen while listening to familiar versus novel words. We found that higher levels of predictability (i.e., low entropy) during mother-infant interaction are associated with infants' word segmentation performance. These findings suggest that individual differences in word segmentation relate to the complexity and predictability of emotional expressions during mother-infant interactions.
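The predictability measure described above can be illustrated with Shannon entropy. The following is a simplified sketch, not the paper's exact coding scheme: the state labels and the coded sequence are invented for illustration, and lower entropy corresponds to a more predictable interaction.

```python
from collections import Counter
from math import log2

def shannon_entropy(states):
    """Shannon entropy (in bits) of the distribution of coded states."""
    counts = Counter(states)
    n = len(states)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# Hypothetical coded sequence of matched mother-infant emotion states
sequence = ["pos-pos", "pos-pos", "neu-neu", "pos-pos", "neu-pos"]
H = shannon_entropy(sequence)  # lower H = more predictable interaction
```

A sequence consisting of a single repeated state has entropy 0 (fully predictable); entropy is maximal when all states are equally frequent.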
In this paper we examine the effect of uncertainty on readers' predictions about meaning. In particular, we were interested in how uncertainty might influence the likelihood of committing to a specific sentence meaning. We conducted two event-related potential (ERP) experiments using particle verbs such as turn down and manipulated uncertainty by constraining the context such that readers could be either highly certain about the identity of a distant verb particle, such as turn the bed [...] down, or less certain due to competing particles, such as turn the music [...] up/down. The study was conducted in German, where verb particles appear clause-finally and may be separated from the verb by a large amount of material. We hypothesised that this separation would encourage readers to predict the particle, and that high certainty would make prediction of a specific particle more likely than lower certainty. If a specific particle was predicted, this would reflect a strong commitment to sentence meaning that should incur a higher processing cost if the prediction is wrong. If a specific particle was less likely to be predicted, commitment should be weaker and the processing cost of a wrong prediction lower. If true, this could suggest that uncertainty discourages predictions via an unacceptable cost-benefit ratio. However, given the clear predictions made by the literature, it was surprisingly unclear whether the uncertainty manipulation affected the two ERP components studied, the N400 and the PNP. Bayes factor analyses showed that evidence for our a priori hypothesised effect sizes was inconclusive, although there was decisive evidence against a priori hypothesised effect sizes larger than 1 μV for the N400 and larger than 3 μV for the PNP. We attribute the inconclusive finding to the properties of verb-particle dependencies that differ from the verb-noun dependencies in which the N400 and PNP are often studied.
When researchers carry out a null hypothesis significance test, it is tempting to assume that a statistically significant result lowers Prob(H0), the probability of the null hypothesis being true. Technically, such a statement is meaningless for various reasons: e.g., the null hypothesis does not have a probability associated with it. However, it is possible to relax certain assumptions to compute the posterior probability Prob(H0) under repeated sampling. We show in a step-by-step guide that the intuitively appealing belief, that Prob(H0) is low when significant results have been obtained under repeated sampling, is in general incorrect and depends greatly on: (a) the prior probability of the null being true; (b) the type-I error rate; (c) the type-II error rate; and (d) replication of a result. Through step-by-step simulations using open-source code in the R System for Statistical Computing, we show that uncertainty about the null hypothesis being true often remains high despite a significant result. To help the reader develop intuitions about this common misconception, we provide a Shiny app (https://danielschad.shinyapps.io/probnull/). We expect that this tutorial will help researchers better understand and judge results from null hypothesis significance tests.
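The quantities (a)–(d) combine via Bayes' rule on the event "significant result". A minimal sketch follows, in Python rather than the tutorial's own R code; the function name `prob_h0_given_sig` is chosen here for illustration.

```python
# Posterior probability that H0 is true given a significant result,
# under repeated sampling (Bayes' rule on the significance event).
def prob_h0_given_sig(prior_h0, alpha, beta):
    """prior_h0: prior Prob(H0); alpha: type-I error rate;
    beta: type-II error rate (so power = 1 - beta)."""
    p_sig_h0 = alpha            # Prob(significant | H0 true)
    p_sig_h1 = 1.0 - beta       # Prob(significant | H0 false) = power
    joint_h0 = p_sig_h0 * prior_h0
    joint_h1 = p_sig_h1 * (1.0 - prior_h0)
    return joint_h0 / (joint_h0 + joint_h1)

# With a 50% prior and low power (beta = 0.8), a single significant
# result still leaves Prob(H0) at 0.20 -- far from conclusive.
single = prob_h0_given_sig(prior_h0=0.5, alpha=0.05, beta=0.8)

# Replication: feed the posterior back in as the new prior.
replicated = prob_h0_given_sig(prior_h0=single, alpha=0.05, beta=0.8)
```

The second call shows why replication matters: each significant replication lowers the posterior Prob(H0) further, whereas a single significant result with low power leaves it surprisingly high.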
Intuitively, strongly constraining contexts should lead to stronger probabilistic representations of sentences in memory. Encountering unexpected words could therefore be expected to trigger costlier shifts in these representations than expected words. However, psycholinguistic measures commonly used to study probabilistic processing, such as the N400 event-related potential (ERP) component, are sensitive to word predictability but not to contextual constraint. Some research suggests that constraint-related processing cost may be measurable via an ERP positivity following the N400, known as the anterior post-N400 positivity (PNP). The PNP is argued to reflect update of a sentence representation and to be distinct from the posterior P600, which reflects conflict detection and reanalysis. However, constraint-related PNP findings are inconsistent. We sought to conceptually replicate Federmeier et al. (2007) and Kuperberg et al. (2020), who observed that the PNP, but not the N400 or the P600, was affected by constraint at unexpected but plausible words. Using a pre-registered design and statistical approach maximising power, we demonstrated a dissociated effect of predictability and constraint: strong evidence for predictability but not constraint in the N400 window, and strong evidence for constraint but not predictability in the later window. However, the constraint effect was consistent with a P600 and not a PNP, suggesting increased conflict between a strong representation and unexpected input rather than greater update of the representation. We conclude that either a simple strong/weak constraint design is not always sufficient to elicit the PNP, or that previous PNP constraint findings could be an artifact of smaller sample size.
Dynamical models make specific assumptions about the cognitive processes that generate human behavior. In data assimilation, these models are tested against time-ordered data. Recent progress on Bayesian data assimilation demonstrates that this approach combines the strengths of statistical modeling of individual differences with those of dynamical cognitive models.
Human infants can segment action sequences into their constituent actions within the first year of life. However, work to date has almost exclusively examined the role of infants' conceptual knowledge of actions and their outcomes in driving this segmentation. The present study examined electrophysiological correlates of infants' processing of lower-level perceptual cues that signal a boundary between two actions of an action sequence. Specifically, we tested the effect of kinematic boundary cues (pre-boundary lengthening and pause) on 12-month-old infants' (N = 27) processing of a sequence of three arbitrary actions, performed by an animated figure. Using the Event-Related Potential (ERP) approach, we found evidence of a positivity following the onset of the boundary cues, in line with previous work that has found an ERP positivity (Closure Positive Shift, CPS) related to boundary processing in auditory stimuli and action sequences in adults. Moreover, an ERP negativity (Negative Central, Nc) indicated that infants' encoding of the post-boundary action was modulated by the presence or absence of prior boundary cues. We therefore conclude that 12-month-old infants are sensitive to lower-level perceptual kinematic boundary cues, which can support segmentation of a continuous stream of movement into individual action units.
Factorial experiments in research on memory, language, and in other areas are often analyzed using analysis of variance (ANOVA). However, for effects with more than one numerator degree of freedom, e.g., for experimental factors with more than two levels, the ANOVA omnibus F-test is not informative about the source of a main effect or interaction. Because researchers typically have specific hypotheses about which condition means differ from each other, a priori contrasts (i.e., comparisons planned before the sample means are known) between specific conditions or combinations of conditions are the appropriate way to represent such hypotheses in the statistical model. Many researchers have pointed out that contrasts should be "tested instead of, rather than as a supplement to, the ordinary 'omnibus' F test" (Hays, 1973, p. 601). In this tutorial, we explain the mathematics underlying different kinds of contrasts (i.e., treatment, sum, repeated, polynomial, custom, nested, interaction contrasts), discuss their properties, and demonstrate how they are applied in the R System for Statistical Computing (R Core Team, 2018). In this context, we explain the generalized inverse, which is needed to compute the coefficients for contrasts that test hypotheses not covered by the default set of contrasts. A detailed understanding of contrast coding is crucial for successful and correct specification in linear models (including linear mixed models). Contrasts defined a priori yield far more useful confirmatory tests of experimental hypotheses than standard omnibus F-tests. Reproducible code is available from https://osf.io/7ukf6/.
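The role of the generalized inverse can be sketched briefly. This illustration is in Python/NumPy rather than the tutorial's R code, and the three-level factor and cell means below are invented: each row of the hypothesis matrix states one a priori comparison among the cell means, and its generalized inverse is the contrast matrix entered into the linear model.

```python
import numpy as np

# Hypothesis matrix for a hypothetical 3-level factor.
# Each row encodes one a priori hypothesis about the cell means:
Xh = np.array([
    [1/3, 1/3, 1/3],   # intercept: the grand mean
    [-1.0, 1.0, 0.0],  # hypothesis 1: mu2 - mu1
    [-1.0, 0.0, 1.0],  # hypothesis 2: mu3 - mu1
])

# The contrast (model) matrix is the generalized inverse of Xh:
Xc = np.linalg.pinv(Xh)

# Regression coefficients then estimate exactly the hypothesized
# comparisons: b = Xh @ mu for cell means mu.
mu = np.array([10.0, 12.0, 15.0])
b = Xh @ mu  # [grand mean, mu2 - mu1, mu3 - mu1]
```

Because the hypothesis matrix here is square and invertible, `pinv` reduces to the ordinary inverse; the generalized inverse becomes essential when fewer hypotheses than levels are specified and the matrix is not square.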