The functional significance of the two prominent language-related ERP components N400 and P600 is still under debate.
It has recently been suggested that one important dimension along which the two vary is in terms of automaticity versus attentional control, with N400 amplitudes reflecting more automatic and P600 amplitudes reflecting more controlled aspects of sentence comprehension.
The availability of executive resources necessary for controlled processes depends on sustained attention, which fluctuates over time.
Here, we thus tested whether P600 and N400 amplitudes depend on the level of sustained attention.
We reanalyzed EEG and behavioral data from a sentence processing task by Sassenhagen and Bornkessel-Schlesewsky [The P600 as a correlate of ventral attention network reorientation. Cortex, 66, A3-A20, 2015], which included sentences with morphosyntactic and semantic violations.
Participants read sentences phrase by phrase and indicated whether a sentence contained any type of anomaly as soon as they had the relevant information.
To quantify the varying degrees of sustained attention, we extracted a moving reaction time coefficient of variation over the entire course of the task.
We found that the P600 amplitude was significantly larger during periods of low reaction time variability (high sustained attention) than in periods of high reaction time variability (low sustained attention). In contrast, the amplitude of the N400 was not affected by reaction time variability.
These results thus suggest that the P600 component is sensitive to sustained attention whereas the N400 component is not, which provides independent evidence for accounts suggesting that P600 amplitudes reflect more controlled and N400 amplitudes reflect more automatic aspects of sentence comprehension.
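The moving reaction time coefficient of variation described above can be sketched as follows. This is a minimal illustration, not the authors' exact procedure: the window length (20 trials) and centered windowing are illustrative assumptions.

```python
import numpy as np

def moving_cv(rts, window=20):
    """Moving coefficient of variation (SD / mean) of reaction times.

    High values indicate variable responding (low sustained attention);
    low values indicate stable responding (high sustained attention).
    NOTE: window length and centering are illustrative assumptions.
    """
    rts = np.asarray(rts, dtype=float)
    cv = np.empty(rts.shape)
    half = window // 2
    for i in range(len(rts)):
        lo, hi = max(0, i - half), min(len(rts), i + half + 1)
        win = rts[lo:hi]
        cv[i] = win.std(ddof=1) / win.mean()
    return cv

# Toy data: stable RTs in the first half of the task, variable RTs later.
rng = np.random.default_rng(0)
rts = np.concatenate([rng.normal(600, 20, 100), rng.normal(600, 120, 100)])
cv = moving_cv(rts)
```

Trials could then be split by a median (or similar) cut on `cv` to contrast ERP amplitudes between high- and low-variability periods.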
One of the ongoing debates about visual consciousness is whether it should be considered an all-or-none or a graded phenomenon. While there is increasing evidence for the existence of graded states of conscious awareness based on paradigms such as visual masking, little and mixed evidence is available for the attentional blink paradigm, specifically with regard to electrophysiological measures. Notably, the all-or-none pattern reported in some attentional blink studies might have originated from specifics of the experimental design, suggesting a need to examine the generalizability of these results. In the present event-related potential (ERP) study (N = 32), visual awareness of T2 face targets was assessed via subjective visibility ratings on a perceptual awareness scale in combination with ERPs time-locked to T2 onset (components P1, N1, N2, and P3). Furthermore, a classification task preceding the visibility ratings allowed us to track task performance. The behavioral results indicate a graded rather than an all-or-none pattern of visual awareness. Corresponding graded differences in the N1, N2, and P3 components were observed for the comparison of visibility levels. These findings suggest that conscious perception during the attentional blink can occur in a graded fashion.
Language production ultimately aims to convey meaning. Yet words differ widely in the richness and density of their semantic representations, and these differences impact conceptual and lexical processes during speech planning. Here, we replicated the recent finding that semantic richness, measured as the number of associated semantic features according to semantic feature production norms, facilitates object naming. In contrast, intercorrelational semantic feature density, measured as the degree of intercorrelation of a concept's features, presumably resulting in the coactivation of closely related concepts, has an inhibitory influence. We replicated the behavioral effects and investigated their relative time course and electrophysiological correlates. Both the facilitatory effect of high semantic richness and the inhibitory influence of high feature density were reflected in an increased posterior positivity starting at about 250 ms, in line with previous reports of posterior positivities in paradigms employing contextual manipulations to induce semantic interference during language production. Furthermore, amplitudes at the same posterior electrode sites were positively correlated with object naming times between about 230 and 380 ms. The observed effects follow naturally from the assumption of conceptual facilitation and simultaneous lexical competition and are difficult to explain by language production theories dismissing lexical competition.
We argue that natural language can be usefully described as quasi-compositional and we suggest that deep learning-based neural language models bear long-term promise to capture how language conveys meaning. We also note that a successful account of human language processing should explain both the outcome of the comprehension process and the continuous internal processes underlying this performance. These points motivate our discussion of a neural network model of sentence comprehension, the Sentence Gestalt model, which we have used to account for the N400 component of the event-related brain potential (ERP), which tracks meaning processing as it happens in real time. The model, which shares features with recent deep learning-based language models, simulates N400 amplitude as the automatic update of a probabilistic representation of the situation or event described by the sentence, corresponding to a temporal difference learning signal at the level of meaning. We suggest that this process happens relatively automatically, and that sometimes a more-controlled attention-dependent process is necessary for successful comprehension, which may be reflected in the subsequent P600 ERP component. We relate this account to current deep learning models as well as classic linguistic theory, and use it to illustrate a domain general perspective on some specific linguistic operations postulated based on compositional analyses of natural language. This article is part of the theme issue 'Towards mechanistic models of meaning composition'.
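The core quantity the Sentence Gestalt account assigns to the N400 can be illustrated with a toy proxy: the magnitude of the stimulus-induced change in a probabilistic representation of meaning. This is a hedged sketch, not the model itself — in the actual model the representation is a learned hidden-layer activation over semantic features; the four feature units and their values below are invented for illustration.

```python
import numpy as np

def n400_proxy(p_before, p_after):
    """Simulated N400 amplitude as the summed absolute change in a
    probabilistic meaning representation induced by the incoming word
    (a semantic prediction error / update signal)."""
    return float(np.sum(np.abs(np.asarray(p_after) - np.asarray(p_before))))

# Toy activations over four hypothetical semantic feature units.
p_prior      = np.array([0.70, 0.20, 0.05, 0.05])  # expectation before the word
p_expected   = np.array([0.90, 0.05, 0.03, 0.02])  # after an expected word
p_unexpected = np.array([0.05, 0.10, 0.80, 0.05])  # after an unexpected word

amp_expected   = n400_proxy(p_prior, p_expected)    # small update
amp_unexpected = n400_proxy(p_prior, p_unexpected)  # large update
```

An unexpected word forces a larger revision of the meaning representation, and the proxy accordingly yields a larger simulated N400.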
Increased N400 amplitudes on indefinite articles (a/an) incompatible with expected nouns were initially taken as strong evidence for probabilistic pre-activation of phonological word forms, but have recently been intensely debated because they have proven difficult to replicate. Here, these effects are simulated using a neural network model of sentence comprehension that we previously used to simulate a broad range of empirical N400 effects. The model produces the effects when the cue validity of the articles concerning upcoming noun meaning in the learning environment is high, but fails to produce the effects when the cue validity of the articles is low due to adjectives presented between articles and nouns during training. These simulations provide insight into one of the factors potentially contributing to the small size of the effects in empirical studies and generate predictions for cross-linguistic differences in article-induced N400 effects based on the articles' cue validity. The model accounts for article-induced N400 effects without assuming pre-activation of word forms, and instead simulates these effects as the stimulus-induced change in a probabilistic representation of meaning corresponding to an implicit semantic prediction error.
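The cue-validity logic can be sketched in miniature. Under the update-based account, an article produces an N400 effect only insofar as it shifts the meaning representation, and how much it shifts it depends on how reliably articles predicted noun meaning during learning. The two candidate meanings and all probability values below are invented for illustration; this is not the model's actual learned behavior.

```python
import numpy as np

def update_size(prior, posterior):
    """Magnitude of the article-induced update of the meaning
    representation (the proxy for the simulated N400)."""
    return float(np.abs(np.asarray(posterior) - np.asarray(prior)).sum())

# Toy distributions over two candidate noun meanings, e.g. "kite" vs
# "airplane", with "kite" strongly expected from the sentence context.
prior = np.array([0.8, 0.2])

# After reading "an": with HIGH cue validity (articles reliably signal
# the noun's onset in training), the representation shifts sharply
# toward the vowel-initial alternative; with LOW cue validity
# (adjectives often intervened in training), it barely moves.
posterior_high_cv = np.array([0.15, 0.85])
posterior_low_cv  = np.array([0.70, 0.30])

n400_high = update_size(prior, posterior_high_cv)  # sizeable article effect
n400_low  = update_size(prior, posterior_low_cv)   # little or no effect
```

The same comparison generates the cross-linguistic prediction mentioned above: languages or constructions with more valid article cues should show larger article-induced N400 effects.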
The functional significance of the N400 evoked-response component is still actively debated. An increasing amount of theoretical and computational modelling work is built on the interpretation of the N400 as a prediction error. In neural network modelling work, it was proposed that the N400 component can be interpreted as the change in a probabilistic representation of meaning that drives the continuous adaptation of an internal model of the statistics of the environment. These results imply that increased N400 amplitudes should correspond to greater adaptation, which can be measured via implicit memory. To investigate this model-derived hypothesis, the current study manipulated expectancy in a sentence reading task to influence N400 amplitudes and subsequently presented the previously expected vs. unexpected words in a perceptual identification task to measure implicit memory. As predicted, reaction times in the perceptual identification task were significantly faster for previously unexpected words that had induced larger N400 amplitudes in the preceding sentence reading task. Additionally, this adaptation appears to depend specifically on the process underlying N400 amplitudes, as participants with larger N400 differences during sentence reading also exhibited a larger implicit memory benefit in the perceptual identification task. These findings support the interpretation of the N400 as an implicit learning signal driving adaptation in language processing.
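The hypothesized link between N400 amplitude and adaptation follows from error-driven learning, which a generic delta-rule sketch makes concrete: the larger the prediction error (the simulated N400), the larger the resulting weight change, and hence the larger the benefit on later re-exposure. This is a generic illustration, not the authors' model; all numbers are invented.

```python
import numpy as np

def delta_rule_update(w, x, target, lr=0.1):
    """One error-driven (delta-rule) update. Returns the updated
    weights and the absolute prediction error: larger error means
    larger weight change, i.e. more adaptation."""
    error = target - float(w @ x)
    return w + lr * error * x, abs(error)

x = np.array([1.0, 0.5])   # input pattern for the presented word
target = 1.0               # correct (observed) meaning activation

w_expected   = np.array([0.8, 0.3])  # word fits expectations: small error
w_unexpected = np.array([0.1, 0.0])  # word violates expectations: large error

w1, err_small = delta_rule_update(w_expected, x, target)
w2, err_large = delta_rule_update(w_unexpected, x, target)

# Larger error (larger simulated N400) -> larger weight change ->
# stronger implicit-memory benefit at re-exposure.
change_small = float(np.abs(w1 - w_expected).sum())
change_large = float(np.abs(w2 - w_unexpected).sum())
```

This is the mechanism behind the prediction tested above: unexpected words that elicit larger N400s should leave larger traces, measurable as faster perceptual identification later on.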