Recent treatment protocols have been successful in improving working memory (WM) in individuals with aphasia. However, the evidence to date is limited, and the extent to which improvements in trained WM tasks transfer to untrained memory tasks, spoken sentence comprehension, and functional communication remains poorly understood. To address these issues, we conducted a multiple-baseline study with three German-speaking individuals with chronic post-stroke aphasia. Participants practised two computerised WM tasks (n-back with pictures and n-back with spoken words) four times a week for a month, targeting two WM processes: updating WM representations and resolving interference. All participants showed improvement on at least one measure of spoken sentence comprehension and everyday memory activities. Two of them also showed improvement on measures of WM and functional communication. Our results suggest that WM can be improved through computerised training in chronic aphasia and that this can transfer to spoken sentence comprehension and functional communication in some individuals.
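The core updating demand of an n-back task can be sketched as follows (a minimal illustration only, not the authors' training software; the stimulus stream and n = 2 are hypothetical):

```python
def nback_targets(stimuli, n=2):
    """Mark each position whose stimulus matches the one n steps back.

    This is the updating demand of an n-back task: the current item
    must be compared against a continuously updated buffer of the
    last n items.
    """
    targets = []
    for i, item in enumerate(stimuli):
        targets.append(i >= n and item == stimuli[i - n])
    return targets

# Hypothetical spoken-word stream; True marks a 2-back match.
stream = ["dog", "cat", "dog", "cat", "bird", "cat"]
print(nback_targets(stream))  # [False, False, True, True, False, True]
```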
This study on analogical reasoning evaluates the impact of fluid intelligence (FI) on adaptive changes in neural efficiency over the course of an experiment and specifies the underlying cognitive processes. Grade 10 students (N = 80) solved unfamiliar geometric analogy tasks of varying difficulty. Neural efficiency was measured by event-related desynchronization (ERD) in the alpha band, an indicator of cortical activity, and was defined as a low amount of cortical activity accompanying high performance during problem solving. The higher their FI, the faster and more accurately students solved the tasks. Moreover, while high FI led to greater cortical activity in the first half of the experiment, it was associated with neurally more efficient processing (i.e., better performance with the same amount of cortical activity) in the second half. Performance in difficult tasks improved over the course of the experiment for all students, while neural efficiency increased for students with higher FI but decreased for students with lower FI. Based on analyses of the alpha sub-bands, we argue that high FI was associated with a stronger investment of attentional resources in the integration of information and the encoding of relations in this unfamiliar task in the first half of the experiment (lower-2 alpha band). Students with lower FI seem to have adapted their strategies over the course of the experiment (i.e., focusing on task-relevant information; lower-1 alpha band). Thus, the initially lower cortical activity and its increase in students with lower FI might reflect the overcoming of mental overload present in the first half of the experiment.
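Event-related desynchronization is conventionally quantified as the percentage decrease in band power from a pre-stimulus reference interval to the task interval. A minimal sketch of this standard formula, with made-up power values:

```python
def erd_percent(ref_power, task_power):
    """Event-related desynchronization (ERD%): percentage decrease of
    band power during the task relative to a pre-stimulus reference
    interval. Positive values indicate desynchronization, i.e. more
    cortical activity."""
    return (ref_power - task_power) / ref_power * 100.0

# Hypothetical alpha-band power values (arbitrary units).
print(erd_percent(ref_power=50.0, task_power=30.0))  # 40.0
```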
Age of acquisition (AOA) has frequently been shown to influence response times and accuracy rates in word processing and constitutes a meaningful variable in aphasic language processing, while its origin in the language processing system is still under debate. To find out where AOA originates, and whether and how it is related to another important psycholinguistic variable, namely semantic typicality (TYP), we studied healthy elderly controls and semantically impaired individuals using semantic priming. For this purpose, we collected reaction times and accuracy rates as well as event-related potential data in an auditory category-member-verification task. The present results confirm a semantic origin of TYP but question the same for AOA, favouring instead an origin at the phonology-semantics interface. The data are further interpreted in light of recent theories of ageing.
The semantics of focus particles like only requires a set of alternatives (Rooth, 1992). In two experiments, we investigated the impact of such particles on the retrieval of alternatives that are mentioned in the prior context or unmentioned. The first experiment used a probe recognition task and showed that focus particles interfere with the recognition of mentioned alternatives and the rejection of unmentioned alternatives relative to a condition without a particle. A second lexical decision experiment demonstrated priming effects for mentioned and unmentioned alternatives (compared with an unrelated condition) while focus particles caused additional interference effects. Overall, our results indicate that focus particles trigger an active search for alternatives and lead to a competition between mentioned alternatives, unmentioned alternatives, and the focused element.
The functional significance of the two prominent language-related ERP components N400 and P600 is still under debate.
It has recently been suggested that one important dimension along which the two vary is in terms of automaticity versus attentional control, with N400 amplitudes reflecting more automatic and P600 amplitudes reflecting more controlled aspects of sentence comprehension.
The availability of executive resources necessary for controlled processes depends on sustained attention, which fluctuates over time.
Here, we thus tested whether P600 and N400 amplitudes depend on the level of sustained attention.
We reanalyzed EEG and behavioral data from a sentence processing task by Sassenhagen and Bornkessel-Schlesewsky [The P600 as a correlate of ventral attention network reorientation. Cortex, 66, A3-A20, 2015], which included sentences with morphosyntactic and semantic violations.
Participants read sentences phrase by phrase and indicated whether a sentence contained any type of anomaly as soon as they had the relevant information.
To quantify the varying degrees of sustained attention, we extracted a moving reaction time coefficient of variation over the entire course of the task.
We found that the P600 amplitude was significantly larger during periods of low reaction time variability (high sustained attention) than in periods of high reaction time variability (low sustained attention). In contrast, the amplitude of the N400 was not affected by reaction time variability.
These results thus suggest that the P600 component is sensitive to sustained attention whereas the N400 component is not, which provides independent evidence for accounts suggesting that P600 amplitudes reflect more controlled and N400 amplitudes reflect more automatic aspects of sentence comprehension.
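The moving reaction time coefficient of variation used in this reanalysis to index sustained attention can be sketched as follows (a minimal illustration with a hypothetical window size; the original analysis pipeline may differ):

```python
import statistics

def moving_rt_cv(rts, window=5):
    """Moving coefficient of variation (SD / mean) of reaction times.

    A high CV within a window indicates variable responding (low
    sustained attention); a low CV indicates stable responding
    (high sustained attention).
    """
    cvs = []
    for i in range(len(rts) - window + 1):
        chunk = rts[i:i + window]
        cvs.append(statistics.stdev(chunk) / statistics.mean(chunk))
    return cvs

# Hypothetical reaction times in ms: stable at first, then erratic.
rts = [500, 510, 495, 505, 500, 700, 400, 900, 350, 800]
cvs = moving_rt_cv(rts)
print(cvs[0] < cvs[-1])  # True: the stable window has a lower CV
```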
The 15th Herbsttreffen Patholinguistik, with the main topic »Interdisziplinär (be-)handeln – Multiprofessionelle Zusammenarbeit in der Sprachtherapie« (Interdisciplinary Action and Treatment: Multiprofessional Collaboration in Speech and Language Therapy), took place on 20 November 2021 as an online event. The Herbsttreffen has been held annually since 2007 by the Verband für Patholinguistik e.V. (vpl) and, since 2021, by the Deutscher Bundesverband für akademische Sprachtherapie und Logopädie (dbs) in cooperation with the University of Potsdam. The present proceedings volume contains the talks on the main topic and information from the panel discussion, as well as the poster presentations on further topics from speech and language therapy research and practice.
The 12th Herbsttreffen Patholinguistik, with the main topic »Weg(e) mit dem Stottern: Therapie und Selbsthilfe für Kinder und Erwachsene« (Ways of Dealing with Stuttering: Therapy and Self-Help for Children and Adults), took place on 24 November 2018 in Potsdam. The Herbsttreffen has been held annually since 2007 by the Verband für Patholinguistik e.V. (vpl). The present proceedings volume contains the talks on the main topic as well as contributions from the poster presentations on further topics from speech and language therapy research and practice.
The 11th Herbsttreffen Patholinguistik, with the main topic »Gut gestimmt: Diagnostik und Therapie bei Dysphonie« (Well Tuned: Diagnostics and Therapy for Dysphonia), took place on 18 November 2017 in Potsdam. The Herbsttreffen has been held annually since 2007 by the Verband für Patholinguistik e.V. (vpl). The present proceedings volume contains the keynote talks on the main topic as well as contributions to the short talks in the »Spektrum Patholinguistik« section and the poster presentations on further topics from speech and language therapy research and practice.
The 3rd Herbsttreffen Patholinguistik took place on 21 November 2009 at the University of Potsdam. The present proceedings volume contains the three keynote talks on the main topic „Von der Programmierung zur Artikulation: Sprechapraxie bei Kindern und Erwachsenen" (From Programming to Articulation: Apraxia of Speech in Children and Adults). In addition, the volume contains the contributions from the Spektrum Patholinguistik section as well as the abstracts of the poster presentations.
The 8th Herbsttreffen Patholinguistik, with the main topic "Besonders behandeln? Sprachtherapie im Rahmen primärer Störungsbilder" (Special Treatment? Speech and Language Therapy in the Context of Primary Disorders), took place on 15 November 2014 in Potsdam. The Herbsttreffen has been held annually since 2007 by the Verband für Patholinguistik e.V. (vpl).
The present proceedings volume contains the four keynote talks on the main topic, the four short talks from the Spektrum Patholinguistik section, and the contributions of the poster presentations on further topics from speech and language therapy research and practice.
Investigating the neuronal network underlying language processing may contribute to a better understanding of how the brain masters this complex cognitive function with surprising ease and how language is acquired at a fast pace in infancy. Modern neuroimaging methods make it possible to visualize the development and function of the language network. The present paper focuses on a specific methodology, functional near-infrared spectroscopy (fNIRS), providing an overview of studies on auditory language processing and acquisition. The methodology detects oxygenation changes elicited by functional activation of the cerebral cortex. Its main advantages for research on auditory language processing and its development during infancy are its undemanding application, the lack of instrumental noise, and its potential to simultaneously register electrophysiological responses. It also constitutes an innovative approach for studying developmental issues in infants and children. The review focuses on studies of word and sentence processing, including research in infants and adults.
Sensitivity to salience
(2018)
Sentence comprehension is optimised by indicating entities as salient through linguistic (i.e., information-structural) or visual means. We compare how salience of a depicted referent due to a linguistic (i.e., topic status) or visual cue (i.e., a virtual person’s gaze shift) modulates sentence comprehension in German. We investigated processing of sentences with varying word order and pronoun resolution by means of self-paced reading and an antecedent choice task, respectively. Our results show that linguistic as well as visual salience cues immediately speeded up reading times of sentences mentioning the salient referent first. In contrast, for pronoun resolution, linguistic and visual cues modulated antecedent choice preferences less congruently. In sum, our findings speak in favour of a significant impact of linguistic and visual salience cues on sentence comprehension, substantiating that salient information delivered via language as well as the visual environment is integrated in the current mental representation of the discourse.
Understanding the rapidly developing building blocks of speech perception in infancy requires a close look at the auditory prerequisites for speech sound processing. Pioneering studies have demonstrated that hemispheric specializations for language processing are already present in early infancy. However, whether these computational asymmetries can be considered a function of linguistic attributes or a consequence of basic temporal signal properties is under debate. Several studies in adults link hemispheric specialization for certain aspects of speech perception to an asymmetry in cortical tuning and reveal that the auditory cortices are differentially sensitive to spectrotemporal features of speech. Applying concurrent electrophysiological (EEG) and hemodynamic (near-infrared spectroscopy) recording to newborn infants listening to temporally structured nonspeech signals, we provide evidence that newborns process nonlinguistic acoustic stimuli that share critical temporal features with language in a differential manner. The newborn brain preferentially processes temporal modulations especially relevant for phoneme perception. In line with multi-time-resolution conceptions, modulations on the time scale of phonemes elicit strong bilateral cortical responses. Our data furthermore suggest that responses to slow acoustic modulations are lateralized to the right hemisphere. That is, the newborn auditory cortex is sensitive to the temporal structure of the auditory input and shows an emerging tendency for functional asymmetry. Hence, our findings support the hypothesis that development of speech perception is linked to basic capacities in auditory processing. From birth, the brain is tuned to critical temporal properties of linguistic signals to facilitate one of the major needs of humans: to communicate.
This study investigates prosodic phrasing of bracketed lists in German. We analyze variation in pauses, phrase-final lengthening, and f0 in speech production, and how these cues affect boundary perception. In line with the literature, we found that pauses are often used to signal intonation phrase boundaries, while final lengthening and f0 are employed across different levels of the prosodic hierarchy. Deviations from expectations based on the standard syntax-prosody mapping are interpreted in terms of task-specific effects. That is, we argue that speakers add or delete prosodic boundaries to enhance the phonological contrast between different bracketings in the experimental task. In perception, three experiments were run in which we tested single cues only (but temporally distributed at different locations in the sentences). Results from identification tasks and reaction time measurements indicate that pauses lead to a more abrupt shift in listeners' prosodic judgments, while f0 and final lengthening are exploited in a more gradient manner. Hence, pauses, final lengthening, and f0 all have an impact on boundary perception, though listeners show different sensitivity to the three acoustic cues.
Prosodic boundaries can be used to disambiguate the syntactic structure of coordinated name sequences (coordinates). To answer the question whether disambiguating prosody is produced in a situationally dependent or independent manner, and to contribute to our understanding of the nature of the prosody-syntax link, we systematically explored variability in the prosody of boundary productions of coordinates evoked by different contextual settings in a referential communication task. Our analysis focused on prosodic boundaries produced to distinguish sequences with different syntactic structures (i.e., with or without internal grouping of the constituents). In German, these prosodic boundaries are indicated by three major prosodic cues: f0 range, final lengthening, and pause. In line with the Proximity/Anti-Proximity principle of the syntax-prosody model by Kentner and Féry (2013), speakers clearly use all three cues for constituent grouping and prosodically mark groups within and at their right boundary, indicating that prosodic phrasing is not a local phenomenon. Intra-individually, we found a rather stable prosodic pattern across contexts. However, inter-individually, speakers differed from each other with respect to the prosodic cue combinations that they (consistently) used to mark the boundaries. Overall, our data speak in favour of a close link between syntax and prosody and for situational independence of disambiguating prosody.
Production and comprehension of prosodic boundary marking in persons with unilateral brain lesions
(2022)
Purpose: Persons with unilateral brain damage in the right hemisphere (RH) or left hemisphere (LH) show limitations in processing linguistic prosody, with yet inconclusive results on their ability to process prosodically marked structural boundaries for syntactic ambiguity resolution. We aimed at systematically investigating production and comprehension of three prosodic cues (f0 range, final lengthening, and pause) at structural boundaries in coordinate sequences in participants with right hemisphere brain damage (RHDP) and participants with left hemisphere brain damage (LHDP). Method: Twenty RHDP and 15 LHDP participated in our study.
Comprehension experiment: Participants and a control group listened to coordinate name sequences with internal grouping by a prosodically marked structural boundary (grouped condition, e.g., "(Gabi und Leni) # und Nina") or without internal grouping (ungrouped condition, e.g., "Gabi und Leni und Nina") and had to identify the target condition. The strength and combinations of prosodic cues in the stimuli were manipulated.
Production experiment: Participants were asked to produce coordinate sequences in the two conditions (grouped, ungrouped) in two different tasks: a Reading Aloud and a Repetition experiment. Accuracy of participants' productions was subsequently assessed in a rating study and productions were analyzed with respect to use of prosodic cues.
Results: In the Comprehension experiment, RHDP and LHDP had overall lower identification accuracies than unimpaired control participants, and LHDP were found to have particular problems with boundary identification when the pause cue was reduced. In production, LHDP and RHDP employed all three prosodic cues for boundary marking, but struggled to clearly mark prosodic boundaries in 28% of all productions. Both groups showed better performance in reading aloud than in repetition. LHDP relied more on using f0 range and pause duration to prosodically mark structural boundaries, whereas RHDP employed final lengthening more vigorously than LHDP in reading aloud.
Conclusions: We conclude that processing of linguistic prosody is affected in RHDP and LHDP, but not completely impaired. Therefore, prosody can serve as a relevant communicative resource. However, it should also be considered as a target area for assessment and treatment in both groups.
Aphasia, the language disorder following brain damage, is frequently accompanied by deficits of working memory (WM) and executive functions (EFs). Recent studies suggest that WM, together with certain EFs, can play a role in sentence comprehension in individuals with aphasia (IWA), and that WM can be enhanced with intensive practice. Our aim was to investigate whether a combined WM and EF training improves the understanding of spoken sentences in IWA. We used a pre-post-test case-control design. Three individuals with chronic aphasia practised an adaptive training task (a modified n-back task) three to four times a week for a month. Their performance was assessed before and after the training on outcome measures related to WM and spoken sentence comprehension. One participant showed significant improvement on the training task, another showed a tendency for improvement, and both of them improved significantly in spoken sentence comprehension. The third participant did not improve on the training task; however, she showed improvement on one measure of spoken sentence comprehension. Compared to controls, two individuals improved in at least one condition of the WM outcome measures. Thus, our results suggest that a combined WM and EF training can be beneficial for IWA.
Speech and action sequences are continuous streams of information that can be segmented into sub-units. In both domains, this segmentation can be facilitated by perceptual cues contained within the information stream. In speech, prosodic cues (e.g., a pause, pre-boundary lengthening, and pitch rise) mark boundaries between words and phrases, while boundaries between actions of an action sequence can be marked by kinematic cues (e.g., a pause, pre-boundary deceleration). The processing of prosodic boundary cues evokes an event-related potential (ERP) component known as the Closure Positive Shift (CPS), and it is possible that the CPS reflects domain-general cognitive processes involved in segmentation, given that the CPS is also evoked by boundaries between sub-units of non-speech auditory stimuli. This study further probed the domain-generality of the CPS and its underlying processes by investigating electrophysiological correlates of the processing of boundary cues in sequences of spoken verbs (auditory stimuli; Experiment 1; N = 23 adults) and actions (visual stimuli; Experiment 2; N = 23 adults). The EEG data from both experiments revealed a CPS-like, broadly distributed positivity during the 250 ms prior to the onset of the post-boundary word or action, indicating similar electrophysiological correlates of boundary processing across domains and suggesting that the cognitive processes underlying speech and action segmentation might also be shared.
Neuropsychological lesion studies evidence the necessity to differentiate between various forms of tool-related actions such as real tool use, tool use demonstration with tool in hand and without physical target object, and pantomime without tool in hand. However, thus far, neuroimaging studies have primarily focused only on investigating tool use pantomimes. The present fMRI study investigates pantomime without tool in hand as compared to tool use demonstration with tool in hand in order to explore patterns of cerebral signal modulation associated with acting with imaginary tools in hand.
Fifteen participants performed with either hand (i) tool use pantomime with an imaginary tool in hand in response to visual tool presentation and (ii) tool use demonstration with tool in hand in response to visual-tactile tool presentation. In both conditions, no physical target object was present. The conjunction analysis of the right and left hands executions of tool use pantomime relative to tool use demonstration yielded significant activity in the left middle and superior temporal lobe. In contrast, demonstration relative to pantomime revealed large bihemispherically distributed homologous areas of activity.
Thus far, fMRI studies have demonstrated the relevance of the left middle and superior temporal gyri in viewing, naming, and matching tools and related actions and contexts. Since in our study all these factors were equally (ir)relevant in both the tool use pantomime and the tool use demonstration conditions, the present findings enhance our knowledge about the function of these brain regions in tool-related cognitive processes. The two contrasted conditions differ only in that the pantomime condition requires the individual to act with an imaginary tool in hand. Therefore, we suggest that the left middle and superior temporal gyri are specifically involved in integrating the projected mental image of a tool into the execution of a tool-specific movement concept.
Individuals scoring high in fluid intelligence tasks generally perform very efficiently in problem-solving and analogical reasoning tasks, presumably because they are able to select task-relevant information very quickly and focus on a limited set of task-relevant cognitive operations. Moreover, individuals with high fluid intelligence produce more representational hand and arm gestures when describing a geometric analogy task than individuals with average fluid intelligence. To our knowledge, no study has yet addressed the relationship between intelligence, gesture production, and brain structure; that was the purpose of our study. To characterize this relation, we assessed the frequency of representational gestures and cortical thickness values in a group of adolescents differing in fluid intelligence. Individuals scoring high in fluid intelligence showed higher accuracy in the geometric analogy task, produced more representational gestures (in particular, more movement gestures) when explaining how they solved the task, and showed larger cortical thickness values in some left-hemisphere regions (namely the pars opercularis, superior frontal, and temporal cortex) than individuals with average fluid intelligence. Moreover, the left pars opercularis (a part of Broca's area) and the left transverse temporal cortex showed larger cortical thickness values in participants who produced representational, and in particular movement, gestures compared to those who did not. Our results thus indicate that the cortical thickness of these brain regions is related to both high fluid intelligence and the production of gestures. Results are discussed within the gestures-as-simulated-action framework, which states that gestures result from simulated perception and simulated action underlying embodied language and mental imagery.
Comprehension of transitive sentences relies on different kinds of information, like word order, case marking, and animacy contrasts between arguments. When no formal cues like case marking or number congruency are available, a contrast in animacy helps the parser to decide which argument is the grammatical subject and which the object. Processing costs are enhanced when neither formal cues nor animacy contrasts are available in a transitive sentence. We present an ERP study on the comprehension of grammatical transitive German sentences, manipulating animacy contrasts between subjects and objects as well as the verbal case marking pattern. Our study shows strong object animacy effects even in the absence of violations, and in addition suggests that this effect of object animacy is modulated by the verbal case marking pattern.
Various behavioural studies show that the semantic typicality (TYP) and age of acquisition (AOA) of a specific word influence processing time and accuracy during the performance of lexical-semantic tasks. This study examines the influence of TYP and AOA on semantic processing at the behavioural (response times and accuracy data) and electrophysiological levels using an auditory category-member-verification task. Reaction time data reveal independent TYP and AOA effects, while the accuracy data and the event-related potentials predominantly show effects of TYP. The present study thus confirms previous findings and extends evidence found in the visual modality to the auditory modality, manifesting a modality-independent influence on semantic word processing. However, with regard to the influence of AOA, the diverging results raise questions about the origin of AOA effects as well as about the interpretation of offline and online data. Hence, results are discussed against the background of recent theories on N400 correlates in semantic processing. In addition, an argument is made in favour of a complementary use of research techniques.
Not only the apples
(2014)
Focus sensitive particles highlight the relevance of contextual alternatives for the interpretation of a sentence. Two experiments tested whether this leads to better encoding and therefore, ultimately, better recall of focus alternatives. Participants were presented with auditory stimuli that introduced a set of elements ("context sentence") and continued in three different versions: the critical sentences either contained the exclusive particle nur ("only"), the inclusive particle sogar ("even"), or no particle (control condition). After being exposed to blocks of ten trials, participants were asked to recall the elements in the context sentence. The results show that both particles enhanced memory performance for the alternatives to the focused element, relative to the control condition. The results support the assumption that information-structural alternatives are better encoded in memory in the presence of a focus sensitive particle.
Many agrammatic aphasics have a specific syntactic comprehension deficit involving processing syntactic transformations. It has been proposed that this deficit is due to a dysfunction of Broca's area, an area that is thought to be critical for comprehension of complex transformed sentences. The goal of this study was to investigate the role of Broca's area in processing canonical and non-canonical sentences in healthy subjects. The sentences were presented auditorily and were controlled for task difficulty. Subjects were asked to judge the grammaticality of the sentences while their brain activity was monitored using event-related functional magnetic resonance imaging. Processing both kinds of sentences resulted in activation of language-related brain regions. Comparison of non-canonical and canonical sentences showed greater activation in bilateral temporal regions; a greater activation of Broca's area in processing antecedent-gap relations was not found. Moreover, the posterior part of Broca's area was conjointly activated by both sentence conditions. Broca's area is thus involved in general syntactic processing as required by grammaticality judgments and does not seem to have a specific role in processing syntactic transformations.
Background: Individuals with agrammatic aphasia (IWAs) have problems with grammatical decoding of tense inflection. However, these difficulties depend on the time frame that the tense refers to. Verb morphology with reference to the past is more difficult than with reference to the non-past, because a link needs to be made to the past event in discourse, as captured in the PAst Discourse Linking Hypothesis (PADILIH; Bastiaanse, R., Bamyaci, E., Hsu, C., Lee, J., Yarbay Duman, T., Thompson, C. K., 2011. Time reference in agrammatic aphasia: A cross-linguistic study. J. Neurolinguist. 24, 652-673). With respect to reference to the (non-discourse-linked) future, data so far indicate that IWAs experience fewer difficulties than with past time reference (Bastiaanse et al., 2011), supporting the assumptions of the PADILIH. Previous online studies of time reference in aphasia used methods such as reaction times analysis (e.g., Faroqi-Shah, Y., Dickey, M. W., 2009. On-line processing of tense and temporality in agrammatic aphasia. Brain Lang. 108, 97-111). So far, no such study used eye-tracking, even though this technique can bring additional insights (Burchert, F., Hanne, S., Vasishth, S., 2013. Sentence comprehension disorders in aphasia: the concept of chance performance revisited. Aphasiology 27, 112-125, doi:10.1080/02687038.2012.730603).
Aims: This study investigated (1) whether processing of future and past time reference inflection differs between non-brain-damaged individuals (NBDs) and IWAs, and (2) underlying mechanisms of time reference comprehension failure by IWAs.
Results and discussion: NBDs scored at ceiling and significantly higher than the IWAs. IWAs had below-ceiling performance on the future condition, and both participant groups were faster to respond to the past than to the future condition. These differences are attributed to a pre-existing preference to look at a past picture, which has to be overcome. Eye movement patterns suggest that both groups interpret future time reference similarly, while IWAs show a delay relative to NBDs in interpreting past time reference inflection. The eye tracking results support the PADILIH, because processing reference to the past in discourse syntax requires additional resources and is thus problematic and delayed for people with aphasia. (C) 2014 Elsevier Ltd. All rights reserved.
Human infants can segment action sequences into their constituent actions within the first year of life. However, work to date has almost exclusively examined the role of infants' conceptual knowledge of actions and their outcomes in driving this segmentation. The present study examined electrophysiological correlates of infants' processing of lower-level perceptual cues that signal a boundary between two actions of an action sequence. Specifically, we tested the effect of kinematic boundary cues (pre-boundary lengthening and pause) on 12-month-old infants' (N = 27) processing of a sequence of three arbitrary actions, performed by an animated figure. Using the Event-Related Potential (ERP) approach, evidence of a positivity following the onset of the boundary cues was found, in line with previous work that has found an ERP positivity (Closure Positive Shift, CPS) related to boundary processing in auditory stimuli and action sequences in adults. Moreover, an ERP negativity (Negative Central, Nc) indicated that infants' encoding of the post-boundary action was modulated by the presence or absence of prior boundary cues. We therefore conclude that 12-month-old infants are sensitive to lower-level perceptual kinematic boundary cues, which can support segmentation of a continuous stream of movement into individual action units.
Judging the animacy of words
(2017)
The age at which members of a semantic category are learned (age of acquisition), the typicality they demonstrate within their corresponding category, and the semantic domain to which they belong (living, non-living) are known to influence the speed and accuracy of lexical/semantic processing. So far, only a few studies have looked at the origin of age of acquisition and its interdependence with typicality and semantic domain within the same experimental design. Twenty adult participants performed an animacy decision task in which nouns were classified according to their semantic domain as being living or non-living. Response times were influenced by the independent main effects of each parameter: typicality, age of acquisition, semantic domain, and frequency. However, there were no interactions. The results are discussed with respect to recent models concerning the origin of age of acquisition effects.
Judging the animacy of words
(2016)
People differ with regard to how they perceive, experience, and express negative affect. While trait negative affect reflects a stable, sustained personality trait, state negative affect represents a stimulus-limited and temporally acute emotion. So far, little is known about the neural systems mediating the relationship between negative affect and acute emotion processing. To address this issue, we investigated in a healthy female sample how individual differences in state negative affect are reflected in changes in blood oxygen level-dependent responses during passive viewing of emotional stimuli. To assess autonomic arousal we simultaneously recorded changes in skin conductance level. At the psychophysiological level we found increased skin conductance level in response to aversive relative to neutral pictures. However, there was no association of state negative affect with skin conductance level. At the neural level we found that high state negative affect was associated with increased left insular activity during passive viewing of aversive stimuli. The insula has been implicated in interoceptive processes and in the integration of sensory, visceral, and affective information, thus contributing to subjective emotional experience. Greater recruitment of the insula in response to aversive relative to neutral stimuli in subjects with high state negative affect may represent increased processing of salient aversive stimuli.
Moral decision-making is central to everyday social life because the evaluation of the actions of another agent or our own actions made with respect to the norms and values guides our behavior in a community. There is previous evidence that the presence of bodily harm-even if irrelevant for a decision-may affect the decision-making process. While recent neuroimaging studies found a common neural substrate of moral decision-making, the role of bodily harm has not been systematically studied so far. Here we used event-related functional magnetic resonance imaging (fMRI) to investigate how behavioral and neural correlates of semantic and moral decision-making processes are modulated by the presence of direct bodily harm or violence in the stimuli. Twelve participants made moral and semantic decisions about sentences describing actions of agents that either contained bodily harm or not and that could easily be judged as being good or bad or correct/incorrect, respectively. During moral and semantic decision-making, the presence of bodily harm resulted in faster response times (RT) and weaker activity in the temporal poles relative to trials devoid of bodily harm/violence, indicating a processing advantage and reduced processing depth for violence-related linguistic stimuli. Notably, there was no increase in activity in the amygdala and the posterior cingulate cortex (PCC) in response to trials containing bodily harm. These findings might be a correlate of limited generation of the semantic and emotional context in the anterior temporal poles during the evaluation of actions of another agent related to violence that is made with respect to the norms and values guiding our behavior in a community. (C) 2004 Elsevier Inc. All rights reserved.
Infants as young as six months are sensitive to prosodic phrase boundaries marked by three acoustic cues: pitch change, final lengthening, and pause. Behavioral studies suggest that a language-specific weighting of these cues develops during the first year of life; recent work on German revealed that eight-month-olds, unlike six-month-olds, are capable of perceiving a prosodic boundary on the basis of pitch change and final lengthening only. The present study uses Event-Related Potentials (ERPs) to investigate the neuro-cognitive development of prosodic cue perception in German-learning infants. In adults’ ERPs, prosodic boundary perception is clearly reflected by the so-called Closure Positive Shift (CPS). To date, there is mixed evidence on whether an infant CPS exists that signals early prosodic cue perception, or whether the CPS emerges only later—the latter implying that infantile brain responses to prosodic boundaries reflect acoustic, low-level pause detection.
We presented six- and eight-month-olds with stimuli containing either no boundary cues, only a pitch cue, or a combination of both pitch change and final lengthening. For both age groups, responses to the former two conditions did not differ, while brain responses to prosodic boundaries cued by pitch change and final lengthening showed a positivity that we interpret as a CPS-like infant ERP component. This hints at an early sensitivity to prosodic boundaries that cannot exclusively be based on pause detection. Instead, infants’ brain responses indicate an early ability to exploit subtle, relational prosodic cues in speech perception—presumably even earlier than could be concluded from previous behavioral results.
Implicit processing of phonotactic cues: evidence from electrophysiological and vascular responses
(2011)
Spoken word recognition is achieved via competition between activated lexical candidates that match the incoming speech input. The competition is modulated by prelexical cues that are important for segmenting the auditory speech stream into linguistic units. One such prelexical cue that listeners rely on in spoken word recognition is phonotactics. Phonotactics defines possible combinations of phonemes within syllables or words in a given language. The present study aimed at investigating both temporal and topographical aspects of the neuronal correlates of phonotactic processing by simultaneously applying ERPs and functional near-infrared spectroscopy (fNIRS). Pseudowords, either phonotactically legal or illegal with respect to the participants' native language, were acoustically presented to passively listening adult native German speakers. ERPs showed a larger N400 effect for phonotactically legal compared to illegal pseudowords, suggesting stronger lexical activation mechanisms in phonotactically legal material. fNIRS revealed a left hemispheric network including fronto-temporal regions with greater response to phonotactically legal pseudowords than to illegal pseudowords. This confirms earlier hypotheses on a left hemispheric dominance of phonotactic processing most likely due to the fact that phonotactics is related to phonological processing and represents a segmental feature of language comprehension. These segmental linguistic properties of a stimulus are predominantly processed in the left hemisphere. Thus, our study provides first insights into temporal and topographical characteristics of phonotactic processing mechanisms in a passive listening task. Differential brain responses between known and unknown phonotactic rules thus supply evidence for an implicit use of phonotactic cues to guide lexical activation mechanisms.
Older adults often experience hearing difficulties in multitalker situations. Attentional control of auditory perception is crucial in situations where a plethora of auditory inputs compete for further processing. We combined an intensity-modulated dichotic listening paradigm with attentional manipulations to study adult age differences in the interplay between perceptual saliency and attentional control of auditory processing. When confronted with two competing sources of verbal auditory input, older adults modulated their attention less flexibly and were more driven by perceptual saliency than younger adults. These findings suggest that aging severely impairs the attentional regulation of auditory perception.
This study examines the role of pitch and final lengthening in German intonation phrase boundary (IPB) perception. Since a prosody-related event-related potential (ERP) component termed Closure Positive Shift reflects the processing of major prosodic boundaries, we combined ERP and behavioural measures (i.e. a prosodic judgement task) to systematically test the impact of sole and combined cue occurrences on IPB perception. In two experiments we investigated whether adult listeners perceived an IPB in acoustically manipulated speech material that contained none, one, or two of the prosodic boundary cues. Both ERP and behavioural results suggest that pitch and final lengthening cues have to occur in combination to trigger IPB perception. Hence, the combination of behavioural and electrophysiological measures provides a comprehensive insight into prosodic boundary cue perception in German and leads to an argument in favour of interrelated cues from the frequency (i.e. pitch change) and the time (i.e. final lengthening) domain.
Previous studies have revealed that infants aged 6-10 months are able to use the acoustic correlates of major prosodic boundaries, that is, pitch change, preboundary lengthening, and pause, for the segmentation of the continuous speech signal. Moreover, investigations with American-English- and Dutch-learning infants suggest that processing prosodic boundary markings involves a weighting of these cues. This weighting seems to develop with increasing exposure to the native language and to underlie crosslinguistic variation. In the following, we report the results of four experiments using the headturn preference procedure to explore the perception of prosodic boundary cues in German infants. We presented 8-month-old infants with a sequence of names in two different prosodic groupings, with or without boundary markers. Infants discriminated both sequences when the boundary was marked by all three cues (Experiment 1) and when it was marked by a pitch change and preboundary lengthening in combination (Experiment 2). The presence of a pitch change (Experiment 3) or preboundary lengthening (Experiment 4) as single cues did not lead to a successful discrimination. Our results indicate that pause is not a necessary cue for German infants. Pitch change and preboundary lengthening in combination, but not as single cues, are sufficient. Hence, by 8 months infants only rely on a convergence of boundary markers. Comparisons with adults' performance on the same stimulus materials suggest that the pattern observed with the 8-month-olds is already consistent with that of adults. We discuss our findings with respect to crosslinguistic variation and the development of a language-specific prosodic cue weighting.
The present study introduces the first substantial German database with norms for semantic typicality, age of acquisition, and concept familiarity for 824 exemplars of 11 semantic categories, comprising four natural and five man-made categories as well as two further categories. Each category exemplar in the database was collected empirically in an exemplar generation study. For each category exemplar, norms for semantic typicality, estimated age of acquisition, and concept familiarity were gathered in three different rating studies. Reliability data and additional analyses on effects of semantic category and intercorrelations between age of acquisition, semantic typicality, concept familiarity, word length, and word frequency are provided. Overall, the data show high inter- and intrastudy reliabilities, providing a new resource tool for designing experiments with German word materials. The full database is available in the supplementary material of this article and also at www.psychonomic.org/archive.
Fluid intelligence is the ability to think flexibly and to understand abstract relations. People with high fluid intelligence (hi-fluIQ) perform better in analogical reasoning tasks than people with average fluid intelligence (ave-fluIQ). Although previous neuroimaging studies reported involvement of parietal and frontal brain regions in geometric analogical reasoning (which is a prototypical task for fluid intelligence), neuroimaging findings on geometric analogical reasoning in hi-fluIQ are sparse. Furthermore, evidence on the relation between brain activation and intelligence while solving cognitive tasks is contradictory. The present study was designed to elucidate the cerebral correlates of geometric analogical reasoning in a sample of hi-fluIQ and ave-fluIQ high school students. We employed a geometric analogical reasoning task with graded levels of task difficulty and confirmed the involvement of the parieto-frontal network in solving this task. In addition to characterizing the brain regions involved in geometric analogical reasoning in hi-fluIQ and ave-fluIQ, we found that blood oxygenation level dependency (BOLD) signal changes were greater for hi-fluIQ than for ave-fluIQ in parietal brain regions. However, ave-fluIQ showed greater BOLD signal changes in the anterior cingulate cortex and medial frontal gyrus than hi-fluIQ. Thus, we showed that a similar network of brain regions is involved in geometric analogical reasoning in both groups. Interestingly, the relation between brain activation and intelligence is not mono-directional; rather, it is specific for each brain region. The negative brain activation-intelligence relationship in frontal brain regions in hi-fluIQ goes along with a better behavioral performance and reflects a lower demand for executive monitoring compared to ave-fluIQ individuals. In conclusion, our data indicate that flexibly modulating the extent of regional cerebral activity is characteristic for fluid intelligence.
Express yourself!
(2022)
In addition to sensory decline, age-related losses in auditory perception also reflect impairments in attentional modulation of perceptual saliency. Using an attention and intensity-modulated dichotic listening paradigm, we investigated electrophysiological correlates of processing conflicts between attentional focus and perceptual saliency in 25 younger and 26 older adults. Participants were instructed to attend to the right or left ear, and perceptual saliency was manipulated by varying the intensities of both ears. Attentional control demand was higher in conditions when attentional focus and perceptual saliency favored opposing ears than in conditions without such conflicts. Relative to younger adults, older adults modulated their attention less flexibly and were more influenced by perceptual saliency. Our results show, for the first time, that in younger adults a late negativity in the event-related potential (ERP) at fronto-central and parietal electrodes was sensitive to perceptual-attentional conflicts during auditory processing (N450 modulation effect). Crucially, the magnitude of the N450 modulation effect correlated positively with task performance. In line with lower attentional flexibility, the ERP waveforms of older adults showed absence of the late negativity and the modulation effect. This suggests that aging compromises the activation of the frontoparietal attentional network when processing the competing and conflicting auditory information.
The flexible learning of stimulus-reward associations when required by situational context is essential for everyday behavior. Older adults experience a progressive decline in several cognitive functions and show deficiencies in neuropsychological tasks requiring flexible adaptation to external feedback, which could be related to impairments in reward association learning. To study the effect of aging on stimulus-reward association learning, 20 young and 20 older adults performed a probabilistic object reversal task (pORT) along with a battery of tests assessing executive functions and general intellectual abilities. The pORT requires learning and reversing associations between actions and their outcomes. Older participants collected fewer points, needed more trials to reach the learning criterion, and completed fewer blocks successfully compared to young adults. This difference remained statistically significant after correcting for the age effect of other tests assessing executive functions. This suggests that there is an age-related difference in reward association learning as measured using the pORT, which is not closely related to other executive functions with respect to the age effect. In human aging, structural alterations of reward detecting structures and functional changes of the dopaminergic as well as the serotonergic system might contribute to the deficit in reward association learning observed in this study. (C) 2004 Elsevier Ltd. All rights reserved.
Die Konkurrenz schläft nie! [The competition never sleeps!]
(2020)
Response inhibition is an attention function which develops relatively early during childhood. Behavioral data suggest that by the age of 3, children master the basic task requirements for the assessment of response inhibition but performance improves substantially until the age of 7. The neuronal mechanisms underlying these developmental processes, however, are not well understood. In this study, we examined brain activation patterns and behavioral performance of children aged between 4 and 6 years compared to adults by applying a go/no-go paradigm during near-infrared spectroscopy (NIRS) brain imaging. We furthermore applied task-independent functional connectivity measures to the imaging data to identify maturation of intrinsic neural functional networks. We found a significant group × condition interaction in terms of inhibition-related reduced right fronto-parietal activation in children compared to adults. In contrast, motor-related activation did not differ between age groups. Functional connectivity analysis revealed that in the children's group, short-range coherence within frontal areas was stronger, and long-range coherence between frontal and parietal areas was weaker, compared to adults. Our findings show that in children aged from 4 to 6 years fronto-parietal brain maturation plays a crucial part in the cognitive development of response inhibition.
Multitalker situations confront listeners with a plethora of competing auditory inputs, and hence require selective attention to relevant information, especially when the perceptual saliency of distracting inputs is high. This study augmented the classical forced-attention dichotic listening paradigm by adding an interaural intensity manipulation to investigate developmental differences in the interplay between perceptual saliency and attentional control during auditory processing between early and middle childhood. We found that older children were able to flexibly focus on instructed auditory inputs from either the right or the left ear, overcoming the effects of perceptual saliency. In contrast, younger children implemented their attentional focus less efficiently. Direct comparisons of the present data with data from a recently published study of younger and older adults from our group suggest that younger children and older adults show similar levels of performance. Critically, follow-up comparisons revealed that younger children's performance restrictions reflect difficulties in attentional control only, whereas older adults' performance deficits also reflect an exaggerated reliance on perceptual saliency. We conclude that auditory attentional control improves considerably from middle to late childhood and that auditory attention deficits in healthy aging cannot be reduced to a simple reversal of child developmental improvements.
One of the most important social cognitive skills in humans is the ability to “put oneself in someone else’s shoes,” that is, to take another person’s perspective. In socially situated communication, perspective taking enables the listener to arrive at a meaningful interpretation of what is said (sentence meaning) and what is meant (speaker’s meaning) by the speaker. To successfully decode the speaker’s meaning, the listener has to take into account which information he/she and the speaker share in their common ground (CG). We here further investigated competing accounts about when and how CG information affects language comprehension by means of reaction time (RT) measures, accuracy data, event-related potentials (ERPs), and eye-tracking. Early integration accounts would predict that CG information is considered immediately and would hence not expect to find costs of CG integration. Late integration accounts would predict a rather late and effortful integration of CG information during the parsing process that might be reflected in integration or updating costs. Other accounts predict the simultaneous integration of privileged ground (PG) and CG perspectives. We used a computerized version of the referential communication game with object triplets of different sizes presented visually in CG or PG. In critical trials (i.e., conflict trials), CG information had to be integrated while privileged information had to be suppressed. Listeners mastered the integration of CG (response accuracy 99.8%). Yet, slower RTs, and enhanced late positivities in the ERPs showed that CG integration had its costs. Moreover, eye-tracking data indicated an early anticipation of referents in CG but an inability to suppress looks to the privileged competitor, resulting in later and longer looks to targets in those trials, in which CG information had to be considered. 
Our data therefore support accounts that foresee an early anticipation of referents to be in CG but a rather late and effortful integration if conflicting information has to be processed. We show that both perspectives, PG and CG, contribute to socially situated language processing and discuss the data with reference to theoretical accounts and recent findings on the use of CG information for reference resolution.
Breaking Continuous Flash Suppression (bCFS) has been adopted as an appealing means to study human visual awareness, but the literature is beclouded by inconsistent and contradictory results. Although previous reviews have focused chiefly on design pitfalls and instances of false reasoning, we show in this study that the choice of analysis pathway can have severe effects on the statistical output when applied to bCFS data. Using a representative dataset designed to address a specific controversy in the realm of language processing under bCFS, namely whether psycholinguistic variables affect access to awareness, we present a range of analysis methods based on real instances in the published literature, and indicate how each approach affects the perceived outcome. We provide a summary of published bCFS studies indicating the use of data transformation and trimming, and highlight that more compelling analysis methods are sparsely used in this field. We discuss potential interpretations based on both classical and more complex analyses, to highlight how these differ. We conclude that an adherence to openly available data and analysis pathways could provide a great benefit to this field, so that conclusions can be tested against multiple analyses as standard practices are updated.
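The abstract's central point, that the choice of analysis pathway can change the statistical outcome on the same data, can be sketched with a toy simulation (entirely hypothetical data and pathway choices; this is not the authors' dataset or analysis code):

```python
# Illustrative sketch, not the study's analysis: how common RT-style analysis
# pathways (raw scores, log transform, outlier trimming) can yield different
# test statistics for the same simulated suppression-time data.
import math
import random
import statistics

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

def trim(xs, z=2.0):
    """Drop values more than z SDs above the mean (a common trimming rule)."""
    m, sd = statistics.mean(xs), statistics.stdev(xs)
    return [x for x in xs if x <= m + z * sd]

random.seed(1)
# Lognormal "breakthrough times" for two hypothetical stimulus conditions.
cond_a = [random.lognormvariate(0.00, 0.5) for _ in range(40)]
cond_b = [random.lognormvariate(0.15, 0.5) for _ in range(40)]

t_raw = welch_t(cond_a, cond_b)
t_log = welch_t([math.log(x) for x in cond_a],
                [math.log(x) for x in cond_b])
t_trim = welch_t(trim(cond_a), trim(cond_b))

print(f"raw:             t = {t_raw:.2f}")
print(f"log-transformed: t = {t_log:.2f}")
print(f"trimmed:         t = {t_trim:.2f}")
```

Because breakthrough-time distributions are right-skewed, a log transform and an SD-based trim each reshape the tails differently, so the resulting test statistics (and potentially the significance decisions) diverge, which is the kind of pathway dependence the abstract warns about.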
Infants show impressive speech decoding abilities and detect acoustic regularities that highlight the syntactic relations of a language, often coded via non-adjacent dependencies (NADs, e.g., is singing). It has been claimed that infants learn NADs implicitly and associatively through passive listening and that there is a shift from effortless associative learning to a more controlled learning of NADs after the age of 2 years, potentially driven by the maturation of the prefrontal cortex. To investigate whether older children are able to learn NADs, Lammertink et al. (2019) recently developed a word-monitoring serial reaction time (SRT) task and could show that 6–11-year-old children learned the NADs, as their reaction times (RTs) increased when they were presented with violated NADs. In the current study we adapted their experimental paradigm and tested NAD learning in a younger group of 52 children between the age of 4–8 years in a remote, web-based, game-like setting (whack-a-mole). Children were exposed to Italian phrases containing NADs and had to monitor the occurrence of a target syllable, which was the second element of the NAD. After exposure, children did a “Stem Completion” task in which they were presented with the first element of the NAD and had to choose the second element of the NAD to complete the stimuli. Our findings show that, despite large variability in the data, children aged 4–8 years are sensitive to NADs; they show the expected differences in RTs in the SRT task and could transfer the NAD-rule in the Stem Completion task. We discuss these results with respect to the development of NAD dependency learning in childhood and the practical impact and limitations of collecting these data in a web-based setting.
There is increasing interest in understanding the neural systems that mediate analogical thinking, which is essential for learning and fluid intelligence. The aim of the present study was to shed light on the cerebral correlates of geometric analogical processing and on training-induced changes at the behavioral and brain level. In healthy participants a bilateral fronto-parietal network was engaged in processing geometric analogies and showed greater blood oxygenation level-dependent (BOLD) signals as resource demands increased. This network, as well as fusiform and subcortical brain regions, additionally showed training-induced decreases in the BOLD signal over time. The general finding that brain regions were modulated by the amount of resources demanded by the task, and/or by the reduction of allocated resources due to short-term training, reflects increased efficiency, in terms of more focal and more specialized brain activation, in processing the geometric analogies more economically. Our data indicate a rapid adaptation of the cognitive system which is efficiently modulated by short-term training, based on a positive correlation of resource demands and brain activation.
Prosodic information is crucial for spoken language comprehension and especially for syntactic parsing, because prosodic cues guide the hearer's syntactic analysis. The time course and mechanisms of this interplay of prosody and syntax are not yet well understood. In particular, there is an ongoing debate whether local prosodic cues are taken into account automatically or whether they are processed in relation to the global prosodic context in which they appear. The present study explores whether the perception of a prosodic boundary is affected by its position within an utterance. In an event-related potential (ERP) study we tested whether the brain response evoked by the prosodic boundary differs when the boundary occurs early in a list of three names connected by conjunctions (i.e., after the first name) as compared to later in the utterance (i.e., after the second name). A closure positive shift (CPS), marking the processing of a prosodic phrase boundary, was elicited for stimuli with a late boundary, but not for stimuli with an early boundary. This result is further evidence for an immediate integration of prosodic information into the parsing of an utterance. In addition, it shows that the processing of prosodic boundary cues depends on the previously processed information from the preceding prosodic context.
The attentional bias to negative information enables humans to quickly identify and to respond appropriately to potentially threatening situations. Because of its adaptive function, the enhanced sensitivity to negative information is expected to represent a universal trait, shared by all humans regardless of their cultural background. However, existing research focuses almost exclusively on humans from Western industrialized societies, who are not representative for the human species. Therefore, we compare humans from two distinct cultural contexts: adolescents and children from Germany, a Western industrialized society, and from the ǂAkhoe Haiǁom, semi-nomadic hunter-gatherers in Namibia. We predicted that both groups show an attentional bias toward negative facial expressions as compared to neutral or positive faces. We used eye-tracking to measure their fixation duration on facial expressions depicting different emotions, including negative (fear, anger), positive (happy), and neutral faces. Both the Germans and the ǂAkhoe Haiǁom gazed longer at fearful faces, but shorter at angry faces, challenging the notion of a general bias toward negative emotions. For happy faces, fixation durations varied between the two groups, suggesting more flexibility in the response to positive emotions. Our findings emphasize the need for placing research on emotion perception into an evolutionary, cross-cultural comparative framework that considers the adaptive significance of specific emotions, rather than differentiating between positive and negative information, and enables systematic comparisons across participants from diverse cultural backgrounds.
Normal aging is associated with a decline in different cognitive domains and local structural atrophy as well as decreases in dopamine concentration and receptor density. To date, it is largely unknown how these reductions in dopaminergic neurotransmission affect human brain regions responsible for reward-based decision making in older adults. Using a learning criterion in a probabilistic object reversal task, we found a learning stage by age interaction in the dorsolateral prefrontal cortex (dlPFC) during decision making. While young adults recruited the dlPFC in an early stage of learning reward associations, older adults recruited the dlPFC when reward associations had already been learned. Furthermore, we found a reduced change in ventral striatal BOLD signal in older as compared to younger adults in response to high probability rewards. Our data are in line with behavioral evidence that older adults show altered stimulus-reward learning and support the view of an altered fronto-striatal interaction during reward-based decision making in old age, which contributes to prolonged learning of reward associations.
Speech perception requires rapid extraction of the linguistic content from the acoustic signal. The ability to efficiently process rapid changes in auditory information is important for decoding speech and thereby crucial during language acquisition. Investigating functional networks of speech perception in infancy might elucidate neuronal ensembles supporting perceptual abilities that gate language acquisition. Interhemispheric specializations for language have been demonstrated in infants. How these asymmetries are shaped by basic temporal acoustic properties is under debate. We recently provided evidence that newborns process non-linguistic sounds sharing temporal features with language in a differential and lateralized fashion. The present study used the same material while measuring brain responses of 6- and 3-month-old infants using simultaneous recordings of electroencephalography (EEG) and near-infrared spectroscopy (NIRS). NIRS reveals that the lateralization observed in newborns remains constant over the first months of life. While fast acoustic modulations elicit bilateral neuronal activations, slow modulations lead to right-lateralized responses. Additionally, auditory-evoked potentials and oscillatory EEG responses show differential responses for fast and slow modulations, indicating a sensitivity for temporal acoustic variations. Oscillatory responses reveal an effect of development, that is, 6- but not 3-month-old infants show stronger theta-band desynchronization for slowly modulated sounds. Whether this developmental effect is due to increasingly fine-grained perception of spectrotemporal sounds in general remains speculative. Our findings support the notion that a more general specialization for acoustic properties can be considered the basis for lateralization of speech perception. The results show that concurrent assessment of vascular-based imaging and electrophysiological responses has great potential in research on language acquisition.
Children born preterm are at higher risk of developing language deficits. Auditory speech discrimination deficits may be early signs of language developmental problems. The present study used functional near-infrared spectroscopy to investigate neural speech discrimination in 15 preterm infants at term-equivalent age compared to 15 full-term neonates. The full-term group revealed a significantly greater hemodynamic response to forward compared to backward speech within the left hemisphere, extending from superior temporal to inferior parietal and middle and inferior frontal areas. In contrast, the preterm group did not show differences in their hemodynamic responses during forward versus backward speech; thus, they did not discriminate speech from nonspeech. Groups differed significantly in their responses to forward speech, whereas they did not differ in their responses to backward speech. The significant differences between groups point to an altered development of the functional network underlying language acquisition in preterm infants as early as at term-equivalent age.
A close call (2018)
The present study investigated how lexical selection is influenced by the number of semantically related representations (semantic neighbourhood density) and their similarity (semantic distance) to the target in a speeded picture-naming task. Semantic neighbourhood density and similarity as continuous variables were used to assess lexical selection, for which competitive and noncompetitive mechanisms have been proposed. Previous studies found mixed effects of semantic neighbourhood variables, leaving this issue unresolved. Here, we demonstrate interference from semantic neighbourhood similarity: naming responses were less accurate, and semantic errors and omissions were more likely than accurate responses, for words with semantically more similar (closer) neighbours. No main effect of semantic neighbourhood density and no interaction between semantic neighbourhood density and similarity were found. We assessed further whether semantic neighbourhood density can affect naming performance if semantic neighbours exceed a certain degree of semantic similarity. Semantic similarity between the target and each neighbour was used to split semantic neighbourhood density into two different density variables: the number of semantically close neighbours versus distant neighbours. The results showed a significant effect of close, but not of distant, semantic neighbourhood density: naming pictures of targets with more close semantic neighbours led to longer naming latencies, less accurate responses, and a higher likelihood of producing semantic errors and omissions over accurate responses. The results show that word-inherent semantic attributes such as semantic neighbourhood similarity and the number of coactivated close semantic neighbours modulate lexical selection, supporting theories of competitive lexical processing.