Express yourself!
(2022)
The 15th Herbsttreffen Patholinguistik, with the focus topic »Interdisziplinär (be-)handeln – Multiprofessionelle Zusammenarbeit in der Sprachtherapie« (interdisciplinary treatment – multiprofessional cooperation in speech/language therapy), took place on 20 November 2021 as an online event. The Herbsttreffen has been organised annually since 2007 by the Verband für Patholinguistik e.V. (vpl) and, since 2021, by the Deutscher Bundesverband für akademische Sprachtherapie und Logopädie (dbs), in cooperation with the University of Potsdam. This conference volume contains the talks on the focus topic and information from the panel discussion, as well as the poster presentations on further topics from speech and language therapy research and practice.
Infants show impressive speech decoding abilities and detect acoustic regularities that highlight the syntactic relations of a language, often coded via non-adjacent dependencies (NADs, e.g., is singing). It has been claimed that infants learn NADs implicitly and associatively through passive listening and that there is a shift from effortless associative learning to a more controlled learning of NADs after the age of 2 years, potentially driven by the maturation of the prefrontal cortex. To investigate whether older children are able to learn NADs, Lammertink et al. (2019) recently developed a word-monitoring serial reaction time (SRT) task and showed that 6–11-year-old children learned the NADs, as their reaction times (RTs) increased when they were presented with violated NADs. In the current study we adapted their experimental paradigm and tested NAD learning in a younger group of 52 children aged 4–8 years in a remote, web-based, game-like setting (whack-a-mole). Children were exposed to Italian phrases containing NADs and had to monitor the occurrence of a target syllable, which was the second element of the NAD. After exposure, children did a “Stem Completion” task in which they were presented with the first element of the NAD and had to choose the second element of the NAD to complete the stimuli. Our findings show that, despite large variability in the data, children aged 4–8 years are sensitive to NADs: they showed the expected differences in RTs in the SRT task and could transfer the NAD rule in the Stem Completion task. We discuss these results with respect to the development of NAD learning in childhood and the practical impact and limitations of collecting these data in a web-based setting.
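The key dependent measure in such a word-monitoring SRT task is the RT difference between trials with violated and grammatical NADs. As an illustrative sketch only (the trial structure, field names, and values below are hypothetical, not the study's actual data or analysis code), the violation effect can be computed like this:

```python
# Illustrative sketch: computing a violation effect from word-monitoring
# SRT data. Trial records and RT values are invented for demonstration.
from statistics import mean

def violation_effect(trials):
    """Mean RT to targets in violated NAD phrases minus grammatical ones.

    A positive value mirrors the slowdown reported for violated NADs.
    """
    grammatical = [t["rt_ms"] for t in trials if t["condition"] == "grammatical"]
    violated = [t["rt_ms"] for t in trials if t["condition"] == "violated"]
    return mean(violated) - mean(grammatical)

# toy data: two trials per condition
trials = [
    {"condition": "grammatical", "rt_ms": 480},
    {"condition": "grammatical", "rt_ms": 510},
    {"condition": "violated", "rt_ms": 560},
    {"condition": "violated", "rt_ms": 590},
]
print(violation_effect(trials))  # 80.0
```

In practice such effects are estimated per child with mixed-effects models rather than raw means, particularly given the large variability the abstract mentions.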
Prosodic boundaries can be used to disambiguate the syntactic structure of coordinated name sequences (coordinates). To answer the question of whether disambiguating prosody is produced in a situationally dependent or independent manner, and to contribute to our understanding of the nature of the prosody-syntax link, we systematically explored variability in the prosody of boundary productions of coordinates evoked by different contextual settings in a referential communication task. Our analysis focused on prosodic boundaries produced to distinguish sequences with different syntactic structures (i.e., with or without internal grouping of the constituents). In German, these prosodic boundaries are indicated by three major prosodic cues: f0 range, final lengthening, and pause. In line with the Proximity/Anti-Proximity principle of the syntax-prosody model by Kentner and Féry (2013), speakers clearly use all three cues for constituent grouping and prosodically mark groups both within and at their right boundary, indicating that prosodic phrasing is not a local phenomenon. Intra-individually, we found a rather stable prosodic pattern across contexts. Inter-individually, however, speakers differed from each other with respect to the prosodic cue combinations that they (consistently) used to mark the boundaries. Overall, our data speak in favour of a close link between syntax and prosody and for situational independence of disambiguating prosody.
Human infants can segment action sequences into their constituent actions already during the first year of life. However, work to date has almost exclusively examined the role of infants' conceptual knowledge of actions and their outcomes in driving this segmentation. The present study examined electrophysiological correlates of infants' processing of lower-level perceptual cues that signal a boundary between two actions of an action sequence. Specifically, we tested the effect of kinematic boundary cues (pre-boundary lengthening and pause) on 12-month-old infants' (N = 27) processing of a sequence of three arbitrary actions performed by an animated figure. Using the event-related potential (ERP) approach, we found evidence of a positivity following the onset of the boundary cues, in line with previous work that has found an ERP positivity (the Closure Positive Shift, CPS) related to boundary processing in auditory stimuli and action sequences in adults. Moreover, an ERP negativity (Negative Central, Nc) indicated that infants' encoding of the post-boundary action was modulated by the presence or absence of prior boundary cues. We therefore conclude that 12-month-old infants are sensitive to lower-level perceptual kinematic boundary cues, which can support segmentation of a continuous stream of movement into individual action units.
The 12th Herbsttreffen Patholinguistik, with the focus topic »Weg(e) mit dem Stottern: Therapie und Selbsthilfe für Kinder und Erwachsene« (therapy and self-help for children and adults who stutter), took place in Potsdam on 24 November 2018. The Herbsttreffen has been organised annually since 2007 by the Verband für Patholinguistik e.V. (vpl). This conference volume contains the talks on the focus topic as well as contributions from the poster presentations on further topics from speech and language therapy research and practice.
One of the most important social cognitive skills in humans is the ability to “put oneself in someone else’s shoes,” that is, to take another person’s perspective. In socially situated communication, perspective taking enables the listener to arrive at a meaningful interpretation of what is said (sentence meaning) and what is meant (speaker’s meaning) by the speaker. To successfully decode the speaker’s meaning, the listener has to take into account which information he/she and the speaker share in their common ground (CG). Here we further investigated competing accounts of when and how CG information affects language comprehension by means of reaction time (RT) measures, accuracy data, event-related potentials (ERPs), and eye-tracking. Early integration accounts predict that CG information is considered immediately and would hence expect no costs of CG integration. Late integration accounts predict a rather late and effortful integration of CG information during the parsing process, which might be reflected in integration or updating costs. Other accounts predict the simultaneous integration of the privileged ground (PG) and CG perspectives. We used a computerized version of the referential communication game with object triplets of different sizes presented visually in CG or PG. In critical trials (i.e., conflict trials), CG information had to be integrated while privileged information had to be suppressed. Listeners mastered the integration of CG (response accuracy 99.8%). Yet slower RTs and enhanced late positivities in the ERPs showed that CG integration had its costs. Moreover, eye-tracking data indicated an early anticipation of referents in CG but an inability to suppress looks to the privileged competitor, resulting in later and longer looks to targets in those trials in which CG information had to be considered.
Our data therefore support accounts that posit an early anticipation of referents in CG but a rather late and effortful integration when conflicting information has to be processed. We show that both perspectives, PG and CG, contribute to socially situated language processing and discuss the data with reference to theoretical accounts and recent findings on the use of CG information for reference resolution.
Die Konkurrenz schläft nie!
(2020)
The attentional bias to negative information enables humans to quickly identify and respond appropriately to potentially threatening situations. Because of its adaptive function, the enhanced sensitivity to negative information is expected to represent a universal trait, shared by all humans regardless of their cultural background. However, existing research focuses almost exclusively on humans from Western industrialized societies, who are not representative of the human species. We therefore compare humans from two distinct cultural contexts: adolescents and children from Germany, a Western industrialized society, and from the ǂAkhoe Haiǁom, semi-nomadic hunter-gatherers in Namibia. We predicted that both groups would show an attentional bias toward negative facial expressions as compared to neutral or positive faces. We used eye-tracking to measure their fixation duration on facial expressions depicting different emotions, including negative (fearful, angry), positive (happy), and neutral faces. Both the Germans and the ǂAkhoe Haiǁom gazed longer at fearful faces but more briefly at angry faces, challenging the notion of a general bias toward negative emotions. For happy faces, fixation durations varied between the two groups, suggesting more flexibility in the response to positive emotions. Our findings emphasize the need to place research on emotion perception within an evolutionary, cross-cultural comparative framework that considers the adaptive significance of specific emotions, rather than differentiating between positive and negative information, and that enables systematic comparisons across participants from diverse cultural backgrounds.
The 11th Herbsttreffen Patholinguistik, with the focus topic »Gut gestimmt: Diagnostik und Therapie bei Dysphonie« (diagnostics and therapy for dysphonia), took place in Potsdam on 18 November 2017. The Herbsttreffen has been organised annually since 2007 by the Verband für Patholinguistik e.V. (vpl). This conference volume contains the keynote talks on the focus topic as well as contributions to the short-talk session »Spektrum Patholinguistik« and the poster presentations on further topics from speech and language therapy research and practice.
Speech and action sequences are continuous streams of information that can be segmented into sub-units. In both domains, this segmentation can be facilitated by perceptual cues contained within the information stream. In speech, prosodic cues (e.g., a pause, pre-boundary lengthening, and pitch rise) mark boundaries between words and phrases, while boundaries between actions of an action sequence can be marked by kinematic cues (e.g., a pause, pre-boundary deceleration). The processing of prosodic boundary cues evokes an event-related potential (ERP) component known as the Closure Positive Shift (CPS), and it is possible that the CPS reflects domain-general cognitive processes involved in segmentation, given that the CPS is also evoked by boundaries between subunits of non-speech auditory stimuli. This study further probed the domain-generality of the CPS and its underlying processes by investigating electrophysiological correlates of the processing of boundary cues in sequences of spoken verbs (auditory stimuli; Experiment 1; N = 23 adults) and actions (visual stimuli; Experiment 2; N = 23 adults). The EEG data from both experiments revealed a CPS-like, broadly distributed positivity during the 250 ms prior to the onset of the post-boundary word or action, indicating similar electrophysiological correlates of boundary processing across domains and suggesting that the cognitive processes underlying speech and action segmentation might also be shared.
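A CPS-like positivity in a fixed pre-onset window is typically quantified as the mean amplitude over that window. The following minimal sketch (hypothetical toy data, not the study's analysis pipeline) averages epoch samples falling in the 250 ms before post-boundary onset:

```python
# Illustrative sketch: mean ERP amplitude in a pre-onset window.
# The epoch values, sampling grid, and window are invented toy numbers.
def mean_amplitude(epoch, times_ms, start_ms, end_ms):
    """Mean amplitude of `epoch` samples whose timestamps fall in [start_ms, end_ms)."""
    window = [v for v, t in zip(epoch, times_ms) if start_ms <= t < end_ms]
    return sum(window) / len(window)

# toy epoch sampled every 125 ms, time-locked to post-boundary onset (0 ms)
times = [-500, -375, -250, -125, 0, 125]
epoch = [0.1, 0.2, 1.4, 1.6, 0.9, 0.3]
print(mean_amplitude(epoch, times, -250, 0))  # 1.5
```

Real pipelines average such window means over many trials and electrodes per condition before statistical comparison.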
Children born preterm are at higher risk of developing language deficits. Auditory speech discrimination deficits may be early signs of language developmental problems. The present study used functional near-infrared spectroscopy to investigate neural speech discrimination in 15 preterm infants at term-equivalent age compared to 15 full-term neonates. The full-term group revealed a significantly greater hemodynamic response to forward compared to backward speech within the left hemisphere, extending from superior temporal to inferior parietal and middle and inferior frontal areas. In contrast, the preterm group did not show differences in their hemodynamic responses to forward versus backward speech; thus, they did not discriminate speech from non-speech. The groups differed significantly in their responses to forward speech, whereas they did not differ in their responses to backward speech. These significant group differences point to an altered development of the functional network underlying language acquisition in preterm infants as early as term-equivalent age.
Sensitivity to salience
(2018)
Sentence comprehension is optimised by indicating entities as salient through linguistic (i.e., information-structural) or visual means. We compare how salience of a depicted referent due to a linguistic (i.e., topic status) or visual cue (i.e., a virtual person’s gaze shift) modulates sentence comprehension in German. We investigated processing of sentences with varying word order and pronoun resolution by means of self-paced reading and an antecedent choice task, respectively. Our results show that linguistic as well as visual salience cues immediately speeded up reading times of sentences mentioning the salient referent first. In contrast, for pronoun resolution, linguistic and visual cues modulated antecedent choice preferences less congruently. In sum, our findings speak in favour of a significant impact of linguistic and visual salience cues on sentence comprehension, substantiating that salient information delivered via language as well as the visual environment is integrated in the current mental representation of the discourse.
A close call
(2018)
The present study investigated how lexical selection is influenced by the number of semantically related representations (semantic neighbourhood density) and their similarity (semantic distance) to the target in a speeded picture-naming task. Semantic neighbourhood density and similarity as continuous variables were used to assess lexical selection, for which both competitive and non-competitive mechanisms have been proposed. Previous studies found mixed effects of semantic neighbourhood variables, leaving this issue unresolved. Here, we demonstrate interference from semantic neighbourhood similarity: words with semantically more similar (closer) neighbours elicited less accurate naming responses and a higher likelihood of semantic errors and omissions over accurate responses. No main effect of semantic neighbourhood density and no interaction between semantic neighbourhood density and similarity were found. We further assessed whether semantic neighbourhood density can affect naming performance if semantic neighbours exceed a certain degree of semantic similarity. Semantic similarity between the target and each neighbour was used to split semantic neighbourhood density into two different density variables: the number of semantically close neighbours versus distant neighbours. The results showed a significant effect of close, but not of distant, semantic neighbourhood density: naming pictures of targets with more close semantic neighbours led to longer naming latencies, less accurate responses, and a higher likelihood of producing semantic errors and omissions over accurate responses. The results show that word-inherent semantic attributes, such as semantic neighbourhood similarity and the number of coactivated close semantic neighbours, modulate lexical selection, supporting theories of competitive lexical processing.
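The density split described above amounts to partitioning a target's neighbours by a similarity cutoff and counting each side. As a hedged sketch (the similarity values and the 0.5 threshold are illustrative assumptions, not the study's actual parameters):

```python
# Hypothetical sketch of a close/distant neighbourhood-density split:
# neighbours at or above the similarity threshold count as "close",
# the rest as "distant". Threshold and similarities are made up.
def split_density(neighbour_similarities, threshold=0.5):
    """Return (close_density, distant_density) for one target word."""
    close = sum(1 for s in neighbour_similarities if s >= threshold)
    distant = len(neighbour_similarities) - close
    return close, distant

# e.g., a target with four neighbours of varying similarity to it
print(split_density([0.8, 0.6, 0.3, 0.2]))  # (2, 2)
```

The two resulting counts can then enter a regression on naming latency and accuracy as separate predictors, which is how a differential effect of close versus distant density would surface.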
Recent treatment protocols have been successful in improving working memory (WM) in individuals with aphasia. However, the evidence to date is limited, and the extent to which improvements in trained WM tasks transfer to untrained memory tasks, spoken sentence comprehension, and functional communication is still poorly understood. To address these issues, we conducted a multiple-baseline study with three German-speaking individuals with chronic post-stroke aphasia. Participants practised two computerised WM tasks (n-back with pictures and n-back with spoken words) four times a week for a month, targeting two WM processes: updating WM representations and resolving interference. All participants showed improvement on at least one measure of spoken sentence comprehension and everyday memory activities. Two of them also showed improvement on measures of WM and functional communication. Our results suggest that WM can be improved through computerised training in chronic aphasia and that this can transfer to spoken sentence comprehension and functional communication in some individuals.